Disk Organization and Operating System Fundamentals
Disk Organization
A disk comprises several platters that store information on their surfaces. This information is organized in concentric tracks, each divided into sectors. Tracks are grouped into cylinders: a cylinder consists of all the tracks occupying the same position across the different platters. Reading all the tracks within a cylinder avoids head movement, which is time-consuming.
The drive reads/writes data sector by sector, staging the data in a buffer. Interleaving, a technique in which consecutive logical sectors are placed in non-adjacent physical positions, gives the controller time to process one sector before the next arrives, improving performance. The interleaving factor determines the separation between consecutive logical sectors.
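The placement described above can be sketched in a few lines. This is an illustrative layout routine, not any specific controller's scheme; the function name and parameters are made up for the example.

```python
# Hypothetical sketch: place logical sectors on the physical slots of a
# track, skipping (factor - 1) slots between consecutive logical sectors.
def interleave_layout(sectors_per_track, factor):
    layout = [None] * sectors_per_track
    slot = 0
    for logical in range(sectors_per_track):
        # advance to the next free physical slot
        while layout[slot % sectors_per_track] is not None:
            slot += 1
        layout[slot % sectors_per_track] = logical
        slot += factor
    return layout

# With 8 sectors and an interleave factor of 2, logical sectors alternate:
print(interleave_layout(8, 2))  # [0, 4, 1, 5, 2, 6, 3, 7]
```

With a factor of 1 the layout is simply sequential; larger factors spread consecutive logical sectors further apart.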
Disk Access Cost
Disk access time is determined by:
- Seek Time (Positioning Time): Time taken for the head arm to position itself on the correct cylinder.
- Rotational Delay: Time for the disk to rotate until the desired sector is under the head (on average, half the rotation time).
- Transfer Time: Time taken to transfer the data, depending on the amount of information.
Blocks are numbered starting from cylinder zero, sector zero, up to sector n; numbering then continues with the following sectors (n+1, n+2, and so on) into subsequent cylinders.
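The three components above add up directly. The following calculation is illustrative, with made-up round numbers for the drive parameters:

```python
# Average access time = seek + rotational delay + transfer time.
def access_time_ms(seek_ms, rpm, bytes_to_read, transfer_mb_s):
    rotation_ms = 60_000 / rpm          # one full rotation, in ms
    rotational_delay = rotation_ms / 2  # on average, half a rotation
    transfer_ms = bytes_to_read / (transfer_mb_s * 1_000_000) * 1000
    return seek_ms + rotational_delay + transfer_ms

# 4 ms seek, 7200 RPM, one 4 KiB block at 100 MB/s:
t = access_time_ms(4, 7200, 4096, 100)
print(round(t, 2))  # 8.21 ms -- dominated by seek and rotation, not transfer
```

Note that for small reads the mechanical components (seek and rotation) dwarf the transfer time, which is why reading within a cylinder pays off.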
RAID
- RAID 0 (Disk Striping): Interleaves data across multiple disks for improved performance but no redundancy.
- RAID 1 (Mirroring): Mirrors data across disks for redundancy but increased cost.
- RAID 2: Uses Hamming code for redundancy (e.g., 4 data disks, 3 control disks).
- RAID 3: Uses a parity bit (e.g., 4 data disks, 1 control disk).
- RAID 4: Block-level parity.
- RAID 5: Distributed block-level parity.
- RAID 6: Dual redundancy.
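The parity used by RAID 3/4/5 is a simple XOR across the data disks, which is enough to rebuild any single failed disk. A minimal sketch (the block contents here are arbitrary sample data):

```python
# Parity block = XOR of all data blocks; any one lost block can be
# reconstructed by XOR-ing the surviving blocks with the parity.
def parity(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # 4 data disks
p = parity(data)                              # 1 parity disk

# Simulate losing disk 2 and rebuilding it from the survivors + parity:
recovered = parity([data[0], data[1], data[3], p])
assert recovered == data[2]
```

RAID 4 stores all parity on one disk (a write bottleneck); RAID 5 distributes the parity blocks across all disks.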
I/O Techniques
- Programmed I/O: The processor waits for I/O operations to complete.
- Interrupt-Driven I/O: The processor continues execution after issuing an I/O command and is interrupted upon completion.
- Direct Memory Access (DMA): A DMA module handles data transfer, interrupting the processor only after the entire block is transferred.
I/O Logic Functions Structure
- Local Peripherals: Simple communication as byte streams or registers.
- Communication Device: Involves a communication architecture.
- Secondary Storage/File System: Includes directory management, file system, and physical organization layers.
I/O Buffering
Buffering (intermediate storage) improves I/O efficiency. Block devices and stream devices are distinguished by their minimum transfer unit: fixed-size blocks versus individual bytes or characters.
- Simple Buffer: Uses a single buffer, reducing wait time.
- Double Buffer: Uses two buffers, allowing concurrent transfer and processing.
- Circular Buffer: Uses multiple buffers in a producer/consumer model.
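The circular buffer underlying the producer/consumer model can be sketched with a fixed-size list and two indices; this is a minimal single-threaded illustration, with no real device I/O behind it.

```python
# A bounded ring buffer: the producer fills slots at `tail`, the consumer
# drains them from `head`, and both indices wrap around modulo capacity.
class CircularBuffer:
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = self.tail = self.count = 0

    def put(self, item):                  # producer side
        if self.count == len(self.slots):
            raise BufferError("buffer full")
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def get(self):                        # consumer side
        if self.count == 0:
            raise BufferError("buffer empty")
        item = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return item

buf = CircularBuffer(3)
for block in ("b0", "b1", "b2"):
    buf.put(block)
assert buf.get() == "b0"   # blocks leave in FIFO order
```

In a real OS the producer and consumer run concurrently (device driver vs. process), so the indices and count would be protected by synchronization primitives.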
Disk Scheduling Policies
- FIFO (First In, First Out): Processes requests sequentially.
- LIFO (Last In, First Out): Processes the most recent request first.
- Priority System: Processes requests based on priority.
- Shortest Seek Time First (SSTF): Selects the request with the shortest seek time.
- SCAN: The arm moves in one direction, servicing requests along the way.
- C-SCAN: The arm moves in one direction and returns to the opposite end.
- N-Step SCAN and FSCAN: Divide the request queue into segments to reduce arm stickiness.
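The policies above can be compared by total head movement over the same request queue. A toy simulation, with an illustrative request trace (cylinder numbers) and the head starting at cylinder 100:

```python
# Total head movement (in cylinders) under three policies.
def fifo(start, requests):
    total, pos = 0, start
    for r in requests:                       # serve in arrival order
        total += abs(r - pos)
        pos = r
    return total

def sstf(start, requests):
    pending, total, pos = list(requests), 0, start
    while pending:                           # always pick the nearest request
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

def scan_up(start, requests):
    # sweep upward first, then reverse and serve the rest
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    total, pos = 0, start
    for r in up + down:
        total += abs(r - pos)
        pos = r
    return total

requests = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(fifo(100, requests), sstf(100, requests), scan_up(100, requests))
# 498 248 250 -- SSTF and SCAN cut movement roughly in half versus FIFO
```

SSTF wins slightly here but can starve distant requests; SCAN bounds the wait because the arm visits every cylinder in each sweep.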
File System Architecture
- Device Management: Handles communication with peripherals.
- Basic File System/Physical I/O: Primary interface with the external environment.
- Basic I/O Supervisor: Manages I/O operations.
- Logical I/O: Provides user and application access to files.
- Access Method: Standard interface between applications and file systems.
Logical File Organization and Access
- Piles: Data is collected in the order it arrives.
- Sequential: Fixed-length records with the same fields in the same order.
- Indexed Sequential: Sequential records organized by a key field.
- Indexed: Records accessed based on attributes other than the key field.
- Direct/Hashed: Direct access to blocks using known addresses.
File Allocation Methods
- Contiguous Allocation: Adjacent blocks allocated to a file.
- Chained Allocation: Blocks linked together.
- Indexed Allocation: Uses an index block to locate data blocks.
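Indexed allocation can be sketched with a dictionary standing in for the disk; the block numbers and contents below are arbitrary sample data.

```python
# The file's index block lists the data blocks holding its contents,
# so the blocks need not be contiguous on disk.
disk = {7: b"he", 12: b"ll", 3: b"o!"}   # block number -> block contents
index_block = [7, 12, 3]                 # one file's index block

def read_file(index_block, disk):
    """Follow the index block to assemble the file's bytes in order."""
    return b"".join(disk[b] for b in index_block)

assert read_file(index_block, disk) == b"hello!"
```

Compared with chained allocation, the index block gives direct access to any block of the file without traversing a chain of pointers.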
Free Space Management
- Bit Tables: A bit vector representing free and used blocks.
- Chained Free Sections: Free sections linked together.
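A bit table makes allocation a simple scan. A minimal sketch, where bit i is 1 when block i is in use and 0 when free:

```python
# Find the first `needed` free blocks in the bit table, or None if the
# disk does not have enough free space.
def find_free_blocks(bitmap, needed):
    free = [i for i, used in enumerate(bitmap) if not used]
    return free[:needed] if len(free) >= needed else None

bitmap = [1, 1, 0, 1, 0, 0, 1]
blocks = find_free_blocks(bitmap, 2)   # [2, 4]
for b in blocks:
    bitmap[b] = 1                      # mark the blocks allocated
```

The table is compact (one bit per block) but must be searched; chained free sections avoid the scan at the cost of scattered pointer traversal.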
Operating System Definition
An operating system controls software execution, acting as an interface between the user and hardware. Its objectives are convenience, efficiency, and resource delivery.
Operating System as a Resource Manager
Manages computer resources for data transfer, storage, and processing.
Operating System History
- First Generation (1945-1955): Serial processing, machine language.
- Second Generation (1955-1965): Simple batch systems, monitor program.
- Third Generation (1965-1980): Multiprogramming, multiple jobs concurrently.
- Fourth Generation (1980-1990): Personal computers, VLSI circuits.
Operating System Classification
By Resource Operation:
- Single-programming: One process in memory.
- Multiprogramming: Multiple processes in memory.
- Multiprocessing: Multiple processors.
By Interactivity:
- Batch Processing: No user interaction.
- Timesharing: Multiple users share processor time.
- Real-time: Responses guaranteed within strict time deadlines.
By Number of Users:
- Single-user: One user at a time.
- Multi-user: Multiple users concurrently.
Five-State Process Model
- New: Process being created.
- Running: Process executing instructions.
- Blocked (Waiting): Process waiting for an event.
- Ready: Process waiting for the CPU.
- Finished (Terminated): Process completed.
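The model is defined as much by its legal transitions as by its states; for example, a blocked process cannot run directly, it must first become ready. A sketch of the transition table (labels follow the usual textbook names):

```python
# Legal transitions of the five-state model; a dispatcher only moves a
# process along these edges.
TRANSITIONS = {
    ("New", "Ready"): "admit",
    ("Ready", "Running"): "dispatch",
    ("Running", "Ready"): "timeout / preemption",
    ("Running", "Blocked"): "wait for event",
    ("Blocked", "Ready"): "event occurs",
    ("Running", "Finished"): "exit",
}

def can_move(src, dst):
    return (src, dst) in TRANSITIONS

assert can_move("Running", "Blocked")
assert not can_move("Blocked", "Running")  # must pass through Ready first
```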
Suspended Processes
Adds suspended states (ready/suspended and blocked/suspended) to the process model for processes swapped out to secondary memory.
Process Control Structures
- Memory Tables: Track main and secondary memory allocation.
- I/O Tables: Manage I/O devices and channels.
- File Tables: Information about files in secondary memory.
- Process Tables: Information about each process.
Process Image
Includes program code, data, stack, and process control block (PCB).
Process Creation
Uses fork() to create a child process and exec() to load a new program into it.
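The fork()/exec() pattern can be demonstrated through Python's POSIX wrappers (Unix-only); the helper function and the program run in the child are just for illustration.

```python
import os

def run_in_child(path, argv):
    """fork a child, exec a new program image in it, return its exit code."""
    pid = os.fork()
    if pid == 0:
        # child: replace this process image with the new program;
        # on success, execvp never returns
        os.execvp(path, argv)
    # parent: block until the child terminates
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

code = run_in_child("true", ["true"])   # /bin/true exits with status 0
```

The key point is that fork() duplicates the current process (both copies continue after the call, distinguished by the return value), while exec() replaces the calling process's image without creating a new process.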
Processes and Threads
Threads are lightweight units of execution within a process; all threads of a process share its resources (address space, open files).
Multiprogramming Objective
Keep the CPU busy by having a process running at all times.
Preemptive Scheduling
The OS can interrupt a running process to allocate the CPU to another process.
Scheduling Criteria
- CPU Utilization: Percentage of time the CPU is busy.
- Throughput: Number of processes completed per unit time.
- Turnaround Time: Time between process arrival and completion.
- Waiting Time: Time a process spends in the ready queue.
- Response Time: Time from process arrival to first response.
Scheduling Algorithms
- FCFS (First-Come, First-Served): Simple, but can lead to long wait times.
- SPN (Shortest Process Next): Favors short processes.
- SRTF (Shortest Remaining Time First): Preemptive version of SPN.
- RR (Round Robin): Time-sliced scheduling.
- HRRN (Highest Response Ratio Next): Considers waiting time and service time.
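The scheduling criteria above can be computed for the simplest algorithm, FCFS, with a short simulation; the (arrival, burst) pairs are illustrative.

```python
# FCFS: serve jobs in arrival order; compute turnaround and waiting time
# for each job from the criteria's definitions.
def fcfs_metrics(jobs):
    clock, results = 0, []
    for arrival, burst in sorted(jobs):     # arrival order
        start = max(clock, arrival)         # may have to wait for the CPU
        finish = start + burst
        results.append({
            "turnaround": finish - arrival, # arrival -> completion
            "waiting": start - arrival,     # time spent in the ready queue
        })
        clock = finish
    return results

jobs = [(0, 5), (1, 3), (2, 1)]
m = fcfs_metrics(jobs)
print(m)  # waiting times 0, 4, 6 -- the short last job waits longest
```

This illustrates FCFS's weakness: the 1-unit job waits 6 units behind longer jobs (the "convoy effect"), which is exactly what SPN, SRTF, and HRRN try to avoid.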