I/O Management in Computer Systems
Part 2 – Interface Between Processors and Peripherals
Organization of I/O (T.1)
Introduction (1.1)
A computer system comprises three subsystems: the CPU, memory, and I/O. The I/O system moves data between external devices and the CPU-memory pair. It includes:
- I/O devices (peripherals): Components through which the computer exchanges information with the outside world, enabling user interaction (e.g., mouse, keyboard) and machine-to-machine communication (e.g., network, storage).
- Interconnections: These are the physical connections between components, the transmission mechanisms, I/O device interfaces, and the I/O organization.
I/O system characteristics are often determined by current technology and influenced by other system components.
I/O Features:
- Diverse devices with varying operational modes and transfer speeds.
- Peripheral speeds often much slower than CPU or memory.
- Different data formats and word sizes.
I/O System Design Parameters: Performance, scalability, expandability, and fault tolerance.
Performance Measures (1.2)
Bandwidth, latency, and cost are closely related. Generally, increasing bandwidth increases system cost. Improving latency can be challenging without changing technology implementations.
Latency (Response Time or Execution Time): Total time to complete a task (measured in time units or clock cycles).
- T_CPU = T_user + T_system
- Improved performance = Lower latency
Bandwidth (Throughput or Productivity): Amount of work done in a given time (measured in quantity per unit time).
- Common measurements: Data rate (amount of data moved per unit time) or I/O rate (number of I/O operations per unit time).
- Improved performance = Higher bandwidth
- Bandwidth is the rate of request servicing, not always the inverse of latency: Concurrent request handling allows bandwidth to exceed 1/latency.
Other Performance Indicators:
- I/O interference with the processor (ideally minimized).
- Device diversity (range of connectable I/O devices).
- Capacity/Scalability/Expandability (number of connectable I/O devices).
- Storage capacity (for storage devices).
Performance in a Computer: Often focuses solely on CPU performance, neglecting other system components.
- Time = Number of Instructions * CPI (cycles per instruction) * Clock Cycle Time
- Optimizing CPU time isn’t the only way to improve performance; memory and I/O also significantly influence performance.
Improvement Options:
- Optimize CPU (maximize speed and efficiency).
- Optimize memory (maximize access efficiency).
- Optimize I/O (maximize operation efficiency).
Systems can be CPU-limited, memory-limited, or I/O-limited.
Acceleration (performance measure after an upgrade): Acceleration = Performance_after / Performance_before = ExecutionTime_before / ExecutionTime_after
Amdahl’s Law: Calculates the acceleration achievable with a specific improvement based on two factors:
- Fraction_improved: The fraction of the original execution time benefiting from the improvement.
- Acceleration_improved: The acceleration achieved if the entire system could benefit from the improvement (Acceleration_improved > 1).
Acceleration_overall = 1 / [(1 – Fraction_improved) + (Fraction_improved / Acceleration_improved)]
Device Model (1.3)
Two basic components:
- Physical Device: The main part of the device, which performs the peripheral’s actual work; it is often mechanical in peripherals such as storage devices.
- Device Controller: The electronic interface between the device(s) and the system. Its functions include control and timing, data buffering, and error detection.
The CPU communicates with devices using I/O registers.
CPU – I/O Interface (Logical) (1.4)
For CPU access, an I/O device must be addressable. Two approaches:
- Memory-Mapped I/O: A portion of the system memory map is reserved for I/O. The CPU accesses I/O registers as memory locations using the same instructions. This simplifies CPU design, making it faster and cheaper. This is the most popular approach, used in RISC and embedded systems.
- Isolated I/O: Separate address spaces for memory and I/O. The CPU uses special instructions or a separate I/O bus. This helps protect I/O and clarifies assembly code.
Advantages:
- Isolated I/O: Doesn’t restrict address space, helps protect I/O, cleaner assembly code.
- Memory-Mapped I/O: Can use the CPU’s full set of memory-access instructions and addressing modes; simpler and cheaper CPU design.
With 32/64-bit addressing, address space is less of a concern.
I/O Management (1.5)
Three techniques:
- Programmed I/O: The simplest technique. The CPU directly controls I/O operations, including checking device status, sending commands, and transferring data. The CPU polls the device for status, which wastes CPU cycles. Useful in specific-purpose systems, embedded systems, or event monitoring.
- Interrupt-Driven I/O: The device interrupts the CPU when ready. This enables multitasking. Interrupt identification methods include multiple interrupt lines, software polling, and vectored interrupts. Multiple IRQ management involves prioritizing devices (e.g., daisy chain, bus arbitration).
- Direct Memory Access (DMA): A specialized device transfers data between I/O and memory without CPU intervention. The DMA controller acts as a bus master. DMA options include using the bus when the CPU doesn’t need it, using multiport memory, and cycle stealing. DMA improves efficiency but introduces challenges with virtual memory and cache coherence.