Computer Architecture: Memory Hierarchy, Von Neumann Model & Parallel Processing

Memory Hierarchy

Memory hierarchy organizes computer memory based on speed, cost, and size, balancing fast access with large capacity. It ranges from fast, small CPU registers to slow, large storage devices.

Memory Types

Registers

Speed: Fastest memory, within the CPU.
Size: Extremely small (a few bytes each; tens of registers in total).
Purpose: Stores data the CPU is currently processing.
Access Time: A fraction of a nanosecond (typically one CPU clock cycle).

Cache Memory

Speed: Very fast, located on or near the CPU, divided into levels (L1, L2, L3).
Size: Small (kilobytes to megabytes).
Purpose: Stores frequently used data.
Access Time: A few nanoseconds.
Cost: Higher than RAM.

Main Memory (RAM)

Speed: Slower than cache, faster than secondary storage.
Size: Larger (gigabytes).
Purpose: Holds data and programs while running tasks.
Access Time: Tens of nanoseconds.
Cost: Moderate.

Secondary Storage (e.g., SSDs, HDDs)

Speed: Much slower than RAM.
Size: Large (terabytes or more).
Purpose: Long-term storage.
Access Time: Milliseconds (HDDs), microseconds (SSDs).
Cost: Lower than RAM.

Tertiary and Off-line Storage

Speed: Slowest.
Size: Potentially very large (e.g., tape drives, cloud storage).
Purpose: Backup and archiving.
Access Time: Seconds to hours.
Cost: Lowest.

Memory Hierarchy Characteristics

Speed: Higher levels are faster but smaller and more expensive.
Capacity: Lower levels have larger capacities but slower access.
Cost: Faster memory is more expensive per byte.
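
The speed/capacity trade-off above can be sketched as a lookup that falls through the hierarchy, fastest level first, accumulating latency on each miss. The level names and latencies below are illustrative assumptions, not measured values.

```python
# Hypothetical memory hierarchy: (level name, access latency in ns).
# Latencies are rough illustrative figures, not real hardware numbers.
LEVELS = [
    ("registers", 1),
    ("L1 cache", 4),
    ("RAM", 100),
    ("SSD", 100_000),
]

def access(address, contents):
    """Return (level, cumulative latency in ns) for the first level
    holding `address`; every miss adds that level's latency first."""
    total = 0
    for level, latency in LEVELS:
        total += latency
        if address in contents.get(level, set()):
            return level, total
    raise KeyError(address)

contents = {"L1 cache": {0x10}, "RAM": {0x20}}
print(access(0x10, contents))  # ('L1 cache', 5)  — hit after a register miss
print(access(0x20, contents))  # ('RAM', 105)     — falls through cache to RAM
```

The key point the sketch shows: a miss at a fast level is cheap, so checking the hierarchy top-down costs little when the data is high up but pays the full penalty only on a deep miss.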

Von Neumann Model

The Von Neumann model, a foundational computer architecture, consists of:

  1. CPU: Executes instructions (Control Unit and ALU).
  2. Memory: Stores data and instructions.
  3. Input/Output Devices: Handle data input and output.
  4. Storage: Long-term storage.
  5. Bus System: Connects components.

The CPU follows a fetch-decode-execute cycle. Because instructions and data share the same bus, they compete for a single memory pathway, creating the Von Neumann bottleneck that limits performance.
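
The fetch-decode-execute cycle can be sketched as a toy interpreter. The instruction set (LOAD, ADD, HALT) and the accumulator register are invented here purely for illustration.

```python
# Toy program in "memory": a list of (opcode, operand) pairs.
memory = [
    ("LOAD", 7),   # acc = 7
    ("ADD", 5),    # acc += 5
    ("HALT", 0),   # stop, return acc
]

def run(memory):
    pc, acc = 0, 0                  # program counter, accumulator
    while True:
        op, arg = memory[pc]        # FETCH: read instruction at PC
        pc += 1                     # advance PC to the next instruction
        if op == "LOAD":            # DECODE + EXECUTE
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            return acc

print(run(memory))  # 12
```

Note that both the instructions and the operand values live in the same `memory` list, mirroring the shared-storage property that causes the bottleneck.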

Modified Instruction Cycle

This cycle adds steps for interrupts and pipelining:

  1. Fetch: Instruction retrieval.
  2. Decode: Instruction interpretation.
  3. Execute: Operation performance.
  4. Interrupt Check: Handling interrupts.
  5. Store: Result storage.
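
The interrupt-check step above can be sketched by extending a toy cycle with a test after each execute. The pending-interrupt set and the log are invented details for illustration.

```python
# Toy cycle with an interrupt check after each execute step.
# `interrupts` holds the (hypothetical) instruction indices after
# which an interrupt is pending.
def run_with_interrupts(program, interrupts):
    pc, acc, log = 0, 0, []
    while pc < len(program):
        op, arg = program[pc]                 # fetch
        pc += 1
        if op == "ADD":                       # decode + execute
            acc += arg
        if pc - 1 in interrupts:              # interrupt check
            log.append(f"interrupt handled after instruction {pc - 1}")
    return acc, log                           # store: final result

acc, log = run_with_interrupts([("ADD", 1), ("ADD", 2), ("ADD", 3)], {1})
print(acc)   # 6
print(log)   # ['interrupt handled after instruction 1']
```

The check sits between execute and the next fetch, so the CPU finishes the current instruction before servicing the interrupt, which is how real processors keep the architectural state consistent.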

Multiplexers

A multiplexer (MUX) selects one input from multiple inputs and sends it to a single output based on a select signal. A MUX with n select lines chooses between 2^n inputs.
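
A minimal behavioral sketch of a 2^n-to-1 MUX: the value on the select lines is simply the index of the routed input.

```python
def mux(inputs, select):
    """Route inputs[select] to the output.

    Models an n-select-line MUX, so len(inputs) must be 2**n.
    """
    n = len(inputs).bit_length() - 1
    assert len(inputs) == 2 ** n, "input count must be a power of two"
    return inputs[select]

# 4-to-1 MUX (n = 2 select lines): select = 2 (binary 10) picks input 2.
print(mux([0, 1, 1, 0], 2))  # 1
```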

Half and Full Adders

  • Half Adder: Adds two single-bit binary numbers, producing Sum and Carry outputs; it cannot accept a carry-in.
  • Full Adder: Adds three inputs (two bits and a carry input) producing Sum and Carry outputs. Enables multi-bit addition.
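
These circuits can be sketched at the gate level with XOR/AND/OR: a full adder is built from two half adders, and chaining full adders gives a ripple-carry adder for multi-bit addition. The 4-bit width and little-endian bit lists below are illustrative choices.

```python
def half_adder(a, b):
    return a ^ b, a & b            # (sum = XOR, carry = AND)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)      # add the two input bits
    s2, c2 = half_adder(s1, cin)   # then add the carry-in
    return s2, c1 | c2             # carry out if either stage carried

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)   # carry ripples rightward
        out.append(s)
    return out + [carry]                     # final carry-out bit

# 0b0110 (6) + 0b0011 (3) = 0b1001 (9), bits listed LSB first:
print(ripple_add([0, 1, 1, 0], [1, 1, 0, 0]))  # [1, 0, 0, 1, 0]
```

Each full adder must wait for the carry from the previous stage, which is why ripple-carry adders grow slower with width and real ALUs use carry-lookahead designs instead.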

Flynn’s Classification of Parallel Processing

  1. SISD (Single Instruction, Single Data): Traditional sequential processing.
  2. SIMD (Single Instruction, Multiple Data): Same instruction on multiple data streams (e.g., image processing).
  3. MISD (Multiple Instruction, Single Data): Multiple instructions on one data stream; rare in practice, sometimes cited for fault-tolerant systems.
  4. MIMD (Multiple Instruction, Multiple Data): Multiple processors, different instructions, different data (e.g., multi-core systems).
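
The SISD/SIMD contrast can be sketched with the image-processing example: SISD applies the instruction to one datum per step, while SIMD-style code expresses one operation over a whole data set at once (emulated here with plain lists; real SIMD hardware uses vector registers). The `brighten` operation and pixel values are invented for illustration.

```python
def sisd_brighten(pixels, delta):
    # SISD: one instruction, one datum per loop iteration.
    out = []
    for p in pixels:
        out.append(min(p + delta, 255))
    return out

def simd_brighten(pixels, delta):
    # SIMD-style: one logical operation applied across all elements.
    return [min(p + delta, 255) for p in pixels]

pixels = [10, 120, 250]
print(sisd_brighten(pixels, 20))   # [30, 140, 255]
print(simd_brighten(pixels, 20))   # [30, 140, 255]
```

Both produce the same result; the SIMD formulation matters because hardware can execute the per-element operation on many elements in a single vector instruction.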