Computer Architecture: From Firmware to Cache Memory

Firmware and Software

  • Firmware: Specific instructions recorded in non-volatile memory (ROM, Flash, etc.); the low-level logic that controls a physical device.
  • Assembly language: A low-level language that is a more direct, human-readable representation of machine code, used especially when we want to manipulate the hardware directly.
  • Kernel: The fundamental part of the OS; a program that provides secure access to the hardware and manages system resources (CPU, memory, etc.).
  • Operating system (OS): Ensures effective management of resources (CPU, disk drives, memory) and hides the low-level details of the machine from the user.
  • User interface: Via the command processor (shell), it lets the user request the execution of the programs and of the OS's own utilities.

Processor Components

  • ALU (Arithmetic Logic Unit): Circuits that perform mathematical and logical operations: arithmetic operations (such as addition and subtraction) and the typical operations of Boolean algebra (AND, OR, XOR and the unary NOT) on two operands. ALUs have evolved considerably, and modern processors often contain several.
  • Bank of Registers: A small, fast data storage area inside the processor that allows multiple simultaneous accesses; for example, when performing an addition, both operands can be read at once. It is used to hold the results of instruction execution and to stage data being loaded from or stored to external memory.
  • Data Bus: Used for data transfer between the CPU and memory or I/O.
  • Address Bus: Carries the address of the memory cell or I/O register being accessed.
  • Control Bus: Carries control signals that manage the transfer (clock, read/write, etc.).
  • Control Unit: Directs the sequence of steps by which the computer carries out the complete execution of an instruction, and it does this for every instruction in the program. It consists essentially of an element that interprets the instructions and several memory elements called registers. One of these, the Instruction Register (IR), stores the instruction while the interpreter decodes its meaning; the rest of the instructions remain in memory, waiting their turn for execution. (The sketch after this list models this cycle.)
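
To make the interaction between the control unit, the IR, the register bank, and the ALU concrete, here is a minimal C sketch of a fetch-decode-execute loop. The instruction format, opcodes, and register count are hypothetical, invented only for this illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical 3-operand instruction: opcode, destination, two sources. */
    enum { OP_HALT, OP_ADD, OP_SUB, OP_AND, OP_OR, OP_XOR };

    typedef struct { uint8_t op, rd, rs1, rs2; } Instr;

    /* ALU: computes one arithmetic or Boolean result from two operands. */
    static int32_t alu(uint8_t op, int32_t a, int32_t b) {
        switch (op) {
            case OP_ADD: return a + b;
            case OP_SUB: return a - b;
            case OP_AND: return a & b;
            case OP_OR:  return a | b;
            case OP_XOR: return a ^ b;
            default:     return 0;
        }
    }

    int main(void) {
        int32_t reg[8] = {0, 5, 7};   /* register bank: r1 = 5, r2 = 7 */
        Instr program[] = {           /* instructions waiting in memory */
            {OP_ADD, 3, 1, 2},        /* r3 = r1 + r2 */
            {OP_XOR, 4, 3, 1},        /* r4 = r3 ^ r1 */
            {OP_HALT, 0, 0, 0},
        };
        int pc = 0;                   /* program counter */

        for (;;) {
            Instr ir = program[pc++]; /* fetch: copy the instruction into the IR */
            if (ir.op == OP_HALT)     /* decode: the interpreter inspects the opcode */
                break;
            /* execute: the register bank supplies both operands to the ALU at once */
            reg[ir.rd] = alu(ir.op, reg[ir.rs1], reg[ir.rs2]);
        }
        printf("r3 = %d, r4 = %d\n", reg[3], reg[4]);  /* prints r3 = 12, r4 = 9 */
        return 0;
    }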

Memory Addressing and Operations

  • Memory Address: Indicates to system memory which position is to be referenced.
  • Address Register (AR): Contains the address of the cell on which to act, either reading it or writing to it. This address is obtained from the address bus.
  • Memory Buffer Register (MBR): This register serves as temporary storage in read/write operations. In a read, the register is loaded from the selected memory cell, and its content then travels over the data bus to the processor. In a write, the MBR is loaded with the data to write and the AR with the destination address; the content of the MBR then goes to the cell selected by the AR, completing the write. (See the sketch after this list.)
  • Memory Selector: The element that decodes the address held in the AR and activates the corresponding memory cell.
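
The read and write sequences above can be sketched by modeling the AR and MBR as variables and main memory as an array. The names and the memory size are illustrative, not taken from any particular machine:

    #include <stdio.h>
    #include <stdint.h>

    #define MEM_CELLS 256

    static uint8_t memory[MEM_CELLS]; /* main memory, one byte per cell */
    static uint8_t AR;                /* Address Register: selects the cell */
    static uint8_t MBR;               /* Memory Buffer Register: holds the data */

    /* Read: the AR selects the cell, its content is copied into the MBR,
     * and from there it travels over the data bus to the processor. */
    static uint8_t mem_read(uint8_t address) {
        AR  = address;    /* the address arrives over the address bus */
        MBR = memory[AR]; /* the selector activates the cell chosen by the AR */
        return MBR;       /* the data leaves over the data bus */
    }

    /* Write: the MBR is loaded with the data and the AR with the address;
     * the content of the MBR is then stored in the cell selected by the AR. */
    static void mem_write(uint8_t address, uint8_t data) {
        MBR = data;
        AR  = address;
        memory[AR] = MBR;
    }

    int main(void) {
        mem_write(0x10, 42);
        printf("cell 0x10 = %u\n", (unsigned)mem_read(0x10)); /* prints 42 */
        return 0;
    }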

Cache Memory

Today, this memory is integrated into the processor, and its function is to hold the instructions and data that the processor accesses continually, so that those accesses are nearly instantaneous. (A sketch of how a cache locates a block follows the list below.)

  • 1st Level Cache (L1): This cache is integrated into the processor core, working as fast as the core itself. The amount of L1 cache varies from one processor to another, usually between 64KB and 256KB. This memory is usually divided into two parts: one for instructions and one for data.
  • 2nd Level Cache (L2): Also integrated into the processor, though not directly in the core, it has the same advantages as the L1 cache but is somewhat slower. The L2 cache is usually larger than the L1, often exceeding 2MB. Unlike the L1 cache, it is not split, and it is used more for data than for instructions.
  • 3rd Level Cache (L3): A type of cache memory slower than the L2 and little used at present. Initially, this cache was built into the motherboard rather than the processor, and its access speed was considerably lower than that of an L1 or L2 cache. Although it is still a fast memory (much faster than RAM), its speed depends on the communication between the processor and the motherboard.
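
As an illustration of how a cache locates a block, this C sketch splits an address into a line index and a tag for a hypothetical direct-mapped cache. The geometry (64-byte lines, 1024 lines, roughly a small L1) and the mapping scheme are assumptions made for the example; the text above does not specify any particular organization:

    #include <stdio.h>
    #include <stdint.h>

    #define LINE_BYTES 64   /* assumed block (line) size */
    #define NUM_LINES  1024 /* assumed number of lines (64KB in total) */

    typedef struct { int valid; uint32_t tag; } Line;
    static Line cache[NUM_LINES];

    /* Returns 1 on a hit, 0 on a miss (loading the line on a miss). */
    static int cache_access(uint32_t addr) {
        uint32_t index = (addr / LINE_BYTES) % NUM_LINES; /* which line */
        uint32_t tag   = (addr / LINE_BYTES) / NUM_LINES; /* which block */
        if (cache[index].valid && cache[index].tag == tag)
            return 1;               /* hit: served at processor speed */
        cache[index].valid = 1;     /* miss: fetch the block from below */
        cache[index].tag   = tag;
        return 0;
    }

    int main(void) {
        printf("first access:  %s\n", cache_access(0x1234) ? "hit" : "miss");
        printf("second access: %s\n", cache_access(0x1234) ? "hit" : "miss");
        return 0;
    }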

Memory Hierarchy

The memory hierarchy consists of cache memory, main memory, and virtual memory. The main reason memory systems are built hierarchically is that the cost per bit of a memory technology is generally proportional to its access speed. Fast memory, such as static RAM (SRAM), has a high cost per bit and is therefore limited in capacity, while dynamic RAM (DRAM) is cheaper, making it possible to build higher-capacity memories.

Principles of Memory Hierarchy

  • Principle of Locality: Memory references that are close in time tend to be close in address as well, so once the first address of a block has been accessed, it is likely that other addresses within the same block will be accessed soon. (The sketch after this list makes this effect visible.)
  • Inclusion: The presence of an address at a given level of the memory system guarantees that the address is also present at all lower (larger and slower) levels of the hierarchy.
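
The principle of locality can be observed directly. Traversing a C matrix row by row touches consecutive addresses, so most accesses fall within a block already brought into the cache, while traversing it column by column jumps to a different block on every access. The matrix size and the use of clock() are illustrative:

    #include <stdio.h>
    #include <time.h>

    #define N 4096

    static int m[N][N]; /* one large matrix, stored row-major as in all C arrays */

    int main(void) {
        long sum = 0;
        clock_t t;

        for (int i = 0; i < N; i++)   /* fill the matrix so the reads below */
            for (int j = 0; j < N; j++)
                m[i][j] = i + j;      /* cannot be optimized away */

        t = clock();
        for (int i = 0; i < N; i++)     /* row order: consecutive addresses, */
            for (int j = 0; j < N; j++) /* same blocks reused */
                sum += m[i][j];
        printf("row-major:    %.3f s\n", (double)(clock() - t) / CLOCKS_PER_SEC);

        t = clock();
        for (int j = 0; j < N; j++)     /* column order: each access lands */
            for (int i = 0; i < N; i++) /* in a different block */
                sum += m[i][j];
        printf("column-major: %.3f s\n", (double)(clock() - t) / CLOCKS_PER_SEC);

        return (int)(sum & 1); /* use the result so the loops are kept */
    }

On most machines the row-major loop runs several times faster, even though both loops perform exactly the same number of additions.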

Types of Memory Access

  • Sequential Access: Memory access begins by enabling the address of the first cell and then stepping through the following cells in order. The i-th cell cannot be accessed unless the cells before it have been traversed, so the access time to a cell depends on its physical location in the storage medium. Example: Magnetic tape.
  • Direct Access: Memory access is done at the “cell block” level, and within each block, sequential access is used to identify a cell. Each “cell block” has a unique address. The access time to a block depends on its physical location on the storage medium. Example: Magnetic disk.
  • Random Access: Memory access is performed at the cell level. Each cell has a unique address, and each cell can be accessed in any order. The access time is the same for any cell (independent of its location). Example: Main memory, ROM.
  • Associative Access: A variant of random access in which each cell is accessed by its content rather than its physical location. A cell is found by comparing its contents with a “search pattern” supplied by the user, and all cells are compared in parallel, regardless of the size of the memory, so the access time is the same for any cell (independent of location). Example: Cache memory. (The sketch below models this search in software.)
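
The following C sketch models associative access as a search by content: every cell is compared against a search pattern, and a match returns the associated value. Real associative memory performs all the comparisons in parallel in hardware; the loop below merely stands in for those parallel comparators, and the cell contents are invented:

    #include <stdio.h>
    #include <stdint.h>

    #define CELLS 8

    /* Each cell pairs a key (the content searched on) with a stored value. */
    typedef struct { uint32_t key; uint32_t value; } Cell;

    /* In hardware every cell compares itself against the pattern at once;
     * here a sequential loop stands in for those parallel comparators. */
    static int assoc_lookup(const Cell *mem, uint32_t pattern, uint32_t *out) {
        for (int i = 0; i < CELLS; i++) {
            if (mem[i].key == pattern) { /* compare content, not address */
                *out = mem[i].value;
                return 1;                /* match found */
            }
        }
        return 0;                        /* no cell matched the pattern */
    }

    int main(void) {
        Cell mem[CELLS] = { {0xCAFE, 1}, {0xBEEF, 2}, {0xF00D, 3} };
        uint32_t v;
        if (assoc_lookup(mem, 0xBEEF, &v))
            printf("found value %u\n", (unsigned)v); /* prints: found value 2 */
        return 0;
    }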