Virtual Memory and Page Replacement Strategies

Inverted Page Tables

One entry for each real page (frame) of memory. Each entry contains the virtual address of the page stored in that real memory location, along with information about the process that owns the page. This approach decreases the memory needed to store page tables, since there is a single table for the whole system rather than one per process, but increases the time required to search the table when a page reference occurs. A hash table is often used to limit the search to one or a few page table entries.
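As a minimal sketch (class and method names are illustrative, not from any real OS), an inverted page table can pair a per-frame array with a hash index keyed on (process ID, virtual page number), so a lookup touches only one bucket instead of scanning every frame:

```python
class InvertedPageTable:
    def __init__(self, num_frames):
        # One slot per real frame; each holds (pid, vpn) or None if free.
        self.frames = [None] * num_frames
        # Hash index: limits the search to a single bucket.
        self.index = {}  # (pid, vpn) -> frame number

    def map(self, pid, vpn, frame):
        self.frames[frame] = (pid, vpn)
        self.index[(pid, vpn)] = frame

    def translate(self, pid, vpn):
        # Returns the frame number, or None, which signals a page fault.
        return self.index.get((pid, vpn))

ipt = InvertedPageTable(num_frames=4)
ipt.map(pid=1, vpn=7, frame=2)
assert ipt.translate(1, 7) == 2       # hit: virtual page 7 is in frame 2
assert ipt.translate(1, 8) is None    # miss: page fault
```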

Virtual Memory Concepts

Virtual Memory: Separation of Physical Memory

Virtual memory separates physical memory from user logical memory. Only a portion of the program needs to be in memory for execution. The logical address space can therefore be much larger than the physical address space. This allows address spaces to be shared by several processes and enables more efficient process creation. Virtual memory is mapped into real physical memory.

Virtual Address Space

When a process starts, only the initial part of the program is brought into real memory, while the virtual address space is fully initialized. Pages are then exchanged dynamically between backing store and physical memory as the process references them. Translation from virtual to physical addresses is performed using page or segment tables.

Page Fault

A page fault occurs when the requested memory cell is not present in physical memory.

Paging Techniques

  • Demand Paging: A page is brought into memory only when it is needed.
  • Paging at Process Creation: The entire program is loaded into memory when the process starts.
  • Pre-paging: Pages that are likely to be used are loaded into memory in advance.
  • Pre-cleaning: Dirty pages are written back to disk in advance, so their frames can be reclaimed quickly when needed.
  • Copy-on-Write: Pages are not copied initially but are marked as read-only. A copy is created only if the page is modified.
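The copy-on-write idea in the last bullet can be sketched as follows (a hypothetical model with a reference count per shared frame, not an actual kernel mechanism):

```python
class CowPage:
    def __init__(self, data):
        self.data = data
        self.refs = 1  # how many address spaces share this frame

def fork_page(page):
    # Fork does not copy: the page is shared and logically read-only.
    page.refs += 1
    return page

def write_page(page, data):
    # First write to a shared page triggers the actual copy.
    if page.refs > 1:
        page.refs -= 1
        page = CowPage(page.data)  # private copy for the writer
    page.data = data
    return page

parent = CowPage("hello")
child = fork_page(parent)           # no copy at fork time
child = write_page(child, "bye")    # copy created only on modification
assert parent.data == "hello" and child.data == "bye"
```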

Page Replacement Algorithms

FIFO (First-In, First-Out)

A simple, but not very effective, algorithm: the victim is the page that has been in memory the longest. FIFO can also exhibit Belady's anomaly, where adding frames increases the number of page faults.
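A minimal simulation of FIFO replacement (illustrative only), counting faults for a reference string. The reference string below is the classic one that demonstrates Belady's anomaly:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

# Belady's anomaly: with FIFO, MORE frames can yield MORE faults.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10
```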

Optimal Algorithm

Requires knowledge of the future. The victim page is the one that will not be used for the longest period of time. This is not practically implementable.
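Because it needs the entire future reference string, the optimal (MIN) algorithm is useful only as a lower-bound benchmark in simulation. A sketch:

```python
def opt_faults(refs, num_frames):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            # Evict the page whose next use lies farthest in the future
            # (or that is never used again).
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else float('inf'))
            frames.discard(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert opt_faults(refs, 3) == 7   # fewer faults than FIFO's 9 on this string
```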

LRU (Least Recently Used)

Based on the assumption that a page that has not been used for a long time will likely not be used in the future. The victim page is the one that was not used for the longest period. Considered the best approximation of the optimal algorithm, but it can be difficult to implement.
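LRU can be modeled with an ordered map that keeps pages in recency order; the difficulty mentioned above is that real hardware cannot afford this bookkeeping per reference and instead approximates LRU with reference bits or counters. A sketch:

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert lru_faults(refs, 3) == 10
```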

Second Chance

FIFO with a second-chance mechanism. It requires a reference bit. If the reference bit is 0, the page is replaced. If the page has been accessed (reference bit = 1), the bit is cleared and the page gets a second chance, i.e., it is skipped and inspected again only after all other candidates.
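Second chance is usually implemented as the "clock" algorithm: a hand sweeps a circular list of frames, clearing reference bits until it finds one that is already 0. A sketch of just the eviction step (names are illustrative):

```python
def clock_evict(ref_bits, hand):
    """Return (victim index, new hand position) for a circular frame list."""
    n = len(ref_bits)
    while True:
        if ref_bits[hand] == 0:
            return hand, (hand + 1) % n
        ref_bits[hand] = 0          # give this page a second chance
        hand = (hand + 1) % n

frames = ['A', 'B', 'C']
bits = [1, 0, 1]
victim, hand = clock_evict(bits, 0)
assert frames[victim] == 'B'   # A's bit was cleared, B is evicted
assert bits == [0, 0, 1]
```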

LFU (Least Frequently Used)

Replaces the page with the smallest count (least frequently used).

MFU (Most Frequently Used)

Based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
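Both counting algorithms can be expressed over a table of per-page use counts; they differ only in which extreme they evict. A sketch (the counts are made-up example data):

```python
from collections import Counter

def lfu_victim(counts):
    return min(counts, key=counts.get)   # least frequently used page

def mfu_victim(counts):
    return max(counts, key=counts.get)   # most frequently used page

counts = Counter({'A': 5, 'B': 1, 'C': 3})
assert lfu_victim(counts) == 'B'   # LFU evicts the rarely used page
assert mfu_victim(counts) == 'A'   # MFU evicts the heavily used page
```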

Memory Allocation Strategies

Global Replacement

A process selects a replacement frame from the set of all frames; one process can take a frame from another.

Local Replacement

Each process selects a replacement frame only from its own set of allocated frames.

Fixed Allocation

Each process receives a fixed number of frames.

Priority Allocation

Processes with higher priority receive more frames to run faster. If a page fault occurs, a process with higher priority can take a frame from a process with lower priority.

Thrashing and Working Set

Thrashing

Thrashing occurs when a process spends more time swapping pages in and out than doing useful work. If the sum of the working sets of all processes (Σ WSi) exceeds the total capacity of physical memory, thrashing occurs.

Working Set

Represents the set of distinct pages that a process has referenced in its most recent N memory references; N is the working-set window.
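A minimal sketch of this definition: the working set at time t is the set of distinct pages in the window of the last N references (the reference string below is made-up example data):

```python
def working_set(refs, t, window):
    """Distinct pages referenced in the window of size `window` ending at time t."""
    return set(refs[max(0, t - window + 1): t + 1])

refs = [1, 2, 1, 3, 2, 2, 4]
assert working_set(refs, t=6, window=4) == {2, 3, 4}
assert working_set(refs, t=3, window=3) == {1, 2, 3}
```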

Page Fault Frequency (PFF)

PFF is a variable-space algorithm that takes a more direct, ad hoc approach than the working-set model: it measures each process's page-fault rate and adjusts its frame allocation to equalize the fault rate among processes and keep the system-wide fault rate tolerable.
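A hypothetical sketch of the PFF control rule (the threshold values are invented for illustration): if a process faults too often it gains a frame; if it faults rarely, a frame is reclaimed.

```python
def pff_adjust(frames, fault_rate, low=0.02, high=0.10):
    """Return the new frame allocation given a measured fault rate."""
    if fault_rate > high:
        return frames + 1            # too many faults: grant a frame
    if fault_rate < low and frames > 1:
        return frames - 1            # very few faults: reclaim a frame
    return frames                    # within tolerable bounds: no change

assert pff_adjust(8, 0.15) == 9
assert pff_adjust(8, 0.01) == 7
assert pff_adjust(8, 0.05) == 8
```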

Page Size Considerations

  • Big Pages: Fewer page faults and a smaller page table, but larger internal fragmentation. If the page size exceeds the size of the process, virtual memory provides no benefit.
  • Small Pages: Less internal fragmentation and a closer fit to the process's working set, but more pages are needed, so there are more page faults, the page table is bigger, the selection of a victim for swap-out is more complicated, and disk transfers become less efficient.