Understanding Memory Allocation and Paging in Operating Systems
Week 8: Contiguous Memory Allocation
What is Contiguous Memory Allocation?
Contiguous memory allocation is a method that allocates a single, continuous block of memory to a process or file. Because the block cannot easily be extended later, the allocator must consider both the current size and the potential growth of the process or file.
Advantages of Contiguous Memory Allocation:
- Speeds up processing: Accessing data is faster due to the contiguous nature of memory.
Disadvantages of Contiguous Memory Allocation:
- Memory wastage: Allocating more memory than needed can lead to unused space.
- Inflexibility: Difficult to accommodate growing processes or files.
- External fragmentation: Free memory becomes divided into small, non-contiguous blocks, making it difficult to allocate large chunks of memory.
What is External Fragmentation and its Impact?
External fragmentation occurs when free memory is divided into small, non-contiguous blocks. This can hinder the allocation of memory to new processes, potentially requiring memory compaction (rearranging memory to create larger contiguous blocks). Memory compaction can be time-consuming and resource-intensive.
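The problem can be made concrete with a small sketch (the block sizes below are illustrative assumptions): total free memory may be sufficient, yet no single contiguous hole is large enough to satisfy a request.

```python
# Illustrative sketch of external fragmentation (sizes are assumed).
free_holes = [100, 50, 30]  # sizes (KB) of non-contiguous free blocks
request = 120               # a new process needs 120 KB contiguously

total_free = sum(free_holes)                      # 180 KB free overall
fits = any(hole >= request for hole in free_holes)

print(total_free >= request)  # True: enough memory in total
print(fits)                   # False: no single hole is large enough
```

Compaction would merge the three holes into one 180 KB block, after which the request succeeds, but moving live processes to do so is expensive.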
Page-Based Allocation
What is Page-Based Allocation?
Paged memory allocation divides physical memory into fixed-size units called frames and logical memory into units of the same size called pages. A process's pages can be placed in any available frames, so its memory need not be contiguous, and portions of an executing process can remain on secondary storage (e.g., disk) until they are needed.
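The page/frame split can be sketched as a simple address translation. The page size and the page-table entries below are illustrative assumptions, not values from any particular OS:

```python
# Sketch of page-number/offset decomposition, assuming 4 KB pages and a
# hypothetical page table mapping page numbers to frame numbers.
PAGE_SIZE = 4096  # 4 KB

page_table = {0: 5, 1: 2, 2: 7}  # page -> frame (illustrative values)

def translate(virtual_addr):
    page = virtual_addr // PAGE_SIZE   # which page the address falls in
    offset = virtual_addr % PAGE_SIZE  # position within that page
    frame = page_table[page]           # look up the physical frame
    return frame * PAGE_SIZE + offset  # physical address

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

Note that the offset is unchanged by translation; only the page number is replaced by a frame number.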
Problems Solved by Page-Based Allocation:
- Enables running large programs: Programs larger than available main memory can be executed by loading only necessary pages into memory.
Challenges Introduced by Page-Based Allocation:
- Internal fragmentation: Wasted space within a page if the process doesn’t fully utilize it.
Does a Program Need to be Fully Loaded into Memory to Run?
No, a program does not need to be fully loaded into memory to run. This is achieved through demand paging, where pages are loaded into memory only when they are needed. Each process has its own virtual address space (e.g., 4 GB in a 32-bit system), which may be far larger than physical memory; demand paging allows the operating system to manage the mapping between virtual and physical memory.
What is a Page Fault?
A page fault occurs when a program tries to access a page that is not currently loaded in memory.
What Happens During a Page Fault?
- The operating system is notified.
- If the page reference is invalid, the process is aborted; otherwise, the OS locates the page on secondary storage.
- The OS finds a free frame or swaps out an existing page.
- The requested page is loaded into the free frame, and the page table is updated.
- The instruction that caused the page fault is restarted.
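The steps above can be sketched in a few lines. Everything here is a simplification for illustration: `backing_store` stands in for secondary storage, two frames play the role of physical memory, and the victim choice is deliberately trivial:

```python
# Simplified sketch of the page-fault handling steps (all names assumed).
backing_store = {0: "code", 1: "data", 2: "stack"}  # valid pages on disk
free_frames = [0, 1]   # two physical frames available
page_table = {}        # page -> frame number

def handle_page_fault(page):
    if page not in backing_store:          # validity check
        raise MemoryError("invalid reference; abort process")
    if not free_frames:                    # no free frame: swap a page out
        victim = next(iter(page_table))    # trivial victim choice
        free_frames.append(page_table.pop(victim))
    frame = free_frames.pop()              # claim a free frame
    page_table[page] = frame               # "load" the page, update table
    return frame                           # caller restarts the instruction

handle_page_fault(0)
handle_page_fault(1)
handle_page_fault(2)       # no free frame, so page 0 is evicted
print(sorted(page_table))  # [1, 2]
```

A real handler would also read the page's contents from disk into the frame and set valid/dirty bits; those details are omitted here.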
Page Fault Analysis
Analyzing a sequence of page demands can help determine the number of page faults that will occur under different page replacement algorithms. For a given reference string, the number of faults is at most the length of the string (every reference faults) and at least the number of unique pages, since each page must be loaded at least once. The optimal algorithm achieves this minimum, while FIFO typically produces the most faults among the common algorithms.
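Counting faults for a reference string is easy to automate. The sketch below counts FIFO faults; the reference string is an illustrative example, not from the lecture:

```python
from collections import deque

# Count page faults under FIFO replacement for a given reference string.
def fifo_faults(references, num_frames):
    frames = deque()
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest resident page
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4]  # illustrative demand sequence
print(fifo_faults(refs, 3))      # 7 faults
```

With 6 unique pages in an 8-reference string, the fault count must fall between 6 and 8; FIFO lands at 7 here.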
Swap Space
What is Swap Space?
Swap space is a dedicated area on the hard disk used as an extension of physical memory. It acts as virtual memory, storing process memory images when physical memory is full. When the system runs low on RAM, it can swap out less frequently used pages to the swap space, freeing up physical memory for active processes.
Page Replacement Algorithms
FIFO (First-In, First-Out)
FIFO replaces the oldest page in memory when a page fault occurs and no free frames are available. It’s simple to implement but can lead to a high number of page faults.
LRU (Least Recently Used)
LRU replaces the page that has not been used for the longest time. It performs better than FIFO as it considers the usage pattern of pages.
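A minimal LRU sketch keeps resident pages ordered by recency; on the same illustrative reference string used above, it produces fewer faults than FIFO:

```python
# LRU sketch: most recently used page sits at the end of the list.
def lru_faults(references, num_frames):
    frames = []
    faults = 0
    for page in references:
        if page in frames:
            frames.remove(page)  # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)    # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4]  # illustrative demand sequence
print(lru_faults(refs, 3))       # 6 faults (FIFO gives 7 on this string)
```

Real implementations avoid the linear scan by using hardware reference bits or approximations such as the clock algorithm.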
Optimal Algorithm
The optimal algorithm replaces the page that will not be needed for the longest time in the future. It provides the lowest possible number of page faults but is not practically implementable as it requires knowledge of future page requests.
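The optimal policy can still be simulated offline, precisely because a recorded reference string makes the "future" known. A sketch, using the same illustrative string as above:

```python
# Optimal (Belady's) replacement: evict the resident page whose next use
# is farthest in the future, or that is never used again. Only feasible
# here because the entire reference string is known in advance.
def optimal_faults(references, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = references[i + 1:]
        victim = max(frames, key=lambda p: future.index(p)
                     if p in future else len(future) + 1)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4]  # illustrative demand sequence
print(optimal_faults(refs, 3))   # 6 faults: the minimum (6 unique pages)
```

This offline simulation is how the optimal algorithm is actually used in practice: as a baseline for judging how close FIFO, LRU, and their variants come to the theoretical minimum.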
Belady’s Anomaly
Belady’s anomaly is a counterintuitive phenomenon where increasing the number of available frames can sometimes lead to an increase in page faults. This anomaly is observed in some page replacement algorithms like FIFO but not in stack-based algorithms like LRU.
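The anomaly is easy to demonstrate with the classic reference string often used for this purpose: under FIFO, four frames produce more faults than three.

```python
from collections import deque

# Demonstrating Belady's anomaly: under FIFO, adding a frame can
# INCREASE the number of page faults.
def fifo_faults(references, num_frames):
    frames, faults = deque(), 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, more faults
```

LRU never exhibits this behavior: the set of pages it keeps resident with n frames is always a subset of what it keeps with n+1 frames, which is what "stack-based" means here.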
Why Don’t We Use the Optimal Algorithm?
The optimal algorithm is not practical because it requires predicting future page requests, which is impossible in a real-world operating system.
Thrashing
What is Thrashing?
Thrashing occurs when a process spends more time swapping pages in and out of memory than executing instructions. It’s caused by a lack of sufficient memory allocated to a process, leading to excessive page faults.
Consequences of Thrashing:
- Low CPU utilization: The system spends most of its time handling page faults.
- Reduced system throughput: Fewer processes can make progress due to the constant swapping.
- Increased response time: Processes become sluggish and unresponsive.
Analyzing CPU Utilization and Page Fault Rates
Analyzing CPU utilization and page fault rates can help identify potential thrashing:
- Low CPU utilization with a high page fault rate indicates that the system might be thrashing.
- High CPU utilization means adding more processes can worsen the situation.
- Low CPU utilization with a low page fault rate suggests more processes can be added without causing performance issues.
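This analysis can be expressed as a simple rule of thumb. The thresholds below are illustrative assumptions, not OS-defined constants:

```python
# Rule-of-thumb sketch of the analysis above (thresholds are assumed).
def diagnose(cpu_util, page_fault_rate, high=0.8, low=0.3):
    if cpu_util < low and page_fault_rate > high:
        return "likely thrashing: reduce the degree of multiprogramming"
    if cpu_util > high:
        return "CPU busy: adding processes may worsen performance"
    if cpu_util < low and page_fault_rate < low:
        return "headroom available: more processes may be added"
    return "system appears balanced"

print(diagnose(0.15, 0.9))  # low CPU, high fault rate -> likely thrashing
```

The standard remedy for the thrashing case is to suspend or swap out some processes so the remaining ones get enough frames for their working sets.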