Understanding Deadlock Detection and Memory Management in OS
D&E – Deadlock Detection:
18 – Spin-lock: A general (counting) semaphore implemented with busy waiting instead of blocking. Used in multiprocessor operating systems to protect short critical sections.
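A busy-waiting counting semaphore can be sketched with C11 atomics. This is a minimal illustration, not a production lock; the names `spin_sem_t`, `sem_wait_`, and `sem_signal_` are invented here (the trailing underscores avoid clashing with the POSIX semaphore API):

```c
#include <stdatomic.h>

/* Counting semaphore implemented with busy waiting (a sketch). */
typedef struct { atomic_int count; } spin_sem_t;

void sem_init_(spin_sem_t *s, int n) { atomic_init(&s->count, n); }

void sem_wait_(spin_sem_t *s) {
    for (;;) {
        int c = atomic_load(&s->count);
        /* Decrement only if the count is still positive. */
        if (c > 0 && atomic_compare_exchange_weak(&s->count, &c, c - 1))
            return;
        /* Otherwise spin: the CPU burns cycles instead of blocking. */
    }
}

void sem_signal_(spin_sem_t *s) { atomic_fetch_add(&s->count, 1); }
```

Because a waiting process spins instead of sleeping, this only pays off when the critical section is short.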
19 – Deadlock: Two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
20 – Starvation: Indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
21 – Monitors: A high-level abstraction that provides a convenient and effective mechanism for process synchronization. Only one process may be active within the monitor at a time. Operations on condition variables: wait and signal.
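A monitor can be approximated in C with a POSIX mutex (enforcing "one process active inside the monitor") and condition variables (providing wait/signal). This bounded-buffer sketch uses an assumed capacity of 4 and hypothetical names `mon_put`/`mon_get`:

```c
#include <pthread.h>

#define CAP 4
typedef struct {
    pthread_mutex_t lock;                 /* monitor entry lock */
    pthread_cond_t  not_full, not_empty;  /* condition variables */
    int buf[CAP], count, in, out;
} monitor_t;

void mon_init(monitor_t *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_full, NULL);
    pthread_cond_init(&m->not_empty, NULL);
    m->count = m->in = m->out = 0;
}

void mon_put(monitor_t *m, int v) {
    pthread_mutex_lock(&m->lock);         /* enter the monitor */
    while (m->count == CAP)
        pthread_cond_wait(&m->not_full, &m->lock);   /* wait */
    m->buf[m->in] = v;
    m->in = (m->in + 1) % CAP;
    m->count++;
    pthread_cond_signal(&m->not_empty);   /* signal */
    pthread_mutex_unlock(&m->lock);       /* leave the monitor */
}

int mon_get(monitor_t *m) {
    pthread_mutex_lock(&m->lock);
    while (m->count == 0)
        pthread_cond_wait(&m->not_empty, &m->lock);
    int v = m->buf[m->out];
    m->out = (m->out + 1) % CAP;
    m->count--;
    pthread_cond_signal(&m->not_full);
    pthread_mutex_unlock(&m->lock);
    return v;
}
```

Note the `while` (not `if`) around each wait: a woken process must re-check its condition before proceeding.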
1 – Deadlock Problem: A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set. Scheduling can prevent deadlock if information about all processes’ demands is available. Deadlock can occur only if all four conditions hold simultaneously:
- Mutual Exclusion: Only one process at a time can use a resource.
- Hold and Wait: A process holding at least one resource is waiting to acquire additional resources held by other processes.
- No Preemption: A resource can be released only voluntarily by the process holding it (after it completes the task).
- Circular Wait: A circular chain of processes exists, each waiting for a resource held by the next process in the chain.
2 – Safe State: When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state. The system is in a safe state if there exists a safe sequence of all processes.
3 – Banker’s Algorithm: Handles multiple instances of each resource type. The idea is to always keep enough resources available to satisfy the needs of at least one process. Each process must claim its maximum use a priori and must return resources in a finite amount of time. When a process requests a resource, it may have to wait.
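The heart of the Banker’s Algorithm is the safety check: given the available vector and each process’s allocation and remaining need, is there an order in which every process can finish? A sketch, assuming 5 processes and 3 resource types for illustration:

```c
#include <stdbool.h>
#include <string.h>

#define N 5   /* processes (illustrative size) */
#define M 3   /* resource types (illustrative size) */

/* Returns true if the state admits a safe sequence. */
bool is_safe(int available[M], int alloc[N][M], int need[N][M]) {
    int work[M];
    bool finished[N] = { false };
    memcpy(work, available, sizeof work);

    for (int done = 0; done < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (finished[i]) continue;
            /* Can process i finish with the resources in work? */
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                /* It finishes and returns everything it holds. */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finished[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress) return false;   /* no process can finish: unsafe */
    }
    return true;
}
```

A request is granted only if pretending to grant it still leaves `is_safe` true; otherwise the requester waits.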
4 – Recovery from Deadlock:
- Process Termination: Abort all deadlocked processes (very expensive), or abort one process at a time until the deadlock cycle is eliminated, choosing victims by criteria such as process priority, how long the process has computed, how long until it completes, and the resources it has used and still needs.
- Resource Preemption: Selecting a victim to minimize cost.
- Rollback: Return to some safe state and restart the process from that state.
- Starvation: The same process may always be picked as a victim; include the number of rollbacks in the cost factor.
F – Memory Management:
- 1 – Physical Address Space: Address in internal computer memory, also called real memory.
- 2 – Logical Address Space: Generated by the CPU, also referred to as virtual address space. Its contents may be stored in physical memory, on a hard disk, or nowhere at all if never used.
- 3 – Uses of Memory: A running program has to be placed into memory. The program is transformed, through several steps, into a form the CPU can execute. Internal memory stores data and the programs that are running or waiting. Memory management is part of the operating system.
- 4 – Overlays: The first solution for using more memory than the physical address space allows. Special instructions switch which part of memory the address bus accesses. Overlays are defined by the user and implemented by the compiler, with minimal support from the OS.
- 5 – Simple Segments: An address is composed of a 16-bit segment value and a 16-bit offset within the segment. It is not real virtual memory, only a scheme for addressing a larger memory.
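In the classic x86 real-mode version of this scheme (an assumption here, since the notes don’t name the architecture), the 16-bit segment value is shifted left by 4 bits and added to the 16-bit offset, yielding a 20-bit physical address:

```c
#include <stdint.h>

/* x86 real-mode style address computation:
   physical = segment * 16 + offset (20-bit result). */
uint32_t real_mode_addr(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}
```

This lets a CPU with 16-bit registers address 1 MiB of memory, but every byte is reachable through many different segment:offset pairs.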
- 6 – Segmentation: Support for user definition of logical address space. A program is a set of segments. Pros: Data can be moved in memory without the user noticing; access rights can be set per segment, and access outside a segment is detected, raising a new type of error – the segmentation fault. Cons: Placing segments of different lengths into main memory is challenging, and there is overhead to compute the physical address from the virtual one (one comparison, one addition).
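The "one comparison, one addition" translation can be sketched directly; the `segment_t` layout and function name below are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t base;    /* where the segment starts in physical memory */
    uint32_t limit;   /* length of the segment in bytes */
} segment_t;

/* Translate an offset within a segment to a physical address.
   Returns false on a segmentation fault (offset outside the segment). */
bool seg_translate(const segment_t *seg, uint32_t offset, uint32_t *phys) {
    if (offset >= seg->limit)     /* one comparison */
        return false;             /* -> segmentation fault */
    *phys = seg->base + offset;   /* one addition */
    return true;
}
```

Because only the base changes when a segment is moved, the OS can relocate it without the program noticing.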
- 7 – Paging: A different solution for virtual memory implementation. Paging removes the basic problem of segments having different sizes: all pages have the same size, defined by the CPU architecture, so fragmentation occurs only inside a page (small overhead). Each page has its own position in physical memory. Physical memory is divided into fixed-sized blocks called frames, and logical memory into blocks of the same size, called pages. The OS keeps track of all frames. To run a process of size n pages, the OS needs to find n free frames.
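With a fixed page size, translation is just bit manipulation: the high bits of the logical address select the page, the low bits are the offset. A sketch assuming 4 KiB pages and a simple flat page table (no valid bit, for brevity):

```c
#include <stdint.h>

#define PAGE_SHIFT 12                    /* 4 KiB pages (assumption) */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* page_table[page] holds the frame number for that page. */
uint32_t page_translate(const uint32_t page_table[], uint32_t logical) {
    uint32_t page   = logical >> PAGE_SHIFT;        /* high bits */
    uint32_t offset = logical & (PAGE_SIZE - 1);    /* low 12 bits */
    uint32_t frame  = page_table[page];             /* assumed present */
    return (frame << PAGE_SHIFT) | offset;          /* glue back together */
}
```

The offset is never translated, only the page number – which is why all pages must be the same size.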
- 8 – Implementation of Page Table: Paging is implemented in hardware. The page table is kept in main memory. The Page Table Base Register (PTBR) points to the page table, and the Page Table Length Register (PTLR) indicates the size of the page table.
- 9 – Hash Tables: Common in address spaces greater than 32 bits. The virtual page number is hashed into a page table, which contains a chain of elements hashing to the same location. Virtual page numbers are compared in this chain while searching for a match. If a match is found, the corresponding physical frame is extracted.
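The hash-then-chain lookup can be sketched as follows; the bucket count and the names `hpt_insert`/`hpt_lookup` are assumptions for illustration:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hashed page table: entries whose VPNs hash to the same
   bucket are chained together. */
typedef struct hpt_entry {
    uint32_t vpn;            /* virtual page number */
    uint32_t frame;          /* physical frame */
    struct hpt_entry *next;  /* chain of colliding entries */
} hpt_entry;

#define BUCKETS 16           /* illustrative table size */

void hpt_insert(hpt_entry *table[BUCKETS], uint32_t vpn, uint32_t frame) {
    hpt_entry *e = malloc(sizeof *e);
    e->vpn = vpn;
    e->frame = frame;
    uint32_t h = vpn % BUCKETS;          /* hash the VPN */
    e->next = table[h];                  /* push onto the chain */
    table[h] = e;
}

/* Walk the chain comparing VPNs; return the frame, or -1 if absent. */
long hpt_lookup(hpt_entry *table[BUCKETS], uint32_t vpn) {
    for (hpt_entry *e = table[vpn % BUCKETS]; e; e = e->next)
        if (e->vpn == vpn)
            return e->frame;             /* match: extract the frame */
    return -1;                           /* page not mapped */
}
```

With a good hash function the chains stay short, so lookups cost only a few comparisons even in a huge virtual address space.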
- 10 – Inverted Page Tables: One entry for each real page of memory. Each entry consists of the virtual address of the page stored in that real memory location, along with information about the process that owns that page. This decreases memory needed to store each page table but increases the time needed to search the table when a page reference occurs. A hash table can be used to limit the search to one or a few page table entries.
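An inverted page table lookup can be sketched as a linear search over the per-frame entries (the frame number is simply the index of the matching entry); the sizes and names here are illustrative:

```c
#include <stdint.h>

#define FRAMES 8             /* illustrative physical memory size */

/* One entry per physical frame: which process and which of its
   virtual pages currently occupy that frame. */
typedef struct {
    int pid;                 /* owning process */
    uint32_t vpn;            /* virtual page number stored there */
} ipt_entry;

/* Resolve (pid, vpn) to a frame number, or -1 on a page fault. */
long ipt_lookup(const ipt_entry table[FRAMES], int pid, uint32_t vpn) {
    for (long f = 0; f < FRAMES; f++)
        if (table[f].pid == pid && table[f].vpn == vpn)
            return f;        /* frame number = index of the entry */
    return -1;               /* not in memory: page fault */
}
```

The table size is proportional to physical memory rather than to each process’s virtual address space, which is the memory saving the notes describe; the hash table mentioned above exists precisely to avoid this linear scan.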