Operating Systems: Core Concepts and Mechanisms
Distributed and Parallel Operating Systems
Distributed Operating Systems:
- Manages a group of independent computers and makes them appear as a single system to the user.
- Focuses on resource sharing, fault tolerance, and transparency.
- Examples: Network OS, Cloud OS.
Parallel Operating Systems:
- Manages multiple processors in a single system to perform tasks simultaneously.
- Aims to enhance computation speed and resource utilization.
- Examples: Symmetric Multiprocessing (SMP), Cluster systems.
Critical Section Problem
The Critical Section Problem is the problem of ensuring that multiple processes can access shared resources safely. It arises when processes access shared data concurrently, because uncontrolled interleaving of their updates can leave that data inconsistent.
- Example: Two processes updating a shared counter without synchronization.
Conditions for a Solution:
- Mutual Exclusion: Only one process accesses the critical section at a time.
- Progress: If no process is in its critical section, the choice of which waiting process enters next cannot be postponed indefinitely, and only processes not in their remainder sections take part in that choice.
- Bounded Waiting: There is a bound on the number of times other processes may enter the critical section after a process has requested entry and before that request is granted.
Solutions:
- Peterson’s Algorithm:
- Ensures mutual exclusion, progress, and bounded waiting for two processes using shared flag and turn variables (see the sketch after this list).
- Semaphores:
- Use signal and wait operations to control access.
- Monitors:
- Encapsulate shared resources and synchronize access through methods.
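A minimal sketch of Peterson's algorithm for two threads, written with C11 atomics (default sequential consistency) so the busy-wait behaves correctly on modern hardware. The lock/unlock/worker names and the shared counter are illustrative, not part of any standard API.

```c
/* Compile with: cc -pthread peterson.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];   /* flag[i]: thread i wants to enter */
static atomic_int  turn;      /* whose turn it is to yield */
static int counter = 0;       /* the shared resource */

static void lock(int self)
{
    int other = 1 - self;
    atomic_store(&flag[self], true);   /* announce intent */
    atomic_store(&turn, other);        /* give priority to the other thread */
    /* Busy-wait while the other thread wants in and it is its turn. */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                              /* spin */
}

static void unlock(int self)
{
    atomic_store(&flag[self], false);  /* leave the critical section */
}

static void *worker(void *arg)
{
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        lock(self);
        counter++;                     /* critical section: safe increment */
        unlock(self);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}
```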
Resource-Allocation Graph (RAG)
A Resource-Allocation Graph (RAG) represents resource allocation and requests in a system:
- Nodes: Represent processes and resources.
- Edges:
- Request edges (process → resource).
- Allocation edges (resource → process).
Role:
- Detects potential deadlocks by identifying cycles in the graph.
- Helps prevent deadlocks by analyzing resource dependencies.
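As an illustration of how a RAG supports deadlock detection, the sketch below encodes a tiny graph (two processes, two single-instance resources, so a cycle implies deadlock) as an adjacency matrix and runs a depth-first search for a cycle. The node numbering, the edge/state arrays, and the example allocation are all made up for the demonstration.

```c
#include <stdio.h>

/* Nodes 0..1 are processes P0, P1; nodes 2..3 are resources R0, R1. */
#define N 4
static int edge[N][N];   /* edge[u][v] = 1: request (P->R) or allocation (R->P) */
static int state[N];     /* 0 = unvisited, 1 = on DFS path, 2 = finished */

static int has_cycle(int u)
{
    state[u] = 1;                        /* u is on the current DFS path */
    for (int v = 0; v < N; v++) {
        if (!edge[u][v]) continue;
        if (state[v] == 1) return 1;     /* back edge => cycle => deadlock */
        if (state[v] == 0 && has_cycle(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void)
{
    /* P0 holds R0 and requests R1; P1 holds R1 and requests R0. */
    edge[2][0] = 1;   /* allocation: R0 -> P0 */
    edge[0][3] = 1;   /* request:    P0 -> R1 */
    edge[3][1] = 1;   /* allocation: R1 -> P1 */
    edge[1][2] = 1;   /* request:    P1 -> R0 */

    int deadlock = 0;
    for (int u = 0; u < N && !deadlock; u++)
        if (state[u] == 0)
            deadlock = has_cycle(u);

    printf("%s\n", deadlock ? "cycle found: potential deadlock"
                            : "no cycle: no deadlock");
    return 0;
}
```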
Advantages of Bitmap-Based Free-Space Management
- Efficient Space Utilization:
- Tracks free and allocated blocks using a compact representation, reducing overhead.
- Fast Searching:
- Quickly identifies contiguous free blocks by scanning bits.
- Simplicity:
- Easy to implement and manage for fixed-size blocks.
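The sketch below shows the idea in C for a hypothetical 64-block disk with one bit per block. Note how a fully allocated byte (0xFF) can be skipped in a single comparison, which is what makes bitmap searches fast.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NBLOCKS 64                       /* total disk blocks tracked */
static uint8_t bitmap[NBLOCKS / 8];      /* 1 bit per block: 1 = allocated, 0 = free */

static void set_bit(int b)   { bitmap[b / 8] |= (uint8_t)(1u << (b % 8)); }
static void clear_bit(int b) { bitmap[b / 8] &= (uint8_t)~(1u << (b % 8)); }

/* Find the first free block; returns -1 if the disk is full. */
static int first_free_block(void)
{
    for (int byte = 0; byte < NBLOCKS / 8; byte++) {
        if (bitmap[byte] == 0xFF) continue;       /* skip fully allocated bytes */
        for (int bit = 0; bit < 8; bit++)
            if (!((bitmap[byte] >> bit) & 1))
                return byte * 8 + bit;
    }
    return -1;
}

int main(void)
{
    memset(bitmap, 0, sizeof bitmap);    /* all blocks start out free */
    set_bit(0); set_bit(1); set_bit(2);  /* pretend blocks 0-2 hold metadata */

    int b = first_free_block();
    printf("first free block: %d\n", b); /* prints 3 */
    set_bit(b);                          /* allocate it */
    clear_bit(1);                        /* free block 1 again */
    printf("now first free: %d\n", first_free_block()); /* prints 1 */
    return 0;
}
```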
Scheduling Algorithms and Starvation
Two scheduling algorithms that can result in starvation:
- Priority Scheduling:
- Processes with lower priority may never execute if higher-priority processes keep arriving.
- Shortest Job Next (SJN):
- Long processes may face indefinite delays if shorter processes continue to arrive.
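The toy loop below illustrates the priority-scheduling case: a low-priority job is ready the whole time, but because a fresh high-priority job arrives before every scheduling decision, it is never selected. The priority values and the number of decisions are arbitrary choices for the demonstration.

```c
#include <stdio.h>

/* Toy non-preemptive priority scheduler (smaller number = higher priority). */
int main(void)
{
    const int low_prio = 9;    /* the job that keeps waiting */
    int low_prio_ran = 0;

    for (int t = 0; t < 8; t++) {      /* 8 scheduling decisions */
        int arriving_prio = 1;         /* a short, high-priority job keeps arriving */
        if (arriving_prio < low_prio) {
            printf("t=%d: run priority-%d job; priority-%d job keeps waiting\n",
                   t, arriving_prio, low_prio);
        } else {
            low_prio_ran = 1;          /* never reached in this scenario */
        }
    }
    if (!low_prio_ran)
        printf("priority-%d job starved for the whole run\n", low_prio);
    return 0;
}
```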
Sector Slipping in Disk Storage
Sector Slipping is a technique used in disk storage to handle defective sectors. When a sector becomes defective, all sectors from the bad one down to the nearest spare are shifted by one position, so logical addressing skips the defective sector while preserving the sequential layout (in contrast, sector sparing simply remaps the bad sector to a spare elsewhere on the disk). A toy address-mapping sketch appears after the list of uses.
Uses:
- Maintains data integrity and reliability by avoiding bad sectors.
- Ensures uninterrupted operation without requiring hardware replacement.
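As a toy illustration, the mapping function below shows the logical-to-physical view of one track after slipping has completed: sectors before the (assumed) defective position keep their place, and later sectors shift by one toward the spare at the end of the track. The track size and bad-sector position are made up.

```c
#include <stdio.h>

#define SECTORS 10          /* logical sectors 0..9; one spare sits at the end */
static int bad_sector = 4;  /* physical sector 4 is defective (assumed) */

/* After slipping, logical numbering simply skips the defective slot. */
static int logical_to_physical(int logical)
{
    return (logical < bad_sector) ? logical : logical + 1;
}

int main(void)
{
    for (int l = 0; l < SECTORS; l++)
        printf("logical %d -> physical %d\n", l, logical_to_physical(l));
    return 0;
}
```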
Physical Memory vs. Virtual Memory
| Memory Type | Definition | Capacity | Speed | Usage | Implementation |
|---|---|---|---|---|---|
| Physical Memory | Actual hardware (RAM) used to store data and instructions. | Limited by the size of installed RAM. | Faster; accessed directly by the CPU. | Stores actively used processes and data. | Managed by hardware and firmware. |
| Virtual Memory | Logical memory created from a combination of RAM and disk space. | Can exceed physical memory by using disk space. | Slower, because disk storage is involved. | Provides the illusion of a larger memory; less-active data can be kept on disk. | Managed by the OS through paging and swapping. |
Core Functions of an Operating System
The Operating System (OS) is a vital software layer that acts as an intermediary between hardware and users. It provides services and functionalities to ensure efficient operation and usability of a computer system. Below are the core functions of an OS and their contributions:
- Process Management:
- Handles creation, scheduling, execution, and termination of processes.
- Manages multitasking by allowing multiple processes to share CPU time through scheduling algorithms such as Round Robin, FCFS, and Priority Scheduling.
- Ensures proper synchronization between processes to avoid conflicts, especially in critical section problems.
- Contributes by ensuring efficient CPU utilization and enabling simultaneous program execution for enhanced user experience.
- Memory Management:
- Keeps track of memory allocation and deallocation.
- Allocates memory to processes when required and frees it when no longer needed.
- Handles swapping between main memory and secondary storage in virtual memory systems.
- Improves system efficiency by optimizing memory usage and enabling larger applications to run seamlessly.
- File System Management:
- Provides mechanisms for file creation, deletion, reading, writing, and access.
- Organizes files in directories, maintains metadata, and ensures efficient storage retrieval.
- Manages permissions to secure data and prevent unauthorized access.
- Enhances user experience by simplifying data management and ensuring data integrity.
- Device Management:
- Manages communication between the system and hardware peripherals (printers, disks, keyboards).
- Includes device drivers to abstract hardware details from applications.
- Allocates and monitors device usage to ensure fair access.
- Improves usability by ensuring seamless hardware integration.
- Security and Protection:
- Safeguards the system against unauthorized access using mechanisms like authentication and encryption.
- Enforces resource access control to prevent malicious activities or accidental data corruption.
- Ensures overall system stability and data integrity.
Inter-Process Communication (IPC)
Inter-Process Communication (IPC) is essential for processes to exchange information and synchronize their activities effectively. Providing an environment for IPC offers several advantages:
Reasons for IPC:
- Data Sharing:
- Enables processes to share data and results, particularly in collaborative tasks or parallel computations.
- Facilitates modular program design where processes perform specific functions and exchange results.
- Synchronization:
- Coordinates the execution of multiple processes, ensuring they operate in a well-defined sequence.
- Prevents race conditions and ensures consistency of shared data.
- Resource Sharing:
- Allows processes to access shared system resources like memory, files, or printers without conflicts.
- Optimizes resource utilization by managing access efficiently.
- Performance Improvement:
- Provides faster communication than external mechanisms such as files on disk.
- Reduces overhead by avoiding unnecessary duplication of data.
Characteristics of Shared Memory IPC:
- Shared Memory Space:
- Processes share a common memory region for communication.
- High-speed communication as it avoids copying data between processes.
- Synchronization Requirements:
- Requires synchronization mechanisms such as semaphores, mutexes, or monitors to prevent conflicts.
- Ensures consistency when multiple processes read or write simultaneously.
- Implementation:
- OS provides system calls to create and manage shared memory segments.
- Access control mechanisms ensure that only authorized processes can use the shared memory.
Shared memory IPC is particularly efficient for systems requiring high-speed data exchange and low latency.
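A minimal System V shared-memory sketch: the parent creates and attaches a segment with shmget()/shmat(), the child writes into it after fork(), and waiting for the child stands in for proper synchronization. In real code a semaphore or similar mechanism would coordinate access, and error handling would be more thorough.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

int main(void)
{
    /* Create a private shared memory segment and attach it. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); exit(1); }

    char *shared = shmat(shmid, NULL, 0);
    if (shared == (void *)-1) { perror("shmat"); exit(1); }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: write into the shared region (no data is copied between processes). */
        strcpy(shared, "hello from the child process");
        shmdt(shared);
        _exit(0);
    }

    /* Parent: waiting for the child to exit acts as the synchronization step here. */
    waitpid(pid, NULL, 0);
    printf("parent read: %s\n", shared);

    /* Detach and mark the segment for removal. */
    shmdt(shared);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}
```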
Major Categories of System Calls
The five major categories of system calls, and the characteristics of each class, are outlined below.
- Process Control:
- Manage processes (e.g., creation, termination).
- Examples: fork(), exec(), exit().
- File Management:
- Handle files (e.g., open, read, write).
- Examples: open(), read(), write(), close().
- Device Management:
- Interact with hardware devices.
- Examples: ioctl(), read(), write().
- Information Maintenance:
- Retrieve and set system data.
- Examples: getpid(), alarm(), gettimeofday().
- Communication:
- Facilitate inter-process communication.
- Examples: pipe(), shmget(), msgsnd().
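A short example of the process-control calls in action: fork() creates a child, the child replaces its image with exec (here execlp() running ls, chosen arbitrarily), and the parent collects the exit status with waitpid().

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* process control: create a child */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* Child: replace its image with a new program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        _exit(127);
    }
    /* Parent: wait for the child to terminate and report its status. */
    int status;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
    return 0;
}
```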
Shortest Job Next (SJN) Scheduling Algorithm
Working:
- SJN, also called Shortest Job First (SJF), selects the ready process with the shortest CPU burst time.
- It can be non-preemptive or preemptive (the preemptive form is Shortest Remaining Time First).
Characteristics:
- Prioritizes shorter processes, minimizing average waiting time.
- Efficient for batch systems.
Advantages:
- Minimizes average waiting and turnaround times.
- Conceptually simple; provably optimal for average waiting time when exact burst times are known in advance.
Disadvantages:
- Can cause starvation for longer processes.
- Requires precise knowledge of burst times, which may not always be available.
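A small sketch of the non-preemptive case, assuming all processes arrive at time 0 and using made-up burst times: sorting by burst length and accumulating elapsed time gives each job's waiting time and the average.

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int burst[] = {6, 8, 7, 3};               /* CPU bursts of four processes */
    int n = sizeof burst / sizeof burst[0];

    qsort(burst, n, sizeof burst[0], cmp);    /* shortest job first */

    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;                /* waiting time = time already used */
        elapsed += burst[i];                  /* this job now runs to completion */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    /* bursts 3,6,7,8 -> waits 0,3,9,16 -> average 7.00 */
    return 0;
}
```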
Process Control Block (PCB)
The Process Control Block (PCB) is a data structure used by the operating system to store information about a process. It is critical for process management as it maintains the state and context of a process during execution and switching.
Purpose of PCB:
- Process Management:
- Tracks and manages processes in a multitasking system.
- Context Switching:
- Stores process states so the OS can resume execution after a switch.
- Resource Allocation:
- Maintains details about resources allocated to the process.
- Inter-Process Communication:
- Stores communication details for processes sharing information.
Information Contained in PCB:
- Process Identification:
- Unique Process ID (PID).
- Parent Process ID (PPID).
- Process State:
- States like new, ready, running, waiting, or terminated.
- CPU Registers:
- Saves the CPU context, including program counter and general-purpose registers.
- Memory Management Information:
- Base and limit registers, page tables, or segment tables.
- Scheduling Information:
- Priority, scheduling queue pointers, and CPU time used.
- Accounting Information:
- CPU time, user ID (UID), and group ID (GID).
- I/O Status Information:
- List of open files, allocated devices, and I/O requests.
Conclusion: The PCB acts as a repository of process-specific information, enabling efficient multitasking, resource allocation, and process management in operating systems.
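To make the listed fields concrete, here is a simplified, hypothetical PCB declared as a C struct. The field names and sizes are illustrative only; real kernels (for example, Linux's task_struct) hold far more state.

```c
#include <sys/types.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    /* Identification */
    pid_t pid;                      /* unique process ID */
    pid_t ppid;                     /* parent process ID */

    /* State and CPU context */
    enum proc_state state;          /* new / ready / running / waiting / terminated */
    unsigned long program_counter;
    unsigned long registers[16];    /* saved general-purpose registers */

    /* Memory management */
    unsigned long page_table_base;  /* or base/limit registers, segment tables */

    /* Scheduling */
    int priority;
    unsigned long cpu_time_used;
    struct pcb *next_in_queue;      /* link in a ready or wait queue */

    /* Accounting and security */
    uid_t uid;
    gid_t gid;

    /* I/O status */
    int open_files[16];             /* simplified file descriptor table */
};
```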