Operating Systems and Process Management Essentials
MODULE 1 – Operating Systems & Process Management
1. List any six functions of an Operating System. (3 Marks)
An Operating System (OS) is responsible for managing hardware and software resources. The six key functions of an OS are:
1. Process Management – Controls process creation, scheduling, execution, and termination. The OS ensures efficient CPU utilization by implementing scheduling policies.
2. Memory Management – Allocates and deallocates memory dynamically, ensuring efficient memory usage and avoiding conflicts.
3. File System Management – Manages file storage, retrieval, access permissions, and organization of data.
4. Device Management – Uses device drivers to facilitate communication between hardware and the OS. Controls input/output devices.
5. Security and Protection – Implements authentication and access control mechanisms to protect system data from unauthorized access.
6. Error Detection and Handling – Identifies system errors, logs them, and takes corrective actions to ensure stability.
2. Explain the different operations on processes. (3 Marks)
A process undergoes multiple operations during its lifecycle:
1. Process Creation – The OS creates a new process using system calls such as fork() (a minimal example follows this list).
2. Process Scheduling – The OS decides which process should execute next based on scheduling algorithms.
3. Process Execution – The process is executed by the CPU.
4. Process Synchronization – Ensures that multiple processes sharing resources execute safely without conflicts.
5. Process Communication – Enables inter-process communication (IPC) using message passing or shared memory.
6. Process Termination – A process completes execution and is removed from memory.
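A minimal sketch of process creation and termination using fork(), assuming a POSIX system; the parent waits for the child with waitpid():

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* process creation */
    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {              /* child process */
        printf("Child: PID = %d\n", (int)getpid());
        exit(EXIT_SUCCESS);             /* process termination */
    } else {                            /* parent process */
        waitpid(pid, NULL, 0);          /* wait for the child to finish */
        printf("Parent: child %d has exited\n", (int)pid);
    }
    return 0;
}
```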
3. Differentiate Microkernel and Exokernel structures of OS. (3 Marks)
Feature | Microkernel | Exokernel |
---|---|---|
Definition | A minimal OS kernel that provides only essential services like IPC, memory management, and scheduling. | A kernel that allows applications to manage hardware directly with minimal interference. |
Performance | Slower due to inter-process communication (IPC). | Faster as applications access hardware directly. |
Security | Higher due to strict access control. | Lower, as applications have direct control over resources. |
Flexibility | Moderate flexibility. | High flexibility as applications control resource allocation. |
Examples | QNX, Minix, L4 | ExOS, Nemesis |
4. Explain the differences between Pre-emptive and Non-Preemptive Scheduling Policies. (3 Marks)
Feature | Preemptive Scheduling | Non-Preemptive Scheduling |
---|---|---|
Definition | CPU can be taken away from a running process. | CPU is assigned to a process until it finishes or blocks. |
Interrupts | Uses clock interrupts to switch processes. | No interrupts; process runs till completion. |
Response Time | Lower response time, better for interactive systems. | Higher response time, can cause long waiting times. |
Overhead | High due to context switching. | Low, as there is no switching during execution. |
Examples | Round Robin, Shortest Remaining Time First (SRTF) | First Come First Serve (FCFS), Shortest Job First (SJF) |
5. Draw the state diagram of RTOS queue and explain. (5 Marks)
RTOS Queue State Diagram
New → Ready → Running → Exit
         ↖        ↙
        Blocked (Waiting)
Explanation:
- New → Ready – A newly created process is placed in the ready queue.
- Ready → Running – The CPU scheduler selects a process from the ready queue to execute.
- Running → Blocked – If a process needs an I/O operation, it moves to the blocked state.
- Blocked → Ready – Once the I/O operation completes, the process moves back to the ready state.
- Running → Exit – If the process finishes execution, it moves to the exit state.
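A minimal sketch of how these states and transitions might be modelled in code, assuming a simple enum-based representation (the names are illustrative, not taken from any particular RTOS):

```c
#include <stdio.h>

/* Illustrative process states matching the diagram above */
typedef enum { ST_NEW, ST_READY, ST_RUNNING, ST_BLOCKED, ST_EXIT } proc_state;

static const char *state_name(proc_state s) {
    switch (s) {
    case ST_NEW:     return "New";
    case ST_READY:   return "Ready";
    case ST_RUNNING: return "Running";
    case ST_BLOCKED: return "Blocked";
    case ST_EXIT:    return "Exit";
    }
    return "?";
}

int main(void) {
    /* One process walking through the transitions described above,
       including a Running -> Blocked -> Ready detour for an I/O wait */
    proc_state path[] = { ST_NEW, ST_READY, ST_RUNNING,
                          ST_BLOCKED, ST_READY, ST_RUNNING, ST_EXIT };
    int n = (int)(sizeof path / sizeof path[0]);
    for (int i = 0; i < n; i++)
        printf("%s%s", state_name(path[i]), i + 1 < n ? " -> " : "\n");
    return 0;
}
```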
6a. Explain the functions of an OS as a Resource Manager. (7 Marks)
The Operating System (OS) is responsible for efficiently managing system resources. The key functions include:
1. Processor Management – Allocates CPU time to processes using scheduling algorithms.
2. Memory Management – Manages RAM allocation, ensuring efficient usage and preventing memory leaks.
3. File Management – Organizes and controls file storage, retrieval, and security.
4. Device Management – Controls I/O devices using drivers and ensures smooth interaction between hardware and software.
5. Security & Protection – Implements authentication and access control to prevent unauthorized access.
6. Networking – Manages communication between different systems over networks.
7. Job Scheduling – Decides the order in which processes execute to optimize system performance.
6b. Describe the structure of a Process Control Block (PCB). (7 Marks)
A Process Control Block (PCB) is a data structure that stores information about a process.
Structure of a PCB:
1. Process ID (PID) – A unique identifier assigned to each process.
2. Process State – The current state of the process (New, Ready, Running, Blocked, or Exit).
3. Program Counter – Stores the address of the next instruction to execute.
4. CPU Registers – Saves the contents of the general-purpose registers and stack pointer so the process can resume exactly where it left off after a context switch.
5. Memory Pointers – Tracks memory allocation for the process.
6. I/O Information – Stores data on assigned input/output devices.
7. Priority Information – Determines the scheduling priority of the process.
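A hypothetical C struct laying out the PCB fields listed above (field names and sizes are illustrative, not taken from any real kernel):

```c
#include <stdint.h>

#define MAX_OPEN_FILES 16

/* Process states, as in the RTOS state diagram */
typedef enum { P_NEW, P_READY, P_RUNNING, P_BLOCKED, P_EXIT } pstate_t;

/* Hypothetical Process Control Block */
typedef struct pcb {
    int        pid;                        /* 1. Process ID                        */
    pstate_t   state;                      /* 2. Process state                     */
    uint64_t   program_counter;            /* 3. Address of the next instruction   */
    uint64_t   registers[16];              /* 4. Saved CPU registers               */
    void      *page_table;                 /* 5. Memory pointers (address space)   */
    int        open_files[MAX_OPEN_FILES]; /* 6. I/O information (descriptors)     */
    int        priority;                   /* 7. Scheduling priority               */
    struct pcb *next;                      /* Link used by the ready/blocked queue */
} pcb_t;
```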
7a. Explain the Monolithic and Microkernel Architectures of OS Kernel. (7 Marks)
Feature | Monolithic Kernel | Microkernel |
---|---|---|
Structure | All OS services run in kernel mode. | Minimal kernel, with services running in user space. |
Performance | Faster due to direct service execution. | Slower due to message-passing overhead. |
Security | Less secure as all services run in kernel mode. | More secure due to modular design. |
Examples | Linux, Windows | Minix, QNX |
MODULE 2 – Process Scheduling & Threads
1. Process Scheduling: FCFS, SJF, Priority, Round-Robin (7 Marks)
1.1 First-Come-First-Serve (FCFS) Scheduling
📌 Definition:
- Non-preemptive scheduling algorithm.
- The process that arrives first gets executed first (FIFO).
- Disadvantage: Convoy effect (long processes delay short ones).
Process | Arrival Time | Burst Time |
---|---|---|
P1 | 0 | 5 |
P2 | 1 | 3 |
P3 | 2 | 8 |
📌 Gantt Chart:
P1 | P2 | P3
0 5 8 16
Turnaround Time (TAT) = Completion Time – Arrival Time; Waiting Time (WT) = Turnaround Time – Burst Time
Process | Completion Time | Turnaround Time (CT-AT) | Waiting Time (TAT-BT) |
---|---|---|---|
P1 | 5 | 5 – 0 = 5 | 5 – 5 = 0 |
P2 | 8 | 8 – 1 = 7 | 7 – 3 = 4 |
P3 | 16 | 16 – 2 = 14 | 14 – 8 = 6 |
📌 Average WT = (0 + 4 + 6) / 3 = 3.33 ms
📌 Average TAT = (5 + 7 + 14) / 3 = 8.67 ms
📌 FCFS is simple but inefficient for time-sharing systems.
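A small sketch that reproduces the FCFS figures above, assuming the processes are already ordered by arrival time:

```c
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};                      /* P1, P2, P3 */
    int burst[]   = {5, 3, 8};
    int n = 3, time = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i];   /* CPU idles until arrival */
        time += burst[i];                           /* completion time (CT)    */
        int tat = time - arrival[i];                /* turnaround = CT - AT    */
        int wt  = tat - burst[i];                   /* waiting = TAT - BT      */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, time, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("Average WT = %.2f, Average TAT = %.2f\n", total_wt / n, total_tat / n);
    return 0;
}
```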
1.2 Shortest Job First (SJF) Scheduling
📌 Definition:
- Non-preemptive algorithm: Shortest job executes first.
- Preemptive version = Shortest Remaining Time First (SRTF).
- Advantage: Minimizes average waiting time.
- Disadvantage: Starvation (Longer jobs may never execute).
📌 Example:
Process | Arrival Time | Burst Time |
---|---|---|
P1 | 0 | 6 |
P2 | 1 | 3 |
P3 | 2 | 5 |
📌 Gantt Chart (non-preemptive SJF):
P1 | P2 | P3
0 6 9 14
At t = 0 only P1 has arrived, so it runs to completion; at t = 6 the shorter job P2 (burst 3) is chosen before P3 (burst 5).
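A sketch of non-preemptive SJF for the example above: at each step the shortest job among the processes that have already arrived is selected.

```c
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};              /* P1, P2, P3 */
    int burst[]   = {6, 3, 5};
    int done[]    = {0, 0, 0};
    int n = 3, time = 0, completed = 0;

    while (completed < n) {
        /* pick the shortest job that has arrived and is not yet finished */
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }   /* CPU idle, wait for an arrival */
        printf("P%d runs from %d to %d\n", pick + 1, time, time + burst[pick]);
        time += burst[pick];
        done[pick] = 1;
        completed++;
    }
    return 0;
}
```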
1.3 Priority Scheduling
📌 Definition:
- Each process has a priority value.
- Higher priority processes execute first.
- Can be preemptive or non-preemptive.
- Issue: Starvation (Low-priority processes may wait indefinitely).
- Solution: Aging (Increase priority of older processes).
📌 Example:
Process | Arrival Time | Burst Time | Priority |
---|---|---|---|
P1 | 0 | 5 | 2 |
P2 | 1 | 3 | 1 |
P3 | 2 | 2 | 4 |
📌 Gantt Chart (preemptive, lower number = higher priority):
P1 | P2 | P1 | P3
0 1 4 8 10
P1 starts at t = 0, is preempted when the higher-priority P2 arrives at t = 1, resumes at t = 4, and the lowest-priority P3 runs last.
1.4 Round Robin (RR) Scheduling
📌 Definition:
- Time-sharing algorithm with fixed time quantum.
- Preemptive: If a process does not finish, it moves to the end of the queue.
📌 Example (Time Quantum = 2):
Process | Arrival Time | Burst Time |
---|---|---|
P1 | 0 | 5 |
P2 | 1 | 3 |
P3 | 2 | 1 |
📌 Gantt Chart:
P1 | P2 | P3 | P1 | P2 | P1
0 2 4 5 7 8 9
One unit of P1's 5-unit burst is still left after t = 7, so P1 gets a final slot from 8 to 9.
📌 RR ensures fairness but increases context switching overhead.
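A sketch of the Round-Robin run above (quantum = 2) using a simple FIFO queue; it assumes a newly arrived process enters the queue before a preempted one re-enters it.

```c
#include <stdio.h>

#define N 3
#define QUANTUM 2

int main(void) {
    int arrival[N]   = {0, 1, 2};               /* P1, P2, P3 */
    int remaining[N] = {5, 3, 1};
    int queue[64], head = 0, tail = 0;
    int admitted[N]  = {1, 0, 0};               /* P1 is in the queue at t = 0 */
    int time = 0, finished = 0;

    queue[tail++] = 0;
    while (finished < N) {
        int p = queue[head++];                  /* next process in the ready queue */
        int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        printf("P%d runs from %d to %d\n", p + 1, time, time + slice);
        time += slice;
        remaining[p] -= slice;

        /* admit every process that has arrived by now (once each) */
        for (int i = 0; i < N; i++)
            if (!admitted[i] && arrival[i] <= time) {
                queue[tail++] = i;
                admitted[i] = 1;
            }

        if (remaining[p] > 0)
            queue[tail++] = p;                  /* quantum expired: back of queue */
        else
            finished++;                         /* process terminated             */
    }
    return 0;
}
```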
2. Multilevel Queue and Multilevel Feedback Queue Scheduling (7 Marks)
2.1 Multilevel Queue Scheduling
📌 Definition:
- Processes are divided into multiple queues (e.g., system, interactive, background).
- Each queue has its own scheduling algorithm.
- Example:
- Foreground Queue (Round Robin)
- Background Queue (FCFS)
2.2 Multilevel Feedback Queue (MLFQ)
- Processes can move between queues based on execution time.
- Short jobs stay in high-priority queues.
📌 Example:
- Queue 1: RR (Quantum = 2)
- Queue 2: RR (Quantum = 5)
- Queue 3: FCFS
📌 MLFQ balances efficiency and responsiveness.
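A sketch of the demotion rule in MLFQ: a job that uses its whole quantum without finishing drops to the next lower queue. The levels and quanta follow the example above; the 12-unit job is illustrative.

```c
#include <stdio.h>

#define LEVELS 3

int main(void) {
    /* Quantum per queue: Queue 1 = RR(2), Queue 2 = RR(5), Queue 3 = FCFS (no limit) */
    int quantum[LEVELS] = {2, 5, 1 << 30};

    int level = 0, remaining = 12, time = 0;    /* one CPU-bound job, 12 time units */

    while (remaining > 0) {
        int slice = remaining < quantum[level] ? remaining : quantum[level];
        printf("t=%d: runs in Queue %d for %d units\n", time, level + 1, slice);
        time += slice;
        remaining -= slice;
        /* used the full quantum without finishing: demote to the next queue */
        if (remaining > 0 && slice == quantum[level] && level < LEVELS - 1)
            level++;
    }
    return 0;
}
```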
3. Multiprocessor Scheduling (7 Marks)
- CPU Scheduling for multi-core systems.
- Key techniques:
- Load Balancing: Distributes the workload evenly across processors.
- Processor Affinity: Keeps threads on the same CPU to improve cache performance (see the sketch after this list).
- Gang Scheduling: Groups related threads together.
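A sketch of processor affinity using the Linux-specific sched_setaffinity() call; pinning to CPU 0 is an arbitrary choice for illustration.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                      /* allow this process to run only on CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("PID %d pinned to CPU 0 (processor affinity)\n", (int)getpid());
    return 0;
}
```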
4. Threads (User and Kernel Level, Multi-threading Models) (7 Marks)
4.1 Thread Basics
- Thread = a lightweight unit of execution inside a process; threads of the same process share its address space.
- Types:
- User-level threads (managed by a user-space library; fast to create and switch, but invisible to the kernel).
- Kernel-level threads (managed by the OS; heavier, but schedulable on multiple CPUs; a pthread sketch follows).
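A sketch of kernel-level threads with POSIX threads; on Linux each pthread maps to one kernel thread (the one-to-one model described below). Compile with -lpthread.

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function; the argument identifies the thread */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("Thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[2];
    int ids[2] = {1, 2};

    for (int i = 0; i < 2; i++)
        pthread_create(&threads[i], NULL, worker, &ids[i]);  /* one kernel thread each */
    for (int i = 0; i < 2; i++)
        pthread_join(threads[i], NULL);                      /* wait for both to finish */
    return 0;
}
```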
4.2 Multi-threading Models:
1️⃣ Many-to-One Model:
- Many user threads → One kernel thread.
- Fast but blocks all threads if one thread blocks.
2️⃣ One-to-One Model:
- Each user thread has its own kernel thread.
- Higher thread-management overhead, but a blocking thread does not block the others and threads can run in parallel.
3️⃣ Many-to-Many Model:
- Multiple user threads map to multiple kernel threads.
- Combines the low overhead of user threads with the parallelism of kernel threads.
6. Priority Inversion in Real-Time Systems (3 Marks)
📌 Definition:
- A high-priority task is blocked by a low-priority task due to resource locking.
📌 Example:
- P1 (Low Priority) holds a resource that P2 (High Priority) needs, so P2 must wait.
- If a medium-priority task preempts P1 while it holds the resource, P2 can be delayed indefinitely.
📌 Solution: Priority Inheritance Protocol
- The low-priority process temporarily inherits the priority of the highest-priority process waiting on the resource, so it can finish its critical section and release the resource quickly (see the sketch below).
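A sketch of enabling the priority-inheritance protocol on a POSIX mutex, assuming the platform supports PTHREAD_PRIO_INHERIT:

```c
#include <pthread.h>
#include <stdio.h>

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutex_t lock;

    pthread_mutexattr_init(&attr);
    /* A low-priority thread holding this mutex temporarily inherits the
       priority of the highest-priority thread blocked on it. */
    if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0)
        fprintf(stderr, "PTHREAD_PRIO_INHERIT not supported on this platform\n");

    pthread_mutex_init(&lock, &attr);
    pthread_mutex_lock(&lock);
    /* ... critical section protected against unbounded priority inversion ... */
    pthread_mutex_unlock(&lock);

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}
```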