CPU Scheduling Algorithms: A Comprehensive Guide
1. Determining Length of Next CPU Burst
Estimating the length of the next CPU burst is crucial for efficient scheduling. This can be achieved using the length of previous CPU bursts, often employing exponential averaging.
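The standard exponential-averaging estimator is τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the measured length of the n-th CPU burst and α (often 0.5) weights recent history against the running estimate. A minimal sketch (function name and defaults are illustrative):

```python
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n,
    where t_n is the measured length of the n-th CPU burst and tau0 is
    the initial guess used before any history exists."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau
```

With α = 0.5 each older burst contributes half as much as the one after it, so the estimate tracks recent behavior without overreacting to a single outlier.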
2. Priority Scheduling
Priority scheduling assigns a priority number (integer) to each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). Shortest Job First (SJF) is a priority scheduling algorithm where priority is based on the predicted next CPU burst time.
Problem of Starvation: Low-priority processes may never execute.
Solution: Aging: As time progresses, gradually increase the priority of processes that have been waiting, so that every process is eventually dispatched.
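The pick-then-age cycle can be sketched as follows; the list-of-pairs representation and the fixed boost amount are assumptions for illustration:

```python
def pick_with_aging(ready, boost=1):
    """ready: list of [name, priority] entries (smaller number = higher
    priority). Dispatch the highest-priority process, then age the rest:
    every process left waiting gains a little priority."""
    ready.sort(key=lambda p: p[1])
    chosen = ready.pop(0)
    for p in ready:
        p[1] = max(0, p[1] - boost)   # aging: waiting lowers the number
    return chosen
```

Even a process that starts with priority 127 eventually reaches 0 and is dispatched, which is exactly the starvation fix aging is meant to provide.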
3. Round Robin (RR) Scheduling
In Round Robin scheduling, each process receives a small unit of CPU time (time quantum). After this time elapses, the process is preempted and added to the end of the ready queue.
With n processes in the ready queue and a time quantum of q, each process gets 1/n of the CPU time in chunks of at most q time units. No process waits more than (n-1)q time units.
RR typically has a higher average turnaround time than SJF but offers better response time.
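The waiting-time behavior of RR can be checked with a small simulation; this sketch assumes all processes arrive at time 0 and ignores context-switch overhead:

```python
from collections import deque

def rr_waiting_times(bursts, q):
    """Simulate Round Robin for processes that all arrive at t = 0.
    bursts: dict name -> CPU burst length; q: time quantum.
    Returns dict name -> waiting time (finish - burst)."""
    remaining = dict(bursts)
    queue = deque(bursts)                # ready queue in submission order
    clock = 0
    finish = {}
    while queue:
        p = queue.popleft()
        run = min(q, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = clock
        else:
            queue.append(p)              # preempted: back of the ready queue
    return {p: finish[p] - bursts[p] for p in bursts}
```

For bursts of 24, 3, and 3 with q = 4, the short jobs finish after one quantum each, illustrating why RR's response time beats its turnaround time.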
4. Multilevel Queue Scheduling
Multilevel queue scheduling utilizes separate queues for different types of processes, such as foreground (interactive) and background (batch) processes. Each queue has its own scheduling algorithm, and scheduling must occur between the queues.
In a plain multilevel queue, each process is permanently assigned to one queue. When processes can move between queues, for example to implement aging, the scheme is called a multilevel feedback queue.
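The core feedback rule, demote a process that consumed its whole quantum, keep one that blocked early at its level, can be sketched like this (the two helper functions and the two-level setup are illustrative assumptions):

```python
from collections import deque

def mlfq_step(queues):
    """Pick the next process from the highest non-empty queue.
    queues: list of deques, index 0 = highest priority."""
    for level, q in enumerate(queues):
        if q:
            return level, q.popleft()
    return None

def mlfq_requeue(queues, level, proc, used_full_quantum):
    """Feedback rule: a process that used its entire quantum is
    CPU-bound, so demote it one level; an interactive process that
    blocked before the quantum expired keeps its level."""
    if used_full_quantum and level + 1 < len(queues):
        level += 1
    queues[level].append(proc)
```

This is how interactive (foreground) work naturally stays in the high-priority queue while batch work sinks to the lower one.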
5. Multiprocessor Scheduling
CPU scheduling becomes more complex when multiple CPUs are available. This involves deciding which process to execute and on which CPU.
Homogeneous Processors: All processors within a multiprocessor system are identical.
Asymmetric Multiprocessing: Only one processor accesses the system data structures, eliminating the need for data sharing.
Symmetric Multiprocessing (SMP): Each processor is self-scheduling. All processes share a common ready queue, or each processor has its own private queue of ready processes.
Processor Affinity: A process may have an affinity for the processor on which it has been recently running.
6. Linux Scheduling
Linux employs two primary scheduling algorithms:
- Timesharing: Prioritized credit-based scheduling, where the process with the most credits is scheduled next.
- Realtime: The highest priority process always runs first. FCFS and RR are also used.
7. Real-time Systems
Real-time systems require results to be not only correct but also delivered within a specified time period.
Embedded System: A computing device that is part of a larger system.
Safety-critical System: A real-time system where failure can have catastrophic consequences.
Hard Real-time System: Guarantees that real-time tasks complete within their required deadlines.
Soft Real-time System: Prioritizes real-time tasks over non-real-time tasks.
8. Real-time CPU Scheduling
Periodic processes require the CPU at specified intervals (periods).
9. Rate Monotonic Scheduling (RMS)
In RMS, priority is assigned based on the inverse of the process’s period. Shorter periods receive higher priority, while longer periods receive lower priority.
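A sufficient schedulability test for RMS is the Liu–Layland utilization bound: a set of n periodic tasks is guaranteed schedulable if ΣCᵢ/Tᵢ ≤ n(2^(1/n) − 1). A minimal check (function name is illustrative):

```python
def rms_schedulable(tasks):
    """Liu & Layland sufficient test for Rate Monotonic Scheduling.
    tasks: list of (computation_time, period) pairs.
    Returns (passes_bound, total_utilization, bound)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)    # -> ln 2 (about 0.693) as n grows
    return utilization <= bound, utilization, bound
```

Note the test is only sufficient: a task set that fails the bound may still be schedulable, but one that passes is guaranteed to meet all deadlines under RMS.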
10. Earliest Deadline First (EDF)
EDF assigns priorities according to deadlines: earlier deadlines receive higher priority, and later deadlines receive lower priority.
EDF can schedule task sets that RMS cannot; like RMS, it is preemptive, so a newly released job with an earlier deadline displaces the running one.
11. RMS and EDF Comparison
RMS: A well-studied algorithm used in many real-time operating systems. Guarantees hold only for task sets that satisfy the rate-monotonic schedulability (utilization) bound.
EDF: Periodic process deadlines are met at CPU utilizations up to 100%. However, behavior under overload is unpredictable, and the analysis no longer applies directly when deadlines differ from periods.
12. Cooperating Processes
Independent Processes: Cannot affect or be affected by the execution of other processes.
Cooperating Processes: Can affect or be affected by the execution of other processes.
13. Interprocess Communication (IPC)
IPC provides mechanisms for processes to communicate and synchronize their actions.
Implementation:
- Message System: Processes send or receive messages.
- Shared Memory: Processes share a common memory region.
- Communication Link: Physical or logical connection between processes.
14. Direct or Indirect Communication
Direct Communication: Links are established automatically and are associated with exactly one pair of communicating processes. Communication may be unidirectional but is usually bidirectional.
Indirect Communication: Messages are directed and received from mailboxes (also known as ports), which have unique IDs and are created by the kernel. A link is established only if processes share a common mailbox. Multiple processes can be associated with a mailbox, and communication can be unidirectional or bidirectional.
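Indirect communication can be sketched with threads sharing a queue that plays the role of the mailbox; the names here are illustrative, and a real kernel mailbox would be addressed by its ID rather than a shared variable:

```python
import queue
import threading

mailbox = queue.Queue()              # stands in for a kernel mailbox (port)

def producer():
    mailbox.put("ping")              # send to the mailbox, not to a process

def consumer(out):
    out.append(mailbox.get())        # any process sharing the mailbox receives

results = []
t_recv = threading.Thread(target=consumer, args=(results,))
t_send = threading.Thread(target=producer)
t_recv.start()                       # receiver may start first: get() blocks
t_send.start()
t_send.join()
t_recv.join()
```

The sender and receiver never name each other; the link exists only because both hold a reference to the same mailbox.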
15. Synchronization
Message passing can be either blocking or non-blocking.
Blocking (Synchronous):
- Blocking Send: The sender blocks until the message is received by the other process.
- Blocking Receive: The receiver blocks until a message is available.
Non-blocking (Asynchronous):
- Non-blocking Send: The sender sends the message and continues executing.
- Non-blocking Receive: The receiver gets either a valid message or a null message.
Often, a combination of non-blocking send and blocking receive is used.
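The non-blocking receive returning a "null message" can be sketched with `queue.Queue`, whose `get()` is a blocking receive and `get_nowait()` a non-blocking one (the helper name is illustrative):

```python
import queue

mbox = queue.Queue()

def nonblocking_receive(q):
    """Non-blocking receive: return a message if one is waiting,
    otherwise a null message (None) instead of blocking."""
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

first = nonblocking_receive(mbox)    # mailbox empty: null message
mbox.put("hello")                    # non-blocking send: returns immediately
second = nonblocking_receive(mbox)   # message waiting: delivered
# A blocking receive would instead be mbox.get(), which sleeps until
# a message arrives -- the common pairing with a non-blocking send.
```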
16. Critical Section
A critical section is a segment of code in which a process accesses a resource shared with other processes. Correctness requires mutual exclusion: at most one process may execute in its critical section at a time.
Solutions:
- Software at the Application Layer: Implemented using software mechanisms.
- Hardware Support: Provided by hardware for specific operations.
- Software Solution with OS Support: Software solutions with support from the operating system.
17. Semaphores
Semaphores are synchronization tools for solving critical section problems; when wait() blocks the caller on a queue instead of spinning, they avoid busy waiting.
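A counting semaphore initialized to k admits at most k processes into a region at once. A sketch using `threading.Semaphore` (the worker logic and peak-tracking are illustrative scaffolding, not part of the semaphore itself):

```python
import threading
import time

sem = threading.Semaphore(2)        # counting semaphore, initial value 2
state_lock = threading.Lock()
active = 0
peak = 0

def worker():
    global active, peak
    with sem:                       # wait(): thread sleeps if value is 0
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)            # hold the shared resource briefly
        with state_lock:
            active -= 1
                                    # signal() runs when the with-block exits

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

However the five threads interleave, the semaphore guarantees that `peak` never exceeds the initial value of 2.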
18. Spin-lock
A spin-lock is a general (counting) semaphore that uses busy waiting instead of blocking. It is used in multiprocessor operating systems to implement short critical sections.
19. Deadlock
Deadlock occurs when two or more processes are waiting indefinitely for an event that can only be caused by one of the waiting processes.
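The classic two-lock deadlock arises when threads acquire the same locks in opposite orders; imposing one global acquisition order removes the circular wait. A sketch (names are illustrative):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def task(first, second, log, name):
    """Acquire locks in a fixed global order (lock_a before lock_b):
    no thread can hold one lock while waiting for the other in a
    cycle, so deadlock is impossible."""
    with first:
        with second:
            log.append(name)

log = []
# Both threads follow the same order. If t2 instead took (lock_b, lock_a),
# the two threads could each grab one lock and wait on the other forever.
t1 = threading.Thread(target=task, args=(lock_a, lock_b, log, "t1"))
t2 = threading.Thread(target=task, args=(lock_a, lock_b, log, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
```

Lock ordering attacks the circular-wait condition; breaking any one of the four necessary conditions (mutual exclusion, hold-and-wait, no preemption, circular wait) prevents deadlock.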
20. Starvation
Starvation refers to indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
21. Monitors
Monitors are a high-level abstraction that provides a convenient and effective mechanism for process synchronization. Only one process can be active within the monitor at a time.
Operations (on condition variables):
- Wait: A process blocks until a specified condition becomes true.
- Signal: A process signals a condition to wake up one process waiting on it.
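A monitor can be approximated as a class whose every method holds one lock, with wait/signal done through a condition variable; this sketch uses Python's `threading.Condition` and an illustrative bounded counter, not real monitor syntax:

```python
import threading

class BoundedCounter:
    """Monitor-style class: one lock guards all state, so only one
    thread is active inside at a time; wait/signal go through the
    associated condition variable."""
    def __init__(self, limit):
        self._cond = threading.Condition()   # lock + condition variable
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._cond:                     # enter the monitor
            while self._value >= self._limit:
                self._cond.wait()            # wait until the condition holds
            self._value += 1

    def decrement(self):
        with self._cond:
            self._value -= 1
            self._cond.notify()              # signal one waiting thread

c = BoundedCounter(1)
c.increment()                                # counter is now at its limit
t = threading.Thread(target=c.increment)     # blocks inside wait()
t.start()
c.decrement()                                # signal releases the waiter
t.join()
```

The `while` (rather than `if`) around `wait()` re-checks the condition after wakeup, which is required under signal-and-continue semantics.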