Operating System Kernels, Multitasking, and Task Scheduling

Kernel Fundamentals and Operating System Types

Kernel: The kernel is the core of the operating system, responsible for managing system resources and communication between hardware and other system services. It acts as an abstraction layer between system resources and user applications, containing a set of system libraries and services. The kernel handles process management, memory (primary and secondary) management, file system management, I/O (device) management, and time management.

Kernel Types

Based on kernel design, kernels are classified into:

A) Monolithic Kernel

In a monolithic kernel architecture, all kernel services run in the kernel space. All kernel modules operate within the same memory space under a single kernel thread. A major drawback is that any error or failure in one module can crash the entire kernel.

B) Microkernel

The microkernel design incorporates only essential operating system services into the kernel. Other services are implemented in programs called ‘servers’ that run in user space. Essential services include memory management, process management, timer systems, and interrupt handlers.

Operating System Types

GPOS (General Purpose Operating System)

Operating systems deployed in general computing systems are referred to as GPOS. The kernel is generalized and contains all services required for executing generic applications. A GPOS is typically non-deterministic: its services may introduce unpredictable delays into application software, causing sluggish responsiveness at unexpected times. Personal computers are examples of systems where GPOS are deployed.

RTOS (Real-Time Operating System)

Real-time implies deterministic timing behavior. In the RTOS context, this means OS services consume only known, bounded amounts of time, regardless of system load. The RTOS decides which applications should run, in which order, and how much time to allocate to each. Predictable performance is the hallmark of a well-designed RTOS.






Multiprocessing and Multitasking

Multiprocessing: The ability to execute multiple processes simultaneously, in true parallel on systems with more than one CPU/core.

Multitasking: The ability of an operating system to hold multiple processes in memory and switch the processor (CPU) from one process to another. Multitasking involves ‘Context switching’, ‘Context saving’, and ‘Context retrieval’.

  • Context Switching: Switching of execution context from one task to another.
  • Context Saving: Saving the current execution context when a task/process switch happens.
  • Context Retrieval: Retrieving the saved context of a task to be executed during context switching.
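The three operations above can be sketched in miniature. In the Python below, a plain dictionary stands in for the saved CPU state; the `pc`/`sp` fields and task names are illustrative stand-ins, not any real kernel's context layout:

```python
# Minimal sketch: context switching simulated with plain dictionaries.
# The "registers" (pc, sp) are illustrative stand-ins for real CPU state.

def context_switch(current_ctx, saved_contexts, next_task):
    """Save the running task's context, then retrieve the next task's."""
    saved_contexts[current_ctx["task"]] = dict(current_ctx)  # context saving
    return dict(saved_contexts[next_task])                   # context retrieval

saved = {
    "taskB": {"task": "taskB", "pc": 0x2000, "sp": 0x8F00},
}
running = {"task": "taskA", "pc": 0x1004, "sp": 0x7FF0}

running = context_switch(running, saved, "taskB")
print(running["task"])  # taskB now "owns" the CPU; taskA's state is preserved
```

A real kernel does the same dance in assembly, saving and restoring the actual register file on the task's stack or in its task control block.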

Types of Multitasking

Co-operative Multitasking

A task/process gets a chance to execute only when the currently executing task/process voluntarily relinquishes the CPU. Any task/process can use the CPU for as long as it wants. This relies on tasks cooperating with each other to share CPU time. If a task is non-cooperative, others may wait a long time to get CPU time.
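Python generators give a compact model of co-operative multitasking: `yield` is the voluntary relinquish point, and the generator's frame plays the role of the saved context. A minimal sketch (task names and step counts are arbitrary):

```python
from collections import deque

def task(name, steps, trace):
    """A co-operative task: records work, then voluntarily yields the CPU."""
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield  # voluntary relinquish point

def run_cooperative(gens):
    """Round-robin over tasks, but only because each task cooperates."""
    ready = deque(gens)
    while ready:
        g = ready.popleft()
        try:
            next(g)            # resume: context retrieval is implicit
            ready.append(g)    # task yielded; back onto the ready queue
        except StopIteration:
            pass               # task terminated

trace = []
run_cooperative([task("A", 2, trace), task("B", 2, trace)])
print(trace)  # ['A0', 'B0', 'A1', 'B1']
```

Note that if a task never reached a `yield`, no other task would ever run; that is exactly the non-cooperative-task hazard described above.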

Preemptive Multitasking

Ensures that every task/process gets a chance to execute. The timing and duration depend on the preemptive scheduling implementation. The currently running task/process is preempted to allow other tasks/processes to execute. Preemption may be based on time slots or task/process priority.

Non-preemptive Multitasking

The process/task given CPU time is allowed to execute until it terminates (enters the ‘Completed’ state) or enters the ‘Blocked/Wait’ state, waiting for I/O. This is where non-preemptive and co-operative multitasking differ: in non-preemptive multitasking, a task entering the ‘Blocked/Wait’ state automatically frees the CPU for other tasks, whereas in co-operative multitasking the CPU is released only when the task voluntarily relinquishes it.


Task Scheduling

In a multitasking system, a mechanism is needed to share the CPU among different tasks and decide which process/task to execute at a given time. This is known as task/process scheduling and forms the basis of multitasking. Scheduling policies provide guidelines for determining which task to execute when. These policies are implemented in an algorithm run by the kernel as a service. The kernel service/application implementing the scheduling algorithm is known as the ‘Scheduler’.


Non-Preemptive Scheduling

First-Come, First-Served (FCFS): Processes are executed in the order they arrive in the ready queue. Simple, but may lead to poor device utilization and long average waiting times (the ‘convoy effect’, when a long process arrives first and short processes queue behind it).
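As a rough illustration, FCFS waiting time for each process is just the sum of the bursts of everything ahead of it in the queue (the burst values below are an arbitrary textbook-style example with all arrivals at time 0):

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process when served in arrival order
    (all processes assumed to arrive at time 0)."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)  # each process waits for all earlier bursts
        elapsed += burst
    return waits

bursts = [24, 3, 3]            # a long job arrives first: convoy effect
waits = fcfs_waiting_times(bursts)
print(waits, sum(waits) / len(waits))  # [0, 24, 27] 17.0
```

Reordering the same bursts as [3, 3, 24] would drop the average wait from 17.0 to 3.0, which is the motivation for Shortest Job First below.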

Last-Come, First-Served (LCFS): Last entered process is executed first. May favor CPU-bound processes and lead to poor I/O device utilization.

Shortest Job First (SJF): Processes with the shortest execution time are scheduled first. Optimal average waiting time but prone to starvation for long processes.
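SJF reduces to sorting the ready jobs by burst time when everything is available at once. A sketch (process names and burst values are illustrative):

```python
def sjf_order(jobs):
    """jobs: {name: burst}; returns execution order, shortest burst first
    (all jobs assumed available at time 0; ignores starvation concerns)."""
    return sorted(jobs, key=jobs.get)

jobs = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
order = sjf_order(jobs)

# Average waiting time under this order.
waits, t = {}, 0
for name in order:
    waits[name] = t
    t += jobs[name]
print(order, sum(waits.values()) / len(waits))  # ['P4', 'P1', 'P3', 'P2'] 7.0
```

The starvation risk is visible in the structure: a steady stream of short jobs keeps pushing a long job like P2 to the back of the sorted order.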

Priority-Based Scheduling: Processes are scheduled based on priority. Starvation of low-priority processes can be mitigated with aging.
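One way aging can be sketched: each time the scheduler passes over a waiting task, that task's priority number is reduced (lower number = higher priority here). The `age_boost` parameter and task names are illustrative knobs, not a standard API:

```python
def pick_with_aging(ready, age_boost=1):
    """ready: list of [name, priority], lower number = higher priority.
    Picks the best task and ages everyone passed over, so a starving
    low-priority task eventually rises to the front of the queue."""
    ready.sort(key=lambda t: t[1])
    chosen = ready.pop(0)
    for t in ready:
        t[1] = max(0, t[1] - age_boost)  # waiting tasks gain priority
    return chosen[0]

ready = [["low", 3], ["high", 0], ["mid", 1]]
picks = [pick_with_aging(ready), pick_with_aging(ready), pick_with_aging(ready)]
print(picks)  # ['high', 'mid', 'low']
```

Without the aging step, a constant supply of new high-priority tasks could keep "low" waiting indefinitely.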

Preemptive Scheduling

Preemptive SJF (Shortest Remaining Time): A newly arrived process with a shorter remaining execution time can preempt the currently running process. Minimizes average waiting time but involves more context switching.
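A unit-time simulation of shortest-remaining-time scheduling might look like this (process names, arrival times, and bursts are made up for illustration):

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Simulates Shortest Remaining
    Time first, one time unit at a time; returns completion times."""
    remaining = {n: b for n, a, b in procs}
    arrival = {n: a for n, a, b in procs}
    done, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1                                   # CPU idle until an arrival
            continue
        n = min(ready, key=lambda p: remaining[p])   # shortest remaining time wins
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            done[n] = t
    return done

result = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)])
print(result)  # {'P3': 4, 'P2': 7, 'P1': 14}
```

P1 starts first but is preempted twice: by P2 at t=1 and by P3 at t=2, each of which has less remaining work at its arrival.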

Round Robin (RR): Each process gets a fixed time slice in turn. Ensures fairness but may have higher waiting times if the time slice is not well-chosen.
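The Round Robin slice order for a given quantum can be sketched with a simple ready queue (burst values and quantum are illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst}; returns the sequence of (name, slice) CPU grants."""
    remaining = dict(bursts)
    ready = deque(bursts)
    slices = []
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])
        slices.append((name, run))
        remaining[name] -= run
        if remaining[name] > 0:
            ready.append(name)   # time slice expired; back of the queue
    return slices

schedule = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
print(schedule)
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```

The quantum choice is the trade-off mentioned above: too small and context-switch overhead dominates; too large and RR degenerates toward FCFS.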

Preemptive Priority Scheduling: Applies the same priority rule as the non-preemptive variant, but a newly arrived process with higher priority immediately preempts the currently running one.

Task Communication

Exchange of data or synchronization between tasks in a multitasking system.

Types:

  • Co-operating Processes: Share data/resources; rely on interaction.
  • Competing Processes: Compete for system resources without data sharing.

Methods:

  • Shared Memory: Common memory area accessed by tasks (e.g., notice board concept).
  • Message Passing: Exchange of limited data via message queues, mailboxes, or signals.
  • Remote Procedure Call (RPC): Allows invoking a procedure on another machine.
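Message passing can be sketched with a thread-safe queue acting as the mailbox; the producer/consumer roles and message strings below are invented for illustration:

```python
import queue
import threading

# Message passing sketch: a producer task posts messages to a queue
# (the "mailbox"); a consumer task blocks until a message arrives.
mailbox = queue.Queue()
received = []

def producer():
    for msg in ("sensor:42", "sensor:43", "STOP"):
        mailbox.put(msg)

def consumer():
    while True:
        msg = mailbox.get()      # blocks until a message is available
        if msg == "STOP":
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['sensor:42', 'sensor:43']
```

Unlike the shared-memory "notice board", the tasks here never touch each other's data directly; the queue serializes all access, which is why message passing needs no separate locking by the application.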


Task Synchronization

Coordinating task access to shared resources to avoid conflicts (e.g., racing, deadlock).

Techniques:

  • Mutual Exclusion: Exclusive access to critical sections (e.g., locks, semaphores).
  • Sleep & Wakeup: Processes sleep when access is denied, reducing CPU wastage.
  • Semaphores: Used for resource access; binary (mutex) for single access and counting for multiple access.
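A counting semaphore limiting concurrent access can be sketched with Python's `threading.Semaphore`; here at most two of five workers may hold the "resource" at once (the `peak` counter exists only to observe that invariant and is itself guarded by a binary lock):

```python
import threading

pool = threading.Semaphore(2)     # counting semaphore: 2 resource instances
lock = threading.Lock()           # binary semaphore (mutex) for shared counters
in_use, peak = 0, 0

def worker():
    global in_use, peak
    with pool:                    # wait (P) on entry ... signal (V) on exit
        with lock:
            in_use += 1
            peak = max(peak, in_use)
        # ... use one instance of the shared resource here ...
        with lock:
            in_use -= 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 2)  # True: never more than 2 tasks inside the guarded region
```

With `Semaphore(1)` this collapses to mutual exclusion, which is why a binary semaphore and a mutex are often treated interchangeably.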

Issues:

  • Racing: Competing tasks modify shared data, causing errors.
  • Deadlock: Tasks block each other by holding required resources.
  • Priority Inversion: Lower-priority task delays higher-priority one (solved by priority inheritance).

Choosing an RTOS

Factors:

Functional:

  • Processor support and memory requirements.
  • Real-time capabilities (low latency, efficient scheduling).
  • IPC and synchronization support.

Non-Functional:

  • Cost: Development, licensing, and maintenance.
  • Tools: Availability of debugging/development tools.
  • Support: After-sales services like bug fixes and updates.

Customization: Use open-source or commercial RTOS, depending on requirements.


Deadlock: Definition, Conditions, Detection, and Prevention

  • Definition: A state where tasks cannot proceed due to circular resource dependency.
  • Conditions:
    1. Mutual Exclusion: Resource held by one task.
    2. Hold and Wait: Task holding resources and waiting for more.
    3. No Preemption: Resources can’t be forcibly taken.
    4. Circular Wait: Circular dependency among tasks.
  • Detection:
    • Use resource allocation graphs.
    • Analyze for cycles in the graph.
  • Prevention:
    • Break one of the conditions (e.g., allow preemption).
    • Use careful resource allocation (e.g., request all resources at once).
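Detecting the circular-wait condition reduces to cycle detection in the wait-for graph derived from the resource allocation graph. A depth-first search sketch (task names are illustrative):

```python
def has_deadlock(graph):
    """graph: adjacency dict of a wait-for graph (task -> tasks it waits on).
    A cycle in this graph means the 'circular wait' condition holds."""
    WHITE, GREY, BLACK = 0, 1, 2        # unvisited / on current path / finished
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GREY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GREY:               # back edge -> cycle
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# T1 waits on T2 and T2 waits on T1 -> circular wait (deadlock)
print(has_deadlock({"T1": ["T2"], "T2": ["T1"]}))   # True
print(has_deadlock({"T1": ["T2"], "T2": []}))       # False
```

Prevention strategies such as "request all resources at once" or imposing a global resource ordering work precisely because they make such a cycle impossible to form.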