Introduction to Operating Systems and Concurrency
Operating System Concepts
Introduction
An operating system (OS) acts as a virtual machine, abstracting the complexities and limitations of hardware from application programmers. It manages various aspects of computation, including job scheduling, resource allocation, and process management.
Jobs and Processing
A job represents a unit of processing. Early operating systems employed batch processing, collecting multiple jobs before execution and printing results afterward. Modern systems often utilize timesharing, allowing users to interact with the machine through terminals.
Processes and Threads
A thread is the smallest unit of CPU scheduling, representing a single sequential stream of execution. Threads within a process share the same address space, and multiplexing many threads over the available processors creates the illusion of multiple CPUs on a single machine.
A process consists of an address space and at least one thread of execution. It’s a fundamental unit of computation and represents a running instance of a program.
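To make the shared address space concrete, here is a minimal sketch using POSIX threads (the variable and thread names are illustrative, not from the notes): both threads see the value that main() stored in a global variable, while each thread gets its own private stack.

```c
/* Minimal sketch: a process with two POSIX threads sharing one
 * address space. main() writes a global, and both threads can read
 * it, because all threads of a process share the process's memory.
 * Compile with: cc -pthread example.c */
#include <pthread.h>
#include <stdio.h>

static int shared_value;                 /* lives in the shared address space */

static void *worker(void *arg) {
    long id = (long)arg;
    /* Each thread has its own stack (id is private to this thread),
     * but shared_value is the same memory for every thread. */
    printf("thread %ld sees shared_value = %d\n", id, shared_value);
    return NULL;
}

int main(void) {
    shared_value = 42;                   /* written before the threads start */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```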
Context switching is the procedure by which the OS saves the state of the currently running thread (registers, program counter, stack pointer) and restores the state of another, allowing the CPU to switch between tasks without conflicts.
Concurrency Concepts
- Uniprogramming: Running one process at a time.
- Multiprogramming: Running multiple processes on a machine.
- Multithreading: Having multiple threads per address space.
- Multiprocessing: Running programs on a machine with multiple processors.
- Multitasking: A single user running multiple processes.
Benefits of concurrency include:
- Running multiple applications concurrently.
- Improved resource utilization.
- Better average response time.
- Enhanced performance.
Program vs. Process
A program is a collection of instructions in a programming language, while a process is a running instance of a program with its own address space.
Dispatching Loop
The dispatching loop manages thread execution by saving the current thread’s state, choosing a new thread, and loading its state.
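As a rough illustration of those three steps, the toy user-level dispatcher below uses ucontext(3) to save the current state, choose the next task round-robin, and load its state. The task function, task count, and stack size are arbitrary choices for the sketch, not part of the notes.

```c
/* Toy user-level dispatcher: save the running task's state, choose
 * the next task, and load its state -- the same three steps as a
 * kernel dispatching loop. Linux/glibc; compile with: cc sketch.c */
#include <stdio.h>
#include <ucontext.h>

#define NUM_TASKS 2
#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, task_ctx[NUM_TASKS];
static char stacks[NUM_TASKS][STACK_SIZE];

static void task(int id) {
    for (int i = 0; i < 3; i++) {
        printf("task %d, step %d\n", id, i);
        swapcontext(&task_ctx[id], &main_ctx);   /* yield back to dispatcher */
    }
}

int main(void) {
    for (int i = 0; i < NUM_TASKS; i++) {
        getcontext(&task_ctx[i]);
        task_ctx[i].uc_stack.ss_sp = stacks[i];
        task_ctx[i].uc_stack.ss_size = STACK_SIZE;
        task_ctx[i].uc_link = &main_ctx;         /* return here if a task finishes */
        makecontext(&task_ctx[i], (void (*)(void))task, 1, i);
    }
    /* Dispatching loop: save our state, choose a task, load its state. */
    for (int round = 0; round < 3; round++)
        for (int i = 0; i < NUM_TASKS; i++)
            swapcontext(&main_ctx, &task_ctx[i]);
    return 0;
}
```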
Operating System Components and Operations
Master Boot Record (MBR)
The MBR, located in the first sector of the boot device, contains the partition table describing how the disk is laid out, along with the initial boot code that starts the boot process.
System Calls
System calls are the controlled entry points through which user-mode processes request services from the kernel, such as file I/O, process creation, and memory allocation.
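For example, a user-mode program can ask the kernel to perform I/O through the write(2) system call; a minimal sketch:

```c
/* Minimal sketch of a system call: write(2) asks the kernel to
 * perform I/O on behalf of this user-mode process. The process traps
 * into kernel mode, the kernel does the work, and control returns to
 * user mode with the result. */
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* system call */
    return 0;
}
```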
User Mode vs. Kernel Mode
User mode is the unprivileged mode in which user applications run; kernel mode is the privileged mode reserved for OS operations. A single mode bit in the processor distinguishes the two, and certain instructions are restricted to kernel mode to protect the system.
Concurrency Challenges and Solutions
Starvation
Starvation occurs when a thread or job is denied the resources it needs to run for an unbounded amount of time; for example, a continuous stream of short jobs can prevent a longer job from ever running.
Atomic Operations
Atomic operations are indivisible: they either run to completion or have no effect, and no other thread can observe them partway through. On most machines, aligned word-sized memory loads and stores are atomic.
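A minimal sketch using C11 atomics (the names counter and NITER are illustrative): atomic_fetch_add is an indivisible read-modify-write, so no increments are lost even with two threads running concurrently.

```c
/* Sketch: C11 atomic increments from two threads never lose updates.
 * Compile with: cc -pthread atomic.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NITER 100000

static atomic_int counter = 0;

static void *inc(void *arg) {
    for (int i = 0; i < NITER; i++)
        atomic_fetch_add(&counter, 1);   /* indivisible read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, inc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", atomic_load(&counter));   /* always 200000 */
    return 0;
}
```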
Race Conditions
Race conditions arise when the output of concurrent threads depends on the timing of their execution.
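A classic sketch of a race condition, assuming two threads and a plain shared counter: counter++ compiles to a load, an add, and a store, so interleaved executions can lose updates and the final value usually falls short of the expected total.

```c
/* Sketch of a race condition: two threads increment a plain int with
 * no synchronization, so updates can be lost.
 * Compile with: cc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

#define NITER 100000

static int counter = 0;

static void *inc(void *arg) {
    for (int i = 0; i < NITER; i++)
        counter++;                     /* not atomic: load, add, store */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, inc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}
```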
Mutual Exclusion and Critical Sections
Mutual exclusion ensures that only one thread executes a critical section (a piece of code requiring exclusive access) at a time.
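A minimal sketch using a pthread mutex (illustrative names): the lock/unlock pair brackets the critical section, so only one thread increments the shared counter at a time and no updates are lost.

```c
/* Sketch: a pthread mutex enforces mutual exclusion around the
 * critical section. Compile with: cc -pthread mutex.c */
#include <pthread.h>
#include <stdio.h>

#define NITER 100000

static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *inc(void *arg) {
    for (int i = 0; i < NITER; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, inc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter); /* always 200000 */
    return 0;
}
```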
Busy Waiting
Busy waiting occurs when a process repeatedly checks a condition, consuming CPU time while waiting.
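A small sketch of busy waiting, with an illustrative producer thread and flag: the main thread spins in a loop, repeatedly checking the flag and consuming CPU time until the producer sets it.

```c
/* Sketch of busy waiting: the main thread spins on a flag, burning
 * CPU until the producer sets it. Compile with: cc -pthread spin.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int ready = 0;

static void *producer(void *arg) {
    sleep(1);                              /* simulate some work */
    atomic_store(&ready, 1);               /* signal the waiter */
    return NULL;
}

int main(void) {
    pthread_t p;
    pthread_create(&p, NULL, producer, NULL);
    while (atomic_load(&ready) == 0)       /* busy wait: consumes CPU */
        ;                                  /* do nothing, just spin */
    printf("flag observed, proceeding\n");
    pthread_join(p, NULL);
    return 0;
}
```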
Semaphores
A semaphore is a generalized lock, a non-negative integer with two atomic operations:
- P(): Waits for the semaphore to become positive, then decrements it.
- V(): Increments the semaphore and wakes up a waiting thread if any.
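POSIX semaphores provide these operations as sem_wait() for P() and sem_post() for V(); a minimal sketch with illustrative names, where the semaphore starts at 0 so the waiter blocks until it is signaled:

```c
/* Sketch: sem_wait() plays the role of P() and sem_post() the role
 * of V(). The semaphore starts at 0, so main blocks until the
 * signaler increments it. Compile with: cc -pthread sem.c (Linux) */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t sem;

static void *signaler(void *arg) {
    printf("signaler: V() -- incrementing the semaphore\n");
    sem_post(&sem);                 /* V(): increment, wake a waiter */
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 0);           /* initial value 0 */
    pthread_t t;
    pthread_create(&t, NULL, signaler, NULL);
    sem_wait(&sem);                 /* P(): wait until positive, then decrement */
    printf("main: P() returned, semaphore was signaled\n");
    pthread_join(t, NULL);
    sem_destroy(&sem);
    return 0;
}
```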
Safety, Liveness, and Fairness
- Safety: Mutual exclusion ensures at most one thread is in the critical section at any time.
- Liveness: Progress guarantees that if multiple threads want to enter the critical section, some eventually will.
- Fairness: Bounded waiting ensures every thread wanting to enter the critical section eventually does, preventing starvation.