Concurrent Applications and Operating System Scheduling

Concurrent Applications

1. Define a Competing Application and Give an Example

A competing application, also known as a concurrent application, is structured so that different parts of its code can execute concurrently, cooperating or competing for shared resources. A classic example is a producer/consumer application, in which one part of the program generates data while another part consumes it from a shared buffer.

2. What is Mutual Exclusion and How is it Implemented?

Mutual exclusion prevents two or more processes from accessing the same shared resource simultaneously. It is implemented by placing the code that accesses the resource in a critical region: a process executes an entry protocol before entering the critical region and an exit protocol when leaving it. These protocols can be implemented with hardware mechanisms (such as disabling interrupts or test-and-set instructions) or with software mechanisms such as semaphores and monitors.

3. What’s Wrong with Disabling Interrupts for Mutual Exclusion?

This solution, though simple, has serious limitations. First, multiprogramming can be seriously compromised, since preemption between processes depends on interrupts: while interrupts are disabled, the clock cannot trigger a context switch, and a process that fails to re-enable them can monopolize the processor. Second, the technique does not work on multiprocessor systems, because disabling interrupts on one processor does not prevent processes running on the other processors from accessing the shared resource.

4. What is Busy Waiting and What is its Problem?

In busy waiting, whenever a process cannot enter its critical region because another process is already accessing the resource, it remains in a loop, repeatedly testing a condition until access is allowed. The problem is that the process consumes processor time doing no useful work while it waits, time that other processes could have used.
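The wasted work can be observed with a small sketch in Python: one thread holds a (simulated) resource briefly while the main thread spins, counting how many useless tests it performs. The `busy` flag and the timing values are illustrative assumptions, not a real lock implementation.

```python
import threading
import time

busy = True   # resource currently held by another "process" (illustrative flag)
spins = 0     # how many times the waiter tested the condition

def holder():
    global busy
    time.sleep(0.05)   # hold the resource briefly
    busy = False       # release it

def waiter():
    global spins
    while busy:        # busy waiting: the CPU does nothing useful here
        spins += 1

t = threading.Thread(target=holder)
t.start()
waiter()
t.join()
print(spins)           # typically many thousands of wasted tests
```

Each of those iterations is processor time that contributed nothing to the computation, which is exactly the problem described above.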

5. Explain Semaphores with Examples

A semaphore is a non-negative integer variable that can only be manipulated by two atomic instructions: DOWN and UP. DOWN decrements the semaphore; if its value is already 0, the process executing it is placed in the waiting state. UP increments the semaphore; if there are processes waiting, one of them is awakened. Semaphores can be used both for mutual exclusion and for conditional synchronization.
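A minimal sketch of mutual exclusion with a binary semaphore (initial value 1), using Python's `threading.Semaphore`, whose `acquire()` and `release()` play the roles of DOWN and UP. The shared `balance` variable and thread counts are illustrative assumptions.

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: 1 = resource free
balance = 0                    # shared resource

def deposit(amount, times):
    global balance
    for _ in range(times):
        sem.acquire()          # DOWN: blocks if the semaphore is 0
        balance += amount      # critical region: only one thread at a time
        sem.release()          # UP: wakes one waiting thread, if any

threads = [threading.Thread(target=deposit, args=(1, 10000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)   # 40000 — no update is lost
```

Without the DOWN/UP pair around the increment, concurrent updates could interleave and some deposits would be lost.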

6. Explain Monitors with Examples

Monitors are high-level synchronization mechanisms, usually provided by a programming language, that simplify the development of concurrent applications. A monitor groups shared data together with the procedures that operate on it and guarantees that only one process executes inside the monitor at a time; condition variables inside the monitor provide conditional synchronization. Like semaphores, monitors can be used for mutual exclusion and conditional synchronization.

7. Advantages of Asynchronous Communication

The advantage of this mechanism is that neither the sender nor the receiver needs to block waiting for the other, which increases the efficiency of competing applications. To implement this solution, buffers are needed to store the pending messages, along with synchronization mechanisms that allow a process to identify whether a message has been sent or received.
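A sketch of asynchronous communication using Python's `queue.Queue` as the message buffer: the sender deposits messages and continues immediately, while the receiver blocks only when the buffer is empty. The message names and the `None` sentinel are illustrative assumptions.

```python
import queue
import threading

mailbox = queue.Queue()   # the buffer that stores pending messages

def sender():
    for i in range(3):
        mailbox.put(f"msg-{i}")   # does not block: the sender keeps working
    mailbox.put(None)             # sentinel: no more messages

received = []
t = threading.Thread(target=sender)
t.start()
while (msg := mailbox.get()) is not None:   # blocks only while buffer is empty
    received.append(msg)
t.join()
print(received)   # ['msg-0', 'msg-1', 'msg-2']
```

The queue's internal locking is the synchronization mechanism mentioned above: it tells the receiver whether a message is available without the sender ever waiting for the receiver.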

8. Deadlock: Conditions and Solutions

Deadlock is a situation in which a process waits for a resource that will never be available or an event that will never occur. Four conditions are needed simultaneously for a deadlock to occur:

  • Mutual exclusion: Each resource can only be allocated to a single process at a given moment.
  • Hold and wait: A process may hold resources already allocated to it while waiting for additional resources.
  • Non-preemption: A resource cannot be taken from a process just because other processes want it.
  • Circular wait: A circular chain of processes exists in which each process waits for a resource held by the next process in the chain.

To prevent deadlock, we must ensure that at least one of these four conditions is never satisfied.
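One common way to deny the circular-wait condition is to impose a global order on resource acquisition. A minimal sketch with two locks: both threads request them in opposite orders, but each acquires them in the same global order (here, by `id()`), so no cycle of waits can form. The thread names are illustrative.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
completed = []

def transfer(name, first, second):
    low, high = sorted([first, second], key=id)   # impose the global order
    with low:                                     # always acquire low first,
        with high:                                # then high: no cycle possible
            completed.append(name)                # both resources held safely

t1 = threading.Thread(target=transfer, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=transfer, args=("t2", lock_b, lock_a))  # reversed
t1.start()
t2.start()
t1.join()
t2.join()
print(sorted(completed))   # ['t1', 't2'] — both finish, no deadlock
```

Without the sorting step, the two threads could each grab one lock and wait forever for the other, satisfying all four conditions at once.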

Operating System Scheduling

1. What is an OS Scheduling Policy?

A scheduling policy consists of criteria to determine which process in the ready state will be chosen to use the processor.

2. Functions of the Scheduler and Dispatcher

The scheduler is an operating system routine whose main function is to implement the criteria of the scheduling policy. The dispatcher is responsible for the context switching of processes after the scheduler determines which process should use the processor.

3. Main Criteria Used in a Scheduling Policy

Processor utilization, throughput, processor time (CPU time), waiting time, turnaround time, and response time.

4. Differentiate Processor, Waiting, Turnaround, and Response Times

  • CPU time (processor time): the time a process spends in the running state during its processing.
  • Waiting time: the total time a process remains in the ready queue, waiting to be executed.
  • Turnaround time: the time from a process's creation until its termination, including time spent waiting for memory allocation, time in the ready queue, CPU processing time, and time waiting for I/O.
  • Response time: the time elapsed between a request to the system or application and the moment the response is displayed.
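These metrics can be computed for a small hypothetical workload: three processes, all arriving at time 0, scheduled FIFO, with CPU bursts of 5, 3 and 2 time units and no I/O. With arrival at t = 0, turnaround equals completion time, and waiting time equals turnaround minus CPU time. The burst values are assumptions chosen for illustration.

```python
bursts = [5, 3, 2]        # hypothetical CPU times, in time units
clock, rows = 0, []
for pid, burst in enumerate(bursts):
    waiting = clock        # time already spent in the ready queue
    clock += burst         # the process runs for its full CPU time
    rows.append((f"P{pid}", burst, waiting, clock))

for name, cpu, waiting, turnaround in rows:
    print(f"{name}: cpu={cpu} waiting={waiting} turnaround={turnaround}")
# P0: cpu=5 waiting=0 turnaround=5
# P1: cpu=3 waiting=5 turnaround=8
# P2: cpu=2 waiting=8 turnaround=10
```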

5. Differentiate Preemptive and Non-preemptive Scheduling

In preemptive scheduling, the operating system can interrupt a running process and move it to the ready state to allocate another process to the CPU. In non-preemptive scheduling, when a process is running, no external event may cause it to lose processor usage. The process only leaves the running state if it completes its instructions or runs code that causes a shift to the waiting state.

6. Difference Between FIFO and Round-Robin Scheduling

FIFO (First-In, First-Out) is a non-preemptive scheduling algorithm: the first process to enter the ready state is selected for execution and keeps the processor until it terminates or moves to the waiting state. Round-Robin is a preemptive scheduling algorithm designed especially for time-sharing systems: it is similar to FIFO, except that each process may run only for a fixed time slice (quantum); when the quantum expires, the process is preempted and moved to the end of the ready queue.
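The difference shows up in completion times. A small simulation, with a hypothetical workload of three processes and a quantum of 2 time units (all values are assumptions for illustration):

```python
from collections import deque

bursts = {"A": 6, "B": 2, "C": 4}   # hypothetical CPU times

def fifo(bursts):
    clock, finish = 0, {}
    for p, b in bursts.items():      # each process runs to completion
        clock += b
        finish[p] = clock
    return finish

def round_robin(bursts, quantum=2):
    clock, finish = 0, {}
    ready = deque(bursts.items())
    while ready:
        p, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining - run:
            ready.append((p, remaining - run))   # preempted: back of the queue
        else:
            finish[p] = clock
    return finish

print(fifo(bursts))         # {'A': 6, 'B': 8, 'C': 12}
print(round_robin(bursts))  # {'B': 4, 'C': 10, 'A': 12}
```

Under FIFO the short job B sits behind A and finishes at time 8; under Round-Robin it finishes at time 4, which is why time slicing gives better response times in interactive systems.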

7. Difference Between Preemption by Time and by Priority

Preemption by time occurs when the operating system interrupts the running process according to the expiration of its time slice and replaces it with another process. Preemption by priority occurs when the operating system interrupts the running process because a process with higher priority has entered the ready state.

8. Multiple Queues with Feedback: CPU-bound vs. I/O-bound

I/O-bound processes are favored in this type of scheduling. Since such a process spends little time on the CPU before blocking for I/O, the probability of its suffering preemption by time is low, so I/O-bound processes tend to remain in the high-priority queues, while CPU-bound processes, which repeatedly use up their full time slice, tend to sink into the lower-priority queues.