Process Scheduling & Memory Management: SRT, HRN, Queues, RAM
SRT (Shortest Remaining Time)
The next process to enter the processor is the one with the shortest remaining runtime. When a new process arrives with a shorter runtime than the currently running process, the latter is evicted from the CPU, and the shorter process begins execution. It’s a preemptive variant of the Shortest Job Next (SJN) algorithm.
Features:
- Variation of SJN.
- Preemptive policy.
- Very efficient.
- Possibility to assign priorities, where the system adjusts runtime based on priority (multiplying or dividing remaining time/quantums).
Problem: Some processes may suffer indefinite postponement (starvation), effectively becoming dead processes. There is no monitoring or control, so priority variations affect execution.
An algorithm can be appropriate or inappropriate depending on whether we can remove a process from memory and postpone it when needed. In the non-preemptive case, a process either runs to completion or is postponed indefinitely, although other outcomes exist.
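The preemptive selection rule above can be sketched as a small time-step simulation; the process names, arrival times, and burst lengths below are hypothetical:

```python
# Minimal SRT (preemptive SJN) sketch: at every time unit, run the
# ready process with the shortest remaining time.
def srt(processes):
    """processes: list of (name, arrival, burst). Returns completion order."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: a for name, a, _ in processes}
    time, done = 0, []
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        # Preemption: the choice is re-evaluated every time unit, so a
        # newly arrived shorter process evicts the current one.
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            done.append(current)
    return done
```

With processes A(arrival 0, burst 7), B(2, 4) and C(4, 1), the short late arrivals C and B finish before the long early arrival A.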
HRN (Highest Response Ratio Next)
This policy prioritizes the process with the highest response ratio. Each process gets a priority based on its response ratio: Priority = (waiting time + service time) / service time. Increasing the waiting time increases the priority, which prevents prolonged waiting. Initially the priority is 1 (waiting time zero) and it grows as the process waits, so no process is delayed indefinitely. When a process leaves the processor (on completion or for an I/O operation), the highest-priority process in the ready queue executes. This is a non-preemptive policy: a newly arrived high-priority process does not interrupt the running one, though it may run before other waiting processes.
Disadvantages: A short process arriving just after a long one may still experience a long wait. The policy is expensive to implement, since the ratio must be recalculated for all waiting processes, especially when processes complete or are interrupted frequently; system overhead increases accordingly. It is fundamentally non-preemptive, re-evaluating only on process termination, and does not lend itself to favouring one process over another. It is costly and can overload the system.
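The selection step can be sketched in a few lines; the waiting and service times below are hypothetical:

```python
# HRN selection sketch: pick the ready process with the highest
# response ratio = (waiting time + service time) / service time.
def hrn_pick(ready):
    """ready: list of (name, waiting_time, service_time) tuples."""
    # The ratio starts at 1 (waiting time 0) and grows as a process
    # waits, so long-waiting processes eventually win.
    return max(ready, key=lambda p: (p[1] + p[2]) / p[2])[0]
```

For example, a short job that has waited a while (B: waited 2, needs 1) outranks a long job with the same absolute wait (A: waited 10, needs 10), since the ratio weights waiting relative to service time.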
Queues
When multiple processes can be grouped by characteristics, we can define different queues with different scheduling algorithms. We need an algorithm to manage these multiple queues. Processes are grouped and placed in memory queues, each with its own memory. A queue processing algorithm is needed to prioritize queues based on system requirements.
Multiple Queues with Feedback (FB)
All queues have access to the processor, and each implements its own scheduling policy. Instead of complex queue control, queues are assigned time slots (e.g., 3 time units for queue 1, 2 for queue 2, 1 for queue 3). The lowest level runs least often, with processes moving down queues (levels) upon rule infringement. All queues have independent processor access, which favours short processes and I/O-bound ones. When a process exhausts its quantum, the processor selects a process from a lower-level queue; after consuming its full quantum a certain number of times, a process moves down to the next lower-level queue (cascading assignment).
Features: Tolerates load well. This is a preemptive policy. It is very adaptable to system needs, allowing different management for each queue.
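A simplified sketch of the demotion mechanism, assuming higher queues always run first and a single quantum exhaustion demotes a process; the quanta and job list are hypothetical:

```python
# Toy multilevel feedback queue: new processes enter the top queue;
# a process that exhausts its quantum is demoted one level.
from collections import deque

def mlfq(jobs, quanta=(3, 2, 1)):
    """jobs: list of (name, burst). Returns the (name, units_run) trace."""
    queues = [deque() for _ in quanta]
    for name, burst in jobs:
        queues[0].append((name, burst))      # all arrivals start at level 0
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, burst = queues[level].popleft()
        run = min(quanta[level], burst)
        trace.append((name, run))
        burst -= run
        if burst > 0:
            # Quantum used up: demote (processes at the bottom level stay there).
            dest = min(level + 1, len(queues) - 1)
            queues[dest].append((name, burst))
    return trace
```

Note how a long job (A, burst 5) is interrupted after its level-0 quantum, lets the short job B finish, and only then completes at level 1: short processes are favoured, as the text describes.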
RAM (Primary Memory)
Main Memory Management
GENERAL USES:
- To run programs: Programs and data must be in main memory for processor access.
- To increase processor performance (multiprogramming): Main memory is divided into ‘n’ parts, each holding a running process.
- Meeting point: Data exchange between the processor, devices, and secondary storage occurs through main memory. Speed limitations here create a bottleneck for the computer’s processing speed.
CONCEPTS:
- Memory access time: Time between the start and end of a read/write operation.
- Memory cycle time: Time imposed by the hardware between the end of one memory operation and the start of the next (system reset, restoration of parameters).
ADDRESSING
Virtual Address (VA) = V * 65453 + displacement (where V is the virtual segment number).
Address Definition: Correspondence between a memory address and physical location.
Address Types:
- Symbolic: Refers to a name (label). Source code uses symbolic addresses, converted to absolute addresses during compilation. Variables are assigned symbolic names by the programmer, meaningful only in the source program.
- Relative: Values assigned by the compiler relative to the start of the program during compilation, used for variables.
- Absolute (Physical): The actual address used by the processor to access memory. Calculated as (Segment Size * (Segment Number – 1)) + Displacement within the segment.
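The absolute-address formula above can be checked with a worked example; the segment size, segment number, and displacement below are illustrative:

```python
# Absolute (physical) address from segment coordinates, per the formula:
# Absolute = Segment Size * (Segment Number - 1) + Displacement
def absolute_address(segment_size, segment_number, displacement):
    return segment_size * (segment_number - 1) + displacement
```

With 1024-byte segments, displacement 100 inside segment 3 lands at physical address 1024 * 2 + 100 = 2148.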
Cache memory works alongside main memory to accelerate data input/output. Cache and main memory together constitute primary memory; cache is faster but smaller.
Multiprogramming Concept
Multiprogramming allows running multiple programs concurrently in main memory. Main memory is divided into partitions or regions, each holding a process. The number of partitions determines the degree of multiprogramming. Each segment has its own cache entry and exit (the system manages ‘n’ caches).
Memory Protection
Segmentation: When memory is partitioned and a different process is loaded into each segment, we need to control each process's range of access, i.e. to prevent programs or data from overlapping and to block unauthorized access from one program to another's segment.
Solution 1: For each partition, two registers point to its upper and lower bounds, so that every address subsequently generated by the process must fall within those margins. This technique requires the addresses generated by processes to be absolute, whether produced at compile time or at run time; in both cases the assignment is static.
Solution 2: Load one register with the starting address of the partition and another with its size; the first is called the base register and the second the limit register. Dynamic address assignment then becomes possible: to access a different memory area it suffices to update the contents of the base register. Every address generated by the process must be less than the limit register.
Memory Management: Every operating system devotes a large part of its software to managing memory.
Monoprogramming: Programs execute one after another in dedicated memory (the program to be executed is loaded and the rest of memory remains unused). This was the earliest approach.
Memory division (the boundary problem): To hold several processes, a policy for dividing memory with boundaries must be adopted. Address reassignment: a boundary register indicates the point from which the user program is loaded, so the program's addresses must be reassigned depending on that boundary.
- Static reassignment: relocation takes place at compile time or when the program is loaded into memory, so any change in the size of the operating system requires recompiling or reloading the program. Easy to implement, but too rigid.
- Dynamic reassignment: relocation is performed during program execution.
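The base/limit protection scheme can be sketched as follows; the base, limit, and addresses are hypothetical values:

```python
# Base/limit check: every logical address must be below the limit
# register; the base register relocates it dynamically.
def check_access(base, limit, logical_address):
    """Return the real address, or raise on an out-of-range access."""
    if not 0 <= logical_address < limit:
        raise MemoryError("address outside partition")
    return base + logical_address    # relocation: just add the base
```

Moving the partition only requires reloading the base register; the process's logical addresses do not change.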
A special hardware device intercepts each logical address generated by the program and adds the contents of the boundary register to obtain the corresponding real address. With hardware support, the OS establishes the correspondence between the relative addresses and the real addresses that make up the physical address space.
Fixed-Size Partition Management: One of the simplest ways to manage memory is to divide it into contiguous fixed-size partitions (one after another), whose number and sizes are defined at system initialization and remain unchanged for the duration of the session. When a program must be loaded, the operating system assigns it a partition that can contain it, which requires the program to declare its memory needs to the system. The first systems using this kind of management handled memory through queues: the OS assigned each program to a queue depending on the amount of memory requested. This technique is simple and easy to develop, but its effectiveness is strongly influenced by the memory requirements of programs and by their order of arrival.
Limiting factors: When a program does not fit in the assigned memory, it cannot run; it must wait for the supervisor process to finish and for the system to reallocate larger partitions. When a loaded program is smaller than its assigned partition, memory is wasted. Memory lost in this way is called memory fragmentation:
- External fragmentation: memory left unusable between segments after all the segmentations of main memory (what remains after segment reallocation).
- Internal fragmentation: memory left unused inside a partition during process execution, as when a small program leaves part of its assigned piece of memory idle.
Methods to reduce fragmentation: Contiguous variable-size partitions: memory is allocated to jobs dynamically, according to their size.
The OS maintains an internal table recording the memory areas, giving each job a partition of the requested size. Operation: at start-up the system allocates to each program a segment of the size it needs, which leaves external fragmentation; as processes finish and are replaced by others in segments sized for their predecessors, memory is wasted and internal fragmentation appears. Two different placement policies are possible:
- FIRST FIT: place the program in the first free partition large enough, regardless of fit. The danger is that if the first free partition is much too large, a lot of memory is wasted.
- BEST FIT: choose the free partition that best fits the size of the program to be executed.
The memory manager joins contiguous free partitions, recording them in a table from the base address. External fragmentation still accumulates, so periodic compaction is needed to eliminate it (which considerably slows down overall system performance); every so often a reallocation must take place.
Paging: Paging is a form of memory allocation that lets a process be assigned discontiguous memory pages located in different frames.
- Frame: a fixed-size block of physical memory, the same size as the pages defined in it.
- Page: a logical partition that divides memory into frame-sized blocks.
Each program is allocated the memory pages it needs for execution, regardless of which frames they occupy.
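The two placement policies can be sketched over a table of free partitions; the hole sizes below are illustrative:

```python
# First-fit vs best-fit over a list of free-hole sizes.
# Both return the index of the chosen hole, or None if nothing fits.
def first_fit(free, size):
    for i, hole in enumerate(free):
        if hole >= size:
            return i                  # first hole big enough, regardless of fit
    return None

def best_fit(free, size):
    candidates = [(hole, i) for i, hole in enumerate(free) if hole >= size]
    # Smallest hole that still fits -> least wasted memory in the partition.
    return min(candidates)[1] if candidates else None
```

For holes of sizes [10, 4, 20, 6] and a 5-unit request, first fit wastes 5 units in the first hole, while best fit picks the 6-unit hole and wastes only 1.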
This method avoids excessive external fragmentation, but internal fragmentation still occurs, since processes cannot be expected to fit the page size exactly. To optimize memory use there is a tendency to reduce the page size: the smaller the page, the less memory is wasted, but this means a larger number of page-table entries to manage.
The paging system works as follows. For each active job, a special register called the base register indicates the address of the page table of the job currently running; on a context switch, the table of the next process is restored. To transform each logical address generated by the processor into its real address, the corresponding page table must be consulted before the real address can be formed, so every access costs two memory accesses and execution times multiply. To avoid this, or to speed up translation, there are two basic solutions:
- Keep the addresses of recently used pages in a cache, reducing the page-lookup delay. The drawback is that cache is very expensive, and managing it adds to the cost.
- In small systems with page tables of few entries, associative memories can hold the tables. These consist of a set of hardware registers (the associative registers) that can all be compared at once (search by content).
One advantage of paging is that it has no external fragmentation and only minimal internal fragmentation, on the last page of each program, which is recoverable.
Segmentation: Programs are built around a central core from which routines or data areas (stacks, tables) branch off, so that logically a program is a set of components of varying size, i.e. a logical address space. Advantage: external fragmentation is temporary, since the end of each routine frees its memory, so the external fragmentation is easily recovered.
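The two-step translation through the page table can be sketched as follows; the page size and page-table contents are hypothetical:

```python
# Logical-to-real address translation through a page table.
PAGE_SIZE = 256

def translate(page_table, logical):
    """page_table: dict mapping page number -> frame number."""
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]          # the extra memory access the text mentions
    return frame * PAGE_SIZE + offset # offset is unchanged within the frame
```

With page 0 in frame 5 and page 1 in frame 2, logical address 300 (page 1, offset 44) maps to real address 2 * 256 + 44 = 556. Caching recent (page, frame) pairs, as described above, saves the table lookup on repeated accesses.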
Combined Systems:
- Segmented paging: the page table is segmented to suit the size of the program. A table is maintained whose entries indicate the start address and size of each page table. Special hardware is always used.
- Paged segmentation: segments whose size is always a whole number of pages are used, thus avoiding the external fragmentation characteristic of segmentation.
Virtual Memory: a management technique that combines hardware and software. Its advantages: logical memory can be larger than the real memory available; the degree of multiprogramming, and hence system efficiency, can be raised; and less disk I/O is needed to load and swap programs. The different parts of a program are loaded into memory as they are needed. Three aspects must be considered:
- Loading: parts of programs are loaded when needed; if loaded on demand this is called demand paging, and if loaded in advance, prepaging.
- Placement: virtual memory systems that use segmentation must decide where to load a new segment, i.e. whether to use the best-fit or first-fit policy.
- Replacement: when a new part of a program must be loaded, a decision has to be taken: remove the last part of the program, remove the oldest, or remove the one unused for the longest time.
Virtual memory can work in two ways. Demand paging is the most common: the algorithm does not load the entire program, only the pages it requests, and a new page is loaded only when the address generated by the processor is not in memory. Page replacement: the system routine that services the page-fault interrupt works as follows:
1. Find the requested page in secondary storage.
2. Find a free frame. If such a frame exists, use it; if not, use the replacement algorithm to select the page to replace.
3. Save the replaced page to secondary storage, updating the tables concerned.
4. Bring the requested page into the free frame and update the corresponding tables.
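The fault-handling steps can be sketched as follows; the frame table, capacity, and victim-selection hook are hypothetical stand-ins (the real routine also reads and writes secondary storage):

```python
# Page-fault handling sketch: use a free frame if one exists,
# otherwise evict a victim chosen by the replacement algorithm.
def handle_fault(page, frames, capacity, pick_victim):
    """frames: dict mapping resident page -> frame number."""
    if page in frames:
        return frames[page]              # already resident: no fault
    if len(frames) < capacity:
        frame_no = len(frames)           # a free frame exists: use it
    else:
        victim = pick_victim(frames)     # replacement algorithm decides
        frame_no = frames.pop(victim)    # save victim out, update tables
    frames[page] = frame_no              # bring the requested page in
    return frame_no
```

`pick_victim` is where a policy such as FIFO or LRU plugs in; the handler itself is policy-independent.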
Replacement Algorithms: The optimal replacement algorithm replaces the page that will take longest to be used again; however, it is impossible to foresee the future behaviour of processes. The algorithms actually used control replacement based on the past use of pages, trying to approximate the optimal solution. Their fitness is judged by two factors: the number of page faults the process causes, and the cost of use.
- FIFO policy: when a page must be replaced, choose the one that has been in memory longest. To track each process's time in memory, a queue ordered by arrival is used; the algorithm is very simple and easy to implement, causes very little overhead, and has reasonable efficiency.
- LRU policy (Least Recently Used): replace the page that has been least recently used, on the assumption that pages heavily used in the recent past will also be the most used in the future. To verify the age of each page, the system has to resort to a method
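The two policies can be compared by counting page faults over a reference string; the string and frame count below are illustrative:

```python
# Count page faults for FIFO or LRU on a page-reference string.
# `memory` is kept ordered: index 0 is the eviction candidate.
def count_faults(refs, capacity, policy):
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            if policy == "lru":
                memory.remove(page)
                memory.append(page)   # refresh: move to most-recent end
            continue                  # hit: no fault
        faults += 1
        if len(memory) == capacity:
            memory.pop(0)             # FIFO: oldest arrival; LRU: least recent
        memory.append(page)
    return faults
```

On the string 1, 2, 3, 1, 4, 2 with three frames, FIFO causes 4 faults and LRU causes 5: neither policy dominates on every workload, which is why both fault count and bookkeeping cost matter when judging them.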