Chapter three: Operating system – scheduling algorithm

Tips: For each scheduling algorithm, learn the following aspects:

  • Algorithm idea
  • Algorithm rules
  • Is the algorithm used for job scheduling or process scheduling?
  • Preemptive or non-preemptive?
  • Advantages and disadvantages
  • Whether it can cause starvation

1. First come, first served (FCFS)

1.1 Overview

  • Algorithm idea: mainly from a “fairness” perspective (similar to queuing in everyday life), the process that requests the CPU first is allocated the CPU first. When a process enters the ready queue, its PCB is linked to the tail of the queue; when the CPU becomes free, it is allocated to the process at the head of the queue, and that process is removed from the queue to run.
  • Algorithm rules: jobs/processes are served in the order in which they arrive
  • For job/process scheduling: for job scheduling, the criterion is which job reaches the backup (job) queue first; for process scheduling, the criterion is which process reaches the ready queue first
  • Preemptive? Non-preemptive: once the CPU is allocated to a process, the process uses it until it releases the CPU, i.e. until it terminates or requests I/O. The FCFS algorithm is particularly troublesome for time-sharing systems, where every user needs to receive a share of CPU time at regular intervals; letting one process hold the CPU for too long would be a serious mistake
  • The advantages and disadvantages
    • Advantages: fair; simple to implement
    • Disadvantages: a short job (process) queued behind a long one has to wait a long time, so its weighted turnaround time is large and the experience for short jobs is poor. For example, P3 below has a weighted turnaround time of 8 even though it needs only 1 time unit to run. In other words, FCFS is favourable to long jobs and unfavourable to short jobs (Eg: queuing to buy milk tea…).
  • Does it cause starvation (a job/process not being served for a long time)? No

1.2 Example

Example: The following table shows the time at which each process reaches the ready queue and its required running time. Using the first-come-first-served scheduling algorithm, calculate the waiting time, average waiting time, turnaround time, average turnaround time, weighted turnaround time and average weighted turnaround time of each process.

Process    Arrival time    Run time
P1         0               7
P2         2               4
P3         4               1
P4         5               4

First-come-first-served scheduling algorithm: processes are scheduled in their order of arrival; in effect, the process that has waited the longest is served first.

Therefore, the scheduling order is P1→P2→P3→P4: P1 arrives first and runs for 7 time units, then P2 is scheduled, and so on…

Timing calculations for each process:
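The numbers can be checked with a small Python sketch (not part of the original notes; the formulas are the standard ones: turnaround time = completion time − arrival time, waiting time = turnaround time − run time, weighted turnaround time = turnaround time / run time):

```python
# FCFS timing check for the table above (an illustrative sketch).
processes = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]  # (name, arrival, run)

clock = 0
for name, arrival, run in processes:      # FCFS: serve strictly in arrival order
    start = max(clock, arrival)           # the CPU may sit idle until the process arrives
    finish = start + run
    turnaround = finish - arrival         # turnaround = completion - arrival
    waiting = turnaround - run            # waiting = turnaround - run time
    weighted = turnaround / run           # weighted turnaround = turnaround / run time
    print(f"{name}: finish={finish} turnaround={turnaround} wait={waiting} weighted={weighted:.2f}")
    clock = finish
```

This prints turnaround times 7, 9, 8, 11 and waiting times 0, 5, 7, 7, i.e. an average turnaround time of 8.75, an average waiting time of 4.75 and an average weighted turnaround time of 3.50; note P3's weighted turnaround time of 8 mentioned above.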

2. Shortest Job First (SJF)

2.1 Overview

  • Algorithm idea: pursue the minimum average waiting time, the minimum average turnaround time and the minimum average weighted turnaround time
  • Algorithm rule: the shortest job/process is served first (“shortest” means the shortest required service time)
  • For job/process scheduling: can be used for both job scheduling and process scheduling; when used for process scheduling it is called the Shortest Process First (SPF) algorithm
  • Preemptive? SJF and SPF are non-preemptive algorithms, but there is also a preemptive version, SRTN (Shortest Remaining Time Next)
  • The advantages and disadvantages
    • Advantages: the “minimum” average waiting time and average turnaround time
    • Disadvantages: unfair; good for short jobs but bad for long jobs, and starvation may occur. In addition, the running time of a job/process is supplied by the user and is not necessarily truthful, so genuinely short jobs are not guaranteed to be served first
  • Does it cause starvation (a job/process not being served for a long time)? Yes. If short jobs/processes keep arriving, a long job/process may go unserved for a very long time, producing “starvation”; if it never gets served at all, it is said to “starve to death”

2.2 Example

Example: The following table shows the time when each process reaches the ready queue and the required running time. The non-preemptive short-job priority scheduling algorithm is used to calculate the waiting time, average waiting time, turnaround time, average turnaround time, weighted turnaround time and average weighted turnaround time of each process.

Process    Arrival time    Run time
P1         0               7
P2         2               4
P3         4               1
P4         5               4

Shortest job/process first scheduling algorithm: at each scheduling decision, select, among the jobs/processes that have already arrived, the one with the shortest running time.

Therefore, the scheduling sequence is P1→P3→P2→P4

In other words, only P1 has arrived at time 0, so P1 is scheduled first. While P1 is running, the other processes all arrive; among them P3 has the shortest running time, so P3 is scheduled second. Finally, between P2 and P4, P2 arrived first, so P2 is scheduled third and P4 last.

Timing calculations for each process:
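A small Python sketch (illustrative, not from the original notes) reproduces the non-preemptive SJF schedule for the same table:

```python
# Non-preemptive SJF timing check for the table above (illustrative sketch).
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}   # name -> (arrival, run)

clock, remaining = 0, dict(procs)
while remaining:
    ready = [p for p, (arr, _) in remaining.items() if arr <= clock]
    if not ready:                                   # CPU idle until the next arrival
        clock = min(arr for arr, _ in remaining.values())
        continue
    # pick the arrived job with the shortest run time (ties broken by earlier arrival)
    p = min(ready, key=lambda q: (remaining[q][1], remaining[q][0]))
    arr, run = remaining.pop(p)
    clock += run
    tat = clock - arr                               # turnaround time
    print(f"{p}: turnaround={tat} wait={tat - run} weighted={tat / run:.2f}")
```

The schedule P1→P3→P2→P4 gives turnaround times 7, 4, 10, 11 (average 8) and waiting times 0, 3, 6, 7 (average 4).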

3. Preemptive Shortest Job First (SRTN)

3.1 Overview

Example: The following table shows the time at which each process reaches the ready queue and its required running time. Using the preemptive shortest-job-first scheduling algorithm, calculate the waiting time, average waiting time, turnaround time, average turnaround time, weighted turnaround time and average weighted turnaround time of each process.

The preemptive shortest-job-first algorithm is also called the shortest remaining time first algorithm, SRTN.

Process    Arrival time    Run time
P1         0               7
P2         2               4
P3         4               1
P4         5               4

Shortest remaining time first algorithm: a scheduling decision is made whenever a process is added to the ready queue; if the newly arrived process has a shorter remaining time than the currently running process, the new process preempts the CPU and the running process goes back to the ready queue. A scheduling decision is also made whenever a process completes.

Note that the ready queue changes whenever a new process arrives, and it is checked against the rule above. Pn(m) means that process Pn has m units of remaining time. The situation at each moment is as follows:

  • Time 0 (P1 arrives): P1(7)
  • Time 2 (P2 arrives): P1(5), P2(4)
  • Time 4 (P3 arrives): P1(5), P2(2), P3(1)
  • Time 5 (P3 completes and P4 arrives): P1(5), P2(2), P4(4)
  • Time 7 (P2 completes): P1(5), P4(4)
  • Time 11 (P4 completes): P1(5)

The scheduling situation is shown as follows:

Timing calculations for each process:
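The same table can be run through a one-time-unit-at-a-time SRTN simulation (an illustrative sketch, not from the original notes):

```python
# SRTN (preemptive SJF) simulation, one time unit per tick (illustrative sketch).
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}   # name -> (arrival, run)
remaining = {p: run for p, (_, run) in procs.items()}
finish = {}

for t in range(sum(remaining.values())):                 # 16 time units of work in total
    ready = [p for p in remaining if procs[p][0] <= t and remaining[p] > 0]
    p = min(ready, key=lambda q: (remaining[q], procs[q][0]))  # shortest remaining time wins
    remaining[p] -= 1                                    # run the chosen process for one unit
    if remaining[p] == 0:
        finish[p] = t + 1

for p, (arr, run) in procs.items():
    tat = finish[p] - arr
    print(f"{p}: finish={finish[p]} turnaround={tat} wait={tat - run} weighted={tat / run:.2f}")
```

This reproduces the timeline above (P3 finishes at 5, P2 at 7, P4 at 11, P1 at 16), giving an average turnaround time of 7 and an average waiting time of 3, both lower than the non-preemptive SJF result.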

4. Notes on the shortest-job-first algorithm

Note a few small details:

  1. If the problem does not specify otherwise, “shortest job/process first” refers to the non-preemptive version by default
  2. Many books state that “the SJF scheduling algorithm has the minimum average waiting time and average turnaround time”.
    1. Strictly speaking, this statement is inaccurate. The previous example shows that the shortest remaining time first algorithm achieves a smaller average waiting time and average turnaround time
    2. A condition should be added: “when all processes are runnable at the same time, the SJF scheduling algorithm has the minimum average waiting time and average turnaround time”;
    3. or: “when all processes arrive almost simultaneously, the SJF scheduling algorithm has the minimum average waiting time and average turnaround time”;
    4. Without such a condition, the statement should be that “the preemptive shortest job/process first algorithm (shortest remaining time first, SRTN) has the minimum average waiting time and minimum average turnaround time”
  3. Although, strictly speaking, SJF does not always yield the minimum average waiting time and average turnaround time, compared with other algorithms (such as FCFS) it still achieves a small average waiting time and average turnaround time
  4. If an exam option states that “the SJF algorithm has the minimum average waiting time and minimum average turnaround time”, first check whether the other options are clearly wrong; if there is no more suitable option, this one should still be chosen.

5. Reflections on the FCFS and SJF algorithms

  • The FCFS algorithm selects the job (process) that has waited the longest and serves it, but it ignores the job's running time, which makes it unfriendly to short jobs
  • The SJF algorithm selects the job with the shortest execution time and serves it, but it completely ignores how long each job has waited, which makes it unfriendly to long jobs and can even cause starvation
  • Can we design an algorithm that takes into account both the waiting time and the running time of each job? → the highest response ratio next algorithm

6. Highest Response Ratio Next (HRRN)

6.1 Overview

  • Algorithm idea: take into account both the waiting time and the required service time of each job/process

  • Algorithm rule: at each scheduling decision, compute the response ratio of every job/process and select the job/process with the highest response ratio to serve (response ratio >= 1)

    Response ratio = (waiting time + required service time) / required service time

  • For job/process scheduling: It can be used for job scheduling or process scheduling.

  • Preemptive? Non-preemptive. Therefore, scheduling occurs, and the response ratios are computed, only when the currently running job/process voluntarily gives up the processor

  • The advantages and disadvantages

    • Takes both the waiting time and the running time (required service time) into account
    • With equal waiting times, the job requiring less service time is preferred (advantage of SJF)
    • With equal required service times, the job that has waited longer is preferred (advantage of FCFS)
    • For long jobs, the response ratio grows as the waiting time grows, which avoids starvation of long jobs
  • Does it cause starvation (a job/process not being served for a long time)? No

6.2 Example

Example: The following table shows the time at which each process reaches the ready queue and its required running time. Using the highest response ratio next scheduling algorithm, calculate the waiting time, average waiting time, turnaround time, average turnaround time, weighted turnaround time and average weighted turnaround time of each process.

Highest response ratio next algorithm: a non-preemptive scheduling algorithm; scheduling occurs only when the currently running process voluntarily gives up the CPU (it completes normally or abnormally, or blocks). At each scheduling point, the response ratio of every ready process is computed and the process with the highest response ratio is dispatched.

The scheduling situation is shown as follows:
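The schedule can be reproduced with a short sketch; since the example table is not reproduced here, the sketch assumes the same four processes as in the earlier examples (P1 0/7, P2 2/4, P3 4/1, P4 5/4):

```python
# HRRN sketch; assumes the same process set as in the earlier examples.
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}   # name -> (arrival, service)

clock, remaining = 0, dict(procs)
while remaining:
    # P1 arrives at time 0, so the ready list is never empty in this example
    ready = [p for p, (arr, _) in remaining.items() if arr <= clock]
    # response ratio = (waiting time + required service time) / required service time
    p = max(ready, key=lambda q: (clock - remaining[q][0] + remaining[q][1]) / remaining[q][1])
    arr, service = remaining.pop(p)
    print(f"{p}: selected at t={clock}, runs {clock}..{clock + service}")
    clock += service
```

With these assumed data, at t = 7 the response ratios are P2 = (5+4)/4 = 2.25, P3 = (3+1)/1 = 4, P4 = (2+4)/4 = 1.5, so P3 runs next; the full schedule is P1→P3→P2→P4.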

To sum up: a comparison of the three algorithms

Note: These algorithms mainly care about fairness to users and evaluate overall system performance with metrics such as average turnaround time and average waiting time, but they do not care about “response time” and do not distinguish the urgency of tasks, so the interactive experience for users is poor. These three algorithms were therefore mainly suitable for early batch processing systems. Of course, the FCFS algorithm is often used in combination with other algorithms and still plays an important role today. Scheduling algorithms suitable for interactive systems are described in the next section…

7. Time-slice round-robin scheduling algorithm (RR)

7.1 Overview

  • Algorithm idea: serve each process fairly and in turn, so that each process can get a response within a certain time interval
  • Algorithm rule: let the processes execute one time slice each (e.g. 100 ms), in turn, in the order in which they reach the ready queue. If a process does not finish within its time slice, the processor is taken away from it and the process is put back at the end of the ready queue.
  • Used for job/process scheduling: used for process scheduling (a time slice can only be allocated after a job has been loaded into memory and a corresponding process has been created)
  • Preemptive? If a process does not finish within its time slice, the processor is forcibly taken from it, so time-slice round robin is a preemptive algorithm. The clock device issues a clock interrupt to notify the CPU that the time slice has expired
  • The advantages and disadvantages
    • Advantages: fair; fast response; suitable for time-sharing operating systems
    • Disadvantages: frequent process switching incurs some overhead; it does not distinguish the urgency of tasks
  • Does it cause starvation (a job/process not being served for a long time)? No.

7.2 Example

Time-slice round-robin scheduling algorithm: it was born together with time-sharing systems; the processes in the ready queue take turns executing one time slice each (the process at the head of the ready queue is selected each time).

Example: The following table shows the time at which each process reaches the ready queue and its required running time. The time-slice round-robin scheduling algorithm is used to analyze how the processes run when the time slice is 2 and when it is 5.

The time slice size is 2 (note: the following brackets indicate the process in the ready queue at the current moment and the remaining running time of the process)

  • Time 0 (P1(5)): at time 0, only P1 has reached the ready queue, so P1 runs one time slice on the processor
  • Time 2 (P2(4)→P1(3)): at time 2, P2 arrives at the ready queue. P1's time slice is used up, so P1 comes off the processor and is placed at the end of the queue. P2 is now at the head, so P2 is dispatched. (Note: at time 2, P1 comes off the processor just as the new process P2 arrives; in such cases, by default, the newly arrived process enters the ready queue first.)
  • Time 4 (P1(3)→P3(1)→P2(2)): at time 4, P3 arrives and is inserted at the end of the ready queue first; then P2's time slice expires and P2 comes off the processor, joining the end of the queue. P1, now at the head, is dispatched
  • Time 5 (P3(1)→P2(2)→P4(6)): at time 5, P4 arrives and is inserted at the end of the ready queue (note: P1's time slice has not been used up yet, so no scheduling occurs; P1 is running and is not in the ready queue)
  • Time 6 (P3(1)→P2(2)→P4(6)→P1(1)): at time 6, P1's time slice is used up, so P1 comes off the processor, is put back at the end of the ready queue, and scheduling occurs: P3 is dispatched
  • Time 7 (P2(2)→P4(6)→P1(1)): although P3's time slice is not used up, P3 only needs to run for 1 time unit, so it gives up the processor when it finishes and scheduling occurs again. The queue-head process P2 is dispatched.
  • Time 9 (P4(6)→P1(1)): P2's time slice expires just as it finishes running; scheduling occurs and P4 is dispatched
  • Time 11 (P1(1)→P4(4)): P4's time slice is used up and it returns to the end of the ready queue; P1 is dispatched
  • Time 12 (P4(4)): P1 finishes and voluntarily gives up the processor. Only P4 remains in the ready queue, so P4 is dispatched
  • Time 14 (): the ready queue is empty, so P4 runs another time slice.
  • Time 16: all processes have finished running

The time slice size is 5 (note: the following brackets indicate the process in the ready queue at the current moment and the remaining running time of the process)

  • Time 0 (P1(5)): only P1 has arrived; P1 is dispatched.
  • Time 2 (P2(4)): P2 arrives, but P1's time slice has not ended, so no scheduling occurs yet
  • Time 4 (P2(4)→P3(1)): P3 arrives, but P1's time slice has not ended, so no scheduling occurs yet
  • Time 5 (P2(4)→P3(1)→P4(6)): P4 arrives just as P1 finishes running. Scheduling occurs; P2 is dispatched.
  • Time 9 (P3(1)→P4(6)): P2 finishes; although its time slice is not used up, it gives up the processor. Scheduling occurs; P3 is dispatched.
  • Time 10 (P4(6)): P3 finishes; although its time slice is not used up, it gives up the processor. Scheduling occurs; P4 is dispatched.
  • Time 15 (): P4's time slice is used up, but the ready queue is empty, so P4 continues with another time slice.
  • Time 16 (): P4 finishes and gives up the processor. All processes are complete.
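Both runs can be reproduced with a small round-robin simulation. This is an illustrative sketch, not from the original notes; the process data (P1 arrives at 0 and needs 5, P2 2/4, P3 4/1, P4 5/6) is read off the walkthrough above, since the example table itself is not reproduced here:

```python
from collections import deque

def round_robin(procs, quantum):
    """Round-robin simulation; returns each process's completion time."""
    arrivals = sorted(procs.items(), key=lambda kv: kv[1][0])   # (name, (arrival, run))
    remaining = {p: run for p, (_, run) in procs.items()}
    queue, finish, t, i = deque(), {}, 0, 0
    while len(finish) < len(procs):
        while i < len(arrivals) and arrivals[i][1][0] <= t:      # admit everything that has arrived
            queue.append(arrivals[i][0]); i += 1
        if not queue:                                            # CPU idle until the next arrival
            t = arrivals[i][1][0]
            continue
        p = queue.popleft()
        for _ in range(min(quantum, remaining[p])):              # run one unit at a time so that
            t += 1                                               # processes arriving during the
            remaining[p] -= 1                                    # slice join the queue before the
            while i < len(arrivals) and arrivals[i][1][0] <= t:  # preempted process does
                queue.append(arrivals[i][0]); i += 1
        if remaining[p] == 0:
            finish[p] = t
        else:
            queue.append(p)                                      # unfinished: back to the tail
    return finish

procs = {"P1": (0, 5), "P2": (2, 4), "P3": (4, 1), "P4": (5, 6)}
print(round_robin(procs, 2))   # {'P3': 7, 'P2': 9, 'P1': 12, 'P4': 16}
print(round_robin(procs, 5))   # {'P1': 5, 'P2': 9, 'P3': 10, 'P4': 16} -- same as FCFS
```

With a quantum of 2 the completion times match the first walkthrough; with a quantum of 5 they match the second, i.e. the FCFS result, as discussed next.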

7.3. Algorithm evaluation

Comparing with the first-come-first-served scheduling algorithm: when the time slice is 5, the result is exactly the same as with FCFS.

If the time slice is so large that every process can complete within one time slice, the round-robin algorithm degenerates into first-come-first-served and the response time of processes increases, so the time slice cannot be too large. For example, suppose 10 processes are running concurrently in the system; if the time slice is 1 second, a process may have to wait 9 seconds before it gets a response. In other words, if a user issues a debug command from the keyboard just after their own process's time slice has ended, the system may take as long as 9 seconds to respond.

On the other hand, process scheduling and switching have a time cost (saving and restoring the execution environment). If the time slice is too small, process switching becomes too frequent (as a rule of thumb, the time slice is designed so that switching overhead stays below 1%), the system spends a large amount of time on process switches, and the fraction of time actually used for executing processes decreases. So the time slice cannot be too small either.

8. Priority scheduling algorithm

8.1 Overview

  • Algorithm idea: with the development of computers, and in particular the emergence of real-time operating systems, more and more application scenarios need to decide the processing order according to the urgency of tasks

  • Algorithm rule: Selects the job or process with the highest priority during scheduling

  • For job/process scheduling: Can be used for both job scheduling and process scheduling. It is even used in I/O scheduling, which we will learn later

  • Preemptive? Both preemptive and non-preemptive versions exist. The difference: the non-preemptive version schedules only when a process voluntarily gives up the processor, while the preemptive version also checks whether preemption should occur whenever the ready queue changes.

  • The advantages and disadvantages

    • Advantages: priorities distinguish urgency and importance, which suits real-time operating systems; the degree of preference for various jobs/processes can be adjusted flexibly.
    • Disadvantages: a steady stream of high-priority processes can lead to starvation of low-priority processes
  • Does it cause starvation (a job/process not being served for a long time)? Yes

8.2 Example

Non-preemptive priority scheduling algorithm

Example: The time at which each process reaches the ready queue, its required running time, and its priority number are shown in the following table. The non-preemptive priority scheduling algorithm is used to analyze how the processes run. (Note: the larger the priority number, the higher the priority)

Non-preemptive priority scheduling algorithm: each scheduling decision selects the arrived process with the highest priority. Scheduling occurs only when the current process voluntarily gives up the processor.

Note: The processes currently in the ready queue are represented in brackets below

  • Time 0 (P1): only P1 has arrived; P1 is dispatched.
  • Time 7 (P2, P3, P4): P1 finishes and gives up the processor; the other processes have all arrived. P3 has the highest priority, so P3 is dispatched.
  • Time 8 (P2, P4): P3 finishes. P2 and P4 have the same priority; because P2 arrived first, P2 is dispatched
  • Time 12 (P4): P2 finishes; only P4 remains in the ready queue, so P4 is dispatched.
  • Time 16 (): P4 finishes and all processes are complete
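As an illustrative sketch (not from the original notes): the arrival and run times below are read off the walkthroughs (P1 0/7, P2 2/4, P3 4/1, P4 5/4), while the priority numbers (P1=1, P2=2, P3=3, P4=2) are assumptions chosen only to be consistent with this walkthrough.

```python
# Non-preemptive priority scheduling sketch; the priority numbers are assumed for illustration.
procs = {"P1": (0, 7, 1), "P2": (2, 4, 2), "P3": (4, 1, 3), "P4": (5, 4, 2)}  # arrival, run, prio
clock, remaining = 0, dict(procs)
while remaining:
    # P1 arrives at time 0, so there is always at least one arrived process here
    ready = [p for p, (arr, _, _) in remaining.items() if arr <= clock]
    # highest priority first; on a tie, the process that arrived earlier wins
    p = max(ready, key=lambda q: (remaining[q][2], -remaining[q][0]))
    arr, run, _ = remaining.pop(p)
    print(f"{p}: runs {clock}..{clock + run}")
    clock += run
```

Under these assumptions it prints the schedule above: P1 0..7, P3 7..8, P2 8..12, P4 12..16.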

Preemptive priority scheduling algorithm

Example: The time at which each process reaches the ready queue, its required running time, and its priority number are shown in the following table. The preemptive priority scheduling algorithm is used to analyze how the processes run. (Note: the larger the priority number, the higher the priority)

Preemptive priority scheduling algorithm: each scheduling decision selects the arrived process with the highest priority. Scheduling occurs when the current process voluntarily gives up the processor; in addition, whenever the ready queue changes, the algorithm checks whether preemption should occur.

Note: The processes currently in the ready queue are represented in brackets below

  • Time 0 (P1): only P1 has arrived; P1 is dispatched.
  • Time 2 (P2): P2 reaches the ready queue with a higher priority than P1, so preemption occurs: P1 goes back to the ready queue and P2 is dispatched.
  • Time 4 (P1, P3): P3 arrives with a higher priority than P2; P2 returns to the ready queue and P3 preempts the processor.
  • Time 5 (P1, P2, P4): P3 finishes and voluntarily releases the processor; at the same time P4 arrives. P2 and P4 have the same priority, and P2 entered the ready queue earlier than P4, so P2 is dispatched
  • Time 7 (P1, P4): P2 finishes; the ready queue holds only P1 and P4, and P4, which has the higher priority, is dispatched.
  • Time 11 (P1): P4 finishes and P1 is dispatched
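The preemptive variant can be simulated one time unit at a time with the same assumed priorities (P1=1, P2=2, P3=3, P4=2); re-evaluating the choice every unit covers both "the running process gives up the processor" and "the ready queue changes":

```python
# Preemptive priority scheduling sketch, one time unit per tick (same assumed priorities).
procs = {"P1": (0, 7, 1), "P2": (2, 4, 2), "P3": (4, 1, 3), "P4": (5, 4, 2)}  # arrival, run, prio
remaining = {p: run for p, (_, run, _) in procs.items()}
finish = {}
for t in range(sum(remaining.values())):                  # 16 units of work in total
    ready = [p for p in remaining if procs[p][0] <= t and remaining[p] > 0]
    # highest priority wins; on a tie, the process that arrived earlier wins
    p = max(ready, key=lambda q: (procs[q][2], -procs[q][0]))
    remaining[p] -= 1
    if remaining[p] == 0:
        finish[p] = t + 1
print(finish)   # expected: {'P3': 5, 'P2': 7, 'P4': 11, 'P1': 16}
```

Picking the process to run afresh at every tick is what makes this variant preemptive; the non-preemptive sketch above instead commits to the chosen process until it completes.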

8.3 Supplementary notes

In practice there need not be a single ready queue: processes can be organized into multiple queues by priority, or higher-priority processes can simply be placed closer to the head of a single queue.

According to whether the priority can be changed dynamically, you can classify the priority into static priority and dynamic priority.

  • Static priority: Set when a process is created and remains unchanged.
  • Dynamic priority: There is an initial value when the process is created, and the priority is adjusted dynamically after that.

Thinking

  1. How to properly set the priorities of various processes?
    1. Common practice: system processes have higher priority than user processes; foreground processes have higher priority than background processes; the operating system favors I/O-bound (I/O-busy) processes. Note: I/O-bound processes are contrasted with compute-bound (CPU-busy) processes.
    2. The I/O devices and the CPU can work in parallel. If I/O-busy processes are given higher priority, the I/O devices are more likely to be put to work as early as possible, which improves resource utilization and system throughput.
  2. If a dynamic priority is used, when should it be adjusted?
    1. It can be considered from the perspective of pursuing fairness and improving resource utilization
    2. If a process is waiting in the ready queue for a long time, it can be prioritized appropriately
    3. If a process has been running on the processor for a long time, it may be appropriate to reduce its priority
    4. If a process is found to perform I/O operations frequently, its priority can be increased appropriately

9. Multi-level feedback queue Scheduling Algorithm

9.1 Evaluation of the algorithms above

  • The FCFS algorithm has the advantage of fairness
  • The advantage of SJF algorithm is that it can deal with short jobs as soon as possible, and the average waiting/turnaround time and other parameters are excellent
  • Time slice rotation scheduling algorithm can make each process get timely response
  • Priority scheduling algorithms can flexibly adjust the opportunities for various processes to be served
  • Is there an algorithm that makes a trade-off among them, balancing their strengths to achieve good overall performance?
  • Yes: the multilevel feedback queue scheduling algorithm

9.2 Multi-level feedback queue scheduling algorithm

9.2.1 Overview

  • Algorithm idea: the tradeoff of other scheduling algorithms

  • Algorithm rules:

    • Set up multiple levels of ready queues, with priority decreasing from the first level down and the time slice increasing from level to level
    • When a new process arrives, it first enters the level-1 queue and waits for a time slice according to the FCFS principle. If the time slice is used up before the process finishes, the process is moved to the end of the next-level queue
    • If the process is already in the lowest-level queue, it is put back at the end of that same queue
    • A time slice is allocated to the process at the head of the (k+1)-th queue only when the k-th queue (and every queue above it) is empty
  • Used for job/process scheduling: Used for process scheduling.

  • Preemptive? Preemptive. If, while a process in the k-th queue is running, a new process arrives and enters a higher-priority queue (level 1 to k-1), the new process preempts the processor and the original process is put back at the end of the k-th queue.

  • The advantages and disadvantages

    • Fairness to all types of processes (advantages of FCFS);
    • Each newly arrived process can get a response quickly (advantage of RR);
    • Short processes can be completed in less time (SPF advantage);
    • No need for the user to estimate the running time of a process in advance (which avoids users gaming the system);
    • The preference for various types of processes (e.g. CPU-intensive vs. I/O-intensive) can be adjusted flexibly (extension: a process that blocks for I/O can be put back into its original queue, so that I/O-bound processes keep a high priority)
  • Does it cause starvation (a job/process not being served for a long time)? Yes

9.2.2 Principle of the algorithm

The multilevel feedback queue scheduling algorithm uses multiple ready queues, as shown in the figure. The queues are organized so that the priority of each ready queue decreases in order while the length of its time slice increases in order; that is, a ready process in a lower-numbered queue has a higher priority than one in a higher-numbered queue, but receives a shorter CPU time slice. Processes within a given ready queue are served in first-come, first-served order.

When a process enters the ready state (for example from the waiting state), it first enters the lowest-numbered queue. When it is dispatched to the processor, it is given the time slice corresponding to its ready queue. If the time slice is used up before it finishes, it is forced to release the processor and enters the next-level ready queue (its queue number increases by 1 and the corresponding time slice doubles). Although its next execution is delayed, the time slice it receives the next time it gets the processor is twice as long. In the highest-numbered ready queue, the time slice can be regarded as infinitely long, i.e. once a process there is allocated the processor it runs to completion.

When the CPU becomes idle, the search starts from the lowest-numbered ready queue and the process at its head is run. If that queue is empty, the next ready queue (in increasing number) is examined, and so on. In this way, a process in the highest-numbered queue is scheduled only when all the queues above it are empty.

It can be seen that the algorithm serves processes with short running times first, using a small time slice, and allocates larger time slices to processes with longer running times, so that long processes are not interrupted too frequently and processor efficiency does not suffer.

Multi-level feedback queue scheduling is a more general CPU scheduling algorithm, but it is also a more complex algorithm.
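To make these rules concrete, here is a minimal simulation sketch (not from the original notes): the three levels, the time slices 1/2/4 and the two example processes are all assumptions chosen only for illustration. It follows the rules above: new arrivals enter the top queue, a process that exhausts its slice drops one level (staying at the lowest level once it gets there), a process preempted by an arrival in a higher queue goes back to the end of its current queue, and a lower queue is served only when every queue above it is empty.

```python
from collections import deque

# Three levels with time slices 1, 2 and 4; the example processes are assumed data.
QUANTA = [1, 2, 4]
procs = {"A": (0, 6), "B": (1, 3)}                 # name -> (arrival time, total run time)
queues = [deque() for _ in QUANTA]
remaining = {p: r for p, (_, r) in procs.items()}
admitted = set()
t = 0

def admit(now):
    """Newly arrived processes always enter the level-0 (highest-priority) queue."""
    for p, (arr, _) in sorted(procs.items(), key=lambda kv: kv[1][0]):
        if arr <= now and p not in admitted:
            queues[0].append(p)
            admitted.add(p)

admit(t)
while any(remaining.values()):
    if not any(queues):                            # CPU idle: advance time to the next arrival
        t += 1
        admit(t)
        continue
    level = next(i for i, q in enumerate(queues) if q)   # head of the highest non-empty queue
    p = queues[level].popleft()
    preempted = False
    for _ in range(min(QUANTA[level], remaining[p])):    # run for at most one time slice
        t += 1
        remaining[p] -= 1
        admit(t)
        if queues[0] and level > 0:                # an arrival entered a higher queue: preempt
            preempted = True
            break
    if remaining[p] == 0:
        print(f"{p} finishes at t={t}")
    elif preempted:
        queues[level].append(p)                    # preempted: back to the end of the SAME queue
    else:
        next_level = min(level + 1, len(QUANTA) - 1)
        queues[next_level].append(p)               # slice used up: drop to the next lower queue
```

With these assumed processes, the sketch prints that B finishes at t = 6 and A at t = 9.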

9.2.3 Example

10. Summary of scheduling algorithms

Note: Compared with early batch operating systems, the interactive operating systems (including time-sharing and real-time operating systems) that appeared once the cost of computers had dropped dramatically pay more attention to metrics such as system response time, fairness and balance. These three algorithms (time-slice round robin, priority scheduling and the multilevel feedback queue) meet those requirements well and are therefore suitable for interactive systems. (UNIX, for example, uses a multilevel feedback queue scheduling algorithm.)