After covering virtual memory paging in the last article, this time I'm looking at the scheduling algorithms in Modern Operating Systems. I won't go into much detail; the goal is to clear my head and build a framework, so that future articles can use this one as an index for filling in the gaps. Simply copying the book's content into the blog would be pointless.
First of all, different environments need different scheduling algorithms, because each environment has different optimization goals; the algorithm has to fit the situation.
- Batch
- Interactive
- Real-time
Scheduling in batch processing
First come, first served
There’s not much to say about this one, is there?
Shortest job first
Note that this is the shortest job, not the shortest process
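As a quick sanity check, here is a minimal sketch (not taken from the book; the run times 8, 4, 4, 4 are just example values): when all jobs arrive at the same time, running the shortest ones first minimizes the average turnaround time.

```python
# Minimal sketch: shortest job first for a batch of jobs that all
# arrive at time 0. Run times are made-up example values (in minutes).
jobs = {"A": 8, "B": 4, "C": 4, "D": 4}

def average_turnaround(order):
    """Average time from submission (t=0) to completion for each job."""
    clock, total = 0, 0
    for name in order:
        clock += jobs[name]   # this job finishes now
        total += clock        # its turnaround time is the current clock
    return total / len(order)

print(average_turnaround(["A", "B", "C", "D"]))        # FCFS order -> 14.0
print(average_turnaround(sorted(jobs, key=jobs.get)))  # SJF order  -> 11.0
```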
Shortest remaining time next
Scheduling in interactive systems
Round-robin scheduling
Each process is assigned an interval of time, called a time slice (quantum), after which it is preempted. The length of the time slice matters: too short and too much time is wasted on context switches; too long and the response time to short interactive requests suffers. Somewhere around 20–50 ms is generally considered reasonable.
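Here is a minimal round-robin sketch (the process names, run times and the 30 ms quantum are all invented for illustration; context-switch cost is ignored):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin: each process runs for at most `quantum`
    time units, then goes to the back of the queue if unfinished.
    Returns the completion time of each process."""
    clock = 0
    finished = {}
    queue = deque(burst_times.items())   # (name, remaining_time)
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run                     # context-switch cost ignored here
        if remaining > run:
            queue.append((name, remaining - run))
        else:
            finished[name] = clock
    return finished

# Example: three processes, 30 ms quantum (a value in the 20-50 ms range)
print(round_robin({"P1": 70, "P2": 30, "P3": 50}, quantum=30))
# {'P2': 60, 'P3': 140, 'P1': 150}
```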
Priority scheduling
Processes are divided into several priority classes, and the scheduler always runs processes from the highest non-empty class; only when that class is empty does it run processes from the next lower class.
Clearly, if priorities are not adjusted over time, processes in the lowest-priority class can starve. Two possible adjustments:
- The priority of a process can be lowered each time it runs for a full time slice
- An I/O-bound process can be given priority 1/f, where f is the fraction of its last quantum that it actually used. The idea: a process that ran only briefly in its last quantum gets a better chance next time, while a process that used up its whole quantum gets a lower priority and fewer chances (see the sketch after this list)
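A tiny sketch of that 1/f heuristic (the process names, the 50 ms quantum and the usage numbers are all invented):

```python
# Sketch of the 1/f heuristic: f is the fraction of its last quantum a
# process actually used; a process that barely used its quantum
# (I/O-bound) ends up with a high priority.
QUANTUM_MS = 50

last_cpu_used_ms = {"editor": 1, "compiler": 50, "browser": 25}

def priority(name):
    f = last_cpu_used_ms[name] / QUANTUM_MS
    return 1 / f   # smaller f -> larger priority

for name in sorted(last_cpu_used_ms, key=priority, reverse=True):
    print(name, priority(name))
# editor 50.0, browser 2.0, compiler 1.0
```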
Multi-level queue
For CPU-bound processes, it is more efficient to give them a long time slice occasionally than a short one frequently: this reduces the number of context switches and lets them concentrate on computation. The downside is that a long time slice hurts response time: while one process computes for a long stretch, every other process has to wait.
The solution is priority classes, an improvement on the priority scheduling above. In plain priority scheduling, processes of different priorities all ran with the same time slice; in the multi-level queue, the lower a process's priority class, the more time slices it gets per turn, i.e. the longer it runs each time.
For example, a process in the highest priority class (class 1) runs for one slice and then drops to class 2; on its next turn it may run for 2 slices, on the turn after that 4 slices, and so on (as sketched below).
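A rough sketch of that doubling rule (the number of classes, the base quantum of 1 and the run times are all made-up for illustration):

```python
from collections import deque

def multilevel_feedback(burst_times, base_quantum=1):
    """Sketch of the rule above: a process starts in the top class; each
    time it uses up its quantum it drops one class, and each class grants
    twice as many quanta as the one above it (1, 2, 4, ...)."""
    queues = [deque() for _ in range(4)]      # 4 classes, arbitrary depth
    for name, burst in burst_times.items():
        queues[0].append((name, burst))
    clock = 0
    finished = {}
    for level, queue in enumerate(queues):
        quantum = base_quantum * (2 ** level)
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            clock += run
            if remaining > run and level + 1 < len(queues):
                queues[level + 1].append((name, remaining - run))
            elif remaining > run:
                queue.append((name, remaining - run))  # stay in bottom class
            else:
                finished[name] = clock
    return finished

print(multilevel_feedback({"long": 10, "short": 1}))
# {'short': 2, 'long': 11}
```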
Shortest process first
The question is: how do you know how long each process will run?
It can be estimated from past behavior: take a weighted average of the most recent measured value and the previous estimate to get the next estimate. This technique is known as aging.
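In code, the aging estimate is just one line (the weight a = 1/2 is a common choice; the measured CPU bursts below are invented):

```python
def aging_estimate(previous_estimate, measured, a=0.5):
    """Next estimate = a * latest measurement + (1 - a) * previous estimate.
    With a = 0.5, older measurements decay by half on every round."""
    return a * measured + (1 - a) * previous_estimate

# Invented CPU-burst measurements for one process (ms)
estimate = 10.0
for measured in [8, 6, 4, 4]:
    estimate = aging_estimate(estimate, measured)
    print(round(estimate, 2))
# 9.0, 7.5, 5.75, 4.88
```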
Guaranteed scheduling
Lottery scheduling
I just found that the blue “Introduction to operating System” book also covers scheduling algorithms in great detail, step by step. I'll read it when I get a chance and supplement this article tomorrow.