The purpose of CPU scheduling is to improve CPU utilization and keep the CPU from sitting idle. The CPU is a valuable resource. If the running process suddenly requests I/O and blocks, the CPU would otherwise sit idle while the processes in the ready queue wait, which is wasteful; scheduling hands the CPU to another ready process instead.
First, CPU scheduling algorithms
1. First come, first served (FCFS)
Whoever requests the CPU first is served until the service finishes. This is probably the simplest scheduling algorithm. However, it is not very efficient: processes that need only a very short CPU burst can get stuck behind a long-running one (the convoy effect), so the average waiting time tends to be long.
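For a concrete sense of that long average wait, here is a toy FCFS calculation (the burst times are a made-up example, with one long job arriving first):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, given CPU bursts in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits for everything queued before it
        elapsed += burst
    return waits

# One long job ahead of two short ones drags the average up (the convoy effect).
waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] 17.0
```

Reordering the same jobs shortest-first gives waits of [0, 3, 6] and an average of 3.0, which is exactly the observation that motivates SJF below.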
\
2. Shortest job first (SJF)
This is the optimal scheduling algorithm in the sense of minimum average waiting time. The question is: how do we know the length of the next CPU burst, i.e. how much CPU time the process will consume?
It cannot be known exactly, but there are algorithms that predict this length from the process's previous bursts.
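One common predictor is the exponential average τₙ₊₁ = α·tₙ + (1 − α)·τₙ, where tₙ is the latest measured burst and τₙ the previous prediction. A small sketch, in which the burst history, α, and the initial guess are all invented for illustration:

```python
def predict_next_burst(history, alpha=0.5, initial_guess=10.0):
    """Exponential average: tau_next = alpha * t + (1 - alpha) * tau."""
    tau = initial_guess
    for t in history:                       # fold each measured burst into the estimate
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Recent bursts dominate; the influence of the initial guess fades geometrically.
print(predict_next_burst([6, 4, 6, 4]))   # 5.0
```

With α = 0 the history is ignored entirely; with α = 1 only the most recent burst counts.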
\
3. Priority
The process is assigned a priority. The highest priority gets the CPU first.
Priority scheduling can be preemptive or non-preemptive.
Preemptive: when a process with a higher priority arrives in the ready queue and the running process has a lower priority, it immediately preempts the CPU.
Non-preemptive: when a process with a higher priority arrives, it is simply placed at the head of the ready queue.
The disadvantage is indefinite blocking, or starvation: some low-priority processes may wait indefinitely.
The solution to starvation is aging: the longer a process waits, the higher its priority becomes.
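A toy non-preemptive priority scheduler with aging might look like this (the process names, priorities, and aging rate are invented; lower number means higher priority):

```python
def priority_schedule_with_aging(procs):
    """Run processes to completion in priority order; after every pick, all
    waiting processes age (their priority number drops), so none starves."""
    ready = [[prio, name] for name, prio in procs]   # lower number = higher priority
    order = []
    while ready:
        ready.sort()                     # highest-priority process first
        _, name = ready.pop(0)
        order.append(name)
        for entry in ready:              # aging: everyone who waited moves up
            entry[0] = max(0, entry[0] - 1)
    return order

print(priority_schedule_with_aging([("A", 5), ("B", 1), ("C", 4)]))   # ['B', 'C', 'A']
```

Without the aging loop, a stream of high-priority arrivals could keep "A" waiting forever; with it, A's priority number keeps dropping until it wins.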
\
4. Round robin (RR)
Each process gets a fixed CPU time slice (quantum), and the processes in the ready queue execute in turn. A process leaves the system when it finishes; otherwise, when its slice expires, it goes to the back of the queue. If the time slice is long enough, RR degenerates into FCFS.
The rotation method is designed for time-sharing systems.
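The behaviour above can be sketched as a small simulation (process names, bursts, and the quantum are invented):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR on {name: burst_time}; returns the order processes finish."""
    queue = deque(bursts.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)                       # done: exits the system
        else:
            queue.append((name, remaining - quantum))   # slice expired: requeue
    return finished

print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))   # ['B', 'C', 'A']
# With a huge quantum, RR behaves exactly like FCFS:
print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=100)) # ['A', 'B', 'C']
```

The second call shows the degenerate case: every process finishes within one slice, so completion order equals arrival order.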
\
5. Multi-level queues
Set up multiple ready queues with different priorities, and distribute processes among them. Once assigned to a queue, a process cannot move to another. Queues are served strictly by priority: a lower-priority queue runs only when every higher-priority queue is empty, and as soon as a new process enters a higher-priority queue, it immediately grabs the CPU.
Again, starvation is possible.
\
6. Multi-level feedback queues
Same as multi-level queues, but processes may migrate between queues. A process that consumes many CPU slices sinks to a lower-priority queue, while one that has waited a long time can be promoted; this prevents starvation.
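A toy version of the sinking behaviour (the queue count and quanta are invented, and promotion of long-waiting processes is omitted for brevity):

```python
from collections import deque

class MLFQ:
    """Toy multi-level feedback queue (a sketch, not a real scheduler).
    Lower levels get longer time slices."""
    def __init__(self, quanta=(2, 4, 8)):
        self.quanta = quanta
        self.levels = [deque() for _ in quanta]

    def admit(self, name, burst):
        self.levels[0].append((name, burst))   # new work enters the top queue

    def run(self):
        """Drain all queues, highest priority first; returns completion order."""
        order = []
        for i, q in enumerate(self.quanta):
            level = self.levels[i]
            while level:
                name, remaining = level.popleft()
                if remaining <= q:
                    order.append(name)                                # finished
                elif i + 1 < len(self.levels):
                    self.levels[i + 1].append((name, remaining - q))  # demote
                else:
                    level.append((name, remaining - q))  # bottom level: plain RR
        return order

mlfq = MLFQ()
mlfq.admit("short", 1)    # finishes within the top queue's slice
mlfq.admit("long", 10)    # burns its slice repeatedly and sinks
print(mlfq.run())         # ['short', 'long']
```

The short interactive job stays in the top queue and finishes first; the CPU-bound job sinks level by level, which is exactly the feedback the name refers to.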
\
Second, multiprocessor scheduling
1. Asymmetric and symmetric multiprocessing
Asymmetric multiprocessing: one CPU is responsible for scheduling, I/O processing, and other system activities, while the remaining CPUs execute only user code. The advantage is that only one CPU accesses system data, reducing the need for data sharing.
Symmetric multiprocessing (SMP): each CPU schedules itself and may even have a private ready queue. The algorithms are more complicated. Modern operating systems all support SMP.
\
2. Processor affinity
Trying to keep a process running only on the same processor is called processor affinity.
The main reason is the cache: when a process migrates to another processor, its cached data must be rebuilt there, so migration is something the OS tries to avoid.
Affinity can be soft or hard. Soft affinity: the OS tries to keep a process on the same processor but makes no guarantee. Hard affinity: the binding is guaranteed; Linux, for example, provides hard affinity.
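On Linux, hard affinity is available through the `sched_setaffinity` system call, which Python exposes as `os.sched_setaffinity`. A minimal sketch (Linux-only, so it is guarded to do nothing elsewhere):

```python
import os

# Pin the calling process (pid 0 = self) to CPU 0 so it never migrates and
# its cached data stays warm on that core. Linux-only API, hence the guard.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})
    print(os.sched_getaffinity(0))   # the allowed-CPU set, now {0}
```

`sched_getaffinity` reads the current allowed-CPU set back, which is a quick way to confirm the pinning took effect.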
\
3. Load balancing
Keeping the workload evenly distributed across all CPUs, so that the advantage of having multiple CPUs is fully exploited, is the job of load balancing.
There are two ways to do this:
1) Push migration
A specific task periodically checks the load on each CPU; when it finds an imbalance, it pushes processes from overloaded CPUs to idle or less busy ones.
\
2) Pull migration
When a CPU is idle, it takes the initiative to pull a waiting process from another CPU's queue and run it.
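A toy one-pass push migration over per-CPU run queues (the queue contents are invented, and a real kernel balances on load weights rather than raw queue lengths):

```python
def push_migrate(runqueues):
    """One balancing pass: move tasks from the busiest queue to the idlest
    until their lengths differ by at most one. Mutates runqueues in place."""
    busiest = max(runqueues, key=lambda cpu: len(runqueues[cpu]))
    idlest = min(runqueues, key=lambda cpu: len(runqueues[cpu]))
    while len(runqueues[busiest]) - len(runqueues[idlest]) > 1:
        runqueues[idlest].append(runqueues[busiest].pop())
    return runqueues

queues = {"cpu0": ["p1", "p2", "p3", "p4"], "cpu1": []}
print(push_migrate(queues))   # {'cpu0': ['p1', 'p2'], 'cpu1': ['p4', 'p3']}
```

Note that the two migrated tasks lose their affinity to cpu0, which is precisely the tension with processor affinity discussed next.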
\
Load balancing works against processor affinity: migrating processes defeats affinity's cache benefits. Striking a balance between the two is a matter of trade-offs.
\
4. Symmetric Multithreading (SMT)
Running multiple threads at the same time on multiple physical CPUs is symmetric multiprocessing (SMP); providing multiple logical CPUs so that multiple threads run at the same time on one physical CPU is symmetric multithreading (SMT), marketed as hyper-threading on Intel processors.
The idea of SMT is to present several logical CPUs on the same physical CPU.
SMT is a hardware feature, not a software one.
\
5. Thread scheduling
The OS schedules kernel threads. For a user thread to run on the CPU, two steps are needed:
1) The user thread is mapped to a kernel thread
2) The kernel thread gets the CPU
\
There are one-to-one, many-to-one, many-to-many relationships between user threads and kernel threads. In the latter two models, the thread library schedules user threads to an LWP (lightweight process).
But this does not mean the user thread can execute yet; the kernel must still schedule the underlying kernel thread onto a CPU.
Competition among threads of the same process for an LWP is called process-contention scope (PCS);
competition among all kernel threads in the system for the CPU is called system-contention scope (SCS).
SCS is required in every model, since kernel threads must ultimately compete for the CPU; in the one-to-one model, only SCS exists.
\