Welcome to my blog

This post describes common concepts, algorithms, and features of CPU scheduling.

Concepts

CPU scheduling decisions occur when:

A. a process switches from running to waiting (e.g., it issues an I/O request);
B. a process switches from running to ready (e.g., an interrupt occurs);
C. a process switches from waiting to ready (e.g., its I/O completes);
D. a process terminates.

Scheduling that happens at B and C is preemptive; scheduling only at A and D is non-preemptive.

Scheduling criteria:

CPU utilization: the fraction of time the CPU is busy (CPU busy time / total elapsed time).
Throughput: the number of processes completed per unit of time.
Turnaround time: the total time from a process's submission to its completion.
Waiting time: the total time a process spends waiting in the ready queue.
Response time: the time from submitting a request until the first response is produced.
Weighted turnaround time: the ratio of a job's turnaround time to its actual run time, i.e., weighted turnaround time = turnaround time / actual run time.
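The criteria above can be computed directly from a job's timeline. A minimal sketch with hypothetical figures (a job that arrives at t=2, first gets the CPU at t=5, and finishes at t=14 after 9 units of work):

```python
# Hypothetical timeline for one job (all numbers are made up for illustration).
arrival, start, finish, burst = 2, 5, 14, 9

turnaround = finish - arrival             # submission -> completion
waiting = turnaround - burst              # time spent in the ready queue
response = start - arrival                # time until the first response
weighted_turnaround = turnaround / burst  # turnaround / actual run time

print(turnaround, waiting, response)  # 12 3 3
print(weighted_turnaround)            # 1.333...
```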

Scheduling algorithms

First come, first served (FCFS):

Ready processes are linked into the ready queue in the order they arrive. The FCFS scheduler always selects the process at the head of the ready queue and runs it until it blocks or terminates, and only then selects the next process. FCFS is non-preemptive: once a process occupies the processor, it keeps running until it finishes its work or can no longer proceed because it is waiting for some event, and only then is the processor released.
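A minimal Python sketch of FCFS (process names, arrival times, and bursts are hypothetical): jobs are served strictly in arrival order, each running to completion.

```python
def fcfs(jobs):
    """Non-preemptive FCFS: run jobs in arrival order, each to completion.
    jobs: list of (name, arrival, burst). Returns {name: (start, finish)}."""
    schedule = {}
    clock = 0
    for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        start = max(clock, arrival)  # CPU may sit idle until the job arrives
        clock = start + burst        # runs to completion, no preemption
        schedule[name] = (start, clock)
    return schedule

print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# {'P1': (0, 24), 'P2': (24, 27), 'P3': (27, 30)}
```

Note the "convoy effect": the short jobs P2 and P3 wait behind the long job P1, inflating average waiting time.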

Shortest Job First scheduling (SJF):

The shortest-job-first (SJF) scheduling algorithm selects, from the backup (job) queue, the job or jobs with the shortest estimated running time and loads them into memory to run. The shortest-process-first (SPF) variant selects the process in the ready queue with the shortest estimated running time, assigns the processor to it, and lets it execute immediately; the processor is not released until the process completes or is blocked by some event. There are two forms:

(1) Preemptive (SRTF, shortest remaining time first): when a newly arrived process's CPU burst is shorter than the remaining burst of the currently running process, the new process preempts the CPU.
(2) Non-preemptive (shortest next CPU burst first): the process with the shortest CPU burst in the ready queue is placed at the head of the queue and runs until it completes or blocks.

SJF is characterized by high throughput and the minimum average waiting and turnaround times, but it is very unfavorable to long jobs (which may starve) and does not consider job urgency.
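The preemptive form (SRTF) can be sketched with a min-heap keyed on remaining burst, simulating one time unit per step (process names and bursts below are hypothetical):

```python
import heapq

def srtf(jobs):
    """Preemptive SJF (shortest remaining time first), one time unit per step.
    jobs: list of (name, arrival, burst). Returns {name: finish_time}."""
    jobs = sorted(jobs, key=lambda j: j[1])
    ready, finish, clock, i = [], {}, 0, 0
    while i < len(jobs) or ready:
        # admit every job that has arrived by the current time
        while i < len(jobs) and jobs[i][1] <= clock:
            name, _, burst = jobs[i]
            heapq.heappush(ready, (burst, name))
            i += 1
        if not ready:
            clock = jobs[i][1]  # CPU idle: jump ahead to the next arrival
            continue
        remaining, name = heapq.heappop(ready)
        clock += 1              # run the job with the least remaining time
        if remaining > 1:
            heapq.heappush(ready, (remaining - 1, name))
        else:
            finish[name] = clock
    return finish

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
# {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}
```

P1 runs for one unit, is preempted by the shorter P2 at t=1, and only resumes at t=10 once every shorter job has finished, illustrating how long jobs are penalized.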

Priority scheduling

Each process is assigned a priority, and the processor is allocated to the ready process with the highest priority; processes of equal priority are scheduled FCFS. Priority scheduling can be preemptive or non-preemptive. Its main drawback is that low-priority processes may starve, which is commonly mitigated by aging (gradually raising the priority of processes that have waited a long time).
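A minimal sketch of the non-preemptive form, using a heap keyed on priority (process names, bursts, and the lower-number-means-higher-priority convention are all hypothetical):

```python
import heapq

def priority_schedule(jobs):
    """Non-preemptive priority scheduling (lower number = higher priority).
    jobs: list of (name, arrival, burst, priority). Returns the run order."""
    jobs = sorted(jobs, key=lambda j: j[1])
    ready, order, clock, i = [], [], 0, 0
    while i < len(jobs) or ready:
        # admit every job that has arrived by the current time
        while i < len(jobs) and jobs[i][1] <= clock:
            name, _, burst, prio = jobs[i]
            heapq.heappush(ready, (prio, name, burst))
            i += 1
        if not ready:
            clock = jobs[i][1]  # CPU idle: jump to the next arrival
            continue
        prio, name, burst = heapq.heappop(ready)
        clock += burst          # highest-priority job runs to completion
        order.append(name)
    return order

print(priority_schedule([("P1", 0, 10, 3), ("P2", 0, 1, 1), ("P3", 0, 2, 4)]))
# ['P2', 'P1', 'P3']
```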

Round-robin scheduling (RR)

Designed for time-sharing systems. The system arranges all ready processes in a queue in order of arrival, and the scheduler always selects the first process in the ready queue (first come, first served), but a process may run for only one time slice. When the slice expires, the process must release the processor to the next ready process even if it has not finished; the preempted process returns to the end of the ready queue to wait for its next turn.

The size of the time slice has a large effect on system performance. If the slice is large enough that every process completes within it, the round-robin algorithm degenerates into FCFS. If the slice is too small, the processor switches between processes too frequently, increasing context-switch overhead. The slice length is usually determined by the system's required response time, the number of processes in the ready queue, and the system's processing power:

time slice q = required response time RT / maximum number of processes N

(Experience suggests the slice should be sized so that about 80% of processes complete their CPU bursts within one slice.)
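The rotation above can be sketched with a simple FIFO queue (process names and bursts are hypothetical; all jobs are assumed present at t=0 to keep the sketch short):

```python
from collections import deque

def round_robin(jobs, q):
    """Round-robin with time slice q. jobs: list of (name, burst),
    all ready at t=0. Returns {name: finish_time}."""
    queue = deque(jobs)
    finish, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)  # run for at most one time slice
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # back to the end of the queue
        else:
            finish[name] = clock
    return finish

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], q=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

With q=2, each job gets the CPU in turn; the short job P3 finishes on its first slice, while P1 cycles through the queue three times.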

Multi-level Feedback Queue Scheduling (MLFQ)

Set up multiple ready queues and assign each a different priority: the first queue has the highest priority, the second the next highest, and so on down to the last. The scheduler dispatches processes in queue i only when queues 1 through i-1 are all empty. Higher-priority queues use smaller time slices; each successive queue's slice is larger.

When a process becomes ready, the operating system first places it at the end of the first queue, which is scheduled FCFS. If the process's turn comes and it has not finished by the end of one time slice, the scheduler moves it to the end of the second queue, where it again waits on a first come, first served basis. In this way a long process sinks queue by queue until it reaches the last queue, which is scheduled with FCFS or RR.

If the processor is serving a process from queue i (i > 1) and a new process enters queue k (k < i), the new process preempts the processor: the scheduler puts the running process back at the end of queue i and reassigns the processor to the new process in queue k.

Worked example

Suppose the system has three feedback queues Q1, Q2, and Q3, with time slices of 2, 4, and 8 respectively. Three jobs J1, J2, and J3 arrive at times 0, 1, and 3 and require 3, 2, and 1 units of CPU time respectively.

1. Time 0: J1 arrives, enters Q1, and starts running. Before its slice expires, J2 arrives.
2. Time 1: J2 arrives. Since a queue is served first come, first served, J2 waits. At time 2, J1 has used up its two-unit slice in Q1, so it is demoted to Q2 to await scheduling, and the processor is assigned to J2.
3. Time 2: J1 waits in Q2 for scheduling; J2 gets the CPU and starts running.
4. Time 3: J3 arrives. Within Q1 service is first come, first served, so J3 waits in Q1 while J1 waits in Q2.
5. Time 4: J2 completes. Both J3 and J1 are waiting for scheduling, but J3's queue (Q1) has higher priority than J1's (Q2), so J3 is scheduled and J1 continues to wait in Q2.
6. Time 5: J3 completes after one time unit. Q1 is now empty, so the job in Q2 is scheduled and J1 gets the processor.
7. Time 6: J1 finishes its last time unit of work, and the whole scheduling process ends.
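The walkthrough above can be checked with a small tick-by-tick MLFQ simulation, a minimal sketch of the rules described earlier (new arrivals go to the top queue and preempt any lower-queue job; a job that exhausts its slice is demoted):

```python
from collections import deque

def mlfq(jobs, slices):
    """Tick-by-tick multilevel feedback queue. jobs: list of (name, arrival,
    burst); slices[i] is the time slice of queue i. Returns {name: finish}."""
    arrivals = sorted(jobs, key=lambda j: j[1])
    queues = [deque() for _ in slices]
    finish, clock, i = {}, 0, 0
    current = None  # (name, queue level, remaining burst, units used in slice)
    while i < len(arrivals) or any(queues) or current:
        # admit new arrivals to the top queue
        while i < len(arrivals) and arrivals[i][1] <= clock:
            name, _, burst = arrivals[i]
            queues[0].append((name, burst))
            i += 1
            if current and current[1] > 0:  # arrival outranks a lower-queue job
                name_c, lvl, rem, _ = current
                queues[lvl].append((name_c, rem))
                current = None
        if current is None:
            lvl = next((l for l, q in enumerate(queues) if q), None)
            if lvl is None:
                clock = arrivals[i][1]  # CPU idle until the next arrival
                continue
            name, rem = queues[lvl].popleft()
            current = (name, lvl, rem, 0)
        name, lvl, rem, used = current
        clock += 1                       # run the current job for one tick
        rem, used = rem - 1, used + 1
        if rem == 0:
            finish[name] = clock
            current = None
        elif used == slices[lvl]:        # slice exhausted: demote one level
            queues[min(lvl + 1, len(queues) - 1)].append((name, rem))
            current = None
        else:
            current = (name, lvl, rem, used)
    return finish

print(mlfq([("J1", 0, 3), ("J2", 1, 2), ("J3", 3, 1)], slices=[2, 4, 8]))
# {'J2': 4, 'J3': 5, 'J1': 6}
```

The simulated finish times (J2 at 4, J3 at 5, J1 at 6) match the step-by-step trace above.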