Abstract:

Process scheduling is a core function of the operating system, but the scheduler is only one part of the scheduling process: scheduling is complex and requires several subsystems to cooperate. This article focuses only on the scheduler itself, whose main job is to pick the most suitable process to run from among all runnable processes.

1 Introduction

Process scheduling is a core function of the operating system. The scheduler is only one part of the scheduling process; scheduling is complex and requires several subsystems to cooperate. This article focuses only on the scheduler, whose main job is to pick the most suitable process to run from among all runnable processes. As a general-purpose operating system, Linux divides processes into three categories:

Interactive process

Such processes involve a lot of human-computer interaction, so they sleep frequently while waiting for user input. A typical example is the editor vi. This type of process demands fast system response; otherwise the user will feel the system is sluggish.

Batch process

This type of process needs no human-computer interaction and runs in the background. It often consumes a large amount of system resources but can tolerate scheduling delays; compilers are a typical example.

Real-time process

Real-time processes have the strictest scheduling-latency requirements; they tend to perform very important operations that must be responded to and executed immediately. Programs such as video players or aircraft flight-control systems clearly cannot tolerate long scheduling delays, which at best degrade playback quality and at worst could crash an aircraft.

Linux uses different scheduling policies for different types of processes. Real-time processes are scheduled with FIFO or round-robin policies. Normal processes must be further distinguished into interactive and batch: the traditional Linux scheduler boosts interactive applications so they are scheduled sooner, whereas the core idea of newer schedulers such as CFS and RSDL is "complete fairness". This design not only greatly simplifies the scheduler's code, but also supports the various scheduling requirements more thoroughly.

Before exploring CFS and RSDL, let us first review the schedulers used in Linux 2.4 and Linux 2.6.0.


2 A brief history of the kernel scheduler

2.1 The Linux 2.4 scheduler

The scheduler used in Linux 2.4.18 is a priority-based design, not very different from the scheduler Linus released in 1992. Its pick-next algorithm is simple: compare the priorities of all processes in the runqueue in turn and select the one with the highest priority as the next process to run. (The runqueue is the queue in the Linux kernel that holds all ready processes; "pick next" refers to choosing, from all candidates, the next process to be scheduled.) A simplified sketch of this scan is shown below.
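As an illustration only (the actual 2.4 kernel weighs candidates with a goodness() function that mixes the remaining counter, the nice value and other factors), the pick-next step amounts to a linear scan of the runqueue:

struct task { int weight; struct task *next; };	/* toy stand-in for task_struct */

/* Simplified sketch of the Linux 2.4 pick-next idea: walk every runnable
 * task and keep the one with the best weight; "weight" here stands in for
 * the value the real kernel computes with goodness(). */
struct task *pick_next(struct task *runqueue)
{
	struct task *p, *best = NULL;

	for (p = runqueue; p; p = p->next)		/* O(n) in runnable tasks */
		if (!best || p->weight > best->weight)
			best = p;
	return best;
}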

Each process is created with a time slice. The clock interrupt decrements the time slice of the currently running process, and when a process has used up its slice it must wait to be assigned a new one before it can run again. The Linux 2.4 scheduler reassigns time slices to all processes only after every runnable process has exhausted its slice; this period is called an epoch. The design ensures that every process gets a chance to execute.

Different processes have different scheduling requirements. The Linux 2.4 scheduler relies mainly on adjusting process priorities to meet them; in fact, all later schedulers also rely chiefly on modifying process priorities to satisfy different scheduling requirements.

Real-time process

The priority of a real-time process is set statically and is always higher than that of any normal process. Therefore, normal processes are scheduled only when there is no real-time process in the runqueue.

Real-time processes use two scheduling policies: SCHED_FIFO and SCHED_RR. FIFO is first-in, first-out: among processes of the same priority, the one that entered the runqueue first is always scheduled first. SCHED_RR uses a fairer round-robin policy, letting real-time processes of the same priority take turns.

Normal process

Among normal processes, the scheduler favors interactive processes because they need to respond quickly to the user. The priority of a normal process is determined mainly by the counter field in the process descriptor (plus the static priority set with nice). When a process forks, the child's counter is set to half of the parent's counter, which ensures that no process can obtain more execution time by repeatedly calling fork(); see the sketch below.
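A sketch of that fork-time split, using simplified field names (the idea, not the literal 2.4 source):

struct task { int counter; int need_resched; };	/* toy stand-in for task_struct */

/* The parent's remaining counter is divided between parent and child, so a
 * process cannot manufacture extra CPU time by forking repeatedly. */
static void split_counter(struct task *parent, struct task *child)
{
	child->counter = (parent->counter + 1) >> 1;	/* child gets half */
	parent->counter >>= 1;				/* parent keeps the rest */
	if (!parent->counter)				/* slice used up: reschedule soon */
		parent->need_resched = 1;
}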

How does the Linux 2.4 scheduler favor interactive processes? As mentioned above, when all runnable processes have used up their time slices, the scheduler recalculates the counter of every process, including sleeping ones. A sleeping process has not used up its counter, and during the recalculation the unused portion is added to the new value, raising its priority. Interactive processes sleep frequently while waiting for user input, so when they wake up and re-enter the runqueue they obtain the CPU ahead of other processes; from the user's point of view they feel more responsive. The recalculation works roughly as sketched below.
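The per-epoch recalculation in the 2.4 schedule() function follows roughly this rule (NICE_TO_TICKS maps the static nice value to a base number of ticks):

/* Every task, runnable or sleeping, gets a fresh base slice, and half of
 * any unused counter carries over. A task that slept (an interactive one)
 * therefore ends up with a larger counter, i.e. a higher priority, than a
 * task that burned its whole slice. */
for_each_task(p)
	p->counter = (p->counter >> 1) + NICE_TO_TICKS(p->nice);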

Major disadvantages of the scheduler:

Poor scalability: the scheduler must traverse the runqueue to select the best process, so the algorithm's running time is proportional to the number of processes. In addition, the time spent recalculating counters grows linearly with the number of processes in the system; when there are many processes, updating the counters becomes very expensive and overall system performance drops.

Poor scheduling performance on high-load systems: the 2.4 scheduler preallocates each process a large time slice, so on heavily loaded servers it runs inefficiently, because the average waiting time of each process is proportional to the slice size.

Imperfect optimization of interactive processes: Linux 2.4 identifies interactive processes on the assumption that they sleep more frequently than batch processes. This is often not the case: some batch processes never interact with users yet still perform I/O frequently. A database engine, for example, constantly performs disk I/O while processing queries; although it needs no fast user response, it still gets boosted. When the system is loaded with such processes, the response time of genuinely interactive processes suffers.

Insufficient support for real-time processes: the Linux 2.4 kernel is non-preemptive; a process running in kernel mode cannot be preempted, which is unacceptable for real-time applications.

To address these issues, Ingo Molnar developed the new O(1) scheduler. Before CFS and RSDL, this scheduler was not only adopted by Linux 2.6 but also backported to Linux 2.4 and used in many commercial distributions.

2.2 The O(1) scheduler of Linux 2.6

As its name suggests, the O(1) scheduler mainly solves the scalability problem of earlier versions: the time taken by the scheduling algorithm is constant, regardless of how many processes exist in the system. In addition, the Linux 2.6 kernel supports kernel preemption, so real-time processes are better supported. The O(1) scheduler also distinguishes interactive from batch processes better than its predecessor.

The Linux 2.6 kernel likewise supports three scheduling policies: SCHED_FIFO and SCHED_RR for real-time processes and SCHED_NORMAL for normal processes. The O(1) scheduler changes the Linux 2.4 scheduler in two respects: the way process priorities are calculated, and the pick-next algorithm.

2.2.1 Process Priority Calculation

Priority calculation for normal processes

The priority of a normal process is calculated dynamically, with the static priority entering the formula. In general, the higher the static priority, the longer the time slice the process is allocated; the user can change a process's static priority through the nice() system call.

The dynamic priority is calculated by Formula 1:

Formula 1:
dynamic priority = max(100, min(static priority - bonus + 5, 139))

The bonus depends on the process's average sleep time. It follows that in Linux 2.6 the priority of a normal process is tied to its average sleep time: the longer the average sleep time, the larger the bonus and hence the higher the priority.

The average sleep time is also used to decide whether a process is interactive. A process is considered interactive if it satisfies the following formula:

Formula 2:
dynamic priority ≤ 3 × static priority / 4 + 28

The average sleep time is the amount of time a process spends sleeping while waiting. It increases while the process sleeps and decreases while it is RUNNING. The value is updated at several points scattered across the kernel: the clock-interrupt handler scheduler_tick(), process creation, wake-up from the TASK_INTERRUPTIBLE state, and load balancing. The two formulas can be sketched together in code as follows.
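As a compact illustration of Formula 1 and Formula 2 (this mirrors the formulas above, not the literal kernel source; normal-process priorities occupy the range 100-139, and the bonus derived from the average sleep time lies roughly between 0 and 10):

/* Formula 1: clamp (static priority - bonus + 5) into the normal range. */
static int dynamic_prio(int static_prio, int bonus)
{
	int prio = static_prio - bonus + 5;
	if (prio < 100)
		prio = 100;
	if (prio > 139)
		prio = 139;
	return prio;
}

/* Formula 2: a process counts as interactive once its dynamic priority has
 * been boosted below the threshold derived from its static priority. */
static int is_interactive(int dyn_prio, int static_prio)
{
	return dyn_prio <= 3 * static_prio / 4 + 28;
}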

Real-time process priority calculation

The priority of a real-time process is set via sys_sched_setscheduler(). It is never adjusted dynamically and is always higher than that of normal processes. It is stored in the rt_priority field of the process descriptor.

2.2.2 Pick Next Algorithm

Process selection is still based on priority: the process with the highest priority is chosen by the scheduler. In 2.4 the time-slice counter also doubled as the process priority; in 2.6 the time slice is kept in the time_slice field of the task descriptor, while the priority is kept in prio (normal processes) or rt_priority (real-time processes).

The scheduler maintains two arrays of process queues for each CPU: an active array and an expired array. Each array element is a pointer to the process queue of one priority level; since the system has 140 different priorities, both arrays have 140 entries.

When the highest-priority process must be selected, the 2.6 scheduler does not traverse the whole runqueue; it takes the first process of the highest non-empty priority queue directly from the active array. Suppose the highest priority among all current processes is 50 (in other words, no process in the system has a priority numerically lower than 50). The scheduler then reads active[49] directly, obtaining the pointer to the priority-50 process queue; the process at the head of that queue is the one selected. The complexity of this algorithm is O(1), which solves the scalability problem of the 2.4 scheduler.

To make this work, the active array maintains a bitmap in which a bit is set whenever a process is inserted into the queue of the corresponding priority. The sched_find_first_bit() function scans the bitmap and returns the index of the highest priority that is currently set; in the example above it returns 49. On IA-32 processors this can be implemented with the bsfl instruction. A minimal sketch of the lookup follows.
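A minimal sketch of that lookup (simplified: the kernel's bitmap is 140 bits wide and searched with sched_find_first_bit(); here a single machine word and a GCC builtin stand in for it, assuming at least one task is runnable):

struct task;					/* opaque in this sketch */

struct prio_array {
	unsigned long bitmap;			/* bit p set => queue[p] is non-empty */
	struct task *queue[140];		/* head of the FIFO list per priority */
};

static struct task *o1_pick_next(struct prio_array *active)
{
	int idx = __builtin_ctzl(active->bitmap);	/* index of lowest set bit, i.e. highest priority */
	return active->queue[idx];			/* head of that priority queue */
}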

To improve the response time of interactive processes, the O(1) scheduler not only raises the dynamic priority of such processes but also uses the following technique:

On every clock tick, the running process's time_slice is decremented. When time_slice reaches 0, the scheduler checks the type of the process: if it is interactive or real-time, its time slice is reset and it is reinserted into the active array; otherwise it is moved from the active array to the expired array. In this way real-time and interactive processes always get the CPU first. They cannot stay in the active array forever, though, or the processes in the expired array would starve: once a process has consumed more than a fixed amount of CPU time, it is moved to the expired array even if it is a real-time or interactive process.

When every process in the active array has been moved to the expired array, the scheduler swaps the two arrays. Because each process's time slice was reset when it was moved to the expired array, the new active array starts in a fresh state and the new expired array is empty, and a new round of scheduling begins (see the sketch below).
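The end-of-round swap itself is just a pointer exchange on the per-CPU runqueue, along these lines (a sketch using the prio_array from the previous snippet):

struct runqueue {
	struct prio_array *active;
	struct prio_array *expired;
};

/* When the active array empties, the expired array - whose tasks already
 * received fresh time slices as they were moved - becomes the new active
 * array, and a new scheduling round begins. */
static void swap_arrays(struct runqueue *rq)
{
	struct prio_array *tmp = rq->active;

	rq->active  = rq->expired;
	rq->expired = tmp;
}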

2.2.3 Summary of the O(1) scheduler

The Linux 2.6 scheduler solves the scalability problem of its predecessor: the schedule() function runs in O(1) time. This relies on two improvements:

1. The pick-next algorithm reads directly from the active array and no longer needs to traverse the runqueue.

2. The periodic recalculation of every process's counter has been removed; dynamic-priority updates are spread across process switches, clock-tick interrupts and other kernel functions.

The O(1) scheduler distinguishes interactive and batch processes much better than before, but it still fails in many cases. Several well-known programs can defeat its heuristics and make interactive processes respond slowly:

fiftyp.c, thud.c, chew.c, ring-test.c, massive_intr.c

These deficiencies gave rise to Con Kolivas's staircase scheduling algorithm SD and its later improvement RSDL. After RSDL, Ingo Molnar developed CFS, which was eventually merged into the 2.6.23 kernel. The following sections introduce these new-generation schedulers.


3 New-generation schedulers

Before Linux 2.6.0 was released, many people feared that problems with the scheduler would delay the new version. The scheduler still responded poorly to interactive applications, and its NUMA support was weak. To solve these problems, a large amount of complex, hard-to-read and hard-to-maintain code was added to the Linux 2.6.0 scheduler module. Although many performance problems were solved, another serious problem kept troubling kernel developers: the complexity of the code.

In 2004, Con Kolivas posted the first patch of an improved scheduler design: the staircase scheduler. It offered a new way of thinking about scheduler design, and the later RSDL and CFS are built on many of SD's basic ideas. This chapter briefly examines these three scheduler algorithms.

3.1 The staircase scheduling algorithm (SD)

The staircase algorithm (SD) differs greatly from the O(1) algorithm: it abandons dynamic priorities and adopts a completely fair approach. Most of the complexity of the earlier algorithms comes from computing dynamic priorities, where the scheduler corrects process priorities and identifies interactive processes based on average sleep time and a set of hard-to-understand empirical formulas. Such code is difficult to read and maintain.

The staircase algorithm is conceptually simple, yet experiments show that it responds to interactive processes better than its predecessor while greatly simplifying the code.

Like the O(1) algorithm, the staircase algorithm maintains a process list for each priority and organizes these lists in an active array. When selecting the next process to schedule, SD also reads directly from the active array.

Unlike the O(1) algorithm, however, when a process uses up its time slice it is not moved to an expired array; instead it is appended to the next lower priority list in the active array, dropping one level. Note that only the list the task sits in changes; the task's own priority is unchanged. When its time slice runs out again, the task drops into the next lower queue again. Like walking down a staircase, the task descends one step each time its slice expires.

When a task has descended to the lowest step and its time slice runs out again, it returns to the queue one level below its initial priority. For example, when a process of priority 1 reaches the last step, 140, and uses up its slice once more, it returns to the priority-2 queue, i.e. the second step, but the time_slice it is given becomes twice as large: if its original time_slice was 10 ms, it is now 20 ms. The basic principle is that when a task reaches the bottom of the staircase and exhausts its slice again, it returns to the step just below the one from which it started the previous descent, and it is given the same time slice it was originally assigned. In summary:

Let the task's own priority be P. When it begins a descent at step N and reaches the bottom, it returns to step N+1 and is granted N+1 times its time slice. A sketch of this rule follows.
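A sketch of that rule (illustrative only, not Con Kolivas's code; step is the rung the task currently occupies, start the rung at which its current descent began, and the slice multiplier follows the N+1 rule just summarized):

#define MAX_PRIO 140			/* the bottom step of the staircase */

struct sd_task {
	int prio;			/* the task's own priority P */
	int start;			/* rung where the current descent began */
	int step;			/* current rung */
	int base_slice;			/* originally assigned time slice */
	int time_slice;			/* slice used during the current descent */
};

/* Called when the task's time slice expires. */
static void staircase_expire(struct sd_task *t)
{
	if (t->step < MAX_PRIO) {
		t->step++;				/* walk one rung down */
	} else {
		t->start++;				/* restart one rung below last time */
		t->step = t->start;
		t->time_slice = t->start * t->base_slice;	/* "N+1 times" rule from the text */
	}
}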

The above describes the scheduling of normal processes; real-time processes still use the original policies, namely FIFO and round robin.

The staircase algorithm avoids process starvation: high-priority processes eventually descend to compete with low-priority ones, so low-priority processes eventually get a chance to run.

For an interactive application, when it goes to sleep the other processes of the same priority keep walking down the staircase into lower-priority queues. When the interactive process wakes up, it is still on a higher step, so the scheduler selects it sooner and response time improves.

Advantages of the staircase algorithm

From an implementation point of view, SD keeps the overall framework of O(1) but removes the complex dynamic-priority code of the O(1) scheduler and eliminates the expired array, simplifying the code. Its greatest significance lies in proving that the idea of complete fairness is feasible.

3.2 Rotating Staircase Deadline Scheduler (RSDL)

RSDL was also developed by Con Kolivas and improves on the SD algorithm. Its central idea remains "complete fairness", with no complex dynamic-priority adjustment.

RSDL reintroduces the expired array. It assigns each priority level a "group time quota", which we denote Tg; every process within the same priority level has the same "priority time quota", denoted Tp.

When a process uses up its Tp, it drops to the next lower priority group. This is the same as in SD; in RSDL this step is called a minor rotation. Note that Tp is not equal to the process's time slice but smaller than it. Figure 1 shows the minor rotation: a process steps down from priority 1 to priority 140 and returns to priority 2, then steps down again from priority 2 and returns to priority 3, and so on.

Figure 1. The minor rotation in RSDL

In the SD algorithm, a low-priority process at the bottom of the staircase must wait until all higher-priority processes have finished before it can get the CPU, so its waiting time is unbounded. In RSDL, when a priority group has used up its group quota Tg, all processes belonging to that group are forced down to the next priority group, regardless of whether they have used up their own Tp. In this way low-priority tasks are guaranteed to be scheduled within a predictable time, which improves scheduling fairness. This is what the "Deadline" in RSDL refers to. A sketch of the per-tick bookkeeping follows.
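The per-tick bookkeeping this implies can be sketched as follows (illustrative only; tp and tg are the per-task and per-group quotas introduced above, and demote_task()/demote_group() stand for moving one task, or an entire priority level, one step down in the minor rotation):

struct rsdl_task  { int tp; };		/* remaining priority time quota Tp */
struct rsdl_group { int tg; };		/* remaining group time quota Tg */

void demote_task(struct rsdl_task *t);		/* move this task down one level */
void demote_group(struct rsdl_group *g);	/* move the whole level down as one */

/* Charge both quotas on every tick: exhausting Tp demotes just this task,
 * exhausting Tg forcibly demotes the whole group, which is what gives
 * lower priorities a bounded waiting time (the "deadline"). */
static void rsdl_tick(struct rsdl_task *t, struct rsdl_group *g)
{
	if (--t->tp == 0)
		demote_task(t);
	if (--g->tg == 0)
		demote_group(g);
}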

When a process uses up its time_slice (T2 in Figure 2 below), it is placed in the expired array in the queue of its original priority (priority 1).

Figure 2. A process that exhausts its time slice moves to the expired array at its original priority

The major rotation is triggered when the active array becomes empty, or when all processes have dropped to the lowest priority. The major rotation swaps the active and expired arrays, all processes return to their initial state, and the minor rotation starts over.

RSDL support for interactive processes

As in SD, while an interactive process sleeps, all of its competitors are pushed down to lower-priority queues by the minor rotation. When it returns to the RUNNING state it therefore holds a relatively high priority and can be responded to quickly.

3.3 CFS completely fair scheduler

CFS is the scheduler that was finally adopted by the kernel. It inherits the complete-fairness idea from RSDL/SD: it does not track process sleep times and makes no attempt to identify interactive processes; it treats all processes equally, which is what fairness means here. The CFS algorithm and implementation are quite simple, and extensive testing shows that its performance is excellent.

According to its author Ingo Molnar: "80% of CFS's design can be summed up in one sentence: CFS models an ideal, precise multitasking CPU on real hardware." On an "ideal multitasking CPU" every process runs simultaneously, each receiving an equal share of CPU time; with two processes in the system, each gets 50% of the computing power. On real hardware, however, while one process occupies the CPU the others must wait, and that creates unfairness.

Suppose there are n processes in the runqueue and the current process has just run for 10 ms. On an "ideal multitasking CPU" those 10 ms would be divided among the n processes (ignoring nice values for the moment), so the current process was entitled to only 10/n ms, yet it actually ran for 10 ms. CFS therefore penalizes the current process so that the other processes can take its place at the next scheduling opportunity, eventually achieving fair scheduling for all processes. The following looks at several important parts of the CFS implementation to better understand how CFS works.

How does CFS implement pick next?

CFS discards the active/expired arrays and uses a red-black tree to select the next process. All processes in the runnable state are inserted into the tree, and at every scheduling point the CFS scheduler picks the leftmost leaf node as the next process to receive the CPU. A simplified sketch follows.
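Picking the leftmost node is straightforward; a simplified sketch of what __pick_next_entity() does in 2.6.23 (the real kernel also caches the leftmost node so the common case avoids walking the tree):

/* The runnable entities live in the red-black tree cfs_rq->tasks_timeline;
 * the next entity to run is simply its leftmost node. */
static struct sched_entity *pick_leftmost(struct cfs_rq *cfs_rq)
{
	struct rb_node *left = rb_first(&cfs_rq->tasks_timeline);

	if (!left)
		return NULL;			/* no runnable entity */
	return rb_entry(left, struct sched_entity, run_node);
}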

The tick interrupt

In CFS, the tick interrupt first updates the scheduling information and then adjusts the current process's position in the red-black tree. If, after the adjustment, the current process is no longer the leftmost node, the need_resched flag is set and schedule() is called when the interrupt returns to switch processes; otherwise the current process keeps the CPU. Here we can see that CFS abandons the traditional notion of time slices: the tick interrupt only needs to update the red-black tree, whereas all earlier schedulers decremented a time slice in the tick interrupt and triggered priority adjustment and rescheduling when the slice or quota was exhausted.

Red-black tree key value calculation

The key to understanding CFS is understanding how the red-black tree key value is calculated. Three factors enter the key: the CPU time the process has already consumed, the process's nice value, and the current CPU load.

The CPU time already consumed has the greatest influence on the key value; for most purposes we can simply treat the key as equal to the CPU time the process has used. The larger this value, the larger the key, and the further the process moves toward the right of the red-black tree. In addition, CFS specifies that a process with a nice value of 1 receives about 10% less CPU time than a process with a nice value of 0; this factor also enters the key calculation, so the larger the nice value, the larger the key.

CFS maintains two important variables for each process: fair_clock and wait_runtime. In this article, we will refer to the variables maintained for each process as process-level variables, those maintained for each CPU as cpu-level variables, and those maintained for each runqueue as runqueue-level variables.

The key value with which a process is inserted into the red-black tree is:

fair_clock - wait_runtime

fair_clock is, literally, the CPU time the process should fairly have received: the elapsed execution time divided by the total number of processes in the current runqueue. wait_runtime is the time the process has spent waiting. Their difference measures how fairly the process has been treated so far: the larger the key, the more CPU time the process has already received relative to the others.

For interactive tasks, little of the accumulated wait_runtime is consumed, so their key value stays small; they therefore sit close to the left of the red-black tree and get a quick response.

The red-black tree is a balanced tree, and the scheduler simply reads the leftmost leaf node each time; that read costs O(log n).

Scheduler manager

To support real-time processes, CFS provides a scheduler module manager. Scheduling algorithms can be registered with the manager as modules, and different processes can use different scheduler modules. In 2.6.23 two modules are implemented: the CFS module, used for normal processes, and the real-time module, used for real-time processes. Ingo Molnar also invited Con Kolivas to implement RSDL/SD as a scheduling module. A trimmed-down sketch of the module interface follows.
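Each scheduling module plugs into the core scheduler through a table of function pointers; a trimmed-down sketch of that interface (only a few hooks shown, signatures simplified relative to the 2.6.23 source):

/* Each policy (e.g. the fair class for SCHED_NORMAL and the real-time
 * class for SCHED_FIFO/SCHED_RR) fills in these hooks; the core scheduler
 * calls them without caring which algorithm sits behind them. */
struct sched_class {
	struct sched_class *next;	/* classes are kept in priority order */

	void (*enqueue_task)(struct rq *rq, struct task_struct *p, int wakeup);
	void (*dequeue_task)(struct rq *rq, struct task_struct *p, int sleep);
	struct task_struct *(*pick_next_task)(struct rq *rq);
	void (*task_tick)(struct rq *rq, struct task_struct *p);
};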

CFS source code analysis

The scheduler_tick() function in sched.c is called directly by the clock interrupt. It first updates the runqueue-level variable clock and then calls the CFS tick handler task_tick_fair(), defined in sched_fair.c, whose main job is to call entity_tick().

The source of entity_tick() is as follows:

static void entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
	struct sched_entity *next;
	dequeue_entity(cfs_rq, curr, 0);
	enqueue_entity(cfs_rq, curr, 0);
	next = __pick_next_entity(cfs_rq);
	if (next == curr)
		return;
	__check_preempt_curr_fair(cfs_rq, next, curr,
			sched_granularity(cfs_rq));
}

Calling dequeue_entity() removes the current process from the red-black tree, and enqueue_entity() re-inserts it; together these two operations adjust the process's position in the tree. __pick_next_entity() returns the leftmost node of the tree, and if that is no longer the current process, __check_preempt_curr_fair() is called. This function sets the need_resched flag so that schedule() runs when the interrupt returns.

The enqueue_entity() function looks like this:

enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int wakeup)
{
	/*
	 * Update the fair clock.
	 */
	update_curr(cfs_rq);
	if (wakeup)
		enqueue_sleeper(cfs_rq, se);
	update_stats_enqueue(cfs_rq, se);
	__enqueue_entity(cfs_rq, se);
}

Its first job is to update the scheduling information, after which the process is inserted into the red-black tree. The core of the update is the update_curr() function:

static void update_curr(struct cfs_rq *cfs_rq)
{
	struct sched_entity *curr = cfs_rq_curr(cfs_rq);
	unsigned long delta_exec;
	if (unlikely(!curr))
		return;
	delta_exec = (unsigned long)(rq_of(cfs_rq)->clock - curr->exec_start);
	curr->delta_exec += delta_exec;
	if (unlikely(curr->delta_exec > sysctl_sched_stat_granularity)) {
		__update_curr(cfs_rq, curr);
		curr->delta_exec = 0;
	}
	curr->exec_start = rq_of(cfs_rq)->clock;
}

rq_of(cfs_rq)->clock is updated in the tick interrupt, and curr->exec_start is the timestamp at which the current process last started running on the CPU. Subtracting the two gives the CPU time the current process has just consumed, which is accumulated in curr->delta_exec. __update_curr() is then called:

__update_curr(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
	unsigned long delta, delta_exec, delta_fair, delta_mine;
	struct load_weight *lw = &cfs_rq->load;
	unsigned long load = lw->weight;
	delta_exec = curr->delta_exec;
	schedstat_set(curr->exec_max, max((u64)delta_exec, curr->exec_max));
	curr->sum_exec_runtime += delta_exec;
	cfs_rq->exec_clock += delta_exec;
	if (unlikely(!load))
		return;
	delta_fair = calc_delta_fair(delta_exec, lw);
	delta_mine = calc_delta_mine(delta_exec, curr->load.weight, lw);
	if (cfs_rq->sleeper_bonus > sysctl_sched_min_granularity) {
		delta = min((u64)delta_mine, cfs_rq->sleeper_bonus);
		delta = min(delta, (unsigned long)(
			(long)sysctl_sched_runtime_limit - curr->wait_runtime));
		cfs_rq->sleeper_bonus -= delta;
		delta_mine -= delta;
	}
	cfs_rq->fair_clock += delta_fair;
	add_wait_runtime(cfs_rq, curr, delta_mine - delta_exec);
}

The main job of __update_curr() is to update the fair_clock and wait_runtime mentioned earlier, whose difference is the key with which the process is later re-inserted into the red-black tree. The variable delta_exec holds the CPU time just consumed by the current process. calc_delta_fair() scales delta_exec according to the CPU load (kept in lw) and stores the result in delta_fair. calc_delta_mine() scales delta_exec according to the process's nice value (encoded in curr->load.weight) and the CPU load, storing the result in delta_mine; according to the comments in the source, delta_mine is the amount of CPU time the current process was actually entitled to during that period.

delta_fair is then added to fair_clock, and delta_mine - delta_exec (a negative value, since delta_mine is only the current process's fair share of delta_exec) is added to wait_runtime by add_wait_runtime(), so wait_runtime shrinks. Because the calc_delta_xx() functions change delta_exec only slightly, we can ignore those changes for the sake of discussion. The end result is then approximately that fair_clock grows by delta_exec while wait_runtime shrinks by delta_exec, so the key value fair_clock - wait_runtime grows by roughly twice delta_exec. With a larger key, the current process is re-inserted further to the right of the red-black tree. A tiny arithmetic sketch of this approximation follows.
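A tiny arithmetic sketch of that approximation (illustrative, not kernel code):

/* After the current entity has run for delta_exec time units, fair_clock
 * grows by about delta_exec and wait_runtime shrinks by about delta_exec,
 * so the insertion key grows by roughly 2 * delta_exec and the entity is
 * re-inserted further to the right of the tree. */
static long long key_after_update(long long fair_clock, long long wait_runtime,
				  long long delta_exec)
{
	fair_clock   += delta_exec;		/* delta_fair ~= delta_exec */
	wait_runtime -= delta_exec;		/* delta_mine - delta_exec ~= -delta_exec */
	return fair_clock - wait_runtime;	/* the new key */
}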

CFS summary

The discussion above shows that CFS changes the earlier schedulers substantially: the priority arrays are replaced by a red-black tree, the dynamic-priority strategy by a completely fair strategy, and a module manager is introduced; about 70% of the code of the original Linux 2.6.0 scheduler module was changed. The structure is simpler and more flexible, and the algorithm adapts better to different workloads. Compared with RSDL, although both are based on the complete-fairness principle, the implementations are entirely different, and CFS is cleaner, simpler and more extensible.

Another important feature of CFS is its small scheduling granularity. In pre-CFS schedulers a process was preempted only after it had used up its time slice or quota, unless it called a blocking function and entered the scheduler voluntarily. CFS checks at every tick and preempts as soon as the current process is no longer the leftmost node of the red-black tree. On heavily loaded servers, better scheduling behavior can be obtained by tuning the scheduling granularity.

4 Summary

By reviewing the history of the Linux scheduler, we gain useful background for understanding the development of CFS. In fact, no scheduler algorithm currently satisfies every kind of application, and CFS has received some negative test reports as well. New scheduling algorithms will surely appear as Linux continues to evolve, so let's wait and see. Aspiring programmers can also try to contribute to Linux in this area.



This article is reposted from MoShuiQian's blog on Cnblogs (original link: http://www.cnblogs.com/zhoug2020/p/3967385.html); please contact the author before reprinting.

