Analyzing the Implementation of Concurrent Mode in the Fiber Architecture
We have now spent three lectures on the implementation principles and coding patterns of the Fiber architecture. In those lectures, most of the key implementation-level points were explained by walking through and stringing together the entire ReactDOM.render link. On the one hand, this material is fairly complex and takes some patience to digest through repeated study; on the other hand, everything we cover next depends heavily on it. It is therefore especially important to have this prerequisite knowledge firmly in hand.
Take a look at the following functions and check whether you remember when each one executes and what it does:

- performSyncWorkOnRoot
- workLoopSync
- performUnitOfWork
- beginWork
- completeWork
- completeUnitOfWork
- reconcileChildFibers
If they still don't feel familiar, go back to the previous three lectures, work through the examples alongside the source code, and reorganize your understanding before coming back to pick up the thread.
From here on, when these methods and their related logic come up, we will not re-explain them.
In this lecture, we'll look at the most fascinating part of the Fiber architecture — the implementation of "time slicing" and "priority" in Concurrent mode.
Before getting down to business, let's answer the "two trees" question left over from the previous two lectures. The way the "two trees" cooperate nicely connects the mount and update processes, and it makes a good entry point for this lecture.
1. The current tree and the workInProgress tree: the "double buffering" pattern in the Fiber architecture
1) What is the "double buffering" pattern?
"Double buffering" is a classic design pattern with a long history in graphics and games. To understand it, consider a real-life example: suppose you go to see a one-hour play that is performed continuously, with no intermission.
The plot demands a scene change at the half-hour mark. A scene change means that the lighting, set, and atmosphere of the stage must all switch to a different style. How do you pull off a scene change without interrupting the show? No matter how fast the crew works, it would take ten or twenty minutes, which is far too long for a one-hour play; the audience would never accept that kind of "stutter" in the plot.
One solution is to prepare two stages for the play: while the first stage is in use, the second stage's set is prepared. When the scene on the first stage ends, all that is needed is to turn off its lights and turn on the lights of the second stage, and the plot continues seamlessly.
In real plays you often see exactly this: the actor moves from the left side of the stage to the right, the lights change, and the scene shifts from the bedroom (left stage) to the park (right stage), then from the park (right stage) to the office (left stage). The left stage's set changed from a bedroom to an office while the actors were using the right stage.
In this process, the left and right stages can be seen as two buffers, and the seamless performance presented to the audience is the result of the two buffers being read alternately.
In computer graphics, having the hardware read two buffers alternately enables seamless switching between frames, reducing visual jitter and even stalls. In React, the main benefit of the double-buffering pattern is that it maximizes the reuse of Fiber nodes, reducing performance overhead.
2) How do the current tree and the workInProgress tree "cooperate"?
In React, the current tree and the workInProgress tree can be compared to the two buffers in the double-buffering pattern: while the current tree is presented to the user, all update work is performed on the workInProgress tree. The workInProgress tree silently accumulates all changes out of the user's sight (in memory) until the "stage lights" hit it — that is, until the commit phase finishes and the current pointer is switched to point at it. At that moment the workInProgress tree becomes the current tree shown on screen.
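To make the pointer swap concrete, here is a minimal sketch (purely illustrative — not React source; real Fiber nodes carry many more fields):

```javascript
// A minimal sketch of the "two trees" pointer swap (illustrative, not React source).
// Each fiber links to its counterpart in the other tree via `alternate`.
const fiberA = { tag: 'HostRoot', memoizedState: 0, alternate: null };
const fiberB = { tag: 'HostRoot', memoizedState: null, alternate: fiberA };
fiberA.alternate = fiberB;

const root = { current: fiberA }; // `current` points at the tree on screen

// During an update, work happens on current.alternate (the workInProgress tree)...
const workInProgress = root.current.alternate;
workInProgress.memoizedState = 1; // changes are made off-screen, in memory

// ...and when the commit phase finishes, the pointer flips: workInProgress
// becomes the new current tree, and the old current tree becomes the spare buffer.
root.current = workInProgress;

console.log(root.current.memoizedState); // 1
console.log(root.current.alternate.memoizedState); // 0
```

Notice that after the flip, the old tree is still reachable via `alternate` — that is exactly what makes node reuse on the next update possible.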
Next, let's use a demo to get a feel for how the workInProgress tree and the current tree "interact". The code is as follows:
```jsx
import { useState } from 'react';

function App() {
  const [state, setState] = useState(0)
  return (
    <div className="App">
      <div onClick={() => { setState(state + 1) }} className="container">
        <p style={{ width: 128, textAlign: 'center' }}>
          {state}
        </p>
      </div>
    </div>
  );
}

export default App;
```
When mounted, the component displays a simple interface with the number 0, as shown below:
Each click on the number increments its value by 1 — this is the demo's update action.
2. The Fiber trees during mount
The construction of the Fiber tree has been explained in detail before, so we won't repeat it here. After the render phase of mounting and before commit, the two Fiber trees look like this:
After the commit phase is complete, the DOM tree corresponding to the workInProgress tree on the right is actually rendered to the page, and the current pointer points to the workInProgress tree:
Since mounting is a from-scratch process in which new nodes are constantly created, there is no "node reuse" yet — node reuse is what we'll observe during the update process.
(1) First update
Now click on the number 0 to trigger an update. In this update, the rootFiber node highlighted below will be reused:
This reuse logic lives in the createWorkInProgress method on the beginWork call link. createWorkInProgress contains the following key logic:
In createWorkInProgress, the alternate property of the current node is read first and treated as the workInProgress node. For the rootFiber node, its alternate is in fact the rootFiber of the previous current tree, as highlighted below:
When a rootFiber already exists in the previous current tree, React reuses that node as the new workInProgress node, so execution enters the else branch of createWorkInProgress. Wherever the reused node differs from the target workInProgress node, React simply overwrites the relevant properties on it to match — no new Fiber node needs to be created.
As for the remaining App, div, and p nodes, since they have no alternate nodes yet, their createWorkInProgress calls enter the logic highlighted below:
In this logic, createFiber is called to create a new FiberNode.
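Putting the two branches together, createWorkInProgress can be sketched roughly as follows (a simplified sketch based on the React 17 source; the real function copies many more fields and handles effects, lanes, and so on):

```javascript
// Simplified sketch of createWorkInProgress (illustrative; the real React
// source copies many more fields and handles flags, lanes, children, etc.).
function createFiber(tag, pendingProps, key, mode) {
  // Stand-in for React's FiberNode constructor
  return { tag, pendingProps, key, mode, alternate: null, type: null, stateNode: null };
}

function createWorkInProgress(current, pendingProps) {
  let workInProgress = current.alternate;
  if (workInProgress === null) {
    // No counterpart from a previous render: create a brand-new Fiber node
    workInProgress = createFiber(current.tag, pendingProps, current.key, current.mode);
    workInProgress.type = current.type;
    workInProgress.stateNode = current.stateNode;
    // Wire the two nodes to each other so future renders can reuse them
    workInProgress.alternate = current;
    current.alternate = workInProgress;
  } else {
    // A counterpart exists (e.g. rootFiber on the first update): reuse it and
    // just overwrite the properties that may have changed
    workInProgress.pendingProps = pendingProps;
  }
  return workInProgress;
}
```

On the first update only rootFiber has an alternate, so only it takes the reuse branch; from the second update onward, every node does.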
After the first update, a brand-new workInProgress Fiber tree has been created, and the current pointer ends up pointing at it, as shown below:
(2) Second update
Next, click the number 1 to trigger a second state update. In this update, every node's alternate in the current tree is non-null (as shown above), so every createWorkInProgress call triggered from beginWork consistently enters the else branch, which simply reuses the existing nodes.
That is the process by which the current tree and the workInProgress tree "cooperate" to achieve node reuse.
3. Taking the update link apart
In the previous lecture you learned about the render link in the mount phase. The update link in synchronous mode is basically the same as the render phase of the mount link: both trigger, via performSyncWorkOnRoot, a depth-first traversal that includes beginWork and completeWork. Here is a call stack from the update process:
You'll find the same recipe, the same familiar taste. ReactDOM.render, like setState, is a way of triggering an update. In React, methods such as ReactDOM.render, setState, and the state setter returned by useState can all trigger updates, and their call links look similar because they all funnel into the same update workflow by creating update objects.
① Creating the update object
Next, we continue to dissect the update link using the demo from earlier. After you click the number, the click callback runs, which first triggers the dispatchAction method, as shown below:
Focusing on the two function calls highlighted in red, you can see that the dispatchAction method appears to the left of performSyncWorkOnRoot, i.e. it is called first. In other words, the overall update link looks like this:
dispatchAction creates the update object, as shown in red:
② From the Update object to scheduleUpdateOnFiber
This logic may feel familiar. If you recall the first lecture in the ReactDOM.render series, you may remember the updateContainer method — in updateContainer, React does much the same thing. Here's the logic inside updateContainer:
The logic in the figure is quite clear. Taking enqueueUpdate as the boundary, it does three things:

- Before enqueueUpdate: create the update object.
- The enqueueUpdate call: enqueue the update. Put simply, each Fiber node has its own updateQueue, which stores multiple updates in the form of a linked list. During the render phase, the contents of the updateQueue become the basis for computing the Fiber node's new state.
- scheduleUpdateOnFiber: schedule the update. If what you learned earlier is still fresh, you may remember that in the synchronous mount link, this method is followed by the render phase triggered by performSyncWorkOnRoot, as shown below:
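The enqueueUpdate step can be sketched as follows. In the React 17 source, the pending updates on a Fiber form a circular singly linked list whose `pending` pointer references the most recent update (simplified here; the real queue also tracks base state and lanes):

```javascript
// Simplified sketch of how updates queue onto a Fiber node, modeled on
// React 17's enqueueUpdate (the real queue also tracks base state and lanes).
function enqueueUpdate(fiber, update) {
  const sharedQueue = fiber.updateQueue.shared;
  const pending = sharedQueue.pending;
  if (pending === null) {
    // First update: it points at itself, forming a circular list of one
    update.next = update;
  } else {
    // Insert after the most recent update, keeping the list circular
    update.next = pending.next;
    pending.next = update;
  }
  // `pending` always references the most recently enqueued update
  sharedQueue.pending = update;
}

const fiber = { updateQueue: { shared: { pending: null } } };
enqueueUpdate(fiber, { payload: 1, next: null });
enqueueUpdate(fiber, { payload: 2, next: null });
// pending → update2; pending.next → update1 (the oldest); update1.next → update2
```

Keeping `pending` on the newest update means the oldest update is always one hop away (`pending.next`), which is where the render phase starts processing.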
Now go back to dispatchAction and you will find that it handles the same three actions. The partial screenshot of dispatchAction above covers the creation and enqueueing of the update object; its call to scheduleUpdateOnFiber comes at the end of the function, as shown below:
With dispatchAction, the node that triggers the update must be handled differently from the mount case. During mount, updateContainer schedules the root node directly. In an update scenario, however, most updates are not triggered from the root node — yet the render phase must still start from the root. That is why scheduleUpdateOnFiber contains a method like the one shown in red below:
markUpdateLaneFromFiberToRoot starts from the current Fiber node, traverses upward to the root node, and returns the root node.
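The upward traversal itself can be sketched like this (simplified; the real markUpdateLaneFromFiberToRoot also merges lane bitmasks on every node it passes):

```javascript
// Simplified sketch of walking from the updated Fiber up to the root
// (the real markUpdateLaneFromFiberToRoot also merges lanes as it climbs).
function markUpdateLaneFromFiberToRoot(sourceFiber) {
  let node = sourceFiber;
  let parent = sourceFiber.return; // `return` links each Fiber to its parent
  while (parent !== null) {
    node = parent;
    parent = parent.return;
  }
  // Only the topmost node (rootFiber, tag HostRoot) has no parent;
  // its stateNode is the FiberRoot object that scheduling starts from
  return node.tag === 'HostRoot' ? node.stateNode : null;
}

// Example tree: rootFiber ← App ← p (the <p> where setState fired)
const rootFiber = { tag: 'HostRoot', stateNode: { current: null }, return: null };
const appFiber = { tag: 'Function', return: rootFiber };
const pFiber = { tag: 'HostComponent', return: appFiber };
console.log(markUpdateLaneFromFiberToRoot(pFiber) === rootFiber.stateNode); // true
```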
4. How does scheduleUpdateOnFiber distinguish synchronous from asynchronous?
If the synchronous render link analysis is still fresh in your mind, the following logic will look familiar:
This is a piece of logic inside scheduleUpdateOnFiber. On the synchronous render link, lane === SyncLane holds, so execution goes straight into performSyncWorkOnRoot and the synchronous render process begins. In asynchronous rendering mode, execution enters the else branch.
Inside that else branch sits the ensureRootIsScheduled method, which is critical for kicking off the render phase for the current update. Here is the core logic of ensureRootIsScheduled (explained in the comments):
```js
if (newCallbackPriority === SyncLanePriority) {
  // Render entry point for synchronous updates
  newCallbackNode = scheduleSyncCallback(
    performSyncWorkOnRoot.bind(null, root)
  );
} else {
  // Convert the current task's lane priority into a priority Scheduler can understand
  var schedulerPriorityLevel = lanePriorityToSchedulerPriority(newCallbackPriority);
  // Render entry point for asynchronous updates
  newCallbackNode = scheduleCallback(
    schedulerPriorityLevel,
    performConcurrentWorkOnRoot.bind(null, root)
  );
}
```
Focus on the two methods performSyncWorkOnRoot and performConcurrentWorkOnRoot: the former is the render phase entry point in synchronous mode; the latter is the render phase entry point in asynchronous mode.
As this logic shows, React decides whether to schedule performSyncWorkOnRoot or performConcurrentWorkOnRoot next based on the priority of the current update task. The scheduling itself is done by scheduleSyncCallback and scheduleCallback respectively, and both functions perform task scheduling by calling unstable_scheduleCallback internally. unstable_scheduleCallback is a core method exported by Scheduler, and it is also the focus of this lecture.
Before explaining how unstable_scheduleCallback works, let’s take a look at Scheduler.
5. Scheduler — the driving force behind "time slicing" and "priority"
Architecturally, Scheduler is the "scheduling layer" in the Fiber architecture's layered design. Implementation-wise, it is not logic embedded in React itself, but a package that sits alongside react-dom, as shown in the figure below; it gathers together all of the relatively general scheduling logic:
We already know that the core characteristics of asynchronous rendering (i.e. Concurrent mode) in the Fiber architecture are "time slicing" and "priority scheduling". These are exactly Scheduler's core capabilities. Next, we'll use these two features as the thread for unlocking how Scheduler works.
(1) Understanding the "time slicing" phenomenon through the React call stack
Before digging into how time slicing is implemented, let's first see what time slicing looks like as a phenomenon.
① What is time slicing?
As emphasized in the ReactDOM.render lectures, the render phase in synchronous rendering mode is a synchronous, depth-first traversal. What kind of trouble does that synchronicity cause? Lecture 13 gave us a theoretical glimpse; now let's see it directly in the call stack. Here is a React demo with a relatively heavy rendering load:
```jsx
import React from 'react';

function App() {
  const arr = new Array(1000).fill(0)
  const renderContent = arr.map((i, index) => (
    <p key={index} style={{ width: 128, textAlign: 'center' }}>{`${index} line`}</p>
  ))
  return (
    <div className="App">
      <div className="container">{renderContent}</div>
    </div>
  );
}

export default App;
```
The App component renders 1000 lines of text, a portion of which is shown below:
When ReactDOM.render is used to render this long list, the call stack looks like this:
Rather than focusing on beginWork, completeWork, and the like, look at the top of the call stack, marked in red in the figure — a single continuous gray "Task" bar, which from the browser's point of view is one uninterruptible task.
On my browser, this task takes more than 130ms to execute (hover over the Task bar to see the time). Browsers typically refresh at 60Hz, i.e. once every 16.6ms. Within each 16.6ms, the rendering thread has work to do besides the JS thread, and a task this long clearly crowds out the rendering thread's working time, risking "dropped frames" and visible stutter. This is exactly the "JS monopolizing the main thread for too long" problem mentioned in Lecture 13.
If the ReactDOM.render call is changed to a createRoot-based call (i.e. Concurrent mode is turned on), the call stack looks like this instead:
We continue to focus on the Task bar at the top level.
You will notice that the single continuous Task bar (one big task) has been "chopped" into a series of intermittent short Task bars (small tasks), each taking about 5ms on my browser. The short tasks add up to the same amount of work as the long task did, but the gaps between them give the browser a chance to breathe — this is the "time slicing" effect.
② How is time slicing implemented?
In synchronous rendering, the loop that creates Fiber nodes and builds the Fiber tree is driven by the workLoopSync function. Here is workLoopSync's source code:
In workLoopSync, as long as workInProgress is non-null, the while loop does not end, synchronously calling performUnitOfWork over and over.
In asynchronous rendering mode, the loop is started by workLoopConcurrent instead. workLoopConcurrent works much like workLoopSync, with a single difference in the loop condition — note the source highlighted in red below:
shouldYield literally means "should give way". As the name implies, when shouldYield() returns true, the main thread needs to be yielded: the while condition becomes false and the loop stops.
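The effect of that one extra condition can be simulated with a toy version of the two loops (purely illustrative — in React, performUnitOfWork builds Fiber nodes, and shouldYield checks the clock rather than a counter):

```javascript
// Toy simulation contrasting workLoopSync and workLoopConcurrent (illustrative;
// in React, each unit of work builds a Fiber node and shouldYield checks time).
function makeWorkQueue(n) {
  return Array.from({ length: n }, (_, i) => i);
}

function workLoopSync(queue) {
  let done = 0;
  while (queue.length > 0) { // never yields: runs until the queue is drained
    queue.shift();
    done++;
  }
  return done;
}

function workLoopConcurrent(queue, shouldYield) {
  let done = 0;
  while (queue.length > 0 && !shouldYield()) { // stops when the slice expires
    queue.shift();
    done++;
  }
  return done;
}

const syncDone = workLoopSync(makeWorkQueue(1000)); // 1000: all work in one go
let units = 0;
const budget = () => ++units > 5; // pretend the time slice allows 5 units of work
const concurrentDone = workLoopConcurrent(makeWorkQueue(1000), budget); // 5
```

The concurrent loop leaves the remaining work in the queue; Scheduler simply re-runs the loop in a later task, which is what produces the many short Task bars seen above.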
So what exactly is shouldYield? In the source you can find two assignment statements like these:
```js
var Scheduler_shouldYield = Scheduler.unstable_shouldYield,
......
var shouldYield = Scheduler_shouldYield;
```
These two lines show that the body of shouldYield is actually Scheduler.unstable_shouldYield, i.e. the unstable_shouldYield method exported by the Scheduler package. Its source is highlighted in red below:
Here unstable_now simply reads performance.now(), i.e. the current time. What, then, is deadline? It can be understood as the expiration time of the current time slice; its calculation can be found in the Scheduler package's performWorkUntilDeadline method, highlighted in red below:
In that formula, currentTime is the current time and yieldInterval is the length of the time slice. Note that the length of a React time slice is not a constant: it is computed from the browser's frame rate, and so depends on browser performance.
In short, React computes the size of the time slice from the browser's frame rate and derives each slice's expiration time from the current time. In workLoopConcurrent, before each iteration of the while loop, shouldYield is called to ask whether the current time slice has expired; if it has, the loop exits and control of the main thread is handed back.
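Putting the pieces together, the deadline bookkeeping can be sketched like this (names are modeled on the React 17 Scheduler source, but this is a simplified sketch, not the original code):

```javascript
// Sketch of Scheduler's time-slice bookkeeping (simplified; modeled on the
// React 17 Scheduler source, where yieldInterval defaults to 5ms).
let yieldInterval = 5; // default slice length in ms
let deadline = 0;

function forceFrameRate(fps) {
  // Higher frame rate → shorter slice; out-of-range values fall back to 5ms
  if (fps > 0 && fps <= 125) {
    yieldInterval = Math.floor(1000 / fps);
  } else {
    yieldInterval = 5;
  }
}

function startSlice(currentTime) {
  // performWorkUntilDeadline computes: deadline = currentTime + yieldInterval
  deadline = currentTime + yieldInterval;
}

function shouldYieldToHost(currentTime) {
  // True once the current time slice is used up
  return currentTime >= deadline;
}

forceFrameRate(60); // 60Hz → 16ms slices
startSlice(1000);
console.log(shouldYieldToHost(1010)); // false: still inside the slice
console.log(shouldYieldToHost(1016)); // true: the slice has expired
```

In the real Scheduler, currentTime comes from performance.now() rather than being passed in; the parameter here just makes the arithmetic easy to follow.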
(2) How is priority scheduling implemented?
At the end of the section on the update link, we learned that both scheduleSyncCallback and scheduleCallback ultimately initiate scheduling by calling unstable_scheduleCallback — a core method exported by Scheduler that applies different scheduling logic to tasks depending on their priority information.
Let’s take a look at how this process is implemented using the source code (explained in the comments).
```js
function unstable_scheduleCallback(priorityLevel, callback, options) {
  // Get the current time
  var currentTime = exports.unstable_now();
  // Declare startTime: the expected start time of the task
  var startTime;
  if (typeof options === 'object' && options !== null) {
    var delay = options.delay;
    // If a valid delay is specified, the task starts after that delay
    if (typeof delay === 'number' && delay > 0) {
      startTime = currentTime + delay;
    } else {
      startTime = currentTime;
    }
  } else {
    startTime = currentTime;
  }

  // timeout is used to compute expirationTime
  var timeout;
  // Determine the value of timeout according to priorityLevel
  switch (priorityLevel) {
    case ImmediatePriority:
      timeout = IMMEDIATE_PRIORITY_TIMEOUT;
      break;
    case UserBlockingPriority:
      timeout = USER_BLOCKING_PRIORITY_TIMEOUT;
      break;
    case IdlePriority:
      timeout = IDLE_PRIORITY_TIMEOUT;
      break;
    case LowPriority:
      timeout = LOW_PRIORITY_TIMEOUT;
      break;
    case NormalPriority:
    default:
      timeout = NORMAL_PRIORITY_TIMEOUT;
      break;
  }

  // The higher the priority, the smaller the timeout, and so the smaller expirationTime is
  var expirationTime = startTime + timeout;

  // Create the task object
  var newTask = {
    id: taskIdCounter++,
    callback: callback,
    priorityLevel: priorityLevel,
    startTime: startTime,
    expirationTime: expirationTime,
    sortIndex: -1
  };

  {
    newTask.isQueued = false;
  }

  // If startTime is greater than currentTime, the task can be delayed (it has not expired)
  if (startTime > currentTime) {
    // Push the unexpired task into timerQueue
    newTask.sortIndex = startTime;
    push(timerQueue, newTask);
    // If there are no tasks to execute in taskQueue, and the current task is
    // the first task in timerQueue
    if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
      // ......
      // Schedule a delayed call for the current task
      requestHostTimeout(handleTimeout, startTime - currentTime);
    }
  } else {
    // The task has expired
    newTask.sortIndex = expirationTime;
    // Push the expired task into taskQueue
    push(taskQueue, newTask);
    // ......
    // Issue an immediate call
    requestHostCallback(flushWork);
  }

  return newTask;
}
```
unstable_scheduleCallback creates a task (newTask) from the current callback and its options and, based on startTime, pushes the task into either timerQueue or taskQueue; finally, depending on the state of the two queues, it initiates either a delayed call or an immediate call.
To understand this process, the following concepts need to be clear:

- startTime: the task's start time.
- expirationTime: a priority-related value; the smaller a task's expirationTime, the higher its priority.
- timerQueue: a min-heap ordered by startTime, storing tasks whose startTime is greater than the current time (i.e. tasks yet to start).
- taskQueue: a min-heap ordered by expirationTime, storing tasks whose startTime is less than or equal to the current time (i.e. expired tasks).
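To make "higher priority → smaller expirationTime" concrete, here is a quick sketch using the timeout constants found in the React 17 Scheduler source (the exact values may differ in other versions):

```javascript
// Timeout constants as found in the React 17 Scheduler source (milliseconds).
const IMMEDIATE_PRIORITY_TIMEOUT = -1;      // expires immediately: already "past due"
const USER_BLOCKING_PRIORITY_TIMEOUT = 250;
const NORMAL_PRIORITY_TIMEOUT = 5000;
const LOW_PRIORITY_TIMEOUT = 10000;
const IDLE_PRIORITY_TIMEOUT = 1073741823;   // maxSigned31BitInt: effectively "never"

const startTime = 1000; // pretend four tasks are scheduled at the same moment
const expirationTimes = {
  immediate: startTime + IMMEDIATE_PRIORITY_TIMEOUT,       // 999
  userBlocking: startTime + USER_BLOCKING_PRIORITY_TIMEOUT, // 1250
  normal: startTime + NORMAL_PRIORITY_TIMEOUT,              // 6000
  low: startTime + LOW_PRIORITY_TIMEOUT,                    // 11000
};
// Smaller expirationTime → closer to the top of taskQueue → executed sooner.
```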
The concept of a "min-heap" (literally "small top heap" in some translations) may be new to some readers, so here is a brief explanation: a heap is a special kind of complete binary tree, and a complete binary tree is a min-heap if each node's value is no greater than the values of its left and right children. The insertion and deletion logic of a min-heap guarantees that, no matter how many elements are added or removed, the root node is always the element with the smallest value. Because of this property, min-heaps are often used to implement priority queues.
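A minimal min-heap offering the push/peek/pop trio that Scheduler relies on could look like this (a teaching sketch; React's SchedulerMinHeap works the same way on a plain array, with an extra id tie-break):

```javascript
// Minimal min-heap over { sortIndex } items, mirroring the push/peek/pop trio
// Scheduler uses (teaching sketch; React's version also tie-breaks on task id).
function push(heap, node) {
  heap.push(node);
  let i = heap.length - 1;
  while (i > 0) { // sift the new node up until its parent is no larger
    const parent = (i - 1) >> 1;
    if (heap[parent].sortIndex <= heap[i].sortIndex) break;
    [heap[parent], heap[i]] = [heap[i], heap[parent]];
    i = parent;
  }
}

function peek(heap) {
  return heap.length === 0 ? null : heap[0]; // the minimum is always the root
}

function pop(heap) {
  if (heap.length === 0) return null;
  const first = heap[0];
  const last = heap.pop();
  if (heap.length > 0) {
    heap[0] = last;
    let i = 0; // sift the moved node down until both children are no smaller
    for (;;) {
      const l = 2 * i + 1, r = 2 * i + 2;
      let smallest = i;
      if (l < heap.length && heap[l].sortIndex < heap[smallest].sortIndex) smallest = l;
      if (r < heap.length && heap[r].sortIndex < heap[smallest].sortIndex) smallest = r;
      if (smallest === i) break;
      [heap[i], heap[smallest]] = [heap[smallest], heap[i]];
      i = smallest;
    }
  }
  return first;
}

const taskQueue = [];
push(taskQueue, { sortIndex: 6000, name: 'normal' });
push(taskQueue, { sortIndex: 999, name: 'immediate' });
push(taskQueue, { sortIndex: 1250, name: 'userBlocking' });
console.log(peek(taskQueue).name); // "immediate": smallest expirationTime wins
```

Whatever order tasks arrive in, peek always surfaces the smallest sortIndex — which is exactly why sorting timerQueue by startTime and taskQueue by expirationTime gives Scheduler its priority behavior for free.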
With these concepts in hand, let's look at how unstable_scheduleCallback decides between timerQueue and taskQueue. Here is the branching logic again:
```js
// If startTime is greater than currentTime, the task has not expired
if (startTime > currentTime) {
  // Push the unexpired task into timerQueue
  newTask.sortIndex = startTime;
  push(timerQueue, newTask);
  // If there are no tasks to execute in taskQueue, and the current task is
  // the first task in timerQueue
  if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
    // ......
    // Schedule a delayed call for the current task
    requestHostTimeout(handleTimeout, startTime - currentTime);
  }
} else {
  // The task has expired
  newTask.sortIndex = expirationTime;
  // Push the expired task into taskQueue
  push(taskQueue, newTask);
  // ......
  // Issue an immediate call
  requestHostCallback(flushWork);
}
```
If the current task is judged to be a not-yet-started task, its sortIndex is set to startTime and it is pushed into timerQueue. Then comes this judgment:
```js
// If there are no tasks to execute in taskQueue, and the current task is
// the first task in timerQueue
if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
  // ......
  // Schedule a delayed call for the current task
  requestHostTimeout(handleTimeout, startTime - currentTime);
}
```
To understand this logic, you first need to know what peek(xxx) does: peek() takes a min-heap as its argument and returns the element at the top of the heap (without removing it).
taskQueue stores expired tasks, so if peek(taskQueue) returns null, taskQueue is empty and there are no expired tasks. In that case, React goes on to examine timerQueue, the queue of unexpired tasks.
From the earlier explanation, we know the min-heap is a partially ordered data structure. timerQueue, as a min-heap, is ordered by the sortIndex property, which here is set to startTime. That means the task at the top of the heap is always the one with the smallest startTime in timerQueue — in other words, the earliest-starting unexpired task.
If the current task (newTask) is the earliest-starting unexpired task in timerQueue, unstable_scheduleCallback sets up a delayed call for it via requestHostTimeout.
Note that this delayed call (handleTimeout) does not directly schedule the execution of the current task — when the task's start time arrives, it simply moves the task out of timerQueue into taskQueue and triggers flushWork. The actual execution happens in flushWork: it calls workLoop, which executes the tasks in taskQueue one by one until scheduling is paused (the time slice is used up) or the queue is empty.
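That "move expired timers over" step can be sketched as follows (simplified: plain sorted arrays stand in for React's min-heaps; the function name advanceTimers follows the Scheduler source):

```javascript
// Sketch of how fired timers migrate from timerQueue to taskQueue
// (simplified: sorted arrays stand in for React's min-heaps).
const timerQueue = [];
const taskQueue = [];

function push(queue, task) {
  queue.push(task);
  queue.sort((a, b) => a.sortIndex - b.sortIndex); // min-heap stand-in
}
function peek(queue) {
  return queue.length === 0 ? null : queue[0];
}
function pop(queue) {
  return queue.length === 0 ? null : queue.shift();
}

function advanceTimers(currentTime) {
  // Pull every timer whose startTime has arrived into the expired-task queue
  let timer = peek(timerQueue);
  while (timer !== null && timer.startTime <= currentTime) {
    pop(timerQueue);
    timer.sortIndex = timer.expirationTime; // now keyed by expirationTime
    push(taskQueue, timer);
    timer = peek(timerQueue);
  }
}

push(timerQueue, { startTime: 1200, expirationTime: 6200, sortIndex: 1200 });
push(timerQueue, { startTime: 1100, expirationTime: 1350, sortIndex: 1100 });
advanceTimers(1150); // only the timer starting at 1100 has "fired"
console.log(taskQueue.length, timerQueue.length); // 1 1
```

The re-keying from startTime to expirationTime is the crucial detail: once a task enters taskQueue, its position is governed by how urgent it is, not by when it was scheduled.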
That is how unexpired tasks are handled. With that in place, the else branch for expired tasks (the following code) is easy to understand:
```js
} else {
  // The task has expired
  newTask.sortIndex = expirationTime;
  // Push the expired task into taskQueue
  push(taskQueue, newTask);
  // ......
  // Issue an immediate call
  requestHostCallback(flushWork);
}
```
Unlike timerQueue, taskQueue is a min-heap that uses expirationTime as its sortIndex. After pushing an expired task into taskQueue, React uses requestHostCallback(flushWork) to issue an immediate call to flushWork, which executes the expired tasks in taskQueue.
React 17.0.0 initiates Task scheduling with MessageChannel, falling back to setTimeout in host environments that do not support MessageChannel. Either way — setTimeout or MessageChannel — the task is initiated asynchronously.
Therefore, the "immediate task" initiated by requestHostCallback executes, at the earliest, in the next event loop. "Immediate" simply means it does not have to wait a specified interval, in contrast to a "delayed task" — it does not mean a synchronous call.
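This asynchrony is easy to demonstrate (a sketch, not React source; MessageChannel is available in browsers and in modern Node):

```javascript
// Demonstrates that requestHostCallback-style scheduling via MessageChannel is
// asynchronous: the callback runs in a later event-loop turn (sketch only).
let scheduledHostCallback = null;
let didRun = false;

const channel = new MessageChannel();
channel.port1.onmessage = () => {
  didRun = true;
  if (scheduledHostCallback !== null) scheduledHostCallback();
  channel.port1.close(); // tidy up so the process can exit
};

function requestHostCallback(callback) {
  scheduledHostCallback = callback;
  channel.port2.postMessage(null); // queues a macrotask; does NOT run synchronously
}

requestHostCallback(() => console.log('flushWork would run here'));
console.log(didRun); // false: "immediate" means "next event loop", not "right now"
```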
To make this easier to digest, the workflow of the unstable_scheduleCallback method is summarized in one big diagram:
This diagram should be digested together with the preceding analysis; chew it over carefully until it clicks.
6. Summary
In this lecture, we first looked at how the Fiber architecture implements the "double buffering" pattern, then took apart the elements of the update link to understand what mount, update, and similar actions really are. Finally, we analyzed Scheduler's core capabilities — "time slicing" and "priority scheduling" — against the source code, lifting the veil on asynchronous rendering in the Fiber architecture and the implementation logic behind Concurrent mode.
This brings our exploration of the Fiber architecture to an end. The next lecture covers "A special event system: how React events differ from DOM events". Keep going!
7. Appendix
Source code for further study