Introduction
To improve the user experience, the React team introduced Concurrent mode. Concurrent mode allows the browser to remain responsive to the user while applying updates, making appropriate adjustments based on the user’s device performance and network speed. Let’s look at the differences between Legacy and Concurrent mode with an example:
The page in our example has a square that we animate to move left and right back and forth. The div with the ID root is the mount point for the React application.
```html
<style>
  @keyframes move {
    from { margin-left: 0; }
    to { margin-left: 200px; }
  }
  #square {
    width: 100px;
    height: 100px;
    margin-top: 10px;
    background-color: red;
    animation: move 2s ease 0s infinite alternate;
  }
</style>
<body>
  <div id="square"></div>
  <div id="root"></div>
</body>
```
Our React application is relatively simple, rendering 2000 squares of different colors. To simulate heavy rendering, we run a time-consuming for loop inside each Item function component:
```jsx
const Item = ({ i }) => {
  // Simulate heavy rendering with a time-consuming loop
  for (let j = 0; j < 999999; j++) {}
  return (
    <span
      style={{
        display: 'inline-block',
        width: '5px',
        height: '5px',
        backgroundColor: `rgb(${255 * Math.random()}, ${255 * Math.random()}, ${255 * Math.random()})`,
      }}
    />
  )
}

const App = () => {
  const n = 2000
  return (
    <div style={{ fontSize: 0 }}>
      {[...new Array(n)].map((_, i) => (
        <Item key={i} i={i} />
      ))}
    </div>
  )
}
```
We mount the app with `ReactDOM.render(<App />, rootEl)` for Legacy mode and with the experimental `ReactDOM.createRoot(rootEl).render(<App />)` for Concurrent mode, then record the square's animation in each case:

Legacy | Concurrent |
---|---|
As you can see, in Legacy mode the square appears and then freezes until the rendering process completes, whereas in Concurrent mode the animation keeps running.
In Legacy mode, the Render phase (see the earlier article on React's first render) runs in a single task, which takes too long and blocks other tasks in the browser:
In Concurrent mode, the Render phase is broken up into smaller tasks:
Time slicing relies on React's newly added Scheduler, which is the subject of this article.
Scheduler
Scheduler is a new addition in React 16 and is responsible for scheduling tasks by priority. From the package's own description, we can see that the library is intended to become general-purpose in the future:
This is a package for cooperative scheduling in a browser environment. It is currently used internally by React, but we plan to make it more generic.
So let's set React aside and look at what Scheduler itself can do.
Scheduling task priorities
```js
import Scheduler from 'react/packages/scheduler'

Scheduler.unstable_scheduleCallback(2, function func1() {
  console.log('1')
})
const task = Scheduler.unstable_scheduleCallback(1, function func2(didTimeout) {
  console.log('2')
})
```
The first parameter of `unstable_scheduleCallback` is the task's priority (the smaller the number, the higher the priority), so the example above prints 2 first, then 1.
There are a few points to note here:
1. `Scheduler.unstable_scheduleCallback` returns a task object with the following attributes:

Attribute | Description |
---|---|
id | An auto-incremented task id |
callback | The function passed to unstable_scheduleCallback |
priorityLevel | The priority passed to unstable_scheduleCallback |
startTime | The start time of the task |
expirationTime | The expiration time of the task |
sortIndex | The field tasks are sorted by, set to either startTime or expirationTime |
2. The task's callback receives one argument (`didTimeout` in the code above) that indicates whether the current task has expired.
Delaying task execution
```js
import Scheduler from 'react/packages/scheduler'

Scheduler.unstable_scheduleCallback(2, function func1() {
  console.log('1')
})
Scheduler.unstable_scheduleCallback(1, function func2() {
  console.log('2')
}, { delay: 100 })
```
The `delay` field of `unstable_scheduleCallback`'s third parameter delays the execution of a task, even when that task has a higher priority. So the example above prints 1 first, then 2.
Cancel the task
```js
import Scheduler from 'react/packages/scheduler'

Scheduler.unstable_scheduleCallback(2, function func1() {
  console.log('1')
})
const task = Scheduler.unstable_scheduleCallback(1, function func2() {
  console.log('2')
})
Scheduler.unstable_cancelCallback(task)
```
`Scheduler.unstable_cancelCallback` cancels a task, so the example above only prints 1.
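Internally, cancellation is cheap: `unstable_cancelCallback` does not remove the task from the heap, it just sets its `callback` to null, and `workLoop` later skips tasks whose callback is no longer a function. A toy sketch of that idea (the queue and names below are illustrative, not the real implementation):

```javascript
// Toy model of Scheduler's cancellation (names are illustrative, not the real API)
const taskQueue = [];

function scheduleCallback(priority, callback) {
  const task = { priority, callback };
  taskQueue.push(task);
  taskQueue.sort((a, b) => a.priority - b.priority); // stand-in for the min-heap
  return task;
}

function cancelCallback(task) {
  // Like the real Scheduler: leave the task in the queue, just null its callback
  task.callback = null;
}

function flush(log) {
  while (taskQueue.length > 0) {
    const task = taskQueue.shift();
    // Tasks whose callback was nulled are skipped
    if (typeof task.callback === 'function') task.callback(log);
  }
}

scheduleCallback(2, (log) => log.push('1'));
const task = scheduleCallback(1, (log) => log.push('2'));
cancelCallback(task);

const log = [];
flush(log);
console.log(log); // only '1' survives cancellation
```

This also explains a detail we will meet again in `workLoop`: tasks are only popped from the queue when their callback is a function.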
Continuation scheduling
```js
import Scheduler from 'react/packages/scheduler'

function func2(didTimeout) {
  if (!didTimeout) console.log(2)
}
function func1() {
  console.log(1)
  return func2
}
Scheduler.unstable_scheduleCallback(1, func1)
```
When the callback of a task scheduled via `Scheduler.unstable_scheduleCallback` itself returns a function, the returned function continues to execute within the current task. So the example above prints 1 first; when `func2` subsequently runs, `didTimeout` is true (priority 1 tasks expire almost immediately), so 2 is not printed.
Yielding time
```js
import Scheduler from 'react/packages/scheduler'

function work() {
  while (!Scheduler.unstable_shouldYield()) {
    console.log('work')
  }
  console.log('yield to host')
}
Scheduler.unstable_scheduleCallback(1, function func2() {
  work()
})
```
`Scheduler.unstable_shouldYield` tells us whether the current task still has time left to run. The example above keeps printing work for a while and finally prints yield to host.
Time slice
With that in mind, let's use the Scheduler to simulate React's time-sliced Render:
```js
import Scheduler from 'react/packages/scheduler'

function createLinkedList(n) {
  let p = {
    value: `Node 1`,
    next: null
  }
  const head = p
  for (let index = 1; index < n; index++) {
    p.next = {
      value: `Node ${index + 1}`,
      next: null
    }
    p = p.next
  }
  return head
}

const head = createLinkedList(9000)
let workInProgress = head

function workLoopConcurrent() {
  while (workInProgress !== null && !Scheduler.unstable_shouldYield()) {
    performUnitOfWork(workInProgress)
  }
}

function workLoopSync() {
  while (workInProgress !== null) {
    performUnitOfWork(workInProgress)
  }
}

function performUnitOfWork(unitOfWork) {
  // Simulate the cost of rendering one node
  for (let i = 0; i < 999999; i++) {}
  console.log(performance.now(), unitOfWork.value)
  workInProgress = unitOfWork.next
}

function run(didTimeout) {
  // The current task has expired: finish the rest synchronously
  if (didTimeout) workLoopSync()
  // The current task has not expired: work in slices, yielding when asked
  else workLoopConcurrent()
  if (workInProgress !== null) {
    return run
  }
  return null
}

const NormalPriority = 3
animate() // starts the square's animation (helper defined elsewhere)
Scheduler.unstable_scheduleCallback(NormalPriority, run)
```
This example first creates a linked list of 9000 nodes, assigns its head to `workInProgress`, and then schedules a task that executes `run`, which calls `workLoopSync` or `workLoopConcurrent` depending on whether the current task has expired. The difference: `workLoopSync` traverses the entire remaining list in one go, whereas `workLoopConcurrent` processes part of the list in each time slice and exits its while loop when it needs to yield.
Back in `run`: if `workInProgress` is not null, i.e. the list has not been fully traversed, `run` returns itself so that it keeps running as a continuation of the currently scheduled task. After a few rounds of this, `didTimeout` is true the next time `run` executes, and the remaining work is finished synchronously.
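Stripped of the real Scheduler, this continuation mechanism fits in a few lines. Below is a toy, synchronous sketch (all names hypothetical): a host loop keeps re-invoking a task's callback for as long as the callback returns a function:

```javascript
// Toy continuation loop (illustrative only; the real Scheduler runs each round
// in a separate macro task and checks real elapsed time)
function runTask(callback, maxRounds) {
  let rounds = 0;
  while (typeof callback === 'function' && rounds < maxRounds) {
    rounds++;
    const didTimeout = rounds >= maxRounds; // crude stand-in for expiration
    callback = callback(didTimeout); // a returned function is the continuation
  }
}

let remaining = 10; // units of work left
const processed = [];

function run(didTimeout) {
  // Each "slice" handles at most 3 units, unless the task has timed out
  const budget = didTimeout ? remaining : Math.min(3, remaining);
  for (let i = 0; i < budget; i++) {
    processed.push(remaining--);
  }
  return remaining > 0 ? run : null; // continue until the work is done
}

runTask(run, 100);
console.log(processed.length); // 10 — all units processed across slices
```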
Let’s see how this time slice works:
How time slicing is implemented
unstable_scheduleCallback
```js
function unstable_scheduleCallback(priorityLevel, callback, options) {
  var currentTime = getCurrentTime();

  // Set startTime according to options
  var startTime;
  if (typeof options === 'object' && options !== null) {
    ...
  } else {
    ...
  }

  // Determine the timeout according to the priority: the higher the priority,
  // the smaller the timeout, i.e. the earlier the task expires
  var timeout;
  switch (priorityLevel) {
    ...
  }
  var expirationTime = startTime + timeout;

  // Create a new task
  var newTask = {
    id: taskIdCounter++,
    callback,
    priorityLevel,
    startTime,
    expirationTime,
    sortIndex: -1,
  };
  if (enableProfiling) {
    newTask.isQueued = false;
  }

  if (startTime > currentTime) {
    // The task is delayed, i.e. a delay was specified in options
    newTask.sortIndex = startTime;
    push(timerQueue, newTask);
    if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
      // All tasks are delayed, and this is the one with the earliest start time
      if (isHostTimeoutScheduled) {
        // Cancel an existing timeout.
        cancelHostTimeout();
      } else {
        isHostTimeoutScheduled = true;
      }
      // Schedule a timeout.
      requestHostTimeout(handleTimeout, startTime - currentTime);
    }
  } else {
    newTask.sortIndex = expirationTime;
    push(taskQueue, newTask);
    if (enableProfiling) {
      markTaskStart(newTask, currentTime);
      newTask.isQueued = true;
    }
    // Schedule a host callback. If a host callback is already scheduled,
    // wait until the next time we yield.
    if (!isHostCallbackScheduled && !isPerformingWork) {
      isHostCallbackScheduled = true;
      requestHostCallback(flushWork);
    }
  }

  return newTask;
}
```
This method first computes `currentTime`, `startTime`, and `expirationTime`, then creates a `newTask` whose `callback` attribute is the function to be scheduled.
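The elided `switch` maps each priority level to a timeout before `expirationTime` is computed. As a reference sketch, the mapping in the Scheduler source looks roughly like this (exact values may differ between React versions):

```javascript
// Priority -> timeout mapping as in react/packages/scheduler (may vary by version)
const ImmediatePriority = 1, UserBlockingPriority = 2, NormalPriority = 3, LowPriority = 4, IdlePriority = 5;
const maxSigned31BitInt = 1073741823; // effectively "never expires"

function timeoutForPriority(priorityLevel) {
  switch (priorityLevel) {
    case ImmediatePriority:    return -1;    // already expired when scheduled
    case UserBlockingPriority: return 250;
    case LowPriority:          return 10000;
    case IdlePriority:         return maxSigned31BitInt;
    case NormalPriority:
    default:                   return 5000;
  }
}

console.log(timeoutForPriority(NormalPriority)); // 5000
```

This is also why `didTimeout` was true for the priority-1 task in the continuation example earlier: its timeout is -1, so it is expired the moment it is scheduled.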
Next, the branch taken depends on whether the task has started: if it has not (a delay was specified), it is placed in `timerQueue`; if it has, it goes in `taskQueue`. Both are priority queues implemented as min-heaps: elements in `timerQueue` are sorted by `startTime`, and elements in `taskQueue` by `expirationTime`.
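`push`, `peek`, and `pop` come from the Scheduler's tiny min-heap module. As a sketch of what they do, here is a simplified array-based min-heap keyed on `sortIndex` (modeled on, but not identical to, the real `SchedulerMinHeap`):

```javascript
// Simplified array-based min-heap keyed on sortIndex
function push(heap, node) {
  heap.push(node);
  // Sift the new node up until the heap property holds
  let i = heap.length - 1;
  while (i > 0) {
    const parent = (i - 1) >> 1;
    if (heap[parent].sortIndex <= heap[i].sortIndex) break;
    [heap[parent], heap[i]] = [heap[i], heap[parent]];
    i = parent;
  }
}

function peek(heap) {
  return heap.length === 0 ? null : heap[0];
}

function pop(heap) {
  if (heap.length === 0) return null;
  const first = heap[0];
  const last = heap.pop();
  if (heap.length > 0) {
    heap[0] = last;
    // Sift the moved node down until the heap property holds
    let i = 0;
    while (true) {
      let smallest = i;
      const l = 2 * i + 1, r = 2 * i + 2;
      if (l < heap.length && heap[l].sortIndex < heap[smallest].sortIndex) smallest = l;
      if (r < heap.length && heap[r].sortIndex < heap[smallest].sortIndex) smallest = r;
      if (smallest === i) break;
      [heap[smallest], heap[i]] = [heap[i], heap[smallest]];
      i = smallest;
    }
  }
  return first;
}

const taskQueue = [];
push(taskQueue, { sortIndex: 300 });
push(taskQueue, { sortIndex: 100 });
push(taskQueue, { sortIndex: 200 });
console.log(peek(taskQueue).sortIndex); // 100 — the earliest-expiring task
```

Both operations are O(log n), which keeps scheduling cheap even with many pending tasks.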
We didn't specify a delay in our time-slicing example, so we take the else branch: `newTask` goes into `taskQueue`, and `requestHostCallback(flushWork)` is executed. This step opens a macro task in which `flushWork` runs.
React implements this with MessageChannel:
```js
const channel = new MessageChannel();
const port = channel.port2;
channel.port1.onmessage = performWorkUntilDeadline;

function requestHostCallback(callback) {
  scheduledHostCallback = callback;
  if (!isMessageLoopRunning) {
    isMessageLoopRunning = true;
    port.postMessage(null);
  }
}
```
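The shape of this host layer is easy to model outside React. In the toy sketch below (all names hypothetical), a manually drained array stands in for MessageChannel's macro task queue. (The real Scheduler uses `MessageChannel` rather than `setTimeout(fn, 0)` partly because browsers clamp nested timeouts to roughly 4ms, which would waste most of a 5ms slice.)

```javascript
// Toy host layer: a manually drained "macro task" queue stands in for MessageChannel
const macroTasks = [];
let scheduledHostCallback = null;
let isMessageLoopRunning = false;

function postMessage() {
  macroTasks.push(performWorkUntilDeadline);
}

function requestHostCallback(callback) {
  scheduledHostCallback = callback;
  if (!isMessageLoopRunning) {
    isMessageLoopRunning = true;
    postMessage();
  }
}

function performWorkUntilDeadline() {
  if (scheduledHostCallback !== null) {
    // hasMoreWork mirrors flushWork's return value
    const hasMoreWork = scheduledHostCallback();
    if (hasMoreWork) {
      postMessage(); // schedule the next round in a new "macro task"
    } else {
      isMessageLoopRunning = false;
      scheduledHostCallback = null;
    }
  } else {
    isMessageLoopRunning = false;
  }
}

// A job that needs three rounds to finish
let roundsLeft = 3;
const rounds = [];
requestHostCallback(() => {
  rounds.push(roundsLeft--);
  return roundsLeft > 0;
});

// Drain the fake macro task queue
while (macroTasks.length > 0) macroTasks.shift()();
console.log(rounds); // three rounds of work: [3, 2, 1]
```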
`scheduledHostCallback` caches the `flushWork` that was passed in. When `port.postMessage(null)` executes, `performWorkUntilDeadline` runs:
```js
const performWorkUntilDeadline = () => {
  if (scheduledHostCallback !== null) {
    const currentTime = getCurrentTime();
    // shouldYieldToHost uses deadline to decide whether to yield.
    // yieldInterval is 5ms: once a task has run for more than 5ms in the
    // current time slice, it should yield.
    deadline = currentTime + yieldInterval;
    const hasTimeRemaining = true;
    let hasMoreWork = true;
    try {
      hasMoreWork = scheduledHostCallback(hasTimeRemaining, currentTime);
    } finally {
      if (hasMoreWork) {
        port.postMessage(null);
      } else {
        isMessageLoopRunning = false;
        scheduledHostCallback = null;
      }
    }
  } else {
    isMessageLoopRunning = false;
  }
  needsPaint = false;
};
```
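`shouldYieldToHost` itself is tiny: in the variant without `isInputPending`, it simply compares the current time against the `deadline` set at the start of each round. A sketch of that idea:

```javascript
// Sketch of shouldYieldToHost without isInputPending support
const yieldInterval = 5; // ms per time slice
let deadline = 0;

function getCurrentTime() {
  return performance.now();
}

function shouldYieldToHost() {
  // Yield once the current time slice's deadline has passed
  return getCurrentTime() >= deadline;
}

// At the start of each performWorkUntilDeadline round:
deadline = getCurrentTime() + yieldInterval;
console.log(shouldYieldToHost()); // almost certainly false right after the deadline is set
```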
In `shouldYieldToHost`, the current time is compared against `deadline`, where `yieldInterval` = 5ms: a task that has run for more than 5ms within a time slice should yield. The `scheduledHostCallback` here is the `flushWork` we passed in:
```js
function flushWork(hasTimeRemaining, initialTime) {
  ...
  isPerformingWork = true;
  const previousPriorityLevel = currentPriorityLevel;
  try {
    if (enableProfiling) {
      try {
        return workLoop(hasTimeRemaining, initialTime);
      } catch (error) {
        ...
      }
    } else {
      // No catch in prod code path.
      return workLoop(hasTimeRemaining, initialTime);
    }
  } finally {
    currentTask = null;
    currentPriorityLevel = previousPriorityLevel;
    isPerformingWork = false;
    ...
  }
}
```
`flushWork` mostly just delegates to `workLoop`, which drives `currentTask` through `taskQueue`:
```js
function workLoop(hasTimeRemaining, initialTime) {
  let currentTime = initialTime;
  // Move tasks in timerQueue whose start time has arrived into taskQueue
  advanceTimers(currentTime);
  currentTask = peek(taskQueue);
  while (
    currentTask !== null &&
    !(enableSchedulerDebugging && isSchedulerPaused)
  ) {
    if (
      currentTask.expirationTime > currentTime &&
      (!hasTimeRemaining || shouldYieldToHost())
    ) {
      // The current task has not expired and the time slice is used up
      break;
    }
    // callback is what we passed in when calling unstable_scheduleCallback
    const callback = currentTask.callback;
    if (typeof callback === 'function') {
      // Reset the current task's callback to null
      currentTask.callback = null;
      currentPriorityLevel = currentTask.priorityLevel;
      const didUserCallbackTimeout = currentTask.expirationTime <= currentTime;
      markTaskRun(currentTask, currentTime);
      const continuationCallback = callback(didUserCallbackTimeout);
      currentTime = getCurrentTime();
      if (typeof continuationCallback === 'function') {
        // If the callback returns a function, the current task continues to
        // schedule that function, so one task can do many things
        currentTask.callback = continuationCallback;
        markTaskYield(currentTask, currentTime);
      } else {
        // The callback did not return a function
        if (enableProfiling) {
          markTaskCompleted(currentTask, currentTime);
          currentTask.isQueued = false;
        }
        // This check is needed because the callback may have inserted a
        // higher-priority task at the top of the queue
        if (currentTask === peek(taskQueue)) {
          pop(taskQueue);
        }
      }
      advanceTimers(currentTime);
    } else {
      pop(taskQueue);
    }
    currentTask = peek(taskQueue);
  }
  if (currentTask !== null) {
    // Returning true tells performWorkUntilDeadline there is more work,
    // so it posts another message and starts a new round
    return true;
  } else {
    // taskQueue has no tasks; check whether timerQueue has any
    const firstTimer = peek(timerQueue);
    if (firstTimer !== null) {
      // requestHostTimeout is just setTimeout: take the first (earliest
      // starting) task in timerQueue, wait firstTimer.startTime - currentTime,
      // then run handleTimeout
      requestHostTimeout(handleTimeout, firstTimer.startTime - currentTime);
    }
    return false;
  }
}
```
When the loop ends, there are two cases:

1. `currentTask` is not null. `workLoop` returns `true`, telling `performWorkUntilDeadline` that there is still work to do, and `performWorkUntilDeadline` opens a new macro task to continue processing: a new round of `performWorkUntilDeadline` -> `flushWork` -> `workLoop` begins.
2. `currentTask` is null. If `timerQueue` is not empty, we could in principle handle it the same way as the first case, since tasks in `timerQueue` always start at some point during scheduling; but that would create many macro tasks that perform no work at all and go to waste. A more efficient approach is to start a macro task directly via `setTimeout`, whose delay is the difference between the start time of `timerQueue`'s first task (the one that starts earliest) and the current time. In the macro task started by `setTimeout`, `handleTimeout` runs:
```js
function handleTimeout(currentTime) {
  isHostTimeoutScheduled = false;
  // Move tasks whose start time has arrived from timerQueue to taskQueue
  advanceTimers(currentTime);
  if (!isHostCallbackScheduled) {
    if (peek(taskQueue) !== null) {
      isHostCallbackScheduled = true;
      requestHostCallback(flushWork);
    } else {
      const firstTimer = peek(timerQueue);
      if (firstTimer !== null) {
        requestHostTimeout(handleTimeout, firstTimer.startTime - currentTime);
      }
    }
  }
}
```
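`advanceTimers`, which appears in both `workLoop` and `handleTimeout`, moves every timer whose `startTime` has passed from `timerQueue` into `taskQueue`, re-keying `sortIndex` from `startTime` to `expirationTime`. A simplified sketch, with sorted arrays standing in for the min-heaps:

```javascript
// Simplified advanceTimers: sorted arrays stand in for the two min-heaps
const timerQueue = []; // sorted by startTime (sortIndex = startTime)
const taskQueue = [];  // sorted by expirationTime (sortIndex = expirationTime)

function peek(queue) {
  return queue.length === 0 ? null : queue[0];
}

function advanceTimers(currentTime) {
  let timer = peek(timerQueue);
  while (timer !== null) {
    if (timer.callback === null) {
      // The timer was cancelled: drop it
      timerQueue.shift();
    } else if (timer.startTime <= currentTime) {
      // The timer has started: move it to taskQueue, now keyed by expirationTime
      timerQueue.shift();
      timer.sortIndex = timer.expirationTime;
      taskQueue.push(timer);
      taskQueue.sort((a, b) => a.sortIndex - b.sortIndex);
    } else {
      // The earliest timer has not started yet, so none of the later ones have
      return;
    }
    timer = peek(timerQueue);
  }
}

timerQueue.push(
  { startTime: 100, expirationTime: 5100, sortIndex: 100, callback: () => {} },
  { startTime: 900, expirationTime: 5900, sortIndex: 900, callback: () => {} }
);
advanceTimers(500); // only the first timer has started by t = 500
console.log(taskQueue.length, timerQueue.length); // 1 1
```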
`requestHostCallback(flushWork)` is then called and the rest of the flow proceeds as before. You may wonder how `peek(taskQueue)` could be null here: between `requestHostTimeout` being scheduled and `handleTimeout` running, the user may have cancelled the original task.
At this point, the general operation process of time slice has been analyzed, which can be expressed as follows:
Conclusion
In this article, we first introduced React's Concurrent mode with an example, then covered the basic usage of Scheduler and used it to simulate how React renders with time slices in Concurrent mode, and finally analyzed how time slicing is implemented.
References
- Introducing Concurrent Mode (Experimental)
- React Technology Revealed