Preface

This article covers many of the principles and source-code methods in the render link, so before reading it you should have a general understanding of React's render process. If you don't, you can read my first source-code analysis article:

How do you render the React code into the DOM?

This article mainly analyzes the double-buffering mode of the update link in the Fiber architecture, and how time slicing and priorities are implemented in Concurrent mode.

Double buffering

The main purpose of the double-buffering mode is to maximize reuse of Fiber nodes and reduce performance overhead.

In the previous article, we learned that two trees are created during the first render: the current tree and the workInProgress tree. They are essentially two buffers: while the current tree is rendered on the page, all data updates are taken over by the workInProgress tree, which silently completes all changes in memory. When the commit phase of the next render finishes, the fiberRoot object's current pointer is switched to the workInProgress tree, which then becomes the current tree rendered on the page.

Let’s use a practical example to help understand:

import { useState } from 'react';

function App() {
  const [state, setState] = useState(0);
  return (
    <div className="App">
      <div onClick={() => { setState(state + 1) }}>
        <p>{state}</p>
      </div>
    </div>
  );
}

Initialization

This example does one simple thing: each click adds 1 to the number. In the demo above, after the render phase and before the commit phase, the two Fiber trees look like the figure below.

When the commit phase is complete, the workInProgress tree is rendered to the page, and the fiberRoot object's current pointer points to the workInProgress tree, which is now the currently rendered Fiber tree.
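To make the swap concrete, here is a rough mental model in plain JavaScript (not React's actual code — the names fiberRoot, current, and alternate simply mirror the concepts above):

```javascript
// A minimal sketch of double buffering: two trees linked by `alternate`
// pointers, with fiberRoot.current flipping between them on each commit.
const treeA = { name: 'A', alternate: null };
const treeB = { name: 'B', alternate: null };
treeA.alternate = treeB;
treeB.alternate = treeA;

// treeA is the tree currently rendered on screen
const fiberRoot = { current: treeA };

function commit(root) {
  // All new work happened on the off-screen tree (the workInProgress tree)
  const finishedWork = root.current.alternate;
  // ...DOM mutations would be applied here...
  root.current = finishedWork; // the swap: workInProgress becomes current
}

commit(fiberRoot);
console.log(fiberRoot.current.name); // 'B'
commit(fiberRoot);
console.log(fiberRoot.current.name); // 'A'
```

Neither tree is ever thrown away; the two buffers just keep trading roles.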

First Update

Click the number and we enter the first update. Focus on beginWork's call to the createWorkInProgress method in this link.

In the figure above, the child node under the workInProgress tree looks up its counterpart through current.alternate, but the current tree has no corresponding child node yet, so current.alternate is null and the `=== null` branch is entered, which creates for the current tree a child node identical to the workInProgress tree's child node.

Then, when the commit phase is over, this current tree is rendered onto the page, and the fiberRoot object's current pointer switches back to the current tree, as shown in the following figure.

Second Update

Clicking the number again triggers a second update of state. Again, look at the createWorkInProgress method.

At this point current.alternate exists, because both trees have been built. So every time beginWork triggers a createWorkInProgress call, it consistently goes into the else branch, which simply reuses the existing node. This is how the double-buffering mechanism achieves node reuse.
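The reuse logic described above can be condensed into a simplified sketch (heavily trimmed from the real createWorkInProgress; most fields and the props-handling details are omitted):

```javascript
// Simplified sketch of createWorkInProgress's reuse logic.
// On the first update current.alternate is null, so a new node is created;
// on every later update the existing alternate node is reused.
function createWorkInProgress(current, pendingProps) {
  let workInProgress = current.alternate;
  if (workInProgress === null) {
    // No alternate yet: create a new node and link the two trees together
    workInProgress = { type: current.type, pendingProps, alternate: current };
    current.alternate = workInProgress;
  } else {
    // Alternate exists: reuse it, only refreshing the fields that changed
    workInProgress.pendingProps = pendingProps;
  }
  return workInProgress;
}

const current = { type: 'p', pendingProps: { n: 0 }, alternate: null };
const wip1 = createWorkInProgress(current, { n: 1 }); // creates a node
const wip2 = createWorkInProgress(current, { n: 2 }); // reuses the same node
console.log(wip1 === wip2); // true
```

After the first update both buffers exist, so no further allocation is needed: only the changed fields are rewritten on each pass.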

Key elements of the update link

The first article in this series analyzed the first-render link; the update link is essentially the same as first render.

The first render can be interpreted as a special kind of update: ReactDOM.render, setState, and useState are all ways of triggering an update. The invocation links of these methods are similar because they all end up in the same update workflow by creating update objects.

Following the demo flow, clicking the number triggers dispatchAction, in which the update object is created.

After the update is created, the flow proceeds just as it does in updateContainer during first render (in the first-render link, the update is created inside that method). There are two main method calls:

enqueueUpdate(current, update);
scheduleUpdateOnFiber(current, lane, eventTime);
  • enqueueUpdate: queues the update. Each Fiber node has its own updateQueue, which stores multiple updates in the form of a linked list. In the render phase, the contents of the updateQueue become the basis on which the new state of the Fiber node is computed.

  • scheduleUpdateOnFiber: schedules the update. In synchronous mode, this method goes on to trigger the render phase through performSyncWorkOnRoot.
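As an illustration of the enqueueUpdate side, the circular linked list it maintains can be sketched roughly like this (a simplified model, not the full React implementation):

```javascript
// Simplified sketch of enqueueUpdate: a fiber's updateQueue keeps pending
// updates in a circular linked list, with `pending` pointing at the last one.
function enqueueUpdate(fiber, update) {
  const queue = fiber.updateQueue;
  const pending = queue.pending;
  if (pending === null) {
    // First update: it points at itself, forming a one-element circle
    update.next = update;
  } else {
    // Insert after the last update, keeping the circle intact
    update.next = pending.next;
    pending.next = update;
  }
  queue.pending = update;
}

const fiber = { updateQueue: { pending: null } };
const u1 = { payload: 1, next: null };
const u2 = { payload: 2, next: null };
enqueueUpdate(fiber, u1);
enqueueUpdate(fiber, u2);

// `pending` is the most recent update; pending.next is the oldest
console.log(fiber.updateQueue.pending.payload); // 2
console.log(fiber.updateQueue.pending.next.payload); // 1
```

The circular shape lets the render phase find both the first and the last update from the single `pending` pointer.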

One point to note here: what dispatchAction schedules is the node that is currently triggering the update, which distinguishes it from the mount process, where updateContainer schedules the root node directly. Indeed, in an update scenario, most updates are not triggered from the root node, yet the render phase always starts from the root. So inside scheduleUpdateOnFiber, there is a method that does this:

It starts at the current Fiber node, traverses up to the root node, and returns the root. So a React update always starts from the root node and walks the entire Fiber tree, which is why most of our performance optimizations focus on reducing component re-renders.
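That upward walk can be sketched in a few lines (a stripped-down analogue of React's markUpdateLaneFromFiberToRoot, with all lane bookkeeping omitted):

```javascript
// Walk from the fiber that triggered the update up to the root via `return`
// pointers (each fiber's `return` points at its parent), and hand the root
// back so the render phase can start from the top of the tree.
function getRootForUpdate(sourceFiber) {
  let node = sourceFiber;
  while (node.return !== null) {
    node = node.return;
  }
  return node; // the root fiber
}

const root = { name: 'HostRoot', return: null };
const parent = { name: 'App', return: root };
const child = { name: 'button', return: parent };

console.log(getRootForUpdate(child).name); // 'HostRoot'
```

No matter how deep the triggering component sits, scheduling always hands the root back to the scheduler.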

Another important judgment in scheduleUpdateOnFiber is the logic that distinguishes synchronous rendering from asynchronous rendering.

In the earlier analysis of the synchronous first-render flow we ended up in the performSyncWorkOnRoot method; in asynchronous mode, the ensureRootIsScheduled method runs instead. Here is its core logic:

if (newCallbackPriority === SyncLanePriority) {
  // Entry point for rendering a synchronous update
  newCallbackNode = scheduleSyncCallback(performSyncWorkOnRoot.bind(null, root));
} else {
  // Convert the lane priority of the current task into a priority Scheduler understands
  var schedulerPriorityLevel = lanePriorityToSchedulerPriority(newCallbackPriority);
  // Entry point for rendering an asynchronous update
  newCallbackNode = scheduleCallback(schedulerPriorityLevel, performConcurrentWorkOnRoot.bind(null, root));
}

From this logic we can see that React decides whether to schedule performSyncWorkOnRoot or performConcurrentWorkOnRoot next based on the priority of the current update task. scheduleSyncCallback and scheduleCallback are the synchronous and asynchronous scheduling entry points respectively, and both perform task scheduling by calling unstable_scheduleCallback internally, which is one of the core methods exported by Scheduler.

Scheduler is what gives Fiber its two core capabilities: time slicing and priority scheduling.

Time slicing

So what does time slicing do?

import React from 'react';

function App() {
  const arr = new Array(1000).fill(0);
  return (
    <div className="App">
      <div className="container">
        {
          arr.map((i, index) => <p key={index}>{`test text line ${index}`}</p>)
        }
      </div>
    </div>
  );
}

The code above renders 1000 p tags to the page. When we use ReactDOM.render, this is a synchronous process: all the work is executed within one macro task. Depending on the performance of the user's computer and browser, this macro task may take 100ms, 200ms, 300ms or even longer. Because the JS thread and the render thread are mutually exclusive, the browser's render thread is blocked for the duration of this long macro task. We know that browsers refresh at 60Hz, i.e. roughly every 16.6ms, so a macro task this long blocks the rendering thread and causes noticeable stuttering and dropped frames.

Time slicing "cuts" this long-running macro task into a series of short macro tasks, each of which tries to keep its running time below the browser's refresh interval, leaving time for the rendering thread to run smoothly. Let's look at two screenshots. The first is the call stack in synchronous mode.

The next is the call stack after changing the ReactDOM.render call to createRoot plus render, i.e. Concurrent mode.

We can see that one long "big task" has been cut into many short "small tasks".

How is time slice implemented?

In synchronous mode, React builds the Fiber tree through the workLoopSync loop:

function workLoopSync() {
  // As long as workInProgress is not null
  while (workInProgress !== null) {
    // execute performUnitOfWork on it
    performUnitOfWork(workInProgress);
  }
}

Once started, this loop cannot be interrupted.

In asynchronous mode, React calls performConcurrentWorkOnRoot, which builds the Fiber tree through renderRootConcurrent's call to workLoopConcurrent.

function workLoopConcurrent() {
  // Perform work until Scheduler asks us to yield
  while (workInProgress !== null && !shouldYield()) {
    performUnitOfWork(workInProgress);
  }
}

We can see that the asynchronous version just adds a shouldYield() check: when shouldYield() returns true, the while loop stops, yielding the main thread back to the render thread.

The body of shouldYield is actually unstable_shouldYield, a method exported from Scheduler, and it is very simple. The source address:

export function unstable_shouldYield() {
  return getCurrentTime() >= deadline;
}

It returns true when the current time reaches the deadline of the current time slice, which stops the workLoopConcurrent loop.

Let’s see how deadline is defined

deadline = getCurrentTime() + yieldInterval;

getCurrentTime() returns the current time, and yieldInterval is a constant: 5ms. The source address:

const yieldInterval = 5;

Therefore, a time slice is 5ms long (in practice slightly longer than 5ms, because expiration can only be detected by shouldYield() after the current Fiber node has finished processing).

When the workLoopConcurrent loop is interrupted, React re-dispatches (via setTimeout or MessageChannel) to check whether there is an event response, a higher-priority task, or other code that needs to run first, and executes it if so. If not, it re-creates the workLoopConcurrent loop and continues building the Fiber nodes for the remaining work.
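The whole interrupt-and-resume cycle can be imitated outside React with a toy work loop (an illustration only: the 5ms slice, shouldYield, and the MessageChannel-style continuation are modeled here with Date.now and setTimeout):

```javascript
// A toy time-sliced work loop: process units of work until the 5ms slice
// expires, then yield the main thread and continue in a new macro task.
const yieldInterval = 5;
let deadline = 0;

function shouldYield() {
  return Date.now() >= deadline;
}

function workLoop(units, onDone) {
  deadline = Date.now() + yieldInterval; // start a fresh time slice
  while (units.length > 0 && !shouldYield()) {
    units.shift()(); // performUnitOfWork
  }
  if (units.length > 0) {
    // Slice used up: give the main thread back, then resume in a new task
    setTimeout(() => workLoop(units, onDone), 0);
  } else {
    onDone();
  }
}

// Usage: 1000 small units of work, processed across one or more macro tasks
const results = [];
const units = Array.from({ length: 1000 }, (_, i) => () => results.push(i));
workLoop(units, () => console.log(results.length)); // eventually logs 1000
```

Between slices the browser (or Node's event loop) is free to handle rendering and events, which is exactly the gap the render thread needs.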

Priority scheduling

In the update link, both scheduleSyncCallback and scheduleCallback call unstable_scheduleCallback to initiate scheduling. unstable_scheduleCallback is a core method exported by Scheduler, and it applies different scheduling logic to tasks based on their priority information. The source address:

function unstable_scheduleCallback(priorityLevel, callback, options) {
  // Get the current time
  var currentTime = getCurrentTime();
  // Declare startTime, the expected start time of the task
  var startTime;
  // The following handles the options argument
  if (typeof options === 'object' && options !== null) {
    var delay = options.delay;
    // If a delay is specified, add it to the start time
    if (typeof delay === 'number' && delay > 0) {
      startTime = currentTime + delay;
    } else {
      startTime = currentTime;
    }
  } else {
    startTime = currentTime;
  }
  // timeout is used below to derive expirationTime
  var timeout;
  // Pick the timeout value according to priorityLevel
  switch (priorityLevel) {
    case ImmediatePriority:
      timeout = IMMEDIATE_PRIORITY_TIMEOUT;
      break;
    case UserBlockingPriority:
      timeout = USER_BLOCKING_PRIORITY_TIMEOUT;
      break;
    case IdlePriority:
      timeout = IDLE_PRIORITY_TIMEOUT;
      break;
    case LowPriority:
      timeout = LOW_PRIORITY_TIMEOUT;
      break;
    case NormalPriority:
    default:
      timeout = NORMAL_PRIORITY_TIMEOUT;
      break;
  }
  // The higher the priority, the smaller the timeout, and the smaller the expirationTime
  var expirationTime = startTime + timeout;
  // Create a task object
  var newTask = {
    id: taskIdCounter++,
    callback,
    priorityLevel,
    startTime,
    expirationTime,
    sortIndex: -1,
  };
  if (enableProfiling) {
    newTask.isQueued = false;
  }
  // If the current time is less than the start time, the task can be delayed (it has not expired)
  if (startTime > currentTime) {
    // Push the unexpired task into timerQueue
    newTask.sortIndex = startTime;
    push(timerQueue, newTask);

    // If there is no task in taskQueue and the current task is the first task in timerQueue
    if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
      // All tasks are delayed, and this one has the earliest start time
      if (isHostTimeoutScheduled) {
        // Cancel an existing timeout
        cancelHostTimeout();
      } else {
        isHostTimeoutScheduled = true;
      }
      // Schedule a delayed call that checks whether the current task has expired when it comes due
      requestHostTimeout(handleTimeout, startTime - currentTime);
    }
  } else {
    // Otherwise the current time is greater than or equal to startTime, meaning the task has expired
    newTask.sortIndex = expirationTime;
    // Push the expired task into taskQueue
    push(taskQueue, newTask);
    // ...
    // Execute the tasks in taskQueue
    requestHostCallback(flushWork);
  }
  return newTask;
}

The main job of unstable_scheduleCallback is to create a task object for the current work, then push it into timerQueue or taskQueue based on its startTime, and finally initiate either a delayed or an immediate scheduling call.

There are a few concepts you need to know:

  • startTime: the start time of the task.
  • expirationTime: a priority-related value; the smaller a task's expirationTime, the higher its priority.
  • timerQueue: a min-heap sorted by startTime, which stores tasks whose startTime is greater than the current time (that is, not yet due to run).
  • taskQueue: a min-heap sorted by expirationTime, which stores tasks whose startTime is less than or equal to the current time (that is, already expired).

A heap is a special kind of complete binary tree. A complete binary tree is called a min-heap if the value of each node is no greater than the values of its left and right children. The min-heap's insert and delete logic guarantees that no matter how we add or remove elements, the root node is always the smallest element.
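Scheduler ships its own tiny heap implementation behind push, peek, and pop; the behavior can be illustrated with a minimal min-heap keyed on sortIndex (a simplified stand-in, not Scheduler's actual code):

```javascript
// Minimal min-heap keyed on `sortIndex`: push and pop re-heapify, and
// peek returns the task with the smallest sortIndex in O(1).
function push(heap, task) {
  heap.push(task);
  // Sift the new task up until its parent is no larger
  let i = heap.length - 1;
  while (i > 0) {
    const parent = (i - 1) >> 1;
    if (heap[parent].sortIndex <= heap[i].sortIndex) break;
    [heap[parent], heap[i]] = [heap[i], heap[parent]];
    i = parent;
  }
}

function peek(heap) {
  return heap.length === 0 ? null : heap[0];
}

function pop(heap) {
  if (heap.length === 0) return null;
  const top = heap[0];
  const last = heap.pop();
  if (heap.length > 0) {
    heap[0] = last;
    // Sift the moved task down below any smaller child
    let i = 0;
    for (;;) {
      const left = 2 * i + 1;
      const right = 2 * i + 2;
      let smallest = i;
      if (left < heap.length && heap[left].sortIndex < heap[smallest].sortIndex) smallest = left;
      if (right < heap.length && heap[right].sortIndex < heap[smallest].sortIndex) smallest = right;
      if (smallest === i) break;
      [heap[smallest], heap[i]] = [heap[i], heap[smallest]];
      i = smallest;
    }
  }
  return top;
}

const taskQueue = [];
push(taskQueue, { name: 'low', sortIndex: 300 });
push(taskQueue, { name: 'urgent', sortIndex: 10 });
push(taskQueue, { name: 'normal', sortIndex: 100 });
console.log(peek(taskQueue).name); // 'urgent': smallest sortIndex is on top
```

This is why Scheduler can always grab the most urgent task in constant time, regardless of insertion order.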

Let’s look at the core logic

if (startTime > currentTime) {
    // Push the unexpired task into timerQueue
    newTask.sortIndex = startTime;
    push(timerQueue, newTask);

    // If there is no task in taskQueue and the current task is the first task in timerQueue
    if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
      ......

      // Schedule a delayed call that checks whether the current task has expired when it comes due
      requestHostTimeout(handleTimeout, startTime - currentTime);
    }
  } else {
    // Otherwise the current time is greater than or equal to startTime, meaning the task has expired
    newTask.sortIndex = expirationTime;
    // Push the expired task into taskQueue
    push(taskQueue, newTask);
    // ...
    // Execute the tasks in taskQueue
    requestHostCallback(flushWork);
  }

If the current task is judged to be unexpired, its sortIndex is set to startTime and it is pushed into timerQueue. taskQueue stores expired tasks, so if peek(taskQueue) returns null, taskQueue is empty and there are no expired tasks. If there are no expired tasks and the current task (newTask) is the earliest unexpired task in timerQueue, unstable_scheduleCallback calls requestHostTimeout to set up a delayed call for the current task.

Note that this delayed call (i.e., handleTimeout) does not directly schedule the execution of the current task. It simply takes the task out of timerQueue once it expires, adds it to taskQueue, and triggers a call to flushWork. The actual execution happens in flushWork: flushWork calls workLoop, which executes the tasks in taskQueue one by one until the scheduling process is paused (because the time slice has run out) or the queue is completely emptied.
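The hand-off between the two queues can be modeled in a few lines (a simplified stand-in for the timer-advancing step inside Scheduler, using a plain sorted array instead of a real heap):

```javascript
// When a delayed task comes due, it is moved from timerQueue into taskQueue
// (re-keyed by expirationTime); only tasks in taskQueue actually execute.
function advanceTimers(timerQueue, taskQueue, currentTime) {
  // Assume both queues are arrays kept sorted by sortIndex (heap stand-in)
  while (timerQueue.length > 0 && timerQueue[0].startTime <= currentTime) {
    const task = timerQueue.shift();
    task.sortIndex = task.expirationTime;
    taskQueue.push(task);
    taskQueue.sort((a, b) => a.sortIndex - b.sortIndex);
  }
}

const timerQueue = [
  { name: 'delayed', startTime: 50, expirationTime: 300, sortIndex: 50 },
];
const taskQueue = [];

advanceTimers(timerQueue, taskQueue, 10); // not due yet: nothing moves
console.log(taskQueue.length); // 0
advanceTimers(timerQueue, taskQueue, 60); // now due: moved to taskQueue
console.log(taskQueue[0].name, taskQueue[0].sortIndex); // 'delayed' 300
```

Notice the re-keying: a task waits in timerQueue sorted by startTime, but competes in taskQueue sorted by expirationTime, i.e. by priority.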

React initiates task scheduling in two ways: setTimeout and MessageChannel. When the host environment does not support MessageChannel, it falls back to setTimeout. Either way, the scheduled work runs as an asynchronous macro task, invoked in a later iteration of the event loop.

Thank you

If this article helped you, please give it a thumbs up. Thanks!