Last week I gave a talk at my company on React runtime optimization. What follows is the written version; it is fairly long and dense with facts.

React 18 was released two months ago. The last time I read the React source code in detail was three years ago, on React 15. Back then I mainly studied the virtual DOM rendering mechanism, the implementation of setState, and React’s synthetic event system:

  • React in depth: the setState execution mechanism
  • The React event mechanism
  • An in-depth analysis of React’s virtual DOM rendering process and features
  • React: from Mixin to HOC to Hook

After that study, I couldn’t help but swear I would never read the React source code again. Compared to other frameworks, it was just too hard to read.

However, over the React 16 to 18 updates, React’s infrastructure has changed a great deal and introduced many interesting new features that would not have been possible under the old architecture. This rekindled my curiosity about what had happened inside React, so I decided to re-read the latest version of the source code.

Fortunately, as the saying goes, those who come later enjoy the shade of the trees their predecessors planted: experts in the community have already produced excellent source-code walkthroughs, such as Kasong’s React Technology Revealed and 7KMS’s illustrated React source code. Building on those tutorials, I reorganized the source-code layout of the new React architecture, walked through the overall process again, and studied the important modules on my own.

I don’t think there is a need for yet another source-code interpretation series; the resources above already do a good job. Instead, I’d like to give you a brief look at React’s main direction and what the last few major releases have actually done.

React has gone through several major releases, from 15 to 18, since 2016. Aside from Hooks, few headline features shipped in that period, until some time ago, when React’s long silence finally ended with a wave of new APIs.

In those earlier releases, however, the React team did a great deal of groundwork, giving us concepts like Concurrent Mode, Fiber, Suspense, Lanes, Scheduler, and Concurrent Rendering, which can be intimidating for novice developers.

The main objective of this article is to sort out these concepts, look at what React has been working on in recent years, and survey some of the latest features. We will not analyze every scheduling process in detail; instead, we will pick out selected source code to interpret some of the optimization strategies.

So why is the runtime the subject of this article? Let’s compare the designs of a few major frameworks.

Design philosophies of the major JS frameworks

React is a runtime framework. After data changes, React does not manipulate the DOM directly. Instead, it generates a new so-called virtual DOM, which helps us solve cross-platform and compatibility problems, and computes the minimal set of DOM operations via a diff algorithm. All of this happens at runtime.

Svelte, which has become very popular recently, is a typical precompiled framework. As developers, we only write templates and data; after Svelte’s compilation and preprocessing, the code is essentially turned into native DOM operations, which is why Svelte’s performance is the closest to native JS.

Vue, meanwhile, is a framework that strikes a good tradeoff between runtime and precompilation. It keeps the virtual DOM, but uses its reactivity system to control the granularity of virtual DOM updates, and at compile time it performs enough optimization to update the virtual DOM on demand.

So let’s take a look at what compile-time optimization is.

What is compile-time optimization?

Vue uses template syntax, which is constrained by design: we can only write code with prescribed directives such as v-if and v-for. This is less dynamic, but because the syntax is enumerable, the compiler can make far more predictions ahead of time, giving Vue better runtime performance. Let’s look at a concrete compile-time optimization in Vue 3.0.

The traditional VDOM Diff algorithm always traverses layer by layer according to the hierarchical structure of the VDOM tree, so the Diff performance is positively related to the size of the template, and has nothing to do with the number of dynamic nodes. In cases where some components have only a small number of dynamic nodes throughout the template, these traversals are a waste of performance.

For example, in the code example above, the static nodes cannot change during the component update phase. If we could skip the static content during the diff phase, we would avoid useless DOM tree traversal and comparison.

Vue 3.0 has exactly such an optimization strategy. Its compiler creates a PatchFlag for each virtual DOM node according to the node’s dynamic properties; for example, a node with dynamic text or a dynamic class is stamped with the corresponding patch flag.

Then patchFlag can be combined with block tree to achieve targeted update of different nodes.
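To make this concrete, here is a deliberately simplified sketch in plain JavaScript of how a patch flag lets the diff touch only what can change. This is my own illustration, not Vue’s implementation; the PatchFlags values and the fake `el` object are assumptions made for the example.

```javascript
// Illustrative patch flags (Vue 3 uses a similar bitmask internally)
const PatchFlags = { TEXT: 1, CLASS: 2 };

// A compiler would emit vnodes whose patchFlag describes what can ever change
function createVNode(tag, props, children, patchFlag = 0) {
  return { tag, props, children, patchFlag };
}

// A diff that consults the flag: vnodes with no flag set are skipped entirely
function patchElement(el, prevVNode, nextVNode) {
  if (nextVNode.patchFlag & PatchFlags.TEXT) {
    if (prevVNode.children !== nextVNode.children) {
      el.textContent = nextVNode.children; // only compare what the flag marks dynamic
    }
  }
  if (nextVNode.patchFlag & PatchFlags.CLASS) {
    if (prevVNode.props.class !== nextVNode.props.class) {
      el.className = nextVNode.props.class;
    }
  }
  // no flag set: nothing dynamic, no comparison work at all
}

// Fake host element standing in for a real DOM node
const el = { textContent: '', className: '' };
patchElement(
  el,
  createVNode('p', { class: 'a' }, 'old', PatchFlags.TEXT),
  createVNode('p', { class: 'a' }, 'new', PatchFlags.TEXT)
);
```

With only the TEXT flag set, the class is never even compared; the diff cost tracks the number of dynamic bindings rather than the size of the template.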

Going all in on the runtime

React itself is written in pure JS, which is very flexible, but it also makes it difficult to do much at compile time. Compile-time optimizations like the one above are difficult to implement. As a result, we can see that the optimizations for several major versions of React are mostly at runtime.

So, what are our main concerns at runtime?

First, there is the CPU problem. Mainstream browsers refresh at 60Hz, i.e. 60 times per second, or roughly one frame every 16.6ms. Since the GUI rendering thread and the JS thread are mutually exclusive, JS execution and the browser’s layout and painting cannot run at the same time.

Within each 16.6ms frame, the browser needs to complete both JS execution and style reflow and repaint. If JS execution runs past 16.6ms, the frame has no time left for layout and painting, and the page appears sluggish.

The IO problem is easier to understand: many components have to wait on network latency. Given that the latency exists, how do we keep users from perceiving it? That is what we need to solve.

React’s optimizations are implemented at runtime, and these are the major issues the runtime has to resolve. Below are some of the major updates and changes in recent React releases.

React 15 – Semi-automatic batching

Let’s start with React 15. This was probably the release with which React took off, and after it React’s major updates came more and more slowly.

architecture

The framework of this edition is relatively simple, consisting mainly of two parts: Reconciler and Renderer.

  • Reconciler (coordinator) – responsible for calling render to generate the virtual DOM, diffing it, and finding the virtual DOM nodes that changed
  • Renderer (renderer) – responsible for receiving the Reconciler’s notifications and rendering the changed components in the current host environment, such as the browser; each host environment has its own Renderer.

Batching

React 15 already shipped with a batching optimization. I analyzed its setState mechanics previously in “Explore the implementation mechanism of setState by practical problems”.

For example, the following code calls setState four times in componentDidMount, the last two inside a setTimeout callback.

class Example extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      val: 0
    };
  }

  componentDidMount() {
    this.setState({val: this.state.val + 1});
    console.log(this.state.val);
    this.setState({val: this.state.val + 1});
    console.log(this.state.val);

    setTimeout(() => {
      this.setState({val: this.state.val + 1});
      console.log(this.state.val);
      this.setState({val: this.state.val + 1});
      console.log(this.state.val);
    }, 0);
  }

  render() {
    return null;
  }
}

Let’s consider two scenarios:

  • Assuming React had no batching mechanism at all, each setState would immediately trigger a render, and the print order would be 1, 2, 3, 4
  • Assuming React had a perfect batching mechanism, all renders would be processed together after the entire function finished, and the print order would be 0, 0, 0, 0

In fact, in this version the code prints 0, 0, 2, 3. The setState call itself is synchronous; the apparent asynchrony comes from React’s batching mechanism, which does not cover the setTimeout callback.

If every setState immediately triggered a render, multiple calls in quick succession would keep the JS thread busy, the browser would drop frames, and the page would stall. React introduced the batching mechanism mainly to merge updates triggered in the same context into a single update.

The _processPendingState function merges the state queue and returns a merged state.

_processPendingState: function (props, context) {
  var inst = this._instance;
  var queue = this._pendingStateQueue;
  var replace = this._pendingReplaceState;
  this._pendingReplaceState = false;
  this._pendingStateQueue = null;

  if (!queue) {
    return inst.state;
  }

  if (replace && queue.length === 1) {
    return queue[0];
  }

  var nextState = _assign({}, replace ? queue[0] : inst.state);
  for (var i = replace ? 1 : 0; i < queue.length; i++) {
    var partial = queue[i];
    _assign(nextState, typeof partial === 'function' ? partial.call(inst, nextState, props, context) : partial);
  }

  return nextState;
},

Let’s just focus on the following code:

_assign(nextState, typeof partial === 'function' ? partial.call(inst, nextState, props, context) : partial);

If an object is passed in, it will obviously be merged into one:

Object.assign(
  nextState,
  {index: state.index + 1},
  {index: state.index + 1}
)

If a function is passed in, its state argument is the result of the previous merge, so the computation stays accurate.
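A stripped-down model of that merge loop (my own sketch, not React’s code) shows why object-form updates collapse while functional updates accumulate:

```javascript
// Sketch of merging a pending-state queue into one state object
function processPendingState(prevState, queue) {
  const nextState = Object.assign({}, prevState);
  for (const partial of queue) {
    // function partials receive the state merged so far; object partials just overwrite
    Object.assign(nextState, typeof partial === 'function' ? partial(nextState) : partial);
  }
  return nextState;
}

// Two object partials, both computed from the SAME stale state, collapse into +1
const collapsed = processPendingState({ val: 0 }, [{ val: 1 }, { val: 1 }]);

// Two functional partials each see the previous merge result, so they accumulate to +2
const accumulated = processPendingState({ val: 0 }, [
  s => ({ val: s.val + 1 }),
  s => ({ val: s.val + 1 }),
]);
```

This is exactly why the functional form of setState is recommended when the next state depends on the previous one.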

No matter how many times setState is called in a context where batching applies (React lifecycles, synthetic events), the update is not performed immediately. Instead, the state to be updated is stored in _pendingStateQueue and the component to be updated is stored in dirtyComponents. When the outermost update finishes, for example once the topmost component’s componentDidMount has run, isBatchingUpdates is set back to false and the accumulated setState calls are finally executed.

React uses a batchedUpdates function to call all batch-enabled functions.

batchedUpdates(onClick, e);

export function batchedUpdates<A, R>(fn: A => R, a: A): R {
  // ...
  try {
    return fn(a);
  } finally {
    // ...
  }
}

Since batchedUpdates itself is called synchronously, by the time any asynchronous code inside fn runs, the batch has already finished. This version of batching therefore cannot cover asynchronous functions, which is why it is called semi-automatic batching.

React does provide unstable_batchedUpdates so that developers can batch such updates manually.
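The idea behind this flag-based batching can be modeled in a few lines. This is a sketch of the mechanism, not React’s source; the names enqueueSetState and renderCount are invented for illustration. It also shows why a call made after the synchronous batch (as from setTimeout) escapes it:

```javascript
let isBatchingUpdates = false;
const pendingQueue = [];
let renderCount = 0; // stands in for "how many renders happened"

function enqueueSetState(partial) {
  if (isBatchingUpdates) {
    pendingQueue.push(partial); // defer: we are inside a batch
  } else {
    renderCount++;              // outside a batch: "render" immediately
  }
}

function batchedUpdates(fn) {
  isBatchingUpdates = true;
  try {
    fn();
  } finally {
    isBatchingUpdates = false;  // the batch ends when fn returns synchronously...
    if (pendingQueue.length) {
      pendingQueue.length = 0;
      renderCount++;            // ...so all queued updates flush as ONE render
    }
  }
}

// Two calls inside the batch produce a single render
batchedUpdates(() => {
  enqueueSetState({ val: 1 });
  enqueueSetState({ val: 2 });
});

// A call that runs after the flag has been reset (e.g. in setTimeout) renders on its own
enqueueSetState({ val: 3 });
```

By the time a setTimeout callback fires, `finally` has already flipped the flag back, so each setState inside it falls into the unbatched branch.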

React 15’s flaws

Although React 15 introduced optimizations such as batching, its architecture meant that with a large node tree, even a single state change forced React to perform a complex recursive update. Once an update started it could not be interrupted; the main thread was not released until the entire tree had been traversed.

We can refer to the example in the figure: when the hierarchy is very deep, the recursive update takes more than 16ms, and any user interaction or animation happening at that moment shows up as stuttering.

React 16 – Makes Concurrent Mode possible

architecture

Looking at React 16, compared to React 15 the new architecture adds a Scheduler layer, and the Reconciler has been rebuilt on the Fiber architecture. The details are covered in the sections that follow.

  • Scheduler (scheduler) – schedules tasks by priority; high-priority tasks enter the Reconciler first
  • Reconciler (coordinator) – responsible for finding the components that changed (rewritten with Fiber)
  • Renderer (renderer) – responsible for rendering the changed components onto the page

React continued to use this architecture in subsequent major releases.

In addition to the architectural changes, React introduces a very important concept in this release, Concurrent Mode.

Concurrent Mode

React’s official description looks like this:

Concurrent mode is a new set of React features that help applications stay responsive and adjust appropriately to the user’s device performance and network speed.

For an application to stay responsive, we first need to understand: what constrains an application’s responsiveness?

As mentioned in the previous section, the main bottlenecks at runtime are CPU and IO. If these two bottlenecks can be broken, the application can stay responsive.

On the CPU, our main problem is that pages will stall when JS execution exceeds 16.6ms. The React solution is to set aside some time for the JS thread in each frame of the browser. React uses this time to update components. When the reserved time runs out, React gives thread control back to the browser to render the UI, and waits for the next frame to resume interrupted work.

In fact, the operation of breaking a long task into each frame and performing a small task in each frame mentioned above is often referred to as time slicing.

On the IO side, the problem to solve is that a component cannot respond until its network request returns data. React tries to mitigate this by controlling the priority of component rendering.

In fact, Concurrent Mode is a new architecture designed to address both of these issues, with the emphasis on making the rendering of components “interruptible” and “prioritized,” consisting of several different modules, each responsible for different tasks. First, let’s look at how to make the rendering of components “interruptible”.

Underlying architecture – Fiber

In the previous chapter we talked about React 15’s Reconciler being implemented recursively, with its data held in the recursive call stack; such a recursive traversal is plainly impossible to interrupt.

As a result, React spent two years rebuilding on the Fiber architecture. React 16’s Reconciler is implemented on Fiber nodes, and each Fiber node corresponds to a React element. Note: corresponds to, not equals. Calling the render function produces React elements, and Fiber nodes are created from those elements.

Here is what a Fiber node looks like. Besides the component’s type and its corresponding DOM information, the Fiber node also stores the component’s changed state for this update and the work to be performed: whether it needs to be deleted, inserted into the page, or updated.

function FiberNode(
  tag: WorkTag,
  pendingProps: mixed,
  key: null | string,
  mode: TypeOfMode,
) {
  // As properties of static data structures
  this.tag = tag;
  this.key = key;
  this.elementType = null;
  this.type = null;
  this.stateNode = null;

  // It is used to connect other Fiber nodes to form Fiber tree
  this.return = null;
  this.child = null;
  this.sibling = null;
  this.index = 0;
  this.ref = null;

  // Dynamic unit of work properties
  this.pendingProps = pendingProps;
  this.memoizedProps = null;
  this.updateQueue = null;
  this.memoizedState = null;
  this.dependencies = null;

  this.mode = mode;
  this.effectTag = NoEffect;
  this.nextEffect = null;
  this.firstEffect = null;
  this.lastEffect = null;

  // Scheduling priorities are related
  this.lanes = NoLanes;
  this.childLanes = NoLanes;

  // Point to the corresponding fiber in another update
  this.alternate = null;
}

We can also see how the current node links to other nodes: a Fiber node carries child, sibling, return and other pointer attributes.

Double buffering

In React, at most two Fiber trees exist at the same time. The tree corresponding to what is currently on screen is called the current Fiber tree, and the tree being built in memory is called the workInProgress Fiber tree; the two are connected through the alternate property.

The root node of a React application uses a current pointer to reference the current Fiber tree. Once the workInProgress Fiber tree has been built and rendered to the page by the Renderer, the root’s current pointer is switched to it, and the workInProgress Fiber tree becomes the current Fiber tree.

By alternating between two Fiber trees, React can simply switch the current pointer once an update completes, and at any moment before the switch it can abandon the changes on the in-progress tree. This is what makes interruptible updates possible.
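The pointer swap can be modeled minimally (illustrative only; real Fiber trees are linked node graphs, not single objects):

```javascript
// Each root keeps a `current` pointer; building happens on the alternate tree
const root = { current: { id: 'A', alternate: null } };
root.current.alternate = { id: 'B', alternate: root.current };

function beginUpdate(root) {
  // build the next UI on the tree that is NOT on screen
  return root.current.alternate;
}

function commit(root, workInProgress) {
  // flipping one pointer makes the finished tree the visible one
  root.current = workInProgress;
}

const wip = beginUpdate(root); // safe to mutate: not what is on screen
commit(root, wip);             // atomically becomes current

// Abandoning `wip` before commit would have left `current` untouched,
// which is what makes an interrupted update discardable.
```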

We mentioned a few concepts above: Current Fiber, workInProgress Fiber, JSX objects known as React Elements, and actual DOM nodes.

The Reconciler’s job, then, is to use the Diff algorithm to compare Current Fiber and React Element to generate workInProgress Fiber, which is interruptible. The Renderer’s job is to convert the workInProgress Fiber into a real DOM node.

Scheduler – Scheduler

If Fiber ran synchronously, as with ReactDOM.render, the Fiber architecture would behave just like the one before the refactor. But combined with the time slicing mentioned above, giving each unit of work a runnable time budget based on the current host environment’s performance makes “asynchronous interruptible updates” achievable.

Scheduler does this for us, and we can see that our long update task is broken up into small chunks. This gives the browser time to perform style layout and style drawing, reducing the possibility of dropping frames.

The animation in the image has also become perfectly smooth.

requestIdleCallback

The picture above shows what the browser does within a single frame. When everything is done, a requestIdleCallback callback fires, and inside it we can query how much idle time remains in the current frame.

So what can this API do? Let’s look at an example:

If we had a very long, time-consuming task like the one on the left and executed it without any special handling, the whole task would run well past 16.6ms.

With the help of the requestIdleCallback function, we can split a large task into several smaller tasks and perform the smaller tasks gradually with free time in each frame.

With this API, we can make the browser execute the script only during idle periods. The essence of time slicing is to emulate the requestIdleCallback function.

Due to compatibility and refresh-rate issues, React doesn’t use requestIdleCallback directly; instead it emulates it with a MessageChannel, following the same principle.

Interruptible updates

In React’s render phase, when Concurrent Mode is enabled, the shouldYield method provided by Scheduler is consulted before each unit of work to decide whether to pause the traversal so the browser has time to render. See the workLoopConcurrent function below.


function workLoopConcurrent() {
  // Perform work until Scheduler asks us to yield
  while (workInProgress !== null && !shouldYield()) {
    performUnitOfWork(workInProgress);
  }
}

The key to deciding whether a task should be interrupted is whether its time slice’s remaining time is used up; shouldYield() checks whether the deadline has passed.

shouldYield() --> Scheduler_shouldYield() --> unstable_shouldYield()
  --> shouldYieldToHost()
  --> getCurrentTime() >= deadline

var yieldInterval = 5;
var deadline = 0;
var performWorkUntilDeadline = function () {
  // ...
  var currentTime = getCurrentTime();
  deadline = currentTime + yieldInterval;
  // ...
};

As you can see, each time the deadline passes, Scheduler breaks out of the work loop, hands thread control back to the browser, and resumes the remaining work in the next task. In this way, one long JS task is broken into several small ones.
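The deadline logic above can be simulated synchronously with a fake clock. This is a sketch of the principle, not Scheduler’s code:

```javascript
const yieldInterval = 5; // ms budget per slice, as in Scheduler
let now = 0;             // fake clock so the example is deterministic
let deadline = 0;

const shouldYield = () => now >= deadline;

// 20 one-millisecond units of work, sliced into 5ms chunks
let remainingUnits = 20;
let slices = 0;

function performWorkUntilDeadline() {
  deadline = now + yieldInterval;
  while (remainingUnits > 0 && !shouldYield()) {
    remainingUnits--; // performUnitOfWork(...)
    now += 1;         // each unit "takes" 1ms
  }
  slices++;           // control would return to the browser here
}

while (remainingUnits > 0) performWorkUntilDeadline();
```

Twenty milliseconds of work ends up spread across four slices, leaving the browser room to paint between them.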

yieldInterval is computed dynamically from the device’s current FPS, which echoes the definition of Concurrent Mode quoted earlier: it helps applications stay responsive and adjust appropriately to the user’s device performance.

if (fps > 0) {
  yieldInterval = Math.floor(1000 / fps);
} else {
  // reset the framerate
  yieldInterval = 5;
}

The Fiber architecture, in conjunction with Scheduler, implements the underlying “asynchronous interruptible updates” of Concurrent Mode.

isInputPending

So, now, it’s not just when we use React that we can enjoy this optimization strategy.

In Chrome 87, the React team teamed up with the Chrome team to add a new API, isInputPending, to the browser. It was also the first API to use the operating system concept of interrupts for web development.

Even without React, we can use this API to balance priorities between JS execution, page rendering, and user input.

With isInputPending, a long JS task can check mid-execution whether user input is pending and, if so, interrupt itself and return control to the browser to handle the input first, instead of blocking until the task finishes.

Priority control

If an update is interrupted during the run and a new update is restarted, we can say that the later update broke the previous one.

Take a simple example: we are having dinner, and suddenly your girlfriend calls you. You may have to stop eating, answer the phone, and continue eating.

In other words, answering the phone is a higher priority than eating. React assigns different priorities to status updates generated in different scenarios based on research results of human-computer interaction. For example:

  • Lifecycle methods: highest priority, executed synchronously.
  • Controlled user input: for example, typing text into an input box; executed synchronously.
  • Interactive events, such as animations: executed with high priority.
  • Everything else, such as data requests or updates wrapped in suspense or transition: low priority.

For example, consider the two updates in the figure below. First there is a status update that changes the current theme; it is low priority and time-consuming. Before the render phase of that update completes, the user types a new character into the input field.

The user input has high priority, so React interrupts the theme update, responds to the input first, and then redoes the render and commit for the interrupted update afterwards. This is a high-priority task interrupting a low-priority one. Next, let’s see how React implements prioritization in the source code.

Task priority

Let’s start with this code, which declares five different priorities:

  • ImmediatePriority: execute immediately; the highest level
  • UserBlockingPriority: user-blocking level
  • NormalPriority: the normal, most common priority
  • LowPriority: a lower priority
  • IdlePriority: the lowest priority; the task can wait until the browser is idle

Inside React, whenever priority scheduling is involved, the runWithPriority function is used. It takes a priority and a callback; while the callback runs, any code that asks for the current priority sees the one passed as the first argument.
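Stripped of details, runWithPriority amounts to a save/set/restore around the callback. This sketch follows the shape of the scheduler package’s unstable_runWithPriority, with the numeric levels that package uses:

```javascript
const ImmediatePriority = 1;
const NormalPriority = 3;

let currentPriorityLevel = NormalPriority;

function getCurrentPriorityLevel() {
  return currentPriorityLevel;
}

function runWithPriority(priorityLevel, eventHandler) {
  const previousPriorityLevel = currentPriorityLevel;
  currentPriorityLevel = priorityLevel; // everything inside the callback sees this level
  try {
    return eventHandler();
  } finally {
    currentPriorityLevel = previousPriorityLevel; // restore on the way out
  }
}

// Inside the callback, the "current" priority is the one passed in
const seenInside = runWithPriority(ImmediatePriority, () => getCurrentPriorityLevel());
```

The try/finally guarantees the previous priority is restored even if the callback throws, so nested runWithPriority calls behave like a stack.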

So how do these different priority variables affect the specific update task?

Looking at the code above, different priority values yield different expirationTimes. Every update task has an expirationTime, and the closer a task’s expirationTime is to the current time, the higher that task’s priority.

An expirationTime is simply startTime, the current time, plus a priority-specific timeout. For example, the timeout for ImmediatePriority is -1, so its expiration time is earlier than the current time: the task has already expired and must execute immediately.
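Sketched in code, the calculation looks like this. The timeout constants below match those found in the scheduler package, though they may drift between versions:

```javascript
// Timeouts per priority, as in the scheduler package (values may vary by version)
const IMMEDIATE_PRIORITY_TIMEOUT = -1;        // already expired: run now
const USER_BLOCKING_PRIORITY_TIMEOUT = 250;
const NORMAL_PRIORITY_TIMEOUT = 5000;
const LOW_PRIORITY_TIMEOUT = 10000;
const IDLE_PRIORITY_TIMEOUT = 1073741823;     // maxSigned31BitInt: effectively never

function expirationTime(startTime, timeout) {
  return startTime + timeout;
}

// At t=100, an Immediate task expired at t=99; a Normal task expires at t=5100
const immediate = expirationTime(100, IMMEDIATE_PRIORITY_TIMEOUT);
const normal = expirationTime(100, NORMAL_PRIORITY_TIMEOUT);
```

Sorting tasks by expirationTime therefore sorts them by urgency, and a task that waits too long eventually expires and is forced to run, which prevents starvation.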

The React app may generate different tasks at the same time, and the Scheduler will prioritize the task with the highest priority and schedule its updates. So, what’s the fastest way to find a high-priority task?

In effect, Scheduler stores all tasks that are ready to execute in a queue called taskQueue, implemented as a min-heap. In the min-heap, tasks are ordered by expiration time, so Scheduler can find the earliest-expiring, i.e. highest-priority, task in the queue with only O(1) complexity.
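Here is a minimal min-heap sketch keyed by expirationTime, in the spirit of Scheduler’s taskQueue (not its actual code). Pushing costs O(log n), but peeking at the most urgent task is O(1):

```javascript
// A minimal min-heap keyed by expirationTime
const taskQueue = [];

function push(heap, task) {
  heap.push(task);
  let i = heap.length - 1;
  // sift the new task up until its parent expires no later than it does
  while (i > 0) {
    const parent = (i - 1) >> 1;
    if (heap[parent].expirationTime <= heap[i].expirationTime) break;
    [heap[parent], heap[i]] = [heap[i], heap[parent]];
    i = parent;
  }
}

// peek is O(1): the earliest-expiring task is always at the root
function peek(heap) {
  return heap.length === 0 ? null : heap[0];
}

push(taskQueue, { name: 'low', expirationTime: 10100 });
push(taskQueue, { name: 'immediate', expirationTime: 99 });
push(taskQueue, { name: 'normal', expirationTime: 5100 });
```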

Fiber priority

The priority mechanism we just described actually belongs to Scheduler. Inside React, Scheduler is an independent package responsible only for task scheduling; it doesn’t even care what the task is, and it works even when used entirely outside React.

The priority mechanism inside Scheduler is also independent of React. React also has its own priority mechanism, because we need to know which fibers and Update objects in a Fiber tree are of high priority.

In React 16, Fiber and Update priorities are expressed much like task priorities: React attaches an expirationTime, derived from the triggering operation’s priority, to every update. However, due to some problems, React no longer uses expirationTime in Fiber to express priority; we will come back to this later.

Life cycle changes

In the new React architecture, a component’s rendering is split into two phases. The first phase (also known as the render phase) can be interrupted by React; if it is interrupted, everything done in this phase is discarded, and when React comes back after handling the more urgent work, the component is re-rendered and the first phase starts over.

The second phase, called the commit phase, cannot be interrupted once started: it runs straight through until the component has finished rendering.

The dividing line between the two phases is the render function: every lifecycle function up to and including render belongs to the first phase, and everything after it belongs to the second. With Concurrent Mode enabled, all lifecycles before render may be interrupted or called repeatedly:

  • componentWillMount
  • componentWillReceiveProps
  • componentWillUpdate

React v16.3 introduced a new lifecycle, getDerivedStateFromProps. It is a static method: the component instance cannot be accessed through this at all, input arrives only through the parameters, and the only way to affect rendering is through the return value.

Thus, getDerivedStateFromProps must be a pure function, and React forces developers to accommodate Concurrent Mode by requiring such a pure function.
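Because the method is static and pure, it can even be called directly as a plain function. The component below is hypothetical, invented purely for illustration:

```javascript
// Hypothetical component: derives a formatted label from props, nothing else
class PriceLabel /* extends React.Component */ {
  static getDerivedStateFromProps(props, state) {
    if (props.price !== state.lastPrice) {
      return { lastPrice: props.price, label: '$' + props.price.toFixed(2) };
    }
    return null; // no state change needed
  }
}

// Pure: same inputs, same output, no `this`, no side effects, which is exactly
// what makes it safe for React to call repeatedly under Concurrent Mode.
const next = PriceLabel.getDerivedStateFromProps({ price: 3.5 }, { lastPrice: 0 });
```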

So far we have seen how React overcomes the CPU problem and keeps applications responsive. What about the IO problem?

Suspense

React 16.6 added a new component, <Suspense>, which targets the runtime IO problem.

Suspense lets components “wait” for an asynchronous operation until it finishes rendering. Take a look at the code below, where we implement lazy loading of a component through Suspense.

const MonacoEditor = React.lazy(() => import('react-monaco-editor'));

<Suspense fallback={<div>Editor Loading...</div>}>
  <MonacoEditor
    height={500}
    language="json"
    theme="vs"
    value={errorFileContext}
    options={{}}
  />
</Suspense>

So why does Suspense solve the IO problem, given that we could implement this kind of lazy loading ourselves in other ways?

With Suspense we can lower the priority of the loading state and reduce flicker. For example, when the data returns quickly we can skip the loading state entirely instead of flashing it, and show it only after a timeout with no response. A component subtree inside Suspense essentially has lower priority than the rest of the component tree. Imagine doing this without Suspense: a hand-rolled loading state would render at the same priority as every other component, and no matter how fast the IO was, the screen would flash.

So if other components load during the IO request and the wait is short enough, we never need to show the loading state at all, which reduces the flash problem.

Suspense doesn’t stop there: it also streamlines how waiting on asynchronous operations is written in React, which we won’t cover here.

React 16’s flaws

Although the core work of React 16 revolves around Concurrent Mode, that does not mean Concurrent Mode was stable. React 16 laid all the groundwork that makes Concurrent Mode possible and made some small experiments with it, but synchronous rendering remained the default in version 16, and plenty of work remained before Concurrent Mode could be enabled at scale.

React 17 – Stable interim version of Concurrent Mode

No new features?

The React 17 changelog contains few new features, but from the handful of official descriptions we can see that React 17 is a transitional release meant to stabilize Concurrent Mode.

Because of the breaking changes Concurrent Mode brings, many libraries are incompatible with it, so it cannot simply be adopted in new projects. React 17 therefore adds support for multiple React versions coexisting in a single project. The other important piece of work: the Concurrent Mode priority algorithm was rebuilt on Lanes.

Realize the coexistence of multiple versions

A quick word about multi-version coexistence.

React implements its own event mechanism and simulates event bubbling and capturing to smooth out compatibility issues between browsers.

For example, when you declare an event handler, React does not attach it to the corresponding DOM node. Instead, it attaches one handler per event type directly to the document node. This approach not only has performance advantages in large application trees, but also makes it easier to add new features.

But if there are multiple React versions on a page, they all register events on document. This breaks the mechanism of event bubbling, and external trees will still receive the event, making it difficult to nest different versions of React.

This is why React changes the underlying implementation of attach to the DOM.

In React 17, React attaches events to the root DOM container of the React rendering tree, instead of attaching them to the document level:

const rootNode = document.getElementById('root'); 
ReactDOM.render(<App />, rootNode);

This makes it possible for multiple versions to coexist.

New priority algorithm – Lanes

Scheduler’s priorities are not the same thing as React’s own priorities. In earlier versions, React’s Fibers likewise used expirationTime to express priority; React has since rebuilt the Fiber priority algorithm on Lanes.

So what was the problem with expirationTime? When expirationTime was first designed, the React ecosystem had no concept of Suspense-style asynchronous rendering. Suppose a scenario with three tasks of priority A > B > C; normally you just execute them in priority order.

But here’s the catch: suppose A and C are CPU-bound tasks while B is an IO-bound Suspense task, i.e. A (CPU) > B (IO) > C (CPU). Now the higher-priority IO task, stuck waiting for data, blocks the lower-priority CPU task, which is clearly unreasonable.

The trouble is that with expirationTime, a priority describes the update of the whole tree, not a specific component. What we want here is to pull task B out of the batch and handle the CPU tasks A and C first. expirationTime makes it hard to express the concept of a batch, and hard to separate an individual task from one, so a more fine-grained priority model was needed.

So Lanes came along. Fields that used to be represented by expirationTime are changed to Lanes. Such as:

update.expirationTime -> update.lane
fiber.expirationTime -> fiber.lanes

Lane and Lanes are singular and plural. A single task is Lane, and multiple tasks are Lanes.

Lane types are defined as binary bitmasks, so priority calculations can be done with bitwise operations. Under frequent updates, this uses less memory and computes faster.

React defines a total of 18 Lane/Lanes variables, each of which has one or more bits. Each Lane/Lanes has its own priority.

As you can see in the code, the lower the priority of a set of lanes, the more bits it occupies. For example, InputDiscreteLanes (the priority of discrete interactions such as clicks) takes 2 bits, while TransitionLanes takes 9. The reason is that lower-priority updates are more likely to be interrupted (if all lanes at the current priority are occupied, the update is demoted by one priority level), causing a backlog, so they need more bits. At the other end, the highest-priority SyncLane for synchronous updates needs no extra lanes at all, just a single bit.
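To make the bitmask idea concrete, here is a plain-JavaScript sketch of how lane operations work. The constant values below are illustrative, not React’s actual definitions, but the bitwise tricks (OR to merge, `x & -x` to isolate the most urgent lane) mirror the real source:

```javascript
// Illustrative lane constants -- React's real values differ.
const SyncLane           = 0b0000000000000001; // highest priority, one bit
const InputDiscreteLanes = 0b0000000000000110; // 2 bits
const TransitionLanes    = 0b0111111111000000; // lower priority, 9 bits

// Merging pending priorities is a bitwise OR.
function mergeLanes(a, b) {
  return a | b;
}

// Picking the most urgent lane: isolate the lowest set bit.
function getHighestPriorityLane(lanes) {
  return lanes & -lanes;
}

// Checking whether a given lane is pending in a set.
function includesLane(set, lane) {
  return (set & lane) !== 0;
}

const pending = mergeLanes(SyncLane, TransitionLanes);
console.log(getHighestPriorityLane(pending) === SyncLane); // true
```

Because every operation is a single machine instruction on an integer, this is far cheaper than comparing timestamp-style expirationTime values on every update.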

React 18 – More flexible Concurrent Rendering

React recently released the alpha version of React 18. Because of the massive breaking changes Concurrent Mode would cause, it cannot be enabled by default, so the all-or-nothing “Concurrent Mode” was replaced by “concurrent rendering”, which can be adopted feature by feature.

React 17 already supports multiple versions, so React recommends incremental upgrades rather than a one-size-fits-all approach. Only updates triggered by these new features will enable concurrent rendering, so you can use React 18 without making a lot of code changes and try out the new features at your own pace.

createRoot

React provides us with three modes. The ReactDOM.render entry point we’ve always used is legacy mode: updates are synchronous, and each render phase is immediately followed by its commit phase.

If you create an application with ReactDOM.createRoot, concurrent rendering is enabled by default.

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
const container = document.getElementById('root');
// Create a root.
const root = ReactDOM.createRoot(container);
// Render the top component to the root.
root.render(<App />);

In addition, there is a blocking mode created through the createBlockingRoot function to facilitate the transition between the above two modes.

Below is a comparison of the features supported by the different modes.

Batch optimization

React 15 implemented the first version of batch processing. If we trigger multiple updates in a single event callback, they are merged into a single update for processing.

The reason this worked is that batchedUpdates itself runs synchronously: it sets a flag, calls fn, then clears the flag. If fn contains asynchronous code, the batch has already finished by the time that code runs, so this version of batching cannot cover async functions.
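A minimal sketch of this flag-based wrapper (illustrative only, not React’s real implementation) shows exactly why async code escapes the batch:

```javascript
// A toy model of React 15-era batching: a module-level flag marks
// "we are inside an event handler", and setState only flushes
// immediately when the flag is off.
let isBatching = false;
let pendingStates = [];
let flushCount = 0; // stands in for "number of re-renders"

function setState(partial) {
  pendingStates.push(partial);
  if (!isBatching) flush(); // outside a batch: flush right away
}

function flush() {
  if (pendingStates.length === 0) return;
  pendingStates = [];
  flushCount++; // one re-render per flush
}

function batchedUpdates(fn) {
  isBatching = true;
  try {
    fn(); // fn runs synchronously -- the flag only covers sync code
  } finally {
    isBatching = false;
    flush();
  }
}

// Two sync setStates inside the handler: merged into one flush.
batchedUpdates(() => {
  setState({ a: 1 });
  setState({ b: 2 });
});
console.log(flushCount); // 1

// An async setState escapes the batch: by the time the timeout fires,
// isBatching is already false, so each call flushes on its own.
batchedUpdates(() => {
  setTimeout(() => {
    setState({ c: 3 }); // flushCount becomes 2
    setState({ d: 4 }); // flushCount becomes 3
  }, 0);
});
```

The timeout callback runs after the `finally` block has already reset the flag, which is the whole problem this section is about.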

In real React apps, however, plenty of state updates happen in asynchronous callbacks. In React 18, enabling concurrent rendering solves this with automatic, priority-based batching.

class Example extends React.Component {
  constructor() {
    super();
    this.state = {
      val: 0
    };
  }

  componentDidMount() {
    this.setState({val: this.state.val + 1});
    console.log(this.state.val);
    this.setState({val: this.state.val + 1});
    console.log(this.state.val);

    setTimeout(() => {
      this.setState({val: this.state.val + 1});
      console.log(this.state.val);
      this.setState({val: this.state.val + 1});
      console.log(this.state.val);
    }, 0);
  }

  render() {
    return null;
  }
}

In Concurrent mode, updates are merged based on priority.

As you can see, the final output of this code is 0, 0, 1, 1. Why is this output? Here’s a quick look at priority-based batch processing:

Once an update is attached to the component’s fiber, the scheduling flow starts. The Scheduler picks the highest-priority update among all pending priorities and runs the update process at that priority. After entering scheduling, the flow is as follows:

First, React takes the highest-priority Lane among all pending lanes, and derives from it the priority to schedule this time.

Then, before starting the formal update process, it checks whether a previous schedule already exists and, if so, compares that schedule’s priority with the current one.

On the first setState there is no existingCallbackPriority yet, so React schedules the update process, performConcurrentWorkOnRoot, through scheduleCallback.

When the second setState comes in, a schedule already exists and its priority matches, so React simply returns early instead of calling scheduleCallback to run performConcurrentWorkOnRoot again.

After a short while, all the accumulated updates of the same priority enter the formal update process together. Since the later setState calls are made inside a setTimeout callback, they arrive after the first batch has flushed and are batched together into the next one. Hence the final output: 0, 0, 1, 1.
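The dedup step above can be sketched in a few lines of plain JavaScript. The names echo React’s source (existingCallbackPriority, scheduleCallback, performConcurrentWorkOnRoot), but the logic is heavily simplified and the queue is my own stand-in:

```javascript
// A stripped-down model of "one scheduled callback per priority":
// later updates with the same priority bail out and piggyback on
// the callback that is already scheduled.
let existingCallbackPriority = null;
let scheduleCallbackCount = 0;
const pendingUpdates = [];

function scheduleCallback(priority, work) {
  scheduleCallbackCount++;
  // In React, the Scheduler runs this later; a timeout stands in here.
  setTimeout(work, 0);
}

function scheduleUpdate(priority, update) {
  pendingUpdates.push(update);
  if (existingCallbackPriority === priority) {
    return; // an equivalent schedule already exists -- reuse it
  }
  existingCallbackPriority = priority;
  scheduleCallback(priority, () => {
    // "performConcurrentWorkOnRoot": process every update batched so far
    pendingUpdates.length = 0;
    existingCallbackPriority = null;
  });
}

scheduleUpdate('default', { val: 1 }); // schedules one callback
scheduleUpdate('default', { val: 2 }); // same priority: early return
console.log(scheduleCallbackCount); // 1
```

Both updates end up queued, but only one pass through the work loop is scheduled, which is exactly how same-priority setStates get merged.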

This is the priority-based automatic batch process. With this process, we don’t need the manual batch function unstable_batchedUpdates provided by React.

startTransition

startTransition is a new API in React 18.

This API lets us manually mark certain state updates as non-urgent, essentially controlling component rendering priority. For example, consider this scenario: the user types a value into an input field, and we need to render some data filtered by that value.

Because you need to dynamically render the filtered values each time, you might store the input values in a state, and your code might look something like this:

setInputValue(input);
setSearchQuery(input);

The user’s input obviously needs to render immediately, but the filtered results don’t have to appear that fast. Without extra handling, before React 18 all of these updates would render immediately. If the raw data set is large, each keystroke triggers a heavy filtering computation, so the UI may stutter on every input.

So, in the past, we might have added a debounce ourselves, artificially delaying the filtering computation and render.

The new startTransition API lets us mark our data as transitions.

import { startTransition } from 'react';


// Urgent: Show what was typed
setInputValue(input);

// Mark any state updates inside as transitions
startTransition(() => {
  // Transition: Show the results
  setSearchQuery(input);
});

All updates in the startTransition callback are considered non-urgent, and if there is a more urgent update (such as a new value entered by the user), the update is interrupted until there is no more urgent action.

Isn’t that more elegant than hand-rolling a debounce? 😇

React also provides a Hook with an isPending transition flag:

import { useTransition } from 'react';

const [isPending, startTransition] = useTransition();

You can use it in combination with loading animations:

{isPending && <Spinner />}

Here’s a more typical example:

Dragging the left slider changes the number of nodes rendered in the tree; dragging the top slider changes the tree’s tilt. At the top is a frame radar that shows how many frames are dropped during updates. When you drag the top slider without clicking the “Use startTransition” button, you can see that the drag is not smooth and the radar shows dropped frames.

If we instead put the tree render inside startTransition, the tree update is still slow, but the radar no longer shows dropped frames.

startTransition is simple to implement: every update triggered inside the startTransition callback is given a transition flag, and based on that flag React assigns it a lower priority.
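Under that description, a toy version of startTransition might look like this. This is an illustrative sketch under my own assumptions, not React’s actual code; the lane constants and the requestUpdateLane helper are simplified stand-ins for their namesakes in the source:

```javascript
// Simplified stand-ins for React's lane constants.
const SyncLane = 0b0001;       // urgent updates
const TransitionLane = 0b0100; // non-urgent, interruptible updates

let isInsideTransition = false;

// Set a flag for the duration of the callback; any update requested
// while the flag is on is assigned the lower-priority transition lane.
function startTransition(fn) {
  const prev = isInsideTransition;
  isInsideTransition = true;
  try {
    fn();
  } finally {
    isInsideTransition = prev;
  }
}

function requestUpdateLane() {
  return isInsideTransition ? TransitionLane : SyncLane;
}

console.log(requestUpdateLane() === SyncLane); // true
startTransition(() => {
  console.log(requestUpdateLane() === TransitionLane); // true
});
```

Because the flag is only set synchronously inside the callback, only the setStates called directly within it are demoted; everything outside keeps its urgent lane.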

useDeferredValue

In addition to manually prioritizing certain operations, we can also prioritize a specific state. React 18 gives us a new Hook useDeferredValue.

For example, suppose that after the user types something, we need to do some processing on the input and render the result into a details view below. If that processing is time-consuming, continuous typing will feel janky, since the keystrokes arrive in a continuous stream.

In fact, we want the user’s input to respond quickly, but it doesn’t matter if we wait a little longer to render the details below.

In this case, we can create a deferredText with useDeferredValue, which marks the rendering of deferredText as low priority. It also takes another parameter: the maximum delay before rendering. We can guess that useDeferredValue’s implementation resembles expirationTime.

As you can see in the figure, user input no longer feels stuck.

So how is this different from our hand-written debounce?

The main problem with debouncing is that no matter how fast the machine renders, there is always a fixed delay, whereas useDeferredValue only defers when rendering is actually time-consuming; in most cases it introduces no unnecessary delay.
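To see the contrast, here is a classic debounce. Note the fixed delay baked into it: the wrapped function never runs earlier than `wait` milliseconds after the last call, no matter how cheap the actual work is:

```javascript
// A classic trailing-edge debounce: each call resets the timer, and the
// wrapped function only runs `wait` ms after calls stop arriving.
function debounce(fn, wait) {
  let timer = null;
  return (...args) => {
    if (timer !== null) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      fn(...args);
    }, wait);
  };
}

const calls = [];
const search = debounce((q) => calls.push(q), 50);

// Three rapid "keystrokes": only the last survives, and only after 50ms.
search('r');
search('re');
search('react');
console.log(calls.length); // 0 -- nothing has run yet
setTimeout(() => {
  console.log(calls); // after the fixed 50ms delay, only 'react' ran
}, 100);
```

Even on a machine that could render the results instantly, this wrapper still waits the full 50ms; useDeferredValue has no such floor.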

Lazy loading support under SSR

Before React 18, Suspense was not supported in SSR mode. In React 18, server rendering also supports Suspense: if you wrap a component in Suspense, the server first streams the fallback as HTML, and once the main component is ready, React sends new HTML to replace it.

<Layout>
  <Article />
  <Suspense fallback={<Spinner />}>
    <Comments />
  </Suspense>
</Layout>

For example, in the code above, the Layout and Article components are rendered and streamed to the browser first, while the Comments component is initially replaced by the Spinner fallback. Once the Comments component finishes loading, React sends its HTML to the browser to replace the fallback.
Finally

Finally, if you want to read the React source code, I don’t recommend diving straight in, because parts of it are genuinely hard to follow.

I recommend following the outlines of the two tutorials below: first understand how the source is structured overall, then step through the whole flow in a debugger, and finally dive into individual modules as your needs dictate.

  • React: github.com/7kms/react-…
  • React Technology Revealed: react.iamkasong.com/

If you’d like to discuss, contact me on WeChat (ConardLi). The article will also be published on my WeChat official account, code Secret Garden; you’re welcome to follow it.

If there are any mistakes in this article, please leave them in the comments section. If this article helped you, please like it and follow me. Your likes and attention are the biggest support for me!