JS usually executes in a single-threaded environment. When we run into time-consuming code, the first idea is to split the work so that it can be interrupted: yield execution when other tasks arrive, and once they have finished, asynchronously resume the remaining computation from where it was interrupted. The key, then, is an asynchronous, interruptible scheme. So how do we implement a scheme with task splitting, asynchronous execution, and yielding of execution rights? React provides the corresponding solution.

1. Background

React grew out of an internal Facebook project used to build the Instagram site and was open sourced in May 2013. The framework is essentially a JavaScript library for building user interfaces, which made it stand out in a front-end world dominated by two-way data binding at the time. Even more distinctively, it introduced a partial-refresh mechanism for updating the page. React has many advantages; its main features are as follows:

1.1 Transformation

The framework assumes that the UI simply transforms data into another form of data through a mapping, where the same input always produces the same output. This is exactly a pure function.

1.2 Abstraction

In real-world scenarios, a complex UI cannot be implemented with just one function. What matters is that you abstract the UI into multiple reusable functions that hide their internal details. Implementing a complex UI by calling one function inside another is what we call abstraction.

1.3 Composition

To achieve reusability, it is not enough for each combination to simply create a new container for its parts; you also need containers that can themselves be combined with other abstractions again, so that two or more containers, each a different abstraction, are merged into one. This is composition.
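A tiny illustration of transformation, abstraction and composition using plain functions (the names here are made up for the example):

// Transformation: a pure mapping from data to a UI description.
const Title = (text) => `<h1>${text}</h1>`;

// Abstraction: hide the details of rendering one user behind a function.
const UserRow = (user) => `<li>${user.name} (${user.age})</li>`;

// Composition: build a bigger abstraction by combining smaller ones.
const UserPage = (title, users) =>
  Title(title) + `<ul>${users.map(UserRow).join('')}</ul>`;

console.log(UserPage('Users', [{ name: 'Ada', age: 36 }]));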

The core value of React has always been its focus on how updates are performed; combining updates with the best possible user experience is what the React team keeps working hard on.

2. Slowing down ==> Upgrading

As applications become more complex, a DOM diff in the React 15 architecture can take longer than 16.6ms, which makes pages stutter. So what makes React slow, and why did it need to be refactored?

In React 15 and earlier, the reconciliation process was synchronous (the so-called stack reconciler). Because JS executes on a single thread, a time-consuming update left no room for high-priority tasks to respond; for example, user input was delayed while such a task was being processed. Rendering React components, sending network requests, and executing functions all compete for CPU; if CPU usage is too high, the page becomes congested. How do we solve this problem?

In daily development, JS runs in a single-threaded environment. When we encounter time-consuming code, the first idea is to split the task so that it can be interrupted: yield execution when other tasks arrive, and once they are done, asynchronously resume the remaining computation from where we left off. So the key is to implement an asynchronous, interruptible scheme.

So how do we implement a scheme with task splitting, asynchronous execution, and the yielding of execution rights? React provides the corresponding solution.

2.1 Task Division

React provides a data structure that can both map to the actual DOM and serve as the unit into which work is split. This brings us to Fiber.

Fiber

Fiber is the smallest unit of work in React. In React, everything is a component. On an HTML page, a combination of several DOM elements can be a component, a single HTML tag can be a component (HostComponent), and a plain text node can also be a component (HostText). Each component corresponds to a fiber node, and the many fiber nodes are nested and linked to each other to form the fiber tree. (Why a linked-list structure? Because a linked list trades space for time and is very efficient for insert and delete operations.) The relationship between the fiber tree and the DOM tree is shown below:

   DOM tree               Fiber tree

   div #root              div #root
      |                       |
     div                   <App/>
     / \                      |
    p   a                    div
                            ↗   ↖
                           p ----> <Child/>
                                      |
                                      a

Every DOM node has a corresponding fiber node, but not every fiber node has a corresponding DOM node. The structure of a fiber as a unit of work looks like this:

export type Fiber = {
  // Tag identifying the type of fiber.
  tag: TypeOfWork,

  // Unique identifier of this child.
  key: null | string,

  // The value of element.type, used to preserve identity during
  // reconciliation of this child.
  elementType: any,

  // The resolved function/class associated with this fiber.
  type: any,

  // The local state associated with this fiber (e.g. the DOM node or class instance).
  stateNode: any,

  // ... remaining fiber fields

  // The fiber to return to after finishing processing this one.
  // This is effectively the parent fiber.
  // It is conceptually the same as the return address of a stack frame.
  return: Fiber | null,

  // Singly linked list tree structure.
  child: Fiber | null,
  sibling: Fiber | null,
  index: number,

  // The ref last used to attach this node.
  ref:
    | null
    | (((handle: mixed) => void) & { _stringRef: ?string, ... })
    | RefObject,

  // Input data of this fiber: the arguments/props.
  pendingProps: any, // This type will become more specific once the tag is overloaded.
  memoizedProps: any, // The props used to create the output.

  // A queue of state updates and callbacks.
  updateQueue: mixed,

  // The state used to create the output.
  memoizedState: any,

  mode: TypeOfMode,

  // Effects
  effectTag: SideEffectTag,
  subtreeTag: SubtreeTag,
  deletions: Array<Fiber> | null,

  // Singly linked list fast path to the next fiber with side effects.
  nextEffect: Fiber | null,

  // The first and last fiber with side effects within this subtree.
  // This allows us to reuse a slice of the linked list when we reuse
  // the work done within this fiber.
  firstEffect: Fiber | null,
  lastEffect: Fiber | null,

  // This is a pooled version of the fiber. Every fiber that gets updated
  // will eventually have a pair. When needed, the pair can be cleaned up
  // to save memory.
  alternate: Fiber | null,
};
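To make the child/sibling/return links concrete, here is a minimal sketch, not taken from the React source, of the depth-first traversal order they define:

// Minimal sketch of depth-first traversal over child/sibling/return links.
function getNextFiber(fiber) {
  if (fiber.child) {
    return fiber.child;        // go down to the first child
  }
  let node = fiber;
  while (node) {
    if (node.sibling) {
      return node.sibling;     // then move across to a sibling
    }
    node = node.return;        // otherwise climb back up to the parent
  }
  return null;                 // the whole tree has been processed
}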

Now that we understand the structure of a fiber, how is the linked-list tree of fibers built? This is where double buffering comes in.

The tree currently rendered on the page, called the current tree, is used to render the current user interface. Whenever there is an update, Fiber builds a workInProgress tree (held in memory) from the updated data in the React elements. React performs its work on the workInProgress tree and uses this updated tree for the next render. Once the workInProgress tree has been rendered to the screen, it becomes the current tree.
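A minimal sketch of the double-buffering idea, assuming simplified fiber objects (this is not the actual createWorkInProgress from the React source):

// Minimal sketch: each fiber keeps a pointer to its alternate.
// current <-> workInProgress reuse each other between renders.
function createWorkInProgress(current, pendingProps) {
  let workInProgress = current.alternate;
  if (workInProgress === null) {
    // First update: create the alternate and link the pair.
    workInProgress = { ...current, pendingProps, alternate: current };
    current.alternate = workInProgress;
  } else {
    // Later updates: reuse the existing alternate, just reset its inputs.
    workInProgress.pendingProps = pendingProps;
  }
  return workInProgress;
}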

2.2 Asynchronous Execution

How are fibers executed asynchronously in time slices? Here is a simplified example.

let firstFiber
let nextFiber = firstFiber
let shouldYield = false

// Traversal order: firstFiber -> firstChild -> sibling
function performUnitOfWork(nextFiber) {
  // ... process the current fiber
  return nextFiber.next
}

function workLoop(deadline) {
  while (nextFiber && !shouldYield) {
    nextFiber = performUnitOfWork(nextFiber)
    shouldYield = deadline.timeRemaining() < 1
  }
  requestIdleCallback(workLoop)
}

requestIdleCallback(workLoop)

We know that the browser provides an API called requestIdleCallback, which runs low-priority work while the browser is idle. We can use this API to run React updates so that high-priority tasks get to respond first. Here is how requestIdleCallback works.

const temp = window.requestIdleCallback(callback[, options]);

For ordinary user interactions, the time between finishing the render of one frame and starting the next belongs to system idle time. Take keyboard input as an example: the fastest single-character input averages about 33ms (triggered by continuously pressing the same key), which leaves more than 16.4ms of idle time between one frame and the next. In other words, for any discrete interaction the minimum system idle time is still about 16.4ms, which means the minimum frame length for discrete interactions is generally 33ms.

The requestIdleCallback callback is invoked, after it has been registered, during the idle time between the rendering of one frame and the rendering of the next.

callback is the function to execute. It is passed a deadline object containing:

timeRemaining(): the time remaining in the current frame, in ms.

didTimeout: a Boolean; true means the callback is being executed because its timeout expired rather than because the browser is idle.

If a timeout is specified and it expires before the callback has run, the callback is forced to execute even if no idle time is left.
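For example, here is a minimal usage sketch that drains a hypothetical task queue during idle time and forces execution after two seconds at the latest (tasks and runTask are illustrative helpers, not part of the API):

// `tasks` and `runTask` are hypothetical helpers used for illustration.
const tasks = [];

function lowPriorityWork(deadline) {
  // Keep working while idle time remains, or if the timeout has expired.
  while ((deadline.timeRemaining() > 0 || deadline.didTimeout) && tasks.length > 0) {
    runTask(tasks.shift());
  }
  // If work remains, ask for another idle callback.
  if (tasks.length > 0) {
    requestIdleCallback(lowPriorityWork, { timeout: 2000 });
  }
}

requestIdleCallback(lowPriorityWork, { timeout: 2000 });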

In practice, however, requestIdleCallback has browser-compatibility problems and fires unstably, so React implements its own time-slicing mechanism in JS; this part is called the Scheduler. At the same time, the React team did not see browser vendors pushing requestIdleCallback support forward, so React resorted to a hacky polyfill.

requestIdleCallback polyfill scheme (Scheduler)

As mentioned above, the time-slicing mechanism implemented inside React is called the Scheduler. To understand time slices: in typical scenarios the whole process of rendering one page update is called a frame, and the complete browser rendering pipeline looks roughly like this:

Execute JS –> Compute Style –> Layout –> Paint –> Composite

The characteristics of the frame:

The rendering of a frame happens after the JS execution flow, i.e. at the end of an event-loop iteration.

Frame rendering is handled in a separate UI thread, together with the GPU thread, which is used to draw 3D views.

Frame rendering and frame presentation are asynchronous processes, because the screen refreshes at a fixed rate, usually 60 times per second (1000ms / 60 ≈ 16.6ms per frame). This means a frame should be rendered within roughly 16.6 milliseconds; otherwise some high-frequency interactions will drop frames and stutter, the typical symptom of render frames failing to keep up with the refresh rate. Not every frame is strictly required to finish within 16.6ms, but rendering should still follow Google's RAIL model.

How does the polyfill control task execution within a fixed frame budget? In essence, it uses requestAnimationFrame to confine the execution of a batch of tasks to time slices of roughly 33ms.
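Below is a greatly simplified sketch of that idea, not the actual Scheduler source, assuming a queue of unit-of-work callbacks and a 33ms frame budget:

// Simplified time slicing: requestAnimationFrame marks the frame start,
// postMessage defers the actual work until after this frame's rendering.
const taskQueue = [];
const frameLength = 33; // assumed frame budget in ms
let frameDeadline = 0;

const channel = new MessageChannel();
channel.port1.onmessage = () => {
  // Flush tasks until this frame's budget is used up.
  while (taskQueue.length > 0 && performance.now() < frameDeadline) {
    const task = taskQueue.shift();
    task();
  }
  // Work left over? Continue in the next frame.
  if (taskQueue.length > 0) {
    requestAnimationFrame(onFrame);
  }
};

function onFrame(rafTime) {
  frameDeadline = rafTime + frameLength;
  channel.port2.postMessage(null);
}

function scheduleTask(task) {
  taskQueue.push(task);
  if (taskQueue.length === 1) {
    requestAnimationFrame(onFrame);
  }
}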

Lane

This gives us an asynchronous scheduling strategy, but with asynchronous scheduling alone, how do we decide which tasks should be scheduled at all, which first and which last? This is where Lanes come in, playing a role similar to microtasks and macrotasks.

On top of asynchronous scheduling, we also need fine-grained priority management for every task, so that high-priority tasks run first, the priority of each fiber work unit can be compared, and tasks with the same priority can be batched into a single update, as sketched below.
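Lanes are implemented as bitmasks, so merging and comparing priorities are cheap bitwise operations. A minimal sketch of the idea (the lane constants below are illustrative, not the exact values from the React source):

// Illustrative lane bitmasks; the real source defines many more lanes.
const SyncLane            = 0b0001; // discrete user input
const InputContinuousLane = 0b0010; // continuous input such as scrolling
const DefaultLane         = 0b0100; // normal updates
const IdleLane            = 0b1000; // lowest priority

// Merge two sets of pending lanes.
function mergeLanes(a, b) {
  return a | b;
}

// The highest-priority pending lane is the lowest set bit.
function getHighestPriorityLane(lanes) {
  return lanes & -lanes;
}

const pending = mergeLanes(SyncLane, DefaultLane);          // 0b0101
console.log(getHighestPriorityLane(pending) === SyncLane);  // true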

For lane design, see this article:

Github.com/facebook/re…

Application scenarios

With the asynchronous, interruptible scheduling mechanism described above, we can implement operations such as batched updates (batchUpdates). (Figures: the fiber tree before the update and after the update.)
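For example, inside a React event handler multiple state updates are batched into a single re-render (a small illustrative component, not from the original article):

function Counter() {
  const [count, setCount] = React.useState(0);
  const [flag, setFlag] = React.useState(false);

  const handleClick = () => {
    // Both updates are batched; Counter re-renders only once.
    setCount((c) => c + 1);
    setFlag((f) => !f);
  };

  return <button onClick={handleClick}>{count}</button>;
}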

Besides the CPU bottleneck, there is another class of problems: side effects such as data fetching and file operations. Device performance and network conditions vary, so how should React handle these side effects so that we can follow best practices while coding and keep applications running consistently?

Design in the service of computation

We have all written data-fetching code: show a loading indicator before the data arrives, and hide it once it has. If device performance and network conditions are good, the data arrives almost immediately; do we really need to show the loading state at all? How can we provide a better user experience?

Take a look at the following example

function getSomething(id) {
  return fetch(`${host}?id=${id}`).then((res) => {
    return res.param
  })
}

async function getTotalSomething(id1, id2) {
  const p1 = await getSomething(id1);
  const p2 = await getSomething(id2);

  return p1 + p2;
}

async function bundle() {
  await getTotalSomething('001', '002');
}

We usually fetch data with async/await, but this forces the calling function to become asynchronous as well. That is the nature of async: the side effect is not separated out.

To isolate side effects, refer to the code below

function useSomething(id) {
  const [value, setValue] = useState(null)

  useEffect(() => {
    fetch(`${host}?id=${id}`).then((res) => {
      setValue(res.param)
    })
  }, [id])

  return value
}

function TotalSomething({ id1, id2 }) {
  const p1 = useSomething(id1);
  const p2 = useSomething(id2);

  return <TotalSomething props={... } />
}

This is the ability of hooks to decouple side effects.

Decoupling side effects is very common in functional programming practice, for example in redux-saga: the side effect is separated into a saga, and you do not perform the side effect yourself, you only declare the request.

function* fetchUser(action) {
  try {
    const user = yield call(Api.fetchUser, action.payload.userId);
    yield put({ type: "USER_FETCH_SUCCEEDED", user: user });
  } catch (e) {
    yield put({ type: "USER_FETCH_FAILED", message: e.message });
  }
}

Strictly speaking, React does not support algebraic effects, but Suspense is an extension of that concept: a fiber finishes its unit of update work and then hands control back to the browser, letting the browser decide how to schedule what comes next.

const ProductResource = createResource(fetchProduct);

const Product = (props) => {
  const p = ProductResource.read( // write asynchronous code as if it were synchronous!
    props.id
  );
  return <h3>{p.price}</h3>;
};

function App() {
  return (
    <div>
      <Suspense fallback={<div>Loading...</div>}>
        <Product id={123} />
      </Suspense>
    </div>
  );
}

ProductResource.read throws a special Promise when the data has not been fetched yet. Because of the Scheduler, React can catch this promise, suspend the update, and resume execution once the data has been retrieved. The ProductResource could be backed by localStorage, or even by redis, MySQL and other databases; that, as I understand it, is the prototype of the Server Component.
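A minimal sketch of how such a resource could be built (createResource is illustrative here, not an official React API):

// Illustrative: read() throws the pending promise until the data is cached.
function createResource(fetchFn) {
  const cache = new Map(); // id -> { status, value, promise }
  return {
    read(id) {
      let entry = cache.get(id);
      if (!entry) {
        entry = { status: 'pending', value: null, promise: null };
        entry.promise = fetchFn(id).then((data) => {
          entry.status = 'resolved';
          entry.value = data;
        });
        cache.set(id, entry);
      }
      if (entry.status === 'pending') {
        throw entry.promise; // Suspense catches this and renders the fallback
      }
      return entry.value;
    },
  };
}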

This article, based on the core source of React 16.5+, has briefly analyzed React's asynchronous scheduling and allocation mechanism. Understanding these principles gives us a better overall view when doing system design and building models, and it also helps when designing for complex business scenarios. This is the first in a series of articles on the React source code; more will follow.

happy hacking~~