I am a Vue fan. Over the weekend I happened to be shooting the breeze with a big-shot dev, the conversation turned to Fiber, and he promptly made fun of how little I knew about it.

He also joked with me about the current state of front-end content:

  1. Search results on Baidu can't be trusted: apart from the ads, what's left is mostly the same article copied around by different people (true most of the time)
  2. A lot of low-effort filler posts are produced by copy-pasting, so the article you're reading is probably several versions out of date (also true most of the time)

So I went and read the React Fiber source for myself. Why Fiber? Because Vue doesn't have it, Vue 3 doesn't have it either, and yet it sounded amazing.

This was written on 2020/05/25, referencing the React source at v16.13.1 (the current version at the time).

What problem was Fiber introduced to solve? (skip ahead if you already know)

First of all, why Fiber

React updates: When React decides to load or update the component tree, it does a number of things, such as calling the lifecycle functions of each component, evaluating and comparing the Virtual DOM, and finally updating the DOM tree.

For example: updating a component takes 1 millisecond, and updating 1000 components takes 1 second, during which the main thread is concentrating on the update.

The browser redraws the page at regular intervals, typically 60 times per second. That means it redraws roughly every 16 milliseconds (1/60 ≈ 0.0167 s), and we call each of these 16 ms windows a frame. What does the browser do within a frame?

  1. Run JavaScript.
  2. Calculate styles (Style).
  3. Compute layout (Layout).
  4. Paint layer contents (Paint).
  5. Composite the layers into the final rendered result (Composite).

If any of these steps takes too long and the total goes past 16 ms, the user may perceive jank. The synchronous update of 1000 components in the example above takes 1 second, which means the user is stuck for nearly a whole second!

Due to the single-threaded nature of JavaScript, no single synchronous task should take too long, otherwise the application stops responding to other input. React Fiber was created to change exactly this situation.

What is Fiber?

One solution to long synchronous updates is time slicing: fragment the update process, breaking one long task into many small pieces of work. This enables non-blocking rendering, applying updates by priority, and pre-rendering content in the background.
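As a toy illustration of the idea (not how React actually does it; React's real mechanism is the Scheduler described below), splitting a long synchronous loop into chunks might look like this:

// Toy illustration only: process updates in small chunks instead of one long synchronous loop.
// `components` and `updateComponent` are hypothetical stand-ins for the example above.
function updateInChunks(components, updateComponent, chunkSize = 50) {
  let i = 0;
  function doChunk() {
    const end = Math.min(i + chunkSize, components.length);
    for (; i < end; i++) {
      updateComponent(components[i]); // ~1ms each in the earlier example
    }
    if (i < components.length) {
      // Yield back to the browser so it can handle input and paint, then continue.
      setTimeout(doChunk, 0);
    }
  }
  doChunk();
}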

Fiber work is driven by performUnitOfWork (PS: as a data structure, a Fiber node represents a piece of work to be done; in other words, it is a unit of work). The Fiber architecture provides a convenient way to track, schedule, suspend and abort that work.

Creation and use process of Fiber:

  1. The data from each React element returned by the render method is merged into the tree of fiber nodes
  2. React creates a fiber node for each React element
  3. Unlike React elements, fibers are not recreated on every render
  4. In subsequent updates, React reuses the fiber nodes and updates the necessary properties with data from the corresponding React element
  5. Meanwhile React maintains a workInProgress tree for computing updates (double buffering); think of it as the tree that represents the current work in progress. React builds the WIP tree while comparing it against the previously rendered tree, and each node's alternate points to the equivalent node in the old tree.

PS: workInProgress belongs to the beginWork process; covering it properly would nearly double the length of this article. (Mainly because I'm lazy, and still a noob…)
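A rough sketch of the double-buffering idea (heavily simplified from React's createWorkInProgress; treat it as an illustration, not the exact source):

// Heavily simplified sketch of double buffering: reuse the alternate fiber when it exists.
// createFiber is React's internal fiber factory; most property copying is omitted here.
function createWorkInProgress(current, pendingProps) {
  let workInProgress = current.alternate;
  if (workInProgress === null) {
    // First time this node is updated: clone it and link the two trees together.
    workInProgress = createFiber(current.tag, pendingProps, current.key, current.mode);
    workInProgress.stateNode = current.stateNode;
    workInProgress.alternate = current;
    current.alternate = workInProgress;
  } else {
    // Subsequent updates: reuse the existing alternate instead of allocating a new fiber.
    workInProgress.pendingProps = pendingProps;
  }
  return workInProgress;
}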

Fiber's architecture is divided into two main phases: Reconciliation/Render and Commit.

React Reconciliation stage

After the Fiber refactor, the reconciliation phase does roughly what the old version did, except that it no longer recurses through the tree and it doesn't commit changes immediately.

Lifecycle hooks involved:

  • shouldComponentUpdate
  • componentWillMount (deprecated)
  • componentWillReceiveProps (deprecated)
  • componentWillUpdate (deprecated)
  • static getDerivedStateFromProps

Reconciliation features:

  • If the time slice runs out during the reconciliation phase, React yields control. Because the work performed in this phase does not produce any user-visible changes, yielding control here is not a problem.
  • Because the reconciliation phase can be interrupted, resumed, or even redone, its lifecycle hooks may be called multiple times! For example, componentWillMount might be called twice.
  • Therefore, reconciliation-phase lifecycle hooks must not contain side effects, which is why those hooks were deprecated.

The reconciliation traversal is depth-first (DFS): child nodes are processed first, then siblings, until the whole tree has been walked.

React Commit phase

Lifecycle hooks involved:

  • componentDidMount
  • componentDidUpdate
  • componentWillUnmount
  • getSnapshotBeforeUpdate

Unlike reconciliation, the commit phase cannot be paused; it keeps updating the interface synchronously until it is finished.

How does Fiber handle priority?

The following issues need to be considered for the UI:

Not all state updates need to be shown immediately. For example:

  • An update to content that is off-screen can wait
  • Not all updates have the same priority: responding to user input should take precedence over rendering data that just arrived from a request
  • Ideally, high-priority operations should be able to interrupt low-priority ones

So React defines a set of event priorities

Below is the source for the priority timeouts

Source: github.com/facebook/re…

  var maxSigned31BitInt = 1073741823;

  // Times out immediately
  var IMMEDIATE_PRIORITY_TIMEOUT = -1;
  // Eventually times out
  var USER_BLOCKING_PRIORITY = 250;
  var NORMAL_PRIORITY_TIMEOUT = 5000;
  var LOW_PRIORITY_TIMEOUT = 10000;
  // Never times out
  var IDLE_PRIORITY = maxSigned31BitInt;
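timeoutForPriorityLevel, which shows up later in unstable_scheduleCallback, maps a priority level to one of the constants above; roughly:

// Roughly how the Scheduler maps a priority level to a timeout (simplified).
function timeoutForPriorityLevel(priorityLevel) {
  switch (priorityLevel) {
    case ImmediatePriority:
      return IMMEDIATE_PRIORITY_TIMEOUT;   // -1: already expired
    case UserBlockingPriority:
      return USER_BLOCKING_PRIORITY;       // 250ms
    case IdlePriority:
      return IDLE_PRIORITY;                // never expires
    case LowPriority:
      return LOW_PRIORITY_TIMEOUT;         // 10000ms
    case NormalPriority:
    default:
      return NORMAL_PRIORITY_TIMEOUT;      // 5000ms
  }
}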

Instead of diffing immediately when an update arrives, React pushes the current update into an update queue and hands it to the Scheduler, which processes the update according to how busy the main thread is.

Fiber ensures state consistency and view consistency no matter how the execution is split or in what order.

How do we ensure that tasks of the same priority triggered within a short window end up with the same expiration time? React does this with a ceiling function… (I hadn't used the bitwise | trick before…)

Below is the ceiling source code that handles the expiration time

Source: github.com/facebook/re…

function ceiling(num, precision) {
  return (((num / precision) | 0) + 1) * precision;
}
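For example, with a bucket size (precision) of 25, chosen here just for illustration, values that fall into the same 25-unit window round up to the same result, which is how updates fired close together end up sharing an expiration time:

ceiling(101, 25); // => 125
ceiling(120, 25); // => 125  (same bucket, so the same expiration time)
ceiling(126, 25); // => 150  (next bucket)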

So why do we need time consistency? See below.

How to schedule Fiber?

The first step is to find the entry point, scheduleUpdateOnFiber.

Each root has at most one scheduled task; if one already exists, we have to make sure its expiration time matches that of the next task to work on (which is why expiration times are bucketed with the ceiling function above).

The source code file

export function scheduleUpdateOnFiber(fiber: Fiber, expirationTime: ExpirationTime,) {
  checkForNestedUpdates();
  warnAboutRenderPhaseUpdatesInDEV(fiber);

  // Call markUpdateTimeFromFiberToRoot to update the fiber node's expirationTime.
  // A fiber tree has only one root fiber.
  const root = markUpdateTimeFromFiberToRoot(fiber, expirationTime);
  if (root === null) {
    warnAboutUpdateOnUnmountedFiberInDEV(fiber);
    return;
  }

  // TODO: computeExpirationForFiber also reads the priority. Pass the
  // priority as an argument to that function and this one.
  const priorityLevel = getCurrentPriorityLevel();

  if (expirationTime === Sync) {
    if (
      // Check if we're inside unbatchedUpdates
      (executionContext & LegacyUnbatchedContext) !== NoContext &&
      // Check if we're not already rendering
      (executionContext & (RenderContext | CommitContext)) === NoContext
    ) {
      // Register pending interactions on the root to avoid losing traced interaction data.
      schedulePendingInteractions(root, expirationTime);

      // This is a legacy edge case. The initial mount of a ReactDOM.render-ed
      // root inside of batchedUpdates should be synchronous, but layout updates
      // should be deferred until the end of the batch.
      performSyncWorkOnRoot(root);
    } else {
      ensureRootIsScheduled(root);
      schedulePendingInteractions(root, expirationTime);
      if (executionContext === NoContext) {
        // Flush the synchronous work now, unless we're already working or inside
        // a batch. This is intentionally inside scheduleUpdateOnFiber instead of
        // scheduleCallbackForFiber to preserve the ability to schedule a callback
        // without immediately flushing it. We only do this for user-initiated
        // updates, to preserve historical behavior of legacy mode.
        flushSyncCallbackQueue();
      }
    }
  } else {
    // Schedule a discrete update but only if it's not Sync.
    if (
      (executionContext & DiscreteEventContext) !== NoContext &&
      // Only updates at user-blocking priority or greater are considered
      // discrete, even inside a discrete event.
      (priorityLevel === UserBlockingPriority ||
        priorityLevel === ImmediatePriority)
    ) {
      // This is the result of a discrete event. Track the lowest priority
      // discrete update per root so we can flush them early, if needed.
      if (rootsWithPendingDiscreteUpdates === null) {
        rootsWithPendingDiscreteUpdates = new Map([[root, expirationTime]]);
      } else {
        const lastDiscreteTime = rootsWithPendingDiscreteUpdates.get(root);
        if (
          lastDiscreteTime === undefined ||
          lastDiscreteTime > expirationTime
        ) {
          rootsWithPendingDiscreteUpdates.set(root, expirationTime);
        }
      }
    }
    // Schedule other updates after in case the callback is sync.
    ensureRootIsScheduled(root);
    schedulePendingInteractions(root, expirationTime);
  }
}

The above source code mainly does the following things

  1. Calls markUpdateTimeFromFiberToRoot to update the fiber node's expirationTime
  2. Calls ensureRootIsScheduled (the interesting part of the update)
  3. Calls schedulePendingInteractions, which in turn calls scheduleInteractions
  • scheduleInteractions uses the FiberRoot's pendingInteractionMap property and the different expirationTimes to collect the set of update tasks behind each schedule, record their number, and track whether those tasks error out.

For every update, scheduleUpdateOnFiber calls ensureRootIsScheduled(root: FiberRoot).

The source code for ensureRootIsScheduled is below

The source code file

function ensureRootIsScheduled(root: FiberRoot) {
  const lastExpiredTime = root.lastExpiredTime;
  if (lastExpiredTime !== NoWork) {
    // Special case: Expired work should flush synchronously.
    root.callbackExpirationTime = Sync;
    root.callbackPriority_old = ImmediatePriority;
    root.callbackNode = scheduleSyncCallback(
      performSyncWorkOnRoot.bind(null, root),
    );
    return;
  }

  const expirationTime = getNextRootExpirationTimeToWorkOn(root);
  const existingCallbackNode = root.callbackNode;
  if (expirationTime === NoWork) {
    // There's nothing to work on.
    if (existingCallbackNode !== null) {
      root.callbackNode = null;
      root.callbackExpirationTime = NoWork;
      root.callbackPriority_old = NoPriority;
    }
    return;
  }

  // TODO: If this is an update, we already read the current time. Pass the
  // time as an argument.
  const currentTime = requestCurrentTimeForUpdate();
  const priorityLevel = inferPriorityFromExpirationTime(
    currentTime,
    expirationTime,
  );

  // If there's an existing render task, confirm it has the correct priority and
  // expiration time. Otherwise, we'll cancel it and schedule a new one.
  if (existingCallbackNode !== null) {
    const existingCallbackPriority = root.callbackPriority_old;
    const existingCallbackExpirationTime = root.callbackExpirationTime;
    if (
      // Callback must have the exact same expiration time.
      existingCallbackExpirationTime === expirationTime &&
      // Callback must have greater or equal priority.
      existingCallbackPriority >= priorityLevel
    ) {
      // Existing callback is sufficient.
      return;
    }
    // Need to schedule a new task.
    // TODO: Instead of scheduling a new task, we should be able to change the
    // priority of the existing one.
    cancelCallback(existingCallbackNode);
  }

  root.callbackExpirationTime = expirationTime;
  root.callbackPriority_old = priorityLevel;

  let callbackNode;
  if (expirationTime === Sync) {
    // Sync React callbacks are scheduled on a special internal queue
    callbackNode = scheduleSyncCallback(performSyncWorkOnRoot.bind(null, root));
  } else if (disableSchedulerTimeoutBasedOnReactExpirationTime) {
    callbackNode = scheduleCallback(
      priorityLevel,
      performConcurrentWorkOnRoot.bind(null, root),
    );
  } else {
    callbackNode = scheduleCallback(
      priorityLevel,
      performConcurrentWorkOnRoot.bind(null, root),
      // Compute a task timeout based on the expiration time. This also affects
      // ordering because tasks are processed in timeout order.
      {timeout: expirationTimeToMs(expirationTime) - now()},
    );
  }

  root.callbackNode = callbackNode;
}

The main job of ensureRootIsScheduled above is to push the work into a different scheduling function depending on whether the task is synchronous or asynchronous.

For synchronous scheduling, scheduleSyncCallback(callback) works as follows:

  • If the sync queue is not empty, the callback is simply pushed onto it (syncQueue.push(callback))
  • If the queue is empty, it is immediately pushed into the task scheduling queue (Scheduler_scheduleCallback)
  • performSyncWorkOnRoot is what gets passed in as the SchedulerCallback

The scheduleSyncCallback source code is shown below

The source code file

export function scheduleSyncCallback(callback: SchedulerCallback) {
  // Push this callback into an internal queue. We'll flush these either in
  // the next tick, or earlier if something calls `flushSyncCallbackQueue`.
  if (syncQueue === null) {
    syncQueue = [callback];
    // Flush the queue in the next tick, at the earliest.
    immediateQueueCallbackNode = Scheduler_scheduleCallback(
      Scheduler_ImmediatePriority,
      flushSyncCallbackQueueImpl,
    );
  } else {
    // Push onto existing queue. Don't need to schedule a callback because
    // we already scheduled one when we created the queue.
    syncQueue.push(callback);
  }
  return fakeCallbackNode;
}


Asynchronous scheduling is simpler: the asynchronous task goes directly into the scheduling queue (Scheduler_scheduleCallback), with performConcurrentWorkOnRoot as the SchedulerCallback.

export function scheduleCallback(reactPriorityLevel: ReactPriorityLevel, callback: SchedulerCallback, options: SchedulerCallbackOptions | void | null,) {
  const priorityLevel = reactPriorityToSchedulerPriority(reactPriorityLevel);
  return Scheduler_scheduleCallback(priorityLevel, callback, options);
}

Whether scheduling is synchronous or asynchronous, everything ends up going through unstable_scheduleCallback(priorityLevel, callback, options); each path just passes its own SchedulerCallback.

Tip: since much of the code below uses peek, here is its implementation up front. It returns either the first element of the heap or null.

Peek related source files

  export function peek(heap: Heap): Node | null {
    const first = heap[0];
    return first === undefined ? null : first;
  }
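push and pop from the same file treat taskQueue and timerQueue as min-heaps. The ordering is roughly: sortIndex first (startTime or expirationTime), then the task id as a tie-breaker, something like:

// Roughly how the Scheduler's min-heap orders tasks (simplified sketch).
function compare(a, b) {
  // Compare sortIndex first, then task id so that ties stay in insertion order.
  const diff = a.sortIndex - b.sortIndex;
  return diff !== 0 ? diff : a.id - b.id;
}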

Scheduler_scheduleCallback

Source: github.com/facebook/re…

// Push a task to the task scheduling queue
function unstable_scheduleCallback(priorityLevel, callback, options) {
  var currentTime = getCurrentTime();

  var startTime;
  var timeout;
  if (typeof options === 'object' && options !== null) {
    var delay = options.delay;
    if (typeof delay === 'number' && delay > 0) {
      startTime = currentTime + delay;
    } else {
      startTime = currentTime;
    } 
    timeout =
      typeof options.timeout === 'number'
        ? options.timeout
        : timeoutForPriorityLevel(priorityLevel);
  } else {
    // Calculate different expiration times for different priorities
    timeout = timeoutForPriorityLevel(priorityLevel);
    startTime = currentTime;
  }
  
   // Define a new expiration time
  var expirationTime = startTime + timeout;

  // Define a new task
  var newTask = {
    id: taskIdCounter++,
    callback,
    priorityLevel,
    startTime,
    expirationTime,
    sortIndex: -1,
  };

  if (enableProfiling) {
    newTask.isQueued = false;
  }

  if (startTime > currentTime) {
    // This is a delayed task.
    newTask.sortIndex = startTime;

    // Push the timeout task to the timeout queue
    push(timerQueue, newTask);
    if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
      // All tasks are delayed, and this is the task with the earliest delay.
      // When all tasks are delayed and the task is the earliest one
      if (isHostTimeoutScheduled) {
        // Cancel an existing timeout.
        cancelHostTimeout();
      } else {
        isHostTimeoutScheduled = true;
      }
      // Schedule a timeout.
      requestHostTimeout(handleTimeout, startTime - currentTime);
    }
  } else {
    newTask.sortIndex = expirationTime;

    // Push the new task to the task queue
    push(taskQueue, newTask);
    if (enableProfiling) {
      markTaskStart(newTask, currentTime);
      newTask.isQueued = true;
    }
    // Schedule a host callback, if needed. If we're already performing work,
    // wait until the next time we yield.
    // Execute the callback method and wait for a callback to complete if it is already working again
    if (!isHostCallbackScheduled && !isPerformingWork) {
      isHostCallbackScheduled = true;
      requestHostCallback(flushWork);
    }
  }

  return newTask;
}

Note: markTaskStart is used to record data and corresponds to markTaskCompleted

The source code file

export function markTaskStart(task: { id: number, priorityLevel: PriorityLevel, ... }, ms: number,) {
  if (enableProfiling) {
    profilingState[QUEUE_SIZE]++;

    if (eventLog !== null) {
      // performance.now returns a float, representing milliseconds. When the
      // event is logged, it's coerced to an int. Convert to microseconds to
      // maintain extra degrees of precision.
      logEvent([TaskStartEvent, ms * 1000, task.id, task.priorityLevel]);
    }
  }
}

export function markTaskCompleted(task: { id: number, priorityLevel: PriorityLevel, ... }, ms: number) {
  if (enableProfiling) {
    profilingState[PRIORITY] = NoPriority;
    profilingState[CURRENT_TASK_ID] = 0;
    profilingState[QUEUE_SIZE]--;

    if (eventLog !== null) {
      logEvent([TaskCompleteEvent, ms * 1000, task.id]);
    }
  }
}

unstable_scheduleCallback does several things:

  • Derives the newTask's expirationTime from options.delay, options.timeout and timeoutForPriorityLevel() (see the worked example after this list)
  • If the task is delayed (startTime > currentTime):
    • Pushes the delayed task into the timer queue (timerQueue)
    • If all tasks are delayed and this one is the earliest, cancels any existing timeout via cancelHostTimeout
    • Calls requestHostTimeout to schedule a new timeout
  • Otherwise, pushes the new task into the task queue (taskQueue) and, if nothing is already running, kicks off requestHostCallback(flushWork)
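A rough worked example of the timing math (the numbers are made up for illustration; 5000 is NORMAL_PRIORITY_TIMEOUT from earlier):

// A NormalPriority task scheduled with no options at currentTime = 10000:
// startTime      = currentTime             = 10000   (no delay)
// timeout        = NORMAL_PRIORITY_TIMEOUT = 5000
// expirationTime = startTime + timeout     = 15000
// sortIndex      = expirationTime          = 15000   (not delayed, so it goes straight into taskQueue)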

The source code file

cancelHostTimeout source code

  cancelHostTimeout = function() {
    clearTimeout(_timeoutID);
  };

requestHostTimeout source code

  requestHostTimeout = function(cb, ms) {
    _timeoutID = setTimeout(cb, ms);
  };

So what does the cb passed to requestHostTimeout, namely handleTimeout, actually do?

  function handleTimeout(currentTime) {
    isHostTimeoutScheduled = false;
    advanceTimers(currentTime);

    if (!isHostCallbackScheduled) {
      if (peek(taskQueue) !== null) {
        isHostCallbackScheduled = true;
        requestHostCallback(flushWork);
      } else {
        const firstTimer = peek(timerQueue);
        if (firstTimer !== null) {
          requestHostTimeout(handleTimeout, firstTimer.startTime - currentTime);
        }
      }
    }
  }

The above method is important because it does several things

  1. Calls advanceTimers to check for tasks that are no longer delayed and move them into the task queue.

Here is the advanceTimers source code

function advanceTimers(currentTime) {
  // Check for tasks that are no longer delayed and add them to the queue.
  let timer = peek(timerQueue);
  while (timer !== null) {
    if (timer.callback === null) {
      // Timer was cancelled.
      pop(timerQueue);
    } else if (timer.startTime <= currentTime) {
      // Timer fired. Transfer to the task queue.
      pop(timerQueue);
      timer.sortIndex = timer.expirationTime;
      push(taskQueue, timer);
      if (enableProfiling) {
        markTaskStart(timer, currentTime);
        timer.isQueued = true;
      }
    } else {
      // Remaining timers are pending.
      return;
    }
    timer = peek(timerQueue);
  }
}
  2. Calls requestHostCallback, which uses a MessageChannel to kick off task scheduling via performWorkUntilDeadline.

The requestHostCallback method is particularly important

The source code file

// Call the performWorkUntilDeadline method via onmessage
const channel = new MessageChannel();
const port = channel.port2;
channel.port1.onmessage = performWorkUntilDeadline;

// postMessage
requestHostCallback = function(callback) {
  scheduledHostCallback = callback;
  if (!isMessageLoopRunning) {
    isMessageLoopRunning = true;
    port.postMessage(null);
  }
};

Then performWorkUntilDeadline in the same file calls scheduledHostCallback, which is the flushWork we passed in earlier


const performWorkUntilDeadline = () => {
  if (scheduledHostCallback !== null) {
    const currentTime = getCurrentTime();
    // Yield after `yieldInterval` ms, regardless of where we are in the vsync
    // cycle. This means there's always time remaining at the beginning of
    // the message event.
    deadline = currentTime + yieldInterval;
    const hasTimeRemaining = true;
    try {
      const hasMoreWork = scheduledHostCallback(
        hasTimeRemaining,
        currentTime,
      );
      if (!hasMoreWork) {
        isMessageLoopRunning = false;
        scheduledHostCallback = null;
      } else {
        // If there's more work, schedule the next message event at the end
        // of the preceding one.
        port.postMessage(null);
      }
    } catch (error) {
      // If a scheduler task throws, exit the current browser task so the
      // error can be observed.
      port.postMessage(null);
      throw error;
    }
  } else {
    isMessageLoopRunning = false;
  }
  // Yielding to the browser will give it a chance to paint, so we can
  // reset this.
  needsPaint = false;
};
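shouldYieldToHost, which workLoop below relies on, is essentially just a check against this deadline (ignoring the experimental isInputPending path); yieldInterval is about 5 ms by default:

// Roughly: have we used up the current time slice (yieldInterval)?
shouldYieldToHost = function() {
  return getCurrentTime() >= deadline;
};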

The main function of flushWork is to call the workLoop to execute all tasks in a loop

The source code file

function flushWork(hasTimeRemaining, initialTime) {
  if (enableProfiling) {
    markSchedulerUnsuspended(initialTime);
  }

  // We'll need a host callback the next time work is scheduled.
  isHostCallbackScheduled = false;
  if (isHostTimeoutScheduled) {
    // We scheduled a timeout but it's no longer needed. Cancel it.
    isHostTimeoutScheduled = false;
    cancelHostTimeout();
  }

  isPerformingWork = true;
  const previousPriorityLevel = currentPriorityLevel;
  try {
    if (enableProfiling) {
      try {
        return workLoop(hasTimeRemaining, initialTime);
      } catch (error) {
        if (currentTask !== null) {
          const currentTime = getCurrentTime();
          markTaskErrored(currentTask, currentTime);
          currentTask.isQueued = false;
        }
        throw error;
      }
    } else {
      // No catch in prod codepath.
      return workLoop(hasTimeRemaining, initialTime);
    }
  } finally {
    currentTask = null;
    currentPriorityLevel = previousPriorityLevel;
    isPerformingWork = false;
    if (enableProfiling) {
      const currentTime = getCurrentTime();
      markSchedulerSuspended(currentTime);
    }
  }
}

workLoop lives in the same file as flushWork; it pulls the highest-priority task off the task queue and executes it.

Remember SchedulerCallback?

  • For synchronous tasks, it is performSyncWorkOnRoot
  • For asynchronous tasks, it is performConcurrentWorkOnRoot
function workLoop(hasTimeRemaining, initialTime) {
  let currentTime = initialTime;
  advanceTimers(currentTime);
  currentTask = peek(taskQueue);
  while (
    currentTask !== null &&
    !(enableSchedulerDebugging && isSchedulerPaused)
  ) {
    if (
      currentTask.expirationTime > currentTime &&
      (!hasTimeRemaining || shouldYieldToHost())
    ) {
      // This currentTask hasn't expired, and we've reached the deadline.
      break;
    }
    const callback = currentTask.callback;
    if (callback !== null) {
      currentTask.callback = null;
      currentPriorityLevel = currentTask.priorityLevel;
      const didUserCallbackTimeout = currentTask.expirationTime <= currentTime;
      markTaskRun(currentTask, currentTime);
      const continuationCallback = callback(didUserCallbackTimeout);
      currentTime = getCurrentTime();
      if (typeof continuationCallback === 'function') {
        currentTask.callback = continuationCallback;
        markTaskYield(currentTask, currentTime);
      } else {
        if (enableProfiling) {
          markTaskCompleted(currentTask, currentTime);
          currentTask.isQueued = false;
        }
        if (currentTask === peek(taskQueue)) {
          pop(taskQueue);
        }
      }
      advanceTimers(currentTime);
    } else {
      pop(taskQueue);
    }
    currentTask = peek(taskQueue);
  }
  // Return whether there's additional work
  if (currentTask !== null) {
    return true;
  } else {
    const firstTimer = peek(timerQueue);
    if (firstTimer !== null) {
      requestHostTimeout(handleTimeout, firstTimer.startTime - currentTime);
    }
    return false;
  }
}
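One detail worth calling out: if the SchedulerCallback returns a function, workLoop stores it back on currentTask.callback and keeps the task, so work resumes from that continuation in the next slice. A sketch of a callback cooperating with this protocol (illustration only; thereIsWorkLeft and performOneUnit are made-up helpers):

// Illustration of a SchedulerCallback that yields and continues.
function myWork(didTimeout) {
  while (thereIsWorkLeft()) {
    if (!didTimeout && shouldYieldToHost()) {
      return myWork;   // return a continuation; workLoop keeps the task alive
    }
    performOneUnit();
  }
  return null;         // done; workLoop pops the task off the queue
}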

Either way, everything eventually ends up in performUnitOfWork.

The difference is simply that the asynchronous path can be interrupted: it checks whether the time slice has run out before each call.

The source code file

function performUnitOfWork(unitOfWork: Fiber): void {
  // The current, flushed, state of this fiber is the alternate. Ideally
  // nothing should rely on this, but relying on it here means that we don't
  // need an additional field on the work in progress.
  const current = unitOfWork.alternate;
  setCurrentDebugFiberInDEV(unitOfWork);

  let next;
  if (enableProfilerTimer && (unitOfWork.mode & ProfileMode) !== NoMode) {
    startProfilerTimer(unitOfWork);
    next = beginWork(current, unitOfWork, renderExpirationTime);
    stopProfilerTimerIfRunningAndRecordDelta(unitOfWork, true);
  } else {
    next = beginWork(current, unitOfWork, renderExpirationTime);
  }

  resetCurrentDebugFiberInDEV();
  unitOfWork.memoizedProps = unitOfWork.pendingProps;
  if (next === null) {
    // If this doesn't spawn new work, complete the current work.
    completeUnitOfWork(unitOfWork);
  } else {
    workInProgress = next;
  }

  ReactCurrentOwner.current = null;
}

The startProfilerTimer and stopProfilerTimerIfRunningAndRecordDelta calls above simply record how long the fiber's work actually took.

The source code file

function startProfilerTimer(fiber: Fiber): void {
  if (!enableProfilerTimer) {
    return;
  }

  profilerStartTime = now();

  if (((fiber.actualStartTime: any): number) < 0) {
    fiber.actualStartTime = now();
  }
}

function stopProfilerTimerIfRunningAndRecordDelta(fiber: Fiber, overrideBaseTime: boolean): void {
  if (!enableProfilerTimer) {
    return;
  }

  if (profilerStartTime >= 0) {
    const elapsedTime = now() - profilerStartTime;
    fiber.actualDuration += elapsedTime;
    if (overrideBaseTime) {
      fiber.selfBaseDuration = elapsedTime;
    }
    profilerStartTime = -1;
  }
}

Finally we arrive at the beginWork process. What's in it? A big switch over the workInProgress fiber, with a case for each kind of component.

If you want to read the beginWork source, have a look at the beginWork-related source files yourself.
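For a feel of its shape, beginWork is (very roughly, heavily abridged) a switch on the fiber's tag that returns the next child fiber to work on:

// Heavily abridged sketch; the real function has dozens of cases and extra bailout logic.
function beginWork(current, workInProgress, renderExpirationTime) {
  switch (workInProgress.tag) {
    case FunctionComponent:
      return updateFunctionComponent(/* ... */);  // run the function component, reconcile children
    case ClassComponent:
      return updateClassComponent(/* ... */);     // instance lifecycle + render, reconcile children
    case HostComponent:
      return updateHostComponent(/* ... */);      // a DOM element such as <div>
    // ...many more cases
  }
}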

Conclusion

Finally, the summary. I went back and forth for a long time on whether to write this part: every reader, at a different time and in a different mood, will take something different away from the source (and future me re-reading this counts as a reader too), so each stage really deserves its own summary.

But without a summary, the walkthrough feels dry and inconclusive. So here is a quick one (guaranteed original; you won't find it copied anywhere else):

  1. A Fiber is essentially a node; rendering becomes a traversal of a linked list
  2. A Fiber's expirationTime is computed from its priority
  3. Thanks to the linked-list structure, time-sliced work can be interrupted and resumed very conveniently
  4. Time slicing is implemented with setTimeout + postMessage
  5. When all tasks are delayed, clearTimeout is executed
  6. The number of tasks and their working time are tracked

Why does Fiber use a linked list?

The linked-list structure was the end result, not the goal itself: what the React developers initially wanted was to emulate the call stack.

The call stack is most often used to store the return addresses of subroutines. When a subroutine is called, the main program must temporarily record where execution should return once the subroutine finishes; if that subroutine calls another subroutine, it in turn pushes its own return address onto the call stack and pops it when it is done. Besides return addresses, the stack also holds local variables, function arguments, and the surrounding environment.

Thus, the Fiber object is designed as a linked list structure, consisting of the following main properties

  • type: the fiber's type
  • return: stores the current node's parent fiber
  • child: stores the first child node
  • sibling: stores the node's next sibling (to its right)
  • alternate: points to the equivalent node in the old tree

When we diff the tree, even if we are interrupted, we only need to remember the single node we stopped at, and the traversal and diff can resume from it in the next time slice. This is one of the benefits of using a linked list as Fiber's data structure.
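To make that concrete, here is a simplified sketch (my own illustration, not React's actual loop) of how the child / sibling / return pointers let a depth-first traversal stop at any node and pick up again later; visit stands in for the per-fiber work (beginWork/completeWork):

// Walk a fiber tree depth-first using only the three pointers.
// The loop can break after any node and resume later from that same node.
function walk(root) {
  let node = root;
  while (true) {
    visit(node);                        // hypothetical "do the work for this fiber"
    if (node.child !== null) {          // 1. go down to the first child
      node = node.child;
      continue;
    }
    if (node === root) return;          // a root with no children: done
    while (node.sibling === null) {     // 2. no sibling: climb back up via return
      if (node.return === null || node.return === root) return;
      node = node.return;
    }
    node = node.sibling;                // 3. then move on to the next sibling
  }
}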

Why not use requestIdleCallback for time slicing?

Events the browser runs through periodically:

  1. Macro tasks
  2. Micro tasks
  4. requestAnimationFrame
  5. IntersectionObserver
  6. Update the interface (render)
  7. requestIdleCallback
  8. The next frame

According to the official description:

The window.requestIdleCallback() method queues a function to be called during the browser's idle periods. This lets developers perform background and low-priority work on the main event loop without affecting latency-critical events such as animations and input responses. Callbacks are generally executed in first-come, first-called order; however, if a callback specifies a timeout, it may be run out of order so that it executes before the timeout expires. You can call requestIdleCallback() inside an idle callback to schedule another callback on a later pass through the event loop.

It seems like a perfect fit for time slicing, and React did want to use this API for time-sliced rendering. However, browser support is poor, and requestIdleCallback is a bit too strict: it simply isn't invoked frequently enough to achieve smooth UI rendering.

And the hope is that, with the Fiber architecture, the reconciliation process can be interrupted so the CPU is relinquished "in time". So the React team ended up implementing its own version.

In fact, the idea of Fiber and the concept of coroutines are compatible. Here’s an example:

Normal functions: (cannot be interrupted and resumed)

const tasks = []
function run() {
  let task
  while (task = tasks.shift()) {
    execute(task)
  }
}

If using Generator syntax:

const tasks = []
function * run() {
  let task

  while (task = tasks.shift()) {
    // Determine if there are high priority events that need to be handled, and relinquish control if there are
    if (hasHighPriorityEvent()) {
      yield
    }

    // After processing the high-priority event, resume the function call stack and continue executing...
    execute(task)
  }
}

React, however, tried implementing it with Generator, found it cumbersome and abandoned it.

Why not use Generator for time slicing

There are two main reasons:

  1. Generators require every function in the stack to be wrapped in a generator. That not only adds a lot of syntactic overhead, but also runtime overhead in existing implementations. Better than nothing, but the performance problems remain.
  2. The biggest reason is that generators are stateful: you can't simply resume from an arbitrary point in the middle. To restore the state of a deep recursion you would have to replay it from the start to rebuild the previous call stack.

Why not use Web Workers for time slicing?

Can a Web Worker create a multi-threaded environment to achieve time slicing?

The React team did consider this: they explored shared immutable persistent data structures, custom VM tweaks, and so on, but JavaScript as a language just isn't suited to it.

Because the runtime shares mutable state (such as prototypes), the ecosystem isn't ready: you would have to duplicate code loading and module initialization across workers. If garbage collectors must be thread-safe, they can't be as efficient as today's, and VM implementers seem unwilling to bear the cost of implementing persistent data structures. Shared mutable arrays seem to be evolving, but requiring all data to pass through that layer doesn't look feasible in today's ecosystem. Artificial boundaries between different parts of the code base also don't work well and create unnecessary friction. Even then, you still have a lot of JS code (such as utility libraries) that must be copied into every worker, which slows startup and adds memory overhead. So, yes, threads probably aren't an option until something like WebAssembly arrives.

You cannot safely abort a background thread. Aborting and restarting threads is not cheap, and in many languages it is not safe either, because you might be in the middle of some lazy initialization. Even if the work is effectively interrupted, you may still have to keep spending CPU cycles on it.

Another limitation is that, since threads cannot be terminated immediately, there is no way to guarantee that two threads are not working on the same component at the same time. This leads to restrictions such as being unable to support stateful class instances (i.e. React.Component): a thread can't simply reuse part of the work done on another thread.


Finally

  1. Please give a thumbs up if you find it helpful
  2. This article is from github.com/zhongmeizhi…
  3. Welcome to follow the public account "front-end advanced class" to learn front-end seriously and level up together. Reply "full stack" or "Vue" to get a small gift.