Introduction
This article is about the priority of fiber tasks in React. Lane priorities are different from Scheduler priorities; the two are converted into each other when one system calls the other. Priorities are expressed as binary bitmasks. (Adding and removing lanes with mask operations is not covered here.)
The text focuses on ConcurrentMode.
Expressing priority
As the name suggests, a lane is a "track": the closer a bit is to the right (the lower the bit), the higher its priority.
// A few examples
export const SyncLane: Lane = /*                        */ 0b0000000000000000000000000000001;
export const InputContinuousLane: Lane = /*             */ 0b0000000000000000000000000000100;
export const DefaultLane: Lane = /*                     */ 0b0000000000000000000000000010000;
const TransitionLanes: Lanes = /*                       */ 0b0000000001111111111111111000000;
const NonIdleLanes: Lanes = /*                          */ 0b0001111111111111111111111111111;
export const IdleLane: Lane = /*                        */ 0b0100000000000000000000000000000;
// ... remaining lanes omitted
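Because lower bits mean higher priority, the highest-priority lane in a set of lanes can be picked out with a single bit trick. A minimal sketch (this mirrors what React's getHighestPriorityLane does with lanes & -lanes):

// Isolate the lowest set bit, i.e. the rightmost and therefore highest-priority lane
function getHighestPriorityLane(lanes) {
  return lanes & -lanes;
}

// Example: a pending set containing SyncLane and DefaultLane
// 0b10001 & -0b10001 === 0b00001, so SyncLane is picked first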
A lane is requested when an Update is created
Triggering an update computes a priority lane and attaches it to both the update object and the fiber. Take the setState flow as an example:
// Request a lane; the corresponding priority lane is calculated
const lane = requestUpdateLane(fiber);
// Create an update carrying the lane and append it to the end of the updateQueue list
const update: Update<S, A> = {
lane,
action,
eagerReducer: null,
eagerState: null,
next: (null: any),
};
// Merge the lane from the fiber up to the root
const root = scheduleUpdateOnFiber(fiber, lane, eventTime);
- From the fiber up to the fiber root, this lane is merged into the lanes along the path (see the sketch below).
- The updateQueue links the new update at its tail.
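A minimal sketch of that walk up the tree, modeled on markUpdateLaneFromFiberToRoot (simplified: the real function also updates the alternate fibers):

// Sketch: merge the lane from the updated fiber up to the root
function markUpdateLaneFromFiberToRoot(sourceFiber, lane) {
  // The fiber that triggered the update records the lane in its own lanes
  sourceFiber.lanes = mergeLanes(sourceFiber.lanes, lane); // mergeLanes(a, b) is a | b
  let node = sourceFiber;
  let parent = sourceFiber.return;
  while (parent !== null) {
    // Ancestors record it in childLanes, so the render knows a descendant has work
    parent.childLanes = mergeLanes(parent.childLanes, lane);
    node = parent;
    parent = parent.return;
  }
  // node is now the HostRoot fiber; its stateNode is the FiberRoot
  return node.tag === HostRoot ? node.stateNode : null;
}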
requestUpdateLane hands out lanes within a fixed priority range, starting from the higher-priority (rightmost) side. Once every lane in the range has been claimed, it cycles back to the first lane of the range, as the transition case below shows.
// For transitions, requestUpdateLane calls this function
export function claimNextTransitionLane(): Lane {
  const lane = nextTransitionLane;
  nextTransitionLane <<= 1;
  if ((nextTransitionLane & TransitionLanes) === 0) {
    nextTransitionLane = TransitionLane1;
  }
  return lane;
}
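For context, a small usage sketch (component and state names invented for illustration): updates wrapped in startTransition request a transition lane through this path, while urgent updates outside it get a higher-priority event lane.

import { useState, startTransition } from 'react';

function Search() {
  const [text, setText] = useState('');
  const [query, setQuery] = useState('');
  const handleChange = (e) => {
    setText(e.target.value);        // urgent update: gets a high-priority event lane
    startTransition(() => {
      setQuery(e.target.value);     // non-urgent: requestUpdateLane returns a transition lane here
    });
  };
  return <input value={text} onChange={handleChange} />;
}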
Schedule updates by priority
- The update executes scheduleUpdateOnFiber, which merges this update's lane into root.pendingLanes.
- While executing ensureRootIsScheduled, the highest-priority lane is picked from root.pendingLanes; this becomes the new callback priority.
- **Note: batching happens here. Read the rest of this list first, then come back to this point.**
  - If the root's current task priority equals the new one (root.callbackPriority === newCallbackPriority), the function returns early, so the pending updates share the same work function.
  - In other words, multiple updates share one performConcurrentWorkOnRoot (see the component sketch after this list).
  - That is the batching: in ConcurrentMode, batching is achieved through priorities. Legacy synchronous mode does not go through this logic; it batches through batchedUpdates (which defers the flush).
- The lane priority is converted to a Scheduler priority, and the Scheduler schedules performConcurrentWorkOnRoot.
- The new priority is saved to root.callbackPriority.
- The task node returned by the Scheduler (taskNode) is saved to root.callbackNode.
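As a concrete illustration of the batching point above, a small sketch (assuming a React 18 root created with createRoot, i.e. ConcurrentMode; the component and state names are made up): both setters in the click handler request the same lane, so the second pass through ensureRootIsScheduled sees the same callback priority and returns early, and both updates are rendered by one performConcurrentWorkOnRoot.

import { useState } from 'react';

function Counter() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);
  const onClick = () => {
    setCount(c => c + 1); // schedules a render callback for this lane
    setFlag(f => !f);     // same lane and priority: ensureRootIsScheduled bails out early
  };
  // Both updates are committed in a single render pass
  return <button onClick={onClick}>{count} {String(flag)}</button>;
}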
// Simplified code
function scheduleUpdateOnFiber(fiber, lane, eventTime) {
// From fiber up to root, merge lane to fiber.lanes along the path. So did childLanes
const root = markUpdateLaneFromFiberToRoot(fiber, lane);
// Merge this lane to root.pendingLanes
markRootUpdated(root, lane, eventTime);
// Schedule this update
ensureRootIsScheduled(root, eventTime);
// Handle the case that must update immediately in legacy synchronous mode
if (
// Sync priority, no surrounding execution context, legacy (non-concurrent) mode
lane === SyncLane && executionContext === NoContext && (fiber.mode & ConcurrentMode) === NoMode
) {
// Execute performSyncWorkOnRoot immediately to render
flushSyncCallbacksOnlyInLegacyMode();
}
}

function ensureRootIsScheduled(root, currentTime) {
const existingCallbackNode = root.callbackNode;
// Record an expiration time for each newly scheduled lane in root.expirationTimes;
// if a lane has already expired, merge it into root.expiredLanes
markStarvedLanesAsExpired(root, currentTime);
// Get lanes, which may merge multiple lanes
const nextLanes = getNextLanes(
root,
root === workInProgressRoot ? workInProgressRootRenderLanes : NoLanes,
);
// Get the highest priority
const newCallbackPriority = getHighestPriorityLane(nextLanes);
const existingCallbackPriority = root.callbackPriority;
// The priority is unchanged: the pending updates share the same work function
// (performSyncWorkOnRoot or performConcurrentWorkOnRoot).
// This is the batching
if (existingCallbackPriority === newCallbackPriority) {
return;
}
if (existingCallbackNode != null) {
// The priority changed: cancel the existing task
cancelCallback(existingCallbackNode);
}
let newCallbackNode;
// Synchronization priority (highest priority)
if (newCallbackPriority === SyncLane) {
if (root.tag === LegacyRoot) {
// Synchronous mode
scheduleLegacySyncCallback(performSyncWorkOnRoot.bind(null, root));
} else {
// ConcurrentMode
scheduleSyncCallback(performSyncWorkOnRoot.bind(null, root));
}
// Then flush all sync-priority callbacks
if (supportsMicrotasks) {
// If the microtask API is supported, scheduleMicrotask is called
scheduleMicrotask(flushSyncCallbacks);
} else {
// Microtasks are not supported: fall back to a Scheduler task with immediate priority
scheduleCallback(ImmediateSchedulerPriority, flushSyncCallbacks);
}
newCallbackNode = null;
} else {
// Convert the lane priority to a Scheduler priority
let schedulerPriorityLevel;
switch (lanesToEventPriority(nextLanes)) {
case DiscreteEventPriority:
schedulerPriorityLevel = ImmediateSchedulerPriority;
break;
case ContinuousEventPriority:
schedulerPriorityLevel = UserBlockingSchedulerPriority;
break;
case DefaultEventPriority:
schedulerPriorityLevel = NormalSchedulerPriority;
break;
case IdleEventPriority:
schedulerPriorityLevel = IdleSchedulerPriority;
break;
default:
schedulerPriorityLevel = NormalSchedulerPriority;
break;
}
// Schedule performConcurrentWorkOnRoot with the converted priority
// and save the returned taskNode
newCallbackNode = scheduleCallback(
schedulerPriorityLevel,
performConcurrentWorkOnRoot.bind(null, root),
);
}
// Save priority, taskNode
root.callbackPriority = newCallbackPriority;
root.callbackNode = newCallbackNode;
}
Time slicing, high-priority preemption, and starvation
Time slicing: each macrotask executes a small slice of the fiber tree and yields when its time slice runs out; the next macrotask is then scheduled and the process repeats. High-priority preemption ("queue jumping"): while each task runs, React checks whether a higher-priority task has arrived; if so, the current task is cancelled and the higher-priority one runs first. Simplified code:
function performConcurrentWorkOnRoot(root, currentTime) {
const originalCallbackNode = root.callbackNode;
let lanes = getNextLanes(
root,
root === workInProgressRoot ? workInProgressRootRenderLanes : NoLanes,
);
// Execution may be interrupted by the time slice or preempted by a higher-priority task
// Get the status of the task
let exitStatus =
// shouldTimeSlice checks for expired lanes; if a lane has expired, time slicing is skipped and the render runs synchronously
shouldTimeSlice(root, lanes)
? renderRootConcurrent(root, lanes)
: renderRootSync(root, lanes);
// Check whether the whole fiber tree finished, or the render was interrupted midway
if (exitStatus !== RootIncomplete) {
// The finished work-in-progress fiber tree
const finishedWork = root.current.alternate;
root.finishedWork = finishedWork;
root.finishedLanes = lanes;
// finishConcurrentRender eventually calls commitRoot
finishConcurrentRender(root, exitStatus, lanes);
}
// Schedule the root again; a new high-priority task may be waiting
ensureRootIsScheduled(root, now());
if (root.callbackNode === originalCallbackNode) {
// The task is not finished yet: return a continuation.
// When a Scheduler callback returns a function, the Scheduler keeps scheduling that function as the same task
return performConcurrentWorkOnRoot.bind(null, root);
}
// Otherwise the work finished, or a higher-priority task preempted it and a new callback was scheduled.
// Either way this task is done
return null;
}
function renderRootConcurrent(root: FiberRoot, lanes: Lanes) {
// The root or lanes changed, e.g. a higher-priority task preempted the in-progress render
if (workInProgressRoot !== root || workInProgressRootRenderLanes !== lanes) {
// Discard the old work-in-progress stack and build a fresh one
// (reset the related module variables and root fields)
prepareFreshStack(root, lanes);
}
workLoopConcurrent();
}
function workLoopConcurrent() {
// On every iteration, !shouldYield() checks whether there is still time left in the slice
while (workInProgress !== null && !shouldYield()) {
// DFS over the fiber tree; workInProgress is the next fiber node.
// performUnitOfWork renders the component and creates its DOM (without inserting it)
performUnitOfWork(workInProgress);
}
}
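For reference, a rough sketch of what shouldYield amounts to in the Scheduler (simplified; the real shouldYieldToHost uses a roughly 5 ms frame interval and has additional refinements):

// Simplified sketch of the Scheduler's yield check
const frameInterval = 5; // ms; the default length of one time slice
let startTime = -1;      // set by the Scheduler when it starts flushing work

function shouldYield() {
  // Yield control back to the host once the current time slice is used up
  return performance.now() - startTime >= frameInterval;
}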
Starvation
To prevent low-priority tasks from never running when high-priority tasks keep arriving, React gives each pending lane an expiration time. Once a task has expired, it is executed immediately (synchronously).
- Recording expiration times
  - Functions: ensureRootIsScheduled -> markStarvedLanesAsExpired (sketched after this list)
  - An expiration time is computed for each lane being scheduled and stored in the root.expirationTimes array.
  - root.expirationTimes is then checked for lanes that have already expired, and those lanes are merged into root.expiredLanes.
- Executing expired tasks
  - Functions: performConcurrentWorkOnRoot -> shouldTimeSlice
  - shouldTimeSlice checks root.expiredLanes; if any lane has expired, time slicing is skipped and the work is rendered synchronously with renderRootSync.
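A simplified sketch of the expiration bookkeeping, modeled on markStarvedLanesAsExpired (the real function also handles suspended and pinged lanes; NoTimestamp and computeExpirationTime are React internals):

function markStarvedLanesAsExpired(root, currentTime) {
  const expirationTimes = root.expirationTimes;
  let lanes = root.pendingLanes;
  while (lanes > 0) {
    // Index of the highest set bit, i.e. the lane currently being examined
    const index = 31 - Math.clz32(lanes);
    const lane = 1 << index;
    if (expirationTimes[index] === NoTimestamp) {
      // First time this lane is seen: record when it should expire
      expirationTimes[index] = computeExpirationTime(lane, currentTime);
    } else if (expirationTimes[index] <= currentTime) {
      // The lane has waited past its deadline: mark it as expired
      root.expiredLanes |= lane;
    }
    lanes &= ~lane;
  }
}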
How is state updated by priority?
Each update in the updateQueue carries the lane assigned when it was created. When state is computed, only the updates whose lane is included in the current renderLanes are processed.
This raises a new problem: how are correct results guaranteed when updates with different priorities depend on each other?
Example updateQueue: A1 -> B2 -> C1 -> D2 (the number is the priority; 1 is higher than 2).
Normally, each update is removed from the queue after it is processed, and the state advances once per update.
After prioritization, however, there is a problem.
Suppose the current render only processes priority 1 (A1 and C1), and B2 = (prevState) => prevState + 1 depends on the result of A1. If B2 is simply skipped and applied later on top of whatever state exists at that time, the result can be wrong.
So, to keep the state correct, React stores the first skipped (insufficient-priority) update and every update after it in the baseQueue (baseUpdate), and stores the state computed up to that point in baseState.
Update formula: baseState (the state before the first skipped update) + baseQueue + pendingQueue = newState.
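A worked walk-through of the A1 -> B2 -> C1 -> D2 example (hypothetical updates where each letter appends to the state), assuming the first render only includes priority 1 in renderLanes:

// Render 1, renderLanes = priority 1:
//   A1: sufficient priority -> newState = state + A
//   B2: skipped             -> baseState = state + A, baseQueue = [B2]
//   C1: sufficient priority, but something was already skipped
//       -> applied to newState (state + A + C, what the UI shows)
//       -> also cloned into baseQueue = [B2, C1] so it can be replayed in order
//   D2: skipped             -> baseQueue = [B2, C1, D2]
// Render 2, renderLanes now includes priority 2:
//   start from baseState (state + A) and replay baseQueue B2, C1, D2 in order
//   -> final state = state + A + B + C + D, the same as processing in the original order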
Simplified hooks state-update code follows; the class-component update flow is similar.
// Simplified hooks state update (from updateReducer)
// useState calls useReducer under the hood, so this is also the useState update path
const queue = hook.queue;
const current = currentHook;
// Updates skipped in an earlier render because their priority was insufficient
let baseQueue = current.baseQueue;
// Updates that arrived since the last render
const pendingQueue = queue.pending;
// Merge the two circular lists: append pendingQueue after baseQueue
if (pendingQueue !== null) {
if (baseQueue !== null) {
// Each queue pointer references the last update; its next is the first
const baseFirst = baseQueue.next;
const pendingFirst = pendingQueue.next;
baseQueue.next = pendingFirst;
pendingQueue.next = baseFirst;
}
// Store the merged list as the new baseQueue
current.baseQueue = baseQueue = pendingQueue;
queue.pending = null;
}
if (baseQueue !== null) {
const first = baseQueue.next;
let newState = current.baseState; // the base state to start from
let newBaseState = null;
let newBaseQueueFirst = null;
let newBaseQueueLast = null;
let update = first;
do {
const updateLane = update.lane;
// Compare renderLanes with the update's lane to determine whether the priority is sufficient
if (!isSubsetOfLanes(renderLanes, updateLane)) {
// The priority is insufficient: clone the update and skip it
const clone = {
lane: updateLane,
action: update.action,
eagerReducer: update.eagerReducer,
eagerState: update.eagerState,
next: null,
};
// Start building the new baseQueue from the first update whose priority was insufficient
if (newBaseQueueLast === null) {
newBaseQueueFirst = newBaseQueueLast = clone;
newBaseState = newState;
} else {
newBaseQueueLast = newBaseQueueLast.next = clone;
}
// Merge updateLane back into the fiber's lanes
// (later merged up into root.pendingLanes),
// so this low-priority update will be scheduled and processed in a later render.
// Without merging the lane, the skipped update could remain unprocessed
currentlyRenderingFiber.lanes = mergeLanes(
currentlyRenderingFiber.lanes,
updateLane,
);
} else {
// The priority is sufficient
if (newBaseQueueLast !== null) {
// An earlier update was skipped, so this update is also cloned
// onto the end of the baseQueue to preserve the processing order
const clone = {
lane: NoLane,
action: update.action,
eagerReducer: update.eagerReducer,
eagerState: update.eagerState,
next: (null: any),
};
newBaseQueueLast = newBaseQueueLast.next = clone;
}
// Process the update
// eagerReducer/eagerState are a precomputed optimization; see the hooks dispatch code for details
if (update.eagerReducer === reducer) {
newState = update.eagerState;
} else {
// action is the argument passed to setState(action) / dispatch(action)
const action = update.action;
// Compute the new state
newState = reducer(newState, action);
}
}
update = update.next;
} while (update !== null && update !== first);
if (newBaseQueueLast === null) {
// Indicates that all updates have sufficient priority
// baseState and state are the same
newBaseState = newState;
} else {
// Close the circular list: the last node points back to the first
newBaseQueueLast.next = newBaseQueueFirst;
}
// If the new state differs from the old one, set the module-level flag
// so later bailout checks know this fiber received an update
if (!is(newState, hook.memoizedState)) {
markWorkInProgressReceivedUpdate();
}
// The state returned by useState
hook.memoizedState = newState;
// The base state: the state computed up to the first skipped update
hook.baseState = newBaseState;
// The skipped updates (plus every update after the first skipped one)
hook.baseQueue = newBaseQueueLast;
}
Conclusion
- React's work (traversing the fiber tree, rendering the DOM) and state updates are all driven by priorities.
- Priorities are implemented with lanes; the scheduling of the work is done by the Scheduler, and the two kinds of priority are converted into each other.
- Prioritization brings its own problems: task starvation (solved by checking whether a task has expired) and state consistency (solved by saving the skipped update linked list and replaying it).