We often say that the JS event loop has a micro queue and a macro queue: asynchronous events are placed in these two queues to wait for execution, and micro tasks run before macro tasks. An event loop is really a way of organizing work across multiple threads. Usually, to improve throughput, a program spawns one or more threads to compute in parallel, report their results, and exit. But sometimes you don't want to create a new thread every time; instead you keep a set of resident threads that work while there are tasks and sleep when there are none, avoiding the cost of constantly creating and destroying threads. An event loop is what keeps these resident threads running.
1. General JS event loop
We know JS is single-threaded: while a long-running piece of JS executes, the page freezes and cannot respond. But your interactions are still recorded by another thread. For example, if you click a button while the page is stuck, the click does not trigger its callback immediately; it fires only once the current JS has finished. So there is a queue recording all operations waiting to be performed. This queue is divided into macro and micro tasks: setTimeout, Ajax, and user events are macro tasks, while Promise and MutationObserver callbacks are micro tasks. Micro tasks run before macro tasks, as the following code shows:
setTimeout(() => console.log(0), 0);
new Promise(resolve => {
    resolve();
    console.log(1);
}).then(res => {
    console.log(2);
});
console.log(3);
The output order is 1, 3, 2, 0: setTimeout is a macro task, so it runs later than the Promise micro task.
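To make the ordering rule concrete, here is a small script (run under Node; the pushed labels are invented for illustration) showing that the whole micro queue is drained before each macro task, including micro tasks queued from within a macro task:

```javascript
// Each macro task (timer callback) drains the entire micro queue
// before the next macro task is allowed to run.
const order = [];

setTimeout(() => {
  order.push('macro-1');
  // This micro task is queued inside macro-1 and still runs
  // before macro-2.
  Promise.resolve().then(() => order.push('micro-after-macro-1'));
}, 0);

setTimeout(() => order.push('macro-2'), 0);

Promise.resolve().then(() => order.push('micro-1'));
order.push('sync');

// Final order:
// sync, micro-1, macro-1, micro-after-macro-1, macro-2
```

Note that 'micro-after-macro-1' beats 'macro-2' even though the second timer was scheduled first: the micro queue always empties between macro tasks.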
2. The nature of macro tasks
In fact, the term "macrotask" never appears in the Chrome source code. The so-called macrotask queue is just the multithreaded event loop (or message loop) in the ordinary sense; rather than a macro queue, it would be more accurate to call it the message loop's task queue.
All of Chrome's resident threads, including the browser process's threads and the page renderers' threads, run inside an event loop. We know Chrome has a multi-process architecture: the browser process's main thread and IO thread handle browser-level work such as responding to the address bar and loading resources over the network, while each page gets its own process. In a page process, the main thread is the render thread, which builds the DOM, paints, and executes JS, assisted by a child IO thread.
These are resident threads: each runs in a for loop and owns several task queues. A thread keeps executing tasks posted via PostTask, whether by itself or by other threads, or it sleeps until a deadline arrives or someone wakes it up by posting a task.
The Run function of message_pump_default.cc shows how the event loop works:
void MessagePumpDefault::Run(Delegate* delegate) {
  // Run in an endless loop
  for (;;) {
    // DoWork executes all currently queued pending_tasks
    bool did_work = delegate->DoWork();
    if (!keep_running_)
      break;
    // The pending_tasks above may have created delayed tasks (e.g. timers);
    // run the ones that are due and record the next deadline
    did_work |= delegate->DoDelayedWork(&delayed_work_time_);
    if (!keep_running_)
      break;
    if (did_work)
      continue;
    // Idle work: the tasks that were deferred in the first step
    did_work = delegate->DoIdleWork();
    if (!keep_running_)
      break;
    if (did_work)
      continue;
    ThreadRestrictions::ScopedAllowWait allow_wait;
    if (delayed_work_time_.is_null()) {
      // No deadline: sleep until someone posts a task
      event_.Wait();
    } else {
      // There is a deadline: sleep until it arrives
      event_.TimedWaitUntil(delayed_work_time_);
    }
  }
}
The first step calls DoWork to traverse the task queue and execute every pending_task that is not delayed (some tasks may be deferred to the third step, DoIdleWork). The second step executes the delayed tasks that are already due. If a task cannot run yet, its deadline is recorded in delayed_work_time_, did_work stays false, and the loop goes to sleep in TimedWaitUntil on the last lines of code until that deadline arrives.
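The two work phases described above can be sketched as a toy JavaScript model of one pump iteration (purely illustrative; runPumpOnce and its parameter shapes are invented, not Chromium code):

```javascript
// Toy model of one iteration of MessagePumpDefault::Run.
function runPumpOnce(pendingTasks, delayedTasks, now) {
  let didWork = false;

  // "DoWork": drain every task currently in the queue.
  while (pendingTasks.length > 0) {
    pendingTasks.shift()();
    didWork = true;
  }

  // "DoDelayedWork": run delayed tasks whose deadline has passed,
  // and remember the earliest remaining deadline for the sleep step.
  let nextWakeUp = null;
  for (const task of delayedTasks.splice(0)) {
    if (task.runAt <= now) {
      task.callback();
      didWork = true;
    } else {
      delayedTasks.push(task);
      if (nextWakeUp === null || task.runAt < nextWakeUp) {
        nextWakeUp = task.runAt;
      }
    }
  }

  // If nothing ran, the real loop would now sleep: Wait() forever if
  // nextWakeUp is null, otherwise TimedWaitUntil(nextWakeUp).
  return { didWork, nextWakeUp };
}
```

Calling runPumpOnce with one immediate and one delayed task shows the two-phase behavior: the immediate task runs right away, while the delayed one only runs once `now` reaches its deadline, and in the meantime its deadline is reported as the wake-up time.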
This is the basic model of a multithreaded event loop. So where do the tasks that these threads execute come from?
Each thread has one or more types of task_runner objects, and each task_runner has its own task queue.
kDOMManipulation = 1,
kUserInteraction = 2,
kNetworking = 3,
kMicrotask = 9,
kJavascriptTimer = 10,
kWebSocket = 12,
kPostedMessage = 13,
The message loop also has its own message_loop_task_runner. These task_runner objects are shared: other threads can call a task_runner's PostTask function to send it tasks. In the for loop above, pending tasks are retrieved through the task_runner's TakeTask function.
PostTask wakes up the target thread once the task is queued:
// A lock is required to prevent multiple threads from executing simultaneously
AutoLock auto_lock(incoming_queue_lock_);
incoming_queue_.push(std::move(pending_task));
task_source_observer_->DidQueueTask(was_empty);
Since the task_runner object is shared by several threads, posting a task to it requires taking a lock. The last line calls DidQueueTask to notify and wake up the target thread:
// First this is called
message_loop_->ScheduleWork();
// which in turn calls
pump_->ScheduleWork();
// and finally the message pump wakes the thread
void MessagePumpDefault::ScheduleWork() {
  // Since this can be called on any thread, we need to ensure that our Run
  // loop wakes up.
  event_.Signal();
}
So what is a task? A task is simply a callback, passed here as the second argument to PostDelayedTask:
GetTaskRunner()->PostDelayedTask(
posted_from_,
    BindOnce(&BaseTimerTaskInternal::Run, Owned(scheduled_task_)), delay);
Wait, doesn't all of this have nothing to do with JS? For now it really doesn't, because all of this happens before any JS executes. Don't worry, we'll get there.
This is the default event loop, but the render thread of Chrome on the Mac does not use it. Its event loop is built on NSRunLoop from the Mac Cocoa SDK: according to the source, the page's scrollbars and select dropdowns are drawn with Cocoa, so the render thread has to plug into Cocoa's event loop mechanism, as the following code shows:
#if defined(OS_MACOSX)
  // As long as scrollbars on Mac are painted with Cocoa, the message pump
  // needs to be backed by a Foundation-level loop to process NSTimers. See
  // http://crbug.com/306348#c24 for details.
  std::unique_ptr<base::MessagePump> pump(new base::MessagePumpNSRunLoop());
  std::unique_ptr<base::MessageLoop> main_message_loop(
      new base::MessageLoop(std::move(pump)));
#else
  // The main message loop of the renderer services doesn't have IO or UI tasks.
  std::unique_ptr<base::MessageLoop> main_message_loop(new base::MessageLoop());
#endif
On OS_MACOSX, an NSRunLoop-backed pump is used; otherwise the default pump is used. The word "pump" here means the source of messages. In the crbug discussion, the Chromium committers would actually like to remove Cocoa from the render thread and draw the scrollbars with Chrome's own Skia graphics library, so that the render thread would not respond to UI/IO events directly, but nobody has gotten around to it. From the earlier discussion we can see that someone tried, ran into bugs, and the change was eventually reverted.
The Cocoa pump and the default pump expose the same external interface; for example, both use ScheduleWork to wake up the thread.
Chrome's IO threads (including a page process's child IO thread) add a message loop provided by the libevent library on top of the default pump. Libevent is a cross-platform event-driven network library, mainly used for socket programming. The pump file that connects to libevent is message_pump_libevent.cc, which adds one line to the default pump code:
bool did_work = delegate->DoWork();
if (!keep_running_)
  break;
event_base_loop(event_base_, EVLOOP_NONBLOCK);
Right after DoWork, it checks whether libevent has anything to do. So this is a libevent event loop nested inside Chrome's own event loop; but since it is invoked with EVLOOP_NONBLOCK, it runs only once per iteration without blocking, and libevent activity can also wake the thread.
Now let's move on to the parts that relate to JS.
(1) User events
When a mouse event is triggered on a page, the browser process receives it first and sends it to the page process through Chrome's Mojo IPC library, as shown in the figure below. Mojo forwards the message to the other process:
As you can see, Mojo uses local sockets for inter-process communication, so the message ultimately goes out through a socket write. Sockets are a common means of inter-process communication.
Libevent then picks the message up on the receiving side and calls PostTask to put it on the message loop's task_runner:
This path has not been verified end to end here, because it is not easy to test directly. But given these libraries and what breakpoints show, this flow is reasonable and plausible, and introducing libevent makes it easy to implement.
So mouse click messaging looks like this:
The Chromium documentation also describes this process, but it’s a bit old.
Another common asynchronous operation is setTimeout.
(2) setTimeout
To investigate the behavior of setTimeout, we run the following JS code:
console.log(Object.keys({a: 1}));
setTimeout(() => {
    console.log(Object.keys({b: 2}));
}, 2000);
To observe the execution of setTimeout, set a breakpoint on the Runtime_ObjectKeys function in v8/src/runtime/runtime-object.cc.
We find that the first breakpoint, hit where Object.keys executes, is triggered by the HTML parser's script runner after DoWork, while the second one, inside the setTimeout callback, executes inside DoDelayedWork (per the event loop model described above).
Specifically, after Object.keys runs the first time, a DOMTimer is registered, and the DOMTimer posts a delayed task with the specified delay to the main thread (since the code is currently running on the main thread). Inside the event loop, that delay becomes the sleep time for TimedWaitUntil (the render thread, which uses Cocoa, calls CFRunLoopTimerSetNextFireDate instead). The relevant code looks like this:
TimeDelta interval_milliseconds =
    std::max(TimeDelta::FromMilliseconds(1), interval);
// kMinimumInterval = 4ms, kMaxTimerNestingLevel = 5
// If setTimeout is nested 5 levels deep and the interval is
// less than 4ms, clamp it to the 4ms minimum
if (interval_milliseconds < kMinimumInterval &&
    nesting_level_ >= kMaxTimerNestingLevel)
  interval_milliseconds = kMinimumInterval;
if (single_shot)
  StartOneShot(interval_milliseconds, FROM_HERE);
else
  StartRepeating(interval_milliseconds, FROM_HERE);
Since this is a setTimeout (single shot), it calls StartOneShot on the third-to-last line, which ultimately calls PostTask on the timer_task_runner:
You can see that the delay time is 2000ms, converted here into nanoseconds. The message_loop_task_runner runs on the render thread; the timer_task_runner ultimately uses this delay to post a delayed task to the message loop's task runner.
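The interval-clamping logic shown earlier in DOMTimer can be restated as a small pure function. This is a sketch: the constants mirror kMinimumInterval and kMaxTimerNestingLevel, and clampTimerInterval is an invented name, not Blink code:

```javascript
// Sketch of Blink's DOMTimer interval clamping, assuming
// kMinimumInterval = 4ms and kMaxTimerNestingLevel = 5.
const kMinimumIntervalMs = 4;
const kMaxTimerNestingLevel = 5;

function clampTimerInterval(requestedMs, nestingLevel) {
  // Intervals are never allowed below 1ms.
  let interval = Math.max(1, requestedMs);
  // Deeply nested fast timers are clamped to 4ms, to stop
  // CPU-spinning setTimeout(..., 0) loops.
  if (interval < kMinimumIntervalMs && nestingLevel >= kMaxTimerNestingLevel) {
    interval = kMinimumIntervalMs;
  }
  return interval;
}

// clampTimerInterval(0, 6)  -> 4   (deeply nested, clamped)
// clampTimerInterval(0, 1)  -> 1   (shallow nesting, only the 1ms floor)
// clampTimerInterval(10, 6) -> 10  (already above the minimum)
```

So setTimeout(fn, 0) really means "at least 1ms, and at least 4ms once the timer chain is five levels deep".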
As the comment explains, the minimum timer interval is 4ms:
// Chromium uses a minimum timer interval of 4ms. We'd like to go
// lower; however, there are poorly coded websites out there which do
// create CPU-spinning loops. Using 4ms prevents the CPU from
// spinning too busily and provides a balance between CPU spinning and
// the smallest possible interval timer.
static constexpr TimeDelta kMinimumInterval = TimeDelta::FromMilliseconds(4);
The goal is to keep the CPU from spinning too busily. In practice, timer accuracy also depends on the precision the operating system provides, especially on Windows: according to time_win.cc, the default timer resolution Windows provides is 10~15ms. That means a setTimeout of 10ms may actually fire after just a few milliseconds, or after more than 20 milliseconds. So Chrome makes a judgment about the delay time:
#if defined(OS_WIN)
  // We consider the task needs a high resolution timer if the delay is
  // more than 0 and less than 32ms. This caps the relative error to
  // less than 50% : a 33ms wait can wake at 48ms since the default
  // resolution on Windows is between 10 and 15ms.
  if (delay > TimeDelta() &&
      delay.InMilliseconds() < (2 * Time::kMinLowResolutionThresholdMs)) {
    pending_task.is_high_res = true;
  }
#endif
In other words, if a small delay is set, Chrome tries to use a high-precision timer. However, because the high-precision timer API (QPC) requires operating system support and is costly in time and power, it is not enabled when a laptop is running on battery. In general, though, we can assume JS setTimeout is accurate to about 10ms.
Another question: what happens when the setTimeout delay is 0? It still posts the task the same way, except that the task's delay is 0, so it is executed in the message loop's DoWork step rather than in DoDelayedWork.
Note that setTimeout tasks are stored in a sequenced queue, which strictly guarantees execution order (whereas the message loop's own queue above does not). The RunTask function associated with this sequence is posted as a task callback to the event loop's task runner, and it then executes the tasks in its queue.
So when we execute setTimeout(0), a task is posted to the message loop queue, and then the current task continues running, such as the code after the setTimeout(0) call that has not yet executed.
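A tiny script makes this visible (run order, not timing, is what matters):

```javascript
// setTimeout(fn, 0) only queues a task; the rest of the current
// task keeps running before the message loop dequeues it.
const steps = [];

setTimeout(() => steps.push('queued task'), 0);
steps.push('rest of current task');

// Once the current task finishes, the loop picks up the timer task:
// steps becomes ['rest of current task', 'queued task']
```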
That covers the event loop; next, let's talk about micro tasks and the micro queue.
3. Micro tasks and micro queues
The micro queue is a real queue, implemented inside V8. Microtasks in V8 fall into the following four categories (see microtask.h):
- callback
- callable
- promiseFulfill
- promiseReject
The first, callback, is an ordinary callback, including task callbacks from Blink such as MutationObserver. The second, callable, is a kind of task used for internal debugging. The other two are promise fulfillment and rejection. Promise's finally is implemented with internal then_finally and catch_finally callbacks, which are passed as arguments to then/catch.
When are micro tasks performed? Debug with the following JS:
console.log(Object.keys({a: 1}));
setTimeout(() => {
    console.log(Object.keys({b: 2}));
    var promise = new Promise((resolve, reject) => {
        resolve(1);
    });
    promise.then(res => {
        console.log(Object.keys({c: 1}));
    });
}, 2000);
Here we focus on when promise.then executes. The interesting thing about the call stack at the breakpoint is that it is running inside a destructor:
Pulling out the main code looks like this:
{
    v8::MicrotasksScope microtasks_scope(isolate, v8::MicrotasksScope::kRunMicrotasks);
    v8::MaybeLocal<v8::Value> result = function->Call(receiver, argc, args);
}
This code creates a scope object on the stack and then calls function->Call, which runs the JS code currently executing. When the JS finishes and execution leaves the scope, the stack object is destructed, and the microtasks are executed inside that destructor. (Note that C++ has destructors as well as constructors; a destructor runs when an object is destroyed, and since C++ has no automatic garbage collection, destructors are where you release memory you allocated yourself.)
That is, microtasks are executed synchronously, immediately after the current JS call stack finishes; there is no multi-threaded asynchrony within the same call stack. Put simply, then callbacks run at the tail end of whatever asynchronous callback is currently executing.
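JS has no destructors, but the same shape — run the queued microtasks when the scope unwinds — can be sketched with try/finally over a hand-rolled queue (a toy model; enqueueMicrotask and callWithMicrotasksScope are invented names, not V8 API):

```javascript
// Toy model of v8::MicrotasksScope: callbacks enqueued while a
// script runs are flushed when the scope unwinds.
const microQueue = [];
const enqueueMicrotask = (cb) => microQueue.push(cb);

function callWithMicrotasksScope(fn) {
  try {
    return fn(); // the "function->Call(...)" step
  } finally {
    // The "destructor": drain the micro queue, including
    // microtasks that other microtasks enqueue along the way.
    while (microQueue.length > 0) {
      microQueue.shift()();
    }
  }
}
```

Running a script through this wrapper shows the key property: the microtask fires synchronously, right after the script body, in the same stack unwind.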
So setTimeout(0) adds a new task (callback) to the main thread's message loop task queue, while Promise.then inserts a microtask into V8's micro queue for the current task; the next task can only run after the current task, including its microtasks, completes.
Besides Promise, other common sources of microtasks are MutationObserver, Vue's $nextTick, and Promise polyfills; the idea is to place the callback at the end of the currently executing synchronous JS as a microtask. When we modify a Vue data property to trigger a DOM update, Vue has actually overridden the property's setter, so the assignment fires the setter, Vue learns about the change, and updates the DOM accordingly. The call stack may be deep, but when it completes, the DOM changes are done; the inserted microtasks then run synchronously, which is why a nextTick callback executes after the DOM changes have taken effect.
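A minimal nextTick-style helper can be built on Promise.then, in the spirit of (though much simpler than) Vue's implementation:

```javascript
// Minimal nextTick sketch: batch callbacks and flush them in one
// microtask, after the current synchronous code (and any updates
// it triggered) has finished.
const pendingCallbacks = [];
let flushScheduled = false;

function nextTick(cb) {
  pendingCallbacks.push(cb);
  if (!flushScheduled) {
    flushScheduled = true;
    Promise.resolve().then(() => {
      flushScheduled = false;
      // Copy first, so callbacks that call nextTick again land
      // in the next flush instead of the current one.
      const toRun = pendingCallbacks.splice(0);
      toRun.forEach((fn) => fn());
    });
  }
}
```

However many times nextTick is called in one synchronous run, only a single microtask is scheduled, and all callbacks run after the synchronous code completes.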
In addition, JS that triggers an image request also creates a microtask:
let img = new Image();
img.src = 'image01.png?_=' + Date.now();
img.onload = function () {
    console.log('img ready');
};
console.log(Object.keys({e: 1}));
People often wonder whether onload should be assigned before the src assignment, in case setting src triggers the request and it completes before the onload line has run. In fact there is no need to worry: after the src assignment executes, Blink creates a microtask and pushes it onto the micro queue, as the following code shows:
This is the enqueue operation done by ImageLoader. The last line, Object.keys, runs next, and then RunMicrotasks fetches the callback from the newly enqueued task.
The enqueue code above is Blink's; V8's own enqueue lives in builtins-internal-gen.cc. Builtins files like this are compiled directly from assembly generated at build time, so during debugging the "source" shows up as assembly, which makes it hard to step through. The purpose is presumably to generate assembly tailored to each platform, which speeds up execution.
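The onload race discussed above can be simulated without a browser. In this sketch, a fake Image dispatches its load event as a microtask, so a handler assigned after src is still registered in time (fakeImage is invented for illustration; real image loads go through Blink's ImageLoader):

```javascript
// Toy model of why "src first, onload second" is safe: the load
// callback is dispatched as a microtask, which cannot run until
// the current script finishes, so the handler assigned on the
// next line is always in place by then.
function fakeImage() {
  const img = { onload: null };
  Object.defineProperty(img, 'src', {
    set() {
      // The "request" completes asynchronously: dispatch the load
      // event as a microtask, mimicking the enqueue above.
      queueMicrotask(() => {
        if (img.onload) img.onload();
      });
    },
  });
  return img;
}

const log = [];
const img = fakeImage();
img.src = 'image01.png';                   // enqueue happens here...
img.onload = () => log.push('img ready');  // ...but this still wins the race
log.push('sync code after assignment');
```

Even though the load was "enqueued" before onload was assigned, the microtask only runs after the current script, so 'img ready' is logged last.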
To conclude: an event loop is a way of organizing work across multiple threads. Chrome uses shared task_runner objects, through which threads post tasks to themselves and to each other; each thread either retrieves tasks in an endless loop or goes to sleep waiting to be woken up. On the Mac, Chrome's render thread and browser thread also use NSRunLoop from the Mac SDK's Cocoa as a message source for UI events. Chrome's inter-process communication (local socket communication between the IO threads of different processes) uses the libevent event loop, spliced into the main message loop.
Microtasks are not part of the event loop; they are V8's implementation of Promise's then/reject and of other callbacks that need to run right after the current code, and they execute synchronously with the current V8 call stack, just at its end. Besides Promise and MutationObserver, requests made from JS (such as image loads) also create microtasks for deferred execution.