Before we talk about the Event Loop mechanism, let’s introduce the problem through Chrome’s rendering process. First, let’s take a look at Chrome’s multi-process architecture.

Why multiple processes are necessary: if the browser had only one process, rendering, event handling, and everything else would have to share it. A process can be split into multiple threads, but threads cannot be split any further. With multiple processes, each major responsibility can run in its own process, and each process can split its work across multiple threads.

Chrome’s multi-process architecture includes the following four processes:

  • Browser process (address bar, bookmarks bar, forward/back navigation, network requests, file access, etc.)
  • Renderer process (responsible for everything related to rendering the page inside a tab; the core process)
  • GPU process (responsible for GPU-related tasks)
  • Plugin process (responsible for tasks related to Chrome plug-ins)

The process closely related to the Event Loop is the Renderer process, which is our rendering process.

Let’s start with the renderer process. When you open a tab in the browser, the renderer process handles almost everything that happens inside that page. The work is carried out by the main thread, the compositor thread, raster threads, and worker threads. Let’s look at what the main thread does during rendering.

1. Parsing

The browser builds a DOM tree from the current HTML and loads the sub-resources referenced by the page, such as CSS stylesheets, images, and JS scripts.

During this process, if a JS script is encountered, the browser stops parsing the DOM tree and executes the script first, because the script may modify the structure of the DOM tree. Of course, if you know the script will not change the DOM tree, the browser offers non-blocking loading options such as the defer and async attributes, and resources can also be fetched ahead of time with preload:

```html
<link rel="preload">
```

2. Style calculation

After the DOM has been parsed, the main thread computes styles: it matches CSS selectors against the DOM and attaches the resulting styles to the specific DOM nodes so the HTML can be rendered as intended. Even without any CSS stylesheet, the browser applies a default (user-agent) style.
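
To see those default (user-agent) styles in action, the small console sketch below (the element and properties are just illustrative) shows that an element has computed style values even when the page ships no CSS at all:

```js
// Even with no author stylesheet, the browser applies user-agent defaults.
const heading = document.createElement('h1');
document.body.appendChild(heading);

const style = getComputedStyle(heading);
console.log(style.display);    // "block"  (default for <h1>)
console.log(style.fontWeight); // typically "700" in most browsers
```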

3. Layout

In this step, the main thread walks the styled DOM and computes the geometry of each element: its box size and its horizontal and vertical coordinates on the page, producing a layout tree. The layout tree can differ from the DOM tree, because it contains only what is actually laid out on the current page. For example, an element with display: none appears in the DOM tree but not in the final layout tree, while a pseudo-element such as p::before { content: "Hi!"; } appears in the final layout tree but not in the DOM tree.
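
A quick way to observe this difference (a console sketch; the element is made up for illustration): an element with display: none is still present in the DOM tree but generates no boxes in the layout tree:

```js
// Present in the DOM tree, absent from the layout tree.
const hidden = document.createElement('div');
hidden.style.display = 'none';
hidden.textContent = 'I exist in the DOM';
document.body.appendChild(hidden);

console.log(document.body.contains(hidden)); // true: it is in the DOM tree
console.log(hidden.getClientRects().length); // 0: no layout boxes were generated
```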

4. Painting

With the style, structure, and layout known, the page is almost ready to be drawn, but the browser still needs to know in what order to paint these elements. In the paint step, the main thread generates paint records from the layout tree, along the lines of: paint the background first, then the text, then the rest of the content. The z-index property matters a lot in this step.

Throughout this pipeline, a change in one step affects every step after it. For example, if the layout data changes, painting has to be redone as well, so every change has a cost.

All of the rendering steps above run on the main thread, and JS is a single-threaded language that runs there too. This means that while JS is executing on the main thread, the pipeline above is interrupted; if a large block of JS runs for a long time, the user experience becomes very poor.

This can be avoided by splitting JS execution into small chunks and scheduling them with requestAnimationFrame(), so that every frame still gets a chance to render, as in the sketch below.
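
A minimal sketch of that idea (the work list and chunk size here are invented for illustration): process a small slice of the job per frame and let requestAnimationFrame schedule the next slice, so the browser can render in between:

```js
// Hypothetical long job split into per-frame chunks.
const items = Array.from({ length: 100000 }, (_, i) => i);
const CHUNK_SIZE = 500; // tuned so one chunk stays well under a frame budget
let index = 0;

function processChunk() {
  const end = Math.min(index + CHUNK_SIZE, items.length);
  for (; index < end; index++) {
    // ... do some work on items[index] ...
  }
  if (index < items.length) {
    // Yield to the browser; continue on the next frame.
    requestAnimationFrame(processChunk);
  }
}

requestAnimationFrame(processChunk);
```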

requestAnimationFrame callbacks are asynchronous tasks scheduled and executed by the Event Loop, so let’s now look at how the Event Loop in JS actually works and the concepts around it.

First, let’s take a look at how the JS engine executes JS code.

During JS execution, the engine relies on the following structures:

  • Call stack: the execution environment for ordinary JS statements; each frame in the stack is a function call, and frames follow the last in, first out principle
  • Heap: memory allocated by the JS execution engine
  • Queue: the message queue into which all task messages are placed

How does an ordinary call stack work? Consider the following example:

```js
const c = () => { console.log(3); };
const b = () => { c(); console.log(2); };
const a = () => { b(); console.log(1); };
a(); // output: 3, 2, 1
```

In this process, a is called first and its frame is pushed onto the stack; when a calls b, b’s frame is placed on top of a’s, and c’s frame ends up on top of that. The console.log calls therefore complete in the order c, b, a, which is why the output is 3, 2, 1.

The above is the normal call stack: JS is executing synchronous tasks on a single thread. Single-threadedness is a legacy of JS’s history. When JS was invented, multi-core machines were rare and few people used the language, so its designer did not worry about this limitation. As JS came to be used in more and more applications, the single-threaded restriction had to be worked around. The main way of doing so is asynchronous tasks: work that does not occupy the main thread while it waits. The Event Loop is the mechanism that implements these asynchronous tasks.

The Event Loop is JavaScript’s concurrency model: it is responsible for executing code, collecting and processing events, and executing queued sub-tasks. This article explains how the Event Loop works, drawing on the browser knowledge above.

First, a brief introduction to the basic concepts of the Event Loop and how it operates.

In the JS engine, code is executed by agents. Each agent consists of a set of execution contexts, a call stack, a main thread, any additional threads created by Workers, a task queue, and a microtask queue, and each agent is driven by an event loop. The event loop collects event messages, adds them to the queue, and then executes the tasks in the task queue. All of a web application’s code runs on one thread, the main thread, and shares one event loop; besides running JS code, this thread also dispatches user events and performs page rendering.

There are three types of event loop:

  • Window event loop: the event loop of the global window scope, which drives the windows of the current origin

  • Worker event loop: the event loop running in a Worker thread, covering dedicated Workers, shared Workers, etc.

  • Worklet event loop: the event loop that runs inside a Worklet (such as a paint or audio worklet)

The job of the Event Loop is to add the various asynchronous “tasks” that arise while a page loads and is used, such as script execution and user interaction with the page, to a message queue. When no JavaScript is executing, the Event Loop waits in an endless loop, asking whether any task has entered the message queue. If one has, the task’s callback is pushed onto the call stack and the task is executed; the next task does not start until the current one completes. Once the queue has been drained, the event loop goes back to a waiting (sleep) state.

The overall process is as follows:

A task is queued -> the corresponding task is executed -> wait for the next task to be added

If a task arrives while the engine is still executing another, it is placed in the message queue, which is commonly called the macro task queue (although there is no such thing as a “macro task queue” in the source code; it is simply the task queue). This queue typically holds events such as script execution, user interface interactions, and setTimeout callbacks, and it is processed as a queue: first in, first out.

After one of the macro tasks in the queue has executed and the call stack is empty, all tasks in the microtask queue are checked and executed immediately.

So what is the microtask queue? It mostly holds asynchronous callbacks such as Promise reactions, and queueMicrotask() is a function that enqueues a microtask directly. Microtasks only run once the current call stack is empty, and the whole microtask queue is drained in one go. This guarantees that the execution environment does not change between two microtasks.
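
For example, queueMicrotask() puts a callback straight onto the microtask queue; the sketch below shows that it runs after the current synchronous code but before a setTimeout task:

```js
console.log('sync 1');

queueMicrotask(() => console.log('microtask'));
setTimeout(() => console.log('macro task'), 0);

console.log('sync 2');
// output: sync 1, sync 2, microtask, macro task
```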

In Chrome this process can be expressed as:

  • Dequeue and execute one task from the macro task queue
  • Execute all microtasks
  • Perform the render step

No UI rendering or network callback handling takes place while the microtasks are executing.

(This is limited to Chrome’s event loop)
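
As a toy simulation (not real browser internals; the queue names and the render line are purely illustrative), one turn of that loop could be sketched like this:

```js
// Toy model: one (macro) task, then ALL microtasks, then a render opportunity.
const macroTasks = [];
const microTasks = [];

function queueTask(fn) { macroTasks.push(fn); }
function queueMicro(fn) { microTasks.push(fn); }

function runOneTurn() {
  const task = macroTasks.shift();      // 1. take one macro task
  if (task) task();
  while (microTasks.length > 0) {       // 2. drain the whole microtask queue
    microTasks.shift()();
  }
  console.log('-- render opportunity --'); // 3. rendering may happen here
}

queueTask(() => {
  console.log('macro task A');
  queueMicro(() => console.log('microtask queued by A'));
});
queueTask(() => console.log('macro task B'));

runOneTurn(); // macro task A, microtask queued by A, -- render opportunity --
runOneTurn(); // macro task B, -- render opportunity --
```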

Throughout the Event Loop, the next task will not start before the current one finishes, and neither will rendering; the DOM is only updated on screen after the current task has completed. So if a JS task takes too long, other tasks never get a chance to run: the browser keeps grinding on the current task and eventually shows a dialog saying the page is not responding, offering to kill it by closing the page. This is typical of heavy computations and accidental infinite loops.

You also need to be careful when adding microtasks, because JS will not move on to the next task in the message queue until the current microtask queue is empty. If a microtask keeps queuing more microtasks recursively, they keep running until memory runs out, as the sketch below illustrates.
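
The microtask pitfall can be sketched like this, together with the safer task-based alternative (a counter cap is added so the example terminates instead of actually running forever):

```js
// Risky pattern: each microtask queues another microtask, so the microtask
// queue never empties and rendering and other tasks are starved.
let spins = 0;
function microtaskLoop() {
  if (++spins < 1000) queueMicrotask(microtaskLoop); // cap added for safety
}
queueMicrotask(microtaskLoop);

// Safer pattern: setTimeout re-queues a *macro* task, so the event loop can
// render and handle other tasks between iterations.
let ticks = 0;
function taskLoop() {
  if (++ticks < 1000) setTimeout(taskLoop, 0);
}
setTimeout(taskLoop, 0);
```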

It is therefore good practice to keep the processing time of a single message short and, where possible, to split one long message into several smaller ones, as we did above when using requestAnimationFrame to split a large rendering job. Splitting the work this way does not make the queue as a whole noticeably slower.

So all executed work falls into three groups: the normal call stack, microtasks, and macro tasks; and when microtasks and macro tasks run, their callbacks are pushed onto the call stack as well.

So why is there a difference between macro and micro task queues?

In fact, the macro task queue is just the ordinary message queue: user interaction callbacks, setTimeout callbacks, and so on are all macro tasks. The microtask queue holds a different kind of asynchronous work, mainly Promise callbacks (including those that deliver the results of network requests) and the like.

The macro tasks we commonly refer to are the asynchronous tasks executed by the Event Loop. But asynchronous tasks also need priorities: some must run before others, and that is why microtasks were born. When the current main-thread stack is empty and the user agent has not yet returned control to the Event Loop, any tasks in the microtask queue are completed first, and only then is control handed back. The event loop then checks whether there are macro tasks in the current iteration, executes one, and repeats. As a result, a microtask always runs before the next macro task, which is effectively a queue-jumping operation.

For example, the following two pieces of code illustrate this process intuitively:

```js
setTimeout(() => console.log(2), 0);
setTimeout(() => new Promise((resolve, reject) => resolve()).then(() => console.log(3)), 0);
new Promise((resolve, reject) => resolve()).then(() => console.log(1));
// output: 1, 2, 3
```

So how does this code run? First the engine encounters the first setTimeout and, after the 0 ms interval, its callback is placed in the task queue. The second setTimeout is handled the same way. Finally a Promise is encountered, and its then callback is put into the microtask queue. At this point the execution stack is empty but the microtask queue has one entry, so that microtask runs and prints 1. Now the stack and the microtask queue are both empty, so control goes to the Event Loop, which finds tasks in the task queue: the first setTimeout callback runs and prints 2. Then the second setTimeout callback runs, adding a Promise then callback to the microtask queue; with the stack empty again, that microtask runs and prints 3.

So, let’s now reverse the order of the two setTimeouts.

```js
setTimeout(() => new Promise((resolve, reject) => resolve()).then(() => console.log(3)), 0);
setTimeout(() => console.log(2), 0);
new Promise((resolve, reject) => resolve()).then(() => console.log(1));
// output: 1, 3, 2
```

After reversing the order of the two timeouts, we see that the Promise inside the first setTimeout executes before the second setTimeout. Walking through it: the two setTimeout callbacks are placed in the task queue in order, and the Promise then callback goes into the microtask queue. The microtask queue is drained first, printing 1, before execution is handed to the Event Loop. The first setTimeout callback is then pushed onto the call stack and executed; inside it, a Promise then callback is created and placed in the microtask queue. When that task finishes and the stack is empty, the microtask queue is checked, the newly added microtask runs, and 3 is printed. Only then does the Event Loop get control back and execute the second setTimeout, printing 2.

The second parameter of setTimeout is the delay before the callback is added to the task queue, not a guaranteed execution time. Note also that browsers clamp the delay: heavily nested setTimeout calls are given a minimum delay of about 4 ms.
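
A small sketch of this: the timer below is created with a 0 ms delay, but its callback cannot run until the long synchronous block has released the main thread, so the measured delay ends up far larger than 0 ms:

```js
const start = Date.now();

setTimeout(() => {
  // Runs only after the blocking loop below has finished.
  console.log(`timer fired after ~${Date.now() - start} ms`);
}, 0);

// Synchronous work that keeps the main thread busy for roughly 200 ms.
while (Date.now() - start < 200) {
  // busy-wait
}
console.log('synchronous block finished');
// output: "synchronous block finished", then "timer fired after ~200 ms"
```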

Through the discussion above, we now have a preliminary understanding of the Event Loop. Typically the task queue holds things like rendering tasks, user actions, and setTimeout callbacks, while the microtask queue holds Promises, Mutation Observers, and so on. So why the distinction?

The first step is to think about when, based on the description above, you actually need a microtask.

Most scenarios where we use microtasks involve work that should run after the current synchronous call stack has finished, but before user actions or ordinary delayed tasks. Microtasks let us keep data or operations in a consistent order, even when a result happens to be available synchronously, while reducing the risk of user-perceptible delays.

The Promises we use so often exist to run some as-yet-unscheduled work, such as a follow-up callback, once remote data becomes available. Fetching the data and acting on it are ordered operations, and that order affects subsequent user interaction. Similarly, a MutationObserver watches for DOM-tree changes (delivered as a batch) and then triggers its callback; since DOM changes clearly affect rendering and user interaction, it is also delivered as a microtask, as in the sketch below.
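
A minimal MutationObserver sketch (the list element is made up for illustration): several DOM changes made in the same task are batched and delivered to the callback as a single microtask, after the synchronous code finishes:

```js
const target = document.createElement('ul');
document.body.appendChild(target);

const observer = new MutationObserver((mutations) => {
  // Both appends below are delivered together in one callback.
  console.log(`observed ${mutations.length} mutation records`);
});
observer.observe(target, { childList: true });

target.appendChild(document.createElement('li'));
target.appendChild(document.createElement('li'));

console.log('synchronous code done');
// output: "synchronous code done", then "observed 2 mutation records"
```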

An ordinary message-queue (macro) task, by contrast, is just a callback that handles a single action; there is no need to guarantee ordering or interaction between such tasks. Each one simply fires when its condition is met, as a plain asynchronous callback with no priority guarantee.