Event loop: microtasks and macrotasks

The flow of JavaScript execution in the browser and the flow in Node.js are based on event loops.

Understanding how event loops work is important for code optimization, and sometimes for proper architecture.

In this chapter, we first cover the theoretical details of how things work, and then look at practical applications of that knowledge.

Event loop

The concept of an event loop is very simple. It is an endless loop in which the JavaScript engine waits for tasks, executes them, and then sleeps, waiting for more tasks.

The general algorithm of the engine:

  1. While there are tasks:
    • Execute them, starting with the oldest one.
  2. Sleep until a task appears, then go to step 1.

This is what happens when we browse a web page: the JavaScript engine does nothing most of the time and runs only when a script/handler/event activates.
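As a rough mental model, the loop can be sketched like this. This is a toy model for illustration only; macrotaskQueue and eventLoopTick are made-up names, not real engine internals:

// Toy model of the event loop described above (illustration only)
const macrotaskQueue = [];

function eventLoopTick() {
  // execute queued tasks, oldest first
  while (macrotaskQueue.length > 0) {
    const task = macrotaskQueue.shift();
    task(); // run the task to completion
  }
  // a real engine would now sleep until a new task arrives
}

macrotaskQueue.push(() => console.log("task 1"));
macrotaskQueue.push(() => console.log("task 2"));
eventLoopTick(); // logs "task 1", then "task 2"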

Example tasks:

  • When an external script <script src="..."> loads, the task is to execute it.
  • When the user moves the mouse, the task is to dispatch the mousemove event and execute its handlers.
  • When the time scheduled by setTimeout arrives, the task is to execute its callback.
  • … And so on.

Tasks are set, the engine handles them, and then it waits for more tasks (sleeping and consuming close to zero CPU in the meantime).

When a task arrives, the engine may be busy and the task will be queued.

Multiple tasks form a queue, known as the “macrotask queue” (V8 terminology):

For example, while the engine is busy executing a script, the user may move the mouse (generating a mousemove event), a setTimeout may expire, and so on; these tasks form a queue.

Tasks from the queue are processed on a first-in, first-out basis. When the browser engine finishes executing the script, it handles the mousemove event, then the setTimeout handler, and so on.
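A small sketch of this ordering (timings are illustrative): two zero-delay timers expire while a synchronous loop keeps the engine busy, so their callbacks are queued and run only after the script finishes, in the order they were queued.

setTimeout(() => console.log("first queued task"));
setTimeout(() => console.log("second queued task"));

// keep the engine busy for ~100 ms; both timers expire in the meantime
const start = Date.now();
while (Date.now() - start < 100) { /* busy */ }

console.log("script finished");
// Output: "script finished", then "first queued task", then "second queued task"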

So far, so simple, right?

Two details:

  1. The engine will never render when performing tasks. It doesn’t matter if the task takes a long time to execute. Changes to the DOM are drawn only after the task is complete.
  2. If a task takes too long to execute, the browser cannot perform other tasks or handle user events, so after some time it shows an alert such as “Page Unresponsive,” suggesting that you terminate the task along with the whole page. This usually happens when there are a lot of complex calculations or a programming error that leads to an infinite loop.

So that’s the theory. Now, let’s see how to apply this knowledge.

Use case 1: Splitting a CPU-hungry task

Suppose we have a CPU-hungry task.

For example, syntax highlighting (used to color the sample code on this page) is a CPU intensive task. To highlight the code, it performs analysis, creates a lot of colored elements, and then adds them to the document — which can take a long time with a large text document.

When the engine is busy with syntax highlighting, it can’t do other DOM-related work, such as handling user events. It may even cause the browser to “hiccup” or “hang” for an unacceptable amount of time.

We can avoid this problem by breaking up large tasks into smaller ones. Highlight the first 100 lines, then schedule the next 100 lines with setTimeout (delay parameter 0), and so on.

To demonstrate this approach, for the sake of simplicity, let’s take a function that counts from 1 to 1000000000 instead of text highlighting.

If you run the following code, you will see the engine “hang” for a while. This is clearly noticeable in server-side JS; if you run it in a browser and try to click another button on the page, you’ll find that no other events are handled until the counting finishes.

let i = 0;

let start = Date.now();

function count() {

  // Do a heavy task
  for (let j = 0; j < 1e9; j++) {
    i++;
  }

  alert("Done in " + (Date.now() - start) + 'ms');
}

count();

The browser may even display a “script is taking too long” warning.

Let’s split the task using nested setTimeout calls:

let i = 0;

let start = Date.now();

function count() {

  // Do part of the heavy task (*)
  do {
    i++;
  } while (i % 1e6 != 0);

  if (i == 1e9) {
    alert("Done in " + (Date.now() - start) + 'ms');
  } else {
    setTimeout(count); // schedule new calls (**)
  }

}

count();

The browser interface can now be used normally during the count process.

A single execution of count completes part of the work (*) and then reschedules its own execution (**) as needed:

  1. First run counts: i=1...1000000.
  2. Second run counts: i=1000001...2000000.
  3. … And so on.

Now, if a new side task (such as an onclick event) appears while the engine is busy executing the first part, it is queued and then executed when the first part finishes, before the next part begins. Periodically returning to the event loop between count executions gives the JavaScript engine enough “air” to do something else, such as react to other user actions.
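For instance, one could attach a handler like the following while the chunked count above is running. This is just a sketch; the button and its id are assumed and not part of the original example:

// Assumed markup: <button id="btn">Click me</button>
// With the chunked count() running, this handler still fires promptly,
// because its task is processed between count chunks.
btn.onclick = () => console.log("Click handled between count chunks");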

It’s worth noting that the two variants — whether or not a setTimeout is used to split tasks — are comparable in execution speed. There was little difference in the total time it took to perform the count.

Let’s make an improvement to make the two times more similar.

We will move the scheduling to the beginning of count():

let i = 0;

let start = Date.now();

function count() {

  // Move the scheduling to the start
  if (i < 1e9 - 1e6) {
    setTimeout(count); // schedule a new call
  }

  do {
    i++;
  } while (i % 1e6 != 0);

  if (i == 1e9) {
    alert("Done in " + (Date.now() - start) + 'ms');
  }

}

count();

Now, when we start count() and see that we will need to call count() again, we schedule that immediately, before doing the work.

If you run it, you’ll easily notice that it takes significantly less time.

Why is that?

This is simple: you’ll recall that multiple nested setTimeout calls have a minimum delay of 4ms in the browser. Even if we set it to 0, it’s still 4ms (or longer). So the earlier we schedule it, the faster it will run.
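A quick way to observe the clamping is a sketch like the one below; exact numbers vary by browser, and in most browsers the forced minimum only kicks in after a few levels of nesting:

let times = [];
let prev = Date.now();

function tick() {
  const now = Date.now();
  times.push(now - prev); // gap since the previous call
  prev = now;

  if (times.length < 10) {
    setTimeout(tick);     // zero-delay, but nested
  } else {
    console.log(times);   // typically the first gaps are ~0-1 ms, later ones ~4 ms or more
  }
}

tick();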

Finally, we split a heavy task into parts that now don’t clog the user interface. And it doesn’t take much longer.

Use case 2: Progress indicator

Another benefit of splitting overloaded tasks in browser scripts is that we can display progress indicators.

Typically, browsers render after the currently executing code is complete. It doesn’t matter whether the task takes a long time to execute. Changes to the DOM are drawn only after the task is complete.

On the one hand, this is great, because our function might create many elements, insert them one by one into the document, and change their styles — visitors won’t see any unfinished “in-between” content. It’s important, right?

As an example, changes to i will not be shown until the function completes, so we will only see the last value:

<div id="progress"></div>

<script>

  function count() {
    for (let i = 0; i < 1e6; i++) {
      i++;
      progress.innerHTML = i;
    }
  }

  count();
</script>

… But we might also want to show something during a task, such as a progress bar.

If we use setTimeout to break the heavy task into parts, the changes will be drawn between them.

This looks even better:

<div id="progress"></div> <script> let i = 0; Function count() {// do part of the heavy task (*) do {i++; progress.innerHTML = i; } while (i % 1e3 ! = 0); if (i < 1e7) { setTimeout(count); } } count(); </script>Copy the code

The div now shows increasing values of i, a sort of progress bar.

Use case 3: Do something after the event

In event handlers, we might decide to defer certain actions until the event has bubbled up and been processed at all levels. We can do this by wrapping the code in a zero-delay setTimeout.

In the chapter on creating custom events, we saw an example where the custom menu-open event is dispatched inside setTimeout, so it occurs after the click event has been fully handled.

menu.onclick = function() {
  // ...

  // Create a custom event with the data of the menu item being clicked
  let customEvent = new CustomEvent("menu-open", {
    bubbles: true
  });

  // Dispatch custom events asynchronously
  setTimeout(() => menu.dispatchEvent(customEvent));
};
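
For completeness, a hypothetical listener for that event could look like this (it is not part of the original example); because dispatch was deferred with setTimeout, it runs only after the click event has finished bubbling:

document.addEventListener('menu-open', event => {
  // runs in a separate task, after the click handling is fully done
  console.log('menu-open handled after the click event');
});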

Macrotasks and microtasks

In addition to the macrotasks described in this chapter, there are also microtasks, mentioned in the chapter Microtasks.

Microtasks come only from our code. They are usually created by promises: execution of a .then/.catch/.finally handler becomes a microtask. Microtasks are also used “behind the scenes” of await, as it is another form of promise handling.

There is also a special function, queueMicrotask(func), which queues func for execution in the microtask queue.
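A minimal sketch of its behavior:

queueMicrotask(() => console.log("runs as a microtask"));
console.log("runs first, synchronously");
// Output: "runs first, synchronously", then "runs as a microtask"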

Immediately after every macrotask, the engine executes all tasks from the microtask queue before running any other macrotask, rendering, or anything else.

For example, consider the following example:

setTimeout(() => alert("timeout"));

Promise.resolve()
  .then(() => alert("promise"));

alert("code");

What is the order of execution here?

  1. code shows first, because it is a regular synchronous call.
  2. promise shows second, because .then passes through the microtask queue and runs after the current code.
  3. timeout shows last, because it is a macrotask.
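
To see the microtask queue being drained completely before the next macrotask, one can chain several handlers (a sketch; the resulting order is noted in the comments):

setTimeout(() => alert("macrotask"));        // 4: runs last, as a macrotask

Promise.resolve()
  .then(() => alert("microtask 1"))          // 2: first microtask
  .then(() => alert("microtask 2"));         // 3: queued by the first handler, still before the macrotask

alert("code");                               // 1: regular synchronous code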

A more detailed picture of the event loop order, from top to bottom, is: run the script first, then microtasks, then rendering, and so on.

All microtasks are completed before any other event handling, rendering, or any other macrotask is performed.

This is important because it ensures that the application environment is essentially the same between microtasks (no mouse coordinate changes, no new network data, and so on).

If we want to execute a function asynchronously (after the current code), but before changes are rendered or new events are processed, we can schedule it using queueMicrotask.

Here is an example similar to the previous “counting progress bar” one, but it uses queueMicrotask instead of setTimeout. You can see that it renders only at the very end, just like synchronous code would:

<div id="progress"></div> <script> let i = 0; Function count() {// do part of the heavy task (*) do {i++; progress.innerHTML = i; } while (i % 1e3 ! = 0); if (i < 1e6) { queueMicrotask(count); } } count(); </script>Copy the code

Conclusion

A more detailed event loop algorithm (though still simplified compared to the specification):

  1. Dequeue and run the oldest task from the macrotask queue (e.g. “script”).

  2. Execute all microtasks:

    • While the microtask queue is not empty:
      • Dequeue and run the oldest microtask.

  3. Render changes, if any.

  4. If the macrotask queue is empty, sleep until a macrotask appears.

  5. Go to step 1.

To schedule a new macrotask:

  • Use setTimeout(f) with zero delay.

It can be used to split a heavy computational task into pieces, so that the browser can react to user events and show the progress of the task between the pieces.

In addition, it is also used in event handlers to schedule an action after the event has been fully processed.

To schedule a new microtask:

  • Use queueMicrotask(f).
  • Promise handlers also go through the microtask queue.

There is no UI or network event handling between microtasks: they are executed immediately, one after another.

So, we can use queueMicrotask to execute a function asynchronously while keeping the state of the environment consistent.

Web Workers

For long, heavy computing tasks that should not block event loops, we can use Web Workers.

This is how you run code in another parallel thread.

Web Workers can exchange messages with the main thread, but they have their own variables and event loops.
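
A minimal sketch of such an exchange (the file names and the message format here are illustrative):

// main.js: spawn a worker and exchange messages with it
const worker = new Worker('worker.js');

worker.onmessage = (event) => {
  console.log('result from worker:', event.data);
};

worker.postMessage({ upTo: 1e9 }); // hand the heavy counting off the main thread

// worker.js: runs in its own thread, with its own event loop and no DOM access
onmessage = (event) => {
  let count = 0;
  for (let i = 0; i < event.data.upTo; i++) count++;
  postMessage(count);
};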

Web Workers do not have DOM access, so they are mainly useful for computations that can use multiple CPU cores simultaneously.

Original text: https://javascript.info/event-loop