JavaScript execution in the browser and in Node.js is based on an event loop.

Understanding how the event loop works is important for optimization, and sometimes for correct architecture.

Event loop

The concept of an event loop is very simple: it is an endless loop in which the JavaScript engine waits for tasks, executes them, and then sleeps, waiting for more tasks.

The general algorithm of the engine:

  • While there are tasks: execute them, starting with the oldest one.
  • Sleep until a task appears, then go to step 1.
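The algorithm above can be sketched as a toy model in JavaScript (illustration only; real engines implement this loop natively, and the sleeping step is omitted):

```javascript
// A toy model of the macrotask loop (illustration only)
const macrotasks = [];
const log = [];

function enqueue(task) {
  macrotasks.push(task); // new tasks go to the end of the queue
}

function runLoop() {
  while (macrotasks.length > 0) {
    const task = macrotasks.shift(); // take the oldest task first (FIFO)
    task();
  }
  // a real engine would now sleep until another task arrives
}

enqueue(() => log.push('script'));
enqueue(() => log.push('mousemove handler'));
runLoop(); // tasks run in arrival order: 'script', then 'mousemove handler'
```

Queued tasks run strictly in arrival order, which is the first-in, first-out behavior described in this chapter.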

That is what we see when browsing a web page: the JavaScript engine does nothing most of the time and runs only when a script, handler, or event activates.

Example tasks:

  • When an external script loads, the task is to execute it.
  • When the user moves the mouse, the task is to dispatch the mousemove event and execute its handlers.
  • When the scheduled setTimeout time arrives, the task is to execute its callback.
  • … And so on.

The engine handles tasks as they come, then waits for more (sleeping, consuming close to zero CPU).

A task may arrive while the engine is busy; in that case it is enqueued.

Multiple tasks form a queue, known as the “macrotask queue” (V8 term):

For example, while the engine is busy executing a script, the user may move the mouse (generating mousemove events) and a setTimeout may expire, and so on; these tasks form a queue.

Tasks in the queue are processed on a first-in, first-out basis. When the browser engine finishes executing the script, it handles the mousemove event, then the setTimeout handler, and so on.

Two details:

  • Rendering never happens while the engine executes a task, no matter how long the task takes. Changes to the DOM are painted only after the task is complete.
  • If a task takes too long, the browser can’t do other tasks or process user events, so after a while it raises an alert over the whole page, such as “Page Unresponsive”, suggesting killing the task. This usually happens when there are heavy calculations or a programming error leading to an infinite loop.

So that’s the theory. Now, let’s see how to apply this knowledge.

Use case 1: Splitting a CPU-hungry task

Suppose we have a CPU-hungry task.

For example, syntax highlighting (used to color the sample code on this page) is a CPU intensive task. To highlight the code, it performs analysis, creates a lot of colored elements, and then adds them to the document — which can take a long time with a large text document.

While the engine is busy with syntax highlighting, it can’t do other DOM-related work, such as handling user events. It may even cause the browser to “hiccup” or “hang” for an unacceptable period of time.

We can avoid this problem by splitting the big task into pieces: highlight the first 100 lines, then schedule the next 100 lines with setTimeout (zero delay), and so on.
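The idea can be sketched as a reusable helper (processInChunks and its parameters are illustrative names, not an API from the text):

```javascript
// Sketch of a generic chunking helper (names are illustrative)
function processInChunks(items, chunkSize, handle) {
  let index = 0;

  function doChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      handle(items[index]); // process one item, e.g. highlight one line
    }
    if (index < items.length) {
      setTimeout(doChunk); // yield to the event loop, then continue
    }
  }

  doChunk();
}
```

With this, highlighting could become something like processInChunks(lines, 100, highlightLine), letting user events slip in between chunks.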

To demonstrate this approach, for simplicity’s sake, let’s take a function that counts from 1 to 1000000000 instead of text highlighting.

If you run the following code, you will see the engine “hang” for a while. For server-side JS this is clearly noticeable, and if you run it in a browser and try to click other buttons on the page, you’ll find that no other event is processed until the counting ends.

let i = 0;

let start = Date.now();

function count() {

  // Do a heavy task
  for (let j = 0; j < 1e9; j++) {
    i++;
  }

  alert("Done in " + (Date.now() - start) + 'ms');
}

count();

With the job split into pieces, the browser interface stays fully functional during the counting.

A single run of count() does a part of the job (*) and then re-schedules itself (**) if needed:
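A sketch of such a split version (chunks of 1e6 iterations, with the (*) and (**) markers matching the description):

```javascript
let i = 0;

let start = Date.now();

function count() {

  // do a piece of the heavy job (*)
  do {
    i++;
  } while (i % 1e6 != 0);

  if (i == 1e9) {
    alert("Done in " + (Date.now() - start) + 'ms');
  } else {
    setTimeout(count); // schedule the new call (**)
  }

}

count();
```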

The first run counts: i = 1…1000000. The second run counts: i = 1000001…2000000. And so on.

Now, if a new side task (such as an onclick event) appears while the engine is busy executing part 1, it gets queued and then executes when part 1 finishes, before the next part begins. The periodic returns to the event loop between count executions provide just enough “air” for the JavaScript engine to do something else, such as react to user actions.

It’s worth noting that both variants, with and without splitting the job via setTimeout, are comparable in speed; there is little difference in the overall counting time.

Let’s make an improvement to bring the times even closer.

We will move scheduling to the beginning of count() :

let i = 0;

let start = Date.now();

function count() {

  // Move the scheduling to the start
  if (i < 1e9 - 1e6) {
    setTimeout(count); // schedule a new call
  }

  do {
    i++;
  } while (i % 1e6 != 0);

  if (i == 1e9) {
    alert("Done in " + (Date.now() - start) + 'ms');
  }

}

count();

Now, when we begin count() and see that we’ll need to count() more, we schedule that immediately, before doing the job.

If you run it, you’ll easily notice that it takes significantly less time.

Why is that?

That’s simple: as you may remember, the browser enforces a minimum delay of 4ms for nested setTimeout calls (after several levels of nesting). Even if we set 0, the delay is still 4ms (or more). So the earlier we schedule it, the faster it runs.
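You can observe this clamping with a small snippet (timings vary by engine; Node.js, for instance, clamps to about 1ms rather than 4ms):

```javascript
let start = Date.now(), times = [];

setTimeout(function run() {
  times.push(Date.now() - start); // record elapsed time at each call

  if (times.length < 10) {
    setTimeout(run); // re-schedule with zero delay
  } else {
    console.log(times); // in browsers, later entries grow in ~4ms steps
  }
}, 0);
```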

Finally, we have split a heavy task into parts that no longer block the user interface, and the total time is not much longer.

Use case 2: Progress indicator

Another benefit of splitting overloaded tasks in browser scripts is that we can display progress indicators.

As mentioned earlier, changes in the DOM are drawn only after the currently running task has completed, no matter how long the task has taken to run.

On the one hand, this is great, because our function might create many elements, insert them one by one into the document, and change their styles; visitors won’t see any unfinished “intermediate” state. An important thing, right?

Here’s a demo: changes to i will not be shown until the function completes, so we will only see the last value:

<div id="progress"></div>

<script>

  function count() {
    for (let i = 0; i < 1e6; i++) {
      i++;
      progress.innerHTML = i;
    }
  }

  count();
</script>

… But we might also want to show something during a task, such as a progress bar.

If we use setTimeout to break the heavy task into parts, the changes will be drawn between them.

This looks even better:

<div id="progress"></div>

<script>
  let i = 0;

  function count() {

    // Do part of the heavy task (*)
    do {
      i++;
      progress.innerHTML = i;
    } while (i % 1e3 != 0);

    if (i < 1e7) {
      setTimeout(count);
    }

  }

  count();
</script>

Now the div shows increasing values of i, a kind of progress bar.

Use case 3: Do something after the event

In event handlers, we might decide to defer certain actions until the event has bubbled up and been processed at all levels. We can do this by wrapping the code in a zero-delay setTimeout.

In the chapter on creating custom events, we saw an example where the custom event menu-open is dispatched inside setTimeout, so that it occurs after the click event has been fully handled.

menu.onclick = function() {
  // ...

  // Create a custom event with the data of the menu item being clicked
  let customEvent = new CustomEvent("menu-open", {
    bubbles: true
  });

  // Dispatch custom events asynchronously
  setTimeout(() => menu.dispatchEvent(customEvent));
};

Macrotasks and microtasks

In addition to the macrotasks described in this chapter, there are also microtasks, mentioned in the chapter Microtasks.

Microtasks come only from our code. They are usually created by promises: the execution of a .then/.catch/.finally handler becomes a microtask. Microtasks are also used “behind the scenes” of await, as it is another form of promise handling.
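A small sketch showing that code after await resumes as a microtask (the order array is just for recording execution order):

```javascript
const order = [];

async function f() {
  order.push('before await');
  await null; // everything after await resumes later, as a microtask
  order.push('after await');
}

f();
order.push('sync code');

// once microtasks run, order is: ['before await', 'sync code', 'after await']
```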

There is also a special function, queueMicrotask(func), which queues func for execution in a microtask queue.

Immediately after each macrotask, the engine executes all tasks in the microtask queue before running any other macrotask, rendering, or anything else.

For example, take a look at the code below:

setTimeout(() => alert("timeout"));

Promise.resolve()
  .then(() => alert("promise"));

alert("code");

What is the order of execution here?

  1. code shows first, because it is a regular synchronous call.
  2. promise shows second, because .then passes through the microtask queue and runs after the current code.
  3. timeout shows last, because it is a macrotask.

A more detailed event loop order, from top to bottom: script execution first, then microtasks, then rendering, and so on.

All microtasks are completed before any other event handling, rendering, or any other macrotask takes place.

This is important because it ensures that the application environment is essentially the same between microtasks (no mouse coordinate changes, no new network data, and so on).

If we want to execute a function asynchronously (after the current code), but before changes are rendered or new events are processed, we can schedule it using queueMicrotask.

Here is an example similar to the previous “count progress bar”, but it uses queueMicrotask instead of setTimeout. You can see that it renders only at the very end, just as with synchronous code:

<div id="progress"></div>

<script>
  let i = 0;

  function count() {

    // Do part of the heavy task (*)
    do {
      i++;
      progress.innerHTML = i;
    } while (i % 1e3 != 0);

    if (i < 1e6) {
      queueMicrotask(count);
    }

  }

  count();
</script>

Conclusion

A more detailed event loop algorithm (although still simplified compared to the specification) :

  1. Dequeue and run the oldest task from the macrotask queue (e.g. “script”).
  2. Execute all microtasks: while the microtask queue is not empty, dequeue and run the oldest microtask.
  3. Render changes, if any.
  4. If the macrotask queue is empty, sleep until a macrotask appears.
  5. Go to step 1.

To schedule a new macrotask:

Use setTimeout(f) with zero delay. It can be used to split a calculation-heavy task into pieces, so that the browser can react to user events and show the task’s progress between the pieces.

In addition, it is also used in event handlers to schedule an action after the event has been fully processed.

To schedule a new microtask:

Use queueMicrotask(f). Promise handlers also go through the microtask queue. There is no handling of UI or network events between microtasks: they are executed immediately, one after another.

So, we can use queueMicrotask to execute a function asynchronously while keeping the state of the environment consistent.

Microtasks

Promise handlers .then/.catch/.finally are always asynchronous.

Even when a promise is immediately resolved, the code on the lines below .then/.catch/.finally will still execute before these handlers.

Example code is as follows:

let promise = Promise.resolve();

promise.then(() => alert("promise done!"));

alert("code finished"); // This alert is displayed first

If you run it, you’ll see “code finished” first, and then “promise done!”.

That’s strange, because the promise is definitely done from the very beginning.

Why is.then triggered later? What’s going on here?

Microtask Queue

Asynchronous tasks need proper management. For that, the ECMA standard specifies an internal queue, PromiseJobs, more often referred to as the “microtask queue” (V8 term).

As stated in the specification:

  • The queue is first-in, first-out: the task enqueued first runs first.
  • Execution of a task starts only when no other code is running in the JavaScript engine.

Or, put simply: when a promise is ready, its .then/.catch/.finally handlers are put into the queue, but they are not executed yet. When the JavaScript engine becomes free from the current code, it takes a task from the queue and executes it.

This is why “code finished” in the example above is displayed first.

Promise handlers always pass through this internal queue.

If there is a chain with multiple .then/.catch/.finally handlers, each of them is executed asynchronously. That is, it first gets queued, and is executed only when the current code is complete and all previously queued handlers are finished.
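A small sketch to illustrate: handlers from two independent chains interleave, because each handler is queued separately:

```javascript
const order = [];

Promise.resolve()
  .then(() => order.push('a1'))
  .then(() => order.push('a2'));

Promise.resolve()
  .then(() => order.push('b1'))
  .then(() => order.push('b2'));

// after the microtask queue drains, order is ['a1', 'b1', 'a2', 'b2']:
// each .then waits for its predecessor, not for the whole other chain
```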

What if the order of execution matters to us? How can we make “code finished” run after “promise done”?

It’s as simple as putting it into the queue with a chained .then, like this:

Promise.resolve()
  .then(() = > alert("promise done!"))
  .then(() = > alert("code finished"));

Now the code executes as expected.

Unhandled rejection

Remember the unhandledrejection event from the chapter on error handling with promises?

Now we can see exactly how JavaScript finds out that a rejection was unhandled.

An “unhandled rejection” occurs when a promise error is not handled by the end of the microtask queue.

Normally, if we expect an error to occur, we add.catch to the promise chain to handle the error:

let promise = Promise.reject(new Error("Promise Failed!"));
promise.catch(err => alert('caught'));

// Will not run: the error has been handled
window.addEventListener('unhandledrejection', event => alert(event.reason));

But if we forget to add .catch, then, after the microtask queue is empty, the JavaScript engine fires the event:

let promise = Promise.reject(new Error("Promise Failed!"));

// Promise Failed!
window.addEventListener('unhandledrejection', event => alert(event.reason));

What if we dealt with this error a little later? Such as:

let promise = Promise.reject(new Error("Promise Failed!"));
setTimeout(() => promise.catch(err => alert('caught')), 1000);

// Error: Promise Failed!
window.addEventListener('unhandledrejection', event => alert(event.reason));

Now, if we run the above code, we’ll first see “Promise Failed!”, and then “caught”.

If we didn’t know about the microtask queue, we might wonder: “Why did the unhandledrejection handler run? We did catch and handle the error!”

But now we know that unhandledrejection is generated when the microtask queue is complete: the engine examines promises and, if any of them is in the “rejected” state, the unhandledrejection event triggers.

In the example above, the .catch added by setTimeout also triggers, but later, after unhandledrejection has already occurred, so it doesn’t change anything.

Conclusion

Promise handling is always asynchronous, because all promise actions pass through the internal “promise jobs” queue, also known as the “microtask queue” (V8 term).

Therefore, the.then/catch/finally handler is always called after the current code has completed.

If we need to guarantee that a piece of code is executed after .then/.catch/.finally, we can add it to a chained .then call.

In most JavaScript engines (including browsers and Node.js), the concept of microtasks is closely tied to the “event loop” and “macrotasks”, covered in the event loop part of this article.

Web Workers

For long, heavy computing tasks that should not block event loops, we can use Web Workers.

That’s how we can run code in another, parallel thread.

Web Workers can exchange messages with the main thread, but they have their own variables and event loops.

Web Workers do not have access to the DOM, so they are useful mainly for calculations, letting us use multiple CPU cores simultaneously.

Original text: javascript.info/event-loop