Nodejs is non-blocking due to its event-based design pattern, also known as the Reactor pattern.
Nodejs is also single-threaded, meaning the code written by the developer runs on a single thread, while some of Nodejs's internal implementation is multi-threaded, for example the handling of I/O (reading files, network requests, etc.). The event loop was mentioned briefly in another article and is elaborated in this one.
But I/O requests are also initiated by code the developer writes, and our own code runs on a single thread, so how can the I/O be multi-threaded? This is where the Reactor pattern comes in. Before getting to it, let's briefly look at blocking and non-blocking I/O.
Blocking I/O vs Non-blocking I/O
Blocking I/O
Blocking I/O means the application waits on an I/O request until the result comes back, and it does nothing else while waiting. For example:
data = socket.read();   // blocks here until the data comes back
print(data);
A web server has to handle many requests. With blocking I/O, they cannot be processed concurrently: each request is handled only after the previous one finishes. The classic solution is to spawn a thread per request, as in the following scenario:
Spinning up many threads is expensive (memory footprint, context switches), and as the diagram shows, each thread spends a lot of its time idle, waiting on I/O, so the threads are not fully utilized.
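As a quick illustration of blocking I/O in Node.js terms, here is a minimal sketch using the synchronous fs API (which Node.js provides but discourages for servers, precisely because of this blocking behavior):

```js
const fs = require('fs');

// readFileSync blocks the entire thread: nothing else can run, and no
// other request could be served, until the read completes.
const data = fs.readFileSync(__filename, 'utf8');

console.log('got', data.length, 'characters');
console.log('this line had to wait for the read above');
```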
Non-blocking I/O
With non-blocking I/O, the call usually returns immediately without waiting for the result. If there is no data available yet, a preset constant is returned to indicate that.
Here is a basic example that keeps looping over a set of resources until data can be read from them.
// Resource collection
resources = [socketA, socketB, pipeA];

// As long as some resource has not delivered all its data, the loop continues
while (!resources.isEmpty()) {
  for (i = 0; i < resources.length; i++) {
    resource = resources[i];
    // Non-blocking: returns directly,
    // with a preset constant if there is no data yet
    data = resource.read();
    if (data === NO_DATA_AVAILABLE) {
      // The resource is not ready yet, keep waiting
      continue;
    }
    if (data === RESOURCE_CLOSED) {
      // The resource has been fully read; remove it from the collection
      resources.remove(i);
    } else {
      // The data has been obtained, consume it
      consumeData(data);
    }
  }
}
This makes it possible for a single thread to handle multiple resources concurrently. The technique is known as busy-waiting, but the CPU is constantly burned by the polling and cannot do anything else. For this reason, non-blocking I/O is generally built on a synchronous event demultiplexer instead.
As for what a synchronous event demultiplexer is, here is a quote from Wikipedia:
Uses an event loop to block on all resources. The demultiplexer sends the resource to the dispatcher when it is possible to start a synchronous operation on a resource without blocking
(Example: a synchronous call to read() will block if there is no data to read. The demultiplexer uses select() on the resource, which blocks until the resource is available for reading. In this case, a synchronous call to read() won’t block, and the demultiplexer can send the resource to the dispatcher.)
In simple terms, the demultiplexer watches the resources and, once they are ready, hands them to the corresponding handler for processing; the corresponding events are stored in the Event Queue for the event loop to poll and run.
As in the quote above, a call to read() returns immediately and the code after it can keep running without blocking; the blocking is delegated to the demultiplexer, whose implementation is lower-level and varies from system to system.
A simple example is:
socketA, pipeB;
// Register the resources to watch for read events
watchedList.add(socketA, FOR_READ);
watchedList.add(pipeB, FOR_READ);
// watch() blocks until one of the watched resources is ready to read;
// events holds the resources (events) that are ready
while (events = demultiplexer.watch(watchedList)) {
  ...
}
Reactor Pattern
The event loop in Nodejs is built on the Event demultiplexer and the Event Queue, which together form the core of the Reactor pattern. The first thing to be clear about regarding the Nodejs event loop is:
There is only one main thread executing the JS code, and that is the thread on which the Event Loop runs. (It is not the case that the main thread runs the JS code while a separate thread runs the event loop in parallel.)
The execution process of this mode is roughly shown in the figure below:
- The Event demultiplexer receives an I/O request and hands it down to the appropriate lower-level handler.
- Once the I/O produces data, the Event demultiplexer adds the registered callback to the Event Queue for the event loop to execute.
- The event loop executes the callbacks in the Event Queue until the Event Queue is empty.
- When there is nothing left in the Event Queue and the Event demultiplexer has no pending requests, the event loop terminates and the application exits. Otherwise, it goes back to step 1.
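A minimal sketch of this flow from the developer's side, using fs.readFile as the I/O request (the numbered comments map onto the steps above):

```js
const fs = require('fs');

// 1. The read request is handed to the event demultiplexer (libuv);
//    the JS thread is not blocked and keeps executing.
fs.readFile(__filename, 'utf8', (err, data) => {
  // 2-3. Once the data is ready, this registered callback is pushed onto
  //      the Event Queue and later executed by the event loop.
  if (err) throw err;
  console.log('file read, length:', data.length);
  // 4. Nothing else is pending after this, so the event loop exits.
});

console.log('read scheduled, main thread keeps running');
```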
Event Demultiplexer
We already touched briefly on what the event demultiplexer is; here we look at the event demultiplexer in Nodejs specifically.
The event demultiplexer is an abstract concept: every operating system has its own implementation, such as epoll on Linux, kqueue on macOS, and IOCP on Windows. Nodejs uses libuv to hide these system-specific implementations and stay cross-platform, and libuv also provides APIs for handling the various kinds of I/O requests (file I/O, network I/O, DNS resolution, etc.).
You can think of libuv as bundling all of this complexity together to form the event demultiplexer of Nodejs. The structure of libuv is shown in the figure below:
In libuv, some I/O operations use the non-blocking, asynchronous facilities of the underlying system (epoll and the like), but other kinds of I/O are too complex for that, so libuv handles them with a thread pool.
So, as stated at the beginning, the code the developer writes runs on a single thread, while I/O processing may be multi-threaded; none of that multi-threaded code is the developer's JS, because the thread pool lives inside libuv.
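A rough way to observe the thread pool (a sketch, assuming the default libuv pool size of 4): crypto.pbkdf2 is one of the operations Node.js offloads to the pool.

```js
const crypto = require('crypto');

const start = Date.now();
for (let i = 0; i < 4; i++) {
  // Each pbkdf2 call runs on a libuv thread-pool thread, not on the JS thread.
  crypto.pbkdf2('password', 'salt', 100000, 64, 'sha512', () => {
    // With the default pool size of 4, these callbacks tend to finish at
    // roughly the same time, because the hashing ran in parallel,
    // even though all of our JS still runs on a single thread.
    console.log(`task ${i} done after ${Date.now() - start} ms`);
  });
}
```

The pool size can be changed with the UV_THREADPOOL_SIZE environment variable.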
Event Queue
The Event Queue stores callback functions waiting to be processed by the event loop. In reality, though, there is more than one queue: there are four main queues that the event loop deals with.
- Timers and Intervals Queue: saves the callbacks of `setTimeout` and `setInterval`. The underlying data structure is actually a min-heap, but it is conventionally called a queue.
- IO Events Queue: saves the callbacks of completed I/O operations.
- Immediates Queue: saves the callbacks registered with `setImmediate`.
- Close Handlers Queue: saves the callbacks of all `close` events, such as `socket.on('close', ...)`.
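To see two of these queues interact, here is a small sketch: inside an I/O callback, a setImmediate callback runs before a setTimeout(..., 0) callback, because the Immediates Queue is processed right after the I/O phase, while timers are only picked up on the next iteration of the loop.

```js
const fs = require('fs');

fs.readFile(__filename, () => {
  // We are now inside a callback taken from the IO Events Queue.
  setTimeout(() => console.log('timers queue: setTimeout'), 0);
  setImmediate(() => console.log('immediates queue: setImmediate'));
  // Output: "immediates queue: setImmediate" first, then the timeout.
});
```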
In addition to the four main queues mentioned above, there are two more special queues:
- Next Ticks Queue: saves the callbacks registered with `process.nextTick`.
- Other Microtasks Queue: saves other microtask callbacks, such as resolved `Promise` callbacks.
This is, once again, the familiar distinction between macrotasks and microtasks.
So how does the event loop process these queues? The picture below shows it.
The Timers and Intervals Queue, IO Events Queue, Immediates Queue, and Close Handlers Queue are processed by the event loop in that order. After the Close Handlers Queue has been processed, the event loop exits if there is no more pending work; otherwise it starts over at the Timers and Intervals Queue.
Processing one of these queues is called a phase, and one iteration of the event loop walks through these four phases. So when do the two special queues run? Immediately after each phase, the event loop checks both special queues and executes their callbacks until they are empty; only when both are empty does it move on to the next phase.
The Next Ticks Queue has a higher priority than the Other Microtasks Queue, so the Next Ticks Queue is drained first.
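A small sketch of that ordering (run outside of any I/O callback, so the relative order of the timeout and the immediate may vary):

```js
setTimeout(() => console.log('timers queue'), 0);
setImmediate(() => console.log('immediates queue'));
Promise.resolve().then(() => console.log('microtasks queue'));
process.nextTick(() => console.log('next ticks queue'));
console.log('synchronous code');

// Typical output:
// synchronous code
// next ticks queue
// microtasks queue
// timers queue      (order relative to the immediate can vary here)
// immediates queue
```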
Also note that if process.nextTick is called recursively without a stop condition, the Next Ticks Queue keeps receiving new callbacks and never empties, starving the event loop. process.maxTickDepth used to define a maximum number of iterations to guard against this, but it was removed in Node.js v0.12.
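A sketch of what such starvation looks like (do not do this in real code):

```js
// The Next Ticks Queue never empties, so the event loop never reaches
// the timers phase and the timeout below never fires.
function spin() {
  process.nextTick(spin);
}
spin();

setTimeout(() => console.log('you will never see this'), 100);
```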