After studying the V8 basics, one of the cornerstones of Node.js, we move on to another cornerstone this time: Libuv. The article "Libuv's design philosophy" gives an overview of libuv's design. If you haven't finished reading that article, it is not recommended to read the following content, because there will be a "generation gap" problem ~
All of the sample code for this article can be found in this repository: Libuv-Demo
1. Introduction to Libuv
Libuv is a cross-platform library focused on asynchronous I/O, featuring the famous event loop. Since we want to learn Libuv, it is worth first mastering how to compile it.
1.1. A brief introduction to compiling Libuv
Like V8, libuv compilation is summarized as follows:
- Download GYP first:
git clone https://chromium.googlesource.com/external/gyp build/gyp
- Generate Ninja build files:
./gyp_uv.py -f ninja
- Compile:
ninja -C out/Debug
- Run the test:
./out/Debug/run-tests
1.2. Simple use of Libuv
Using the compiled Libuv library file, we can start writing a simple and classic example: Hello World.
#include <stdio.h>
#include "uv.h"

int main() {
  uv_loop_t *loop = uv_default_loop();
  printf("hello libuv\n");
  uv_run(loop, UV_RUN_DEFAULT);
  return 0;
}
All of the demo modules, including hello_libuv.c, are compiled with the CLion IDE and a CMakeLists.txt file, using the approach described in the earlier article on how to properly embed V8 in our C++ application. Without going into further detail, remember to change the include_directories and link_directories entries in the CMakeLists.txt file to the directory locations of the Libuv headers and static library you compiled in section 1.
Ok, with that in mind, let's start with the demos to get into this code base that holds so many secrets. The following article is quite long; if you can't finish it in one sitting, it is suggested to bookmark it and read it over several passes.
2. Introduction and practice of the basic concepts of Libuv
Before we can understand Libuv, we need to understand the following concepts and test them with actual use cases.
2.1. Event-loop thread
We all know that the thread is the operating system's most basic scheduling unit, while the process is its most basic unit of resource allocation. From this we know that it is not the process that runs, but the threads inside the process. A process is simply a container holding the information, such as the data structures, that its threads need to run. When a process is created, the operating system creates one thread for it, called the main thread; all other threads are created by the programmer, from code running on the main thread. So every executable application has at least one thread.
Libuv runs its event loop on the thread that starts it (the main thread in our demos) and creates additional worker threads through its thread pool. The event-loop thread is essentially a while(1) loop that keeps iterating until there are no active handles left, at which point uv_run returns and the loop can be torn down. Keeping this in mind is important for the rest of the study.
2.2. Handle
The entire Libuv implementation is built on Handle and Request. So understanding handles, and all of the handle types Libuv provides, is the only way to really understand Libuv. According to the official documentation, a handle:
Represents a long-lived object capable of performing certain operations while active.
To understand this sentence, let's first grasp two key phrases: long life cycle, and object. All Libuv handles need to be initialized, and initialization is done by calling a function of the form uv_xxx_init, where xxx is the handle type. Inside this function, the handle passed in is initialized and its fields assigned, yielding a concrete object. For example, initializing a TCP handle:
handle->tcp.serv.accept_reqs = NULL;
handle->tcp.serv.pending_accepts = NULL;
handle->socket = INVALID_SOCKET;
handle->reqs_pending = 0;
handle->tcp.serv.func_acceptex = NULL;
handle->tcp.conn.func_connectex = NULL;
handle->tcp.serv.processed_accepts = 0;
handle->delayed_error = 0;
...
So a handle is an object; what makes it long-lived? In tcpServer.c you can see that the TCP server operations, binding a port and listening on it, are all based on the same TCP handle, and that handle lives for the whole lifetime of the application: as long as the TCP server is up, the handle stays alive. That is why it is called a long-lived object.
All handles provided by Libuv are as follows:
Next, we briefly introduce each of these Libuv handles.
2.2.1. uv_handle_t
First of all, Libuv has a base handle, uv_handle_t, which is the basic template for all other handles. Any handle can be cast to this type, and all APIs that take this handle can be used by the other handles.
Libuv can only keep running while there are active handles. Whether a handle is active (this can be checked with the method uv_is_active(const uv_handle_t* handle)) means different things depending on the handle type:
- uv_async_t handles are always active and cannot be deactivated, except by closing them with uv_close().
- I/O handles such as uv_pipe_t, uv_tcp_t, and uv_udp_t are active when they are doing something that involves I/O, such as reading, writing, or connecting.
- Handles such as uv_check_t, uv_idle_t, and uv_timer_t become active once started with uv_check_start(), uv_idle_start(), and so on.
To check which handles are active, use uv_print_active_handles(handle->loop, stderr);
Taking tcpServer.c as an example: after we start the TCP server, we start a timer that prints the existing handles. The result is as follows:
[-AI] async 0x10f78e9d8
[RA-] tcp 0x10f78e660
[RA-] timer 0x7ffee049d7c0
You can see that in the TCP example the long-lived handles are async, tcp, and timer. The flags in brackets before them are explained as follows:
R means the handle is referenced
A means the handle is active
I means the handle is used internally
2.2.2. uv_timer_t
As the name suggests, Libuv's timer is used to invoke the corresponding callback at some point in the future. Timer callbacks are invoked at the very beginning of each loop iteration, as we will see later when we cover the polling process.
2.2.3. uv_idle_t
Idle handles run the given callback once per loop iteration, right before the prepare handles.
The significant difference from prepare handles is that when there is an active idle handle, the loop performs a zero-timeout poll instead of blocking for I/O.
In the uv_backend_timeout method we can see that the I/O poll timeout returned is 0 when idle handles exist:

if (!QUEUE_EMPTY(&loop->idle_handles))
  return 0;
Callbacks to idle handles are used to perform low-priority tasks.
Note: despite the name "idle", idle handles call their callback on every loop iteration, not only when the loop is actually "idle".
2.2.4. uv_prepare_t
The prepare handle will run the given callback once per iteration of the loop, before I/O polling.
The question is: why did Libuv create such a handle? The intent is to give you a hook to do something right before polling for I/O, and then to use the check handle to verify the results right after the poll.
2.2.5. uv_check_t
The check handle runs the given callback once per loop iteration, right after I/O polling. Its purpose was mentioned above.
2.2.6. uv_async_t
The async handle allows the user to "wake up" the event loop from another thread and invoke a callback registered in advance on the main thread. In other words, it sends a message to the main (event-loop) thread so that it executes the callback that was registered earlier.
Note: libuv coalesces uv_async_send() calls. That is, the callback is not guaranteed to run once per call; several sends in a row may result in a single callback invocation.
We use thread.c as an example, combining uv_queue_work and uv_async_send. The printed result is as follows:
I am the master process, pid => 90714
I am event loop thread => 0x7fff8c2d9380

// From the thread id you can see the work callback runs on a thread-pool thread
I am work callback, calling in some thread in thread pool, pid => 90714
work_cb thread id 0x700001266000

// The uv_queue_work "after work" callback runs back on the event-loop thread
I am after work callback, calling from event loop thread, pid => 90714
after_work_cb thread id 0x7fff8c2d9380

// This is the uv_async_init callback, triggered because uv_async_send was
// executed in the work callback, as verified by the thread id 0x700001266000
I am async callback, calling from event loop thread, pid => 90714
async_cb thread id 0x7fff8c2d9380
I am receiving msg: This msg from another thread: 0x700001266000
2.2.7. uv_poll_t
The poll handle is used to monitor file descriptors for readability, writability, and disconnection, similar in purpose to poll(2).
The purpose of the poll handle is to support integration with external libraries that rely on the event loop to be notified of socket state changes, such as c-ares or libssh2. Using uv_poll_t for any other purpose is not recommended, because uv_tcp_t, uv_udp_t, and the others offer an implementation that is faster and more scalable than anything that can be achieved with uv_poll_t, especially on Windows.
The poll handle may occasionally signal that a file descriptor is readable or writable even when it is not. The user should therefore always be prepared to handle EAGAIN, or an equivalent error, when attempting to read from or write to the fd.
You cannot have more than one active poll handle on the same socket, as this can cause a busy loop or other malfunction in libuv.
The user should not close a file descriptor while an active poll handle is polling it. Otherwise the handle may report an error, or it may even start polling another socket. However, the fd can be safely closed immediately after a call to uv_poll_stop() or uv_close().
Note: on Windows, only socket file descriptors can be polled; on Linux, any file descriptor accepted by [`poll(2)`](http://linux.die.net/man/2/poll) can be polled.
The following lists the event types for polling:
enum uv_poll_event {
UV_READABLE = 1,
UV_WRITABLE = 2,
UV_DISCONNECT = 4,
UV_PRIORITIZED = 8
};
2.2.8. uv_signal_t
The signal handle implements Unix-style signal handling on a per-event-loop basis. udpServer.c shows how to use the signal handle:
uv_signal_t signal_handle;
r = uv_signal_init(loop, &signal_handle);
CHECK(r, "uv_signal_init");
r = uv_signal_start(&signal_handle, signal_cb, SIGINT);
void signal_cb(uv_signal_t *handle, int signum) {
printf("signal_cb: recvd CTRL+C shutting down\n");
uv_stop(uv_default_loop()); //stops the event loop
}
A few things to know about the Signal handle:
- Signals raised programmatically with raise() or abort() will not be detected by Libuv, so their callbacks will not fire.
- SIGKILL and SIGSTOP cannot be caught.
- Handling SIGBUS, SIGFPE, SIGILL, or SIGSEGV through libuv results in undefined behavior.
2.2.9. uv_process_t
The process handle creates a new process and allows the user to control it and establish communication channels with it using streams. It is worth noting that the first element of the args array provided in the options structure is the path of the executable, as in the demo:
const char* exepath = exepath_for_process();
char *args[3] = { (char*) exepath, NULL, NULL };
exepath in the example is the execution path of the fsHandle executable.
Another point to note is the stdio configuration between parent and child process; some references are provided in the demo, as well as in another demo, pipe, if you are using pipes.
2.2.10. uv_stream_t
The stream handle provides an abstraction over duplex communication channels. uv_stream_t is an abstract type; Libuv provides three stream implementations in the form of uv_tcp_t, uv_pipe_t, and uv_tty_t. There is no standalone example for it, but Libuv has several methods whose input parameter is uv_stream_t, meaning these methods can be used by tcp/pipe/tty handles:
int uv_shutdown(uv_shutdown_t* req, uv_stream_t* handle, uv_shutdown_cb cb)
int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb)
int uv_accept(uv_stream_t* server, uv_stream_t* client)
int uv_read_start(uv_stream_t* stream, uv_alloc_cb alloc_cb, uv_read_cb read_cb)
int uv_read_stop(uv_stream_t*)
int uv_write(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb)
int uv_write2(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle, uv_write_cb cb)
2.2.11. uv_tcp_t
TCP handles can be used to represent both TCP streams and servers. uv_stream_t is the "parent class" of uv_tcp_t, implemented through structure embedding. The structural relationship among uv_handle_t, uv_stream_t, and uv_tcp_t is shown below:
The steps to create a TCP server using Libuv can be summarized as follows:
1. Initialize the server handle: uv_tcp_init(loop, &tcp_server)
2. Bind an address: uv_tcp_bind
3. Listen for connections: uv_listen
4. When a connection arrives, the uv_listen callback fires and does the following:
4.1. Initialize a TCP handle for the client: uv_tcp_init()
4.2. Accept the client connection: uv_accept()
4.3. Start reading the data sent by the client: uv_read_start()
4.4. Act on the data once read; if you need to respond to the client, call uv_write to write data back.
See demo for more details
2.2.12. uv_pipe_t
The pipe handle provides an abstraction over local domain sockets on Unix and named pipes on Windows. It is a "subclass" of uv_stream_t. Pipes can be used for many purposes, from reading and writing files to communication between threads. In our example we use them for communication between the main thread and multiple child threads. The implementation model looks like this:
As the model shows, we use pipes to hand each client connection off to a random thread; all subsequent operations are communication between that thread and the client.
2.2.13. uv_tty_t
A TTY handle represents a stream to the console; it is less commonly used.
2.2.14. uv_udp_t
UDP handles encapsulate UDP communication for both clients and servers. The steps to create a UDP server using Libuv can be summarized as follows:
1. Initialize the receiving uv_udp_t: uv_udp_init(loop, &receive_socket_handle)
2. Bind an address: uv_udp_bind
3. Start receiving: uv_udp_recv_start
4. In the receive callback you can write data back and send it to the client using the following methods:
4.1. uv_udp_init to initialize a send_socket_handle
4.2. uv_udp_bind to bind the sender's address, which can be obtained in the recv callback
4.3. uv_udp_send to send the specified message
If you need broadcasting, uv_udp_set_broadcast is used to enable it; an example is given in the official documentation. For details, see the UDP demo.
2.2.15. uv_fs_event_t
The fs event handle allows the user to monitor a given path for changes, for example when the file is renamed or its contents change. This handle uses the best backend available on each platform.
2.2.16. uv_fs_poll_t
The fs poll handle also allows the user to monitor a given path for changes. Unlike uv_fs_event_t, fs poll handles use stat to detect when the file has changed, so they can work on file systems where fs event handles cannot.
2.3. Request
Next comes the concept of the Request, a short-lived operation, represented by a structure similar to req in Node.js. Using the TCP server example from above again, there is this code:
uv_shutdown_t *shutdown_req = malloc(sizeof(uv_shutdown_t));
r = uv_shutdown(shutdown_req, (uv_stream_t *)tcp_client_handle, shutdown_cb);
CHECK(r, "uv_shutdown");
When the client connection fails and needs to be closed, we create a request and pass it to the operation we want to perform, in this case shutdown.
Here is a mind map of handles and requests provided by Libuv:
Compared with handles, Libuv has relatively few request types. The illustration above gives instructions for each request; we can refer back to it at any time.
2.3.1. uv_req_t
uv_req_t is the base request; every other request is an extension of this structure. Any API defined for uv_req_t can be used by the other requests. It plays the same role as uv_handle_t does for handles.
2.4. The three run modes of Libuv
Moving on to the three operating modes provided by Libuv:
- UV_RUN_DEFAULT: the default mode; runs the event loop until there are no more active, referenced handles or requests.
- UV_RUN_ONCE: if there are callbacks in the pending_queue, they are executed and uv__io_poll is skipped; otherwise this mode performs exactly one I/O poll (uv__io_poll). If callbacks were pushed into the pending_queue during execution, uv_run returns a non-zero value, and you will need to call uv_run again at some point in the future to drain the pending_queue.
- UV_RUN_NOWAIT: similar to UV_RUN_ONCE, but it does not check whether the pending_queue has callbacks; it directly performs one non-blocking I/O poll.
Finally
Ok, limited by space, the Libuv basics are still not finished; you can click through to continue with part two, or digest this part first ~