Nginx process model analysis

Before introducing the Nginx process model, let’s define some common terms that will help in understanding it. As a web server, Nginx was designed from the start to handle as many client requests as possible. Generally speaking, there are three ways to process requests in parallel: multi-process, multi-thread, and asynchronous.

Multi-process mode

Each time the server receives a client request, the main process spawns a child process to establish the connection and interact with the client; when the connection is closed, the child process terminates.

The advantage is that the child processes are independent of one another, so client requests do not interfere with each other.

The disadvantage is that spawning a child process requires copying the parent’s memory, which incurs extra overhead in resources and time. A large number of requests can therefore put pressure on system resources.
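
To make the model concrete, here is a minimal sketch of a fork-per-connection echo server in C. It is illustrative only (not Nginx code), with error handling omitted and port 8080 chosen arbitrarily:

    #include <netinet/in.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 128);
        signal(SIGCHLD, SIG_IGN);              /* reap children automatically */

        for (;;) {
            int conn_fd = accept(listen_fd, NULL, NULL);
            if (fork() == 0) {                 /* child: serve one connection */
                close(listen_fd);
                char buf[1024];
                ssize_t n = read(conn_fd, buf, sizeof(buf));
                if (n > 0) write(conn_fd, buf, n);  /* echo the data back */
                close(conn_fd);
                _exit(0);                      /* child exits with the connection */
            }
            close(conn_fd);                    /* parent keeps accepting */
        }
    }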

Multi-threaded mode

The multi-threaded approach is similar to the multi-process approach: each time the server receives a client request, it creates a thread to interact with the client. Because creating a thread is much cheaper than creating a process, multi-threading reduces the web server’s demand on system resources to a certain extent.

The disadvantage is that threads within a process share memory, so they can affect one another.
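
The corresponding thread-per-connection sketch differs only in how the interaction is dispatched (again illustrative C, same caveats as above):

    #include <netinet/in.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *serve(void *arg) {
        int conn_fd = (int)(long)arg;
        char buf[1024];
        ssize_t n = read(conn_fd, buf, sizeof(buf));
        if (n > 0) write(conn_fd, buf, n);     /* echo the data back */
        close(conn_fd);
        return NULL;
    }

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 128);

        for (;;) {
            int conn_fd = accept(listen_fd, NULL, NULL);
            pthread_t tid;                     /* one thread per connection */
            pthread_create(&tid, NULL, serve, (void *)(long)conn_fd);
            pthread_detach(tid);               /* no join; resources auto-freed */
        }
    }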

Asynchronous mode

The asynchronous approach is completely different from the two described before. Around “asynchronous” there are several related concepts: synchronous vs. asynchronous, and blocking vs. non-blocking, which we will explain here.

Synchronous and asynchronous are easy to understand. In a synchronous mechanism, the sender sends a request and waits for the receiver’s response before sending the next request; in an asynchronous mechanism, the sender sends a request and continues to send the next one without waiting for the receiver’s response.

Blocking and non-blocking refer to the modes of reading and writing data on a socket; socket reads and writes are essentially I/O operations. Each TCP socket has a send buffer and a receive buffer in the kernel. In blocking mode, if the receive buffer is empty, a read on the socket blocks the calling thread until data arrives in the receive buffer. When writing to the socket, if the data to be sent is longer than the free space in the send buffer, the write blocks.
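
As a sketch of the difference, a socket can be switched to non-blocking mode with fcntl; a read on an empty receive buffer then fails with EAGAIN instead of waiting (illustrative C, assuming fd is a connected socket):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* switch a connected socket to non-blocking mode */
    static int set_nonblocking(int fd) {
        int flags = fcntl(fd, F_GETFL, 0);
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    static void try_read(int fd) {
        char buf[1024];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* receive buffer is empty; a blocking read would have waited here */
            printf("no data yet, free to do other work\n");
        }
    }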

At first glance these four concepts can feel overwhelming, and it is often said that synchronous/asynchronous is the same thing as blocking/non-blocking. In fact they are easy to distinguish: the main difference is which party each pair describes.

Synchronous vs. asynchronous is about the caller: a synchronous caller makes a request and waits for the callee’s feedback without doing anything else, while an asynchronous caller goes on to other work.

Blocking vs. non-blocking is about the callee. A blocking callee receives a request and gives feedback only after completing the task; a non-blocking callee gives feedback immediately upon receiving the request and then performs the task.

In non-blocking mode, events are used to drive this. We can think of NIO as having an underlying I/O scheduling thread that continuously scans each socket’s buffers. When it finds free space in the send buffer, it generates a socket-writable event, at which point the program can write data to the socket; if the write is incomplete, it waits for the next writable-event notification. Conversely, when it finds data in the receive buffer, it generates a socket-readable event, on which the program reads data from the socket.
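
On Linux this kind of readiness notification is typically provided by epoll (which Nginx uses when available). A minimal event-loop sketch in C, with error handling and the actual read/write logic omitted:

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void event_loop(int listen_fd) {
        int ep = epoll_create1(0);
        struct epoll_event ev = {0}, events[64];
        ev.events = EPOLLIN;                       /* readable events */
        ev.data.fd = listen_fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            int n = epoll_wait(ep, events, 64, -1);  /* wait for events */
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listen_fd) {
                    /* listener readable: a new connection can be accepted */
                    int conn_fd = accept(listen_fd, NULL, NULL);
                    struct epoll_event cev = {0};
                    cev.events = EPOLLIN | EPOLLOUT;
                    cev.data.fd = conn_fd;
                    epoll_ctl(ep, EPOLL_CTL_ADD, conn_fd, &cev);
                } else if (events[i].events & EPOLLIN) {
                    /* socket-readable event: data waits in the receive buffer */
                } else if (events[i].events & EPOLLOUT) {
                    /* socket-writable event: the send buffer has free space */
                }
            }
        }
    }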

Combining these concepts yields four modes: synchronous blocking, synchronous non-blocking, asynchronous blocking, and asynchronous non-blocking.

Synchronous blocking: after sending a request to the receiver, the sender waits for the response. If an I/O operation performed while handling the request cannot return a result immediately, the receiver waits for the result before responding; both sides are blocked in the meantime.

Synchronous non-blocking: after sending a request to the receiver, the sender waits for the response. While performing the I/O operation, the receiver can do other things instead of waiting; the sender, however, is still waiting because no result has arrived. Once the I/O completes, the receiver sends the response to the sender and moves on to the next request.

Asynchronous blocking: after sending a request to the receiver, the sender can continue with other work without waiting for a response. If the I/O operation performed by the receiver cannot return a result immediately, the receiver waits for the result before responding to the sender.

Asynchronous non-blocking: after sending a request, the sender can continue with other work without waiting for a response. The receiver likewise does not wait for an I/O operation that has no immediate result, but does other things; when the I/O completes, the system notifies the receiver of the result, and the receiver then responds to the sender.

Nginx server request processing

  • Nginx combines the multi-process mechanism with the asynchronous mechanism to serve requests; the asynchronous part is the asynchronous non-blocking mode described above.
  • When the Nginx service starts, it creates one master process and multiple worker processes.

The master process is mainly used to manage the worker processes: it receives signals from the outside, forwards signals to all worker processes, and monitors the workers’ running state. When a worker process exits abnormally, the master automatically starts a new worker process.

Basic network events are handled in the worker processes. The workers are peers: they compete equally for requests from clients and are independent of each other. A request is processed entirely within one worker process, and a worker cannot process another process’s requests. The number of worker processes is configurable and is generally set to match the number of CPU cores on the machine.
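
A highly simplified sketch of this master/worker supervision in C (the real logic lives in Nginx’s src/os/unix/ngx_process_cycle.c; this is illustrative only):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define WORKERS 4                       /* usually the number of CPU cores */

    static void worker_loop(void) {
        for (;;) pause();                   /* stand-in for the worker's event loop */
    }

    int main(void) {
        for (int i = 0; i < WORKERS; i++)
            if (fork() == 0) { worker_loop(); _exit(0); }

        /* master: monitor workers and restart any that die abnormally */
        for (;;) {
            int status;
            pid_t dead = wait(&status);
            if (dead > 0 && !WIFEXITED(status)) {
                fprintf(stderr, "worker %d died, restarting\n", (int)dead);
                if (fork() == 0) { worker_loop(); _exit(0); }
            }
        }
    }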

What does the Master process do?

Read and validate the configuration file nginx.conf; manage the worker processes.

What does the Worker process do?

Each worker process maintains a single thread (avoiding the overhead of thread switching) to handle connections and requests. Note that the number of worker processes is set in the configuration file (the worker_processes directive) and is generally matched to the number of CPUs, which is favorable for process scheduling; configure N worker processes and you get exactly N.

Hot deployment

The master manages the worker processes, so we only need to communicate with the master process. The master receives signals from the outside world and does different things depending on the signal, for example via the command we used earlier:

./sbin/nginx -c conf/nginx.conf -s reload

When this command is executed, a new Nginx process starts; after parsing the -s reload argument, it knows it should make Nginx reload its configuration file, so it sends a signal to the master. The master reloads the configuration file, starts new worker processes, and sends a signal to all the old worker processes telling them they can retire. Once the new workers are up, they receive new requests under the new configuration file. This is the principle of hot deployment.
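
Under the hood, the -s reload command amounts to sending SIGHUP to the master. A conceptual sketch of the master’s side in C (not Nginx source; SIGQUIT is the signal Nginx uses to retire old workers gracefully):

    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t reload_requested = 0;

    static void on_sighup(int sig) { (void)sig; reload_requested = 1; }

    void master_loop(void) {
        signal(SIGHUP, on_sighup);
        for (;;) {
            pause();                            /* sleep until a signal arrives */
            if (reload_requested) {
                reload_requested = 0;
                /* 1. re-read and validate nginx.conf                      */
                /* 2. fork new workers running the new configuration       */
                /* 3. send SIGQUIT to the old workers: finish in-flight    */
                /*    requests, stop accepting new ones, then exit         */
            }
        }
    }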

How do worker processes handle requests?

Now that we basically know what happens inside Nginx when we operate on it, how does a worker process handle requests? In Nginx, all worker processes are equal, and each has the same chance to handle every request. When we serve HTTP on port 80 and a connection request arrives, any worker may end up processing that connection.

Worker processes are forked from the master process. The master first creates the socket that needs to be listened on, then forks out multiple worker processes. When a new connection request arrives, any worker can handle it. Before processing the request, a worker first tries to grab accept_mutex, a mutual-exclusion lock; the worker that successfully acquires the lock accepts the connection, parses and processes the request, and returns the result to the client.
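
A runnable toy version of this idea in C, using a process-shared mutex in place of Nginx’s accept_mutex (illustrative only; real workers fall back to their event loop instead of spinning when the lock is not obtained):

    #include <netinet/in.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static pthread_mutex_t *accept_mutex;       /* lives in shared memory */

    static void worker_cycle(int listen_fd) {
        for (;;) {
            if (pthread_mutex_trylock(accept_mutex) == 0) {
                /* this worker won the right to accept the next connection */
                int conn_fd = accept(listen_fd, NULL, NULL);
                pthread_mutex_unlock(accept_mutex);
                char buf[1024];
                ssize_t n = read(conn_fd, buf, sizeof(buf));
                if (n > 0) write(conn_fd, buf, n);  /* stand-in for real work */
                close(conn_fd);
            }
            /* lock not obtained: keep serving already-accepted connections */
        }
    }

    int main(void) {
        /* master creates the listening socket first... */
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 128);

        /* ...sets up a mutex shared across processes... */
        accept_mutex = mmap(NULL, sizeof(*accept_mutex), PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(accept_mutex, &attr);

        /* ...then forks workers that inherit both the socket and the lock */
        for (int i = 0; i < 4; i++)
            if (fork() == 0) { worker_cycle(listen_fd); _exit(0); }
        for (;;) pause();
    }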

The process model offers several benefits: the processes are independent, so when one worker exits abnormally, the other workers are unaffected; moreover, separate processes avoid many unnecessary locking operations, which improves processing efficiency and makes development and debugging easier.

Having worker processes compete to accept client connections can create a problem: one worker may win most of the competition and become busy while the other processes sit idle, and an overloaded worker may fail to respond to connections in time and drop requests it could otherwise have handled. This unfairness needs to be avoided, especially in environments requiring a highly reliable web server.

To address this, Nginx enables the accept_mutex mechanism and uses the variable ngx_accept_disabled to control whether a worker process competes for accept_mutex and accepts new events.

ngx_accept_disabled is computed as one eighth of a single nginx worker’s total connection count, minus the number of remaining free connections.

When ngx_accept_disabled is greater than 0, the worker does not try to acquire the accept_mutex lock; instead, ngx_accept_disabled is decremented by 1 on each event-loop iteration until it is no longer greater than 0.
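
In the Nginx source, the computation and the check look roughly like this (paraphrased from ngx_event_accept.c and ngx_process_events_and_timers; consult the actual source for details):

    /* recalculated after each accept(): positive once fewer than 1/8
       of this worker's connections remain free */
    ngx_accept_disabled = ngx_cycle->connection_n / 8
                          - ngx_cycle->free_connection_n;

    /* in the worker's event loop */
    if (ngx_use_accept_mutex) {
        if (ngx_accept_disabled > 0) {
            ngx_accept_disabled--;          /* sit this round out */
        } else if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
            return;
        }
    }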

In other words, once fewer than one eighth of a worker’s connections remain free, ngx_accept_disabled becomes positive and that worker temporarily stops competing for new connections. By declining to accept, a busy nginx process limits its own connection count, so new connections go to the connection pools of the other processes, which balances connections across the worker processes.

Well, that’s all for this article. If you have questions, feel free to leave a comment!

Original text: my.oschina.net/u/4553593/b…