The first thing to understand is that Nginx uses a multi-process (single thread per process) model combined with multiplexed I/O. Thanks to I/O multiplexing, Nginx works as a “concurrent, event-driven” server.
Multi-process mode of operation
After Nginx starts, there will be a master process and multiple independent worker processes.
The master process receives signals from the outside world and sends signals to each worker process; any of the worker processes may end up handling a given connection.
The master process monitors the running status of the worker processes. When a worker process exits (abnormally), a new worker process is started automatically.
Pay attention to the number of worker processes (the worker_processes directive): it is generally set to the number of CPU cores on the machine, because extra workers only make the processes compete with each other for the CPU, resulting in unnecessary context switches.
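As a small illustration (a sketch only, not nginx source code), a multi-process server can size its worker pool to the number of online cores, which is roughly what nginx’s worker_processes auto; setting does:

```c
/* Sketch: decide how many workers to fork by asking the OS how many CPU
 * cores are online. _SC_NPROCESSORS_ONLN is available on Linux and most
 * Unix-like systems. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long n_workers = sysconf(_SC_NPROCESSORS_ONLN);  /* online CPU cores */
    if (n_workers < 1)
        n_workers = 1;                               /* fall back to a single worker */
    printf("would fork %ld worker processes\n", n_workers);
    return 0;
}
```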
Using multiple processes not only improves concurrency; it also keeps the processes independent of each other, so if one worker process hangs, the other worker processes are not affected.
The “thundering herd” phenomenon
The master process first creates a listening socket descriptor with socket(), and then forks the worker processes, which inherit the parent process’s listening sockfd. Each child process can then call accept() on it to create a connected descriptor and communicate with the client through that connected descriptor.
Since all child processes inherit the parent’s listening sockfd, every child process is notified when a connection comes in and “races” to establish it; this is the “thundering herd.” A large number of processes are woken up and then suspended again, while only one of them can actually accept() the connection, which of course wastes system resources.
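A minimal sketch of that setup (not nginx code; the port number is just an example and error handling is omitted): the parent creates the listening socket, forks the workers, and every worker blocks in accept() on the same inherited descriptor, so they all compete for each incoming connection. Modern Linux kernels generally wake only one process for a plain blocking accept(), but the wake-everyone problem reappears when the workers watch the listening descriptor from an event loop.

```c
/* Sketch of the thundering-herd setup: one listening socket created in the
 * parent, inherited by every forked worker, all of which call accept() on it. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NUM_WORKERS 4

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);          /* sockfd used for listening */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                               /* example port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    for (int i = 0; i < NUM_WORKERS; i++) {
        if (fork() == 0) {                                     /* each child inherits listen_fd */
            for (;;) {
                int conn_fd = accept(listen_fd, NULL, NULL);   /* all workers compete here */
                if (conn_fd < 0)
                    continue;
                printf("worker %d (pid %d) won the connection\n", i, (int)getpid());
                close(conn_fd);                                /* handle the client, then close */
            }
        }
    }
    for (;;)
        pause();                                               /* master just waits for signals */
}
```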
How Nginx handles the thundering herd
Nginx provides accept_mutex, a shared lock around accept(): each worker process must obtain the lock before calling accept(), and a worker that fails to obtain it gives up accept() for that round. With this lock, only one process calls accept() at any given time, so the thundering herd does not occur. accept_mutex is a configurable option that can be explicitly turned off; it is turned on by default.
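A hedged sketch of the idea: nginx implements accept_mutex with a lock held in shared memory, but the pattern can be illustrated with a non-blocking file lock standing in for it, so that only the lock holder calls accept().

```c
/* Sketch only: flock() on a lock file stands in for nginx's shared accept
 * mutex. Each worker tries the lock without blocking; only the current
 * holder accepts new connections, so one connection wakes one worker. */
#include <sys/file.h>
#include <sys/socket.h>
#include <unistd.h>

void worker_cycle(int listen_fd, int lock_fd) {
    for (;;) {
        if (flock(lock_fd, LOCK_EX | LOCK_NB) == 0) {      /* try to grab the accept lock */
            int conn_fd = accept(listen_fd, NULL, NULL);   /* only the holder accepts */
            flock(lock_fd, LOCK_UN);                       /* release so other workers get a turn */
            if (conn_fd >= 0) {
                /* read, parse, and answer the request here ... */
                close(conn_fd);
            }
        } else {
            usleep(1000);  /* did not get the lock: give up accept() for this round */
        }
    }
}
```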
Nginx processes in detail
After Nginx starts, there will be one master process and multiple worker processes.
The master process
It is mainly used to manage the worker processes: it receives signals from the outside, sends signals to all worker processes, and monitors the running status of the worker processes. When a worker process exits (abnormally), it automatically starts a new worker process.
The master process acts as the interface between the whole process group and the user, and it supervises the other processes. It does not handle network events itself and is not responsible for executing the actual service; it only manages the worker processes in order to provide service restarts, smooth upgrades, log file rotation, configuration reloads that take effect on the fly, and similar functions.
To control nginx, we simply send signals to the master process via kill. For example, kill -HUP pid tells nginx to restart gracefully. We usually use this signal to restart nginx or to reload its configuration; because the restart is graceful, the service is not interrupted. What does the master process do when it receives the HUP signal?
First, after receiving the signal, the master process reloads the configuration file, then starts new worker processes and sends a signal to all the old worker processes telling them they can retire with honor. The new workers start accepting new requests as soon as they are up, while the old workers stop accepting new requests once they receive the master’s signal and exit after all outstanding requests in the process have been handled.
Of course, sending signals directly to the master process is the older way of doing things. Since version 0.8, nginx has provided a set of command-line arguments to make management easier; for example, ./nginx -s reload restarts nginx gracefully, and ./nginx -s stop stops it. How does this work? Take reload as an example: when we execute the command, a new nginx process is started, and after parsing the reload argument it knows that our intent is to make nginx reload its configuration file. It then sends the appropriate signal to the master process, and from that point on everything proceeds exactly as if we had sent the signal to the master directly.
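A heavily simplified sketch of that reload sequence (an assumption-laden illustration, not nginx’s actual code: real nginx re-parses the configuration, tracks worker generations, and handles many error cases): the master reacts to SIGHUP by starting a new generation of workers and asking the old ones to exit gracefully with SIGQUIT.

```c
/* Simplified sketch: on SIGHUP the master "reloads" and replaces its workers. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define NUM_WORKERS 4

static volatile sig_atomic_t reload = 0;
static void on_hup(int sig) { (void)sig; reload = 1; }   /* just record the signal */

static pid_t spawn_worker(void) {
    pid_t pid = fork();
    if (pid == 0) {
        for (;;)
            pause();   /* stand-in for the worker's accept/handle loop */
    }
    return pid;
}

int main(void) {
    pid_t workers[NUM_WORKERS];
    signal(SIGHUP, on_hup);

    for (int i = 0; i < NUM_WORKERS; i++)
        workers[i] = spawn_worker();

    for (;;) {
        pause();                              /* master sleeps until a signal arrives */
        if (reload) {
            reload = 0;
            /* re-read the configuration file here, then replace the workers */
            for (int i = 0; i < NUM_WORKERS; i++) {
                pid_t old = workers[i];
                workers[i] = spawn_worker();  /* new worker for the new generation */
                kill(old, SIGQUIT);           /* nginx uses SIGQUIT to retire old workers gracefully */
            }
            printf("master: configuration reloaded, workers replaced\n");
        }
    }
}
```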
Worker processes
Basic network events are handled in the worker processes. The worker processes are peers: they compete equally for requests from clients and are independent of each other. A request can only be processed in one worker process, and a worker process cannot handle requests belonging to another process. The number of worker processes can be configured and is generally set to match the number of CPU cores on the machine; the reason for this is inseparable from nginx’s process model and event-handling model.
Worker processes are equal, and each has the same opportunity to process a request. When we provide HTTP service on port 80 and a connection request arrives, any of the processes could potentially handle that connection. How does that happen? First, every worker process is forked from the master process. In the master process, the socket that needs to listen (listenfd) is created first, and then the worker processes are forked; the listenfd of all worker processes becomes readable when a new connection arrives. To ensure that only one process handles the connection, all worker processes compete for accept_mutex before registering the listenfd read event; the process that grabs the mutex registers the read event on listenfd and calls accept() in that event to take the connection.

Once a worker process has accepted the connection, it reads the request, parses it, processes it, generates the response data, returns it to the client, and finally closes the connection. That is what a complete request looks like: a request is handled entirely by one worker process, and only within that one worker process.
Worker process workflow
Once a worker process has accept()ed a connection, it reads the request, parses it, processes it, generates the data, and returns it to the client; finally, the connection is closed. That is one complete request. A request is handled entirely by the worker process, and only within one worker process.
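For one connection, that lifecycle might look roughly like the sketch below (assuming a trivial HTTP-style exchange; nginx’s real parsing, buffering, and keep-alive handling are far more involved).

```c
/* Sketch of one request handled entirely inside a worker:
 * read the request, process it, write the response, close the connection. */
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void handle_connection(int conn_fd) {
    char buf[4096];

    ssize_t n = recv(conn_fd, buf, sizeof(buf) - 1, 0);    /* read the request */
    if (n > 0) {
        buf[n] = '\0';
        /* parse and process the request here ... */

        const char *resp =                                  /* generate data for the client */
            "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
        send(conn_fd, resp, strlen(resp), 0);               /* return it to the client */
    }

    close(conn_fd);                                         /* finally, disconnect */
}
```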
The benefits of doing this:
Saving the overhead of locks. Each worker process is an independent process that shares no resources with the others, so no locking is needed. This also makes programming and troubleshooting much more convenient.
Independent processes reduce risk. Because the processes are independent, they cannot affect each other: after one process exits, the others keep working and the service is not interrupted, and the master process quickly starts a new worker process. Of course, a worker process can also exit unexpectedly.
In the plain multi-process model, each process/thread can handle only one I/O stream at a time.
Without I/O multiplexing, a process can handle only one request at a time: in accept(), for example, if no connection arrives the program blocks there until one does. With multiplexing, the kernel keeps the process suspended and returns control to the program only when an event actually occurs.
Core: Nginx uses epoll as its I/O multiplexing model
Nginx registers an event that says, “Let me know when a connection request from a new client comes in.” After that, the server calls accept() only when a connection request has actually arrived. Likewise, when forwarding a request to an upstream server (such as PHP-FPM) and waiting for the reply, the worker does not block there: after sending the request it registers another event, “let me know when the receive buffer has data, and I’ll read it in,” and the process goes idle and waits for events.
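A minimal epoll sketch of that pattern (these are the underlying system calls, not nginx’s event module): the process registers the descriptors it cares about, sleeps in epoll_wait(), and only calls accept() or recv() when the kernel reports that the corresponding event has occurred.

```c
/* Minimal epoll loop: the process sleeps in epoll_wait() and is woken only
 * when a registered descriptor has an event, instead of blocking in accept()
 * or recv() on a single connection at a time. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void event_loop(int listen_fd) {
    int ep = epoll_create1(0);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);           /* "tell me when a new client arrives" */

    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(ep, events, 64, -1);             /* process sleeps here until an event fires */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                int conn_fd = accept(listen_fd, NULL, NULL);      /* connection is ready, accept it */
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn_fd };
                epoll_ctl(ep, EPOLL_CTL_ADD, conn_fd, &cev);      /* "tell me when this client has data" */
            } else {
                char buf[4096];
                ssize_t got = recv(fd, buf, sizeof(buf), 0);      /* data is ready, read it in */
                if (got <= 0) {
                    epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
                /* else: parse/process here, or forward upstream and register
                 * another event for the upstream reply */
            }
        }
    }
}
```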