Interview questions:

How does Nginx implement concurrency? Why doesn’t Nginx use multithreading? What are the common optimizations in Nginx? What are the possible causes of error 502?

What the interviewer is looking for

This question tests whether the candidate understands Nginx's basic principles. Most operations engineers have some working knowledge of Nginx, but very few truly understand how it works internally. Only by understanding the principles can you tune it effectively; otherwise you can only copy configurations from others, and when something goes wrong you won't know where to start.

Someone with only surface knowledge can run Nginx as a web server and host a site. A junior ops engineer might set up HTTPS and configure a reverse proxy. A mid-level engineer can define an upstream and write regex-based location rules. A veteran does performance tuning, writes ACLs, and may even modify the source code (this author admits to having no ability to change the source code).

Analysis of interview questions

1. How does Nginx achieve high concurrency?

Asynchronous, non-blocking, using epoll and lots of low-level code optimization.

If a server used one process per request, the number of concurrent requests it could handle would equal the number of processes, and normally most of those processes would just sit idle, waiting.

Nginx uses one master process and multiple worker processes.

  • The master process reads the configuration and binds the listening sockets, but does not handle requests itself; the worker processes accept and handle incoming connections.
  • The master process also monitors worker status and restarts failed workers to ensure high reliability.
  • The number of worker processes is generally set to the number of CPU cores. The number of requests an Nginx worker can handle at a time is limited only by memory, so a single worker can serve many requests concurrently.
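This process model maps directly onto configuration. A minimal sketch (the `auto` values, supported in modern Nginx releases, size and pin the pool to the detected CPU cores):

```nginx
# One master, N workers; "auto" sizes N to the detected CPU core count
worker_processes auto;

# Optionally bind each worker to a CPU core to reduce cache thrashing
worker_cpu_affinity auto;
```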

Nginx's asynchronous, non-blocking way of working takes advantage of this waiting time: a process switches to other work whenever it would otherwise block, so a small number of processes can handle a very large number of concurrent requests.

Each incoming request is handled by a worker process, but not from start to finish in one go. How far does it get? Up to the point where blocking would occur, for example after forwarding the request to an upstream (back-end) server and waiting for the response. Instead of blocking, the worker registers an event after sending the request: "notify me when the upstream returns, and I will continue". Then it moves on to other work. If another request arrives at this point, the worker can quickly handle it in the same way. Once the upstream server responds, the event fires, the worker picks the request back up, and processing continues.

2. Why doesn’t Nginx use multithreading?

Apache: creates a process or thread per connection, and each process or thread is allocated its own CPU time and memory (threads are much lighter than processes, so the worker MPM supports higher concurrency than prefork). High concurrency therefore drains server resources.

Nginx: uses single-threaded, asynchronous, non-blocking request processing based on epoll (administrators can configure the number of worker processes in the main Nginx process). It does not allocate dedicated CPU and memory per request, which saves a great deal of resources and also avoids a lot of CPU context switching. This is why Nginx supports higher concurrency.

3. What are the common optimized configurations of Nginx?

(1) worker_processes adjustment

Sets the number of worker processes Nginx spawns. A common best practice is one worker process per CPU core.

To find the number of CPU cores on the system, run:

```shell
grep processor /proc/cpuinfo | wc -l
```

(2) Maximize worker_connections

The number of clients the Nginx web server can serve simultaneously. Combined with worker_processes, it gives the maximum number of clients that can be served at the same time:

max clients = worker_processes × worker_connections

For example, 4 workers × 1024 connections gives roughly 4096 simultaneous clients.

To get the most out of Nginx, set worker_connections to the maximum number of connections a process is allowed to have open at one time, commonly 1024 by default (raising it further also requires raising the open-file limit).
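Putting the two settings together, a sketch might look like this (the values are illustrative; `worker_rlimit_nofile` raises the per-worker file-descriptor cap that connections count against):

```nginx
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 2048;      # per-worker open-file limit

events {
    worker_connections 1024;    # max simultaneous connections per worker
    multi_accept on;            # accept as many pending connections as possible at once
}
```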

(3) Enable Gzip compression

Compressing responses reduces the size of transferred files, which lowers the HTTP bandwidth used by clients and therefore improves page load speed.

An example of a recommended gzip configuration is as follows (in the http block):
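The original block is missing here; a typical gzip configuration might look like this (directive values are common illustrative choices, not the article's own):

```nginx
gzip on;
gzip_vary on;                   # add "Vary: Accept-Encoding" for caches
gzip_min_length 1024;           # skip tiny responses; compression overhead isn't worth it
gzip_comp_level 5;              # 1 (fastest) .. 9 (smallest); 4-6 is a common balance
gzip_types text/plain text/css application/json application/javascript
           text/xml application/xml image/svg+xml;
```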

(4) Enable caching for static files

Enable caching for static files to reduce bandwidth and improve performance. Add the following block so that browsers cache static assets:

```nginx
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}
```

(5) Timeouts

Keep-alive connections reduce the CPU and network overhead of repeatedly opening and closing connections.
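As an illustration, timeout-related directives might be tuned like this in the http block (the values are illustrative, not from the original article; Nginx's defaults are more generous):

```nginx
keepalive_timeout 65;           # keep idle client connections open for reuse
keepalive_requests 100;         # requests allowed per keep-alive connection
client_header_timeout 12;      # drop clients that are slow to send headers
client_body_timeout 12;        # ...or slow to send the request body
send_timeout 10;               # give up on clients that are slow to read the response
```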

(6) Disable access_logs

Access logging records every Nginx request, which consumes a lot of CPU and I/O and thus reduces Nginx performance.

To disable access logging completely:

```nginx
access_log off;
```

If access logging is required, enable access-log buffering:

```nginx
access_log /var/log/nginx/access.log main buffer=16k;
```

4. What are the possible causes of 502 error?

(1) Whether the FastCGI process is started

(2) Whether the number of FastCGI worker processes is insufficient

(3) The FastCGI execution time is too long
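For cause (3), one mitigation is to lengthen Nginx's FastCGI timeouts so slow scripts are not cut off; a sketch (raise these only as far as your application actually needs):

```nginx
fastcgi_connect_timeout 60;     # time to establish a connection to the FastCGI backend
fastcgi_send_timeout 180;       # time to transmit the request upstream
fastcgi_read_timeout 180;       # time to wait for the response
```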

(4) Insufficient FastCGI Buffer

Like Apache, Nginx buffers the upstream response on the front end; the buffer parameters can be adjusted:

```nginx
fastcgi_buffer_size 32k;
fastcgi_buffers 8 32k;
```

(5) Proxy Buffer is insufficient

If you are using proxying (proxy_pass), adjust:

```nginx
proxy_buffer_size 16k;
proxy_buffers 4 16k;
```

(6) PHP script execution takes too long

In php-fpm.conf, adjust:

```xml
<value name="request_terminate_timeout">0s</value>
```

Change 0s (unlimited) to a suitable timeout, such as 30s.