In a recent million-long-connection pressure test, four Nginx machines (32 cores, 128 GB of RAM each) repeatedly ran into OOM. The memory monitoring at the time of the problem looked as follows:

The following describes the troubleshooting process.

Description of the problem

This is a pressure test environment for sending and receiving messages over a million WebSocket long connections. The clients are several hundred JMeter machines that reach the back-end services through four Nginx instances; the simplified deployment structure is shown in the figure below.

While the million connections are merely kept alive without sending data, everything is fine and Nginx memory is stable. Once the clients start sending and receiving messages, Nginx begins to consume hundreds of megabytes of memory per second, until usage approaches 128 GB and worker processes are frequently killed by the system. Each of the 32 worker processes occupies close to 4 GB of memory. The output of dmesg -t is shown below.

[Fri Mar 13 18:46:44 2020] Out of memory: Kill process 28258 (nginx) score 30 or sacrifice child
[Fri Mar 13 18:46:44 2020] Killed process 28258 (nginx) total-vm:1092198764kB, anon-rss:3943668kB, file-rss:736kB, shmem-rss:4kB

Every time a worker process was killed and restarted, a large number of long connections were dropped, and the pressure test could not keep ramping up the data volume.

Troubleshooting process

Many of the ESTABLISHED connections on the Nginx side had a large Send-Q backlog, and the Recv-Q on the client side was also piling up. The ss output on the Nginx side is shown below.

State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port
ESTAB   0        792024   1.1.1.1:80           2.2.2.2:50664
...
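
For reference, a view like this can be produced by filtering ss to established sockets on the listening port; a minimal sketch (the exact invocation used during the test was not recorded):

ss -nt state established '( sport = :80 )'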

Zero-window advertisements also show up from time to time in packet captures on the JMeter client side, as shown below.

This gives a rough direction: the first suspicion is that the JMeter clients cannot process data fast enough, so a large number of messages pile up in the Nginx instances in the middle.

To verify this idea, I tried to dump the Nginx process memory. Since a dump is likely to fail later on when memory usage is already very high, it was taken shortly after the memory started to rise.

First, pmap is used to look at the memory layout of an arbitrary worker process, pid 4199 in this case, sorted by resident size. The output of the pmap command is shown below.

pmap -x  4199 | sort -k 3 -n -r

00007f2340539000  475240  461696  461696 rw---   [ anon ]
...

Then cat /proc/4199/smaps | grep 7f2340539000 is used to find the start and end addresses of this memory region, as shown below.

cat /proc/4199/smaps  | grep 7f2340539000

7f2340539000-7f235d553000 rw-p 00000000 00:00 0

Then use GDB to connect to the process and dump the memory.

gdb -pid 4199

dump memory memory.dump 0x7f2340539000 0x7f235d553000

Then the strings command is used to look at the readable strings in the dump file, where a large number of requests and responses can be seen.
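
That step is just the dump file produced by gdb above fed through strings, for example:

strings memory.dump | less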

So the memory growth is caused by buffering a large number of messages. I then looked at the Nginx proxy configuration:

location / {
    proxy_pass http://xxx;
    proxy_set_header    X-Forwarded-Url  "$scheme://$host$request_uri";
    proxy_redirect      off;
    proxy_http_version  1.1;
    proxy_set_header    Upgrade $http_upgrade;
    proxy_set_header    Connection "upgrade";
    proxy_set_header    Cookie $http_cookie;
    proxy_set_header    Host $host;
    proxy_set_header    X-Forwarded-Proto $scheme;
    proxy_set_header    X-Real-IP $remote_addr;
    proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size        512M;
    client_body_buffer_size     64M;
    proxy_connect_timeout       900;
    proxy_send_timeout          900;
    proxy_read_timeout          900;
    proxy_buffer_size        64M;
    proxy_buffers            64 16M;
    proxy_busy_buffers_size        256M;
    proxy_temp_file_write_size    512M;
}

You can see that proxy_buffers is set to an extremely large value. Next, let's reproduce the effect that a speed mismatch between upstream and downstream has on Nginx's memory footprint.
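
A rough per-connection budget makes the risk concrete: with proxy_buffers 64 16M, a single proxied connection may allocate up to 64 × 16 MB = 1024 MB of response buffers in memory, on top of the 64 MB proxy_buffer_size, so a little over a hundred slow connections is already enough to exhaust 128 GB of RAM.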

Simulating the Nginx memory growth

The idea is to emulate a client that reads data very slowly, with a back-end server behind Nginx that returns data quickly, and then watch how the Nginx memory changes.

The slow-reading client is written in Go; it sends a plain HTTP request over a TCP connection, as shown below.

package main

import (
    "bufio"
    "fmt"
    "net"
    "time"
)

func main() {
    // Connect to the Nginx under test and request a large file.
    conn, err := net.Dial("tcp", "10.211.55.10:80")
    if err != nil {
        panic(err)
    }
    text := "GET /demo.mp4 HTTP/1.1\r\nHost: ya.test.me\r\n\r\n"
    fmt.Fprintf(conn, text)

    // Read the response extremely slowly: one byte every three seconds.
    reader := bufio.NewReader(conn)
    for {
        _, _ = reader.ReadByte()
        time.Sleep(time.Second * 3)
        println("read one byte")
    }
}

pidstat is enabled on the Nginx machine under test to monitor the memory usage of the worker process:

pidstat -p pid -r 1 1000

Running the Go code above, the Nginx worker process memory changes as shown below.

04:12:13 is when the Go client starts. You can see that within a very short time, the Nginx worker's memory footprint climbs to 464136 kB (close to 450 MB) and stays there for a long time.

It is also worth noting that proxy_buffers is a per-connection setting, so memory usage keeps growing as more such connections arrive. Below is the effect on Nginx memory of running two of these Go clients at the same time.

You can see that with two slow clients connected, memory usage has grown to over 900 MB.

The solution

With millions of connections to support, you need to be careful about resource quotas for individual connections. One of the quickest changes is to set proxy_buffering to off, as shown below.

proxy_buffering off;

In actual measurements, after applying this change in the pressure test environment and also reducing proxy_buffer_size, memory stabilized at about 20 GB and no longer spiked. Afterwards, proxy_buffering can be turned back on and proxy_buffers tuned to strike a better balance between memory consumption and performance.
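
As a sketch of that second step, re-enabling buffering but with far more modest per-connection buffers (the values below are purely illustrative, not the ones used in the test):

proxy_buffering     on;
proxy_buffer_size   64k;
proxy_buffers       8 64k;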

Repeat the test as shown below.

You can see that this time memory only increases by about 64 MB. Why 64 MB? The Nginx documentation for proxy_buffering (nginx.org/en/docs/htt… ) explains:

When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives. If the whole response does not fit into memory, a part of it can be saved to a temporary file on the disk. Writing to temporary files is controlled by the proxy_max_temp_file_size and proxy_temp_file_write_size directives.

When buffering is disabled, the response is passed to a client synchronously, immediately as it is received. nginx will not try to read the whole response from the proxied server. The maximum size of the data that nginx can receive from the server at a time is set by the proxy_buffer_size directive.

As you can see, when proxy_buffering is on, Nginx reads the content returned by the back-end server as fast as it can and stores it in its own buffers; the in-memory part per connection is bounded roughly by proxy_buffer_size plus the total size configured by proxy_buffers (number × size).

If the response returned by the back end is too large to fit into these buffers, it is spilled to temporary files on disk, governed by the proxy_max_temp_file_size and proxy_temp_file_write_size directives, which are not covered here.

When proxy_buffering is off, Nginx no longer reads ahead from the proxied server as fast as it can; instead it reads at most proxy_buffer_size of data at a time and passes it on to the client synchronously, which matches the 64 MB increase observed above.

Nginx's buffering mechanism exists precisely to absorb a speed mismatch between the sending and receiving ends. Without buffering, data is forwarded directly from the back-end service to the client, so buffering can be turned off if clients receive data fast enough. With massive numbers of connections, however, resource consumption has to be taken into account as well: a deliberately slow client can tie up a lot of server resources at very little cost.

In fact, this is a typical problem in non-blocking programming: receiving data does not block sending data, and sending data does not block receiving data. If the two sides of Nginx send and receive at very different speeds and the buffers are set too large, problems appear.

Nginx source code analysis

The code that reads the back-end response into local buffers lives in the ngx_event_pipe_read_upstream method in src/event/ngx_event_pipe.c. This method eventually creates the in-memory buffers by calling ngx_create_temp_buf. The number of buffers created and the size of each buffer are determined by p->bufs.num (number of buffers) and p->bufs.size (size of each buffer), which are exactly the proxy_buffers values specified in the configuration file. A simplified excerpt of this code is shown below.

static ngx_int_t
ngx_event_pipe_read_upstream(ngx_event_pipe_t *p)
{
    for ( ;; ) {

        if (p->free_raw_bufs) {
            /* ... */

        } else if (p->allocated < p->bufs.num) {
            /* p->allocated: buffers allocated so far,
             * p->bufs.num:  maximum number of buffers (proxy_buffers) */

            /* allocate a new buf if it's still allowed */
            b = ngx_create_temp_buf(p->pool, p->bufs.size);  /* each buffer is p->bufs.size bytes */
            if (b == NULL) {
                return NGX_ABORT;
            }

            p->allocated++;
        }
    }
}
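
One way to watch these allocations as they happen is to attach gdb to a worker built with debug symbols and break on ngx_create_temp_buf; a minimal sketch, reusing the worker pid from earlier:

gdb -p 4199
(gdb) break ngx_create_temp_buf
(gdb) continue

Each time the breakpoint is hit, printing size at that frame should show the configured 16 MB buffer size.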

The interface for Nginx source debugging is shown below.

Afterword

Some auxiliary techniques were also used along the way, such as tracing memory allocation and release with the strace and SystemTap tools, which are not described here. These tools are invaluable for analyzing black-box programs.
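
As an illustration (not the exact invocation used here), attaching strace to a worker and filtering on memory-related system calls shows when it requests memory from the kernel:

strace -f -e trace=mmap,munmap,brk -p 4199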

In addition, an improperly set worker_connections parameter at one point caused Nginx to consume 14 GB of memory right after startup. Problems like these are hard to discover without a very large number of connections.

Finally, understanding the underlying principles is an essential skill, and tuning is an art. The content above may contain mistakes; treat it mainly as a record of the troubleshooting approach.

Three things to watch ❤️

If you find this article helpful, I’d like to invite you to do three small favors for me:

  1. Like and share. Your likes and comments are the motivation for my writing.

  2. Follow the public account “Java rotten pigskin”, where original content is shared from time to time.

  3. Stay tuned for follow-up articles 🚀

Author: Master Zhang who dug the pit

Reference: club.perfma.com/article/433…