
Tuning Nginx for Performance

Nginx is well known as a high-performance load balancer, cache, and web server, powering over 40% of the busiest websites in the world. The default Nginx and Linux configurations perform well for most use cases, but some tuning is necessary to achieve optimal performance.

This article discusses some of the Nginx and Linux settings worth considering when tuning a system. There are many such settings, but here we cover only those recommended for most users to adjust. Settings not covered should be changed only by those with a deep understanding of Nginx and Linux, or at the direction of the Nginx professional services team.

The Nginx professional services team has worked with some of the world's busiest websites to tune Nginx for maximum performance, and can support any customer who needs to get the most out of their system.

Introduction

A basic understanding of Nginx architecture and configuration concepts is assumed. This article does not attempt to repeat the Nginx documentation; instead it outlines the various configuration options and provides links to the relevant documentation.

When tuning, a good rule of thumb is to change one configuration item at a time, and if there is no performance improvement, revert to the original value.

We’ll start with Linux tuning, because some values affect what can be used in an Nginx configuration.

Linux configuration

Modern Linux kernels (2.6+) do a good job of sizing most settings, but there are some you may want to change. If an operating system limit is set too low, you will see error messages in the kernel log telling you which settings need adjusting. There are many Linux configuration items; this article mentions only those most likely to need tuning under normal workloads. For details on these settings, refer to the Linux documentation.

The backlog queue

The following settings relate to connections and how they are queued. If you have a high rate of incoming connections and uneven performance, for example some connections appear to stall, changing these settings may help.

  • net.core.somaxconn This sets the size of the queue of connections waiting to be accepted by Nginx. Because Nginx accepts connections very quickly, this value usually does not need to be very large, but the default is quite low, so increasing it is a good idea for high-traffic sites. If the setting is too low you will see error messages in the kernel log; increase the value until the errors stop. Note: if you set it to a value greater than 512, also change the backlog parameter of the Nginx listen directive to match.

  • net.core.netdev_max_backlog This sets the size of the queue in which packets are buffered by the network card before being handed to the CPU for processing. For machines with high bandwidth this value may need to be increased. Check the documentation for your network card for advice, or check the kernel log for errors.
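As a sketch, both settings can be raised in /etc/sysctl.conf; the values below are illustrative assumptions, not recommendations, and should be tuned against your own kernel logs and traffic:

```
# /etc/sysctl.conf -- illustrative values, tune for your workload
net.core.somaxconn = 1024           # queue of connections waiting to be accepted
net.core.netdev_max_backlog = 2000  # packets buffered by the NIC before CPU handoff
```

Apply the changes with sysctl -p. Because this somaxconn is greater than 512, the matching Nginx side would be listen 80 backlog=1024; in the server block.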

File descriptors

File descriptors are operating system resources used for, among other things, connections and open files. Nginx can use up to two file descriptors per connection. For example, when Nginx acts as a proxy, one is used for the client connection and another for the connection to the proxied server, although if HTTP keepalives are used far fewer descriptors per connection are consumed. For a system serving a large number of connections, the following settings may need to be adjusted:

  • fs.file-max This is the system-wide file descriptor limit, set via sysctl.

  • nofile This is the user-level file descriptor limit, configured in the /etc/security/limits.conf file.
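A hedged sketch of raising both limits; the user name and numbers are assumptions for illustration:

```
# /etc/sysctl.conf -- system-wide limit, illustrative value
fs.file-max = 200000
```

and in /etc/security/limits.conf:

```
# user-level limit for the (assumed) nginx user, illustrative values
nginx  soft  nofile  65536
nginx  hard  nofile  65536
```

Nginx's own worker_rlimit_nofile directive can also raise the per-worker limit without touching limits.conf.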

Ephemeral ports

When Nginx is used as a proxy, each connection to an upstream server uses a temporary, or ephemeral, port.

  • net.ipv4.ip_local_port_range This specifies the start and end of the range of ports available for use. If you see that you are running out of ports, increase the range. A common setting is 1024 to 65000.

  • net.ipv4.tcp_fin_timeout This specifies how long after a port is released it can be reused by another connection. It usually defaults to 60 seconds, but can generally be safely reduced to 30 or even 15 seconds.
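A sketch of both settings in /etc/sysctl.conf, using the common values mentioned above:

```
# /etc/sysctl.conf -- ephemeral port tuning
net.ipv4.ip_local_port_range = 1024 65000  # widen the usable port range
net.ipv4.tcp_fin_timeout = 30              # reclaim ports after 30s instead of 60s
```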

Nginx configuration

Here are some Nginx directives that can affect performance. As mentioned earlier, we discuss only those directives recommended for most users to adjust. Any directive not mentioned here should not be changed without guidance from the Nginx team.

Worker processes

Nginx can run multiple worker processes, each capable of handling a large number of connections. You can control the number of worker processes and how they handle connections with the following directives:

  • worker_processes This controls the number of worker processes Nginx runs. In most cases, one worker process per CPU core works well, and this directive can be set to auto to achieve that. There are cases where you may want to increase this value, such as when the worker processes have to do a lot of disk I/O. The default is 1.

  • worker_connections This is the maximum number of connections each worker process can handle at the same time. The default is 512, but most systems can handle a much larger value. The right setting depends on the server hardware and the nature of the traffic, and can be found through testing.
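The two directives above can be sketched in nginx.conf as follows; the worker_connections value is an illustrative assumption that should be found through testing:

```nginx
# nginx.conf (main context) -- illustrative values
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 2048;    # per-worker limit; the default is 512
}
```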

Keepalives

Persistent (keepalive) connections can have a significant impact on performance by reducing the CPU and network overhead of opening and closing connections. Nginx terminates all client connections and maintains separate connections to upstream servers, and it supports keepalives for both. The following directives relate to client keepalives:

  • keepalive_requests This sets how many requests a client can send over a single keepalive connection. The default value is 100, and it can be set much higher, which is useful in testing scenarios where a load-generation tool sends many requests from a single client.

  • keepalive_timeout This sets how long an idle keepalive connection remains open.
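A minimal sketch of the client-side keepalive directives; the values are assumptions for illustration, not recommendations:

```nginx
http {
    keepalive_requests 1000;    # requests allowed per client connection (default 100)
    keepalive_timeout  75s;     # how long an idle client connection stays open
}
```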

The following directive relates to upstream keepalives:

  • keepalive This sets the number of idle keepalive connections to an upstream server that each worker process keeps open. This directive has no default value.

To enable keepalive connections to upstream servers, you must also add the following directives:

  • proxy_http_version 1.1;

  • proxy_set_header Connection "";
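Putting the upstream keepalive pieces together, a sketch might look like this; the upstream name and address are hypothetical:

```nginx
upstream backend {
    server 192.0.2.10:8080;     # hypothetical upstream server
    keepalive 16;               # idle keepalive connections kept per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;         # keepalive requires HTTP/1.1 to the upstream
        proxy_set_header Connection ""; # clear the Connection header from the client
    }
}
```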

Access logging

Logging each request costs CPU and I/O cycles. One way to reduce this impact is to enable access log buffering, which causes Nginx to buffer a series of log entries and write them to the file in one operation rather than individually.

Access log buffering can be turned on with the buffer=size option of the access_log directive, which sets the size of the buffer to use. The flush=time option tells Nginx how long entries may remain in the buffer before being written to the file.

With these two options defined, Nginx writes the buffered entries to the log file when the next entry does not fit in the buffer, or when the buffered entries are older than the time specified by the flush parameter. Buffered entries are also written out when a worker process reopens or closes the log file. Access logging can also be disabled completely.
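For example, buffered access logging might be configured like this; the buffer and flush values are illustrative assumptions:

```nginx
# write entries in 32 KB batches, or at least every 5 seconds
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

# or disable access logging entirely:
# access_log off;
```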

Sendfile

Sendfile is an operating system feature that can be enabled in Nginx. It provides faster TCP data transfer by copying data from one file descriptor to another inside the kernel, often achieving zero copy. Nginx can use this mechanism to write cached or on-disk content to a socket without context switching between kernel space and user space, which is very fast and uses little CPU. Because the data never touches user space, filters that need to access the data cannot be inserted into the processing chain, so you cannot use any Nginx filter that changes the content, such as the gzip filter. Nginx does not enable this mechanism by default.
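Enabling it is a one-line change; tcp_nopush is often paired with it, though whether that helps depends on the workload:

```nginx
http {
    sendfile on;       # kernel-space file-to-socket copies
    tcp_nopush on;     # send response headers and the start of the file in one packet
}
```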

Limits

Nginx and Nginx Plus allow you to set limits that control client resource consumption, which helps protect system performance as well as user experience and security. Here are some of the relevant directives:

  • limit_conn / limit_conn_zone These directives can be used to limit the number of connections Nginx allows, for example from a single client IP address. This prevents a single client from opening too many connections and consuming too many resources.

  • limit_rate This limits the bandwidth a client is allowed to use on a single connection. This prevents some clients from overloading the system and helps guarantee quality of service for all clients.

  • limit_req / limit_req_zone These directives can be used to limit the rate at which Nginx processes requests. Together with limit_rate, they prevent certain clients from overloading the system and help guarantee quality of service for all clients. They can also improve security, especially for login pages, by limiting the request rate to something appropriate for a human user and slowing down scripts that try to brute-force your application.

  • max_conns This limits the maximum number of simultaneous connections to a single server in an upstream group. This prevents the upstream servers from being overloaded. The default value is 0, meaning no limit.

  • queue If max_conns is set, the queue directive determines what happens when a request cannot be processed because no server in the upstream group is available or because those servers have reached the max_conns limit. This directive sets how many requests will be queued and for how long. If it is not set, no queuing occurs.
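A combined sketch of these limiting directives; the zone names, rates, sizes, and addresses are all illustrative assumptions:

```nginx
http {
    # 10 MB shared zones keyed by client IP
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=login:10m rate=2r/s;

    upstream backend {
        server 192.0.2.10:8080 max_conns=100;  # cap concurrent upstream connections
        queue 50 timeout=30s;                  # Nginx Plus only: queue excess requests
    }

    server {
        limit_conn perip 10;    # at most 10 connections per client IP
        limit_rate 500k;        # bandwidth cap per connection

        location /login {
            limit_req zone=login burst=5;  # throttle login attempts
        }
        location / {
            proxy_pass http://backend;
        }
    }
}
```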

Other considerations

Nginx also has several features that can be used to improve the performance of Web applications. These features don’t often come up in tuning discussions, but they are worth mentioning because their impact can be considerable. We will discuss two of these features.

Caching

For an Nginx instance that load balances a group of web or application servers, enabling caching can significantly reduce response times and the load on the back-end servers. Caching is a topic of its own and will not be covered here.

Compression

Compressing responses can greatly reduce their size and the bandwidth they consume. However, compression costs CPU, so it is best used when the bandwidth savings are worthwhile. Note that compression should not be enabled for content that is already compressed, such as JPEG images. For more information about Nginx compression configuration, see the Nginx Admin Guide – Compression and Decompression.
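A minimal gzip sketch; the type list and size threshold are illustrative assumptions:

```nginx
http {
    gzip on;
    gzip_types text/css application/json application/javascript;  # text/html is always compressed
    gzip_min_length 1024;   # skip responses too small to benefit
}
```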

Link: http://www.jianshu.com/p/024b33d1a1a1
