Install nginx

# macOS
brew install nginx

# Linux (CentOS/RHEL)
yum install epel-release -y
# yum list all | grep nginx
yum install nginx -y

# Docker (nginx:1.17.6)
docker run --name nginx -d -p 8080:80 nginx:1.17.6

# Check the installation directory
rpm -ql nginx

# start
nginx

# check the version and build parameters
nginx -V

# reload the configuration
nginx -s reload

# stop
nginx -s stop
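
After installation, a quick sanity check (a minimal sketch; it assumes nginx is listening on port 80, the usual Linux default, while Homebrew's default config uses 8080):

# test the configuration file syntax
nginx -t

# the default welcome page should answer locally
curl -I http://127.0.0.1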

Main context configuration

# number of worker processes nginx starts. auto starts one worker per CPU core
# if hyperthreading is enabled, logical (hyperthreaded) cores are counted as well
worker_processes number | auto

# bind each worker process to a specific CPU to avoid CPU cache invalidation
# Do not consider hyperthreading, because hyperthreading still shares a CPU
# 4 core CPU configuration
worker_cpu_affinity 0001 0010 0100 1000
# 8-core CPU configuration
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000

# maximum number of file handles a worker process can open
worker_rlimit_nofile 65535

# set the nice priority of worker processes so that the CPU scheduler favors nginx
# The default priority of the process is 120. Setting the priority to -10 means that the priority of the worker process is 120+(-10)=110
# range is [-20, 20]
worker_priority -10

# timeout for the graceful shutdown of a worker process; when it expires, remaining connections are closed forcibly
worker_shutdown_timeout 2s

# maximum size of the core file written when a worker process terminates abnormally, used for post-mortem analysis
worker_rlimit_core 10M
# directory where core files are written
working_directory /opt/nginx/tmp

# timer resolution used inside worker processes. The larger the interval, the fewer gettimeofday() system calls and the better the performance
timer_resolution 100ms
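
Putting the directives above together, here is a minimal sketch of the main (top-level) context of nginx.conf; the values are only illustrative, the core-file directory is the example path used above, and note that in the real file every directive ends with a semicolon:

worker_processes auto;
worker_cpu_affinity 0001 0010 0100 1000;   # example for a 4-core CPU
worker_rlimit_nofile 65535;
worker_priority -10;
worker_shutdown_timeout 2s;
worker_rlimit_core 10M;
working_directory /opt/nginx/tmp;
timer_resolution 100ms;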

events context configuration

# maximum number of concurrent connections each worker process can handle. Default: 512
# if worker_rlimit_nofile is set, worker_connections should not exceed worker_rlimit_nofile
worker_connections worker_rlimit_nofile/worker_processes

# whether to enable the accept mutex (load-balancing lock). When enabled, worker processes take turns accepting new connections instead of all competing for the same connection
accept_mutex on

# if a worker fails to acquire the accept mutex, it waits this long before trying again
accept_mutex_delay 200ms
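
As a worked example of the formula above: with worker_rlimit_nofile 65535 and 8 worker processes, each worker gets roughly 65535 / 8 ≈ 8191 connections. A minimal sketch of the events context with illustrative values:

events {
    worker_connections 8191;   # ≈ 65535 / 8 workers
    accept_mutex on;
    accept_mutex_delay 200ms;
}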

Server kernel configuration (sysctl)

# maximum number of packets queued on the network card's input side before the kernel processes them
echo net.core.netdev_max_backlog=65535 | sudo tee -a /etc/sysctl.conf

# maximum length of the TCP SYN (half-open connection) queue
echo net.ipv4.tcp_max_syn_backlog=65535 | sudo tee -a /etc/sysctl.conf

# number of times a SYN is retransmitted for outgoing connections (e.g. nginx connecting to an upstream) when no reply is received
# TCP has a timeout-retransmission mechanism; setting the retry count too high hurts performance
echo net.ipv4.tcp_syn_retries=1 | sudo tee -a /etc/sysctl.conf

# number of times the server retransmits its SYN+ACK reply when it does not receive the final ACK from the client
echo net.ipv4.tcp_synack_retries=1 | sudo tee -a /etc/sysctl.conf

# maximum length of the TCP ACCEPT (fully established connection) queue
# check the current somaxconn value: sysctl -a | grep somaxconn
echo net.core.somaxconn=65535 | sudo tee -a /etc/sysctl.conf

SYN queue: half-connected queue; ACCEPT queue: full connection queue.

  • Half-connection queue: If the server receives a SYN packet from the client and replies with a SYN+ACK, but does not receive an ACK from the client, the server puts the connection information into the half-connection queue.
  • Full connection queue: holds connections for which the server has completed the three-way handshake but the application has not yet taken them via the accept() system call.
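
To check whether these two queues are actually overflowing, the standard tools can be used (a quick sketch; it assumes ss from iproute2 and netstat are installed):

# for LISTEN sockets, Send-Q shows the accept-queue limit and Recv-Q its current length
ss -lnt
# cumulative counters for SYN drops and accept-queue overflows
netstat -s | grep -i listen
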
# enable SYN cookies to protect against SYN floods when the SYN queue overflows
echo net.ipv4.tcp_syncookies=1 | sudo tee -a /etc/sysctl.conf
# enable TCP Fast Open (3 = enable on both the client and the server side)
echo net.ipv4.tcp_fastopen=3 | sudo tee -a /etc/sysctl.conf

# apply the modified configuration
sysctl -p

TCP Fast Open optimizes the TCP three-way handshake: 1. request data is sent together with the third handshake packet, saving one round trip; 2. during the first handshake the server returns a cookie to the client, and on later connections the client sends that cookie along with the request data, so the data no longer has to wait for a full three-way handshake.
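
On the nginx side, the larger backlog and Fast Open still have to be requested on the listening socket. A hedged sketch, assuming nginx was built with TCP Fast Open support and the somaxconn value above is in effect:

server {
    # fastopen=N sets the Fast Open queue length; backlog should not exceed net.core.somaxconn
    listen 80 fastopen=256 backlog=65535;
    location / {
        return 200 "ok\n";
    }
}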

See the official nginx documentation for more configuration options.

This article will continue to be updated. If you have any good configuration tips that are not mentioned here, please leave a comment below. Thanks!