1. Linux kernel parameter notes
| Name | Default | Recommended | Description |
| --- | --- | --- | --- |
| tcp_syn_retries | 5 | 1 | Number of SYN retransmissions the kernel sends for a new outbound connection before giving up. Should not exceed 255; the default of 5 corresponds to roughly 180 seconds. (On a heavily loaded network with good physical connectivity this is too high and can be lowered to 2. This applies only to outgoing connections; incoming connections are governed by tcp_retries1.) |
| tcp_synack_retries | 5 | 1 | For a remote SYN request, the kernel replies with a SYN+ACK to acknowledge the SYN (the second step of the three-way handshake). This setting determines how many SYN+ACKs the kernel sends before abandoning the connection. Should not exceed 255; the default of 5 corresponds to roughly 180 seconds. |
| tcp_keepalive_time | 7200 | 600 | Interval (seconds) at which TCP sends keepalive probes to confirm that a connection is still valid. Guards against connections that are established but never exchange data. |
| tcp_keepalive_probes | 9 | 3 | Number of unanswered keepalive probes TCP sends before declaring the connection dead. |
| tcp_keepalive_intvl | 75 | 15 | Interval (seconds) between retransmitted keepalive probes when no response arrives. The default of 75 seconds is rather high for most uses and can be reduced; web servers in particular should lower it, and 15 is a reasonable value. |
| tcp_retries1 | 3 | 3 | Number of retries before a TCP connection request is aborted. The RFC-specified minimum is 3. |
| tcp_retries2 | 15 | 5 | Number of retries before an active (established) TCP connection is dropped. The default of 15 corresponds to 13 to 30 minutes depending on the RTO (RFC 1122 requires more than 100 seconds). (Can be reduced to suit the network; the author lowered it to 5 on his network.) |
| tcp_orphan_retries | 7 | 3 | Number of retries before a connection closed on the local end is dropped. The default of 7 corresponds to 50 seconds to 16 minutes depending on the RTO. On a heavily loaded web server consider lowering it, since such sockets are costly. Orphans are also subject to tcp_max_orphans. (Lowering this is quite beneficial when doing NAT; the author lowered it to 3 in his environment.) |
| tcp_fin_timeout | 60 | 2 | How long a socket that was closed on the local end remains in FIN-WAIT-2. The peer may break the connection, never finish closing it, or die unexpectedly. The default is 60 seconds. |
| tcp_max_tw_buckets | 180000 | 36000 | Maximum number of TIME-WAIT sockets the system handles at once. Beyond this, TIME-WAIT sockets are destroyed immediately and a warning is printed. The limit exists purely to fend off simple DoS attacks; you can raise it (and perhaps add memory) if network conditions demand more than the default. (Raising it somewhat is in fact advisable when doing NAT.) |
| tcp_tw_recycle | 0 | 1 | Enables fast recycling of TIME-WAIT sockets. Do not change this unless advised by a technical expert. (The author recommends enabling it when doing NAT.) |
| tcp_tw_reuse | 0 | 1 | Whether a TIME-WAIT socket may be reused for a new TCP connection. (Useful when a quickly restarted service reports that its port is already in use.) |
| tcp_max_orphans | 8192 | 32768 | Maximum number of TCP sockets not attached to any process that the system will handle. Beyond this, such connections are reset immediately and a warning is printed. The limit exists purely to fend off simple DoS attacks; do not rely on it or lower it artificially, and raise it if memory allows. (Red Hat AS sets it to 32768, though many firewall-oriented setups recommend lowering it to 2000.) |
| tcp_abort_on_overflow | 0 | 0 | When the listening daemon is too busy to accept new connections, send a reset to the peer. The default is off, so a connection lost to an accidental burst of traffic will recover. Enable this only if you are certain the daemon genuinely cannot keep up, since it affects clients. (For overloaded services such as sendmail or Apache it makes clients abort quickly, giving the server room to handle existing connections, so many firewall-oriented setups recommend enabling it.) |
| tcp_syncookies | 0 | 1 | Only effective if the kernel was compiled with CONFIG_SYN_COOKIES. Sends syncookies to the peer when the SYN wait queue overflows, to defend against SYN flood attacks. |
| tcp_stdurg | 0 | 0 | Use the host-requirements interpretation of the TCP urgent-pointer field. Most hosts use the older BSD interpretation, so enabling this on Linux may break communication with them. |
| tcp_max_syn_backlog | 1024 | 16384 | Maximum number of queued connection requests that have not yet been acknowledged by the client. The default is 1024 on systems with more than 128 MB of memory and 128 below that. If the server is regularly overloaded, try raising it. Warning: if you set this above 1024, you should also change TCP_SYNQ_HSIZE in include/net/tcp.h so that TCP_SYNQ_HSIZE*16 does not exceed this value, and rebuild the kernel. (A SYN flood abuses the TCP handshake: the attacker sends large numbers of SYN packets with forged source IPs, leaving half-open connections that exhaust the target's socket queue so it can no longer accept new connections. Modern Unix systems commonly buffer, rather than resolve, such attacks with a dual-queue design: a main queue for fully established connections serving connect() and accept(), and a separate queue for half-open connections. Combined with other kernel measures such as syncookies and caches, this dual-queue handling effectively mitigates small SYN floods, as has been proven in practice.) |
| tcp_window_scaling | 1 | 1 | Whether the TCP sliding-window size is variable (boolean: 1 = variable, 0 = fixed). A basic TCP window is at most 65535 bytes, which may be too small for high-speed networks; enabling this lets the sliding window grow by several orders of magnitude, improving data-transfer capacity (RFC 1323). (On an ordinary 100 Mb network the scaling is unneeded, so setting it to 0 saves a little overhead if you are not on a high-speed network.) |
| tcp_timestamps | 1 | 1 | Timestamps are used, among other things, to guard against wrapped sequence numbers: on a 1 Gb link, an old sequence number from a previous wrap might otherwise look current, and the timestamp marks it as an "old packet". (This also enables RTT to be measured more precisely than via retransmission timeouts (RFC 1323); enable it for better performance.) |
| tcp_sack | 1 | 1 | Enables Selective Acknowledgment (SACK), which identifies the specific missing segments and so speeds recovery: out-of-order segments are acknowledged selectively, letting the sender retransmit only what is missing, which improves performance. (Enable for WAN traffic, though it increases CPU usage.) |
| tcp_fack | 1 | 1 | Enables FACK congestion avoidance and fast retransmission. (Note: has no effect when tcp_sack is 0, even if set to 1.) |
| tcp_dsack | 1 | 1 | Allows TCP to send "duplicate" SACKs (D-SACK). |
| tcp_ecn | 0 | 0 | TCP Explicit Congestion Notification (ECN). |
| tcp_reordering | 3 | 6 | Maximum amount of reordering tolerated in a TCP stream before segments are assumed lost. (Generally worth raising slightly, e.g. to 5.) |
| tcp_retrans_collapse | 1 | 0 | Provides bug-to-bug compatibility with certain broken printers. (Generally unnecessary; can be turned off.) |
| tcp_wmem (min default max) | 4096 16384 131072 | 8192 131072 16777216 | min: memory reserved for each TCP socket's send buffer; every TCP socket is entitled to it. Default 4096 (4K). default: amount of send-buffer memory a TCP socket uses by default; this value overrides net.core.wmem_default as used by other protocols and is generally lower than it. Default 16384 (16K). max: maximum send-buffer memory for a TCP socket. It does not override net.core.wmem_max, and a "static" SO_SNDBUF selection is not affected by it. Default 131072 (128K). (For servers, raising this helps data sending; the author uses 51200 131072 204800.) |
| tcp_rmem (min default max) | 4096 87380 174760 | 32768 131072 16777216 | Receive-buffer settings, analogous to tcp_wmem. |
| tcp_mem (min default max) | computed from system memory | 786432 1048576 1572864 | low: below this number of memory pages, TCP does not consider releasing memory; there is no memory pressure below it. (Ideally this matches the second value of tcp_wmem: default buffer size times the maximum number of concurrent connections divided by the page size, e.g. 131072 * 300 / 4096.) pressure: above this number of pages, TCP tries to stabilize its memory usage and enters pressure mode, leaving it once usage drops below low. (Ideally the maximum total buffer size TCP may use, e.g. 204800 * 300 / 4096.) high: the number of pages allowed for queuing datagrams across all TCP sockets; beyond it, new TCP connections are refused, so do not set it too conservatively (e.g. 512000 * 300 / 4096). The value given here is generous: it can handle many connections, 2.5 times the expected load, or let existing connections transmit 2.5 times the data. (The author uses 192000 300000 732000.) These values are normally computed at boot from the amount of system memory. |
| tcp_app_win | 31 | 31 | Reserves max(window/2^tcp_app_win, MSS) of the window for application buffering. 0 means no reservation. |
| tcp_adv_win_scale | 2 | 2 | Computes the buffering overhead as bytes/2^tcp_adv_win_scale if tcp_adv_win_scale > 0, or bytes - bytes/2^(-tcp_adv_win_scale) if tcp_adv_win_scale <= 0. |
| tcp_low_latency | 0 | 0 | Makes the TCP/IP stack favor low latency over higher throughput; normally disabled. (Helpful when building a Beowulf cluster.) |
| tcp_westwood | 0 | 0 | Enables the sender-side Westwood congestion-control algorithm, which maintains a throughput estimate and tries to optimize overall bandwidth utilization; enable for WAN traffic. |
| tcp_bic | 0 | 0 | Enables Binary Increase Congestion control (BIC) for fast long-distance networks, allowing better use of gigabit-speed links; enable for WAN traffic. |
| ip_forward | 0 | – | IP forwarding; NAT requires it, so write 1 here. |
| ip_local_port_range (min max) | 32768 61000 | 1024 65000 | Range of local ports used for outbound connections. The default range is small; it is also used indirectly to size the NAT table. |
| ip_conntrack_max | 65535 | 65535 | Maximum number of IPv4 connections the system tracks; the default 65536 is in effect the theoretical maximum. The value depends on memory size: with 128 MB of memory the maximum is 8192; with more than 1 GB the default is 65536. |
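Two of the calculations in the table above can be checked with shell arithmetic: the total time to detect a dead peer via keepalives (tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl), and the tcp_mem page counts sketched in its description. The figure of 300 concurrent connections and the 4096-byte page size come from the table's own example and are assumptions, not universal constants.

```shell
# Dead-peer detection time with keepalives:
#   tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl
echo $((7200 + 9 * 75))    # defaults: 7875 seconds (over two hours)
echo $((600 + 3 * 15))     # recommended values: 645 seconds

# tcp_mem page counts from the description above
# (buffer bytes * 300 assumed concurrent connections / 4096-byte pages):
echo $((131072 * 300 / 4096))   # low:      9600 pages
echo $((204800 * 300 / 4096))   # pressure: 15000 pages
echo $((512000 * 300 / 4096))   # high:     37500 pages
```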
| Name | Default | Recommended | Description |
| --- | --- | --- | --- |
| ip_conntrack_max | 65536 | 65536 | Maximum number of IPv4 connections the system tracks; the default 65536 is in effect the theoretical maximum. The value depends on memory size: with 128 MB of memory the maximum is 8192; with more than 1 GB the default is 65536. This value is bounded by /proc/sys/net/ipv4/ip_conntrack_max. |
| ip_conntrack_tcp_timeout_established | 432000 | 180 | Timeout for established TCP connections in the conntrack table; the default of 432000 seconds is 5 days. Impact: if too large, connections that may never be used again linger in memory and tie up a great many conntrack entries, which under NAT can lead to "ip_conntrack: table full". Suggestion: if the conntrack table is small relative to the host's NAT load, reduce this value so connections expire sooner and entries stay available; if resources are not tight, leave it alone. |
| ip_conntrack_tcp_timeout_time_wait | 120 | 120 | Timeout for the TIME-WAIT state; the entry is cleared once it expires. |
| ip_conntrack_tcp_timeout_close_wait | 60 | 60 | Timeout for the CLOSE-WAIT state; the entry is cleared once it expires. |
| ip_conntrack_tcp_timeout_fin_wait | 120 | 120 | Timeout for the FIN-WAIT state; the entry is cleared once it expires. |
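The memory-based figures in the ip_conntrack_max rows (8192 for 128 MB, 65536 for 1 GB and above) are consistent with the commonly cited rule of thumb that the conntrack default is roughly RAM in bytes divided by 16384. A quick check, assuming that divisor:

```shell
# ip_conntrack_max rule of thumb: RAM (bytes) / 16384
echo $((128 * 1024 * 1024 / 16384))     # 128 MB -> 8192
echo $((1024 * 1024 * 1024 / 16384))    # 1 GB   -> 65536
```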
| Name | Default | Recommended | Description |
| --- | --- | --- | --- |
| netdev_max_backlog | 1024 | 16384 | Maximum number of packets allowed to queue per network interface when the interface receives packets faster than the kernel can process them; raise it somewhat on heavily loaded servers. |
| somaxconn | 128 | 16384 | Maximum length of the listen backlog queue. A web application's listen backlog is capped by net.core.somaxconn, which defaults to 128, while for example Nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised. On busy servers, increasing it helps network performance. |
| wmem_default | 129024 | 129024 | Default send window size in bytes. |
| rmem_default | 129024 | 129024 | Default receive window size in bytes. |
| rmem_max | 129024 | 873200 | Maximum TCP receive buffer size. |
| wmem_max | 129024 | 873200 | Maximum TCP send buffer size. |
2. Two methods to modify kernel parameters
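The two methods are typically: change a value at runtime with sysctl -w (or by echoing into /proc/sys), which takes effect immediately but is lost on reboot; or make it persistent by adding the line to /etc/sysctl.conf and reloading with sysctl -p. A minimal sketch; the persistent step is demonstrated on a temporary file so it runs without root, and tcp_fin_timeout is used only as an example parameter:

```shell
# Method 1: runtime change -- immediate effect, lost on reboot.
# (Requires root, so shown as comments:)
#   sysctl -w net.ipv4.tcp_fin_timeout=2
#   echo 2 > /proc/sys/net/ipv4/tcp_fin_timeout   # equivalent via /proc

# Method 2: persistent change -- add the line to /etc/sysctl.conf and reload.
# Demonstrated on a temporary file:
conf=$(mktemp)
printf 'net.ipv4.tcp_fin_timeout = 2\n' >> "$conf"
grep '^net\.ipv4\.tcp_fin_timeout' "$conf"    # the setting is staged
#   sysctl -p   # for real use: reload /etc/sysctl.conf (requires root)
```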
3. Kernel production environment optimization parameters
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_retries2 = 5
net.ipv4.tcp_fin_timeout = 2
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 32768
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_wmem = 8192 131072 16777216
net.ipv4.tcp_rmem = 32768 131072 16777216
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.ip_conntrack_max = 65536
net.ipv4.netfilter.ip_conntrack_max=65536
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=180
net.core.somaxconn = 16384
net.core.netdev_max_backlog = 16384
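One way to apply the whole block above: save it to a file (append to /etc/sysctl.conf, or on distributions that support it a file under /etc/sysctl.d/) and load it with sysctl -p. A sketch staged in a temporary file so it runs without root; only the first and last settings of the block are repeated here:

```shell
# Stage production settings in a file, then load them with sysctl -p.
f=$(mktemp)
cat > "$f" <<'EOF'
net.ipv4.tcp_syn_retries = 1
net.core.netdev_max_backlog = 16384
EOF
grep -c ' = ' "$f"        # 2 settings staged
#   sysctl -p "$f"        # apply (requires root; point at /etc/sysctl.conf for real)
```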
This article is from the "yun Yang" blog; please keep the source when reposting: yangrong.blog.51cto.com/6945369/132…