Before we start today’s article, here’s an interview question:
What is the maximum number of concurrent requests you have ever handled on a single machine? And what do you think is the maximum number of concurrent requests a typically configured physical server can sustain today? Share your understanding and analysis.
If you can already sketch solid answers in your head, enjoy your weekend; this article won't take much of your time.
If you are a little confused and don't know how to answer, take a few minutes to read on, because this question stretches in many directions.
The ideal situation
Ideally, concurrency comes down simply to the number of distinct source IP addresses and ports, and for a single machine that is not a small number. If your answer was small, you may need to rethink it, or perhaps you have not yet been exposed to a real production environment.
Seen from the server side, the space of possible source IP addresses and source ports is enormous.
In theory, a server can see 2^32 client IP addresses (for IPv4), each with 2^16 possible port numbers, which leaves a huge number of combinations. But as noted above, this is only the ideal case.
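As a quick back-of-the-envelope derivation (assuming the server listens on a single IP and port, so only the client side of the connection 4-tuple varies):

2^32 (client IPv4 addresses) × 2^16 (source ports per address) = 2^48 ≈ 2.8 × 10^14 possible concurrent connections

This is where the 2^48 figure in a later section comes from.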
The actual situation
There is no free lunch.
Each connection consumes system resources (a file descriptor, socket buffers, CPU time), so the theoretical maximum concurrency is unattainable in practice; in day-to-day development we usually configure an explicit concurrency cap instead.
In practice, achievable concurrency is directly tied to the workload. An in-memory server such as Redis can handle tens of thousands of concurrent connections without trouble, while typical services land anywhere from dozens to hundreds to thousands to tens of thousands.
Maximum number of client connections
Now that we know the theoretical maximum number of concurrent connections to a server is 2^48, how many connections can a single client open?
On the client side we can, of course, raise the ceiling with multiple network cards and multiple IP addresses, but let's still assume the client has only one NIC and one IP. The port number is limited to 2^16 = 65,536, and after removing the ports occupied by the system, roughly 64,000 remain available.
That is, the theoretical maximum number of connections from such a client is 2^16, counting the system-occupied ports.
Client in NAT environment
After solving the first two problems, let’s look at another one:
How many concurrent connections to Internet services can the internal hosts behind a single public egress NAT device sustain?
After all, public IP addresses are scarce and cost money, and most of our machines reach the Internet from a LAN through NAT, so this scenario should be familiar.
Consider the process of IP-and-port rewriting on the way out and mapping restoration on the way back when an intranet machine accesses the Internet: the NAT device replaces the private source IP and port with its own public IP and an unused public port, records the mapping, and reverses it for returning packets.
Because, from the Internet's point of view, the client is the NAT device itself, a single public IP supports at most 65,535 concurrent outbound connections in a NAT environment, the size of its port space.
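As an illustration, the NAT translation table might look like this (all addresses are hypothetical, drawn from the documentation ranges):

```
# public egress IP: 203.0.113.5
inside (private)         outside (rewritten)       destination
192.168.1.10:3345   ->   203.0.113.5:40001    ->   93.184.216.34:443
192.168.1.11:3345   ->   203.0.113.5:40002    ->   93.184.216.34:443
```

Each concurrent connection consumes one port on the public IP, which is where the 65,535 ceiling comes from.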
So, after reading the above, can you now answer the opening questions clearly? As I said, this problem stretches in interesting directions: as concurrency grows, so do the techniques for coping with it. Space does not allow covering them all here, so let me introduce one of the most commonly used tools: Nginx.
Nginx: load balancing, rate limiting, caching, and black/white lists
1. Static proxy
Nginx is very good at serving static files and makes a great image and file server. Putting all static resources on Nginx separates static content from dynamic content and improves performance.
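As a minimal sketch (the domain and paths are made up for illustration), a static-file server block might look like this:

```nginx
server {
    listen 80;
    server_name static.example.com;   # hypothetical domain

    location /static/ {
        root /data/www;               # serves files from /data/www/static/
        expires 7d;                   # let browsers cache static assets
    }
}
```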
2. Load balancing
Through reverse proxying, Nginx can load-balance a service: it removes the single point of failure and forwards requests to different backend servers according to a chosen policy. Common policies include the following (a configuration sketch follows the list):
1. Round-robin
Requests are distributed to the backend servers in sequential rotation. Every backend is treated equally, regardless of its actual connection count or current system load.
2. Weighted round-robin
Backend servers differ in hardware configuration and current load, so their capacity to absorb traffic differs. Weighted round-robin gives well-provisioned, lightly loaded machines a higher weight so they handle more requests, and gives low-spec or heavily loaded machines a lower weight to reduce their burden, distributing requests to the backend in order and in proportion to weight.
3. ip_hash
A hash of the client's IP address is computed and taken modulo the size of the server list; the result is the index of the server that client will hit. With this source-address hashing, as long as the backend list stays the same, requests from a given client IP are always mapped to the same backend server.
4. Random
A server is selected uniformly at random from the backend list.
5. least_conn (least connections)
Backends differ in configuration and therefore process requests at different speeds. The least-connections method looks at each backend's current number of open connections and dynamically routes each request to the server with the smallest backlog, using backend capacity as efficiently as possible and distributing load sensibly across servers.
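As a minimal sketch of these policies in nginx configuration (addresses, ports, and weights are made up for illustration; enable one policy directive at a time):

```nginx
upstream backend {
    # The default policy is round-robin; adding 'weight' makes it weighted round-robin.
    server 192.0.2.10:8080 weight=3;   # hypothetical high-capacity machine
    server 192.0.2.11:8080 weight=1;   # hypothetical smaller machine

    # ip_hash;      # pin each client IP to one backend (source-address hashing)
    # least_conn;   # route to the backend with the fewest active connections
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;     # reverse proxy into the pool
    }
}
```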
3. Rate limiting
Nginx's rate-limiting module is based on the leaky-bucket algorithm and is very practical in high-concurrency scenarios.
1. Setting the parameters
1) limit_req_zone is defined in the http block; the key $binary_remote_addr stores the client IP address in binary form, which is more compact than the text form.
2) zone defines the shared memory area that holds each IP's state and access frequency. zone=name:size names the region and, after the colon, gives its size. State for roughly 16,000 IP addresses fits in 1 MB, so a 10 MB zone can hold about 160,000 addresses.
3) rate defines the maximum request rate; here, requests from one client cannot exceed 100 per second.
2. Applying the limit
limit_req applies the zone inside a location: burst sets the size of the queue for requests that exceed the rate, and nodelay serves queued requests immediately rather than spacing them out over time. A combined sketch follows.
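Putting the pieces together, a minimal sketch matching the parameters described above (the zone name, size, and path are illustrative):

```nginx
http {
    # 10 MB shared zone "mylimit", keyed by binary client IP, 100 requests/s per IP
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=100r/s;

    server {
        listen 80;
        location /api/ {
            # allow short bursts of up to 20 queued requests, served without pacing
            limit_req zone=mylimit burst=20 nodelay;
            proxy_pass http://backend;   # hypothetical upstream, as defined earlier
        }
    }
}
```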
4. Caching
1. Browser cache: static resources are cached on the client with the expires directive.
2. Proxy-layer cache: Nginx itself caches upstream responses with proxy_cache.
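A minimal sketch covering both layers (the cache path, sizes, and timings are made up for illustration):

```nginx
http {
    # proxy-layer cache: 10 MB of keys in memory, on-disk entries under a hypothetical path
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g inactive=60m;

    server {
        listen 80;

        location /static/ {
            root /data/www;              # browser cache: clients keep assets for 7 days
            expires 7d;
        }

        location / {
            proxy_cache app_cache;       # serve repeat requests from the nginx cache
            proxy_cache_valid 200 10m;   # keep successful responses for 10 minutes
            proxy_pass http://backend;
        }
    }
}
```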
5. Black and white lists
1. Whitelist exempt from rate limiting: requests from trusted IP ranges bypass the limit_req rules.
2. Blacklist: requests from specific IP addresses are rejected outright.
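A sketch of both (the IP ranges are made up; the geo-plus-map pattern maps trusted clients to an empty key, and nginx does not rate-limit requests whose key is empty):

```nginx
http {
    # Whitelist: clients in the trusted range get $limit = 0
    geo $limit {
        default      1;
        10.0.0.0/8   0;        # hypothetical trusted LAN range
    }
    # Trusted clients get an empty key and are therefore not limited
    map $limit $limit_key {
        0 "";
        1 $binary_remote_addr;
    }
    limit_req_zone $limit_key zone=req_zone:10m rate=100r/s;

    server {
        listen 80;
        location / {
            limit_req zone=req_zone burst=20 nodelay;

            deny 192.0.2.0/24;   # blacklist: reject this hypothetical range
            allow all;

            proxy_pass http://backend;
        }
    }
}
```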
Those are several of Nginx's most commonly used capabilities: static/dynamic separation, load balancing, rate limiting, caching, and black/white lists. If anything is still unclear, follow me for later posts that go deeper into Nginx; space does not allow covering everything in one article, so the rest will come in the next share.
Personal WeChat public account: Java Architects Association, publishing good technical articles daily.