When it comes to load balancing, I'd say it is inherently "unfair". Why? Imagine this scene: a cake is cut into five pieces, to be divided among three people, A, B and C. On the principle of fairness, each person should get 5/3 pieces, but 5/3 is obviously an awkward split. Suppose it also happens that A skipped lunch and can eat a few extra pieces, while B and C are full and can only manage one piece each. On the principle of "unfairness", we give three pieces to A and one piece each to B and C. Dividing resources this way, according to a certain strategy, is what a balancing strategy is.
In a production environment, a web application (or service) is usually deployed as a cluster, and requests are distributed to the different service hosts by load-balancing hardware (such as F5) or software (such as Nginx). Clearly, the cake here is our web request. Suppose five requests come in: under a certain balancing strategy, we might send three of them to Server A for processing and one each to Server B and Server C. A rough sketch of the model:
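requests 1-5 --> [ load balancer ] --+--> Server A (requests 1, 2, 3)
                                     +--> Server B (request 4)
                                     +--> Server C (request 5)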
So what are the benefits of load balancing? Resource utilization is optimized, throughput is maximized and latency is reduced; moreover, the scalability and reliability of the system are improved accordingly.
Nginx load balancing and related policies
Load balancing relies on a balancing policy. Nginx provides four balancing policies, and you can select the appropriate one based on your specific service scenario. The four policies are described as follows:
- 1. Balancing strategy based on round-robin:
Requests are distributed to the Servers in turn: request 1 goes to Server A, request 2 to Server B, and so on in rotation.
- 2. Balancing strategy based on the minimum number of connections:
Least connections: for each incoming request, Nginx determines which Server in the back-end cluster currently has the fewest active connections and sends the request to that Server.
- 3. Balancing policy based on IP-hash:
As we all know, each request comes from a client with an IP address. Under this policy, Nginx uses the IP address of each request as the key to a hash function, and the resulting hash value decides which Server the request is distributed to for processing.
- 4. Balancing strategy based on weighted round-robin:
Weighted round-robin: Nginx assigns each Server a weight; the larger the weight, the more requests the Server receives.
In fact, the balancing strategies above are very easy to understand. If you want to understand how they are implemented, you can read the source code, though I must admit I don't know C or C++ myself.
Introduction to configuring Nginx balancing policies
- 1. Balancing strategy based on round-robin:
This is Nginx's default balancing algorithm: if you do not configure any policy, it is applied by default.
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
The upstream block defines a small back-end cluster: the server directives inside it make up the cluster, and the name after upstream names it; in this example the cluster is called myapp1. The proxy_pass directive in the location block indicates which cluster handles all requests matching /; in this example it is http://myapp1.
The myapp1 in http://myapp1 must be the name defined by upstream. Whether you use http or https is up to you: to proxy over HTTPS, just change http to https in proxy_pass. Note that the server directive inside upstream takes only an address, without a protocol; the protocol is specified in the proxy_pass directive.
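For example, a minimal sketch of the HTTPS case (this assumes the upstream servers themselves accept TLS connections):

    location / {
        # Nginx now connects to the upstream servers over TLS
        proxy_pass https://myapp1;
    }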
Nginx load balancing is implemented through its reverse proxy, i.e. the proxy_pass directive used above, which supports HTTP and HTTPS. For back ends that speak other protocols such as FastCGI, uwsgi, SCGI, memcached or gRPC, proxy_pass cannot be used; instead, use the corresponding directives fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass and grpc_pass.
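As a rough sketch of the non-HTTP case, assuming a FastCGI back end such as PHP-FPM (the upstream name php_backend and its addresses are invented for this example):

upstream php_backend {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

server {
    listen 80;

    location ~ \.php$ {
        # fastcgi_pass takes the cluster name without a scheme
        fastcgi_pass php_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}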
This policy spreads the load, but it is flawed: it cannot prevent an individual Server from being overloaded. If some requests take a long time to execute and the system sees heavy concurrency, requests may pile up on one Server, and an avalanche is possible.
- 2. Balancing strategy based on the minimum number of connections:
This policy mainly uses the least_conn directive. The configuration is as follows:
upstream myapp1 {
    least_conn;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
This strategy is more considerate: it allocates requests according to the actual situation of each machine.
- 3. Balancing policy based on IP-hash:
Now, what if we want requests from the same client to be sent to the same Server every time? The two strategies above cannot do that. This policy ensures that requests from the same client are always directed to the same Server, except when that Server is unavailable. The related configuration is as follows:
upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
Since requests from the same client are always processed by the same Server, sessions from the same client can be persisted.
- 4. Balancing strategy based on weighted round-robin:
The weighted round-robin strategy needs little explanation: it simply adds weight information to round-robin:
upstream myapp1 {
    server srv1.example.com weight=3;
    server srv2.example.com;
    server srv3.example.com;
}
This strategy is suitable for situations where the processing power of the Server machines differs.
More advanced Nginx load balancing features and configurations
- 1. Health checks
Not only do people need physical examinations; machines need them too, and Nginx is their doctor! What is an Nginx health check? When a request is sent to a Server for processing, Nginx checks whether the request timed out, failed to execute, and so on, and handles it accordingly. For example, if Nginx sees that a request to Server A failed with a 502 error, the next time it balances load it will exclude Server A from the upstream group and not distribute requests to it.
For the health check functionality, Nginx provides two basic parameters, max_fails and fail_timeout: when Nginx finds that the number of failed or timed-out requests on a Server reaches max_fails within the fail_timeout window, it marks that Server as unavailable for the duration of fail_timeout. After that, Nginx gracefully probes the Server with live client requests and, if it responds, considers the corresponding Server alive and sound again.
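A minimal sketch of these parameters (the values 3 and 30s are arbitrary examples; the defaults are max_fails=1 and fail_timeout=10s):

upstream myapp1 {
    # after 3 failed attempts within 30s, take the Server out of rotation for 30s
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com max_fails=3 fail_timeout=30s;
    server srv3.example.com;
}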
- 2. More configurations
The full syntax of the server directive is server address [parameters];. Many parameter types are available, such as marking a Server as not participating in load balancing. For details, see the upstream documentation on the official Nginx website.
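For illustration, a sketch combining a few of the parameters that exist (weight, backup and down are real parameters of the server directive; the host names are placeholders):

upstream myapp1 {
    server srv1.example.com weight=3;  # receives roughly three times as many requests
    server srv2.example.com;
    server srv3.example.com backup;    # used only when the primary Servers are unavailable
    server srv4.example.com down;      # marked down: does not participate in load balancing
}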