Preface

With a good attitude you won't feel so tired; in a good mood, all you see is beautiful scenery.

As the old saying goes, "If you can't solve a problem right away, take the opportunity to see your own limits and step away from it for a while." Letting go is easier than it sounds. If something keeps weighing on your mind, learn to disconnect from it: drop the obsessing, tune out the noise, and shed the negative energy.

This article in the Nginx series covers several ways to configure load balancing with Nginx. Objections and discussion are welcome! Before reading on, you may want to check out the blogger's earlier article on installing and configuring Nginx on Windows 10, so you know how to install and configure Nginx first.


A brief overview of the allocation methods currently supported by the Nginx upstream module

  1. Round robin (default): each request is assigned to a different backend server, one by one, in chronological order. If a backend server goes down, it is automatically removed.
upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}

By default, upstream balances load in round-robin fashion: each request is distributed to a different backend server in chronological order, and if a backend server goes down it is automatically removed. This method is simple and cheap, but its drawbacks are low reliability and uneven load distribution. It is suitable for image server clusters and purely static page server clusters.

  2. weight: specifies the polling probability. The weight is proportional to the access ratio; use it when backend server performance is uneven. For example:
upstream backserver {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}
  3. ip_hash (bind by client IP): each request is assigned according to a hash of the client's IP address (clients in the same class C subnet are sent to the same backend server, unless that server is down), so every visitor consistently reaches the same backend. This solves the session problem.
upstream backserver {
    ip_hash;
    server 127.0.0.1:88;
    server 127.0.0.1:80;
}
  4. fair (third-party): requests are allocated based on backend server response time, with shorter response times given priority.
upstream backserver {
    server server1;
    server server2;
    fair;
}
Copy the code
  5. url_hash (third-party): requests are allocated by a hash of the requested URL, so each URL is directed to the same backend server. This is more effective when the backend servers cache content. Note that in the upstream block, hash_method specifies the hash algorithm to use, and the server directive must not carry parameters such as weight.
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

Usage

1. Add proxy_pass http://backserver/ to the server block that needs load balancing, and define the upstream:

upstream backserver {
    # define the load-balanced devices (IPs) and their states
    ip_hash;
    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;
}

upstream can also set a status value for each device. The status values are:

  • down: the server temporarily does not take part in load balancing.
  • weight: defaults to 1; the larger the weight, the larger the share of load.
  • max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
  • backup: receives requests only when all the other non-backup machines are down or busy, so this machine carries the lightest load.
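Putting the two halves together, a minimal sketch of a complete configuration might look like this (the listen port and location are illustrative assumptions, not from the original article):

```nginx
http {
    upstream backserver {
        ip_hash;
        server 127.0.0.1:9090 down;
        server 127.0.0.1:8080 weight=2;
        server 127.0.0.1:6060;
        server 127.0.0.1:7070 backup;
    }

    server {
        listen 80;
        location / {
            # forward matching requests to the backserver upstream group
            proxy_pass http://backserver/;
        }
    }
}
```

The upstream name after http:// in proxy_pass must match the name given in the upstream block.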

A closer look

Back-end server

Upstream lets you configure backend servers by IP address and port, by domain name, or by UNIX socket. If a domain name resolves to multiple addresses, all of those addresses are used as backends.
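The three address forms can be sketched as follows (the hostnames, ports, and socket path here are illustrative placeholders, not from the original article):

```nginx
upstream backend {
    server 192.168.0.14:8080;         # IP address and port
    server app.example.com;           # domain name (may resolve to several addresses)
    server unix:/tmp/backend.sock;    # UNIX domain socket
}
```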

Load Balancing Policy

Nginx supports round robin, client IP hash, and specified weights.

By default, Nginx uses round robin as its load-balancing strategy, but that will not always satisfy you. For example, if a series of requests is initiated by the same user Michael during a certain period, his first request might go to backend2, the next to backend3, then backend1, backend2, backend3, and so on. In most application scenarios this is not what you want. That is why Nginx lets you hash the IP addresses of Michael, Jason, David, and so on, so that each client's requests always land on the same backend server. The usage is as follows:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

In this policy, the hash key is the class C network portion of the client's IP address. Class C addresses range from 192.0.0.0 to 223.255.255.255; the first three octets identify the network, and the fourth identifies the host. This way, every request from a given client is sent to the same backend. Of course, if the backend a client hashes to is currently unavailable, the request is transferred to another backend.

Another keyword often used together with ip_hash is down. When a server is temporarily out of service, mark it with down so that it no longer receives requests. For example:

upstream backend {
    server backend1.example.com down;
    server xxx.xxx.xxx.xxx;        # some IP address
    server unix:/tmp/backend3;
}

You can also specify weights as follows:

upstream backend {
    server backend1.example.com;
    server xxx.xxx.xxx.xxx weight=4;    # some IP address
}

By default, weight is 1. In the example above, the first server keeps the default weight of 1 and the second is given weight=4, so the first server receives 20% of the requests and the second receives 80%. Note that weight and ip_hash cannot be used together, for the simple reason that they are different, conflicting policies.

Retry strategy

You can specify the maximum number of retries and retry interval for each backend. The keywords used are max_fails and fail_timeout. As follows:

upstream backend {
    server backend1.example.com weight=5;
    server xxx.xxx.xxx.xxx max_fails=3 fail_timeout=30s;    # some IP address
}

In the example above, the maximum number of failures is 3, that is, at most 3 attempts are made, and the timeout is 30 seconds. The default value of max_fails is 1 and the default value of fail_timeout is 10s. What counts as a failed attempt is defined by proxy_next_upstream or fastcgi_next_upstream. You can also use proxy_connect_timeout and proxy_read_timeout to control the upstream response time.
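A sketch of how these directives might be combined in a location block (the specific values are illustrative assumptions, not recommendations from the original article):

```nginx
location / {
    proxy_pass http://backend;
    # which outcomes count as a failed attempt toward max_fails
    proxy_next_upstream error timeout http_502;
    # give up on connecting to an upstream after 5 seconds
    proxy_connect_timeout 5s;
    # give up waiting for a response read from an upstream after 30 seconds
    proxy_read_timeout 30s;
}
```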

It is important to note that max_fails and fail_timeout may not take effect when there is only one server in the upstream block: nginx tries the upstream only once, and if that attempt fails the request is discarded. The workaround is to write the same server into the upstream block several times:

upstream backend {
    server backend.example.com max_fails=3 fail_timeout=30s;
    server backend.example.com max_fails=3 fail_timeout=30s;
    server backend.example.com max_fails=3 fail_timeout=30s;
}

Stand-by strategy

Starting with Nginx version 0.6.7, you can use the backup keyword. The standby servers marked with backup are used only when all non-backup servers are down or busy. Note that backup cannot be used together with the ip_hash keyword. For example:

upstream backend {
    server backend1.example.com;
    server backend2.example.com backup;
    server backend3.example.com;
}

🎉 Summary:

  • For more references, see: The Blog of Chan Wing Kai

  • If you liked this post, a follow and a like would be much appreciated; more updates are on the way!