“This is the 21st day of my participation in the November Gwen Challenge. See event details: The Last Gwen Challenge 2021.”

Previous articles:

  • Meet Nginx
  • Installing Nginx
  • Nginx core configuration file structure
  • Nginx Static Resource Deployment [1]
  • Nginx Static Resource Deployment [2]
  • Nginx Static Resource Deployment [3]
  • Nginx Static Resource Deployment [4]
  • Nginx reverse proxy [1]
  • Nginx reverse proxy [2]
  • Nginx Load Balancing [1]

Nginx layer 7 load balancing

Nginx implements layer 7 load balancing through the proxy_pass directive of the proxy module. Nginx includes this module by default, so nothing extra needs to be installed. Nginx load balancing builds on its reverse proxy: user requests are distributed, according to a specified algorithm, to a pool of upstream virtual servers.

Nginx layer 7 load balancing directives

The upstream directive

This directive defines a group of servers. The servers can listen on different ports, and TCP and Unix domain sockets can be mixed in the same group. Each server can be assigned a different weight; the default weight is 1.

Syntax: upstream name { ... }
Default: —
Context: http
The server directive

This directive specifies the address and parameters of a back-end server. The address can be a domain name, an IP address with an optional port, or a Unix socket path.

Syntax: server name [parameters]
Default: —
Context: upstream
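A minimal sketch combining the two directives; the addresses below are placeholders illustrating the three address forms mentioned above:

```nginx
http {
    upstream backend {
        server 127.0.0.1:9001;                      # IP address and port
        server backend2.example.com:9002 weight=2;  # domain name, weight raised to 2
        server unix:/tmp/backend3.sock;             # Unix socket path
    }
}
```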

Implementing Nginx layer 7 load balancing

Back-end server setup

server {
    listen 9001;
    server_name localhost;
    default_type text/html;

    location / {
        return 200 '<h1>192.168.200.146:9001</h1>';
    }
}

server {
    listen 9002;
    server_name localhost;
    default_type text/html;

    location / {
        return 200 '<h1>192.168.200.146:9002</h1>';
    }
}

server {
    listen 9003;
    server_name localhost;
    default_type text/html;

    location / {
        return 200 '<h1>192.168.200.146:9003</h1>';
    }
}

Load balancer setup

upstream backend {
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

Load balancing status

The proxy server used for load-balancing scheduling supports the following server states:

State         Description
down          The server temporarily does not participate in load balancing
backup        Reserved backup server
max_fails     The number of failed requests allowed
fail_timeout  The time the server is suspended after max_fails failures
max_conns     Limits the maximum number of connections
down

down: marks the server as permanently unavailable; the proxied server will not participate in load balancing.

upstream backend {
    server 192.168.200.146:9001 down;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

This state is usually set for servers that need to be shut down for maintenance.

backup

backup: marks this server as a backup server. It receives requests only when the primary servers are unavailable.

upstream backend {
    server 192.168.200.146:9001 down;
    server 192.168.200.146:9002 backup;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

With 9001 marked down and 9002 marked backup, 9003 is the only server handling requests. To see the backup server take over, block access to port 9003 to simulate a crash of the only active server; the backup server then starts serving requests. For this test, the firewall can be used to block the port.

This section introduces firewall-cmd, the tool Linux (firewalld) provides for managing the firewall.

Query whether a specified port is open on the firewall:

firewall-cmd --query-port=9001/tcp

Open a specified port:

firewall-cmd --permanent --add-port=9002/tcp

Open a range of ports in batch:

firewall-cmd --permanent --add-port=9001-9003/tcp

Remove a specified port:

firewall-cmd --permanent --remove-port=9003/tcp

Reload the firewall:

firewall-cmd --reload

Where:

--permanent: makes the change persistent

--add-port: opens the specified port

--remove-port: removes the specified port

max_conns

max_conns=number: sets the maximum number of simultaneous active connections to the proxied server. The default is 0, meaning no limit.
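A sketch of how this parameter is written in an upstream block; the limit of 2 here is an arbitrary value for illustration:

```nginx
upstream backend {
    server 192.168.200.146:9001 max_conns=2;  # at most 2 simultaneous connections
    server 192.168.200.146:9002;              # default 0: unlimited
}
```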

max_fails and fail_timeout

max_fails=number: sets the number of failed requests allowed to the proxied server. The default is 1.

fail_timeout=time: sets the time the server is suspended after max_fails failures. The default is 10 seconds.

upstream backend {
    server 192.168.200.133:9001 down;
    server 192.168.200.133:9002 backup;
    server 192.168.200.133:9003 max_fails=3 fail_timeout=15;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

Load Balancing Policy

With Nginx load balancing we can distribute user requests across different servers. Besides the default algorithm, which other load-balancing algorithms can we use?

Upstream supports the following six allocation algorithms:

Algorithm   Description
polling     Round robin, the default
weight      Weighted round robin
ip_hash     Allocation based on the client IP address
least_conn  Allocation based on the fewest connections
url_hash    Allocation based on the URL
fair        Allocation based on response time
polling (round robin)

Round robin (polling) is the default load-balancing policy of the upstream module: each request is assigned to a different back-end server in turn, in chronological order. It requires no additional configuration.

upstream backend {
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}
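The assignment order can be sketched outside Nginx. The following toy shell script (not Nginx's actual implementation) sends the i-th request to server i mod n, which is the essence of round robin:

```shell
#!/bin/sh
# Toy round-robin scheduler: the i-th request goes to server (i mod n).
servers="9001 9002 9003"
n=3
i=0
order=""
while [ "$i" -lt 6 ]; do
    idx=$((i % n))
    # pick the (idx+1)-th field from the space-separated list
    port=$(echo "$servers" | cut -d' ' -f$((idx + 1)))
    order="${order:+$order }$port"
    i=$((i + 1))
done
echo "$order"  # six requests cycle through the three ports twice
```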
weight (weighted round robin)

weight=number: sets the weight of the server; the default is 1. In practice the weights are tuned to each back-end server's hardware configuration, so this policy suits environments where server hardware differs significantly.

upstream backend {
    server 192.168.200.146:9001 weight=10;
    server 192.168.200.146:9002 weight=5;
    server 192.168.200.146:9003 weight=3;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}
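With weights 10, 5 and 3 (total 18), out of every 18 requests the three servers receive 10, 5 and 3 respectively. In rough percentages (integer shell arithmetic):

```shell
#!/bin/sh
# Approximate share of requests per server for weights 10, 5, 3 (total 18).
p1=$((10 * 100 / 18))  # port 9001
p2=$((5 * 100 / 18))   # port 9002
p3=$((3 * 100 / 18))   # port 9003
echo "$p1% $p2% $p3%"  # roughly 55% 27% 16%
```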
ip_hash

When load balancing across multiple dynamic application servers at the back end, the ip_hash directive uses a hash algorithm to pin requests from a given client IP address to the same back-end server. This way, once a user from a certain IP address has logged in on back-end web server A, subsequent requests from that user to other URLs of the site also reach server A.

Syntax: ip_hash;
Default: —
Context: upstream
upstream backend {
    ip_hash;
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

Note that the ip_hash directive cannot guarantee an even load across back-end servers: some servers may receive far more requests than others, and mechanisms such as server weights have no effect.
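Nginx's ip_hash keys on the first three octets of the client's IPv4 address, so all clients in the same /24 network land on the same back end. This toy script (a deliberately simplified stand-in, not Nginx's real hash function) illustrates the idea of mapping those octets to a server index:

```shell
#!/bin/sh
# Toy ip_hash: sum the first three octets of the IPv4 address, mod n.
# NOT Nginx's actual hash function; for illustration only.
ip="192.168.200.1"
n=3
a=$(echo "$ip" | cut -d. -f1)
b=$(echo "$ip" | cut -d. -f2)
c=$(echo "$ip" | cut -d. -f3)
idx=$(((a + b + c) % n))
echo "client $ip -> backend index $idx"
```

Any client in 192.168.200.0/24 produces the same index, so it always reaches the same back-end server.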

least_conn

least_conn forwards requests to the back-end server with the fewest active connections. The round robin algorithm distributes requests evenly so that each back end carries roughly the same load; but some requests take much longer, leaving the server handling them more heavily loaded. In such cases least_conn achieves better load balancing.

upstream backend {
    least_conn;
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

This policy suits cases where servers would otherwise become overloaded because request processing times vary widely.

url_hash

url_hash distributes requests based on a hash of the requested URL, so that each URL is always directed to the same back-end server. It is typically used together with caching. Without it, multiple requests for the same resource may reach different servers, causing unnecessary repeated downloads and a poor cache hit ratio. With url_hash, the same URL (that is, the same resource request) always reaches the same server; once the resource is cached there, further requests can be served from the cache.

upstream backend {
    hash $request_uri;
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

Visit the following address:

http://192.168.200.133:8083/a
http://192.168.200.133:8083/b
http://192.168.200.133:8083/c
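The property that matters here is determinism: the same URI always hashes to the same back-end index. A toy version using the POSIX cksum checksum (not Nginx's actual hash function):

```shell
#!/bin/sh
# Toy url_hash: checksum the URI and map it onto one of n back ends.
n=3
uri="/a"
sum=$(printf '%s' "$uri" | cksum | cut -d' ' -f1)
idx=$((sum % n))
# hashing the same URI again always yields the same index
sum2=$(printf '%s' "$uri" | cksum | cut -d' ' -f1)
idx2=$((sum2 % n))
echo "uri $uri -> backend index $idx"
```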

fair

fair does not use the built-in round robin algorithm; instead, it balances load intelligently based on page size and load time. fair is provided by a third-party module, so let's see how to use it.

upstream backend {
    fair;
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

However, using it directly reports an error, because fair is implemented by the third-party module nginx-upstream-fair. The module has to be added to Nginx first:

  1. Download the nginx-upstream-fair module

Download address: https://github.com/gnosek/nginx-upstream-fair
  2. Upload the downloaded file to the server and unzip it
unzip nginx-upstream-fair-master.zip
  3. Rename the directory
mv nginx-upstream-fair-master fair
  4. Run ./configure to add the module to Nginx
./configure --add-module=/root/fair
  5. Compile
make

Compilation fails with an error: the ngx_http_upstream_srv_conf_t structure is missing the default_port member.

Solution:

In the Nginx source file src/http/ngx_http_upstream.h, find the ngx_http_upstream_srv_conf_s structure and add the default_port member:

in_port_t      default_port;

Then run make again.

  6. Update Nginx

6.1 Back up nginx in the sbin directory

mv /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginxold

6.2 Copy the newly built nginx from the objs directory to sbin

cd objs
cp nginx /usr/local/nginx/sbin

6.3 update Nginx

cd ..
make upgrade
  7. Test with Nginx

Load Balancing Cases

Case 1: Load balancing with general polling rules for all requests
upstream backend {
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}
Case 2: Load balancing with weighted polling rules for all requests
upstream backend {
    server 192.168.200.146:9001 weight=7;
    server 192.168.200.146:9002 weight=5;
    server 192.168.200.146:9003 weight=3;
}

server {
    listen 8083;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}
Case 3: Load balancing for specific resources
upstream videobackend {
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
}

upstream filebackend {
    server 192.168.200.146:9003;
    server 192.168.200.146:9004;
}

server {
    listen 8084;
    server_name localhost;

    location /video/ {
        proxy_pass http://videobackend;
    }

    location /file/ {
        proxy_pass http://filebackend;
    }
}
Case 4: Load balancing for different domain names
upstream testAbackend {
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
}

upstream testBbackend {
    server 192.168.200.146:9003;
    server 192.168.200.146:9004;
}

server {
    listen 8085;
    server_name www.testA.cn;

    location / {
        proxy_pass http://testAbackend;
    }
}

server {
    listen 8086;
    server_name www.testB.cn;

    location / {
        proxy_pass http://testBbackend;
    }
}
Case 5: Implementing load balancing with URL rewriting
upstream backend {
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}

server {
    listen 80;
    server_name localhost;

    location /file/ {
        rewrite ^(/file/.*) /server/$1 last;
    }

    location / {
        proxy_pass http://backend;
    }
}
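The rewrite rule in case 5 prefixes a matching request path with /server before it is proxied to the backend group. The transformation of the path can be mimicked with sed (a sketch of the regex capture, not of Nginx itself; the sample path is an arbitrary illustration):

```shell
#!/bin/sh
# Mimic the effect of: rewrite ^(/file/.*) /server/$1 last;
# The captured path is re-emitted under the /server prefix.
path="/file/a.txt"
rewritten=$(echo "$path" | sed -E 's#^(/file/.*)#/server\1#')
echo "$rewritten"  # /server/file/a.txt
```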