This is the 18th day of my participation in the November Gwen Challenge. Check out the event details: The last Gwen Challenge 2021

This article contains a few notes from my nginx load balancing experiments.

Overview

Earlier articles implemented the forwarding program and tried a few load balancing algorithms. This article runs some simple tests of nginx load balancing; several of the experiments answer questions that came up in discussions between the author and colleagues.

The program

The program used here is the forwarding program the author implemented earlier; in fact, any program that can respond to POST requests will do.

The environment

The experimental environment is as follows: a Linux virtual machine runs the container, and a Windows machine sends the POST requests.

This article uses the image centos/nginx-116-centos7 for testing. The startup command is as follows:

# $PWD/bin stores the backend service binaries
docker run -itd --name nginx -p 8080:8080 -v $PWD/bin:/home/latelee/bin centos/nginx-116-centos7 nginx -g "daemon off;"

To configure nginx, you need root permission, so run the following command to enter the container:

sudo docker exec -u root -it nginx bash

Start the backend services with the following commands:

/home/latelee/bin/httpforward_back.exe -p 9001 -i "hello in 9001"
/home/latelee/bin/httpforward_back.exe -p 9002 -i "hello in 9002"
/home/latelee/bin/httpforward_back.exe -p 9003 -i "hello in 9003"

To reload the nginx configuration, run the following command:

nginx -s reload

The nginx configuration file is as follows:

cat > /etc/nginx/nginx.conf <<-'EOF'
worker_processes auto;
error_log /var/opt/rh/rh-nginx116/log/nginx/error.log;
pid /var/opt/rh/rh-nginx116/run/nginx/nginx.pid;

# Load dynamic modules. See /opt/rh/rh-nginx116/root/usr/share/doc/README.dynamic.
include /opt/rh/rh-nginx116/root/usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '[$time_local] $remote_addr: "$request" '
                    '$status "$http_referer" '
                    '"$http_user_agent" [$upstream_addr $upstream_status $upstream_response_time ms $request_time ms]';

    access_log /var/opt/rh/rh-nginx116/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    proxy_connect_timeout 10;

    include /etc/opt/rh/rh-nginx116/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /opt/app-root/etc/nginx.d/*.conf;

    server {
        listen 8080 default_server;
        listen [::]:8080 default_server;
        server_name _;
        root /opt/app-root/src;

        # Load configuration files for the default server block.
        include /opt/app-root/etc/nginx.default.d/*.conf;

        location / {
            proxy_pass http://foobar;
            proxy_set_header Host $proxy_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location /fee/sleep {
            proxy_pass http://foobar;
        }
    }

    upstream foobar {
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
    }

    #upstream foobar {
    #    server 127.0.0.1:9001 weight=3;
    #    server 127.0.0.1:9002 weight=1;
    #}

    #upstream foobar {
    #    ip_hash;
    #    server 127.0.0.1:9001;
    #    server 127.0.0.1:9002;
    #    server 127.0.0.1:9003;
    #}
}
EOF

This configuration file mainly defines the addresses and ports of the upstream group foobar, then maps specific URLs onto it:

location / {
    proxy_pass http://foobar;
}

location /fee/sleep {
    proxy_pass http://foobar;
}

In the experiments that follow, only the upstream foobar block is modified, so that different algorithms can be tried.

Experiments

To run the experiments, open several terminals into the container: one for modifying the configuration and reloading nginx, one for running the programs, one for watching logs, and so on. To view the access log:

tail -f /var/opt/rh/rh-nginx116/log/nginx/access.log

Run the following command on another terminal (Windows or the VM):

curl http://192.168.28.11:8080/ -X POST -F "file=@sample.json"

The configuration and corresponding logs and observations are shown below.

Basic experiment

Default round-robin

Configuration:

upstream foobar {
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
}

Log:

[18/Nov/2021:10:23:15 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.000 ms 0.001 ms]
[18/Nov/2021:10:23:16 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:23:18 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.002 ms 0.003 ms]
[18/Nov/2021:10:23:21 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:23:24 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:23:25 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.001 ms 0.001 ms]

Conclusion: 9001 and 9002 take turns serving the requests.
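The default algorithm can be sketched in a few lines. This is only an illustrative model of round-robin, not nginx code; the backend list is taken from the experiment above:

```python
from itertools import cycle

# The two backends from the experiment above.
backends = ["127.0.0.1:9001", "127.0.0.1:9002"]

# Plain round-robin: hand out the backends in turn, wrapping around.
picker = cycle(backends)

def next_backend():
    return next(picker)

first_four = [next_backend() for _ in range(4)]
# first_four: 9001, 9002, 9001, 9002 - each backend in turn
```

One plausible reason the real log is not a perfectly strict 1-2-1-2 sequence: without a shared-memory zone, each nginx worker process keeps its own round-robin position.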

Weighted round-robin

Configuration:

upstream foobar {
    server 127.0.0.1:9001 weight=4;
    server 127.0.0.1:9002 weight=1;
}

Log:

[18/Nov/2021:10:24:37 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.000 ms 0.000 ms]
[18/Nov/2021:10:24:38 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:24:38 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.000 ms 0.000 ms]
[18/Nov/2021:10:24:39 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:24:40 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]

Conclusion: 9001 and 9002 are picked 4 times and 1 time respectively. Note: nginx's weighted round-robin is itself smooth weighted round-robin; the weight gap here is deliberately large to make the demonstration obvious.

Smooth weighted round-robin

Configuration:

upstream foobar {
    server 127.0.0.1:9001 weight=2;
    server 127.0.0.1:9002 weight=5;
    server 127.0.0.1:9003 weight=3;
}

Log:

Round 1:

[18/Nov/2021:10:30:39 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:30:39 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:30:40 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9003 200 0.001 ms 0.000 ms]
[18/Nov/2021:10:30:41 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9003 200 0.000 ms 0.001 ms]
[18/Nov/2021:10:30:42 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.008 ms 0.009 ms]
[18/Nov/2021:10:30:43 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:30:44 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:30:44 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.000 ms 0.001 ms]
[18/Nov/2021:10:30:45 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.000 ms 0.001 ms]
[18/Nov/2021:10:30:46 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9003 200 0.001 ms 0.001 ms]

Round 2:

[18/Nov/2021:10:33:24 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:33:25 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:33:26 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.000 ms 0.000 ms]
[18/Nov/2021:10:33:27 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9003 200 0.000 ms 0.000 ms]
[18/Nov/2021:10:33:28 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.000 ms 0.000 ms]
[18/Nov/2021:10:33:29 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:33:30 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9003 200 0.002 ms 0.002 ms]
[18/Nov/2021:10:33:31 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9001 200 0.001 ms 0.001 ms]
[18/Nov/2021:10:33:31 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.000 ms 0.000 ms]
[18/Nov/2021:10:33:32 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]

Conclusion: in terms of counts, 9001, 9002 and 9003 match the configured weights, appearing 2, 5 and 3 times per round. In terms of order, requests to the same server are not bunched together; the three are polled fairly evenly. Comparing the two rounds, the servers appear in a different order in each round.

The comparison with the previous self-implemented smoothing algorithm is as follows:

Nginx: 2 2 3 3 1 2 2 3 2 1 2 3 2 1 2 2 2 3 1 2 2 2 2

As you can see, there’s a difference.
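For comparison, here is a re-implementation sketch of the smooth weighted round-robin algorithm nginx uses: on each pick, every server's current value gains its weight, the server with the highest current value wins, and the winner's value is reduced by the total weight. The server names and weights mirror the experiment; this is my own sketch, not nginx source code:

```python
def smooth_wrr(servers, n):
    """servers: list of (name, weight) tuples.
    Returns the first n picks using smooth weighted round-robin."""
    current = {name: 0 for name, _ in servers}
    total = sum(w for _, w in servers)
    picks = []
    for _ in range(n):
        # 1) every server gains its own weight
        for name, w in servers:
            current[name] += w
        # 2) the server with the highest current value is picked
        best = max(servers, key=lambda s: current[s[0]])[0]
        # 3) the winner is penalised by the total weight
        current[best] -= total
        picks.append(best)
    return picks

# Same weights as the experiment: 9001=2, 9002=5, 9003=3.
order = smooth_wrr([("9001", 2), ("9002", 5), ("9003", 3)], 10)
# In any window of 10 picks each server appears exactly `weight`
# times, and picks of the same server are spread out, not bunched.
```

Differences from the observed log are expected: per-worker state and tie-breaking details can shift the order, but the per-round counts always match the weights.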

ip_hash

Configuration:

    upstream foobar {
        ip_hash;
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
        server 127.0.0.1:9003;
    }

To simulate different source IP addresses, send POST requests from the VM, the physical machine, and other containers, then observe the logs:

[18/Nov/2021:10:59:55 +0000] 127.0.0.1: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9003 200 0.002 ms 0.042 ms]
[18/Nov/2021:10:59:55 +0000] 127.0.0.1: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9003 200 0.001 ms 0.04 ms]
[18/Nov/2021:10:59:56 +0000] 127.0.0.1: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9003 200 0.001 ms 0.04 ms]
[18/Nov/2021:07:02:20 +0000] 172.17.0.1: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9002 200 0.001 ms 0.04 ms]
[18/Nov/2021:07:02:23 +0000] 172.17.0.1: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9002 200 0.001 ms 0.044 ms]
[18/Nov/2021:07:03:20 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:07:03:21 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:07:03:22 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.000 ms 0.000 ms]
[18/Nov/2021:07:03:22 +0000] 192.168.28.5: "POST / HTTP/1.1" 200 "-" "curl/7.73.0" [127.0.0.1:9002 200 0.001 ms 0.001 ms]
[18/Nov/2021:07:03:54 +0000] 172.17.0.3: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9002 200 0.000 ms 0.042 ms]
[18/Nov/2021:07:04:07 +0000] 172.17.0.3: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9002 200 0.002 ms 0.02 ms]
[18/Nov/2021:07:04:08 +0000] 172.17.0.3: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9002 200 0.002 ms 0.002 ms]
[18/Nov/2021:07:04:42 +0000] 192.168.28.11: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9002 200 0.001 ms 0.04 ms]
[18/Nov/2021:07:04:49 +0000] 192.168.28.11: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9002 200 0.001 ms 0.04 ms]
[18/Nov/2021:07:04:50 +0000] 192.168.28.11: "POST / HTTP/1.1" 200 "-" "curl/7.29.0" [127.0.0.1:9002 200 0.002 ms 0.03 ms]

Conclusion: 127.0.0.1 is the container where nginx itself runs, 172.17.0.3 is another container, 192.168.28.5 is the physical machine, and 192.168.28.11 is the VM. The logs show that requests from a given source IP are always answered by the same backend port. For some reason, though, the 9001 service was never selected.
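A plausible explanation for the "missing" 9001: for IPv4, nginx's ip_hash hashes only the first three octets of the client address, so with few distinct /24 networks only a few hash buckets get exercised. The sketch below models the module's selection; the constants 89, 113 and 6271 follow the nginx source as I understand it, so treat the exact mapping as an approximation:

```python
def ip_hash_pick(client_ip, servers):
    """servers: list of (name, weight). Pick a backend roughly the
    way nginx's ip_hash module does for IPv4: hash the first three
    octets, then walk the weighted server list."""
    octets = [int(x) for x in client_ip.split(".")][:3]
    h = 89  # initial value used in ngx_http_upstream_ip_hash_module
    for o in octets:
        h = (h * 113 + o) % 6271
    w = h % sum(weight for _, weight in servers)
    for name, weight in servers:
        if w < weight:
            return name
        w -= weight

servers = [("9001", 1), ("9002", 1), ("9003", 1)]
# Addresses in the same /24 always land on the same backend:
a = ip_hash_pick("172.17.0.1", servers)
b = ip_hash_pick("172.17.0.3", servers)
# a == b, whatever backend that turns out to be
```

Whatever the exact constants, any two addresses sharing their first three octets (such as 172.17.0.1 and 172.17.0.3 above) necessarily map to the same backend.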

Custom experiments

Here are some experiments I’ve been wanting to do.

Accessing backends that are not started – returns 502

Simulation scenario: none of the backend services are started, but they are still listed in the nginx configuration file. The nginx access log:

[18/Nov/2021:06:43:48 +0000] 192.168.28.5: "POST / HTTP/1.1" 502 "-" "curl/7.73.0" [127.0.0.1:9003, 127.0.0.1:9002, 127.0.0.1:9001 0.001 ms 0.001 ms]

The curl request returns:

$ curl http://192.168.28.11:8080/ -X POST -F "file=@sample.json"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   493  100   157  100   336  78500   164k --:--:-- --:--:-- --:--:--  481k
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.16.1</center>
</body>
</html>

Conclusion: nginx did poll through the servers; the access log shows that every backend was attempted, and since all of them failed the request returned 502.
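This matches nginx's proxy_next_upstream behaviour: on a connection error, nginx moves on to the next server in the group until one succeeds or all have been tried. A rough sketch of that retry loop (the `connect` callback is a hypothetical stand-in; real nginx also honours max_fails/fail_timeout):

```python
def forward_with_failover(backends, connect):
    """Try each backend in turn. `connect` raises ConnectionError on
    failure. Returns (response, tried); response is None -> 502."""
    tried = []
    for backend in backends:
        tried.append(backend)
        try:
            return connect(backend), tried
        except ConnectionError:
            continue  # like proxy_next_upstream: move to the next peer
    return None, tried  # every peer failed: nginx answers 502

# All backends down, as in the experiment:
def down(_backend):
    raise ConnectionError

resp, tried = forward_with_failover(["9003", "9002", "9001"], down)
# resp is None, and all three peers appear in `tried`, mirroring
# the comma-separated $upstream_addr list in the access log.
```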

A backend dies mid-request – returns 502

Simulation scenario: a service stops abruptly in the middle of handling a request (for example a segfault or a power failure). To simulate this, the /fee/sleep request handler deliberately sleeps for 4 seconds, which leaves enough time to kill the service. The nginx access log:

[18/Nov/2021:06:49:18 +0000] 192.168.28.5: "POST /fee/sleep HTTP/1.1" 502 "-" "curl/7.73.0" [127.0.0.1:9001 502 3.035 ms 3.035 ms]

As you can see, the response took more than 3 seconds, because the backend service was stopped roughly 3 seconds in.

The curl request returns:

$ curl http://192.168.28.11:8080/fee/sleep -X POST -F "file=@sample.json"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   493  100   157  100   336     40     85  0:00:03  0:00:03 --:--:--   125
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.16.1</center>
</body>
</html>

Conclusion: to the requester the response looks the same as in the previous experiment, but the access log shows only 9001, with status 502. The other backend services are still running, so the next request succeeds: nginx finds a working server again.

Reloading nginx mid-request – in-flight requests are allowed to finish

Simulation scenario: among several backend services, some must be stopped and upgraded, restarted, and then the rest upgraded. Set the weights of the two servers far apart, for example 10 for 9001 and 1 for 9002, to ensure that most requests are forwarded to port 9001. While a request is being processed, modify the nginx configuration to remove the 9001 service, reload nginx, and observe.

Conclusion: nginx lets the 9001 service finish the request it is already handling, and subsequent requests are no longer forwarded to it. A reload therefore guarantees that in-flight requests complete.

Setting upstream timeouts

Configuration:

        location / {
            proxy_pass http://foobar;
            proxy_set_header Host $proxy_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 1;
            proxy_read_timeout    1;
            proxy_send_timeout    1;
        }

Note: I did not find a good way to test these timeouts. The backend is implemented with the gin framework, which has its own handler functions; once a handler is entered the request counts as being responded to, and the earlier sleep trick did not trigger the timeout either.
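One approach that should trip proxy_read_timeout regardless of framework: a backend that accepts the connection but delays before writing any response bytes at all. Below is a minimal stand-in server using plain sockets (not gin; the port and delay are arbitrary choices for illustration); with proxy_read_timeout 1 and a larger delay, nginx should give up on the upstream:

```python
import socket
import threading
import time

def slow_http_server(port, delay):
    """Accept one connection and wait `delay` seconds before sending
    any bytes; nothing appears on the wire during the delay, which is
    what an upstream read timeout measures."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(4096)          # read the request
    time.sleep(delay)        # silence on the wire -> read timeout fires
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()
    srv.close()
```

Pointing an upstream server entry at this port and requesting through nginx should then produce a 504/timeout entry in the access log, if my understanding of proxy_read_timeout is right.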

Notes

Do not append a URI to the server addresses in an upstream block; otherwise nginx reports:

nginx: [emerg] invalid host in upstream "127.0.0.1:9001/foobar" in /etc/opt/rh/rh-nginx116/nginx/nginx.conf:54

A URI can be attached to the location instead, e.g. location /foobar.

To make request logs easier to observe, configure the nginx log format as follows:

log_format  main  '[$time_local] $remote_addr: "$request" '
                  '$status "$http_referer" '
                  '"$http_user_agent" [$upstream_addr $upstream_status $upstream_response_time ms $request_time ms]';

Open question

Can nginx be used to prevent clients from accessing the real web servers directly? I have not yet found a ready-made solution online.

Summary

Some time ago I watched videos on distributed systems that introduced load balancing, the snowflake algorithm, consistent hashing and so on; they opened my eyes. Since I could not travel during the Mid-Autumn holiday, I spent the evenings studying, working from a content-based request forwarding tool up to nginx's load balancing algorithms, and now have a basic grasp. As for the other topics, there are no plans for now.

Drafted in late September 2021; lightly revised in mid-November.