This article mainly introduces how Nginx implements load balancing.

Load Balancing

Before getting into Nginx's implementation, a quick word on how load balancing is classified: it is mainly divided into hardware load balancing and software load balancing. Hardware load balancing uses dedicated appliances; the vendors provide complete, mature solutions such as F5, which are very reliable in terms of stability and data security, but are considerably more expensive than software. Software load balancing is implemented by software such as Nginx, which distributes incoming requests across back-end servers. Simply put, load balancing splits a large number of requests and spreads them over different servers. For example, suppose I have 3 servers, A, B and C, and use Nginx with the round-robin strategy: if 9 requests arrive, they are distributed evenly and each server handles 3 of them (a minimal configuration for this scenario is sketched below). In this way we can use a multi-machine cluster to reduce the pressure on any single server.
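To make the example concrete, a minimal sketch of such an upstream group might look like the following (the group name and the three addresses are placeholders, not taken from a real deployment):

upstream my_cluster {
    server 192.168.0.1;    # server A (placeholder address)
    server 192.168.0.2;    # server B
    server 192.168.0.3;    # server C
}

Round robin is Nginx's default strategy, so no extra directive is needed; a location block would then simply proxy to it with proxy_pass http://my_cluster, and of 9 incoming requests each server would handle 3.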

Load Balancing Policy

NGINX Open Source supports four load balancing methods, and NGINX Plus adds two more.

1. Round Robin: requests are distributed evenly across the servers, with server weights taken into account. This is the default method (there is no directive for enabling it). nginx.conf configuration example:

upstream xuwujing {
    server www.panchengming.com;
    server www.panchengming2.com;
}

Note: the domain names above can also be replaced by IP addresses. A weighted variant is sketched after this list.

2. Least Connections: a request is sent to the server with the least number of active connections, again with server weights taken into account. nginx.conf configuration example:

upstream xuwujing {
    least_conn;
    server www.panchengming.com;
    server www.panchengming2.com;
}

3. IP Hash: the server that receives the request is determined by the client IP address. In this case the hash value is calculated from the first three octets of the IPv4 address, or from the entire IPv6 address. This method guarantees that requests from the same address reach the same server, unless that server is unavailable.

upstream xuwujing {
    ip_hash;
    server www.panchengming.com;
    server www.panchengming2.com;
}

4. Generic Hash: the server that receives the request is determined by a user-defined key, which can be a text string, a variable, or a combination of both.

upstream xuwujing {
    hash $request_uri consistent;
    server www.panchengming.com;
    server www.panchengming2.com;
}

5. Least Time (NGINX Plus only): for each request, NGINX Plus selects the server with the lowest average latency and the lowest number of active connections, where the lowest average latency is calculated from whichever of the following parameters is included with the least_time directive:
header: time to receive the first byte from the server;
last_byte: time to receive the full response from the server;
last_byte inflight: time to receive the full response from the server, taking incomplete requests into account.

upstream xuwujing {
    least_time header;
    server www.panchengming.com;
    server www.panchengming2.com;
}

6. Random: each request is passed to a randomly selected server. If the two parameter is specified, NGINX first randomly selects two servers taking server weights into account, and then chooses one of them using the specified method:
least_conn: the least number of active connections;
least_time=header (NGINX Plus): the least average time to receive the response header from the server ($upstream_header_time);
least_time=last_byte (NGINX Plus): the least average time to receive the full response from the server ($upstream_response_time).

upstream xuwujing {
    random two least_time=last_byte;
    server www.panchengming.com;
    server www.panchengming2.com;
}
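Both Round Robin and Random take server weights into account. As a small illustrative sketch (the weights here are arbitrary), a weighted upstream could be written as:

upstream xuwujing {
    server www.panchengming.com  weight=3;   # receives roughly 3 of every 4 requests
    server www.panchengming2.com weight=1;
}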

Nginx + SpringBoot load balancing environment

Prerequisites: JDK 1.8 or higher, and an Nginx environment. I reuse a SpringBoot project from an earlier article; the project address is github.com/xuwujing/sp… First download the project and run mvn clean package to package it as a jar. Then put application.properties and the jar together in a folder, and make a copy of that folder. In the copied folder, change the port in application.properties, for example to 8086.
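A sketch of the two copies, assuming the project uses the standard Spring Boot server.port property (the folder names here are arbitrary):

# folder springboot-8085: application.properties
server.port=8085

# folder springboot-8086 (the copied folder): application.properties
server.port=8086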

Nginx configuration

upstream pancm {
    server 127.0.0.1:8085;
    server 127.0.0.1:8086;
}

upstream pancm defines a name for the group; each server entry takes an IP:port or a domain name. The default strategy is round robin; if you do not want to use it, specify one of the other strategies. Then add or modify the following configuration in the server block:

server {
    listen       80;
    server_name  127.0.0.1;

    location / {
        root   html;
        proxy_pass http://pancm;
        proxy_connect_timeout 3s;
        proxy_read_timeout 5s;
        proxy_send_timeout 3s;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

Configuration description:
server: defines a virtual host; one http block can contain multiple server blocks.
listen: the port Nginx listens on; 80 is the default.
server_name: the address of the Nginx service; a domain name can be used, and multiple domain names are separated by spaces.
proxy_pass: the proxy target. Normally you configure the name defined by upstream here to get load balancing, but it can also point directly at an IP address.

The complete nginx.conf used here looks like this:

events {
    worker_connections 1024;
}

error_log nginx-error.log info;

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    upstream pancm {
        server 127.0.0.1:8085;
        server 127.0.0.1:8086;
    }

    server {
        listen       80;
        server_name  127.0.0.1;

        location / {
            root   html;
            proxy_pass http://pancm;
            proxy_connect_timeout 3s;
            proxy_read_timeout 5s;
            proxy_send_timeout 3s;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
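Before starting or reloading, the configuration file can be sanity-checked with Nginx's built-in test switch (the paths below match the ones used in this article; adjust them to your installation):

/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf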

Load balancing test

After the Nginx configuration is complete, we start Nginx. On Linux, run /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf; if Nginx is already running, you can hot-reload the configuration with /usr/local/nginx/sbin/nginx -s reload. On Windows, simply double-click nginx.exe in the Nginx directory or run start nginx from cmd; if it is already running, nginx -s reload also hot-reloads the configuration. After Nginx has started, start the SpringBoot project we just downloaded and the copy with the changed port, one after the other, with java -jar springboot-jsp-thymeleaf.jar. Once both have started successfully, we can enter the IP of the service in the browser to access it. Figure: (screenshot of the page in the browser).

Note: I used Windows for this test; on an actual Linux system the process is the same. Then we exercise the interface and watch the console logs.
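If you prefer the command line to refreshing the browser manually, a rough equivalent of the same check (assuming curl is available) is:

# send four requests through Nginx on port 80
for i in 1 2 3 4; do curl -s -o /dev/null http://127.0.0.1/; done
# with round robin, the console logs of the two Spring Boot instances
# should each show two of the requests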

From the sample diagram above we made four interface refresh requests, which were split evenly between the two services, so the test result shows that load balancing has been achieved. A note from practice: when learning and testing, using Nginx's default port 80 causes no problems; but in a real project, if the application has a login page and Nginx listens on a port other than 80, the login page will fail to redirect, and debugging in the browser shows a net::ERR_NAME_NOT_RESOLVED error. The solution is to add proxy_set_header Host $host:port in the location block, where port must be the same port as the one configured by listen (a sketch follows below).
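An illustrative sketch of that fix, assuming Nginx listens on 8080 (any non-80 port would do):

server {
    listen 8080;
    server_name 127.0.0.1;

    location / {
        proxy_pass http://pancm;
        # forward the original host plus the listen port so that redirects
        # (for example after login) point back at Nginx
        proxy_set_header Host $host:8080;
    }
}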

References

docs.nginx.com/nginx/admin…
nginx.org/en/