Installing Nginx in Docker to implement load balancing

1 Installing nginx with Docker

This process mounts the configuration files onto the host machine, so you can modify the configuration there.

Step 1: Pull the nginx image (available on hub.docker.com)

docker pull nginx

Step 2: View local images

docker images

Step 3: Start a simple, temporary container (the purpose is to obtain the default nginx configuration files, which by default are placed under /etc/nginx)

docker run --name nginx -d nginx

To check that nginx installed correctly, you can list the contents of the container's /etc/nginx folder:

docker exec nginx ls -l /etc/nginx

Step 4: Copy the installed configuration files (copy the files from the nginx container to a specified directory on the host)

docker cp -a nginx:/etc/nginx /usr/local/docker/nginx

Step 5: The temporary nginx container is no longer needed, so force-remove it

docker rm -f nginx

Step 6: Start the nginx container from the image, mounting the copied configuration

sudo docker run -p 80:80 --restart always --name nginx \
-v /usr/local/docker/nginx/:/etc/nginx/ \
-v /usr/local/docker/nginx/conf.d:/etc/nginx/conf.d \
-d nginx

Step 7: Verify access by visiting the host's IP in a browser; the nginx welcome page should appear.

2 Basic nginx operations

Create and start the nginx container under Docker (if it already exists, there is no need to create and start it again)

sudo docker run -p 80:80 --restart always --name nginx \
-v /usr/local/docker/nginx/:/etc/nginx/ \
-v /usr/local/docker/nginx/conf.d:/etc/nginx/conf.d \
-d nginx

Note: Starting the container requires root privileges (hence the sudo).

Start nginx

docker start nginx

Restart nginx

docker restart nginx

Stop nginx

docker stop nginx

Enter the nginx container

docker exec -it nginx bash

View the nginx version (run inside the container)

nginx -v

Reload the configuration file (run inside the container)

service nginx reload

3 Core configuration file

The core configuration of Nginx is nginx.conf in the conf directory. Its initial default configuration is as follows:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    # nested loading of the per-server configuration
    include /etc/nginx/conf.d/*.conf;
}

In the docker environment, nginx uses nested loading mode, that is, the main configuration is in /etc/nginx/nginx.conf, while the usual server configuration is in /etc/nginx/conf.d. See the include directive section in the main configuration. By default, there is a default.conf file in the conf.d directory, which is the basic server configuration. No matter what configuration is used, nginx.conf consists of only three parts, for example:

§ Global block: the part from the start of the configuration file up to the events block; it mainly configures server-wide resources such as the worker process count and log locations

§ Events block: mainly network-connection settings; the better the hardware, the higher the connection count can be set

§ HTTP block: the core of the nginx configuration, where request forwarding, load balancing, and so on are configured
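For reference, the stock default.conf under conf.d looks roughly like the following; treat it as a sketch, since the exact contents vary slightly between nginx versions:

```nginx
server {
    listen       80;
    server_name  localhost;

    # serve the built-in static welcome page
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
```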

3 Getting Started

Step 1: Prepare the Spring Boot web service and package it into a JAR (for example, tomcatweb8901.jar)

Build the Spring Boot project into three JAR packages, on ports 8901, 8902, and 8903, and place them in a directory on the host machine

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

	@Value("${server.port}")
	private String port;

	// Dynamically obtain the port number of the real server
	@RequestMapping("/hello")
	public String doSayHello() {
		return "server:" + port + " say hello";
	}
}

Place the project in the host's /usr/local/tomcatweb directory; any directory works, as long as you can find it later

Start the Spring Boot project in the background; start all three projects this way

nohup java -jar tomcatweb8901.jar > catalina8901.out 2>&1 &
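As a convenience, the three services can be started with a small loop. This is a sketch that assumes the jars are named tomcatweb8901.jar, tomcatweb8902.jar, and tomcatweb8903.jar and sit in the current directory:

```shell
# Start all three Spring Boot services in the background,
# each with its own log file (jar names are assumed)
for port in 8901 8902 8903; do
  nohup java -jar "tomcatweb${port}.jar" > "catalina${port}.out" 2>&1 &
done
```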

View startup run logs

cat catalina8901.out

Description of the nohup command

nohup: runs the command immune to hangups, so the process keeps running after you exit the account.

> catalina8901.out: redirects the command's standard output to the named log file in the current directory, so the output is written to the file instead of the screen. If no file is given, nohup appends its output to nohup.out in the current directory; if that file is not writable, output goes to $HOME/nohup.out instead. If no file can be created or opened for appending, the command is not run.

2>&1: 2 is standard error and 1 is standard output; this redirects standard error to standard output, so errors and normal output end up in the same log file.

java -jar: runs the Spring Boot project. Run on its own, the project only lives as long as the session and stops when you exit the console.

The final &: runs the job in the background.

nohup java -jar waimaiurl-0.0.1-SNAPSHOT.jar > /home/ubuntu/CLF/test.log 2>&1 &

In general: nohup java -jar <project>.jar > /<path>/<log name>.log 2>&1 &
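The redirection semantics described above are easy to check with a tiny experiment (demo.log is just a throwaway file name):

```shell
# "> demo.log 2>&1" sends stdout to demo.log, then points stderr
# at the same place, so both streams land in the one file
{ echo "to stdout"; echo "to stderr" >&2; } > demo.log 2>&1
cat demo.log
```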

Killing background processes

Use jobs -l to view the nohup processes running in the background

jobs -l

Run the kill -9 PID command to kill the process

kill -9  pid

Step 2: Configure request forwarding

Configure nginx to implement request forwarding. In the Docker environment, edit the default.conf file under the host's /usr/local/docker/nginx/conf.d/ directory. The detailed forwarding configuration is as follows:

server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        proxy_pass http://192.168.174.130:8901;
        #root   /usr/share/nginx/html;
        #index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Step 3: Restart nginx after configuration.

docker restart nginx

Step 4: Access the resource to verify that request forwarding works.

http://ip:80/hello

After accessing it, the response shows the port of the backend server that handled the request

4 Nginx load balancing in practice

A cluster of Tomcat servers is required to handle high concurrency, which in turn requires a reverse proxy server with load balancing configured.

Note: Prepare three Tomcat servers. The port numbers are 8901, 8902, and 8903.

The startup mode is explained in detail in “3 Getting Started”

(1) Round-robin policy (default)

Note: Servers are accessed in turn, in the order listed in the configuration file.

In the Docker environment, edit the default.conf file under the host's /usr/local/docker/nginx/conf.d/ directory:

upstream gateways {
    server 192.168.227.131:8901;
    server 192.168.227.131:8902;
    server 192.168.227.131:8903;
}

# configure the server that proxies to the backends
server {
    listen 80;
    server_name localhost;

    location / {
        # forward HTTP requests to the upstream group
        proxy_pass http://gateways;
    }
}

After the modification, restart nginx

docker restart nginx

Then send a request using the host's IP address and the request path

http://ip:80/hello

(2) Weight strategy

Note: You can give some servers a higher weight, so that they handle more of the requests

upstream gateways {
    server 192.168.227.131:8901 weight=6;
    server 192.168.227.131:8902 weight=3;
    server 192.168.227.131:8903 weight=1;
}
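With the weights above (6:3:1, out of a total of 10), the expected long-run share of traffic per backend is a quick calculation:

```shell
# Expected traffic share for weights 6, 3 and 1 (total = 10)
total=10
for w in 6 3 1; do
  echo "weight $w -> $((w * 100 / total))% of requests"
done
```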

After the modification, restart nginx

docker restart nginx

Then send a request using the host's IP address and the request path

http://ip:80/hello

(3) ip_hash policy

Note: In a cluster deployment, users must be logged in to perform sensitive operations. But because nginx load-balances requests across servers that do not share session data, users are repeatedly asked to log in again, which is a poor user experience.

Problem: With nginx load-balancing Tomcat, the user may hit a different server on each request.

Solution: The ip_hash policy lets the same user reach the same server on every request.

ip_hash works by hashing the client's IP address, so requests from the same client always land on the same backend server.
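As a rough illustration of the idea (this is not nginx's actual algorithm; for IPv4, nginx hashes the first three octets of the client address), hashing the client IP and taking it modulo the server count pins each client to one backend:

```shell
# Hash a client IP and map it to one of 3 backends.
# cksum stands in for nginx's internal hash; illustration only.
ip="192.168.227.50"
hash=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
backend=$(( hash % 3 ))
echo "client $ip -> backend #$backend"
```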

The configuration is as follows:

upstream gateways {
    ip_hash;
    server 192.168.227.131:8901 weight=6;
    server 192.168.227.131:8902 weight=3;
    server 192.168.227.131:8903 weight=1;
}

Problems with ip_hash

  1. If the server a client is hashed to goes down, that client's service fails.

  2. The load may be uneven; some servers can end up much busier than others (observable in tests).

ip_hash is rarely used in production; it mostly appears in testing.

After the modification, restart nginx

docker restart nginx

Then send a request using the host's IP address and the request path

http://ip:80/hello

5 Common nginx upstream attributes

(1) Down attribute

Note: If a server goes down, you can mark it as down in the configuration file so that nginx no longer routes requests to it.

upstream gateways {
		#ip_hash;
		server 192.168.227.131:8901 down;
		server 192.168.227.131:8902;
		server 192.168.227.131:8903;
}

(2) backup attribute

Note: This configures a standby server. Normally it receives no traffic; it is used only when all the other hosts are down or busy.

upstream gateways {
		#ip_hash;
		server 192.168.227.131:8901 down;
		server 192.168.227.131:8902;
		server 192.168.227.131:8903 backup;
}

(3) Automatic failover for downed servers (max_fails / fail_timeout)

Note: When accesses to a server fail the maximum number of times, nginx automatically marks it as down. After the configured period elapses, nginx tries the failed machine again; if it has recovered, it is put back into service.

max_fails=1 specifies the maximum number of failed attempts

fail_timeout=60s sets the period to 60 seconds

upstream gateways {
		#ip_hash;
		server 192.168.227.131:8901 max_fails=1 fail_timeout=60s;
		server 192.168.227.131:8902 max_fails=1 fail_timeout=60s;
		server 192.168.227.131:8903 max_fails=1 fail_timeout=60s;
}

6 Nginx interview question analysis

(1) Why doesn't nginx use a multi-threaded model to manage connections?

  1. Use separate processes so that they do not affect each other. If one process crashes unexpectedly, services of other processes are not interrupted, improving the reliability of the architecture.

  2. Processes do not share resources and do not need to be locked, thus eliminating the overhead associated with locking.

(2) Why doesn't nginx use multi-threading to process business logic?

  1. The number of worker processes already matches the number of CPU cores; creating extra threads to handle tasks would only preempt the existing workers and increase context-switching costs.

  2. As the access layer, nginx mostly forwards data; network I/O waits are already handled in a non-blocking, fully asynchronous, event-driven way, so without extra CPUs multi-threading gains nothing. If there is blocking logic in a worker, the business itself should resolve it; for example, OpenResty uses Lua coroutines to optimize blocking operations.