Nginx is a high-performance HTTP and reverse proxy server with a low memory footprint and high concurrency; in practice it outperforms most other web servers of the same type.

Nginx was developed with performance as its primary goal. It is heavily optimized for efficiency, and reports indicate that a single Nginx instance can support up to 50,000 concurrent connections.

01 Nginx knowledge structure diagram

The knowledge structure of Nginx is as follows:

02 Reverse proxy

**Forward proxy:** computers on a LAN often cannot reach the Internet directly; they access it through a proxy server. This kind of proxy is called a forward proxy.

**Reverse proxy:** the client is unaware of the proxy, because it needs no configuration. It simply sends its request to the reverse proxy server, which selects a target server, fetches the data, and returns it to the client.

In this case, the reverse proxy server and the target server appear to the outside as a single server: only the proxy's address is exposed, and the real server's IP address is hidden.
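In Nginx, a reverse proxy is set up with the `proxy_pass` directive. A minimal sketch, assuming a backend Tomcat on local port 8080 (the domain name is illustrative):

```nginx
server {
    listen 80;
    server_name www.example.com;            # illustrative domain

    location / {
        proxy_pass http://127.0.0.1:8080;   # forward to the hidden backend
    }
}
```

Clients talk only to port 80 on the proxy host; the backend's address stays internal.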

03 Load balancing

The client sends multiple requests to the server; the server processes them, possibly interacting with a database, and returns the results to the client.

The normal request and response process is shown below:

However, as the amount of information grows, the volume of both traffic and data increases rapidly, and this ordinary architecture can no longer meet demand.

The first idea that comes to mind is to upgrade the server's configuration. But with Moore's Law becoming less and less effective, hardware alone can no longer deliver the needed performance gains. So how do we meet this demand?

We can increase the number of servers and build a cluster, distributing requests across multiple servers instead of concentrating them on a single one. This is what we call load balancing.

Load balancing, illustrated:

Suppose 15 requests arrive at the proxy server, which distributes them evenly based on the number of backend servers, 5 requests to each of 3 servers. This process is called load balancing.
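The distribution above can be sketched in a few lines of shell; the ports are hypothetical stand-ins for the three backend servers:

```shell
# Round-robin sketch: Nginx's default policy hands requests to backends in turn,
# so 15 requests over 3 backends end up as 5 each.
assigned=""
i=0
while [ $i -lt 15 ]; do
    case $((i % 3)) in
        0) port=8080 ;;
        1) port=8081 ;;
        2) port=8082 ;;
    esac
    assigned="$assigned $port"
    i=$((i + 1))
done
echo "$assigned"
```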

04 Dynamic and static separation

To speed up website parsing, dynamic and static pages can be handed to different servers, which speeds up parsing and reduces the load on any single server.

Before separation:

After separation:

05 Nginx installation

How to install Nginx on Linux

Reference links:

https://blog.csdn.net/yujing1314/article/details/97267369

Nginx common commands

View version:

./nginx -v

Activation:

./nginx

Shut down (there are two ways; `./nginx -s quit` is recommended):

./nginx -s stop
./nginx -s quit

Reload Nginx configuration:

./nginx -s reload

Nginx configuration file

The configuration file consists of three parts:

(1) The global block

This part runs from the start of the configuration file to the events block, and mainly sets directives that affect the overall operation of the Nginx server.

For example, the larger the value of `worker_processes`, the more concurrent requests Nginx can handle, although this is ultimately limited by hardware and software.

(2) The events block

Common settings here affect the network connection between the Nginx server and its users, such as whether to serialize network connections across multiple worker processes and whether a worker may accept multiple connections at once.

Maximum number of connections supported:
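A sketch of the two blocks discussed so far, with typical default values:

```nginx
worker_processes  1;            # global block: more workers, more concurrency

events {
    worker_connections  1024;   # maximum simultaneous connections per worker
}
```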

(3) The http block

Features such as reverse proxying and load balancing are configured here.

location [ = | ~ | ~* | ^~ ] uri { }

The location directive is used to match request URIs, with the following modifiers:

  • **= :** used for URIs without regular expressions; the request URI must match the string exactly. On a successful match, the search stops and the request is processed immediately.

  • **~ :** the URI pattern is a case-sensitive regular expression.

  • **~* :** the URI pattern is a case-insensitive regular expression.

  • **^~ :** used for URIs without regular expressions; Nginx finds the location with the longest matching prefix and uses it immediately to process the request, without checking regular expressions.

  • If a URI pattern contains a regular expression, it must start with ~ or ~*.
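A sketch of the four modifiers side by side (paths and handlers are illustrative):

```nginx
location = /exact        { return 200 "exact match\n"; }        # strict string match
location ^~ /static/     { root /data; }                        # prefix match, regexes skipped
location ~  \.php$       { proxy_pass http://127.0.0.1:9000; }  # case-sensitive regex
location ~* \.(jpg|png)$ { expires 30d; }                       # case-insensitive regex
```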

06 Reverse proxy in practice

① Configure the reverse proxy

Goal: entering www.123.com in the browser's address bar opens the home page of the Tomcat server running on Linux.

② Concrete implementation

First configure Tomcat; this is relatively simple, so it is not covered in detail here. Verify that it is accessible from Windows:

The specific process is as follows:

Before modification:

The configuration is as follows:
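A sketch of what that configuration amounts to, assuming Tomcat on local port 8080 as described:

```nginx
server {
    listen 80;
    server_name 192.168.25.132;

    location / {
        proxy_pass http://127.0.0.1:8080;   # forward everything to Tomcat
    }
}
```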

Visit again:

③ Reverse proxy 2

Goal:

  • Visiting http://192.168.25.132:9001/edu/ jumps directly to 192.168.25.132:8080

  • Visiting http://192.168.25.132:9001/vod/ jumps directly to 192.168.25.132:8081

**Preparation:** configure two Tomcat servers on ports 8080 and 8081; both can be made accessible by modifying the port in each server's configuration file.

In each, create a new test page whose content is "8080!!" and "8081!!" respectively.

The response is shown below:

The configuration is as follows:
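A sketch of the configuration for the two targets, using regex matching on the /edu/ and /vod/ paths:

```nginx
server {
    listen 9001;
    server_name 192.168.25.132;

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;   # edu pages -> Tomcat 8080
    }

    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081;   # vod pages -> Tomcat 8081
    }
}
```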

Reload Nginx:

./nginx -s reload

Access:

This implements proxying on a single port: switching between the /edu/ and /vod/ paths displays different pages.

Reverse proxy summary

**First example:** the browser accesses www.123.com, which the hosts file resolves to the server IP 192.168.25.132.

Port 80 is then accessed by default; Nginx listens on port 80 and proxies requests to local port 8080, so accessing www.123.com is ultimately forwarded to Tomcat on port 8080.

Second example:

  • Visiting http://192.168.25.132:9001/edu/ jumps directly to 192.168.25.132:8080

  • Visiting http://192.168.25.132:9001/vod/ jumps directly to 192.168.25.132:8081

In fact, Nginx listens on port 9001 and uses regular-expression location matching to decide whether to forward the request to Tomcat on 8080 or 8081.

07 Load balancing in practice

① Modify nginx.conf:
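A sketch of the relevant part of nginx.conf: an `upstream` group naming the two Tomcat instances, and a `proxy_pass` pointing at the group (the upstream name is arbitrary):

```nginx
http {
    upstream myserver {                  # arbitrary group name
        server 192.168.25.132:8080;
        server 192.168.25.132:8081;
    }

    server {
        listen 80;
        server_name 192.168.25.132;

        location / {
            proxy_pass http://myserver;  # requests are balanced across the group
        }
    }
}
```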

② Restart Nginx:

./nginx -s reload

③ In the Tomcat webapps folder of the 8081 instance, create an edu folder and an a.html file whose content is "8081!!".

④ Enter the address in the address bar and press Enter; requests are distributed to the different Tomcat servers:

The load balancing modes are as follows:

  • Polling (round-robin, the default).

  • Weight: the higher the weight, the higher the priority.

  • Fair: requests are allocated according to the response time of the backend servers, with shorter response times given priority.

  • Ip_hash: each request is assigned according to a hash of the client IP address, so each visitor consistently reaches the same backend server, which solves the session-sharing problem.
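In configuration terms, the policies are switched inside the `upstream` block; a sketch (`fair` requires a third-party module):

```nginx
upstream myserver {
    server 192.168.25.132:8080 weight=5;    # weighted round-robin
    server 192.168.25.132:8081 weight=10;   # gets roughly twice the traffic
    # ip_hash;   # pin each client IP to one backend instead
    # fair;      # allocate by response time (third-party module)
}
```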



08 Dynamic and static separation in practice

What is dynamic/static separation? Nginx serves the static pages, while Tomcat handles the dynamic pages.

Dynamic/static separation can be roughly divided into two approaches:

  • Put pure static files under a separate domain name on separate servers; this is the mainstream solution.

  • Deploy dynamic and static files mixed together and let Nginx separate them.

Dynamic/static separation, illustrated:

Preparation: create the static files:

Configure Nginx as shown below:
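A sketch of a dynamic/static split, assuming the static files were placed under /data as prepared above (the paths are illustrative):

```nginx
server {
    listen 80;
    server_name 192.168.25.132;

    location /www/ {          # static HTML served straight from disk
        root  /data;
        index index.html;
    }

    location /image/ {        # static images
        root      /data;
        autoindex on;         # list the directory contents in the browser
    }
}
```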

09 Nginx high availability

If the Nginx server itself fails:

Solutions:

Preparation:

  • Two Nginx servers

  • Install Keepalived

  • Virtual IP

Install Keepalived:

[root@192 usr]# yum install keepalived -y
[root@192 usr]# rpm -q -a keepalived
keepalived-1.3.5-16.el7.x86_64

Modify the configuration file:

[root@192 keepalived]# cd /etc/keepalived
[root@192 keepalived]# vi keepalived.conf

Copy the following configuration and overwrite keepalived.conf. The virtual IP address is 192.168.25.50.

The host IP address needs to be changed as follows:

  • smtp_server 192.168.25.147 (active) / smtp_server 192.168.25.147 (standby)

  • state MASTER (primary) / state BACKUP (secondary)

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.25.147
   smtp_connect_timeout 30
   router_id LVS_DEVEL                        # node name (placeholder; set your own)
}

vrrp_script chk_nginx {
    script "/usr/local/src/nginx_check.sh"    # path of the check script
    interval 2                                # check interval in seconds
    weight 2                                  # weight
}

vrrp_instance VI_1 {
    state BACKUP                  # MASTER on the primary node
    interface ens33               # network interface
    virtual_router_id 51
    priority 90                   # access priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.25.50             # virtual IP
    }
}
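The configuration above references /usr/local/src/nginx_check.sh. A sketch of such a check script; the nginx binary path is an assumption, adjust it to your install:

```shell
#!/bin/bash
# Health check invoked by keepalived every `interval` seconds.
NGINX_BIN=/usr/local/nginx/sbin/nginx         # assumed install path

check_nginx() {
    # count running nginx processes
    if [ "$(ps -C nginx --no-headers | wc -l)" -eq 0 ]; then
        "$NGINX_BIN"                          # nginx is down: try to restart it
        sleep 2
        if [ "$(ps -C nginx --no-headers | wc -l)" -eq 0 ]; then
            # restart failed: kill keepalived so the backup node takes the VIP
            killall keepalived
        fi
    fi
}
# (the real script simply calls check_nginx here)
```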

The startup code is as follows:

[root@192 sbin]# systemctl start keepalived.service

Succeeded in accessing the virtual IP address:

Disable Nginx and Keepalived on host 147, and the virtual IP is still accessible.

Analysis of the principle

As shown in the figure below, Nginx starts a master process and worker processes. The master is the administrator, and the workers are the processes that do the actual work.

How does a worker work? See the diagram below:

Summary

The number of workers should equal the number of CPU cores. One master with multiple workers allows hot deployment, and the workers are independent of each other: the failure of one worker does not affect the others.
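To follow the advice above, the core count can be read on Linux (a sketch; `nproc` comes from GNU coreutils), and `worker_processes auto;` makes Nginx match it automatically:

```shell
cores=$(nproc)      # number of available CPU cores
echo "suggested worker_processes: $cores"
```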