These are notes previously kept in Youdao Cloud Notes, released and shared one by one together with my own understanding from the learning process. Discussion and feedback are welcome.
The article includes the following three points:
1. Reverse proxy
2. Load balancing
3. Static resource caching
Schematic 1: Load balancing
Schematic 2: Reverse proxy
A forward proxy acts on behalf of clients, while a reverse proxy acts on behalf of servers.
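As a minimal sketch of a reverse proxy (the backend address 127.0.0.1:8080 is a placeholder, not part of the original setup), Nginx only needs a proxy_pass directive to forward requests on behalf of a server:

```nginx
# Minimal reverse-proxy sketch: Nginx listens on port 80 and
# forwards every request to a single backend (placeholder address).
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # Pass the real client address through to the backend
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The client only ever talks to Nginx; the backend server stays hidden behind it.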
Build an Nginx environment in Linux
1. Required installation packages
Nginx environment dependencies
Nginx installation package
2. Compile and install the environment dependencies
2.1 Switch to the root user
zjp@zjp-virtual-machine:~$ sudo su
[sudo] password for zjp:
root@zjp-virtual-machine:/home/zjp#
2.2 Go to the folder containing the installation packages

```shell
tar zxvf openssl-1.0.0e.tar.gz     # decompress the package
cd openssl-1.0.0e                  # enter the openssl-1.0.0e folder
./config && make && make install   # start the installation
```

Note: OpenSSL's configure script is named `./config`; most other source packages use `./configure && make && make install` instead.
3. Install Nginx

```shell
tar zxvf nginx-1.12.1.tar.gz        # decompress the package
cd nginx-1.12.1                     # enter the folder
./configure && make && make install # start the installation
```
4. Start Nginx
After the installation
Start command: ./nginx
Restart command: ./nginx -s reload
After starting, open a terminal and run `ps aux | grep nginx` to view the Nginx processes.
The Nginx master process listens on port 80. Open a browser at http://localhost:80; if the Nginx welcome page is displayed, the installation succeeded.
5. Modify nginx.conf
Open /usr/local/nginx/conf/nginx.conf with a text editor (or vi).
Add an upstream block inside the http block (the cluster name tomcatServers matches the full example later):

```nginx
upstream tomcatServers {
    server 192.168.0.155:8080;
    server 192.168.0.155:9090;
}
```
Add:
```nginx
server {
    listen       80;
    server_name  localhost;
    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location / {
        proxy_pass http://tomcatServers;   # forward to the upstream cluster
        #root   html;
        #index  index.html index.htm;
    }
    ...
}
```
Save the modification and restart Nginx
In a browser, open http://localhost:80/ followed by the project name.
The request is served either by the Tomcat_8080 server
or by the Tomcat_9090 server.
Load balancing
As the business expands and the number of users grows, a single backend server behind the Nginx proxy struggles in both performance and response speed. Load balancing across several backend servers relieves the pressure of highly concurrent requests on any one of them.
1. Upstream
Load balancing uses the HTTP Upstream module built into Nginx, which schedules requests from clients across a group of backend servers with a simple algorithm. Upstream currently supports four scheduling algorithms:
- A. Round robin (default): the default scheduling algorithm; requests from clients are distributed to the backend servers one by one in order of arrival. A weight can be set per server: the higher the weight, the higher the probability that the server is chosen.
- B. ip_hash: each visitor's IP address is hashed and pinned to a particular server, which effectively solves the problem of web sessions living on different servers.
- C. fair: requests are assigned by backend response time; the shorter the response time, the more likely the server is chosen. This is a third-party module (upstream_fair) that Nginx does not ship by default, so it must be downloaded separately.
- D. url_hash: requests are distributed by the hash of the requested URL, so each URL is always directed to the same server, improving the efficiency of backend caches. Nginx likewise does not ship this algorithm; a third-party hash module is required.
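As a sketch of the url_hash idea (note: this uses the built-in `hash` directive available in Nginx 1.7.2 and later rather than the third-party url_hash module mentioned above; the server addresses are placeholders):

```nginx
upstream backend {
    # Distribute requests by the hash of the request URI, so the
    # same URL always reaches the same backend server; "consistent"
    # minimizes remapping when servers are added or removed.
    hash $request_uri consistent;
    server 192.168.0.155:8080;
    server 192.168.0.155:9090;
}
```

On older Nginx versions without the `hash` directive, the third-party module remains necessary.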
2. Status parameters supported by Upstream
In the Upstream module of Nginx, besides specifying servers and ports with the server directive, you can also set each server's state for load balancing. The currently supported states are:
- A. down: the server temporarily does not participate in load balancing.
- B. backup: a reserved backup server; it receives client requests only when the other servers are down or busy, so it carries the least load.
- C. max_fails: the number of failed requests allowed, 1 by default. When the limit is exceeded, the error defined by proxy_next_upstream is returned.
- D. fail_timeout: how long the server is suspended after max_fails failures; usually used together with max_fails.
Note: when the scheduling algorithm is ip_hash, the server states weight and backup cannot be used.
Example 1:
```nginx
upstream backend {
    server A max_fails=3 fail_timeout=4s weight=9;
    server B max_fails=3 fail_timeout=4s weight=9;
    server C max_fails=3 fail_timeout=4s weight=9;
    server D backup;
    server E backup;
}
```
Example 2:
```nginx
upstream backend {
    ip_hash;
    server A max_fails=3 fail_timeout=4s;
    server B max_fails=3 fail_timeout=4s;
    server C max_fails=3 fail_timeout=4s;
}
```
proxy_connect_timeout: the timeout for establishing a connection to a backend server, 60 seconds by default.
fail_timeout: if the server does not respond within this period, it is considered failed; 10 seconds by default.
max_fails: the number of connection failures allowed, 1 by default.
If a server fails, the wait time is roughly: proxy_connect_timeout + fail_timeout × max_fails. For example, with the defaults this is 60s + 10s × 1 = 70 seconds.
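A sketch tying the three parameters together in one place (the upstream name "backend" and the server address are placeholders; the values shown are the defaults made explicit):

```nginx
upstream backend {
    # Mark the server failed after 1 failure, suspend it for 10s (defaults)
    server 192.168.0.155:8080 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_connect_timeout 60s;  # default connection timeout
    }
}
```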
Static resource caching
1. Cache directive: proxy_cache_path
Format: proxy_cache_path + path + levels + keys_zone + inactive + max_size
- path – the location where cache files are stored
- levels – the cache directory hierarchy; up to 3 levels, each level 1 or 2 characters long, e.g. levels=1:2
- keys_zone – the name and size of the cache zone; each defined cache path must use a different zone name
- inactive – how long a cached file may go unaccessed before it is removed
- max_size – the upper limit on the total size of the cache; when it is exceeded, the least recently used entries are removed
Example: proxy_cache_path /usr/local/nginx/cache levels=1:2 keys_zone=webpages:30m max_size=2g;
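To make levels=1:2 concrete: Nginx names each cache file after the MD5 of its cache key, uses the last 1 hex character of that hash as the first-level directory and the 2 preceding characters as the second level. A sketch (the hash shown is an illustrative example, not one produced by this setup):

```nginx
# With this cache path definition ...
proxy_cache_path /usr/local/nginx/cache levels=1:2 keys_zone=webpages:30m max_size=2g;

# ... a response whose cache-key MD5 is b7f54b2df7773722d382f4809d65029c
# is stored on disk as:
#   /usr/local/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
#                          ^  ^^
#            last 1 hex char  preceding 2 hex chars
```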
proxy_cache
Format: proxy_cache + cache_name
Specifies the name of the cache zone to use. The same zone can be used in different places.
proxy_cache_valid
Format: proxy_cache_valid + reply_code + time;
Example: proxy_cache_valid 200 10m;
Nginx.conf configuration example:
```nginx
events {
    worker_connections 1024;
}

http {
    upstream tomcatServers {
        server 192.168.0.153:8080 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.0.155:9090 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.0.155:9091 backup;
    }

    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    proxy_cache_path /usr/local/nginx/cache levels=1:2 keys_zone=webpages:30m max_size=2g;

    server {
        listen      80;
        server_name localhost;
        add_header X-Via   $server_addr;
        add_header X-Cache $upstream_cache_status;

        location / {
            proxy_pass http://tomcatServers;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_cache webpages;
            proxy_cache_valid 200 10m;
            #root   html;
            #index  index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
```
Create the cache directory (the path must match proxy_cache_path):
$ mkdir -pv /usr/local/nginx/cache
$server_addr – the address of the server that handled the request
$upstream_cache_status – the cache status; possible values include MISS, HIT, EXPIRED, STALE, UPDATING, etc. If the X-Cache header shows HIT, the cache was hit, i.e. the cached file was served.
To inspect the cache, enter the cache path (root permission required) and use the cat command to view the cached files.