Nginx configuration examples
Moving from principles to practice, this article is the hands-on, advanced installment of a deep dive into Nginx. Through the Nginx configuration file we implement six features: forward proxy, reverse proxy, load balancing, caching, static/dynamic separation, and high availability. We then take a closer look at how Nginx works under the hood. If you need to write an Nginx configuration of your own, you can use these examples as a reference.
1. Forward proxy
A forward proxy acts on behalf of the client: the proxy server accesses the target server for the client.
1.1 Practice 1
Goal: entering www.google.com in the browser sends the request to www.google.com through the Nginx forward proxy. Configuration:

```nginx
server {
    resolver 8.8.8.8;   # DNS server used to resolve the target host
    listen 80;
    location / {
        proxy_pass http://$http_host$request_uri;
    }
}
```
On each client that needs to reach the Internet through the proxy, do one of the following:

```bash
# Method 1 (recommended): set the variable for the current shell
export http_proxy=http://<forward-proxy-address>:<port>

# Method 2: make it permanent
vim ~/.bashrc
# then add the line:
export http_proxy=http://<forward-proxy-address>:<port>
```
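To see what this setting does, note that most HTTP clients honor the `http_proxy` environment variable. A minimal Python sketch of the same mechanism (the proxy address below is a placeholder, not a real server):

```python
import os
import urllib.request

# Placeholder forward-proxy address; substitute your own server and port.
os.environ["http_proxy"] = "http://192.0.2.10:3128"

# urllib reads proxy settings from the environment, just as curl and wget do.
proxies = urllib.request.getproxies()
print(proxies["http"])  # http://192.0.2.10:3128
```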
2. Reverse proxy
A reverse proxy is an intermediary server that answers client requests on behalf of the back-end servers; the proxied object is the server.
2.1 Practice 1
Goal: entering www.abc.com in the browser opens the Tomcat home page on the Linux system, reached through the Nginx server. Configuration:

```nginx
server {
    listen 80;
    server_name 192.168.4.32;             # listening address
    location / {
        root html;                        # html directory
        proxy_pass http://127.0.0.1:8080; # forward requests to Tomcat
        index index.html index.htm;       # default pages
    }
}
```
2.2 Practice 2
Goal: route requests to different ports depending on the path entered in the browser. Configuration:

```nginx
server {
    listen 9000;
    server_name 192.168.4.32;   # listening address
    location ~ /example1/ {
        proxy_pass http://127.0.0.1:5000;
    }
    location ~ /example2/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```
`location` directive modifiers:
- `~` : the URI is matched against a case-sensitive regular expression.
- `~*` : the URI is matched against a case-insensitive regular expression.
- `=` : no regular expression; the URI must match the string exactly.
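The difference between these modifiers can be illustrated with plain regular expressions. A rough Python sketch of the idea (not Nginx's actual matching code):

```python
import re

uri = "/IMAGES/Logo.PNG"
pattern = r"\.(gif|jpg|png)$"

# "~"  : case-sensitive regex match; it misses the upper-case extension
case_sensitive = bool(re.search(pattern, uri))
# "~*" : case-insensitive regex match; it matches
case_insensitive = bool(re.search(pattern, uri, re.IGNORECASE))
# "="  : plain string comparison, no regex at all
exact = (uri == "/IMAGES/Logo.PNG")

print(case_sensitive, case_insensitive, exact)  # False True True
```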
3. Load balancing
3.1 Practice 1
Goal: requests to http://192.168.4.32/example/a.html in the browser's address bar are distributed evenly between ports 5000 and 8080, which is the load-balancing effect. Configuration:

```nginx
upstream myserver {
    server 192.168.4.32:5000;
    server 192.168.4.32:8080;
}
server {
    listen 80;
    server_name 192.168.4.32;        # listening address
    location / {
        root html;                   # html directory
        index index.html index.htm;  # default pages
        proxy_pass http://myserver;  # redirect requests to the server list defined in upstream myserver
    }
}
```
Load-balancing strategies supported by Nginx:
- **Round robin (default)**: requests are distributed to the servers one by one, in order. If a server goes down, it is removed automatically.
- **weight**: the higher the weight, the more requests a server receives. The default value is 1. For example:

```nginx
upstream myserver {
    server 192.168.4.32:5000 weight=10;
    server 192.168.4.32:8080 weight=5;
}
```

- **ip_hash**: requests are assigned based on a hash of the client IP address, so each visitor always reaches the same back-end server. For example:

```nginx
upstream myserver {
    ip_hash;
    server 192.168.4.32:5000;
    server 192.168.4.32:8080;
}
```

- **fair** (third-party module): requests are assigned according to the response time of the back-end servers; servers with shorter response times get requests first. For example:

```nginx
upstream myserver {
    fair;
    server 192.168.4.32:5000;
    server 192.168.4.32:8080;
}
```
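The first three strategies can be sketched in a few lines of Python. This only illustrates the idea, not Nginx's implementation (real `ip_hash`, for example, hashes part of the address and accounts for failed servers):

```python
import itertools

servers = ["192.168.4.32:5000", "192.168.4.32:8080"]

# Round robin (default): hand requests to the servers one by one, in order.
rr = itertools.cycle(servers)
first_four = [next(rr) for _ in range(4)]
print(first_four)  # alternates between the two servers

# weight: a server with weight 10 appears 10 times in the pool and therefore
# receives twice as many requests as one with weight 5.
weighted_pool = ["192.168.4.32:5000"] * 10 + ["192.168.4.32:8080"] * 5

# ip_hash: hash the client IP so the same visitor always lands on the
# same back-end server.
def pick_by_ip(client_ip: str) -> str:
    return servers[hash(client_ip) % len(servers)]

print(pick_by_ip("10.0.0.7") == pick_by_ip("10.0.0.7"))  # True: sticky
```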
4. Nginx cache
4.1 Practice 1
Goal: within three days, browser requests to http://192.168.4.32/a.jpg are served from the cache instead of the server; after three days the resource is downloaded from the server again. Configuration:

```nginx
# add to the http block
proxy_cache_path /tmp/nginx_proxy_cache levels=1 keys_zone=cache_one:512m inactive=60s max_size=1000m;

# add to the server block
location ~ \.(gif|jpg|png|htm|html|css|js)(.*) {
    proxy_pass http://192.168.4.32:5000;  # fetch from the back end if the cache misses
    proxy_redirect off;
    proxy_cache cache_one;
    proxy_cache_valid 200 1h;             # cache 200 responses for 1 hour
    proxy_cache_valid 500 1d;             # cache 500 responses for 1 day
    proxy_cache_valid any 1m;             # cache everything else for 1 minute
    expires 3d;
}
```
`expires` sets an expiration time on the resource. With `expires` set, the browser may cache the content until the expiry date, reducing requests and traffic to the server: the browser checks the freshness of its copy by itself, without contacting the server, so no extra traffic is generated. This approach is well suited to resources that rarely change.
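Concretely, `expires 3d;` makes Nginx attach an `Expires` header dated three days in the future (along with a matching `Cache-Control: max-age`). A small Python sketch of how such a header value is computed:

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def expires_header(days: int, now: datetime) -> str:
    """HTTP-date for an Expires header `days` days from `now` (UTC)."""
    return format_datetime(now + timedelta(days=days), usegmt=True)

# Fixed timestamp so the result is reproducible.
now = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(expires_header(3, now))  # Thu, 04 Jan 2024 00:00:00 GMT
```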
5. Static/dynamic separation
5.1 Practice 1
Goal: requests to www.abc.com/a.html in the browser's address bar return static content from the static resource server, while requests to www.abc.com/a.jsp return dynamic content from the dynamic resource server. Configuration:

```nginx
upstream static {
    server 192.168.4.31:80;
}
upstream dynamic {
    server 192.168.4.32:8080;
}
server {
    listen 80;
    server_name www.abc.com;   # listening address

    # intercept dynamic resources
    location ~ .*\.(jsp|php)$ {
        proxy_pass http://dynamic;
    }

    # intercept static resources
    location ~ .*\.(jpg|png|htm|html|css|js)$ {
        root /data/;              # html directory
        proxy_pass http://static;
        autoindex on;             # automatically list the directory contents
    }
}
```
6. High availability
Normally, clients reach the back-end target servers through the primary Nginx server. When the primary server fails, traffic automatically switches to the backup server, which then acts as the primary and proxies requests to the back-end target servers.
6.1 Practice 1
Goal: prepare two Nginx servers, access a virtual IP address from the browser's address bar, stop Nginx on the primary server, and access the virtual IP address again. (1) Install Keepalived on both Nginx servers. Keepalived acts as a router: it runs a script to check whether the current server is still alive; if it is, requests keep going to it, otherwise they switch to the backup server.

```bash
# install keepalived
yum install keepalived -y
# check the version
rpm -q -a keepalived
keepalived-1.3.5-16.el7.x86_64
```
(2) Modify the /etc/keepalived/keepalived.conf configuration file on the primary server (you can simply replace the whole file) to complete the master/backup high-availability setup. Keepalived binds the Nginx servers to a virtual IP address; the Nginx high-availability cluster exposes this virtual IP, and clients access the cluster through it.

```
global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.4.32
    smtp_connect_timeout 30
    router_id LVS_DEVEL            # host name configured in /etc/hosts
}

vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2                     # interval between executions of the check script
    weight 2                       # priority change driven by the check script
}

vrrp_instance VI_1 {
    interface ens7f0               # network interface name
    state MASTER                   # MASTER on the primary server, BACKUP on the backup
    virtual_router_id 51           # must be identical on the primary and the backup
    priority 100                   # the primary should have a higher priority than the backup
    advert_int 1                   # heartbeat interval (default 1 s)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100              # VRRP virtual address; several can be listed
    }
}
```
Field descriptions:
- `router_id`: the host name; configure it in /etc/hosts so the host can be resolved:

```
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 LVS_DEVEL
```

- `interval`: the interval between executions of the check script.
- `weight`: the value (which can be negative) by which the priority is adjusted when the check script fails, i.e. when Keepalived or Nginx is down.
- `interface`: the name of the current network adapter, as shown by `ifconfig`:

```
ens7f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.32  netmask 255.255.252.0  broadcast 192.168.7.255
        inet6 fe80::e273:9c3c:e675:7c60  prefixlen 64  scopeid 0x20<link>
        ...
```
(3) Add the check script nginx_check.sh to the /usr/local/src directory.
```bash
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then
    # nginx is down; try to restart it
    /usr/local/nginx/sbin/nginx
    sleep 2
    # if nginx still is not running, stop keepalived so the backup takes over
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        killall keepalived
    fi
fi
```
(4) Start Nginx and Keepalived on both servers.

```bash
# start keepalived
systemctl start keepalived.service
```
(5) Check the virtual IP address with `ip a`. Then stop Nginx and Keepalived on the primary server 192.168.4.32 and access the virtual IP again to see the failover effect.
7. How Nginx works
After Nginx starts, two kinds of processes run on the Linux system: a master and one or more workers. The master does not handle requests itself; like an administrator, it only hands tasks out to the workers (there are usually several).
```bash
ps -ef | grep nginx
root   20473      1  0  2019 ?        00:00:00 nginx: master process /usr/sbin/nginx
nginx   4628  20473  0 Jan06 ?        00:00:00 nginx: worker process
nginx   4629  20473  0 Jan06 ?        00:00:00 nginx: worker process
```
How do the workers work? When a client sends a request, it first arrives at the master. The master notifies the workers, and the workers compete for the request; the worker that wins then handles it: forwarding the request, reverse proxying, accessing a database (via Tomcat, for example, since Nginx itself does not directly support Java), and so on.
What are the benefits of one master and multiple workers?
- You can run `nginx -s reload` for hot deployment.
- Each worker is an independent process. If one worker has a problem, the other workers keep running and keep competing for requests, so the client's requests are still served and the service is not interrupted.
How many workers are appropriate? Like Redis, Nginx uses an I/O multiplexing mechanism. Each worker is an independent process with a single main thread that handles requests asynchronously and without blocking. The best choice is to set the number of workers equal to the number of CPUs on the server.
Two questions to ponder: (1) How many connections does a worker occupy when handling a request? (2) With one master and four workers, each supporting at most 1024 connections, what is the maximum concurrency the system can support?
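For the second question, a common back-of-the-envelope calculation (one conventional line of reasoning, not necessarily the answer the author has in mind): serving a static resource occupies 2 connections per request (client to worker and back), while reverse proxying occupies 4 (an extra pair between the worker and the back-end server).

```python
workers = 4
worker_connections = 1024  # maximum connections per worker

# Static content: each request holds 2 connections (client <-> worker).
max_static = workers * worker_connections // 2
# Reverse proxy: each request holds 4 connections
# (client <-> worker plus worker <-> back-end server).
max_proxied = workers * worker_connections // 4

print(max_static, max_proxied)  # 2048 1024
```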
Congratulations! You have now learned six ways to configure Nginx and taken a closer look at how it works. The two questions above may come up in interviews; feel free to discuss them in the comments. If this article helped you, please give it a like.