Introduction to Nginx
1. Nginx ("engine x") is a high-performance HTTP and reverse-proxy web server. It is noted for its small memory footprint and strong concurrency; in fact, Nginx handles concurrent connections better than comparable web servers.
2. Nginx was developed with performance optimization as the primary goal. It can withstand high loads, with reports indicating that a single instance can support up to 50,000 concurrent connections.
Forward proxy
1. A forward proxy is a server that sits between the client and the origin server. To fetch content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy forwards the request to the origin server and returns the response to the client. The typical use of a forward proxy is to give LAN clients behind a firewall access to the Internet. Unlike a reverse proxy, a forward proxy is known to and actively used by the end user. For example, after installing SwitchySharp in Chrome, you can use it to forward requests through a proxy; the user must configure SwitchySharp in advance to get this effect.
Reverse proxy
1. A reverse proxy server sits between the user and the target server, but to the user the reverse proxy server is the target server: the user needs neither the target server's address nor any client-side settings. A reverse proxy server is usually used for web acceleration, that is, placed in front of the web servers to reduce network and server load and improve access efficiency.
Load balancing
1. Load balancing means spreading load (work tasks) across multiple operating units, such as FTP servers, web servers, and enterprise core application servers, so that they complete the work cooperatively. Load balancing is built on top of the existing network structure. It provides a transparent, inexpensive, and effective way to expand the bandwidth of servers and network devices, strengthen data-processing capacity, increase throughput, and improve the availability and flexibility of the network.
Static and dynamic separation
1. Static and dynamic separation is an architectural design method in which static pages (or static content interfaces) and dynamic pages (or dynamic content interfaces) are served by different systems, improving the access performance and maintainability of the whole service.
Install Nginx on a Linux VM
1. Install compilation tools and library files
yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel
2. Install PCRE (Nginx uses it for the rewrite module). Most virtual machine images already ship with PCRE, so this step is often unnecessary.
# switch directory
cd /usr/local/src
# download pcre
wget http://downloads.sourceforge.net/project/pcre/pcre/8.35/pcre-8.35.tar.gz
# decompress the archive
tar zxvf pcre-8.35.tar.gz
# enter the source directory
cd pcre-8.35
# compile and install
./configure
make && make install
# check the pcre version
pcre-config --version
3. Install and start Nginx
# download nginx
cd /usr/local/src
wget http://nginx.org/download/nginx-1.6.2.tar.gz
# decompress the archive
tar zxvf nginx-1.6.2.tar.gz
# enter the source directory
cd nginx-1.6.2
# compile and install
./configure
make && make install
# check the nginx version
/usr/local/nginx/sbin/nginx -v
# start nginx
cd /usr/local/nginx/sbin
./nginx
# view the nginx processes
ps -ef | grep nginx
# check the firewall status
systemctl status firewalld
# stop the firewall entirely (not recommended)
systemctl stop firewalld
# list the current firewall rules
firewall-cmd --list-all
# temporarily open port 80 (lost after a reload)
firewall-cmd --add-port=80/tcp
# permanently open port 80
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
# permanently remove the open port
firewall-cmd --permanent --remove-port=80/tcp
firewall-cmd --reload
# list all allowed ports
firewall-cmd --list-ports
Use Nginx
1. Nginx common commands
(1) To use the nginx command, first enter the nginx sbin directory
cd /usr/local/nginx/sbin
(2) Common commands
# check the syntax of the configuration file
./nginx -t
# stop the nginx service
./nginx -s stop
# start the nginx service
./nginx
# reload the nginx.conf file without restarting the service
./nginx -s reload
2. Nginx configuration files
(1) Nginx configuration file location
# nginx.conf is in the conf directory
cd /usr/local/nginx/conf
(2) The configuration file consists of three parts: the global block, the events block, and the http block
# a leading '#' comments out the line
#user  nobody;
# number of worker processes
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

# events block
events {
    # maximum number of simultaneous connections each worker process supports;
    # this setting has a significant impact on Nginx performance
    worker_connections  1024;
}

# http block
http {
    # http global block
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    # server block
    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
Global block: everything from the start of the configuration file up to the events block. It sets directives that affect the overall operation of the Nginx server: the user (and group) that runs the server, the number of worker processes to spawn, the PID file path, the log path and level, and configuration file imports.
Events block: its directives mainly affect the network connections between the Nginx server and users. Common settings include whether to serialize accepting connections across the worker processes, whether a worker may accept multiple new connections at once, which event-driven model to use for processing connection requests, and the maximum number of simultaneous connections each worker process supports.
Http block: most functionality lives here, such as proxying, caching, log definition, and the configuration of third-party modules. It comprises an http global block and server blocks. The http global block holds directives such as file imports, MIME-type definitions, log customization, connection timeouts, and the maximum number of requests per connection. Server blocks are closely tied to virtual hosts: from the user's point of view a virtual host is indistinguishable from an independent hardware host, and the technique exists to save hardware costs. Each http block can contain multiple server blocks, each equivalent to one virtual host. A server block in turn consists of a global server part and multiple location blocks. Global server part: the most common configuration is the listening port and the host's name or IP. Location block: matches the portion of the request string after the virtual host name (for example the /uri-string in server_name/uri-string) to process specific requests. Address redirection, data caching, response control, and many third-party modules are configured here.
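As a compact summary, the nesting described above can be sketched like this (a minimal skeleton, not a complete working configuration):

```nginx
# ---------- global block ----------
worker_processes  1;            # number of worker processes

events {                        # ---------- events block ----------
    worker_connections  1024;   # max connections per worker
}

http {                          # ---------- http block ----------
    keepalive_timeout  65;      # http global part

    server {                    # one server block = one virtual host
        listen       80;        # global server part: port ...
        server_name  localhost; # ... and host name or IP

        location / {            # location block: matches the request URI
            root   html;
            index  index.html;
        }
    }
}
```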
3. Nginx reverse proxy configuration Example 1
Objective: visit www.123.com in a browser on a Windows machine and have nginx act as a reverse proxy to reach the tomcat home page running on a Linux server
(1) Install and configure JDK
Jingyan.baidu.com/article/0aa…
(2) Install and run Tomcat: Download and decompress the Tomcat installation package, and then use it
www.cnblogs.com/socketqiang…
# open port 8080 for tomcat
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload
(3) In the Windows hosts file, map the www.123.com domain name to the VM's IP address. When you access a domain name from a browser, the system first looks for the corresponding IP in the local hosts file; only if it is not found there does the lookup go to the DNS server; if neither finds it, an error is displayed and the domain cannot be accessed.
// Add the domain-name-to-IP mapping (the VM's IP) to the hosts file
// located at C:\Windows\System32\drivers\etc
192.168.249.129 www.123.com
(4) Configure request forwarding in Nginx (i.e. reverse proxy)
This configuration means that requests arriving at 192.168.249.129:80 (a URL without an explicit port defaults to port 80) are forwarded via proxy_pass to 127.0.0.1:8080. Once this reverse proxy is configured, restart nginx and visit www.123.com in your browser.
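A minimal sketch of such a server block in nginx.conf (the server_name value is an assumption based on the VM IP used in this example):

```nginx
server {
    listen       80;
    server_name  192.168.249.129;

    location / {
        # forward every request to the local tomcat service
        proxy_pass http://127.0.0.1:8080;
    }
}
```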
Explanation: visiting www.123.com in the browser means visiting www.123.com:80. Because 192.168.249.129 www.123.com is configured in the hosts file, the request goes to 192.168.249.129:80. Since nginx.conf is configured with request forwarding, 192.168.249.129:80 forwards to 127.0.0.1:8080, that is, to the tomcat service on the VM.
4. Nginx reverse proxy configuration example 2
Goal: use an Nginx reverse proxy to route requests to different port services based on the access path. Nginx listens on port 9001; requests to http://127.0.0.1:9001/edu/ are forwarded to 127.0.0.1:8081, and requests to http://127.0.0.1:9001/vod/ are forwarded to 127.0.0.1:8082.
(1) Prepare two tomcat services: create folders tomcat8081 and tomcat8082 and decompress the tomcat archive in each. In the server.xml file under each conf directory, set the tomcat HTTP ports to 8081 and 8082 respectively, and change the corresponding tomcat shutdown ports so the two instances do not conflict.
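A sketch of the relevant edits in tomcat8081/conf/server.xml (the 8015 shutdown port is an assumed value; the only requirement is that the two instances' ports not collide):

```xml
<!-- tomcat8081/conf/server.xml: change both the shutdown port and the connector port -->
<Server port="8015" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8081" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" />
    </Engine>
  </Service>
</Server>
```

For tomcat8082, use a different shutdown port (e.g. 8025) and connector port 8082.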
(2) In tomcat8081, create an edu folder under the webapps directory and put an a.html file in it whose content is "8081". In tomcat8082, create a vod folder under the webapps directory and put an a.html file in it whose content is "8082". Start both tomcat services with the startup.sh script in each bin directory.
(3) Modify the nginx configuration by adding another server block, then restart nginx
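A minimal sketch of the added server block (the regular-expression locations follow the goal stated above):

```nginx
server {
    listen       9001;
    server_name  192.168.249.129;

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8081;
    }

    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8082;
    }
}
```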
The modifiers that can appear after the location keyword:
= : used before a URI that contains no regular expression; the request string must match the URI exactly. On a successful match the search stops and the request is processed immediately.
~ : indicates that the URI contains a regular expression and is case sensitive.
~* : indicates that the URI contains a regular expression and is case insensitive.
^~ : used before a URI without a regular expression; asks the Nginx server to find the location whose URI is the longest prefix match for the request string and use it immediately, without consulting the regular-expression locations.
Note: if the URI contains a regular expression, it must be marked with ~ or ~*.
(4) Final effect: visiting http://192.168.249.129:9001/edu/a.html is forwarded to the tomcat server on port 8081, which finds a.html in the edu folder and displays "8081"; visiting http://192.168.249.129:9001/vod/a.html is forwarded to the tomcat server on port 8082, which finds a.html in the vod folder and displays "8082".
5. Nginx load balancing configuration Example 1
Goal: start two tomcat servers on ports 8081 and 8082, create a loadBalance folder in each server's webapps directory, and add a b.html file to each; their contents are "load balancing 8081" and "load balancing 8082" respectively. When the browser requests http://192.168.249.129:9002/loadBalance/b.html, the load is balanced between the 8081 and 8082 servers in round-robin fashion: one request is dispatched to the 8081 server, the next to the 8082 server, and so on in a cycle.
(1) Prepare the tomcat servers: create the b.html file on both tomcat servers and start them
(2) Configure load balancing in Nginx
This configuration says that requests to http://192.168.249.129:9002/ whose path contains loadBalance are forwarded to the configured upstream group myServer; the load-balancing policy here is round-robin across the two servers.
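A minimal sketch of this load-balancing configuration in nginx.conf (the upstream name myServer follows the text; round-robin is the default policy, so no policy directive is needed):

```nginx
http {
    upstream myServer {
        server 192.168.249.129:8081;
        server 192.168.249.129:8082;
    }

    server {
        listen       9002;
        server_name  192.168.249.129;

        location ~ /loadBalance/ {
            proxy_pass http://myServer;
        }
    }
}
```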
(3) Result presentation
Common load balancing policies:
Round-robin (default): each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is automatically removed
Weight: weight defaults to 1; the higher the weight, the more requests a server is assigned. In the following configuration, requests are sent to the 8082 server twice as often as to the 8081 server
upstream server_pool {
    server 192.168.249.129:8081 weight=1;
    server 192.168.249.129:8082 weight=2;
}
Ip_hash: assigns requests based on a hash of the client IP. This guarantees that requests from the same client always reach the same server, preserving the session: each visitor is pinned to one backend server, which solves the problem of sessions not being shared across servers
upstream server_pool {
ip_hash;
server 192.168.249.129:8081;
server 192.168.249.129:8082;
}
Fair (third-party policy): requires a third-party module. Requests are assigned according to the response time of the backend servers; servers with shorter response times are given priority
upstream server_pool {
fair;
server 192.168.249.129:8081;
server 192.168.249.129:8082;
}
6. Static and dynamic separation configuration
(1) Nginx static-dynamic separation simply means handling dynamic and static requests separately; it should not be understood as merely physically separating dynamic pages from static pages. Use Nginx to serve static pages and tomcat to serve dynamic pages. From an implementation standpoint there are currently two broad approaches. One is to move static files to a separate domain name on dedicated servers, which is the mainstream recommended scheme. The other is to keep dynamic and static files mixed together and publish them separately through Nginx.
Plan 1:
(2) Nginx uses location blocks keyed on different file suffixes to forward requests differently. The expires directive sets a browser cache expiration time, reducing requests to the server. Specifically, expires gives a resource a validity period; when the URL is visited within that period, the browser sends a conditional request and the server compares the resource's last-modified time with that of the cached copy. If they match, the server returns status 304 and the browser uses its cached copy; otherwise the resource is re-downloaded from the server. This approach is suitable for static resources that rarely change.
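A sketch of such suffix-based routing with browser caching, placed inside the server block (the paths and the 3-day validity period are assumed values):

```nginx
# serve images, styles, and scripts directly from disk with a cache lifetime
location ~ .*\.(gif|jpg|jpeg|png|css|js)$ {
    root    /data/static;
    expires 3d;   # within 3 days an unchanged resource is answered with 304
}

# everything else still goes to the dynamic backend
location / {
    proxy_pass http://127.0.0.1:8080;
}
```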
7. Static and dynamic separation example 1
Target: set the access paths for static resources through location blocks
(1) Put a .png image in the img folder and an a.html file in the html folder, then configure the static resource access paths in the nginx configuration file. autoindex on lists the contents of the directory being accessed
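A minimal sketch of the static-resource locations (the /data root path is an assumption):

```nginx
server {
    listen       80;
    server_name  192.168.249.129;

    location /img/ {
        root /data;     # /img/a.png is served from /data/img/a.png
        autoindex on;   # list the directory contents when a directory is requested
    }

    location /html/ {
        root /data;
        index index.html index.htm;
    }
}
```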
(2) Access resources in a browser
Autoindex :on Lists accessed static resources
Access still image resources
Access static file resources
Nginx configures a highly available cluster
Objective: To configure two Nginx servers so that if one Nginx server becomes unavailable, the other Nginx server can continue to provide services.
(1) High availability is one of the factors that must be considered in distributed system architecture design. It usually means reducing, by design, the time during which the system cannot provide service
(2) Keepalived's purpose is to monitor server state. If a web server goes down or develops a fault, Keepalived detects it, removes the faulty server from the system, and lets another server take over its work; when the server recovers, Keepalived automatically adds it back to the cluster. All of this happens automatically with no manual intervention; the only manual step is repairing the faulty server
(3) Prepare two Linux servers, 192.168.79.128 (the MASTER) and 192.168.79.129 (the BACKUP). Install Nginx and Keepalived on both. Configure the keepalived.conf file (in the /etc/keepalived directory) on both the primary and backup servers, and place a script in /usr/local/src on both servers that checks whether nginx is running. Configure a reverse proxy on both so that requests to port 9010 are proxied to the server's local tomcat service. Add a demo folder to the tomcat webapps directory on both the primary and backup servers, each containing an index.html file.
// Master server keepalived.conf configuration file contents
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 192.168.79.128
smtp_connect_timeout 30
router_id MASTER // identifier for this server (matches the hosts entry below)
vrrp_skip_check_adv_addr
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_script check_nginx {
script "/usr/local/src/nginx_check.sh" // Check whether nginx is running properly
interval 2
weight 2
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.79.100 // virtual IP
}
# track_script {
#     check_nginx
# }
}

// Backup server keepalived.conf configuration file contents
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 192.168.79.129
smtp_connect_timeout 30
router_id BACKUP
vrrp_skip_check_adv_addr
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_script check_nginx {
script "/usr/local/src/nginx_check.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.79.100 // virtual IP
}
# track_script {
#     check_nginx
# }
}

// nginx_check.sh script contents. Note: if the script is pasted from Windows into Linux,
// run sed -i 's/\r$//' nginx_check.sh to replace the Windows \r\n line endings with the Linux \n.
// https://blog.csdn.net/ouyang_peng/article/details/86488451
#!/bin/bash
# count the nginx processes
counter=$(ps -C nginx --no-heading | wc -l)
# if the count is 0, nginx is not running or has hung: try to start it
if [ "${counter}" = "0" ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    # query the nginx process count again
    counter=$(ps -C nginx --no-heading | wc -l)
    # if the count is still 0, stop the keepalived process, forcing the virtual IP
    # to migrate to the backup server to complete the failover
    # (adjust this command as required)
    if [ "${counter}" = "0" ]; then
        systemctl stop keepalived.service
    fi
fi

// Add this line to the hosts file in the etc directory on the primary server
127.0.0.1 MASTER
// Add this line to the hosts file in the etc directory of the backup server
127.0.0.1 BACKUP
// Add this forwarding configuration to the nginx configuration files of both the primary and backup servers
server {
listen 9010;
server_name 127.0.0.1;
location ~ /demo/ {
        proxy_pass http://127.0.0.1:8080;
    }
}

// index.html in the demo folder of tomcat's webapps on the primary server
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<h1>High availability: Keepalived routed this request through the primary Nginx server to the Tomcat server at 192.168.79.128</h1>
</body>
</html>
// index.html in the demo folder of tomcat's webapps on the backup server
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<h1>High availability: Keepalived routed this request through the backup Nginx server at 192.168.79.129 to its Tomcat server</h1>
</body>
</html>
(4) Start the nginx and keepalived services on both the primary and backup servers; keepalived is started with systemctl start keepalived.service
(5) Access the virtual IP at http://192.168.79.100:9010/demo/index.html; you can see that the primary server's nginx forwards the request to its tomcat server
Run ip addr on the primary server 192.168.79.128; you can see that the virtual IP is currently bound to the primary server's interface
(6) Stop the nginx service on the primary server. The nginx_check script will then stop keepalived, and when the virtual IP 192.168.79.100 is accessed again, the traffic drifts to the backup server 192.168.79.129
Run ip addr on the backup server; you can see that the virtual IP is now bound to the backup server's interface
Nginx principle analysis
1. Nginx architecture: after Nginx starts, there are two kinds of processes on the Linux system: a master and workers. The master acts as an administrator and does not do any request-handling work itself; it only assigns tasks to the workers (there are generally multiple workers).
2. How does worker work?
When a client sends a request, it first reaches the master. The master notifies the workers of the request, and the workers compete for it through a scramble mechanism. The worker that wins handles the request: forwarding it, reverse-proxying it (Nginx itself cannot serve Java directly, so Java requests go through tomcat), accessing a database, and so on.
3. Benefits of one master and multiple workers
(1) nginx -s reload enables hot deployment: workers in the middle of handling requests finish their work, while idle workers reload the configuration.
(2) Each worker is an independent process. If one worker fails, the others keep running independently and continue competing for tasks, so client requests are still served and there is no service interruption.
4. Set the appropriate number of workers
Nginx is similar to Redis in that both use an IO-multiplexing mechanism (which is why both are typically deployed on Linux servers). Each worker is an independent process with a single main thread, handling requests asynchronously and non-blockingly, so each worker can extract the full performance of one CPU. Therefore it is most appropriate to set the number of workers equal to the number of CPUs on the server.
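In the global block this rule of thumb can be expressed directly; worker_processes auto (available in modern Nginx versions) lets Nginx detect the CPU count itself:

```nginx
# match the worker count to the CPU count
worker_processes  auto;

# or pin it explicitly, e.g. on a 4-core server:
# worker_processes  4;
```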