Introduction to Nginx
1.1 Introduction
Nginx (Engine X) is a free, open-source, high-performance HTTP server, IMAP/POP3 proxy server, and TCP/UDP proxy server that can be used as a reverse proxy and for software-based load balancing. Its main features are as follows:
- Nginx follows a modular design: functionality is provided by modules that can be combined as needed, which makes it highly extensible.
- Nginx uses a master-worker architecture. The master process manages one or more worker processes, and each worker process handles the actual connections. If a worker process fails, the master process quickly creates a new worker to keep handling connection requests, ensuring high availability.
- Nginx maintains a very low memory footprint even with a large number of connections, so it supports highly concurrent access.
- Nginx supports hot deployment and hot reloading of configuration, and versions can be upgraded without downtime.
- Both a free open-source version and a commercial version (Nginx Plus) are available, so you can choose one as needed or extend it yourself.
- In high-concurrency scenarios, Nginx responds faster than most other web servers.
1.2 Forward Proxy and Reverse Proxy
Nginx can support both forward and reverse proxies. The differences between the two proxy modes are as follows:
- A forward proxy sits on the client side and is initiated by the client. For example, when we cannot reach a server directly, we can access it indirectly through an intermediate proxy server, which then returns the result to us.
- A reverse proxy sits on the server side, and the client is unaware of it. The user only knows that the request was sent to Nginx, not whether it was forwarded or to which application server; in fact, the user does not need to know, because the result of the request is the same.
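As a minimal reverse-proxy sketch (the backend address 127.0.0.1:8080 and the server name are illustrative, not part of any default configuration), Nginx accepts the client's request and forwards it on the client's behalf:
server {
    listen       80;
    server_name  example.com;

    location / {
        # the client only ever talks to Nginx; the backend below is hidden from it
        proxy_pass http://127.0.0.1:8080;
    }
}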
2. Basic Commands
The basic usage of the nginx command line is as follows:
nginx [-?hvVtTq] [-s signal] [-c filename] [-p prefix] [-g directives]
- -?, -h: displays help information.
- -v: displays the version number.
- -V: displays the version number and build configuration information.
- -t: checks the configuration file for syntax errors.
- -T: checks the configuration file and dumps it to standard output.
- -q: silent mode; suppresses all non-error messages during configuration testing.
- -s signal: sends a signal to the master process. The supported signals are stop (stop immediately), quit (stop gracefully), reload (reload the configuration file), and reopen (reopen the log files so that logging continues in a new file).
- -p prefix: specifies the path prefix. It defaults to the installation directory, for example /usr/app/nginx-1.16.1/.
- -c filename: specifies the location of the configuration file. It defaults to conf/nginx.conf; the actual path used is prefix + filename.
- -g directives: sets global configuration directives from the command line.
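For example, a typical command-line workflow might look like the following (the commands assume nginx is on the PATH):
nginx -t             # check the configuration for syntax errors
nginx                # start Nginx
nginx -s reload      # reload the configuration without stopping the server
nginx -s quit        # stop gracefully once current requests are finished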
3. Configuration Format
3.1 Basic Configuration Format
Nginx configuration is made up of both global configuration and local configuration (blocks). All blocks follow the same configuration format:
<section> {
    <directive> <parameters>;
}
Directive blocks are delimited by curly braces; each block forms an independent configuration context that contains directives and their parameters, and every directive ends with a semicolon. In addition, Nginx configuration also supports the following:
- The include directive can be used to import external configuration files, so that each independent piece of configuration can be split into its own file;
- The # symbol can be used to add comments;
- The $ symbol can be used to reference variables;
- The parameters of some directives support regular expressions.
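A minimal sketch combining these features is shown below; the include path and the redirect rule are illustrative only, not part of the default configuration:
http {
    # import every virtual host file from a separate directory
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 8080;

        location / {
            # $host and $request_uri are built-in variables referenced with $
            return 301 https://$host$request_uri;
        }

        # the parameter of this location directive is a regular expression
        location ~* \.(jpg|png)$ {
            root /data/images;
        }
    }
}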
3.2 Units of Time and Space
Nginx configuration files support the following space and time units:
- Space units: the default unit is bytes; k/K (kilobytes), m/M (megabytes), and g/G (gigabytes) are also supported.
- Time units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), M (months, 30 days), and y (years, 365 days) are supported and can be combined, for example 1h 30m (1 hour 30 minutes) or 1y 6M (1 year and 6 months).
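For example (the directive values below are illustrative, not defaults):
http {
    client_max_body_size  10m;      # space unit: 10 megabytes
    keepalive_timeout     75s;      # time unit: 75 seconds
    proxy_read_timeout    1h 30m;   # combined time units: 1 hour 30 minutes
}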
3.3 Official Configuration Template
After Nginx is installed, there is an official configuration example nginx.conf in the conf directory of the installation directory, which is as follows:
# Configures the user and group that run the worker processes; if no group is
# specified it defaults to the user's group. The default user is nobody.
#user  nobody;

# Number of worker processes used to handle client connections, usually set to
# the number of CPU cores (1.5 to 2 times the core count for IO-intensive workloads).
worker_processes  1;

# Location of the error log and its log level. Levels from low to high:
# debug | info | notice | warn | error | crit
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    # max_clients = worker_processes * worker_connections
    worker_connections  1024;
}

http {
    # use include to reference an external file
    include       mime.types;
    default_type  application/octet-stream;

    # define the log output format
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    # sendfile() sends data directly in kernel space, i.e. zero copy
    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    # enable gzip compression
    #gzip  on;

    # configure an Nginx server (virtual host)
    server {
        # listening port
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        # error redirect pages
        #error_page  404              /404.html;
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # multiple virtual hosts can be configured
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
4. Deploying a Static Website
Nginx is usually used as an HTTP server to deploy static resources. The steps are as follows:
4.1 Adding Configurations
Modify nginx.conf and add the following configuration inside the http block:
server {
    # listening port
    listen       9010;
    # the underscore matches any host name
    server_name  _;
    # root directory for the static resources
    root         /usr/web;
    # default index page
    index        index.html;
}
Create a new test page, index.html, in the /usr/web directory with the following content:
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport"
content="Width =device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Nginx static resource site</title>
</head>
<body>
<h1 style="text-align: center">Nginx static resource site</h1>
</body>
</html>
4.2 Checking the Configuration
Use the -t option to check the configuration; "test is successful" in the output indicates that the configuration is correct:
[root@node1 web]# nginx -t -c conf/nginx.conf
nginx: the configuration file /usr/app/nginx-1.16.1/conf/nginx.conf syntax is ok
nginx: configuration file /usr/app/nginx-1.16.1/conf/nginx.conf test is successful
4.3 Reloading the Configuration
Start Nginx. If Nginx is already running, you can reload the configuration with the following command:
nginx -s reload
Visit http://hostname:9010/index.html and you should see the static home page.
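You can also verify it from the command line with curl (replace hostname with your server's address):
curl http://hostname:9010/index.html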
5. Load Balancing
5.1 Deploying Backend Services
Here the same test project is deployed to two Tomcat containers started with Docker. The project is placed in the host directories /usr/webapps01 and /usr/webapps02, which are mounted as the containers' webapps directories; the two instances are exposed on ports 8080 and 8090 respectively:
docker run -d -it --privileged=true -v /usr/webapps01:/usr/local/tomcat/webapps \
-p 8080:8080 --name tomcat8080 96c4e536d0eb
docker run -d -it --privileged=true -v /usr/webapps02:/usr/local/tomcat/webapps \
-p 8090:8080 --name tomcat8090 96c4e536d0eb
5.2 Load Balancing Configuration
Modify nginx.conf and add the following configuration inside the http block:
upstream springboot {
    server 192.168.0.226:8080;
    server 192.168.0.226:8090;
}

server {
    listen 9020;

    location / {
        proxy_pass http://springboot;
    }
}
After reloading the configuration, open a browser and access the project through port 9020; Nginx will distribute the requests between ports 8080 and 8090 in turn. When testing load balancing policies, it is best to disable the browser's cache to avoid skewing the results.
5.3 Load Balancing Policies
In the above configuration we did not specify any load balancing policy, so the default policy, round-robin, is used. In addition, Nginx also supports the following load balancing policies:
1. Weighted load balancing
Different weights can be assigned to different servers so that requests are distributed in proportion to the weights. Configuration example:
upstream myapp1 {
server srv1.example.com weight=3;
server srv2.example.com weight=2;
server srv3.example.com;
}
2. Least-connections load balancing
Requests are forwarded to the server with the fewest active connections. Configuration example:
upstream myapp1 {
least_conn;
server srv1.example.com;
server srv2.example.com;
server srv3.example.com;
}
3. IP hash load balancing
The client's IP address is hashed (and taken modulo) to pick the target server, so requests from the same client are always routed to the same server. Configuration example:
upstream myapp1 {
ip_hash;
server srv1.example.com;
server srv2.example.com;
server srv3.example.com;
}
These are the basic load balancing policies built into Nginx. More complex policies can be implemented with third-party modules.
5.4 Declaring a Backup Server
With any of the above policies, a backup server can be declared with the backup parameter, as shown in the following example:
server backup1.example.com:8080 backup;
Backup servers do not participate in load balancing unless all primary servers are down; only then does Nginx forward requests to them.
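Putting it together, a sketch of an upstream group with a backup server might look like this (the host names are illustrative):
upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com;
    # used only when both primary servers are unavailable
    server backup1.example.com:8080 backup;
}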
6. Separating Dynamic and Static Content
6.1 Configuring Dynamic and Static Separation
Thanks to its support for highly concurrent access and features such as static resource caching, Nginx is better suited to serving static resources than dynamic application servers such as Tomcat. To separate static from dynamic content, simply add a regular-expression location to the server block that matches the static resources and specifies where they are stored, as shown in the following example:
server {
    listen 9020;

    location / {
        proxy_pass http://springboot;
    }

    # use a regular expression to match the static resources that should be served separately
    location ~* \.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
        # root directory where the static resources are stored
        root /usr/resources/;
    }
}
6.2 Common Configuration Errors
1. No such file or directory
The first common problem is that static resources cannot be found. In this case, check error.log in the logs directory:
2019/09/01 17:12:43 [error] 12402#0: *163 open() "/usr/resources/spring-boot-tomcat/css/show.css" failed (2: No such file or directory), client: 192.168.0.106, server: , request: "GET /spring-boot-tomcat/css/show.css HTTP/1.1", host: "192.168.0.226:9020", referrer: "http://192.168.0.226:9020/spring-boot-tomcat/index"
This problem occurs because Nginx requires the storage path of a static resource to match the original request path exactly. Here my project is unpacked in Tomcat under the name spring-boot-tomcat, so the correct storage path of show.css is:
/usr/resources/spring-boot-tomcat/css/show.css
That is, static resource root directory + project name + original path. This exception usually occurs because the project-name level is omitted when the directories are created.
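As a sketch of how the mapping works: with the root directive, Nginx appends the full request URI to the configured root path, so the directory layout on disk must mirror the request path.
# request:      GET /spring-boot-tomcat/css/show.css
# root:         /usr/resources/
# file served:  /usr/resources/spring-boot-tomcat/css/show.css   (root + request URI)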
2. Permission denied
Once the path is correct, another common problem is insufficient permissions; the error log looks like the one below. In this case, make sure the user configured by the user directive has permission to access the directory where the static resources reside, or create the static resource directory as that user:
2019/09/01 17:15:14 [error] 12402#0: *170 open() "/usr/resources/spring-boot-tomcat/css/show.css" failed (13: Permission denied), client: 192.168.0.106, server: , request: "GET /spring-boot-tomcat/css/show.css HTTP/1.1", host: "192.168.0.226:9020", referrer: "http://192.168.0.226:9020/spring-boot-tomcat/index"
References
Nginx documentation, Using nginx as HTTP load balancer
For more articles, see the author's full-stack engineer manual on GitHub: github.com/heibaiying/…