Nginx
What is Nginx?
- Nginx is an open-source, high-performance, highly reliable HTTP web server and proxy server.
- Open source: the Nginx source code is freely available;
- High performance: it supports massive concurrency;
- High reliability: the service runs stably;
Why Nginx
High performance, high concurrency
Under normal load, a single request gets a fast response; under peak load (tens of thousands of concurrent requests), Nginx still responds to requests faster than other web servers.
High scalability
Nginx is highly modular. The official distribution provides many excellent modules, and modules can be added or removed quickly.
High reliability
High reliability means that Nginx can run on a server continuously, whereas many web servers need a restart after weeks or months of operation. Because a high-concurrency, high-performance reverse proxy like Nginx usually sits at the front of a website's architecture, offering 99.99% or 99.999% availability means the allowed downtime per year is measured in seconds. In such a role, Nginx's high reliability gives us a very good guarantee.
Hot deployment
Hot deployment is the ability to upgrade Nginx without stopping the service, which is very important. For an ordinary service you can simply kill the process and restart it, but if Nginx is killed while it holds many client connections, it sends TCP reset packets to those clients, and many clients do not handle a reset well, which causes exceptions. Because Nginx separates the master (management) process from the worker processes, it can replace the Nginx executable while serving traffic 7×24 without interruption. It likewise supports configuration updates and log file replacement without stopping the service.
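As a sketch of how this looks in practice, both the reload and the binary hot upgrade are driven by signals sent to the master process (the PID placeholders are illustrative):

```
# configuration reload and log rotation without stopping the service:
nginx -s reload    # re-read nginx.conf (equivalent to kill -HUP <master-pid>)
nginx -s reopen    # re-open log files after rotation (kill -USR1 <master-pid>)

# binary hot upgrade: replace the nginx binary on disk first, then:
kill -USR2 <old-master-pid>   # start a new master and workers from the new binary
kill -WINCH <old-master-pid>  # gracefully shut down the old workers
kill -QUIT <old-master-pid>   # once traffic looks healthy, quit the old master
```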
Widely used
First, Nginx is mature technology covering the functions enterprises use most, such as proxying, proxy caching, load balancing, static resource serving, dynamic/static separation, HTTPS, LNMP, LNMT, and so on.
Secondly, a unified Nginx technology stack reduces maintenance cost and the cost of technology upgrades.
Network model
Nginx uses the epoll network model, while Apache typically uses the select network model.
select: when a request arrives, the select model linearly scans all file descriptors, so performance is poor at scale.
epoll: when a request arrives, the epoll model is notified only of the descriptors that are ready, so it processes requests efficiently.
Nginx application scenarios
Nginx's main usage scenarios fall into three categories: static resource serving, proxy services, and security services. The detailed scenarios are described below.
The following figure shows the basic architecture of a website: user requests first arrive at Nginx, then go to an application server such as Tomcat or PHP, and the application server accesses databases such as Redis and MySQL to provide the underlying data.
Here a problem arises: application code is written for development efficiency, so its runtime efficiency is low and its concurrency is limited. We therefore need many application servers grouped into a cluster to serve more users.
Once the application services are clustered, we need our Nginx to have reverse proxy capabilities that can forward dynamic requests to the cluster service.
But once many applications are clustered, two requirements follow:
1. Application servers need to scale dynamically.
2. If some services fail, failover (disaster recovery) is needed.
So our reverse proxy must have load-balancing capability.
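As a minimal sketch of that reverse-proxy-plus-load-balancing setup (the upstream name and backend addresses are illustrative):

```nginx
upstream app_cluster {                    # illustrative name and addresses
    server 10.0.0.11:8080;                # application node 1 (e.g. Tomcat)
    server 10.0.0.12:8080;                # application node 2
    server 10.0.0.13:8080 backup;         # only used when the others fail
}

server {
    listen 80;
    location / {
        proxy_pass http://app_cluster;            # forward dynamic requests to the cluster
        proxy_set_header Host $host;              # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP to the backend
    }
}
```

Nodes can be added to or removed from the upstream block as the cluster scales, and failed nodes are skipped automatically.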
Second, as network links grow longer, user-perceived latency increases. If Nginx caches "dynamic" content that does not change for a period of time and serves it to users directly, request latency drops sharply. The reverse proxy therefore gains another feature, "caching", which speeds up access.
Much of the time we are accessing static resources such as documents, PDFs, MP4s, and PNGs. There is no need to forward these requests through Nginx to the backend application service; Nginx can serve "static resources" directly. This is Nginx's static resource feature.
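A minimal sketch of serving static resources directly from Nginx (the directory and cache lifetime are illustrative):

```nginx
# static files never reach the backend application servers
location ~* \.(png|jpg|gif|mp4|pdf|css|js)$ {
    root /code/static;     # illustrative directory holding static assets
    expires 7d;            # let browsers cache static assets for 7 days
    access_log off;        # optional: do not log static hits
}
```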
A plain-HTTP website can be hijacked and tampered with. With the HTTPS secure communication protocol, data is encrypted in transit, which effectively prevents hackers from stealing or tampering with it and avoids information leakage along the way, greatly improving site security.
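A minimal HTTPS server block, as a sketch (the domain and certificate paths are illustrative):

```nginx
server {
    listen 443 ssl;
    server_name www.birenchong.cn;                  # illustrative domain
    ssl_certificate     /etc/nginx/ssl/server.crt;  # illustrative certificate paths
    ssl_certificate_key /etc/nginx/ssl/server.key;
    location / {
        root /code;
        index index.html;
    }
}

# optionally redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name www.birenchong.cn;
    return 301 https://$host$request_uri;
}
```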
PS: Nginx has a lot more functionality than just the one listed above.
Nginx component
Here we compare Nginx’s architecture to that of a car:
A car provides the basic capabilities, but it still needs a driver to steer it; the places it passes form a GPS track; and if something goes wrong on the road, we need a black box to analyze whether the fault lies with the car itself or with the driver's operation.
- The first component is the Nginx binary executable: a binary built from the Nginx framework and related modules. It is the car itself, providing all the functionality.
- The second component is the nginx.conf file: the binary executable already provides many features, but nginx.conf determines which of them are enabled and defines how requests are handled. It is therefore the driver that controls the car's behavior.
- The third component, Access.log, is a GPS track of all the places the car passes through. Access.log records every HTTP request and response that Nginx processes.
- The fourth component, error.log: it acts as a black box. When unexpected problems occur, you can use error.log to locate the problem.
Nginx component summary
The components of Nginx complement each other: the Nginx binary executable together with the nginx.conf file defines how Nginx handles requests.
If we want to do operations and maintenance work on the Nginx service, we need to analyze access.log further.
If there are any unknown errors, or if the expected behavior is inconsistent, use error.log to locate the underlying problem.
Quick installation of Nginx
There are several ways to install Nginx software:
- Compile from source => Nginx (1. any version 2. complex installation 3. …)
- EPEL repository => Nginx (1. older version 2. easy installation 3. configuration hard to read)
- Official repository => Nginx (1. newer version 2. easy installation 3. readable configuration; recommended)
Dependencies required to install Nginx software
[root@web ~]# yum install -y gcc gcc-c++ autoconf pcre pcre-devel make automake httpd-tools
Configure nginx official yum source
[root@web ~]# vim /etc/yum.repos.d/CentOS-nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
Install the Nginx service, start it, and enable it at boot
[root@web ~]# yum install nginx -y
[root@web ~]# systemctl enable nginx
[root@web ~]# systemctl start nginx
Access test
Access the server's IP address to fetch the default page.
Check the Nginx software version and compile parameters
[root@web ~]# nginx -v
nginx version: nginx/1.16.0
[root@web ~]# nginx -V
View the overall directory and related functions
You can use rpm -ql nginx to view the overall directory structure and the purpose of each path. The tables below summarize the important Nginx configuration files.
Nginx main configuration files
Path | Type | Role |
---|---|---|
/etc/nginx/nginx.conf | Configuration file | Nginx main configuration file |
/etc/nginx/conf.d/default.conf | Configuration file | Default website configuration file |
Nginx proxy parameter file
Path | Type | Role |
---|---|---|
/etc/nginx/fastcgi_params | Configuration file | fastcgi proxy configuration file |
/etc/nginx/scgi_params | Configuration file | scgi proxy configuration file |
/etc/nginx/uwsgi_params | Configuration file | uwsgi proxy configuration file |
Nginx encodings related configuration files
Path | Type | Role |
---|---|---|
/etc/nginx/win-utf | Configuration file | Nginx encoding conversion mapping file |
/etc/nginx/koi-utf | Configuration file | Nginx encoding conversion mapping file |
/etc/nginx/koi-win | Configuration file | Nginx encoding conversion mapping file |
/etc/nginx/mime.types | Configuration file | Mapping between Content-Type and file extensions |
Nginx manages related commands
Path | Type | Role |
---|---|---|
/usr/sbin/nginx | Command | Nginx command-line management tool |
/usr/sbin/nginx-debug | Command | Nginx command-line debugging tool |
Nginx log related directories and files
Path | Type | Role |
---|---|---|
/var/log/nginx | Directory | Nginx default log directory |
/etc/logrotate.d/nginx | Configuration file | Nginx default log rotation configuration |
Nginx basic configuration
Nginx main configuration file /etc/nginx/nginx.conf is a plain text file. The entire configuration file is organized in blocks. Typically, each block begins and ends with a pair of curly braces {}.
The Nginx main configuration file is divided into three sections: CoreModule (the core module), EventModule (the event-driven module), and HttpCoreModule (the HTTP core module).
CoreModule (core module)
user www;                       # user that the Nginx worker processes run as
worker_processes 1;             # number of worker processes (recommended: equal to the number of CPUs, or auto)
error_log /log/nginx/error.log; # Nginx error log path
pid /var/run/nginx.pid;         # file storing the PID of the running Nginx master process
EventModule (event module)
events {
    worker_connections 25535; # maximum number of connections per worker process
    use epoll;                # event-driven model; epoll is the default
}
HttpCoreModule (HTTP core module)
http {   # start of the http block; each server{} represents one website
    server {
        listen 80;                      # listening port, 80 by default
        server_name birenchong.cn;      # domain name served
        access_log access.log;          # access log of this website
        # control the path of your site
        location / {
            root /usr/share/nginx/html; # where the site source code lives
            index index.html index.htm; # default files returned for the site
        }
    }
    ...
    # second virtual host configuration
    server {
        ...
    }
    include /etc/nginx/conf.d/*.conf;   # load the .conf files under /etc/nginx/conf.d/
    # include avoids writing too much into the main configuration file and keeps the overall configuration clearer.
}   # end of the http block
PS: what is the relationship between http, server, and location in Nginx?
The http block handles user requests and responses at the HTTP level.
A server block responds for one specific website.
A location block matches specific URI paths within a site.
An http{} block may contain multiple server{} blocks, and a server{} block may contain multiple location{} blocks.
Nginx virtual host
In general, an enterprise may run many business systems. How are multiple sets of business services configured on Nginx?
If you deploy in the manner shown earlier, each Nginx configuration needs its own server. Deployed as virtual hosts, however, multiple independent sites run on the same Nginx: what looks like several business systems actually runs on a single Nginx service.
Nginx can configure virtual hosts in the following three ways:
Method 1: based on multiple IP addresses
Method 2: based on multiple ports
Method 3: based on multiple host names (multiple domain names)
Multi-IP virtual host configuration in practice
Configure multiple IP addresses across multiple NICs
server {
    ...
    listen 10.0.0.10:80;
    ...
}
server {
    ...
    listen 10.0.0.11:80;
    ...
}
Multi-port virtual host configuration in practice
Nginx's multi-port virtual host mode is configured as follows.
Change the listen port, making sure it does not conflict with ports already in use.
[root@web ~]# cat /etc/nginx/conf.d/port1.conf
server {
    ...
    listen 80;
    ...
}
[root@web ~]# cat /etc/nginx/conf.d/port2.conf
server {
    ...
    listen 81;
    ...
}
[root@web ~]# cat /etc/nginx/conf.d/port3.conf
server {
    ...
    listen 82;
    ...
}
Virtual host configuration based on host name
1. Create the corresponding Web site directory and program code
[root@web ~]# mkdir /code/{server1,server2}
[root@web ~]# echo "server1" > /code/server1/index.html
[root@web ~]# echo "server2" > /code/server2/index.html
2. Configure virtual hosts with different domain names
[root@web ~]# cat /etc/nginx/conf.d/server1.conf
server {
    listen 80;
    server_name 1.birenchong.cn;
    root /code/server1;
    index index.html;
    ...
}
[root@web ~]# cat /etc/nginx/conf.d/server2.conf
server {
    listen 80;
    server_name 2.birenchong.cn;
    root /code/server2;
    index index.html;
    ...
}
Nginx Location
What is Location?
The location directive controls which URI paths of a website a request matches.
Location syntax
location [ = | ~ | ~* | ^~ ] uri { ... }
location @name { ... }

# Priority of the matching rules:
# =   exact match                                   priority 1
# ^~  prefix match on a leading string              priority 2
# ~   case-sensitive regular-expression match       priority 3
# ~*  case-insensitive regular-expression match     priority 4
# /   generic match; any request can match it       priority 5
Location Priority syntax example
[root@web conf.d]# cat location.birenchong.cn.conf
server {
listen 80;
server_name location.birenchong.cn;
location = / {
default_type text/html;
return 200 'location = /';
}
location / {
default_type text/html;
return 200 'location /';
}
location /documents/ {
default_type text/html;
return 200 'location /documents/';
}
location ^~ /images/ {
default_type text/html;
return 200 'location ^~ /images/';
}
location ~* \.(gif|jpg|jpeg)$ {
default_type text/html;
        return 200 'location ~* \.(gif|jpg|jpeg)';
    }
}

# curl test results:
# 1. http://location.birenchong.cn/ is matched by: location = /
# 2. http://location.birenchong.cn/index.html is matched by: location /
# 3. http://location.birenchong.cn/documents/1.html is matched by: location /documents/
# 4. http://location.birenchong.cn/images/1.gif is matched by: location ^~ /images/
# 5. http://location.birenchong.cn/documents/1.jpg is matched by: location ~* \.(gif|jpg|jpeg)$
Location rule application scenarios
Each rule below can be verified with tests.
[root@web conf.d]# cat location2.birenchong.cn.conf
server {
listen 80;
server_name location2.birenchong.cn;
    # any request will match this generic location
    location / {
        root html;
        index index.html;
    }
    # the requested URI must be exactly /nginx_status
    location = /nginx_status {
        stub_status;
    }
    # strictly case-sensitive: everything ending in .php goes to this location
    location ~ \.php$ {
        default_type text/html;
        return 200 'PHP access successful';
    }
    # strictly case-sensitive: everything ending in .jsp goes to this location
    location ~ \.jsp$ {
        default_type text/html;
        return 200 'JSP access successful';
    }
    # case-insensitive: jpg, gif, png, js, css all go to this location
    location ~* \.(jpg|gif|png|js|css)$ {
        return 403;
    }
    # case-insensitive match
    location ~* \.(sql|bak|tgz|tar.gz|.git)$ {
        deny all;
    }
}
Nginx common modules
Nginx directory index
When the ngx_http_index_module module cannot find an index file, it usually passes the request on to the ngx_http_autoindex_module module.
The ngx_http_autoindex_module module processes requests ending with a slash ('/') and generates a directory listing.
1. Directives
# enable or disable directory-listing output: on enables it, off disables it.
Syntax : autoindex on | off;
Default: autoindex off;
Context: http,server,location
# specifies whether the exact file size should be printed in the directory list, on in bytes, off in approximate units.
Syntax : autoindex_exact_size on | off;
Default: autoindex_exact_size on;
Context: http,server,location
# specify whether the time in the directory list should be printed in local time zone or UTC. On Local time zone, off UTC time.
Syntax : autoindex_localtime on | off;
Default: autoindex_localtime off;
Context: http,server,location
2. Scenario example
[root@web ~]# cat /etc/nginx/conf.d/mirror.birenchong.cn.conf
server {
listen 80;
server_name mirror.birenchong.cn;
    charset utf-8;    # set the character set to prevent garbled Chinese output
    autoindex on;
    autoindex_exact_size off;
    autoindex_localtime on;
    location / {
        root /code/;
    }
}
3. Scenario example: Set up the internal YUM repository
[root@web mirror]# cat /etc/nginx/conf.d/mirror.birenchong.cn.conf
server {
listen 80;
server_name mirror.birenchong.cn;
charset utf-8;
root /mirror;
location / {
index index.html;
}
    # yum repository directory
    location /repo {
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
    }
}
Nginx access control
Restrict address access
The ngx_http_access_module module lets you restrict access for certain client addresses.
1. Directives
# allow directive syntax
Syntax: allow address | CIDR | unix: | all;
Default: -
Context: http,server,location,limit_except
# reject config syntax
Syntax : deny address | CIDR | unix: | all;
Default: -
Context: http,server,location,limit_except
2. For example, only the specified source IP address is allowed to access /centos, and all other network segments are denied.
[root@web ~]# cat /etc/nginx/conf.d/mirror.birenchong.cn.conf
server {
listen 80;
server_name mirror.birenchong.cn;
    charset utf-8;    # set the character set to prevent garbled Chinese output
    autoindex on;
    autoindex_exact_size off;
    autoindex_localtime on;
    location / {
        index index.html;
    }
    location /centos {
        allow 127.0.0.1;     # allow an address or address segment
        allow 10.0.0.1/32;
        deny all;            # deny everyone else
    }
}
3. In this scenario, the specified IP address is denied access to /centos on this site, and all other IP addresses are allowed.
[root@web ~]# cat /etc/nginx/conf.d/mirror.birenchong.cn.conf
server {
listen 80;
server_name mirror.birenchong.cn;
    charset utf-8;    # set the character set to prevent garbled Chinese output
    autoindex on;
    autoindex_exact_size off;
    autoindex_localtime on;
    location / {
        index index.html;
    }
    location /centos {
        deny 10.0.0.1/32;    # deny the specified address or address segment
        allow all;           # allow all other addresses
    }
}
- Note: the order of 'deny' and 'allow' matters
- Rules are evaluated starting from the first one
- Once a rule matches, the rules after it are not evaluated.
- If no rule has matched yet, evaluation continues with the next rule.
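A small sketch of why the order matters (the addresses and path are illustrative):

```nginx
# wrong: "deny all" matches first, so the allow below is never reached
location /centos {
    deny  all;
    allow 10.0.0.1/32;   # dead rule: evaluation already stopped at "deny all"
}

# right: list the specific allow rules first, then deny everyone else
location /centos {
    allow 10.0.0.1/32;
    deny  all;
}
```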
Identity authentication
The ngx_http_auth_basic_module module enables HTTP Basic Authentication, which validates usernames and passwords to restrict access to resources.
1. Directives
# enable user name and password authentication using HTTP basic authentication protocol.
Syntax : auth_basic string | off;
Default: auth_basic off;
Context: http,server,location,limit_except
# specify the file to save the username and password
Syntax : auth_basic_user_file file;
Default: -
Context: http,server,location,limit_except
2. Specify a file to save the user name and password in the following format:
You can use htpasswd or the "openssl passwd" command to generate the corresponding password.
name1:passwd1
name2:passwd2
Create a new password file with htpasswd: -c creates a new file, and -b reads the password from the command line
[root@web ~]# yum install httpd-tools
[root@web ~]# htpasswd -b -c /etc/nginx/auth_conf birenchong 123456
3. Scenario example: username- and password-based authentication.
[root@web ~]# cat /etc/nginx/conf.d/mirror.birenchong.cn.conf
server {
listen 80;
server_name mirror.birenchong.cn;
charset utf-8; # Set character set to prevent Chinese characters from garbled display.
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
location / {
index index.html;
}
location /centos {
        auth_basic "Auth access Blog Input your Passwd!";
        auth_basic_user_file /etc/nginx/auth_conf;
    }
}
Nginx traffic limiting and speed limiting
Rate Limiting Application Scenarios
- Download rate limiting: limit how fast users download resources. Uses the Nginx ngx_http_core_module module.
- Request limiting: limit the number of HTTP requests a user can make per unit of time. Uses the Nginx ngx_http_limit_req_module module.
- Connection limiting: limit the number of simultaneous connections, i.e. concurrency. Uses the Nginx ngx_http_limit_conn_module module.
How is request-rate limiting implemented?
Water (requests) pours into the bucket from above and flows out of the bucket from below (being processed);
If the water (requests) pours in too fast and flows out (is processed) too slowly, the water that cannot flow out is held in the bucket (buffered) and released at a constant rate; once the bucket is full, the excess water overflows (is discarded).
In short: when the processing rate cannot keep up with the request rate, requests are placed in a buffer and processed from there; when the buffer is full, the remaining requests are discarded.
Scenario Practice 1. Limit the number of Http requests per unit of time.
1. Directives
Syntax : limit_req_zone key zone=name:size rate=rate;
Default: -
Context: http
Syntax : limit_req zone=name [burst=number] [nodelay];
Default: -
Context: http,server,location
2. Based on the source IP address, limit requests to one per second; requests above the rate, up to the burst size, are buffered.
# the http context defines the limit zone; rate caps the number of requests per second per client IP
http {
limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;
}
server {
listen 80;
server_name mirror.birenchong.cn;
# requests above 1r/s go into the burst buffer; once burst is exceeded, 503 is returned (nodelay: buffered requests are served without delay)
limit_req zone=req_one burst=3 nodelay;
location / {
root /code;
index index.html;
}
}
limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;
# first parameter: $binary_remote_addr — requests from the same client IP share one limit.
# second parameter: zone=req_one:10m — creates a 10m shared-memory zone named req_one that stores access-frequency state.
# third parameter: rate=1r/s — how many requests per second a client with the same key may make.

limit_req zone=req_one burst=3 nodelay;
# first parameter: zone=req_one — which zone to use; it matches the name in limit_req_zone above.
# second parameter: burst=3 — a buffer of size 3; when many requests arrive, those above the rate are placed in this buffer.
# third parameter: nodelay — requests that overflow the buffer get 503 immediately; without nodelay, they are queued and delayed.
Scenario Practice 2. Limit the number of concurrent connections on a client at a time.
1. Directives
Syntax : limit_conn_zone key zone=name:size;
Default: -
Context: http
Syntax : limit_conn zone number;
Default: -
Context: http,server,location
2. Set the maximum number of connections allowed for a given key in the shared-memory zone. When this limit is exceeded, the server returns a 503 error.
[root@web ~]# cat /etc/nginx/conf.d/mirror.birenchong.cn.conf
limit_conn_zone $binary_remote_addr zone=conn_mg:10m;
server {
    listen 80;
    server_name mirror.birenchong.cn;
    root /code;
    charset utf8;
    autoindex on;
    autoindex_exact_size off;
    autoindex_localtime on;
    limit_conn conn_mg 2;
    location / {
        index index.html;
    }
}
Scenario Practice 3. Limit the download speed.
[root@web ~]# cat /etc/nginx/conf.d/mirror.birenchong.cn.conf
server {
listen 80;
server_name mirror.birenchong.cn;
root /code;
charset utf8;
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
    limit_rate_after 100m;    # start limiting once 100M have been transferred
    limit_rate 100k;
    location / {
        index index.html;
    }
}
Integrated case
Limit the web server to one request per second per client, with a burst of 5. Allow a user to download only one file at a time. Once a download exceeds 100M, limit the speed to 500K. If the user tries to download more files simultaneously than allowed, return the message "Please contact the administrator for a membership recharge".
[root@web conf.d]# cat mirror.birenchong.cn.conf
limit_req_zone $binary_remote_addr zone=req_mg:10m rate=1r/s;
limit_conn_zone $binary_remote_addr zone=conn_mg:10m;
server {
listen 80;
server_name mirror.birenchong.cn;
root /code;
charset utf8;
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
limit_req zone=req_mg burst=5 nodelay;
limit_conn conn_mg 1;
limit_rate_after 100m;
limit_rate 500k;
error_page 503 @errpage;
location @errpage {
default_type text/html;
        return 200 'Tip --> Please contact the administrator for a membership recharge';
    }
    location / {
        index index.html;
    }
}
Nginx status monitoring
The ngx_http_stub_status_module module provides access to basic status information.
This module is not built in by default; it must be enabled at compile time with --with-http_stub_status_module.
1. Directives
Syntax : stub_status;
Default: -
Context: server,location
2. Example configuration
[root@web ~]# cat /etc/nginx/conf.d/module.conf
server {
    listen 80;
    server_name mirror.birenchong.cn;
    access_log off;
    location /nginx_status {
        stub_status;
    }
}
3. This configuration serves a simple plain-text page with basic status data.
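A typical response looks like this (the numbers are illustrative):

```
Active connections: 2
server accepts handled requests
 16 16 18
Reading: 0 Writing: 1 Waiting: 1
```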
4. The following status fields are provided
State | Meaning |
---|---|
Active connections | Current number of active client connections, including Waiting connections. |
accepts | Total number of accepted client connections. |
handled | Total number of handled connections (normally equal to accepts). |
requests | Total number of client HTTP requests. |
Reading | Connections on which Nginx is currently reading the request header. |
Writing | Connections on which Nginx is currently writing the response back to the client. |
Waiting | Idle keep-alive connections currently waiting for a request. |
How should we understand Reading, Writing, and Waiting?
Suppose there are two ships, C and S. Ship C needs an item from ship S, so ship C sends a message to ship S.
1. While ship S is receiving and reading the message, the connection is Reading.
2. While ship S is sending supplies back to ship C, the connection is Writing.
3. If ship C needs many items from ship S, the two ships keep a supply channel open for continuous transfers; while the channel sits idle between transfers, the connection is Waiting.