Recently I have been running into reverse-proxy scenarios more and more often, and running your own blog inevitably means using Nginx, so I spent this period of time focusing on Nginx and taking some notes along the way. I hope they can help you too ~ 😜
This article will install and use Nginx in a CentOS environment. If you are not familiar with the basic operations of CentOS, you can read “Half an hour to get started with CentOS”.
As developers, we all know how important Nginx is, so let's get to know it better.
CentOS version: 7.6
Nginx version: 1.16.1
1. Introduction to Nginx
A traditional web server handles each client connection as a separate process or thread. To switch tasks, the CPU has to switch to the new task and create a new runtime context, which consumes extra memory and CPU time; as concurrent requests increase, the server responds more slowly and performance suffers.
Nginx is an open-source, high-performance, reliable web and reverse-proxy server. It supports hot deployment and can run almost 7 * 24 hours, even for months without a restart, and its software version can be hot-upgraded without interrupting service. Performance is the most important consideration in Nginx's design: it uses little memory, handles concurrency well (supporting up to around 50,000 concurrent connections), and, most importantly, Nginx is free, may be used commercially, and is simple to configure and use.
The most important use scenarios for Nginx are:
- Static resource services are provided through the local file system.
- Reverse proxy services, including caching, load balancing, etc.
- API services, for example with OpenResty.
Node.js is no stranger to front-end developers, and Nginx shares many concepts with it: HTTP server, event-driven, asynchronous, non-blocking, and so on. Most of Nginx's functionality could also be implemented with Node.js, but the two do not conflict; each has its own area of expertise. Nginx excels at handling low-level server-side resources (serving and forwarding static resources, reverse proxying, load balancing, and so on), while Node.js is better at the upper-layer business logic. The two can be combined perfectly to support front-end development.
Let’s focus on using Nginx.
2. Related concepts
2.1 Simple request and non-simple request
First, let's look at simple and non-simple requests. A request is a simple request if it meets both of the following conditions:

- The request method is one of `HEAD`, `GET`, or `POST`;
- The HTTP headers do not go beyond the following fields: `Accept`, `Accept-Language`, `Content-Language`, `Last-Event-ID`, and `Content-Type`, where `Content-Type` may only take one of three values: `application/x-www-form-urlencoded`, `multipart/form-data`, or `text/plain`.

Any request that does not satisfy both conditions is a non-simple request.
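The two conditions above can be sketched in a few lines of code; this is a rough illustration of the rules (the function and constant names below are made up for this post, not part of any library):

```python
# Rough sketch of the simple-request rules described above (illustrative only).
SIMPLE_METHODS = {"HEAD", "GET", "POST"}
SIMPLE_HEADERS = {"accept", "accept-language", "content-language",
                  "last-event-id", "content-type"}
SIMPLE_CONTENT_TYPES = {"application/x-www-form-urlencoded",
                        "multipart/form-data", "text/plain"}

def is_simple_request(method, headers):
    """headers is a dict of header name -> value."""
    if method not in SIMPLE_METHODS:
        return False
    for name, value in headers.items():
        if name.lower() not in SIMPLE_HEADERS:
            return False
        if name.lower() == "content-type" and value not in SIMPLE_CONTENT_TYPES:
            return False
    return True

print(is_simple_request("GET", {"Accept": "*/*"}))                     # True
print(is_simple_request("PUT", {}))                                    # False: method
print(is_simple_request("POST", {"Content-Type": "application/json"})) # False: content type
```

A real browser applies a few more nuances (e.g. parameters on the Content-Type value), but this captures the rule of thumb.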
Browsers handle simple and non-simple requests differently:
Simple requests
For simple requests, the browser adds the Origin field to the headers and sends the request directly. The Origin field describes where the request comes from (protocol + domain name + port).
If the server finds that the origin specified by Origin is not permitted, it still returns a normal HTTP response; when the browser sees that the response headers do not contain the Access-Control-Allow-Origin field, it throws an error that is delivered to XHR's error event.
If the server finds that the origin specified by Origin is permitted, the response carries several additional Access-Control- header fields.
Non-simple requests
A non-simple request is one that places special demands on the server, such as a PUT or DELETE request method, or a Content-Type of application/json. Before any formal communication, the browser sends a preflight OPTIONS request, asking the server whether the current page's domain is on the server's allow list, and which HTTP methods and header fields may be used. The browser will only issue the real XHR request if it receives a positive response; otherwise it reports an error.
2.2 Cross-Domain Requests
A cross-domain request occurs when the web site currently open in the browser sends a request for data to a web site with a different origin.
Whether a request is cross-domain is determined by the browser's same-origin policy, an important browser security mechanism that restricts how a document from one origin, or the scripts it loads, can interact with resources from another origin. It helps block malicious documents and reduces possible attack vectors.
There are plenty of explanations of cross-domain requests online; for further information you can also read MDN's "Same-origin policy" documentation. Here are a few examples of same-origin and cross-origin URLs, which I'm sure fellow programmers will understand at a glance.
```
# Same-origin examples
http://example.com/app1/index.html   # only the path differs
http://example.com/app2/index.html

http://Example.com:80                # host names are case-insensitive and 80 is the default HTTP port
http://example.com

# Cross-origin examples
http://example.com/app1              # different scheme
https://example.com/app2

http://example.com                   # different host
http://www.example.com
http://myapp.example.com

http://example.com                   # different port
http://example.com:8080
```
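The same-origin comparison in these examples boils down to comparing the (scheme, host, port) triple; here is a small sketch (the helper names are my own, not a standard API):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Return the (scheme, host, port) triple that defines an origin."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()           # hosts compare case-insensitively
    port = parts.port or DEFAULT_PORTS.get(scheme)  # fill in the scheme's default port
    return (scheme, host, port)

def same_origin(a, b):
    return origin(a) == origin(b)

print(same_origin("http://example.com/app1/index.html",
                  "http://example.com/app2/index.html"))   # True: only the path differs
print(same_origin("http://Example.com:80", "http://example.com"))   # True
print(same_origin("http://example.com", "https://example.com"))     # False: scheme
print(same_origin("http://example.com", "http://www.example.com"))  # False: host
print(same_origin("http://example.com", "http://example.com:8080")) # False: port
```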
2.3 Forward Proxy and Reverse Proxy
A reverse proxy is the opposite of a forward proxy. The difference between them is as follows:
Forward proxy: in the normal access flow, the client sends a request directly to the target server and fetches the content. With a forward proxy, the client sends the request to the proxy server and specifies the target (origin) server; the proxy then relays the request to the origin server, fetches the content, and returns it to the client. A forward proxy hides the real client, sending and receiving requests on its behalf, so the real client is invisible to the server.
Take an example 🌰: if your browser cannot access Google directly, you can use a proxy server to help you access Google. That server is a forward proxy.
Reverse proxy: compared with the normal access flow, with a reverse proxy the server that directly receives the request is the proxy server, which then forwards the request to the server on the internal network that actually handles it, and returns the result to the client. A reverse proxy hides the real server, sending and receiving requests on its behalf, so the real server is invisible to the client. This is commonly used when dealing with cross-domain requests, and by now essentially all large web sites have reverse proxies in place.
For a concrete example 🌰: in a restaurant you can order Sichuan, Cantonese, and Jiangsu-Zhejiang cuisine, and the restaurant has a dedicated chef 👨‍🍳 for each. As a customer, you don't need to worry about which chef cooks for you; the waiter assigns your order to the right chef for each dish. The waiter is the reverse-proxy server.
In simple terms, the proxy for the client is a forward proxy, and the proxy for the server is a reverse proxy.
The main principle difference between forward proxy and reverse proxy can be seen in the following figure:
2.4 Load Balancing
Generally, a client sends multiple requests to the server; the server processes them, possibly operating on resources such as databases and static files, and returns the results to the client.
For early systems with simple functional requirements and relatively few concurrent requests, this model works and keeps costs low. But as information, traffic, and data volumes keep growing and system complexity increases, it can no longer meet the requirements: with a very large number of concurrent requests, the server is prone to crashing.
This is clearly a server performance bottleneck, and apart from throwing more machines at it, the most important remedy is load balancing.
When requests grow explosively, no single machine, however powerful, can meet the demand. This is where the concept of a cluster comes in: when one server cannot solve the problem, use several, and distribute incoming requests across them so the load is spread over different servers. That is load balancing, and its core idea is "sharing the pressure". Nginx implements load balancing, which generally means forwarding requests to a cluster of servers.
For a concrete example 🌰: during the evening rush hour at a subway station, there is often a staff member with a loudspeaker at the entrance shouting "Please use Exit B, Exit B has fewer people and the trains are empty...". That staff member's role is load balancing.
2.5 Static and dynamic Separation
To speed up a web site, dynamic pages and static pages can be served by different servers, accelerating resolution and reducing the pressure on what used to be a single server.
In general, dynamic resources need to be separated from static ones. Because of Nginx's high concurrency and static-resource caching capabilities, static resources are often deployed on Nginx: if a request is for a static resource, it is served directly from the static resource directory; if it is for a dynamic resource, the reverse proxy forwards it to the corresponding back-end application for processing. That is static-dynamic separation.
With front-end/back-end separation, access to static resources becomes much faster, and even if the dynamic services are unavailable, access to static resources is not affected.
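As a sketch of what static-dynamic separation can look like in an Nginx configuration (the domain, paths, and back-end address here are placeholders, not from a real deployment):

```nginx
server {
    listen 80;
    server_name example.com;              # placeholder domain

    # Static resources: served straight from the file system, with browser caching
    location ~* \.(html|css|js|png|jpg|gif)$ {
        root    /usr/share/nginx/html;
        expires 7d;                       # let browsers cache static assets for a week
    }

    # Dynamic requests: reverse-proxied to the back-end application
    location /api/ {
        proxy_pass http://127.0.0.1:8080; # placeholder back-end service
    }
}
```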
3. Quick Nginx installation
3.1 installation
First see what is available:

```shell
yum list | grep nginx
```

then install it:

```shell
yum install nginx
```

After installation, `nginx -v` on the command line prints the Nginx version information, which means the installation is complete.
3.2 Related Folders
You can use `rpm -ql nginx` to check where nginx was installed and which directories it uses. The /etc/nginx directory mainly contains configuration files, as shown in the following figure:
There are two folders to focus on:

- `/etc/nginx/conf.d/` is where sub-configuration files go; the main configuration file `/etc/nginx/nginx.conf` imports all sub-configurations in this folder by default;
- `/usr/share/nginx/html/` is where static files are usually stored, although you can put them elsewhere if you like.
3.3 Give It a Run
If the firewall is enabled, you need to open the relevant ports. Here are some common firewall operations (skip them if the firewall is not enabled):

```shell
systemctl start firewalld    # enable the firewall
systemctl stop firewalld     # disable the firewall
systemctl status firewalld   # check firewall status; "running" means it is active
firewall-cmd --reload        # reload the firewall; permanently opened ports require a reload

# Open a port; --permanent makes it permanent, otherwise it is lost after a restart
firewall-cmd --permanent --zone=public --add-port=8888/tcp

# The opened ports can be listed with
firewall-cmd --list-all
```
Then set Nginx to start on boot:

```shell
systemctl enable nginx
```

Start Nginx (more on the other commands below):

```shell
systemctl start nginx
```

Then visit your IP address, and you should see the Welcome to Nginx page 👏
3.4 Installing NVM & Node & Git
Download nvm (or see https://github.com/nvm-sh/nvm#install--update-script for the latest instructions):

```shell
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
source ~/.bashrc        # after installing, reload the configuration file so the nvm command is available

nvm ls-remote           # list remote Node versions
nvm install v12.16.3    # install the version you want; I chose 12.16.3
nvm list                # list the installed Node versions
node -v                 # check that the installation is complete

yum install git         # install git
```
4. Nginx common operation commands
Type `nginx -h` in the console to see the complete list of commands. Here are some commonly used ones:

```shell
nginx -s reload  # signal the master process to reload the configuration file (hot restart)
nginx -s reopen  # reopen the log files
nginx -s stop    # fast shutdown
nginx -s quit    # graceful shutdown: wait for worker processes to finish their work
nginx -T         # print the current final Nginx configuration
nginx -t -c <config path>  # test a configuration file; -c can be omitted for the default config path
```
Systemctl is the main command of systemd, the Linux service manager, and can be used to manage Nginx as a system service:
```shell
systemctl start nginx    # start Nginx
systemctl stop nginx     # stop Nginx
systemctl restart nginx  # restart Nginx
systemctl reload nginx   # reload Nginx after a configuration change
systemctl enable nginx   # set Nginx to start on boot
systemctl disable nginx  # stop Nginx from starting on boot
systemctl status nginx   # check Nginx's running status
```
5. Nginx configuration syntax
Nginx’s main configuration file is /etc/nginx/nginx.conf, as shown in the preceding figure. You can use cat -n nginx.conf to view the configuration.
The structure of nginx.conf can be summarized as follows:

```
main                 # global configuration, effective globally
├── events           # configuration affecting the Nginx server's connections with users
├── http             # proxying, caching, log definitions; most features and third-party modules are configured here
│   ├── upstream     # back-end server addresses, a required part of load-balancing configuration
│   ├── server       # virtual-host parameters; one http block can contain multiple server blocks
│   ├── server
│   │   ├── location # a server block can contain multiple location blocks; location matches the request URI
│   │   ├── location
│   │   └── ...
│   └── ...
└── ...
```
An Nginx configuration file follows the structure shown above. The syntax rules for the configuration file are:

- The configuration file consists of directives and directive blocks;
- Each directive ends with a semicolon `;`, and directives are separated from their parameters by whitespace;
- Braces `{}` group multiple directives into a block;
- The `include` directive allows multiple configuration files to be combined, improving maintainability;
- The `#` symbol adds comments, improving readability;
- The `$` symbol references variables;
- The parameters of some directives support regular expressions.
5.1 Typical Configuration
Typical configuration of Nginx:
```nginx
user  nginx;                        # run-as user; defaults to nginx, may be omitted
worker_processes  1;                # number of Nginx processes, usually set equal to the number of CPU cores
error_log  /var/log/nginx/error.log warn;   # error log location
pid        /var/run/nginx.pid;      # location of the pid file written at startup

events {
    use epoll;                      # use the epoll I/O model (if unsure which polling method to use, Nginx picks the best one for your OS automatically)
    worker_connections 1024;        # maximum number of concurrent connections per worker process
}

http {   # proxying, caching, log definitions and most third-party module configuration happen here
    # log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;   # access log location

    sendfile            on;         # enable efficient file transfer
    tcp_nopush          on;         # reduce the number of network segments sent
    tcp_nodelay         on;
    keepalive_timeout   65;         # keep-alive (connection) timeout, in seconds
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;     # file-extension-to-MIME-type map
    default_type        application/octet-stream;  # default file type
    include /etc/nginx/conf.d/*.conf;              # load sub-configuration files

    server {
        listen       80;            # listening port
        server_name  localhost;     # configured domain name

        location / {
            root   /usr/share/nginx/html;   # site root directory
            index  index.html index.htm;    # default index files
            deny   172.168.22.11;   # IP forbidden to access; can be "all"
            allow  172.168.33.44;   # IP allowed to access
        }

        error_page 500 502 503 504 /50x.html;   # default page for 50x errors
        error_page 400 404 error.html;          # same for 400/404
    }
}
```
A server block can contain multiple location blocks; the location directive is used to match a URI. Syntax:

```nginx
location [ = | ~ | ~* | ^~ ] uri {
    ...
}
```
The modifiers after the directive mean:

- `=` : used before a URI without regular expressions; requires an exact match of the path, and if the match succeeds no further search is performed;
- `^~` : used before a URI without regular expressions; if this prefix is the best match, the rule is used and regular-expression locations are not checked;
- `~` : the path is matched against the regular expression that follows, case-sensitively;
- `~*` : the path is matched against the regular expression that follows, case-insensitively. When several regular-expression locations could match, the first matching one in the configuration file is used.

If the URI contains a regular expression, it must be preceded by the `~` or `~*` modifier.
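To make the precedence concrete, here is a small illustrative server block (the paths and response texts are invented for demonstration):

```nginx
server {
    listen 80;

    location = / {               # exact match: only "/" itself
        return 200 'exact';
    }
    location ^~ /static/ {       # prefix match that suppresses the regex checks below
        return 200 'prefix';
    }
    location ~ \.(gif|jpg)$ {    # case-sensitive regex
        return 200 'image';
    }
    location ~* \.(css|js)$ {    # case-insensitive regex
        return 200 'asset';
    }
    location / {                 # generic prefix, used when nothing else matches
        return 200 'fallback';
    }
}
```

With this configuration, a request for /static/logo.gif is answered by the `^~` block rather than the image regex, because `^~` stops the regular-expression search.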
5.2 Global Variables
Nginx has some common global variables that you can use anywhere in the configuration, as shown in the following table:
| Global variable | Meaning |
|---|---|
| `$host` | the `Host` from the request line; if the request carries no `Host` header, equals the configured server name (without the port) |
| `$request_method` | the client's request method, e.g. `GET`, `POST` |
| `$remote_addr` | the client's IP address |
| `$remote_port` | the client's port |
| `$args` | the parameters (query string) of the request |
| `$arg_PARAMETER` | the value of the GET query parameter PARAMETER; similarly, `$http_HEADER` gives a request header, e.g. `$http_user_agent`, `$http_referer` |
| `$content_length` | the `Content-Length` request header |
| `$http_user_agent` | the client's User-Agent information |
| `$http_cookie` | the client's cookie information |
| `$server_protocol` | the protocol used by the request, e.g. `HTTP/1.0`, `HTTP/1.1` |
| `$server_addr` | the server's address |
| `$server_name` | the server's name |
| `$server_port` | the server's port number |
| `$scheme` | the request scheme, e.g. `http`, `https` |

There are many more built-in predefined variables; search for the keyword "Nginx built-in predefined variables" and you'll find plenty of posts on the topic. All of these variables can be used directly in the configuration file.
6. Set the secondary domain name for the host
After purchasing a domain name from some cloud provider ☁️, you can configure virtual hosts. The usual path is Domain Management -> Resolution -> Add Record, where you add the secondary (sub) domain. Once configured, the provider resolves the secondary domain to the server IP we set, and we can then configure virtual-host listening on Nginx to receive requests for that secondary domain.
My server now has an fe secondary domain configured, which means requests to fe.sherlocked93.club from the outside world also reach our server.
The default configuration file /etc/nginx/nginx.conf contains, in its http module, the line include /etc/nginx/conf.d/*.conf, which imports every file in that folder as a sub-configuration. For easier maintenance, I created a new file /etc/nginx/conf.d/fe.sherlocked93.club.conf:
```nginx
# /etc/nginx/conf.d/fe.sherlocked93.club.conf
server {
    listen 80;
    server_name fe.sherlocked93.club;

    location / {
        root  /usr/share/nginx/html/fe;
        index index.html;
    }
}
```
Create a new fe folder under /usr/share/nginx/html, add a new index.html file to it, and run nginx -s reload. The newly created fe folder can now be reached through the secondary domain:
7. Configure the reverse proxy
The reverse proxy is the most common server feature at work and is often used to solve cross-domain problems. Here is a brief description of how to implement a reverse proxy.
First open Nginx's main configuration file:

```shell
vim /etc/nginx/nginx.conf
```
Then, inside location / in the server block of the http module, add a line of proxy_pass configuration that redirects the default page to Bilibili 🤓:
After saving, reload the configuration with nginx -s reload and visit the server again: the page now goes straight to Bilibili.
In practice, the request can be forwarded to another server on the local host, or it can be redirected to a service on a different port depending on the access path.
For example, we listen on port 9001 and reverse proxy requests to different paths:
- visiting http://127.0.0.1:9001/edu forwards the request to http://127.0.0.1:8080;
- visiting http://127.0.0.1:9001/vod forwards the request to http://127.0.0.1:8081.
To configure this, open the main configuration file again and add a server block under the HTTP module:
```nginx
server {
    listen 9001;
    server_name *.sherlocked93.club;

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;
    }

    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081;
    }
}
```
There are a few other directives for the reverse proxy worth a look:

- `proxy_set_header` : modify the client's request headers before the request is sent to the back-end server;
- `proxy_connect_timeout` : how long Nginx waits when trying to establish a connection with the back-end server;
- `proxy_read_timeout` : how long Nginx waits to read a response after sending a request to the back-end server group;
- `proxy_send_timeout` : how long Nginx waits after sending a write request to the back-end server group;
- `proxy_redirect` : used to rewrite the Location and Refresh headers in responses returned by the back-end server.
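Put together, a hedged sketch of how these directives might combine in a single location block (the back-end address and timeout values are placeholders, not recommendations):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:8080;          # placeholder back-end address
    proxy_set_header Host $host;               # keep the original Host header
    proxy_set_header X-Real-IP $remote_addr;   # pass the real client IP to the back end
    proxy_connect_timeout 10s;    # give up if the back end cannot be reached within 10s
    proxy_read_timeout    30s;    # longest wait for the back end to send its response
    proxy_send_timeout    30s;    # longest wait while sending the request to the back end
    proxy_redirect        off;    # leave Location / Refresh response headers untouched
}
```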
8. Configure cross-domain CORS
The concepts of simple requests, non-simple requests, and cross-domain requests were introduced earlier; if they are unfamiliar, see the explanation above. These days, front-end/back-end-separated projects are everywhere: the front-end service usually runs locally and needs to call a different back-end address, so cross-domain problems are inevitable.
To solve a cross-domain problem, let's first create one. Set up the two secondary domains fe.sherlocked93.club and be.sherlocked93.club, both pointing to this cloud server's address. Although they resolve to the same IP, a page served from fe.sherlocked93.club still hits the browser's same-origin policy when it requests be.sherlocked93.club, because the host names differ.
8.1 Using the Reverse Proxy to Solve cross-domain Problems
If requests from the front end at fe.sherlocked93.club to the back-end service at be.sherlocked93.club are blocked as cross-domain, we can configure:
```nginx
server {
    listen 9001;
    server_name fe.sherlocked93.club;

    location / {
        proxy_pass be.sherlocked93.club;
    }
}
```
This forwards all requests to the front-end domain fe.sherlocked93.club on to be.sherlocked93.club: every front-end request is relayed to the back-end address by the server, sidestepping cross-domain restrictions.
Here both static-file requests and back-end service requests start with fe.sherlocked93.club, which makes them hard to tell apart. To forward back-end requests uniformly, requests to back-end services are usually given a path prefix such as /apis/ to distinguish them from static-resource requests. In that case we can configure the following:
```nginx
# For cross-domain requests, agree that proxied back-end service paths start with /apis/
location ^~/apis/ {
    # Rewrite the request: splice the first capture group of the regex onto the
    # actual request path, and stop subsequent matching with break
    rewrite ^/apis/(.*)$ /$1 break;
    proxy_pass be.sherlocked93.club;

    # pass cookies between the two domains and write them back
    proxy_cookie_domain be.sherlocked93.club fe.sherlocked93.club;
}
```
Static resources are then requested as fe.sherlocked93.club/xx.html and dynamic resources as fe.sherlocked93.club/apis/getAwo. To the browser, the page still only ever talks to the front-end server, so the browser's same-origin policy is bypassed; after all, nothing looks cross-origin.
Here both static files and the back-end services are reached through fe.sherlocked93.club, distinguished only by the path appended at the end; which approach to take depends on the requirements.
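The rewrite directive above is just a regular-expression substitution on the request path. A quick sketch of its effect (pure illustration in Python, not Nginx itself; the function name is made up):

```python
import re

def strip_apis_prefix(path):
    """Mimic `rewrite ^/apis/(.*)$ /$1 break;`: drop the /apis/ prefix."""
    return re.sub(r'^/apis/(.*)$', r'/\1', path)

print(strip_apis_prefix('/apis/getAwo'))     # -> /getAwo
print(strip_apis_prefix('/apis/user/info'))  # -> /user/info
print(strip_apis_prefix('/index.html'))      # no /apis/ prefix, left unchanged
```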
8.2 Configuring headers Solve cross-domain problems
When the browser accesses a cross-origin server, the headers can also be set directly on the cross-origin server in Nginx, so that the front end can develop without noticing anything; there is no need to change the actual back-end address into the front-end service's address, which can be the better fit.
For example, the front-end site is fe.sherlocked93.club, and its page requests resources under be.sherlocked93.club; say fe.sherlocked93.club/index.html contains the following:
```html
<html>
<body>
    <h1>welcome fe.sherlocked93.club!!</h1>
    <script type="text/javascript">
        var xmlhttp = new XMLHttpRequest()
        xmlhttp.open("GET", "http://be.sherlocked93.club/index.html", true);
        xmlhttp.send();
    </script>
</body>
</html>
```
Opening fe.sherlocked93.club/index.html in a browser gives the following result:
This is clearly a cross-domain request: visiting http://be.sherlocked93.club/index.html directly in the browser works fine, but requesting it from the fe.sherlocked93.club page triggers a cross-domain error.
Create a new configuration file in /etc/nginx/conf.d/ with the second level domain be.sherlocked93.club:
```nginx
# /etc/nginx/conf.d/be.sherlocked93.club.conf
server {
    listen 80;
    server_name be.sherlocked93.club;

    add_header 'Access-Control-Allow-Origin' $http_origin;   # global variable holding the Origin of the current request; '*' would not support requests with cookies
    add_header 'Access-Control-Allow-Credentials' 'true';    # true allows requests to carry cookies
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';   # allowed request methods
    add_header 'Access-Control-Allow-Headers' $http_access_control_request_headers;  # allowed request headers; can be *
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';

    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Max-Age' 1728000;   # how long the OPTIONS preflight result stays valid, avoiding repeated preflight requests
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        add_header 'Content-Length' 0;
        return 204;   # 200 also works
    }

    location / {
        root  /usr/share/nginx/html/be;
        index index.html;
    }
}
```
Then reload the configuration with nginx -s reload. Visiting fe.sherlocked93.club/index.html now gives the following result, and the response carries the headers we just configured:
The cross-domain problem is solved.
9. Enable gZIP compression
gzip is a common web-page compression technique; a gzip-compressed page is often half its original size or even smaller (so the sites claim). A smaller page means saved bandwidth and faster transfers. Especially for high-traffic sites, shrinking every static resource adds up to considerable savings in traffic and bandwidth.
Plenty of online checkers can tell you whether a target page has gzip enabled. I picked a "web page gzip compression checker" at random, entered juejin.im, and peeked at whether Juejin has gzip turned on.
As you can see, Juejin has gzip enabled with an excellent compression ratio of up to 52%: a page originally 34 KB shrinks to just 16 KB, so you can imagine how much faster it transfers.
9.1 Configuring gzip on Nginx
Using gzip requires more than just Nginx configuration: the browser must include Accept-Encoding: gzip in its request headers (every browser since IE5 supports it, and it is the default setting in modern browsers). When requesting a static resource such as HTML or CSS, a supporting browser declares via the Accept-Encoding: gzip header that it can handle gzip compression. When Nginx receives such a request, it returns the gzip-compressed file to the browser along with the response header Content-Encoding: gzip, telling the browser which compression method was used (the browser generally tells the server which several compression methods it supports); the browser then decompresses the file accordingly.
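You can get a feel for the ratios with Python's standard gzip module; a toy illustration (real savings depend entirely on the content being compressed):

```python
import gzip

# Repetitive text, like HTML markup, compresses extremely well
html = ("<li class='item'>hello nginx</li>\n" * 200).encode("utf-8")
compressed = gzip.compress(html, compresslevel=6)  # level 6 mirrors gzip_comp_level 6

print("original:  ", len(html), "bytes")
print("compressed:", len(compressed), "bytes")

# Tiny payloads can come out BIGGER after compression, which is
# exactly why gzip_min_length exists
tiny = b"ok"
print(len(gzip.compress(tiny)) > len(tiny))  # True
```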
Create a new gzip configuration file under /etc/nginx/conf.d/:
```nginx
# /etc/nginx/conf.d/gzip.conf
gzip on;   # default off; whether to enable gzip
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

# The two directives above are enough to get going; the rest are for those who like to tinker
gzip_static on;
gzip_proxied any;
gzip_vary on;
gzip_comp_level 6;
gzip_buffers 16 8k;
# gzip_min_length 1k;
gzip_http_version 1.1;
```
A little explanation:

- gzip_types: the MIME types to compress with gzip; text/html is always compressed;
- gzip_static: default off; when enabled, Nginx first checks whether a pre-compressed file ending in `.gz` exists for the requested static file and, if so, returns that `.gz` file's content directly;
- gzip_proxied: default off; relevant when Nginx is used as a reverse proxy, it controls whether the content received from the proxied server is gzip-compressed;
- gzip_vary: adds `Vary: Accept-Encoding` to the response headers, so that proxy servers decide from the request's `Accept-Encoding` header whether gzip is enabled;
- gzip_comp_level: the gzip compression level, from 1 to 9; 1 is the lowest, 9 the highest;
- gzip_buffers: how much memory to allocate for buffering compression results; 16 8k means 16 buffers of 8K each;
- gzip_min_length: the minimum response size to compress, determined from the `Content-Length` header; the default 0 compresses pages regardless of size. Setting it above 1K is recommended, since compressing smaller responses may actually make them bigger;
- gzip_http_version: default 1.1; the lowest HTTP version required to enable gzip.
This configuration can be placed in the http module to apply to the whole server, or inside a server or location block within it.
For the complete configuration reference, see the ngx_http_gzip_module documentation. Before the configuration, the response looked like this:
After the configuration, the response header carries Content-Encoding: gzip:
This also shows why gzip_min_length is recommended to be at least 1k:
Because these files are so small, gzip compression produces a -48% "optimization": the compressed file is larger than the original. So it's best not to gzip files under 1 KB 🤪
9.2 GZIP configuration for Webpack
You can also enable gZIP compression when the current project is packaged with Webpack:
```javascript
// vue-cli3's vue.config.js file
const CompressionWebpackPlugin = require('compression-webpack-plugin')

module.exports = {
  // gzip configuration
  configureWebpack: config => {
    if (process.env.NODE_ENV === 'production') {
      // production environment
      return {
        plugins: [
          new CompressionWebpackPlugin({
            test: /\.js$|\.html$|\.css/, // match file names
            threshold: 10240,            // compression threshold: compress files over 10K
            deleteOriginalAssets: false  // whether to delete the original files
          })
        ]
      }
    }
  }
}
```
The resulting package looks like this:
Here you can see that some of the packaged files have a corresponding .gz gzip file beneath them. This is because those files exceed 10KB; files under 10KB are not gzip-packed. If you want the compressed files to be even smaller, you can adjust the compression-webpack-plugin configuration.
Both Nginx and Webpack can do gzip compression. If Nginx compresses every file on the fly, it consumes server computing resources; and if the server's gzip_comp_level is set high, the server overhead grows and the client's request time increases accordingly, so the gain does not justify the cost.
If the compression is instead done at the front end during packaging, the highly compressed files are placed on the server as static resources, and Nginx will preferentially find these compressed files and return them to the client. This effectively moves the compression work from Nginx to the Webpack build ahead of time, saving server resources, so in a production environment it is recommended to configure gzip compression with Webpack.
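The "Nginx preferentially serves the pre-compressed file" behavior comes from Nginx's ngx_http_gzip_static_module (compiled in via --with-http_gzip_static_module, which official packages include). A minimal sketch, assuming Webpack has emitted .gz files next to the originals:

```nginx
# serve pre-built .gz files (e.g. main.js.gz produced by Webpack) when present
location ~* \.(js|css|html)$ {
  gzip_static on;   # if main.js.gz exists next to main.js, return it with Content-Encoding: gzip
}
```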
10. Configure load balancing
The concept of load balancing has been introduced before. The main idea is to distribute the load evenly and reasonably across multiple servers to achieve pressure distribution.
The main configuration is as follows:
http {
  upstream myserver {
    # ip_hash;  # ip_hash method
    # fair;     # fair method
    server 127.0.0.1:8081;            # destination addresses of the load-balanced services
    server 127.0.0.1:8080;
    server 127.0.0.1:8082 weight=10;  # weight; the default value is 1
  }

  server {
    location / {
      proxy_pass http://myserver;
      proxy_connect_timeout 10;
    }
  }
}
Nginx provides several allocation strategies; the default is round-robin, i.e. taking turns. The strategies are:
- Round-robin (default): requests are assigned to the back-end servers one by one in order; if a back-end server goes down, it is automatically removed.
- weight: the higher the weight, the greater the probability of being selected; used when back-end server performance is uneven.
- ip_hash: each request is assigned according to a hash of the client IP, so each visitor always reaches the same back-end server. This addresses the session-sharing problem of dynamic websites: ordinary load balancing may relocate each request to a different server in the cluster, so a user who logged in on one server would lose the login state when redirected to another, which is obviously unacceptable.
- fair (third party): requests are assigned based on back-end server response time, with shorter response times served first; depends on the third-party module nginx-upstream-fair, which must be installed first.
11. Configure dynamic and static separation
Dynamic/static separation, mentioned earlier, is the separation of dynamic and static requests. There are two main approaches: one is to put static files under a separate domain name on a dedicated server, which is the current mainstream solution; the other is to deploy dynamic and static files together and separate them through Nginx configuration.
Different forwarding can be achieved with location blocks matching different file suffixes. Setting the expires parameter gives the browser a cache expiry time, reducing requests and traffic to the server. expires means: give a resource an expiry date before which the browser itself confirms the cache is valid, without asking the server, so no extra traffic is generated. This approach suits resources that change infrequently (if a file is updated frequently, expires caching is not recommended). Here I set 3d: within 3 days the browser serves the URL from cache; after that it sends a request and the server compares the file's last update time; if it has changed, the file is downloaded from the server again with status code 200.
server {
  location /www/ {
    root /data/;
    index index.html index.htm;
  }

  location /image/ {
    root /data/;
    autoindex on;
  }
}
12. Configure a high-availability cluster (dual-machine hot backup)
When the primary Nginx server goes down, requests are switched to the backup Nginx server.
First install Keepalived:
yum install keepalived -y
Then edit the /etc/keepalived/keepalived.conf configuration file: add a vrrp_script block to define a peripheral detection mechanism, and track the script's result via track_script inside vrrp_instance to achieve failover between nodes:
global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30        # the lines above are all email configuration
   router_id LVS_DEVEL            # name of the current server; see the hostname command
}

vrrp_script chk_maintainace {    # the detection script is named chk_maintainace
    script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"   # a script path or a script command
    # script "/etc/keepalived/nginx_check.sh"
    interval 2                   # run the check every 2 seconds
    weight -20                   # when the script fails, lower this server's priority by 20
}

vrrp_instance VI_1 {             # each vrrp_instance defines a virtual router
    state MASTER                 # MASTER on the primary, BACKUP on the standby
    interface eth0               # network interface name; see ifconfig
    virtual_router_id 51         # virtual router id, below 255; must match on primary and backup
    priority 100                 # the primary's priority must be higher than the backup's
    advert_int 1                 # heartbeat interval, default 1 second
    authentication {             # authentication mechanism
        auth_type PASS
        auth_pass 1111           # password
    }
    track_script {
        chk_maintainace          # track the detection script defined above
    }
    virtual_ipaddress {
        172.16.2.8               # virtual address (VIP)
    }
}
Here is one possible detection script, nginx_check.sh:
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    /usr/sbin/nginx        # try to restart nginx
    sleep 2                # sleep for 2 seconds
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        killall keepalived # restart failed: kill keepalived so the VIP drifts to the backup node
    fi
fi
Copy the same setup to the BACKUP server, changing the Keepalived state to BACKUP and setting a priority lower than the primary's.
Start Keepalived on both the primary and backup servers with service keepalived start, then stop it on the primary with service keepalived stop and run ip addr on the backup to check whether the virtual IP has automatically switched to it.
Start Keepalived on the primary again, and the VIP returns to the primary.
13. Adapting the site for PC or mobile devices
Return differently styled sites depending on the user's device. In the past, pure front-end responsive layout was commonly used, but in terms of both complexity and ease of use it is not as good as writing the sites separately. Large sites such as Taobao and JD are not responsive; instead they are written separately, and the User-Agent of the request determines whether the PC or the H5 site is returned.
In the /usr/share/nginx/html folder, use mkdir to create two folders, pc and mobile, and use vim to create an index.html in each:
cd /usr/share/nginx/html
mkdir pc mobile
cd pc
vim index.html # Write something like Hello PC!
cd ../mobile
vim index.html # write something like Hello mobile!
Then configure a virtual host for the secondary domain: go to the /etc/nginx/conf.d folder and create a new configuration file fe.sherlocked93.club.conf:
# /etc/nginx/conf.d/fe.sherlocked93.club.conf
server {
  listen 80;
  server_name fe.sherlocked93.club;

  location / {
    root /usr/share/nginx/html/pc;
    if ($http_user_agent ~* '(Android|webOS|iPhone|iPod|BlackBerry)') {
      root /usr/share/nginx/html/mobile;
    }
    index index.html;
  }
}
Here $http_user_agent is used to read the User-Agent of the request; if it matches a mobile keyword, the root is switched to the mobile directory, so mobile devices get the mobile site.
Access the site in a browser, and then simulate mobile access in F12:
As you can see, when simulating mobile access, Nginx returns the site as the mobile HTML.
14. Configuring HTTPS
There are plenty of detailed guides online for the configuration process, and you can also use your cloud provider: they generally offer free server certificates to apply for, and installation simply follows the provider's guide.
The free certificate I got through Tencent Cloud, issued by TrustAsia, can only be used for a single domain; secondary domains must be applied for separately. However, application and approval are quite fast and succeed within a few minutes. Then download the certificate archive, which contains an nginx folder; copy the xxx.crt and xxx.key files into a directory on the server, and configure:
server {
  listen 443 ssl http2 default_server;   # SSL access port set to 443
  server_name sherlocked93.club;         # fill in the domain name bound to the certificate

  ssl_certificate /etc/nginx/https/1_sherlocked93.club_bundle.crt;  # certificate file path
  ssl_certificate_key /etc/nginx/https/2_sherlocked93.club.key;     # private key file path
  ssl_session_timeout 10m;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # supported protocols
  ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
  ssl_prefer_server_ciphers on;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
  }
}
When done, run nginx -t -q to check the configuration; if there is no problem, run nginx -s reload. Now visiting https://sherlocked93.club/ serves the HTTPS version of the site.
You can generally add a few directives to enhance security:
add_header X-Frame-Options DENY; # Reduce clickjacking
add_header X-Content-Type-Options nosniff; # Disable the server to automatically resolve resource types
add_header X-Xss-Protection 1; # Anti-XSS attack
15. Some common techniques
15.1 Static Service
server {
  listen 80;
  server_name static.sherlocked93.club;
  charset utf-8;    # prevent garbled Chinese file names

  location /download {
    alias /usr/share/nginx/html/static;   # static resource directory
    autoindex on;                # enable directory listing of static resources
    autoindex_exact_size off;    # on (default): show exact file size in bytes; off: show size in KB, MB or GB
    autoindex_localtime off;     # off (default): show file time in GMT; on: show file time in server local time
  }
}
15.2 Image hotlink protection
server {
  listen 80;
  server_name *.sherlocked93.club;

  # image hotlink protection
  location ~* \.(gif|jpg|jpeg|png|bmp|swf)$ {
    # only allow requests referred from this server; thanks to @mufadhuan for the reminder that Baidu and Google should also be whitelisted
    valid_referers none blocked server_names ~\.google\. ~\.baidu\. *.qq.com;
    if ($invalid_referer) {
      return 403;
    }
  }
}
15.3 Request Filtering
Return 403 for all requests that are not of the allowed methods:
if ( $request_method !~ ^(GET|POST|HEAD)$ ) {
  return 403;
}

location / {
  # IP access restriction (only allow 192.168.0.2)
  allow 192.168.0.2;
  deny all;

  root html;
  index index.html index.htm;
}
15.4 Configuring caching for static files such as images and fonts
Static files such as images, fonts, audio and video are usually packaged with a hash in the file name, so a long cache time can be set: first strong caching, then negotiated caching. For static files without a hash it is advised not to set strong caching, relying on negotiated caching alone to decide whether the cache is used.
# Set image cache time
location ~ .*\.(css|js|jpg|png|gif|swf|woff|woff2|eot|svg|ttf|otf|mp3|m4a|aac|txt)$ {
expires 10d;
}
# If you do not want to cache
expires -1;
15.5 Single-page Item History Route configuration
server {
  listen 80;
  server_name fe.sherlocked93.club;

  location / {
    root /usr/share/nginx/html/dist;   # folder produced by the vue build
    index index.html index.htm;
    try_files $uri $uri/ /index.html @rewrites;

    expires -1;                        # usually no strong caching for the entry page
    add_header Cache-Control no-cache;
  }

  # API forwarding, if needed
  #location ~ ^/api {
  #  proxy_pass http://be.sherlocked93.club;
  #}

  location @rewrites {
    rewrite ^(.+)$ /index.html break;
  }
}
15.6 Forwarding HTTP Requests to HTTPS
After HTTPS is configured, the browser can still access the HTTP address http://sherlocked93.club/. You can add a 301 redirect that forwards the domain's HTTP requests to HTTPS:
server {
  listen 80;
  server_name www.sherlocked93.club;

  # redirect a single domain
  if ($host = 'www.sherlocked93.club') {
    return 301 https://www.sherlocked93.club$request_uri;
  }

  # or redirect globally for any non-HTTPS scheme
  if ($scheme != 'https') {
    return 301 https://$server_name$request_uri;
  }

  # or redirect everything
  return 301 https://$server_name$request_uri;

  # pick whichever of the above fits your needs; do not add all of them
}
15.7 Generic Domain name Path Separation
This is a very useful skill. Sometimes we need to configure some secondary or tertiary domain names, and we want Nginx to automatically point to the corresponding directory. For example:
- test1.doc.sherlocked93.club automatically points to the /usr/share/nginx/html/doc/test1 server directory;
- test2.doc.sherlocked93.club automatically points to the /usr/share/nginx/html/doc/test2 server directory;
server {
  listen 80;
  server_name ~^([\w-]+)\.doc\.sherlocked93\.club$;
  root /usr/share/nginx/html/doc/$1;
}
15.8 Generic domain name forwarding
Similar to the previous feature, sometimes we want to rewrite a secondary or tertiary domain to the corresponding path, so the back end can resolve different rules based on the route:
- test1.serv.sherlocked93.club/api?name=a is automatically forwarded to 127.0.0.1:8080/test1/api?name=a;
- test2.serv.sherlocked93.club/api?name=a is automatically forwarded to 127.0.0.1:8080/test2/api?name=a;
server {
  listen 80;
  server_name ~^([\w-]+)\.serv\.sherlocked93\.club$;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:8080/$1$request_uri;
  }
}
16. Best practices
- To make Nginx configurations easier to maintain, it is recommended to create a separate configuration file for each service and store it in the /etc/nginx/conf.d directory; create as many independent configuration files as needed.
- It is recommended to follow the naming convention <service>.conf. For example, if the domain is sherlocked93.club, the configuration file should be /etc/nginx/conf.d/sherlocked93.club.conf. If multiple services are deployed, the Nginx forwarding port can also be added to the file name, e.g. sherlocked93.club.8080.conf; for a secondary domain, a file such as fe.sherlocked93.club.conf is likewise recommended.
- Common, frequently reused configuration can be placed in the /etc/nginx/snippets folder and included from the Nginx configuration files that need it. Name the snippets after their function, and note each snippet file's main function and intended inclusion location inside it for easier management. For example, I have set up the gzip and cors configurations shown earlier as snippets.
- Name Nginx log files after the domain (e.g. be.sherlocked93.club.access.log and be.sherlocked93.club.error.log) and keep them in the /var/log/nginx/ directory, configuring separate access and error log files for each service, which makes finding errors easier and faster.
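As a sketch of the snippets practice (the snippet file names here are illustrative), a per-service config can then pull in the shared pieces with include:

```nginx
# /etc/nginx/conf.d/fe.sherlocked93.club.conf
server {
  listen 80;
  server_name fe.sherlocked93.club;

  include /etc/nginx/snippets/gzip.conf;   # shared gzip settings
  include /etc/nginx/snippets/cors.conf;   # shared CORS headers

  access_log /var/log/nginx/fe.sherlocked93.club.access.log;
  error_log  /var/log/nginx/fe.sherlocked93.club.error.log;

  location / {
    root /usr/share/nginx/html;
    index index.html;
  }
}
```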
Thanks to @mufadhuan's reminder: when configuring hotlink protection in Nginx, Baidu and Google can be whitelisted, which is good for SEO.
Online posts vary greatly in depth, and some even contradict each other. The article above is a summary of my learning process; if you spot a mistake, feel free to leave a comment pointing it out ~
Reference documents:
- Nginx Chinese documentation
- Nginx installation, directory structure and configuration file detailed explanation
- Keepalived installation and configuration
- Keepalived+Nginx implements high availability
- Nginx and front-end development
- Cross-Origin Resource Sharing (CORS) explained – Ruan Yifeng's blog
- Prerequisite nginx knowledge for front-end developers
- I also talk about Nginx solving front-end cross-domain problems, correct Nginx cross-domain configuration
- Vue – the router history mode nginx configuration and configure static resource cache | HolidayPenguin
- Nginx redirection, global HTTPS, SSL configuration, reverse configuration reference
- Introduction to Nginx
PS: My blog is at Github – SHERlocked93/blog; you are also welcome to follow my WeChat official account [Front-end Afternoon Tea], and we can improve together ~
You can also join the "Front-end Afternoon Tea Exchange Group" WeChat group: long-press to identify the QR code below to add me as a friend, note that you want to join the group, and I will invite you in ~