• 0. Before the words
  • 1. Basic introduction to Nginx
  • 2. How to install nginx
  • 3. Nginx configuration file management
  • 4. Configure Nginx as a basic Web Server
    • 4.1 Writing the First Configuration File
    • 4.2 Verifying and reloading the Nginx configuration file
    • 4.3 Understand Directives and Contexts in the Nginx configuration file
    • 4.4 Using Nginx as a Static File Server
    • 4.5 Nginx handles static file type resolution
    • 4.6 Nginx subconfiguration Import
  • 5. Nginx dynamic routing
    • 5.1 Address Matching
    • 5.2 Nginx variables
    • 5.3 Redirection and Rewrite
    • 5.4 Fault Tolerance of Multiple Files (try_files)
  • 6. Nginx logs
  • 7. Nginx serves as the reverse proxy server
    • 7.1 What is a Reverse proxy
    • 7.2 Basic Principles of Reverse Proxy
    • 7.3 Basic Configuration of Reverse Proxy
    • 7.4 Nginx Reverse Proxy Address Matching Rule
    • 7.5 Reverse proxy Header Rewrite
    • 7.6 Reverse proxy try tcpdump packet capture and parsing
    • 7.7 Reverse Proxy Handling relative Path Problems
  • 8. Nginx acts as a load balancing server
  • 9. Optimize Nginx performance
    • 9.1 How to Set the Number of Worker Processes and Worker Connections?
    • 9.2 How can I Cache Static Files
    • 9.3 How Do I Compress A Response?
  • 10. Understand the entire Nginx configuration file
  • reference
  • Afterword
  • Update record

0. Before we begin

Igor Sysoev, a young programmer from Russia, started developing a new Web server in 2002 to solve the so-called C10K problem: the Web servers of the day could not handle more than 10,000 concurrent connections.

Nginx was released in 2004 under the 2-clause BSD license. According to a Web server survey from March 2021, Nginx holds 35.3% of the market, serving 419.6 million websites.

Thanks to DigitalOcean’s NGINXConfig project, there are plenty of ready-made Nginx templates available, so you can copy and paste Nginx configurations without understanding them.

This is not to say that copying and pasting is wrong, but copying configuration you don’t understand will eventually cause problems. So it pays to understand Nginx configuration. By studying this article, you will be able to:

  • Understand Nginx configurations generated by tools or written by others.
  • Configure the Web server, reverse proxy server, and load balancing server from 0 to 1.
  • Optimize Nginx for maximum performance.
  • Configure HTTPS and HTTP/2.

This article assumes basic Linux knowledge (the ability to run commands such as ls and cat) and some understanding of front-end and back-end development, which should be easy for front-end or back-end programmers.

1. Basic introduction to Nginx

Nginx is a Web server designed for high performance, high concurrency, and low resource consumption. Although Nginx is best known as a Web server, its other core capability is the reverse proxy.

Nginx is not the only Web server on the market; its biggest competitor, Apache HTTP Server (httpd), was released in 1995. People choose Nginx as a Web server for two main reasons:

  • Support higher concurrency.
  • Provide static file services with fewer hardware resources.

There is no point in arguing whether Nginx or Apache is better. For more on the differences between Nginx and Apache, see Justin Ellingwood’s article.

As for how Nginx handles requests, Justin’s article explains:

Nginx came after Apache and was more aware of the concurrency problems websites face as they grow, so it was designed from the outset around an asynchronous, non-blocking, event-driven connection-handling model.

Nginx works by spawning worker processes, each of which can handle thousands of connections. Worker processes handle events through a fast event-loop mechanism. Decoupling the work of handling requests from the connections themselves means each worker only attends to a connection when a new event is triggered on it.

Nginx works like this:

Nginx can serve static files with high performance and low resource consumption because it has no built-in dynamic programming language processor. When a request for a static file arrives, Nginx simply responds with the file, without any extra processing.

This is not to say that Nginx cannot work with dynamic languages; it can delegate requests to separate processes such as PHP-FPM, Node.js, or Python. Once the external process has handled the request, Nginx proxies the response back to the client, as shown in the figure below:

2. How to install nginx

There are many Nginx installation guides online. Here is an example for Ubuntu:

# Update the sources and upgrade
sudo apt update && sudo apt upgrade -y

# Install
sudo apt install nginx -y

Nginx is registered as a systemd system service.

sudo systemctl status nginx

# If it is not registered as a systemd service, try service instead
sudo service nginx status

Nginx configuration files are usually stored in the /etc/nginx directory. The default configuration port is 80.
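To confirm that the default server is responding on port 80, a quick request is enough (the exact version in the output depends on your installation):

curl -I http://localhost

# HTTP/1.1 200 OK
# Server: nginx/...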

Congratulations! Nginx is installed and running.

3. Nginx configuration file management

Nginx provides services for either static or dynamic files. How these services are served is determined by the configuration file.

Nginx configuration files end in .conf and are usually located in the /etc/nginx directory.

cd /etc/nginx

ls -lh

# drwxr-xr-x 2 root root 4.0K Apr 21  2020 conf.d
# -rw-r--r-- 1 root root 1.1K Feb  4  2019 fastcgi.conf
# -rw-r--r-- 1 root root 1007 Feb  4  2019 fastcgi_params
# -rw-r--r-- 1 root root 2.8K Feb  4  2019 koi-utf
# -rw-r--r-- 1 root root 2.2K Feb  4  2019 koi-win
# -rw-r--r-- 1 root root 3.9K Feb  4  2019 mime.types
# drwxr-xr-x 2 root root 4.0K Apr 21  2020 modules-available
# drwxr-xr-x 2 root root 4.0K Apr 17 14:42 modules-enabled
# -rw-r--r-- 1 root root 1.5K Feb  4  2019 nginx.conf
# -rw-r--r-- 1 root root  180 Feb  4  2019 proxy_params
# -rw-r--r-- 1 root root  636 Feb  4  2019 scgi_params
# drwxr-xr-x 2 root root 4.0K Apr 17 14:42 sites-available
# drwxr-xr-x 2 root root 4.0K Apr 17 14:42 sites-enabled
# drwxr-xr-x 2 root root 4.0K Apr 17 14:42 snippets
# -rw-r--r-- 1 root root  664 Feb  4  2019 uwsgi_params
# -rw-r--r-- 1 root root 3.0K Feb  4  2019 win-utf

/etc/nginx/nginx.conf is the main nginx configuration file. If you open this configuration file, you’ll find a lot. Don’t be afraid. This article is about learning it bit by bit.

When changing the configuration, it is better not to edit /etc/nginx/nginx.conf directly; back it up first and then create a new file:

# Back up the original file
sudo mv nginx.conf nginx.conf.backup

# Create a new configuration file
sudo touch nginx.conf

4. Configure Nginx as a basic Web Server

In this section, you will learn how to write Nginx configuration files from scratch. The purpose is to understand the basic syntax and basic concepts of Nginx configuration files.

4.1 Writing the First Configuration File

Open the configuration file with vim /etc/nginx/nginx.conf and replace its content with:

events { }

http {
    server {
        listen 80;
        server_name localhost;

        return 200 "Bonjour, mon ami!\n";

        # redirection
        #return 302 https://www.baidu.com$request_uri;
    }
}

Reload Nginx and send a request; you will get the following response:

curl -i http://127.0.0.1

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sat, 19 Feb 2022 08:31:59 GMT
Content-Type: text/plain
Content-Length: 21
Connection: keep-alive

Bonjour, mon ami!

4.2 Verifying and reloading the Nginx configuration file

Nginx configuration file can be verified by using -t:

sudo nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If there are syntax errors, the preceding command output will prompt you.

If you want to change Nginx's state (restart, reload, and so on), there are three ways: first, the -s (signal) parameter, which sends a signal to Nginx; second, the system service managers systemd or service; third, the kill command operating directly on the Linux processes.

Send a signal to Nginx

Nginx signals: nginx -s reload | quit | stop | reopen, which respectively reload the configuration, gracefully stop Nginx, unconditionally stop Nginx, and reopen the log files.

"Gracefully stopping" Nginx means it stops after the current requests have been processed. "Unconditionally stopping" Nginx is equivalent to kill -9: the process is killed immediately.

System services manage Nginx

# Using systemctl
sudo systemctl start|restart|stop nginx

# Or using service
sudo service nginx start|restart|stop

The kill command kills a process and starts it manually

# Kill the master process and its children
sudo kill -TERM $MASTER_PID

# Start Nginx with a specified configuration file
sudo /usr/sbin/nginx -c /etc/nginx/nginx.conf

4.3 Understand Directives and Contexts in the Nginx configuration file

The Nginx configuration file looks like simple text, but it has its own syntax. Everything in it is a directive, and there are two kinds:

  • Simple Directives
  • Block Directives

Simple directives: a name and parameters separated by spaces, ending with a semicolon (;). For example, listen and return.

Block directives: wrapped in braces ({}). A block directive that can contain other directives inside it is called a context.
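Here is a minimal annotated configuration, of the kind used throughout this article, showing both kinds of directives:

events { }                     # "events" is a block directive; its braces form a context

http {                         # "http" is a block directive in the main context
    server {                   # "server" is a block directive nested inside http{}
        listen 80;             # "listen" is a simple directive: name, parameters, semicolon
        server_name localhost; # "server_name" is also a simple directive
        return 200 "ok\n";
    }
}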

The core contexts in an Nginx configuration are:

  • events{}: general configuration of how Nginx handles connections; it can appear only once in the configuration file.
  • http{}: configures how Nginx handles http or https requests; it can appear only once in the configuration file.
  • server{}: nested inside http{}; configures a specific virtual host on the machine. One http{} can contain multiple server{} blocks, i.e. multiple virtual hosts.
  • main: everything outside the three contexts above lives in the main context.

Set up different virtual hosts on the same machine (multiple server{} blocks with the same server_name) listening on different ports:

http {
    server {
        listen 80;
        server_name localhost;

        return 200 "hello from port 80!\n";
    }

    server {
        listen 8080;
        server_name localhost;

        return 200 "hello from port 8080!\n";
    }
}

You can also set up multiple virtual hosts on the same port, distinguished by different server_name values.

In this case the domain name matters: Nginx extracts the Host header from the request and matches it against each server's server_name to decide which server{} block handles the request.

http {
    server {
        listen 8088;
        server_name library.test;

        return 200 "your local library!\n";
    }

    server {
        listen 8088;
        server_name librarian.library.test;

        return 200 "welcome dear librarian!\n";
    }
}

Different results are returned when accessing different domain names:

curl -i http://library.test:8088

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 20 Feb 2022 08:02:20 GMT
Content-Type: application/octet-stream
Content-Length: 21
Connection: keep-alive

your local library!

curl -i http://librarian.library.test:8088

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 20 Feb 2022 08:04:26 GMT
Content-Type: application/octet-stream
Content-Length: 24
Connection: keep-alive

welcome dear librarian!

To test locally, resolve the domain names to the server's IP address in advance, or add them to the local hosts file:

172.19.146.188 library.test librarian.library.test

Note that the return directive takes two parameters here: the status code and the response text, which is enclosed in quotes.

4.4 Using Nginx as a Static File Server

Update the Nginx configuration file as follows:

events { }

http {
    server {
        listen 8088;
        server_name localhost;

        root /usr/share/nginx/html;
    }
}

Here the default Nginx page has been modified: the file /usr/share/nginx/html/assets/mystyle.css contains p { background: red; } and is referenced from the HTML file, so the paragraph background should normally turn red.

The index.html page is displayed, but the paragraph background color does not take effect. Debug by requesting the CSS file directly:

curl -i http://fengmengzhao.hypc:8088/assets/mystyle.css

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 20 Feb 2022 08:43:58 GMT
Content-Type: text/plain
Content-Length: 27
Last-Modified: Sun, 20 Feb 2022 08:38:54 GMT
Connection: keep-alive
ETag: "6211fe1e-1b"
Accept-Ranges: bytes

p {
    background: red;
}

Note that the Content-Type of the response is text/plain, not text/css. Nginx is serving the CSS file as plain text rather than as a stylesheet, so the browser will not apply the styles.

This article adds domain name resolution to the local hosts file, which is why the examples use a domain name. When following along, change the IP (or domain name) and port to match your environment.

4.5 Nginx handles static file type resolution

This is a matter of how Nginx resolves static file types. By default (with no type mappings configured), Nginx serves files as text/plain.

Modify the configuration file as follows:

events { }

http {
    types {
        text/html html;
        text/css css;
    }

    server {
        listen 8088;
        server_name localhost;

        root /usr/share/nginx/html;
    }
}

Now the Content-Type response header of mystyle.css is text/css:

The types{} block inside http{} maps file extensions to MIME types. Note that without types{}, Nginx still treats .html files as text/html, but once types{} is introduced, Nginx only uses the mappings you define. So with types{} you cannot map only CSS; you must also explicitly map HTML, otherwise Nginx will serve HTML as plain text.

4.6 Nginx subconfiguration Import

Manually maintaining types{} in http{} is fine for small projects but too tedious for large ones. Nginx ships with a default mapping (usually in the /etc/nginx/mime.types file), and sub-configurations like this can be pulled into a configuration file with the include directive.

Modify the configuration as follows:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;
    }
}

Restart Nginx and the custom CSS file can be displayed normally.

5. Nginx dynamic routing

The above example is very simple: access a file in the root definition directory, return if it exists, or return the default 404 page if it does not exist.

Next, learn how Nginx routes requests dynamically with location, including redirection, rewriting, and the try_files directive.

Dynamic routing is about how Nginx matches the path of a request once it reaches Nginx.

5.1 Address Matching (location)

Modify the configuration file as follows:

events { }

http {
    server {
        # Set the default Content-Type to text/html, otherwise the response is downloaded as a byte stream
        default_type text/html;
        charset UTF-8;

        listen 80;
        server_name localhost;

        # Prefix match, example:
        # http://fengmengzhao.hypc:8088/agatha-...
        location /agatha {
            return 200 "prefix match - Miss Marple.\nHercule Poirot.\n";
        }

        # Exact match, example:
        # http://fengmengzhao.hypc:8088/agatha
        location = /agatha {
            return 200 "exact match - Miss Marple.\nHercule Poirot.\n";
        }

        # Regex match, case-sensitive by default, example:
        # http://fengmengzhao.hypc:8088/agatha01234
        # A regex match has higher priority than the prefix match above
        location ~ /agatha[0-9] {
            return 200 "regex match, case-sensitive - Miss Marple.\nHercule Poirot.\n";
        }

        # Regex match, case-insensitive, example:
        # http://fengmengzhao.hypc:8088/AGatHa01234
        location ~* /agatha[0-9] {
            return 200 "regex match, case-insensitive - Miss Marple.\nHercule Poirot.\n";
        }

        # Priority prefix match, example:
        # http://fengmengzhao.hypc:8088/agatha01234
        # Adding ^~ to a prefix match turns it into a priority prefix match
        location ^~ /agatha {
            return 200 "priority prefix match - Miss Marple.\nHercule Poirot.\n";
        }
    }
}

Summary of matching rules:

| Match type      | Keyword  |
| --------------- | -------- |
| Exact           | =        |
| Priority prefix | ^~       |
| Regex           | ~ or ~*  |
| Prefix          | (none)   |

If a request matches several configured locations, a regular expression match takes priority over a plain prefix match, a priority prefix match (^~) takes priority over a regular expression match, and an exact match has the highest priority of all.

5.2 Nginx Variables

Set variables:

set $<variable_name> <variable_value>;

set $name "Farhan";
set $age 25;
set $is_working true;

Variable type:

  • String
  • Integer
  • Boolean

In addition to custom variables, nginx has built-in variables, see nginx.org/en/docs/var… .

For example, use built-in variables in the following configuration:

events { }

http {
    server {
        listen 80;
        server_name localhost;

        return 200 "Host - $host\nURI - $uri\nArgs - $args\n";
    }
}

#curl http://localhost/user?name=Farhan

# Host - localhost
# URI - /user
# Args - name=Farhan

The built-in variables $host, $uri, and $args used above hold the host name, the request path, and the query string respectively. Variables can also be assigned to custom variables, for example:

events { }

http {
    server {
        listen 80;
        server_name localhost;

        set $name $arg_name; # $arg_<query string name>

        return 200 "Name - $name\n";
    }
}

$arg_<name> retrieves a single named parameter from the query string held in $args.

5.3 Redirects and Rewrites

Redirects in Nginx work like redirects on any other platform: they return a 3xx status code and a Location header. When accessed from a browser, the browser automatically re-requests the URL given in Location and the address bar changes.

Redirection examples:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;

        location = /index_page {
            return 307 https://fengmengzhao.github.io;
        }

        location = /about_page {
            return 307 https://fengmengzhao.github.io/about;
        }
    }
}
#curl -I http://localhost/about_page

HTTP/1.1 307 Temporary Redirect
Server: nginx/1.18.0 (Ubuntu)
Date: Mon, 21 Feb 2022 11:47:42 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 180
Connection: keep-alive
Location: https://fengmengzhao.github.io/about

Rewrites differ from redirects: a rewrite is handled internally, forwarding the request without changing the address bar. For example:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;

        rewrite /image /assets/generate.png;
    }
}
#curl -i http://localhost/image

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Mon, 21 Feb 2022 11:56:42 GMT
Content-Type: image/png
Content-Length: 144082
Last-Modified: Sun, 20 Feb 2022 08:35:21 GMT
Connection: keep-alive
ETag: "6211fd49-232d2"
Accept-Ranges: bytes

Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.

If you open http://fengmengzhao.hypc:8088/image in a browser, the image is displayed.

5.4 try_files: Trying Multiple Files

A try_files example:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;

        try_files /assets/xxx.jpg /not_found;

        location /not_found {
            return 404 "sadly, you've hit a brick wall buddy!\n";
        }
    }
}

The example looks for the /assets/xxx.jpg file and the /not_found path if it doesn’t exist.

try_files is often used with the $uri built-in variable:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;

        try_files $uri /not_found;
        # Also try the path as a directory:
        #try_files $uri $uri/ /not_found;

        location /not_found {
            return 404 "sadly, you've hit a brick wall buddy!\n";
        }
    }
}
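A common variant (not part of the original example) is the single-page-application pattern, which falls back to index.html so that client-side routes still resolve:

location / {
    # try the exact file, then a directory with that name, then hand everything else to index.html
    try_files $uri $uri/ /index.html;
}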

6. Nginx logs

Log location (usually /var/log/nginx) :

ls -lh /var/log/nginx/

#-rw-r----- 1 www-data adm 0 Apr 25 07:34 access.log
#-rw-r----- 1 www-data adm 0 Apr 25 07:34 error.log

Delete the log file and reopen Nginx:

# delete the old files
sudo rm /var/log/nginx/access.log /var/log/nginx/error.log

# create new files
sudo touch /var/log/nginx/access.log /var/log/nginx/error.log

# reopen the log files
sudo nginx -s reopen

nginx -s reopen reopens the log files rather than reloading the configuration. Without it, the new files would never be written to, because Nginx still holds the output stream of the deleted files. If you only want to empty a log file, you can write to it in place, e.g. echo "" > /var/log/nginx/access.log, and no reopen is needed.
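If the goal is only to empty the logs rather than rotate them, truncating them in place avoids the reopen step entirely; a small sketch:

# truncate in place: the file's inode does not change, so Nginx keeps writing to it
sudo truncate -s 0 /var/log/nginx/access.log

# or, as mentioned above
sudo sh -c 'echo "" > /var/log/nginx/access.log'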

Access Nginx and view logs:

curl -I http://localhost

# HTTP/1.1 200 OK
# Server: nginx/1.18.0 (Ubuntu)
# Date: Sun, 25 Apr 2021 08:35:59 GMT
# Content-Type: text/html
# Content-Length: 960
# Last-Modified: Sun, 25 Apr 2021 08:35:33 GMT
# Connection: keep-alive
# ETag: "608529d5-3c0"
# Accept-Ranges: bytes

sudo cat /var/log/nginx/access.log

# [25/Apr/2021:08:35:59 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.68.0"

By default, any access logs are logged in the access.log file, or you can customize the path with the access_log Directive:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        location / {
            return 200 "this will be logged to the default file.\n";
        }

        location = /admin {
            # log output goes to the /var/logs/nginx/admin.log file
            access_log /var/logs/nginx/admin.log;

            return 200 "this will be logged in a separate file.\n";
        }

        location = /no_logging {
            # disable logging
            access_log off;

            return 200 "this will not be logged.\n";
        }
    }
}

The access.log path can be customized per location{}, and logging can be turned off with access_log off.
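access_log can also reference a named log_format if you want a custom log line layout; a minimal sketch (the format name and fields are illustrative):

http {
    log_format simple '$remote_addr [$time_local] "$request" $status $body_bytes_sent';

    server {
        listen 80;
        server_name localhost;

        # use the "simple" format defined above
        access_log /var/log/nginx/access.log simple;
    }
}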

Similarly, error_log defines the path of Nginx's error.log:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        error_log /var/log/error.log;

        # note: this return deliberately has an invalid number of arguments, to trigger an error
        return 200 "..." "...";
    }
}

Reload nginx using nginx -s reload

sudo nginx -s reload

# nginx: [emerg] invalid number of arguments in "return" directive in /etc/nginx/nginx.conf:14

Check the error log file; it contains the same error message:

sudo cat /var/log/nginx/error.log 

# 2021/04/25 08:35:45 [notice] 4169#4169: signal process started
# 2021/04/25 10:03:18 [emerg] 8434#8434: invalid number of arguments in "return" directive in /etc/nginx/nginx.conf:14

Nginx error logs have levels:

  • debug: messages that help figure out what is going on.
  • info: useful but not essential information.
  • notice: more noteworthy than info, but still harmless to ignore.
  • warn: something unexpected happened, but the server keeps working.
  • error: something failed.
  • crit: a critical problem that must be addressed immediately.
  • alert: action is needed right away.
  • emerg: the system is unusable; this is an emergency.

By default, Nginx logs error messages of every level. This can be changed with a second parameter to error_log. To make warn the lowest level that is logged, update the configuration as follows:

events {

}

http {

    include /etc/nginx/mime.types;

    server {

        listen 80;
        server_name localhost;
	
        error_log /var/log/error.log warn;

        return 200 "..." "...";
    }

}

Reload Nginx and view the log:

cat /var/log/nginx/error.log

# 2021/04/25 11:27:02 [emerg] 12769#12769: invalid number of arguments in "return" directive in /etc/nginx/nginx.conf:16

As you can see, the previous [notice] log is no longer printed.

7. Nginx serves as the reverse proxy

7.1 What is a Reverse proxy?

A reverse proxy is, first and foremost, a proxy that is a third party in addition to the client and server. It is easy to compare Forward and Reverse proxies.

A forward proxy is generally a proxy for the client, and the user (client) knows the proxy exists (it is usually configured on the client). The client's request to the target service is forwarded by the proxy, and the target service's response is returned through the proxy to the client. VPNs, browser proxy settings, Git proxy settings, and packet-capture tools such as Fiddler are all forward proxies.

In this article, "target service", "proxied upstream service", "proxied service", and "server" all refer to the service configured with proxy_pass; "proxy service" and "proxy server" refer to the proxy provided by Nginx.

Forward proxy schematic diagram:

A reverse proxy is typically a proxy for the server. The client deals directly with the proxy service and knows nothing about the proxied service behind it. After the client request reaches the proxy, the proxy forwards it to the proxied service and returns the response to the client.

Schematic diagram of reverse proxy:

In the two figures above, it is understandable that the services on the blue background are known to each other.

When Nginx acts as a reverse proxy, it sits between the client and the server. The client sends the request to Nginx (reverse proxy), and Nginx sends the request to the server. Once the server finishes processing the request, it returns the result to Nginx, which in turn returns the result to the client. Throughout this process, the client has no idea who is actually processing the request (actually processing the request and generating the response, not the proxy).

7.2 Basic Principles of Reverse Proxy

When I first encountered reverse proxies, they felt almost magical: can a few lines of configuration really make a third-party website appear on your own host?

Actually, not quite. Some sites can be proxied well enough to show the home page but not to work fully; others load with broken styles or images. Only by understanding how the proxy works can you explain these situations.

The so-called reverse proxy forwards the request sent by the client to the server that actually processes the request (the server specified by proxy_pass). After the server responds, it proxies the response back to the client.

Being a proxy, it does not simply forward the request untouched. After receiving the client's request, the proxy rewrites the request headers before forwarding it to the specified upstream. The rewriting rules, described in detail in the reverse proxy header rewrite section, include for example:

  1. Headers with empty values are not forwarded, and headers whose names contain an underscore (_) are not forwarded by default.
  2. The Host and Connection headers are rewritten by default, to Host: $proxy_host and Connection: close respectively.

If the proxy server is just forwarding, what proxy is there? Just like agents in life, they will provide value-added services and help you get everything done.

Reverse proxy is to rewrite the header information and forward the request to the proxy service. The proxy service processes the request and returns the response to the proxy service, which then forwards the response back to the client.

The forwarded request is re-initiated by the proxy server itself, so it cannot be captured in the browser or with a client-side tool such as Fiddler. Use Wireshark or tcpdump on the network adapter of the proxy server, which is where the outgoing request is made.

The full chain is: client <–> proxy service <–> proxied service. By default, Nginx's reverse proxy does not change the content of the response. An absolute reference in the proxied page (/assets/images/abc.jpg), a relative reference (assets/images/abc.jpg), or a reference to an external image host (https://image.com/image/abc.jpg) is returned to the client unchanged. When the client parses the HTML, these references trigger new requests, and if such a request is directed at the proxy service, it again travels client <–> proxy service <–> proxied service.

–> indicates a request, and <– indicates a response.

Things sometimes get complicated after proxying because the proxied service may redirect or require authentication; the proxying itself is as simple as requesting the proxied service and passing its response back unchanged.

7.3 Basic Configuration of Reverse Proxy

Look at a simple reverse proxy configuration:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass "https://bbs.tianya.cn/";
        }
    }
}

The page after proxy is as follows:

Because plain HTTP is reverse-proxying an HTTPS site, the carrier can inject ads in the bottom-right corner (accessing https://bbs.tianya.cn/ directly would not show the ads).

Proxy_pass can simply forward the client request to the third-party server and return the reverse proxy response result to the client.

That was a plain proxy. If you want to reverse-proxy an interface that uses WebSocket, you need to override some headers:

# WebSocket requires HTTP/1.1; the proxy default is HTTP/1.0
proxy_http_version 1.1;
# Overwrite the Upgrade header with $http_upgrade, i.e. the Upgrade header of the client request
proxy_set_header Upgrade $http_upgrade;
# Overwrite the Connection header with 'upgrade'
proxy_set_header Connection 'upgrade';

7.4 Nginx Reverse Proxy Address Matching Rule

How does Nginx map the path requested by the client onto the upstream service specified by proxy_pass? There are two rules:

  1. If the proxy_pass target is only a domain name plus port (defaulting to 80 or 443 when no port is given), the path requested by the client is appended to the upstream address unchanged. Example: proxy_pass http://redis.cn proxies only the domain (and port).
  2. If the proxy_pass target includes a request path (even just a trailing /), the part of the client path that matched the location prefix is cut off and replaced by that path. Examples: proxy_pass http://redis.cn/ and proxy_pass http://redis.cn/commands both carry a request path.

1 and 2 above are defined as “Case 1” and “case 2” respectively, referenced below.

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 8088;
        server_name localhost;

        # Case 1, client path --> proxied path:
        # http://fengmengzhao.hypc:8088/commands --> http://redis.cn/commands
        location / {
            proxy_pass http://redis.cn;
        }

        # Case 1, client path --> proxied path:
        # http://fengmengzhao.hypc:8088/redis/commands --> http://redis.cn/redis/commands
        #location /redis {
        #    proxy_pass http://redis.cn;
        #}

        # Case 2, client path --> proxied path:
        # http://fengmengzhao.hypc:8088/redis/commands --> http://redis.cn//commands
        location /redis {
            proxy_pass http://redis.cn/;
        }

        # Case 2, client path --> proxied path:
        # http://fengmengzhao.hypc:8088/redis/commands --> http://redis.cn/commands
        location /redis/ {
            proxy_pass http://redis.cn/;
        }

        # Case 2, client path --> proxied path:
        # http://fengmengzhao.hypc:8088/redis-commands --> http://redis.cn/commands
        # http://fengmengzhao.hypc:8088/redis-commands/keys.html --> http://redis.cn/commands/keys.html
        #location /redis-commands {
        #    proxy_pass http://redis.cn/commands;
        #}

        # Case 2, client path --> proxied path:
        # http://fengmengzhao.hypc:8088/redis-commands/keys.html --> http://redis.cn/commandskeys.html
        #location /redis-commands/ {
        #    proxy_pass http://redis.cn/commands;
        #}

        # Case 2, client path --> proxied path:
        # http://fengmengzhao.hypc:8088/redis-commands/keys.html --> http://redis.cn/commands/keys.html
        #location /redis-commands/ {
        #    proxy_pass http://redis.cn/commands/;
        #}

        # Case 2, client path --> proxied path:
        # http://fengmengzhao.hypc:8088/redis-commands/keys.html --> http://redis.cn/commands//keys.html
        location /redis-commands {
            proxy_pass http://redis.cn/commands/;
        }
    }
}

The corresponding relationship between client request and proxy forward request is summarized as follows:

| Matching path    | proxy_pass         | Client request            | Post-proxy request  |
| ---------------- | ------------------ | ------------------------- | ------------------- |
| /                | redis.cn           |                           |                     |
| /redis           | redis.cn           | /redis                    | /redis              |
| /                | redis.cn/          |                           | /                   |
| /                | redis.cn/          | /                         | /                   |
| /redis           | redis.cn/          | /redis                    | /                   |
| /redis           | redis.cn/          | /redis/commands           | //commands          |
| /redis/          | redis.cn/          | /redis                    | /                   |
| /redis/          | redis.cn/          | /redis/commands           | /commands           |
| /redis-commands  | redis.cn/commands  | /redis-commands           | /commands           |
| /redis-commands  | redis.cn/commands  | /redis-commands/keys.html | /commands/keys.html |
| /redis-commands/ | redis.cn/commands  | /redis-commands           | /commands           |
| /redis-commands/ | redis.cn/commands  | /redis-commands/keys.html | /commandskeys.html  |
| /redis-commands  | redis.cn/commands/ | /redis-commands           | /commands/          |
| /redis-commands  | redis.cn/commands/ | /redis-commands/keys.html | /commands//keys.html |
| /redis-commands/ | redis.cn/commands/ | /redis-commands           | /commands/          |
| /redis-commands/ | redis.cn/commands/ | /redis-commands/keys.html | /commands/keys.html |

An empty cell in the table means only the domain name and port were accessed, with no request path.

Since the proxied request is not visible on the client, you can capture packets on the network adapter of the host where the proxy runs, save them to a .cap file, and inspect the requests in Wireshark.

Tcpdump listening command:

# 172.19.146.188 is the Nginx proxy IP address; 121.42.46.75 is the IP of the proxied upstream service,
# i.e. the IP that the domain redis.cn resolves to
# eth0 is the NIC that 172.19.146.188 uses
sudo tcpdump -i eth0 tcp port 8088 and host 172.19.146.188 or host 121.42.46.75 -c 100 -n -vvv -w /opt/nginx-2.cap

After startup, access the proxy service and packets are captured as they pass through eth0. To view the network package, open the nginx-2.cap file in Wireshark.

The request in the following table is used as the demo; capturing packets reveals the corresponding proxied request.

The request is as follows:

| Matching path    | proxy_pass        | Client request            | Post-proxy request |
| ---------------- | ----------------- | ------------------------- | ------------------ |
| /redis-commands/ | redis.cn/commands | /redis-commands/keys.html | /commandskeys.html |

The captured request packet is shown in the figure:

7.5 Reverse proxy Header Rewrite

When Nginx forwards a client request, the proxied request differs from the original mainly in its headers. Nginx modifies the headers of the request sent by the client according to the following rules (a small sketch follows the list):

  1. Nginx removes headers with empty values. A header with an empty value is useless to the server, and you can take advantage of this: to stop a header from being forwarded, use proxy_set_header to overwrite its value with an empty string.
  2. Nginx by default treats headers whose names contain an underscore (_) as invalid. This behaviour can be changed with underscores_in_headers on in the configuration; otherwise, no header containing _ is proxied to the upstream service.
  3. The Host header is overwritten with the variable $proxy_host, which is the IP address (or domain name) plus port of the proxied upstream service, as defined in proxy_pass.
  4. The Connection header is overwritten with "close", telling the proxied upstream service that the connection will be closed once it has responded and will not be kept alive.
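Here is the small sketch mentioned above, showing how these defaults can be adjusted (the upstream address and header name are hypothetical):

server {
    # accept client headers whose names contain underscores instead of dropping them
    underscores_in_headers on;

    location / {
        proxy_pass http://127.0.0.1:3000;      # hypothetical upstream

        # setting a header to an empty value prevents it from being forwarded
        proxy_set_header X-Internal-Token "";  # hypothetical header name
    }
}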

The overridden Host header matters a lot in Nginx reverse proxying. Nginx defines several variables with related but different values:

  • $proxy_host: as mentioned above, the default value the reverse proxy writes into the Host header; it is the upstream IP (or domain name) and port defined in proxy_pass.
  • $http_host: the Host header of the client request. Nginx exposes any client request header as $http_<name>, where the header name is lowercased and hyphens (-) are replaced with underscores (_).
  • $host: usually the same as $http_host, but lowercased (domain names are case-insensitive) and with the port stripped. If the request has no Host header, or it is empty, $host falls back to the server_name in the Nginx configuration.

Nginx can use proxy_set_header to overwrite any request header before forwarding. Besides Host, the following headers are commonly set (see the sketch after this list):

  • X-Forwarded-Proto: set to $scheme. Tells the proxied upstream service whether the original client request used http or https.
  • X-Real-IP: set to $remote_addr. Tells the proxied service the client's IP address, to help it make decisions or write logs.
  • X-Forwarded-For: set to $proxy_add_x_forwarded_for. Contains the IP address of every proxy the request has passed through.
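Putting these together, a typical reverse proxy block might set the headers like this (a sketch; the upstream address is hypothetical):

location / {
    proxy_pass http://127.0.0.1:3000;   # hypothetical upstream

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}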

7.6 Reverse proxy try tcpdump packet capture and parsing

I have been trying to understand the role of the Host header in the HTTP request. Normally, when an HTTP request is sent, the TCP connection has already specified the IP and port, so why do I need the Host header?

First, MDN Web Docs explains the Host header:

All HTTP/1.1 request messages must contain a Host header field. For HTTP/1.1 requests that lack a Host header or have more than one Host header, a 400 (Bad Request) status code may be received.

What exactly happens when the Nginx reverse proxy overrides the Host header with $proxy_host, as it does by default? Let's capture packets with tcpdump and find out.

For example, the reverse proxy http://redis.cn is configured as follows (Case 1) :

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 8088;
        server_name localhost;

        location / {
            proxy_pass http://redis.cn;
        }
    }
}

The most common reverse proxy setup, with no header overrides. Monitoring network adapters using tcpdump:

# First use ping or nslookup to find the IP address of redis.cn; here it is 121.42.46.75
# host 121.42.46.75 filters packets for that IP; without a filter there would be far too many packets
# -c 100 exits automatically after capturing 100 packets and writes the file
# Open the .cap file in Wireshark
sudo tcpdump -i eth0 host 121.42.46.75 -c 100 -n -vvv -w /opt/nginx-redis-1.cap

Visit http://fengmengzhao.hypc:8088/ and the proxied page renders normally:

tcpdump on the Nginx server also captures the corresponding packets:

Viewing the captured requests in Wireshark:

Now let Nginx override the Host header with proxy_set_header Host $http_host:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 8088;
        server_name localhost;

        location / {
            proxy_pass http://redis.cn;
            proxy_set_header Host $http_host;
        }
    }
}

Visit http://fengmengzhao.hypc:8088/ again; the proxied page now looks like this:

What page is this? If you go directly to the redis.cn IP address http://121.42.46.75, you will get the same page. Why is that?

Take a look at the bags caught:

According to packets captured in tcpdump, the response is normally received from the server. So why do different Host headers return different pages?

After proxy_set_header Host $http_host is set, the Host of the proxied request is the Host requested by the client (fengmengzhao.hypc:8088), whereas in the first case it was the host of the proxied upstream service (redis.cn). The machine at 121.42.46.75, which redis.cn resolves to, may serve more than one website on port 80.

Serving multiple domain names on the same host and port in this way is called virtual hosting. As described in the Directives and Contexts section, Nginx implements this with multiple server{} blocks that share a port but use different server_name values; Apache also supports name-based virtual hosts. In both cases, once the request reaches the server, the server reads the Host header and uses it to pick the matching virtual host.

So whether to override the Host header in an Nginx reverse proxy depends on whether the proxied upstream service does anything special with it. If not, the default $proxy_host is fine; if it does, set the header deliberately.

The "special need" here is virtual hosting, as in the example above, and it is also why the Host header is mandatory in HTTP/1.1.
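You can reproduce this behaviour directly with curl by overriding the Host header when talking to the upstream IP used earlier in this section (a sketch; the responses depend on how that host routes its virtual hosts):

# no recognised Host header: you get whatever default site that IP serves
curl -I http://121.42.46.75/

# explicitly ask for the redis.cn virtual host on the same IP
curl -I -H "Host: redis.cn" http://121.42.46.75/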

7.7 Reverse Proxy Handling relative Path Problems

Based on the understanding of reverse proxies explained above, let’s deal with the problems encountered in the actual work and increase the understanding of Nginx reverse proxies.

Suppose the proxied upstream service is a simple static page (http://127.0.0.1:80) that references the same image twice: once with an absolute path, /assets/generate.png, and once with a relative path, assets/generate.png. We configure the reverse proxy as follows:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 8088;
        server_name localhost;

        location /static/ {
            proxy_pass http://127.0.0.1/;
        }
    }
}

If you now visit http://fengmengzhao.hypc:8088/static, you will find that the absolutely referenced image (/assets/generate.png) fails to load. Looking at the browser's network panel, the client requests http://fengmengzhao.hypc:8088/assets/generate.png. By default that request is resolved against root (default /usr/share/nginx/html), where the resource does not exist.

What we actually want is for both the absolute and the relative reference to be requested as http://fengmengzhao.hypc:8088/static/assets/generate.png. As you can see, if you proxy under a sub-path and the upstream service uses absolute path references, those references break. Example:

By default, the Nginx reverse proxy does not modify the content of the response. A reference to a relative or absolute path in the target service is returned to the client by the reverse proxy as if it had directly accessed the target server response.

How to solve it? There are the following solutions:

1). If the proxied upstream service can be modified, change all absolute path references to relative ones: /assets/generate.png becomes ./assets/generate.png or assets/generate.png, and static files referenced from a second-level directory become ../xxx/assets/generate.png. In general, change absolute path references on the page into relative ones.

2). Add a separate location block to proxy the image paths that currently fail:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 8088;
        server_name localhost;

        location /static/ {
            proxy_pass http://127.0.0.1/;
        }

        location /assets/ {
            proxy_pass http://127.0.0.1/assets/;
        }
    }
}

Now the absolute reference is proxied to http://127.0.0.1/assets/generate.png, and http://fengmengzhao.hypc:8088/assets/generate.png loads the image normally.

3). Drop the sub-path approach and use a dedicated domain name, which avoids the problem entirely:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 8088;
        server_name static.fengmengzhao.hypc;

        location / {
            proxy_pass http://127.0.0.1/;
        }
    }
}

Visiting http://static.fengmengzhao.hypc:8088 now proxies http://127.0.0.1 successfully.

4). Let Nginx rewrite the content of the target server's response

As noted several times in this article, the Nginx reverse proxy does not modify the target server's response content by default. However, Nginx can modify it if the ngx_http_sub_module module is enabled.

Check whether your Nginx build includes ngx_http_sub_module by running nginx -V and looking for the module in the configure arguments.

With ngx_http_sub_module available, modify the configuration as follows:

events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 8088;
        server_name localhost;

        location /static/ {
            sub_filter 'src="/assets/' 'src="./assets/';
            sub_filter_once off;

            proxy_pass http://127.0.0.1/;
        }
    }
}

Any of the approaches above yields a correctly proxied response:

One thing to note: when the access path is http://fengmengzhao.hypc:8088/static (case 1), the HTML response references assets/generate.png, so the request for generate.png becomes http://fengmengzhao.hypc:8088/assets/generate.png. When the access path is http://fengmengzhao.hypc:8088/static/ (case 2), the same relative reference assets/generate.png turns into http://fengmengzhao.hypc:8088/static/assets/generate.png. The difference between case 2 and case 1 is whether the URI ends with a /. With a trailing /, the current access is treated as a directory, so relative references resolve from the path shown in the address bar. Without a trailing /, the current access is treated as a file, so relative references resolve from that file's parent directory, i.e. everything before the last /. In case 1 that parent is the root, so even the relative reference still resolves from the root directory.

8. Nginx acts as a load balancing server

Once you have learned about reverse proxies, it is easy to understand further load balancing based on reverse proxies.

Configuration example:

events { }

http {
    upstream backend_servers {
        server localhost:3001;
        server localhost:3002;
        server localhost:3003;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://backend_servers;
        }
    }
}

upstream{} groups multiple servers so that they can be referenced as a single upstream service; by default, requests are distributed among them round-robin.
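upstream{} also supports weights and other balancing strategies; a sketch (not part of the original example):

upstream backend_servers {
    # least_conn;                    # pick the server with the fewest active connections
    server localhost:3001 weight=2;  # receives roughly twice as many requests as the others
    server localhost:3002;
    server localhost:3003 backup;    # only used when the other servers are unavailable
}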

Test load balancing:

while sleep 0.5; do curl http://localhost; done
# response from server - 2.
# response from server - 3.
# response from server - 1.
# response from server - 2.
# response from server - 3.
# response from server - 1.
# response from server - 2.
# response from server - 3.
# response from server - 1.
# response from server - 2.

9. Optimize Nginx performance

This article describes three aspects to optimize Nginx performance: tuning Worker Processes and Worker Connections configurations based on host parameters, caching static files, and compression of response data.

9.1 How do I set the number of Worker Processes and Worker Connections

As mentioned at the beginning of this article, Nginx spawns Worker processes, each able to handle thousands of connections, so it can serve many requests concurrently. You can check the number of Worker processes with the status command:

sudo systemctl status nginx

# ● nginx.service - A high performance web server and a reverse proxy server
#      Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
#      Active: active (running) since Sun 2021-04-25 08:33:11 UTC; 5h 54min ago
#        Docs: man:nginx(8)
#     Process: 22610 ExecReload=/usr/sbin/nginx -g daemon on; master_process on; -s reload (code=exited, status=0/SUCCESS)
#    Main PID: 3904 (nginx)
#       Tasks: 3 (limit: 1136).
#      Memory: 3.7M
#      CGroup: /system.slice/nginx.service
#              ├─ 3904 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
#              ├─22611 nginx: worker process
#              └─22612 nginx: worker process

#You can also view the process through Ps
#It can be seen that the master process is the parent process of all Worker processes
ps -ef | grep nginx

Here you can see one master process and two Worker processes. The number of Worker processes is easy to configure in Nginx:

#In general, the number of Worker processes is set to the number of cores the host has
worker_processes 2;
#The number of Worker processes is automatically set according to the number of host CPU cores
#worker_processes auto;

events {

}

http {

    server {

        listen 80;
        server_name localhost;

        return 200 "worker processes and worker connections configuration!\n";
    }
}

Assuming the host has four cores: if worker_processes is set to 4, each Worker can in theory use 100% of a CPU core. If it is set to 8, each Worker can in theory use only 50% of a core, which means that when the CPU is fully loaded, a Worker spends as much time waiting as working. So more Worker processes is not automatically better: setting the value higher than the number of CPU cores only wastes time on operating-system process scheduling.

You can easily check the number of CPU cores in the host by using the nproc command:

nproc

# 4

The worker_processes auto setting determines the number of Worker processes automatically from the number of CPU cores. If the host only runs Nginx, this is the way to go; if other services are deployed on the same host, allocate resources more deliberately.

worker_connections sets the maximum number of connections a single Worker process can handle. It is related to the number of CPU cores and to the number of files a process is allowed to open (check this limit with ulimit -n).

ulimit -n

# 1024

worker_connections setting:

worker_processes auto;

events {
    worker_connections 1024;
}

http {

    server {

        listen 80;
        server_name localhost;

        return 200 "worker processes and worker connections configuration!\n";
    }
}

Notice that the events Context is used here for the first time in this article.

9.2 How can I Cache Static Files

No matter what service Nginx provides, there are static files (JS, CSS, etc.) that don’t change very often and can be cached to improve Nginx performance. Nginx cache configuration for static files is very convenient:

worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include /env/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;

        # Regex match, case-insensitive, for paths ending in .css, .js, .jpg or .png
        location ~* \.(css|js|jpg|png)$ {
            access_log off;

            add_header Cache-Control public;
            add_header Pragma public;
            add_header Vary Accept-Encoding;

            # 1M means one month
            expires 1M;
        }
    }
}

Just as proxy_set_header in the previous reverse proxy setup can add headers to proxy requests to the back end, add_header can add headers to response.

The Cache-Control header is set to public, telling the client that the response may be cached. Pragma is the legacy equivalent of Cache-Control.

The Vary header is set to Accept-Encoding, telling caches that the response varies with the client's Accept-Encoding header.

The expires directive sets how long the response may be cached and produces the corresponding Expires response header. Values such as 1M (one month), 10m (ten minutes), or 24h (24 hours) are all valid.

Cache-Control: public tells the client that the response may be cached. In addition, based on the expires setting, Nginx adds Cache-Control: max-age=2592000, which means the response does not need to be refreshed within max-age seconds; 2592000 seconds equals the 1M set by expires (one month, 30x24x3600=2592000).

After restarting Nginx, test the response to the request:

curl -i http://fengmengzhao.hypc:8088/assets/generate.png

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 01 Mar 2022 05:04:17 GMT
Content-Type: image/png
Content-Length: 144082
Last-Modified: Sun, 20 Feb 2022 08:35:21 GMT
Connection: keep-alive
ETag: "6211fd49-232d2"
Expires: Thu, 31 Mar 2022 05:04:17 GMT
Cache-Control: max-age=2592000
Cache-Control: public
Pragma: public
Vary: Accept-Encoding
Accept-Ranges: bytes

The Cache-Control headers now appear in the response, which means the configuration has taken effect. You can also use tcpdump to check whether a request actually reached the Nginx server or was answered from a cache.

Note that if you open http://fengmengzhao.hypc:8088/assets/generate.png in a browser, the first visit returns status code 200, meaning the server returned the file. The second visit returns 304, meaning the browser reused its local cache, as allowed by the caching headers in the first response. You can also open the console with F12 and check Network -> Disable cache so that the browser does not use its cache at all.
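You can reproduce the 304 behaviour from the command line by replaying the ETag from the earlier response as If-None-Match (a sketch; use the ETag value your own server returns):

curl -I -H 'If-None-Match: "6211fd49-232d2"' http://fengmengzhao.hypc:8088/assets/generate.png

# HTTP/1.1 304 Not Modified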

9.3 How Do I Compress A Response?

Compression configuration:

worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include /env/nginx/mime.types;

    # enable gzip; by default only HTML is compressed
    gzip on;
    # compression level
    gzip_comp_level 3;
    # additional types to compress
    gzip_types text/css text/javascript;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;

        location ~* \.(css|js|jpg)$ {
            access_log off;

            add_header Cache-Control public;
            add_header Pragma public;
            add_header Vary Accept-Encoding;

            expires 1M;
        }
    }
}

By default, Nginx only applies gzip compression to HTML files. To compress other types, list them with gzip_types, e.g. gzip_types text/css text/javascript;.

A higher gzip_comp_level is not necessarily better; 1 to 4 is usually a reasonable setting.
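For reference, a slightly fuller gzip block might look like this (a sketch; the extra directives are not part of the original example and the values are illustrative):

gzip on;
gzip_comp_level 3;
gzip_min_length 256;    # skip compressing very small responses
gzip_vary on;           # emit "Vary: Accept-Encoding" automatically
gzip_types text/css text/javascript application/json image/svg+xml;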

Once gzip is enabled on the server, the client must send the Accept-Encoding: gzip request header for the compressed transfer to happen.

Example of a client request without Accept-Encoding: gzip:

curl -I http://localhost/mini.min.css

# HTTP/1.1 200 OK
# Server: nginx/1.18.0 (Ubuntu)
# Date: Sun, 25 Apr 2021 16:30:32 GMT
# Content-Type: text/css
# Content-Length: 46887
# Last-Modified: Sun, 25 Apr 2021 08:35:33 GMT
# Connection: keep-alive
# ETag: "608529d5-b727"
# Expires: Tue, 25 May 2021 16:30:32 GMT
# Cache-Control: max-age=2592000
# Cache-Control: public
# Pragma: public
# Vary: Accept-Encoding
# Accept-Ranges: bytes

Example of a client request that sets Accept-Encoding: gzip:

curl -I -H "Accept-Encoding: gzip" http://localhost/mini.min.css

# HTTP/1.1 200 OK
# Server: nginx/1.18.0 (Ubuntu)
# Date: Sun, 25 Apr 2021 16:31:38 GMT
# Content-Type: text/css
# Last-Modified: Sun, 25 Apr 2021 08:35:33 GMT
# Connection: keep-alive
# ETag: W/"608529d5-b727"
# Expires: Tue, 25 May 2021 16:31:38 GMT
# Cache-Control: max-age=2592000
# Cache-Control: public
# Pragma: public
# Vary: Accept-Encoding
# Content-Encoding: gzip

Note that the response contains Vary: Accept-Encoding, which tells the client that the server's response varies depending on the Accept-Encoding header the client sends.

Compare the size of the content transmitted before and after compression:

cd ~
mkdir compression-test && cd compression-test

curl http://localhost/mini.min.css > uncompressed.css

curl -H "Accept-Encoding: gzip" http://localhost/mini.min.css > compressed.css

ls -lh

# -rw-rw-r-- 1 vagrant vagrant 9.1K Apr 25 16:35 compressed.css
# -rw-rw-r-- 1 vagrant vagrant  46K Apr 25 16:35 uncompressed.css

The uncompressed version size is 46K, while the compressed version size is 9.1K.

10. Understand the entire Nginx configuration file

Complete nginx configuration file:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
	worker_connections 768;
	# multi_accept on;
}

http {

	##
	# Basic Settings
	##

	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	keepalive_timeout 65;
	types_hash_max_size 2048;
	# server_tokens off;

	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	##
	# SSL Settings
	##

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;

	##
	# Logging Settings
	##

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	##
	# Gzip Settings
	##

	gzip on;

	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	##
		# Virtual Host Configs
	##

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

The configuration described in the previous section is not repeated.

user www-data; sets the user that the Worker processes run as. If the www-data user lacks read permission on a directory Nginx needs to serve, Nginx cannot serve it properly.

pid /run/nginx.pid; sets the file where the Nginx master process ID is stored.

include /etc/nginx/modules-enabled/*.conf; includes every .conf file in that directory; it is used to load Nginx dynamic modules (not covered in this article).

Under HTTP {}, there are basic optimization Settings as follows:

  • sendfile on;: serves static files via the kernel's sendfile() call, avoiding copying data through user-space buffers.
  • tcp_nopush on;: sends response headers together with the beginning of the data in one packet.
  • tcp_nodelay on;: disables Nagle’s algorithm so that small packets are sent immediately, speeding up static file delivery.

keepalive_timeout sets how long an idle keep-alive connection stays open; types_hash_max_size sets the size of the hash tables used for MIME types.

SSL configuration is not covered in this article.

Mail Context allows Nginx to be configured as a mail server. This article only discusses Nginx as a Web server, so it will not be explained.

Focus on the following configuration:

##
# Virtual Host Configs
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

This tells Nginx to load the matching configuration files from the /etc/nginx/conf.d/ and /etc/nginx/sites-enabled/ directories. Because of this, it is often assumed that these two directories are the best place to put Nginx configuration, but that is not the whole story.

There is another directory, /etc/nginx/sites-available/, which stores the configurations of Nginx virtual hosts (i.e. server{} blocks). /etc/nginx/sites-enabled/ stores symbolic links to files in /etc/nginx/sites-available/. For example:

ls -lh /etc/nginx/sites-enabled/

# lrwxrwxrwx 1 root root 34 Apr 25 08:33 default -> /etc/nginx/sites-available/default

Configurations in /etc/nginx/sites-available/ are enabled or disabled through these symbolic links. A link is removed with unlink (or rm) and created with ln -s:

# Remove the symbolic link (rm also works)
sudo unlink /etc/nginx/sites-enabled/default

# Create a symbolic link: the first argument is the file to link to,
# the second is the path of the symbolic link to create
sudo ln -s /etc/nginx/sites-available/nginx-handbook.conf /etc/nginx/sites-enabled/nginx-handbook

reference

  • www.freecodecamp.org/news/the-ng…
  • serverfault.com/questions/9…
  • www.cnblogs.com/sky-cheng/p…
  • stackoverflow.com/questions/1…
  • www.jscape.com/blog/bid/87…
  • tarunlalwani.com/post/nginx-…
  • www.digitalocean.com/community/t…
  • stackoverflow.com/questions/4…
  • blog.csdn.net/gui951753/a…

Afterword

Most of this article is based on www.freecodecamp.org/news/the-ng… ; section 7, Nginx as a reverse proxy server, is mainly the author's own understanding.

Update record

  • Re-read, polished, and corrected before publishing on the WeChat official account "Feng Brother Painting Halberd".
  • Re-read, polished, and corrected again before publishing on the Nuggets column.

Search WeChat for the official account "Feng Brother Painting Halberd" to follow the author and be the first to see new content.