Nginx characteristics

  • High concurrency and high performance;
  • The modular architecture makes it very scalable;
  • Asynchronous, non-blocking, event-driven model, similar to Node.js;
  • Compared with other servers, it can run for months or longer without needing a restart, making it highly reliable;
  • Hot deployment and smooth upgrade;
  • Fully open source, with a thriving ecosystem;

Nginx role

The most important usage scenarios for Nginx are:

  1. Static resource serving, provided through the local file system;
  2. Reverse proxy services, including caching, load balancing, etc.;
  3. API services, such as those built with OpenResty;

Node.js is no stranger to the front end. Nginx and Node.js share many of the same concepts, such as HTTP server, event-driven, asynchronous non-blocking, etc. Most of Nginx’s functions can also be implemented using Node.js. But Nginx and Node.js are not in conflict, and both have their own areas of expertise.

Nginx excels at handling underlying server-side resources (static resource serving and forwarding, reverse proxying, load balancing, etc.), while Node.js is better at handling upper-level, concrete business logic; the two can be combined perfectly.
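
As a minimal sketch of that combination (the domain, the paths, and the Node.js app assumed to listen on port 3000 are all hypothetical), Nginx serves static assets itself and proxies everything else to Node.js:

server {
    listen 80;
    server_name www.example.com;        # hypothetical domain

    # static assets: served by Nginx directly from disk
    location /static/ {
        root /usr/share/nginx/html;
    }

    # business logic: proxied to a Node.js app assumed to listen on port 3000
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}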


Nginx installation

This article installs Nginx on the Linux CentOS 7.x operating system. For installation on other operating systems, you can search online.

Install Nginx with yum:

yum install nginx -y

Run the rpm -ql nginx command to check the Nginx installation information.

# Nginx configuration files
/etc/nginx/nginx.conf          # main Nginx configuration file
/etc/nginx/nginx.conf.default

# executable files
/usr/bin/nginx-upgrade
/usr/sbin/nginx

# Nginx library files
/usr/lib/systemd/system/nginx.service   # used to configure the system daemon
/usr/lib64/nginx/modules                # Nginx modules directory

# documentation
/usr/share/doc/nginx-1.16.1
/usr/share/doc/nginx-1.16.1/CHANGES
/usr/share/doc/nginx-1.16.1/README
/usr/share/doc/nginx-1.16.1/README.dynamic
/usr/share/doc/nginx-1.16.1/UPGRADE-NOTES-1.6-to-1.10

# static resource directory
/usr/share/nginx/html/404.html
/usr/share/nginx/html/50x.html
/usr/share/nginx/html/index.html

# Nginx log files
/var/log/nginx

There are two main folders to focus on:

  1. /etc/nginx/conf.d/ is where sub-configuration items are stored; the master configuration file /etc/nginx/nginx.conf imports all sub-configuration items in this folder by default;
  2. /usr/share/nginx/html/ is where static files are stored, though you can put them elsewhere if you like;

Nginx common commands

systemctl system commands:

# start Nginx at boot
systemctl enable nginx
# do not start Nginx at boot
systemctl disable nginx
# start Nginx; once started, the host IP can be accessed directly
systemctl start nginx
# stop Nginx
systemctl stop nginx
# restart Nginx
systemctl restart nginx
# reload Nginx
systemctl reload nginx
# show the Nginx running state
systemctl status nginx
# check the Nginx processes
ps -ef | grep nginx
# kill an Nginx process found with the command above; -9 forcibly ends the process
kill -9 pid

Nginx application commands:

nginx -s reload   # signal the master process to reload the configuration file
nginx -s reopen   # reopen the log files
nginx -s stop     # stop Nginx immediately
nginx -s quit     # stop Nginx gracefully, after current requests finish
nginx -t          # check whether the configuration has problems

Nginx core configuration

Configuration file structure

A typical configuration example of Nginx:

# main segment configuration
user nginx;                     # user running the worker processes
worker_processes auto;          # number of worker processes; auto matches the number of CPU cores
error_log /var/log/nginx/error.log warn;   # error log location and level
pid /var/run/nginx.pid;         # pid file of the Nginx master process

# events segment configuration
events {
    use epoll;                  # use the epoll I/O model (if you don't know which polling
                                # method to use, Nginx picks the best one for your OS)
    worker_connections 1024;    # maximum number of concurrent connections per worker process
}

# http segment configuration: the most frequently configured part
http {
    # log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;   # access log and its format

    sendfile on;                # transfer files efficiently
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;       # how long to keep a connection alive, in seconds

    include /etc/nginx/mime.types;          # supported MIME types
    default_type application/octet-stream;  # default MIME type

    include /etc/nginx/conf.d/*.conf;       # import sub-configuration files

    # server segment: virtual host configuration
    server {
        listen 80;              # listening port
        server_name localhost;  # domain name of the virtual host

        # location segment: request routing
        location / {
            root /usr/share/nginx/html;     # static resource directory
            index index.html index.htm;     # default index files
            # deny 172.168.22.11;           # deny access from this IP
            # allow 172.168.33.44;          # allow access from this IP
        }

        error_page 500 502 503 504 /50x.html;   # error pages
        # error_page 400 404 error.html;
    }
}
  • main: global configuration, effective for the whole file;
  • events: configuration affecting network connections between the Nginx server and users;
  • http: configures proxying, caching, log definitions, most other functions, and third-party modules;
  • server: configures virtual host parameters; one http block can contain multiple server blocks;
  • location: configures the handling of matching URIs;
  • upstream: configures the addresses of back-end servers, an indispensable part of load balancing;

Use a picture to clearly show its hierarchy:

Core parameters in the main segment of the configuration file

user

Specifies the owner and owner group of the worker child processes running Nginx; the group may be omitted.

Syntax: user USERNAME [GROUP]
Example:
user nginx lion;  # user is nginx, group is lion

pid

Specifies the path to the PID file that runs the Nginx master process.

pid /opt/nginx/logs/nginx.pid;  # the pid of the master process is stored in the nginx.pid file

worker_rlimit_nofile

Specifies the maximum number of file handles that the worker child process can open.

worker_rlimit_nofile 20480;  # can be understood as the maximum number of connections per worker child process

worker_rlimit_core

Specifies the core file written after a worker child process terminates abnormally, used for recording and analyzing problems.

worker_rlimit_core 50M;            # maximum size of a core file
working_directory /opt/nginx/tmp;  # directory where core files are stored

worker_processes

Specifies the number of worker child processes started by Nginx.

worker_processes 4;     # start 4 worker child processes
worker_processes auto;  # match the number of physical cores of the current CPU

worker_cpu_affinity

Binds each worker child process to a CPU physical core.

worker_cpu_affinity 0001 0010 0100 1000;  # 4 physical cores, 4 worker child processes

The advantage of binding each worker child process to a specific CPU physical core is that it avoids the cache invalidation and performance degradation caused by the same worker process switching between CPU cores. It does not truly prevent process switching, though.

worker_priority

Specifies the nice value of worker child processes to adjust the priority at which Nginx runs; it is usually set to a negative value so that Nginx is scheduled earlier.

worker_priority -10;  # 120 - 10 = 110; 110 is the final priority

[Remarks] On Linux, a process's default priority value is 120, and smaller values mean higher priority. The nice value ranges from -20 to +19, and an application's final priority equals 120 plus its nice value.
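
To see the resulting nice value, one can list it with ps (a hedged check; the NI column shows each process's nice value, and the output format depends on your system):

ps -eo pid,ni,comm | grep nginx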

worker_shutdown_timeout

Specifies the timeout for the worker child process to gracefully exit.

worker_shutdown_timeout 5s;

timer_resolution

The precision of the timer used inside worker child processes. The larger the interval, the fewer system calls, which benefits performance; conversely, more system calls mean worse performance.

timer_resolution 100ms;

On Linux, obtaining the timer requires a request to the operating system kernel, and every request inevitably carries overhead; therefore, the larger the interval, the smaller the overhead.

daemon

Specifies whether Nginx runs in the foreground or the background: foreground for debugging, background for production.

daemon off;  # the default value is on, i.e. run in the background

Core parameters in the events segment of the configuration file

use

Specifies which event-driven model Nginx uses.

use method;
# method options: select, poll, kqueue, epoll, /dev/poll, eventport

worker_connections

The maximum number of concurrent connections that the worker child process can handle.

worker_connections 1024;  # maximum number of connections per worker child process

accept_mutex

Whether to enable the load balancing mutex.

accept_mutex on;  # off by default

The server_name directive

Specify the domain name of the virtual host.

server_name name1 name2 name3;

Four ways to write a domain name match:

  • Exact match: server_name www.nginx.com;
  • Left-side wildcard match: server_name *.nginx.com;
  • Right-side wildcard match: server_name www.nginx.*;
  • Regular-expression match: server_name ~^www\.nginx\.*$;

Matching priority: Exact matching > Left Wildcard matching > Right Wildcard matching > Regular expression matching

1. Configure local DNS resolution: vim /etc/hosts (on macOS).

# add the following content:
121.42.11.34 www.nginx-test.com
121.42.11.34 mail.nginx-test.com
121.42.11.34 www.nginx-test.org
121.42.11.34 doc.nginx-test.com
121.42.11.34 www.nginx-test.cn
121.42.11.34 fe.nginx-test.club

[Note] The virtual domain name is used for the test, so the local DNS resolution needs to be configured. If the domain name purchased on Ali Cloud is used, the domain name resolution needs to be configured on Ali Cloud.

2. Configure the virtual hosts in /etc/nginx/nginx.conf:

# left-side wildcard match
server {
    listen 80;
    server_name *.nginx-test.com;
    root /usr/share/nginx/html/nginx-test/left-match/;
    location / {
        index index.html;
    }
}

# regular-expression match
server {
    listen 80;
    server_name ~^.*\.nginx-test\..*$;
    root /usr/share/nginx/html/nginx-test/reg-match/;
    location / {
        index index.html;
    }
}

# right-side wildcard match
server {
    listen 80;
    server_name www.nginx-test.*;
    root /usr/share/nginx/html/nginx-test/right-match/;
    location / {
        index index.html;
    }
}

# exact match
server {
    listen 80;
    server_name www.nginx-test.com;
    root /usr/share/nginx/html/nginx-test/all-match/;
    location / {
        index index.html;
    }
}

3. Access analysis

  • When accessing www.nginx-test.com, all four can match, so the exact match with the highest priority is chosen;
  • When accessing mail.nginx-test.com, the left-side wildcard match applies;
  • When accessing www.nginx-test.org, the right-side wildcard match applies;
  • When accessing doc.nginx-test.com, the left-side wildcard match applies;
  • When accessing www.nginx-test.cn, the right-side wildcard match applies;
  • When accessing fe.nginx-test.club, the regular-expression match applies;

root

Specifies the static resource directory location; it can be written in the http, server, and location contexts.

Syntax: root path;
Example:
location /image {
    root /opt/nginx/static;
}
When a user visits www.test.com/image/1.png, the path actually looked up on the server is /opt/nginx/static/image/1.png.

[Note] root concatenates the defined path with the URI; alias uses only the defined path.

alias

It also specifies the static resource directory location, which can only be written in location.

location /image {
    alias /opt/nginx/static/image/;
}
When a user visits www.test.com/image/1.png, the path actually looked up on the server is /opt/nginx/static/image/1.png.

[Note] When using alias, the path must end with a /, and it can only be used inside a location block.

location

Configure the path.

location [ = | ~ | ~* | ^~ ] uri {
 ...
}

Matching rules:

  • =: exact match;
  • ~: regular-expression match, case sensitive;
  • ~*: regular-expression match, case insensitive;
  • ^~: stop searching once matched;

Matching priority: = > ^~ > ~ > ~* > no modifier (plain prefix). Example:

server {
    listen 80;
    server_name www.nginx-test.com;

    # only visiting www.nginx-test.com/match_all/ matches
    # /usr/share/nginx/html/match_all/index.html
    location = /match_all/ {
        root /usr/share/nginx/html;
        index index.html;
    }

    # when visiting paths such as www.nginx-test.com/1.jpg, the resource
    # is looked up as /usr/share/nginx/images/1.jpg
    location ~ \.(jpeg|jpg|png|svg)$ {
        root /usr/share/nginx/images;
    }

    # visiting www.nginx-test.com/bbs/ matches
    # /usr/share/nginx/html/bbs/index.html
    location ^~ /bbs/ {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}

Trailing slash in location

location /test {
 ...
}

location /test/ {
 ...
}
  • Without /: when accessing www.nginx-test.com/test, Nginx first checks whether a test directory exists; if it does, it looks for index.html under that directory; if the test directory does not exist, Nginx then looks for a test file.
  • With /: when accessing www.nginx-test.com/test, Nginx likewise checks whether a test directory exists; if it does, it looks for index.html under that directory; if it does not, Nginx will not go on to look for a test file.

return

Stop processing the request and return the response code directly or redirect to another URL; After the return directive is executed, subsequent directives in Location are not executed.

Syntax:
return code [text];
return code URL;
return URL;
Context: server, location, if
Examples:
location / {
    return 404;                    # return the status code directly
}
location / {
    return 404 "pages not found";  # return the status code and a text message
}
location / {
    return 302 /bbs;               # return the status code and redirect
}
location / {
    return https://www.baidu.com;  # return a redirect address
}

rewrite

Rewrites the URL based on the specified regular expression matching rules.

Syntax: rewrite regex replacement [flag];
Context: server, location, if
Example:
rewrite /images/(.*\.jpg)$ /pic/$1;  # $1 is a back-reference to the capture group (.*\.jpg)

Meanings of the optional flag values:

  • last: the rewritten URL launches a new request, re-entering the server block and retrying the location matching;
  • break: the rewritten URL is used directly, without matching any further location statements;
  • redirect: returns a 302 temporary redirect;
  • permanent: returns a 301 permanent redirect;
server {
    listen 80;
    server_name fe.lion.club;  # configure this domain in the local hosts file
    root html;
    location /search {
        rewrite ^/(.*) https://www.baidu.com redirect;
    }
    location /images {
        rewrite /images/(.*) /pics/$1;
    }
    location /pics {
        rewrite /pics/(.*) /photos/$1;
    }
    location /photos {
    }
}

According to this configuration, we analyze:

  • When accessing fe.lion.club/search, we are automatically redirected to https://www.baidu.com;
  • When accessing fe.lion.club/images/1.jpg, the URL is first rewritten to fe.lion.club/pics/1.jpg; the pics location is matched and the URL is rewritten again to fe.lion.club/photos/1.jpg; after the /photos location is matched, the static resource 1.jpg is looked up in the html/photos directory.

The if directive

Syntax: if (condition) { ... }
Context: server, location
Example:
if ($http_user_agent ~ Chrome) {
    rewrite /(.*) /browser/$1 break;
}

Condition judgment options:

  • $variable: a variable alone; an empty string or any string starting with 0 is treated as false;
  • = or !=: equal or not equal;
  • ~: regular-expression match;
  • !~: negative regular-expression match;
  • ~*: regular-expression match, case insensitive;
  • -f or !-f: test whether a file exists or not;
  • -d or !-d: test whether a directory exists or not;
  • -e or !-e: test whether a file, directory, symbolic link, etc. exists or not;
  • -x or !-x: test whether a file is executable or not;

Example:

server {
    listen 8080;
    server_name localhost;
    root html;
    location / {
        if ( $uri = "/images/" ) {
            rewrite (.*) /pics/ break;
        }
    }
}

When accessing localhost:8080/images/, it enters the if judgment and executes the rewrite command.

autoindex

When a user request ends with /, the directory structure is listed, which can be used to quickly build a static resource download site.

Autoindex.conf configuration information:

server {
    listen 80;
    server_name fe.lion-test.club;
    location /download/ {
        root /opt/source;
        autoindex on;             # enable autoindex; options: on | off
        autoindex_exact_size on;  # show exact file sizes; options: on | off
        autoindex_format html;    # render the listing as HTML; options: html | json | xml
        autoindex_localtime off;  # off (the default) shows file times in GMT;
                                  # on shows the file's server-local time
    }
}

When you visit fe.lion-test.club/download/, the files in the /opt/source/download/ directory are listed.


variable

Nginx provides users with a large number of variables; ultimately, the data is generated during a complete request process, and Nginx exposes it to the user in the form of variables.

Here are some common variables used in projects:

var.conf:

server {
    listen 8081;
    server_name var.lion-test.club;
    root /usr/share/nginx/html;
    location / {
        return 200 "
remote_addr: $remote_addr
remote_port: $remote_port
server_addr: $server_addr
server_port: $server_port
server_protocol: $server_protocol
binary_remote_addr: $binary_remote_addr
connection: $connection
uri: $uri
request_uri: $request_uri
scheme: $scheme
request_method: $request_method
request_length: $request_length
args: $args
arg_pid: $arg_pid
is_args: $is_args
query_string: $query_string
host: $host
http_user_agent: $http_user_agent
http_referer: $http_referer
http_via: $http_via
request_time: $request_time
https: $https
request_filename: $request_filename
document_root: $document_root
";
    }
}

When we visit http://var.lion-test.club:8081/test?pid=121414&cid=sadasd, Chrome will by default download a file from Nginx whose contents are:

remote_addr: 27.16.220.84
remote_port: 56838
server_addr: 172.17.0.2
server_port: 8081
server_protocol: HTTP/1.1
binary_remote_addr:
connection: 126
uri: /test/
request_uri: /test/?pid=121414&cid=sadasd
scheme: http
request_method: GET
request_length: 518
args: pid=121414&cid=sadasd
arg_pid: 121414
is_args: ?
query_string: pid=121414&cid=sadasd
host: var.lion-test.club
http_user_agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36
http_referer:
http_via:
request_time: 0.000
https:
request_filename: /usr/share/nginx/html/test/
document_root: /usr/share/nginx/html

There are many more Nginx configuration options; the above only lists some common ones. In real projects, learn to consult the documentation.

Nginx applies core concepts

A proxy is a layer of servers inserted between the client and the server: it receives the client's requests and forwards them to the server, then forwards the server's responses back to the client.

Both forward and reverse proxies implement the above functions.


Forward proxy

A forward proxy is a server positioned between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and specifies the target (the origin server); the proxy then forwards the request to the origin server and returns the obtained content to the client.

A forward proxy serves us, that is, the client: through it, the client can access server resources it otherwise could not reach.

A forward proxy is transparent to us but opaque to the server; the server does not know whether it is receiving a visit from a proxy or from a real client.

The reverse proxy

  • In reverse proxy mode, a proxy server receives connection requests from the Internet, forwards them to servers on the internal network, and returns the results to the Internet client requesting the connection. In this case, the proxy server externally appears as a reverse proxy server.

A reverse proxy serves the server. The reverse proxy helps the server receive requests from clients, forward requests, and balance loads.

A reverse proxy is transparent to the server but opaque to us: we do not know we are accessing a proxy server, while the server knows the reverse proxy is serving it.

Advantages of reverse proxy:

  • Hide the real server;
  • Load balancing facilitates horizontal expansion of back-end dynamic services;
  • Dynamic and static separation, improve the robustness of the system;

So what is "dynamic and static separation"? And what is load balancing?

Dynamic and static separation

Dynamic and static separation is an architectural design method in which static pages and dynamic pages (or static content interfaces and dynamic content interfaces) are served by different systems in the web server architecture, so as to improve the accessibility and maintainability of the whole service.

In general, dynamic resources need to be separated from static resources. Because of Nginx's high concurrency and static resource caching features, static resources are often deployed on Nginx. If a request is for a static resource, it is fetched directly from the static resource directory; if it is for a dynamic resource, the reverse proxy principle is used to forward the request to the corresponding background application for processing, thus realizing dynamic and static separation.

Dynamic and static separation greatly improves the access speed of static resources; even if the dynamic service is unavailable, static resources are unaffected.
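
A minimal sketch of dynamic and static separation (the domain, suffix list, directory, and back-end port 8081 are assumptions for illustration):

server {
    listen 80;
    server_name www.test.com;   # hypothetical domain

    # static resources: matched by file suffix and served by Nginx directly
    location ~* \.(html|css|js|png|jpg|gif|ico)$ {
        root /usr/share/nginx/html;
        expires 7d;             # let browsers cache static files
    }

    # dynamic requests: forwarded to the back-end application
    location /api/ {
        proxy_pass http://127.0.0.1:8081;
    }
}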

Load balancing

In general, the client sends multiple requests to the server. The server processes the requests, some of which may operate on some resources, such as databases and static resources. After the server processes the requests, the server returns the results to the client.

For early systems, whose functional requirements were not complex and which handled relatively few concurrent requests, this pattern was efficient and cheap. But as the amount of information keeps growing, visit and data volumes rise rapidly, and system complexity increases, this approach can no longer meet the requirements: when concurrency is extremely high, the server is prone to collapse.

This is obviously caused by server performance bottlenecks, and apart from stacking more machines, the most important practice is load balancing.

When requests grow explosively, no single machine, however powerful, can meet the demand. This is where the concept of clustering comes in: a problem one server cannot solve can be handled by multiple servers, with requests distributed across them so the load is spread; this is load balancing, and its core idea is "sharing the pressure". When Nginx implements load balancing, it generally means forwarding requests to a cluster of servers.

For a concrete example: during the evening rush hour at the subway, staff with loudspeakers often shout at the entrance, "Please go to entrance B, entrance B has fewer people and empty cars...". The role of this staff member is load balancing.


Nginx implements load balancing strategies:

  • Round-robin (polling) policy: the default policy, which allocates client requests to the servers in turn. This strategy works fine, but if one of the servers becomes overloaded and delays occur, all users assigned to that server are affected.
  • Least-connections policy: prioritizes requests to less-stressed servers, balancing the length of each queue and avoiding adding more requests to an already stressed server.
  • Fastest-response-time policy: prioritizes allocation to the server with the shortest response time.
  • Client IP binding policy: requests from the same IP are always assigned to the same server, effectively solving the session-sharing problem of dynamic websites.

Nginx hands-on configuration

Configuring a reverse proxy and load balancing mainly involves two directives: upstream and proxy_pass.

upstream

Used to define the upstream servers (that is, the back-end application servers).

Syntax: upstream name { ... }
Context: http
Example:
upstream back_end_server {
    server 192.168.100.33:8081;
}

Directives used inside upstream:

  • server: defines the upstream server address;
  • zone: defines shared memory, used across worker child processes;
  • keepalive: enables long (keep-alive) connections to the upstream service;
  • keepalive_requests: the maximum number of HTTP requests over one long connection;
  • keepalive_timeout: the idle timeout of a long connection;
  • hash: hashing load balancing algorithm;
  • ip_hash: load balancing algorithm that hashes on the client IP;
  • least_conn: least-connections load balancing algorithm;
  • least_time: shortest-response-time load balancing algorithm;
  • random: random load balancing algorithm;

server

Define the upstream server address.

Syntax: server address [parameters];
Context: upstream

Optional values of parameters:

  • weight=number: weight value, default 1;
  • max_conns=number: the maximum number of concurrent connections to the upstream server;
  • fail_timeout=time: the time window for judging the server unavailable;
  • max_fails=number: the number of failed checks before the server is judged unavailable;
  • backup: backup server, enabled only when all other servers are unavailable;
  • down: marks a server as unavailable long-term, for offline maintenance;

keepalive

Limits the maximum number of idle long connections per worker child process to upstream servers.

Syntax: keepalive connections;
Context: upstream
Example: keepalive 16;

keepalive_requests

Maximum number of HTTP requests that can be processed by a single long connection.

Syntax: keepalive_requests number;
Default: keepalive_requests 100;
Context: upstream

keepalive_timeout

Maximum hold time of idle long connections.

Syntax: keepalive_timeout time;
Default: keepalive_timeout 60s;
Context: upstream

Configuration example

upstream back_end {
    server 127.0.0.1:8081 weight=3 max_conns=1000 fail_timeout=10s max_fails=2;
    keepalive 32;
    keepalive_requests 50;
    keepalive_timeout 30s;
}

proxy_pass

Used to configure a proxy server.

Syntax: proxy_pass URL;
Context: location, if, limit_except
Examples:
proxy_pass http://127.0.0.1:8081
proxy_pass http://127.0.0.1:8081/proxy

Rules for the URL parameter

  1. The URL must begin with http or https;
  2. The URL may contain variables;
  3. Whether the URL includes a URI directly affects the URL of the request sent upstream;

Let's look at two common usages of the URL:

  1. proxy_pass http://192.168.100.33:8081
  2. proxy_pass http://192.168.100.33:8081/

The only difference between these two usages is the trailing /, but when configuring a proxy the behavior differs considerably:

  • Without /: Nginx does not modify the user's URL, but passes it transparently to the upstream application server;
  • With /: Nginx modifies the user's URL by stripping the part matched by location from the user's URL;

Use without / :

location /bbs/ {
    proxy_pass http://127.0.0.1:8080;
}

Analysis:

  1. URL requested by the user: /bbs/abc/test.html
  2. URL arriving at Nginx: /bbs/abc/test.html
  3. URL arriving at the upstream application server: /bbs/abc/test.html

Use with / :

location /bbs/ {
    proxy_pass http://127.0.0.1:8080/;
}

Analysis:

  1. URL requested by the user: /bbs/abc/test.html
  2. URL arriving at Nginx: /bbs/abc/test.html
  3. URL arriving at the upstream application server: /abc/test.html

Here /bbs is not concatenated, consistent with the difference between root and alias.

Configuring a Reverse Proxy

In order to make the demonstration more realistic, the author prepared two cloud servers, whose public IP addresses are respectively 121.42.11.34 and 121.5.180.193.

We take server 121.42.11.34 as the upstream server and do the following configuration:

# /etc/nginx/conf.d/proxy.conf
server {
    listen 8080;
    server_name localhost;
    location /proxy/ {
        root /usr/share/nginx/html/proxy;
        index index.html;
    }
}

# /usr/share/nginx/html/proxy/index.html
<h1>121.42.11.34 proxy html</h1>

After the configuration is complete, reload the Nginx server: nginx -s reload.

Set server 121.5.180.193 as a proxy server and do the following configuration:

# /etc/nginx/conf.d/proxy.conf
upstream back_end {
    server 121.42.11.34:8080 weight=2 max_conns=1000 fail_timeout=10s max_fails=3;
    keepalive 32;
    keepalive_requests 80;
    keepalive_timeout 20s;
}

server {
    listen 80;
    server_name proxy.lion.club;
    location /proxy {
        proxy_pass http://back_end/proxy;
    }
}

To access the proxy.lion.club domain name, you need to configure the local hosts file. Run vim /etc/hosts to open the configuration file and add the following entry:

121.5.180.193 proxy.lion.club


Analysis:

  1. When proxy.lion.club/proxy is accessed, the upstream configuration routes it to 121.42.11.34:8080;
  2. So the access address becomes http://121.42.11.34:8080/proxy;
  3. The 121.42.11.34 server is connected, and the server block listening on port 8080 is found;
  4. Through that server block, the resource /usr/share/nginx/html/proxy/index.html is located and finally displayed.
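
This chain can be verified from any machine with the hosts entry above (a hedged check; the response body depends on your index.html):

curl http://proxy.lion.club/proxy/
# expected response body: the contents of /usr/share/nginx/html/proxy/index.html
# on the upstream server, i.e. <h1>121.42.11.34 proxy html</h1>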

Configuring Load Balancing

Configuring load balancing involves using the upstream directive.

We take server 121.42.11.34 as the upstream server and do the following configuration (/etc/nginx/conf.d/balance.conf):

server {
    listen 8020;
    location / {
        return 200 'return 8020 \n';
    }
}

server {
    listen 8030;
    location / {
        return 200 'return 8030 \n';
    }
}

server {
    listen 8040;
    location / {
        return 200 'return 8040 \n';
    }
}

After the configuration is complete:

  1. nginx -t to check whether the configuration is correct;
  2. nginx -s reload to reload the Nginx server;
  3. run the ss -nlt command to check whether the ports are occupied, i.e., whether the Nginx services started correctly.

Take server 121.5.180.193 as the proxy server and do the following configuration (/etc/nginx/conf.d/balance.conf):

upstream demo_server {
    server 121.42.11.34:8020;
    server 121.42.11.34:8030;
    server 121.42.11.34:8040;
}

server {
    listen 80;
    server_name balance.lion.club;
    location /balance/ {
        proxy_pass http://demo_server;
    }
}

Restart the Nginx server after the configuration is complete. Configure the mapping between IP addresses and domain names on the clients to be accessed.

# /etc/hosts
121.5.180.193 balance.lion.club

Execute the curl command on the client machine several times: curl http://balance.lion.club/balance/


Each run returns a different "return 80x0" response, showing that the load-balancing configuration has taken effect: requests are distributed to the upstream servers via the simple round-robin (polling) strategy by default.
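
The default round-robin can also be skewed with the weight parameter introduced earlier (a sketch based on the same demo servers):

upstream demo_server {
    server 121.42.11.34:8020 weight=3;  # receives roughly 3 of every 5 requests
    server 121.42.11.34:8030;           # weight defaults to 1
    server 121.42.11.34:8040;
}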

Next, let’s look at some of Nginx’s other distribution strategies.

The hash algorithm

You can specify keywords as hash keys and map them to specific upstream servers based on the Hash algorithm. Keywords can contain variables and strings.

upstream demo_server {
    hash $request_uri;
    server 121.42.11.34:8020;
    server 121.42.11.34:8030;
    server 121.42.11.34:8040;
}

server {
    listen 80;
    server_name balance.lion.club;
    location /balance/ {
        proxy_pass http://demo_server;
    }
}

hash $request_uri specifies that the $request_uri variable is used as the hash key; as long as the accessed URI stays the same, requests are always distributed to the same server.

ip_hash

Hashes on the client's request IP: as long as the IP address stays the same, requests are always assigned to the same host. This effectively solves session persistence problems on back-end servers.

upstream demo_server {
    ip_hash;
    server 121.42.11.34:8020;
    server 121.42.11.34:8030;
    server 121.42.11.34:8040;
}

server {
    listen 80;
    server_name balance.lion.club;
    location /balance/ {
        proxy_pass http://demo_server;
    }
}

Minimum number of connections algorithm

Each worker child process obtains back-end server information by reading shared-memory data, then selects the server with the fewest current connections to receive the request.

Syntax: least_conn;
Context: upstream

Example:

upstream demo_server {
    zone test 10M;  # zone sets the name and size of the shared memory space
    least_conn;
    server 121.42.11.34:8020;
    server 121.42.11.34:8030;
    server 121.42.11.34:8040;
}

server {
    listen 80;
    server_name balance.lion.club;
    location /balance/ {
        proxy_pass http://demo_server;
    }
}

As you can see, configuring load balancing is not complicated at all.

Configure the cache

Caching can be a very effective performance enhancer, so the client (browser), the proxy server (Nginx), and even the upstream server all participate in caching to some degree. Caching matters at every step, so let's learn how to set caching policies in Nginx.

proxy_cache

Storing resources that have been accessed before and may be accessed again makes them available to the user directly from the proxy server, reducing the strain on the upstream server and speeding up overall access.

Syntax: proxy_cache zone | off;
Default: proxy_cache off;
Context: http, server, location

proxy_cache_path

Sets the path for storing cache files.

Syntax: proxy_cache_path path [level=levels] ... (optional parameters omitted; see the documentation for details)
Default: proxy_cache_path off
Context: http

Parameter Meanings:

  • path: cache file storage path;
  • level: directory levels of path;
  • keys_zone: sets the shared memory;
  • inactive: the cache is cleared if not accessed within the specified time; the default is 10 minutes.

proxy_cache_key

Set the key of the cache file.

Syntax: proxy_cache_key string;
Default: proxy_cache_key $scheme$proxy_host$request_uri;
Context: http, server, location

proxy_cache_valid

Configure what status codes can be cached and for how long.

Syntax: proxy_cache_valid [code ...] time;
Context: http, server, location
Example:
proxy_cache_valid 200 304 2m;  # responses with status 200 and 304 are cached for 2 minutes

proxy_no_cache

Defines the conditions for not saving to the cache: if at least one value of the string parameters is non-empty and not equal to "0", the response will not be saved to the cache.

Syntax: proxy_no_cache string;
Context: http, server, location
Example: proxy_no_cache $http_pragma $http_authorization;

proxy_cache_bypass

Defines conditions under which the response will not be retrieved from the cache.

Syntax: proxy_cache_bypass string;
Context: http, server, location
Example: proxy_cache_bypass $http_pragma $http_authorization;

The $upstream_cache_status variable

It stores information about whether the cache hit, which is set in the response header, and is useful for debugging.

MISS: cache miss; the resource is fetched from the upstream server
HIT: cache hit
EXPIRED: the cached entry has expired; the resource is fetched from the upstream server
STALE: a stale cached entry was served
REVALIDATED: Nginx verified that the stale cached entry is still valid
UPDATING: the content is stale, but an update is in progress
BYPASS: the response is fetched from the origin server, bypassing the cache

Configuration example

We take server 121.42.11.34 as the upstream server and do the following configuration (/etc/nginx/conf.d/cache.conf):

server {
    listen 1010;
    root /usr/share/nginx/html/1010;
    location / {
        index index.html;
    }
}

server {
    listen 1020;
    root /usr/share/nginx/html/1020;
    location / {
        index index.html;
    }
}

Take server 121.5.180.193 as the proxy server and do the following configuration (/etc/nginx/conf.d/cache.conf):

proxy_cache_path /etc/nginx/cache_temp levels=2:2 keys_zone=cache_zone:30m max_size=2g inactive=60m use_temp_path=off;

upstream cache_server {
    server 121.42.11.34:1010;
    server 121.42.11.34:1020;
}

server {
    listen 80;
    server_name cache.lion.club;
    location / {
        proxy_cache cache_zone;        # shared memory zone used for caching
        proxy_cache_valid 200 5m;      # cache responses with status 200 for 5 minutes
        proxy_cache_key $request_uri;  # key of the cache file
        add_header Nginx-Cache-Status $upstream_cache_status;  # return the cache status to the client
        proxy_pass http://cache_server;  # proxy forwarding
    }
}

This is how caching is configured; the corresponding cache files can be found in the /etc/nginx/cache_temp directory.
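
Since the configuration above adds the Nginx-Cache-Status header, cache behavior can be checked with curl (a hedged example; the timings assume the 5-minute proxy_cache_valid above):

curl -I http://cache.lion.club/
# first request:              Nginx-Cache-Status: MISS
# repeated within 5 minutes:  Nginx-Cache-Status: HIT
# repeated after 5 minutes:   Nginx-Cache-Status: EXPIRED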

For some pages or data with very high real-time requirements, you should not set the cache. Here is how to configure the uncached content.

...
server {
    listen 80;
    server_name cache.lion.club;
    # when the URI suffix is .txt or .text, set the variable value to "no cache"
    if ($request_uri ~ \.(txt|text)$) {
        set $cache_name "no cache";
    }
    location / {
        proxy_no_cache $cache_name;    # skip caching when the variable is non-empty
        proxy_cache cache_zone;        # shared memory zone used for caching
        proxy_cache_valid 200 5m;      # cache responses with status 200 for 5 minutes
        proxy_cache_key $request_uri;  # key of the cache file
        add_header Nginx-Cache-Status $upstream_cache_status;  # return the cache status to the client
        proxy_pass http://cache_server;  # proxy forwarding
    }
}

HTTPS

Before we learn how to configure HTTPS, let’s briefly review how HTTPS works. How is it encrypted to make it secure?

HTTPS workflow

  1. The client (browser) visits the Baidu website https://www.baidu.com;
  2. The Baidu server returns the CA certificate used by HTTPS;
  3. The browser verifies whether the CA certificate is legitimate;
  4. If the certificate is valid, the browser generates a string of random numbers and encrypts it with the public key (provided in the certificate);
  5. The public-key-encrypted random number is sent to the Baidu server;
  6. The Baidu server receives the ciphertext and decrypts it with its private key to obtain the random number (public-key encryption, private-key decryption, and vice versa);
  7. When the Baidu server sends content to the browser, it encrypts it with the random number;
  8. The browser can then decrypt with the same random number and obtain the real content transmitted by the server.

This is how HTTPS works: symmetric and asymmetric encryption are used in combination to ensure the security of the transmitted content.

Certificate configuration

Upload the xxx.crt and xxx.key certificate files to a directory on the server, then do the following configuration:

server {
    listen 443 ssl http2 default_server;    # SSL access port 443, with HTTP/2 enabled
    server_name lion.club;                  # the domain name bound to the certificate (written arbitrarily here)
    ssl_certificate /etc/nginx/https/lion.club_bundle.crt;  # certificate file path
    ssl_certificate_key /etc/nginx/https/lion.club.key;     # private key file path
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;    # supported TLS protocol versions
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}

After this configuration, the HTTPS version of the website can be accessed normally.
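
A common companion configuration (a sketch, assuming the same lion.club domain) redirects plain HTTP traffic to the HTTPS site:

server {
    listen 80;
    server_name lion.club;
    # permanently redirect every HTTP request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}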

Configure cross-domain CORS

Just a quick review of what cross-domain is all about.

Cross domain definition

The same origin policy restricts how documents or scripts loaded from the same source can interact with resources from another source. This is an important security mechanism for isolating potentially malicious files. Read operations between different sources are generally not allowed.

Definition of same origin

If two pages have the same protocol, port (if specified), and host, then the two pages have the same origin.

Below are examples of origin comparisons against the URL http://store.company.com/dir/page.html:

http://store.company.com/dir2/other.html     same origin
https://store.company.com/secure.html        different origin: different protocol
http://store.company.com:81/dir/etc.html     different origin: different port
http://news.company.com/dir/other.html       different origin: different host

Different sources have the following restrictions:

  • At the Web data level, the same-origin policy restricts sites of different origins from reading the current site's Cookie, IndexedDB, and LocalStorage data;
  • At the DOM level, the same-origin policy restricts JavaScript scripts from different origins from reading and writing the current DOM object;
  • At the network level, the same-origin policy restricts sites from sending the current site's data to a site of a different origin via XMLHttpRequest or similar means.

How Nginx solves cross-origin problems

Such as:

  • The front-end server's domain name: fe.server.com
  • The back-end service's domain name: dev.server.com

Now, making a request from fe.server.com to dev.server.com is bound to be cross-origin.

All we need to do is start an Nginx server, set its server_name to fe.server.com, set a location to intercept the front-end requests that need to cross origins, and finally proxy the requests back to dev.server.com. The configuration is as follows:

server {
    listen 80;
    server_name fe.server.com;
    location / {
        proxy_pass http://dev.server.com;
    }
}

This perfectly circumvents the browser's same-origin policy: visits from fe.server.com to Nginx's fe.server.com are same-origin, while Nginx forwarding requests to the server does not trigger the browser's same-origin policy.

Enable Gzip compression

GZIP is one of the three standard HTTP compression formats specified. The vast majority of websites use GZIP to transfer HTML, CSS, JavaScript and other resource files.

For text files, the effect of GZiP is very obvious. After enabling GZiP, the traffic required for transmission will be reduced to about 1/4~1/3.

Not every browser supports gzip. How do you know whether the client supports it? The Accept-Encoding field in the request header indicates the supported compression methods.


Enabling gzip requires support from both client and server: if the client supports gzip parsing, the server only needs to return gzip-compressed files. We can configure the server to support gzip through Nginx. A content-encoding: gzip field in the response header means the server has enabled gzip compression.


Create a gzip.conf file in /etc/nginx/conf.d/ and configure it as follows:

# enable gzip
gzip on;

# MIME types to compress with gzip; text/html is always compressed
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

# when a static file is requested, check whether a pre-compressed .gz file exists;
# if it does, return the contents of the .gz file directly
gzip_static on;

# enable or disable gzip compression of responses received from proxied servers
gzip_proxied any;

# add "Vary: Accept-Encoding" to the response header so proxies can tell whether
# gzip is enabled based on the Accept-Encoding in the request header
gzip_vary on;

# gzip compression level, 1-9: 1 is the lowest, 9 the highest; the higher the level,
# the better the compression ratio but the longer the compression time; 4-6 is recommended
gzip_comp_level 6;

# memory buffers for caching compression results; "16 8k" means 16 buffers of 8k each
gzip_buffers 16 8k;

# minimum number of bytes for a page to be compressed, taken from Content-Length in the
# header; the default 0 compresses pages regardless of size; values above 1k are
# recommended, since compressing smaller responses may increase pressure
# gzip_min_length 1k;

# lowest HTTP version required to enable gzip; the default is 1.1
gzip_http_version 1.1;

In fact, you can also have front-end build tools such as webpack or rollup produce gzip-compressed files when building for production and then put them on the Nginx server; this reduces server overhead and speeds up access.
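
Whether gzip actually takes effect can be checked with curl by sending an Accept-Encoding header (a hedged example with a hypothetical URL):

curl -H "Accept-Encoding: gzip" -I http://fe.lion.club/index.html
# if gzip is enabled, the response headers include:
# Content-Encoding: gzip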

That is it for the practical application of Nginx. By mastering the core configuration and the hands-on configuration above, you should be able to handle most requirements with ease. Next, let's take a closer look at Nginx's architecture.

Nginx architecture

The process structure

Nginx process model diagram:

In the Nginx process architecture, there is a master process, which manages many child processes (the worker processes).

  • The master process is used to manage the child processes; it does not itself handle user requests;
  • If a child process goes down, it sends a message to the master process to indicate it is unavailable, and the master process starts a new child process;
  • If the configuration file is modified, the master process notifies the worker processes to pick up the new configuration information; this is what we call hot deployment;
  • Child processes communicate with each other through shared memory.
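
The master/worker structure is easy to observe with the ps command from earlier (illustrative output; pids and paths will differ on your system):

ps -ef | grep nginx
# root   1234     1  ...  nginx: master process /usr/sbin/nginx
# nginx  1235  1234  ...  nginx: worker process
# nginx  1236  1234  ...  nginx: worker process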

Principle of configuration file reloading

The flow of reloading the configuration file via reload:

  1. Send a HUP signal to the master process (the reload command);
  2. The master process checks whether the configuration syntax is correct;
  3. The master process opens the listening ports;
  4. The master process starts new worker child processes using the new configuration file;
  5. The master process sends the QUIT signal to the old worker child processes;
  6. The old worker processes close their listening handles and exit after finishing their current connections;
  7. Throughout the process, Nginx keeps running smoothly, achieving a smooth upgrade that users never notice;
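
In practice, step 1 corresponds to either of the following commands (both signal the master process; the pid file path is the default one configured earlier):

nginx -s reload                      # let the nginx binary send the signal
kill -HUP $(cat /var/run/nginx.pid)  # send the HUP signal manually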

Nginx modular management mechanism

Nginx's internal structure is composed of a core part and a series of functional modules. This division keeps each module's function relatively simple and easy to develop, and also makes it easy to extend the system's functionality. Nginx modules are independent of each other, exhibiting low coupling and high cohesion.