This article first introduces the principles behind Nginx reverse proxying, load balancing, dynamic-static separation, and high availability, then explains the Nginx configuration file in detail, and finally walks through concrete reverse proxy and load balancing configurations with practical examples. One article is enough to get you started with Nginx.

1. Introduction

Nginx is an open-source, lightweight web server, reverse proxy server, load balancer, and HTTP cache. It is characterized by high concurrency, high performance, and low memory usage. Nginx was developed with performance as the primary design goal, and its implementation is very efficient: it can withstand heavy loads and supports up to 50,000 concurrent connections. Nginx also supports hot deployment, is extremely easy to operate, and can run almost 24/7 without interruption. Well-known Nginx users include Baidu, Taobao, JD.com, Tencent, Sina, and NetEase.

2. Reverse proxy

2.1 Forward Proxy

A forward proxy sits on the client side: the client is configured to send its requests to the proxy, which fetches Internet resources on its behalf, so the target server sees the proxy rather than the client. Besides reverse proxying and load balancing, Nginx can also serve as a forward proxy for Internet access.
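A minimal sketch of Nginx as a forward proxy (the listening port and resolver address are illustrative; note that plain Nginx can only forward-proxy HTTP, and HTTPS forwarding requires a third-party module such as ngx_http_proxy_connect_module):

```nginx
server {
    listen 8888;               # port that clients point their proxy settings at
    resolver 8.8.8.8;          # DNS server Nginx uses to resolve the target hosts

    location / {
        # forward the request to whichever host the client asked for
        proxy_pass http://$http_host$request_uri;
    }
}
```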

2.2 Reverse Proxy

The client is unaware of the proxy server and needs no configuration; it simply sends requests to the reverse proxy server, which selects a target server, forwards the request, and returns the response to the client. To the outside world, the reverse proxy and the target servers appear as a single server: only the proxy's address is exposed, while the IP addresses of the real servers stay hidden.

3. Load balancing

Instead of concentrating all requests on a single server, you increase the number of servers and distribute incoming requests among them, spreading the load across different machines. This is load balancing.

4. Dynamic and static separation

To speed up page delivery, static pages and dynamic pages can be served by different servers. This accelerates responses and reduces the pressure on what was originally a single server.
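A minimal sketch of this idea, assuming static files live under /data and dynamic requests go to a local Tomcat on port 8080 (both paths are illustrative):

```nginx
server {
    listen 80;
    server_name 192.168.4.32;

    location /static/ {                   # images, CSS, JS served directly from disk
        root /data;
        expires 3d;                       # let browsers cache static resources for 3 days
    }

    location / {                          # everything else is treated as dynamic
        proxy_pass http://127.0.0.1:8080;
    }
}
```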

5. High availability

To improve system availability and fault tolerance, you can increase the number of Nginx servers. When the primary server fails or goes down, a backup server immediately takes over as the primary so that service continues uninterrupted.
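In practice this is usually implemented with keepalived and a virtual IP shared between the Nginx machines. A sketch of the master node's /etc/keepalived/keepalived.conf (the interface name, priorities, and virtual IP are assumptions):

```
vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the standby node
    interface eth0            # network interface bound to the virtual IP
    virtual_router_id 51      # must match on master and backup
    priority 100              # give the backup a lower value, e.g. 90
    advert_int 1              # heartbeat interval in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.4.50          # clients always connect to this virtual IP
    }
}
```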

6. Nginx configuration file

6.1 File Structure

The Nginx configuration file consists of three parts.

```nginx
...                          # global block
events {                     # events block
    ...
}
http {                       # http block
    ...                      # http global block
    server {                 # server block
        ...                  # server global block
        location [PATTERN] { # location block
            ...
        }
        location [PATTERN] {
            ...
        }
    }
    server {
        ...
    }
    ...                      # http global block
}
```
  • Part 1, the global block, sets configuration directives that affect the overall operation of the Nginx server. For example, worker_processes 1;: the larger the worker_processes value, the more concurrency can be supported.

  • Part 2, the events block, contains directives that affect the network connections between the Nginx server and its users. For example, worker_connections 1024; sets the maximum number of connections per worker.

  • Part 3, the http block, consists of an http global block and one or more server blocks. It is the most frequently configured part and covers most features, such as proxying, caching, and log definitions.

    • Server block: Configures parameters related to virtual hosts.
    • Location block: Configures the routing of the request and the processing of the various pages.

6.2 Configuration File

```nginx
########### Every directive must end with a semicolon ###########
#user administrator administrators;  # user/group that runs the workers; default is nobody nobody
#worker_processes 2;                 # number of worker processes; default is 1
#pid /nginx/pid/nginx.pid;           # location of the pid file
error_log log/error.log debug;       # log path and level; valid in global, http, and server blocks
                                     # levels: debug|info|notice|warn|error|crit|alert|emerg

events {
    accept_mutex on;          # serialize connection acceptance to prevent thundering herd; default on
    multi_accept on;          # let a worker accept multiple connections at once; default off
    #use epoll;               # event-driven model: select|poll|kqueue|epoll|rtsig|/dev/poll|eventport
    worker_connections 1024;  # maximum number of connections per worker
}

http {
    include mime.types;                     # file-extension to MIME-type mapping
    default_type application/octet-stream;  # default MIME type; default is text/plain
    #access_log off;                        # disable the access log

    log_format myFormat '$remote_addr - $remote_user [$time_local] $request '
                        '$status $body_bytes_sent $http_referer '
                        '$http_user_agent $http_x_forwarded_for';  # custom log format
    access_log log/access.log myFormat;     # combined is the default format

    sendfile on;              # enable sendfile transfer; default off; valid in http, server, location blocks
    sendfile_max_chunk 100k;  # maximum bytes per sendfile call; default 0, i.e. no upper limit
    keepalive_timeout 65;     # connection timeout; default 75s; valid in http, server, location blocks

    upstream mysvr {
        server 127.0.0.1:7878;
        server 192.168.10.121:3333 backup;  # hot standby
    }

    error_page 404 https://www.baidu.com;   # error page

    server {
        keepalive_requests 120;  # maximum number of requests per keep-alive connection
        listen 4545;             # listening port
        server_name 127.0.0.1;   # listening address

        location ~*^.+$ {        # request URI filter; regex match; ~ is case sensitive, ~* is not
            #root path;          # root directory
            #index vv.txt;       # default page
            proxy_pass http://mysvr;  # forward requests to the server list defined by mysvr
            deny 127.0.0.1;      # denied IP
            allow 172.18.5.54;   # allowed IP
        }
    }
}
```

7. Configure an example

7.1 Reverse Proxy

7.1.1 Example 1

Effect: entering www.abc.com in the browser reaches the Tomcat home page on the Linux server through Nginx. Configuration:

```nginx
server {
    listen 80;
    server_name 192.168.4.32;

    location / {
        root html;                         # html directory
        proxy_pass http://127.0.0.1:8080;  # forward requests to Tomcat
        index index.html index.htm;        # default page
    }
}
```

7.1.2 Example 2

Effect: requests are forwarded to different ports according to the path entered in the browser. Configuration:

```nginx
server {
    listen 9000;
    server_name 192.168.4.32;

    location ~ /example1/ {
        proxy_pass http://127.0.0.1:5000;
    }

    location ~ /example2/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Location directive description:

  • ~ : Indicates that the URI contains a regular expression and is case sensitive.
  • ~* : Indicates that the URI contains a regular expression and is case insensitive.
  • = : Indicates that the URI does not contain regular expressions and must be strictly matched.
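A sketch that puts the three modifiers side by side (the ports and paths are illustrative):

```nginx
server {
    listen 9000;

    location = /health {                   # exact match: only /health, nothing else
        return 200 "ok";
    }
    location ~ /Img/ {                     # regex, case sensitive: matches /Img/ but not /img/
        proxy_pass http://127.0.0.1:5000;
    }
    location ~* \.(gif|jpg|png)$ {         # regex, case insensitive: .PNG also matches
        proxy_pass http://127.0.0.1:8080;
    }
}
```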

7.2 Load Balancing

7.2.1 Example 1

Effect: entering http://192.168.4.32/example/a.html in the browser's address bar distributes requests evenly between ports 5000 and 8080, achieving load balancing. Configuration:

```nginx
upstream myserver {
    server 192.168.4.32:5000;
    server 192.168.4.32:8080;
}

server {
    listen 80;
    server_name 192.168.4.32;

    location / {
        root html;                    # html directory
        index index.html index.htm;   # default page
        proxy_pass http://myserver;   # forward requests to the server list defined by myserver
    }
}
```

Nginx server-allocation strategies:

  • Round robin (the default): requests are distributed to the servers one by one in order of arrival. If a server goes down, it is automatically removed.
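    For example, the default round robin needs no extra directive; listing the servers is enough:

          upstream myserver {
            server 192.168.4.32:5000;   # requests alternate between these two servers
            server 192.168.4.32:8080;
          }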

  • weight: the higher the weight, the more requests a server is assigned. The default value is 1. For example:

          upstream myserver {
            server 192.168.4.32:5000 weight=10;
            server 192.168.4.32:8080 weight=5;
          }
  • ip_hash: requests are assigned based on a hash of the client's IP address, so each visitor consistently reaches the same back-end server (useful for sticky sessions). For example:

          upstream myserver { 
            ip_hash;  
            server 192.168.4.32:5000;
            server 192.168.4.32:8080;
          }
  • fair (third-party module): requests are allocated according to the response time of the back-end servers; servers with shorter response times are served first. For example:

          upstream myserver { 
            fair;  
            server 192.168.4.32:5000;
            server 192.168.4.32:8080;
          }

Congratulations! By now you have mastered the basics of Nginx and can configure reverse proxies and load balancing. The next chapter, the advanced installment of "From principle to practice: thoroughly understand Nginx", will cover dynamic-static separation and high-availability configuration examples and analyze Nginx's internals in depth. Writing is hard work; a like is much appreciated!