Nginx is an open-source, lightweight web server, reverse proxy, load balancer, and HTTP cache. It is known for high concurrency, high performance, and low memory usage.

Nginx was designed with performance as the primary consideration, and its implementation is highly efficient. It can withstand heavy loads and supports up to 50,000 concurrent connections.

Nginx also supports hot deployment: configuration can be reloaded without stopping the server, which makes it easy to operate and lets it run virtually 24/7.

Forward and reverse proxies

① Forward proxy

Besides reverse proxying and load balancing, Nginx can also act as a forward proxy, for example to provide Internet access from an internal network.

② Reverse proxy

The client is unaware of the proxy server and needs no configuration: it simply sends requests to the reverse proxy server, which selects a target server, forwards the request, and returns the response to the client.

Externally, the reverse proxy server and the target server appear as a single server: only the proxy's address is exposed, while the IP address of the real server stays hidden.


Load balancing

Instead of concentrating all requests on a single server, you increase the number of servers and distribute incoming requests among them, spreading the load across different machines. This is load balancing.

Dynamic and static separation

To speed up the site's response, static pages and dynamic pages can be served by different servers; this accelerates resolution and reduces the pressure on the original single server.

High availability

To improve system availability and fault tolerance, you can increase the number of Nginx servers. When the primary server fails or goes down, the backup server can immediately act as the primary server for continuous operation.



Nginx configuration file


① File structure

...              # global block

events {         # events block
    ...
}

http {           # http block
    ...          # http global block
    server {     # server block
        ...      # server global block
        location [PATTERN] {   # location block
            ...
        }
        location [PATTERN] {
            ...
        }
    }
    server {
        ...
    }
    ...          # http global block
}

The Nginx configuration file consists of three parts:

  • Global block: configuration directives that affect the operation of the Nginx server as a whole. For example, worker_processes 1; the larger the worker_processes value, the more concurrent processing the server can support.
  • Events block: directives that affect the network connections between the Nginx server and its users. For example, worker_connections 1024; the maximum number of connections supported per worker.
  • HTTP block: the most frequently configured part, itself containing an HTTP global block and server blocks; it covers proxying, caching, log definitions, and most other features. Server block: parameters for a virtual host. Location block: request routing and the handling of individual pages.
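The relationship between those two directives can be sanity-checked with a bit of arithmetic. A hedged sketch: the theoretical connection ceiling is roughly worker_processes × worker_connections, and, as is commonly noted, about half of that when reverse proxying, since each client request also opens an upstream connection:

```shell
# Rough capacity estimate from the two example directives above.
worker_processes=2        # from: worker_processes 2;
worker_connections=1024   # from: worker_connections 1024;

# Theoretical ceiling when serving clients directly.
max_connections=$((worker_processes * worker_connections))

# When reverse proxying, each client request also consumes an upstream
# connection, so the commonly cited estimate is half of that.
max_proxied=$((max_connections / 2))

echo "max connections: $max_connections"
echo "max proxied:     $max_proxied"
```

With two workers and 1024 connections each, that gives 2048 direct connections, or roughly 1024 proxied clients.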


② Configuration file

########### Each directive must end with a semicolon. ###########
#user administrator administrators;  # user or group running the worker processes; default is nobody
#worker_processes 2;  # number of worker processes; default is 1
#pid /nginx/pid/nginx.pid;  # where the nginx PID file is stored
error_log log/error.log debug;  # log path and level; valid in the global, http, and server blocks; levels: debug|info|notice|warn|error|crit|alert|emerg
events {
    accept_mutex on;  # serialize connection acceptance to prevent the thundering-herd problem; default is on
    multi_accept on;  # whether a worker accepts several new connections at once; default is off
    #use epoll;  # event-driven model: select|poll|kqueue|epoll|rtsig|/dev/poll|eventport
    worker_connections  1024;  # maximum connections per worker; default is 512
}
http {
    include       mime.types;  # file-extension-to-MIME-type mapping table
    default_type  application/octet-stream;  # default MIME type; the built-in default is text/plain
    #access_log off;  # disable the access log
    log_format myFormat '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';  # custom log format
    access_log log/access.log myFormat;  # "combined" is the default log format
    sendfile on;  # enable sendfile transfers; default is off; valid in http, server, and location blocks
    sendfile_max_chunk 100k;  # maximum amount transferred per sendfile call; default is 0, i.e. no limit
    keepalive_timeout 65;  # keep-alive timeout; default is 75s; valid in http, server, and location blocks

    upstream mysvr {
        server 127.0.0.1:7878;
        server 192.168.10.121:3333 backup;  # hot standby
    }
    error_page 404 https://www.baidu.com;  # error page
    server {
        keepalive_requests 120;  # maximum number of requests per connection
        listen       4545;  # listening port
        server_name  127.0.0.1;  # listening address
        location  ~*^.+$ {  # request URI filter; regex match; ~ is case sensitive, ~* is case insensitive
           #root path;  # document root
           #index vv.txt;  # default page
           proxy_pass  http://mysvr;  # forward requests to the server list defined in mysvr
           deny 127.0.0.1;  # rejected IP
           allow 172.18.5.54;  # allowed IP
        }
    }
}


Configure the instance

The following Nginx configuration examples implement forward proxying, reverse proxying, load balancing, Nginx caching, dynamic/static separation, and Nginx high availability, followed by a closer look at how Nginx works. If you need to write an Nginx configuration file, you can use these examples as a reference.

① Forward proxy

A forward proxy's object is the client: the proxy server accesses the target server on the client's behalf.

Effect: enter www.google.com in the browser, and the request reaches www.google.com through the forward proxy.

Specific configuration:

    server {
        resolver 8.8.8.8;   # DNS server used to resolve outbound requests
        listen 80;
        location / {
            proxy_pass http://$http_host$request_uri;
        }
    }

Perform one of the following operations on the client that needs to access the Internet:

# Method 1 (recommended): set for the current shell
export http_proxy=http://your-forward-proxy-address:proxy-port

# Method 2: make it permanent
vim ~/.bashrc
export http_proxy=http://your-forward-proxy-address:proxy-port
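To confirm the variable is picked up, a small hedged sketch; the proxy address below is a hypothetical placeholder, not a real server:

```shell
# Hypothetical forward proxy address, for illustration only.
export http_proxy=http://proxy.example.com:3128

# Tools such as curl and wget honor this variable automatically:
# curl -v http://example.com    # the request would now go through the proxy

echo "$http_proxy"
```

Unsetting the variable (`unset http_proxy`) restores direct access.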


② Reverse proxy

A reverse proxy is an intermediary server that answers client requests on behalf of backend servers; the proxied object is the server.

Effect: enter www.abc.com in the browser, and the Nginx server forwards the request to the Tomcat home page on the Linux system.

Specific configuration:

    server {
        listen       80;
        server_name  192.168.4.32;   # listening address
        location  / {
            root   html;                        # document root
            proxy_pass http://127.0.0.1:8080;   # forward requests to the backend
            index  index.html index.htm;        # default pages
        }
    }

Implementation effect: Switch to different ports based on the paths entered in the browser.

Specific configuration:

    server {
        listen       9000;
        server_name  192.168.4.32;   # listening address
        location ~ /example1/ {
            proxy_pass http://127.0.0.1:5000;
        }
        location ~ /example2/ {
            proxy_pass http://127.0.0.1:8080;
        }
    }

Location directive description:

  • ~ : Indicates that the URI contains a regular expression and is case sensitive.
  • ~* : Indicates that the URI contains a regular expression and is case insensitive.
  • = : Indicates that the URI does not contain regular expressions and must be strictly matched.
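A sketch of how the three modifiers behave side by side in one server block (the paths and backends are illustrative):

```nginx
server {
    listen 9000;

    # = : exact match, no regex; only the URI /status matches
    location = /status {
        return 200 "ok";
    }

    # ~ : case-sensitive regex; /example1/a matches, /Example1/a does not
    location ~ /example1/ {
        proxy_pass http://127.0.0.1:5000;
    }

    # ~* : case-insensitive regex; /example2/a and /Example2/a both match
    location ~* /example2/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```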


③ Load Balancing

Implementation effect: enter http://192.168.4.32/example/a.html in the browser's address bar; requests are distributed evenly between ports 5000 and 8080, which is the load-balancing effect.

Specific configuration:

    upstream myserver {
        server 192.168.4.32:5000;
        server 192.168.4.32:8080;
    }
    server {
        listen       80;             # listening port
        server_name  192.168.4.32;   # listening address
        location  / {
            root   html;                   # document root
            index  index.html index.htm;   # default pages
            proxy_pass http://myserver;    # forward requests to the server list defined in myserver
        }
    }

Nginx allocation server policy:



Round robin (default): requests are assigned to each server one by one in order. If a server goes down, it is automatically removed from the rotation.
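The default policy can be pictured as a simple rotation over the upstream list. This is only a toy simulation of the idea; Nginx's real implementation also accounts for weights and failed peers:

```shell
# Toy round-robin over the two backends used in the example above.
backends=("192.168.4.32:5000" "192.168.4.32:8080")
n=${#backends[@]}

# Four consecutive requests alternate between the two servers.
for i in 0 1 2 3; do
    echo "request $i -> ${backends[$((i % n))]}"
done
```

The four requests land on :5000, :8080, :5000, :8080 in turn.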

Weight: servers with a higher weight receive more requests. The default weight is 1.

Such as:

      upstream myserver {
         server 192.168.4.32:5000 weight=10;
         server 192.168.4.32:8080 weight=5;
      }



ip_hash: requests are assigned based on a hash of the client's IP address, so each visitor consistently reaches the same backend server.

Such as:

      upstream myserver {
         ip_hash;
         server 192.168.4.32:5000;
         server 192.168.4.32:8080;
      }



Fair (third-party module): requests are allocated according to the response time of the backend servers, and servers with shorter response times are served first.

Such as:

      upstream myserver {
         fair;
         server 192.168.4.32:5000;
         server 192.168.4.32:8080;
      }


④ Nginx cache

Implementation effect: within 3 days, accessing http://192.168.4.32/a.jpg in the browser serves the image from the cache rather than from the origin server; after 3 days, it is downloaded from the server again.

Specific configuration:

# add cache configuration in the http block
proxy_cache_path /tmp/nginx_proxy_cache levels=1 keys_zone=cache_one:512m inactive=60s max_size=1000m;

# add cache configuration in the server block
location ~ \.(gif|jpg|png|htm|html|css|js)(.*) {
     proxy_pass http://192.168.4.32:5000;   # forward to the backend when the resource is not cached
     proxy_redirect off;
     proxy_cache cache_one;
     proxy_cache_valid 200 1h;              # different cache durations for different HTTP status codes
     proxy_cache_valid 500 1d;
     proxy_cache_valid any 1m;
     expires 3d;
}

Expires is a parameter that sets an expiration date for a resource. Setting Expires allows the browser to cache content before the expiration date to reduce requests and traffic to the server.

This means that the browser itself can check whether the expiration date is valid without going to the server, so there is no extra traffic. This approach is well suited to resources that are not subject to constant change.
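In practice, `expires 3d;` makes Nginx emit both an `Expires` header and a `Cache-Control: max-age` header. A hedged fragment, with an illustrative location pattern and root:

```nginx
# Illustrative: browser-side caching for static assets.
location ~ \.(jpg|png|css|js)$ {
    root /data/;
    # Emits "Expires: <now + 3 days>" and "Cache-Control: max-age=259200"
    # (3 days = 259200 seconds); the browser re-requests only after expiry.
    expires 3d;
}
```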


⑤ Dynamic and static separation

Effect: accessing www.abc.com/a.html in the browser's address bar serves the static content from the static resource server.

Accessing www.abc.com/a.jsp in the browser's address bar serves the dynamic content from the dynamic resource server.

Specific configuration:

    upstream static {
        server 192.168.4.31:80;
    }
    upstream dynamic {
        server 192.168.4.32:8080;
    }
    server {
        listen       80;            # listening port
        server_name  www.abc.com;   # listening address

        # intercept dynamic resources
        location ~ .*\.(php|jsp)$ {
           proxy_pass http://dynamic;
        }

        # intercept static resources
        location ~ .*\.(jpg|png|htm|html|css|js)$ {
           root /data/;        # document root
           proxy_pass http://static;
           autoindex on;       # enable automatic file listing
        }
    }


⑥ High availability

Normally, the Nginx primary server accesses the backend target server cluster. When the primary server goes down, traffic automatically switches to the backup server, which then acts as the primary and accesses the backend target servers.

Effect: Prepare two Nginx servers, access the virtual IP address through the browser address bar, stop the Nginx on the primary server, and access the virtual IP address again.


Specific configuration:

# install keepalived
yum install keepalived -y
# verify the installation
rpm -q -a keepalived
keepalived-1.3.5-16.el7.x86_64

1. Install Keepalived on both Nginx servers. Keepalived acts as a router: it runs a script to check whether the current server is still alive.

2. Modify the /etc/keepalived/keepalived.conf configuration file on the primary server (you can replace it outright) to complete the master/backup high-availability configuration.

Keepalived binds the Nginx servers to a virtual IP address. The Nginx high-availability cluster exposes this virtual IP uniformly, and clients access the Nginx servers through it.

global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.4.32
    smtp_connect_timeout 30
    router_id LVS_DEVEL   # host identifier; add it to /etc/hosts so the host can be reached by name
}

vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2    # interval (in seconds) between script runs
    weight 2      # amount by which the priority changes based on the script result
}

vrrp_instance VI_1 {
    interface ens7f0        # network interface; change to match your host
    state MASTER            # change MASTER to BACKUP on the standby server
    virtual_router_id 51    # must be identical on the master and backup hosts
    priority 100            # the master has a higher value, the backup a lower one
    advert_int 1            # heartbeat interval (default 1s) used to check that the server is alive
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100       # VRRP virtual address; multiple addresses can be bound
    }
}

The fields are described as follows:

router_id: configured in the /etc/hosts file so that our host can be accessed by name:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   LVS_DEVEL

Interval: Sets the interval for executing scripts.

Weight: the amount (which can be negative) by which the VRRP priority changes when the script fails, i.e. when Keepalived or Nginx goes down.
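The failover arithmetic can be sketched as follows. The numbers are hypothetical, and this sketch assumes the common convention of a negative weight that is added to the master's priority when the check script fails (the example config above instead uses a positive weight and a script that kills Keepalived outright):

```shell
# Hypothetical Keepalived priorities.
master_priority=100
backup_priority=90
weight=-20   # added to the master's priority when nginx_check.sh fails

# After the check script fails on the master:
effective=$((master_priority + weight))
echo "master effective priority: $effective"

# The backup now has the higher priority and claims the virtual IP.
if [ "$effective" -lt "$backup_priority" ]; then
    echo "backup takes over the virtual IP"
fi
```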

Interface: Run the ifconfig command to view the name of the current network adapter.

ens7f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.32  netmask 255.255.252.0  broadcast 192.168.7.255
        inet6 fe80::e273:9c3c:e675:7c60  prefixlen 64  scopeid 0x20<link>
        ...

3. Add the check script nginx_check.sh to the /usr/local/src directory.

#!/bin/bash
A=`ps -C nginx --no-headers | wc -l`
if [ $A -eq 0 ]; then
    # nginx is not running; try to restart it
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ `ps -C nginx --no-headers | wc -l` -eq 0 ]; then
        # nginx could not be restarted; stop keepalived so the backup takes over
        killall keepalived
    fi
fi

4. Start Nginx and Keepalived for both servers:

# start nginx
./nginx
# start keepalived
systemctl start keepalived.service

5. Run ip a to view the virtual IP address. Stop Nginx and Keepalived on the primary server 192.168.4.32, then access the virtual IP again to observe the high-availability effect.



Principle analysis

After Nginx starts, there are two kinds of processes on the Linux system: a Master and Workers.

The Master acts as an administrator: it does not handle requests itself and is only responsible for distributing tasks to the Workers (there are usually several Workers).

ps -ef |grep nginx
root     20473     1  0  2019 ?        00:00:00 nginx: master process /usr/sbin/nginx
nginx     4628 20473  0 Jan06 ?        00:00:00 nginx: worker process
nginx     4629 20473  0 Jan06 ?        00:00:00 nginx: worker process

How does a Worker work? When a client sends a request, it first passes through the Master. After receiving the request, the administrator will notify the Worker of the request.

Multiple Workers compete for the task; the Worker that wins it handles the request, for example by forwarding it, reverse proxying, or accessing a backend such as Tomcat or a database.

What are the benefits of one Master and multiple workers?

  • You can use nginx -s reload for hot deployment.
  • Each Worker is an independent process. If one Worker has a problem, the other Workers keep running independently and continue competing for tasks, so client requests are served without interruption.

How many workers is appropriate?

Similar to Redis, Nginx adopts IO multiplexing mechanism. Each Worker is an independent process, and there is only one main thread in each process.

By handling requests asynchronously and without blocking, each Worker can make full use of one CPU. It is therefore most appropriate to set the number of Workers equal to the number of CPUs on the server.
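Rather than hard-coding the count, recent Nginx versions can size the Worker pool automatically; a small fragment for the global block (auto has been supported since roughly the 1.3.x series):

```nginx
# In the global block: one worker per CPU core, detected automatically.
worker_processes auto;
```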