Preface

Nginx ("engine x") is a high-performance web server and reverse proxy server developed by Igor Sysoev, a Russian programmer. It is also an IMAP/POP3/SMTP proxy server.

For small projects, IIS (.NET), Tomcat (Java) and so on may be enough, but for large projects or microservice architectures, Nginx is practically indispensable. A picture shows how popular Nginx is:

Nginx is popular because of its good performance, support for high concurrency, low memory consumption, simple configuration, and powerful functionality, and most importantly, it is free and open source. Next comes the important part. Anyone who knows me knows I like to combine theory with practice, so let's go ~~~

Main content

I won't demonstrate the installation step by step; if you need detailed installation steps, click here, the beginner tutorial is very detailed. Instead, I'll focus on the functions that are used most often.

The following demonstrations are done on an Alibaba Cloud server running CentOS 7, with nginx 1.18.0. Xshell 6 is used to connect to the cloud server, and Xftp 6 is used to upload files.

1. Understanding the configuration file

Nginx, like Redis, only needs a simple configuration file, and its features are implemented through that file, so there is no need to memorize every directive up front; you will pick them up as you actually use them.

The file that usually needs to be edited is nginx.conf. After my installation, the configuration file path is shown in the following figure:

The main contents of the file are as follows:

# Specifies the user that runs worker processes; usually no setting is required
#user  nobody;
# Number of worker processes, normally set to the number of CPU cores
worker_processes  1;
# Location of the error log; a log level (e.g. info) can be specified
error_log  /var/log/nginx/error.log info;
# Location where the master process PID is stored
pid        /var/run/nginx.pid;

events {
    # Maximum number of concurrent connections per worker process
    worker_connections  1024;
}

http {
    # File extension to MIME type mapping table, taken from mime.types in the current directory
    include       mime.types;
    # Default file type
    default_type  application/octet-stream;
    # Log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    # nginx access log location
    access_log  /var/log/nginx/access.log  main;

    # Enable efficient file transfer (sendfile) mode
    sendfile        on;
    #tcp_nopush     on;
    # Keep-alive timeout for client connections
    keepalive_timeout  65;
    # Enable gzip compression
    #gzip  on;
    # Server configuration; server blocks can be split into sub-configuration files
    # to keep a single configuration file from growing too large
    server {
        # Listening port
        listen       80;
        # Domain name (or IP address)
        server_name  localhost;

        #charset koi8-r;
        #access_log  /var/log/nginx/host.access.log  main;

        location / {
            # Default root directory
            root   html;
            # Default index pages
            index  index.html index.htm;
        }

        # Map an HTTP status code to a custom page, e.g. a 404 page
        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        # Pages shown for error status codes; reload nginx after changing this
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

In the above configuration file, there are several points to note:

  • An http block can contain multiple server blocks, and each server block acts as a virtual host (more on this later).
  • A server block can contain multiple location blocks.
  • You can use an include <directory>/*.conf; directive in the http block to specify where sub-configuration files live; they are loaded automatically, which keeps a single configuration file from growing too large (see the sketch below).
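
For example, a minimal sketch of splitting a server block out into its own file (the conf.d directory name is just an assumption for illustration):

# In nginx.conf, inside the http block:
#   include conf.d/*.conf;

# conf.d/mysite.conf then only needs to contain its own server block:
server {
    listen       8080;
    server_name  localhost;

    location / {
        root   html;
        index  index.html index.htm;
    }
}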

2. Common commands

The nginx environment variable is not configured, so the following commands need to be run from the nginx installation directory (/usr/local/nginx/sbin).

  • Start nginx

    ./nginx # start
  • Stop nginx

    # Way 1
    ./nginx -s stop   # stop immediately
    # Way 2
    ./nginx -s quit   # stop gracefully after finishing current work
    # Way 3
    killall nginx     # kill the process directly
  • Reload the configuration file

    ./nginx -s reload
  • Check the nginx startup status

    ps aux|grep nginx
  • Check whether the port number is occupied

    netstat -tlnp                       # check overall port usage
    netstat -tlnp | grep <port number>  # check whether a specific port is in use

3. Practical application of common functions

3.1 Reverse Proxy

Some friends often want to use Google to search for information but get blocked, so they can only use Baidu. What if you really have to use Google? You go over the wall (with the relevant configuration). In essence, the local machine sends its requests to a proxy server (the client and the proxy server are on the same LAN), and the proxy server fetches the content from the target server on its behalf, so the information is obtained indirectly. This form is called a forward proxy. As shown below:

A reverse proxy is the opposite of a forward proxy: the reverse proxy and the target servers are on the same LAN, and the client simply accesses the address of the reverse proxy server, without knowing which target server actually handles the request. As shown below:

Case demonstration:

Create a new API project and deploy it to the cloud server, then put nginx in front of it as a reverse proxy to hide the project's real address. To run the API project, the .NET Core 3.1 runtime needs to be installed (the SDK is only needed for development, not for running):

# Step 1: register the Microsoft key and repository, and install the required dependencies
rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm
# Step 2: install the .NET Core 3.1 runtime (outside a development environment the SDK is not needed)
yum install aspnetcore-runtime-3.1

Then run the dotnet --version command; if the corresponding version is displayed, you can continue deploying the program.

Create a TestWebAPI project, copy the compiled project files to the cloud server via Xftp, and start it as follows:

After starting it, port 5000 is not opened in the Alibaba Cloud server's security group, so the site cannot be reached from the public internet; however, you can run the curl command on the server itself to test whether the site has started, as follows:

For my server, port 80 is open to the public and can be accessed as follows:

So now we use nginx, listening on the publicly accessible port 80, to reverse proxy requests to the test project running internally on port 5000. The nginx configuration is as follows:
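
The configuration is roughly of the following shape (a minimal sketch; the location path and upstream address are assumptions based on the test project listening locally on port 5000):

server {
    listen       80;
    server_name  localhost;   # or your domain / public IP

    location /weatherforecast/ {
        # forward matching requests to the API project listening on port 5000
        proxy_pass http://127.0.0.1:5000;
    }
}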

After nginx is restarted, it can be accessed as follows:

Key points:

  • Specify the listening port and server_name (domain name or IP address) in the server block;

  • Configure location blocks inside the corresponding server block. A location supports regular-expression matching. The syntax is as follows:

    location [ = | ~ | ~* | ^~ ] uri {
        # rules for handling requests whose path matches
    }

    = : the uri does not contain a regular expression, and the request string must match the uri exactly. Once a match succeeds, the request is handled immediately and no other matching rules are checked.

    ~ : the uri contains a regular expression and matching is case sensitive.

    ~* : the uri contains a regular expression and matching is case insensitive.

    ^~ : the uri does not contain a regular expression; nginx uses the longest matching prefix, and if that prefix is marked ^~, the request is handled there immediately without checking the regular-expression locations.

    Example:

    The actual operation is as follows (a sketch illustrating these modifiers also appears after this list):

  • Use proxy_pass inside the location block to configure the address of the target server to forward to;
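
To make the four modifiers above concrete, here is a minimal sketch (the paths and response bodies are hypothetical, purely for illustration):

server {
    listen 8081;

    # exact match: only the request /status matches here
    location = /status {
        return 200 "ok";
    }

    # prefix match; when it is the longest matching prefix, regex locations are skipped
    location ^~ /static/ {
        root /usr/local/nginx;
    }

    # case-sensitive regex match
    location ~ \.(png|jpg)$ {
        root /usr/local/nginx/images;
    }

    # case-insensitive regex match
    location ~* \.(css|js)$ {
        root /usr/local/nginx/assets;
    }
}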

Nginx reverse proxy benefits:

  • It hides the real address of the target server, which improves security.
  • Nginx itself performs well, and it is easy to add load balancing and dynamic/static separation on top of it, making better use of server resources.
  • Unified entry point: when nginx is used as a load balancer, callers only access the proxy entry, no matter how the target servers scale out.

3.2 Load Balancing

High availability matters for a system, so a site is usually deployed as a cluster. To make sure requests are distributed evenly across the servers, a load balancing strategy is required; it can be implemented in software or in hardware (not covered in detail here). The rough model is as follows:

Case demonstration

The case uses one nginx instance as the reverse proxy and achieves load balancing with a simple configuration. Because devices are limited, the target servers are simulated with different ports, 5000 and 6000 respectively. For the convenience of the demo, an interface that returns the port is added to the original project. The code is as follows:

Then copy the compiled project files to the cloud server via Xftp and start them in separate terminals on different ports. The command is as follows:

Open another terminal and start the project in the same way, but configured to listen on port 5000, so that two instances of the project are running (a small cluster). Then configure nginx to provide the load balancing function. As shown below:
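
The configuration is roughly of the following shape (a minimal sketch, assuming the two instances listen on 127.0.0.1:5000 and 127.0.0.1:6000 and the test endpoint is /weatherforecast/):

upstream testloadbalance {
    server 127.0.0.1:5000;
    server 127.0.0.1:6000;
}

server {
    listen       80;
    server_name  localhost;

    location /weatherforecast/ {
        # requests are distributed across the servers in the upstream block (round robin by default)
        proxy_pass http://testloadbalance;
    }
}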

Nginx load balancing policies

As shown in the example above, nginx's default load balancing policy is round robin. In actual application scenarios, other policies can be configured, as follows:

  • Round robin (the default): each request is distributed to the target servers one by one in the order it arrives; if a target server goes down, it is automatically removed.

  • Weight: requests are allocated according to configured weights; the higher a target server's weight, the more requests it receives.

    # Everything else stays the same; just add a weight after each target server
    upstream testloadbalance {
        server 127.0.0.1:5000 weight=5;
        server 127.0.0.1:6000 weight=10;
    }

    With the configuration above, restart nginx and send the test request several times; more of the requests are forwarded to port 6000.

  • Ip_hash: each request carries a client IP address; nginx hashes that IP and uses the result to pick the target server, so a given client always reaches the same target server.

    # Everything else stays the same; just add the policy
    upstream testloadbalance {
            ip_hash; # pick the target server by hashing the client IP
            server 127.0.0.1:5000;
            server 127.0.0.1:6000;
    }
  • Fair: requests are allocated according to the response time of the target servers; servers with shorter response times are served first.

    The fair policy is not built into nginx; you need to install the third-party nginx-upstream-fair module before this policy can be configured. The configuration is as follows:

    # Everything else stays the same; just add the policy
    upstream testloadbalance {
            fair; # set the policy to fair
            server 127.0.0.1:5000;
            server 127.0.0.1:6000;
    }

Isn't the load balancing configuration simple ~~~? Getting hands-on just feels great.

3.3 Dynamic and static separation

In recent years the development model that separates the front end from the back end has become popular. On the deployment side, to improve system performance and user experience, static resources (HTML, JS, CSS, images, etc.) are also split out: a dedicated site serves the static resources, while the WebAPI that fetches and processes data is deployed as a separate site. In essence this is still based on location matching rules, so that matching requests can be handled differently.

Environment preparation

Create a static directory in the nginx installation directory to store static resources:

The structure is as follows:

Dynamic and static separation configuration
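
The configuration is roughly of the following shape (a minimal sketch; the static root and the back-end port are assumptions based on the static directory created above and the test project on port 5000):

server {
    listen       80;
    server_name  localhost;

    # static resources: requests for these file types are served directly from the static directory
    location ~* \.(html|js|css|png|jpg|gif)$ {
        root /usr/local/nginx/static;
    }

    # dynamic requests are proxied to the back-end API
    location /weatherforecast/ {
        proxy_pass http://127.0.0.1:5000;
    }
}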

Restart nginx (or reload the configuration file) and access it to see the effect:

The idea behind dynamic/static separation really is that straightforward; you can define the location matching rules to suit your own needs.

4. Other features

Besides the commonly used functions above, there are some smaller features that are also used frequently, such as configuring a dedicated page for an HTTP status code, access control, and adapting pages to PC or mobile clients. A few common ones are demonstrated below:

  • Configure the specified page based on the status code

    In the case of a common 404, the default might be a simple page prompt like this:

    But many companies like to make their own personalized page, and some use it for public service announcements and so on. The nginx configuration is simple, as follows:
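
    A minimal sketch of what this looks like (the page name 404.html and placing it in the html directory are assumptions for illustration):

    server {
        listen       80;
        server_name  localhost;

        # serve a custom page whenever a 404 is returned
        error_page 404 /404.html;
        location = /404.html {
            root html;
        }
    }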

    Other HTTP status codes can also be displayed in the preceding way.

  • Access control

    For system security, access control is added to requests, for example with blacklists and whitelists: IP addresses on the whitelist may access the site, while those on the blacklist may not. As follows:
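
    A minimal blacklist sketch (reusing the path, upstream name, and IP address that appear in the examples below):

    location /weatherforecast/ {
        proxy_pass http://testloadbalance;
        # deny the specified IP; every other IP can still access
        deny 223.88.45.26;
    }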

    The configuration above denies the specified IP address; to allow a specified IP address, you can configure it as follows:

    location /weatherforecast/ {
       proxy_pass http://testloadbalance;
       # this IP address can also be found in the nginx access log
       allow 223.88.45.26;
    }

    Note: if both deny and allow are configured in the same location block, the rules are checked in order and the first matching rule takes effect; the rules after it are ignored. For example:

    location /weatherforecast/ {
        proxy_pass http://testloadbalance;
        # if "deny all;" were placed first, the allow below would never take effect
        #deny all;
        allow 223.88.45.26;
        # placed after the allow, this blocks every other IP
        deny all;
    }
  • Adapting to PC or mobile clients

    Nowadays many mobile apps are developed with H5 or in hybrid mode, so a corresponding site needs to be deployed for mobile clients. How can nginx automatically serve the right page for PC or mobile?

    Prepare the environment

    Create the pcAndMobile directory in the nginx installation directory as follows:

    The contents of the directory are as follows:

    Each of the two index.html files contains only a single h1 tag, displaying the text "PC page" and "mobile page" respectively.

    Nginx configuration

    location / {
      # by default, look for pages in the pc directory
      root pcandmobile/pc;
      # when the User-Agent in the request header matches one of these, serve pages from the mobile directory instead
      if ($http_user_agent ~* '(Android|webOS|iPhone|iPod|BlackBerry)') {
          root pcandmobile/mobile;
      }
      index index.html;
    }

    The running effect is as follows:

    Essentially, the User-Agent in the request header is inspected, and as long as it matches a mobile device, the corresponding mobile page is served.

Conclusion

That's it for the common nginx functions. For development purposes, what has been shared here should be enough; if you need to go deeper, keep working at it. Next time we'll talk about configuring high availability: master/slave mode and dual-master mode.

A handsome guy made ugly by programming. Follow "Code Variety Circle" and learn with me ~~~