• Overview

  • Installation & Usage

  • Practical scenarios

    • Virtual hosts

    • Static site

    • Reverse proxy

    • Load balancing

    • HTTPS encrypted transmission

    • File server

    • Bandwidth limiting

    • Rate limiting

    • Blacklists and whitelists

    • Request interception

  • Configuration & Variable details

  • HTTP status codes

Overview

What is Nginx?

Nginx (pronounced “Engine X”) is a high-performance reverse proxy and web server. Because of its low system resource consumption, stable operation, and strong concurrent processing capability, Nginx is widely used by Internet companies.

Nginx features

  • High performance, high concurrency

  • Good scalability

  • Asynchronous non-blocking event-driven model

    Nginx vs. Apache:
      • Nginx: one process handles multiple requests (non-blocking)
      • Apache: one process handles one request (blocking)

Installation & Usage

Installation

Windows installation

  • Download the official stable build: https://nginx.org/download/nginx-1.20.0.zip
  • Unzip it locally and run nginx.exe directly

Linux installation

  • Install from RPM packages

RPM package download page:

Nginx.org/packages/rh…

$ rpm -ivh nginx-*.rpm
  • Open port 80 in the firewall
$ firewall-cmd --zone=public --add-port=80/tcp --permanent
$ firewall-cmd --reload

The RPM package bundles a large number of common modules; installing from RPM is recommended because it is quick and easy.

Usage

Command Meaning
nginx Start Nginx
nginx -V Show the version and compile-time configuration
nginx -t Check the configuration file syntax
nginx -s stop Stop worker child processes immediately
nginx -s quit Wait for worker child processes to finish handling in-flight requests, then stop (graceful shutdown)
nginx -s reload Re-read the configuration file

Practical scenarios

Virtual hosts

# 1: IP-based virtual hosts: same port, listening on the addresses of different network interfaces
server {
    listen 172.17.1.1:8000;
}
server {
    listen 172.17.1.2:8000;
}

# 2: Port-based virtual hosts: listening on different ports
server {
    listen 8001;
    server_name localhost;
}
server {
    listen 8002;
    server_name localhost;
}

# 3: Domain-based virtual hosts: same port, different server_name domains
server {
    listen 8003;
    server_name www.test1.com;
}
server {
    listen 8003;
    server_name www.test2.com;
}

Static site

To speed up page loading, you can serve dynamic resources from back-end servers and place the pure static front-end pages in a local directory for Nginx to serve directly.

server {
    listen 80;
    server_name localhost;

    location / {
        root /opt/nginx/html;
        index index.html index.htm;
    }
}

Reverse proxy

With a reverse proxy, the client accesses the proxy server; the reverse proxy server selects one of one or more back-end servers according to certain rules, fetches the response, and returns it to the client. The client only knows the proxy server’s IP address, not the back-end servers’ — the reverse proxy hides the back-end server information.

Layer 7 reverse proxy

In the http block of the configuration file nginx.conf, add the following configuration to proxy local port 8088 to Baidu:

server {
    listen       8088;
    server_name  localhost;

    location / {
        proxy_pass https://www.baidu.com;
    }
}

Layer 4 reverse proxy

Nginx can proxy not only layer-7 HTTP traffic but also layer-4 TCP/UDP traffic. This requires the core stream module, enabled at compile time with the --with-stream flag (included in the RPM package).

The configuration file is as follows:

stream {
    server {
        listen 3306;
        # Connections to local port 3306 are forwarded to the remote 3306
        proxy_pass 172.17.0.1:3306;
    }
}

Load balancing

In high-concurrency, high-traffic scenarios, a single back-end server cannot support normal service, so request traffic must be distributed across multiple service nodes according to certain rules. Even if one node goes down, the system can still provide service, which improves performance and stability.

Supported protocols (diagram omitted)

The upstream module

  • Defining upstream servers
Directive Meaning
upstream Block name; the upstream server addresses are defined inside it
server Defines an upstream server address
zone Defines shared memory, used to share data across worker child processes
keepalive Enables keepalive connections to upstream servers; sets the maximum number of idle keepalive connections per worker child process (with keepalive 16;, if 5000 requests arrive at once, 16 connections are kept open afterwards and the rest are closed)
keepalive_requests Maximum number of requests one keepalive connection can handle
keepalive_timeout Timeout for an idle keepalive connection, after which it is closed
hash Load balancing algorithm: generic hash
ip_hash Load balancing algorithm: hash based on the client IP address
least_conn Load balancing algorithm: least connections
least_time Load balancing algorithm: least response time
random Load balancing algorithm: random

Optional server parameters

Parameter Meaning
weight=number Weight; default is 1
max_conns=number Maximum number of concurrent connections to this upstream server
fail_timeout=time Window for judging the server unavailable (with fail_timeout=10s max_fails=3: if the server fails 3 times within 10 seconds, requests are not forwarded to it for the next 10 seconds; after that, forwarding resumes)
max_fails=number Number of failures before the server is judged unavailable
backup Backup server; used only when all other servers are unavailable
down Marks the server as unavailable long-term, offline for maintenance
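As a sketch of how these parameters combine (the addresses below are placeholders, not from the original):

upstream backend {
    # Placeholder upstream servers
    server 192.168.1.1:8080 weight=2 max_conns=500;
    # Judged unavailable after 3 failures within 10s; skipped for the next 10s
    server 192.168.1.2:8080 max_fails=3 fail_timeout=10s;
    # Used only when every other server is unavailable
    server 192.168.1.3:8080 backup;
}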

Load balancing algorithm

  • Round-robin (default)

    • Requests are distributed to the back-end servers one by one, in order
upstream backend {
    # All servers default to weight=1
    server 192.168.1.1:8080;
    server 192.168.1.2:8080;
    server 192.168.1.3:8080;
}
  • weight – weighted round-robin

    • Sets the polling probability; useful when back-end server performance is uneven
upstream backend {
    server 192.168.1.1:8080	weight=3;
    server 192.168.1.2:8080	weight=2;
    # default weight=1
    server 192.168.1.3:8080;		
}
  • hash

    • A hash algorithm maps a binary value of arbitrary length to a shorter, fixed-length binary value called the hash value; the mapping is irreversible.
    • hash $request_uri: distributes requests based on the hash of this variable
upstream backend {
    hash $request_uri;
    server 192.168.1.1:8080;
    server 192.168.1.2:8080;
    server 192.168.1.3:8080;	
}
  • ip_hash

    • Each request is allocated according to the hash result of the access IP, so that each visitor has fixed access to one back-end server, which is one solution to the session sharing problem
upstream backend {
    ip_hash;
    server 192.168.1.1:8080;
    server 192.168.1.2:8080;
    server 192.168.1.3:8080;	
}
  • Least connections

    • Picks the upstream server with the fewest currently established connections
    • In the extreme case it degenerates into round-robin
    • least_conn:
      • Multiple worker child processes handle requests concurrently but cannot see each other’s back-end connection counts, so shared memory must be opened to share this state across workers
      • zone zone_name 1m; opens the shared memory
upstream backend {
    least_conn;
    server 192.168.1.1:8080;
    server 192.168.1.2:8080;
    server 192.168.1.3:8080;	
}
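The bullets above can be sketched with the shared-memory zone in place (the zone name and size here are illustrative):

upstream backend {
    # Shared memory so all worker child processes see the same connection counts
    zone backend_zone 1m;
    least_conn;
    server 192.168.1.1:8080;
    server 192.168.1.2:8080;
    server 192.168.1.3:8080;
}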

Handling exceptions returned by the upstream server

Conditions under which the request is passed to the next upstream server

Syntax: proxy_next_upstream error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | http_429 | non_idempotent | off

Default: proxy_next_upstream error timeout

Context: http, server, location


Time limit after which attempts to pass a failed request to the next server stop

Syntax: proxy_next_upstream_timeout time

Default: proxy_next_upstream_timeout 0 (no limit)

Context: http, server, location


Maximum number of attempts to pass the request to the next server

Syntax: proxy_next_upstream_tries number

Default: proxy_next_upstream_tries 0 (no limit)

Context: http, server, location

Option Meaning
error An error occurred while sending the request to, or reading the response header from, the back-end server (e.g. the server is down, so the request goes to the next one)
timeout A timeout occurred while sending the request to, or reading the response header from, the back-end server (if no response arrives within proxy_read_timeout, the request goes to the next server; if all servers fail, 502 is returned)
invalid_header The back end returned an invalid response
http_500, 502, 503, 504, 403, 404, 429 The HTTP response status is the given code
non_idempotent Whether a failed non-idempotent request may be forwarded to the next back-end server (unset by default: e.g. a POST that hits a 404 returns 404 directly; be careful enabling this for write requests)
off Disables forwarding to the next server on failure

Example configuration

upstream backend {
    zone upstream_backend 64k;
    
    server 127.0.0.1:8080 weight=2 max_conns=1000 fail_timeout=10s max_fails=5;
    server test.nginx.com weight=1;
    
    keepalive 16;
    keepalive_requests 100;
    keepalive_timeout 30s;
}

server {
    location /test {
        proxy_pass http://backend/test;
        # If proxy_next_upstream is not configured, Nginx returns the HTTP error code to the client directly
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_403 http_404 http_429;
    }
}

HTTPS encrypted transmission

HTTPS protects data transmission between clients and servers over an encrypted channel and is mandatory for website deployments today. In HTTPS sites deployed behind an Nginx proxy cluster, SSL certificates are typically installed on the Nginx servers, which proxy the requests to the upstream back-end servers. In this mode, Nginx handles the SSL computation, which reduces the CPU load on the upstream back-end servers.

  • Generate a self-signed HTTPS certificate
  # 1. Create a directory for the HTTPS certificate
  cd /usr/local/nginx/conf/
  mkdir ssl
  # 2. Create the private key
  openssl genrsa -des3 -out https.key 1024
  # 3. Create the certificate signing request
  openssl req -new -key https.key -out https.csr
  # 4. Strip the passphrase from the key so Nginx can load it without prompting
  cp https.key https.key.org
  openssl rsa -in https.key.org -out https.key
  # 5. Sign the certificate with the key and CSR, with an expiry date
  openssl x509 -req -days 365 -in https.csr -signkey https.key -out https.crt
  • Server configuration
server {
    listen       443 ssl;
    server_name  localhost;

    # Certificate
    ssl_certificate      /usr/local/nginx/conf/ssl/https.crt; # RSA certificate
    ssl_certificate_key  /usr/local/nginx/conf/ssl/https.key; # RSA private key

    # TLS handshake optimization
    # 1 MB of shared session cache
    ssl_session_cache    shared:SSL:1m;
    # Session cache timeout: 5 minutes
    ssl_session_timeout  5m;
    keepalive_timeout    75s;
    keepalive_requests   100;

    location / {
        root  html;
        index index.html index.htm;
    }
}
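A common companion to this setup, shown here as a sketch (not part of the original configuration), is redirecting plain HTTP to the HTTPS site:

server {
    listen 80;
    server_name localhost;
    # Permanently redirect all plain-HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}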

File server

To archive data or files, a file server is essential. Nginx makes it very quick and easy to set up a simple file service.

  • Effect (screenshot omitted)

  • configuration
server {
    listen 8004;
    server_name localhost;

    # Without this, Chinese-named directories on a Windows server cannot be browsed
    charset gbk,utf-8;

    # Enable autoindex for requests ending in /
    autoindex on;

    # File size display:
    # on: exact size in bytes
    # off: human-readable, e.g. MB or GB
    autoindex_exact_size off;

    # Output format: html | xml | json | jsonp
    # Default: autoindex_format html
    autoindex_format html;

    # Time display format
    # on: 12-Jul-2019 10:11 (local time zone)
    # off: 12-Jul-2019 02:11 (GMT)
    autoindex_localtime on;

    location / {
        root /data/files/;
        # If a.html exists, its content is returned; otherwise the directory listing is returned
        index a.html;
    }
}

Bandwidth limiting

location /rate {
    # Limit the transfer rate of the response data; the unit defaults to bytes/s
    limit_rate 20;

    # These are variables related to Nginx request processing; returning them pads the response so the rate limit is easier to observe
    return 200 'request_time $request_time request_id $request_id server_name $server_name request_filename $request_filename document_root $document_root realpath_root $realpath_root request_completion $request_completion ';
}

Rate limiting

limit_conn

  • Limits the number of concurrent client connections
  • Uses shared memory so the state applies across all worker child processes (client connection counts must be stored)

limit_req

  • Limits the average rate at which client requests are processed
  • Uses shared memory so the state applies across all worker child processes
  • Limiting algorithm: leaky_bucket
    • Incoming water (requests) above the bucket is held back until some water in the bucket has leaked out, then it is let through.
    • Water that overflows the bucket is discarded outright.
http {
    include       mime.types;
    default_type  application/json;
    
    # limit_conn_zone key zone=name:size
    # key: unique identifier of the client to limit, e.g. $remote_addr
    # name: any name
    # size: size of the shared memory zone, in m (megabytes)
    # $binary_remote_addr uses a fixed 4 bytes, efficient; $remote_addr uses 7-15 bytes
    limit_conn_zone $binary_remote_addr zone=limit_addr:10m;
    
    # limit_req_zone key zone=name:size rate=rate;
    # Context: HTTP
    # rate: request rate allowed per client key; rate=12r/m means one request every 5 seconds
    limit_req_zone  $binary_remote_addr zone=limit_req:15m rate=12r/m;

    server {
        listen       80;
        server_name  localhost;

        location / {
           root   html;
        	  index  index.html index.htm;
            
            # Return status code (default: 503)
            # Context: HTTP, server, location
            limit_conn_status 503;
            
            # When rate limiting triggers, an error log entry is written; this sets its level
            # info|notice|warn|error
            # Context: HTTP, server, location
            # Default value: error
            limit_conn_log_level warn;
            
            # limit_conn zone number;
            # zone: use the zone name defined in limit_conn_zone
            # number: the maximum number of simultaneous connections allowed by a client identified by a zone
            limit_conn limit_addr 2;
            
            # Transfer rate of the response data, bytes/s
            # This directive belongs to ngx_http_core_module, not ngx_http_limit_conn_module
            limit_rate 50;

            # limit_req_status code
            # Default value: 503
            # Context: HTTP, server, location
            limit_req_status 504;
            
            # Log level used after the rate limit is triggered
            # info|notice|warn|error
            # Default value: error
            # Context: HTTP, server, location
            limit_req_log_level notice;
            
            # limit_req zone=name [burst=number] [nodelay | delay=number];
            # burst: queue of the given size; during a burst, requests over the rate are queued here first, and requests beyond the queue are rejected with the limit_req_status code
            # nodelay: if set, up to (burst + rate) requests are processed at once; requests beyond that are rejected
            # Context: HTTP, server, location
            # limit_req zone=limit_req burst=7 nodelay;
            limit_req zone=limit_req;
        }
    }
}

Blacklists and whitelists

access

  • Restrict access to specific IP addresses or network segments

  • allow

  • deny

server {
    listen       80;
    server_name  localhost;
    location / {
        # allow address | CIDR | unix: | all
        # Default: no allow rules
        # Context: http, server, location, limit_except
        allow 192.168.0.1/24;

        # deny address | CIDR | unix: | all
        # Default: no deny rules
        # Context: http, server, location, limit_except
        deny all;
    }
}

Example rules

location / {
    # Rules are evaluated top to bottom

    # Deny this single address
    deny   192.168.1.1;

    # Allow the subnet with a 24-bit mask (255.255.255.0), except 192.168.1.1 above
    allow  192.168.1.0/24;

    # Allow the subnet with a 16-bit mask (255.255.0.0)
    allow  10.1.1.0/16;

    # Allow an IPv6 range
    allow  2001:0db8::/32;

    # Deny everything else
    deny   all;
}

Request interception

auth_request

  • Performs access control based on the HTTP response code of a subrequest
    • For example: intercept every request, fire an authentication subrequest first, and let the request through only if it succeeds
location /private {
    # Default: off
    # Context: http, server, location
    # If authentication succeeds, the real content is returned; if it fails, the auth service's response is returned
    auth_request /auth;
    ...
}

location /auth {
    proxy_pass http://localhost:8080/auth;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

Configuration & Variable details

The main (global) section

Core parameters (other parameters are rarely needed):

# user USERNAME [GROUP]
# Owner (and optionally group) of the Nginx worker child processes; the group may be omitted
user  nginx;

# worker_processes NUMBER | auto
# Number of worker child processes Nginx starts
# [auto: set to the number of physical CPU cores]
worker_processes  auto;

# pid DIR
# Path of the Nginx master process pid file
pid /opt/nginx/logs/nginx.pid;

# worker_rlimit_nofile NUMBER
# Maximum number of file handles a worker child process may open
# System file handles are limited (often 65535) and every connection consumes handles,
# so this value usually needs to be raised
worker_rlimit_nofile 20480;

# worker_rlimit_core SIZE
# Size of the core file written when a worker child process terminates abnormally, used for post-mortem analysis
worker_rlimit_core 50M;
# Directory for core files; the worker user must have write permission
working_directory /opt/nginx/tmp;

# Bind each worker child process to a physical CPU core
# The master process manages the workers; the workers handle requests
# Without binding, worker1 may be scheduled off CPU1 at one moment while CPU1 handles other events,
# and at the next moment worker1 may land on CPU2 or CPU3. Binding worker1 to CPU1 keeps it on that
# core and makes full use of the CPU cache.
# [Main benefit: avoids the same worker switching between CPU cores, which invalidates caches and
# hurts performance; it cannot eliminate process switching itself, which is how CPUs work]
# -- worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000; # 8 cores, 8 workers
# -- worker_cpu_affinity 01 10 01 10; # 2 cores, 4 workers
worker_cpu_affinity 0001 0010 0100 1000; # 4 cores, 4 workers

# Nice value of the worker child processes, adjusting Nginx's scheduling priority;
# usually set negative so Nginx is scheduled first
# The default Linux process priority is 120; smaller means higher priority. nice ranges from -20 to +19,
# so on Linux the resulting priority value ranges from 100 to 139.
worker_priority -20;

# Timeout for graceful worker shutdown; after 5s the worker exits whether or not its requests have finished
worker_shutdown_timeout 5s;

# Precision of the timers used inside worker child processes. A larger interval means fewer system
# calls (e.g. fetching kernel time for timed operations) and better performance; a smaller one, the opposite
timer_resolution 100ms;

# daemon on | off
# Run Nginx in the foreground (for debugging) or background (for production)
daemon on;

# Path of the load-balancing mutex lock file
lock_file logs/nginx.lock;

The events section

events {
    # Event-driven model used by Nginx; usually left unset so Nginx picks the best one
    # use epoll;

    # Maximum number of concurrent connections a worker child process can handle
    worker_connections  65535;

    # Load-balancing lock between workers (when the master receives a connection it wakes the workers;
    # with accept_mutex on, the workers take turns accepting instead of all waking at once)
    accept_mutex on;

    # Timeout for acquiring the mutex; default 500ms. After it expires, the connection goes to the next worker
    accept_mutex_delay 100ms;

    # Whether a worker accepts all pending new connections at once (little impact on performance)
    multi_accept on;
}

HTTP section

  • The server section
server {
    listen 80;
    server_name www.test.com;
    location /picture {
        root /opt/nginx/html/picture;
        # Client request: www.test.com/picture/1.jpg
        # Maps to the disk path: /opt/nginx/html/picture/picture/1.jpg
    }
    location /picture {
        alias /opt/nginx/html/picture/;
        # Client request: www.test.com/picture/1.jpg
        # Maps to the disk path: /opt/nginx/html/picture/1.jpg
        # [alias must always end with /]
    }
}
  • server_name matching rules
# Exact match: highest priority (1)
server_name www.test.com;

# Leftmost wildcard: priority 2
server_name *.test.com;

# Rightmost wildcard: priority 3
server_name www.test.*;

# Regular expression: lowest priority (4)
server_name ~^w\.test\..*$;

# Multiple names
server_name www.test.com *.test.com www.test.* ~^w\.test\..*$;
  • location matching rules
Match rule Meaning Example Priority (1 = highest)
= Exact match location = /pic/ 1
^~ Stop searching once this prefix matches location ^~ /pic/ 2
~ Regex match, case sensitive location ~ \.(jpg|gif)$ 3
~* Regex match, case insensitive location ~* \.(jpg|gif)$ 4
(none) Plain prefix match location / 5
@ Internal jump target location @errorpage
# Test sample
location ~ /test/8005/t/$ {
		return 200 'first regular expressions match! ';
}
location ~* /test/8005/t/(\w+)$ {
		return 200 'longest regular expressions match! ';
}
location ^~ /test/8005/t/ {
		return 200 'stop regular expressions match! ';
}
location /test/8005/t/Test2 {
		return 200 'longest prefix string match! ';
}
location /test/8005/t {
		return 200 'prefix string match! ';
}
location = /test/8005/t {
		return 200 'exact match! ';
}
Difference between root and alias
            root                              alias
Syntax      root path                         alias path
Context     http, server, location, if        location
Difference  The defined path is concatenated with the URI   Only the defined path is used; must always end with /

Difference between a location with and without a trailing /
Without /   location /test    Tries to treat test as a directory; if not found, looks for the file test
With /      location /test/   Treats test as a directory; returns 404 if it does not exist
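The root/alias comparison above can be sketched as follows (the paths are illustrative):

# Request: /static/logo.png
location /static {
    # root concatenates the URI: serves /opt/www/static/logo.png
    root /opt/www;
}

# Request: /assets/logo.png
location /assets/ {
    # alias replaces the matched prefix: serves /opt/www/static/logo.png
    alias /opt/www/static/;
}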
location /status {
    # Monitoring module
    stub_status;
}
# ------ Example page output ------
Active connections: 2 
server accepts handled requests
 16 16 26 
Reading: 0 Writing: 1 Waiting: 1 
Item Meaning
Active connections Current number of TCP connections between clients and Nginx; equals Reading + Writing + Waiting
accepts Total client connections accepted since Nginx started
handled Total client connections handled since Nginx started; equals accepts unless worker_connections was exceeded
requests Total client requests handled since Nginx started; with HTTP keep-alive, requests is larger than handled
Reading Connections currently reading HTTP request headers
Writing Connections currently sending response data to clients
Waiting Currently idle HTTP keep-alive connections
Built-in variables
Variable Meaning
$connections_active Same as Active connections
$connections_reading Same as Reading
$connections_writing Same as Writing
$connections_waiting Same as Waiting
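These variables can also be returned directly, without rendering the status page; a sketch (the /metrics path is illustrative, and the stub_status module must be compiled in):

location /metrics {
    default_type text/plain;
    return 200 "active=$connections_active reading=$connections_reading writing=$connections_writing waiting=$connections_waiting\n";
}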
  • rewrite & return directives
    • return
      • Stop processing the request and return the response code directly or redirect to another URL
      • After the return directive is executed, subsequent directives in Location are not executed
    • rewrite
      • Rewrites the URL based on the specified regular expression matching rules
location / {
    
    # Context: server, location, if
    # return code [text];
    # text: Response body content (if code is 200)
    # return 200 "return 200 HTTP Code";
    # return code URL;
    # URL: redirect
    # return 302 /test;
    # return URL;
    # URL: a full URL, must start with http:// or https://
    return http://localhost:8000/test;
}

location /test {
    index test.html;
}

location /search {
    
    # rewrite regex replacement [flag]
    # Context: server, location, if
    # flag: 
    # last: The rewritten URL initiates a new request, re-enters the server segment, and retries the match in the location
    # break: Use the rewritten URL directly, no longer matching statements in other locations
    # Redirect: Return to 302 temporary redirect
    # permanent: Returns a 301 permanent redirect
    rewrite /(.*) https://www.baidu.com permanent;
}

location /test1 {
    # last: the rewritten URL re-enters location matching (may land in /test2)
    rewrite /images/(.*) /test2/$1 last;
    return 200 "return 200 in /test1";
}

location /test2 {
    # break: use the rewritten URL directly (served from /test3), without matching other locations
    rewrite /pics/(.*) /test3/$1 break;
    return 200 "return 200 in /test2";
}

location /test3 {
    # If reached via the break in /test2, the file at the rewritten path (e.g. /test3/index.html) is served;
    # a direct request to /test3 returns the text below
    return 200 "return 200 in /test3";
}

location /test4/ {
    if ( $remote_addr = "192.168.1.1" ) {
        return 200 "test if OK in URL /test4/";
    } 
}

location /test5 {
    if ( $uri = "/images/" ) {
        rewrite (.*) /test2/ break;
    }
    # This return executes after the rewrite; rewrite and return are usually not combined like this
    return 200 "test5 if failed\n";
}

Nginx variable classification

1. Variables related to the TCP connection
	# Client address, e.g. 192.168.1.1
	remote_addr

	# Client port, e.g. 58473
	remote_port

	# Client address in binary (integer) format
	binary_remote_addr

	# Connection serial number; an increasing sequence
	connection

	# Number of requests on the current connection; meaningful for keepalive connections
	connection_requests

	# With proxy_protocol, the original client address; otherwise empty
	proxy_protocol_addr

	# With proxy_protocol, the original client port; otherwise empty
	proxy_protocol_port

	# Server address, e.g. 192.168.184.240
	server_addr

	# Server port, e.g. 80
	server_port

	# Server protocol, e.g. HTTP/1.1
	server_protocol

2. Variables related to the HTTP request
	# Request body length (Content-Length header)
	content_length

	# Request body type (Content-Type header)
	content_type

	# A single URL parameter: arg_NAME; all URL parameters:
	args

	# "?" if the URL has parameters, otherwise empty
	is_args

	# Identical to args
	query_string

	# The requested URL, without parameters
	uri

	# The requested URL, with parameters
	request_uri

	# Protocol name: http or https
	scheme

	# Request method: GET, HEAD, POST, etc.
	request_method

	# Size of the whole request: request line, headers, and body
	request_length

	# Username from HTTP Basic Authentication
	remote_user

	# Temporary file holding the client request body
	request_body_file

	# The request body; meaningful in locations using proxy_pass or fastcgi_pass
	request_body

	# Host header; falls back to the matching server_name
	host

	# User agent (browser) identifier
	http_user_agent

	# The link the request came from (Referer header)
	http_referer

	# Proxy servers the request passed through (Via header)
	http_via

	# The client's real IP address when behind proxies (X-Forwarded-For header)
	http_x_forwarded_for

	# The client's cookies
	http_cookie

		                                
3. Variables related to Nginx request processing
	# Time spent processing the request, in seconds, e.g. 0.03 for 30 milliseconds
	request_time

	# "OK" if request processing is complete, otherwise empty
	request_completion

	# Randomly generated request ID in hexadecimal
	request_id

	# The server_name value that matched the request
	server_name

	# "on" if HTTPS is enabled, otherwise empty
	https

	# Full path of the file being accessed
	request_filename

	# Folder path derived from the URI and the root/alias rules
	document_root

	# document_root with soft links resolved to real paths
	realpath_root

	# Upper speed limit for the response
	limit_rate
		                                
4. Variables related to the Nginx response
	# Size of the actual content in the response body
	body_bytes_sent

	# Total size of the response sent
	bytes_sent

	# HTTP response status code
	status


5. System variables
	# Nginx version
	nginx_version

	# Server time
	time_local
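Many of these variables are typically consumed in an access-log format. As a sketch (the format name and log path are illustrative), this is essentially the stock combined-style format:

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
}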

Time and space units

Time units

  • ms: milliseconds
  • s: seconds
  • m: minutes
  • h: hours
  • d: days
  • w: weeks
  • M: months
  • y: years

Space units

  • k/K: KB
  • m/M: MB
  • g/G: GB
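These units appear directly in directive values; a sketch with illustrative numbers:

http {
    client_max_body_size 10m;   # space unit: 10 MB upload limit
    proxy_read_timeout   60s;   # time unit: 60 seconds
    keepalive_timeout    75s;
    ssl_session_timeout  5m;    # time unit: 5 minutes here, not megabytes
}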

HTTP status codes

Class Description
1xx Informational: the server received the request and the requester should continue
2xx Success: the request was received and processed successfully
3xx Redirection: further action is required to complete the request
4xx Client error: the request contains a syntax error or cannot be fulfilled
5xx Server error: the server failed while processing the request
| Status code | Description |
| --- | --- |
| 100 | Continue. The client should continue with its request |
| 101 | Switching protocols. The server switches protocols as requested by the client, e.g. to a newer version of HTTP |
| 200 | OK. The request succeeded; typically returned for GET and POST requests |
| 201 | Created. The request succeeded and a new resource was created |
| 202 | Accepted. The request has been accepted, but processing is not complete |
| 203 | Non-authoritative information. The request succeeded, but the returned meta information comes from a copy rather than the origin server |
| 204 | No content. The server processed the request successfully but returns no content; the browser keeps displaying the current document without updating the page |
| 205 | Reset content. The server processed the request successfully, and the client (e.g. the browser) should reset the document view, for example to clear form fields |
| 206 | Partial content. The server successfully processed a partial GET request |
| 300 | Multiple choices. The requested resource is available at multiple locations; a list of resource characteristics and addresses can be returned for the client (e.g. the browser) to choose from |
| 301 | Moved permanently. The resource has moved permanently to a new URI; the response includes the new URI and the browser redirects automatically. Future requests should use the new URI |
| 302 | Found (moved temporarily). Similar to 301, but the resource has moved only temporarily; the client should continue to use the original URI |
| 303 | See other. Similar to 301; the response should be retrieved with a GET request at a different URI |
| 304 | Not modified. The requested resource has not changed, so the server returns no body. Clients typically cache resources and send a conditional header asking only for resources modified after a given date |
| 305 | Use proxy. The requested resource must be accessed through the specified proxy |
| 306 | (Unused.) A status code no longer used in current versions of HTTP |
| 307 | Temporary redirect. Similar to 302, but the client must not change the request method when following the redirect |
| 400 | Bad request. The request has a syntax error the server cannot understand |
| 401 | Unauthorized. The request requires user authentication |
| 402 | Payment required. Reserved for future use |
| 403 | Forbidden. The server understands the request but refuses to fulfil it |
| 404 | Not found. The server cannot find the requested resource (web page). With this code, a web designer can set up a personalized "the resource you requested could not be found" page |
| 405 | Method not allowed. The method in the client's request is disabled for this resource |
| 406 | Not acceptable. The server cannot produce a response matching the content characteristics requested by the client |
| 407 | Proxy authentication required. Similar to 401, but the client must authenticate with the proxy |
| 408 | Request timeout. The server timed out waiting for the client to send the request |
| 409 | Conflict. The server encountered a conflict while processing the request; this may be returned when completing a PUT request |
| 410 | Gone. The requested resource no longer exists. Unlike 404, 410 indicates the resource has been permanently removed; the site designer can point to a new location with a 301 |
| 411 | Length required. The server refuses to process the request without a Content-Length header |
| 412 | Precondition failed. A precondition given in the request headers evaluated to false |
| 413 | Payload too large. The request entity is too large for the server to process; the server may close the connection to stop the client from continuing, and may include a Retry-After header if the condition is temporary |
| 414 | URI too long. The request URI (usually a URL) is too long for the server to process |
| 415 | Unsupported media type. The server cannot process the media format attached to the request |
| 416 | Range not satisfiable. The range requested by the client is invalid |
| 417 | Expectation failed. The server cannot satisfy the Expect request header |
| 500 | Internal server error. The server encountered an error and could not complete the request |
| 501 | Not implemented. The server does not support the requested functionality |
| 502 | Bad gateway. The server, acting as a gateway or proxy, received an invalid response from the upstream server |
| 503 | Service unavailable. The server is temporarily unable to handle the request due to overload or maintenance; the length of the delay may be given in a Retry-After header |
| 504 | Gateway timeout. The server, acting as a gateway or proxy, did not receive a timely response from the upstream server |
| 505 | HTTP version not supported. The server does not support the HTTP version used in the request |
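Several of these codes can be produced or handled directly in Nginx configuration. A minimal sketch (the paths and URIs below are illustrative):

```nginx
server {
    listen 80;

    # 301: permanently redirect an old path to a new URI
    location /old-page {
        return 301 /new-page;
    }

    # 403: refuse access to hidden dotfiles
    location ~ /\. {
        return 403;
    }

    # 404 and 5xx: serve custom error pages for these status codes
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
}
```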

The above is just my humble summary; you are welcome to discuss and study it together!