The nginx configuration file is divided into four kinds of sections: main (global settings), server (virtual-host settings), upstream (reverse-proxy and load-balancing settings), and location (settings for URLs matching a specific path). Directives set in the main section affect all other sections. The server section mainly defines a virtual host's domain name, IP address, and port number; the upstream section defines a group of back-end servers, setting up reverse proxying and load balancing for them; the location section matches locations in the URL space (for example "/" or "/images"). Their relationship: server inherits from main, and location inherits from server; upstream neither inherits from nor is inherited by other sections.
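A minimal skeleton showing how these sections nest (the values here are placeholders for illustration, not recommendations):

```nginx
# main section: global settings
user  www www;
worker_processes  4;

events {
    worker_connections  1024;
}

http {
    # upstream section: a named group of back-end servers
    upstream backend {
        server 192.168.0.14:80;
        server 192.168.0.15:80;
    }

    # server section: one virtual host
    server {
        listen 80;
        server_name example.com;   # placeholder domain

        # location section: matches a URL prefix
        location / {
            proxy_pass http://backend;
        }
    }
}
```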

Nginx currently supports several directive contexts, which are described in detail below.

nginx.conf configuration file


The following is a detailed introduction to the nginx.conf configuration file. In many cases you will not need all of these parameters; treat them as a reference (a general annotated version follows).

# Define the user and group that the Nginx worker processes run as
user www www; 

# Number of worker processes, usually set equal to the number of CPU cores
worker_processes 4; 

# Define the global error log path and level: [debug | info | notice | warn | error | crit]
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;


# Process PID file
#pid logs/nginx.pid;


# Specifies the maximum number of file descriptors a worker process can open
# (working mode and maximum number of connections).
# In theory this should be the maximum number of open files (ulimit -n) divided by
# the number of worker processes, but nginx does not distribute requests evenly,
# so it is best to keep it equal to ulimit -n.
# For example, if this is set to 10240 and total concurrency reaches 30,000-40,000,
# some processes may exceed 10240 descriptors and a 502 error will be returned.
worker_rlimit_nofile 65535;
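The ulimit value referenced above can be checked from the shell (on Linux; the number varies by system):

```shell
# Print the current per-process limit on open file descriptors;
# worker_rlimit_nofile is usually set to match this value.
ulimit -n
```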


events {
    # Event model to use: [kqueue | rtsig | epoll | /dev/poll | select | poll].
    # epoll is a high-performance network I/O model available in Linux kernel
    # versions 2.6 and above. Use epoll on Linux and kqueue on FreeBSD.
    # Supplementary notes:
    # Like Apache, nginx has different event models for different operating systems.
    # A) Standard event models
    #    select and poll are the standard event models. nginx falls back to select
    #    or poll when no more efficient method is available.
    # B) Efficient event models
    #    kqueue: FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and macOS X. Using kqueue
    #    on macOS X with dual processors can cause a kernel crash.
    #    epoll: Linux kernel 2.6 and later.
    #    /dev/poll: Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and
    #    Tru64 UNIX 5.1A+.
    #    eventport: Solaris 10. Install the security patches to prevent kernel crashes.
    use epoll;
    
    
    # Maximum number of connections per worker process
    # (max clients = worker_processes * worker_connections).
    # Adjust to your hardware in combination with worker_processes: as large as
    # possible, but without driving the CPU to 100%.
    worker_connections  1024;
    
    # Keepalive timeout
    keepalive_timeout 60;
    
    # Buffer size for client request headers. This can be set according to your
    # system's page size. A request header is generally no larger than 1K, but
    # since most system pages are larger than 1K, this is set to the page size.
    # The page size can be obtained with the command getconf PAGESIZE:
    #[root@web001 ~]# getconf PAGESIZE
    # client_header_buffer_size must be set to an integer multiple of the page size.
    client_header_buffer_size 4k;
    
    # max specifies the number of cached entries (recommended: the same as the
    # maximum number of open files); inactive specifies how long a file may go
    # unrequested before its cache entry is removed.
    open_file_cache max=65535 inactive=60s;
    
    
    # How often to check the cached entries for validity.
    # Syntax: open_file_cache_valid time   Default: open_file_cache_valid 60s
    # Contexts: http, server, location. Specifies when items in open_file_cache
    # need to be checked for validity.
    open_file_cache_valid 80s;
    
    
    # Minimum number of times a file must be used within the inactive period for
    # its descriptor to stay open in the cache. In the example above, a file that
    # is not used at least once within the inactive time is removed.
    # Syntax: open_file_cache_min_uses number   Default: open_file_cache_min_uses 1
    # Contexts: http, server, location. With a larger value, file descriptors stay
    # open in the cache longer.
    open_file_cache_min_uses 1;
    
    # Syntax: open_file_cache_errors on | off   Default: open_file_cache_errors off
    # Contexts: http, server, location. Specifies whether file-lookup errors are
    # cached as well.
    open_file_cache_errors on;
}
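Note that in stock nginx, keepalive_timeout, client_header_buffer_size and the open_file_cache* directives are defined for the http (and server/location) contexts, not for events; placing them inside events{} as above makes `nginx -t` fail. A corrected placement would look like this sketch:

```nginx
events {
    use epoll;
    worker_connections  1024;
}

http {
    # these directives belong in the http context
    keepalive_timeout 60;
    client_header_buffer_size 4k;
    open_file_cache max=65535 inactive=60s;
    open_file_cache_valid 80s;
    open_file_cache_min_uses 1;
    open_file_cache_errors on;
    # ... rest of the http block ...
}
```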


# Configure the HTTP server and use its reverse-proxy capability to provide load balancing
http{
    # File extension and file type mapping table
    include mime.types;
    
    # Default MIME type
    default_type application/octet-stream;
    
    # Default character set
    charset utf-8;
    
    # Hash table size for server names.
    # The hash tables that hold server names are controlled by the directives
    # server_names_hash_max_size and server_names_hash_bucket_size. The bucket size
    # is aligned to a multiple of the processor's cache line size; this speeds up
    # key lookups by reducing the number of memory accesses. If the bucket size
    # equals one cache line, the worst-case lookup touches memory twice: once to
    # determine the bucket address and once to find the key within it. Therefore,
    # if nginx reports that hash max size or hash bucket size needs to be
    # increased, increase the former parameter first.
    server_names_hash_bucket_size 128;
    
    # Buffer size for client request headers. This can be set according to the page size of your system. Generally, the header size of a request should not exceed 1K, but since most system pages are larger than 1K, this is set to the page size. The PAGESIZE can be obtained by using the getconf PAGESIZE command.
    client_header_buffer_size 32k;
    
    # Buffer size for large client request headers. By default, nginx reads header
    # values using client_header_buffer_size, and uses large_client_header_buffers
    # when a header is too large.
    large_client_header_buffers 4 64k;
    
    # Maximum size of files uploaded through nginx
    client_max_body_size 8m;
    
    # Enable efficient file transfer mode. The sendfile directive specifies whether
    # nginx calls sendfile() (zero copy) to output files. Set it to on for common
    # applications; set it to off for applications with heavy disk I/O loads, such
    # as download servers, to balance disk and network I/O and reduce system load.
    # Note: change this to off if images do not display properly.
    sendfile on;
    
     # Enable directory list access, suitable for download server, disable by default.
    autoindex on;
    
    # Enables or disables the socket TCP_CORK option; only used when sendfile is on
    tcp_nopush on;
     
    # Disable Nagle's algorithm (send small packets without delay)
    tcp_nodelay on;
    
    # Long connection timeout, in seconds
    keepalive_timeout 120;
    
    # The FastCGI parameters below are designed to improve site performance:
    # reduce resource usage and increase response speed. Their names are
    # self-explanatory.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    
    # gzip module settings
    gzip on;               # Enable gzip-compressed output
    gzip_min_length 1k;    # Minimum response size to compress
    gzip_buffers 4 16k;    # Compression buffers
    gzip_http_version 1.0; # Protocol version (default 1.1; use 1.0 for squid 2.5)
    gzip_comp_level 2;     # Compression level
    gzip_types text/plain application/x-javascript text/css application/xml;    # MIME types to compress
    gzip_vary on;

    # This parameter is required when limiting the number of connections per IP
    # (note: limit_zone was replaced by limit_conn_zone in nginx 1.1.8+)
    #limit_zone crawler $binary_remote_addr 10m;
    
    
    # Load balancing configuration
    upstream piao.jd.com {
     
        # Servers for load balancing. weight can be set according to each machine's
        # configuration: the higher the weight, the greater the probability of a
        # request being assigned to that server.
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;

        # upstream currently supports several allocation methods:
        #1. round-robin (polling, default)
        # Each request is allocated to a different back-end server one by one in time order. If the back-end server is down, it can be automatically deleted.
        #2. weight
        # specifies the polling probability, weight is proportional to the access ratio, used in the case of uneven backend server performance.
        # such as:
        #upstream bakend {
        # server 192.168.0.14 weight=10;
        # server 192.168.0.15 weight=10;
        #}
        #3. ip_hash
        # Each request is assigned based on the hash result of the access IP, so that each visitor accesses a fixed back-end server, which can solve the session problem.
        # such as:
        #upstream bakend {
        # ip_hash;
        # server 192.168.0.14:88;
        # server 192.168.0.15:80;
        #}
        #4. fair (third party)
        # Requests are allocated according to the response time of the back-end server, and those with short response time are allocated first.
        #upstream backend {
        # server server1;
        # server server2;
        # fair;
        #}
        #5. url_hash (third party)
        # Allocate requests based on the hash result of the accessed URL, so that each URL is directed to the same back-end server. This is effective when the back-end server is cached.
        "(" hash") while reading upstream. "hash_method" is the hash algorithm used
        #upstream backend {
        # server squid1:3128;
        # server squid2:3128;
        # hash $request_uri;
        # hash_method crc32;
        #}

        #tips:
        #upstream bakend { # define the IP address and state of each device
        # ip_hash;
        # server 127.0.0.1:9090 down;
        # server 127.0.0.1:8080 weight=2;
        # server 127.0.0.1:6060;
        # server 127.0.0.1:7070 backup;
        #}
        # add proxy_pass http://bakend/ to the server block that needs load balancing

        # Each server's state can be set to:
        #1. down: this server temporarily does not participate in the load.
        #2. weight: the greater the weight, the greater the share of the load.
        #3. max_fails: number of failed requests allowed (default 1). When exceeded,
        #   the error defined by the proxy_next_upstream module is returned.
        #4. fail_timeout: how long to pause the server after max_fails failures.
        #5. backup: receives requests only when all non-backup machines are down or
        #   busy, so this machine carries the least load.

        # nginx supports defining multiple load-balancing groups at the same time,
        # for use by different servers.
        #client_body_in_file_only Set to On to record data sent by the client to a file for debugging
        #client_body_temp_path Sets the directory for recording files. You can set up to three levels of directories
        #location matches the URL. You can redirect or do a new proxy load balancing
    }
    
    
    # Virtual host configuration
    server {
        # Listening port
        listen 80;

        # There can be more than one domain name, separated by spaces
        server_name www.jd.com jd.com;
        # Default entry file names
        index index.html index.htm index.php;
        root /data/www/jd;

        # Load balancing for PHP: forward requests ending in .php/.php5 to FastCGI
        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        # Set image cache time
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
        {
            expires 10d;
        }
         
        # JS and CSS cache time settings
        location ~ .*\.(js|css)?$
        {
            expires 1h;
        }

        # Log format settings
        #$remote_addr and $http_x_forwarded_for: record the client's IP address;
        #$remote_user: record the name of the client user;
        #$time_local: record access time and time zone;
        #$request: record the REQUEST url and HTTP protocol;
        #$status: record the request status (200 on success);
        #$body_bytes_sent: records the size of the body of the file sent to the client;
        #$http_referer: record the page the request was linked from;
        #$http_user_agent: Record information about the client's browser;
        # Normally a web server placed behind a reverse proxy cannot obtain the
        # client's real IP address; the address it sees is that of the reverse proxy.
        # The reverse proxy can add X-Forwarded-For information to the HTTP header of
        # the forwarded request to record the original client's IP address and the
        # server address the client requested.
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" $http_x_forwarded_for';
         
        # Define the access log for this virtual host
        access_log  /usr/local/nginx/logs/host.access.log  main;
        access_log  /usr/local/nginx/logs/host.access.404.log  log404;
         
        # Enable reverse proxying for "/connect-controller"
        location /connect-controller {
            proxy_pass http://127.0.0.1:88; # Note: this port must differ from the port the virtual host listens on
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
             
            # The Web server on the back end obtains the user's real IP address through X-forwarded-for
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             
            # The following reverse-proxy settings are optional.
            proxy_set_header Host $host;

            # Maximum number of bytes per file that can be requested by a client
            client_max_body_size 10m;

            # Maximum number of bytes of a client request body buffered by the proxy.
            # If set to a reasonably large value such as 256k, submitting any image
            # smaller than 256k works in both Firefox and Internet Explorer. If you
            # comment this directive out and rely on the default
            # client_body_buffer_size (twice the operating system page size, i.e.
            # 8k or 16k), larger uploads fail with 500 Internal Server Error.
            client_body_buffer_size 128k;

            # Make nginx intercept back-end replies with HTTP status 400 or higher
            proxy_intercept_errors on;

            # Timeout for connecting to the back-end server: initiating the
            # handshake and waiting for a response (nginx-to-backend connect timeout)
            proxy_connect_timeout 90;

            # Data-return time of the back-end server (proxy send timeout): the back
            # end must complete all data transmission within this time
            proxy_send_timeout 90;

            # Response time of the back-end server after a successful connection
            # (proxy read timeout): how long to wait for the back end to respond,
            # i.e. the time the request spends queued and being processed there
            proxy_read_timeout 90;

            # Size of the buffer in which the proxy (nginx) holds the first part of
            # the upstream response, which normally contains a small response header.
            # By default this equals the size of one buffer from the proxy_buffers
            # directive, but it can be set smaller.
            proxy_buffer_size 4k;

            # proxy_buffers settings, for pages averaging under 32k
            # Sets the number and size of buffers used to read responses (from the proxy server). The default is also the paging size, which can be 4k or 8K depending on the operating system
            proxy_buffers 4 32k;

            # Buffer size under high load (proxy_buffers * 2)
            proxy_busy_buffers_size 64k;

            # Size of data written to proxy_temp_path at a time, to prevent a worker
            # process from blocking too long while passing files.
            # Responses larger than this value are sent synchronously from the
            # upstream server rather than buffered to disk.
            proxy_temp_file_write_size 64k;
        }
        
        # Local dynamic/static separation reverse-proxy configuration
        # All JSP pages are processed by Tomcat or resin
        location ~ \.(jsp|jspx|do)?$ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
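Putting the load-balancing pieces together, a minimal, self-contained sketch that distributes requests across the weighted back ends described above could look like this (the IP addresses and upstream name are the illustrative values from the text):

```nginx
events { worker_connections 1024; }

http {
    upstream piao.jd.com {
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;
    }

    server {
        listen 80;
        server_name www.jd.com jd.com;

        location / {
            proxy_pass http://piao.jd.com;   # forward to the upstream group
            proxy_set_header Host $host;     # preserve the original Host header
            proxy_set_header X-Real-IP $remote_addr;  # client IP for the back end
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

Requests to port 80 are then distributed roughly in a 3:2:3 ratio across the three back ends.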

nginx.conf for nginx 1.14.2 on Windows


The following nginx.conf is a simple example of nginx as a front-end reverse proxy server: it serves static files such as JS and PNG itself and forwards dynamic requests such as JSP to other servers (see the explanations above for the configuration parameters involved):

worker_processes  1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    # '$status $body_bytes_sent "$http_referer" '
    # '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;
    error_log   logs/ssl.error.log crit;

    sendfile        on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout  65;

    #gzip on;

    server {
        listen       8088;
        server_name  localhost;

        charset utf-8;

        #access_log logs/host.access.log main;

        # Entry file settings
        location / {
            root   D:/vueWork/dentalFactory/dist; # Directory where the entry file is located
            index  index.html index.htm; # Default entry file names
        }
        
        # Tomcat reverse proxy configuration
        location /location-name/ { # "location-name" is a placeholder path
            proxy_pass http://192.168.1.10:8080; # e.g. http://127.0.0.1:8080/service-name(project-name)/
            #proxy_set_header Host $host;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        
        
        # Static resource proxy configuration (generally do not configure, here is an example of configuring image resources configuration)
        #location /images/ {
        # root D:/A-studySpace/nginxDemo;
        #}
        
        # This is where the 404 page is configured
        #error_page 404 /404.html;

        # Error pages for the listed request status codes are configured here
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # PHP reverse proxy configuration
        # Forward all PHP page requests to PHP-fpm for processing
        #location ~ \.php$ {
        # root html;
        # fastcgi_pass 127.0.0.1:9000;
        # fastcgi_index index.php;
        # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # there are many fastcgi_param parameters; adjust as necessary
        # include fastcgi_params;
        #}

       
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    # listen 8000;
    # listen somename:8080;
    # server_name somename alias another.alias;

    # location / {
    # root html;
    # index index.html index.htm;
    #}
    #}


    # HTTPS server
    #
    #server {
    # listen 443 ssl;
    # server_name localhost;

    # ssl_certificate cert.pem;
    # ssl_certificate_key cert.key;

    # ssl_session_cache shared:SSL:1m;
    # ssl_session_timeout 5m;

    # ssl_ciphers HIGH:!aNULL:!MD5;
    # ssl_prefer_server_ciphers on;

    # location / {
    # root html;
    # index index.html index.htm;
    #}
    #}

}
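To try the commented-out HTTPS server block above, you need a certificate and key. For local testing, a throwaway self-signed pair can be generated with openssl (assuming openssl is installed; cert.pem and cert.key match the placeholder names used in the block):

```shell
# Generate a self-signed certificate and key for local HTTPS testing only;
# production deployments should use a certificate from a real CA.
openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout cert.key -out cert.pem \
    -days 365 -subj "/CN=localhost"
```

Place both files where the `ssl_certificate` and `ssl_certificate_key` directives can find them (relative paths are resolved against the nginx configuration directory).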
