Nginx software directories:

Working directory: /etc/nginx
- This is the main nginx directory; it holds the configuration files.

Executable file: /usr/sbin/nginx
- The nginx binary itself.

Log directory: /var/log/nginx
- The nginx log files are stored in this folder.

Startup file: /etc/init.d/nginx
- The init script used to start and stop nginx.

Web directories:
- /var/www/html/index.nginx-debian.html is the default welcome page on Debian.
- The home file of /usr/share/nginx/html is index.html.
Nginx configuration files:

Default file: /etc/nginx/nginx.conf

/etc/nginx/{sites-available,sites-enabled,conf.d}
- Both sites-available and sites-enabled contain a file named default (the default nginx site configuration).
- The conf.d folder is empty by default.
The original nginx.conf file:
# the user that runs nginx
user www-data;
# worker processes
worker_processes auto;
pid /run/nginx.pid;
# include the modules-enabled configuration files
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

# the http section
http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    # includes the following configuration files
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

# This is an example of a mail proxy
#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen     localhost:110;
#        protocol   pop3;
#        proxy      on;
#    }
#
#    server {
#        listen     localhost:143;
#        protocol   imap;
#        proxy      on;
#    }
#}
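The include lines above are what pull conf.d/*.conf and sites-enabled/* into the final configuration: nginx expands each glob and splices the matching files in at that point. As a rough illustration of that mechanism (a simplified sketch, not nginx's actual parser; the helper name and relative-path handling are my own), an include-expansion pass looks like this:

```python
import glob
import os
import re

def expand_includes(text: str, root: str) -> str:
    """Recursively splice `include <pattern>;` lines with the files they
    match, the way nginx assembles its configuration from included files."""
    out = []
    for line in text.splitlines():
        m = re.match(r"\s*include\s+(\S+);", line)
        if m:
            # Expand the glob and inline each matching file, in sorted order
            for path in sorted(glob.glob(os.path.join(root, m.group(1)))):
                with open(path) as f:
                    out.append(expand_includes(f.read(), root))
        else:
            out.append(line)
    return "\n".join(out)
```

With a conf.d/gzip.conf containing `gzip on;`, expanding `include conf.d/*.conf;` splices that directive directly into the http block.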
Understanding nginx.conf:

File structure:
- Global configuration section
- events configuration section
- http configuration section
- server configuration sections: one per project or application
- location configuration sections: match request URLs within a server
###### Nginx configuration file nginx.conf ######

# Define the user and group that run nginx
user www www;

# Number of nginx worker processes. Recommended: the total number of CPU cores.
worker_processes 8;

# Global error log type: debug | info | notice | warn | error | crit
error_log /usr/local/nginx/logs/error.log info;

# Process PID file
pid /usr/local/nginx/logs/nginx.pid;

# Working mode and connection limit.
# Maximum number of file descriptors an nginx process may open. The
# theoretical value is the maximum number of open files (ulimit -n) divided
# by the number of nginx processes, but nginx does not distribute requests
# evenly, so it is best to keep this value consistent with ulimit -n. With
# ulimit -n at 65535 you might enter 10240, but because requests are uneven a
# process can exceed 10240 when total concurrency reaches 30,000-40,000,
# and nginx then returns a 502 error.
worker_rlimit_nofile 65535;

events {
    # Event model: kqueue | rtsig | epoll | /dev/poll | select | poll.
    # epoll is a high-performance network I/O model in the Linux 2.6 kernel;
    # on Linux use epoll, and on FreeBSD use kqueue.
    # Additional notes:
    # Like Apache, nginx has different event models for different operating systems.
    # A) Standard event models: select and poll, chosen when no more efficient
    #    method is available.
    # B) Efficient event models:
    #    kqueue: FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and Mac OS X. On
    #    dual-processor Mac OS X systems, kqueue can cause a kernel crash.
    #    epoll: Linux kernel 2.6 and later.
    #    /dev/poll: Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+
    #    and Tru64 UNIX 5.1A+.
    #    eventport: Solaris 10; install the security patches to prevent kernel crashes.
    use epoll;

    # Maximum number of connections per worker process
    # (max connections = worker_connections * worker_processes).
    # Tune together with worker_processes: as large as possible, but do not
    # drive the CPU to 100%.
    worker_connections 65535;

    # Keepalive timeout, in seconds
    keepalive_timeout 60;

    # Buffer size for the client request header. This can be set to your
    # system's page size: a request header normally does not exceed 1k, but
    # since the system page size is usually at least 1k, the buffer is set to
    # the page size. The page size can be obtained with `getconf PAGESIZE`:
    # [root@web001 ~]# getconf PAGESIZE
    # 4096
    # client_header_buffer_size may exceed 4k, but it must be an integer
    # multiple of the system page size.
    client_header_buffer_size 4k;

    # max= is the number of cached open file descriptors, recommended to match
    # the number of open files; inactive is how long a file may go unrequested
    # before its cache entry is removed.
    open_file_cache max=65535 inactive=60s;

    # How often to check the cached entries for validity.
    # Syntax: open_file_cache_valid time; Default: 60s; Context: http, server,
    # location. Specifies when to check the validity of items cached in
    # open_file_cache.
    open_file_cache_valid 80s;

    # Minimum number of times a file must be used within the inactive window
    # of the open_file_cache directive for its descriptor to stay open in the
    # cache. As above, if a file is not used once within the inactive time,
    # it is removed.
    # Syntax: open_file_cache_min_uses number; Default: 1; Context: http,
    # server, location.
    # With a larger value, the file descriptor is always kept open in the cache.
    open_file_cache_min_uses 1;

    # Syntax: open_file_cache_errors on | off; Default: off; Context: http,
    # server, location. Specifies whether to record cache errors when
    # searching for a file.
    open_file_cache_errors on;
}

# The http server: provides load balancing through its reverse proxy capability
http {
    # Map of file extensions to MIME types
    include /etc/nginx/mime.types;

    # Default MIME type
    default_type application/octet-stream;

    # Default character encoding
    #charset utf-8;

    # Hash table size for server names.
    # The hash table holding the server names is controlled by the
    # server_names_hash_max_size and server_names_hash_bucket_size directives.
    # hash bucket size is always equal to the hash table size and is a
    # multiple of the processor cache line size, which speeds up key lookup
    # by reducing memory accesses. If hash bucket size equals one processor
    # cache line, the worst-case number of memory accesses during a lookup is
    # two: one to determine the address of the bucket and one to look up the
    # key inside it. Therefore, if nginx reports that hash max size or hash
    # bucket size must be increased, first increase the former.
    server_names_hash_bucket_size 128;

    # Buffer size for the client request header; as noted above, this can be
    # set to the system page size (`getconf PAGESIZE`), since a request header
    # normally does not exceed 1k.
    client_header_buffer_size 32k;

    # Buffers for large client request headers. nginx reads the header into
    # client_header_buffer_size by default and falls back to
    # large_client_header_buffers if the header is too large.
    large_client_header_buffers 4 64k;

    # Maximum size of a file uploaded through nginx
    client_max_body_size 8m;

    # sendfile specifies whether nginx calls the sendfile() system call
    # (zero copy) to output files. Set it to on for ordinary applications;
    # for heavy disk I/O applications such as download servers, set it to off
    # to balance disk and network I/O and reduce system load.
    # Note: set this to off if images do not display properly.
    sendfile on;

    # Enable directory listings; suitable for a download server. Disabled by default.
    autoindex on;

    # Enables or disables the TCP_CORK socket option; only used when sendfile is on.
    tcp_nopush on;
    tcp_nodelay on;

    # Keepalive timeout, in seconds
    keepalive_timeout 120;

    # FastCGI tuning parameters, intended to improve site performance: lower
    # resource usage and faster responses. The names are self-explanatory.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    # Gzip module settings
    gzip on;                  # enable gzip-compressed output
    gzip_min_length 1k;       # minimum file size to compress
    gzip_buffers 4 16k;       # compression buffers
    gzip_http_version 1.0;    # protocol version (default 1.1; use 1.0 for squid 2.5)
    gzip_comp_level 2;        # compression level
    # Types to compress; text/html is always included, so there is no need to
    # list it. Listing it does no harm but produces a warning.
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    # Used when limiting the number of connections per IP
    #limit_zone crawler $binary_remote_addr 10m;

    # Load balancing configuration
    upstream piao.jd.com {
        # weight can be set according to each machine's configuration; the
        # higher the weight, the larger the share of requests the server receives.
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;

        # nginx's upstream module supports these distribution strategies:
        # 1. Round robin (default): requests are assigned to the backend
        #    servers one by one in order; if a server goes down it is removed
        #    automatically.
        # 2. weight: sets the polling probability; the weight is proportional
        #    to the share of requests, for backends with uneven performance.
        #    For example:
        #    upstream bakend {
        #        server 192.168.0.14 weight=10;
        #        server 192.168.0.15 weight=10;
        #    }
        # 3. ip_hash: each request is assigned by the hash of the client IP,
        #    so each visitor always reaches the same backend server, which
        #    solves the session problem. For example:
        #    upstream bakend {
        #        ip_hash;
        #        server 192.168.0.14:88;
        #        server 192.168.0.15:80;
        #    }
        # 4. fair (third party): requests are assigned by backend response
        #    time; shorter response times are served first.
        #    upstream backend {
        #        server server1;
        #        server server2;
        #        fair;
        #    }
        # 5. url_hash (third party): requests are assigned by the hash of the
        #    requested URL, so each URL goes to the same backend server; more
        #    efficient when the backends cache content.
        # Example: add a hash statement to the upstream block; the server
        # lines may not then carry weight or other parameters. hash_method is
        # the hash algorithm used.
        #upstream backend {
        #    server squid1:3128;
        #    server squid2:3128;
        #    hash $request_uri;
        #    hash_method crc32;
        #}

        #tips:
        #upstream bakend {  # defines the addresses and state of the load-balanced devices
        #    ip_hash;
        #    server 127.0.0.1:9090 down;
        #    server 127.0.0.1:8080 weight=2;
        #    server 127.0.0.1:6060;
        #    server 127.0.0.1:7070 backup;
        #}
        # Add proxy_pass http://bakend/ to the server that needs load balancing.

        # The state of each backend device can be set to:
        # 1. down: the server temporarily does not take part in the load.
        # 2. weight: the larger the weight, the larger the share of the load.
        # 3. max_fails: the number of failed requests allowed, 1 by default.
        #    When it is exceeded, the error defined by the proxy_next_upstream
        #    module is returned.
        # 4. fail_timeout: the time to pause after max_fails failures.
        # 5. backup: only receives requests when all other non-backup machines
        #    are down or busy, so this machine carries the least load.

        # nginx supports load balancing several groups of servers at the same time.
    }

    # client_body_in_file_only: when set to on, the data sent by the client is
    # recorded to a file, which is useful for debugging.
    # client_body_temp_path: sets the directory for those files; up to three
    # levels of subdirectories can be configured.
    # location matches URLs and can redirect or hand off to a new proxy / load balancer.
}
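To make the weight and ip_hash strategies above concrete, here is a toy model in Python. This is a deliberately simplified sketch: real nginx uses a smooth weighted round-robin algorithm and, for ip_hash, hashes only part of the address, and the backend list below is invented for illustration.

```python
import hashlib
from itertools import cycle

def weighted_round_robin(servers, n):
    """servers: list of (addr, weight). A server with weight w appears w
    times in the rotation, so weight is proportional to its request share."""
    ring = [addr for addr, weight in servers for _ in range(weight)]
    it = cycle(ring)
    return [next(it) for _ in range(n)]

def ip_hash(client_ip, servers):
    """The same client IP always hashes to the same backend, which is how
    ip_hash keeps one visitor's session on one server."""
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[h % len(servers)][0]

backends = [("192.168.80.121:80", 3), ("192.168.80.122:80", 2)]

# Over one full cycle of 5 picks, .121 (weight 3) is chosen 3 times, .122 twice
picks = weighted_round_robin(backends, 5)
```

The session-stickiness trade-off is visible here: ip_hash ignores weights entirely, which is why the source notes that server lines in a hashed upstream cannot carry weight parameters.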
default.conf explained:
server {
    # Listening port
    listen 80;

    # Multiple domain names can be listed, separated by spaces
    server_name www.jd.com jd.com;
    index index.html index.htm index.php;
    root /data/www/jd;

    # Load balancing for PHP via FastCGI
    location ~ .*\.(php|php5)?$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi.conf;
    }

    # Image cache time settings
    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
        expires 10d;
    }

    # JS and CSS cache time settings
    location ~ .*\.(js|css)?$ {
        expires 1h;
    }

    # Log format settings:
    # $remote_addr and $http_x_forwarded_for: the client's IP address;
    # $remote_user: the client's user name;
    # $time_local: the access time and time zone;
    # $request: the requested URL and HTTP protocol;
    # $status: the request status, e.g. 200 for success;
    # $body_bytes_sent: the size of the response body sent to the client;
    # $http_referer: the page the request came from;
    # $http_user_agent: information about the client's browser.
    # Usually the web server sits behind a reverse proxy, so the IP obtained
    # through $remote_addr is the proxy's address. The reverse proxy can add
    # an X-Forwarded-For header to record the original client's IP and the
    # server address the original client requested.
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    # Access logs for this virtual host
    access_log /usr/local/nginx/logs/host.access.log main;
    access_log /usr/local/nginx/logs/host.access.404.log log404;

    # Enable the reverse proxy for "/"
    location / {
        proxy_pass http://127.0.0.1:88;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;

        # The backend web server can obtain the user's real IP through X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Some optional reverse proxy settings follow.
        proxy_set_header Host $host;

        # Maximum number of bytes per file the client may request
        client_max_body_size 10m;

        # Maximum number of bytes of a client request the buffering proxy
        # will hold. Whether it is set to 256k or not, submitting any image
        # under 256k with Firefox or IE works fine. But if you comment this
        # directive out and use the default client_body_buffer_size (twice
        # the operating system page size, 8k or 16k), problems arise:
        # submitting a larger image with Firefox 4.0 or IE 8.0 returns
        # 500 Internal Server Error.
        client_body_buffer_size 128k;

        # Make nginx intercept replies with an HTTP status code of 400 or higher.
        proxy_intercept_errors on;

        # Backend server connection timeout: the timeout waiting for a
        # response after initiating the handshake (proxy connect timeout).
        proxy_connect_timeout 90;

        # Backend server data return time (proxy send timeout): the time
        # within which the backend server must finish sending all its data.
        proxy_send_timeout 90;

        # Backend server response time after the connection succeeds (proxy
        # receive timeout): the time spent waiting for the backend server to
        # process the request.
        proxy_read_timeout 90;

        # Buffer size for the proxy server (nginx) to hold user headers. By
        # default it is the size of one buffer from the proxy_buffers
        # directive, though it can be set smaller.
        proxy_buffer_size 4k;

        # proxy_buffers: the number and size of buffers used to read the
        # reply from the proxied server; set below 32k for an average page.
        # The default is also the page size, 4k or 8k depending on the operating system.
        proxy_buffers 4 32k;

        # Buffer size under high load (proxy_buffers * 2)
        proxy_busy_buffers_size 64k;

        # Size of data written to proxy_temp_path at a time, to keep a worker
        # process from blocking too long while passing a file. Replies larger
        # than this cache size are passed through from upstream.
        proxy_temp_file_write_size 64k;
    }

    # Address for checking nginx status
    location /NginxStatus {
        stub_status on;
        access_log on;
        auth_basic "NginxStatus";
        auth_basic_user_file confpasswd;
        # The contents of the htpasswd file can be generated with the
        # htpasswd tool that ships with Apache.
    }

    # Local dynamic/static separation reverse proxy configuration:
    # all JSP pages are handled by Tomcat or Resin
    location ~ .(jsp|jspx|do)?$ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080;
    }

    # All static files are read directly by nginx, without Tomcat or Resin
    location ~ .*\.(htm|html|gif|jpg|jpeg|png|bmp|swf|ioc|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$ {
        expires 15d;
    }

    location ~ .*\.(js|css)?$ {
        expires 1h;
    }
}
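The log_format directive above fixes the field order of every access-log line. A quick way to sanity-check it is to parse a line back into its fields with a regular expression; the sample line below is fabricated for illustration.

```python
import re

# One named group per variable in the `log_format access` pattern above
LOG_RE = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" (?P<xff>\S+)'
)

line = ('203.0.113.7 - - [10/Oct/2023:13:55:36 +0800] "GET /index.html HTTP/1.1" '
        '200 612 "-" "Mozilla/5.0" -')
fields = LOG_RE.match(line).groupdict()
# fields["status"] is "200"; fields["request"] is "GET /index.html HTTP/1.1"
```

Note that $remote_addr here would be the reverse proxy's address; the last field, $http_x_forwarded_for, is what carries the original client IP as described above.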