Install nginx
Download the nginx source archive to the root directory:

```bash
yum update    # update system software
cd /
wget http://nginx.org/download/nginx-1.17.2.tar.gz
```
Extract the tar.gz archive and enter nginx-1.17.2:

```bash
tar -xzvf nginx-1.17.2.tar.gz
cd nginx-1.17.2
```
Inside the folder, run the configuration check:

```bash
./configure
```
If an error appears during this pre-install configuration check, install the nginx dependencies first:

```bash
yum -y install zlib*
yum -y install openssl openssl-devel
```
Run ./configure again; if no error is shown, configure the build with the modules you need before compiling and installing:

```bash
./configure --prefix=/usr/local/nginx \
  --with-http_ssl_module --with-http_v2_module --with-http_realip_module \
  --with-http_addition_module --with-http_sub_module --with-http_dav_module \
  --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module \
  --with-http_gzip_static_module --with-http_auth_request_module \
  --with-http_random_index_module --with-http_secure_link_module \
  --with-http_degradation_module --with-http_slice_module \
  --with-http_stub_status_module \
  --with-mail --with-mail_ssl_module \
  --with-stream --with-stream_ssl_module --with-stream_realip_module \
  --with-stream_ssl_preread_module \
  --with-threads --user=www --group=www
```
Pay special attention to whether the modules you will need later are included: adding a module afterwards requires recompiling nginx.
View the modules supported by the default installation
Run ls nginx-1.17.2 to list the nginx source files; you will find an auto directory in it. Inside auto, the options file holds all the options configured during the nginx build. You can inspect them with:

```bash
cat nginx-1.17.2/auto/options | grep YES
```
Compile and install
```bash
make && make install
```
Note that module support is tied to later nginx configuration — SSL, gzip compression, and so on — so it is best to confirm that the modules you plan to configure are present before compiling and installing.
Nginx is installed in /usr/local/nginx, which you can confirm:

```bash
$ whereis nginx
nginx: /usr/local/nginx
```
Start the Nginx service
```bash
cd /usr/local/nginx/sbin/
./nginx
```
If startup fails with nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use), list the local network addresses and ports, find the TCP connection occupying port 80, and kill that process:

```bash
netstat -ntpl   # find the PID occupying port 80
kill <PID>      # replace <PID> with the process id from the output above
```
Continue to start the Nginx service
```bash
./nginx
```
Visit the server's IP address in a browser; if the "Welcome to nginx!" page appears, the installation succeeded.
Nginx configuration
The basic structure
```
main                  # global configuration
├── events            # working mode and connection settings
├── http              # proxy, cache, logging, and most other settings
│   ├── upstream      # load balancing
│   ├── server        # virtual host settings
│   │   ├── location  # request routing and page handling
│   │   ├── location
│   │   └── ...
│   └── ...
└── ...
```
Main Configuration Meaning
- main: global nginx settings that take effect for the whole server.
- events: settings that affect the nginx server's network connections with clients.
- http: can nest multiple server blocks; configures proxying, caching, log definitions, most other features, and third-party modules.
- server: parameters of a virtual host; one http block can contain multiple server blocks.
- location: routing of requests and handling of individual pages.
- upstream: addresses of the back-end servers; an essential part of load balancing configuration.
Nginx. conf Syntax rules of the configuration file
- Configuration files consist of directives and directive blocks
- Each directive ends with a semicolon (;); directives and parameters are separated by spaces
- A directive block groups multiple directives together inside {} braces
- The include statement combines multiple configuration files, improving maintainability
- Comments start with the # symbol, improving readability
- Variables are referenced with the $ sign
- The parameters of some directives support regular expressions, e.g. the commonly used location directive
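Putting these rules together, a minimal sketch of an nginx.conf fragment (the domain and paths are hypothetical examples) illustrates each point:

```nginx
# comments start with '#'; directives end with ';' and blocks use '{}'
user  www www;
worker_processes  2;

events {
    worker_connections  1024;
}

http {
    include  mime.types;               # include combines other config files

    server {
        listen       80;
        server_name  example.com;      # hypothetical domain
        location ~* \.(gif|jpg)$ {     # regex parameter in a location directive
            root /data/images;
            add_header X-Host $host;   # $host is a variable
        }
    }
}
```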
Built-in variables
Nginx has built-in global variables that you can use whenever you want in your configuration:
Common commands
Here are some common commands:
```bash
nginx -s reload            # reload the configuration file (graceful restart)
nginx -s reopen            # reopen the log files
nginx -s stop              # fast shutdown
nginx -s quit              # graceful shutdown after worker processes finish
nginx -t                   # test the nginx configuration
nginx -T                   # test the configuration and dump it
nginx -t -c <config path>  # test a specific config file; -c is unneeded if run from the config directory
```
The above commands can be viewed with nginx -h, but there are other less common commands not listed here.
If nginx is managed as a system service, you can use systemctl instead:

```bash
systemctl start nginx     # start nginx
systemctl stop nginx      # stop nginx
systemctl restart nginx   # restart nginx
systemctl reload nginx    # reload the configuration without dropping connections
systemctl enable nginx    # start nginx on boot
systemctl disable nginx   # do not start nginx on boot
systemctl status nginx    # check nginx's running state
```
Configure nginx to automatically start upon startup
With the systemctl command: if you installed nginx via yum install, the nginx.service file was created automatically, and you can simply run:

```bash
systemctl enable nginx    # start nginx on boot
systemctl disable nginx   # do not start nginx on boot
```
Otherwise, you need to create the nginx.service file in the system services directory.
Create and open the nginx.service file:
```bash
vi /lib/systemd/system/nginx.service
```
The contents are as follows:
```ini
[Unit]
Description=nginx
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```
Save with :wq and exit, then run systemctl daemon-reload to make the file take effect.
This allows you to manipulate nginx with the following command:
```bash
systemctl start nginx.service       # start the service
systemctl stop nginx.service        # stop the service
systemctl enable nginx.service      # start the service on boot
systemctl disable nginx.service     # do not start the service on boot
systemctl status nginx.service      # check the current status of the service
systemctl restart nginx.service     # restart the service
systemctl is-enabled nginx.service  # query whether the service starts on boot
```
Automatic startup upon startup is realized through the startup command script
Create a boot command script file:
```bash
vi /etc/init.d/nginx
```
Paste the following startup script into the nginx file (the script is adapted from the internet):
```bash
#!/bin/bash
# chkconfig: - 85 15
# Note: this overrides the system PATH; only shell builtins and $DAEMON
# (an absolute path) are used below, so command lookup still works.
PATH=/usr/local/nginx
DESC="nginx daemon"
NAME=nginx
DAEMON=$PATH/sbin/$NAME
CONFIGFILE=$PATH/conf/$NAME.conf
PIDFILE=$PATH/logs/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME

set -e
[ -x "$DAEMON" ] || exit 0

do_start() {
    $DAEMON -c $CONFIGFILE || echo -n "nginx already running"
}

do_stop() {
    $DAEMON -s stop || echo -n "nginx not running"
}

do_reload() {
    $DAEMON -s reload || echo -n "nginx can't reload"
}

case "$1" in
    start)
        echo -n "Starting $DESC: $NAME"
        do_start
        echo "."
        ;;
    stop)
        echo -n "Stopping $DESC: $NAME"
        do_stop
        echo "."
        ;;
    reload|graceful)
        echo -n "Reloading $DESC configuration..."
        do_reload
        echo "."
        ;;
    restart)
        echo -n "Restarting $DESC: $NAME"
        do_stop
        do_start
        echo "."
        ;;
    *)
        echo "Usage: $SCRIPTNAME {start|stop|reload|restart}" >&2
        exit 3
        ;;
esac

exit 0
```
Set everyone to have execution permission on the nginx file:
```bash
chmod a+x /etc/init.d/nginx
```
Add nginx to system services:
```bash
chkconfig --add nginx
```
To set the service to boot on:
```bash
chkconfig nginx on
```
Reboot the system for the change to take effect; the service can then be managed with the same systemctl commands listed above.
If a restart reports:

```
Restarting nginx daemon: nginx
nginx: [error] open() "/usr/local/nginx/logs/nginx.pid" failed (2: No such file or directory)
nginx not running
```

specify the configuration file explicitly with the -c parameter:

```bash
/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
```
If you then see nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use), the port is occupied — find and kill the offending process as shown earlier.
Configure nGINx globally available
Isn't it annoying to have to go back to the nginx startup directory (or point at it with the -p parameter) every time you change nginx.conf? Outside that directory, running nginx directly only prints:

```
-bash: nginx: command not found
```

There are two fixes. One is to wrap the nginx command in a script on the PATH. The simpler one is to add nginx to the PATH environment variable so the nginx command works anywhere. The steps are as follows:
1. Edit /etc/profile
```bash
vi /etc/profile
```
2. Add the configuration in the last line and save the configuration
```bash
export PATH=$PATH:/usr/local/nginx/sbin
```
3. Make the configuration take effect immediately
```bash
source /etc/profile
```
This makes it pleasant to use the nginx command globally directly.
Common nginx functions
Reverse proxy
The reverse proxy use case front-end developers mention most often is solving cross-origin problems. But a reverse proxy can also control caching (proxy cache), access control, and so on; the load balancing described below is likewise implemented through reverse proxying.
```nginx
server {
    listen 8080;
    # proxy requests under /test to GitHub
    location /test {
        proxy_pass https://github.com;
    }
    # proxy all /api requests to local port 8888
    location /api {
        proxy_pass http://127.0.0.1:8888;
    }
}
```
Access control
```nginx
server {
    location ~ ^/index.html {
        # matches index.html
        deny 192.168.1.1;
        deny 192.168.1.2;
        allow all;
    }
}
```
The above denies the IP addresses 192.168.1.1 and 192.168.1.2 and allows all others. Rules are evaluated top to bottom and evaluation stops at the first match, so order them to suit your needs.
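The reverse pattern — an allow-list — is equally common; a sketch (the subnet and path are illustrative examples, not from the original article):

```nginx
location /admin {
    allow 192.168.1.0/24;   # permit only the internal subnet
    deny  all;              # everyone else gets 403
}
```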
Load balancing
Nginx supports four built-in load balancing policies plus two common third-party ones for making better use of server resources.
Polling policy (default)
Each request is assigned to the back-end servers one by one in order; if a server goes down it is removed automatically. However, if one server is under pressure and responses start to lag, every user assigned to that server is affected.
```nginx
http {
    upstream test.com {
        server 192.168.1.12:8887;
        server 192.168.1.13:8888;
    }
    server {
        location /api {
            proxy_pass http://test.com;
        }
    }
}
```
By server weight
For example, to send roughly 1 request in 10 to port 8888 and the other 9 to port 8887:
```nginx
http {
    upstream test.com {
        server 192.168.1.12:8887 weight=9;
        server 192.168.1.13:8888 weight=1;
    }
    server {
        location /api {
            proxy_pass http://test.com;
        }
    }
}
```
Client IP binding (ip_hash)
Requests from the same IP address are always routed to the same server, which effectively solves session sharing for dynamic sites: if login state is stored in a server-side session, being switched to another server would force the user to log in again. So when each client must stick to one server, use ip_hash.
```nginx
http {
    upstream test.com {
        ip_hash;
        server 192.168.1.12:8887;
        server 192.168.1.13:8888;
    }
    server {
        location /api {
            proxy_pass http://test.com;
        }
    }
}
```
Minimum number of connections
Requests are sent preferentially to the least loaded server, which evens out queue lengths and avoids piling more requests onto servers that are already under pressure.
```nginx
http {
    upstream test.com {
        least_conn;
        server 192.168.1.12:8887;
        server 192.168.1.13:8888;
    }
    server {
        location /api {
            proxy_pass http://test.com;
        }
    }
}
```
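Whichever policy you choose, each server line also accepts failover parameters; a sketch with assumed values (not from the original article):

```nginx
upstream test.com {
    server 192.168.1.12:8887 max_fails=3 fail_timeout=30s;  # consider down after 3 failures, for 30s
    server 192.168.1.13:8888 max_fails=3 fail_timeout=30s;
    server 192.168.1.14:8889 backup;   # only receives traffic when the others are down
}
```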
Fastest response time policy (third-party fair module)
Relies on the third-party fair module; requests are sent preferentially to the server with the shortest response time.
```nginx
http {
    upstream test.com {
        fair;
        server 192.168.1.12:8887;
        server 192.168.1.13:8888;
    }
    server {
        location /api {
            proxy_pass http://test.com;
        }
    }
}
```
Hash result by url accessed (third party)
Requests are distributed according to a hash of the accessed URL, so each URL always lands on the same back-end server — effective when the back-end servers cache responses. Note: add the hash directives inside the upstream block, and do not put weight or other parameters on the server lines.
```nginx
http {
    upstream test.com {
        hash $request_uri;
        hash_method crc32;
        server 192.168.1.12:8887;
        server 192.168.1.13:8888;
    }
    server {
        location /api {
            proxy_pass http://test.com;
        }
    }
}
```
URL-hash load balancing can also be done with HAProxy's balance uri or nginx's third-party upstream_hash module.
Gzip compression
Enabling gzip compression can greatly reduce the size of files transferred over HTTP and noticeably speed up a site; it is a near-essential optimization:
```nginx
gzip on;
gzip_types text/plain text/css application/javascript application/json;  # MIME types to compress (example list)
gzip_static on;
gzip_proxied expired no-cache no-store private auth;
gzip_buffers 16 8k;
gzip_min_length 1k;
gzip_comp_level 4;
gzip_http_version 1.0;
gzip_vary off;
gzip_disable "MSIE [1-6]\.";
```
To explain:
- gzip_types: the MIME types to compress with gzip; text/html is always compressed;
- gzip_static: off by default; when enabled, nginx first checks whether a pre-compressed .gz version of a requested static file exists and, if so, returns the .gz file's contents directly;
- gzip_proxied: off by default; relevant when nginx acts as a reverse proxy — controls whether responses received from the proxied server are gzip-compressed;
- gzip_buffers: how much memory to use for buffering compression results; 16 8k means 16 buffers of 8 KB each;
- gzip_min_length: the minimum response length to compress, read from the Content-Length header; the default 0 compresses everything regardless of size — a value above 1 KB is recommended, since compressing smaller responses can add more overhead than it saves;
- gzip_comp_level: the gzip compression level, from 1 (fastest, lowest ratio) to 9 (slowest, highest ratio);
- gzip_http_version: default 1.1; the minimum HTTP version required for gzip to apply;
- gzip_vary: adds Vary: Accept-Encoding to the response header so proxies can tell, from the request's Accept-Encoding, whether gzip compression was enabled;
- gzip_disable: specifies which browsers (matched against the User-Agent) should not receive gzip-compressed responses.
The second point (gzip_static) is usually combined with front-end build tooling: assets are pre-packed into .gz files and deployed, so the server can serve them directly with a better compression ratio and without spending CPU compressing on the fly. Usually nothing extreme is needed, and the front end works without any extra configuration.
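A rough illustration of why gzip_min_length matters — this uses only the gzip CLI, not nginx itself: repetitive text compresses very well, while tiny payloads can even grow after compression.

```shell
# Create a tiny file and a large, repetitive one.
printf 'hi' > small.txt
yes "lorem ipsum dolor sit amet" | head -n 1000 > large.txt

# Compress both, keeping the originals (-k) for comparison.
gzip -kf small.txt large.txt

echo "small: $(wc -c < small.txt) -> $(wc -c < small.txt.gz) bytes"
echo "large: $(wc -c < large.txt) -> $(wc -c < large.txt.gz) bytes"
```

The large file shrinks dramatically; the tiny one gets bigger, which is why responses under ~1 KB are best left uncompressed.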
To produce those gzip files at build time with webpack, in the vue.config.js configuration file of a vue-cli3 project:

```js
const CompressionWebpackPlugin = require('compression-webpack-plugin')

module.exports = {
  // gzip compression
  configureWebpack: config => {
    if (process.env.NODE_ENV === 'production') {
      return {
        plugins: [
          new CompressionWebpackPlugin({
            test: /\.js$|\.html$|\.css/,   // file names to match
            threshold: 10240,              // only compress assets above this size (example value)
            deleteOriginalAssets: false    // keep the original files
          })
        ]
      }
    }
  },
  // ...
}
```
The HTTP server
Nginx is itself a capable static resource server. When only static resources are involved, nginx alone can serve them:
```nginx
server {
    listen 80;
    server_name localhost;
    location / {
        root /usr/local/app;
        index index.html;
    }
}
```
If you go to http://ip, you will go to index.html under /usr/local/app by default. If a site is just a static page, you can deploy it in this way, such as a static official website.
Dynamic and static separation
This means separating dynamic requests from static ones. There are two main approaches:
- One is to split static files out under a separate domain name on a separate server — the current mainstream practice;
- The other is to deploy dynamic and static files mixed together and separate them with nginx configuration.
```nginx
# all static requests are handled by nginx; files are stored under /usr/local/resource
location ~ \.(gif|jpg|jpeg|png|bmp|js|swf|css)$ {
    root /usr/local/resource;
    expires 10h;   # cache expiry of 10 hours
}
# all dynamic requests are forwarded to tomcat
location ~ \.(jsp|do)$ {
    proxy_pass http://127.0.0.1:8888;
}
```
Note the expires directive above: when nginx sets expires — say, expires 10d; — requests for content in that location (or if block) within 10 days are answered from the browser cache and never reach nginx again.
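Building on the expires directive above, a common refinement (the values here are illustrative) gives each asset type its own cache lifetime:

```nginx
location ~ \.(js|css)$ {
    root    /usr/local/resource;
    expires 7d;     # scripts and styles change more often
}
location ~ \.(gif|jpg|jpeg|png|bmp)$ {
    root    /usr/local/resource;
    expires 30d;    # images rarely change
}
```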
Request limits
Heavy malicious traffic wastes bandwidth and puts pressure on the server. Nginx can limit the number of connections and the request rate for a single IP; sensible limits also help mitigate DDoS and CC attacks.
Nginx provides two modules for this, both built in by default:
- limit_conn_module — connection count limiting
- limit_req_module — request rate limiting

The corresponding configuration directives are:
- limit_req_zone — limits the request rate
- limit_conn_zone — limits the number of concurrent connections
Limiting concurrent connections with limit_conn_zone

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    server {
        location / {
            limit_conn addr 5;   # allow at most 5 concurrent connections per client address
        }
    }
}
```
If the shared memory space is exhausted, the server returns a 503 (Service Temporarily Unavailable) error on all subsequent requests.
When multiple limit_conn_zone directives are configured, all connection number limits take effect. For example, the following configuration limits not only the number of connections from a single IP source, but also the total number of connections from a single virtual server:
```nginx
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn_zone $server_name zone=perserver:10m;

server {
    limit_conn perip 10;        # limit the number of connections per client IP
    limit_conn perserver 2000;  # limit the total number of connections to the server
}
```
Limiting the request rate with limit_req_zone

```nginx
limit_req_zone $binary_remote_addr zone=creq:10m rate=10r/s;

server {
    location / {
        limit_req zone=creq burst=5;
    }
}
```
This caps the average rate at 10 requests per second per client and allows a burst of up to 5 requests beyond that rate. If you don't want the excess requests to be delayed, add the nodelay parameter:
```nginx
limit_req zone=creq burst=5 nodelay;
```
Here is just a brief introduction, so that you have this concept, when the configuration can be in-depth to look for information.
Forward proxy
A forward proxy is a server that sits between clients and the origin server: to fetch content from the origin, the client sends its request to the proxy and names the target (the origin server); the proxy forwards the request and hands the retrieved content back to the client. The VPN services we use are a typical forward proxy.
Configure the forward proxy:
```nginx
resolver 8.8.8.8;   # DNS server used to resolve the target hosts

server {
    resolver_timeout 5s;
    location / {
        # forward the request to whatever host the client asked for
        proxy_pass http://$host$request_uri;
    }
}
```
The object of the forward proxy is the client. The server cannot see the real client.
Picture anti-theft chain
```nginx
server {
    listen 80;
    server_name *.test;
    # image hotlink protection
    location ~* \.(gif|jpg|jpeg|png|bmp|swf)$ {
        valid_referers none blocked server_names ~\.google\. ~\.baidu\. *.qq.com;  # allowed referers
        if ($invalid_referer) {
            return 403;
        }
    }
}
```
The above prevents other websites from hotlinking our images, which saves traffic.
Applicable to PC or mobile device
Return a different version of the site depending on the user's device. Pure front-end adaptive layouts used to be common, but for complex sites responsive design is often worse than maintaining separate builds, in both complexity and usability — see large sites like Taobao and JD. Here nginx decides whether to return the PC or the H5 site based on the request's User-Agent:
```nginx
server {
    listen 80;
    server_name test.com;
    location / {
        root /usr/local/app/pc;    # path of the PC build
        if ($http_user_agent ~* '(Android|webOS|iPhone|iPod|BlackBerry)') {
            root /usr/local/app/mobile;   # path of the mobile build
        }
        index index.html;
    }
}
```
Setting the secondary domain name
Create a new server:
```nginx
server {
    listen 80;
    server_name admin.test.com;     # the secondary domain
    location / {
        root /usr/local/app/admin;  # html path for the subdomain
        index index.html;
    }
}
```
Configure HTTPS
Here I use certbot's free certificate; each certificate is only valid for 3 months at a time (you could try configuring automatic renewal with crontab — I haven't tried it yet):
Install certbot first
```bash
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
```
Apply for a certificate (note: you need to resolve the domain name of the certificate to this server before you can apply for a certificate) :
```bash
sudo ./certbot-auto certonly --standalone --email [email protected] -d test.com -d www.test.com
```
Execute the above instructions and follow the prompts.
Certbot starts a temporary server to complete the verification (it takes port 80 or port 443, so you need to temporarily shut down the Web server), and then Certbot saves the certificate as a file, including the complete certificate chain file and the private key file.
The file is saved in the domain name directory under /etc/letsencrypt/live/.
Modify the nginx configuration:
```nginx
server {
    listen 443 ssl http2;   # also enables HTTP/2
    ssl_certificate     /etc/letsencrypt/live/test.com/fullchain.pem;  # certificate chain file
    ssl_certificate_key /etc/letsencrypt/live/test.com/privkey.pem;    # private key file
    server_name test.com www.test.com;   # domains bound to the certificate
}
```
Configure the transfer from HTTP to HTTPS
```nginx
server {
    listen 80;
    server_name test.com www.test.com;
    # single-domain redirect
    if ($host = 'www.sherlocked93.club') {
        return 301 https://www.sherlocked93.club$request_uri;
    }
    # redirect all http traffic to https
    if ($scheme != 'https') {
        return 301 https://$server_name$request_uri;
    }
}
```
Pick whichever rule fits your needs — don't add them all.
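A variant worth knowing (a sketch, not part of the original configuration): keep the port-80 server as a pure redirector, and put all site logic only in the 443 server:

```nginx
server {
    listen      80;
    server_name test.com www.test.com;
    return 301 https://$host$request_uri;   # redirect everything to HTTPS in one rule
}
```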
Single-page item History Route configuration
```nginx
server {
    listen 80;
    server_name fe.sherlocked93.club;
    location / {
        root /usr/local/app/dist;       # folder produced by the vue build
        index index.html index.htm;
        try_files $uri $uri/ /index.html @rewrites;  # fall back to index.html when nothing matches
        expires -1;                      # don't cache index.html
        add_header Cache-Control no-cache;
    }
    location @rewrites {                 # named fallback location
        rewrite ^(.+)$ /index.html break;
    }
}
```
The key line is try_files $uri $uri/ /index.html;, which routes unknown paths back to index.html; the rest of the configuration above just disables caching of index.html.
Configuring an HA Cluster (Dual-System Hot Backup)
When the primary Nginx server goes down, switch to the backup Nginx server
First install keepalived:
```bash
yum install keepalived -y
```
Then edit the /etc/keepalived/keepalived.conf configuration file. In it, vrrp_script defines an external health-check mechanism, and track_script inside vrrp_instance runs that script; when the check fails, the VIP moves to the other node. (The interface name and priority below are assumed values — adjust them to your environment.)

```conf
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_maintainace {            # name of the health-check mechanism
    script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"  # script path or inline command
    # script "/etc/keepalived/nginx_check.sh"                   # e.g. a script path
    interval 2                           # run the check every 2 seconds
}

vrrp_instance VI_1 {          # each vrrp_instance defines one virtual router
    state MASTER              # this host is the MASTER; the backup uses BACKUP
    interface eth0            # network interface to bind to (adjust to your NIC)
    virtual_router_id 51      # virtual router id, below 255; must match on master and backup
    priority 100              # must be higher on the master than on the backup
    advert_int 1              # heartbeat interval (default)
    authentication {          # authentication between nodes
        auth_type PASS
        auth_pass 1111        # password
    }
    track_script {
        chk_maintainace       # run the health check defined above
    }
    virtual_ipaddress {       # the virtual address (VIP)
        172.16.2.8
    }
}
```
Here is one possible nginx_check.sh health-check script:

```bash
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then
    /usr/sbin/nginx              # try to restart nginx
    sleep 2                      # wait 2 seconds
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        killall keepalived       # stop keepalived so the VIP drifts to the backup node
    fi
fi
```
Copy the configuration to the BACKUP server, change state to BACKUP, and set a priority lower than the master's. Start keepalived on both machines with service keepalived start, then stop it on the master with service keepalived stop and check with ip addr on the backup whether the virtual IP has moved over automatically.
Start Master keepalived again and the VIP will be on the host again.
Configure high availability cluster content from: Nginx from entry to practice, 10,000 words in detail!
Other functions and tricks
Proxy cache
Nginx's http_proxy module provides squid-like caching, configured via proxy_cache_path: nginx keeps a local copy of proxied content on the nginx server, so for a period of time repeat requests can be answered without contacting the back-end server again, reducing transfer latency and improving access speed:
```nginx
proxy_cache_path /usr/local/cache levels=1:2 keys_zone=my_cache:10m;

server {
    listen 80;
    server_name test.com;
    location / {
        proxy_cache my_cache;
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $host;
    }
}
```
This gives nginx a 10 MB cache zone named my_cache with a two-level directory hierarchy, stored under /usr/local/cache.
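proxy_cache_path and the proxy_cache* directives accept further tuning; a sketch (all values are examples, not from the original article) that bounds the cache and exposes hit/miss status for debugging:

```nginx
proxy_cache_path /usr/local/cache levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m;   # cap disk usage; evict entries unused for 60 min

server {
    location / {
        proxy_cache my_cache;
        proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
        proxy_cache_valid 404 1m;        # cache 404s only briefly
        add_header X-Cache-Status $upstream_cache_status;  # HIT / MISS / EXPIRED
        proxy_pass http://127.0.0.1:8888;
    }
}
```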
Access log
The access log is commented out by default and needs to be enabled and configured in more detail. Here is nginx's default configuration:
```nginx
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;
}
```
The error log
The error log is configured in the main global block. Open nginx.conf and you will see these lines in the configuration file:
```nginx
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
```
The default nginx error log configuration is:
```nginx
error_log logs/error.log error;
```
Static resource server
```nginx
server {
    listen 80;
    server_name static.bin;
    charset utf-8;   # avoid garbled characters for Chinese file names
    location /download {
        alias /usr/share/nginx/static;   # directory of static files
        autoindex on;                    # enable the directory listing
        autoindex_exact_size off;        # on (default): exact size in bytes; off: approximate KB/MB/GB
        autoindex_localtime off;         # off (default): GMT time; on: server local time
    }
}
```
Blocking by User-Agent
Nginx can deny access to specified browsers and crawler frameworks:
```nginx
# block user agents containing baidu, 360 or sohu; ~* matches case-insensitively
if ($http_user_agent ~* 'baidu|360|sohu') {
    return 404;
}
# block crawling tools such as Scrapy
if ($http_user_agent ~* (Scrapy|Curl|HttpClient)) {
    return 403;
}
```
The request filtering
Filter by request type
```nginx
if ($request_method !~ ^(GET|POST|HEAD)$) {
    return 403;
}
```
Filter by status code
```nginx
error_page 502 503 /50x.html;
location = /50x.html {
    root /usr/share/nginx/html;
}
```
This is an internal redirect: when a 502 or 503 occurs, the content of 50x.html is returned. Make sure the 50x.html page can actually be found — hence the extra location block pointing at your own 50x page.
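When the errors come from a proxied back end rather than from nginx itself, error_page only applies if nginx is told to intercept them; a sketch (the upstream address is an example):

```nginx
location /api {
    proxy_pass http://127.0.0.1:8888;
    proxy_intercept_errors on;   # let error_page handle error statuses returned by the upstream
}
```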
Filter by URL name
```nginx
# redirect bare-domain requests to the www domain
if ($host = 'zy.com') {
    rewrite ^/(.*)$ http://www.zy.com/$1 permanent;
}
# send everything else to index.html
rewrite ^(.*)$ /index.html redirect;
```
Ab command
The ab command is called Apache Bench, which is a stress test tool for Apache. You can also test Nginx, IIS, and other Web servers.
- -n: total number of requests
- -c: number of concurrent requests
- -t: maximum number of seconds the test may take (default 50000)
- -p: file containing the POST data
- -T: Content-Type header to use with the POST data

```bash
# 5000 requests in total, 1000 at a time concurrently
ab -n 5000 -c 1000 http://127.0.0.1/
```

If ab is missing, install it first: yum install httpd-tools.
Generic domain name path separation
This is a very useful skill. Sometimes we need to configure some secondary or tertiary domain names, and we want nginx to automatically point to the corresponding directory. For example:
- test1.doc.test.club points to the server directory /usr/local/html/doc/test1;
- test2.doc.test.club points to the server directory /usr/local/html/doc/test2.
```nginx
server {
    listen 80;
    server_name ~^([\w-]+)\.doc\.test\.club$;
    root /usr/local/html/doc/$1;
}
```
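Not nginx itself, but a shell sketch of what the regex server_name does — capture the subdomain and build the document root from it:

```shell
# Simulate nginx's server_name capture with sed: extract the subdomain
# ([\w-]+ before .doc.test.club) and substitute it into the root path.
resolve_root() {
  echo "$1" | sed -E 's|^([A-Za-z0-9_-]+)\.doc\.test\.club$|/usr/local/html/doc/\1|'
}

resolve_root "test1.doc.test.club"   # -> /usr/local/html/doc/test1
resolve_root "test2.doc.test.club"   # -> /usr/local/html/doc/test2
```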
Generic domain name forwarding
Similar to the previous function, sometimes we want to rewrite the secondary or tertiary link to the desired path, so that the back end can resolve different rules based on the route:
- test1.serv.test.club/api?name=a is automatically forwarded to 127.0.0.1:8080/test1/api?name=a
- test2.serv.test.club/api?name=a is automatically forwarded to 127.0.0.1:8080/test2/api?name=a
```nginx
server {
    listen 80;
    server_name ~^([\w-]+)\.serv\.test\.club$;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8080/$1$request_uri;
    }
}
```
Q&A
How to set variables in nginx
In case you didn't know: nginx's configuration file is effectively a tiny programming language. Like any programming language it has variables — but in nginx configuration a variable can hold only one type of value: a string.
For example, we have this configuration line in nginx.conf:
```nginx
set $name "chroot";
```
Here the set directive assigns the value "chroot" to the variable $name. Nginx variable names are always prefixed with the $ sign — the syntax requires it both when defining and when referencing a variable, a notation similar to Perl and PHP.
A benefit of this notation is that variables can be embedded directly inside string literals to build new strings. For example, string concatenation:
```nginx
server {
    listen 80;
    server_name test.com;
    location / {
        set $temp hello;
        return 200 "$temp world";   # return needs a status code before the text
    }
}
```
The string "hello world" is returned when the match succeeds. Note that when a variable reference is immediately followed by characters that could be part of a variable name (letters, digits, underscores), we need special notation to disambiguate. For example:
server {
    listen 80;
    server_name test.com;
    location / {
        set $temp "hello ";
        return 200 "${temp}world";
    }
}
Here the reference to the variable $temp in the directive’s parameter value is immediately followed by the word world. If we simply wrote "$tempworld", nginx’s parser would read it as a reference to a variable named $tempworld. To solve this, nginx supports surrounding the variable name with curly braces, as in ${temp}, so the example above returns "hello world":
$ curl 'http://test.com/'
hello world
Note also that if you want to print the $ sign itself, you can do so like this:
geo $dollar {
    default "$";
}
server {
    listen 80;
    server_name test.com;
    location / {
        set $temp "hello ";
        return 200 "${temp}world: $dollar";
    }
}
The geo configuration directive, provided by the standard ngx_http_geo module, is used here to assign the string "$" to the variable $dollar, so the return value is "hello world: $".
Nginx has built-in predefined variables
In alphabetical order, variable names and corresponding definitions:
- $arg_PARAMETER # the value of the PARAMETER parameter in a GET request
- $args # the query string of a GET request, e.g. foo=123&bar=blahblah; this variable can be modified
- $binary_remote_addr # the client address in binary form
- $body_bytes_sent # the number of bytes sent in the response body
- $content_length # Specifies the content-length field in the request header
- $content_type # Specifies the content-type field in the request header
- $cookie_COOKIE #cookie Cookie value
- $document_root # the value of the root directive for the current request
- $document_uri # same as $uri
- $host # Specifies the host header field in the request. If the host header is unavailable or empty, the name of the server that handled the request (the value of the server_name directive of the server that handled the request). The value is lowercase and does not contain ports
- $hostname # the value of the gethostname system call
- $http_HEADER # the value of the HTTP request header HEADER; the name is lowercased with dashes converted to underscores, e.g. $http_user_agent (the User-Agent value)
- $sent_http_HEADER # the value of the HTTP response header HEADER; the name is lowercased with dashes converted to underscores, e.g. $sent_http_content_type…
- $is_args # "?" if $args is set, otherwise the empty string ""
- $limit_rate # setting this variable limits the connection rate
- $nginx_version # Specifies the version of nginx currently running
- $query_string # same as $args
- $remote_addr # Specifies the IP address of the client
- $remote_port # Specifies the port of the client
- $remote_user # the user name supplied for Basic authentication (Auth Basic module)
- $request_filename # Specifies the file path of the current connection request, which is generated by root or alias directives and URI requests
- $request_body # this variable (0.7.58+) contains the main information about the request. It makes sense in a location that uses proxy_pass or fastcgi_pass directives
- $request_body_file # Temporary file name for the client to request the body information
- $request_completion # If the request was successful, set it to “OK”; Set to null if the request is not complete or is not the last part of a series of requests
- $request_method # this variable is the action requested by the client, usually GET or POST. In versions 0.8.20 and earlier, this variable is always the action in the main request. If the current request is a child request, the current request action is not used
- $request_uri # the full original request URI, including arguments, as sent by the client; it cannot be modified and does not reflect internal rewrites (see $uri)
- $scheme # the request scheme, http or https; e.g.: rewrite ^(.+)$ $scheme://example.com$1 redirect;
- $server_addr # the server address; determining it requires a system call unless addresses are specified in listen directives with the bind parameter
- $server_name # Specifies the server name
- $server_port # Specifies the port through which the request reaches the server
- $server_protocol # The protocol used for the request, usually HTTP/1.0, HTTP/1.1, or HTTP/2
- $uri # the current URI of the request (without the query string; arguments live in $args). Unlike $request_uri, which is the value originally sent by the browser, $uri can change through internal redirects or the index directive. It does not include the protocol or host name, e.g. /foo/bar.html
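To see a few of these built-in variables in action, here is a minimal sketch (the server name and port are illustrative) that echoes request metadata back to the client:

```nginx
server {
    listen 80;
    server_name test.com;

    location /debug {
        # Built-in variables interpolate exactly like user-defined ones
        return 200 "method=$request_method host=$host uri=$uri args=$args\n";
    }
}
```

A request such as curl 'http://test.com/debug?name=a' should then return a body along the lines of method=GET host=test.com uri=/debug args=name=a.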
Appendix: nginx modules
Nginx module classification
- Core module: The most basic and core services of NGINx, such as process management, permission control, logging;
- Standard HTTP module: the standard HTTP functionality of the Nginx server;
- Optional HTTP module: Handles special HTTP requests
- Mail service module: Mail service
- Third-party modules: Serve as extensions that perform special functions
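Besides reading auto/options in the source tree, the modules compiled into an existing binary can be listed with nginx -V. A small sketch of the filtering is below; the sample_output string is an illustrative excerpt, and the commented command is what you would run against a real installation:

```shell
# Illustrative excerpt of what `nginx -V` prints (real output is much longer)
sample_output='configure arguments: --with-http_ssl_module --with-http_v2_module'

# One --with-*_module flag per line; the same filter works on real output:
echo "$sample_output" | tr ' ' '\n' | grep module

# Against a real installation:
#   nginx -V 2>&1 | tr ' ' '\n' | grep module
```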
The module listing
Core modules:
- ngx_core
- ngx_errlog
- ngx_conf
- ngx_events
- ngx_event_core
- ngx_epoll
- ngx_regex
Standard HTTP module:
- ngx_http
- ngx_http_core # configures ports, URI parsing, server error handling, alias control, etc.
- ngx_http_log # custom access logs
- ngx_http_upstream # defines a set of servers that proxy, FastCGI, and memcached requests can be passed to; mainly used for load balancing
- ngx_http_static
- ngx_http_autoindex # automatically generates directory listings
- ngx_http_index # handles requests ending in /; if no index page is found, checks whether random_index is enabled and uses it if so, otherwise falls back to autoindex
- ngx_http_auth_basic # HTTP Basic authentication (auth_basic)
- ngx_http_access # IP-based access control (deny, allow)
- ngx_http_limit_conn # limits the number of concurrent connections from clients
- ngx_http_limit_req # limits the request processing rate from clients
- ngx_http_geo
- ngx_http_map # creates arbitrary key-value variables
- ngx_http_split_clients
- ngx_http_referer # filters requests whose HTTP Referer header is empty
- ngx_http_rewrite # rewrites requests using regular expressions
- ngx_http_proxy
- ngx_http_fastcgi # FastCGI support
- ngx_http_uwsgi
- ngx_http_scgi
- ngx_http_memcached
- ngx_http_empty_gif # serves a 1×1 transparent GIF from memory; can be returned very quickly
- ngx_http_browser # parses the User-Agent value in the HTTP request header
- ngx_http_charset # specifies the character encoding of the page
- ngx_http_upstream_ip_hash
- ngx_http_upstream_least_conn
- ngx_http_upstream_keepalive
- ngx_http_write_filter
- ngx_http_header_filter
- ngx_http_chunked_filter
- ngx_http_range_header
- ngx_http_gzip_filter
- ngx_http_postpone_filter
- ngx_http_ssi_filter
- ngx_http_charset_filter
- ngx_http_userid_filter
- ngx_http_headers_filter # sets HTTP response headers
- ngx_http_copy_filter
- ngx_http_range_body_filter
- ngx_http_not_modified_filter
Optional HTTP modules:
- ngx_http_addition # adds text before or after the response body
- ngx_http_degradation # lets the server return 444 or 204 responses when memory is low
- ngx_http_perl
- ngx_http_flv # streams Flash video (.flv) files and can start playback at a client-specified position
- ngx_http_geoip # resolves client requests against a GeoIP database
- ngx_google_perftools
- ngx_http_gzip # gzip-compresses responses
- ngx_http_gzip_static # looks for a precompressed .gz file and serves it in place of the regular file
- ngx_http_image_filter # resizes and rotates PNG, JPEG, and GIF images
- ngx_http_mp4 # streams multimedia files such as .mp4, .m4v, and .m4a; often used together with ngx_http_flv
- ngx_http_random_index # for requests ending in /, picks a random file in the specified directory as the index
- ngx_http_secure_link # validation checks on requested links
- ngx_http_ssl # HTTPS support
- ngx_http_stub_status
- ngx_http_sub_module # replaces specified strings in the response
- ngx_http_dav # WebDAV support: the HTTP PUT/DELETE/MKCOL/COPY/MOVE methods
- ngx_http_xslt # transforms XML responses using XSLT
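As an illustration of two of the optional modules above, here is a minimal sketch (the server name, MIME types, and path are illustrative) enabling on-the-fly compression and the live status endpoint:

```nginx
server {
    listen 80;
    server_name test.com;

    # ngx_http_gzip: compress responses on the fly
    gzip on;
    gzip_types text/plain application/json;

    # ngx_http_stub_status: basic connection metrics at /status
    location /status {
        stub_status;
    }
}
```

Both directives only work if the corresponding modules were compiled in: gzip is built by default, while stub_status requires --with-http_stub_status_module at configure time.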
Mail service module:
- ngx_mail_core
- ngx_mail_pop3
- ngx_mail_imap
- ngx_mail_smtp
- ngx_mail_auth_http
- ngx_mail_proxy
- ngx_mail_ssl
Third-party module:
- echo-nginx-module # supports shell-like commands such as echo/sleep/time/exec in the nginx configuration file
- memc-nginx-module
- rds-json-nginx-module # enables nginx to support JSON data processing
- lua-nginx-module
Author: chrootliu, front-end development engineer at Tencent QQ Music