Tips
If you already use Nginx, you can skip the introduction and go straight to the Nginx configuration files and usage scenarios. If you want to get familiar with Nginx, take your time and work through the common scenarios at the end of this article.
Preface
As front-end developers, what choice do we have for running a service other than Node? The simple and easy-to-use Nginx can satisfy all your fantasies. Learning Nginx helps us understand the whole process of getting a front-end project online. As a front-end developer you may already have some experience with Nginx, so why study it further? Because that experience is unsystematic: in the past you may only have configured a particular feature (collected from somewhere online), so the knowledge stays fragmented and never forms a system. As a result, you don't know where to start when something goes wrong with your service.
What is Nginx?
“Nginx is a lightweight HTTP server with an event-driven asynchronous non-blocking processing framework that provides excellent IO performance and is often used for server-side reverse proxies and load balancing.”
The advantages of Nginx
- Supports massive, high concurrency: it uses epoll I/O multiplexing. Officially tested, Nginx can support up to 50,000 concurrent connections, and production environments commonly support 20,000 to 40,000 concurrent connections.
- Low memory consumption
- Free and can be used commercially
- Simple configuration file
Besides these, Nginx has many other strengths, such as reverse proxying, grayscale release, load balancing, and so on.
Installation
This article does not focus on how to install Nginx, but here are tutorial links for installing it:
- Linux
If it is CentOS you can also install it directly with yum, which is very convenient:
yum -y install nginx
- Windows
- Installation via Docker (highly recommended); the panda has made a ready-made setup that you can use as well
Nginx file introduction
The nginx.conf file is Nginx's general configuration file and the entry point from which Nginx reads its configuration. The configuration files are what you will use most often in Nginx; the other files are introduced briefly below. Once you can write Nginx configuration files proficiently, you can use Nginx proficiently.
[wujianrong@localhost ~]# tree /usr/local/nginx
/usr/local/nginx
├── conf                        # Nginx configuration file directory
│   ├── fastcgi.conf            # fastcgi configuration
│   ├── fastcgi.conf.default
│   ├── fastcgi_params          # fastcgi parameters
│   ├── fastcgi_params.default
│   ├── koi-utf
│   ├── koi-win
│   ├── mime.types              # Media type mapping
│   ├── mime.types.default
│   ├── nginx.conf              # Nginx main configuration file
│   ├── nginx.conf.default
│   ├── scgi_params             # scgi parameters
│   ├── scgi_params.default
│   ├── uwsgi_params            # uwsgi parameters
│   ├── uwsgi_params.default
│   └── win-utf
├── html                        # Default static pages
│   ├── 50x.html                # Error page used to gracefully replace failed responses
│   └── index.html              # Default home page
├── logs                        # Log directory
│   ├── access.log              # Access log
│   ├── error.log               # Error log
│   └── nginx.pid               # pid file; Nginx writes its master process ID to this file
├── proxy_temp                  # Temporary directory
├── sbin                        # Nginx command directory
│   └── nginx                   # Nginx start command
├── scgi_temp                   # Temporary directory
└── uwsgi_temp                  # Temporary directory
1. Configuration file (key)
conf // directory containing all Nginx configuration files
nginx.conf // the core Nginx configuration file; very important, and the focus of our study
nginx.conf.default // backup of nginx.conf
2. Log
logs: log directory. After the Nginx server starts, it contains three files: access.log, error.log, and nginx.pid.
3. Resource directory
html // directory containing Nginx's two built-in static HTML pages
50x.html // page displayed when access fails
index.html // default home page displayed on successful access
4. Backup files
fastcgi.conf // fastcgi-related configuration file
fastcgi.conf.default // backup of fastcgi.conf
fastcgi_params // fastcgi parameter file
fastcgi_params.default // backup of fastcgi_params
scgi_params // scgi parameter file
scgi_params.default // backup of scgi_params
uwsgi_params // uwsgi parameter file
uwsgi_params.default // backup of uwsgi_params
mime.types // records the mapping between HTTP Content-Type values and file name extensions
mime.types.default // backup of mime.types
5. Encoding files
The koi-utf, koi-win, and win-utf files are configuration files related to encoding translation maps, used to convert one character encoding into another.
6. Executable file
sbin: stores the Nginx executable program file nginx.
7. Command
nginx: the command used to control the starting and stopping of Nginx.
Nginx common commands
- There are two common startup commands
> nginx // Start nginx directly, provided the nginx environment variables are configured
> systemctl start nginx.service // Start via systemctl
- Four common stop commands
> nginx -s stop // Stop the service immediately
> nginx -s quit // Gracefully stopping the service requires that the process finish its current work and then stop
> killall nginx // Kill the nginx process directly
> systemctl stop nginx.service // Stop via systemctl
- Two common restart commands
> nginx -s reload // Reload the configuration file without stopping the service
> systemctl reload nginx.service // Reload via systemctl
- Verify that the nginx configuration file is correct
> nginx -t // If the output contains "syntax is ok" and "test is successful", the Nginx configuration file is correct
Nginx configuration details
1. Structure of the configuration file
To give you a simple outline, here is a brief description of the configuration file structure:
worker_processes  1;                        # Number of worker processes
events {
    worker_connections  1024;               # Maximum number of connections per worker
}
http {                                      # http block starts
    include       mime.types;               # Include media type definitions
    default_type  application/octet-stream; # Default media type
    sendfile        on;
    keepalive_timeout  65;
    server {                                # First server block starts, an independent virtual host site
        listen       80;                    # Listening port
        server_name  localhost;             # Host name
        location / {                        # First location block starts
            root   html;                    # Site root, relative to the Nginx installation directory
            index  index.html index.htm;    # Default index pages
        }                                   # First location block ends
        error_page   500 502 503 504  /50x.html;   # Gracefully replace error pages with 50x.html
        location = /50x.html {              # location block starts, for accessing 50x.html
            root   html;                    # The corresponding site directory is html
        }
    }
    ......
}
- nginx.conf is the entry file; when Nginx starts, it reads its basic configuration from nginx.conf
- The various xxx.conf files under the conf directory are generally the configurations of individual applications, for example website a's Nginx configuration is called a.conf and website b's is called b.conf, which makes them convenient to manage
- In the main configuration file nginx.conf there is usually a line like the one sketched below that pulls these sub-configuration files in
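The snippet that would normally appear here is missing from this copy; as a hedged example, the include line typically looks something like this (the exact directory is an assumption and depends on how Nginx was installed):

http {
    ......
    include /usr/local/nginx/conf/conf.d/*.conf;   # Pull every sub-configuration file in conf.d into the main configuration
}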
2. nginx.conf main configuration file
3. xxx.conf sub-configuration files in detail
The changes we most often make to Nginx are to the sub-configuration files; a minimal sketch of one follows.
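A minimal sketch of what a sub-configuration file such as a.conf might contain (the domain name and directory are assumptions for illustration):

server {
    listen       80;
    server_name  a.com;            # Domain of site a (assumed)
    location / {
        root   /var/www/a;         # Static files of site a (assumed)
        index  index.html;
    }
}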
4. About location matching
# Priority 1: exact match, the root path
location = / {
    return 400;
}
# Priority 2: prefix match starting with a certain string, e.g. paths starting with /av
location ^~ /av {
    root /data/av/;
}
# Priority 3: case-sensitive regular match, e.g. paths containing /media
location ~ /media {
    alias /data/static/;
}
# Priority 4: case-insensitive regular match, all .jpg / .png / .gif / .js / .css requests come here
location ~* .*\.(jpg|png|gif|js|css)$ {
    root /data/av/;
}
# Priority 5: generic match, all other requests come here
location / {
    return 403;
}
Nginx reverse proxy and load balancing
1. Reverse proxy
Before talking about the reverse proxy, let's first look at the forward proxy, which is the proxy model we come into contact with most often. We will explain what a forward proxy is from two angles, software and everyday life, and also talk about how it differs from a reverse proxy.
Forward proxy
A forward proxy "acts on behalf of the client". It is a server that sits between the client and the origin server: to get content from the origin server, the client sends a request to the proxy and specifies the target (the origin server); the proxy then forwards the request to the origin server and returns the obtained content to the client. The client must perform some special settings to use a forward proxy. Typical uses of a forward proxy:
- Access previously inaccessible resources, such as Google
- Can act as a cache to speed up access to resources
- Authorize client access and authenticate Internet access
- The proxy can record user access records (online behavior management) and hide user information externally
Reverse proxy
A reverse proxy "acts on behalf of the server". It is mainly used in distributed deployments of server clusters, where the reverse proxy hides the servers' information. Typical uses of a reverse proxy:
- To keep the intranet secure, the reverse proxy is usually used as the public-network entry point, while the web servers stay on the intranet
- Load balancing: use a reverse proxy server to optimize the load across your website's servers
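A minimal sketch of a reverse proxy configuration (the port and the upstream address are assumptions for illustration, not a setup from this article):

server {
    listen       80;
    server_name  example.com;              # Public entry point (assumed)
    location / {
        proxy_pass http://127.0.0.1:3000;  # Forward requests to an intranet Node service (assumed)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}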
2. Load balancing
The requests that the Nginx reverse proxy server receives from different clients are called the load. Distributing these requests to different servers according to certain rules is a balancing rule, so the process of distributing received requests according to some rule is called load balancing. Load balancing is divided into hardware load balancing and software load balancing; here we talk about software load balancing, and if you are interested in hardware load balancing you can look into it yourself. Load balancing algorithms:
- Built-in: polling (the default), weighted polling, ip_hash
- Plugins: fair, url_hash. url_hash and ip_hash are much the same; one hashes on the IP and the other on the URL
Default polling
Each request is assigned to a different back-end server one by one in chronological order. If a back-end server goes down, it can be removed automatically.
upstream constPolling {
    server localhost:10001;
    server localhost:10002;
}
server {
    listen       10000;
    server_name  localhost;
    location / {
        proxy_pass http://constPolling;   # Forward requests to the constPolling upstream
        proxy_redirect default;
    }
}
Weighted polling
If weight is set, the larger the value, the higher the allocation rate and therefore the higher the probability of being accessed. It is used when the performance of the back-end servers is uneven, or to give master and slave servers different weights so that host resources are used reasonably and effectively.
upstream constPolling {
    server localhost:10001 weight=1;
    server localhost:10002 weight=2;
}
server {
    listen       10000;
    server_name  localhost;
    location / {
        proxy_pass http://constPolling;   # Forward requests to the constPolling upstream
        proxy_redirect default;
    }
}
The greater the weight, the greater the probability of being visited. In the example above the two servers are visited with 33.33% and 66.66% probability; the access order would be, for example: localhost:10001, localhost:10002, localhost:10002, localhost:10001, localhost:10002, localhost:10002.
ip_hash
Each request is allocated according to the hash of the client IP address, so every visitor consistently reaches the same back-end server. It is configured as follows (ip_hash can be combined with weight) and can effectively solve the session-sharing problem of dynamic websites.
upstream constPolling {
ip_hash;
server localhost:10001 weight=1;
server localhost:10002 weight=2;
}
fair
A load balancing algorithm I prefer to use. The fair algorithm performs load balancing intelligently based on page size and load time, allocating requests preferentially to servers with short response times.
- To use it, the upstream_fair module must be installed; a fair installation tutorial is attached
- Requests are assigned to whichever server responds faster
upstream constPolling {
server localhost:10001;
server localhost:10002;
fair;
}
Nginx error page configuration and enabling Gzip compression
1. Nginx error page configuration
When the address we visit does not exist, we can handle it according to the HTTP status code. Let's take 404 as an example; a sketch follows.
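A minimal sketch of a 404 error page configuration (the page name and directory are assumptions for illustration):

server {
    listen       80;
    server_name  localhost;
    error_page   404  /404.html;     # Serve 404.html whenever the status code is 404
    location = /404.html {
        root   html;                 # Directory that contains 404.html (assumed)
    }
}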
Of course, besides 404 we can also handle other status codes such as 500 and 502. In the panda's company projects, because the error pages of multiple projects are unified, we maintain the error-code pages separately in a middle-layer project, and then jump to the corresponding error page according to whether the client is PC or mobile.
2. Gzip compression
Gzip is a web page compression technology. After gzip compression, a page can shrink to 30% of its original size or even less, so pages load faster and browsing feels smoother. Gzip web compression requires support from both the browser and the server. When the browser supports gzip, it includes Accept-Encoding: gzip in the request headers; Nginx then sends gzip-compressed content to the browser and adds Content-Encoding: gzip to the response headers, telling the browser to decompress the content before parsing and rendering it. If the project has to run in Internet Explorer or other less compatible browsers, check first whether the browser supports gzip.
server {
    listen 12089;
    index index.php index.html;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html/gzip;
    # Enable gzip
    gzip on;
    gzip_http_version 1.0;
    gzip_types text/css text/javascript application/javascript image/png image/jpeg image/gif;
    location / {
        index index.html index.htm index.php;
        autoindex off;
    }
}
What formats do you need to list in gzip_types? Look at the Content-Type of the responses:
Content-Type: text/css
After gzip takes effect, the response header contains: Content-Encoding: gzip
Common global variables
- $args: the arguments in the request line, same as $query_string
- $content_length: the Content-Length field in the request header
- $content_type: the Content-Type field in the request header
- $document_root: the value specified by the root directive for the current request
- $host: the Host header of the request, otherwise the server name
- $http_user_agent: client agent information
- $http_cookie: client cookie information
- $limit_rate: this variable can limit the connection rate
- $request_method: the action requested by the client, usually GET or POST
- $remote_addr: IP address of the client
- $remote_port: port of the client
- $remote_user: user name already authenticated by the Auth Basic Module
- $request_filename: the file path of the current request, generated from the root or alias directive and the request URI
- $scheme: the request scheme (http or https)
- $server_protocol: the protocol used by the request, usually HTTP/1.0 or HTTP/1.1
- $server_addr: server address, which can be determined after a system call
- $server_name: server name
- $server_port: the port on which the request arrived at the server
- $request_uri: the original URI including request parameters, without the host name, e.g. "/foo/bar.php?arg=baz"
- $uri: the current URI without request parameters; $uri does not contain the host name, e.g. "/foo/bar.html"
- $document_uri: same as $uri
Nginx usage scenarios (GitHub will continue to be updated and supplemented)
1. Serve different project directories under different paths of the same domain name
During development there is a scenario where, for example, project A has multiple subsystems that need to be accessed through different paths under the same domain name; this is also used in A/B testing and grayscale release scenarios. For example, a.com/a/*** accesses system A and a.com/b/*** accesses system B, as sketched below.
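A minimal sketch of this setup (the domain name and directories are assumptions for illustration):

server {
    listen       80;
    server_name  a.com;                 # Shared domain (assumed)
    location /a/ {
        alias /var/www/system-a/;       # Files of system A (assumed)
        index index.html;
    }
    location /b/ {
        alias /var/www/system-b/;       # Files of system B (assumed)
        index index.html;
    }
}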
2. Automatically adapt to PC and mobile pages
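A minimal sketch that picks a document root based on the User-Agent (the directories and the user-agent pattern are assumptions for illustration):

server {
    listen       80;
    server_name  localhost;
    location / {
        root  /usr/local/nginx/html/pc;                      # PC pages by default (assumed path)
        if ($http_user_agent ~* '(android|iphone|ipad|mobile)') {
            root  /usr/local/nginx/html/mobile;              # Mobile pages (assumed path)
        }
        index index.html;
    }
}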
3. Restrict access to Chrome only
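A minimal sketch that returns 403 to non-Chrome user agents (the match pattern is an assumption; note that User-Agent checks are easy to spoof):

server {
    listen       80;
    server_name  localhost;
    location / {
        if ($http_user_agent !~* 'chrome') {
            return 403;              # Reject requests whose User-Agent does not contain "chrome"
        }
        root  html;
        index index.html;
    }
}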
4. Fixing the 404 problem when refreshing a front-end single-page application
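A minimal sketch using try_files so that refreshed front-end routes fall back to index.html (the root path is an assumption for illustration):

server {
    listen       80;
    server_name  localhost;
    location / {
        root  /var/www/spa;                   # Build output of the single-page app (assumed)
        index index.html;
        try_files $uri $uri/ /index.html;     # Unknown paths fall back to index.html so the front-end router can handle them
    }
}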
More: including hotlink protection, static/dynamic separation, access control, and so on
Check them out on GitHub or Gitee
Examples will continue to be updated, and you are welcome to help supplement them