Preface
This article focuses only on what Nginx can do without third-party modules. There are far too many third-party modules to cover them all, and even so this article may not be exhaustive; after all, it reflects only my own use and knowledge. So please bear with me, and feel free to leave a comment.
What can Nginx do
1. Reverse proxy
2. Load balancing
3. HTTP server (including dynamic/static separation)
4. Forward proxy
These are the things I know Nginx can do without relying on third-party modules. Below is how to configure each of them.
Reverse proxy
Reverse proxying is one of the things Nginx is used for most often. In reverse proxy mode, a proxy server accepts connection requests from the Internet, forwards them to a server on the internal network, and returns the results to the client on the Internet. Put simply, the real server cannot be reached directly from the external network, so a proxy server is needed: it is reachable from the external network and sits in the same network environment as the real server (it may even be the same machine on a different port). A simple reverse proxy configuration follows:
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host:$server_port;
    }
}
After saving the configuration file and starting Nginx, accessing localhost is equivalent to accessing localhost:8080.
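In practice, the backend usually also wants to know the real client IP, since otherwise it only ever sees the proxy's address. Below is a minimal sketch of my own (not from the original configuration) adding the conventional headers for this; X-Real-IP and X-Forwarded-For are just the customary header names, not anything Nginx requires:

server {
    listen       80;
    server_name  localhost;

    location / {
        proxy_pass http://localhost:8080;
        # Pass the original Host header through to the backend
        proxy_set_header Host $host:$server_port;
        # Conventional headers so the backend can see the real client IP
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}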
Load balancing
Load balancing is another frequently used Nginx feature. Load balancing means spreading work across multiple operation units, such as web servers, FTP servers, or business-critical application servers, so that they complete tasks together. Put simply, when you have two or more servers, requests are distributed among them according to some rule; this requires configuring a reverse proxy as part of the load-balancing setup. Nginx currently supports three built-in load-balancing strategies plus two popular third-party ones.
1. RR (round-robin, the default)
Each request is assigned to a different backend server in turn, in order of arrival. If a backend server goes down, it is automatically removed from rotation.
Simple configuration
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       81;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}
The core of the load-balancing configuration is:
upstream test {
    server localhost:8080;
    server localhost:8081;
}
Here I configured two servers. In reality it is one machine with two different ports, and the server on 8081 does not actually exist, i.e., it cannot be reached. Yet visiting http://localhost causes no problems: Nginx falls through to http://localhost:8080, because it automatically tracks each server's state. If a server is unreachable (down), Nginx stops sending requests to it, so the dead 8081 does not affect anything, and no extra settings are needed.
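How quickly Nginx considers a server failed can be tuned per server. The sketch below is my addition (not from the original article), using the standard max_fails and fail_timeout parameters: after 3 failed attempts within 30 seconds, the server is taken out of rotation for the next 30 seconds:

upstream test {
    # After 3 failures within 30s, skip this server for the next 30s
    server localhost:8080 max_fails=3 fail_timeout=30s;
    server localhost:8081 max_fails=3 fail_timeout=30s;
}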
2. weight
Specifies the polling probability: the weight is proportional to the share of requests a server receives. It is used when backend server performance is uneven. For example:
upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
}
With this configuration, only 1 in 10 requests will reach 8081, while the other 9 go to 8080.
3. ip_hash
Both of the methods above share a problem: the next request may be sent to a different server. When the application is not stateless (e.g., it keeps data such as login information in a session), this causes real trouble, because switching servers means logging in again. Often we therefore want each client to stick to one server, and that is what ip_hash is for. With ip_hash, each request is assigned according to a hash of the client's IP, so every visitor consistently hits the same backend server, which solves the session problem.
upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}
4. fair (third party)
Requests are assigned according to the response time of each backend server, with shorter response times given priority.
upstream backend {
    fair;
    server localhost:8080;
    server localhost:8081;
}
5. url_hash (third party)
Requests are assigned according to a hash of the requested URL, so that each URL always goes to the same backend server. This is most effective when the backend servers cache content. Note: the hash statement goes in the upstream block; the server lines must not carry weight or other parameters; and hash_method selects the hash algorithm to use.
upstream backend {
    hash $request_uri;
    hash_method crc32;
    server localhost:8080;
    server localhost:8081;
}
The five load-balancing methods above fit different situations, so choose the strategy that matches your actual needs. fair and url_hash, however, require third-party modules; since this article is about what Nginx can do by itself, installing third-party modules is not covered here.
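With the default round-robin strategy, the built-in per-server parameters can also be combined freely. As a small illustration of my own (not from the original article): down marks a server as permanently offline, and backup designates a server that only receives traffic when all the others are unavailable:

upstream test {
    server localhost:8080;
    server localhost:8081 down;     # taken out of service
    server localhost:8082 backup;   # used only when the others are down
}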
HTTP server
Nginx itself is also a static resource server. If you only have static resources, you can use Nginx alone as the server. Dynamic/static separation, which is popular these days, can also be implemented with Nginx. First, let's look at Nginx as a plain static resource server.
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }
}
By default, visiting http://localhost now serves index.html from the wwwroot directory on drive E. A purely static site can be deployed this way.
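For static sites it is usually also worth enabling compression. The following is a minimal sketch of my own (not from the original article) using Nginx's standard gzip directives; the MIME type list is just an example:

server {
    listen       80;
    server_name  localhost;

    # Compress responses larger than 1 KB for the listed MIME types
    gzip            on;
    gzip_min_length 1k;
    gzip_types      text/plain text/css application/javascript;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }
}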
Dynamic and static separation
Dynamic/static separation means dividing a dynamic website's resources, according to certain rules, into those that rarely change and those that change frequently. Once the two are split, we can cache static resources according to their characteristics. This is the core idea of handling a site's static content.
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       80;
    server_name  localhost;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }

    # All static requests are handled by Nginx itself
    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
        root e:\wwwroot;
    }

    # All dynamic requests are forwarded to Tomcat
    location ~ \.(jsp|do)$ {
        proxy_pass http://test;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root e:\wwwroot;
    }
}
With this we can put HTML, images, CSS, and JS files in the wwwroot directory, while Tomcat handles only .jsp and .do requests. For example, when a request ends in .gif, Nginx serves the GIF file from wwwroot directly. Here the static files sit on the same server as Nginx, but they could just as well be on another machine, reached through reverse proxy and load-balancing configuration. Once the basic flow is understood, most of the configuration is straightforward. Note also that location accepts regular expressions, which makes it very flexible.
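As mentioned above, the point of splitting off static resources is that they can be cached. Here is a small sketch of mine (not part of the original configuration) using the standard expires directive, with 30 days as an arbitrary example value:

# Let browsers cache static files; 30d is just an example value
location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
    root    e:\wwwroot;
    expires 30d;
}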
Forward proxy
A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and hands the retrieved content back to the client. It is the client that configures and uses the forward proxy. If you need to use your own machine as a proxy server, Nginx can act as a forward proxy. There is one catch, though: Nginx does not support HTTPS here. I tried to configure an HTTPS forward proxy but in the end could not get it to work, so if anyone knows the correct method, please leave a comment.
resolver 114.114.114.114 8.8.8.8;

server {
    resolver_timeout 5s;
    listen 81;

    access_log e:\wwwroot\proxy.access.log;
    error_log  e:\wwwroot\proxy.error.log;

    location / {
        proxy_pass http://$host$request_uri;
    }
}
resolver configures the DNS servers the forward proxy uses, and listen sets the port it listens on. Once this is in place, you can enter the server's IP address and port as the proxy in IE or any proxy plug-in.
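One caution of my own (not from the original article): a forward proxy open to the whole Internet will quickly be abused, so it is wise to restrict who may use it. The sketch below uses Nginx's standard allow/deny directives; the 192.168.1.0/24 range is just an assumed example for a LAN:

location / {
    # Only allow clients from the assumed LAN range to use the proxy
    allow 192.168.1.0/24;
    deny  all;
    proxy_pass http://$host$request_uri;
}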
One last word
Nginx supports hot reload: after modifying the configuration file, we can make the changes take effect without stopping Nginx. I don't know how many people are aware of this; I wasn't at first, and often killed the Nginx process and started it again. The command that makes Nginx reread its configuration is:
nginx -s reload
Under Windows:
nginx.exe -s reload
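One more tip from my own experience (not in the original article): before reloading, you can check the configuration file for syntax errors with

nginx -t

which reports whether the configuration is valid and, if not, which file and line is at fault.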