Author: Zhao Tiantian 2021-10-15
What is Nginx?
Nginx is a high-performance HTTP and reverse proxy web server.
Its key features are a small memory footprint and strong concurrency (officially around 50,000 concurrent connections, versus roughly 500 for Tomcat), although this of course depends on the server's hardware configuration.
Before going further, it helps to understand the difference between a forward proxy and a reverse proxy.
Forward proxy: proxies the client. A VPN is an example: to access foreign websites, the client needs a proxy.
Reverse proxy: proxies the server.
What can Nginx do?
Static Resource Deployment
For the front end this is the most common use. Front-end deployment is generally static resource deployment: HTML/JS/CSS/IMG. A sample configuration:
server {
    server_name test.ddelement.com;
    listen 8888;
    root /home/ddelement/test/;
    location /myweb {
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT,web-token,app-token,Authorization,Accept,Origin,Keep-Alive,User-Agent,X-Mx-ReqToken,X-Data-Type,X-Auth-Token,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
        index index.htm index.html;
    }
}
After modifying the configuration, reload it with /usr/sbin/nginx -s reload.
A detailed explanation of the configuration above:
server_name: the name of the server.
listen: server_name and listen together uniquely identify a server block. You can have multiple server blocks listening on port 8888, but each of them must have a distinct server_name.
root: the root path from which files are served. root can also be placed inside a location; set at the server level, it is shared by all the locations below.
location /myweb: a request to /myweb is matched here and served as /home/ddelement/test/myweb/index.html.
add_header: adds CORS headers to handle cross-origin requests. This configuration can also be placed outside the location, making it common to the whole server block.
Pitfall: with root, Nginx joins the root path with the full request path matched by the location. So the index.html or index.htm served for /myweb lives in /home/ddelement/test/myweb/, not directly in /home/ddelement/test/.
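If you want the location prefix left out of the on-disk path, Nginx's alias directive can be used instead of root. A minimal sketch, reusing the paths from the example above:

```nginx
server {
    server_name test.ddelement.com;
    listen 8888;
    location /myweb {
        # alias strips the matched /myweb prefix before joining the path:
        # a request for /myweb/index.html serves /home/ddelement/test/index.html,
        # whereas root would serve /home/ddelement/test/myweb/index.html.
        alias /home/ddelement/test/;
        index index.html index.htm;
    }
}
```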
Reverse proxy
A reverse proxy is transparent to the client: the client needs no configuration and simply sends its requests to the reverse proxy server, which fetches data from the target server and returns it to the client. The reverse proxy and the target server appear as a single server; only the proxy is exposed, hiding the real server.
Scenario 1: handling cross-origin requests. The front-end resources are deployed statically on Nginx, and the back-end address is also proxied through Nginx. When the front end calls the back end, it actually calls the Nginx proxy, so the same-origin policy is never triggered.
server {
    server_name localhost;
    listen 9999;
    # Front-end static resource deployment
    location / {
        root /home/ddelement/test/;
        index index.html;
    }
    # Back-end address proxy
    location /apiv2/ {
        proxy_pass http://20.80.91.145:8080/;
    }
}
Scenario 2: accessing multiple services through a single IP address. For example, http://20.80.91.145/taobao reaches Taobao, http://20.80.91.145/baidu reaches Baidu, http://20.80.91.145/didi reaches Didi, and so on. The configuration is as follows:
server {
    server_name localhost;
    listen 80;
    location /baidu/ {
        proxy_pass http://baidu.com/;
    }
    location /taobao/ {
        proxy_pass http://taobao.com/;
    }
    location /didi/ {
        proxy_pass http://didi.com/;
    }
    location /qq/ {
        proxy_pass http://qq.com/;
    }
    proxy_intercept_errors on;
    error_page 404 403 500 502 503 504 /404.html;
    location = /404.html {
        root /home/ddelement/test/;
    }
}
Load balancing
Principle: in a real application, the number of users and the QPS are often so large that one machine cannot bear the load, and multiple machines must share the pressure. Baidu or Taobao may need tens of thousands of servers, each with its own IP address, yet users access everything through a single domain name such as baidu.com. Load balancing controls which machine each request is forwarded to.
Schematic diagram
You can configure it directly
http {
    upstream myapp1 {
        server 192.168.1.103:8080;
        server 192.168.1.104:8080;
    }
    ...
    server {
        listen 80;
        server_name localhost;
        ...
        location /myweb {
            proxy_pass http://myapp1;
        }
    }
}
Declare the group of back-end servers with upstream myapp1 { ... }, then reference the group with proxy_pass http://myapp1. There are many load balancing algorithms; the default is round-robin: one request is sent to 192.168.1.103:8080, the next to 192.168.1.104:8080, and so on.
Load balancing algorithms
Round-robin strategy: requests are distributed to each server in turn, cycling through the list. Configured as follows:
upstream myapp1 {
    server 192.168.1.103:8080;
    server 192.168.1.104:8080;
}
Weight strategy: servers differ; some have stronger processing power or more memory, others less. Under plain round-robin the weaker servers are the first to go down. Instead, each should carry load according to its strength: give the better-performing servers a larger weight and the weaker ones a smaller one, so requests are distributed more sensibly:
upstream myapp1 {
    server 192.168.1.103:8080 weight=3;
    server 192.168.1.104:8080 weight=2;
}
Least-connections strategy: requests are forwarded to the machine with the fewest active connections. Use this with caution: an idle machine is often simply the weakest one. The configuration is as follows:
upstream myapp1 {
    least_conn;
    server 192.168.1.103:8080;
    server 192.168.1.104:8080;
}
All of the above strategies share one problem: session loss.
IP hash strategy: also called IP binding. Each request is assigned according to the hash of the client's IP address, so a given client always reaches the same back-end server, which solves the session-loss problem. The configuration is as follows:
upstream myapp1 {
    ip_hash;
    server 192.168.1.103:8080;
    server 192.168.1.104:8080;
}
Other strategies include backup and down. A server marked backup is enabled only after all non-backup machines fail; this is commonly used for releases: update the backup machine first, then gradually update the non-backup machines. A server marked down never participates in the load.
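As a sketch, the two flags are written like this (the first two addresses follow the examples above; the third is illustrative):

```nginx
upstream myapp1 {
    server 192.168.1.103:8080;          # participates in the load normally
    server 192.168.1.104:8080 backup;   # used only when all non-backup servers are unavailable
    server 192.168.1.105:8080 down;     # marked offline; never receives requests
}
```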
Dynamic and static separation
Before front and back ends were separated, both were deployed to Tomcat as a single JAR package, which made static resources slow to serve. Nginx handles static resources roughly ten times better than Tomcat, while Tomcat is better at back-end services, so let the right service do the right thing: Nginx deploys the front-end static resources and Tomcat the back-end services, maximizing the performance of both.
server {
    server_name localhost;
    listen 9999;
    # Front-end static resource deployment
    location / {
        root /home/ddelement/test/;
        index index.html;
    }
    # Back-end address proxy
    location /apiv2/ {
        proxy_pass http://20.80.81.101:8080/;
    }
}
Virtual host
A virtual host divides one physical machine into several "virtual" servers: one physical server acts as multiple servers, so it can host multiple sites. Nginx's virtual hosting feature lets it run multiple sites under different domain names without installing multiple copies of Nginx on one server.
In Nginx, one server block is one virtual host; to configure multiple virtual hosts, configure multiple server blocks. A micro-application, for example, does not need its own Nginx instance; it can be served by a virtual host.
Port-based virtual host
Port-based virtual hosts are distinguished by port number: the browser reaches each host through the same domain name (or the same IP) with a different port.
server {
    server_name localhost;
    listen 9999;
    location / {
        root /home/ddelement/test/;
        index index.html;
    }
}
server {
    server_name localhost;
    listen 8888;
    location / {
        root /home/ddelement/test/;
        index index.html;
    }
}
Domain-based virtual hosting
Domain-based virtual hosting is the most common kind. Here the sites are reached through ww.ddelement.com and gg.ddelement.com:
server {
    server_name ww.ddelement.com;
    listen 80;
    location / {
        root /home/ddelement/test/ww;
        index index.html;
    }
}
server {
    server_name gg.ddelement.com;
    listen 80;
    location / {
        root /home/ddelement/test/gg;
        index index.html;
    }
}
Common Nginx commands
- ./nginx — start Nginx
- ./nginx -s stop — stop immediately
- ./nginx -s quit — quit gracefully
- ./nginx -s reload — reload the configuration file
- ps aux | grep nginx — view the Nginx processes