Preface
This article has participated in the “Please check | you have a free opportunity to apply for nuggets peripheral gifts” activity.
Join in the comments for a chance to win two new nuggets badges. See the bottom of the article for details.
This article is aimed at junior developers, especially beginners. Read it carefully and I believe you will get something out of it.
Creating content is not easy, so remember to like, follow, and bookmark.
Nginx concept
Nginx is a high-performance HTTP and reverse proxy server. It is characterized by low memory usage and strong concurrency; in fact, Nginx's concurrent performance is among the best of web servers of the same type.
Nginx was developed with performance as its most important consideration. Its implementation is very efficient and can withstand high loads, with reports indicating that it can support up to 50,000 concurrent connections.
Nginx is a good alternative to Apache for high-concurrency scenarios: it is one of the software platforms of choice for web hosting providers in the United States.
Reverse proxy
Before we talk about reverse proxies, let’s talk about proxies and forward proxies.
Proxy
A proxy is essentially an intermediary. A and B could connect directly, but a C is inserted between them, and C is the intermediary. In the beginning, proxies were mostly used to help intranet clients (LANs) access external servers. Later came the reverse proxy, where “reverse” means the opposite direction: the proxy forwards requests from external clients to internal servers.
Forward proxy
A forward proxy is a proxy for the client: it acts on the client's behalf, and the server does not know which client actually initiated the request.
A forward proxy is like a jump host: you access external resources through the proxy.
For example, in China we cannot access Google directly, but we can send a request to a forward proxy server that can reach Google. The proxy accesses Google on our behalf, gets the returned data, and passes it back to us, so we end up being able to access Google.
Reverse proxy
A reverse proxy is a server proxy. The client does not know which server actually provides the service.
The client is unaware of the existence of the proxy server.
A proxy server receives the Internet connection request, forwards the request to the Intranet server, and returns the result to the Internet client. In this case, the proxy server acts as a reverse proxy server.
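As a minimal sketch of what this looks like in Nginx configuration (the domain name and back-end address here are made up for illustration), a reverse proxy needs little more than a proxy_pass directive:

server {
    listen 80;
    server_name example.com;                  # assumed public domain

    location / {
        proxy_pass http://127.0.0.1:8080;     # internal server the client never sees
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

From the client's point of view, it is simply talking to example.com on port 80; Nginx forwards the request to the internal service and relays the response back.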
Load balancing
Here’s an example of load balancing:
Most of us have taken the subway. During the morning rush hour there is always one subway entrance that is the most crowded. At that moment there is a staff member A with a megaphone shouting, “If you are in a hurry, please go to entrance B; entrance B has fewer people and the cars are emptier.” That staff member A is doing load balancing.
In order to improve the capability of all aspects of the website, we will generally form a cluster of several machines to provide external services. However, our website provides a single access point, such as www.taobao.com. So how do you distribute a user’s request to different machines in the cluster when the user types www.taobao.com in the browser? That’s what load balancing does.
Load balancing refers to distributing load (work tasks, access requests) across multiple operation units (servers, components) for execution. It is the standard solution for high performance, high availability (no single point of failure), and scalability (horizontal scaling).
Nginx provides three load balancing methods: round robin, weighted round robin, and IP hash.
Round robin
Nginx defaults to round robin with a weight of 1, so the servers handle requests in order: ABCABCABCABC….
upstream mysvr {
server 192.168.8.1:7070;
server 192.168.8.2:7071;
server 192.168.8.3:7072;
}
Weighted round robin
Requests are distributed to the servers in proportion to the configured weight; if weight is not set, it defaults to 1. With the configuration below, the servers handle requests in the order ABBCCCABBCCC….
upstream mysvr {
server 192.168.8.1:7070 weight=1;
server 192.168.8.2:7071 weight=2;
server 192.168.8.3:7072 weight=3;
}
ip_hash
ip_hash hashes the IP address of the requesting client and then distributes requests from the same client IP to the same server. This avoids the need to share sessions across servers.
upstream mysvr {
server 192.168.8.1:7070;
server 192.168.8.2:7071;
server 192.168.8.3:7072;
ip_hash;
}
Dynamic and static separation
Dynamic versus static pages
- Static resources: Resources whose source code never changes when the user accesses the resource multiple times (such as: HTML, JavaScript, CSS, IMG files, etc.).
- Dynamic resources: resources whose content may change when a user accesses them multiple times (e.g. .jsp files, servlets, etc.).
What is dynamic-static separation
- Dynamic-static separation means distinguishing, according to certain rules, the resources in a dynamic website that rarely change from those that change frequently. Once dynamic and static resources have been split, we can cache the static resources according to their characteristics. This is the core idea of handling static content on a site.
- In short, dynamic-static separation means separating dynamic files from static files.
Why use dynamic-static separation
To speed up the site's response, dynamic and static resources can be handled by different servers, which speeds up processing and reduces the load on any single server.
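As a rough sketch of the idea (the /static/ path, the /data root, and the back-end port are assumptions for illustration), Nginx can serve static files itself with browser caching enabled, while passing dynamic requests on to the application server:

server {
    listen 80;
    server_name localhost;

    # static resources: served directly by Nginx and cached by the browser
    location /static/ {
        root    /data;
        expires 7d;
    }

    # dynamic requests: forwarded to the application server (e.g. Tomcat)
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}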
Nginx installation
Installation on Windows
1. Download nginx
Download the stable version from nginx.org/en/download…. Taking nginx/Windows-1.20.1 as an example, download nginx-1.20.1.zip directly and decompress it, as shown below:
2. Start nginx
- Double-click nginx.exe directly; a black command window will flash by.
- Or open a CMD window, switch to the nginx decompression directory, type nginx.exe, and press Enter.
3. Check whether nginx is started successfully
Enter http://localhost:80 in the browser address bar and press Enter. If the following page appears, the startup was successful!
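For reference, the common commands on Windows, run from the nginx decompression directory in a CMD window, look roughly like this (a sketch; adjust paths as needed):

rem start nginx in the background
start nginx
rem stop immediately / stop gracefully
nginx -s stop
nginx -s quit
rem reload the configuration file
nginx -s reload
rem check whether nginx is running
tasklist /fi "imagename eq nginx.exe"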
Installing Nginx with Docker
My earlier article also covered the installation steps on Linux; here I use Docker to install it, which is very simple.
Docker (3) : Docker deploys Nginx and Tomcat
1. View all local images with the docker images command.
2. Create and start the nginx container with the command docker run -d --name nginx01 -p 3344:80 nginx.
3. View the running container with the docker ps command.
If you can access port 3344 on the server's IP address from a browser, the installation was successful.
Note: if you cannot connect, check whether the port is open in the Alibaba Cloud security group and whether the server firewall allows it!
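Putting the Docker steps together as a quick reference (the container name nginx01 and host port 3344 follow the article; the firewall commands assume CentOS with firewalld and are only needed if the port is blocked):

docker images                                   # view all local images
docker run -d --name nginx01 -p 3344:80 nginx   # map host port 3344 to container port 80
docker ps                                       # confirm the container is running
curl http://localhost:3344                      # should return the Nginx welcome page

# if the port is unreachable from outside, open it in the firewall (and the cloud security group)
firewall-cmd --zone=public --add-port=3344/tcp --permanent
firewall-cmd --reload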
Installation on Linux
1. Install GCC
Nginx is compiled from source, which requires GCC. If GCC is not available, install it:
yum install gcc-c++
2. Install PCRE and pcre-devel
PCRE (Perl Compatible Regular Expressions) is a Perl-compatible regular expression library. The HTTP module of Nginx uses PCRE to parse regular expressions, so the PCRE library must be installed on Linux. pcre-devel is a secondary development library built on PCRE, which Nginx also needs. Command:
yum install -y pcre pcre-devel
3. Install zlib
zlib provides a variety of compression and decompression methods. Nginx uses zlib to gzip the contents of HTTP packages, so zlib must be installed on CentOS.
yum install -y zlib zlib-devel
4. Install OpenSSL
OpenSSL is a powerful Secure Sockets Layer cryptographic library that includes the major cryptographic algorithms, common key and certificate management functions, and the SSL protocol, and provides a rich set of applications for testing and other purposes. Nginx supports both HTTP and HTTPS, so OpenSSL must be installed on CentOS.
yum install -y openssl openssl-devel
5. Download the installation package
Manually download the .tar.gz installation package from nginx.org/en/download…
Upload the file to the /root directory on the server.
6. Decompress
tar -zxvf nginx-1.20.1.tar.gz
cd nginx-1.20.1
7. Configure and compile
Use the default configuration and execute in the nginx root directory
./configure
make
make install
Find the installation path: whereis nginx
8. Start nginx
./nginx
Once the startup succeeds, visit http://<server IP>:80.
Nginx common commands
Note: Before using the Nginx commands, you must enter the Nginx directory /usr/local/nginx/sbin.
1. Check the Nginx version: ./nginx -v
2. Start Nginx: ./nginx
3. Stop Nginx: ./nginx -s stop or ./nginx -s quit
4. Reload the configuration file: ./nginx -s reload
5. Check the Nginx processes: ps -ef | grep nginx
Nginx configuration file
Nginx configuration file location: /usr/local/nginx/conf/nginx.conf
The Nginx configuration file consists of three parts:
1. Global block
This part runs from the start of the configuration file to the events block and contains directives that affect the overall operation of the Nginx server, such as worker_processes 1;.
worker_processes is a key setting for concurrent processing: the larger its value, the more concurrency Nginx can support, subject to hardware, software, and other constraints. It is usually set to match the number of CPU cores.
2. Events block
The events block contains directives that affect the network connections between the Nginx server and users, such as worker_connections 1024, which means each worker process supports at most 1024 connections. This setting has a large impact on Nginx performance and should be tuned flexibly in practice.
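For example, on a 4-core server the global and events settings might look like this (a sketch of typical values, not a universal recommendation):

worker_processes 4;             # usually the number of CPU cores; "auto" also works

events {
    worker_connections 1024;    # maximum number of connections per worker process
}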
3. HTTP block
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    server {
        listen       80;
        server_name  localhost;   # domain name

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
This is the most frequent part of the Nginx server configuration.
Examples
Reverse proxy / load balancing
On Windows, we create two Spring Boot projects listening on ports 9001 and 9002, as follows:
What we want is for localhost:80 to proxy the localhost:9001 and localhost:9002 services and to access the two services in round-robin order.
The Nginx configuration is as follows:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    upstream jiangwang {
        server 127.0.0.1:9001 weight=1;   # round robin; the default weight is 1
        server 127.0.0.1:9002 weight=1;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
            proxy_pass http://jiangwang;
        }
    }
}
We first package the two projects as jar files and start them from the command line, then access localhost in the browser. I also printed logs in the projects, so after running for a while we can check whether the two projects are accessed in round-robin order.
As you can see, accessing localhost hits the two projects in turn.
Next we change the weights as follows:
upstream jiangwang {
server 127.0.0.1:9001 weight=1;
server 127.0.0.1:9002 weight=3;
}
Reload the nginx configuration file: nginx -s reload
Then access localhost in the browser several more times and observe the results:
The results show that ports 9002 and 9001 are accessed in a ratio of about 3:1.
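If you want to verify the distribution without clicking around in the browser, a simple request loop works as a sketch (this assumes a bash-like shell with curl available and that the two Spring Boot services return distinguishable responses):

# send 8 requests through Nginx; with weights 1:3, roughly 6 of them should hit port 9002
for i in {1..8}; do
  curl -s http://localhost/
  echo
done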
Dynamic and static separation
1. Put the static resources into a newly created local folder. For example, create a data folder on drive D, then create two folders inside it: an img folder for images and an html folder for HTML files, as shown below:
2. Create a new A.HTML file in the HTML folder as follows:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>The Html file</title>
</head>
<body>
<p>Hello World</p>
</body>
</html>
3. Place an image in the img folder, as follows:
4. Configure the nginx.conf file in nginx:
location /html/ {
    root   D:/data/;
    index  index.html index.htm;
}
location /img/ {
    root   D:/data/;
    autoindex on;   # list all contents of the current folder
}
5. Start nginx and access the file paths. Enter http://localhost/html/a.html; the browser shows:
6. Enter http://localhost/img/ in the browser
How Nginx works
Master & worker
After receiving a request, the master hands the task to a worker to execute; there may be multiple workers.
How workers work
After the client sends a request to the master, the mechanism by which a worker obtains the task is neither direct assignment nor round robin but contention: the worker that “grabs” the task executes it, i.e. selects the target server (such as Tomcat) and returns the result.
worker_connections
Handling a request takes up two or four of a worker's connections.
So the normal maximum number of concurrent static requests is worker_connections × worker_processes / 2; when Nginx is used as an HTTP reverse proxy, the maximum concurrency is worker_connections × worker_processes / 4. Of course, more workers is not always better; the number of workers is usually set to the number of CPUs on the server.
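As a worked example (assuming worker_processes 4 and worker_connections 1024, the values used above): the maximum static concurrency is 4 × 1024 / 2 = 2048 requests, while the maximum concurrency as a reverse proxy is 4 × 1024 / 4 = 1024 requests, because each proxied request also holds connections to the back-end server.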
Advantages
Each worker is an independent process. If one worker fails, the other workers continue to compete for requests, so the service is not interrupted.
Conclusion
This article explains the basic concepts, installation tutorials, configuration, usage examples, and working principles of Nginx in detail. Hope you found this article helpful.
Sweepstakes rules
- First of all, you need to comment. I hope you actually read the article rather than just typing an emoji.
- Remember to like and follow; just move your little finger, thank you!
- As for fairness, I have thought about it too: I will create a lucky-draw group and send red envelopes; whoever grabs the smallest amount wins the prize. Or if you have a better idea, let me know in the comments; if I think it is better, I will adopt it.
- If my comments rank in the Top 1-5 and I win a new badge or a T-shirt, I will also give it away to friends in the comments section. You can follow me; search [First Love] on WeChat for my official account.