Preface
Forward proxy
Forward proxy: normally a client sends a request directly to the target server and receives the content in return. With a forward proxy, the client is configured (or otherwise arranged) to send its request to a proxy server, naming the target server (the origin server); the proxy then communicates with the origin server, relays the request and the content, and returns the result to the client. A forward proxy hides the real client: it sends and receives requests on the client's behalf, so the real client is invisible to the server.
For example, Google cannot be accessed directly from mainland China; a proxy server can access Google on your behalf (the familiar practice of "climbing over the wall"). That access pattern is a forward proxy, and the server involved is called a forward proxy server.
Application scenarios of forward proxy:
- Accessing otherwise unreachable resources, such as Google;
- Caching, to speed up access to resources;
- Authorizing client access and authenticating Internet use;
- Recording user activity (e.g., online behavior management) while hiding user information from the outside.
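As a rough illustration of the idea, Nginx can be set up as a simple forward proxy for plain HTTP (it has no built-in support for the HTTPS CONNECT method, so this is a sketch of the concept rather than a production setup; the listen port and resolver address are arbitrary choices):

```
# a minimal forward-proxy sketch, plain HTTP only;
# port 8888 and the resolver address are illustrative
server {
    listen 8888;

    location / {
        resolver 8.8.8.8;                      # resolve whatever host the client asks for
        proxy_pass http://$host$request_uri;   # relay the request to that host
    }
}
```

Clients would then point their HTTP proxy setting at this server, which sends and receives requests on their behalf.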
Reverse proxy
Reverse proxy: in contrast to a forward proxy, a reverse proxy server receives requests directly from clients and forwards them to the actual servers on the internal network, then returns the results to the client. A reverse proxy hides the real servers: it sends and receives requests on the servers' behalf, so the real servers are invisible to the client. It is also commonly used to handle cross-domain requests. Almost all large websites today sit behind a reverse proxy.
Application scenarios of reverse proxy:
- Ensuring intranet security and preventing web attacks: large websites usually expose the reverse proxy as the public network address while keeping the web servers on the intranet.
- Load balancing: using a reverse proxy server to optimize the load on the website, as sketched below.
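A minimal sketch of this in Nginx configuration; the server name and backend address are illustrative:

```
# a minimal reverse-proxy sketch; example.com and the backend address are illustrative
server {
    listen       80;
    server_name  example.com;

    location / {
        proxy_pass       http://127.0.0.1:8080;     # forward to the internal application server
        proxy_set_header Host      $host;           # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;    # pass the real client address upstream
    }
}
```

To the client, the proxy is the website; the application server behind it is never exposed.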
Load balancing
Building on the existing network structure, load balancing provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen network data-processing capacity, and improve the flexibility and availability of the network.
Load balancing spreads work across multiple operational units, such as web servers, FTP servers, and critical enterprise application servers, which then complete tasks jointly.
In a simple architecture, the client sends requests to a single server, which processes them (some interacting with a database) and returns the results. This pattern suits early systems with relatively few concurrent requests, and it is cheap. But as the volume of information grows, traffic and data increase rapidly, and business logic becomes more complex, this architecture makes the server respond to client requests more and more slowly; under particularly heavy concurrency it can even crash the server outright. This is clearly a server performance bottleneck, so what can be done?

Our first thought may be to upgrade the server's configuration, such as raising the CPU frequency or adding memory, to solve the problem through better physical hardware. But we know that Moore's Law is increasingly ineffective, and hardware performance can no longer keep up with ever-growing demand. The most obvious example is Tmall's Singles' Day, where the instantaneous traffic on a hot product is so enormous that an architecture like the one above, even with machines at the top physical configuration, cannot satisfy the demand. So what then?

Having ruled out increasing the physical configuration of a single server, i.e., the vertical approach, what about increasing the number of servers horizontally? This is where the concept of a cluster comes in: when a single server cannot cope, we add more servers and distribute incoming requests among them. Instead of requests concentrating on one server, they are spread across many, and the load lands on different machines. This is what we call load balancing.
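In Nginx this is expressed with an upstream group; a minimal sketch, with illustrative backend addresses:

```
# a minimal load-balancing sketch; the backend addresses are illustrative
upstream backend {
    server 192.168.1.10:8080 weight=2;   # a higher weight receives proportionally more requests
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

server {
    listen       80;
    server_name  example.com;

    location / {
        proxy_pass http://backend;   # requests are distributed across the group (round-robin by default)
    }
}
```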
Dynamic and static separation
To speed up a site's response, dynamic pages and static pages can be served by different servers, accelerating resolution and reducing the pressure on what was a single server.
In general, dynamic resources should be separated from static ones. Static resources are often deployed on Nginx, thanks to features such as high concurrency and static-resource caching. If a request is for a static resource, Nginx serves it directly from the static resource directory; if it is for a dynamic resource, Nginx forwards it to the appropriate backend application using the reverse-proxy principle. This achieves dynamic-static separation.
Dynamic-static separation greatly improves the access speed of static resources; even if the dynamic service becomes unavailable, static resources are unaffected.
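A minimal sketch of what this might look like; the static root, file extensions, and backend address are illustrative:

```
# a minimal dynamic-static separation sketch; paths and backend address are illustrative
server {
    listen       80;
    server_name  example.com;

    # serve static resources directly from a local directory
    location ~ \.(html|css|js|png|jpg|gif)$ {
        root    /data/static;
        expires 7d;                  # let clients cache static assets
    }

    # forward everything else to the backend application
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```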
Nginx overview
What is Nginx
Nginx is written in C.
Nginx (Engine X) is a high-performance HTTP server, reverse proxy server, and email (IMAP/POP3) proxy server.
Nginx was developed by Igor Sysoev, a Russian developer. In official tests, Nginx supports up to 50,000 concurrent connections with very low CPU, memory, and other resource consumption, and it runs very stably. Best of all, it is open source, free, and usable commercially.
Nginx also supports hot deployment: it can run virtually 24/7 without a restart, even for months at a time, and the software can be upgraded and maintained without interrupting service.
Nginx application scenarios
Virtual hosting: one physical server hosts multiple sites.
Static resource service: providing access to static resources over HTTP.
Reverse proxy and load balancing: when a website's traffic reaches the point where a single server can no longer satisfy user requests, multiple servers are needed as a cluster, with Nginx acting as the reverse proxy in front of them. The servers then share the load evenly, so no server goes down under heavy load while another sits idle.
Nginx installation
Downloading the installation package
Download the latest version of the installation package from the Nginx website: http://nginx.org/en/download.html
Installing from source (dependencies via yum)
- Download the Nginx installation package with wget:

```
wget http://nginx.org/download/nginx-1.18.0.tar.gz
```
- Unpack it:

```
tar -zxvf nginx-1.18.0.tar.gz -C ./
```
- Install the dependencies

Installing Nginx means compiling the source code downloaded from the official website, which requires the GCC environment. If GCC is not available, install it:

```
yum install gcc-c++
```
PCRE (Perl Compatible Regular Expressions) is a library providing Perl-compatible regular expressions. Nginx's HTTP module uses PCRE to parse regular expressions, so the PCRE library must be installed on Linux.
Note: pcre-devel is the development library built on PCRE; Nginx needs it as well.

```
yum install -y pcre pcre-devel
```
The zlib library provides a variety of compression and decompression routines. Nginx uses zlib to gzip the contents of HTTP responses, so zlib must be installed on Linux.

```
yum install -y zlib zlib-devel
```
OpenSSL is a powerful Secure Sockets Layer cryptographic library that includes the major cryptographic algorithms, common key and certificate management facilities, and the SSL protocol, and it provides a rich set of applications for testing and other purposes. Nginx supports both HTTP and HTTPS (that is, HTTP over SSL), so OpenSSL must be installed on Linux.

```
yum install -y openssl openssl-devel
```
- Enter the unpacked source directory and run the configure script:

```
cd nginx-1.18.0/
./configure
```
- Compile:

```
make
```
- Install:

```
make install
```
- Or do both in one step:

```
make && make install
```
- Start Nginx:

```
cd /usr/local/nginx/sbin
./nginx
```
- Verify access
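Once Nginx is started, a quick check confirms it is serving (this assumes the default configuration, which listens on port 80):

```
curl -I http://localhost/
```

An HTTP/1.1 200 OK response means the default welcome page is being served; the same page can be opened in a browser at the server's IP address.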
Docker installation
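A minimal sketch using the official nginx image from Docker Hub; the image tag, container name, and port mapping are illustrative, and /my/nginx.conf is a hypothetical host path:

```
# run the official image in the background, mapping host port 80 to the container
docker run -d --name nginx -p 80:80 nginx:1.18.0

# optionally mount a custom configuration file
# (the official image reads /etc/nginx/nginx.conf)
docker run -d --name nginx -p 80:80 \
    -v /my/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.18.0
```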
Nginx commands
Start

```
./nginx
```
Nginx loads the conf/nginx.conf file by default on startup
The -c option specifies which configuration file Nginx loads instead.
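For example (the path assumes the default source-install layout used above):

```
./nginx -c /usr/local/nginx/conf/nginx.conf
```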
Stop
Quick stop:

```
./nginx -s stop
```
This is equivalent to finding the Nginx process ID and forcibly killing the process with the kill command.
Graceful stop:

```
./nginx -s quit
```
In this mode, the Nginx process stops only after it has finished processing its in-flight tasks.
Restart
Method 1: stop, then start (recommended).
Restarting Nginx is equivalent to stopping it and then starting it again, i.e., running the stop command followed by the start command:

```
./nginx -s quit
./nginx
```
Method 2: reload the configuration file.
After the Nginx configuration file is modified, the changes normally require a restart to take effect. With -s reload, the new configuration takes effect without stopping and restarting Nginx:

```
./nginx -s reload
```
Start on boot
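On systemd-based distributions, one common approach is a service unit; a minimal sketch, with paths assuming the source install above:

```
# /etc/systemd/system/nginx.service -- a minimal sketch for the source install above
[Unit]
Description=nginx
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
```

After saving the unit file, register it so Nginx starts at boot:

```
systemctl daemon-reload
systemctl enable nginx
```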
Check the version

```
./nginx -v
```
Check the process

```
ps -ef | grep nginx
```
Nginx configuration file
```
#user  nobody;
worker_processes  1;   # number of worker processes, usually set to the number of CPU cores

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       12306;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
```