1. Reverse proxy

A reverse proxy is easy to explain with an analogy: you dial a company's main phone number, the operator asks who you want to speak to, and puts you through to that person's extension. Everyone dials the same number but is routed to the right person, and anyone who asks for the wrong extension gets a friendly reminder.

You never dial a person's number directly; you dial the single forwarding number. What the operator does is reverse proxying: they forward your question to the right person and then relay that person's answer back to you.

Reverse proxying places heavy demands on a server, and Nginx excels at it, so most deployments use Nginx as the entry server to play this go-between role.

For example, a web request to a service first passes through Nginx, which forwards it to the corresponding application server. If you request an image, the request first reaches Nginx; Nginx takes the requested image path to the corresponding image server, fetches the image, and returns it to you.
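A minimal sketch of that routing (the ports and the /images/ path are illustrative assumptions, not from the original setup):

server {
    listen 80;
    # image requests are handed to a dedicated image server
    location /images/ {
        proxy_pass http://127.0.0.1:9001;   # hypothetical image server
    }
    # everything else goes to the application server
    location / {
        proxy_pass http://127.0.0.1:7001;   # hypothetical application server
    }
}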

Why do reverse proxies exist? Because application services are built for development efficiency, their runtime efficiency is low: their QPS and TPS concurrency are limited, and a single instance struggles to handle user requests while also processing business logic.

In this case, we deploy many service instances as a cluster to provide high availability. Precisely because the clustered instances are exposed at different addresses, Nginx needs a reverse proxy that can route requests to the application services; all the user ever faces is the address of a single Nginx server.

Funneling user requests through the Nginx layer also improves the site's security and manageability. If we want to modify a response header, we only need to change Nginx rather than every service; if we want to stop access entirely, we only need to shut down Nginx rather than each service.
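For instance, a response header can be changed in one place for every service behind the proxy; a minimal sketch, assuming a hypothetical backend at 127.0.0.1:7001:

server {
    listen 80;
    location / {
        # editing this single line changes the header for all proxied services
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_pass http://127.0.0.1:7001;
    }
}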

2. Forward proxy

In fact, a reverse proxy stands in for the server: the client never knows which server actually provides the service. In the phone analogy, the caller dials the same number every time.

A forward proxy is the opposite. Suppose you want to buy a train ticket during the Spring Festival rush, so you find a scalper and the scalper buys the ticket for you. Many people go to the same scalper, and the scalper buys tickets for all of them. The scalper is the forward proxy: from the ticket office's point of view, every purchase comes from the scalper, and it never learns who actually wanted the ticket. In forward proxying, the server does not know which client actually initiated the request.

In other words, a reverse proxy sits on the same network as the servers providing the service, while a forward proxy sits on the same network as the clients.

3. Load balancing

Load balancing solves the problem of heavy traffic. Each server can handle only a limited number of page views, and as the business grows so does the traffic, so we add servers and spread the load evenly across them.

As mentioned above, application services trade runtime efficiency for development efficiency; their QPS and TPS concurrency are limited, so we group many such instances into a cluster to provide users with high availability.

Each time a web request arrives, Nginx can pick a less loaded server and hand the request to it for processing. This is load balancing.

Once many application services form a cluster, two requirements inevitably follow. The first is dynamic scaling: when we add a server we cannot take the website offline; the change must be invisible to users, meaning instances can be added without any shutdown.

The second is fault tolerance: if a server in the cluster fails, user access must not be affected. Nginx sends requests to the remaining healthy servers so online users can keep working normally.
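A sketch of both requirements, with illustrative addresses: new servers join the pool with a reload rather than a shutdown, and a failing server is taken out of rotation automatically:

upstream app_cluster {
    # adding or removing a line here only needs "nginx -s reload", no downtime
    server 127.0.0.1:7001 max_fails=3 fail_timeout=30s;  # skipped for 30s after 3 consecutive failures
    server 127.0.0.1:7002 max_fails=3 fail_timeout=30s;
}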

4. Static resource caching

In the request chain, Nginx is the node at the edge of the enterprise intranet, i.e. the outermost layer of the website, and all requests are forwarded by it. As the chain of network hops grows, the latency users experience increases.

So if Nginx can cache content that is identical for all users, or unchanged over a period of time, and serve it to users directly, their latency drops considerably.

The reverse proxy therefore gains a derived capability: caching, which accelerates access. Much of what we request consists of CSS and JS files or a few images, and such static resources do not need to go through the application service at all; they only need to sit on the local file system, and Nginx serves them directly. This is Nginx's static resource feature.
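A minimal caching sketch built from the standard proxy_cache directives (the cache directory, zone name, and backend address are assumptions):

http {
    # cache storage: key zone of 10 MB, cached files kept under /var/cache/nginx
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m;
    server {
        listen 80;
        location /static/ {
            proxy_cache static_cache;
            proxy_cache_valid 200 10m;          # keep successful responses for 10 minutes
            proxy_pass http://127.0.0.1:7001;   # hypothetical backend
        }
    }
}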

5. Enabling gzip

Turn gzip on (off -> on). gzip_min_length sets the minimum response size: anything smaller is not compressed, because compressing a tiny HTTP transfer costs CPU for little gain. gzip_comp_level sets the compression level, and gzip_types restricts compression to specific file types.

http {
    ...
    gzip on;
    gzip_min_length 1;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css image/png;
    ...
}
gzip on;
gzip_min_length 10k;          # minimum size to compress; responses smaller than 10K are not worth compressing
gzip_buffers 4 16k;           # compression buffers: 4 x 16K of memory for the compressed result stream
gzip_http_version 1.1;
gzip_comp_level 2;            # compression level: 1 is fastest with the lowest ratio, 9 compresses most but is slowest
gzip_types text/css text/xml application/javascript;  # types to compress; list as many types as apply in production
gzip_disable "MSIE [1-6]\.";  # disable gzip by User-Agent regex; IE6 and below do not support it
gzip_vary on;                 # send "Vary: Accept-Encoding" so front-end caches can store the gzipped page

6. proxy_pass

proxy_pass forwards a request to the corresponding service. Suppose a service runs on port 7001 and is reachable at the local address 127.0.0.1:7001; proxy_pass can then be configured inside a location block.

server {
    listen 8080;
    ...
    location / {
        proxy_pass http://127.0.0.1:7001;
    }
}
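In practice proxy_pass is usually paired with the standard proxy_set_header directive so the backend can see who the real client is; a sketch:

location / {
    proxy_set_header Host $host;                                  # the host the client asked for
    proxy_set_header X-Real-IP $remote_addr;                      # the client's real IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # the full proxy chain
    proxy_pass http://127.0.0.1:7001;
}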

7. upstream

Suppose you have two services, one on port 7001 and one on port 7002. They can be load balanced by declaring an upstream service pool in the http block and pointing proxy_pass in the server's location at that pool. This achieves simple load balancing.

http {
    upstream myservice {
        server 127.0.0.1:7001;
        server 127.0.0.1:7002;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myservice;
        }
    }
}
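By default the pool is balanced round-robin. As a sketch (the weights are illustrative), the standard weight parameter can steer more traffic to a stronger machine:

upstream myservice {
    server 127.0.0.1:7001 weight=3;  # receives roughly three of every four requests
    server 127.0.0.1:7002;           # weight defaults to 1
}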

8. Configuring HTTPS

Create a cert folder in the nginx configuration folder to store domain name certificates.

cd /usr/local/nginx/conf
mkdir cert

Modify the nginx configuration file /usr/local/nginx/conf/nginx.conf.

cd /usr/local/nginx/conf
vi nginx.conf

Remove the comments in front of the HTTPS server block; HTTPS uses port 443.

ssl_certificate points to the downloaded cert.pem certificate, and ssl_certificate_key points to the downloaded cert.key private key.

server {
    listen 443 ssl;
    server_name localhost;
    ssl_certificate cert/cert.pem;
    ssl_certificate_key cert/cert.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        root html;
        index index.html index.htm;
    }
}

Restart nginx for the change to take effect. You can also redirect HTTP to HTTPS to force HTTPS access.

Modify the nginx configuration file /usr/local/nginx/conf/nginx.conf.

cd /usr/local/nginx/conf
vi nginx.conf
server {
    listen 80;
    server_name zhiqianduan.com;
    rewrite ^(.*)$ https://${server_name}$1 permanent; 
}
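An equivalent and often-preferred form (a sketch, not taken from the original) uses return instead of rewrite:

server {
    listen 80;
    server_name zhiqianduan.com;
    return 301 https://$server_name$request_uri;  # permanent redirect to HTTPS
}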

9. Enabling http2

Modify the nginx configuration file /usr/local/nginx/conf/nginx.conf.

cd /usr/local/nginx/conf
vi nginx.conf

Add http2 after ssl in the listen 443 directive. (Nginx must have been built with the --with-http_v2_module option for the http2 parameter to be available.)

server {
    listen 443 ssl http2;
    server_name localhost;
    ssl_certificate server.crt;
    ssl_certificate_key server.key;
}

10. Setting up a static web server

Create a www folder under the nginx installation directory and place some static files in it.

In nginx.conf, find the server block. listen configures which port to listen on, in this case 8080; then configure a location. Using / matches all URL requests, which will be served from the www folder.

We need to map the URL path onto a file system path. There are two directives for this, root and alias. root has a quirk: it appends the entire URL path, including the matched location prefix, to the directory, which is often not what you want, so alias is used here.

Here alias points to the www directory under the nginx installation directory (relative paths are resolved against it), and the paths beneath it map one-to-one onto the URL path.

server {
    listen 8080;
    ...
    location / {
        alias www/;
        ...
    }
    ...
}
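To make the root/alias difference concrete, here are the two alternatives side by side (paths are illustrative; use one or the other):

# alternative 1: root appends the full URL path to the directory
location /static/ {
    root /data;        # /static/a.png  ->  /data/static/a.png
}

# alternative 2: alias replaces the matched location prefix
location /static/ {
    alias /data/www/;  # /static/a.png  ->  /data/www/a.png
}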

After configuration, restart nginx to see the effect. To access this, visit localhost:8080 in your browser.

11. Enabling the directory listing

Suppose there is a folder called dlib in the same directory and you want to share its files, along with the folder's directory structure, so that users can browse it and decide which files they need.

Nginx ships an official module called autoindex that displays a directory's structure when a URL ending in / is accessed. Using it is particularly simple: just add the directive autoindex on.

location / {
    autoindex on;
}

After a reload, Nginx lists all the files in the folder we visit; opening a subdirectory shows the files inside it in turn. This is a handy helper feature for static resources.
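Two companion directives from the same module can make the listing friendlier; a small sketch:

location / {
    autoindex on;
    autoindex_exact_size off;  # show sizes as KB/MB instead of exact byte counts
    autoindex_localtime on;    # show file times in server local time instead of GMT
}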

12. Traffic restrictions

Another common function addresses limited public network bandwidth. When many concurrent users compete for it, we may want to throttle access to certain large files and reserve enough bandwidth for essential small files such as CSS and JS.

This can be done with the set directive and built-in variables: for example, set $limit_rate 1k limits the speed at which Nginx sends the response to the client's browser, i.e. 1K of data per second.

location / {
    set $limit_rate 1k;
}
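As a sketch (the location patterns are illustrative), the limit can be scoped so only large downloads are throttled while small assets keep full speed:

# throttle large file downloads
location /downloads/ {
    set $limit_rate 256k;   # each connection capped at 256 KB per second
}

# CSS and JS stay unthrottled
location ~* \.(css|js)$ {
    # no $limit_rate set here: full bandwidth
}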

13. Logging

First define the log format with the log_format directive; the definition is assembled from variables. $remote_addr is the IP address of the remote browser client, $time_local is the local time of the request, and $status is the returned status code.

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
}

Once defined, the format needs a name, here main, because you can define more than one log format and each needs a distinct name to tell them apart.

In practice, different domain names may need different log formats, and requests for different URLs, or operational records for purposes such as reverse proxying, can also be written in their own formats.

Where the access_log directive is placed determines which requests it records. The first parameter is the log path, logs/yindong.log; the second is the name of the log format, main.

For example, an access_log placed inside a server block records the requests for that domain name or port in the specified file.

server {
    listen 8080;
    access_log logs/yindong.log main;
    location / {
        alias www;
    }
}

Once yindong.log is configured, every request is written to the log on completion; you can inspect it in logs/yindong.log.