As front-end development evolves, Nginx has become a must-have skill for front-end developers. Is it really that useful? In fact, Nginx has always been relevant to us: it can serve as a web server or a load balancer, and it offers high performance, a large number of concurrent connections, and more

1. Load balancing

When an application's traffic surges within a short period, the server's bandwidth and performance come under heavy pressure, and the server may break down and crash. To prevent this and deliver a better user experience, we can configure Nginx load balancing to spread the load across servers

When one server goes down, the load balancer routes users to the other servers, greatly improving the site's stability. When users visit the site, they first hit the load balancer, which then forwards the requests to the backend servers
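A minimal sketch of the overall setup, assuming two backend machines at 192.168.0.1 and 192.168.0.2 (addresses and ports are placeholders) with Nginx in front on port 80:

# nginx.conf (sketch)
upstream backserver {
    server 192.168.0.1;   # backend machine 1
    server 192.168.0.2;   # backend machine 2
}

server {
    listen 80;                          # clients always hit Nginx first
    server_name localhost;
    location / {
        proxy_pass http://backserver;   # Nginx forwards the request to one of the backends
    }
}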

1.1 Common load balancing modes

  • Round robin (default)

# nginx.conf
upstream backserver {
    server 192.168.0.1;
    server 192.168.0.2;
}
  • Weight

Each server's weight is positively correlated with its share of the traffic: the higher the weight, the more requests that server receives. This mode suits clusters whose machines have different performance

# nginx.conf
upstream backserver {
    server 192.168.0.1 weight=2;
    server 192.168.0.2 weight=8;
}
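With the weights above, roughly 2 out of every 10 requests go to 192.168.0.1 and 8 out of 10 go to 192.168.0.2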
  • Fair (allocate by response time)

Requests are allocated according to each backend server's response time, with faster servers given priority. This relies on a third-party module, nginx-upstream-fair, which needs to be installed first

# nginx.conf
upstream backserver {
    server 192.168.0.1;
    server 192.168.0.2;
    fair;
}

server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://backserver;
    }
}

1.2 Health Check

Nginx's ngx_http_upstream_module (the health check module) is essentially a heartbeat check on the servers: it periodically sends health check requests to the servers in the cluster to detect whether any of them is abnormal

If a server is detected as abnormal, requests sent from clients to the Nginx reverse proxy are no longer forwarded to that server (until a later health check finds it healthy again).

The basic example is 👇

upstream backserver {
    server 192.168.0.1 max_fails=1 fail_timeout=40s;
    server 192.168.0.2 max_fails=1 fail_timeout=40s;
}

server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://backserver;
    }
}

Two configurations are involved:

  • fail_timeout: sets the period during which the server is considered unavailable, and also the window during which failed attempts are counted. The default is 10 seconds
  • max_fails: sets the number of failed attempts to communicate with the server before Nginx marks it as unavailable. The default is 1
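For example, with max_fails=1 and fail_timeout=40s as above, a single failed request marks a server as unavailable, and Nginx stops sending it traffic for the next 40 seconds before trying it again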

2. Reverse proxy

With a reverse proxy, a client sends a request for content that lives on a server, but the request first reaches a proxy server (Nginx). The proxy then forwards the request on its behalf to an internal server that sits on the same local network. The content the user actually wants is stored on those internal servers, and the Nginx proxy acts as an intermediary, handling distribution and communication

2.1 Why Is a Reverse Proxy Required?

A reverse proxy has two main advantages

  • Firewall function

If your application does not want to be exposed directly to the client (i.e. the client cannot directly request the real server, only through Nginx), use Nginx to filter out unauthorized or illegal requests to secure the internal server

  • Load balancing

As mentioned in the previous chapter, load balancing is essentially an application of reverse proxying: Nginx distributes the client requests it receives "evenly" (depending on the load balancing mode) across all servers in the cluster, thereby balancing the pressure on the servers

2.2 How to Use a Reverse Proxy

We start a Node.js project to simulate an internal server listening on its own port, then configure Nginx as a reverse proxy so that access on port 80 is forwarded to it

# nginx.conf
server  {
  listen 80;
  server_name localhost;
  location / {
    proxy_pass http://127.0.0.1:8000;   # forward to the upstream (the Node.js service)
  }
}

The Nginx reverse proxy matches the request URI through the location directive and forwards matching requests to the address, or the upstream node pool, defined by proxy_pass
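A small sketch of that idea, assuming an upstream pool named node_pool and an /api/ prefix (both names are made up for illustration):

upstream node_pool {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

server {
    listen 80;
    server_name localhost;
    location /api/ {
        proxy_pass http://node_pool;   # requests whose URI matches /api/ go to the pool
    }
}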

3. HTTPS configuration

Configuring HTTPS with Nginx involves two main steps: signing a trusted SSL certificate and configuring HTTPS

3.1 Signing a trusted third-party SSL certificate

The private key file example.key and the certificate file example.crt are used in the HTTPS configuration; the example.csr file is used when applying for the certificate. If you want to learn more about SSL certificates, see the SSL certificate introduction

3.2 Configuring HTTPS for Nginx

To enable the HTTPS service, in the server block of the configuration file you must add the ssl parameter to the listen directive and specify the server certificate file and private key file, as shown below:

server {
    # SSL parameters
    listen 443 ssl;            # listen on 443, the default HTTPS port (80 is the default HTTP port)
    server_name example.com;

    # certificate file
    ssl_certificate example.com.crt;
    # private key file
    ssl_certificate_key example.com.key;
}
  • ssl_certificate: the absolute path of the certificate file
  • ssl_certificate_key: the absolute path of the private key file
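A minimal sketch with absolute paths; the directory /etc/nginx/ssl/ is only an assumption, adjust it to wherever your certificate and key actually live:

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # absolute path to the certificate
    ssl_certificate_key /etc/nginx/ssl/example.com.key;   # absolute path to the private key
}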

4. Common configurations

What else can Nginx do for the front end?

4.1 IP Address Whitelist

Nginx can be configured with an IP whitelist to specify which IP addresses may access your server, which is essential for keeping crawlers out

  • Simple configuration
server {
    location / {
        deny 192.168.0.1;   # deny access from this IP address
        deny all;           # deny all
    }
}
  • Whitelist Configuration

Create a whitelist

vim /etc/nginx/white_ip.conf

...
192.168.0.1 1;
...

Modifying the nginx configuration (nginx.conf)

geo $remote_addr $ip_whitelist{
    default 0;
    include white_ip.conf;
}
The geo directive maps the value of the specified variable to a new variable; if no source variable is specified, it defaults to $remote_addr

Finally, check the whitelist flag in the server configuration

server {
    location / {
        if ( $ip_whitelist = 0 ){
            return 403;   # not in the whitelist, return 403
        }
        index index.html;
        root /tmp;
    }
}

4.2 Adapting to PC and mobile environments

When a user opens baidu.com on a mobile device, they are automatically redirected to the mobile site m.baidu.com. Essentially, Nginx can read the client's userAgent from the built-in variable $http_user_agent, determine whether the user's current device is mobile or PC, and then redirect to the H5 site or the PC site accordingly

server {
    location / {
        # determine from the user agent whether the client is a mobile device or a PC
        if ($http_user_agent ~* '(Android|webOS|iPhone)') {
            set $mobile_request '1';
        }
        if ($mobile_request = '1') {
            rewrite ^.+ http://m.baidu.com;
        }
    }
}
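For example, a request whose User-Agent contains iPhone is redirected to http://m.baidu.com, while a desktop browser falls through and is served the normal content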

4.3 Configuring gzip

After gzip is enabled in Nginx, the size of static resources is greatly reduced, which saves a lot of bandwidth, improves transfer efficiency, and gives users a faster response and a better experience

server {
    gzip on;                        # enable gzip
    gzip_buffers 32 4K;
    gzip_comp_level 6;              # compression level, 1-9; the higher the level, the better the compression
    gzip_min_length 100;            # threshold: only responses larger than 100 bytes are compressed; usually left as is
    gzip_types application/javascript text/css text/xml;
    gzip_disable "MSIE [1-6]\.";    # IE6 and below do not handle gzip well, so skip them
    gzip_vary on;
}
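You can verify that it works by checking that responses for the configured types carry a Content-Encoding: gzip header when the request accepts gzip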

4.4 Nginx Configuring Cross-domain requests

When the browser reports "No 'Access-Control-Allow-Origin' header is present on the requested resource", the Nginx server needs to add the corresponding headers to the response:

location / {  
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
    add_header Access-Control-Allow-Headers 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization';

    if ($request_method = 'OPTIONS') {
        return 204;
    }
}
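With this configuration, the browser's OPTIONS preflight request receives an empty 204 response carrying the headers above, and the subsequent actual request also gets Access-Control-Allow-Origin, so the browser no longer blocks it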

5. How to use Nginx

Using Nginx locally, here is a quick run-through of the basics: starting it, modifying its configuration, restarting it, and so on

  • Start: sudo nginx
  • Modify the nginx.conf configuration (the path depends on where it is installed): vim /usr/local/etc/nginx/nginx.conf
  • Check the syntax: sudo nginx -t
  • Restart Nginx: sudo nginx -s reload
  • Create soft links (easy to manage multi-application Nginx)

On most installations, the main nginx.conf includes all configuration files under /etc/nginx/conf.d/ by default

If we have an nginx configuration file in the application folder, for example /home/app/app.nginx.conf, we create a soft link for it under /etc/nginx/conf.d/: ln -s /home/app/app.example.com.nginx.conf /etc/nginx/conf.d/app.nginx.conf. After that, whenever we modify the application's configuration file, the corresponding file under /etc/nginx/conf.d/ changes with it; restart Nginx after the modification to make the new configuration take effect.
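For reference, a sketch of the include that makes this work; this is the typical layout of a packaged nginx.conf, and the exact paths may differ between installations:

# /etc/nginx/nginx.conf (excerpt)
http {
    ...
    include /etc/nginx/conf.d/*.conf;   # every *.conf under conf.d/ is loaded, including our soft link
}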

Please have a cup of tea 🍵 and remember the "triple" ~

1. Remember to give 🌲 sauce a thumbs up after reading; 👍 is what keeps the motivation going

2. Follow the public account "Interesting things at the front end" and chat about interesting front-end topics

3. Star ✨ the GitHub repo frontendThings, thanks