1. Introduction to Nginx

Nginx is a lightweight web server that is commonly used as a reverse proxy, for load balancing, and for HTTP caching. It is currently used by many well-known Internet companies around the world.

Reverse proxy and forward proxy

One of Nginx's main functions is the reverse proxy. What are forward proxies and reverse proxies? There are good summaries of this question on Zhihu, quoted below.

The first summary comes from one Zhihu user's answer:

The forward proxy hides the real client and the reverse proxy hides the real server.

The following is from Zhihu user Liu Zhijun's answer:

When we say "proxy", we usually mean a forward proxy. In the forward-proxy process, the real requesting client is hidden: the server does not know who the real client is, because the proxy server makes the request on the client's behalf. Some tools for getting around network restrictions are typical forward proxies: you set up a proxy server abroad and ask it to request Google.com for you, and the proxy returns the result to you.

The reverse proxy hides the real server. When we request www.baidu.com, it is like dialing 10086: there may be thousands of servers behind it, but you don't know, and don't need to know, which one handles your request. www.baidu.com acts as a reverse proxy server that forwards the request to a real server for us. Nginx is an excellent reverse proxy server and also does load balancing.

The difference between the two is that a forward proxy acts on behalf of clients, while a reverse proxy acts on behalf of servers.

With this understanding, we can see why Nginx is so useful for sites with high concurrency requirements. Let's look at installing and using Nginx on macOS.

2. Basic use of Nginx

Nginx installation

We use Homebrew to install Nginx

  • Search Nginx
brew search nginx
  • Install Nginx
brew install nginx

If Homebrew needs to update itself first, this step can be slow, so be patient. After the installation completes, the following message is displayed:

Docroot is: /usr/local/var/www

The default port has been set in /usr/local/etc/nginx/nginx.conf to 8080 so that
nginx can run without sudo.

nginx will load all files in /usr/local/etc/nginx/servers/.

To have launchd start nginx now and restart at login:
  brew services start nginx
Or, if you don't want/need a background service you can just run:
  nginx

==> Summary
🍺  /usr/local/Cellar/nginx/1.17.8: 25 files, 2MB
  • View Nginx information
brew info nginx

This command displays the same basic information about Nginx that was shown when the installation finished.

  • Uninstall Nginx
brew uninstall nginx

Note that this command only uninstalls the Nginx software itself and does not delete the configuration files. /usr/local/var/www and /usr/local/etc/nginx/ must be deleted manually if we want to remove Nginx completely.
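If you really do want a clean slate, something along these lines removes the leftovers; double-check the paths on your machine before running them, since rm -rf cannot be undone:

rm -rf /usr/local/var/www
rm -rf /usr/local/etc/nginx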

  • View the Nginx version
nginx -v

The Nginx version on my machine is as follows:

nginx version: nginx/1.17.8
  • Check whether the Nginx configuration is normal
nginx -t

If normal, we get the following output:

nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
  • The configuration file
cat /usr/local/etc/nginx/nginx.conf

The command above prints the default nginx.conf. I have added comments at the key points.

#user nobody;
worker_processes  1;
# (1) Number of worker processes

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

# (2) Error log location; on macOS it is /usr/local/var/log/nginx/

#pid logs/nginx.pid;


events {
    worker_connections  1024;
    #(3) Maximum number of connections per worker process
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    # '$status $body_bytes_sent "$http_referer" '
    # '"$http_user_agent" "$http_x_forwarded_for"';
    #(4) Log format

    #access_log logs/access.log main;
    
    # (5) Access log location; on macOS it is /usr/local/var/log/nginx/

    sendfile        on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout  65;

    #gzip on;

    server {
        listen       8080;
        server_name  localhost;

        #charset koi8-r;

        #access_log logs/host.access.log main;

        location / {
            root   html;
            index  index.html index.htm;
            
            #(6) Access rules are matched from top to bottom.
            #    Case 1: deny the IP range 192.168.10.50-100:
            #        deny 192.168.10.50/100;
            #    Case 2: allow only 192.168.10.50:
            #        allow 192.168.10.50;
            #        deny all;
            #    Case 3: with the configuration below, nothing is accessible:
            deny all;
            allow 192.168.10.50;
        
        }
        
        #(7) Exact match: access to /test is denied
        location =/test { 
            deny all;
        }
        
        #(8) Exact match: access to /test2 is allowed
        location =/test2 {
            allow all;
        }
        
        #(9) Regular-expression match: deny access to URIs ending in .php
        location ~ \.php$ { # regular match
            deny all;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        # proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        # root html;
        # fastcgi_pass 127.0.0.1:9000;
        # fastcgi_index index.php;
        # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        # include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        # deny all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    # listen 8000;
    # listen somename:8080;
    # server_name somename alias another.alias;

    # location / {
    # root html;
    # index index.html index.htm;
    #}
    #}


    # HTTPS server
    #
    #server {
    # listen 443 ssl;
    # server_name localhost;

    # ssl_certificate cert.pem;
    # ssl_certificate_key cert.key;

    # ssl_session_cache shared:SSL:1m;
    # ssl_session_timeout 5m;

    # ssl_ciphers HIGH:!aNULL:!MD5;
    # ssl_prefer_server_ciphers on;

    # location / {
    # root html;
    # index index.html index.htm;
    #}
    #}
    include servers/*;
    #(10) Other configuration files
}
  • Start Nginx
cd /usr/local/Cellar/nginx/1.17.8/bin
./nginx
  • Reload the configuration file
./nginx -s reload 

After starting, run ps -ef | grep nginx. If output like the following appears, Nginx has started successfully.

  501 65101     1   0  8:59PM ??         0:00.00 nginx: master process ./nginx
  501 65102 65101   0  8:59PM ??         0:00.00 nginx: worker process
  501 65173 63026   0  9:00PM ttys001    0:00.00 grep nginx

Visit http://localhost:8080. If the default Nginx welcome page is displayed, the configuration is correct and access is working.

3. Hands-on practice

Now that Nginx is installed and configured, let's look at how it is used in a reverse proxy scenario. We start by creating a web service with Spring Boot.

Initializing a project at start.spring.io makes it easy for us to test.

Use Eclipse or IntelliJ IDEA to open the project you just initialized; the project follows the standard Spring Boot layout.

Write two simple endpoints, /hello/nginx and /hi/nginx, in NginxTestController, and include in each response the serverPort that the application is listening on.

@RestController
public class NginxTestController {

    @Value("${server.port}")
    private int serverPort;

    @RequestMapping(value = "/hello/nginx", method = RequestMethod.GET)
    public TestResponse getNginx() throws Exception {
        TestResponse response = new TestResponse();
        response.setCode(200);
        response.setMessage("success");
        response.setName("hello nginx");
        response.setServerPort(serverPort);
        return response;
    }

    @RequestMapping(value = "/hi/nginx", method = RequestMethod.GET)
    public TestResponse getIncome() throws Exception {
        TestResponse response = new TestResponse();
        response.setCode(200);
        response.setMessage("success");
        response.setName("hi nginx");
        response.setServerPort(serverPort);
        return response;
    }
}
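The TestResponse class is not shown in the article. A minimal sketch of what it could look like, assuming plain fields with getters and setters matching the calls above, is:

public class TestResponse {

    private int code;          // status code returned in the body
    private String message;    // e.g. "success"
    private String name;       // which endpoint produced the response
    private int serverPort;    // port the handling instance listens on

    public int getCode() { return code; }
    public void setCode(int code) { this.code = code; }

    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getServerPort() { return serverPort; }
    public void setServerPort(int serverPort) { this.serverPort = serverPort; }
}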

Set the listening port in the application.properties configuration file. We will build the project twice, once with each of the following ports.

server.port=8090

server.port=9090

Package the project into two jars: nginxdemo-SNAPSHOT-8090.jar (built with server.port=8090) and nginxdemo-SNAPSHOT-9090.jar (built with server.port=9090).
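The article does not show the packaging commands. Assuming a standard Maven build, the steps might look roughly like this (the target jar name is illustrative):

# with server.port=8090 in application.properties
mvn clean package
cp target/nginxdemo-0.0.1-SNAPSHOT.jar nginxdemo-SNAPSHOT-8090.jar

# change server.port to 9090 and rebuild
mvn clean package
cp target/nginxdemo-0.0.1-SNAPSHOT.jar nginxdemo-SNAPSHOT-9090.jar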

Start the two servers separately

Start the first server:
java -jar nginxdemo-SNAPSHOT-8090.jar

Start the second server:
java -jar nginxdemo-SNAPSHOT-9090.jar

Before putting Nginx in front as a reverse proxy, we first check that the endpoints on both servers respond normally.

Tomcat listening on port 8090

Tomcat listening on port 9090
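If you prefer the command line to a browser, you can hit the two instances directly with curl; the exact JSON layout depends on the TestResponse class, but it will look roughly like this:

curl http://localhost:8090/hello/nginx
# {"code":200,"message":"success","name":"hello nginx","serverPort":8090}

curl http://localhost:9090/hi/nginx
# {"code":200,"message":"success","name":"hi nginx","serverPort":9090}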

Reverse proxy

  • Matching rules

Let's try out the access-rule matching discussed above by adding the following location block to the server block.

    location =/hello/nginx {
        deny all;
    }

The complete configuration of the server block is as follows. After modification, reload the Nginx configuration.

server {
    listen       9000;
    server_name  localhost;

    location / {
        proxy_pass        http://localhost:8090;
        proxy_set_header  Host $http_host;
        proxy_set_header  X-Real-IP  $remote_addr;
        proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto $scheme;
    }

    location =/hello/nginx {
        deny all;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

After reloading, requests to /hello/nginx are denied, while /hi/nginx is still accessible.
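The same behavior can be checked with curl, assuming Nginx is listening on port 9000 as in the server block above:

curl -i http://localhost:9000/hello/nginx    # 403 Forbidden
curl -i http://localhost:9000/hi/nginx       # 200 OK, body contains "hi nginx"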

  • Reverse proxy by port number
#user nobody;
worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       9000;
        server_name  localhost;

        location / {
            proxy_pass        http://localhost:8090;
            proxy_set_header  Host $http_host;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto $scheme;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

    server {
        listen       9001;
        server_name  localhost;

        location / {
            proxy_pass        http://localhost:9090;
            proxy_set_header  Host $http_host;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto $scheme;
        }
    }

    include servers/*;
}

Using ports for the reverse proxy is relatively simple: we add two server blocks to the configuration file, have Nginx listen on ports 9000 and 9001, and then proxy to Tomcat ports 8090 and 9090 respectively via proxy_pass in each location block.

Accessing port 9000 through Nginx, we can see that the content from Tomcat on port 8090 is returned.
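A quick command-line check of both proxied ports might look like this; the serverPort field in each response tells us which Tomcat instance handled the request:

curl http://localhost:9000/hello/nginx    # response contains "serverPort":8090
curl http://localhost:9001/hello/nginx    # response contains "serverPort":9090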

The point of doing this is that in a real server environment you cannot expose every port to the outside world; exposing something like MySQL's port 3306 directly would be a serious security risk. An Nginx reverse proxy lets you expose only the ports you intend to expose.

  • Reverse proxy by domain name

In a real development environment, we use domain names or second-level domains to access pages or interfaces, and we usually do not need to append a port number. This is because 80 is the default port of the HTTP protocol: when we visit http://baidu.com, we are actually visiting http://baidu.com:80.

So let's look at how to do reverse proxying by domain name with Nginx. Since I am developing and testing locally, I have no public IP address; how can a domain name resolve to my own machine? We can modify the hosts file.

127.0.0.1 hello.democome.local
127.0.0.1 hi.democome.local

These are the two lines I added to the host's hosts file. There is no real .local domain, but with this configuration, requests to hello.democome.local and hi.democome.local resolve to 127.0.0.1 and can therefore be reverse proxied by Nginx. Let's look at the Nginx configuration file.

#user nobody;
worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  hello.democome.local;

        location / {
            proxy_pass        http://localhost:8090;
            proxy_set_header  Host $http_host;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto $scheme;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

    server {
        listen       80;
        server_name  hi.democome.local;

        location / {
            proxy_pass        http://localhost:9090;
            proxy_set_header  Host $http_host;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto $scheme;
        }
    }

    include servers/*;
}

In the above configuration, we changed the listening port of both server blocks to 80, and the two second-level domains hello.democome.local and hi.democome.local are proxied to the web servers at http://localhost:8090 and http://localhost:9090 respectively.

Visiting http://hello.democome.local/hello/nginx in the browser, we can see content returned normally from port 8090.

Visiting http://hi.democome.local/hello/nginx in the browser, we can see content returned normally from port 9090.

Through the above tests, our configuration works.

If you don't want to use the default port 80, you can still use a domain name plus a port. The configuration is as follows:

server {
    listen       9000;
    server_name  hello.democome.local;

    location / {
        proxy_pass        http://localhost:8090;
        proxy_set_header  Host $http_host;
        proxy_set_header  X-Real-IP  $remote_addr;
        proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto $scheme;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

After modifying the configuration and reloading it, you can verify that the domain name plus port is accessible normally.

  • Proxy to other sites
server {
    listen       80;
    server_name  hi.democome.local;

    location / {
        proxy_pass https://www.so.com/;
    }
}

If we change the server block to the configuration above and reload Nginx, we can see that although we visit hi.democome.local, the page actually returned is the 360 Search page, while the domain name in the address bar does not change.

Load balancing

Speaking of load balancing, what exactly is load balancing? Let's look at Wikipedia's definition:

Load balancing is a computer technique used to distribute load among multiple computers (computer clusters), network connections, CPUs, disk drives, or other resources, in order to optimize resource usage, maximize throughput, minimize response time, and avoid overload. Using multiple server components with load balancing instead of a single component improves reliability through redundancy. Load-balancing services are usually provided by dedicated software or hardware. Their main function is to distribute a large number of jobs sensibly across multiple processing units, which addresses the high-concurrency and high-availability problems of Internet architectures.

In short, load balancing exists to handle high concurrency. If we have multiple servers all running the same application, we want Nginx to forward user requests to the different servers according to some policy, so that no single server has to bear all of the traffic.

To test the effect, I uploaded the previously packaged JAR file to a remote server and started it.

Let’s see how to configure it:

#user nobody;
worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    upstream localhost {
        server 62.234.66.219:8090 weight=1;   # remote server
        server 192.168.0.101:8090 weight=3;
    }

    server {
        listen       80;
        server_name  hello.democome.local;

        location / {
            add_header        Backend-IP $upstream_addr;
            proxy_pass        http://localhost;
            proxy_set_header  Host $http_host;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto $scheme;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

    include servers/*;
}

I have configured two backend servers: 62.234.66.219:8090 is a Tencent Cloud server, and 192.168.0.101:8090 is the IP address of my local machine. The policy forwards requests to the two servers in a 1:3 ratio. Nginx supports several load-balancing policies, such as ip_hash; here we only demonstrate the weight policy (a sketch of ip_hash appears at the end of this section).

upstream localhost {
    server 62.234.66.219:8090 weight=1;
    server 192.168.0.101:8090 weight=3;
}

After completing the configuration, run nginx -s reload to reload it, then keep visiting http://hello.democome.local/hello/nginx.

One response shows the request being served by 192.168.0.101.

Another response shows the request being served by 62.234.66.219.

After repeated requests, we find that traffic is forwarded to the two servers in roughly a 1:3 ratio, which shows that the configured load-balancing policy is working.
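For comparison, here is a sketch of the same upstream block using the ip_hash policy mentioned above; with ip_hash, requests from the same client IP are always routed to the same backend:

upstream localhost {
    ip_hash;
    server 62.234.66.219:8090;
    server 192.168.0.101:8090;
}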

HTTPS Certificate Configuration

server {
    listen 443 ssl;                                                     
    server_name democome.com www.democome.com; # configure domain name
    ssl_certificate /etc/letsencrypt/live/democome.com/fullchain.pem;       # certificate location
    ssl_certificate_key /etc/letsencrypt/live/democome.com/privkey.pem;     # certificate key location
    location / {                                                            # reverse proxy configuration
        proxy_pass            http://localhost:8080;
        proxy_set_header  Host $http_host;
        proxy_set_header  X-Real-IP  $remote_addr;
        proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto $scheme;
    }
}

Configuring the certificate is simple: listen on port 443 with ssl, then point ssl_certificate and ssl_certificate_key at the certificate and private key locations.
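A common companion to this setup, not shown in the original configuration, is a plain-HTTP server block that redirects all traffic to HTTPS:

server {
    listen       80;
    server_name  democome.com www.democome.com;
    return 301 https://$host$request_uri;
}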

More recommendations

How to apply for a free HTTPS certificate with letsencrypt.

To learn more, follow my WeChat official account: Nine O'Clock Off Work.