I hear that searching for "Java Fish" on WeChat makes you stronger!

This article is included in my Java Server collection, which contains my complete series of Java articles for study or interview preparation.

(1) Overview

A recent project ran into trouble when migrating to the public network environment: the services were up, but they could not be accessed at all. It turned out to be an Nginx configuration problem. Unfortunately, I had almost forgotten everything I knew about Nginx since graduating from college, no operations staff could be reached in the emergency, and the migration very nearly ended in failure.

So I feel that even as a back-end developer, I still need to understand basic Nginx configuration. That is why I wrote this article about Nginx, hoping it helps you too.

(2) Nginx introduction

Nginx is a high-performance web server that can act as a forward proxy, a reverse proxy, a load balancer, and more. For example, the project I am currently working on has 11 application servers, all sitting behind Nginx as a reverse proxy.

There is a difference in concept between forward and reverse proxies, but no difference at all in technical implementation.

Forward proxy: The proxy server sits where both the client and the target service can reach it, and we access the target service by going through the proxy server.

Reverse proxy: By accessing the proxy server, the proxy server distributes requests to other services. This is the reverse proxy.

But technically all you need to do is configure proxy_pass in location.
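As a minimal sketch of that idea (the port and header choices here are my own, not from a specific project), a reverse proxy is just a server block whose location forwards requests to a backend address:

```nginx
server {
    listen 80;

    location / {
        # Forward every request to the backend application server
        proxy_pass http://127.0.0.1:8080;
        # Preserve the original Host header and client IP for the backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```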

2.1 installation

This installation is done entirely on Linux. Nginx is available for Windows, but I have never seen a project run nginx on a Windows server.

To run nginx, you need to install the following environment:

yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel pcre pcre-devel

Next, download the nginx source package. The version is up to you; 1.14.2 is used here:

wget http://nginx.org/download/nginx-1.14.2.tar.gz
tar -zxvf nginx-1.14.2.tar.gz
cd nginx-1.14.2

Use the default installation mode:

./configure
make
make install

After these commands finish, the nginx files are installed under /usr/local/nginx.

Verify the installation with the following command:

/usr/local/nginx/sbin/nginx -V

2.2 Basic Commands

# Start with a specified config file
./sbin/nginx -c /conf/nginx.conf

# Start with a specified nginx prefix directory
./sbin/nginx -p /usr/local/nginx

# Test whether the configuration file is valid
./sbin/nginx -t

# Stop (the second form is more graceful)
./sbin/nginx -s stop
./sbin/nginx -s quit

# Reload the configuration file
./sbin/nginx -s reload

2.3 Proxy Parameters

proxy_pass            # Address of the proxied service
proxy_redirect off;   # Do not rewrite the Location header in backend redirects
proxy_set_header Host $host;   # Pass the original Host header when forwarding
proxy_set_header X-Forwarded-For $remote_addr;  # Pass the client IP address
proxy_connect_timeout 90;  # Timeout for connecting to the proxied service
proxy_send_timeout 90;  # Maximum time for sending a request
proxy_read_timeout 90;  # Maximum time for reading a response
# Buffering related
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

2.4 Configuring the Proxy

location /javayz/ {
    proxy_pass http://127.0.0.1:8080;
}

With this configuration, accessing /javayz/ proxies the request to port 8080 on the local machine.

Whether you add a trailing / after the proxy_pass URL also matters. With a trailing /, the location prefix is stripped and only the rest of the URI is appended to the proxy_pass path; without it, the full original URI is forwarded unchanged. For example, with the configuration above, visiting http://127.0.0.1/javayz/index.html is forwarded to http://127.0.0.1:8080/javayz/index.html. If the configuration is changed to this:

location /javayz/ {
    proxy_pass http://127.0.0.1:8080/;
}

then the forwarded address becomes http://127.0.0.1:8080/index.html.
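The two behaviors can be summarized in one annotated sketch (the two locations are shown together only for comparison; in a real config you would keep just one):

```nginx
# Incoming request: http://127.0.0.1/javayz/index.html

location /javayz/ {
    proxy_pass http://127.0.0.1:8080;    # no trailing slash
    # -> forwarded to http://127.0.0.1:8080/javayz/index.html
}

location /javayz/ {
    proxy_pass http://127.0.0.1:8080/;   # trailing slash
    # -> forwarded to http://127.0.0.1:8080/index.html
}
```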

(3) NGINX load balancing

The main use of nginx is load balancing, which can be done through upstream.

First, I deployed two very simple SpringBoot projects on the server: one listens on port 8081 and returns "8081", the other listens on port 8082 and returns "8082".

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class IndexController {
    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String index() {
        return "8081";
    }
}

Nginx load balancing configuration:

In the http block:

upstream backend {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

In the server block:

location / {
    proxy_pass http://backend/;
}

Access the address several times with curl, and 8081 and 8082 respond in turn (round-robin). If one server has a better configuration and you want it to take more of the load, you can give the services different weights:

upstream backend {
    server 127.0.0.1:8081 weight=2;
    server 127.0.0.1:8082 weight=1;
}

Run the curl commands again: 8081 now answers roughly twice as often as 8082.

Upstream parameters are as follows:

server        service IP address and port
weight        weight of the node
max_fails     maximum number of failed attempts before the node is marked down
fail_timeout  how long to wait before retrying a failed node
backup        marks a backup server
max_conns     maximum number of connections
slow_start    how many seconds to wait before ramping a recovered node back up
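A sketch of an upstream that uses these parameters (the values here are made up for illustration; note that slow_start is only honored by the commercial NGINX Plus):

```nginx
upstream backend {
    server 127.0.0.1:8081 weight=2 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8082 weight=1 max_conns=100;
    server 127.0.0.1:8083 backup;   # only used when the other nodes are down
}
```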

(4) Nginx load balancing algorithm

The default load balancing algorithm in Nginx is weighted round-robin: nodes are polled one by one according to their configured weights. There are several other load balancing algorithms as well.

Ip_hash algorithm: hashes the client IP address and takes the result modulo the number of servers in the upstream. This guarantees that requests from the same IP always reach the same application server.

upstream backend {
    ip_hash;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

Url_hash algorithm: hashes the requested URL so that the same URL always reaches the same application server. (In stock nginx this is written as `hash $request_uri;`; the `url_hash` directive comes from a third-party module.)

upstream backend {
    url_hash;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

Least_conn algorithm: distributes requests to the node with the fewest connections.

Least_time algorithm: sends requests to the node with the fastest average response time (available only in the commercial NGINX Plus).
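For example, least_conn is configured the same way as the hash algorithms above:

```nginx
upstream backend {
    least_conn;   # pick the node with the fewest active connections
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}
```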

(5) Nginx cache

If every request has to be reverse-proxied to the application server, the bandwidth and performance pressure can be significant. Nginx supports caching: static front-end pages that rarely change can be served from the cache, improving overall performance.

5.1 Adding a Cache Declaration in the http Block

This directive is placed in the http block:

proxy_cache_path /www/javayz/cache levels=1:2 keys_zone=cache_javayz:500m inactive=20d max_size=1g;

#proxy_cache_path  path where cache files are stored
#levels            directory levels and name lengths; 1:2 means two levels, the first named with one hex character and the second with two
#keys_zone         name and size of the shared memory zone holding the cache keys (here cache_javayz, 500m)
#inactive          entries not accessed within this period are deleted
#max_size          maximum disk space the cache may occupy

5.2 Setting the Cache Policy in the location Block

These directives are placed in the location block:

# Use the keys_zone defined above
proxy_cache cache_javayz;

# The cache key; nginx hashes it with MD5
proxy_cache_key $host$uri$is_args$args;

# Cache responses with HTTP status code 200 for 12 hours
proxy_cache_valid 200 12h;
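Putting the pieces together, a sketch of a caching reverse proxy using the paths and zone name from this article (the X-Cache-Status header is my own addition for debugging, not part of the original setup):

```nginx
http {
    proxy_cache_path /www/javayz/cache levels=1:2 keys_zone=cache_javayz:500m inactive=20d max_size=1g;

    upstream backend {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend/;
            proxy_cache cache_javayz;
            proxy_cache_key $host$uri$is_args$args;
            proxy_cache_valid 200 12h;
            # Expose whether the response was a cache HIT or MISS
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
```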

5.3 Accessing the Site to Generate the Cache

Cache generation may fail because of file permissions; in that case, switch the `user` directive in nginx.conf to root (or give the worker user write access to the cache directory).

Once accessed, the cache file is generated in the path above.

5.4 Clearing cache

Download the module:

wget http://labs.frickle.com/files/ngx_cache_purge-2.3.tar.gz

Unpack it:

tar -zxvf ngx_cache_purge-2.3.tar.gz

Run configure again in the nginx source directory:

./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --add-module=../ngx_cache_purge-2.3

Recompile:

make

Do not run make install this time. Instead, back up the old binary at /usr/local/nginx/sbin/nginx and replace it with the newly compiled one from the objs directory:

mv /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginx.bak
cp ./objs/nginx /usr/local/nginx/sbin/

Check whether the module was compiled in:

/usr/local/nginx/sbin/nginx -V

After restarting nginx, you can clear the cache simply by accessing http://ip:80/clear/.
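The purge location itself is not shown above; based on the ngx_cache_purge module's documented directive, a sketch matching the /clear/ URL and the cache key used in this article might look like this (the regex and zone name are taken from this article's config, not from a tested setup):

```nginx
location ~ /clear(/.*) {
    # Purge the entry whose key matches the one built by proxy_cache_key
    proxy_cache_purge cache_javayz $host$1$is_args$args;
}
```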

(6) Summary

If your company separates responsibilities strictly, these Nginx concepts are enough for a back-end developer to understand how the architecture works. If you handle both development and operations, you will need more than this. Learning never stops, and I'll see you next time.