In day-to-day operation of high-availability clusters, a load balancer distributes traffic across multiple nodes. A common choice here is Nginx. The official free version of Nginx is fairly basic: in most cases we use it only for load balancing, and rely on separate monitoring tools for application status. For a small team that cannot spare the resources to deploy a dedicated monitoring tool, using Nginx itself to probe and monitor applications can save that cost. Note that an Nginx installed with `yum install nginx` does not include the probe module, so below we build Nginx from source.
Install the Nginx probe plugin

Download the Nginx source code and the probe plugin:

wget Nginx.org/download/ng…
wget Github.com/yaoweibin/n…

Install the build dependencies:

yum install pcre pcre-devel openssl openssl-devel gd gd-devel zlib zlib-devel patch libxml2-devel libxslt-devel perl-devel perl-ExtUtils-Embed

Unpack both archives:

tar zxvf nginx-1.16.1.tar.gz
unzip nginx_upstream_check_module.zip

Patch the plugin into the Nginx source code:

cd /opt/nginx-1.16.1
patch -p1 < /opt/nginx_upstream_check_module-master/check_1.16.1+.patch

Configure the build with the HTTP probe plugin enabled:

./configure \
--prefix=/usr/share/nginx \
--sbin-path=/usr/sbin/nginx \
--modules-path=/usr/lib64/nginx/modules \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--http-client-body-temp-path=/var/lib/nginx/tmp/client_body \
--http-proxy-temp-path=/var/lib/nginx/tmp/proxy \
--http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi \
--http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi \
--http-scgi-temp-path=/var/lib/nginx/tmp/scgi \
--pid-path=/run/nginx.pid \
--lock-path=/run/lock/subsys/nginx \
--user=nginx \
--group=nginx \
--with-file-aio \
--with-ipv6 \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-stream_ssl_preread_module \
--with-http_addition_module \
--with-http_xslt_module=dynamic \
--with-http_image_filter_module=dynamic \
--with-http_sub_module \
--with-http_dav_module \
--with-http_flv_module \
--with-http_mp4_module \
--with-http_gunzip_module \
--with-http_gzip_static_module \
--with-http_random_index_module \
--with-http_secure_link_module \
--with-http_degradation_module \
--with-http_slice_module \
--with-http_stub_status_module \
--with-http_perl_module=dynamic \
--with-http_auth_request_module \
--with-mail=dynamic \
--with-mail_ssl_module \
--with-pcre \
--with-pcre-jit \
--with-stream=dynamic \
--with-stream_ssl_module \
--with-debug \
--with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' \
--with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' \
--add-module=/opt/nginx_upstream_check_module-master/

Then compile and install:

make && make install

Use Nginx to load balance Artifactory

Here Nginx is used to distribute requests across the Artifactory nodes. Artifactory can also generate the Nginx configuration file automatically, as shown in the following figure.
After the configuration file is generated, add the probe plugin's directives to the Nginx configuration:

upstream artifactory {
    server 192.168.1.2:8082;
    server 192.168.1.3:8082;
    check interval=2000 rise=2 fall=2 timeout=1000 type=http;
    check_http_send "HEAD / HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}
upstream artifactory-direct {
    server 192.168.1.2:8081;
    server 192.168.1.3:8081;
    check interval=2000 rise=2 fall=2 timeout=1000 type=http;
    check_http_send "HEAD / HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}
server {
listen 80 ;
server_name artifactory.external.io;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
## access_log /var/log/nginx/artifactory.external.io-access.log timing;
## error_log /var/log/nginx/artifactory.external.io-error.log;
rewrite ^/$ /ui/ redirect;
rewrite ^/ui$ /ui/ redirect;
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 2400s;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_buffer_size 128k;
proxy_buffers 40 128k;
proxy_busy_buffers_size 128k;
proxy_pass http://artifactory;
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~ ^/artifactory/ {
proxy_pass http://artifactory-direct;
}
}
location /status {
check_status;
access_log off;
}
}

Once the probe configuration is in place, the preset /status location shows the health of each load-balanced application node.
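The rise/fall parameters in the check directive above mean a node must pass two consecutive probes to be marked up and fail two consecutive probes to be marked down, which prevents a single flaky probe from flipping a node's status. A minimal Python sketch of that counter logic (illustrative only, not the module's internals):

```python
class NodeHealth:
    """Sketch of the rise/fall logic behind `check ... rise=2 fall=2`."""

    def __init__(self, rise: int = 2, fall: int = 2) -> None:
        self.rise = rise          # consecutive successes needed to mark up
        self.fall = fall          # consecutive failures needed to mark down
        self.up = True            # nodes start out considered healthy
        self._successes = 0
        self._failures = 0

    def record(self, ok: bool) -> bool:
        """Feed one probe result; return the node's current status."""
        if ok:
            self._successes += 1
            self._failures = 0
            if self._successes >= self.rise:
                self.up = True
        else:
            self._failures += 1
            self._successes = 0
            if self._failures >= self.fall:
                self.up = False
        return self.up
```

With fall=2, one failed probe leaves the node up; a second consecutive failure marks it down, and it then needs rise=2 consecutive successes to come back.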
The status page can also be viewed in JSON format, which makes it convenient to collect the data programmatically.
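A small script can poll that JSON endpoint and pull out unhealthy backends. This is a sketch: the URL path (/status?format=json) and the field names ("servers", "server", "name", "status") are assumptions based on nginx_upstream_check_module's documented JSON output, so verify them against your own deployment.

```python
import json
import urllib.request


def fetch_status(url: str = "http://localhost/status?format=json") -> dict:
    """Fetch the probe status page as JSON (endpoint path is an assumption)."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))


def down_nodes(status: dict) -> list:
    """Return addresses of backends not currently marked 'up'.

    Field names here mirror the module's documented JSON shape,
    e.g. {"servers": {"server": [{"name": ..., "status": ...}, ...]}}.
    """
    return [s["name"]
            for s in status["servers"]["server"]
            if s["status"] != "up"]
```

Run on a schedule (cron, or a monitoring agent), this gives simple alerting on Artifactory node health without a dedicated monitoring stack.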