Systematic problem solving
- Solve the problem of insufficient physical machines
- Solve the problem of low resource utilization on physical machines
- Solve the system high-availability problem
- Solve the problem of zero-downtime updates
Preparations for system deployment
- A server running Ubuntu 16.04.6 with Internet access
- Minimum server configuration: 4 cores and 8 GB of RAM; the more memory, the better
System deployment scheme design diagram
Note: The upper part of the diagram is the service server and the lower part is the database server. This document only describes the deployment of the service server; the database part is relatively simple. The IP addresses marked in the figure are the author's own addresses on the VM; set them according to your actual environment.
Relevant concepts
1 LVS
LVS (Linux Virtual Server) is open-source software that implements load balancing at transport layer 4. It offers three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and eight scheduling algorithms (rr, wrr, lc, wlc, lblc, lblcr, dh, sh).
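To make the LVS terminology concrete, the sketch below shows how a VS/NAT virtual server with the rr scheduler would be defined by hand using the ipvsadm tool. This is for illustration only: later in this document Keepalived generates equivalent rules from its virtual_server blocks, and the VIP and real-server addresses used here are the ones that appear further down; installing ipvsadm is an assumption, not a required step.
# Illustration only: a manual LVS (VS/NAT, rr) setup with ipvsadm
sudo apt-get install -y ipvsadm                                    # assumption: ipvsadm is available from the Ubuntu repos
sudo ipvsadm -A -t 192.168.227.88:80 -s rr                         # add a virtual service on the VIP with round-robin scheduling
sudo ipvsadm -a -t 192.168.227.88:80 -r 172.18.0.210:80 -m -w 1    # add a real server in NAT (masquerading) mode, weight 1
sudo ipvsadm -Ln                                                   # list the current virtual server table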
2 Keepalived role
LVS can achieve load balancing but cannot perform health checks. For example, if a real server (RS) fails, LVS will still forward requests to it, which results in failed requests. Keepalived can perform health checks and at the same time make LVS itself highly available, solving the LVS single point of failure. In fact, Keepalived was originally designed for LVS.
3 Keepalived and how it works
Keepalived is cluster management software that works with switching mechanisms at layers 2, 4, and 7. It is a service that ensures high availability of a cluster; its purpose is to prevent single points of failure.
Keepalived is based on the VRRP protocol. Its main functions are to isolate faulty real servers and to fail over between load balancers, preventing single points of failure. Before getting to know Keepalived, learn about VRRP.
4 VRRP (Virtual Router Redundancy Protocol)
VRRP is a fault-tolerant protocol: when the next-hop router of a host fails, another router takes the place of the faulty one so that network communication stays continuous and reliable. Before introducing VRRP, here are some related terms:
Virtual router: A virtual router consists of a Master router and multiple Backup routers. The host uses the virtual router as the default gateway.
VRID: identifies a virtual router. A group of routers with the same VRID constitutes a virtual router.
Master router: the router in the virtual router group that actually forwards packets.
Backup router: a router that can replace the Master router when the Master router fails.
Virtual IP address: indicates the IP address of the virtual router. A virtual router can have one or more IP addresses.
IP address owner: The router whose interface IP address is the same as the virtual IP address is called the IP address owner.
Virtual MAC Address: A virtual router has a virtual MAC address. The virtual MAC address is in the format of 00-00-5E-00-01-{VRID}. Generally, a virtual router uses a virtual MAC address to respond to ARP requests. Only when the virtual router is specially configured, it responds to the real MAC address of an interface.
Priority: VRRP determines the status of each virtual router based on the priority.
Non-preemption: If the Backup router works in non-preemption mode, the Backup router does not become the Master router even if it is configured with a higher priority as long as the Master router is not faulty.
Preemption mode: If the Backup router works in preemption mode, after receiving a VRRP packet, it compares its priority with that in the advertisement packet. If its priority is higher than that of the current Master router, it preempts to become the Master router. Otherwise, the Backup state will remain.
VRRP groups a set of routers on a LAN into a VRRP backup group. The group functions as a single router and is identified by a virtual router ID (VRID). A virtual router has its own virtual IP address and virtual MAC address; externally it behaves exactly like a physical router. Hosts on the LAN use the virtual router's IP address as their default gateway to communicate with external networks.
A virtual router sits on top of the physical routers. It consists of multiple actual routers: one Master and several Backups. When the Master works properly, hosts on the LAN communicate with the outside world through it. When the Master fails, one of the Backup routers becomes the new Master and takes over packet forwarding. (This is router high availability.)
5 VRRP workflow
- The router in the virtual router elects the Master based on the priority. The Master router sends gratuitous ARP packets to inform the connected device or host of its virtual MAC address and forwards packets.
- The Master router periodically sends VRRP packets to advertise its configuration information (such as priority) and working status.
- If the Master router fails, the Backup router in the virtual router elects a new Master router based on its priority.
- During the virtual router status switchover, the Master role moves from one device to another. The new Master router simply sends a gratuitous ARP packet containing the virtual MAC address and virtual IP address of the virtual router, so that connected hosts and devices update their ARP tables. Hosts on the network do not perceive that the Master router has switched to another device.
- If the priority of the Backup router is higher than that of the Master router, the working mode of the Backup router (preemption mode and non-preemption mode) determines whether to re-elect the Master router.
- The VRRP priority ranges from 0 to 255 (a larger value indicates a higher priority).
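If you want to observe this election traffic yourself, the Master's periodic advertisements can be captured on the wire: VRRP runs directly over IP as protocol 112 and is sent to the multicast address 224.0.0.18. A minimal, optional check is shown below; the interface name ens33 is an assumption matching the Keepalived configuration used later, so substitute your own.
# Optional: watch the Master's VRRP advertisements (sent roughly every advert_int seconds)
sudo tcpdump -i ens33 -nn 'ip proto 112'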
6 Docker
- Docker is the world’s leading software container platform.
- Docker is developed in the Go language, which was created by Google. Based on cgroups, namespaces, UnionFS-like filesystems such as AUFS, and other technologies, Docker encapsulates and isolates processes; it is virtualization at the operating-system level. Since an isolated process is independent of the host and of other isolated processes, it is also called a container. Docker was originally implemented on top of LXC.
- Docker automates repetitive tasks, such as setting up and configuring development environments, freeing up developers to focus on what really matters: building great software.
- Users can easily create and use containers and put their own applications into containers. Containers can also be versioned, copied, shared, and modified just like normal code.
Please refer to an article I wrote: juejin.cn/post/684490…
7 Nginx
Nginx is a high-performance HTTP and reverse proxy server as well as an e-mail (IMAP/POP3) proxy server.
Typical uses: clustering (improving throughput and reducing the load on any single server), reverse proxying (hiding real IP addresses), virtual servers, static file serving (static/dynamic separation), solving cross-origin problems, and building an enterprise API gateway with Nginx.
Start the deployment
Install Docker
1 Uninstall earlier versions of Docker (skip this step if Docker has never been installed)
sudo apt-get remove docker docker-engine docker.io containerd runc
2 Update the index list
sudo apt-get update
3 Allow apt to use repositories over HTTPS
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
4 Add the repository's GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
5 Verify the fingerprint of the key
sudo apt-key fingerprint 0EBFCD88
6 Add stable repositories and update indexes
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
7 View the Docker version list
apt-cache madison docker-ce
8 Install the specified Docker version
sudo apt-get install -y docker-ce=17.12.1~ce-0~ubuntu
9 Check whether Docker is installed successfully
docker --version
10 Add your user to the docker group so that docker can be run without sudo
sudo gpasswd -a <username> docker    # replace <username> with your login name
11 Restart the Docker service and refresh docker group membership
sudo service docker restart
newgrp docker
Docker custom network
Because a container's IP changes after it restarts, and we want each container to keep a fixed IP, we cannot rely on the default docker0 bridge, which does not allow custom IPs. Instead we create our own bridge network and assign each container an IP on it, so the address stays the same across restarts.
docker network create --subnet=172.18.0.0/24 mynet
Use ifconfig to view the network we created
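Besides ifconfig, the network can also be checked from Docker itself; a quick verification of the bridge created above (the network name mynet is the one used throughout this document):
docker network ls | grep mynet     # the user-defined bridge should be listed
docker network inspect mynet       # confirm the subnet is 172.18.0.0/24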
Install Keepalived on the host machine
1 Preinstall the compilation environment
sudo apt-get install -y gcc
sudo apt-get install -y g++
sudo apt-get install -y libssl-dev
sudo apt-get install -y daemon
sudo apt-get install -y make
sudo apt-get install -y sysv-rc-conf
2 Download and install Keepalived
cd /usr/local
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
tar zxvf keepalived-1.2.18.tar.gz
cd keepalived-1.2.18
./configure --prefix=/usr/local/keepalived
make && make install
3 Set Keepalived as system service
mkdir /etc/keepalived
mkdir /etc/sysconfig
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
ln -s /usr/local/keepalived/sbin/keepalived /sbin/
4 Modify keepalived startup configuration files
Because Linux distributions other than Red Hat do not provide /etc/rc.d/init.d/functions, the original startup script must be modified:
- Change ". /etc/rc.d/init.d/functions" to ". /lib/lsb/init-functions"
- Change "daemon keepalived ${KEEPALIVED_OPTIONS}" to "daemon keepalived start"
The overall content after modification is as follows
#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived
# Source function library
#. /etc/rc.d/init.d/functions
. /lib/lsb/init-functions
# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived
RETVAL=0
prog="keepalived"
start() {
echo -n $"Starting $prog:"
daemon keepalived start
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}
stop() {
echo -n $"Stopping $prog:"
killproc keepalived
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}
reload() {
echo -n $"Reloading $prog:"
killproc keepalived -1
RETVAL=$?
echo
}
# See how we were called.
case "The $1" in
start)
start
;;
stop)
stop
;;
reload)
reload
;;
restart)
stop
start
;;
condrestart)
if [ -f /var/lock/subsys/$prog ]; then
stop
start
fi
;;
status)
status keepalived
RETVAL=$?
;;
*)
echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
RETVAL=1
esac
exit $RETVAL
5 Modify the Keepalived configuration file
cd /etc/keepalived
cp keepalived.conf keepalived.conf.back
rm keepalived.conf
vim keepalived.conf
Add the following
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.227.88
        192.168.227.99
    }
}

virtual_server 192.168.227.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 172.18.0.210 80 {
        weight 1
    }
}

virtual_server 192.168.227.99 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 172.18.0.220 80 {
        weight 1
    }
}
Note that interface must be set to the name of the server's network interface; otherwise the VIPs cannot be bound.
6 Start Keepalived
systemctl daemon-reload
systemctl enable keepalived.service
systemctl start keepalived.service
After every configuration file change, you must run systemctl daemon-reload; otherwise the change does not take effect.
7 Check whether the Keepalived process exists
ps -ef|grep keepalived
8 Check the Keepalived running status
systemctl status keepalived.service
9 Check whether virtual IP addresses are mapped
ip addr
10 Ping the two virtual IP addresses
Both IP addresses respond, so Keepalived has been installed successfully.
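As an optional extra check, if the ipvsadm tool is installed on the host (sudo apt-get install -y ipvsadm; this is an assumption, the deployment itself does not require it), the LVS table generated from the two virtual_server blocks can be inspected:
sudo ipvsadm -Ln       # should list the virtual services 192.168.227.88:80 and 192.168.227.99:80; their real servers appear once the containers behind them exist
ip addr show ens33     # both virtual IPs should be bound to the interface (replace ens33 with your interface name)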
Implementing the front-end master/slave hot backup system with Docker containers
The host itself only needs the Docker engine and the Keepalived virtual IP mapping; all subsequent work happens inside Docker containers. The reason Keepalived is also installed inside the CentOS 7 containers is that the containers cannot be reached directly from an external IP, so the host's virtual IPs bridge external traffic to the containers, which interconnect internally.
Look at the picture below, at a glance:
Correction: the IP address in the figure should be 172.18.0.210, the virtual IP created inside the container network.
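Before creating any containers, it is worth confirming that the host can actually reach the container subnet. A minimal check from the host is shown below; 172.18.0.1 is assumed to be the gateway Docker picked for mynet, and docker network inspect mynet shows the real value.
ip route | grep 172.18.0.0     # the host should have a route to 172.18.0.0/24 via the Docker bridge
ping -c 2 172.18.0.1           # the mynet gateway should answer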
Next we install the master/slave portion of the front-end server
1 Pull the centos7 image
docker pull centos:7
2 Creating a Container
docker run -it -d --name centos1 centos:7
3 Switch to centos1
docker exec -it centos1 bash
4 Install common tools
yum update -y
yum install -y vim
yum install -y wget
yum install -y gcc-c++
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl-devel
yum install -y popt-devel
yum install -y initscripts
yum install -y net-tools
5 Package the container into a new image and create the container using the image
docker commit -a 'cfh' -m 'centos with common tools' centos1 centos_base
6 Delete the centos1 container created earlier, re-create a container from the base image, and install Keepalived and Nginx in it
docker rm -f centos1
# The container must be started with /usr/sbin/init (in privileged mode) so that systemd and systemctl work inside it
docker run -it --name centos_temp -d --privileged centos_base /usr/sbin/init
docker exec -it centos_temp bash
7 Install Nginx
# Add the Nginx yum repository
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
# Install Nginx
yum install -y nginx
# start nginx
systemctl start nginx.service
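Before moving on, a quick optional sanity check inside the centos_temp container confirms that Nginx is serving its default page:
systemctl status nginx.service     # should be active (running)
curl -I http://127.0.0.1           # should return HTTP/1.1 200 OK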
8 Install Keepalived
# 1. Download Keepalived
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
# 2. Decompress it into /usr/local/
tar -zxvf keepalived-1.2.18.tar.gz -C /usr/local/
# 3. Install OpenSSL
yum install -y openssl openssl-devel
# 4. Configure Keepalived
cd /usr/local/keepalived-1.2.18/ && ./configure --prefix=/usr/local/keepalived
# 5. Compile and install
make && make install
9 Install Keepalived as system service
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
# Enable Keepalived at boot
chkconfig keepalived on
The installation is now complete. If an error occurs during startup, run the following commands:
cd /usr/sbin/
rm -f keepalived
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# Start Keepalived
systemctl daemon-reload                # reload unit files
systemctl enable keepalived.service    # start automatically at boot
systemctl start keepalived.service     # start the service
systemctl status keepalived.service    # check the service status
10 Modify the /etc/keepalived/keepalived.conf file
Backup configuration files
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.backup
rm -f keepalived.conf
vim keepalived.conf
The configuration file is as follows
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 121
    mcast_src_ip 172.18.0.201
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.18.0.210
    }
}
11 Modify the nginx configuration file
vim /etc/nginx/conf.d/default.conf
upstream tomcat {
    server 172.18.0.11:80;
    server 172.18.0.12:80;
    server 172.18.0.13:80;
}

server {
    listen       80;
    server_name  172.18.0.210;

    #charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
12 Add the heartbeat detection file
vim nginx_check.sh
# Here is the script content
#!/bin/bash
# Count the running Nginx processes
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    # Nginx is down: try to restart it (Nginx was installed via yum, so use systemctl)
    systemctl start nginx.service
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        # Nginx could not be restarted: kill Keepalived so the VIP fails over to the other node
        killall keepalived
    fi
fi
13 Grant execution permission to the script
chmod +x nginx_check.sh
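The script can also be exercised by hand once before relying on it (a quick optional check; at this point Nginx is running, so the script should do nothing and exit cleanly, and its restart and killall branches only matter once Keepalived is started in the next step and begins tracking it):
bash /etc/keepalived/nginx_check.sh; echo "exit code: $?"
ps -C nginx --no-header | wc -l      # should print 1 or more while Nginx is running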
14 Enable Keepalived at startup and start it
systemctl enable keepalived.service
# Start Keepalived
systemctl daemon-reload
systemctl start keepalived.service
15 Check whether the virtual IP address works
ping 172.18.0.210
16 Package the centos_temp container as an image
docker commit -a 'cfh' -m 'centos with keepalived nginx' centos_temp centos_kn
17 Delete all containers
docker rm -f `docker ps -a -q`
18 Re-create the containers from the image packaged earlier
The names are centos_web_master and centos_web_slave
docker run --privileged -tid \
  --name centos_web_master --restart=always \
  --net mynet --ip 172.18.0.201 \
  centos_kn /usr/sbin/init

docker run --privileged -tid \
  --name centos_web_slave --restart=always \
  --net mynet --ip 172.18.0.202 \
  centos_kn /usr/sbin/init
19 Modify the configuration files of nginx and Keepalived in centos_web_slave
Keepalived modifications are as follows
state BACKUP                  # set as the standby server (Keepalived uses BACKUP rather than SLAVE)
mcast_src_ip 172.18.0.202     # change to the local IP address
priority 80                   # priority lower than the master's
The Nginx configuration is as follows
upstream tomcat {
    server 172.18.0.14:80;
    server 172.18.0.15:80;
    server 172.18.0.16:80;
}

server {
    listen       80;
    server_name  172.18.0.210;

    #charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
Restart Keepalived and nginx
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
20 Start six front-end servers using Nginx
docker pull nginx
nginx_web_1='/home/root123/cfh/nginx1'
nginx_web_2='/home/root123/cfh/nginx2'
nginx_web_3='/home/root123/cfh/nginx3'
nginx_web_4='/home/root123/cfh/nginx4'
nginx_web_5='/home/root123/cfh/nginx5'
nginx_web_6='/home/root123/cfh/nginx6'
mkdir -p ${nginx_web_1}/conf ${nginx_web_1}/conf.d ${nginx_web_1}/html ${nginx_web_1}/logs
mkdir -p ${nginx_web_2}/conf ${nginx_web_2}/conf.d ${nginx_web_2}/html ${nginx_web_2}/logs
mkdir -p ${nginx_web_3}/conf ${nginx_web_3}/conf.d ${nginx_web_3}/html ${nginx_web_3}/logs
mkdir -p ${nginx_web_4}/conf ${nginx_web_4}/conf.d ${nginx_web_4}/html ${nginx_web_4}/logs
mkdir -p ${nginx_web_5}/conf ${nginx_web_5}/conf.d ${nginx_web_5}/html ${nginx_web_5}/logs
mkdir -p ${nginx_web_6}/conf ${nginx_web_6}/conf.d ${nginx_web_6}/html ${nginx_web_6}/logs
docker run -it --name temp_nginx -d nginx
docker ps
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_1}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_1}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_2}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_2}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_3}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_3}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_4}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_4}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_5}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_5}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_6}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_6}/conf.d/default.conf
docker rm -f temp_nginx
docker run -d --name nginx_web_1 \
  --network=mynet --ip 172.18.0.11 \
  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_1}/html/:/usr/share/nginx/html \
-v ${nginx_web_1}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_1}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_1}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_2 \
  --network=mynet --ip 172.18.0.12 \
  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_2}/html/:/usr/share/nginx/html \
-v ${nginx_web_2}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_2}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_2}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_3 \
  --network=mynet --ip 172.18.0.13 \
  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_3}/html/:/usr/share/nginx/html \
-v ${nginx_web_3}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_3}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_3}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_4 \
  --network=mynet --ip 172.18.0.14 \
  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_4}/html/:/usr/share/nginx/html \
-v ${nginx_web_4}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_4}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_4}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_5 \
  --network=mynet --ip 172.18.0.15 \
  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_5}/html/:/usr/share/nginx/html \
-v ${nginx_web_5}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_5}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_5}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_6 \
  --network=mynet --ip 172.18.0.16 \
  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_6}/html/:/usr/share/nginx/html \
-v ${nginx_web_6}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_6}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_6}/logs/:/var/log/nginx --privileged --restart=always nginx
cd ${nginx_web_1}/html
cp /home/server/envconf/index.html ${nginx_web_1}/html/index.html
cd ${nginx_web_2}/html
cp /home/server/envconf/index.html ${nginx_web_2}/html/index.html
cd ${nginx_web_3}/html
cp /home/server/envconf/index.html ${nginx_web_3}/html/index.html
cd ${nginx_web_4}/html
cp /home/server/envconf/index.html ${nginx_web_4}/html/index.html
cd ${nginx_web_5}/html
cp /home/server/envconf/index.html ${nginx_web_5}/html/index.html
cd ${nginx_web_6}/html
cp /home/server/envconf/index.html ${nginx_web_6}/html/index.html
/home/server/envconf/ is my own directory; readers can create their own. The contents of the index.html file are attached below.
<!DOCTYPE html>
<html lang="en" xmlns:v-on="http://www.w3.org/1999/xhtml">
<head>
    <meta charset="UTF-8">
    <title>Master/slave test</title>
</head>
<script src="https://cdn.jsdelivr.net/npm/vue"></script>
<script src="https://cdn.staticfile.org/vue-resource/1.5.1/vue-resource.min.js"></script>
<body>
<div id="app" style="height: 300px; width: 600px">
    <h1 style="color: red">{{message}}</h1>
    <br><br>
    <button v-on:click="getMsg">Request back-end data</button>
</div>
</body>
</html>
<script>
    var app = new Vue({
        el: '#app',
        data: { message: 'Hello Vue!' },
        methods: {
            getMsg: function () {
                var ip = "http://192.168.227.99";
                var that = this;
                // Send the GET request
                that.$http.get(ip + '/api/test').then(function (res) {
                    that.message = res.data;
                }, function () {
                    console.log('Request failed');
                });
            }
        }
    })
</script>
21 Visit 192.168.227.88 in a browser; the index.html page is displayed.
22 Test
- Stop the centos_web_master container and check whether the page can still be accessed.
- Restart the centos_web_master container and check whether the page being served switches back from the slave to the master. (To make the switch obvious, add your own markers to index.html, such as "master" or "slave" in the title.)
- Shut down any of the web containers behind the active server and check that Nginx load balancing still works.
If all of the above tests pass, the front-end master/slave setup is working; a command-line sketch of the first two tests follows.
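For reference, the first two tests can be scripted roughly as shown below: the docker commands run on the deployment host, while the curl commands run from any machine that can reach the VIP. This is only a sketch; the VIP 192.168.227.88 and the container names come from the steps above, and because nopreempt is set in the Keepalived configuration, traffic may not move back to the master automatically after it restarts.
curl -s http://192.168.227.88 | head -n 5     # page served via the master
docker stop centos_web_master                 # on the host: simulate a master failure
sleep 5                                       # give Keepalived time to fail over
curl -s http://192.168.227.88 | head -n 5     # page should now be served via the slave
docker start centos_web_master                # bring the master back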
Implementing the back-end master/slave hot backup system with Docker containers
For the back-end servers we use OpenJDK containers to run the jar package, create the master/slave load-balancer containers from the centos_kn image built above, and then adjust their configuration.
To make the OpenJDK container run the jar automatically when it starts, we rebuild the image with a Dockerfile.
1 Create a Dockerfile file
FROM openjdk:10
MAINTAINER cfh
WORKDIR /home/soft
CMD ["nohup"."java"."-jar"."docker_server.jar"]
Copy the code
2 Building an Image
docker build -t myopenjdk .
3 Use the image to create six back-end servers
docker volume create S1
docker volume inspect S1
docker volume create S2
docker volume inspect S2
docker volume create S3
docker volume inspect S3
docker volume create S4
docker volume inspect S4
docker volume create S5
docker volume inspect S5
docker volume create S6
docker volume inspect S6
cd /var/lib/docker/volumes/S1/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S1/_data/docker_server.jar
cd /var/lib/docker/volumes/S2/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S2/_data/docker_server.jar
cd /var/lib/docker/volumes/S3/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S3/_data/docker_server.jar
cd /var/lib/docker/volumes/S4/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S4/_data/docker_server.jar
cd /var/lib/docker/volumes/S5/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S5/_data/docker_server.jar
cd /var/lib/docker/volumes/S6/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S6/_data/docker_server.jar
docker run -it -d --name server_1 -v S1:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.101 --restart=always myopenjdk
docker run -it -d --name server_2 -v S2:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.102 --restart=always myopenjdk
docker run -it -d --name server_3 -v S3:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.103 --restart=always myopenjdk
docker run -it -d --name server_4 -v S4:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.104 --restart=always myopenjdk
docker run -it -d --name server_5 -v S5:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.105 --restart=always myopenjdk
docker run -it -d --name server_6 -v S6:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.106 --restart=always myopenjdk
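Before putting them behind Nginx, each back-end container can be checked directly from the host. This is an optional sanity check and assumes each docker_server.jar instance is configured to listen on the port used for it in the Nginx upstreams below (6001 through 6006).
# Query all six back ends (IPs 172.18.0.101-106, ports 6001-6006 as in the Nginx upstream configuration)
for i in 1 2 3 4 5 6; do
  curl -s http://172.18.0.10$i:600$i/api/test; echo
done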
docker_server.jar is the test program; its main code is as follows.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import javax.servlet.http.HttpServletResponse;
import java.util.LinkedHashMap;
import java.util.Map;

@RestController
@RequestMapping("api")
@CrossOrigin("*")
public class TestController {

    @Value("${server.port}")
    public int port;

    @RequestMapping(value = "/test", method = RequestMethod.GET)
    public Map<String, Object> test(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "GET");
        response.setHeader("Access-Control-Allow-Headers", "token");
        Map<String, Object> objectMap = new LinkedHashMap<>();
        objectMap.put("code", 10000);
        objectMap.put("msg", "ok");
        objectMap.put("server_port", "Server port: " + port);
        return objectMap;
    }
}
4 Create the back-end master and slave containers
The master server
docker run --privileged -tid --name centos_server_master --restart=always --net mynet --ip 172.18.0.203 centos_kn /usr/sbin/init
Master server keepalived configuration
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 110
    mcast_src_ip 172.18.0.203
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.18.0.220
    }
}
Master server Nginx configuration
upstream tomcat {
    server 172.18.0.101:6001;
    server 172.18.0.102:6002;
    server 172.18.0.103:6003;
}

server {
    listen       80;
    server_name  172.18.0.220;

    #charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
Restart Keepalived and nginx
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
The slave server
docker run --privileged -tid --name centos_server_slave --restart=always --net mynet --ip 172.18.0.204 centos_kn /usr/sbin/init
Keepalived configuration on the slave server
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP    # standby node (Keepalived uses BACKUP rather than SLAVE)
    interface eth0
    virtual_router_id 110
    mcast_src_ip 172.18.0.204
    priority 80
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.18.0.220
    }
}
Nginx configuration on the slave server
upstream tomcat {
    server 172.18.0.104:6004;
    server 172.18.0.105:6005;
    server 172.18.0.106:6006;
}

server {
    listen       80;
    server_name  172.18.0.220;

    #charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
Restart Keepalived and nginx
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
Command-line verification
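A minimal command-line check (run on the host, which can reach the mynet subnet, or inside any container attached to mynet; the VIP 172.18.0.220 and the /api/test path come from the configuration above) is to query the back-end VIP a few times and watch the server_port field rotate across the upstream back ends:
for i in 1 2 3; do
  curl -s http://172.18.0.220/api/test; echo
done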
Browser verification
Portainer installation
Portainer is a web UI for container management, where you can see the running state of each container.
docker search portainer
docker pull portainer/portainer
docker run -d -p 9000:9000 \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
--name prtainer-eureka \
portainer/portainer
http://192.168.227.171:9000
The default account is admin. After you create a password, the next screen appears; select Local to manage the local Docker environment and confirm.
Conclusion:
The above is the design and deployment of the whole scheme. Deeper topics include unified multi-container management with Docker Compose and Docker cluster management with Kubernetes (k8s); that part requires a good deal of further study.
In addition, the three core elements of Docker should be understood: images/containers, data volumes, and network management.