Cluster setup
Use Docker to build a data center cluster of three servers, then start a client container as the entry point for service registration and discovery, and simulate the whole flow to see how it works.
Server startup command
# Pull the latest image
$ docker pull consul
# Start server1 and map port 8500 of the container to port 8900 of the host to expose the UI
$ docker run -d --name=consul1 -p 8900:8500 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --bootstrap-expect=3 --client=0.0.0.0 --ui
076d7658951753309e2b052315190921aa89ddf9e63b7055558867769747a954
# View the IP address of server1's container (consul1)
$ docker inspect consul1
[{"Networks": {"bridge": {
    "IPAMConfig": null, "Links": null, "Aliases": null,
    "NetworkID": "1cf2fe7613c4b319b47058da0f61460f6d70c9829645741d0c9eadc23f7846af",
    "EndpointID": "18840278673f45308e2b0333f054aeb89e817af0d53a1a5bcd84202017937cdf",
    "Gateway": "172.17.0.1",
    "IPAddress": "172.17.0.3",
    "IPPrefixLen": 16,
    "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0,
    "MacAddress": "02:32:ac:10:00:03",
    "DriverOpts": null}}}]
# Start server2 and join the cluster
$ docker run -d --name=consul2 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --client=0.0.0.0 --join 172.17.0.3
2543282e512bf6150bf340dde19460f85166568dc706bcd9b797eb66bbe9437a
# Start server3 and join the cluster
$ docker run -d --name=consul3 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --client=0.0.0.0 --join 172.17.0.3
cd333ecbb37d828560c4fdb4b65b103fa343565408e1d1e62eaf8a8f61711263
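Note: the commands above join the cluster by container IP, which can change when containers are recreated. As an optional variation (not part of the original setup), a user-defined Docker network would let the agents find each other by container name; a minimal sketch, assuming the containers are recreated on that network:
# Optional variation: create a named network and join by container name
$ docker network create consul-net
$ docker run -d --name=consul1 --network=consul-net -p 8900:8500 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --bootstrap-expect=3 --client=0.0.0.0 --ui
$ docker run -d --name=consul2 --network=consul-net -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --client=0.0.0.0 --retry-join=consul1
$ docker run -d --name=consul3 --network=consul-net -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --client=0.0.0.0 --retry-join=consul1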
After the servers are started, run the following commands to observe the interaction between the three servers
$ docker logs -f consul1 | grep "agent.server"
$ docker logs -f consul2 | grep "agent.server"
$ docker logs -f consul3 | grep "agent.server"
The logs show that server2 and server3 joined the cluster and server1 was elected as the leader
Log in to any server and check the cluster member status
$ docker exec -it consul1 sh
/ # consul members
Node          Address          Status  Type    Build   Protocol  DC   Segment
076d76589517  172.17.0.3:8301  alive   server  1.10.1  2         dc1  <all>
2543282e517b  172.17.0.4:8301  alive   server  1.10.1  2         dc1  <all>
cd333ecbb37d  172.17.0.5:8301  alive   server  1.10.1  2         dc1  <all>
Access the Web UI through the port exposed by server1 (8900 on the host)
In addition to the Services and Nodes panels, you can also see the Key/Value panel; remember that Consul also supports simple KV storage.
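As a quick illustration of the KV store (a minimal sketch; the key name and value below are arbitrary examples, not part of the cluster setup):
# Write a key/value pair through the agent in consul1
$ docker exec -it consul1 consul kv put config/redis/maxmemory 100mb
Success! Data written to: config/redis/maxmemory
# Read it back
$ docker exec -it consul1 consul kv get config/redis/maxmemory
100mb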
Client startup command
$ docker run -d --name=consul4 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=false --client=0.0.0.0 --join 172.17.0.2
3134cd7ab14f021c335b0b4e7d4c5bdcad890531d80fa13273a3657702233112
Check the Consul cluster member information again
$ docker exec -it consul2 consul members
Node          Address          Status  Type    Build   Protocol  DC   Segment
782b20094c63  172.17.0.4:8301  alive   server  1.10.2  2         dc1  <all>
b404b70879a2  172.17.0.2:8301  alive   server  1.10.2  2         dc1  <all>
e5f114c8c4df  172.17.0.3:8301  alive   server  1.10.2  2         dc1  <all>
3134cd7ab14f  172.17.0.5:8301  alive   client  1.10.2  2         dc1  <default>
A complete Consul cluster has now been set up.
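As an optional sanity check (not in the original walkthrough), the Raft peer set and the current leader can also be listed from any server:
$ docker exec -it consul1 consul operator raft list-peers
# Lists the three servers with their Raft state; one of them is shown as leader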
Service registration
Write a redis service configuration file redis-service.json
$ cat <<EOF > redis-service.json
{
  "service": [
    {
      "name": "redis",
      "tags": ["master"],
      "address": "127.0.0.1",
      "port": 6379
    }
  ]
}
EOF
Copy it to the /consul/config directory of the client container (consul4) and reload Consul
$ docker cp redis-service.json consul4:/consul/config
$ docker exec -it consul4 consul reload
Configuration reload triggered
Looking at the Web UI again, you can see that a redis entry now appears in our service list
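The registration can also be confirmed from the command line. A minimal sketch that queries the standard catalog HTTP API on the client container (consul4 matches the setup above):
$ docker exec -it consul4 curl http://127.0.0.1:8500/v1/catalog/service/redis
# Returns a JSON array describing the registered redis instance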
Health check
Modify the redis service configuration and add a health check
$ cat <<EOF > redis-service.json
{
  "service": [
    {
      "name": "redis",
      "tags": ["master"],
      "address": "172.17.0.6",
      "port": 6379,
      "check": {
        "name": "nc",
        "args": ["nc", "172.17.0.6", "6379"],
        "interval": "3s"
      }
    }
  ]
}
EOF
Reloading this time reports an error
$ docker exec -it consul4 consul reload
Error reloading: Unexpected response code: 500 (Failed reloading services: Failed to register service "redis": Scripts are disabled on this agent; to enable, configure 'enable_script_checks' or 'enable_local_script_checks' to true)
Script checks need to be enabled when the client is started
$ docker run -d --name=consul4 -e CONSUL_BIND_INTERFACE=eth0 -e CONSUL_LOCAL_CONFIG='{"enable_script_checks": true}' consul agent --server=false --client=0.0.0.0 --join 172.17.0.2
In the logs you can see that the health check is running, and the Web UI shows the check passing
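As an aside, the same check could be expressed as Consul's built-in TCP check, which does not require script checks to be enabled at all. This is an alternative service definition sketch, not the configuration used in this walkthrough:
{
  "service": [
    {
      "name": "redis",
      "tags": ["master"],
      "address": "172.17.0.6",
      "port": 6379,
      "check": {
        "name": "redis-tcp",
        "tcp": "172.17.0.6:6379",
        "interval": "3s"
      }
    }
  ]
}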
Now, if we shut down the redis service (here Docker Desktop is used to stop the redis container), we can simulate a redis failure. Take a look at the logs and the Web UI.
Consul's health check now marks the redis service as unhealthy. Querying the client's HTTP service discovery API and filtering for healthy instances returns an empty result, as expected:
$ docker exec -it consul4 curl http://127.0.0.1:8500/v1/health/service/redis\?passing\=true
[]
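For contrast, querying the same endpoint without the passing filter still returns the instance, just with its check reported as critical (a sketch, output abbreviated):
$ docker exec -it consul4 curl http://127.0.0.1:8500/v1/health/service/redis
# Returns the redis entry with the "service:redis" check in "critical" status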
Start the redis service again and compare:
$ docker exec -it consul4 curl http://127.0.0.1:8500/v1/health/service/redis\?passing\=true
[
  {
    "Node": {
      "ID": "8a73e872-cf8c-3787-1970-9f6f08c529ea",
      "Node": "5f9387a8e3af",
      "Address": "172.17.0.5",
      "Datacenter": "dc1",
      "TaggedAddresses": {"lan": "172.17.0.5", "lan_ipv4": "172.17.0.5", "wan": "172.17.0.5", "wan_ipv4": "172.17.0.5"},
      "Meta": {"consul-network-segment": ""},
      "CreateIndex": 412,
      "ModifyIndex": 412
    },
    "Service": {
      "ID": "redis",
      "Service": "redis",
      "Tags": ["master"],
      "Address": "172.17.0.6",
      "TaggedAddresses": {"lan_ipv4": {"Address": "172.17.0.6", "Port": 6379}, "wan_ipv4": {"Address": "172.17.0.6", "Port": 6379}},
      "Meta": null,
      "Port": 6379,
      "Weights": {"Passing": 1, "Warning": 1},
      "EnableTagOverride": false,
      "Proxy": {"Mode": "", "MeshGateway": {}, "Expose": {}},
      "Connect": {},
      "CreateIndex": 416,
      "ModifyIndex": 416
    },
    "Checks": [
      {"Node": "5f9387a8e3af", "CheckID": "serfHealth", "Name": "Serf Health Status", "Status": "passing", "Notes": "", "Output": "Agent alive and reachable", "ServiceID": "", "ServiceName": "", "ServiceTags": [], "Type": "", "Interval": "", "Timeout": "", "ExposedPort": 0, "Definition": {}, "CreateIndex": 413, "ModifyIndex": 413},
      {"Node": "5f9387a8e3af", "CheckID": "service:redis", "Name": "nc", "Status": "passing", "Notes": "", "Output": "", "ServiceID": "redis", "ServiceName": "redis", "ServiceTags": ["master"], "Type": "script", "Interval": "", "Timeout": "", "ExposedPort": 0, "Definition": {}, "CreateIndex": 416, "ModifyIndex": 662}
    ]
  }
]
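If a consumer only needs the address and port of a healthy instance, the JSON can be filtered, for example with jq on the host through server1's published port (this assumes jq is installed on the host; it is just an illustration):
$ curl -s "http://localhost:8900/v1/health/service/redis?passing=true" | jq -r '.[0].Service.Address, .[0].Service.Port'
# Prints 172.17.0.6 and 6379 for the data shown above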
Let’s look at other scenarios
- If another client, consul5, is started, the same service data can be queried from it as from consul4
$ docker run -d --name=consul5 -e CONSUL_BIND_INTERFACE=eth0 -e CONSUL_LOCAL_CONFIG='{"enable_script_checks": true}' consul agent --server=false --client=0.0.0.0 --join 172.17.0.2
$ docker exec -it consul5 curl http://127.0.0.1:8500/v1/health/service/redis\?passing\=true
# Returns the redis service information
- If we shut down consul1 (the leader on the server side), the remaining two servers re-elect consul2 as the new leader (a quick way to confirm this is sketched after these commands)
The redis service information is still available from consul4 and consul5
$ docker exec -it consul4 curl http://127.0.0.1:8500/v1/health/service/redis\?passing\=true
# Returns the redis service information
$ docker exec -it consul5 curl http://127.0.0.1:8500/v1/health/service/redis\?passing\=true
# Returns the redis service information
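To confirm the failover in the last scenario, the leader can be checked on one of the remaining servers; a minimal sketch (consul2 matches the setup above):
$ docker exec -it consul2 consul info | grep leader
# leader = true and leader_addr indicate that consul2 now holds leadership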
Conclusion
This section describes how to build a highly available Consul cluster with Docker, uses the logs to understand the communication among servers and clients in the cluster, and takes a redis service as an example to configure service registration and a health check and observe the actual effect.