Preface
I have attended several Java interviews recently and found that Eureka is the more popular choice in most microservice practices. Since my company chose Consul instead, here is a brief summary of it.
This is the third article in the Docker in Action series. Previous articles:
- Docker MySQL master/slave replication
- Docker redis-cluster
Why Consul?
First, Consul offers the following key features:
- Service discovery: services can be discovered and queried via DNS or HTTP.
- Health checks: any number of health checks (such as web status codes or CPU usage) can be associated with a given service.
- K/V storage: a key/value store that can hold dynamic configuration and other related information.
- Multi-datacenter: supports multiple data centers out of the box.
- Web UI: a built-in web UI lets you see at a glance how your services are running, which is very friendly for operations and maintenance.
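The service discovery and K/V features above can be exercised directly against a running agent. A minimal sketch, assuming an agent listening on the default ports on localhost and a hypothetical registered service named "web":

```shell
# List all registered services via the HTTP catalog API
# (assumes an agent at localhost:8500).
curl http://localhost:8500/v1/catalog/services

# Look up the hypothetical "web" service via Consul's DNS interface;
# the agent serves DNS on port 8600 by default.
dig @127.0.0.1 -p 8600 web.service.consul SRV

# Write and read back a value in the K/V store.
curl -X PUT -d 'max_conns=100' http://localhost:8500/v1/kv/config/db
curl http://localhost:8500/v1/kv/config/db?raw
```

These commands require the cluster built later in this article (or any local agent) to be running.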
Interviewers love to ask a hundred thousand whys, and as a programmer you should know not only what a tool is but why it was chosen. Here is a comparison of several commonly used service discovery components.
The selection of a service discovery component is mainly evaluated along the following dimensions: CAP theory, consistency algorithm, multi-data-center support, health checks, Kubernetes support, and so on.
1. CAP
When strong consistency is enforced, some nodes are locked during a data update and cannot serve requests, which reduces availability; this is the C-versus-A trade-off in CAP.
2. Consistency algorithm
The Raft algorithm divides servers into three roles: Leader, Follower, and Candidate. The Leader handles all queries and transactions and replicates transactions to the Followers. A Follower forwards all RPC queries and transactions to the Leader and only accepts transaction replication from the Leader. Data consistency is based on the data held by the Leader.
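Raft commits a write only after a majority (quorum) of servers have accepted it, which is why the docker-compose file later in this article starts three servers with -bootstrap-expect 3. A quick sketch of the quorum arithmetic:

```shell
# quorum = floor(n/2) + 1; the cluster tolerates n - quorum server failures.
for n in 1 2 3 4 5; do
  echo "$n server(s): quorum=$(( n / 2 + 1 )), tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

Note that 3 servers tolerate one failure while 4 still tolerate only one, which is why odd-sized server clusters are the usual recommendation.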
The following are several common consistency algorithms:
3. Multiple data centers
Consul synchronizes data across data centers using the Gossip protocol over the WAN; other products require additional development work to achieve this.
Note that data centers and nodes are two different concepts.
The Gossip protocol is a mature protocol from P2P networks. Its biggest benefit is that even as the number of nodes in the cluster grows, the load on each node barely increases and stays almost constant. This allows Consul to scale horizontally to clusters of thousands of nodes.
Consul agents check each other's online status via the Gossip protocol, essentially pinging one another, which spreads the heartbeat load away from the server nodes. If a node goes offline, the servers do not need to detect it themselves; other healthy nodes will discover it and broadcast the fact to the whole cluster.
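Both gossip pools can be inspected from any agent. A sketch, assuming the cluster from this article is running and using a placeholder container name:

```shell
# LAN gossip pool: every agent (server and client) in the local data center.
# <consul-container> is a placeholder for one of your container names.
docker exec -it <consul-container> consul members

# WAN gossip pool: the server nodes across all federated data centers.
docker exec -it <consul-container> consul members -wan
```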
The Consul architecture
The official documentation provides a very intuitive diagram of Consul's architecture.
Looking at Data center 1 alone, we can see that a Consul cluster consists of N servers plus M clients. Both SERVER and CLIENT are Consul nodes: services can be registered on any of them, and registration information is shared through them. Beyond these two roles there are a few finer details, covered one by one below.
CLIENT
CLIENT denotes a Consul agent running in client mode. In this mode, all services registered with the node are forwarded to a SERVER; the node itself does not persist this information.
SERVER
SERVER denotes a Consul agent running in server mode. It behaves like a client, except that all information is persisted locally, so registration data can be retained in the event of a failure.
SERVER-LEADER
Under the middle SERVER there is the word LEADER, indicating that this server is the boss of the server nodes. Unlike the other servers, it is responsible for replicating registration information to them and for monitoring the health of each node.
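The two modes differ only in the flags passed to the agent. A minimal sketch of each invocation (hostnames and paths here are illustrative, mirroring the compose file below):

```shell
# Server mode: joins the Raft quorum and persists state under -data-dir.
consul agent -server -bootstrap-expect 3 -data-dir /consul/data -client 0.0.0.0

# Client mode: stateless; forwards registrations and queries to the servers.
consul agent -join consul-server1 -data-dir /consul/data
```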
Setting up the environment with Docker
docker-compose-consul-cluster.yml
version: '3'
services:
  consul-server1:
    image: consul:latest
    hostname: "consul-server1"
    ports:
      - "8500:8500"
      - "53"
    volumes:
      - ./consul/data1:/consul/data
    command: "agent -server -bootstrap-expect 3 -ui -disable-host-node-id -client 0.0.0.0"
  consul-server2:
    image: consul:latest
    hostname: "consul-server2"
    ports:
      - "8501:8500"
      - "53"
    volumes:
      - ./consul/data2:/consul/data
    command: "agent -server -ui -join consul-server1 -disable-host-node-id -client 0.0.0.0"
    depends_on:
      - consul-server1
  consul-server3:
    image: consul:latest
    hostname: "consul-server3"
    ports:
      - "8502:8500"
      - "53"
    volumes:
      - ./consul/data3:/consul/data
    command: "agent -server -ui -join consul-server1 -disable-host-node-id -client 0.0.0.0"
    depends_on:
      - consul-server1
  consul-node1:
    image: consul:latest
    hostname: "consul-node1"
    command: "agent -join consul-server1 -disable-host-node-id"
    depends_on:
      - consul-server1
  consul-node2:
    image: consul:latest
    hostname: "consul-node2"
    command: "agent -join consul-server1 -disable-host-node-id"
    depends_on:
      - consul-server1
Run docker-compose -f docker-compose-consul-cluster.yml up -d to start the cluster, then visit http://localhost:8500
If you see the figure below, the startup was successful.
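Besides the web UI, the cluster state can be verified from the command line via the status endpoints. A sketch, assuming the compose file above is up:

```shell
# The current Raft leader; a non-empty "host:port" means an election succeeded.
curl http://localhost:8500/v1/status/leader

# All Raft peers; with this compose file there should be three entries.
curl http://localhost:8500/v1/status/peers
```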
Finally
The Docker in Action series focuses on quickly standing up learning environments; going from this Consul feature walkthrough to a production-grade configuration is still a long road. If you find any problems or mistakes, please point them out.
WeChat official account: [When I meet you]