[TOC]

The integration of Eureka as the service registry was covered earlier, along with a brief introduction to how Eureka works. Eureka follows the AP principle of the CAP theorem.

Consul installation

Download Consul from the official website.

  • Download the archive from Consul's official website and decompress it; you get a single executable named consul. Move it into a directory of your own. The file structure below makes later debugging and configuration easier.

  • conf/dev.json: our configuration file; the system defaults can also be used instead.

  • data/node-id: node information

  • log/consul-**.log: log files

  • JSON configuration can also be passed on the Consul startup command line; we keep it in dev.json so the configuration is easy to inspect.

consul agent -dev -config-dir=/data/services/consul/conf

  • This command starts Consul and prints output like the following:

==> Starting Consul agent...
           Version: 'v1.7.3'
           Node ID: 'ebe4a279-8e4e-dfc9-7f68-4652bfb27f3a'
         Node name: '...'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
      Cluster Addr: 192.168.44.131 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:

2022-03-18T18:49:58.102-0700 [INFO]  agent.server.raft: initial configuration: index=1 servers="[{Suffrage: Voter ID: ebe4a279-8e4e-dfc9-7f68-4652bfb27f3a Address: 192.168.44.131:8300}]"
2022-03-18T18:49:58.103-0700 [INFO]  agent.server.serf.wan: serf: EventMemberJoin: consul.dc1 192.168.44.131
2022-03-18T18:49:58.103-0700 [INFO]  agent.server.raft: entering follower state: follower="Node at 192.168.44.131:8300 [Follower]" leader=
2022-03-18T18:49:58.103-0700 [INFO]  agent.server.serf.lan: serf: EventMemberJoin: consul 192.168.44.131
2022-03-18T18:49:58.104-0700 [INFO]  agent: Started DNS server: address=0.0.0.0:8600 network=udp
2022-03-18T18:49:58.104-0700 [INFO]  agent.server: Adding LAN server: server="consul (Addr: tcp/192.168.44.131:8300) (DC: dc1)"
2022-03-18T18:49:58.104-0700 [INFO]  agent.server: Handled event for server in area: event=member-join server=consul.dc1 area=wan
2022-03-18T18:49:58.104-0700 [INFO]  agent: Started DNS server: address=0.0.0.0:8600 network=tcp
2022-03-18T18:49:58.107-0700 [INFO]  agent: Started HTTP server: address=[::]:8500 network=tcp
2022-03-18T18:49:58.108-0700 [INFO]  agent: Started gRPC server: address=[::]:8502 network=tcp
2022-03-18T18:49:58.109-0700 [INFO]  agent: started state syncer
==> Consul agent running!
2022-03-18T18:49:58.166-0700 [WARN]  agent.server.raft: heartbeat timeout reached, starting election: last-leader=
2022-03-18T18:49:58.166-0700 [INFO]  agent.server.raft: entering candidate state: node="Node at 192.168.44.131:8300 [Candidate]" term=2
2022-03-18T18:49:58.166-0700 [INFO]  agent.server.raft: election won: tally=1
2022-03-18T18:49:58.166-0700 [INFO]  agent.server.raft: entering leader state: leader="Node at 192.168.44.131:8300 [Leader]"
2022-03-18T18:49:58.167-0700 [INFO]  agent.server: cluster leadership acquired
2022-03-18T18:49:58.168-0700 [INFO]  agent.server: New leader elected: payload=consul
2022-03-18T18:49:58.202-0700 [INFO]  agent.server.connect: initialized primary datacenter CA with provider: provider=consul
2022-03-18T18:49:58.202-0700 [INFO]  agent.leader: started routine: routine="CA root pruning"
2022-03-18T18:49:58.202-0700 [INFO]  agent.server: Started routine: routine="CA root pruning"
2022-03-18T18:49:58.202-0700 [INFO]  agent.server: member joined, marking health alive: member=consul
2022-03-18T18:49:58.202-0700 [INFO]  agent: Synced node info

Starting in the background

nohup consul agent -dev -config-dir=/data/services/consul/conf >> /data/services/consul/log/consul.log &

  • The service is now installed and running.
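To confirm the agent is actually up, you can query Consul's HTTP API, for example the /v1/status/leader endpoint. The sketch below uses only the standard library; the base URL assumes the dev agent started above, and the helper names are my own:

```python
import json
from urllib.request import urlopen

def parse_leader(raw):
    """Split the JSON string returned by /v1/status/leader
    (e.g. '"192.168.44.131:8300"') into (host, port)."""
    addr = json.loads(raw)              # strips the JSON quoting
    host, _, port = addr.rpartition(":")
    return host, int(port)

def current_leader(base_url="http://127.0.0.1:8500"):
    """Ask the local agent who the current Raft leader is."""
    with urlopen(f"{base_url}/v1/status/leader") as resp:
        return parse_leader(resp.read().decode())
```

In dev mode the single agent elects itself, so the returned address should match the Cluster Addr printed in the startup log.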

Client Registration

  • A client here means the same thing as in Eureka: payment-provider and order-consumer are both clients from Consul's point of view. This section briefly walks through registering payment-provider; order is registered the same way.

  • First, IDEA's auto-completion shows that prefer-ip-address is disabled by default in Spring Cloud Consul. This setting affects how the service is registered, so let's look at what a default registration produces.

  • With prefer-ip-address off, the service simply registers itself as localhost. In an experimental setup everything can sit behind the same IP, but in the real world services run on different machines, and a localhost registration fails there. The service checks state shows × precisely because of localhost.

  • With it on, our real IP is registered and the service checks show √.
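Spring Cloud Consul does this registration for us, but it helps to see what the agent actually receives: a PUT to /v1/agent/service/register whose Address field is either the hostname or the concrete IP. A minimal sketch of that payload (helper name and the /actuator/health check path are my own assumptions):

```python
import json

def registration_payload(name, port, ip=None, prefer_ip=False,
                         hostname="localhost"):
    """Build the body for Consul's PUT /v1/agent/service/register.
    prefer_ip=False advertises the hostname (the failing 'localhost'
    registration described above); prefer_ip=True advertises the
    concrete IP, so checks from other hosts can pass."""
    address = ip if (prefer_ip and ip) else hostname
    return {
        "Name": name,
        "ID": f"{name}-{address}-{port}",
        "Address": address,
        "Port": port,
        # an HTTP health check Consul polls -- this drives the
        # "service checks" column in the UI
        "Check": {
            "HTTP": f"http://{address}:{port}/actuator/health",
            "Interval": "10s",
        },
    }

print(json.dumps(
    registration_payload("payment-provider", 8001,
                         ip="192.168.44.131", prefer_ip=True),
    indent=2))
```

With prefer_ip=False the Check URL would point at localhost, which Consul on another machine cannot reach, hence the × state.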

Adding the pom dependency


<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

  • We are not going to cover the basics of the framework here. If you need the source code, grab it directly from Git.


The configuration file



spring:
  cloud:
    consul:
      host: 192.168.44.131
      port: 8500
      discovery:
        service-name: ${spring.application.name}
        register: true
        prefer-ip-address: true


The startup annotation

  • Add @EnableDiscoveryClient to the startup class

The order-side call

  • We won't repeat the order module's registration here; it is the same as the payment registration above. In fact, order does not even have to register: in this demo order calls payment through Consul, so only payment needs to be registered and reachable.

  • We only need to change the root address of the call: instead of a hardcoded host and port, it is now the service name registered in Consul.
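Behind that name-based call, the discovery client asks Consul for healthy instances of the service, typically via /v1/health/service/payment-provider. The sketch below parses a trimmed-down sample of that response shape and keeps only instances whose checks all pass; the sample data and helper name are illustrative:

```python
def passing_instances(health_response):
    """From a /v1/health/service/<name> response, keep entries whose
    checks are all 'passing' and return their base URLs."""
    urls = []
    for entry in health_response:
        if all(c["Status"] == "passing" for c in entry["Checks"]):
            svc = entry["Service"]
            urls.append(f"http://{svc['Address']}:{svc['Port']}")
    return urls

# trimmed-down response shape for two payment-provider instances
sample = [
    {"Service": {"Address": "192.168.44.131", "Port": 8001},
     "Checks": [{"Status": "passing"}]},
    {"Service": {"Address": "192.168.44.132", "Port": 8001},
     "Checks": [{"Status": "critical"}]},
]
print(passing_instances(sample))  # only the healthy instance survives
```

This is also why the localhost registration from the previous section breaks the call chain: its check never reaches passing, so it is filtered out here.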

Consul Cluster Construction

  • The difference between a cluster and a single node is simply the extra cluster membership: a group of IP addresses is added so nodes can join the cluster.

"retry_join": [ 
 "x.x.x.1",
 "x.x.x.2",
 "x.x.x.3"
 ]

  • We need several machines here; to make installation easy, we use Docker.

Docker installation

Single machine installation

docker run -d -p 8500:8500 --restart=always --name=consul consul:latest agent -server -bootstrap -ui -node=1 -client='0.0.0.0'

This step can be skipped

Because Docker's default docker0 virtual NIC does not directly support static IP settings, we first create our own virtual network.

sudo docker network create --subnet=172.18.0.0/24 staticnet

Error response from daemon: Pool overlaps with other one on this address space

In this case, we just need to change the IP.

sudo docker network create --subnet=172.16.0.0/24 staticnet

Docker's default IPs

The preceding operations may damage the VM network. Therefore, I recommend that you use the default IP of Docker.


{
    "datacenter": "dc1",
    "log_level": "INFO",
    "node_name": "s_3",
    "server": true,
    "bootstrap_expect": 2,
    "bind_addr": "0.0.0.0",
    "client_addr": "0.0.0.0",
    "ui": true,
    "ports": {
        "dns": 8600,
        "http": 8500,
        "https": -1,
        "server": 8300,
        "serf_lan": 8301,
        "serf_wan": 8302
    },
    "rejoin_after_leave": true,
    "retry_join": [
        "172.18.0.5",
        "172.18.0.6",
        "172.18.0.7"
    ],
    "retry_interval": "30s",
    "reconnect_timeout": "72h"
}

  • Make three copies of the configuration above for s_1, s_2 and s_3; only node_name has to differ. Don't worry about the IPs in retry_join for now. Then start the containers with Docker.
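Since the three files differ in a single field, they can be stamped out from one template. A small sketch (the shared-settings dict is abridged from the config above; names are my own):

```python
import copy
import json

# shared settings, abridged from the cluster config above
BASE = {
    "datacenter": "dc1",
    "log_level": "INFO",
    "server": True,
    "bootstrap_expect": 2,
    "ui": True,
    "rejoin_after_leave": True,
    "retry_join": ["172.18.0.5", "172.18.0.6", "172.18.0.7"],
}

def node_config(base, node_name):
    """Copy the shared config, changing only node_name -- the one
    field that must differ between s_1, s_2 and s_3."""
    cfg = copy.deepcopy(base)
    cfg["node_name"] = node_name
    return cfg

configs = {n: node_config(BASE, n) for n in ("s_1", "s_2", "s_3")}
print(json.dumps(configs["s_1"], indent=4))
```

Dumping each dict to consul1.json, consul2.json and consul3.json gives the files mounted into the containers below.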

  • sudo docker run -p 8500:8500 -d --name consul_s1 -v /data/services/consul/docker/consul1.json:/consul/config/basic_config_1.json consul agent -config-dir /consul/config starts the first Consul node.

sudo docker run -d --name consul_s2 -v /data/services/consul/docker/consul2.json:/consul/config/basic_config_1.json consul agent -config-dir /consul/config
sudo docker run -d --name consul_s3 -v /data/services/consul/docker/consul3.json:/consul/config/basic_config_1.json consul agent -config-dir /consul/config

  • This starts s2 and s3. Note that, unlike s1, they do not bind any ports, because a physical machine can expose port 8500 only once. I expose 8500 only for demonstration; the other Consul ports would need to be exposed as required.

  • After all three containers have started, execute docker inspect consul_s1 (and likewise for the others).

  • That shows the container's IP. This IP survives a restart, but deleting and recreating the container may change it; readers can verify this themselves.
  • Then change the IPs in the JSON files to the IPs of the three containers and restart each container.

  • With that, the Docker version of the cluster is up. As for the Spring Cloud integration, it is simply a matter of pointing the same configuration at several machines instead of one; there is nothing hidden here.

Consul Operation Principle

  • Consul supports multiple data centers: clusters can communicate with each other to form a larger cluster. Within one Consul data center there are clients and servers.
  • Client: the client is stateless; it only forwards requests to a server.
  • Server: the server is the node that actually persists data and notifies other servers to synchronize it.

Servers in the data center keep each other informed about data to synchronize using a gossip (epidemic-style) protocol.
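The appeal of gossip is that information spreads exponentially: each round, every node that already has the update pushes it to a few peers. The toy simulation below is deterministic (round-robin peer choice instead of the random choice a real gossip layer uses) purely so the result is reproducible; it is an illustration of the idea, not Consul's actual Serf implementation:

```python
def gossip_rounds(n_nodes, fanout=2):
    """Deterministic sketch of epidemic dissemination: each round,
    every informed node pushes the update to `fanout` peers chosen
    by a fixed rule. Returns the number of rounds until every node
    has the data."""
    informed = {0}                      # node 0 has the new entry
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        for src in sorted(informed):    # snapshot of this round's senders
            for k in range(1, fanout + 1):
                informed = informed | {(src * fanout + k) % n_nodes}
    return rounds

for n in (1, 3, 8, 50):
    print(f"{n:3d} nodes -> {gossip_rounds(n)} rounds")
```

Doubling the cluster size adds only about one extra round, which is why gossip scales well for membership and failure detection.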