Preface

In recent years, microservice architecture has become increasingly popular for Internet applications. Microservices mainly solve the problems of monolithic applications: tight coupling between modules, limited scalability, and difficult operation and maintenance. A microservice architecture splits an application vertically by functional granularity, turning each module into an independently deployable small application (from DB to UI) built around services and components. Microservices interact with each other through service APIs, and a single service can run one or more instances at the same time to support horizontal scaling, better performance, and higher availability. At run time, each instance is usually a cloud virtual machine or a Docker container.

How do the instances of multiple services communicate within a microservice system? How do they become aware of each other's creation and destruction? How does a consumer service learn the address of a producer service? How are services decoupled from each other? This calls for a third-party service registry that manages the registration of producer service nodes and the discovery of those nodes by consumer services.


Main text

1. Service discovery and registration

1.1. Specific process

  • Service registry: the core of the entire architecture. It should support distributed and persistent storage and notify consumers of registration changes in real time.
  • Service provider: services are deployed as Docker containers (so service ports are generated dynamically) and can be managed with docker-compose. Registrator detects the Docker process information and completes service registration automatically.
  • Service consumer: uses the services offered by service providers, whose addresses tend to change dynamically.

A relatively complete service registration and discovery process is as follows:

  1. Register the service: the service provider registers with the registry;
  2. Subscribe to the service: a service consumer subscribes to service information from the registry and listens for changes;
  3. Cache the service list: the consumer caches the service list locally to reduce network traffic with the registry;
  4. Call the service: look up the service address in the local cache first, pull it from the registry if it is missing, and then send the request to the service (see the sketch after this list);
  5. Change notification: when a service node changes (is added, removed, etc.), the registry notifies the listening consumers to update their service information.
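As a concrete sketch of steps 1 and 4, the commands below use Consul's HTTP API (Consul is introduced later in this article) from the command line. The service name web and the address/port values are made up for illustration, and a Consul agent is assumed to be listening on 127.0.0.1:8500:

    # Register a service named "web" with the local agent (step 1)
    curl -X PUT http://127.0.0.1:8500/v1/agent/service/register \
         -d '{"Name": "web", "Address": "172.17.0.10", "Port": 8080}'

    # A consumer asks the registry for healthy instances of "web" (step 4),
    # caches the returned address list locally, and then calls the service
    curl http://127.0.0.1:8500/v1/health/service/web?passing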

1.2. Related components

A service discovery system consists of three parts:

  1. Registrator: Registers/unregisters a service based on the running status of the service. The main problem to solve is when to initiate the register/unregister action.
  2. Registry: Stores service information. Common solutions are ZooKeeper, etcd, Consul, etc.
  3. Discovery mechanism: Reads the service information from the registry and encapsulates the access interface for the user.

1.3. Third party implementation

For the implementation of third-party service registration and discovery, there are three main tools available:

  1. Zookeeper: a high-performance, distributed application coordination service for name services, distributed locking, shared resource synchronization, and distributed configuration management.
  2. Etcd: a key/value store that uses the HTTP protocol. It is mainly used for shared configuration and service discovery, and it offers simpler functionality than ZooKeeper and Consul.
  3. Consul: a distributed, highly available service discovery and configuration sharing software that supports service discovery and registration, multi-data centers, health checks, and distributed key/value stores.

A simple comparison:

Unlike ZooKeeper and etcd, Consul ships with an embedded service discovery system. Instead of building your own or relying on a third-party system, clients simply register their services and perform service discovery through the DNS or HTTP interface.

2. Consul and Registrator

2.1. Consul

What is Consul

Consul is a distributed, highly available, horizontally scalable service registration and discovery tool. It provides the following features:

  • Service discovery: Consul makes service registration and discovery easy through its DNS or HTTP interface. External services, such as those offered by a SaaS provider, can be registered as well (a query sketch follows this list);
  • Health check: health checking lets Consul quickly alert operators to problems in the cluster. Integrated with service discovery, it keeps traffic from being forwarded to failed services;
  • Key/value storage: a store for dynamic configuration in the system. It exposes a simple HTTP interface and can be operated from anywhere;
  • Multi-data-center support: multiple data centers help avoid a single point of failure; services on internal and external networks listen on different ports. Deployment has to account for network latency, sharding, and so on. ZooKeeper and etcd do not offer multi-data-center support;
  • Consistency algorithm: Consul uses the Raft consensus protocol, which is easier to work with than Paxos. It uses the Gossip protocol to manage membership and broadcast messages, and it supports ACL-based access control;
  • Service management dashboard: a web UI for managing service registration and health monitoring.
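A minimal command-line sketch of the first and third features, assuming a Consul agent reachable on localhost with the default ports (8500 for HTTP, 8600 for DNS) and a hypothetical service named web:

    # Service discovery over DNS (Consul's DNS interface listens on 8600 by default)
    dig @127.0.0.1 -p 8600 web.service.consul SRV

    # Service discovery over HTTP
    curl http://127.0.0.1:8500/v1/catalog/service/web

    # Key/value storage: write and then read back a configuration entry
    curl -X PUT -d 'true' http://127.0.0.1:8500/v1/kv/feature/enable-cache
    curl http://127.0.0.1:8500/v1/kv/feature/enable-cache?raw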

Several Consul concepts

The following is the architecture design diagram provided in Consul’s official document:

The figure contains two Consul data centers, each of which is a Consul cluster. In data center 1 you can see that the cluster consists of N SERVER nodes plus M CLIENT nodes. Both SERVERs and CLIENTs are nodes of the Consul cluster; services can register with any of them, and registration information is shared through these nodes. Beyond these two roles there are a few more details.

  • CLIENT

CLIENT means the Consul node runs in client mode. In this mode, every service registered with the current node is forwarded to a SERVER node, and the information is not persisted locally.

  • SERVER

SERVER means the Consul node runs in server mode. Functionally it behaves like the CLIENT; the only difference is that all information is persisted locally, so it is retained in the event of a failure.

  • SERVER-LEADER

The SERVER in the middle carries the LEADER label, meaning this server node is the leader of the others. Unlike the other servers, it is responsible for synchronizing registration information to them and for monitoring the health of each node.

  • Other information

Other information includes how the nodes communicate with each other, along with the protocols and algorithms used to solve cluster-wide problems such as data synchronization between nodes and real-time requirements. Readers who are interested can consult the official documentation.

2.2. Registrator overview

Registrator is an automatic service registration/deregistration component that is independent of the service registry. It is typically deployed as a Docker Container. Registrator automatically detects the status (enabled/destroyed) of all Docker containers on the host and registers/deregisters services based on the container status to the corresponding service registry.

Specifically, Registrator reads environment variables from the other containers on the same host to determine the service registration, health check definitions, and so on.

Registrator supports pluggable service registry configuration and currently supports Consul, ETCD and SkyDNS 2.
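As a rough sketch of how this looks with the Consul backend: when a container starts, Registrator reads its SERVICE_* environment variables (such as SERVICE_NAME and SERVICE_TAGS) and registers the published ports accordingly. The image and values below are placeholders for illustration:

    # Start an nginx container with randomly published ports (-P);
    # Registrator on the same host picks up the SERVICE_* variables
    # and registers the service in Consul with this name and these tags
    docker run -d -P \
        -e "SERVICE_NAME=web" \
        -e "SERVICE_TAGS=frontend,v1" \
        nginx:latest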

2.3. Consul cluster planning and configuration

2.3.1. Cluster Node planning

I used an Ubuntu 16.04 VM locally:

Container name | Container IP | Mapped ports (host -> container) | Host IP | Run mode
-------------- | ------------ | -------------------------------- | --------------- | --------
node1 | 172.17.0.2 | 8500 -> 8500 | 192.168.127.128 | Server (Master)
node2 | 172.17.0.3 | 9500 -> 8500 | 192.168.127.128 | Server
node3 | 172.17.0.4 | 10500 -> 8500 | 192.168.127.128 | Server
node4 | 172.17.0.5 | 11500 -> 8500 | 192.168.127.128 | Client

2.3.2. Consul configuration parameters

Consul configuration parameters are described as follows:

Parameter | Description and usage
--------- | ----------------------
advertise | The advertise address, used to change the address we advertise to the other nodes in the cluster. By default the -bind address is advertised
bootstrap | Controls whether the server runs in bootstrap mode. Only one server per data center may run in bootstrap mode; a server in bootstrap mode can elect itself as the Raft leader
bootstrap-expect | The expected number of server nodes in a data center. When provided, Consul waits until the specified number of servers is available before bootstrapping the cluster. Cannot be used together with bootstrap
bind | The address used for internal cluster communication; it must be reachable by every other node in the cluster. Defaults to 0.0.0.0
client | The client address Consul binds to, serving HTTP, DNS, and RPC. Defaults to 127.0.0.1
config-file | Explicitly specifies a configuration file to load
config-dir | A configuration directory; every file in it ending with .json is loaded
data-dir | The directory where the agent stores its state. Every agent needs this directory; it must be stable and survive system restarts
dc | The name of the data center the agent runs in. Defaults to dc1
encrypt | The secret key used to encrypt Consul's network traffic. It can be generated with consul keygen; all nodes in a cluster must use the same key
join | The address of an already started agent to join; multiple addresses may be specified. If Consul cannot join any of the given addresses, the agent fails to start. By default the agent joins no node on startup
retry-interval | The interval between join attempts. Defaults to 30s
retry-max | The number of join attempts. Defaults to 0, meaning retry indefinitely
log-level | The log level shown after the Consul agent starts. Defaults to INFO; the options are TRACE, DEBUG, INFO, WARN, and ERR
node | The node's name in the cluster, which must be unique within the cluster. Defaults to the node's hostname
protocol | The protocol version Consul uses
rejoin | Makes Consul ignore a previous leave and attempt to rejoin the cluster after a restart
server | Runs the agent in server mode. Each cluster needs at least one server; no more than five servers per cluster is recommended
syslog | Outputs logs to syslog. Only effective on Linux/macOS
pid-file | The path where the agent stores its PID, which can be used to send SIGINT/SIGHUP (stop/reload) signals to the agent

2.4. Installing the Consul cluster with Docker

2.4.1. Pull the official consul image

madison@ubuntu:~$ docker pull consul:latest

2.4.2. Start the Server node

Run the consul image and start the Server Master node node1:

node1:

madison@ubuntu:~$ docker run -d --name=node1 --restart=always \
             -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
             -p 8300:8300 \
             -p 8301:8301 \
             -p 8301:8301/udp \
             -p 8302:8302/udp \
             -p 8302:8302 \
             -p 8400:8400 \
             -p 8500:8500 \
             -p 8600:8600 \
             -h node1 \
             consul agent -server -bind=172.17.0.2 -bootstrap-expect=3 -node=node1 \
             -data-dir=/tmp/data-dir -client 0.0.0.0 -ui

View node1's logs to check its status:
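One way to follow the container logs is docker logs:

    madison@ubuntu:~$ docker logs -f node1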

At this point no leader has been elected in the cluster. Continue by starting the other two Server nodes, node2 and node3:

node2:

madison@ubuntu:~$ docker run -d --name=node2 --restart=always \
             -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
             -p 9300:8300 \
             -p 9301:8301 \
             -p 9301:8301/udp \
             -p 9302:8302/udp \
             -p 9302:8302 \
             -p 9400:8400 \
             -p 9500:8500 \
             -p 9600:8600 \
             -h node2 \
             consul agent -server -bind=172.17.0.3 \
             -join=192.168.127.128 -node-id=$(uuidgen | awk '{print tolower($0)}') \
             -node=node2 \
             -data-dir=/tmp/data-dir -client 0.0.0.0 -ui

View the process startup logs of node2.

node3:

madison@ubuntu:~$ docker run -d --name=node3 --restart=always \
             -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
             -p 10300:8300 \
             -p 10301:8301 \
             -p 10301:8301/udp \
             -p 10302:8302/udp \
             -p 10302:8302 \
             -p 10400:8400 \
             -p 10500:8500 \
             -p 10600:8600 \
             -h node3 \
             consul agent -server -bind=172.17.0.4 \
             -join=192.168.127.128 -node-id=$(uuidgen | awk '{print tolower($0)}') \
             -node=node3 \
             -data-dir=/tmp/data-dir -client 0.0.0.0 -ui

View the process startup logs of node3.

Once all three Server nodes are up and running, the logs of node2 and node3 show that node1 has been elected as the leader node, that is, the Server Master of the data center.

View the process startup logs of node1 again:

The logs show that node2 and node3 have successfully joined data center DC1, where node1 resides. Once the three Consul servers in the cluster are up, node1 is elected as the leader of DC1 and then continuously health-checks node2 and node3 via heartbeats.
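To confirm the election result from the command line rather than from the logs, Consul's operator subcommand can list the Raft peers and show which one is the leader (a quick check, run against any server node):

    madison@ubuntu:~$ docker exec -t node1 consul operator raft list-peers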

2.4.4. Start the Client node

node4:

madison@ubuntu:~$ docker run -d --name=node4  --restart=always \
            -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' \
            -p 11300:8300 \
            -p 11301:8301 \
            -p 11301:8301/udp \
            -p 11302:8302/udp \
            -p 11302:8302 \
            -p 11400:8400 \
            -p 11500:8500 \
            -p 11600:8600 \
            -h node4 \
            consul agent -bind=172.17.0.5 -retry-join=192.168.127.128 \
            -node-id=$(uuidgen | awk '{print tolower($0)}') \
            -node=node4 -client 0.0.0.0 -ui

View the process startup logs of node4.

node4 runs in Client mode. After it starts, the Server-mode nodes node1, node2, and node3 in data center DC1 are added to its local cache list. When a client sends a service discovery request to node4, node4 forwards the request via RPC to one of the Server nodes for processing.
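This can be observed indirectly by querying node4's HTTP port, which the node plan above maps to 11500 on the host; node4 stores nothing itself and answers by forwarding the request to a Server node (a sketch, assuming the host IP used throughout this article):

    # Ask the Client node for the list of cluster nodes;
    # the answer is produced by a Server node reached over RPC
    curl http://192.168.127.128:11500/v1/catalog/nodes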

2.4.5. View the cluster status

madison@ubuntu:~$ docker exec -t node1 consul members

node1, node2, node3, and node4 in data center DC1 have all started successfully; the Status column shows their status as alive. node1, node2, and node3 run in Server mode, while node4 runs in Client mode.

2.5. Installing Registrator with Docker

2.5.1. Pull the Registrator image

madison@ubuntu:~$ docker pull gliderlabs/registrator:latest

2.5.2. Start the Registrator node

madison@ubuntu:~$ docker run -d --name=registrator \
             -v /var/run/docker.sock:/tmp/docker.sock \
             --net=host \
             gliderlabs/registrator -ip="192.168.127.128" \
             consul://192.168.127.128:8500

--net=host runs the container in host network mode. -ip specifies the host's IP address, which is used as the communication address for health checks. consul://192.168.127.128:8500 tells Registrator to use Consul as the service registry and gives the exact Consul address for service registration and deregistration (note: 8500 is the HTTP port Consul exposes).

View the Registrator’s container process startup log:

Registrator performs the following steps during startup:

  1. Locate the Leader node of the Consul data center to serve as the service registry;
  2. Synchronize the containers currently running on the host and all of their service ports;
  3. Register the service addresses/ports published by each container with Consul's service registry.

2.5.3. Check the registration status of Consul

Consul provides a Web UI for visualizing the service registration list, communication nodes, data centers, key/value store, and more; it is reachable directly on port 8500 of the host machine.
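Besides the Web UI, the same information can be pulled from Consul's HTTP API, which is convenient for scripting (a sketch, using the host address from this article's setup):

    # List all registered services and their tags
    curl http://192.168.127.128:8500/v1/catalog/services

    # List all nodes in the data center
    curl http://192.168.127.128:8500/v1/catalog/nodes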

Service Registration List:

All Consul nodes in the DC1 data center, including both Server and Client nodes, appear under NODES.

List of communication nodes:

After Registrator is enabled, every container on the host has its services registered under Consul's SERVICES, and the test is complete!


Conclusion

The Consul cluster setup for a single data center is complete! In the following sections I will show how to tag service registrations with Registrator, then deploy multi-instance web containers with Docker to register HTTP-based RESTful services and TCP-based RPC services, define their health checks, and demonstrate how to label multiple instances of a service.


Welcome to follow the technical WeChat official account: Zero One Technology Stack

This account will continue to share learning materials and articles on back-end technologies, including virtual machine basics, multithreaded programming, high-performance frameworks, asynchronous programming, caching and messaging middleware, distributed systems and microservices, and architecture learning and advancement.