Consul Getting Started: The Basics

  • Website: www.consul.io
  • Source: github.com/hashicorp/c…
  • Documents: www.consul.io/docs/agent/…

1. Introduction

1.1 Overview

Consul is an open-source tool from HashiCorp, written in Go, for service discovery and configuration in distributed systems. Consul has a built-in service registration and discovery framework, a distributed consistency protocol implementation, health checking, Key/Value storage, and a multi-data-center solution, eliminating the need to rely on other tools (such as ZooKeeper). Consul is distributed, highly available, and scalable.

  • Service discovery: Consul provides a DNS or HTTP interface for registering and discovering services. Applications can easily find the services they depend on.
  • Health check: Consul’s client can check whether a service or current node is in a healthy state in a number of ways, such as checking whether a service returns 200 OK or whether the memory usage of the client’s currently deployed machine is below 80%. Health checks can be used to avoid traffic being forwarded to faulty services.
  • Key/Value storage: Applications can use the Key/Value storage provided by Consul based on their own needs. Consul provides an easy-to-use HTTP interface that, in combination with other tools, enables dynamic configuration, feature marking, leader election, and more.
  • Multi-data Center: Consul supports out-of-the-box multi-data centers. This means that users don’t need to worry about building additional layers of abstraction to scale their business across multiple regions.

Service registration and Discovery:

  • Service registration: The process by which a service registers its location information with the central registry node. The service typically registers its host IP address and port number, and sometimes has authentication information for access to the service, protocol used, version number, and details about the environment.
  • Service discovery: Enables an application or component to discover information about its operating environment and other applications or components. Users can configure a service discovery tool to separate the actual container from the running configuration. Common configuration information includes IP address, port number, and name.

Traditionally, when a service runs on multiple host nodes, static configuration is used to register the service information. In complex systems that must scale and where services are frequently replaced, dynamic service registration and discovery is important for avoiding service outages. There are many components for service registration and discovery, such as ZooKeeper and etcd; they can be used both for coordination between services and for registering services.

1.2 Features

  • Consul vs. ZooKeeper, doozerd etcd www.consul.io/intro/vs/zo…

HashiCorp has published an official article comparing Consul with ZooKeeper, doozerd, and etcd.

| Feature | Consul | ZooKeeper | etcd | Eureka |
| --- | --- | --- | --- | --- |
| Service health check | Service status, memory, disk, etc. | (weak) long connection, keepalive | Connection heartbeat | Configurable |
| Multi-data center | Supported | — | — | — |
| KV storage | Supported | Supported | Supported | — |
| Consistency | Raft | Paxos | Raft | — |
| CAP | CA | CP | CP | AP |
| Access interface | HTTP and DNS | Client library | HTTP/gRPC | HTTP (sidecar) |
| Watch support | Full/long polling | Supported | Long polling | Long polling |
| Own monitoring | Metrics | — | Metrics | Metrics |
| Security | ACL / HTTPS | ACL | HTTPS | Supported (weak) |
| Spring integration | Supported | Supported | Supported | Supported |

1.3 Consul Terminology

| Term | Description |
| --- | --- |
| Agent | A daemon that runs on every member of a Consul cluster, started with `consul agent`. The agent can run in client or server mode. Specifying a node as a client or a server is very simple. All agents can serve the DNS and HTTP interfaces and are responsible for running checks and keeping services in sync. |
| Client | An agent that forwards all RPCs to a server. The client is relatively stateless; the only background activity it performs is joining the LAN gossip pool, which has minimal resource overhead and consumes only a small amount of network bandwidth. |
| Server | An agent with an extended set of responsibilities: participating in Raft elections, maintaining cluster state, responding to RPC queries, interacting with other data centers, and forwarding queries to the leader or to a remote data center. |
| Data center | While the definition of a data center seems obvious, there are fine details to consider. For example, do multiple availability zones in EC2 constitute one data center? We define a data center as a private, low-latency, high-bandwidth network environment. This excludes communication over the public Internet, but multiple availability zones within the same EC2 region can be considered part of a single data center. |
| Consensus | In this documentation, consensus means agreement on leader election and on the order of transactions. Since these transactions are applied to a finite-state machine, consensus implies the consistency of the replicated state machine. |
| Gossip | Consul builds on Serf, which provides a complete gossip protocol used for multiple purposes. Serf provides membership, failure detection, and event broadcasting; more information is in the gossip documentation. It is enough to know that gossip uses random point-to-point communication over UDP. |
| LAN Gossip | The gossip pool containing all nodes in the same LAN or data center. |
| WAN Gossip | The gossip pool containing only servers. These servers are distributed across data centers and usually communicate over the Internet or a WAN. |
| RPC | Remote procedure call: a request/response mechanism that allows a client to make requests of a server. |

1.4 Consul Ports

Consul requires up to six different ports to work properly, some using TCP, UDP or both. The main ports are described as follows:

| Use | Default port | Description |
| --- | --- | --- |
| Server RPC | 8300 | Used by servers to handle incoming requests from other agents. TCP only. |
| Serf LAN | 8301 | Handles gossip in the LAN. Required by all agents. TCP and UDP. |
| Serf WAN | 8302 | Used by servers to gossip with other servers over the WAN. TCP and UDP. |
| HTTP API | 8500 | Used by clients to talk to the HTTP API. TCP only. |
| DNS interface | 8600 | Used to resolve DNS queries. TCP and UDP. |

2. Consul Architecture

2.1 Consul Node Modes

Consul is divided into Client and Server nodes (all nodes are also called Agents).

Consul node mode:

  • Client: the client mode of a Consul node. A node in client mode forwards all services registered with it to a server and does not persist this information itself.
  • Server: the server mode of a Consul node. A server behaves like a client, except that it persists all information locally so that the information is retained in the event of a fault.
  • Server-Leader: the server that is the leader. Unlike the other servers, it is responsible for synchronizing registered information to the other servers and for monitoring the health of each node.

2.2 Consul Architecture

Consul supports multiple data centers. In the architecture diagram, there are two data centers connected over the Internet. Note that, for efficiency, only Server nodes participate in cross-data-center communication.

In a data center, Consul is divided into Client and Server nodes (all nodes are also called Agents). The Server node stores data, while the Client performs health checks and forwards data requests to the Server. A Server node has a Leader and multiple followers. The Leader node synchronizes data to the followers. Although Consul can run on one server, it is recommended to use three to five to avoid data loss in the event of a failure. You are advised to configure one server cluster for each data center.
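The three-to-five-server recommendation follows from Raft quorum arithmetic: a cluster of n servers stays writable only while a majority (n/2 + 1) of them are alive. A quick sketch in plain shell arithmetic (illustrative only, not Consul output):

```shell
# Raft quorum: a cluster of n servers tolerates n - (n/2 + 1) failures
for n in 1 2 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "servers=$n quorum=$quorum failures_tolerated=$(( n - quorum ))"
done
# servers=3 quorum=2 failures_tolerated=1
# servers=5 quorum=3 failures_tolerated=2
```

A single server has a quorum of one and tolerates no failures, which is why a lone server risks data loss.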

Consul nodes in a cluster use the gossip protocol to maintain membership, meaning that a node knows which other nodes are in the cluster and whether they are clients or servers. The gossip protocol within a single data center communicates over both TCP and UDP, on port 8301. The gossip protocol across data centers also uses both TCP and UDP, on port 8302.

Read and write requests for cluster data can either be sent directly to a Server or be forwarded to a Server by a Client via RPC; the request ultimately reaches the Leader. If slightly stale data is acceptable, read requests can also be served by an ordinary Server node. Cluster data is read, written, and replicated over TCP port 8300.

3. Installation

  • Learn.hashicorp.com/consul/gett…

3.1 Environment Planning

role IP The operating system role
consul-1 192.124.64.212 centos6.4 consul-server
consul-2 192.124.64.213 centos6.4 consul-server
consul-3 192.124.64.214 centos6.4 consul-server

3.2 Installing Software

  • www.consul.io/downloads.h…

Consul is written in Go and the installation package is just a single executable, making it easy to deploy and to integrate seamlessly with Docker containers.

1) Installation on Linux:

Installing Consul on Linux is easy: just go to the Consul website, download the binary, and extract it to the appropriate location.

# Execute on node1, node2, node3
$wget 'https://releases.hashicorp.com/consul/1.6.2/consul_1.6.2_linux_amd64.zip'
$mkdir -p /usr/local/consul/bin
$unzip consul_1.6.2_linux_amd64.zip -d /usr/local/consul/bin

$echo 'export PATH=$PATH:/usr/local/consul/bin' >> /etc/profile
$source /etc/profile

$echo $PATH
$which consul
/usr/local/consul/bin/consul

$consul --version
Consul v1.6.2

# help
$consul agent -h
Usage: consul [--version] [--help] <command> [<args>]

Available commands are:
    acl            Interact with Consul's ACLs
    agent          Runs a Consul agent
    catalog        Interact with the catalog
    config         Interact with Consul's Centralized Configurations
    connect        Interact with Consul Connect
    debug          Records a debugging archive for operators
    event          Fire a new event
    exec           Executes a command on Consul nodes
    force-leave    Forces a member of the cluster to enter the "left" state
    info           Provides debugging information for operators.
    intention      Interact with Connect service intentions
    join           Tell Consul agent to join cluster
    keygen         Generates a new encryption key
    keyring        Manages gossip layer encryption keys
    kv             Interact with the key-value store
    leave          Gracefully leaves the Consul cluster and shuts down
    lock           Execute a command holding a lock
    login          Login to Consul using an auth method
    logout         Destroy a Consul token created with login
    maint          Controls node or service maintenance mode
    members        Lists the members of a Consul cluster
    monitor        Stream logs from a Consul agent
    operator       Provides cluster-level tools for Consul operators
    reload         Triggers the agent to reload configuration files
    rtt            Estimates network round trip time between nodes
    services       Interact with services
    snapshot       Saves, restores and inspects snapshots of Consul server state
    tls            Builtin helpers for creating CAs and certificates
    validate       Validate config files/directories
    version        Prints the Consul version
    watch          Watch for changes in Consul

2) Installation with Docker:

Installation is also easy in a Docker environment.

# Pull the image
docker pull consul:1.6.2

# Start the first Server node, map container port 8500 to host port 8500, and enable the web UI
docker run -d --name=consul1 -p 8500:8500 -e CONSUL_BIND_INTERFACE=eth0 consul agent -server=true -bootstrap-expect=3 -client=0.0.0.0 -ui

# Start the second Server node and join the cluster
docker run -d --name=consul2 -e CONSUL_BIND_INTERFACE=eth0 consul agent -server=true -client=0.0.0.0 -join 172.17.0.2

# Start the third Server node and join the cluster
docker run -d --name=consul3 -e CONSUL_BIND_INTERFACE=eth0 consul agent -server=true -client=0.0.0.0 -join 172.17.0.2

# Start a Client node and join the cluster
docker run -d --name=consul4 -e CONSUL_BIND_INTERFACE=eth0 consul agent -server=false -client=0.0.0.0 -join 172.17.0.2

3.3 Configuration Files

  • Configuration options: https://www.consul.io/docs/agent/options.html
  • www.consul.io/docs/agent/…

Create directories and write configuration files on node1,node2, and node3.

Create a directory:

$mkdir -p /data1/consul/{data,conf,logs}
$tree
.
├── conf
├── data
└── logs

Writing configuration files:

Write the configuration file on node1. The configuration files of the other nodes are the same as node1's, except that node_name and advertise_addr must be changed to the local values. The default ports are 8500, 8600, 8400, 8301, 8302, and 8300.

# Generate the gossip encryption key. All nodes must use the same key.
$consul keygen
9gvOvcmiZW8XzRot6MH22Rf6vW/neOo0LcNNzNtf2nw=

# Write the configuration file
$vim /data1/consul/conf/config.json
{
    "bootstrap_expect": 3,
    "server": true,
    "datacenter": "data-center01",
    "node_name": "consul-node01",
    "data_dir": "/data1/consul/data",
    "client_addr": "0.0.0.0",
    "ports": {
        "http": 8500,
        "https": 8501,
        "dns": 8600,
        "grpc": 8400,
        "serf_lan": 8301,
        "serf_wan": 8302,
        "server": 8300
    },
    "advertise_addr": "192.124.64.212",
    "ui": true,
    "encrypt": "9gvOvcmiZW8XzRot6MH22Rf6vW/neOo0LcNNzNtf2nw=",
    "log_level": "INFO",
    "log_file": "/data1/consul/logs",
    "enable_syslog": false,
    "start_join": ["192.124.64.212", "192.124.64.213", "192.124.64.214"]
}

# Write the configuration files for node2 and node3 in the same way, changing node_name and advertise_addr to the local values.

Consul's configuration options are documented under Configuration; some of them are described below:

# Common options:
-advertise: the address advertised to other nodes in the cluster; by default the -bind address is used.
-bootstrap: controls whether the server is in bootstrap mode. Only one server per datacenter may be in bootstrap mode; a bootstrap server can elect itself raft leader.
-bootstrap-expect: the expected number of server nodes in the datacenter. Consul waits until the specified number of servers is available before bootstrapping the cluster. Cannot be used together with -bootstrap.
-bind: the address used for communication within the cluster. All nodes in the cluster must be able to reach this address. Defaults to 0.0.0.0.
-client: the client address Consul binds to, serving HTTP, DNS, and RPC. Defaults to 127.0.0.1.
-config-file: a configuration file to load.
-config-dir: a directory of configuration files to load.
-data-dir: a directory for storing agent state. Every agent needs this directory; it must be stable and survive a system restart.
-dc: the name of the datacenter the agent runs in. Defaults to dc1.
-encrypt: the gossip encryption key, which can be generated with consul keygen. All nodes in a cluster must use the same key.
-join: the address of an already-running agent to join; multiple addresses may be specified. If Consul cannot join any of the specified addresses, the agent fails to start. By default no node is joined at startup.
-retry-join: like -join, but allows retrying after the first failure.
-retry-interval: the interval between join attempts. Defaults to 30s.
-retry-max: the maximum number of join attempts; defaults to unlimited.
-log-level: the log level: trace, debug, info, warn, or err. Defaults to info.
-node: the node name within the cluster; must be unique per cluster. Defaults to the hostname.
-server: run the agent in server mode. Each cluster has at least one server; it is recommended that the number of servers per cluster not exceed five.
-pid-file: the path for storing the PID file, which can be used to send SIGINT/SIGHUP (close/reload) to the agent.
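As a sketch of how the flags above map to config-file keys (using this article's example values; the flag-to-key mapping assumed here is -bind → bind_addr, -node → node_name, -dc → datacenter):

```shell
# A minimal server config roughly equivalent to:
#   consul agent -server -bootstrap-expect=3 -dc=data-center01 \
#     -node=consul-node01 -bind=192.124.64.212 -client=0.0.0.0 -data-dir=/data1/consul/data
cat > /tmp/consul-min.json <<'EOF'
{
  "server": true,
  "bootstrap_expect": 3,
  "datacenter": "data-center01",
  "node_name": "consul-node01",
  "bind_addr": "192.124.64.212",
  "client_addr": "0.0.0.0",
  "data_dir": "/data1/consul/data"
}
EOF
# Check that the file is well-formed JSON before starting the agent
python -m json.tool < /tmp/consul-min.json > /dev/null && echo "config OK"
```

Validating the JSON up front avoids the agent refusing to start on a malformed config file.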

3.4 Starting the Service

Startup:

Using bootstrap-expect to bootstrap the cluster automatically is recommended. Run the startup command on each node and watch the logs: the leader is elected only after all three servers have started.

Start the Consul agent. The agent can run in server or client mode. Each data center must have at least one server; three or five servers per cluster are recommended. Deploying a single server risks data loss in the event of a failure.

If you do not write a configuration file, the startup commands look like the following:

# Test environment
consul agent -dev

# Single-node scenario
consul agent -server -bootstrap-expect 1 -datacenter=DC01 -node=s1 -ui -client=0.0.0.0 -data-dir=/data1/consul/data -log-file=/data1/consul/logs

# Three-node scenario
consul agent -server -node=s1 -bind=192.124.64.212 -ui-dir ./consul_ui/ -rejoin -config-dir=/etc/consul.d/ -client 0.0.0.0 -advertise=192.124.64.212

# Run the startup command on node1, node2, node3
cd /data1/consul/
cat /data1/consul/conf/config.json
nohup consul agent -server -config-dir=/data1/consul/conf/ >/data1/consul/logs/consul.log 2>&1 &


Startup log:

bootstrap_expect > 0: expecting 3 servers
==> Starting Consul agent...
           Version: 'v1.6.2'
           Node ID: '4135abe9-2a6a-7a99-913d-b9718c53c116'
         Node name: 'consul-node01'
        Datacenter: 'data-center01' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
      Cluster Addr: 192.124.64.212 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: true, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:

    2020/01/07 16:01:13 [INFO]  raft: Initial configuration (index=0): []
    2020/01/07 16:01:13 [INFO]  raft: Node at 192.124.64.212:8300 [Follower] entering Follower state (Leader: "")
    2020/01/07 16:01:13 [INFO] serf: EventMemberJoin: consul-node01.data-center01 192.124.64.212
==> Consul agent running!
    2020/01/07 16:01:20 [ERR] agent: failed to sync remote state: No cluster leader
    2020/01/07 16:01:21 [WARN] raft: no known peers, aborting election
    2020/01/07 16:01:23 [INFO] serf: EventMemberJoin: consul-node03 192.124.64.214
    2020/01/07 16:01:23 [INFO] consul: Adding LAN server consul-node03 (Addr: tcp/192.124.64.214:8300) (DC: data-center01)
    2020/01/07 16:01:23 [INFO] serf: EventMemberJoin: consul-node03.data-center01 192.124.64.214
    2020/01/07 16:01:23 [INFO] consul: Handled member-join event for server "consul-node03.data-center01" in area "wan"
    2020/01/07 16:01:23 [INFO] consul: Found expected number of peers, attempting bootstrap: 192.124.64.212:8300,192.124.64.213:8300,192.124.64.214:8300
    2020/01/07 16:01:29 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2020/01/07 16:01:29 [INFO]  raft: Node at 192.124.64.212:8300 [Candidate] entering Candidate state in term 2
    2020/01/07 16:01:29 [INFO] raft: Election won. Tally: 2
    2020/01/07 16:01:29 [INFO]  raft: Node at 192.124.64.212:8300 [Leader] entering Leader state
    2020/01/07 16:01:29 [INFO] raft: Added peer 767572c5-577c-8cab-5038-5d871b1a5498, starting replication
    2020/01/07 16:01:29 [INFO] raft: Added peer e0accec7-3f08-bd85-c761-1b021eb3714d, starting replication
    2020/01/07 16:01:29 [INFO] consul: cluster leadership acquired
    2020/01/07 16:01:29 [INFO] consul: New leader elected: consul-node01
    2020/01/07 16:01:29 [INFO] raft: pipelining replication to peer {Voter 767572c5-577c-8cab-5038-5d871b1a5498 192.124.64.213:8300}
    2020/01/07 16:01:29 [WARN] raft: AppendEntries to {Voter e0accec7-3f08-bd85-c761-1b021eb3714d 192.124.64.214:8300} rejected, sending older logs (next: 1)
    2020/01/07 16:01:29 [INFO] consul: member 'consul-node01' joined, marking health alive
    2020/01/07 16:01:29 [INFO]  raft: pipelining replication to peer {Voter e0accec7-3f08-bd85-c761-1b021eb3714d 192.124.64.214:8300}
    2020/01/07 16:01:29 [INFO] consul: member 'consul-node02' joined, marking health alive
    2020/01/07 16:01:29 [INFO] consul: member 'consul-node03' joined, marking health alive
    2020/01/07 16:01:31 [INFO] agent: Synced node info

Stop service:

ps aux|grep consul
kill <pid>    # or stop gracefully with: consul leave

3.5 Checking Services

Check service status:

# Check cluster status
$consul info

# Check member information
$consul members
Node           Address              Status  Type    Build  Protocol  DC             Segment
consul-node01  192.124.64.212:8301  alive   server  1.6.2  2         data-center01  <all>
consul-node02  192.124.64.213:8301  alive   server  1.6.2  2         data-center01  <all>
consul-node03  192.124.64.214:8301  alive   server  1.6.2  2         data-center01  <all>

# Check raft peers
$consul operator raft list-peers
Node           ID                                    Address              State     Voter  RaftProtocol
consul-node01  4135abe9-2a6a-7a99-913d-b9718c53c116  192.124.64.212:8300  leader    true   3
consul-node02  767572c5-577c-8cab-5038-5d871b1a5498  192.124.64.213:8300  follower  true   3
consul-node03  e0accec7-3f08-bd85-c761-1b021eb3714d  192.124.64.214:8300  follower  true   3

# View nodes
$curl 127.0.0.1:8500/v1/catalog/nodes | python -m json.tool

The Web interface:

You can open the Consul web UI to view the cluster: http://192.124.64.212:8500/ui/data-center01/nodes

4. Using Consul

4.1 Common Commands

# help
$consul agent -h
Usage: consul [--version] [--help] <command> [<args>]

# Check cluster status
$consul info

# View cluster members
$consul members

# Join a cluster
$consul join 192.124.64.212

# View nodes
$curl 127.0.0.1:8500/v1/catalog/nodes | python -m json.tool

# Check node information using DNS
$dig @127.0.0.1 -p 8600 consul-node01.node.consul
...
;; ANSWER SECTION:
consul-node01.node.consul. 0 IN A 192.124.64.212

# Reload configuration
$consul reload


4.2 Consul API

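The HTTP API listens on port 8500 by default. One detail worth knowing for the KV endpoints: GET responses return the value base64-encoded, so clients must decode it. A small self-contained sketch (the key name and value are hypothetical, and the sample response is hand-written rather than captured from a live cluster):

```shell
# With a running agent, writing and reading a key looks like:
#   curl -X PUT -d 'host=db01' http://127.0.0.1:8500/v1/kv/app/config
#   curl http://127.0.0.1:8500/v1/kv/app/config
# The GET response carries the value base64-encoded. Decoding a sample response:
resp='[{"Key":"app/config","Value":"aG9zdD1kYjAx","Flags":0}]'
echo "$resp" | python -c '
import base64, json, sys
for item in json.load(sys.stdin):
    print("%s => %s" % (item["Key"], base64.b64decode(item["Value"]).decode()))
'
# -> app/config => host=db01
```

Alternatively, appending `?raw` to the GET request asks Consul to return the raw value without the JSON wrapper.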

4.3 log

# Log configuration
rm -rf /etc/rsyslog.d/consul.conf
rm -rf /etc/logrotate.d/consul
echo ':programname, isequal, "consul" /var/log/consul.log' >> /etc/rsyslog.d/consul.conf
echo '& ~' >> /etc/rsyslog.d/consul.conf

vim /etc/logrotate.d/consul
/var/log/consul.log
{
    daily
    rotate 7
    missingok
    dateext
    copytruncate
    compress
}

# restart service
/etc/init.d/rsyslog restart  # centos6
consul reload

/bin/systemctl restart consul  # centos7

# Configure start on boot
echo "/bin/systemctl start consul" >> /etc/rc.local

5. Service registration
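A minimal sketch of agent-side registration (the "web" service, port, health-check URL, and paths here are hypothetical examples): a service definition file is dropped into the agent's -config-dir and picked up with `consul reload`.

```shell
# Hypothetical service definition; in practice this would go into /data1/consul/conf/
mkdir -p /tmp/consul-demo
cat > /tmp/consul-demo/web.json <<'EOF'
{
  "service": {
    "name": "web",
    "port": 8080,
    "tags": ["primary"],
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
EOF
# Validate the definition before handing it to the agent
python -m json.tool < /tmp/consul-demo/web.json > /dev/null && echo "web.json OK"
# With a running agent: copy the file into the -config-dir and run `consul reload`
```

The embedded check means the agent will poll the service's health endpoint every 10 seconds and stop routing traffic to it if the check fails.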

6. Service discovery
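A sketch of client-side discovery over the HTTP catalog (the queries assume a registered "web" service; the sample response below is hand-written using this article's example addresses, not captured from a live cluster):

```shell
# With a running agent you would query:
#   curl http://127.0.0.1:8500/v1/catalog/service/web
#   dig @127.0.0.1 -p 8600 web.service.consul SRV
# Parsing a sample catalog response into address:port pairs, the way a
# client-side resolver would:
cat > /tmp/web-catalog.json <<'EOF'
[
  {"Node": "consul-node01", "Address": "192.124.64.212", "ServiceName": "web", "ServicePort": 8080},
  {"Node": "consul-node02", "Address": "192.124.64.213", "ServiceName": "web", "ServicePort": 8080}
]
EOF
python -c '
import json
for n in json.load(open("/tmp/web-catalog.json")):
    print("%s:%d" % (n["Address"], n["ServicePort"]))
'
# -> 192.124.64.212:8080
#    192.124.64.213:8080
```

The DNS interface returns the same information as SRV records, so unmodified applications can discover services through an ordinary DNS lookup.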

Reference:

  • Cloud.tencent.com/developer/a…
  • Learn.hashicorp.com/consul/gett…
  • www.jianshu.com/p/f8746b81d…
  • www.cnblogs.com/gomysql/p/8…
  • Microservice registry, discovery, and configuration center – Consul: https://juejin.cn/post/6844904003764125703
  • Learn.hashicorp.com/consul?utm_…
  • www.cnblogs.com/sunsky303/p…
  • Blog.csdn.net/liuzhuchen/…
  • www.consul.io/docs/agent/…
  • Consul principles and usage summary: blog.coding.net/blog/intro-…
  • Consul Docker image repository: hub.docker.com/_/consul
  • Consul image usage documentation: github.com/docker-libr…
  • Consul official documentation: www.consul.io/docs/agent/…
  • Service discovery for Docker containers using Consul and Registrator
  • Livewyer. IO/blog / 2015/0…
  • A cluster framework for automatic container service discovery based on Consul+Registrator+Nginx
  • www.mamicode.com/info-detail…
  • NET Core microservices implement service governance based on Consul
  • www.cnblogs.com/edisonchou/…
  • www.jianshu.com/p/ef788a924…
  • lidong1665.github.io/2017/03/14/…