
1. Introduction

etcd is a strongly consistent, distributed key/value store that provides a reliable way to store data that needs to be accessed by a distributed system or a cluster of machines.

2. Features

  • K/V storage: stores data in hierarchical directories
  • Security: supports SSL client certificate authentication; keys can optionally expire via a TTL
  • Ease of use: can be driven with plain HTTP tools such as curl (see the example below)
  • High performance: benchmarked at 1,000 writes per second per instance
  • Reliability: uses the Raft algorithm to achieve availability and consistency of data in a distributed system
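
Because the API is plain HTTP, curl alone is enough to exercise it. A minimal illustration, assuming a local instance serving the v2 API on 127.0.0.1:2379:

# write a key through the v2 HTTP API, then read it back
curl http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello etcd"
curl http://127.0.0.1:2379/v2/keys/message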

3. Application scenarios

  • Service registration and discovery
  • Message publishing and subscription
  • Load balancing
  • Distributed locks
  • Distributed queue

4. Concept vocabulary

  • Raft: the algorithm etcd uses to guarantee strong consistency in a distributed system.
  • Node: an instance of the Raft state machine.
  • Member: an etcd instance; it manages one Node and can serve client requests.
  • Cluster: an etcd cluster composed of multiple Members that can work together.
  • Peer: the name for another Member in the same etcd cluster.
  • Client: a client that sends HTTP requests to the etcd cluster.
  • WAL: the write-ahead log format etcd uses for persistent storage.
  • Snapshot: a snapshot of etcd's data state, taken to keep the WAL files from growing too large.
  • Proxy: an etcd mode that provides a reverse proxy in front of an etcd cluster.
  • Leader: the node elected in Raft to process all data commits.
  • Follower: a node that lost the election and serves as a subordinate node in Raft, helping guarantee consistency.
  • Candidate: when a Follower fails to receive the Leader's heartbeat for a certain period, it becomes a Candidate and starts an election.
  • Term: the period from when a node becomes Leader until the next election.
  • Index: the sequence number of a data entry; Raft locates data by Term and Index (both are visible in the etcdctl output shown below).
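
Term and Index are not just internal bookkeeping; etcdctl surfaces them directly. A quick way to see them, assuming a local member on 127.0.0.1:2379:

# the table output includes RAFT TERM and RAFT INDEX columns
etcdctl --write-out=table --endpoints=127.0.0.1:2379 endpoint status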

5. Architecture introduction

etcd consists of four main parts:

  • HTTP Server: handles API requests from users as well as synchronization and heartbeat requests from other etcd nodes.
  • Store: handles the transactions behind the various functions etcd supports, including data indexing, node state changes, monitoring and feedback, and event handling and execution. It is the concrete implementation of most of the API functionality etcd offers to users.
  • Raft: the concrete implementation of the Raft strong-consistency algorithm; the core of etcd.
  • WAL: the Write Ahead Log, etcd's data storage mechanism. Besides holding the state of all data and the node indexes in memory, etcd persists everything through the WAL: every change is logged before it is committed. A Snapshot is a state snapshot taken to keep the log from growing too large, and an Entry is an individual log record.

Process:

When a user request arrives, the HTTP Server forwards it to the Store for transaction processing. If the request modifies node state, it is handed to the Raft module, which records the change in the log and synchronizes it to the other etcd nodes; once they confirm, the data is committed and synchronized once more.
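
This commit-then-synchronize behavior is easy to observe on a running cluster: write through one member, then read the value back from the others. A sketch, assuming the three-node cluster and endpoint addresses configured in section 6.2 and the v3 etcdctl:

# write via D1, then read the committed value back through D2 and D3
etcdctl --endpoints=http://192.168.1.101:2379 put hello world
etcdctl --endpoints=http://192.168.1.102:2379 get hello
etcdctl --endpoints=http://192.168.1.103:2379 get hello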

6. Installation and deployment

6.1 Single-server Installation

CentOS 7 already ships an etcd RPM package, so it can be installed conveniently with yum:

yum install etcd -y

Check the configuration; in a standalone environment nothing needs to be changed:

egrep -v "^#|^$" /etc/etcd/etcd.conf

Start the service with systemctl start etcd, then check its health:

etcdctl cluster-health
etcdctl member list
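
A quick smoke test, assuming the default etcdctl of the CentOS 7 package (which speaks the v2 API):

# write and read back a test key
etcdctl set /test "hello"
etcdctl get /test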

6.2 Cluster Installation

For a three-machine cluster, installation is the same as on a single machine: run the install command on every machine. The configuration then varies according to the actual environment.

Machines:
  • D1: 192.168.1.101
  • D2: 192.168.1.102
  • D3: 192.168.1.103

Execute the installation command on all three machines:

yum install etcd -y

To simplify the configuration file, define a few variables holding each machine's etcd endpoint address:

# set a few variables
D1=http://192.168.1.101
D2=http://192.168.1.102
D3=http://192.168.1.103
Then edit /etc/etcd/etcd.conf on each machine; the example below is for D1 (substitute D2 and D3 accordingly on the other machines):

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="${D1}:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,${D1}:2379"
ETCD_NAME="number1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="${D1}:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,${D1}:2379"
ETCD_INITIAL_CLUSTER="number1=${D1}:2380,number2=${D2}:2380,number3=${D3}:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"
ETCD_INITIAL_CLUSTER_STATE="new"
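
To avoid editing three files by hand, a small helper can render the configuration for each machine. A minimal sketch, assuming the addresses above; the script name and arguments are hypothetical, and it overwrites /etc/etcd/etcd.conf wholesale:

#!/usr/bin/env bash
# render_etcd_conf.sh (hypothetical) -- usage: ./render_etcd_conf.sh number1 http://192.168.1.101
set -eu
NAME=$1   # member name, e.g. number1
SELF=$2   # this machine's endpoint, e.g. http://192.168.1.101
D1=http://192.168.1.101 D2=http://192.168.1.102 D3=http://192.168.1.103

cat > /etc/etcd/etcd.conf <<EOF
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="${SELF}:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,${SELF}:2379"
ETCD_NAME="${NAME}"
ETCD_INITIAL_ADVERTISE_PEER_URLS="${SELF}:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,${SELF}:2379"
ETCD_INITIAL_CLUSTER="number1=${D1}:2380,number2=${D2}:2380,number3=${D3}:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF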

Run systemctl start etcd on each of the three machines, then check the status:

etcdctl cluster-health
etcdctl member list

Setting up the cluster hinges on the configuration files. If an error occurs, check the error log and fix the configuration based on the log messages.

7. Management tools

There are not many etcd UI tools, and most are no longer maintained. The tool used here is cross-platform (Windows, Linux, macOS) and covers all etcd functionality (anything etcdctl can do, this tool can do), which should satisfy most needs. Note, however, that it only supports the etcd v3 API, not v2.

Tool address: github.com/gtamas/etcd…

Interface screenshot: (image omitted)

8. Common operations

The following are examples of common operations using etcdctl. First, set a variable for the endpoint:

ENDPOINTS=http://127.0.0.1:2379
put/get
etcdctl --endpoints=$ENDPOINTS put foo "Hello World!"
etcdctl --endpoints=$ENDPOINTS get foo
etcdctl --endpoints=$ENDPOINTS --write-out="json" get foo
prefix
etcdctl --endpoints=$ENDPOINTS put web2 value2
etcdctl --endpoints=$ENDPOINTS put web3 value3
etcdctl --endpoints=$ENDPOINTS get web --prefix
delete
etcdctl --endpoints=$ENDPOINTS del foo
etcdctl --endpoints=$ENDPOINTS del web --prefix
Watch (run the watch commands in a separate terminal; they block while waiting for events)
etcdctl --endpoints=$ENDPOINTS watch stock1
etcdctl --endpoints=$ENDPOINTS put stock1 1000
etcdctl --endpoints=$ENDPOINTS watch stock --prefix
etcdctl --endpoints=$ENDPOINTS put stock1 10
etcdctl --endpoints=$ENDPOINTS put stock2 20
Lease (the lease ID below comes from the sample grant output; substitute the ID your own grant command prints)
etcdctl --endpoints=$ENDPOINTS lease grant 300
#lease 2be7547fbc6a5afa granted with TTL(300s)

etcdctl --endpoints=$ENDPOINTS put sample value --lease=2be7547fbc6a5afa
etcdctl --endpoints=$ENDPOINTS get sample

etcdctl --endpoints=$ENDPOINTS lease keep-alive 2be7547fbc6a5afa
etcdctl --endpoints=$ENDPOINTS lease revoke 2be7547fbc6a5afa
### or after 300 seconds
etcdctl --endpoints=$ENDPOINTS get sample
Distributed locks
etcdctl --endpoints=$ENDPOINTS lock mutex1
###another client with the same name blocks
etcdctl --endpoints=$ENDPOINTS lock mutex1
Elections
etcdctl --endpoints=$ENDPOINTS elect one p1
###another client with the same name blocks
etcdctl --endpoints=$ENDPOINTS elect one p2
Cluster status
etcdctl --write-out=table --endpoints=$ENDPOINTS endpoint status
etcdctl --endpoints=$ENDPOINTS endpoint health
Snapshot
etcdctl --endpoints=$ENDPOINTS snapshot save my.db
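
Saving is only half the workflow; etcdctl also ships status and restore subcommands for snapshots. A sketch of the companion commands, assuming the my.db file saved above (the restore data directory is an arbitrary choice):

# inspect the snapshot
etcdctl --write-out=table snapshot status my.db
# restore it into a fresh data directory
etcdctl snapshot restore my.db --data-dir /var/lib/etcd/restored.etcd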
Migrate (moves v2 store data to the v3 store; note that newer etcd releases have removed the migrate subcommand)
# write key in etcd version 2 store
export ETCDCTL_API=2
etcdctl --endpoints=$ENDPOINTS set foo bar

# read key in etcd v2
etcdctl --endpoints=$ENDPOINTS --output="json" get foo

# stop etcd node to migrate, one by one

# migrate v2 data
export ETCDCTL_API=3
etcdctl --endpoints=$ENDPOINTS migrate --data-dir="default.etcd" --wal-dir="default.etcd/member/wal"

# restart etcd node after migrate, one by one

# confirm that the key got migrated
etcdctl --endpoints=$ENDPOINTS get /foo
Member

Note: follow the official documentation for this operation. It is usually performed when scaling a cluster up or down; do not experiment with it on a cluster that is running normally.
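
For reference, the relevant subcommands look like this; the member name, peer URL, and <MEMBER_ID> are placeholders, so consult the official docs before running them against a real cluster:

# list members, add a new one, remove an old one
etcdctl --endpoints=$ENDPOINTS member list
etcdctl --endpoints=$ENDPOINTS member add number4 --peer-urls=http://192.168.1.104:2380
etcdctl --endpoints=$ENDPOINTS member remove <MEMBER_ID>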

Auth
export ETCDCTL_API=3
ENDPOINTS=localhost:2379

etcdctl --endpoints=${ENDPOINTS} role add root
etcdctl --endpoints=${ENDPOINTS} role grant-permission root readwrite foo
etcdctl --endpoints=${ENDPOINTS} role get root

# 'user add root' prompts for a password; the examples below assume it was set to 123
etcdctl --endpoints=${ENDPOINTS} user add root
etcdctl --endpoints=${ENDPOINTS} user grant-role root root
etcdctl --endpoints=${ENDPOINTS} user get root

etcdctl --endpoints=${ENDPOINTS} auth enable

### now all client requests go through auth
etcdctl --endpoints=${ENDPOINTS} --user=root:123 put foo bar
etcdctl --endpoints=${ENDPOINTS} get foo
etcdctl --endpoints=${ENDPOINTS} --user=root:123 get foo
etcdctl --endpoints=${ENDPOINTS} --user=root:123 get foo1

9. Monitoring integration

etcd exposes a metrics endpoint by default; Prometheus scrapes it, and Grafana is used to display the data.

Command to view the metrics data: curl http://localhost:2379/metrics
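
To spot-check that the endpoint is serving useful data, a couple of well-known etcd metrics can be grepped out (names as emitted by etcd's Prometheus instrumentation; verify against your version):

curl -s http://localhost:2379/metrics | grep -E "^etcd_server_has_leader|^etcd_server_leader_changes_seen_total"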

Prometheus configuration (a scrape job added under scrape_configs in prometheus.yml):

- job_name: 'etcd'
  static_configs:
    - targets: ['ip:2379']
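
For context, a minimal complete prometheus.yml around that job could look like this; the scrape interval is an assumption, and ip stands for the etcd host address:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'etcd'
    static_configs:
      - targets: ['ip:2379']   # replace ip with the etcd host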

Grafana has several commonly used dashboard templates, with IDs 12362, 9618, and 3070; they can be imported directly in Grafana.
