Preface

To start, here is a mind map summarizing Redis knowledge, shared for reference:

Redis cluster scheme comparison

Before Redis 3.0, clusters generally relied on the Sentinel tool to monitor the status of the master node. If the master became abnormal, a master/slave switchover took place and one of the slaves was promoted to master. Sentinel is slightly complex to configure, and its performance and availability are mediocre: access is interrupted at the moment of the switchover, and the cluster can take ten or even tens of seconds to decide that the master is offline and elect a slave as the new master. In a high-concurrency scenario such as Double 11, even a momentary outage of the Redis master is a terrible thing; it means tens of millions of product and order queries would hit the database directly, and the database would likely crash under that volume of requests.

In Sentinel mode there is usually only one master node serving external requests, so it cannot support high concurrency. Suppose a single Redis node can handle 100,000 concurrent requests; that is still far from enough for the tens of millions of concurrent requests of Double 11. Moreover, the memory of a single master should not be set too large, otherwise the persistence files become too large and the efficiency of data recovery and master-slave synchronization suffers.

Sentinel mode:

A Redis high-availability cluster is a distributed server group consisting of multiple groups of master and slave nodes, with replication, high availability, and sharding built in. A Redis cluster does not need Sentinel to perform node removal and failover: each node simply has to be started in cluster mode. There is no central node, and the cluster scales horizontally (the official recommendation is no more than 1,000 nodes). Redis clusters provide better performance and availability than the earlier Sentinel mode, and the cluster configuration is simple.

Compared with a Sentinel cluster, a high-availability cluster does not remain entirely unavailable after a master goes offline until a new master is elected. Because the cluster has multiple masters, when we write a large amount of data to the Redis service, a hash value is computed from each written key and the data lands on different masters. Therefore, when one master goes offline, write requests that land on the other masters still proceed normally.

High availability cluster mode:

Redis high availability cluster setup

A Redis cluster requires at least three master nodes. Here we set up three masters, each with one slave: six Redis nodes in total, with port numbers 8001 to 8006. The author still deploys all six nodes on one machine, and the steps are as follows:

1-1 configuration

# Create config and data directories under the Redis installation directory,
# copy redis.conf into the config directory and rename it redis-8001.conf.
# Options already explained in the earlier master-slave & sentinel article
# are not covered again here.
port 8001
protected-mode no
daemonize yes
pidfile "/var/run/redis-8001.pid"
logfile "8001.log"
dir "/home/lf/redis-6.2.1/data"
dbfilename "dump-8001.rdb"
#bind 127.0.0.1 -::1
appendonly yes
appendfilename "appendonly-8001.aof"
requirepass "123456"
# set the cluster access password
masterauth 123456
# enable cluster mode
cluster-enabled yes
# cluster node information file
cluster-config-file nodes-8001.conf
# node timeout, in milliseconds
cluster-node-timeout 15000

Next we copy redis-8001.conf to redis-8002.conf, redis-8003.conf, redis-8004.conf, redis-8005.conf, and redis-8006.conf, and in each file replace 8001 with 8002, 8003, 8004, 8005, and 8006 respectively. The replacement can be done in batch in vim:

:%s/source string/destination string/g

Note that if the cluster is set up across different servers, you need to run the following commands on each server to disable the firewall; otherwise the Redis processes on different servers cannot reach each other:

systemctl stop firewalld     # stop the firewall now
systemctl disable firewalld  # keep it from starting on boot

We also add the following configuration to redis-8001.conf:

min-replicas-to-write 1

This option means that when we write data to a master, the master must replicate it to at least one slave before it returns; if set to 3, it must replicate to at least three slaves before returning. It reduces data loss when a master goes offline and a slave is promoted to master, but it cannot completely prevent data loss during a failover, and it carries a performance cost, since the master must confirm that the data has been replicated to the given number of slaves before answering the client.
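For reference, here is how the option pair might look in redis.conf. This is a sketch of the stock redis.conf options, not taken from the article's config; min-replicas-max-lag is the companion option (default 10) that bounds how stale a replica may be and still be counted:

```
min-replicas-to-write 1   # a write succeeds only if at least 1 replica is connected...
min-replicas-max-lag 10   # ...with replication lag no greater than 10 seconds
```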

Now let’s start the Redis service with ports 8001~8006 in sequence:

[root@master redis-6.2.1]# src/redis-server config/redis-8001.conf
[root@master redis-6.2.1]# src/redis-server config/redis-8002.conf
[root@master redis-6.2.1]# src/redis-server config/redis-8003.conf
[root@master redis-6.2.1]# src/redis-server config/redis-8004.conf
[root@master redis-6.2.1]# src/redis-server config/redis-8005.conf
[root@master redis-6.2.1]# src/redis-server config/redis-8006.conf

The six Redis services created earlier are still independent services, so let’s look at the command that groups these six services into a cluster:

[root@master redis-6.2.1]# src/redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  ...
# We use the create command; --cluster-replicas 1 means creating one slave
# for each master. We pass in the six nodes, and Redis decides the
# master/slave division by itself.
[root@master redis-6.2.1]# src/redis-cli -a 123456 --cluster create --cluster-replicas 1 192.168.6.86:8001 192.168.6.86:8002 192.168.6.86:8003 192.168.6.86:8004 192.168.6.86:8005 192.168.6.86:8006
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.6.86:8005 to 192.168.6.86:8001
Adding replica 192.168.6.86:8006 to 192.168.6.86:8002
Adding replica 192.168.6.86:8004 to 192.168.6.86:8003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
#<1>
M: 28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001
   slots:[0-5460] (5461 slots) master
M: baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002
   slots:[5461-10922] (5462 slots) master
M: 115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003
   slots:[10923-16383] (5461 slots) master
S: 54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004
   replicates baf630fe745d9f1db7a58ffb96e180fab1047c79
S: 9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005
   replicates 115a626ee6d475076b096181ab10d3ab6988cc04
S: aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006
   replicates 28ad6b59866832b13dbd58dd944e641862702e23
Can I set the above configuration? (type 'yes' to accept): yes  #<2>
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 192.168.6.86:8001)  #<3>
M: 28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006
   slots: (0 slots) slave
   replicates 28ad6b59866832b13dbd58dd944e641862702e23
M: baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005
   slots: (0 slots) slave
   replicates 115a626ee6d475076b096181ab10d3ab6988cc04
M: 115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004
   slots: (0 slots) slave
   replicates baf630fe745d9f1db7a58ffb96e180fab1047c79
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


In the output, the three M's and three S's represent master and slave nodes respectively, each followed by the node ID and IP:port. By default the cluster takes the first three services we passed in as masters, so per the parameters entered earlier, the services on ports 8001, 8002, and 8003 become the masters. Redis divides the data into 16,384 slots, and each master is responsible for storing a portion of them: 8001 holds slots [0,5460], 8002 holds [5461,10922], and 8003 holds [10923,16383]. When we store or read a key, the Redis client routes the command to the master that owns the slot derived from the key's hash. The slave lines are similar to the master lines, except that instead of a slot range they show the ID of the master they replicate: for example, the 8004 slave replicates the 8002 master, and the 8006 slave corresponds to the 8001 master.

#<1>
M:(master) 28ad6b59866832b13dbd58dd944e641862702e23 (node ID) 192.168.6.86:8001 (node IP and port)
   slots:[0-5460] (5461 slots) master (slots this node serves: keys whose hash falls between 0 and 5460 land here)
M: baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002
   slots:[5461-10922] (5462 slots) master
M: 115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003
   slots:[10923-16383] (5461 slots) master
S: 54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004
   replicates baf630fe745d9f1db7a58ffb96e180fab1047c79
S: 9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005
   replicates 115a626ee6d475076b096181ab10d3ab6988cc04
S:(slave) aa6ce37e876660161403a801adb8fc7a79a9d876 (node ID) 192.168.6.86:8006 (node IP and port)
   replicates 28ad6b59866832b13dbd58dd944e641862702e23 (ID of the master this slave replicates)

If you agree with the master/slave division of the cluster, type yes at <2> and press Enter; <3> is then the actual division, which, barring accidents, should be roughly the same as <1>. Here we built all the nodes on a single server. If the nodes were deployed across multiple servers, Redis would deliberately place masters and their slaves on different servers when dividing them. It does this so that the failure of one server does not take down an entire master/slave group: keeping a master and its slave on different machines ensures that if the machine hosting the master fails, the slave can still be promoted to master.

If we want to view cluster information, we can connect to any node and run CLUSTER NODES or CLUSTER INFO:

[root@master redis-6.2.1]# src/redis-cli -a 123456 -c -p 8001
127.0.0.1:8001> CLUSTER NODES
aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006@18006 slave 28ad6b59866832b13dbd58dd944e641862702e23 0 1618317182151 1 connected
baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002@18002 master - 0 1618317187163 2 connected 5461-10922
9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005@18005 slave 115a626ee6d475076b096181ab10d3ab6988cc04 0 1618317186161 3 connected
115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003@18003 master - 0 1618317184000 3 connected 10923-16383
54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004@18004 slave baf630fe745d9f1db7a58ffb96e180fab1047c79 0 1618317186000 2 connected
28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001@18001 myself,master - 0 1618317184000 1 connected 0-5460
127.0.0.1:8001> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:61
cluster_stats_messages_pong_sent:62
cluster_stats_messages_sent:123
cluster_stats_messages_ping_received:57
cluster_stats_messages_pong_received:61
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:123

CLUSTER NODES shows the cluster's masters and slaves, the slots each master manages, the master each slave replicates, and each node's connection state. Note that if all the servers in the cluster go down, we do not need to run redis-cli --cluster create again to rebuild the cluster when they come back: simply start each of the Redis nodes 8001~8006 and the whole cluster recovers, because once a cluster is created, its node information is written to the nodes-800x.conf files.

Now let's test the cluster by setting two key-value pairs, <Python,flask> and <Java,spring>:

[root@master redis-6.2.1]# src/redis-cli -a 123456 -c -p 8001
127.0.0.1:8001> SET Python flask
-> Redirected to slot [7252] located at 192.168.6.86:8002
OK
192.168.6.86:8002> SET Java spring
-> Redirected to slot [858] located at 192.168.6.86:8001
OK
192.168.6.86:8001> GET Java
"spring"
192.168.6.86:8001> GET Python
-> Redirected to slot [7252] located at 192.168.6.86:8002
"flask"

From the output above we can see that when setting the <Python,flask> pair, Redis maps the key Python to slot 7252, which falls in the range [5461,10922] governed by 8002, so the client is redirected to 8002. When we then set <Java,spring> while connected to the 8002 master, Redis maps Java to slot 858, which falls in [0,5460] governed by 8001, and we are redirected to 8001. GET commands are redirected in the same way.
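This slot routing can be reproduced offline: a cluster client computes CRC16(key) mod 16384 to find the slot, and when the key contains a non-empty {hash tag}, only the substring inside the first pair of braces is hashed. Below is a minimal sketch (function names are my own); the CRC16 variant is CRC-16/XMODEM, whose standard check value for "123456789" is 0x31C3:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {hash tags}."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only its content
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

For example, key_hash_slot("{user}:1:name") and key_hash_slot("{user}:2:name") both hash only "user" and therefore land in the same slot, which is why the MSET with braces later in this article succeeds.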

Now let's kill the process of 8006, the slave of 8001, and test whether the min-replicas-to-write option configured for 8001 takes effect: we configured 8001 so that a write must be replicated to at least one slave before it returns. After killing 8006, we try to set the <Java,tomcat> pair on the Redis service on port 8001:

[root@master redis-6.2.1]# ps -ef | grep redis
root  44661  22426  0 11:04 pts/0   00:00:00 grep --color=auto redis
root 108814      1  0 Apr13 ?       00:13:24 src/redis-server *:8002 [cluster]
root 108820      1  0 Apr13 ?       00:13:31 src/redis-server *:8003 [cluster]
root 108826      1  0 Apr13 ?       00:13:14 src/redis-server *:8004 [cluster]
root 108835      1  0 Apr13 ?       00:13:43 src/redis-server *:8005 [cluster]
root 108923      1  0 Apr13 ?       00:13:21 src/redis-server *:8001 [cluster]
root 109206      1  0 Apr13 ?       00:13:28 src/redis-server *:8006 [cluster]
root 109315      1  0 Apr13 ?       00:13:43 src/redis-server *:8007 [cluster]
root 109324      1  0 Apr13 ?       00:13:20 src/redis-server *:8008 [cluster]
root 109963 103945  0 Apr13 pts/1   00:00:00 src/redis-cli -a 123456 -c -p 8001
# 8006's PID is 109206; kill it, then connect to 8001 and try to set the
# <Java,tomcat> pair. We get the error: not enough replicas to write.
[root@master redis-6.2.1]# kill -9 109206
192.168.6.86:8001> SET Java tomcat
(error) NOREPLICAS Not enough good replicas to write.


From the result above we can confirm that min-replicas-to-write N ensures that data written to a Redis master is replicated to at least N slaves before the write returns. If we restart the slave 8006, it automatically rejoins the cluster, and the 8001 master can set key-value pairs normally again:

[root@master redis-6.2.1]# src/redis-server config/redis-8006.conf
192.168.6.86:8001> SET Java tomcat
OK


Redis communication mechanism between cluster nodes

Redis Cluster nodes communicate with each other over the Gossip protocol. There are two general ways to maintain cluster metadata (node information, master/slave roles, number of nodes, the data each node serves, and so on): centralized storage and gossip.

Centralized:

The advantage is that metadata updates and reads are very timely: once the metadata changes, it is immediately written to the centralized store, and other nodes see the change as soon as they read. The disadvantage is that all the update pressure is concentrated in one place, which can stress the metadata store. Many middleware systems use ZooKeeper to store metadata centrally.

Gossip:

The Gossip protocol contains multiple messages, including ping, pong, meet, fail, and so on.

meet

A node sends a meet message to a new node so that it joins the cluster, after which the new node begins communicating with the other nodes.

ping

Each node frequently sends ping messages to other nodes, which contain its own status and cluster metadata maintained by it. Each node exchanges metadata (such as the perceived addition and removal of cluster nodes, hash slot information, etc.) through ping.

pong

Sent in response to ping and meet messages; it contains the node's own status and other information, and can also be used to broadcast and update information.

fail

When a node determines that another node is offline, it sends a FAIL message to other nodes to inform them that the specified node is down.

The gossip protocol has the advantage that metadata updates are scattered rather than centralized in one place. Therefore, update requests are continuously transmitted to all nodes, reducing the pressure. However, there is a certain delay, which may cause some cluster operations to lag.

Each node has a dedicated port for inter-node communication: its service port plus 10000. For example, node 8001 uses port 18001 for node-to-node communication. At regular intervals each node sends ping messages to several other nodes, and those nodes reply with pong.

Network jitter

Production data-center networks are not always smooth; all sorts of problems occur. Network jitter, for example, is very common: some connections suddenly become unreachable and then recover after a while.

To solve this problem, Redis Cluster provides the cluster-node-timeout option. It means a node is considered faulty, and a master/slave switchover triggered, only after the node has been unreachable for the timeout period. Without this option, network jitter could cause frequent master/slave switches (and the data re-replication that comes with them).

Redis cluster election principle analysis

When a secondary node finds that its primary node is in the Fail state, it attempts to perform a failover to become the new primary node. Since a failed primary node may have multiple slave nodes, there is a process of multiple slave nodes competing to become the primary node.

The process is as follows:

1. The slave discovers that its master has entered the fail state.

2. It increments the cluster's currentEpoch by one and broadcasts a FAILOVER_AUTH_REQUEST message.

3. When the other nodes receive the message, only masters respond: each checks the legitimacy of the requester and replies with FAILOVER_AUTH_ACK. For any given epoch, a master sends at most one ack.

4. The secondary node that attempts failover collects the FAILOVER_AUTH_ACK returned by other primary nodes.

5. It becomes the new master after receiving acks from more than half of the masters. (This also explains why the cluster needs at least three masters: with only two, the survivor of a master failure can never collect a majority of acks.)

6. The newly promoted master broadcasts a pong message to notify the other cluster nodes.

A slave does not attempt to start an election the moment its master enters the fail state; it waits for a certain delay. The delay ensures that the master's fail state has had time to propagate through the cluster: if the slave started the election immediately, other masters might not yet perceive the fail state and could refuse to vote.

DELAY calculation formula: DELAY = 500ms + random(0~500ms) + SLAVE_RANK × 1000ms. SLAVE_RANK is the slave's rank by the amount of data it has replicated from the master: a smaller rank means newer data. In this way, the slave holding the most recent data initiates the election first.
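The two rules above, the majority vote in step 5 and the delay formula, can be sketched in a few lines of illustrative Python (this is not Redis source code; the function names are my own):

```python
import random

def majority_reached(acks: int, total_masters: int) -> bool:
    """Step 5: a candidate wins only with acks from more than half of ALL masters."""
    return acks > total_masters // 2

def election_delay_ms(slave_rank: int) -> int:
    """DELAY = 500 ms + random(0..500 ms) + SLAVE_RANK * 1000 ms."""
    return 500 + random.randrange(0, 501) + slave_rank * 1000
```

With two masters, the survivor of a failure can collect at most one ack, and majority_reached(1, 2) is False, which is exactly why at least three masters are needed. And since a rank-0 replica's delay (500 to 1000 ms) is always below a rank-1 replica's (1500 to 2000 ms), the replica with the freshest data reliably starts its election first.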

Whether the cluster serves requests only when it is complete

When cluster-require-full-coverage in redis.conf is set to no, the cluster remains available even if a master goes offline and no slave is left to take over its slots; if it is set to yes, the whole cluster becomes unavailable in that case.
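A sketch of the corresponding redis.conf line (the option defaults to yes):

```
# Keep serving the slots that still have owners even when some slot range
# has lost all of its nodes; set to yes to take the whole cluster offline instead.
cluster-require-full-coverage no
```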

Why does a Redis cluster need at least three master nodes, and why is an odd number of nodes recommended?

Because electing a new master requires acks from more than half of all masters, a cluster with only two masters cannot complete a failover once one of them dies: the lone survivor can never reach a majority. An odd number of masters is recommended because it tolerates the same number of failures as the next even number while using one fewer machine, so the extra node adds cost without adding availability.

Support for batch operation commands

For commands such as MSET and MGET that operate on multiple keys, a Redis cluster only supports the case where all the keys fall into the same slot. If you must operate on multiple keys in one command on a Redis cluster, you can add a brace-wrapped prefix such as {XX} to the keys; only the value inside the braces is then hashed, ensuring that the different keys land in the same slot, as in the following example:

# user:1:name and user:2:name fall into different slots, so MSET fails.
192.168.6.86:8002> MSET user:1:name Tom user:2:name Amy
(error) CROSSSLOT Keys in request don't hash to the same slot
# Braces ensure {user}:1:name and {user}:2:name land in the same slot.
192.168.6.86:8002> MSET {user}:1:name Tom {user}:2:name Amy
-> Redirected to slot [5474] located at 192.168.6.86:8001
OK
192.168.6.86:8001> MGET {user}:1:name {user}:2:name
1) "Tom"
2) "Amy"

Sentinel leader election process

When a master is judged offline by a sentinel, that sentinel negotiates with the other sentinels to elect a sentinel leader to perform the failover. Every sentinel that sees the master offline may ask the others to elect it leader, on a first-come-first-served basis. Each election increments the election epoch, and within one epoch at most one sentinel is chosen as leader. A sentinel becomes leader once more than half of all sentinels vote for it; it then performs the failover, electing a new master from the surviving slaves in a process similar to the cluster's master election described earlier.

Even if the sentinel cluster has only one sentinel node, it can still elect a new master when the master goes offline; in that case the lone sentinel is, of course, the leader. For high availability, however, deploying at least three sentinel nodes is generally recommended, and the reason for recommending an odd number of sentinels is similar to the reason for an odd number of masters in a cluster.

Adding and removing nodes

So far we have learned how to create a cluster and how to set key-value pairs on it, but we still need to know how to add and remove cluster nodes. Here I will walk you through adding a master/slave pair with ports 8007 and 8008 to the cluster, and then removing them again. First copy redis-8001.conf in the config directory to redis-8007.conf and redis-8008.conf, and replace 8001 with 8007 and 8008 respectively, as in configuration 1-1. Then start the two Redis services 8007 and 8008:

[root@master redis-6.2.1]# src/redis-server config/redis-8007.conf
[root@master redis-6.2.1]# src/redis-server config/redis-8008.conf

Then we run redis-cli –cluster help to see how to add the new node to the cluster:

[root@master redis-6.2.1]# src/redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
                 --cluster-fix-with-unreachable-masters
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
                 --cluster-only-masters
                 --cluster-only-replicas
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-from-user <arg>
                 --cluster-from-pass <arg>
                 --cluster-from-askpass
                 --cluster-copy
                 --cluster-replace
  backup         host:port backup_directory
  help

create

Create a cluster from the listed nodes host1:port1 ... hostN:portN.

call

Run a Redis command against the cluster's nodes.

add-node

Add a node to the cluster: the first parameter is the new node's ip:port, and the second is the ip:port of any node already in the cluster.

del-node

Remove a node.

reshard

Resharding.

check

Check the cluster status.

Now we add the 8007 Redis service to the cluster. We need to provide two parameters, the IP and port of the new node and the IP and port of an existing cluster node, here 192.168.6.86:8007 and 192.168.6.86:8001 respectively:

[root@master redis-6.2.1]# src/redis-cli -a 123456 --cluster add-node 192.168.6.86:8007 192.168.6.86:8001
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 192.168.6.86:8007 to cluster 192.168.6.86:8001
>>> Performing Cluster Check (using node 192.168.6.86:8001)
M: 28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006
   slots: (0 slots) slave
   replicates 28ad6b59866832b13dbd58dd944e641862702e23
M: baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005
   slots: (0 slots) slave
   replicates 115a626ee6d475076b096181ab10d3ab6988cc04
M: 115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004
   slots: (0 slots) slave
   replicates baf630fe745d9f1db7a58ffb96e180fab1047c79
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.6.86:8007 to make it join the cluster.
[OK] New node added correctly.

When adding a node, the tool prints the cluster's existing master/slave layout again, and finally prints [OK] New node added correctly, indicating that the node was added successfully.

Following the same steps, we add 8008 to the cluster. Comparing the cluster information printed this time with last time, there is one more master node, 8007:

[root@master redis-6.2.1]# src/redis-cli -a 123456 --cluster add-node 192.168.6.86:8008 192.168.6.86:8001
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 192.168.6.86:8008 to cluster 192.168.6.86:8001
>>> Performing Cluster Check (using node 192.168.6.86:8001)
M: 28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006
   slots: (0 slots) slave
   replicates 28ad6b59866832b13dbd58dd944e641862702e23
M: baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005
   slots: (0 slots) slave
   replicates 115a626ee6d475076b096181ab10d3ab6988cc04
M: 115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 192.168.6.86:8007
   slots: (0 slots) master
S: 54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004
   slots: (0 slots) slave
   replicates baf630fe745d9f1db7a58ffb96e180fab1047c79
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.6.86:8008 to make it join the cluster.
[OK] New node added correctly.

If we print the cluster information now, we find that 8007 and 8008 are both masters and that the cluster has not assigned either of them any slots. This is normal: nodes newly added to the cluster always join as masters, and both the master/slave relationship between them and the slots they will manage must be divided manually:

192.168.6.86:8001> CLUSTER NODES
aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006@18006 slave 28ad6b59866832b13dbd58dd944e641862702e23 0 1618318693000 1 connected
baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002@18002 master - 0 1618318692000 2 connected 5461-10922
9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005@18005 slave 115a626ee6d475076b096181ab10d3ab6988cc04 0 1618318693725 3 connected
115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003@18003 master - 0 1618318695730 3 connected 10923-16383
5cd842f76c141eddf5270218b877a54a0c202998 192.168.6.86:8008@18008 master - 0 1618318690000 0 connected
5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 192.168.6.86:8007@18007 master - 0 1618318694728 7 connected
54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004@18004 slave baf630fe745d9f1db7a58ffb96e180fab1047c79 0 1618318691000 2 connected
28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001@18001 myself,master - 0 1618318692000 1 connected 0-5460

We connect to 8008 and make it a slave of 8007. CLUSTER REPLICATE {masterID} makes the current node a slave of the given master; for masterID we use 8007's node ID:

[root@master redis-6.2.1]# src/redis-cli -a 123456 -c -p 8008
127.0.0.1:8008> CLUSTER REPLICATE 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367
OK
# Checking the node information again, 8008 has become a slave of 8007.
127.0.0.1:8008> CLUSTER NODES
baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002@18002 master - 0 1618318835003 2 connected 5461-10922
5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 192.168.6.86:8007@18007 master - 0 1618318835000 7 connected
54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004@18004 slave baf630fe745d9f1db7a58ffb96e180fab1047c79 0 1618318834000 2 connected
28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001@18001 master - 0 1618318832000 1 connected 0-5460
115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003@18003 master - 0 1618318832999 3 connected 10923-16383
5cd842f76c141eddf5270218b877a54a0c202998 192.168.6.86:8008@18008 myself,slave 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 0 1618318833000 7 connected
9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005@18005 slave 115a626ee6d475076b096181ab10d3ab6988cc04 0 1618318832000 3 connected
aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006@18006 slave 28ad6b59866832b13dbd58dd944e641862702e23 0 1618318836006 1 connected

After setting up the new master/slave pair, we need to allocate slots to the new master, using the --cluster reshard command:

[root@master redis-6.2.1]# src/redis-cli -a 123456 --cluster reshard 192.168.6.86:8001
>>> Performing Cluster Check (using node 192.168.6.86:8001)
M: 28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006
   slots: (0 slots) slave
   replicates 28ad6b59866832b13dbd58dd944e641862702e23
M: baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005
   slots: (0 slots) slave
   replicates 115a626ee6d475076b096181ab10d3ab6988cc04
M: 115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 5cd842f76c141eddf5270218b877a54a0c202998 192.168.6.86:8008
   slots: (0 slots) slave
   replicates 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367
M: 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 192.168.6.86:8007
   slots: (0 slots) master
S: 54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004
   slots: (0 slots) slave
   replicates baf630fe745d9f1db7a58ffb96e180fab1047c79
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Enter the number of slots to move
How many slots do you want to move (from 1 to 16384)? 600
# Enter the node ID of master 8007 as the receiving node, so 600 slots will be shifted to 8007
What is the receiving node ID? 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
# Enter 'all' so the 600 slots are drawn from all primary nodes (8001, 8002, 8003) and moved to the target primary node 8007
Source node #1: all
...
Moving slot 11119 from 115a626ee6d475076b096181ab10d3ab6988cc04
Moving slot 11120 from 115a626ee6d475076b096181ab10d3ab6988cc04
Moving slot 11121 from 115a626ee6d475076b096181ab10d3ab6988cc04
# Enter yes to let Redis start performing the slot migration
Do you want to proceed with the proposed reshard plan (yes/no)? yes
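While a reshard is in progress, a client that asks the wrong node for a key gets a MOVED (or, for a slot mid-migration, ASK) redirection error naming the correct node, and cluster-aware clients follow it transparently. As an illustrative sketch (not part of the original walkthrough), here is minimal Python that parses such a redirection reply into its target; the function name is our own:

```python
def parse_redirection(error: str):
    """Parse a Redis Cluster redirection error such as
    'MOVED 3999 192.168.6.86:8002' or 'ASK 3999 192.168.6.86:8002'
    into (kind, slot, host, port)."""
    kind, slot, addr = error.split()
    if kind not in ("MOVED", "ASK"):
        raise ValueError(f"not a redirection error: {error!r}")
    host, _, port = addr.rpartition(":")  # rpartition tolerates IPv6-style hosts
    return kind, int(slot), host, int(port)

# A cluster-aware client refreshes its whole slot map on MOVED,
# but retries just this one command against the target on ASK.
print(parse_redirection("MOVED 3999 192.168.6.86:8002"))
```

This is why the `-c` flag matters when using redis-cli against a cluster: without it, the client prints the redirection error instead of following it.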

After the reshard, 8001, 8002, and 8003 manage different slot ranges than before, and 8007 manages three ranges: [0-198], [5461-5661], and [10923-11121], allocated from 8001, 8002, and 8003 respectively:

127.0.0.1:8001> CLUSTER NODES
aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006@18006 slave 28ad6b59866832b13dbd58dd944e641862702e23 0 1618319470349 1 connected
baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002@18002 master - 0 1618319472353 2 connected 5662-10922
9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005@18005 slave 115a626ee6d475076b096181ab10d3ab6988cc04 0 1618319469347 3 connected
115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003@18003 master - 0 1618319471351 3 connected 11122-16383
5cd842f76c141eddf5270218b877a54a0c202998 192.168.6.86:8008@18008 slave 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 0 1618319469000 7 connected
5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 192.168.6.86:8007@18007 master - 0 1618319470000 7 connected 0-198 5461-5661 10923-11121
54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004@18004 slave baf630fe745d9f1db7a58ffb96e180fab1047c79 0 1618319468345 2 connected
28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001@18001 myself,master - 0 1618319470000 1 connected 199-5460
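The reason writes keep landing on the right master after a reshard is that every key is mapped to one of the 16384 slots as CRC16(key) mod 16384, where CRC16 is the XMODEM variant and, when the key contains a non-empty `{...}` hash tag, only the tag is hashed. A self-contained sketch of that calculation (function names are ours; a real client would use its library's implementation):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot of a key. Only the hash tag between the first '{' and the
    following '}' is hashed (if non-empty), so keys sharing a tag always
    land in the same slot and on the same master."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # per the spec, an empty '{}' tag is ignored
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("user:{42}:cart"), key_slot("user:{42}:orders"))  # same slot
```

Comparing a key's slot against the ranges in the CLUSTER NODES output above tells you which master will serve it.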

Next, remove the 8008 slave node from the cluster with the --cluster del-node {host}:{port} {nodeID} command:

[root@master redis-6.2.1]# src/redis-cli -a 123456 --cluster del-node 192.168.6.86:8008 5cd842f76c141eddf5270218b877a54a0c202998
>>> Removing node 5cd842f76c141eddf5270218b877a54a0c202998 from cluster 192.168.6.86:8008
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

Now we will remove the 8007 primary node. Since slots have been allocated to 8007, removing it directly reports an error, so we must first hand 8007's slots back to the other primary nodes. Here we again use --cluster reshard, this time moving slots off of 8007:

[root@master redis-6.2.1]# src/redis-cli -a 123456 --cluster reshard 192.168.6.86:8007
>>> Performing Cluster Check (using node 192.168.6.86:8007)
M: 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 192.168.6.86:8007
   slots:[0-198],[5461-5661],[10923-11121] (599 slots) master
M: 28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001
   slots:[199-5460] (5262 slots) master
   1 additional replica(s)
S: 54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004
   slots: (0 slots) slave
   replicates baf630fe745d9f1db7a58ffb96e180fab1047c79
M: baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002
   slots:[5662-10922] (5261 slots) master
   1 additional replica(s)
M: 115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003
   slots:[11122-16383] (5262 slots) master
   1 additional replica(s)
S: aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006
   slots: (0 slots) slave
   replicates 28ad6b59866832b13dbd58dd944e641862702e23
S: 9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005
   slots: (0 slots) slave
   replicates 115a626ee6d475076b096181ab10d3ab6988cc04
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Enter the number of slots to move
How many slots do you want to move (from 1 to 16384)? 600
# Enter the node ID of 8001 as the receiving node
What is the receiving node ID? 28ad6b59866832b13dbd58dd944e641862702e23
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
# Enter the node ID of 8007 as the source
Source node #1: 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367
# Enter done to generate the slot migration plan
Source node #2: done
...
Moving slot 11119 from 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367
Moving slot 11120 from 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367
Moving slot 11121 from 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367
# Enter yes to start the slot migration
Do you want to proceed with the proposed reshard plan (yes/no)? yes
...
Moving slot 11119 from 192.168.6.86:8007 to 192.168.6.86:8001:
Moving slot 11120 from 192.168.6.86:8007 to 192.168.6.86:8001:
Moving slot 11121 from 192.168.6.86:8007 to 192.168.6.86:8001:

After 8007's slots are redistributed, the slot layout of 8001, 8002, and 8003 does not return to the original one. As you can see, 8001 now manages the two ranges [0-5661] and [10923-11121], different from its original [0-5460]; 8002 and 8003 are not compared here, you can compare them yourself:

192.168.6.86:8001> CLUSTER NODES
aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006@18006 slave 28ad6b59866832b13dbd58dd944e641862702e23 0 1618651357467 8 connected
baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002@18002 master - 0 1618651357000 2 connected 5662-10922
9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005@18005 slave 115a626ee6d475076b096181ab10d3ab6988cc04 0 1618651356000 3 connected
115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003@18003 master - 0 1618651355000 3 connected 11122-16383
54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004@18004 slave baf630fe745d9f1db7a58ffb96e180fab1047c79 0 1618651355463 2 connected
28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001@18001 myself,master - 0 1618651354000 8 connected 0-5661 10923-11121
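The `0-5661 10923-11121` notation is just contiguous slots collapsed into ranges. As a small illustration (the helper name is ours), here is how a sorted slot list collapses into the range notation CLUSTER NODES prints:

```python
def format_slot_ranges(slots) -> str:
    """Collapse an iterable of slot numbers into the 'a-b c-d' range
    notation used by CLUSTER NODES (an isolated slot stays bare)."""
    slots = sorted(set(slots))
    ranges = []
    i = 0
    while i < len(slots):
        j = i
        while j + 1 < len(slots) and slots[j + 1] == slots[j] + 1:
            j += 1  # extend the run while slots stay contiguous
        ranges.append(str(slots[i]) if i == j else f"{slots[i]}-{slots[j]}")
        i = j + 1
    return " ".join(ranges)

# 8001 after the reshard: two contiguous blocks
print(format_slot_ranges(list(range(0, 5662)) + list(range(10923, 11122))))
# → 0-5661 10923-11121
```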

After the slots are reassigned, look at the node information again: 8007 is still in the cluster as a master, but it no longer manages any slots:

127.0.0.1:8001> CLUSTER NODES
aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006@18006 slave 28ad6b59866832b13dbd58dd944e641862702e23 0 1618320346264 8 connected
baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002@18002 master - 0 1618320345000 2 connected 5662-10922
9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005@18005 slave 115a626ee6d475076b096181ab10d3ab6988cc04 0 1618320345000 3 connected
115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003@18003 master - 0 1618320345261 3 connected 11122-16383
5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 192.168.6.86:8007@18007 master - 0 1618320347267 7 connected
54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004@18004 slave baf630fe745d9f1db7a58ffb96e180fab1047c79 0 1618320343256 2 connected
28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001@18001 myself,master - 0 1618320343000 8 connected 0-5661 10923-11121
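That "no slots left" check can also be done programmatically by parsing raw CLUSTER NODES output, whose fixed fields are: id, ip:port@cport, flags, master, ping-sent, pong-recv, config-epoch, link-state, followed by any slot ranges. A hedged sketch (function name and the abbreviated sample IDs are ours, for illustration only):

```python
def masters_with_slots(cluster_nodes_output: str) -> dict:
    """Return {node_id: [slot ranges]} for every master line in raw
    CLUSTER NODES output; a master that is safe to del-node maps to []."""
    owned = {}
    for line in cluster_nodes_output.strip().splitlines():
        fields = line.split()
        node_id, flags = fields[0], fields[2]
        if "master" in flags:  # flags may be e.g. 'myself,master'
            # Everything after the 8 fixed fields is slot ranges; skip
            # the [slot->-node] entries shown for migrating/importing slots.
            owned[node_id] = [f for f in fields[8:] if not f.startswith("[")]
    return owned

# Abbreviated sample lines mimicking the output above
sample = (
    "5846d4b77854 192.168.6.86:8007@18007 master - 0 1618320347267 7 connected\n"
    "28ad6b598668 192.168.6.86:8001@18001 myself,master - 0 1618320343000 8 connected 0-5661 10923-11121\n"
    "aa6ce37e8766 192.168.6.86:8006@18006 slave 28ad6b598668 0 1618320346264 8 connected"
)
print(masters_with_slots(sample))
```

An empty list for 8007's ID confirms it is safe to remove.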

After confirming that 8007 no longer manages any slots, we remove the 8007 node from the cluster:

[root@master redis-6.2.1]# src/redis-cli -a 123456 --cluster del-node 192.168.6.86:8007 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367
>>> Removing node 5846d4b7785447b9d7b1c08a0ed74c5e68f2f367 from cluster 192.168.6.86:8007
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

At this point, checking the cluster information again, you can see that the 8007 node is gone:

127.0.0.1:8001> CLUSTER NODES
aa6ce37e876660161403a801adb8fc7a79a9d876 192.168.6.86:8006@18006 slave 28ad6b59866832b13dbd58dd944e641862702e23 0 1618360351136 8 connected
baf630fe745d9f1db7a58ffb96e180fab1047c79 192.168.6.86:8002@18002 master - 0 1618360350000 2 connected 5662-10922
9c6f93c3b5329e60032b970b57e599b98961cba6 192.168.6.86:8005@18005 slave 115a626ee6d475076b096181ab10d3ab6988cc04 0 1618360350132 3 connected
115a626ee6d475076b096181ab10d3ab6988cc04 192.168.6.86:8003@18003 master - 0 1618360348127 3 connected 11122-16383
54b6c985bf0f41fa1b92cff7c165c317dd0a30c7 192.168.6.86:8004@18004 slave baf630fe745d9f1db7a58ffb96e180fab1047c79 0 1618360351000 2 connected
28ad6b59866832b13dbd58dd944e641862702e23 192.168.6.86:8001@18001 myself,master - 0 1618360350000 8 connected 0-5661 10923-11121
