Redis migration method introduction

Article source: Tao Teacher operation notes – WeChat official account

In my work I sometimes need to take an existing Redis cluster offline and move it to a new one (for example, for a machine-room migration or a move to cloud Redis). How do we migrate the data from the original Redis to the new Redis cluster? This section briefly describes the relevant methods and tools.

Redis can be deployed as a master/slave architecture or as a Redis Cluster, and the migration method differs between the two.

1. Redis primary/secondary migration

If the original Redis is a master/slave setup, migration is fairly simple and convenient. Data can be migrated with slaveof or with AOF/RDB files.

  • slaveof: use slaveof to create a new slave of the old master and migrate the data through replication.
  • AOF migration: suspend writes, export the AOF file, and import it into the new Redis cluster.

1.1 Slaveof data synchronization

For example: Redis_master 192.124.64.212:6379 will be migrated to the new redis_master 192.124.64.214:6380.

# old redis_master 192.124.64.212:6379 --> new redis_master 192.124.64.214:6380

# 1. Start data synchronization
$ redis-cli -h redis_master_new -p 6380 slaveof redis_master_old 6379
$ redis-cli -h 192.124.64.214 -p 6380 slaveof 192.124.64.212 6379

# 2. Set slave-read-only on the new Redis to no so it can accept writes; otherwise writes fail:
$ redis-cli -h 192.124.64.214 -p 6380 -a pwd123 set foo bar
(error) READONLY You can't write against a read only slave.

# Check the current slave-read-only setting
$ redis-cli -h 192.124.64.214 -p 6380 -a pwd123 config get slave-read-only
1) "slave-read-only"
2) "yes"
$ redis-cli -h 192.124.64.214 -p 6380 config set slave-read-only no
$ redis-cli -h 192.124.64.214 -p 6380 config get slave-read-only

# 3. Change the application's Redis configuration to the new Redis address.
# 4. Restart the web service.

# 5. Check the connections on the old and new Redis
$ redis-cli -h 192.124.64.212 -p 6379 -a pwd123 info clients
connected_clients:1
$ redis-cli -h 192.124.64.212 -p 6379 -a pwd123 client list | awk -F'addr=' '{print $2}' | awk '{print $1}' | awk -F':' '{print $1}' | sort | uniq -c | sort -nr
      1 192.124.64.214

# new redis:
$ redis-cli -h 192.124.64.214 -p 6380 -a pwd123 info clients
$ redis-cli -h 192.124.64.214 -p 6380 -a pwd123 dbsize

# 6. Break the replication
$ redis-cli -h 192.124.64.214 -p 6380 info | grep role
$ redis-cli -h 192.124.64.214 -p 6380 slaveof NO ONE
OK
$ redis-cli -h 192.124.64.214 -p 6380 info | grep role
role:master

# 7. Test the business (omitted)
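
Before repointing the application (steps 3 and 4) and breaking replication (step 6), it is worth confirming that the new Redis has fully caught up. A quick sanity check, added here as a suggestion rather than part of the original steps, is to look at the replication fields and make sure master_link_status is up and the offsets on both sides are close:

# On the new Redis: the link should be up and its offset close to the master's
$ redis-cli -h 192.124.64.214 -p 6380 -a pwd123 info replication | grep -E 'master_link_status|slave_repl_offset'
# On the old master: compare master_repl_offset with the slave offset above
$ redis-cli -h 192.124.64.212 -p 6379 -a pwd123 info replication | grep master_repl_offset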

1.2 AOF data synchronization

If writes to the original Redis can be suspended, then using AOF to migrate the data is also very convenient.

# Enable AOF on the old Redis to export an AOF file
$ redis-cli -h old_instance_ip -p old_instance_port config set appendonly yes
$ redis-cli -h old_instance_ip -p old_instance_port config get appendonly
1) "appendonly"
2) "yes"

# Import the AOF file into the new Redis
$ redis-cli -h new_redis_instance_ip -p 6379 -a password --pipe < appendonly.aof

# After the import is complete, disable AOF on the original Redis
$ redis-cli -h old_instance_ip -p old_instance_port config set appendonly no
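
The import above assumes appendonly.aof is available on the machine running redis-cli. If it still sits on the old server, a sketch for locating and copying it first (the path here is illustrative; check the dir and appendfilename settings of your instance):

# Find the data directory of the old Redis
$ redis-cli -h old_instance_ip -p old_instance_port config get dir
# Copy the AOF file to the host that will run the import
$ scp old_instance_ip:/data/redis/appendonly.aof .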

Comparison of slaveof and AOF migration:

  • Note that slaveof flushes all existing keys from the target Redis library.
  • Running slaveof against a Redis with a large data set puts significant pressure on the master; it is best done during off-peak hours.
  • Unlike slaveof, AOF migration preserves the data that already exists on the new Redis.
  • AOF cannot be synchronized in real time, so incremental data written during the migration is not carried over; writes to the original Redis usually have to be stopped.

2. Migrate the Redis cluster

2.1 Redis Cluster introduction

  • redis.io/topics/clus…
  • www.redis.cn/topics/clus…

Redis Cluster is the distributed implementation of Redis. It spreads data across multiple Redis instances, which removes the capacity limit of a single Redis. In addition, when a cluster node fails, as long as that node has an available slave the cluster can continue to serve requests, which gives Redis high availability.

2.2 Redis cluster migration

Method 1: add and remove cluster nodes. Add the new Redis nodes to the current cluster one by one so that old and new nodes form one large cluster, then perform a manual failover and take the original master nodes offline.

Note: perform this operation on the slave node, otherwise the error "ERR You should send CLUSTER FAILOVER to a slave" is returned.

# View the master and slave nodes
$ redis-cli -h 192.124.64.212 -p 6301 -a pwd123 -c cluster nodes | grep master | sort -k9 -n
$ redis-cli -h 192.124.64.212 -p 6301 -a pwd123 -c cluster nodes | grep slave | sort -k9 -n

# Check cluster status
$ /usr/local/redis-5.0.6/bin/redis-cli --cluster info 192.124.64.212:6301 -a pwd123
$ /usr/local/redis-5.0.6/bin/redis-cli --cluster check 192.124.64.212:6301 -a pwd123

# Perform a manual failover on the newly added slave node
$ redis-cli -h <newly added slave node> -p 6301 -a pwd123 cluster failover
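
For reference, the add/remove steps themselves can be done with the redis-cli 5.x --cluster subcommands. A sketch (192.124.64.215 and the node IDs are placeholders, not part of the environment described in this article):

# Add a new node as a slave of an existing master
$ redis-cli --cluster add-node 192.124.64.215:6301 192.124.64.212:6301 \
    --cluster-slave --cluster-master-id <old-master-node-id> -a pwd123
# After 'cluster failover' has promoted it, remove the old master from the cluster
$ redis-cli --cluster del-node 192.124.64.212:6301 <old-master-node-id> -a pwd123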

Method 2: use a migration tool

Method 1 cannot be used when the original Redis cluster is expected to stay around for a while, as shown in the figure below, with two separate clusters (for example, a dual machine-room Redis high-availability scenario).

You cannot simply use the slaveof command to turn redis-cluster-B into a slave of redis-cluster-A.

Options for Redis cluster migration:

  • If the service can be interrupted: stop writes to cluster-A, transfer the Redis RDB/AOF files to the new cluster, and restart the service.
  • Use a Redis data migration tool to migrate the redis-cluster data. (Recommended)

3. Redis migration tool

3.1 Common Redis migration tools

Common Redis migration tools:

  • Github.com/delano/redi…
  • Github.com/CodisLabs/r…
  • Github.com/vipshop/red…
  • Github.com/alibaba/Red…
  • redis-dump: a small tool that exports/imports Redis data in JSON format.
  • redis-port: originally a tool from the Codis project; it supports real-time synchronization of Redis data to Codis/Redis, etc.
  • redis-migrate-tool: an open-source Redis data migration tool from Vipshop that can migrate data online between heterogeneous Redis clusters.
  • redis-shake: an open-source Redis data synchronization tool from Alibaba. It supports synchronization between different Redis architectures, such as master/slave -> cluster and cluster -> cluster.

redis-dump:

redis-dump can easily export the data of a single Redis instance (it does not work on a redis-cluster as a whole). It relies on the keys command, so if keys has been disabled via rename-command, redis-dump will fail.

# Migrate 192.124.64.212:6379 to 192.124.64.214:6380
$ redis-dump -u :pwd123@192.124.64.212:6379 | redis-load -u :pwd123@192.124.64.214:6380

# Export the data
$ redis-dump -u :pwd123@192.124.64.212:6379
{"db":0,"key":"foo","ttl":-1,"type":"string","value":"bar","size":3}

# If the keys command is not available, an error occurs
$ redis-dump -u :pwd123@192.124.64.212:6379 > redis_6379.json
ERR unknown command 'keys'

redis-port / redis-migrate-tool / redis-shake: these tools have similar functions and support real-time Redis data synchronization. This article focuses on redis-shake.

3.2 Redis-shake

3.2.1 Redis-shake introduction

  • Github.com/alibaba/Red…
  • Github.com/alibaba/Red…
  • Yq.aliyun.com/articles/69…

Redis-shake is an open-source tool developed by Alibaba Cloud that supports decode, restore, dump (backup), and sync/rump of Redis data. In sync mode, redis-shake uses the sync or psync command to synchronize data from the source Redis to the destination Redis. Both full and incremental data synchronization are supported, and incremental synchronization starts automatically after the full synchronization completes.

Function overview:

  • dump (backup): backs up the full data of the source Redis to an RDB file.
  • decode (parse): reads an RDB file and stores its contents in JSON format.
  • restore: restores an RDB file to the destination Redis.
  • sync: synchronizes data from the source Redis to the destination Redis, migrating both full and incremental data. Synchronization between standalone, master/slave, and cluster deployments is supported. Note that if the source is a cluster, a single redis-shake instance can pull data from all of its DB nodes, but slot migration (move slot) must not be running on the source during the sync. The destination can be one node or multiple DB nodes of a cluster.
  • rump: synchronizes data from the source Redis to the destination Redis using the scan and restore commands. Only full migration is supported; it can migrate between the Redis offerings of different cloud vendors.

3.2.2 Redis-Shake Migration principle

The basic principle of Redis-shake is to simulate a slave node joining the source Redis cluster, first performing full pull and playback, then performing incremental pull (via the psync command). As shown below:
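
While a sync is running, the source master therefore sees redis-shake as one of its connected replicas. A quick way to observe this on the source node used later in this article (the fields come from INFO replication):

$ redis-cli -h 192.124.64.212 -p 6301 -a pwd123 info replication | grep -E '^role|^connected_slaves|^slave[0-9]'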

4. Environment preparation

4.1 Machine Planning

For this Redis cluster migration test, to save machines the target clusters reuse the source Redis machines, just with different ports. Two scenarios are tested: cluster_A -> cluster_B, and cluster_A -> redis_C (master/slave).

Redis-shake machine:

role         version             IP
redis-shake  redis-shake 1.6.24  192.124.64.212

Source Redis cluster A:

role    version      IP              port
node-1  redis-5.0.6  192.124.64.212  6301,7301
node-2  redis-5.0.6  192.124.64.213  6301,7301
node-3  redis-5.0.6  192.124.64.214  6301,7301

Target Redis cluster B:

role    version      IP              port
node-1  redis-5.0.6  192.124.64.212  6302,7302
node-2  redis-5.0.6  192.124.64.213  6302,7302
node-3  redis-5.0.6  192.124.64.214  6302,7302

Target Redis master/slave C:

role    version      IP              port
node-1  redis-5.0.6  192.124.64.212  6303
node-2  redis-5.0.6  192.124.64.213  6303

4.2 Redis Cluster Installation

  • Tao teacher operation notes – Redis5 Cluster construction

Target Redis cluster B installation:

# Start ports 6302 and 7302 on each node
/usr/local/redis/bin/redis-server  /data1/redis6302/conf/redis-cluster-6302.conf 
/usr/local/redis/bin/redis-server  /data1/redis7302/conf/redis-cluster-7302.conf 

# Create the cluster: 3 masters, 3 slaves
$ /usr/local/redis-5.0.6/bin/redis-cli --cluster create  192.124.64.212:6302 192.124.64.213:6302 192.124.64.214:6302 192.124.64.213:7302 192.124.64.214:7302 192.124.64.212:7302 \
--cluster-replicas 1 -a pwd123

# Check cluster status (redis 5.0.x)
$ /usr/local/redis-5.0.6/bin/redis-cli --cluster info 192.124.64.212:6302 -a pwd123
$ /usr/local/redis-5.0.6/bin/redis-cli --cluster check 192.124.64.212:6302 -a pwd123

Target Redis master/slave C installation:

# Install and start redis service (212,213 machines)
/usr/local/redis/bin/redis-server /data1/redis6303/conf/redis-6303.conf

# Set up the Redis master/slave (on the 213 machine)
$ redis-cli -h 127.0.0.1 -p 6303 -a pwd123 slaveof 192.124.64.212 6303

5. Migration implementation

5.1 Redis-shake installation

Software download address: github.com/alibaba/Red… . Decompress the package and it is ready to use.

Installation:

# Install redis-shake
$ wget 'https://github.com/alibaba/RedisShake/releases/download/release-v1.6.24-20191220/redis-shake-1.6.24.tar.gz'
$ tar -zxvf redis-shake-1.6.24.tar.gz
$ mv redis-shake-1.6.24 /usr/local/redis-shake

# Environment variables
$ echo 'export PATH=$PATH:/usr/local/redis-shake' >> /etc/profile
$ source /etc/profile

# Check the version
$ redis-shake.linux --version

# Files in the directory
$ tree /usr/local/redis-shake
.
├── redis-shake
├── start.sh
└── stop.sh

Startup:

# type can be sync, restore, dump, decode, or rump
$ redis-shake.linux -conf=redis-shake.conf -type=xxx

Run redis-shake with -conf={configuration file path} -type={mode}. The mode must match the source and target defined in the configuration file and can be sync, restore, dump, decode, or rump; for full plus incremental synchronization choose sync. On macOS use redis-shake.darwin, on Windows use redis-shake.windows.

Redis-shake modes:

  • dump mode: backs up the data of a Redis instance to an RDB file.
  • decode mode: reads an RDB file and stores its contents in JSON format.
  • restore mode: restores an RDB file to the destination Redis.
  • sync mode: migrates Redis data to another Redis instance or cluster.
  • rump mode: migrates data using the scan and restore commands; it can migrate between the Redis offerings of different cloud vendors.
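
For example, dump and restore pair naturally for an offline migration; a sketch, assuming the RDB input/output locations are set in the configuration file as described in the project wiki:

# Back up the source Redis to RDB files
$ redis-shake.linux -conf=redis-shake.conf -type=dump
# Restore those RDB files to the destination Redis
$ redis-shake.linux -conf=redis-shake.conf -type=restore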

5.2 Redis-shake configuration

  • Configuration guide: https://github.com/alibaba/RedisShake/wiki/
  • Redis-shake migration: https://help.aliyun.com/document_detail/111066.html

5.2.1 Sync Mode Parameters

  • Help.aliyun.com/document_de…

In sync mode, redis-shake uses sync or PSYNC to synchronize data from the redis at the source end to the redis at the destination end. Full data synchronization and incremental data synchronization are supported. Incremental data synchronization starts automatically after full data synchronization is complete.

Sync mode parameters:

  • source.type: type of the source Redis; standalone, sentinel, cluster, and proxy are supported. Example: cluster
  • source.address: connection address and service port of the source Redis. Example: xxx.xxx.1.10:6379
  • source.password_raw: connection password of the source Redis. Example: SourcePass233
  • target.address: connection address and service port of the destination Redis. Example: xx.redis.rds.aliyuncs.com:6379
  • target.password_raw: connection password of the destination Redis. Example: TargetPass233
  • rewrite: whether to overwrite a key that already exists on the destination Redis; true (overwrite) or false (do not overwrite). The default is true; if set to false and data conflicts exist, an exception is reported.
  • target.db: logical database of the destination Redis to write the migrated data to; -1 keeps the same db as on the source. Example: -1
  • parallel: number of concurrent threads used during RDB file synchronization to improve performance; the minimum is 1 and the recommended value is 64.
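
Putting the parameters together, a minimal sync-mode snippet might look like the following (the values are the examples from the list above; target.type is included as well, as used in section 5.3):

source.type = cluster
source.address = xxx.xxx.1.10:6379
source.password_raw = SourcePass233
target.type = cluster
target.address = xx.redis.rds.aliyuncs.com:6379
target.password_raw = TargetPass233
target.db = -1
rewrite = true
parallel = 64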

5.3 Performing Migration

The cluster_A -> cluster_B and cluster_A -> redis_C (master/slave) scenarios are illustrated in turn.

5.3.1 Migrating redis-cluster to Redis-cluster

Source Redis cluster A:

role    version      IP              port
node-1  redis-5.0.6  192.124.64.212  6301,7301
node-2  redis-5.0.6  192.124.64.213  6301,7301
node-3  redis-5.0.6  192.124.64.214  6301,7301

Target Redis cluster B:

role    version      IP              port
node-1  redis-5.0.6  192.124.64.212  6302,7302
node-2  redis-5.0.6  192.124.64.213  6302,7302
node-3  redis-5.0.6  192.124.64.214  6302,7302

1) Check the source/target Redis information

# Obtain source cluster information
$ echo 'cluster nodes' | redis-cli -c -h 192.124.64.212 -p 6301 -a pwd123 | egrep master
# Source cluster masters: 192.124.64.212:6301;192.124.64.213:6301;192.124.64.214:6301

# Obtain target cluster information
$ echo 'cluster nodes' | redis-cli -c -h 192.124.64.212 -p 6302 -a pwd123 | egrep master
# Target cluster masters: 192.124.64.212:6302;192.124.64.213:6302;192.124.64.214:6302

2) Configure RedisShake

Modify the configuration file. The key parameters are source.type, source.address, source.password_raw, target.type, target.address, and target.password_raw.

$ which redis-shake
/usr/local/redis-shake/redis-shake

$ cd /usr/local/redis-shake/
$ vim /usr/local/redis-shake/redis-shake-cluster.conf

# The configuration is as follows
# id uniquely identifies a synchronization process
id = redis-shake-cluster
log.file = /var/log/redis-shake.log
log.level = info
# Port configuration (defaults)
system_profile = 9310
http_profile = 9320

# Source-related configuration [cluster|standalone]
source.type = cluster
# For a cluster, list all db nodes of the source cluster (and likewise for the destination cluster below)
source.address = 192.124.64.212:6301;192.124.64.213:6301;192.124.64.214:6301
source.password_raw = pwd123
source.auth_type = auth

# Destination-related configuration
target.type = cluster
target.address = 192.124.64.212:6302;192.124.64.213:6302;192.124.64.214:6302
target.password_raw = pwd123
target.auth_type = auth
# all the data will be written into this db. < 0 means disable.
target.db = -1

# Filter keys with the given prefixes, e.g.
# filter.key = QH;cn
# During the full RDB sync phase, whether to overwrite a key that already exists on the target redis; if false an exception is thrown
rewrite = true
# Set to true; if false, the /metric endpoint always returns 0
metric = true

3) Perform the migration. Run the following command to start data synchronization. If the data volume is very large, run the migration in the background during off-peak hours.

$redis-shake.linux -type sync -conf ./redis-shake-cluster.conf   

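
To keep it running in the background during off-peak hours, a simple nohup wrapper is enough (the log path here is illustrative):

$ nohup redis-shake.linux -type sync -conf ./redis-shake-cluster.conf >> /var/log/redis-shake-nohup.log 2>&1 &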

4) Monitor progress

  • Restful Monitoring indicators github.com/alibaba/Red…

View logs:

Watch the synchronization log to check the status. When "sync rdb done" appears, the full synchronization has finished and the link has entered the incremental phase. When +forwardCommands=0, no new data is being written on the source and no incremental data is flowing over the link, so you can pick a suitable time to switch the service over to cluster_B.

$tail -f /var/log/redis-shake.log 
2020/04/15 20:28:40 [INFO] dbSyncer[2] FlushEvent:IncrSyncStart Id:redis-shake-cluster
2020/04/15 20:28:40 [INFO] dbSyncer[1] total = 178B -         178B [100%]  entry=0
2020/04/15 20:28:40 [INFO] dbSyncer[1] sync rdb done
2020/04/15 20:28:40 [INFO] dbSyncer[1] FlushEvent:IncrSyncStart Id:redis-shake-cluster
2020/04/15 20:28:41 [INFO] dbSyncer[2] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2020/04/15 20:28:41 [INFO] dbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0


Monitoring status:

$ curl 127.0.0.1:9320/metric | python -m json.tool

5) Check and verify. You can use the scan command to spot-check the synchronized data, or use redis-full-check to verify data consistency between the two ends; for details, see Verifying Data After Migration.
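
As a sketch of the redis-full-check route (assuming its -s/-p/-t/-a options; the cluster endpoints and any extra type flags should be checked against the tool's documentation):

$ redis-full-check -s "192.124.64.212:6301;192.124.64.213:6301;192.124.64.214:6301" -p pwd123 \
    -t "192.124.64.212:6302;192.124.64.213:6302;192.124.64.214:6302" -a pwd123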

Test write data:

# source Redis, write test data
$ redis-cli -c -h 192.124.64.212 -p 6301 -a pwd123
192.124.64.212:6301> set foo bar
192.124.64.214:6301> set foo1 bar1
192.124.64.214:6301> scan 0 match '*' count 1000
1) "0"
2) 1) "foo"
   2) "foo1"

# Scan the target Redis
$ redis-cli -c -h 192.124.64.212 -p 6302 -a pwd123 scan 0 match '*' count 1000
1) "0"
2) 1) "foo"
   2) "foo1"

Note: the scan command only scans a single Redis node; it does not scan the whole cluster.
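
To spot-check the whole target cluster you can simply loop over its master nodes (a sketch using the node list from the planning table):

$ for host in 192.124.64.212 192.124.64.213 192.124.64.214; do
    redis-cli -h $host -p 6302 -a pwd123 scan 0 match '*' count 1000
  done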

Check the number of keys:

# Source Redis
$ /usr/local/redis-5.0.6/bin/redis-cli --cluster info 192.124.64.212:6301 -a pwd123
192.124.64.212:6301 (5e19efdd...) -> 2 keys | 6720 slots | 1 slaves.
192.124.64.213:6301 (8021b063...) -> 0 keys | 5462 slots | 1 slaves.
192.124.64.214:6301 (7f411012...) -> 2 keys | 4202 slots | 1 slaves.
[OK] 4 keys in 3 masters.

# Target Redis
$ redis-cli -h 192.124.64.212 -p 6303 -a pwd123 scan 0 match '*' count 1000
1) "0"
2) 1) "foo"
   2) "foo1"

You can see that the source and target Redis clusters have the same number of keys.

5.3.2 Migrating a Redis cluster to Redis primary/secondary

Source Redis cluster A:

role    version      IP              port
node-1  redis-5.0.6  192.124.64.212  6301,7301
node-2  redis-5.0.6  192.124.64.213  6301,7301
node-3  redis-5.0.6  192.124.64.214  6301,7301

Target Redis master/slave C:

role    version      IP              port
node-1  redis-5.0.6  192.124.64.212  6303
node-2  redis-5.0.6  192.124.64.213  6303

1) Configure RedisShake

Modify the configuration file. The key parameters are source.type, source.address, source.password_raw, target.type, target.address, and target.password_raw.

$ vim ./redis-shake-standalone.conf
source.type = cluster
# For a cluster, list all db nodes of the source cluster (and likewise for the destination below)
source.address = 192.124.64.212:6301;192.124.64.213:6301;192.124.64.214:6301
source.password_raw = pwd123
source.auth_type = auth

# Destination-related configuration
target.type = standalone
target.address = 192.124.64.212:6303;192.124.64.213:6303
target.password_raw = pwd123

2) Perform the migration

Run the following command to start data synchronization. If the data volume is very large, run the migration in the background during off-peak hours.

$redis-shake.linux -type sync -conf ./redis-shake-standalone.conf   

3) Check and verify

Check log:

$tail -f  /var/log/redis-shake.log 
2020/04/15 21:10:23 [INFO] dbSyncer[2] sync rdb done
2020/04/15 21:10:23 [INFO] dbSyncer[2] FlushEvent:IncrSyncStart Id:redis-shake-cluster
2020/04/15 21:10:23 [WARN] dbSyncer[2] GetFakeSlaveOffset not enable when psync == false
2020/04/15 21:10:24 [INFO] dbSyncer[1] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0

Check data:

# source Redis, write data
$ redis-cli -c -h 192.124.64.212 -p 6301 -a pwd123 set hello hi
$ redis-cli -c -h 192.124.64.212 -p 6301 -a pwd123 set foo barbar
$ redis-cli -c -h 192.124.64.212 -p 6301 -a pwd123 set foo1 bar1bar1

# Check the key count on the source cluster
$ redis-cli --cluster info 192.124.64.212:6301 -a pwd123
192.124.64.212:6301 (5e19efdd...) -> 1 keys | 6720 slots | 1 slaves.
192.124.64.213:6301 (8021b063...) -> 0 keys | 5462 slots | 1 slaves.
192.124.64.214:6301 (7f411012...) -> 2 keys | 4202 slots | 1 slaves.
[OK] 3 keys in 3 masters.

# Target Redis, view the data
$ redis-cli -h 192.124.64.212 -p 6303 -a pwd123 scan 0 match '*' count 1000
1) "0"
2) 1) "hello"
   2) "foo"
   3) "foo1"

You can see that the source and target Redis clusters have the same number of keys.

Reference:

  • redis.io/
  • Github.com/alibaba/Red…
  • Tao teacher operation notes – Redis5 Cluster construction
  • Yq.aliyun.com/articles/69…
  • Help.aliyun.com/document_de…