Migration is a routine task in operations, and migrating while keeping the service online is a real test for a DBA. The following is a summary of the current migration plans:
Plan 1: Use Replication and Snapshot to migrate the data, then switch the service over
Steps:
- Set up replication between the two clusters as primary/secondary (cluster to be migrated -> new cluster)
- Create the same table in the new cluster as in the old cluster and start synchronization
- Pause the synchronization and take a snapshot of the tables in the old cluster, for example table A:
list_snapshots
snapshot 'A', 'A_snapshot'
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot A_snapshot -copy-to hdfs://newcluster:8020/hbase -mappers <n>
Once the export completes, import the snapshot in the new cluster:
disable 'A'
restore_snapshot 'A_snapshot'
enable 'A'
- Resume the synchronization to complete the migration, and switch the service over at the scheduled time. (Pausing and resuming replication is done per peer; see the sketch below.)
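A minimal sketch of pausing and resuming replication in the HBase shell, assuming the replication peer was registered with id '1' (the peer id is an assumption for illustration):
# Pause shipping of edits to the new cluster before taking the snapshot
disable_peer '1'
# ... take, export and restore the snapshot as described above ...
# Resume shipping so the backlog of edits is replayed on the new cluster
enable_peer '1'
# Check replication lag and queued edits
status 'replication'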
Plan 2: Use Bulkload
Create the table in the target cluster, cut the traffic over, and then use Bulkload to backfill the remaining data; a sketch of this route follows.
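A minimal sketch of the Bulkload route, assuming TSV input and the HDFS paths shown here (the paths, column mapping and table name are assumptions for illustration; the exact class names vary slightly across HBase versions):
# Generate HFiles with an MR job (ImportTsv is used here as an example)
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,f:col1 \
  -Dimporttsv.bulk.output=hdfs://newcluster:8020/bulkload/test_table \
  test_table hdfs://newcluster:8020/input/test_table.tsv
# Load the generated HFiles into the table on the new cluster
bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  hdfs://newcluster:8020/bulkload/test_table test_table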
The basic idea is to export a snapshot to the destination cluster, then enable real-time data synchronization (replication) between the two clusters, and finally use MR jobs to copy over the data written between the snapshot creation and the start of synchronization.
Use test_table as an example:
1. Add a replication relationship between the source cluster and the target cluster, and create the same tables in the target cluster as in the source cluster, for example:
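A minimal sketch in the HBase shell; the peer id '1', the ZooKeeper quorum and the column family 'f' are assumptions for illustration, and the add_peer syntax differs slightly between HBase versions:
# On the source cluster: register the target cluster as a replication peer
add_peer '1', CLUSTER_KEY => 'zk1,zk2,zk3:2181:/hbase'
# (older shells use: add_peer '1', 'zk1,zk2,zk3:2181:/hbase')
list_peers
# On the target cluster: create the same table as in the source cluster
create 'test_table', {NAME => 'f'}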
2. Import the historical data into the secondary cluster with a snapshot. Execute the following commands in the source cluster:
a. list_snapshots
b. snapshot 'test_table', 'test_table_snapshot'
c. bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot test_table_snapshot -copy-to hdfs://yourdestination:9100/hbase -mappers 16
Execute the following commands in the target cluster:
a. disable 'test_table'
b. restore_snapshot 'test_table_snapshot'
c. enable 'test_table'
3. Enable REPLICATION on the table by altering its column family:
alter 'test_table', {NAME => 'f', REPLICATION_SCOPE => '1'}
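Depending on the HBase version, the table may have to be disabled before the column family can be altered; a hedged sketch:
disable 'test_table'
alter 'test_table', {NAME => 'f', REPLICATION_SCOPE => '1'}
enable 'test_table'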
4. Use the Export tool to copy the data written between the snapshot and the start of replication into the table on the secondary cluster. Execute the following command in the source cluster:
bin/hbase org.apache.hadoop.hbase.mapreduce.Export test_table hdfs://yourdestination:9100/import/test_table 1 1445961600000 1446048000000
Here 1 specifies the number of versions to export, and the two timestamps specify the start and end timestamps of the data to export. Execute the following command in the target cluster:
bin/hbase org.apache.hadoop.hbase.mapreduce.Import test_table hdfs://yourdestination:9100/import/test_table
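For reference, a sketch of the positional arguments these two MR tools accept (based on their usage output; exact options vary by HBase version):
# Export a table, optionally limited by version count and time range
bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
# Import a previously exported dump back into a table
bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>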
Note: 1. In the path hdfs://yourdestination:9100/, the port is the port on which HDFS serves external requests, as configured in hdfs-site.xml, and yourdestination is the hostname of the active NameNode in the target cluster.
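After the Import finishes and replication has caught up, it is worth verifying that the two clusters hold the same data. A minimal sketch using HBase's built-in VerifyReplication MR job, assuming the replication peer id is '1' (an assumption for illustration):
# Run on the source cluster: compares rows of test_table against the peer cluster
bin/hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 1 test_table
# The job counters report GOODROWS / BADROWS; a quick sanity check from the shell:
count 'test_table'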