Abstract: The purpose of database migration is to migrate the business. Whether the business can be switched over smoothly depends on the database's migration capability and on the accuracy of the data after migration. From the business side's perspective, at least the following three kinds of correctness must be satisfied before the business switch can happen.
This article is shared from the Huawei Cloud community post "Huawei Cloud GaussDB(for OpenGauss) live stream No. 4: with the right migration tools, migration can be easy", original author: clever.
1. Background
With the continued development of the GaussDB(for OpenGauss) database, more and more customers are choosing it, and a large share of them want to replace their existing systems with GaussDB(for OpenGauss). These customers currently run a variety of databases, such as Oracle, MySQL, and PostgreSQL, so migrating a customer's current database to GaussDB(for OpenGauss) is an urgent need. GaussDB(for OpenGauss) ships with the GDS data migration tool, which moves data efficiently between GaussDB(for OpenGauss) instances but cannot handle heterogeneous or real-time synchronization scenarios. Huawei Cloud's database migration service DRS provides heterogeneous migration and real-time synchronization for GaussDB(for OpenGauss) as an easy-to-use, stable, and efficient cloud service, helping customers migrate their databases with ease.
2. Overall solution of database migration
The purpose of database migration is to migrate the business. Whether the business can be switched over smoothly depends on the database's migration capability and on the accuracy of the data after migration. From the business side's perspective, at least the following three kinds of correctness must be satisfied before the business switch can happen.
- Object migration is correct
All database objects, such as stored procedures, functions, triggers, table structures, and indexes, must be migrated to the target database in full, and the migrated objects must behave consistently with their counterparts in the source database.
- Data migration is correct
The full data set must be migrated from the source database to the target database. When the business has requirements on the downtime window, full plus incremental online migration should be considered so that the business is not interrupted. It must also be possible to verify the synchronized data, to confirm the accuracy of the migrated data.
- The business runs correctly after the migration
Once objects and data have been migrated to the target database, two risk points remain for the business switch: whether the business runs correctly on the target database, and whether the target database's performance can carry the business load as well as the source did. Heterogeneous databases differ greatly, with many different design concepts and implementation approaches, so two objects that look similar may produce completely different results or run at completely different efficiency. Tools are therefore needed to verify these differences and ensure the business runs correctly after migration.
To meet the requirements of the business switch above, Huawei Cloud provides an end-to-end database migration solution. Four tool products, UGO, DRS data migration, DRS data verification, and DRS traffic playback, close the loop on the whole migration process.
- Syntax Migration (UGO)
UGO migrates Oracle database objects to GaussDB(for OpenGauss) and produces a complete migration evaluation report: which objects are fully compatible and migrate as-is, which need transformation before migration, and which require changes on the business side.
- DRS- Data migration
DRS migrates data from Oracle, MySQL, PostgreSQL, and other databases to GaussDB(for OpenGauss) in real time.
- DRS- Data validation
DRS verifies data consistency after migration, with row-level comparison, content-level comparison, and real-time incremental data comparison.
- DRS- Traffic Playback
DRS captures the business traffic of an Oracle database, converts the captured SQL, and replays it on GaussDB(for OpenGauss).
3. Migrating data to the cloud with DRS
DRS provides a simple, user-friendly interface and a process-oriented configuration that lets customers build synchronization links by following the prompts step by step. In addition to data migration from Oracle to GaussDB(for OpenGauss), DRS also supports data synchronization between other databases. The following is a list of source and target database types currently supported by DRS.
In the process of data migration, DRS uses many means and techniques to reduce the possible risks and ensure the stability of the migration process and the consistency of the final data.
- Online migration
DRS migrates the existing data in the customer's database to GaussDB(for OpenGauss) through full migration, and synchronizes the customer's real-time change data to GaussDB(for OpenGauss) through incremental synchronization, which parses the source database's logs in real time. With a seamless connection between the full and incremental phases, customers can migrate all of their data to GaussDB(for OpenGauss) without interrupting their business.
- Pre-verification
To surface migration risks before a task starts rather than as errors after launch, DRS adds a pre-verification step. It checks configuration information, database compatibility, connectivity, and so on in advance, and it also raises alarms for items that will migrate successfully but may affect the business, so customers can detect and handle them early.
- Breakpoint mechanism
To keep migrated data consistent, every DRS component maintains a breakpoint mechanism, which guarantees accurate synchronization without data loss across normal start/stop, abnormal restart, and failover scenarios.
4. Implementation principle of DRS technology
DRS is divided into two main modules: full data synchronization and incremental data synchronization. Full synchronization handles the migration of static data; incremental synchronization handles the migration of data that is changing in real time.
Technical architecture for full synchronization
The overall logic of full synchronization is relatively simple: query data from the source database with SELECT statements and write it to the target database. Still, there are several key technical points in the implementation.
- Data fragmentation
Full synchronization is generally concurrent at table granularity: multiple threads synchronize multiple tables at the same time. In practice, however, customers often have individual tables with very large data volumes, for example billions or even tens of billions of rows, and such a table dominates the total synchronization time. How can single-table synchronization be made faster? A single table can be split further, into multiple shards according to the primary key, and the shards are then synchronized in parallel by multiple threads.
The DRS currently shards the table according to the following strategy:
- Tables without primary keys are not sharded
- Partitioned tables are synchronized partition by partition; individual partitions are not sharded further
- Tables with a primary key are sharded by the first column of the primary key
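The range-based sharding strategy above can be sketched as follows. This is an illustrative reconstruction, not DRS code; the names `compute_shards` and `Shard` are hypothetical, and it assumes the first primary-key column is an integer:

```python
# Sketch of shard-range computation for a primary-key table. Each shard is a
# half-open range on the first PK column that one thread can export with
# SELECT ... WHERE pk >= lower AND pk < upper.
from dataclasses import dataclass
from typing import List

@dataclass
class Shard:
    lower: int   # inclusive lower bound on the first PK column
    upper: int   # exclusive upper bound

def compute_shards(min_pk: int, max_pk: int, shard_size: int) -> List[Shard]:
    """Split [min_pk, max_pk] into fixed-size ranges for parallel export."""
    shards = []
    lower = min_pk
    while lower <= max_pk:
        shards.append(Shard(lower, lower + shard_size))
        lower += shard_size
    return shards

# A table keyed 1..2,000,000,000 split into 10M-row shards yields 200 shards
# that can be synchronized concurrently:
shards = compute_shards(1, 2_000_000_000, 10_000_000)
```

Tables without a primary key offer no such stable split column, which is why the strategy above leaves them unsharded.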
- Data is not staged to disk
To reduce disk consumption during full synchronization, DRS does not cache exported data on disk; instead, it passes the data directly to the importing thread through memory. When export and import run at comparable speeds, this maximizes full synchronization efficiency.
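A minimal sketch of this disk-free pipeline, assuming one export thread and one import thread joined by a bounded in-memory queue (the function and variable names are illustrative, not DRS internals):

```python
# Export streams batches through a bounded queue to import; nothing touches
# local disk, and the bounded queue lets the slower side back-pressure the
# faster one instead of buffering unboundedly.
import queue
import threading

DONE = object()  # sentinel marking the end of the export stream

def export_rows(source_rows, q, batch_size=2):
    batch = []
    for row in source_rows:
        batch.append(row)
        if len(batch) == batch_size:
            q.put(batch)       # blocks when the importer falls behind
            batch = []
    if batch:
        q.put(batch)
    q.put(DONE)

def import_rows(q, target):
    while True:
        batch = q.get()
        if batch is DONE:
            break
        target.extend(batch)   # stands in for a batched INSERT

source = list(range(10))
target = []
q = queue.Queue(maxsize=4)     # small buffer caps memory use
t = threading.Thread(target=import_rows, args=(q, target))
t.start()
export_rows(source, q)
t.join()
```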
- Breakpoint control
An interruption partway through full synchronization is a painful problem. A table with 200 million rows may fail at 180 million because of a network problem or a "snapshot too old" error on the source. Without a good breakpoint mechanism, all the earlier effort is wasted and synchronization must start over from scratch. DRS records breakpoints at shard granularity: even if synchronization is interrupted as in the example above, the task can simply be restarted, shards that have already been synchronized are skipped, and only the unfinished shards continue.
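The shard-level breakpoint logic can be sketched like this; the persisted `done` set and the `sync_one` callback are illustrative stand-ins for DRS's actual checkpoint storage:

```python
# Completed shard ids are recorded after each success; a restart re-runs only
# shards that are not yet marked done, so earlier work is never repeated.
def sync_with_checkpoints(shard_ids, done, sync_one):
    """done: the persisted set of finished shards; sync_one may raise."""
    for sid in shard_ids:
        if sid in done:
            continue           # already synchronized before the interruption
        sync_one(sid)
        done.add(sid)          # persist the breakpoint only after success

done = set()
attempts = []

def flaky_sync(sid):
    attempts.append(sid)
    if sid == 3 and attempts.count(3) == 1:
        raise ConnectionError("network error mid-sync")

try:
    sync_with_checkpoints(range(5), done, flaky_sync)
except ConnectionError:
    pass                        # first run fails at shard 3
sync_with_checkpoints(range(5), done, flaky_sync)  # resume skips shards 0-2
```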
- Flow control
A customer's business typically has peak and off-peak periods, and during peaks the database's resources are most heavily used. Full synchronization should avoid business peaks as much as possible, because it consumes a lot of CPU, memory, and network resources on the source database. DRS provides a flow-control mechanism to reduce source-side resource usage during peaks, mainly by limiting network traffic: the customer configures a flow-control time window, and during full synchronization DRS computes the synchronized traffic in real time. Within the window, once traffic exceeds the configured threshold, DRS slows down data extraction; outside the window, full-speed synchronization resumes.
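One plausible shape for that window-based throttle, shown as a pure function so the logic is testable (the window hours, the byte-rate limit, and the function name are assumptions, not DRS configuration keys):

```python
# Inside the configured peak window, pause long enough that the average
# export rate falls back to the limit; outside the window, never pause.
def throttle(now_hour, sent_bytes, elapsed_s, window=(9, 18),
             limit_bps=1_000_000):
    """Return seconds to pause before sending the next batch."""
    start, end = window
    in_window = start <= now_hour < end
    if not in_window or elapsed_s <= 0:
        return 0.0
    rate = sent_bytes / elapsed_s
    if rate <= limit_bps:
        return 0.0
    # Sleeping (sent/limit - elapsed) seconds brings the average down to limit.
    return sent_bytes / limit_bps - elapsed_s

# At 10:00, 5 MB in 2 s (~2.5 MB/s) must pause 3 s to average 1 MB/s;
# at 20:00 the same traffic is outside the window and runs unthrottled.
peak_pause = throttle(10, 5_000_000, 2.0)
offpeak_pause = throttle(20, 5_000_000, 2.0)
```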
- Full volume + incremental seamless connection
When a business cuts over to a new database, there are generally two options for data migration. The first is a one-time migration of all source data to the target, but this requires a business stop window whose size depends on how long the full migration takes; with large data volumes the whole migration can take several days, and such an outage is unacceptable. The second option is migration without business interruption, based on a seamless connection between full migration and incremental synchronization. For Oracle->GaussDB(for OpenGauss) migration, Oracle can export a snapshot as of a specified SCN. DRS builds on this feature: full synchronization exports with a specified SCN, so the full data is the snapshot as of that SCN point, and incremental synchronization then takes the same SCN as its demarcation point, synchronizing to the target only transactions with a larger SCN. This links the full and incremental phases seamlessly, and the synchronization process requires no business downtime. Once full synchronization completes and incremental synchronization catches up to the current time, the service switch can be carried out, keeping the business interruption window at the second level.
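The SCN handoff can be illustrated with the two Oracle interfaces involved: a flashback query (`AS OF SCN`) for the consistent full export, and LogMiner started from the same SCN for the incremental phase. The statements are only built here, not executed; the function names and the sample SCN are hypothetical:

```python
# Build the two sides of the SCN handoff. A real pipeline would read the
# current SCN at task start and run these over a database connection.
def full_sync_sql(table, scn):
    # The full export sees exactly the snapshot as of the chosen SCN.
    return f"SELECT * FROM {table} AS OF SCN {scn}"

def incremental_start_sql(scn):
    # Incremental capture replays only changes committed after that SCN.
    return (f"BEGIN DBMS_LOGMNR.START_LOGMNR(STARTSCN => {scn}, "
            "OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG); END;")

cut_scn = 17_234_567  # hypothetical SCN captured when the task starts
full_stmt = full_sync_sql("orders", cut_scn)
incr_stmt = incremental_start_sql(cut_scn)
```

Because both phases pivot on the same SCN, no change is exported twice and none is missed, which is what makes the handoff seamless.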
Technical architecture for incremental synchronization
The incremental synchronization architecture of DRS has three main parts: data capture, on-disk staging files, and data playback.
- Data capture
Data capture obtains the source database's change data in real time by parsing its logs. The internal implementation involves several steps: log pulling, log parsing, transaction integration, and writing data to disk.
- Log pull
DRS uses Oracle's LogMiner interface to obtain redo logs in real time; once redo has been archived, DRS reads the archived log files. To prevent archived logs on the source from being deleted before they are consumed, DRS starts log-pull threads (several can run concurrently) that pull the logs to local storage for subsequent parsing.
- Log parsing
The data returned by the Oracle LogMiner interface must be parsed further to obtain the actual changes. The DRS log-parsing thread filters and splits the returned data, maps its metadata, applies transformations, and assembles each result into a complete change-record object.
- Transaction integration
Log parsing proceeds in the order in which data changed in the source database, so the parsed records of different transactions are interleaved. The records must be grouped by transaction ID to form complete transactions. In Oracle RAC scenarios, the transactions of different nodes must additionally be sorted, to avoid replaying transactions out of order.
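Grouping interleaved log records into complete transactions can be sketched like this (the record shape and the `integrate` name are illustrative; RAC cross-node ordering is omitted):

```python
# Records arrive in log order with transactions interleaved. Changes are
# buffered per transaction id and emitted as one unit only at COMMIT;
# a ROLLBACK discards the buffered changes entirely.
def integrate(records):
    """records: (xid, op, data) tuples in log order; yields (xid, changes)
    for each transaction at the moment its commit record is seen."""
    pending = {}
    for xid, op, data in records:
        if op == "commit":
            yield xid, pending.pop(xid, [])
        elif op == "rollback":
            pending.pop(xid, None)   # uncommitted changes are never replayed
        else:
            pending.setdefault(xid, []).append((op, data))

log = [
    ("tx1", "insert", "a"), ("tx2", "insert", "b"),
    ("tx1", "update", "c"), ("tx2", "commit", None),
    ("tx1", "commit", None), ("tx3", "insert", "d"),
    ("tx3", "rollback", None),
]
txns = list(integrate(log))  # tx2 commits first, so it is emitted first
```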
- Writing transactions to disk
After transaction integration, transactions are ordered by the source database's commit sequence, and DRS writes them to disk files in that order. The staged data contains all the information about each changed record in the source database, including table information, column information, transaction information, the data itself, and additional details such as the timestamp and ROWID. From this information, the downstream components can rebuild each change into SQL against the target object.
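The shape of one staged change record might look like the following. This is an assumed illustration (JSON lines, with made-up field names and values), not DRS's actual on-disk format; the point is that each record is self-describing enough for the replayer to rebuild SQL without consulting the source:

```python
# One change record carries table, column, transaction, timestamp, and ROWID
# metadata alongside the changed data, one record per line in the drop file.
import json

record = {
    "schema": "app", "table": "orders",
    "op": "update",
    "txn": {"xid": "tx42", "commit_scn": 17234567},
    "ts": "2021-06-01T10:00:00Z",
    "rowid": "AAAR5mAAEAAAACXAAA",
    "columns": {"id": 7, "status": "paid"},   # new values
    "old_keys": {"id": 7},                    # identifies the target row
}
line = json.dumps(record)        # serialize for the staging file
restored = json.loads(line)      # the replayer reads it back losslessly
```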
- Data playback
Data playback executes the captured data in the target database, but it is decoupled from data capture. The playback component reads the DRS disk files, parses out each change record, reconstructs the corresponding SQL statement from the metadata recorded in the file, and executes it in the target database.
Before playback, DRS provides filtering and transformation: synchronized data can be filtered by configurable conditions (for example, only synchronizing rows with ID < 10000), and the table, schema, or column names of the synchronized data can be mapped.
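A sketch of that filter-and-map step, using the ID < 10000 example from the text; the record shape, mapping table, and function name are illustrative assumptions:

```python
# Keep only changes whose row satisfies the configured predicate, and rename
# (schema, table) per a configured mapping before the change is replayed.
def filter_and_map(change, table_map, predicate):
    if not predicate(change["row"]):
        return None                       # filtered out: never replayed
    src = (change["schema"], change["table"])
    schema, table = table_map.get(src, src)
    return {**change, "schema": schema, "table": table}

table_map = {("app", "orders"): ("app2", "orders_new")}
pred = lambda row: row["id"] < 10000      # the example condition from the text

kept = filter_and_map(
    {"schema": "app", "table": "orders", "row": {"id": 42}}, table_map, pred)
dropped = filter_and_map(
    {"schema": "app", "table": "orders", "row": {"id": 99999}}, table_map, pred)
```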
Exception handling and playback performance are two important considerations: DRS handles abnormal data during playback through a configurable conflict policy, and improves load performance through a concurrency mechanism.
- Conflict strategy
A conflict is a data-level error during playback (such as a primary-key conflict, or an update or delete that cannot find its record), caused by data inconsistency between the two sides. DRS offers three strategies for handling this class of error: overwrite, ignore, and wait.
Overwrite: When a conflict occurs, overwrite the target database with the captured data
Ignore: After data conflict, simply skip the error record and continue execution
Wait: After data conflict, wait for manual processing
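The three policies can be condensed into one dispatch function. This is a toy model (a dict stands in for the target table, and the names are illustrative), but it captures the decision each policy makes on a conflicting record:

```python
# On a conflict (here: the key already exists), the configured policy decides
# whether the captured data wins, the record is skipped, or work is parked
# for manual handling; non-conflicting records are simply applied.
def apply_change(target, change, policy):
    key, value = change
    if key in target:                   # e.g. primary-key conflict on insert
        if policy == "overwrite":
            target[key] = value         # captured data overwrites the target
            return "overwritten"
        if policy == "ignore":
            return "skipped"            # keep the target row, move on
        return "waiting"                # "wait": park for manual processing
    target[key] = value
    return "applied"

target = {1: "old"}
status = apply_change(target, (1, "new"), "overwrite")  # conflict on key 1
```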
- Concurrent mechanism
The DRS concurrency mechanism uses record-level concurrency to maximize data-loading performance.
DRS first reads incremental transaction data from the disk file and places it, in order, into a queue. A parallel analysis engine takes each record from the queue and uses its primary-key information to determine whether it conflicts with data already in flight. Records without conflicts are spread across multiple thread queues for parallel execution; when the amount of data in a thread's queue reaches a configured threshold, that batch is executed in the target database as one transaction. Records with conflicts are placed into a conflict queue; after a thread finishes executing its current batch, the conflict queue re-enters the parallel analysis engine to be checked for conflicts again.
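A simplified, single-pass sketch of that dispatch step (the real engine runs continuously and re-feeds the conflict queue; the names here are illustrative):

```python
# Records whose primary keys are not held by any in-flight batch are spread
# round-robin across worker queues; records that would touch an in-flight key
# wait in a conflict queue for the next round, preserving per-key ordering.
from collections import deque

def dispatch(records, n_workers):
    worker_queues = [[] for _ in range(n_workers)]
    conflict_queue = deque()
    in_flight = set()
    for i, (pk, change) in enumerate(records):
        if pk in in_flight:
            conflict_queue.append((pk, change))  # same key already queued
        else:
            in_flight.add(pk)
            worker_queues[i % n_workers].append((pk, change))
    return worker_queues, conflict_queue

# Two changes to key 1: the second must wait so the update cannot overtake
# the insert on another thread.
records = [(1, "ins"), (2, "ins"), (1, "upd"), (3, "ins")]
queues, conflicts = dispatch(records, n_workers=2)
```

Keeping all changes to one primary key in commit order while letting unrelated keys run in parallel is what makes record-level concurrency safe.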
PS: This content is compiled from the "GaussDB(for OpenGauss) DRS" technology live broadcast.