1. Introduction to Backup

1.1 Backup Types

Database backups are commonly classified along the following dimensions:

Physical backup and logical backup

  • Physical backup: Back up the complete database directory and data files. In this mode, a large number of I/O operations are performed, but no logical conversion is required. Therefore, backup and recovery are usually fast.
  • Logical backup: Backs up the database structure and contents, typically as SQL statements. It is slower because logical transformations are performed, and when saved in text format the output is larger than a physical backup. The granularity of a logical restore can range from the server level (all databases) down to specific tables, but the backup does not include log files, configuration files, or other non-database content.

Full backup and incremental backup

  • Full backup: Backs up all data of the server at a specified point in time.
  • Incremental backup: Backs up only the changes made to the data over a given span of time (from one point in time to another).

Online and offline backup

  • Online backup: The backup is taken while the database service is running, so other clients can still connect to the database. However, to ensure data consistency, the data may be locked during the backup, which can still restrict client access.
  • Offline backup: The backup is taken while the database service is stopped. The process is simple, but the database cannot serve clients during the backup, so service is interrupted.

1.2 Backup Tools

MySQL supports a variety of backup tools. Here are three common ones:

  • mysqldump: a logical backup tool that supports backing up the whole server, a single database, or a single table. Because it produces a logical backup, the generated file is larger than a physical backup and restoring it takes longer.
  • mysqlpump: a backup tool introduced in MySQL 5.7. It extends mysqldump with multi-threaded backup and compression of backup files, which improves backup speed and reduces the storage space the backup files require.
  • Xtrabackup: a hot-backup tool from Percona that enables fast and reliable backups without downtime and without interrupting database transactions. It supports full and incremental backups, and because it performs physical backups, recovery is relatively fast.

2. mysqldump

2.1 Common Parameters

The mysqldump syntax is as follows:

# Back up a single database, or specified tables within it
mysqldump [options] db_name [tbl_name ...]
# Back up multiple specified databases
mysqldump [options] --databases db_name ...
# Back up all databases in the current instance
mysqldump [options] --all-databases

Here options stands for optional parameters; common ones are as follows:

  • --host=host_name, -h host_name

    Specify the server address.

  • --user=user_name, -u user_name

    Specify the user name.

  • --password[=password], -p[password]

    Specify a password. Generally it should not be given in plain text on the command line; omit the value and enter it when prompted.

  • –default-character-set=charset_name

    The character set used for the exported text. The default character set is UTF8.

  • --events, -E

    The backup contains events in the database.

  • –ignore-table=db_name.tbl_name

    Exclude the specified table from the backup; the table must be given with both database and table name (db_name.tbl_name). The option can also be applied to views.

  • --routines, -R

    Backups contain stored procedures and custom functions in the database.

  • –triggers

    The backup contains triggers in the database.

  • --where='where_condition', -w 'where_condition'

    When exporting a single table, you can specify a filter condition, such as a user name --where="user='jimf'" or a range -w "userID > 1".

  • --lock-all-tables, -x

    Lock all tables across all databases to ensure consistency of the backup data. This option automatically turns off --single-transaction and --lock-tables.

  • --lock-tables, -l

    Lock all tables in the current database. This ensures consistency among the tables of that database, but not global consistency across databases.

  • –single-transaction

    This option sets the transaction isolation level to REPEATABLE READ and starts a transaction before dumping, to ensure consistency of the backup data. It is mainly useful for transactional engines such as InnoDB. However, while the dump is in progress, no other connection should execute ALTER TABLE, CREATE TABLE, DROP TABLE, RENAME TABLE, or TRUNCATE TABLE on the dumped tables, because REPEATABLE READ cannot isolate those statements.

    Also note that --single-transaction and --lock-tables are mutually exclusive, because LOCK TABLES causes any pending transaction to be committed implicitly. When dumping large tables, combine --single-transaction with the --quick option.

  • –quick, -q

    Useful for backing up large tables. It forces mysqldump to retrieve rows from the server one at a time instead of buffering the whole result set in memory, which avoids exhausting memory on large tables.

  • –flush-logs, -F

    Flush the MySQL log files before starting the backup. This option requires the RELOAD privilege. If it is used with --all-databases, the logs are flushed once for each database before its dump begins. If it is used together with --lock-all-tables, --master-data, or --single-transaction, the logs are flushed only once, at the moment all tables are locked or the transaction is started.

  • –master-data[=value]

    Controls whether the generated backup file contains a CHANGE MASTER statement recording the binary log file name and position at the moment of the backup. The option takes the values 1 and 2: with 1, the CHANGE MASTER statement is written as an active statement; with 2, it is written as a comment. --master-data automatically turns off the --lock-tables option, and unless --single-transaction is also specified, it enables --lock-all-tables, in which case a global read lock is acquired briefly when the backup starts.
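Putting several of the options above together, a typical consistent InnoDB hot-backup invocation might look like the following sketch. The host, user, and database name are illustrative placeholders, and the command is only assembled and printed here rather than executed:

```shell
# Sketch only: combines consistency (--single-transaction --quick),
# binlog position recording (--master-data=2 --flush-logs), and
# routine/event/trigger options in one full backup command.
# Host, user, and database name are placeholders.
BACKUP_CMD="mysqldump -h127.0.0.1 -uroot -p \
  --single-transaction --quick --master-data=2 --flush-logs \
  --routines --events --triggers --databases employees"
# Print the command instead of running it in this sketch:
echo "$BACKUP_CMD > employees_full_bak.sql"
```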

2.2 Full Backup

Full backup and restore with mysqldump are straightforward:

# Back up the employees database
mysqldump  -uroot -p --databases employees > employees_bak.sql

# Restore the employees database
mysql -uroot -p  < employees_bak.sql

Single table backup:

# Back up the titles table in the employees database
mysqldump  -uroot -p --single-transaction employees titles > titles_bak.sql

# Restore the titles table in the employees database
mysql> use employees;
mysql> source /root/mysqldata/titles_bak.sql;

2.3 Incremental Backup

mysqldump itself cannot perform incremental backups; instead, incremental backup is achieved by analyzing the binary logs. A concrete example follows:

1. Create a full base backup

First perform a full backup to serve as the basis, using the --master-data parameter described above:

mysqldump -uroot -p --master-data=2 --flush-logs employees titles > titles_bak.sql

View the backup file with the more command; near the beginning you can see the CHANGE MASTER statement, which records the binary log name and offset:

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=155;

2. Incremental backup and restore

Make some changes to the table contents, then generate an incremental backup script by extracting the relevant statements from the binary log file, as in the following example:

mysqlbinlog --start-position=155 \
--database=employees  ${MYSQL_HOME}/data/mysql-bin.000004 > titles_inr_bak_01.sql

Note that in a real production environment, several binary log files may be generated in the interval between the full backup and the incremental backup. In this case, run the same command once for each binary log file:

mysqlbinlog --database=employees  ${MYSQL_HOME}/data/mysql-bin.000005 > titles_inr_bak_02.sql
mysqlbinlog --database=employees  ${MYSQL_HOME}/data/mysql-bin.000006 > titles_inr_bak_03.sql
.....

With the full script (titles_bak.sql) and all incremental scripts (titles_inr_bak_01.sql, titles_inr_bak_02.sql, ...) prepared, import them in order with the source command to complete the full + incremental restore.
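Replay order matters here: the full dump must be applied first, followed by each incremental script in the order it was generated. A minimal sketch of the idea, with the file names from the example above (echo is used so the restore commands are only printed, not executed against a server):

```shell
# Sketch only: replace echo with a real client call,
# e.g.  mysql -uroot -p employees < "$f"  to actually restore.
RESTORE_ORDER=""
for f in titles_bak.sql titles_inr_bak_01.sql titles_inr_bak_02.sql; do
  echo "mysql -uroot -p employees < $f"
  RESTORE_ORDER="$RESTORE_ORDER $f"
done
```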

3. mysqlpump

3.1 Functional Advantages

mysqlpump is an extended and enhanced version of mysqldump. Its main advantages are as follows:

  • The ability to process databases and their objects in parallel, thus speeding up the backup process;

  • Better control of databases and database objects (tables, stored procedures, user accounts, etc.);

  • Can directly compress backup files;

  • The ability to show (estimated) progress indicators during backup;

  • When user accounts are backed up, they are dumped as CREATE USER and GRANT statements, rather than as row data of the mysql system tables, as mysqldump does.

3.2 Common Parameters

mysqlpump is used in the same way as mysqldump; its new options are described below:

  • –default-parallelism=N

    The default number of threads per parallel processing queue. The default value is 2.

  • –parallel-schemas=[N:]db_list

    For backing up multiple databases in parallel: db_list is one or more comma-separated lists of database names; N is the number of threads used. If not set, use the value of the –default-parallelism parameter.

  • –users

    Back up user accounts as CREATE USER and GRANT statements. To back up only user information, you can use the following command:

    mysqlpump --exclude-databases=% --users
  • –compress-output=algorithm

    By default, mysqlpump does not compress backup files. You can use this option to specify a compression algorithm; LZ4 and ZLIB are currently supported. Note that compressed files occupy less storage space but cannot be used directly for restoration; they must be decompressed first, as follows:

    # Compress with the LZ4 algorithm
    mysqlpump --compress-output=LZ4 > dump.lz4
    # Decompress before restoring
    lz4_decompress input_file output_file

    # Compress with the ZLIB algorithm
    mysqlpump --compress-output=ZLIB > dump.zlib
    zlib_decompress input_file output_file

    The MySQL distribution ships with both decompression tools, so no additional installation is required. For more details, see mysqlpump — A Database Backup Program.
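As a quick illustration of the parallelism options above, the following sketch would dump two databases (db1 and db2 are illustrative names) on four threads while the remaining schemas use the default two threads; the command is only assembled and printed, not executed:

```shell
# Sketch only: db1, db2, and the output file name are illustrative.
PUMP_CMD="mysqlpump -uroot -p --default-parallelism=2 \
  --parallel-schemas=4:db1,db2"
# Print the command instead of running it:
echo "$PUMP_CMD > full_pump_bak.sql"
```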

4. Xtrabackup

4.1 Online Installation

Xtrabackup can be installed online from the Percona yum repository as follows:

# Install the Percona yum repository
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

# Install Xtrabackup
yum install percona-xtrabackup-80

4.2 Full Backup

The steps for full backup are as follows:

1. Create a backup

Use --target-dir to specify the backup directory (it is created automatically if it does not exist), and --parallel to specify the degree of parallelism of the full backup operation:

xtrabackup --backup  --user=root --password --parallel=3  --target-dir=/data/backups/

If you need to back up a specific database, you can use –databases to specify the database.

Another common error: by default, Xtrabackup accesses the database through the socket file /var/lib/mysql/mysql.sock. If the database has been configured with a different socket path, you must pass it with the --socket parameter; otherwise a connection-not-found error is thrown. One more step must be performed immediately after the backup completes: the prepare operation.

2. Prepare a backup

The backup step copies the physical table files and other files to the backup directory, and the whole process takes some time. During that window, the copied data may contain uncommitted transactions, or committed transactions that have not yet been flushed to the data files, leaving the backup inconsistent. The prepare operation fixes this: it rolls back uncommitted transactions and applies committed ones to the data files, making the backup consistent. The command is as follows:

xtrabackup --prepare --target-dir=/data/backups/

It is important not to interrupt the Xtrabackup process at this stage, as doing so may corrupt the data files and render the backup unusable.

3. Restore the backup

Since Xtrabackup performs physical backups, the MySQL service must be stopped before restoring. Here we also delete the MySQL data directory to simulate data loss, and then copy the backup files back into the data directory with the following commands:

# Simulate unexpected data loss
rm -rf /usr/app/mysql-8.0.17/data/*
# Copy the backup files back into the data directory
xtrabackup --copy-back --target-dir=/data/backups/

The copy-back step only needs the location of the backup files, not the MySQL data directory, because Xtrabackup automatically reads MySQL configuration from /etc/my.cnf, including the data directory location. If you do not need to keep the backup files, you can use --move-back instead, which moves them directly into the data directory. After restoring, the owner of the data directory is usually the user who ran the command, so change the ownership back to the mysql user:

chown -R mysql:mysql /usr/app/mysql-8.0.17/data

Restart MySQL to complete the backup and restoration.
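As noted above, Xtrabackup reads MySQL settings from /etc/my.cnf. A minimal illustrative fragment follows; the paths are assumptions matching the examples in this section and must be adjusted to the actual installation:

```ini
[mysqld]
# Xtrabackup obtains the data directory (and socket) from here,
# so these entries must match the real installation
datadir=/usr/app/mysql-8.0.17/data
socket=/var/lib/mysql/mysql.sock
```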

4.3 Incremental Backup

When Xtrabackup is used for incremental backup, each incremental backup is taken relative to the previous backup, and at restore time the increments are applied in turn to the initial full backup. The specific steps are as follows:

1. Create a backup

First, create a full backup as the base:

xtrabackup  --user=root --password --backup  --target-dir=/data/backups/base/

Then modify some data in the database and take the first incremental backup, using --incremental-basedir to point at the full backup directory:

xtrabackup  --user=root --password --backup  --target-dir=/data/backups/inc1 \
--incremental-basedir=/data/backups/base

Modify some data again and take a second incremental backup, this time pointing --incremental-basedir at the first incremental directory:

xtrabackup  --user=root --password --backup  --target-dir=/data/backups/inc2 \
--incremental-basedir=/data/backups/inc1

2. Prepare a backup

Prepare the base backup:

xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base

Apply the first incremental backup to the base data:

xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \
--incremental-dir=/data/backups/inc1

Apply the second incremental backup:

xtrabackup --prepare --target-dir=/data/backups/base \
--incremental-dir=/data/backups/inc2

When preparing the backups, every prepare command except the one for the final increment must include the --apply-log-only option, which suppresses the rollback phase. A transaction that was still uncommitted when one backup was taken may be committed in the next incremental backup; if it were rolled back during an intermediate prepare, the following increment could no longer be applied correctly.

3. Restore the backup

Restoring is the same as for a full backup: stop MySQL and copy the prepared base backup into the MySQL data directory:

xtrabackup --copy-back --target-dir=/data/backups/base
# File ownership must be changed, or MySQL will fail to start
chown -R mysql:mysql /usr/app/mysql-8.0.17/data

At this point, the incremental backup and restore are complete. Note: in the scenario above, if an outage occurs after the second incremental backup, the changes made between that backup and the outage still cannot be recovered with Xtrabackup; the only option is the binary-log-based recovery described earlier. As you can see, binary logs are critical regardless of the backup method, so it is best to back them up in real time.

5. Backing Up Binary Logs

To back up binary log files, you can copy them periodically with cp or scp, or use mysqlbinlog to stream them to the local machine in real time, as follows:

mysqlbinlog --read-from-remote-server --raw --stop-never \
  --host=host_name --port=3306 \
  --user=user_name --password=password starting_binlog_file

Note that the user must have the REPLICATION SLAVE privilege, because the command above essentially emulates a master-slave replication setup: it acts as a slave node that continuously fetches binary logs from the master through the I/O thread, thereby backing them up.
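A minimal sketch of creating such a backup account (the account name, host, and password are purely illustrative):

```sql
mysql> CREATE USER 'binlog_backup'@'%' IDENTIFIED BY 'your_password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'binlog_backup'@'%';
```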

References

  • Chapter 7 Backup and Recovery
  • Mysqldump — A Database Backup Program
  • Mysqlpump — A Database Backup Program
  • Percona XtraBackup – Documentation
