In daily operation and maintenance, backing up the MySQL database is extremely important: if a table is lost or damaged, the data can be restored in time.
Online database backup scenario: perform a full backup every Sunday, and an incremental mysqldump backup every day at 1:00 PM.
The backup scheme is described in detail below.
1. mysqldump incremental backup configuration
The prerequisite for incremental backup is that the binlog (binary log) function is enabled in MySQL. Add log-bin=/opt/Data/mysql-bin to my.cnf; it is recommended to store the binlog on a disk different from the MySQL data directory.
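A minimal my.cnf sketch for this, using the /opt/Data path from above (adjust the path to your own disk layout; the retention setting is an optional addition):
[mysqld]
log-bin=/opt/Data/mysql-bin
# optional on MySQL 5.x: purge binlogs older than 7 days (MySQL 8.0 uses binlog_expire_logs_seconds instead)
expire_logs_days=7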
In short: mysqldump exports data (mysqldump ... > A.sql) and mysql imports it back (mysql ... < A.sql). If the new database name is not the same as the old one, change the old database name in the A.sql file to the new one so that the mysql command can be used to import the data.
2. mysqldump incremental backup
Assume a full backup is taken every Sunday at 1 PM. For the MyISAM storage engine:
[root@test-huanqiu ~]# mysqldump --lock-all-tables --flush-logs --master-data=2 -u root -p test > backup_sunday_1_PM.sql
For InnoDB, replace --lock-all-tables with --single-transaction. The --flush-logs option closes the current binlog and starts a new log file, and the --master-data=2 option records the name of that new log file after the full backup as a comment in the output SQL.
For example, the output backup SQL file contains: CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=106;
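For reference, a sketch of the corresponding InnoDB full backup and of checking the recorded position (the database name test and the credentials follow the example above):
mysqldump --single-transaction --flush-logs --master-data=2 -u root -p test > backup_sunday_1_PM.sql
# the binlog file and position needed for the later incremental restore are recorded as a comment
grep "CHANGE MASTER" backup_sunday_1_PM.sql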
Other notes on mysqldump incremental backup: if mysqldump is run with --delete-master-logs, the previous binlogs are deleted to free space. However, if the server is a replication master, deleting the binary logs with mysqldump --delete-master-logs is dangerous, because a replica may not yet have fully processed their contents. In this case it is safer to use PURGE MASTER LOGS.
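A sketch of the safer PURGE MASTER LOGS approach (the file name follows the earlier example; only purge logs that every replica has already applied):
mysql -u root -p -e "PURGE MASTER LOGS TO 'mysql-bin.000002';"
# or purge by date (the timestamp here is illustrative)
mysql -u root -p -e "PURGE MASTER LOGS BEFORE '2016-11-27 01:00:00';"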
Each day, periodically run mysqladmin flush-logs to close the current binlog and start a new one, then back up the binlog file that was just closed, for example the mysql-bin.000002 file in the data directory…
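A minimal sketch of that daily step, assuming the binlogs live under /opt/Data as configured above and a backup directory /backup/binlog (the backup path is an assumption):
# rotate to a new binlog
mysqladmin -u root -p flush-logs
# copy the binlog that was just closed
cp /opt/Data/mysql-bin.000002 /backup/binlog/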
Restore procedure:
1. Restore the full backup: mysql -u root -p < backup_sunday_1_PM.sql
2. Restore the incremental backups: mysqlbinlog mysql-bin.000002 … | mysql -u root -p. Note that the recovery process is itself written to the binlog; if the data volume is large, it is recommended to disable binary logging during the restore.
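A sketch of replaying several incremental binlogs in one pass (the file names are illustrative); passing them to a single mysqlbinlog invocation keeps the events in order, and --disable-log-bin keeps the replay itself out of the binlog, as suggested above:
mysqlbinlog --disable-log-bin mysql-bin.000002 mysql-bin.000003 | mysql -u root -p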
The main mysqldump options are described below.
--compatible=name Tells mysqldump which database or which older MySQL server version the exported data should be compatible with. The value can be ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_table_options, or no_field_options; separate multiple values with commas. Of course, full compatibility is not guaranteed, only the best possible effort.
--complete-insert, -c Export data using complete INSERT statements that include the column names. This can improve insert efficiency, but it may be affected by the max_allowed_packet parameter and cause the insert to fail, so use this parameter with caution; at least I don't recommend it.
--default-character-set=charset Specifies the character set to use when exporting data. If the data table does not use the default latin1 character set, this parameter must be specified when exporting; otherwise, garbled characters will appear when the data is imported again.
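For example, a sketch for a utf8 database (mydb is a placeholder name); use the same character set for both export and import:
mysqldump -u root -p --default-character-set=utf8 mydb > mydb.sql
mysql -u root -p --default-character-set=utf8 mydb < mydb.sql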
--disable-keys Tells mysqldump to wrap the INSERT statements with /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */; statements, which greatly speeds up the inserts because the indexes are rebuilt only after all data has been inserted. This option applies only to MyISAM tables.
--extended-insert=true|false By default, mysqldump uses extended (multiple-row) INSERT syntax; if you do not want this, set this option to false.
--hex-blob Exports binary string fields in hexadecimal format. You must use this option if you have binary data. The affected field types are BINARY, VARBINARY, and BLOB.
--lock-all-tables, -x Requests a lock on all tables in all databases before the export begins, to guarantee data consistency. This is a global read lock, and it automatically turns off the --single-transaction and --lock-tables options.
--lock-tables, -l Similar to --lock-all-tables, but locks the tables of the database currently being exported instead of locking all databases at once. This option applies only to MyISAM tables; for InnoDB tables you can use the --single-transaction option instead.
--no-create-info, -t Export data only, without CREATE TABLE statements.
--no-data, -d Export only the database table structure, without any data. For example: mysqldump --no-data --databases mydatabase1 mydatabase2 mydatabase3 > test.dump, where --databases lists the databases on the host to be backed up.
--opt This is just a shortcut option, equivalent to specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at the same time. It lets mysqldump export the data quickly and produce a dump that can be imported back quickly. This option is enabled by default and can be disabled with --skip-opt. Note that if you run mysqldump without --quick or --opt, the entire result set is held in memory, which can cause problems when exporting large databases.
--quick, -q This option is useful when exporting large tables. It forces mysqldump to output the records fetched from the server directly instead of caching all of them in memory first.
--routines, -R Export stored procedures and user-defined functions.
--single-transaction This option issues a BEGIN SQL statement before exporting data. BEGIN does not block any application and guarantees a consistent state of the database at the time of the export. It works only for transactional tables, such as InnoDB and BDB. This option and the --lock-tables option are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction. To export large tables, combine it with the --quick option.
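Putting several of the options above together, a sketch of an InnoDB-friendly dump (the database name mydb is a placeholder):
mysqldump -u root -p --single-transaction --quick --routines --triggers --hex-blob --master-data=2 mydb > mydb.sql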
--triggers Export triggers as well. This option is enabled by default; disable it with --skip-triggers.
Cross-host backup: the sourceDb on host1 can be copied to the targetDb on host2 with the following command, provided that the targetDb database has already been created on host2 (-C enables compression of the data transferred between the hosts):
mysqldump --host=host1 --opt sourceDb | mysql --host=host2 -C targetDb
Scheduled backup with the Linux cron command: for example, to dump all databases on a host at 1:30 every morning and compress the dump into gz format (the % signs are escaped because the line lives in a crontab):
30 1 * * * mysqldump -u root -pPASSWORD --all-databases | gzip > /mnt/disk2/database_`date '+\%m-\%d-\%Y'`.sql.gz
Example of a complete shell script that backs up the MySQL database opspc:
[root@test-huanqiu ~]# vim /root/backup.sh
#!/bin/bash
echo "Begin backup mysql database"
mysqldump -u root -ppassword opspc > /home/backup/mysqlbackup-`date +%Y-%m-%d`.sql
echo "Your database backup successfully completed"
[root@test-huanqiu ~]# crontab -e
30 1 * * * /bin/bash -x /root/backup.sh > /dev/null 2>&1
Mysqldump full backup + mysqlbinlog incremental backup
1) Restoring data from a mysqldump backup file alone loses the updates made after the backup point, so it must be combined with mysqlbinlog incremental backup. Make sure the binlog function is enabled; include the following configuration in my.cnf to enable binary logging:
[mysqld]
log-bin=mysql-bin
2) The mysqldump command must include the --flush-logs option so that a new binary log file is generated:
mysqldump --single-transaction --flush-logs --master-data=2 > backup.sql
The --master-data parameter accepts 0 | 1 | 2: 0 means the binlog position is not recorded; 1 records it as a CHANGE MASTER statement; 2 records it as a commented-out CHANGE MASTER statement.
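A sketch of how that recorded position is used during recovery (the file name and position come from the earlier CHANGE MASTER example; substitute the values found in your own dump):
# restore the full backup first
mysql -u root -p < backup.sql
# find the binlog file and position recorded in the dump
grep "CHANGE MASTER" backup.sql
# replay everything written after the full backup
mysqlbinlog --start-position=106 mysql-bin.000002 | mysql -u root -p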
For more details about the mysqldump full and incremental backup solution, see the following two documents: "Restoring data after a database is mistakenly deleted" and "MySQL binlog logs and restoring data using binlog".
------------------------------------------------------------------------------------
The following are the mysqldump full and incremental backup scripts I have used.
Application scenario:
1) Incremental backup: from Monday to Saturday at 3:00 am, copy the mysql-bin.00000* files to a specified directory.
2) Full backup: every Sunday at 3:00 am, use mysqldump to export all databases and delete the mysql-bin.00000* files left over from the previous week. The MySQL backup operations are recorded in the bak.log file.
Script implementation:
1) Full backup script (assuming the MySQL login password is 123456; note the command paths in the script):
[root@test-huanqiu ~]# vim /root/mysql-fullybak.sh
#!/bin/bash
# Program
# use mysqldump to Fully backup mysql data per week!
# History
# Path
BakDir=/home/mysql/backup
LogFile=/home/mysql/backup/bak.log
Date=`date +%Y%m%d`
Begin=`date +"%Y-%m-%d %H:%M:%S"`
cd $BakDir
DumpFile=$Date.sql
GZDumpFile=$Date.sql.tgz
/usr/local/mysql/bin/mysqldump -uroot -p123456 --quick --events --all-databases --flush-logs --delete-master-logs --single-transaction > $DumpFile
/bin/tar -zvcf $GZDumpFile $DumpFile
/bin/rm $DumpFile
Last=`date +"%Y-%m-%d %H:%M:%S"`
echo Start:$Begin End:$Last $GZDumpFile succ >> $LogFile
cd $BakDir/daily
/bin/rm -f *
2) Incremental backup script:
[root@test-huanqiu ~]# vim /root/mysql-dailybak.sh
#!/bin/bash
# Program
# use cp to backup mysql binlog files everyday!
# History
# Path
BakDir=/home/mysql/backup/daily          # create this directory manually in advance
BinDir=/home/mysql/data                  # mysql data directory
LogFile=/home/mysql/backup/bak.log
BinFile=/home/mysql/data/mysql-bin.index # path of the mysql binlog index file, under the data directory
/usr/local/mysql/bin/mysqladmin -uroot -p123456 flush-logs
# the flush-logs above is used to produce a new mysql-bin.00000* file
Counter=`wc -l $BinFile | awk '{print $1}'`
NextNum=0
# the for loop compares $Counter and $NextNum to determine whether the file already exists or is the latest one
for file in `cat $BinFile`
do
    base=`basename $file`    # basename strips the directory part of the path
    NextNum=`expr $NextNum + 1`
    if [ $NextNum -eq $Counter ]
    then
        echo $base skip! >> $LogFile
    else
        dest=$BakDir/$base
        if(test -e $dest)
        then
            echo $base exist! >> $LogFile
        else
            cp $BinDir/$base $BakDir
            echo $base copying >> $LogFile
        fi
    fi
done
echo `date +"%Y-%m-%d %H:%M:%S"` Bakup succ! >> $LogFile
3) Set up the crontab tasks to run the backup scripts:
[root@test-huanqiu ~]# crontab -e
# full backup script: every Sunday at 3:00 am
0 3 * * 0 /bin/bash -x /root/mysql-fullybak.sh >/dev/null 2>&1
# incremental backup script: Monday to Saturday at 3:00 am
0 3 * * 1-6 /bin/bash -x /root/mysql-dailybak.sh >/dev/null 2>&1
4) Execute the two scripts manually to verify them; run the incremental backup script first and then the full backup script:
[root@test-huanqiu backup]# pwd
/home/mysql/backup
[root@test-huanqiu backup]# mkdir daily
[root@test-huanqiu backup]# ll
total 4
drwxr-xr-x. 2 root root 4096 Nov 29 11:29 daily
[root@test-huanqiu backup]# ll daily/
total 0
[root@test-huanqiu backup]# sh /root/mysql-dailybak.sh
[root@test-huanqiu backup]# ll
total 8
-rw-r--r--. 1 root root 121 Nov 29 11:29 bak.log
drwxr-xr-x. 2 root root 4096 Nov 29 11:29 daily
[root@test-huanqiu backup]# ll daily/
total 8
-rw-r-----. 1 root root 152 Nov 29 11:29 mysql-binlog.000030
-rw-r-----. 1 root root 152 Nov 29 11:29 mysql-binlog.000031
[root@test-huanqiu backup]# cat bak.log
mysql-binlog.000030 copying
mysql-binlog.000031 copying
mysql-binlog.000032 skip!
Bakup succ!
[root@test-huanqiu backup]# sh /root/mysql-fullybak.sh
20161129.sql
[root@test-huanqiu backup]# ll
total 152
-rw-r--r--. 1 root root 145742 Nov 29 11:30 20161129.sql.tgz
-rw-r--r--. 1 root root 211 Nov 29 11:30 bak.log
drwxr-xr-x. 2 root root 4096 Nov 29 11:30 daily
[root@test-huanqiu backup]# ll daily/
total 0
[root@test-huanqiu backup]# cat bak.log
mysql-binlog.000030 copying
mysql-binlog.000031 copying
mysql-binlog.000032 skip!
Bakup succ!
Start:2016-11-29 11:30:38 End:2016-11-29 11:30:38 20161129.sql.tgz succ
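To close the loop, a hedged sketch of restoring from the artifacts these scripts produce, using the paths defined in the scripts (the binlog file names are illustrative):
cd /home/mysql/backup
tar -zxvf 20161129.sql.tgz
# restore the full dump
mysql -uroot -p123456 < 20161129.sql
# then replay the copied binlogs from the daily directory, oldest first
mysqlbinlog daily/mysql-binlog.000030 daily/mysql-binlog.000031 | mysql -uroot -p123456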