This is the 12th day of my participation in the November More Text Challenge, the final writing challenge of 2021.
Solution 1: Restore data using Oplog
Fortunately, when MongoDB is deployed as a replica set, you can use the oplog to recover as much data as possible. The oplog records every modification made to the replica set, so if a database is dropped by mistake, the existing oplog can be replayed to get most of the data back. Because the oplog is a capped (fixed-size) collection, start the recovery as soon as possible; otherwise the entries you need will be overwritten by newer writes.
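Before starting, it is worth checking how much oplog history is still available. The sketch below reuses the port and credentials from the examples in this article; adjust them to your own deployment:
mongo --port 20001 -u root -p 123456 --authenticationDatabase admin
rs.printReplicationInfo()   # prints the configured oplog size and the time range it currently covers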
1. Export the oplog collection
mongodump -d local -c oplog.rs -o backupdir -u root -p 123456 --authenticationDatabase admin --port 20001
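Optionally, you can peek at the exported oplog to confirm the dump contains the operations you expect; bsondump ships with the MongoDB tools, and the path below is the one produced by the command above:
bsondump backupdir/local/oplog.rs.bson | tail -5   # show the most recent operations in the dump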
2. Back up the oplog data (mongorestore expects the replay file to be named oplog.bson)
mkdir new_backupdir;
cp backupdir/local/oplog.rs.bson new_backupdir/oplog.bson;
3. Replay oplog
# Replaying the oplog applies privileged internal operations, so the restore user needs
# a role with anyAction on anyResource. Create it in the admin database and grant it to
# the user that will run mongorestore:
db.createRole({role: 'sysadmin', roles: [], privileges: [{resource: {anyResource: true}, actions: ['anyAction']}]});
db.grantRolesToUser("admin", [{role: "sysadmin", db: "admin"}]);

# Replay the oplog (--oplogReplay tells mongorestore to apply new_backupdir/oplog.bson):
mongorestore --oplogReplay new_backupdir -u root -p 123456 --authenticationDatabase admin --port 20001;
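Note that the exported oplog still contains the accidental drop itself, and replaying it all the way through would simply drop the data again. The commands below are a sketch of how to stop the replay just before that entry, assuming the database was removed with dropDatabase and using a made-up timestamp purely as an illustration:
# in the mongo shell, locate the drop entry (query local.oplog.rs, or inspect the dump with bsondump)
use local
db.oplog.rs.find({op: "c", "o.dropDatabase": 1}).sort({$natural: -1}).limit(1)

# suppose the entry's ts is Timestamp(1636704000, 1); --oplogLimit replays only entries before it
mongorestore --oplogReplay --oplogLimit 1636704000:1 new_backupdir -u root -p 123456 --authenticationDatabase admin --port 20001;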
Solution 2: Restore data from other nodes in the replica set, or migrate the node
MongoDB replica sets provide highly reliable data storage, and a three-node replica set is usually recommended in production. So even if one node crashes and cannot be started, we can simply wipe its data directory and restart it so that it rejoins the set as a fresh Secondary, or copy the data over from another node and restart it; either way the node synchronizes data automatically, which achieves the goal of data recovery.
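Before rebuilding the broken node, it helps to confirm from a surviving member that the replica set still has a primary and to note the current member list; a quick check, again using this article's port and credentials:
mongo --port 20001 -u root -p 123456 --authenticationDatabase admin
rs.status()   # each member's stateStr should be PRIMARY or SECONDARY; the broken node shows as unhealthy
rs.conf()     # note the member list before removing or re-adding nodes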
1. Shut down the node that needs to resynchronize its data
docker stop node;                       # Docker environment: stop the container
db.shutdownServer({timeoutSecs: 60});   # non-Docker environment: run in the mongo shell on that node
2. Copy the data storage directory (dbPath) of a healthy node to the specified directory on the current node
scp -r <target-node>:/path/to/shard/data $PWD/shard/data;   # replace the host and paths with your own
3. Start the MongoDB node again, pointing it at the copied data directory
docker run -d -p 20002:20001 -v $PWD/shard/data:/data/mongodb -v $PWD/shard2/log:/data/log --name shard2 mongo:3.4 \
    --shardsvr --replSet rs2 --dbpath /data/mongodb --logpath /data/log/shard2.log \
    --keyFile /data/key/keyfile --port 20001 --slowms 200 --logappend --wiredTigerCacheSizeGB 2;
# note: /data/key/keyfile must also be reachable inside the container (e.g. mounted with another -v)
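A quick sanity check after starting the container (the container name and the mapped host port follow the example above):
docker logs --tail 20 shard2;                            # mongod should report it is waiting for connections
mongo --port 20002 --eval "db.adminCommand({ping: 1})"   # ping does not require authentication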
4. Add the new node to the replica set
Add ("<hostnameNew>:<portNew>"); Rs.remove ("<hostnameOld>:<portOld>") # Remove the original nodeCopy the code