1. The background
At 6:00 PM, a customer reported that attachment uploads in the BUG management system were suddenly failing with an unknown error, which blocked the testers from filing bugs. I went to the machine room and, at 18:10, found with `df -h` that the /home partition, which stores the BUG system's logs, had only 14K free out of its 1T of total space. With the leader's words in my head — solve the user's problem first — I set out to dig deeper.
2. The process
At 18:20, to free up some space, I first held the line for a while and sent a notice to the customer group that a disk expansion would start at 20:00 sharp. I `cd`-ed into /home and ran `du -sh ./*`: the online service JAR packages and the front-end build packages were all sitting in /home (strong recommendation: when applying for a server, have IT mount a large disk, e.g. 20TB, to /data, keep all file storage and large packages there, and back it up periodically). I then removed the redundant JAR packages of the back-end services deployed there — authorization, gateway, configuration center, and the other core services — after copying them elsewhere as a backup. Running `df -h` again showed about 1G more free space on /home, which bought a short breather.
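A minimal sketch of that inspection step, assuming the paths from this incident; the `sort -rh` is my addition so the largest entries surface first:

```bash
cd /home
df -h                  # free space per filesystem
du -sh ./* | sort -rh  # size of each top-level entry, largest first
```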
At 19:00, monitoring showed that the attachment upload interface was being called frequently, with each file between 50M and 200M. At that rate I estimated the disk would fill up again within the time it takes to eat a meal. Sure enough, at 19:30 /home was down to 12K free (crash territory), so waiting until 20:00 was clearly not an option. I stepped up preparations for the expansion and migration.
At 19:50, IT had mounted the 20T disk to /data. The next step was to migrate the data under /home to /data, keeping in mind the risks that could surface along the way. I logged in from the machine room over Windows 10 PowerShell with `ssh user@ip`. Risk 1: an SSH connection to Linux is dropped after sitting idle for a long time. Risk 2: file names containing special characters, or names that are too long, may make cp misbehave. Risk 3: at a sync speed of roughly 200G per hour, the migration would take at least 5 hours, so how do we run it unattended and silently in the background?
Risk 2 is handled with `find . -name "*" -exec cp -r {} /data/ \;`, which copes with special characters in file names. That alone does not solve risk 3. As an aside, when storing files the file path should be unique — for example, prefix the file name with a UUID — so that if the migration is interrupted abnormally, re-running it simply overwrites what was already copied. `nohup command &` runs the command in the background without hanging up, which solves risk 3 and, at the same time, risk 1 (the SSH connection to Linux being dropped after a long idle period).
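A hedged sketch combining the two ideas, assuming the source directory is /home/data (as in the script below) and the target is /data; `-mindepth 1 -maxdepth 1` is my addition so each top-level entry is copied exactly once:

```bash
cd /home/data
# find tolerates special characters in file names better than a shell glob,
# and nohup keeps the copy running even if the SSH session drops.
nohup find . -mindepth 1 -maxdepth 1 -exec cp -r {} /data/ \; > copy.log 2>&1 &
tail -f copy.log   # optional: watch progress; Ctrl+C stops only the tail
```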
3. Here comes the good stuff
3.1 The sh script is attached below; you can use it directly.
First write the file names in the source directory into a list file, e.g. `ls /home/data > list`, and record the current file count so you can verify the total after the migration: `ls -lR | grep "^-" | wc -l`.
`vim sync.sh`, then press `i` to enter insert mode:
```bash
#!/bin/bash
source="/home/data"
dest="/data"
echo "begin"
while read line
do
  echo "begin to copy $line"
  cp -raf "$source/$line" "$dest"
done < list
echo "end,success"
```
Press ESC, then `:wq` to save. Then execute `nohup sh sync.sh &`. Come back a day or two later and compare the file counts with `ls -lR | grep "^-" | wc -l`.
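A small sketch of that comparison step, assuming the same source and destination paths as the script above:

```bash
# Count regular files on both sides; the numbers should match once the copy is done.
src_count=$(ls -lR /home/data | grep "^-" | wc -l)
dst_count=$(ls -lR /data | grep "^-" | wc -l)
echo "source: $src_count, destination: $dst_count"
```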
3.2 The author deploys the service with Docker and needs to modify the container's mapped directory
Taking docker-compose.yml as an example, change the volumes mapping to `- /data:/root/itsys/files`, then rebuild the image with `docker build -t <image name> .` and restart the service.
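A hedged sketch of applying that change from the shell, with the image name as a placeholder; strictly speaking only the compose edit and the restart are required, the rebuild just mirrors the step described above:

```bash
# After editing docker-compose.yml so volumes contains "- /data:/root/itsys/files":
docker build -t itsys-files-service .   # placeholder image name
docker-compose up -d                    # recreate the container with the new volume mapping
docker-compose logs -f                  # confirm the service starts and writes under /data
```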
4. Summary & reflection
When designing the deployment architecture, consider how business growth will affect the system, and use user volume and business data to forecast the corresponding adjustments to the infrastructure (CPU, memory, storage, and so on). Regularly back up important business data, such as the BUG records used for defect management; otherwise, if that data is lost, it becomes much harder for developers to reproduce and fix the bugs.
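As a minimal illustration of the periodic-backup point (the paths and schedule are placeholders, not from the original setup):

```bash
# crontab -e: archive /data every night at 02:00
# (% must be escaped as \% inside a crontab entry).
0 2 * * * tar -czf /backup/data-$(date +\%F).tar.gz /data
```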
Technical debt always has to be paid off. If, at the time of the initial deployment, the file directories had been mounted on a disk with enough space, this online exception would never have needed to be handled urgently.
In addition, even when the problems in the expansion process have been analyzed in advance, other anomalies will still crop up. At that point the key is to keep a steady mindset: as long as the plan is right, the problems along the way can be overcome. One such surprise here: .bashrc defines `alias cp='cp -i'`, so cp prompts before every overwrite (which is why the script uses `cp -raf`).
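A quick illustration of that alias behaviour; `src/` and `dest/` are hypothetical paths:

```bash
alias cp            # shows the alias if defined, typically cp='cp -i'
\cp -r src/ dest/   # the leading backslash bypasses the alias for one call
cp -raf src/ dest/  # or pass -f explicitly; it overrides the -i added by the alias
```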
5. Common commands used during the process
find . -name "*" -exec cp -r {} /data/ \;  # copy files whose names contain special characters
ls -lR | grep "^-" | wc -l  # count the files under the current directory
ll -t  # list files in descending order by time (newest first)
ls -lrt  # list files in ascending order by time (oldest first)
nohup sh <script name> &  # start a process silently in the background
docker build -t <image name> .  # build a Docker image
docker-compose up -d  # start the service (docker-compose must be installed)
docker-compose logs -f  # view the logs