Preface
In Getting Started with Docker 01, we learned the basics of containerizing applications with Docker.
In Getting Started with Docker 02, we learned how containers communicate with each other through Docker networks.
Docker data management
Today we are going to learn how to manage data in Docker. Docker offers three ways to manage data:
- Volume, the recommended method.
- Bind mount, an early data management method used by Docker.
- tmpfs mount, memory-based storage.
Note
tmpfs mounts are only available on Linux hosts.
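As a rough reference, the three approaches look like this on the command line (these commands assume a running Docker daemon; my-vol and the paths are illustrative):

```shell
# 1. Volume: managed by Docker in its own storage area on the host
docker run -v my-vol:/data alpine

# 2. Bind mount: maps an arbitrary host path into the container
docker run -v "$(pwd)":/data alpine

# 3. tmpfs mount: kept only in host memory (Linux hosts only)
docker run --tmpfs /data alpine
```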
Data volume
Basic commands
Volume is one of the common Docker object types, and therefore supports operations such as create, inspect, ls, prune, and rm. First, create a data volume:
docker volume create my-volume
View all the current data volumes:
docker volume ls
In the last line of the output you should see:
local my-volume
Then we enter the command to view the details of the my-volume data volume:
docker volume inspect my-volume
You should see output describing details such as the volume's driver and its mountpoint on the host. Finally, delete the my-volume data volume we just created:
docker volume rm my-volume
Data volumes provide a "bridge" between the host environment and the container environment. In general, we write data that needs to persist to the path in the container where the data volume is mounted, and that data is then immediately and automatically stored in the corresponding area on the host.
Ways to create a data volume
When creating a container with data volumes, you usually have two options: 1) named volumes; 2) anonymous volumes. Let's look at each in detail.
Creating a Named Volume
Run the following command:
docker run -it -v my-vol:/data --name container1 alpine
As you can see, we specified the data volume configuration as my-vol:/data with the -v (or --volume) flag, where (you guessed it) my-vol is the name of the data volume and /data is the path of the data volume inside the container.
After entering the container, add a file to the /data directory and exit:
touch /data/file.txt
exit
To verify that the data in /data really persists, delete container1 and create container2:
docker rm container1
docker run -v my-vol:/data --name container2 alpine ls /data
You can see the file.txt file that was just created in container1! In fact, this pattern of sharing volumes between containers is so common that Docker provides a handy flag, --volumes-from, to facilitate it:
docker run -it --volumes-from container2 --name container3 alpine
/ # ls /data
file.txt
Likewise, container3 has access to the contents of the data volume.
Creating an Anonymous Volume
Creating an anonymous volume is simple: just omit the volume name and the colon from the -v argument, leaving only the container path:
docker run -v /data --name container4 alpine
Type the following command to check container4:
docker inspect container4
Take a look at some of the important fields under Mounts:
- Name is the name of the data volume. Since this is an anonymous volume, the Name field is a long random hash; for a named volume it would be the name you specified.
- Source is where the data volume is stored on the host file system (as mentioned earlier, on Windows and Mac this is inside the Docker virtual machine).
- Destination is the mount point of the data volume inside the container.
- RW indicates read-write; if false, the data volume is mounted read-only.
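For reference, a Mounts entry in the inspect output looks roughly like the following (the hash and host path are illustrative; actual values will differ on your machine):

```json
"Mounts": [
    {
        "Type": "volume",
        "Name": "e3a8...f41c",
        "Source": "/var/lib/docker/volumes/e3a8...f41c/_data",
        "Destination": "/data",
        "Driver": "local",
        "RW": true
    }
]
```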
Use data volumes in Dockerfile
Using a data volume in a Dockerfile is as simple as specifying the VOLUME instruction:
VOLUME ["/data1", "/data2", "/data3"]
Note
- Only anonymous volumes can be created in a Dockerfile.
- When a data volume is specified with docker run -v, it overrides the configuration in the Dockerfile.
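A minimal sketch of how these two notes play out (the image contents here are made up for illustration):

```dockerfile
FROM alpine
# An anonymous volume is created at /data each time a container
# starts from this image -- there is no way to name it here.
VOLUME /data
```

If you then start a container with docker run -v my-vol:/data <image>, the named volume my-vol is mounted at /data and takes precedence over the anonymous volume declared in the Dockerfile.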
Bind mounts
Bind mount is the earliest Docker data management and storage solution. The general idea is the same as for data volumes, but it directly maps a local file system path into the container's file system, which makes it well suited for simple, flexible data transfer between the host and a container.
We can try to mount our own machine’s desktop (or some other path) to the container:
docker run -it --rm -v ~/Desktop:/desktop alpine
Here ~/Desktop is the local (host) path and /desktop is the container path; ~/Desktop:/desktop binds the local path to the container path, as if building a bridge between them. The --rm flag tells Docker to delete the container automatically after it stops.
After entering the container, you can try to see if there is anything on your desktop under /desktop, and then create a file in the container to see if the desktop receives the file:
/# ls /desktop
/# touch /desktop/from-container.txt
You should see from-container.txt, created inside the container, appear on your desktop!
Hands-on practice
Now that we are familiar with these two ways of managing Docker data, it's best to practice them. Let's get hands-on with a demonstration.
Project preparation
git clone -b volume-start https://github.com/tuture-dev/docker-dream.git
cd docker-dream
What we'll do:
- Store and back up log data output by the Express server, rather than storing it in a “dead” container.
- The MongoDB image already configures data volumes, so we only need a few steps to back up and restore the data.
Mount data volumes to the Express server
First, we add a VOLUME configuration to server/Dockerfile and point LOG_PATH (the log output path environment variable; see the server/index.js source) to /var/log/server/access.log. The code is as follows:
FROM node:10
# Specify the working directory /usr/src/app
WORKDIR /usr/src/app
VOLUME /var/log/server
# Copy package.json to the working directory
COPY package.json .
RUN npm config set registry https://registry.npm.taobao.org && npm install
# Copy the source code
COPY . .
# Set environment variables (server host IP and port)
ENV MONGO_URI=mongodb://dream-db:27017/todos
ENV HOST=0.0.0.0
ENV PORT=4000
ENV LOG_PATH=/var/log/server/access.log
# Start the server
CMD ["node", "index.js"]
Then build the server image:
docker build -t dream-server server/
Now let’s put the whole project together, which is the content of the previous two articles:
# Create the network
docker network create dream-net
# Start the MongoDB container
docker run --name dream-db --network dream-net -d mongo
# Start the API server
docker run -p 4000:4000 --name dream-api --network dream-net -d dream-server
# Start the Nginx client
docker run -p 8080:80 --name client -d dream-client
After the project starts, run docker ps to make sure all three containers are up:
Then visit localhost:8080 (or your server's domain name on port 8080).
Backup of log data
We will create a temporary container and back up the data by sharing data volumes:
- Share data between the dream-api container and its data volume (already implemented).
- Create a temporary container that takes over dream-api's data volumes. Run the following command:
docker run -it --rm --volumes-from dream-api -v $(pwd):/backup alpine
This command uses both data volumes and bind mounts, as described above:
- --volumes-from dream-api shares data volumes between containers.
- -v $(pwd):/backup bind-mounts the current local directory (obtained with the pwd command) to the /backup path in the temporary container.
- Once inside the temporary container, we tar the log data into the /backup directory and exit:
/ # tar cvf /backup/backup.tar /var/log/server/
tar: removing leading '/' from member names
var/log/server/
var/log/server/access.log
/ # exit
After exiting, you will find backup.tar in the current directory. In fact, the whole backup can be done in a single command:
docker run -it --rm --volumes-from dream-api -v $(pwd):/backup alpine tar cvf /backup/backup.tar /var/log/server
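The tar round trip at the core of this backup can be rehearsed locally without Docker. A minimal sketch, where demo/ and restore/ are made-up stand-ins for the container's /var/log/server and the restore target:

```shell
# Work in a throwaway directory tree that mimics the container's log path
mkdir -p demo/var/log/server
echo "GET /todos 200" > demo/var/log/server/access.log

# Archive the log directory, like the backup step inside the temporary container
tar -C demo -cvf backup.tar var/log/server

# Simulate a restore into a fresh location
mkdir -p restore
tar -C restore -xvf backup.tar

# The restored log matches the original
diff demo/var/log/server/access.log restore/var/log/server/access.log && echo "backup OK"
```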
Database backup and recovery
Tip
Here we use MongoDB's own backup and restore commands (mongodump and mongorestore); other databases (such as MySQL) provide similar tools.
Temporary container + container connection
First, our temporary container has to connect to the dream-db container, and we configure the bind mount as follows:
docker run -it --rm -v $(pwd):/backup --network dream-net mongo sh
Unlike when backing up the log data, here we need to connect the temporary container to the dream-net network so that it can access the data in dream-db.
Second, after entering the temporary container, run the mongodump command:
/ # mongodump -v --host dream-db:27017 --archive --gzip > /backup/mongo-backup.gz
At this point, files written to /backup are saved to the current directory ($(pwd)) thanks to the bind mount. After you exit, you can see the mongo-backup.gz file in the current directory.
Prepare the bind mount in advance
When creating the database container, run the following command:
docker run --name dream-db --network dream-net -v $(pwd):/backup -d mongo
Then execute mongodump from docker exec:
docker exec dream-db sh -c 'mongodump -v --archive --gzip > /backup/mongo-backup.gz'
In this way, you set up the bind mount when creating the database container and then use mongodump to back up the data into the mounted area. Here we use sh -c to execute the whole shell command (as a string) to avoid ambiguity caused by the redirection >: without it, the > would be interpreted by your local shell instead of inside the container (try replacing sh -c 'XXX' with the bare XXX to see the difference). As you can see, the mongodump command is now much simpler, and we no longer need the --host parameter because we are running inside the database container itself.
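This quoting rule is easy to verify locally without Docker. In the following sketch (the variable and file names are made up for illustration), the single-quoted string reaches the inner sh untouched, so the variable expansion and the redirection happen there, just as sh -c makes mongodump's > run inside the container rather than on the host:

```shell
# The outer shell's NAME is never consulted inside the single quotes;
# the inner sh expands $NAME and performs the redirection itself.
NAME=outer
sh -c 'NAME=inner; echo "$NAME" > who.txt'
cat who.txt   # prints "inner"
```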
But there's a catch: if you've already created the database container without setting up a bind mount in advance, this approach won't work!
Mind you, this is not a drill!
With the database backup file in hand, we can stage a "drill" with impunity. Take down the current database and API server with the following commands:
docker rm -f --volumes dream-db
docker rm -f dream-api
Yes, with the --volumes switch, we not only deleted the dream-db container but also deleted all the data volumes mounted to it. The drill has to be realistic. Visit localhost:8080 and confirm that the data is gone.
Now let's create a new dream-db container:
docker run --name dream-db --network dream-net -v $(pwd):/backup -d mongo
Note that we mapped the current directory to the container's /backup directory via a bind mount, which means the new container can recover its data from /backup/mongo-backup.gz. Run the following command:
docker exec dream-db sh -c 'mongorestore --archive --gzip < /backup/mongo-backup.gz'
We should see some logs output indicating that the data recovery was successful. Finally restart the API server:
docker run -p 4000:4000 --name dream-api --network dream-net -d dream-server
Now visit the application again and you'll find the data has been restored!
That's all for today's Docker lesson. Keep it up!
Tutorial: Turing Community: Hands-on container data management