Docker is a landmark open source project that fully unleashes the power of virtualization, greatly improves the efficiency of application maintenance, and lowers the cost of developing cloud applications. Docker makes deploying, testing, and distributing applications more efficient and easier than ever before.
Whether you are an application developer, an operations engineer, or another IT practitioner, understanding and mastering Docker is well worth your limited time.
This article is my notes from learning Docker from the perspective of a front-end engineer. I will be honored if it helps you.
Docker overview
Docker is developed in Go, the language created at Google. It builds on Linux kernel features such as cgroups and namespaces, together with a union file system such as OverlayFS, to encapsulate and isolate processes: virtualization at the operating-system level. Because an isolated process is independent of the host and of other isolated processes, it is also called a container. The initial implementation was based on LXC, which was removed after version 0.7 in favor of the home-grown libcontainer; since 1.11 Docker has evolved to use runC and containerd.
Docker core concepts
Image
As we all know, an operating system is divided into kernel space and user space. On Linux, after the kernel boots, the root file system is mounted to provide user-space support. A Docker image is essentially such a root file system. For example, the official ubuntu:18.04 image contains a complete root file system of a minimal Ubuntu 18.04 system.
A Docker image is a special file system. Besides providing the programs, libraries, resources, and configuration files required by the container at runtime, it also contains configuration parameters prepared for the runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its contents are not changed after it is built.
Container
The relationship between an Image and a Container is similar to that between a class and an instance in object-oriented programming. An Image is a static definition and a Container is an entity of the Image runtime. Containers can be created, started, stopped, deleted, paused, and so on.
The essence of a container is a process, but unlike a process executed directly on the host, a container process runs in its own separate namespace. A container can therefore have its own root file system, its own network configuration, its own process space, and even its own user ID space. Processes inside the container run in an isolated environment, as if they were operating on a system separate from the host. This isolation makes containerized applications more secure than applications running directly on the host. It is also why many newcomers to Docker confuse containers with virtual machines.
As mentioned earlier, images use layered storage, and so do containers. Each running container is based on an image, with a storage layer for the current container created on top of it. We call this read-write layer, prepared for the container runtime, the container storage layer.
The container storage layer has the same lifetime as the container: when the container dies, its storage layer dies with it. Therefore, any information stored in the container storage layer is lost when the container is deleted.
Per Docker best practice, a container should not write data into its storage layer; the container storage layer should stay stateless. All file writes should go to data volumes or bind-mounted host directories. Reads and writes in these locations skip the container storage layer and operate directly on the host (or network storage), achieving higher performance and stability.
The lifetime of a data volume is independent of the container: the container dies, the data volume does not. Therefore, once you use data volumes, containers can be deleted or re-run without losing data.
Repository (Registry Server)
A Docker Registry can contain multiple repositories; each repository can contain multiple tags, and each tag corresponds to one image.
Typically a repository contains images of different versions of the same software, with tags identifying those versions. We specify which version of the software we want using the format <repository>:<tag>. If no tag is given, latest is used as the default.
Take the Ubuntu image as an example: ubuntu is the repository name, and it contains version tags such as 16.04 and 18.04. We can pick the image we want with ubuntu:16.04 or ubuntu:18.04. If the tag is omitted, as in plain ubuntu, it is treated as ubuntu:latest.
A repository name is often presented as a two-segment path, such as jwilder/nginx-proxy, which in a multi-user Docker Registry usually means the user name plus the corresponding software name. This is not absolute, however; it depends on the specific Docker Registry software or service in use.
Public Docker Registry:
- Docker Hub
- NetEase Cloud image service
- DaoCloud image market
- Aliyun image repository
Private Docker Registry:
- Sonatype Nexus
- Harbor
Installation and configuration
- Operating system: Ubuntu 18, kernel 4.15.0-91-generic (check with uname -a)
- docker-ce mirror: developer.aliyun.com/mirror/dock…
Uninstall previous versions

```shell
$ apt remove docker docker-engine docker.io containerd runc
```
Installing from a package repository
```shell
# Step 1: Install the necessary system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# Step 2: Install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: Add the software source
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: Update the package index and install docker-ce
sudo apt-get -y update
sudo apt-get -y install docker-ce
```
Installation by script

```shell
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh --mirror Aliyun
```
After installation succeeds, the Docker service starts automatically. You can run systemctl is-enabled docker to check whether the Docker service is enabled at boot.
Optional configuration
To resolve the warning WARNING: Your kernel does not support cgroup swap limit capabilities:

1. Edit the /etc/default/grub file:

```shell
$ nano /etc/default/grub
```

2. Find the GRUB_CMDLINE_LINUX= entry and append cgroup_enable=memory swapaccount=1.
3. Save the file and run sudo update-grub.
4. Restart the server.
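After the edit, the relevant line of /etc/default/grub might look like this (a sketch; if GRUB_CMDLINE_LINUX already carries other parameters on your system, keep them and append the two new ones):

```
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```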
Test whether Docker is installed correctly
```shell
$ docker run hello-world
```
If the command output is normal, the installation is successful.
```
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:f9dfddf63636d84ef479d645ab5885156ae030f611a56f3a7ac7f2fdd86d7e4e
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
```
Docker daemon configuration
Run nano /etc/docker/daemon.json and write the following:

```json
{
  "experimental": false,
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://mirror.ccs.tencentyun.com",
    "http://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ]
}
```

Restart the service:

```shell
$ systemctl daemon-reload
$ systemctl restart docker.service
```
Use Docker images
Pulling an image
docker pull [OPTIONS] [Docker Registry address[:port]/][username/]<repository>[:TAG]

Common options:

- -a, --all-tags=true|false: pull all tagged images in the repository; defaults to false
- --disable-content-trust: skip content verification of the image; defaults to true

Defaults:

- Default Docker Registry: registry.hub.docker.com
- Default username: library, i.e. the official images
- Default TAG: latest
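Putting the defaults together, here is a short shell sketch (the helper name expand_image_ref is mine, for illustration only) that expands a short image name into the fully qualified reference the defaults above imply:

```shell
# expand_image_ref is a hypothetical helper for illustration: it applies
# the default registry (registry.hub.docker.com), default username
# (library), and default tag (latest) described above.
expand_image_ref() {
  ref="$1"
  case "$ref" in
    */*) repo="$ref" ;;              # already has a username segment
    *)   repo="library/$ref" ;;      # official image: default username
  esac
  case "$repo" in
    *:*) ;;                          # tag already present
    *)   repo="$repo:latest" ;;      # default tag
  esac
  echo "registry.hub.docker.com/$repo"
}

expand_image_ref ubuntu         # registry.hub.docker.com/library/ubuntu:latest
expand_image_ref ubuntu:18.04   # registry.hub.docker.com/library/ubuntu:18.04
```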
Viewing image information
List existing images on the local host:
docker image ls or docker images
The size shown for an image is only its logical size. Identical layers are stored only once locally, so the physical storage occupied by images is usually smaller than the sum of their logical sizes.
Adding an image tag with docker tag
docker tag ubuntu:latest myubuntu:latest
To make specific images easier to refer to later, you can use the docker tag command to add any number of new tags to a local image.
Viewing details with docker inspect
docker inspect <repository>[:TAG]
The docker inspect command returns details about the image, including its maintainer, target architecture, the digest of each layer, and so on.
Viewing image history with docker history
docker history <repository>[:TAG]
Note that long commands are automatically truncated; use the --no-trunc option to print them in full.
Removing images
1. Delete by tag
docker rmi <IMAGE> [IMAGE...]
or docker image rm <IMAGE> [IMAGE...]
2. Delete by image ID
docker rmi <IMAGE ID>
When docker rmi is given an image ID (or a distinguishable prefix of it), it first tries to remove all tags pointing to the image, and then deletes the image file itself.
Note that while a container created from the image exists, the image cannot be deleted by default. Use docker ps -a to list all containers on the machine.
Best practice: use docker rm <CONTAINER> to delete all containers that depend on the image, then run docker rmi <IMAGE> to delete the image.
Cleaning up images
docker image prune [OPTIONS]

- -a, --all: delete all unused images, not only dangling ones
- -f, --force: delete without prompting for confirmation

After using Docker for a while, dangling image layers and unused images may accumulate on the system. You can clear them with the docker image prune command.
You can use crontab to clean up periodically. Run crontab -e and add:

```
# Be sure to end the line with a newline character, otherwise it will not take effect
59 23 * * * docker image prune -f
```
Creating an image
1. Create from an existing container
docker commit [OPTIONS] <CONTAINER> <REPOSITORY>[:TAG]

- -a, --author="": author information
- -m, --message="": commit message
- -p, --pause=true: pause the container while committing

First launch an Alpine container and install nano in it, then commit a new image:

```shell
$ docker run -it alpine sh
$ docker commit -m "install nano" -a "Yang Junning" ff3034d2ffa7 my-alpine:0.1
```
2. Create from a Dockerfile
docker build -t <IMAGE NAME> <context path | URL | ->
Building from a Dockerfile is the most common approach. A Dockerfile is a text file that describes, with a fixed set of instructions, how to create a new image based on a parent image.
Here is a simple Dockerfile that, starting from the Alpine image, switches the apk mirror as the basis of a Node environment, forming a new youngjuning/alpine image:

```dockerfile
FROM alpine
LABEL version="1.0" maintainer="youngjuning<[email protected]>"
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories
```

Build it:

```shell
$ docker build -t youngjuning/alpine:latest .
```
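The text mentions a Node environment, while the Dockerfile shown only switches the apk mirror. Here is a sketch of a fuller version that also installs Node; the nodejs and npm package names are Alpine's, and the exact versions they install depend on the Alpine release:

```dockerfile
FROM alpine
LABEL version="1.0" maintainer="youngjuning<[email protected]>"
# Switch the apk mirror and install Node within the same layer
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories \
    && apk add --no-cache nodejs npm
```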
Saving images
To export an image to a local file, use the docker save command. The command supports -o or --output to write the image to a specified file.
For example, export the local alpine image to a file alpine.tar:

```shell
$ docker save -o alpine.tar alpine
```

You can then share the image with others by copying alpine.tar.
Loading images
Use docker load to import an exported tar file into the local image library. The -i or --input option reads the image content from a specified file.
For example, import the image from the file alpine.tar into the local image list:

```shell
$ docker load -i alpine.tar
```
Pushing images
docker push [OPTIONS] [Docker Registry address[:port]/][username/]<repository>[:TAG]
Release process:

- Push latest: docker push youngjuning/alpine:latest
- Add a new tag: docker tag youngjuning/alpine:latest youngjuning/alpine:1.0.0
- Push 1.0.0: docker push youngjuning/alpine:1.0.0

Check out the youngjuning/alpine project to see the Alpine Docker image I published based on the Aliyun mirror.
Operating Docker containers
- A Docker container is a running instance of an image.
- A Docker container is an independently running application (or group of applications), together with the environment it needs to run.
Starting a container
1. Create and start

```shell
$ docker run -it ubuntu:18.04 /bin/bash
```

The -t option makes Docker allocate a pseudo-tty and bind it to the container's standard input, while -i keeps the container's standard input open.
When docker run creates a container, the standard steps Docker performs in the background include:

- Check whether the specified image exists locally; if not, download it from the public registry
- Create and start a container from the image
- Allocate a file system and mount a read-write layer on top of the read-only image layers
- Bridge a virtual interface from the bridge interface configured on the host into the container
- Assign the container an IP address from the address pool
- Execute the user-specified application
- Terminate the container after the application exits

Some common options:

- -d, --detach=true|false: run the container in the background; defaults to false
- -i, --interactive=true|false: keep standard input open; defaults to false
- -p, --publish=[]: map a container port to a host port, for example -p 9000:9000
- --restart="no": container restart policy; one of no, on-failure[:max-retry], always, unless-stopped
- --rm=true|false: automatically delete the container after it exits; cannot be used together with -d
- -t, --tty=true|false: allocate a pseudo-terminal; defaults to false
- -v [HOST-DIR:]<CONTAINER-DIR>[:OPTIONS], --volume=[HOST-DIR:]<CONTAINER-DIR>[:OPTIONS]: mount host files or volumes into the container
- --name="": assign a name to the container
2. Start a terminated container
You can use the docker start <CONTAINER> command to start a terminated container.
3. View the container output
To get the output of a container, use the docker logs <CONTAINER> command.
Stopping a container
docker stop <CONTAINER> terminates a running container.
A terminated container can be restarted with the docker container start command.
In addition, the docker container restart command terminates a running container and then starts it again.
Entering a container with exec
With the -d option, a container goes into the background after it starts.
When you need to enter a container to operate on it, the docker exec command is recommended:

```shell
$ docker run -dit alpine
$ docker ps
CONTAINER ID   IMAGE    COMMAND     CREATED          STATUS          PORTS   NAMES
3d95dabef801   alpine   "/bin/sh"   21 seconds ago   Up 19 seconds           recursing_aryabhata
```

```shell
$ docker exec -it <CONTAINER ID> sh
```

Exiting from this session does not cause the container to stop.
Removing containers
docker container rm deletes a terminated container. For example:

```shell
$ docker rm <CONTAINER ID>
# Delete a running container along with the data volumes mounted to it
$ docker rm -vf <CONTAINER ID>
```

To remove a running container, add the -f option; Docker then sends SIGKILL to the container.
Clean up all terminated containers:

```shell
$ docker container prune
```
Exporting and importing containers

```shell
$ docker export 7691a814370e > ubuntu.tar
$ cat ubuntu.tar | docker import - test/ubuntu:v1.0
$ docker import http://example.com/exampleimage.tgz example/imagerepo
```
Inspecting containers
View container details:

```shell
$ docker inspect [OPTIONS] <CONTAINER ID>
```

View the processes in a container:

```shell
$ docker top [OPTIONS] <CONTAINER ID>
```

View statistics:

```shell
$ docker stats [OPTIONS] <CONTAINER ID>
```

Update the configuration:

```shell
$ docker update --restart=always <CONTAINER ID>
```

Rename a container:

```shell
$ docker rename <old name> <new name>
```

View container logs:

```shell
$ docker logs -f <CONTAINER ID>
```
Portainer container management tool

```shell
$ docker volume create portainer_data
$ docker run -d -p 9000:9000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    --name portainer \
    --restart always \
    portainer/portainer
```
/etc/nginx/sites-enabled/default

```nginx
upstream portainer {
    server 127.0.0.1:9000;
}

server {
    listen 80;

    location /portainer/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://portainer/;
    }

    location /portainer/ws/ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_pass http://portainer/ws/;
    }
}
```
Docker data persistence
A data volume is a special directory that can be used by one or more containers. It bypasses the union file system and provides a number of useful features:

- Data volumes can be shared and reused between containers
- Changes to a data volume take effect immediately
- Updates to a data volume do not affect the image
- A data volume persists by default, even after the container is deleted

Creating a data volume

```shell
$ docker volume create my-vol
```

Besides the create subcommand, docker volume supports inspect (view details), ls (list existing data volumes), prune (clean up unneeded data volumes), and rm (delete a data volume).
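A short transcript of the persistence property (this assumes Docker is installed and the daemon is running; demo-vol is a throwaway name): data written into a named volume survives the containers that used it.

```shell
# Create a volume, write to it from one container, read it from another
docker volume create demo-vol
docker run --rm -v demo-vol:/data alpine sh -c 'echo hello > /data/msg'
docker run --rm -v demo-vol:/data alpine cat /data/msg
docker volume rm demo-vol
```

The second docker run prints the file written by the first, even though that container was already removed by --rm.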
Binding a data volume
--mount

```shell
$ docker run -d -P \
    --name web \
    --mount source=my-vol,target=/webapp \
    training/webapp \
    python app.py
```

-v, --volume

```shell
$ docker run -d -P \
    --name web \
    -v my-vol:/webapp \
    training/webapp \
    python app.py
```

The source can also be an absolute path to any location on the host.
If you mount a single file into the container and edit it with tools such as vi or sed --in-place, the file's inode may change. Since Docker 1.1, this results in an error. The recommended approach is therefore to mount the directory containing the file into the container instead.
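The inode change is easy to demonstrate without Docker at all, because it is a property of the editing tools themselves: GNU sed -i writes a temporary file and renames it over the original, producing a new inode, while a file bind mount keeps pointing at the old one.

```shell
# Show that sed -i replaces the file rather than editing it in place:
# the inode number changes after the edit.
f=$(mktemp)
echo "hello" > "$f"
before=$(ls -i "$f" | awk '{print $1}')
sed -i 's/hello/world/' "$f"
after=$(ls -i "$f" | awk '{print $1}')
echo "inode before: $before, after: $after"
rm -f "$f"
```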
Dockerfile
Please see the Dockerfile instructions article for details.
Application installation
GitLab from the official image
docker-compose.yml:

```yaml
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'gitlab.yangjunning.pro'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://gitlab.yangjunning.pro:8929'
      gitlab_rails['gitlab_shell_ssh_port'] = 2224
  ports:
    - '8929:8929'
    - '2224:22'
  volumes:
    - 'gitlab_config:/etc/gitlab'
    - 'gitlab_logs:/var/log/gitlab'
    - 'gitlab_data:/var/opt/gitlab'
```
Run the container:

```shell
$ docker-compose up -d
```

Update GitLab:

```shell
$ docker-compose pull
$ docker-compose up -d
```
Docker-related scheduled tasks

```shell
$ crontab -e
```

```
# Delete unused images (not only dangling ones) and unused volumes, then back up the volumes
00 00 * * * docker image prune -af && docker volume prune -f && rsync -arv /var/lib/docker/volumes /backups/docker
```

For details about how to synchronize the backup files with qshell, see Backing up to Qiniu.
Concepts
DevOps
DevOps (a portmanteau of Development and Operations) is a culture, movement, or practice that values communication and collaboration between software developers (Dev) and IT operations staff (Ops). By automating the software delivery and infrastructure change processes, it aims to build, test, and release software faster, more frequently, and more reliably.
Introducing DevOps can profoundly affect product delivery, testing, feature development, and maintenance (including hot patches, once rare but now common). In organizations lacking DevOps capabilities, there is an information "gap" between development and operations: operators want better reliability and security, developers want more responsive infrastructure, and business users want more features released to end users faster. This information gap is where things most often go wrong.
Virtualization
In computing, virtualization is a resource-management technique: it abstracts various physical computing resources, such as servers, networks, memory, and storage, and presents them in a transformed form, breaking down the rigid barriers between physical structures so that users can work with these resources in better ways than their original configuration allowed.
Containers
Containers effectively partition the resources managed by a single operating system into isolated groups, so as to better balance conflicting resource demands between those groups. Unlike virtualization, this requires neither instruction-level emulation nor just-in-time compilation: containers run instructions natively on the CPU without any special interpretation mechanism, and they also avoid the complexity of paravirtualization and system-call translation.
Layered storage
Because an image contains a complete operating-system root file system, and its size is often huge, Docker makes full use of Union FS technology and designs images as a layered storage architecture. So, strictly speaking, an image is not a single packaged file like an ISO. An image is a virtual concept whose actual embodiment is not one file but a group of file systems, or rather a combination of multiple layered file systems.
An image is built layer by layer, each new layer on top of the previous one. Once built, a layer never changes; any change in a later layer happens only in that layer. For example, deleting a file from a previous layer does not actually delete it from that layer; it is only marked as deleted in the current layer. The file is not visible when the final container runs, but it still travels with the image. Therefore, extra care is needed when building images: each layer should contain only what that layer needs to add, and anything extra should be cleaned up before the layer is finished.
Layered storage also makes it easier to reuse and customize images. You can even use a previously built image as the base layer and add new layers on top, customizing what you need to build a new image.
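A tiny Dockerfile sketch of the caveat above (file names and sizes are illustrative): a file created in one layer and deleted in a later layer still travels with the image, so cleanup must happen within the same layer.

```dockerfile
FROM alpine
# BAD: the 50 MB file lives on in the first RUN layer even after deletion
RUN dd if=/dev/zero of=/bigfile bs=1M count=50
RUN rm /bigfile
# GOOD: create and remove within a single layer, so nothing is kept
# RUN dd if=/dev/zero of=/bigfile bs=1M count=50 && rm /bigfile
```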
Daemon process
In a multitasking operating system, a daemon is a computer program that runs in the background. Daemons are started as processes, and their names usually end with the letter "d": syslogd, for example, is the daemon that manages the system log.
Daemons generally have no controlling parent process (PPID=1) and sit directly under init in the UNIX process hierarchy. A program typically becomes a daemon by forking a child process and then immediately terminating the parent, so that the child runs under init. This method is often called "forking off and dying".
Systems usually start daemons at boot time. Daemons respond to network requests, hardware activity, and requests from other programs, and can also configure hardware (such as devfsd on some Linux systems), run scheduled tasks (such as cron), and perform other work.
In DOS, such programs are called terminate-and-stay-resident programs (TSR). On Windows, daemon duties are performed by programs called Windows services.
In the classic Mac OS, such programs were called "extensions". Mac OS X, being Unix-like, has daemons proper.
Docker vs. virtual machines

| Feature | Container | Virtual machine |
| --- | --- | --- |
| Startup | Seconds | Minutes |
| Disk usage | Generally MB | Generally GB |
| Performance | Near native | Slower than native |
| Capacity per machine | Thousands of containers | Usually dozens of VMs |

A VM virtualizes at the hardware level and needs an extra VM management application plus a guest operating system layer, while a Docker container virtualizes at the operating-system level and directly reuses the local host's operating system, making it more lightweight.
Further reading
- DevOps knowledge platform Ledge
- Jenkins + Docker continuous integration
Practice
- SonarQube
- Nexus Repository Manager
- ShowDoc
- Verdaccio
- EasyMock
- Sentry
- Ansible
- code-push-server
- BugOut
This article was first published on Yang Junning's blog. Creating it was not easy, and your like 👍 is my motivation!