
1 Basic Concepts

Docker consists of three basic concepts:

  • Image
  • Container
  • Repository

By understanding these three concepts, you understand the entire Docker life cycle.

2 Docker Image

As we all know, an operating system is divided into kernel space and user space. For Linux, after the kernel starts, the root file system is mounted to provide user-space support. A Docker image is essentially such a root file system. For example, the official ubuntu:18.04 image contains a complete root file system of a minimal Ubuntu 18.04 system.

A Docker image is a special file system. In addition to the programs, libraries, resources, and configuration files required by the container at runtime, it also contains configuration parameters prepared for the runtime (such as anonymous volumes, environment variables, and users). An image does not contain any dynamic data, and its contents do not change after it is built.
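
To make the image concept concrete, here are the basic commands for fetching and inspecting an image (a minimal sketch that assumes Docker is already installed; ubuntu:18.04 is simply the example used above):

$ docker pull ubuntu:18.04            # download the image from the default Registry (Docker Hub)
$ docker image ls ubuntu              # list local ubuntu images with their tags and sizes
$ docker image inspect ubuntu:18.04   # show the image's configuration (env vars, volumes, layers, etc.)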

3 Docker Container

The relationship between an image and a container is similar to that between a class and an instance in object-oriented programming: the image is the static definition, and the container is a running instance of that image. Containers can be created, started, stopped, deleted, paused, and so on.
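
These lifecycle operations map directly onto docker commands. A minimal sketch, assuming the ubuntu:18.04 image and a hypothetical container name web:

$ docker run -d --name web ubuntu:18.04 sleep infinity   # create and start a container
$ docker pause web                                        # pause it
$ docker unpause web                                      # resume it
$ docker stop web                                         # stop it
$ docker start web                                        # start it again
$ docker rm -f web                                        # delete it (force removal even if running)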

The essence of a container is a process, but unlike processes that execute directly on the host, container processes run in their own separate namespaces. A container therefore has its own root file system, its own network configuration, its own process space, and even its own user ID space. The processes inside a container run in an isolated environment, and using them feels like operating on a system separate from the host. This isolation makes containerized applications more secure than applications running directly on the host, and it is also why many newcomers to Docker confuse containers with virtual machines.

Images use layered storage, and so do containers. Every container runs on top of an image, with a writable storage layer created for that container on top of the image. We can call this layer, prepared for the container's runtime reads and writes, the container storage layer.

The container storage layer has the same life cycle as the container: when the container is deleted, the container storage layer is deleted with it. Therefore, any information stored in the container storage layer is lost when the container is deleted.
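
As a quick illustration (a sketch only; the container name test-layer is hypothetical), docker diff shows changes that live in the container storage layer, and deleting the container discards them:

$ docker run --name test-layer ubuntu:18.04 bash -c "echo hello > /tmp/data.txt"
$ docker diff test-layer     # lists files added/changed in the container storage layer (e.g. A /tmp/data.txt)
$ docker rm test-layer       # deleting the container discards that layer, and the file with it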

According to Docker best practices, containers should not write any data to their storage layer; the container storage layer should remain stateless. All file write operations should use data volumes or bind-mounted host directories instead. Reads and writes to these locations bypass the container storage layer and go directly to the host (or network storage), giving higher performance and stability.

The lifetime of a data volume is independent of the container: the container may be deleted, but the data volume lives on. Therefore, when data volumes are used, no data is lost when the container is deleted or re-created.
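
A minimal sketch of the data-volume approach (the volume name mydata and the mount point /data are just example values):

$ docker volume create mydata
$ docker run -d --name app -v mydata:/data ubuntu:18.04 sleep infinity   # writes to /data go to the volume, not the container layer
$ docker rm -f app                                                       # the container and its storage layer are gone...
$ docker run --rm -v mydata:/data ubuntu:18.04 ls /data                  # ...but a new container can still read the same data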

4 Docker Repository

After the image is built, it can be easily run on the current host. However, if the image needs to be used on other servers, we need a centralized service to store and distribute the image, and Docker Registry is such a service.

A Docker Registry can contain multiple repositories; each repository can contain multiple tags; and each tag corresponds to an image.

Typically, a repository contains images of different versions of the same software, and tags are often used to correspond to those versions. Using the format <repository>:<tag>, we can specify exactly which version of the software we want. If no tag is given, latest is used as the default tag.

Take the ubuntu image as an example: ubuntu is the name of the repository, which contains different version tags such as 16.04 and 18.04. We can specify which version of the image we want with ubuntu:16.04 or ubuntu:18.04. If the tag is omitted, as in ubuntu, it is treated as ubuntu:latest.
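
In command form, the same example looks like this (nothing here beyond the tags just mentioned):

$ docker pull ubuntu:16.04   # explicitly request the 16.04 tag
$ docker pull ubuntu:18.04   # explicitly request the 18.04 tag
$ docker pull ubuntu         # no tag given, so this is treated as ubuntu:latest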

The repository name is often presented as a two-part path, such as jwilder/nginx-proxy. In a multi-user Docker Registry environment, the first part tends to be the user name and the second part the corresponding software name. This is not absolute, however; it depends on the specific Docker Registry software or service being used.

4.1 Public Docker Registry Services

Public Docker Registry services are Registry services that are open for users to use and that allow users to manage images. These public services typically let users upload and download public images for free, and may offer paid services for managing private images.

The most commonly used public Registry service is the official Docker Hub, which is also the default Registry and hosts a large number of high-quality official images. Others include CoreOS's Quay.io, where CoreOS-related images are stored, and Google's Google Container Registry, which Kubernetes uses for its images.

For various reasons, accessing these services from within China may be slow. Some Chinese cloud service providers therefore offer Registry Mirror services for Docker Hub, known as accelerators, such as the Alibaba Cloud accelerator and the DaoCloud accelerator. With an accelerator, Docker Hub images are downloaded directly from a domestic address, which is much faster than downloading from Docker Hub itself. The configuration is detailed in the Installing Docker section.

Some Chinese cloud service providers also offer public services similar to Docker Hub, such as the NetEase Cloud image service, the DaoCloud image market, and the Alibaba Cloud image library.

4.2 Private Docker Registry

In addition to using public services, users can also set up a private Docker Registry locally. Docker officially provides the Docker Registry image, which can be used directly as a private Registry service. Further instructions on setting up a private Registry service are provided in the Private Repository section.
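
A minimal sketch of running the official registry image as a local private Registry (port 5000 is its conventional default; myimage is a hypothetical image name):

$ docker run -d -p 5000:5000 --name registry registry:2    # start a private Registry on localhost:5000
$ docker tag ubuntu:18.04 localhost:5000/myimage:latest    # re-tag a local image to point at the private Registry
$ docker push localhost:5000/myimage:latest                # push it to the private Registry
$ docker pull localhost:5000/myimage:latest                # pull it back from there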

The open-source Docker Registry image only provides the server-side implementation of the Docker Registry API, which is sufficient to support docker commands and does not affect normal use. Advanced features such as image maintenance, user management, and access control are not included; they are available in the official commercial Docker Trusted Registry.

5 Installation

Step 1: Install the necessary system tools

$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Step 2: Add software source information

$ sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 3: Update the yum cache and install docker-ce

$ sudo yum makecache fast
$ sudo yum -y install docker-ce

Step 4: Enable and start the Docker service

$ sudo systemctl enable docker
$ sudo systemctl start docker

Step 5: Verify the installation

[root@dts-test ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:28:55 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:27:34 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Configure the accelerator

For users with a Docker client version higher than 1.10.0, the accelerator can be enabled by modifying the daemon configuration file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://su9ppkb0.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
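
To confirm that the mirror has taken effect, one simple check (assuming the English-language docker info output) is:

$ docker info | grep -A 1 "Registry Mirrors"
 Registry Mirrors:
  https://su9ppkb0.mirror.aliyuncs.com/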