Introduction to Docker

Docker is an open platform for building, shipping, and running applications. Docker uses the container as its basic unit of resource isolation and scheduling; a container encapsulates the entire environment an application needs at runtime. With Docker, you can separate applications from infrastructure and manage infrastructure the same way you manage applications, enabling fast deployment and delivery of projects.

Docker is written in Go and encapsulates and isolates processes using Linux kernel technologies such as cgroups, namespaces, and union file systems like AUFS; it is a form of operating-system-level virtualization. The initial implementation was based on LXC, but since version 0.7 LXC was removed in favour of libcontainer, and since 1.11 Docker has further evolved to use runC and containerd.

  • runC: a Linux command-line tool for creating and running containers according to the OCI container runtime specification.
  • containerd: a daemon that manages the container lifecycle and provides a minimal set of functions for executing containers and managing images on a node.

The following figure shows the difference between Docker and traditional virtualization: traditional virtual machine technology virtualizes a full set of hardware, runs a complete guest operating system on it, and then runs the required application processes on that system. In contrast, the application processes in a Docker container run directly on the host's kernel; the container has no kernel of its own and involves no hardware virtualization, so it is more lightweight and portable than a traditional virtual machine.

II. Docker Architecture and Core Concepts

Docker uses client-server architecture, where Docker clients send commands to Docker daemons, which are responsible for building, running, and distributing Docker containers. Docker clients and daemons use the REST API to communicate through UNIX sockets or network interfaces. The core concepts are as follows:

2.1 Image

A Docker Image is a special file system that contains the resources and environment a program needs to run. An image does not contain any dynamic data, and its contents do not change after it is built.

Because an image contains the complete root file system of an operating system, it is often large. Docker therefore makes full use of union file system (Union FS) technology and designs images as a layered storage architecture: an image is actually composed of multiple layers of file systems. Images are built layer by layer, with each layer serving as the foundation of the next. Once a layer is built, it never changes; any change in a later layer happens only in that layer. For example, deleting a file from a previous layer does not actually remove the file from that layer; it merely marks the file as deleted in the current layer. When the final container runs, you will not see the file, but it still ships with the image. Therefore, extra care should be taken when building images: each layer should contain only what needs to be added at that layer, and anything extra should be cleaned up before the layer is committed.
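As a sketch of this best practice (the base image and packages here are illustrative, not from the original text), installing and cleaning up in a single RUN instruction keeps the deleted files out of every image layer:

```dockerfile
# Base image chosen for illustration only
FROM ubuntu:22.04

# Good: download, install, and clean up in ONE layer, so the apt
# cache never becomes part of any image layer.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# Bad (for comparison): cleaning up in a SEPARATE RUN would only mark
# the files as deleted in a new layer; they would still be shipped
# inside the earlier layer.
# RUN rm -rf /var/lib/apt/lists/*
```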

Layered storage also makes it easier to reuse and customize images: you can use a previously built image as a base layer and add new layers on top to build a new, customized image.

2.2 Container

The relationship between an Image and a Container is similar to that between classes and instances in object-oriented programming. An Image is a static definition, and a Container is a running instance of an image. Containers can be created, started, stopped, deleted, paused, and so on.

The essence of a container is a process, but unlike a process that executes directly on the host, a container process runs in its own separate namespaces. A container can therefore have its own root file system, its own network configuration, its own process space, and even its own user ID space. Processes in the container run in an isolated environment, as if they were operating on a system separate from the host. This isolation makes containerized applications safer than applications running directly on the host.

As mentioned earlier, images use layered storage, and so do containers. At runtime, each container creates its own storage layer on top of the image's read-only layers. We can call this layer, prepared for the container's runtime reads and writes, the container storage layer. Its lifetime is the same as the container's: when the container is deleted, the container storage layer is deleted with it, and any information stored in it is lost.

According to Docker best practices, a container should not write any data into its storage layer; the container storage layer should remain stateless. All file write operations should use data volumes or bind mounts to host directories. Reads and writes to these locations go directly to the host (or network storage), skipping the container storage layer, which gives higher performance and stability. The life cycle of a data volume is independent of the container: the volume does not die when the container does, so its data is not lost when the container is deleted or re-created.
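A minimal sketch of this pattern using a named volume (the volume, container, and image names below are assumptions for illustration):

```shell
# Create a named volume and mount it into the container
docker volume create mydata
docker run -d --name db -v mydata:/var/lib/mysql mysql:8.0

# Even after the container is deleted, the volume and its data survive,
# so a re-created container sees the same data
docker rm -f db
docker run -d --name db -v mydata:/var/lib/mysql mysql:8.0
```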

2.3 Registry

Once an image is built, it can easily be run on the current host; but to use the image on another server, you need a centralized service to store and distribute images: the image registry (Image Registry). Docker Hub is the public registry officially provided by Docker, hosting images for a large number of commonly used software packages. Of course, for security and confidentiality, you can also run your own private registry.
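For example, publishing an image to a private registry might look like this (the registry address and image names are hypothetical):

```shell
# Tag the local image with the private registry's address, then push it
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0

# On another server, pull the image back down
docker pull registry.example.com/team/myapp:1.0
```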

2.4 Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. Daemons can also communicate with other daemons.

2.5 Docker Client

The Docker client (docker) is the main way users interact with Docker. When you run commands such as docker run, the client sends them to dockerd, which executes them. A single Docker client can communicate with multiple daemons.

III. Common Docker Commands

Docker provides a large number of commands for managing images, containers, and services. The unified format is docker [OPTIONS] COMMAND, where OPTIONS are optional parameters. Note that docker commands generally require root privileges, because the Docker daemon, which receives and executes the commands sent by the docker client, runs as root. Common commands and their usage scenarios are as follows:

3.1 Basic Commands

  • docker version: displays Docker version information
  • docker info: displays Docker system-wide configuration information
  • docker help: displays help information

3.2 Image Commands

1. docker search <image name>

Finds images with the specified name in the official image repository, Docker Hub. A common parameter is --no-trunc, which displays the complete image description.

2. docker images

Lists information about all top-level images. Common parameters are as follows:

  • -a: displays all images, including intermediate image layers
  • -q: displays only image IDs
  • --digests: displays digest information
  • --no-trunc: displays the full image information

3. docker pull <image name>[:TAG]

Download the image from the official repository. :TAG indicates the image version. If you do not add the image TAG, the latest version is downloaded by default.

4. docker rmi <image name or ID>[:TAG]

If you do not add :TAG, the latest tag of the image is deleted by default. If a container based on the image exists, the image cannot be deleted directly; in that case, you can use the -f parameter to force deletion. The rmi command supports batch deletion, with multiple image names separated by spaces. To delete all images, use the command docker rmi -f $(docker images -qa).
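A few examples of image deletion (the image names are illustrative):

```shell
docker rmi nginx:1.25                # delete a specific tag
docker rmi -f nginx                  # force deletion even if containers use the image
docker rmi redis mysql               # batch deletion, names separated by spaces
docker rmi -f $(docker images -qa)   # delete ALL local images
```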

3.3 Container-related commands

1. docker run [OPTIONS] IMAGE [COMMAND] [ARG…]

run is the most important command in Docker; it creates and starts a container. It has many available parameters; you can use docker run --help to view all of them. Common parameters are as follows:

  • -i: runs the container in interactive mode, keeping the input stream open;
  • -t: allocates a pseudo-terminal, usually combined with -i to interact with the container;
  • -d: runs the container in detached (background) mode;
  • --name: specifies a name for the container; if omitted, Docker assigns a random one;
  • -c: sets the CPU shares for all processes running in the container. This is a relative weight; the actual processing speed depends on the host CPU;
  • -m: limits the memory available to all processes in the container, with units B, K, M, or G;
  • -v: mounts a data volume. Multiple -v flags can mount several volumes at once. The format is [host-dir]:[container-dir]:[rw|ro], where the optional rw/ro suffix specifies the mode: rw for read-write, ro for read-only;
  • -p: publishes a container port to the host, in the format hostPort:containerPort. By exposing ports, external hosts can access applications in the container.
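As a sketch combining several of the parameters above (the container name, host path, and image are assumptions):

```shell
# -d: run in the background; --name: container name "web";
# -p: host port 8080 maps to container port 80; -m: limit memory to 512 MB;
# -v: read-only bind mount of a host directory into the container.
docker run -d --name web -p 8080:80 -m 512m \
  -v /srv/www:/usr/share/nginx/html:ro nginx
```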

2. docker ps [OPTIONS]

Lists all currently running containers. Common parameters are as follows:

  • -a: lists all containers, both running and stopped
  • -n: shows the n most recently created containers
  • -q: displays only container IDs
  • --no-trunc: does not truncate the output

3. Start/restart/stop/kill a container

The commands for starting and stopping containers are docker start | restart | stop | kill <container name or ID>: start starts an existing stopped container, restart restarts a running container, stop gracefully stops a running container, and kill forcibly stops it immediately.

4. Access the running container

There are two common ways to enter a running container and interact with the container’s main process:

  • docker attach <container name or ID>
  • docker exec -it <container name or ID> /bin/bash

5. Exit the container

There are two common ways to exit a running container:

  • exit: exits and stops the container (when attached to its main process);
  • Ctrl+P+Q: detaches from the container without stopping it.

6. docker rm <container name or ID>

Deletes a stopped container. The common parameter is -f, which forcibly deletes the container even if it is still running. To remove all containers, use the docker rm -f $(docker ps -aq) command.

7. View container information

You can use docker inspect [OPTIONS] NAME|ID [NAME|ID...] to view detailed information about a container or image, and the --format parameter to specify an output template, as in the following example:

docker inspect --format='{{.NetworkSettings}}'  32cb3ace3279

8. View container run logs

You can run the docker logs [OPTIONS] CONTAINER command to view the run logs of processes in the CONTAINER. The common parameters are as follows:

  • --details: displays extra log details
  • -f: follows the log output
  • --tail: shows the given number of lines from the end of the log
  • -t: displays timestamps
  • --since: shows logs since a given time
  • --until: shows logs until a given time
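For example, a typical way to tail a container's logs (the container name is hypothetical):

```shell
docker logs -f --tail 100 -t web               # follow the last 100 lines, with timestamps
docker logs --since 2023-01-01T00:00:00 web    # only logs after the given time
```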

IV. Dockerfile

A Dockerfile is a text file, with its own instruction set and format, that Docker uses to build images. You can build an image from a Dockerfile with the build command: docker build [OPTIONS] PATH | URL | -.

A Dockerfile describes the steps for assembling an image, and each instruction is executed separately. Every instruction except FROM executes on top of the image produced by the previous instruction and, when finished, generates a new image layer; the new layer is stacked on the existing ones to form a new image. To speed up image builds, the Docker daemon caches intermediate images during the build. When building an image, it compares the next instruction in the Dockerfile against all child images of the current base image; if one of them was generated by the same instruction, the cache is hit and that image is reused instead of being regenerated. Common instructions are as follows:

1. FROM

The FROM directive specifies the base image, so every Dockerfile must begin with a FROM directive. FROM can appear multiple times, in which case multiple images are built, and the Docker command line interface outputs the ID of each image after it is created. The common format is: FROM <image>[:<tag>] [AS <name>].

2. MAINTAINER

The MAINTAINER directive sets the author name and email address. It is now deprecated; a LABEL should be used instead.

3. LABEL

The LABEL directive specifies metadata associated with the image. The format is: LABEL <key>=<value> <key>=<value> <key>=<value> ....

4. ENV

The ENV directive is used to declare environment variables that can be referenced in subsequent directives in the format of $variable_name or ${variable_name}. There are two common formats:

  • ENV <key> <value>: used to set a single environment variable;
  • ENV <key>=<value> ...: Used to set multiple environment variables at a time.
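A short Dockerfile fragment illustrating both forms and a later reference (the variable names and values are illustrative):

```dockerfile
# Single-variable form
ENV APP_HOME /opt/app
# Multi-variable form
ENV APP_USER=deploy APP_PORT=8080

# Environment variables can be referenced by subsequent directives
WORKDIR ${APP_HOME}
EXPOSE ${APP_PORT}
```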

5. EXPOSE

EXPOSE declares the port(s) the container exposes to the outside. The format is: EXPOSE <port>[/<protocol>] ..., where the optional protocol specifies whether the port listens on TCP or UDP. If no protocol is specified, TCP is the default.

6. WORKDIR

WORKDIR is used to indicate the working directory and can be used multiple times. If a relative path is specified, it will be relative to the path of the last WORKDIR directive. The following is an example:

WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd   # prints /a/b/c

7. COPY

The format is COPY <src>... <dest>. It copies files from the given path in the build context into the new image; if the destination path does not exist, it is created automatically.

8. ADD

The format is ADD <src>... <dest>. It is similar to the COPY directive but more powerful: for example, <src> can be a network URL, and if <src> points to a local compressed archive, ADD automatically extracts it after copying.

9. RUN

The RUN instruction creates a new container based on the image produced by the previous instruction, runs the command inside it, and then commits the container as a new image when the command finishes. It supports the following two formats:

  • RUN <command> (shell format)
  • RUN ["executable", "param1", "param2"] (exec format)

In shell format, commands are run through /bin/sh -c; in exec format, the command is executed directly and the container does not invoke a shell, which means normal shell processing does not occur. For example, RUN ["echo", "$HOME"] does not substitute $HOME; the correct form is RUN ["sh", "-c", "echo $HOME"]. The CMD directive below has the same caveat.
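The difference can be shown directly in a Dockerfile fragment (for illustration only):

```dockerfile
# shell format: runs via /bin/sh -c, so $HOME is expanded
RUN echo $HOME

# exec format: no shell is invoked, so this prints the literal string $HOME
RUN ["echo", "$HOME"]

# exec format that invokes a shell explicitly, so $HOME is expanded
RUN ["sh", "-c", "echo $HOME"]
```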

10. CMD

  • CMD ["executable","param1","param2"] (exec format, preferred)
  • CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
  • CMD command param1 param2 (shell format)

The CMD directive provides defaults for the container at runtime, which can be a command or just parameters. A Dockerfile can contain multiple CMD directives, but only the last one takes effect. CMD has the same formats as RUN but a different purpose: RUN generates a new image layer during the build phase, whereas CMD is by default the first command executed when the container starts. If the user supplies command arguments to docker run, they override the command in CMD.
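For example, assuming a hypothetical image named demo built with CMD ["echo", "hello"], the override behaves like this:

```shell
docker run demo          # no arguments: runs the default CMD, printing "hello"
docker run demo ls /     # arguments replace CMD entirely, so "ls /" runs instead
```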

11. ENTRYPOINT

The ENTRYPOINT directive supports the following two formats:

  • ENTRYPOINT ["executable", "param1", "param2"] (exec format, preferred)
  • ENTRYPOINT command param1 param2 (shell format)

The ENTRYPOINT directive is similar to CMD in that it sets the command a container executes each time it starts. The difference lies in how runtime arguments are handled: arguments passed to docker run replace CMD entirely, but they cannot replace ENTRYPOINT; instead they are appended to it as parameters (ENTRYPOINT itself can only be overridden with the --entrypoint flag). In other words, the command in ENTRYPOINT is always executed. Consider the following Dockerfile fragment:

ENTRYPOINT ["/bin/echo", "Hello"]
CMD ["world"]

When you execute docker run -it image, the output is Hello world. When you execute docker run -it image spring, the parameter in CMD is overridden and the output is Hello spring.

V. Examples

5.1 Deploying a Spring Boot Project Based on the CentOS Image

Most production projects are deployed on Linux servers, so let's start from a basic Linux image and package our project (a Spring Boot project in this example) into a complete, runnable image. First, create a Dockerfile with the following content:

#Build from the official centos base image
FROM centos
#The author information
MAINTAINER  [email protected]

#Copy the JDK installation package to the container and decompress it automatically
ADD jdk-8u211-linux-x64.tar.gz /usr/java/
#Copy the project Jar package into the container
COPY spring-boot-base.jar  /usr/app/
#Configure Java environment variables
ENV JAVA_HOME /usr/java/jdk1.8.0_211
ENV JRE_HOME ${JAVA_HOME}/jre
ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
ENV PATH ${JAVA_HOME}/bin:$PATH

#Project start command
ENTRYPOINT ["java", "-jar", "/usr/app/spring-boot-base.jar"]

Put the JDK installation package, the Spring Boot project Jar package, and the Dockerfile file in the same directory, and then execute the following image build command:

docker build -t spring-boot-base-java:latest .

After the image is built, you can run the following command to start the image:

docker run -it  -p 8080:8080 spring-boot-base-java

To observe the startup process, you can run the container in interactive mode; in an actual deployment, use the -d parameter to run it in the background.

5.2 Deploying the Spring Boot Project Based on a JDK Image

In the project above, we built from the most basic CentOS image. Since JDK images are already available on Docker Hub, we can also choose to build from a JDK image, which makes the build simpler. The build steps are exactly the same as above; only the contents of the Dockerfile differ, as follows:

#Since we only need to run the environment, here we start directly from the JRE image from the official repository
FROM openjdk:8u212-jre
#The author information
MAINTAINER  [email protected]

#Copy the project Jar package into the container
COPY spring-boot-base.jar  /usr/app/
#Project start command
ENTRYPOINT ["java", "-jar", "/usr/app/spring-boot-base.jar"]

References

  1. Docker official introduction: docs.docker.com/engine/dock…
  2. Docker CLI and Dockerfile official documentation: docs.docker.com/reference/
  3. Zhejiang University SEL Laboratory. Docker Containers and Container Cloud (2nd edition). Posts and Telecommunications Press. 2016-10
  4. Docker: From Beginner to Practice: yeasy.gitbooks.io/docker_prac…

For more articles, please visit the Full Stack Engineer Manual on GitHub: github.com/heibaiying/…