What is Docker

  • Docker is an open platform for developing, shipping, and running applications. Docker speeds up software delivery by letting users separate applications from their infrastructure and package them into small units (containers).

  • Docker containers are similar to virtual machines, but they differ in principle: containers virtualize the operating-system layer, while virtual machines virtualize hardware. Containers are therefore more portable and use server resources more efficiently.

Core concepts

Image

An image is a read-only collection of files and directories. It is the foundation on which a container starts: an image is a prerequisite for starting a Docker container.

Image operations

  • Pull an image: use the docker pull command to pull an image from a remote repository to the local machine.

    # Pull the node image; if it is not found locally, it is downloaded from Docker Hub
    docker pull node
  • Rename an image: use the docker tag command to give an image a new name.

    # Rename the match-list image. Afterwards a testlist image appears in the
    # image list; the IMAGE IDs of the two images are identical
    docker tag match-list testlist
  • View images: run the docker image ls or docker images command to view local images

    # List local images
    docker images
    # Query a specific image
    docker image ls node
    # Filter with grep
    docker images | grep node
  • Delete an image: run the docker rmi command to delete an unwanted image

    docker rmi node
  • Build an image

    • The first way is to commit an already running container as an image using the docker commit command

      docker commit busybox busybox:hello
    • The second way is to build an image from a Dockerfile using the docker build command

      A Dockerfile is a text file containing all of the user's build instructions

      Example Dockerfile:

      # Dockerfile
      # Base the image on Node
      FROM node:carbon-alpine
      # Set environment variables
      ENV NODE_ENV=prod
      # Configure the apk package mirror
      RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories
      # Set the time zone
      RUN apk update \
       && apk add tzdata \
       && rm -rf /var/cache/apk/*
      # Set the working directory
      WORKDIR /match-list
      # Copy the files in the current directory into the working directory
      COPY . /match-list
      # Specify the port the container listens on
      EXPOSE 4001
      # Run npm run start:docker after the container starts
      CMD ["npm", "run", "start:docker"]

      Build the image

        # Build an image from the Dockerfile. -t: the image name; .: the build context (current directory)
        docker build -t match-list .

Container

  • A container is the running instance of an image: it runs the real application process (visible with docker ps), has its own namespace isolation and resource limits, and cannot see the host's processes, environment variables, network information, and so on.
  • One image can create multiple containers. When you run a containerized environment, you actually create a read-write copy of the file system inside the container. This adds a container layer that lets you modify your own copy of the image (see the sketch below).
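
Changes made in that read-write layer can be inspected with docker diff, a quick way to see the container layer in action (the container name and file path below are illustrative):

    # Create a container that writes a file into its read-write layer
    docker run -d --name demo busybox sh -c 'echo hi > /tmp/new.txt && sleep 3600'
    # List the changes in the container layer: A = added, C = changed, D = deleted
    docker diff demo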

The life cycle

A container created with the docker create command starts in the created (initial) state. The docker start command moves it to the running state, and docker stop moves a running container to the stopped state; docker start can bring a stopped container back to the running state. docker pause moves a running container to the paused state, and docker unpause moves a paused container back to the running state. Stopped or created containers can be removed with docker rm; a running or paused container must be stopped first (or forced with docker rm -f). The walkthrough below shows these transitions.
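
A minimal walkthrough of the life cycle with the docker CLI (the container name demo-lc is illustrative):

    docker create --name demo-lc busybox sleep 3600   # created state
    docker start demo-lc                              # created -> running
    docker pause demo-lc                              # running -> paused
    docker unpause demo-lc                            # paused -> running
    docker stop demo-lc                               # running -> stopped
    docker rm demo-lc                                 # remove the stopped container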

Common commands

  • View information about running containers

      docker ps

  • View information about all containers, including stopped ones

      docker ps -a

  • Enter a container (to inspect its workspace)

      docker exec -it CONTAINER sh

  • View a container's console output

      docker logs CONTAINER
  • Import/export containers

    Mainly used for container migration; all files in the container are migrated

    # Export a container: newImage.zip is generated in the current folder.
    # Copy the file to another machine and use the import command to migrate the container
    docker export CONTAINER > newImage.zip
    # Import: newImage.zip is imported as an image named newimage
    docker import newImage.zip newimage

Repository

Docker’s image repository is similar to a code repository for storing and distributing Docker images

Public repositories

Docker Hub

Private repositories

Docker officially provides an open-source image registry, Distribution, whose image is published in the registry repository on Docker Hub for us to download
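
A minimal sketch of running a private registry from that image and pushing to it (the port and names are illustrative):

    # Start a private registry on port 5000
    docker run -d -p 5000:5000 --name registry registry:2
    # Tag a local image so it points at the private registry, then push it
    docker tag node localhost:5000/node
    docker push localhost:5000/node
    # Pull it back from the private registry
    docker pull localhost:5000/node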

Common commands

  • Log in

    # Docker Hub is used by default. To log in to a third-party or self-built
    # image repository, append the registry server to docker login
    docker login registry.cn-beijing.aliyuncs.com

  • Push an image

    # Registry is the registry server. Docker pushes to and pulls from docker.io
    # by default; for a private repository, replace it with your own registry server.
    # Repository is the image repository; library is the default.
    # Image is the image name.
    # Tag is the image tag; if no tag is specified, latest is used by default.
    docker push [Registry]/[Repository]/[Image]:[Tag]

  • Pull an image

    docker pull [Registry]/[Repository]/[Image]:[Tag]

Underlying implementation principles and key technologies

Resource isolation

Docker uses Linux’s Namespace technology to isolate various resources.

  • Mount Namespace: different processes can see different mount directories. Inside a Mount Namespace, a container sees only its own mount information, and mount operations in the container do not affect the host's mount directories.

  • PID Namespace: used to isolate processes. In different PID Namespaces, processes can have the same PID. The PID Namespace is what makes the main process of each container PID 1, while the same processes have different PIDs on the host. For example, a process whose PID is 122 on the host can have PID 1 inside the container.

  • UTS Namespace: primarily used to isolate host names; it allows each UTS Namespace to have a separate host name. For example, if our host name is docker, the UTS Namespace lets the host name inside the container be lagoudocker or any other customized host name.

  • IPC Namespace: used to isolate communication between processes. For example, when the PID Namespace and IPC Namespace are used together, processes in the same IPC Namespace can communicate with each other, but processes in different IPC Namespaces cannot.

  • User Namespace: It is used to isolate users and User groups. A typical application scenario is that processes running as a non-root User on a host can be mapped as root users in a separate User Namespace. Using the User Namespace allows a process to have root privileges in the container while being a normal User on the host.

  • Net Namespace: isolates network devices, IP addresses, and ports. The Net Namespace allows each process to have its own IP address, ports, and NIC information. For example, if the host IP address is 172.16.4.1, an independent IP address such as 192.168.1.1 can be set in the container.

When Docker creates a new container, it creates these six Namespaces and then adds the container's processes to them, so that processes in a Docker container can see only the system resources in their own Namespace, as the demo below shows.
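
A quick demonstration of the PID and UTS isolation described above (the hostname value is illustrative):

    # PID Namespace: inside the container, ps shows the container's own process as PID 1
    docker run --rm busybox ps
    # UTS Namespace: the container's host name is independent of the host's
    docker run --rm --hostname lagoudocker busybox hostname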

Resource constraints

Resource restrictions are implemented through Cgroups

  • Resource limit: Limits the resource usage. For example, you can limit the memory upper limit of a service to ensure the secure running of other services on the host.

  • Priority control: Different groups can have different priorities for resources (CPU, disk I/O, etc.).

  • Audit: Calculates the resource usage of the control group

  • Control: Control the suspension or resumption of a process.

    # Start an nginx container and limit its memory to 1 GB
    docker run -it -m=1g nginx

Note: while cgroups can limit resources, they cannot guarantee resource usage. For example, cgroups can limit a container to at most one CPU core, but they do not guarantee that the container always gets a full core; when CPU resources are contended, the container may get less.
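
A small sketch of setting such limits and observing usage against them (the container name is illustrative):

    # Start a container limited to one CPU core and 1 GB of memory
    docker run -d --name limited --cpus=1 -m=1g nginx
    # Watch live CPU/memory usage against the configured limits
    docker stats limited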

A network model

In order to build better container network standards, the Docker team separated the network functionality out of Docker into an independent project, libnetwork, which provides network functions for Docker in the form of plug-ins.

  • Null network mode: It helps us build a container environment without network access to ensure data security.

     docker run --net=none -it busybox
  • Bridge mode: the default network mode when a container starts. Containers can communicate with each other: one container can reach another directly through the container's IP address. The host and containers can also communicate: a service started in a container can be requested directly from the host. A port-mapping sketch follows this list.

  • Host network mode: lets processes in a container share the host network, in order to listen on or modify the host network.

    # No separate Net Namespace is created; the host's network is shared
    docker run -it --net=host busybox
  • Container Network mode: Two containers can be placed in the same network namespace so that services can be accessed using localhost.

      docker run -d --name=busybox1 busybox sleep 3600
      docker run -it --net=container:busybox1 --name=busybox2 busybox sh
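
A minimal bridge-mode sketch with port mapping (the names and ports are illustrative):

    # Map host port 8080 to container port 80
    docker run -d --name web -p 8080:80 nginx
    # Request the containerized service from the host
    curl http://localhost:8080
    # Look up the container's IP on the bridge network
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web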

Data storage

Docker volumes attach disks to our containers and persist container data

When we start a container, Docker creates a read-write layer on top of the image, and the container's files live in this read-write layer; when the container is deleted, all files related to the container are lost. To support stateful services, Docker introduces the concept of a Volume. A volume is essentially a file or directory that bypasses the default union file system and resides directly on the host. Volumes solve not only data persistence but also data sharing between containers. Directories or files in a container can be persisted with volumes, so data is not lost after the container restarts; for example, MySQL's data directory can be persisted with a volume to prevent data loss when the container restarts, as sketched below.
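A minimal sketch of the MySQL example (the volume name, container name, and password are illustrative):

    # Persist MySQL's data directory /var/lib/mysql in a named volume
    docker run -d --name mysql-test \
      -e MYSQL_ROOT_PASSWORD=secret \
      -v mysql-data:/var/lib/mysql \
      mysql:5.7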

Basic operation

  • Create a data volume

    docker volume create myvolume

    • On Linux, look in the /var/lib/docker/volumes directory: a _data directory has been created under the myvolume directory

    • Since macOS runs the actual Docker process in a virtual machine, you need to log in to the VM first. After logging in to the VM, you can see that a _data directory has been created under the myvolume directory in /var/lib/docker/volumes

  • Use a data volume

    docker run -it --mount source=myvolume,target=/data busybox

  • Delete a data volume

    docker volume rm myvolume
  • Data is shared between containers

    Use the --volumes-from parameter to mount an existing container's volumes when starting a new container. The parameter is followed by the name of a started container.

    • Start a container

      docker run --mount source=myvolume,target=/tmp/log --name=producer -it busybox
    • Start another container

      docker run -it --name consumer --volumes-from producer  busybox
    • The contents of files written from the producer container automatically appear in the consumer container, achieving data sharing between the two containers. It is just like two processes on a host: one process writes data to a host directory and the other reads from it; the containers share data through the host directory

  • Data is shared between hosts and containers

    To share data between the host and a container, add the -v parameter when starting the container

    # Mount the host's /tmp directory to /usr/local/data in the container
    docker run -it --name=mySqlTest -v /tmp:/usr/local/data busybox

    After the container starts, the contents of the host's /tmp directory can be accessed from /usr/local/data in the container, and the data in /usr/local/data is not lost after the container restarts

Implementation principle

On the host (or in the VM on macOS), the /var/lib/docker/volumes directory contains a directory named after each volume, with a _data directory under it. If the --mount parameter is used when a container starts, Docker maps this host directory directly to the specified directory in the container, achieving data persistence, as the inspection below shows.
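
This mapping can be confirmed with docker volume inspect, whose output includes the volume's mount point on the host:

    docker volume inspect myvolume
    # ... "Mountpoint": "/var/lib/docker/volumes/myvolume/_data" ...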

The Docker practice

Use Docker Compose to resolve the dependencies of your development environment

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure all the services your application needs, then create and start all of them from that configuration with a single command

The Docker Compose file is divided into three main parts

  • Services: The service defines the container startup configuration, just like the container startup parameters we pass when executing the Docker run command, specifying how the container should be started, such as the container startup parameters, container images and environment variables, etc.

  • Networks: Networks define the network configuration of the container, just as we create the network configuration by executing the Docker network create command

  • Volumes: Data volumes define the volume configuration of the container, just as we used the Docker volume create command to create data volumes.

Examples of operation

Use Docker to deploy a Node project backed by a MongoDB database, and proxy it with Nginx

version: '2'

services:
  cms-backend:
    image: 'registry-new.ijunhai.com/nexus/junhai/jh-cms'
    volumes:
      - '/work/data/cms-data/upload-data:/jh-cms/app/public/upload'
      - '/work/data/cms-data/dist-data:/jh-cms/dist'
      - '/work/data/cms-data/template-data:/jh-cms/template'
      - '/work/logs/cms-log:/root/logs'
    environment:
      - EGG_MONGODB_URL=mongodb://cms-db/website
      - ENV_ONLINE=dev_online
    links:
      - cms-lb:cms-backend.ijunhai.com
    depends_on:
      - cms-db
    networks:
      - cms-net
      - cms-sys
  cms-db:
    image: 'mongo:latest'
    volumes:
      - '/work/data/cms-data/mongo-data:/data/db'
    networks:
      - cms-sys
  cms-lb:
    image: 'nginx:latest'
    ports:
      - 80:80
    volumes:
      - '/work/config/cms-config/nginx:/etc/nginx/conf.d'
    networks:
      - cms-net

networks:
  cms-net:
    driver: bridge
  cms-sys:
    driver: bridge

Docker Compose Compose template file

  • Three services are defined: cms-backend, cms-db, and cms-lb.
  • image: the image each service runs from
  • volumes: mount host directories into the containers
  • networks: cms-backend and cms-lb share the network cms-net; cms-backend and cms-db share the network cms-sys. Containers on the same network can communicate with each other: from the cms-backend container you can reach cms-db or cms-lb through the container's IP address, and vice versa
  • depends_on: specifies dependencies between services so that depended-on services start first; cms-backend depends on the database service cms-db
  • environment: specifies environment variables used when the container starts
  • ports: exposes ports in the format HOST:CONTAINER, the host port followed by the container port it maps to

Docker Compose operation command

  build      Build services from the Compose file
  create     Create services
  down       Stop services and remove containers and networks
  events     Monitor container events in real time
  exec       Run a command in a running container
  help       Get help on a command
  images     List images
  kill       Kill containers
  logs       View container output
  pause      Pause containers
  port       Print the public ports mapped by container ports
  ps         List the containers in the project
  pull       Pull all images used by the services
  push       Push all images used by the services
  restart    Restart services
  rm         Delete stopped containers in the project
  run        Run a one-off command on a specified service
  scale      Set the number of containers running a service
  start      Start services
  stop       Stop services
  top        Display the process information of running services
  unpause    Resume paused containers
  up         Create and start services
  version    Print version information and exit
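
A typical workflow with the example file above might look like this (service names follow the example):

  # Create and start all services in the background
  docker-compose up -d
  # Check service status and follow the backend's logs
  docker-compose ps
  docker-compose logs -f cms-backend
  # Stop and remove the containers and networks
  docker-compose down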

Conclusion

  • Docker helps us deploy a project quickly, and saves us from configuring environments (Node, PHP, Java, MongoDB, and so on) on every machine. Combined with Docker Compose to orchestrate multiple containers, it greatly improves our development efficiency and avoids polluting our development machine's configuration.

  • Docker also solves various problems in the CI/CD process. Docker + Jenkins + GitLab can be used at work to build a CI/CD system for continuous integration and continuous deployment: code is hosted in GitLab; by configuring GitLab and Jenkins to call each other, a push to the GitLab repository automatically triggers an image build, the image is pushed to a remote image repository, and finally the latest version of the image is deployed to the remote server. A hypothetical sketch of those steps follows.
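
    A hypothetical sketch of such a pipeline's shell steps (the registry URL, image name, and server are placeholders, not from the original):

      # Build and push the latest image (registry.example.com/team/app is a placeholder)
      docker build -t registry.example.com/team/app:latest .
      docker push registry.example.com/team/app:latest
      # Deploy on the remote server: pull the new image and recreate the container
      ssh deploy@server "docker pull registry.example.com/team/app:latest \
        && docker rm -f app \
        && docker run -d --name app -p 80:4001 registry.example.com/team/app:latest"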

Reference: Guo Shao from shallow to deep Understanding of Docker