The background of Docker

In the single-server era, competition centered on the physical performance of one machine, such as CPU frequency and memory size. In the cloud era, what matters most is the aggregate processing capacity of clusters built with virtualization technology. The concept of virtualization has long been applied in a variety of critical scenarios: from the mainframe virtualization introduced by IBM in the 1960s, to the machine-level virtualization represented by Xen and KVM, to the container technology represented by Docker, virtualization has kept innovating and making breakthroughs. Virtualization can be implemented either through hardware emulation or through operating-system software. Container technology is the more elegant approach: it leverages existing mechanisms and features of the operating system to achieve virtualization far lighter-weight than traditional virtual machines. For this reason some people even call it "the new generation of virtualization", and cloud platforms built on containers are affectionately called "container clouds". Docker is the leader among the many container technologies.

A first look at Docker containers

Docker is an open-source container project written in the Go language. It was launched in early 2013 by dotCloud (now Docker Inc.). The Docker project joined the Linux Foundation and is released under the Apache 2.0 license.

Linux Container Technology

Like most new technologies, Docker did not spring from nowhere; it stands on the shoulders of its predecessor, Linux Containers (LXC) technology. The description of container technology on the IBM developer website is accurate: "Containers effectively divide the resources managed by a single operating system into isolated groups, to better balance conflicting resource demands between those groups. In contrast to virtualization, this requires neither instruction-level emulation nor just-in-time compilation. Containers run instructions natively on the core CPU without any specialized interpretation mechanism, and the complexity of para-virtualization and system-call substitution is avoided." LXC itself is the result of a long evolution, dating back to the chroot tool introduced to the Unix family of operating systems in 1982.

From Linux containers to Docker

Building on LXC, Docker further streamlined the experience of using containers. First, Docker provides tools for the whole container workflow (distribution, versioning, migration, and so on), so users can manage and use containers simply and clearly without touching the underlying mechanisms. Second, by introducing a layered filesystem and an efficient image mechanism, Docker lowers the cost of migration and greatly improves the user experience; users can manipulate a Docker container almost as easily as the application itself. Early Docker code was implemented directly on top of LXC. Starting with version 0.9, Docker developed the libcontainer project as a broader container driver, replacing the LXC implementation. Docker has since actively promoted the runC standard project and contributed it to the Open Container Initiative, aiming to make container support not limited to the Linux operating system, but more secure, open, and extensible. In a nutshell, a Docker container can be understood as a lightweight sandbox. Each container runs one or more applications; different containers are isolated from each other, but containers can communicate with each other over a network. Containers are created and stopped very quickly, almost as quickly as native applications are launched and terminated. In addition, a container's extra demand on system resources is very limited, far less than that of a traditional virtual machine. Much of the time, there is nothing wrong with simply treating a container as the application itself.

The goal of Docker

Docker was conceived to "Build, Ship and Run Any App, Anywhere": to manage the whole life cycle of an application component (packaging, distribution, deployment, and runtime) so that it is "packaged once, runs everywhere". The application component can be a web application, a build environment, a set of database platform services, or even an operating system or a cluster.

Understanding the Docker engine

The Docker engine consists of the following components: the Docker client, the Docker daemon, containerd, and runc. Together, they are responsible for creating and running containers.

Core concepts and installation configuration

This section introduces the three core concepts of Docker: ❑ Image ❑ Container ❑ Repository. Only by understanding these three core concepts can you understand the whole life cycle of a Docker container.

Images

A Docker image is similar to a virtual-machine image and can be thought of as a read-only template. For example, an image can contain a basic operating-system environment with only Apache installed (or any other software the user requires); call it an Apache image. Images are the basis for creating Docker containers. Through version management and an incremental filesystem, Docker provides a very simple mechanism to create and update images. Users can even download a ready-made application image from the Internet and use it directly. An image can be understood as source code.

The container

A Docker container is a lightweight sandbox that Docker uses to run and isolate applications. A container is a running instance created from an image. Containers can be created, started, stopped, and deleted, and are isolated from one another. You can think of a container as a simplified Linux system environment (with root privileges, a process space, a user space, a network space, and so on) plus a box packed with the applications running in it. A container can be thought of as an application compiled and run from an image.

Repositories

An image repository is the place where image files are stored; it can be understood as analogous to a source-code repository such as GitHub or GitLab. Each repository centrally stores a certain type of image, often containing multiple image files distinguished by different tags. For example, the repository storing Ubuntu OS images is called the ubuntu repository, and may contain images of different versions such as 16.04 and 18.04. Currently, the largest public registry is the official Docker Hub, which hosts a huge number of images for users to download. Several domestic cloud providers, such as Alibaba Cloud, also provide local registry mirrors that offer stable access from within China.

Installing Docker

The Docker engine currently comes in two editions: Community Edition (CE) and Enterprise Edition (EE). The Windows and Mac versions of Docker are community builds and are not recommended for production, so this section focuses on Linux, taking CentOS 8 as the installation example:

#!/bin/bash
echo -e 'remove old docker versions'
yum remove docker docker-client docker-client-latest docker-common \
    docker-latest docker-latest-logrotate docker-logrotate docker-engine
echo -e 'install dependencies'
yum install yum-utils device-mapper-persistent-data lvm2 -y
echo -e 'add the repo'
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
echo -e 'update the cache and install docker-ce'
sudo yum makecache fast
yum install -y docker-ce docker-ce-cli containerd.io
echo -e 'enable docker on boot'
systemctl enable docker
echo -e 'start docker'
systemctl start docker
echo -e 'check docker version'
docker --version
echo -e 'configure a registry mirror'
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
echo -e 'restart docker service'
systemctl restart docker
echo -e 'view docker information'
docker info
exit

Possible problems during installation: dependency errors caused by the containerd.io version required by the latest Docker

[root@localhost ~]# yum -y install docker-ce
Last metadata expiration check: 0:00:32 ago on Thu 07 Jan 2021 08:07:56.
Error:
 Problem: package docker-ce-3:20.10.2-3.el7.x86_64 requires containerd.io >= 1.4.1, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.4.3-3.1.el7.x86_64 is filtered out by modular filtering

# If the above error appears, open the link below, find the containerd.io package for your CentOS version, and install it manually
# https://mirrors.aliyun.com/docker-ce/linux/centos/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/Packages/containerd.io-1.4.3-3.1.el8.x86_64.rpm
yum install containerd.io-1.4.3-3.1.el8.x86_64.rpm

# Then continue
yum -y install docker-ce

Working with images

docker image pull or docker pull

An image is the prerequisite for running a container. You can use the docker [image] pull command to download images directly from the Docker Hub registry. The command format is docker [image] pull NAME[:TAG], where NAME is the image repository name (used to distinguish images) and TAG is the image tag (used to indicate version information). Generally, both a name and a tag are needed to describe an image.

[root@localhost ~]# docker pull centos
Using default tag: latest
latest: Pulling from library/centos
7a0437f04f83: Pull complete
Digest: sha256:5528e8b1b1719d34604c87e11dcd1c0a20bedf46e83b5632cdeac91b8c04efc1
Status: Downloaded newer image for centos:latest
docker.io/library/centos:latest

As the download output shows, an image file is generally composed of several layers; the string 7a0437f04f83 is the unique ID of a layer (the full ID is 256 bits long, i.e. 64 hexadecimal characters). The docker pull command fetches and reports each layer of the image. When different images contain the same layer, only one copy of that layer is stored locally, reducing storage space.
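Layer sharing can be observed directly. Pulling a second image that shares layers with one already present reports those layers as "Already exists" instead of downloading them again (a sketch; the nginx tags are just convenient examples):

```shell
# First pull downloads every layer of the image
docker pull nginx:latest

# A second tag of the same repository reuses locally stored layers;
# shared layers are reported as "Already exists" rather than re-downloaded
docker pull nginx:stable
```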

The docker images command lists basic information about the images present on the local host:

[root@localhost ~]# docker images
REPOSITORY            TAG       IMAGE ID       CREATED         SIZE
nginx                 latest    ae2feff98a0c   3 weeks ago     133MB
centos                latest    300e315adb2f   4 weeks ago     209MB
portainer/portainer   latest    62771b0b9b09   5 months ago    79.1MB

The listing contains the following fields: ❑ REPOSITORY: which repository the image comes from; for example, centos is the CentOS base image. ❑ TAG: the image's tag, such as latest, distinguishing versions; a tag is only a label and does not guarantee the image's content. ❑ IMAGE ID: uniquely identifies the image; if two images show the same ID, they actually point to the same image under different tag names. ❑ CREATED: the last update time of the image. ❑ SIZE: the image size; well-built images tend to be small.
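The listing can also be narrowed and reshaped with standard docker images options (the repository names below are the ones from the example output):

```shell
# Show only images from one repository
docker images centos

# Custom table output via Go templates
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"

# Only the image IDs, handy for scripting
docker images -q
```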

docker [image] inspect NAME|ID: view detailed information about an image, including its author, target architecture, the digest of each layer, and so on

docker history NAME|ID: view the build history of an image

docker search KEYWORD: search for images in the official Docker Hub registry

docker rmi NAME|ID or docker image rm NAME|ID: delete an image

docker image prune: clean up unused images

After using Docker for a while, some temporary image files may be left on the system; unused images can be cleaned with the command above. Supported options include: ❑ -a, --all: remove all unused images, not only dangling ones ❑ --filter filter: only remove images matching the given filter ❑ -f, --force: forcibly delete without prompting for confirmation

docker [container] commit Command to create an image based on an existing container

The command format is docker [container] commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]. The main OPTIONS are: ❑ -a, --author="": author information ❑ -c, --change=[]: apply a Dockerfile instruction when committing, one of CMD | ENTRYPOINT | ENV | EXPOSE | LABEL | ONBUILD | USER | VOLUME | WORKDIR ❑ -m, --message="": commit message ❑ -p, --pause=true: pause the container during the commit
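A minimal commit workflow might look like this (the container and image names are made up for the example):

```shell
# Create a container and change its filesystem
docker run -it --name commit-demo centos:latest /bin/bash
# ... inside the container: touch /tmp/hello, then exit

# Commit the container as a new image with author and message metadata
docker commit -a "it-record" -m "add /tmp/hello" commit-demo my-centos:v1

# The new image appears in the local image list
docker images my-centos
```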

docker [container] import Import an image from a template file

The command format is docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]

Create an image based on Dockerfile

docker [image] save Export the image to a local file

This command supports the -o, --output string option to export the image to a specified file, which can then be copied anywhere for others to use

# Save the current centos image as centon_letest.tar
docker save -o centon_letest.tar centos:latest

docker [image] load: import an exported tar file into the local image store

The -i, --input string option is supported, to read the image content from the specified file

 docker load -i centon_letest.tar

docker [image] push: upload an image to a registry

By default the image is uploaded to the official Docker Hub registry, similar to git push

Containers

Containers are another core concept of Docker. Simply put, a container is a running instance of an image. The difference is that an image is a static, read-only file, while a container adds the writable file layer needed at runtime, and the application process inside the container is running.

Create a container

docker [container] create: create a new container

A container created with the docker [container] create command is in the stopped state. You can start it with the docker [container] start command.

[root@localhost ~]# docker create -it centos:latest
27a89a80a2cb4e5c5fe0e24ef86cb2dac451219beeee3f11ac916fe2143f369a

docker [container] start: start the container

[root@localhost ~]# docker start 27a89a80a2c
27a89a80a2c
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS      NAMES
27a89a80a2cb   centos:latest         "/bin/bash"              6 minutes ago   Up 4 seconds              objective_lederberg

docker [container] run Create and start the container

When the docker [container] run command creates and starts a container, the standard steps Docker performs in the background are: ❑ check whether the specified image exists locally, and download it from the public registry if not ❑ create a container using the image and start it ❑ allocate a filesystem to the container, mounting a read-write layer on top of the read-only image layers ❑ bridge a virtual interface from the host's configured bridge interface into the container ❑ assign the container an IP address from the bridge's address pool ❑ execute the user-specified application ❑ terminate the container automatically after execution completes

[root@localhost ~]# docker run -it --name centos-test  centos:latest /bin/bash
[root@d078291e042a /]#

Commonly used options

  • -t allocates a pseudo-tty and binds it to the container's standard input
  • -i keeps the container's standard input open
  • --name gives the started container a name
  • -d runs the container in the background (daemonized)
  • more options can be viewed with the docker run --help command

docker [container] logs: view the container's output logs

Stop the container

docker [container] pause: pause a running container

A paused container can be restored to the running state with the docker [container] unpause CONTAINER [CONTAINER...] command.

docker [container] stop: terminate a running container

This command first sends a SIGTERM signal to the container, waits for a timeout period (10 seconds by default), and then sends SIGKILL to terminate it. A terminated container can be restarted with docker [container] start.
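The grace period can be shortened or lengthened with the -t flag of docker stop (the container name comes from the earlier run example):

```shell
# Wait at most 5 seconds for SIGTERM to take effect before sending SIGKILL
docker stop -t 5 centos-test

# A terminated container can be brought back with start
docker start centos-test
```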

docker [container] kill: send SIGKILL directly to forcibly terminate the container

docker [container] restart: terminate a running container and then restart it

Entering a container

docker [container] attach: attach to a running container

docker [container] exec: a more convenient tool that lets you execute arbitrary commands inside a running container

docker exec -it CONTAINER_NAME|CONTAINER_ID /bin/bash

Remove the container

docker [container] rm: delete a container in the terminated or exited state

The following options are supported: ❑ -f, --force=false: forcibly terminate and delete a running container ❑ -l, --link=false: delete the container's link but keep the container ❑ -v, --volumes=false: delete the data volumes attached to the container
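Two common clean-up patterns, using only standard flags (the container name is from the earlier example):

```shell
# Remove every container in the exited state
docker rm $(docker ps -aq -f status=exited)

# Force-remove a running container in one step
docker rm -f centos-test
```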

Import and export containers

docker [container] export

Exporting a container means exporting an already created container to a file, whether the container is running or not

docker export -o export_centos.tar CONTAINER_ID|CONTAINER_NAME
# -o specifies the output file name

docker [container] import: import an exported container archive as an image
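Continuing the export example above, the archive can be re-imported as a local image (the image name here is made up):

```shell
# Turn the exported container filesystem back into an image
docker import export_centos.tar mycentos:v1.0

# Verify the result
docker images mycentos
```

Unlike docker load, docker import discards the original layers and history, producing a single-layer image.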

docker container inspect: view container details

docker [container] top: view the processes in the container

docker [container] stats: view resource usage statistics

docker [container] cp: copy files between a container and the host

docker cp data test:/tmp    # copy the local data directory to /tmp in the container named test

docker [container] diff: view filesystem changes in the container

docker [container] port: view the port mappings of a container

docker [container] update: update some runtime configurations of a container, mainly resource limit quotas

❑ --blkio-weight uint16: block IO weight, 10 to 1000; the default 0 means unlimited ❑ --cpu-period int: limit the CPU CFS (Completely Fair Scheduler) period, in microseconds, minimum 1000 ❑ --cpu-quota int: limit the CPU CFS quota, in microseconds, minimum 1000 ❑ --cpu-rt-period int: limit the CPU real-time scheduler period, in microseconds ❑ --cpu-rt-runtime int: limit the CPU real-time scheduler runtime, in microseconds ❑ -c, --cpu-shares int: CPU share (relative weight) ❑ --cpus decimal: limit the number of CPUs ❑ --cpuset-cpus string: CPUs in which to allow execution, e.g. 0-3,0,1 ❑ --cpuset-mems string: memory nodes in which to allow execution, e.g. 0-3,0,1 ❑ --kernel-memory bytes: kernel memory limit ❑ -m, --memory bytes: memory limit ❑ --memory-reservation bytes: soft memory limit ❑ --memory-swap bytes: memory-plus-swap limit; -1 means unlimited swap ❑ --restart string: restart policy to apply when the container exits
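For example (limit values chosen arbitrarily), the quotas of a running container can be tightened on the fly:

```shell
# Cap the container at 512 MB of RAM (1 GB including swap) and one CPU
docker update -m 512m --memory-swap 1g --cpus 1 centos-test

# Confirm the new memory limit (reported in bytes)
docker inspect -f '{{.HostConfig.Memory}}' centos-test
```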

Docker data management

When Docker is used in a production environment, it is often necessary to persist data or share data among multiple containers, which inevitably involves data management operations of containers. There are two methods for managing Data in containers: ❑ Data Volumes: Data in containers is mapped to the local host environment. ❑ Data Volume Containers: Use specific Containers to maintain Data volumes. This section describes how to create a data volume in a container and mount local directories or files to the data volume in the container. Secondly, it introduces how to use data volume containers to share data between containers and hosts and between containers, and realize data backup and recovery.

Data volume

Data Volumes are special directories available to containers; they map a host operating-system directory into the container, similar to mount in Linux. Data volumes provide many useful features: ❑ a data volume can be shared and reused between containers, making data transfer between them efficient and convenient ❑ changes to a data volume take effect immediately, whether made inside the container or locally on the host ❑ updates to a data volume do not affect the image, decoupling the application from its data ❑ a data volume persists until no container is using it, at which point it can be safely removed

Creating a Data Volume

docker volume create: create a data volume

Docker provides the volume subcommand to manage data volumes. You can quickly create a data volume locally by using the following command:

# Create a data volume named test-vol using the local driver
docker volume create -d local test-vol
# The data volume test-vol can be found under the /var/lib/docker/volumes directory on the host

docker volume inspect: view details of a data volume

[root@localhost volumes]# docker volume inspect test-vol
[
    {
        "CreatedAt": "2021-01-07T16:36:32-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/test-vol/_data",
        "Name": "test-vol",
        "Options": {},
        "Scope": "local"
    }
]

docker volume ls: list existing data volumes

docker volume prune: clean up unneeded data volumes

docker volume rm: delete a data volume

Binding a Data Volume

Besides managing data volumes with the volume subcommand, you can also mount any local host path into a container as a data volume when creating it. With the docker [container] run command, the --mount option attaches a data volume. Its type parameter supports three kinds of data volume: ❑ volume: a normal data volume, mapped into the host's /var/lib/docker/volumes directory ❑ bind: binds the data volume to a specified path on the host ❑ tmpfs: a temporary data volume that exists only in memory

# --mount parameters: source is the host path or volume name; destination is the path inside the container
type=bind,source=/path/on/host,destination=/path/in/container
type=volume,source=my-volume,destination=/path/in/container,volume-label="color=red",volume-label="shape=round"
type=tmpfs,tmpfs-size=512M,destination=/path/in/container

# The following two commands are equivalent
docker run -itd --name centos-test --mount type=bind,source=/www,destination=/cen_www centos:latest
docker run -itd --name centos-test -v /www:/cen_www centos:latest

The default permission when Docker mounts a data volume is read-write (rw); users can also specify read-only by appending ro:

docker run -itd --name centos-test  -v /www:/cen_www:ro centos:latest

Data volume container

If some continuously updated data needs to be shared between multiple containers, the easiest way is to use a data volume container. A data volume container is an ordinary container whose purpose is to provide data volumes for other containers to mount. First, create a data volume container dbcontainer with a data volume mounted at /dbdata; then other containers can use --volumes-from to mount the volume from dbcontainer. For example, run the following commands to create containers db1 and db2 that mount the volume from dbcontainer:

docker run -itd --name dbcontainer -v /db_host:/dbdata centos:latest
docker run -itd --name db1  --volumes-from   dbcontainer centos:latest
docker run -itd --name  db2  --volumes-from  dbcontainer centos:latest

db1 and db2 both mount the same data volume at /dbdata via the --volumes-from parameter; when any of the three containers writes to /dbdata, the change is visible to the other two. You can also chain mounts, using --volumes-from against a container that itself mounted volumes from another container.
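The sharing is easy to verify with docker exec (container names follow the example above):

```shell
# Write a file through db1
docker exec db1 sh -c 'echo hello > /dbdata/shared.txt'

# Read it back through db2: all three containers see the same volume
docker exec db2 cat /dbdata/shared.txt
```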

Port mapping and container interconnection

Beyond network access, Docker provides two convenient features to meet basic service-access needs: one allows mapping the service ports of applications inside a container onto the local host; the other is an interconnection mechanism that enables quick access between multiple containers by container name.

Access container applications externally

When a network application runs in a container and you want to allow external access to it, you can specify port mappings with the -P or -p flags. With -P (uppercase), Docker randomly maps a host port in the range 49000 to 49900 to a network port opened inside the container. With -p (lowercase), the mapped port can be specified explicitly, and a given host port can be bound to only one container. The supported formats are:

  • Map all interface addresses

HostPort:ContainerPort, e.g. 8080:8080

  • Maps to the specified port at the specified address

IP:HostPort:ContainerPort, e.g. 127.0.0.1:5000:5000

  • Maps to any port at the specified address

IP::ContainerPort, e.g. 127.0.0.1::5000, binds an arbitrary port of localhost to port 5000 of the container; the local host automatically allocates a port

  • Specifying a UDP Port

IP:HostPort:ContainerPort/udp, e.g. 127.0.0.1:5000:5000/udp
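Putting the formats together, a web server can be published and checked like this (nginx is just a convenient example image):

```shell
# Map host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx:latest

# Show the container's port mappings
docker port web

# Reach the application through the mapped host port
curl http://127.0.0.1:8080
```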

Container interconnection

Containers can interact with each other safely using the --link argument, whose format is --link name:alias, where name is the name of the container to link to and alias is an alias for it

docker run -it --rm --name centos-1 centos:latest
docker run -itd --name centos-2 --link centos-1:centos-1 centos:latest

Docker creates a virtual channel between the two linked containers without mapping their ports to the host. The -p and -P flags are not used when starting the containers, which avoids exposing a service such as a database to the external network. Docker exposes a linked container's connection information in either of two ways: ❑ environment variables: check a container's environment variables with docker exec centos-2 env ❑ an updated /etc/hosts file
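Both exposure mechanisms can be inspected directly (container names from the example above):

```shell
# Environment variables injected by the link
docker exec centos-2 env

# The linked container's name and alias are resolvable via /etc/hosts
docker exec centos-2 cat /etc/hosts
```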

Use Dockerfile to create the image

A Dockerfile is a text-format configuration file that users can use to quickly create custom images. It consists of command statements, one per line, and supports comment lines starting with #. Generally speaking, a Dockerfile has four main parts: base image information, maintainer information, image operation instructions, and the instruction executed when a container starts.
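A minimal end-to-end sketch covering all four parts (the image tag and file contents are made up for illustration):

```shell
# Write a small Dockerfile: base image, maintainer, a build step, and a start command
cat > Dockerfile <<'EOF'
FROM centos:latest
LABEL maintainer="[email protected]"
RUN echo "hello from the image" > /greeting.txt
CMD ["cat", "/greeting.txt"]
EOF

# Build the image and run a container from it
docker build -t greeting:v1 .
docker run --rm greeting:v1
```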

Configuration Instructions

The general format of instructions in Dockerfile is INSTRUCTION arguments, including “configure instructions” (configure image information) and “operation instructions”

ARG: defines variables used during image creation; the only instruction allowed before FROM
FROM: specifies the base image; must be the first instruction (apart from ARG)
LABEL: adds metadata label information to the generated image, which can help filter out specific images
EXPOSE: declares the port(s) on which the service in the image listens
ENV: specifies environment variables, available both to subsequent RUN instructions during the build and in containers started from the image
ENTRYPOINT: specifies the image's default entry command, executed as the root command when the container starts, with all passed values as its arguments
VOLUME: creates a data volume mount point
USER: specifies the user name or UID for running the container; subsequent RUN and other instructions also use this identity
WORKDIR: configures the working directory for subsequent RUN, CMD, and ENTRYPOINT instructions
ONBUILD: specifies operation instructions executed automatically when a child image is created based on the generated image
STOPSIGNAL: specifies the exit signal that containers started from the created image will receive
HEALTHCHECK: configures how to perform health checks on containers started from the image
SHELL: specifies the default shell type used by other commands

Operating instructions

RUN: run a specified command
CMD: specify the command executed by default when the container starts
ADD: add content to the image
COPY: copy content into the image

Commonly used instructions

ARG: Defines the variables to be used during image creation

ARG is the only instruction that may precede FROM. The defined variable is assigned on the docker build command line with --build-arg <name>=<value>. If docker build passes an argument that has no corresponding declaration in the Dockerfile, a warning is emitted.

# format:
    ARG <name>[=<default value>]
# sample:
    ARG VERSION="7.0"
    FROM centos:${VERSION}
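The ARG in the sample above would be set from the command line like this (the tag name is made up):

```shell
# Override the default VERSION value at build time
docker build --build-arg VERSION=8 -t my-centos-base .
```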

FROM: Specifies the base image of the image to create

Tag or digest are optional, and if they are not used, the latest version of the base image is used

# format:
    FROM <image>
    FROM <image>:<tag>
    FROM <image>@<digest>
 # sample:
    FROM mysql:5.6

The LABEL instruction adds metadata label information to the generated image

One LABEL instruction can specify one or more metadata entries; multiple entries are separated by spaces. It is recommended to specify all metadata in a single LABEL instruction to avoid generating too many intermediate images

# format:
    LABEL <key>=<value> <key>=<value> <key>=<value> ...
# sample:
    LABEL version="1.0" description="This is a Web server." by="IT record"
    LABEL author="[email protected]" data="2021-1-1"

EXPOSE: specifies the port(s) the service in the image listens on

EXPOSE does not by itself make the container's ports reachable from the host. To make them accessible, you need to publish the ports with -p when running the container with docker run, or publish all EXPOSEd ports with the -P argument.

# format:
    EXPOSE <port> [<port>...]
# sample:
    EXPOSE 80 443
    EXPOSE 8080
    EXPOSE 11211/tcp 11211/udp

ENV: Specifies environment variables

Environment variables specified by this instruction can be overridden at run time, e.g. docker run --env <key>=<value> built_image

# format:
    ENV <key> <value>      # everything after <key> is treated as part of <value>, so only one variable can be set per line
    ENV <key>=<value> ...  # multiple variables can be set at once; backslashes can be used for line continuation
# sample:
    ENV myName John Doe
    ENV myDog Rex The Dog
    ENV myCat=fluffy

ENTRYPOINT: Specifies the default entry command for the image

ENTRYPOINT is very similar to CMD, except that a command supplied to docker run does not override ENTRYPOINT; instead, any arguments given on the docker run command line are passed to ENTRYPOINT as parameters. Only one ENTRYPOINT is effective per Dockerfile: if several are specified, later ones override earlier ones and only the last is executed. It can be overridden at run time with the --entrypoint flag, e.g. docker run --entrypoint

# format:
    ENTRYPOINT ["executable", "param1", "param2"]   (exec form, preferred)
    ENTRYPOINT command param1 param2                (shell form)
# sample:
    FROM ubuntu
    ENTRYPOINT ["top", "-b"]
    CMD ["-c"]

VOLUME: Creates a data volume mount point

# format:
    VOLUME ["/path/to/dir"]
# sample:
    VOLUME ["/data"]
    VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]
# Note: a volume can exist in one or more containers at a specified directory, bypassing the union filesystem, with the following features:
#   1. volumes can be shared and reused between containers
#   2. a container does not have to share its volume with other containers
#   3. modifications to a volume take effect immediately
#   4. modifying a volume has no effect on the image
#   5. a volume persists until no container is using it

USER: Specifies the user name or UID to run the container

# format:
    USER user
    USER user:group
    USER uid
    USER uid:gid
    USER user:gid
    USER uid:group
# sample:
    USER www
Subsequent RUN, CMD, and ENTRYPOINT instructions will all run as this USER. After the image is built, the specified user can be overridden with the -u flag when the container is started via docker run.

WORKDIR: Configures the working directory for subsequent RUN, CMD, and ENTRYPOINT instructions. After the working directory is set with WORKDIR, the subsequent RUN, CMD, ENTRYPOINT, ADD, and COPY instructions in the Dockerfile are executed in that directory. When the container is run with docker run, the working directory set at build time can be overridden with the -w flag.

# format:
    WORKDIR /path/to/workdir
# sample:
    WORKDIR /a   (working directory is /a)
    WORKDIR b    (working directory is /a/b)
    WORKDIR c    (working directory is /a/b/c)

ONBUILD: specifies a command to be executed automatically when a child image is built on top of the generated image

# format:
   ONBUILD [INSTRUCTION]
# sample:
  ONBUILD ADD . /app/src
  ONBUILD RUN /usr/local/bin/python-build --dir /app/src
# Note: when the image being built is used as the base image of another image, the ONBUILD triggers recorded in it are executed during that build
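As a hedged sketch of the two-stage flow (the image name my-python-base is an assumption):

```dockerfile
# Parent Dockerfile -- built once, e.g. as my-python-base.
# The ONBUILD instructions do nothing in this build; they are
# recorded as triggers in the resulting image.
FROM centos
ONBUILD ADD . /app/src
ONBUILD RUN /usr/local/bin/python-build --dir /app/src
```

A child Dockerfile that simply starts with FROM my-python-base has both triggers executed first, as if ADD . /app/src and the RUN line were written at the top of the child Dockerfile.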

RUN: Runs the specified command

RUN can execute commands in the image container in either of two ways:

# format:
    RUN <command>                           (shell form)
    RUN ["executable", "param1", "param2"]  (exec form)
# sample:
    RUN apk update
    RUN ["/etc/execfile", "arg1", "arg1"]
# Note: the intermediate images created by RUN instructions are cached and reused in the next build. If you do not want to use these cached images, specify the --no-cache flag at build time, such as docker build --no-cache
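Because every RUN instruction produces its own cached layer, related commands are commonly chained; a hedged sketch (the package names are illustrative):

```dockerfile
FROM centos
# One RUN = one layer and one cache entry; chaining with && and
# cleaning up in the same layer keeps the image small
RUN yum install -y wget gcc && \
    yum clean all
```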

CMD: specifies the command to be executed by default when the container is started

# format:
    CMD ["executable", "param1", "param2"]  (exec form, preferred)
    CMD ["param1", "param2"]                (if ENTRYPOINT is set, these become its default parameters)
    CMD command param1 param2               (shell form)
# sample:
    CMD echo "This is a test." | wc -
    CMD ["/usr/bin/wc", "--help"]
# Note: unlike RUN, CMD specifies the command to be executed when the container starts, while RUN specifies commands to be executed while the image is being built.

ADD: Adds content to the image

# format:
    ADD <src>... <dest>
    ADD ["<src>", "<dest>"]   (this form supports paths that contain spaces)
# sample:
    ADD hom* /mydir/          # adds all files beginning with "hom"
    ADD hom?.txt /mydir/      # ? matches a single character, e.g. "home.txt"
    ADD test relativeDir/     # adds "test" to <WORKDIR>/relativeDir/
    ADD test /absoluteDir/    # adds "test" to /absoluteDir/

COPY: Copies content to the image

COPY is similar to ADD, but it does not automatically decompress archives and cannot fetch network resources. COPY is recommended when the source is a local directory.
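A minimal sketch of the difference (the archive name app.tar.gz is illustrative):

```dockerfile
FROM centos
# COPY places the archive in the image unchanged
COPY ./app.tar.gz /opt/
# ADD recognizes the local tar archive and extracts it at the destination
ADD ./app.tar.gz /usr/local/
```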

Download nginx-1.18.0.tar.gz and epel-release-latest-7.noarch.rpm into the same directory and create a new Dockerfile in the current directory

# This nginx Dockerfile
# Version 1.0

# Base images
FROM centos

#MAINTAINER MAINTAINER information
MAINTAINER xxxxx 

#ENV Sets the environment variable
ENV PATH /usr/local/nginx/sbin:$PATH

#ADD copies files from the current directory into the image; recognized local archives are automatically decompressed
ADD ./nginx-1.18.0.tar.gz /usr/local/  
ADD ./epel-release-latest-7.noarch.rpm /usr/local/  

#RUN RUN the following command
RUN rpm -ivh /usr/local/epel-release-latest-7.noarch.rpm
RUN yum install -y wget lftp gcc gcc-c++ make openssl-devel pcre-devel pcre && yum clean all
RUN useradd -s /sbin/nologin -M www

#WORKDIR corresponds to cd
WORKDIR /usr/local/nginx-1.18.0

RUN ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-pcre && make && make install

RUN echo "daemon off;" >> /usr/local/nginx/conf/nginx.conf

#EXPOSE Map port
EXPOSE 80

#CMD Run the following command
CMD ["nginx"]

Dockerfile build image

docker build -t Name:Tag -f Dockerfile .   # build an image from a Dockerfile

--add-host list           Add a custom host-to-IP mapping (host:ip)
--build-arg list          Set build-time variables
--cache-from strings      Images to consider as cache sources
--cgroup-parent string    Optional parent cgroup for the container
--compress                Compress the build context using gzip
--console                 Show console output (BuildKit only); values: true, false, auto (default auto)
--cpu-period int          Limit the CPU CFS (Completely Fair Scheduler) period
--cpu-quota int           Limit the CPU CFS (Completely Fair Scheduler) quota
-c, --cpu-shares int      CPU shares (relative weight)
--cpuset-cpus string      CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems string      MEMs in which to allow execution (0-3, 0,1)
--disable-content-trust   Skip image verification (default true)
-f, --file string         Name of the Dockerfile to build (default 'PATH/Dockerfile')
--force-rm                Always remove intermediate containers
--iidfile string          Write the image ID to the specified file
--isolation string        Container isolation technology
--label list              Set metadata for the image
-m, --memory bytes        Memory limit
--memory-swap bytes       Swap limit equal to memory plus swap; '-1' enables unlimited swap
--network string          Set the networking mode for the RUN instructions during build (default "default")
--no-cache                Do not use the cache when building the image (everything is rebuilt each time; the default is to use the cache)
--platform string         Set the platform if the server is multi-platform capable
--pull                    Always attempt to pull a newer version of the image
-q, --quiet               Suppress the build output and print the image ID on success
--rm                      Remove intermediate containers after a successful build (default true)
--security-opt strings    Security options
--shm-size bytes          Set the size of /dev/shm
--squash                  Squash newly built layers into a single new layer
--stream                  Attach a stream to the server to negotiate the build context
-t, --tag list            Name and optionally a tag (format 'name:tag')
--target string           Set the target build stage to build
--ulimit ulimit           Ulimit options (default [])
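Combining a few of these options, a hedged sketch (image tags and file names are illustrative):

```shell
# Tag the result and build from the current directory
docker build -t mynginx:v1 .

# Ignore the layer cache and use an alternative Dockerfile
docker build --no-cache -f Dockerfile.dev -t mynginx:dev .

# Limit resources during the build
docker build -m 512m --cpu-shares 512 -t mynginx:v1 .
```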

Docker network

By default, networks are isolated between containers, and between containers and the host. What Docker networking needs to solve is communication between containers.

Network mode

Network mode     Configuration                 Description
Host mode        --net=host                    The container and host share a Network namespace
Container mode   --net=container:NAME_or_ID    The container shares a Network namespace with another container (in Kubernetes, the containers of a Pod share a Network namespace this way)
None mode        --net=none                    The container has a separate Network namespace, but no network setup is performed on it: no veth pair or bridge is assigned, no IP address is configured, etc.
Bridge mode      --net=bridge                  (default mode) A unique Network namespace is automatically created for the container
Custom mode      --net=<user-defined network name>
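A hedged sketch of starting a container in each mode (the container names are illustrative):

```shell
docker run -itd --net=host --name web-host centos:latest    # host mode: shares the host's network stack
docker run -itd --net=none --name web-none centos:latest    # none mode: only the lo device
docker run -itd --name web-bridge centos:latest             # bridge mode (default)
docker run -itd --net=container:web-bridge --name web-sidecar centos:latest  # container mode: shares web-bridge's network
```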

Host mode

How it works: a container started in host mode does not get a separate Network namespace; it shares one with the host. The container does not virtualize its own network card or configure its own IP address, but uses the host's IP address and ports. However, other aspects of the container, such as the file system and process list, are still isolated from the host. A container in host mode can communicate with the outside world directly through the host's IP address, and services inside the container can use the host's ports without NAT. The biggest advantage of host mode is good network performance, but ports already in use on the Docker host cannot be used again, and network isolation is poor.

Container mode

This mode makes a newly created container share a Network namespace with an existing container, rather than with the host. The new container does not create its own network adapter or configure its own IP address; instead, it shares the IP address and port range of the specified container. Apart from the network, the two containers remain isolated from each other in areas such as the file system and process list. The processes of the two containers can communicate through the lo network device.

None mode

In none mode, a Docker container has its own Network namespace, but no network configuration is performed for it. That is, the container has no network card, IP address, routing, and so on; network cards and IP addresses must be added and configured manually. In this mode, the container has only the lo loopback interface and no other network adapter. None mode can be specified at container creation time with --network=none. Such a container cannot connect to the Internet; a closed network helps ensure the container's security.

Bridge mode

When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and Docker containers started on the host connect to this virtual bridge. The virtual bridge works much like a physical switch, so all containers on the host are joined to a layer-2 network through it. Each container is assigned an IP address from the docker0 subnet, and the docker0 IP address is set as the container's default gateway. A pair of virtual network interfaces, a veth pair, is created on the host: Docker places one end inside the newly created container and names it eth0 (the container's network card), while the other end, named something like vethxxx, stays on the host and is added to the docker0 bridge (this can be inspected with brctl show). Bridge mode is Docker's default network mode, used when no --net flag is given. When docker run -p is used, Docker actually installs NAT rules in iptables to implement port forwarding, which can be viewed with iptables -t nat -vnL.

A veth pair is a pair of virtual device interfaces that always come in pairs: each end connects to a protocol stack, and packets sent into one end come out of the other.

Problems existing in bridge mode

Create two containers as follows; on the default bridge, containers cannot reach each other by container name:

docker run -itd --name centos-bdg-01 centos:latest
docker run -itd --name centos-bdg-02 centos:latest
docker exec -it centos-bdg-01 ping centos-bdg-02
ping: centos-bdg-02: Name or service not known

Custom mode

docker network ls Lists the local networks

docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
63d9493bce38   bridge    bridge    local    # bridge mode
de1105dfa5a8   host      host      local    # host mode
87441a7540aa   none      null      local    # none mode

docker network create Creates a network

  • --driver   network driver for the connection
  • --subnet   subnet in CIDR format, e.g. 192.168.0.0/16
  • --gateway  gateway for the subnet, e.g. 192.168.1.0
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.1.0 mynet
336cdb202d7a5d7f1fb3f8dc087d5d6e1b7aa46b3513186c8cb35bc1125a0784

docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
63d9493bce38   bridge    bridge    local
de1105dfa5a8   host      host      local
336cdb202d7a   mynet     bridge    local
87441a7540aa   none      null      local

# Create containers on the custom network
docker run -itd --name centos-net-01 --network mynet centos:latest
docker run -itd --name centos-net-02 --network mynet centos:latest

docker exec -it centos-net-01 ping centos-net-02
PING centos-net-02 (192.168.0.2) 56(84) bytes of data.
64 bytes from centos-net-02.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from centos-net-02.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.162 ms
64 bytes from centos-net-02.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.160 ms
64 bytes from centos-net-02.mynet (192.168.0.2): icmp_seq=4 ttl=64 time=0.181 ms
64 bytes from centos-net-02.mynet (192.168.0.2): icmp_seq=5 ttl=64 time=0.158 ms

docker network inspect Displays detailed information on one or more networks

docker network connect Connects a container to a network

docker network connect [OPTIONS] NETWORK CONTAINER. The current environment has several containers: the centos-bdg-xx containers run on the docker0 network segment 172.17.0.0/16, and the centos-net-xx containers run on the custom mynet network segment 192.168.0.0/16.

How can the centos-bdg-xx and centos-net-xx containers communicate with each other?

docker ps
CONTAINER ID   IMAGE           COMMAND       CREATED          STATUS          PORTS   NAMES
1573b7088604   centos:latest   "/bin/bash"   6 minutes ago    Up 6 minutes            centos-net-02
22a4f34fb7b8   centos:latest   "/bin/bash"   6 minutes ago    Up 6 minutes            centos-net-01
22844f330ba3   centos:latest   "/bin/bash"   25 minutes ago   Up 25 minutes           centos-bdg-02
8f7ae3be00e3   centos:latest   "/bin/bash"   25 minutes ago   Up 25 minutes           centos-bdg-01

docker network connect mynet centos-bdg-01

docker exec -it centos-bdg-01 ping centos-net-01
PING centos-net-01 (192.168.0.1) 56(84) bytes of data.
64 bytes from centos-net-01.mynet (192.168.0.1): icmp_seq=1 ttl=64 time=0.095 ms
64 bytes from centos-net-01.mynet (192.168.0.1): icmp_seq=2 ttl=64 time=0.161 ms
64 bytes from centos-net-01.mynet (192.168.0.1): icmp_seq=3 ttl=64 time=0.209 ms
64 bytes from centos-net-01.mynet (192.168.0.1): icmp_seq=4 ttl=64 time=0.153 ms

docker network inspect mynet
...
"Containers": {
    "1573b70886040ce5922a0366e866bd6c5ac91acd851dd53dd43b2c2fbc388389": {
        "Name": "centos-net-02",
        "EndpointID": "968dd40f23362c3066df4092a2be6955c8681eaf38e07e505344492bc383a8c9",
        "MacAddress": "02:42:c0:a8:00:02",
        "IPv4Address": "192.168.0.2/16",
        "IPv6Address": ""
    },
    "22a4f34fb7b8becd82719d16cecf84c54ca48bc184964042aecec74d18d1c931": {
        "Name": "centos-net-01",
        "EndpointID": "b2c7cf4d66fdeeea5559bd81fd3166bf78e0e3064e5016e71e51ba5543c5ded4",
        "MacAddress": "02:42:c0:a8:00:01",
        "IPv4Address": "192.168.0.1/16",
        "IPv6Address": ""
    },
    // the container that was connected to the custom network
    "8f7ae3be00e349005326d1fd4ff57fd0e19867b66a23bbc73c9568c9467b7c6f": {
        "Name": "centos-bdg-01",
        "EndpointID": "22427d81dc7386d04c7502ae90c99b812503e8514a13c1e89207435a2ec9ebb4",
        "MacAddress": "02:42:c0:a8:00:03",
        "IPv4Address": "192.168.0.3/16",
        "IPv6Address": ""
    }
}

docker network rm Removes a network

docker network prune Deletes all unused networks


Docker visual management

docker run -itd --restart always --name portainer-web -p 9000:9000  -v /var/run/docker.sock:/var/run/docker.sock  portainer/portainer