1. Overview

1.1 Basic Concepts:

Docker is an open-source application container engine, written in Go and released under the Apache 2.0 license. Docker lets developers package their applications and dependencies into a lightweight, portable container that can then be distributed to any popular Linux machine, physical or virtualized. Containers are fully sandboxed, have no interfaces to each other (like iPhone apps), and, most importantly, have very low performance overhead.

1.2 Advantages:

Simplification: Docker lets developers package their applications and dependencies into a portable container and distribute them to any popular Linux machine for virtualization. Docker changes how virtualization is done, letting developers put their work directly into Docker for management. Convenience and speed are Docker's biggest advantages: tasks that used to take days or even weeks can be completed in seconds with Docker containers.

Cost saving: With the advent of the cloud computing era, developers no longer need expensive hardware to get good results. Docker has changed the mindset that high performance inevitably means high price. Combining Docker with the cloud makes cloud capacity more fully utilized: it not only eases hardware management but also changes how virtualization is done.

1.3 Comparison with Traditional VMs:

As a lightweight virtualization method, Docker has significant advantages over traditional virtual machines for running applications:

Docker containers are fast and can be started and stopped in seconds, much faster than traditional virtual machines.

Docker containers have little demand for system resources, and thousands of Docker containers can run simultaneously on a host.

Docker facilitates users to obtain, distribute and update application images through operations similar to Git, with simple instructions and low learning costs.

Docker supports flexible automatic creation and deployment mechanisms through Dockerfile configuration files to improve work efficiency.

Apart from running the application itself, a Docker container consumes essentially no additional system resources, which keeps application performance high and system overhead minimal.

Docker leverages multiple protection mechanisms in Linux to achieve strict and reliable isolation. Since version 1.3, Docker has introduced security options and image signing, which greatly improve the safety of using Docker.

Feature              Container                       Virtual machine
Startup time         Seconds                         Minutes
Disk usage           Generally MB                    Generally GB
Performance          Near native                     Slower than native
Density per host     Thousands of containers         Usually dozens
Isolation            Security isolation              Complete isolation

1.4 Architecture

Docker uses a client-server (C/S) architecture, with remote APIs used to manage and create Docker containers.

Docker containers are created by Docker images.

The relationship between containers and images is similar to that between objects and classes in object-oriented programming.

Docker           Object-oriented
Container        Object
Image            Class



1.5 Technical Foundations of Docker:

  • Namespaces: the basis of container isolation, ensuring that container A cannot see container B. Six namespaces: User, Mnt, Network, UTS, IPC, Pid

  • Cgroups: resource accounting and limiting for containers. Main cgroups subsystems used: cpu, blkio, devices, freezer, memory

  • UnionFS: typical implementations are AUFS and OverlayFS; the basis of layered images
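These building blocks can be inspected directly on any Linux host, with or without Docker; a containerized process simply gets private copies of them. A quick look (standard Linux paths, nothing Docker-specific):

```shell
# Every process has a set of namespace handles under /proc;
# a container's processes get their own private set of these
# (entries such as ipc, mnt, net, pid, user, uts).
ls -l /proc/self/ns

# Cgroup membership of the current shell:
cat /proc/self/cgroup
```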

1.6 Docker components:

  • Docker client ————> sends requests to the Docker server process, e.g. to create, stop, or destroy containers

  • Docker server process ————> handles all Docker requests and manages all containers

  • Docker Registry ————> a central repository for images; can be regarded as a binary SCM

2. Installation and Deployment

2.1 Preparations

On CentOS, Docker is only supported with the kernels shipped in the distribution.

Docker runs on CentOS 7. The operating system must be 64-bit and the kernel version must be at least 3.10.

Docker runs on CentOS 6.5 or later, which must be 64-bit and have a kernel version of 2.6.32-431 or later.

2.2 Installing Docker

yum install docker -y          # installation
systemctl start docker         # start
systemctl enable docker        # enable start on boot

2.3 Basic Commands

docker search centos   # search for images

By default images are pulled from registries overseas, which is slow; you can configure the DaoCloud mirror accelerator to speed this up.

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://d6f11267.m.daocloud.io
# The script writes the mirror configuration:
echo "{\"registry-mirrors\": [\"http://d6f11267.m.daocloud.io\"]}" > /etc/docker/daemon.json
systemctl restart docker              # restart for the change to take effect



Pull an image as needed:

docker pull docker.io/ansible/centos7-ansible

Pull all images returned by a search:

for i in `docker search centos|awk '! /NAME/{print $2}'`;do docker pull $i;done

View local images:

docker images

2.4 Command Reference:

Container operations:

docker create     # create a container without starting it
docker run        # create and start a container
docker stop       # stop a container (sends SIGTERM)
docker start      # start a stopped container
docker restart    # restart a container
docker rm         # delete a container
docker kill       # send a signal to a container, SIGKILL by default
docker attach     # attach to a running container
docker wait       # block until a container stops

Get container information:

docker ps         # show containers in the Up state
docker ps -a      # show all containers, both Up and Exited
docker inspect    # get all information about a container
docker logs       # view a container's logs (stdout/stderr)
docker events     # get real-time events from the Docker server
docker port       # show a container's port mappings
docker top        # show a container's process information
docker diff       # show changes to a container's filesystem

Export container:

docker cp         # copy files or directories out of a container
docker export     # export a container's entire filesystem as a tar archive, without layers, tags, etc.

Execute:

docker exec       # execute a command in a container; run bash to get an interactive shell

Image operations:

docker images     # list all local images
docker import     # create an image from a tar archive, often used with export
docker build      # create an image from a Dockerfile (recommended)
docker commit     # create an image from a container
docker rmi        # delete an image
docker load       # create an image from a tar archive, used with save
docker save       # save an image as a tar archive, with layers and tag information
docker history    # show the history of commands that generated an image
docker tag        # create an alias (tag) for an image

Registry operations:

docker login      # log in to a registry
docker search     # search for images in a registry
docker pull       # download an image from a repository
docker push       # push an image to a registry

2.5 Simple Operations

Run and enter a container:

docker run -i -t docker.io/1832990/centos6.5 /bin/bash

-t allocates a pseudo-terminal in the new container;

-i keeps STDIN open so we can interact with the container;

-d indicates that the container is running in the background.

/bin/bash launches a bash shell inside the container.

So when the container starts, we get a command prompt:



Inside the container, we install MySQL, set it to start on boot, and commit the modified image:

docker ps -l                                                   # query the container ID
docker commit -m "message" -a "author info" CONTAINER_ID TAG   # commit the modified image

docker inspect CONTAINER_ID        # view the container details
docker push IMAGE                  # upload the Docker image

Create an image using a Dockerfile

You write a Dockerfile containing a set of instructions that tell Docker how to build the image, then build it with the docker build command.

mkdir DockerFile
cd DockerFile
cat > Dockerfile <<EOF
FROM 603dd3515fcc
MAINTAINER Docker xuel
RUN yum install mysql mysql-server -y
RUN mkdir /etc/sysconfig/network
RUN /etc/init.d/mysqld start
EOF

docker build -t "centos6.8:mysqld" .

-t Specifies repository and tag

. specifies the build context, i.e. the path containing the Dockerfile

Note that an image cannot exceed 127 layers.

In addition, you can copy local files to the image using the ADD command.

Open ports externally with the EXPOSE command;

Use CMD commands to describe the programs run after the container is started, etc.

CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]

2.6 Dockerfile

Dockerfile instructions are case-insensitive, but uppercase is recommended. Use # for comments. Each line supports only one instruction, and each instruction can take multiple arguments.

The instructions of Dockerfile can be divided into two kinds according to their functions, namely, build instructions and set instructions.

Build instruction: used to build an image. The operation specified by this instruction will not be performed on the container where the image is running.

Set directive: Sets the properties of the image, and the action specified by this directive will be performed in the container where the image is running.
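As a rough illustration of this split, here is a minimal sketch of a Dockerfile with both kinds of instruction (the base image and package are placeholders, not from this tutorial's environment):

```dockerfile
# Build instructions: executed once, at image build time
FROM centos:6                      # base image
RUN yum install -y httpd           # bake software into the image

# Set instructions: describe properties of the running container
EXPOSE 80                          # port the running container will serve on
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
```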

  • FROM (specify base image)

A build instruction; it must be present and must precede all other instructions in the Dockerfile, since subsequent instructions depend on the image it specifies. The base image named by FROM can come from an official remote repository or from a local repository.

This directive has two formats:

FROM <image>                  # use the latest version of the base image
FROM <image>:<tag>            # use the specified tagged version of the base image
  • MAINTAINER (used to specify image creator information)

Build instructions that write information about the creator of the image to the image. When we execute the docker inspect command on the image, there are corresponding fields in the output to record the information.

MAINTAINER <name>
  • RUN (for installing software)

Build instruction. RUN can run any command supported by the base image; if the base image is Ubuntu, package management must use Ubuntu commands such as apt-get.

RUN <command>                              # shell form, run via /bin/sh -c
RUN ["executable", "param1", "param2"]     # exec form
  • CMD (sets what to do when the Container starts)

Set commands for operations specified when Containers are started. This operation can be performed by executing custom scripts or system commands. This directive can exist only once in the file, and if there are more than one, only the last one is executed.

CMD ["executable", "param1", "param2"]     # exec form, preferred
CMD command param1 param2                  # shell form

ENTRYPOINT specifies the path of an executable script or program that will be executed with param1 and param2 as its arguments. If CMD uses the form below, the Dockerfile must also define an ENTRYPOINT; when it does, CMD takes the following format:

CMD ["param1", "param2"]                   # as default parameters to ENTRYPOINT
  • ENTRYPOINT (Sets what to do when container starts)

The setup directive specifies the command to execute when the container is started. It can be set multiple times, but only the last one is valid.

ENTRYPOINT ["executable", "param1", "param2"]    # exec form, preferred
ENTRYPOINT command param1 param2                 # shell form

This instruction is used in two ways: alone, or together with CMD. When used alone, if a CMD is also present and that CMD is a complete executable command, then CMD and ENTRYPOINT override each other and only the last CMD or ENTRYPOINT takes effect.

# The CMD instruction will not be executed; only ENTRYPOINT runs
CMD echo "Hello, World!"
ENTRYPOINT ls -l

The other way is to use CMD together with ENTRYPOINT to supply ENTRYPOINT's default parameters. In this case CMD contains only the parameter part, not a complete executable command, and ENTRYPOINT must use the exec (JSON array) form to specify the command.

FROM ubuntu  
CMD ["-l"]  
ENTRYPOINT ["/usr/bin/ls"]
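Building the Dockerfile above and running it composes the two instructions; the commands below are a sketch (the image tag lstest is an assumed name, and a running Docker daemon is required):

```shell
docker build -t lstest .        # build the image from the Dockerfile above
docker run lstest               # ENTRYPOINT + default CMD: runs /usr/bin/ls -l
docker run lstest -a /tmp       # extra arguments replace CMD: runs /usr/bin/ls -a /tmp
```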
  • USER (sets the user that runs the container)

Set instruction that specifies the user the container starts as; the default is root.

# specify that memcached runs as user daemon
ENTRYPOINT ["memcached"]
USER daemon
# or equivalently
ENTRYPOINT ["memcached", "-u", "daemon"]
  • EXPOSE (specifies the port that the container needs to map to the host machine)

Set instruction that maps a container port to a port on the host machine. When you need to access the container, you can use the host's IP address and the mapped port instead of the container's IP address. To do this, first declare the container port to be mapped with EXPOSE in the Dockerfile, then pass the -p option with that port when running the container; the EXPOSE'd port is then mapped to a random port on the host. You can also specify exactly which host port to map to, as long as that port is not already in use. EXPOSE can declare multiple ports at once, and when running the container you can pass -p multiple times.

# map one port
EXPOSE port1
# corresponding command to run the container (host port:container port)
docker run -p port1 image

# map multiple ports
EXPOSE port1 port2 port3
# corresponding command to run the container
docker run -p port1 -p port2 -p port3 image
# you can also specify the host ports to map to
docker run -p host_port1:port1 -p host_port2:port2 -p host_port3:port3 image

Port mapping is an important Docker feature, because a container's IP address cannot be fixed when it is run: it is assigned at random from the bridge's address range. The host's IP address, however, is fixed, so mapping a container port to a host port saves us from looking up the container's IP every time we access a service in it. For a running container, you can use docker port with the container ID and the container port to see the mapped host port number.

  • ENV (for setting environment variables)

Build directive that sets an environment variable in the image.

ENV <key> <value>

Once set, subsequent RUN instructions can use the variable. After the container starts, you can check the variable with docker inspect, or set/override it with docker run --env key=value. For example, if you have Java installed and need to set JAVA_HOME, you can write this in your Dockerfile:

ENV JAVA_HOME /path/to/java
  • ADD (copy files from <src> to the container path <dest>)

All files and directories copied into the container get permission 0755 and UID/GID 0. If <src> is a directory, all files inside it are added to the container, but not the directory itself. If <src> is a file in a recognized compression format, Docker will decompress it automatically (pay attention to the format). If <src> is a file and <dest> does not end with a slash, <dest> is treated as a file and the contents of <src> are written to <dest>. If <src> is a file and <dest> ends with a slash, <src> is copied into the <dest> directory.

ADD <src> <dest>

<src> is a path relative to the build context; it can be a file, a directory, or a remote file URL.

<dest> is an absolute path inside the container.
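The rules above in Dockerfile form (a sketch; the base image and file names are placeholders):

```dockerfile
FROM centos:6
ADD app.conf /etc/app/conf.d/     # dest ends with a slash: app.conf is copied into that directory
ADD app.conf /etc/app/app.conf    # no trailing slash: contents of src are written to this file path
ADD site.tar.gz /var/www/         # recognized archive: automatically unpacked under /var/www/
ADD conf.d /etc/app/              # src is a directory: its contents (not conf.d itself) are added
```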

  • VOLUME (specify mount point)

Set instruction that persists data in a directory of the container, usable by the container itself or shared with other containers. We know that containers use AUFS, a union file system that cannot persist data: all changes are lost when the container is deleted. Use this instruction in a Dockerfile when the application in the container needs to persist data.

FROM base  
VOLUME ["/tmp/data"]
  • WORKDIR (switches the working directory)

Set instruction. It can appear multiple times (it works like cd) and affects subsequent RUN, CMD, and ENTRYPOINT instructions.

# runs vim a.txt in /p1/p2
WORKDIR /p1
WORKDIR p2
RUN vim a.txt

2.7 Image Export and Import



Export an image to a local file:



docker save -o centos6.5.tar centos6.5
# or
docker export f9c99092063c > centos6.5.tar

Import an image locally:

docker load --input centos6.5.tar
# or
docker load < centos6.5.tar

docker rm CONTAINER_ID        # delete a stopped container
docker rm -f CONTAINER_ID     # force delete a running container

Enter a running background container to modify it:

docker exec -it CONTAINER_ID /bin/bash



3. Storage

3.1 Data Disks

Docker images are made up of layers of files, and some of Docker’s storage engines handle how to store these files.

docker inspect centos            # view the details

The Layers listed in that output are the files of the centos image; they are read-only and cannot be modified. Images and containers created from this image share these file layers, and Docker adds a read-write layer on top of them. If something in a lower layer needs to be modified, Docker first copies it up into the read-write layer; if you delete the container, the corresponding files in its read-write layer are deleted with it.
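You can see the read-only layers for yourself with docker inspect; this is a sketch assuming a local centos image and a running daemon, and the exact field names vary by Docker version:

```shell
# List the read-only layers an image is built from:
docker inspect -f '{{json .RootFS.Layers}}' centos

# On some engines the storage-driver details show the same layering:
docker inspect -f '{{json .GraphDriver}}' centos
```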

If you have data you want to keep, such as logs from your web server or data from your database management system, put it on a data disk. The data on it persists even if the container is deleted. When creating the container, we can specify the data disk, and optionally a specific directory.

docker run -i -t -v /mnt  --name nginx docker.io/nginx /bin/bash

-v: specifies the directory to be mounted to the container

Use docker inspect CONTAINER_ID to see the physical host path backing the container's mount directory.

Similarly, we can mount a chosen directory on the physical host to a chosen directory in the container.

Mount the host directory into the container:

docker run -d -p 80:80 --name nginx -v /webdata/wordpress:/usr/share/nginx/html docker.io/sergeyzh/centos6-nginx

-d run in the background

–name Specifies the name of the running container

-v Host directory: Container directory Mounts the host directory to a container

-p Host port: Container listening port Maps the container application listening port to a specific port on the physical host

To map multiple physical directories, just pass -v multiple times.





3.2 Data Containers:

You can create a data container, i.e. a container that defines a data disk, and then let other containers use it as their data disk: they effectively inherit the data disk that the data container defines.

First create a data container named newnginx:

docker create -v /mnt -it --name newnginx docker.io/nginx /bin/bash

Using that data container, run a container nginx1 and create a file under the data directory /mnt:

docker run --volumes-from newnginx --name nginx1 -it docker.io/nginx /bin/bash

Files created by nginx1 under /mnt persist in the data container. Likewise, any other new container run on the data container (say an nginx2) will see the files the others created under /mnt.

3.3 Data Disk Management:

When deleting a container, Docker does not delete its data disks by default.

docker volume ls                         # view data disks
docker volume ls -f dangling=true        # view data disks not used by any container
docker volume rm VOLUME_NAME             # delete a data disk



If you want to delete a container while also deleting its data disks, you can use the -v parameter.

docker rm -v newnginx

4. Networking

Docker provides several networks that determine how containers communicate with each other and with the outside world.

docker network ls        # list the networks

When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and containers started on the host are connected to this virtual bridge. A virtual bridge works like a physical switch, so all containers on the host are joined to a layer-2 network through it. Docker assigns each container an IP address from the docker0 subnet and sets docker0's IP address as the container's default gateway. It also creates a veth pair (a pair of virtual NICs) on the host: one end is placed in the newly created container and named eth0 (the container's NIC), while the other end stays on the host with a name like vethxxx and is attached to the docker0 bridge.
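Each piece of this setup can be observed directly on a Docker host (a sketch; it assumes a running Docker daemon and the iproute2 tools):

```shell
ip addr show docker0            # the bridge and its subnet/gateway address
ip link | grep veth             # host-side ends of the veth pairs
docker network inspect bridge   # subnet, gateway, and attached containers
# Inside a container, "ip addr show eth0" shows the other end of the pair.
```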

4.1 Bridge Network

Unless a network is specified when the container is created, containers use the bridge network by default. Containers on the same bridge network can communicate with each other; traffic to and from the outside goes through the bridge, which acts as a go-between for the host and the container and gives the container a degree of isolation. Under the hood, port forwarding is implemented with DNAT rules in iptables; you can inspect them with iptables -t nat -vnL.

4.2 Host Network

If host mode is used when the container is started, the container does not get a separate Network Namespace but shares one with the host. The container does not virtualize its own NIC or configure its own IP; it uses the host's IP address and ports. Other aspects of the container, such as the filesystem and process list, are still isolated from the host. A container on this network is completely open to the outside world: anyone who can reach the host can reach the container.

4.3 None Mode

The container gets its own Network Namespace, but Docker performs no network configuration for it: the container has no NIC, IP address, routes, and so on. We must add NICs and configure IPs ourselves. Containers on this network are completely isolated.

4.4 Simple Demo:

Start two containers and view their internal IP addresses:

for i in `docker ps |grep -v "CONTAINER"|awk '{print $1}'`;do docker inspect $i|grep 'IPAddress';done

In bridge mode, containers on the same host can communicate with each other and with the host directly:

docker inspect CONTAINER_ID

A container created on the host network has no internal IP address; it uses the host's address:

docker run -d --net host docker.io/sergeyzh/centos6-nginx





A container created on the none network has no network configuration at all:

docker run -d --net none docker.io/sergeyzh/centos6-nginx



4.5 Container Ports:

If you want the services provided by a container on the bridge network to be accessible, you must tell Docker which ports to publish. To see which ports an image uses, check the ExposedPorts field in its docker inspect output.

docker run -d -p 80 docker.io/sergeyzh/centos6-nginx        
docker port 09648b2ff7f6

The -p parameter with only a container port maps a random high port on the host to that container port.



docker run -d -p 80:80 docker.io/sergeyzh/centos6-nginx    # map port 80 of the host to port 80 of the container
Copy the code