The original link: blog.opskumu.com/docker.html
Author: Kumu
1 Docker Introduction
Docker has two main components:
- Docker: Open source container virtualization platform
- Docker Hub: A Docker SaaS platform for sharing and managing Docker containers
Docker uses a client-server (C/S) architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. Client and daemon communicate over a socket or through a RESTful API.
1.1 Docker daemon
The Docker daemon runs on a host. The user does not interact with the daemon directly, but communicates with it through the Docker client.
1.2 Docker Client
The Docker client, in fact Docker's binary program, is the main way users interact with Docker. It accepts user commands, forwards them to the Docker daemon behind it, and relays the results back.
1.3 Docker Internals
To understand Docker’s internals, you need to understand the following three components:
- Docker images
- Docker registries
- Docker containers
1.3.1 Docker images
A Docker image is the read-only template from which Docker containers are created. Each image consists of a series of layers, which Docker combines into a single image using UnionFS. UnionFS allows files and folders from separate file systems (called branches) to be transparently overlaid into a single coherent file system. These layers are one of the reasons Docker is so lightweight. When you change a Docker image, for example by upgrading an application to a new version, a new layer is created. So instead of replacing or rebuilding the entire image (as you might do with a virtual machine), only a layer is added or updated. And since you don't have to redistribute the whole image, just the new layers, layering makes distributing Docker images easy and fast.
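The layering idea can be sketched in a few lines of Python. This is a conceptual illustration only, not Docker's actual implementation: each layer is modeled as a dict of paths, upper layers override lower ones, and a whiteout entry stands in for a file deleted by an upper layer.

```python
# Conceptual model of image layers: a union view resolves each path to the
# topmost layer that provides it. WHITEOUT marks a deletion in an upper layer.
WHITEOUT = object()

def union_view(layers):
    """Merge layers listed bottom -> top; upper layers override lower ones."""
    view = {}
    for layer in layers:
        for path, content in layer.items():
            if content is WHITEOUT:
                view.pop(path, None)   # file was deleted in this layer
            else:
                view[path] = content
    return view

base = {"/bin/sh": "busybox", "/app/version": "1.0"}
upgrade = {"/app/version": "2.0"}        # upgrading adds a layer;
print(union_view([base, upgrade]))        # the base layer is never rebuilt
```

Only the `upgrade` layer would need to be shipped to a host that already has `base`, which is the point the paragraph above makes.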
1.3.2 Docker registries
A Docker registry stores images; you can think of it as the analogue of a code repository in version control. Registries, too, can be public or private. The public Docker registry is called Docker Hub, and it provides a huge collection of images to use, created either by yourself or on top of someone else's image. The registry is the distribution component of Docker.
1.3.3 Docker container
A Docker container is similar to a folder: it contains everything an application needs to run. Every container is created from a Docker image. Containers can be run, started, stopped, moved, and deleted. Each container is an isolated and secure application platform; containers are the running component of Docker.
1.4 libcontainer
Since version 0.9, Docker has used libcontainer instead of LXC by default. The interaction between libcontainer and Linux is described in:
- Docker 0.9: Introducing Execution Drivers and libcontainer
1.5 Namespaces
1.5.1 pid namespace
Processes of different users are isolated from each other through pid namespaces, and different namespaces can contain the same pid. The pid namespace has the following characteristics:
- Each namespace has its own pid 1 process (analogous to /sbin/init).
- A process in a namespace can only affect processes in its own namespace or in child namespaces.
- Because /proc reflects running processes, the /proc pseudo-filesystem inside a container shows only the processes of the container's own namespace.
- Namespaces can be nested, so a parent namespace can affect the processes of its child namespaces; the child's processes are therefore also visible in the parent namespace, but with different pids.
Reference Documents: Introduction to Linux Namespaces – Part 3: PID
1.5.2 mnt namespace
The mnt namespace is similar to chroot in that it confines a process to a particular view of the filesystem: processes in different mnt namespaces see different file hierarchies. Unlike chroot, however, /proc/mounts inside a container lists only the mount points of the container's own namespace.
1.5.3 net namespace
Network isolation is implemented through net namespaces. Each net namespace has its own network devices, IP addresses, IP routing tables, and /proc/net directory, so the network of each container is isolated. By default, Docker uses a veth pair to connect the virtual network interface in a container to the docker0 bridge on the host.
Reference documents: Introduction to Linux Namespaces – Part 5: NET
1.5.4 uts namespace
The UTS ("UNIX Time-sharing System") namespace gives each container an independent hostname and domain name, so a container can be treated as an independent node on the network rather than as a process on the host.
Reference documents: Introduction to Linux Namespaces – Part 1: UTS
1.5.5 ipc namespace
Processes inside containers interact through standard Linux inter-process communication (IPC): semaphores, message queues, and shared memory. Unlike with VMs, however, a container's IPC is in fact IPC between processes on the host, so namespace information has to be attached when IPC resources are requested; each IPC resource then has a unique 32-bit ID.
Reference documentation: Introduction to Linux Namespaces – Part 2: IPC
1.5.6 user namespace
Each Container can have a different user and group ID, which means that you can execute programs inside the Container as a user inside the Container rather than as a user on the Host.
With these six namespaces (pid, net, ipc, mnt, uts, and user) isolating processes, networks, IPC, file systems, hostnames, and users, a container can behave like an independent computer, and different containers are isolated from each other at the OS level. However, processes in different namespaces still compete for the same underlying resources, so something like ulimit is needed to manage how much each container may use: that is the job of cgroups.
1.5.7 Reference
- Docker Getting Start: Related Knowledge
- An introduction to Docker and its related terminology, underlying principles, and technologies
1.6 Resource quotas: cgroups
Cgroups implement resource quotas and accounting, and they are very simple to use: they expose a file-like interface. Creating a new folder under the /cgroup directory creates a group; creating a tasks file in that folder and writing a pid into it puts that process under the group's resource control. {subsystem prefix}.{resource item} is the typical configuration form: for example, memory.limit_in_bytes sets the memory limit of a group under the memory subsystem. In addition, subsystems in cgroups can be combined freely: one subsystem can be attached to different groups, and one group can combine several subsystems.
- memory – memory-related limits
- cpu – in cgroups, CPU capacity cannot be reserved the way it is in hardware virtualization; instead, a relative scheduling priority can be set, so processes in groups with a higher weight are more likely to get CPU time. The weight is set by writing to cpu.shares, a relative weight, not an absolute value
- blkio – block I/O statistics and limits: counts and limits in bytes/operations (such as IOPS) and read/write rate limits; the statistics mainly cover synchronous I/O
- devices – device access permissions
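The file-style interface described above can be sketched without touching the real cgroup hierarchy. The sketch below mimics only the shape of the interface under a temporary directory; on a real system these files live under the cgroup mount point (e.g. /cgroup or /sys/fs/cgroup) and writing them requires root.

```python
# Mimic the cgroup-v1 interface layout under a temp dir (shape only; the real
# hierarchy lives under the cgroup mount point and needs root to modify).
import os
import tempfile

root = tempfile.mkdtemp()
group = os.path.join(root, "memory", "mygroup")   # {subsystem}/{group}
os.makedirs(group)

# {subsystem prefix}.{resource item} naming, e.g. memory.limit_in_bytes
with open(os.path.join(group, "memory.limit_in_bytes"), "w") as f:
    f.write(str(512 * 1024 * 1024))               # 512 MB cap

# Attaching a process means writing its pid into the group's `tasks` file
with open(os.path.join(group, "tasks"), "w") as f:
    f.write(str(os.getpid()))

with open(os.path.join(group, "memory.limit_in_bytes")) as f:
    print(f.read())                               # 536870912
```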
Reference document: How to Use cGroup
2 Docker installation
How to install Docker is not covered here; see the official documentation for installation details.
Get the current docker version
$ sudo docker version
Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 39fa2fa/1.3.2
OS/Arch (client): linux/amd64
Server version: 1.3.2
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 39fa2fa/1.3.2
3 Basic usage of Docker
Docker Hub: the Docker image homepage, hosting official images and other public images.
Because of local network conditions, downloads from the official Docker Hub can be slow in China; the DaoCloud image accelerator can help.
3.1 Search images
$ sudo docker search ubuntu
3.2 Pull images
$ sudo docker pull ubuntu    # pull the official ubuntu images
$ sudo docker images         # list the current local images
3.3 Running an interactive shell
$ sudo docker run -i -t ubuntu:14.04 /bin/bash
- docker run – run a container
- -t – allocate a pseudo-tty so we can interact with the container
- -i – keep STDIN open for interactive input
- ubuntu:14.04 – use the ubuntu base image with tag 14.04, in the form [image]:[tag]
- /bin/bash – run the bash shell
$ sudo docker ps    # list running containers; docker ps -a lists all containers on the system
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
6c9129e9df10        ubuntu:14.04        /bin/bash           6 minutes ago       Up 6 minutes                            cranky_babbage
3.4 Related Shortcut Keys
- Exit: Ctrl-D or exit
- Detach: Ctrl-P + Ctrl-Q
- Attach: docker attach CONTAINER-ID
4 Docker command help
4.1 docker help
4.1.1 docker command
$ sudo docker
Commands:
    attach    Attach to a running container
    build     Build an image from a Dockerfile
    cp        Copy files/folders from a container's filesystem to the host path
    create    Create a new container
    diff      Inspect changes on a container's filesystem
    exec      Run a command in an existing container
    export    Stream the contents of a container as a tar archive (counterpart of import)
    history   Show the history of an image
    images    List images
    import    Create a new filesystem image from the contents of a tarball
    inspect   Return low-level information on a container
    kill      Kill a running container
    load      Load an image from a tar archive
    login     Register or log in to a Docker registry server
    logout    Log out from a Docker registry server
    port      Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
    pause     Pause all processes within a container
    ps        List containers
    pull      Pull an image or a repository from the Docker registry server
    push      Push an image or a repository to the Docker registry server
    rm        Remove one or more containers
    rmi       Remove one or more images
    run       Run a command in a new container
    save      Save an image to a tar archive
    search    Search for an image on the Docker Hub
    start     Start a stopped container
    stop      Stop a running container
    tag       Tag an image into a repository
    top       Lookup the running processes of a container
    unpause   Unpause a paused container
    version   Show the Docker version information
    wait      Block until a container stops, then print its exit code

Run 'docker COMMAND --help' for more information on a command.
4.1.2 docker options
Usage of docker:
  --api-enable-cors=false              Enable CORS headers in the remote API
  -b, --bridge=""                      Attach containers to a pre-existing network bridge; use 'none' to disable container networking
  --bip=""                             Use this CIDR notation address for the network bridge's IP, not compatible with -b
  -d, --daemon=false                   Enable daemon mode
  -D, --debug=false                    Enable debug mode
  --dns=[]                             Force Docker to use specific DNS servers
  --dns-search=[]                      Force Docker to use specific DNS search domains
  --exec-driver="native"               Force the Docker runtime to use a specific exec driver
  --fixed-cidr=""                      IPv4 subnet for fixed IPs; this subnet must be nested in the bridge subnet (which is defined by -b or --bip)
  -G, --group="docker"                 Group to assign the unix socket specified by -H when running in daemon mode; use '' (the empty string) to disable setting of a group
  -g, --graph="/var/lib/docker"        Path to use as the root of the Docker runtime
  -H, --host=[]                        The socket(s) to bind to in daemon mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd
  --icc=true                           Enable inter-container communication
  --insecure-registry=[]               Enable insecure communication with specified registries (no certificate verification for HTTPS and enable HTTP fallback) (e.g. localhost:5000 or 10.20.0.0/16)
  --ip="0.0.0.0"                       Default IP address to use when binding container ports
  --ip-forward=true                    Enable net.ipv4.ip_forward
  --ip-masq=true                       Enable IP masquerading for the bridge's IP range
  --iptables=true                      Enable Docker's addition of iptables rules
  --mtu=0                              Set the containers' network MTU; if no value is provided, default to the default route MTU or 1500 if no default route is available
  -p, --pidfile="/var/run/docker.pid"  Path to use for daemon PID file
  --registry-mirror=[]                 Specify a preferred Docker registry mirror
  -s, --storage-driver=""              Force the Docker runtime to use a specific storage driver
  --selinux-enabled=false              Enable SELinux support
  --storage-opt=[]                     Set storage driver options
  --tls=false                          Use TLS; implied by the tls-verify flags
  --tlscacert="/root/.docker/ca.pem"   Trust only remotes providing a certificate signed by the CA given here
  --tlscert="/root/.docker/cert.pem"   Path to TLS certificate file
  --tlskey="/root/.docker/key.pem"     Path to TLS key file
  --tlsverify=false                    Use TLS and verify the remote (daemon: verify client, client: verify daemon)
  -v, --version=false                  Print version information and quit
4.2 docker search
$ sudo docker search --help

Usage: docker search TERM

Search the Docker Hub for images

  --automated=false    Only show automated builds
  --no-trunc=false     Don't truncate output
  -s, --stars=0        Only display images with at least this many stars
Example:
$ sudo docker search -s <stars> ubuntu
NAME                DESCRIPTION                  STARS     OFFICIAL   AUTOMATED
ubuntu              Official Ubuntu Base Image   425       [OK]
4.3 docker info
$ sudo docker info
Containers: 1                  # number of containers
Images: 22                     # number of images
Storage Driver: devicemapper
 Pool Name: docker-8:17-3221225728-pool
 Pool Blocksize: 65.54 kB
 Data file: /data/docker/devicemapper/devicemapper/data
 Metadata file: /data/docker/devicemapper/devicemapper/metadata
 Data Space Used: 1.83 GB
 Data Space Total: 107.4 GB
 Metadata Space Used: 2.191 MB
 Metadata Space Total: 2.147 GB
 Library Version:
Kernel Version: 3.10.0-123.el7.x86_64
Operating System: CentOS Linux 7 (Core)
4.4 docker pull && docker push
$ sudo docker pull --help

Usage: docker pull [OPTIONS] NAME[:TAG]

Pull an image or a repository from the registry

  -a, --all-tags=false    Download all tagged images in the repository

$ sudo docker push --help

Usage: docker push NAME[:TAG]

Push an image or a repository to the registry
Example:
$ sudo docker pull ubuntu                            # pull all images of the official ubuntu repository
$ sudo docker pull ubuntu:14.04                      # pull the specified official ubuntu image
$ sudo docker push 192.168.0.100:5000/ubuntu         # push an image repository to a private registry (to push to the official Docker Hub, register your own account)
$ sudo docker push 192.168.0.100:5000/ubuntu:14.04   # push the specified image to a private registry
4.5 docker images
List the images on the current system
$ sudo docker images --help

Usage: docker images [OPTIONS] [NAME]

List images

  -a, --all=false       Show all images (by default the intermediate image layers are filtered out; docker images only shows final images)
  -f, --filter=[]       Provide filter values (i.e. 'dangling=true')
  --no-trunc=false      Don't truncate output
  -q, --quiet=false     Only show numeric IDs
Example:
$ sudo docker images -a        # list all images, including intermediate layers
$ sudo docker images ubuntu    # list images of the local ubuntu repository
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              12.04               ebe4be4dd427        4 weeks ago
ubuntu              14.04               e54ca5efa2e9        4 weeks ago         276.5 MB
ubuntu              14.04-ssh           6334d3ac099a        7 weeks ago         383.2 MB
4.6 docker rmi
Delete one or more mirrors
$ sudo docker rmi --help

Usage: docker rmi IMAGE [IMAGE...]

Remove one or more images

  -f, --force=false     Force removal of the image
  --no-prune=false      Do not delete untagged parents
4.7 docker run
$ sudo docker run --help

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

  -a, --attach=[]            Attach to STDIN, STDOUT or STDERR
  -c, --cpu-shares=0         CPU shares (relative weight)
  --cap-add=[]               Add Linux capabilities
  --cap-drop=[]              Drop Linux capabilities
  --cidfile=""               Write the container ID to the file
  --cpuset=""                CPUs in which to allow execution (0-3, 0,1)
  -d, --detach=false         Detached mode: run the container in the background and print the new container ID
  --device=[]                Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc)
  --dns=[]                   Set custom DNS servers
  --dns-search=[]            Set custom DNS search domains
  -e, --env=[]               Set environment variables
  --env-file=[]              Read in a line-delimited file of environment variables
  --expose=[]                Expose a port from the container without publishing it on the host
  -h, --hostname=""          Container host name
  -i, --interactive=false    Keep STDIN open even if not attached
  --link=[]                  Add link to another container
  --lxc-conf=[]              (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
  -m, --memory=""            Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
  --name=""                  Assign a name to the container
  --net="bridge"             Set the network mode for the container
                               'bridge': create a new network stack for the container on the docker bridge
                               'none': no networking for this container
                               'container:<name|id>': reuse another container's network stack
                               'host': use the host network stack inside the container. Note: host mode gives the container full access to local system services such as D-Bus and is therefore considered insecure.
  -P, --publish-all=false    Publish all exposed ports to the host interfaces
  -p, --publish=[]           Publish a container's port to the host
                               format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort
                               (use 'docker port' to see the actual mapping)
  --privileged=false         Give extended privileges to this container
  --restart=""               Restart policy to apply when a container exits (no, on-failure[:max-retry], always)
  --rm=false                 Automatically remove the container when it exits (incompatible with -d)
  --security-opt=[]          Security options
  --sig-proxy=true           Proxy received signals to the process (even in non-tty mode). SIGCHLD is not proxied.
  -t, --tty=false            Allocate a pseudo-TTY
  -v, --volume=[]            Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)
  --volumes-from=[]          Mount volumes from the specified container(s)
  -w, --workdir=""           Working directory inside the container
Example:
$ sudo docker images ubuntu
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              14.04               e54ca5efa2e9        4 weeks ago         276.5 MB
...
$ sudo docker run -t -i -c 100 -m 512MB -h test1 -d --name="docker_test1" ubuntu /bin/bash
# create a background bash container with CPU shares 100, a 512 MB memory limit, hostname test1, named docker_test1
a424ca613c9f2247cd3ede95adfbaf8d28400cbcb1d5f9b69a7b56f97b2b52e5
$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
a424ca613c9f        ubuntu:14.04        /bin/bash           6 seconds ago       Up 5 seconds                            docker_test1
$ sudo docker attach docker_test1
root@test1:/# pwd
/
root@test1:/# exit
exit
About CPU priorities:
By default all groups have 1024 shares. A group with 100 shares will get a ~10% portion of the CPU time – archlinux cgroups
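The quoted rule can be checked with a one-line computation. This is just arithmetic over the documented semantics of cpu.shares (a relative weight that only matters under contention), not a kernel interface:

```python
# cpu.shares is a relative weight: under full contention, a group's share of
# CPU time is its shares divided by the total shares of all competing groups.
def cpu_portion(shares, competing_shares):
    return shares / (shares + sum(competing_shares))

# One group at 100 shares competing with one group at the default 1024 shares:
print(round(cpu_portion(100, [1024]), 3))    # ~0.089, i.e. roughly the ~10% quoted
print(round(cpu_portion(1024, [1024]), 3))   # two default groups split the CPU: 0.5
```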
4.8 docker start | stop | kill ...
- docker start CONTAINER [CONTAINER...]  # start one or more stopped containers
- docker stop CONTAINER [CONTAINER...]  # stop one or more running containers; the -t option specifies a timeout before the container is killed
- docker kill [OPTIONS] CONTAINER [CONTAINER...]  # kill sends SIGKILL by default; the -s option specifies the signal type to send
- docker restart [OPTIONS] CONTAINER [CONTAINER...]  # restart one or more running containers; the -t option specifies a timeout
- docker pause CONTAINER  # pause a container, for example before a commit
- docker unpause CONTAINER  # resume a paused container
- docker rm [OPTIONS] CONTAINER [CONTAINER...]  # remove one or more containers
  - -f, --force=false: force removal of a running container
  - -l, --link=false: remove the specified link and not the underlying container
  - -v, --volumes=false: remove the volumes associated with the container
- docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]  # commit the specified container as an image
  - -a, --author="": author (e.g. "John Hannibal Smith [email protected]")
  - -m, --message="": commit message
  - -p, --pause=true: pause the container during commit (the default is to pause)
- docker inspect CONTAINER|IMAGE [CONTAINER|IMAGE...]  # view the details of a container or image
- docker logs CONTAINER  # output the specified container's log
  - -f, --follow=false: follow log output, like tail -f
  - -t, --timestamps=false: show timestamps
  - --tail="all": output the specified number of lines from the end of the logs (defaults to all)
Docker Run Reference
4.9 Docker 1.3 New Features and Commands
4.9.1 Digital Signature Verification
Docker version 1.3 will use digital signatures to automatically verify the origin and integrity of all official libraries. If an official image is tampered with or corrupted, Docker currently only alerts you to the situation and does not prevent the container from running.
4.9.2 Inject new processes with docker exec
$ docker exec --help

Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Run a command in an existing container

  -d, --detach=false         Detached mode: run command in the background
  -i, --interactive=false    Keep STDIN open even if not attached
  -t, --tty=false            Allocate a pseudo-TTY
To simplify debugging, you can use the Docker exec command to run programs on the running container through the Docker API and CLI.
$ docker exec -it ubuntu_bash bash
The example above starts a new bash session in the ubuntu_bash container.
4.9.3 Tune container lifecycles with docker create
docker run both creates a container and runs a program inside it. Because many users want to create a container without starting it right away, docker create was introduced.
$ docker create -t -i fedora bash
6d8af538ec541dd581ebc2a24153a28329acb5268abe5ef868c1f1a261221752
The example above creates a writable container layer and prints the container ID, but does not run the container. It can then be started with:
$ docker start -a -i 6d8af538ec5
bash-4.2#
4.9.4 Security Options
With the –security-opt option, users can customize SELinux and AppArmor volume labels and configurations while running the container.
$ docker run --security-opt label:type:svirt_apache -i -t centos bash
In the example above, the container is only allowed to listen on Apache ports. The advantage of this option is that users no longer need to run docker with --privileged, which reduces the security risk.
Docker 1.3: Signed images, Process Injection, Security Options, Mac shared directories
4.10 Docker 1.5 new features
Reference: New features in Docker 1.5
5 Docker port mapping
# get the IP of a container by its container ID
$ sudo docker inspect <container_id> | grep IPAddress | cut -d '"' -f 4
In any case, these IPs are local to the host system, and container ports are not reachable from non-local machines. Besides being reachable only locally, container IP addresses also change every time a container is started.
Docker addresses both of these problems and provides a simple, reliable way to access services inside containers. Docker binds ports on the host system's interfaces so that non-local clients can reach services running inside a container, and to make communication between containers easy it also provides a linking mechanism.
5.1 Automatic Port Mapping
The --expose option specifies a port on which the container provides a service externally; with -P, Docker binds all exposed ports automatically:

$ sudo docker run -t -P --expose 22 --name server ubuntu:14.04
docker run -P automatically binds all ports on which the container provides services. The mapped host ports are taken from an unused port pool (starting at 49000). You can view the mappings with docker inspect <container_id> or docker port <container_id> <port>.
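As a toy illustration of that allocation, the sketch below hands out host ports from a fixed pool. The pool bounds are an assumption taken from the text, and real Docker tracks allocations globally per host; this only shows the bookkeeping idea.

```python
# Toy sketch of dynamic host-port allocation from a fixed pool, as done for -P.
# Pool bounds (49000-49900) are an assumption, not Docker's authoritative range.
class PortPool:
    def __init__(self, start=49000, end=49900):
        self.free = list(range(start, end + 1))
        self.mapping = {}              # container_port -> host_port

    def publish(self, container_port):
        host_port = self.free.pop(0)   # remove the port from the unused pool
        self.mapping[container_port] = host_port
        return host_port

pool = PortPool()
print(pool.publish(22))   # 49000
print(pool.publish(80))   # 49001
```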
5.2 Binding ports to specified interfaces
Basic syntax:
$ sudo docker run -p [([<host_interface>:[host_port]])|(<host_port>):]<container_port>[/udp] <image> <cmd>
If no binding IP is specified, Docker listens on all network interfaces by default.
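The bracketed grammar above is compact; as an informal illustration, here is a small parser for the main -p spec forms. This is a sketch for reading the syntax, not Docker's own parsing code:

```python
# Parse the -p forms: ip:hostPort:containerPort | ip::containerPort |
# hostPort:containerPort | containerPort, with an optional /udp suffix.
def parse_port_spec(spec):
    proto = "tcp"
    if spec.endswith("/udp"):
        spec, proto = spec[:-4], "udp"
    parts = spec.split(":")
    if len(parts) == 3:                # ip:hostPort:containerPort (hostPort may be empty)
        ip, host, cont = parts
    elif len(parts) == 2:              # hostPort:containerPort
        ip, (host, cont) = "", parts
    else:                              # containerPort only
        ip, host, cont = "", "", parts[0]
    return {"ip": ip or "0.0.0.0",           # no IP given: listen on all interfaces
            "host_port": host or None,       # no host port: dynamically allocated
            "container_port": cont,
            "proto": proto}

print(parse_port_spec("127.0.0.1:80:8080"))
print(parse_port_spec("8080"))
```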
5.2.1 Binding TCP Ports
# Bind TCP port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine.
$ sudo docker run -p 127.0.0.1:80:8080 <image> <cmd>
# Bind TCP port 8080 of the container to a dynamically allocated TCP port on 127.0.0.1 of the host machine.
$ sudo docker run -p 127.0.0.1::8080 <image> <cmd>
# Bind TCP port 8080 of the container to TCP port 80 on all available interfaces of the host machine.
$ sudo docker run -p 80:8080 <image> <cmd>
# Bind TCP port 8080 of the container to a dynamically allocated TCP port on all available interfaces.
$ sudo docker run -p 8080 <image> <cmd>
5.2.2 Binding UDP Ports
# Bind UDP port 5353 of the container to UDP port 53 on 127.0.0.1 of the host machine.
$ sudo docker run -p 127.0.0.1:53:5353/udp <image> <cmd>
6 Docker Network Configuration
Image: Docker-Container and Lightweight Virtualization
Docker uses a Linux bridge to provide communication between containers; the docker0 bridge interface exists to make this easy to manage. When the Docker daemon starts, it:
- creates the docker0 bridge if not present
- searches for an IP address range which doesn't overlap with an existing route
- picks an IP in the selected range
- assigns this IP to the docker0 bridge
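The search step can be sketched with the standard ipaddress module. The candidate list and the routes below are made-up examples; the point is only the overlap test performed against the host routing table:

```python
# Sketch of docker0 subnet selection: pick the first candidate private range
# that does not overlap any subnet already present in the host routing table.
import ipaddress

def pick_bridge_subnet(candidates, existing_routes):
    existing = [ipaddress.ip_network(r) for r in existing_routes]
    for cand in candidates:
        net = ipaddress.ip_network(cand)
        if not any(net.overlaps(e) for e in existing):
            return net
    raise RuntimeError("no free subnet for docker0")

routes = ["10.0.0.0/8", "172.17.0.0/16"]        # assumed host routes, for illustration
print(pick_bridge_subnet(["172.17.0.0/16", "172.18.0.0/16"], routes))
# 172.18.0.0/16: the first candidate conflicts with an existing route, so the
# next one is chosen; docker0 would then be assigned an IP inside it
```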
6.1 Docker four network modes
The four network modes are extracted from Docker network details and pipework source code interpretation and practice
When creating a container with docker run, the --net option selects the container's network mode. Docker has the following four modes:
- host mode, specified with --net=host
- container mode, specified with --net=container:NAME_or_ID
- none mode, specified with --net=none
- bridge mode, specified with --net=bridge, the default
6.1.1 host mode
If the host mode is used when the container is started, the container does not get a separate Network Namespace, but shares a Network Namespace with the host. The container does not virtualize its own network card, configure its own IP address, etc., but uses the IP address and port of the host.
For example, let’s start a Docker container with a Web application in host mode on a machine at 10.10.101.105/24 and listen on TCP port 80. When we run anything like ifconfig in the container to view the network environment, we see information from the host. External applications can access the container using 10.10.101.105:80 without any NAT, as if running directly on the host. However, other aspects of the container, such as the file system and process list, are still isolated from the host.
6.1.2 container mode
This mode makes a newly created container share a Network Namespace with an existing container, rather than with the host. The new container does not create its own network interface or configure its own IP; instead it shares the IP address and port range of the specified container. Apart from networking, the two containers remain isolated in other respects such as file systems and process lists. Processes in the two containers can communicate through the lo loopback device.
6.1.3 none mode
This pattern is different from the first two. In this mode, Docker containers have their own Network Namespace, but do not perform any Network configuration for Docker containers. That is, the Docker container has no network card, IP, routing, etc. We need to add network cards and configure IP for Docker containers by ourselves.
6.1.4 bridge mode
Photo: The Container World Part 2 Networking
Bridge mode is Docker's default network setting. In this mode each container gets its own Network Namespace and IP address, and all Docker containers on a host are attached to a virtual bridge. When the Docker server starts, it creates a virtual bridge named docker0 on the host, and containers started on this host are connected to it. A virtual bridge works like a physical switch, so all containers on the host sit on a layer-2 network behind it. Docker picks an unused subnet from the private ranges defined in RFC 1918 and assigns an address in it to docker0; containers attached to docker0 then pick unoccupied IPs from that subnet. Docker typically uses 172.17.0.0/16 and assigns 172.17.42.1/16 to the docker0 bridge, which acts as a virtual network interface on the host.
6.2 Listing the current Host Network Bridges
$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.000000000000       no
6.3 Viewing the Current Docker0 IP Address
$ sudo ifconfig docker0
docker0   Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
When containers run, each one is given a virtual interface that is bridged to docker0, and each container is configured with a dedicated IP address on the same subnet as docker0. Docker0's IP is used as the default gateway for all containers.
6.4 Running a Container
$ sudo docker run -t -i -d ubuntu /bin/bash
52f811c5d3d69edddefc75aff5a4525fc8ba8bcfa1818132f9dc7d4f7c7e78b4
$ sudo brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.fef213db5a66 no vethQCDY1N
Above, docker0 is acting as a bridge for the vethQCDY1N interface of container 52f811c5d3d6.
6.4.1 Using a specific range of IP addresses
Docker tries to find an IP range that is not in use on the host. Although this works in most cases, it is not foolproof, and sometimes we need to plan the IP layout ourselves. Docker allows you to manage the docker0 bridge yourself, or to use a custom bridge via the -b option; the bridge-utils package is required.
The basic steps are as follows:
- ensure Docker is stopped
- create your own bridge (bridge0 for example)
- assign a specific IP to this bridge
- start Docker with the -b=bridge0 parameter
# Stopping Docker and removing docker0
$ sudo service docker stop
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0

# Create our own bridge
$ sudo brctl addbr bridge0
$ sudo ip addr add 192.168.5.1/24 dev bridge0
$ sudo ip link set dev bridge0 up

# Confirming that our bridge is up and running
$ ip addr show bridge0
4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
    link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.1/24 scope global bridge0
       valid_lft forever preferred_lft forever

# Tell Docker about it and restart (on Ubuntu)
$ echo 'DOCKER_OPTS="-b=bridge0"' >> /etc/default/docker
$ sudo service docker start
Reference document: Network Configuration
6.5 Container Communication between Hosts
Containers on different hosts can communicate with each other using pipework:
$ git clone https://github.com/jpetazzo/pipework.git
$ sudo cp -rp pipework/pipework /usr/local/bin/
6.5.1 Installing dependent Software
$ sudo apt-get install iputils-arping bridge-utils -y
6.5.2 Bridging networks
For bridging networks, refer to the bridge configuration instructions in Daily Troubleshooting Tips.
# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000c291412cd no eth0
docker0 8000.56847afe9799 no vetheb48029
You can delete docker0 and specify br0 as the Docker bridge, or keep the default configuration: containers on a single host then communicate through docker0, while for containers on different hosts pipework bridges each container's virtual NIC to br0, so that containers across hosts can reach each other.
- Ubuntu
$ sudo service docker stop
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0
$ echo 'DOCKER_OPTS="-b=br0"' >> /etc/default/docker
$ sudo service docker start
- CentOS 7/RHEL 7
$ sudo systemctl stop docker
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0
$ cat /etc/sysconfig/docker | grep 'OPTIONS='
OPTIONS=--selinux-enabled -b=br0 -H fd://
$ sudo systemctl start docker
6.5.3 pipework
For communication between containers on different hosts, pipework can create a virtual NIC for a Docker container, assign it an IP address, and bridge it to br0:
$ git clone https://github.com/jpetazzo/pipework.git
$ sudo cp -rp pipework/pipework /usr/local/bin/
$ pipework
Syntax:
pipework <hostinterface> [-i containerinterface] <guest> <ipaddr>/<subnet>[@default_gateway] [macaddr][@vlan]
pipework <hostinterface> [-i containerinterface] <guest> dhcp [macaddr][@vlan]
pipework --wait [-i containerinterface]
First, start a container with --net=none so that Docker does not configure any network for it:
$ sudo docker run --rm -ti --net=none ubuntu:14.04 /bin/bash
root@a46657528059:/#
# detach with Ctrl-p + Ctrl-q
$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
a46657528059        ubuntu:14.04        "/bin/bash"         4 minutes ago       Up 4 minutes                            hungry_lalande
$ sudo pipework br0 -i eth0 a46657528059 192.168.115.10/24@192.168.115.2
# pipework cannot add static routes; if you need them, run the container with
# --privileged=true and add the routes from inside the container
$ sudo docker attach a46657528059
root@a46657528059:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 86:b6:6b:e8:2e:4d
          inet addr:192.168.115.10  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::84b6:6bff:fee8:2e4d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:648 (648.0 B)  TX bytes:690 (690.0 B)
root@a46657528059:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.115.2   0.0.0.0         UG    0      0        0 eth0
192.168.115.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
To avoid the unnecessary security exposure of creating the container with the --privileged=true option, you can add static routes using ip netns instead:
$ docker inspect --format="{% raw %}{{ .State.Pid }}{% endraw %}" a46657528059
6350
$ sudo ln -s /proc/6350/ns/net /var/run/netns/6350
$ sudo ip netns exec 6350 ip route add 192.168.0.0/16 dev eth0 via 192.168.115.2
$ sudo ip netns exec 6350 ip route   # confirm that the route was added
...
192.168.0.0/16 via 192.168.115.2 dev eth0
...
Perform the corresponding configuration on the other host, create a container and bridge its virtual NIC to br0 with pipework, then test connectivity between the containers.
pipework can also create VLAN networks for containers; that is not covered here, as the official documentation explains it clearly:
- Pipework official documentation
- Docker network details and pipework source code interpretation and practice
7 Dockerfile
Docker can build images automatically by reading the instructions in a Dockerfile, a text file that contains all the commands needed to assemble an image. With the docker build command, you can build an image from a Dockerfile's contents. Before covering the build itself, let's go over the basic Dockerfile syntax.
Dockerfile has the following command options:
FROM
MAINTAINER
RUN
CMD
EXPOSE
ENV
ADD
COPY
ENTRYPOINT
VOLUME
USER
WORKDIR
ONBUILD
7.1 FROM
Usage:
FROM <image>
or
FROM <image>:<tag>
Specifies the base image to build from. If the image is not available locally, it is pulled automatically from the public Docker registry.
- FROM must be the first instruction in a non-comment line of a Dockerfile, i.e. a Dockerfile begins with a FROM statement
- FROM can appear multiple times in one Dockerfile if you need to create multiple images
- If the FROM statement does not specify an image tag, the latest tag is assumed
7.2 MAINTAINER
Usage:
MAINTAINER <name>
Specifies the author of the image.

RUN can be used in two forms:
- RUN <command> (shell form; the command is run in a shell: /bin/sh -c)
- RUN ["executable", "param1", "param2"] (exec form)
Each RUN command executes the specified command on top of the current image and submits it as a new image. Subsequent runs are based on the image submitted after the previous RUN. Images are layered and can be created from any historical commit point of an image, similar to source control.
The exec form is parsed as a JSON array, so you must use double quotes rather than single quotes. The exec form does not invoke a command shell, so normal shell processing (such as variable substitution) does not happen. For example:
RUN [ "echo", "$HOME" ]
This prints the literal string $HOME rather than the value of the variable; the correct form is:
RUN [ "sh", "-c", "echo $HOME" ]
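The difference can be reproduced with plain sh, outside Docker: the exec form passes its arguments straight to the executable (no shell, so no variable expansion), while the shell form goes through /bin/sh -c. A minimal sketch:

```shell
# Exec-form analogy: the command receives the literal string "$HOME" as an
# argument, because no shell ever gets a chance to expand it
literal=$(printf '%s' '$HOME')

# Shell-form analogy: a shell is invoked, so the variable is expanded
expanded=$(sh -c 'printf %s "$HOME"')

echo "exec-style:  $literal"
echo "shell-style: $expanded"
```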
The cache produced by a RUN instruction is not invalidated on the next build and will be reused. To disable this, use the --no-cache option, i.e. docker build --no-cache.
7.3 CMD
CMD can be used in three ways:
- CMD ["executable","param1","param2"] (exec form; this is the preferred form)
- CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
- CMD command param1 param2 (shell form)
CMD can only be used once in a Dockerfile; if it appears several times, only the last one takes effect.
The purpose of CMD is to provide a default command for starting the container. If the user specifies a command when starting the container, it overrides the one given by CMD.
CMD is executed when the container starts, not while the image is being built; RUN, by contrast, is executed only at image build time and plays no role once the container is running.
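A minimal Dockerfile sketch illustrating the difference (package and command names are illustrative):

```dockerfile
FROM ubuntu:14.04
# RUN executes at build time; its result is committed into the image
RUN apt-get update && apt-get install -y curl
# CMD only records the default command; it runs when the container starts
CMD ["curl", "--version"]
```

Running the resulting image with docker run prints the curl version, while docker run <image> /bin/bash overrides the CMD entirely.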
7.4 EXPOSE
EXPOSE <port> [<port>...]
Tells Docker which ports the container listens on. To actually map them to the host, use the -p or -P option with docker run.
7.5 ENV
ENV <key> <value>       # sets a single variable
ENV <key>=<value> ...   # allows several variables to be set at once
Specifies an environment variable. The variable is available to subsequent RUN instructions and is retained when the container runs.
Example:
ENV myName="John Doe" myDog=Rex\ The\ Dog \
myCat=fluffy
Is equivalent to
ENV myName John Doe
ENV myDog Rex The Dog
ENV myCat fluffy
7.6 ADD
ADD <src>... <dest>
ADD copies files or directories from the host, or files from remote URLs, to the specified path <dest> in the container.
Wildcard matching is supported, using Go's filepath.Match rules; for details, see Go filepath.Match.
ADD hom* /mydir/        # adds all files starting with "hom"
ADD hom?.txt /mydir/    # ? is replaced with any single character
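Shell globbing behaves the same way for * and ?, so the matching can be previewed outside Docker (file names below are illustrative):

```shell
# Create sample files in a scratch directory to preview the matching
dir=$(mktemp -d)
cd "$dir"
touch hom1.txt home.txt homework.c other.txt

# Count what "hom*" matches: every file whose name starts with "hom"
star_count=$(set -- hom*; echo $#)

# Count what "hom?.txt" matches: "hom", one further character, then ".txt"
qmark_count=$(set -- hom?.txt; echo $#)

echo "hom* matched $star_count files, hom?.txt matched $qmark_count files"
```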
- The <dest> path must be absolute; if it does not exist, the directory is created automatically
- The <src> path must be relative to the directory containing the Dockerfile
- If <src> is a directory, only its contents are copied, not the directory itself
7.7 COPY
COPY <src>... <dest>
COPY copies files or directories from <src> to the specified path <dest> in the container. It behaves like ADD, with the one difference that it cannot take remote file URLs.
7.8 ENTRYPOINT
- ENTRYPOINT ["executable", "param1", "param2"] (exec form)
- ENTRYPOINT command param1 param2 (shell form)
Configures the command executed when the container starts. Unlike CMD, it is not overridden by arguments supplied to docker run; to override it, use the docker run --entrypoint option.
Only one ENTRYPOINT takes effect per Dockerfile; if several are specified, only the last one is used.
7.8.1 Exec Form ENTRYPOINT example
Use the exec form of ENTRYPOINT to set stable default commands and options, and use CMD to supply additional defaults that are more likely to be changed.
FROM ubuntu
ENTRYPOINT ["top", "-b"]
CMD ["-c"]
An example of using ENTRYPOINT in a Dockerfile to run the Apache service in the foreground:
FROM debian:stable
RUN apt-get update && apt-get install -y --force-yes apache2
EXPOSE 80 443
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
7.8.2 Shell Form ENTRYPOINT example
In this form the command is executed with /bin/sh -c, and any CMD or docker run arguments are ignored. To ensure that docker stop can correctly signal a long-running ENTRYPOINT process, remember to start it with exec:
FROM ubuntu
ENTRYPOINT exec top -b
If you forget to use exec in your ENTRYPOINT, you can still specify a CMD, but its parameters will be ignored:
FROM ubuntu
ENTRYPOINT top -b
CMD --ignored-param1   # --ignored-param2, --ignored-param3 ... would be ignored as well
7.9 VOLUME
VOLUME ["/data"]
Creates a mount point that can be mounted from the host or from other containers; more on this later.
7.10 USER
USER daemon
Specifies the user name or UID to use when running the container, and for the subsequent RUN, CMD, and ENTRYPOINT instructions.
7.11 WORKDIR
WORKDIR /path/to/workdir
Configures the working directory for subsequent RUN, CMD, and ENTRYPOINT instructions. Multiple WORKDIR instructions can be used; if a relative path is given, it is resolved against the path set by the previous WORKDIR.
WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd
The final path is /a/b/c.
The WORKDIR instruction can expand environment variables previously set with ENV:
ENV DIRPATH /path
WORKDIR $DIRPATH/$DIRNAME
The final path is /path/$DIRNAME.
7.12 ONBUILD
ONBUILD [INSTRUCTION]
Configure the operation instructions to perform when the created image is used as the base image for other newly created images.
For example, Dockerfile creates the image image-a with the following content:
[...]
ONBUILD ADD . /app/src
ONBUILD RUN /usr/local/bin/python-build --dir /app/src
[...]
If a new image is built on top of image-A, i.e. the new Dockerfile specifies the base image with FROM image-A, the ONBUILD instructions are triggered automatically during that build, which is equivalent to appending the following two instructions:
# Automatically run the following
ADD . /app/src
RUN /usr/local/bin/python-build --dir /app/src
It is recommended to tag images that use ONBUILD accordingly, e.g. ruby:1.9-onbuild.
7.13 Dockerfile Examples
# Nginx
#
# VERSION   0.0.1
FROM ubuntu
MAINTAINER Victor Vieux <[email protected]>
RUN apt-get update && apt-get install -y inotify-tools nginx apache2 openssh-server

# Firefox over VNC
#
# VERSION   0.3
FROM ubuntu
# Install xvfb in order to create a 'fake' display and firefox
RUN apt-get update && apt-get install -y x11vnc xvfb firefox
RUN mkdir ~/.vnc
# Setup a password
RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart firefox (might not be the best way, but it does the trick)
RUN bash -c 'echo "firefox" >> /.bashrc'
EXPOSE 5900
CMD ["x11vnc", "-forever", "-usepw", "-create"]

# Multiple images example
#
# VERSION   0.1
FROM ubuntu
RUN echo foo > bar
# Will output something like ===> 907ad6c2736f
FROM ubuntu
RUN echo moo > oink
# Will output something like ===> 695d7793cbe4
# You'll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with /oink.
7.14 docker build
$ docker build --help

Usage: docker build [OPTIONS] PATH | URL | -

Build a new image from the source code at PATH

  --force-rm=false     Always remove intermediate containers, even after unsuccessful builds
  --no-cache=false     Do not use cache when building the image
  --quiet=false        Suppress the verbose output generated by the containers
  --rm=true            Remove intermediate containers after a successful build
  --tag=""             Repository name (and optionally a tag) to be applied to the resulting image in case of success
Dockerfile Reference
7.15 Dockerfile best practices
- Use a .dockerignore file
For faster uploads and more efficiency during docker builds, a.dockerignore file should be used to exclude files or directories that are not needed during image building. For example, unless.git is needed during the build process, you should add it to the.dockerignore file to save a lot of time.
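As a sketch, a .dockerignore file is just a list of patterns, one per line (the entries below are illustrative, not prescriptive):

```
# hypothetical .dockerignore — exclude what the build does not need
.git
*.log
tmp/
node_modules/
```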
- Avoid installing unnecessary software packages
To reduce complexity, dependencies, file sizes, and build times, you should avoid installing additional or unnecessary packages. For example, there is no need to install a text editor in a database image.
- Each container runs a process
In most cases, a container should only run one program on its own. Decoupling applied to multiple containers makes it easier to scale horizontally and reuse. If one service depends on another, refer to Linking Containers Together.
- Minimize the number of layers
Every instruction creates a commit, i.e. a new image layer; an image is a layered structure. For Dockerfiles, a balance should be found between readability and keeping the number of layers small.
- Sort multi-line arguments
Whenever possible, sort multi-line arguments alphabetically. This helps avoid duplicate packages, makes the list easier to update, and improves readability. Continue each line with a backslash (\):
RUN apt-get update && apt-get install -y \
bzr \
cvs \
git \
mercurial \
subversion
- Use the build cache
During an image build, the Dockerfile instructions are executed in order. For each instruction, Docker first looks for an existing image in its cache to reuse; only if none is found does it create a new image. To skip the cache, add the --no-cache=true option to docker build.
Starting from the base image already in the cache, the next instruction is compared against all child images derived from it to see whether one was built with exactly the same instruction; if not, the cache is invalidated. In most cases, comparing the Dockerfile instruction with the child images is enough. For the ADD and COPY instructions, however, the contents of the copied files are also checksummed, and that checksum is used in the cache lookup; if a file has changed, the cache is invalidated. A command like RUN apt-get -y update is compared only as a string, so if the string matches, the update is not re-executed.
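The content-checksum idea behind ADD/COPY cache invalidation can be sketched with plain shell (cksum stands in for Docker's internal hashing; file names are illustrative):

```shell
# A change in file contents changes the checksum, which changes the cache key
f=$(mktemp)

echo "version 1" > "$f"
sum_before=$(cksum < "$f")

echo "version 2" > "$f"
sum_after=$(cksum < "$f")

if [ "$sum_before" != "$sum_after" ]; then
    echo "contents changed: the cached layer would be invalidated"
fi
```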
To use the cache effectively, keep your Dockerfiles consistent and place instructions that change frequently as late in the file as possible.
7.15.1 Dockerfile instructions
- FROM: use images from the official image library as your base image whenever possible
- RUN: for readability, ease of understanding, and maintainability, split long or complex RUN statements over multiple lines with the \ separator
- Do not put RUN apt-get update on a line by itself; otherwise, when packages are added later, the cached update step will not be re-executed
- Avoid RUN apt-get upgrade or dist-upgrade, since many essential packages cannot be upgraded inside an unprivileged container; if you know a specific package needs updating, use apt-get install -y xxx
- The standard form is RUN apt-get update && apt-get install -y package-bar package-foo

Example:
RUN apt-get update && apt-get install -y \
    aufs-tools \
    automake \
    btrfs-tools \
    build-essential \
    curl \
    dpkg-sig \
    git \
    iptables \
    libapparmor-dev \
    libcap-dev \
    libsqlite3-dev \
    lxc=1.0* \
    mercurial \
    parallel \
    reprepro \
    ruby1.9.1 \
    ruby1.9.1-dev \
    s3cmd=1.1.0*
- CMD: the recommended form is CMD ["executable", "param1", "param2"...]. The CMD ["param1", "param2"] form should only be used together with ENTRYPOINT
- EXPOSE: declare in the Dockerfile the ports the container will listen on, and map them to host ports with docker run
- ENV: to make new software easier to run, you can use ENV to update the PATH variable, e.g. ENV PATH /usr/local/nginx/bin:$PATH ensures that CMD ["nginx"] just works
ENV can also define variables like this:
ENV PG_MAJOR 9.3
ENV PG_VERSION 9.3.4
RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgress && ...
ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH
- ADD or COPY: compared with COPY, ADD has two extra features: automatic extraction of local tar files and support for remote URLs. Fetching remote URLs with ADD is discouraged.
For example, this is discouraged:
ADD http://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all
You are advised to use curl or wget instead:
RUN mkdir -p /usr/src/things \
&& curl -SL http://example.com/big.tar.xz \
| tar -xJC /usr/src/things \
&& make -C /usr/src/things all
If you do not need to add a tar file, COPY is recommended.
Reference Documents:
- Best practices for writing Dockerfiles
- Dockerfile Best Practices (part 1)
- Dockerfile Best Practices (2)
8 Container Data Management
Docker manages data in two ways:
- Data volume
- Data volume container
8.1 Data Volumes
A data volume is a specially designated directory in one or more containers that bypasses the Union File System and provides several useful features for persisting and sharing data:
- Data volumes can be shared and reused between containers
- Changes to a data volume take effect directly
- Changes to a data volume are not included when an image is updated
- Data volumes persist until no container uses them
8.1.1 Adding a Data Volume
You can add a data volume using the -v option, or you can mount multiple data volumes for a docker container using the -v option multiple times.
$ sudo docker run --name data -v /data -t -i ubuntu:14.04 /bin/bash
# a /data data volume is created in the container
bash-4.1# ls -ld /data/
drwxr-xr-x 2 root root 4096 Jul 23 06:59 /data/
bash-4.1# df -Th
Filesystem     Type   Size  Used Avail Use% Mounted on
...
               ext4    91G  4.6G   82G   6% /data
The host path corresponding to the created data volume can be obtained with docker inspect:
$ sudo docker inspect data
...
"Volumes": {
    "/data": "/var/lib/docker/vfs/dir/151de401d268226f96d824fdf444e77a4500aed74c495de5980c807a2ffb7ea9"
},
# the host path of the volume is shown here
...
Or query the field directly:
$ sudo docker inspect --format="{% raw %}{{ .Volumes }}{% endraw %}" data
map[/data: /var/lib/docker/vfs/dir/151de401d268226f96d824fdf444e77a4500aed74c495de5980c807a2ffb7ea9]
8.1.2 Mounting the Host Directory as a Data volume
The -v option can create a volume or mount a directory on the current host to a container.
$ sudo docker run --name web -v /source/:/web -t -i ubuntu:14.04 /bin/bash
bash-4.1# ls -ld /web/
drwxr-xr-x 2 root root 4096 Jul 23 06:59 /web/
bash-4.1# df -Th
...
               ext4    91G  4.6G   82G   6% /web
bash-4.1# exit
By default, the mounted volume is readable and writable. You can specify read-only during mounting
$ sudo docker run --rm --name test -v /source/:/test:ro -t -i ubuntu:14.04 /bin/bash
8.2 Creating and Mounting a Data Volume Container
If you have persistent data that you want to share between containers or use on non-persistent containers, the best way to do this is to create a data volume container and then mount the data to it.
Create a data volume container
$ sudo docker run -t -i -d -v /test --name test ubuntu:14.04 echo hello
Use the --volumes-from option to mount the /test volume in another container. Other containers can mount the test container's volume whether or not test is running; a dedicated data volume container does not need to be running at all.
$ sudo docker run -t -i -d --volumes-from test --name test1 ubuntu:14.04 /bin/bash
Add another container
$ sudo docker run -t -i -d --volumes-from test --name test2 ubuntu:14.04 /bin/bash
You can also chain the volume from another container that itself mounts /test:
$ sudo docker run -t -i -d --volumes-from test1 --name test3 ubuntu:14.04 /bin/bash
8.3 Backing up, Restoring, or Migrating a Data Volume
8.3.1 Backup
$ sudo docker run --rm --volumes-from test -v $(pwd):/backup ubuntu:14.04 tar cvf /backup/test.tar /test
tar: Removing leading `/' from member names
/test/
/test/b
/test/d
/test/c
/test/a
This starts a new container that mounts the volume from the test container, mounts the current directory into the container as /backup, and archives the contents of the test volume into /backup/test.tar. Because of --rm, the container is deleted when the command finishes, leaving test.tar in the current directory.
$ ls    # the backup archive is now in the current directory
test.tar
8.3.2 Restore
You can restore the data to the same container or to a different one. Create a new container and extract the backup file into its data volume:
$ sudo docker run -t -i -d -v /test --name test4 ubuntu:14.04 /bin/bash
$ sudo docker run --rm --volumes-from test4 -v $(pwd):/backup ubuntu:14.04 tar xvf /backup/test.tar -C /
# extract the backup archive into the new container's data volume
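The underlying tar round-trip can be sketched without Docker; the paths below stand in for the /test volume and the /backup mount and are purely illustrative:

```shell
# Create some data standing in for the contents of the /test volume
src=$(mktemp -d)
mkdir "$src/test"
echo hello > "$src/test/a"

# Backup: archive the volume directory (what the "tar cvf" container does)
tar cf "$src/test.tar" -C "$src" test

# Restore: extract the archive into a fresh root (what "tar xvf ... -C /" does)
dst=$(mktemp -d)
tar xf "$src/test.tar" -C "$dst"

cat "$dst/test/a"
```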
8.4 Deleting Volumes
A volume is only deleted if:
- the -v option is added when the container is deleted, i.e. docker rm -v
- the --rm option was added when the container was run, i.e. docker run --rm
Otherwise, orphaned volume directories will accumulate in /var/lib/docker/vfs/dir.
Reference Documents:
- Managing Data in Containers
- In-depth understanding of Docker Volume
- In-depth understanding of Docker Volume (2)
9 Link Container
Docker allows multiple containers to be linked together so they can exchange information. A Docker link creates a source-recipient relationship in which the recipient container can see selected information provided by the source container.
9.1 Container Naming
When you create a container, Docker assigns a name automatically if you do not specify one yourself. Naming containers explicitly serves two purposes:
- It is useful to name containers after their role, e.g. naming the container that runs the web application web
- It gives other containers a reference point, e.g. linking the web container to the db container
You can give the container a custom name with the –name option:
$ sudo docker run -d -t -i --name test ubuntu:14.04 bash
$ sudo docker inspect --format="{% raw %}{{ .Name }}{% endraw %}" test
/test
Note: container names must be unique, i.e. only one container can be named test. To reuse a name, you must either delete the old container with docker rm before creating the new one, or add the --rm option when running the container.
9.2 Link Containers
Links allow secure communication between containers and are created using the –link option.
$ sudo docker run -d --name db training/postgres
This creates a container named db from the training/postgres image; next, create a container named web and link it to db:
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
The --link <name>:<alias> option specifies the container to link to and the alias to use for it.
View the link relationship of the Web container:
$ sudo docker inspect -f "{% raw %}{{ .HostConfig.Links }}{% endraw %}" web
[/db:/web/db]
You can see that the web container is linked to the db container as /web/db, which allows the web container to access information about the db container.
What do links between containers actually do? A link allows a source container to provide information to a recipient container. Here the web container is the recipient, gaining access to service information about the source container db. Docker creates a secure tunnel between the containers without exposing any ports externally, which is why we did not need -p or -P when creating the db container. That is the biggest benefit of linking: there is no need to expose the source container, here a PostgreSQL database, to the network.
Docker provides connection information to the receiving container in the following two ways:
- Environment variables
- Updates to the /etc/hosts file
9.2.1 Environment Variables
When two containers are linked, Docker sets some environment variables on the target container to get information about the source container.
First, Docker sets an <alias>_NAME environment variable on the recipient container for each alias given in the --link option. For example, if a container named web is linked to a database container named db via --link db:webdb, an environment variable WEBDB_NAME=/web/webdb is set in the web container.
Using the previous example, Docker also sets the port variable:
$ sudo docker run --rm --name web2 --link db:db training/webapp env
...
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432   # <name>_PORT_<port>_<protocol>; the protocol can be tcp or udp
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
...
Note: these environment variables are only set for the first process in the container. Daemons such as sshd clear the environment when they spawn shells.
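As a sketch, a startup script in the recipient container could pick the connection details out of such a variable with plain shell parameter expansion (the DB_PORT value is hard-coded below for illustration; in a linked container Docker injects it):

```shell
# Hypothetical value of the kind Docker injects for a linked container
DB_PORT='tcp://172.17.0.5:5432'

proto=${DB_PORT%%://*}     # protocol part, e.g. tcp
hostport=${DB_PORT#*://}   # address:port part
host=${hostport%%:*}       # address
port=${hostport##*:}       # port

echo "connect to $host:$port over $proto"
```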
9.2.2 Updating the /etc/hosts file
In addition to environment variables, Docker adds relevant host entries to /etc/hosts on the target container, in this case the Web container.
$ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash
root@aed84ee21bde:/opt/webapp# cat /etc/hosts
172.17.0.7  aed84ee21bde
...
172.17.0.5  db
When the source container is restarted, the IP address in the /etc/hosts file is updated automatically, whereas the IP address stored in the environment variables is not.
10 Building a Private Registry
Docker officially provides docker-registry for building your own private Docker registry.
10.1 Quick Build
Build a Docker registry quickly in two steps:
- Install Docker
- Run the registry container: docker run -p 5000:5000 registry
This method uses the official registry image from the Docker Hub.
10.2 Building Registry without containers
10.2.1 Installing Required Software
$ sudo apt-get install build-essential python-dev libevent-dev python-pip liblzma-dev
10.2.2 Installing docker-registry
sudo pip install docker-registry
Or install manually from the GitHub clone:
$ git clone https://github.com/dotcloud/docker-registry.git
$ cd docker-registry/
$ cp config/config_sample.yml config/config.yml
$ mkdir /data/registry -p
$ pip install .
10.2.3 Running
docker-registry
10.2.4 Advanced Startup (running docker-registry directly is not recommended)
Control with Gunicorn:
gunicorn -c contrib/gunicorn_config.py docker_registry.wsgi:application
Or listen on an external address:
gunicorn --access-logfile - --error-logfile - -k gevent -b 0.0.0.0:5000 -w 4 --max-requests 100 docker_registry.wsgi:application
10.3 Pushing an Image to the Private Registry
$ docker tag ubuntu:12.04 <registry-ip>:5000/ubuntu:12.04
$ docker push <registry-ip>:5000/ubuntu
For more configuration options, read the official documentation:
- Docker-Registry README
- Docker-Registry advanced use