This article aims to give readers a rough picture of the Docker system as a whole and some understanding of how Docker operates. To dig deeper into Docker, dig deeper into Linux.
1. Docker
1.1 What is Docker
Docker is an open-source, Linux-based container engine that unifies the API isolated applications use to access the core of the operating system. It tries to solve the age-old developer complaint: "it runs on my machine."
Front-end developers can think of an image as an npm package and a registry as the npm registry; that makes it easier to understand.
1.2 Why Docker is used
Docker is similar to virtual machine technology, but stripped down. A virtual machine takes a long time to boot, and the virtualized hardware it runs on does not map efficiently onto the physical machine. A typical example is mobile development, where starting an emulated system takes a long time.
We often start a virtual machine just to isolate a single application, yet creating a virtual machine ties up a complete set of system resources (a guest OS), which is overkill and costly.
Docker emerged alongside new Linux kernel features: it essentially isolates only the application and shares the kernel of the current system.
The following figure compares the virtual machine and Docker architectures:
The following figure compares the features of containers and VMs:
This is why Docker can start in seconds: it skips kernel initialization and uses the running system's kernel. It has drawbacks too; for example, Docker has no good equivalent of virtual machine live migration.
Docker can be used to quickly build and configure application environments, simplify operations, guarantee a consistent runtime environment ("build once, run anywhere"), provide application-level isolation, and support elastic scaling and rapid expansion.
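You can see the shared kernel for yourself once Docker is installed: a container reports the same kernel version as the host (a quick illustrative check, assuming a running Docker daemon):

```shell
# kernel version on the host
uname -r
# kernel version inside a container: identical, because containers share the host kernel
docker run --rm alpine uname -r
```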
1.3 Basic Concepts of Docker
1.3.1 Image
An image is a special file system that provides the programs, libraries, resources, and configuration files required by the container at runtime, as well as configuration parameters prepared for the runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its contents do not change after the build.
Built on a union file system, an image provides a read-only template for running applications. A single image can provide one function, or multiple images can be layered to build a multi-function service.
1.3.2 Container
Images only define what an isolated application needs in order to run; containers are the processes that run those images. Inside a container there is a complete file system, network, process space, and so on. It is fully isolated from the external environment and will not be disturbed by other applications.
Data written inside a running container is lost when the container is restarted or shut down, since each start creates a new container from the image. To persist data, the container must read and write through a **Volume** or the host's storage.
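A small illustration of that lifecycle (a sketch; the host path /tmp/demo-data is hypothetical, and a running Docker daemon is assumed):

```shell
# no volume: the file exists only inside the container and dies with it
docker run --rm alpine sh -c 'echo hello > /data.txt'

# with a volume: the file lands on the host and survives the container
mkdir -p /tmp/demo-data
docker run --rm -v /tmp/demo-data:/data alpine sh -c 'echo hello > /data/hello.txt'
cat /tmp/demo-data/hello.txt   # -> hello
```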
1.3.3 Repository
A Docker repository is a centralized place for storing image files. Once an image is built, it can run easily on the current host, but to use it on other servers we need a centralized service to store and distribute images. The Docker Registry is that service. "Repository" and "Registry" are sometimes confused and not always strictly distinguished. The repository concept is similar to Git's, and a registry server can be thought of as a hosting service like GitHub. In fact, a Docker Registry can contain multiple repositories; each repository can contain multiple tags, and each tag corresponds to one image. So an image repository is where Docker centrally stores image files, much like the code repositories we are used to.
Typically, a repository contains images of different versions of the same software, and tags identify those versions. The format < repository >:< tag > specifies which version of the software an image holds. If no tag is given, latest is used as the default.
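The tag-defaulting rule can be sketched as a toy shell helper (resolve_ref is a hypothetical name for illustration only, not a Docker command):

```shell
# resolve_ref: mirror Docker's tag-defaulting rule --
# an image reference without an explicit tag gets ":latest" appended.
resolve_ref() {
  case "$1" in
    *:*) echo "$1" ;;          # tag already present, keep it
    *)   echo "$1:latest" ;;   # no tag given -> default to latest
  esac
}

resolve_ref nginx        # -> nginx:latest
resolve_ref nginx:1.25   # -> nginx:1.25
```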
Repositories come in two forms:
- public (public repository)
- private (private repository)
1.3.4 Docker client
The Docker client is a general name for anything that sends requests to a given Docker Engine and performs the corresponding container-management operations. It can be the Docker command-line tool or any client that follows the Docker API. The community maintains a rich set of Docker clients in common languages, including C# (with Windows support), Java, Go, Ruby, and JavaScript, and even a web UI client written with Angular. This is enough to meet the needs of most users.
1.3.5 Docker Engine
Docker Engine is the core background process of Docker. It is responsible for responding to requests from a Docker client and translating them into the system calls that perform container-management operations. The process starts an API server in the background that receives requests sent by Docker clients; each request is dispatched through a router inside Docker Engine to the specific function that executes it.
2. Docker in Practice
2.1 Installing Docker
All examples in this article run on CentOS 7.
First, remove any older versions of Docker.
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
If it’s a new environment, you can skip this step.
Because of network conditions in China, installing docker-ce from the official repository will most likely fail, so we use a domestic mirror to speed up the installation. Here we install via the Aliyun mirror.
# Step 1: install the required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the software source
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: update the package cache and install docker-ce
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: start Docker
sudo service docker start
After the installation is complete, run docker version to check whether it succeeded.
➜ ~ docker version
Client: Docker Engine - Community
 Version:           19.03.3
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        a872fc2f86
 Built:             Tue Oct  8 00:58:10 2019
 OS/Arch:           linux/amd64
 Experimental:      false
Server: Docker Engine - Community
 Engine:
  Version:          19.03.3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       a872fc2f86
  Built:            Tue Oct  8 00:56:46 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
2.2 Pulling an Image
Now we need to pull an Nginx image and deploy an Nginx application.
➜ ~ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
68ced04f60ab: Pull complete
28252775b295: Pull complete
a616aa3b0bf2: Pull complete
Digest: sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
After the pull completes, use docker image ls to view the list of local Docker images.
➜ ~ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 6678c7c2e56c 13 hours ago 127MB
Re-running the same command docker pull nginx updates the local image.
2.3 Running a Docker Container
Create a shell script file and write the following:
#!/bin/bash
# --restart=always : restart policy after the container stops
#     on-failure: restart only when the container exits with a non-zero code
#     always:     always restart when the container exits
# -d               : run in the background; without -d, the container stops when you leave this command line
# -p 8080:80       : bind host port 8080 to container port 80
# --expose=80      : expose container port 80, overriding the image's exposed ports
# -v               : map a host directory into the container
# --name           : name the container for later management; the links feature needs a named container
docker run \
  --restart=always \
  -d \
  -p 8080:80 \
  --expose=80 \
  -v /wwwroot:/usr/share/nginx/html \
  --name=testdocker \
  nginx:latest
We need to be clear that a Docker container's network is isolated from the host. Unless the container's network mode is set to use the host's, the container cannot be reached directly.
Now run the script, open a browser, and visit http://ip:8080 to see the application served by the Nginx image.
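You can also check the binding from the host's command line (illustrative; assumes the container started above is still running):

```shell
# confirm the port mapping Docker set up for the container
docker port testdocker

# fetch the default page through the bound host port
curl -s http://localhost:8080 | head -n 4
```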
2.3.1 Command Parameters (Brief)
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
-d, --detach=false         Run the container in the background instead of the foreground (default: false)
-t, --tty=false            Allocate a TTY device so terminal login is supported
-u, --user=""              User to run as inside the container
-a, --attach=[]            Attach to a container (one started with docker run -d)
-w, --workdir=""           Working directory inside the container
-c, --cpu-shares=0         CPU weight of the container, used in CPU-sharing scenarios
-e, --env=[]               Set environment variables
-m, --memory=""            Memory limit for the container
-P, --publish-all=false    Publish all exposed container ports to random host ports
-p, --publish=[]           Publish a container port to the host
-v, --volume=[]            Mount a storage volume to a directory in the container
--volumes-from=[]          Mount the volumes of other containers into a directory of this one
--cap-add=[]               Add capabilities; the list is at http://linux.die.net/man/7/capabilities
--cap-drop=[]              Drop capabilities; the list is at http://linux.die.net/man/7/capabilities
--cidfile=""               After the container starts, write its ID to the given file (a typical monitoring-system usage)
--cpuset=""                CPUs the container may use, e.g. --cpuset="0-2"
--device=[]                Add a host device to the container (equivalent to device passthrough)
--dns=[]                   DNS servers for the container
--dns-search=[]            DNS search domains for the container
--entrypoint=""            Override the image's ENTRYPOINT
--env-file=[]              Read environment variables from a file
--expose=[]                Expose additional container ports, i.e. modify the image's exposed ports
--link=[]                  Link to another container, using its IP address and environment variables
--lxc-conf=[]              LXC options, usable only with --exec-driver=lxc
--name=""                  Container name, for managing the container by name later
--net="bridge"             Container network mode:
                             bridge               use the Docker bridge and port mapping
                             container:<name|id>  share another container's network resources, such as IP and ports
                             none                 the container gets its own network stack, left unconfigured
                             host                 use the host's network stack
--restart="no"             Restart policy after the container stops:
                             on-failure  restart when the container exits with a non-zero code
                             always      always restart on exit
--rm=false                 Remove the container automatically when it stops (not usable with docker run -d)
--sig-proxy=true           Proxy received signals to the process; SIGCHLD, SIGSTOP, and SIGKILL cannot be proxied
2.4 Access to containers
We can use docker exec -it [docker container id] /bin/bash to enter a running container.
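docker exec can also run a single command without opening a shell, which is handy for quick checks (illustrative; testdocker is the container started earlier):

```shell
# run one command inside the container and return
docker exec testdocker nginx -v

# list the mounted html directory without attaching a shell
docker exec testdocker ls /usr/share/nginx/html
```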
There are two ways to exit the container:
- Type exit at the command line
- Use the shortcut Ctrl+P followed by Ctrl+Q
Either way, you can exit the container and keep it running in the background.
2.5 Customizing an Image: Dockerfile
Dockerfile is divided into four parts: basic image information, maintainer information, image operation instructions and container startup instructions.
Here is a simple Dockerfile that launches a Node development environment.
# base image
FROM node
# working directory inside the container
WORKDIR /app
# install dependencies
RUN npm install --registry=https://registry.npm.taobao.org
# expose ports 8080, 8001, and 8800; ports can also be exposed at docker run time
EXPOSE 8080
EXPOSE 8001
EXPOSE 8800
# default command; skipped if you start the container with e.g. docker run -it ... /bin/bash
CMD ["npm","run","dev-server"]
Save the file and exit, then run docker build -t nodeapp:v1.0 . — note the trailing dot, which means the build context is the current directory.
When the build finishes, use docker image ls to check whether the image was created.
At this point some readers may wonder: does npm install have to run on every build?
In fact, if your Node application's dependencies do not change and the image is built specifically for this application, you can consider copying node_modules into the image with the ADD command. (This is rarely done in practice, since a mounted host directory would shadow it; it only demonstrates adding files to an image.)
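A sketch of that variant (file and directory names are illustrative; it assumes node_modules was already installed on the build machine):

```dockerfile
FROM node
WORKDIR /app
# copy pre-installed dependencies straight into the image instead of running npm install
ADD node_modules /app/node_modules
ADD package.json /app/package.json
ADD src /app/src
CMD ["npm","run","dev-server"]
```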
2.6 Multi-Container Startup: docker-compose
docker-compose must be installed separately.
Let's imagine a scenario: starting a front-end project. You need Nginx to serve the front end and a database to record data, and the whole application must hold together as one unit. That's where docker-compose comes in.
docker-compose has two core concepts:
- Service (service): an application container; in practice several instances of the same image may run.
- Project (project): a complete business unit made up of a group of associated application containers.
Go back to the directory where we created the Dockerfile, write a docker-compose.yml file, and configure multiple containers.
version: '3'
services:
web:
build: .
ports:
- "8080:80"
volumes:
- /wwwroot:/usr/share/nginx/html
redis:
image: "redis:alpine"
Then run docker-compose up to start all the containers.
Visit http://ip:8080 to see the same page as before.
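A few everyday docker-compose commands worth knowing (illustrative; run from the directory containing docker-compose.yml):

```shell
docker-compose up -d      # start all services in the background
docker-compose ps         # list the project's containers
docker-compose logs web   # show one service's output
docker-compose down       # stop and remove the project's containers and networks
```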
2.7 Networking
Because of this isolation, the external network cannot reach a Docker container on the host directly, which is why we bind host ports to container ports.
Section 2.3 showed how to bind ports to expose a container; here let's look at container-to-container networking.
# create a user-defined bridge network
$ docker network create -d bridge my-net
# start two containers attached to it
$ docker run -it --rm --name busybox1 --network my-net busybox sh
$ docker run -it --rm --name busybox2 --network my-net busybox sh
# inside busybox1, containers on the same network resolve each other by name
/ # ping busybox2
PING busybox2 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.072 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.118 ms
3. Expand your knowledge
3.1 Docker principle
Docker is written in the Go language and uses a series of features provided by the Linux kernel to achieve its function.
A system that can execute Docker has two main parts:
- Core components of Linux
- Docker-related components
The core Linux module functions used by Docker include the following:
- Cgroup – Used to allocate hardware resources
- Namespace – Separates the execution space of different Containers
- AUFS(chroot) – Used to create file systems for different containers
- SELinux – Used to secure the Container network
- Netlink – Used for communication between processes in different containers
- Netfilter – Filters packets on container ports, forming a network firewall
- AppArmor – Protects the networking and execution security of containers
- Linux Bridge – Enables different Containers or containers on different hosts to communicate
3.2 How Docker Runs on Mac and Windows
A virtual machine runs Linux, Docker Engine runs inside that Linux, and the Docker client runs on the host machine.
3.3 Why Docker Should Not Be Started with CMD [‘node’]
Front-end developers, pay attention here.
With CMD [‘node’,’app.js’] as the default startup command: "Node.js was not designed to run as PID 1 which leads to unexpected behaviour when running inside of Docker." The image below is from github.com/nodejs/dock… .
This touches on a Linux mechanism. Simply put, the Linux process with PID 1 is the system init daemon: it adopts all orphaned processes and sends them shutdown signals at the appropriate time.
Inside the container, however, PID 1 is node, and Node does not reap orphaned processes. So if your application behaves like a crawler that spawns processes which get reparented to PID 1 after they finish, the container will slowly go BOOM.
Solutions:
1. Start through a shell, e.g. CMD [‘/bin/bash’,’-c’,’node app.js’], so the shell sits at PID 1.
2. Append --init to docker run. Docker then injects an init process as PID 1 that reaps all orphaned processes.
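Option 2 can be seen directly by printing the process ID (a sketch; requires a Docker daemon and the node:alpine image):

```shell
# without --init: the node process itself is PID 1
docker run --rm node:alpine node -e 'console.log(process.pid)'
# -> 1

# with --init: docker-init takes PID 1 and node runs under it,
# so orphaned child processes get reaped
docker run --rm --init node:alpine node -e 'console.log(process.pid)'
```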