Abstract:

In the narrow sense, Docker is a process; in the broad sense, it is a virtual container, or more precisely an application container. A Docker process is no different from an ordinary process: it is a normal application process.

What is Docker?


Look at the Docker logo: a whale carrying shipping containers. What Docker operates on is indeed a container: statically it is an application image file, dynamically it is a running container. The illustration above makes this clear.

In the narrow sense, Docker is a process; in the broad sense, it is a virtual container, more professionally called an application container. A Docker process is no different from an ordinary process: it is a normal application process, but one used to operate on image files. So a Docker process plus a built application image file equals a Docker container. All references to Docker in this article mean Docker containers.


Before we move on, let’s first clarify some important basic concepts of Docker: images, containers, and registries.


Images Docker images are similar to VM snapshots, but much lighter. If snapshots are unfamiliar, just think of an image as a folder. We can uniquely identify an image by its ID or by an easy-to-read name plus tag. An image ID is a 64-character string, but we usually use the first 12 characters to distinguish images.


Both redis:latest in the red box on the left and 5f515359c7f8 in the red box on the right uniquely identify the same image. Images are generally named like centos:latest, centos:centos7.1.1503, and so on.


Images are layered. There are base images that contain only an operating system, such as the centos image; middleware images on top of those, such as a redis database image; and finally application images, which contain a specific application service. Application images can be very rich and released at any time, and the three layers stack in order.


So when we build an image with Docker, each command forms a new image layer on top of the previous one. As shown below, the base image is a centos image, the middleware images are the two red circles, and the application image is the purple circle. The redis + centos middleware combination can be reused by service A or service B, which makes composition more flexible. Both base and middleware images can be pulled from the Docker Hub public registry.
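As a sketch of this layering (the application paths below are hypothetical), a Dockerfile stacks an application layer on top of a redis-on-centos middleware layer; each instruction produces one new image layer:

```dockerfile
# Base layer: operating system only
FROM centos:7

# Middleware layer: install redis (each RUN creates a new layer)
RUN yum install -y epel-release && yum install -y redis

# Application layer: add the (hypothetical) service code on top
COPY ./app /opt/app
CMD ["/opt/app/start.sh"]
```

Because layers are cached, rebuilding after an application change reuses the base and middleware layers untouched.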

Containers Docker containers are created from images, much like creating a virtual machine from a snapshot, but lighter and faster to start, in seconds. Applications run inside containers. For example, you download an Ubuntu image, install MySQL and a Django application with their dependencies to modify the Ubuntu image, and a complete application image is generated! I share this image with you, you create a container from it, and once the container starts, the Django service is running.


As mentioned above, a container is an isolated, closed box, yet it still needs to provide services to the outside world, so Docker allows specific container ports to be opened. When starting a container, we can map a container port to any port on the host. So if several services all need port 80, each container can expose port 80 internally while being mapped to a different host port; there is no conflict, and no proxy is needed to resolve one. Port mapping is done with the following command.

Start a Docker container:
docker run -d -p 2222:22 --name <container name> <image name>
Here -d runs the container in the background, and -p maps a host port to a container port; for example, 8081:80 means host port 8081 maps to port 80 exposed by the container.


Docker registries Registries are to images what repositories are to code: Docker uses them to store images. There are public and private registries. The public registry Docker Hub provides many image files that can be pulled and run directly, and you can also upload your own images to Docker Hub. You can likewise build a private registry for team project management.

With the basic concepts introduced above, we can roughly string together how Docker's pieces interact, that is, the Docker life cycle.


Look at the picture below. There are three main steps.


1. Developers build the image and push it to a Docker registry

2. Testers or operators pull a copy of the image from the registry to the local machine

3. Start a Docker container from the image file to provide the service
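The three steps above can be sketched with the standard Docker CLI (the image name and registry address are hypothetical, and a running Docker daemon is assumed):

```shell
# 1. Build an image from a Dockerfile and push it to a registry
docker build -t myregistry.example.com/myapp:1.0 .
docker push myregistry.example.com/myapp:1.0

# 2. On the test or production machine, pull the image
docker pull myregistry.example.com/myapp:1.0

# 3. Start a container from the image to provide the service
docker run -d --name myapp -p 8081:80 myregistry.example.com/myapp:1.0
```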


Why Docker? What can it do?

Why Docker? The answer starts from the pain points of the current software industry: (1) software updates, releases, and deployments are inefficient, with cumbersome processes that need manual intervention; (2) environmental consistency is hard to guarantee; (3) migration between different environments is costly. Docker can solve the above problems to a large extent.


First of all, Docker is extremely simple to use. From the development perspective, there are three steps: build, ship, and run. The key step is build, which packages the image file. From a testing and operations perspective, there are only two steps: copy and run. With an image, you can copy it and run it anywhere, independent of platform. At the same time, Docker's container technology isolates an independent running space: containers do not compete with other applications for system resources, and you need not worry about interference between applications, which is a pleasant thought.


Second, because all of a service's system dependencies are handled when the image is built, you can ignore the application's original dependencies and development language when using it. Testing and operations can focus more on the business itself.


Finally, Docker provides a management method of the development environment for developers, ensures the synchronization of the environment with testers, and provides a portable standardized deployment process for operation and maintenance personnel.


So, what Docker can do is summarized as follows:

  • Easy to build, easy to distribute

  • Isolates applications and eliminates dependency conflicts

  • Fast deployment for testing and release


Docker is a process-level lightweight virtual machine. How does it differ from a traditional virtual machine?


Docker is a super-lightweight "virtual machine" that is just a process, which makes it hugely different from a traditional virtual machine such as a VM.


See the difference below:


Let’s first look at the difference between the two. Because a VM needs a hypervisor to virtualize the hardware and carries its own operating system, a virtual machine has a large memory footprint: a guest operating system alone takes several GB, so startup time, resource utilization, and performance all pay a heavy cost. On a local or personal computer this may not matter much, but in the cloud it is a huge waste of resources.


Much of the time, we are forced to consider questions unrelated to the task itself, like an airplane designer worrying about whether the plane can dive underwater. Today's mobile Internet applications rarely touch the operating-system layer; what we mainly care about is the application itself. Yet the top layer of a virtual machine runs the runtime libraries and applications inside a full guest OS, so the whole virtual machine is very large. With the emergence of container technology such as Docker, the guest operating-system layer is dropped: multiple containers are isolated from each other while sharing the host operating system and runtime libraries.


Therefore, compared with a VM, a Docker application container has the following advantages:

  • Fast startup. Starting a container is essentially starting a process, so it takes seconds, while a VM usually takes much longer.

  • High resource utilization: an ordinary PC can run hundreds of containers; try running ten VMs.

  • Low performance overhead: a VM needs extra CPU and memory to run a full guest OS, consuming additional resources.


So many mobile Internet applications and cloud computing back-end nodes can use Docker to replace physical or virtual machines. For example, many of Tencent Map's background services have largely migrated to Docker-based deployment.


What is Docker's architecture? What are the underlying technologies?

After all this, things may still feel foggy. Here is a detailed look at the technical architecture: what underlying technologies deliver the advantages above?

Docker technology Architecture Diagram:


From the perspective of the underlying technologies Docker relies on, Docker cannot run directly on the Windows platform; it supports only Linux, because it depends on three fundamental technologies of the Linux kernel. Namespaces act as the first level of isolation: they give each container its own hostname, IP, and PID space, and ensure that a process running in one container cannot see or affect processes outside the container. Cgroups are the key mechanism for accounting for and limiting the host resources a container uses,


such as CPU, memory, and disk. Union FS mainly supports images: using copy-on-write technology, containers can share common layers while storing only the differing layers separately. Libcontainer is a library that wraps these three technologies.


Docker Engine controls the running of containers and the pulling and pushing of image files.


How to install Docker? How does Docker work?

Before installing Docker, make sure your Linux kernel version is 3.10 or higher and your system is 64-bit.

Run the uname -ir command (kernel release and hardware platform) to check whether the requirements are met.
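For example (output varies by machine; a 3.10+ kernel and a 64-bit architecture are what to look for):

```shell
# Print the kernel release, e.g. 3.10.0-1160.el7.x86_64
uname -r

# Print the machine architecture; x86_64 indicates a 64-bit system
uname -m
```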


Docker installation

Installing Docker via script is very simple.

1. Obtain and run the latest Docker installation script

nicktang@nicktang-virtual-machine:~$ wget -qO- https://get.docker.com/ | sh

After you enter the current user's password, the script downloads and installs Docker and its dependencies.

The installation is complete when the above content is displayed.

2. Start the Docker background service

root@nicktang-virtual-machine:/data # sudo service docker start # start the daemon

root@nicktang-virtual-machine:/data # docker -v

If the version number is displayed, Docker has been installed successfully. Simple! Now we are just one image away; build it yourself or pull one from a public registry.

root@nicktang-virtual-machine:/data # sudo service docker stop # Shutdown daemon

Using Docker

We will cover Docker usage mainly in terms of [create, delete, and query] operations. Why not [update]? Because in my view, once a Docker container has a problem, there is no need to repair it; just replace it. So we only need to master a few basic commands, as follows.


[query] list images: docker images (shows the images on the local machine)


[create] run a container: docker run <image name>, for example docker run centos

Typing this command does three things:

1. Check whether the centos image exists locally; if it does, skip step 2 and run it directly

2. If not, automatically download the image from Docker Hub

3. Load the image into a container and run it





Running docker images then shows the newly added centos image in the local list.





The tag latest indicates the latest version of the centos system image. Since the image did not exist locally, it was pulled from Docker Hub and added to the local list.

[create] pull a specified image file: docker pull <image name>:<TAG>


Pulling directly as above fetches the latest image on Docker Hub, but sometimes we need docker pull to fetch a specific image version. Because pulling image files from the official registry is usually slow, we can use an accelerator to pull them from a domestic mirror registry.


[query] docker ps -a lists all containers, both running and stopped.





The first field is the ID of the started container, and the second is the image the container was created from. However, this command only starts the container briefly: the status shown above is Exited (0), meaning the container has exited. If you want the container to keep running in the background, start a daemon container by adding the -d flag to the start command, i.e. docker run -d centos.


[query] docker inspect <image ID/container ID>, for example docker inspect centos

This command returns a JSON string with the details of an image or container. It holds a lot of information: ID, IP, version, the container's main process, and so on, which we can build on for secondary development. On top of this command we can add a -f parameter to extract exactly the information we need, such as a redis container's IP address, memory information, or CPU usage: docker inspect -f '{{.NetworkSettings.IPAddress}}' [ID/Name]
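A few more -f templates as a sketch (the container name redis-test is hypothetical, and a running Docker daemon is assumed):

```shell
# Container IP address
docker inspect -f '{{.NetworkSettings.IPAddress}}' redis-test

# Memory limit configured for the container (0 means unlimited)
docker inspect -f '{{.HostConfig.Memory}}' redis-test

# Current status (running, exited, ...)
docker inspect -f '{{.State.Status}}' redis-test
```

The -f argument is a Go template evaluated against the same JSON that plain docker inspect prints.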






[create] run a container interactively: docker run -it centos

-it Associates the container terminal with the current terminal. That is, the display of the current terminal switches to that of the container terminal.


Inspecting the container's directory structure, we find it is exactly like a physical machine's, which is why some people also call a Docker container a virtual machine.

Typing exit quits the container terminal.


[delete] remove a container: docker rm <container ID>
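Putting the create/delete/query commands above together as a quick reference (IDs and names are placeholders; a running Docker daemon is assumed):

```shell
docker images                  # [query] list local images
docker pull centos:latest      # [create] pull a specific image
docker run -d centos           # [create] run a daemon container
docker ps -a                   # [query] list all containers, running or stopped
docker inspect <container-id>  # [query] show container details as JSON
docker rm <container-id>       # [delete] remove a stopped container
docker rmi centos:latest       # [delete] remove a local image
```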

How to achieve continuous integration, automatic delivery and automatic deployment with Docker?

We can hardly avoid talking about automation and continuity these days, so let's also look at continuous integration, automated delivery, and automated deployment. Docker does not provide these capabilities itself, but all three automated steps can be built on top of it: Docker is the foundation of these pipelines, just as in software development the source code is the root and the tools are auxiliary. Therefore we also need GitHub + Jenkins + Registry to build a complete automated pipeline.


Continuous integration and automatic deployment work as shown below:

  1. RD pushes code to the Git repository or SVN server; the Git server then notifies Jenkins via a hook.

  2. Jenkins clones the code locally and builds an image from a Dockerfile.

  3. The build produces a new image version, which is pushed to the registry; the current container is deleted and a new one is started from the new image.
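Step 3 might look like the following on the deployment host (the names, tags, and registry address are hypothetical, and a running Docker daemon is assumed):

```shell
# Build and push the new image version (run by Jenkins)
docker build -t registry.example.com/myapp:42 .
docker push registry.example.com/myapp:42

# On the service host: replace the running container with the new version
docker pull registry.example.com/myapp:42
docker rm -f myapp             # delete the current container
docker run -d --name myapp -p 8081:80 registry.example.com/myapp:42
```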


All RD needs to do is type three commands: git add *; git commit -m "..."; git push, and continuous integration, automated delivery, and automated deployment follow. The magic of this pipeline is best demonstrated through real cases.


Docker also makes automatic scaling very convenient. There are two approaches: scaling up a container's resources, and scaling out the number of container nodes. The first is done by modifying the container's configuration; the second simply copies and runs more containers.
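A minimal scale-out sketch (the image name and ports are hypothetical; a running Docker daemon is assumed): run several replicas of the same image on different host ports, then load-balance across them:

```shell
# Three replicas of the same service image on host ports 8081-8083,
# each exposing port 80 internally
docker run -d --name myapp-1 -p 8081:80 myapp:1.0
docker run -d --name myapp-2 -p 8082:80 myapp:1.0
docker run -d --name myapp-3 -p 8083:80 myapp:1.0
```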

Conclusion

Although Docker is ultra-lightweight, it is still not recommended to deploy too many applications on one machine, and deployment should be differentiated. What does that mean? Deploy applications with complementary resource profiles on the same host: mix compute-heavy, memory-heavy, and I/O-heavy services so that their demands on system resources do not coincide.


Author: Tang Wenguang, Tencent engineer, responsible for map test of wireless R&D Department

Original article: https://cloud.tencent.com/community/article/288560?utm_source=csdn_geek

Copyright Notice: The content of this article is contributed by Internet users, the community does not have the ownership, also does not assume the relevant legal responsibility. If you find any content suspected of plagiarism in our community, you are welcome to send an email to [email protected] to report and provide relevant evidence. Once verified, our community will immediately delete the content suspected of infringement.
