This article was adapted from: LeByte

Docker


How should we as programmers understand Docker?

The origin of container technology

Suppose your company is secretly developing a "Today's Headlines" style app; let's call it "Tomorrow's Headlines". A programmer builds an environment from scratch and writes the code start to finish. Once the code is done, he hands it to the test engineers, who then build their own environment from scratch. Whatever problems show up during testing are not the programmer's concern; he can put on an innocent face and protest, "But it clearly runs in my environment."

After round after round of testing, the code is finally ready to go live, so now the ops engineers build the environment from scratch yet again. After painstakingly getting everything set up, they start the release, and, sure enough, the system crashes on launch. A programmer with strong nerves can deliver the line one more time: "But it runs fine in my environment."

Looking at the whole process, not only did we build the same environment three times over, we also forced programmers to moonlight as actors and waste their dramatic talent. This is a classic waste of time and efficiency. Smart programmers are never satisfied with the status quo, so it was time for programmers to change the world again, and container technology was born.

Some readers may say: "Wait, no need to change the world, we already have virtual machines. VMware works beautifully: build the environment once in a VM, then clone it for the testers and for ops. Done, right?"

Before container technology came along, this was indeed a good idea, just not good enough.

First of all, virtual machine technology is the foundation of cloud computing. After a cloud vendor buys a pile of hardware and builds a data center, it uses virtualization to slice up the hardware resources, for example into 100 virtual machines that can then be sold to many users.

You might be thinking why is this a bad idea?

Container technology vs. Virtual machines

We know that an operating system is a heavy and clunky program compared to a simple application. How clunky is it?

An operating system needs a lot of resources just to run, as anyone who has installed one knows well: a freshly installed system with nothing deployed yet already occupies tens of gigabytes of disk, and it takes several gigabytes of memory just to boot.

Suppose I have a machine with 16 GB of memory and need to deploy three applications. With virtual machine technology, it might be carved up as follows:

Each VM hosts one application; VM1 itself occupies 2 GB of memory, VM2 occupies 1 GB, and VM3 occupies 4 GB.

We can see that the virtual machines themselves take up 2 + 1 + 4 = 7 GB of memory, leaving no way to carve out more virtual machines to deploy more applications. Yet what we want to deploy is applications, not operating systems.

Wouldn't it be nice if there were a technology that let us avoid wasting memory on "useless" operating systems? That is problem number one: the operating system is simply too heavy.

There is another problem: boot time. We know that restarting an operating system is very slow, because it has to check everything and load everything it needs from start to finish, a process that can take several minutes. So the operating system is also too sluggish.

So is there a technology that gives us all the benefits of virtual machines while overcoming all these drawbacks, letting us have the best of both worlds?

The answer is yes: container technology.

What is a container

The shipping container is one of the most remarkable inventions in the history of commerce; it greatly reduced the cost of transporting goods by sea. Let's look at the benefits of shipping containers:

Containers are isolated from each other

Long-term reuse

Fast loading and unloading

Standardized dimensions, so they can be placed in any port and on any ship

Back in the software world, containers are conceptually very similar to shipping containers.

One of the goals of modern software development is isolation: applications should run independently of each other without interfering. This isolation is not easy to achieve, and one existing solution is virtual machine technology, which isolates applications by giving each its own operating system.

But virtual machine technology has all the disadvantages mentioned above. What about container technology?

Unlike virtual machines, which achieve isolation through separate operating systems, container technology isolates only the application's runtime environment, that is, the various libraries and configuration the program depends on, while the containers themselves share one operating system.

Compared with virtual machines, containers are more lightweight and occupy far fewer resources: where an operating system commonly needs several gigabytes of memory, a container needs only a few megabytes. So on hardware of the same specification we can deploy many more containers than virtual machines, which VMs simply cannot match. And unlike an operating system's boot time of several minutes, a container starts almost instantly. Container technology provides a much more efficient way to package a service stack. So cool.

So how do we use containers? Which brings us to Docker.

Note that containers are a generic technology; Docker is just one implementation of it.

What is Docker

Docker is an open source project implemented in Go that makes it easy for us to create and use containers. Docker packages a program together with all of its dependencies into a Docker container, so that the program behaves consistently in any environment. Here, the program plus its runtime dependencies is the container, just like a shipping container, and the operating system the container runs on is like the ship or the port: the behavior of the program depends only on the container, not on which ship or port (operating system) the container happens to be placed on.

So Docker can shield us from environmental differences: as long as your program is packaged into Docker, it behaves the same no matter what environment it runs in. Programmers no longer get to show off their acting talent, and there will be no more "it runs in my environment". We truly achieve "Build once, run everywhere".

Another advantage of Docker is rapid deployment, which is one of the most common application scenarios at Internet companies today. One reason is that containers start extremely fast; another is that as long as the program in one container runs correctly, you can be sure it will run correctly in however many containers you deploy in production.

How do I use Docker

There are several concepts in Docker:

  1. dockerfile
  2. image
  3. container

In fact, you can simply think of an image as an executable program, and a container as a running process.

Carrying the analogy on: just as you need source code to build a program, you need a Dockerfile to build an image. The Dockerfile is the source code of the image, and Docker is the "compiler".

So we only need to specify in the Dockerfile which programs we need and which dependencies and configuration they rely on, then hand the Dockerfile to the "compiler" Docker to "compile" it; that is the docker build command. The generated "executable" is the image. We can then run this image with the docker run command, and once the image is running it becomes a Docker container.
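To make this concrete, here is a minimal sketch of the whole flow. The base image, the app.jar file name, and the image tag my-app are made-up example values, not anything Docker prescribes:

```dockerfile
# Dockerfile: the "source code" of the image.
# The base image and app.jar are example values for this sketch.
FROM eclipse-temurin:17-jre
# Copy our program into the image.
COPY app.jar /app/app.jar
# The command to execute when a container starts from this image.
CMD ["java", "-jar", "/app/app.jar"]
```

```bash
# "Compile" the Dockerfile in the current directory into an image:
docker build -t my-app .
# Run the image; the running instance is a container:
docker run my-app
```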

Beyond this small sketch, the detailed usage will not be covered here; the official Docker documentation explains it thoroughly.

How does Docker work

In fact, Docker uses the common client-server (C/S) architecture. The Docker client is responsible for handling the various commands the user types, such as docker build and docker run, while the real work is done by the server, the Docker daemon. It is worth noting that the Docker client and the Docker daemon can run on the same machine.
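You can see this client-server split from the command line: docker version prints a Client section (the CLI you invoked) and a Server section (the Docker daemon it talked to).

```bash
# Prints both the Client (CLI) and Server (Docker daemon) versions,
# showing that two separate components are involved.
docker version
```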

Let’s take a look at how Docker works with a few commands:

1. docker build

We use this command after writing a Dockerfile and handing it to Docker to "compile". The client receives the request and forwards it to the Docker daemon, which then builds the "executable" image from the Dockerfile.

2. docker run

Once we have the "executable" image, we can run the program; this is the docker run command. The Docker daemon receives the command, finds the specified image, loads it into memory, and starts executing it.
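In practice docker run takes flags that control how the container runs. The flags below are real docker run options; the my-app image name and the port carry over from the earlier sketch as example values:

```bash
# Start the container in the background (-d), give it a name,
# and map host port 8080 to container port 8080.
docker run -d --name my-app-instance -p 8080:8080 my-app
# List the running "processes" (containers) and read our program's output.
docker ps
docker logs my-app-instance
```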

3. docker pull

In fact, docker build and docker run are the two core commands; anyone who can use these two can basically use Docker. Everything else is supplementary.

So what does docker pull mean?

As we said before, an image in Docker is like an "executable program". Where do we go to download applications written by other people? Simple: an app store. Likewise, since an image is also an "executable program", is there a "Docker image store"? The answer is yes; it is Docker Hub, Docker's official "app store", where you can download images written by others instead of writing your own Dockerfiles.

A Docker registry is used to store all kinds of images; the public registry from which anyone can download images is Docker Hub. The way to download an image from Docker Hub is the docker pull command.

The implementation of this command is also very simple: the user issues the command through the Docker client, the Docker daemon receives it and sends an image download request to the Docker registry, and once the download completes the image is stored locally for us to use.
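For example, using nginx purely as a well-known public image:

```bash
# Ask the daemon to download the nginx image from Docker Hub...
docker pull nginx
# ...then confirm it is now stored locally.
docker images
```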

Finally, let’s look at the underlying implementation of Docker.

The underlying implementation of Docker

Docker is built on several capabilities provided by the Linux kernel:

Namespaces

We know that resources such as PIDs, IPC, and the network are global in Linux, while the namespace mechanism is a resource isolation scheme: under it these resources are no longer global but belong to a specific namespace, and the resources in different namespaces do not interfere with one another. This makes each namespace look like an independent operating system. But namespaces alone are not enough.
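You can get a feel for namespaces even without Docker. The sketch below uses the unshare tool from util-linux and assumes a Linux machine with root privileges:

```bash
# Start a shell in new PID and mount namespaces.
sudo unshare --pid --mount --fork /bin/bash
# Inside, remount /proc so process-listing tools see the new PID namespace.
mount -t proc proc /proc
# Only processes in this namespace are visible; bash appears as PID 1.
ps aux
```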

Control groups

Although namespaces achieve resource isolation, processes can still consume system resources such as CPU, memory, disk, and network without restraint. To control how processes inside a container access resources, Docker adopts control groups (also known as cgroups). With cgroups you can limit the amount of system resources the processes in a container consume: how much memory a container may use, which CPUs it may run on, and so on.
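In everyday use you reach cgroups through docker run's resource flags. The flags below are real; the limits and the nginx image are example values:

```bash
# Cap the container at 512 MB of memory and pin it to CPUs 0 and 1.
docker run -d --memory=512m --cpuset-cpus=0,1 nginx
# Observe the per-container CPU/memory usage that cgroups account for.
docker stats --no-stream
```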

With these two technologies, the container really does look like a separate operating system.

Conclusion

Docker is a very popular technology at the moment, used in production by many companies. However, the underlying technologies Docker relies on actually emerged long ago; revitalized today in the form of Docker, they solve the problems described above very well.
