Docker is a container engine based on lightweight virtualization technology. The project is written in Go and released under the Apache 2.0 license. Docker packages an application into a container that holds everything the application needs: its code, runtime environment, dependency libraries, configuration files, and other required resources. Containers make deployment convenient, fast, automated, and decoupled from the platform: no matter what the deployment environment is, the application inside a container always runs in the same environment.
For example, suppose Xiao Ming wrote a CMS system with a broad technology stack that relies on a variety of open source libraries and middleware. If deployment were purely manual, Xiao Ming would have to install each piece of open source software and write its configuration files by hand. That is acceptable for a one-off deployment, but if he has to move his application to a new server every few days, the tedious rework quickly becomes maddening. This is where Docker comes into play: Xiao Ming only needs to write a Dockerfile describing the deployment steps of his application (handing installation, configuration, and other operations over to Docker for automatic processing), then build and publish his image. After that, on any machine, he simply pulls the image he needs and can deploy and run it directly. That is the charm of Docker.
So what is an image? The image is a core concept in Docker:
Image: Similar to the images used in virtual machines. Since any application needs its own runtime environment, an Image is a template that provides the runtime environment the application needs.
Container: A Container is an abstraction layer provided by Docker that acts like a lightweight sandbox containing a minimal Linux environment and the application running inside it. A Container is a running instance of an Image. The Image itself is read-only: when a Container is started, Docker creates a writable layer on top of the Image, so any changes made in the Container do not affect the Image. If you want to store the changes made in a Container, Docker's strategy is to generate a new Image layer based on that Container. The Docker engine uses Containers to run and isolate each application, so the applications in different Containers are independent of each other.
In fact, the idea behind Docker can be understood from the original meaning of the English words "docker" and "container". A container, in shipping, is a standardized unit for packing goods that can easily be loaded and unloaded by mechanical equipment; its invention mechanized logistics and established a standardized transportation system. A docker is a dockworker. Docker can be seen as a worker on the dock, packing applications into "containers" with standardized specifications (strictly speaking, the shipping container corresponds to the Image; Docker's Container is more like a running sandbox). When the goods arrive at their destination, the dockworker unpacks the container and takes the goods out (creates a Container from the Image and runs it).
This standardization and isolation make it easy to combine multiple images to build your application environment (Docker also advocates that each Image have a single responsibility, that is, do one thing well), or to share your images with others.
This article was written by SylvanasSun ([email protected]) and first appeared on SylvanasSun's Blog. The original link: sylvanassun.github.io/2017/11/19/… (Please be sure to retain this statement and keep the hyperlink.)

Docker vs. Virtual Machines

As mentioned above, Docker is based on lightweight virtualization technology, so it differs from the virtual machines we usually use. Virtual machine technologies can be divided into the following two categories:
System virtual machine: provides a substitute for a real computer by simulating a complete computer system in software. It is an abstraction of the physical hardware and provides the functionality needed to run a full operating system. A hypervisor uses the physical machine to manage and share hardware, so that multiple virtual machine environments are isolated from each other; one machine can run multiple VMs, and each VM contains a full copy of an operating system. Any software or operation run inside a system VM affects only that VM's environment. VMware, which we often use, is an implementation of the system virtual machine.
Application virtual machine: allows applications to run independently of the platform. A typical example is the JVM. Java decouples Java programs from the operating system and hardware platform through the JVM's abstraction layer (every Java program runs inside a JVM), thus implementing the so-called "compile once, run everywhere".
The technology used in Docker is different from either of these. It uses a lighter-weight form of virtualization: multiple Containers share the same operating system kernel and run as if they were native processes. Compared with a virtual machine, container technology is an application-level abstraction that packages code and dependencies together. Multiple Containers can run on the same machine (which also means a single virtual machine can host multiple Containers), sharing the operating system kernel with one another, and each Container runs as an isolated process in user space. This makes Containers far more flexible and lightweight than virtual machines (in practice, the two are often used together).
Container technology is nothing new. It can be traced back to the UNIX chroot (introduced in V7 UNIX in 1979), which changes the root directory of a currently running process and its subdirectories. Programs running in this modified environment cannot access files outside the specified directory tree. This limits the user’s scope of activity and provides isolation for the process.
After that, many container technologies emerged in various Unix variants. In 2006, Google proposed "Process Containers", hoping to add process-level resource isolation features to the Linux kernel. Because the term "container" was already broad and ambiguous in the Linux kernel, the project was later renamed cgroups (control groups), which implements resource limits on processes.
In 2008, LXC (Linux Containers) was released. It is an operating-system-level virtualization method for running multiple isolated programs (containers) on a Linux system that share a single kernel. It is LXC's combination of cgroups and the Linux kernel's support for separate namespaces that provides an isolated environment for applications. Docker was originally based on LXC (Docker began as an internal project at dotCloud, a PaaS company) and made many improvements on top of it.
Before you can use Docker, you need to install it. The installation method differs by platform; refer to Install Docker | Docker Documentation or search for instructions for your system.
After the installation is complete, type docker --version to confirm that the installation was successful.
$ docker --version
Docker version 17.05.0-ce-rc1, build 2878a85
From Docker Hub, we can pull images published by others. We can also register an account to publish our own images and share them with others.
[root@Jack ~]# docker search redis # check whether the redis image exists
[root@Jack ~]# docker pull redis # pull redis image to this machine
Using default tag: latest
Trying to pull repository docker.io/library/redis …
latest: Pulling from docker.io/library/redis
Digest: sha256:cd277716dbff2c0211c8366687d275d2b53112fecbf9d6c86e9853edb0900956
[root@Jack ~]# docker images # list local images
REPOSITORY          TAG           IMAGE ID       CREATED       SIZE
docker.io/python    3.6-onbuild   7195f9298ffb   2 weeks ago   691.1 MB
docker.io/mongo     latest        d22888af0ce0   2 weeks ago   360.9 MB
docker.io/redis     latest        8f2e175b3bd1   2 weeks ago   106.6 MB
With the Image, you can then run a Container on top of it, using the following command.
[root@Jack ~]# docker run -d -p 6379:6379 redis # -d runs in the background, -p maps host port 6379 to container port 6379
[root@Jack ~]# docker ps -a # list container information; without -a, only running containers are shown
If you want to enter the container, you need to execute the following command
[root@Jack ~]# docker ps # list running containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f928073b7eb   redis   "docker-entrypoint.sh"   45 seconds ago   Up 44 seconds   0.0.0.0:6379->6379/tcp   desperate_khorana
[root@Jack ~]# docker exec -it 1f928073b7eb /bin/bash # enter the container
root@1f928073b7eb:/data# touch hello_docker.txt # Create a file in the container
# exit # exit the container
exit
[root@Jack ~]#
You can also attach an interactive terminal directly at startup with the following command
[root@Jack ~]# docker run -dit -p 6379:6379 redis /bin/bash
We made a change to the Container. If you want to keep the change, you can use the commit command to generate a new Image.
# -m is the commit message, -a is the author, and 1f9 is the (abbreviated) ID of the container to save
# sylvanassun/redis is the image name; :test is a tag, commonly used to identify versions
[root@Jack ~]# docker commit -m “test” -a “SylvanasSun” 1f9 sylvanassun/redis:test
sha256:e7073e8e5bd70b8d58092fd6bd8c2551e65dd29241c235eddf2a7f4b4b25cbbd
[root@Jack ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
sylvanassun/redis   test          e7073e8e5bd7   2 seconds ago   106.6 MB
docker.io/python    3.6-onbuild   7195f9298ffb   2 weeks ago     691.1 MB
docker.io/mongo     latest        d22888af0ce0   2 weeks ago     360.9 MB
docker.io/redis     latest        8f2e175b3bd1   2 weeks ago     106.6 MB
Deleting a container or an image is easy, but before you can delete an image you must first delete the containers that depend on it.
[root@Jack ~]# docker stop 1f9 # stop the running container
1f9
[root@Jack ~]# docker rm 1f9 # delete container
1f9
[root@Jack ~]# docker rmi e70 # delete the image saved above
Untagged: sylvanassun/redis:test
Deleted: sha256:e7073e8e5bd70b8d58092fd6bd8c2551e65dd29241c235eddf2a7f4b4b25cbbd
Deleted: sha256:751db4a870e5f703082b31c1614a19c86e0c967334a61f5d22b2511072aef56d
If you want to build an image yourself, you need to write a Dockerfile, which describes the image's dependency environment and how to configure your application environment.
# Use python:2.7-slim as the parent image
FROM python:2.7-slim

# Switch the working directory to /app (similar to the cd command)
WORKDIR /app

# Copy the contents of the current directory (.) into the image's /app directory
ADD . /app

# RUN executes a shell command at build time; here it installs the Python
# application's dependencies listed in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Expose port 80 for outside access
EXPOSE 80

# Define an environment variable
ENV NAME World

# The command executed when the container starts; unlike RUN, it runs
# when the container is started rather than when the image is built
CMD ["python", "app.py"]
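The Dockerfile above expects an app.py and a requirements.txt in the build directory, but the article does not show them. As a hedged sketch (the file contents and names below are my assumptions, not from the original), here is a minimal app.py using only the standard library, so requirements.txt can stay empty. It is written for Python 3, so you would swap the base image for python:3-slim; under the python:2.7 base image the server classes live in BaseHTTPServer instead.

```python
# app.py - hypothetical sketch of the application the Dockerfile packages.
# It reads the NAME environment variable set by `ENV NAME World` and
# serves a greeting over HTTP.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


def build_greeting(name=None):
    # Fall back to the NAME default baked in by the Dockerfile's ENV line.
    return "Hello, {}!".format(name or os.environ.get("NAME", "World"))


class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = build_greeting().encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def main():
    # In the container, CMD ["python", "app.py"] would invoke this,
    # binding port 80 to match the EXPOSE 80 line above.
    HTTPServer(("", 80), GreetingHandler).serve_forever()
```

After building the image, something like docker run -d -p 4000:80 with your image name would map the container's port 80 to host port 4000, and a request to localhost:4000 would return the greeting.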
Then run docker build -t xxx/xxxx . to build the image: -t specifies the image name (and optional tag), and the trailing . tells Docker to look for the Dockerfile in the current directory.
Now that you've learned how to build your own image, would you like to share it with others on Docker Hub? To do this, first register a Docker Hub account, log in with the docker login command, and then run docker push with your image name, much like pushing with Git.
For more about Docker commands and usage, please refer to the Docker Documentation. I also recommend Docker Compose, which makes it easy to combine and manage multiple images.
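As a sketch of how Compose combines images, a docker-compose.yml for a setup like Xiao Ming's CMS might pair an application image with the redis image used earlier. The service names and the sylvanassun/cms image name here are hypothetical, chosen only for illustration:

```yaml
# docker-compose.yml - hypothetical sketch, not from the original article
version: "3"
services:
  web:
    # an application image built earlier with `docker build -t sylvanassun/cms .`
    image: sylvanassun/cms
    ports:
      - "80:80"
    depends_on:
      - redis
  redis:
    image: redis
    ports:
      - "6379:6379"
```

A single docker-compose up -d then starts both containers together, and docker-compose down stops and removes them.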
Docker provides a powerful automated deployment model and great flexibility, decoupling applications from each other and bringing agility, controllability, and portability to development. At the same time, Docker is helping more and more enterprises move to the cloud, migrate to microservices, and adopt the DevOps model.
With microservices and DevOps on the rise, why would you reject Docker? Let’s choose to embrace Docker and embrace the future!