As programmers, let’s recall the familiar software development tasks we do every day:

Set up a development environment locally, write code, run unit tests, deploy the code to a test system, test repeatedly, and finally deploy to the production system.

Inevitably, we run into situations where the same code stops working once the environment changes.

These environment changes fall along several dimensions:

The code moves from a programmer’s laptop to a test server, or from a physical server to a public or private cloud; the runtime the code depends on changes, for example Python 2.7 in development but Python 3 in production; or the operating system the code runs on changes, for example Ubuntu in development and Red Hat in production.

Beyond developing the application itself, programmers have to spend extra energy on these environment and infrastructure issues, which can be a real headache.

As an application developer, I’m not interested in these underlying environment issues. Is there a way to take them off my mind? Yes: container technology.

What is a container? A useful analogy is the shipping container in the physical world: a standardized box that can hold any kind of cargo and be handled by any ship, train, or truck, regardless of what is inside.



Simply put, a container contains the complete runtime environment: all the dependencies, libraries, other binaries, configuration files, and so on that the application requires are bundled, together with the application itself, into a package called a container image. By containerizing the application along with its dependencies, the differences between operating system distributions and other underlying environments are abstracted away.
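
As a concrete sketch, this is roughly what such a package description looks like in Docker, the most widely used container runtime; the file names app.py and requirements.txt here are hypothetical placeholders for an application and its dependency list:

    # Pin the exact runtime version so development and production match
    FROM python:3.9-slim

    WORKDIR /app

    # Bundle the application's dependencies into the image
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Bundle the application code itself
    COPY app.py .

    # The command the container runs when it starts
    CMD ["python", "app.py"]

Running docker build -t my-app:1.0 . against this Dockerfile produces a self-contained image: the Python version, the libraries, and the code travel together, so the version mismatches described above simply cannot occur.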

Why do we use containers? Well, let’s see what good they do.

Portability: Since a container encapsulates everything the application needs to run, such as its dependencies and the operating-system layer, moving an image from one environment to another is far more flexible. For example, the same image can run on a Windows or Linux host, and in development, test, or production environments.
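
Sketched with Docker commands (registry.example.com and my-app are hypothetical names), the workflow looks like this: build the image once, push it to a registry, and every environment pulls and runs the identical artifact:

    # Build once, for example on the developer's laptop, and push it
    docker build -t registry.example.com/my-app:1.0 .
    docker push registry.example.com/my-app:1.0

    # On any test or production host, pull and run the identical image
    docker pull registry.example.com/my-app:1.0
    docker run -d -p 8080:8080 registry.example.com/my-app:1.0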

Standardization: Most container implementations are based on open standards, such as those of the Open Container Initiative, and run on all major Linux distributions, Microsoft Windows, and other operating systems.

Versioning: Container images are versioned, so you can track different versions of an image and see the differences between them.
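
In Docker, for example, versions are expressed as image tags (my-app is again a hypothetical image name):

    # Give the current build an explicit version tag
    docker tag my-app:latest my-app:1.1

    # List the versions of the image available locally
    docker images my-app

    # Show how a given version was assembled, layer by layer
    docker history my-app:1.1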

Security from container isolation: a single host can run many containers, but the processes inside those containers are isolated from one another and unaware of each other. An upgrade of, or a failure in, one container does not affect the others.
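
This isolation is easy to observe with Docker, using the small public busybox image:

    # Start two containers on the same host
    docker run -d --name app-a busybox sleep 3600
    docker run -d --name app-b busybox sleep 3600

    # Each container sees only its own processes (its own PID namespace)
    docker exec app-a ps

    # Stopping one container leaves the other untouched
    docker stop app-a
    docker ps    # app-b is still running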

More lightweight than virtual machines:

Virtual machines and containers serve a similar purpose: both isolate an application and its dependencies into a self-contained unit that can run independently of the specific environment.

Virtual machines use software to emulate a complete hardware system on top of a physical server. The hypervisor sits between the hardware and the guest systems and is the core of VM technology; all VM software must go through it as an intermediate layer. When the host operating system starts a VM, the hypervisor allocates memory, CPU, network, and disk resources to it and loads a full guest operating system. As a result, each VM consumes a large amount of the host’s physical resources.

Multiple containerized applications running on one host share the kernel of the host’s operating system. This eliminates the hypervisor layer that virtual machine technology requires, making containers lighter and faster to start than virtual machines.
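
This kernel sharing is easy to verify with Docker: a container reports the host’s kernel version, and starting one takes a fraction of the time a virtual machine needs to boot (the version string below is only an example):

    # On the host
    uname -r                            # e.g. 5.15.0-91-generic

    # Inside a container: the same version, because the kernel is shared
    docker run --rm busybox uname -r

    # Startup is sub-second; a VM must boot an entire guest OS first
    time docker run --rm busybox true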

Why has container technology become so popular in recent years?

With the rise of microservice architecture in application development, many IT companies have launched new products built on microservices. Even SAP, the traditional enterprise management software giant, has released microservice-based solutions such as Engagement Center and Revenue Cloud.

Initially, microservice providers tended to deploy microservices in virtual machines (VMs), which also gave them isolation, but this approach could not scale quickly: as described earlier, VMs take time to boot and cannot respond immediately to sudden spikes in load or traffic. Cost is another consideration. With traditional VM technology, isolating each application or microservice means running it in its own VM, which duplicates operating systems and wastes resources. Container technology avoids this duplication, greatly reducing cloud providers’ hardware requirements and cutting the cost of running their data centers.
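
Because containers start so quickly, scaling out under load can be a single command; for example, with the container orchestrator Kubernetes (assuming a deployment named my-app already exists):

    # Grow a microservice to 10 running instances almost instantly
    kubectl scale deployment my-app --replicas=10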

For more of Jerry’s original articles, please follow the public account “Wang Zixi”.