When we hear the word “container”, the first things that come to mind are the pots and pans of daily life, or the boxes and crates used to hold all kinds of goods. Take ports and docks as an example: a great deal of cargo is shipped out every day, and it is not simply thrown loose into a ship’s hold. Instead, every dock uses shipping containers to carry the goods. With these containers, cargo no longer has to be stacked haphazardly; it can be arranged layer by layer, making it easier to manage and transport.

So what do we mean by “container” today? The inspiration actually comes from those physical containers. Before discussing containers, let’s briefly review the familiar virtual machine (VM) and compare the two.

▽ Container terminal (photo by Petal.com)

VMs and containers

Virtual machines are certainly familiar: as a computer science graduate, the author used them in college courses to learn the Linux operating system. As the name suggests, a virtual machine is software that simulates a computer system, allowing a user to run what appear to be multiple computers on a single machine. Virtual machines are a good way to run software on a different type of hardware or operating system, eliminating the need for additional hardware.

Since the advent of virtualization technology and cloud computing services, IT companies large and small have embraced virtual machines as a way to reduce costs and improve efficiency. However, virtual machines consume a lot of system resources. Each one must run not only a full operating system but also all the virtual hardware that the operating system depends on, which consumes a great deal of memory and CPU. This is still more economical than running a separate physical computer, but for some applications it is wasteful.

It was this situation that drove the development of containers.

A container is a lighter, more flexible approach to virtualization that packages everything an application needs: the code, its dependencies, and even the operating-system libraries it relies on, allowing the application to run almost anywhere. Containers were born to solve an important problem: how to ensure that an application keeps working correctly when it moves from one environment to another. A container virtualizes only the operating system, not the underlying hardware as a virtual machine does.

▽ VM and Container

So what are the characteristics of containers compared to virtual machines?

  • Portability

The modern form of container technology is mainly embodied in application containers (e.g. Docker) and system containers (e.g. LXC). Both forms allow IT teams to abstract program code from the underlying infrastructure, enabling portability across a variety of deployment environments.

  • Lightweight

Containers typically sit on top of a physical server and its host operating system, and many of them can run from a single operating-system installation. That is why containers are especially “light”: they are only a few megabytes in size and can start in seconds.

  • Lower cost

Greater memory, CPU, and storage efficiency than virtual machines is a key advantage of container technology. Because the same infrastructure can support more containers, these resource savings translate into significant cost savings as well as reduced administrative overhead.

                     Virtual machine                     Container
Weight               Heavyweight                         Lightweight
Performance          Limited                             Native
Operating system     Each VM runs its own OS             All containers share the host OS
Virtualization       Hardware-level                      Operating-system-level
Startup time         Measured in minutes                 Measured in milliseconds
Memory               Allocates all required memory       Requires less memory
Isolation            Fully isolated                      Process-level isolation

▽ Comparison between virtual machine and container

Container technology and DevOps

When it comes to container technology, DevOps has to be mentioned. DevOps (a combination of Development and Operations) is a collection of processes, methods, and systems used to facilitate communication, collaboration, and integration among Development (application/software engineering), technical Operations, and quality assurance (QA) departments.

In November 2014, Docker entered the DevOps world as a trending container technology. It gained popularity by simplifying how applications are packaged and shipped, which in turn sped up continuous deployment. Docker is an open source tool that packages an application and its dependencies (configuration files, etc.) into a container that can run on any Linux server without compatibility issues.
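As an illustration of that packaging, a minimal Dockerfile might look like the sketch below. The base image, file names, and entry point are all made up for the example; a real project would substitute its own.

```dockerfile
# Illustrative Dockerfile for a small Python app; names and files are made up.
# Start from a slim base image that provides the OS userland and Python runtime.
FROM python:3.11-slim
WORKDIR /app
# Install the dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code itself.
COPY . .
# The process the container runs.
CMD ["python", "app.py"]
```

Built once (for example with `docker build -t myapp .`), the resulting image can then be run with `docker run myapp` on any Linux host that has Docker installed, with its dependencies already inside.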

Containerization is a pretty old concept, but Docker brings something new that the earlier technology didn’t.

  • Docker is designed to integrate with most commonly used DevOps tools, such as Puppet, Ansible, and Jenkins.
  • With Docker, developers can easily replicate their production environments as runnable container applications, making their work more efficient.
  • Docker allows applications to run on a laptop, an internal server, or a public or private cloud, providing flexibility and portability and making applications much easier to manage and deploy.
  • Docker implements a high-level API to provide a lightweight container for running separate processes.

Today, Docker is primarily used by developers and system administrators to build and run distributed applications in association with DevOps.


Container technology and microservices

As a new style of software architecture, microservices are closely related to container technology. A microservice architecture breaks a large monolithic application into dozens of smaller services. This strategy can make things simpler, and one of its biggest advantages is that it uses computing resources more efficiently than a traditional application.

Most services have different resource requirements: whether it is network, disk, CPU, or memory, one resource is always used more heavily than the others. Although cloud providers offer different instance configurations for memory, disk I/O, or CPU, the system is still left with a lot of idle resources.

▽ Resource redundancy

With microservices, mixing services that have different resource profiles on the same infrastructure yields optimal utilization.

▽ Microservices provide optimal utilization
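As a sketch of this idea, a hypothetical Compose file (service names, images, and limits below are all illustrative) can co-locate a CPU-bound service with a memory-bound one so that their resource profiles complement each other on the same host:

```yaml
# docker-compose.yml (illustrative; names and limits are made up)
version: "2.4"
services:
  image-resizer:
    # CPU-bound: gets most of the CPU but little memory
    image: example/image-resizer:latest
    cpus: 2.0
    mem_limit: 256m
  session-cache:
    # Memory-bound: gets most of the memory but little CPU
    image: example/session-cache:latest
    cpus: 0.25
    mem_limit: 2g
```

Together the two services consume roughly one machine’s worth of CPU and memory, whereas on dedicated VMs each would leave its under-used resource idle.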

Since each microservice resembles a small application, deploying microservices traditionally would mean giving each one its own virtual machine instance. As you can imagine, dedicating an entire virtual machine to a small part of an application is not the most efficient option. Container technology reduces this overhead: because containers require far fewer computing resources than virtual machines, thousands of microservices can be deployed on the same server. Containerizing microservices is therefore essential; it improves utilization and availability while reducing cost.
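To make concrete just how small one of these services can be, the sketch below (in Python, purely illustrative; the “orders” service and its `/health` endpoint are invented for the example) shows a stand-alone HTTP service with a single endpoint, exactly the kind of self-contained process that fits naturally into one lightweight container:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class OrderHandler(BaseHTTPRequestHandler):
    """A toy 'orders' microservice exposing a single health endpoint."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, format, *args):
        pass  # suppress per-request logging to keep output clean

def serve(port=8080):
    """Run the service until interrupted; in a container this is the CMD."""
    HTTPServer(("127.0.0.1", port), OrderHandler).serve_forever()

if __name__ == "__main__":
    serve()
```

Because the whole service is one process with one port, the container around it needs nothing more than a runtime and an `EXPOSE`d port, which is what keeps per-service overhead so low.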

The container cloud of Yupaiyun is a distributed computing resource network based on Docker, with nodes scattered across the country and overseas. It provides China Telecom, China Unicom, China Mobile, and multi-line networks, integrates the concepts of microservices and DevOps, and meets the requirements of lean development and integrated operations, greatly reducing both the complexity of building distributed computing resources and the cost of using them.


Recommended reading:

OpenResty/Nginx service optimization practices in the cloud (tech.upyun.com)
Edge rules, whatever you want (tech.upyun.com)