Getting to know Containerd

Containerd and Docker

Containerd had long sat in a Schrödinger-like state — neither thriving nor dead — until K8s announced it was dropping Docker, which brought it back into the public eye. On the technical level, I discussed the relationship between CRI, containerd, and the Docker container runtime in previous articles. Interested readers can take a look:

Understand the container runtime

K8s, CRI and Container

Now, out of curiosity about why CNCF did this, let's set the technical side aside and review the history of love and hate between containerd and Docker.

A few years ago, Docker, Inc. emerged as the dominant company in container technology. Giants like Google and Red Hat felt a deep sense of crisis, so they wanted to jointly develop and promote an open-source container runtime built around Docker's core technology. But Docker refused outright. Upset by this response, the giants eventually pressured Docker into donating libcontainer to the open-source community; it lives on today as runc, a low-level container runtime. The giants also founded CNCF to counter Docker's dominance. CNCF's founding idea was clear: if it could not beat Docker at the container level itself, it would focus on the layer above containers — container orchestration — and from that effort K8s was born. Docker tried to counter K8s with Swarm, but failed.

Since then, K8s has gradually become the standard in the cloud-native field, and its ecosystem keeps growing. To integrate into that ecosystem, Docker open-sourced containerd, the core component Docker itself depends on. K8s, for its part, defined a neutral Container Runtime Interface (CRI) for talking to the runtime layer below it; runtimes such as containerd and CRI-O support this interface today. But at the time there was no suitable high-level runtime: CRI-O supported the CRI interface but was still too weak; containerd was also immature and was positioned as a component to be embedded into systems, not used by end users; rkt had an ecosystem of its own (and later died out). So K8s could only maintain a temporary shim for Docker, translating CRI calls into Docker API calls. K8s and Docker then entered a cooling-off period, each optimizing itself without interfering with the other. Meanwhile the CNCF community kept improving containerd, and its positioning shifted from an embeddable system component to an "industry-standard container runtime" that end users can use directly. Finally, last year, K8s announced it was deprecating Docker in favor of containerd. Commercial factors aside, K8s already provides a standard interface (CRI) to the underlying runtime, so there is no good reason to keep maintaining a separate adaptation layer like dockershim just to accommodate one particular runtime.

Containerd architecture

Okay, now that the gossip is out of the way, let's get back to the technical side and see what containerd's architecture looks like. Let's start with containerd's features:

  • Support for the OCI image spec (as claimed on the official website)
  • Support for the OCI runtime spec
  • Image push and pull
  • Container lifecycle management
  • Multi-tenant image storage
  • Network and namespace management: supports attaching containers to existing network namespaces
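Most of these features can be exercised directly from containerd's own `ctr` command line. A minimal sketch (this assumes a running containerd daemon and root privileges; the image name and the `demo`/`web` identifiers are just examples):

```shell
# Image distribution: pull an image from a registry
ctr images pull docker.io/library/nginx:latest

# Multi-tenancy: images and containers live in separate namespaces
ctr --namespace demo images pull docker.io/library/nginx:latest
ctr namespaces ls

# Container lifecycle: create and start a container (runs as a "task"),
# then stop and remove it
ctr run -d docker.io/library/nginx:latest web
ctr tasks ls
ctr tasks kill web
ctr containers rm web
```

Note that `ctr` deliberately stays close to containerd's internal concepts (images, containers, tasks, namespaces), which is why the lifecycle takes more steps than a single `docker run`.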

Wow — containerd covers all of Docker's core features. Now look at the architecture diagram:

Containerd is designed as a large plugin system; each dashed box in the diagram corresponds to a plugin.

From the bottom up: the lowest layer supports the ARM and x86 architectures, and both Linux and Windows operating systems.

Containerd itself consists of three layers: Backend, Core, and API. In the Backend, the Runtime plugin provides the concrete operations for running containers; to support different container runtimes, containerd also provides a series of containerd-shim binaries — the shim mentioned in the earlier article "K8s & Kata Container principles and practice". Core is the core layer and provides the various services; the most commonly used are:

  • Content service: provides access to the content-addressable storage of images; all immutable content is stored here
  • Images service: provides image-related operations
  • Snapshot plugin: manages filesystem snapshots of container images; each image layer is unpacked into a filesystem snapshot, similar to the graphdriver in Docker

Above that is the API layer, which exposes a gRPC interface to clients and a metrics API for Prometheus.

At the top sit the various clients, including K8s's kubelet and containerd's own command-line tool, ctr.

To simplify the figure above:

After simplification, containerd is divided into three parts: Storage manages image files; Metadata manages the metadata of images and containers; and the Runtime is driven through Tasks. Externally, containerd exposes a gRPC API and a Metrics interface.
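The gRPC API can also be consumed programmatically through containerd's Go client, which maps quite directly onto the three parts above. A minimal sketch (this assumes a containerd daemon listening on the default socket; the `demo` namespace and the nginx image are just examples):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd's gRPC API over its unix socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Every operation is scoped to a namespace (multi-tenancy).
	ctx := namespaces.WithNamespace(context.Background(), "demo")

	// Storage + Metadata at work: pull the image content and
	// unpack its layers into filesystem snapshots.
	image, err := client.Pull(ctx, "docker.io/library/nginx:latest",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pulled:", image.Name())
}
```

From the returned image, `client.NewContainer` and then `container.NewTask` would take over — which is exactly the Task-triggered runtime path in the simplified diagram.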

Conclusion

The containerd community is very active, and the project has matured thanks to its contributions. The point of this article was to review containerd's history and present its current architecture. Even though, as users, we won't dig into its internal design, I believe it is extremely intricate and sophisticated. In terms of day-to-day usage, containerd and Docker actually have a lot in common, so I'll focus on containerd in practice in the next article.