Original link: fuckcloudnative.io/posts/getti…

1. Containerd’s Past and Present Lives

A long time ago, Docker rose to prominence and swept the world on the strength of its container images, dealing a near-fatal blow to other container technologies, Google’s included. Not wanting to be left for dead on the beach, Google swallowed its pride (outright flattery was of course out of the question) and proposed that the two companies jointly promote a neutral open-source container runtime as Docker’s core dependency, or else, well, we’ll see. Docker felt its intelligence had been insulted: fine, we’ll see.

Obviously, Docker’s decision ruined its promising future and caused today’s tragedy.

Google then teamed up with Red Hat, IBM, and other big names to coax Docker into donating libcontainer to a neutral community (OCI, the Open Container Initiative), where it was renamed runc, with hardly a trace of Docker left in the name.

That was not enough. To completely overturn Docker’s dominance, the same big names jointly founded a foundation called CNCF (the Cloud Native Computing Foundation). The name should be familiar to everyone, so I won’t introduce it in detail. CNCF’s goal was clear: since Docker could not be beaten in its current dimension, simply climb one dimension higher, to large-scale container orchestration, and beat it there.

How that turned out, everyone knows: Kubernetes defeated Swarm and, with it, Docker. Docker then played what it thought was a clever trick and donated its core dependency, containerd, to the CNCF in order to market Docker as a PaaS platform.

Unfortunately, this clever move only accelerated its own demise.

The giants thought to themselves: “Back when we wanted to build a neutral core runtime with you, you flatly refused. Good guy, now you build one yourself and donate it. What kind of move is that? Well, that just makes things easier: we’ll simply use containerd.”

First of all, to keep Kubernetes neutral, the container runtime interface had to be standardized: any container runtime that implements this interface can play with Kubernetes. The first runtime to support it was, of course, containerd. This interface is the now-famous CRI (Container Runtime Interface).

To keep Docker off guard, Kubernetes swallowed its pride for a while and integrated a shim (the dockershim) into its own components to translate CRI calls into Docker API calls, so that Docker could keep playing along happily. Boil the frog in warm water and fatten it up before the kill…

In this way, Kubernetes kept up appearances with Docker while quietly polishing containerd’s robustness and its CRI integration. Now that containerd’s wings have hardened, it is time to drop the disguise and say bye-bye to Docker. The rest of the story we all know.

Docker succeeded as a technology, but Docker failed as a company.

2. Containerd architecture

Today, containerd is an industry-standard container runtime whose slogan boils down to: super simple, super robust, super portable!

Of course, to reassure Docker that it is not after its job, containerd claims to be designed primarily to be embedded into a larger system (read: Kubernetes) rather than used directly by developers or end users.

In fact, containerd can already do just about everything. Developers and end users can use it on a host to manage the complete container lifecycle, including image transfer and storage, container execution and management, storage, and networking. Think about that for a moment.

The best time to learn containerd was when it entered the CNCF; the second-best time is right now 😆.

Let’s take a look at Containerd’s architecture:

As you can see, containerd uses a standard C/S architecture: the server exposes stable APIs over gRPC, and the client performs higher-level operations by calling those APIs.
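Once containerd is installed (see Section 3), you can see this client/server split for yourself: the daemon listens on a local Unix socket, and ctr is just one gRPC client of it. A minimal sketch, assuming the default socket path:

# The daemon's gRPC endpoint is a Unix socket (default path shown)
🐳  → ls -l /run/containerd/containerd.sock

# ctr is simply a gRPC client of that socket; --address makes the connection explicit
🐳  → ctr --address /run/containerd/containerd.sock version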

For decoupling, containerd splits its responsibilities across different components, each of which acts like a subsystem; the components that connect different subsystems are called modules.

Overall Containerd is divided into two subsystems:

  • Bundle: in containerd, a bundle contains a container’s configuration, metadata, and root filesystem data; you can think of it as the container’s filesystem on disk. The bundle subsystem lets users extract bundles from images and package bundles back into images (a concrete peek at a bundle on disk follows this list).
  • Runtime: the runtime subsystem executes bundles, for example by creating containers.
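To make the bundle idea concrete, here is a hedged peek at what actually lands on disk once a container is running (Section 5 shows how to create one); the path assumes the v2 runtime and the default namespace:

# Each running container gets a bundle directory; at a minimum it holds
# config.json (the OCI runtime spec) and rootfs/ (the container's root filesystem)
🐳  → ls /run/containerd/io.containerd.runtime.v2.task/default/nginx/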

The behavior of each subsystem is implemented by one or more modules (the Core layer in the architecture diagram). Each type of module is integrated into containerd as a plugin, and plugins depend on one another. For example, each long dashed box in the figure above represents a type of plugin, including the Service Plugin, Metadata Plugin, GC Plugin, Runtime Plugin, and so on, where the Service Plugin in turn depends on the Metadata Plugin, the GC Plugin, and the Runtime Plugin. Each small box represents a more finely divided plugin, for example the Metadata Plugin depends on the Containers Plugin, the Content Plugin, and so on. In short: everything is a plugin, a plugin is a module, and a module is a plugin.

Here are a few commonly used plug-ins:

  • Content Plugin: provides access to the addressable content of images; all immutable content is stored here.
  • Snapshot Plugin: manages the filesystem snapshots of container images. Each layer of an image is unpacked into a filesystem snapshot, playing a role similar to Docker’s graphdriver.
  • Metrics: exposes monitoring metrics for each component.

At a high level, containerd is divided into three big pieces: Storage, Metadata, and Runtime.

Here is a bucketbench performance comparison of Docker, CRI-O, and containerd, timing how long each takes to start, stop, and remove containers:

As you can see, containerd holds up well on every front and beats both Docker and CRI-O in overall performance.

3. Containerd installation

Now that you understand the concepts behind containerd, it is time to try it out. The demo environment for this article is Ubuntu 18.04.

Install dependencies

Install the libseccomp dependency:

🐳  → sudo apt-get update
🐳  → sudo apt-get install libseccomp2

Download and extract the Containerd archive

Containerd provides two archives: containerd-${VERSION}.${OS}-${ARCH}.tar.gz and cri-containerd-${VERSION}.${OS}-${ARCH}.tar.gz. The cri-containerd-${VERSION}.${OS}-${ARCH}.tar.gz archive contains all the binaries Kubernetes needs. If you are only testing locally, the first archive is enough; if containerd is to serve as the container runtime for Kubernetes, choose the latter.

Containerd needs to call runc, and the first archive does not include the runc binary, so if you choose it you will have to install runc separately. I therefore recommend going straight for the cri-containerd archive.

First download the latest release from the Releases page; at the time of writing the latest version is 1.4.3:

🐳  → wget https://github.com/containerd/containerd/releases/download/v1.4.3/cri-containerd-cni-1.4.3-linux-amd64.tar.gz

# You can also use the following mirror URL to speed up the download:
🐳  → wget https://download.fastgit.org/containerd/containerd/releases/download/v1.4.3/cri-containerd-cni-1.4.3-linux-amd64.tar.gz

You can directly see what files are included in the package with the -t option of tar:

🐳  → tar -tf cri-containerd-cni-1.4.3-linux-amd64.tar.gz
etc/
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
etc/crictl.yaml
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
usr/
usr/local/
usr/local/bin/
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/ctr
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/crictl
usr/local/bin/critest
usr/local/bin/containerd
usr/local/sbin/
usr/local/sbin/runc
opt/
opt/cni/
opt/cni/bin/
opt/cni/bin/vlan
opt/cni/bin/host-local
opt/cni/bin/flannel
opt/cni/bin/bridge
opt/cni/bin/host-device
opt/cni/bin/tuning
opt/cni/bin/firewall
opt/cni/bin/bandwidth
opt/cni/bin/ipvlan
opt/cni/bin/sbr
opt/cni/bin/dhcp
opt/cni/bin/portmap
opt/cni/bin/ptp
opt/cni/bin/static
opt/cni/bin/macvlan
opt/cni/bin/loopback
opt/containerd/
opt/containerd/cluster/
opt/containerd/cluster/version
opt/containerd/cluster/gce/
opt/containerd/cluster/gce/cni.template
opt/containerd/cluster/gce/configure.sh
opt/containerd/cluster/gce/cloud-init/
opt/containerd/cluster/gce/cloud-init/master.yaml
opt/containerd/cluster/gce/cloud-init/node.yaml
opt/containerd/cluster/gce/env

Extract the archive directly into the system’s directories:

🐳  → sudo tar -C / -xzf cri-containerd-cni-1.4.3-linux-amd64.tar.gz
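Before going any further, it does not hurt to confirm that the binaries landed where the archive listing above says they should (full paths are used because $PATH is only updated in the next step):

🐳  → ls /usr/local/bin /usr/local/sbin
🐳  → /usr/local/bin/containerd --version
🐳  → /usr/local/sbin/runc --version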

Append /usr/local/bin and /usr/local/sbin to the $PATH environment variable in ~/.bashrc:

export PATH=$PATH:/usr/local/bin:/usr/local/sbin

Make it take effect immediately:

🐳  → source ~/.bashrc

View version:

🐳  → ctr version
Client:
  Version:  v1.4.3
  Revision: 269548fa27e0089a8b8278fc4fc781d7f65a939b
  Go version: go1.15.5

Server:
  Version:  v1.4.3
  Revision: 269548fa27e0089a8b8278fc4fc781d7f65a939b
  UUID: d1724999-91b3-4338-9288-9a54c9d52f70

Generate the configuration file

The default configuration file is /etc/containerd/config.toml. You can generate a default configuration with the following commands:

🐳  → mkdir /etc/containerd
🐳  → containerd config default > /etc/containerd/config.toml

Registry mirror acceleration

For reasons that cannot be described, pulling from public image registries in China is very slow. To save pull time, configure registry mirrors for containerd. Containerd’s mirror configuration differs from Docker’s in two ways:

  • Containerd only applies registry mirrors to images pulled through the CRI, that is, pulls made by crictl or by Kubernetes; pulls made with ctr do not go through the mirrors.
  • Docker only supports configuring a mirror for Docker Hub, while containerd can configure a mirror for any registry.

Before configuring the mirrors, let’s look at the structure of containerd’s configuration file, which may look complicated at first glance. All of the complexity lives in the plugins section:

[plugins]
  [plugins."io.containerd.gc.v1.scheduler"]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"
  [plugins."io.containerd.grpc.v1.cri"]
    disable_tcp_service = true
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    stream_idle_timeout = "4h0m0s"
    enable_selinux = false
    sandbox_image = "K8s. GCR. IO/pause: 3.1"
    stats_collect_period = 10
    systemd_cgroup = false
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    disable_cgroup = false
    disable_apparmor = false
    restrict_oom_score_adj = false
    max_concurrent_downloads = 3
    disable_proc_mount = false
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
      no_pivot = false
      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
        privileged_without_host_devices = false
      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
        privileged_without_host_devices = false
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v1"
          runtime_engine = ""
          runtime_root = ""
          privileged_without_host_devices = false
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"
  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"
  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"
  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false
  [plugins."io.containerd.runtime.v1.linux"]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    root_path = ""
    pool_name = ""
    base_image_size = ""

Each top-level configuration block is named plugins."io.containerd.xxx.vx.xxx". Each block corresponds to one plugin, where io.containerd.xxx.vx indicates the plugin type and the xxx after vx is the plugin ID. You can list them all with ctr:

🐳  → ctr plugin ls
TYPE                            ID                    PLATFORMS      STATUS
io.containerd.content.v1        content               -              ok
io.containerd.snapshotter.v1    btrfs                 linux/amd64    error
io.containerd.snapshotter.v1    devmapper             linux/amd64    error
io.containerd.snapshotter.v1    aufs                  linux/amd64    ok
io.containerd.snapshotter.v1    native                linux/amd64    ok
io.containerd.snapshotter.v1    overlayfs             linux/amd64    ok
io.containerd.snapshotter.v1    zfs                   linux/amd64    error
io.containerd.metadata.v1       bolt                  -              ok
io.containerd.differ.v1         walking               linux/amd64    ok
io.containerd.gc.v1             scheduler             -              ok
io.containerd.service.v1        containers-service    -              ok
io.containerd.service.v1        content-service       -              ok
io.containerd.service.v1        diff-service          -              ok
io.containerd.service.v1        images-service        -              ok
io.containerd.service.v1        leases-service        -              ok
io.containerd.service.v1        namespaces-service    -              ok
io.containerd.service.v1        snapshots-service     -              ok
io.containerd.runtime.v1        linux                 linux/amd64    ok
io.containerd.runtime.v2        task                  linux/amd64    ok
io.containerd.monitor.v1        cgroups               linux/amd64    ok
io.containerd.service.v1        tasks-service         -              ok
io.containerd.internal.v1       restart               -              ok
io.containerd.grpc.v1           containers            -              ok
io.containerd.grpc.v1           content               -              ok
io.containerd.grpc.v1           diff                  -              ok
io.containerd.grpc.v1           events                -              ok
io.containerd.grpc.v1           healthcheck           -              ok
io.containerd.grpc.v1           images                -              ok
io.containerd.grpc.v1           leases                -              ok
io.containerd.grpc.v1           namespaces            -              ok
io.containerd.internal.v1       opt                   -              ok
io.containerd.grpc.v1           snapshots             -              ok
io.containerd.grpc.v1           tasks                 -              ok
io.containerd.grpc.v1           version               -              ok
io.containerd.grpc.v1           cri                   linux/amd64    ok

For example, the configuration of the cri plugin is divided into containerd, cni, and registry sections, and under containerd you can configure individual runtimes as well as the default runtime.

The registry mirror configuration lives in the registry block under the cri plugin’s configuration block, so the following changes are needed:

    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://dockerhub.mirrors.nwafu.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://registry.aliyuncs.com/k8sxio"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["xxx"]
  • registry.mirrors."xxx": the registry whose pulls should go through a mirror; for example, registry.mirrors."docker.io" configures a mirror for docker.io.
  • endpoint: the endpoint providing the mirror service; for docker.io, for example, the mirror run by Northwest A&F University is recommended.

As for gcr.io, there is currently no public mirror service for it. I pay for a private acceleration service that reaches roughly 3 MB/s; students who need it can add me on WeChat (cloud-native-yang) for further details.
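Once containerd is running (see the systemd section below), a configuration change requires a restart to take effect. A quick sanity check of the mirror behaviour, assuming the crictl.yaml shipped in the tarball already points crictl at containerd’s socket:

🐳  → sudo systemctl restart containerd

# Pulled through the CRI plugin, so the configured mirror is used
🐳  → crictl pull docker.io/library/nginx:alpine

# Pulled with ctr, which bypasses the CRI plugin, so the mirror is NOT used
🐳  → ctr i pull docker.io/library/nginx:alpine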

Storage configuration

Containerd has two different storage paths, one for persistent data and one for runtime state.

root = "/var/lib/containerd"
state = "/run/containerd"

root stores persistent data, including snapshots, content, metadata, and each plugin’s data. Every plugin has its own directory; containerd itself stores no data, and all of its functionality comes from the plugins it loads.

🐳  → tree -L 2 /var/lib/containerd/
/var/lib/containerd/
├── io.containerd.content.v1.content
│   ├── blobs
│   └── ingest
├── io.containerd.grpc.v1.cri
│   ├── containers
│   └── sandboxes
├── io.containerd.metadata.v1.bolt
│   └── meta.db
├── io.containerd.runtime.v1.linux
│   └── k8s.io
├── io.containerd.runtime.v2.task
├── io.containerd.snapshotter.v1.aufs
│   └── snapshots
├── io.containerd.snapshotter.v1.btrfs
├── io.containerd.snapshotter.v1.native
│   └── snapshots
├── io.containerd.snapshotter.v1.overlayfs
│   ├── metadata.db
│   └── snapshots
└── tmpmounts

18 directories, 2 files

state holds temporary data, including sockets, PIDs, mount points, runtime state, and any plugin data that does not need to be persisted.

🐳  → tree -L 2 /run/containerd/
/run/containerd/
├── containerd.sock
├── containerd.sock.ttrpc
├── io.containerd.grpc.v1.cri
│   ├── containers
│   └── sandboxes
├── io.containerd.runtime.v1.linux
│   └── k8s.io
├── io.containerd.runtime.v2.task
└── runc
    └── k8s.io

8 directories, 2 files

OOM

One more configuration to watch out for:

oom_score = 0

Containerd is the guardian of its containers: if the system runs out of memory, ideally the containers get killed first, not containerd itself. So we adjust containerd’s OOM weight to reduce its chance of being OOM-killed, ideally setting its oom_score a little lower than that of other daemons. The oom_score here corresponds to /proc/<pid>/oom_score_adj; earlier Linux kernel versions adjusted the weight through oom_adj, which was later replaced by oom_score_adj. The kernel documentation describes the file as follows:

The value of /proc/<pid>/oom_score_adj is added to the badness score before it is used to determine which task to kill. Acceptable values range from -1000 (OOM_SCORE_ADJ_MIN) to +1000 (OOM_SCORE_ADJ_MAX). This allows userspace to polarize the preference for oom killing either by always preferring a certain task or completely disabling it. The lowest possible value, -1000, is equivalent to disabling oom killing entirely for that task since it will always report a badness score of 0.

In other words, oom_score_adj is added to the badness score when it is calculated, so a user can use this value to protect a process from ever being killed or to make it the preferred victim. The value ranges from -1000 to 1000.

If the value is set to -1000, the process is never killed, because its badness score always comes out as 0.

It is recommended to set containerd’s oom_score to a value between -999 and 0. If the machine is a Kubernetes worker node, consider setting it to -999.
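Once containerd is running under the systemd unit shown in the next subsection (which sets OOMScoreAdjust=-999), you can sanity-check the effective value through procfs; a minimal sketch:

# oom_score_adj of the running daemon; with OOMScoreAdjust=-999 this should print -999
🐳  → cat /proc/$(pidof containerd)/oom_score_adj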

Systemd configuration

It is recommended to run containerd as a daemon managed by systemd. The unit file shipped in the tarball looks like this:

🐳  → cat /etc/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

There are two important parameters:

  • Delegate: allows containerd and the runtime to manage the cgroups of the containers they create. If this option is not set, systemd moves the processes into its own cgroup, and containerd can no longer obtain the containers’ resource usage correctly.

  • KillMode: controls how the containerd process is killed. By default, systemd looks inside the process’s cgroup and kills all of containerd’s child processes, which is definitely not what we want. The KillMode field can take the following values:

    • control-group (default): kills all child processes in the current control group
    • process: kills only the main process
    • mixed: the main process receives SIGTERM and the child processes receive SIGKILL
    • none: no process is killed; only the service’s stop command is executed

    We need to set KillMode to process to ensure that existing containers are not killed when Containerd is upgraded or restarted.

Now comes the crucial step: starting Containerd. Execute a command and you’re done:

🐳  → systemctl enable containerd --now
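If you want to double-check that systemd picked up the settings discussed above, systemctl can print them directly (these are standard systemd properties; the values come from the unit file shipped in the tarball):

🐳  → systemctl show containerd -p Delegate -p KillMode -p OOMScoreAdjust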

Let’s move on to the last part of this article: the basic usage of containerd. Here we only cover containerd’s own local client, ctr; crictl will get its own article later.

4. Quick installation of Containerd

If you want to bring up Kubernetes and containerd in under a minute, you can deploy with Sealos, a Kubernetes installer: one command, fully offline, all dependencies included, load balancing done in the kernel with no dependency on haproxy or keepalived, written in pure Golang, with a 99-year certificate. The 1.12.0 offline package ships the latest version of containerd and also supports the arm64 architecture, which is pretty sweet.

Sealos is a single Golang binary; you can download it and copy it straight into your bin directory, or grab it from the Releases page:

🐳  → wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/latest/sealos
🐳  → chmod +x sealos && mv sealos /usr/bin

Download offline resource packs:

🐳  → wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/7b6af025d4884fdd5cd51a674994359c-1.18.0/kube1.18.0.tar.gz

Install a highly available Kubernetes cluster with three masters:

🐳  → sealos init --passwd 123456 \
       --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 \
       --node 192.168.0.5 \
       --pkg-url /root/kube1.18.0.tar.gz \
       --version v1.18.0

And then we’re done…

5. Using ctr

ctr is not yet as polished or full-featured as docker, but the basic functionality is all there. Below we walk through its usage from two angles: images and containers.

Images

Pull an image:

🐳  → ctr i pull docker.io/library/nginx:alpine
docker.io/library/nginx:alpine:                                                   resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:efc93af57bd255ffbfb12c89ec0714dd1a55f16290eb26080e3d1e7e82b3ea66:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:6ceeeab513f7d15cea38c1f8dfe5455323b5a1bfd568516b3b0ee70406f75247: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:0fde4fb87e476fd1655b3f04f55aa5b4b3ef7de7c701eb46573bb5a5dcf66fd2:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:abaddf4965e5e9ce9953f2e136b3bf9cc15365adbcf0c68b108b1cc26c12b1be:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:05e7bc50f07f000e9993ec0d264b9ffcbb9a01a4d69c68f556d25e9811a8f7f4:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c78f7f670e47cf98494e7dbe08e463d34c160bf6a5939a2155ff4438cb8b0e80:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:ce77cf6a2ede66c463dcdd39f1a43cfbac3723a99e94f697bc20faee0f7cce1b:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:3080fd9f46494247c9298a6a3d9694f03f6a32898a07ffbe1c17a0752bae5c4e:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 17.3s    total: 8.7 Mi (513.8 KiB/s)
unpacking linux/amd64 sha256:efc93af57bd255ffbfb12c89ec0714dd1a55f16290eb26080e3d1e7e82b3ea66...
done

List local images:

🐳  → ctr i ls
REF                             TYPE                                                        DIGEST                                                                  SIZE     PLATFORMS                                                                                  LABELS
docker.io/library/nginx:alpine  application/vnd.docker.distribution.manifest.list.v2+json  sha256:efc93af57bd255ffbfb12c89ec0714dd1a55f16290eb26080e3d1e7e82b3ea66 9.3 MiB  linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x  -

Note that PLATFORMS lists the platforms on which the image can run.

Mount the image to the host directory:

🐳  → ctr i mount docker.io/library/nginx:alpine /mnt

🐳  → tree -L 1 /mnt
/mnt
├── bin
├── dev
├── docker-entrypoint.d
├── docker-entrypoint.sh
├── etc
├── home
├── lib
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── srv
├── sys
├── tmp
├── usr
└── var

18 directories, 1 file

Unmount the image from the host directory:

🐳  → ctr i unmount /mnt

Export an image to an archive:

🐳  → ctr i export nginx.tar.gz docker.io/library/nginx:alpine

Import an image from an archive:

🐳  → ctr i import nginx.tar.gz

For other operations, you can see the help:

🐳  → ctr i --help
NAME:
   ctr images - manage images

USAGE:
   ctr images command [command options] [arguments...]

COMMANDS:
   check       check that an image has all content available locally
   export      export images
   import      import images
   list, ls    list images known to containerd
   mount       mount an image to a target path
   unmount     unmount the image from the target
   pull        pull an image from a remote
   push        push an image to a remote
   remove, rm  remove one or more images by reference
   tag         tag an image
   label       set and clear labels for an image

OPTIONS:
   --help, -h  show help

More advanced image manipulation can be done with the content subcommand, for example editing an image’s blob in place and generating a new digest:

🐳  → ctr content ls
DIGEST                                                                  SIZE    AGE     LABELS
...
sha256:fdd7fff110870339d34cf071ee90fbbe12bdbf3d1d9a14156995dfbdeccd7923 740B    7 days  containerd.io/gc.ref.content.2=sha256:4e537e26e21bf61836f827e773e6e6c3006e3c01c6d59f4b058b09c2753bb929,containerd.io/gc.ref.content.1=sha256:188c0c94c7c576fff0792aca7ec73d67a2f7f4cb3a6e53a84559337260b36964,containerd.io/gc.ref.content.0=sha256:b7199797448c613354489644be1f60aa2d8e9c2278989100c72ede3001334f7b,containerd.io/distribution.source.ghcr.fuckcloudnative.io=yangchuansheng/grafana-backup-tool

🐳  → ctr content edit --editor vim sha256:fdd7fff110870339d34cf071ee90fbbe12bdbf3d1d9a14156995dfbdeccd7923

Containers

Create a container:

🐳  → ctr c create docker.io/library/nginx:alpine nginx

🐳  → ctr c ls
CONTAINER    IMAGE                             RUNTIME
nginx        docker.io/library/nginx:alpine    io.containerd.runc.v2

View the detailed configuration of the container:

# similar to docker inspect
🐳  → ctr c info nginx

For other operations, you can see the help:

🐳  → ctr c --help
NAME:
   ctr containers - manage containers

USAGE:
   ctr containers command [command options] [arguments...]

COMMANDS:
   create           create container
   delete, del, rm  delete one or more existing containers
   info             get info about a container
   list, ls         list containers
   label            set and clear labels for a container
   checkpoint       checkpoint a container
   restore          restore a container from checkpoint

OPTIONS:
   --help, -h  show help

Tasks

The create command above creates the container, but it is not yet running; it is just a static container. A Container object merely holds the resources and configuration a container needs: its namespaces, rootfs, and configuration have been initialized successfully, but the user process (here, nginx) has not started yet.

A container only actually runs through a Task object: a task can attach network interfaces to the container, and you can also hook monitoring tools onto it.

So you also need to start the container via Task:

🐳  → ctr task start -d nginx

🐳  → ctr task ls
TASK     PID       STATUS
nginx    131405    RUNNING

Of course, you can also create and run containers directly in one step:

🐳  → ctr run -d docker.io/library/nginx:alpine nginx

Enter the container:

# Similar to docker exec, except that --exec-id is required; the id can be anything as long as it is unique
🐳  → ctr task exec --exec-id 0 -t nginx sh

Pause container:

# similar to docker pause
🐳  → ctr task pause nginx

The container state changes to PAUSED:

🐳  → ctr task ls
TASK     PID       STATUS
nginx    149857    PAUSED

Resume the container:

🐳  → ctr task resume nginx

ctr has no command to gracefully stop a container; it can only pause or kill it.

Kill container:

🐳  → ctr task kill nginx

Get the cgroup information for the container:

# This command shows the container's memory, CPU, and PID limits and usage.
🐳  → ctr task metrics nginx
ID       TIMESTAMP
nginx    2020-12-15 09:15:13.943447167 +0000 UTC

METRIC                  VALUE
memory.usage_in_bytes   77131776
memory.limit_in_bytes   9223372036854771712
memory.stat.cache       6717440
cpuacct.usage           194187935
cpuacct.usage_percpu    [0 335160 0 5395642 3547200 58559242 ... 0]
pids.current            97
pids.limit              0

List the PIDs of all processes in the container:

🐳  → ctr task ps nginx
PID       INFO
149857    -
149921    -
149922    -
149923    -
149924    -
149925    -
149926    -
149928    -
149929    -
149930    -
149932    -
149933    -
149934    -
...

Note: the PIDs here are the PIDs seen from the host, not the PIDs inside the container.

Namespaces

Kubernetes is not the only thing with namespaces; containerd supports namespaces as well.

🐳  → ctr ns ls
NAME    LABELS
default

If no namespace is specified, ctr operates in the default namespace.
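As a quick illustration (the namespace name test below is arbitrary), you can create your own namespace and scope any ctr operation to it with -n:

🐳  → ctr ns create test
🐳  → ctr -n test i pull docker.io/library/nginx:alpine
🐳  → ctr -n test i ls

# the default namespace is unaffected
🐳  → ctr i ls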

At the moment containerd focuses on the runtime, so it cannot completely replace dockerd; for example, it cannot build images from a Dockerfile. That is not a big deal, though, and here is one more trick for you: containerd together with Docker.

Containerd + Docker

Docker and containerd can in fact run side by side; just note that by default Docker uses the containerd namespace moby rather than default. Now for the magic.

First copy the Docker binaries over from another Docker machine or download them from GitHub, then start dockerd with the following command:

🐳  → dockerd --containerd /run/containerd/containerd.sock --cri-containerd

Then run a container with Docker:

🐳  → docker run -d --name nginx nginx:alpine

Now look at containerd’s namespaces again:

🐳  → ctr ns ls
NAME    LABELS
default
moby

Check whether there are containers in the moby namespace:

🐳  → ctr -n moby c ls
CONTAINER                                                           IMAGE    RUNTIME
b7093d7aaf8e1ae161c8c8ffd4499c14ba635d8e174cd03711f4f8c27818e89a    -        io.containerd.runtime.v1.linux

Well damn, you can even play it like that? Looks like using containerd won’t get in the way of my docker build after all.

One final reminder: Kubernetes users need not panic. Kubernetes uses containerd’s k8s.io namespace by default, so ctr -n k8s.io can see all the containers created by Kubernetes, and the images pulled through crictl live in that same namespace.
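For example, on a Kubernetes node you could inspect that namespace directly, or hand Kubernetes an image you exported earlier by importing it there (a hedged sketch; your listings will differ):

🐳  → ctr -n k8s.io c ls
🐳  → ctr -n k8s.io i import nginx.tar.gz
🐳  → crictl images | grep nginx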