At the end of 2020, you probably saw the "Kubernetes is deprecating Docker" news go viral. In fact, containerd, the successor to the Docker runtime, has been able to integrate directly with kubelet since Kubernetes 1.7, but most of us are more familiar with Docker and deploy clusters with the default dockershim. The community has now announced that kubelet will drop dockershim support after 1.20. As a typical newbie, switching to containerd means some old habits and configurations have to change along with it, so this article summarizes my recent experience of replacing Docker with containerd.

1. Installing Containerd and integrating it with Kubelet

  • Preparation before installation
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
  • Install containerd
# Install Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -

# Add the Docker apt repository
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

# Install containerd
sudo apt-get update && sudo apt-get install -y containerd.io
  • Generate containerd default configuration
sudo mkdir -p /etc/containerd
sudo containerd config default > /etc/containerd/config.toml
  • Modify kubelet configuration

Append the following flags to KUBELET_KUBEADM_ARGS in /var/lib/kubelet/kubeadm-flags.env:

KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --image-service-endpoint=unix:///run/containerd/containerd.sock"
  • Restart the Containerd and Kubelet services
systemctl restart containerd kubelet
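After both services come back up, a quick sanity check (assuming you have kubectl access) is to confirm that the node now reports containerd as its container runtime:

kubectl get nodes -o wide
# The CONTAINER-RUNTIME column should now show containerd://<version> instead of docker://<version>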

2. Common Containerd operations

After switching to Containerd, the docker command is replaced by crictl and ctr.

  • crictl is a command-line tool that follows the CRI interface specification and is commonly used to inspect and manage the container runtime and images on kubelet nodes
  • ctr is containerd's own client tool, used to interact with containerd directly

To use crictl, first create /etc/crictl.yaml as follows:

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
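With this file in place, crictl talks to containerd over the CRI socket. A quick connectivity check (run as root or with sudo):

crictl version   # prints the client version and the runtime (containerd) version
crictl info      # dumps the status and configuration reported by the CRI runtime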

Here are the common crictl commands, which can fully replace the corresponding docker commands as shown in the table below:

Operation                    crictl               docker
List running containers      crictl ps            docker ps
List images                  crictl images        docker images
View container logs          crictl logs          docker logs
Exec into a container        crictl exec          docker exec
Pull an image                crictl pull          docker pull
Start/stop a container       crictl start/stop    docker start/stop
Container resource usage     crictl stats         docker stats
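For example, a typical troubleshooting flow might look like the sketch below, where <container-id> is a placeholder you would copy from the crictl ps output:

crictl ps                          # list running containers and note the CONTAINER ID
crictl logs <container-id>         # view that container's console logs
crictl exec -it <container-id> sh  # open a shell inside the container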

As you can see, crictl covers almost all of the container lifecycle management, but there are still things it cannot do, particularly around image management. That part depends on ctr instead, as shown in the following table:

Operation               ctr                         docker
List images             ctr images ls               docker images
Import/export images    ctr images import/export    docker load/save
Pull/push images        ctr images pull/push        docker pull/push
Tag an image            ctr images tag              docker tag

Note that Containerd has the concept of namespaces, such as k8s.io, moby, and default. Everything crictl does happens in the k8s.io namespace, so when using ctr you need to specify it explicitly, for example:

ctr -n k8s.io images list
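As a hedged sketch, this is how you could bring a locally saved image archive into the k8s.io namespace so that kubelet and crictl can see it (app.tar is a placeholder file name):

ctr -n k8s.io images import app.tar   # import the image archive into the k8s.io namespace
crictl images                         # crictl reads the k8s.io namespace, so the image appears here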

3. Containerd and (virtual) GPU devices

With Docker, nvidia-docker is usually used to invoke nvidia-container-runtime and mount GPU devices into containers. After switching to Containerd, the nvidia-docker client is no longer needed; instead, containerd's runtime configuration invokes nvidia-container-runtime directly.

In addition to Containerd and the NVIDIA/CUDA drivers, you also need to install nvidia-container-runtime:

curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt update
sudo apt install nvidia-container-runtime -y

Finally, add the NVIDIA runtime configuration to Containerd in

/etc/containerd/config.toml

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
-     default_runtime_name = "runc"
+     default_runtime_name = "nvidia"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        # This section is added by system, we can just ignore it.
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          runtime_engine = ""
          runtime_root = ""
          privileged_without_host_devices = false
          base_runtime_spec = ""
+       [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
+         runtime_type = "io.containerd.runc.v2"
+         runtime_engine = ""
+         runtime_root = ""
+         privileged_without_host_devices = false
+         base_runtime_spec = ""
+         [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
+           BinaryName = "nvidia-container-runtime"
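After saving config.toml, restart containerd so that the new nvidia runtime entry takes effect:

sudo systemctl restart containerd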

In addition, if, like me, you use Tencent's TKE gpu-manager to virtualize GPU devices, you also need to upgrade gpu-manager to version 1.1.0 or higher and modify its startup parameters as follows:

containers:
- env:
  - name: EXTRA_FLAGS
    value: --container-runtime-endpoint=/var/run/containerd/containerd.sock

Finally, we can use a Pod to verify that the GPU is recognized correctly:

apiVersion: v1
kind: Pod
metadata:
  name: vcuda
spec:
  restartPolicy: Never
  containers:
  - image: nvidia/cuda:10.1-runtime-ubuntu16.04
    name: nvidia
    command:
    - "/usr/local/nvidia/bin/nvidia-smi"
    - "pmon"
    - "-d"
    - "10"
    resources:
      requests:
        tencent.com/vcuda-core: 50
        tencent.com/vcuda-memory: 4
      limits:
        tencent.com/vcuda-core: 50
        tencent.com/vcuda-memory: 4
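Assuming the manifest above is saved as vcuda.yaml (a placeholder file name), apply it and watch the logs; nvidia-smi pmon output indicates that the container can see the (virtual) GPU:

kubectl apply -f vcuda.yaml
kubectl logs -f vcuda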

4. Containerd console logs

In the Docker era, Kubernetes container console logs were written in JSON format by default. After switching to Containerd, the console output changes to a text format, as shown below:

# docker json-file log format
{"log":"[INFO] plugin/reload: Running configuration MD5 = 4665410bf21c8b272fcfd562c482cb82\n","stream":"stdout","time":"2020-01-10T17:10:40.838559221Z"}

# containerd text log format
2020-01-10T18:10:40.01576219Z stdout F [INFO] plugin/reload: Running configuration MD5 = 4665410bf21c8b272fcfd562c482cb82

In most cases this causes the existing log collector to fail with JSON parsing errors, so the collector configuration needs to be updated before Containerd goes live.

Using Fluentd as an example, we can use the multi_format parser to handle container logs in both formats:

<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    # This pattern matches the CRI container log format
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>

Conclusion

K8S worker nodes are becoming ever lighter, and there are even lighter runtimes such as CRI-O, but Containerd, proven in large-scale production environments, is still the best container runtime management tool for now.


Follow the public account "Cloud Native Xiaobai" for more content.