Kubernetes deprecated Docker support starting with v1.20 and recommends switching to container runtimes that implement the Container Runtime Interface (CRI), such as containerd and CRI-O. If you're using a hosted Kubernetes service from a cloud provider, don't worry: managed offerings such as GKE and AKS have already made containerd the default runtime in newly created clusters.
But how do you switch the container runtime from Docker to containerd on a self-managed cluster?
How to switch the container runtime
First, cordon the node to mark it unschedulable and drain the Pods running on it, so that the switch does not disrupt running applications:
kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets
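If you want to confirm that nothing critical is still running on the node before touching it, you can list the Pods that remain scheduled there (after the drain, only DaemonSet Pods should be left):

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>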
Then log in to the node as root, stop kubelet and Docker, and remove Docker:
systemctl stop kubelet
systemctl stop docker
apt purge docker-ce docker-ce-cli
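Note that removing docker-ce does not remove containerd itself, which is normally installed as the separate containerd.io package. It is worth confirming it is still present before continuing; a quick check on Ubuntu:

dpkg -l | grep containerd       # containerd.io should be listed
apt install -y containerd.io    # only needed if it is missing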
Next, generate containerd's default configuration file:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
Note: because GCR may not be reachable in some environments (for example, mainland China), change the pause (sandbox) image to one hosted on a registry you can reach, such as MCR:
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "mcr.microsoft.com/oss/kubernetes/pause:1.3.1"
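If you prefer to script the edit rather than open the file, a sed one-liner along these lines does the same thing (adjust the image reference to whatever registry you can reach):

sed -i 's#sandbox_image = ".*"#sandbox_image = "mcr.microsoft.com/oss/kubernetes/pause:1.3.1"#' /etc/containerd/config.toml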
Next, open /etc/default/kubelet and modify the kubelet startup options so that the container runtime is set to containerd:
KUBELET_FLAGS=... --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
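Optionally, point crictl at the same containerd socket so you can still inspect containers and images on the node once Docker is gone. This is only a debugging convenience, not something kubelet requires:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF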
Reload systemd, then restart containerd and kubelet:
systemctl daemon-reload
systemctl restart containerd
systemctl restart kubelet
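On the node you can already check that containerd has taken over; kubelet should recreate its Pods as containerd containers (this assumes crictl is installed and configured as above):

crictl ps
systemctl status kubelet --no-pager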
Then log out of the node and use kubectl to verify the node's container runtime:
# kubectl get node <node-name> -o wide
NAME          STATUS  ROLES  AGE  VERSION  INTERNAL-IP  EXTERNAL-IP  OS-IMAGE            KERNEL-VERSION  CONTAINER-RUNTIME
<node-name>   Ready   agent  13d  v1.18.2  10.241.0.21  <none>       Ubuntu 18.04.5 LTS  5.4.0-1039      containerd://1.4.3
As you can see, the container runtime has been switched to containerd 1.4.3.
Finally, add the node back to the cluster:
kubectl uncordon <node-name>
Repeat the above steps on the remaining nodes to replace Docker with containerd across the cluster.
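If you have many nodes, the per-node work can be driven from a workstation. The following is only a rough sketch: the switch-to-containerd.sh script (containing the node-side steps above) is a hypothetical placeholder, and you would adapt the node selection and SSH access to your environment:

for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets
  ssh root@"$node" 'bash -s' < switch-to-containerd.sh   # hypothetical script with the node-side steps above
  kubectl uncordon "$node"
done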
How to build images
After the switch, docker.sock is no longer available on the node, which means you can no longer mount it into a container and run docker build there. Here are several ways to build images that do not depend on docker.sock.
The first is Docker Buildx, which is also what the Kubernetes community uses to build multi-architecture images. For example, you can create a builder backed by the cluster and build an image with the following commands:
docker buildx create --driver kubernetes --driver-opt replicas=3 --use
docker buildx build -t example.com/foo --push .
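With the kubernetes driver, the build itself runs in BuildKit Pods inside the cluster rather than against a local docker.sock. As a further illustrative example (the image name is a placeholder), the same builder can produce a multi-architecture image in one command:

docker buildx build --platform linux/amd64,linux/arm64 -t example.com/foo --push .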
The second is Red Hat's open-source Buildah. Buildah is the default image builder in OpenShift and supports both the OCI and Docker image formats. Buildah is used in much the same way as docker build, for example:
# Build an image
buildah bud -t example.com/foo:latest .
# List local images
buildah images
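Once built, the image can be pushed straight from Buildah as well (log in to the registry first with buildah login; the image name here is a placeholder):

buildah push example.com/foo:latest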
The third is Google's open-source kaniko. kaniko can also build images from a Dockerfile without a Docker daemon. Note that kaniko requires a build context to be passed on the command line; the context can come from standard input or from remote storage such as an AWS S3 bucket, Azure Blob Storage, or a GCS bucket.
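As a rough sketch of how this looks in practice, the kaniko executor can be run as a one-off Pod with a Git repository as the build context. The repository URL and image name below are placeholders, and --no-push is used so no registry credentials are needed for this test:

kubectl run kaniko-demo --restart=Never \
  --image=gcr.io/kaniko-project/executor:latest -- \
  --context=git://github.com/example/foo.git \
  --destination=example.com/foo:latest \
  --no-push

# Follow the build logs, then clean up
kubectl logs -f kaniko-demo
kubectl delete pod kaniko-demo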
Conclusion
Now that Docker is deprecated, the Kubernetes container runtime can be switched to a community-maintained, CRI-compatible container engine such as containerd or CRI-O. After the switch, keep in mind that applications that used docker build to build images need to move to tools that can build images without dockerd, such as Docker Buildx, Buildah, or kaniko.
Follow the "Chat Cloud Native" WeChat official account to learn more about cloud native.