Preface

In December last year, when the Kubernetes community announced that it would gradually deprecate Dockershim after version 1.20, a lot of we-media posts claimed that Kubernetes was abandoning Docker. I think this is misleading, perhaps just a way to chase the hype.

Dockershim is the Kubernetes component used to talk to Docker. Docker launched in 2013 and Kubernetes in 2014, so Docker did not have orchestration by Kubernetes in mind at first. When Kubernetes was created it used Docker as its container runtime, and a lot of its operational logic targeted Docker directly. As the community grew, in order to be compatible with more container runtimes, the Docker-specific logic was split out to form Dockershim.

Because of this, whenever Kubernetes or Docker changes, Dockershim has to be maintained to keep the two working together. But operating Docker through Dockershim ultimately means operating Containerd, Docker's underlying runtime, and Containerd itself supports CRI (Container Runtime Interface). So why keep an extra layer that goes through Docker? Why not interact with Containerd directly via CRI? This is one of the reasons the community wanted to remove Dockershim.

Let's take a look at how removing Dockershim affects users and maintainers.

For upper-level users it doesn't really matter much, because those details are already hidden from them; they just use the cluster. The impact falls mainly on us "YAML engineers", because we have to decide which container runtime to run. If we continue to use Docker, will future version upgrades be affected? If we drop Docker, will maintenance cost, complexity and learning cost increase? In fact both options remain available: if you want to keep using Docker you can, and if you want to use Containerd you can. However, the Kubernetes community will no longer maintain Dockershim; Mirantis and Docker have agreed to maintain the Dockershim component together. In other words, Dockershim can still serve as a bridge to Docker, it just changes from a built-in part of Kubernetes to a standalone component.

So what is Containerd?

Containerd is a project that was split out of Docker; it provides a container runtime that manages the image and container lifecycle, and it can work completely independently of Docker. Its features are as follows:

  • Supports the OCI image specification
  • Supports the OCI runtime specification (runc)
  • Supports image pull and push
  • Supports container network management
  • Supports multi-tenant image storage
  • Supports container runtime and container lifecycle management
  • Supports managing network namespaces
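
Because Containerd works independently of Docker, you can drive it directly with its own ctr command line. A minimal sketch (the nginx image and the container ID are just illustrative examples):

# Pull an image and run a container with containerd's own CLI, no Docker involved
$ ctr images pull docker.io/library/nginx:alpine
$ ctr run -d docker.io/library/nginx:alpine nginx-test
$ ctr tasks ls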

The differences between Containerd and Docker are as follows:

Function                    Docker                     Containerd (crictl)
List local images           docker images              crictl images
Pull an image               docker pull                crictl pull
Push an image               docker push                not supported
Delete a local image        docker rmi                 crictl rmi
View image details          docker inspect IMAGE-ID    crictl inspecti IMAGE-ID
List containers             docker ps                  crictl ps
Create a container          docker create              crictl create
Start a container           docker start               crictl start
Stop a container            docker stop                crictl stop
Remove a container          docker rm                  crictl rm
View container details      docker inspect             crictl inspect
Attach to a container       docker attach              crictl attach
Exec into a container       docker exec                crictl exec
View container logs         docker logs                crictl logs
View resource statistics    docker stats               crictl stats

You can see that the usage is pretty much the same.
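
For example, pulling an image and listing it locally looks almost identical with the two CLIs (crictl talks to the CRI endpoint, which we configure later in this post):

# With Docker
$ docker pull nginx:alpine
$ docker images | grep nginx
# With Containerd, via the CRI client crictl
$ crictl pull nginx:alpine
$ crictl images | grep nginx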

The following steps describe how to install a Kubernetes cluster with kubeadm, using Containerd as the container runtime.

Environment

Host nodes:

IP address       OS           Kernel
192.168.0.5      CentOS 7.6   3.10
192.168.0.125    CentOS 7.6   3.10

Software versions:

Software      Version
kubernetes    1.20.5
containerd    1.4.4

Environment preparation

(1) Add hosts information on each node:

$ cat /etc/hosts
192.168.0.5    k8s-master
192.168.0.125  k8s-node01

(2) Disable firewall:

$ systemctl stop firewalld
$ systemctl disable firewalld

(3) Disable SELinux:

$ setenforce 0
$ cat /etc/selinux/config
SELINUX=disabled
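
To make the change persistent across reboots, you can edit /etc/selinux/config with sed and then check the current mode (a small sketch, assuming the default SELINUX=enforcing line):

$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
$ getenforce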

(4) Create the /etc/sysctl.d/k8s.conf file and add the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

(5) Run the following command to make the modification take effect:

$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf

(6) Install the IPVS kernel modules:

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates the /etc/sysconfig/modules/ipvs.modules file so that the node automatically loads the required modules after a restart. Use the lsmod | grep -e ip_vs -e nf_conntrack_ipv4 command to check whether the required kernel modules have been loaded correctly.

(7) Install the ipset package:

$ yum install ipset -y

To view the IPVS proxy rules, it is best to also install the management tool ipvsadm:

$ yum install ipvsadm -y
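
Later, once kube-proxy is running in IPVS mode, you can inspect the virtual-server rules it creates:

$ ipvsadm -Ln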

(8) Synchronize server time

$ yum install chrony -y
$ systemctl enable chronyd
$ systemctl start chronyd
$ chronyc sources

(9) Disable swap partition:

$ swapoff -a

(10) Modify the /etc/fstab file, comment out the swap mount entry (see the sketch below), and run free -m to confirm that swap is disabled. Then add the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0

Run the sysctl -p /etc/sysctl.d/k8s.conf command for the modification to take effect.
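
A minimal sketch of commenting out the swap entry in /etc/fstab (assuming the entry contains the word "swap") and confirming that no swap is in use:

# Comment out any swap lines in /etc/fstab, then verify
$ sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
$ free -m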

(11) Next, install Containerd:

$ yum install -y yum-utils \
 device-mapper-persistent-data \
 lvm2
$ yum-config-manager \
 --add-repo \
 https://download.docker.com/linux/centos/docker-ce.repo
$ yum list | grep containerd

Choose a version to install; here we use 1.4.4, the latest at the time of writing:

$ yum install containerd.io-1.4.4 -y

(12) Create the Containerd configuration file:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Modify the configuration: use the Aliyun mirrors and enable the systemd cgroup driver
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g"  /etc/containerd/config.toml
sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g"  /etc/containerd/config.toml
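
You can quickly confirm that the substitutions took effect by grepping for the keys the sed commands touch:

$ grep -nE "sandbox_image|SystemdCgroup|aliyuncs" /etc/containerd/config.toml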

(13) Start Containerd

systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd
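
A quick sanity check that the service is up and the ctr client can reach it:

$ systemctl is-active containerd
$ ctr version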

Next, install kubeadm by adding a yum repository; here we use the Aliyun mirror:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
 http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Then install kubeadm, kubelet, and kubectl:

$ yum install -y kubelet-1.20.5 kubeadm-1.20.5 kubectl-1.20.5

Point crictl at the Containerd runtime endpoint:

$ crictl config runtime-endpoint /run/containerd/containerd.sock
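
With the endpoint configured, crictl should now be able to talk to Containerd (no images are expected yet):

$ crictl info | head
$ crictl images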

We have installed v1.20.5; now configure kubelet to start on boot:

$ systemctl daemon-reload
$ systemctl enable kubelet && systemctl start kubelet

All of the operations up to this point need to be performed on every node.


Initializing a Cluster

Initialize the Master

Then configure the kubeadm initialization file on the master node. You can export the default initialization configuration by using the following command:

$ kubeadm config print init-defaults > kubeadm.yaml

We then modify the configuration to our own needs: for example the imageRepository value, and the kube-proxy mode set to IPVS. Note that since we use Containerd as the runtime, the criSocket must point to the Containerd socket and cgroupDriver needs to be set to systemd:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.5
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock 
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
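
Optionally, you can pre-pull the required control-plane images with the same configuration file (the init output below also suggests this):

$ kubeadm config images pull --config kubeadm.yaml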

Then use the configuration file above to initialize:

$ kubeadm init --config=kubeadm.yaml

[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.5]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 70.001862 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec

Copy the kubeconfig file:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config


Add a node

If you want to use kubectl on the worker node as well, copy the $HOME/.kube/config file from the master node to the same location on the node. Then execute the join command printed at the end of the initialization above:

# kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If you forget the join command, you can use kubeadm token create --print-join-command to obtain it again.
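
For example, on the master node:

$ kubeadm token list
$ kubeadm token create --print-join-command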

After the join command succeeds, run kubectl get nodes:

$ kubectl get no
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   29m   v1.20.5
k8s-node01   NotReady   <none>                 28m   v1.20.5

You can see that both nodes are in the NotReady state because the network plugin has not been installed yet. Next we install one; you can pick a network plugin of your own from the documentation at kubernetes.io/docs/setup/… Here we use Calico:

$ wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

Because some nodes have multiple network interfaces, you need to specify the internal interface in the resource manifest:

$ vi calico.yaml

...
spec:
  containers:
  - env:
    - name: DATASTORE_TYPE
      value: kubernetes
    # Add this environment variable to the calico-node DaemonSet
    - name: IP_AUTODETECTION_METHOD
      value: interface=eth0    # Specify the internal NIC
    - name: WAIT_FOR_DATASTORE
      value: "true"
    # The pod network segment was set to 172.16.0.0/16 in the init configuration, so it must be changed here as well
    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/16"

Install the Calico network plugin:

$ kubectl apply -f calico.yaml

Check Pod status every once in a while:

# kubectl get pod -n kube-system 
NAME                                      READY   STATUS              RESTARTS   AGE
calico-kube-controllers-bcc6f659f-zmw8n   0/1     ContainerCreating   0          7m58s
calico-node-c4vv7                         1/1     Running             0          7m58s
calico-node-dtw7g                         0/1     PodInitializing     0          7m58s
coredns-54d67798b7-mrj2b                  1/1     Running             0          46m
coredns-54d67798b7-p667d                  1/1     Running             0          46m
etcd-k8s-master                           1/1     Running             0          46m
kube-apiserver-k8s-master                 1/1     Running             0          46m
kube-controller-manager-k8s-master        1/1     Running             0          46m
kube-proxy-clf4s                          1/1     Running             0          45m
kube-proxy-mt7tt                          1/1     Running             0          46m
kube-scheduler-k8s-master                 1/1     Running             0          46m

The network plug-in is running successfully and the Node status is normal:

# kubectl get nodes 
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   47m   v1.20.5
k8s-node01   Ready    <none>                 46m   v1.20.5

Add any other nodes in the same way.

Automatic command completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Pitfalls

After version 1.20, when using NFS for storage, the following error is reported when creating a PVC:

I0323 08:41:25.264754       1 controller.go:987] provision "default/test-nfs-pvc2" class "nfs-client-storageclass": started
E0323 08:41:25.267631       1 controller.go:1004] provision "default/test-nfs-pvc2" class "nfs-client-storageclass": unexpected error getting claim reference: selfLink was empty, can't make reference

This is because Kubernetes 1.20.0 deprecated selfLink; the workaround is to turn it back on by adding the following parameter to kube-apiserver.yaml:

$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
# add this line
- --feature-gates=RemoveSelfLink=false

Then apply it again for the change to take effect:

kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
