Foreword
As a lightweight Kubernetes distribution, K3s is naturally centered on running containers. VM management was originally a core capability of IaaS platforms, but as Kubernetes has evolved, VMs can now be brought under its management as well. Combining the strengths of containers and VMs, the concept of the lightweight VM has emerged: it pairs the lightweight nature of containers with the isolation and security of VMs. Kata Containers is currently a popular open source project in this space. So what happens when lightweight K3s meets the lightweight VM? In this article I will walk through the practice of combining the two.
Note that the main software versions used for deployment in this article are:
K3s v1.17.2+k3s1 and Kata Containers v1.9.3.
Environment preparation
Kata Containers can run on plain Docker as well as on Kubernetes, but this article focuses only on the Kubernetes scenario. Kubelet supports multiple runtimes that implement the CRI interface, including CRI-O and CRI-containerd. Through the Kubernetes RuntimeClass mechanism, a Pod can be configured to use runc to create ordinary containers, or kata-runtime, which creates a lightweight VM instead. kata-runtime supports several lightweight VM implementations: kata-qemu is the default, and Firecracker (another MicroVM) can also be used. The overall call chain is shown in the figure below:
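As a concrete illustration of the RuntimeClass mechanism described above, the objects involved look roughly like this (the `kata-qemu` name matches the upstream kata-deploy YAML; the Pod below is a hypothetical usage sketch, not part of this deployment):

```yaml
# RuntimeClass maps the name "kata-qemu" to the containerd runtime handler
kind: RuntimeClass
apiVersion: node.k8s.io/v1beta1
metadata:
  name: kata-qemu
handler: kata-qemu
---
# A Pod opts in by naming the RuntimeClass; it is then started inside
# a lightweight VM instead of a plain runc container
apiVersion: v1
kind: Pod
metadata:
  name: demo-kata
spec:
  runtimeClassName: kata-qemu
  containers:
  - name: demo
    image: nginx
```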
Whatever VM technology is chosen, KVM support is required: the mainstream MicroVM implementations are all KVM-based, so our environment needs a host that supports KVM. You can use bare metal, a KVM-enabled PC server, or nested virtualization. After KVM is enabled on the host, verify it with the following command:
# Check virtualization (need >0)
grep -cw vmx /proc/cpuinfo
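As a minimal sketch of what that check does, the snippet below counts cpuinfo-style lines carrying the `vmx` (Intel VT-x) or `svm` (AMD-V) flag. The helper name and the sample input are illustrative only; on a real host you would feed `/proc/cpuinfo`:

```shell
# count_virt_flags: count cpuinfo-style lines that carry a hardware
# virtualization flag (vmx = Intel VT-x, svm = AMD-V); illustrative helper
count_virt_flags() {
  grep -Ecw 'vmx|svm'
}

# On a real host:
#   count_virt_flags < /proc/cpuinfo    # must print a value > 0
sample='flags : fpu vme vmx ssse3
flags : fpu vme vmx ssse3'
printf '%s\n' "$sample" | count_virt_flags
```

Checking for `svm` as well covers AMD hosts, where the Intel-only `vmx` check in the command above would report zero.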
Install K3s and Kata Containers
K3s installation and deployment is very simple; just follow the official documentation:
Rancher.com/docs/k3s/la…
The author uses a GCP host with nested virtualization enabled, so the online script installation is used:
curl -sfL https://get.k3s.io | sh -
# Back up the original containerd configuration for later use
cd /var/lib/rancher/k3s/agent/etc/containerd/
cp config.toml config.toml.base
Deploying Kata Containers with kata-deploy (github.com/kata-contai… ) has a few caveats.
First set up RBAC: install the default RBAC YAML from the documentation, then make a small change:
kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-rbac/base/kata-rbac.yaml
# Edit the ClusterRole node-labeler to authorize the new
# leases resource under coordination.k8s.io
kubectl edit clusterrole node-labeler
...
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - patch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - list
Then deploy Kata using the default YAML from the documentation:
kubectl apply -k github.com/kata-containers/packaging/kata-deploy/kata-deploy/overlays/k3s
Check the kata-deploy Pod in action and you will almost certainly see an error message:
crictl ps -a | grep kube-kata
crictl logs <kube-kata-container-id>
...
Failed to restart containerd.service: Unit containerd.service not found.
The containerd configuration has been written successfully, but the restart fails: K3s currently manages its embedded containerd itself rather than through systemd, which kata-deploy does not recognize. We therefore have to finish the process manually:
# Create /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
# starting from the original configuration
cat config.toml.base > config.toml.tmpl

# Append the following to the end of config.toml.tmpl.
# You can add only the runtimes you use, or add them all:
[plugins.cri.containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
[plugins.cri.containerd.runtimes.kata.options]
ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration.toml"
[plugins.cri.containerd.runtimes.kata-fc]
runtime_type = "io.containerd.kata-fc.v2"
[plugins.cri.containerd.runtimes.kata-fc.options]
ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-fc.toml"
[plugins.cri.containerd.runtimes.kata-qemu]
runtime_type = "io.containerd.kata-qemu.v2"
[plugins.cri.containerd.runtimes.kata-qemu.options]
ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
[plugins.cri.containerd.runtimes.kata-qemu-virtiofs]
runtime_type = "io.containerd.kata-qemu-virtiofs.v2"
[plugins.cri.containerd.runtimes.kata-qemu-virtiofs.options]
ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu-virtiofs.toml"
[plugins.cri.containerd.runtimes.kata-nemu]
runtime_type = "io.containerd.kata-nemu.v2"
[plugins.cri.containerd.runtimes.kata-nemu.options]
ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-nemu.toml"
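The manual steps above can be sketched end to end as follows. The temp directory stands in for `/var/lib/rancher/k3s/agent/etc/containerd/`, and only the plain `kata` runtime entry is shown:

```shell
# Assemble a containerd config template the way the manual steps describe:
# start from the backed-up base config, then append a kata runtime section.
workdir=$(mktemp -d)            # stands in for the real containerd dir
echo '# (original k3s containerd config)' > "$workdir/config.toml.base"

cat "$workdir/config.toml.base" > "$workdir/config.toml.tmpl"
cat >> "$workdir/config.toml.tmpl" <<'EOF'
[plugins.cri.containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"
  [plugins.cri.containerd.runtimes.kata.options]
    ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration.toml"
EOF

grep -c 'runtime_type' "$workdir/config.toml.tmpl"   # one runtime registered
```

K3s renders `config.toml.tmpl` into `config.toml` on startup, which is why the additions go into the template rather than the generated file.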
Since kata-deploy overwrites the K3s containerd configuration every time it starts, once everything is ready we can delete kata-deploy and restart K3s to avoid interference:
# Since stopping kata-deploy cleans up the installed Kata programs,
# remove this mechanism before deleting kata-deploy:
# edit kata-deploy and delete the lifecycle preStop hook
lifecycle:
preStop:
exec:
command:
- bash
- -c
- /opt/kata-artifacts/scripts/kata-deploy.sh cleanup
kubectl delete -k github.com/kata-containers/packaging/kata-deploy/kata-deploy/overlays/k3s
systemctl restart k3s.service
Run the demo
K3s only supports kata-qemu here, so we only need to install the kata-qemu RuntimeClass:
kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/k8s-1.14/kata-qemu-runtimeClass.yaml
Add the workload
kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/examples/test-deploy-kata-qemu.yaml
kubectl get deploy php-apache-kata-qemu
NAME READY UP-TO-DATE AVAILABLE AGE
php-apache-kata-qemu 1/1 1 1 1m
Verify that kata-qemu creates the VM properly:
ps aux | grep qemu
root  3589  0.9  0.9 2490176 151368 ?  Sl  06:49  0:15 /opt/kata/bin/qemu-system-x86_64 ...

# Enter the container inside the VM
crictl ps | grep php-apache
crictl exec -it <php-apache-container-id> bash
# uname -r
4.19.75
# exit

# Check the host kernel
uname -r
5.0.0-1029-gcp
We can see that each Kata container created by K3s has its own separate kernel rather than sharing the host kernel.
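The kernel difference observed above can be stated concretely (version strings taken from the output; a trivial illustration):

```shell
# Guest vs host kernel versions, as printed above
guest_kernel="4.19.75"         # from uname -r inside the Kata container
host_kernel="5.0.0-1029-gcp"   # from uname -r on the host
if [ "$guest_kernel" != "$host_kernel" ]; then
  echo "isolated: guest runs its own kernel ($guest_kernel != $host_kernel)"
fi
```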
Afterword
As K3s continues to evolve, more and more software supports installation and deployment not only on full Kubernetes but also on K3s. Beyond its ongoing development in DevOps and edge computing, K3s is increasingly accepted by open source products as a carrier for running their software, helping them deliver a lightweight, out-of-the-box experience to users.