Kubernetes is everywhere: developer laptops, Raspberry Pis, clouds, data centers, hybrid clouds, and even multi-clouds. It has become the foundation of modern infrastructure, abstracting the underlying compute, storage, and networking services. Kubernetes hides the differences between infrastructure environments and makes workload portability across them a reality.
Kubernetes has also become a universal control plane for orchestration, managing not just containers but also resources such as virtual machines, databases, and even SAP HANA instances.
Despite Kubernetes' rapid growth, it still presents many challenges for developers and operators. One of the key challenges is running Kubernetes at the edge. The edge is very different from the cloud or the data center: it runs in remote locations under highly constrained conditions. Edge devices have only a fraction of the compute, storage, and networking resources of comparable machines in a data center. They are intermittently connected to the cloud and operate primarily offline. These factors make it difficult to deploy and manage Kubernetes clusters at the edge.
With this in mind, Rancher Labs, creator of the industry's most widely used Kubernetes management platform, released K3s, a distribution of Kubernetes that is highly optimized for the edge. Although K3s is a simplified, slimmed-down version of Kubernetes, the consistency and functionality of the API are not compromised. From kubectl to Helm, almost all the tools of the cloud native ecosystem integrate seamlessly with K3s. In fact, K3s is a CNCF-certified, conformant Kubernetes distribution that can be deployed in production. Almost any workload that runs on a full Kubernetes cluster is guaranteed to work on a K3s cluster.
Kubernetes, a 10-letter word, is known in the community as K8s. Since K3s aims for roughly half the memory footprint of Kubernetes, Rancher wanted a five-letter word for the new distribution and stylized it, in the same fashion, as K3s.
Learn more about the K3s architecture
The appeal of K3s lies in its simplicity. K3s is packaged and deployed as a single binary (about 100 MB), and you get a fully fledged Kubernetes cluster in a matter of seconds. The installation experience is as simple as running a script on each node of the cluster.
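As a sketch of that install experience, the quick-start flow from the K3s docs is a single command per node (run as root or via sudo on a Linux host):

```shell
# Download and run the K3s installer; this sets up a server node with
# the control plane, kubelet, and containerd in one process.
curl -sfL https://get.k3s.io | sh -

# The binary bundles kubectl; verify that the node reports Ready.
sudo k3s kubectl get nodes
```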
The K3s binary is a self-contained package that runs almost all components of a Kubernetes cluster, including the API server, scheduler, and controller manager. By default, each K3s installation includes the control plane, the kubelet, and the containerd runtime, which are sufficient to run Kubernetes workloads. You can also add dedicated worker nodes that run only the kubelet agent and the containerd runtime to schedule and manage pod life cycles.
Unlike a traditional Kubernetes cluster, K3s makes no hard distinction between master and worker nodes. Pods can be scheduled and managed on any node, regardless of role, so the terms master node and worker node do not apply to a K3s cluster.
In a K3s cluster, nodes that run the control plane components along with the kubelet are called servers, while nodes that run only the kubelet are called agents. Both servers and agents run a container runtime and a kube-proxy that manages tunneling and network traffic within the cluster.
In a typical K3s environment, you run one server and multiple agents. During installation, if you pass the URL of an existing server, the node joins as an agent; otherwise, you end up with another standalone K3s cluster with its own control plane.
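Joining an agent uses the same install script; setting the documented K3S_URL and K3S_TOKEN variables is what switches the installer into agent mode. The URL below is a placeholder, and the token is the one the server generates at install time:

```shell
# On the server: read the cluster join token generated at install time.
sudo cat /var/lib/rancher/k3s/server/node-token

# On the agent node: K3S_URL makes the installer run in agent mode and
# register against the given server. Substitute your own URL and token.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://203.0.113.10:6443" \
  K3S_TOKEN="<token-from-the-server>" sh -
```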
So how did Rancher shrink K3s? First, they removed many optional components of Kubernetes that are not essential for running a minimal cluster. They then added the necessary elements, including containerd, Flannel, CoreDNS, the CNI plugins, the Traefik ingress controller, a local storage provisioner, an embedded service load balancer, and an integrated network policy controller. All of these components are packaged into a single binary and run in the same process. Beyond that, the distribution also supports Helm charts out of the box.
The upstream Kubernetes distribution is bloated, with a lot of code that can be stripped out. For example, the in-tree storage volume plugins and cloud provider APIs greatly increase the distribution's footprint. K3s omits all of this to minimize the size of the binary.
Another key difference is the way cluster state is managed. Kubernetes relies on etcd, a distributed key-value database, to store the state of the entire cluster. K3s replaces etcd with SQLite, a full-fledged embedded SQL database that many mobile applications bundle to store state.
The Kubernetes control plane becomes highly available by running etcd on at least three nodes. SQLite, on the other hand, is not a distributed database. To make the control plane highly available, a K3s server can instead point to an external database endpoint; supported databases include etcd, MySQL, and PostgreSQL. By delegating state to an external database, K3s can run multiple instances of the control plane, making the cluster highly available.
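As a configuration sketch, pointing a server at an external MySQL datastore uses the `--datastore-endpoint` server flag; the hostname, credentials, and secret below are placeholders:

```shell
# Start a K3s server backed by an external MySQL database instead of
# the embedded SQLite file. DSN values here are placeholders.
k3s server \
  --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s" \
  --token="<shared-cluster-secret>"

# Additional servers started with the same endpoint and token join the
# same control plane, which is what makes the HA setup possible.
```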
Rancher is experimenting with Dqlite, a distributed version of SQLite, which could eventually become the default data store for K3s.
The biggest advantage of K3s is its batteries-included-but-replaceable approach. For example, you can replace the containerd runtime with Docker CE, Flannel with Calico, the local storage provisioner with Longhorn, and so on.
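A sketch of a few such swaps, using the documented K3s server flags (the replacement CNI or storage system itself is installed separately afterwards):

```shell
# Use the host's Docker daemon instead of the bundled containerd.
k3s server --docker

# Disable Flannel and the built-in network policy controller so that a
# different CNI such as Calico can be installed in its place.
k3s server --flannel-backend=none --disable-network-policy

# Skip the packaged Traefik ingress and local-path storage add-ons,
# e.g. to deploy Longhorn for storage instead.
k3s server --disable traefik --disable local-storage
```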
For a detailed discussion of the K3s architecture, I highly recommend watching K3s architect Darren Shepherd's talk at KubeCon North America 2019:
youtu.be/-HchRyqNtkU
K3s deployment scenarios and topologies
The K3s release supports a variety of architectures, including AMD64, ARM64, and ARMv7. With a consistent installation experience, K3s can run on a Raspberry Pi Zero, an NVIDIA Jetson Nano, an Intel NUC, or an Amazon EC2 a1.4xlarge instance.
In environments where a single-node Kubernetes cluster is enough, install K3s on a server or edge device and keep the same manifest-based deployment workflow. This gives you the flexibility to reuse your existing CI/CD pipelines and container images, as well as your Helm charts or YAML files.
If you need a highly available cluster running on the AMD64 or ARM64 architecture, install a three-node etcd cluster, followed by three K3s servers and one or more agents. This gives you a production-grade environment with an HA control plane.
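A sketch of that topology, assuming an etcd cluster is already reachable at the placeholder endpoints below; each of the three servers runs the same command:

```shell
# Point each K3s server at the same external etcd cluster. Endpoints,
# CA file path, and token are placeholders for your environment.
k3s server \
  --datastore-endpoint="https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379" \
  --datastore-cafile=/etc/etcd/ca.crt \
  --token="<shared-cluster-secret>"
```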
When running a K3s cluster in the cloud, point the servers at a managed database, such as Amazon RDS or Google Cloud SQL, to run a highly available control plane with multiple agents. Each K3s server can run in a different availability zone for maximum uptime.
If you run K3s in an edge computing environment with a reliable, always-on connection, you can run the servers in the cloud and the agents at the edge. This gives you a highly available, easily managed control plane in the cloud while the agents run in remote environments.
Finally, you can deploy a K3s HA control plane in 5G edge locations, such as AWS Wavelength and Azure Edge Zones, with the agents running on devices. This topology suits smart building, smart factory, and smart healthcare scenarios.
In the next article in this series, I'll walk through deploying an HA cluster in an edge environment, so stay tuned!