Have you ever wanted to try K3s in high-availability mode, but don’t have three spare machines, or the time it takes to set up the same number of virtual machines? Then you may really need k3d!

If you don’t know k3d yet, its name is a good hint: K3s in Docker. k3d is a lightweight wrapper for running K3s in Docker. With k3d, you can easily create single-node or multi-node K3s clusters inside Docker for local Kubernetes development.

k3d lets you spin up a K3s cluster in very little time, and its few but very useful commands are quick to learn. Because k3d runs everything inside Docker, you can add or remove nodes without any extra setup. In this article, we will show you how to use k3d to set up a single-node K3s cluster, and then how to use it to set up K3s in high-availability mode.

The two main goals of this article are to introduce k3d as a tool for deploying K3s clusters and to show how K3s high availability withstands node failures. Along the way, you’ll also learn which components K3s deploys in a cluster by default.

Preparation

When it comes to operating systems (Linux, macOS, Windows), everyone has their own preference. So before we look at the setup used in this article, there are only two hard requirements: Docker and a Linux shell.

If you’re running macOS or Windows, Docker Desktop is the preferred way to get Docker. For Linux, you can get Docker Engine and the CLI here:

www.docker.com/products/do…

As for the Linux shell, macOS and Linux already ship with one. For Windows, the easiest and fastest solution is WSL2, which we will use in the demo.

Here is the setup we’ll use:

  • OS: Windows 10 Version 2004 (Build: 19041)

  • OS components: Virtual Machine Platform and Windows Subsystem for Linux

  • Installation steps:

  • docs.microsoft.com/en-us/windo…

  • WSL2 distribution: Ubuntu

  • Windows Store address:

  • www.microsoft.com/en-us/p/ubu…

  • Optional Console: Windows Terminal

  • Windows Store address:

  • www.microsoft.com/en-us/p/win…

Step 1: Start with the installation

Visit the link below to learn how to install k3d:

k3d.io/#installati…

In this article, we will install k3d with curl.

Please note: piping scripts straight from URLs into your shell carries real security risks. Before running any script this way, make sure the source is the project’s official website or its online Git repository.
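One safer habit is to download the script to a file, read it, and only then execute it. The sketch below illustrates that pattern with a harmless local stand-in script; it is not the real k3d installer.

```shell
# A sketch of the download-review-run pattern. 'install.sh' here is a
# harmless local stand-in, NOT the real k3d installer.
cat > install.sh <<'EOF'
#!/bin/sh
echo "installing..."
EOF

# 1. Read the script before executing anything.
cat install.sh

# 2. Optionally scan for obviously suspicious patterns.
grep -nE 'rm -rf|curl[^|]*\| *sh' install.sh || echo "no obvious red flags"

# 3. Run it only once you are happy with its contents.
sh install.sh
```

For a real install, you would curl the script from the project’s site into a file instead of generating one locally; the review and run steps stay the same.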

Here are the installation steps:

Visit k3d.io/#installati…

Copy the “curl” installation command and run it in your terminal:

curl -s raw.githubusercontent.com/rancher/k3d… | bash

Note: two useful commands to verify the installation:

  • k3d version: prints the installed k3d version

  • k3d --help: lists the commands available in k3d

k3d is now installed and ready to go.

Step 2: Start with a single-node cluster

Before we create an HA cluster, let’s start with a single-node cluster to get familiar with the command syntax and to see what k3d deploys by default.

First, the syntax. k3d v3 changed the command structure significantly. We won’t go into how the previous commands worked; we will use the v3 syntax.

k3d follows a “noun + verb” syntax: first specify the object we want to act on (cluster or node), then the operation we want to apply (create, delete, start, stop).
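As a quick illustration (not an exhaustive k3d reference), the grammar can be enumerated; note that not every noun/verb pair necessarily exists, but cluster and node both support the common verbs:

```shell
# Enumerate the k3d v3 "noun + verb" grammar. This only prints the
# command strings; it does not invoke k3d itself.
for noun in cluster node; do
  for verb in create delete start stop list; do
    echo "k3d $noun $verb"
  done
done
```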

Create a single-node cluster

We will use k3d to create a single-node cluster with the default values:

k3d cluster create

Note: the output of the k3d cluster create command suggests running another command to check that the cluster is running and reachable: kubectl cluster-info

Now the cluster is up and running!

A peek inside

We can look at what has been deployed from several different angles.

Let’s start at the beginning and see what is in a K3s cluster (Pods, Services, Deployments, and so on):

kubectl get all --all-namespaces

We can see that, in addition to the Kubernetes Service, K3s deploys DNS, metrics, and ingress (Traefik) services when we use the default values.

Now let’s look at the node from a different perspective.

First, let’s examine it from a cluster perspective:

kubectl get nodes --output wide

As expected, we only see one node. Now let’s look at it from k3d’s perspective:

k3d node list

Now we see two nodes. A clever implementation detail here: besides the node the cluster runs on, k3d-k3s-default-server-0, there is another “node” acting as a load balancer. While this may not matter much for a single-node cluster, it will save us a lot of effort in our HA cluster.

Finally, we can see both nodes from Docker’s perspective:

docker ps

Clean up resources

Our single-node cluster helped us understand k3d’s mechanics and commands. Now let’s clean up these resources before deploying the HA cluster:

k3d cluster delete

Please note: for demo purposes, we also ran the following commands:

  • k3d cluster list: lists active k3d clusters

  • kubectl cluster-info: checks cluster connectivity

  • docker ps: checks for active containers

We have now created, examined, and deleted a single-node cluster using k3d. Next, let’s try HA.

Step 3: Welcome to the world of HA

Before we open the command line, let’s get a basic understanding of what we are going to deploy and of some additional requirements.

First, Kubernetes HA has two possible setups: with an embedded or with an external database. We will use the embedded-database setup.

Second, K3s supports two different technologies for embedded-database HA: one based on Dqlite (K3s v1.18) and one based on etcd (K3s v1.19+).

This means that etcd is the default in the current stable version of K3s, and it is the one we will use in this article; Dqlite has been deprecated.

As of this writing, the K3s version k3d uses by default is v1.18.9-k3s1. You can check it with k3d version:

So does this mean we need to reinstall k3d to get K3s v1.19 support? Of course not!

We can keep the currently installed version of k3d and still use K3s v1.19, because:

  • k3d lets us specify which K3s Docker image to use

  • All versions of K3s are released as container images

For the reasons above, we can simply use a K3s v1.19 container image published on Docker Hub:

hub.docker.com/r/rancher/k…

Now, let’s create our first K3s HA cluster using k3d.

Three control planes

According to Kubernetes HA best practices, we should use at least three control planes to create an HA cluster.
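The reason for the odd number comes from etcd’s quorum rule: a cluster of n members stays writable only while a majority, floor(n/2) + 1, of them is up, so it tolerates n minus quorum failures. A small shell sketch of the arithmetic:

```shell
# etcd quorum arithmetic: quorum = floor(n/2) + 1,
# tolerated failures = n - quorum.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

With three servers, one node can fail and the cluster keeps its majority; with only two, losing either node loses quorum, which is why two control planes buy no extra fault tolerance.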

In k3d, we use the following command:

k3d cluster create --servers 3 --image rancher/k3s:v1.19.3-k3s2

Understanding the command:

Base command: k3d cluster create

Options:

  • --servers 3: requests the creation of three nodes with the server role.

  • --image rancher/k3s:v1.19.3-k3s2: specifies the K3s image to use.

Now we can examine the cluster we just created from different perspectives:

kubectl get nodes --output wide

As you can see, we check several aspects to make sure our nodes are working properly.

If we look at the deployed components, our DaemonSets now have 3 replicas instead of 1:

kubectl get all --all-namespaces

The final check is to see which node the Pods are running on:

kubectl get pods --all-namespaces --output wide

Now we have the foundation of an HA cluster. Let’s add an extra control-plane node, then deliberately break nodes to see how the cluster behaves.

Extending the cluster

Thanks to k3d and the fact that our cluster runs in containers, we can quickly simulate adding another control-plane node to the HA cluster:

k3d node create extraCPnode --role=server --image=rancher/k3s:v1.19.3-k3s2

Understanding the command:

Base command: k3d node create

Options:

  • extraCPnode: the base name k3d uses to build the final node name.

  • --role=server: sets the node’s role to control plane.

  • --image=rancher/k3s:v1.19.3-k3s2: specifies the K3s image to use.

As you can see, we checked from different angles to make sure the new control-plane node was working properly.

With this additional node added, we are ready for the final test: taking down node0!

HA: a heavily armored, crash-proof vehicle

node0 is usually the node (referenced by IP or hostname) that our kubeconfig points to, so it is the one our kubectl client tries to connect to when running commands.

Since we are using containers, the best way to “break” a node is to simply stop the container.

docker stop k3d-k3s-default-server-0

Please note: the Docker and k3d commands reflect the state change immediately. However, it takes a short while before the Kubernetes cluster marks the node as NotReady.
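If you are scripting this, a small polling loop can wait for the transition instead of re-running kubectl by hand. In the sketch below, check_node is a stub that simulates the delay; on a real cluster you would replace its body with something like kubectl get node k3d-k3s-default-server-0 --no-headers | grep -q NotReady (the node name assumes k3d’s default naming):

```shell
# Poll until a condition holds or a timeout expires. check_node is a
# stub simulating the delay before Kubernetes reports NotReady; swap
# its body for a real kubectl check on an actual cluster.
attempts=0
check_node() {
  attempts=$(( attempts + 1 ))
  [ "$attempts" -ge 3 ]   # stub: condition becomes true on the 3rd poll
}

timeout=30
elapsed=0
until check_node; do
  if [ "$elapsed" -ge "$timeout" ]; then
    echo "timed out waiting for NotReady" >&2
    exit 1
  fi
  sleep 1
  elapsed=$(( elapsed + 1 ))
done
echo "node reported NotReady after ${elapsed}s"
```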

Meanwhile, our cluster still responds to our kubectl commands.

Now is a good time to revisit the load balancer k3d created and its importance in letting us keep accessing the K3s cluster.

From the outside, we keep using the same IP/hostname while, internally, the load balancer switches over to the next available node. This abstraction saves us a lot of management work and is one of k3d’s most useful features.
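You can see this abstraction in the kubeconfig k3d writes: the server URL points at the load balancer’s published port, not at any individual server node. The fragment below is a minimal, made-up example (k3d chooses the real host port at cluster creation time):

```shell
# A minimal, made-up kubeconfig fragment for illustration only; k3d
# generates the real file, choosing the host port at creation time.
cat > sample-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: k3d-k3s-default
  cluster:
    server: https://0.0.0.0:6443
EOF

# The server URL targets the k3d load balancer, not server-0, which is
# why kubectl keeps working after we stop a single server container.
grep 'server:' sample-kubeconfig.yaml
```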

Let’s look at the state of the cluster:

kubectl get all --all-namespaces

Everything looks normal. If we look at the Pods in detail, we can see that K3s self-heals by recreating the Pods that ran on the failed node on the other nodes:

kubectl get pods --all-namespaces --output wide

Finally, to show how robust HA is and how K3s manages it, let’s restart node0 and watch it rejoin the cluster as if nothing had happened:

docker start k3d-k3s-default-server-0

Our cluster is stable and all the nodes are up and running again.

Clean up resources again

Now we can delete the local HA cluster, since it has served its purpose. Besides, we know we can easily create a new one.

To clean up our HA cluster, use the following command:

k3d cluster delete

Summary

Even though we created our single-node and HA clusters locally, in containers, we still got to see how K3s behaves with the new embedded etcd database, and it works the same way when K3s is deployed on bare metal or virtual machines.

That said, k3d helps a lot on the management side. It creates a load balancer by default, providing a permanent entry point to the K3s cluster while abstracting that work away. Deployed outside containers, we would have to set all of this up manually.

In this article, we’ve seen how easy it is to set up a highly available K3s cluster with k3d. If you haven’t tried k3d yet, give this tutorial a go: it’s open source and easy to use.