Preface

In this article, we will briefly introduce a reference architecture for setting up K3s in a high availability (HA) configuration. This means that your K3s cluster can tolerate failures and stay up and running, serving traffic to your users. Your application should also be built and configured for high availability, but that is beyond the scope of this article.

In this tutorial, we will use CLI tools to configure an HA K3s cluster on DigitalOcean. We will use MySQL for data storage and a TCP load balancer to provide a stable IP address for the Kubernetes API server.

So why do we need high availability? We could create a K3s cluster by deploying a single K3s server on a virtual machine with a public IP address. Unfortunately, if that virtual machine crashes, our application fails with it. By adding multiple servers and configuring them to work together, the cluster can tolerate the failure of one or more nodes. This is called high availability.

High availability of the control plane

As of K3s version 1.19, the following two methods can be used to achieve high availability:

  • SQL data store: an SQL database can be used to store cluster state, but the database must itself be running in a high-availability configuration to be effective.
  • Embedded etcd: most similar to a traditional Kubernetes setup built with tools such as kops and kubeadm.

In this article, we’ll look at the first approach, which uses an SQL database to store state. The reason we chose SQL is that a single database can scale to support multiple clusters.

High availability of API Server

The Kubernetes API server serves TCP traffic on port 6443. External clients (such as kubectl) connect to the API server using the IP address or DNS entry in their KUBECONFIG file.

This configuration introduces the need for a TCP load balancer in front of all our servers, because if we used the IP address of one of the servers and that server crashed, we would no longer be able to use kubectl. The agents also need to connect to a server on port 6443 to communicate with the cluster.

In this architecture, the user running kubectl and both agents connect to the TCP load balancer. The load balancer uses a list of private IP addresses to balance traffic between the three servers. If one of the servers crashes, it is removed from that list.

The servers use the SQL data store to synchronize the cluster’s state.

Preparation

You will need a DigitalOcean account to follow the steps below, but most of the content applies to other cloud providers as well. We previously published a tutorial on K3s high availability using Alibaba Cloud, which you are welcome to refer to. You will need:

  • Three VMs to run as K3s servers
  • Two VMs to run as K3s agents
  • A managed MySQL service
  • A managed TCP load balancer

If you are using a cloud service without a managed TCP load balancer, consider using a tool like Keepalived or kube-vip. These tools work by advertising a “virtual IP address” that the KUBECONFIG file points to, rather than the address of a single server, as the sketch below illustrates.
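For illustration, here is a minimal sketch of what a Keepalived configuration might look like; the interface name and virtual IP below are assumptions for the example, not values from this tutorial:

# Hypothetical /etc/keepalived/keepalived.conf on the first server.
# The interface (eth0) and virtual IP (192.168.1.50) are placeholders.
cat <<'EOF' > /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the other servers
    interface eth0
    virtual_router_id 51
    priority 100            # give the backups a lower priority
    advert_int 1
    virtual_ipaddress {
        192.168.1.50        # the "virtual IP" your KUBECONFIG would point to
    }
}
EOF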

These instructions should be run in a terminal such as Git Bash on Windows, the macOS Terminal, WSL 1 or 2, or a Linux terminal.

Make sure you have downloaded and installed:

  • DigitalOcean CLI (doctl)
  • Kubernetes CLI (kubectl)
  • K3sup: A widely used open source tool for installing K3s over SSH

All three CLIs can be installed using the arkade get NAME command or brew install NAME.

Arkade: github.com/alexellis/a… Brew: brew.sh

If you’re using Arkade, run the following command:

arkade get doctl
arkade get kubectl
arkade get k3sup

Once you have doctl installed, you need to create an API key with read and write permissions on DigitalOcean’s dashboard and run doctl auth init to authenticate.
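For example, after pasting the token you can confirm that authentication worked with a read-only call:

doctl auth init
doctl account get   # prints your account details if the token is valid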

Add your SSH key to your DigitalOcean dashboard and look up its ID. We will need it when installing K3s over SSH.

If this is your first time using SSH keys, or if you want to learn how to configure them, see the tutorial at: www.digitalocean.com/docs/drople…

Now list your SSH keys and copy the ID:


doctl compute ssh-key list

ID             Name     FingerPrint
24824545       work     e2:31:91:12:31:ad:c7:20:0b:d2:b1:f2:96:2a:22:da

Run the following command:

export SSH_KEY='24824545'

Detailed steps

The easiest way to configure the resources for this tutorial is with DigitalOcean’s dashboard or CLI (doctl). After you complete the tutorial, you may want to automate the steps using a tool like Terraform.

You can use the following link to review DigitalOcean’s droplet sizes and region choices: www.digitalocean.com/docs/apis-c…

Create the nodes

Create three servers with 2GB memory and 1 vCPU:

doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-2gb \
  --region lon1 k3s-server-1 --tag-names k3s,k3s-server --ssh-keys $SSH_KEY
doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-2gb \
  --region lon1 k3s-server-2 --tag-names k3s,k3s-server --ssh-keys $SSH_KEY
doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-2gb \
  --region lon1 k3s-server-3 --tag-names k3s,k3s-server --ssh-keys $SSH_KEY

Create two agents (workers) using the same configuration:

doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-2gb \
  --region lon1 k3s-agent-1 --tag-names k3s,k3s-agent --ssh-keys $SSH_KEY
doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-2gb \
  --region lon1 k3s-agent-2 --tag-names k3s,k3s-agent --ssh-keys $SSH_KEY

The attached tags will be used by the load balancer so that we don’t have to specify node IPs, and we can add more servers later if needed.
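As a quick check, you can list the droplets behind each tag (the --format flag trims the output to the columns we care about):

# All K3s nodes with their public IPs
doctl compute droplet list --tag-name k3s --format Name,PublicIPv4

# Only the servers that the load balancer will target
doctl compute droplet list --tag-name k3s-server --format Name,PublicIPv4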

Create a load balancer

doctl compute load-balancer create --name k3s-api-server \
  --region lon1 --tag-name k3s-server \
  --forwarding-rules entry_protocol:tcp,entry_port:6443,target_protocol:tcp,target_port:6443 \
  --forwarding-rules entry_protocol:tcp,entry_port:22,target_protocol:tcp,target_port:22 \
  --health-check protocol:tcp,port:6443,check_interval_seconds:10,response_timeout_seconds:5,healthy_threshold:5,unhealthy_threshold:3

We need to forward port 6443 for the Kubernetes API server, and port 22 so that k3sup can later fetch the cluster “join token” over SSH.

These rules receive incoming traffic on the load balancer’s IP and forward it to the virtual machines tagged k3s-server.

Make a note of the ID you obtained:

export LB_ID='da247aaa-157d-4758-bad9-3b1516588ac5'

Next, find the IP address of the load balancer:

doctl compute load-balancer get $LB_ID

Write down the value in the IP column:

export LB_IP='157.245.29.149'
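If you prefer not to copy the value by hand, a sketch using doctl’s --format and --no-header flags should work, assuming the load balancer has finished provisioning and been assigned an IP:

export LB_IP=$(doctl compute load-balancer get $LB_ID --format IP --no-header)
echo $LB_IP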

Configure a managed SQL database

doctl databases create k3s-data --region lon1 --engine mysql

The above command will create a version 8 MySQL database.

You’ll also see the connection URI in the output, including the password needed to connect.
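If you need to print the connection details again later, doctl can retrieve them for an existing database; a sketch, where the database ID placeholder comes from the list output:

doctl databases list
doctl databases connection <database-id>   # substitute the ID from the line above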


export DATASTORE="mysql://doadmin:z42q6ovclcwjjqwq@k3s-data-do-user-2197152-0.a.db.ondigitalocean.com:25060/defaultdb?sslmode=require"

To use K3s, we need to modify the string as follows:

export DATASTORE='mysql://doadmin:z42q6ovclcwjjqwq@tcp(k3s-data-do-user-2197152-0.a.db.ondigitalocean.com:25060)/defaultdb'
export INSTALL_K3S_VERSION='v1.19.1+k3s1'

Note that we removed ?sslmode=require and added tcp() around the host name and port.
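If you want to script that transformation, a sed sketch along these lines should work (the RAW_URI variable and the regular expression are my own, assuming the URI has the shape shown above):

# Strip "?sslmode=require" and wrap host:port in tcp(...)
export DATASTORE=$(echo "$RAW_URI" | sed -E -e 's/\?sslmode=require//' -e 's|@([^/]+)/|@tcp(\1)/|')
echo $DATASTORE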

Start the cluster

Let’s use K3sup to start K3s over SSH.

The two most important commands in K3sup are:

  • install: installs K3s on a new server and creates a join token for the cluster
  • join: obtains the join token from a server and uses it to install K3s on an agent

The advantage of using K3sup over other methods is that it tends to be less cumbersome and easier to use with intuitive flags.

You can find these flags and additional options by running k3sup install --help or k3sup join --help.

Install K3s to the server

Set the channel to the latest version, which is 1.19 as of this writing:

export CHANNEL=latest

Before proceeding, check to see if your environment variables were populated earlier, and if not, backtrack and populate them.

echo $DATASTORE
echo $LB_IP
echo $CHANNEL

In the following commands, fill in the IP addresses from the Public IPv4 field:

doctl compute droplet ls --tag-name k3s
export SERVER1=134.209.16.225
export SERVER2=167.99.198.45
export SERVER3=157.245.39.44
export AGENT1=161.35.32.107
export AGENT2=161.35.36.40
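Alternatively, you can script the lookups with doctl’s formatted output instead of copying each address by hand; a sketch, where the get_ip helper is my own and assumes the droplet names created above:

# Look up a droplet's public IP by name
get_ip() {
  doctl compute droplet list --format Name,PublicIPv4 --no-header | awk -v n="$1" '$1 == n {print $2}'
}

export SERVER1=$(get_ip k3s-server-1)
export SERVER2=$(get_ip k3s-server-2)
export SERVER3=$(get_ip k3s-server-3)
export AGENT1=$(get_ip k3s-agent-1)
export AGENT2=$(get_ip k3s-agent-2)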

Now that you have populated the environment variables, run the following command:

k3sup install --user root --ip $SERVER1 \
  --k3s-channel $CHANNEL \
  --print-command \
  --datastore="${DATASTORE}" \
  --tls-san $LB_IP

k3sup install --user root --ip $SERVER2 \
  --k3s-channel $CHANNEL \
  --print-command \
  --datastore="${DATASTORE}" \
  --tls-san $LB_IP

k3sup install --user root --ip $SERVER3 \
  --k3s-channel $CHANNEL \
  --print-command \
  --datastore="${DATASTORE}" \
  --tls-san $LB_IP

k3sup join --user root --server-ip $LB_IP --ip $AGENT1 \
  --k3s-channel $CHANNEL \
  --print-command

k3sup join --user root --server-ip $LB_IP --ip $AGENT2 \
  --k3s-channel $CHANNEL \
  --print-command

Check whether the nodes were added:

export KUBECONFIG=`pwd`/kubeconfig
kubectl get nodes

NAME           STATUS   ROLES    AGE     VERSION
k3s-server-2   Ready    master   18m     v1.19.3+k3s1
k3s-server-3   Ready    master   18m     v1.19.3+k3s1
k3s-agent-1    Ready    <none>   2m39s   v1.19.3+k3s1
k3s-server-1   Ready    master   23m     v1.19.1+k3s1
k3s-agent-2    Ready    <none>   2m36s   v1.19.3+k3s1

Open the KUBECONFIG file and find the IP address, which should be the address of the load balancer.
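A quick way to check this from the terminal (the IP shown is the example load balancer address from earlier):

grep 'server:' kubeconfig
# server: https://157.245.29.149:6443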

Moments later, you can see the status of the load balancer on DigitalOcean’s dashboard.

Simulate a failure

To simulate a failure, stop the k3s service on one or more of the K3s servers, then run kubectl get nodes:

ssh root@$SERVER1 'systemctl stop k3s'
ssh root@$SERVER2 'systemctl stop k3s'

At this point, the third server will take over.

kubectl get nodes
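After the node monitor grace period (roughly 40 seconds by default), the stopped servers are marked NotReady. Hypothetical output, assuming the first two servers were stopped:

NAME           STATUS     ROLES    AGE   VERSION
k3s-server-1   NotReady   master   25m   v1.19.1+k3s1
k3s-server-2   NotReady   master   20m   v1.19.3+k3s1
k3s-server-3   Ready      master   20m   v1.19.3+k3s1
k3s-agent-1    Ready      <none>   9m    v1.19.3+k3s1
k3s-agent-2    Ready      <none>   9m    v1.19.3+k3s1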

Then restart the service on the other two servers:

ssh root@$SERVER1 'systemctl start k3s'
ssh root@$SERVER2 'systemctl start k3s'

If you want to learn more about how Kubernetes handles node disruptions, see: kubernetes.io/docs/concep…

At this point, you can use the cluster to deploy an application, or go on to clean up the resources you configured so that you won’t be charged for any additional use.

Clean up

Delete the droplets

doctl compute droplet rm --tag-name k3s

For the load balancer and database, you need to look up the ID and then use the delete command:

doctl compute load-balancer list
doctl compute load-balancer delete $LB_ID
doctl databases list
doctl databases delete <database-id>   # substitute the ID from the list output

Summary

We have now configured a fault-tolerant, highly available K3s cluster, and put a TCP load balancer in front of it so that users can keep using kubectl even if one of the servers goes down or crashes. Using tags also means that we can add more servers without having to manually update the load balancer’s IP list.

The tools and techniques we use here can also be applied to other cloud platforms that support hosted databases and hosted load balancers, such as AWS, Google Cloud, and Azure.

As you may have noticed from the steps we’ve gone through, high availability is something that takes time and thought to configure correctly. If you plan to create many HA K3s clusters, it may be beneficial to automate these steps using Terraform.

You can also use K3sup to configure HA K3s clusters with embedded etcd instead of a managed database. This reduces cost but increases the load on the servers.

If you want to learn more about K3s, please check out the following documentation: docs.rancher.cn/k3s/

You can also find more information about how K3sup works in its repository on GitHub, including the alternative of using embedded etcd for HA: k3sup.dev/