In order to improve the K3s experience, we are publishing a series of "K3s Dark Magic" articles written by K3s developers to explain the functions and principles of K3s in detail. This article, the second in the series, details the HA deployment practices of K3s and explores their underlying principles.
You are also welcome to add the K3s assistant (WeChat ID: K3s2019) and join the official WeChat group to communicate with other users.
Preface
In the previous article, we discussed how the single K3s process manages the various K8s components, and we demonstrated the runtime behavior of K3s with a single-node deployment. In production practice, however, high availability (HA) is an unavoidable requirement. K3s itself has gone through several iterations, and its HA scheme has been continuously optimized into the relatively stable form it has today. The HA scheme of K3s mainly concerns the following points:
- How to select a datastore in K3s?
- How can K3s worker nodes evenly access the master services?
- Besides the datastore itself, do we need to rely on any other third-party components?
With these questions in mind, let’s start with HA deployment practices and then explore the principles.
HA Deployment Practices
At present, K3s supports SQLite, etcd, MySQL, PostgreSQL, and DQLite as datastores, and different datastores suit different usage scenarios. Most users get SQLite when installing with default parameters, and etcd is already familiar to everyone, so we have chosen MySQL, whose HA practices are widely understood, for this walkthrough. PostgreSQL is similar to MySQL, so we won't repeat it.
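For reference, the datastore is selected through the --datastore-endpoint parameter. The endpoint formats below follow the K3s documentation; the hostnames and credentials are placeholders, and you should verify the exact syntax against the docs for your K3s version:

# SQLite is the default: no --datastore-endpoint needed
# MySQL (the scheme used in this article)
--datastore-endpoint='mysql://user:pass@tcp(mysql-host:3306)/k3sdb'
# PostgreSQL
--datastore-endpoint='postgres://user:pass@pg-host:5432/k3sdb'
# External etcd cluster
--datastore-endpoint='https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379'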
As for the HA of MySQL itself, since this is not a DBA discussion, we simply create the MySQL service with RDS on the public cloud and let K3s consume it. Creating a MySQL service on AWS is very simple, and we can easily get a service instance as follows:
Run the following command on each of two nodes to create two K3s servers (taking K3s v1.0 as an example):
$ curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint='mysql://admin:Rancher2019k3s@tcp(k3s-mysql.csrskwupj33i.ca-central-1.rds.amazonaws.com:3306)/k3sdb'
Note that the database name must not contain special characters.
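The worker-join step later requires the cluster token (referenced as ${K3S_TOKEN} below). On either server it can be read from the standard location:

$ sudo cat /var/lib/rancher/k3s/server/node-token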
On either node, check the nodes:
$ kubectl get no
NAME              STATUS   ROLES    AGE     VERSION
ip-172-31-4-119   Ready    master   7s      v1.16.3-k3s.2
ip-172-31-7-200   Ready    master   2m32s   v1.16.3-k3s.2
Before adding worker nodes, we need to provide a unified access point for the multiple K3s servers, which can be achieved in the following ways:
- L4 load balancer (a sketch follows this list)
- Round-robin DNS
- VIP or elastic IP address
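As a sketch of the first option (this article uses the DNS scheme instead), a minimal L4 (TCP) load balancer in front of the two servers might look like the HAProxy fragment below. HAProxy is my own choice for illustration, not something prescribed by the article, and the addresses are the example servers above:

$ cat <<'EOF' | sudo tee -a /etc/haproxy/haproxy.cfg
frontend k3s-apiserver
    bind *:6443
    mode tcp
    default_backend k3s-servers

backend k3s-servers
    mode tcp
    balance roundrobin
    server k3s-1 172.31.4.119:6443 check
    server k3s-2 172.31.7.200:6443 check
EOF
$ sudo systemctl restart haproxy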
This article adopts the relatively simple DNS scheme: we add A records pointing to the IPs of the K3s servers and obtain a single domain name. With this domain name, we can add a worker:
$ curl -sfL https://get.k3s.io | K3S_URL=https://k3s-t2.niusmallnan.club:6443 K3S_TOKEN=${K3S_TOKEN} sh -

$ kubectl get no
NAME              STATUS   ROLES    AGE   VERSION
ip-172-31-7-200   Ready    master   15m   v1.16.3-k3s.2
ip-172-31-4-119   Ready    master   12m   v1.16.3-k3s.2
ip-172-31-7-217   Ready    <none>   25s   v1.16.3-k3s.2
In this way, the two K3s servers also carry worker attributes, and we have now added a pure worker as well. How does this worker node connect to the K3s server? As discussed in the previous article, worker nodes still run kubelet/kube-proxy as threads inside the K3s process, and these components need to connect to the api-server to obtain resource state. For example, we can look at kubelet's kubeconfig file:
$ cat /var/lib/rancher/k3s/agent/kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:42545
    certificate-authority: /var/lib/rancher/k3s/agent/server-ca.crt
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate: /var/lib/rancher/k3s/agent/client-kubelet.crt
    client-key: /var/lib/rancher/k3s/agent/client-kubelet.key
Notice that kubelet does not connect to a fixed master address but to a local port (127.0.0.1:42545). This port is served by a client load balancer built into the K3s agent, whose configuration we can inspect:
$ cat /var/lib/rancher/k3s/agent/etc/k3s-agent-load-balancer.json
{
  "ServerURL": "https://k3s-t2.niusmallnan.club:6443",
  "ServerAddresses": [
    "172.31.4.119:6443",
    "172.31.7.200:6443"
  ]
}
Looking inside the database, we can see the table structure K3s uses for storage, which essentially maps etcd's key-value model onto a relational database, for example:
$ select * from kine limit 3\G
...
*************************** 3. row ***************************
             id: 3
           name: /registry/apiregistration.k8s.io/apiservices/v1.apiextensions.k8s.io
        created: 1
        deleted: 0
create_revision: 0
  prev_revision: 0
          lease: 0
          value: {"kind":"APIService","apiVersion":"apiregistration.k8s.io/v1beta1","metadata":{"name":"v1.apiextensions.k8s.io","uid":"c9109c3a-3475-42cb-a1e3-73ddb6161c55","creationTimestamp":"2019-11-19T09:18:06Z","labels":{"kube-aggregator.kubernetes.io/automanaged":"onstart"}},"spec":{"service":null,"group":"apiextensions.k8s.io","version":"v1","groupPriorityMinimum":16700,"versionPriority":15},"status":{"conditions":[{"type":"Available","status":"True","lastTransitionTime":"2019-11-19T09:18:06Z","reason":"Local","message":"Local APIServices are always available"}]}}
      old_value: NULL
3 rows in set (0.00 sec)
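Given the columns shown above, each write appends a new row whose id serves as the revision, so the history of a key prefix can be pulled with an ordinary SQL query. The query below is a hypothetical illustration against the same kine table, not something from the original article:

$ mysql -h k3s-mysql.csrskwupj33i.ca-central-1.rds.amazonaws.com -u admin -p k3sdb \
    -e "SELECT id, name, created, deleted, prev_revision FROM kine \
        WHERE name LIKE '/registry/apiregistration.k8s.io/%' ORDER BY id LIMIT 5;"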
In addition, K3s offers an experimental HA option based on DQLite (github.com/canonical/d… ). For how to use it, refer to the official documentation: rancher.com/docs/k3s/la… .
The usage scenarios of each datastore type can be summarized as follows: SQLite fits single-node and edge scenarios and is the default; MySQL, PostgreSQL, and etcd fit production HA with an external datastore; DQLite targets embedded HA but is still experimental.
In general, the basic mode of K3s HA is shown in the figure below:
HA Implementation Principle
We all know that the default datastore of K8s is etcd, so how does K3s translate etcd operations into operations on other kinds of datastores? For this, Rancher developed a new component called Kine (github.com/rancher/kin… ). The etcd v3 storage interfaces used by K8s are defined here (github.com/etcd-io/etc… ):
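Those interfaces are easiest to picture at the command line. The etcdctl invocations below exercise the same gRPC surfaces (KV, Watch, Lease) against a plain etcd; they are a generic etcd v3 illustration added here, not K3s-specific commands:

$ export ETCDCTL_API=3
$ etcdctl put /registry/example '{"hello":"world"}'   # KV.Put
$ etcdctl get /registry --prefix --keys-only          # KV.Range
$ etcdctl del /registry/example                       # KV.DeleteRange
$ etcdctl txn --interactive                           # KV.Txn
$ etcdctl watch /registry --prefix                    # Watch
$ etcdctl lease grant 60                              # Lease (not yet supported by Kine)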
Kine implements these interfaces one by one (Lease is not supported yet); the implementation can be found at github.com/rancher/kin… .
Kine exposes these interfaces to the K3s api-server. When a datastore-endpoint is set for K3s, the api-server is still configured to speak the etcd v3 protocol, but its etcd-servers address is set to Kine's service address, which means all API read and write operations are handled by Kine, and Kine translates those reads and writes into operations on MySQL/PostgreSQL/SQLite, etc. If a real etcd backend is configured, Kine steps aside and the real etcd servers are exposed directly to the K3s api-server.
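Because Kine speaks the etcd v3 protocol, you can talk to it directly with etcdctl. On a K3s server, Kine typically listens on a local unix socket; the socket path below is an assumption based on the usual K3s data directory, so verify it on your own node before relying on it:

# Hypothetical check: point etcdctl at Kine's local socket (path is an assumption)
$ ETCDCTL_API=3 etcdctl --endpoints=unix:///var/lib/rancher/k3s/server/kine.sock \
    get /registry --prefix --keys-only | head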
On the worker node, access to the api-server goes through a tcpproxy (github.com/google/tcpp… ): the agent listens on a local port and forwards connections to the real api-server addresses, which is exactly the load-balancer configuration we inspected earlier. The relevant code is at github.com/rancher/k3s… .
tcpproxy needs to know the real service addresses of the api-server, and it obtains them mainly through the tunnel established between worker and master, i.e. a websocket data channel that is unique to K3s. When the api-server addresses change, the worker learns the new addresses over the websocket tunnel and updates the tcpproxy configuration on the worker.
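One way to observe this tunnel from the outside is the agent's service logs. The grep pattern and the sample lines below are illustrative assumptions rather than guaranteed log output, since the exact wording varies by K3s version:

# Illustrative only: log wording differs across K3s versions
$ journalctl -u k3s-agent --no-pager | grep -i 'websocket\|proxy'
... Connecting to proxy ... wss://172.31.4.119:6443/...
... Connecting to proxy ... wss://172.31.7.200:6443/...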
Afterword
High availability is a basic requirement for running software in production. K3s has now reached GA and offers the HA solutions described above. In both edge scenarios and development/testing, you can select an appropriate datastore and HA mode based on your requirements. The DQLite-based solution is currently experimental and should mature quickly as the edge computing ecosystem evolves.