Background

Recently I have been reading the book "Playing Kubernetes in 5 Minutes Every Day". Personally, I think it is a good introduction to Kubernetes.

When we first learn a technology, whether through official documentation, books, or videos, it is hard to actually master it if we only read or watch without practicing.

However, when I started to prepare a practice environment, getting Kubernetes running locally was not as easy as I had thought. There were the following "problems":

  • Network environment: some of the images Kubernetes needs are difficult to pull in China. Of course, this can be worked around with a proxy, a mirror registry, and similar means.
  • Resource consumption: deploying a full Kubernetes cluster in a local development environment, where memory is not abundant, is not wise.

So is there a way to build a Kubernetes cluster more elegantly, with less resource usage and a faster startup? The answer is k3d.

There are several ways to run Kubernetes locally, such as:

  • Minikube only supports a single node, but we want to run in cluster mode so that we can mock Kubernetes rescheduling workloads after a node goes down ✖️
  • MicroK8s is a single-machine Kubernetes distribution provided by the Ubuntu ecosystem. Combined with Multipass, also from the Ubuntu ecosystem, it can simulate multiple nodes. However, in a local environment with limited resources, simulating nodes with virtual machines is clearly not what I want ✖️
  • Kind is a tool for running Kubernetes clusters in Docker, literally "Kubernetes in Docker" ✔️
  • K3d is a tool that runs k3s in Docker. Compared with Kind, it starts faster and consumes fewer resources. It is also the solution I chose ✅

Of course, if you just want to learn how to use Kubernetes, any of the above options will work.

For a comparison of k3d and Kind, see the article "k3d vs Kind: which is more suitable for local development".

1. What is K3D + K3s?

K3s is a very fast and lightweight, fully compliant Kubernetes distribution (CNCF certified).

K3d is a tool that runs k3s in Docker. It provides a simple CLI to create, run, and delete Kubernetes clusters with 1 to N nodes.

K3s consists of the following components:

  • containerd: a container runtime similar to Docker, but it does not support building images
  • Flannel: a CNI-based network implementation, used by default; it can be replaced by Calico and other implementations
  • CoreDNS: the cluster's internal DNS component
  • SQLite3: used as the default datastore; etcd3, MySQL, and Postgres are also supported
  • Traefik: the default Ingress Controller; Traefik 1.x is installed out of the box
  • Embedded service load balancer: the built-in service load-balancing component

K3s is a modular distribution that can easily replace the above components.
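To make this modularity concrete, here is a hedged sketch of swapping out one component at cluster creation time. The `--k3s-server-arg` flag name is from k3d v4 (newer k3d releases renamed it to `--k3s-arg`), and the cluster name is illustrative:

```shell
# Sketch: create a cluster with the bundled Traefik disabled, so a
# different Ingress Controller can be installed in its place.
# --k3s-server-arg passes the argument straight through to the k3s server.
k3d cluster create no-traefik \
  --k3s-server-arg '--disable=traefik' \
  --servers 1 --agents 2
```

The same pattern applies to the other components, e.g. passing a different `--datastore-endpoint` to replace SQLite3.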

2. Install k3d

On a Mac, k3d can be installed easily with Homebrew: brew install k3d

Install kubectl and kubecm:

brew install kubectl
brew install kubecm

3. A first try

We can easily start one or more Kubernetes clusters locally with k3d commands.

First, let's create a cluster with 1 server (control-plane) node and 2 agent (worker) nodes:

k3d cluster create first-cluster --port 8080:80@loadbalancer --port 8443:443@loadbalancer --api-port 6443 --servers 1 --agents 2

The initial creation can be slow because the latest rancher/k3s image has to be pulled from the Docker registry.

When the following log appears, the K8S cluster is successfully created 😉

INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-first-cluster'
INFO[0000] Created volume 'k3d-first-cluster-images'
INFO[0001] Creating node 'k3d-first-cluster-server-0'
INFO[0001] Creating node 'k3d-first-cluster-agent-0'
INFO[0001] Creating node 'k3d-first-cluster-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-first-cluster-serverlb'
INFO[0001] Starting cluster 'first-cluster'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-first-cluster-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting Node 'k3d-first-cluster-agent-0'
INFO[0020] Starting Node 'k3d-first-cluster-agent-1'
INFO[0028] Starting helpers...
INFO[0028] Starting Node 'k3d-first-cluster-serverlb'
INFO[0029] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0031] Successfully added host record to /etc/hosts in 4/4 nodes and to the CoreDNS ConfigMap
INFO[0031] Cluster 'first-cluster' created successfully!
INFO[0031] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0031] You can now use it like this:
kubectl config use-context k3d-first-cluster
kubectl cluster-info

Run kubectl cluster-info to view the cluster information:

Kubernetes master is running at https://0.0.0.0:6443
CoreDNS is running at https://0.0.0.0:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

Run kubectl get nodes to check the current cluster nodes:

NAME                         STATUS   ROLES                  AGE    VERSION
k3d-first-cluster-agent-1    Ready    <none>                 178m   v1.20.2+k3s1
k3d-first-cluster-server-0   Ready    control-plane,master   178m   v1.20.2+k3s1
k3d-first-cluster-agent-0    Ready    <none>                 178m   v1.20.2+k3s1

Note that each "node" here is actually a container running in the local Docker daemon, which you can verify with docker ps:

CONTAINER ID   IMAGE                      COMMAND                  CREATED       STATUS       PORTS                                                                 NAMES
a757151daf14   rancher/k3d-proxy:v4.2.0   "/bin/sh -c nginx-pr…"   4 hours ago   Up 4 hours   0.0.0.0:6443->6443/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:8443->443/tcp   k3d-first-cluster-serverlb
6fcb1bbaf96e   rancher/k3s:latest         "/bin/k3s agent"         4 hours ago   Up 4 hours                                                                         k3d-first-cluster-agent-1
cef7277e43b9   rancher/k3s:latest         "/bin/k3s agent"         4 hours ago   Up 4 hours                                                                         k3d-first-cluster-agent-0
5d438c1b5087   rancher/k3s:latest         "/bin/k3s server --t…"   4 hours ago   Up 4 hours                                                                         k3d-first-cluster-server-0

Explain the port mapping configured when we created the cluster:

  • `--port 8080:80@loadbalancer`: host port 8080 is mapped to port 80 of the load balancer; after receiving a request on port 80, the load balancer proxies it to all Kubernetes nodes.
  • `--api-port 6443`: by default, the k3s api-server listens on port 6443, which is used to operate the Kubernetes API. Even if multiple server (master) nodes are created, only this one port 6443 needs to be exposed; the load balancer distributes requests across the server nodes.
  • If we want to expose Kubernetes services through NodePort, we can also map a range of ports to the load balancer, for example: `-p 10080-20080:10080-20080@loadbalancer`
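To make the NodePort case concrete, here is a hedged sketch. The cluster name and port numbers are illustrative, and the range is kept small because every mapped port becomes a Docker port binding:

```shell
# Sketch: map a small NodePort range through the k3d load balancer.
k3d cluster create nodeport-demo \
  --port 30080-30085:30080-30085@loadbalancer \
  --servers 1 --agents 2

# Expose an nginx Deployment on a NodePort inside the mapped range.
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80 --node-port=30080

# The service should now be reachable from the host:
# curl http://localhost:30080
```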

Now the network path between the host and our cluster is: host ports 8080/8443/6443 → the k3d load-balancer container → the node containers.

4. Test

Create a Deployment for nginx:

kubectl create deployment nginx --image=nginx

Create a Service to expose the Deployment inside the cluster through ClusterIP:

kubectl create service clusterip nginx --tcp=80:80
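For reference, here is a hedged declarative equivalent of the two imperative commands above, applied with the same heredoc style used for the Ingress. The `app: nginx` labels mirror what `kubectl create deployment` and `kubectl create service` generate:

```shell
# Declarative equivalent of the kubectl create commands above.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```

Keeping the manifests in a file instead of a heredoc makes them easier to version-control.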

Create an Ingress. k3s installs Traefik 1.x as the Ingress Controller by default:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF

At this point, open your browser and visit http://localhost:8080/ to see the familiar nginx default page.
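If you prefer the terminal to a browser, here is a quick sanity check, assuming the cluster was created with the `--port 8080:80@loadbalancer` mapping from earlier:

```shell
# Fetch the page through host port 8080 -> load balancer port 80
# -> Traefik Ingress -> nginx Service -> nginx Pod.
curl -s http://localhost:8080/ | grep -i "welcome to nginx"
```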

Isn’t this too cool ~ 😎

5. Other

5.1. Manage the cluster

  • Stop a cluster: `k3d cluster stop first-cluster`
  • Restart a cluster: `k3d cluster start first-cluster`
  • Delete a cluster: `k3d cluster delete first-cluster`
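These commands compose into a simple lifecycle; a hedged walkthrough with a throwaway cluster (the name is illustrative):

```shell
# Create, inspect, pause, resume, and finally remove a cluster.
k3d cluster create demo --servers 1 --agents 1
k3d cluster list            # the new cluster shows up alongside any others
k3d cluster stop demo       # stops the node containers, state is kept
k3d cluster start demo      # brings the same containers back up
k3d cluster delete demo     # removes the containers, network, and volumes
```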

5.2. Create a K8S cluster of the specified version

When installing Rancher using its Helm chart, you may get the following error:

Chart requires kubeVersion: < 1.20.0-0 which is incompatible with Kubernetes v1.20.0+k3s2

At the time of testing, Rancher's released version was 2.5.5; the newer 2.5.6 supports Kubernetes 1.20.x.

To create a cluster whose Kubernetes version is v1.19.8+k3s1, add the --image parameter to the cluster create command: `k3d cluster create first-cluster XXXXX --image rancher/k3s:v1.19.8-k3s1` (where XXXXX stands for the other flags).

5.3. Quickly switch kubectl context

Remember the kubecm we installed in step 2?

After creating multiple clusters locally with k3d, we can quickly switch between their contexts using kubecm.

$ kubecm s
Use the arrow keys to navigate: ↓ ↑ → ←  and / toggles search
Select Kube Context
😼 k3d-first-cluster(*)
   k3d-dev
   k3d-rancher-test
   <Exit>
--------- Info ----------
Name:           k3d-first-cluster
Cluster:        k3d-first-cluster
User:           admin@k3d-first-cluster
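If you would rather not install an extra tool, plain kubectl offers the same switching, just without the interactive menu:

```shell
# List all contexts in the kubeconfig; the CURRENT column marks the active one.
kubectl config get-contexts

# Switch to a specific k3d cluster's context.
kubectl config use-context k3d-first-cluster
```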

Reference

  • K3s: k3s.io
  • K3d: k3d.io
  • Kubecm: github.com/sunny0826/k…

In the next article, I’ll show you how to deploy Rancher in our K8S cluster to simplify our K8S operations.


Creation time: 2021-03-15 17:44:47

Original link: xkcoding.com/2021/03/15/…