Preface

With K8S development in full swing, more and more people want to learn and understand K8S, but many balk at its steep learning curve.

However, as the K8S ecosystem has grown, the community has produced more and more deployment solutions: several that are suitable for production environments, and several simple, usable ones for testing and learning environments.

Today we will introduce Kind, a solution for quickly building a K8S environment for testing and learning. Kind's website is: https://kind.sigs.k8s.io/

So what’s Kind’s advantage over Minikube?

Docker-based instead of virtualization-based

The architecture diagram is as follows:

Instead of packaging a virtualization image, Kind runs the K8S components directly as Docker containers. What are the benefits?

  1. No guest OS to run, so resource usage is lower.
  2. It is not based on virtualization technology, so it can be used inside VMs.
  3. The files are smaller and more portable.
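You can see this for yourself: after creating a cluster (which we will do later in this article), the "node" shows up as an ordinary container in docker ps. For the default cluster it is typically named kind-control-plane and runs a kindest/node image:

docker ps
# Expect to see a container running a kindest/node image, e.g. one named
# kind-control-plane -- that container is the K8S node.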

Supports multi-node K8S clusters and HA

Kind supports multi-role node deployment. You can control how many Master nodes and how many Worker nodes you need through the configuration file to better simulate the actual environment in production.
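For example, a configuration sketch like the following (three control-plane nodes and two workers; the counts are arbitrary) describes an HA-style cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# control-plane nodes act as the "Master" role
- role: control-plane
- role: control-plane
- role: control-plane
# worker nodes run your workloads
- role: worker
- role: worker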

Install Kind

Kind is easy to install: it is just a single binary, which you can download from its GitHub Releases page if you don't want to build it yourself.

The following installation steps come from the Kind quick-start documentation: https://kind.sigs.k8s.io/docs/user/quick-start/

macOS / Linux

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.8.1/kind-$(uname)-amd64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind

macOS / Linux using Homebrew

brew install kind

Windows

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.8.1/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe

Windows using Chocolatey

choco install kind

Create a K8S cluster

If you’re using Docker on macOS or Windows, you need to give the Docker VM at least 6 GB of memory; Kind recommends 8 GB. Didn’t we just say Kind isn’t based on virtualization, so why is there a Docker VM? Because Docker itself only supports Linux: on macOS and Windows, Docker Desktop creates a Linux VM using virtualization technology. These issues do not exist on Linux systems.

In the simplest case, we can create a single-node K8S environment with a single command:

kind create cluster

However, the default configuration has several limitations that make it unsuitable in many cases. The main ones are:

  1. The API Server only listens on 127.0.0.1, which means it cannot be accessed from outside the machine running Kind
  2. Due to domestic (China) network conditions, Docker Hub often cannot be reached or times out, so image pulls fail or are very slow

Here is a configuration file that removes the above restrictions:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "<API_SERVER_ADDRESS>"
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["http://f1361db2.m.daocloud.io"]

Set API_SERVER_ADDRESS to a LAN IP or whichever IP you want the API Server to listen on; http://f1361db2.m.daocloud.io is a Docker Hub mirror used to speed up image pulls.
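Assuming the configuration above is saved as kind-config.yaml (the filename is arbitrary), create the cluster with it like this:

kind create cluster --config kind-config.yaml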

More configuration options (multiple nodes, the K8S component version run in the nodes, the API Server listening port, Pod and Service subnets, the kube-proxy mode, port mappings, local persistence) are covered in the Kind documentation: https://kind.sigs.k8s.io/docs/user/configuration/

The result is as follows:

If you are stuck on the "Ensuring node image (kindest/node:v1.18.2)" step for a long time, you can run docker pull kindest/node:v1.18.2 in another terminal to see a progress bar for the image pull.

Copy the cluster configuration file

By default, Kind writes the cluster's kubeconfig to ~/.kube/config, which can be copied or merged into any environment that has the kubectl tool.
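One way to hand the config to another machine (a sketch, assuming the default cluster name kind, and that apiServerAddress was set to an IP that machine can reach):

kind get kubeconfig --name kind > kind-kubeconfig
# copy kind-kubeconfig to the other machine, then:
kubectl --kubeconfig ./kind-kubeconfig get nodes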

Switch the kubectl cluster context

kubectl cluster-info --context kind-kind

How do I access applications deployed in K8S

After deploying an application in K8S, there are generally four ways to access it:

  1. Access the Pod IP directly
  2. Access through the Service’s ClusterIP
  3. Access through the Service’s NodePort
  4. Access through an Ingress (whose controller is exposed via a Service NodePort)

Methods 1 and 2 require the client to be inside the K8S network environment, while methods 3 and 4 essentially reach the application through port mappings on the node machine (for Kind, see the sketch below).
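For methods 3 and 4 to be reachable from the host, the Kind node needs a port mapping in the cluster configuration. A minimal sketch using Kind's extraPortMappings; the port 30080 is an arbitrary choice that a NodePort Service would then use:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  # map the node's port 30080 (e.g. a Service NodePort) to port 30080 on the host
  - containerPort: 30080
    hostPort: 30080
    protocol: TCP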

Personally, I find it more convenient to reach the application directly via IP + port. I will not cover Ingress in depth here; see the Kind documentation on Ingress: https://kind.sigs.k8s.io/docs/user/ingress/

This section describes how to access an application in K8S through kubectl port-forward.

Deploy an Nginx Deployment and Service

The YAML is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: 80-tcp
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
kubectl create -f nginx.yaml
kubectl port-forward service/nginx 8080:80

The result is as follows:


You can see that local port 8080 has been forwarded to port 80 of the nginx Service, so you can now reach the nginx Service through it.
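A quick check while the port-forward above is still running:

curl http://localhost:8080
# should return the default nginx welcome page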

Q&A

Can Kind create multiple K8S clusters on a single machine?

Yes. kind create cluster provides the --name parameter to set the name of the K8S cluster. Note that the listening address/port of each cluster's API Server must not conflict or already be in use.
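For example (the cluster names dev and test are arbitrary):

kind create cluster --name dev
kind create cluster --name test
# each cluster gets its own kubectl context, named kind-<cluster-name>
kubectl cluster-info --context kind-dev
kubectl cluster-info --context kind-test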

How do I specify the K8S version?

kind create cluster provides the --image parameter, which lets you choose the version of the kindest/node image; each image version corresponds to a released K8S version. You can check the available versions on Docker Hub: https://hub.docker.com/r/kindest/node/tags
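For example, to create a cluster running the v1.18.2 node image mentioned earlier:

kind create cluster --image kindest/node:v1.18.2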

This feature is very cool: when doing compatibility testing, you can spin up a cluster of exactly the target version to test against, which could hardly be more convenient.

How can I use my application's image in K8S if it has not been pushed to an image registry?

You can use the following methods:

  1. kind load
  2. A local image registry
  3. A private image registry

In most cases, kind load is enough: it loads an image from the local Docker daemon into the Kind K8S environment. For example, to load a local nginx image into Kind's K8S environment:

kind load docker-image nginx nginx

You can even give the image an alias:

kind load docker-image nginx nginx:test

For details, see the CLI help:

kind load -h
kind load docker-image -h
kind load image-archive -h

For using a local image registry with Kind, see the documentation: https://kind.sigs.k8s.io/docs/user/local-registry/ For private image registries, see: https://kind.sigs.k8s.io/docs/user/private-registries/

Other questions?

If you have any other questions, please leave me a message.