Starting today, I’ll be serializing the Illustrated K8S primer series.

You can subscribe to the series: Illustrated Kubernetes by clicking here

This is the first article in a series that takes you through building a working K8S environment.

Setting up a K8S environment is the first hurdle that stops many would-be learners at the door; plenty of people give up before they even get started.

Why is that? There are three main reasons:

  1. Network problems: K8S is developed by Google, and the official images are hosted on registries that are blocked in mainland China, so they cannot be downloaded during installation.
  2. Machine requirements: K8S runs many components and needs reasonably powerful hardware; building a cluster also requires more than one machine, which is not a small expense.
  3. Operations skills: deploying, installing, and maintaining K8S involves a lot of Linux and networking knowledge, which requires some operations experience.

Today, however, there are many K8S deployment tools, some with graphical installers, and some that install with a single command.

These tools offer a relatively friendly deployment experience for newcomers, but I haven’t used most of them, because I think they hide too many details of a native K8S deployment: which components are installed, and how? What are the dependencies between components? How does the network actually work?

These details will be a great help later when maintaining the environment and troubleshooting problems.

For this reason, I personally don’t recommend that beginners reach for highly abstracted deployment tools at first.

But you can’t do entirely without tooling either: installing every component by hand would drive most novices crazy.

Therefore, I chose the officially recommended kubeadm tool for this demonstration, to walk you through building a usable K8S environment.

1. Environment Description

A K8S environment can be either single-node or a multi-node cluster.

With a single node, deployment is very simple and sufficient for learning the general concepts, except for features that require multiple nodes, such as DaemonSet or verifying scheduling policies.

A real production environment should be a distributed cluster, but such an environment requires multiple machines to build.

You can create several virtual machines on your own computer. In this article I use my Mac (16 GB of RAM) as an example and create two virtual machines; three would work too. If you are on Windows, make sure your configuration can handle it.

Of course, you can also buy machines from the major cloud vendors; that costs a fair amount of money, so go that route if you are willing to invest.

Since we will definitely use multiple nodes later, this article only covers the multi-node cluster setup. If you want to test the waters first and learn just the basics, leave a comment on this article; I am considering writing up a separate single-node K8S setup, and the steps are almost identical.

2. Cluster architecture

Nodes in K8S have one of two roles:

  • One is the control node, also called the Master node, which mainly runs control-plane components such as kube-apiserver
  • The other is the compute node, also called the Worker node, which mainly runs business workloads such as a web server

The Master node is the brain of the whole cluster. A cluster must have at least one Master node; generally speaking, at least three Masters are needed for high availability.

But since we are just learning, and cluster stability and security are not a concern, there is no need to waste resources on high availability.

Therefore, my cluster is one Master node with N Worker nodes (only one Worker is shown deployed below; additional Workers are similar). The management network segment is 172.20.20.0/24, and the IP addresses are shown in the figure.

3. Network environment

Cluster installation requires many images to be downloaded from the Internet, so make sure all your nodes are connected to the Internet before starting the installation.

Cloud hosts purchased from cloud vendors come with public IP addresses, so there is nothing to worry about there.

If you are creating virtual machines on your local computer as K8S nodes, you can follow along with me.

My computer is a MacBook Pro (M1, 2021) with only 16 GB of memory, running Parallels Desktop, a desktop virtualization product for the Mac. Ubuntu 20.04 can be downloaded from within Parallels Desktop, but note that this is the Desktop edition of Linux, which would be too resource-hungry for my modestly configured machine.

You can download Ubuntu Server here instead: ubuntu.com/download/se… (about 1.1 GB).

Once the download completes, install it. I believe most people know how to do this, so I won’t belabor it here; if anything is unclear, ask in the comments.

After the VM is installed, configure the network segment before starting it.

Click Network -> Advanced as shown below, and then click Open Network Preferences

Change the Shared network to 172.20.20.0/24, that is, 172.20.20.1–172.20.20.254.

Once you modify it, Parallels Desktop detects the change and moves the host’s bridge100 interface into 172.20.20.0/24, assigning it 172.20.20.2. This is the same segment as the virtual machine’s network, so the host can communicate with the VM.

After the network configuration is complete, the VM automatically obtains an IP address such as 172.20.20.3.

In /etc/netplan/00-installer-config.yaml, enp0s5 is configured to obtain its IP address via DHCP.

With DHCP, the same node may obtain different IP addresses at different times.

For a K8S cluster, the management network IPs should be fixed; otherwise communication will break.

Therefore, we should write the planned IP address into the configuration file, and then run netplan apply to make it take effect.

After the network restarts, the current SSH connection is dropped. Log in again using 172.20.20.200; the IP address of enp0s5 has been set to 172.20.20.200 as expected.

Ping 114.114.114.114, however, and you will find the network is unreachable. Why?

Because the gateway address is not specified in the netplan configuration file.

bridge100’s IP address is 172.20.20.2.

Parallels Desktop uses this IP as the gateway, so we just need to add it to the configuration file and run netplan apply again.

The complete configuration file is as follows

# /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp0s5:
      dhcp4: no
      addresses: [172.20.20.200/24]
      optional: true
      gateway4: 172.20.20.2
      nameservers:
        addresses: [114.114.114.114]
  version: 2

4. Basic environment

4.1 Disabling swap

The kubelet in older K8S versions required swap to be disabled; the latest kubelet already supports swap, so strictly speaking this step is no longer mandatory.

iswbm@master:~$ sudo swapoff -a
# modify /etc/fstab to make it persistent
iswbm@master:~$ sudo vim /etc/fstab
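Editing /etc/fstab by hand works, but the persistence step can also be scripted. Below is a minimal sketch that comments out the swap entry with sed; the sample fstab contents and the fstab.sample file name are made up for illustration, and on a real node you would run the sed with sudo directly against /etc/fstab.

```shell
# Demonstrated on a sample copy; on a real node run the sed against /etc/fstab.
printf '%s\n' \
  'UUID=abcd-1234 /    ext4 defaults 0 1' \
  '/swap.img      none swap sw       0 0' > fstab.sample
# Comment out any non-comment line whose fs type column is "swap"
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)|# \1|' fstab.sample
cat fstab.sample
```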

4.2 Changing the Time Zone

The time zone is changed from UTC to CST, an 8-hour offset:

iswbm@master:~$ date
Sat 15 Jan 2022 02:22:44 AM UTC
iswbm@master:~$ sudo timedatectl set-timezone Asia/Shanghai
iswbm@master:~$ date
Sat 15 Jan 2022 10:22:55 AM CST

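If you just want to sanity-check the offset before changing any system settings, the TZ environment variable lets you render the same instant in both zones; this is a read-only check and touches no configuration.

```shell
# Render the current instant in UTC and in Asia/Shanghai (CST, UTC+8);
# the two timestamps should differ by exactly 8 hours.
date -u '+%Y-%m-%d %H:%M %Z'
TZ=Asia/Shanghai date '+%Y-%m-%d %H:%M %Z'
```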

To make the new time zone take effect immediately in system log timestamps, restart rsyslog:

iswbm@master:~$ sudo systemctl restart rsyslog


4.3 Setting Kernel Parameters

Make sure the br_netfilter kernel module is loaded on your system; it is not loaded by default. You can install bridge-utils (handy for inspecting bridges) and load the module with modprobe:

sudo apt-get install -y bridge-utils
sudo modprobe br_netfilter

K8S requires the sysctl net.bridge.bridge-nf-call-iptables to be set to 1, so that traffic crossing a Linux bridge is still processed by iptables; this is the behavior the br_netfilter module provides.

On Ubuntu 20.04 Server, this value is already 1. If it is different on your system, fix it with the following commands:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

5. Basic software

The steps in this section are performed on both the master and the worker.

5.1 Installing Docker

Ubuntu already provides a Docker package, which can be installed directly:

# install docker
iswbm@master:~$ sudo apt install docker.io
# start docker
iswbm@master:~$ sudo systemctl start docker
# start docker on boot
iswbm@master:~$ sudo systemctl enable docker

If you are on an older Ubuntu release, or prefer to follow the official instructions, see docs.docker.com/engine/inst…

That said, the Docker version in the Ubuntu 20.04 repositories is still fairly recent, 20.10.7, and usable directly. The latest version I see on the official Docker site is 20.10.12; the gap is only a few patch releases and makes little difference.

iswbm@master:~$ docker --version
Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2

5.2 Installing kubeadm and kubectl

The following operations are performed on both the master and worker nodes. Since Google’s package source and repo are not reachable from mainland China, switch to the Aliyun mirror.

Run the following commands in sequence

# install prerequisites
iswbm@master:~$ sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https
# add the Aliyun apt key and repo
iswbm@master:~$ curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
iswbm@master:~$ sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# update the index and install
iswbm@master:~$ sudo apt-get update
iswbm@master:~$ sudo apt-get install -y kubelet kubeadm kubectl
# hold the packages so apt upgrade will not change them
iswbm@master:~$ sudo apt-mark hold kubelet kubeadm kubectl

6. Build a cluster

6.1 Deploying the master

90% of the articles online deploy with kubeadm init plus various flags, a single command like this:

sudo kubeadm init --pod-network-cidr 172.16.0.0/16 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --apiserver-advertise-address 172.20.20.200 \
    --apiserver-bind-port 6443

But this deployment command will fail on newer versions of K8S (roughly 1.22+).

Newer kubeadm defaults the kubelet’s cgroupDriver to systemd, while the Docker installed above uses its default cgroupfs driver.

With that mismatch, the kubelet will not start.

After some digging, I found that on these versions kubeadm init should be driven by a configuration file:

iswbm@master:~$ sudo kubeadm init --config kubeadm-config.yaml


If you are installing an older version of K8S, you don’t need the approach above; just run:

sudo kubeadm init --pod-network-cidr 172.16.0.0/16 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --apiserver-advertise-address 172.20.20.200 \
    --apiserver-bind-port 6443 \
    --token-ttl 0

The more commonly used kubeadm parameters, with explanations, are summarized below.

Our goal is learning, with no particular version requirement, so we go straight to the latest version, which means installing this way:

iswbm@master:~$ sudo kubeadm init --config kubeadm-config.yaml


Where does the kubeadm-config.yaml configuration file come from? I prepared one; you can download it here: wwe.lanzout.com/iITg2yt0imd

You can also generate a starting point yourself: kubeadm config print init-defaults prints the default configuration.
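For reference, below is a minimal sketch of what such a config file can look like, assembled from the kubeadm init flags shown above plus a KubeletConfiguration section for the cgroup driver. Every value here is an assumption to adapt to your own environment; in particular, cgroupDriver: cgroupfs assumes Docker is running with its default cgroupfs driver (the alternative is to keep systemd and switch Docker itself to the systemd driver).

```shell
# Sketch of a kubeadm-config.yaml built from the flags used earlier.
# All values are assumptions -- adjust them to your own environment.
cat <<'EOF' > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.20.200
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 172.16.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# assumes Docker keeps its default cgroupfs driver
cgroupDriver: cgroupfs
EOF
```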

When it completes, you are reminded to do three things.

First thing: Configure the environment variables

To use kubectl against the cluster as a regular user, run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


For the root user, run the following command

export KUBECONFIG=/etc/kubernetes/admin.conf


Second thing: Add the node to the cluster

To add the worker node to the cluster, run this command

sudo kubeadm join 172.20.20.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4e4a7d0e848ae6c047d163fbe07f0e6975d71cc156d7705649241a59bbecaa04

The token in this command has a limited validity period. If it has expired, you can regenerate the full join command with:

kubeadm token create --print-join-command

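In case you are curious where the long sha256:… value in the join command comes from: it is the SHA-256 digest of the cluster CA’s DER-encoded public key, and you can compute it yourself with openssl. The sketch below demonstrates the pipeline on a throwaway self-signed certificate; on the master you would point it at /etc/kubernetes/pki/ca.crt instead.

```shell
# Generate a throwaway CA cert just to demonstrate the pipeline
# (on a real master, replace ca.crt with /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -subj "/CN=demo-ca" -days 1 2>/dev/null
# Extract the public key, DER-encode it, and take its sha256 digest
openssl x509 -pubkey -in ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
```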

Now that your cluster is installed, you can take a look at its basic state.

Third thing: Deploy the network plug-in

This is not circled in the figure, but it is just as important; I cover it in section 6.3 Deploying Calico.

6.2 Deploying the worker

Compared with the master, worker deployment is much simpler.

As long as Docker, kubelet, and kubeadm are installed, you can run the join command from earlier to join the cluster directly:

sudo kubeadm join 172.20.20.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4e4a7d0e848ae6c047d163fbe07f0e6975d71cc156d7705649241a59bbecaa04

Then check the nodes on the master. If you can see the worker, it has joined the cluster successfully.

6.3 Deploying Calico

Run kubectl get nodes and you will see both nodes, but their Status is NotReady; that is because no network plugin has been installed yet.

There are many network plugins for K8S, such as Flannel, Calico, Cilium, Kube-OVN, and so on.

More supported CNIs are listed in the official documentation: kubernetes.io/docs/concep…

I counted roughly 16 network add-ons there; the choice is overwhelming.

With so many CNI plugins available, making a sound CNI selection is something a team needs to invest real research effort in.

Flannel is the most basic K8S network plugin, developed by CoreOS, but its features are limited; it is fine for beginners to learn with, and not recommended for production.

Among the rest, Calico is the mainstream choice (our company actually uses Kube-OVN), so I chose Calico for the installation.

Installing Calico requires only one command.

kubectl apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml

Once the installation is complete, these pods will be created

iswbm@master:~$ kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-85b5b5888d-dbptz   1/1     Running   0          3m33s
kube-system   calico-node-8jt69                          1/1     Running   0          3m33s
kube-system   calico-node-t69qb                          1/1     Running   0          3m33s


The CoreDNS pods, which had been failing because no network plugin was installed, now also start up successfully.

After the network is OK, confirm the cluster state again:

  • All pods are Running
  • All nodes are Ready

7. Verifying the cluster

I will introduce the concept of Pod later. For now, just know that Pod is the core resource object of K8S and the smallest scheduling unit.

If you can successfully create a Pod in a K8S cluster, then your cluster is healthy and available.

Use the following command to create a simple Nginx pod

kubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml


The content of simple-pod.yaml is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

After running it, you can watch the Pod’s state change with kubectl get pod.

Add -o wide to that command to see the Pod’s IP address; then you can access the nginx service with curl against that IP.

At this point, we have created our first Pod, and the cluster build is complete.

Congratulations, you have completed the most difficult step of learning K8S: building the environment.

8. Write at the end

Although the demonstration here is on macOS, the host machine itself makes no difference for building a K8S cluster; at most the virtual machine network configuration differs, and I believe that is no problem for a serious programmer.

This article went through the full process of building a K8S cluster: setting it up, troubleshooting, collecting material, and writing it all down. After many rounds of revision it took more than six hours in total, all so that a complete beginner with no ops background can follow along painlessly.

An article like this takes a long time to produce, so the update frequency is naturally lower. I am still considering what style this series should take going forward: long articles like this one, or “learn K8S in 5 minutes a day”?

Personally, I prefer the former, which leaves room to explain a topic thoroughly and lets you feel the care I put into writing this series.

From the readers’ point of view, though, the latter may be better: if updates are too infrequent, readers lose the thread, and what was new a week ago is mostly forgotten by now.

What do you think? I welcome your suggestions in the comments section.
