It has been a month since the last update, mainly because I recently changed jobs and things have only just settled down. The original purpose of this post was to show you how to build a K8S cluster from scratch. But then I thought: if I were the reader, what would I actually gain from just following the steps one by one? Not much. So I added a brief introduction to K8S and an explanation for each step. Due to space and time constraints, I only cover the two most central concepts in K8S: Pod and Service.

The first half of this article gives a brief introduction to K8S, and the second half walks through building the cluster from scratch, step by step. If you want to get straight to building, skip to Chapter 3.

1. What is K8S

K8S, short for Kubernetes, is a production-grade container orchestration system, or a microservices and cloud-native platform, that Google open-sourced in 2014. Although it was only open-sourced in 2014, K8S is essentially an open-source version of Borg, Google's internal container orchestration system, which had already been in use at Google for over a decade. Here's a little tidbit about where the K8S logo came from.

Kubernetes was first announced by Google in 2014. Its development and design were heavily influenced by Google's Borg system, and many of its top contributors previously worked on Borg. Within Google, the original code name for Kubernetes was Seven, after Seven of Nine, the friendly Borg character from Star Trek. The seven spokes on the steering wheel in the Kubernetes logo are a nod to that code name.

There is also another theory: Docker's logo is a whale carrying containers, i.e., a cargo ship, while the K8S logo is a ship's wheel, meant to steer Docker (or container technology in general) toward distant waters.

2. Get to know K8S briefly

I have read a lot of the official articles, and they really are, well, official. What do I mean by that? Reading them is about as useful as not reading them; you come away knowing just as little as before.

That is why I want this article to be of some help to those who have read the docs and still don't quite get K8S. So let's go back to the original definition: K8S is a microservices framework.

Speaking of microservice frameworks, we have to mention the current mainstream ones in the industry; comparing K8S against frameworks you already know will give you a much clearer idea of what it can do. Spring Cloud, Dubbo, and K8S are among the most popular microservice frameworks and platforms today.

Spring Cloud comes from the Spring team (building heavily on Netflix OSS components), Dubbo comes from Alibaba, and K8S comes from Google. To put it plainly, all three are solutions for microservices. One might say: isn't K8S a container orchestration system? How can that be compared to a software-level microservices framework like Spring Cloud?

Don't panic, my friend. Let's dig a little deeper into this concept.

We all know that if we want to use microservices, there must be some underlying infrastructure to support them: service registration and discovery, load balancing, log monitoring, configuration management, cluster self-healing and fault tolerance, elastic scaling, and so on. The list goes on, but these capabilities can collectively be called the common concerns of microservices. So can we say that anything providing these features counts as a microservices framework?

K8S has most of the above features built in. Therefore, we can say that K8S is a container orchestration system similar to Docker Swarm, but because it also ships with a built-in microservice solution, it doubles as a fully functional microservice framework.

2.1 Concept of Pod

In Docker Swarm, the smallest unit of scheduling is the container; in K8S, the smallest unit of scheduling is the Pod. So what is a Pod?

Pod is a new concept introduced by K8S. In English, "pod" means a pod of whales or a pod of peas. Fittingly, one or more containers can run inside a single Pod.

In a cluster, K8S assigns each Pod an IP address that is unique within the cluster, because K8S requires the underlying network to support direct communication between any two Pods on any nodes. The containers inside a Pod share that Pod's network and file system. They can do this because every Pod has a root container, called the Pause container, and the user's business containers share the IP and Volumes of that root container. As a result, all containers in a Pod can communicate with each other through localhost.

One might ask: why introduce the concept of a root container at all? Because without one, when a Pod holds multiple containers, which container's state should determine the Pod's state? Using the Pause container as the root container solves this: the state of the root container represents the state of the entire Pod.
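To make this concrete, here is a minimal sketch of a multi-container Pod manifest. The names and images are my own illustrative choices, not anything from the cluster we build later:

```yaml
# Hypothetical two-container Pod: both containers share the Pod's IP and volumes.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app                # main business container
      image: nginx:1.17
      ports:
        - containerPort: 80
    - name: sidecar            # can reach the app container via localhost:80
      image: busybox:1.31
      command: ["sh", "-c", "sleep 3600"]
```

Applying this manifest would start both containers inside one Pod, behind one Pod IP.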

As anyone familiar with Spring Cloud or microservices knows, the thing to avoid above all in microservices is a single point of failure.

So we typically deploy two or more instances of the same service. In K8S, multiple Pod replicas are deployed to form a Pod cluster that serves external traffic.

And as mentioned before, K8S gives each Pod a unique IP address, so a client would have to access a Pod through that specific IP and container port. If the client hard-codes the address, there is no way to do server-side load balancing; worse, a Pod's IP changes after a restart. Should we really notify the client of the new IP every time a Pod restarts?

To solve this problem, the concept of a Service is introduced.

2.2 Service

Service is one of the core resource objects in K8S, and it exists to solve exactly the problem above. Personally, I don't see much difference from the Service concept in Swarm.

Once a Service is created, K8S assigns it a virtual IP that is unique within the cluster, called the ClusterIP, which does not change for the lifetime of the Service. K8S also sets up a DNS mapping from the service name to the ClusterIP, so you can address a Service by name, much as you would in Docker Swarm.

It is worth noting that the ClusterIP is a virtual IP address: it cannot be pinged and is only reachable from within the K8S cluster.

A Service, in other words, shields clients from the addresses of the underlying Pods. In addition, the kube-proxy process forwards requests arriving at the Service to a specific Pod, with the choice of Pod determined by a scheduling algorithm. In this way, load balancing is achieved.

How does the Service find the Pod? This leads to the introduction of another core concept, Label.

2.3 Label

A Label is essentially a key-value pair defined by the user. A Label is a tag, and it can be attached to a Pod, a Service, or other resources. In short, a Label has a one-to-many relationship with the resources it tags.

For example, if we attach the Label role=serviceA to the Pods described above, we only need to add that Label to the Label Selector of the Service; the Service will then find the whole set of Pod replicas carrying that Label through its Label Selector.
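As a sketch of how that wiring looks in YAML (the Service name, Pod name, and image are hypothetical; only the role=serviceA Label comes from the example above):

```yaml
# A Pod carrying the Label role=serviceA...
apiVersion: v1
kind: Pod
metadata:
  name: service-a-pod
  labels:
    role: serviceA
spec:
  containers:
    - name: app
      image: nginx:1.17
      ports:
        - containerPort: 80
---
# ...and a Service whose Label Selector matches every Pod with that Label.
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    role: serviceA
  ports:
    - port: 80          # port exposed on the ClusterIP
      targetPort: 80    # container port the traffic is forwarded to
```

Any Pod added later with the same Label is picked up by the Service automatically.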

Next, a brief introduction to the other K8S core concepts.

2.4 Replica Set

I mentioned deploying multiple Pod replicas; how is that done? K8S originally had a concept called the Replication Controller, but it has gradually been replaced by the Replica Set, which is also called the next-generation RC. In simple terms, a Replica Set defines an expected state: at any given time, the number of Pod replicas in the cluster should match the expected value.

Once created, the cluster periodically checks the number of Pods currently alive; if there are too many, it stops some, and if there are too few, it creates more. What problem does this avoid? If we set the replica count to 2 and one Pod dies, the cluster automatically creates a new Pod to ensure that there are always two Pods running.
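For illustration, a Replica Set declaring that expectation might look like this (the names are hypothetical, reusing the role=serviceA Label from earlier):

```yaml
# Hypothetical Replica Set: keep exactly two replicas of the labeled Pod alive.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: service-a-rs
spec:
  replicas: 2                  # the expected number of Pod replicas
  selector:
    matchLabels:
      role: serviceA
  template:                    # Pod template used to create new replicas
    metadata:
      labels:
        role: serviceA
    spec:
      containers:
        - name: app
          image: nginx:1.17
```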

So much for K8S, let’s go into the setup of the cluster.

3. Preparation for building K8S

I wasn't sure where to start, and I didn't want to write a pure step-by-step TODO blog. But having been through the trenches myself, I can say this is about the easiest walkthrough I have seen so far.

Some installation guides I have seen branch into many different cases, which can be confusing for a beginner. So the following install will be more direct: no special cases, just one path, run it all in one go and you're done.

  • System version: Ubuntu 18.04
  • K8S version: v1.16.3
  • Docker version: v19.03.5
  • Flannel version: v0.11.0

You might ask: can I get my own cluster just by reading this article, without any machines? I'm afraid not, so make sure you meet the preparations below.

3.1 Preparations

Let’s assume that the following is true.

Machine: two or three physical machines or VMs

System: Ubuntu 18.04, with the package sources already switched to a domestic mirror

If not, then this is where the article ends for you; thanks for watching…

3.2 Installing Docker

I won't go through all the details: just hop on the machine and create a shell script, say install_docker.sh. The one-shot script looks like this.

```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates \
    curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
```

Then run sh install_docker.sh, wait for it to finish, and verify that Docker is installed: just type docker and hit Enter.

3.3 Installing Kubernetes

Similarly, create a new shell script, say install_k8s.sh. The one-shot script is as follows.

```shell
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# write the repo file via sudo tee (a plain > redirect would fail without root)
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
# refresh package lists after adding the repo, then install
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl --allow-unauthenticated
```

Run sh install_k8s.sh, then check whether K8S is installed: type kubectl and hit Enter.

3.4 Disabling Swap

Here are the commands first, so as not to hold up your installation; we'll talk about why swap has to go afterwards.

  • Temporary: run sudo swapoff -a directly. This takes effect immediately, but swap comes back after a restart, and then K8S cannot run properly.
  • Permanent (recommended, once and for all): run sudo vim /etc/fstab, comment out the swap.img line, and save.
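After the edit, the relevant part of /etc/fstab should look roughly like this (the exact device name varies by system; /swap.img is just the common default):

```
# swap disabled for K8S: the original line is kept, just commented out
# /swap.img    none    swap    sw    0    0
```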

So, what is swap? It is the system's swap partition, which you can think of as virtual memory: when physical memory runs low, part of the disk is used in its place. Why does K8S need it turned off? Consider the difference between the speed of accessing memory and the speed of accessing disk.

K8S expects every service to stay within the CPU and memory limits of the cluster and its nodes; with swap enabled, a node can silently page to disk instead of enforcing those memory limits, making them meaningless.

4. Initialize the Master node

At this point, you are ready to install the K8S master node, so log in to the machine that will play that role.

4.1 Setting the HostName

As usual, the command first, then the explanation.

```shell
sudo hostnamectl set-hostname master-node
```

After the hostname is changed, the node names shown in the cluster are no longer the ones auto-generated by K8S, which makes them easier to read and remember. On other nodes, for example, you can set names like slave-node-1 or worker-node-2 instead of master-node.

4.2 Initializing a Cluster

Run the following command on the machine.

```shell
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

Then, pick up the guitar and wait for the command to finish.

Pay special attention here: after this command finishes, it prints a kubeadm join command, which you need to save.

It looks something like this.

```shell
kubeadm join <your-ip>:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<hash-of-your-ca-cert>
```

As the name implies, this command is used to add other nodes to the cluster. Note that the token is time-limited, usually with an expiration time of 86.4 million milliseconds, that is, 24 hours.

Once it expires, the token needs to be regenerated. If you really didn't save the command and it no longer works… I still have two remedies for you. If you did save it, skip them.

  • Token: retrieve it with the command kubeadm token list.
  • CA cert hash: recover it with the following command.

```shell
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'
```
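As a simpler alternative (assuming a reasonably recent kubeadm version), I believe you can also regenerate a complete join command in one step on the master node:

```shell
# creates a fresh token and prints the full 'kubeadm join ...' command
kubeadm token create --print-join-command
```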

4.3 Allowing Regular Users to Run kubectl

The following commands can be run in one go.

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Basically, this is for convenience: you no longer have to prefix every kubectl command on the control node with sudo.

4.4 Installing the Network Plugin

Run the following command to install Flannel.

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

If Flannel is not installed, the master node we just initialized will stay in the NotReady state. Once Flannel is installed, you can run kubectl get nodes to view the status of all nodes, and kubectl get pods --all-namespaces to check the status of all Pods in the cluster. Note that you should only proceed to the next step once the master node is Ready and all Pods are Running.

Why install a network plug-in?

That's because K8S requires Pod networks to be interoperable across all nodes in the cluster. In other words, Flannel gives containers on different nodes virtual IP addresses that are unique within the cluster, so that Pods can communicate with each other directly across nodes.

This turns complex cross-node networking into simple communication between two IP addresses, achieved mainly through a virtual layer-2 network. To a Pod, it looks as if it is talking directly to a Pod on another node, while the traffic ultimately flows out through the node's physical network card.

5. Adding Slave Nodes to the Cluster

At this point, a single-node cluster is up. What we need to do now is log in to the other server we prepared (I only have two machines; if you have three or four, just repeat this chapter for each extra one).

5.1 Setting the HostName

Run the following command.

```shell
sudo hostnamectl set-hostname slave-node
```

Since the current node is not the master, we set its hostname to slave-node.

5.2 Joining a Cluster

Just execute the kubeadm join command generated earlier. Afterwards, run kubectl get nodes on the master node and you will see that slave-node has joined the cluster.

And that's all there is to do on the Slave node.

6. Thanks for reading

That concludes this brief introduction to K8S. Due to length and time, many concepts, such as Deployment, Volume, and ConfigMap, have not been covered; I only introduced the core Pod and Service. After all, it would take more than one blog post to cover all the core concepts of K8S, and I'll go into more detail in later posts.

This is the first time I've asked for a like on my blog. I've found that seeing likes and comments when I open the blog is a great encouragement to me.

If you think this article helped you, please give it a like, a follow, a share, or a comment.

You can also search for the official account [SH Full Stack Notes] on WeChat, or simply scan the QR code to follow it.

Many thanks!