Background
Why should I learn K8s
I mentioned in the first article of my Go language series that when I first entered the industry, the company's services all ran on a single-machine architecture: database and application on one cloud server, with no object storage or CDN. A lot of data lived on that server, so when disk capacity ran low I had to set alarms, then expand, mount, and restart disks in the middle of the night. Later some fatal errors forced a rollback of the whole server's system image, losing the order records stored in the database, after which I had to go into the payment backend and issue manual refunds. And when a sudden traffic spike hit during a promotion, there was no way to upgrade smoothly. Container technology is disruptive, and container orchestration solves these problems neatly: K8s lets you deploy applications rapidly, scale up and down automatically to save costs, eliminate environment differences with containerized deployment, and cut ops costs with automated operations, so why not use it?
Why use raspberry PI to install K8s
I've torn down and rebuilt plenty of cloud servers myself. I was once an "MJJ" (a bargain hunter in the VPS community) and spent a fair amount of money on lots of cheap machines without ever doing anything with them. Building my own K8s cluster definitely needs some stable machines: overseas boxes are unstable, and domestic ones are cheap to buy but painful to renew. All things considered, Raspberry Pi is the best choice: low power consumption, permanent ownership, easy to tinker with, and networking was already sorted out, so I bought them without hesitation. The downside is that some software doesn't support the Pi. For example, the private image registry Harbor doesn't support the Arm architecture, and neither does the PHP Hyperf framework, so there are more pits for me to step into.
K8s cluster introduction & popular science
What is K8s
Its full name is Kubernetes; with eight letters between the K and the s, it's abbreviated K8s. It is a container orchestration and management tool, a cluster system built on container technology that provides deployment, dynamic scaling, service discovery, and other orchestration capabilities for containerized applications. It was originally designed and developed by Google, evolving from Google's internal Borg platform. K8s is really operations knowledge, and I'm a back-end engineer; this article is just a light record of my learning journey, written as a pure novice, so it won't explain K8s internals in depth. (I couldn't even if I tried, lol)
K8s Basic concepts
Pod
The Pod is K8s's most basic schedulable resource. A Pod can hold one or more containers. Many K8s concepts are quite abstract; a Pod, for instance, can be thought of as a logical host for its containers. A Pod can only exist on one physical machine, and the containers inside a Pod share a network and file system. When deploying an application to a K8s cluster, you write a YAML configuration file to declare which container image the Pod uses, which Node to schedule it to, limits on the resources the Pod may use, which ports to expose, which data volumes to mount, autoscaling metrics, health check methods, and so on.
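To make the Pod concept concrete, here is a minimal sketch of such a YAML description. The name, image, and numbers are illustrative examples, not taken from my cluster:

```yaml
apiVersion: v1
kind: Pod                  # the resource type, see the kind field discussion below
metadata:
  name: demo-nginx         # hypothetical Pod name
spec:
  containers:
    - name: nginx
      image: nginx:1.21    # on the Pi this would need an Arm-compatible image
      ports:
        - containerPort: 80   # port exposed by the container
      resources:
        limits:
          memory: "128Mi"  # limit the resources the Pod may use
          cpu: "250m"
```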
The Master node
A node usually corresponds to one physical machine. There are two kinds: Master nodes and Node (worker) nodes. As the name suggests, the Master node is the brain of the cluster and acts as the controller. The Master runs several basic components whose roles are important to understand:
- Etcd: A highly available distributed strongly consistent key-value pair database (summarized in one sentence, with a look at the raft algorithm associated with it) that holds the network configuration and the state of resource objects for all nodes.
- Api Server: responsible for providing the external API service. The other Master components call the Api Server to carry out their own functions.
- Controller Manager: responsible for maintaining cluster state, implementing things like resource monitoring, dynamic scaling, rolling updates, and fault detection. There is one controller per resource type. For example, Deployment and ReplicationController are application resource controllers (maintaining Pod replicas, application upgrades, automatic scaling), DaemonSet runs per-node resources such as log collection and monitoring agents, and Job is the task controller. You can see the resource type in the kind field of the YAML description file when an application is deployed.
- Scheduler: Listens for new Pod and Node information and schedules the new Pod to the appropriate Node.
The Node Node
A Node node is the workhorse of a K8s cluster: the carrier that applications are actually deployed on. Its key components:
- Kube-proxy: the load-balancing component for services, forwarding requests for a service to the appropriate Pod.
- Kubelet: responsible for the concrete work of the K8s cluster on each machine: reporting Pod and Node status, performing health checks, listening for task assignments from the scheduler, mounting data volumes, and more.
Since both networks and machines are unreliable, production K8s clusters are usually highly available, which of course means more than one Master node, not just one.
Other components
The cluster overlay network component implements the Container Network Interface (CNI), a standard design specification that reflects K8s's openness and tolerance: a lot can be customized. CNI is a container network specification proposed by CoreOS (the startup that developed Etcd, since acquired by the famous RedHat) for dynamically setting up the right network configuration and resources when a container is created or destroyed. Flannel and Calico are the main overlay network components; I installed Calico myself. The details of how they work are quite involved; the abstract understanding is that Etcd stores the network state, and a virtual network adapter is created on each Node's machine. Calico is based on the BGP protocol and Linux's routing and forwarding mechanism, while Flannel routes IPv4 addresses over the UDP protocol. Flannel is simple and easy to understand, while Calico is lower-level, more flexible, and has more features.
Hardware description
Three Raspberry Pi 4B 8G boards, running a 64-bit Raspberry Pi system: github.com/openfans-co… A system installation tutorial is in my personal public account article: mp.weixin.qq.com/s/nkppkqU7u… Power comes from Xiaomi's 5-port charger, with a DIY building-block chassis.
Prerequisite tasks
The three Raspberry Pis need some uniform settings to improve cluster stability and meet the cluster installation requirements.
Change the Raspberry Pi hostname
sudo vim /etc/hosts
For example, on the Master node, change the first line from 127.0.0.1 raspberrypi to 127.0.0.1 b41.
I renamed my three Raspberry Pis b41 / b42 / b43.
One more file needs to be changed:
sudo vim /etc/hostname
Change each of the three Raspberry Pis to its desired name in turn, so it's easy to tell who is who later.
Setting a Static IP address
A static IP is set so that the machines keep the same address after a restart instead of being assigned a new dynamic IP, and can therefore always "talk" to each other.
sudo vim /etc/dhcpcd.conf
interface wlan0
static ip_address=<your Pi's intranet IP>/24      # e.g. 192.168.2.181
static routers=<intranet gateway IP>
static domain_name_servers=114.114.114.114        # custom DNS
The intranet gateway is the router's management IP address. B41's intranet IP is 192.168.2.181, and the other two Raspberry Pis are set the same way with their own addresses.
Here, interface wlan0 sets a static address for the wifi interface, which applies as long as the Pi connects to the router over wifi rather than a network cable. For a wired connection, configure eth0 instead. You can use ifconfig to check how many network adapters the Raspberry Pi has.
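If ifconfig isn't installed, listing /sys/class/net is a quick alternative that works on any Linux box (the interface names beyond lo will differ from machine to machine):

```shell
# Every network interface shows up as an entry here;
# look for wlan0 (wifi) or eth0 (wired) to know which one to configure
ls /sys/class/net
```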
To be safe I also pinned the static IPs on the router (an OpenWrt router: Network -> IP/MAC binding):
Disable the Swap memory partition
What is Swap
This is very important; this is where I learned about OOM (Out Of Memory). Swap memory is a partition reserved on the machine's hard disk: when physical memory runs low, the machine hands running programs a slice of disk space to use instead. The Swap partition exists so that a program that runs out of memory doesn't crash outright, which is so useful that today's Linux systems allocate a Swap partition by default to improve stability. In my last article I mentioned installing GitLab on a 1-core/2GB machine and tuning many parameters, with great difficulty: GitLab bundles many software packages and eats a lot of memory, and 2 GB is usually not enough, so I manually set up a 2 GB/4 GB Swap partition for that server. Even then I sometimes got 502s and slow page loads, so I gave up.
Checking memory usage after installing the Raspberry Pi system, you can see the Swap partition has 1 GB of space.
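One way to check, reading straight from /proc (works on any Linux system):

```shell
# MemTotal / SwapTotal come straight from the kernel;
# after swap is disabled, SwapTotal should read 0 kB
grep -E 'MemTotal|SwapTotal' /proc/meminfo
```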
Why should Swap be closed
Because K8s is a cross-machine cluster environment, it needs to locate container failures accurately and provide a stable, healthy environment for container orchestration, rather than letting inexplicable errors ripple through the whole cluster. A puzzling class of human-confusing errors happens when a container leaks memory, runs out, and some process gets killed: the service becomes unavailable while the container still looks healthy on the surface (a container is in essence just a special process with some resources isolated and limited, and that special process may have started many other tasks). This corrupts container scheduling for the whole cluster. So kubelet (the K8s node agent) made disabling Swap mandatory after version 1.8.
How to close Swap
Simply comment out the swap entry in /etc/fstab:
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
sed is a command that lets you edit text from scripts; its full name is stream editor.
- -r enables extended regular expressions, i.e. powerful regex processing of the text
- -i modifies the source file in place; after -i is used, the terminal does not print the file contents
- 's/.*swap.*/#&/': the character right after the s is the delimiter, which can be customized; the three delimiters here split out the search pattern and the replacement. In the pattern .*swap.*, the dot stands for any character and the star for any number of repetitions, so the pattern matches the whole line containing swap; & stands for the matched content, and putting a # in front of it comments that line out.
- There is no g at the end, meaning only the first match on each line is replaced.
Take a look at the contents of the source file:
cat /etc/fstab
And after running sed -ri 's/.*swap.*/#&/' /etc/fstab, the swap line is commented out:
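A safe way to convince yourself before touching the real /etc/fstab is to try the same sed on a throwaway copy; the file path and contents below are made up purely for the demo:

```shell
# Build a fake fstab with one swap entry
printf '%s\n' 'proc /proc proc defaults 0 0' '/var/swap none swap sw 0 0' > /tmp/fstab.demo

# Same substitution as above, pointed at the demo file
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

# The swap line now starts with '#'; the proc line is untouched
cat /tmp/fstab.demo
```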
Set host resolution
The hosts file records name-to-IP mappings and acts as your own machine's local domain name resolution system: when you access a name, the system consults the hosts file first. Here it mainly makes it convenient for the machines inside the LAN to reach each other by name; all three machines need this set up.
sudo cat >> /etc/hosts << EOF
192.168.2.181 k8smaster
192.168.2.182 k8snode1
192.168.2.184 k8snode2
EOF
Use cat >> to append to a file.
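One pitfall worth knowing: when you prefix this with sudo, the >> redirect is performed by the calling shell before sudo takes effect, so it fails for a non-root user; sudo tee -a is the usual workaround. A harmless demo of the pattern on a scratch file (path invented for the demo):

```shell
# Seed a scratch hosts file
printf '127.0.0.1 localhost\n' > /tmp/hosts.demo

# tee -a appends everything from the heredoc
# (no sudo needed for /tmp; for /etc/hosts you'd use sudo tee -a)
tee -a /tmp/hosts.demo > /dev/null << EOF
192.168.2.181 k8smaster
192.168.2.182 k8snode1
192.168.2.184 k8snode2
EOF

cat /tmp/hosts.demo
```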
Disable selinux
Nothing about it can be seen through the usual system commands; the maintainers of this Raspberry Pi system say it isn't enabled, though the kernel supports it. SELinux is a Linux kernel module, a Linux security subsystem developed by the NSA. It is a very complex thing, and it seems to be commonly turned off here, mainly because it is very error-prone and problems are hard to locate.
Open the Cgroup
Cgroup is a mechanism provided by the Linux kernel to limit, account for, and isolate groups of processes. Docker implements container resource limits through Cgroup technology. My system already has this kernel setting enabled by default.
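A quick sanity check, assuming a kernel that exposes /proc/cgroups (stock Raspberry Pi OS does):

```shell
# Each line is: subsys_name  hierarchy  num_cgroups  enabled
# kubelet and docker care most about the memory and cpu controllers
cat /proc/cgroups
```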
Enabling ipv4 forwarding
From what I found, enabling IPv4 forwarding makes the Linux system behave like a router: it recognizes packets that are not destined for the system itself and can send them on to another network.
sudo vim /etc/sysctl.conf
net.ipv4.ip_forward=1   # uncomment this line
Then apply the change with sudo sysctl -p.
Disabling the firewall
This avoids impacting network performance. For security, K8s basically lives in an intranet environment, and only a few ports are exposed when services are published externally.
sudo /usr/sbin/iptables -P FORWARD ACCEPT   # permanent
Turn off some services
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
The reason: I don't use the graphical desktop environment and configure the wifi connection through a config file, so this service is disabled to ensure good compatibility.
Install some common software
Common software summarized by myself:
sudo apt-get install -y net-tools lrzsz tree screen lsof tcpdump
The formal installation
Install kubeadm, kubelet and kubectl
Install them on the Master and on both Node machines:
kubeadm
The official tool for rapidly deploying a K8s cluster.
kubelet
The agent running on every node; the Master directs it to manage the containers on its own Node. It is the super management tool on each Node.
kubectl
The K8s cluster command line tool, used to manage the cluster and deploy applications.
Update the software list first:
apt-get update && apt-get install -y apt-transport-https
Add the software key, add the address of the source server of the software update:
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
To install kubeadm, kubelet and kubectl, specify the cluster version to install:
sudo apt-get install -y kubelet=1.20.0-00 kubeadm=1.20.0-00 kubectl=1.20.0-00
Before installing the K8s cluster, we need to pull some images locally, because the remote pull that the kubeadm script attempts would otherwise error out. We pull the images from a reachable mirror first and re-tag them; the script will then recognize the local images and not report an error.
#!/bin/bash
# mirrorgcrio has not been updated; check the official site for available versions
#MY_REGISTRY=mirrorgcrio
MY_REGISTRY=registry.aliyuncs.com/google_containers
#MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com
K8S_VERSION="1.20.0"

echo "=========================================================="
echo "Pull Kubernetes for x64 v$K8S_VERSION Images from docker.io ......"
echo "=========================================================="

## Pull images
docker pull ${MY_REGISTRY}/kube-apiserver:v$K8S_VERSION
docker pull ${MY_REGISTRY}/kube-controller-manager:v$K8S_VERSION
docker pull ${MY_REGISTRY}/kube-scheduler:v$K8S_VERSION
docker pull ${MY_REGISTRY}/kube-proxy:v$K8S_VERSION
docker pull ${MY_REGISTRY}/etcd:3.4.13-0
docker pull ${MY_REGISTRY}/pause:3.2
#docker pull ${MY_REGISTRY}/coredns-arm64:1.7.0
docker pull coredns/coredns:1.7.0

## Add the tags kubeadm expects
docker tag ${MY_REGISTRY}/kube-apiserver:v$K8S_VERSION k8s.gcr.io/kube-apiserver:v$K8S_VERSION
docker tag ${MY_REGISTRY}/kube-scheduler:v$K8S_VERSION k8s.gcr.io/kube-scheduler:v$K8S_VERSION
docker tag ${MY_REGISTRY}/kube-controller-manager:v$K8S_VERSION k8s.gcr.io/kube-controller-manager:v$K8S_VERSION
docker tag ${MY_REGISTRY}/kube-proxy:v$K8S_VERSION k8s.gcr.io/kube-proxy:v$K8S_VERSION
docker tag ${MY_REGISTRY}/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag ${MY_REGISTRY}/pause:3.2 k8s.gcr.io/pause:3.2
#docker tag ${MY_REGISTRY}/coredns-arm64:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag coredns/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0

echo "=========================================================="
echo "Pull Kubernetes for x64 v$K8S_VERSION Images FINISHED."
echo "by openthings@https://my.oschina.net/u/2306127."
echo "=========================================================="
Save it as a shell script and execute it. All three machines need to run it to pull the images.
Master Node Installation
Execute the following command only on the Master node machine:
# Specify the IP address and version 1.20.0:
sudo kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.20.0 --apiserver-advertise-address=192.168.2.181 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=all
- --kubernetes-version: specifies the cluster version.
- --apiserver-advertise-address: kubeadm uses the default network interface eth0 (usually the intranet IP) as the Master node's advertise address. To use a different network interface, set it with the --apiserver-advertise-address= parameter.
- --pod-network-cidr: specifies the IP address range for the Pod network, which depends on which network plug-in you choose in the next step. I use the Calico network in this article, so I specify 192.168.0.0/16.
- --ignore-preflight-errors=all: ignore preflight check errors (on the Raspberry Pi this skips warnings such as the missing hugetlb cgroup controller).
Wait a few minutes, the execution is complete:
[apiclient] All control plane components are healthy after 37.009846 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node b41 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node b41 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: kj236o.bff6bhbuiz7oxlqi
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.181:6443 --token 3fkqb5.******** --discovery-token-ca-cert-hash sha256:d516b2d07e3f2e99a1445a20348295258d8972414a4264ff0d79d389c8116b37 --ignore-preflight-errors=all
root@b41:/etc/docker#
The output shows how to join nodes to the cluster.
Adding Node nodes to the cluster
Use SSH to connect to the Node and run the following command:
kubeadm join 192.168.2.181:6443 --token njr783.****** --discovery-token-ca-cert-hash sha256:29f43006779a5c7e59e86d199984f435129a154b3a311dbe42aeab9c571c9f2f --ignore-preflight-errors=all
An error will be reported during installation:
failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
When K8s is installed, Docker's cgroup driver is cgroupfs while K8s needs systemd. Adding one line of Docker configuration solves this directly:
vim /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"]
}
Then restart Docker. Note that this system comes with Docker and you don’t need to install it yourself.
systemctl restart docker
I installed this many times over, and when something went wrong along the way, such as accidentally installing Flannel, the easy fix was to simply reset and redo the whole installation:
kubeadm reset
At this point, view all Pods on the Master machine:
kubectl get pods --all-namespaces
Install the Calico
At this point the overlay network plug-in is still missing; I installed Calico.
Yaml
To deploy an application in K8s, you must deal with YAML files. YAML is a common file format often used for configuration, similar in spirit to the Dockerfile format used to build images, with indentation expressing hierarchy. Describing resources as YAML in K8s gives a clear record of what was done in the cluster and what was deployed, rather than typing a command and having no idea a few days later what it did. First I download a YAML file published on the Internet; as a novice I really don't have the strength to write a complex YAML file myself.
wget https://docs.projectcalico.org/v3.11/manifests/calico.yaml
You can use cat calico.yaml to take a look at this file, and you'll see many kind: CustomResourceDefinition entries: custom resource types in K8s. Everything in Kubernetes is a resource.
To complete the installation, run the following command on the Master machine:
kubectl apply -f calico.yaml
Looking at all the Pods: this is the 1.20 K8s cluster I installed six months ago. For this demo I had deleted the Calico network plug-in and the DashBoard panel, so now I'm installing them again and going through the process once more, since I kept no record before; here is a supplementary explanation. A small aside on how to delete an application deployed from a YAML description file:
kubectl delete -f kubernetes-dashboard.yaml
If a Pod stays in the Terminating state forever, you can delete it forcibly with the following command:
kubectl delete pod dashboard-metrics-scraper-79c5968bdc-f82g2 --force --grace-period=0 -n kubernetes-dashboard
- -n is followed by the namespace
- pod is followed by the Pod name
- --grace-period=0 sets the grace period to zero, deleting immediately
After a forced deletion some leftovers remain; you need to manually delete the residual virtual network adapter and configuration on the machine:
ifconfig cali70ee51d9a9b down
ifconfig tunl0 down
rm -rf /etc/cni/net.d
ImagePullBackOff
The image kept failing to pull. We need to configure several Docker remote registry mirror addresses; note that all the machines need this configured:
cat /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"],
"registry-mirrors": [
"https://uyqa6c1l.mirror.aliyuncs.com",
"https://hub-mirror.c.163.com",
"https://dockerhub.azk8s.cn",
"https://reg-mirror.qiniu.com",
"https://registry.docker-cn.com"
]
}
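A small precaution I'd suggest (not part of the original setup): validate the JSON before restarting Docker, because a syntax error in daemon.json prevents the daemon from starting at all. The scratch path below is invented for the demo; the real file lives at /etc/docker/daemon.json:

```shell
# Write a scratch copy of the config for the demo
cat > /tmp/daemon.demo.json << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF

# json.tool exits non-zero on invalid JSON, so it doubles as a syntax check
python3 -m json.tool /tmp/daemon.demo.json
```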
Next, there was a Pod stuck in an error state; we need to check its log to see what the problem is. Note the namespace parameter: every resource in K8s lives in exactly one namespace (a few cluster-level resources live in none). The namespace concept is also abstract; different services can be isolated with different namespaces, and many problems require switching to the resource's namespace to deal with, which is a little troublesome for a novice and takes some tinkering to understand.
kubectl logs calico-kube-controllers-6b8f6f78dc-5j5bt --namespace kube-system
The log showed: dial tcp 10.96.0.1:443: connect: no route to host. The following sequence fixed it:
systemctl stop kubelet
systemctl stop docker
iptables --flush        # remove all firewall rules; don't be afraid, K8s will rebuild its own
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
When everything is ready, kubectl get pods --all-namespaces shows that all the Pods are working. So how do you install applications now that the cluster is up? Next, install a cluster panel to check the cluster's status.
Install the DashBoard
Below is the official DashBoard. It doesn't look great: many things are not intuitive, and it has few features for beginners. Most importantly, I didn't record my own installation process, so this time I used a new DashBoard:
Install Kuboard
Its design philosophy suits beginners like me: start using Kubernetes first, then come to understand the various concepts. It improves the convenience of K8s operations by putting as much ops work as possible into the web interface, and it's all in Chinese; it's a panel plug-in developed in China. According to the official site, over a thousand companies already use this panel in production, which suggests it's quite mature. Official site: kuboard.cn, which also provides a very good K8s tutorial. Follow the official one-click installation tutorial:
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
Wait for the pod to be created:
watch kubectl get pods -n kuboard
The wait is a little long, really longer than the official DashBoard. Then visit the Master node's IP on the panel port 30080: http://192.168.2.181:30080/
You can log in with the default account and password and immediately see the default K8s cluster. You can add and manage other clusters, and add users and user groups to the panel. The basic functions are all there; I'll keep tinkering from here.
Conclusion
Installing a cluster with kubeadm comes down to two simple commands (create cluster, join cluster), but there are plenty of pitfalls along the way, and maybe some details or problems here aren't explained clearly. For basically every problem I hit, I could find a solution through a search engine; as long as you have the courage to redo things over and over without fear of mistakes (hard-headed), you can definitely finish, and getting the cluster running from scratch gives a great sense of achievement. My wording is casual and may not be precise enough; for the small points I wasn't clear on I checked and summarized a lot of scattered material, and I didn't touch the complicated parts. As a novice, mistakes are inevitable, so corrections on anything wrong are welcome. Next I'll install some real service applications in the cluster to test it: for instance I'd like to install the GitLab from my earlier simple CI/CD article and try deploying the big Agiao game as a combined exercise, haha; some people reported playing it for a long time, it's addictive. Everyone reading this far is a real boss; thank you for your support. I look forward to getting the next article out soon, and your comments, likes, and interaction are my motivation!