A key step in learning Kubernetes is learning to build a K8s cluster. In this article, I summarize the cluster-building techniques I have recently collected and share them with you, free of charge. Let's skip the preamble and get straight to the practical steps!

01. Prepare the system environment

To install and deploy a Kubernetes cluster, you first need to prepare the machines. The most direct way is to apply for a few virtual machines on a public cloud (such as Alibaba Cloud). If conditions permit, it is even better to use a few local physical servers to form the cluster. Either way, the machines need to meet the following conditions:

  • A 64-bit Linux operating system with kernel version 3.10 or above, which satisfies the requirements for installing Docker;

  • The machines must be able to communicate with each other over the network; this is the prerequisite for future network communication between containers;

  • The machines must be able to access the gcr.io and quay.io Docker registries, because a small number of images need to be pulled from them;

  • A single machine should have at least 2 CPU cores and 8 GB of memory. Smaller machines also work, but the number of Pods that can be scheduled will be rather limited;

  • At least 30 GB of disk space, which is used to store Docker images and related log files.

In this experiment, we prepared two virtual machines, whose specific configurations are as follows:

  • 2 CPU cores, 2 GB of RAM, and 30 GB of disk space;

  • Server edition of Ubuntu 20.04 LTS, with Linux kernel 5.4.0;

  • The machines can reach each other over the internal network, and outbound Internet access is unrestricted.

02. Introduction to Kubernetes cluster deployment tool Kubeadm

As a typical distributed system, Kubernetes has always been a big deployment obstacle for beginners entering the Kubernetes world. Early deployments of Kubernetes relied on community-maintained scripts, which involved binary compilation, configuration files, and kube-apiserver authorization files. Nowadays the common approach is to automate these tedious steps with operations tools such as SaltStack and Ansible, but even so, the deployment process remains very cumbersome for beginners.

It was this pain point that led the Kubernetes community to launch kubeadm, a standalone one-click deployment tool that allows you to quickly deploy a Kubernetes cluster with a few simple commands. In the following sections, I will demonstrate how to use Kubeadm to deploy a simple Kubernetes cluster.
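
To give a sense of how simple this is, the overall workflow boils down to roughly two commands. This is only a sketch; <master-ip>, <token> and <hash> are placeholders, and the concrete join parameters generated later in this article are what you would actually use:

# On the Master node: initialize the control plane
kubeadm init
# On each Worker node: join the cluster using the command printed by "kubeadm init"
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>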

03. Install Kubeadm and Docker environment



1) Edit the operating system package source configuration and add the Docker and Kubernetes image sources by running the following commands:

# Add the Aliyun Docker mirror source
[root@centos-linux ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Install Docker
[root@centos-linux ~]# yum -y install docker-ce-18.09.9-3.el7
# Enable the Docker service at boot
[root@centos-linux ~]# systemctl enable docker
# Add the Aliyun Kubernetes yum mirror source
[root@centos-linux ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2) Install kubeadm and the related components with yum by running the following commands:

[root@centos-linux ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
[root@centos-linux ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

During the installation of kubeadm, the binaries of kubelet, kubectl and kubernetes-cni, several core Kubernetes components, are installed automatically.

3) Start the Docker service and adjust the related restrictions

The Docker configuration needs to be adjusted before running the Kubernetes deployment. First, edit the system /etc/default/grub file and add the following parameters to the GRUB_CMDLINE_LINUX configuration item:

GRUB_CMDLINE_LINUX=" cgroup_enable=memory swapaccount=1"

Run the following command to save the settings and restart the server:

root@kubernetesnode01:/opt/kubernetes-config# reboot
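
On some distributions the change to /etc/default/grub only takes effect after the GRUB configuration is regenerated, so if the parameters do not show up in /proc/cmdline after the reboot, regenerate the configuration and reboot again. A hedged sketch, assuming the standard tools exist on your system:

# Ubuntu/Debian style
update-grub
# CentOS/RHEL style (assumes BIOS boot; the output path differs on UEFI systems)
grub2-mkconfig -o /boot/grub2/grub.cfg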

The previous modification mainly resolves the Docker warning "WARNING: No swap limit support" that may appear later. Next, edit the /etc/docker/daemon.json file and add the following content:

# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ze43vnb.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Run the following commands to restart Docker and confirm the cgroup driver:

# systemctl restart docker
# docker info | grep Cgroup
Cgroup Driver: systemd

The above change addresses the warning about the Docker cgroup driver; the recommended driver is "systemd". It should be emphasized that these modifications only solve the specific problems the author ran into during installation; for other problems encountered in practice, you still need to consult the relevant documentation. Finally, note that Kubernetes disables the use of virtual memory, so swap must be turned off before running kubeadm, otherwise the Kubernetes initialization will fail. The command is as follows:

# swapoff -a

This command only disables swap temporarily. To make sure swap stays off after the system restarts, you also need to comment out the swap line in the /etc/fstab file (for example with vim).
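
If you prefer not to edit the file by hand, the swap line can also be commented out with a one-liner. This is only a sketch that assumes GNU sed; check /etc/fstab afterwards to confirm the result:

# Comment out every uncommented line that mounts swap
sed -ri 's/^([^#].*\sswap\s+.*)$/# \1/' /etc/fstab
# Verify the result
grep swap /etc/fstab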

04. Deploy the Kubernetes Master node

In Kubernetes, the Master node is the control node of the cluster. It is composed of three closely cooperating, independent components: kube-apiserver is responsible for API services, kube-scheduler for scheduling, and kube-controller-manager for container orchestration. The persistent data of the whole cluster is handled by kube-apiserver and stored in etcd. The Master node can be deployed directly with a single kubeadm command, but here we want to deploy a relatively complete Kubernetes cluster, with some experimental features enabled, through a configuration file. Specifically, create a YAML file for kubeadm (kubeadm.yaml) in the system's /opt/kubernetes-config/ directory with the following content:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    horizontal-pod-autoscaler-use-rest-clients: "true"
    horizontal-pod-autoscaler-sync-period: "10s"
    node-monitor-grace-period: "10s"
apiServer:
  extraArgs:
    runtime-config: "api/all=true"
kubernetesVersion: "v1.20.0"

In the above YAML configuration, the setting 'horizontal-pod-autoscaler-use-rest-clients: "true"' means that the kube-controller-manager deployed later will be able to perform automatic horizontal Pod scaling based on Custom Metrics; interested readers can look this up themselves. And "v1.20.0" is the Kubernetes version number that kubeadm will deploy for us.

It should be noted that if the corresponding Docker images cannot be downloaded due to domestic network restrictions, you can find the relevant images on domestic registries (such as Alibaba Cloud) according to the error messages, and then re-tag them before continuing the installation. Details are as follows:

# Pull the Kubernetes component images from the Aliyun Docker repository
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

After downloading, re-tag these Docker images. The specific command is as follows:

# Re-tag the images
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0

At this point, you can view the Docker image information with the docker command:

root@kubernetesnode01:/opt/kubernetes-config# docker images
REPOSITORY                                                                           TAG        IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                                                                v1.18.1    4e68534e24f6   2 months ago   117MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64                v1.18.1    4e68534e24f6   2 months ago   117MB
k8s.gcr.io/kube-controller-manager                                                   v1.18.1    d1ccdd18e6ed   2 months ago   162MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64   v1.18.1    d1ccdd18e6ed   2 months ago   162MB
k8s.gcr.io/kube-apiserver                                                            v1.18.1    a595af0107f9   2 months ago   173MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64            v1.18.1    a595af0107f9   2 months ago   173MB
k8s.gcr.io/kube-scheduler                                                            v1.18.1    6c9320041a7b   2 months ago   95.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64            v1.18.1    6c9320041a7b   2 months ago   95.3MB
k8s.gcr.io/pause                                                                     3.2        80d28bedfe5d   4 months ago   683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                           3.2        80d28bedfe5d   4 months ago   683kB
k8s.gcr.io/coredns                                                                   1.6.7      67da37a9a360   4 months ago   43.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                         1.6.7      67da37a9a360   4 months ago   43.8MB
k8s.gcr.io/etcd                                                                      3.4.3-0    303ce5db0e90   8 months ago   288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64                      3.4.3-0    303ce5db0e90   8 months ago   288MB
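
Since the pull-and-retag steps are repetitive, they can also be scripted. The following bash loop is only a convenience sketch of the same commands shown above; the image list and versions are assumed to match the kubeadm.yaml used earlier:

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for image in kube-apiserver-amd64:v1.20.0 kube-controller-manager-amd64:v1.20.0 \
             kube-scheduler-amd64:v1.20.0 kube-proxy-amd64:v1.20.0 \
             pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
  docker pull ${MIRROR}/${image}
  # Strip the "-amd64" suffix so the target tag matches what kubeadm expects
  docker tag ${MIRROR}/${image} k8s.gcr.io/${image/-amd64/}
done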

After that, run the kubeadm deployment command to complete the deployment of the Kubernetes Master control node. The command and its result are as follows:

root@kubernetesnode01:/opt/kubernetes-config# kubeadm init --config kubeadm.yaml --v=5

...

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube

 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

 export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

 https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.13:6443 --token yi9lua.icl2umh9yifn6z9k \

   --discovery-token-ca-cert-hash sha256:074460292aa167de2ae9785f912001776b936cec79af68cec597bd4a06d5998d

As can be seen from the above deployment execution results, kubeadm will generate the following commands after successful deployment:

kubeadm join 10.211.55.13:6443 --token yi9lua.icl2umh9yifn6z9k \
    --discovery-token-ca-cert-hash sha256:074460292aa167de2ae9785f912001776b936cec79af68cec597bd4a06d5998d

The kubeadm join command is used to add more Worker nodes to the cluster; it will be used later when the Worker node is deployed. In addition, kubeadm prompts us with the commands that need to be run before using the Kubernetes cluster for the first time:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

These configuration commands are needed because Kubernetes clusters require encrypted access by default. They copy the security configuration file of the newly deployed cluster into the current user's .kube directory, and kubectl will by default use the authorization information in that directory to access the cluster. If this is not done, you would need to set the KUBECONFIG environment variable every time you use the cluster, to tell kubectl where the security configuration file is located.

Now you can use the kubectl get command to check the status of the Kubernetes node:

# kubectl get nodes
NAME                  STATUS     ROLES                  AGE     VERSION
centos-linux.shared   NotReady   control-plane,master   6m55s   v1.20.0

The command output shows that the Master node is in the NotReady state. You can run the kubectl describe command to view the node details and find out the reason:

# kubectl describe node centos-linux.shared

This command gives very detailed information about the node object, such as its status and events, and it is also the most important means of troubleshooting a Kubernetes cluster. The output contains the following information:

...
Conditions:
...
Ready   False   ...   KubeletNotReady   runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
...

The node is in the "NotReady" state because no network plug-in has been deployed yet. To verify this, you can use kubectl to check the status of each Kubernetes system Pod on this node:

# kubectl get pods -n kube-system

NAME                                       READY   STATUS    RESTARTS   AGE

coredns-66bff467f8-l4wt6                   0/1     Pending   0          64m

coredns-66bff467f8-rcqx6                   0/1     Pending   0          64m

etcd-kubernetesnode01                      1/1     Running   0          64m

kube-apiserver-kubernetesnode01            1/1     Running   0          64m

kube-controller-manager-kubernetesnode01   1/1     Running   0          64m

kube-proxy-wjct7                           1/1     Running   0          64m

kube-scheduler-kubernetesnode01            1/1     Running   0          64m

kube-system is the Namespace reserved by Kubernetes for system components. Note that it is not a Linux Namespace, but a unit that Kubernetes uses to divide different workspaces. Returning to the command output, you can see that the network-dependent Pods, such as CoreDNS, are all in the Pending state, which means the networking of the Master node has not been deployed yet.

05. Deploy the Kubernetes network plug-in

Because no network plug-in is deployed on the Master node, its status is NotReady. Next, we deploy the network plug-in. In line with Kubernetes' "everything is a container" design philosophy, the network plug-in also runs in the system as a separate Pod, so deploying it only requires executing the "kubectl apply" command. Take the Weave network plug-in as an example:

# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

After the deployment is complete, run the “kubectl get” command to recheck the Pod status:

# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-l4wt6                   1/1     Running   0          116m
coredns-66bff467f8-rcqx6                   1/1     Running   0          116m
etcd-kubernetesnode01                      1/1     Running   0          116m
kube-apiserver-kubernetesnode01            1/1     Running   0          116m
kube-controller-manager-kubernetesnode01   1/1     Running   0          116m
kube-proxy-wjct7                           1/1     Running   0          116m
kube-scheduler-kubernetesnode01            1/1     Running   0          116m
weave-net-746qj                            2/2     Running   0          14m

In the output, weave-net-746qj is a newly added Pod under kube-system; it is the control component of the container network plug-in on each node.
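
Weave Net runs as a DaemonSet, so once more nodes join the cluster you can confirm that one weave-net Pod is running on each node. A quick check (the label selector assumes the default Weave manifest; output will vary with your environment):

# List the weave-net Pods together with the node each one runs on
kubectl get pods -n kube-system -l name=weave-net -o wide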

At this point, the Kubernetes Master node is deployed, and if you only need a single-node Kubernetes, you are ready to use it. By default, however, Kubernetes’ Master node does not run user pods and requires additional adjustments, which are covered at the end of this article.

06. Deploy the Worker node

To build a complete Kubernetes cluster, we continue here with how to deploy the Worker node. In fact, a Kubernetes Worker node is almost identical to the Master node: both run a kubelet component. The main difference is that during "kubeadm init", after kubelet starts, the Master node also automatically runs kube-apiserver, kube-scheduler and kube-controller-manager.

Before the actual deployment, all the steps in the section "Install Kubeadm and Docker environment" need to be performed on every Worker node, just as on the Master node. Then execute on the Worker node the "kubeadm join" command that was generated when the Master node was deployed, as follows:

root@kubenetesnode02:~# kubeadm join 10.211.55.6:6443 --token jfulwi.so2rj5lukgsej2o6     --discovery-token-ca-cert-hash sha256:d895d512f0df6cb7f010204193a9b240e8a394606090608daee11b988fc7fea6 --v=5


...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
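
The token printed by "kubeadm init" is only valid for a limited time (24 hours by default). If it has already expired by the time you add a Worker node, a fresh join command can be generated on the Master node; a sketch:

# Run on the Master node: create a new token and print the full join command
kubeadm token create --print-join-command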

To make it easier to run kubectl commands on the Worker node after it joins the cluster, the following configuration is needed:

root@kubenetesnode02:~# mkdir -p $HOME/.kube
# Copy the config file from the Master node's $HOME/.kube/ directory to the same directory on the Worker node
root@kubenetesnode02:~# scp [email protected]:$HOME/.kube/config $HOME/.kube/
# Set the proper ownership
root@kubenetesnode02:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then you can run the "kubectl get nodes" command on the Worker or Master node to check the node status, as follows:

root@kubernetesnode02:~# kubectl get nodes
NAME               STATUS     ROLES    AGE   VERSION
kubenetesnode02    NotReady   <none>   33m   v1.18.4
kubernetesnode01   Ready      master   29h   v1.18.4

The output shows that the Worker node is in the NotReady state. Check the node description as follows:

root@kubernetesnode02:~# kubectl describe node kubenetesnode02

...

Conditions:

...

Ready False ... KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

...

According to the description, the Worker node is NotReady because the network plug-in has not been deployed on it yet; you can fix this by following the steps in "Deploy the Kubernetes network plug-in". Note, however, that deploying the network plug-in also deploys kube-proxy, which involves pulling images from the k8s.gcr.io registry. If the external network cannot be accessed, the network deployment may fail. In that case, download the images from a domestic registry and re-tag them, as follows:

# Pull the necessary images from the Aliyun registry
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
# Re-tag the images
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2

If everything is normal, continue to check the node status:

root@kubenetesnode02:~# kubectl get node
NAME               STATUS   ROLES    AGE     VERSION
kubenetesnode02    Ready    <none>   7h52m   v1.20.0
kubernetesnode01   Ready    master   37h     v1.20.0

You can see that the Worker node's status has become "Ready". Careful readers may notice, however, that the ROLES column of the Worker node is empty, unlike the Master node which shows "master". This is because a newly installed Kubernetes node sometimes lacks the ROLES information; in that case you can add it manually by running the following command:

root@kubenetesnode02:~# kubectl label node kubenetesnode02 node-role.kubernetes.io/worker=worker

Run the node status command again to see the normal display. The command output is as follows:

root@kubenetesnode02:~# kubectl get node

NAME               STATUS   ROLES    AGE   VERSION

kubenetesnode02    Ready    worker   8h    v1.18.4

kubernetesnode01   Ready    master   37h   v1.18.4

At this point, we have completed the deployment of a Kubernetes cluster with one Master node and one Worker node. As an experimental environment, it already provides the basic functions of a Kubernetes cluster!
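
As a simple smoke test of the new cluster, you can run a throwaway Deployment and confirm that its Pod is scheduled onto the Worker node. This is only an illustrative example; the nginx image and the Deployment name are arbitrary choices:

# Create a test Deployment, see which node its Pod lands on, then clean up
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide
kubectl delete deployment nginx-test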

07. Deploy the Dashboard visualization plug-in

In the Kubernetes community, there is a popular Dashboard project that gives users a visual Web interface for viewing various information about the current cluster. The plug-in is also deployed as a container and the operation is very simple: it can be deployed on the Master node, a Worker node, or any other node that can securely access the Kubernetes cluster. The command is as follows:

root@kubenetesnode02:~# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

After the deployment is complete, you can view the running status of the Pod corresponding to Dashboard. The result is as follows:

root@kubenetesnode02:~# kubectl get pods -n kubernetes-dashboard

NAME                                         READY   STATUS    RESTARTS   AGE

dashboard-metrics-scraper-6b4884c9d5-xfb8b   1/1     Running   0          12h

kubernetes-dashboard-7f99b75bf4-9lxk8        1/1     Running   0          12h

In addition, you can run the following command to view Dashboard Service information:

root@kubenetesnode02:~# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.97.69.158    <none>        8000/TCP   13h
kubernetes-dashboard        ClusterIP   10.111.30.214   <none>        443/TCP    13h

Note that since Dashboard is a Web service, for security reasons it can by default only be accessed locally through a proxy. To do this, install the kubectl management tool on your local machine, copy the config file from the $HOME/.kube/ directory on the Master node to the same directory on the local host, and then run the "kubectl proxy" command as follows:

qiaodeMacBook-Pro-2:.kube qiaojiang$ kubectl proxy
Starting to serve on 127.0.0.1:8001

After the local proxy is started, access the Kubernetes Dashboard address as follows:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

If the access is normal, you should see something like the following:

As shown in the figure above, Dashboard access requires identity authentication, mainly through a Token or a Kubeconfig file. Here we choose Token; the steps to generate it are as follows:

1) Create a service account

Create a service account named admin-user in the kubernetes-dashboard namespace. Create a file similar to dashboard-adminuser.yaml in the local directory as follows:

apiVersion: v1

kind: ServiceAccount

metadata:

 name: admin-user

 namespace: kubernetes-dashboard

Execute the create command after writing the file:

qiaodeMacBook-Pro-2:.kube qiaojiang$ kubectl apply -f dashboard-adminuser.yaml

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply

serviceaccount/admin-user configured

2) Create a ClusterRoleBinding

After the Kubernetes cluster has been set up with the kubeadm tool, the cluster-admin ClusterRole already exists in the cluster. You can use it to create a ClusterRoleBinding for the ServiceAccount created in the previous step. Create a file named dashboard-clusterRoleBingding.yaml in a local directory with the following content:

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

 name: admin-user

roleRef:

 apiGroup: rbac.authorization.k8s.io

 kind: ClusterRole

 name: cluster-admin

subjects:

- kind: ServiceAccount

 name: admin-user

 namespace: kubernetes-dashboard

Execute the create command:

qiaodeMacBook-Pro-2:.kube qiaojiang$ kubectl apply -f dashboard-clusterRoleBingding.yaml

clusterrolebinding.rbac.authorization.k8s.io/admin-user created

3) Obtain the Bearer Token

The subsequent command to obtain Bearer Token is as follows:

qiaodeMacBook-Pro-2:.kube qiaojiang$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-xxq2b
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 213dce75-4063-4555-842a-904cf4e88ed1
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlplSHRwcXhNREs0SUJPcTZIYU1kT0pidlFuOFJaVXYzLWx0c1BOZzZZY28ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXh4cTJiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTNkY2U3NS00MDYzLTQ1NTUtODQyYS05MDRjZjRlODhlZDEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.MIjSewAk4aVgVCU6fnBBLtIH7PJzcDUozaUoVGJPUu-TZSbRZHotugvrvd8Ek_f5urfyYhj14y1BSe1EXw3nINmo4J7bMI94T_f4HvSFW1RUznfWZ_uq24qKjNgqy4HrSfmickav2PmGv4TtumjhbziMreQ3jfmaPZvPqOa6Xmv1uhytLw3G6m5tRS97kl0i8A1lqnOWu7COJX0TtPkDrXiPPX9IzaGrp3Hd0pKHWrI_-orxsI5mmFj0cQZt1ncHarCssVnyHkWQqtle4ljV2HAO-bgY1j0E1pOPTlzpmSSbmAmedXZym77N10YNaIqtWvFjxMzhFqeTPNo539V1Gg
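
If you only need the token string itself (for example to paste into the login screen or use in a script), it can also be read directly from the ServiceAccount's secret and decoded. A sketch, assuming the cluster still auto-creates ServiceAccount token secrets as v1.20 does:

kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode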

After obtaining the Token, go back to the previous authentication mode selection interface and fill in the obtained Token information to officially enter the Dashboard system interface and see the detailed visualization information of Kubernetes cluster, as shown in the figure:

At this point, we have completed the deployment of the Kubernetes visualization plug-in and logged in through the local proxy. In a real production environment, if it is not convenient to access the Dashboard through a local proxy every time, you can configure an Ingress for it; interested readers can try this themselves. You can also expose the Dashboard through a port, for example:

# kubectl get svc -n kubernetes-dashboard
# kubectl edit services -n kubernetes-dashboard kubernetes-dashboard

Then modify the configuration file as follows:

ports:

 - nodePort: 30000

   port: 443

   protocol: TCP

   targetPort: 8443

 selector:

   k8s-app: kubernetes-dashboard

 sessionAffinity: None

 type: NodePort

The Dashboard can then be accessed via the node IP and the nodePort, for example:

https://47.98.33.48:30000/

08. Adjust the Master node's Taint/Toleration strategy

As mentioned earlier, the Master node of a Kubernetes cluster does not run user Pods by default, and Kubernetes relies on the Taint/Toleration mechanism to achieve this. The principle is that once a node carries a Taint, it is "tainted" and Pods will no longer be scheduled onto it unless they declare that they tolerate the Taint.

The reason the Master node cannot run user Pods is that, once it is set up successfully, a Taint is added to the Master node to prevent user Pods from running on it (Pods that are already running are not affected). You can check this by inspecting the Master node with the following command:

root@kubenetesnode02:~# kubectl describe node kubernetesnode01

Name:               kubernetesnode01

Roles:              master

...

Taints:             node-role.kubernetes.io/master:NoSchedule

...

You can see that the Master node has by default been given the Taint "node-role.kubernetes.io/master:NoSchedule". The value "NoSchedule" means this Taint only affects the scheduling of new Pods; Pods already running on the node are not affected. If you only want a single-node Kubernetes, you can remove this Taint from the Master node as follows:

root@kubernetesnode01:~# kubectl taint nodes --all node-role.kubernetes.io/master-

The above command removes all Taints whose key is "node-role.kubernetes.io/master" by appending a dash "-" to that key.
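
If you would rather keep the Taint on the Master node, an individual Pod can still be allowed to run there by declaring a matching Toleration in its spec. The following manifest is only an illustrative sketch; the Pod name and image are arbitrary choices:

apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
  # Tolerate the Master node's default Taint so this Pod may be scheduled there
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: demo
    image: nginx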

At this point, the deployment of a basic Kubernetes cluster is complete. With a native management tool like kubeadm, Kubernetes deployment is greatly simplified, and the most troublesome parts, such as certificates, authorization, and the configuration of each component, are all handled for us by kubeadm.

09. Kubernetes cluster restart commands

If the server is powered off or restarted, run the following command to restart the cluster:

systemctl daemon-reload
systemctl restart docker
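
Depending on how the node was set up, the kubelet service may also need to be restarted explicitly after Docker comes back up (an assumption; on most kubeadm installations it is managed by systemd and recovers on its own):

systemctl restart kubelet
# Then confirm that the nodes return to the Ready state
kubectl get nodes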

This is how to set up a Kubernetes learning cluster in CentOS 7. Other Linux distributions have similar deployment methods. You can choose according to your needs.

A few final words

Welcome to follow my official account [calm as code], where a large number of Java-related articles and learning materials are continuously updated and organized.

If you think it’s written well, click a “like” and add a follow! Point attention, do not get lost, continue to update!!