Preface

Prepare the six machines described in the k8s 1.20.x installation pre-work section.

1. Install basic components

Install docker-ce 19.03 on all nodes:

yum install -y docker-ce-19.03.14-3.el7 docker-ce-cli-19.03.14-3.el7

On all nodes, configure the Docker cgroup driver and set Docker to start on boot:

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
	"exec-opts" : ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload && systemctl enable --now docker
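To confirm that Docker picked up the systemd cgroup driver, a quick optional check (the expected line is shown as a comment):

docker info 2>/dev/null | grep -i "cgroup driver"
# Expected output: Cgroup Driver: systemd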

Install the Kubernetes components. First, list the available kubeadm versions on all nodes:

yum list kubeadm.x86_64 --showduplicates | sort -r

Install the latest version of kubeadm on all nodes:

yum install kubeadm -y
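If a specific release is preferred over the latest, the packages can be pinned to a version shown by the yum list command above; a sketch with a placeholder version string:

# Replace 1.20.x with an exact version from the "yum list kubeadm.x86_64 --showduplicates" output
yum install -y kubeadm-1.20.x kubelet-1.20.x kubectl-1.20.x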

The default pause image used by kubelet is hosted on gcr.io, which may not be reachable; configure kubelet to pull the pause image from the Aliyun mirror instead:

cat >/etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
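Optionally, pull the pause image ahead of time to confirm the Aliyun mirror configured above is reachable:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2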

Set kubelet to start automatically on boot:

systemctl daemon-reload
systemctl enable --now kubelet

2. Install HA components

Note: If this is not a high-availability cluster, or if the cluster runs on a cloud provider that supplies its own load balancer, HAProxy and Keepalived are not needed. Otherwise, install them on all Master nodes via yum:

yum install keepalived haproxy -y

Configure HAProxy on all Master nodes. The configuration is the same on every Master node (for details, see the HAProxy documentation):

[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01   192.168.0.201:6443  check
  server k8s-master02   192.168.0.202:6443  check
  server k8s-master03   192.168.0.203:6443  check
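Before starting the service, the configuration can be validated; a minimal syntax check using HAProxy's built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg
# Prints "Configuration file is valid" when the syntax is correct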

Configure Keepalived on all Master nodes. The configuration differs per node; pay attention to each node's IP address and network interface (the interface parameter). Master01 node configuration:

[root@k8s-master01 etc]# mkdir /etc/keepalived
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    mcast_src_ip 192.168.0.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.206
    }
    track_script {
        chk_apiserver
    }
}

Master02 configuration:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    mcast_src_ip 192.168.0.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.206
    }
    track_script {
        chk_apiserver
    }
}

Master03 configuration:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    mcast_src_ip 192.168.0.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.206
    }
    track_script {
        chk_apiserver
    }
}
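Since the three files differ only in state, mcast_src_ip, and priority, the Master02 and Master03 files can also be derived from Master01's copy; a hedged helper sketch run on Master01, assuming the addresses above and working scp access to the other Masters:

# Hypothetical helper (not part of the original guide): derive and push the per-node files
for node in 02:192.168.0.202 03:192.168.0.203; do
  n=${node%%:*}; addr=${node##*:}
  sed -e 's/state MASTER/state BACKUP/' \
      -e "s/mcast_src_ip 192.168.0.201/mcast_src_ip ${addr}/" \
      -e 's/priority 101/priority 100/' \
      /etc/keepalived/keepalived.conf > /tmp/keepalived-master${n}.conf
  scp /tmp/keepalived-master${n}.conf k8s-master${n}:/etc/keepalived/keepalived.conf
done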

Configure the Keepalived health check script on all Master nodes:

[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

chmod +x /etc/keepalived/check_apiserver.sh

Start HAProxy and Keepalived:

[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived

Check whether HAProxy started successfully:

netstat -lntp | grep 16443

Check whether Keepalived started successfully (the VIP should respond on port 16443):

[root@k8s-1 ~]# telnet 192.168.0.206 16443
Trying 192.168.0.206...
Connected to 192.168.0.206.
Escape character is '^]'.
Connection closed by foreign host.
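To see which Master currently holds the VIP, look it up on the interface Keepalived manages (eth1 in the configuration above):

ip addr show eth1 | grep 192.168.0.206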

3. Initialize the cluster

Create the new.yaml configuration file on Master01 as follows:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.0.206
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.0.206:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.1
networking:
  dnsDomain: cluster.local
  podSubnet:
scheduler: {}

Note: if it is not a high-availability cluster, change 192.168.0.206:16443 to the address of master01 and 16443 to the apiserver port (default 6443), and make sure kubernetesVersion matches the version reported by kubeadm version. Copy the new.yaml file to the other Master nodes, then have all Master nodes download the images in advance to save initialization time:
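The copy can be done with scp; a minimal sketch assuming the Master host names used in this guide:

scp /root/new.yaml k8s-master02:/root/new.yaml
scp /root/new.yaml k8s-master03:/root/new.yaml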

Run the following on all three Master nodes:

kubeadm config images pull --config /root/new.yaml 

Set kubelet on all nodes to start automatically on boot:

systemctl enable --now kubelet

Initialize Master01. Initialization generates the certificates and configuration files under /etc/kubernetes, after which the other Master nodes can join Master01:

  kubeadm init --config /root/new.yaml  --upload-certs

After initialization succeeds, kubeadm prints the join commands, including the token, that the other nodes use to join the cluster; record them:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.206:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:43d6d361aad79ec0df6eed9cf38e732c29f71bd1a0d691064a0667ec51054558 \
    --control-plane --certificate-key cf9e188c8dcdfee745c6b7d4eed45a166ea8f3e1908c5ab2364a9fdc657210bc

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.206:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:43d6d361aad79ec0df6eed9cf38e732c29f71bd1a0d691064a0667ec51054558

Configure environment variables on the Master01 node for accessing the Kubernetes cluster:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc

View node status:

[root@k8s-1 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   3m46s   v1.20.1

After the initial installation, all system components run as containers in the kube-system namespace; the Pod status can be viewed:

[root@k8s-1 ~]# kubectl get pods -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
coredns-54d67798b7-ggzfs               0/1     Pending   0          5m18s   <none>          <none>         <none>           <none>
coredns-54d67798b7-sfsrd               0/1     Pending   0          5m18s   <none>          <none>         <none>           <none>
etcd-k8s-master01                      1/1     Running   0          5m12s   192.168.0.201   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01            1/1     Running   0          5m12s   192.168.0.201   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01   1/1     Running   0          5m12s   192.168.0.201   k8s-master01   <none>           <none>
kube-proxy-rtzs6                       1/1     Running   0          5m17s   192.168.0.201   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01            1/1     Running   0          5m12s   192.168.0.201   k8s-master01   <none>           <none>

[root@k8s-1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   8m50s

4. Highly available Master

Join the other Master nodes to the cluster with the control-plane join command recorded above:

kubeadm join 192.168.0.206:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:43d6d361aad79ec0df6eed9cf38e732c29f71bd1a0d691064a0667ec51054558 \
    --control-plane --certificate-key cf9e188c8dcdfee745c6b7d4eed45a166ea8f3e1908c5ab2364a9fdc657210bc

5. Add the Node

Add the worker nodes with the worker join command recorded above:

kubeadm join 192.168.0.206:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:43d6d361aad79ec0df6eed9cf38e732c29f71bd1a0d691064a0667ec51054558
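The bootstrap token in new.yaml has a 24-hour ttl; if it has expired by the time a node joins, a fresh worker join command can be generated on Master01:

kubeadm token create --print-join-command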

View cluster status:

[root@k8s-master01]# kubectl get node
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   8m53s   v1.20.0
k8s-master02   NotReady   control-plane,master   2m25s   v1.20.0
k8s-master03   NotReady   control-plane,master   31s     v1.20.0
k8s-node01     NotReady   <none>                 32s     v1.20.0
k8s-node02     NotReady   <none>                 88s     v1.20.0

6. Calico installation

The following steps are performed only on Master01

git checkout manual-installation-v1.20.x && cd calico/

Modify the following sections of calico-etcd.yaml:

sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: ""#etcd_key: "/calico-secrets/etcd-key"#g' calico-etcd.yaml

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
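Before applying the manifest, the substitutions can be verified; a quick check against the keys edited above:

grep "etcd_endpoints:" calico-etcd.yaml
grep -A 1 "CALICO_IPV4POOL_CIDR" calico-etcd.yaml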

Create Calico:

kubectl apply -f calico-etcd.yaml
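Calico pulls its images on first start, so it can take a few minutes for its Pods to become Ready; progress can be watched in the kube-system namespace, and the nodes switch from NotReady to Ready once the CNI is up:

kubectl get pods -n kube-system -w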

7. Metrics Server deployment

In newer versions of Kubernetes, system resource metrics are collected by metrics-server, which gathers memory, disk, CPU, and network usage of nodes and Pods. Copy Master01's front-proxy-ca.crt file to all nodes:

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node(other nodes):/etc/kubernetes/pki/front-proxy-ca.crt
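With several nodes, a loop avoids repeating the command; a sketch assuming the host names used earlier and SSH access from Master01:

for node in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  scp /etc/kubernetes/pki/front-proxy-ca.crt ${node}:/etc/kubernetes/pki/front-proxy-ca.crt
done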

Install the metrics server

cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm/

[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Wait for all the Pods in the kube-system namespace to start, then check node resource usage:

[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   109m         2%     1296Mi          33%       
k8s-master02   99m          2%     1124Mi          29%       
k8s-master03   104m         2%     1082Mi          28%       
k8s-node01     55m          1%     761Mi           19%       
k8s-node02     53m          1%     663Mi           17%
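Pod-level metrics can be checked the same way once metrics-server is serving data (output varies per cluster):

kubectl top pods -n kube-system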

8. Dashboard deployment