The status quo
The existing Kubernetes cluster is running version 1.18.8; the control-plane components report v1.18.9, as the checks below show.
The target
- Upgrade all Kubernetes control-plane and node components on the primary node, and only on the primary node, to version 1.19.0.
- Also upgrade kubelet and kubectl on the primary node.
- Drain the master node before the upgrade and uncordon it afterwards.
- Do not upgrade the worker nodes, etcd, the container runtime, the CNI plugin, the DNS service, or any other add-ons.
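At a glance, the whole procedure boils down to the command sequence below. This is only a sketch assembled from the detailed steps that follow (vb-n1 is this cluster's primary node; substitute your own):
sudo -i                                                            # run the following as root
apt-mark unhold kubeadm && apt-get update && \
  apt-get install -y kubeadm=1.19.0-00 && apt-mark hold kubeadm    # 1. upgrade kubeadm
kubectl drain vb-n1 --ignore-daemonsets                            # 2. drain the primary node
kubeadm upgrade plan                                               # 3. check upgradability
kubeadm upgrade apply v1.19.0 --etcd-upgrade=false                 # 4. upgrade the control plane, skipping etcd
apt-mark unhold kubelet kubectl && \
  apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00 && \
  apt-mark hold kubelet kubectl                                    # 5. upgrade kubelet and kubectl
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon vb-n1                                             # 6. make the node schedulable again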
Check before the upgrade
Before upgrading, confirm the current version of each component.
# check kubeadm
baiyutang@vb-n1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:10:16Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
# check kubectl
baiyutang@vb-n1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:47:43Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
# check kubelet
baiyutang@vb-n1:~$ kubelet --version
Kubernetes v1.18.8
# check kube-apiserver
baiyutang@vb-n1:~$ kubectl exec -it kube-apiserver-vb-n1 -n kube-system -- kube-apiserver --version
Kubernetes v1.18.9
# check kube-controller-manager
baiyutang@vb-n1:~$ kubectl exec -it kube-controller-manager-vb-n1 -n kube-system -- kube-controller-manager --version
Kubernetes v1.18.9
# check kube-scheduler
baiyutang@vb-n1:~$ kubectl exec -it kube-scheduler-vb-n1 -n kube-system -- kube-scheduler --version
I0925 13:32:59.717135      56 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0925 13:32:59.718611      56 registry.go:150] Registering EvenPodsSpread predicate and priority function
Kubernetes v1.18.9
Check kube-proxy. Checking the image version of its DaemonSet is the more reliable approach.
baiyutang@vb-n1:~$ kubectl exec -it kube-proxy-b488d -n kube-system -- kube-proxy --version
Kubernetes v1.18.9
# check etcd
baiyutang@vb-n1:~$ kubectl exec -it etcd-vb-n1 -n kube-system -- etcd --version
etcd Version: 3.4.3
Git SHA: 3cf2f69b5
Go Version: go1.12.12
Go OS/Arch: linux/amd64
baiyutang@vb-n1:~$ kubectl exec -it etcd-vb-n1 -n kube-system -- etcdctl version
etcdctl version: 3.4.3
API version: 3.4
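An exec-free alternative is to read the image tag straight off the static pod. A sketch, assuming the pod name used above:
baiyutang@vb-n1:~$ kubectl get pod etcd-vb-n1 -n kube-system -o jsonpath='{.spec.containers[0].image}'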
# check DNS service (note: `kubectl describe` capitalizes "Image", so grepping lowercase "image" returns nothing; query the YAML instead)
baiyutang@vb-n1:~$ kubectl describe deployments.apps -l k8s-app=kube-dns -n kube-system | grep image
baiyutang@vb-n1:~$ kubectl get deployments.apps -l k8s-app=kube-dns -n kube-system -o yaml | grep image
f:image: {}
f:imagePullPolicy: {}
image: registry.aliyuncs.com/google_containers/coredns:1.6.7
imagePullPolicy: IfNotPresent
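Instead of grepping the YAML, jsonpath can print the image directly. A sketch using the same label selector as above:
baiyutang@vb-n1:~$ kubectl get deployments.apps -l k8s-app=kube-dns -n kube-system -o jsonpath='{.items[0].spec.template.spec.containers[0].image}'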
# check the CNI plugin (flannel)
baiyutang@vb-n1:~$ kubectl get ds kube-flannel-ds-amd64 -n kube-system -o yaml | grep image
{"apiVersion":"apps/v1"."kind":"DaemonSet"."metadata": {"annotations": {},"labels": {"app":"flannel"."tier":"node"},"name":"kube-flannel-ds-amd64"."namespace":"kube-system"},"spec": {"selector": {"matchLabels": {"app":"flannel"}},"template": {"metadata": {"labels": {"app":"flannel"."tier":"node"}},"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key":"beta.kubernetes.io/os"."operator":"In"."values": ["linux"] {},"key":"beta.kubernetes.io/arch"."operator":"In"."values": ["amd64"]}]}]}}},"containers": [{"args": ["--ip-masq"."--kube-subnet-mgr"]."command": ["/opt/bin/flanneld"]."env": [{"name":"POD_NAME"."valueFrom": {"fieldRef": {"fieldPath":"metadata.name"}}}, {"name":"POD_NAMESPACE"."valueFrom": {"fieldRef": {"fieldPath":"metadata.namespace"}}}]."image":"Quay. IO/coreos/flannel: v0.12.0 - amd64"."name":"kube-flannel"."resources": {"limits": {"cpu":"100m"."memory":"50Mi"},"requests": {"cpu":"100m"."memory":"50Mi"}},"securityContext": {"capabilities": {"add": ["NET_ADMIN"]},"privileged":false},"volumeMounts": [{"mountPath":"/run/flannel"."name":"run"}, {"mountPath":"/etc/kube-flannel/"."name":"flannel-cfg"}}]]."hostNetwork":true."initContainers": [{"args": ["-f"."/etc/kube-flannel/cni-conf.json"."/etc/cni/net.d/10-flannel.conflist"]."command": ["cp"]."image":"Quay. IO/coreos/flannel: v0.12.0 - amd64"."name":"install-cni"."volumeMounts": [{"mountPath":"/etc/cni/net.d"."name":"cni"}, {"mountPath":"/etc/kube-flannel/"."name":"flannel-cfg"}}]]."serviceAccountName":"flannel"."tolerations": [{"effect":"NoSchedule"."operator":"Exists"}]."volumes": [{"hostPath": {"path":"/run/flannel"},"name":"run"}, {"hostPath": {"path":"/etc/cni/net.d"},"name":"cni"}, {"configMap": {"name":"kube-flannel-cfg"},"name":"flannel-cfg"}]}}}} f:image: {} f:imagePullPolicy: {} f:image: {} f:imagePullPolicy: {} image: Quay. IO/coreos/flannel: v0.12.0 - amd64 imagePullPolicy: IfNotPresent image: Quay. IO/coreos/flannel: v0.12.0 - amd64 imagePullPolicy: IfNotPresentCopy the code
Steps
Determine whether kubeadm can be upgraded
sudo -i
apt update
apt-cache policy kubeadm
This prints the list of versions available for upgrade.
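If the policy listing is long, apt-cache madison gives a more compact view (assuming the Kubernetes apt repository is configured on the node):
apt-cache madison kubeadm | head -n 20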
Upgrade kubeadm
sudo -i
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.19.0-00 && \
apt-mark hold kubeadm

Verify success after execution:
kubeadm version
Drain the control-plane node
kubectl drain vb-n1 --ignore-daemonsets
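If the drain is blocked by pods that use emptyDir volumes, this kubectl release accepts an extra flag; use it with care, since that local data is deleted (a sketch, not needed on every cluster):
kubectl drain vb-n1 --ignore-daemonsets --delete-local-data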
Use kubeadm to check whether the cluster can be upgraded
kubeadm upgrade plan
Pay attention to the message "You can now apply the upgrade by executing the following command:" in the output.
Apply the upgrade to the target version
kubeadm upgrade apply v1.19.0 --etcd-upgrade=false  # do not upgrade etcd
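To preview what the upgrade would change without touching the cluster, kubeadm also supports a dry run (a sketch):
kubeadm upgrade apply v1.19.0 --etcd-upgrade=false --dry-run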
root@vb-n1:~# kubeadm upgrade apply v1.19.0 --etcd-upgrade=false
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.0"
[upgrade/versions] Cluster version: v1.18.9
[upgrade/versions] kubeadm version: v1.19.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
Static pod: kube-apiserver-vb-n1 hash: 1c78c880abd4c7301a3e15e0b8ba0739
Static pod: kube-controller-manager-vb-n1 hash: a7092f0e72ccf0dde097448255396198
Static pod: kube-scheduler-vb-n1 hash: 1d49f6bea141a03c33715369a619d2a9
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests007582360"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-25-21-47-44/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-vb-n1 hash: 1c78c880abd4c7301a3e15e0b8ba0739
Static pod: kube-apiserver-vb-n1 hash: 1c78c880abd4c7301a3e15e0b8ba0739
Static pod: kube-apiserver-vb-n1 hash: 1c78c880abd4c7301a3e15e0b8ba0739
Static pod: kube-apiserver-vb-n1 hash: 79e1af63686084ebb219fefaaf989593
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-25-21-47-44/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-vb-n1 hash: a7092f0e72ccf0dde097448255396198
Static pod: kube-controller-manager-vb-n1 hash: e300bc107fc98f68d284e0aa8a71380b
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-25-21-47-44/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-vb-n1 hash: 1d49f6bea141a03c33715369a619d2a9
Static pod: kube-scheduler-vb-n1 hash: 340ea85a0f34a4df64d62b1a784833ae
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
W0925 21:48:01.019661   19731 dns.go:282] the CoreDNS Configuration will not be migrated due to unsupported version of CoreDNS. The existing CoreDNS Corefile configuration and deployment has been retained.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Note the final two messages:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
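As an aside, the packages are pinned with apt-mark hold so that routine apt upgrades cannot move them unexpectedly; you can inspect the current holds at any time:
apt-mark showhold   # should list kubeadm (and, once upgraded, kubelet and kubectl)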
Upgrade kubelet and kubectl
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00 && \
apt-mark hold kubelet kubectl
# restart kubelet
systemctl daemon-reload
systemctl restart kubelet
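Before uncordoning, it is worth confirming that the kubelet came back healthy and that the node now reports the new version (a sketch):
systemctl status kubelet --no-pager   # should be active (running)
kubectl get nodes                     # the VERSION column should show v1.19.0 for vb-n1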
Make the primary node schedulable again
kubectl uncordon vb-n1
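To confirm the node accepts new pods again, check that it is no longer marked unschedulable (a sketch):
kubectl describe node vb-n1 | grep -i unschedulable   # expect: Unschedulable: false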
Check component versions
Kubeadm has been upgraded successfully
baiyutang@vb-n1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:28:32Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Kubectl has been upgraded successfully
baiyutang@vb-n1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Kubelet has been upgraded successfully
baiyutang@vb-n1:~$ kubelet --version
Kubernetes v1.19.0
Kube-controller-manager has been upgraded successfully
baiyutang@vb-n1:~$ kubectl exec -it kube-controller-manager-vb-n1 -n kube-system -- kube-controller-manager --version
Kubernetes v1.19.0
Kube-apiserver has been upgraded successfully
baiyutang@vb-n1:~$ kubectl exec -it kube-apiserver-vb-n1 -n kube-system -- kube-apiserver --version
Kubernetes v1.19.0
Kube-scheduler has been upgraded successfully
baiyutang@vb-n1:~$ kubectl exec -it kube-scheduler-vb-n1 -n kube-system -- kube-scheduler --version
I0925 14:24:10.422385      13 registry.go:173] Registering SelectorSpread plugin
I0925 14:24:10.422439      13 registry.go:173] Registering SelectorSpread plugin
Kubernetes v1.19.0
Kube-proxy has been upgraded successfully
baiyutang@vb-n1:~$ kubectl exec -it kube-proxy-b488d -n kube-system -- kube-proxy --version
Error from server (NotFound): pods "kube-proxy-b488d" not found
# the old kube-proxy pod was replaced during the upgrade, so check the DaemonSet image instead
baiyutang@vb-n1:~$ kubectl get ds kube-proxy -n kube-system -o yaml | grep image
          f:image: {}
          f:imagePullPolicy: {}
        image: registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
        imagePullPolicy: IfNotPresent
# check etcd: not upgraded, as required
baiyutang@vb-n1:~$ kubectl exec -it etcd-vb-n1 -n kube-system -- etcd --version
etcd Version: 3.4.3
Git SHA: 3cf2f69b5
Go Version: go1.12.12
Go OS/Arch: linux/amd64
baiyutang@vb-n1:~$ kubectl exec -it etcd-vb-n1 -n kube-system -- etcdctl version
etcdctl version: 3.4.3
API version: 3.4
The DNS service has not been upgraded, as required
baiyutang@vb-n1:~$ kubectl get deployments.apps -l k8s-app=kube-dns -n kube-system -o yaml | grep image
          f:image: {}
          f:imagePullPolicy: {}
        image: registry.aliyuncs.com/google_containers/coredns:1.6.7
        imagePullPolicy: IfNotPresent
The CNI plugin (flannel) has not been upgraded, as required
baiyutang@vb-n1:~$ kubectl get ds kube-flannel-ds-amd64 -n kube-system -o yaml | grep image
{"apiVersion":"apps/v1"."kind":"DaemonSet"."metadata": {"annotations": {},"labels": {"app":"flannel"."tier":"node"},"name":"kube-flannel-ds-amd64"."namespace":"kube-system"},"spec": {"selector": {"matchLabels": {"app":"flannel"}},"template": {"metadata": {"labels": {"app":"flannel"."tier":"node"}},"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key":"beta.kubernetes.io/os"."operator":"In"."values": ["linux"] {},"key":"beta.kubernetes.io/arch"."operator":"In"."values": ["amd64"]}]}]}}},"containers": [{"args": ["--ip-masq"."--kube-subnet-mgr"]."command": ["/opt/bin/flanneld"]."env": [{"name":"POD_NAME"."valueFrom": {"fieldRef": {"fieldPath":"metadata.name"}}}, {"name":"POD_NAMESPACE"."valueFrom": {"fieldRef": {"fieldPath":"metadata.namespace"}}}]."image":"Quay. IO/coreos/flannel: v0.12.0 - amd64"."name":"kube-flannel"."resources": {"limits": {"cpu":"100m"."memory":"50Mi"},"requests": {"cpu":"100m"."memory":"50Mi"}},"securityContext": {"capabilities": {"add": ["NET_ADMIN"]},"privileged":false},"volumeMounts": [{"mountPath":"/run/flannel"."name":"run"}, {"mountPath":"/etc/kube-flannel/"."name":"flannel-cfg"}}]]."hostNetwork":true."initContainers": [{"args": ["-f"."/etc/kube-flannel/cni-conf.json"."/etc/cni/net.d/10-flannel.conflist"]."command": ["cp"]."image":"Quay. IO/coreos/flannel: v0.12.0 - amd64"."name":"install-cni"."volumeMounts": [{"mountPath":"/etc/cni/net.d"."name":"cni"}, {"mountPath":"/etc/kube-flannel/"."name":"flannel-cfg"}}]]."serviceAccountName":"flannel"."tolerations": [{"effect":"NoSchedule"."operator":"Exists"}]."volumes": [{"hostPath": {"path":"/run/flannel"},"name":"run"}, {"hostPath": {"path":"/etc/cni/net.d"},"name":"cni"}, {"configMap": {"name":"kube-flannel-cfg"},"name":"flannel-cfg"}]}}}} f:image: {} f:imagePullPolicy: {} f:image: {} f:imagePullPolicy: {} image: Quay. IO/coreos/flannel: v0.12.0 - amd64 imagePullPolicy: IfNotPresent image: Quay. IO/coreos/flannel: v0.12.0 - amd64 imagePullPolicy: IfNotPresentCopy the code
Conclusion
The key command of the whole procedure is kubeadm upgrade apply v1.19.0 --etcd-upgrade=false.
To recap: check the component versions, drain the node, upgrade kubeadm, run kubeadm upgrade plan and kubeadm upgrade apply, upgrade kubelet and kubectl, restart the kubelet, and finally uncordon the node.
References
- Upgrading kubeadm clusters | Kubernetes