Precautions before upgrade:

  1. The node must be drained before the kubelet is upgraded. This also applies to control plane nodes, which may be running CoreDNS Pods or other important workloads.
  2. The static Pod containers are restarted during the upgrade, because their manifest hashes change.
  3. The kubeadm version must be greater than or equal to the target version when upgrading (a quick check is shown below).
  4. Officially, cross-version upgrades (skipping minor versions) are not supported, although the documentation is not entirely clear on how large a jump that covers; so far I have verified that upgrading from 1.18.8 to 1.19.9 works.
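A minimal way to do the check in item 3 is to compare the kubeadm binary against the kubelet currently running on the node (both are standard flags of the respective tools):

`kubeadm version -o short`
`kubelet --version`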

View the current version of the environment

`[root@dm01 ~]# kubectl get nodes`
NAME   STATUS   ROLES    AGE    VERSION
dm01   Ready    master   181d   v1.18.8
dm02   Ready    master   181d   v1.18.8
dm03   Ready    master   181d   v1.18.8

`[root@dm01 ~]# kubectl version`
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

The upgrade process is as follows:

Upgrade the first control plane node

Export the kubeadm config file and modify the fields indicated below

`[root@dm01 ~]# kubeadm config view > kubeadm-config.yaml`
`[root@dm01 ~]# cat kubeadm-config.yaml `
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.11:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/k8sxio    # changed to the Aliyun image repository, because the default registry is not reachable from mainland China
kind: ClusterConfiguration
kubernetesVersion: v1.19.9    # changed to the version we are upgrading to
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
// copy the kubeadm config file to the other control plane nodes
`[root@dm01 ~]# scp kubeadm-config.yaml dm02:/root/`
kubeadm-config.yaml                                                                                              100%  521   511.2KB/s   00:00    
`[root@dm01 ~]# scp kubeadm-config.yaml dm03:/root/`
kubeadm-config.yaml                                                                                              100%  521    42.9KB/s   00:00    
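If `kubeadm config view` is not available in your kubeadm build, the same cluster configuration can also be read straight from the ConfigMap kubeadm stores in the cluster (the upgrade output below points to this command as well):

`kubectl -n kube-system get cm kubeadm-config -o yaml`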

Drain the node and mark it unschedulable in preparation for the upgrade

`[root@dm01 ~]# kubectl drain dm01 --ignore-daemonsets`
node/dm01 cordoned
WARNING: ignoring DaemonSet-managed Pods: istio-system/istio-ingressgateway-jvtf6, kube-system/kube-flannel-ds-qrzbm, kube-system/kube-proxy-5xvbm, test008/nginx-d26b8
evicting pod kube-system/coredns-84b99c4749-6j894

pod/coredns-84b99c4749-6j894 evicted
node/dm01 evicted

`[root@dm03 ~]# kubectl get nodes` 
NAME   STATUS                     ROLES    AGE    VERSION
dm01   Ready,SchedulingDisabled   master   181d   v1.18.8
dm02   Ready                      master   181d   v1.18.8
dm03   Ready                      master   181d   v1.18.8
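If the drain refuses to evict Pods that use emptyDir volumes or are not managed by a controller, extra flags are needed; a sketch assuming the flags of the 1.18 kubectl client (--delete-local-data was later renamed --delete-emptydir-data):

`[root@dm01 ~]# kubectl drain dm01 --ignore-daemonsets --delete-local-data --force`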

Download and install kubeadm and kubectl 1.19.9

`[root@dm01 ~]# yum list kubeadm --showduplicates`

`[root@dm01 ~]# yum install kubeadm-1.19.9-0 kubectl-1.19.9-0`
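To confirm which package versions are now installed before planning the upgrade, a simple optional check is:

`[root@dm01 ~]# rpm -q kubeadm kubectl kubelet`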

Viewing the Upgrade Plan

`[root@dm01 ~]# kubeadm upgrade plan`
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.8
[upgrade/versions] kubeadm version: v1.19.9
I0326 10:39:13.933676   12080 version.go:255] remote version is much newer: v1.20.5; falling back to: stable-1.19
[upgrade/versions] Latest stable version: v1.19.9
[upgrade/versions] Latest stable version: v1.19.9
[upgrade/versions] Latest version in the v1.18 series: v1.18.17
[upgrade/versions] Latest version in the v1.18 series: v1.18.17

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.18.8   v1.18.17

Upgrade to the latest version in the v1.18 series:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.8   v1.18.17
kube-controller-manager   v1.18.8   v1.18.17
kube-scheduler            v1.18.8   v1.18.17
kube-proxy                v1.18.8   v1.18.17
CoreDNS                   1.7.0     1.7.0
etcd                      3.4.3-0   3.4.3-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.18.17

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.18.8   v1.19.9

Upgrade to the latest stable version:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.8   v1.19.9
kube-controller-manager   v1.18.8   v1.19.9
kube-scheduler            v1.18.8   v1.19.9
kube-proxy                v1.18.8   v1.19.9
CoreDNS                   1.7.0     1.7.0
etcd                      3.4.3-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.19.9

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

Run the command with --dry-run first to preview what the upgrade will do.

`[root@dm01 ~]# kubeadm upgrade apply v1.19.9 --config kubeadm-config.yaml --dry-run`   // the config file carries the existing cluster settings plus the modified image repository and target version
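kubeadm also provides an `upgrade diff` subcommand that prints the changes it would make to the static Pod manifests; as an optional extra check (assuming your kubeadm build includes it):

`[root@dm01 ~]# kubeadm upgrade diff v1.19.9 --config kubeadm-config.yaml`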

After confirming the plan, you can perform the upgrade. Pull the required images in advance:

`[root@dm01 ~]# kubeadm config images pull --config kubeadm-config.yaml `
W0326 10:42:34.394143   14217 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.19.9
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.19.9
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.19.9
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.19.9
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/k8sxio/coredns:1.7.0
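To double-check what will be pulled, or to verify afterwards that the images actually landed locally, for example:

`[root@dm01 ~]# kubeadm config images list --config kubeadm-config.yaml`
`[root@dm01 ~]# docker images | grep registry.aliyuncs.com/k8sxio`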

Then you can upgrade

`[root@dm01 ~]# kubeadm upgrade apply v1.19.9 --config kubeadm-config.yaml`
[upgrade/config] Making sure the configuration is correct:
W0326 10:45:24.222769   15899 common.go:94] WARNING: Usage of the --config flag with kubeadm config types for reconfiguring the cluster during upgrade is not recommended!
W0326 10:45:24.348052   15899 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.9"
[upgrade/versions] Cluster version: v1.18.8
[upgrade/versions] kubeadm version: v1.19.9
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.9"...
Static pod: kube-apiserver-dm01 hash: c96cc89dd2a469ab3cb99cefab0c8272
Static pod: kube-controller-manager-dm01 hash: 0ddc97d4db314c02f481301f85e32349
Static pod: kube-scheduler-dm01 hash: dc2f9c30c2d972efbe2ce45cf611390e
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-45-29/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: 679383392840e6962083ee74416be5ff
Static pod: etcd-dm01 hash: d7d8e22d7a5881f06ab297fe7e173b67
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests344416903"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-45-29/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-dm01 hash: c96cc89dd2a469ab3cb99cefab0c8272
Static pod: kube-apiserver-dm01 hash: c96cc89dd2a469ab3cb99cefab0c8272
Static pod: kube-apiserver-dm01 hash: f183b8963cde0fb805d7171f5af486b8
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-45-29/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-dm01 hash: 0ddc97d4db314c02f481301f85e32349
Static pod: kube-controller-manager-dm01 hash: 6bacc4dd0c352be2b3c3b40e6a9f92c2
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-45-29/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-dm01 hash: dc2f9c30c2d972efbe2ce45cf611390e
Static pod: kube-scheduler-dm01 hash: 357b7cba3cee5370b9c9a360984db687
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.9". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.   // the control plane on this node is now upgraded; next, upgrade the kubelet
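A quick way to confirm that the control plane components on dm01 are really running the new image (an optional check; the static Pod names embed the node name):

`[root@dm01 ~]# kubectl -n kube-system get pods -o wide | grep dm01`
`[root@dm01 ~]# kubectl -n kube-system describe pod kube-apiserver-dm01 | grep Image:`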

Upgrade Kubelet and restart

`[root@dm01 ~]# yum install kubelet-1.19.9-0`
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * elrepo: mirrors.tuna.tsinghua.edu.cn
 * epel: hkg.mirror.rackspace.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package kubelet.x86_64 0:1.19.2-0 will be updated
---> Package kubelet.x86_64 0:1.19.9-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch            Version            Repository          Size
================================================================================
Updating:
 kubelet           x86_64          1.19.9-0           kubernetes          20 M

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 20 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
91b94430e5a7b65479ba816cf352514c857cc21bc4cd2c5019d76d62610c60ab-kubelet-1.19.9-0.x86_64.rpm                                |  20 MB  00:00:01     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubelet-1.19.9-0.x86_64                                                                                                         1/2 
  Cleanup    : kubelet-1.19.2-0.x86_64                                                                                                         2/2 
  Verifying  : kubelet-1.19.9-0.x86_64                                                                                                         1/2 
  Verifying  : kubelet-1.19.2-0.x86_64                                                                                                         2/2 

Updated:
  kubelet.x86_64 0:1.19.9-0                                                                                                                        

Complete!

// restart kubelet
`[root@dm01 ~]# systemctl daemon-reload`
`[root@dm01 ~]# systemctl restart kubelet`

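If the node does not come back Ready after the restart, the kubelet service status and logs are the first place to look, e.g.:

`[root@dm01 ~]# systemctl status kubelet`
`[root@dm01 ~]# journalctl -u kubelet --since "10 minutes ago"`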

Check the version; the first control plane node is now upgraded

`[root@dm01 ~]# kubelet --version`
Kubernetes v1.19.9
`[root@dm01 ~]# kubectl get nodes`
NAME   STATUS                      ROLES    AGE    VERSION
dm01   Ready,SchedulingDisabled    master   181d   v1.19.9  // We can see that the node has been upgraded to the version we specified
dm02   Ready                       master   181d   v1.18.8
dm03   Ready                       master   181d   v1.18.8

Finally, re-enable scheduling on the node that has just been upgraded

`[root@dm01 ~]# kubectl uncordon dm01`

Upgrade other control plane nodes

Note: the procedure is identical on every remaining control plane node; the steps below show it on one of them.

Likewise, drain the node and disable scheduling first

`[root@dm03 ~]# kubectl drain dm03 --ignore-daemonsets`

Download and install kubectl, kubeadm

`[root@dm03 ~]# yum install kubeadm-1.19.9-0 kubectl-1.19.9-0`

Again, pre-pull the images

`[root@dm03 ~]# kubeadm config images pull --config kubeadm-config.yaml`

Execute the upgrade command

// Note: the configuration for the apiserver and the other components was already uploaded to the kubeadm-config ConfigMap in the cluster when the first master node was upgraded, so the remaining master nodes only need to pull the new images and restart their components.
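You can verify that the uploaded configuration is already in place before running the command, for example:

`[root@dm03 ~]# kubectl -n kube-system get cm | grep -E 'kubeadm-config|kubelet-config'`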

`[root@dm03 ~]# kubeadm upgrade node` 
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.19.9"...
Static pod: kube-apiserver-dm03 hash: d652bc8e100387e8a97f80472f0ba79c
Static pod: kube-controller-manager-dm03 hash: 0ddc97d4db314c02f481301f85e32349
Static pod: kube-scheduler-dm03 hash: dc2f9c30c2d972efbe2ce45cf611390e
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-58-09/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: 901cf2db3f22c72c2a02502fa512bf0d
Static pod: etcd-dm03 hash: ca24216d24f4ae1163e30bc0ab353715
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests278745616"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-58-09/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-dm03 hash: d652bc8e100387e8a97f80472f0ba79c
Static pod: kube-apiserver-dm03 hash: d652bc8e100387e8a97f80472f0ba79c
Static pod: kube-apiserver-dm03 hash: d652bc8e100387e8a97f80472f0ba79c
Static pod: kube-apiserver-dm03 hash: 3def9f877b88fdbec114fd3304820a2c
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-58-09/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-dm03 hash: 0ddc97d4db314c02f481301f85e32349
Static pod: kube-controller-manager-dm03 hash: 6bacc4dd0c352be2b3c3b40e6a9f92c2
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-26-10-58-09/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-dm03 hash: dc2f9c30c2d972efbe2ce45cf611390e
Static pod: kube-scheduler-dm03 hash: 357b7cba3cee5370b9c9a360984db687
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

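Before moving on to the next node, it is worth confirming that all etcd and apiserver instances are healthy; the label selectors below are the ones the upgrade output itself uses:

`[root@dm03 ~]# kubectl -n kube-system get pods -l component=etcd -o wide`
`[root@dm03 ~]# kubectl -n kube-system get pods -l component=kube-apiserver -o wide`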

Upgrade the kubelet and restart

`[root@dm03 ~]# yum install kubelet-1.19.9-0`
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * elrepo: hkg.mirror.rackspace.com
 * epel: hkg.mirror.rackspace.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package kubelet.x86_64 0:1.19.2-0 will be updated
---> Package kubelet.x86_64 0:1.19.9-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch            Version            Repository          Size
================================================================================
Updating:
 kubelet           x86_64          1.19.9-0           kubernetes          20 M

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 20 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
91b94430e5a7b65479ba816cf352514c857cc21bc4cd2c5019d76d62610c60ab-kubelet-1.19.9-0.x86_64.rpm                                |  20 MB  00:00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubelet-1.19.9-0.x86_64                                                                                                         1/2 
  Cleanup    : kubelet-1.19.2-0.x86_64                                                                                                         2/2 
  Verifying  : kubelet-1.19.9-0.x86_64                                                                                                         1/2 
  Verifying  : kubelet-1.19.2-0.x86_64                                                                                                         2/2 

Updated:
  kubelet.x86_64 0:1.19.9-0                                                                                                                        

Complete!
`[root@dm03 ~]# systemctl daemon-reload`
`[root@dm03 ~]# systemctl restart kubelet `

Re-enable scheduling

`[root@dm03 ~]# kubectl uncordon dm03`
node/dm03 uncordoned

Finally, all nodes are upgraded

`[root@dm03 ~]# kubectl get nodes `
NAME   STATUS   ROLES    AGE    VERSION
dm01   Ready    master   181d   v1.19.9
dm02   Ready    master   181d   v1.19.9
dm03   Ready    master   181d   v1.19.9
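As a final check, make sure the API server reports the new version and that no system Pods are stuck, for example:

`[root@dm03 ~]# kubectl version --short`
`[root@dm03 ~]# kubectl get pods -A | grep -v Running`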

Upgrading worker nodes

Note: my environment has no dedicated worker nodes, so there is no actual worker upgrade to show here. In fact, once the master nodes are upgraded there is nothing special about upgrading a worker: essentially only the kubelet needs to be upgraded.

  1. First run the `kubeadm upgrade node` command on the worker node; this pulls the new kubelet configuration from the cluster.
  2. Then install the new kubelet package and restart the kubelet service.
  3. As before, drain the node to disable scheduling before the upgrade and uncordon it afterwards (see the sketch below).
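Since there is no worker in my environment to demonstrate on, here is a hedged sketch of the whole worker sequence for a hypothetical node named node01, using the same package versions as above:

// on a control plane node: drain the worker first
`kubectl drain node01 --ignore-daemonsets`
// on the worker itself: upgrade the packages and refresh the kubelet configuration
`yum install -y kubeadm-1.19.9-0 kubelet-1.19.9-0`
`kubeadm upgrade node`
`systemctl daemon-reload`
`systemctl restart kubelet`
// back on a control plane node: allow scheduling again
`kubectl uncordon node01`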
