The latest Kubernetes release at the time of writing is 1.32. I want to take a cluster from 1.26 all the way to 1.32, and this post records the pitfalls I hit along the way, for later reference. First, a look at my cluster; all nodes are already installed:
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
node-01   Ready    control-plane   24h   v1.26.1
node-02   Ready    <none>          24h   v1.26.1
node-03   Ready    <none>          24h   v1.26.1
My environment is one master node and two worker nodes, so the cluster is fairly simple, but the same procedure applies with multiple masters; you just repeat the master upgrade on each control-plane node.
Upgrading the master node from 1.26 to 1.27
My cluster is currently on 1.26.1. kubeadm does not allow skipping a minor version, so I cannot jump straight to 1.28.x; the cluster has to pass through 1.27.x first, so the first hop is 1.27.2. The procedure for each hop: drain the node, i.e. evict the workloads running on it, then pull the new images and install the new packages. My environment runs no workloads, so I upgraded directly; in my test environment I upgrade nodes in batches and skip draining, since with many nodes doing them one by one is impractical, but in production draining is mandatory.
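Before draining, kubeadm can report the current cluster version and which versions this kubeadm binary can take it to; a quick pre-check, assuming the new kubeadm package is already installed on the control-plane node:

```shell
# Show the installed kubeadm version, then the versions the cluster can upgrade to.
kubeadm version -o short
kubeadm upgrade plan
```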
[root@node-01 ~]# kubectl drain node-01 --ignore-daemonsets
node/node-01 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-hnh87, kube-system/kube-proxy-kvxjv
evicting pod kube-system/coredns-5bbd96d687-l5bx7
evicting pod kube-system/calico-kube-controllers-754b79777-m2tqb
evicting pod kube-system/coredns-5bbd96d687-ffsgt
pod/calico-kube-controllers-754b79777-m2tqb evicted
pod/coredns-5bbd96d687-ffsgt evicted
pod/coredns-5bbd96d687-l5bx7 evicted
node/node-01 drained
[root@node-01 ~]# kubeadm config images list --kubernetes-version=v1.27.2
W0214 17:07:17.608583 9403 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.6-0)
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
[root@node-01 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.2
W0214 17:07:33.195286 9463 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.6-0)
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.27.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.27.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3
[root@node-01 ~]# yum install -y kubelet-1.27.2 kubeadm-1.27.2 kubectl-1.27.2 --disableexcludes=kubernetes
With the images and packages in place, upgrade the master node from 1.26.1 to 1.27.2:
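Before running kubeadm upgrade apply it is also worth snapshotting etcd, since the upgrade rewrites the static Pod manifests. A sketch using the certificate paths of a default kubeadm install; the backup file name and directory are my own choice, adjust as needed:

```shell
# Snapshot etcd before touching the control plane (default kubeadm cert paths;
# the target path /var/backups/... is an arbitrary example).
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-pre-upgrade.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```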
[root@node-01 ~]# kubeadm upgrade apply 1.27.2 -y
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.27.2"
[upgrade/versions] Cluster version: v1.26.1
[upgrade/versions] kubeadm version: v1.27.2
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0214 17:09:29.443143 10275 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0214 17:09:29.628239 10275 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.27.2" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
W0214 17:09:56.836718 10275 staticpods.go:305] [upgrade/etcd] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0214 17:09:56.840639 10275 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-14-17-09-55/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests4212209438"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-14-17-09-55/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-14-17-09-55/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-14-17-09-55/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1003930174/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.27.2". Enjoy!  # upgrade succeeded
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
The control plane is upgraded. Restart the kubelet, uncordon the node, and check the node versions:
[root@node-01 ~]# systemctl daemon-reload
[root@node-01 ~]# systemctl restart kubelet
[root@node-01 ~]# kubectl uncordon node-01
node/node-01 uncordoned
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
node-01   Ready    control-plane   25h   v1.27.2
node-02   Ready    <none>          25h   v1.26.1
node-03   Ready    <none>          25h   v1.26.1
That completes the master node upgrade; next come the worker nodes.
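The pause:3.6 warning in the apply output comes back at every hop; it can be silenced by pointing containerd's sandbox image at the same pause:3.9 the cluster uses. A sketch, assuming containerd with its default config path (adapt the sed pattern to your config if it differs):

```shell
# Align containerd's sandbox image with the one kubeadm expects (pause:3.9),
# then restart containerd so the change takes effect.
sed -i 's#sandbox_image = .*#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' \
  /etc/containerd/config.toml
systemctl restart containerd
```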
Upgrading the worker nodes from 1.26.1 to 1.27.2
The worker nodes do not need images pre-pulled: the kubelet pulls whatever images its Pods need on demand. It is enough to install the new kubelet, kubeadm and kubectl packages on the worker, run kubeadm upgrade node, and restart the kubelet.
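In production the worker should be drained first and uncordoned afterwards, same as the master; a sketch for node-02:

```shell
# Evict workloads before upgrading the worker, then put it back in service.
kubectl drain node-02 --ignore-daemonsets --delete-emptydir-data
# ... yum install + kubeadm upgrade node + systemctl restart kubelet on node-02 ...
kubectl uncordon node-02
```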
[root@node-02 ~]# yum install -y kubelet-1.27.2 kubeadm-1.27.2 kubectl-1.27.2 --disableexcludes=kubernetes
[root@node-02 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config4136699873/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!  # node upgrade complete
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@node-02 ~]# systemctl daemon-reload
[root@node-02 ~]# systemctl restart kubelet
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
node-01   Ready    control-plane   25h   v1.27.2
node-02   Ready    <none>          25h   v1.27.2
node-03   Ready    <none>          25h   v1.26.1
After the restart, check the worker's version; the upgrade is complete.
Upgrading the master node from 1.27.2 to 1.28.2
Upgrading from 1.27.2 to 1.28.2 works the same way as before: pull the base images first, then install kubelet, kubeadm and kubectl.
[root@node-01 ~]# kubectl drain node-01 --ignore-daemonsets
node/node-01 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-hnh87, kube-system/kube-proxy-nvv2k
node/node-01 drained
[root@node-01 ~]# kubeadm config images list --kubernetes-version=v1.28.2
W0214 17:19:56.283442 15842 images.go:80] could not find officially supported version of etcd for Kubernetes v1.28.2, falling back to the nearest etcd version (3.5.7-0)
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/coredns/coredns:v1.10.1
[root@node-01 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.2
W0214 17:20:06.692778 15905 images.go:80] could not find officially supported version of etcd for Kubernetes v1.28.2, falling back to the nearest etcd version (3.5.7-0)
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.7-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
[root@node-01 ~]# yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2 --disableexcludes=kubernetes
Now upgrade the master node from 1.27.2 to 1.28.2:
[root@node-01 ~]# kubeadm upgrade apply 1.28.2 -y
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.28.2"
[upgrade/versions] Cluster version: v1.27.2
[upgrade/versions] kubeadm version: v1.28.2
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0214 17:22:13.289676 16555 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.28.2" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-14-17-22-27/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1824515597"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-14-17-22-27/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-14-17-22-27/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-14-17-22-27/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1255531904/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.2". Enjoy!  # upgrade succeeded
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Restart the kubelet, then check the master node's version:
[root@node-01 ~]# systemctl daemon-reload
[root@node-01 ~]# systemctl restart kubelet
[root@node-01 ~]# kubectl uncordon node-01
node/node-01 uncordoned
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
node-01   Ready    control-plane   25h   v1.28.2
node-02   Ready    <none>          25h   v1.27.2
node-03   Ready    <none>          25h   v1.27.2
Upgrading the worker nodes from 1.27 to 1.28
The worker upgrade is the same as before: install the packages, run kubeadm upgrade node, and restart the kubelet.
[root@node-02 ~]# yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
[root@node-02 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config3395284065/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@node-02 ~]# systemctl daemon-reload
[root@node-02 ~]# systemctl restart kubelet
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
node-01   Ready    control-plane   25h   v1.28.2
node-02   Ready    <none>          25h   v1.28.2
node-03   Ready    <none>          25h   v1.27.2
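For the batch upgrades mentioned earlier, the per-worker steps can be looped over SSH; the host list here is just a placeholder for however many workers you actually have:

```shell
# Hypothetical batch upgrade of several workers (hostnames are placeholders;
# assumes password-less SSH as root to each worker).
for n in node-02 node-03; do
  ssh "root@$n" "yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2 --disableexcludes=kubernetes \
    && kubeadm upgrade node \
    && systemctl daemon-reload \
    && systemctl restart kubelet"
done
```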
Upgrading the master node from 1.28.2 to 1.29.2
Starting with the 1.29 release the yum repositories are different: each minor version now has its own repo, so the repo definition has to be updated before installing the new packages.
Here is the new repo being written; after writing it, the yum cache must also be refreshed, otherwise the new repo does not take effect.
[root@node-01 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
> enabled=1
> gpgcheck=1
> gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
> exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
> EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
[root@node-01 ~]# yum clean all
[root@node-01 ~]# yum makecache
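Since the 1.30, 1.31 and 1.32 hops will each need the same edit again, bumping the minor version in the existing repo file is quicker than retyping the heredoc; a sketch for the next hop:

```shell
# Point the existing repo at the next minor (v1.29 -> v1.30) and refresh the cache.
sed -i 's#v1\.29#v1.30#g' /etc/yum.repos.d/kubernetes.repo
yum clean all && yum makecache
```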
With the repo updated, upgrade the master node following the same procedure as before:
[root@node-01 ~]# kubectl drain node-01 --ignore-daemonsets
node/node-01 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-hnh87, kube-system/kube-proxy-z8dbg
node/node-01 drained
[root@node-01 ~]# kubeadm config images list --kubernetes-version=v1.29.2
registry.k8s.io/kube-apiserver:v1.29.2
registry.k8s.io/kube-controller-manager:v1.29.2
registry.k8s.io/kube-scheduler:v1.29.2
registry.k8s.io/kube-proxy:v1.29.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
[root@node-01 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.29.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.29.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
[root@node-01 ~]# yum install -y kubelet-1.29.2 kubeadm-1.29.2 kubectl-1.29.2 --disableexcludes=kubernetes
The 1.29 repo is in place and the packages are installed; now upgrade the master node:
[root@node-01 ~]# kubeadm upgrade apply 1.29.2 -y
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.29.2"
[upgrade/versions] Cluster version: v1.28.2
[upgrade/versions] kubeadm version: v1.29.2
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0217 10:35:49.474258 25863 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.29.2" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-10-36-01/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3031903101"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-10-36-01/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-10-36-01/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-10-36-01/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config996607342/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.29.2". Enjoy!  # upgrade succeeded
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
After the upgrade, restart the kubelet and check that it succeeded; the new version does not show up until the kubelet is restarted.
[root@node-01 ~]# systemctl daemon-reload
[root@node-01 ~]# systemctl restart kubelet
[root@node-01 ~]# kubectl uncordon node-01
node/node-01 uncordoned
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES           AGE     VERSION
node-01   Ready    control-plane   3d18h   v1.29.2
node-02   Ready    <none>          3d18h   v1.28.2
node-03   Ready    <none>          3d18h   v1.28.2
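Besides the node version, it is worth confirming after each hop that the control-plane Pods actually came back healthy; a quick check:

```shell
# Verify control-plane components and add-ons are running, and that the
# apiserver reports itself ready.
kubectl get pods -n kube-system
kubectl get --raw='/readyz?verbose'
```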
Upgrading the worker nodes from 1.28.2 to 1.29.2
Update the yum repo first; as explained above, the repo location changed.
[root@node-02 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
> enabled=1
> gpgcheck=1
> gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
> exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
> EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
[root@node-02 ~]# yum clean all
Loaded plugins: fastestmirror
Cleaning repos: base docker-ce-stable epel extras kubernetes updates
Cleaning up list of fastest mirrors
[root@node-02 ~]# yum makecache
After updating the repo, simply install kubelet, kubeadm and kubectl and upgrade the node:
[root@node-02 ~]# yum install -y kubelet-1.29.2 kubeadm-1.29.2 kubectl-1.29.2 --disableexcludes=kubernetes
[root@node-02 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config3620035432/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!  # upgrade succeeded
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@node-02 ~]# systemctl daemon-reload
[root@node-02 ~]# systemctl restart kubelet
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES           AGE     VERSION
node-01   Ready    control-plane   3d22h   v1.29.2
node-02   Ready    <none>          3d22h   v1.29.2
node-03   Ready    <none>          3d22h   v1.28.2
Upgrading the master node from 1.29.2 to 1.30.2
Kubernetes 1.30.x works the same way: update the yum repo to the v1.30 repo as above before upgrading, then install the packages.
[root@node-01 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
> enabled=1
> gpgcheck=1
> gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
> exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
> EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
[root@node-01 ~]# yum clean all
Loaded plugins: fastestmirror
Cleaning repos: base docker-ce-stable epel extras kubernetes updates
Cleaning up list of fastest mirrors
[root@node-01 ~]# yum makecache
[root@node-01 ~]# kubectl drain node-01 --ignore-daemonsets
node/node-01 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-hnh87, kube-system/kube-proxy-xgxj5
node/node-01 drained
[root@node-01 ~]# kubeadm config images list --kubernetes-version=v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.10-0
[root@node-01 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.30.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.30.2
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.10-0
[root@node-01 ~]# yum install -y kubelet-1.30.2 kubeadm-1.30.2 kubectl-1.30.2 --disableexcludes=kubernetes
After updating the repo and installing kubelet, kubeadm and kubectl, upgrade the master node:
[root@node-01 ~]# kubeadm upgrade apply 1.30.2 -y
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.30.2"
[upgrade/versions] Cluster version: v1.29.2
[upgrade/versions] kubeadm version: v1.30.2
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0217 14:48:37.273437 9916 checks.go:844] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.30.2" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-14-48-44/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3092788334"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-14-48-44/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-14-48-44/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-14-48-44/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1484822805/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.30.2". Enjoy!  # upgrade succeeded
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
The upgrade succeeded. Restart kubelet and check whether the node reports the new version:
[root@node-01 ~]# systemctl daemon-reload [root@node-01 ~]# systemctl restart kubelet [root@node-01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION node-01 Ready control-plane 4d v1.30.2 node-02 Ready <none> 4d v1.29.2 node-03 Ready <none> 4d v1.29.2
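Every hop in this post moves exactly one minor version, because kubeadm refuses larger jumps (this is why 1.26 cannot go straight to 1.28). A small bash guard, as a hypothetical sketch, that checks the skew before you run `kubeadm upgrade apply`; the function names are my own:

```shell
#!/usr/bin/env bash
# Hypothetical guard: refuse upgrades that skip a minor version,
# matching kubeadm's one-minor-version skew policy.
minor() {                       # extract the minor number from "v1.30.2" or "1.30.2"
  local v=${1#v}
  v=${v#*.}                     # drop the major part ("1.")
  echo "${v%%.*}"               # keep the minor part ("30")
}

skew_ok() {                     # skew_ok <current> <target>: succeeds if the jump is 0 or 1 minor
  local cur tgt
  cur=$(minor "$1")
  tgt=$(minor "$2")
  [ $((tgt - cur)) -ge 0 ] && [ $((tgt - cur)) -le 1 ]
}

skew_ok v1.29.2 v1.30.2 && echo "ok: 1.29 -> 1.30"
skew_ok v1.29.2 v1.31.2 || echo "refused: 1.29 -> 1.31 skips 1.30"
```

Run the guard before each `kubeadm upgrade apply`; if it fails, insert the missing intermediate hop first.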
The master node is upgraded. Next come the worker nodes; the procedure is the same: update the yum repo, install kubelet, kubeadm, and kubectl, then restart the service.
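In production, each worker should also be drained before the package upgrade and uncordoned after kubelet restarts, as shown for the master at the start of this post. A hypothetical per-node wrapper (the node name, version, and the `--delete-emptydir-data` flag are my additions); with `DRY_RUN=1` it only prints the commands so they can be reviewed first:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Hypothetical wrapper around the manual worker-upgrade steps.
# With DRY_RUN=1 the commands are printed instead of executed.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

upgrade_worker() {              # upgrade_worker <node-name> <version, e.g. 1.30.2>
  local node=$1 ver=$2
  run kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  run yum install -y "kubelet-$ver" "kubeadm-$ver" "kubectl-$ver" --disableexcludes=kubernetes
  run kubeadm upgrade node
  run systemctl daemon-reload
  run systemctl restart kubelet
  run kubectl uncordon "$node"
}

DRY_RUN=1
upgrade_worker node-02 1.30.2
```

Setting `DRY_RUN=0` (on the node itself, as root) would execute the same sequence for real.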
Upgrading worker nodes from 1.29.2 to 1.30.2
On the worker nodes, likewise, the yum repo must be updated before kubelet, kubeadm, and kubectl can be installed:
[root@node-02 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo > [kubernetes] > name=Kubernetes > baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/ > enabled=1 > gpgcheck=1 > gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key > exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni > EOF [kubernetes] name=Kubernetes baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/ enabled=1 gpgcheck=1 gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni [root@node-02 ~]# yum clean all Loaded plugins: fastestmirror Cleaning repos: base docker-ce-stable epel extras kubernetes updates Cleaning up list of fastest mirrors [root@node-02 ~]# yum makecache
With the repo updated, upgrade the worker node directly:
[root@node-02 ~]# yum install -y kubelet-1.30.2 kubeadm-1.30.2 kubectl-1.30.2 --disableexcludes=kubernetes [root@node-02 ~]# kubeadm upgrade node [upgrade] Reading configuration from the cluster... [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [preflight] Running pre-flight checks [preflight] Skipping prepull. Not a control plane node. [upgrade] Skipping phase. Not a control plane node. [upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2060671800/config.yaml [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [upgrade] The configuration for this node was successfully updated! [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager. [root@node-02 ~]# systemctl daemon-reload [root@node-02 ~]# systemctl restart kubelet [root@node-01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION node-01 Ready control-plane 4d v1.30.2 node-02 Ready <none> 4d v1.30.2 node-03 Ready <none> 4d v1.29.2
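After each round it is easy to miss a node that is still on the old version, as `kubectl get node` above shows for node-03. A hypothetical check that parses `kubectl get node` output and fails if any node is not yet on the target version (it reads stdin here, so it can be exercised on canned output without a cluster):

```shell
#!/usr/bin/env bash
# Hypothetical: read "kubectl get node" output on stdin and verify
# that every node reports the expected version.
all_on_version() {              # all_on_version <expected, e.g. v1.30.2>
  local expected=$1 bad=0 name ver
  while read -r name _ _ _ ver; do
    [ "$name" = NAME ] && continue          # skip the header row
    if [ "$ver" != "$expected" ]; then
      echo "$name still on $ver"
      bad=1
    fi
  done
  return $bad
}

# Canned output matching the state after upgrading node-02 but not node-03.
kubectl_output='NAME     STATUS  ROLES         AGE  VERSION
node-01  Ready   control-plane 4d   v1.30.2
node-02  Ready   <none>        4d   v1.30.2
node-03  Ready   <none>        4d   v1.29.2'

printf '%s\n' "$kubectl_output" | all_on_version v1.30.2 || echo "cluster not fully upgraded yet"
```

In a real run you would pipe `kubectl get node | all_on_version v1.30.2` instead of the canned text.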
Upgrading the master node from 1.30.2 to 1.31.2
Again, update the yum repo, paying attention to the version in the repo file; if it is not updated, the new packages will not be found.
[root@node-01 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo > [kubernetes] > name=Kubernetes > baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/ > enabled=1 > gpgcheck=1 > gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key > exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni > EOF [kubernetes] name=Kubernetes baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/ enabled=1 gpgcheck=1 gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni [root@node-01 ~]# yum clean all Loaded plugins: fastestmirror Cleaning repos: base docker-ce-stable epel extras kubernetes updates Cleaning up list of fastest mirrors [root@node-01 ~]# yum makecache
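Only the minor version in `baseurl` and `gpgkey` changes between rounds, so the repo stanza above can be generated from a template instead of retyped each time. A hypothetical generator (it writes to stdout; in a real run you would redirect it into `/etc/yum.repos.d/kubernetes.repo`):

```shell
#!/usr/bin/env bash
# Hypothetical: emit the kubernetes.repo stanza for a given minor release.
k8s_repo() {                    # k8s_repo <minor, e.g. v1.31>
  local minor=$1
  cat <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/$minor/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/$minor/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
}

k8s_repo v1.31    # redirect to /etc/yum.repos.d/kubernetes.repo, then: yum clean all && yum makecache
```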
Pull the images and install kubelet, kubeadm, and kubectl:
[root@node-01 ~]# kubeadm config images list --kubernetes-version=v1.31.2 registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 [root@node-01 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.31.2 [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.31.2 [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.31.2 [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.31.2 [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.31.2 [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.1 [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9 [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.12-0 [root@node-01 ~]# yum install -y kubelet-1.31.2 kubeadm-1.31.2 kubectl-1.31.2 --disableexcludes=kubernetes
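When `registry.k8s.io` is unreachable, the images come from a mirror such as `registry.aliyuncs.com/google_containers`, which, as the output above shows, also flattens the `coredns/coredns` path. A hypothetical name-mapping helper that mirrors what `kubeadm config images pull --image-repository` does to the names:

```shell
#!/usr/bin/env bash
# Hypothetical: translate an official image name to its aliyun mirror name,
# including the flattened coredns path used by the mirror.
MIRROR=registry.aliyuncs.com/google_containers

mirror_name() {                 # mirror_name registry.k8s.io/<path>:<tag>
  local img=${1#registry.k8s.io/}
  img=${img/coredns\/coredns/coredns}   # the mirror hosts coredns without the extra path segment
  echo "$MIRROR/$img"
}

mirror_name registry.k8s.io/kube-apiserver:v1.31.2
mirror_name registry.k8s.io/coredns/coredns:v1.11.1
```

This can be piped from `kubeadm config images list` to pre-pull every image via `crictl pull` or `ctr image pull` on hosts without direct access.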
Upgrade the master node:
[root@node-01 ~]# kubeadm upgrade apply 1.31.2 -y [preflight] Running pre-flight checks. [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [upgrade] Running cluster health checks [upgrade/version] You have chosen to change the cluster version to "v1.31.2" [upgrade/versions] Cluster version: v1.30.2 [upgrade/versions] kubeadm version: v1.31.2 [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection [upgrade/prepull] You can also perform this action beforehand using 'kubeadm config images pull' W0217 17:06:17.137931 23287 checks.go:846] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image. [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.31.2" (timeout: 5m0s)... [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3365877311" [upgrade/staticpods] Preparing for "etcd" upgrade [upgrade/staticpods] Renewing etcd-server certificate [upgrade/staticpods] Renewing etcd-peer certificate [upgrade/staticpods] Renewing etcd-healthcheck-client certificate [upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-17-06-33/etcd.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This can take up to 5m0s [apiclient] Found 1 Pods for label selector component=etcd [upgrade/staticpods] Component "etcd" upgraded successfully! 
[upgrade/etcd] Waiting for etcd to become available [upgrade/staticpods] Preparing for "kube-apiserver" upgrade [upgrade/staticpods] Renewing apiserver certificate [upgrade/staticpods] Renewing apiserver-kubelet-client certificate [upgrade/staticpods] Renewing front-proxy-client certificate [upgrade/staticpods] Renewing apiserver-etcd-client certificate [upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-17-06-33/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This can take up to 5m0s [apiclient] Found 1 Pods for label selector component=kube-apiserver [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade [upgrade/staticpods] Renewing controller-manager.conf certificate [upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-17-06-33/kube-controller-manager.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This can take up to 5m0s [apiclient] Found 1 Pods for label selector component=kube-controller-manager [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! 
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade [upgrade/staticpods] Renewing scheduler.conf certificate [upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-17-06-33/kube-scheduler.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This can take up to 5m0s [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster [upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config3112572867/config.yaml [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.31.2". Enjoy! #升级成功 [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
The upgrade succeeded. Restart kubelet and check whether the node reports the new version:
[root@node-01 ~]# systemctl daemon-reload [root@node-01 ~]# systemctl restart kubelet [root@node-01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION node-01 Ready control-plane 4d1h v1.31.2 node-02 Ready <none> 4d1h v1.30.2 node-03 Ready <none> 4d1h v1.30.2
Upgrading worker nodes from 1.30.2 to 1.31.2
Same as before: update the yum repo, refresh the cache, then install kubelet, kubeadm, and kubectl:
[root@node-02 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo > [kubernetes] > name=Kubernetes > baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/ > enabled=1 > gpgcheck=1 > gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key > exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni > EOF [kubernetes] name=Kubernetes baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/ enabled=1 gpgcheck=1 gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni [root@node-02 ~]# yum clean all Loaded plugins: fastestmirror Cleaning repos: base docker-ce-stable epel extras kubernetes updates Cleaning up list of fastest mirrors [root@node-02 ~]# yum makecache [root@node-02 ~]# yum install -y kubelet-1.31.2 kubeadm-1.31.2 kubectl-1.31.2 --disableexcludes=kubernetes
Upgrade the worker node and restart the service:
[root@node-02 ~]# kubeadm upgrade node [upgrade] Reading configuration from the cluster... [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [preflight] Running pre-flight checks [preflight] Skipping prepull. Not a control plane node. [upgrade] Skipping phase. Not a control plane node. [upgrade] Skipping phase. Not a control plane node. [upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config191229205/config.yaml [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [upgrade] The configuration for this node was successfully updated! [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager. [root@node-02 ~]# systemctl daemon-reload [root@node-02 ~]# systemctl restart kubelet [root@node-01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION node-01 Ready control-plane 4d1h v1.31.2 node-02 Ready <none> 4d1h v1.31.2 node-03 Ready <none> 4d1h v1.30.2
Upgrading the master node from 1.31.2 to 1.32.2
Again, update the yum repo, then install kubelet, kubeadm, and kubectl:
[root@node-01 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo > [kubernetes] > name=Kubernetes > baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/ > enabled=1 > gpgcheck=1 > gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key > exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni > EOF [kubernetes] name=Kubernetes baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/ enabled=1 gpgcheck=1 gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni [root@node-01 ~]# yum clean all Loaded plugins: fastestmirror Cleaning repos: base docker-ce-stable epel extras kubernetes updates Cleaning up list of fastest mirrors [root@node-01 ~]# yum makecache [root@node-01 ~]# yum install -y kubelet-1.31.2 kubeadm-1.31.2 kubec^C-1.32.2 --disableexcludes=kubernetes [root@node-01 ~]# kubeadm config images list --kubernetes-version=v1.32.2 registry.k8s.io/kube-apiserver:v1.32.2 registry.k8s.io/kube-controller-manager:v1.32.2 registry.k8s.io/kube-scheduler:v1.32.2 registry.k8s.io/kube-proxy:v1.32.2 registry.k8s.io/coredns/coredns:v1.11.3 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 [root@node-01 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.32.2 [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.32.2 [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.32.2 [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.32.2 [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.32.2 [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.3 [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.10 [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.15-0 [root@node-01 ~]# yum install -y kubelet-1.32.2 kubeadm-1.32.2 kubectl-1.32.2 --disableexcludes=kubernetes
Upgrade the master node directly:
[root@node-01 ~]# kubeadm upgrade apply 1.32.2 -y [upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"... [upgrade] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it. [upgrade/preflight] Running preflight checks [upgrade] Running cluster health checks [upgrade/preflight] You have chosen to upgrade the cluster version to "v1.32.2" [upgrade/versions] Cluster version: v1.31.2 [upgrade/versions] kubeadm version: v1.32.2 [upgrade/preflight] Pulling images required for setting up a Kubernetes cluster [upgrade/preflight] This might take a minute or two, depending on the speed of your internet connection [upgrade/preflight] You can also perform this action beforehand using 'kubeadm config images pull' W0217 17:33:14.400501 1922 checks.go:846] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image. [upgrade/control-plane] Upgrading your static Pod-hosted control plane to version "v1.32.2" (timeout: 5m0s)... [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests341939944" [upgrade/staticpods] Preparing for "etcd" upgrade [upgrade/staticpods] Renewing etcd-server certificate [upgrade/staticpods] Renewing etcd-peer certificate [upgrade/staticpods] Renewing etcd-healthcheck-client certificate [upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-17-33-21/etcd.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This can take up to 5m0s [apiclient] Found 1 Pods for label selector component=etcd [upgrade/staticpods] Component "etcd" upgraded successfully! 
[upgrade/etcd] Waiting for etcd to become available [upgrade/staticpods] Preparing for "kube-apiserver" upgrade [upgrade/staticpods] Renewing apiserver certificate [upgrade/staticpods] Renewing apiserver-kubelet-client certificate [upgrade/staticpods] Renewing front-proxy-client certificate [upgrade/staticpods] Renewing apiserver-etcd-client certificate [upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-17-33-21/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This can take up to 5m0s [apiclient] Found 1 Pods for label selector component=kube-apiserver [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade [upgrade/staticpods] Renewing controller-manager.conf certificate [upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-17-33-21/kube-controller-manager.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This can take up to 5m0s [apiclient] Found 1 Pods for label selector component=kube-controller-manager [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! 
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade [upgrade/staticpods] Renewing scheduler.conf certificate [upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-17-17-33-21/kube-scheduler.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This can take up to 5m0s [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! [upgrade/control-plane] The control plane instance for this node was successfully upgraded! [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster [upgrad/kubeconfig] The kubeconfig files for this node were successfully upgraded! W0217 17:36:48.412684 1922 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config1892240455 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR [upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1892240455/config.yaml [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded! 
[upgrade/bootstrap-token] Configuring bootstrap token and cluster-info RBAC rules [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy [upgrade] SUCCESS! A control plane node of your cluster was upgraded to "v1.32.2". #升级成功 [upgrade] Now please proceed with upgrading the rest of the nodes by following the right order. [root@node-01 ~]# systemctl daemon-reload [root@node-01 ~]# systemctl restart kubelet [root@node-01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION node-01 Ready control-plane 4d1h v1.32.2 node-02 Ready <none> 4d1h v1.31.2 node-03 Ready <none> 4d1h v1.31.2
Upgrading worker nodes from 1.31 to 1.32
As before, update the yum repo first, then install kubelet, kubeadm, and kubectl:
[root@node-02 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo > [kubernetes] > name=Kubernetes > baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/ > enabled=1 > gpgcheck=1 > gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key > exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni > EOF [kubernetes] name=Kubernetes baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/ enabled=1 gpgcheck=1 gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni [root@node-02 ~]# yum clean all Loaded plugins: fastestmirror Cleaning repos: base docker-ce-stable epel extras kubernetes updates Cleaning up list of fastest mirrors [root@node-02 ~]# yum makecache [root@node-02 ~]# yum install -y kubelet-1.32.2 kubeadm-1.32.2 kubectl-1.32.2 --disableexcludes=kubernetes
Upgrade the worker node, then restart the service:
[root@node-02 ~]# kubeadm upgrade node [upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"... [upgrade] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it. [upgrade/preflight] Running pre-flight checks [upgrade/preflight] Skipping prepull. Not a control plane node. [upgrade/control-plane] Skipping phase. Not a control plane node. [upgrade/kubeconfig] Skipping phase. Not a control plane node. W0217 17:44:20.649540 1342 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config132880547 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR [upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config132880547/config.yaml [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded! [upgrade/addon] Skipping the addon/coredns phase. Not a control plane node. [upgrade/addon] Skipping the addon/kube-proxy phase. Not a control plane node. [root@node-02 ~]# systemctl daemon-reload [root@node-02 ~]# systemctl restart kubelet [root@node-01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION node-01 Ready control-plane 4d1h v1.32.2 node-02 Ready <none> 4d1h v1.32.2 node-03 Ready <none> 4d1h v1.32.2
The whole upgrade went smoothly. After an upgrade, individual middleware components that depend on specific Kubernetes versions may stop working; my cluster has no such version-sensitive middleware, so I did not run into this. Be cautious in production: test locally first, then in a test environment, before rolling out to production.