Upgrading Kubernetes

October 20, 2022

Kubernetes is a widely used platform now, but how you work with it differs from version to version. Higher versions are somewhat more complex to operate, but they also adapt much better to complex environments, so here we walk through a Kubernetes upgrade. Different repositories carry different Kubernetes versions; to see what the currently configured repository offers, run:

[root@Mater ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes  # list the available stable versions
...............................................................
kubeadm.x86_64                                   1.23.9-0                                     kubernetes
kubeadm.x86_64                                   1.23.10-0                                    kubernetes
kubeadm.x86_64                                   1.23.11-0                                    kubernetes
kubeadm.x86_64                                   1.23.12-0                                    kubernetes
kubeadm.x86_64                                   1.24.0-0                                     kubernetes
kubeadm.x86_64                                   1.24.1-0                                     kubernetes
kubeadm.x86_64                                   1.24.2-0                                     kubernetes
kubeadm.x86_64                                   1.24.3-0                                     kubernetes
kubeadm.x86_64                                   1.24.4-0                                     kubernetes
kubeadm.x86_64                                   1.24.5-0                                     kubernetes
kubeadm.x86_64                                   1.24.6-0                                     kubernetes
kubeadm.x86_64                                   1.25.0-0                                     kubernetes
kubeadm.x86_64                                   1.25.1-0                                     kubernetes
kubeadm.x86_64                                   1.25.2-0                                     kubernetes
[root@Mater ~]# kubectl get node -o wide
NAME    STATUS                     ROLES                  AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
mater   Ready                      control-plane,master   158m   v1.23.2   10.211.55.11   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8
node1   Ready                      <none>                 157m   v1.23.2   10.211.55.12   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8
node2   Ready,SchedulingDisabled   <none>                 157m   v1.23.2   10.211.55.13   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8

The repository used here is Alibaba Cloud's mirror. The latest available version is 1.25, the installed version is 1.23, and the target is 1.24 (kubeadm only supports upgrading one minor version at a time, so jumping straight to 1.25 is not an option). Note that starting with 1.24, Kubernetes no longer supports Docker via dockershim and uses containerd as the container runtime instead, so before upgrading you must first migrate from docker to containerd to avoid errors. See: https://www.wulaoer.org/?p=2652
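
Because kubeadm only moves one minor version at a time, it is worth sanity-checking the jump before installing anything. A minimal sketch of that check (the `minor` and `skew_ok` helper names are mine, not part of kubeadm):

```shell
#!/bin/sh
# Check that an upgrade only moves one minor version at a time,
# per the Kubernetes version-skew policy. Versions look like "vMAJOR.MINOR.PATCH".
minor() {
  # extract the MINOR component, e.g. v1.23.2 -> 23
  echo "${1#v}" | cut -d. -f2
}

skew_ok() {
  # usage: skew_ok <current> <target>
  # succeeds if target is the same minor or exactly one minor ahead
  cur=$(minor "$1"); tgt=$(minor "$2")
  [ $((tgt - cur)) -ge 0 ] && [ $((tgt - cur)) -le 1 ]
}

skew_ok v1.23.2 v1.24.1 && echo "v1.23.2 -> v1.24.1: OK"
skew_ok v1.23.2 v1.25.2 || echo "v1.23.2 -> v1.25.2: too big a jump"
```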

Upgrading the master node

Before upgrading the master node, it must be cordoned off from scheduling and its workloads drained, so the upgrade itself cannot disrupt running services.

[root@Mater ~]# yum install -y kubeadm-1.24.1-0 --disableexcludes=kubernetes
...................................................
[root@Mater ~]# kubectl drain mater --ignore-daemonsets
node/mater cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-2hnck, kube-system/kube-proxy-4w92p
evicting pod kube-system/coredns-6d8c4cb4d-tc6fk
pod/coredns-6d8c4cb4d-tc6fk evicted
node/mater drained

What was installed above is kubeadm v1.24.1, so the version passed to the master upgrade must match it exactly. The upgrade may be slow; that is normal, just give it a moment. After the control plane is upgraded, kubelet must be upgraded as well and its configuration file adjusted.

[root@Mater ~]# kubeadm upgrade apply v1.24.1
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1019 13:02:40.358081   65200 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.24.1"
[upgrade/versions] Cluster version: v1.23.2
[upgrade/versions] kubeadm version: v1.24.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.24.1" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-10-19-13-03-36/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3794708797"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-10-19-13-03-36/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-10-19-13-03-36/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-10-19-13-03-36/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the deprecated label node-role.kubernetes.io/master='' from all control plane Nodes. After this step only the label node-role.kubernetes.io/control-plane='' will be present on control plane Nodes.
[upgrade/postupgrade] Adding the new taint &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} to all control plane Nodes. After this step both taints &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} and &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} should be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.24.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@Mater ~]# yum install -y kubelet-1.24.1-0 kubectl-1.24.1-0 --disableexcludes=kubernetes
Last metadata expiration check: 2:46:17 ago on Wed 19 Oct 2022 10:21:27 AM CST.
Dependencies resolved.
====================================================================================================================================================
 Package                           Architecture                     Version                              Repository                            Size
====================================================================================================================================================
Upgrading:
 kubectl                           x86_64                           1.24.1-0                             kubernetes                           9.9 M
 kubelet                           x86_64                           1.24.1-0                             kubernetes                            20 M

Transaction Summary
====================================================================================================================================================
Upgrade  2 Packages

Total download size: 30 M
Downloading Packages:
(1/2): 17013403794d47f80ade3299c74c3a646d37f195c1057da4db74fd3fd78270f1-kubectl-1.24.1-0.x86_64.rpm                 587 kB/s | 9.9 MB     00:17
(2/2): d184b7647df76898e431cfc9237dea3f8830e3e3398d17b0bf90c1b479984b3f-kubelet-1.24.1-0.x86_64.rpm                 696 kB/s |  20 MB     00:30
----------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                               1.0 MB/s |  30 MB     00:30
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                            1/1
  Running scriptlet: kubelet-1.24.1-0.x86_64                                                                                                    1/1
  Upgrading        : kubelet-1.24.1-0.x86_64                                                                                                    1/4
  Upgrading        : kubectl-1.24.1-0.x86_64                                                                                                    2/4
  Cleanup          : kubectl-1.23.2-0.x86_64                                                                                                    3/4
  Cleanup          : kubelet-1.23.2-0.x86_64                                                                                                    4/4
  Running scriptlet: kubelet-1.23.2-0.x86_64                                                                                                    4/4
  Verifying        : kubectl-1.24.1-0.x86_64                                                                                                    1/4
  Verifying        : kubectl-1.23.2-0.x86_64                                                                                                    2/4
  Verifying        : kubelet-1.24.1-0.x86_64                                                                                                    3/4
  Verifying        : kubelet-1.23.2-0.x86_64                                                                                                    4/4

Upgraded:
  kubectl-1.24.1-0.x86_64                                                  kubelet-1.24.1-0.x86_64

Complete!

# adjust the kubelet parameters
[root@Mater ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
[root@Mater ~]# vim /var/lib/kubelet/kubeadm-flags.env
# the --network-plugin=cni flag was removed; if it is left in place, kubelet will not start
[root@Mater ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
[root@Mater ~]# systemctl daemon-reload; systemctl restart kubelet  # restart kubelet
[root@Mater ~]# kubectl uncordon mater
node/mater uncordoned
# verify
[root@Mater ~]# kubectl get nodes
NAME    STATUS                     ROLES           AGE    VERSION
mater   Ready                      control-plane   169m   v1.24.1
node1   Ready                      <none>          168m   v1.23.2
node2   Ready,SchedulingDisabled   <none>          168m   v1.23.2
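
The same kubeadm-flags.env edit has to be repeated on every node, so it can be scripted instead of done by hand in vim. A sketch, assuming GNU sed and the file path shown above (`strip_cni_flag` is an illustrative helper name, not from kubeadm):

```shell
#!/bin/sh
# Remove the --network-plugin=cni flag (dropped along with dockershim in 1.24)
# from a kubeadm-flags.env file, in place.
strip_cni_flag() {
  sed -i 's/--network-plugin=cni //g' "$1"
}

# usage on a node:
#   strip_cni_flag /var/lib/kubelet/kubeadm-flags.env
#   systemctl daemon-reload; systemctl restart kubelet
```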

Upgrading node1

Before upgrading node1, drain it as well, then install the new kubeadm, run the node upgrade, and finally upgrade kubelet.

[root@Mater ~]# kubectl drain node1 --ignore-daemonsets
node/node1 cordoned
[root@Node1 ~]# yum install -y kubeadm-1.24.1-0 --disableexcludes=kubernetes
...............................................................
[root@Node1 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

[root@Node1 ~]# vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
# changed to the following (the --network-plugin=cni flag is removed):
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
[root@Node1 ~]#  yum install -y kubelet-1.24.1-0 kubectl-1.24.1-0 --disableexcludes=kubernetes
..............................................................
[root@Node1 ~]# systemctl daemon-reload;  systemctl restart kubelet
# verify (run on the master)
[root@Mater ~]# kubectl uncordon node1
node/node1 uncordoned
[root@Mater ~]# kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
mater   Ready    control-plane   3h8m   v1.24.1
node1   Ready    <none>          3h7m   v1.24.1
node2   Ready    <none>          3h7m   v1.23.2

Upgrading node2

[root@Mater ~]# kubectl drain node2 --ignore-daemonsets
[root@Node2 ~]# yum install -y kubeadm-1.24.1-0 --disableexcludes=kubernetes
[root@Node2 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@Node2 ~]# vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
# changed to the following (the --network-plugin=cni flag is removed):
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
[root@Node2 ~]# yum install -y kubelet-1.24.1-0 kubectl-1.24.1-0 --disableexcludes=kubernetes
[root@Node2 ~]# systemctl daemon-reload;  systemctl restart kubelet
# verify (run on the master)
[root@Mater ~]# kubectl uncordon node2
node/node2 uncordoned
[root@Mater ~]# kubectl get node
NAME    STATUS   ROLES           AGE     VERSION
mater   Ready    control-plane   3h17m   v1.24.1
node1   Ready    <none>          3h16m   v1.24.1
node2   Ready    <none>          3h16m   v1.24.1
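
The worker steps were identical for node1 and node2, so they can be collected into one script. This is only a sketch: the `upgrade_worker` function, the `RUN` dry-run wrapper, and passwordless ssh to each node are all assumptions for illustration, not part of the original walkthrough. By default it only echoes the commands it would run:

```shell
#!/bin/sh
# Dry-run sketch of the repeated worker-node upgrade sequence.
# Set RUN= (empty) to actually execute the commands.
KUBE_VERSION=${KUBE_VERSION:-1.24.1-0}
RUN=${RUN:-echo}

upgrade_worker() {
  node=$1
  $RUN kubectl drain "$node" --ignore-daemonsets   # run on the master
  $RUN ssh "$node" yum install -y "kubeadm-${KUBE_VERSION}" --disableexcludes=kubernetes
  $RUN ssh "$node" kubeadm upgrade node
  $RUN ssh "$node" yum install -y "kubelet-${KUBE_VERSION}" "kubectl-${KUBE_VERSION}" --disableexcludes=kubernetes
  $RUN ssh "$node" "systemctl daemon-reload; systemctl restart kubelet"
  $RUN kubectl uncordon "$node"                    # back on the master
}

upgrade_worker node1
```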

Cluster verification

An nginx service is deployed in the cluster and was reachable before the upgrade; here we verify it again by hitting the NodePort on each node and checking the returned HTTP status.

[root@Mater ~]# curl -I -s -m10 http://10.211.55.13:32469 |grep HTTP|awk '{print $2}'
200
[root@Mater ~]# curl -I -s -m10 http://10.211.55.12:32469 |grep HTTP|awk '{print $2}'
200
[root@Mater ~]# curl -I -s -m10 http://10.211.55.11:32469 |grep HTTP|awk '{print $2}'
200
[root@Mater ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
default       nginx-85b98978db-c2whk                     1/1     Running   0             131m
kube-system   calico-kube-controllers-5dc679f767-sr2zh   1/1     Running   0             131m
kube-system   calico-node-2hnck                          1/1     Running   1             3h13m
kube-system   calico-node-99mbp                          1/1     Running   2             3h13m
kube-system   calico-node-sxwhd                          1/1     Running   2             3h13m
kube-system   coredns-6d8c4cb4d-djfcf                    1/1     Running   0             131m
kube-system   coredns-6d8c4cb4d-n97fk                    1/1     Running   0             39m
kube-system   etcd-mater                                 1/1     Running   1 (30m ago)   30m
kube-system   kube-apiserver-mater                       1/1     Running   1 (30m ago)   30m
kube-system   kube-controller-manager-mater              1/1     Running   1 (30m ago)   30m
kube-system   kube-proxy-tjkpw                           1/1     Running   0             35m
kube-system   kube-proxy-wbflf                           1/1     Running   0             35m
kube-system   kube-proxy-xwcs8                           1/1     Running   0             34m
kube-system   kube-scheduler-mater                       1/1     Running   1 (30m ago)   30m
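
The three curl checks above can be wrapped into a small helper that loops over all nodes. A sketch reusing this article's NodePort (32469); the `http_status` and `check_nodes` function names are mine:

```shell
#!/bin/sh
# Extract the HTTP status code from response headers on stdin,
# mirroring the `curl -I ... | grep HTTP | awk '{print $2}'` pipeline above.
http_status() {
  grep -m1 '^HTTP' | awk '{print $2}'
}

# Hit the nginx NodePort on every node IP passed in and report the status.
check_nodes() {
  for ip in "$@"; do
    code=$(curl -I -s -m10 "http://${ip}:32469" | http_status)
    printf '%s %s\n' "$ip" "${code:-FAIL}"
  done
}

# usage:
#   check_nodes 10.211.55.11 10.211.55.12 10.211.55.13
```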

All nodes respond normally, the pods and containers are healthy, and the Kubernetes upgrade succeeded. The key point of this upgrade is the switch from docker to containerd: if docker is not replaced first, the cluster will be unusable after upgrading to 1.24.
