Upgrading Kubernetes from 1.24 to 1.25

With some spare time on my hands, I decided to document a version upgrade. I previously wrote up converting a single-master cluster into a highly available one, and this upgrade is done on top of that HA cluster, so I won't back up etcd here. If you are running a single-control-plane cluster, be sure to back up etcd first. Without further ado, here is how I did it.

 [root@master ~]# kubectl get node
 NAME      STATUS                     ROLES           AGE    VERSION
 master    Ready                      control-plane   27h    v1.24.1
 node1     Ready                      control-plane   136d   v1.24.1
 node2     Ready                      control-plane   52d    v1.24.1
 wulaoer   Ready                      <none>          136d   v1.24.1

This is my cluster, currently on v1.24.1. I have to upgrade to 1.25.2 first; I can't jump straight to 1.26.2, because kubeadm lets you skip patch releases within a minor version but not minor versions themselves, so the upgrade has to go one minor version at a time. The procedure is: drain the current node, pull the images the cluster needs, install the 1.25.2 packages, run the upgrade to 1.25.2, and finally restart kubelet; without the restart the new version won't show up in kubectl get node. Here are the commands the upgrade uses.

 [wolf@wulaoer.org 🔥🔥🔥🔥 ~ ]$ kubectl drain master --ignore-daemonsets
 [wolf@wulaoer.org 🔥🔥🔥🔥 ~ ]$ kubeadm config images list --kubernetes-version=v1.25.2  # optional; you can also run the pull below directly, which fetches the images for the given version from the Aliyun mirror
 [wolf@wulaoer.org 🔥🔥🔥🔥 ~ ]$ kubeadm config images pull --kubernetes-version=v1.25.2 --image-repository registry.aliyuncs.com/google_containers
 [wolf@wulaoer.org 🔥🔥🔥🔥 ~ ]$ yum install -y kubelet-1.25.2 kubeadm-1.25.2 kubectl-1.25.2 --disableexcludes=kubernetes
 [wolf@wulaoer.org 🔥🔥🔥🔥 ~ ]$ kubeadm upgrade apply 1.25.2 -y
 [wolf@wulaoer.org 🔥🔥🔥🔥 ~ ]$ systemctl restart kubelet
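
Before actually running kubeadm upgrade apply, it is worth letting kubeadm confirm the upgrade path itself. A quick check, run after the 1.25.2 kubeadm package is installed, prints the current cluster version, the kubeadm version, and the valid target versions:

 [root@master ~]# kubeadm upgrade plan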

Note that the image list refers to registries hosted abroad. To make downloading easier from inside China, use the Aliyun mirror instead: simply replace registry.k8s.io with registry.aliyuncs.com/google_containers and pull the images one by one (a small loop that scripts this is sketched after the listing below).

 [root@master ~]# kubeadm config images list --kubernetes-version=v1.25.2
 registry.k8s.io/kube-apiserver:v1.25.2
 registry.k8s.io/kube-controller-manager:v1.25.2
 registry.k8s.io/kube-scheduler:v1.25.2
 registry.k8s.io/kube-proxy:v1.25.2
 registry.k8s.io/pause:3.8
 registry.k8s.io/etcd:3.5.4-0
 registry.k8s.io/coredns/coredns:v1.9.3
 [root@master ~]# crictl images
 IMAGE                                                             TAG                 IMAGE ID            SIZE
 docker.io/calico/cni                                              v3.20.6             13b6f63a50d67       45.3MB
 docker.io/calico/kube-controllers                                 v3.20.6             4dc6e7685020b       25MB
 docker.io/calico/node                                             v3.20.6             daeec7e26e1f5       58.9MB
 docker.io/calico/pod2daemon-flexvol                               v3.20.6             39b166f3f9360       8.61MB
 ghcr.io/kube-vip/kube-vip                                         v0.6.2              404ca3549f735       14MB
 registry.aliyuncs.com/google_containers/coredns                   v1.8.6              a4ca41631cc7a       13.6MB
 registry.aliyuncs.com/google_containers/coredns                   v1.9.3              5185b96f0becf       14.8MB
 registry.aliyuncs.com/google_containers/etcd                      3.5.3-0             aebe758cef4cd       102MB
 registry.aliyuncs.com/google_containers/kube-apiserver            v1.24.1             e9f4b425f9192       33.8MB
 registry.aliyuncs.com/google_containers/kube-apiserver            v1.25.2             97801f8394908       34.2MB
 registry.aliyuncs.com/google_containers/kube-controller-manager   v1.24.1             b4ea7e648530d       31MB
 registry.aliyuncs.com/google_containers/kube-controller-manager   v1.25.2             dbfceb93c69b6       31.3MB
 registry.aliyuncs.com/google_containers/kube-proxy                v1.24.1             beb86f5d8e6cd       39.5MB
 registry.aliyuncs.com/google_containers/kube-proxy                v1.25.2             1c7d8c51823b5       20.3MB
 registry.aliyuncs.com/google_containers/kube-scheduler            v1.24.1             18688a72645c5       15.5MB
 registry.aliyuncs.com/google_containers/kube-scheduler            v1.25.2             ca0ea1ee3cfd3       15.8MB
 registry.aliyuncs.com/google_containers/pause                     3.6                 6270bb605e12e       302kB
 registry.aliyuncs.com/google_containers/pause                     3.7                 221177c6082a8       311kB
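
If you'd rather not pull each image by hand, the registry substitution and the pulls can be scripted in one go. This is only a sketch, assuming a containerd runtime driven through crictl; note that on the Aliyun mirror the CoreDNS repository path may need to be flattened (coredns/coredns becomes coredns, as the crictl listing above shows):

 # List the images required for v1.25.2, swap in the Aliyun mirror,
 # flatten the CoreDNS path, and pull each image with crictl.
 [root@master ~]# kubeadm config images list --kubernetes-version=v1.25.2 \
     | sed -e 's#registry.k8s.io#registry.aliyuncs.com/google_containers#' \
           -e 's#google_containers/coredns/coredns#google_containers/coredns#' \
     | while read -r img; do crictl pull "$img"; done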

With all the images downloaded, the next step is to install the Kubernetes components. The cluster was originally installed with the 1.24.1 packages, so now install 1.25.2.

 [root@master ~]# yum install -y kubelet-1.25.2 kubeadm-1.25.2 kubectl-1.25.2 --disableexcludes=kubernetes
 [root@master ~]# kubeadm version
 kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-21T14:32:18Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"linux/amd64"}
 [root@master ~]# kubectl drain master --ignore-daemonsets  # drain the node before upgrading
 node/master cordoned
 WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-67747, kube-system/kube-proxy-p8mmr
 node/master drained
 [root@master ~]# kubectl get node
 NAME      STATUS                     ROLES           AGE    VERSION
 master    Ready,SchedulingDisabled   control-plane   27h    v1.24.1
 node1     Ready                      control-plane   136d   v1.24.1
 node2     Ready                      control-plane   52d    v1.24.1
 wulaoer   Ready                      <none>          136d   v1.24.1
 
 [root@master ~]# kubeadm upgrade apply 1.25.2 -y
 [upgrade/config] Making sure the configuration is correct:
 [upgrade/config] Reading configuration from the cluster...
 [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
 [preflight] Running pre-flight checks.
 [upgrade] Running cluster health checks
 [upgrade/version] You have chosen to change the cluster version to "v1.25.2"
 [upgrade/versions] Cluster version: v1.24.1
 [upgrade/versions] kubeadm version: v1.25.2
 [upgrade] Are you sure you want to proceed? [y/N]: y
 [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
 [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
 [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
 [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.25.2" (timeout: 5m0s)...
 [upgrade/etcd] Upgrading to TLS for etcd
 [upgrade/staticpods] Preparing for "etcd" upgrade
 [upgrade/staticpods] Renewing etcd-server certificate
 [upgrade/staticpods] Renewing etcd-peer certificate
 [upgrade/staticpods] Renewing etcd-healthcheck-client certificate
 [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-30-14-07-40/etcd.yaml"
 [upgrade/staticpods] Waiting for the kubelet to restart the component
 [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
 [apiclient] Found 3 Pods for label selector component=etcd
 [upgrade/staticpods] Component "etcd" upgraded successfully!
 [upgrade/etcd] Waiting for etcd to become available
 [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3338802492"
 [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
 [upgrade/staticpods] Renewing apiserver certificate
 [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
 [upgrade/staticpods] Renewing front-proxy-client certificate
 [upgrade/staticpods] Renewing apiserver-etcd-client certificate
 [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-30-14-07-40/kube-apiserver.yaml"
 [upgrade/staticpods] Waiting for the kubelet to restart the component
 [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
 [apiclient] Found 3 Pods for label selector component=kube-apiserver
 [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
 [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
 [upgrade/staticpods] Renewing controller-manager.conf certificate
 [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-30-14-07-40/kube-controller-manager.yaml"
 [upgrade/staticpods] Waiting for the kubelet to restart the component
 [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
 [apiclient] Found 3 Pods for label selector component=kube-controller-manager
 [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
 [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
 [upgrade/staticpods] Renewing scheduler.conf certificate
 [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-30-14-07-40/kube-scheduler.yaml"
 [upgrade/staticpods] Waiting for the kubelet to restart the component
 [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
 [apiclient] Found 3 Pods for label selector component=kube-scheduler
 [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
 [upgrade/postupgrade] Removing the old taint &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} from all control plane Nodes. After this step only the &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} taint will be present on control plane Nodes.
 [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
 [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
 [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
 [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
 [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
 [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
 [addons] Applied essential addon: CoreDNS
 [addons] Applied essential addon: kube-proxy
 
 [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.25.2". Enjoy!
 
 [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
 
 [root@master ~]# systemctl restart kubelet
 [root@master ~]# kubectl uncordon master
 node/master uncordoned
 [root@master ~]# kubectl get node
 NAME      STATUS   ROLES           AGE    VERSION
 master    Ready    control-plane   27h    v1.25.2
 node1     Ready    control-plane   136d   v1.24.1
 node2     Ready    control-plane   52d    v1.24.1
 wulaoer   Ready    <none>          136d   v1.24.1
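
As an optional sanity check, you can confirm that the control-plane static pods on this node really picked up the 1.25.2 images. kubeadm names static pods <component>-<node name>, so on this node the API server pod is kube-apiserver-master, and the command below should print an image tagged v1.25.2:

 [root@master ~]# kubectl -n kube-system get pod kube-apiserver-master \
     -o jsonpath='{.spec.containers[0].image}{"\n"}'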

The master node is now upgraded, and the other control-plane nodes follow exactly the same procedure. Next, the worker node: to avoid disrupting applications it should also be drained first, then install the new packages and run the upgrade.
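
The drain itself is issued from a node with kubectl access (the master here); treat this as a sketch and adjust the flags to your workloads, since --delete-emptydir-data is only needed when evicted pods use emptyDir volumes:

 [root@master ~]# kubectl drain wulaoer --ignore-daemonsets --delete-emptydir-data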

 [root@wulaoer ~]# yum install -y kubelet-1.25.2 kubeadm-1.25.2 kubectl-1.25.2 --disableexcludes=kubernetes
 [root@wulaoer ~]# kubeadm upgrade node
 [upgrade] Reading configuration from the cluster...
 [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
 [preflight] Running pre-flight checks
 [preflight] Skipping prepull. Not a control plane node.
 [upgrade] Skipping phase. Not a control plane node.
 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
 [upgrade] The configuration for this node was successfully updated!
 [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
 [root@wulaoer ~]# systemctl daemon-reload
 [root@wulaoer ~]# systemctl restart kubelet
 
 [root@master ~]# kubectl uncordon wulaoer
 node/wulaoer uncordoned
 [root@master ~]# kubectl get node
 NAME      STATUS   ROLES           AGE    VERSION
 master    Ready    control-plane   28h    v1.25.2
 node1     Ready    control-plane   136d   v1.25.2
 node2     Ready    control-plane   53d    v1.25.2
 wulaoer   Ready    <none>          136d   v1.25.2

One final note: after the upgrade the cluster's certificates are renewed, effectively starting their lifetime over as if the cluster had just been installed. So if you are on a fairly old version, upgrading regularly also means you don't have to worry about the cluster certificates expiring.
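
You can verify this with kubeadm's built-in certificate report; run it on a control-plane node after the upgrade and the expiry dates should sit roughly one year out from the upgrade date:

 [root@master ~]# kubeadm certs check-expiration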
