Migrating from Docker to containerd in Kubernetes


This article covers replacing Docker with containerd. The main reason is that newer Kubernetes versions use containerd by default, so a Kubernetes upgrade will eventually require removing Docker. Let's try it out; see the walkthrough below:

[root@Mater ~]# kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
mater   Ready    control-plane,master   23m   v1.23.2
node1   Ready    <none>                 22m   v1.23.2
node2   Ready    <none>                 22m   v1.23.2
[root@Mater ~]# kubectl drain mater --ignore-daemonsets
node/mater cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-2hnck, kube-system/kube-proxy-4w92p
node/mater drained

Before the replacement, we need to drain all the applications off the node so that removing Docker does not disrupt users. For instructions on installing Kubernetes itself, see the previous article; they are not repeated here.
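If you want to double-check that nothing except DaemonSet-managed pods is left on the drained node before removing Docker, a quick query like the following works (a small sketch using the node name from this example):

kubectl get pods -A -o wide --field-selector spec.nodeName=mater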

Replace the master node first

After draining the node, stop Docker, remove it, then install containerd and enable it to start on boot.

[root@Mater ~]# systemctl disable docker --now
Removed /etc/systemd/system/multi-user.target.wants/docker.service.
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
[root@Mater ~]# systemctl disable docker.socket --now
[root@Mater ~]# yum remove docker-ce docker-ce-cli -y
....................................
[root@Mater ~]# yum install containerd.io cri-tools -y # install containerd
[root@Mater ~]# crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
[root@Mater ~]# containerd config default > /etc/containerd/config.toml # generate the default config file

Modify the configuration file

The modified configuration file can be shared with the other nodes. There are three key settings to change: mirrors, sandbox_image, and SystemdCgroup (a sed-based shortcut for two of them is sketched after the snippet below).

[root@Mater ~]# vim /etc/containerd/config.toml
.................keyword: mirrors...............................
     [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
change to
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://frz7i079.mirror.aliyuncs.com"]
 .................keyword: sandbox...............................
sandbox_image = "k8s.gcr.io/pause:3.6"
change to
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
 .................keyword: SystemdCgroup...............................
           [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false
change to (note: the lines in between have been deleted, not merely omitted from the listing)
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
 ................................................       
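If you prefer not to edit the file by hand, the sandbox_image and SystemdCgroup changes can also be applied with sed. This is a rough sketch assuming the stock values generated by containerd config default above; the mirrors block spans several lines and is easier to add in an editor:

# switch runc to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# replace the default pause image with the Aliyun-hosted one
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml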
[root@Mater ~]# modprobe overlay ; modprobe br_netfilter # load the kernel modules
[root@Mater ~]# cat > /etc/modules-load.d/containerd.conf <<EOF # load these modules automatically at boot
> overlay
> br_netfilter
> EOF
[root@Mater ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[root@Mater ~]# sysctl -p /etc/sysctl.d/k8s.conf # apply immediately

These files make sure the modules and sysctl settings come back automatically after a reboot, without manual intervention. Note that the master node must be drained before switching it to containerd: draining moves its workloads to the other nodes, and no new pods will be scheduled onto the master while it stays cordoned.
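A quick sanity check (not part of the original steps) confirms that the modules are loaded and the sysctl values took effect:

lsmod | grep -E 'overlay|br_netfilter'                          # both modules should be listed
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both values should be 1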

Start containerd

With the containerd configuration file in place, enable containerd at boot and start it.

[root@Mater ~]# systemctl enable containerd
[root@Mater ~]# systemctl restart containerd
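Before reconfiguring the kubelet, it is worth confirming that containerd came up cleanly; with cri-tools already installed and the endpoint configured as above, a simple check looks like this:

systemctl is-active containerd   # should print "active"
crictl info                      # should reach the containerd socket and print its status as JSON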

Configure kubelet

Since the node originally ran on Docker and Docker has now been removed, the kubelet configuration must be updated to point at the containerd socket, and kubelet restarted.

[root@Mater ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
[root@Mater ~]# systemctl restart kubelet
[root@Mater ~]# kubectl uncordon mater # re-enable scheduling
node/mater already uncordoned

Verify the master node

After uncordoning the master node, check the cluster node status. The master node has been switched from Docker to containerd without disrupting the running applications. If your nodes run a relatively large number of applications, do the migration during off-peak hours and consider keeping a spare worker node available.

[root@Mater ~]# kubectl get nodes -o wide
NAME    STATUS                     ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
mater   Ready,SchedulingDisabled   control-plane,master   51m   v1.23.2   10.211.55.11   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8
node1   Ready                      <none>                 50m   v1.23.2   10.211.55.12   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   docker://20.10.18
node2   Ready                      <none>                 50m   v1.23.2   10.211.55.13   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   docker://20.10.18
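If you only want the runtime field for a single node, a jsonpath query gives the same information more compactly (using the node name from this cluster):

kubectl get node mater -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# expected output: containerd://1.6.8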

Install containerd on node1

Before installing, drain node1 from the master node to disable scheduling on it. If the cluster is managed for you, any host that can reach the Kubernetes API will do. Then remove Docker and install containerd.

[root@Mater ~]# kubectl drain node1 --ignore-daemonsets # cordon and drain
node/node1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-sxwhd, kube-system/kube-proxy-f7vrx
evicting pod kube-system/coredns-6d8c4cb4d-gjxz5
evicting pod kube-system/calico-kube-controllers-5dc679f767-c874r
evicting pod kube-system/coredns-6d8c4cb4d-74dpl
pod/calico-kube-controllers-5dc679f767-c874r evicted
pod/coredns-6d8c4cb4d-74dpl evicted
pod/coredns-6d8c4cb4d-gjxz5 evicted
node/node1 drained
[root@Mater ~]# kubectl get node
NAME    STATUS                     ROLES                  AGE   VERSION
mater   Ready                      control-plane,master   55m   v1.23.2
node1   Ready,SchedulingDisabled   <none>                 54m   v1.23.2
node2   Ready                      <none>                 53m   v1.23.2
# Note: drain the node from the master so that all its workloads are evicted, then uninstall Docker on the node
[root@Node1 ~]# systemctl disable docker --now # stop Docker and remove it from autostart
Removed /etc/systemd/system/multi-user.target.wants/docker.service.
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
[root@Node1 ~]# systemctl disable docker.socket --now
[root@Node1 ~]# yum remove docker-ce docker-ce-cli -y # uninstall Docker
..................................
[root@Node1 ~]# yum install containerd.io cri-tools -y # install containerd
[root@Node1 ~]# crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

To avoid rewriting the configuration files, copy them from the master node to node1:

[root@Mater ~]# scp /etc/containerd/config.toml root@10.211.55.12:/etc/containerd/
The authenticity of host '10.211.55.12 (10.211.55.12)' can't be established.
ECDSA key fingerprint is SHA256:qQ8tjo4mtpA3UKnery8ACC1r+FAz8rlYMZynDs9Afbk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.211.55.12' (ECDSA) to the list of known hosts.
root@10.211.55.12's password:
config.toml                              100% 7078     7.4MB/s   00:00
[root@Mater ~]# scp /etc/modules-load.d/containerd.conf  root@10.211.55.12:/etc/modules-load.d/
root@10.211.55.12's password:
containerd.conf                          100%   21    15.7KB/s   00:00
[root@Mater ~]# scp  /etc/sysctl.d/k8s.conf root@10.211.55.12:/etc/sysctl.d/
root@10.211.55.12's password:
k8s.conf                                 100%  103   113.6KB/s   00:00
[root@Mater ~]# scp /etc/sysconfig/kubelet root@10.211.55.12:/etc/sysconfig/
root@10.211.55.12's password:
kubelet                                  100%  147    91.8KB/s   00:00
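Since the same four files are needed on every worker, a small loop saves retyping the scp commands. This sketch assumes the node IPs from this example and that containerd.io has already been installed on each target so that /etc/containerd exists:

for host in 10.211.55.12 10.211.55.13; do
  scp /etc/containerd/config.toml         root@$host:/etc/containerd/
  scp /etc/modules-load.d/containerd.conf root@$host:/etc/modules-load.d/
  scp /etc/sysctl.d/k8s.conf              root@$host:/etc/sysctl.d/
  scp /etc/sysconfig/kubelet              root@$host:/etc/sysconfig/
done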

Load the kernel modules and sysctl settings, start containerd and kubelet, then uncordon node1 and verify.

[root@Node1 ~]# modprobe overlay ; modprobe br_netfilter
[root@Node1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@Node1 ~]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@Node1 ~]# systemctl restart containerd
[root@Node1 ~]# systemctl restart kubelet
[root@Mater ~]# kubectl get node
NAME    STATUS                     ROLES                  AGE   VERSION
mater   Ready                      control-plane,master   64m   v1.23.2
node1   Ready,SchedulingDisabled   <none>                 63m   v1.23.2
node2   Ready                      <none>                 63m   v1.23.2
[root@Mater ~]# kubectl uncordon node1 # re-enable scheduling
node/node1 uncordoned
[root@Mater ~]# kubectl get node
NAME    STATUS   ROLES                  AGE   VERSION
mater   Ready    control-plane,master   65m   v1.23.2
node1   Ready    <none>                 64m   v1.23.2
node2   Ready    <none>                 64m   v1.23.2
[root@Mater ~]# kubectl get nodes -o wide
NAME    STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
mater   Ready    control-plane,master   67m   v1.23.2   10.211.55.11   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8
node1   Ready    <none>                 66m   v1.23.2   10.211.55.12   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8
node2   Ready    <none>                 65m   v1.23.2   10.211.55.13   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   docker://20.10.18

Install containerd on node2

The procedure for node2 is the same as for node1: drain it to disable scheduling, uninstall Docker, install containerd, copy the configuration files over from the master node, then load the settings and start the services (the full per-node sequence is also collected into a script sketch at the end of this article).

[root@Mater ~]# kubectl drain node2 --ignore-daemonsets # cordon and drain
node/node2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-99mbp, kube-system/kube-proxy-jfd2f
evicting pod kube-system/coredns-6d8c4cb4d-dwd5h
evicting pod default/nginx-85b98978db-q7mwm
evicting pod kube-system/calico-kube-controllers-5dc679f767-hkxgb
evicting pod kube-system/coredns-6d8c4cb4d-d2phf
pod/calico-kube-controllers-5dc679f767-hkxgb evicted
pod/nginx-85b98978db-q7mwm evicted
pod/coredns-6d8c4cb4d-dwd5h evicted
pod/coredns-6d8c4cb4d-d2phf evicted
node/node2 drained
[root@Node2 ~]# systemctl disable docker --now
Removed /etc/systemd/system/multi-user.target.wants/docker.service.
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
[root@Node2 ~]# systemctl disable docker.socket --now
[root@Node2 ~]# yum remove docker-ce docker-ce-cli -y # uninstall Docker
.............................................
[root@Node2 ~]# yum install  containerd.io cri-tools  -y
[root@Node2 ~]# crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

[root@Mater ~]# scp /etc/containerd/config.toml 10.211.55.13:/etc/containerd/
The authenticity of host '10.211.55.13 (10.211.55.13)' can't be established.
ECDSA key fingerprint is SHA256:4wBcllINqPBu0HiKaCR6L+sc82tNhgb7Bfqk2ja66s4.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.211.55.13' (ECDSA) to the list of known hosts.
root@10.211.55.13's password:
config.toml                                        100% 7078     4.6MB/s   00:00
[root@Mater ~]#  scp /etc/modules-load.d/containerd.conf  10.211.55.13:/etc/modules-load.d/
root@10.211.55.13's password:
containerd.conf                                    100%   21    14.6KB/s   00:00
[root@Mater ~]# scp  /etc/sysctl.d/k8s.conf 10.211.55.13:/etc/sysctl.d/
root@10.211.55.13's password:
k8s.conf                                           100%  103   104.1KB/s   00:00
[root@Mater ~]# scp /etc/sysconfig/kubelet 10.211.55.13:/etc/sysconfig/
root@10.211.55.13's password:
kubelet                                            100%  147    39.1KB/s   00:00
[root@Node2 ~]# modprobe overlay ; modprobe br_netfilter
[root@Node2 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@Node2 ~]# systemctl enable containerd  ; systemctl restart containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@Node2 ~]#  systemctl restart kubelet
[root@Mater ~]# kubectl uncordon node2
node/node2 uncordoned
[root@Mater ~]# kubectl get node
NAME    STATUS   ROLES                  AGE   VERSION
mater   Ready    control-plane,master   76m   v1.23.2
node1   Ready    <none>                 75m   v1.23.2
node2   Ready    <none>                 75m   v1.23.2
[root@Mater ~]# kubectl get node -o wide
NAME    STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
mater   Ready    control-plane,master   76m   v1.23.2   10.211.55.11   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8
node1   Ready    <none>                 75m   v1.23.2   10.211.55.12   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8
node2   Ready    <none>                 75m   v1.23.2   10.211.55.13   <none>        CentOS Stream 8   4.18.0-383.el8.x86_64   containerd://1.6.8

Issues encountered during the installation

Docker and containerd use different command-line tools, so when you list containers on the master node with crictl ps you may see errors, because crictl does not yet know which runtime endpoint to talk to. The fix is shown below:

[root@Mater ~]# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused"
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused"
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
5670d7ae9904c       a4ca41631cc7a       14 minutes ago      Running             coredns                   0                   d0e8896bf33d4       coredns-6d8c4cb4d-tc6fk
d97e91fff952c       a7de69da7d13a       29 minutes ago      Running             calico-node               1                   ec4df8d95e7f2       calico-node-2hnck
0aed24b783d86       d922ca3da64b3       31 minutes ago      Running             kube-proxy                2                   0656a682543b0       kube-proxy-4w92p
bedca0a04bae6       4783639ba7e03       32 minutes ago      Running             kube-controller-manager   2                   bf76f4a139346       kube-controller-manager-mater
a76a891063d29       6114d758d6d16       32 minutes ago      Running             kube-scheduler            2                   4512bf3a1bff2       kube-scheduler-mater
5a4226564e01a       8a0228dd6a683       32 minutes ago      Running             kube-apiserver            2                   4a63979ba421b       kube-apiserver-mater
b10522e331667       25f8c7f3da61c       33 minutes ago      Running             etcd                      2                   99700d02903ca       etcd-mater
Solution:
cat <<EOF> /etc/crictl.yaml 
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@Mater ~]# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
5670d7ae9904c       a4ca41631cc7a       17 minutes ago      Running             coredns                   0                   d0e8896bf33d4       coredns-6d8c4cb4d-tc6fk
d97e91fff952c       a7de69da7d13a       31 minutes ago      Running             calico-node               1                   ec4df8d95e7f2       calico-node-2hnck
0aed24b783d86       d922ca3da64b3       33 minutes ago      Running             kube-proxy                2                   0656a682543b0       kube-proxy-4w92p
bedca0a04bae6       4783639ba7e03       34 minutes ago      Running             kube-controller-manager   2                   bf76f4a139346       kube-controller-manager-mater
a76a891063d29       6114d758d6d16       34 minutes ago      Running             kube-scheduler            2                   4512bf3a1bff2       kube-scheduler-mater
5a4226564e01a       8a0228dd6a683       34 minutes ago      Running             kube-apiserver            2                   4a63979ba421b       kube-apiserver-mater
b10522e331667       25f8c7f3da61c       35 minutes ago      Running             etcd                      2                   99700d02903ca       etcd-mater
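Alternatively, if you do not want to maintain /etc/crictl.yaml, the endpoint can be passed on the command line for a one-off query:

crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps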

That covers migrating from Docker to containerd in Kubernetes. One point to keep in mind: do not migrate nodes in bulk. Once a node has been drained, you can wrap the Docker removal and containerd installation in a script and run it, but migrate one node at a time; draining several nodes at once will leave the cluster short of resources.
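As suggested above, the per-node steps can be collected into a script and run on each worker after it has been drained. The sketch below simply strings together the commands from this article; it assumes the configuration files are copied over from the master at the point indicated, and the mirror and pause-image addresses are the ones used in this example:

#!/bin/bash
# replace docker with containerd on a node that has already been drained
set -e
systemctl disable docker --now
systemctl disable docker.socket --now
yum remove docker-ce docker-ce-cli -y
yum install containerd.io cri-tools -y
crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
# at this point copy /etc/containerd/config.toml, /etc/modules-load.d/containerd.conf,
# /etc/sysctl.d/k8s.conf and /etc/sysconfig/kubelet from the master node (see the scp steps)
modprobe overlay ; modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
systemctl enable containerd ; systemctl restart containerd
systemctl restart kubelet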
