The previous chapters covered the basics of Kubernetes: its principles, its components, and what each component does. In this chapter we will walk through installing Kubernetes. One note up front: my own machines are limited, so I installed etcd on the master node. In a production environment, do not colocate etcd with the master node.
Pre-installation preparation
First, synchronize the time on all three servers:
[root@Serve01 ~]# yum -y install ntpdate
[root@Serve01 ~]# ntpdate -u cn.pool.ntp.org
[root@Serve02 ~]# yum -y install ntpdate
[root@Serve02 ~]# ntpdate -u cn.pool.ntp.org
[root@Serve03 ~]# yum -y install ntpdate
[root@Serve03 ~]# ntpdate -u cn.pool.ntp.org
Install redhat-ca.crt on the node machines:
[root@Serve02 ~]# yum install *rhsm* -y
[root@Serve03 ~]# yum install *rhsm* -y
Install kubernetes-master and etcd on the master node:
[root@Serve01 ~]# yum -y install kubernetes-master etcd
Configure etcd on the master node:
[root@Serve01 ~]# vim /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://10.211.55.16:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.211.55.16:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.211.55.16:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.211.55.16:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=http://10.211.55.16:2380,etcd2=http://10.211.55.17:2380,etcd3=http://10.211.55.18:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
...
Set up the node machines: install kubernetes-node, etcd, flannel, and docker:
[root@Serve02 ~]# yum -y install kubernetes-node etcd flannel docker
[root@Serve03 ~]# yum -y install kubernetes-node etcd flannel docker
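Besides etcd, the node services also read a couple of config files before they will register with the master. The following is a rough sketch for Serve02, assuming the master API server at 10.211.55.16:8080; the file names and keys follow the CentOS kubernetes RPM packaging, so verify them on your own systems and adjust the hostname override per node:

```
# /etc/kubernetes/config (shared settings; illustrative)
KUBE_MASTER="--master=http://10.211.55.16:8080"

# /etc/kubernetes/kubelet (per-node; illustrative, this is Serve02)
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=10.211.55.17"
KUBELET_API_SERVER="--api-servers=http://10.211.55.16:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
```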
Configure etcd on the node machines; both nodes are configured the same way:
[root@Serve02 ~]# vim /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/etcd2.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://10.211.55.17:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.211.55.17:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.211.55.17:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.211.55.17:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=http://10.211.55.16:2380,etcd2=http://10.211.55.17:2380,etcd3=http://10.211.55.18:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
...
[root@Serve03 ~]# vim /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/etcd3.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://10.211.55.18:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.211.55.18:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.211.55.18:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.211.55.18:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=http://10.211.55.16:2380,etcd2=http://10.211.55.17:2380,etcd3=http://10.211.55.18:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
...
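A common mistake at this step is letting the three files drift: ETCD_INITIAL_CLUSTER must be identical on all three machines, and each machine's ETCD_NAME must match its own entry in that list. A small sketch of a sanity check, here run against a sample copied from the master's config (on a real machine, point `conf` at /etc/etcd/etcd.conf instead):

```shell
# Verify that ETCD_NAME appears as an entry in ETCD_INITIAL_CLUSTER.
# The sample file mirrors the master's values from the config above;
# on each real machine, set conf=/etc/etcd/etcd.conf instead.
conf=/tmp/etcd.conf.sample
cat > "$conf" <<'EOF'
ETCD_NAME="etcd1"
ETCD_INITIAL_CLUSTER="etcd1=http://10.211.55.16:2380,etcd2=http://10.211.55.17:2380,etcd3=http://10.211.55.18:2380"
EOF

name=$(grep '^ETCD_NAME=' "$conf" | cut -d'"' -f2)
cluster=$(grep '^ETCD_INITIAL_CLUSTER=' "$conf" | cut -d'"' -f2)
case "$cluster" in
  *"${name}=http"*) echo "OK: $name is listed in ETCD_INITIAL_CLUSTER" ;;
  *) echo "ERROR: $name missing from ETCD_INITIAL_CLUSTER"; exit 1 ;;
esac
```

Running the same check on all three machines (and diffing the ETCD_INITIAL_CLUSTER lines against each other) catches most "cluster won't form" problems before you start the services.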
Start etcd on each of the three machines. Before starting, stop the firewall, otherwise the requests will time out:
[root@Serve01 ~]# systemctl start etcd.service
[root@Serve02 ~]# systemctl start etcd.service
[root@Serve03 ~]# systemctl start etcd.service
Start the kubernetes-node services (kubelet) on the node machines:
[root@Serve02 ~]# systemctl start kubelet
[root@Serve03 ~]# systemctl start kubelet
Check the startup log:
journalctl -xefu kubelet
Start docker on the node machines:
systemctl start docker.service
Enable and start flanneld on the node machines:
systemctl enable flanneld.service
systemctl start flanneld.service
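Before flanneld will come up cleanly, it needs a network range stored in etcd. With the RHEL/CentOS packaging, /etc/sysconfig/flanneld points at the etcd endpoints and defaults to the etcd prefix /atomic.io/network (check FLANNEL_ETCD_PREFIX on your machines); the value under that prefix is a JSON document written on any etcd member with something like `etcdctl set /atomic.io/network/config '<json>'`. A sketch of that document, where the 172.17.0.0/16 range and the vxlan backend are illustrative choices, not values from this setup:

```
{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": { "Type": "vxlan" }
}
```

Each node's flanneld then leases a /24 out of this range, and docker is restarted so its bridge uses the leased subnet.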
Start and enable the kube-master components:
[root@Serve01 ~]# systemctl start kube-apiserver
[root@Serve01 ~]# systemctl start kube-controller-manager
[root@Serve01 ~]# systemctl start kube-scheduler
[root@Serve01 ~]# systemctl enable kube-apiserver
ln -s '/usr/lib/systemd/system/kube-apiserver.service' '/etc/systemd/system/multi-user.target.wants/kube-apiserver.service'
[root@Serve01 ~]# systemctl enable kube-controller-manager
ln -s '/usr/lib/systemd/system/kube-controller-manager.service' '/etc/systemd/system/multi-user.target.wants/kube-controller-manager.service'
[root@Serve01 ~]# systemctl enable kube-scheduler
ln -s '/usr/lib/systemd/system/kube-scheduler.service' '/etc/systemd/system/multi-user.target.wants/kube-scheduler.service'
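For reference, the packaged kube-apiserver reads its flags from /etc/kubernetes/apiserver. A minimal sketch wired to the etcd cluster above; the key names follow the CentOS kubernetes RPM packaging, and the service CIDR and admission-control list are illustrative defaults, so adjust them to your environment:

```
# /etc/kubernetes/apiserver (illustrative)
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.211.55.16:2379,http://10.211.55.17:2379,http://10.211.55.18:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota"
```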
Check the node and etcd cluster status:
[root@Serve01 ~]# kubectl get nodes
NAME           STATUS    AGE
10.211.55.17   Ready     3h
10.211.55.18   Ready     3h
[root@Serve01 ~]# etcdctl member list
624ca8b470886bb8: name=etcd1 peerURLs=http://10.211.55.16:2380 clientURLs=http://10.211.55.16:2379 isLeader=false
979db527c07d3884: name=etcd3 peerURLs=http://10.211.55.18:2380 clientURLs=http://10.211.55.18:2379 isLeader=false
ea7f70db8857f25e: name=etcd2 peerURLs=http://10.211.55.17:2380 clientURLs=http://10.211.55.17:2379 isLeader=true
[root@Serve01 ~]# etcdctl cluster-health
member 624ca8b470886bb8 is healthy: got healthy result from http://10.211.55.16:2379
member 979db527c07d3884 is healthy: got healthy result from http://10.211.55.18:2379
member ea7f70db8857f25e is healthy: got healthy result from http://10.211.55.17:2379
cluster is healthy
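If you want to script this check, the `etcdctl member list` output is easy to parse. A small sketch that counts members and finds the leader from a captured copy of the output above; on a live cluster, replace the sample text with `out=$(etcdctl member list)`:

```shell
# Parse "etcdctl member list" output: count members and report the leader.
# The sample below is copied from the output above; on a real cluster,
# capture it with: out=$(etcdctl member list)
out='624ca8b470886bb8: name=etcd1 peerURLs=http://10.211.55.16:2380 clientURLs=http://10.211.55.16:2379 isLeader=false
979db527c07d3884: name=etcd3 peerURLs=http://10.211.55.18:2380 clientURLs=http://10.211.55.18:2379 isLeader=false
ea7f70db8857f25e: name=etcd2 peerURLs=http://10.211.55.17:2380 clientURLs=http://10.211.55.17:2379 isLeader=true'

members=$(echo "$out" | grep -c 'name=')
leader=$(echo "$out" | grep 'isLeader=true' | sed 's/.*name=\([^ ]*\).*/\1/')
echo "members=$members leader=$leader"
```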
Now manually pull the pod infrastructure image, plus an nginx image for testing, on both node machines:
[root@Serve02 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ...
latest: Pulling from registry.access.redhat.com/rhel7/pod-infrastructure
26e5ed6899db: Pull complete
66dbe984a319: Pull complete
9138e7863e08: Pull complete
Digest: sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931
Status: Downloaded newer image for registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@Serve02 ~]# docker pull nginx
Using default tag: latest
Trying to pull repository docker.io/library/nginx ...
latest: Pulling from docker.io/library/nginx
743f2d6c1f65: Pull complete
6bfc4ec4420a: Pull complete
688a776db95f: Pull complete
Digest: sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
Status: Downloaded newer image for docker.io/nginx:latest
[root@Serve03 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ...
latest: Pulling from registry.access.redhat.com/rhel7/pod-infrastructure
26e5ed6899db: Pull complete
66dbe984a319: Pull complete
9138e7863e08: Pull complete
Digest: sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931
Status: Downloaded newer image for registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@Serve03 ~]# docker pull nginx
Using default tag: latest
Trying to pull repository docker.io/library/nginx ...
latest: Pulling from docker.io/library/nginx
743f2d6c1f65: Pull complete
6bfc4ec4420a: Pull complete
688a776db95f: Pull complete
Digest: sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
Status: Downloaded newer image for docker.io/nginx:latest
Run nginx:
[root@Serve01 ~]# kubectl run my-nginx --image=nginx --replicas=2 --port=80
deployment "my-nginx" created
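That `kubectl run` one-liner is shorthand for creating a Deployment object. Roughly, the equivalent spec looks like the sketch below; the apiVersion shown is the one used by Kubernetes releases of this generation (extensions/v1beta1), so adjust it for your cluster:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```

Saving this as a file and applying it with `kubectl create -f my-nginx.yaml` produces the same deployment, with the advantage that the spec is version-controllable.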
Check the images on the node machines:
[root@Serve02 ~]# docker images
REPOSITORY                                             TAG      IMAGE ID       CREATED         SIZE
docker.io/nginx                                        latest   53f3fd8007f7   13 days ago     109 MB
registry.access.redhat.com/rhel7/pod-infrastructure    latest   99965fb98423   19 months ago   209 MB
[root@Serve03 ~]# docker images
REPOSITORY                                             TAG      IMAGE ID       CREATED         SIZE
docker.io/nginx                                        latest   53f3fd8007f7   13 days ago     109 MB
registry.access.redhat.com/rhel7/pod-infrastructure    latest   99965fb98423   19 months ago   209 MB
The Kubernetes cluster is now installed. In fact, there are already many mature script-based installation methods; the main difference between them is the version they deploy. In production, I would not recommend a version that is too new: older releases are more widely deployed, their known vulnerabilities have already been fixed, and solutions to their common problems are easy to find online, which also helps us pick up more troubleshooting approaches for future work and study. For an Ansible-based installation of Kubernetes, take a look at https://github.com/wolf27w/kubeasz: you configure it according to your cluster plan and install in steps 0 through 7. If you are interested, check it out; I won't go into more detail here.