Deploying a Kubernetes cluster has become quite convenient: GitHub hosts a number of Ansible-based installers, and if you know Ansible you can get productive with them quickly. In this post I walk through installing a cluster with kubeasz; similar tools exist and are worth exploring as well.
Component versions:
kubernetes: v1.15.6
docker: Server Version 18.09.9
etcd: {"etcdserver":"3.3.12","etcdcluster":"3.3.0"}
glusterfs: 6.7
flannel: v0.11.0-amd64
kubernetes-dashboard-amd64: v1.10.1
coredns: 1.6.5
Since this is a test setup I didn't have many VMs; the demo uses just three machines:
192.168.248.33 master01
192.168.248.34 node-01
192.168.248.35 node-02
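It helps to give the three nodes consistent name resolution on every machine. A small sketch that prints an /etc/hosts fragment built from the list above (printed for review rather than written directly, since appending to /etc/hosts needs root):

```shell
# /etc/hosts entries matching the three nodes above; review the output,
# then append it with: echo "$HOSTS_FRAGMENT" >> /etc/hosts
HOSTS_FRAGMENT='192.168.248.33 master01
192.168.248.34 node-01
192.168.248.35 node-02'
echo "$HOSTS_FRAGMENT"
```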
Install the base packages and Ansible first:
yum update -y && yum install python -y
yum install git python-pip -y
pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install ansible==2.6.12 -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
git clone -b 2.0.0 https://github.com/easzlab/kubeasz.git /etc/ansible
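kubeasz drives every machine over SSH from the deploy node, so the deploy node needs passwordless root SSH to each host before any playbook runs. A sketch of the key distribution (the commands are printed rather than executed, since ssh-copy-id is interactive):

```shell
# Generate a key once with ssh-keygen -t rsa, then copy it to each node.
NODES="192.168.248.33 192.168.248.34 192.168.248.35"
for ip in $NODES; do
  echo "ssh-copy-id root@${ip}"   # run these interactively on the deploy node
done
```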
Configure the hosts inventory; here I deploy two master nodes for testing:
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
192.168.248.33 NODE_NAME=etcd1
192.168.248.34 NODE_NAME=etcd2
192.168.248.35 NODE_NAME=etcd3

[kube-master]
192.168.248.33
192.168.248.34

[kube-node]
192.168.248.35

[ex-lb]

[all:vars]
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.60.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.255.0.0/16"

# NodePort Range
NODE_PORT_RANGE="30000-48000"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="test.thc.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/bin"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
Next comes the step-by-step installation. Below I go through each playbook and how I resolved the problems I ran into.
ansible-playbook 01.prepare.yml   # ran cleanly
ansible-playbook 02.etcd.yml
# add these entries to the hosts file; NODE_NAME is required:
192.168.248.33 NODE_NAME=etcd1
192.168.248.34 NODE_NAME=etcd2
192.168.248.35 NODE_NAME=etcd3
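Once 02.etcd.yml finishes, the cluster can be health-checked with etcdctl. A sketch that builds the endpoint list from the three members above; the certificate paths in the comment are assumptions, so check where your kubeasz version actually places them:

```shell
# Build the comma-separated etcd endpoint list from the member IPs above.
ENDPOINTS=""
for ip in 192.168.248.33 192.168.248.34 192.168.248.35; do
  ENDPOINTS="${ENDPOINTS:+${ENDPOINTS},}https://${ip}:2379"
done
echo "$ENDPOINTS"
# Then, on an etcd node (cert/key paths below are assumptions):
#   ETCDCTL_API=3 etcdctl --endpoints="$ENDPOINTS" \
#     --cacert=/etc/kubernetes/ssl/ca.pem \
#     --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
#     endpoint health
```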
ansible-playbook 03.docker.yml
# This step failed on one machine, so I installed Docker there manually:
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce-18.09.9-3.el7
yum -y install docker-ce-cli-18.09.9-3.el7
systemctl start docker.service
systemctl status docker.service
# Add to /etc/docker/daemon.json:
{ "storage-driver": "devicemapper" }
dockerd &   # running dockerd in the foreground reveals the concrete error if startup fails
docker info
ansible-playbook 04.kube-master.yml
# This step required adding an (empty) [ex-lb] group to the hosts file; without it the playbook failed.
ansible-playbook 05.kube-node.yml
# ran cleanly
ansible-playbook 06.network.yml
# This step needs the flannel and pause-amd64 images downloaded in advance and placed under
# /etc/ansible/down/ before running the playbook.
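The pre-download step can be sketched as follows. The flannel image path is the upstream quay.io one; the pause-amd64 repository and tag (3.1) are assumptions, so check the flannel role in your kubeasz checkout for the exact names it expects:

```shell
# Build the pull/save commands for the images 06.network.yml needs and print
# them for review; on the real deploy node the tarballs go to /etc/ansible/down/.
DOWN_DIR="/tmp/down"   # use /etc/ansible/down on the deploy node
mkdir -p "$DOWN_DIR"
for img in quay.io/coreos/flannel:v0.11.0-amd64 mirrorgooglecontainers/pause-amd64:3.1; do
  tarball="${DOWN_DIR}/$(basename "${img%%:*}")_${img##*:}.tar"
  echo "docker pull ${img} && docker save -o ${tarball} ${img}"
done
```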
ansible-playbook 07.cluster-addon.yml
# Install add-ons according to your own needs; for example, I don't use Ingress, so I commented it out.
With that, the cluster with two master nodes is installed:
kubectl get nodes
Access the dashboard:
kubectl cluster-info|grep dashboard
Get the login token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Test DNS:
Run two pods:
kubectl run nginx --image=nginx --expose --port=80
kubectl run busybox --rm -it --image=busybox /bin/sh
Check pod status:
kubectl get pods
Enter the container:
kubectl exec -it busybox-7cbc9ff64b-qnqtx -- /bin/sh
#ping nginx
kubectl get svc   # shows the service IP the name resolves to
Now append the cluster domain defined in the hosts file and resolve the full name. The pattern is <service>.<namespace>.svc.<cluster-domain>: here default is the namespace and svc marks a Service record.
#ping nginx.default.svc.test.thc.local
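The name being pinged follows the standard in-cluster DNS pattern; a small sketch that assembles it from its parts, using this cluster's values:

```shell
# Service DNS names have the form <service>.<namespace>.svc.<cluster-domain>.
SVC="nginx"
NS="default"               # the namespace the service lives in
DOMAIN="test.thc.local"    # CLUSTER_DNS_DOMAIN from the hosts file
FQDN="${SVC}.${NS}.svc.${DOMAIN}"
echo "$FQDN"
```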
Configure haproxy on each node:
Install it: yum install -y haproxy
cd /etc/haproxy/ and look at haproxy.cfg; the file has already been generated, with the following content:
[root@node-02 haproxy]# more haproxy.cfg
global
log /dev/log local1 warning
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
nbproc 1
defaults
log global
timeout connect 5s
timeout client 10m
timeout server 10m
listen kube-master
bind 127.0.0.1:6443
mode tcp
option tcplog
option dontlognull
option dontlog-normal
balance roundrobin
server 192.168.248.33 192.168.248.33:6443 check inter 10s fall 2 rise 2 weight 1
server 192.168.248.34 192.168.248.34:6443 check inter 10s fall 2 rise 2 weight 1
# enable at boot
systemctl enable haproxy
# start haproxy
systemctl start haproxy
# check status
systemctl status haproxy
Check the config files under /etc/kubernetes/:
kubelet.kubeconfig
kube-proxy.kubeconfig
Because every component now talks to a port on 127.0.0.1 and haproxy forwards the traffic to the master nodes, the masters are highly available.
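A quick way to confirm this is to check where the kubeconfigs point. A sketch that prints the grep commands to run on each node (file names from the list above):

```shell
# Each kubeconfig should reference the local haproxy listener, not a master IP.
APISERVER="https://127.0.0.1:6443"
for f in kubelet.kubeconfig kube-proxy.kubeconfig; do
  echo "grep '${APISERVER}' /etc/kubernetes/${f}"   # run on each node
done
```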
DNS resolution for external domains:
Edit coredns.yaml:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes syzx.thc.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . your dns ip
        cache 30
        loop
        reload
        loadbalance
    }
Kernel upgrade, Docker installation, and some Ansible dependency packages.
Upgrade the kernel:
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
grub2-set-default 0
init 6
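After the machine comes back up, it is worth confirming that the long-term kernel is actually running:

```shell
# Print the active kernel release; after the elrepo kernel-lt upgrade this
# should be a 4.4-series (or newer) elrepo build rather than the stock 3.10.
KVER="$(uname -r)"
echo "$KVER"
```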
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-18.09.9-3.el7
yum info installed |grep docker
pip install netaddr
#pip install --upgrade Jinja2
Configure access to Harbor:
# Create a directory named after the image registry domain (it must match the registry configured for ansible):
mkdir /etc/docker/certs.d/<registry-domain>/
Upload the harbor-ca.crt file into it.
Test docker login; once it succeeds, inspect the contents of:
/root/.docker/config.json
Exporting an image:
#export the image
docker save -o flannel_v0.11.0-amd64.tar <registry-domain>/kubernetes/flannel:v0.11.0-amd64
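The counterpart on the machine that receives the tarball is docker load; a sketch with the registry domain left as a placeholder, as above:

```shell
# Pair of commands for moving an image between hosts without a registry pull.
IMG="registry.example.com/kubernetes/flannel:v0.11.0-amd64"   # placeholder domain
TAR="flannel_v0.11.0-amd64.tar"
echo "docker save -o ${TAR} ${IMG}"    # on the source host
echo "docker load -i ${TAR}"           # on the target host, after copying the tar over
```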