A simple Ansible cluster setup using a Python script
| Node | IP address | OS | Resources |
| --- | --- | --- | --- |
| server | 192.168.174.150 | CentOS 7.9 | 2 GB RAM, 2 cores |
| client1 | 192.168.174.151 | CentOS 7.9 | 2 GB RAM, 2 cores |
| client2 | 192.168.174.152 | CentOS 7.9 | 2 GB RAM, 2 cores |
The Ansible inventory file is as follows:
[clients_all]
server
client1
client2
[clients_master]
server
[clients_client]
client1
client2
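Before touching the nodes, it is worth a quick connectivity check (this assumes passwordless SSH from the control node to all three hosts is already in place):
# every host should answer with "pong"
ansible clients_all -m ping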
Configure the local yum repository:
# src is the device, path is the mount point; state=mounted mounts it, while state=unmounted would unmount a device already mounted at the target path
ansible clients_all -m mount -a "src=/dev/cdrom path=/mnt/cdrom fstype=iso9660 opts=defaults state=mounted"
# path is the file to edit, line is the line to add; insertafter=EOF appends at the end of the file, BOF would insert at the beginning
ansible clients_all -m lineinfile -a "path=/etc/fstab line='/dev/cdrom /mnt/cdrom iso9660 defaults 0 0' insertafter=EOF"
ansible clients_all -m shell -a 'echo "" > /etc/yum.repos.d/centos-local.repo'
# path is the file to edit, block is the text to insert (separate lines with \n), create=yes creates the file if it does not exist, marker sets the begin/end marker lines; re-running the command replaces the block
ansible clients_all -m blockinfile -a "path=/etc/yum.repos.d/centos-local.repo block='[centos7.9]\nname=centos7.9\nbaseurl=file:///mnt/cdrom\nenabled=1\ngpgcheck=0' create=yes marker='#{mark} centos7.9'"
ansible clients_all -m shell -a "yum clean all && yum repolist"
Configure the remote Aliyun repository:
ansible clients_all -m yum -a "name=wget"
ansible clients_all -m get_url -a "dest=/etc/yum.repos.d/CentOS-Base.repo url=http://mirrors.aliyun.com/repo/Centos-7.repo"
ansible clients_all -m shell -a "yum clean all && yum repolist"
Configure the EPEL repository:
ansible clients_all -m yum -a "name=epel-release"
ansible clients_all -m shell -a "yum clean all && yum repolist"
ansible clients_all -m yum -a "name=bash-completion,vim,net-tools,tree,psmisc,lrzsz,dos2unix"
Disable SELinux:
ansible clients_all -m selinux -a 'state=disabled'
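The selinux module only edits /etc/selinux/config, so the change takes effect after the reboot performed later in this walkthrough. To also drop enforcement in the running system right away (an extra step, not part of the original sequence):
# switch the current session to permissive mode
ansible clients_all -m shell -a "setenforce 0"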
Disable the iptables and firewalld services. Kubernetes and Docker generate large numbers of iptables rules at runtime, and to keep the system's own rules from getting mixed up with theirs, the system firewall is switched off entirely:
ansible clients_all -m service -a "name=firewalld state=stopped enabled=false"
# the iptables service may not be installed, in which case this task simply fails
ansible clients_all -m service -a "name=iptables state=stopped enabled=false"
Configure time synchronization. On all nodes:
ansible clients_all -m yum -a "name=chrony"
ansible clients_all -m service -a "name=chronyd state=restarted enabled=true"
ansible clients_master -m lineinfile -a "path=/etc/chrony.conf regexp='^#allow 192.168.0.0\/16' line='allow 192.168.174.0/24' backrefs=yes"
ansible clients_master -m lineinfile -a "path=/etc/chrony.conf regexp='^#local stratum 10' line='local stratum 10' backrefs=yes"
ansible clients_client -m lineinfile -a "path=/etc/chrony.conf regexp='^server' state=absent"
# with the default server lines removed, point the clients at the master
ansible clients_client -m lineinfile -a "path=/etc/chrony.conf line='server 192.168.174.150 iburst' insertbefore=BOF"
On all nodes:
ansible clients_all -m service -a "name=chronyd state=restarted enabled=true"
ansible clients_all -m shell -a "timedatectl set-ntp true"
Verify:
[root@server ~]# ansible clients_client -m shell -a "chronyc sources -v"
client2 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* server                        3   6    17     6   +918us[+4722us] +/- 217ms

client1 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* server                        3   6    17     6   +961us[+4856us] +/- 217ms
Disable swap by commenting out its entry in /etc/fstab (Kubernetes requires swap to be off):
# backrefs=yes: if the regexp matches no line, the file is left unchanged
ansible clients_all -m lineinfile -a "path=/etc/fstab regexp='^\/dev\/mapper\/centos-swap' line='#/dev/mapper/centos-swap swap                    swap    defaults        0 0' backrefs=yes"
Edit /etc/sysctl.d/kubernetes.conf to add bridge filtering and IP forwarding:
# same blockinfile parameters as described above
ansible clients_all -m blockinfile -a "path=/etc/sysctl.d/kubernetes.conf block='net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1' create=yes marker='#{mark} kubernetes'"
Load the bridge filter module first (the net.bridge.* keys do not exist until it is loaded):
ansible clients_all -m shell -a "modprobe br_netfilter"
Check that the bridge filter module loaded successfully:
ansible clients_all -m shell -a "lsmod | grep br_netfilter"
Reload the configuration; note that sysctl -p only reads /etc/sysctl.conf, so --system is needed to pick up /etc/sysctl.d/*.conf:
ansible clients_all -m shell -a "sysctl --system"
In Kubernetes, the Service abstraction has two proxy models: one based on iptables and one based on ipvs. Of the two, ipvs performs noticeably better, but using it requires the ipvs kernel modules to be loaded by hand.
Install ipset and ipvsadm:
ansible clients_all -m yum -a "name=ipset,ipvsadm"
Write the modules to be loaded into a script file:
ansible clients_all -m blockinfile -a "path=/etc/sysconfig/modules/ipvs.modules block='#! /bin/bash\nmodprobe -- ip_vs\nmodprobe -- ip_vs_rr\nmodprobe -- ip_vs_wrr\nmodprobe -- ip_vs_sh\nmodprobe -- nf_conntrack_ipv4' create=yes marker='#{mark} ipvs'"
Make the script executable:
ansible clients_all -m file -a "path=/etc/sysconfig/modules/ipvs.modules mode='0755'"
Run the script. It already exists on the remote hosts, so use the shell module (the script module would try to transfer a local file):
ansible clients_all -m shell -a "/bin/bash /etc/sysconfig/modules/ipvs.modules"
Check that the modules loaded successfully:
ansible clients_all -m shell -a "lsmod | grep -e ip_vs -e nf_conntrack_ipv4"
Reboot all nodes:
ansible clients_all -m reboot
Add the Docker CE yum repository:
ansible clients_all -m get_url -a "dest=/etc/yum.repos.d/docker-ce.repo url=http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo"
Then install a pinned Docker release:
ansible clients_all -m shell -a "yum install -y --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7"
Modify the configuration files:
ansible clients_all -m file -a "path=/etc/docker state=directory"
# Docker defaults to the cgroupfs cgroup driver, while Kubernetes recommends systemd instead
mkdir -p /etc/docker
cat <<eof > /etc/docker/daemon.json
{
"storage-driver": "devicemapper",
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://ja9e22yz.mirror.aliyuncs.com"]
}
eof
ansible clients_all -m copy -a "src=/etc/docker/daemon.json dest=/etc/docker/daemon.json"
cat << eof > /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
eof
ansible clients_all -m copy -a "src=/etc/sysconfig/docker dest=/etc/sysconfig/docker"
Restart Docker and enable it at boot:
ansible clients_all -m service -a "name=docker state=restarted enabled=true"
Configure the Kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
ansible clients_all -m copy -a "src=/etc/yum.repos.d/kubernetes.repo dest=/etc/yum.repos.d/kubernetes.repo"
Install kubeadm, kubelet, and kubectl.
| Component | Description |
| --- | --- |
| kubeadm | Tool for bootstrapping a Kubernetes cluster |
| kubelet | Maintains the container lifecycle: drives Docker to create, update, and destroy containers |
| kubectl | Command-line tool for talking to the cluster |
ansible clients_all -m shell -a "yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y"
Edit /etc/sysconfig/kubelet to configure kubelet's cgroup driver and the kube-proxy mode:
cat <<eof > /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
eof
ansible clients_all -m copy -a "src=/etc/sysconfig/kubelet dest=/etc/sysconfig/kubelet"
Enable kubelet at boot (it will crash-loop until kubeadm init runs, which is expected):
ansible clients_all -m service -a "name=kubelet state=started enabled=true"
Before the Kubernetes cluster can be created, the images it needs must be available. The required list can be shown with:
ansible clients_all -m shell -a "kubeadm config images list"
Download the images. They live in the Kubernetes registry, which is often unreachable for network reasons, so the playbook below pulls them from an Aliyun mirror and re-tags them instead:
cat << eof > kubernetes_images_install.yaml
---
- hosts: clients_all
  gather_facts: no
  vars:
    images:
      - kube-apiserver:v1.17.4
      - kube-controller-manager:v1.17.4
      - kube-scheduler:v1.17.4
      - kube-proxy:v1.17.4
      - pause:3.1
      - etcd:3.4.3-0
      - coredns:1.6.5
  tasks:
    - name: Pull images from the Aliyun mirror
      shell: docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}
      with_items: "{{ images }}"
    - name: Re-tag images with the k8s.gcr.io names kubeadm expects
      shell: docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }} k8s.gcr.io/{{ item }}
      with_items: "{{ images }}"
    - name: Remove the mirror-tagged images
      shell: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}
      with_items: "{{ images }}"
eof
ansible-playbook kubernetes_images_install.yaml
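When the playbook finishes, the re-tagged images should be present on every node (a quick check):
ansible clients_all -m shell -a "docker images | grep k8s.gcr.io"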
On the master node:
Initialize the cluster:
ansible clients_master -m shell -a "kubeadm init \
--kubernetes-version=v1.17.4 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.174.150" | grep 'kubeadm join'   # the cluster entry point is the master
# on success, kubeadm prints the command for joining nodes to the cluster, for example:
kubeadm join 192.168.174.150:6443 --token 2pmmsi.xv4534qap5pf3bjv \
--discovery-token-ca-cert-hash sha256:69715f25a2e7795f4642afeb8f88c800e601cb1624b819180e820702885b5eef
Create the files kubectl needs:
ansible clients_master -m file -a "path=$HOME/.kube state=directory"
ansible clients_master -m shell -a "cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
ansible clients_master -m file -a "path=$HOME/.kube/config state=touch owner=$(id -u) group=$(id -g)"
On the node side, join the nodes to the cluster (the command differs per cluster; take it from the kubeadm init output on the master):
ansible clients_client -m shell -a "kubeadm join 192.168.174.150:6443 --token 2pmmsi.xv4534qap5pf3bjv \
--discovery-token-ca-cert-hash sha256:69715f25a2e7795f4642afeb8f88c800e601cb1624b819180e820702885b5eef"
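If the token from kubeadm init has expired (tokens last 24 hours by default) or the output was lost, the master can print a fresh join command:
ansible clients_master -m shell -a "kubeadm token create --print-join-command"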
On the master, check the cluster status. The nodes are NotReady because no network plugin has been configured yet:
[root@server ~]# ansible clients_master -m shell -a "kubectl get nodes"
server | CHANGED | rc=0 >>
NAME      STATUS     ROLES    AGE   VERSION
client1   NotReady   <none>   14m   v1.17.4
client2   NotReady   <none>   14m   v1.17.4
server    NotReady   master   23m   v1.17.4
Kubernetes supports a number of network plugins, such as flannel, calico, and canal; any one of them will do. Here we use flannel.
The following only needs to run on the master node: the plugin is deployed as a DaemonSet controller, so it runs on every node.
Fetch the flannel manifest:
ansible clients_master -m get_url -a "dest=./ url=https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
Deploy the flannel network:
ansible clients_master -m shell -a "kubectl apply -f kube-flannel.yml"
After a minute or so, check the node status again; once they report Ready, the network is up:
[root@server ~]# ansible clients_master -m shell -a "kubectl get nodes"
server | CHANGED | rc=0 >>
NAME      STATUS   ROLES    AGE   VERSION
client1   Ready    <none>   20m   v1.17.4
client2   Ready    <none>   20m   v1.17.4
server    Ready    master   29m   v1.17.4
Check that all pods have reached Running:
[root@server ~]# ansible clients_master -m shell -a "kubectl get pod --all-namespaces"
server | CHANGED | rc=0 >>
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-h7d2z            1/1     Running   0          3m3s
kube-flannel   kube-flannel-ds-hht48            1/1     Running   0          3m3s
kube-flannel   kube-flannel-ds-lk7qd            1/1     Running   0          3m3s
kube-system    coredns-6955765f44-4vg95         1/1     Running   0          29m
kube-system    coredns-6955765f44-kkndx         1/1     Running   0          29m
kube-system    etcd-server                      1/1     Running   0          29m
kube-system    kube-apiserver-server            1/1     Running   0          29m
kube-system    kube-controller-manager-server   1/1     Running   0          29m
kube-system    kube-proxy-7x47c                 1/1     Running   0          29m
kube-system    kube-proxy-pxx4l                 1/1     Running   0          21m
kube-system    kube-proxy-v54j6                 1/1     Running   0          21m
kube-system    kube-scheduler-server            1/1     Running   0          29m
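As a final sanity check (an optional addition), confirm the control plane answers:
ansible clients_master -m shell -a "kubectl cluster-info"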