This deployment does not use the traditional approach of manually creating PVs and PVCs for persistent storage. Instead, it uses a StorageClass that invokes a provisioner to automatically create and bind a PV for each PVC a Pod requests, achieving persistent storage. You can still create PVs and PVCs manually if your needs require it.
NFS server IP: 172.30.93.2
NFS client IP: 172.30.93.3
Host to operate on: 172.30.93.2
# 1. Install NFS and rpcbind
yum install -y nfs-utils rpcbind
# Verify the installation
rpm -qa | grep nfs
rpm -qa | grep rpcbind
# 2. Create the shared directory and set permissions
mkdir -p /nfs/k8s_data
chmod 777 /nfs/k8s_data/
# 3. Configure the NFS export
vim /etc/exports
/nfs/k8s_data 172.30.93.0/24(rw,no_root_squash,no_all_squash,sync)
# 4. Start the services (rpcbind should be running before nfs)
systemctl start rpcbind
systemctl start nfs
# Enable them at boot
systemctl enable rpcbind
systemctl enable nfs
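You can confirm both services came up before continuing:

```shell
# Both commands should print "active"
systemctl is-active rpcbind
systemctl is-active nfs
```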
# 5. Apply the export configuration
exportfs -r
# 6. Check the exported shares
showmount -e localhost
# Output like the following indicates success:
#   Export list for localhost:
#   /nfs/k8s_data 172.30.93.0/24
Hosts to operate on: every host except the NFS server
yum -y install nfs-utils
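Before handing the mount over to Kubernetes, it is worth verifying from a client that the export is reachable (the mount point /mnt/nfs-test is just an example):

```shell
# List the exports visible from this client
showmount -e 172.30.93.2

# Optional: mount manually, confirm read/write access, then clean up
mkdir -p /mnt/nfs-test
mount -t nfs 172.30.93.2:/nfs/k8s_data /mnt/nfs-test
touch /mnt/nfs-test/.write-test && rm /mnt/nfs-test/.write-test
umount /mnt/nfs-test
```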
When many volumes need to be created or managed, Kubernetes solves the problem with dynamic PV provisioning, which creates PVs automatically. An administrator deploys a PV provisioner and defines a corresponding StorageClass; developers then select the storage type they need when creating a PVC. The PVC passes the StorageClass to the PV provisioner, and the provisioner creates the PV automatically. That is why a StorageClass is used here as the persistence solution.
pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
  labels:
    pv: nfs
spec:
  capacity:                               # capacity of 2Gi
    storage: 2Gi
  accessModes:                            # allow many nodes to mount read-write; multiple modes are possible
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # reclaim policy
  nfs:                                    # NFS server details
    server: 172.30.93.2
    path: /nfs/k8s_data
    readOnly: false
Change `storage` to the size you need, and replace the server IP with your own host's IP.
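If you do want this statically defined PV in addition to dynamic provisioning, apply it and check its status:

```shell
kubectl apply -f pv-nfs.yaml
# STATUS stays "Available" until a matching PVC binds it
kubectl get pv pv-nfs
```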
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# Create the resources
kubectl apply -f rbac.yaml
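To confirm the RBAC objects were created (names taken from rbac.yaml above):

```shell
kubectl get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner
kubectl get role,rolebinding leader-locking-nfs-client-provisioner
```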
The provisioner (also called a supplier or storage allocator) is defined in nfs-client-provisioner.yaml. Change the IPs in the YAML to match your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner  # the ServiceAccount created in rbac.yaml
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME  # must match the provisioner field of the StorageClass
              value: nfs-client-provisioner
            - name: NFS_SERVER        # IP address of the NFS server
              value: 172.30.93.2
            - name: NFS_PATH          # shared directory on the NFS server
              value: /nfs/k8s_data
      volumes:
        - name: nfs-client-root
          nfs:                        # NFS server IP and shared directory
            server: 172.30.93.2
            path: /nfs/k8s_data
# Create the resource
kubectl apply -f nfs-client-provisioner.yaml
nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storageclass
provisioner: nfs-client-provisioner  # must match the PROVISIONER_NAME env value in the provisioner Deployment
reclaimPolicy: Retain                # reclaim policy; the default is Delete
parameters:
  archiveOnDelete: "false"
# Create the resource:
kubectl apply -f nfs-storageclass.yaml
Once everything above is created, the provisioner Pod should show as Running.
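The Running state can be checked with kubectl, using the label from the Deployment above:

```shell
kubectl get pods -l app=nfs-client-provisioner
# The STATUS column should show "Running"
```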
5. Test whether the StorageClass is usable
Create a PVC that references the StorageClass; if the PVC binds automatically, the setup works.
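A minimal test is to write a PVC that requests storage from the StorageClass defined above (the file name test-pvc.yaml and the claim name are just examples):

```shell
# Write a test PVC that requests storage from nfs-storageclass
cat > test-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: nfs-storageclass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

# Apply it and check the result; STATUS should become "Bound",
# and a PV named pvc-<uid> should appear automatically:
#   kubectl apply -f test-pvc.yaml
#   kubectl get pvc test-pvc
#   kubectl get pv
```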