This article walks through how to implement dynamic provisioning of persistent storage in Kubernetes with a StorageClass, using an NFS-backed provisioner as the example.
One of the benefits of a storage class is that it supports dynamic provisioning of PVs; it can effectively be treated as a template for creating PVs. When users need persistent storage, they create a PVC that binds to a matching PV. When such requests are frequent, or when the PVs created manually by an administrator cannot satisfy every PVC, letting the system dynamically create a PV that fits the PVC's requirements brings great flexibility to storage management. Note, however, that a PVC and a PV can only bind if they belong to the same StorageClass; a PVC that specifies no StorageClass can only bind to a PV that likewise has none.
The name of a StorageClass object matters: it is the identifier that users reference. Besides the name, creating a StorageClass requires defining three key fields: provisioner, parameters, and reclaimPolicy.
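To make these three fields concrete, a minimal StorageClass could look like the sketch below. The class name, provisioner string, and parameter shown here are placeholders for illustration only; the real manifest used in this walkthrough appears further down.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                # the name PVCs reference via storageClassName
provisioner: example.com/nfs      # which (external) provisioner creates PVs for this class
parameters:                       # free-form options passed through to the provisioner
  exampleOption: "value"
reclaimPolicy: Delete             # what happens to a dynamically created PV after its PVC is deleted (Delete or Retain)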
Kubernetes therefore provides a mechanism for dynamic allocation that can create PVs automatically. The mechanism relies on the StorageClass API: for example, if a storage node contributes 1 TiB to Kubernetes and a user requests a 5 Gi PVC, a 5 Gi PV is automatically carved out of that 1 TiB and bound to the claim.
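Continuing the illustration above, the claim that triggers such a dynamic PV might look like this (again a sketch with placeholder names; the real PVC used later in this article requests 0.5Gi from the NFS class):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: example-sc    # ties the claim to the class sketched above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                # the provisioner carves a 5Gi PV out of the backing storage and binds it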
Enabling dynamic PV provisioning requires creating a storage class in advance. Different provisioners are set up in different ways, and not every volume plugin has built-in support for dynamic PV provisioning in Kubernetes.
Because Kubernetes has no built-in NFS provisioner, we use the external provisioner nfs-subdir-external-provisioner, an automatic provisioner that uses an NFS server to back dynamic provisioning.
An nfs-subdir-external-provisioner instance watches for PersistentVolumeClaims that request its StorageClass and automatically creates NFS-backed PersistentVolumes for them.
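For reference, the project also publishes a Helm chart, so the manifests below could in principle be replaced by a chart install. The chart coordinates shown here are taken from the project's README and should be verified against the version you use:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=10.0.0.15 \
    --set nfs.path=/data

This article instead deploys the provisioner with plain manifests, so that each piece (RBAC, Deployment, StorageClass) is visible.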
The first step is to decide which directory on the NFS server will be handed over to Kubernetes, and to export (share) that directory.
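On the NFS server this amounts to exporting the directory to the cluster subnet. A rough sketch of what that could look like follows; the export options are assumptions (the article only shows the resulting export list), so adjust them to your environment:

# /etc/exports on 10.0.0.15 (options are assumed, not taken from the article)
/data 10.0.0.0/24(rw,sync,no_root_squash)

# re-read /etc/exports without restarting the NFS service
exportfs -r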
[root@kn-server-node02-15 ~]# ll /data/
总用量 0
[root@kn-server-node02-15 ~]# showmount -e 10.0.0.15
Export list for 10.0.0.15:
/data 10.0.0.0/24
First, create the RBAC permissions for the provisioner.
[root@kn-server-master01-13 nfs-provisioner]# cat nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
Next, deploy the NFS provisioner itself.

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-provisioner-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # this image cannot be pulled from within China, so I downloaded it and pushed it
          # to my Docker Hub; it can be replaced with lihuahaitang/nfs-subdir-external-provisioner:v4.0.2
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # name of the NFS provisioner; the StorageClass created later must use the same name
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              # address of the NFS server
              value: 10.0.0.15
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.15
            path: /data

[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nfs-provisioner-deploy.yaml
deployment.apps/nfs-client-provisioner created

The Pod is running normally:

[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-57d6d9d5f6-dcxgq   1/1     Running   0          2m25s

Use describe to view the Pod in detail (note that the image comes from k8s.gcr.io):

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pods nfs-client-provisioner-57d6d9d5f6-dcxgq
Name:         nfs-client-provisioner-57d6d9d5f6-dcxgq
Namespace:    default
Priority:     0
Node:         kn-server-node02-15/10.0.0.15
Start Time:   Mon, 28 Nov 2022 11:19:33 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=57d6d9d5f6
Annotations:  <none>
Status:       Running
IP:           192.168.2.82
IPs:
  IP:           192.168.2.82
Controlled By:  ReplicaSet/nfs-client-provisioner-57d6d9d5f6
Containers:
  nfs-client-provisioner:
    Container ID:   docker://b5ea240a8693185be681714747f8e0a9f347492a24920dd68e629effb3a7400f
    Image:          k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       docker-pullable://k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 28 Nov 2022 11:20:12 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/nfs-subdir-external-provisioner
      NFS_SERVER:        10.0.0.15
      NFS_PATH:          /data
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q2z8w (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.15
    Path:      /data
    ReadOnly:  false
  kube-api-access-q2z8w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m11s  default-scheduler  Successfully assigned default/nfs-client-provisioner-57d6d9d5f6-dcxgq to kn-server-node02-15
  Normal  Pulling    3m11s  kubelet            Pulling image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"
  Normal  Pulled     2m32s  kubelet            Successfully pulled image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" in 38.965869132s
  Normal  Created    2m32s  kubelet            Created container nfs-client-provisioner
  Normal  Started    2m32s  kubelet            Started container nfs-client-provisioner
Now create the StorageClass that points at the NFS dynamic provisioner.
[root@kn-server-master01-13 nfs-provisioner]# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass                # the object type is StorageClass
metadata:
  name: nfs-provisioner-storage   # the StorageClass name that PVCs must reference explicitly
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # provisioner name; must match the PROVISIONER_NAME defined above
parameters:
  archiveOnDelete: "false"        # "false": the directory contents are deleted along with the PVC; "true": the data is kept
  pathPattern: "${.PVC.namespace}/${.PVC.name}"             # template for the directory path; by default a random name is used

[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/nfs-provisioner-storage created

"storageclass" can be abbreviated to "sc":

[root@kn-server-master01-13 nfs-provisioner]# kubectl get sc
NAME                      PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-provisioner-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  3s

Use describe to see the full configuration:

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe sc
Name:            nfs-provisioner-storage
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"nfs-provisioner-storage"},"parameters":{"archiveOnDelete":"false","pathPattern":"${.PVC.namespace}/${.PVC.name}"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           k8s-sigs.io/nfs-subdir-external-provisioner
Parameters:            archiveOnDelete=false,pathPattern=${.PVC.namespace}/${.PVC.name}
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
Create a test PVC against the new class:

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-pvc-test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: "nfs-provisioner-storage"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 0.5Gi

The PV gets a randomly generated name, while the storage path on the NFS server follows the pathPattern defined above:

[root@kn-server-node02-15 data]# ls
default
[root@kn-server-node02-15 data]# ll default/
总用量 0
drwxrwxrwx 2 root root 6 11月 28 13:56 nfs-pvc-test

[root@kn-server-master01-13 pv]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS              REASON   AGE
pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f   512Mi      RWX            Delete           Bound    default/nfs-pvc-test   nfs-provisioner-storage            5m19s

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pv pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Name:            pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    nfs-provisioner-storage
Status:          Bound
Claim:           default/nfs-pvc-test
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        512Mi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.15
    Path:      /data/default/nfs-pvc-test
    ReadOnly:  false
Events:          <none>

describe also shows the full detail of the PVC, including the requested capacity (512Mi) and the volume access mode (RWX):

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pvc
Name:          nfs-pvc-test
Namespace:     default
StorageClass:  nfs-provisioner-storage
Status:        Bound
Volume:        pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      512Mi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age   From                                                                                                                      Message
  ----    ------                 ----  ----                                                                                                                      -------
  Normal  ExternalProvisioning   13m   persistentvolume-controller                                                                                               waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
  Normal  Provisioning           13m   k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-57d6d9d5f6-dcxgq_259532a3-4dba-4183-be6d-8e8b320fc778  External provisioner is provisioning volume for claim "default/nfs-pvc-test"
  Normal  ProvisioningSucceeded  13m   k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-57d6d9d5f6-dcxgq_259532a3-4dba-4183-be6d-8e8b320fc778  Successfully provisioned volume pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Finally, mount the PVC into a Pod and verify it works:

[root@kn-server-master01-13 nfs-provisioner]# cat nginx-pvc-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sc
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nginx-page
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nginx-page
      persistentVolumeClaim:
        claimName: nfs-pvc-test

[root@kn-server-master01-13 nfs-provisioner]# kubectl apply -f nginx-pvc-test.yaml
pod/nginx-sc created

[root@kn-server-master01-13 nfs-provisioner]# kubectl describe pvc
Name:          nfs-pvc-test
Namespace:     default
StorageClass:  nfs-provisioner-storage
Status:        Bound
Volume:        pvc-8ed67f7d-d829-4d87-8c66-d8a85f50772f
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      512Mi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       nginx-sc

"Used By" now shows that the Pod nginx-sc is using this PVC, matching the Pod name above.

[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods nginx-sc
NAME       READY   STATUS    RESTARTS   AGE
nginx-sc   1/1     Running   0          2m43s

Write some test data on the NFS server:

[root@kn-server-node02-15 data]# echo "haitang" > /data/default/nfs-pvc-test/index.html

Access test:

[root@kn-server-master01-13 nfs-provisioner]# curl 192.168.2.83
haitang