
How to configure storage when deploying Kubernetes on CentOS

小樊
2025-08-10 21:03:04
Column: Cloud Computing

When deploying Kubernetes on CentOS, storage can be configured with the following common options. The steps and example configurations for each are given below.

I. NFS Storage Configuration

1. Install the NFS server

# Install the NFS packages
sudo yum install -y nfs-utils

# Create the shared directory and set ownership
# (CentOS uses nobody:nobody; "nogroup" is a Debian/Ubuntu convention)
sudo mkdir -p /mnt/nfs
sudo chown -R nobody:nobody /mnt/nfs

# Edit /etc/exports to define the share, then re-export
echo "/mnt/nfs *(rw,sync,no_subtree_check)" | sudo tee /etc/exports
sudo exportfs -ra

# Start the NFS service and open the firewall
sudo systemctl enable --now nfs-server
sudo firewall-cmd --permanent --add-service=nfs --add-service=mountd --add-service=rpc-bind
sudo firewall-cmd --reload
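Exporting to `*` lets any host that can reach the server mount the share. If the cluster's subnets are known, the export can be narrowed; a sketch assuming a 10.244.0.0/16 pod CIDR (the flannel default — substitute your own network):

```
# /etc/exports – limit the share to the cluster subnet instead of "*"
/mnt/nfs 10.244.0.0/16(rw,sync,no_subtree_check)
```

If containers need to write to the share as root, adding `no_root_squash` to the option list may also be required.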

2. Create a PV and PVC in Kubernetes

# nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/nfs
    server: <NFS server IP>

# nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

kubectl apply -f nfs-pv.yaml -f nfs-pvc.yaml
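With no StorageClass involved, the control plane binds the claim to any available PV with a compatible size and access modes. To pin the claim to the nfs-pv volume explicitly, the standard `volumeName` field of the PVC spec can be used (a sketch):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  volumeName: nfs-pv        # bind to this specific PV only
  storageClassName: ""      # opt out of any default StorageClass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```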

3. Use the PVC in a Pod

# nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: nfs-volume
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: nfs-pvc

kubectl apply -f nginx-pod.yaml
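Because the claim is ReadWriteMany, the same NFS-backed volume can also be shared by every replica of a Deployment. A sketch reusing the nfs-pvc claim from above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nfs-volume   # all three replicas serve the same NFS share
      volumes:
      - name: nfs-volume
        persistentVolumeClaim:
          claimName: nfs-pvc
```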

II. Ceph Storage Configuration (requires an existing Ceph cluster)

1. Deploy a Ceph cluster with Rook (simplified)

helm repo add rook-release https://charts.rook.io/release
helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph --create-namespace

Note that the Helm chart only installs the Rook operator; a CephCluster resource and a block pool (here `replicapool`) must also be created before the StorageClass below can provision volumes.

# ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4

kubectl apply -f ceph-storageclass.yaml
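As a sketch of the remaining pieces, based on Rook's standard examples (the pool spec and claim name here are assumptions), the RBD pool referenced by the StorageClass and a claim that provisions from it dynamically might look like:

```yaml
# Rook CRD creating the RBD pool the StorageClass refers to
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3        # keep 3 copies of each object
---
# PVC provisioned dynamically – no manually created PV needed
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-block-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce   # RBD block volumes are single-writer
  resources:
    requests:
      storage: 5Gi
```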

III. HostPath Storage (single-node testing only)

1. Create a local directory and configure the PV

# On the node that will hold the data
sudo mkdir -p /data/k8s-storage

# hostpath-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8s-storage

# hostpath-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

kubectl apply -f hostpath-pv.yaml -f hostpath-pvc.yaml
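A hostPath volume follows the pod to whichever node it is scheduled on, so on multi-node clusters the `local` volume type is usually preferred: it carries the required node affinity, so pods are scheduled back to the node that actually holds the data. A sketch (the node name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/k8s-storage
  nodeAffinity:            # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-name>    # placeholder: the node holding /data/k8s-storage
```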

IV. Key Notes

  1. Choosing a storage type

    • NFS: suits small to mid-sized clusters; supports sharing across nodes; requires manually setting up an NFS server.
    • Ceph: suits large clusters; provides block and object storage; requires deploying a Ceph cluster first.
    • HostPath: development and testing only; data lives on one node's local disk and has no high availability.
  2. Dynamic storage (StorageClass)

    • A StorageClass provisions storage on demand, removing the need to create PVs by hand.
    • Both NFS and Ceph support dynamic provisioning through a StorageClass.
  3. Verifying storage

    • Check PV and PVC status: kubectl get pv,pvc
    • Exec into the Pod to confirm data persists: kubectl exec -it <pod-name> -- ls <mount path>
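The persistence check can also be scripted with a throwaway pod that writes through the claim (a sketch reusing the nfs-pvc claim; the file name is arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-write-test
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    # write a marker file onto the mounted volume, then exit
    command: ["sh", "-c", "date > /data/persist-test.txt"]
    volumeMounts:
    - mountPath: /data
      name: test-vol
  volumes:
  - name: test-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
```

After this pod completes and is deleted, the marker file should still be visible from any other pod that mounts the same claim.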
