Deploying Kubernetes Storage on CentOS: Common Options and Practices
When configuring storage for Kubernetes (K8s) on CentOS, the right choice depends on the application scenario: data-sharing requirements, performance needs, and scalability. Below are detailed deployment steps and trade-off notes for three mainstream options: NFS, GlusterFS, and Ceph.
NFS suits scenarios where multiple Pods must share data and performance requirements are modest (static websites, shared configuration files). Its core advantages are simple setup and low cost.
Set up the NFS server first:

```bash
# Install the NFS server packages
sudo yum install -y nfs-utils rpcbind

# Create the export directory (CentOS uses nobody:nobody,
# not Debian's nobody:nogroup)
sudo mkdir -p /mnt/nfs && sudo chown -R nobody:nobody /mnt/nfs

# Export the directory with read/write access for all clients (/etc/exports)
echo '/mnt/nfs *(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports

# Start the services, enable them at boot, and publish the export
sudo systemctl start nfs-server rpcbind && sudo systemctl enable nfs-server rpcbind
sudo exportfs -a

# Open the firewall for NFS
sudo firewall-cmd --permanent --add-service=nfs && sudo firewall-cmd --reload
```
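Before wiring the export into Kubernetes, it is worth confirming it is visible. A minimal check (the /mnt/test mount point is illustrative):

```bash
# List the exports published by the server; /mnt/nfs should appear
showmount -e localhost

# Optionally verify end-to-end access from a client node
sudo mkdir -p /mnt/test
sudo mount -t nfs <NFS_SERVER_IP>:/mnt/nfs /mnt/test && sudo umount /mnt/test
```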
Then register the export as a PersistentVolume (PV):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany            # allows multiple Pods to read and write simultaneously
  nfs:
    path: /mnt/nfs
    server: <NFS_SERVER_IP>    # replace with the NFS server's IP
  persistentVolumeReclaimPolicy: Retain  # keep PV data after the PVC is deleted
```
Create a PersistentVolumeClaim (PVC) that binds to the PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
Finally, mount the claim in a Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html  # mount path inside the container
          name: nfs-volume
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-pvc                  # name of the PVC defined above
```
After applying these manifests (kubectl apply -f <filename>.yaml), the Pod can read and write data through the shared NFS storage.
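To see the ReadWriteMany sharing in action, write a file through the Pod and confirm it appears in the export on the server; a minimal sketch, assuming nfs-pod is Running:

```bash
# Write a test page from inside the Pod
kubectl exec nfs-pod -- sh -c 'echo "hello from nfs-pod" > /usr/share/nginx/html/index.html'

# On the NFS server, the same file should now be present in the export
cat /mnt/nfs/index.html
```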
GlusterFS suits scenarios that demand high availability and horizontal scalability (large-scale data storage, distributed applications); it supports online expansion and rebalances data automatically.
Install and start GlusterFS on every storage node:

```bash
# Install the GlusterFS repository and packages
sudo yum install -y centos-release-gluster
sudo yum install -y glusterfs-server glusterfs-fuse

# Start the daemon and enable it at boot
sudo systemctl start glusterd && sudo systemctl enable glusterd

# From one node, add the others to the trusted pool
# (repeat for each node until all have joined)
gluster peer probe <NODE_IP>

# Create a 3-way replicated volume across three nodes and start it
gluster volume create gv0 replica 3 \
  <NODE1_IP>:/data/brick1 <NODE2_IP>:/data/brick2 <NODE3_IP>:/data/brick3 force
gluster volume start gv0
```
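Before pointing Kubernetes at the cluster, confirm the peers and the volume are healthy:

```bash
# All nodes should show State: Peer in Cluster (Connected)
gluster peer status

# gv0 should report Status: Started with all three bricks listed
gluster volume info gv0
```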
For dynamic provisioning, the in-tree GlusterFS provisioner talks to a Heketi REST service that manages the cluster. Define a StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfsp
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: http://<GLUSTERFS_SERVER_IP>:8080  # Heketi REST service endpoint
  clusterid: <CLUSTER_ID>                     # Heketi cluster ID (see `heketi-cli cluster list`)
  restauthenabled: "true"
  restuser: admin
  restuserkey: password                       # REST API password
volumeBindingMode: WaitForFirstConsumer       # delay binding until a Pod is scheduled
```
Then request storage through a PVC that references the StorageClass:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfsp  # references the StorageClass above
  resources:
    requests:
      storage: 10Gi
```
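The claim is consumed the same way as in the NFS example. A minimal sketch of a consuming Pod (the name gluster-pod and the nginx image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /data        # mount path inside the container
          name: gluster-volume
  volumes:
    - name: gluster-volume
      persistentVolumeClaim:
        claimName: gluster-pvc    # the PVC defined above
```

Because the StorageClass uses WaitForFirstConsumer, the PVC stays Pending until this Pod is scheduled, at which point the volume is provisioned.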
Ceph suits production environments with the most demanding performance and reliability requirements (databases, AI training). It offers three storage types: block storage (RBD), object storage (RADOS Gateway), and a POSIX file system (CephFS).
Deploy a Ceph cluster (the steps below use the classic ceph-deploy tool):

```bash
# Install Ceph packages on the cluster nodes
sudo yum install -y ceph ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mon ceph-osd

# Bootstrap a new cluster and the initial monitor(s)
ceph-deploy new <MONITOR_NODE_IP>
ceph-deploy mon create-initial

# Create an OSD on each node's data disk (e.g. /dev/sdb)
ceph-deploy osd create --data /dev/sdb <NODE_IP>

# Verify cluster health (should report HEALTH_OK)
ceph -s

# Export the admin keyring
ceph auth get-or-create client.admin | sudo tee /etc/ceph/ceph.client.admin.keyring
```
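The StorageClass below draws images from a pool named rbd, which recent Ceph releases no longer create by default. If it is missing, create and initialize it first (the placement-group count of 128 is an illustrative value; size it to your cluster):

```bash
# Create the pool with 128 placement groups and prepare it for RBD use
ceph osd pool create rbd 128
rbd pool init rbd
```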
Store the admin key in a Kubernetes Secret. The in-tree RBD plugin expects a secret of type kubernetes.io/rbd containing the raw key (not the whole keyring file):

```bash
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key="$(sudo ceph auth get-key client.admin)" -n default
```
Define a StorageClass for RBD-backed dynamic provisioning:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: <MONITOR_IP1>,<MONITOR_IP2>,<MONITOR_IP3>  # Ceph monitor addresses
  adminId: admin
  adminSecretNamespace: default
  adminSecretName: ceph-secret
  pool: rbd                  # Ceph pool to carve images from
  userId: admin
  userSecretName: ceph-secret
  fsType: ext4               # filesystem created on each image
  imageFeatures: layering    # enables snapshots/clones
reclaimPolicy: Delete        # delete the PV (and its image) when the PVC is deleted
```
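A PVC against this class provisions a fresh RBD image on demand. A minimal sketch (the name ceph-rbd-pvc is illustrative; RBD is block storage, so ReadWriteOnce is the appropriate access mode):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce            # an RBD image attaches to one node at a time
  storageClassName: ceph-rbd   # the StorageClass defined above
  resources:
    requests:
      storage: 10Gi
```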
Note that every Kubernetes node must have the RBD client installed (the rbd tool, shipped in ceph-common) so the kubelet can map images. All three solutions require network connectivity between the CentOS nodes and the K8s cluster; adjust storage capacity, access modes, and other parameters to your actual needs.