This article is part of a Kubernetes deployment series:
Kubernetes Deployment (1): Architecture and Feature Overview
Kubernetes Deployment (2): System Environment Initialization
Kubernetes Deployment (3): Creating CA Certificates
Kubernetes Deployment (4): Deploying the etcd Cluster
Kubernetes Deployment (5): Deploying HAProxy and Keepalived
Kubernetes Deployment (6): Deploying the Master Nodes
Kubernetes Deployment (7): Deploying the Worker Nodes
Kubernetes Deployment (8): Deploying the Flannel Network
Kubernetes Deployment (9): Deploying CoreDNS, Dashboard, and Ingress
Kubernetes Deployment (10): Storage with GlusterFS and Heketi
Kubernetes Deployment (11): Management with Helm and Rancher
Kubernetes Deployment (12): Deploying the Harbor Enterprise Registry with Helm
This guide covers integrating, deploying, and managing containerized GlusterFS storage nodes within a Kubernetes cluster, enabling Kubernetes administrators to provide their users with reliable shared storage.
It includes a setup walkthrough with an example server pod that uses a dynamically provisioned GlusterFS volume for its storage. Readers who want to test or learn more about this topic can follow the Quick Start instructions in the main gluster-kubernetes README.
This guide aims to demonstrate a minimal example of Heketi managing Gluster in a Kubernetes environment.
# Use `file -s` to inspect the disk: if it reports "data", the disk is a raw block device. If it is not of type "data", you can first use pvcreate and pvremove to wipe it.
[root@node-04 ~]# file -s /dev/sdc
/dev/sdc: x86 boot sector, code offset 0xb8
[root@node-04 ~]# pvcreate /dev/sdc
WARNING: dos signature detected on /dev/sdc at offset 510. Wipe it? [y/n]: y
Wiping dos signature on /dev/sdc.
Physical volume "/dev/sdc" successfully created.
[root@node-04 ~]# pvremove /dev/sdc
Labels on physical volume "/dev/sdc" successfully wiped.
[root@node-04 ~]# file -s /dev/sdc 
/dev/sdc: data
Install the GlusterFS client packages and load the thin-provisioning kernel module on every node:
yum install -y glusterfs-client glusterfs-fuse socat
modprobe dm_thin_pool
Heketi provides a CLI that gives users a way to manage the deployment and configuration of GlusterFS in Kubernetes. Download and install heketi-cli on your client machine. The heketi-cli version should match the heketi server version as closely as possible; otherwise you may run into errors.
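The signature handling shown earlier can be rehearsed safely on a scratch file instead of a real disk. This sketch (an illustration, not part of the deployment) plants the 2-byte DOS boot signature that pvcreate warned about, then zeroes it, watching `file` flip between a boot-sector type and plain "data":

```shell
img=$(mktemp)                      # scratch file standing in for /dev/sdc
truncate -s 1M "$img"              # all zeros -> `file` reports "data"
file -b "$img"
# Plant the 2-byte DOS boot signature 0x55 0xAA at offset 510
printf '\x55\xaa' | dd of="$img" bs=1 seek=510 conv=notrunc status=none
file -b "$img"                     # now reports a DOS/MBR boot sector
# Zero those two bytes again -- the same effect as answering "y" to
# pvcreate's "Wipe it?" prompt
dd if=/dev/zero of="$img" bs=1 seek=510 count=2 conv=notrunc status=none
file -b "$img"                     # back to "data"
rm -f "$img"
```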
Deploy the GlusterFS DaemonSet
{
"kind": "DaemonSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
    "name": "glusterfs",
    "labels": {
        "glusterfs": "deployment"
    },
    "annotations": {
        "description": "GlusterFS Daemon Set",
        "tags": "glusterfs"
    }
},
"spec": {
    "template": {
        "metadata": {
            "name": "glusterfs",
            "labels": {
                "glusterfs-node": "daemonset"
            }
        },
        "spec": {
            "nodeSelector": {
                "storagenode" : "glusterfs"
            },
            "hostNetwork": true,
            "containers": [
                {
                    "image": "gluster/gluster-centos:latest",
                    "imagePullPolicy": "Always",
                    "name": "glusterfs",
                    "volumeMounts": [
                        {
                            "name": "glusterfs-heketi",
                            "mountPath": "/var/lib/heketi"
                        },
                        {
                            "name": "glusterfs-run",
                            "mountPath": "/run"
                        },
                        {
                            "name": "glusterfs-lvm",
                            "mountPath": "/run/lvm"
                        },
                        {
                            "name": "glusterfs-etc",
                            "mountPath": "/etc/glusterfs"
                        },
                        {
                            "name": "glusterfs-logs",
                            "mountPath": "/var/log/glusterfs"
                        },
                        {
                            "name": "glusterfs-config",
                            "mountPath": "/var/lib/glusterd"
                        },
                        {
                            "name": "glusterfs-dev",
                            "mountPath": "/dev"
                        },
                        {
                            "name": "glusterfs-cgroup",
                            "mountPath": "/sys/fs/cgroup"
                        }
                    ],
                    "securityContext": {
                        "capabilities": {},
                        "privileged": true
                    },
                    "readinessProbe": {
                        "timeoutSeconds": 3,
                        "initialDelaySeconds": 60,
                        "exec": {
                            "command": [
                                "/bin/bash",
                                "-c",
                                "systemctl status glusterd.service"
                            ]
                        }
                    },
                    "livenessProbe": {
                        "timeoutSeconds": 3,
                        "initialDelaySeconds": 60,
                        "exec": {
                            "command": [
                                "/bin/bash",
                                "-c",
                                "systemctl status glusterd.service"
                            ]
                        }
                    }
                }
            ],
            "volumes": [
                {
                    "name": "glusterfs-heketi",
                    "hostPath": {
                        "path": "/var/lib/heketi"
                    }
                },
                {
                    "name": "glusterfs-run"
                },
                {
                    "name": "glusterfs-lvm",
                    "hostPath": {
                        "path": "/run/lvm"
                    }
                },
                {
                    "name": "glusterfs-etc",
                    "hostPath": {
                        "path": "/etc/glusterfs"
                    }
                },
                {
                    "name": "glusterfs-logs",
                    "hostPath": {
                        "path": "/var/log/glusterfs"
                    }
                },
                {
                    "name": "glusterfs-config",
                    "hostPath": {
                        "path": "/var/lib/glusterd"
                    }
                },
                {
                    "name": "glusterfs-dev",
                    "hostPath": {
                        "path": "/dev"
                    }
                },
                {
                    "name": "glusterfs-cgroup",
                    "hostPath": {
                        "path": "/sys/fs/cgroup"
                    }
                }
            ]
        }
    }
}
}
$ kubectl create -f glusterfs-daemonset.json
$ kubectl get nodes
Set the label storagenode=glusterfs on each node so that the gluster containers are scheduled onto the designated nodes:
[root@node-01 heketi]# kubectl label node 10.31.90.204 storagenode=glusterfs
[root@node-01 heketi]# kubectl label node 10.31.90.205 storagenode=glusterfs
[root@node-01 heketi]# kubectl label node 10.31.90.206 storagenode=glusterfs
Verify that the pods are running on those nodes; at least three pods should be running.
$ kubectl get pods
Next, create a ServiceAccount for Heketi:
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
$ kubectl create -f heketi-service-account.json
We must now grant this service account the ability to control the gluster pods, which we do by creating a cluster role binding for the newly created service account.
$ kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
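The imperative command above corresponds to the following ClusterRoleBinding manifest (a sketch: the binding name, the edit cluster role, and the default namespace are all taken from the command itself):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-gluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: heketi-service-account
  namespace: default
```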
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "kubeexec": {
      "rebalance_on_expansion": true
    },
    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },
  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}
$ kubectl create secret generic heketi-config-secret --from-file=./heketi.json
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "deploy-heketi",
    "labels": {
      "glusterfs": "heketi-service",
      "deploy-heketi": "support"
    },
    "annotations": {
      "description": "Exposes Heketi Service"
    }
  },
  "spec": {
    "selector": {
      "name": "deploy-heketi"
    },
    "ports": [
      {
        "name": "deploy-heketi",
        "port": 8080,
        "targetPort": 8080
      }
    ]
  }
},
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "deploy-heketi",
    "labels": {
      "glusterfs": "heketi-deployment",
      "deploy-heketi": "deployment"
    },
    "annotations": {
      "description": "Defines how to deploy Heketi"
    }
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "name": "deploy-heketi",
        "labels": {
          "name": "deploy-heketi",
          "glusterfs": "heketi-pod",
          "deploy-heketi": "pod"
        }
      },
      "spec": {
        "serviceAccountName": "heketi-service-account",
        "containers": [
          {
            "image": "heketi/heketi:8",
            "imagePullPolicy": "Always",
            "name": "deploy-heketi",
            "env": [
              {
                "name": "HEKETI_EXECUTOR",
                "value": "kubernetes"
              },
              {
                "name": "HEKETI_DB_PATH",
                "value": "/var/lib/heketi/heketi.db"
              },
              {
                "name": "HEKETI_FSTAB",
                "value": "/var/lib/heketi/fstab"
              },
              {
                "name": "HEKETI_SNAPSHOT_LIMIT",
                "value": "14"
              },
              {
                "name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
                "value": "y"
              }
            ],
            "ports": [
              {
                "containerPort": 8080
              }
            ],
            "volumeMounts": [
              {
                "name": "db",
                "mountPath": "/var/lib/heketi"
              },
              {
                "name": "config",
                "mountPath": "/etc/heketi"
              }
            ],
            "readinessProbe": {
              "timeoutSeconds": 3,
              "initialDelaySeconds": 3,
              "httpGet": {
                "path": "/hello",
                "port": 8080
              }
            },
            "livenessProbe": {
              "timeoutSeconds": 3,
              "initialDelaySeconds": 30,
              "httpGet": {
                "path": "/hello",
                "port": 8080
              }
            }
          }
        ],
        "volumes": [
          {
            "name": "db"
          },
          {
            "name": "config",
            "secret": {
              "secretName": "heketi-config-secret"
            }
          }
        ]
      }
    }
  }
}
]
}
# kubectl create -f heketi-bootstrap.json
service "deploy-heketi" created
deployment "deploy-heketi" created
[root@node-01 heketi]# kubectl get pod
NAME                            READY     STATUS    RESTARTS   AGE
deploy-heketi-8888799fd-cmfp6   1/1       Running   0          6m
glusterfs-7t5ls                 1/1       Running   0          8m
glusterfs-drsx9                 1/1       Running   0          8m
glusterfs-pnnn8                 1/1       Running   0          8m
kubectl port-forward deploy-heketi-8888799fd-cmfp6 :8080
If a fixed local port (here 18080) is free on the system where you run the command, you can have port-forward bind to it instead, which is more convenient:
kubectl port-forward deploy-heketi-8888799fd-cmfp6 18080:8080
Now verify that port forwarding works by running a sample query against the Heketi service. The port-forward command prints the local port it forwards; incorporate that port into a URL to test the service, as follows:
curl http://localhost:18080/hello
Handling connection for 18080
Hello from Heketi
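If you want to check your curl invocation before the port-forward is even up, the round trip can be mocked locally. This throwaway stand-in (not the real Heketi server; port 18080 and the reply text simply mirror the example above) answers one GET on /hello and exits:

```shell
# Start a one-shot mock of the Heketi /hello endpoint in the background
python3 - <<'PYEOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from Heketi"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # keep the console quiet
        pass

# handle_request() serves exactly one request, then the process exits
HTTPServer(("127.0.0.1", 18080), Hello).handle_request()
PYEOF
sleep 1
curl -s http://localhost:18080/hello   # prints: Hello from Heketi
wait
```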
Finally, set an environment variable for the Heketi CLI client so that it knows how to reach the Heketi server.
export HEKETI_CLI_SERVER=http://localhost:18080
Next, we give Heketi information about the GlusterFS cluster it is to manage. We provide this information via a topology file. The repo you cloned contains a sample topology file named topology-sample.json. The topology specifies the Kubernetes nodes that run GlusterFS containers and the corresponding raw block device(s) on each node.
Make sure hostnames/manage points to the exact names shown by kubectl get nodes, and that hostnames/storage is the IP address of the storage network.
Modify the topology file to reflect your choices, then deploy it, as follows:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.31.90.204"
              ],
              "storage":[
                "10.31.90.204"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.31.90.205"
              ],
              "storage":[
                "10.31.90.205"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.31.90.206"
              ],
              "storage":[
                "10.31.90.206"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        }
      ]
    }
  ]
}
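Before loading the topology, a quick local sanity check can save a round trip: confirm the file is valid JSON and lists one raw device per node. (The file name top.json matches the load command used here; the expected count of 3 is specific to this three-node example.)

```shell
# Validate JSON syntax; a parse error here would also make
# `heketi-cli topology load` fail.
python3 -m json.tool top.json > /dev/null && echo "top.json: valid JSON"
# Count device entries -- this example expects one /dev/sdc per node, so 3.
grep -c '"/dev/sdc"' top.json
```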
[root@node-01 ~]# heketi-cli topology load --json=top.json
Creating cluster ... ID: e758afb77ee26d5f969d7efee1516e64
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node 10.31.90.204 ... ID: a6eedd58c118dcfe44a0db2af1a4f863
                Adding device /dev/sdc ... OK
        Creating node 10.31.90.205 ... ID: 4066962c14bcdebd28aca193b5690792
                Adding device /dev/sdc ... OK
        Creating node 10.31.90.206 ... ID: 91e42a2361f0266ae334354e5c34ce11
                Adding device /dev/sdc ... OK
Running the following command generates a file named heketi-storage.json. It is best to change "image": "heketi/heketi:dev" in that file to "image": "heketi/heketi:8" so the client and server versions match.
# heketi-client/bin/heketi-cli setup-openshift-heketi-storage
Then create the heketi-related resources:
# kubectl create -f heketi-storage.json
Pitfall: if heketi-cli reports a "no space" error when running the setup-openshift-heketi-storage subcommand, you may have inadvertently run topology load with mismatched versions of the server and heketi-cli. Stop the running Heketi pod (kubectl scale deployment deploy-heketi --replicas=0), manually remove any signatures from the storage block devices, then resume the Heketi pod (kubectl scale deployment deploy-heketi --replicas=1). Finally, reload the topology with a matching version of heketi-cli and retry the step.
Once the heketi-storage job has completed, delete the bootstrap Heketi resources:
# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
  "kind": "Secret",
  "apiVersion": "v1",
  "metadata": {
    "name": "heketi-db-backup",
    "labels": {
      "glusterfs": "heketi-db",
      "heketi": "db"
    }
  },
  "data": {
  },
  "type": "Opaque"
},
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "heketi",
    "labels": {
      "glusterfs": "heketi-service",
      "deploy-heketi": "support"
    },
    "annotations": {
      "description": "Exposes Heketi Service"
    }
  },
  "spec": {
    "selector": {
      "name": "heketi"
    },
    "ports": [
      {
        "name": "heketi",
        "port": 8080,
        "targetPort": 8080
      }
    ]
  }
},
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "heketi",
    "labels": {
      "glusterfs": "heketi-deployment"
    },
    "annotations": {
      "description": "Defines how to deploy Heketi"
    }
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "name": "heketi",
        "labels": {
          "name": "heketi",
          "glusterfs": "heketi-pod"
        }
      },
      "spec": {
        "serviceAccountName": "heketi-service-account",
        "containers": [
          {
            "image": "heketi/heketi:8",
            "imagePullPolicy": "Always",
            "name": "heketi",
            "env": [
              {
                "name": "HEKETI_EXECUTOR",
                "value": "kubernetes"
              },
              {
                "name": "HEKETI_DB_PATH",
                "value": "/var/lib/heketi/heketi.db"
              },
              {
                "name": "HEKETI_FSTAB",
                "value": "/var/lib/heketi/fstab"
              },
              {
                "name": "HEKETI_SNAPSHOT_LIMIT",
                "value": "14"
              },
              {
                "name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
                "value": "y"
              }
            ],
            "ports": [
              {
                "containerPort": 8080
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/backupdb",
                "name": "heketi-db-secret"
              },
              {
                "name": "db",
                "mountPath": "/var/lib/heketi"
              },
              {
                "name": "config",
                "mountPath": "/etc/heketi"
              }
            ],
            "readinessProbe": {
              "timeoutSeconds": 3,
              "initialDelaySeconds": 3,
              "httpGet": {
                "path": "/hello",
                "port": 8080
              }
            },
            "livenessProbe": {
              "timeoutSeconds": 3,
              "initialDelaySeconds": 30,
              "httpGet": {
                "path": "/hello",
                "port": 8080
              }
            }
          }
        ],
        "volumes": [
          {
            "name": "db",
            "glusterfs": {
              "endpoints": "heketi-storage-endpoints",
              "path": "heketidbstorage"
            }
          },
          {
            "name": "heketi-db-secret",
            "secret": {
              "secretName": "heketi-db-backup"
            }
          },
          {
            "name": "config",
            "secret": {
              "secretName": "heketi-config-secret"
            }
          }
        ]
      }
    }
  }
}
]
}
# kubectl create -f heketi-deployment.json
service "heketi" created
deployment "heketi" created
Use commands such as heketi-cli cluster list and heketi-cli volume list to confirm that the previously created cluster exists and that Heketi knows about the db storage volume created during the bootstrap phase.
Next, expose Heketi through an ingress; here the A record heketi.cnlinux.club resolves to 10.31.90.200.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-heketi
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: heketi.cnlinux.club
    http:
      paths:
      - path:
        backend:
          serviceName: heketi
          servicePort: 8080
[root@node-01 heketi]# kubectl create -f ingress-heketi.yaml
Visit http://heketi.cnlinux.club/hello in a browser to confirm that the ingress works.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.cnlinux.club"
  restauthenabled: "false" 
  volumetype: "replicate:2"
[root@node-01 heketi]# kubectl create -f storageclass-gluster-heketi.yaml
[root@node-01 heketi]# kubectl get sc
NAME             PROVISIONER               AGE
gluster-heketi   kubernetes.io/glusterfs   10s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@node-01 heketi]# kubectl create -f pvc-gluster-heketi.yaml 
[root@node-01 heketi]# kubectl get pvc
NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
pvc-gluster-heketi   Bound     pvc-d978f524-0b74-11e9-875c-005056826470   1Gi        RWO            gluster-heketi   30s
Next, create a pod that mounts this PVC:
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - name: pod-pvc
    image: busybox:latest
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
[root@node-01 heketi]# kubectl create -f pod-pvc.yaml 
Enter the container and check whether the volume was mounted successfully:
[root@node-01 heketi]# kubectl exec pod-pvc -it /bin/sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  47.8G      4.3G     43.5G   9% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
10.31.90.204:vol_675cc9fe0e959157919c886ea7786d33
                   1014.0M     42.7M    971.3M   4% /pv-data
/dev/sda3                47.8G      4.3G     43.5G   9% /dev/termination-log
/dev/sda3                47.8G      4.3G     43.5G   9% /etc/resolv.conf
/dev/sda3                47.8G      4.3G     43.5G   9% /etc/hostname
/dev/sda3                47.8G      4.3G     43.5G   9% /etc/hosts
# Write to /pv-data; when usage exceeds 1 GiB the write is cut off automatically, which shows that the capacity limit is enforced.
/ # cd /pv-data/
/pv-data # dd if=/dev/zero of=/pv-data/test.img bs=8M count=300
123+0 records in
122+0 records out
1030225920 bytes (982.5MB) copied, 24.255925 seconds, 40.5MB/s
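The dd output is consistent with the 1 GiB quota: 122 complete 8 MiB records were written (1030225920 bytes is exactly 982.5 MiB) before the volume ran out of space. A quick check of the arithmetic:

```shell
bytes=1030225920                       # from the dd output above
bs=$((8 * 1024 * 1024))                # bs=8M
echo "complete 8MiB records: $((bytes / bs))"          # 122, matching "122+0 records out"
echo "MiB written:           $((bytes / 1024 / 1024))" # 982 (982.5 exactly)
```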
Check on the host's disk whether test.img was created:
[root@node-04 cfg]# mount /dev/vg_2631413b8b87bbd6cb526568ab697d37/brick_1691ef862dd504e12e8384af76e5a9f2 /mnt
[root@node-04 cfg]# ll -h /mnt/brick/
total 982M
-rw-r--r-- 2 root 2001 982M Jan  2 15:14 test.img
At this point, all steps are complete.