K8s: implementing data persistence for stateful services

Published: 2020-06-10 23:26:16  Author: warrent
Source: web  Reads: 599

Preface

1. What are stateful and stateless services?

For a server program, whether it is a stateful or a stateless service comes down to one criterion: whether two requests from the same originator have a contextual relationship on the server side. For stateful requests, the server generally keeps information about each request, and every request can implicitly build on the information from earlier ones. For stateless requests, everything the server needs to process the request must come from the request itself, plus whatever public information the server holds that is available to all requests.
The best-known stateless server program is the web server. Each HTTP request has no relationship to earlier ones; it simply fetches a target URI, and once the content is delivered the connection is torn down without a trace. As the web evolved, stateful information was gradually layered onto this stateless model, the cookie being the classic example. When responding to a client, the server pushes down a cookie recording some server-side information; the client carries that cookie on subsequent requests, and the server uses it to establish the request's context. The cookie is a transitional device from stateless toward stateful: an external mechanism for maintaining context.
Stateful servers have a much broader range of applications, for example instant messaging (MSN) or online game servers. The server maintains state for every connection, and when a request arrives on a connection it can reconstruct the context from locally stored information. This lets the client rely on defaults easily, and lets the server manage state easily. For instance, once a user logs in, the server can look up earlier registration details such as their birthday by username, and in later processing it can easily find that user's history.
Stateful servers are far stronger in terms of functionality, but because they must maintain a large amount of information and state, their performance is somewhat worse than that of stateless servers. Stateless servers excel at simple services but have real drawbacks for complex features; implementing an instant-messaging server statelessly, for example, would be a nightmare.

2. How does data persistence differ between stateful and stateless services in K8s?

In k8s, a stateless service such as web can be persisted using the approach from my earlier post, "K8s data persistence: automatically creating PVs". But apply that same approach to a stateful service such as a database and you hit a serious problem: writes only ever land in one of the backend containers. The written data does appear under the NFS directory, but the other database instances cannot read it, because a database depends on many instance-specific factors, such as server_id and partition-table metadata.

And databases are not the only stateful services for which that persistence approach is unsuitable.

3. How to persist data for stateful services: the StatefulSet

StatefulSet is also a resource object (called PetSet before Kubernetes 1.5), and like RS, RC, and Deployment it is a Pod controller.

In Kubernetes, most Pod management is built on the idea that Pods are stateless and disposable. A Replication Controller, for example, simply guarantees the number of Pods available to serve; if a Pod is deemed unhealthy, Kubernetes treats it like cattle: delete it and rebuild it. A PetSet (a "pet" application), by contrast, consists of a group of stateful Pods, each with its own special, immutable identity and its own unique data that must not be deleted.

As is well known, managing a stateful application is much harder than managing a stateless one. Stateful applications need fixed identities, have internal communication logic that is not visible from outside, and tolerate container churn poorly. The traditional way to manage them is fixed machines, static IPs, and persistent storage. With the PetSet resource, Kubernetes weakens the coupling between a stateful pet and the underlying physical infrastructure: a PetSet guarantees that a fixed number of pets are running at any moment, and that each pet has its own unique identity.
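Part of that unique identity is a stable DNS name: each Pod in a StatefulSet is resolvable as <pod-name>.<headless-service>.<namespace>.svc.cluster.local through its governing headless Service. A minimal sketch of how that name is assembled, using the resource names created later in this article:

```shell
# Assemble the stable DNS name a StatefulSet Pod receives.
# Pattern: <pod-name>.<headless-service>.<namespace>.svc.cluster.local
pod="statefulset-0"   # <statefulset-name>-<ordinal>
svc="headless-svc"    # the governing headless Service
ns="default"
echo "${pod}.${svc}.${ns}.svc.cluster.local"
# prints: statefulset-0.headless-svc.default.svc.cluster.local
```

Inside the cluster, that name always resolves to the current IP of exactly that Pod, even after it is rescheduled to another node.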

A pet "with identity" means that its Pods have the following characteristics:

1. A stable, unique network identity: a fixed Pod name and a stable DNS record;
2. Stable, dedicated persistent storage that follows the Pod across rescheduling;
3. Ordered, deterministic creation, scaling, and deletion.

1. Example application:

Configuration example

This approach has much in common with "K8s data persistence: automatically creating PVs": it still needs the underlying NFS storage, an RBAC-authorized service account, the nfs-client-provisioner to supply storage, and an SC (StorageClass). The one difference is that for stateful services the PVs no longer have to be created by hand.

Set up a private registry:

[root@master ~]# docker run -tid --name registry -p 5000:5000 -v /data/registry:/var/lib/registry --restart always registry
[root@master ~]# vim /usr/lib/systemd/system/docker.service   #edit docker's config file so dockerd trusts the plain-HTTP private registry
ExecStart=/usr/bin/dockerd -H unix:// --insecure-registry 192.168.20.6:5000
[root@master ~]# scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/
[root@master ~]# scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker

Set up the NFS service:

[root@master ~]# yum -y install nfs-utils
[root@master ~]# systemctl enable rpcbind
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
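The single line in /etc/exports carries several options worth spelling out; the breakdown below is a reference sketch, not output from the server:

```shell
# /nfsdata *(rw,sync,no_root_squash) means:
#   /nfsdata        the directory being exported
#   *               any client may mount it
#   rw              read-write access
#   sync            flush writes to disk before replying to the client
#   no_root_squash  do not map root to nobody; the provisioner needs this
#                   to create per-PVC directories under /nfsdata
opts="rw,sync,no_root_squash"
echo "$opts" | tr ',' '\n'   # list the options one per line
```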

Everything above is preparation.

1. Using a custom image, create a StatefulSet resource object in which every Pod persists its own data. The replica count is 6, and the persistent directory is /usr/local/apache2/htdocs.

Create the RBAC authorization

[root@master ljz]# vim rbac.yaml  #write the yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master ljz]# kubectl apply -f rbac.yaml       #apply the yaml file

Create the nfs-client-provisioner

[root@master ljz]# vim nfs-deploymnet.yaml   #write the yaml file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: ljz
            - name: NFS_SERVER
              value: 192.168.20.6
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.20.6
            path: /nfsdata
[root@master ljz]# kubectl apply -f nfs-deploymnet.yaml      #apply the yaml file

Create the SC (StorageClass). Note that the provisioner field below must match the PROVISIONER_NAME environment variable (ljz) defined in the Deployment above, otherwise the class cannot provision volumes.

[root@master ljz]# vim sc.yaml       #write the yaml file

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
provisioner: ljz
reclaimPolicy: Retain
[root@master ljz]# kubectl apply -f sc.yaml         #apply the yaml file

Create the Pods

[root@master ljz]# vim statefulset.yaml        #write the yaml file

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: testweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 6
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: testhttpd
        image: 192.168.20.6:5000/ljz:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test
          mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: test-sc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

[root@master ljz]# kubectl apply -f statefulset.yaml 
[root@master ljz]# kubectl get pod -w
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          7m57s
statefulset-0                             1/1     Running   0          26s
statefulset-1                             1/1     Running   0          24s
statefulset-2                             1/1     Running   0          20s
statefulset-3                             1/1     Running   0          16s
statefulset-4                             1/1     Running   0          13s
statefulset-5                             1/1     Running   0          9s
[root@master ljz]# kubectl get pv,pvc
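The PVCs generated from volumeClaimTemplates follow a fixed naming pattern, <template-name>-<statefulset-name>-<ordinal>, which is why the claims shown further below are called test-statefulset-0 through test-statefulset-5. A quick sketch of the pattern:

```shell
# PVC names generated from a volumeClaimTemplate:
#   <template-name>-<statefulset-name>-<ordinal>
tmpl="test"          # metadata.name of the volumeClaimTemplate
sts="statefulset"    # metadata.name of the StatefulSet
for i in 0 1 2 3 4 5; do
  echo "${tmpl}-${sts}-${i}"
done
```

Because the claim name is tied to the ordinal, a rescheduled Pod re-binds to exactly the same PVC, and therefore the same data, as before.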

2. Once that is done, Pods 0 through 5 should each serve a homepage of: Version: --v1
Then scale the service up to 10 replicas and verify that persistent PVs and PVCs are still created for the new Pods.

[root@master ljz]# vim a.sh   #script that writes the homepage files

#!/bin/bash
for i in `ls /nfsdata`
do
  echo "Version: --v1" > /nfsdata/${i}/index.html
done
[root@master ljz]# sh a.sh                      #run the script
[root@master ljz]# kubectl get pod -o wide      #get the Pod IPs and spot-check the homepages
[root@master ljz]# curl 10.244.1.3
Version: --v1
[root@master ljz]# curl 10.244.1.5
Version: --v1
[root@master ljz]# curl 10.244.2.4
Version: --v1
#scale up and update
[root@master ljz]# vim statefulset.yaml 

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: testweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  updateStrategy:
    rollingUpdate:
      partition: 4
  serviceName: headless-svc
  replicas: 10
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: testhttpd
        image: 192.168.20.6:5000/ljz:v2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test
          mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: test-sc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
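The new field here is updateStrategy.rollingUpdate.partition: 4. During a rolling update, only Pods whose ordinal is greater than or equal to the partition are recreated from the new template; that is why, in the output that follows, only statefulset-4 and statefulset-5 are terminated and rebuilt on the v2 image (6-9 are brand new and start on v2 directly), while 0-3 keep running v1. A small sketch of which ordinals the partition selects:

```shell
# With partition=4, a rolling update only touches Pods with ordinal >= 4.
partition=4
for i in 0 1 2 3 4 5 6 7 8 9; do
  if [ "$i" -ge "$partition" ]; then
    echo "statefulset-$i: updated"
  else
    echo "statefulset-$i: unchanged"
  fi
done
```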

[root@master ljz]# kubectl get pod -w #watch the update roll out
NAME                                      READY   STATUS             RESTARTS   AGE
nfs-client-provisioner-6649749f97-cl92m   1/1     Running            0          40m
statefulset-0                             1/1     Running            0          33m
statefulset-1                             1/1     Running            0          33m
statefulset-2                             1/1     Running            0          33m
statefulset-3                             1/1     Running            0          33m
statefulset-4                             1/1     Running            0          33m
statefulset-5                             1/1     Running            0          33m
statefulset-6                             0/1     ImagePullBackOff   0          5m9s
statefulset-6                             1/1     Running            0          5m41s
statefulset-7                             0/1     Pending            0          0s
statefulset-7                             0/1     Pending            0          0s
statefulset-7                             0/1     Pending            0          2s
statefulset-7                             0/1     ContainerCreating   0          2s
statefulset-7                             1/1     Running             0          4s
statefulset-8                             0/1     Pending             0          0s
statefulset-8                             0/1     Pending             0          0s
statefulset-8                             0/1     Pending             0          1s
statefulset-8                             0/1     ContainerCreating   0          1s
statefulset-8                             1/1     Running             0          3s
statefulset-9                             0/1     Pending             0          0s
statefulset-9                             0/1     Pending             0          0s
statefulset-9                             0/1     Pending             0          1s
statefulset-9                             0/1     ContainerCreating   0          1s
statefulset-9                             1/1     Running             0          3s
statefulset-5                             1/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Pending             0          0s
statefulset-5                             0/1     Pending             0          0s
statefulset-5                             0/1     ContainerCreating   0          0s
statefulset-5                             1/1     Running             0          1s
statefulset-4                             1/1     Terminating         0          33m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Pending             0          0s
statefulset-4                             0/1     Pending             0          0s
statefulset-4                             0/1     ContainerCreating   0          0s
statefulset-4                             1/1     Running             0          1s
[root@master ljz]# kubectl get pv,pvc                #check the PVs and PVCs created for the scaled-up Pods
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/pvc-161fc655-7601-4996-99c8-a13cabaa4ad1   100Mi      RWO            Delete           Bound    default/test-statefulset-0   test-sc                 38m
persistentvolume/pvc-1d1b0cfd-83cc-4cd6-a380-f23b9eac0411   100Mi      RWO            Delete           Bound    default/test-statefulset-4   test-sc                 37m
persistentvolume/pvc-297495a8-2117-4232-8e9a-61019c03f0d0   100Mi      RWO            Delete           Bound    default/test-statefulset-7   test-sc                 3m41s
persistentvolume/pvc-2e48a292-cb30-488e-90a9-5184811b9eb8   100Mi      RWO            Delete           Bound    default/test-statefulset-5   test-sc                 37m
persistentvolume/pvc-407e2c0e-209d-4b5a-a3fa-454787f617a7   100Mi      RWO            Delete           Bound    default/test-statefulset-2   test-sc                 37m
persistentvolume/pvc-56ac09a0-e51d-42a9-843b-f0a3a0c60a08   100Mi      RWO            Delete           Bound    default/test-statefulset-9   test-sc                 3m34s
persistentvolume/pvc-90a05d1b-f555-44df-9bb3-73284001dda3   100Mi      RWO            Delete           Bound    default/test-statefulset-3   test-sc                 37m
persistentvolume/pvc-9e2fd35e-5151-4790-b248-6545815d8c06   100Mi      RWO            Delete           Bound    default/test-statefulset-8   test-sc                 3m37s
persistentvolume/pvc-9f60aab0-4491-4422-9514-1a945151909d   100Mi      RWO            Delete           Bound    default/test-statefulset-6   test-sc                 9m22s
persistentvolume/pvc-ab64f8b8-737e-49a5-ae5d-e3b33188ce39   100Mi      RWO            Delete           Bound    default/test-statefulset-1   test-sc                 37m

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-statefulset-0   Bound    pvc-161fc655-7601-4996-99c8-a13cabaa4ad1   100Mi      RWO            test-sc        38m
persistentvolumeclaim/test-statefulset-1   Bound    pvc-ab64f8b8-737e-49a5-ae5d-e3b33188ce39   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-2   Bound    pvc-407e2c0e-209d-4b5a-a3fa-454787f617a7   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-3   Bound    pvc-90a05d1b-f555-44df-9bb3-73284001dda3   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-4   Bound    pvc-1d1b0cfd-83cc-4cd6-a380-f23b9eac0411   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-5   Bound    pvc-2e48a292-cb30-488e-90a9-5184811b9eb8   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-6   Bound    pvc-9f60aab0-4491-4422-9514-1a945151909d   100Mi      RWO            test-sc        9m22s
persistentvolumeclaim/test-statefulset-7   Bound    pvc-297495a8-2117-4232-8e9a-61019c03f0d0   100Mi      RWO            test-sc        3m41s
persistentvolumeclaim/test-statefulset-8   Bound    pvc-9e2fd35e-5151-4790-b248-6545815d8c06   100Mi      RWO            test-sc        3m37s
persistentvolumeclaim/test-statefulset-9   Bound    pvc-56ac09a0-e51d-42a9-843b-f0a3a0c60a08   100Mi      RWO            test-sc        3m34s
[root@master nfsdata]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          54m   10.244.1.2   node01   <none>           <none>
statefulset-0                             1/1     Running   0          47m   10.244.1.3   node01   <none>           <none>
statefulset-1                             1/1     Running   0          47m   10.244.2.3   node02   <none>           <none>
statefulset-2                             1/1     Running   0          47m   10.244.2.4   node02   <none>           <none>
statefulset-3                             1/1     Running   0          47m   10.244.1.4   node01   <none>           <none>
statefulset-4                             1/1     Running   0          12m   10.244.2.8   node02   <none>           <none>
statefulset-5                             1/1     Running   0          13m   10.244.1.8   node01   <none>           <none>
statefulset-6                             1/1     Running   0          18m   10.244.2.6   node02   <none>           <none>
statefulset-7                             1/1     Running   0          13m   10.244.1.6   node01   <none>           <none>
statefulset-8                             1/1     Running   0          13m   10.244.2.7   node02   <none>           <none>
statefulset-9                             1/1     Running   0          13m   10.244.1.7   node01   <none>           <none>
#check the homepage files; Pods 6-9 were created after the script ran, so their htdocs are still empty and Apache serves its default index, while the recreated Pods 4-5 still serve v1 from their persistent volumes
[root@master nfsdata]# curl 10.244.2.6
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
 <head>
  <title>Index of /</title>
 </head>
 <body>
<h2>Index of /</h2>
<ul></ul>
</body></html>
[root@master nfsdata]# curl 10.244.1.8
Version: --v1

Update the service: during the update, everything after ordinal 3 should be updated to Version: --v2

[root@master ljz]# vim a.sh      #rewrite the homepage files

#!/bin/bash
for i in `ls /nfsdata/`
do
  if [ `echo $i | awk -F - '{print $4}'` -gt 3 ]
  then
    echo "Version: --v2" > /nfsdata/${i}/index.html
  fi
done
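The awk -F - '{print $4}' in this script works because the nfs-client provisioner backs each PVC with a directory named <namespace>-<pvc-name>-<pv-name> (the default behavior of this provisioner); for claims named test-statefulset-N, splitting on "-" therefore puts the Pod ordinal in field 4:

```shell
# Example directory created by nfs-client-provisioner under /nfsdata,
# taken from the PV listing earlier in this article:
dir="default-test-statefulset-5-pvc-2e48a292-cb30-488e-90a9-5184811b9eb8"
# Fields after splitting on "-": default / test / statefulset / 5 / pvc / ...
echo "$dir" | awk -F - '{print $4}'   # prints: 5
```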
[root@master ljz]# sh a.sh        #run the script
[root@master ljz]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          68m   10.244.1.2   node01   <none>           <none>
statefulset-0                             1/1     Running   0          60m   10.244.1.3   node01   <none>           <none>
statefulset-1                             1/1     Running   0          60m   10.244.2.3   node02   <none>           <none>
statefulset-2                             1/1     Running   0          60m   10.244.2.4   node02   <none>           <none>
statefulset-3                             1/1     Running   0          60m   10.244.1.4   node01   <none>           <none>
statefulset-4                             1/1     Running   0          26m   10.244.2.8   node02   <none>           <none>
statefulset-5                             1/1     Running   0          26m   10.244.1.8   node01   <none>           <none>
statefulset-6                             1/1     Running   0          32m   10.244.2.6   node02   <none>           <none>
statefulset-7                             1/1     Running   0          26m   10.244.1.6   node01   <none>           <none>
statefulset-8                             1/1     Running   0          26m   10.244.2.7   node02   <none>           <none>
statefulset-9                             1/1     Running   0          26m   10.244.1.7   node01   <none>           <none>
#verify the content
[root@master ljz]# curl 10.244.1.4
Version: --v1
[root@master ljz]# curl 10.244.2.8
Version: --v2