xiunai 2019-10-23
GlusterFS is a scalable distributed file system that aggregates disk storage resources from multiple servers into a single global namespace to provide shared file storage.
GlusterFS supports several volume modes, including distributed, replicated, distributed-replicated, dispersed, and distributed-dispersed volumes; a manual example is sketched below.
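For orientation, this is roughly what creating a 3-way replicated volume looks like when done by hand on a standalone GlusterFS cluster (the hostnames and brick paths here are illustrative); Heketi automates exactly this kind of work later in this article:

# Illustrative only: create and start a replica-3 volume from one brick per server
gluster volume create demo-vol replica 3 \
    server1:/data/brick1/demo server2:/data/brick1/demo server3:/data/brick1/demo
gluster volume start demo-vol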
Heketi is a framework that provides a RESTful API for managing GlusterFS volumes. It enables dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple GlusterFS clusters, and makes GlusterFS easier for administrators to operate. In a Kubernetes cluster, a pod sends its storage request to Heketi, and Heketi then directs the GlusterFS cluster to create the corresponding volume. Heketi dynamically selects bricks within the cluster to build the requested volumes, ensuring that replicas are spread across different failure domains. It also supports any number of GlusterFS clusters, so the cloud servers it serves are not limited to a single GlusterFS cluster.
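As a sketch of what that RESTful interaction looks like, a volume request to Heketi is a simple JSON POST (the JWT authentication step is omitted and the endpoint/payload are paraphrased from Heketi's API documentation, so treat this as illustrative):

# Illustrative: ask Heketi for a 10 GiB replica-3 volume
curl -X POST http://heketi.example.com:8080/volumes \
    -H "Content-Type: application/json" \
    -d '{"size": 10, "durability": {"type": "replicate", "replica": 3}}'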
Environment: the latest Kubernetes v1.16.2 installed with kubeadm, consisting of 1 master + 2 nodes, with flannel as the network plugin. By default, kubeadm taints the master node; to let the GlusterFS cluster span all three nodes, this article first removes that taint manually.
The GlusterFS volume mode used in this article is the replicated mode.
In addition, GlusterFS needs to run privileged inside a Kubernetes cluster, which requires adding the --allow-privileged=true flag to kube-apiserver; with this version of kubeadm it is already enabled by default.
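To double-check this on a kubeadm cluster, one can grep the static pod manifest (the path below is the kubeadm default; an empty result means the flag is unset, in which case the default of true applies):

# Check whether kube-apiserver explicitly sets the privileged flag
grep allow-privileged /etc/kubernetes/manifests/kube-apiserver.yaml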
[ ~]# kubectl describe nodes k8s-master-01 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[ ~]# kubectl taint node k8s-master-01 node-role.kubernetes.io/master-
node/k8s-master-01 untainted
[ ~]# kubectl describe nodes k8s-master-01 | grep Taint
Taints:             <none>
To ensure pods can use GlusterFS as their backend storage, the GlusterFS client tools must be installed in advance on every node that runs pods; other storage backends work similarly.
$ yum install -y glusterfs glusterfs-fuse
Label the Kubernetes nodes on which GlusterFS should be installed, because GlusterFS is deployed through a Kubernetes DaemonSet. A DaemonSet by default installs onto every node; if a node-selection label is configured before installation, only the nodes carrying that label will be installed.
The DaemonSet in the installation manifest is set to install on nodes labeled storagenode=glusterfs (the relevant manifest stanza is sketched after the commands below), so label the nodes accordingly beforehand.
[ ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   5d      v1.16.2
k8s-node-01     Ready    <none>   4d23h   v1.16.2
k8s-node-02     Ready    <none>   4d23h   v1.16.2
[ ~]# kubectl label node k8s-master-01 storagenode=glusterfs
node/k8s-master-01 labeled
[ ~]# kubectl label node k8s-node-01 storagenode=glusterfs
node/k8s-node-01 labeled
[ ~]# kubectl label node k8s-node-02 storagenode=glusterfs
node/k8s-node-02 labeled
[ ~]# kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE     VERSION   LABELS
k8s-master-01   Ready    master   5d      v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-01,kubernetes.io/os=linux,node-role.kubernetes.io/master=,storagenode=glusterfs
k8s-node-01     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-01,kubernetes.io/os=linux,storagenode=glusterfs
k8s-node-02     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-02,kubernetes.io/os=linux,storagenode=glusterfs
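For reference, the node-selection part of glusterfs-daemonset.json looks roughly like the following (paraphrased from the upstream manifest; verify against your downloaded copy). Only nodes carrying this label receive a GlusterFS pod:

"spec": {
    "template": {
        "spec": {
            "nodeSelector": {
                "storagenode": "glusterfs"
            },
...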
Load the required device-mapper kernel modules on every node:
$ modprobe dm_snapshot
$ modprobe dm_mirror
$ modprobe dm_thin_pool
Verify that the modules are loaded:
$ lsmod | grep dm_snapshot
$ lsmod | grep dm_mirror
$ lsmod | grep dm_thin_pool
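Note that modprobe does not persist across reboots. A minimal way to have the modules load at boot on systemd hosts is a modules-load.d drop-in (the filename is arbitrary):

# Load the device-mapper modules automatically at boot
cat > /etc/modules-load.d/glusterfs.conf <<EOF
dm_snapshot
dm_mirror
dm_thin_pool
EOF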
Here the GlusterFS cluster is deployed in containers; it can also be deployed the traditional way. In a production environment, the GlusterFS cluster is best deployed outside the Kubernetes cluster, in which case only the corresponding endpoints need to be created in Kubernetes (a sketch follows this paragraph). This article uses a DaemonSet so that every labeled node runs one GlusterFS service, each backed by a disk that provides storage.
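If the GlusterFS cluster did live outside Kubernetes, the endpoints would look roughly like this minimal sketch (IPs reuse this article's nodes; the port value is only a placeholder required by the Endpoints schema):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.2.10
  - ip: 192.168.2.11
  - ip: 192.168.2.12
  ports:
  - port: 1

A PersistentVolume or pod volume can then reference these endpoints through the glusterfs volume source.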
[ glusterfs]# pwd
/root/manifests/glusterfs
[ glusterfs]# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
[ glusterfs]# tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
[ glusterfs]# cd heketi-client/share/heketi/kubernetes/
[ kubernetes]# pwd
/root/manifests/glusterfs/heketi-client/share/heketi/kubernetes
In this cluster, the API version of both the DaemonSet controller used below and the Deployment controller used later has moved to apps/v1, so the downloaded JSON files must be edited by hand before deployment, and each manifest needs an explicit selector declaration. Otherwise errors like the following appear:
[ kubernetes]# kubectl apply -f glusterfs-daemonset.json
error: unable to recognize "glusterfs-daemonset.json": no matches for kind "DaemonSet" in version "extensions/v1beta1"
Change the API version from
"apiVersion": "extensions/v1beta1"
to
"apiVersion": "apps/v1",
Then add the selector declaration; without it, validation fails:
[ kubernetes]# kubectl apply -f glusterfs-daemonset.json
error: error validating "glusterfs-daemonset.json": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false
The selector must match, via matchLabels, the pod template labels that appear later in the file:
"spec": { "selector": { "matchLabels": { "glusterfs-node": "daemonset" } },
[ kubernetes]# kubectl apply -f glusterfs-daemonset.json
daemonset.apps/glusterfs created
Note: GlusterFS works in the default namespace here; a different namespace can be specified manually.
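For example, to run the stack in a dedicated namespace instead (the namespace name is arbitrary; keep Heketi and the GlusterFS pods together, since Heketi locates the GlusterFS pods by label within one namespace):

# Deploy the DaemonSet into a dedicated namespace instead of default
kubectl create namespace glusterfs
kubectl apply -f glusterfs-daemonset.json -n glusterfs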
[ kubernetes]# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
glusterfs-9tttf   1/1     Running   0          1m10s
glusterfs-gnrnr   1/1     Running   0          1m10s
glusterfs-v92j5   1/1     Running   0          1m10s
Create the Heketi service account:
[ kubernetes]# cat heketi-service-account.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
[ kubernetes]# kubectl apply -f heketi-service-account.json
serviceaccount/heketi-service-account created
[ kubernetes]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         71m
heketi-service-account   1         5s
Grant the service account permission to manage the GlusterFS pods, then create a secret from the Heketi configuration file:
[ kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
[ kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
secret/heketi-config-secret created
Likewise, the Heketi deployment files need the API version updated and a selector declaration added.
[ kubernetes]# vim heketi-bootstrap.json
...
"kind": "Deployment",
"apiVersion": "apps/v1"
...
"spec": {
    "selector": {
        "matchLabels": {
            "name": "deploy-heketi"
        }
    },
...
[ kubernetes]# kubectl create -f heketi-bootstrap.json
service/deploy-heketi created
deployment.apps/deploy-heketi created
[ kubernetes]# vim heketi-deployment.json
...
"kind": "Deployment",
"apiVersion": "apps/v1",
...
"spec": {
    "selector": {
        "matchLabels": {
            "name": "heketi"
        }
    },
    "replicas": 1,
...
[ kubernetes]# kubectl apply -f heketi-deployment.json
secret/heketi-db-backup created
service/heketi created
deployment.apps/heketi created
[ kubernetes]# kubectl get pods
NAME                             READY   STATUS              RESTARTS   AGE
deploy-heketi-6c687b4b84-p7mcr   1/1     Running             0          72s
heketi-68795ccd8-9726s           0/1     ContainerCreating   0          50s
glusterfs-9tttf                  1/1     Running             0          48m
glusterfs-gnrnr                  1/1     Running             0          48m
glusterfs-v92j5                  1/1     Running             0          48m
Copy heketi-cli to the /usr/local/bin directory:
[ heketi-client]# pwd
/root/manifests/glusterfs/heketi-client
[ heketi-client]# cp bin/heketi-cli /usr/local/bin/
[ heketi-client]# heketi-cli -v
heketi-cli v7.0.0
Edit topology-sample: manage is the hostname of the node running the GlusterFS management service, storage is the node's IP address, and device is a raw block device on the node. The disks providing storage are best used as raw devices, without partitioning.
Therefore, a new disk must be prepared on each GlusterFS node in advance. Here a 10G /dev/sdb disk device was added to each of the three nodes.
The output is identical on all three nodes:
[ ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
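Heketi expects these devices to be bare: no partition table, filesystem, or LVM signatures. If a disk has been used before, it can be cleaned first (destructive; assumes /dev/sdb holds nothing you need):

# DESTRUCTIVE: erase all filesystem/LVM/partition signatures from /dev/sdb
wipefs -a /dev/sdb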
Configure topology-sample:
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "k8s-master-01" ], "storage": [ "192.168.2.10" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] }, { "node": { "hostnames": { "manage": [ "k8s-node-01" ], "storage": [ "192.168.2.11" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] }, { "node": { "hostnames": { "manage": [ "k8s-node-02" ], "storage": [ "192.168.2.12" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] } ] } ] }
Check the current Heketi ClusterIP and export it as an environment variable:
[ kubernetes]# kubectl get svc | grep heketi
deploy-heketi   ClusterIP   10.1.241.99   <none>   8080/TCP   3m18s
[ kubernetes]# curl http://10.1.241.99:8080/hello
Hello from Heketi
[ kubernetes]# export HEKETI_CLI_SERVER=http://10.1.241.99:8080
[ kubernetes]# echo $HEKETI_CLI_SERVER
http://10.1.241.99:8080
Running the following command to create the GlusterFS cluster reports "Invalid JWT token: Token missing iss claim":
[ kubernetes]# heketi-cli topology load --json=topology-sample.json
Error: Unable to get topology information: Invalid JWT token: Token missing iss claim
This is because newer versions of Heketi require the username and secret to be supplied when creating the GlusterFS cluster; the corresponding values are configured in the heketi.json file.
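The credentials live in the jwt section of heketi.json, the file that was packed into heketi-config-secret earlier. This sketch shows the defaults shipped with the sample file (adjust if you changed the keys):

"jwt": {
    "admin": {
        "key": "My Secret"
    },
    "user": {
        "key": "My Secret"
    }
},

With --user and --secret supplied, the topology load succeeds: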
[ kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology load --json=topology-sample.json
Creating cluster ... ID: 1c5ffbd86847e5fc1562ef70c033292e
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node k8s-master-01 ... ID: b6100a5af9b47d8c1f19be0b2b4d8276
        Adding device /dev/sdb ... OK
    Creating node k8s-node-01 ... ID: 04740cac8d42f56e354c94bdbb7b8e34
        Adding device /dev/sdb ... OK
    Creating node k8s-node-02 ... ID: 1b33ad0dba20eaf23b5e3a4845e7cdb4
        Adding device /dev/sdb ... OK
After heketi-cli topology load completes, the operations Heketi performed on the servers can be traced in its logs, roughly as follows:
[ manifests]# kubectl logs -f deploy-heketi-6c687b4b84-l5b6j
...
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [pvs -o pv_name,pv_uuid,vg_name --reportformat=json /dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [ { "report": [ { "pv": [ {"pv_name":"/dev/sdb", "pv_uuid":"1UkSIV-RYt1-QBNw-KyAR-Drm5-T9NG-UmO313", "vg_name":"vg_398329cc70361dfd4baa011d811de94a"} ] } ] } ]: Stderr [
WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
WARNING: Device /dev/centos/root not initialized in udev database even after waiting 10000000 microseconds.
WARNING: Device /dev/sda1 not initialized in udev database even after waiting 10000000 microseconds.
WARNING: Device /dev/centos/swap not initialized in udev database even after waiting 10000000 microseconds.
WARNING: Device /dev/sda2 not initialized in udev database even after waiting 10000000 microseconds.
WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [udevadm info --query=symlink --name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [udevadm info --query=symlink --name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [ ]: Stderr []
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[negroni] 2019-10-23T02:17:44Z | 200 | 93.868μs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [ vg_398329cc70361dfd4baa011d811de94a:r/w:772:-1:0:0:0:-1:0:1:1:10350592:4096:2527:0:2527:YCPG9X-b270-1jf2-VwKX-ycpZ-OI9u-7ZidOc ]: Stderr []
[cmdexec] DEBUG 2019/10/23 02:17:44 heketi/executors/cmdexec/device.go:273:cmdexec.(*CmdExecutor).getVgSizeFromNode: /dev/sdb in k8s-node-01 has TotalSize:10350592, FreeSize:10350592, UsedSize:0
[heketi] INFO 2019/10/23 02:17:44 Added device /dev/sdb
[asynchttp] INFO 2019/10/23 02:17:44 Completed job 3d0b6edb0faa67e8efd752397f314a6f in 3m2.694238221s
[negroni] 2019-10-23T02:17:45Z | 204 | 105.23μs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f
[cmdexec] INFO 2019/10/23 02:17:45 Check Glusterd service status in node k8s-node-01
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 02:17:45 Adding node k8s-node-02
[negroni] 2019-10-23T02:17:45Z | 202 | 146.998544ms | 10.1.241.99:8080 | POST /nodes
[asynchttp] INFO 2019/10/23 02:17:45 Started job 8da70b6fd6fec1d61c4ba1cd0fe27fe5
[cmdexec] INFO 2019/10/23 02:17:45 Probing: k8s-node-01 -> 192.168.2.12
[negroni] 2019-10-23T02:17:45Z | 200 | 74.577μs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster --mode=script --timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[negroni] 2019-10-23T02:17:46Z | 200 | 79.893μs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster --mode=script --timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [peer probe: success. ]: Stderr []
[cmdexec] INFO 2019/10/23 02:17:46 Setting snapshot limit
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster --mode=script --timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster --mode=script --timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [snapshot config: snap-max-hard-limit for System set successfully ]: Stderr []
[heketi] INFO 2019/10/23 02:17:46 Added node 1b33ad0dba20eaf23b5e3a4845e7cdb4
[asynchttp] INFO 2019/10/23 02:17:46 Completed job 8da70b6fd6fec1d61c4ba1cd0fe27fe5 in 488.404011ms
[negroni] 2019-10-23T02:17:46Z | 303 | 80.712μs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[negroni] 2019-10-23T02:17:46Z | 200 | 242.595μs | 10.1.241.99:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4
[heketi] INFO 2019/10/23 02:17:46 Adding device /dev/sdb to node 1b33ad0dba20eaf23b5e3a4845e7cdb4
[negroni] 2019-10-23T02:17:46Z | 202 | 696.018μs | 10.1.241.99:8080 | POST /devices
[asynchttp] INFO 2019/10/23 02:17:46 Started job 21af2069b74762a5521a46e2b52e7d6a
[negroni] 2019-10-23T02:17:46Z | 200 | 82.354μs | 10.1.241.99:8080 | GET /queue/21af2069b74762a5521a46e2b52e7d6a
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [pvcreate -qq --metadatasize=128M --dataalignment=256K '/dev/sdb'] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
...
The Heketi deployed above has no persistent volume configured, so if the Heketi pod restarts its previous configuration may be lost. Now create a persistent volume for Heketi's data. The persistence here uses the dynamic storage provided by GlusterFS itself; other persistence methods also work.
Install device-mapper* on all nodes:
yum install -y device-mapper*
Save the configuration as a file and create the persistence-related resources:
[ kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' setup-openshift-heketi-storage
Saving heketi-storage.json
[ kubernetes]# kubectl apply -f heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
Delete the intermediate bootstrap resources:
[ kubernetes]# kubectl delete all,svc,jobs,deployment,secret --selector="deploy-heketi"
pod "deploy-heketi-6c687b4b84-l5b6j" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-6c687b4b84" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
Create the persistent Heketi:
[ kubernetes]# kubectl apply -f heketi-deployment.json
secret/heketi-db-backup created
service/heketi created
deployment.apps/heketi created
[ kubernetes]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d          1/1     Running   0          41m
glusterfs-l2lsv          1/1     Running   0          41m
glusterfs-lrdz7          1/1     Running   0          41m
heketi-68795ccd8-m8x55   1/1     Running   0          32s
Check the svc of the persistent Heketi and re-export the environment variable:
[ kubernetes]# kubectl get svc
NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.1.45.61   <none>        8080/TCP   2m9s
heketi-storage-endpoints   ClusterIP   10.1.26.73   <none>        1/TCP      4m58s
kubernetes                 ClusterIP   10.1.0.1     <none>        443/TCP    14h
[ kubernetes]# export HEKETI_CLI_SERVER=http://10.1.45.61:8080
[ kubernetes]# curl http://10.1.45.61:8080/hello
Hello from Heketi
View the GlusterFS cluster information; for more operations, see the official documentation:
[ kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology info

Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e

    File:  true
    Block: true

    Volumes:

        Name: heketidbstorage
        Size: 2
        Id: b25f4b627cf66279bfe19e8a01e9e85d
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Mount: 192.168.2.11:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.2.12,192.168.2.10
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

            Bricks:
                Id: 3ab6c19b8fe0112575ba04d58573a404
                Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick
                Size (GiB): 2
                Node: b6100a5af9b47d8c1f19be0b2b4d8276
                Device: 703e3662cbd8ffb24a6401bb3c3c41fa

                Id: d1fa386f2ec9954f4517431163f67dea
                Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick
                Size (GiB): 2
                Node: 04740cac8d42f56e354c94bdbb7b8e34
                Device: 398329cc70361dfd4baa011d811de94a

                Id: d2b0ae26fa3f0eafba407b637ca0d06b
                Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick
                Size (GiB): 2
                Node: 1b33ad0dba20eaf23b5e3a4845e7cdb4
                Device: 7c791bbb90f710123ba431a7cdde8d0b

    Nodes:

        Node Id: 04740cac8d42f56e354c94bdbb7b8e34
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-node-01
        Storage Hostnames: 192.168.2.11
        Devices:
            Id:398329cc70361dfd4baa011d811de94a   Name:/dev/sdb   State:online   Size (GiB):9   Used (GiB):2   Free (GiB):7
                Bricks:
                    Id:d1fa386f2ec9954f4517431163f67dea   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick

        Node Id: 1b33ad0dba20eaf23b5e3a4845e7cdb4
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-node-02
        Storage Hostnames: 192.168.2.12
        Devices:
            Id:7c791bbb90f710123ba431a7cdde8d0b   Name:/dev/sdb   State:online   Size (GiB):9   Used (GiB):2   Free (GiB):7
                Bricks:
                    Id:d2b0ae26fa3f0eafba407b637ca0d06b   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick

        Node Id: b6100a5af9b47d8c1f19be0b2b4d8276
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-master-01
        Storage Hostnames: 192.168.2.10
        Devices:
            Id:703e3662cbd8ffb24a6401bb3c3c41fa   Name:/dev/sdb   State:online   Size (GiB):9   Used (GiB):2   Free (GiB):7
                Bricks:
                    Id:3ab6c19b8fe0112575ba04d58573a404   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick
[ kubernetes]# vim storageclass-gfs-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://10.1.45.61:8080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "My Secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
allowVolumeExpansion: true
[ kubernetes]# kubectl apply -f storageclass-gfs-heketi.yaml
storageclass.storage.k8s.io/gluster-heketi created
Parameter notes:
volumetype: the volume type and replica count; a replicated volume is used here, and the count must be greater than 1.
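For reference, other volumetype values accepted by the kubernetes.io/glusterfs provisioner (per the Kubernetes documentation) include:

volumetype: "replicate:3"    # 3-way replicated volume, as used in this article
volumetype: "disperse:4:2"   # dispersed volume: 4 data bricks + 2 redundancy bricks
volumetype: "none"           # distributed volume, no redundancy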
To have a pod consume a dynamic PV, set storageClassName in the PVC to the name of the StorageClass created above, i.e. gluster-heketi:
[ kubernetes]# vim pod-use-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 1Gi
Create the pod and view the resulting PV and PVC:
[ kubernetes]# kubectl apply -f pod-use-pvc.yaml
pod/pod-use-pvc created
persistentvolumeclaim/pvc-gluster-heketi created
[ kubernetes]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS     REASON   AGE
persistentvolume/pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            Retain           Bound    default/pvc-gluster-heketi   gluster-heketi            57s
NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/pvc-gluster-heketi   Bound    pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            gluster-heketi   62s
Creating the PVC triggers a request to the StorageClass, which provisions the matching PV; the details can be traced in the logs of the Heketi pod.
First, after receiving the request, Heketi runs a job that creates three bricks, making the corresponding directories on the three GlusterFS nodes:
[heketi] INFO 2019/10/23 03:08:36 Allocating brick set #0
[negroni] 2019-10-23T03:08:36Z | 202 | 56.193603ms | 10.1.45.61:8080 | POST /volumes
[asynchttp] INFO 2019/10/23 03:08:36 Started job 3ec932315085609bc54ead6e3f6851e8
[heketi] INFO 2019/10/23 03:08:36 Started async operation: Create Volume
[heketi] INFO 2019/10/23 03:08:36 Trying Create Volume (attempt #1/5)
[heketi] INFO 2019/10/23 03:08:36 Creating brick 289fe032c1f4f9f211480e24c5d74a44
[heketi] INFO 2019/10/23 03:08:36 Creating brick a3172661ba1b849d67b500c93c3dd652
[heketi] INFO 2019/10/23 03:08:36 Creating brick 917e27a9dbc5395ebf08dff8d3401b43
[negroni] 2019-10-23T03:08:36Z | 200 | 72.083μs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 1
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
Next it creates the LVs, formats them, and adds automount entries:
[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout [meta-data=/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
]: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [awk "BEGIN {print \"/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
Then it creates the brick directories and sets ownership and permissions:
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chown :40000 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[negroni] 2019-10-23T03:08:38Z | 200 | 83.159μs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43/brick] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr []
[cmdexec] INFO 2019/10/23 03:08:38 Creating volume vol_08e8447256de2598952dcb240e615d0f replica 3
Finally it creates the corresponding volume:
[asynchttp] INFO 2019/10/23 03:08:41 Completed job 3ec932315085609bc54ead6e3f6851e8 in 5.007631648s
[negroni] 2019-10-23T03:08:41Z | 303 | 78.335μs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[negroni] 2019-10-23T03:08:41Z | 200 | 5.751689ms | 10.1.45.61:8080 | GET /volumes/08e8447256de2598952dcb240e615d0f
[negroni] 2019-10-23T03:08:41Z | 200 | 139.05μs | 10.1.45.61:8080 | GET /clusters/1c5ffbd86847e5fc1562ef70c033292e
[negroni] 2019-10-23T03:08:41Z | 200 | 660.249μs | 10.1.45.61:8080 | GET /nodes/04740cac8d42f56e354c94bdbb7b8e34
[negroni] 2019-10-23T03:08:41Z | 200 | 270.334μs | 10.1.45.61:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4
[negroni] 2019-10-23T03:08:41Z | 200 | 345.528μs | 10.1.45.61:8080 | GET /nodes/b6100a5af9b47d8c1f19be0b2b4d8276
[heketi] INFO 2019/10/23 03:09:39 Starting Node Health Status refresh
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-node-01
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node 04740cac8d42f56e354c94bdbb7b8e34 up=true
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-node-02
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node 1b33ad0dba20eaf23b5e3a4845e7cdb4 up=true
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-master-01
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node b6100a5af9b47d8c1f19be0b2b4d8276 up=true
[heketi] INFO 2019/10/23 03:09:39 Cleaned 0 nodes from health cache
To test whether pods using this PV can share data, exec into the pod and create a file:
[ kubernetes]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d          1/1     Running   0          90m
glusterfs-l2lsv          1/1     Running   0          90m
glusterfs-lrdz7          1/1     Running   0          90m
heketi-68795ccd8-m8x55   1/1     Running   0          49m
pod-use-pvc              1/1     Running   0          20m
[ kubernetes]# kubectl exec -it pod-use-pvc /bin/sh
/ # cd /pv-data/
/pv-data # echo "hello world" > a.txt
/pv-data # cat a.txt
hello world
View the created volumes:
[ kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume list
Id:08e8447256de2598952dcb240e615d0f    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:vol_08e8447256de2598952dcb240e615d0f
Id:b25f4b627cf66279bfe19e8a01e9e85d    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:heketidbstorage
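A single volume can be inspected further with the volume info subcommand, using an Id from the listing above, for example:

heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume info 08e8447256de2598952dcb240e615d0f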
Mount the volume on the host to inspect its data; vol_08e8447256de2598952dcb240e615d0f is the volume name:
[ kubernetes]# mount -t glusterfs 192.168.2.10:vol_08e8447256de2598952dcb240e615d0f /mnt
[ kubernetes]# ll /mnt/
total 1
-rw-r--r-- 1 root 40000 12 Oct 23 11:29 a.txt
[ kubernetes]# cat /mnt/a.txt
hello world
To test that a Deployment-managed workload can use the StorageClass normally, create an nginx Deployment:
[ kubernetes]# vim nginx-deployment-gluster.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  selector:
    matchLabels:
      name: nginx
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-gfs-html
          mountPath: "/usr/share/nginx/html"
        - name: nginx-gfs-conf
          mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 10Mi
View the related resources:
[ kubernetes]# kubectl get pod,pv,pvc | grep nginx
pod/nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          2m45s
pod/nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          2m45s
persistentvolume/pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi   RWX   Retain   Bound   default/glusterfs-nginx-conf   gluster-heketi   2m34s
persistentvolume/pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi   RWX   Retain   Bound   default/glusterfs-nginx-html   gluster-heketi   2m34s
persistentvolumeclaim/glusterfs-nginx-conf   Bound   pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi   RWX   gluster-heketi   2m45s
persistentvolumeclaim/glusterfs-nginx-html   Bound   pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi   RWX   gluster-heketi   2m45s
Check the mounts inside a pod:
[ kubernetes]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- df -Th
Filesystem                                         Type            Size  Used Avail Use% Mounted on
overlay                                            overlay          44G  3.2G   41G   8% /
tmpfs                                              tmpfs            64M     0   64M   0% /dev
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/centos-root                            xfs              44G  3.2G   41G   8% /etc/hosts
shm                                                tmpfs            64M     0   64M   0% /dev/shm
192.168.2.10:vol_adf6fc08c8828fdda27c8aa5ce99b50c  fuse.glusterfs 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9  fuse.glusterfs 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                              tmpfs           2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /proc/acpi
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /proc/scsi
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /sys/firmware
Mount the html volume on the host and create a file:
[ kubernetes]# mount -t glusterfs 192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9 /mnt/
[ kubernetes]# cd /mnt/
[ mnt]# echo "hello world" > index.html
[ mnt]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- cat /usr/share/nginx/html/index.html
hello world
Scale out the nginx replicas and check that the new pod mounts the volume normally:
[ mnt]# kubectl scale deployment nginx-gfs --replicas=3
deployment.apps/nginx-gfs scaled
[ mnt]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d              1/1     Running   0          129m
glusterfs-l2lsv              1/1     Running   0          129m
glusterfs-lrdz7              1/1     Running   0          129m
heketi-68795ccd8-m8x55       1/1     Running   0          88m
nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          8m55s
nginx-gfs-7d66cccf76-qzqnv   1/1     Running   0          23s
nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          8m55s
[ mnt]# kubectl exec -it nginx-gfs-7d66cccf76-qzqnv -- cat /usr/share/nginx/html/index.html
hello world
This completes the deployment of Heketi + GlusterFS for dynamic storage provisioning in a Kubernetes cluster.
References:
https://github.com/heketi/heketi
https://github.com/gluster/gluster-kubernetes
https://www.cnblogs.com/jicki/p/5801712.html