How to Install and Configure GlusterFS


This post walks through installing and configuring GlusterFS step by step; I hope you find it useful.

GlusterFS is an open-source distributed file system; Gluster, the company behind it, was acquired by Red Hat in 2011. It offers high scalability, high performance, high availability, and elastic scale-out, and because its design has no metadata server, GlusterFS has no single point of failure. For details, see the official site: www.gluster.org.

Deployment environment: 
OS: CentOS release 6.5 (Final) x64 
Server: 
c1:192.168.242.132 
c2:192.168.242.133 
c3:192.168.242.134 
c4:192.168.242.135 
/etc/hosts entries on every node (see the snippet after this list): 
192.168.242.132 c1 
192.168.242.133 c2 
192.168.242.134 c3 
192.168.242.135 c4
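
The hostnames above must resolve on every node. A minimal sketch for appending the entries to /etc/hosts (run on each of c1-c4, and later on the client too; skip any entries that already exist):

# Append the cluster hostnames to /etc/hosts
cat >> /etc/hosts << 'EOF'
192.168.242.132 c1
192.168.242.133 c2
192.168.242.134 c3
192.168.242.135 c4
EOF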

Installation steps: 
Run the following on c1, c2, c3, and c4: 
[root@c1 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo 
[root@c1 yum.repos.d]# yum install -y glusterfs glusterfs-server glusterfs-fuse 
[root@c1 yum.repos.d]# /etc/init.d/glusterd start 
Starting glusterd: [ OK ] 
[root@c1 yum.repos.d]# chkconfig glusterd on
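
A quick sanity check after installation (the exact version string depends on what the LATEST repository pointed to at install time):

glusterfs --version         # confirm the installed GlusterFS release
service glusterd status     # glusterd should be running
chkconfig --list glusterd   # runlevels 3-5 should be "on"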

Configure the trusted storage pool on c1: 
[root@c1 ~]# gluster peer probe c1 
peer probe: success. Probe on localhost not needed 
[root@c1 ~]# gluster peer probe c2

peer probe: success. 
[root@c1 ~]# gluster peer probe c3 
peer probe: success. 
[root@c1 ~]# gluster peer probe c4 
peer probe: success.

If c1 shows up in the peer list by its IP address instead of its hostname, the cluster can hit communication problems later on. 
This can be fixed from one of the other nodes by detaching the IP entry and re-probing by hostname: 
[root@c3 ~]# gluster peer status 
Number of Peers: 3

Hostname: 192.168.242.132 
Uuid: 6e8d6880-ec36-4331-a806-2e8fb4fda7be 
State: Peer in Cluster (Connected)

Hostname: c2 
Uuid: 9a722f50-911e-4181-823d-572296640486 
State: Peer in Cluster (Connected)

Hostname: c4 
Uuid: 1ee3588a-8a16-47ff-ba59-c0285a2a95bd 
State: Peer in Cluster (Connected) 
[root@c3 ~]# gluster peer detach 192.168.242.132 
peer detach: success 
[root@c3 ~]# gluster peer probe c1 
peer probe: success. 
[root@c3 ~]# gluster peer status 
Number of Peers: 3

Hostname: c2 
Uuid: 9a722f50-911e-4181-823d-572296640486 
State: Peer in Cluster (Connected)

Hostname: c4 
Uuid: 1ee3588a-8a16-47ff-ba59-c0285a2a95bd 
State: Peer in Cluster (Connected)

Hostname: c1 
Uuid: 6e8d6880-ec36-4331-a806-2e8fb4fda7be 
State: Peer in Cluster (Connected)
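
Once every probe succeeds, it is worth confirming that each node sees the other three by hostname. A small sketch, assuming passwordless SSH from c1 to the other nodes; otherwise simply run gluster peer status on each node by hand:

# Each node should report 3 connected peers, listed by hostname
for node in c2 c3 c4; do
    echo "== $node =="
    ssh "$node" gluster peer status
done
gluster peer status   # and check c1 itself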

Create the cluster volumes on c1: 
[root@c1 ~]# gluster volume create datavolume1 replica 2 transport tcp c1:/usr/local/share/datavolume1 c2:/usr/local/share/datavolume1 c3:/usr/local/share/datavolume1 c4:/usr/local/share/datavolume1 force 
volume create: datavolume1: success: please start the volume to access data 
[root@c1 ~]# gluster volume create datavolume2 replica 2 transport tcp c1:/usr/local/share/datavolume2 c2:/usr/local/share/datavolume2 c3:/usr/local/share/datavolume2 c4:/usr/local/share/datavolume2 force 
volume create: datavolume2: success: please start the volume to access data 
[root@c1 ~]# gluster volume create datavolume3 replica 2 transport tcp c1:/usr/local/share/datavolume3 c2:/usr/local/share/datavolume3 c3:/usr/local/share/datavolume3 c4:/usr/local/share/datavolume3 force 
volume create: datavolume3: success: please start the volume to access data 
[root@c1 ~]# gluster volume start datavolume1 
volume start: datavolume1: success 
[root@c1 ~]# gluster volume start datavolume2 
volume start: datavolume2: success 
[root@c1 ~]# gluster volume start datavolume3 
volume start: datavolume3: success

[root@c1 ~]# gluster volume info

Volume Name: datavolume1 
Type: Distributed-Replicate 
Volume ID: 819d3dc4-2a3a-4342-b49b-3b7961ef624f 
Status: Started 
Number of Bricks: 2 x 2 = 4 
Transport-type: tcp 
Bricks: 
Brick1: c1:/usr/local/share/datavolume1 
Brick2: c2:/usr/local/share/datavolume1 
Brick3: c3:/usr/local/share/datavolume1 
Brick4: c4:/usr/local/share/datavolume1

Volume Name: datavolume2 
Type: Distributed-Replicate 
Volume ID: d9ebaee7-ef91-4467-9e44-217a63635bfc 
Status: Started 
Number of Bricks: 2 x 2 = 4 
Transport-type: tcp 
Bricks: 
Brick1: c1:/usr/local/share/datavolume2 
Brick2: c2:/usr/local/share/datavolume2 
Brick3: c3:/usr/local/share/datavolume2 
Brick4: c4:/usr/local/share/datavolume2

Volume Name: datavolume3 
Type: Distributed-Replicate 
Volume ID: 1e8b21db-f377-468b-b76e-868edde93f15 
Status: Started 
Number of Bricks: 2 x 2 = 4 
Transport-type: tcp 
Bricks: 
Brick1: c1:/usr/local/share/datavolume3 
Brick2: c2:/usr/local/share/datavolume3 
Brick3: c3:/usr/local/share/datavolume3 
Brick4: c4:/usr/local/share/datavolume3
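
With replica 2 and four bricks, each volume is a 2 x 2 distributed-replicate volume: the brick order on the create command defines the replica pairs, so c1/c2 mirror each other and c3/c4 mirror each other, while files are distributed between the two pairs. To confirm that all bricks are online:

gluster volume status datavolume1          # every brick should show Online: Y
gluster volume status datavolume1 detail   # per-brick disk usage and inode counts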

Client setup 
CentOS 6.5 x64, with the same /etc/hosts entries added: 
[root@c5 ~]#wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo 
[root@c5 ~]#yum install -y glusterfs glusterfs-fuse 
[root@c5 ~]# mkdir -p /mnt/{datavolume1,datavolume2,datavolume3} 
[root@c5 ~]# mount -t glusterfs -o ro c1:datavolume1 /mnt/datavolume1/ 
[root@c5 ~]# mount -t glusterfs -o ro c1:datavolume2 /mnt/datavolume2/ 
[root@c5 ~]# mount -t glusterfs -o ro c1:datavolume3 /mnt/datavolume3/ 
[root@c5 ~]# df -h 
Filesystem Size Used Avail Use% Mounted on 
/dev/mapper/VolGroup-lv_root 
38G 840M 36G 3% / 
tmpfs 242M 0 242M 0% /dev/shm 
/dev/sda1 485M 32M 429M 7% /boot 
c1:datavolume1 57G 2.4G 52G 5% /mnt/datavolume1 
c1:datavolume2 57G 2.4G 52G 5% /mnt/datavolume2 
c1:datavolume3 57G 2.4G 52G 5% /mnt/datavolume3
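
These mounts do not survive a reboot. A hedged /etc/fstab sketch for making them persistent; _netdev delays the mount until the network is up, and backupvolfile-server is optional, letting the client fetch the volume file from c2 if c1 is unreachable at mount time:

# /etc/fstab entries on the client, one per volume
c1:/datavolume1  /mnt/datavolume1  glusterfs  defaults,_netdev,backupvolfile-server=c2  0 0
c1:/datavolume2  /mnt/datavolume2  glusterfs  defaults,_netdev,backupvolfile-server=c2  0 0
c1:/datavolume3  /mnt/datavolume3  glusterfs  defaults,_netdev,backupvolfile-server=c2  0 0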

Client test 
[root@c5 ~]# umount /mnt/datavolume1/ 
[root@c5 ~]# mount -t glusterfs c1:datavolume1 /mnt/datavolume1/ 
[root@c5 ~]# touch /mnt/datavolume1/test.txt 
[root@c5 ~]# ls /mnt/datavolume1/test.txt 
/mnt/datavolume1/test.txt

[root@c2 ~]# ls -al /usr/local/share/datavolume1/ 
total 16 
drwxr-xr-x. 3 root root 4096 May 15 03:50 . 
drwxr-xr-x. 8 root root 4096 May 15 02:28 .. 
drw-------. 6 root root 4096 May 15 03:50 .glusterfs 
-rw-r--r--. 2 root root 0 May 20 2014 test.txt 
[root@c1 ~]# ls -al /usr/local/share/datavolume1/ 
total 16 
drwxr-xr-x. 3 root root 4096 May 15 03:50 . 
drwxr-xr-x. 8 root root 4096 May 15 02:28 .. 
drw-------. 6 root root 4096 May 15 03:50 .glusterfs 
-rw-r--r--. 2 root root 0 May 20 2014 test.txt
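
test.txt shows up on c1 and c2 because they form the first replica pair; a file whose name hashes to the other pair would land on c3 and c4 instead. A small sketch for watching the distribution (the file names are arbitrary examples):

# On the client: create a handful of files
for i in $(seq 1 10); do touch /mnt/datavolume1/file$i.txt; done

# On each server: list what its local brick actually holds
ls /usr/local/share/datavolume1/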

Delete a GlusterFS volume: 
gluster volume stop datavolume1 
gluster volume delete datavolume1
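
Deleting a volume does not remove the data on the bricks, and GlusterFS refuses to reuse a brick directory that still carries the old volume's extended attributes. A cleanup sketch for reusing a brick path, run on every node that hosted one; setfattr comes from the attr package, and the rm is destructive, so double-check the path:

BRICK=/usr/local/share/datavolume1
setfattr -x trusted.glusterfs.volume-id "$BRICK"   # drop the old volume ID
setfattr -x trusted.gfid "$BRICK"                  # drop the root gfid
rm -rf "$BRICK/.glusterfs"                         # remove GlusterFS-internal metadata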

Remove a node from the trusted pool (the node must no longer host any bricks): 
gluster peer detach c4

Access control (restrict which client networks may mount a volume): 
gluster volume set datavolume1 auth.allow 192.168.242.*,192.168.241.*
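
The option takes effect immediately; it can be verified in the volume info output and cleared again with volume reset:

gluster volume info datavolume1 | grep auth.allow   # should list the allowed networks
gluster volume reset datavolume1 auth.allow         # revert to the default (allow all clients)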

Add GlusterFS nodes and expand a volume: 
gluster peer probe c6 
gluster peer probe c7 
gluster volume add-brick datavolume1 c6:/usr/local/share/datavolume1 c7:/usr/local/share/datavolume1
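
Because datavolume1 is a replica 2 volume, bricks must be added in multiples of two; the c6/c7 pair above becomes a new replica set, and existing files are not moved onto it until a rebalance is run (see the rebalance commands below). A quick check after adding:

gluster volume info datavolume1 | grep 'Number of Bricks'   # should now read 3 x 2 = 6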

Migrate data off bricks (shrink a volume): 
gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 start 
gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 status 
gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 commit
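
Only run the commit step after the status output reports the migration as completed; committing early can drop files that have not been moved yet. A small polling sketch, assuming the status column reads "completed" when migration is done (the 10-second interval is arbitrary):

while ! gluster volume remove-brick datavolume1 \
        c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 status \
        | grep -q completed; do
    sleep 10   # wait and poll again
done
echo "remove-brick migration completed, safe to commit"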

Rebalance the data: 
gluster volume rebalance datavolume1 start 
gluster volume rebalance datavolume1 status 
gluster volume rebalance datavolume1 stop

Repair GlusterFS volume data (for example, when c1 has gone down): 
gluster volume replace-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 commit force 
gluster volume heal datavolume1 full
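
After the replace-brick and full heal, the heal info subcommands show whether the new brick has caught up:

gluster volume heal datavolume1 info               # pending heal entries, should drain to zero
gluster volume heal datavolume1 info split-brain   # should stay empty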

That covers how to install and configure GlusterFS. Thanks for reading, and I hope it helps.
