[Linux] GlusterFS Deployment in Detail



0. Things to know before installing:

XFS is recommended; if XFS is not used, ext4 is the next choice, and other file systems are also compatible.
In production, DNS and NTP services must be set up; for DNS configuration, see [here](https://zeven0707.github.io/2018/11/22/DNS%E6%9C%8D%E5%8A%A1%E9%83%A8%E7%BD%B2/).
A virtual machine environment needs at least 1 GB of memory.
It is best to configure two NICs: one management interface and one data-transfer interface.
If you clone extra machines from a VM image, make sure GlusterFS is not installed on the image. Installing Gluster generates a UUID on each system, so cloning a system that already has Gluster installed will cause errors later (see the sketch after this list for how to check).
For physical servers, the recommended minimum configuration is 2 CPUs, 2 GB of RAM and 1 GbE; on-board components are generally less capable than add-on components.
For the official installation documentation, see [here](https://docs.gluster.org/en/latest/Install-Guide/Install/).
For the CentOS 7 quick-start guide, see [here](https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart).
/var/log is best mounted on its own; once the log directory can no longer be written to, GlusterFS starts behaving strangely.
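
A minimal sketch of the clone check mentioned above, assuming glusterd keeps its node UUID in /var/lib/glusterd/glusterd.info (the usual default location):

#if this file exists on a freshly cloned machine, the clone inherited a UUID
cat /var/lib/glusterd/glusterd.info
#clear the inherited state before the first start so a new UUID is generated
systemctl stop glusterd
rm -f /var/lib/glusterd/glusterd.info
systemctl start glusterd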

1. Configure the yum repository

yum install centos-release-gluster

2. On both nodes, add a disk, format it, and mount the directory

#Format the disk and create the corresponding mount directory
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /gfs/test1
#Write the mount entry into /etc/fstab so it is mounted automatically after a reboot
vi /etc/fstab
/dev/sdb1 /gfs/test1 xfs defaults 1 2
#Apply the updated configuration
mount -a && mount

Note: On CentOS 6, the XFS tools need to be installed:

yum install xfsprogs

3. Install and configure the glusterd service

yum install glusterfs-server

Start the GlusterFS management daemon

#Enable start at boot
systemctl enable glusterd
ln -s '/usr/lib/systemd/system/glusterd.service' '/etc/systemd/system/multi-user.target.wants/glusterd.service'
#Start glusterd
systemctl start glusterd
#Check the glusterd status
systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2018-11-15 12:08:54 EST; 15s ago
Process: 2808 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2810 (glusterd)
Tasks: 8
CGroup: /system.slice/glusterd.service
└─2810 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Nov 15 12:08:53 node1 systemd[1]: Starting GlusterFS, a clustered file-system server...
Nov 15 12:08:54 node1 systemd[1]: Started GlusterFS, a clustered file-system server.

4. Firewall configuration

By default glusterd listens on tcp/24007, but every brick you add opens a new port, which can be found with the command "gluster volume status". So if the nodes have firewalls configured, remember to adjust the port rules after adding a brick; otherwise the trusted-pool configuration below will fail.
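
For example, a minimal sketch using firewalld on CentOS 7; the brick port range below is an assumption based on the usual default starting port of 49152, so check "gluster volume status" for the ports actually in use:

#management ports used by glusterd
firewall-cmd --permanent --add-port=24007-24008/tcp
#range for brick ports (adjust to the ports reported by "gluster volume status")
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload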

5. Configure the trusted pool

On node1, add node2 to the trusted pool:

[root@node1 ~]# gluster peer probe node2
peer probe: success.

If the firewall is enabled, you may get the following error:

[root@node1 ~]# gluster peer probe node2
peer probe: failed: Probe returned with Transport endpoint is not connected

On node2, add node1 to the trusted pool:

[root@node2 ~]# gluster peer probe node1
peer probe: success. Host node1 port 24007 already in peer list
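
The pool membership can then be verified from either node; both commands below are standard gluster CLI calls (gluster pool list is available in reasonably recent releases):

gluster peer status
gluster pool list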

6. Create a GlusterFS volume

On node1 and node2:

mkdir /gfs/test1/gv0

Run this on any one node only; it does not need to be repeated:

# gluster volume create gv0 replica 2 node1:/gfs/test1/gv0 node2:/gfs/test1/gv0
[root@node1 ~]# gluster volume create gv0 replica 2 node1:/gfs/test1/gv0 node2:/gfs/test1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: gv0: success: please start the volume to access data

The warning says that with only 2 replicas split-brain may occur; a GlusterFS volume with an arbiter needs at least three nodes, so with a two-node replica set we can only ignore this warning here.
Next, start the newly created gv0 volume:

# gluster volume start gv0
[root@node1 ~]# gluster volume start gv0
volume start: gv0: success
Confirm that the volume shows "Started":

View information about the started gv0 volume:

# gluster volume info
[root@node1 ~]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 79c81f10-0cb8-4f26-a7ab-d21fe19f0bbf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv0
Brick2: node2:/gfs/test1/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

If the volume does not start properly, check the logs under /var/log/glusterfs.
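
For example (the brick log file name is derived from the brick path, so the exact name below is an assumption for this setup):

tail -n 50 /var/log/glusterfs/glusterd.log
tail -n 50 /var/log/glusterfs/bricks/gfs-test1-gv0.log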

7. Test whether replication on GlusterFS volume gv0 works

#Mount to any empty directory
mount -t glusterfs node1:/gv0 /mnt
#Create test data
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
#Count the generated files
ls /mnt | wc -l
#100 files appear under /gfs/test1/gv0 on both node1 and node2
ls /gfs/test1/gv0 |wc -l
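
To additionally confirm that the two bricks hold identical copies, a quick checksum spot check can be run (file name taken from the test data generated above):

#should print the same checksum on node1 and node2
md5sum /gfs/test1/gv0/copy-test-001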

As an aside, a dispersed volume cannot be created with a disperse count of 2:

[root@node1 ~]# gluster volume create gv1 disperse 2 node1:/gfs/test1/gv1 node2:/gfs/test1/gv1
disperse count must be greater than 2
disperse option given twice

Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]

8. Add a new node; since the trusted pool already exists, the new node node3 must be probed from a node that is already in the trusted pool

[root@node1 ~]# gluster peer probe node3
[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: 588d2f92-f085-4e74-ab63-c6f5aa6ffb24
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 3495bc9c-7330-4038-ac94-1777ba0286f5
State: Peer in Cluster (Connected)

9. Create a 2-node distributed volume

#When a volume is created without any extra options, the default mode is distributed:
#gluster volume create gv3 node1:/gfs/test1/gv3 node2:/gfs/test1/gv3
[root@node1 ~]# gluster volume create gv3 node1:/gfs/test1/gv3 node2:/gfs/test1/gv3
volume create: gv3: success: please start the volume to access data
#Start gv3
# gluster volume start gv3
[root@node1 ~]# gluster volume start gv3

[root@node1 ~]# gluster volume info gv3

Volume Name: gv3
Type: Distribute
Volume ID: 02e3163b-1c69-402f-aad5-9cf4dba3267c
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv3
Brick2: node2:/gfs/test1/gv3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

10. Test GlusterFS volume gv3

#Mount and create test data
mount -t glusterfs node1:/gv3 /mnt
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
#View the data on node1:
[root@node1 gv3]# ls /mnt | wc -l
100
#node1: count the files in the actual brick directory:
[root@node1 gv3]# ls /gfs/test1/gv3/ |wc -l
50
#node2: count the files in the actual brick directory:
[root@node2 gv3]# ls /gfs/test1/gv3/ |wc -l
50
#Although node3 holds no bricks of this volume, mounting /gv3 on node3 also shows the data:
mount -t glusterfs 127.0.0.1:/gv3 /mnt
[root@node3 mnt]# ls /mnt/ |wc -l
100

11. Create a distributed replicated volume

# gluster volume create gv4 replica 2 transport tcp node1:/gfs/test1/gv4 node2:/gfs/test1/gv4 node3:/gfs/test1/gv4 node4:/gfs/test1/gv4
[root@node1 test1]# gluster volume create gv4 replica 2 transport tcp node1:/gfs/test1/gv4 node2:/gfs/test1/gv4 node3:/gfs/test1/gv4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
number of bricks is not a multiple of replica count
#The number of bricks must be a multiple of the replica count
[root@node1 test1]# gluster volume create gv4 replica 2 transport tcp node1:/gfs/test1/gv4 node2:/gfs/test1/gv4 node3:/gfs/test1/gv4 node4:/gfs/test1/gv4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: gv4: success: please start the volume to access data
[root@node1 test1]# gluster volume start gv4
[root@node1 test1]# gluster volume info gv4
Volume Name: gv4
Type: Distributed-Replicate
Volume ID: e8556b2e-462d-4407-99c4-a6e622754e6c
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv4
Brick2: node2:/gfs/test1/gv4
Brick3: node3:/gfs/test1/gv4
Brick4: node4:/gfs/test1/gv4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

12. Test GlusterFS volume gv4:

#Mount and create test data
mount -t glusterfs node1:/gv4 /mnt
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
#Check the data distribution: node1 and node2 hold the same files, node3 and node4 store the other half
[root@node1 test1]# ls /gfs/test1/gv4
copy-test-001 copy-test-012 copy-test-021 copy-test-029 copy-test-034 copy-test-048 copy-test-060 copy-test-078 copy-test-086 copy-test-094
copy-test-004 copy-test-015 copy-test-022 copy-test-030 copy-test-038 copy-test-051 copy-test-063 copy-test-079 copy-test-087 copy-test-095
copy-test-006 copy-test-016 copy-test-023 copy-test-031 copy-test-039 copy-test-052 copy-test-065 copy-test-081 copy-test-088 copy-test-098
copy-test-008 copy-test-017 copy-test-024 copy-test-032 copy-test-041 copy-test-054 copy-test-073 copy-test-082 copy-test-090 copy-test-099
copy-test-011 copy-test-019 copy-test-028 copy-test-033 copy-test-046 copy-test-057 copy-test-077 copy-test-083 copy-test-093 copy-test-100

[root@node2 gv0]# ls /gfs/test1/gv4
copy-test-001 copy-test-012 copy-test-021 copy-test-029 copy-test-034 copy-test-048 copy-test-060 copy-test-078 copy-test-086 copy-test-094
copy-test-004 copy-test-015 copy-test-022 copy-test-030 copy-test-038 copy-test-051 copy-test-063 copy-test-079 copy-test-087 copy-test-095
copy-test-006 copy-test-016 copy-test-023 copy-test-031 copy-test-039 copy-test-052 copy-test-065 copy-test-081 copy-test-088 copy-test-098
copy-test-008 copy-test-017 copy-test-024 copy-test-032 copy-test-041 copy-test-054 copy-test-073 copy-test-082 copy-test-090 copy-test-099
copy-test-011 copy-test-019 copy-test-028 copy-test-033 copy-test-046 copy-test-057 copy-test-077 copy-test-083 copy-test-093 copy-test-100

[root@node3 test]# ls /gfs/test1/gv4
copy-test-002 copy-test-010 copy-test-025 copy-test-037 copy-test-045 copy-test-055 copy-test-062 copy-test-069 copy-test-075 copy-test-089
copy-test-003 copy-test-013 copy-test-026 copy-test-040 copy-test-047 copy-test-056 copy-test-064 copy-test-070 copy-test-076 copy-test-091
copy-test-005 copy-test-014 copy-test-027 copy-test-042 copy-test-049 copy-test-058 copy-test-066 copy-test-071 copy-test-080 copy-test-092
copy-test-007 copy-test-018 copy-test-035 copy-test-043 copy-test-050 copy-test-059 copy-test-067 copy-test-072 copy-test-084 copy-test-096
copy-test-009 copy-test-020 copy-test-036 copy-test-044 copy-test-053 copy-test-061 copy-test-068 copy-test-074 copy-test-085 copy-test-097


[root@node4 ~]# ls /gfs/test1/gv4
copy-test-002 copy-test-010 copy-test-025 copy-test-037 copy-test-045 copy-test-055 copy-test-062 copy-test-069 copy-test-075 copy-test-089
copy-test-003 copy-test-013 copy-test-026 copy-test-040 copy-test-047 copy-test-056 copy-test-064 copy-test-070 copy-test-076 copy-test-091
copy-test-005 copy-test-014 copy-test-027 copy-test-042 copy-test-049 copy-test-058 copy-test-066 copy-test-071 copy-test-080 copy-test-092
copy-test-007 copy-test-018 copy-test-035 copy-test-043 copy-test-050 copy-test-059 copy-test-067 copy-test-072 copy-test-084 copy-test-096
copy-test-009 copy-test-020 copy-test-036 copy-test-044 copy-test-053 copy-test-061 copy-test-068 copy-test-074 copy-test-085 copy-test-097

13. Create a replicated volume with an arbiter

[root@node1 test1]# gluster volume create gv5 replica 2 arbiter 2 transport tcp node1:/gfs/test1/gv5 node2:/gfs/test1/gv5 node3:/gfs/test1/gv5
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
For arbiter configuration, replica count must be 3 and arbiter count must be 1. The 3rd brick of the replica will be the arbiter
#The message says an arbiter configuration requires a replica count of 3 and an arbiter count of 1
[root@node1 test1]# gluster volume create gv5 replica 3 arbiter 1 transport tcp node1:/gfs/test1/gv5 node2:/gfs/test1/gv5 node3:/gfs/test1/gv5
#Start gv5
[root@node1 test1]# gluster volume start gv5
[root@node1 test1]# gluster volume info gv5
Volume Name: gv5
Type: Replicate
Volume ID: fd4fca20-1bb3-480b-9c24-703dd3e8b508
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv5
Brick2: node2:/gfs/test1/gv5
Brick3: node3:/gfs/test1/gv5 (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
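
One way to see the arbiter at work: the arbiter brick keeps only file names and metadata, not file contents, so after writing data its disk usage stays far smaller than that of the data bricks. A quick check, assuming the same copy-test data as in the earlier sections has been written to gv5:

#on node1/node2 (data bricks) the files have real contents; on node3 (arbiter) they are zero-length
du -sh /gfs/test1/gv5
ls -l /gfs/test1/gv5 | head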

14. Overview of common GlusterFS volume types

Distributed: distributed volume; files are spread across the bricks of the volume by a hash algorithm.
Replicated: replicated volume, similar to RAID 1; the replica count must equal the number of bricks (storage servers) in the volume; high availability.
Striped: striped volume, similar to RAID 0; the stripe count must equal the number of bricks in the volume; files are split into chunks and stored across the bricks in round-robin fashion; concurrency works at chunk granularity, so large files perform well.
Distributed Striped: distributed striped volume; the number of bricks in the volume must be a multiple (>= 2x) of the stripe count; combines distribution with striping.
Distributed Replicated: distributed replicated volume; the number of bricks in the volume must be a multiple (>= 2x) of the replica count; combines distribution with replication.
In a distributed replicated volume the brick order determines where files are placed: in general, every pair of consecutive bricks forms one replica set, and files are then distributed across those replica sets.
Enterprises generally use the last two types, mostly distributed replicated (usable capacity = total capacity / number of replicas); since all of the data travels over the network, 10 GbE switches and 10 GbE NICs are best used to recover some performance. (Illustrative create commands follow below.)
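
For illustration only, hedged sketches of how the common types above might be created; the serverN:/bricks/b1 paths are placeholders, and the striped types are deprecated in recent GlusterFS releases:

#distributed (default when no count options are given)
gluster volume create dist-vol server1:/bricks/b1 server2:/bricks/b1
#replicated (3-way, the recommended minimum to avoid split-brain)
gluster volume create repl-vol replica 3 server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
#distributed replicated (2 x 3 = 6 bricks; each group of 3 consecutive bricks forms one replica set)
gluster volume create dist-repl-vol replica 3 server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1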

For the officially documented GlusterFS volume types, see [here](https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/).

15. Delete a volume

#Stop the volume before deleting it
[root@node01 ~]# gluster volume stop gv1
[root@node01 ~]# gluster volume delete gv1

16. Monitor the workload

volume profile <VOLNAME> {start|info [peek|incremental [peek]|cumulative|clear]|stop} [nfs]
#Start profiling:
gluster volume profile gv5 start
#Display I/O information:
gluster volume profile gv5 info
#Stop profiling:
gluster volume profile gv5 stop

#Use the top command to view metrics such as reads, writes, file open calls, read calls and write calls:
volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] |
volume top <VOLNAME> {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>]
#View the current and maximum numbers of open fds
gluster volume top gv5 open brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Current open fds: 0, Max open fds: 1, Max openfd time: 2018-11-21 14:10:02.279118

#Show the number of read calls per file:
gluster volume top gv5 read brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Count filename
=======================
1 /copy-test-002
1 /copy-test-001

#List the write call counts of the files on this brick
gluster volume top gv5 write brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Count filename
=======================
12 /copy-test-100
12 /copy-test-099
12 /copy-test-098
12 /copy-test-097
12 /copy-test-096
12 /copy-test-095
12 /copy-test-094
12 /copy-test-093
12 /copy-test-092
12 /copy-test-091

#List how many times directories on this brick were opened
[root@node1 test1]# gluster volume top gv5 opendir brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Count filename
=======================
1 /test

#List how many times directories on this brick were read
[root@node1 test1]# gluster volume top gv5 readdir brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Count filename
=======================
2 /test

#Check the read performance of a brick:
[root@node1 test1]# gluster volume top gv5 read-perf bs 256 count 1 brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Throughput 51.20 MBps time 0.0000 secs
MBps Filename Time
==== ======== ====
0 /copy-test-001 2018-11-21 16:14:30.512003
0 /copy-test-003 2018-11-21 16:13:32.481649
0 /copy-test-002 2018-11-21 16:06:19.696071
#Check the write performance of a brick:
[root@node1 test1]# gluster volume top gv5 write-perf bs 256 count 1 brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Throughput 10.67 MBps time 0.0000 secs
MBps Filename Time
==== ======== ====
0 /.copy-test-001.swp 2018-11-21 16:14:53.514516
0 /copy-test-001 2018-11-21 16:14:53.489068
0 /.copy-test-001.swp 2018-11-21 16:14:30.483696
0 /.copy-test-003.swp 2018-11-21 16:12:46.331094
0 /copy-test-100 2018-11-21 14:10:06.65163
0 /copy-test-099 2018-11-21 14:10:05.953871
0 /copy-test-098 2018-11-21 14:10:05.773626
0 /copy-test-097 2018-11-21 14:10:05.620743
0 /copy-test-096 2018-11-21 14:10:05.591005
0 /copy-test-095 2018-11-21 14:10:05.555036
#View information about a single volume:
gluster volume info <VOLNAME>

gluster volume info gv5
Volume Name: gv5
Type: Replicate
Volume ID: e12e23f5-7347-4049-8d77-53cef76b0633
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv5
Brick2: node2:/gfs/test1/gv5
Brick3: node3:/gfs/test1/gv5 (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

#View information about all volumes:
gluster volume info all

#View volume status:
gluster volume status [all | <VOLNAME> [<BRICK>]] [detail|clients|mem|inode|fd|callpool]
#Show the status of all volumes
gluster volume status all
#Show additional details about the volume:
gluster volume status gv5 detail
#Show the list of clients:
gluster volume status gv5 clients
#Show memory usage:
gluster volume status gv5 mem
#Show the volume's inode table
gluster volume status gv5 inode
#Show the volume's open fd table
gluster volume status gv5 fd
#Show the volume's pending calls
gluster volume status gv5 callpool

17. Other GlusterFS information

#Check whether the nodes are online
gluster volume status nfsp
#Start a full heal
gluster volume heal gv2 full
#List the files that need healing
gluster volume heal gv2 info
#List the files that were healed
gluster volume heal gv2 info healed
#List the files that failed to heal
gluster volume heal gv2 info heal-failed
#List the files in split-brain
gluster volume heal gv2 info split-brain
#Enable the quota feature
gluster volume quota gv2 enable
#Disable the quota feature
gluster volume quota gv2 disable
#Limit a directory (the size of a folder inside the volume), here /data inside gv2
gluster volume quota gv2 limit-usage /data 30MB
#List the quota information
gluster volume quota gv2 list
#Quota information for a specific limited directory
gluster volume quota gv2 list /data
#Set the cache timeout for quota information
gluster volume set gv2 features.quota-timeout 5
#Remove the quota setting of a directory
gluster volume quota gv2 remove /data
Note: the quota feature limits the space of a directory under the mount point, e.g. the /mnt/gluster/data directory; it does not limit the space of the bricks that make up the volume (see the short walkthrough below).
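
A minimal walkthrough of that note, assuming the gv2 volume used in the commands above and a client mount at /mnt; the limited path is the path inside the volume as seen by clients, not a brick path:

#mount the volume on a client and create the directory to be limited
mount -t glusterfs node1:/gv2 /mnt
mkdir -p /mnt/data
#enable quota and limit /data (relative to the volume root) to 30MB
gluster volume quota gv2 enable
gluster volume quota gv2 limit-usage /data 30MB
gluster volume quota gv2 list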

18. Monitor GlusterFS with a Zabbix template; the Zabbix site provides ready-made templates covering CPU, memory, disk space, host uptime and system load.

GlusterFS Zabbix monitoring:
https://github.com/MrCirca/zabbix-glusterfs
