[Linux] GlusterFS部署详解


0、安装之前须知:

推荐使用xfs文件系统;如果不使用xfs,推荐使用ext4,其他文件系统也可兼容;
生产环境必须安装DNS、NTP服务,DNS服务配置可点击[此处](https://zeven0707.github.io/2018/11/22/DNS%E6%9C%8D%E5%8A%A1%E9%83%A8%E7%BD%B2/);
如果是虚拟机环境,最少1G内存;
最好配置2个网卡,一个作为管理接口,一个作为数据传输接口;
如果使用vm克隆额外的机器,确保glusterfs没有安装;安装Gluster后会在每个系统上生成一个UUID,因此如果克隆了一个已经安装gluster的系统,后面会报错;
如果使用物理服务器,建议最低配置(2 CPU、2GB内存、1GbE网卡),板载组件不如附加组件强大;
安装官方文档,点击[此处](https://docs.gluster.org/en/latest/Install-Guide/Install/)
centos7快速安装指南,点击[此处](https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart)
/var/log目录最好单独挂载(示例见下),一旦日志目录无法写入信息,glusterfs会出现古怪的现象
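
下面是把/var/log单独挂载出去的一个最简示例,仅作示意,假设存在一块空闲的逻辑卷/dev/vg0/lv_log(设备名为假设),实际请按自己的磁盘规划调整:

#为/var/log准备独立的文件系统并写入fstab
mkfs.xfs /dev/vg0/lv_log
mkdir -p /mnt/newlog && mount /dev/vg0/lv_log /mnt/newlog
cp -a /var/log/. /mnt/newlog/ && umount /mnt/newlog
echo '/dev/vg0/lv_log /var/log xfs defaults 0 0' >> /etc/fstab
mount /var/log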

1、配置yum源

yum install centos-release-gluster

2、在两个节点上添加磁盘,格式化,并挂在目录

#格式化磁盘,创建相应的挂载目录
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /gfs/test1
#将挂载配置信息写入fstab配置文件,以便重启自动挂载
vi /etc/fstab
/dev/sdb1 /gfs/test1 xfs defaults 1 2
#加载修改的配置信息
mount -a && mount
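
挂载完成后可以简单确认一下(仅作示意):

#确认/dev/sdb1已经以xfs格式挂载到/gfs/test1
df -hT /gfs/test1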

Note: 在CentOS 6操作系统,需要安装xfs文件系统:

yum install xfsprogs

3、安装、配置glusterd服务

yum install glusterfs-server

启动GlusterFS 管理进程

#加入开机启动
systemctl enable glusterd
ln -s '/usr/lib/systemd/system/glusterd.service' '/etc/systemd/system/multi-user.target.wants/glusterd.service'
#启动glusterd
systemctl start glusterd
#查看glusterd状态信息
systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2018-11-15 12:08:54 EST; 15s ago
Process: 2808 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2810 (glusterd)
Tasks: 8
CGroup: /system.slice/glusterd.service
└─2810 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Nov 15 12:08:53 node1 systemd[1]: Starting GlusterFS, a clustered file-system server...
Nov 15 12:08:54 node1 systemd[1]: Started GlusterFS, a clustered file-system server.

4、防火墙配置

glusterd默认监听tcp/24007端口,但每新增一个brick都会开启一个新的端口,可以通过命令"gluster volume status"查询到。因此如果各个节点配置了防火墙,新增brick之后需要注意放行对应端口,否则下面配置受信任池时会报错。
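
下面给出一个基于firewalld的放行示例(仅作示意,假设各节点使用firewalld;brick端口范围以"gluster volume status"的实际输出为准,较新版本默认从49152开始分配):

#放行glusterd管理端口以及brick端口范围
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload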

5、配置受信任池

node1操作,将node2添加到受信任池:

[root@node1 ~]# gluster peer probe node2
peer probe: success.

如果防火墙开启可能会提示下面错误:

[root@node1 ~]# gluster peer probe node2
peer probe: failed: Probe returned with Transport endpoint is not connected

node2操作,将node1添加到受信任池

[root@node2 ~]# gluster peer probe node1
peer probe: success. Host node1 port 24007 already in peer list
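
probe成功后,可以在任一节点确认受信任池的状态(仅作示意):

#对端应显示为 Peer in Cluster (Connected)
gluster peer status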

6、建立GlusterFS volume

在node1和node2上操作:

mkdir /gfs/test1/gv0

在任意一个节点上执行即可,不需要重复执行:

# gluster volume create gv0 replica 2 node1:/gfs/test1/gv0 node2:/gfs/test1/gv0
[root@node1 ~]# gluster volume create gv0 replica 2 node1:/gfs/test1/gv0 node2:/gfs/test1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: gv0: success: please start the volume to access data

提示只有2个副本时有可能出现脑裂的情况,而创建带有仲裁节点的glusterfs volume至少需要三个节点,因此两个节点的副本卷只能忽略这个提示。
下面启动创建的gv0卷

# gluster volume start gv0
[root@node1 ~]# gluster volume start gv0
volume start: gv0: success

查看启动的gv0卷信息

# gluster volume info
[root@node1 ~]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 79c81f10-0cb8-4f26-a7ab-d21fe19f0bbf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv0
Brick2: node2:/gfs/test1/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

如果卷没有正常启动,可以查看/var/log/glusterfs目录下的日志

7、测试Glusterfs volume gv0副本集是否生效

#挂载到任意空目录
mount -t glusterfs node1:/gv0 /mnt
#创建测试数据
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
#查看生成数据的数量
ls /mnt | wc -l
#在node1和node2的/gfs/test1/gv0目录下均生成了100个文件
ls /gfs/test1/gv0 |wc -l
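
复制卷是否生效也可以抽样对比两个brick上同一文件的校验值(仅作示意,假设节点间已配置ssh互信):

#在node1上执行,两个结果应一致
md5sum /gfs/test1/gv0/copy-test-001
ssh node2 md5sum /gfs/test1/gv0/copy-test-001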

附注:尝试创建disperse(纠删码)卷时,disperse数量必须大于2,否则会报错:

[root@node1 ~]# gluster volume create gv1 disperse 2 node1:/gfs/test1/gv1 node2:/gfs/test1/gv1
disperse count must be greater than 2
disperse option given twice

Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>... [force]
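
如果确实要使用disperse卷,可以参考下面的写法(仅作示意:disperse数为3、冗余1,需要node3上也存在/gfs/test1/gv1目录):

gluster volume create gv1 disperse 3 redundancy 1 node1:/gfs/test1/gv1 node2:/gfs/test1/gv1 node3:/gfs/test1/gv1
gluster volume start gv1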

8、添加新的节点。因为受信任池已经创建,需要在受信任池内的某个节点上,将新节点node3加入进来

[root@node1 ~]# gluster peer probe node3
[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: 588d2f92-f085-4e74-ab63-c6f5aa6ffb24
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 3495bc9c-7330-4038-ac94-1777ba0286f5
State: Peer in Cluster (Connected)

9、创建2节点分布式volume

#创建卷不添加任何参数的情况下,默认模式为分布式:
#gluster volume create gv3 node1:/gfs/test1/gv3 node2:/gfs/test1/gv3
[root@node1 ~]# gluster volume create gv3 node1:/gfs/test1/gv3 node2:/gfs/test1/gv3
volume create: gv3: success: please start the volume to access data
# 启动gv3
# gluster volume start gv3
[root@node1 ~]# gluster volume start gv3

[root@node1 ~]# gluster volume info gv3

Volume Name: gv3
Type: Distribute
Volume ID: 02e3163b-1c69-402f-aad5-9cf4dba3267c
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv3
Brick2: node2:/gfs/test1/gv3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

10、测试Glusterfs volume gv3

#挂载并创建测试数据
mount -t glusterfs node1:/gv3 /mnt
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
#node1查看数据:
[root@node1 gv3]# ls /mnt | wc -l
100
#node1:查看文件实际位置下的数据:
[root@node1 gv3]# ls /gfs/test1/gv3/ |wc -l
50
#node2:查看文件实际位置下的数据:
[root@node2 gv3]# ls /gfs/test1/gv3/ |wc -l
50
#虽然在节点3上没有该卷的brick,在节点3上挂载/gv3也可以看到数据:
mount -t glusterfs 127.0.0.1:/gv3 /mnt
[root@node3 mnt]# ls /mnt/ |wc -l
100

11、创建分布式复制卷

# gluster volume create gv4 replica 2 transport tcp node1:/gfs/test1/gv4 node2:/gfs/test1/gv4 node3:/gfs/test1/gv4 node4:/gfs/test1/gv4
[root@node1 test1]# gluster volume create gv4 replica 2 transport tcp node1:/gfs/test1/gv4 node2:/gfs/test1/gv4 node3:/gfs/test1/gv4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
number of bricks is not a multiple of replica count
#brick的个数必须是副本数(replica)的整数倍
[root@node1 test1]# gluster volume create gv4 replica 2 transport tcp node1:/gfs/test1/gv4 node2:/gfs/test1/gv4 node3:/gfs/test1/gv4 node4:/gfs/test1/gv4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: gv4: success: please start the volume to access data
[root@node1 test1]# gluster volume start gv4
[root@node1 test1]# gluster volume info gv4
Volume Name: gv4
Type: Distributed-Replicate
Volume ID: e8556b2e-462d-4407-99c4-a6e622754e6c
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv4
Brick2: node2:/gfs/test1/gv4
Brick3: node3:/gfs/test1/gv4
Brick4: node4:/gfs/test1/gv4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

12、测试Glusterfs volume gv4:

#挂载并创建测试数据
mount -t glusterfs node1:/gv4 /mnt
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
#查看数据分布,node1与node2数据相同,node3和node4存储另一半信息
[root@node1 test1]# ls /gfs/test1/gv4
copy-test-001 copy-test-012 copy-test-021 copy-test-029 copy-test-034 copy-test-048 copy-test-060 copy-test-078 copy-test-086 copy-test-094
copy-test-004 copy-test-015 copy-test-022 copy-test-030 copy-test-038 copy-test-051 copy-test-063 copy-test-079 copy-test-087 copy-test-095
copy-test-006 copy-test-016 copy-test-023 copy-test-031 copy-test-039 copy-test-052 copy-test-065 copy-test-081 copy-test-088 copy-test-098
copy-test-008 copy-test-017 copy-test-024 copy-test-032 copy-test-041 copy-test-054 copy-test-073 copy-test-082 copy-test-090 copy-test-099
copy-test-011 copy-test-019 copy-test-028 copy-test-033 copy-test-046 copy-test-057 copy-test-077 copy-test-083 copy-test-093 copy-test-100

[root@node2 gv0]# ls /gfs/test1/gv4
copy-test-001 copy-test-012 copy-test-021 copy-test-029 copy-test-034 copy-test-048 copy-test-060 copy-test-078 copy-test-086 copy-test-094
copy-test-004 copy-test-015 copy-test-022 copy-test-030 copy-test-038 copy-test-051 copy-test-063 copy-test-079 copy-test-087 copy-test-095
copy-test-006 copy-test-016 copy-test-023 copy-test-031 copy-test-039 copy-test-052 copy-test-065 copy-test-081 copy-test-088 copy-test-098
copy-test-008 copy-test-017 copy-test-024 copy-test-032 copy-test-041 copy-test-054 copy-test-073 copy-test-082 copy-test-090 copy-test-099
copy-test-011 copy-test-019 copy-test-028 copy-test-033 copy-test-046 copy-test-057 copy-test-077 copy-test-083 copy-test-093 copy-test-100

[root@node3 test]# ls /gfs/test1/gv4
copy-test-002 copy-test-010 copy-test-025 copy-test-037 copy-test-045 copy-test-055 copy-test-062 copy-test-069 copy-test-075 copy-test-089
copy-test-003 copy-test-013 copy-test-026 copy-test-040 copy-test-047 copy-test-056 copy-test-064 copy-test-070 copy-test-076 copy-test-091
copy-test-005 copy-test-014 copy-test-027 copy-test-042 copy-test-049 copy-test-058 copy-test-066 copy-test-071 copy-test-080 copy-test-092
copy-test-007 copy-test-018 copy-test-035 copy-test-043 copy-test-050 copy-test-059 copy-test-067 copy-test-072 copy-test-084 copy-test-096
copy-test-009 copy-test-020 copy-test-036 copy-test-044 copy-test-053 copy-test-061 copy-test-068 copy-test-074 copy-test-085 copy-test-097


[root@node4 ~]# ls /gfs/test1/gv4
copy-test-002 copy-test-010 copy-test-025 copy-test-037 copy-test-045 copy-test-055 copy-test-062 copy-test-069 copy-test-075 copy-test-089
copy-test-003 copy-test-013 copy-test-026 copy-test-040 copy-test-047 copy-test-056 copy-test-064 copy-test-070 copy-test-076 copy-test-091
copy-test-005 copy-test-014 copy-test-027 copy-test-042 copy-test-049 copy-test-058 copy-test-066 copy-test-071 copy-test-080 copy-test-092
copy-test-007 copy-test-018 copy-test-035 copy-test-043 copy-test-050 copy-test-059 copy-test-067 copy-test-072 copy-test-084 copy-test-096
copy-test-009 copy-test-020 copy-test-036 copy-test-044 copy-test-053 copy-test-061 copy-test-068 copy-test-074 copy-test-085 copy-test-097

13、创建带有仲裁的副本集

[root@node1 test1]# gluster volume create gv5 replica 2 arbiter 2 transport tcp node1:/gfs/test1/gv5 node2:/gfs/test1/gv5 node3:/gfs/test1/gv5
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
For arbiter configuration, replica count must be 3 and arbiter count must be 1. The 3rd brick of the replica will be the arbiter
#提示:使用仲裁时副本数必须为3且仲裁数必须为1,副本中的第3个brick将作为仲裁节点
[root@node1 test1]# gluster volume create gv5 replica 3 arbiter 1 transport tcp node1:/gfs/test1/gv5 node2:/gfs/test1/gv5 node3:/gfs/test1/gv5
#启动gv5
[root@node1 test1]# gluster volume start gv5
[root@node1 test1]# gluster volume info gv5
Volume Name: gv5
Type: Replicate
Volume ID: fd4fca20-1bb3-480b-9c24-703dd3e8b508
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv5
Brick2: node2:/gfs/test1/gv5
Brick3: node3:/gfs/test1/gv5 (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

14、GlusterFS的常见的卷介绍

Distributed:分布式卷,文件通过 hash 算法随机分布到由 bricks 组成的卷上。
Replicated: 复制式卷,类似 RAID 1,replica 数必须等于 volume 中 brick 所包含的存储服务器数,可用性高。
Striped: 条带式卷,类似 RAID 0,stripe 数必须等于 volume 中 brick 所包含的存储服务器数,文件被分成数据块,以 Round Robin 的方式存储在 bricks 中,并发粒度是数据块,大文件性能好。
Distributed Striped: 分布式的条带卷,volume中 brick 所包含的存储服务器数必须是 stripe 的倍数(>=2倍),兼顾分布式和条带式的功能。
Distributed Replicated: 分布式的复制卷,volume 中 brick 所包含的存储服务器数必须是 replica 的倍数(>=2倍),兼顾分布式和复制式的功能。
分布式复制卷的brick顺序决定了文件分布的位置,一般来说,先是两个brick形成一个复制关系,然后两个复制关系形成分布。
企业一般用后两种,大部分会用分布式复制(可用容量为 总容量/复制份数),通过网络传输的话最好用万兆交换机,万兆网卡来做。这样就会优化一部分性能。它们的数据都是通过网络来传输的。
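
以分布式复制卷为例,生产环境更推荐每组3副本(或2副本+1仲裁)以避免脑裂。下面是一个6个brick、replica 3的创建示例,仅作示意,节点名与brick目录均为占位,请按实际环境替换:

gluster volume create gv_prod replica 3 node1:/gfs/test1/gv_prod node2:/gfs/test1/gv_prod node3:/gfs/test1/gv_prod node4:/gfs/test1/gv_prod node5:/gfs/test1/gv_prod node6:/gfs/test1/gv_prod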

查看官方提供的gluster volume,点击[此处](https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/)

15、删除卷

#先停止要删除的卷
[root@node01 ~]# gluster volume stop gv1
[root@node01 ~]# gluster volume delete gv1

16、监控负载

volume profile <VOLNAME> {start|info [peek|incremental [peek]|cumulative|clear]|stop} [nfs]
#启动性能分析:
gluster volume profile gv5 start
#显示io信息:
gluster volume profile gv5 info
#停止性能分析:
gluster volume profile gv5 stop

#使用top命令查看读取,写入,文件打开调用,读取调用,写入调用等指标:
volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] |
volume top <VOLNAME> {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>]
#查看fd的当前打开数,和最大打开数
gluster volume top gv5 open brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Current open fds: 0, Max open fds: 1, Max openfd time: 2018-11-21 14:10:02.279118

#显示文件读取调用数量:
gluster volume top gv5 read brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Count filename
=======================
1 /copy-test-002
1 /copy-test-001

#每个brick下面文件被写入的数量列表
gluster volume top gv5 write brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Count filename
=======================
12 /copy-test-100
12 /copy-test-099
12 /copy-test-098
12 /copy-test-097
12 /copy-test-096
12 /copy-test-095
12 /copy-test-094
12 /copy-test-093
12 /copy-test-092
12 /copy-test-091

#当前brick下,目录被打开的数量列表
[root@node1 test1]# gluster volume top gv5 opendir brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Count filename
=======================
1 /test

#当前brick下,目录被读的数量列表
[root@node1 test1]# gluster volume top gv5 readdir brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Count filename
=======================
2 /test

#查看brick的读性能:
[root@node1 test1]# gluster volume top gv5 read-perf bs 256 count 1 brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Throughput 51.20 MBps time 0.0000 secs
MBps Filename Time
==== ======== ====
0 /copy-test-001 2018-11-21 16:14:30.512003
0 /copy-test-003 2018-11-21 16:13:32.481649
0 /copy-test-002 2018-11-21 16:06:19.696071
#查看brick的写性能:
[root@node1 test1]# gluster volume top gv5 write-perf bs 256 count 1 brick node1:/gfs/test1/gv5 list-cnt 10
Brick: node1:/gfs/test1/gv5
Throughput 10.67 MBps time 0.0000 secs
MBps Filename Time
==== ======== ====
0 /.copy-test-001.swp 2018-11-21 16:14:53.514516
0 /copy-test-001 2018-11-21 16:14:53.489068
0 /.copy-test-001.swp 2018-11-21 16:14:30.483696
0 /.copy-test-003.swp 2018-11-21 16:12:46.331094
0 /copy-test-100 2018-11-21 14:10:06.65163
0 /copy-test-099 2018-11-21 14:10:05.953871
0 /copy-test-098 2018-11-21 14:10:05.773626
0 /copy-test-097 2018-11-21 14:10:05.620743
0 /copy-test-096 2018-11-21 14:10:05.591005
0 /copy-test-095 2018-11-21 14:10:05.555036
#查看单个卷的信息:
gluster volume info <VOLNAME>

gluster volume info gv5
Volume Name: gv5
Type: Replicate
Volume ID: e12e23f5-7347-4049-8d77-53cef76b0633
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/test1/gv5
Brick2: node2:/gfs/test1/gv5
Brick3: node3:/gfs/test1/gv5 (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

#查看所有卷的信息:
gluster volume info all

#查看卷状态:
gluster volume status [all| []] [detail|clients|mem|inode|fd|callpool]
#显示所有卷的状态
gluster volume status all
#显示卷额外的信息:
gluster volume status gv5 detail
#显示客户端列表:
gluster volume status gv5 clients
#显示内存使用情况:
gluster volume status gv5 mem
#显示卷的inode表
gluster volume status gv5 inode
#显示卷打开的fd表
gluster volume status gv5 fd
#显示卷的挂起调用
gluster volume status gv5 callpool

17、glusterfs其他信息查看

#看下节点有没有在线
gluster volume status nfsp
#启动完全修复
gluster volume heal gv2 full
#查看需要修复的文件
gluster volume heal gv2 info
#查看修复成功的文件
gluster volume heal gv2 info healed
#查看修复失败的文件
gluster volume heal gv2 info heal-failed
#查看脑裂的文件
gluster volume heal gv2 info split-brain
#激活quota功能
gluster volume quota gv2 enable
#关闭quota功能
gluster volume quota gv2 disable
#目录限制(卷中文件夹的大小)
gluster volume quota gv2 limit-usage /data 30MB
#quota信息列表
gluster volume quota gv2 list
#限制目录的quota信息
gluster volume quota gv2 list /data
#设置信息的超时时间
gluster volume set gv2 features.quota-timeout 5
#删除某个目录的quota设置
gluster volume quota gv2 remove /data
备注:quota功能,主要是对挂载点下的某个目录进行空间限额。如:/mnt/gluster/data目录,而不是对组成卷组的空间进行限制。

18、使用zabbix模板监控glusterfs。zabbix官网有自带的模板,可监控CPU、内存、磁盘空间、主机运行时间、系统load等指标。

gfszabbix监控
https://github.com/MrCirca/zabbix-glusterfs


[Linux] vsftp启用虚拟用户功能


0、部署vsftp-server,请先跳转到此处;部署vsftp-ssl功能,点击此处

1、在/etc/vsftpd/vsftpd.conf配置文件添加以下内容

#开启虚拟用户的功能
guest_enable=YES
#指定虚拟用户的宿主用户
guest_username=vir
#指定虚拟用户配置文件的存放路径
user_config_dir=/etc/vsftpd/vuser_conf
#虚拟用户和本地用户有相同的权限
virtual_use_local_privs=YES

2、建立虚拟用户名文件

[root@localhost ~]# vim /etc/vsftpd/vsftpd_users.conf
#输入奇数行为账号,偶数行为密码
f1
123456

3、生成认证文件,如果db_load命令不存在先安装(yum install db4 db4-utils)

[root@localhost ~]# db_load -T -t hash -f /etc/vsftpd/vsftpd_users.conf /etc/vsftpd/vsftpd_users.db
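
生成之后可以用db_dump(同样由db4-utils提供)粗略确认认证库内容是否正确,仅作示意:

[root@localhost ~]# db_dump -p /etc/vsftpd/vsftpd_users.db | head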

4、文件生成之后,修改文件权限

[root@localhost ~]# chmod 600 /etc/vsftpd/vsftpd_users.db

5、编辑PAM认证文件/etc/pam.d/vsftpd,将原有认证相关的行注释掉,只保留文件头和最后新增的两行

“/etc/vsftpd/vsftpd_users”根据自己配置的文件设置,.db后缀无需添加

[root@localhost f2]# more /etc/pam.d/vsftpd
#%PAM-1.0
#session optional pam_keyinit.so force revoke
#auth required pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
#auth required pam_shells.so
#auth include password-auth
#account include password-auth
#session required pam_loginuid.so
#session include password-auth
auth sufficient /lib64/security/pam_userdb.so db=/etc/vsftpd/vsftpd_users
account sufficient /lib64/security/pam_userdb.so db=/etc/vsftpd/vsftpd_users

6、建立虚拟用户配置文件存放位置

[root@localhost ~]# mkdir /etc/vsftpd/vuser_conf/
[root@localhost ~]# vim /etc/vsftpd/vuser_conf/f1
#添加以下内容
local_root=/var/ftp/f1
write_enable=YES
anon_umask=022
anon_world_readable_only=NO
anon_upload_enable=YES
anon_mkdir_write_enable=YES
anon_other_write_enable=YES
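
如果宿主用户vir还不存在,需要先创建(仅作示意,禁止其登录shell):

[root@localhost ~]# useradd -s /sbin/nologin vir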

7、将f1用户加入/etc/vsftpd/chroot_list,/etc/vsftpd/vsftpd.user_list允许f1用户登录

[root@localhost vsftpd]# more /etc/vsftpd/chroot_list |grep f1
f1
[root@localhost vsftpd]# more /etc/vsftpd/vsftpd.user_list |grep f1
f1

8、重新启动vsftp

[root@localhost ~]# systemctl restart vsftpd

9、测试用户能否正常登录:

[root@localhost surgeftp_2.3f2_linux64]# ./sslftp 192.168.168.120
Connected to 192.168.168.120
220 (vsFTPd 3.0.2)
234 Proceed with negotiation.
starting SSL/TLS
sslinit 3
Negotiated secure protocol TLSv1.2, using an AES cipher.
200 PBSZ set to 0.
200 PROT now Private.
(secure) User: f1
331 Please specify the password.
(secure) Password: ******
Connection problem SSLTCP:525:ssl_read tcp:-1000:SSL failure. (SSL_ERROR_SSL):error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
Channel open, login Failed!

提示的错误没找到任何相关文档,尝试用tcpdump抓包看看交互情况:

15:50:39.932746 IP 192.168.168.120.21 > 192.168.168.121.39398: Flags [P.], seq 1806:1816, ack 772, win 243, options [nop,nop,TS val 513318660 ecr 1441172002], length 10: FTP: 500 OOPS: [!ftp]
15:50:39.932769 IP 192.168.168.120.21 > 192.168.168.121.39398: Flags [P.], seq 1816:1851, ack 772, win 243, options [nop,nop,TS val 513318660 ecr 1441172002], length 35: FTP: cannot change directory:/var/ftp/f1[!ftp]

提示f1的目录不存在,尝试创建f1的目录,并修改权限:

mkdir -p /var/ftp/f1
chown ftp1.ftp1 /var/ftp/f1

再次登录,正常

[root@localhost surgeftp_2.3f2_linux64]# ./sslftp 192.168.168.120
Connected to 192.168.168.120
220 (vsFTPd 3.0.2)
234 Proceed with negotiation.
starting SSL/TLS
sslinit 3
Negotiated secure protocol TLSv1.2, using an AES cipher.
200 PBSZ set to 0.
200 PROT now Private.
(secure) User: f1
331 Please specify the password.
(secure) Password: ******
230 Login successful.
Type in "save" to save login details to /root/.netrc
sslftp> ls
226 Directory send OK.
sslftp> exit
221 Goodbye.
Channel Closed.

10、新增f2用户,修改vsftpd_users.conf配置文件,添加f2用户

[root@localhost ~]# vim /etc/vsftpd/vsftpd_users.conf
#输入奇数行为账号,偶数行为密码
f1
123456
f2
123456

11、重新生成认证文件

[root@localhost ~]# db_load -T -t hash -f /etc/vsftpd/vsftpd_users.conf /etc/vsftpd/vsftpd_users.db

12、将f2用户加入/etc/vsftpd/chroot_list,/etc/vsftpd/vsftpd.user_list允许f2用户登录

[root@localhost vsftpd]# more /etc/vsftpd/chroot_list |grep -E "f1|f2"
f1
f2
[root@localhost vsftpd]# more /etc/vsftpd/vsftpd.user_list |grep -E "f1|f2"
f1
f2

13、添加f2配置文件

vim /etc/vsftpd/vuser_conf/f2

local_root=/var/ftp/f2
write_enable=YES
anon_umask=022
anon_world_readable_only=NO
anon_upload_enable=YES
anon_mkdir_write_enable=YES
anon_other_write_enable=YES

14、创建f2目录,并修改权限

mkdir -p /var/ftp/f2
chown ftp1.ftp1 /var/ftp/f2

15、重新启动vsftpd

[root@localhost ~]# systemctl restart vsftpd

16、测试登录vsftpd是否正常

[root@localhost surgeftp_2.3f2_linux64]# ./sslftp 192.168.168.120
Connected to 192.168.168.120
220 (vsFTPd 3.0.2)
234 Proceed with negotiation.
starting SSL/TLS
sslinit 3
Negotiated secure protocol TLSv1.2, using an AES cipher.
200 PBSZ set to 0.
200 PROT now Private.
(secure) User: f2
331 Please specify the password.
(secure) Password: ******
230 Login successful.
Type in "save" to save login details to /root/.netrc
sslftp>


[Linux] DNS服务部署(单点&主从)


1、关闭selinux并清空防火墙规则

[root@localhost  ~]#setenforce  0
[root@localhost ~]#iptables -F

2、主节点安装dns

yum -y install bind bind-chroot bind-utils

3、修改配置文件

3.1、/etc/named.conf—–主配置文件,服务器主要运行参数

修改dns主配置文件:

options {
listen-on port 53 { 10.0.30.95; 127.0.0.1; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
forwarders { 8.8.8.8; };
3.2、/etc/named.rfc1912.zones—–区域文件,主要指定要解析哪个域名

vim /etc/named.rfc1912.zones

// 首先对文件中正向解析的区域进行修改
zone "gfs.com" IN {
type master;
file "node.conf";
allow-update { none; };
};
// 设置好域名格式以及相应的数据文件,接下来进行反向解析区域的配置
zone "30.0.10.in-addr.arpa" IN {
type master;
file "node.txt";
allow-update { none; };
};
3.3、配置完成之后校验一下文件配置是否正确:
[root@node4 named]# named-checkconf
[root@node4 named]#
没有任何输出表示 /etc/named.conf没有语法错误
3.4、/var/named/xxx.xx ——-数据文件,用来正向和反向的解析

主文件及区域文件修改完成后接下来进行数据文件的修改,切换到/var/named目录下,对数据文件进行相应的选项配置:

cd /var/named/
cp named.empty node.conf

首先对其正向解析的数据文件进行配置
vim node.conf

$TTL 3H
@ IN SOA gfs.com. rname.invalid. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS dns.gfs.com.
dns.gfs.com. A 10.0.30.95
node1.gfs.com. A 10.0.30.51
AAAA ::1

对反向解析的数据文件进行配置,反向解析数据文件里面只有SOA、NS、PTR资源记录,所有A记录都要改为PTR记录,名称为IP地址,IP地址可以写全也可以简写,如果写全则是IP地址反写加上.in-addr.arpa.例如:116.2.16.172.in-addr.arpa. PTR资源记录的值为域名。

cp named.empty  node.txt

vim node.txt

$TTL 3H
@ IN SOA gfs.com rname.invalid. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS dns.gfs.com.
95 PTR dns.gfs.com.
51 PTR node1.gfs.com.
3.5、解析文件参数详解
区域解析库文件第一个记录必须是SOA记录,必须有NS记录并且正解区域要有NS记录的A记录,反解则不需要有NS记录对应的A记录。
$TTL表示宏定义,TTL(Time- To-Live),dns记录在本地DNS服务器上保留的时间
@符号可代表区域文件/etc/named.conf里面定义的区域名称,即:"gfs.com."。
2018030422 ;标识序列号,十进制数字,不能超过10位,通常使用日期
2H ;刷新时间,即每隔多久到主服务器检查一次,此处为2小时,实验环境可以设置小一点
4M ;重试时间,应该小于刷新时间,此处为4分钟,实验环境可以设置小一点
1D ;过期时间,此处为1天
2D ;主服务器挂掉后,从服务器至多工作的时间,此处为2天
这个文件里所有的域名结尾的点号一定不能省略。
区域解析库文件是放在/var/named目录下,由named进程是以named用户运行,因此区域解析库文件的属组应设置为named。

(1)A记录(Address)正向解析
A记录是将一个主机名(全称域名FQDN)和一个IP地址关联起来。这也是大多数客户端程序默认的查询类型。
(2)PTR记录(Pointer)反向解析
PTR记录将一个IP地址对应到主机名(全称域名FQDN)。这些记录保存在in-addr.arpa域中。
(3)CNAME记录(Canonical Name)别名
别名记录,也称为规范名字(Canonical Name)。这种记录允许您将多个名字映射到同一台计算机。
(4)MX记录(Mail eXchange)
MX记录是邮件交换记录,它指向一个邮件服务器,用于电子邮件系统发邮件时根据 收信人的地址后缀来定位邮件服务器。MX记录也叫做邮件路由记录,用户可以将该域名下的邮件服务器指向到自己的mail server上,然后即可自行操控所有的邮箱设置。
当有多个MX记录(即有多个邮件服务器)时,则需要设置数值来确定其优先级。通过设置优先级数字来指明首选服务器,数字越小表示优先级越高。
(5)NS记录(Name Server)
NS(Name Server)记录是域名服务器记录,也称为授权服务器,用来指定该域名由哪个DNS服务器来进行解析。
将网站的NS记录指向到目标地址,在设置NS记录的同时还需要设置目标网站的指向,否则NS记录将无法正常解析
NS记录优先于A记录。即,如果一个主机地址同时存在NS记录和A记录,则A记录不生效。
(6)AAAA记录 IPV6解析记录
该记录是将域名解析到一个指定的IPV6的IP上。
3.6、相应的数据配置文件完成后对数据文件的属主进行修改
[root@localhost  named]#chown  named  node.*
[root@node4 named]# ll node.*
-rw-r-----. 1 named root 286 Nov 22 16:12 node.conf
-rw-r-----. 1 named root 310 Nov 22 16:12 node.txt
3.7、配置完成后检测解析文件是否正确

[root@node4 named]# named-checkzone "gfs.com" node.conf
zone gfs.com/IN: loaded serial 0
OK

3.8、检测反向解析库配置有没有错误
[root@node4 named]# named-checkzone "30.0.10.in-addr.arpa" node.txt
zone 30.0.10.in-addr.arpa/IN: loaded serial 0
OK

4、修改完成后启动dns服务

systemctl start named.service

5、在客户端配置dns信息

5.1、修改 /etc/NetworkManager/NetworkManager.conf 文件,防止重启网卡后dns被重置,在main部分添加 “dns=none” 选项
[main]
plugins=ifcfg-rh
dns=none
5.2、NetworkManager重新装载上面修改的配置
# systemctl restart NetworkManager.service
5.3、之后修改/etc/resolv.conf配置文件:
[root@node2 ~]# more /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.0.30.95
[root@node2 ~]# /etc/init.d/network restart
Restarting network (via systemctl): [ OK ]
[root@node2 ~]# more /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.106.0.20
nameserver 8.8.8.8
[root@node2 ~]#
[root@node2 ~]# ls
anaconda-ks.cfg Desktop Documents Downloads initial-setup-ks.cfg Music Pictures Public Templates Videos
[root@node2 ~]# vim /etc/NetworkManager/NetworkManager.conf
[root@node2 ~]#
[root@node2 ~]# systemctl restart NetworkManager.service
[root@node2 ~]# vim /etc/resolv.conf
[root@node2 ~]#
[root@node2 ~]# more /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.0.30.95
[root@node2 ~]# /etc/init.d/network restart
Restarting network (via systemctl): [ OK ]
[root@node2 ~]# more /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.0.30.95
5.4、测试dns是否能正常解析:

正向解析:

[root@node2 ~]# dig -t A node1.gfs.com

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> -t A node1.gfs.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30794
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node1.gfs.com. IN A

;; ANSWER SECTION:
node1.gfs.com. 10800 IN A 10.0.30.51

;; AUTHORITY SECTION:
gfs.com. 10800 IN NS dns1.gfs.com.
gfs.com. 10800 IN NS dns2.gfs.com.

;; ADDITIONAL SECTION:
dns1.gfs.com. 10800 IN A 10.0.30.95
dns2.gfs.com. 10800 IN A 10.0.30.117

;; Query time: 2 msec
;; SERVER: 10.0.30.95#53(10.0.30.95)
;; WHEN: Thu Nov 22 17:44:35 CST 2018
;; MSG SIZE rcvd: 128

反向解析:

[root@node2 ~]# dig -x 10.0.30.51 

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> -x 10.0.30.51
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50154
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;51.30.0.10.in-addr.arpa. IN PTR

;; ANSWER SECTION:
51.30.0.10.in-addr.arpa. 10800 IN PTR node1.gfs.com.

;; AUTHORITY SECTION:
30.0.10.in-addr.arpa. 10800 IN NS dns2.gfs.com.
30.0.10.in-addr.arpa. 10800 IN NS dns1.gfs.com.

;; ADDITIONAL SECTION:
dns1.gfs.com. 10800 IN A 10.0.30.95
dns2.gfs.com. 10800 IN A 10.0.30.117

;; Query time: 2 msec
;; SERVER: 10.0.30.95#53(10.0.30.95)
;; WHEN: Thu Nov 22 17:44:47 CST 2018
;; MSG SIZE rcvd: 149

上面单节点的dns配置完成,下面配置dns主从,从节点实时同步主节点更新的数据。

6、主节点修改/etc/named.rfc1912.zones

allow-transfer { 10.0.30.117; }; #指定允许进行区域传送的目标主机,即从服务器
also-notify:主域名服务器更新后主动通知从域名服务器进行更新,而不需要等到规定的刷新时间后从服务器才来拉取

zone "gfs.com" IN {
type master;
file "node.conf";
allow-transfer { 10.0.30.117; };
also-notify { 10.0.30.117; };
};

zone "30.0.10.in-addr.arpa" IN{
type master;
file "node.txt";
also-notify { 10.0.30.117; };
allow-transfer { 10.0.30.117; };
};

7、修改主节点正向解析文件

[root@node4 named]# more node.conf

$TTL 3H
@ IN SOA gfs.com. rname.invalid. (
2 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS dns1.gfs.com.
NS dns2.gfs.com.
dns1 A 10.0.30.95
dns2 A 10.0.30.117
node1 A 10.0.30.51
node2 A 10.0.30.35
node3 A 10.0.30.117
AAAA ::1

8、修改主节点反向解析文件

[root@node4 named]# more node.txt

$TTL 3H
@ IN SOA gfs.com. rname.invalid. (
2 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS dns1.gfs.com.
NS dns2.gfs.com.
95 PTR dns1.gfs.com.
117 PTR dns2.gfs.com.
51 PTR node1.gfs.com.
35 PTR node2.gfs.com.
117 PTR node3.gfs.com.

9、重新启动dns服务器

systemctl restart named.service

10、在从服务器上安装dns服务

yum -y install bind bind-chroot bind-utils

11、修改从节点/etc/named.conf配置文件

options {
listen-on port 53 { 10.0.30.117;127.0.0.1; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
forwarders { 8.8.8.8; };

12、修改/etc/named.rfc1912.zones

allow-notify 接受这个网段内的主机发送来的通知

zone "gfs.com" IN {
type slave;
masters { 10.0.30.95; };
allow-notify { 10.0.30.95; };
file "slaves/node.conf";
};

zone "30.0.10.in-addr.arpa" IN{
type slave;
masters { 10.0.30.95; };
allow-notify { 10.0.30.95; };
file "slaves/node.txt";
};

13、从服务器启动dns服务

systemctl start named.service

14、查看/var/named/slaves目录下已经多出了node.*的两个文件

[root@node3 slaves]# pwd
/var/named/slaves
[root@node3 slaves]# ll
total 8
-rw-r--r--. 1 named named 416 Nov 22 16:12 node.conf
-rw-r--r--. 1 named named 514 Nov 22 16:12 node.txt

15、测试主从文件能否正常同步,修改主节点node.*配置文件

[root@node4 named]# vim node.conf 
[root@node4 named]# vim node.txt
[root@node4 named]#
[root@node4 named]# ll node.*
-rw-r-----. 1 named root 265 Nov 22 18:48 node.conf
-rw-r-----. 1 named root 272 Nov 22 18:48 node.txt

***注意修改配置文件的时候一定要修改配置文件中serial那一行,只有该行的数值大于前一个值,才会同步到从节点。
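
也可以在从节点上用dig核对SOA的serial是否已经同步(仅作示意):

#serial应与主节点上最新的值一致
dig +short SOA gfs.com @10.0.30.117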

16、主节点重新加载dns

systemctl reload named.service

17、查看从节点日志信息

Nov 22 18:48:55 node3 named[98442]: client 10.0.30.95#52505: received notify for zone 'gfs.com'
Nov 22 18:48:55 node3 named[98442]: zone gfs.com/IN: Transfer started.
Nov 22 18:48:55 node3 named[98442]: transfer of 'gfs.com/IN' from 10.0.30.95#53: connected using 10.0.30.117#58148
Nov 22 18:48:55 node3 named[98442]: zone gfs.com/IN: transferred serial 3
Nov 22 18:48:55 node3 named[98442]: transfer of 'gfs.com/IN' from 10.0.30.95#53: Transfer completed: 1 messages, 9 records, 252 bytes, 0.008 secs (31500 bytes/sec)
Nov 22 18:48:55 node3 named[98442]: zone gfs.com/IN: sending notifies (serial 3)
Nov 22 18:48:55 node3 named[98442]: client 10.0.30.95#54023: received notify for zone '30.0.10.in-addr.arpa'
Nov 22 18:48:55 node3 named[98442]: zone 30.0.10.in-addr.arpa/IN: Transfer started.
Nov 22 18:48:55 node3 named[98442]: transfer of '30.0.10.in-addr.arpa/IN' from 10.0.30.95#53: connected using 10.0.30.117#51530
Nov 22 18:48:55 node3 named[98442]: zone 30.0.10.in-addr.arpa/IN: transferred serial 3
Nov 22 18:48:55 node3 named[98442]: transfer of '30.0.10.in-addr.arpa/IN' from 10.0.30.95#53: Transfer completed: 1 messages, 8 records, 249 bytes, 0.002 secs (124500 bytes/sec)
Nov 22 18:48:55 node3 named[98442]: zone 30.0.10.in-addr.arpa/IN: sending notifies (serial 3)

18、查看从节点配置文件信息

[root@node3 slaves]# ll
total 8
-rw-r--r--. 1 named named 375 Nov 22 18:48 node.conf
-rw-r--r--. 1 named named 433 Nov 22 18:48 node.txt

从节点文件已和主节点正常同步

19、客户端测试主节点dns服务器挂掉从节点能否正常使用

[root@node2 ~]# more /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.0.30.95
nameserver 10.0.30.117

20、测试正向解析是否正常

[root@node2 ~]# dig -t A node1.gfs.com

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> -t A node1.gfs.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node1.gfs.com. IN A

;; ANSWER SECTION:
node1.gfs.com. 10800 IN A 10.0.30.51

;; AUTHORITY SECTION:
gfs.com. 10800 IN NS dns1.gfs.com.
gfs.com. 10800 IN NS dns2.gfs.com.

;; ADDITIONAL SECTION:
dns1.gfs.com. 10800 IN A 10.0.30.95
dns2.gfs.com. 10800 IN A 10.0.30.117

;; Query time: 0 msec
;; SERVER: 10.0.30.95#53(10.0.30.95)
;; WHEN: Thu Nov 22 18:54:03 CST 2018
;; MSG SIZE rcvd: 128

21、主节点停掉dns服务:

[root@node4 named]# systemctl stop named.service
[root@node4 named]# systemctl status named.service
● named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; disabled; vendor preset: disabled)
Active: inactive (dead)

Nov 22 18:48:55 node4 named[10366]: zone 30.0.10.in-addr.arpa/IN: sending notifies (serial 3)
Nov 22 18:48:55 node4 named[10366]: client 10.0.30.117#58148 (gfs.com): transfer of 'gfs.com/IN': AXFR-style IXFR started
Nov 22 18:48:55 node4 named[10366]: client 10.0.30.117#58148 (gfs.com): transfer of 'gfs.com/IN': AXFR-style IXFR ended
Nov 22 18:48:55 node4 named[10366]: client 10.0.30.117#28664: received notify for zone 'gfs.com'
Nov 22 18:48:55 node4 named[10366]: client 10.0.30.117#51530 (30.0.10.in-addr.arpa): transfer of '30.0.10.in-addr.arpa/IN': AXFR-style IXFR started
Nov 22 18:48:55 node4 named[10366]: client 10.0.30.117#51530 (30.0.10.in-addr.arpa): transfer of '30.0.10.in-addr.arpa/IN': AXFR-style IXFR ended
Nov 22 18:48:56 node4 named[10366]: client 10.0.30.117#41288: received notify for zone '30.0.10.in-addr.arpa'
Nov 22 18:59:26 node4 systemd[1]: Stopping Berkeley Internet Name Domain (DNS)...
Nov 22 18:59:26 node4 named[10366]: received control channel command 'stop'
Nov 22 18:59:26 node4 systemd[1]: Stopped Berkeley Internet Name Domain (DNS).

22、客户端解析仍然正常

[root@node2 ~]# dig -t A node1.gfs.com

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> -t A node1.gfs.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39672
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node1.gfs.com. IN A

;; ANSWER SECTION:
node1.gfs.com. 10800 IN A 10.0.30.51

;; AUTHORITY SECTION:
gfs.com. 10800 IN NS dns2.gfs.com.
gfs.com. 10800 IN NS dns1.gfs.com.

;; ADDITIONAL SECTION:
dns1.gfs.com. 10800 IN A 10.0.30.95
dns2.gfs.com. 10800 IN A 10.0.30.117

;; Query time: 2 msec
;; SERVER: 10.0.30.117#53(10.0.30.117)
;; WHEN: Thu Nov 22 18:59:48 CST 2018
;; MSG SIZE rcvd: 128

23、指定dns为主节点,解析失败

[root@node2 ~]# dig -t A node1.gfs.com @10.0.30.95

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> -t A node1.gfs.com @10.0.30.95
;; global options: +cmd
;; connection timed out; no servers could be reached

24、指定为从节点,解析正常

[root@node2 ~]# dig -t A node1.gfs.com @10.0.30.117

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> -t A node1.gfs.com @10.0.30.117
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53270
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node1.gfs.com. IN A

;; ANSWER SECTION:
node1.gfs.com. 10800 IN A 10.0.30.51

;; AUTHORITY SECTION:
gfs.com. 10800 IN NS dns1.gfs.com.
gfs.com. 10800 IN NS dns2.gfs.com.

;; ADDITIONAL SECTION:
dns1.gfs.com. 10800 IN A 10.0.30.95
dns2.gfs.com. 10800 IN A 10.0.30.117

;; Query time: 2 msec
;; SERVER: 10.0.30.117#53(10.0.30.117)
;; WHEN: Thu Nov 22 19:01:30 CST 2018
;; MSG SIZE rcvd: 128

25、把从节点dns服务也关掉

[root@node3 slaves]# systemctl stop named.service

26、解析域名报错

[root@node2 ~]# dig -t A node1.gfs.com

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> -t A node1.gfs.com
;; global options: +cmd
;; connection timed out; no servers could be reached

27、额外参考信息

DNS服务器监听端口:
PORT: udp/tcp 53 ---> 服务端口
PORT: tcp 953 ---> rndc远程控制端口(区域传送走53/tcp,并非主从同步端口)
dns各参数详解,点击[此处](https://blog.csdn.net/bpingchang/article/details/38427113)跳转


[Mysql] mysql-show table status语法


1、使用方法:

show table status from aaaa like 'test_order';

show table status in aaaa like 'test_order';

结果显示如下:

*************************** 1. row ***************************
Name: test_order
Engine: InnoDB
Version: 10
Row_format: Dynamic
Rows: 84052104
Avg_row_length: 372
Data_length: 31321817088
Max_data_length: 0
Index_length: 25528303616
Data_free: 7340032
Auto_increment: 86709693
Create_time: 2018-11-02 16:20:25
Update_time: NULL
Check_time: NULL
Collation: utf8mb4_general_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.00 sec)

2、具体参数详解:

Name:表名
Engine:表使用的存储引擎
Version:该列在8.0版本已经废弃,在5.7版本下显示硬编码10
Row_format:行存储格式(Fixed, Dynamic, Compressed, Redundant, Compact)。对于myisam表,Dynamic值与 myisamchk -dvv查询出来的Packed值相对应。
Rows:对于myisam存储引擎,显示实际行数。对于其他引擎,比如innodb,值是近似的(粗略估计),和实际值可能相差40%至50%之间,想要获取准确的值使用select count(*)。
Avg_row_length:平均行长度。
Data_length:对于Myisam表,该值以字节为单位显示数据文件的长度;对于Innodb,该值以字节为单位显示为聚簇索引分配内存的值(近似)。
Max_data_length:对于Myisam表,该值显示数据文件的最大长度。这是可以存储在表中的数据的总字节数,该表给定使用的数据指针大小。对于Innodb,该值已弃用。
Index_length:对于Myisam,该值为索引文件的长度(以字节为单位)。对于Innodb,该值显示内存为非聚簇索引分配的数量(以字节为单位),它是内存页中非聚簇索引大小的总和与
Innodb内存页大小的乘积。
Data_free:已分配但未使用的字节数。Innodb-tables代表当前表所属表空间的可用空间。如果该表位于共享表空间,该值代表共享表空间的剩余空间。如果使用多个表空间,每个表拥有自己的表空间,该值仅代表当前表的剩余空间。剩余空间的意思就是当前完全空闲的区字节数减去一个安全的边界值。即使该值为0,只要新的区可以被分配插入数据也是正常的。
对于NDB集群,该值显示磁盘分配的空间,但是还没有被磁盘数据表或者磁盘数据碎片使用的空间。
对于分区表,该值只是估计值并不准确。想要获取更加准确的方法使用下面方法查询:
SELECT SUM(DATA_FREE) FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';
Auto_increment:下一个自动增长值。
Create_time:表创建时间。
Update_time:数据文件最后更新时间。对于一些存储引擎,该值为null。例如,Innodb将多个表存储在系统表空间时,数据文件的时间戳不适用。对于使用file-per-table模式的Innodb表,每个表都有自己的.ibd文件,由于内存buffer写入数据文件存在延迟,文件修改时间与最后的插入、更新、删除时间并不相同。对于Myisam表,数据文件时间戳是可用的,但是在windows系统上更新并不会更新时间戳,因此该值是不准确的。
对于非分区的Innodb表,该值显示最后update、insert、delete的时间戳的值。对于MVCC,时间戳的值代表commit的时间。如果服务器重启或者该表被数据字典缓存清除,时间戳不会保留。
Check_time:表最后一次检查时间。
Collation:表默认的排序规则。
Checksum:校验值。
Create_options:创建表时的额外选项。当执行create table时被保留的原始选项,这些选项与活动表的设置和选项不同。
对于Innodb表,该值显示row_format和key_block_size选项。如果是分区表,该值会显示partitioned。当创建或者修改一个file-per-table表空间为加密模式时,该值会显示为ENCRYPTION(general tablespace除外)。
Comment:创建表时添加的注释。
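
下面用命令行对比一下Rows估计值与精确行数(仅作示意,库名aaaa、表名test_order沿用上文示例,大表上count(*)可能较慢):

mysql -e "SHOW TABLE STATUS FROM aaaa LIKE 'test_order'\G" | grep -w Rows
mysql -e "SELECT COUNT(*) FROM aaaa.test_order;"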

3、测试对innodb表添加encryption功能:

3.1、安装插件

INSTALL PLUGIN keyring_file  soname 'keyring_file.so';

3.2、创建密钥文件目录

mkdir -p /data/mysql/keyfile
chown mysql.mysql -R /data/mysql/keyfile

3.3、设置key文件:

set global keyring_file_data='/data/mysql/keyfile/key01';
show global variables like '%keyring_file_data%';
root@db 16:28: [aaaa]> show global variables like '%keyring_file_data%';
+-------------------+---------------------------+
| Variable_name | Value |
+-------------------+---------------------------+
| keyring_file_data | /data/mysql/keyfile/key01 |
+-------------------+---------------------------+
1 row in set (0.01 sec)

3.4、查看插件状态:

root@db 16:27:  [aaaa]> SELECT PLUGIN_NAME, PLUGIN_STATUS
-> FROM INFORMATION_SCHEMA.PLUGINS
-> WHERE PLUGIN_NAME LIKE 'keyring%';
+--------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+--------------+---------------+
| keyring_file | ACTIVE |
+--------------+---------------+
1 row in set (0.01 sec)

3.5、表aa未加密时,查看表状态:

root@db 16:29:  [aaaa]> show table status from aaaa like 'aa';
+------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+--------------------+----------+----------------+---------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
+------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+--------------------+----------+----------------+---------+
| aa | InnoDB | 10 | Dynamic | 12 | 1365 | 16384 | 0 | 0 | 0 | NULL | 2018-10-29 17:30:47 | NULL | NULL | utf8mb4_general_ci | NULL | | |
+------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+--------------------+----------+----------------+---------+
1 row in set (0.00 sec)

3.6、对表aa进行加密:

root@db 16:29:  [aaaa]> alter table aa encryption='Y';
Query OK, 12 rows affected (5.13 sec)
Records: 12 Duplicates: 0 Warnings: 0

3.7、再次查看表aa状态:

root@db 16:30:  [aaaa]> show table status from aaaa like 'aa';
+------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+--------------------+----------+----------------+---------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
+------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+--------------------+----------+----------------+---------+
| aa | InnoDB | 10 | Dynamic | 12 | 1365 | 16384 | 0 | 0 | 0 | NULL | 2018-11-14 16:30:16 | 2018-11-14 16:30:14 | NULL | utf8mb4_general_ci | NULL | ENCRYPTION="Y" | |
+------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+--------------------+----------+----------------+---------+
1 row in set (0.01 sec)

4、下面是大佬分享的一个存储过程,可以方便地查询某个数据库下所有表的行数(估计值)和占用空间,点此跳转至原文,具体内容如下:

4.1、存储过程所在的aaaa数据库,可根据自己数据库情况自行修改

DELIMITER $$
DROP PROCEDURE IF EXISTS `aaaa`.`sp_status` $$
CREATE PROCEDURE `aaaa`.`sp_status`(dbname VARCHAR(50))
BEGIN
-- Obtaining tables and views
(
SELECT
TABLE_NAME AS `Table Name`,
ENGINE AS `Engine`,
TABLE_ROWS AS `Rows`,
CONCAT(
(FORMAT((DATA_LENGTH + INDEX_LENGTH) / POWER(1024,2),2))
, ' Mb')
AS `Size`,
TABLE_COLLATION AS `Collation`
FROM information_schema.TABLES
WHERE TABLES.TABLE_SCHEMA = dbname
AND TABLES.TABLE_TYPE = 'BASE TABLE'
)
UNION
(
SELECT
TABLE_NAME AS `Table Name`,
'[VIEW]' AS `Engine`,
'-' AS `Rows`,
'-' `Size`,
'-' AS `Collation`
FROM information_schema.TABLES
WHERE TABLES.TABLE_SCHEMA = dbname
AND TABLES.TABLE_TYPE = 'VIEW'
)
ORDER BY 1;
-- Obtaining functions, procedures and triggers
(
SELECT ROUTINE_NAME AS `Routine Name`,
ROUTINE_TYPE AS `Type`,
'' AS `Comment`
FROM information_schema.ROUTINES
WHERE ROUTINE_SCHEMA = dbname
ORDER BY ROUTINES.ROUTINE_TYPE, ROUTINES.ROUTINE_NAME
)
UNION
(
SELECT TRIGGER_NAME,'TRIGGER' AS `Type`,
concat('On ',EVENT_MANIPULATION,': ',EVENT_OBJECT_TABLE) AS `Comment`
FROM information_schema.TRIGGERS
WHERE EVENT_OBJECT_SCHEMA = dbname
)
ORDER BY 2,1;
END$$
DELIMITER ;

4.2、执行存储过程:

call aaaa.sp_status('aaaa');

4.3、结果如下:

root@db 16:58:  [aaaa]> call aaaa.sp_status('aaaa');
+--------------+--------+----------+--------------+--------------------+
| Table Name | Engine | Rows | Size | Collation |
+--------------+--------+----------+--------------+--------------------+
| aa | InnoDB | 12 | 0.02 Mb | utf8mb4_general_ci |
| sequence | InnoDB | 12292293 | 515.98 Mb | utf8mb4_general_ci |
| test | InnoDB | 0 | 0.02 Mb | utf8mb4_general_ci |
| test1 | InnoDB | 6 | 0.02 Mb | utf8mb4_general_ci |
| test_order | InnoDB | 84052104 | 54,216.50 Mb | utf8mb4_general_ci |
+--------------+--------+----------+--------------+--------------------+


[Linux] vsftpd配置ssl


0、如果还没有部署vsftp-server,请先跳转到此处

1、查询vsftpd软件是否支持SSL

ldd /usr/sbin/vsftpd | grep ssl

libssl.so.10 => /lib64/libssl.so.10 (0x00007f6cfc4a6000)

2、生成vsftpd.pem 证书

[root@localhost vsftpd]# openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /etc/vsftpd/vsftpd.pem -out /etc/vsftpd/vsftpd.pem

Generating a 1024 bit RSA private key
..................................++++++
.........++++++
writing new private key to '/etc/vsftpd/vsftpd.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:cn
State or Province Name (full name) []:beijing
Locality Name (eg, city) [Default City]:beijing
Organization Name (eg, company) [Default Company Ltd]:goopal
Organizational Unit Name (eg, section) []:goopal
Common Name (eg, your name or your server's hostname) []:zeven
Email Address []:test@goopal.com

查看生成vsftpd.pem是否成功
ls -l /etc/vsftpd/|grep vsftpd.pem

-rw-r--r-- 1 root root 1982 11月  9 14:20 vsftpd.pem

3、修改主配置文件/etc/vsftpd/vsftpd.conf

下面是ssl参数一些定义,根据自己需求去修改

ssl_enable=yes/no             是否启用 SSL,默认为no
allow_anon_ssl=yes/no 是否允许匿名用户使用SSL,默认为no
rsa_cert_file=/path/to/file rsa证书的位置
dsa_cert_file=/path/to/file dsa证书的位置
force_local_logins_ssl=yes/no 非匿名用户登陆时是否加密,默认为yes
force_local_data_ssl=yes/no 非匿名用户传输数据时是否加密,默认为yes
force_anon_logins_ssl=yes/no 匿名用户登录时是否加密,默认为no
force_anon_data_ssl=yes/no 匿名用户数据传输时是否加密,默认为no
ssl_sslv2=yes/no 是否激活sslv2加密,默认no
ssl_sslv3=yes/no 是否激活sslv3加密,默认no
ssl_tlsv1=yes/no 是否激活tls v1加密,默认yes
ssl_ciphers=加密方法 默认是DES-CBC3-SHA
require_ssl_reuse=no 要求数据连接复用控制连接的ssl会话;多数客户端会另起一个ssl会话,因此这里选no
#使用ftps的情况下,主动模式是不能用的,必须使用被动模式,考虑到安全问题,可以指定被动端口范围:
pasv_enable=YES
pasv_min_port=20000
pasv_max_port=20001

修改vsftpd的主配置文件vsftpd.conf添加如下内容

ssl_enable=YES
allow_anon_ssl=YES
force_anon_logins_ssl=YES
force_anon_data_ssl=YES
force_local_logins_ssl=YES
force_local_data_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
rsa_cert_file=/etc/vsftpd/vsftpd.pem
require_ssl_reuse=no
pasv_enable=YES
pasv_min_port=20000
pasv_max_port=20001

4、重启vsftpd服务

systemctl restart vsftpd

5、默认linux自带的ftp client不支持ssl协议,如果配置了ssl,使用ftp client连接会报错


可以使用第三方的surgeftp测试,下载方式:

wget ftp://ftp.netwinsite.com/pub/surgeftp/surgeftp_23f2_linux64.tar.gz

解压之后,测试连接是否正常:
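
例如(仅作示意,解压后的目录名以实际结果为准):

tar zxvf surgeftp_23f2_linux64.tar.gz
cd surgeftp_2.3f2_linux64 && ./sslftp 192.168.168.120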

也可以使用windows下的ftp软件测试,如FileZilla,配置方法如下:

6、配置过程中遇到的问题:

522 SSL connection failed; session reuse required: see require_ssl_reuse option in vsftpd.conf man page
解决方法:
查看vsftpd.conf配置文件是否配置了下面参数
require_ssl_reuse=no


[Mysql] mysql-mgr解散集群并重新加入集群


1、查看集群状态

 MySQL  dax-mysql-master:3306  JS > cluster = dba.getCluster("prodCluster")
<Cluster:prodCluster>
MySQL dax-mysql-master:3306 JS > cluster.status()
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "dax-mysql-master:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"dax-mysql-master:3306": {
"address": "dax-mysql-master:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"dax-mysql-slave:3306": {
"address": "dax-mysql-slave:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://repl@dax-mysql-master:3306"
}

2、解散集群:

 MySQL  dax-mysql-master:3306  JS > cluster.dissolve()
The cluster still has active ReplicaSets.
Please use cluster.dissolve({force: true}) to deactivate replication
and unregister the ReplicaSets from the cluster.

The following replicasets are currently registered:
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"topology": [
{
"address": "dax-mysql-slave:3306",
"label": "dax-mysql-slave:3306",
"role": "HA"
},
{
"address": "dax-mysql-master:3306",
"label": "dax-mysql-master:3306",
"role": "HA"
}
]
}
}
MySQL dax-mysql-master:3306 JS > cluster.dissolve({force: true})
WARNING: On instance 'dax-mysql-slave:3306' configuration cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please set the 'group_replication_start_on_boot' variable to 'OFF' in the server configuration file, otherwise it might rejoin the cluster upon restart.
WARNING: On instance 'dax-mysql-master:3306' configuration cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please set the 'group_replication_start_on_boot' variable to 'OFF' in the server configuration file, otherwise it might rejoin the cluster upon restart.
The cluster was successfully dissolved.
Replication was disabled but user data was left intact.

3、查看集群信息已经被删掉了:

root@db 16:54:  [mysql_innodb_cluster_metadata]> select * from clusters;
Empty set (0.00 sec)

4、重新建立集群:

MySQL  dax-mysql-master:3306  JS > var cluster = dba.createCluster('prodCluster');
A new InnoDB cluster will be created on instance 'repl@dax-mysql-master:3306'.

Validating instance at dax-mysql-master:3306...

This instance reports its own address as dax-mysql-master
WARNING: The following tables do not have a Primary Key or equivalent column:
aaaa.test

Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.

Instance configuration is suitable.
Creating InnoDB cluster 'prodCluster' on 'repl@dax-mysql-master:3306'...
WARNING: On instance 'dax-mysql-master:3306' membership change cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please use the <Dba>.configureLocalInstance command locally to persist the changes.
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

5、 查看集群状态,现在只有一个节点信息

MySQL  dax-mysql-master:3306  JS > cluster.status();
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "dax-mysql-master:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"dax-mysql-master:3306": {
"address": "dax-mysql-master:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://repl@dax-mysql-master:3306"
}

6、添加新的节点:

cluster.addInstance('dax-mysql-slave:3306')

MySQL dax-mysql-master:3306 JS > cluster.addInstance('dax-mysql-slave:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@dax-mysql-slave:3306': ********
Adding instance to the cluster ...

Validating instance at dax-mysql-slave:3306...

This instance reports its own address as dax-mysql-slave
WARNING: The following tables do not have a Primary Key or equivalent column:
aaaa.test

Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.

Instance configuration is suitable.
WARNING: On instance 'dax-mysql-slave:3306' membership change cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please use the <Dba>.configureLocalInstance command locally to persist the changes.
WARNING: On instance 'dax-mysql-master:3306' membership change cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please use the <Dba>.configureLocalInstance command locally to persist the changes.
The instance 'root@dax-mysql-slave:3306' was successfully added to the cluster.

7、查看集群状态,可以看到两个节点了:

 MySQL  dax-mysql-master:3306  JS > cluster.status();
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "dax-mysql-master:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"dax-mysql-master:3306": {
"address": "dax-mysql-master:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"dax-mysql-slave:3306": {
"address": "dax-mysql-slave:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://repl@dax-mysql-master:3306"
}

root@db 17:14: [mysql_innodb_cluster_metadata]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+------------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+------------------+-------------+--------------+
| group_replication_applier | bddd9c32-8fee-11e8-ac79-525400edbe8d | dax-mysql-slave | 3306 | ONLINE |
| group_replication_applier | d5bd8edd-9a1d-11e8-993e-525400578639 | dax-mysql-master | 3306 | ONLINE |
+---------------------------+--------------------------------------+------------------+-------------+--------------+
2 rows in set (0.00 sec)

root@db 17:14: [mysql_innodb_cluster_metadata]> show status like '%primary%';
+----------------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | d5bd8edd-9a1d-11e8-993e-525400578639 |
+----------------------------------+--------------------------------------+
1 row in set (0.01 sec)


[Mysql] mysql-mgr单主模式切换为多主,多主切换为单主


1、停止组复制(所有节点执行):

mysql> stop group_replication; 
mysql> set global group_replication_single_primary_mode=OFF;
mysql> set global group_replication_enforce_update_everywhere_checks=ON;

2、随便选择某个节点执行

mysql> SET GLOBAL group_replication_bootstrap_group=ON; 
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

3. On the remaining nodes:

mysql> START GROUP_REPLICATION;

Starting one of the nodes fails with the following errors in the log:

2018-11-08T14:00:52.098660Z 56 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Please check that group_replication_recovery channel credentials and all MEMBER_HOST column values of performance_schema.replication_group_members table are correct and DNS resolvable.'
2018-11-08T14:00:52.098689Z 56 [ERROR] Plugin group_replication reported: 'For details please check performance_schema.replication_connection_status table and error log messages of Slave I/O for channel group_replication_recovery.'

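As the message suggests, the recovery channel status table usually shows the underlying cause; a quick way to inspect it (standard performance_schema query on 5.7):

mysql> SELECT channel_name, service_state, last_error_number, last_error_message
       FROM performance_schema.replication_connection_status
       WHERE channel_name = 'group_replication_recovery';
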
Configure the recovery channel credentials manually:

change master to master_user='repl',master_password='12345678' for channel 'group_replication_recovery';

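If the recovery account itself does not exist on the donor, it would also need to be created; a hedged sketch (the user name and password simply mirror the ones used in the change master statement above):

mysql> CREATE USER IF NOT EXISTS 'repl'@'%' IDENTIFIED BY '12345678';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
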
Start Group Replication again:

mysql> START GROUP_REPLICATION;

4. Check the group membership; every node is healthy and every node is a primary:

SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+------------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+------------------+-------------+--------------+
| group_replication_applier | bddd9c32-8fee-11e8-ac79-525400edbe8d | dax-mysql-slave | 3306 | ONLINE |
| group_replication_applier | d5bd8edd-9a1d-11e8-993e-525400578639 | dax-mysql-master | 3306 | ONLINE |
+---------------------------+--------------------------------------+------------------+-------------+--------------+

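Another quick way to confirm the group really is in multi-primary mode (assuming MySQL 5.7): the group_replication_primary_member status variable reports an empty value when there is no single primary.

mysql> show status like 'group_replication_primary_member';   -- empty Value in multi-primary mode
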
5. Check the cluster status from mysql-shell:

 MySQL  dax-mysql-master:3306  JS > cluster = dba.getCluster("prodCluster")
<Cluster:prodCluster>

MySQL dax-mysql-master:3306 JS > cluster.status();
Cluster.status: The InnoDB Cluster topology type (Single-Master) does not match the current Group Replication configuration (Multi-Master). Please use <cluster>.rescan() or change the Group Replication configuration accordingly. (RuntimeError)

The error says the InnoDB Cluster topology type (Single-Master) no longer matches the current Group Replication configuration (Multi-Master), so rescan the cluster with cluster.rescan():

 MySQL  dax-mysql-master:3306  JS > cluster.rescan()
Rescanning the cluster...

Result of the rescanning operation:
{
"defaultReplicaSet": {
"name": "default",
"newlyDiscoveredInstances": [],
"unavailableInstances": []
}
}

Checking the cluster status again still fails with the same error:

 MySQL  dax-mysql-master:3306  JS > cluster.status();
Cluster.status: The InnoDB Cluster topology type (Single-Master) does not match the current Group Replication configuration (Multi-Master). Please use <cluster>.rescan() or change the Group Replication configuration accordingly. (RuntimeError)

Try dropping the cluster metadata:

dba.dropMetadataSchema();
Are you sure you want to remove the Metadata? [y/N]: y
Metadata Schema successfully removed.

Recreate the cluster, adopting the existing replication group:

var cluster = dba.createCluster('prodCluster', {adoptFromGR: true ,force: true});

This fails with:

[ERROR] Plugin group_replication reported: 'Table instances has a foreign key with 'CASCADE' clause. This is not compatible with Group Replication'

Creating the cluster builds an instances table under the mysql_innodb_cluster_metadata schema. When the cluster is created in single-primary mode there is no foreign-key complaint, but in multi-primary mode the foreign keys on this table trigger the error above. The table definition is:

CREATE TABLE `instances` (
`instance_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`host_id` int(10) unsigned NOT NULL,
`replicaset_id` int(10) unsigned DEFAULT NULL,
`mysql_server_uuid` varchar(40) NOT NULL,
`instance_name` varchar(256) NOT NULL,
`role` enum('HA','readScaleOut') NOT NULL,
`weight` float DEFAULT NULL,
`addresses` json NOT NULL,
`attributes` json DEFAULT NULL,
`version_token` int(10) unsigned DEFAULT NULL,
`description` text,
PRIMARY KEY (`instance_id`),
UNIQUE KEY `mysql_server_uuid` (`mysql_server_uuid`),
UNIQUE KEY `instance_name` (`instance_name`),
KEY `host_id` (`host_id`),
KEY `instances_ibfk_2` (`replicaset_id`),
CONSTRAINT `instances_ibfk_1` FOREIGN KEY (`host_id`) REFERENCES `hosts` (`host_id`),
CONSTRAINT `instances_ibfk_2` FOREIGN KEY (`replicaset_id`) REFERENCES `replicasets` (`replicaset_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

The official documentation notes this is a known issue in multi-primary mode, so drop the foreign-key constraints on the table:

ALTER TABLE `mysql_innodb_cluster_metadata`.`instances` DROP FOREIGN KEY `instances_ibfk_1`;
ALTER TABLE `mysql_innodb_cluster_metadata`.`instances` DROP FOREIGN KEY `instances_ibfk_2`;

To re-add the foreign keys later, use the following commands:

ALTER TABLE `mysql_innodb_cluster_metadata`.`instances` ADD CONSTRAINT `instances_ibfk_1` FOREIGN KEY (`host_id`) REFERENCES `hosts` (`host_id`);
ALTER TABLE `mysql_innodb_cluster_metadata`.`instances` ADD CONSTRAINT `instances_ibfk_2`
FOREIGN KEY (`replicaset_id`) REFERENCES `replicasets` (`replicaset_id`);

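To double-check which foreign keys are currently present on the metadata schema before and after these ALTERs, a simple information_schema query can be used (standard MySQL, nothing specific to InnoDB Cluster):

mysql> SELECT constraint_name, table_name
       FROM information_schema.referential_constraints
       WHERE constraint_schema = 'mysql_innodb_cluster_metadata';
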
With the foreign keys dropped, recreate the cluster:

 MySQL  dax-mysql-master:3306  JS > var cluster = dba.createCluster('prodCluster', {adoptFromGR: true ,force: true});
A new InnoDB cluster will be created based on the existing replication group on instance 'repl@dax-mysql-master:3306'.

Creating InnoDB cluster 'prodCluster' on 'repl@dax-mysql-master:3306'...
Adding Instance 'dax-mysql-slave:3306'...
Adding Instance 'dax-mysql-master:3306'...

Cluster successfully created based on existing replication group.

The cluster is created successfully; checking its status shows both nodes in read/write mode:

 MySQL  dax-mysql-master:3306  JS > cluster.status()
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"dax-mysql-master:3306": {
"address": "dax-mysql-master:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"dax-mysql-slave:3306": {
"address": "dax-mysql-slave:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://repl@dax-mysql-master:3306"
}

6. The switch to multi-primary mode is complete. Now switch back from multi-primary to single-primary:

7. Run on every node:

mysql> stop group_replication; 
mysql> set global group_replication_enforce_update_everywhere_checks=OFF;
mysql> set global group_replication_single_primary_mode=ON;

8. On the intended primary (dax-mysql-master):

SET GLOBAL group_replication_bootstrap_group=ON; 
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

9. On the secondary node(s):

START GROUP_REPLICATION;

10. Check the MGR group membership; all nodes are online and in sync:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+------------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+------------------+-------------+--------------+
| group_replication_applier | bddd9c32-8fee-11e8-ac79-525400edbe8d | dax-mysql-slave | 3306 | ONLINE |
| group_replication_applier | d5bd8edd-9a1d-11e8-993e-525400578639 | dax-mysql-master | 3306 | ONLINE |
+---------------------------+--------------------------------------+------------------+-------------+--------------+

Check which member is currently the primary:

show status like '%primary%';
+----------------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | d5bd8edd-9a1d-11e8-993e-525400578639 |
+----------------------------------+--------------------------------------+
1 row in set (0.01 sec)

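The status variable only reports the member UUID; if you want the host name instead, it can be resolved against the group membership table (a small sketch, assuming the 5.7 performance_schema.global_status table is available):

mysql> SELECT member_host, member_port
       FROM performance_schema.replication_group_members
       WHERE member_id = (SELECT variable_value
                          FROM performance_schema.global_status
                          WHERE variable_name = 'group_replication_primary_member');
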
11. Log in to mysql-shell and check the cluster:

MySQL  dax-mysql-master:3306  JS > cluster = dba.getCluster("prodCluster")
<Cluster:prodCluster>

MySQL dax-mysql-master:3306 JS > cluster.status();
Cluster.status: The InnoDB Cluster topology type (Multi-Master) does not match the current Group Replication configuration (Single-Master). Please use <cluster>.rescan() or change the Group Replication configuration accordingly. (RuntimeError)

MySQL dax-mysql-master:3306 JS > cluster.rescan()
Rescanning the cluster...

Result of the rescanning operation:
{
"defaultReplicaSet": {
"name": "default",
"newlyDiscoveredInstances": [],
"unavailableInstances": []
}
}

MySQL dax-mysql-master:3306 JS > cluster.status();
Cluster.status: The InnoDB Cluster topology type (Multi-Master) does not match the current Group Replication configuration (Single-Master). Please use <cluster>.rescan() or change the Group Replication configuration accordingly. (RuntimeError)

This is the same error as during the single-primary to multi-primary switch, so again try dropping the cluster metadata:

dba.dropMetadataSchema();
Are you sure you want to remove the Metadata? [y/N]: y
Metadata Schema successfully removed.

Recreate the cluster:

 MySQL  dax-mysql-master:3306  JS > var cluster = dba.createCluster('prodCluster', {adoptFromGR: true});
A new InnoDB cluster will be created based on the existing replication group on instance 'repl@dax-mysql-master:3306'.

Creating InnoDB cluster 'prodCluster' on 'repl@dax-mysql-master:3306'...
Adding Seed Instance...
Adding Instance 'dax-mysql-slave:3306'...

Cluster successfully created based on existing replication group.

The cluster is created successfully; its status now shows one primary and one secondary:

 MySQL  dax-mysql-master:3306  JS >  cluster.status();
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "dax-mysql-master:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"dax-mysql-master:3306": {
"address": "dax-mysql-master:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"dax-mysql-slave:3306": {
"address": "dax-mysql-slave:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://repl@dax-mysql-master:3306"
}


[Mysql] Changing server-id one node at a time in a two-node mysql-mgr cluster


1. Check the server-id currently set on the secondary node:

root@db 16:55:  [(none)]> show variables like "%server%";
+---------------------------------------------------+--------------------------------------+
| Variable_name | Value |
+---------------------------------------------------+--------------------------------------+
| character_set_server | utf8mb4 |
| collation_server | utf8mb4_general_ci |
| group_replication_recovery_ssl_verify_server_cert | OFF |
| innodb_ft_server_stopword_table | |
| server_id | 3306102 |
| server_id_bits | 32 |
| server_uuid | bddd9c32-8fee-11e8-ac79-525400edbe8d |
+---------------------------------------------------+--------------------------------------+

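The goal of this procedure is to change that server_id. For reference, a minimal sketch of the change applied in step 3 below (the new value 3306103 is only an example; because this is MySQL 5.7, server-id must also be updated in my.cnf so the change survives the restart):

mysql> SET GLOBAL server_id = 3306103;   -- example value; must be unique within the group
-- also set server-id=3306103 under [mysqld] in /etc/my.cnf before restarting
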
2. Log in to mysql-shell and check the cluster:

MySQL  dax-mysql-master:3306  JS > cluster = dba.getCluster("prodCluster")
MySQL dax-mysql-master:3306 JS > cluster.status();
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "dax-mysql-master:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"dax-mysql-master:3306": {
"address": "dax-mysql-master:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"dax-mysql-slave:3306": {
"address": "dax-mysql-slave:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://repl@dax-mysql-master:3306"
}

Remove the secondary node from the cluster:

MySQL dax-mysql-master:3306 JS > cluster.removeInstance('dax-mysql-slave:3306')

The instance will be removed from the InnoDB cluster. Depending on the 
instance being the Seed or not, the Metadata session might become invalid.
If so, please start a new session to the Metadata Storage R/W instance.

WARNING: On instance 'dax-mysql-master:3306' membership change cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please use the <Dba>.configureLocalInstance command locally to persist the changes.
WARNING: On instance 'dax-mysql-slave:3306' configuration cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please set the 'group_replication_start_on_boot' variable to 'OFF' in the server configuration file, otherwise it might rejoin the cluster upon restart.
The instance 'dax-mysql-slave:3306' was successfully removed from the cluster.

WARNING: The 'group_replication_start_on_boot' variable must be set to 'OFF' in the server configuration file, otherwise it might silently rejoin the cluster upon restart.

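Following the warning above, on the removed node it is sensible both to set the variable at runtime and to add it to the configuration file (a sketch; group_replication_start_on_boot only matters at startup, so the my.cnf entry is the part that actually counts):

mysql> SET GLOBAL group_replication_start_on_boot = OFF;
-- and add group_replication_start_on_boot=OFF under [mysqld] in my.cnf
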
Rescan the cluster:

 MySQL  dax-mysql-master:3306  JS > cluster.rescan();
Rescanning the cluster...

Result of the rescanning operation:
{
"defaultReplicaSet": {
"name": "default",
"newlyDiscoveredInstances": [],
"unavailableInstances": []
}
}

Check the cluster status; only the master is left in the topology:

 MySQL  dax-mysql-master:3306  JS > cluster.status();
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "dax-mysql-master:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"dax-mysql-master:3306": {
"address": "dax-mysql-master:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://repl@dax-mysql-master:3306"
}

3. Change the server-id on the removed node, restart the instance, then add it back to the cluster:

mysql-shell > cluster.addInstance('dax-mysql-slave:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@dax-mysql-slave:3306': ********
Adding instance to the cluster ...

Validating instance at dax-mysql-slave:3306...

This instance reports its own address as dax-mysql-slave
WARNING: The following tables do not have a Primary Key or equivalent column:
aaaa.test

Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.

Instance configuration is suitable.
WARNING: On instance 'dax-mysql-slave:3306' membership change cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please use the <Dba>.configureLocalInstance command locally to persist the changes.
WARNING: On instance 'dax-mysql-master:3306' membership change cannot be persisted since MySQL version 5.7.22 does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please use the <Dba>.configureLocalInstance command locally to persist the changes.
The instance 'root@dax-mysql-slave:3306' was successfully added to the cluster.

Check the cluster status:

 MySQL  dax-mysql-master:3306  JS > cluster.status();
{
"clusterName": "prodCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "dax-mysql-master:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"dax-mysql-master:3306": {
"address": "dax-mysql-master:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"dax-mysql-slave:3306": {
"address": "dax-mysql-slave:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://repl@dax-mysql-master:3306"
}

Repeat the same procedure to change the server-id on the remaining node.


[Oracle] How locally managed tablespaces allocate extents under UNIFORM, SYSTEM and other allocation modes


1. Tablespace and table both use Oracle defaults

Check the default tablespace of the SCOTT user:

select username,default_tablespace from dba_users where username='SCOTT';
USERNAME DEFAULT_TABLESPACE
------------------------------ ------------------------------
SCOTT USERS

Check the extent settings of the USERS tablespace:

select TABLESPACE_NAME,BLOCK_SIZE,INITIAL_EXTENT,NEXT_EXTENT,MIN_EXTENTS,MAX_EXTENTS,EXTENT_MANAGEMENT,ALLOCATION_TYPE from user_tablespaces where tablespace_name='USERS';

TABLESPACE_NAME BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS EXTENT_MAN ALLOCATIO
------------------------------ ---------- -------------- ----------- ----------- ----------- ---------- ---------
USERS 8192 65536 1 2147483645 LOCAL SYSTEM

Check the database block size:

show parameter db_block_size;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_block_size integer 8192

Create a test table:

11:43:49 SCOTT@ boston>  create table t1 as select * from emp where 0=1;
Table created.
The table is empty; check whether any extent has been allocated:
11:43:54 SYS@ boston> select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';
no rows selected

Insert data into t1:

11:43:50 SCOTT@ boston>  insert into t1  select * from emp;
14 rows created.

One extent of 8 blocks (64KB) has now been allocated:

11:43:55 SYS@ boston>  select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T1 0 4 176 8
Elapsed: 00:00:00.50

The data volume is tiny, so only one extent was allocated; insert a larger amount of data:

12:00:40 SCOTT@ boston>  insert into t1  select * from t1;
42 rows created.
...
12:00:57 SCOTT@ boston> /
2688 rows created.
Elapsed: 00:00:00.02

Check the extents again. After the first 16 extents the segment has reached 16 extents * 8 blocks * 8KB = 1MB; once it exceeds 1MB, the next extent allocated jumps to 128 blocks, i.e. 128 * 8KB = 1MB per extent:

12:01:01 SYS@ boston>  select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T1 0 4 176 8
...
T1 15 4 296 8
T1 16 4 384 128

Insert more data:

12:28:36 SCOTT@ boston> /
5376 rows created.
Elapsed: 00:00:00.02
....
12:30:11 SCOTT@ boston> /
688128 rows created.

Check the extents:

12:30:14 SYS@ boston>select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';
SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T1 0 4 176 8
....
T1 15 4 296 8
T1 16 4 384 128
...
T1 78 4 8320 128
T1 79 4 8448 1024
80 rows selected.

Once the segment reaches roughly 64MB, the next extent allocated jumps to 1024 blocks, i.e. 1024 * 8KB = 8MB.

Manually allocate extents for t1:

13:35:52 SCOTT@ boston> alter table t1 allocate extent;
Table altered.
Elapsed: 00:00:00.49
14:18:45 SCOTT@ boston> /
Table altered.
Elapsed: 00:00:00.07
14:19:28 SCOTT@ boston> /
Table altered.
Elapsed: 00:00:00.06

Check the extents:

14:19:06 SYS@ boston> select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';
SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T1 0 4 176 8
...
T1 15 4 296 8
T1 16 4 384 128
...
T1 78 4 8320 128
T1 79 4 8448 1024
T1 80 4 9472 128
T1 81 4 9600 128
T1 82 4 9728 128
83 rows selected.
Elapsed: 00:00:00.45

Extents allocated manually with alter table ... allocate extent are 1MB (128 blocks) each.

Insert data into the table again:

14:19:30 SCOTT@ boston>  insert into t1  select * from t1;
1376256 rows created.
Elapsed: 00:00:12.03

Check the extents:

14:20:34 SYS@ boston> select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';
SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T1 0 4 176 8
...
T1 15 4 296 8
T1 16 4 384 128
...
T1 78 4 8320 128
T1 79 4 8448 1024
T1 80 4 9472 128
T1 81 4 9600 128
T1 82 4 9728 128
T1 83 4 9856 1024
...
T1 88 4 14976 1024
89 rows selected.

The extents allocated for these later inserts are all 1024 blocks, i.e. 8MB each.

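A convenient way to see the whole allocation pattern at a glance is to group the extents by size (a plain dictionary-view query, nothing specific to this test):

select blocks, bytes/1024 as size_kb, count(*) as extents
from   dba_extents
where  owner = 'SCOTT' and segment_name = 'T1'
group  by blocks, bytes
order  by blocks;
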
2. Tablespace uses default parameters, table storage customized (INITIAL smaller than 65536 bytes)

Create a test table:

create table t2 as select * from emp where 0=1;
Table created.

ALTER TABLE t2 MOVE STORAGE ( INITIAL 20k NEXT 40k) TABLESPACE users;
Table altered.
Elapsed: 00:00:00.23

Check the table's storage parameters:

SELECT initial_extent, next_extent,  min_extents, max_extents, pct_increase, blocks, sample_size FROM   user_tables WHERE  table_name = 'T2';
INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE BLOCKS SAMPLE_SIZE
-------------- ----------- ----------- ----------- ------------ ---------- -----------
24576 40960

The table is empty; check whether any extent has been allocated:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T2';
no rows selected

Insert data into the empty table:

insert into t2 select * from emp;
insert into t2 select * from t2;

Check the extents:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T2';
SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T2 0 4 168 8

An extent is never smaller than 8 blocks.
Insert data repeatedly and query again:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T2';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T2 0 4 168 8
...
T2 15 4 17184 8
T2 16 4 17280 128
...
T2 78 4 7936 128
T2 79 4 8064 1024
T2 80 4 9088 1024
T2 81 4 10112 1024
T2 82 4 11136 1024

The extent size grows in the pattern 64KB, then 1MB, then 8MB, and so on.

3. Tablespace uses default parameters, table storage customized (INITIAL larger than 64K, NEXT smaller than 64K)

Create a test table:

 create table t3 as select * from emp where 0=1;
Table created.

ALTER TABLE t3 MOVE STORAGE ( INITIAL 100k NEXT 40k) TABLESPACE users;
Table altered.
Elapsed: 00:00:00.23

Check the table's storage parameters:

SELECT initial_extent, next_extent,  min_extents, max_extents, pct_increase, blocks, sample_size FROM   user_tables WHERE  table_name = 'T3';

INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE BLOCKS SAMPLE_SIZE
-------------- ----------- ----------- ----------- ------------ ---------- -----------
106496 40960

The table is empty; check whether any extent has been allocated:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T3';
no rows selected

Insert data into the empty table:

insert into t3 select * from emp;

Two extents of 8 blocks each are allocated in one go (the 100K INITIAL is rounded up to two 64KB extents):

 select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T3';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T3 0 4 176 8
T3 1 4 184 8

Insert more data:

insert into t3 select * from t3;

Without inserting too much data, the next allocation is a single extent of 8 blocks:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T3';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T3 0 4 208 8
T3 1 4 216 8
T3 2 4 224 8

With further inserts, the extent size again grows in the 64KB, 1MB, 8MB pattern.

4. Tablespace uses default parameters, table storage customized (INITIAL larger than 64K, NEXT larger than 64K)

Create a test table:

 create table t4 as select * from emp where 0=1;
Table created.

ALTER TABLE t4 MOVE STORAGE ( INITIAL 100k NEXT 140k) TABLESPACE users;
Table altered.
Elapsed: 00:00:00.23

Check the table's storage parameters:

14:31:43 SCOTT@ boston> SELECT initial_extent, next_extent,  min_extents, max_extents, pct_increase, blocks, sample_size FROM   user_tables WHERE  table_name = 'T4';

INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE BLOCKS SAMPLE_SIZE
-------------- ----------- ----------- ----------- ------------ ---------- -----------
106496 147456

The table is empty; check whether any extent has been allocated:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T4';
no rows selected

Insert data into the empty table:

insert into t4 select * from emp;

Two extents of 8 blocks each are allocated in one go (the 100K INITIAL is rounded up to two 64KB extents):

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T4';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T4 0 4 176 8
T4 1 4 184 8

Insert more data:

insert into t4 select * from t4;

Without inserting too much data, the next allocation is a single extent of 8 blocks:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T4';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T4 0 4 176 8
T4 1 4 184 8
T4 2 4 192 8

With further inserts, the extent size again grows in the 64KB, 1MB, 8MB pattern.

5. Create a tablespace with a uniform extent size of 2MB

Create the test tablespace:

create tablespace test datafile '/data/u01/app/oracle/oradata/boston/test01.dbf' size 100M extent management local uniform size 2m;

Check the tablespace parameters:

15:28:39 SYS@ boston> select TABLESPACE_NAME,BLOCK_SIZE,INITIAL_EXTENT,NEXT_EXTENT,MIN_EXTENTS,MAX_EXTENTS,EXTENT_MANAGEMENT,ALLOCATION_TYPE from user_tablespaces where tablespace_name='TEST';

TABLESPACE_NAME BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS EXTENT_MAN ALLOCATIO
------------------------------ ---------- -------------- ----------- ----------- ----------- ---------- ---------
TEST 8192 2097152 2097152 1 2147483645 LOCAL UNIFORM
6. Table placed in the TEST tablespace created above, table storage left at defaults

Create the test table:

drop table t1;
create table t1 as select * from emp where 0=1;
alter table t1 move tablespace test;

Check t1's storage parameters:

SELECT initial_extent,next_extent,min_extents,max_extents,pct_increase,blocks,sample_size FROM user_tables WHERE  table_name = 'T1';

INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE BLOCKS SAMPLE_SIZE
-------------- ----------- ----------- ----------- ------------ ---------- -----------
2097152 2097152 1 2147483645 0

The table is empty; check whether any extent has been allocated:

15:32:51 SYS@ boston> select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';
no rows selected
Elapsed: 00:00:00.04

Insert data:

15:33:16 SCOTT@ boston> insert into t1  select * from emp;
14 rows created.

A single 2MB extent (256 blocks) is allocated to t1:

15:32:54 SYS@ boston> select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';
SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T1 0 5 128 256

After bulk-inserting more data, check the extents again:

15:48:07 SYS@ boston> select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T1';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T1 0 5 128 256
...
T1 80 5 20608 256
81 rows selected.

No matter how much data is inserted, because the tablespace uses a uniform 2MB extent size, every extent is always 256 blocks.

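The same conclusion can be read directly from dba_segments, which reports the extent count and total size of the segment (standard dictionary query):

select segment_name, extents, blocks, bytes/1024/1024 as size_mb
from   dba_segments
where  owner = 'SCOTT' and segment_name = 'T1';
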
7. Table placed in the TEST tablespace, table storage customized

Create the test table:

drop table t2;
create table t2 as select * from emp where 0=1;
ALTER TABLE t2 MOVE STORAGE ( INITIAL 5m NEXT 5m) TABLESPACE test;

Check t2's storage parameters:

SELECT initial_extent,next_extent,min_extents,max_extents,pct_increase,blocks,sample_size FROM user_tables WHERE  table_name = 'T2';

INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE BLOCKS SAMPLE_SIZE
-------------- ----------- ----------- ----------- ------------ ---------- -----------
5242880 5242880

The table is empty; check whether any extent has been allocated:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T2';
no rows selected
Elapsed: 00:00:00.11

Insert data:

insert into t2 select * from emp;

Because INITIAL is set to 5MB and every extent is a uniform 2MB, three extents are allocated:

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T2';

SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T2 0 5 1920 256
T2 1 5 2176 256
T2 2 5 2432 256

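The three extents follow directly from rounding the 5MB INITIAL request up to whole 2MB uniform extents; a trivial arithmetic check:

select ceil(5*1024*1024 / 2097152) as extents_needed from dual;   -- returns 3
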
Insert more data, but not too much in one go:

insert into t2 select * from t2;
...
insert into t2 (select * from t2 where rownum < 10000);

Once those three extents fill up, each further allocation is a single 2MB extent.

select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where OWNER='SCOTT' AND SEGMENT_NAME='T2';
SEGMENT_NAME EXTENT_ID FILE_ID BLOCK_ID BLOCKS
--------------------------------------------------------------------------------- ---------- ---------- ---------- ----------
T2 0 5 640 256
T2 1 5 896 256
T2 2 5 1152 256
T2 3 5 1408 256

8. Summary

In a locally managed tablespace, Oracle allocates extents according to the INITIAL storage parameter (rounded up to whole extents as needed), while the NEXT value is ignored; the official documentation describes the same behaviour.


[Linux] Installing vsftpd on CentOS


1. Check whether vsftpd is already installed; if not, install it with yum:

rpm -qa| grep vsftpd
yum install vsftpd -y

2. Create the FTP user:

adduser -s /sbin/nologin -d /var/ftp/ftp1 ftp1 
#Set the user's password:
passwd ftp1

3. Edit the configuration file:

cat /etc/vsftpd/vsftpd.conf | grep -v '^#'

#Disable anonymous logins
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
#Confine local users to their home directories (chroot):
chroot_list_file=/etc/vsftpd/chroot_list
chroot_local_user=YES
chroot_list_enable=NO
listen=NO
listen_ipv6=YES

pam_service_name=vsftpd
#Only allow users listed in vsftpd.user_list to log in:
userlist_enable=YES
userlist_deny=NO
userlist_file=/etc/vsftpd/vsftpd.user_list
tcp_wrappers=YES
allow_writeable_chroot=YES

4. Add the allowed users to the lists:

cat /etc/vsftpd/vsftpd.user_list
ftp1

cat /etc/vsftpd/chroot_list
ftp1

5. Start the vsftpd service and check its status:

service vsftpd status
service vsftpd start
service vsftpd stop
service vsftpd restart

6. On the client machine, install the ftp client:

yum -y install ftp

Connect to the FTP server:

shell> ftp *.*.*.*

7. Problems encountered during configuration:

500 OOPS: vsftpd: refusing to run with writable root inside chroot ()
Check that the following line is present in the configuration file:
allow_writeable_chroot=YES

500 OOPS: chroot
Login failed.
421 Service not available, remote server has closed connection
Check whether SELinux is in enforcing mode and, if so, set it to permissive:
setenforce 0

ftp vsftpd 530 login incorrect
Fix:
1. Check that the password is correct.
2. Check the /etc/vsftpd/vsftpd.conf settings:
pam_service_name=vsftpd
userlist_enable=YES
userlist_deny=NO
userlist_file=/etc/vsftpd/vsftpd.user_list
3. Check the /etc/pam.d/vsftpd configuration:
#%PAM-1.0
session optional pam_keyinit.so force revoke
auth required pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
auth required pam_shells.so
auth include password-auth
account include password-auth
session required pam_loginuid.so
session include password-auth

