[Mysql] MySQL case-sensitivity parameters


1、lower_case_file_system

value=ON|OFF

This parameter reports whether file names are case-sensitive on the file system where the MySQL data directory resides: OFF means the file system is case-sensitive, ON means it is case-insensitive. The variable is read-only; it merely describes a file system property, and setting it has no effect on the file system.

2、lower_case_table_names

value=0|1|2
Meaning of each value:

The default is 0. With 0, table names are stored as specified and comparisons are case-sensitive.
With 1, table names are stored in lowercase on disk and comparisons are not case-sensitive.
With 2, table names are stored as given but compared in lowercase. This parameter also applies to database names and table aliases.

Default values on different operating systems:

On Windows the default is 1.
On macOS the default is 2.
On Linux the default is 0, and a value of 2 is not supported; if you set 2, the server forces 0 instead.

Issues to watch for in practice:

1) If MySQL runs on an operating system (Windows or macOS) where the file system holding the data directory is case-insensitive, you must not set lower_case_table_names to 0. That combination is unsupported: an INSERT INTO ... SELECT ... FROM tbl_name statement that refers to tbl_name with a different lettercase can hang the server.
2) With MyISAM tables, accessing a table using different lettercases can cause index corruption.
3) If lower_case_table_names=0 is set on a case-insensitive file system, the server prints an error message and exits.
4) If you use InnoDB tables, set this parameter to 1 on all platforms to force names to be converted to lowercase.
5) The server refuses to start if lower_case_table_names differs from the value used when the data directory was initialized. This restriction exists because the collations of data dictionary table columns are defined based on the value at initialization time; restarting with a different value would make identifier ordering and comparison inconsistent.
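
In practice this means choosing the value once, before the data directory is initialized. As a minimal sketch, a my.cnf fragment for a Linux server might look like this (the value shown is an illustrative choice for an InnoDB setup, not a universal recommendation):

```ini
[mysqld]
# Store table names in lowercase and compare them case-insensitively.
# Must match the value used when the data directory was initialized.
lower_case_table_names=1
```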


[Oracle] Restoring a single-instance RMAN backup to a RAC environment on a different host


1、Query the DBID

DBID is short for DataBase IDentifier, the database's unique identifier. The DBID is recorded in both the data file headers and the control file, and can be used to identify which database a data file belongs to. Different databases should have different DBIDs, while their db_name values may be identical. You generally only need the DBID when running in nocatalog mode and the control file has been lost.
Check the DBID of the source database:
SYS@ testcadb> select name,dbid from v$database;
NAME DBID
--------- ----------
TESTCADB 2313387174

2、Check the data file paths of the source database

SYS@ testcadb> select file_id,file_name from dba_data_files;
FILE_ID FILE_NAME

4 /home/oracle/u01/app/oracle/oradata/testcadb/users01.dbf
3 /home/oracle/u01/app/oracle/oradata/testcadb/undotbs01.dbf
2 /home/oracle/u01/app/oracle/oradata/testcadb/sysaux01.dbf
1 /home/oracle/u01/app/oracle/oradata/testcadb/system01.dbf
5 /home/oracle/oradata/orcl/ebank_data.dbf

3、Check the temporary tablespace data files

SYS@ testcadb> select * from dba_temp_files;
FILE_NAME FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
-----------------------------------------------------------------------------------------------------------------------------------
/home/oracle/u01/app/oracle/oradata/testcadb/temp01.dbf 1 TEMP 20971520 2560 ONLINE 1 YES 3.4360E+10 4194302 80 19922944 2432
/home/oracle/oradata/orcl/test_temp01.dbf 2 TEST_TEMP 33554432 4096 ONLINE 1 YES 2147483648 262144 4096 32505856 3968

4、Check the redo log file paths:

sql > select a.bytes/1024/1024,b.member,b.group#,a.thread# from v$log a,v$logfile b where a.group#=b.group#;
A.BYTES/1024/1024 MEMBER GROUP# THREAD#
50 /home/oracle/u01/app/oracle/oradata/testcadb/redo03.log 3 1
50 /home/oracle/u01/app/oracle/oradata/testcadb/redo02.log 2 1
50 /home/oracle/u01/app/oracle/oradata/testcadb/redo01.log 1 1

5、Back up the database (for a database in noarchivelog mode, run this while the database is mounted)

shell > mkdir -p /tmp/backup/
shell > rman target /
RMAN> run {
allocate channel c1 device type disk;
backup incremental level 0 format '/tmp/backup/db_full_%U.bkp' tag '2018-12-19-FULL' database plus archivelog;
release channel c1;
}
Back up the control file
RMAN> backup current controlfile format '/tmp/backup/control20181219.bak';
Back up the spfile
RMAN> backup spfile format '/tmp/backup/spfile20181219.bak';

6、Copy the backup files from the source to the target server

shell > scp /tmp/backup/* oracle@hostip*:/home/oracle/backup/

7、All of the following steps are performed on the target nodes

8、Check the file ownership and fix it if necessary

shell > chown oracle.oinstall /home/oracle/backup/*

9、Restore the spfile on node 1:

Create a temporary pfile
shell > echo 'db_name=testcadb' > $ORACLE_HOME/dbs/inittestcadb1.ora
Set ORACLE_SID
shell > export ORACLE_SID=testcadb1
shell > rman target /
Start the instance to nomount with the temporary pfile:
rman > startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/inittestcadb1.ora';
Set the DBID
rman > set DBID=2313387174;
Restore the spfile from the backup
rman > restore spfile to '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiletestcadb1.ora' from '/home/oracle/backup/spfile20181219.bak';

The output looks like this:

[oracle@rac1 backup]$ rman target /
Recovery Manager: Release 11.2.0.4.0 - Production on Fri Sep 23 14:13:35 2016
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
RMAN> startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/inittestcadb1.ora';
Oracle instance started
Total System Global Area 371654656 bytes
Fixed Size 2253304 bytes
Variable Size 314576392 bytes
Database Buffers 50331648 bytes
Redo Buffers 4493312 bytes

RMAN> set DBID=2313387174;
executing command: SET DBID

RMAN> restore spfile to '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiletestcadb1.ora' from '/home/oracle/backup/spfile20181219.bak';
Starting restore at 2018:12:19 10:25:18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=134 device type=DISK
channel ORA_DISK_1: restoring spfile from AUTOBACKUP /home/oracle/backup/spfile20181218.bak
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 2018:12:19 10:25:20

Create a pfile from the restored spfile

[oracle@rac1 backup]$ sqlplus / as sysdba
sql > create pfile from spfile;

Edit the generated pfile
vim /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/inittestcadb1.ora

*.__db_cache_size=969567232
*.__java_pool_size=16777216
*.__large_pool_size=184549376
*.__oracle_base='/home/oracle/u01/app/oracle'#ORACLE_BASE set from environment
*.__pga_aggregate_target=1617245696
*.__sga_target=1925868544
*.__shared_io_pool_size=0
*.__shared_pool_size=704643072
*.__streams_pool_size=0
*.aq_tm_processes=0
*.audit_file_dest='/u01/app/oracle/admin/testcadb/adump'
*.audit_trail='db'
*.compatible='11.2.0.4.0'
*.control_files='+DATA1/testcadb/control01.ctl','+DATA1/testcadb/control02.ctl'
*.cpu_count=3
*.db_block_size=8192
*.db_domain=''
*.db_name='testcadb'
*.db_recovery_file_dest='+ARCH'
*.db_recovery_file_dest_size=4385144832
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=testcadbXDB)'
*.job_queue_processes=0
*.memory_target=3529482752
*.open_cursors=300
*.processes=400
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_manager_plan='default_plan'
*.sessions=445
*.log_archive_dest_1='LOCATION=+ARCH/testcadb/archivelog'
*.log_archive_format='arch_%t_%s_%r.arc'
testcadb1.instance_name='testcadb1'
testcadb2.instance_name='testcadb2'
testcadb1.instance_number=1
testcadb2.instance_number=2
testcadb1.undo_tablespace='UNDOTBS1'
testcadb2.undo_tablespace='UNDOTBS2'
testcadb2.thread=2
testcadb1.thread=1
*.remote_listener='racscan:1521'

Shut down the instance

19:17:48 SYS@ testcadb1> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.

Create the directories referenced in the pfile. As the oracle user:

shell > mkdir -p /u01/app/oracle/admin/testcadb/adump

As the grid user:

shell > asmcmd
asmcmd> mkdir ***

After editing, restart the instance with the pfile

sql > startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/inittestcadb1.ora';

Create a new spfile from the pfile

sql > create spfile='+DATA1/testcadb/spfiletestcadb.ora' from pfile;
SQL> shutdown abort
ORACLE instance shut down.

Point the pfile on every node at the spfile on shared storage.
On node rac1:

[oracle@rac1 ~]$ echo "SPFILE='+DATA1/testcadb/spfiletestcadb.ora' " > /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/inittestcadb1.ora

On node rac2:

[oracle@rac2 ~]$ echo "SPFILE='+DATA1/testcadb/spfiletestcadb.ora' " > /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/inittestcadb2.ora

10、Restore the database control file

[oracle@rac2 ~]$ rman target /
rman > startup nomount

Restore the control file

rman > restore controlfile from '/home/oracle/backup/control20181219.bak';

Mount the database

rman > alter database mount;

11、Restore the database, redirecting the data files to their new locations

rman > catalog start with '/home/oracle/backup';
rman > list backup summary;
Execute the following run block (adjust it for your own data files)
RMAN> run{
set newname for datafile '/home/oracle/u01/app/oracle/oradata/testcadb/users01.dbf' to '+DATA1/testcadb/users01.dbf';
set newname for datafile '/home/oracle/u01/app/oracle/oradata/testcadb/undotbs01.dbf' to '+DATA1/testcadb/undotbs01.dbf';
set newname for datafile '/home/oracle/u01/app/oracle/oradata/testcadb/sysaux01.dbf' to '+DATA1/testcadb/sysaux01.dbf';
set newname for datafile '/home/oracle/u01/app/oracle/oradata/testcadb/system01.dbf' to '+DATA1/testcadb/system01.dbf';
set newname for datafile '/home/oracle/oradata/orcl/ebank_data.dbf' to '+DATA1/testcadb/ebank_data.dbf';
restore database;
switch datafile all;
}
Or:
RMAN> run{
set newname for datafile 1 to '+DATA1/testcadb/users01.dbf';
set newname for datafile 2 to '+DATA1/testcadb/undotbs01.dbf';
set newname for datafile 3 to '+DATA1/testcadb/sysaux01.dbf';
set newname for datafile 4 to '+DATA1/testcadb/system01.dbf';
set newname for datafile 5 to '+DATA1/testcadb/ebank_data.dbf';
restore database;
switch datafile all;
}
List the archived logs in the backup (a noarchivelog-mode backup contains none)
RMAN> list backup of archivelog all;
Recover the database up to an SCN
RMAN> recover database until scn 2896925;
Or, if there are no archived logs, simply recover the database
RMAN> recover database;

Try to open the database:

shell > sqlplus / as sysdba
sql > alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-00344: unable to re-create online log '/home/oracle/u01/app/oracle/oradata/testcadb/redo01.log'
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 1

The error shows the redo log paths are wrong, so the redo logs must be recreated

sql > select group#,thread#,members,archived,status from v$log;

GROUP# THREAD# MEMBERS ARCHIV STATUS
---------- ---------- ---------- ------ --------------------------------
1 1 1 NO CLEARING
3 1 1 NO CLEARING_CURRENT
2 1 1 NO CLEARING
Elapsed: 00:00:00.03
Drop the old logfile groups
sql > alter database drop logfile group 1;
Add new logfile groups (make sure the redo log directory exists)
sql > alter database add logfile group 1 '+ARCH/testcadb/redolog/redo01.log' size 100M;
sql > alter database drop logfile group 2;
sql > alter database add logfile group 2 '+ARCH/testcadb/redolog/redo02.log' size 100M;
sql > alter database drop logfile group 3;
Group 3 cannot be dropped
sql > alter database drop logfile group 3;
alter database drop logfile group 3
*
ERROR at line 1:
ORA-01623: log 3 is current log for instance testcadb1 (thread 1) - cannot drop
ORA-00312: online log 3 thread 1: '/home/oracle/u01/app/oracle/oradata/testcadb/redo03.log'
Rename the file instead
sql > alter database rename file '/home/oracle/u01/app/oracle/oradata/testcadb/redo03.log' to '+ARCH/testcadb/redolog/redo03.log';
sql > alter database clear logfile group 3;
The log file keeps its original size, so drop and recreate it
sql > alter database drop logfile group 3;
sql > alter database add logfile group 3 '+ARCH/testcadb/redolog/redo03.log' size 100M;
Open the database
sql > alter database open resetlogs;
If the versions differ (source 11.2.0.3, target 11.2.0.4), the database must be upgraded
sql > startup mount;
sql > alter database open upgrade;
sql > select status from v$Instance;
sql > @$ORACLE_HOME/rdbms/admin/catupgrd.sql;

After the database opens, the temporary tablespace is unusable and must be reconfigured:

sql > select * from dba_temp_files;
select * from dba_temp_files
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 201 - see DBWR trace file
ORA-01110: data file 201: '/home/oracle/u01/app/oracle/oradata/testcadb/temp01.dbf'

Replace the temporary tablespace

Create an interim temporary tablespace
sql > create TEMPORARY TABLESPACE TEMP2 TEMPFILE '+DATA1/testcadb/TEMP02.DBF' SIZE 2048M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE 5120M;
Switch the default temporary tablespace
sql > alter database default temporary tablespace temp2;
sql >
sql > drop tablespace temp including contents and datafiles;
sql > create TEMPORARY TABLESPACE TEMP TEMPFILE '+DATA1/testcadb/TEMP01.DBF' SIZE 2048M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE 5120M;
sql > alter database default temporary tablespace temp;
Or sql > alter user *** temporary tablespace temp; (change the temporary tablespace of an individual user)

sql > drop tablespace temp2 including contents and datafiles;
sql > select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES BLOCKSIZE MEMBERS ARCHIV STATUS FIRST_CHANGE# FIRST_TIME NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- ------ -------------------------------- ------------- ------------------- ------------ -------------------
1 1 25 104857600 512 1 NO INACTIVE 64817466 2018:12:19 14:19:09 64828852 2018:12:19 14:21:47
2 1 26 104857600 512 1 NO CURRENT 64828852 2018:12:19 14:21:47 2.8147E+14
3 1 24 52428800 512 1 NO INACTIVE 64807090 2018:12:19 14:18:29 64817466 2018:12:19 14:19:09

Add redo logfile groups for thread 2, to be used by node 2

sql > alter database add logfile thread 2 group 4 '+ARCH/testcadb/redolog/redo04.log' size 100m;
sql > alter database add logfile thread 2 group 5 '+ARCH/testcadb/redolog/redo05.log' size 100m;
sql > alter database add logfile thread 2 group 6 '+ARCH/testcadb/redolog/redo06.log' size 100m;

Enable thread 2

sql > select thread#,status,enabled from v$thread;
THREAD# STATUS ENABLED
---------- ------------ ----------------
1 OPEN PUBLIC
2 CLOSED DISABLED
sql > alter database enable thread 2;

Configure the instance settings for both nodes

sql > alter system set thread=1 scope=spfile sid='testcadb1';
sql > alter system set thread=2 scope=spfile sid='testcadb2';
sql > alter system set instance_number=1 scope=spfile sid='testcadb1';
sql > alter system set instance_number=2 scope=spfile sid='testcadb2';
sql > alter system set cluster_database_instances=2 scope=spfile;
sql > alter system set cluster_database=true scope=spfile;

Add an undo tablespace for node 2

sql > select substr(file_name,1,60) UNDO_FILES from dba_data_files where tablespace_name = 'UNDOTBS1';
sql > create undo tablespace undotbs2 datafile '+data1/testcadb/undotbs02.dbf' size 200M;
sql > alter system set undo_tablespace='undotbs2' scope=spfile sid='testcadb2';

Shut down and restart node 1; if you start node 2 without restarting node 1 first, you get the following error:

ORA-01102: cannot mount database in EXCLUSIVE mode

Start node 2 (run on node 2).
Make sure the audit directory exists on node 2; if not, create it manually.

shell > mkdir -p /u01/app/oracle/admin/testcadb/adump
shell > export ORACLE_SID=testcadb2
shell > sqlplus / as sysdba
sql > startup nomount;
sql > alter database mount;
sql > alter database open;

At this point the restore is complete, but the recovered database is not yet registered with the cluster

12、Register the database with the cluster

Run on node 1 as the oracle user (not the grid user)

[oracle@rac1 ~]$ srvctl add database -d testcadb -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -p +DATA1/testcadb/spfiletestcadb.ora
[oracle@rac1 ~]$ srvctl add instance -d testcadb -i testcadb1 -n rac1
[oracle@rac1 ~]$ srvctl add instance -d testcadb -i testcadb2 -n rac2

Restart the database

[oracle@rac1 ~]$ srvctl stop database -d testcadb
#Because the database was not started via srvctl, you may need to shut it down manually from sqlplus on both nodes first, then start it with:
[oracle@rac1 ~]$ srvctl start database -d testcadb

13、Check whether the service has registered with the SCAN listener; if not, continue with the steps below

sql > show parameter remote_listener;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
remote_listener string racscan:1521
#Run as the grid user:
shell > lsnrctl status LISTENER_SCAN1
#If no service is registered, configure it as follows
alter system set remote_listener = 'racscan:1521' sid='*' scope=both;
alter system register;
lsnrctl status LISTENER_SCAN1

14、The restored database has no password file, so remote logins as sys fail

Check the current remote_login_passwordfile value

sql > show parameter remote_login_passwordfile;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile string EXCLUSIVE

To change the value, use the statement below; a database restart is required for it to take effect:
alter system set remote_login_passwordfile='exclusive' scope=spfile sid='*';

For a detailed discussion of remote_login_passwordfile, see the dedicated article below.
Once remote_login_passwordfile is set, generate the password files.
Node 1

orapwd file=orapwtestcadb1 password=oracle entries=100

Node 2

orapwd file=orapwtestcadb2 password=oracle entries=100

The database can now be accessed normally.


[Oracle] The remote_login_passwordfile & SQLNET.AUTHENTICATION_SERVICES parameters


1、The remote_login_passwordfile parameter

shared: one or more databases can share the password file. The sys password cannot be changed; attempting to change it raises "ORA-28046: Password change for SYS disallowed.". Passwords of other users holding administrative privileges (SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, SYSKM) cannot be updated either; an attempt raises "ORA-01999: password file cannot be updated in SHARED mode.". Granting those privileges (SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, SYSKM) to individual users is likewise not allowed and raises the same "ORA-01999" error. If the password file does not exist, this is equivalent to REMOTE_LOGIN_PASSWORDFILE=none.

exclusive: the password file can be used by only one database and may contain both sys and non-sys users. If the password file does not exist, this is equivalent to REMOTE_LOGIN_PASSWORDFILE=none.
With remote_login_passwordfile=exclusive, the password file is enabled and remote logins are allowed.

none: the password file is ignored; only local operating system authentication is possible.

To change the value, use the statement below; because this is a static parameter, the database must be restarted for it to take effect:

alter system set remote_login_passwordfile='exclusive' scope=spfile sid='*';

2、The SQLNET.AUTHENTICATION_SERVICES parameter in detail

2.1、When the value is none
#OS authentication (local login) stops working
[oracle@dax-mysql-slave admin]$ sqlplus / as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied
#Logging in with username/password works normally:
[oracle@dax-mysql-slave admin]$ sqlplus sys/oracle as sysdba
22:51:21 SYS@ boston> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

[oracle@dax-mysql-slave admin]$ sqlplus test/oracle@boston
22:52:00 TEST@ boston> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

[oracle@dax-mysql-slave admin]$ sqlplus test/oracle@192.168.168.177:1521/boston.us.oracle.com
22:53:02 TEST@ 192.168.168.177:1521/boston.us.oracle.com> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
2.2、When the value is all
Net service name and service_name connections stop working
[oracle@dax-mysql-slave admin]$ sqlplus / as sysdba
22:54:21 SYS@ boston>

[oracle@dax-mysql-slave admin]$ sqlplus sys/oracle as sysdba
22:54:45 SYS@ boston>

[oracle@dax-mysql-slave admin]$ sqlplus test/oracle@boston
ERROR:
ORA-12641: Authentication service failed to initialize

[oracle@dax-mysql-slave admin]$ sqlplus test/oracle@192.168.168.177:1521/boston.us.oracle.com
ERROR:
ORA-12641: Authentication service failed to initialize
2.3、With SQLNET.AUTHENTICATION_SERVICES=(none) in place, changing remote_login_passwordfile from exclusive to none and restarting the database fails with an insufficient-privileges error:

22:58:28 SYS@ boston> alter system set remote_login_passwordfile='NONE' scope=spfile;
System altered.
22:58:47 SYS@ boston> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
22:59:15 SYS@ boston> startup
ORA-01031: insufficient privileges
22:59:34 SYS@ boston> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Try to log in again and start the database
[oracle@dax-mysql-slave admin]$ sqlplus sys/oracle as sysdba
Connected to an idle instance.
22:59:46 SYS@ boston> startup
ORA-01031: insufficient privileges
22:59:48 SYS@ boston> exit
Disconnected

The only way out is to disable the SQLNET.AUTHENTICATION_SERVICES=(none) setting in sqlnet.ora; otherwise the database cannot be started.

2.4、After starting the database, re-enable SQLNET.AUTHENTICATION_SERVICES=(none) in sqlnet.ora. While the database is open, logins with "as sysdba" no longer work; only net service name and service_name connections succeed

[oracle@dax-mysql-slave admin]$ sqlplus / as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied

[oracle@dax-mysql-slave admin]$ sqlplus sys/oracle as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied

[oracle@dax-mysql-slave admin]$ sqlplus test/oracle as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied

[oracle@dax-mysql-slave admin]$ sqlplus test/oracle@boston as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied

[oracle@dax-mysql-slave admin]$ sqlplus test/oracle@boston
10:13:30 TEST@ boston> exit

[oracle@dax-mysql-slave admin]$ sqlplus test/oracle@192.168.168.177:1521/boston.us.oracle.com
10:16:49 TEST@ 192.168.168.177:1521/boston.us.oracle.com> select * from user_role_privs;

USERNAME GRANTED_ROLE ADM DEF OS_
------------------------------ ------------------------------ --- --- ---
TEST CONNECT NO YES NO
TEST DBA NO YES NO
TEST RESOURCE NO YES NO
Elapsed: 00:00:00.02


[Zabbix] Monitoring SSL certificate expiry with Zabbix


1、Create the script

more /usr/local/zabbix/script/check_sslcert_expire_script.sh

#!/bin/bash
#First argument: the domain to check
domain=$1
#Second argument: the HTTPS port
port=$2
#Get the certificate expiry date
expired_date=`echo |openssl s_client -servername ${domain} -connect ${domain}:${port} 2>/dev/null | openssl x509 -noout -dates |sed -n 's/notAfter=//p'`
#${expired_date} must be double-quoted below, otherwise it is treated as multiple arguments
#Compare against the current system time to get the number of days until expiry
if [ -n "${expired_date}" ];then
    expired_seconds=`date '+%s' --date "${expired_date}"`
    now_seconds=`date '+%s'`
    echo "(${expired_seconds}-${now_seconds})/24/3600" | bc
else
    :
fi
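
The day count in the script is plain epoch arithmetic. The same calculation can be checked in isolation with fixed timestamps (the two dates below are arbitrary stand-ins for notAfter and "now"; shell arithmetic is used here, which gives the same integer result as the script's bc expression):

```shell
# Compute whole days between two fixed timestamps, the way the script does.
expired_seconds=$(date '+%s' --date "Jan 11 00:00:00 2030 UTC")
now_seconds=$(date '+%s' --date "Jan 1 00:00:00 2030 UTC")
echo $(( (expired_seconds - now_seconds) / 24 / 3600 ))
# prints 10
```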

2、Add the following line to the zabbix_agentd.conf configuration file

UserParameter=check_ssl_cert_expire[*],sh /usr/local/zabbix/script/check_sslcert_expire_script.sh  $1 $2

Restart the agent

3、Check that the key returns values correctly

/usr/local/zabbix/bin/zabbix_get -s 127.0.0.1 -p 10050 -k check_ssl_cert_expire[www.baidu.com,443]
159
/usr/local/zabbix/bin/zabbix_get -s 127.0.0.1 -p 10050 -k check_ssl_cert_expire[www.masterdax.com,443]
85

4、In the Zabbix web console, add the template and the monitored item

Create a trigger so that an alert fires when a certificate is about to expire


[Oracle] Installing rlwrap for Oracle sqlplus


1、Installing from rlwrap-0.42-1.1.x86_64.rpm

Download rlwrap-0.42-1.1.x86_64.rpm

wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home%3A/Ledest%3A/misc/CentOS_7/src/rlwrap-0.42-1.1.x86_64.rpm

2、Install rlwrap

yum localinstall rlwrap-0.42-1.1.x86_64.rpm

3、修改环境变量文件

grep ‘rlwrap’ ~/.bash_profile
alias sql=’rlwrap sqlplus “/ as sysdba”‘
alias rman=’rlwrap rman’

4、If the x86_64 package cannot be found, download rlwrap-0.42-1.1.src.rpm

wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home%3A/Ledest%3A/misc/CentOS_7/src/rlwrap-0.42-1.1.src.rpm

5、Install the build dependencies

yum install rpm-build -y
yum install readline* libtermcap-devel* -y

6、Build and install rlwrap

shell > rpm -i rlwrap-0.42-1.1.src.rpm 
warning: rlwrap-0.42-1.1.src.rpm: Header V3 RSA/SHA1 Signature, key ID 93680782: NOKEY

shell > cd rpmbuild/SOURCES
shell > tar -zxvf rlwrap-0.42.tar.gz
shell > cd rlwrap-0.42
shell > ls
aclocal.m4 BUGS completions configure COPYING filters Makefile.am NEWS README test tools
AUTHORS ChangeLog config.h.in configure.ac doc INSTALL Makefile.in PLEA src TODO

shell > ./configure
shell > make && make install

7、Then edit the environment file

grep 'rlwrap' ~/.bash_profile
alias sql='rlwrap sqlplus "/ as sysdba"'
alias rman='rlwrap rman'


[Oracle] CentOS 7 RAC installation error: ohasd failed to start


1、The error

After installing grid, running /u01/app/11.2.0.4/grid/root.sh fails with:

ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow

2、Create a dedicated service for ohasd on Linux 7.

1、Create the ohas.service unit file and set its permissions

touch /usr/lib/systemd/system/ohas.service
chmod 777 /usr/lib/systemd/system/ohas.service

2、Add the ohasd startup configuration to the ohas.service unit file
vi /usr/lib/systemd/system/ohas.service

[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
Type=simple
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Restart=always

[Install]
WantedBy=multi-user.target

3、Reload the systemd daemon

systemctl daemon-reload

Enable the service at boot

systemctl enable ohas.service

Start the ohas service manually

systemctl start ohas.service

4、Re-run the root.sh script

sh root.sh

5、Check the status of the ohas service

systemctl status ohas.service


[Oracle] Oracle udev configuration


1、Add raw devices

Add the disks vda, vdb and vdc.
On node 1, partition each of the three disks:

fdisk /dev/vda
fdisk /dev/vdb
fdisk /dev/vdc

Check the device information on node 1:

[root@rac1 ~]# ll /dev/vd*
brw-rw---- 1 root disk 252, 0 Dec 14 10:56 /dev/vda
brw-rw---- 1 root disk 252, 1 Dec 14 10:56 /dev/vda1
brw-rw---- 1 root disk 252, 2 Dec 14 10:56 /dev/vda2
brw-rw---- 1 root disk 252, 16 Dec 14 11:41 /dev/vdb
brw-rw---- 1 root disk 252, 17 Dec 14 11:41 /dev/vdb1
brw-rw---- 1 root disk 252, 32 Dec 14 11:34 /dev/vdc
brw-rw---- 1 root disk 252, 33 Dec 14 11:34 /dev/vdc1
brw-rw---- 1 root disk 252, 48 Dec 14 11:34 /dev/vdd
brw-rw---- 1 root disk 252, 49 Dec 14 11:34 /dev/vdd1

Check the device information on node 2:

[root@rac2 ~]# ll /dev/vd*
brw-rw---- 1 root disk 252, 0 Dec 14 10:56 /dev/vda
brw-rw---- 1 root disk 252, 1 Dec 14 10:56 /dev/vda1
brw-rw---- 1 root disk 252, 2 Dec 14 10:56 /dev/vda2
brw-rw---- 1 root disk 252, 16 Dec 14 10:56 /dev/vdb
brw-rw---- 1 root disk 252, 32 Dec 14 10:56 /dev/vdc
brw-rw---- 1 root disk 252, 48 Dec 14 10:56 /dev/vdd

The major device number identifies the device driver: devices managed by the same driver share the same major number. It is effectively an index into the kernel's device driver table, which holds the different device drivers. (For KVM virtio disks the major number is 252, for SCSI disks it is 8, and for raw devices it is 162.)
The minor device number identifies the specific device being accessed: once the kernel has located the driver via the major number, it uses the minor number to determine the device's position and other attributes.
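
The major/minor pair of any device node can be inspected with stat; as a quick check (using /dev/null, whose numbers are fixed on Linux):

```shell
# %t and %T print the major and minor device numbers in hex.
# On Linux, /dev/null is always major 1, minor 3.
stat -c 'major=%t minor=%T' /dev/null
# prints: major=1 minor=3
```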

2、Edit the configuration file

Node 1
vim /etc/udev/rules.d/60-raw.rules

ACTION=="add", KERNEL=="/dev/vdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="252",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="/dev/vdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="252",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="/dev/vdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{MAJOR}=="252",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"

KERNEL=="raw[1-3]", OWNER="grid", GROUP="oinstall", MODE="640"

Node 2
vim /etc/udev/rules.d/60-raw.rules

ACTION=="add", KERNEL=="/dev/vdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="252",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="/dev/vdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="252",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="/dev/vdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{MAJOR}=="252",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"

KERNEL=="raw[1-3]", OWNER="grid", GROUP="oinstall", MODE="640"

The steps above must be performed on both nodes. Make sure the raw device files are visible on both nodes and that the grid (or oracle) user has permission to access them.

3、Reload udev

On CentOS 6, run start_udev.

On CentOS 7 and later, run:
/sbin/udevadm control --reload-rules
/sbin/udevadm trigger
or:
/sbin/udevadm trigger --type=devices --action=change
Check the device status:
ll /dev/raw/raw*
During my setup the devices did not appear right after reloading udev; they only showed up after the server was rebooted.

4、Adding disks via the SCSI driver and binding them with udev by UUID:

Add the disks sda, sdb and sdc, then obtain the UUIDs of the shared disks

[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sda
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdb
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdc
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1

5、Write the following to /etc/udev/rules.d/99-my-asmdevices.rules, one line per device, with no line breaks inside a rule

Node 1:
vim /etc/udev/rules.d/99-my-asmdevices.rules

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block",  PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",  RESULT=="0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0", RUN+="/bin/sh -c 'mknod /dev/asmdisk-vote b  $major $minor; chown grid:asmadmin /dev/asmdisk-vote; chmod 0660 /dev/asmdisk-vote'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2", RUN+="/bin/sh -c 'mknod /dev/asmdisk-arch b $major $minor; chown grid:asmadmin /dev/asmdisk-arch; chmod 0660 /dev/asmdisk-arch'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1", RUN+="/bin/sh -c 'mknod /dev/asmdisk-data b $major $minor; chown grid:asmadmin /dev/asmdisk-data; chmod 0660 /dev/asmdisk-data'"

Node 2:
vim /etc/udev/rules.d/99-my-asmdevices.rules

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block",  PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",  RESULT=="0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0", RUN+="/bin/sh -c 'mknod /dev/asmdisk-vote b  $major $minor; chown grid:asmadmin /dev/asmdisk-vote; chmod 0660 /dev/asmdisk-vote'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2", RUN+="/bin/sh -c 'mknod /dev/asmdisk-arch b $major $minor; chown grid:asmadmin /dev/asmdisk-arch; chmod 0660 /dev/asmdisk-arch'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1", RUN+="/bin/sh -c 'mknod /dev/asmdisk-data b $major $minor; chown grid:asmadmin /dev/asmdisk-data; chmod 0660 /dev/asmdisk-data'"

6、Reload udev

On CentOS 6, run start_udev.

On CentOS 7 and later, run:
/sbin/udevadm control --reload-rules
/sbin/udevadm trigger
or:
/sbin/udevadm trigger --type=devices --action=change

Check the device status

[root@rac1 ~]# ll /dev/asmdisk-*
brw-rw---- 1 grid asmadmin 8, 16 Dec 14 14:18 /dev/asmdisk-arch
brw-rw---- 1 grid asmadmin 8, 32 Dec 14 14:18 /dev/asmdisk-data
brw-rw---- 1 grid asmadmin 8, 0 Dec 14 14:18 /dev/asmdisk-vote

7、Inspect device information with udevadm

udevadm info -a -p /sys/block/sda

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

looking at device '/devices/pci0000:00/0000:00:0d.0/virtio5/host2/target2:0:0/2:0:0:0/block/sda':
KERNEL=="sda"
SUBSYSTEM=="block"
DRIVER==""
ATTR{ro}=="0"
ATTR{size}=="41943040"
ATTR{stat}==" 521604 4 820941 5583621 355886 1462 1263929 7839050 0 9373468 13420933"
ATTR{range}=="16"
ATTR{discard_alignment}=="0"
ATTR{events}==""
ATTR{ext_range}=="256"
ATTR{events_poll_msecs}=="-1"
ATTR{alignment_offset}=="0"
ATTR{inflight}==" 0 0"
ATTR{removable}=="0"
ATTR{capability}=="50"
ATTR{events_async}==""

looking at parent device '/devices/pci0000:00/0000:00:0d.0/virtio5/host2/target2:0:0/2:0:0:0':
KERNELS=="2:0:0:0"
SUBSYSTEMS=="scsi"
DRIVERS=="sd"
ATTRS{rev}=="1.5."
ATTRS{type}=="0"
ATTRS{scsi_level}=="6"
ATTRS{model}=="QEMU HARDDISK "
ATTRS{state}=="running"
ATTRS{unpriv_sgio}=="0"
ATTRS{queue_type}=="none"
ATTRS{iodone_cnt}=="0xd660c"
ATTRS{iorequest_cnt}=="0xd664c"
ATTRS{device_busy}=="0"
ATTRS{evt_capacity_change_reported}=="0"
ATTRS{timeout}=="30"
ATTRS{evt_media_change}=="0"
ATTRS{ioerr_cnt}=="0x18"
ATTRS{queue_depth}=="128"
ATTRS{vendor}=="QEMU "
ATTRS{evt_soft_threshold_reached}=="0"
ATTRS{device_blocked}=="0"
ATTRS{evt_mode_parameter_change_reported}=="0"
ATTRS{evt_lun_change_reported}=="0"
ATTRS{evt_inquiry_change_reported}=="0"
ATTRS{dh_state}=="detached"
ATTRS{iocounterbits}=="32"
ATTRS{inquiry}==""
ATTRS{vpd_pg83}==""
ATTRS{eh_timeout}=="10"

looking at parent device '/devices/pci0000:00/0000:00:0d.0/virtio5/host2/target2:0:0':
KERNELS=="target2:0:0"
SUBSYSTEMS=="scsi"
DRIVERS==""

looking at parent device '/devices/pci0000:00/0000:00:0d.0/virtio5/host2':
KERNELS=="host2"
SUBSYSTEMS=="scsi"
DRIVERS==""

looking at parent device '/devices/pci0000:00/0000:00:0d.0/virtio5':
KERNELS=="virtio5"
SUBSYSTEMS=="virtio"
DRIVERS=="virtio_scsi"
ATTRS{device}=="0x0008"
ATTRS{features}=="0110000000000000000000000000110000000000000000000000000000000000"
ATTRS{status}=="0x00000007"
ATTRS{vendor}=="0x1af4"

looking at parent device '/devices/pci0000:00/0000:00:0d.0':
KERNELS=="0000:00:0d.0"
SUBSYSTEMS=="pci"
DRIVERS=="virtio-pci"
ATTRS{irq}=="10"
ATTRS{subsystem_vendor}=="0x1af4"
ATTRS{broken_parity_status}=="0"
ATTRS{class}=="0x010000"
ATTRS{driver_override}=="(null)"
ATTRS{consistent_dma_mask_bits}=="64"
ATTRS{dma_mask_bits}=="64"
ATTRS{local_cpus}=="ff"
ATTRS{device}=="0x1004"
ATTRS{enable}=="1"
ATTRS{msi_bus}==""
ATTRS{local_cpulist}=="0-7"
ATTRS{vendor}=="0x1af4"
ATTRS{subsystem_device}=="0x0008"
ATTRS{numa_node}=="-1"
ATTRS{d3cold_allowed}=="0"

looking at parent device '/devices/pci0000:00':
KERNELS=="pci0000:00"
SUBSYSTEMS==""
DRIVERS==""


[Mysql] The MySQL AUTO_INCREMENT attribute in detail


1、auto_increment generates a unique identifier for new rows. For example:

#Create a test table
CREATE TABLE animals (
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (id)
);
# Insert test data:
INSERT INTO animals (name) VALUES
('dog'),('cat'),('penguin'),
('lax'),('whale'),('ostrich');
# Check the result:
SELECT * FROM animals;
+----+---------+
| id | name |
+----+---------+
| 1 | dog |
| 2 | cat |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
+----+---------+
6 rows in set (0.00 sec)

2. In the example above no value was supplied for the AUTO_INCREMENT column, so MySQL generated the sequence numbers automatically. You can also explicitly insert 0 or NULL to generate the next sequence value. However, when NO_AUTO_VALUE_ON_ZERO is enabled in sql_mode, inserting 0 into the AUTO_INCREMENT column stores a literal 0 instead of generating a new value.

# Check the current sql_mode
show variables like '%sql_mode%';
+---------------+------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value |
+---------------+------------------------------------------------------------------------------------------------------------------------+
| sql_mode | STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+---------------+------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
# Insert data
insert into animals(id,name) values(0,'pig');
insert into animals(id,name) values(null,'monkey');
[aaaa]> SELECT * FROM animals;
+----+---------+
| id | name |
+----+---------+
| 1 | dog |
| 2 | cat |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
+----+---------+
8 rows in set (0.00 sec)
# The sequence increments as usual
# Add NO_AUTO_VALUE_ON_ZERO to sql_mode
set @@sql_mode='NO_AUTO_VALUE_ON_ZERO,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
show variables like '%sql_mode%';
+---------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value |
+---------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| sql_mode | NO_AUTO_VALUE_ON_ZERO,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+---------------+----------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
# Insert again
insert into animals(id,name) values(0,'lion');
insert into animals(id,name) values(null,'tiger');
root@db 01:38: [aaaa]> SELECT * FROM animals;
+----+---------+
| id | name |
+----+---------+
| 0 | lion |
| 1 | dog |
| 2 | cat |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
+----+---------+
10 rows in set (0.00 sec)

With NO_AUTO_VALUE_ON_ZERO enabled, an inserted 0 is stored as a literal 0.
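The resolution logic can be sketched in Python. The `resolve` function below is a hypothetical helper for illustration, not MySQL code: given the inserted value, the current counter, and whether NO_AUTO_VALUE_ON_ZERO is set, it decides what gets stored.

```python
# Sketch of how an inserted value is interpreted for an AUTO_INCREMENT
# column. `resolve` is a hypothetical helper, not a MySQL API.
def resolve(value, counter, no_auto_value_on_zero=False):
    """Return (stored_value, new_counter)."""
    if value is None or (value == 0 and not no_auto_value_on_zero):
        value = counter + 1          # generate the next sequence value
    return value, max(counter, value)

print(resolve(0, 6))           # (7, 7): 0 triggers generation by default
print(resolve(0, 6, True))     # (0, 6): 0 is stored literally
print(resolve(None, 6, True))  # (7, 7): NULL still triggers generation
```

This mirrors the transcripts above: before the mode change both `0` and `NULL` produced 7 and 8; afterwards `0` stored the row with id 0.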

# Inserting 0 again now fails with a duplicate-key error.
root@db 01:41: [aaaa]> insert into animals(id,name) values(0,'giraffe');
ERROR 1062 (23000): Duplicate entry '0' for key 'PRIMARY'

# Restore the original sql_mode
set @@sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
show variables like '%sql_mode%';
+---------------+------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value |
+---------------+------------------------------------------------------------------------------------------------------------------------+
| sql_mode | STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+---------------+------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

3. When you insert an explicit value into the AUTO_INCREMENT column, the sequence is reset so that the next automatically generated value continues from the largest value in the column. For example:

INSERT INTO animals (id,name) VALUES(100,'rabbit');
INSERT INTO animals (id,name) VALUES(NULL,'mouse');
SELECT * FROM animals;
+-----+---------+
| id | name |
+-----+---------+
| 0 | lion |
| 1 | dog |
| 2 | cat |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 100 | rabbit |
| 101 | mouse |
+-----+---------+
12 rows in set (0.00 sec)

After manually inserting id 100, the next auto-generated value is 101. Now test inserting a value that falls between the current minimum and maximum:

INSERT INTO animals (id,name) VALUES(50,'giraffe');
INSERT INTO animals (id,name) VALUES(NULL,'tortoise');
SELECT * FROM animals;
+-----+----------+
| id | name |
+-----+----------+
| 0 | lion |
| 1 | dog |
| 2 | cat |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 50 | giraffe |
| 100 | rabbit |
| 101 | mouse |
| 102 | tortoise |
+-----+----------+
14 rows in set (0.00 sec)

Inserting a value between the minimum and maximum has no effect on the generated sequence.

4. Updating an AUTO_INCREMENT column value does not reset the sequence in InnoDB tables, but it does in MyISAM and NDB tables.

First, an InnoDB example:

# Update an AUTO_INCREMENT value to something below the current maximum
update animals set id=10 where id=2;
SELECT * FROM animals;
+-----+----------+
| id | name |
+-----+----------+
| 0 | lion |
| 1 | dog |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 10 | cat |
| 50 | giraffe |
| 100 | rabbit |
| 101 | mouse |
| 102 | tortoise |
+-----+----------+
14 rows in set (0.00 sec)

INSERT INTO animals (id,name) VALUES(NULL,'donkey');
SELECT * FROM animals;
+-----+----------+
| id | name |
+-----+----------+
| 0 | lion |
| 1 | dog |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 10 | cat |
| 50 | giraffe |
| 100 | rabbit |
| 101 | mouse |
| 102 | tortoise |
| 103 | donkey |
+-----+----------+
15 rows in set (0.00 sec)

# Update an AUTO_INCREMENT value to something above the current maximum
update animals set id=110 where id=3;
SELECT * FROM animals;
+-----+----------+
| id | name |
+-----+----------+
| 0 | lion |
| 1 | dog |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 10 | cat |
| 50 | giraffe |
| 100 | rabbit |
| 101 | mouse |
| 102 | tortoise |
| 103 | donkey |
| 110 | penguin |
+-----+----------+
15 rows in set (0.00 sec)

INSERT INTO animals (id,name) VALUES(NULL,'cow');
SELECT * FROM animals;
+-----+----------+
| id | name |
+-----+----------+
| 0 | lion |
| 1 | dog |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 10 | cat |
| 50 | giraffe |
| 100 | rabbit |
| 101 | mouse |
| 102 | tortoise |
| 103 | donkey |
| 104 | cow |
| 110 | penguin |
+-----+----------+
16 rows in set (0.00 sec)

The AUTO_INCREMENT counter was not updated. This is a trap: as the sequence keeps growing, once it reaches 110 the insert fails with a duplicate-key error. For this reason, avoid updating values in an AUTO_INCREMENT column.

root@db 02:33:  [aaaa]> INSERT INTO animals (id,name) VALUES(NULL,'cow');
Query OK, 1 row affected (0.00 sec)
root@db 02:36: [aaaa]> INSERT INTO animals (id,name) VALUES(NULL,'cow');
Query OK, 1 row affected (0.00 sec)
root@db 02:36: [aaaa]> INSERT INTO animals (id,name) VALUES(NULL,'cow');
Query OK, 1 row affected (0.01 sec)
root@db 02:36: [aaaa]> INSERT INTO animals (id,name) VALUES(NULL,'cow');
Query OK, 1 row affected (0.01 sec)
root@db 02:36: [aaaa]> INSERT INTO animals (id,name) VALUES(NULL,'cow');
Query OK, 1 row affected (0.00 sec)
root@db 02:36: [aaaa]> INSERT INTO animals (id,name) VALUES(NULL,'cow');
ERROR 1062 (23000): Duplicate entry '110' for key 'PRIMARY'
root@db 02:36: [aaaa]> INSERT INTO animals (id,name) VALUES(NULL,'cow');
Query OK, 1 row affected (0.00 sec)
SELECT * FROM animals;
+-----+----------+
| id | name |
+-----+----------+
| 0 | lion |
| 1 | dog |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 10 | cat |
| 50 | giraffe |
| 100 | rabbit |
| 101 | mouse |
| 102 | tortoise |
| 103 | donkey |
| 104 | cow |
| 105 | cow |
| 106 | cow |
| 107 | cow |
| 108 | cow |
| 109 | cow |
| 110 | penguin |
| 111 | cow |
+-----+----------+
22 rows in set (0.00 sec)
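The collision above can be reproduced with a small Python sketch. The `InnoDBCounter` class is illustrative only (an assumption about the observed behavior, not InnoDB internals): the counter advances on every insert, including the failed one, while UPDATE never touches it.

```python
# Illustrative model of the InnoDB behavior above: the auto-increment
# counter moves only on INSERT (even a failed one), never on UPDATE.
class InnoDBCounter:
    def __init__(self, existing_ids, next_id):
        self.rows = set(existing_ids)
        self.next_id = next_id

    def insert_auto(self):
        nid = self.next_id
        self.next_id += 1            # the value is consumed even on failure
        if nid in self.rows:
            raise ValueError(f"Duplicate entry '{nid}' for key 'PRIMARY'")
        self.rows.add(nid)
        return nid

# State after the UPDATE moved 'penguin' to id=110: counter still at 104.
t = InnoDBCounter(existing_ids={110}, next_id=104)
print([t.insert_auto() for _ in range(6)])   # [104, 105, 106, 107, 108, 109]
try:
    t.insert_auto()                          # 110 collides with 'penguin'
except ValueError as e:
    print(e)
print(t.insert_auto())                       # 111: the next insert succeeds
```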

Now a MyISAM example:

# Create a test table
CREATE TABLE animals_myisam (
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
# Insert test data
INSERT INTO animals_myisam (name) VALUES
('dog'),('cat'),
('penguin'),('lax'),('whale'),
('ostrich');

SELECT * FROM animals_myisam;
+----+---------+
| id | name |
+----+---------+
| 1 | dog |
| 2 | cat |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
+----+---------+
6 rows in set (0.00 sec)
# Update AUTO_INCREMENT values (first above, then below the current maximum)
update animals_myisam set id=10 where id=3;
INSERT INTO animals_myisam (name) VALUES ('mouse');
SELECT * FROM animals_myisam;
+----+---------+
| id | name |
+----+---------+
| 1 | dog |
| 2 | cat |
| 10 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 11 | mouse |
+----+---------+
7 rows in set (0.00 sec)
update animals_myisam set id=8 where id=4;
INSERT INTO animals_myisam (name) VALUES ('donkey');
root@db 03:51: [aaaa]> SELECT * FROM animals_myisam;
+----+---------+
| id | name |
+----+---------+
| 1 | dog |
| 2 | cat |
| 10 | penguin |
| 8 | lax |
| 5 | whale |
| 6 | ostrich |
| 11 | mouse |
| 12 | donkey |
+----+---------+
8 rows in set (0.00 sec)
# Update an AUTO_INCREMENT value to something above the current maximum
update animals_myisam set id=51 where id=4;
INSERT INTO animals_myisam (name) VALUES ('horse');
root@db 02:41: [aaaa]> SELECT * FROM animals_myisam;
+----+---------+
| id | name |
+----+---------+
| 1 | dog |
| 2 | cat |
| 10 | penguin |
| 51 | lax |
| 5 | whale |
| 6 | ostrich |
| 11 | mouse |
| 52 | horse |
+----+---------+
8 rows in set (0.00 sec)

For MyISAM tables, after an AUTO_INCREMENT column is updated, the next generated value is always the current column maximum plus one, regardless of whether the updated value was larger or smaller than the previous maximum.
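In other words, for MyISAM the next value is simply the current column maximum plus one. A one-line sketch against the table state shown above:

```python
# MyISAM: the next AUTO_INCREMENT value is MAX(id) + 1, regardless of
# how UPDATEs have rearranged the ids.
ids = [1, 2, 10, 8, 5, 6, 11]   # table state after the updates above
next_id = max(ids) + 1
print(next_id)                   # 12, the id given to 'donkey'
```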

5. The LAST_INSERT_ID() function returns the most recently generated value:

select last_insert_id();
+------------------+
| last_insert_id() |
+------------------+
| 53 |
+------------------+
1 row in set (0.00 sec)

6. Once the AUTO_INCREMENT column reaches the maximum of its data type, no further values can be generated and subsequent inserts fail with a duplicate-key error. For example, the maximum of a signed MEDIUMINT is 8388607:

INSERT INTO animals (id,name) VALUES(8388606,'whale');
root@db 02:54: [aaaa]> INSERT INTO animals (id,name) VALUES(null,'elephant');
Query OK, 1 row affected (0.00 sec)
root@db 02:55: [aaaa]> INSERT INTO animals (id,name) VALUES(null,'rhinoceros');
ERROR 1062 (23000): Duplicate entry '8388607' for key 'PRIMARY'
root@db 02:55: [aaaa]> INSERT INTO animals (id,name) VALUES(null,'rhinoceros');
ERROR 1062 (23000): Duplicate entry '8388607' for key 'PRIMARY'
root@db 02:55: [aaaa]> SELECT * FROM animals;
+---------+----------+
| id | name |
+---------+----------+
| 0 | lion |
| 1 | dog |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 10 | cat |
| 50 | giraffe |
| 100 | rabbit |
| 101 | mouse |
| 102 | tortoise |
| 103 | donkey |
| 104 | cow |
| 105 | cow |
| 106 | cow |
| 107 | cow |
| 108 | cow |
| 109 | cow |
| 110 | penguin |
| 111 | cow |
| 8388606 | whale |
| 8388607 | elephant |
+---------+----------+

Once the value reaches 8388607 no more rows can be inserted; the column's data type must be widened:

alter table animals modify id int(10) NOT NULL AUTO_INCREMENT;
INSERT INTO animals (id,name) VALUES(null,'rhinoceros');
SELECT * FROM animals;
root@db 02:57: [aaaa]> INSERT INTO animals (id,name) VALUES(null,'rhinoceros');
Query OK, 1 row affected (0.00 sec)

root@db 02:58: [aaaa]> SELECT * FROM animals;
+---------+------------+
| id | name |
+---------+------------+
| 0 | lion |
| 1 | dog |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
| 7 | pig |
| 8 | monkey |
| 9 | tiger |
| 10 | cat |
| 50 | giraffe |
| 100 | rabbit |
| 101 | mouse |
| 102 | tortoise |
| 103 | donkey |
| 104 | cow |
| 105 | cow |
| 106 | cow |
| 107 | cow |
| 108 | cow |
| 109 | cow |
| 110 | penguin |
| 111 | cow |
| 8388606 | whale |
| 8388607 | elephant |
| 8388608 | rhinoceros |
+---------+------------+

After the change the insert succeeds again. For reference, the ranges of MySQL's integer types:

Type       Storage (bytes)  Min signed    Min unsigned  Max signed    Max unsigned
TINYINT    1                -128          0             127           255
SMALLINT   2                -32768        0             32767         65535
MEDIUMINT  3                -8388608      0             8388607       16777215
INT        4                -2147483648   0             2147483647    4294967295
BIGINT     8                -2^63         0             2^63-1        2^64-1
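These ranges follow directly from the storage size in bytes; a quick check in Python:

```python
# Derive signed/unsigned ranges of MySQL integer types from their
# storage size in bytes.
def int_range(nbytes, unsigned=False):
    bits = 8 * nbytes
    if unsigned:
        return 0, 2**bits - 1
    return -2**(bits - 1), 2**(bits - 1) - 1

for name, size in [("TINYINT", 1), ("SMALLINT", 2), ("MEDIUMINT", 3),
                   ("INT", 4), ("BIGINT", 8)]:
    print(name, int_range(size), int_range(size, unsigned=True))
# MEDIUMINT signed: (-8388608, 8388607) — hence the cap of 8388607 above
```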

7. To change the AUTO_INCREMENT starting value, use ALTER TABLE, for example:

ALTER TABLE animals AUTO_INCREMENT = 100;

TRUNCATE TABLE resets the AUTO_INCREMENT counter, while DELETE does not. First run DELETE to see this:

root@db 02:58:  [aaaa]> delete from animals;
Query OK, 25 rows affected (0.01 sec)
root@db 03:04: [aaaa]> SELECT * FROM animals;
Empty set (0.00 sec)
root@db 03:04: [aaaa]> insert into animals (name) values('dog');
Query OK, 1 row affected (0.00 sec)
root@db 03:04: [aaaa]> SELECT * FROM animals;
+---------+------+
| id | name |
+---------+------+
| 8388609 | dog |
+---------+------+
1 row in set (0.00 sec)

The AUTO_INCREMENT value keeps growing; it does not restart from 1.
Now run TRUNCATE:

root@db 03:04:  [aaaa]> truncate table animals;
Query OK, 0 rows affected (0.01 sec)
root@db 03:05: [aaaa]> SELECT * FROM animals;
Empty set (0.00 sec)
root@db 03:05: [aaaa]> insert into animals (name) values('dog');
Query OK, 1 row affected (0.01 sec)
root@db 03:05: [aaaa]> SELECT * FROM animals;
+----+------+
| id | name |
+----+------+
| 1 | dog |
+----+------+
1 row in set (0.00 sec)

After TRUNCATE the sequence restarts from 1.
Next, set the starting value with ALTER TABLE:

root@db 03:05:  [aaaa]> truncate table animals;
Query OK, 0 rows affected (0.05 sec)
root@db 03:06: [aaaa]> alter table animals auto_increment=99;
Query OK, 0 rows affected (0.01 sec)
Records: 0 Duplicates: 0 Warnings: 0
root@db 03:07: [aaaa]> insert into animals (name) values('dog');
Query OK, 1 row affected (0.01 sec)
root@db 03:07: [aaaa]> insert into animals (name) values('cat');
Query OK, 1 row affected (0.01 sec)
root@db 03:08: [aaaa]> SELECT * FROM animals;
+-----+------+
| id | name |
+-----+------+
| 99 | dog |
| 100 | cat |
+-----+------+
2 rows in set (0.00 sec)

The sequence now starts at 99.
Note that a value set via ALTER TABLE only takes effect if it is larger than the current maximum in the AUTO_INCREMENT column; a smaller value is reset to the current maximum plus one.
You can also set a server-wide step (auto_increment_increment) and starting offset (auto_increment_offset).
However, auto_increment_offset must not exceed auto_increment_increment, or the offset is ignored:
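The clamping rule for ALTER TABLE can be stated as a tiny function (a hypothetical helper for illustration, not a MySQL API):

```python
# ALTER TABLE ... AUTO_INCREMENT = n only takes effect when n exceeds the
# current column maximum; otherwise the counter becomes max + 1.
def effective_auto_increment(current_max, requested):
    return requested if requested > current_max else current_max + 1

print(effective_auto_increment(0, 99))    # 99: empty table after TRUNCATE
print(effective_auto_increment(100, 50))  # 101: too small, clamped
```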

root@db 04:18:  [aaaa]> truncate table animals;
Query OK, 0 rows affected (0.02 sec)

root@db 04:18: [aaaa]> set @@auto_increment_offset=5;
Query OK, 0 rows affected (0.00 sec)

root@db 04:18: [aaaa]> set @@auto_increment_increment=7;
Query OK, 0 rows affected (0.00 sec)

root@db 04:19: [aaaa]>
root@db 04:20: [aaaa]>
root@db 04:20: [aaaa]> INSERT INTO animals (name) VALUES
-> ('dog'),('cat'),('penguin'),
-> ('lax'),('whale'),('ostrich');
Query OK, 6 rows affected (0.01 sec)
Records: 6 Duplicates: 0 Warnings: 0

root@db 04:20: [aaaa]> select * from animals;
+----+---------+
| id | name |
+----+---------+
| 5 | dog |
| 12 | cat |
| 19 | penguin |
| 26 | lax |
| 33 | whale |
| 40 | ostrich |
+----+---------+
6 rows in set (0.00 sec)
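The generated values above follow the pattern auto_increment_offset + N × auto_increment_increment. A sketch that reproduces the transcript (my own model of the observed behavior):

```python
# With auto_increment_offset=5 and auto_increment_increment=7, each
# generated value is the smallest value >= the counter of the form
# offset + N * increment.
def next_auto_value(counter, offset=5, increment=7):
    if counter <= offset:
        return offset
    n = -(-(counter - offset) // increment)   # ceiling division
    return offset + n * increment

values, counter = [], 1
for _ in range(6):
    v = next_auto_value(counter)
    values.append(v)
    counter = v + 1
print(values)   # [5, 12, 19, 26, 33, 40], matching the ids above
```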

8. For a MyISAM table, when the AUTO_INCREMENT column is part of a composite index, values are generated per index prefix: the next value is computed as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. For example:

drop table animals_myisam;
CREATE TABLE animals_myisam (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;

INSERT INTO animals_myisam (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');

root@db 03:12: [aaaa]> SELECT * FROM animals_myisam ORDER BY grp,id;
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
6 rows in set (0.00 sec)

root@db 03:12: [aaaa]> INSERT INTO animals_myisam (grp,name) VALUES
-> ('mammal','dog'),('mammal','cat'),
-> ('bird','penguin'),('fish','lax'),('mammal','whale'),
-> ('bird','ostrich');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0

root@db 03:13: [aaaa]> INSERT INTO animals_myisam (grp,name) VALUES
-> ('mammal','dog'),('mammal','cat'),
-> ('bird','penguin'),('fish','lax'),('mammal','whale'),
-> ('bird','ostrich');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0

root@db 03:13: [aaaa]> SELECT * FROM animals_myisam ORDER BY grp,id;
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| fish | 2 | lax |
| fish | 3 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| mammal | 4 | dog |
| mammal | 5 | cat |
| mammal | 6 | whale |
| mammal | 7 | dog |
| mammal | 8 | cat |
| mammal | 9 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
| bird | 3 | penguin |
| bird | 4 | ostrich |
| bird | 5 | penguin |
| bird | 6 | ostrich |
+--------+----+---------+
18 rows in set (0.00 sec)
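This per-prefix behavior can be modeled with one counter per group (an illustrative sketch, not MyISAM internals):

```python
# MyISAM composite-key AUTO_INCREMENT: with PRIMARY KEY (grp, id), the
# next id for a group is MAX(id) + 1 within that group only.
from collections import defaultdict

rows = defaultdict(list)   # grp -> ids in insertion order

def insert(grp):
    next_id = max(rows[grp], default=0) + 1
    rows[grp].append(next_id)
    return next_id

for grp in ['mammal', 'mammal', 'bird', 'fish', 'mammal', 'bird']:
    insert(grp)
print(dict(rows))   # {'mammal': [1, 2, 3], 'bird': [1, 2], 'fish': [1]}
```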

In this case, if the rows holding the maximum value within a prefix are deleted, those AUTO_INCREMENT values are reused:

delete from animals_myisam where grp='mammal' and id > 2;
root@db 03:23: [aaaa]> SELECT * FROM animals_myisam ORDER BY grp,id;
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| fish | 2 | lax |
| fish | 3 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| bird | 1 | penguin |
| bird | 2 | ostrich |
| bird | 3 | penguin |
| bird | 4 | ostrich |
| bird | 5 | penguin |
| bird | 6 | ostrich |
+--------+----+---------+
11 rows in set (0.00 sec)
# Insert again
INSERT INTO animals_myisam (grp,name) VALUES ('mammal','dog'),('mammal','cat');

root@db 03:23: [aaaa]> SELECT * FROM animals_myisam ORDER BY grp,id;
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| fish | 2 | lax |
| fish | 3 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | dog |
| mammal | 4 | cat |
| bird | 1 | penguin |
| bird | 2 | ostrich |
| bird | 3 | penguin |
| bird | 4 | ostrich |
| bird | 5 | penguin |
| bird | 6 | ostrich |
+--------+----+---------+
13 rows in set (0.00 sec)

The behavior above applies only when id has no index of its own. Once id gets its own index, generated values come from a single table-wide sequence, independent of the grp column:

create index id_ind on animals_myisam(id);
INSERT INTO animals_myisam (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
root@db 03:29: [aaaa]> select * from animals_myisam order by grp;
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 10 | lax |
| fish | 1 | lax |
| fish | 2 | lax |
| fish | 3 | lax |
| mammal | 2 | cat |
| mammal | 11 | whale |
| mammal | 8 | cat |
| mammal | 7 | dog |
| mammal | 3 | dog |
| mammal | 4 | cat |
| mammal | 1 | dog |
| bird | 5 | penguin |
| bird | 6 | ostrich |
| bird | 4 | ostrich |
| bird | 3 | penguin |
| bird | 2 | ostrich |
| bird | 9 | penguin |
| bird | 1 | penguin |
| bird | 12 | ostrich |
+--------+----+---------+


[Mysql] Does OPTIMIZE TABLE support online DDL?


0. Relevant official documentation

(online DDL) The official documentation states that OPTIMIZE TABLE on a table with a FULLTEXT index is not performed as online DDL:
https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html#online-ddl-table-operations
(OPTIMIZE TABLE) The official documentation notes the same restriction for tables with FULLTEXT indexes:
https://dev.mysql.com/doc/refman/5.7/en/optimize-table.html

Below we test whether online DDL is supported in three cases: with a primary key, with a FULLTEXT index, and with no index at all.

1. With a primary key

Check the table definition

root@db 15:44:  [aaaa]> show create table optimize_tb;
+-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb | CREATE TABLE `optimize_tb` (
`id` int(7) NOT NULL AUTO_INCREMENT,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

Insert test data

root@db 15:44:  [aaaa]> insert into optimize_tb (select * from testfororder limit 1000000);
Query OK, 1000000 rows affected (42.98 sec)
Records: 1000000 Duplicates: 0 Warnings: 0

root@db 15:46: [aaaa]> select count(*) from optimize_tb;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.28 sec)

Check the data file size

[root@dax-mysql-master aaaa]#  ls -lhtr|grep optimize
-rw-r-----. 1 mysql mysql 8.5K 11月 27 15:44 optimize_tb.frm
-rw-r-----. 1 mysql mysql 48M 11月 27 15:46 optimize_tb.ibd

Delete the first 500,000 rows

root@db 15:47:  [aaaa]> delete from optimize_tb limit 500000;
Query OK, 500000 rows affected (8.72 sec)

root@db 15:47: [aaaa]> select count(*) from optimize_tb;
+----------+
| count(*) |
+----------+
| 500000 |
+----------+
1 row in set (0.11 sec)

The data file size is unchanged

[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize
-rw-r-----. 1 mysql mysql 8.5K 11月 27 15:44 optimize_tb.frm
-rw-r-----. 1 mysql mysql 48M 11月 27 15:47 optimize_tb.ibd

Session 1: run OPTIMIZE TABLE

root@db 15:48:  [aaaa]>  optimize table optimize_tb;
+------------------+----------+----------+-------------------------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+------------------+----------+----------+-------------------------------------------------------------------+
| aaaa.optimize_tb | optimize | note | Table does not support optimize, doing recreate + analyze instead |
| aaaa.optimize_tb | optimize | status | OK |
+------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (16.34 sec)

Session 2: run inserts concurrently

root@db 15:47:  [aaaa]> insert into optimize_tb(id,name,age) values(8166638,'lucy1',24);
Query OK, 1 row affected (0.18 sec)
root@db 15:48: [aaaa]> insert into optimize_tb(id,name,age) values(8166638,'lucy1',24);
ERROR 1062 (23000): Duplicate entry '8166638' for key 'PRIMARY'
root@db 15:48: [aaaa]> insert into optimize_tb(id,name,age) values(8166639,'lucy1',24);
Query OK, 1 row affected (0.25 sec)
root@db 15:48: [aaaa]> insert into optimize_tb(id,name,age) values(8166640,'lucy1',24);
Query OK, 1 row affected (0.25 sec)

Finally, check the data file size

[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize
-rw-r-----. 1 mysql mysql 8.5K 11月 27 15:48 optimize_tb.frm
-rw-r-----. 1 mysql mysql 28M 11月 27 15:48 optimize_tb.ibd

The inserts succeed: with a primary key, OPTIMIZE TABLE supports online DDL.

2. With a FULLTEXT index

Create a test table

root@db 15:53:  [aaaa]> create table optimize_tb1 like optimize_tb;
Query OK, 0 rows affected (0.65 sec)

root@db 15:53: [aaaa]> show create table optimize_tb1;
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb1 | CREATE TABLE `optimize_tb1` (
`id` int(7) NOT NULL AUTO_INCREMENT,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.04 sec)

Drop the primary key

root@db 15:53:  [aaaa]> alter table optimize_tb1 drop primary key;
ERROR 1075 (42000): Incorrect table definition; there can be only one auto column and it must be defined as a key

This fails because the column still has the AUTO_INCREMENT attribute, which must be removed first:

root@db 15:54:  [aaaa]> alter table optimize_tb1 modify id int(7);
Query OK, 0 rows affected (2.84 sec)
Records: 0 Duplicates: 0 Warnings: 0

root@db 15:54: [aaaa]> alter table optimize_tb1 drop primary key;
Query OK, 0 rows affected (1.45 sec)
Records: 0 Duplicates: 0 Warnings: 0

root@db 15:54: [aaaa]> show create table optimize_tb1;
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb1 | CREATE TABLE `optimize_tb1` (
`id` int(7) NOT NULL,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Insert data

root@db 15:55:  [aaaa]> insert into optimize_tb1 (select * from testfororder limit 1000000);
Query OK, 1000000 rows affected (26.04 sec)
Records: 1000000 Duplicates: 0 Warnings: 0

root@db 15:56: [aaaa]> select count(*) from optimize_tb1;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.48 sec)

Create a FULLTEXT index

root@db 15:56:  [aaaa]> create fulltext index name_fullind on optimize_tb1(name);
Query OK, 0 rows affected, 1 warning (1 min 12.58 sec)
Records: 0 Duplicates: 0 Warnings: 1

Check the data file size

[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb1
-rw-r-----. 1 mysql mysql 8.5K 11月 27 15:56 optimize_tb1.frm
-rw-r-----. 1 mysql mysql 92M 11月 27 15:58 optimize_tb1.ibd

Delete the first 500,000 rows

root@db 15:58:  [aaaa]> delete from optimize_tb1 limit 500000;
Query OK, 500000 rows affected (1 min 13.37 sec)

root@db 15:59: [aaaa]> select count(*) from optimize_tb1;
+----------+
| count(*) |
+----------+
| 500000 |
+----------+
1 row in set (0.50 sec)

Check the data file size

[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb1
-rw-r-----. 1 mysql mysql 8.5K 11月 27 15:56 optimize_tb1.frm
-rw-r-----. 1 mysql mysql 92M 11月 27 15:59 optimize_tb1.ibd

Session 1: optimize the table

root@db 15:59:  [aaaa]> optimize table optimize_tb1;
+-------------------+----------+----------+-------------------------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+-------------------+----------+----------+-------------------------------------------------------------------+
| aaaa.optimize_tb1 | optimize | note | Table does not support optimize, doing recreate + analyze instead |
| aaaa.optimize_tb1 | optimize | status | OK |
+-------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (1 min 13.65 sec)

Session 2: insert into the table

root@db 16:00:  [aaaa]> insert into optimize_tb1(id,name,age) values(8166638,'lucy1',24);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

Session 3: check the server threads

root@db 16:00:  [(none)]> show processlist;
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
| 5 | root | localhost | aaaa | Query | 41 | copy to tmp table | optimize table optimize_tb1 |
| 7 | root | localhost | aaaa | Query | 40 | Waiting for table metadata lock | insert into optimize_tb1(id,name,age) values(8166638,'lucy1',24) |
| 8 | root | localhost | NULL | Query | 0 | starting | show processlist |
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
3 rows in set (0.00 sec)

Check the data file size

[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb1
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:00 optimize_tb1.frm
-rw-r-----. 1 mysql mysql 48M 11月 27 16:02 optimize_tb1.ibd

When the table has a FULLTEXT index, OPTIMIZE TABLE locks the table: other sessions cannot insert, update, or delete until it finishes.

3. With no index at all

Create a test table

root@db 16:04:  [aaaa]> create table optimize_tb2 like optimize_tb;
Query OK, 0 rows affected (0.60 sec)

root@db 16:04: [aaaa]> show create table optimize_tb2;
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb2 | CREATE TABLE `optimize_tb2` (
`id` int(7) NOT NULL AUTO_INCREMENT,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

Drop the primary key

root@db 16:04:  [aaaa]> alter table optimize_tb2 modify id int(7);
Query OK, 0 rows affected (1.60 sec)
Records: 0 Duplicates: 0 Warnings: 0

root@db 16:04: [aaaa]> alter table optimize_tb2 drop primary key;
Query OK, 0 rows affected (3.07 sec)
Records: 0 Duplicates: 0 Warnings: 0

root@db 16:04: [aaaa]> show create table optimize_tb2;
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb2 | CREATE TABLE `optimize_tb2` (
`id` int(7) NOT NULL,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Insert data

root@db 16:04:  [aaaa]> insert into optimize_tb2 (select * from testfororder limit 1000000);
Query OK, 1000000 rows affected (11.85 sec)
Records: 1000000 Duplicates: 0 Warnings: 0

root@db 16:06: [aaaa]>
root@db 16:07: [aaaa]> select count(*) from optimize_tb2;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.41 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb2
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:04 optimize_tb2.frm
-rw-r-----. 1 mysql mysql 56M 11月 27 16:06 optimize_tb2.ibd

Delete the first 500,000 rows:
root@db 16:07:  [aaaa]> delete from optimize_tb2 limit 500000;
Query OK, 500000 rows affected (2.50 sec)

root@db 16:11: [aaaa]> select count(*) from optimize_tb2;
+----------+
| count(*) |
+----------+
| 500000 |
+----------+
1 row in set (0.29 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb2
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:04 optimize_tb2.frm
-rw-r-----. 1 mysql mysql 56M 11月 27 16:11 optimize_tb2.ibd

Session 1: run OPTIMIZE TABLE:
root@db 16:11:  [aaaa]> optimize table optimize_tb2;
+-------------------+----------+----------+-------------------------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+-------------------+----------+----------+-------------------------------------------------------------------+
| aaaa.optimize_tb2 | optimize | note | Table does not support optimize, doing recreate + analyze instead |
| aaaa.optimize_tb2 | optimize | status | OK |
+-------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (22.04 sec)

Session 2: insert rows while the optimize is running:
root@db 16:11:  [aaaa]> insert into optimize_tb2(id,name,age) values(8166638,'lucy1',24);
Query OK, 1 row affected (0.35 sec)
root@db 16:11: [aaaa]> insert into optimize_tb2(id,name,age) values(8166638,'lucy1',24);
Query OK, 1 row affected (0.44 sec)
root@db 16:11: [aaaa]> insert into optimize_tb2(id,name,age) values(8166638,'lucy1',24);
Query OK, 1 row affected (0.24 sec)
root@db 16:11: [aaaa]> insert into optimize_tb2(id,name,age) values(8166638,'lucy1',24);
Query OK, 1 row affected (0.38 sec)
root@db 16:11: [aaaa]> insert into optimize_tb2(id,name,age) values(8166638,'lucy1',24);
Query OK, 1 row affected (0.20 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb2
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:11 optimize_tb2.frm
-rw-r-----. 1 mysql mysql 32M 11月 27 16:12 optimize_tb2.ibd

With no indexes on the table, the INSERT statements succeed while OPTIMIZE TABLE is running.

4. When the old_alter_table system variable is enabled (it defaults to OFF), or MySQL is started with the skip-new option, OPTIMIZE TABLE rebuilds the table using a table copy and online DDL is not available. The tests below add the skip-new option and repeat the experiments.
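
A quick way to see which path a rebuild will take is to check the variable, and to pin the algorithm explicitly: with ALGORITHM=INPLACE the server raises an error instead of silently falling back to a table copy. A sketch, reusing the test table name from above:

```sql
-- OFF by default; ON means ALTER TABLE / OPTIMIZE TABLE use the old copy method
SHOW GLOBAL VARIABLES LIKE 'old_alter_table';

-- Request an in-place (online DDL) rebuild explicitly; if only the copy
-- algorithm is available, this statement fails with an error instead of copying
ALTER TABLE optimize_tb2 ENGINE=InnoDB, ALGORITHM=INPLACE, LOCK=NONE;
```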

Add the skip-new option to /etc/my.cnf and restart MySQL:
[root@dax-mysql-master aaaa]# more /etc/my.cnf|grep skip-new
skip-new
[root@dax-mysql-master aaaa]# /etc/init.d/mysql restart
Shutting down MySQL....... SUCCESS!
Starting MySQL..... SUCCESS!

Drop the test tables above and run the tests again:
root@db 16:21:  [aaaa]> drop table optimize_tb,optimize_tb1,optimize_tb2;
Query OK, 0 rows affected (1.58 sec)

5. Table with a primary key index

Create the test table:
root@db 16:23:  [aaaa]> create table optimize_tb like testfororder;
Query OK, 0 rows affected (1.29 sec)

Insert test data:
root@db 16:23:  [aaaa]> insert into optimize_tb (select * from testfororder limit 1000000);
Query OK, 1000000 rows affected (33.65 sec)
Records: 1000000 Duplicates: 0 Warnings: 0

root@db 16:24: [aaaa]> show create table optimize_tb;
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb | CREATE TABLE `optimize_tb` (
`id` int(7) NOT NULL AUTO_INCREMENT,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=8166638 DEFAULT CHARSET=utf8mb4 |
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

root@db 16:25: [aaaa]> select count(*) from optimize_tb;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.25 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:23 optimize_tb.frm
-rw-r-----. 1 mysql mysql 48M 11月 27 16:24 optimize_tb.ibd

Delete the first 500,000 rows:
root@db 16:26:  [aaaa]> delete from optimize_tb limit 500000;
Query OK, 500000 rows affected (9.18 sec)

root@db 16:26: [aaaa]> select count(*) from optimize_tb;
+----------+
| count(*) |
+----------+
| 500000 |
+----------+
1 row in set (0.13 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:23 optimize_tb.frm
-rw-r-----. 1 mysql mysql 48M 11月 27 16:26 optimize_tb.ibd
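
As the listing shows, the .ibd file stays at 48M after the DELETE: InnoDB only marks the freed pages as reusable inside the tablespace. The reclaimable space can be estimated from data_free before deciding whether a rebuild is worthwhile (a sketch):

```sql
-- data_free approximates the space a rebuild could reclaim
SELECT table_name, data_length, data_free
FROM information_schema.tables
WHERE table_schema = 'aaaa' AND table_name = 'optimize_tb';
```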

Session 1: run OPTIMIZE TABLE:
root@db 16:27:  [aaaa]> optimize table optimize_tb;
Query OK, 500000 rows affected (26.01 sec)
Records: 500000 Duplicates: 0 Warnings: 0

Session 2: insert a row:
root@db 16:27:  [aaaa]> insert into optimize_tb(id,name,age) values(8166638,'lucy1',24);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

Session 3: check the MySQL threads:
root@db 16:27:  [(none)]> show processlist;
+----+------+-----------+------+---------+------+---------------------------------+-----------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+-----------+------+---------+------+---------------------------------+-----------------------------------------------------------------+
| 2 | root | localhost | aaaa | Query | 19 | copy to tmp table | optimize table optimize_tb |
| 3 | root | localhost | aaaa | Query | 11 | Waiting for table metadata lock | insert into optimize_tb(id,name,age) values(8166638,'lucy1',24) |
| 4 | root | localhost | NULL | Query | 0 | starting | show processlist |
+----+------+-----------+------+---------+------+---------------------------------+-----------------------------------------------------------------+

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:27 optimize_tb.frm
-rw-r-----. 1 mysql mysql 28M 11月 27 16:27 optimize_tb.ibd

With a primary key index on the table, the insert fails with a lock wait timeout: the table-copy rebuild holds the table metadata lock for its whole duration.
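
The "Waiting for table metadata lock" state seen above can be traced back to the blocking session. A sketch using the sys schema (MySQL 5.7+; the raw view additionally requires the wait/lock/metadata/sql/mdl instrument to be enabled in performance_schema.setup_instruments):

```sql
-- Who holds the table's metadata lock and who is waiting on it
SELECT * FROM sys.schema_table_lock_waits\G

-- Raw metadata lock view
SELECT object_schema, object_name, lock_type, lock_status
FROM performance_schema.metadata_locks
WHERE object_name = 'optimize_tb';
```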

6. Table with a FULLTEXT index

Create the test table:
root@db 16:31:  [aaaa]> create table optimize_tb1 like optimize_tb;
Query OK, 0 rows affected (1.07 sec)

root@db 16:31: [aaaa]> show create table optimize_tb1;
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb1 | CREATE TABLE `optimize_tb1` (
`id` int(7) NOT NULL AUTO_INCREMENT,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

Drop the primary key index:
root@db 16:31:  [aaaa]> alter table optimize_tb1 modify id int(7);
Query OK, 0 rows affected (1.13 sec)
Records: 0 Duplicates: 0 Warnings: 0

root@db 16:31: [aaaa]> alter table optimize_tb1 drop primary key;
Query OK, 0 rows affected (1.41 sec)
Records: 0 Duplicates: 0 Warnings: 0

root@db 16:31: [aaaa]> show create table optimize_tb1;
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb1 | CREATE TABLE `optimize_tb1` (
`id` int(7) NOT NULL,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Insert test data:
root@db 16:32:  [aaaa]> insert into optimize_tb1 (select * from testfororder limit 1000000);
Query OK, 1000000 rows affected (14.49 sec)
Records: 1000000 Duplicates: 0 Warnings: 0

root@db 16:32: [aaaa]> select count(*) from optimize_tb1;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.44 sec)

Create the FULLTEXT index:
root@db 16:33:  [aaaa]> create fulltext index name_fullind on optimize_tb1(name);
Query OK, 0 rows affected, 1 warning (57.22 sec)
Records: 0 Duplicates: 0 Warnings: 1
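
The single warning is worth reading: when the first FULLTEXT index is added, InnoDB typically rebuilds the table to add the hidden FTS_DOC_ID column, which also helps explain the growth of the .ibd file. It can be shown right after the CREATE INDEX:

```sql
-- Inspect the warning raised by the FULLTEXT index creation;
-- it usually reports an InnoDB table rebuild for FTS_DOC_ID
SHOW WARNINGS;
```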

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb1
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:33 optimize_tb1.frm
-rw-r-----. 1 mysql mysql 92M 11月 27 16:34 optimize_tb1.ibd

Delete the first 500,000 rows:
root@db 16:34:  [aaaa]> delete from optimize_tb1 limit 500000;
Query OK, 500000 rows affected (59.90 sec)

root@db 16:35: [aaaa]> select count(*) from optimize_tb1;
+----------+
| count(*) |
+----------+
| 500000 |
+----------+
1 row in set (0.69 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb1
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:33 optimize_tb1.frm
-rw-r-----. 1 mysql mysql 92M 11月 27 16:36 optimize_tb1.ibd

Session 1: run OPTIMIZE TABLE:
root@db 16:36:  [aaaa]> optimize table optimize_tb1;
Query OK, 500000 rows affected (55.85 sec)
Records: 500000 Duplicates: 0 Warnings: 0

Session 2: insert rows:
root@db 16:37:  [aaaa]> insert into optimize_tb1(id,name,age) values(8166638,'lucy1',24);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
root@db 16:37: [aaaa]> insert into optimize_tb1(id,name,age) values(8166638,'lucy1',24);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

Session 3: check the MySQL threads:
root@db 16:37:  [(none)]> show processlist;
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
| 2 | root | localhost | aaaa | Query | 9 | copy to tmp table | optimize table optimize_tb1 |
| 4 | root | localhost | NULL | Query | 0 | starting | show processlist |
| 5 | root | localhost | aaaa | Query | 8 | Waiting for table metadata lock | insert into optimize_tb1(id,name,age) values(8166638,'lucy1',24) |
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
3 rows in set (0.00 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb1
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:37 optimize_tb1.frm
-rw-r-----. 1 mysql mysql 48M 11月 27 16:38 optimize_tb1.ibd

With a FULLTEXT index on the table, the inserts fail with lock wait timeouts.

7. Table with no indexes

Create the test table:
root@db 16:38:  [aaaa]> create table optimize_tb2 like optimize_tb;
Query OK, 0 rows affected (0.44 sec)

root@db 16:39: [aaaa]> show create table optimize_tb2;
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb2 | CREATE TABLE `optimize_tb2` (
`id` int(7) NOT NULL AUTO_INCREMENT,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

root@db 16:40: [aaaa]> alter table optimize_tb2 modify id int(7);
Query OK, 0 rows affected (1.39 sec)
Records: 0 Duplicates: 0 Warnings: 0

root@db 16:40: [aaaa]> alter table optimize_tb2 drop primary key;
Query OK, 0 rows affected (1.01 sec)
Records: 0 Duplicates: 0 Warnings: 0

root@db 16:40: [aaaa]> show create table optimize_tb2;
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| optimize_tb2 | CREATE TABLE `optimize_tb2` (
`id` int(7) NOT NULL,
`name` varchar(20) DEFAULT NULL,
`age` int(4) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

Insert test data:
root@db 16:40:  [aaaa]> insert into optimize_tb2 (select * from testfororder limit 1000000);
Query OK, 1000000 rows affected (16.74 sec)
Records: 1000000 Duplicates: 0 Warnings: 0

root@db 16:41: [aaaa]> select count(*) from optimize_tb2;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.49 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb2
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:40 optimize_tb2.frm
-rw-r-----. 1 mysql mysql 56M 11月 27 16:41 optimize_tb2.ibd

Delete the first 500,000 rows:
root@db 16:42:  [aaaa]> delete from optimize_tb2 limit 500000;
Query OK, 500000 rows affected (3.96 sec)

root@db 16:42: [aaaa]> select count(*) from optimize_tb2;
+----------+
| count(*) |
+----------+
| 500000 |
+----------+
1 row in set (0.32 sec)

Check the data file size:
[root@dax-mysql-master aaaa]# ls -lhtr|grep optimize_tb2
-rw-r-----. 1 mysql mysql 8.5K 11月 27 16:40 optimize_tb2.frm
-rw-r-----. 1 mysql mysql 56M 11月 27 16:42 optimize_tb2.ibd

Session 1: run OPTIMIZE TABLE:
root@db 16:42:  [aaaa]> optimize table optimize_tb2;
Query OK, 500000 rows affected (23.61 sec)
Records: 500000 Duplicates: 0 Warnings: 0

Session 2: insert a row:
root@db 16:43:  [aaaa]> insert into optimize_tb2(id,name,age) values(8166638,'lucy1',24);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

Session 3: check the MySQL threads:
root@db 16:43:  [(none)]> show processlist;
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
| 2 | root | localhost | aaaa | Query | 7 | copy to tmp table | optimize table optimize_tb2 |
| 4 | root | localhost | NULL | Query | 0 | starting | show processlist |
| 5 | root | localhost | aaaa | Query | 6 | Waiting for table metadata lock | insert into optimize_tb2(id,name,age) values(8166638,'lucy1',24) |
+----+------+-----------+------+---------+------+---------------------------------+------------------------------------------------------------------+
3 rows in set (0.00 sec)

Even with no indexes on the table, the insert blocks on the metadata lock: with skip-new set, the table-copy rebuild takes effect regardless of indexes.

8. Conclusions

When the old_alter_table variable is enabled (it defaults to disabled), or MySQL is started with the skip-new option, OPTIMIZE TABLE rebuilds the table using a table copy regardless of whether the table has any indexes; online DDL is not available.
When old_alter_table is disabled (the default) and skip-new is not set, OPTIMIZE TABLE uses online DDL as long as the table has no FULLTEXT index; if a FULLTEXT index is present, online DDL is not available.
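
For InnoDB, OPTIMIZE TABLE is documented as mapping to a table rebuild plus a statistics refresh; running the equivalent statements directly makes the behavior explicit (a sketch, using one of the test tables above):

```sql
-- Equivalent of OPTIMIZE TABLE for an InnoDB table:
ALTER TABLE optimize_tb FORCE;   -- rebuild (online DDL when permitted)
ANALYZE TABLE optimize_tb;       -- refresh index statistics
```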


[Mysql] mysql-mgr reproducing "The master has purged binary logs containing GTIDs that the slave requires.", Error_code: 1236


0. Test environment

MySQL MGR in single-primary mode: primary node 1, secondary node 1, secondary node 2.

1. Experiment 1: manually remove a binlog file on the primary

Secondary 2:

stop group_replication;

Primary:
#Rotate the binlog, create table gtid_test10, and check the current binlog file:
flush logs;create table gtid_test10 (ID int) engine=innodb;
Query OK, 0 rows affected (0.04 sec)
Query OK, 0 rows affected (0.05 sec)
root@db 05:48: [test]> show master status;
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000029 | 493 | | | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-27 |
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
#Rotate the binlog, create table gtid_test11, and check the current binlog file:
root@db 05:48: [test]> flush logs;create table gtid_test11 (ID int) engine=innodb;
Query OK, 0 rows affected (0.05 sec)

Query OK, 0 rows affected (0.05 sec)

root@db 05:48: [test]> show master status;
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000030 | 493 | | | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-28 |
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
#Manually remove a binlog file from the primary:
mv mysql-binlog.000029 /tmp/

Secondary 2:

start group_replication;
#Check secondary 2's error log:
2018-09-18T05:49:55.983888Z 93 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2018-09-18T05:49:55.984034Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dax-mysql-slave:3306, dax-mysql-master:3306, dax-mysql-slave2:3306 on view 15371510730966421:31.'
2018-09-18T05:49:56.054981Z 93 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dax-mysql-master', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T05:49:56.144944Z 93 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 8182e5ae-af54-11e8-af0e-000d3a801ae2 at dax-mysql-master port: 3306.'
2018-09-18T05:49:56.155058Z 95 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2018-09-18T05:49:56.157141Z 95 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@dax-mysql-master:3306',replication started in log 'FIRST' at position 4
2018-09-18T05:49:56.179441Z 96 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_recovery.000001' position: 4
2018-09-18T05:49:56.326072Z 95 [ERROR] Error reading packet from server for channel 'group_replication_recovery': Could not open log file (server_errno=1236)
2018-09-18T05:49:56.326236Z 95 [ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when reading data from binary log: 'Could not open log file', Error_code: 1236
#Because the binlog file was manually removed, secondary 2 cannot fetch it from the primary and errors out
2018-09-18T05:49:56.326310Z 95 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-binlog.000029', position 4
2018-09-18T05:49:56.326422Z 93 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2018-09-18T05:49:56.347776Z 96 [Note] Error reading relay log event for channel 'group_replication_recovery': slave SQL thread was killed
2018-09-18T05:49:56.438760Z 93 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='dax-mysql-master', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T05:49:56.510091Z 93 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'
2018-09-18T05:49:56.572561Z 93 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dax-mysql-slave', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
#Secondary 2 then retries against secondary 1. Since only the primary's binlog file was removed and secondary 1 still has its copy, secondary 2 obtains the file it needs and starts catching up.
2018-09-18T05:49:56.643413Z 93 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 66d67181-ba5b-11e8-9c54-000d3a800ed3 at dax-mysql-slave port: 3306.'
2018-09-18T05:49:56.643906Z 99 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2018-09-18T05:49:56.646363Z 99 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@dax-mysql-slave:3306',replication started in log 'FIRST' at position 4
2018-09-18T05:49:56.666439Z 100 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_recovery.000001' position: 4
2018-09-18T05:49:56.882987Z 93 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2018-09-18T05:49:56.905507Z 99 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2018-09-18T05:49:56.905535Z 99 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-binlog.000005', position 10512
2018-09-18T05:49:56.999936Z 93 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='dax-mysql-slave', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T05:49:57.084035Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'

Synchronization completes.
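
After recovery, group membership can be confirmed from the performance_schema replication tables on any member (a sketch):

```sql
-- All three members should report ONLINE once recovery finishes
SELECT member_host, member_port, member_state
FROM performance_schema.replication_group_members;
```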

2. Experiment 2: remove the primary's binlog files with PURGE BINARY LOGS

Secondary 2:

stop group_replication;

Primary:
#Rotate the binlog, create table gtid_test12, and check the current binlog file:
root@db 06:04: [test]> flush logs;create table gtid_test12 (ID int) engine=innodb;
Query OK, 0 rows affected (0.03 sec)
Query OK, 0 rows affected (0.05 sec)
root@db 06:04: [test]> show master status;
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000033 | 493 | | | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-30 |
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

#Rotate the binlog, create table gtid_test13, and check the current binlog file:
root@db 06:04: [test]> flush logs;create table gtid_test13 (ID int) engine=innodb;
Query OK, 0 rows affected (0.04 sec)
Query OK, 0 rows affected (0.05 sec)
root@db 06:04: [test]> show master status;
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000034 | 493 | | | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-31 |
+---------------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
#Purge all binlogs before mysql-binlog.000034 on the primary:
purge binary logs to 'mysql-binlog.000034';
#Confirm in the log directory that all files before 000034 are gone:
ll mysql-binlog.*
-rw-r----- 1 mysql mysql 493 Sep 18 06:04 mysql-binlog.000034
-rw-r----- 1 mysql mysql 36 Sep 18 06:22 mysql-binlog.index

#Check the primary's GTID variables; gtid_purged is now populated:
root@db 06:37: [test]> show global variables like '%gtid%';
+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value |
+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| binlog_gtid_simple_recovery | ON |
| enforce_gtid_consistency | ON |
| group_replication_allow_local_disjoint_gtids_join | ON |
| group_replication_gtid_assignment_block_size | 1000000 |
| gtid_executed | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-31 |
| gtid_executed_compression_period | 1000 |
| gtid_mode | ON |
| gtid_owned | |
| gtid_purged | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-30 |
| session_track_gtids | OFF |
+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
10 rows in set (0.01 sec)
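
Whether a donor can still serve a joining member is a GTID set question: any GTIDs in the donor's gtid_purged that the joiner has not executed are gone from that donor's binlogs. A sketch, with the joiner's gtid_executed substituted in as a hypothetical placeholder:

```sql
-- @joiner_executed stands in for the joining member's gtid_executed value
SET @joiner_executed = '8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253';
-- A non-empty result means transactions the joiner needs are already purged here
SELECT GTID_SUBTRACT(@@GLOBAL.gtid_purged, @joiner_executed) AS unavailable_gtids;
```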

Secondary 2:

start group_replication;

Check secondary 2's error log:
2018-09-18T06:38:36.664372Z 107 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "dd412cc2-ba1f-11e8-9ba2-000d3a801ae2"; group_replication_local_address: "dax-mysql-slave2:24901"; group_replication_group_seeds: "dax-mysql-slave:24901,dax-mysql-master:24901,dax-mysql-slave2:24901"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
2018-09-18T06:38:36.664408Z 107 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2018-09-18T06:38:36.664428Z 107 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2018-09-18T06:38:36.664457Z 107 [Note] Plugin group_replication reported: 'Member configuration: member_id: 3306101; member_uuid: "c6ac9ccd-b80b-11e8-b968-000d3a801bf4"; single-primary mode: "true"; group_replication_auto_increment_increment: 1; '
2018-09-18T06:38:36.664812Z 109 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 493, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T06:38:36.736702Z 107 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2018-09-18T06:38:36.736749Z 107 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 1'
2018-09-18T06:38:36.736774Z 107 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 3306101'
2018-09-18T06:38:36.736708Z 112 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_applier.000014' position: 4
2018-09-18T06:38:36.737167Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2018-09-18T06:38:36.737227Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 24901'
2018-09-18T06:38:38.815550Z 107 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address dax-mysql-master:3306.'
2018-09-18T06:38:38.815711Z 0 [ERROR] Plugin group_replication reported: 'Group contains 3 members which is greater than group_replication_auto_increment_increment value of 1. This can lead to an higher rate of transactional aborts.'
2018-09-18T06:38:38.816035Z 117 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2018-09-18T06:38:38.816125Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dax-mysql-slave:3306, dax-mysql-master:3306, dax-mysql-slave2:3306 on view 15371510730966421:33.'
2018-09-18T06:38:38.881138Z 117 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dax-mysql-slave', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
# Secondary 2 chose secondary 1 as the donor; since secondary 1's binlog files had not been deleted, fetching them succeeded.
2018-09-18T06:38:38.959786Z 117 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 66d67181-ba5b-11e8-9c54-000d3a800ed3 at dax-mysql-slave port: 3306.'
2018-09-18T06:38:38.960147Z 119 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2018-09-18T06:38:38.962244Z 119 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@dax-mysql-slave:3306',replication started in log 'FIRST' at position 4
2018-09-18T06:38:38.982193Z 120 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_recovery.000001' position: 4
2018-09-18T06:38:39.194864Z 117 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2018-09-18T06:38:39.218168Z 119 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2018-09-18T06:38:39.218197Z 119 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-binlog.000005', position 11362
2018-09-18T06:38:39.313623Z 117 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='dax-mysql-slave', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T06:38:39.400236Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'

Synchronization completed.

3. Experiment 3: delete the binlog files on both the primary and secondary 1.

Secondary 2:

stop group_replication;

Primary 1:

# Rotate the log, create table gtid_test14, and check the current binlog file:
flush logs;create table gtid_test14 (ID int) engine=innodb;
show master status;
# Rotate the log, create table gtid_test15, and check the current binlog file:
flush logs;create table gtid_test15 (ID int) engine=innodb;
show master status;
# Purge all logs prior to mysql-binlog.000036:
purge binary logs to 'mysql-binlog.000036';
[root@dax-mysql-master log]# ll mysql-binlog.*
-rw-r----- 1 mysql mysql 493 Sep 18 06:53 mysql-binlog.000036
-rw-r----- 1 mysql mysql 36 Sep 18 06:56 mysql-binlog.index

Delete the binlog files on secondary 1:

show master status;
+---------------------+----------+--------------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000005 | 11728 | | | 324a6fd1-ba55-11e8-b3ff-000d3a800ed3:1,
8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-34 |
+---------------------+----------+--------------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
# Rotate the binlog:
flush logs;
show master status;
root@db 06:57: [test]> show master status;
+---------------------+----------+--------------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000006 | 350 | | | 324a6fd1-ba55-11e8-b3ff-000d3a800ed3:1,
8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-34 |
+---------------------+----------+--------------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@dax-mysql-slave log]# ll
total 96
-rw-r----- 1 mysql mysql 53478 Sep 18 06:49 error.log
-rw-r----- 1 mysql mysql 177 Sep 17 08:39 mysql-binlog.000001
-rw-r----- 1 mysql mysql 437 Sep 17 09:21 mysql-binlog.000002
-rw-r----- 1 mysql mysql 217 Sep 17 09:25 mysql-binlog.000003
-rw-r----- 1 mysql mysql 209 Sep 17 09:26 mysql-binlog.000004
-rw-r----- 1 mysql mysql 11774 Sep 18 06:57 mysql-binlog.000005
-rw-r----- 1 mysql mysql 350 Sep 18 06:57 mysql-binlog.000006
-rw-r----- 1 mysql mysql 216 Sep 18 06:57 mysql-binlog.index
-rw-r--r-- 1 mysql mysql 0 Sep 17 08:36 mysqld.log
-rw-r----- 1 mysql mysql 1968 Sep 18 06:57 slow.log
# Purge all logs prior to mysql-binlog.000006:
purge binary logs to 'mysql-binlog.000006';
[root@dax-mysql-slave log]# ll
total 68
-rw-r----- 1 mysql mysql 53478 Sep 18 06:49 error.log
-rw-r----- 1 mysql mysql 350 Sep 18 06:57 mysql-binlog.000006
-rw-r----- 1 mysql mysql 36 Sep 18 06:58 mysql-binlog.index
-rw-r--r-- 1 mysql mysql 0 Sep 17 08:36 mysqld.log
-rw-r----- 1 mysql mysql 1968 Sep 18 06:57 slow.log

Secondary 2:

start group_replication;

Check secondary 2's error log:

2018-09-18T06:59:20.236425Z 136 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'
2018-09-18T06:59:20.309947Z 136 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dax-mysql-slave', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T06:59:20.391623Z 136 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 66d67181-ba5b-11e8-9c54-000d3a800ed3 at dax-mysql-slave port: 3306.'
2018-09-18T06:59:20.391891Z 142 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2018-09-18T06:59:20.394226Z 142 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@dax-mysql-slave:3306',replication started in log 'FIRST' at position 4
2018-09-18T06:59:20.416722Z 143 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_recovery.000001' position: 4
2018-09-18T06:59:20.424400Z 142 [ERROR] Error reading packet from server for channel 'group_replication_recovery': The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires. (server_errno=1236)
2018-09-18T06:59:20.424467Z 142 [ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Error_code: 1236
2018-09-18T06:59:20.424488Z 142 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position 4
2018-09-18T06:59:20.424640Z 136 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2018-09-18T06:59:20.436806Z 143 [Note] Error reading relay log event for channel 'group_replication_recovery': slave SQL thread was killed
2018-09-18T06:59:20.522269Z 136 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='dax-mysql-slave', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T06:59:20.603837Z 136 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 3/10'
2018-09-18T07:00:20.679461Z 136 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dax-mysql-slave', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T07:00:20.762021Z 136 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 66d67181-ba5b-11e8-9c54-000d3a800ed3 at dax-mysql-slave port: 3306.'
2018-09-18T07:00:20.762368Z 146 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2018-09-18T07:00:20.764842Z 146 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@dax-mysql-slave:3306',replication started in log 'FIRST' at position 4
2018-09-18T07:00:20.787728Z 147 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_recovery.000001' position: 4
2018-09-18T07:00:20.794590Z 146 [ERROR] Error reading packet from server for channel 'group_replication_recovery': The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires. (server_errno=1236)
2018-09-18T07:00:20.794623Z 146 [ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Error_code: 1236
2018-09-18T07:00:20.794635Z 146 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position 4
2018-09-18T07:00:20.794786Z 136 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2018-09-18T07:00:20.807977Z 147 [Note] Error reading relay log event for channel 'group_replication_recovery': slave SQL thread was killed
2018-09-18T07:00:20.895774Z 136 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='dax-mysql-slave', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
# Secondary 2 tried several times to fetch binlogs from secondary 1 without success, then switched master_host and tried to fetch them from the primary instead:
2018-09-18T07:00:20.980112Z 136 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 4/10'
2018-09-18T07:00:21.052930Z 136 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dax-mysql-master', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T07:00:21.141126Z 136 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 8182e5ae-af54-11e8-af0e-000d3a801ae2 at dax-mysql-master port: 3306.'
2018-09-18T07:00:21.141498Z 150 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2018-09-18T07:00:21.143773Z 150 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@dax-mysql-master:3306',replication started in log 'FIRST' at position 4
2018-09-18T07:00:21.166993Z 151 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_recovery.000001' position: 4
2018-09-18T07:00:21.171461Z 150 [ERROR] Error reading packet from server for channel 'group_replication_recovery': The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires. (server_errno=1236)
2018-09-18T07:00:21.171518Z 150 [ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Error_code: 1236
2018-09-18T07:00:21.171528Z 150 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position 4
2018-09-18T07:00:21.171645Z 136 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2018-09-18T07:00:21.186565Z 151 [Note] Error reading relay log event for channel 'group_replication_recovery': slave SQL thread was killed
2018-09-18T07:00:21.271867Z 136 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='dax-mysql-master', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
....
[ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Error_code: 1236

Because the binlog files on both primary 1 and secondary 1 had been purged, secondary 2 could not synchronize and eventually left the group.
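The failure above comes down to GTID set arithmetic: a joiner can recover from a donor only if none of the transactions it is missing fall inside the donor's gtid_purged set (MySQL exposes the underlying operation as the GTID_SUBTRACT() function). A minimal sketch of that check, with hypothetical helper names and toy single-UUID values rather than MySQL's actual internals:

```python
# Sketch of the GTID-set check behind error 1236: recovery from a donor
# fails when any transaction the joiner still needs has been purged from
# the donor's binlogs. Hypothetical helpers; not MySQL's implementation.

def parse_gtid_set(s):
    """Parse 'uuid:1-5:7,uuid2:3' into {uuid: set_of_txn_numbers}."""
    result = {}
    for part in s.replace("\n", "").split(","):
        part = part.strip()
        if not part:
            continue
        uuid, *intervals = part.split(":")
        txns = result.setdefault(uuid, set())
        for iv in intervals:
            lo, _, hi = iv.partition("-")
            txns.update(range(int(lo), int(hi or lo) + 1))
    return result

def gtid_subtract(a, b):
    """Transactions present in set a but not in set b."""
    return {u: t - b.get(u, set()) for u, t in a.items() if t - b.get(u, set())}

def can_recover(joiner_executed, donor_executed, donor_purged):
    missing = gtid_subtract(parse_gtid_set(donor_executed),
                            parse_gtid_set(joiner_executed))
    purged = parse_gtid_set(donor_purged)
    # Recovery fails if any missing transaction was purged on the donor.
    return all(not (txns & purged.get(u, set())) for u, txns in missing.items())

# Toy values: the donor executed 1-35 but purged 1-33. A joiner that
# already has 1-33 only needs 34-35, which the donor can still serve;
# a joiner stuck at 1-30 also needs 31-33, which are gone.
print(can_recover("aaaa:1-33", "aaaa:1-35", "aaaa:1-33"))  # True
print(can_recover("aaaa:1-30", "aaaa:1-35", "aaaa:1-33"))  # False
```

This is why in the logs every donor in the group returned the same error 1236: the required GTIDs overlapped gtid_purged on all of them.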

4. Check the purged GTIDs on the primary

 show global variables like '%gtid%';
+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value |
+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| binlog_gtid_simple_recovery | ON |
| enforce_gtid_consistency | ON |
| group_replication_allow_local_disjoint_gtids_join | ON |
| group_replication_gtid_assignment_block_size | 1000000 |
| gtid_executed | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-35 |
| gtid_executed_compression_period | 1000 |
| gtid_mode | ON |
| gtid_owned | |
| gtid_purged | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-33 |
| session_track_gtids | OFF |
+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
10 rows in set (0.00 sec)

Secondary 2 attempts to skip the missing transactions by setting GTID_PURGED, then rejoins the group:

stop group_replication;
reset master;
set global GTID_PURGED="8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,d240752c-b809-11e8-8947-000d3a800ed3:1,dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-33";
start group_replication;
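The gtid_purged value copied from the SHOW GLOBAL VARIABLES output spans several display lines, and the SET statement needs it as one comma-separated string. A small helper (hypothetical, assuming the standard uuid:interval[,...] format) to flatten it reduces copy-paste errors:

```python
# Flatten a multi-line gtid_purged value, as copied from the
# SHOW GLOBAL VARIABLES output, into a one-line SET statement.
# Hypothetical helper; assumes the standard uuid:interval format.

def build_set_gtid_purged(raw_value: str) -> str:
    parts = [p.strip() for p in raw_value.replace("\n", ",").split(",")]
    gtid_set = ",".join(p for p in parts if p)
    return 'SET GLOBAL GTID_PURGED="%s";' % gtid_set

raw = """8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-33"""
print(build_set_gtid_purged(raw))
```

Note that SET GLOBAL GTID_PURGED only works when the server's gtid_executed is empty, which is why RESET MASTER is run first.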

Check secondary 2's error log again:

2018-09-18T07:08:05.524626Z 125 [Note] @@GLOBAL.GTID_PURGED was changed from '' to '8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-33'.
2018-09-18T07:08:05.524661Z 125 [Note] @@GLOBAL.GTID_EXECUTED was changed from '' to '8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-33'.
2018-09-18T07:08:20.457841Z 125 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2018-09-18T07:08:20.457971Z 125 [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 10.0.7.51/24,127.0.0.1/8 to the whitelist'
2018-09-18T07:08:20.458188Z 125 [Note] Plugin group_replication reported: '[GCS] Translated 'dax-mysql-slave2' to 10.0.7.51'
2018-09-18T07:08:20.458321Z 125 [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
2018-09-18T07:08:20.458401Z 125 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2018-09-18T07:08:20.458433Z 125 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "dd412cc2-ba1f-11e8-9ba2-000d3a801ae2"; group_replication_local_address: "dax-mysql-slave2:24901"; group_replication_group_seeds: "dax-mysql-slave:24901,dax-mysql-master:24901,dax-mysql-slave2:24901"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
2018-09-18T07:08:20.458473Z 125 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2018-09-18T07:08:20.458480Z 125 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2018-09-18T07:08:20.458503Z 125 [Note] Plugin group_replication reported: 'Member configuration: member_id: 3306101; member_uuid: "c6ac9ccd-b80b-11e8-b968-000d3a801bf4"; single-primary mode: "true"; group_replication_auto_increment_increment: 1; '
2018-09-18T07:08:20.458664Z 181 [Note] Plugin group_replication reported: 'Detected previous RESET MASTER invocation or an issue exists in the group replication applier relay log. Purging existing applier logs.'
2018-09-18T07:08:20.531810Z 181 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T07:08:20.632692Z 125 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2018-09-18T07:08:20.632736Z 125 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 1'
2018-09-18T07:08:20.632758Z 125 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 3306101'
2018-09-18T07:08:20.632694Z 184 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_applier.000001' position: 4
2018-09-18T07:08:20.632947Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2018-09-18T07:08:20.632977Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 24901'
2018-09-18T07:08:22.329812Z 125 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address dax-mysql-master:3306.'
2018-09-18T07:08:22.329901Z 0 [ERROR] Plugin group_replication reported: 'Group contains 3 members which is greater than group_replication_auto_increment_increment value of 1. This can lead to an higher rate of transactional aborts.'
2018-09-18T07:08:22.330145Z 189 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2018-09-18T07:08:22.330221Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dax-mysql-slave:3306, dax-mysql-master:3306, dax-mysql-slave2:3306 on view 15371510730966421:37.'
2018-09-18T07:08:22.394512Z 189 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dax-mysql-master', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T07:08:22.468142Z 189 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 8182e5ae-af54-11e8-af0e-000d3a801ae2 at dax-mysql-master port: 3306.'
2018-09-18T07:08:22.468378Z 191 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2018-09-18T07:08:22.470340Z 191 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@dax-mysql-master:3306',replication started in log 'FIRST' at position 4
2018-09-18T07:08:22.490866Z 192 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './relay-log-group_replication_recovery.000001' position: 4
2018-09-18T07:08:22.632774Z 189 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2018-09-18T07:08:22.653083Z 191 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2018-09-18T07:08:22.653098Z 191 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-binlog.000036', position 1381
2018-09-18T07:08:22.734186Z 189 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='dax-mysql-master', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2018-09-18T07:08:22.808784Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'

At this point secondary 2's data is inconsistent with the primary and secondary 1: the gtid_test14 table does not exist on it. pt-table-checksum and pt-table-sync could in principle be used to resync the data, but this approach did not succeed when tested against the MySQL MGR environment.
Another option is to take a fresh backup from the primary node and restore it on secondary 2.
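To spot this kind of divergence, such as the missing gtid_test14 table, a quick sanity check is to diff the table lists of the two members. A minimal sketch with hardcoded toy lists; in practice each list would come from querying information_schema.tables on the respective node:

```python
# Compare table lists from two group members to find objects that exist
# on the primary but not on a lagging secondary. Toy lists stand in for
# the result of querying information_schema.tables on each node.

def missing_tables(primary_tables, secondary_tables):
    return sorted(set(primary_tables) - set(secondary_tables))

primary = ["gtid_test13", "gtid_test14", "gtid_test15"]
secondary2 = ["gtid_test13", "gtid_test15"]  # one create was skipped
print(missing_tables(primary, secondary2))  # ['gtid_test14']
```

A check like this only catches missing objects, not row-level drift, which is what pt-table-checksum is designed to detect.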


© 2020 zeven0707's blog