[Mysql] Recovering from an accidental DROP TABLE using a mysqldump backup and the binlog


1. View the data in the table to be backed up:

root@db 07:30:  [test1]> select * from test;
+----+
| ID |
+----+
| 1 |
| 2 |
| 3 |
+----+

2. Check the current binlog position:

show master status;
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000036 | 80147 | | | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-195:1000036-1000040 |
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

3. Back up the test table in the test1 database (--master-data=2 writes the binlog coordinates into the dump as a comment; --single-transaction takes a consistent snapshot without locking the tables):

mysqldump -uroot -p12345678 test1 test --master-data=2 --single-transaction > /data/test.dump

4. Inspect the backup file:

 more /data/test.dump
-- MySQL dump 10.13 Distrib 5.7.22, for linux-glibc2.12 (x86_64)
--
-- Host: localhost Database: test1
-- ------------------------------------------------------
-- Server version 5.7.22-log

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
SET @@SESSION.SQL_LOG_BIN= 0;

--
-- GTID state at the beginning of the backup
--

SET @@GLOBAL.GTID_PURGED='8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-195:1000036-1000040';

--
-- Position to start replication or point-in-time recovery from
--

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-binlog.000036', MASTER_LOG_POS=80147;

--
-- Table structure for table `test`
--

DROP TABLE IF EXISTS `test`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `test` (
`ID` int(11) NOT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `test`
--

LOCK TABLES `test` WRITE;
/*!40000 ALTER TABLE `test` DISABLE KEYS */;
INSERT INTO `test` VALUES (1),(2),(3);
/*!40000 ALTER TABLE `test` ENABLE KEYS */;
UNLOCK TABLES;
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;

/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

-- Dump completed on 2018-09-25 7:32:05

5. After the backup completes, insert a new row:

root@db 07:34:  [test1]> insert into test values(4);
Query OK, 1 row affected (0.01 sec)

6. Rotate the binlog:

root@db 07:44:  [test1]> flush binary logs;
Query OK, 0 rows affected (0.04 sec)

7. Check the binlog status:

root@db 07:44:  [test1]> show master status;
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000037 | 326 | | | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-196:1000036-1000040 |
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

8. Insert another row, which lands in the new binlog file:

root@db 07:44:  [test1]> insert into test values(5);
Query OK, 1 row affected (0.01 sec)

9. View the current test data:

root@db 07:45:  [test1]> select * from test;
+----+
| ID |
+----+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
+----+
5 rows in set (0.00 sec)

10. Rotate the binlog again:

root@db 07:45:  [test1]> flush binary logs;
Query OK, 0 rows affected (0.04 sec)

root@db 07:49: [test1]> show master status;
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysql-binlog.000038 | 326 | | | 8182e5ae-af54-11e8-af0e-000d3a801ae2:1-13253,
c42e3372-ba21-11e8-99ed-000d3a800ed3:1-2,
d240752c-b809-11e8-8947-000d3a800ed3:1,
dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:1-197:1000036-1000040 |
+---------------------+----------+--------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

11. Drop the test table:

root@db 07:49:  [test1]> drop table test;
Query OK, 0 rows affected (0.04 sec)

12. Extract the CREATE TABLE statement for test from the backup file. The sed command below prints the blank-line-delimited paragraph of the dump that contains the CREATE TABLE statement:

sed -e'/./{H;$!d;}' -e 'x;/CREATE TABLE `test`/!d;q' /data/test.dump
DROP TABLE IF EXISTS `test`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `test` (
`ID` int(11) NOT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
/*!40101 SET character_set_client = @saved_cs_client */;

13. Extract the test table's data into its own file:

grep 'INSERT INTO `test`' /data/test.dump > insert.sql

cat insert.sql

INSERT INTO `test` VALUES (1),(2),(3);

14. Using the log file name and position recorded in test.dump, extract the inserts, updates, and deletes against the test table that were not covered by the backup. Start from mysql-binlog.000036 with --start-position=80147:

mysqlbinlog  -v --base64-output=decode-rows --set-charset=UTF-8 --database=test1 --start-position=80147  mysql-binlog.000036  > restore.sql

15. Find the position at which the test table was dropped; the file is mysql-binlog.000038 and the position is 507:

mysqlbinlog -v --base64-output=DECODE-ROWS --set-charset=UTF-8  /data/mysql/log/mysql-binlog.000038 |grep DROP  -A15 -B15
# at 326
#180925 7:49:39 server id 3306101 end_log_pos 387 GTID last_committed=0 sequence_number=1 rbr_only=no
SET @@SESSION.GTID_NEXT= 'dd412cc2-ba1f-11e8-9ba2-000d3a801ae2:198'/*!*/;
# at 387
#180925 7:49:39 server id 3306101 end_log_pos 507 Query thread_id=267 exec_time=0 error_code=0
use `test1`/*!*/;
SET TIMESTAMP=1537861779/*!*/;
SET @@session.pseudo_thread_id=267/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=0/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=29301/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=45/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
DROP TABLE `test` /* generated by server */
/*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

16. Between the binlog where the backup started and the binlog containing the DROP there is one more file, mysql-binlog.000037; export everything recorded in it:

mysqlbinlog -v --base64-output=DECODE-ROWS --set-charset=UTF-8 mysql-binlog.000037 >> restore.sql

17. From the binlog containing the DROP, export everything before the DROP:

mysqlbinlog -v --base64-output=DECODE-ROWS --set-charset=UTF-8 --stop-position=507 mysql-binlog.000038 >> restore.sql

18. Filter the exported file for statements that touch the test table:

more restore.sql |grep  --ignore-case -E 'insert|update|delete' -A3|grep '`test1`.`test`' -A2
### INSERT INTO `test1`.`test`
### SET
### @1=4
--
### INSERT INTO `test1`.`test`
### SET
### @1=5

Edit the filtered content: @1 stands for the first column, so replace it with the real column name. Sublime Text or vim works well for bulk edits, or see the sketch after the listing below:
cat test_insert.sql

INSERT INTO `test1`.`test` SET ID=4;
INSERT INTO `test1`.`test` SET ID=5;
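The hand edit can also be scripted. A minimal sketch, assuming the step-18 output was saved to a hypothetical file test_rows.txt and the single ID column from this walkthrough:

awk '/^### INSERT INTO `test1`.`test`/ { printf "INSERT INTO `test1`.`test` SET " }
     /^### @1=/ { sub(/^### @1=/, ""); printf "ID=%s;\n", $0 }' test_rows.txt > test_insert.sql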

19. Run the CREATE TABLE statement from step 12, then apply insert.sql and test_insert.sql; with that, the dropped test table is restored.
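A minimal sketch of this final step, assuming the step-12 output was saved to a hypothetical create_table.sql:

mysql -uroot -p12345678 test1 < create_table.sql # table structure from the dump
mysql -uroot -p12345678 test1 < insert.sql # rows captured by the backup (1,2,3)
mysql -uroot -p12345678 test1 < test_insert.sql # rows recovered from the binlog (4,5)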


[Cassandra] Installing and deploying Cassandra, and day-to-day commands


1. Add the yum repository:

vim /etc/yum.repos.d/cassandra.repo

[cassandra]
name=Apache Cassandra
baseurl=https://www.apache.org/dist/cassandra/redhat/311x/
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.apache.org/dist/cassandra/KEYS

2. Install Cassandra:

sudo yum install cassandra -y

3. Edit the configuration file /etc/cassandra/conf/cassandra.yaml:

cluster_name: 'My Cluster'
hints_directory: /data/cassandra/hints
data_file_directories:
- /data/cassandra/dbdata
commitlog_directory: /data/cassandra/commitlog
saved_caches_directory: /data/cassandra/caches

- seeds: "10.10.8.3,10.10.8.4,10.10.8.5"
listen_address: dax-mysql-mha
rpc_address: dax-mysql-mha

Basic parameters to change on each node:

cluster_name: change the cluster name
change the hints, data, and log directories
seeds: add the node IPs
listen_address: change to the local IP address
rpc_address: change to the local IP address

Explanation of selected configuration parameters:

cluster_name: 'Test Cluster'
storage_port: 7000
listen_address: dax-mysql-mha
start_native_transport: true # enable the native transport protocol
native_transport_port: 9042 # port that clients connect to
data_file_directories:
- /data/cassandra/dbdata # data location; with multiple disks you can list several directories
commitlog_directory: /data/cassandra/commitlog # commitlog path; keep it on a different disk from the data directory for better performance
saved_caches_directory: /data/cassandra/caches # saved caches directory
commitlog_sync: batch # batch mode: writes are acknowledged only after the commitlog has been fsynced in a batch
commitlog_sync_batch_window_in_ms: 2 # in batch mode, the window over which writes are batched before an fsync
#commitlog_sync: periodic # periodic mode: the commitlog is fsynced at a fixed interval
#commitlog_sync_period_in_ms: 10000 # in periodic mode, the interval between commitlog fsyncs
rpc_address: dax-mysql-mha

4. Create the directories referenced in the configuration file:

mkdir -p /data/cassandra/hints
mkdir -p /data/cassandra/dbdata
mkdir -p /data/cassandra/commitlog
mkdir -p /data/cassandra/caches
chmod 777 -R /data/cassandra

5. Start the database and enable it at boot:

service cassandra start
chkconfig cassandra on

Check the running status:

nodetool status

Open the CQL console; 9042 is the default port and can be changed in the configuration file:

cqlsh ip 9042

6. If the database was first started without setting cluster_name and you want to change it afterwards, use the following procedure:

In the console:
UPDATE system.local SET cluster_name = 'daxcluster' where key='local';
bash $ ./nodetool flush
Edit the cluster_name parameter in cassandra.yaml:
cluster_name: 'daxcluster'
Restart Cassandra:
/etc/init.d/cassandra restart

7. Set a username and password for the database

1) First edit cassandra.yaml, changing the default authenticator: AllowAllAuthenticator, which lets anyone log in, to password authentication:

authenticator: PasswordAuthenticator

2) Log in to Cassandra and create a user. Log in with the default account:

cqlsh -ucassandra -pcassandra ip 9042

Create a user:

CREATE USER test WITH PASSWORD 'test' SUPERUSER;

3) Log in as the new user:

cqlsh -utest -ptest ip 9042

Drop the default account:

DROP USER cassandra;

4) Access Cassandra from Java with the username and password:

Cluster cluster = Cluster.builder()
.addContactPoint("192.168.22.161")
.withCredentials("myusername", "mypassword")
.build();

8. Account permission commands:

Grant:
GRANT permission_name PERMISSION ON resource TO user_name;
GRANT ALL PERMISSIONS ON resource TO user_name;
Revoke:
REVOKE permission_name PERMISSION ON resource FROM user_name;
REVOKE ALL PERMISSIONS ON resource FROM user_name;
List:
LIST permission_name PERMISSION ON resource OF user_name NORECURSIVE;
LIST ALL PERMISSIONS ON resource OF user_name NORECURSIVE;
where
permission_name is one of ALL/ALTER/AUTHORIZE/CREATE/DROP/MODIFY/SELECT
resource is one of ALL KEYSPACES/KEYSPACE keyspace_name/TABLE keyspace_name.table_name
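A concrete example, assuming a hypothetical keyspace demo and the test user created earlier:

GRANT SELECT PERMISSION ON KEYSPACE demo TO test;
LIST ALL PERMISSIONS ON KEYSPACE demo OF test NORECURSIVE;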

9. Other useful commands:

desc KEYSPACES
List all keyspaces
list users;
List all users
show version;
Show the current cqlsh, Cassandra, CQL spec, and native protocol versions
show host;
Show the IP address and port of the current cluster node
SHOW SESSION <session id>
Trace a session
SOURCE '/home/thobbs/commands.cql'
Read a file and execute the CQL statements in it
CAPTURE '<file>';
Redirect the session's query output to a file
CAPTURE OFF;
Stop redirecting query output to the file so it prints to the screen again
CAPTURE;
Show which file, if any, output is currently being captured to

10. If a node was removed with nodetool decommission and later needs to rejoin the cluster, delete the data under its data directories and restart the Cassandra service, as sketched below.
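A minimal sketch of the rejoin, assuming the directory layout configured above:

service cassandra stop
rm -rf /data/cassandra/dbdata/* /data/cassandra/commitlog/* /data/cassandra/caches/* /data/cassandra/hints/*
service cassandra start
nodetool status # the node should come back as UN (Up/Normal)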


[Mysql] Problems encountered recovering a downed MGR secondary node


1. A node in the MGR cluster went down. The logs show that one server (node 3) was cut off from the other two nodes (1 and 2) by a network problem and was expelled from the group:

 [Warning] Plugin group_replication reported: 'Member with address wallet-mysql-2:3306 has become unreachable.'
[Warning] Plugin group_replication reported: 'Member with address wallet-mysql-1:3306 has become unreachable.'
[ERROR] Plugin group_replication reported: 'Member was expelled from the group due to network failures, changing member status to ERROR.'

2. Log in to node 3 and try to restart group replication:

mysql>stop group_replication;
mysql>start group_replication;

3. Check the group member status:

SELECT * FROM performance_schema.replication_group_members;

+---------------------------+--------------------------------------+----------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+----------------+-------------+--------------+
| group_replication_applier | 6a41c0b6-8e3d-11e8-b076-000d3aa05296 | wallet-mysql-2 | 3306 | ONLINE |
| group_replication_applier | 71540a17-8e3d-11e8-8cee-000d3aa1d767 | wallet-mysql-1 | 3306 | ONLINE |
| group_replication_applier | 7dd813bf-8e3d-11e8-8d92-000d3aa1c31e | wallet-mysql-3 | 3306 | RECOVERING |
+---------------------------+--------------------------------------+----------------+-------------+--------------+

4. Node 3 is in RECOVERING, syncing data from the primary, but during recovery the following errors appeared in error.log:

[ERROR] Error reading packet from server for channel 'group_replication_recovery': The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires. (server_errno=1236)
[ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Error_code: 1236

The error means node 3 cannot catch up from the binary log: the donor has already purged binlogs containing GTIDs that node 3 requires.

5. Take a full dump from the other secondary (node 2), apply it on node 3, and restart MGR.

5.1. On node 2:

shell>/data/mysql/bin/mysqldump --all-databases --set-gtid-purged=ON --single-transaction -uroot -P3306 -p > /tmp/alldb.sql

Copy alldb.sql to /tmp on node 3.
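For example (a sketch; wallet-mysql-3 is node 3's hostname from the member list above):

scp /tmp/alldb.sql wallet-mysql-3:/tmp/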

5.2. On node 3:

mysql>stop group_replication;
mysql>reset master;
mysql>set global read_only=0;
mysql>source /tmp/alldb.sql

Restart the MySQL server:

/etc/init.d/mysql restart

Rejoin node 3 to the MGR group:

mysql>start group_replication;

Check the member status again; the group has recovered:

mysql>SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+----------------+-------------+--------------+
| group_replication_applier | 6a41c0b6-8e3d-11e8-b076-000d3aa05296 | wallet-mysql-2 | 3306 | ONLINE |
| group_replication_applier | 71540a17-8e3d-11e8-8cee-000d3aa1d767 | wallet-mysql-1 | 3306 | ONLINE |
| group_replication_applier | 7dd813bf-8e3d-11e8-8d92-000d3aa1c31e | wallet-mysql-3 | 3306 | ONLINE |
+---------------------------+--------------------------------------+----------------+-------------+--------------+
3 rows in set (0.00 sec)


[Mysql] Multi-threaded slave statistics for channel


0. The replica's error log shows the following message:

[Note] Multi-threaded slave statistics for channel 'group_replication_applier': seconds elapsed = 267; events assigned = 10241; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 981623300 waited (count) when Workers occupied = 1238 waited when Workers occupied = 6762438500

1. This message appears in the MySQL error log when the multi-threaded slave (MTS) is enabled, i.e. slave_parallel_workers is greater than 0 (default 0, maximum 1024), and log_warnings is greater than 1 (log_warnings is removed in v8.0.3 and replaced by log_error_verbosity). The fields mean the following:

seconds elapsed: the time between the previous statistics line and this one.
events assigned: the total number of events assigned to workers so far (a cumulative count).
worker queues filled over overrun level: MTS tries to balance load across the parallel workers (their number is set by slave_parallel_workers). This counter shows how close worker queues have come to saturation; it grows as a worker queue approaches its limit, and event assignment is delayed to avoid hitting that limit.
Waited due to a Worker queue full: grows when the coordinator thread has to wait because a worker queue is completely full.
Waited due to the total size: the number of times the coordinator slept because the worker queues holding unapplied events reached the memory limit. If this value keeps growing, increase slave_pending_jobs_size_max to reduce coordinator waits.
slave_pending_jobs_size_max: the maximum amount of memory, in bytes, available to worker queues holding events not yet applied. Changing it has no effect unless MTS is enabled. (Default 16M before v8.0.11 and 128M from v8.0.12; minimum 1024, maximum 16 EiB.)
Waited at clock conflicts: where transactions depend on one another, the logical time spent waiting on conflict detection and resolution.
Waited (count) when Workers occupied: the number of times the coordinator waited because enough workers were occupied; what counts as "enough" depends on the scheduler type (per-database or commit clock).
Waited when Workers occupied: how long the coordinator waited for any worker to become available; applies only to the commit-clock scheduler.
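A quick way to inspect the knobs mentioned above; the 32M value is only an example:

mysql> show variables like 'slave_parallel_workers';
mysql> show variables like 'slave_pending_jobs_size_max';
mysql> set global slave_pending_jobs_size_max=33554432; -- e.g. 32M, if "Waited due to the total size" keeps growing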

Reference: the MySQL documentation.


[Mysql] Splitting the MySQL slow log


0. The slow log on a test database turned out to be 11 GB, far too large to analyze with mysqldumpslow, so the log has to be split first:

The main split options:
-b size of each output file, in bytes
-C put at most this many bytes of complete lines in each output file (lines are not split across files)
-d use numeric suffixes; combine with -a length to set the suffix length
-l number of lines per output file

1. Use split to cut the log by size, 1 GB per file, with the prefix test.log; without a suffix option the default alphabetic suffixes aa, ab, ac, ... are used:

split slow.log -b 1G test.log

ls -ltrh test.log*
-rw-r--r-- 1 root root 1.0G Sep 12 03:08 test.logaa
-rw-r--r-- 1 root root 1.0G Sep 12 03:09 test.logab
-rw-r--r-- 1 root root 1.0G Sep 12 03:10 test.logac
-rw-r--r-- 1 root root 1.0G Sep 12 03:11 test.logad
-rw-r--r-- 1 root root 1.0G Sep 12 03:12 test.logae
-rw-r--r-- 1 root root 1.0G Sep 12 03:13 test.logaf
-rw-r--r-- 1 root root 1.0G Sep 12 03:14 test.logag
-rw-r--r-- 1 root root 1.0G Sep 12 03:15 test.logah
-rw-r--r-- 1 root root 1.0G Sep 12 03:16 test.logai
-rw-r--r-- 1 root root 1.0G Sep 12 03:17 test.logaj
-rw-r--r-- 1 root root 270M Sep 12 03:17 test.logak

2. To control the suffix, use -d with -a n, where n is the suffix length; for example, splitting the test.logaa file:

split test.logaa -b 100M -d -a 2 logaa

 ll logaa*
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa00
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa01
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa02
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa03
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa04
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa05
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa06
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa07
-rw-r--r-- 1 root root 100M Sep 12 09:03 logaa08
-rw-r--r-- 1 root root 100M Sep 12 09:04 logaa09
-rw-r--r-- 1 root root 24M Sep 12 09:04 logaa10

split test.logab -b 100M -d -a 4 logab

ls -ltrh logab*
-rw-r--r-- 1 root root 100M Sep 12 09:05 logab0000
-rw-r--r-- 1 root root 100M Sep 12 09:05 logab0001
-rw-r--r-- 1 root root 100M Sep 12 09:05 logab0002
-rw-r--r-- 1 root root 100M Sep 12 09:05 logab0003
-rw-r--r-- 1 root root 100M Sep 12 09:05 logab0004
-rw-r--r-- 1 root root 100M Sep 12 09:05 logab0005
-rw-r--r-- 1 root root 100M Sep 12 09:05 logab0006
-rw-r--r-- 1 root root 100M Sep 12 09:05 logab0007
-rw-r--r-- 1 root root 100M Sep 12 09:06 logab0008
-rw-r--r-- 1 root root 100M Sep 12 09:06 logab0009
-rw-r--r-- 1 root root 24M Sep 12 09:06 logab0010

4. Split by line count. First check how many lines the file to split has:

cat logab0001 |wc -l
505

Split into files of 60 lines each:
split logab0001 -l 60 logab0001

ll logab0001*
-rw-r--r-- 1 root root 104857600 Sep 12 09:05 logab0001
-rw-r--r-- 1 root root 11922880 Sep 12 09:10 logab0001aa
-rw-r--r-- 1 root root 12506422 Sep 12 09:10 logab0001ab
-rw-r--r-- 1 root root 12506026 Sep 12 09:10 logab0001ac
-rw-r--r-- 1 root root 12506635 Sep 12 09:10 logab0001ad
-rw-r--r-- 1 root root 12506065 Sep 12 09:10 logab0001ae
-rw-r--r-- 1 root root 12506313 Sep 12 09:10 logab0001af
-rw-r--r-- 1 root root 12506206 Sep 12 09:10 logab0001ag
-rw-r--r-- 1 root root 12506325 Sep 12 09:10 logab0001ah
-rw-r--r-- 1 root root 5390728 Sep 12 09:10 logab0001ai

Check the line count of each file:

for i in `ls logab0001*`;do echo "$i line count:" `cat $i|wc -l` ;done
logab0001 line count: 505
logab0001aa line count: 60
logab0001ab line count: 60
logab0001ac line count: 60
logab0001ad line count: 60
logab0001ae line count: 60
logab0001af line count: 60
logab0001ag line count: 60
logab0001ah line count: 60
logab0001ai line count: 25

5. Split by bytes per file without breaking lines

Limit each output file to at most 2,000,000 bytes of complete lines:
split logab0001aa -C 2000000 testlog

ls -lthr testlog*
-rw-r--r-- 1 root root 1.5M Sep 12 10:19 testlogaa
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogab
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogac
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogad
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogae
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogaf
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogag
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogah
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogai
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogaj
-rw-r--r-- 1 root root 1018K Sep 12 10:19 testlogak

Check the byte count of one line:

[root@dax-mysql-slave log]# sed '1p' testlogaa|wc -c
1960374

The byte count is under 2,000,000.

rm -rf testlog*

Delete the generated files, then regenerate with at most 1,500,000 bytes per file:
split logab0001aa -C 1500000 testlog

ls -lthr testlog*
-rw-r--r-- 1 root root 449K Sep 12 10:22 testlogaa
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogab
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogac
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogad
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogae
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogaf
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogag
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogah
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogai
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogaj
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogak
-rw-r--r-- 1 root root 1018K Sep 12 10:22 testlogal

Check the byte count of one line:

[root@dax-mysql-slave log]# sed '1p' testlogaa|wc -c
918219

The byte count is under 1,500,000.

6. The 11 GB slow log accumulated because it was never rotated, so here is a script that rotates the slow log on a schedule:

#!/bin/bash
# path to the MySQL slow log
slowlogpath=/data/mysql/log
# slow log file name
slowlogname=slow.log
# current date
stamp=`date +'%Y%m%d'`
# the date seven days ago
oldstamp=`date +"%Y%m%d" -d "-7 day"`
# MySQL user name
username=root
# MySQL password
password=12345678
# MySQL installation path
dbpath=/data/mysql/bin
# move the current slow log aside
mv ${slowlogpath}/${slowlogname} ${slowlogpath}/${stamp}_${slowlogname}
# regenerate the slow log file
${dbpath}/mysqladmin -u ${username} -p${password} flush-logs slow
# delete the log file from seven days ago, if present
if [ -e ${slowlogpath}/${oldstamp}_${slowlogname} ]; then
    sudo rm -rf ${slowlogpath}/${oldstamp}_${slowlogname}
fi

Add the script to cron:
crontab -l

0 1 * * * sh /data/script/split_mysqlslowlog.sh


[Mysql] mysqldumpslow options explained


1. The MySQL slow query log records statements that took a long time to execute.

mysqldumpslow parses the slow log file and prints a summary of its contents.
mysqldumpslow groups queries that are identical except for the particular values of number and string literals. When printing the summary it abstracts these values to N (numbers) and S (strings); the -a and -n options modify this behavior.

0. -a: do not abstract number values to N and string values to S in the reported statements

Without -a, numbers and strings in the results are replaced by N and S:
/data/mysql/bin/mysqldumpslow -t 2 /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 7 Time=132.03s (924s) Lock=94.51s (661s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id in (select id from (select id from test ORDER BY id ASC LIMIT N,N) as tmp)
Count: 7 Time=81.63s (571s) Lock=75.80s (530s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id >=N and id <= N

With -a, numbers and strings keep their actual values and are not abstracted:
/data/mysql/bin/mysqldumpslow -a -t 2 /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 1 Time=244.60s (244s) Lock=0.00s (0s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'CANCELLED' where state='CANCEL' and id in (select id from (select id from test ORDER BY id ASC LIMIT 1,1000000) as tmp)
Count: 1 Time=230.44s (230s) Lock=0.00s (0s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'CANCELLED' where state='CANCEL' and id in (select id from (select id from test ORDER BY id ASC LIMIT 4000000,1000000) as tmp)

2. -n: abstract numbers within names only when they have at least this many digits

3. --debug: write debugging information

4. -g: only consider statements that match the given pattern (case-insensitive, grep-style); use it to filter for the keywords you need:

/data/mysql/bin/mysqldumpslow -t 5 -l -g 'test' /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 7 Time=226.54s (1585s) Lock=94.51s (661s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id in (select id from (select id from test ORDER BY id ASC LIMIT N,N) as tmp)
Count: 7 Time=157.43s (1101s) Lock=75.80s (530s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id >=N and id <= N
Count: 35 Time=59.29s (2075s) Lock=0.00s (0s) Rows=9207179.6 (322251287), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `test`
Count: 1 Time=44.68s (44s) Lock=0.00s (0s) Rows=2.0 (2), root[root]@localhost
select distinct state from test order by id
Count: 2 Time=27.04s (54s) Lock=0.00s (0s) Rows=2.0 (4), root[root]@localhost
select distinct state from test where id in (select id from (select id from test ORDER BY id ASC LIMIT N,N) as tmp)

5. --help: display help and exit

6. -h: host name of the server in the log file name

With -h hostname you can use wildcards to select slow log files.
List the slow log files in the current directory:

ll *-slow.log
-rw-r--r-- 1 root root 583 Sep 12 06:21 10.0.7.4-slow.log
-rw-r----- 1 mysql mysql 1889 Sep 12 05:51 test-slow.log

If you pass *-slow.log as the file pattern, every file whose name ends in -slow.log is read:
mysqldumpslow -a -t 20 -h dax-mysql-slave /data/mysql/log/*-slow.log

Reading mysql slow query log from /data/mysql/log/10.0.7.4-slow.log /data/mysql/log/test-slow.log
Count: 1 Time=137.68s (137s) Lock=0.00s (0s) Rows=5.0 (5), root[root]@localhost
SELECT * FROM test WHERE trade_asset = 'BCH' AND price_asset = 'ETH' AND broker_id = '1022668762972741633' AND state in ('WAITING', 'PROCESSING') order by id limit 5
Count: 1 Time=120.01s (120s) Lock=0.00s (0s) Rows=15.0 (15), repl[repl]@[10.0.7.51]
SELECT * FROM test WHERE trade_asset = 'BCH' AND price_asset = 'ETH' AND broker_id = '1022668762972741633' AND state in ('WAITING', 'PROCESSING') order by id limit 20
Count: 2 Time=90.84s (181s) Lock=0.00s (0s) Rows=10.0 (20), 2users@2hosts
SELECT * FROM test WHERE trade_asset = 'BCH' AND price_asset = 'ETH' AND broker_id = '1022668762972741633' AND state in ('WAITING', 'PROCESSING') order by id limit 10
Count: 1 Time=82.82s (82s) Lock=0.00s (0s) Rows=12.0 (12), repl[repl]@[10.0.7.51]
SELECT * FROM test WHERE trade_asset = 'BCH' AND price_asset = 'ETH' AND broker_id = '1022668762972741633' AND state in ('WAITING', 'PROCESSING') order by id limit 12
Died at /data/mysql/bin/mysqldumpslow line 161, <> chunk 5.

You can also restrict the match to a narrower file pattern:
mysqldumpslow -a -t 20 -h dax-mysql-slave /data/mysql/log/10.0.7.*-slow.log

Reading mysql slow query log from /data/mysql/log/10.0.7.4-slow.log
Count: 1 Time=82.82s (82s) Lock=0.00s (0s) Rows=12.0 (12), repl[repl]@[10.0.7.51]
SELECT * FROM test WHERE trade_asset = 'BCH' AND price_asset = 'ETH' AND broker_id = '1022668762972741633' AND state in ('WAITING', 'PROCESSING') order by id limit 12
Died at /data/mysql/bin/mysqldumpslow line 161, <> chunk 1.

If -h is given no value of its own, it matches the log named test-slow.log:
[root@dax-mysql-slave log]# mysqldumpslow -a -t 20 -h /data/mysql/log/*-slow.log

Reading mysql slow query log from /data/mysql/log/test-slow.log
Count: 1 Time=137.68s (137s) Lock=0.00s (0s) Rows=5.0 (5), root[root]@localhost
SELECT * FROM test WHERE trade_asset = 'BCH' AND price_asset = 'ETH' AND broker_id = '1022668762972741633' AND state in ('WAITING', 'PROCESSING') order by id limit 5
Count: 1 Time=120.01s (120s) Lock=0.00s (0s) Rows=15.0 (15), repl[repl]@[10.0.7.51]
SELECT * FROM test WHERE trade_asset = 'BCH' AND price_asset = 'ETH' AND broker_id = '1022668762972741633' AND state in ('WAITING', 'PROCESSING') order by id limit 20
Count: 2 Time=90.84s (181s) Lock=0.00s (0s) Rows=10.0 (20), 2users@2hosts
SELECT * FROM test WHERE trade_asset = 'BCH' AND price_asset = 'ETH' AND broker_id = '1022668762972741633' AND state in ('WAITING', 'PROCESSING') order by id limit 10
Died at /data/mysql/bin/mysqldumpslow line 161, <> chunk 4.

With no value after -h and a wildcard in the file name, it errors out:
[root@dax-mysql-slave log]# mysqldumpslow -a -t 20 -h /data/mysql/log/10.0.7.*-slow.log

Reading mysql slow query log from /data/mysql/data//data/mysql/log/10.0.7.4-slow.log-slow.log
Can't open /data/mysql/data//data/mysql/log/10.0.7.4-slow.log-slow.log: No such file or directory at /data/mysql/bin/mysqldumpslow line 91.
Died at /data/mysql/bin/mysqldumpslow line 161.

7. -i: name of the server instance (the official docs say this is the instance name as used by the mysql.server startup script; I could not reproduce it in testing)

8. -l: do not subtract lock time from total time

9. -r: reverse the sort order

10. -s: how to sort the output. t, at: by query time or average query time; l, al: by lock time or average lock time; r, ar: by rows sent or average rows sent; c: by count. The default is average query time (equivalent to -s at).

Sort by total query time:
/data/mysql/bin/mysqldumpslow -s t -t 5 -l /data/mysql/log/slow.log

Count: 50831  Time=1.49s (75892s)  Lock=0.00s (4s)  Rows=1.0 (50831), cdax[cdax]@4hosts
SELECT count(N) FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S') AND create_time >= 'S' AND create_time <= 'S')
Count: 12053 Time=1.14s (13735s) Lock=0.00s (0s) Rows=1.0 (12053), 2users@5hosts
SELECT count(N) FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S'))
Count: 6674 Time=1.52s (10151s) Lock=0.00s (0s) Rows=9.9 (66259), 2users@5hosts
SELECT id, order_no, broker_id, broker_uid, plat_id, trade_asset, price_asset, trade_type, order_type, price, number, client_order_no, traded_number, over_number, traded_money, fee_asset, fee, broker_fee, cloud_fee, create_time, update_time, state, send_state, error_code, error_msg FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S')) order by create_time desc LIMIT N
Count: 84 Time=40.42s (3394s) Lock=0.00s (0s) Rows=1.0 (84), 42users@42hosts
SELECT member_id, member_host, member_port, member_state, @@group_replication_single_primary_mode FROM performance_schema.replication_group_members WHERE channel_name = 'S'
Count: 16300 Time=0.20s (3291s) Lock=0.00s (1s) Rows=1.0 (16300), cdax[cdax]@[10.1.110.9]
SELECT count(N) FROM test WHERE (state = 'S' AND send_state = 'S' AND update_time < 'S')

Sort by average query time:
/data/mysql/bin/mysqldumpslow -s at -t 5 -l /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 7 Time=226.54s (1585s) Lock=94.51s (661s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id in (select id from (select id from test ORDER BY id ASC LIMIT N,N) as tmp)
Count: 7 Time=157.43s (1101s) Lock=75.80s (530s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id >=N and id <= N
Count: 35 Time=79.40s (2779s) Lock=0.00s (0s) Rows=18592104.7 (650723665), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `finance_detail`
Count: 35 Time=59.29s (2075s) Lock=0.00s (0s) Rows=9207179.6 (322251287), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `test`
Count: 1 Time=44.68s (44s) Lock=0.00s (0s) Rows=2.0 (2), root[root]@localhost
select distinct state from test order by id

Sort by lock time:
[root@mw-mysql-2 ~]# /data/mysql/bin/mysqldumpslow -s l -t 5 -l /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 7 Time=226.54s (1585s) Lock=94.51s (661s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id in (select id from (select id from test ORDER BY id ASC LIMIT N,N) as tmp)
Count: 7 Time=157.43s (1101s) Lock=75.80s (530s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id >=N and id <= N
Count: 50831 Time=1.49s (75892s) Lock=0.00s (4s) Rows=1.0 (50831), cdax[cdax]@4hosts
SELECT count(N) FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S') AND create_time >= 'S' AND create_time <= 'S')
Count: 16300 Time=0.20s (3291s) Lock=0.00s (1s) Rows=1.0 (16300), cdax[cdax]@[10.1.110.9]
SELECT count(N) FROM test WHERE (state = 'S' AND send_state = 'S' AND update_time < 'S')
Count: 12053 Time=1.14s (13735s) Lock=0.00s (0s) Rows=1.0 (12053), 2users@5hosts
SELECT count(N) FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S'))

Sort by average lock time:
[root@mw-mysql-2 ~]# /data/mysql/bin/mysqldumpslow -s al -t 5 -l /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 7 Time=226.54s (1585s) Lock=94.51s (661s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id in (select id from (select id from test ORDER BY id ASC LIMIT N,N) as tmp)
Count: 7 Time=157.43s (1101s) Lock=75.80s (530s) Rows=0.0 (0), root[root]@localhost
UPDATE test SET state = 'S' where state='S' and id >=N and id <= N
Count: 1 Time=44.68s (44s) Lock=0.00s (0s) Rows=2.0 (2), root[root]@localhost
select distinct state from test order by id
Count: 1 Time=0.98s (0s) Lock=0.00s (0s) Rows=1.0 (1), root[root]@localhost
select count(*) from test where broker_id = 'S' AND broker_uid = N
Count: 2 Time=27.04s (54s) Lock=0.00s (0s) Rows=2.0 (4), root[root]@localhost
select distinct state from test where id in (select id from (select id from test ORDER BY id ASC LIMIT N,N) as tmp)

Sort by rows sent:
/data/mysql/bin/mysqldumpslow -s r -t 5 -l /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 35 Time=79.40s (2779s) Lock=0.00s (0s) Rows=18592104.7 (650723665), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `finance_detail`
Count: 35 Time=59.29s (2075s) Lock=0.00s (0s) Rows=9207179.6 (322251287), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `test`
Count: 4 Time=7.10s (28s) Lock=0.00s (0s) Rows=1121601.2 (4486405), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `mq_produce_log_1`
Count: 4 Time=4.29s (17s) Lock=0.00s (0s) Rows=1121591.2 (4486365), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `mq_produce_log_2`
Count: 4 Time=4.72s (18s) Lock=0.00s (0s) Rows=1121587.5 (4486350), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `mq_produce_log_3`

Sort by average rows sent:
/data/mysql/bin/mysqldumpslow -s ar -t 5 -l /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 35 Time=79.40s (2779s) Lock=0.00s (0s) Rows=18592104.7 (650723665), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `finance_detail`
Count: 35 Time=59.29s (2075s) Lock=0.00s (0s) Rows=9207179.6 (322251287), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `test`
Count: 4 Time=7.10s (28s) Lock=0.00s (0s) Rows=1121601.2 (4486405), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `mq_produce_log_1`
Count: 4 Time=4.29s (17s) Lock=0.00s (0s) Rows=1121591.2 (4486365), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `mq_produce_log_2`
Count: 4 Time=4.72s (18s) Lock=0.00s (0s) Rows=1121587.5 (4486350), root[root]@localhost
SELECT /*!N SQL_NO_CACHE */ * FROM `mq_produce_log_3`

Sort by query count:
/data/mysql/bin/mysqldumpslow -s c -t 5 -l /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 50831 Time=1.49s (75892s) Lock=0.00s (4s) Rows=1.0 (50831), cdax[cdax]@4hosts
SELECT count(N) FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S') AND create_time >= 'S' AND create_time <= 'S')
Count: 16300 Time=0.20s (3291s) Lock=0.00s (1s) Rows=1.0 (16300), cdax[cdax]@[10.1.110.9]
SELECT count(N) FROM test WHERE (state = 'S' AND send_state = 'S' AND update_time < 'S')
Count: 12053 Time=1.14s (13735s) Lock=0.00s (0s) Rows=1.0 (12053), 2users@5hosts
SELECT count(N) FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S'))
Count: 6674 Time=1.52s (10151s) Lock=0.00s (0s) Rows=9.9 (66259), 2users@5hosts
SELECT id, order_no, broker_id, broker_uid, plat_id, trade_asset, price_asset, trade_type, order_type, price, number, client_order_no, traded_number, over_number, traded_money, fee_asset, fee, broker_fee, cloud_fee, create_time, update_time, state, send_state, error_code, error_msg FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S')) order by create_time desc LIMIT N
Count: 619 Time=1.20s (741s) Lock=0.00s (0s) Rows=10.0 (6181), cdax[cdax]@[10.1.110.9]
SELECT id,order_no,broker_id,broker_uid,plat_id,trade_asset,price_asset,trade_type,order_type,price,number,client_order_no,traded_number,over_number,traded_money,fee_asset,fee,broker_fee,cloud_fee,create_time,update_time,state,send_state,error_code,error_msg FROM test WHERE ( state = 'S'
and send_state = 'S'
and update_time < 'S' ) LIMIT N

Add -r to reverse the sort order:
/data/mysql/bin/mysqldumpslow -s c -t 5 -l -r /data/mysql/log/slow.log

Reading mysql slow query log from /data/mysql/log/slow.log
Count: 619 Time=1.20s (741s) Lock=0.00s (0s) Rows=10.0 (6181), cdax[cdax]@[10.1.110.9]
SELECT id,order_no,broker_id,broker_uid,plat_id,trade_asset,price_asset,trade_type,order_type,price,number,client_order_no,traded_number,over_number,traded_money,fee_asset,fee,broker_fee,cloud_fee,create_time,update_time,state,send_state,error_code,error_msg FROM test WHERE ( state = 'S'
and send_state = 'S'
and update_time < 'S' ) LIMIT N
Count: 6674 Time=1.52s (10151s) Lock=0.00s (0s) Rows=9.9 (66259), 2users@5hosts
SELECT id, order_no, broker_id, broker_uid, plat_id, trade_asset, price_asset, trade_type, order_type, price, number, client_order_no, traded_number, over_number, traded_money, fee_asset, fee, broker_fee, cloud_fee, create_time, update_time, state, send_state, error_code, error_msg FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S')) order by create_time desc LIMIT N
Count: 12053 Time=1.14s (13735s) Lock=0.00s (0s) Rows=1.0 (12053), 2users@5hosts
SELECT count(N) FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S'))
Count: 16300 Time=0.20s (3291s) Lock=0.00s (1s) Rows=1.0 (16300), cdax[cdax]@[10.1.110.9]
SELECT count(N) FROM test WHERE (state = 'S' AND send_state = 'S' AND update_time < 'S')
Count: 50831 Time=1.49s (75892s) Lock=0.00s (4s) Rows=1.0 (50831), cdax[cdax]@4hosts
SELECT count(N) FROM test WHERE (trade_asset = 'S' AND price_asset = 'S' AND broker_id = 'S' AND broker_uid = N AND state IN ('S', 'S') AND create_time >= 'S' AND create_time <= 'S')

11. -t: display only the first N queries in the output

12. --verbose: verbose mode

13. If slow.log is deleted by accident, it can be regenerated with flush-logs (handy when rotating logs on a schedule):

mysqladmin -u root -ptest flush-logs slow


[Mysql] MySQL LIMIT explained and problems encountered using it


Background: a developer reported that a SQL query was very slow. Its execution plan showed it was not using the intended index but the primary key index instead. After removing the LIMIT clause, the intended index was used again.

Testing GROUP BY together with LIMIT showed the same thing: the combination changed the execution plan.

Testing LIMIT alone, with ORDER BY and GROUP BY removed, left the execution plan unaffected.

Conclusion: when ORDER BY or GROUP BY is combined with LIMIT, check whether the execution plan is affected. If you have to use the combination and worry about the plan changing, use FORCE INDEX(index_name), as illustrated below, or verify repeatedly that the plan stays stable before releasing to production.
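For illustration, a hypothetical way to pin the index (idx_state and the query shape are assumptions, not from the original case):

SELECT * FROM test FORCE INDEX (idx_state)
WHERE state = 'WAITING'
ORDER BY id LIMIT 10;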
Below is what the official documentation says about LIMIT:

0. If you only need a certain number of rows from the result set, use a LIMIT clause in the query instead of fetching the whole result and throwing away the extra rows.

1. If you select only a few rows with LIMIT, MySQL uses indexes in some cases where it would normally prefer a full table scan.

2. If you combine ORDER BY with LIMIT row_count, MySQL stops sorting as soon as it has found the first row_count rows of the sorted result, rather than sorting the entire result. If ordering is done by index this is very fast. If a filesort must be done, all rows that match the query without the LIMIT clause are selected, and most or all of them are sorted, before the first row_count rows are found; after those initial rows have been found, MySQL does not sort any remainder of the result set. If multiple rows have identical values in the ORDER BY columns, the server is free to return them in any order, and may do so differently depending on the overall execution plan; in other words, the sort order of those rows is nondeterministic with respect to the non-ordered columns.

Because LIMIT can change the execution plan, ORDER BY with LIMIT may return rows in a different order than ORDER BY without LIMIT.

3. The experiment from the official documentation:

mysql> SELECT * FROM ratings ORDER BY category;
+----+----------+--------+
| id | category | rating |
+----+----------+--------+
| 1 | 1 | 4.5 |
| 5 | 1 | 3.2 |
| 3 | 2 | 3.7 |
| 4 | 2 | 3.5 |
| 6 | 2 | 3.5 |
| 2 | 3 | 5.0 |
| 7 | 3 | 2.7 |
+----+----------+--------+

With LIMIT, the rows returned within each category value may come back in a different order:

mysql> SELECT * FROM ratings ORDER BY category LIMIT 5;
+----+----------+--------+
| id | category | rating |
+----+----------+--------+
| 1 | 1 | 4.5 |
| 5 | 1 | 3.2 |
| 4 | 2 | 3.5 |
| 3 | 2 | 3.7 |
| 6 | 2 | 3.5 |
+----+----------+--------+

If it matters that the rows come back in the same order with and without LIMIT, add more columns to the ORDER BY clause to make the order deterministic; for example, also order by id:

mysql> SELECT * FROM ratings ORDER BY category, id;
+----+----------+--------+
| id | category | rating |
+----+----------+--------+
| 1 | 1 | 4.5 |
| 5 | 1 | 3.2 |
| 3 | 2 | 3.7 |
| 4 | 2 | 3.5 |
| 6 | 2 | 3.5 |
| 2 | 3 | 5.0 |
| 7 | 3 | 2.7 |
+----+----------+--------+
mysql> SELECT * FROM ratings ORDER BY category, id LIMIT 5;
+----+----------+--------+
| id | category | rating |
+----+----------+--------+
| 1 | 1 | 4.5 |
| 5 | 1 | 3.2 |
| 3 | 2 | 3.7 |
| 4 | 2 | 3.5 |
| 6 | 2 | 3.5 |
+----+----------+--------+

4. Reproducing this by hand with test data of my own:

create table ratings (id int,category int,rating varchar(10));
insert into ratings values(5,1,'3.2'),(1,1,'4.5'),(4,2,'3.5'),(3,2,'3.7'),(6,2,'3.5'),(7,3,'2.7'),(2,3,'5.0');
root@db 08:47: [test]> SELECT * FROM ratings order by category;
+------+----------+--------+
| id | category | rating |
+------+----------+--------+
| 5 | 1 | 3.2 |
| 1 | 1 | 4.5 |
| 4 | 2 | 3.5 |
| 3 | 2 | 3.7 |
| 6 | 2 | 3.5 |
| 7 | 3 | 2.7 |
| 2 | 3 | 5.0 |
+------+----------+--------+
7 rows in set (0.00 sec)
root@db 08:47: [test]> SELECT * FROM ratings order by category limit 5;
+------+----------+--------+
| id | category | rating |
+------+----------+--------+
| 5 | 1 | 3.2 |
| 1 | 1 | 4.5 |
| 4 | 2 | 3.5 |
| 3 | 2 | 3.7 |
| 6 | 2 | 3.5 |
+------+----------+--------+
5 rows in set (0.00 sec)

The id column is not sorted at all.

root@db 08:47:  [test]> SELECT * FROM ratings order by category,id;
+------+----------+--------+
| id | category | rating |
+------+----------+--------+
| 1 | 1 | 4.5 |
| 5 | 1 | 3.2 |
| 3 | 2 | 3.7 |
| 4 | 2 | 3.5 |
| 6 | 2 | 3.5 |
| 2 | 3 | 5.0 |
| 7 | 3 | 2.7 |
+------+----------+--------+
7 rows in set (0.00 sec)

With id added to ORDER BY, both category and id come back sorted.

5. Next, add a primary key on the id column and retest:

root@db 09:10:  [test]> alter table ratings add primary key(id);
Query OK, 0 rows affected (0.11 sec)
Records: 0 Duplicates: 0 Warnings: 0
root@db 09:10: [test]> SELECT * FROM ratings order by category;
+----+----------+--------+
| id | category | rating |
+----+----------+--------+
| 1 | 1 | 4.5 |
| 5 | 1 | 3.2 |
| 3 | 2 | 3.7 |
| 4 | 2 | 3.5 |
| 6 | 2 | 3.5 |
| 2 | 3 | 5.0 |
| 7 | 3 | 2.7 |
+----+----------+--------+
7 rows in set (0.00 sec)
root@db 09:10: [test]> SELECT * FROM ratings order by category limit 5;
+----+----------+--------+
| id | category | rating |
+----+----------+--------+
| 1 | 1 | 4.5 |
| 5 | 1 | 3.2 |
| 3 | 2 | 3.7 |
| 4 | 2 | 3.5 |
| 6 | 2 | 3.5 |
+----+----------+--------+
5 rows in set (0.00 sec)

After adding the primary key, id and category both come back sorted.

6. If you combine LIMIT row_count with DISTINCT, MySQL stops as soon as it finds row_count unique rows.

7. In some cases GROUP BY is resolved by reading the index in order (or doing a sort on the index) and then calculating summaries until the index value changes. In this case LIMIT row_count does not calculate any unnecessary GROUP BY values.

8. As soon as MySQL has sent the required number of rows to the client, it aborts the query unless SQL_CALC_FOUND_ROWS is used; in that case the full row count can be retrieved afterwards with SELECT FOUND_ROWS().
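For example, using the ratings table from above (a quick sketch; note that SQL_CALC_FOUND_ROWS is deprecated as of MySQL 8.0.17):

SELECT SQL_CALC_FOUND_ROWS * FROM ratings ORDER BY category LIMIT 5;
SELECT FOUND_ROWS();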

9. LIMIT 0 quickly returns an empty set, which is useful for checking the validity of a query.

10. If the server uses temporary tables to resolve the query, it uses the LIMIT row_count clause to calculate how much space is required.

11. If ORDER BY does not use an index and a LIMIT clause is present, the optimizer may avoid using a merge file and instead sort the rows with an in-memory filesort.


[Mysql] mysql-Lock wait timeout exceeded; try restarting transaction


1. An UPDATE statement fails with the error:

ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

Check the current lock wait timeout settings:

show variables like '%lock_wait_timeout%';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 30 |
| lock_wait_timeout | 60 |
+--------------------------+-------+

The InnoDB transaction lock wait timeout is 30 s; the timeout for other (non-InnoDB) lock waits, such as metadata locks, is 60 s.

2. Find out what is blocking the transaction:

select * from information_schema.innodb_trx\G (the innodb_trx columns are explained in the innodb_trx post below)

The transaction on thread 1308 is blocked by the transaction on thread 1311, but the statement currently running on thread 1311 is not visible. A query like the sketch below maps the waiting transaction to its blocker.
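A sketch for MySQL 5.7's information_schema (these tables moved in 8.0):

SELECT r.trx_mysql_thread_id AS waiting_thread,
       b.trx_mysql_thread_id AS blocking_thread,
       r.trx_query AS waiting_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;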

3. Check the transaction lock information, which records the current locks; the innodb_locks columns are explained in the innodb_locks post below:

select * from information_schema.innodb_locks\G

4. Look at thread 1311's details: select * from information_schema.processlist where id=1311\G

Its Command is Sleep, which means the SQL statement has finished but the transaction's locks are still held and have not been released. After manually killing the thread, other operations run normally:

kill 1311


[Mysql] MySQL innodb_locks columns explained


1. The INFORMATION_SCHEMA INNODB_LOCKS table reports locks that InnoDB transactions have requested but not yet acquired, and locks held by one transaction that are blocking another. Query it with:

select * from information_schema.innodb_locks\G

2. The innodb_locks columns:

lock_id: a unique lock ID internal to InnoDB. Treat it as an opaque string; although lock_id currently contains the trx_id, its format may change at any time, so do not write applications that parse it.
lock_trx_id: the ID of the transaction holding the lock. To get details about the transaction, match it against the trx_id column of the innodb_trx table.
lock_mode: the lock request mode, one of S, X, IS, IX, GAP, AUTO_INC, and UNKNOWN. Mode identifiers can be combined to identify particular lock modes; see https://dev.mysql.com/doc/refman/8.0/en/innodb-locking.html for more information.
lock_type: the lock type: RECORD for a row lock, TABLE for a table lock.
lock_table: the name of the table that is locked or that contains the locked records.
lock_index: the name of the index if lock_type is RECORD; otherwise NULL.
lock_space: the tablespace ID of the locked record if lock_type is RECORD; otherwise NULL.
lock_page: the page number of the locked record if lock_type is RECORD; otherwise NULL.
lock_rec: the heap number of the locked record within the page if lock_type is RECORD; otherwise NULL.
lock_data: the data associated with the lock. If lock_type is RECORD, this is the primary key value(s) of the locked record, rendered as a valid string; if the table has no primary key, it is the unique InnoDB internal row ID. For a gap lock above the largest value in the index, it is reported as "supremum pseudo-record". If the page containing the locked record is not in the buffer pool, InnoDB does not fetch it from disk, to avoid unnecessary disk I/O, and lock_data is NULL.


[Mysql] MySQL innodb_trx columns explained


1. The innodb_trx table provides information about every transaction currently executing inside InnoDB (read-only non-locking transactions are not assigned IDs; see trx_id below), including when the transaction started, whether it is waiting for a lock, and the statement it is executing, if any. Query it with:

select * from information_schema.innodb_trx;

select * from information_schema.innodb_trx\G

2. The innodb_trx columns:

trx_id: the unique transaction ID. IDs are not created for transactions that are read-only and non-locking.
TRX_WEIGHT: the weight of the transaction, reflecting (not necessarily exactly) the number of rows changed and the number of rows locked by it. To resolve a deadlock, InnoDB selects the transaction with the smallest weight as the victim to roll back. Transactions that have changed non-transactional tables are weighted heavier than others, even if they changed and locked fewer rows.
TRX_STATE: the execution state: RUNNING, LOCK WAIT, ROLLING BACK, or COMMITTING.
TRX_STARTED: the transaction start time.
TRX_REQUESTED_LOCK_ID: if trx_state is LOCK WAIT, the ID of the lock the transaction is currently waiting for; otherwise NULL. To get details about the lock, match it against the lock_id column of the innodb_locks table.
TRX_WAIT_STARTED: if trx_state is LOCK WAIT, the time the transaction started waiting for the lock; otherwise NULL.
TRX_MYSQL_THREAD_ID: the MySQL thread ID. To get details about the thread, match it against the ID column of the INFORMATION_SCHEMA.PROCESSLIST table.
TRX_QUERY: the SQL statement the transaction is executing.
TRX_OPERATION_STATE: the transaction's current operation, or NULL if none.
TRX_TABLES_IN_USE: the number of InnoDB tables used while processing the transaction's current statement.
TRX_TABLES_LOCKED: the number of InnoDB tables on which the current statement holds row locks. (Because these are row locks, not table locks, the tables can still be read and written by multiple transactions.)
TRX_LOCK_STRUCTS: the number of lock structures the transaction has reserved.
TRX_LOCK_MEMORY_BYTES: the total size, in memory, of the transaction's lock structures.
TRX_ROWS_LOCKED: the approximate number of rows locked by the transaction. The value may include delete-marked rows that physically exist but are not visible to the transaction.
TRX_ROWS_MODIFIED: the number of rows modified or inserted by the transaction.
TRX_CONCURRENCY_TICKETS: how much work the current transaction can do before it is swapped out, as specified by the innodb_concurrency_tickets system variable.
TRX_ISOLATION_LEVEL: the transaction's isolation level.
TRX_UNIQUE_CHECKS: whether unique checks are enabled or disabled for the current transaction; they may be disabled during a bulk data import, for example.
TRX_FOREIGN_KEY_CHECKS: whether foreign key checks are enabled or disabled for the current transaction; they may be disabled during a bulk data import, for example.
TRX_LAST_FOREIGN_KEY_ERROR: the detail text of the most recent foreign key error, or NULL if none.
TRX_ADAPTIVE_HASH_LATCHED: whether the adaptive hash index is latched by the current transaction. When the adaptive hash index search system is partitioned, a single transaction does not latch the entire index; partitioning is controlled by innodb_adaptive_hash_index_parts, default 8.
TRX_ADAPTIVE_HASH_TIMEOUT: whether to relinquish the adaptive hash index search latch immediately, or keep it across calls from MySQL. With no contention the value stays 0 and the latch is kept until the statement finishes; during contention it counts down toward 0 and the latch is released after each row lookup. When the search system is partitioned (see innodb_adaptive_hash_index_parts), the value stays 0.
TRX_IS_READ_ONLY: 1 if the transaction is read-only.
TRX_AUTOCOMMIT_NON_LOCKING: 1 if the transaction is a single SELECT that uses no FOR UPDATE or shared-mode locks and runs with autocommit enabled, so the transaction contains only that one statement. When this column and TRX_IS_READ_ONLY are both 1, InnoDB optimizes the transaction to reduce the overhead associated with transactions that change table data.

3. Notes

Use this table to diagnose performance problems when the system is under heavy load; a query like the sketch below quickly surfaces long-running transactions.
You must have the PROCESS privilege to query the table.
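For example, a sketch that lists transactions running longer than 60 seconds (the threshold is arbitrary):

SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.innodb_trx
WHERE trx_started < NOW() - INTERVAL 60 SECOND;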

