[Mysql] mysqldump logical backup and restore


0. MySQL version used in this experiment

5.7.22-log

1. mysqldump backup operations explained:

-d, --no-data: no row information; dump only the table structure
-t, --no-create-info: dump only the data (no table structure)
1) Export all databases
mysqldump -u root -p --all-databases > all_db.dump
2) Export a single database
mysqldump -u root -p zabbixDB > zabbix.dump
3) Export multiple databases
mysqldump -u root -p --databases zabbixDB mysql > zabbix_mysql.dump
4) Export a single table
mysqldump -u root -p zabbixDB task > zabbixDB_task.dump
5) Export multiple tables from one database
mysqldump -u root -p --databases zabbixDB --tables trends users > zabbix_trends_users.dump
6) Conditional export: dump the rows of table a1 in db1 where id=1
mysqldump -uroot -p --databases db1 --tables a1 --where='id=1' > db1_a1_id.dump
7) Import all data of database db1 on server h1 into database db2 on h2; db2 must already exist on the target or the import fails
mysqldump --host=h1 -uroot -proot --databases db1 |mysql --host=h2 -uroot -proot db2
8) Compressed backup
mysqldump -uroot -p --databases zabbixDB 2>/dev/null |gzip > zabbixDB.gz
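To restore a compressed backup later, the archive can be streamed back into mysql; a minimal sketch, assuming the zabbixDB.gz file produced above (the dump already contains the CREATE DATABASE statement, so no database argument is needed):

gunzip < zabbixDB.gz | mysql -uroot -p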

## Start the experiment

2. Insert test data into the source database

create database aaaa;
use aaaa;
create table aa (id int primary key);
insert into aa values('1');

3. Export the backup

mysqldump -u root -p --all-databases --set-gtid-purged=OFF  > /tmp/test.dump
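For InnoDB tables, a consistent snapshot can be taken without locking by adding --single-transaction; a hedged variant of the command above (both options exist in MySQL 5.7, but this is not what the original experiment ran):

mysqldump -u root -p --all-databases --single-transaction --set-gtid-purged=OFF > /tmp/test.dump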

4. Restore on the target database (full import)

mysql -u root -p12345678 < /tmp/test.dump

Because mysqldump is a logical backup, the restore proceeds as follows: first check whether the database to be restored exists, and create it if it does not; then, inside the database, check whether each table exists; if it does, drop it first and recreate it; once the table is created, insert the data. Analyzing the dump file shows exactly this:

4.1 Create the database:

--
-- Current Database: `aaaa`
--
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `aaaa` /*!40100 DEFAULT CHARACTER SET utf8mb4 */;
USE `aaaa`;

4.2 Check for and create table aaa:

--
-- Table structure for table `aaa`
--
DROP TABLE IF EXISTS `aaa`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `aaa` (
`id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
/*!40101 SET character_set_client = @saved_cs_client */;

4.3 Insert the backed-up data into table aaa:

--
-- Dumping data for table `aaa`
--
LOCK TABLES `aaa` WRITE;
/*!40000 ALTER TABLE `aaa` DISABLE KEYS */;
INSERT INTO `aaa` VALUES (1);
/*!40000 ALTER TABLE `aaa` ENABLE KEYS */;
UNLOCK TABLES;


[Mongodb] MongoDB sharding cluster configuration


0. Prerequisites

For MongoDB high availability you can configure a replica set plus CSRS (config server replica set); see the earlier post for the setup. But when business throughput is very high, a single node can no longer serve all requests, so a sharded cluster is needed to improve access performance.

1. Convert the architecture from the plain replica set (CSRS) to a sharded cluster; the steps below continue from the replica set built in that post:

1.1 Node information:
Primary: 10.0.7.53
Secondary: 10.0.7.51
Arbiter: 10.0.7.50
Shut down the arbiter, secondary, and primary in that order, and add the following to each node's mongodb.conf:

sharding:
  clusterRole: shardsvr

1.2 After modifying all three nodes, start the primary, secondary, and arbiter in that order, open the console, and run rs.status() to confirm the replica set is healthy. To keep downtime short, you can use rs.stepDown() to switch the primary over, then reconfigure and restart each node while it is a secondary.

2. Create a new shard1 replica set; perform the following on all three nodes:

2.1 Node information:
Secondary: 10.0.7.53
Primary: 10.0.7.51
Arbiter: 10.0.7.50
2.2 Copy the mongodb.conf file and rename it
cp mongodb.conf shard1mongodb.conf
2.3 On the primary, create the username and password first while running without authentication. Then edit shard1mongodb.conf on all three nodes as follows:
cat shard1mongodb.conf

# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/shard1/log/mongod.log

# Where and how to store data.
storage:
  dbPath: /data/mongodb/shard1/data/
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/mongodb/shard1/log/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 30001
  bindIp: 127.0.0.1,10.0.7.53  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

security:
  authorization: enabled
  keyFile: /data/mongodb/shard1/mongodb.keyfile

#operationProfiling:

replication:
  replSetName: "shard1"

sharding:
  clusterRole: shardsvr

## Enterprise-Only Options
#auditLog:
#snmp:

Note: adjust the sharding, port, path, and replSetName parameters as needed.

2.4 Create the data and log directories:

mkdir -p /data/mongodb/shard1/{log,data}

2.5 Copy the keyfile (generated earlier) into the target directory:

cp mongodb.keyfile /data/mongodb/shard1/

2.6 Start the shard1 mongod on all three nodes:

/data/mongodb/bin/mongod -f /data/mongodb/shard1mongodb.conf

2.7 Initialize the replica set:
/data/mongodb/bin/mongo --port 30001

rs.initiate(
{
_id : "shard",
members: [
{ _id : 0, host : "10.0.7.53:30001" },
{ _id : 1, host : "10.0.7.50:30001" },
{ _id : 2, host : "10.0.7.51:30001", "arbiterOnly" : true }
]
}
)
2.8 View the replica set members:

rs.status();
{
"set" : "shard",
"date" : ISODate("2018-08-09T03:13:34.697Z"),
"myState" : 2,
"term" : NumberLong(1),
"syncingTo" : "10.0.7.50:30001",
"syncSourceHost" : "10.0.7.50:30001",
"syncSourceId" : 1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1533784405, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1533784405, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1533784405, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1533784405, 1),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1533784355, 1),
"members" : [
{
"_id" : 0,
"name" : "10.0.7.53:30001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 47039,
"optime" : {
"ts" : Timestamp(1533784405, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-09T03:13:25Z"),
"syncingTo" : "10.0.7.50:30001",
"syncSourceHost" : "10.0.7.50:30001",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "10.0.7.50:30001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 46862,
"optime" : {
"ts" : Timestamp(1533784405, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1533784405, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-09T03:13:25Z"),
"optimeDurableDate" : ISODate("2018-08-09T03:13:25Z"),
"lastHeartbeat" : ISODate("2018-08-09T03:13:33.395Z"),
"lastHeartbeatRecv" : ISODate("2018-08-09T03:13:32.735Z"),
"pingMs" : NumberLong(1),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1533737562, 1),
"electionDate" : ISODate("2018-08-08T14:12:42Z"),
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.0.7.51:30001",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 46862,
"lastHeartbeat" : ISODate("2018-08-09T03:13:34.660Z"),
"lastHeartbeatRecv" : ISODate("2018-08-09T03:13:34.658Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1533784405, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("000000000000000000000000")
},
"lastCommittedOpTime" : Timestamp(1533784405, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1533784394, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1533784408, 1),
"signature" : {
"hash" : BinData(0,"dToZVSHbihPkQILJVl0qcSY+8ys="),
"keyId" : NumberLong("6587344633552961565")
}
}
}

3. Create a new config-server replica set; perform the following on all three nodes:

3.1 Node information:

Secondary: 10.0.7.53
Secondary: 10.0.7.51
Primary: 10.0.7.50

3.2 Copy the shard1 config file and rename it

cp shard1mongodb.conf configmongodb.conf

3.3 On the primary, create the username and password first while running without authentication. Then edit configmongodb.conf on all three nodes as follows:
cat configmongodb.conf

# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/config/log/mongod.log

# Where and how to store data.
storage:
  dbPath: /data/mongodb/config/data/
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/mongodb/config/log/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 30002
  bindIp: 127.0.0.1,10.0.7.53  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:
#  authorization: enabled
#  keyFile: /data/mongodb/config/mongodb.keyfile

#operationProfiling:

replication:
  replSetName: "config"

sharding:
  clusterRole: configsvr

## Enterprise-Only Options
#auditLog:
#snmp:

Note: adjust the sharding, replSetName, port, and path parameters.
3.4 Create the data and log directories:

mkdir -p /data/mongodb/config/{log,data}

3.5 Start the config server mongod on all three nodes:

/data/mongodb/bin/mongod --configsvr -f /data/mongodb/configmongodb.conf

The config server must be started with the --configsvr flag; otherwise initializing the replica set fails with:

"errmsg" : "Nodes being used for config servers must be started with the --configsvr flag"

3.6 Initialize the replica set:
/data/mongodb/bin/mongo --port 30002

rs.initiate(
{
_id : "config",
configsvr: true,
members: [
{ _id : 0, host : "10.0.7.53:30002" },
{ _id : 1, host : "10.0.7.50:30002" },
{ _id : 2, host : "10.0.7.51:30002" }
]
}
)
3.7 View the replica set status:

rs.status();
{
"set" : "config",
"date" : ISODate("2018-08-06T08:29:14.472Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1533544151, 1),
"members" : [
{
"_id" : 0,
"name" : "10.0.7.53:30002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 15,
"optime" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-06T08:29:11Z"),
"optimeDurableDate" : ISODate("2018-08-06T08:29:11Z"),
"lastHeartbeat" : ISODate("2018-08-06T08:29:14.161Z"),
"lastHeartbeatRecv" : ISODate("2018-08-06T08:29:12.674Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.0.7.51:30002",
"syncSourceHost" : "10.0.7.51:30002",
"syncSourceId" : 2,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 1,
"name" : "10.0.7.50:30002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 15,
"optime" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-06T08:29:11Z"),
"optimeDurableDate" : ISODate("2018-08-06T08:29:11Z"),
"lastHeartbeat" : ISODate("2018-08-06T08:29:14.164Z"),
"lastHeartbeatRecv" : ISODate("2018-08-06T08:29:12.680Z"),
"pingMs" : NumberLong(1),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.0.7.51:30002",
"syncSourceHost" : "10.0.7.51:30002",
"syncSourceId" : 2,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.0.7.51:30002",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 59,
"optime" : {
"ts" : Timestamp(1533544151, 2),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-06T08:29:11Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1533544150, 1),
"electionDate" : ISODate("2018-08-06T08:29:10Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"operationTime" : Timestamp(1533544151, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1533544151, 2),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

4. Configure mongos (the MongoDB router), which clients connect to when accessing the cluster; to avoid a single point of failure, multiple mongos nodes can be deployed

4.1 Copy a config file and rename it

cp configmongodb.conf routermongodb.conf

4.2 Edit the config file as follows:
cat routermongodb.conf

# mongod.conf

# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/router/log/mongod.log

# Where and how to store data.
#storage:
#  dbPath: /data/mongodb/router/data/
#  journal:
#    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/mongodb/router/log/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 30004
  bindIp: 127.0.0.1,10.0.7.53  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:
#  authorization: enabled

#operationProfiling:
#replication:

sharding:
  configDB: config/10.0.7.53:30002,10.0.7.51:30002,10.0.7.50:30002

## Enterprise-Only Options
#auditLog:
#snmp:

Note: adjust the sharding and port parameters, and comment out the storage section.
4.3 Create the router log directory

mkdir -p /data/mongodb/router/log

4.4 Start mongos

/data/mongodb/bin/mongos -f /data/mongodb/routermongodb.conf
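To confirm mongos came up, check its port the same way the install post checks mongod; 30004 is the port assumed in the config above:

netstat -tunlp | grep 30004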

4.5 To simulate production data, insert some records into the testrepl replica set with the following bulk-insert script:

use test
var bulk = db.test_collection.initializeUnorderedBulkOp();
people = ["Marc", "Bill", "George", "Eliot", "Matt", "Trey", "Tracy", "Greg", "Steve", "Kristina", "Katie", "Jeff"];
for(var i=0; i<1000000; i++){
    user_id = i;
    name = people[Math.floor(Math.random()*people.length)];
    number = Math.floor(Math.random()*10001);
    bulk.insert( { "user_id":user_id, "name":name, "number":number });
}
bulk.execute();
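To confirm the load finished, the documents can be counted from the shell; a sketch assuming the testrepl port (30000) and the admin1/admin123 account from the earlier replica set post:

/data/mongodb/bin/mongo --port 30000 -u admin1 -p admin123 --authenticationDatabase admin --eval 'db.getSiblingDB("test").test_collection.count()'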

4.6 Connect to mongos

shell > mongo --host 10.0.7.50 --port 30004
mongos> use admin
mongos> db.auth('admin1','admin123');

4.7 Add the shards to the cluster:

mongos> sh.addShard("testrepl/10.0.7.53:30000")
mongos> sh.addShard("shard1/10.0.7.53:30001")

4.8 Enable sharding for the test database

mongos> sh.enableSharding("test")
mongos> use test
# create an index on the shard key
mongos> db.test_collection.createIndex( { number : 1 } )
# shard the collection (test.test_collection on number, matching the index above and the sh.status() output below)
mongos> sh.shardCollection('test.test_collection', {'number': 1})
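Besides sh.status(), the mongo shell helper getShardDistribution() shows how a sharded collection's documents and chunks are spread across the shards; a sketch run from the same mongos session (after use test):

mongos> db.test_collection.getShardDistribution()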

4.9 View the sharding results (step 4.5 can be repeated to generate more data):
sh.status() or db.printShardingStatus()

--- Sharding Status --- 
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5b6af30b5025146bfd45717b")
}
shards:
{ "_id" : "shard", "host" : "shard/10.0.7.50:30001,10.0.7.53:30001", "state" : 1 }
{ "_id" : "testrepl", "host" : "testrepl/10.0.7.50:30000,10.0.7.53:30000", "state" : 1 }
active mongoses:
"4.0.0" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
4 : Success
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard Timestamp(1, 0)
{ "_id" : "test", "primary" : "testrepl", "partitioned" : true, "version" : { "uuid" : UUID("185a3842-639f-42b0-89a1-afdc8dcdfbe7"), "lastMod" : 1 } }
test.test_collection
shard key: { "number" : 1 }
unique: false
balancing: true
chunks:
shard 4
testrepl 4
{ "number" : { "$minKey" : 1 } } -->> { "number" : 2394 } on : shard Timestamp(2, 0)
{ "number" : 2394 } -->> { "number" : 4791 } on : shard Timestamp(3, 0)
{ "number" : 4791 } -->> { "number" : 7194 } on : shard Timestamp(4, 0)
{ "number" : 7194 } -->> { "number" : 7892 } on : shard Timestamp(5, 0)
{ "number" : 7892 } -->> { "number" : 8591 } on : testrepl Timestamp(5, 1)
{ "number" : 8591 } -->> { "number" : 9287 } on : testrepl Timestamp(3, 4)
{ "number" : 9287 } -->> { "number" : 9589 } on : testrepl Timestamp(3, 5)
{ "number" : 9589 } -->> { "number" : { "$maxKey" : 1 } } on : testrepl Timestamp(4, 1)

5. Done. To access the MongoDB cluster, clients only need to connect to the mongos IP and port.


[Mysql] MySQL MHA master-slave setup: slave fails with Error_code: 1062


0. Database version:

5.7.22

1. Slave error message

Last_SQL_Error: Could not execute Write_rows event on table test.tb1;
Duplicate entry '4' for key 'PRIMARY',
Error_code: 1062;
handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-binlog.000005, end_log_pos 273273632

2. Solution

2.1 Dump the binary log on the master:

/data/mysql/bin/mysqlbinlog  -v --stop-position=273273632 /data/mysql/log/mysql-binlog.000005 > /tmp/f.log

2.2 Find the line number that corresponds to the failing position:

cat /tmp/f.log | awk '/end_log_pos 273273632/ {print NR}'
22556452

2.3 Using that line number, inspect the surrounding context:

cat /tmp/f.log | awk 'NR==22556442,NR==22556492'

2.4 The INSERT statement found at that position:

### INSERT INTO `test`.`tb1`
### SET
### @1=4
### @2='ERC20'
### @3='GTO'
### @4=1533176390
ROLLBACK /* added by mysqlbinlog */ /*!*/;

2.5 On the slave, stop replication:

stop slave;

2.6 Delete the conflicting row reported in the error:

use test;
delete from tb1 where id=4;
select * from tb1;

2.7 Restart replication and confirm the slave state is healthy:

start slave;
show slave status\G
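When the conflicting row is known to be identical on the master and the slave, an alternative is to skip the single offending event instead of deleting the row; this sketch applies to classic (non-GTID) replication only, and skipping events can silently diverge data, so use it with care:

mysql -uroot -p -e "stop slave; set global sql_slave_skip_counter = 1; start slave;"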


[Mongodb] MongoDB cluster: one primary, one secondary, one arbiter


1. First install MongoDB on all three nodes; for the procedure see

MongoDB standalone installation

2. Cluster node information:

Primary: 10.0.7.53
Secondary: 10.0.7.51
Arbiter: 10.0.7.50

3. Start the primary node and open its console.

3.1 Create the administrator username and password:

shell > /data/mongodb/bin/mongo --port 30000
MongoDB Enterprise testrepl:PRIMARY>use admin
MongoDB Enterprise testrepl:PRIMARY>
db.createUser(
{
user: "admin",
pwd: "abc123",
roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
}
)

With only the userAdminAnyDatabase role, running rs.status() fails with
"not authorized on admin to execute command", so it is better to grant the root role:

db.createUser(
{
user: "admin1",
pwd: "admin123",
roles: [ { role: "root", db: "admin" } ]
}
)

3.2 View the administrators just created:

MongoDB Enterprise testrepl:PRIMARY> db.system.users.find()

3.3 Exit the console and edit mongodb.conf (on every node):

security:
  authorization: enabled

4. Generate the keyfile on the primary node

4.1 Generate the keyfile used for authentication between cluster members:

openssl rand -base64 756 > /data/mongodb/mongodb.keyfile

4.2 Change the file permissions so only the owner can read it

chmod 400 /data/mongodb/mongodb.keyfile

4.3 Copy the keyfile to every replica set member

scp /data/mongodb/mongodb.keyfile root@10.0.7.51:/data/mongodb/
scp /data/mongodb/mongodb.keyfile root@10.0.7.50:/data/mongodb/

4.4 Edit mongodb.conf and add the keyFile parameter (on every node):

security:
  keyFile: /data/mongodb/mongodb.keyfile

5. Configure the replica set

5.1 Edit mongodb.conf and add the replication parameters (on every node):

replication:
  replSetName: "testrepl"

replSetName sets the replica set name; choose whatever suits you, but it must be identical on all three nodes.

5.2 Restart mongodb

/data/mongodb/bin/mongod -f /data/mongodb/mongodb.conf

5.3 Open the primary node console

/data/mongodb/bin/mongo --port 30000

5.4 Add the cluster members:

use admin;
db.auth('admin','abc123');

rs.initiate( {
_id : "testrepl",
members: [
{ _id: 0, host: "10.0.7.53:30000" },
{ _id: 1, host: "10.0.7.50:30000" },
{ _id: 2, host: "10.0.7.51:30000","arbiterOnly" : true}
]
})

The result:

{
"ok" : 1,
"operationTime" : Timestamp(1533388128, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1533388128, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

5.5 View the replica set status

MongoDB Enterprise testrepl:PRIMARY> rs.status();
{
"set" : "testrepl",
"date" : ISODate("2018-08-04T13:09:25.894Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1533388160, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1533388160, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1533388160, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1533388160, 1),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1533388140, 1),
"members" : [
{
"_id" : 0,
"name" : "10.0.7.53:30000",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 289,
"optime" : {
"ts" : Timestamp(1533388160, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-04T13:09:20Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1533388139, 1),
"electionDate" : ISODate("2018-08-04T13:08:59Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "10.0.7.50:30000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 37,
"optime" : {
"ts" : Timestamp(1533388160, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1533388160, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-08-04T13:09:20Z"),
"optimeDurableDate" : ISODate("2018-08-04T13:09:20Z"),
"lastHeartbeat" : ISODate("2018-08-04T13:09:25.051Z"),
"lastHeartbeatRecv" : ISODate("2018-08-04T13:09:25.677Z"),
"pingMs" : NumberLong(3),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.0.7.53:30000",
"syncSourceHost" : "10.0.7.53:30000",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.0.7.51:30000",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 37,
"lastHeartbeat" : ISODate("2018-08-04T13:09:25.031Z"),
"lastHeartbeatRecv" : ISODate("2018-08-04T13:09:24.265Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1533388160, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1533388160, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

5.6 By default a secondary rejects reads and writes; to allow reads on a secondary, run the following on it:

rs.slaveOk();

5.7 To add another arbiter node, make sure the new node's keyfile and replication parameters match the existing configuration:

rs.addArb("m1.example.net:27017")

5.8 To add a new secondary, first set its priority and votes to 0 so it cannot take part in elections before its initial sync completes; once it has caught up, use rs.reconfig() to restore its priority and votes:

rs.add( { host: "mongodb3.example.net:27017", priority: 0, votes: 0 } )

rs.reconfig() is used as follows:

var cfg = rs.conf();
cfg.members[4].priority = 1
cfg.members[4].votes = 1
rs.reconfig(cfg)


[Mongodb] MongoDB Enterprise 4.0 installation


1. Installing via yum

1.1 Configure the yum repository
cat /etc/yum.repos.d/mongodb-enterprise.repo

[mongodb-enterprise]
name=MongoDB Enterprise Repository
baseurl=https://repo.mongodb.com/yum/redhat/$releasever/mongodb-enterprise/4.0/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

1.2 Run the install command

sudo yum install -y mongodb-enterprise

To install a specific MongoDB version, every component must be pinned to that version, as shown below:

sudo yum install -y mongodb-enterprise-4.0.0 mongodb-enterprise-server-4.0.0 mongodb-enterprise-shell-4.0.0 mongodb-enterprise-mongos-4.0.0 mongodb-enterprise-tools-4.0.0

1.3 MongoDB start, stop, and restart commands:

sudo service mongod start
sudo service mongod stop
sudo service mongod restart

1.4 After a yum install, the configuration file is /etc/mongod.conf

2. Installing from the tarball

2.1 Install the dependency packages first:
linux-6:

yum install -y cyrus-sasl cyrus-sasl-plain cyrus-sasl-gssapi krb5-libs libcurl libpcap net-snmp openldap openssl

linux-7

yum install -y cyrus-sasl cyrus-sasl-gssapi cyrus-sasl-plain krb5-libs libcurl libpcap lm_sensors-libs net-snmp net-snmp-agent-libs openldap openssl rpm-libs tcp_wrappers-libs

2.2 Download the mongodb-enterprise package

wget https://downloads.mongodb.com/linux/mongodb-linux-x86_64-enterprise-rhel70-4.0.0.tgz

2.3 Extract the package, rename the directory, and configure the PATH

 tar -zxvf mongodb-linux-x86_64-enterprise-rhel70-4.0.0.tgz -C /data/
mv /data/mongodb-linux-x86_64-enterprise-rhel70-4.0.0 /data/mongodb
echo "export PATH=/data/mongodb/bin:\$PATH" >> ~/.bash_profile
source ~/.bash_profile

2.4 Edit the configuration file
cat /data/mongodb/mongodb.conf

# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/log/mongod.log

# Where and how to store data.
storage:
  dbPath: /data/mongodb/data/
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/mongodb/log/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 30000
  bindIp: 127.0.0.1,10.0.7.51  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options
#auditLog:
#snmp:

2.5 If the data or log directory referenced by the config file does not exist, startup fails; create them manually:

mkdir -p /data/mongodb/{log,data}

2.6 Start mongodb

shell>/data/mongodb/bin/mongod -f /data/mongodb/mongodb.conf
--------
2018-08-03T08:42:19.873+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 41022
child process started successfully, parent exiting

2.7 Check the mongodb process:

 netstat -tunlp|grep 30000
---------
tcp 0 0 10.0.7.51:30000 0.0.0.0:* LISTEN 41022/mongod
tcp 0 0 127.0.0.1:30000 0.0.0.0:* LISTEN 41022/mongod


[Zabbix] Monitoring nginx performance with Zabbix 3.4


1. Check whether nginx was built with HTTP stub status support

openresty -V

If the output does not contain "--with-http_stub_status_module", the module is not installed and must be added manually:

When compiling nginx, add the --with-http_stub_status_module option;
running ./configure && make is enough, no make install is needed. In most builds, however, the module is already included.

2. Add the configuration on the nginx server

2.1 Add the config file nginx_status.conf
cat /usr/local/openresty/nginx/conf/conf.d/nginx_status.conf

server {
    listen 80;
    server_name 127.0.0.1;

    location /basic_status {
        stub_status;
    }
}

2.2 Reload the configuration

openresty -s reload

2.3 Fetch the nginx status

curl http://127.0.0.1/basic_status
-------------
Active connections: 1
server accepts handled requests
572 572 764
Reading: 0 Writing: 1 Waiting: 0

Explanation of the nginx status fields

## Active connections: number of currently active client connections
## server accepts handled requests: nginx accepted 572 connections in total, successfully handled 572 of them (no failures), and served 764 requests
## Reading: number of connections on which nginx is reading the request header
## Writing: number of connections on which nginx is writing the response back to the client
## Waiting: with keep-alive enabled, this equals active - (reading + writing): connections nginx has finished with that are idle, waiting for the next request
## When requests are handled quickly, a high Waiting count is normal; a high reading + writing count means heavy concurrent traffic is being processed

3. Configure zabbix

Create a directory for the scripts (my Zabbix installation path is /usr/local/zabbix):

mkdir /usr/local/zabbix/script
cd /usr/local/zabbix/script

Create the data file (path is up to you):

touch .status.txt

Create the script that extracts the nginx status values:

echo  "
#!/bin/bash

server_hostname=127.0.0.1
data_file=/usr/local/zabbix/script/.status.txt

status_data=\`curl -o \$data_file -s http://\$server_hostname/basic_status\`

function Active() {
awk -F \"[: ]\" '/Active/{print \$4}' \$data_file
}

function Reading() {
awk -F \"[: ]\" '/Reading/ {print \$3}' \$data_file
}

function Writing() {
awk -F \"[: ]\" '/Writing/ {print \$6}' \$data_file
}

function Waiting() {
awk -F \"[: ]\" '/Waiting/ {print \$9}' \$data_file
}

\$1" > /usr/local/zabbix/script/nginx_connection.sh

Fix the ownership of everything in the directory, or the script will hit permission errors writing the data file:

chown -R zabbix.zabbix /usr/local/zabbix/script

Add the user parameters to the agent configuration file:

echo \"UserParameter=nginx.Active,sh /usr/local/zabbix/script/nginx_connection.sh Active
UserParameter=nginx.Writing,sh /usr/local/zabbix/script/nginx_connection.sh Writing
UserParameter=nginx.Reading,sh /usr/local/zabbix/script/nginx_connection.sh Reading
UserParameter=nginx.Waiting,sh /usr/local/zabbix/script/nginx_connection.sh Waiting\" >> /usr/local/zabbix/etc/zabbix_agentd.conf

Restart the agent:

/etc/init.d/zabbix_agentd restart

On the server, test that the values can be retrieved:

/usr/local/zabbix/bin/zabbix_get -s 10.1.130.47  -k nginx.Active


[Redis] redis-cluster deployment


0. Redis Cluster functional limitations

Compared with standalone Redis, Redis Cluster has some functional limitations:
Batch key operations such as MSET and MGET are limited: they currently only work on keys that hash to the same slot.
Transactions are limited: multi-key transactions work only when all keys live on the same node; transactions spanning multiple nodes are not supported.
The key is the minimum unit of partitioning, so a single large value (e.g. a hash or list) cannot be spread across nodes.
Multiple databases are not supported: standalone Redis has 16 databases, but cluster mode only offers a single database, db 0.
Replication is single-level only; nested tree replication topologies are not supported.

1. Three servers, each running two redis instances; node information:

Master 1: 10.0.7.53-6379, 10.0.7.53-6380
Master 2: 10.0.7.50-6379, 10.0.7.50-6380
Master 3: 10.0.7.51-6379, 10.0.7.51-6380

2. Edit the redis.conf configuration files

cat /data/redis/etc/redis6379.conf (on the other servers, just change the bind IP accordingly)

pidfile /data/redis6379/log/redis.pid
bind 10.0.7.53
port 6379
daemonize yes
logfile /data/redis6379/log/redis.log
dir /data/redis6379/
databases 16
maxmemory 1g
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

cat /data/redis/etc/redis6380.conf (on the other servers, just change the bind IP accordingly)

pidfile /data/redis6380/log/redis.pid
bind 10.0.7.53
port 6380
daemonize yes
logfile /data/redis6380/log/redis.log
dir /data/redis6380/
databases 16
maxmemory 1g
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

Create the directories referenced above (startup fails if they do not exist):

mkdir -p /data/redis6379/log/
mkdir -p /data/redis6380/log/

3. Start the redis services

3.1 Before starting, adjust the kernel parameters:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
# make it persistent across reboots via /etc/rc.local
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
chmod +x /etc/rc.d/rc.local

echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
# apply the setting immediately
sysctl vm.overcommit_memory=1

echo 511 > /proc/sys/net/core/somaxconn

3.2 Start the redis instances on all three servers

/data/redis/src/redis-server /data/redis/etc/redis6379.conf
/data/redis/src/redis-server /data/redis/etc/redis6380.conf
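A quick liveness check for each instance; repeat per node and port, expecting PONG:

/data/redis/src/redis-cli -h 10.0.7.53 -p 6379 ping
/data/redis/src/redis-cli -h 10.0.7.53 -p 6380 ping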

4. After a successful start, a nodes.conf file is created in the configured dir directory

cat /data/redis6379/nodes.conf
4933ef69f73b390098a4431449aeef2e83dacfc0 :0@0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

This file records the node ID, which does not change when the hostname or port changes.

5. Configure the cluster

5.1 Configuring the cluster requires ruby

yum install -y ruby
# after installing, run:
gem install redis
# which fails with:
----------
Fetching: redis-4.0.1.gem (100%)
ERROR: Error installing redis:
redis requires Ruby version >= 2.2.2.
------------
# yum installs ruby 2.0.0 by default, but running redis-trib.rb requires at least ruby 2.2.2

5.2 Install rvm first, then upgrade ruby

# install curl:
sudo yum install curl
# install RVM:
curl -L get.rvm.io | bash -s stable
# if the installer complains about a missing public key, import it as instructed:
shell> gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
-----
gpg: Can't check signature: No public key
Warning, RVM 1.26.0 introduces signed releases and automated check of signatures when GPG software found. Assuming you trust Michal Papis import the mpapis public key (downloading the signatures).

GPG signature verification failed for '/usr/local/rvm/archives/rvm-1.29.4.tgz' - 'https://github.com/rvm/rvm/releases/download/1.29.4/1.29.4.tar.gz.asc'! Try to install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
------------------------------
# load the rvm environment:
source /usr/local/rvm/scripts/rvm
# list the ruby versions rvm knows about:
rvm list known
# install a ruby version:
rvm install 2.5.1
# switch to it:
rvm use 2.5.1
# remove the old version:
rvm remove 2.0.0
# verify:
ruby --version

5.3 Now installing the redis gem succeeds

gem install redis

5.4 Once that is done, create a new cluster with:

/data/redis/src/redis-trib.rb create --replicas 1 10.0.7.53:6379 10.0.7.53:6380 10.0.7.50:6379 10.0.7.50:6380 10.0.7.51:6379 10.0.7.51:6380

--replicas 1: this option adds one replica for every master node

>>> Creating cluster
[ERR] Sorry, can't connect to node 10.0.7.53:6379
If cluster creation fails like this and the gem version is fine, check whether the config files contain a requirepass parameter.

Re-run the cluster creation:

>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.0.7.53:6379
10.0.7.50:6379
10.0.7.51:6379
Adding replica 10.0.7.50:6380 to 10.0.7.53:6379
Adding replica 10.0.7.51:6380 to 10.0.7.50:6379
Adding replica 10.0.7.53:6380 to 10.0.7.51:6379
M: 4933ef69f73b390098a4431449aeef2e83dacfc0 10.0.7.53:6379
slots:0-5460 (5461 slots) master
S: 3f757295e082a5fbe237015862d0a502779f3adf 10.0.7.53:6380
replicates 3e31b1ceded27ad5b4114beaebb559d7f5ca465d
M: 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4 10.0.7.50:6379
slots:5461-10922 (5462 slots) master
S: 5b3aa1532d911e1a9b966ec53e3203a3221056e3 10.0.7.50:6380
replicates 4933ef69f73b390098a4431449aeef2e83dacfc0
M: 3e31b1ceded27ad5b4114beaebb559d7f5ca465d 10.0.7.51:6379
slots:10923-16383 (5461 slots) master
S: 30aabebd554d34e59584774feebd472f622f1cc2 10.0.7.51:6380
replicates 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 10.0.7.53:6379)
M: 4933ef69f73b390098a4431449aeef2e83dacfc0 10.0.7.53:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4 10.0.7.50:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 3e31b1ceded27ad5b4114beaebb559d7f5ca465d 10.0.7.51:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 30aabebd554d34e59584774feebd472f622f1cc2 10.0.7.51:6380
slots: (0 slots) slave
replicates 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4
S: 3f757295e082a5fbe237015862d0a502779f3adf 10.0.7.53:6380
slots: (0 slots) slave
replicates 3e31b1ceded27ad5b4114beaebb559d7f5ca465d
S: 5b3aa1532d911e1a9b966ec53e3203a3221056e3 10.0.7.50:6380
slots: (0 slots) slave
replicates 4933ef69f73b390098a4431449aeef2e83dacfc0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
----------------------------------

5.5 The cluster is up; check its status

/data/redis/src/redis-trib.rb check 10.0.7.53:6379
(or open a console with /data/redis/src/redis-cli -h 10.0.7.53 and run CLUSTER NODES)
>>> Performing Cluster Check (using node 10.0.7.53:6379)
M: 4933ef69f73b390098a4431449aeef2e83dacfc0 10.0.7.53:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4 10.0.7.50:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 3e31b1ceded27ad5b4114beaebb559d7f5ca465d 10.0.7.51:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 30aabebd554d34e59584774feebd472f622f1cc2 10.0.7.51:6380
slots: (0 slots) slave
replicates 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4
S: 3f757295e082a5fbe237015862d0a502779f3adf 10.0.7.53:6380
slots: (0 slots) slave
replicates 3e31b1ceded27ad5b4114beaebb559d7f5ca465d
S: 5b3aa1532d911e1a9b966ec53e3203a3221056e3 10.0.7.50:6380
slots: (0 slots) slave
replicates 4933ef69f73b390098a4431449aeef2e83dacfc0
[OK] All nodes agree about slots configuration.
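A simple smoke test of the finished cluster; the -c flag makes redis-cli follow cluster redirects, so a key that hashes to a slot on another node is still reachable (foo/bar are arbitrary example values):

/data/redis/src/redis-cli -c -h 10.0.7.53 -p 6379 set foo bar
/data/redis/src/redis-cli -c -h 10.0.7.53 -p 6379 get foo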

6. With the cluster installed, test that it behaves correctly

6.1 Kill any master node; its replica is automatically promoted to master:

/data/redis/src/redis-trib.rb check 10.0.7.53:6380
>>> Performing Cluster Check (using node 10.0.7.53:6380)
S: 3f757295e082a5fbe237015862d0a502779f3adf 10.0.7.53:6380
slots: (0 slots) slave
replicates 3e31b1ceded27ad5b4114beaebb559d7f5ca465d
M: 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4 10.0.7.50:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 5b3aa1532d911e1a9b966ec53e3203a3221056e3 10.0.7.50:6380
slots:0-5460 (5461 slots) master
0 additional replica(s)
S: 30aabebd554d34e59584774feebd472f622f1cc2 10.0.7.51:6380
slots: (0 slots) slave
replicates 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4
M: 3e31b1ceded27ad5b4114beaebb559d7f5ca465d 10.0.7.51:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)

6.2 After restarting the killed node, it automatically rejoins as a replica:

/data/redis/src/redis-trib.rb check 10.0.7.53:6380
>>> Performing Cluster Check (using node 10.0.7.53:6380)
S: 3f757295e082a5fbe237015862d0a502779f3adf 10.0.7.53:6380
slots: (0 slots) slave
replicates 3e31b1ceded27ad5b4114beaebb559d7f5ca465d
M: 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4 10.0.7.50:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 4933ef69f73b390098a4431449aeef2e83dacfc0 10.0.7.53:6379
slots: (0 slots) slave
replicates 5b3aa1532d911e1a9b966ec53e3203a3221056e3
M: 5b3aa1532d911e1a9b966ec53e3203a3221056e3 10.0.7.50:6380
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 30aabebd554d34e59584774feebd472f622f1cc2 10.0.7.51:6380
slots: (0 slots) slave
replicates 27eb4895dc9369adcd81d9a00709dfbf6fbbd0d4
M: 3e31b1ceded27ad5b4114beaebb559d7f5ca465d 10.0.7.51:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)


[Linux] CentOS 7.4: yum install gcc fails


[yum install gcc fails as follows:]

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package gcc.x86_64 0:4.8.5-28.el7_5.1 will be installed
--> Processing Dependency: libgomp = 4.8.5-28.el7_5.1 for package: gcc-4.8.5-28.el7_5.1.x86_64
--> Processing Dependency: cpp = 4.8.5-28.el7_5.1 for package: gcc-4.8.5-28.el7_5.1.x86_64
--> Processing Dependency: glibc-devel >= 2.2.90-12 for package: gcc-4.8.5-28.el7_5.1.x86_64
--> Processing Dependency: libmpfr.so.4()(64bit) for package: gcc-4.8.5-28.el7_5.1.x86_64
--> Processing Dependency: libmpc.so.3()(64bit) for package: gcc-4.8.5-28.el7_5.1.x86_64
--> Running transaction check
---> Package cpp.x86_64 0:4.8.5-28.el7_5.1 will be installed
---> Package glibc-devel.x86_64 0:2.17-222.el7 will be installed
--> Processing Dependency: glibc-headers = 2.17-222.el7 for package: glibc-devel-2.17-222.el7.x86_64
--> Processing Dependency: glibc = 2.17-222.el7 for package: glibc-devel-2.17-222.el7.x86_64
--> Processing Dependency: glibc-headers for package: glibc-devel-2.17-222.el7.x86_64
---> Package libgomp.x86_64 0:4.8.5-11.el7 will be updated
---> Package libgomp.x86_64 0:4.8.5-28.el7_5.1 will be an update
---> Package libmpc.x86_64 0:1.0.1-3.el7 will be installed
---> Package mpfr.x86_64 0:3.1.1-4.el7 will be installed
--> Running transaction check
---> Package glibc.x86_64 0:2.17-157.el7_3.5 will be updated
--> Processing Dependency: glibc = 2.17-157.el7_3.5 for package: glibc-common-2.17-157.el7_3.5.x86_64
---> Package glibc.x86_64 0:2.17-222.el7 will be an update
---> Package glibc-headers.x86_64 0:2.17-222.el7 will be installed
--> Processing Dependency: kernel-headers >= 2.2.1 for package: glibc-headers-2.17-222.el7.x86_64
--> Processing Dependency: kernel-headers for package: glibc-headers-2.17-222.el7.x86_64
--> Running transaction check
---> Package glibc.x86_64 0:2.17-157.el7_3.5 will be updated
--> Processing Dependency: glibc = 2.17-157.el7_3.5 for package: glibc-common-2.17-157.el7_3.5.x86_64
---> Package kernel-headers.x86_64 0:3.10.0-862.9.1.el7 will be installed
--> Finished Dependency Resolution
Error: Package: glibc-common-2.17-157.el7_3.5.x86_64 (@CentOS-Updates)
Requires: glibc = 2.17-157.el7_3.5
Removing: glibc-2.17-157.el7_3.5.x86_64 (@CentOS-Updates)
glibc = 2.17-157.el7_3.5
Updated By: glibc-2.17-222.el7.x86_64 (base)
glibc = 2.17-222.el7
You could try using --skip-broken to work around the problem
** Found 3 pre-existing rpmdb problem(s), 'yum check' output follows:
glibc-common-2.17-222.el7.x86_64 is a duplicate with glibc-common-2.17-157.el7_3.5.x86_64
glibc-common-2.17-222.el7.x86_64 has missing requires of glibc = ('0', '2.17', '222.el7')
libgcc-4.8.5-28.el7_5.1.x86_64 is a duplicate with libgcc-4.8.5-11.el7.x86_64

The root cause: packages left over from the previous release after a system upgrade create duplicate-package conflicts.

[Solution:]

First install the yum-utils package

yum install yum-utils

Run the duplicate-package cleanup

package-cleanup --cleandupes

Reinstall glibc

yum reinstall glibc glibc-common libgcc

Then install gcc

yum install gcc


[Redis] redis-sentinel cluster configuration (one master, two replicas, three sentinels)


1. Install redis on all nodes

Node information:
Master 1: 10.0.7.53
Replica 2: 10.0.7.50
Replica 3: 10.0.7.51
Besides the redis service, each of the three nodes also runs a sentinel.

2. Configure master-replica replication; add the following to each node's configuration file:

Master node 1 configuration:

pidfile /data/redis/log/redis.pid
bind 10.0.7.53
port 6379
daemonize yes
logfile "/data/redis/log/redis.log"
databases 16
maxmemory 1g
requirepass 123456
masterauth 123456
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-ping-slave-period 10
repl-timeout 60
repl-disable-tcp-nodelay no
repl-backlog-size 5mb
repl-backlog-ttl 3600

Replica node 2 configuration:

pidfile /data/redis/log/redis.pid
bind 10.0.7.50
port 6379
daemonize yes
logfile "/data/redis/log/redis.log"
databases 16
maxmemory 1g
slaveof 10.0.7.53 6379
masterauth 123456
requirepass 123456
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-ping-slave-period 10
repl-timeout 60
repl-disable-tcp-nodelay no
repl-backlog-size 5mb
repl-backlog-ttl 3600

Replica node 3 configuration:

pidfile /data/redis/log/redis.pid
bind 10.0.7.51
port 6379
daemonize yes
logfile "/data/redis/log/redis.log"
databases 16
maxmemory 1g
slaveof 10.0.7.53 6379
masterauth 123456
requirepass 123456
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-ping-slave-period 10
repl-timeout 60
repl-disable-tcp-nodelay no
repl-backlog-size 5mb
repl-backlog-ttl 3600

Explanation of the parameters above:

# Replication: make this node a replica of the given master.
slaveof <masterip> <masterport>

# If the master has requirepass set, the replica needs the master's password to connect; masterauth holds that password so the replica can authenticate after connecting.
masterauth <master-password>
-------------------------------------
If the master requires a password and the replica is missing this line, the replica logs:
# Unexpected reply to PSYNC from master: -NOAUTH Authentication required.
* Retrying with SYNC...
# MASTER aborted replication with an error: NOAUTH Authentication required.
-------------------------------------

# When the replica loses its connection to the master, or replication is still in progress, it behaves in one of two ways: 1) with slave-serve-stale-data yes (the default) it keeps answering client requests; 2) with no, every command except INFO and SLAVEOF returns the error "SYNC with master in progress".
slave-serve-stale-data yes

# Replicas are read-only by default (yes); this can be changed to no to allow writes (not recommended).
slave-read-only yes

# Whether to replicate over a socket instead of via disk. Redis offers two full-sync transports: disk and socket. When a new replica connects, or a reconnecting replica cannot do a partial resync, a full sync runs and the master produces an RDB. With disk, the master forks a child that saves the RDB to disk, and the file on disk is then sent to the replica; while one RDB save is in progress, several replicas can share it. With socket, the child streams the RDB directly over the socket to one replica at a time. Socket is recommended when disks are slow and the network is fast.
repl-diskless-sync no

# Delay before a diskless sync starts; avoid setting it to 0. Once a transfer begins, the master cannot serve new replicas until the next RDB transfer, so waiting lets more replicas join the same transfer.
repl-diskless-sync-delay 5

# Interval at which the replica pings the master, set via repl-ping-slave-period; default 10 seconds.
# repl-ping-slave-period 10

# Replication connection timeout, used on both sides. If the master sees nothing from a replica for longer than repl-timeout, it considers the replica offline and drops it; likewise the replica considers the master offline. repl-timeout must be larger than repl-ping-slave-period, or timeouts will fire constantly.
# repl-timeout 60

# Whether to disable TCP_NODELAY on the replication link (yes/no). The default is no, i.e. TCP_NODELAY stays enabled. Setting yes reduces the packet count and bandwidth when sending data to replicas, at the cost of added latency. Lower latency is usually preferred, but yes can make sense for very large transfer volumes.
repl-disable-tcp-nodelay no

# Size of the replication backlog, a ring buffer holding the most recently replicated commands. If a replica goes offline briefly, it does not need a full copy of the master's data: a partial resync can replay just the buffered part. The bigger the buffer, the longer a replica can be offline and still resync partially. The buffer is only allocated while at least one replica is connected and is freed after a period with none; default 1mb.
# repl-backlog-size 5mb

# How long (in seconds) after the last replica disconnects before the master frees the backlog memory.
# repl-backlog-ttl 3600

# When the master becomes unavailable, Sentinel elects a new master based on replica priority; the replica with the lowest priority value wins. A priority of 0 means the replica is never elected.
slave-priority 100

# min-slaves-to-write lets the master refuse writes when fewer than N healthy replicas are connected. It cannot guarantee that N replicas actually receive every write, but it prevents writes (and possible data loss) when not enough healthy replicas exist. 0 disables the feature.
# min-slaves-to-write 3

# Only replicas whose lag is below min-slaves-max-lag seconds count as healthy.
# min-slaves-max-lag 10
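Once replication is configured and the nodes are started, the state can be verified on the master; a sketch using the address and password from this setup:

/data/redis/src/redis-cli -h 10.0.7.53 -a 123456 INFO replication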

3. Edit the sentinel configuration file

cat sentinel.conf

port 26379
dir "/tmp/sentinel"
logfile "/tmp/sentinel.log"
daemonize yes
sentinel monitor mymaster 10.0.7.53 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster 123456
sentinel failover-timeout mymaster 180000
sentinel config-epoch mymaster 2
sentinel leader-epoch mymaster 2
sentinel known-slave mymaster 10.0.7.51 6379
sentinel known-slave mymaster 10.0.7.50 6379
sentinel current-epoch 2
protected-mode no

If the configured dir path does not exist, create it manually:

mkdir -p /tmp/sentinel

Explanation of the sentinel parameters:

#sentinel monitor mymaster 10.0.7.53 6379 2
// this Sentinel monitors the master node at 10.0.7.53:6379
// the 2 means at least 2 Sentinels must agree before the master is judged failed
// mymaster is an alias for the master

#sentinel down-after-milliseconds mymaster 30000
// each Sentinel periodically PINGs the Redis nodes and the other Sentinels; anything that has not replied after 30000 milliseconds is judged unreachable

#sentinel parallel-syncs mymaster 1
// when the Sentinels agree the master has failed, the Sentinel leader performs the failover and picks a new master; the old replicas then replicate from it, and this setting limits how many of them may sync from the new master at the same time to 1

#sentinel failover-timeout mymaster 180000
// failover timeout: 180000 milliseconds

4. Start the services

Start the redis service on each server

/data/redis/src/redis-server /data/redis/etc/redis.conf

Start the sentinel service on each server

/data/redis/src/redis-sentinel /data/redis/sentinel.conf

View the cluster information

/data/redis/src/redis-cli -h 127.0.0.1 -p 26379 INFO Sentinel
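To watch a failover, ask any sentinel which node it currently considers the master, stop that redis instance, and re-run the query; SENTINEL get-master-addr-by-name is a standard sentinel command:

/data/redis/src/redis-cli -h 127.0.0.1 -p 26379 SENTINEL get-master-addr-by-name mymaster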


[Python] Upgrading python-2.7 to python-3.7


[Download and extract Python 3.7]

wget https://www.python.org/ftp/python/3.7.0/Python-3.7.0.tar.xz
tar -xvf Python-3.7.0.tar.xz

[Configure and build]

cd Python-3.7.0/
./configure --prefix=/usr/local/python3.7
make && make install

[Build error]

ModuleNotFoundError: No module named '_ctypes'
make: *** [install] Error 1

Fix:
Install libffi-devel:
yum install libffi-devel
Then rebuild:
make && make install

[Back up the old python]

ll /usr/bin/python*
lrwxrwxrwx. 1 root root 7 Apr 10 19:35 /usr/bin/python -> python2
lrwxrwxrwx. 1 root root 9 Apr 10 19:35 /usr/bin/python2 -> python2.7
-rwxr-xr-x. 1 root root 7136 Aug 4 2017 /usr/bin/python2.7
-------------------------
The system usually already keeps python2.7 behind the python2 symlink, so the python link can simply be replaced.

If there is no backup, make one first:
mv /usr/bin/python /usr/bin/python_old  # back up the old python

[Symlink the new python into place]

rm -rf /usr/bin/python  # remove the old python link first, or creating the new one fails
ln -s /usr/local/python3.7/bin/python3.7 /usr/bin/python  # create the symlink
python -V  # check the python version
# python 2.7 ships without pip; python 3.7 bundles it, so just symlink pip as well:
ln -s /usr/local/python3.7/bin/pip3 /usr/bin/pip

[After the upgrade, yum breaks and its script must be edited]

yum commands fail with:

 yum clean all
File "/usr/bin/yum", line 30
except KeyboardInterrupt, e:
^
SyntaxError: invalid syntax

Fixing yum:

Edit the /usr/bin/yum script:
change #!/usr/bin/python to: #!/usr/bin/python2.7
Then check that yum works again:
yum clean all

[A further yum issue after the upgrade:]

yum install tree -y 
----------------------------------
Error:
File "/usr/libexec/urlgrabber-ext-down", line 28
except OSError, e:
---------------------------------

Fix:

Edit the /usr/libexec/urlgrabber-ext-down script:
change #!/usr/bin/python to: #!/usr/bin/python2.7

