Building a highly available MongoDB cluster with Docker
Published: 2019-04-29



References

  • Docker-based MongoDB master-slave cluster: http://www.imooc.com/article/details/id/258885
  • mongodb: a docker-compose replica set with one primary, two secondaries, and one arbiter: https://blog.csdn.net/weixin_34117522/article/details/94609770
  • MongoDB cluster setup and overview: https://blog.csdn.net/wangshuang1631/article/details/53857319

Preface: a look at database upgrade options

  • One primary, one secondary
  • One primary, two secondaries
  • One primary, one secondary, one arbiter (used in this article)
  • Sharded cluster (not covered here)

1. Start a single-node mongod service for each member with Docker

```
# Primary node (master)
docker run --name mongo-master --restart=always -d --net="bridge" -p 27017:27017 \
  -v /root/fct/mongocluster/master/data/db:/data/db \
  -v /root/fct/mongocluster/master/logs/mongodb:/var/log/mongodb \
  mongo:3.4 \
  /bin/sh -c 'mongod --dbpath /data/db --replSet annosys'

# Secondary node (slave1)
docker run --name mongo-slave1 --restart=always -d --net="bridge" -p 27027:27017 \
  -v /root/fct/mongocluster/slave1/data/db:/data/db \
  -v /root/fct/mongocluster/slave1/logs/mongodb:/var/log/mongodb \
  mongo:3.4 \
  /bin/sh -c 'mongod --dbpath /data/db --replSet annosys'

# Arbiter node
docker run --name mongo-arbiter -d --net="bridge" -p 27037:27017 \
  mongo:3.4 \
  /bin/sh -c 'mongod --dbpath /data/db --replSet annosys --smallfiles'
```

Next, wire the individual mongod instances together into a single replica set.

Reference: https://blog.csdn.net/wangshuang1631/article/details/53857319

2. Initialize the replica set configuration

2.1 Log in to MongoDB on any of the three nodes

Enter any one of the mongo containers and start the shell:

mongo

Switch to the admin database:

use admin

Define the replica set configuration variable. The _id: "annosys" here must match the --replSet annosys option used when starting the containers above.

```
config = {
    _id: "annosys",
    members: [
        { _id: 0, host: '172.19.32.142:27017', priority: 5 },
        { _id: 1, host: '172.19.32.142:27027', priority: 3 },
        { _id: 2, host: '172.19.32.142:27037', arbiterOnly: true }
    ]
}
```

Initialize the replica set:

rs.initiate(config)

Check the status of the cluster nodes:

rs.status()

```
annosys:SECONDARY> rs.status()
{
    "set" : "annosys",
    "date" : ISODate("2019-10-23T09:43:42.784Z"),
    "myState" : 2,
    "term" : NumberLong(1),
    "syncingTo" : "172.19.32.142:27017",
    "syncSourceHost" : "172.19.32.142:27017",
    "syncSourceId" : 0,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1571823820, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1571823820, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1571823820, 1), "t" : NumberLong(1) }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "172.19.32.142:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3032,
            "optime" : { "ts" : Timestamp(1571823820, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1571823820, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-10-23T09:43:40Z"),
            "optimeDurableDate" : ISODate("2019-10-23T09:43:40Z"),
            "lastHeartbeat" : ISODate("2019-10-23T09:43:42.362Z"),
            "lastHeartbeatRecv" : ISODate("2019-10-23T09:43:42.036Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1571820799, 1),
            "electionDate" : ISODate("2019-10-23T08:53:19Z"),
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "172.19.32.142:27027",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 4498,
            "optime" : { "ts" : Timestamp(1571823820, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-10-23T09:43:40Z"),
            "syncingTo" : "172.19.32.142:27017",
            "syncSourceHost" : "172.19.32.142:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 2,
            "name" : "172.19.32.142:27037",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 3032,
            "lastHeartbeat" : ISODate("2019-10-23T09:43:42.362Z"),
            "lastHeartbeatRecv" : ISODate("2019-10-23T09:43:40.850Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
```

All nodes report a healthy state.
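The health check can also be done programmatically. Below is a minimal Python sketch (illustrative only; the helper name `all_members_healthy` is my own) that inspects an rs.status()-style document like the one above and confirms every member reports health 1:

```python
def all_members_healthy(status):
    """Return True if the rs.status()-style document reports success
    and every replica-set member has health == 1."""
    if status.get("ok") != 1:
        return False
    return all(m.get("health") == 1 for m in status.get("members", []))

# Trimmed-down rs.status() document, mirroring the output above
status = {
    "set": "annosys",
    "ok": 1,
    "members": [
        {"_id": 0, "stateStr": "PRIMARY", "health": 1},
        {"_id": 1, "stateStr": "SECONDARY", "health": 1},
        {"_id": 2, "stateStr": "ARBITER", "health": 1},
    ],
}
print(all_members_healthy(status))  # True
```

In practice you would fetch the status document from the admin database with the replSetGetStatus command instead of hard-coding it.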

3. Replication check, read/write separation test, and failover simulation

3.1 Data synchronization test

If you query a secondary during the check, you may hit the error not master and slaveOk=false:

```
annosys:SECONDARY> use test;
switched to db test
annosys:SECONDARY> show tables;
2019-10-23T09:44:28.838+0000 E QUERY    [thread1] Error: listCollections failed: {
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:807:1
DB.prototype.getCollectionInfos@src/mongo/shell/db.js:819:19
DB.prototype.getCollectionNames@src/mongo/shell/db.js:830:16
shellHelper.show@src/mongo/shell/utils.js:807:9
shellHelper@src/mongo/shell/utils.js:704:15
```

Running rs.slaveOk() on the secondary fixes this by allowing reads on that connection:

```
annosys:SECONDARY> rs.slaveOk()
annosys:SECONDARY> use test
switched to db test
annosys:SECONDARY> db.testdb.find()
{ "_id" : ObjectId("5db01956f6d1094b8e5c0a16"), "test1" : "testval1" }
{ "_id" : ObjectId("5db01f167be4feac94c9f596"), "name" : "fct----1" }
{ "_id" : ObjectId("5db01f167be4feac94c9f597"), "name" : "fct----2" }
{ "_id" : ObjectId("5db01f167be4feac94c9f599"), "name" : "fct----4" }
{ "_id" : ObjectId("5db01f167be4feac94c9f598"), "name" : "fct----3" }
```

If the data written on the primary appears on the secondary, replication is configured correctly.

3.2 Read/write separation check

In the current cluster, data can only be written on the master; the slave serves reads.

Attempting a write on the slave fails with the following error:

```
annosys:SECONDARY> db.testdb.insert({ msg: 'this is from primary change yyy', ts: new Date() })
WriteResult({ "writeError" : { "code" : 10107, "errmsg" : "not master" } })
```

The same write succeeds on the master node:

```
annosys:PRIMARY> db.testdb.insert({ msg: 'this is from primary change xxx', ts: new Date() })
WriteResult({ "nInserted" : 1 })
```

This shows that reads and writes are separated across the nodes.
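Instead of calling rs.slaveOk() in every shell session, a client application can request secondary reads through the connection string's readPreference option. A small Python sketch (the helper function is my own; the URI options replicaSet and readPreference are standard MongoDB connection string parameters):

```python
def replica_set_uri(hosts, repl_set, read_preference="secondaryPreferred"):
    """Build a MongoDB connection URI that targets a replica set and
    routes reads according to the given read preference."""
    return "mongodb://{}/?replicaSet={}&readPreference={}".format(
        ",".join(hosts), repl_set, read_preference)

# Hosts and set name from the cluster built in this article
uri = replica_set_uri(
    ["172.19.32.142:27017", "172.19.32.142:27027", "172.19.32.142:27037"],
    "annosys")
print(uri)
```

With secondaryPreferred, reads go to a secondary when one is available and fall back to the primary otherwise; writes always go to the primary.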

3.3 Failover simulation

Stop the master container with docker stop, then run inserts from inside the slave container:

```
annosys:SECONDARY> db.testdb.insert({ msg: 'this is from primary change yyy', ts: new Date() })
WriteResult({ "writeError" : { "code" : 10107, "errmsg" : "not master" } })
annosys:SECONDARY> db.testdb.insert({ msg: 'this is from primary change yyy', ts: new Date() })
WriteResult({ "nInserted" : 1 })
annosys:PRIMARY> db.testdb.insert({ msg: 'this is from primary change yyy', ts: new Date() })
WriteResult({ "nInserted" : 1 })
```

The results show:

① slave1, which previously could only serve reads, now accepts writes;

② the shell prompt changed from annosys:SECONDARY to annosys:PRIMARY, meaning the former secondary was promoted to primary.

Failover completed successfully.

This concludes the MongoDB cluster setup and the high-availability tests!

(To add another secondary later, update the replica set configuration and reapply it, e.g. with rs.add() or rs.reconfig(); test the procedure before relying on it.)
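To make the "add a secondary later" step concrete, the sketch below shows how the replica set configuration document changes when a member is appended, which is what rs.add() does under the hood before calling rs.reconfig(). This is an illustration only: the `add_member` helper and the extra host 172.19.32.142:27047 are made up; the document shape (_id, version, members, priority) matches the config used earlier in this article.

```python
def add_member(config, host, priority=1):
    """Return a new replica-set config with `host` appended as a member
    and the config version bumped, as rs.reconfig() expects."""
    return {
        "_id": config["_id"],
        "version": config.get("version", 1) + 1,
        "members": list(config["members"]) + [{
            # New member gets the next free _id
            "_id": max(m["_id"] for m in config["members"]) + 1,
            "host": host,
            "priority": priority,
        }],
    }

# Current config, mirroring the one initialized above
config = {
    "_id": "annosys",
    "version": 1,
    "members": [
        {"_id": 0, "host": "172.19.32.142:27017", "priority": 5},
        {"_id": 1, "host": "172.19.32.142:27027", "priority": 3},
        {"_id": 2, "host": "172.19.32.142:27037", "arbiterOnly": True},
    ],
}
new_config = add_member(config, "172.19.32.142:27047", priority=2)
print(new_config["version"], len(new_config["members"]))  # 2 4
```

On a live cluster you would pass the resulting document to rs.reconfig() on the primary.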

4. Java connection test and a GUI client

4.1 For a GUI client, Robo 3T is a good choice for connecting to a MongoDB cluster.


4.2 Java connection test against the MongoDB cluster

```
package mongotest;

import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;
import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;

/**
 * Connect to the MongoDB replica set from Java and test high availability.
 *
 * @author fangchangtan
 * @version 2018-11-26
 */
public class TestMongoDBReplSet {

    public static void main(String[] args) {
        try {
            // Configure the Java driver with all members of the replica set
            List<ServerAddress> addresses = new ArrayList<>();
            addresses.add(new ServerAddress("172.19.32.142", 27017));
            addresses.add(new ServerAddress("172.19.32.142", 27027));
            addresses.add(new ServerAddress("172.19.32.142", 27037));

            MongoClient mongoClient = new MongoClient(addresses);
            MongoDatabase database = mongoClient.getDatabase("test");
            MongoCollection<Document> collection = database.getCollection("testdb");

            for (int i = 0; i < 1000; i++) {
                System.out.println("==========================");
                Thread.sleep(2000);
                // Insert a document into the cluster
                collection.insertOne(new Document("name", "dog" + i));
                // Read back the records
                FindIterable<Document> find = collection.find();
                for (Document doc : find) {
                    System.out.println(doc);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

The cluster handles inserts and queries normally. But when a failover occurs and a secondary is promoted to primary, the client throws an exception:

```
2019-10-23 20:21:05,972 WARN [org.mongodb.driver.connection] - Got socket exception on connection [connectionId{localValue:4, serverValue:15}] to 172.19.32.142:27017. All connections to 172.19.32.142:27017 will be closed.
com.mongodb.MongoSocketReadException: Prematurely reached end of stream
    at com.mongodb.connection.SocketStream.read(SocketStream.java:88)
    at com.mongodb.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:494)
    at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:224)
    at com.mongodb.connection.UsageTrackingInternalConnection.receiveMessage(UsageTrackingInternalConnection.java:96)
    at com.mongodb.connection.DefaultConnectionPool$PooledConnection.receiveMessage(DefaultConnectionPool.java:440)
    at com.mongodb.connection.WriteCommandProtocol.receiveMessage(WriteCommandProtocol.java:262)
    at com.mongodb.connection.WriteCommandProtocol.execute(WriteCommandProtocol.java:104)
    at com.mongodb.connection.InsertCommandProtocol.execute(InsertCommandProtocol.java:67)
    at com.mongodb.connection.InsertCommandProtocol.execute(InsertCommandProtocol.java:37)
    at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:168)
    at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:289)
    at com.mongodb.connection.DefaultServerConnection.insertCommand(DefaultServerConnection.java:118)
    at com.mongodb.operation.MixedBulkWriteOperation$Run$2.executeWriteCommandProtocol(MixedBulkWriteOperation.java:465)
    at com.mongodb.operation.MixedBulkWriteOperation$Run$RunExecutor.execute(MixedBulkWriteOperation.java:656)
    at com.mongodb.operation.MixedBulkWriteOperation$Run.execute(MixedBulkWriteOperation.java:411)
    at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:177)
    at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:168)
    at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:426)
    at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:417)
    at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:168)
    at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:74)
    at com.mongodb.Mongo.execute(Mongo.java:845)
    at com.mongodb.Mongo$2.execute(Mongo.java:828)
    at com.mongodb.MongoCollectionImpl.executeSingleWriteRequest(MongoCollectionImpl.java:550)
    at com.mongodb.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:317)
    at com.mongodb.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:307)
    at mongotest.TestMongoDBReplSet.main(TestMongoDBReplSet.java:39)
```

The error is com.mongodb.MongoSocketReadException: Prematurely reached end of stream.

The fix is to wrap the operations in try-catch, then reconnect to the cluster and retry. (The code is not shown here; the approach is the same as handling a Redis cluster failover.)
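The retry logic is generic, so here is a minimal sketch of it in Python rather than Java (the `retry` helper, the stub `flaky_insert`, and the choice of ConnectionError are all illustrative; in the Java client above you would catch MongoSocketReadException the same way):

```python
import time

def retry(operation, attempts=5, delay=1.0, exceptions=(ConnectionError,)):
    """Run `operation`, retrying on transient connection errors so that
    a replica-set failover does not kill the client loop."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except exceptions:
            if attempt == attempts:
                raise  # failover did not complete in time; give up
            time.sleep(delay)  # wait for a new primary to be elected

# Simulate an insert that fails twice during the election, then succeeds
calls = {"n": 0}
def flaky_insert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Prematurely reached end of stream")
    return "inserted"

print(retry(flaky_insert, attempts=5, delay=0))  # inserted
```

The key design point is bounding the retries: if no primary is elected within the retry budget, the exception should propagate rather than loop forever.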


5. Common problems when building a MongoDB cluster

5.1 Common startup WARNINGs that can be ignored

After the setup, enter any mongo container and run mongo:

```
root@a10c7a769a4a:/# mongo
MongoDB shell version v3.4.23
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.23
Server has startup warnings:
2019-10-23T08:25:41.590+0000 I CONTROL  [initandlisten]
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten]
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten]
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten]
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-10-23T08:25:41.591+0000 I CONTROL  [initandlisten]
```

Four WARNING messages appear. They are security and performance recommendations and have essentially no impact on basic use of the cluster.

5.2 Error: No host described in new configuration 1 for replica set annosys maps to this node

```
> use admin
switched to db admin
> config={
    _id:"annosys",
    members:[
        { _id:0, host:'mongo-master:27017',  priority:5 },
        { _id:1, host:'mongo-slave1:27017',  priority:3 },
        { _id:3, host:'mongo-arbiter:27017', arbiterOnly:true }
    ]}
{
    "_id" : "annosys",
    "members" : [
        { "_id" : 0, "host" : "mongo-master:27017", "priority" : 5 },
        { "_id" : 1, "host" : "mongo-slave1:27017", "priority" : 3 },
        { "_id" : 3, "host" : "mongo-arbiter:27017", "arbiterOnly" : true }
    ]
}
> rs.initiate(config)
{
    "ok" : 0,
    "errmsg" : "No host described in new configuration 1 for replica set annosys maps to this node",
    "code" : 93,
    "codeName" : "InvalidReplicaSetConfig"
}
```

Initialization fails with InvalidReplicaSetConfig: none of the hosts listed in the config maps to the current node, because the container names cannot be resolved.

Replace the container names such as mongo-master with the real host IP and mapped ports, here 172.19.32.142:

```
> config={
    _id:"annosys",
    members:[
        { _id:0, host:'172.19.32.142:27017', priority:5 },
        { _id:1, host:'172.19.32.142:27027', priority:3 },
        { _id:2, host:'172.19.32.142:27037', arbiterOnly:true }
    ]}
> rs.initiate(config)
{ "ok" : 1 }
```