linuxea: ActiveMQ + ZooKeeper Cluster Configuration Notes

The ActiveMQ site documents three master/slave cluster options: Shared File System Master Slave, JDBC Master Slave, and Replicated LevelDB Store. This post uses the last one.
http://activemq.apache.org/masterslave.html — note that the official docs have deprecated LevelDB and recommend KahaDB instead.
How the cluster works:
Apache ZooKeeper coordinates which node is master. Once the master starts and begins accepting connections, the other nodes enter slave mode: they synchronize the persistent state from the master but do not accept client connections. Every persistent operation is replicated to the slaves. If the master dies, the slave with the most recent state is promoted to master; when the failed node recovers, it rejoins automatically as a slave.
Optionally, a proxy layer can also be placed in front of the brokers.
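Because slaves refuse client connections, clients usually connect through the failover transport so they reconnect automatically to whichever broker is currently master. A sketch of such a broker URL, assuming the brokers expose the default OpenWire port 61616:

```text
failover:(tcp://10.0.1.49:61616,tcp://10.0.1.61:61616,tcp://10.10.240.113:61616)?randomize=false
```

The failover transport keeps retrying the listed addresses until one of them (the current master) accepts the connection; `randomize=false` makes it try them in order.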

Download the packages:

wget http://mirror.rise.ph/apache/activemq/5.14.5/apache-activemq-5.14.5-bin.tar.gz
wget http://mirror.rise.ph/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
[root@DS-VM-Node49-linuxea /usr/local]# tar xf apache-activemq-5.14.5-bin.tar.gz 
[root@DS-VM-Node49-linuxea /usr/local]# tar xf zookeeper-3.4.10.tar.gz 
[root@DS-VM-Node49-linuxea /usr/local]# mkdir /data/{activemq,zookeeper,logs} -p
[root@DS-VM-Node49-linuxea /usr/local]# ln -s zookeeper-3.4.10 zookeeper
[root@DS-VM-Node49-linuxea /usr/local]# ln -s apache-activemq-5.14.5 activemq

Download the JDK:

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
axel -n 30 https://mirrors.dtops.cc/java/8/8u111-b14/jdk-8u111-linux-x64.rpm
yum install jdk-8u111-linux-x64.rpm -y 

Firewall rules (note the port list must include the replicatedLevelDB bind port used below):

iptables -I INPUT  -p tcp -m tcp -m state --state NEW -m multiport --dports 2181,2888,3888,1883,61613:61617,51621,5672,8161 -m comment --comment "mq_zook" -j ACCEPT

zookeeper - node1 (10.0.1.49):

[root@DS-VM-Node49-linuxea ~]# vim /usr/local/zookeeper/conf/zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/logs
clientPort=2181
maxClientCnxns=0
autopurge.snapRetainCount=3
autopurge.purgeInterval=5
server.1=10.0.1.49:2888:3888
server.2=10.0.1.61:2889:3889
server.3=10.10.240.113:2890:3890
[root@DS-VM-Node49-linuxea /data/zookeeper]# echo 1 > myid
[root@DS-VM-Node49-linuxea /data/zookeeper]# ls
myid
[root@DS-VM-Node49-linuxea /data/zookeeper]# cat myid 
1
[root@DS-VM-Node49-linuxea /data/zookeeper]# 

zookeeper - node3 (10.10.240.113):

[root@DS-VM-Node113-linuxea /usr/local]# cat /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/logs
clientPort=2181
maxClientCnxns=0
autopurge.snapRetainCount=3
autopurge.purgeInterval=5
server.1=10.0.1.49:2888:3888
server.2=10.0.1.61:2889:3889
server.3=10.10.240.113:2890:3890
[root@DS-VM-Node113-linuxea /usr/local]# echo 3 > /data/zookeeper/myid
[root@DS-VM-Node113-linuxea /usr/local]# cat /data/zookeeper/myid 
3
[root@DS-VM-Node113-linuxea /usr/local]# 

zookeeper - node2 (10.0.1.61):

[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# cat /usr/local/zookeeper/conf/zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/logs
clientPort=2181
maxClientCnxns=0
autopurge.snapRetainCount=3
autopurge.purgeInterval=5
server.1=10.0.1.49:2888:3888
server.2=10.0.1.61:2889:3889
server.3=10.10.240.113:2890:3890
[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# echo 2 > /data/zookeeper/myid
[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# cat /data/zookeeper/myid
2
[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# 
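With the three zoo.cfg files and myid values in place, start ZooKeeper on each node. A minimal sketch, run on every node (ZK_HOME is an assumption matching the symlink created earlier):

```shell
# Start ZooKeeper and print its role; guarded so the script degrades
# gracefully when ZooKeeper is not installed at the expected path.
ZK_HOME=${ZK_HOME:-/usr/local/zookeeper}
if [ -x "$ZK_HOME/bin/zkServer.sh" ]; then
    "$ZK_HOME/bin/zkServer.sh" start
    "$ZK_HOME/bin/zkServer.sh" status   # prints Mode: leader or Mode: follower
else
    echo "zkServer.sh not found under $ZK_HOME"
fi
```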

After all nodes are started, run `zkServer.sh status` on each to check its state: the nodes report follower and leader respectively.
Excerpt from the web:
ZooKeeper's communication architecture
A ZooKeeper ensemble has three roles: client, follower, and leader. Clients issue requests; followers accept client requests, take part in transaction confirmation, and vote in leader election after a leader crash; the leader coordinates transactions. The leader can also be configured to accept client requests directly, in which case client-to-leader communication works exactly like client-to-follower communication, so for simplicity the description above only considers clients and followers. Follower and leader roles can switch over time: after the leader crashes, a follower may be elected leader by the quorum, and a crashed leader that rejoins the quorum comes back as a follower. Apart from the leader-election window, a quorum always has exactly one leader and multiple followers. See http://zoutm.iteye.com/blog/708447 for more.
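The leader/follower split can also be checked remotely with ZooKeeper's four-letter-word commands, assuming `nc` is available on the querying machine:

```shell
# Ask each ensemble member for its role via the "srvr" four-letter command;
# the reply contains a line like "Mode: leader" or "Mode: follower".
for host in 10.0.1.49 10.0.1.61 10.10.240.113; do
    printf '%s: ' "$host"
    echo srvr | nc -w 2 "$host" 2181 | grep '^Mode:' || echo unreachable
done
```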

[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@DS-VM-Node61-linuxea /usr/local/zookeeper]# 

MQ:
Back up the configuration file before editing it:

[root@DS-VM-Node61-linuxea /usr/local/activemq/conf]# cp activemq.xml activemq.xml.bak

First change: brokerName, which must be identical on all three nodes:

 <broker xmlns="http://activemq.apache.org/schema/core" brokerName="linuxea" dataDirectory="${activemq.data}">

Second change: replace the default persistenceAdapter with replicatedLevelDB. LevelDB has been deprecated upstream, but it still works:

        <persistenceAdapter>
            <replicatedLevelDB
              directory="${activemq.data}/leveldb"
              replicas="3"
              bind="tcp://0.0.0.0:51621"
              zkAddress="10.0.1.49:2181,10.0.1.61:2181,10.10.240.113:2181"
              hostname="10.0.1.49"
              zkPath="/activemq/leveldb-stores"
            />
        </persistenceAdapter>
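replicas="3" means writes must reach a majority quorum of the stores before they are acknowledged, so with N replicas the cluster survives the loss of N minus quorum nodes. A quick sketch of the arithmetic:

```shell
# Quorum size for a replicated LevelDB store: a majority of "replicas".
replicas=3
quorum=$(( replicas / 2 + 1 ))
echo "replicas=$replicas quorum=$quorum tolerated_failures=$(( replicas - quorum ))"
# replicas=3 quorum=2 tolerated_failures=1
```

With replicas="3", at least two nodes must be online for the cluster to serve clients.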

node1-10.0.1.49:

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="linuxea" dataDirectory="${activemq.data}">

        <persistenceAdapter>
            <replicatedLevelDB
              directory="${activemq.data}/leveldb"
              replicas="3"
              bind="tcp://0.0.0.0:51621"
              zkAddress="10.0.1.49:2181,10.0.1.61:2181,10.10.240.113:2181"
              hostname="10.0.1.49"
              zkPath="/activemq/leveldb-stores"
            />
        </persistenceAdapter>

node2-10.0.1.61:

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="linuxea" dataDirectory="${activemq.data}">
    
        <persistenceAdapter>
            <replicatedLevelDB
              directory="${activemq.data}/leveldb"
              replicas="3"
              bind="tcp://0.0.0.0:51621"
              zkAddress="10.0.1.49:2181,10.0.1.61:2181,10.10.240.113:2181"
              hostname="10.0.1.61"
              zkPath="/activemq/leveldb-stores"
            />
        </persistenceAdapter>

node3-10.10.240.113:

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="linuxea" dataDirectory="${activemq.data}">
    
    <persistenceAdapter>
        <replicatedLevelDB
          directory="${activemq.data}/leveldb"
          replicas="3"
          bind="tcp://0.0.0.0:51621"
          zkAddress="10.0.1.49:2181,10.0.1.61:2181,10.10.240.113:2181"
          hostname="10.10.240.113"
          zkPath="/activemq/leveldb-stores"
        />
    </persistenceAdapter>

Once all three nodes are configured, simply start them.
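A start-and-verify sketch (paths assume the symlinks created earlier; the zkCli.sh check is an optional way to see each broker registering under zkPath):

```shell
# Start the broker on every node; only the elected master listens on 61616,
# slaves stay dark until they are promoted.
AMQ_HOME=${AMQ_HOME:-/usr/local/activemq}
if [ -x "$AMQ_HOME/bin/activemq" ]; then
    "$AMQ_HOME/bin/activemq" start
    # Optionally inspect the election znodes from any ZooKeeper node:
    # /usr/local/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 ls /activemq/leveldb-stores
else
    echo "activemq script not found under $AMQ_HOME"
fi
```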


Date: 2017-08-16  Category: MQ

Tags: mq, activemq
