This article covers a pseudo-cluster installation, i.e. simulating a 3-server ZooKeeper ensemble on a single machine.
1. Download and extract
Extract the downloaded ZooKeeper release into three copies and rename them, e.g. zk1, zk2, and zk3; in this article they are placed under /opt/zookeeper/.
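A rough sketch of this step, assuming the release tarball (here zookeeper-3.4.9.tar.gz, matching the version mentioned later in the article) has already been downloaded to the current directory; the tarball name is an assumption, so adjust it to whatever you downloaded. When the tarball is absent, the script only creates a directory skeleton for illustration.

```shell
#!/bin/sh
# Unpack the release three times under /opt/zookeeper as zk1, zk2, zk3.
mkdir -p /opt/zookeeper
for i in 1 2 3; do
  if [ -f zookeeper-3.4.9.tar.gz ]; then
    tar -xzf zookeeper-3.4.9.tar.gz
    mv zookeeper-3.4.9 "/opt/zookeeper/zk$i"       # rename the extracted copy
  else
    # tarball not present: create the skeleton so later steps have a target
    mkdir -p "/opt/zookeeper/zk$i/conf" "/opt/zookeeper/zk$i/bin"
  fi
done
```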
2. Edit each instance's conf/zoo.cfg
/opt/zookeeper/zk1/conf/zoo.cfg contains:
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/zk1/data
dataLogDir=/opt/zookeeper/zk1/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
#server.1=10.8.12.147:20881:30881
#server.2=10.8.12.147:20882:30882
#server.3=10.8.12.147:20883:30883
server.1=localhost:20881:30881
server.2=localhost:20882:30882
server.3=localhost:20883:30883
```

/opt/zookeeper/zk2/conf/zoo.cfg contains:
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/zk2/data
dataLogDir=/opt/zookeeper/zk2/log
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
#server.1=10.8.12.147:20881:30881
#server.2=10.8.12.147:20882:30882
#server.3=10.8.12.147:20883:30883
server.1=localhost:20881:30881
server.2=localhost:20882:30882
server.3=localhost:20883:30883
```

/opt/zookeeper/zk3/conf/zoo.cfg contains:
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/zk3/data
dataLogDir=/opt/zookeeper/zk3/log
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
#server.1=10.8.12.147:20881:30881
#server.2=10.8.12.147:20882:30882
#server.3=10.8.12.147:20883:30883
server.1=localhost:20881:30881
server.2=localhost:20882:30882
server.3=localhost:20883:30883
```

2.1 Since everything runs locally, localhost is used here. If the three servers were deployed on separate machines, these three entries would instead be each server's internal IP address.
2.2 Ports 20881, 20882, and 20883 are the ports the ZooKeeper servers use to communicate with each other, i.e. the port on which each server exchanges information with the ensemble's leader.
2.3 Ports 30881, 30882, and 30883 are for leader election: if the ensemble's leader dies, the servers talk to each other over these ports to elect a new leader. Because all three instances share the same IP address in this pseudo-cluster setup, their communication ports must not collide, so each instance is assigned its own port numbers.
3. Create the data and log directories in advance
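A one-loop sketch for this step, matching the dataDir and dataLogDir values in the three configs above:

```shell
#!/bin/sh
# Pre-create the snapshot (data) and transaction-log (log) directories
# for all three instances, as referenced by each zoo.cfg.
for i in 1 2 3; do
  mkdir -p "/opt/zookeeper/zk$i/data" "/opt/zookeeper/zk$i/log"
done
```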
4. Under the directory that each server's dataDir points to, you must create a file named myid whose content matches the x in the corresponding server.x line of zoo.cfg, i.e.:
/opt/zookeeper/zk1/data/myid contains 1, matching the 1 in server.1
/opt/zookeeper/zk2/data/myid contains 2, matching the 2 in server.2
/opt/zookeeper/zk3/data/myid contains 3, matching the 3 in server.3
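The three myid files can be written in one loop; the loop index doubles as the file content, which keeps each file consistent with its server.x line:

```shell
#!/bin/sh
# Write each instance's myid; its content must equal the x of the
# matching server.x line in zoo.cfg.
for i in 1 2 3; do
  mkdir -p "/opt/zookeeper/zk$i/data"        # in case step 3 was skipped
  echo "$i" > "/opt/zookeeper/zk$i/data/myid"
done
```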
5. Start and test
5.1 In each of the three instances' bin/ directories (e.g. /opt/zookeeper/zk1/bin/), run: ./zkServer.sh start
5.2 Run jps to check the processes; QuorumPeerMain is the ZooKeeper process, and its presence indicates a successful start.
5.3 When the ensemble starts, each node tries to connect to the other nodes, and a node started earlier naturally cannot reach the ones not yet started. So an exception in the log such as [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 2 at election address **** is normal; the other nodes will log similar errors, and they stop appearing once the last node has started.
5.4 You can also check each node's role (Leader or Follower): in each instance's directory, run ./bin/zkServer.sh status
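Steps 5.1 and 5.4 can be wrapped in a small loop over all three instances. The sketch below only echoes the command (a dry run) when zkServer.sh is not actually present, so the instance paths are the only assumption:

```shell
#!/bin/sh
# Start all three instances, then query each one's role (Leader/Follower).
planned=""
for action in start status; do
  for i in 1 2 3; do
    srv="/opt/zookeeper/zk$i/bin/zkServer.sh"
    if [ -x "$srv" ]; then
      "$srv" "$action"                          # really run it
    else
      echo "would run: $srv $action"            # dry run: ZooKeeper absent
    fi
    planned="$planned $srv:$action"             # record what was attempted
  done
done
```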
6. Stopping the service
6.1 In each node's ZooKeeper directory, run ./bin/zkServer.sh stop
6.2 To restart the ZooKeeper service, run ./bin/zkServer.sh restart in each node's ZooKeeper directory.
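The same loop shape works for stopping or restarting the whole pseudo-cluster; change the action variable to restart to bounce the ensemble. Again a sketch that dry-runs when the scripts are missing:

```shell
#!/bin/sh
# Stop (or restart) every instance of the pseudo-cluster.
action="stop"                                   # change to "restart" to bounce
last=""
for i in 1 2 3; do
  srv="/opt/zookeeper/zk$i/bin/zkServer.sh"
  if [ -x "$srv" ]; then
    "$srv" "$action"
  else
    echo "would run: $srv $action"              # dry run: ZooKeeper absent
  fi
  last="$srv $action"
done
```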
7. Connecting a client
./zkCli.sh -server localhost:2181
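Once connected, you can verify the ensemble end to end by creating a znode through one instance and reading it back through another (zkCli.sh in 3.4.x accepts a single command after -server). The znode name /demo is made up for this example, and the commands below only really execute if the client script exists:

```shell
#!/bin/sh
# Smoke-test the ensemble: write via zk1's client port, read via zk2's.
cli="/opt/zookeeper/zk1/bin/zkCli.sh"
if [ -x "$cli" ]; then
  "$cli" -server localhost:2181 create /demo "hello"
  "$cli" -server localhost:2182 get /demo       # any member should see it
else
  echo "would run: $cli -server localhost:2181" # dry run: ZooKeeper absent
fi
```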
Original article: https://www.f2er.com/ubuntu/353028.html