Using ZooKeeper with Docker

Apache ZooKeeper is an open-source server that enables highly reliable distributed coordination.
This section documents how to use ZooKeeper inside Docker.

Image

docker pull zookeeper

Starting a ZooKeeper server instance

Starting a ZooKeeper instance is straightforward:

docker run --name some-zookeeper --restart always -d zookeeper

Because ZooKeeper "fails fast", it is best to always restart it, hence --restart always.

You can add the -p flag here to map the port to a host port:

docker run --name some-zookeeper -p 2181:2181 --restart always -d zookeeper

This maps the container's port 2181 to port 2181 on the host, so Java programs and other clients can connect to it directly at 127.0.0.1:2181.
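
As a quick smoke test of the mapped port, here is a minimal sketch of a Java client connecting to 127.0.0.1:2181, assuming the org.apache.zookeeper client library is on the classpath (the /demo path is just an example):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkSmokeTest {
    public static void main(String[] args) throws Exception {
        // connect to the port published by the container on the host
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000,
                event -> System.out.println("zk event: " + event));
        // create a node, read it back, then disconnect
        zk.create("/demo", "hello".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        System.out.println(new String(zk.getData("/demo", false, null)));
        zk.close();
    }
}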

Connecting to ZooKeeper from an application in another Docker container

docker run --name some-app --link some-zookeeper:zookeeper -d application-that-uses-zookeeper

Connecting to ZooKeeper from the ZooKeeper command-line client

docker run -it --rm --link some-zookeeper:zookeeper zookeeper zkCli.sh -server zookeeper

Viewing the logs

docker logs -f e36790ea5c5e

Here e36790ea5c5e is the container ID, which you can look up with docker container ls.

Installing ZooKeeper in cluster mode on CentOS

This article walks through setting up ZooKeeper on a four-node cluster. ZooKeeper can run in three modes: standalone, pseudo-cluster, and cluster; cluster mode is used here.

Environment

  • Virtual machine: VMware Workstation 12 Player
  • Linux version: CentOS release 6.4 (Final)
  • ZooKeeper version: zookeeper-3.4.5-cdh5.7.6.tar.gz

  • Cluster nodes:

    • master: 192.168.137.11, 1 GB RAM
    • slave1: 192.168.137.12, 512 MB RAM
    • slave2: 192.168.137.13, 512 MB RAM
    • slave3: 192.168.137.14, 512 MB RAM
  • Prerequisites: Java installed, passwordless SSH login configured, firewall stopped, etc.

Uploading the installation package

Upload the downloaded zookeeper-3.4.5-cdh5.7.6.tar.gz package to a chosen directory on CentOS, for example /opt.
There are many ways to upload it; here the rz command is used in SecureCRT.

Extract the package:

tar -zxf zookeeper-3.4.5-cdh5.7.6.tar.gz

Rename the directory:

mv zookeeper-3.4.5-cdh5.7.6 zookeeper

Editing the configuration file

The configuration template is zoo_sample.cfg in the conf folder of the installation directory; copy it first under a new name:

[root@master conf]# pwd
/opt/zookeeper/conf
[root@master conf]# cp zoo_sample.cfg zoo.cfg
[root@master conf]# ll
total 16
-rw-rw-r--. 1 root root  535 Feb 22  2017 configuration.xsl
-rw-rw-r--. 1 root root 2693 Feb 22  2017 log4j.properties
-rw-r--r--. 1 root root  808 Jan 23 10:06 zoo.cfg
-rw-rw-r--. 1 root root  808 Feb 22  2017 zoo_sample.cfg

Edit the zoo.cfg configuration file:

tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/tmp
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
dataLogDir=/opt/zookeeper/logs
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
server.4=slave3:2888:3888

Parameter descriptions:

  • tickTime: the basic time unit used by ZooKeeper, in milliseconds.
  • dataDir: the data directory; it can be any directory.
  • dataLogDir: the log directory, which can likewise be any directory; if it is not set, it defaults to the same value as dataDir.
  • clientPort: the port that listens for client connections.
  • initLimit: a ZooKeeper cluster contains several servers, one of which is the leader and the rest followers. initLimit bounds the heartbeat time between a follower and the leader while the initial connection is established. It is set to 10 here, i.e. a limit of 10 times tickTime, 10*2000=20000ms=20s.
  • syncLimit: bounds the time allowed for sending messages, requests and replies between the leader and a follower. It is set to 5 here, i.e. 5 times tickTime, 10000ms=10s.
  • server.X=A:B:C, where X is a number identifying the server, A is that server's IP address, B is the port the server uses to exchange messages with the cluster leader, and C is the port used for leader election.

Since we changed dataDir, create the corresponding directories under the zookeeper folder (the myid file will be created there later):

mkdir /opt/zookeeper/tmp

mkdir /opt/zookeeper/logs

Copying the installation to the other nodes

Copy the zookeeper folder to the other three servers:

scp -r /opt/zookeeper/ root@slave1:/opt
scp -r /opt/zookeeper/ root@slave2:/opt
scp -r /opt/zookeeper/ root@slave3:/opt

On the master node, create the myid file on every node with the following commands; the id in each file must match the corresponding server.X entry in zoo.cfg:

[root@master zookeeper]# echo 1 > /opt/zookeeper/tmp/myid
[root@master zookeeper]# ssh slave1 "echo 2 > /opt/zookeeper/tmp/myid"
[root@master zookeeper]# ssh slave2 "echo 3 > /opt/zookeeper/tmp/myid"
[root@master zookeeper]# ssh slave3 "echo 4 > /opt/zookeeper/tmp/myid"

Starting ZooKeeper

Since no environment variables have been configured, run the script with its full path:

[root@master zookeeper]# /opt/zookeeper/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

The intent behind setting dataLogDir was to have the startup log written to the configured folder, but that is not what it controls: the log file zookeeper.out is still generated under the ZooKeeper installation directory. (dataLogDir only affects the transaction logs; the location of zookeeper.out is governed by the ZOO_LOG_DIR environment variable, which defaults to the directory the script is run from.)

Inspecting zookeeper.out reveals an error:

2018-01-23 10:48:35,470 [myid:] - INFO  [main:QuorumPeerConfig@101] - Reading configuration from: /opt/zookeeper/bin/../conf/zoo.cfg
2018-01-23 10:48:35,484 [myid:] - WARN  [main:QuorumPeerConfig@290] - Non-optimial configuration, consider an odd number of servers.
2018-01-23 10:48:35,484 [myid:] - INFO  [main:QuorumPeerConfig@334] - Defaulting to majority quorums
2018-01-23 10:48:35,512 [myid:4] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2018-01-23 10:48:35,513 [myid:4] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2018-01-23 10:48:35,513 [myid:4] - INFO  [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2018-01-23 10:48:35,536 [myid:4] - INFO  [main:QuorumPeerMain@132] - Starting quorum peer
2018-01-23 10:48:35,587 [myid:4] - INFO  [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
2018-01-23 10:48:35,611 [myid:4] - INFO  [main:QuorumPeer@913] - tickTime set to 2000
2018-01-23 10:48:35,612 [myid:4] - INFO  [main:QuorumPeer@933] - minSessionTimeout set to -1
2018-01-23 10:48:35,612 [myid:4] - INFO  [main:QuorumPeer@944] - maxSessionTimeout set to -1
2018-01-23 10:48:35,612 [myid:4] - INFO  [main:QuorumPeer@959] - initLimit set to 10
2018-01-23 10:48:35,639 [myid:4] - INFO  [main:QuorumPeer@429] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2018-01-23 10:48:35,643 [myid:4] - INFO  [main:QuorumPeer@444] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2018-01-23 10:48:35,652 [myid:4] - INFO  [Thread-1:QuorumCnxManager$Listener@486] - My election bind port: 0.0.0.0/0.0.0.0:3888
2018-01-23 10:48:35,674 [myid:4] - INFO  [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:2181:QuorumPeer@670] - LOOKING
2018-01-23 10:48:35,679 [myid:4] - INFO  [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@740] - New election. My id =  4, proposed zxid=0x0
2018-01-23 10:48:35,692 [myid:4] - INFO  [slave3/192.168.137.14:3888:QuorumCnxManager$Listener@493] - Received connection request /192.168.137.11:34491
2018-01-23 10:48:35,704 [myid:4] - INFO  [WorkerReceiver[myid=4]:FastLeaderElection@542] - Notification: 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2018-01-23 10:48:35,706 [myid:4] - WARN  [WorkerSender[myid=4]:QuorumCnxManager@368] - Cannot open channel to 2 at election address slave1/192.168.137.12:3888
java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:327)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
    at java.lang.Thread.run(Thread.java:748)

The exception is Connection refused. Rather than immediately searching the web for this error, start ZooKeeper on all the nodes first and then look again; at this point the status check reports the service as not running, and no matching process can be found:

[root@master zookeeper]# /opt/zookeeper/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@master zookeeper]# /opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

Start ZooKeeper on all the other node servers, wait a moment, and then check the status on each server:

[root@master zookeeper]# /opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@master zookeeper]# jps
5488 QuorumPeerMain
5539 Jps

If a Mode line is shown and the QuorumPeerMain process exists, the server has started successfully.

To shut ZooKeeper down, run the following on every node:

/opt/zookeeper/bin/zkServer.sh stop

Alternatively, starting with the following command prints the log output in the foreground:

/opt/zookeeper/bin/zkServer.sh start-foreground

Batch start and stop

Running the command on each server one by one is tedious, so here is a script that does it in batch:

#!/bin/bash
# Set the variable below to the ZooKeeper installation directory
zooHome=/opt/zookeeper
if [ -n "$1" ]
    then
        confFile=$zooHome/conf/zoo.cfg
        slaves=$(cat "$confFile" | sed '/^server/!d;s/^.*=//;s/:.*$//g;/^$/d')
        for slave in $slaves ; do
            ssh $slave "$zooHome/bin/zkServer.sh $1"
        done
    else
        echo "parameter empty! parameter:start|stop"
fi

Save the script as a file named zooManager and invoke it like this:

sh zooManager start

sh zooManager stop
[root@master opt]# sh zooManager start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Since all the nodes here use the root user, permissions were not taken into account; in a real deployment they should be.

Reference: http://coolxing.iteye.com/blog/1871009

Deploying ZooKeeper and etcd as stateful services in practice

I. Overview

Kubernetes provides solid support for stateful applications such as ZooKeeper and etcd through StatefulSets, which have the following properties:

  • A stable, unique network identity for each pod
  • Stable persistent storage, implemented through PV/PVC
  • Ordered startup and shutdown of pods: graceful deployment and scaling

This article describes how to deploy ZooKeeper and etcd as stateful services on a k8s cluster, with Ceph providing data persistence.

II. Summary

  • Using k8s StatefulSets, StorageClasses, PVs and PVCs together with Ceph RBD works well for deploying stateful services such as ZooKeeper and etcd on a Kubernetes cluster.
  • k8s does not delete PV and PVC objects it has created on its own, which prevents accidental deletion.

If you do decide to delete the PV and PVC objects, the corresponding RBD images on the Ceph side must also be deleted manually.

  • Pitfalls encountered

The Ceph client user referenced in the StorageClass must have mon rw and rbd rwx permissions. Without mon write permission, releasing the RBD lock fails and the RBD image cannot be mounted on another k8s worker node.

  • ZooKeeper node health is checked with probes; if a node is unhealthy, k8s deletes the pod and automatically recreates it, which effectively restarts that ZooKeeper node.

Because cluster membership in ZooKeeper 3.4 is configured by statically loading zoo.cfg, all nodes of the ZooKeeper cluster have to be restarted whenever a ZooKeeper pod's IP changes.

  • The etcd deployment approach still needs work

In this experiment the etcd cluster is deployed statically; whenever etcd membership changes, commands such as etcdctl member remove/add have to be run by hand, which severely limits the cluster's ability to recover from failures and to scale in or out automatically. The deployment should therefore be reworked to bootstrap etcd dynamically via DNS or etcd discovery, so that etcd runs better on k8s.

III. ZooKeeper cluster deployment

1. Pull the image

docker pull gcr.mirrors.ustc.edu.cn/google_containers/kubernetes-zookeeper:1.0-3.4.10
docker tag gcr.mirrors.ustc.edu.cn/google_containers/kubernetes-zookeeper:1.0-3.4.10 172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10
docker push  172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10

2. Define the Ceph secret

cat << EOF | kubectl create -f -
apiVersion: v1
data:
  key: QVFBYy9ndGFRUno4QlJBQXMxTjR3WnlqN29PK3VrMzI1a05aZ3c9PQo=
kind: Secret
metadata:
  creationTimestamp: 2017-11-20T10:29:05Z
  name: ceph-secret
  namespace: default
  resourceVersion: "2954730"
  selfLink: /api/v1/namespaces/default/secrets/ceph-secret
  uid: a288ff74-cddd-11e7-81cc-000c29f99475
type: kubernetes.io/rbd
EOF

3. Define the RBD StorageClass

cat << EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
parameters:
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imagefeatures: layering
  monitors: 172.16.13.223
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
provisioner: kubernetes.io/rbd
reclaimPolicy: Delete
EOF

4. Create the ZooKeeper cluster

RBD volumes are used to store each ZooKeeper node's data.

cat << EOF | kubectl create -f -
---
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper 
          --servers=3 
          --data_dir=/var/lib/zookeeper/data 
          --data_log_dir=/var/lib/zookeeper/data/log 
          --conf_dir=/opt/zookeeper/conf 
          --client_port=2181 
          --election_port=3888 
          --server_port=2888 
          --tick_time=2000 
          --init_limit=10 
          --sync_limit=5 
          --heap=512M 
          --max_client_cnxns=60 
          --snap_retain_count=3 
          --purge_interval=12 
          --max_session_timeout=40000 
          --min_session_timeout=4000 
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: ceph
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF

Check the result:

[root@172 zookeeper]# kubectl get no
NAME           STATUS    ROLES     AGE       VERSION
172.16.20.10   Ready     <none>    50m       v1.8.2
172.16.20.11   Ready     <none>    2h        v1.8.2
172.16.20.12   Ready     <none>    1h        v1.8.2

[root@172 zookeeper]# kubectl get po -owide 
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          8m        192.168.5.162   172.16.20.10
zk-1      1/1       Running   0          1h        192.168.2.146   172.16.20.11

[root@172 zookeeper]# kubectl get pv,pvc
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                  STORAGECLASS   REASON    AGE
pv/pvc-226cb8f0-d322-11e7-9581-000c29f99475   1Gi        RWO            Delete           Bound     default/datadir-zk-0   ceph                     1h
pv/pvc-22703ece-d322-11e7-9581-000c29f99475   1Gi        RWO            Delete           Bound     default/datadir-zk-1   ceph                     1h

NAME               STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/datadir-zk-0   Bound     pvc-226cb8f0-d322-11e7-9581-000c29f99475   1Gi        RWO            ceph           1h
pvc/datadir-zk-1   Bound     pvc-22703ece-d322-11e7-9581-000c29f99475   1Gi        RWO            ceph           1h

The RBD lock information for the zk-0 pod:

[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker       ID                              Address                   
client.24146 kubelet_lock_magic_172.16.20.10 172.16.20.10:0/1606152350 

5. Test pod migration

Cordon node 172.16.20.10 (mark it unschedulable) so that the zk-0 pod gets rescheduled onto 172.16.20.12:

kubectl cordon 172.16.20.10

[root@172 zookeeper]# kubectl get no
NAME           STATUS                     ROLES     AGE       VERSION
172.16.20.10   Ready,SchedulingDisabled   <none>    58m       v1.8.2
172.16.20.11   Ready                      <none>    2h        v1.8.2
172.16.20.12   Ready                      <none>    1h        v1.8.2

kubectl delete po zk-0

Watch zk-0 being migrated:

[root@172 zookeeper]# kubectl get po -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          14m       192.168.5.162   172.16.20.10
zk-1      1/1       Running   0          1h        192.168.2.146   172.16.20.11
zk-0      1/1       Terminating   0         16m       192.168.5.162   172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Pending   0         0s        <none>    <none>
zk-0      0/1       Pending   0         0s        <none>    172.16.20.12
zk-0      0/1       ContainerCreating   0         0s        <none>    172.16.20.12
zk-0      0/1       Running   0         3s        192.168.3.4   172.16.20.12

zk-0 has now been migrated to 172.16.20.12 as expected.
Check the RBD lock information again:

[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker       ID                              Address                   
client.24146 kubelet_lock_magic_172.16.20.10 172.16.20.10:0/1606152350 
[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker       ID                              Address                   
client.24154 kubelet_lock_magic_172.16.20.12 172.16.20.12:0/3715989358 

When this zk pod migration was tested earlier on another Ceph cluster, releasing the lock always failed; analysis suggested that the Ceph account in use lacked the required permissions, which made the lock release fail. The recorded errors were:

Nov 27 10:45:55 172 kubelet: W1127 10:45:55.551768   11556 rbd_util.go:471] rbd: no watchers on kubernetes-dynamic-pvc-f35a411e-d317-11e7-90ab-000c29f99475
Nov 27 10:45:55 172 kubelet: I1127 10:45:55.694126   11556 rbd_util.go:181] remove orphaned locker kubelet_lock_magic_172.16.20.12 from client client.171490: err exit status 13, output: 2017-11-27 10:45:55.570483 7fbdbe922d40 -1 did not load config file, using default settings.
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600816 7fbdbe922d40 -1 Errors while parsing config file!
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600824 7fbdbe922d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600825 7fbdbe922d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600825 7fbdbe922d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602492 7fbdbe922d40 -1 Errors while parsing config file!
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602494 7fbdbe922d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602495 7fbdbe922d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602496 7fbdbe922d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.651594 7fbdbe922d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: rbd: releasing lock failed: (13) Permission denied
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.682470 7fbdbe922d40 -1 librbd: unable to blacklist client: (13) Permission denied

The relevant k8s rbd volume implementation code:

if lock {
            // check if lock is already held for this host by matching lock_id and rbd lock id
            if strings.Contains(output, lock_id) {
                // this host already holds the lock, exit
                glog.V(1).Infof("rbd: lock already held for %s", lock_id)
                return nil
            }
            // clean up orphaned lock if no watcher on the image
            used, statusErr := util.rbdStatus(&b)
            if statusErr == nil && !used {
                re := regexp.MustCompile("client.* " + kubeLockMagic + ".*")
                locks := re.FindAllStringSubmatch(output, -1)
                for _, v := range locks {
                    if len(v) > 0 {
                        lockInfo := strings.Split(v[0], " ")
                        if len(lockInfo) > 2 {
                            args := []string{"lock", "remove", b.Image, lockInfo[1], lockInfo[0], "--pool", b.Pool, "--id", b.Id, "-m", mon}
                            args = append(args, secret_opt...)
                            cmd, err = b.exec.Run("rbd", args...)
                            // the rbd lock remove command returned an error message here
                            glog.Infof("remove orphaned locker %s from client %s: err %v, output: %s", lockInfo[1], lockInfo[0], err, string(cmd))
                        }
                    }
                }
            }

            // hold a lock: rbd lock add
            args := []string{"lock", "add", b.Image, lock_id, "--pool", b.Pool, "--id", b.Id, "-m", mon}
            args = append(args, secret_opt...)
            cmd, err = b.exec.Run("rbd", args...)
        } 

As you can see, the rbd lock remove operation was rejected for lack of permission: rbd: releasing lock failed: (13) Permission denied.

6. Test scaling out

Scale the ZooKeeper cluster from 2 nodes to 3.
With 2 nodes, zoo.cfg defines two servers:

zookeeper@zk-0:/opt/zookeeper/conf$ cat zoo.cfg 
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888

Use kubectl edit statefulset zk to change replicas to 3 and start-zookeeper --servers=3,
then watch the pods change:

[root@172 zookeeper]# kubectl get po -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          1h        192.168.5.170   172.16.20.10
zk-1      1/1       Running   0          1h        192.168.3.12    172.16.20.12
zk-2      0/1       Pending   0         0s        <none>    <none>
zk-2      0/1       Pending   0         0s        <none>    172.16.20.11
zk-2      0/1       ContainerCreating   0         0s        <none>    172.16.20.11
zk-2      0/1       Running   0         1s        192.168.2.154   172.16.20.11
zk-2      1/1       Running   0         11s       192.168.2.154   172.16.20.11
zk-1      1/1       Terminating   0         1h        192.168.3.12   172.16.20.12
zk-1      0/1       Terminating   0         1h        <none>    172.16.20.12
zk-1      0/1       Terminating   0         1h        <none>    172.16.20.12
zk-1      0/1       Terminating   0         1h        <none>    172.16.20.12
zk-1      0/1       Terminating   0         1h        <none>    172.16.20.12
zk-1      0/1       Pending   0         0s        <none>    <none>
zk-1      0/1       Pending   0         0s        <none>    172.16.20.12
zk-1      0/1       ContainerCreating   0         0s        <none>    172.16.20.12
zk-1      0/1       Running   0         2s        192.168.3.13   172.16.20.12
zk-1      1/1       Running   0         20s       192.168.3.13   172.16.20.12
zk-0      1/1       Terminating   0         1h        192.168.5.170   172.16.20.10
zk-0      0/1       Terminating   0         1h        <none>    172.16.20.10
zk-0      0/1       Terminating   0         1h        <none>    172.16.20.10
zk-0      0/1       Terminating   0         1h        <none>    172.16.20.10
zk-0      0/1       Terminating   0         1h        <none>    172.16.20.10
zk-0      0/1       Pending   0         0s        <none>    <none>
zk-0      0/1       Pending   0         0s        <none>    172.16.20.10
zk-0      0/1       ContainerCreating   0         0s        <none>    172.16.20.10
zk-0      0/1       Running   0         2s        192.168.5.171   172.16.20.10
zk-0      1/1       Running   0         12s       192.168.5.171   172.16.20.10

Both zk-0 and zk-1 were restarted, which lets them load the new zoo.cfg and keeps the cluster correctly configured.
The new zoo.cfg lists three servers:

[root@172 ~]# kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888

7. Test scaling in

When scaling in, the zk cluster again automatically restarts all of its nodes; the process looks like this:

[root@172 ~]# kubectl get po -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          5m        192.168.5.171   172.16.20.10
zk-1      1/1       Running   0          6m        192.168.3.13    172.16.20.12
zk-2      1/1       Running   0          7m        192.168.2.154   172.16.20.11
zk-2      1/1       Terminating   0         7m        192.168.2.154   172.16.20.11
zk-1      1/1       Terminating   0         7m        192.168.3.13   172.16.20.12
zk-2      0/1       Terminating   0         8m        <none>    172.16.20.11
zk-1      0/1       Terminating   0         7m        <none>    172.16.20.12
zk-2      0/1       Terminating   0         8m        <none>    172.16.20.11
zk-1      0/1       Terminating   0         7m        <none>    172.16.20.12
zk-1      0/1       Terminating   0         7m        <none>    172.16.20.12
zk-1      0/1       Terminating   0         7m        <none>    172.16.20.12
zk-1      0/1       Pending   0         0s        <none>    <none>
zk-1      0/1       Pending   0         0s        <none>    172.16.20.12
zk-1      0/1       ContainerCreating   0         0s        <none>    172.16.20.12
zk-1      0/1       Running   0         2s        192.168.3.14   172.16.20.12
zk-2      0/1       Terminating   0         8m        <none>    172.16.20.11
zk-2      0/1       Terminating   0         8m        <none>    172.16.20.11
zk-1      1/1       Running   0         19s       192.168.3.14   172.16.20.12
zk-0      1/1       Terminating   0         7m        192.168.5.171   172.16.20.10
zk-0      0/1       Terminating   0         7m        <none>    172.16.20.10
zk-0      0/1       Terminating   0         7m        <none>    172.16.20.10
zk-0      0/1       Terminating   0         7m        <none>    172.16.20.10
zk-0      0/1       Pending   0         0s        <none>    <none>
zk-0      0/1       Pending   0         0s        <none>    172.16.20.10
zk-0      0/1       ContainerCreating   0         0s        <none>    172.16.20.10
zk-0      0/1       Running   0         3s        192.168.5.172   172.16.20.10
zk-0      1/1       Running   0         13s       192.168.5.172   172.16.20.10

IV. etcd cluster deployment

1. Create the etcd cluster

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: "etcd"
  annotations:
    # Create endpoints also if the related pod isn't ready
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - port: 2379
    name: client
  - port: 2380
    name: peer
  clusterIP: None
  selector:
    component: "etcd"
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "etcd"
  labels:
    component: "etcd"
spec:
  serviceName: "etcd"
  # changing replicas value will require a manual etcdctl member remove/add
  # command (remove before decreasing and add after increasing)
  replicas: 3
  template:
    metadata:
      name: "etcd"
      labels:
        component: "etcd"
    spec:
      containers:
      - name: "etcd"
        image: "172.16.18.100:5000/quay.io/coreos/etcd:v3.2.3"
        ports:
        - containerPort: 2379
          name: client
        - containerPort: 2380
          name: peer
        env:
        - name: CLUSTER_SIZE
          value: "3"
        - name: SET_NAME
          value: "etcd"
        volumeMounts:
        - name: data
          mountPath: /var/run/etcd
        command:
          - "/bin/sh"
          - "-ecx"
          - |
            IP=$(hostname -i)
            for i in $(seq 0 $((${CLUSTER_SIZE} - 1))); do
              while true; do
                echo "Waiting for ${SET_NAME}-${i}.${SET_NAME} to come up"
                ping -W 1 -c 1 ${SET_NAME}-${i}.${SET_NAME}.default.svc.cluster.local > /dev/null && break
                sleep 1s
              done
            done
            PEERS=""
            for i in $(seq 0 $((${CLUSTER_SIZE} - 1))); do
                PEERS="${PEERS}${PEERS:+,}${SET_NAME}-${i}=http://${SET_NAME}-${i}.${SET_NAME}.default.svc.cluster.local:2380"
            done
            # start etcd. If cluster is already initialized the `--initial-*` options will be ignored.
            exec etcd --name ${HOSTNAME} 
              --listen-peer-urls http://${IP}:2380 
              --listen-client-urls http://${IP}:2379,http://127.0.0.1:2379 
              --advertise-client-urls http://${HOSTNAME}.${SET_NAME}:2379 
              --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}:2380 
              --initial-cluster-token etcd-cluster-1 
              --initial-cluster ${PEERS} 
              --initial-cluster-state new 
              --data-dir /var/run/etcd/default.etcd
## We are using dynamic pv provisioning using the "standard" storage class so
## this resource can be directly deployed without changes to minikube (since
## minikube defines this class for its minikube hostpath provisioner). In
## production define your own way to use pv claims.
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: ceph
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: 1Gi
EOF

The po, pv and pvc listings after creation:

[root@172 etcd]# kubectl get po -owide 
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-0    1/1       Running   0          15m       192.168.5.174   172.16.20.10
etcd-1    1/1       Running   0          15m       192.168.3.16    172.16.20.12
etcd-2    1/1       Running   0          5s        192.168.5.176   172.16.20.10

2. Test scaling in

kubectl scale statefulset etcd --replicas=2

[root@172 ~]# kubectl get po -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-0    1/1       Running   0          17m       192.168.5.174   172.16.20.10
etcd-1    1/1       Running   0          17m       192.168.3.16    172.16.20.12
etcd-2    1/1       Running   0          1m        192.168.5.176   172.16.20.10
etcd-2    1/1       Terminating   0         1m        192.168.5.176   172.16.20.10
etcd-2    0/1       Terminating   0         1m        <none>    172.16.20.10

Check cluster health:

kubectl exec etcd-0 -- etcdctl cluster-health

failed to check the health of member 42c8b94265b9b79a on http://etcd-2.etcd:2379: Get http://etcd-2.etcd:2379/health: dial tcp: lookup etcd-2.etcd on 10.96.0.10:53: no such host
member 42c8b94265b9b79a is unreachable: [http://etcd-2.etcd:2379] are all unreachable
member 9869f0647883a00d is healthy: got healthy result from http://etcd-1.etcd:2379
member c799a6ef06bc8c14 is healthy: got healthy result from http://etcd-0.etcd:2379
cluster is healthy

After scaling in, etcd-2 is not automatically removed from the etcd cluster, so this etcd image clearly does not handle automatic scaling very well.
Remove etcd-2 manually:

[root@172 etcd]# kubectl exec etcd-0 -- etcdctl member remove 42c8b94265b9b79a
Removed member 42c8b94265b9b79a from cluster
[root@172 etcd]# kubectl exec etcd-0 -- etcdctl cluster-health                
member 9869f0647883a00d is healthy: got healthy result from http://etcd-1.etcd:2379
member c799a6ef06bc8c14 is healthy: got healthy result from http://etcd-0.etcd:2379
cluster is healthy

3. Test scaling out

As the startup script in etcd.yaml shows, a newly started etcd pod is launched with --initial-cluster-state new, so this image does not support dynamic scale-out. Consider changing the startup script to bootstrap the etcd cluster dynamically via DNS or etcd discovery, so that the cluster can be scaled out dynamically.

Installing and deploying ZooKeeper (standalone/cluster)

ZooKeeper is an open-source distributed framework; I used it previously for distributed log collection, and it is fairly simple to set up.

Download

The current stable release is 3.4.10; download it from https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/

Go into the stable directory and download the release.

After downloading, extract it:

[root@1c271ed316ca ~]#  tar zxvf zookeeper-3.4.10.tar.gz 

Starting in standalone mode

With the default configuration, the server listens on port 2181:

[root@1c271ed316ca ~]#  cd zookeeper-3.4.10
[root@1c271ed316ca zookeeper-3.4.10 ]#  cp conf/zoo_sample.cfg conf/zoo.cfg
[root@1c271ed316ca zookeeper-3.4.10 ]#  bin/zkServer.sh start

By default, zkServer.sh loads the ../conf/zoo.cfg configuration file, whose main contents are shown below.

You can change the dataDir directory and the port ZooKeeper exposes to clients.

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181

Cluster mode

Step 1: add the node entries to the configuration file

Download and extract ZooKeeper on each of the nodes A, B and C, then add the following entries on top of the default configuration:

server.1=xx.xx.xx.xx:2888:3888 # node A IP
server.2=xx.xx.xx.xx:2888:3888 # node B IP
server.3=xx.xx.xx.xx:2888:3888 # node C IP

Notes:

  • Port 2888: the port used for communication between ZooKeeper nodes;
  • Port 3888: the port ZooKeeper uses to elect the leader;
  • These ports can be changed as you like; you only need to set the current node's ID (myid) correctly, and ZooKeeper will open the configured ports for the current node when the service starts.

Step 2: configure the current node's ID

[root@1c271ed316ca ~]# echo "X"  > /tmp/zookeeper/myid

Note:

  • Replace "X" with the number of the current node, matching server.X in the configuration file;
  • Replace /tmp/zookeeper with the actual dataDir path from your ZooKeeper configuration;

When the ZooKeeper service starts, it looks for the myid file in the dataDir directory; the number inside tells it that the current node is server.X.

Step 3: start the service on each node

[root@nodeA zookeeper-3.4.10 ]#  bin/zkServer.sh start
[root@nodeB zookeeper-3.4.10 ]#  bin/zkServer.sh start
[root@nodeC zookeeper-3.4.10 ]#  bin/zkServer.sh start

Setting up ActiveMQ master/slave (LevelDB Master/Slave) on ZooKeeper

I. Overview

ActiveMQ 5.9.0 introduced a new master/slave implementation that uses ZooKeeper to elect a master, while the remaining nodes automatically become slaves and replicate messages in real time. Because the slaves hold an up-to-date copy of the data, the master does not need to worry about losing it, so LevelDB keeps messages in memory first and syncs them to disk asynchronously, which gives this setup the best read/write performance for ActiveMQ. The election requires a majority, so at least 3 nodes are needed for high availability; with a two-node cluster, the slave cannot take over when the master fails, so the cluster must have at least three machines. This setup only provides master/standby failover to avoid a single point of failure; it does not provide load balancing.

II. Environment

IP
192.168.3.10    server1
192.168.3.11    server2
192.168.3.12    server3

Software used:

apache-activemq-5.13.0-bin.tar.gz

zookeeper-3.5.2-alpha.tar.gz

ZooInspector.zip

III. Building the ZooKeeper cluster

1. Extract zookeeper-3.5.2-alpha.tar.gz into the /home/wzh/zk directory.

2. Copy zoo_sample.cfg to zoo.cfg and edit the configuration:

wzh@hd-master:~/zk/zookeeper-3.5.2-alpha/conf$ cp zoo_sample.cfg zoo.cfg

wzh@hd-master:~/zk/zookeeper-3.5.2-alpha/conf$ vim zoo.cfg
tickTime=2000

initLimit=10

syncLimit=5

dataDir=/tmp/zookeeper

clientPort=2181



server.1=192.168.3.10:2888:3888

server.2=192.168.3.11:2888:3888

server.3=192.168.3.12:2888:3888

3. Create the /tmp/zookeeper directory

In that directory create a file named myid with content 1 (the value changes per server).

4. Copy the /home/wzh/zk/zookeeper-3.5.2-alpha folder from server1 to server2 and server3, then create the /tmp/zookeeper directory on each

In that directory create the myid file, with content 2 on server2 and 3 on server3.

5. Start ZooKeeper

[192.168.3.10]

wzh@hd-master:~/zk/zookeeper-3.5.2-alpha/bin$ ./zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /home/wzh/zk/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[192.168.3.11]

wzh@hd-slave1:~/zk/zookeeper-3.5.2-alpha/bin$ ./zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /home/wzh/zk/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[192.168.3.12]

wzh@hd-slave2:~/zk/zookeeper-3.5.2-alpha/bin$ ./zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /home/wzh/zk/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

IV. Building the ActiveMQ cluster

1. Extract apache-activemq-5.13.0-bin.tar.gz into /home/wzh/amq

2. Edit the activemq.xml configuration file

  • Set brokerName on the broker element to wzhamq
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="wzhamq" dataDirectory="${activemq.data}">
  • Switch the persistenceAdapter to replicatedLevelDB and comment out the kahaDB adapter
 <persistenceAdapter>
         <!--
            <kahaDB directory="${activemq.data}/kahadb"/>
         -->
        <replicatedLevelDB 
                directory="${activemq.data}/leveldb" 
                replicas="3" 
                bind="tcp://0.0.0.0:0"        
                zkAddress="192.168.3.10:2181,192.168.3.11:2181"     
                hostname="192.168.3.10"          
                sync="local_disk"          
                zkPath="/activemq/leveldb-stores"/>
        </persistenceAdapter>
  • Copy apache-activemq-5.13.0 to the .11 and .12 machines
wzh@hd-master:~/amq$ scp -r apache-activemq-5.13.0/ [email protected]:/tmp
  • On the .11 machine, change the configuration to hostname="192.168.3.11"

  • On the .12 machine, change the configuration to hostname="192.168.3.12"

3. Start ActiveMQ

wzh@hd-master:~/amq$ ./apache-activemq-5.13.0/bin/activemq status
INFO: Loading '/home/wzh/amq/apache-activemq-5.13.0//bin/env'
INFO: Using java '/opt/java/jdk1.8.0_91/bin/java'
ActiveMQ is running (pid '2031')
wzh@hd-master:~/amq$

Then start the 192.168.3.11 and 192.168.3.12 machines in turn.

V. Cluster management

1. Use the ZooInspector tool to inspect the state of the ZooKeeper cluster.

2. Log in to the ActiveMQ admin console at http://192.168.3.10:8161/admin/ with the default username and password admin.

VI. Using ActiveMQ JMS from Spring Boot

1. Build a Spring Boot application with Gradle and add the following to the Gradle build file:

dependencies {
    compile('org.springframework.boot:spring-boot-starter-activemq')
    compile('org.springframework.boot:spring-boot-starter-web')
    testCompile('org.springframework.boot:spring-boot-starter-test')
}

2. Add the following configuration to the application properties:

spring.activemq.broker-url=failover:(tcp://192.168.3.10:61616,tcp://192.168.3.11:61616,tcp://192.168.3.12:61616)
spring.activemq.in-memory=true
spring.activemq.pool.enabled=false
spring.activemq.user=admin
spring.activemq.password=admin

3. Sending JMS messages

@Service
public class Producer {

    @Autowired
    private JmsMessagingTemplate jmsTemplate;

    public void sendMessage(Destination destination, final String message){
        jmsTemplate.convertAndSend(destination, message);
    }
}

4. Receiving JMS messages

@Component
public class Consumer {
    @JmsListener(destination = "test.queue")
    public void receiveQueue(String text){

        System.out.println("Consumer收到的报文为:"+text);
    }
}

5. Testing

@RestController
@RequestMapping(
        value = "/test",
        headers = "Accept=application/json",
        produces = "application/json;charset=utf-8"
)
public class TestCtrl {
    @Autowired
    Producer producer;

    Destination destination = new ActiveMQQueue("test.queue");

    @RequestMapping(
            value = "/say/{msg}/to/{name}",
            method = RequestMethod.GET
    )
    public Map<String, Object> say(@PathVariable String msg, @PathVariable String name){
        Map<String, Object> map = new HashMap<>();
        map.put("msg", msg);
        map.put("name", name);

        producer.sendMessage(destination, msg);

        return map;
    }
}

6. In the ActiveMQ admin console, create a message queue:

test.queue

7. Test with Postman

2017-08-03 08:10:44.928 INFO 12820 --- [ActiveMQ Task-3] o.a.a.t.failover.FailoverTransport : Successfully reconnected to tcp://192.168.3.10:61616
2017-08-03 08:11:08.854 INFO 12820 --- [ActiveMQ Task-1] o.a.a.t.failover.FailoverTransport : Successfully connected to tcp://192.168.3.10:61616
Consumer received message: hello
2017-08-03 08:43:39.464 INFO 12820 --- [ActiveMQ Task-1] o.a.a.t.failover.FailoverTransport : Successfully connected to tcp://192.168.3.10:61616
Consumer received message: hello

8. The system is currently connected to .10; if the .10 node is brought down at this point, the cluster immediately elects one of the slaves as the new master to continue serving, which demonstrates the master/standby failover.
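
The same failover behavior can also be exercised outside Spring. The following is a minimal plain-JMS sketch (assuming the activemq-client dependency is available; the queue name and credentials mirror the configuration above) showing that the failover: transport reconnects to whichever broker ZooKeeper elects as the new master:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverSendDemo {
    public static void main(String[] args) throws Exception {
        // The failover: transport retries the broker list and transparently
        // reconnects when the current master goes down and a slave takes over.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://192.168.3.10:61616,tcp://192.168.3.11:61616,tcp://192.168.3.12:61616)");
        factory.setUserName("admin");
        factory.setPassword("admin");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("test.queue"));
        TextMessage message = session.createTextMessage("hello");
        producer.send(message);
        session.close();
        connection.close();
    }
}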

Installing and configuring a highly available ZooKeeper cluster

ZooKeeper acts as the registry and coordination center for many services (Dubbo, JStorm, etc.), so a high-availability cluster setup is indispensable. When building a ZooKeeper cluster, keep the number of nodes odd (2n+1, e.g. 3, 5 or 7 nodes).

Sample project: http://wosyingjun.iteye.com/blog/2312553

1. Download zookeeper-3.4.6.tar.gz and place it in the /usr/local/ directory on each server

$ cd /usr/local/
$ wget http://apache.fayea.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

2. Extract the ZooKeeper package on each server and rename the directory according to the node number

$ tar -zxvf zookeeper-3.4.6.tar.gz
Server 1:
$ mv zookeeper-3.4.6 zookeeper-3.4.6_(1)
Server 2:
$ mv zookeeper-3.4.6 zookeeper-3.4.6_(2)
Server 3:
$ mv zookeeper-3.4.6 zookeeper-3.4.6_(3)

3. Create the following directories under each ZooKeeper node directory:

$ cd /usr/local/zookeeper-3.4.6_(x)    # x is the node number
$ mkdir data
$ mkdir logs

4. Copy zoo_sample.cfg in the zookeeper-3.4.6_(x)/conf directory to a file named zoo.cfg:

$ cp zoo_sample.cfg zoo.cfg

5. Edit the zoo.cfg configuration file

#Configuration for zookeeper-3.4.6_(1) (/usr/local/zookeeper-3.4.6_(1)/conf/zoo.cfg):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.6_(1)/data
dataLogDir=/usr/local/zookeeper-3.4.6_(1)/logs
clientPort=2181
server.1=192.168.11.97:2881:3881
server.2=192.168.11.98:2882:3882
server.3=192.168.11.99:2883:3883

#Configuration for zookeeper-3.4.6_(2) (/usr/local/zookeeper-3.4.6_(2)/conf/zoo.cfg):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.6_(2)/data
dataLogDir=/usr/local/zookeeper-3.4.6_(2)/logs
clientPort=2182
server.1=192.168.11.97:2881:3881
server.2=192.168.11.98:2882:3882
server.3=192.168.11.99:2883:3883

#Configuration for zookeeper-3.4.6_(3) (/usr/local/zookeeper-3.4.6_(3)/conf/zoo.cfg):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.6_(3)/data
dataLogDir=/usr/local/zookeeper-3.4.6_(3)/logs
clientPort=2183
server.1=192.168.11.97:2881:3881
server.2=192.168.11.98:2882:3882
server.3=192.168.11.99:2883:3883

Parameter descriptions

  • tickTime=2000
    tickTime is the heartbeat interval maintained between ZooKeeper servers, or between clients and servers; a heartbeat is sent every tickTime milliseconds.

  • initLimit=10
    initLimit configures how many heartbeat intervals ZooKeeper will tolerate at most for a client's initial connection (the "client" here is not a user client connecting to the ZooKeeper server, but a Follower server in the ZooKeeper cluster connecting to the Leader). If the server has still not received the client's response after more than 10 heartbeat intervals (tickTime), the connection is considered failed. The total time is 10*2000 = 20 seconds.

  • syncLimit=5
    syncLimit bounds how long the exchange of messages, requests and replies between the Leader and a Follower may take, measured in tickTime units; the total here is 5*2000 = 10 seconds.

  • dataDir=/usr/local/zookeeper-3.4.6_(x)/data
    dataDir is, as the name suggests, the directory where ZooKeeper stores its data; by default ZooKeeper also writes its transaction logs to this directory.

  • clientPort=2181
    clientPort is the port that clients (applications) use to connect to the ZooKeeper server; ZooKeeper listens on this port for client requests.

server.A=B:C:D
server.1=192.168.11.97:2881:3881
server.2=192.168.11.98:2882:3882
server.3=192.168.11.99:2883:3883
  • A is a number indicating which server this is;
  • B is the server's IP address (or a hostname mapped to the IP address);
  • C is the port this server uses to exchange information with the cluster's Leader;
  • D is the port used to elect a new Leader when the current Leader goes down.

6. Create the myid file under dataDir=/usr/local/zookeeper-3.4.6_(x)/data

$ vi /usr/local/zookeeper-3.4.6_(1)/data/myid   # set the value to 1
$ vi /usr/local/zookeeper-3.4.6_(2)/data/myid   # set the value to 2
$ vi /usr/local/zookeeper-3.4.6_(3)/data/myid   # set the value to 3

7. Open the required ports 218X, 288X and 388X in the firewall

$ vi /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 218X -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 288X -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 388X -j ACCEPT
$ service iptables restart

8. Start ZooKeeper and check its status:

$ /usr/local/zookeeper-3.4.6_(x)/bin/zkServer.sh start
$ /usr/local/zookeeper-3.4.6_(x)/bin/zkServer.sh status

9. Adjust the client configuration used to connect to ZooKeeper:

zookeeper://192.168.11.97:2181?backup=192.168.11.98:2182,192.168.11.99:2183

Strengths of ZooKeeper's system design

ZooKeeper is used more and more widely and essentially holds a monopoly among open-source software in its field (though etcd has recently risen sharply on the back of Docker). Yet anyone who has actually used it tends to find it usable but not exactly pleasant to use. This article combines my own experience and thinking with analyses and evaluations from genuine experts; the main goal is to distill what ZooKeeper is really needed for and to consider which strengths in its design and implementation allowed it to become so successful.

I. Common use cases

Let me first summarize the ZooKeeper (hereafter ZK) use cases commonly seen in engineering practice, ordered from the most to the least frequent according to my own impression.

1. Reliable storage. In practice this shows up as configuration management and naming services, and is driven entirely by the strong reliability of ZK's replication. The callback mechanism can also be used to notify everyone when data changes. It is very simple and effective to implement, so it is the most widely used scenario.

2. Cluster management. ZK's communication and callback mechanisms are used to monitor the machine state of a distributed cluster; many systems even register their master/standby pairs in ZK to simplify hot failover.

3. Service registration and discovery. Reliable storage plus the notification callback mechanism already covers the most basic needs of service registration and discovery. Even some scenarios that look questionable to me are implemented on ZK, and it has largely taken over: requirements of this kind now routinely adopt a ZK-based solution. Well-known examples include Dubbo in China and, abroad, Kafka (which even uses ZK for load balancing), jStorm and Heron (Twitter).

4. Leader election. Leader election is the original application that Chubby, the system ZK was modeled on, was designed for, namely master election for BigTable. ZK itself was first used in HBase, and since then everything that needs leader election has adopted it; many KV systems use it to conveniently pick a central node from several nodes (though I have not actually found a concrete example).
Note that leader election is sometimes also referred to in discussions as a kind of distributed lock, which makes it easy to confuse the concepts. Using ZK for leader election (ideally implemented exactly the same way as the distributed lock recipe; even the official documentation has gotten this wrong) does objectively follow a first-come-first-served rule, but the actual requirement does not have to satisfy that rule; it only has to guarantee that the leader is unique, so it is quite different from a lock in the synchronization sense.

5. Distributed synchronization, i.e. a true distributed lock, though it is not that common in practice. I have implemented it a few times and currently plan to use it to synchronize form submissions (see the lock sketch after this list).

6. Load balancing
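
As an illustration of what such a distributed lock (item 5) can look like in practice, here is a minimal sketch using Curator's InterProcessMutex recipe, one common way to implement it; the connect string, lock path and timeout are illustrative assumptions, not taken from this article:

import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class FormSubmitLockDemo {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        // all submitters contend on the same znode path
        InterProcessMutex lock = new InterProcessMutex(client, "/locks/form-submit");
        if (lock.acquire(5, TimeUnit.SECONDS)) {
            try {
                // critical section: handle the form submission exactly once
            } finally {
                lock.release();
            }
        }
        client.close();
    }
}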

II. Feature design and strengths

ZK's main use cases go far beyond the consistency arbitration it was originally designed for. It is this popular because of its flexible feature design: simple combinations of features satisfy many kinds of requirements. Again I describe the features roughly in order of how popular they feel to me, from high to low.

1. Notification callbacks. Creating nodes and setting watchers makes it very easy to build callback notifications. All ZK applications build on this feature; without it, none of the machine-monitoring applications could be handled, and the later service registration and discovery patterns would never have emerged. Very few systems actually provide a callback notification mechanism between the nodes of a distributed system, possibly only ZK (can anyone offer other examples?).

2. Reliable storage. One of the original design requirements and the best-implemented part of ZK: as reliable storage, ZK has essentially never caused problems, and this alone is enough to guarantee its popularity.

3. Connection state maintenance. ZK automatically tracks changes in the communication state between the client application and the servers, which makes it fairly simple to keep track of which members of the system can communicate. The main win is not having to handle the messy problem of monitoring connection state yourself, for example automatically releasing a node and firing a callback after a disconnect (a small sketch of this pattern follows this list).

4. File-system model. Exposing a file-like interface rather than a lock interface is more general; the file and directory concepts of the file-system model can be mapped onto many other models that have hierarchical relationships.

5. Auto-incrementing sequence numbers. This contains the essence of a lock, but because of ZK's model the comparison and arbitration have to be done on the client side.
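
To make the connection-state point (item 3) concrete, here is a hedged sketch of how an application typically leans on this feature with Curator: register a connection state listener and an ephemeral membership node. The paths and names are illustrative assumptions:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class MembershipDemo {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3));
        // log every connection state change (SUSPENDED, LOST, RECONNECTED, ...)
        client.getConnectionStateListenable().addListener(
                (c, newState) -> System.out.println("connection state: " + newState));
        client.start();
        // an ephemeral node disappears automatically when this session ends,
        // so watchers on /members see the member leave without extra bookkeeping
        client.create().creatingParentsIfNeeded().withMode(CreateMode.EPHEMERAL)
                .forPath("/members/worker-1", new byte[0]);
        Thread.sleep(Long.MAX_VALUE);
    }
}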

III. Implementation choices and their merits

ZK's feature design is excellent in its own right, and the implementation techniques chosen are also solid, embodying considerable distributed-systems engineering experience. Below, from my own understanding, I discuss what is particularly good about these implementation choices and what the design reasoning behind them may have been.

1. Communication and state handling

Encoding and decoding are based on Jute, which keeps things generic, and using NIO or Netty for server-side communication is a standard choice.

2. The Zab protocol and Paxos

ZK uses the Zab protocol to guarantee the consistency and reliability of the overall system formed by the deployed machines. This distributed protocol is similar to Paxos but more concrete and effective; a practical Paxos implementation runs into many issues the protocol does not define, and Google engineers even published a dedicated paper describing how many pitfalls they hit while engineering Paxos for Chubby.

In Zab, the gap between the election phase and normal operation is bridged with a catch-up phase, and the crucial election phase uses a thoroughly engineering-driven algorithm, "fast leader election" (which does not appear to have a formal proof); the algorithm is blunt but effective and very simple to implement.

Recently Raft, an engineering-simplified relative of Paxos, has become very popular, and every system that considers Paxos is implementing Raft. Its flow is quite similar to Zab, but its leader election is simpler (possibly resulting in slower elections than Zab), and unlike Zab that part of the algorithm cannot simply be swapped out. My personal view is that Zab is actually easier to understand than Raft and easy to implement in engineering terms. (Why is it not as popular as Raft? Perhaps because Zab's leader-election part is designed in an overly complex way, although Raft has not yet been validated by an industrial-grade system.)

3. Using the JVM

As a system whose priorities are stability and consistency, ZK certainly gives up some performance. One would expect implementers of such a system to first lean on the raw speed of the implementation language to compensate, and indeed there are many C++ implementations of ZK-like systems (such as Chubby), yet none of these newer systems matches ZK's adoption.

Arguably a key reason for ZK's popularity is precisely that it sacrifices some potential performance by running on the JVM. The virtual machine shields it from heterogeneous underlying systems, so ZK can be deployed stably and easily on many machines whose configuration and performance may all differ. I think this is also why so many distributed systems are built on the JVM stack: distributed systems need many machines that cannot all have identical configurations, and machines must be easy to replace physically or upgrade; for now only the JVM provides this level of virtualization shielding so simply.

Of course, Docker container technology has recently taken off, and lightweight virtualization is rising extremely quickly, giving heterogeneous systems a simpler and more customizable virtualization layer underneath; this may well change the underlying technology stack of distributed systems.

Centralized parameter management with ZooKeeper

Preface

Application projects always have parameters, which are usually stored either in local configuration files or in in-memory variables. For clusters that are not large and whose configuration rarely changes, both approaches work well; but once the cluster grows and configuration changes become more and more frequent, relying on them gets harder and harder. We want to be able to change parameters globally and quickly, which calls for centralized parameter management; below, some of ZooKeeper's features are used to implement a simple version of it.

Environment

jdk:1.7.0_80
zookeeper:3.4.3
curator:2.6.0
spring:3.1.2

Maven dependencies

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>3.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>3.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-beans</artifactId>
    <version>3.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.3</version>
    <exclusions>
        <exclusion>
            <groupId>com.sun.jmx</groupId>
            <artifactId>jmxri</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.sun.jdmk</groupId>
            <artifactId>jmxtools</artifactId>
        </exclusion>
        <exclusion>
            <groupId>javax.jms</groupId>
            <artifactId>jms</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>2.6.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>2.6.0</version>
</dependency>

Goals

1. Multiple nodes such as /app1 and /app2 can be watched at the same time;

2. Configuring just /app1 should be enough to also watch its children such as /app1/modual1 and their children such as /app1/modual1/xxx/…;

3. On server startup, the data of all child nodes under the specified parent nodes is loaded;

4. When a node is added or a node's data is updated, a notification is delivered dynamically, so the code always sees the latest data;

5. Spring configuration can be initialized with parameters read from ZooKeeper.

Implementation

A ZKWatcher class is provided to establish the connection to ZooKeeper, watch nodes, initialize node data, update node data, store node data, and so on.

1. Watching multiple nodes at the same time

A string array lets the user list the nodes to watch:

private String[] keyPatterns;
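
The listing above never shows how the Curator client behind these calls is created; the following is a hypothetical sketch of what ZKWatcher's watcherKeys() method might do, reusing the CuratorFrameworkFactory style from the test code later in this article (the connect string and retry policy are assumptions):

private CuratorFramework client;

public void watcherKeys() {
    client = CuratorFrameworkFactory.builder()
            .connectString("127.0.0.1:2181")
            .sessionTimeoutMs(5000)
            .retryPolicy(new ExponentialBackoffRetry(1000, 3))
            .build();
    client.start();
    try {
        for (String pattern : keyPatterns) {
            // listChildren/readPath/watcherPath are the methods shown below
            for (String path : listChildren(pattern)) {
                keyValueMap.put(path, readPath(path));
                watcherPath(path);
            }
        }
    } catch (Exception e) {
        logger.error("init zk watcher error", e);
    }
}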

2. Watching child nodes and their children

Recursion is used to collect the children of a watched node:

private List<String> listChildren(String path) throws Exception {
    List<String> pathList = new ArrayList<String>();
    pathList.add(path);
    List<String> list = client.getChildren().forPath(path);
    if (list != null && list.size() > 0) {
        for (String cPath : list) {
            String temp = "";
            if ("/".equals(path)) {
                temp = path + cPath;
            } else {
                temp = path + "/" + cPath;
            }
            pathList.addAll(listChildren(temp));
        }
    }
    return pathList;
}

3. Initializing node data at server startup

All the nodes have been collected recursively above, so we can iterate over them, read every node's data, and store it in a Map:

private Map<String, String> keyValueMap = new ConcurrentHashMap<String, String>();

if (pathList != null && pathList.size() > 0) {
    for (String path : pathList) {
        keyValueMap.put(path, readPath(path));
        watcherPath(path);
    }
}

private String readPath(String path) throws Exception {
    byte[] buffer = client.getData().forPath(path);
    String value = new String(buffer);
    logger.info("readPath:path = " + path + ",value = " + value);
    return value;
}

4. Watching for node data changes

PathChildrenCache is used to listen for the CHILD_ADDED, CHILD_UPDATED and CHILD_REMOVED events of the child nodes:

private void watcherPath(String path) {
    PathChildrenCache cache = null;
    try {
        cache = new PathChildrenCache(client, path, true);
        cache.start(StartMode.POST_INITIALIZED_EVENT);
        cache.getListenable().addListener(new PathChildrenCacheListener() {

            @Override
            public void childEvent(CuratorFramework client, PathChildrenCacheEvent event) throws Exception {
                switch (event.getType()) {
                case CHILD_ADDED:
                    logger.info("CHILD_ADDED," + event.getData().getPath());
                    watcherPath(event.getData().getPath());
                    keyValueMap.put(event.getData().getPath(), new String(event.getData().getData()));
                    break;
                case CHILD_UPDATED:
                    logger.info("CHILD_UPDATED," + event.getData().getPath());
                    keyValueMap.put(event.getData().getPath(), new String(event.getData().getData()));
                    break;
                case CHILD_REMOVED:
                    logger.info("CHILD_REMOVED," + event.getData().getPath());
                    break;
                default:
                    break;
                }
            }
        });
    } catch (Exception e) {
        if (cache != null) {
            try {
                cache.close();
            } catch (IOException e1) {
            }
        }
        logger.error("watch path error", e);
    }
}

5. Initializing Spring configuration with parameters read from ZooKeeper

Implement a custom PropertyPlaceholderConfigurer, ZKPropPlaceholderConfigurer:

public class ZKPropPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

    private ZKWatcher zkwatcher;

    @Override
    protected Properties mergeProperties() throws IOException {
        return loadPropFromZK(super.mergeProperties());
    }

    /**
     * Load the configured constants from ZK.
     * 
     * @param result
     * @return
     */
    private Properties loadPropFromZK(Properties result) {
        zkwatcher.watcherKeys();
        zkwatcher.fillProperties(result);
        return result;
    }
        ......
}
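
The watcherKeys() and fillProperties() methods used above are elided from the original listing; purely as a hypothetical completion, fillProperties could copy the cached node values into the Properties object so that placeholders such as ${/a2/m1} resolve:

public void fillProperties(Properties props) {
    // assumption: the znode path itself is used as the property key, e.g. "/a3/m1/v2"
    for (Map.Entry<String, String> entry : keyValueMap.entrySet()) {
        props.put(entry.getKey(), entry.getValue());
    }
}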

With the above in place, a configuration as simple as the following achieves the goals:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd">

    <bean id="zkwatcher" class="zh.maven.DynamicConf.ZKWatcher">
        <property name="keyPatterns" value="/a2,/a3/m1" />
    </bean>

    <bean id="propertyConfigurer" class="zh.maven.DynamicConf.ZKPropPlaceholderConfigurer">
        <property name="zkwatcher" ref="zkwatcher"></property>
    </bean>

    <bean id="person" class="zh.maven.DynamicConf.Person">
        <property name="name">
            <value>${/a2/m1}</value>
        </property>
        <property name="address">
            <value>${/a3/m1/v2}</value>
        </property>
        <property name="company">
            <value>${/a3/m1/v2/t2}</value>
        </property>
    </bean>
</beans>

Full source code (SVN): http://code.taobao.org/svn/temp-pj/DynamicConf

Testing

1. First start ZooKeeper

zkServer.cmd

2. Initialize the nodes to be used

public class Create_Node {

    static String path = "/a3/m1/v2/t2";
    static CuratorFramework client = CuratorFrameworkFactory.builder()
            .connectString("127.0.0.1:2181").sessionTimeoutMs(5000)
            .retryPolicy(new ExponentialBackoffRetry(1000, 3)).build();

    public static void main(String[] args) throws Exception {
        client.start();
        client.create().creatingParentsIfNeeded()
                .withMode(CreateMode.PERSISTENT)
                .forPath(path, "init".getBytes());
    }
}

Create the nodes for ZKWatcher to watch; based on the configuration above, initialize /a3/m1/v2/t2 and /a2/m1/v1/t1.

3. Run Main to verify both the initialization from the configuration file and the dynamic retrieval of parameters in code

public class Main {

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new ClassPathXmlApplicationContext(new String[] { "spring-config.xml" });
        Person person = (Person) context.getBean("person");
        System.out.println(person.toString());

        ZKWatcher zkwatcher = (ZKWatcher) context.getBean("zkwatcher");
        while (true) {
            Person p = new Person(zkwatcher.getKeyValue("/a2/m1"), zkwatcher.getKeyValue("/a3/m1/v2"),
                    zkwatcher.getKeyValue("/a3/m1/v2/t2"));
            System.out.println(p.toString());

            Thread.sleep(1000);
        }
    }
}

4. Watch the logs while updating a parameter:

public class Set_Data {

    static String path = "/a3/m1/v2/t2";
    static CuratorFramework client = CuratorFrameworkFactory.builder().connectString("127.0.0.1:2181")
            .sessionTimeoutMs(5000).retryPolicy(new ExponentialBackoffRetry(1000, 3)).build();

    public static void main(String[] args) throws Exception {
        client.start();
        Stat stat = new Stat();
        System.out.println(stat.getVersion());
        System.out.println("Success set node for :" + path + ",new version:"
                + client.setData().forPath(path, "codingo_v2".getBytes()).getVersion());
    }
}

Part of the log output:

2017-08-05 18:04:57 - watcher path : [/a2, /a2/m1, /a2/m1/v1, /a2/m1/v1/t2, /a3/m1, /a3/m1/v2, /a3/m1/v2/t2]
2017-08-05 18:04:57 - readPath:path = /a2,value = 
2017-08-05 18:04:57 - readPath:path = /a2/m1,value = zhaohui
2017-08-05 18:04:57 - readPath:path = /a2/m1/v1,value = 
2017-08-05 18:04:57 - readPath:path = /a2/m1/v1/t2,value = init
2017-08-05 18:04:57 - readPath:path = /a3/m1,value = 
2017-08-05 18:04:57 - readPath:path = /a3/m1/v2,value = nanjing
2017-08-05 18:04:57 - readPath:path = /a3/m1/v2/t2,value = codingo_v10
2017-08-05 18:04:57 - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@182f4aea: defining beans [zkwatcher,propertyConfigurer,person]; root of factory hierarchy
name = zhaohui,address = nanjing,company = codingo_v10
name = zhaohui,address = nanjing,company = codingo_v10
2017-08-05 18:04:57 - CHILD_ADDED,/a2/m1
2017-08-05 18:04:57 - CHILD_ADDED,/a3/m1/v2
2017-08-05 18:04:57 - CHILD_ADDED,/a2/m1/v1
2017-08-05 18:04:57 - CHILD_ADDED,/a2/m1/v1/t2
2017-08-05 18:04:57 - CHILD_ADDED,/a3/m1/v2/t2
name = zhaohui,address = nanjing,company = codingo_v10
name = zhaohui,address = nanjing,company = codingo_v10
name = zhaohui,address = nanjing,company = codingo_v10
2017-08-05 18:05:04 - CHILD_UPDATED,/a3/m1/v2/t2
name = zhaohui,address = nanjing,company = codingo_v11
name = zhaohui,address = nanjing,company = codingo_v11

Summary

With ZooKeeper we have implemented a simple parameter platform. Of course there is still much to optimize before using it in production; the point of this article is to present an approach. Besides ZooKeeper, an MQ or a distributed cache could also be used to build such a parameter platform.