[Django Notes] uwsgi + nginx configuration

Install nginx

Install and configure uwsgi

pip install uwsgi

Recall that once php-fpm is installed you just start it and you are done; about the only thing to configure is the path to php.
uwsgi, on the other hand, needs a whole pile of startup parameters, so it is easier to put them in a configuration file such as uwsgi_conf.ini. Here is a demo:

# start uwsgi from this configuration file
[uwsgi]
# project directory
chdir=/var/www/path/to/django/
# the project's application (the wsgi.py of the django project)
module=pics.wsgi:application
# path of the unix socket file
socket=/var/www/path/to/django/script/uwsgi.sock
# number of worker processes
workers=5
pidfile=/var/www/path/to/django/script/uwsgi.pid
# IP and port of the built-in HTTP server
http=127.0.0.1:80
# static file mapping
static-map=/static=/var/www/path/to/django/static
# user and group that run uwsgi
uid=root
gid=root
# enable the master process
master=true
# remove the unix socket and pid file when the service stops
vacuum=true
# serialize accept() if possible (avoid the thundering herd)
thunder-lock=true
# enable threads
enable-threads=true
# harakiri: kill requests that run longer than this many seconds
harakiri=30
# post buffering
post-buffering=4096
# daemonize and write the log to this file
daemonize=/var/www/path/to/django/script/uwsgi.log

Notes on the configuration file:

  • Start the service with uwsgi [xxx.ini]

  • The pid, sock and log files, together with this ini file, can all live in a script folder created under the django project directory

  • nginx connects to uwsgi by reading the sock file

  • The pid file matters: uwsgi --stop [xxx.pid] and uwsgi --reload [xxx.pid] stop and restart the service

  • Since uwsgi only bridges to nginx, the IP is 127.0.0.1. My nginx HTTP port is not 80, so port 80 can be used here

After starting it, you can test access to django at http://127.0.0.1 (on Linux I simply ran curl 127.0.0.1).
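As a minimal sketch of starting and controlling the service (the file locations follow the demo config above; adjust them to your own layout):

# start uwsgi from the ini file (it daemonizes and logs to uwsgi.log)
uwsgi --ini /var/www/path/to/django/script/uwsgi_conf.ini
# quick smoke test against the built-in HTTP server
curl -I http://127.0.0.1/
# reload / stop through the pid file
uwsgi --reload /var/www/path/to/django/script/uwsgi.pid
uwsgi --stop /var/www/path/to/django/script/uwsgi.pid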

Configuring uwsgi in nginx.conf

A php service is already configured on this machine, and I want to reach django through the sub-path /pics:

# project path handled by uwsgi
location /pics { 
    include uwsgi_params; # nginx module used to talk to uWSGI
    uwsgi_connect_timeout 30; # timeout for connecting to uWSGI
    uwsgi_pass unix:/var/www/path/to/django/script/uwsgi.sock; # the uwsgi sock file; all dynamic requests are handed over to it
}
# static files
location /pics/static/ {
    alias /var/www/path/to/django/static/;
    index index.html index.htm;
}

Reload nginx with nginx -s reload and you are done.
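A small sketch of that last step, with a configuration check first (the /pics path follows the location block above):

# validate the configuration, then reload and hit the django sub-path
nginx -t && nginx -s reload
curl -I http://127.0.0.1/pics/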

Installing and configuring Jenkins

I. What is Jenkins

Jenkins is an open-source project that provides an open, easy-to-use platform for continuous integration. It is a CI tool written in Java that monitors continuously repeated work; its features include:

Continuous release/testing of software versions.
Monitoring jobs triggered by external calls.

II. Installing Jenkins

Jenkins can be installed in many ways; see the official documentation at https://jenkins.io/doc/book/installing/ . Here I use the war package and run it inside a Tomcat container.

1. Install the JDK

Install JDK 1.8 or a later version.

2. Install Tomcat

I used Tomcat 7.

3. Install Jenkins

Download the war from the official site: http://mirrors.jenkins.io/war-stable/latest/jenkins.war .
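A minimal sketch of deploying it (the Tomcat path is an assumption; with autoDeploy enabled the war is unpacked and started automatically):

# fetch the war and drop it into Tomcat's webapps directory, then start Tomcat
wget http://mirrors.jenkins.io/war-stable/latest/jenkins.war
cp jenkins.war /usr/local/tomcat/webapps/
/usr/local/tomcat/bin/startup.sh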

After it starts, a .jenkins directory is created under the home directory; this is where our configuration data is stored.

III. Configuring Jenkins

1. Initialization

On the first visit you are shown a page asking for the initial password; just follow its instructions.

For reasons everyone knows, the update server is blocked from here, so we cannot install plugins at this step; just skip it.

In the last step, set up the administrator account and password and you are done.

2. Plugin update server

Log in, go to Manage Jenkins -> Manage Plugins -> Advanced, and change the update site URL to the address below.
http://ftp.tsukuba.wide.ad.jp/software/jenkins/updates/current/update-center.json

After that we can install plugins online.

3. Configure the JDK

JDK 1.8 is already installed at /usr/java/default/. Go to Manage Jenkins -> Global Tool Configuration.

4. Configure Maven

Download Maven from the official site and extract it under /usr/local; the URL I used was http://mirrors.shuosc.org/apache/maven/maven-3/3.5.2/binaries/apache-maven-3.5.2-bin.tar.gz .

Go to Manage Jenkins -> Global Tool Configuration.

IV. Installing plugins

1. Install the Git plugin

First install a git client on the Jenkins server, either with yum or by compiling a newer version from source.

Wait quietly for the installation to finish.

Go to Manage Jenkins -> Global Tool Configuration.

2. Install the Maven plugin

At this point there is still no Maven project type when creating a job, because the plugin has not been installed yet.

Now click New Item and the Maven project build type is available.

3. Configure the mail server

Go to Manage Jenkins -> Configure System.

Jenkins Platform Deployment

Continuous integration should be a familiar term by now, and Jenkins is certainly a familiar tool. For people who used to do PHP operations (where no code compilation is needed), building Java and similar projects may be unfamiliar. For all the grand talk, continuous integration boils down to this: pull the source code from a Git or SVN repository, have a tool such as Jenkins build it according to a configuration (a Java project is typically packaged into a war or zip file), and then push the artifact to the target servers for release. That is my understanding of what continuous integration does.

Below is a brief record of the most commonly used CI tool: Jenkins.

  • OS: CentOS Linux release 7.3.1611
  • Java version: java version "1.8.0_121"
  • Tomcat: 8

I will not go into this base environment; if it feels like too much trouble you can use the OneinStack one-click installer, and afterwards tune the configuration to your needs.

Jenkins can be installed in several ways; on CentOS you can use the RPM package, the war package, Docker, and so on (see the official Jenkins downloads). This article deploys the war package under Tomcat.

For security reasons, servers I run personally almost never run Tomcat or other middleware as root (a lesson learned the hard way), so first adjust the user that starts Tomcat.

I. Create a regular user to run Tomcat

A dedicated Tomcat instance will run Jenkins here, so simply create a jenkins user.

[root@jenkins ~] useradd jenkins
[root@jenkins ~] chown jenkins.jenkins -R /usr/local/tomcat_jenkins
#If tomcat was installed with the oneinstack one-click installer, just change TOMCAT_USER=jenkins in the /etc/init.d/tomcat script; if tomcat was installed by hand and has no init script, start it with su username -c:
[root@jenkins ~] su jenkins -c /usr/local/tomcat_jenkins/bin/startup.sh

If you run Tomcat directly as root, the steps above are unnecessary.

II. Initialize Jenkins

Since we deploy the war package, simply dropping it into Tomcat's webapps directory is enough for it to be unpacked and deployed automatically (autoDeploy="true"). Then browse to http://yourIPaddr:8080/jenkins/ and you will see the Jenkins initialization page, which asks you to open a file on the server to obtain the initial password.
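That file can simply be printed on the server; a sketch (the home directory depends on the user running Tomcat, here the jenkins user created above):

# initial admin password requested by the setup wizard
cat /home/jenkins/.jenkins/secrets/initialAdminPassword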

When I opened the system settings there were, surprisingly, warnings about a broken reverse proxy setup and "Your container doesn't use UTF-8 to decode URLs. ...". I am not sure whether this is specific to this version. I simply ignored the reverse-proxy warning; as for the UTF-8 one, I confirmed that URIEncoding="UTF-8" is already set in the Tomcat configuration, and since Chinese characters displayed correctly I did not investigate further.

III. Basic plugin management

A freshly installed Jenkins has only basic functionality; features such as fetching code from a Git repository or building with a particular framework all require separate plugins.

The plugins are installed through Manage Jenkins -> Manage Plugins -> Available. Here I only need to access a Git repository, package the Java code with Maven, and push with Ansible, so installing the Ansible plugin, Git plugin and Maven Integration plugin is enough.

Once the plugins are installed, the Maven project type appears on the New Item page.

State storage in a Raft implementation, based on etcd

Paxos is probably the most famous distributed consensus algorithm, while Raft is probably the most popular one. Given my limited experience, reading the paper alone did not take my understanding much further. I had heard that impressive projects such as Kubernetes, Docker Swarm and CockroachDB all use Raft; as a technology proven in large-scale production it seemed well worth studying, and etcd's Raft implementation is open source. After all, "in front of the source code, there are no secrets".

(figure: a replicated state machine)

Whether Paxos or Raft, both are about maintaining an RSM (Replicated State Machine), as shown in the figure above. For an RSM, state storage is crucial. In this post I analyze Raft's state storage based on etcd's implementation. Raft state is persisted mainly through snapshots and the WAL (write-ahead log).

  • Like many databases, etcd uses a WAL to keep data safe (recovery after a crash or outage). Every transactional operation in etcd (i.e. every write) is first written to a transaction file; that file is the WAL.

  • In addition, as a highly available KV store, etcd cannot rely on log replay alone to recover data, so it also provides snapshots. A snapshot periodically saves the whole database into a single file; this not only shortens log replay but also reduces how much WAL must be kept, since older WAL files can then be deleted.

etcd uses protobuf to define its formats, snapshots and log entries included. Part of the raft/raft.proto file:

enum EntryType {
    EntryNormal     = 0;
    EntryConfChange = 1;
}

message Entry {
    optional uint64     Term  = 2 [(gogoproto.nullable) = false]; // must be 64-bit aligned for atomic operations
    optional uint64     Index = 3 [(gogoproto.nullable) = false]; // must be 64-bit aligned for atomic operations
    optional EntryType  Type  = 1 [(gogoproto.nullable) = false];
    optional bytes      Data  = 4;
}

message SnapshotMetadata {
    optional ConfState conf_state = 1 [(gogoproto.nullable) = false];
    optional uint64    index      = 2 [(gogoproto.nullable) = false];
    optional uint64    term       = 3 [(gogoproto.nullable) = false];
}

message Snapshot {
    optional bytes            data     = 1;
    optional SnapshotMetadata metadata = 2 [(gogoproto.nullable) = false];
}

Here, Entry is the log entry: one Entry represents one log record.

1. The interface provided by the Raft library

etcd's Raft library is not usable straight out of the box: the application has to implement storage I/O and network communication. Storage I/O is defined in the Raft library as a Storage interface, which the library uses to read logs, snapshots and other data. The library ships with a MemoryStorage implementation, but since it is purely in-memory it cannot be relied on alone for persisted data.

The Storage interface is defined as follows:

type Storage interface {
    // InitialState returns the saved HardState and ConfState information.
    InitialState() (pb.HardState, pb.ConfState, error)
    // Entries returns a slice of log entries in the range [lo,hi).
    // MaxSize limits the total size of the log entries returned, but
    // Entries returns at least one entry if any.
    Entries(lo, hi, maxSize uint64) ([]pb.Entry, error)
    // Term returns the term of entry i, which must be in the range
    // [FirstIndex()-1, LastIndex()]. The term of the entry before
    // FirstIndex is retained for matching purposes even though the
    // rest of that entry may not be available.
    Term(i uint64) (uint64, error)
    // LastIndex returns the index of the last entry in the log.
    LastIndex() (uint64, error)
    // FirstIndex returns the index of the first log entry that is
    // possibly available via Entries (older entries have been incorporated
    // into the latest Snapshot; if storage only contains the dummy entry the
    // first log entry is not available).
    FirstIndex() (uint64, error)
    // Snapshot returns the most recent snapshot.
    // If snapshot is temporarily unavailable, it should return ErrSnapshotTemporarilyUnavailable,
    // so raft state machine could know that Storage needs some time to prepare
    // snapshot and call Snapshot later.
    Snapshot() (pb.Snapshot, error)
}

Since MemoryStorage alone is not enough, let's look at how etcd itself uses the Raft library. etcd's Storage does reuse MemoryStorage, but only as an in-memory cache. On every transactional operation, etcd first flushes the content to the persistent storage device and only then writes it into MemoryStorage. As noted above, Storage exists only to report state back to the Raft library, so it merely has to stay consistent with the persisted content, which is easy to guarantee on a single machine. Also, the Raft library accesses Storage through raftlog; see etcd/raft/raft.go for the details.

2. etcd's concrete implementation

The etcd server persists state through the WAL and snapshots. etcd uses a wrapper, a struct called storage. To avoid confusion, here is the code (etcd/etcdserver/storage.go):

type Storage interface {
    // Save function saves ents and state to the underlying stable storage.
    // Save MUST block until st and ents are on stable storage.
    Save(st raftpb.HardState, ents []raftpb.Entry) error
    // SaveSnap function saves snapshot to the underlying stable storage.
    SaveSnap(snap raftpb.Snapshot) error
    // Close closes the Storage and performs finalization.
    Close() error
}

type storage struct {
    *wal.WAL
    *snap.Snapshotter
}

func NewStorage(w *wal.WAL, s *snap.Snapshotter) Storage {
    return &storage{w, s}
}

Note that this Storage has nothing to do with the earlier one; be careful not to mix them up.

Thanks to Go's struct embedding, the storage struct can call the methods of WAL and Snapshotter directly, because the embedded fields have no names. So how does etcd combine the Raft library's MemoryStorage with this storage struct? The answer is in etcd/etcdserver/raft.go. etcd wraps the Raft library further into a raftNode, which embeds an anonymous raftNodeConfig member, defined as follows:

type raftNodeConfig struct {
    // to check if msg receiver is removed from cluster
    isIDRemoved func(id uint64) bool
    raft.Node
    raftStorage *raft.MemoryStorage
    storage     Storage
    heartbeat   time.Duration // for logging
    // transport specifies the transport to send and receive msgs to members.
    // Sending messages MUST NOT block. It is okay to drop messages, since
    // clients should timeout and reissue their messages.
    // If transport is nil, server will panic.
    transport rafthttp.Transporter
}

The source speaks for itself: raftStorage is what gets handed to the Raft library, while storage is etcd's persistent store. At runtime, etcd keeps the two consistent by calling them back to back. Take an etcd server restart as an example of how this synchronization works, and look at the implementation of restartNode():

func restartNode(cfg ServerConfig, snapshot *raftpb.Snapshot) (types.ID, *membership.RaftCluster, raft.Node, *raft.MemoryStorage, *wal.WAL) {
    var walsnap walpb.Snapshot
    if snapshot != nil {
        walsnap.Index, walsnap.Term = snapshot.Metadata.Index, snapshot.Metadata.Term
    }
    w, id, cid, st, ents := readWAL(cfg.WALDir(), walsnap)

    plog.Infof("restarting member %s in cluster %s at commit index %d", id, cid, st.Commit)
    cl := membership.NewCluster("")
    cl.SetID(cid)
    s := raft.NewMemoryStorage()
    if snapshot != nil {
        s.ApplySnapshot(*snapshot)
    }
    s.SetHardState(st)
    s.Append(ents)
    c := &raft.Config{
        ID:              uint64(id),
        ElectionTick:    cfg.ElectionTicks,
        HeartbeatTick:   1,
        Storage:         s,
        MaxSizePerMsg:   maxSizePerMsg,
        MaxInflightMsgs: maxInflightMsgs,
        CheckQuorum:     true,
    }

    n := raft.RestartNode(c)
    raftStatusMu.Lock()
    raftStatus = n.Status
    raftStatusMu.Unlock()
    advanceTicksForElection(n, c.ElectionTick)
    return id, cl, n, s, w
}

The main logic of this function is to read the snapshot and the WAL, and then restore MemoryStorage's state through s.SetHardState() and s.Append(). The same pattern is used while etcd is running; see the start() method in raft.go:

    if err := r.storage.Save(rd.HardState, rd.Entries); err != nil {
        plog.Fatalf("raft save state and entries error: %v", err)
    }
    if !raft.IsEmptyHardState(rd.HardState) {
        proposalsCommitted.Set(float64(rd.HardState.Commit))
    }
    // gofail: var raftAfterSave struct{}
    r.raftStorage.Append(rd.Entries)

I trimmed part of the code so the overall logic is easier to see. The back-to-back calls r.storage.Save() and r.raftStorage.Append() are what keep storage and raftStorage consistent.

That is it for state storage, which is only the basics of Raft. Next I will explore Raft's log replication, leader election and commit handling, and of course the RPCs.

Building an etcd cluster with Docker

etcd is an open-source project started by the CoreOS team (written in Go; many projects of this kind are, which says something about the language). It implements a distributed key-value store and service discovery, much like ZooKeeper and Consul, including a REST API for access. Its characteristics:

  • Simple: easy to install and use, with a REST API for interaction

  • Secure: supports HTTPS with SSL certificates

  • Fast: supports on the order of 10k concurrent reads/writes per second

  • Reliable: uses the Raft algorithm to keep the distributed data available and consistent

etcd can be used as a single instance or as a cluster. Many projects rely on etcd for service discovery, for example CoreOS and Kubernetes, so below we use Docker to build a simple etcd cluster.

1. Installing on a host

Without Docker, installing etcd directly on a host is also very simple.

Linux install commands:

$ curl -L  https://github.com/coreos/etcd/releases/download/v3.3.0-rc.0/etcd-v3.3.0-rc.0-linux-amd64.tar.gz -o etcd-v3.3.0-rc.0-linux-amd64.tar.gz && 
sudo tar xzvf etcd-v3.3.0-rc.0-linux-amd64.tar.gz && 
cd etcd-v3.3.0-rc.0-linux-amd64 && 
sudo cp etcd* /usr/local/bin/

This simply copies the pre-built binaries into /usr/local/bin/; binaries for every version can be found at https://github.com/coreos/etcd/releases/ .

Mac OS install commands:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null
$ brew install etcd

Run the following to check whether etcd installed successfully:

$ etcd --version
etcd Version: 3.2.12
Git SHA: GitNotFound
Go Version: go1.9.2
Go OS/Arch: darwin/amd64

2. Building the cluster

To build the etcd cluster, we first create three Docker hosts with Docker Machine:

$ docker-machine create -d virtualbox manager1 && 
docker-machine create -d virtualbox worker1 && 
docker-machine create -d virtualbox worker2

$ docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.100:2376           v17.11.0-ce   
worker1    -        virtualbox   Running   tcp://192.168.99.101:2376           v17.11.0-ce   
worker2    -        virtualbox   Running   tcp://192.168.99.102:2376           v17.11.0-ce   

To avoid slow pulls of the official image from inside the Docker hosts, we also tag the etcd image and push it to a private registry:

$ docker tag quay.io/coreos/etcd 192.168.99.1:5000/quay.io/coreos/etcd:latest && 
docker push 192.168.99.1:5000/quay.io/coreos/etcd:latest && 
docker pull 192.168.99.1:5000/quay.io/coreos/etcd:latest

In addition, the private registry address must be configured on each Docker host and the three hosts restarted; for the specifics see: Docker 三剑客之 Docker Swarm. A sketch of the setting follows.
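For reference, a minimal sketch of that registry setting (assuming the daemon.json approach; the linked article covers the docker-machine specifics):

# on each Docker host: trust the plain-HTTP private registry, then restart the docker daemon
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.99.1:5000"]
}
EOF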

Once the Docker hosts are configured, use docker-machine ssh to enter each of the three hosts and run the Docker etcd commands below.

manager1 host (node1, 192.168.99.100):

$ docker run -d --name etcd \
    -p 2379:2379 \
    -p 2380:2380 \
    --volume=etcd-data:/etcd-data \
    192.168.99.1:5000/quay.io/coreos/etcd \
    /usr/local/bin/etcd \
    --data-dir=/etcd-data --name node1 \
    --initial-advertise-peer-urls http://192.168.99.100:2380 --listen-peer-urls http://0.0.0.0:2380 \
    --advertise-client-urls http://192.168.99.100:2379 --listen-client-urls http://0.0.0.0:2379 \
    --initial-cluster-state new \
    --initial-cluster-token docker-etcd \
    --initial-cluster node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380

worker1 host (node2, 192.168.99.101):

$ docker run -d --name etcd \
    -p 2379:2379 \
    -p 2380:2380 \
    --volume=etcd-data:/etcd-data \
    192.168.99.1:5000/quay.io/coreos/etcd \
    /usr/local/bin/etcd \
    --data-dir=/etcd-data --name node2 \
    --initial-advertise-peer-urls http://192.168.99.101:2380 --listen-peer-urls http://0.0.0.0:2380 \
    --advertise-client-urls http://192.168.99.101:2379 --listen-client-urls http://0.0.0.0:2379 \
    --initial-cluster-state new \
    --initial-cluster-token docker-etcd \
    --initial-cluster node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380

worker2 host (node3, 192.168.99.102):

$ docker run -d --name etcd \
    -p 2379:2379 \
    -p 2380:2380 \
    --volume=etcd-data:/etcd-data \
    192.168.99.1:5000/quay.io/coreos/etcd \
    /usr/local/bin/etcd \
    --data-dir=/etcd-data --name node3 \
    --initial-advertise-peer-urls http://192.168.99.102:2380 --listen-peer-urls http://0.0.0.0:2380 \
    --advertise-client-urls http://192.168.99.102:2379 --listen-client-urls http://0.0.0.0:2379 \
    --initial-cluster-state existing \
    --initial-cluster-token docker-etcd \
    --initial-cluster node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380

First, what each etcd parameter means (adapted from an etcd getting-started guide):

  • --name: node name, default default.
  • --data-dir: path where runtime data is kept, default ${name}.etcd.
  • --snapshot-count: number of committed transactions after which a snapshot is written to disk.
  • --heartbeat-interval: how often the leader sends heartbeats to the followers; default 100 ms.
  • --election-timeout: election timeout; if a follower receives no heartbeat within this interval it starts a new election; default 1000 ms.
  • --listen-peer-urls: address for peer traffic, e.g. http://ip:2380; separate multiple URLs with commas. All nodes must be able to reach it, so do not use localhost!
  • --listen-client-urls: address(es) serving clients, e.g. http://ip:2379,http://127.0.0.1:2379; clients connect here to talk to etcd.
  • --advertise-client-urls: the client URLs advertised for this node; this value is announced to the other cluster members.
  • --initial-advertise-peer-urls: the peer URL advertised for this node; this value is announced to the other cluster members.
  • --initial-cluster: information about all nodes in the cluster, in the form node1=http://ip1:2380,node2=http://ip2:2380,…; note that node1 here is the name given by that node's --name, and ip1:2380 is its --initial-advertise-peer-urls value.
  • --initial-cluster-state: new when bootstrapping a new cluster; existing when joining a cluster that already exists.
  • --initial-cluster-token: the cluster token, which must be unique per cluster. With it, recreating a cluster with the same configuration still produces a new cluster and new node UUIDs, avoiding conflicts between clusters and the unexpected errors they cause.

These settings can also be placed in a configuration file, by default /etc/etcd/etcd.conf.
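As a rough sketch of such a file (assuming the environment-variable style used by the packaged etcd; each ETCD_* variable mirrors the flag of the same name, and the values shown are node1's from above, with a data directory of my own choosing):

# /etc/etcd/etcd.conf
ETCD_NAME="node1"
ETCD_DATA_DIR="/var/lib/etcd/node1.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.100:2379"
ETCD_INITIAL_CLUSTER="node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="docker-etcd"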

Use docker ps to check whether the Docker etcd container came up:

$ docker ps
CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS              PORTS                              NAMES
463380d23dfe        192.168.99.1:5000/quay.io/coreos/etcd   "/usr/local/bin/et..."   2 hours ago         Up 2 hours          0.0.0.0:2379-2380->2379-2380/tcp   etcd

Then enter the etcd container on one of the Docker hosts:

$ docker exec -it etcd bin/sh

and run the following command (list the cluster members):

$ etcdctl member list
773d30c9fc6640b4: name=node2 peerURLs=http://192.168.99.101:2380 clientURLs=http://192.168.99.101:2379 isLeader=true
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=false
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false

The cluster has three members; node2 is the leader, and node1 and node3 are ordinary members.

etcdctl is etcd's command-line client (also written in Go). It wraps the etcd REST API so that operating etcd is convenient; a fuller etcdctl reference is listed at the end.

The commands above use etcd API version 2; we can switch to version 3 by hand:

$ export ETCDCTL_API=3 && /usr/local/bin/etcdctl put foo bar
OK

Some commands and their output differ quite a bit from the v2 version; for example, listing the members with the v3 API gives:

$ etcdctl member list
773d30c9fc6640b4, started, node2, http://192.168.99.101:2380, http://192.168.99.101:2379
b2b0bca2e0cfcc19, started, node3, http://192.168.99.102:2380, http://192.168.99.102:2379
c88e2cccbb287a01, started, node1, http://192.168.99.100:2380, http://192.168.99.100:2379

Now let's demonstrate one more scenario: removing a node from the cluster and then adding it back. To show Raft at work inside etcd, we pick the leader, node2, as the target.

In the etcd container of any host except node2, run the member-removal command (the member ID must be used; using the name fails):

$ etcdctl member remove 773d30c9fc6640b4
Member 773d30c9fc6640b4 removed from cluster f84185fa5f91bdf6

List the cluster members again (v2 API):

$ etcdctl member list
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=true
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false

node2, the leader, has been removed from the cluster, and through Raft node3 has been elected as the new leader.

Before adding node2 back into the cluster, we need to run:

$ etcdctl member add node2 --peer-urls="http://192.168.99.101:2380"
Member 22b0de6ffcd98f00 added to cluster f84185fa5f91bdf6

ETCD_NAME="node2"
ETCD_INITIAL_CLUSTER="node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380,node1=http://192.168.99.100:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

Note that ETCD_INITIAL_CLUSTER_STATE is existing, i.e. the --initial-cluster-state parameter we configure.

List the cluster members again (v2 API):

$ etcdctl member list
22b0de6ffcd98f00[unstarted]: peerURLs=http://192.168.99.101:2380
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=true
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false

Member 22b0de6ffcd98f00 now shows up in the unstarted state.

On node2, run the Docker etcd cluster command:

$ docker run -d --name etcd \
    -p 2379:2379 \
    -p 2380:2380 \
    --volume=etcd-data:/etcd-data \
    192.168.99.1:5000/quay.io/coreos/etcd \
    /usr/local/bin/etcd \
    --data-dir=/etcd-data --name node2 \
    --initial-advertise-peer-urls http://192.168.99.101:2380 --listen-peer-urls http://0.0.0.0:2380 \
    --advertise-client-urls http://192.168.99.101:2379 --listen-client-urls http://0.0.0.0:2379 \
    --initial-cluster-state existing \
    --initial-cluster-token docker-etcd \
    --initial-cluster node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380

The result is not the success we hoped for; check the logs:

$ docker logs etcd
2017-12-25 08:19:30.160967 I | etcdmain: etcd Version: 3.2.12
2017-12-25 08:19:30.161062 I | etcdmain: Git SHA: b19dae0
2017-12-25 08:19:30.161082 I | etcdmain: Go Version: go1.8.5
2017-12-25 08:19:30.161092 I | etcdmain: Go OS/Arch: linux/amd64
2017-12-25 08:19:30.161105 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
2017-12-25 08:19:30.161144 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2017-12-25 08:19:30.161195 I | embed: listening for peers on http://0.0.0.0:2380
2017-12-25 08:19:30.161232 I | embed: listening for client requests on 0.0.0.0:2379
2017-12-25 08:19:30.165269 I | etcdserver: name = node2
2017-12-25 08:19:30.165317 I | etcdserver: data dir = /etcd-data
2017-12-25 08:19:30.165335 I | etcdserver: member dir = /etcd-data/member
2017-12-25 08:19:30.165347 I | etcdserver: heartbeat = 100ms
2017-12-25 08:19:30.165358 I | etcdserver: election = 1000ms
2017-12-25 08:19:30.165369 I | etcdserver: snapshot count = 100000
2017-12-25 08:19:30.165385 I | etcdserver: advertise client URLs = http://192.168.99.101:2379
2017-12-25 08:19:30.165593 I | etcdserver: restarting member 773d30c9fc6640b4 in cluster f84185fa5f91bdf6 at commit index 14
2017-12-25 08:19:30.165627 I | raft: 773d30c9fc6640b4 became follower at term 11
2017-12-25 08:19:30.165647 I | raft: newRaft 773d30c9fc6640b4 [peers: [], term: 11, commit: 14, applied: 0, lastindex: 14, lastterm: 11]
2017-12-25 08:19:30.169277 W | auth: simple token is not cryptographically signed
2017-12-25 08:19:30.170424 I | etcdserver: starting server... [version: 3.2.12, cluster version: to_be_decided]
2017-12-25 08:19:30.171732 I | etcdserver/membership: added member 773d30c9fc6640b4 [http://192.168.99.101:2380] to cluster f84185fa5f91bdf6
2017-12-25 08:19:30.171845 I | etcdserver/membership: added member c88e2cccbb287a01 [http://192.168.99.100:2380] to cluster f84185fa5f91bdf6
2017-12-25 08:19:30.171877 I | rafthttp: starting peer c88e2cccbb287a01...
2017-12-25 08:19:30.171902 I | rafthttp: started HTTP pipelining with peer c88e2cccbb287a01
2017-12-25 08:19:30.175264 I | rafthttp: started peer c88e2cccbb287a01
2017-12-25 08:19:30.175339 I | rafthttp: added peer c88e2cccbb287a01
2017-12-25 08:19:30.178326 I | etcdserver/membership: added member cbd7fa8d01297113 [http://192.168.99.102:2380] to cluster f84185fa5f91bdf6
2017-12-25 08:19:30.178383 I | rafthttp: starting peer cbd7fa8d01297113...
2017-12-25 08:19:30.178410 I | rafthttp: started HTTP pipelining with peer cbd7fa8d01297113
2017-12-25 08:19:30.179794 I | rafthttp: started peer cbd7fa8d01297113
2017-12-25 08:19:30.179835 I | rafthttp: added peer cbd7fa8d01297113
2017-12-25 08:19:30.180062 N | etcdserver/membership: set the initial cluster version to 3.0
2017-12-25 08:19:30.180132 I | etcdserver/api: enabled capabilities for version 3.0
2017-12-25 08:19:30.180255 N | etcdserver/membership: updated the cluster version from 3.0 to 3.2
2017-12-25 08:19:30.180430 I | etcdserver/api: enabled capabilities for version 3.2
2017-12-25 08:19:30.183979 I | rafthttp: started streaming with peer c88e2cccbb287a01 (writer)
2017-12-25 08:19:30.184139 I | rafthttp: started streaming with peer c88e2cccbb287a01 (writer)
2017-12-25 08:19:30.184232 I | rafthttp: started streaming with peer c88e2cccbb287a01 (stream MsgApp v2 reader)
2017-12-25 08:19:30.185142 I | rafthttp: started streaming with peer c88e2cccbb287a01 (stream Message reader)
2017-12-25 08:19:30.186518 I | etcdserver/membership: removed member cbd7fa8d01297113 from cluster f84185fa5f91bdf6
2017-12-25 08:19:30.186573 I | rafthttp: stopping peer cbd7fa8d01297113...
2017-12-25 08:19:30.186614 I | rafthttp: started streaming with peer cbd7fa8d01297113 (writer)
2017-12-25 08:19:30.186786 I | rafthttp: stopped streaming with peer cbd7fa8d01297113 (writer)
2017-12-25 08:19:30.186815 I | rafthttp: started streaming with peer cbd7fa8d01297113 (writer)
2017-12-25 08:19:30.186831 I | rafthttp: stopped streaming with peer cbd7fa8d01297113 (writer)
2017-12-25 08:19:30.186876 I | rafthttp: started streaming with peer cbd7fa8d01297113 (stream MsgApp v2 reader)
2017-12-25 08:19:30.187224 I | rafthttp: started streaming with peer cbd7fa8d01297113 (stream Message reader)
2017-12-25 08:19:30.187647 I | rafthttp: stopped HTTP pipelining with peer cbd7fa8d01297113
2017-12-25 08:19:30.187682 I | rafthttp: stopped streaming with peer cbd7fa8d01297113 (stream MsgApp v2 reader)
2017-12-25 08:19:30.187873 I | rafthttp: stopped streaming with peer cbd7fa8d01297113 (stream Message reader)
2017-12-25 08:19:30.187895 I | rafthttp: stopped peer cbd7fa8d01297113
2017-12-25 08:19:30.187911 I | rafthttp: removed peer cbd7fa8d01297113
2017-12-25 08:19:30.188034 I | etcdserver/membership: added member b2b0bca2e0cfcc19 [http://192.168.99.102:2380] to cluster f84185fa5f91bdf6
2017-12-25 08:19:30.188059 I | rafthttp: starting peer b2b0bca2e0cfcc19...
2017-12-25 08:19:30.188075 I | rafthttp: started HTTP pipelining with peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.188510 I | rafthttp: started peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.188533 I | rafthttp: added peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.188795 I | etcdserver/membership: removed member 773d30c9fc6640b4 from cluster f84185fa5f91bdf6
2017-12-25 08:19:30.193643 I | rafthttp: started streaming with peer b2b0bca2e0cfcc19 (writer)
2017-12-25 08:19:30.193730 I | rafthttp: started streaming with peer b2b0bca2e0cfcc19 (writer)
2017-12-25 08:19:30.193797 I | rafthttp: started streaming with peer b2b0bca2e0cfcc19 (stream MsgApp v2 reader)
2017-12-25 08:19:30.194782 I | rafthttp: started streaming with peer b2b0bca2e0cfcc19 (stream Message reader)
2017-12-25 08:19:30.195663 I | raft: 773d30c9fc6640b4 [term: 11] received a MsgHeartbeat message with higher term from b2b0bca2e0cfcc19 [term: 12]
2017-12-25 08:19:30.195716 I | raft: 773d30c9fc6640b4 became follower at term 12
2017-12-25 08:19:30.195736 I | raft: raft.node: 773d30c9fc6640b4 elected leader b2b0bca2e0cfcc19 at term 12
2017-12-25 08:19:30.196617 E | rafthttp: streaming request ignored (ID mismatch got 22b0de6ffcd98f00 want 773d30c9fc6640b4)
2017-12-25 08:19:30.197064 E | rafthttp: streaming request ignored (ID mismatch got 22b0de6ffcd98f00 want 773d30c9fc6640b4)
2017-12-25 08:19:30.197846 E | rafthttp: streaming request ignored (ID mismatch got 22b0de6ffcd98f00 want 773d30c9fc6640b4)
2017-12-25 08:19:30.198242 E | rafthttp: streaming request ignored (ID mismatch got 22b0de6ffcd98f00 want 773d30c9fc6640b4)
2017-12-25 08:19:30.201771 E | etcdserver: the member has been permanently removed from the cluster
2017-12-25 08:19:30.202060 I | etcdserver: the data-dir used by this member must be removed.
2017-12-25 08:19:30.202307 E | etcdserver: publish error: etcdserver: request cancelled
2017-12-25 08:19:30.202338 I | etcdserver: aborting publish because server is stopped
2017-12-25 08:19:30.202364 I | rafthttp: stopping peer b2b0bca2e0cfcc19...
2017-12-25 08:19:30.202482 I | rafthttp: stopped streaming with peer b2b0bca2e0cfcc19 (writer)
2017-12-25 08:19:30.202504 I | rafthttp: stopped streaming with peer b2b0bca2e0cfcc19 (writer)
2017-12-25 08:19:30.204143 I | rafthttp: stopped HTTP pipelining with peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.204186 I | rafthttp: stopped streaming with peer b2b0bca2e0cfcc19 (stream MsgApp v2 reader)
2017-12-25 08:19:30.204205 I | rafthttp: stopped streaming with peer b2b0bca2e0cfcc19 (stream Message reader)
2017-12-25 08:19:30.204217 I | rafthttp: stopped peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.204228 I | rafthttp: stopping peer c88e2cccbb287a01...
2017-12-25 08:19:30.204241 I | rafthttp: stopped streaming with peer c88e2cccbb287a01 (writer)
2017-12-25 08:19:30.204255 I | rafthttp: stopped streaming with peer c88e2cccbb287a01 (writer)
2017-12-25 08:19:30.204824 I | rafthttp: stopped HTTP pipelining with peer c88e2cccbb287a01
2017-12-25 08:19:30.204860 I | rafthttp: stopped streaming with peer c88e2cccbb287a01 (stream MsgApp v2 reader)
2017-12-25 08:19:30.204878 I | rafthttp: stopped streaming with peer c88e2cccbb287a01 (stream Message reader)
2017-12-25 08:19:30.204891 I | rafthttp: stopped peer c88e2cccbb287a01

What does this long log tell us? Although we re-ran the etcd container command, etcd read the configuration left in its old data directory and tried to restore the previous cluster membership; but that old member had already been removed from the cluster, so the node just stays stopped.

The fix is simple: delete the etcd-data data volume we created earlier:

$ docker volume ls
DRIVER              VOLUME NAME
local               etcd-data

$ docker volume rm etcd-data
etcd-data

Then, on node2, re-run the Docker etcd cluster command from above, and this time it succeeds.

List the cluster members again (v2 API):

$ etcdctl member list
22b0de6ffcd98f00: name=node2 peerURLs=http://192.168.99.101:2380 clientURLs=http://192.168.99.101:2379 isLeader=false
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=true
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false

3. API operations

The etcd REST API is used for key-value operations and cluster member operations. A few simple examples follow; see the appendix for the detailed API.

3.1 Key-value management

Set a key:

$ curl http://127.0.0.1:2379/v2/keys/hello -XPUT -d value="hello world"
{"action":"set","node":{"key":"/hello","value":"hello world","modifiedIndex":17,"createdIndex":17}}

Get a key:

$ curl http://127.0.0.1:2379/v2/keys/hello
{"action":"get","node":{"key":"/hello","value":"hello world","modifiedIndex":17,"createdIndex":17}}

Delete a key:

$ curl http://127.0.0.1:2379/v2/keys/hello -XDELETE
{"action":"delete","node":{"key":"/hello","modifiedIndex":19,"createdIndex":17},"prevNode":{"key":"/hello","value":"hello world","modifiedIndex":17,"createdIndex":17}}

3.2 Member management

List all members of the cluster:

$ curl http://127.0.0.1:2379/v2/members
{"members":[{"id":"22b0de6ffcd98f00","name":"node2","peerURLs":["http://192.168.99.101:2380"],"clientURLs":["http://192.168.99.101:2379"]},{"id":"b2b0bca2e0cfcc19","name":"node3","peerURLs":["http://192.168.99.102:2380"],"clientURLs":["http://192.168.99.102:2379"]},{"id":"c88e2cccbb287a01","name":"node1","peerURLs":["http://192.168.99.100:2380"],"clientURLs":["http://192.168.99.100:2379"]}]}

Check whether the current node is the leader:

$ curl http://127.0.0.1:2379/v2/stats/leader
{"leader":"b2b0bca2e0cfcc19","followers":{"22b0de6ffcd98f00":{"latency":{"current":0.001051,"average":0.0029195000000000002,"standardDeviation":0.001646769458667484,"minimum":0.001051,"maximum":0.006367},"counts":{"fail":0,"success":10}},"c88e2cccbb287a01":{"latency":{"current":0.000868,"average":0.0022389999999999997,"standardDeviation":0.0011402923601720172,"minimum":0.000868,"maximum":0.004725},"counts":{"fail":0,"success":12}}}}

Show the current node's information:

$ curl http://127.0.0.1:2379/v2/stats/self
{"name":"node3","id":"b2b0bca2e0cfcc19","state":"StateLeader","startTime":"2017-12-25T06:00:28.803429523Z","leaderInfo":{"leader":"b2b0bca2e0cfcc19","uptime":"36m45.45263851s","startTime":"2017-12-25T08:13:02.103896843Z"},"recvAppendRequestCnt":6,"sendAppendRequestCnt":22}

Show the store statistics:

$ curl http://127.0.0.1:2379/v2/stats/store
{"getsSuccess":9,"getsFail":4,"setsSuccess":9,"setsFail":0,"deleteSuccess":3,"deleteFail":0,"updateSuccess":0,"updateFail":0,"createSuccess":7,"createFail":0,"compareAndSwapSuccess":0,"compareAndSwapFail":0,"compareAndDeleteSuccess":0,"compareAndDeleteFail":0,"expireCount":0,"watchers":0}

Of course, members can also be added and removed through the API; a sketch follows.
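As a sketch using the v2 members API (the peer URL and member ID reuse the values from the examples above):

# add a member: POST its peer URL
curl http://127.0.0.1:2379/v2/members -XPOST \
    -H "Content-Type: application/json" \
    -d '{"peerURLs":["http://192.168.99.101:2380"]}'
# remove a member by its ID
curl http://127.0.0.1:2379/v2/members/22b0de6ffcd98f00 -XDELETE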

4. API and etcdctl reference

etcd REST API overview (v2):

For more APIs see: https://coreos.com/etcd/docs/latest/v2/api.html and https://coreos.com/etcd/docs/latest/v2/members_api.html

etcdctl command reference: run etcdctl --help for the full list of commands.

[CentOS] Fixing crontab jobs that cannot read environment variables

1. Problem description

A data-processing shell script runs correctly when executed manually in a shell, but when it is put on the crontab list it fails with "matlab: command not found.".

Part of AutoRefreshData.sh:

[She@She ~]$ cat /home/She/data/AutoRefreshData.sh
#!/bin/bash
...
MatlabFile='/mnt/L/Data/main4mat.m'
chmod +x ${MatlabFile}
matlab  -nodesktop -nosplash -nojvm < ${MatlabFile} 1>running.log 2>running.err &

From a terminal, AutoRefreshData.sh runs correctly:

[She@She ~]$ /home/She/data/AutoRefreshData.sh
[She@She ~]$ cat ~/running.log

                            < M A T L A B (R) >
                  Copyright 1984-2015 The MathWorks, Inc.
                   R2015b (8.6.0.267246) 64-bit (glnxa64)
                              August 20, 2015


For online documentation, see http://www.mathworks.com/support
For product information, visit www.mathworks.com.

>> >> >> >> >> >> >> /mnt/L/Data/matFile/jpl16228.mat
>> 
[She@She ~]$ cat ~/running.err
[She@She ~]$  

Add the shell script to crontab:

[She@She ~]$ crontab -l
# part 2: refresh She data from FTP
08 12 *  *  * /home/She/data/AutoRefreshData.sh                             > /dev/null 2>&1

Run from crontab, it fails, with the following result:

[She@She ~]$ cat ~/running.log
[She@She ~]$ cat ~/running.err
/home/She/data/AutoRefreshData.sh: line 111: matlab: command not found 

2. Root cause and fix

Root cause: crontab has the bad habit of not reading environment variables from the user's profile by default, which is why a script that succeeds when run by hand often errors out once crontab runs it on a schedule.

Fix: source the environment variables explicitly at the top of the script; then nothing is left to chance.

In other words, every script starts with a header in the following form:

#!/bin/sh
. /etc/profile
. ~/.bash_profile

Taking AutoRefreshData.sh as an example, its header changes from:

[She@She ~]$ cat /home/She/data/AutoRefreshData.sh
#!/bin/bash
...
MatlabFile='/mnt/L/Data/main4mat.m'
chmod +x ${MatlabFile}
matlab  -nodesktop -nosplash -nojvm < ${MatlabFile} 1>running.log 2>running.err &

to:

[She@She ~]$ vi /home/She/data/AutoRefreshData.sh
#!/bin/sh
. /etc/profile
. ~/.bash_profile
...
MatlabFile='/mnt/L/Data/main4mat.m'
chmod +x ${MatlabFile}
matlab  -nodesktop -nosplash -nojvm < ${MatlabFile} 1>running.log 2>running.err &

Afterwards, update the run time in crontab and test immediately; everything works and the error is gone.

[She@She ~]$ cat ~/running.log

                            < M A T L A B (R) >
                  Copyright 1984-2015 The MathWorks, Inc.
                   R2015b (8.6.0.267246) 64-bit (glnxa64)
                              August 20, 2015


For online documentation, see http://www.mathworks.com/support
For product information, visit www.mathworks.com.

>> >> >> >> >> >> >> /mnt/L/Data/matFile/jpl16228.mat
>> 
[She@She ~]$ cat ~/running.err
[She@She ~]$ 
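A related approach, sketched below, is to declare PATH at the top of the crontab itself (the MATLAB install path shown is only an assumption):

# inside `crontab -e`: give cron jobs an explicit PATH
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/MATLAB/R2015b/bin
08 12 *  *  * /home/She/data/AutoRefreshData.sh                             > /dev/null 2>&1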

Done.

Swarm Mode Service Management

Environment preparation

[root@swarm-manager ~]# cat > ./sources.list <<END
> deb http://mirrors.aliyun.com/debian stretch main contrib non-free
> deb http://mirrors.aliyun.com/debian stretch-proposed-updates main contrib non-free
> deb http://mirrors.aliyun.com/debian stretch-updates main contrib non-free
> deb http://mirrors.aliyun.com/debian-security/ stretch/updates main non-free contrib
> END
[root@swarm-manager ~]# cat Dockerfile
FROM nginx:latest
ADD sources.list /etc/apt/sources.list
RUN apt-get update && apt-get install -y dnsutils iproute2 net-tools curl && apt-get clean
ADD index.html /usr/share/nginx/html/index.html

[root@swarm-manager ~]# echo "nginx:v1" > index.html
[root@swarm-manager ~]# docker build -t registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1 .
[root@swarm-manager ~]# echo "nginx:v2" > index.html
[root@swarm-manager ~]# docker build -t registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v2 .

[root@swarm-manager ~]# docker push registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1
[root@swarm-manager ~]# docker push registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v2

create

[root@swarm-manager ~]# docker service create --name nginx registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1
[root@swarm-manager ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                               PORTS
63c04khgjl03        nginx               replicated          1/1                 registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1
[root@swarm-manager ~]# docker service ps nginx
ID                  NAME                IMAGE                                               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
zo87xl8e3in9        nginx.1             registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Running             Running 4 minutes ago

update

[root@swarm-manager ~]# docker service update --network-add my-network --publish-add 80:80 nginx
[root@swarm-manager ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                               PORTS
63c04khgjl03        nginx               replicated          1/1                 registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   *:80->80/tcp
[root@swarm-manager ~]# docker service ps nginx
ID                  NAME                IMAGE                                               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
lppql3gw8phx        nginx.1             registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Running             Running 3 minutes ago
zo87xl8e3in9         _ nginx.1         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Shutdown            Shutdown 3 minutes ago

Scale

[root@swarm-manager ~]# docker service scale nginx=2
[root@swarm-manager ~]# docker service ps nginx
ID                  NAME                IMAGE                                               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
lppql3gw8phx        nginx.1             registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Running             Running 4 minutes ago
zo87xl8e3in9         _ nginx.1         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Shutdown            Shutdown 4 minutes ago
qqptr8htux76        nginx.2             registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node2         Running             Running 2 minutes ago

update & rollback

Check the current version

[root@swarm-manager ~]# curl 127.0.0.1
nginx:v1

Perform a rolling update

[root@swarm-manager ~]# docker service update --image registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v2 nginx
[root@swarm-manager ~]# docker service ps nginx
ID                  NAME                IMAGE                                               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
iwpnzn3ss6uo        nginx.1             registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v2   swarm-node1         Running             Running 4 minutes ago
lppql3gw8phx         _ nginx.1         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Shutdown            Shutdown 4 minutes ago
zo87xl8e3in9         _ nginx.1         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Shutdown            Shutdown 5 minutes ago
1c47y3e14dc0        nginx.2             registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v2   swarm-node2         Running             Running 3 minutes ago
qqptr8htux76         _ nginx.2         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node2         Shutdown            Shutdown 3 minutes ago
[root@swarm-manager ~]# curl 127.0.0.1
nginx:v2

Roll back

[root@swarm-manager ~]# docker service update --rollback  nginx
nginx
[root@swarm-manager ~]# docker service ps nginx
ID                  NAME                IMAGE                                               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
xign4i5kxc0m        nginx.1             registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Running             Running 3 minutes ago
iwpnzn3ss6uo         _ nginx.1         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v2   swarm-node1         Shutdown            Shutdown 3 minutes ago
lppql3gw8phx         _ nginx.1         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Shutdown            Shutdown 5 minutes ago
zo87xl8e3in9         _ nginx.1         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node1         Shutdown            Shutdown 6 minutes ago
em9hv301ga5f        nginx.2             registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node2         Running             Running 3 minutes ago
1c47y3e14dc0         _ nginx.2         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v2   swarm-node2         Shutdown            Shutdown 3 minutes ago
qqptr8htux76         _ nginx.2         registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   swarm-node2         Shutdown            Shutdown 4 minutes ago
[root@swarm-manager ~]# curl 127.0.0.1
nginx:v1

healthcheck

Add a health check

[root@swarm-manager ~]# docker service update --health-cmd "curl -f http://localhost/ || exit 1"  nginx
[root@swarm-node1 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS                             PORTS               NAMES
f7e1935f1c22        registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   "nginx -g 'daemon ..."   17 seconds ago      Up 12 seconds (health: starting)   80/tcp              nginx.1.9dz6g3bizl4kwh8lctr552c6a
[root@swarm-node1 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS                    PORTS               NAMES
f7e1935f1c22        registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   "nginx -g 'daemon ..."   44 seconds ago      Up 38 seconds (healthy)   80/tcp              nginx.1.9dz6g3bizl4kwh8lctr552c6a
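For reference, the probe frequency and failure threshold can be tuned at the same time; a sketch (the interval, timeout and retry values here are arbitrary):

[root@swarm-manager ~]# docker service update \
    --health-cmd "curl -f http://localhost/ || exit 1" \
    --health-interval 10s \
    --health-timeout 3s \
    --health-retries 3 \
    nginx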

Failure test

As the events below show, when the health status becomes unhealthy, the unhealthy container is stopped and removed and a new container is started in its place.

[root@swarm-node1 ~]# docker exec -it f7e1935f1c22 rm -f /usr/share/nginx/html/index.html
[root@swarm-node1 ~]# date
Wed Aug 16 16:48:41 CST 2017
[root@swarm-node1 ~]# docker events --since "2017-08-16T16:48:35"
2017-08-16T16:48:40.063995281+08:00 container exec_create: rm -f /usr/share/nginx/html/index.html f7e1935f1c2263112fe86a6e74c0d23a2ff369ffd54eab2781f2645a7dc7810a (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=lkb6ags10aprqvzii5aht3pjf, com.docker.swarm.task.name=nginx.1.lkb6ags10aprqvzii5aht3pjf, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.lkb6ags10aprqvzii5aht3pjf)
...
2017-08-16T16:49:57.889494869+08:00 container health_status: unhealthy f7e1935f1c2263112fe86a6e74c0d23a2ff369ffd54eab2781f2645a7dc7810a (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=lkb6ags10aprqvzii5aht3pjf, com.docker.swarm.task.name=nginx.1.lkb6ags10aprqvzii5aht3pjf, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.lkb6ags10aprqvzii5aht3pjf)
2017-08-16T16:49:59.891503804+08:00 container kill f7e1935f1c2263112fe86a6e74c0d23a2ff369ffd54eab2781f2645a7dc7810a (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=lkb6ags10aprqvzii5aht3pjf, com.docker.swarm.task.name=nginx.1.lkb6ags10aprqvzii5aht3pjf, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.lkb6ags10aprqvzii5aht3pjf, signal=15)
2017-08-16T16:49:59.964124883+08:00 container die f7e1935f1c2263112fe86a6e74c0d23a2ff369ffd54eab2781f2645a7dc7810a (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=lkb6ags10aprqvzii5aht3pjf, com.docker.swarm.task.name=nginx.1.lkb6ags10aprqvzii5aht3pjf, exitCode=0, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.lkb6ags10aprqvzii5aht3pjf)
2017-08-16T16:50:00.289350308+08:00 network disconnect i6xug49nwdsxauqqpli3apvym (container=f7e1935f1c2263112fe86a6e74c0d23a2ff369ffd54eab2781f2645a7dc7810a, name=ingress, type=overlay)
2017-08-16T16:50:00.289489879+08:00 network disconnect vxe1cwk14avlfp2xjgymhkhdl (container=f7e1935f1c2263112fe86a6e74c0d23a2ff369ffd54eab2781f2645a7dc7810a, name=my-network, type=overlay)
2017-08-16T16:50:00.345105732+08:00 container stop f7e1935f1c2263112fe86a6e74c0d23a2ff369ffd54eab2781f2645a7dc7810a (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=lkb6ags10aprqvzii5aht3pjf, com.docker.swarm.task.name=nginx.1.lkb6ags10aprqvzii5aht3pjf, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.lkb6ags10aprqvzii5aht3pjf)
2017-08-16T16:50:00.922761678+08:00 container create 2132a1bd9287bc377f0800d9a06e6876aaddb9a71b0afd567c3838fbb6aa3934 (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=y00kqcu26sq75js3fvpqx4kt5, com.docker.swarm.task.name=nginx.1.y00kqcu26sq75js3fvpqx4kt5, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.y00kqcu26sq75js3fvpqx4kt5)
2017-08-16T16:50:03.006063329+08:00 network destroy vxe1cwk14avlfp2xjgymhkhdl (name=my-network, type=overlay)
2017-08-16T16:50:03.376037876+08:00 container destroy 965a0ce53c2c0f2d5a740cb87c39ea51d164f1a6cf5bea0cf0679883dd99cb5c (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=xign4i5kxc0mfp1pji7zzkoo1, com.docker.swarm.task.name=nginx.1.xign4i5kxc0mfp1pji7zzkoo1, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.xign4i5kxc0mfp1pji7zzkoo1)
2017-08-16T16:50:05.720425867+08:00 network connect i6xug49nwdsxauqqpli3apvym (container=2132a1bd9287bc377f0800d9a06e6876aaddb9a71b0afd567c3838fbb6aa3934, name=ingress, type=overlay)
2017-08-16T16:50:05.808884815+08:00 network disconnect i6xug49nwdsxauqqpli3apvym (container=2132a1bd9287bc377f0800d9a06e6876aaddb9a71b0afd567c3838fbb6aa3934, name=ingress, type=overlay)
2017-08-16T16:50:05.919636688+08:00 network create vxe1cwk14avlfp2xjgymhkhdl (name=my-network, type=overlay)
2017-08-16T16:50:06.102359133+08:00 network connect i6xug49nwdsxauqqpli3apvym (container=2132a1bd9287bc377f0800d9a06e6876aaddb9a71b0afd567c3838fbb6aa3934, name=ingress, type=overlay)
2017-08-16T16:50:06.453151382+08:00 network connect vxe1cwk14avlfp2xjgymhkhdl (container=2132a1bd9287bc377f0800d9a06e6876aaddb9a71b0afd567c3838fbb6aa3934, name=my-network, type=overlay)
2017-08-16T16:50:07.062684099+08:00 container start 2132a1bd9287bc377f0800d9a06e6876aaddb9a71b0afd567c3838fbb6aa3934 (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=y00kqcu26sq75js3fvpqx4kt5, com.docker.swarm.task.name=nginx.1.y00kqcu26sq75js3fvpqx4kt5, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.y00kqcu26sq75js3fvpqx4kt5)
2017-08-16T16:51:07.221879814+08:00 container exec_create: /bin/sh -c curl -f http://localhost/ || exit 1 2132a1bd9287bc377f0800d9a06e6876aaddb9a71b0afd567c3838fbb6aa3934 (com.docker.swarm.node.id=y83k6khc3vxmch1qd3j8kl4ak, com.docker.swarm.service.id=63c04khgjl033syhc5ef0e9g9, com.docker.swarm.service.name=nginx, com.docker.swarm.task=, com.docker.swarm.task.id=y00kqcu26sq75js3fvpqx4kt5, com.docker.swarm.task.name=nginx.1.y00kqcu26sq75js3fvpqx4kt5, image=registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1@sha256:b223de06038b1187d1d1b955df759839712a10cde33d7f2784ef3f9f5573ad71, name=nginx.1.y00kqcu26sq75js3fvpqx4kt5)
[root@swarm-node1 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS                    PORTS               NAMES
2132a1bd9287        registry.cn-hangzhou.aliyuncs.com/vnimos/nginx:v1   "nginx -g 'daemon ..."   31 minutes ago      Up 30 minutes (healthy)   80/tcp              nginx.1.y00kqcu26sq75js3fvpqx4kt5

constraint & placement

label

[root@swarm-manager ~]# docker node update --label-add project=nginx swarm-node2
[root@swarm-manager ~]# docker node update --label-add "datacenter=xiamen" swarm-node1
[root@swarm-manager ~]# docker node update --label-add "datacenter=fuzhou" swarm-node2
[root@swarm-manager ~]# docker node inspect -f {{.Spec.Labels}} swarm-node1
map[datacenter:xiamen]
[root@swarm-manager ~]# docker node inspect -f {{.Spec.Labels}} swarm-node2
map[datacenter:fuzhou project:nginx]

constraint

[root@swarm-manager ~]# docker service create --replicas=4 --constraint 'node.hostname == swarm-node1' --name nginx-c1 nginx
[root@swarm-manager ~]# docker service create --replicas=4 --constraint 'node.labels.project == nginx' --name nginx-c2 nginx
[root@swarm-manager ~]# docker service ps nginx-c1
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
sqcyg8wm8gdt        nginx-c1.1          nginx:latest        swarm-node1         Running             Running 5 minutes ago
ttst5umkpt6g        nginx-c1.2          nginx:latest        swarm-node1         Running             Running 5 minutes ago
lpiz1vsaj6p3        nginx-c1.3          nginx:latest        swarm-node1         Running             Running 5 minutes ago
ykvrdyty4qie        nginx-c1.4          nginx:latest        swarm-node1         Running             Running 5 minutes ago
[root@swarm-manager ~]# docker service ps nginx-c2
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
x322u16dfnyt        nginx-c2.1          nginx:latest        swarm-node2         Running             Running 5 minutes ago
zjp93whpf4ah        nginx-c2.2          nginx:latest        swarm-node2         Running             Running 5 minutes ago
ff3usxkpo5ae        nginx-c2.3          nginx:latest        swarm-node2         Running             Running 5 minutes ago
p3g0haaqg6yu        nginx-c2.4          nginx:latest        swarm-node2         Running             Running 5 minutes ago

placement

[root@swarm-manager ~]# docker service create --replicas=6 --placement-pref 'spread=node.labels.datacenter' --name nginx-c3 nginx
[root@swarm-manager ~]# docker service ps nginx-c3
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
p3y7774kpu0w        nginx-c3.1          nginx:latest        swarm-node2         Running             Running 3 minutes ago
081l22wh87ok        nginx-c3.2          nginx:latest        swarm-node1         Running             Running 3 minutes ago
lj1t30mok3te        nginx-c3.3          nginx:latest        swarm-node1         Running             Running 3 minutes ago
fhb90j6brwuv        nginx-c3.4          nginx:latest        swarm-node2         Running             Running 3 minutes ago
ein699law198        nginx-c3.5          nginx:latest        swarm-node2         Running             Running 3 minutes ago
vo91976o481m        nginx-c3.6          nginx:latest        swarm-node1         Running             Running 3 minutes ago

deploy

Docker 1.13 introduced a new version of the Docker Compose file format (v3). Its main feature is that Swarm Mode can deploy services directly from a Docker Compose file definition.

Define the compose-file

[root@swarm-manager ~]# cat docker-compose.yml 
version: '3'
services:
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=password
  wordpress:
    image: wordpress
    ports:
      - 80:80
    links:
      - mysql:mysql
    environment:
      - WORDPRESS_DB_PASSWORD=password

In Swarm Mode, deployment has to go through the docker stack deploy command; running docker-compose up directly produces the following warning:

[root@swarm-manager ~]# docker-compose up
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use `docker stack deploy`.

Deploy multiple services from the compose-file

[root@swarm-manager ~]# docker stack deploy --compose-file=docker-compose.yml demo
Creating network demo_my-network
Creating service demo_mysql
Creating service demo_wordpress
[root@swarm-manager ~]# docker stack ps demo
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
5n5x8z9oso6d        demo_wordpress.1    wordpress:latest    swarm-node2         Running             Running 3 minutes ago
66wmzw3u50i0        demo_mysql.1        mysql:latest        swarm-node1         Running             Running 4 minutes ago
[root@swarm-manager ~]# docker stack services demo
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
k791axfnh6le        demo_wordpress      replicated          1/1                 wordpress:latest    *:80->80/tcp
wp0rz1kls1m1        demo_mysql          replicated          1/1                 mysql:latest
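When the demo stack is no longer needed, it can be torn down in one step; a minimal sketch:

[root@swarm-manager ~]# docker stack rm demo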

Building a sharded, replicated MongoDB cluster with elections on Docker Swarm

I. Environment

Three servers form a Docker Swarm cluster: one manager and two workers.

  • Docker version: 17.09
  • MongoDB version: 3.6

II. MongoDB cluster architecture

Full-resolution diagram: https://www.processon.com/view/link/5a3c7386e4b0bf89b8530376

III. Building the cluster

1. [Manager] Create the cluster network

docker network create -d overlay --attachable mongo

--attachable allows other containers to join this network. A quick verification is sketched below.
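A quick sketch for verifying the network before deploying:

# list and inspect the overlay network created above
docker network ls --filter name=mongo
docker network inspect mongo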

2. Create 9 data services, 3 config services and 1 mongos service in global mode

2.1 [All machines] Create the data directories

mkdir -p /root/mongo/config /root/mongo/shard1 /root/mongo/shard2 /root/mongo/shard3

2.2 [Manager] Create stack.yml

version: '3.3'
services:
  mongors1n1:
    # Docker China registry mirror, for faster image pulls
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        # pin this replica to the host named manager
        constraints:
          - node.hostname==manager
  mongors2n1:
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==manager
  mongors3n1:
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==manager
  mongors1n2:
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==worker1
  mongors2n2:
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==worker1
  mongors3n2:
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==worker1
  mongors1n3:
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==worker2
  mongors2n3:
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==worker2
  mongors3n3:
    image: registry.docker-cn.com/library/mongo
    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==worker2
  cfg1:
    image: registry.docker-cn.com/library/mongo
    command: mongod --configsvr --replSet cfgrs --smallfiles --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/config:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==manager
  cfg2:
    image: registry.docker-cn.com/library/mongo
    command: mongod --configsvr --replSet cfgrs --smallfiles --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/config:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==worker1
  cfg3:
    image: registry.docker-cn.com/library/mongo
    command: mongod --configsvr --replSet cfgrs --smallfiles --dbpath /data/db --port 27017
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/mongo/config:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==worker2
  mongos:
    image: registry.docker-cn.com/library/mongo
    # MongoDB 3.6 binds to 127.0.0.1 by default; binding 0.0.0.0 allows other containers and hosts to connect
    command: mongos --configdb cfgrs/cfg1:27017,cfg2:27017,cfg3:27017 --bind_ip 0.0.0.0 --port 27017
    networks:
      - mongo
    # publish port 27017 on the host
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - cfg1
      - cfg2
      - cfg3
    deploy:
      restart_policy:
        condition: on-failure
      # start one container on every node in the swarm
      mode: global
networks:
  mongo:
    external: true

2.3 [Manager] Start the services

docker stack deploy -c stack.yml mongo

2.4 [Manager] Check that the services are up

docker service ls

If everything started correctly, the output looks like this:

[docker@manager ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                         PORTS
z1l5zlghlfbi        mongo_cfg1          replicated          1/1                 registry.docker-cn.com/library/mongo:latest
lg9vbods29th        mongo_cfg2          replicated          1/1                 registry.docker-cn.com/library/mongo:latest
i6d6zwxsq0ss        mongo_cfg3          replicated          1/1                 registry.docker-cn.com/library/mongo:latest
o0lfdavd8kpj        mongo_mongors1n1    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
n85yeyod7mlu        mongo_mongors1n2    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
cwurdqng9tdk        mongo_mongors1n3    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
vu6al5kys28u        mongo_mongors2n1    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
xrjiep0vrf0w        mongo_mongors2n2    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
qqzifwcejjyk        mongo_mongors2n3    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
tddgw8hygv1b        mongo_mongors3n1    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
qrb6fjty03mw        mongo_mongors3n2    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
m8ikdzjssmhn        mongo_mongors3n3    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
mnnlm49b7kyb        mongo_mongos        global              3/3                 registry.docker-cn.com/library/mongo:latest   *:27017->27017/tcp

3. Initialize the cluster

3.1 [Manager] Initialize the config server replica set

docker exec -it $(docker ps | grep "cfg1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: "cfgrs",configsvr: true, members: [{ _id : 0, host : "cfg1" },{ _id : 1, host : "cfg2" }, { _id : 2, host : "cfg3" }]})' | mongo"

3.2 [Manager] Initialize the three data (shard) replica sets

docker exec -it $(docker ps | grep "mongors1n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : "shard1", members: [{ _id : 0, host : "mongors1n1" },{ _id : 1, host : "mongors1n2" },{ _id : 2, host : "mongors1n3", arbiterOnly: true }]})' | mongo"

docker exec -it $(docker ps | grep "mongors2n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : "shard2", members: [{ _id : 0, host : "mongors2n1" },{ _id : 1, host : "mongors2n2" },{ _id : 2, host : "mongors2n3", arbiterOnly: true }]})' | mongo"

docker exec -it $(docker ps | grep "mongors3n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : "shard3", members: [{ _id : 0, host : "mongors3n1" },{ _id : 1, host : "mongors3n2" },{ _id : 2, host : "mongors3n3", arbiterOnly: true }]})' | mongo"

3.3 [Manager] Add the three replica sets to mongos as shards

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard("shard1/mongors1n1:27017,mongors1n2:27017,mongors1n3:27017")' | mongo "

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard("shard2/mongors2n1:27017,mongors2n3:27017,mongors2n3:27017")' | mongo "

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard("shard3/mongors3n1:27017,mongors3n2:27017,mongors3n3:27017")' | mongo "

4. Connecting to the cluster

4.1 Internal: containers attached to the mongo network connect via mongos:27017

4.2 External: connect via IP:27017, where IP can be the address of any of the three servers (mongos runs in global mode, so the port is published on every node)
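
For a quick smoke test from outside the swarm, the following pymongo sketch connects through one node's IP, enables sharding on a made-up database/collection, and writes a single document. The address 192.168.0.2 and the names testdb / people are placeholders for illustration, not values from this setup.

#coding:utf8
# Sketch: external connection test against the published mongos port.
from pymongo import MongoClient

# Any of the three node IPs works, since mongos runs in global mode.
client = MongoClient('mongodb://192.168.0.2:27017/')

# Shard a test collection on its hashed _id, then write through mongos.
client.admin.command('enableSharding', 'testdb')
client.admin.command('shardCollection', 'testdb.people', key={'_id': 'hashed'})

client.testdb.people.insert_one({'name': 'demo'})
print(client.admin.command('listShards'))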

Using "triggers" in SQLAlchemy: Events

Although they are called triggers here, these are not really triggers: they are hooks in SQLAlchemy, also known as events. When a certain operation fires, a function is executed, much like a trigger in SQL, only simpler and more flexible.

I am still learning this myself, so here is an example you can try out directly.

#coding:utf8

from sqlalchemy.orm import scoped_session
from sqlalchemy import Column, Integer, String, DateTime, TIMESTAMP, DECIMAL, func, Text, or_
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy import ForeignKey, Boolean, create_engine, MetaData, Constraint
from sqlalchemy.orm import relationship, backref, sessionmaker
from sqlalchemy import event


Base = declarative_base()




class Role(Base):  # the "one" side of a one-to-many relationship
    __tablename__= 'roles'
    id = Column(Integer,primary_key=True)
    name = Column(String(36),nullable=True)
    users = relationship('User',backref='role')

class User(Base):  # the "many" side
    __tablename__ = 'users'
    id = Column(Integer,primary_key=True)
    name = Column(String(36),nullable=True)
    role_id = Column(Integer, ForeignKey('roles.id'))


class Database():
    def __init__(self, bind, pool_size=100, pool_recycle=3600, echo=False):
        self.__engine = create_engine(bind,pool_size=pool_size,
                                   pool_recycle=pool_recycle,
                                   echo=echo)
        self.__session_factory = sessionmaker(bind=self.__engine)
        self.__db_session = scoped_session(self.__session_factory)
        Base.metadata.create_all(self.__engine)

    @property
    def session(self):
        return self.__db_session()


# Attribute-level event: fires when a User is appended to the Role.users collection.
@event.listens_for(Role.users, 'append')
def on_created(target, value, initiator):
    print "received append event for target: %s" % target



# Mapper-level event: fires right after a User row has been inserted.
@event.listens_for(User, 'after_insert')
def receive_after_insert(mapper, connection, target):
    print mapper
    print connection
    print target.id
    print "insert......."

db = Database(bind="mysql+pymysql://root:xxxx@localhost/mydata?charset=utf8")





if __name__ == "__main__":
    user = User()
    user.name = "123"
    user.role_id=2
    db.session.add(user)
    db.session.commit()

After the row is inserted, the receive_after_insert function is executed. That is all there is to it.

If you want to go deeper, the official documentation covers this in detail:
http://docs.sqlalchemy.org/en/latest/orm/events.html#attribute-events
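
As one more small illustration (a sketch that reuses the User model above, not part of the original example), a before_insert hook can normalize data right before the row is written:

# Sketch: mapper-level hook that fires just before the INSERT statement is emitted.
# (Continues the script above: uses the same `event` import and User model.)
@event.listens_for(User, 'before_insert')
def fill_default_name(mapper, connection, target):
    # 'anonymous' is an arbitrary default chosen for demonstration purposes.
    if not target.name:
        target.name = 'anonymous'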

How Flask handles concurrency

1. Use the built-in development server's multi-process or multi-threaded mode; see the parameters of werkzeug's run_simple function. Note that processes and threads cannot be enabled at the same time.

2. Use gunicorn with multiple processes; -w sets the number of worker processes, which is roughly like running several app.run() development servers.

gunicorn app:app -w 2 -b :8000

3. Use gevent for asynchronous workers

/usr/local/bin/gunicorn -t120 -w10  -b 10.57.17.57:3000 --worker-class gevent  Erebus:APP
-k STRING, --worker-class STRING
                        The type of workers to use. [sync]

-w INT, --workers INT
                        The number of worker processes for handling requests.
                        [1]

-t INT, --timeout INT
                        Workers silent for more than this many seconds are
                        killed and restarted. [30]

-b ADDRESS, --bind ADDRESS
                        The socket to bind. [['127.0.0.1:8000']]

When you run the development server with app.run(), you get a single synchronous process, which means at most one request can be handled at a time.

By putting Gunicorn in front of it in its default configuration and simply increasing --workers, what you get is essentially a number of processes (managed by Gunicorn) that each behave like the app.run() development server. 4 workers == 4 concurrent requests. This is because Gunicorn uses its included synchronous worker type by default.

It is important to note that Gunicorn also includes asynchronous workers, namely eventlet and gevent (and tornado, but that is apparently best used together with the Tornado framework). By specifying one of these async workers with the --worker-class flag, you get Gunicorn managing a number of asynchronous processes, each of which manages its own concurrency. These processes do not use threads, but coroutines. Essentially, within each process only one thing can happen at a time (one thread), but work can be "paused" while it waits on something external to finish (think of a database query or network I/O).

This means that if you use Gunicorn's asynchronous workers, each worker can handle many requests at once. How many workers is optimal depends on the nature of your application, its environment, the hardware it runs on, and so on. More details can be found on Gunicorn's design page and in the notes on how gevent works on its introduction page.
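
To see the difference in practice, here is a rough sketch of an I/O-bound view; the file name app.py and the /slow endpoint are made up for illustration. Serve it with the commands in the comments and compare how many concurrent requests get through.

#coding:utf8
# Sketch: app.py - an I/O-bound endpoint for comparing worker classes.
#   gunicorn -w 4 -b :8000 app:app                        -> 4 concurrent requests
#   gunicorn -w 4 -b :8000 --worker-class gevent app:app  -> each worker handles many
import time

from flask import Flask

app = Flask(__name__)


@app.route('/slow')
def slow():
    # Stands in for a database query or an external HTTP call.
    # The gevent worker monkey-patches time.sleep, so this yields to other
    # greenlets instead of blocking the whole worker process.
    time.sleep(2)
    return 'done'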