Recovering accidentally deleted crontab jobs

This is a short note.

The crontab jobs of an account on one of our servers were wiped out, cause unknown. No crontab backup had been enabled on that server either, so I had to work out how to recover the jobs.

On CentOS, crontab executions are logged in /var/log/cron, so the log can be filtered:

cat /var/log/cron* | grep CMD | awk -F'CMD' '{print $2}' | awk -F'[(|)]' '{print $2}' | sort -u

This yields every crontab command the system has logged; after filtering out the other accounts' commands, the target account's crontab jobs can be recovered.
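
If only one account's jobs need to be recovered, the same pipeline can be narrowed by the username recorded in the log. A minimal sketch, assuming the `(user) CMD (command)` format used by CentOS cron; the account name appuser is a placeholder:

grep CMD /var/log/cron* | grep '(appuser)' | awk -F'CMD' '{print $2}' | awk -F'[()]' '{print $2}' | sort -u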

In addition, it is worth backing up the crontab regularly; the script is as follows:

backup_crontab.sh

#!/usr/bin/env bash

BACKUP_DIRECTORY="${HOME}/crontab_backup"

if [ ! -e "${BACKUP_DIRECTORY}" ]; then
        mkdir -p "${BACKUP_DIRECTORY}"
fi

crontab -l > "${BACKUP_DIRECTORY}/$(date '+%Y%m%d').txt"
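
To make the backup itself automatic, the script can be scheduled from cron; a minimal sketch, assuming the script is saved as ~/backup_crontab.sh:

# in crontab -e: back up the crontab every day at 01:00 (script path is an assumption)
0 1 * * * /bin/bash "$HOME/backup_crontab.sh"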

Deploying zookeeper and etcd stateful services in practice

I. Overview

Kubernetes provides solid support for stateful applications such as zookeeper and etcd through StatefulSet, which has the following properties:

  • a stable, unique network identity for each pod
  • stable persistent storage, implemented via PV/PVC
  • ordered pod startup and shutdown: graceful deployment and scaling

This article describes how to deploy zookeeper and etcd as stateful services on a k8s cluster, with ceph providing data persistence.

II. Summary

  • With k8s StatefulSet, StorageClass, PV, PVC and ceph rbd, stateful services such as zookeeper and etcd can be deployed onto a kubernetes cluster quite nicely.
  • k8s never deletes the PV/PVC objects it has created on its own, which guards against accidental deletion.

If you do decide to delete the PV and PVC objects, the corresponding rbd image on the ceph side must also be deleted manually.

  • Pitfalls encountered

The ceph client user referenced in the StorageClass must have mon rw and rbd rwx capabilities. Without mon write permission, releasing the rbd lock fails and the rbd image cannot be mounted on another k8s worker node. A hedged example of granting such capabilities is sketched below.
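
A minimal sketch of granting those capabilities with the ceph CLI, assuming a dedicated client user named client.k8s and the k8s pool (both names are assumptions; the walkthrough below actually uses client.admin, which already has full capabilities):

# grant mon read/write and rbd (osd) read/write/execute on the k8s pool
ceph auth caps client.k8s mon 'allow rw' osd 'allow rwx pool=k8s'
# verify the resulting capabilities
ceph auth get client.k8s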

  • zookeeper node health is checked with probes; if a node is unhealthy, k8s deletes the pod and automatically recreates it, which in effect restarts the zookeeper node automatically.

Because a zookeeper 3.4 cluster is configured by statically loading the zoo.cfg file, all nodes in the zookeeper cluster need to be restarted whenever a zookeeper pod's IP changes.

  • The etcd deployment approach needs improvement

In this experiment the etcd cluster is deployed statically, so whenever etcd membership changes, commands such as etcdctl member remove/add have to be run by hand to reconfigure the cluster, which severely limits etcd's ability to recover from failures and scale in or out automatically. The deployment should therefore be changed to a dynamic one based on DNS or etcd discovery so that etcd runs better on k8s.

III. Zookeeper cluster deployment

1. Pull the image

docker pull gcr.mirrors.ustc.edu.cn/google_containers/kubernetes-zookeeper:1.0-3.4.10
docker tag gcr.mirrors.ustc.edu.cn/google_containers/kubernetes-zookeeper:1.0-3.4.10 172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10
docker push  172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10

2. Define the ceph secret

cat << EOF | kubectl create -f -
apiVersion: v1
data:
  key: QVFBYy9ndGFRUno4QlJBQXMxTjR3WnlqN29PK3VrMzI1a05aZ3c9PQo=
kind: Secret
metadata:
  creationTimestamp: 2017-11-20T10:29:05Z
  name: ceph-secret
  namespace: default
  resourceVersion: "2954730"
  selfLink: /api/v1/namespaces/default/secrets/ceph-secret
  uid: a288ff74-cddd-11e7-81cc-000c29f99475
type: kubernetes.io/rbd
EOF
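
The key field in the secret above is just the base64-encoded ceph client key. Assuming the ceph admin user is the one referenced (as in this setup), it can be generated on a ceph node like this:

# print the admin key and base64-encode it for the Secret's data.key field
ceph auth get-key client.admin | base64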

3. Define the StorageClass for rbd storage

cat << EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
parameters:
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imagefeatures: layering
  monitors: 172.16.13.223
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
provisioner: kubernetes.io/rbd
reclaimPolicy: Delete
EOF
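
Before deploying zookeeper it is worth smoke-testing the StorageClass with a throwaway claim (the name ceph-test-claim is arbitrary); the PVC should become Bound and a matching rbd image should appear in the k8s pool:

cat << EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ceph
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
EOF

Check it with kubectl get pvc ceph-test-claim, then remove it with kubectl delete pvc ceph-test-claim.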

4. Create the zookeeper cluster

rbd volumes are used to store the zookeeper node data.

cat << EOF | kubectl create -f -
---
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper 
          --servers=3 
          --data_dir=/var/lib/zookeeper/data 
          --data_log_dir=/var/lib/zookeeper/data/log 
          --conf_dir=/opt/zookeeper/conf 
          --client_port=2181 
          --election_port=3888 
          --server_port=2888 
          --tick_time=2000 
          --init_limit=10 
          --sync_limit=5 
          --heap=512M 
          --max_client_cnxns=60 
          --snap_retain_count=3 
          --purge_interval=12 
          --max_session_timeout=40000 
          --min_session_timeout=4000 
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: ceph
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF

Check the result:

[root@172 zookeeper]# kubectl get no
NAME           STATUS    ROLES     AGE       VERSION
172.16.20.10   Ready     <none>    50m       v1.8.2
172.16.20.11   Ready     <none>    2h        v1.8.2
172.16.20.12   Ready     <none>    1h        v1.8.2

[root@172 zookeeper]# kubectl get po -owide 
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          8m        192.168.5.162   172.16.20.10
zk-1      1/1       Running   0          1h        192.168.2.146   172.16.20.11

[root@172 zookeeper]# kubectl get pv,pvc
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                  STORAGECLASS   REASON    AGE
pv/pvc-226cb8f0-d322-11e7-9581-000c29f99475   1Gi        RWO            Delete           Bound     default/datadir-zk-0   ceph                     1h
pv/pvc-22703ece-d322-11e7-9581-000c29f99475   1Gi        RWO            Delete           Bound     default/datadir-zk-1   ceph                     1h

NAME               STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/datadir-zk-0   Bound     pvc-226cb8f0-d322-11e7-9581-000c29f99475   1Gi        RWO            ceph           1h
pvc/datadir-zk-1   Bound     pvc-22703ece-d322-11e7-9581-000c29f99475   1Gi        RWO            ceph           1h

The rbd lock information for the zk-0 pod's image is:

[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker       ID                              Address                   
client.24146 kubelet_lock_magic_172.16.20.10 172.16.20.10:0/1606152350 

5. Test pod migration

Cordon node 172.16.20.10 (mark it unschedulable) so that the zk-0 pod gets rescheduled onto 172.16.20.12:

kubectl cordon 172.16.20.10

[root@172 zookeeper]# kubectl get no
NAME           STATUS                     ROLES     AGE       VERSION
172.16.20.10   Ready,SchedulingDisabled   <none>    58m       v1.8.2
172.16.20.11   Ready                      <none>    2h        v1.8.2
172.16.20.12   Ready                      <none>    1h        v1.8.2

kubectl delete po zk-0

Watch zk-0 being migrated:

[root@172 zookeeper]# kubectl get po -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          14m       192.168.5.162   172.16.20.10
zk-1      1/1       Running   0          1h        192.168.2.146   172.16.20.11
zk-0      1/1       Terminating   0         16m       192.168.5.162   172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Terminating   0         16m       <none>    172.16.20.10
zk-0      0/1       Pending   0         0s        <none>    <none>
zk-0      0/1       Pending   0         0s        <none>    172.16.20.12
zk-0      0/1       ContainerCreating   0         0s        <none>    172.16.20.12
zk-0      0/1       Running   0         3s        192.168.3.4   172.16.20.12

zk-0 has migrated to 172.16.20.12 as expected.
Check the rbd lock information again:

[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker       ID                              Address                   
client.24146 kubelet_lock_magic_172.16.20.10 172.16.20.10:0/1606152350 
[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker       ID                              Address                   
client.24154 kubelet_lock_magic_172.16.20.12 172.16.20.12:0/3715989358 

When I previously tested this zk pod migration on another ceph cluster, releasing the lock kept failing. Analysis pointed to the ceph account in use lacking the required permissions, which is why releasing the lock failed. The recorded errors are as follows:

Nov 27 10:45:55 172 kubelet: W1127 10:45:55.551768   11556 rbd_util.go:471] rbd: no watchers on kubernetes-dynamic-pvc-f35a411e-d317-11e7-90ab-000c29f99475
Nov 27 10:45:55 172 kubelet: I1127 10:45:55.694126   11556 rbd_util.go:181] remove orphaned locker kubelet_lock_magic_172.16.20.12 from client client.171490: err exit status 13, output: 2017-11-27 10:45:55.570483 7fbdbe922d40 -1 did not load config file, using default settings.
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600816 7fbdbe922d40 -1 Errors while parsing config file!
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600824 7fbdbe922d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600825 7fbdbe922d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600825 7fbdbe922d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602492 7fbdbe922d40 -1 Errors while parsing config file!
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602494 7fbdbe922d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602495 7fbdbe922d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602496 7fbdbe922d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.651594 7fbdbe922d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: rbd: releasing lock failed: (13) Permission denied
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.682470 7fbdbe922d40 -1 librbd: unable to blacklist client: (13) Permission denied

The relevant k8s rbd volume implementation code:

if lock {
            // check if lock is already held for this host by matching lock_id and rbd lock id
            if strings.Contains(output, lock_id) {
                // this host already holds the lock, exit
                glog.V(1).Infof("rbd: lock already held for %s", lock_id)
                return nil
            }
            // clean up orphaned lock if no watcher on the image
            used, statusErr := util.rbdStatus(&b)
            if statusErr == nil && !used {
                re := regexp.MustCompile("client.* " + kubeLockMagic + ".*")
                locks := re.FindAllStringSubmatch(output, -1)
                for _, v := range locks {
                    if len(v) > 0 {
                        lockInfo := strings.Split(v[0], " ")
                        if len(lockInfo) > 2 {
                            args := []string{"lock", "remove", b.Image, lockInfo[1], lockInfo[0], "--pool", b.Pool, "--id", b.Id, "-m", mon}
                            args = append(args, secret_opt...)
                            cmd, err = b.exec.Run("rbd", args...)
                            // executing rbd lock remove returned an error here
                            glog.Infof("remove orphaned locker %s from client %s: err %v, output: %s", lockInfo[1], lockInfo[0], err, string(cmd))
                        }
                    }
                }
            }

            // hold a lock: rbd lock add
            args := []string{"lock", "add", b.Image, lock_id, "--pool", b.Pool, "--id", b.Id, "-m", mon}
            args = append(args, secret_opt...)
            cmd, err = b.exec.Run("rbd", args...)
        } 

As can be seen, the rbd lock remove operation was rejected for lack of permission: rbd: releasing lock failed: (13) Permission denied.

6. Test scaling out

Scale the zookeeper cluster from 2 nodes to 3.
With 2 nodes in the cluster, zoo.cfg defines two instances:

zookeeper@zk-0:/opt/zookeeper/conf$ cat zoo.cfg 
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888

Use kubectl edit statefulset zk to change replicas to 3 and the start-zookeeper argument to --servers=3,
then watch the pods change:

[root@172 zookeeper]# kubectl get po -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          1h        192.168.5.170   172.16.20.10
zk-1      1/1       Running   0          1h        192.168.3.12    172.16.20.12
zk-2      0/1       Pending   0         0s        <none>    <none>
zk-2      0/1       Pending   0         0s        <none>    172.16.20.11
zk-2      0/1       ContainerCreating   0         0s        <none>    172.16.20.11
zk-2      0/1       Running   0         1s        192.168.2.154   172.16.20.11
zk-2      1/1       Running   0         11s       192.168.2.154   172.16.20.11
zk-1      1/1       Terminating   0         1h        192.168.3.12   172.16.20.12
zk-1      0/1       Terminating   0         1h        <none>    172.16.20.12
zk-1      0/1       Terminating   0         1h        <none>    172.16.20.12
zk-1      0/1       Terminating   0         1h        <none>    172.16.20.12
zk-1      0/1       Terminating   0         1h        <none>    172.16.20.12
zk-1      0/1       Pending   0         0s        <none>    <none>
zk-1      0/1       Pending   0         0s        <none>    172.16.20.12
zk-1      0/1       ContainerCreating   0         0s        <none>    172.16.20.12
zk-1      0/1       Running   0         2s        192.168.3.13   172.16.20.12
zk-1      1/1       Running   0         20s       192.168.3.13   172.16.20.12
zk-0      1/1       Terminating   0         1h        192.168.5.170   172.16.20.10
zk-0      0/1       Terminating   0         1h        <none>    172.16.20.10
zk-0      0/1       Terminating   0         1h        <none>    172.16.20.10
zk-0      0/1       Terminating   0         1h        <none>    172.16.20.10
zk-0      0/1       Terminating   0         1h        <none>    172.16.20.10
zk-0      0/1       Pending   0         0s        <none>    <none>
zk-0      0/1       Pending   0         0s        <none>    172.16.20.10
zk-0      0/1       ContainerCreating   0         0s        <none>    172.16.20.10
zk-0      0/1       Running   0         2s        192.168.5.171   172.16.20.10
zk-0      1/1       Running   0         12s       192.168.5.171   172.16.20.10

Both zk-0 and zk-1 were restarted, so the new zoo.cfg is loaded and the cluster stays correctly configured.
The new zoo.cfg lists 3 instances:

[root@172 ~]# kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888

7. Test scaling in

When scaling in, the zk cluster again automatically restarted all zk nodes; the process looked like this:

[root@172 ~]# kubectl get po -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          5m        192.168.5.171   172.16.20.10
zk-1      1/1       Running   0          6m        192.168.3.13    172.16.20.12
zk-2      1/1       Running   0          7m        192.168.2.154   172.16.20.11
zk-2      1/1       Terminating   0         7m        192.168.2.154   172.16.20.11
zk-1      1/1       Terminating   0         7m        192.168.3.13   172.16.20.12
zk-2      0/1       Terminating   0         8m        <none>    172.16.20.11
zk-1      0/1       Terminating   0         7m        <none>    172.16.20.12
zk-2      0/1       Terminating   0         8m        <none>    172.16.20.11
zk-1      0/1       Terminating   0         7m        <none>    172.16.20.12
zk-1      0/1       Terminating   0         7m        <none>    172.16.20.12
zk-1      0/1       Terminating   0         7m        <none>    172.16.20.12
zk-1      0/1       Pending   0         0s        <none>    <none>
zk-1      0/1       Pending   0         0s        <none>    172.16.20.12
zk-1      0/1       ContainerCreating   0         0s        <none>    172.16.20.12
zk-1      0/1       Running   0         2s        192.168.3.14   172.16.20.12
zk-2      0/1       Terminating   0         8m        <none>    172.16.20.11
zk-2      0/1       Terminating   0         8m        <none>    172.16.20.11
zk-1      1/1       Running   0         19s       192.168.3.14   172.16.20.12
zk-0      1/1       Terminating   0         7m        192.168.5.171   172.16.20.10
zk-0      0/1       Terminating   0         7m        <none>    172.16.20.10
zk-0      0/1       Terminating   0         7m        <none>    172.16.20.10
zk-0      0/1       Terminating   0         7m        <none>    172.16.20.10
zk-0      0/1       Pending   0         0s        <none>    <none>
zk-0      0/1       Pending   0         0s        <none>    172.16.20.10
zk-0      0/1       ContainerCreating   0         0s        <none>    172.16.20.10
zk-0      0/1       Running   0         3s        192.168.5.172   172.16.20.10
zk-0      1/1       Running   0         13s       192.168.5.172   172.16.20.10

IV. etcd cluster deployment

1. Create the etcd cluster

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: "etcd"
  annotations:
    # Create endpoints also if the related pod isn't ready
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - port: 2379
    name: client
  - port: 2380
    name: peer
  clusterIP: None
  selector:
    component: "etcd"
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "etcd"
  labels:
    component: "etcd"
spec:
  serviceName: "etcd"
  # changing replicas value will require a manual etcdctl member remove/add
  # command (remove before decreasing and add after increasing)
  replicas: 3
  template:
    metadata:
      name: "etcd"
      labels:
        component: "etcd"
    spec:
      containers:
      - name: "etcd"
        image: "172.16.18.100:5000/quay.io/coreos/etcd:v3.2.3"
        ports:
        - containerPort: 2379
          name: client
        - containerPort: 2380
          name: peer
        env:
        - name: CLUSTER_SIZE
          value: "3"
        - name: SET_NAME
          value: "etcd"
        volumeMounts:
        - name: data
          mountPath: /var/run/etcd
        command:
          - "/bin/sh"
          - "-ecx"
          - |
            IP=$(hostname -i)
            for i in $(seq 0 $((${CLUSTER_SIZE} - 1))); do
              while true; do
                echo "Waiting for ${SET_NAME}-${i}.${SET_NAME} to come up"
                ping -W 1 -c 1 ${SET_NAME}-${i}.${SET_NAME}.default.svc.cluster.local > /dev/null && break
                sleep 1s
              done
            done
            PEERS=""
            for i in $(seq 0 $((${CLUSTER_SIZE} - 1))); do
                PEERS="${PEERS}${PEERS:+,}${SET_NAME}-${i}=http://${SET_NAME}-${i}.${SET_NAME}.default.svc.cluster.local:2380"
            done
            # start etcd. If cluster is already initialized the `--initial-*` options will be ignored.
            exec etcd --name ${HOSTNAME} \
              --listen-peer-urls http://${IP}:2380 \
              --listen-client-urls http://${IP}:2379,http://127.0.0.1:2379 \
              --advertise-client-urls http://${HOSTNAME}.${SET_NAME}:2379 \
              --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}:2380 \
              --initial-cluster-token etcd-cluster-1 \
              --initial-cluster ${PEERS} \
              --initial-cluster-state new \
              --data-dir /var/run/etcd/default.etcd
## Dynamic PV provisioning is used here via the "ceph" storage class defined
## earlier, so this resource can be deployed directly on this cluster. In other
## environments, point the annotation below at whatever storage class the
## cluster provides (e.g. minikube's "standard" hostpath class).
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: ceph
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: 1Gi
EOF

The po, pv and pvc objects after creation are as follows:

[root@172 etcd]# kubectl get po -owide 
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-0    1/1       Running   0          15m       192.168.5.174   172.16.20.10
etcd-1    1/1       Running   0          15m       192.168.3.16    172.16.20.12
etcd-2    1/1       Running   0          5s        192.168.5.176   172.16.20.10

2. Test scaling in

kubectl scale statefulset etcd --replicas=2

[root@172 ~]# kubectl get po -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-0    1/1       Running   0          17m       192.168.5.174   172.16.20.10
etcd-1    1/1       Running   0          17m       192.168.3.16    172.16.20.12
etcd-2    1/1       Running   0          1m        192.168.5.176   172.16.20.10
etcd-2    1/1       Terminating   0         1m        192.168.5.176   172.16.20.10
etcd-2    0/1       Terminating   0         1m        <none>    172.16.20.10

Check cluster health:

kubectl exec etcd-0 -- etcdctl cluster-health

failed to check the health of member 42c8b94265b9b79a on http://etcd-2.etcd:2379: Get http://etcd-2.etcd:2379/health: dial tcp: lookup etcd-2.etcd on 10.96.0.10:53: no such host
member 42c8b94265b9b79a is unreachable: [http://etcd-2.etcd:2379] are all unreachable
member 9869f0647883a00d is healthy: got healthy result from http://etcd-1.etcd:2379
member c799a6ef06bc8c14 is healthy: got healthy result from http://etcd-0.etcd:2379
cluster is healthy

After scaling in, etcd-2 is not removed from the etcd cluster automatically, which shows this etcd image's support for automatic scaling in and out is rather limited.
Remove etcd-2 manually:

[root@172 etcd]# kubectl exec etcd-0 -- etcdctl member remove 42c8b94265b9b79a
Removed member 42c8b94265b9b79a from cluster
[root@172 etcd]# kubectl exec etcd-0 -- etcdctl cluster-health                
member 9869f0647883a00d is healthy: got healthy result from http://etcd-1.etcd:2379
member c799a6ef06bc8c14 is healthy: got healthy result from http://etcd-0.etcd:2379
cluster is healthy

3. Test scaling out

As the startup script in etcd.yaml shows, a newly started etcd pod is launched with --initial-cluster-state new during scale-out, and this etcd image does not support dynamic scaling. A better option is to change the startup script to deploy the etcd cluster dynamically based on DNS so that the cluster can actually scale out; a rough sketch follows.
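
As a rough, unvalidated sketch of the DNS-based direction: etcd supports SRV-record discovery through its --discovery-srv flag. For the records to exist, the headless service's ports would have to be named etcd-server and etcd-client so that kube-dns publishes _etcd-server._tcp/_etcd-client._tcp SRV records; the exec line in the startup script would then look roughly like this:

# sketch only: replace the static --initial-cluster list with DNS SRV discovery
exec etcd --name ${HOSTNAME} \
  --discovery-srv ${SET_NAME}.default.svc.cluster.local \
  --listen-peer-urls http://${IP}:2380 \
  --listen-client-urls http://${IP}:2379,http://127.0.0.1:2379 \
  --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}.default.svc.cluster.local:2380 \
  --advertise-client-urls http://${HOSTNAME}.${SET_NAME}.default.svc.cluster.local:2379 \
  --initial-cluster-state new \
  --data-dir /var/run/etcd/default.etcd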

A few commands for the UFW firewall

UFW (Uncomplicated Firewall) is a firewall management tool built on top of iptables. It aims to simplify firewall configuration, so what UFW actually modifies are iptables rules. Although iptables is a solid and flexible tool, it is hard for beginners to learn to configure a firewall correctly with it. If you want to start protecting your network and are not sure which tool to use, UFW may be the right choice for you.

The test environment for this article is Ubuntu 16.04; other systems can use it as a reference.

0x01. A friendly warning

If you are operating remotely, set up a scheduled job that disables the firewall periodically, so that you cannot lock yourself out.

Disable the firewall every 10 minutes:

$ crontab -e
#*/10 * * * * /data/shell/stop_ufw.sh

The script itself is trivial:

$ cat /data/shell/stop_ufw.sh 
#!/bin/bash
/usr/sbin/ufw disable

0x02. Requirements

Ubuntu ships with UFW installed by default; if ufw is missing, it can be installed manually:

$ sudo apt-get update
$ sudo apt-get install ufw

0x03. Basic configuration

Let UFW manage IPv6

If your Ubuntu server's network supports IPv6, make sure UFW is configured to support IPv6 so that it manages IPv6 firewall rules in addition to IPv4.

sudo vim /etc/default/ufw
Make sure the IPV6 option is set to yes:

IPV6=yes

Set the default policy

UFW allows all outgoing connections and denies all incoming connections by default, so start by setting these default rules explicitly:

$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing

ufw default also accepts the reject argument.

Allow SSH connections

Once UFW is enabled, you will no longer be able to reach the host over SSH unless SSH connections are allowed, so before turning the firewall on make sure SSH is allowed:

$ sudo ufw allow ssh

This creates a rule allowing ssh connections, which really means allowing connections on port 22; it is equivalent to:

$ sudo ufw allow 22

UFW knows from /etc/services that the ssh service uses port 22 by default; if your SSH service listens on a different port, use that port number instead.

Check the firewall status

Check the firewall status with the following command:

$ sudo ufw status verbose

verbose can also be omitted; when the firewall is disabled, only inactive is shown.
The output also lists the firewall rules that have just been added.

0x04. Enabling/disabling UFW

Enable UFW:

$ sudo ufw enable

This command also sets UFW to start at boot by default; if UFW does not come back up automatically after a reboot, enable the UFW service manually:

$ sudo systemctl start ufw
$ sudo systemctl enable ufw

Remember to check the firewall's current status:

$ sudo ufw status
Status: active
To                         Action      From
--                         ------      ----
80/tcp                     ALLOW       Anywhere                  
443/tcp                    ALLOW       Anywhere

Disable UFW (this disables the firewall and turns off its start-at-boot):

$ sudo ufw disable

0x05. Enabling/disabling firewall logging

Enable firewall logging:

$ sudo ufw logging on

Disable firewall logging:

$ sudo ufw logging off

The log level can be set with sudo ufw logging low|medium|high.
The log file is /var/log/ufw.log and its contents look like this:

Oct 11 11:51:31 store42 kernel: [45088.074036] [UFW BLOCK] IN=eno1 OUT= MAC=80:18:44:e1:ae:68:00:0f:e2:b1:01:01:08:00 SRC=60.169.78.143 DST=183.60.192.48 LEN=40 TOS=0x00 PREC=0x00 TTL=244 ID=1991 PROTO=TCP SPT=44007 DPT=8080 WINDOW=1024 RES=0x00 SYN URGP=0

Each entry starts with the date, time and hostname from the host's firewall log; the remaining fields mean the following (a parsing example follows the list):
[UFW BLOCK]: the start of the event description and the kind of event; in this example, a connection was blocked.
IN: if it carries a value, the event was incoming.
OUT: if it carries a value, the event was outgoing.
MAC: the combination of the destination and source MAC addresses.
SRC: the source IP of the packet.
DST: the destination IP.
LEN: the packet length.
TTL: the packet's TTL (time to live).
PROTO: the packet's protocol.
SPT: the packet's source port.
DPT: the destination port.
WINDOW: the packet size the sender can receive.
SYN URGP: whether a three-way handshake is required; 0 means it is not.
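
With these fields, the blocked traffic can be summarised straight from the log; a small sketch assuming the default log file /var/log/ufw.log:

# count blocked connections per source IP, busiest first
grep '\[UFW BLOCK\]' /var/log/ufw.log | grep -o 'SRC=[0-9.]*' | cut -d= -f2 | sort | uniq -c | sort -rn | head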

0x06. Allowing connections

By default, ufw allow without in allows incoming connections; to explicitly allow outgoing traffic, add out, e.g.:

$ sudo ufw allow in port #allow incoming traffic on port
$ sudo ufw allow out port #allow outgoing traffic on port

Allow a specific port and protocol

As the ssh rule above showed, a plain allow permits connections.
Allow all connections to HTTP port 80:

$ sudo ufw allow http

Equivalent to:

$ sudo ufw allow 80

Allow a port range with a protocol

For example, X11 uses the port range 6000-6007:

$ sudo ufw allow 6000:6007/tcp
$ sudo ufw allow 6000:6007/udp

Allow connections from a specific IP

$ sudo ufw allow from 192.168.1.100

Allow 192.168.1.100 to access a specific port (port 22):

$ sudo ufw allow from 192.168.1.100 to any port 22

Allow connections from a subnet

Allow all connections from the range 192.168.1.1 to 192.168.1.254:

$ sudo ufw allow from 192.168.1.0/24

Allow the 192.168.1.0/24 range to access a specific port (port 22):

$ sudo ufw allow from 192.168.1.0/24 to any port 22

Allow connections on a specific network interface

For example, allow connections to port 80 on eth0:

$ sudo ufw allow in on eth0 to any port 80

0x07. Denying connections

The same as allowing connections, just replace allow with deny. For example, deny all connections to the http port:

$ sudo ufw deny http

Equivalent to:

$ sudo ufw deny 80

Deny connections from a specific IP:

$ sudo ufw deny from 192.168.1.100

0x08. Deleting rules

UFW offers two ways to delete firewall rules: by rule number or by the actual rule; deleting by rule number is easier.

Delete by rule number
First list all rules with their numbers:

$ sudo ufw status numbered
Status: active
     To                         Action      From
     --                         ------      ----
[ 1] 80/tcp                     ALLOW IN    Anywhere                  
[ 2] 443/tcp                    ALLOW IN    Anywhere                  
[ 3] 22/tcp                  ALLOW IN    Anywhere          
[ 4] Anywhere                   ALLOW IN    192.168.1.0/24

Then delete the rule directly by number, e.g. delete the https (443) rule:

$ sudo ufw delete 2

Delete by rule
Delete the allow http rule:

$ sudo ufw delete allow 80

0x09. Resetting the firewall rules

$ sudo ufw reset

This command disables UFW and deletes every rule that has been defined; by default, however, it backs up the existing rules first.

0x10. Backing up/restoring rules

All of UFW's rule files live under /etc/ufw/. before.rules holds the rules UFW runs before the user-defined rules (before6.rules is the IPv6 counterpart), after.rules holds the rules run after the user-defined ones, and user.rules holds the user-defined rules themselves.
/etc/default/ufw is UFW's configuration file.
So the firewall rules can be backed up simply by backing up these configuration files; the files to back up are:

/etc/ufw/*.rules
/lib/ufw/*.rules

/etc/default/ufw # no need to back this one up if it has never been modified
After modifying the configuration files, reload them with:

$ sudo ufw reload

0x11. Miscellaneous

Blocking IPs in bulk

$ while read line; do sudo ufw deny from $line; done < file.txt

file.txt contains the list of IPs to block, one per line.

References:
1.How To Set Up a Firewall with UFW on Ubuntu
2.How to Configure a Firewall with UFW

http://notes.maxwi.com/2017/01/19/linux-command-tools-ufw/

Transferring very large files on Linux

For file transfers on Linux, tools such as rsync and scp come to mind first, but they share one trait: they are slow, because they encrypt the traffic (the sender encrypts, the receiver decrypts). When transferring non-sensitive files we can skip encryption altogether and send the data straight over the network.
Straight to an example: transferring a 2077 MB ISO file.

Sending and receiving data with nc

Receiver:

nc -l 45.55.0.86 9999 > jieshou.iso

➤ -l : listen on a port to receive data
➤ -u : use UDP instead of TCP for the data connection (presumably faster; not tested)
The whole command means: open port 9999 locally to receive data and write whatever arrives into the file "jieshou.iso".

Sender:

time nc  45.55.0.86 9999 < CentOS-6.9-x86_64-bin-DVD2.iso

The time prefix at the front measures how long the command takes to run.
A 2077 MB file was transferred over the public internet in just 24 seconds, an average of about 87 MB/s; after the transfer, the MD5 checksums on both ends matched exactly (see the example below).
Transferring with nc has two notable traits:
➤ it is fast
➤ it is simple: no need to log in to the remote server, no credentials required.
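
The MD5 check mentioned above is simply the same checksum run on both ends, for example:

md5sum CentOS-6.9-x86_64-bin-DVD2.iso   # on the sender
md5sum jieshou.iso                      # on the receiver; the two digests should match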

Showing progress with nc

If the file really is huge and you want to see the transfer progress, use pv:

yum install epel-release -y
yum install pv -y
cat CentOS-6.9-x86_64-bin-DVD2.iso |pv -b | nc  45.55.0.86 9999

Transferring a directory

Receiver:

nc -l 45.55.0.86 9999 | pv -b > home.tar.gz

Sender:

tar -czf - /home/ | nc  45.55.0.86 9999
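
If the receiver should unpack the tree directly instead of keeping a tarball, the stream can be piped straight into tar (a variant of the command above; it recreates the home/ directory under the current directory):

nc -l 45.55.0.86 9999 | pv -b | tar -xzf -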

Relaying files through an intermediate host

Three hosts A, B and C: A is in the US, C is in Changnan; C can only reach B and cannot reach A directly, while B can talk to both A and C. How can C get a file that lives on A?
On C, run:

nc -l 9999 > google_file.txt

On B, run:

nc -l 9999 | nc <C's public IP> 9999

On A, run:

nc <B's public IP> 9999 < google_file.txt

Changing the yum repositories and updating the system on CentOS 7

Open-source mirrors

NetEase open-source mirror

NetEase mirror usage help: http://mirrors.163.com/.help

Alibaba Cloud open-source mirror

https://mirrors.aliyun.com/repo/

1. Back up the existing repo file

cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
or
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

2. Download CentOS-Base.repo into /etc/yum.repos.d/

cd   /etc/yum.repos.d/
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo

3. Rebuild the cache

yum clean all
yum makecache

4. Update the system

Updating the system may take quite a while, depending on the server's network speed.

yum -y update

Fixing garbled Chinese characters on Ubuntu

Chinese support on Ubuntu, and the garbled Chinese problem

This post is the outcome of an afternoon spent stepping on landmines; it is tested and works, and applies to servers as well.

Symptom: the Ubuntu system in use has no Chinese support, and any Chinese text turns into ????. ORZ…

Goal: make the system/server support Chinese and display it correctly.

First, install the Chinese language support package language-pack-zh-hans:

$ sudo apt-get install language-pack-zh-hans

Then edit /etc/environment (append at the end of the file):

LANG="zh_CN.UTF-8"
LANGUAGE="zh_CN:zh:en_US:en"

Next edit /var/lib/locales/supported.d/local (create the file if it does not exist, again appending at the end):

en_US.UTF-8 UTF-8
zh_CN.UTF-8 UTF-8
zh_CN.GBK GBK
zh_CN GB2312

Finally, run:

$ sudo locale-gen
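
To confirm the locales were generated, the following can be checked (after logging in again, LANG should show zh_CN.UTF-8):

$ locale -a | grep zh_CN
$ locale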

If garbled Chinese shows up as blank spaces instead, installing Chinese fonts fixes it:

$ sudo apt-get install fonts-droid-fallback ttf-wqy-zenhei ttf-wqy-microhei fonts-arphic-ukai fonts-arphic-uming

With that, the problem is solved and Chinese displays correctly. :)

Installing Kubernetes 1.8 on CentOS 7 with kubeadm

1. System configuration

1.1 Stop the firewall

systemctl stop firewalld
systemctl disable firewalld

1.2 Disable SELinux

setenforce 0

Edit /etc/selinux/config and set SELINUX to disabled, as follows:

SELINUX=disabled

1.3 Turn off system swap

Starting with Kubernetes 1.8, system swap must be turned off; otherwise kubelet will not start with its default configuration. Option one: lift the restriction with the kubelet startup flag --fail-swap-on=false. Option two: turn off swap.

swapoff -a

Edit /etc/fstab and comment out the swap mount, then use free -m to confirm swap is off.
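
Commenting out the swap entry in /etc/fstab can also be done non-interactively; a sketch (review the file afterwards):

# comment out any active swap mount line in /etc/fstab, then verify swap is off
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab
free -m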

2. Install Docker

Note: this step must be performed on all nodes.

2.1 Download the Docker packages

  • Download location: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
  • Download the packages:
mkdir ~/k8s
cd ~/k8s
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

2.2 Install Docker

cd ~/k8s
yum install ./docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install ./docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
systemctl enable docker
systemctl start docker

2.3 Configure Docker

  • Enable the FORWARD chain of the iptables filter table
    Edit /lib/systemd/system/docker.service and add the following line above ExecStart=..:
ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT

So that it looks like this:

......
ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStart=/usr/bin/dockerd
......
  • Configure the cgroup driver
    Create the file /etc/docker/daemon.json with the following content:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
  • Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl status docker

3. Install Kubernetes

3.1 Install kubeadm, kubectl and kubelet

  • Configure the package repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  • Fix routing issues for bridged traffic
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
  • Adjust the swappiness parameter
    Edit /etc/sysctl.d/k8s.conf and add the following line:
vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.

  • Install kubeadm, kubectl and kubelet
    ① List the available versions
yum list --showduplicates | grep -E 'kubeadm|kubectl|kubelet'

② Install the chosen version

yum install kubeadm-1.8.1 kubectl-1.8.1 kubelet-1.8.1
systemctl enable kubelet
systemctl start kubelet

3.2 Initialize the cluster with kubeadm init

Note: this subsection is executed on the master node only.

  • Initialize the master node
kubeadm init --kubernetes-version=v1.8.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=master.k8s.samwong.im
  • Configure a regular user to access the cluster with kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Check the cluster status
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"} 
  • Cleanup commands if initialization fails
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

3.3 Install the pod network

Note: this subsection is executed on the master node only.

  • Install Flannel
[root@master ~]# cd ~/k8s
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f  kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
  • Specify the network interface
    If the hosts have more than one NIC, the --iface argument must be used in kube-flannel.yml to name the hosts' internal NIC; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface= with the interface name to the flanneld startup arguments, as in the snippet below.
......
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
......
containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.9.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth1" ]
......
  • Check pod status
kubectl get pod --all-namespaces -o wide

3.4 Let the master node run workloads

In a cluster initialized by kubeadm, pods are not scheduled onto the master node for safety reasons. The following command lets the master node take workloads (replace node1 with the master's node name).

kubectl taint nodes node1 node-role.kubernetes.io/master-

3.5 Add a node to the Kubernetes cluster

  • Look up the master's token
kubeadm token list | grep authentication,signing | awk '{print $1}'
  • Look up the discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  • Join the node to the Kubernetes cluster
kubeadm join --token=a20844.654ef6410d60d465 --discovery-token-ca-cert-hash sha256:0c2dbe69a2721870a59171c6b5158bd1c04bc27665535ebf295c918a96de0bb1 master.k8s.samwong.im:6443
  • List the nodes in the cluster
[root@master ~]# kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
master.k8s.samwong.im   Ready     master    1d        v1.8.1

3.6 Remove a node from the Kubernetes cluster

  • On the master node
kubectl drain master.k8s.samwong.im --delete-local-data --force --ignore-daemonsets
kubectl delete node master.k8s.samwong.im
  • On the node being removed
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
  • List the cluster nodes
kubectl get nodes

3.7 Deploy the Dashboard add-on

  • Download the Dashboard manifest
cd ~/k8s
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
  • Modify the Dashboard Service
    Edit kubernetes-dashboard.yaml and add type: NodePort to the Dashboard Service to expose it.
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  • Install the Dashboard add-on
kubectl create -f kubernetes-dashboard.yaml
  • Give the Dashboard account cluster-admin rights
    Create a ServiceAccount named kubernetes-dashboard-admin, grant it the cluster-admin role, and save the manifest as kubernetes-dashboard-admin.rbac.yaml.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

Apply it:

[root@master ~]# kubectl create -f kubernetes-dashboard-admin.rbac.yaml
serviceaccount "kubernetes-dashboard-admin" created
clusterrolebinding "kubernetes-dashboard-admin" created
  • Look up the kubernetes-dashboard-admin token
[root@master ~]# kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-jxq7l   kubernetes.io/service-account-token   3         22h
[root@master ~]# kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-jxq7l
Name:         kubernetes-dashboard-admin-token-jxq7l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard-admin
              kubernetes.io/service-account.uid=686ee8e9-ce63-11e7-b3d5-080027d38be0
Type:  kubernetes.io/service-account-token
Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1qeHE3bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY4NmVlOGU5LWNlNjMtMTFlNy1iM2Q1LTA4MDAyN2QzOGJlMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Ua92im86o585ZPBfsOpuQgUh7zxgZ2p1EfGNhr99gAGLi2c3ss-2wOu0n9un9LFn44uVR7BCPIkRjSpTnlTHb_stRhHbrECfwNiXCoIxA-1TQmcznQ4k1l0P-sQge7YIIjvjBgNvZ5lkBNpsVanvdk97hI_kXpytkjrgIqI-d92Lw2D4xAvHGf1YQVowLJR_VnZp7E-STyTunJuQ9hy4HU0dmvbRXBRXQ1R6TcF-FTe-801qUjYqhporWtCaiO9KFEnkcYFJlIt8aZRSL30vzzpYnOvB_100_DdmW-53fLWIGYL8XFnlEWdU1tkADt3LFogPvBP4i9WwDn81AwKg_Q
ca.crt:     1025 bytes
  • Look up the Dashboard service port
[root@master k8s]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   1d
kubernetes-dashboard   NodePort    10.102.209.161   <none>        443:32513/TCP   21h
  • Access the Dashboard
    Open a browser and go to https://192.168.56.2:32513.

3.8 Deploy the Heapster add-on

Install Heapster to add usage statistics and monitoring to the cluster and to provide graphs for the Dashboard.

mkdir -p ~/k8s/heapster
cd ~/k8s/heapster
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f ./

4. Problems encountered

4.1 Getting through the firewall with a proxy

  • No proxy available
    You can sign up for an AWS free-tier account, create an EC2 instance and set up a Shadowsocks server.

  • Configure the proxy client
    Reference: https://www.zybuluo.com/ncepuwanghui/note/954160

  • Configure a proxy for Docker
    ① Create the docker service drop-in directory

mkdir -p /etc/systemd/system/docker.service.d

② Edit /etc/systemd/system/docker.service.d/http-proxy.conf and add the following:

[Service]
Environment="HTTP_PROXY=http://master.k8s.samwong.im:8118" "NO_PROXY=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.244.0.0/16"

③ Edit /etc/systemd/system/docker.service.d/https-proxy.conf and add the following:

[Service]
Environment="HTTPS_PROXY=https://master.k8s.samwong.im:8118" "NO_PROXY=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.244.0.0/16"

④ Restart the Docker service

systemctl daemon-reload && systemctl restart docker

⑤ Check that the configuration took effect

[root@master k8s]# systemctl show --property=Environment docker | more
Environment=HTTP_PROXY=http://master.k8s.samwong.im:8118 NO_PROXY=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.244.0.0/16 HTTPS_PROXY=https://master.k8
s.samwong.im:8118
  • Configure a proxy for yum
    ① Edit /etc/yum.conf and append the following:
proxy=http://master.k8s.samwong.im:8118

② Refresh the yum cache

yum makecache
  • Configure a proxy for wget
    Edit /etc/wgetrc and append the following:
ftp_proxy=http://master.k8s.samwong.im:8118
http_proxy=http://master.k8s.samwong.im:8118
https_proxy=http://master.k8s.samwong.im:8118
  • Configure a global proxy
    If general internet access is needed, edit /etc/profile and append the following:
PROXY_HOST=master.k8s.samwong.im
export all_proxy=http://$PROXY_HOST:8118
export ftp_proxy=http://$PROXY_HOST:8118
export http_proxy=http://$PROXY_HOST:8118
export https_proxy=http://$PROXY_HOST:8118
export no_proxy=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.10.0.0/16

Note: the global proxy must be disabled while deploying Kubernetes, otherwise access to in-cluster services will fail.

4.2 Download packages and images

  • Download kubeadm, kubectl and kubelet
wget https://storage.googleapis.com/kubernetes-release/release/v1.8.1/bin/linux/amd64/kubeadm
wget https://storage.googleapis.com/kubernetes-release/release/v1.8.1/bin/linux/amd64/kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.8.1/bin/linux/amd64/kubelet

Reference: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl

4.3 Push local images to an image registry

  • Upload an image
docker login -u [email protected] -p xxxxxx hub.c.163.com
docker tag gcr.io/google_containers/kube-apiserver-amd64:v1.8.1 hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker push hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker rmi hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker logout hub.c.163.com
  • Download an image
docker pull hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker tag hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1 gcr.io/google_containers/kube-apiserver-amd64:v1.8.1
docker rmi hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker logout hub.c.163.com
  • Clean up running containers after updating images
docker update --restart=no $(docker ps -q) && docker stop $(docker ps -q) && docker rm $(docker ps -q)

4.4 kubeadm init errors

  • Error description
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "nodes is forbidden: User "system:anonymous" cannot list nodes at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "nodes"
  },
  "code": 403
}
  • Cause
    The node had a global proxy configured in /etc/profile, so kubectl requests to the kube-apiserver were also routed through the proxy, leading to a certificate mismatch and the connection being refused.

  • Fix
    Remove the global proxy and configure only the Docker, yum and wget proxies.
    See 4.1.

4.5 Failed to add a node to the Kubernetes cluster

  • Problem description
    Running kubeadm join on the node to add it to the kubernetes cluster fails, as follows:
kubeadm join --token=a20844.654ef6410d60d465 --discovery-token-ca-cert-hash sha256:0c2dbe69a2721870a59171c6b5158bd1c04bc27665535ebf295c918a96de0bb1 master.k8s.samwong.im:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "master.k8s.samwong.im:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://master.k8s.samwong.im:6443"
[discovery] Failed to request cluster info, will try again: [Get https://master.k8s.samwong.im:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: EOF]
  • Cause
    The token had expired and been deleted. Listing the tokens on the master returned nothing:
kubeadm token list
  • Fix
    Generate a new token. The default token lifetime is 24 hours; specifying --ttl 0 when generating it makes the token permanent.
[root@master ~]# kubeadm token create --ttl 0
3a536a.5d22075f49cc5fb8
[root@master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
3a536a.5d22075f49cc5fb8   <forever>   <never>                     authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Installing MySQL 5.7 with yum on CentOS

1. Install the yum repository

[root@dream7788 ~]# wget https://repo.mysql.com//mysql57-community-release-el7-9.noarch.rpm
[root@dream7788 ~]# yum localinstall mysql57-community-release-el7-9.noarch.rpm
[root@dream7788 ~]# yum repolist enabled | grep "mysql.*-community.*"

2. Install mysql

[root@dream7788 ~]# yum install mysql-community-server

3. Start mysql

[root@dream7788 ~]# systemctl restart mysqld.service

4. Get the root password generated at first install

[root@dream7788 ~]# grep 'temporary password' /var/log/mysqld.log

It prints something like:

2017-01-08T19:41:01.080513Z 1 [Note] A temporary password is generated for root@localhost: js!iUor1wOTT

5. Change the root password

[root@dream7788 ~]# mysql -u root -p

Change the password:

mysql> set global validate_password_policy=0;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'dream7788';
mysql> FLUSH PRIVILEGES;

6. Allow remote root login

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'dream7788' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
mysql> quit

Installing PHP 7.2 on Ubuntu/Debian

Introduction

Barring surprises, PHP 7.2.0 will reach GA on November 30, 2017, and everyone will be able to try it right away, so here is the PHP 7.2 installation tutorial ahead of time to make upgrading easier.

Applicable systems: Ubuntu 16.04 LTS / Ubuntu 14.04 LTS / Debian 9 stretch / Debian 8 jessie

Installing PHP

Ondřej Surý's PHP PPA provides PHP 7.2 for Ubuntu 16.04/14.04, and he also provides PHP 7.2 packages for Debian 9/8 through his own site; since Ubuntu is derived from Debian the two are largely interchangeable, and the maintenance overhead is low. PHP installed from these repositories runs on a Unix socket at /run/php/php7.2-fpm.sock by default, which performs better than TCP on localhost:9000.

It is worth mentioning that Ondřej Surý is one of the official maintainers of Debian's PHP packages, so stability and security are basically not a concern.

Since PHP 7.2 is a brand-new release there are bound to be quite a few compatibility issues; for Chinese-made applications in particular it is advisable to wait for the developers' notice before upgrading, and some PECL extensions may not be adapted to the latest version right away. Make backups before updating. What is known so far is that WordPress supports PHP 7.2 from version 4.9 onwards.

Adding the repositories

Ubuntu

Install the repository management tool:

apt -y install software-properties-common

Add Ondřej Surý's PHP PPA (press Enter once when prompted):

add-apt-repository ppa:ondrej/php  

Refresh the package index:

apt update

Debian

Add the GPG key

wget -O /etc/apt/trusted.gpg.d/php.gpg https://mirror.xtom.com.hk/sury/php/apt.gpg

Install apt-transport-https

apt-get install apt-transport-https

Add the sury repository

sh -c 'echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list'   

Refresh the package index:

apt-get update

Installing the packages

Install PHP 7.2:

apt install php7.2-fpm php7.2-mysql php7.2-curl php7.2-gd php7.2-mbstring php7.2-xml php7.2-xmlrpc php7.2-zip php7.2-opcache -y

Configuring PHP

After installation, edit /etc/php/7.2/fpm/php.ini and replace ;cgi.fix_pathinfo=1 with cgi.fix_pathinfo=0. A quick one-liner:

sed -i 's/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/' /etc/php/7.2/fpm/php.ini 

Managing PHP

Once installed, restart it first!

systemctl restart php7.2-fpm

More operations:

systemctl restart php7.2-fpm #restart
systemctl start php7.2-fpm #start
systemctl stop php7.2-fpm #stop
systemctl status php7.2-fpm #check status

Updating PHP

Running the commands below updates every upgradable package on the system, PHP included:

apt update
apt upgrade -y

Installing more components

The single install command above only pulls in some of the PHP extensions; the full list of available packages can be seen with:

root@host:~# apt-cache search php7.2

php-radius - radius client library for PHP
php-http - PECL HTTP module for PHP Extended HTTP Support
php-uploadprogress - file upload progress tracking extension for PHP
php-yaml - YAML-1.1 parser and emitter for PHP
php-mongodb - MongoDB driver for PHP
php-apcu - APC User Cache for PHP
php-imagick - Provides a wrapper to the ImageMagick library
php-ssh2 - Bindings for the libssh2 library
php-redis - PHP extension for interfacing with Redis
php-memcached - memcached extension module for PHP, uses libmemcached
php-apcu-bc - APCu Backwards Compatibility Module
php-rrd - PHP bindings to rrd tool system
php-uuid - PHP UUID extension
php-memcache - memcache extension module for PHP
php-zmq - ZeroMQ messaging bindings for PHP
php-igbinary - igbinary PHP serializer
php-msgpack - PHP extension for interfacing with MessagePack
php-geoip - GeoIP module for PHP
php-tideways - Tideways PHP Profiler Extension
php-yac - YAC (Yet Another Cache) for PHP
php-mailparse - Email message manipulation for PHP
php-oauth - OAuth 1.0 consumer and provider extension
php-gnupg - PHP wrapper around the gpgme library
php-propro - propro module for PHP
php-raphf - raphf module for PHP
php-solr - PHP extension for communicating with Apache Solr server
php-stomp - Streaming Text Oriented Messaging Protocol (STOMP) client module for PHP
php-gearman - PHP wrapper to libgearman
php-phalcon - full-stack PHP framework delivered as a C-extension
php-ds - PHP extension providing efficient data structures for PHP 7
php-sass - PHP bindings to libsass - fast, native Sass parsing in PHP
php-lua - PHP Embedded lua interpreter
libapache2-mod-php7.2 - server-side, HTML-embedded scripting language (Apache 2 module)
libphp7.2-embed - HTML-embedded scripting language (Embedded SAPI library)
php7.2-bcmath - Bcmath module for PHP
php7.2-bz2 - bzip2 module for PHP
php7.2-cgi - server-side, HTML-embedded scripting language (CGI binary)
php7.2-cli - command-line interpreter for the PHP scripting language
php7.2-common - documentation, examples and common module for PHP
php7.2-curl - CURL module for PHP
php7.2-dba - DBA module for PHP
php7.2-dev - Files for PHP7.2 module development
php7.2-enchant - Enchant module for PHP
php7.2-fpm - server-side, HTML-embedded scripting language (FPM-CGI binary)
php7.2-gd - GD module for PHP
php7.2-gmp - GMP module for PHP
php7.2-imap - IMAP module for PHP
php7.2-interbase - Interbase module for PHP
php7.2-intl - Internationalisation module for PHP
php7.2-json - JSON module for PHP
php7.2-ldap - LDAP module for PHP
php7.2-mbstring - MBSTRING module for PHP
php7.2-mysql - MySQL module for PHP
php7.2-odbc - ODBC module for PHP
php7.2-opcache - Zend OpCache module for PHP
php7.2-pgsql - PostgreSQL module for PHP
php7.2-phpdbg - server-side, HTML-embedded scripting language (PHPDBG binary)
php7.2-pspell - pspell module for PHP
php7.2-readline - readline module for PHP
php7.2-recode - recode module for PHP
php7.2-snmp - SNMP module for PHP
php7.2-soap - SOAP module for PHP
php7.2-sqlite3 - SQLite3 module for PHP
php7.2-sybase - Sybase module for PHP
php7.2-tidy - tidy module for PHP
php7.2-xml - DOM, SimpleXML, WDDX, XML, and XSL module for PHP
php7.2-xmlrpc - XMLRPC-EPI module for PHP
php7.2-zip - Zip module for PHP
php7.2-xsl - XSL module for PHP (dummy)
php7.2 - server-side, HTML-embedded scripting language (metapackage)
php7.2-sodium - libsodium module for PHP

Installing composer and laravel on ubuntu

Installing the PHP framework laravel on ubuntu.

I. Preparation

1. Server requirements

The laravel framework's system requirements are all covered by the Laravel Homestead virtual machine; if Homestead is not used as the development environment, the server needs to meet the following requirements:

  • PHP >= 7.0.0

  • PHP OpenSSL extension

  • PHP PDO extension

  • PHP Mbstring extension

  • PHP Tokenizer extension

  • PHP XML extension

2. Install the PHP extensions

(The PHP extension installation step is shown as a screenshot in the original post.)

3. Install and configure composer

3.1 Install composer

Run the following commands in order to install composer:

php -r "copy('https://install.phpcomposer.com/installer','composer-setup.php');"   # download the installer script composer-setup.php to the current directory
php composer-setup.php                    # run the installer
php -r "unlink('composer-setup.php');"    # remove the installer script

Install it globally:

sudo mv composer.phar /usr/local/bin/composer

After the installation, run composer -v.


If the version banner is printed, the installation succeeded.

3.2 Configure composer

Modify composer's global configuration:

composer config -g repo.packagist composer https://packagist.phpcomposer.com

Then run composer selfupdate.

II. Install and configure laravel

Install laravel:

composer global require "laravel/installer=~1.1"

Configure laravel:

export PATH="$HOME/.config/composer/vendor/bin:$PATH"   # add the composer bin directory to PATH ($HOME is used because ~ does not expand inside double quotes)

Also append it to /etc/bash.bashrc, then run source ~/.bashrc so the environment variable takes effect immediately.

Then go to /var/www and create a new laravel project:

laravel new laravel

During laravel new a few files may be reported as lacking write permission; simply granting them 777 permissions is enough.

Adjust the laravel directory permissions:

sudo chown -R :www-data /var/www/laravel
sudo chmod -R 775 /var/www/laravel/storage

And that's it: laravel is ready, go run it.