Nginx plus Tomcat on Kubernetes, Part One of Three: A Quick Hands-On

Overview

In production environments, Nginx in front of Tomcat is a common deployment pattern, as shown in the figure below:

[figure]

Starting with this chapter, we will deploy the Nginx and Tomcat services described above on Kubernetes, and build a Spring Boot web application to verify the environment. The whole exercise is split into three parts:

  1. A quick hands-on with Nginx plus Tomcat on Kubernetes;
  2. A close look at how the Nginx and Tomcat images are built;
  3. Hands-on online scaling and application upgrades for the Tomcat servers;

Project overview

The Pods created in this exercise are:

  1. One Nginx Pod, responsible for forwarding web requests to Tomcat;
  2. Three Tomcat Pods running a web application that, for each request forwarded by Nginx, returns the IP address of the current Pod;

Preparing the Kubernetes environment

This exercise requires a working Kubernetes environment; you can set one up quickly by following these articles:

  1. http://blog.csdn.net/boling_cavalry/article/details/78762829
  2. http://blog.csdn.net/boling_cavalry/article/details/78764915

Running kubectl commands

For this exercise, install the kubectl tool on an Ubuntu machine and connect it to the Kubernetes environment to run the various commands. The kubectl installation steps are covered in this article: http://blog.csdn.net/boling_cavalry/article/details/79223091

Downloading the script files

The deployment and service resources used in this walkthrough are created by a script, which can be downloaded in either of two ways:

  1. CSDN download (a free download could not be configured, so it will unfortunately cost you two points): http://download.csdn.net/download/boling_cavalry/10235034
  2. GitHub download; the address and link information are shown in the table below:

[table]

The git project contains several directories; the resources for this article are under k8s_nginx_tomcat_resource, as shown in the red box below:

[figure]

The downloaded k8stomcatcluster20180201.tar is an archive; copy it to the Ubuntu machine that can run kubectl commands and unpack it to get a folder named k8stomcatcluster;

Running the downloaded scripts

  1. Enter the unpacked k8stomcatcluster directory;
  2. Run chmod a+x *.sh to make the shell scripts executable;
  3. Run start_all.sh to create the resources for this exercise; the console prints the following:
root@maven:/usr/local/work/k8s/k8stomcatcluster# ./start_all.sh 
deployment "tomcathost" created
service "tomcathost" created
deployment "ng" created
service "ng" created

nginx and tomcat running now
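Before moving on to the dashboard, you can already confirm from the command line that everything the script printed above was created (the resources land in the default namespace):

# list the deployments, services and pods the script just created
kubectl get deployment,service,pod -o wide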

Verifying that the services are up

  • First check in the Kubernetes dashboard that the services have started; as shown below, the two services named ng and tomcathost are both up:

[figure]

  • Click the tomcathost service to see its details, including its Pods, as shown below:

[figure]

  • The screenshot shows that tomcathost was created on node1; my node1 machine's IP address is 192.168.119.153, so enter the following in a browser:
    http://192.168.119.153:30006/getserverinfo

  • The browser shows the IP address of the machine the Tomcat Pod runs on, plus the current time, as in the figure below:

[figure]

  • Refresh the page several times and you will see these three IP addresses: 10.42.38.128, 10.42.184.35, 10.42.127.135. They are the addresses of the three Tomcat Pods, as shown in the red box below:

[figure]
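The same round-robin behavior can be observed from the command line with a small loop (the URL is the one from the step above):

# hit the Nginx entry a few times; the Pod IP in the response should rotate
for i in $(seq 1 6); do curl -s http://192.168.119.153:30006/getserverinfo; echo; done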

  • Run the stop_all.sh script in the k8stomcatcluster directory to delete all of the service and deployment resources created above;

  • With that, we have had a quick taste of the Nginx-plus-Tomcat site structure on Kubernetes. In the coming chapters we will look in detail at how to build this whole environment on Kubernetes;

Kubernetes: Monitoring a K8S Cluster with Prometheus

Prometheus is a pull-based time-series database. In this setup Grafana renders the graphs, and its data should be persisted; ideally that means a distributed file system with dynamic PVs, but this test environment uses local disk instead. The data-collecting agents are deployed as DaemonSets, whose defining property is that one instance runs on every node; all of this is deployed automatically.

This article only covers using Prometheus to monitor a K8S cluster; for Prometheus itself, refer to the official documentation.

Prerequisite: have the required files ready

$ ls -l /data/Prometheus/prometheus
total 28
drwxr-xr-x 2 root root 4096 Jan 15 02:53 grafana
drwxr-xr-x 2 root root 4096 Jan 15 03:11 kube-state-metrics
-rw-r--r-- 1 root root   60 Jan 14 06:48 namespace.yaml
drwxr-xr-x 2 root root 4096 Jan 15 03:22 node-directory-size-metrics
drwxr-xr-x 2 root root 4096 Jan 15 03:02 node-exporter
drwxr-xr-x 2 root root 4096 Jan 15 02:55 prometheus
drwxr-xr-x 2 root root 4096 Jan 15 02:37 rbac

$ ls grafana/
grafana-configmap.yaml  grafana-core-deployment.yaml  grafana-import-dashboards-job.yaml  grafana-pvc-claim.yaml  grafana-pvc-volume.yaml  grafana-service.yaml

$ ls prometheus/
configmap.yaml  deployment.yaml  prometheus-rules.yaml  service.yaml

The grafana and prometheus directories hold deployment files; node-exporter, kube-state-metrics, and node-directory-size-metrics are the three collectors, effectively Prometheus agents.

With the files ready, deploy step by step:

1. Create the required Namespace

All of the deployments, pods, and services for this Prometheus setup live in the monitoring namespace, so it must be created first.

 $ cat namespace.yaml 
 apiVersion: v1
 kind: Namespace
 metadata:
  name: monitoring

 $ kubectl create -f namespace.yaml 
 namespace "monitoring" created

2. Create the PV and PVC for Grafana

grafana# cat grafana-pvc-volume.yaml 
kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana-pv-volume
  labels:
    type: local
spec:
  storageClassName: grafana-pv-volume
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/data/volume/grafana"

grafana# cat grafana-pvc-claim.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-pvc-volume
  namespace: "monitoring"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: grafana-pv-volume

$ kubectl create -f grafana/grafana-pvc-volume.yaml -f grafana/grafana-pvc-claim.yaml 
persistentvolume "grafana-pv-volume" created
persistentvolumeclaim "grafana-pvc-volume" created

$ kubectl get pvc -n monitoring
NAME          STATUS           VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS     AGE
grafana-pvc-volume   Bound     grafana-pv-volume   10Gi       RWO     grafana-pv-volume   52s

The status is Bound: the claim is bound to grafana-pv-volume.

3. Create the Grafana application. These are third-party applications, each with its own configuration, defined through ConfigMaps.

grafana# ls
grafana-configmap.yaml  grafana-core-deployment.yaml  grafana-import-dashboards-job.yaml  grafana-pvc-claim.yaml  grafana-pvc-volume.yaml  grafana-service.yaml
grafana# kubectl create -f ./    # create everything under the grafana directory
configmap "grafana-import-dashboards" created
deployment "grafana-core" created
job "grafana-import-dashboards" created
service "grafana" created 


grafana# kubectl get deployment,pod -n monitoring 
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/grafana-core   1         1         1            0           1m

NAME                              READY     STATUS              RESTARTS   AGE
po/grafana-core-9c7f66868-7q8lx   0/1       ContainerCreating   0          1m
When po/grafana-core starts, it pulls the image grafana/grafana:4.2.0

A brief summary of what was created for Grafana:

      grafana-pv-volume=/data/volume/grafana =10G    
      grafana-pvc-volume=5G--->grafana-pv-volume
      ---configmap=grafana-import-dashboards     
      Job=grafana-import-dashboards

      Deployment=grafana-core     replicas: 1  containers=grafana-core   mount:  grafana-pvc-volume:/var
      service=grafana     port: 3000  = nodePort: 30161     (3000 is Grafana's default port)
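To confirm Grafana is answering on that NodePort, a quick probe works (NodeIP below is a placeholder for any node's address; 30161 is the nodePort from the Service above):

# the Grafana login page should return HTTP 200
curl -sI http://NodeIP:30161/login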

4. With Grafana's core application in place, deploy the RBAC objects for Prometheus

prometheus/rbac# ls
grant_serviceAccount.sh  prometheus_rbac.yaml
# create the RBAC objects first:
prometheus/rbac# kubectl create -f prometheus_rbac.yaml 
clusterrolebinding "prometheus-k8s" created
clusterrolebinding "kube-state-metrics" created
clusterrole "kube-state-metrics" created
serviceaccount "kube-state-metrics" created
clusterrolebinding "prometheus" created
clusterrole "prometheus" created
serviceaccount "prometheus-k8s" created
prometheus/rbac#

5. Create the Prometheus deployment and service

prometheus/prometheus# ls
configmap.yaml  deployment.yaml  prometheus-rules.yaml  service.yaml
prometheus/prometheus# 
One thing to note in configmap.yaml: from Kubernetes 1.7 on, cAdvisor metrics for pods are fetched through the kubelet's port 4194.
Pay attention to the section below: it scrapes cAdvisor data, which must go through the kubelet's port 4194, so the kubelet has to be listening there; cAdvisor sits behind port 4194 and exposes metrics for the containers in each pod.
prometheus/prometheus# cat configmap.yaml
 # https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L37
      - job_name: 'kubernetes-nodes'
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - source_labels: [__address__]
            regex: '(.*):10250'
            replacement: '${1}:10255'
            target_label: __address__
      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc.cluster.local:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}:4194/proxy/metrics

      # https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L79

prometheus-rules.yaml is its rules file (alerting/recording rules)

deployment.yaml and service.yaml are the deployment files; it is advisable to raise the resource limits in the Deployment somewhat, as sketched below
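You can either edit the limits in deployment.yaml before creating it, or, once the Deployment exists, raise them in place with kubectl (the values here are only illustrative):

# bump requests/limits on the prometheus-core Deployment
kubectl -n monitoring set resources deployment prometheus-core --requests=cpu=500m,memory=1Gi --limits=cpu=1,memory=2Gi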

Now create everything under the prometheus directory:

prometheus/prometheus# kubectl create -f ./
configmap "prometheus-core" created
deployment "prometheus-core" created
configmap "prometheus-rules" created
service "prometheus" created
prometheus/prometheus# 

prometheus/prometheus# kubectl get deployment,pod -n monitoring 
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/grafana-core      1         1         1            1           16m
deploy/prometheus-core   1         1         1            1           1m

NAME                                  READY     STATUS    RESTARTS   AGE
po/grafana-core-9c7f66868-wm68j       1/1       Running   0          16m
po/prometheus-core-6dc6777c5b-5nc7j   1/1       Running   0          1m
A brief summary of what the Prometheus deployment created:

    Deployment= prometheus-core   replicas: 1    containers=prometheus   image: prom/prometheus:v1.7.0    containerPort: 9090 (web UI)
    Service    name: prometheus   NodePort --> port: 9090 (web UI)
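The Prometheus HTTP API is reachable through that NodePort (37318 in the service listing further below; NodeIP again stands in for any node's address). The up metric makes a handy smoke test, since every healthy scrape target reports up with value 1:

# query which scrape targets Prometheus currently sees as alive
curl -s 'http://NodeIP:37318/api/v1/query?query=up'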

6. Prometheus itself is deployed; now deploy its agents, i.e. the collectors:

Prometheus/prometheus# ls node-directory-size-metrics/
daemonset.yaml
Prometheus/prometheus# ls kube-state-metrics/
deployment.yaml  service.yaml
Prometheus/prometheus# ls node-exporter/
exporter-daemonset.yaml  exporter-service.yaml
Prometheus/prometheus# 
# two of the three run as DaemonSets

Prometheus/prometheus# kubectl create -f node-exporter/ -f kube-state-metrics/ -f node-directory-size-metrics/
daemonset "prometheus-node-exporter" created
service "prometheus-node-exporter" created
deployment "kube-state-metrics" created
service "kube-state-metrics" created
daemonset "node-directory-size-metrics" created
Prometheus/prometheus# 

Prometheus/prometheus# kubectl get deploy,pod,svc -n monitoring 
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/grafana-core         1         1         1            1           26m
deploy/kube-state-metrics   2         2         2            2           1m
deploy/prometheus-core      1         1         1            1           11m

NAME                                     READY     STATUS    RESTARTS   AGE
po/grafana-core-9c7f66868-wm68j          1/1       Running   0          26m
po/kube-state-metrics-694fdcf55f-bqcp8   1/1       Running   0          1m
po/kube-state-metrics-694fdcf55f-nnqqd   1/1       Running   0          1m
po/node-directory-size-metrics-n9wx7     2/2       Running   0          1m
po/node-directory-size-metrics-ppscw     2/2       Running   0          1m
po/prometheus-core-6dc6777c5b-5nc7j      1/1       Running   0          11m
po/prometheus-node-exporter-kchmb        1/1       Running   0          1m
po/prometheus-node-exporter-lks5m        1/1       Running   0          1m

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
svc/grafana                    NodePort    10.254.231.25   <none>        3000:30161/TCP   26m
svc/kube-state-metrics         ClusterIP   10.254.156.51   <none>        8080/TCP         1m
svc/prometheus                 NodePort    10.254.239.90   <none>        9090:37318/TCP   10m
svc/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         1m
Prometheus/prometheus#

--------
Prometheus/prometheus# kubectl get pod -o wide -n monitoring 
NAME                                  READY     STATUS    RESTARTS   AGE       IP             NODE
prometheus-node-exporter-kchmb        1/1       Running   0          4m        10.3.1.16      10.3.1.16
prometheus-node-exporter-lks5m        1/1       Running   0          4m        10.3.1.17      10.3.1.17

# these two are the exporters; as a DaemonSet they run on each of the two nodes respectively, so all the data can be collected.

With the deployment done as above, here is a brief summary in my own words:

 node-exporter/exporter-daemonset.yaml file:
       DaemonSet=prometheus-node-exporter
          containers: name: prometheus-node-exporter    image: prom/node-exporter:v0.14.0
          containerPort: 9100   hostPort: 9100  hostNetwork: true    # it uses the host's port 9100

        Prometheus/prometheus/node-exporter# kubectl get  daemonset,pod -n monitoring 
        NAME                             DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
        ds/node-directory-size-metrics   2         2         2         2            2           <none>          16h
        ds/prometheus-node-exporter      2         2         2         2            2           <none>          16h
           Because it is a DaemonSet, two corresponding prometheus-node-exporter Pods are running

      Service=prometheus-node-exporter   clusterIP: None   port: 9100  type: ClusterIP   # it has no clusterIP (headless service)

    # kubectl get  service -n monitoring 
    NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         16h
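Since the exporter binds to host port 9100, each node's metrics can be read directly (node IPs as in the pod listing earlier):

# sample the node-level metrics exposed by node-exporter
curl -s http://10.3.1.16:9100/metrics | head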
kube-state-metrics/deployment.yaml file:
      Deployment=kube-state-metrics replicas: 2   containers-->name: kube-state-metrics  image: gcr.io/google_containers/kube-state-metrics:v0.5.0 
                 containerPort: 8080

      Service     name: kube-state-metrics   port: 8080  # no external mapping
                                 #kubectl get deployment,pod,svc -n monitoring                               
            NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
            deploy/kube-state-metrics   2         2         2            2           16h

            NAME                                     READY     STATUS    RESTARTS   AGE
            po/kube-state-metrics-694fdcf55f-2mmd5   1/1       Running   0          11h
            po/kube-state-metrics-694fdcf55f-bqcp8   1/1       Running   0          16h

            NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
            svc/kube-state-metrics         ClusterIP   10.254.156.51   <none>        8080/TCP         16h
node-directory-size-metrics/daemonset.yaml file:
        # being a DaemonSet it defines no replica count and runs directly on every node; note that it creates no service
      DaemonSet : name: node-directory-size-metrics  
                  containers-->name: read-du  image: giantswarm/tiny-tools   mountPath: /mnt/var   mountPath: /tmp
                  containers--> name: caddy    image: dockermuenster/caddy:0.9.3 containerPort: 9102
                               mountPath: /var/www   hostPath /var

        kubectl get daemonset,pod,svc -n monitoring 
        NAME                             DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
        ds/node-directory-size-metrics   2         2         2         2            2           <none>          16h


        NAME                                     READY     STATUS    RESTARTS   AGE
        po/node-directory-size-metrics-n9wx7     2/2       Running   0          16h
        po/node-directory-size-metrics-ppscw     2/2       Running   0          16h

        NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
                     there is no service for node-directory-size-metrics

With that, Prometheus is fully deployed; finally, look at the ports it exposes:

Prometheus/prometheus# kubectl get svc -o wide -n monitoring 
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE       SELECTOR
grafana                    NodePort    10.254.231.25   <none>        3000:30161/TCP   31m       app=grafana,component=core
kube-state-metrics         ClusterIP   10.254.156.51   <none>        8080/TCP         6m        app=kube-state-metrics
prometheus                 NodePort    10.254.239.90   <none>        9090:37318/TCP   16m       app=prometheus,component=core
prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         6m        app=prometheus,component=node-exporter
Prometheus/prometheus#

7. Accessing and using Prometheus

As shown above, Grafana's NodePort is 30161, so NodeIP:30161 opens Grafana; the default credentials are admin/admin

[figure]

After logging in, add a data source:

[figure]

Add Prometheus as a data source:

The parameters for using Prometheus as the data source are shown below:

[figure]

Once added, import the dashboard templates:

[figures]

Deployment complete.

Setting Up a Local Kubernetes Cluster with minikube

Kubernetes (k8s) is an open-source platform for automating container operations; on top of it you can deploy containers, schedule resources, and scale clusters. If you have deployed containers with Docker before, you can think of Docker as a component Kubernetes uses underneath: Kubernetes is a layer above Docker that makes managing a Docker cluster much easier. Today we use minikube to deploy a single-machine Kubernetes cluster, to get a first feel for k8s.

Installing Docker

First install Docker. We won't go into detail here, as there is plenty of material online; you can follow the official installation docs:

Mac: https://docs.docker.com/docker-for-mac/install/

Ubuntu: https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/

CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/

If all of the above fail, you can also download the binary directly and start Docker from it: https://docs.docker.com/engine/installation/linux/docker-ce/binaries/

Installing minikube

Mac

# if cask is not installed, search for how to install cask with brew
brew cask install minikube

minikube -h

Linux

# download version v0.24.1
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.24.1/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

# you can also download the latest version, but it may not match the environment in this article, so beware of pitfalls
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

minikube -h

Installing kubectl

kubectl is the Kubernetes client; through it you perform container-management operations similar to docker run

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

kubectl -h

Starting it all up

Start minikube

sudo minikube start

The first start downloads localkube; the download may fail with a message like the one below, in which case just retry a few times

Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
 64.70 MB / 140.01 MB [====================>-----------------------]  46.21% 14s
E0105 14:06:03.884826   10434 start.go:150] Error starting host: Error attempting to cache minikube ISO from URL: Error downloading Minikube ISO: failed to download: failed to download to temp file: failed to copy contents: read tcp 10.0.2.15:47048->172.217.24.16:443: read: connection reset by peer.

================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
    minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:

If the download succeeds but you get an error such as VBoxManage not found, like this:

Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
 140.01 MB / 140.01 MB [============================================] 100.00% 0s
E0105 14:10:00.035369   10474 start.go:150] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path.

 Retrying.
E0105 14:10:00.035780   10474 start.go:156] Error starting host:  Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
    minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:

The fix is to install VirtualBox (on Windows or Mac) and start again. On Linux you can instead start minikube with the command below, in which case VirtualBox is not needed.

minikube normally needs a virtual machine to initialize the Kubernetes environment, but Linux is the exception: you can append the --vm-driver=none flag to use the host environment directly; see https://github.com/kubernetes/minikube#quickstart

# Linux only: start without a VM
sudo minikube start --vm-driver=none

# on Mac or Windows, install VirtualBox and then run start again
sudo minikube start

If a VM is available, or you used the --vm-driver=none flag, and the download completes, a successful start looks like this

Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 148.25 MB / 148.25 MB [============================================] 100.00% 0s
 0 B / 65 B [----------------------------------------------------------]   0.00%
 65 B / 65 B [======================================================] 100.00% 0sSetting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
    The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks

When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions.  An example of this is below:

    sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
    sudo chown -R $USER $HOME/.kube
    sudo chgrp -R $USER $HOME/.kube

    sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
    sudo chown -R $USER $HOME/.minikube
    sudo chgrp -R $USER $HOME/.minikube

This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.

Starting a container service

# kube-nginx999 is the name to create; nginx:latest is the image to use; --port=80 exposes the container's port 80
sudo kubectl run kube-nginx999 --image=nginx:latest --port=80

> deployment "kube-nginx999" created

Check the status

sudo kubectl get pods

NAME                             READY     STATUS              RESTARTS   AGE
nginx999-55f47cb99-46nm8         1/1       containerCreating   0          38s

Wait a minute or so; if your service stays in ContainerCreating with no progress, instance creation has hit a problem, and you can check the logs like this

sudo minikube logs

If the log contains failed pulling image…, the image pull failed and so the service could not be created. Why? The GFW. The service failed while pulling the gcr.io/google_containers/pause-amd64:3.0 image it depends on, with the error below.

Jan 05 03:52:58 minikube localkube[3624]: E0105 03:52:58.952990    3624 kuberuntime_manager.go:632] createPodSandbox for pod "nginx666-864b85987c-kvdpb_default(b0cc687d-f1cb-11e7-ba05-080027e170dd)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Workaround: substitute a local image

The idea is to pull the image from an Alibaba Cloud mirror, then tag the local copy with the gcr.io name minikube expects, so it stands in for the remote image

# pull the Alibaba Cloud mirror image
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

# tag it locally as gcr.io/google_containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0

Restart the service

Add the --image-pull-policy=IfNotPresent flag, which prefers the local image over pulling from the remote registry

sudo kubectl run kube-nginx999 --image=nginx:latest --port=80 --image-pull-policy=IfNotPresent

If it complains that the name already exists, rerun with a different name. The status should now be Running, as below, meaning creation succeeded; the container is not reachable yet, though

sudo kubectl get pods

NAMESPACE     NAME                             READY     STATUS             RESTARTS   AGE
default       kube-nginx999-77867567f5-48czx   1/1       Running            2          16h

Exposing the service

sudo kubectl expose deployment kube-nginx999 --type=NodePort

> service "kube-nginx999" exposed

Get the service address

sudo minikube service kube-nginx999 --url

> http://127.0.0.1:30690

The address printed above is the address of the nginx service; visiting http://127.0.0.1:30690 brings up the nginx welcome page, so the service started successfully!

PS: visiting http://localhost:30690 does not work.

The dashboard admin UI

The dashboard is the container-management UI that ships with Kubernetes, a visual interface for machine load, cluster management, scaling, configuration data, and related operations

Start the dashboard

# prints the admin UI address
sudo minikube dashboard --url

# or use the form below, which should open the default browser automatically; mine always failed to, which is fine, just open the URL yourself
sudo minikube dashboard

The first run, though, reports one of the following two errors

# 1
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: services "kubernetes-dashboard" not found

# 2
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...

Checking the log reveals the same kind of error as with pause, i.e. image pulls failing. The fix, below, is to pull every image Kubernetes needs from the Alibaba mirrors and tag them as gcr.io/… so minikube never downloads from the remote registry

If you are unsure what a tag should be renamed to, run sudo grep 'image:' -R /etc/kubernetes to see the image names and versions required by default, search for and download the counterparts at https://dev.aliyun.com/search.html, then tag them as defined in that configuration. It is even fine to download, say, 1.1 from Alibaba and retag it as 1.2; a few minor versions of drift will not matter much.

docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.7.1
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.7.1 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.0

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager:v6.4-beta.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager:v6.4-beta.2 gcr.io/google-containers/kube-addon-manager:v6.4-beta.2

docker pull registry.cn-shenzhen.aliyuncs.com/gcrio/k8s-dns-kube-dns-amd64:latest
docker tag registry.cn-shenzhen.aliyuncs.com/gcrio/k8s-dns-kube-dns-amd64:latest gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5

docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5

docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/k8s-dns-sidecar-amd64:1.14.5
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/k8s-dns-sidecar-amd64:1.14.5 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v1.8.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v1.8.1 gcr.io/k8s-minikube/storage-provisioner:v1.8.1

Then restart minikube

sudo minikube stop
sudo minikube start [--vm-driver=none] # on Linux without VirtualBox, include the bracketed flag

Run it again

sudo minikube dashboard --url

> http://127.0.0.1:30000/

Visit http://127.0.0.1:30000/ to see the admin UI

[figure]

Closing notes

If tool downloads keep erroring out, it is almost certainly the GFW. If you have a local shadowsocks proxy, run the command below in the terminal so that commands like curl also go through the proxy, speeding up downloads

export http_proxy='socks5://127.0.0.1:1080'

One gotcha: after this, requests to 127.0.0.1 also go through the proxy; when that bites, just switch to a terminal tab where the proxy is not set.
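To turn the proxy off again in the current shell, rather than opening a new tab, simply unset the variable:

# stop routing this shell's downloads through the proxy
unset http_proxy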

Deploying Applications to Kubernetes with Jenkins

[Editor's note] Based on a Kubernetes usage survey conducted last May, this article describes the state of deploying applications to Kubernetes with Jenkins as the continuous-integration tool, and offers a useful reference for choosing CI/CD tools.

The new stack keeps delivering content around the Kubernetes open-source container-orchestration engine. This week we report on how applications are actually deployed to Kubernetes. When choosing among container orchestration, continuous-delivery pipelines, and configuration-management tools, respondents to our May 2017 Kubernetes survey frequently said they use the orchestration framework to deploy applications. While some once imagined Kubernetes as a Swiss Army knife that can do anything, in practice they hold a more nuanced view of Kubernetes' role in application deployment.

Asked more directly whether they use Jenkins to deploy applications to Kubernetes, 45% said yes, and another 36% of Kubernetes users deploy with other continuous-deployment (CD) pipelines, GitLab CI and CircleCI being mentioned most often. Note that the nature of the survey question means these figures cannot be used to determine market share; overall adoption of the other tools may be higher than the chart below suggests.

[figure]

That said, Jenkins remains the most commonly used continuous-integration (CI) tool. Future studies will not ask whether Jenkins is used, but how central it is to an organization's pipeline. In many cases different teams within the same company use competing products. In others, Jenkins is a small component in a larger workflow, with another vendor's tool as the focal point of the CD pipeline diagram.

Digging deeper, we find that small companies (two to one hundred employees) are less likely than average to use Jenkins (38% vs 45%), possibly because they have never had a large organization's need for deployment workflows. This is consistent with a trend we wrote about last week: teams of different sizes tend to use different CI/CD tools.

Respondents see automated application deployment as a key benefit of a mature Kubernetes. Many described how delivery pipelines tie into the larger organizational changes Kubernetes brings. As one put it: "In our early stages, improving developer workflow, getting developers to think about how their code runs in production, was the most important thing we wanted to achieve with Kubernetes."

We know that 65% of production users run application-development tools on Kubernetes. In this respect, Kubernetes is essential for continuous deployment and for delivering "infrastructure as code."

Kubernetes 1.8 Highly Available Installation (Part 6)

6. Install kube-dns

Download kube-dns.yaml

# fetch the file
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kube-dns.yaml.sed
mv kube-dns.yaml.sed kube-dns.yaml

# edit the configuration
sed -i 's/$DNS_SERVER_IP/10.96.0.12/g' kube-dns.yaml 
sed -i 's/$DNS_DOMAIN/cluster.local/g' kube-dns.yaml

# create it
kubectl create -f kube-dns.yaml
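Once the pods come up (and assuming the pod network is already working), DNS can be checked quickly; the busybox test pod and its name here are just an example:

# the kube-dns pods should reach Running
kubectl get pods -n kube-system -l k8s-app=kube-dns

# resolve the kubernetes service from inside the cluster
kubectl run -it --rm --restart=Never dns-test --image=busybox -- nslookup kubernetes.default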

kube-dns.yaml

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# Warning: This is a file generated from the base underscore template file: kube-dns.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: foxchan/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=$DNS_DOMAIN.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: foxchan/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/$DNS_DOMAIN/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: foxchan/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.$DNS_DOMAIN,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.$DNS_DOMAIN,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

Kubernetes 1.8 Highly Available Installation (Part 5)

5. Install the network component Calico

Before installing, confirm that the kubelet configuration already includes --network-plugin=cni; if not, add it to the kubelet configuration file

Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin

5.1 Install RBAC first

Official URL
https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml

calico-rbac.yaml

# Calico Version v2.6.1
# https://docs.projectcalico.org/v2.6/releases#v2.6.1

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
  namespace: kube-system
rules:
  - apiGroups:
    - ""
    - extensions
    resources:
      - pods
      - namespaces
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

5.2 Create calico.yaml

Official URL
https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml

See the parameter notes below for the configuration changes

# Calico Version v2.6.1
# https://docs.projectcalico.org/v2.6/releases#v2.6.1
# This manifest includes the following component versions:
#   calico/node:v2.6.1
#   calico/cni:v1.11.0
#   calico/kube-controllers:v1.0.0

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: " :2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "cniVersion": "0.1.0",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "mtu": 1500,
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      serviceAccountName: calico-node
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v2.6.1
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              value: "1440"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: "can-reach=www.baidu.com"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.11.0
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: quay.io/calico/kube-controllers:v1.0.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints

---

# This deployment turns off the old "policy-controller". It should remain at 0 replicas, and then
# be removed entirely once the new kube-controllers deployment has been deployed above.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
spec:
  # Turn this deployment off in favor of the kube-controllers deployment above.
  replicas: 0
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      hostNetwork: true
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-controllers:v1.0.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

Parameter notes:

  • etcd_endpoints
    Change this to your own etcd cluster

  • CALICO_IPV4POOL_CIDR
    Calico's IP pool; it must not clash with the cluster CIDR or with any other IP ranges used by the machines, e.g. use 10.10.0.0/16

  • IP autodetection methods
    On machines with multiple NICs, installing calico-node can error out: Calico's default IP detection method is first-found, and the IP it picks may not be the one you need, so networking fails and node registration fails

# calico error log
Skipping datastore connection test
IPv4 address 10.96.0.1 discovered on interface kube-ipvs0
No AS number configured on node resource, using global value

You need to edit calico.yaml and change the IP detection method to autodetect; mind the ordering, modified as follows

- name: IP
  value: "autodetect"
- name: IP_AUTODETECTION_METHOD
  value: "can-reach=www.baidu.com"

Notes on the IP_AUTODETECTION_METHOD parameter
Official docs: https://docs.projectcalico.org/v2.6/reference/node/configuration

  • use the interface that can reach a given IP
    can-reach=61.135.169.121

  • use the interface that can reach a given domain
    can-reach=www.baidu.com

  • use a specific interface
    interface=ethx

All nodes should now be in the Ready state

[root@kvm-master network]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
node2        Ready     <none>    23h       v1.8.0
node1        Ready     <none>    1d        v1.8.0

5.3 Install calicoctl to manage the Calico network

calicoctl.yaml

# Calico Version v2.6.1
# https://docs.projectcalico.org/v2.6/releases#v2.6.1
# This manifest includes the following component versions:
#   calico/ctl:v1.6.1

apiVersion: v1
kind: Pod
metadata:
  name: calicoctl
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: calicoctl
    image: quay.io/calico/ctl:v1.6.1
    command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
    env:
    - name: ETCD_ENDPOINTS
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_endpoints

Note: when calicoctl runs as a pod, the calicoctl node subcommands cannot be used
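Read-style commands do work through kubectl exec, for example (assuming the calicoctl pod defined above; the official v2.6 manifests place the binary at /calicoctl inside the calico/ctl image):

# list the configured IP pools through the calicoctl pod
kubectl exec -ti -n kube-system calicoctl -- /calicoctl get ippool -o wide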

Kubernetes 1.8 Highly Available Installation (Part 4)

4. Install the Kubernetes nodes

A Kubernetes Node needs to run the following components:

  • Docker; docker-1.12.6 is the version installed here
  • kubelet
  • kube-proxy, installed as a DaemonSet

4.1 Install kubelet and CNI

Install the rpm packages

yum localinstall -y kubelet-1.8.0-1.x86_64.rpm kubernetes-cni-0.5.1-1.x86_64.rpm

Create the ClusterRoleBinding on any one of the masters

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

4.2 Sync the certificates and config files to this machine

rsync -avSH rsync://master_ip/k8s/pki /etc/kubernetes/
rsync -avSH rsync://master_ip/k8s/bootstrap.kubeconfig /etc/kubernetes/

4.3 Configure the kubelet

/etc/systemd/system/kubelet.service.d/kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.12 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.pem"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_EXTRA_ARGS=--v=2 --pod-infra-container-image=foxchan/pause-amd64:3.0 --fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

After the changes, start the kubelet

systemctl daemon-reload
systemctl start kubelet

Because TLS bootstrapping is in use, the kubelet does not join the cluster right after starting; it first applies for a certificate.
Check the log:

Oct 24 16:45:43  kubelet[240975]: I1024 16:45:43.566069  240975 bootstrap.go:57] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file

Check the CSRs; the new one is still Pending

[root@kvm-master manifests]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-VJFRWBpJqhe3lpLKPULmJ9wfYeF0xoMQF8VzfcvYyqw   2h        kubelet-bootstrap   Approved,Issued
node-csr-yCn3MIUz-luhqwEVva1haugCmoz48ykxU7x4er3pfQs   44s       kubelet-bootstrap   Pending

Approve the certificate request on the master

kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

The node has now joined the cluster

[root@kvm-master manifests]# kubectl get nodes
NAME            STATUS     ROLES     AGE       VERSION
node2   NotReady   <none>    5m        v1.8.0
node1    Ready      <none>    1h        v1.8.0

Because the kubelet is configured with network-plugin=cni but no CNI plugin is installed yet, the node's status will be NotReady. If you don't want to see this error, or don't need networking, just remove network-plugin=cni from the kubelet configuration file.

Oct 25 15:48:15 localhost kubelet: W1025 15:48:15.584765  240975 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Oct 25 15:48:15 localhost kubelet: E1025 15:48:15.585057  240975 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

4.4 Configure kube-proxy

Create the kube-proxy related files

Run this on the master

kubectl apply -f kube-proxy-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: system:kube-proxy
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: kube-proxy
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:node-proxier
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f kubeproxy-ds.yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - /usr/local/bin/kube-proxy
          --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
          --cluster-cidr=10.96.0.0/12
          --conntrack-max-per-core=655360
          --conntrack-min=655360
          --conntrack-tcp-timeout-established=1h
          --conntrack-tcp-timeout-close-wait=60s
          --v=2 1>>/var/log/kube-proxy.log 2>&1
        name: kube-proxy
        image: foxchan/kube-proxy-amd64:v1.8.1
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /etc/kubernetes/
          name: k8s
        - mountPath: /var/log/kube-proxy.log
          name: logfile
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: modprobe
      hostNetwork: true
      serviceAccountName: kube-proxy
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /etc/kubernetes
        name: k8s
      - hostPath:
          path: /var/log/kube-proxy.log
        name: logfile
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: modprobe
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate

Check that the proxy is working

[root@kvm-master kubeproxy]# kubectl get pods -n kube-system
NAME               READY     STATUS    RESTARTS   AGE
kube-proxy-rw2bt   1/1       Running   0          1m
kube-proxy-sct84   1/1       Running   0          1m
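kube-proxy programs iptables on each node, so another quick sanity check is to look for its chains on any node:

# kube-proxy should have installed KUBE-SERVICES rules
iptables-save | grep KUBE-SERVICES | head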

Kubernetes 1.8 Highly Available Installation (Part 3)

3. Install the master components (etcd/api-server/controller/scheduler)

3.1 Prepare the master machines (rpm packages and kubelet)

Decide which machines will be masters, then install the rpm packages and configure the kubelet on them

Note:
All of the images have been pushed to my Docker Hub repository; download them from there if needed

https://hub.docker.com/u/foxchan/

Install the rpm packages

yum localinstall -y kubectl-1.8.0-1.x86_64.rpm kubelet-1.8.0-1.x86_64.rpm kubernetes-cni-0.5.1-1.x86_64.rpm

Create the manifest directory

mkdir -p /etc/kubernetes/manifests

Modify the kubelet configuration

/etc/systemd/system/kubelet.service.d/kubelet.conf

[Service]
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.12 --cluster-domain=cluster.local"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_EXTRA_ARGS=--v=2  --pod-infra-container-image=foxchan/google_containers/pause-amd64:3.0 --fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS  $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

Note:

--cluster-dns=10.96.0.12: plan this IP yourself, and keep it consistent with the IP range used when creating the certificates

--fail-swap-on=false: starting with 1.8 the kubelet refuses to start when swap is enabled on the machine; the flag defaults to true

Start the kubelet

systemctl daemon-reload
systemctl restart kubelet

3.2 Install the etcd cluster

Create etcd.yaml and place it in /etc/kubernetes/manifests

Note:

Create the log files beforehand so they can be mounted

/var/log/kube-apiserver.log
/var/log/kube-etcd.log
/var/log/kube-controller-manager.log
/var/log/kube-scheduler.log

# create the directories required by the volume mounts below
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd-server
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - image: foxchan/google_containers/etcd-amd64:3.0.17
    name: etcd-container
    command:
    - /bin/sh
    - -c
    - /usr/local/bin/etcd
      --name=etcd0
      --initial-advertise-peer-urls=http://master_IP:2380
      --listen-peer-urls=http://master_IP:2380
      --advertise-client-urls=http://master_IP:2379
      --listen-client-urls=http://master_IP:2379,http://127.0.0.1:2379
      --data-dir=/var/etcd/data
      --initial-cluster-token=emar-etcd-cluster
      --initial-cluster=etcd0=http://master_IP1:2380,etcd1=http://master_IP2:2380,etcd2=http://master_IP3:2380
      --initial-cluster-state=new 1>>/var/log/kube-etcd.log 2>&1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/log/kube-etcd.log
      name: logfile
    - mountPath: /var/etcd
      name: varetcd
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kube-etcd.log
    name: logfile
  - hostPath:
      path: /var/etcd/data
    name: varetcd
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/kubernetes/
    name: k8s
status: {}

Repeat steps 3.1-3.2 on all three master machines.

Parameter notes

  • --name=etcd0: every etcd member's name must be unique

  • client-urls: change to the corresponding machine's IP

The kubelet periodically scans the manifests directory and launches the manifests it finds there
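With all three members up, cluster health can be checked over etcd's HTTP API (point the endpoint at any master):

# a healthy member answers {"health": "true"}
curl http://127.0.0.1:2379/health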

3.3 Install kube-apiserver

Create kube-apiserver.yaml and place it in /etc/kubernetes/manifests

# create the directories required by the volume mounts below
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-apiserver
      --kubelet-https=true
      --enable-bootstrap-token-auth=true
      --token-auth-file=/etc/kubernetes/token.csv
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/kubernetes.pem
      --tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem
      --client-ca-file=/etc/kubernetes/pki/ca.pem
      --service-account-key-file=/etc/kubernetes/pki/ca-key.pem
      --insecure-port=9080
      --secure-port=6443
      --insecure-bind-address=0.0.0.0
      --bind-address=0.0.0.0
      --advertise-address=master_IP
      --storage-backend=etcd3
      --etcd-servers=http://master_IP1:2379,http://master_IP2:2379,http://master_IP3:2379
      --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,NodeRestriction
      --allow-privileged=true
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --authorization-mode=Node,RBAC
      --v=2 1>>/var/log/kube-apiserver.log 2>&1
    image: foxchan/google_containers/kube-apiserver-amd64:v1.8.1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
    - mountPath: /var/log/kube-apiserver.log
      name: logfile
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
  - hostPath:
      path: /var/log/kube-apiserver.log
    name: logfile
status: {}

Parameter notes:

  • --advertise-address: change to the IP of the corresponding machine
  • --enable-bootstrap-token-auth: enables the Bootstrap Token authenticator
  • --authorization-mode: the Node mode was added because, since 1.8, the system:node role is no longer automatically granted to the system:nodes group
  • for the same reason, NodeRestriction was added to --admission-control

Verify: the apiserver is now responding (controller-manager reports Unhealthy because it has not been installed yet):

kubectl --server=https://master_IP:6443 \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
get componentstatuses
NAME                 STATUS      MESSAGE                                                                                        ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused   
scheduler            Healthy     ok                                                                                             
etcd-1               Healthy     {"health": "true"}                                                                             
etcd-0               Healthy     {"health": "true"}                                                                             
etcd-2               Healthy     {"health": "true"}

3.4 Install kube-controller-manager

Create kube-controller-manager.yaml and place it in /etc/kubernetes/manifests.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-controller-manager 
      --master=127.0.0.1:9080
      --controllers=*,bootstrapsigner,tokencleaner
      --root-ca-file=/etc/kubernetes/pki/ca.pem
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem
      --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem
      --leader-elect=true 
      --v=2 1>>/var/log/kube-controller-manager.log 2>&1
    image: foxchan/google_containers/kube-controller-manager-amd64:v1.8.1
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /var/log/kube-controller-manager.log
      name: logfile
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /var/log/kube-controller-manager.log
    name: logfile
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
status: {}

Parameter notes:

  • --controllers=*,bootstrapsigner,tokencleaner: enables the bootstrap-token signing and cleanup controllers
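
After the static pod starts, the health endpoint answers locally (a quick check; port 10252 matches the liveness probe above):

curl -s http://127.0.0.1:10252/healthz
# expected output: ok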

3.5 Install kube-scheduler

3.5.1 Configure scheduler.conf

cd /etc/kubernetes
export KUBE_APISERVER="https://master_VIP:6443"

# set-cluster
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=scheduler.conf

# set-credentials
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --kubeconfig=scheduler.conf

# set-context
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=scheduler.conf

# set default context
kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=scheduler.conf

After scheduler.conf is generated, distribute it to the /etc/kubernetes directory on each master node, as sketched below.
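
A minimal distribution sketch (master_IP2 and master_IP3 stand in for the other two masters):

scp /etc/kubernetes/scheduler.conf root@master_IP2:/etc/kubernetes/
scp /etc/kubernetes/scheduler.conf root@master_IP3:/etc/kubernetes/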

3.5.2 Create kube-scheduler.yaml and place it in /etc/kubernetes/manifests

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-scheduler 
      --address=127.0.0.1
      --leader-elect=true 
      --kubeconfig=/etc/kubernetes/scheduler.conf 
      --v=2 1>>/var/log/kube-scheduler.log 2>&1
    image: foxchan/google_containers/kube-scheduler-amd64:v1.8.1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /var/log/kube-scheduler.log
      name: logfile
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kube-scheduler.log
    name: logfile
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
    name: kubeconfig
status: {}

At this point kube-scheduler is deployed on all three master nodes; the instances elect a leader, which does the actual scheduling work.

Check the kube-scheduler log:

 tail -f /var/log/kube-scheduler.log
I1024 05:20:44.704783       7 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"1201fc85-b7e1-11e7-9792-525400b406cc", APIVersion:"v1", ResourceVersion:"87114", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kvm-sh002154 became leader

Check the status of the Kubernetes master cluster's core components; all now report healthy:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

kubernetes 1.8 High-Availability Installation (Part 2)

2. Set up kubeconfig

2.1 Set up kubectl's kubeconfig (admin.conf)

# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://master_VIP:6443 \
  --kubeconfig=admin.conf

# set client authentication parameters
kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --kubeconfig=admin.conf

# set context parameters
kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=admin.conf

# set the default context
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=admin.conf
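
To make plain kubectl commands pick this config up without passing --kubeconfig each time, a common follow-up (a sketch):

mkdir -p ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes    # now works without extra flags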

2.2 Configure bootstrap.kubeconfig

# generate the configuration
cd /etc/kubernetes/
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
export KUBE_APISERVER="https://master_VIP:6443"
echo "Token: ${BOOTSTRAP_TOKEN}"

# generate the token file
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
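
For the bootstrap token to actually let kubelets request client certificates, the kubelet-bootstrap user is typically also bound to the system:node-bootstrapper role. This step is not in the original write-up, so treat the sketch below as an assumption about your setup:

# allow the bootstrap user to submit certificate signing requests
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap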

2.3 Generate kube-proxy.kubeconfig

# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
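
A quick sanity check on any of the generated kubeconfig files (a sketch; embedded certificate data is shown as REDACTED):

kubectl config view --kubeconfig=kube-proxy.kubeconfig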

kubernetes 1.8 High-Availability Installation (Part 1)

1. Create certificates

1.1 Install the cfssl tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
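
To confirm the tool is on the PATH (a quick check):

cfssl version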

1.2 Generate the CA certificate

Create ca-config.json:

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Create ca-csr.json:

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

List the generated CA files:

ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
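
To inspect what was issued (a sketch using cfssl's built-in inspector):

cfssl certinfo -cert ca.pem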

1.3 Generate the kubernetes certificate

Create kubernetes-csr.json:

{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "10.96.0.1",
        "master_ip1",
        "master_ip2",
        "master_ip3",
        "master_VIP",
        "localhost",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

master_ip1/2/3 and master_VIP: the machines that will run the master components, and the virtual IP in front of them.

10.96.0.1: the first IP of the service cluster IP range; adjust it for your environment.

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

List the generated files:

kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
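
Since this certificate must cover every master IP and the in-cluster service names, it is worth confirming the SANs made it in (a sketch using openssl):

openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"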

1.4 Generate the admin certificate

cat <<EOF > admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

1.5 Create the kube-controller-manager client certificate and private key

Create controller-manager-csr.json:

{
  "CN": "system:kube-controller-manager",
  "hosts": [
      "master_ip1",
      "master_ip2",
      "master_ip3",
      "master_VIP"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}

Generate the certificate and private key:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes controller-manager-csr.json | cfssljson -bare controller-manager

1.6 Create the kube-scheduler client certificate and private key

Create scheduler-csr.json:

{
  "CN": "system:kube-scheduler",
  "hosts": [
      "master_ip1",
      "master_ip2",
      "master_ip3",
      "master_VIP"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}

Generate the certificate and private key:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare scheduler

1.7 Create the kube-proxy client certificate and private key

cat <<EOF > kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

1.8 Distribute the certificates

Distribute the certificates to each machine (a distribution sketch follows the overview below).

Overview of which component uses which files:

  • etcd: ca.pem, kubernetes-key.pem, kubernetes.pem
  • kube-apiserver: ca.pem, kubernetes-key.pem, kubernetes.pem
  • kubelet: ca.pem
  • kube-proxy: ca.pem, kube-proxy-key.pem, kube-proxy.pem
  • kubectl: ca.pem, admin-key.pem, admin.pem
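
A minimal distribution sketch (the host names and the /etc/kubernetes/pki directory layout are assumptions; worker nodes only need the files listed for kubelet and kube-proxy):

# copy the pki directory to the other masters
for host in master_IP2 master_IP3; do
  scp -r /etc/kubernetes/pki root@${host}:/etc/kubernetes/
done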

Certificate file suffixes:

  • certificate: .crt, .pem
  • private key: .key
  • certificate signing request: .csr