Upgrading a Kubernetes Cluster with kubeadm

Upgrade kubelet, kubeadm, and kubectl to the latest version (Aliyun mirror)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
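
To confirm what actually got installed (a quick check; the versions shown depend on the repo state at install time):

kubeadm version -o short
kubectl version --client --short
kubelet --version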

List the container images required by this version

$ kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.12.1
k8s.gcr.io/kube-controller-manager:v1.12.1
k8s.gcr.io/kube-scheduler:v1.12.1
k8s.gcr.io/kube-proxy:v1.12.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

List the available versions

$ yum list --showduplicates | grep -E 'kubeadm|kubectl|kubelet'
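
If a specific version is wanted rather than the latest, pin it explicitly (a sketch; v1.12.1 as the example):

yum install -y kubelet-1.12.1 kubeadm-1.12.1 kubectl-1.12.1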

Pull the container images (from an Aliyun mirror of gcr.io)

echo ""
echo "=========================================================="
echo "Pull Kubernetes v1.12.1 Images from aliyuncs.com ......"
echo "=========================================================="
echo ""

MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings

## Pull the images
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.12.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.12.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.12.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.12.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-etcd:3.2.24
docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.2.2


## Re-tag with the k8s.gcr.io names
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.12.1 k8s.gcr.io/kube-scheduler:v1.12.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.12.1 k8s.gcr.io/kube-controller-manager:v1.12.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.12.1 k8s.gcr.io/kube-proxy:v1.12.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.2.2 k8s.gcr.io/coredns:1.2.2

echo ""
echo "=========================================================="
echo "Pull Kubernetes v1.12.1 Images FINISHED."
echo "into registry.cn-hangzhou.aliyuncs.com/openthings, "
echo "           by openthings@https://my.oschina.net/u/2306127."
echo "=========================================================="

echo ""

Note: alternatively, see the image script at https://www.cnblogs.com/Irving/p/9818440.html
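
The pull-and-retag steps above can also be written as a loop over the image list (a sketch using the same mirror registry and tags as the script above):

MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings
images=(kube-apiserver:v1.12.1 kube-controller-manager:v1.12.1 kube-scheduler:v1.12.1 kube-proxy:v1.12.1 etcd:3.2.24 pause:3.1 coredns:1.2.2)
for img in "${images[@]}"; do
    name=${img%%:*}; tag=${img##*:}
    docker pull ${MY_REGISTRY}/k8s-gcr-io-${name}:${tag}
    docker tag ${MY_REGISTRY}/k8s-gcr-io-${name}:${tag} k8s.gcr.io/${name}:${tag}
done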

Check the upgrade plan

$ kubeadm upgrade plan

Upgrade the master node

$ kubeadm upgrade apply v1.12.1

Upgrading the worker nodes

Upgrade kubelet, kubeadm, and kubectl to the matching version on each node and pull the matching images; a sketch follows below.
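
A minimal sketch of upgrading one worker node (the node name k8s-node1 is a placeholder; drain and uncordon run on the master, the rest on the node):

# On the master: move workloads off the node
kubectl drain k8s-node1 --ignore-daemonsets
# On the node: upgrade the packages and the kubelet configuration
yum install -y kubelet-1.12.1 kubeadm-1.12.1
kubeadm upgrade node config --kubelet-version v1.12.1
systemctl restart kubelet
# On the master: allow scheduling on the node again
kubectl uncordon k8s-node1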

Check node and pod status

$ kubectl get nodes
$ kubectl get pod --all-namespaces -o wide

Installing Kubernetes with kubeadm

Version to install

Kubernetes – 1.11.1

Machine plan

(The machine list was an image in the original post and is omitted here.)

Note: SSH key authentication has been set up between the three machines and hosts entries added, to make copying files easier.

System configuration (run on all three machines)

Disable SELinux and firewalld

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl stop firewalld && systemctl disable firewalld

Adjust kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Disable swap and comment out the swap entry in /etc/fstab

swapoff -a

Note: from version 1.8 on, Kubernetes requires system swap to be disabled; otherwise kubelet will not start.
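
To make the change survive a reboot, comment out the swap line in /etc/fstab; a one-liner sketch:

sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m    # the Swap row should show 0 after swapoff -a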

Install and start Docker (run on all three machines)

Since 1.13, Docker uses date-based version numbers and is split into a Community Edition (CE) and an Enterprise Edition (EE); 18.06, for example, was released in June 2018.

Official package location:

https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

Download and install docker-ce 17.03:

wget -c https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
wget -c https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install -y docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

Note: version 17.03 is installed here; 17.03 is the highest Docker version matched to Kubernetes 1.11.1.

Edit the Docker startup configuration:

/usr/lib/systemd/system/docker.service

# Set the listening sockets, the data directory, a registry mirror, and the systemd cgroup driver
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H 127.0.0.1:8888 -g /data/docker --registry-mirror=http://xxxx.io --exec-opt native.cgroupdriver=systemd

Start Docker:

systemctl daemon-reload && systemctl enable docker && systemctl start docker
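
It is worth verifying that Docker actually picked up the systemd cgroup driver, since a mismatch with kubelet is a common cause of startup failures:

docker info 2>/dev/null | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd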

Load the required images

By default, kubeadm pulls the required images from k8s.gcr.io during cluster initialization; if your machines can reach it directly, this step can be skipped.

On the master node:

script_dir=$(cd "$(dirname "$0")" && pwd)   # directory holding this script and the image tarballs
images=(kube-apiserver-amd64.tar kube-controller-manager-amd64.tar kube-scheduler-amd64.tar kubernetes-dashboard-amd64.tar etcd-amd64.tar coredns.tar flannel.tar kube-proxy-amd64.tar pause.tar)
for i in "${images[@]}"; do
    docker load < "${script_dir}/${i}"
done

On the worker nodes:

script_dir=$(cd "$(dirname "$0")" && pwd)   # directory holding this script and the image tarballs
images=(flannel.tar kube-proxy-amd64.tar pause.tar)
for i in "${images[@]}"; do
    docker load < "${script_dir}/${i}"
done

Image download link:

Link: https://pan.baidu.com/s/1jtT0qHpcz1WjovIBP6JOwQ password: 5xxn
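
For reference, tarballs like these can be produced on any machine that can reach k8s.gcr.io by saving the images after a pull (a sketch showing a single image; tags follow v1.11.1):

docker pull k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker save -o kube-proxy-amd64.tar k8s.gcr.io/kube-proxy-amd64:v1.11.1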

Install kubeadm (run on all three machines)

Add the yum repository:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

Install kubeadm:

# List the available versions
yum list kubeadm --showduplicates
# Install a specific version
yum install kubeadm-1.11.1

Note: installing kubeadm pulls in kubectl, kubelet, and kubernetes-cni as dependencies. kubectl is the cluster management CLI; kubelet is the agent that runs on every node and manages the node's containers, networking components, and so on.

Configure and start kubelet (run on all three machines)

Make sure kubelet and Docker use the same cgroup driver; systemd is used here.

Extra kubelet arguments:

/etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"

Start kubelet:

systemctl enable kubelet && systemctl start kubelet

Initialize the Kubernetes cluster (run on the master)

Initialize:

kubeadm init --apiserver-advertise-address 10.211.55.10 --pod-network-cidr=10.10.0.0/16  --kubernetes-version=v1.11.1

When initialization finishes you will see:

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.211.55.10:6443 --token 4zfhdz.8jue59q95hzoc7p9 --discovery-token-ca-cert-hash sha256:da756adb30db06963db480d3868b9acd02c2e38271314baf62e9d28c6629ae92

Give kubectl access to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Configure the cluster network – flannel

Kubernetes supports several CNI network drivers, including Flannel, Calico, and Weave Net.

Configure flannel:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml

If a host has more than one NIC, specify the internal interface with the --iface argument in kube-flannel.yml, along these lines:

args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth1

Check the cluster status

Component status:

kubectl get cs

Pods:

kubectl get pod --all-namespaces

Join the remaining nodes to the cluster

The init output shown above ends with a join command, but its token is time-limited and may have expired; generate a fresh join command with:

kubeadm token create --print-join-command

The output looks like:

kubeadm join 10.211.55.10:6443 --token pxlvg5.puu7a0mrhpoif16e --discovery-token-ca-cert-hash sha256:f732ceef958151c56583641048ce6a11c39f960166969fb8f782374c3b4a7570

List existing tokens:

kubeadm token list

Deploy the Kubernetes Dashboard

Download the deployment manifest:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Change the Service type to NodePort so the dashboard can be reached at node-IP:port.

Edit kubernetes-dashboard.yaml:

# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Because kube-apiserver has RBAC authorization enabled since Kubernetes 1.6, an authorization manifest is also needed:

Create kubernetes-dashboard-admin.rbac.yaml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

Labeling nodes, for reference:

kubectl label nodes <node-name> <label-key>=<label-value>
kubectl get nodes
kubectl label nodes k8s-master1 k8s-app=kubernetes-dashboard

Create the dashboard pods:

kubectl create -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard-admin.rbac.yaml

Check which NodePort the dashboard was assigned:

kubectl get svc,pod --all-namespaces | grep dashboard

Look up the token for logging in to the dashboard:

kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep kubernetes-dashboard-admin) | grep token

kubectl command completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Installing Kubernetes 1.8 on CentOS 7 with kubeadm

1. System configuration

1.1 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

1.2 Disable SELinux

setenforce 0

Edit /etc/selinux/config and set SELINUX to disabled:

SELINUX=disabled

1.3 Disable system swap

Kubernetes 1.8 requires system swap to be off; with the default configuration, kubelet will not start otherwise. Option one: lift the restriction with the kubelet flag --fail-swap-on=false. Option two: turn swap off.

swapoff -a

Edit /etc/fstab to comment out the swap mount, then confirm with free -m that swap is off.

2. Install Docker

Note: run this step on every node.

2.1 Download the Docker packages

  • Download location: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
  • Download the packages:
mkdir ~/k8s
cd ~/k8s
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

2.2 Install Docker

cd ~/k8s
yum install ./docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install ./docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
systemctl enable docker
systemctl start docker

2.3 Configure Docker

  • Enable the FORWARD chain of the iptables filter table
    Edit /lib/systemd/system/docker.service and add the following above the ExecStart= line:
ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT

So that it reads:

......
ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStart=/usr/bin/dockerd
......
  • Configure the cgroup driver
    Create /etc/docker/daemon.json with the following content:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
  • Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl status docker

3. Install Kubernetes

3.1 Install kubeadm, kubectl, and kubelet

  • Configure the package repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  • Fix routing of bridged traffic
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
  • Adjust the swappiness parameter
    Add the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.

  • Install kubeadm, kubectl, and kubelet
    ① List the available versions
yum list --showduplicates | grep -E 'kubeadm|kubectl|kubelet'

② Install a specific version

yum install kubeadm-1.8.1 kubectl-1.8.1 kubelet-1.8.1
systemctl enable kubelet
systemctl start kubelet

3.2 Initialize the cluster with kubeadm init

Note: this subsection runs on the master node only.

  • Initialize the master node
kubeadm init --kubernetes-version=v1.8.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=master.k8s.samwong.im
  • Configure a regular user to access the cluster with kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Check the cluster status
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"} 
  • Cleanup commands if initialization fails
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

3.3 Install the pod network

Note: this subsection runs on the master node only.

  • Install Flannel
[root@master ~]# cd ~/k8s
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f  kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
  • Specify the interface
    If the hosts have multiple NICs, specify the internal interface with the --iface argument in kube-flannel.yml, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<interface> to the flanneld startup arguments.
......
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
......
containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.9.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth1" ]
......
  • Check pod status
kubectl get pod --all-namespaces -o wide

3.4 Let the master node take workloads

In a kubeadm-initialized cluster, pods are not scheduled onto the master node for safety reasons. The following command lets the master node take workloads.

kubectl taint nodes node1 node-role.kubernetes.io/master-
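
To restore the default behavior later, re-apply the taint (node1 is the example node name used above):

kubectl taint nodes node1 node-role.kubernetes.io/master=:NoSchedule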

3.5 Add nodes to the Kubernetes cluster

  • Look up the master's token
kubeadm token list | grep authentication,signing | awk '{print $1}'
  • Compute the discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  • Join the node to the Kubernetes cluster
kubeadm join --token=a20844.654ef6410d60d465 --discovery-token-ca-cert-hash sha256:0c2dbe69a2721870a59171c6b5158bd1c04bc27665535ebf295c918a96de0bb1 master.k8s.samwong.im:6443
  • List the nodes in the cluster
[root@master ~]# kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
master.k8s.samwong.im   Ready     master    1d        v1.8.1

3.6 Remove a node from the Kubernetes cluster

  • On the master
kubectl drain master.k8s.samwong.im --delete-local-data --force --ignore-daemonsets
kubectl delete node master.k8s.samwong.im
  • On the node
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
  • Check the cluster nodes
kubectl get nodes

3.7 Deploy the Dashboard add-on

  • Download the Dashboard manifest
cd ~/k8s
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
  • Modify the Dashboard Service
    Edit kubernetes-dashboard.yaml and add type: NodePort to the Dashboard Service to expose it.
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  • Install the Dashboard add-on
kubectl create -f kubernetes-dashboard.yaml
  • Give the Dashboard account cluster-admin rights
    Create a ServiceAccount named kubernetes-dashboard-admin and grant it the cluster-admin role; create kubernetes-dashboard-admin.rbac.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

Apply it:

[root@master ~]# kubectl create -f kubernetes-dashboard-admin.rbac.yaml
serviceaccount "kubernetes-dashboard-admin" created
clusterrolebinding "kubernetes-dashboard-admin" created
  • Look up the kubernetes-dashboard-admin token
[root@master ~]# kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-jxq7l   kubernetes.io/service-account-token   3         22h
[root@master ~]# kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-jxq7l
Name:         kubernetes-dashboard-admin-token-jxq7l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard-admin
              kubernetes.io/service-account.uid=686ee8e9-ce63-11e7-b3d5-080027d38be0
Type:  kubernetes.io/service-account-token
Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1qeHE3bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY4NmVlOGU5LWNlNjMtMTFlNy1iM2Q1LTA4MDAyN2QzOGJlMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Ua92im86o585ZPBfsOpuQgUh7zxgZ2p1EfGNhr99gAGLi2c3ss-2wOu0n9un9LFn44uVR7BCPIkRjSpTnlTHb_stRhHbrECfwNiXCoIxA-1TQmcznQ4k1l0P-sQge7YIIjvjBgNvZ5lkBNpsVanvdk97hI_kXpytkjrgIqI-d92Lw2D4xAvHGf1YQVowLJR_VnZp7E-STyTunJuQ9hy4HU0dmvbRXBRXQ1R6TcF-FTe-801qUjYqhporWtCaiO9KFEnkcYFJlIt8aZRSL30vzzpYnOvB_100_DdmW-53fLWIGYL8XFnlEWdU1tkADt3LFogPvBP4i9WwDn81AwKg_Q
ca.crt:     1025 bytes
  • Check the Dashboard service port
[root@master k8s]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   1d
kubernetes-dashboard   NodePort    10.102.209.161   <none>        443:32513/TCP   21h
  • Access the Dashboard
    Open https://192.168.56.2:32513 in a browser.
    (The login screenshot was an image in the original post and is omitted here.)

3.8 Deploy the heapster add-on

Installing Heapster adds usage statistics and monitoring to the cluster, and charts to the Dashboard.

mkdir -p ~/k8s/heapster
cd ~/k8s/heapster
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f ./
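
Once the manifests are applied, the heapster, influxdb, and grafana pods should come up in kube-system; a quick check:

kubectl get pods -n kube-system | grep -E 'heapster|influxdb|grafana'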

4. Problems encountered

4.1 Getting online through a proxy

  • No proxy available
    Sign up for a free AWS account, create an EC2 instance, and run a Shadowsocks server on it.

  • Configure the proxy client
    Reference: https://www.zybuluo.com/ncepuwanghui/note/954160

  • Configure the Docker proxy
    ① Create the Docker service drop-in directory

mkdir -p /etc/systemd/system/docker.service.d

② Edit /etc/systemd/system/docker.service.d/http-proxy.conf and add:

[Service]
Environment="HTTP_PROXY=http://master.k8s.samwong.im:8118" "NO_PROXY=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.244.0.0/16"

③ Edit /etc/systemd/system/docker.service.d/https-proxy.conf and add:

[Service]
Environment="HTTPS_PROXY=https://master.k8s.samwong.im:8118" "NO_PROXY=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.244.0.0/16"

④ Restart the Docker service

systemctl daemon-reload && systemctl restart docker

⑤ Check that the configuration took effect

[root@master k8s]# systemctl show --property=Environment docker | more
Environment=HTTP_PROXY=http://master.k8s.samwong.im:8118 NO_PROXY=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.244.0.0/16 HTTPS_PROXY=https://master.k8s.samwong.im:8118
  • Configure the yum proxy
    ① Append the following to /etc/yum.conf:
proxy=http://master.k8s.samwong.im:8118

② Rebuild the yum cache

yum makecache
  • Configure the wget proxy
    Append the following to /etc/wgetrc:
ftp_proxy=http://master.k8s.samwong.im:8118
http_proxy=http://master.k8s.samwong.im:8118
https_proxy=http://master.k8s.samwong.im:8118
  • Configure a global proxy
    If general internet access is needed, append the following to /etc/profile:
PROXY_HOST=master.k8s.samwong.im
export all_proxy=http://$PROXY_HOST:8118
export ftp_proxy=http://$PROXY_HOST:8118
export http_proxy=http://$PROXY_HOST:8118
export https_proxy=http://$PROXY_HOST:8118
export no_proxy=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.10.0.0/16

Note: disable the global proxy while deploying Kubernetes, or access to in-cluster services will fail.
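
To drop the global proxy for just the current shell before deploying (per the note above), unset the variables instead of editing /etc/profile:

unset all_proxy ftp_proxy http_proxy https_proxy no_proxy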

4.2 Downloading packages and images

  • Download kubeadm, kubectl, and kubelet
wget https://storage.googleapis.com/kubernetes-release/release/v1.8.1/bin/linux/amd64/kubeadm
wget https://storage.googleapis.com/kubernetes-release/release/v1.8.1/bin/linux/amd64/kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.8.1/bin/linux/amd64/kubelet

Reference: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl
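
If the raw binaries are used instead of the RPMs, they still need to be made executable and put on the PATH (a sketch; /usr/bin is a common choice, and a kubelet systemd unit must still be provided separately):

chmod +x kubeadm kubectl kubelet
mv kubeadm kubectl kubelet /usr/bin/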

4.3 Pushing local images to a registry

  • Upload an image
docker login -u [email protected] -p xxxxxx hub.c.163.com
docker tag gcr.io/google_containers/kube-apiserver-amd64:v1.8.1 hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker push hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker rmi hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker logout hub.c.163.com
  • Download the image
docker pull hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker tag hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1 gcr.io/google_containers/kube-apiserver-amd64:v1.8.1
docker rmi hub.c.163.com/xxxxxx/kube-apiserver-amd64:v1.8.1
docker logout hub.c.163.com
  • Recreate the containers with the updated images
docker update --restart=no $(docker ps -q) && docker stop $(docker ps -q) && docker rm $(docker ps -q)

4.4 kubeadm init errors

  • Error description
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "nodes is forbidden: User "system:anonymous" cannot list nodes at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "nodes"
  },
  "code": 403
}
  • Cause
    The node had a global proxy configured in /etc/profile, so kubectl's requests to kube-apiserver were also forwarded through the proxy; the certificate did not match and the connection was refused.

  • Fix
    Remove the global proxy and configure only the Docker, yum, and wget proxies.
    See 4.1.

4.5 Failure adding a node to the Kubernetes cluster

  • Symptom
    Running kubeadm join on a node to add it to the cluster fails, like this:
kubeadm join --token=a20844.654ef6410d60d465 --discovery-token-ca-cert-hash sha256:0c2dbe69a2721870a59171c6b5158bd1c04bc27665535ebf295c918a96de0bb1 master.k8s.samwong.im:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "master.k8s.samwong.im:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://master.k8s.samwong.im:6443"
[discovery] Failed to request cluster info, will try again: [Get https://master.k8s.samwong.im:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: EOF]
  • Cause
    The token had expired and been removed. Listing tokens on the master returned nothing.
kubeadm token list
  • Fix
    Generate a new token. Tokens are valid for 24 hours by default; passing --ttl 0 at creation makes the token permanent.
[root@master ~]# kubeadm token create --ttl 0
3a536a.5d22075f49cc5fb8
[root@master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
3a536a.5d22075f49cc5fb8   <forever>   <never>                     authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Quick Kubernetes deployment with kubeadm

1. Environment preparation

Three machines were prepared for installation testing; their details follow:

(The machine details were an image in the original post and are omitted here.)

2. Install Docker

tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

yum update -y && yum upgrade -y
yum install docker-engine -y
systemctl start docker
systemctl enable docker.service

3. Install the Kubernetes toolkit

There are three options: the official repository, an unofficial repository, or building from the release project. The official Google repository cannot be reached directly, and the unofficial repositories tend to carry old versions (mritd's repository is good and quite current); to get a newer version, try building from the release project.

Provided by the author

For the lazier readers :-D, the RPM packages can be downloaded directly from the author's mirror: https://github.com/CloudNil/kubernetes-library/tree/master/rpm_x86_64/For_kubelet_1.5.2

yum install -y socat

rpm -ivh kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm  kubectl-1.5.1-0.x86_64.rpm  kubelet-1.5.1-0.x86_64.rpm  kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm

systemctl enable kubelet.service

Official repository

Crossing the GFW is not covered here; you know how.

Use yumdownloader to fetch the RPMs; otherwise the download speed will make you lose interest in Kubernetes altogether.

yum install -y yum-utils

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

yumdownloader kubelet kubeadm kubectl kubernetes-cni
rpm -ivh *.rpm
systemctl enable kubelet.service && systemctl start kubelet

Unofficial repository

# Thanks to mritd for maintaining this yum repository
tee /etc/yum.repos.d/mritd.repo << EOF
[mritdrepo]
name=Mritd Repository
baseurl=https://rpm.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF
yum makecache
yum install -y kubelet kubectl kubernetes-cni kubeadm
systemctl enable kubelet && systemctl start kubelet

Building from the release project

git clone https://github.com/kubernetes/release.git
cd release/rpm
./docker-build.sh

The build drops the RPMs into /output/x86_64; cd there and install them, taking care to pick the amd64 packages (most readers will be on 64-bit; choose accordingly for 32-bit or ARM).

4. Download the Docker images

The images a kubeadm install needs are not in the official Docker Hub library; they only come from Google's registry, gcr.io. The GFW? Climb over it! Or build your own images using Docker Hub as a relay. For k8s-1.5.2 I have already prepared the images, so you can pull them directly. The dashboard does not track the kubelet version line closely; any version will do, and this article uses kubernetes-dashboard-amd64:v1.5.0.

Images required by kubernetes-1.5.2:

  • etcd-amd64:2.2.5
  • kubedns-amd64:1.9
  • kube-dnsmasq-amd64:1.4
  • dnsmasq-metrics-amd64:1.0
  • exechealthz-amd64:1.2
  • pause-amd64:3.0
  • kube-discovery-amd64:1.0
  • kube-proxy-amd64:v1.5.2
  • kube-scheduler-amd64:v1.5.2
  • kube-controller-manager-amd64:v1.5.2
  • kube-apiserver-amd64:v1.5.2
  • kubernetes-dashboard-amd64:v1.5.0

To save yourself the typing, just run the following script:

#!/bin/bash
images=(kube-proxy-amd64:v1.5.2 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.2 kube-controller-manager-amd64:v1.5.2 kube-apiserver-amd64:v1.5.2 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.4 dnsmasq-metrics-amd64:1.0 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 nginx-ingress-controller:0.8.3)
for imageName in ${images[@]} ; do
  docker pull cloudnil/$imageName
  docker tag cloudnil/$imageName gcr.io/google_containers/$imageName
  docker rmi cloudnil/$imageName
done

5. Install the master node

Installing kubeadm and kubelet creates the /etc/kubernetes directory, and kubeadm init first checks whether that directory exists, so reset the environment with kubeadm first.

kubeadm reset && systemctl start kubelet
kubeadm init --api-advertise-addresses=172.16.1.101 --use-kubernetes-version v1.5.2
# If using an external etcd cluster:
kubeadm init --api-advertise-addresses=172.16.1.101 --use-kubernetes-version v1.5.2 --external-etcd-endpoints http://172.16.1.107:2379,http://172.16.1.107:4001

Note: to use the flannel network, add --pod-network-cidr=10.244.0.0/16. With multiple NICs, set --api-advertise-addresses= according to your setup; with a single NIC it can be omitted.

If you hit an "ebtables not found in system path" error, install the ebtables package first (my system shipped with it already, so I never saw the prompt):

yum install -y ebtables

Installation takes roughly 2-3 minutes; the output looks like:

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.2
[tokens] Generated token: "064158.548b9ddb1d3fad3e"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 61.317580 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 6.556101 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 6.020980 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=de3d61.504a049ec342e135 172.16.1.101

6. Install the minion nodes

With the master installed, the minion nodes are simple.

kubeadm reset && systemctl start kubelet
kubeadm join --token=de3d61.504a049ec342e135 172.16.1.101

The output looks like:

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://172.16.1.101:9898/cluster-info/v1/?token-id=f11877"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://172.16.1.101:6443]
[bootstrap] Trying to connect to endpoint https://172.16.1.101:6443
[bootstrap] Detected server version: v1.5.2
[bootstrap] Successfully established connection with endpoint "https://172.16.1.101:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:yournode | CA: false
Not before: 2016-12-15 19:44:00 +0000 UTC Not After: 2017-12-15 19:44:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

After installation, check the status:

[root@master ~]# kubectl get nodes
NAME       STATUS         AGE
master     Ready,master   6m
minion01   Ready          2m
minion02   Ready          2m

7. Install the Calico network

There are many network components to choose from; pick calico, weave, or flannel as needed. Calico performs best, while weave and flannel are roughly equal. The addons directory ships ready-made YAML. This deployment runs on an Alibaba Cloud VPC, where the flannel network created by the official flannel.yaml is broken, so this article tries the calico network instead.

kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml

If an external etcd is used, remove the following content from the manifest and set etcd_endpoints: [ETCD_ENDPOINTS]:

---

# This manifest installs the Calico etcd on the kubeadm master.  This uses a DaemonSet
# to force it to run on the master even when the master isn't schedulable, and uses
# nodeSelector to ensure it only runs on the master.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-etcd
  namespace: kube-system
  labels:
    k8s-app: calico-etcd
spec:
  template:
    metadata:
      labels:
        k8s-app: calico-etcd
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      # Only run this pod on the master.
      nodeSelector:
        kubeadm.alpha.kubernetes.io/role: master
      hostNetwork: true
      containers:
        - name: calico-etcd
          image: gcr.io/google_containers/etcd:2.2.1
          env:
            - name: CALICO_ETCD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          command: ["/bin/sh","-c"]
          args: ["/usr/local/bin/etcd --name=calico --data-dir=/var/etcd/calico-data --advertise-client-urls=http://$CALICO_ETCD_IP:6666 --listen-client-urls=http://0.0.0.0:6666 --listen-peer-urls=http://0.0.0.0:6667"]
          volumeMounts:
            - name: var-etcd
              mountPath: /var/etcd
      volumes:
        - name: var-etcd
          hostPath:
            path: /var/etcd

---

# This manifest installs the Service which gets traffic to the Calico
# etcd.
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: calico-etcd
  name: calico-etcd
  namespace: kube-system
spec:
  # Select the calico-etcd pod running on the master.
  selector:
    k8s-app: calico-etcd
  # This ClusterIP needs to be known in advance, since we cannot rely
  # on DNS to get access to etcd.
  clusterIP: 10.96.232.136
  ports:
    - port: 6666

Check that the components are running on each node:

[root@master work]# kubectl get po -n=kube-system -o wide
NAME                                       READY     STATUS    RESTARTS   AGE       IP                NODE
calico-node-0jkjn                          2/2       Running   0          25m       172.16.1.101      master
calico-node-w1kmx                          2/2       Running   2          25m       172.16.1.106      minion01
calico-node-xqch6                          2/2       Running   0          25m       172.16.1.107      minion02
calico-policy-controller-807063459-d7z47   1/1       Running   0          11m       172.16.1.107      minion02
dummy-2088944543-qw3vr                     1/1       Running   0          29m       172.16.1.101      master
kube-apiserver-master                      1/1       Running   0          28m       172.16.1.101      master
kube-controller-manager-master             1/1       Running   0          29m       172.16.1.101      master
kube-discovery-1769846148-lzlff            1/1       Running   0          29m       172.16.1.101      master
kube-dns-2924299975-jfvrd                  4/4       Running   0          29m       192.168.228.193   master
kube-proxy-6bk7n                           1/1       Running   0          28m       172.16.1.107      minion02
kube-proxy-6pgqz                           1/1       Running   1          29m       172.16.1.106      minion01
kube-proxy-7ms6m                           1/1       Running   0          29m       172.16.1.101      master
kube-scheduler-master                      1/1       Running   0          28m       172.16.1.101      master

Note: kube-dns only reaches the Running state after calico is fully configured.

8. Deploy the Dashboard

Download kubernetes-dashboard.yaml:

curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Modify the configuration; the changed parts sit between the #---------- markers. The adjustments: deploy kubernetes-dashboard into the default namespace, do not expose a port on the host node, pin the version to 1.5.0, and set imagePullPolicy to IfNotPresent.

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
#----------
#  namespace: kube-system
#----------
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        #----------
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
        imagePullPolicy: IfNotPresent
        #----------
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
#----------
#  namespace: kube-system
#----------
spec:
#----------
#  type: NodePort
#----------
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

9. Exposing the Dashboard service publicly

A Kubernetes Service can be exposed externally in three ways:

  • LoadBalancer Service
  • NodePort Service
  • Ingress

A LoadBalancer Service is where Kubernetes integrates deeply with cloud platforms: exposing a service this way actually asks the underlying cloud platform to create a load balancer. Cloud support is fairly complete by now, including GCE and DigitalOcean abroad and Alibaba Cloud and private OpenStack clouds at home, but because of that deep integration, a LoadBalancer Service can only be used on those cloud platforms.

A NodePort Service, as the name suggests, works by opening a port on every node in the cluster and mapping that port to a specific service. Although every node has plenty of ports (0-65535), security and usability concerns (many services get messy, and there are port conflicts) limit its practical use.

Ingress exposes services through an open-source reverse proxy/load balancer such as nginx. Ingress can be understood as domain-forwarding configuration, much like an upstream block in nginx. It is used together with an ingress-controller: the controller watches pods and services for changes and dynamically writes the forwarding rules into components such as nginx, apache, or haproxy to provide reverse proxying and load balancing.

9.1 Deploy the Nginx-ingress-controller

Nginx-ingress-controller is an official Kubernetes Docker image that bundles an Ingress controller with Nginx.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      # minion02 in this environment has a public IP and carries the label External-IP=true
      nodeSelector:
        External-IP: "true"
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: IfNotPresent
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/kubernetes-dashboard

9.2 Deploy the Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-dashboard
spec:
  rules:
  - host: dashboard.cloudnil.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80

Once the Ingress is deployed, point the DNS record for dashboard.cloudnil.com at minion02's public IP, and the dashboard becomes reachable at dashboard.cloudnil.com.
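
Before touching DNS, the Ingress can be smoke-tested directly against minion02's public IP by overriding the Host header (a sketch; the placeholder IP must be replaced):

curl -H 'Host: dashboard.cloudnil.com' http://<minion02-public-ip>/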

10. Caveats

kubeadm is still in development and testing; deploying a production Kubernetes environment with it is not recommended. Also note the following:

10.1 Single point of failure

The current version of kubeadm cannot yet deploy a truly highly available cluster; the master is a single point, and with the built-in etcd, etcd is a single node too. If the master fails, data may be lost, so an external etcd cluster is recommended: then even if the master goes down, a restart brings it back without data loss. HA deployment support is reportedly under development and should ship soon.

10.2 Exposing host ports

The HostPort and HostIP settings in a pod spec do not work in Kubernetes clusters that use a CNI network plugin; to expose a container on a host port, use NodePort or HostNetwork instead.

10.3 Routing errors on CentOS

On RHEL/CentOS 7, the iptables policy handling causes routing errors, so the bridge settings must be adjusted manually:

# cat /etc/sysctl.d/k8s.conf
 net.bridge.bridge-nf-call-ip6tables = 1
 net.bridge.bridge-nf-call-iptables = 1

10.4 Lost tokens

After the master is deployed, it prints a token used to join minion nodes, but there is no convenient way to view it again; once that output is gone, minions cannot join without the token. It can be recovered like this:

kubectl -n kube-system get secret clusterinfo -o yaml | grep token-map | awk '{print $2}' | base64 --decode | sed "s|{||g;s|}||g;s|:|.|g;s/\"//g" | xargs echo

It is better to generate a token in advance with kubeadm token and then pass it via --token to both kubeadm init and kubeadm join.

10.5 Hostname issues under Vagrant

When deploying Kubernetes in a Vagrant environment, first make sure hostname -i resolves to the correct communication IP. By default, if /etc/hosts has no hostname-to-IP mapping, kubelet takes the first non-lo interface for communication; if that happens to be the NAT interface rather than the bridged one, the installation will break.
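
A minimal sketch of the /etc/hosts fix (the IP and hostname are placeholders for the Vagrant machine):

echo "172.16.1.101 master" >> /etc/hosts
hostname -i    # should now print the intended communication IP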

Installing and configuring Kubernetes with kubeadm on Ubuntu

Installation

To get across the GFW, run the following as root, or see the reference linked in the original post:

wget https://coding.net/u/scaffrey/p/hosts/git/raw/master/hosts 
cp hosts /etc/hosts

Install the packages

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Configure the master node

Run the following on the master node:

kubeadm init  --pod-network-cidr 10.244.0.0/16

Let the master node take workloads:

kubectl taint nodes --all node-role.kubernetes.io/master-

Install the network:

kubectl create -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel-rbac.yml
kubectl create -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Configure the worker nodes

Run the last line printed when the master node was initialized:

kubeadm reset
kubeadm join --token 6aa93f.04f7fbce49b7f3bb 222.20.101.106:6443

Usage

Create a deployment with kubectl run:

kubectl run nginx --image=nginx:1.10.0
kubectl get pods -o wide

Expose the port with kubectl expose:

kubectl expose deployment nginx --port 80 --type LoadBalancer
kubectl get services -o wide

Scale the application with kubectl scale:

kubectl scale deployments/nginx --replicas=4
kubectl get pods -o wide

Roll out a new nginx version

Update the application image (rolling update):

kubectl set image deployments/nginx nginx=qihao/nginx

Confirm the rollout:

kubectl rollout status deployments/nginx 

Roll back to the previous version:

kubectl rollout undo deployments/nginx 

Load balancing (keep refreshing the service address; after a while the responding pod changes):

kubectl exec -it nginx-2027219757-r1sqm bash
uname -n > /usr/share/nginx/html/index.html 
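
To watch the load balancing, poll the Service address in a loop; the CLUSTER-IP comes from kubectl get services (placeholder below):

while true; do curl -s http://<service-cluster-ip>; sleep 1; done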

Cleanup

kubectl delete deployment nginx && kubectl delete service nginx
kubectl get services -o wide
kubectl get pods -o wide
kubectl get nodes