3. Master component installation (etcd/api-server/controller/scheduler)
3.1 Install the rpm packages and configure the kubelet
Pick the machines that will act as masters, install the rpm packages on them, and configure the kubelet.
Note:
All of the images have already been pushed to my Docker Hub repository; pull them from there if you need them:
https://hub.docker.com/u/foxchan/
Install the rpm packages
yum localinstall -y kubectl-1.8.0-1.x86_64.rpm kubelet-1.8.0-1.x86_64.rpm kubernetes-cni-0.5.1-1.x86_64.rpm
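To confirm the packages installed correctly, you can list them, for example:
rpm -qa | grep -E 'kubelet|kubectl|kubernetes-cni'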
Create the manifest directory
mkdir -p /etc/kubernetes/manifests
Modify the kubelet configuration
/etc/systemd/system/kubelet.service.d/kubelet.conf
[Service]
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.12 --cluster-domain=cluster.local"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_EXTRA_ARGS=--v=2 --pod-infra-container-image=foxchan/google_containers/pause-amd64:3.0 --fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
Note:
--cluster-dns=10.96.0.12: plan this IP yourself, and keep it consistent with the service IP range used when the certificates were created.
--fail-swap-on=false: starting with 1.8 the kubelet refuses to start if swap is enabled on the machine; the default is true.
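If you prefer to leave --fail-swap-on at its default of true instead, you can disable swap on the host before starting the kubelet. A minimal sketch (adjust the sed pattern to your /etc/fstab layout):
swapoff -a                              # turn off all active swap immediately
sed -i '/swap/ s/^/#/' /etc/fstab       # comment out swap entries so it stays off after reboot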
Start the kubelet
systemctl daemon-reload
systemctl restart kubelet
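To confirm the kubelet came up cleanly, check the service status and follow its logs:
systemctl status kubelet
journalctl -u kubelet -f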
3.2 Install the etcd cluster
Create etcd.yaml and place it in /etc/kubernetes/manifests.
Note:
Create the log files in advance so they can be mounted:
/var/log/kube-apiserver.log
/var/log/kube-etcd.log
/var/log/kube-controller-manager.log
/var/log/kube-scheduler.log
# Also create the directories referenced by the volume mounts, as shown below.
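For example (paths taken from the manifests in this section):
touch /var/log/kube-apiserver.log /var/log/kube-etcd.log \
      /var/log/kube-controller-manager.log /var/log/kube-scheduler.log
mkdir -p /var/etcd/data    # etcd data directory mounted by etcd.yaml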
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd-server
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - image: foxchan/google_containers/etcd-amd64:3.0.17
    name: etcd-container
    command:
    - /bin/sh
    - -c
    - /usr/local/bin/etcd
      --name=etcd0
      --initial-advertise-peer-urls=http://master_IP:2380
      --listen-peer-urls=http://master_IP:2380
      --advertise-client-urls=http://master_IP:2379
      --listen-client-urls=http://master_IP:2379,http://127.0.0.1:2379
      --data-dir=/var/etcd/data
      --initial-cluster-token=emar-etcd-cluster
      --initial-cluster=etcd0=http://master_IP1:2380,etcd1=http://master_IP2:2380,etcd2=http://master_IP3:2380
      --initial-cluster-state=new 1>>/var/log/kube-etcd.log 2>&1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/log/kube-etcd.log
      name: logfile
    - mountPath: /var/etcd
      name: varetcd
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kube-etcd.log
    name: logfile
  - hostPath:
      path: /var/etcd/data
    name: varetcd
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/kubernetes/
    name: k8s
status: {}
Repeat steps 3.1-3.2 on all three master machines.
Parameter notes
- --name=etcd0: each etcd member name must be unique.
- The peer/client URL flags (*-urls): change the IPs to match each machine.
The kubelet periodically scans the manifests directory and starts the pods described by the configuration files in it.
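Once the etcd pods are running, you can check cluster health from any master; for example, query the same /health endpoint the livenessProbe uses:
curl http://127.0.0.1:2379/health
# expected output: {"health": "true"}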
3.3 Install kube-apiserver
Create kube-apiserver.yaml and place it in /etc/kubernetes/manifests.
# Create the directories referenced by the volume mounts below.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-apiserver
      --kubelet-https=true
      --enable-bootstrap-token-auth=true
      --token-auth-file=/etc/kubernetes/token.csv
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/kubernetes.pem
      --tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem
      --client-ca-file=/etc/kubernetes/pki/ca.pem
      --service-account-key-file=/etc/kubernetes/pki/ca-key.pem
      --insecure-port=9080
      --secure-port=6443
      --insecure-bind-address=0.0.0.0
      --bind-address=0.0.0.0
      --advertise-address=master_IP
      --storage-backend=etcd3
      --etcd-servers=http://master_IP1:2379,http://master_IP2:2379,http://master_IP3:2379
      --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,NodeRestriction
      --allow-privileged=true
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --authorization-mode=Node,RBAC
      --v=2 1>>/var/log/kube-apiserver.log 2>&1
    image: foxchan/google_containers/kube-apiserver-amd64:v1.8.1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
    - mountPath: /var/log/kube-apiserver.log
      name: logfile
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
  - hostPath:
      path: /var/log/kube-apiserver.log
    name: logfile
status: {}
Parameter notes:
- --advertise-address: change to the corresponding machine's IP.
- --enable-bootstrap-token-auth: enables the Bootstrap Token authenticator.
- --authorization-mode: the Node mode is added because, starting with 1.8, the system:node role is no longer automatically granted to the system:nodes group.
- For the same reason, --admission-control additionally includes NodeRestriction.
Check: the API server is now up and responding.
kubectl --server=https://master_IP:6443 \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  get componentstatuses
NAME                 STATUS      MESSAGE                                                                                         ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused
scheduler            Healthy     ok
etcd-1               Healthy     {"health": "true"}
etcd-0               Healthy     {"health": "true"}
etcd-2               Healthy     {"health": "true"}
3.4 Install kube-controller-manager
Create kube-controller-manager.yaml and place it in /etc/kubernetes/manifests.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-controller-manager
      --master=127.0.0.1:9080
      --controllers=*,bootstrapsigner,tokencleaner
      --root-ca-file=/etc/kubernetes/pki/ca.pem
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem
      --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem
      --leader-elect=true
      --v=2 1>>/var/log/kube-controller-manager.log 2>&1
    image: foxchan/google_containers/kube-controller-manager-amd64:v1.8.1
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /var/log/kube-controller-manager.log
      name: logfile
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /var/log/kube-controller-manager.log
    name: logfile
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
status: {}
Parameter notes
- --controllers=*,bootstrapsigner,tokencleaner: enables the bootstrap token controllers.
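Once the pod is running, the health endpoint used by the livenessProbe can be queried locally:
curl http://127.0.0.1:10252/healthz
# expected output: ok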
3.5 Install kube-scheduler
3.5.1 Configure scheduler.conf
cd /etc/kubernetes
export KUBE_APISERVER="https://master_VIP:6443"
# set-cluster
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=scheduler.conf
# set-credentials
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --kubeconfig=scheduler.conf
# set-context
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=scheduler.conf
# set default context
kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=scheduler.conf
After scheduler.conf has been generated, distribute it to the /etc/kubernetes directory on every master node.
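A minimal sketch of the copy step (master_IP2 and master_IP3 are placeholders, as elsewhere in this guide):
scp /etc/kubernetes/scheduler.conf root@master_IP2:/etc/kubernetes/
scp /etc/kubernetes/scheduler.conf root@master_IP3:/etc/kubernetes/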
3.5.2 Create kube-scheduler.yaml and place it in /etc/kubernetes/manifests
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - command:
    - /bin/sh
    - -c
    - /usr/local/bin/kube-scheduler
      --address=127.0.0.1
      --leader-elect=true
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --v=2 1>>/var/log/kube-scheduler.log 2>&1
    image: foxchan/google_containers/kube-scheduler-amd64:v1.8.1
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /var/log/kube-scheduler.log
      name: logfile
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kube-scheduler.log
    name: logfile
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
    name: kubeconfig
status: {}
At this point kube-scheduler is deployed on all three master nodes; the instances elect a leader, and the leader does the scheduling work.
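Each instance can also be checked locally through the healthz port used by the livenessProbe:
curl http://127.0.0.1:10251/healthz
# expected output: ok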
View the kube-scheduler log:
tail -f kube-scheduler.log
I1024 05:20:44.704783 7 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"1201fc85-b7e1-11e7-9792-525400b406cc", APIVersion:"v1", ResourceVersion:"87114", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kvm-sh002154 became leader
Check that all core components of the Kubernetes master cluster are now healthy:
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}