[Abstract] traefik-ingress deployed on K8s serves as the load balancer, and keepalived runs on the traefik-ingress nodes to provide a VIP for high availability. This gives application discovery and a single access entry point; clients do not need to know which backend applications are actually running. During testing, however, access was far slower than expected: almost every request took about 10s, which was quite puzzling.
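The keepalived setup itself is not shown in this post. As a rough sketch only, a VRRP instance for the VIP on the edge nodes might look like the following; the interface name, router ID, priorities, and the assumption that 172.30.0.251 (the address curled below) is the VIP are all guesses, not taken from the original configuration:

```
! Minimal keepalived sketch for the VIP on the traefik-ingress edge nodes.
! Assumptions: interface eth0, VIP 172.30.0.251, one MASTER node;
! BACKUP nodes would use state BACKUP and a lower priority.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        172.30.0.251/24
    }
}
```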
Test results; the test case is the Go program from the earlier K8S rolling-update exercise:
# kubectl get pods
NAME READY STATUS RESTARTS AGE
glusterfs 1/1 Running 0 3d
rolling-update-test-759964656c-dp6jj 1/1 Running 0 3d
rolling-update-test-759964656c-jtpfb 1/1 Running 0 3d
rolling-update-test-759964656c-pjgrg 1/1 Running 0 3d
# time curl -H Host:rolling-update-test.sudops.in http://172.30.0.251/
This is version 3.
real 0m11.744s
user 0m0.004s
sys 0m0.006s
#
The content of traefik.yaml is as follows:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      restartPolicy: Always
      serviceAccountName: ingress
      containers:
      - image: traefik
        name: traefik-ingress-lb
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8580
          hostPort: 8580
        args:
        - --web
        - --web.address=:8580
        - --kubernetes
      nodeSelector:
        edgenode: "true"
After repeatedly searching for the cause, I found that access from one of the edge nodes was fast, with responses basically in the millisecond range, while all the other nodes took around 10s.
After a long search online, I finally came across a suggestion from someone abroad: try removing the resource limits on traefik-ingress-lb.
So I removed (commented out) the resources section, did a delete and then re-created the DaemonSet, and testing from any of the nodes was much faster.
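One plausible explanation, which is my own assumption rather than something verified in the original test, is CPU throttling: for a `cpu: 200m` limit, the kubelet configures a CFS quota of 20ms of CPU time per 100ms period for the container, and a throttled Traefik process can stall request handling. The mapping from the limit to the kernel quota can be sketched in shell:

```shell
# Sketch: how a Kubernetes CPU limit translates into the kernel's CFS quota.
# Assumption: the default CFS period of 100000 microseconds (100ms) is in use.

cpu_limit_millicores=200   # the "cpu: 200m" limit from traefik.yaml
cfs_period_us=100000       # kernel default CFS period

# quota_us = millicores / 1000 cores * period
cfs_quota_us=$(( cpu_limit_millicores * cfs_period_us / 1000 ))

echo "cfs_quota_us=${cfs_quota_us}"   # 20000us = 20ms of CPU per 100ms window
```

In other words, the container is allowed at most one fifth of a single core, and any burst beyond that is paused until the next period, which shows up as latency rather than errors.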
Change this:
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
to this (commented out):
        #resources:
        #  limits:
        #    cpu: 200m
        #    memory: 30Mi
        #  requests:
        #    cpu: 100m
        #    memory: 20Mi
Re-create traefik-ingress.
Test result:
# time curl -H Host:rolling-update-test.sudops.in http://172.30.0.251/
This is version 3.
real 0m0.015s
user 0m0.006s
sys 0m0.004s
#
Noting this down here for future reference.