Deploying kube-proxy as a DaemonSet
kube-proxy can be deployed as a plain binary or as a kubelet static Pod, but the simplest option is a DaemonSet. It can then authenticate with a ServiceAccount token directly, so there is no need to issue client certificates and no certificate-expiry problem to worry about.

First set the following variables in your terminal:
```shell
APISERVER="https://10.200.16.79:6443"
CLUSTER_CIDR="10.10.0.0/16"
```
Replace `APISERVER` with the externally exposed address of the apiserver. Some readers may ask why we don't simply use the in-cluster address (`kubernetes.default`, or the corresponding CLUSTER IP). That is a chicken-and-egg problem: the CLUSTER IP only works because kube-proxy generates the iptables or ipvs rules that forward a Service to the Pod IPs of its Endpoints. A freshly started kube-proxy has not generated those rules yet, and generating them requires reaching the apiserver to fetch the Services and Endpoints; since no forwarding rules exist yet, a request from kube-proxy to the apiserver's CLUSTER IP cannot be forwarded. Replace `CLUSTER_CIDR` with the CIDR range of the cluster's Pod IPs, the same value that was set when deploying kube-controller-manager.
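The heredoc manifests below depend on the shell expanding `${APISERVER}` and `${CLUSTER_CIDR}` into the YAML before kubectl ever receives it. A minimal local sketch of that mechanism (no cluster needed):

```shell
# The shell expands ${APISERVER} and ${CLUSTER_CIDR} at heredoc time,
# so the rendered YAML already contains the concrete values.
APISERVER="https://10.200.16.79:6443"
CLUSTER_CIDR="10.10.0.0/16"
rendered=$(cat <<EOF
server: ${APISERVER}
clusterCIDR: ${CLUSTER_CIDR}
EOF
)
echo "$rendered"
```

If the variables are unset, the rendered YAML would silently contain empty values, so it is worth double-checking them before piping anything into `kubectl apply`.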
Create the RBAC permissions and configuration files for kube-proxy:
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-proxy
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
  - kind: ServiceAccount
    name: kube-proxy
    namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    app: kube-proxy
data:
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
      - cluster:
          certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          server: ${APISERVER}
        name: default
    contexts:
      - context:
          cluster: default
          namespace: default
          user: default
        name: default
    current-context: default
    users:
      - name: default
        user:
          tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    # CIDR range of the cluster's Pod IPs
    clusterCIDR: ${CLUSTER_CIDR}
    configSyncPeriod: 15m0s
    conntrack:
      # maximum NAT connections tracked per CPU core, default 32768
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    iptables:
      # SNAT all traffic addressed to Service CLUSTER IPs
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      minSyncPeriod: 0s
      # ipvs scheduler, default rr; all supported values:
      #   rr: round-robin
      #   lc: least connection
      #   dh: destination hashing
      #   sh: source hashing
      #   sed: shortest expected delay
      #   nq: never queue
      scheduler: rr
      syncPeriod: 30s
    metricsBindAddress: 0.0.0.0:10249
    # forward Services in ipvs mode
    mode: ipvs
    # oom-score-adj of the kube-proxy process, range [-1000, 1000];
    # lower values are less likely to be killed, so -999 keeps kube-proxy
    # alive if the system runs out of memory
    oomScoreAdj: -999
EOF
```
Next, set these variables in your terminal:
```shell
ARCH="amd64"
VERSION="v1.16.1"
```
`VERSION` is the Kubernetes version. `ARCH` is the node's CPU architecture; most nodes use `amd64` (i.e. x86_64), and other common values are `arm64`, `arm`, `ppc64le`, and `s390x`. If your cluster mixes CPU architectures, deploy one DaemonSet per `ARCH`. No node will end up with more than one kube-proxy, because the nodeSelector matches nodes by their CPU architecture.
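For a mixed-arch cluster, the per-architecture naming can be sketched with a dry-run loop (no kubectl calls; the real deployment would feed the manifest in the next step through `kubectl apply` once per `ARCH` value):

```shell
# Dry run: show the DaemonSet name and image tag each ARCH value produces.
VERSION="v1.16.1"
rendered=""
for ARCH in amd64 arm64; do
  rendered="${rendered}kube-proxy-ds-${ARCH} -> k8s.gcr.io/kube-proxy-${ARCH}:${VERSION}
"
done
printf '%s' "$rendered"
```

Because both the DaemonSet name and its selector labels include `${ARCH}`, the per-architecture DaemonSets never conflict with each other.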
Deploy kube-proxy to every node as a DaemonSet using hostNetwork:
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-proxy-ds-${ARCH}
  name: kube-proxy-ds-${ARCH}
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy-ds-${ARCH}
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kube-proxy-ds-${ARCH}
    spec:
      priorityClassName: system-node-critical
      containers:
        - name: kube-proxy
          image: k8s.gcr.io/kube-proxy-${ARCH}:${VERSION}
          imagePullPolicy: IfNotPresent
          command:
            - /usr/local/bin/kube-proxy
            - --config=/var/lib/kube-proxy/config.conf
            - --hostname-override=\$(NODE_NAME)
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /var/lib/kube-proxy
              name: kube-proxy
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
      hostNetwork: true
      serviceAccountName: kube-proxy
      volumes:
        - name: kube-proxy
          configMap:
            name: kube-proxy
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: lib-modules
          hostPath:
            path: /lib/modules
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - operator: Exists
      nodeSelector:
        beta.kubernetes.io/arch: ${ARCH}
EOF
```
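One detail in the manifest above is easy to miss: `--hostname-override=\$(NODE_NAME)` is backslash-escaped so that the shell does not treat it as command substitution inside the heredoc. The literal `$(NODE_NAME)` reaches the API server, and the kubelet later expands it from the container's `NODE_NAME` environment variable. A minimal local sketch of the escaping:

```shell
# The backslash stops the shell from expanding $(NODE_NAME) at heredoc time,
# so the literal string lands in the manifest; Kubernetes substitutes the
# NODE_NAME env var defined in the container spec when it starts the Pod.
rendered=$(cat <<EOF
--hostname-override=\$(NODE_NAME)
EOF
)
echo "$rendered"
```

Without the backslash, the shell would try to run a command named `NODE_NAME` while rendering the heredoc and the flag would end up empty.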
