Setting up a single-master Kubernetes 1.20 cluster with kubeadm

2023-11-29 12:21:00

Resource Preparation

Two Tencent Cloud CVM instances, configured as follows:

| Hostname | Specs         | Private IP |
| -------- | ------------- | ---------- |
| master   | 2 cores, 4 GB | 10.0.1.7   |
| node1    | 2 cores, 4 GB | 10.0.1.9   |

Preliminary Setup

Because these are cloud servers, SELinux, firewalld, and swap are disabled by default and the iptables rules are already empty, so all that is left is to set the hostnames, the hosts file, and the kernel forwarding parameters Kubernetes needs, as follows:
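
If you want to double-check those defaults before proceeding, the following read-only commands will confirm them (an optional sanity check):

```bash
getenforce                      # expect Disabled or Permissive
systemctl is-active firewalld   # expect inactive (the unit may not exist at all)
swapon --show                   # no output means swap is off
```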

Set the hostname on each of the two instances.

On the master node:

```bash
hostnamectl set-hostname master
```

On the worker node:

```bash
hostnamectl set-hostname node1
```

Set up the hosts file on both machines by appending the two cluster entries at the end:

```
vim /etc/hosts
127.0.0.1 VM-1-6-centos VM-1-6-centos
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4

::1 VM-1-6-centos VM-1-6-centos
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

10.0.1.7 master
10.0.1.9 node1
```
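
To confirm the names resolve, a quick check from either machine (optional):

```bash
ping -c 2 master
ping -c 2 node1
```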

Configure the forwarding-related kernel parameters:

```
vim /etc/sysctl.d/Kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
```

After saving, apply the settings with `sysctl -p /etc/sysctl.d/Kubernetes.conf`. If it reports the following errors:

```
[root@master ~]# sysctl -p /etc/sysctl.d/Kubernetes.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
net.ipv4.ip_forward = 1
vm.swappiness = 0
```

run `modprobe br_netfilter` first, then re-run `sysctl -p /etc/sysctl.d/Kubernetes.conf` and all four settings will apply.
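
A module loaded with modprobe is gone after a reboot. If you want br_netfilter to load automatically at boot, one option is a modules-load.d entry (a sketch; the file name k8s.conf is arbitrary):

```bash
# systemd-modules-load reads this directory at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```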

Configure the Package Repositories and Install Docker

Download the Docker repository file:

```bash
wget -O /etc/yum.repos.d/docker-ce.repo http://download.docker.com/linux/centos/docker-ce.repo
```

Rewrite the repository URLs to point at Tencent's mirror:

```bash
sed -i 's#download.docker.com#mirrors.cloud.tencent.com/docker-ce#' /etc/yum.repos.d/docker-ce.repo
```

List all available Docker versions:

```
[root@master ~]# yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror, langpacks
docker-ce.x86_64            3:20.10.9-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.8-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.10-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.0-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.9-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.8-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.15-3.el7                    docker-ce-stable
```

Install the 19.03 version of Docker:

```bash
yum install -y docker-ce-19.03.9-3.el7
```

Start Docker and enable it at boot:

```bash
systemctl start docker
systemctl enable docker
```

Configure Docker to pull images from Tencent's mirror (the file below also sets the systemd cgroup driver):

```bash
vim /etc/docker/daemon.json
```

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://mirrors.ccs.tencentyun.com"]
}
```

Reload systemd and restart Docker for the changes to take effect:

```bash
systemctl daemon-reload
systemctl restart docker
```
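
You can confirm both settings took effect (an optional check):

```bash
docker info | grep -i 'cgroup driver'      # should print: Cgroup Driver: systemd
docker info | grep -A 1 'Registry Mirrors' # should list the Tencent mirror
```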

Configure the Kubernetes package repository:

```bash
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.cloud.tencent.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
```

Install the three packages the cluster needs: kubeadm, kubectl, and kubelet. I pin all three to 1.20.0, because the offline Docker images prepared below contain the 1.20.0 control-plane components, which have to match the kubelet version. The install command:

```bash
yum install -y kubeadm-1.20.0 kubectl-1.20.0 kubelet-1.20.0
```

Enable kubelet at boot (it will crash-loop until `kubeadm init` runs, which is expected):

```bash
systemctl enable kubelet
```
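
A quick check that matching 1.20.0 binaries landed (optional):

```bash
kubeadm version -o short            # v1.20.0
kubelet --version                   # Kubernetes v1.20.0
kubectl version --client --short    # Client Version: v1.20.0
```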

Download the Docker images for all the Kubernetes components. Version 1.20 needs these eight, plus two optional dashboard images:

```
kube-apiserver:v1.20.0
kube-scheduler:v1.20.0
kube-controller-manager:v1.20.0
kube-proxy:v1.20.0
etcd:3.4.13-0
coredns:1.7.0
pause:3.2
flannel:v0.13.1-rc1    # network plugin
dashboard:v2.0.0-rc7   # dashboard add-on, optional
metrics-scraper:v1.0.4 # dashboard add-on, optional
```

Because the kubeadm init configuration pulls from the k8s.gcr.io registry, which cannot be reached from mainland China without a proxy, I pulled all the required images in advance and pushed them to Docker Hub and Tencent Cloud's registry. To pull from Docker Hub, just add the tangxu/ prefix:

```bash
docker pull tangxu/kube-controller-manager:v1.20.0
docker pull tangxu/kube-apiserver:v1.20.0
docker pull tangxu/kube-scheduler:v1.20.0
docker pull tangxu/kube-proxy:v1.20.0
docker pull tangxu/flannel:v0.13.1-rc1
docker pull tangxu/etcd:3.4.13-0
docker pull tangxu/coredns:1.7.0
docker pull tangxu/pause:3.2
docker pull tangxu/dashboard:v2.0.0-rc7 
docker pull tangxu/metrics-scraper:v1.0.4
```

To pull from Tencent Cloud instead:

```bash
docker pull ccr.ccs.tencentyun.com/tangxu/kube-controller-manager:v1.20.0
docker pull ccr.ccs.tencentyun.com/tangxu/kube-apiserver:v1.20.0
docker pull ccr.ccs.tencentyun.com/tangxu/kube-scheduler:v1.20.0
docker pull ccr.ccs.tencentyun.com/tangxu/kube-proxy:v1.20.0
docker pull ccr.ccs.tencentyun.com/tangxu/flannel:v0.13.1-rc1
docker pull ccr.ccs.tencentyun.com/tangxu/etcd:3.4.13-0
docker pull ccr.ccs.tencentyun.com/tangxu/coredns:1.7.0
docker pull ccr.ccs.tencentyun.com/tangxu/pause:3.2
docker pull ccr.ccs.tencentyun.com/tangxu/dashboard:v2.0.0-rc7 
docker pull ccr.ccs.tencentyun.com/tangxu/metrics-scraper:v1.0.4
```

Now retag the images, because the registry paths in the init configuration do not match the offline image names; without retagging, kubeadm cannot use the local images and will try to pull from the unreachable registry again. For the Docker Hub images:

```bash
docker tag tangxu/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
docker tag tangxu/kube-apiserver:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
docker tag tangxu/kube-scheduler:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
docker tag tangxu/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag tangxu/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag tangxu/pause:3.2 k8s.gcr.io/pause:3.2
docker tag tangxu/kube-proxy:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
docker tag tangxu/flannel:v0.13.1-rc1 quay.io/coreos/flannel:v0.13.1-rc1
docker tag tangxu/dashboard:v2.0.0-rc7 kubernetesui/dashboard:v2.0.0-rc7
docker tag tangxu/metrics-scraper:v1.0.4 kubernetesui/metrics-scraper:v1.0.4
```

And for the Tencent Cloud images:

```bash
docker tag ccr.ccs.tencentyun.com/tangxu/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
docker tag ccr.ccs.tencentyun.com/tangxu/kube-apiserver:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
docker tag ccr.ccs.tencentyun.com/tangxu/kube-scheduler:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
docker tag ccr.ccs.tencentyun.com/tangxu/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag ccr.ccs.tencentyun.com/tangxu/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag ccr.ccs.tencentyun.com/tangxu/pause:3.2 k8s.gcr.io/pause:3.2
docker tag ccr.ccs.tencentyun.com/tangxu/kube-proxy:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
docker tag ccr.ccs.tencentyun.com/tangxu/flannel:v0.13.1-rc1 quay.io/coreos/flannel:v0.13.1-rc1
docker tag ccr.ccs.tencentyun.com/tangxu/dashboard:v2.0.0-rc7 kubernetesui/dashboard:v2.0.0-rc7
docker tag ccr.ccs.tencentyun.com/tangxu/metrics-scraper:v1.0.4 kubernetesui/metrics-scraper:v1.0.4
```
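
Typing the pull and tag commands one by one gets tedious; a small loop covers the k8s.gcr.io images in one go (a sketch assuming the Docker Hub prefix above; swap in the ccr.ccs.tencentyun.com/tangxu/ prefix for the Tencent Cloud copies):

```bash
# pull each image and retag it under the name kubeadm expects
for img in kube-apiserver:v1.20.0 kube-controller-manager:v1.20.0 \
           kube-scheduler:v1.20.0 kube-proxy:v1.20.0 \
           etcd:3.4.13-0 coredns:1.7.0 pause:3.2; do
  docker pull "tangxu/${img}"
  docker tag  "tangxu/${img}" "k8s.gcr.io/${img}"
done
# flannel and the dashboard images belong to other registries, so tag them separately:
docker tag tangxu/flannel:v0.13.1-rc1 quay.io/coreos/flannel:v0.13.1-rc1
docker tag tangxu/dashboard:v2.0.0-rc7 kubernetesui/dashboard:v2.0.0-rc7
docker tag tangxu/metrics-scraper:v1.0.4 kubernetesui/metrics-scraper:v1.0.4
```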

With all the images ready, verify them with `docker images`:

```
[root@master ~]# docker images | grep -v tangxu
REPOSITORY                                              TAG           IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                                   v1.20.0       10cc881966cf   10 months ago   118MB
k8s.gcr.io/kube-apiserver                               v1.20.0       ca9843d3b545   10 months ago   122MB
k8s.gcr.io/kube-scheduler                               v1.20.0       3138b6e3d471   10 months ago   46.4MB
k8s.gcr.io/kube-controller-manager                      v1.20.0       b9fa1895dcaa   10 months ago   116MB
quay.io/coreos/flannel                                  v0.13.1-rc1   f03a23d55e57   11 months ago   64.6MB
k8s.gcr.io/etcd                                         3.4.13-0      0369cf4303ff   14 months ago   253MB
k8s.gcr.io/coredns                                      1.7.0         bfe3a36ebd25   16 months ago   45.2MB
kubernetesui/dashboard                                  v2.0.0-rc7    126a52ef8802   19 months ago   221MB
kubernetesui/metrics-scraper                            v1.0.4        86262685d9ab   19 months ago   36.9MB
k8s.gcr.io/pause                                        3.2           80d28bedfe5d   20 months ago   683kB
```
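
You can cross-check this list against what kubeadm itself expects to pull (optional):

```bash
kubeadm config images list --kubernetes-version v1.20.0
```
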
Initialize the Cluster

On the master node, generate the cluster's default init configuration file:

```bash
kubeadm config print init-defaults > kubeadm-config.yaml
```

Then make a few changes to it:

```bash
vim ./kubeadm-config.yaml
```

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.1.7 # master node private IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "mycluster:6443" # note the lowercase "c"; the name must resolve, e.g. via an /etc/hosts entry
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"  # pod network CIDR for the flannel plugin
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```

The key edits are changing advertiseAddress to the master node's IP and adding `podSubnet: "10.244.0.0/16"` beneath `dnsDomain: cluster.local`. (Also note that the field is controlPlaneEndpoint with a lowercase "c"; it is not part of the generated defaults, so for a single-master cluster you can simply omit that line.)
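
Before the real run, a dry run can catch configuration typos without changing anything on the node (optional):

```bash
kubeadm init --config=kubeadm-config.yaml --dry-run
```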

Start the initialization:

```bash
kubeadm init --config=kubeadm-config.yaml
```

Output of a successful init:

```
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 10.0.1.7]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.0.1.7 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.0.1.7 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.501493 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.1.7:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:59a7d6f60dd67d0be0af8022b1b21cadc77797e89cb6a9d0b7587c1ead4906ee
```

Run the commands from the output above:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
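
kubectl should now be able to talk to the API server (an optional check):

```bash
kubectl cluster-info
kubectl get nodes   # the master shows NotReady until a network plugin is installed
```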

A successful init also prints the command for joining worker nodes; the one generated above was:

```bash
kubeadm join 10.0.1.7:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:59a7d6f60dd67d0be0af8022b1b21cadc77797e89cb6a9d0b7587c1ead4906ee
```
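
The bootstrap token expires after 24 hours (the ttl in the config above). To join a node after that, generate a fresh join command on the master:

```bash
kubeadm token create --print-join-command
```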

Check component health with `kubectl get cs`:

```
[root@VM-1-7-centos ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}
```

Here scheduler and controller-manager report Unhealthy. Fix this by editing the two manifests kube-controller-manager.yaml and kube-scheduler.yaml and commenting out the `--port=0` line in each; that flag disables the insecure health-check ports (10251 for the scheduler, 10252 for the controller manager) that the probes above are trying to reach:

```bash
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
```

```yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
#    - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
```
```bash
vim /etc/kubernetes/manifests/kube-scheduler.yaml
```

```yaml
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
#    - --port=0
```
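
Since these are static pod manifests, the kubelet picks up the edits and recreates both pods on its own. You can confirm the health-check ports are listening again (optional):

```bash
ss -lntp | grep -E '10251|10252'
```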

After saving both files, wait a moment and the status returns to normal:

```
[root@VM-1-7-centos ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok    
scheduler            Healthy   ok    
etcd-0               Healthy   {"health":"true"}
```

Now join the worker node; run on node1:

```bash
kubeadm join 10.0.1.7:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:59a7d6f60dd67d0be0af8022b1b21cadc77797e89cb6a9d0b7587c1ead4906ee
```

Output like the following means the node joined the cluster successfully:

```
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

View all cluster nodes by running `kubectl get nodes` on the management (master) node:

```
[root@VM-1-7-centos ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   18m   v1.20.0
node1    Ready    <none>                 14m   v1.20.0
```

Before the flannel network plugin is installed, the STATUS column shows NotReady; it becomes Ready once the plugin is deployed.

Install the flannel network plugin. Note that the kube-flannel.yml below is the current upstream manifest: it deploys flannel v0.23.0 pulled directly from docker.io into its own kube-flannel namespace, so it does not use the offline v0.13.1-rc1 image tagged earlier, and the apply output and pod listing further down (captured with an older manifest that deployed into kube-system) may differ slightly from what you see:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.23.0
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.23.0
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
```

YAML has strict indentation rules and does not allow tab characters, so if applying the file above fails, download the original from GitHub instead:

kube-flannel.yml
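
Whichever copy you use, a client-side dry run will catch indentation mistakes before anything reaches the cluster (optional):

```bash
kubectl apply --dry-run=client -f kube-flannel.yml
```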

Deploy the network plugin:

```bash
kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
```

Check that every component is running:

```
[root@VM-1-7-centos ~]# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-727nv          1/1     Running   0          15m
kube-system   coredns-74ff55c5b-r85ql          1/1     Running   0          15m
kube-system   etcd-master                      1/1     Running   0          15m
kube-system   kube-apiserver-master            1/1     Running   0          15m
kube-system   kube-controller-manager-master   1/1     Running   0          7m23s
kube-system   kube-flannel-ds-nnqnj            1/1     Running   0          47s
kube-system   kube-flannel-ds-s5rsl            1/1     Running   0          47s
kube-system   kube-proxy-8tt49                 1/1     Running   0          11m
kube-system   kube-proxy-vnbh4                 1/1     Running   0          15m
kube-system   kube-scheduler-master            1/1     Running   1          4m24s
```
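
As a final smoke test, you can start a throwaway pod and check in-cluster DNS (a minimal sketch; the pod name dns-test and the busybox image are arbitrary choices):

```bash
# --rm deletes the pod when the command exits; expect the kubernetes service IP in the answer
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default
```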

The cluster is now complete. This walkthrough used a single master node; for production, plan on three servers for the control plane.
