██ Environment preparation [all nodes]
■ Disable the firewall and SELinux
systemctl stop firewalld
setenforce 0
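The two commands above only last until reboot; a sketch of making them persistent (standard CentOS 7 paths assumed):
systemctl disable firewalld                                           # do not start firewalld at boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # permanent, applies after reboot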
■ Disable swap
swapoff -a
Comment out the swap entry in /etc/fstab so it is not mounted at boot.
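A minimal sketch of that edit, assuming a standard single swap line in /etc/fstab:
cp /etc/fstab /etc/fstab.bak            # back up before editing
sed -ri 's/.*swap.*/#&/' /etc/fstab     # comment out every line mentioning swap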
Confirm swap shows 0:
free -m
■ Set the hostname
hostnamectl set-hostname <hostname>
Add hosts entries on the master:
cat >> /etc/hosts << EOF
192.168.222.21 node5
192.168.222.22 node6
192.168.222.23 node7
EOF
■ Pass bridged IPv4 traffic to the iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
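If the two keys above fail to apply, the br_netfilter module is likely not loaded; a hedged fix that also persists across reboots:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf   # loaded automatically at boot by systemd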
■ Set the time zone, time synchronization, etc.
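A hedged example (Asia/Shanghai and chrony are assumptions, not specified in the original):
timedatectl set-timezone Asia/Shanghai
yum -y install chrony
systemctl enable chronyd && systemctl start chronyd
chronyc sources        # confirm reachable time sources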
██ Install Docker/kubeadm/kubelet [all nodes]
■ Replace the distribution's default yum mirrors
Back up the stock repo file:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
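Then fetch a replacement, e.g. Aliyun's CentOS 7 mirror (URL assumed still current; verify before use), and rebuild the cache:
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all && yum makecache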
■ Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version
List the available versions; the latest at the time of writing (2022-05-30) was tried first, but 19.03.15 was kept in the end (see the calico section below):
yum list docker-ce --showduplicates
yum -y remove docker-ce-18.06.1.ce-3.el7
yum -y install docker-ce-20.10.16-3.el7    # latest at the time; later removed
yum -y remove docker-ce-20.10.16-3.el7
yum -y install docker-ce-19.03.15-3.el7    # version finally kept, matches kubelet 1.23.1
■ Configure the registry mirror address
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": "https://b9pmyelo.mirror.aliyuncs.com",
"exec-opts":"native.cgroupdriver=systemd"
}
EOF
■ Restart Docker
systemctl status docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
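A quick hedged check that the systemd cgroup driver from daemon.json took effect (output wording may vary slightly across Docker versions):
docker info | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd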
■ Add the Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
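Optionally refresh and confirm the repo is visible (a hedged check):
yum makecache fast
yum repolist | grep -i kubernetes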
■ Install kubelet, kubeadm, kubectl
yum list kubelet --showduplicates
yum list kubeadm --showduplicates
yum list kubectl --showduplicates
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0    # first attempt
yum remove -y kubelet kubeadm kubectl
yum install -y kubelet-1.20.1 kubeadm-1.20.1 kubectl-1.20.1    # second attempt
yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1    # version finally kept
systemctl status kubelet
systemctl enable kubelet
systemctl start kubelet
TIPS: The cluster has not been bootstrapped yet, so kubelet cannot start at this point; it will be brought up automatically when the master is initialized.
██ Deploy the k8s master
kubeadm init \
  --apiserver-advertise-address=192.168.222.21 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.1 \
  --service-cidr=10.92.0.0/12 \
  --pod-network-cidr=10.220.0.0/16 \
  --ignore-preflight-errors=all
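Optionally, the control-plane images can be pre-pulled before running the init above, so the init itself goes faster (a hedged aside using the same mirror):
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.1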
On success, the init output looks like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.222.21:6443 --token plfz4s.m574ak0ryby28q6o \
  --discovery-token-ca-cert-hash sha256:ea7fe9e638b97215c3f656c4cb7988ef876a9a69217b7663ef33680e414df6e5 \
  --ignore-preflight-errors=all
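If the token has expired by the time a node joins (the default lifetime is 24 hours), print a fresh join command on the master:
kubeadm token create --print-join-command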
Check the status with kubectl:
kubectl get nodes
To use kubectl on the worker nodes, first copy the config file from the master to each node with scp:
mkdir -p $HOME/.kube
scp node5:/etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
██ Deploy the k8s nodes
On each node, run the join command produced by the master init:
[as above]
TIPS: Right after joining, the nodes show NotReady. That is because no CNI plugin is installed yet, so the node network cannot come up and the nodes cannot be reported ready to the apiserver; the status recovers once the CNI is deployed.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
node5 NotReady master 16h v1.18.0
node6 NotReady <none> 6m55s v1.18.0
node7 NotReady <none> 42s v1.18.0
██ Deploy the network plugin [CNI]
wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
After downloading, set CALICO_IPV4POOL_CIDR in the manifest to match the pod-network-cidr passed to kubeadm init above (10.220.0.0/16 here); the manifest's default is:
        - name: CALICO_IPV4POOL_CIDR
          value: "192.168.0.0/16"
After editing, apply the manifest:
kubectl apply -f calico.yaml
The first attempt failed [2022-05-28]:
[root@node6:0 ~]# kubectl apply -f calico.yaml
error: unable to recognize "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"
So the network plugin could not be deployed. The error points at a version mismatch: this calico manifest declares PodDisruptionBudget under policy/v1, which only exists in Kubernetes v1.21+. After repeatedly testing version combinations against the error logs, Docker 19.03.15 paired with kubelet 1.23.1 proved to be a working match, and the calico network interface could be configured cleanly.
Once the manifest applies successfully, the CNI pods go through initialization; just wait for them to come up.
After calico reaches Running, all nodes show Ready.
██ Below is the operation log from node6, showing the whole process: CNI deployment, initialization, and success.
[root@node6:0 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
[root@node6:0 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node5 NotReady control-plane,master 5m50s v1.23.1
node6 NotReady <none> 3m43s v1.23.1
[root@node6:0 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6b77fff45-cwtd2 0/1 Pending 0 61s
calico-node-5nw2r 0/1 Init:0/2 0 61s
calico-node-fl272 0/1 Init:0/2 0 61s
coredns-6d8c4cb4d-fwxk2 0/1 Pending 0 6m17s
coredns-6d8c4cb4d-ql29c 0/1 Pending 0 6m17s
etcd-node5 1/1 Running 0 6m30s
kube-apiserver-node5 1/1 Running 0 6m32s
kube-controller-manager-node5 1/1 Running 0 6m31s
kube-proxy-gzdfk 1/1 Running 0 6m17s
kube-proxy-k8vmn 1/1 Running 0 4m27s
kube-scheduler-node5 1/1 Running 0 6m30s
[root@node6:0 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6b77fff45-cwtd2 0/1 ContainerCreating 0 3m48s
calico-node-5nw2r 0/1 PodInitializing 0 3m48s
calico-node-fl272 0/1 PodInitializing 0 3m48s
coredns-6d8c4cb4d-fwxk2 0/1 ContainerCreating 0 9m4s
coredns-6d8c4cb4d-ql29c 0/1 ContainerCreating 0 9m4s
etcd-node5 1/1 Running 0 9m17s
kube-apiserver-node5 1/1 Running 0 9m19s
kube-controller-manager-node5 1/1 Running 0 9m18s
kube-proxy-gzdfk 1/1 Running 0 9m4s
kube-proxy-k8vmn 1/1 Running 0 7m14s
kube-scheduler-node5 1/1 Running 0 9m17s
[root@node6:0 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6b77fff45-cwtd2 1/1 Running 0 9m48s
calico-node-5nw2r 1/1 Running 0 9m48s
calico-node-fl272 1/1 Running 0 9m48s
coredns-6d8c4cb4d-fwxk2 1/1 Running 0 15m
coredns-6d8c4cb4d-ql29c 1/1 Running 0 15m
etcd-node5 1/1 Running 0 15m
kube-apiserver-node5 1/1 Running 0 15m
kube-controller-manager-node5 1/1 Running 0 15m
kube-proxy-gzdfk 1/1 Running 0 15m
kube-proxy-k8vmn 1/1 Running 0 13m
kube-scheduler-node5 1/1 Running 0 15m
[root@node6:0 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node5 Ready control-plane,master 17m v1.23.1
node6 Ready <none> 15m v1.23.1
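As a final smoke test (not part of the original log; names are illustrative), deploy and expose nginx, then curl the NodePort from any node:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide    # note the mapped port, then: curl <node-ip>:<node-port>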