[A Little k8s] Installing a Single-Master k8s Cluster (v1.24)

2022-09-22 10:25:36

Environment Preparation

Prepare two servers: one as the k8s master and one as the k8s node. The example systems run CentOS 7.

IP              ROLE
192.168.2.131   k8smaster
192.168.2.132   k8snode

Machine Configuration

Run the following commands on every machine.

# Disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Disable SELinux
# Permanently (takes effect after reboot); required so containers can access the host filesystem
sudo sed -i 's/enforcing/disabled/' /etc/selinux/config
# Temporarily (for the current session)
sudo setenforce 0

# Disable swap
# Temporarily
sudo swapoff -a
# Permanently (comment out the swap entries in fstab)
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab

# Set the hostname according to the plan
sudo hostnamectl set-hostname <hostname>  # k8smaster or k8snode

# Add hosts entries (sudo does not apply to shell redirection, so use tee)
cat << EOF | sudo tee -a /etc/hosts
192.168.2.131 k8smaster
192.168.2.132 k8snode
EOF

cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system  # apply the settings

# Synchronize the clock
sudo yum install ntpdate -y
sudo ntpdate time.windows.com
date  # check the time
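The bridge sysctls above only take effect if the `br_netfilter` kernel module is loaded, which is not guaranteed on a fresh CentOS 7 install. A quick check and fix (the `/etc/modules-load.d/k8s.conf` filename is just a conventional choice):

```shell
# Check whether br_netfilter is loaded; load it if not
lsmod | grep -q br_netfilter || sudo modprobe br_netfilter

# Load it automatically on every boot
cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```

If `sysctl --system` reported "No such file or directory" for the bridge keys earlier, re-run it after loading the module.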

Install Docker

sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Use Alibaba Cloud's Docker yum repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum -y install docker-ce docker-ce-cli containerd.io

sudo systemctl enable docker
sudo systemctl start docker

Add the k8s yum Repository

Run on every machine.

cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet and kubectl

Run on every machine.

sudo yum install -y kubelet-1.24.0 kubeadm-1.24.0 kubectl-1.24.0
sudo systemctl enable kubelet

Deploy the k8s Master

Run on the master node.

sudo kubeadm init \
  --apiserver-advertise-address=192.168.2.131 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.24.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

The command may fail with the following error:

Kubeadm unknown service runtime.v1alpha2.RuntimeService
# Fix (see https://github.com/containerd/containerd/issues/4581):
# the packaged containerd config disables the CRI plugin, so remove it and restart
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
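An alternative to deleting the file is to regenerate a complete default config (which has the CRI plugin enabled) and switch runc to the systemd cgroup driver, which is generally recommended on systemd-based distros like CentOS 7. This is a sketch; the `sed` edit assumes the default config contains the `SystemdCgroup = false` line:

```shell
# Regenerate a full default containerd config with the CRI plugin enabled
containerd config default | sudo tee /etc/containerd/config.toml

# Switch the runc cgroup driver to systemd (assumes the default false setting)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```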

The line "Your Kubernetes control-plane has initialized successfully!" in the output indicates the master node was set up successfully.

Then run the commands shown in the init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
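The init output also mentions an alternative for the root user: instead of copying the file, point `KUBECONFIG` at the admin kubeconfig directly.

```shell
# As root, kubectl can use the admin kubeconfig directly
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Note this only lasts for the current shell session; add it to the shell profile to make it persistent.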

Join the Node

Run on the node.

sudo kubeadm join 192.168.2.131:6443 --token 9ypuou.dv1aqg0w3f4k5woe \
        --discovery-token-ca-cert-hash sha256:4837358ffa2fce8e2a45be7a99b01b020ced7adf52c151ccdfc3b756108e3041

This adds the node to the cluster. Output like the following indicates the node joined successfully.

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
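The token in the join command above is specific to this cluster and expires (by default after 24 hours). If you need to join a node later, a fresh join command can be printed on the master:

```shell
# Run on the master: create a new token and print the full join command for it
kubeadm token create --print-join-command
```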

Check the Nodes

kubectl get nodes -o wide

NAME        STATUS     ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8smaster   NotReady   control-plane   13m   v1.24.0   192.168.2.131   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.7
k8snode     NotReady   <none>          49s   v1.24.0   192.168.2.132   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.7

The nodes are still NotReady because no network plugin has been installed yet.

Install a Network Plugin

# Run on the master node
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Output
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

# Check pod status
kubectl get pods -n kube-system
# Output
NAME                                READY   STATUS    RESTARTS   AGE
coredns-74586cf9b6-fr5vk            0/1     Pending   0          16m
coredns-74586cf9b6-j8pxj            0/1     Pending   0          16m
etcd-k8smaster                      1/1     Running   0          16m
kube-apiserver-k8smaster            1/1     Running   0          16m
kube-controller-manager-k8smaster   1/1     Running   0          16m
kube-proxy-6cpxh                    1/1     Running   0          3m49s
kube-proxy-kbb82                    1/1     Running   0          16m
kube-scheduler-k8smaster            1/1     Running   0          16m

Wait a while until the coredns pods become Ready.

Eventually all pods should be in the Running state.
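Instead of re-running `kubectl get pods` by hand, `kubectl wait` can block until the coredns pods report Ready. A sketch, assuming the standard `k8s-app=kube-dns` label that coredns pods carry in a kubeadm cluster:

```shell
# Block until the coredns pods are Ready (give up after 5 minutes)
kubectl wait --for=condition=Ready pod -l k8s-app=kube-dns -n kube-system --timeout=300s
```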

Test the Cluster

At this point the single-master k8s cluster is installed. Next, deploy an nginx service to verify that the cluster actually works.

Run on the master.

kubectl create deployment nginx --image=nginx
# Wait a while, then check the pod status
kubectl get pod -o wide

A pod status of Running means nginx has been deployed. To access the nginx service from outside the cluster, a port still has to be exposed:

kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
# Output
service/nginx exposed

# Check the exposed port
kubectl get svc

A svc (Service) is the k8s resource used to access pods: `port` is the Service's own port, while `target-port` is the container's port. Changing `port` to something like 12345 works just as well; give it a try. A detailed explanation will come in a later post.
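For instance, the same deployment could be exposed on Service port 12345 while still targeting the container's port 80. A sketch; the existing `nginx` Service must be deleted first, since Service names are unique within a namespace:

```shell
# Remove the previous Service, then expose the deployment on Service port 12345
kubectl delete svc nginx
kubectl expose deployment nginx --port=12345 --target-port=80 --type=NodePort
```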

The nginx service can now be reached on port 30779 (the NodePort shown by `kubectl get svc`) at any node IP in the cluster.
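A quick check from any machine that can reach the nodes. The NodePort is read from the Service rather than hard-coded, since the port assigned to your cluster will differ:

```shell
# Look up the assigned NodePort, then request the nginx welcome page
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.2.131:${NODE_PORT}
```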
