Outsourcing Mastery -- Set Up k8s on CentOS in 5 Minutes: Notes and Approach

2023-11-28 15:07:22

To save time, why not buy machines from a cloud provider? It cuts server setup time, and sidesteps the network restrictions you'd otherwise hit while installing k8s. Tencent Cloud, one of the better-known domestic providers, is used here.

This is not purely a plug for Tencent Cloud; among the vendors tried, its bandwidth back to mainland China is the most stable, which is why it was chosen.

Core approach

1. Buy the servers

2. Configure the Docker and Kubernetes yum repositories

3. Initialize the master, then join the worker nodes

4. Deploy the flannel network plugin
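The four steps condense into roughly this command sequence (a sketch only; it uses the versions and addresses from the rest of this article, with `<token>`/`<hash>` as placeholders):

```shell
# Step 1 is done in the Tencent Cloud console; steps 2-4 run on the servers.

# Step 2: configure the yum repos, then install packages on every node:
yum -y install docker-ce kubeadm-1.21.3 kubelet-1.21.3 kubectl-1.21.3

# Step 3: bring up the control plane on the master, then join each worker:
kubeadm init --apiserver-advertise-address=172.16.3.4 --pod-network-cidr=10.244.0.0/16
kubeadm join 172.16.3.4:6443 --token <token> --discovery-token-ca-cert-hash <hash>

# Step 4: deploy the pod network, from the master:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```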

Buy a Tencent Cloud server

Create a private network (VPC)

Create the VPC before purchasing the server.

On the purchase page, choose "Custom configuration", set the billing mode to "Pay as you go", and pick the "Hong Kong" region.

A 2-core / 4 GB instance type is plenty for this setup.

Repository configuration

Docker repository configuration

Depending on your operating system, pick the matching guide from the left-hand navigation.

Docker official installation documentation

Run on both nodes (master and worker):

sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
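An optional sanity check, not in the original, to confirm the repo file was added:

```shell
# docker-ce-stable should appear among the enabled repositories
yum repolist | grep docker-ce
```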

Kubernetes repository configuration

Choose the install method that matches your operating system.

Kubernetes official documentation

Run on both nodes (master and worker):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Execution:
[root@VM-3-4-centos ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
[root@VM-3-4-centos ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  CentOS-x86_64-kernel.repo  kubernetes.repo
CentOS-Epel.repo  docker-ce.repo
[root@VM-3-4-centos ~]# 

Start the installation

Optionally, switch to a domestic mirror first. Note that the Tsinghua (TUNA) mirror keeps docker-ce under /docker-ce and the Kubernetes packages under /kubernetes, so the mirror path is appended as part of the substitution:

# Tsinghua (TUNA) mirrors
sed -i 's#download.docker.com#mirrors.tuna.tsinghua.edu.cn/docker-ce#g' /etc/yum.repos.d/docker-ce.repo
sed -i 's#packages.cloud.google.com#mirrors.tuna.tsinghua.edu.cn/kubernetes#g' /etc/yum.repos.d/kubernetes.repo

Install Docker

What each command does:

yum -y install docker-ce: install Docker

systemctl start docker: start Docker

systemctl enable docker: start Docker on boot

docker version: show the Docker version

docker images: list local images

Run on both nodes (master and worker):
yum -y install docker-ce && systemctl start docker && systemctl enable docker && docker version && docker images
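The kubeadm preflight check further down warns that Docker's default cgroupfs cgroup driver is not the recommended one. Switching Docker to the systemd driver before initializing silences that warning (an optional step, not in the original walkthrough):

```shell
# Configure Docker for the systemd cgroup driver, then restart it
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

# Verify: should print "systemd"
docker info --format '{{.CgroupDriver}}'
```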

Execution on the master node:
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:55:49 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:54:13 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@VM-3-4-centos ~]# 

Execution on the worker node: the output matches the master's exactly, ending at the [root@VM-3-16-centos ~]# prompt.

Install the k8s components

Run on both nodes (master and worker):
yum -y install kubeadm-1.21.3 kubelet-1.21.3 kubectl-1.21.3
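kubeadm's preflight check warns when the kubelet service is not enabled, so it's worth enabling it right after installation; pinning the packages is an optional extra (neither step is in the original walkthrough):

```shell
# Enable kubelet on boot; it will crash-loop until kubeadm configures it, which is expected
systemctl enable --now kubelet

# Optional: lock the k8s packages at 1.21.3 so a yum update can't drift versions
yum -y install yum-plugin-versionlock
yum versionlock kubeadm kubelet kubectl
```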

Execution:
Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7                                         
  cri-tools.x86_64 0:1.13.0-0                                                  
  kubernetes-cni.x86_64 0:0.8.7-0                                              
  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7                                  
  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7                                  
  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2                                    
  socat.x86_64 0:1.7.3.2-2.el7                                                 

Complete!
[root@VM-3-4-centos ~]# 

Install k8s

A k8s cluster has two host roles, master and worker node; creating the cluster starts with initializing the master.

Server list:

| OS | IP address | Role |
| --- | --- | --- |
| CentOS Linux release 7.9.2009 (Core) | 172.16.3.4 | master (the node we initialize) |
| CentOS Linux release 7.9.2009 (Core) | 172.16.3.16 | worker node |

Initialize the master node

Run on the master.

Change --apiserver-advertise-address to your actual private IP. Initialization command:

kubeadm init \
  --apiserver-advertise-address=172.16.3.4 \
  --kubernetes-version v1.21.3 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
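The init output below notes that pulling the control-plane images can take a minute or two; they can be fetched ahead of time (optional):

```shell
# Pre-pull the control-plane images for the pinned version
kubeadm config images pull --kubernetes-version v1.21.3
```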

Master initialization:

[root@VM-3-4-centos ~]# kubeadm init \
>   --apiserver-advertise-address=172.16.3.4 \
>   --kubernetes-version v1.21.3 \
>   --service-cidr=10.96.0.0/12 \
>   --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vm-3-4-centos] and IPs [10.96.0.1 172.16.3.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vm-3-4-centos] and IPs [172.16.3.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vm-3-4-centos] and IPs [172.16.3.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.503187 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vm-3-4-centos as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vm-3-4-centos as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xin3ax.ivwqtsi32jrubc54
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.3.4:6443 --token xin3ax.ivwqtsi32jrubc54 \
	--discovery-token-ca-cert-hash sha256:f346670db4d41bf446b07b74c97e374551678bcfd210c2597fa0ba530930440d 
[root@VM-3-4-centos ~]# 

Command notes

mkdir -p $HOME/.kube: create the .kube directory

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config: copy the admin kubeconfig into it

sudo chown $(id -u):$(id -g) $HOME/.kube/config: hand ownership to the current user
[root@VM-3-4-centos ~]#   mkdir -p $HOME/.kube
[root@VM-3-4-centos ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@VM-3-4-centos ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
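The bootstrap token embedded in the join command expires after 24 hours by default. If a worker needs to join later, a fresh join command can be printed on the master (not part of the original steps):

```shell
# Create a new token and print the complete kubeadm join command
kubeadm token create --print-join-command
```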

Join the worker node

[root@VM-3-16-centos ~]# kubeadm join 172.16.3.4:6443 --token xin3ax.ivwqtsi32jrubc54 --discovery-token-ca-cert-hash sha256:f346670db4d41bf446b07b74c97e374551678bcfd210c2597fa0ba530930440d 
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@VM-3-16-centos ~]# 

Check the cluster:
[root@VM-3-4-centos ~]# kubectl get node
NAME             STATUS     ROLES                  AGE    VERSION
vm-3-16-centos   NotReady   <none>                 37s    v1.21.3
vm-3-4-centos    NotReady   control-plane,master   6m6s   v1.21.3
[root@VM-3-4-centos ~]# 

Install flannel

If you're not sure how to install it, see:

https://cloud.tencent.com/developer/article/1869235

[root@VM-3-4-centos ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21 , unavailable in v1.25 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@VM-3-4-centos ~]# 

In about a minute, both nodes go Ready:
[root@VM-3-4-centos ~]# kubectl get node
NAME             STATUS   ROLES                  AGE    VERSION
vm-3-16-centos   Ready    <none>                 37s    v1.21.3
vm-3-4-centos    Ready    control-plane,master   6m6s   v1.21.3
[root@VM-3-4-centos ~]# 
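Node Ready status alone doesn't show whether the flannel and CoreDNS pods are healthy; a quick optional check:

```shell
# flannel, coredns, and kube-proxy pods should all be Running
kubectl get pods -n kube-system -o wide
```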

Create a deployment

kubectl create deployment web --image=nginx

Creation:
[root@VM-3-4-centos ~]# kubectl create deployment web --image=nginx
deployment.apps/web created
[root@VM-3-4-centos ~]# 
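To actually reach the nginx pod from outside the cluster, the deployment can be exposed as a NodePort service (a follow-up sketch, not part of the original walkthrough):

```shell
# Expose port 80 of the web deployment on a random NodePort
kubectl expose deployment web --port=80 --type=NodePort

# Look up the assigned node port, then curl <node-ip>:<node-port>
kubectl get svc web
```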
