Cluster node environment
| Role | OS | IP | CPU/MEM | Disk |
| --- | --- | --- | --- | --- |
| k8s-master | CentOS Linux 7 (Core) | 172.16.132.231 | 2c8g | 100G |
| k8s-node2 | CentOS Linux 7 (Core) | 172.16.132.232 | 8c16g | 100G |
| k8s-node3 | CentOS Linux 7 (Core) | 172.16.132.233 | 8c16g | 100G |
| k8s-node4 | CentOS Linux 7 (Core) | 172.16.132.234 | 4c8g | 100G |
| k8s-node5 | CentOS Linux 7 (Core) | 172.16.132.235 | 4c8g | 100G |
| storage-01 (nfs) | CentOS Linux 7 (Core) | 172.16.132.231 | 4c8g | 100G |
| storage-02 (nfs) | CentOS Linux 7 (Core) | 172.16.132.175 | 4c8g | 140G |
| Harbor | CentOS Linux 7 (Core) | 172.16.132.183 | 2c4g | 100G |
Node environment initialization
- Update the base environment
yum update -y
yum install -y wget vim net-tools epel-release
- Disable firewalld
systemctl disable firewalld
systemctl stop firewalld
- Disable SELinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
if [ "$(getenforce)" == "Enforcing" ]; then
    setenforce 0
else
    echo "current selinux status: $(getenforce)"
fi
- Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
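A quick check (not in the original) to confirm swap is really off:

```bash
# both commands should report no active swap devices
swapon -s
free -m | grep -i swap   # the Swap line should read 0 0 0
```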
- Add hostname resolution entries
cat << EOF >> /etc/hosts
172.16.132.231 dev-k8s-01.kubemaster.top
172.16.132.232 dev-k8s-02.kubemaster.top
172.16.132.233 dev-k8s-03.kubemaster.top
172.16.132.234 dev-k8s-04.kubemaster.top
172.16.132.235 dev-k8s-05.kubemaster.top
EOF
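A spot-check that the new entries resolve on each node (an addition, not from the original):

```bash
# getent consults /etc/hosts directly; ping confirms reachability
getent hosts dev-k8s-01.kubemaster.top
ping -c 1 dev-k8s-05.kubemaster.top
```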
- Optimize kernel parameters
cat << EOF >> /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
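On stock CentOS 7 kernels the two `net.bridge.*` keys only exist once the br_netfilter module is loaded, so the step above can silently do nothing. A sketch (an addition, not from the original) of making the module load persistent:

```bash
# load the bridge netfilter module now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system   # re-apply so the keys in k8s.conf take effect
```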
- Update the yum repo configuration
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.$(date +%F).backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache fast
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
yum clean all
yum makecache fast
yum -y update
- Install docker
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.09.9-3.el7
mkdir /etc/docker -pv
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl enable --now docker.service
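To confirm the daemon picked up daemon.json, in particular the systemd cgroup driver that must match the kubelet's, a quick check:

```bash
systemctl is-active docker
docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: systemd
```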
- Install the bootstrap tools
yum install -y kubeadm kubelet
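The bare install above takes whatever version the repo currently serves, which may drift ahead of the v1.16.0 images pulled in the next step. A hedged variant that pins the 1.16 line and pre-enables the kubelet (kubeadm warns later if it is not enabled):

```bash
# pin to the 1.16 line; the exact patch release shown here is illustrative
# kubectl is only strictly needed on the master
yum install -y kubeadm-1.16.0 kubelet-1.16.0 kubectl-1.16.0
systemctl enable kubelet.service
```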
- Pull the base images
KUBE_VERSION=v1.16.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.15-0
CORE_DNS_VERSION=1.6.2
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})
for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done
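A sanity check that all seven images were pulled and retagged:

```bash
# expect kube-proxy/scheduler/controller-manager/apiserver, pause, etcd, coredns
docker images | grep 'k8s.gcr.io'
```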
All ten steps above should be run on every node; note that workers still need kubeadm installed for the `kubeadm join` step below (kubectl is the tool they can do without).
Initialize the k8s 1.16.0 cluster
Initialize the cluster with kubeadm
[root@dev-k8s-01 ~]# sudo kubeadm init \
> --apiserver-advertise-address 172.16.132.231 \
> --kubernetes-version=v1.16.0 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dev-k8s-01.kubemaster.top kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.132.231]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [dev-k8s-01.kubemaster.top localhost] and IPs [172.16.132.231 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [dev-k8s-01.kubemaster.top localhost] and IPs [172.16.132.231 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 39.003840 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node dev-k8s-01.kubemaster.top as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node dev-k8s-01.kubemaster.top as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[kubelet-check] Initial timeout of 40s passed.
[bootstrap-token] Using token: 9nwjok.ykyphybsveka8gev
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.132.231:6443 --token 9nwjok.ykyphybsveka8gev \
    --discovery-token-ca-cert-hash sha256:b92d7553a1da683a315ad2f4f5fcc855e2d630da0c7553467cdf2db3bd25a3ff
Initialize the kubectl config file
[root@dev-k8s-01 ~]# mkdir -p $HOME/.kube
[root@dev-k8s-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@dev-k8s-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
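When working as root, a common no-copy alternative (not shown in the transcript above) is to point kubectl at the admin kubeconfig directly:

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```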
Add nodes
- Join the worker nodes; the command is identical on each one (the transcript below was captured on dev-k8s-05):
[root@dev-k8s-05 ~]# kubeadm join 172.16.132.231:6443 --token 9pr3rj.0u8m510fai0op75h \
    --discovery-token-ca-cert-hash sha256:b86bdaaa0bed56e846adb0abc625cf29902dec9e3130d0ff7dae42ffb2e13349
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@dev-k8s-05 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@dev-k8s-05 ~]#
Repeat the same join on the remaining workers.
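Note that the token used on dev-k8s-05 (9pr3rj...) differs from the one printed by kubeadm init (9nwjok...): bootstrap tokens expire after 24 hours by default. If the original token has lapsed by the time a node joins, generate a fresh join command on the master:

```bash
# prints a complete, ready-to-paste kubeadm join command
kubeadm token create --print-join-command
```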
Verify the cluster status
[root@dev-k8s-01 ~]# kubectl cluster-info
Kubernetes master is running at https://172.16.132.231:6443
KubeDNS is running at https://172.16.132.231:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
➜ ~ (☸ kubernetes-admin@kubernetes:default) kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
dev-k8s-01.kubemaster.top Ready master 14h v1.16.3 172.16.132.231 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://18.9.9
dev-k8s-02.kubemaster.top Ready <none> 14h v1.16.3 172.16.132.232 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://18.9.9
dev-k8s-03.kubemaster.top Ready <none> 14h v1.16.3 172.16.132.233 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://18.9.9
dev-k8s-04.kubemaster.top Ready <none> 14h v1.16.3 172.16.132.234 <none> CentOS Linux 7 (Core) 3.10.0-1062.4.1.el7.x86_64 docker://18.9.9
dev-k8s-05.kubemaster.top Ready <none> 13h v1.16.3 172.16.132.235 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://18.9.9
➜ ~ (☸ kubernetes-admin@kubernetes:default) kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5644d7b6d9-96xm6 1/1 Running 0 14h 10.244.3.2 dev-k8s-04.kubemaster.top <none> <none>
kube-system coredns-5644d7b6d9-nkb9f 1/1 Running 0 14h 10.244.1.2 dev-k8s-02.kubemaster.top <none> <none>
kube-system etcd-dev-k8s-01.kubemaster.top 1/1 Running 0 14h 172.16.132.231 dev-k8s-01.kubemaster.top <none> <none>
kube-system kube-apiserver-dev-k8s-01.kubemaster.top 1/1 Running 0 14h 172.16.132.231 dev-k8s-01.kubemaster.top <none> <none>
kube-system kube-controller-manager-dev-k8s-01.kubemaster.top 1/1 Running 0 14h 172.16.132.231 dev-k8s-01.kubemaster.top <none> <none>
kube-system kube-proxy-bhtjc 1/1 Running 0 14h 172.16.132.232 dev-k8s-02.kubemaster.top <none> <none>
kube-system kube-proxy-h2ltx 1/1 Running 0 14h 172.16.132.231 dev-k8s-01.kubemaster.top <none> <none>
kube-system kube-proxy-kh9k9 1/1 Running 0 14h 172.16.132.234 dev-k8s-04.kubemaster.top <none> <none>
kube-system kube-proxy-lfh46 1/1 Running 0 14h 172.16.132.233 dev-k8s-03.kubemaster.top <none> <none>
kube-system kube-proxy-pcm5d 1/1 Running 0 13h 172.16.132.235 dev-k8s-05.kubemaster.top <none> <none>
kube-system kube-scheduler-dev-k8s-01.kubemaster.top 1/1 Running 0 14h 172.16.132.231 dev-k8s-01.kubemaster.top <none> <none>
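One more quick probe, not in the original capture; on 1.16 the (since-deprecated) component status API still reports control-plane health:

```bash
# scheduler, controller-manager, and etcd-0 should all show Healthy
kubectl get componentstatuses
```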
Install the k8s flannel plugin
flannel maintains per-node subnet routing information (stored in etcd) so that every Pod in the cluster can communicate with Pods on other nodes.
Install the flannel network plugin
wget -O /opt/k8sworkspces/kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
FLANNEL_VERSION=v0.11.0
QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos
images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)
for imageName in ${images[@]} ; do
docker pull $QINIU_URL/$imageName
docker tag $QINIU_URL/$imageName $QUAY_URL/$imageName
docker rmi $QINIU_URL/$imageName
done # you can also pull only the variant matching your machine's architecture (check with `uname -m`)
kubectl apply -f /opt/k8sworkspces/kube-flannel.yml # install flannel
➜ ~ (☸ kubernetes-admin@kubernetes:default) kubectl get pods --all-namespaces -o wide | grep flannel
kube-system kube-flannel-ds-amd64-9tnc7 1/1 Running 0 14h 172.16.132.234 dev-k8s-04.kubemaster.top <none> <none>
kube-system kube-flannel-ds-amd64-cjh4s 1/1 Running 0 14h 172.16.132.231 dev-k8s-01.kubemaster.top <none> <none>
kube-system kube-flannel-ds-amd64-fhlk4 1/1 Running 0 13h 172.16.132.235 dev-k8s-05.kubemaster.top <none> <none>
kube-system kube-flannel-ds-amd64-fnfpj 1/1 Running 0 14h 172.16.132.233 dev-k8s-03.kubemaster.top <none> <none>
kube-system kube-flannel-ds-amd64-v5qtj 1/1 Running 0 14h 172.16.132.232 dev-k8s-02.kubemaster.top <none> <none>
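A minimal smoke test of the cross-node Pod connectivity claimed above, assuming the two throwaway pods land on different nodes (the pod names and busybox tag are illustrative):

```bash
# start two pods, ping one from the other, then clean up
kubectl run pingtest-a --image=busybox:1.31 --restart=Never -- sleep 600
kubectl run pingtest-b --image=busybox:1.31 --restart=Never -- sleep 600
kubectl wait --for=condition=Ready pod/pingtest-a pod/pingtest-b --timeout=60s
B_IP=$(kubectl get pod pingtest-b -o jsonpath='{.status.podIP}')
kubectl exec pingtest-a -- ping -c 3 "$B_IP"
kubectl delete pod pingtest-a pingtest-b
```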
krew: the kubectl plugin manager
krew makes it easy to manage kubectl plugins: installing, uninstalling, searching, and upgrading them.
Installation
(
set -x; cd /opt/k8sworkspces/krew &&
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/download/v0.3.2/krew.{tar.gz,yaml}" &&
tar zxvf krew.tar.gz &&
./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" install \
    --manifest=krew.yaml --archive=krew.tar.gz
)
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
[root@dev-k8s-01 krew]# kubectl krew install ca-cert # install the ca-cert plugin
[root@dev-k8s-01 krew]# kubectl ca-cert
-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTE5MTExNTA0MjEzOVoXDTI5MTExMjA0MjEzOVowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANEi
tPdWINQZfZqM4c/uaOzsBBByn0CLLLmMdiKF4Gpk9proDoR9eOMhQiiVLZ4tFFsb
POTwq+MvHe4kEsunl/hBwNbXvGfbvnr+vX9ZsDfU5FT5O55Zryq5jgANDKFChKx9
R91QsbCeQKIWlc9AFdot8ig9LhYTfHJRfMeUBYl5Xzoof8YRMsJ0jOKLWca+oCfd
doLKda9VpahU2AEmEFHuD6ctwBGFObadSktoOvr0Gfzo4cXRkjGXp4G1U8O1LLsU
HiypNN4m7Romy4tIjPAxDAoDDyjA8OrbPlvJt8Oo0CHcAxFZDJCsKAG1s0nS7PJj
vR2ULtIrHAm5QZa8BmMCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKAi1Fg/2MlFxPbq9yaNkBhAV2ou
/VbbuEJF1c92Tk24cuJV3vuYoTmWIGp1LYTLTW/xcfwFoanLRPBlBONoJRzXLIZD
/mmuYMrTaKMwbCz2t4awqQyDb8A3RcgTrSfCWMs0uyvjPVgiJDfMlg0WDJ4kPb3Y
SQv7UaaNa57gkEHB1PJy10n1E3gAcb6NVxvly7cHVaJlenZY6mkT40K8zVOXuM/G
ausCNXEfEUXED2C8Ippj/sr1TgRlD8Gfi+Xp7XzHTeu5A+ac4YPmnoW8jurzo5z5
Q5TDBFRaOTyRgUxYt+PKv01S9tTiHgkxHoBzPQF7Z2TuRNKXoVQeXiUzW/s=
-----END CERTIFICATE-----
[root@dev-k8s-01 krew]# kubectl krew --help # list krew's supported subcommands and options
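Beyond install, day-to-day usage looks like this (a sketch of krew's stock subcommands):

```bash
kubectl krew search            # discover available plugins
kubectl krew list              # show what is installed
kubectl krew upgrade           # upgrade installed plugins
kubectl krew uninstall ca-cert # remove a plugin
```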