Offline deployment of KubeSphere 3.4.1 + K8s with the latest KubeKey 3.1.5 (now even easier)

2024-08-30 11:38:34

The artifacts and images for this article are available from the author's WeChat official account (reply ks3.4离线包).

The previous article covered offline deployment of KubeSphere 3.3.1 + k8s with kk 2.3.0. Given kk 2.3.0's problems when scaling out control-plane nodes and a few issues in KubeSphere 3.3.1, this article upgrades to the latest kk 3.1.5 and KubeSphere 3.4.1, which makes offline deployment even simpler. Building the artifact is largely the same as in the previous article: 信创:海光(x86) 银河麒麟(kylin v10)离线部署k8s和KubeSphere(一).

Server configuration

| Hostname | IP | CPU | OS | Purpose |
| --- | --- | --- | --- | --- |
| node1 | 192.168.120.190 | Hygon C86 3350 | Kylin V10 SP3 | Offline-environment control-plane node and image registry node |
| node2 | 192.168.120.191 | Hygon C86 3350 | Kylin V10 SP3 | Worker node |
| deploy | 192.168.200.7 | Hygon C86 3350 | Kylin V10 SP3 | Internet-connected host for building the offline package |

Software versions used in this environment

  • Server CPU: Hygon C86 3350
  • OS: Kylin V10 SP3 x86_64
  • Docker: 24.0.9
  • Harbor: v2.10.1
  • KubeSphere: v3.4.1
  • Kubernetes: v1.22.12
  • KubeKey: v3.1.5

1. Introduction

This article again uses Kylin V10 as the example, building on the previous article on offline deployment of k8s and KubeSphere on Kylin V10 (x86), simplified here with shell automation. Other operating systems follow the same approach. A later article will cover building offline packages for multiple operating systems and one-click deployment; stay tuned.

1.1 Confirm the operating system configuration

Before running the tasks below, confirm the relevant OS settings.

  • OS type

```bash
[root@localhost ~]# cat /etc/os-release
NAME="Kylin Linux Advanced Server"
VERSION="V10 (Lance)"
ID="kylin"
VERSION_ID="V10"
PRETTY_NAME="Kylin Linux Advanced Server V10 (Lance)"
ANSI_COLOR="0;31"
```
  • CPU

```bash
[root@node2 ~]# lscpu
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      48 bits physical, 48 bits virtual
CPU(s):                             16
On-line CPU(s) list:                0-15
Thread(s) per core:                 2
Core(s) per socket:                 8
Socket(s):                          1
NUMA node(s):                       1
Vendor ID:                          HygonGenuine
BIOS Vendor ID:                     Chengdu Hygon
CPU family:                         24
Model:                              2
Model name:                         Hygon C86 3350  8-core Processor
BIOS Model name:                    Hygon C86 3350  8-core Processor
Stepping:                           2
...
```
  • Kernel

```bash
[root@node1 kubesphere]# uname -a
Linux node1 4.19.90-52.22.v2207.ky10.x86_64 #1 SMP Tue Mar 14 12:19:10 CST 2023 x86_64 x86_64 x86_64 GNU/Linux
```

2. Building the offline installation package

Building the offline package requires an internet-connected machine running the same OS. Here the host named deploy serves that purpose.

2.1 Download the k8s dependency packages for Kylin

This step is where x86 operating systems differ most when installing k8s.

```bash
mkdir -p /root/kubesphere/k8s-init
# Download the dependency RPMs (without installing them)
yum -y install openssl socat conntrack ipset ebtables chrony ipvsadm --downloadonly --downloaddir=/root/kubesphere/k8s-init
# Write the install script
vim install.sh
#!/bin/bash
rpm -ivh *.rpm --force --nodeps
# Pack it all up for offline use
tar -czvf k8s-init-KylinV10.tar.gz ./k8s-init/*
```

2.2 Download kk

  • Option 1

```bash
[root@node1 kubesphere]# export KKZONE=cn
[root@node1 kubesphere]#  curl -sfL https://get-kk.kubesphere.io | VERSION=v3.1.5 sh -

Downloading kubekey v3.1.5 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.1.5/kubekey-v3.1.5-linux-amd64.tar.gz ...


Kubekey v3.1.5 Download Complete!
[root@node1 kubesphere]# ls
kk  kubekey-v3.1.5-linux-amd64.tar.gz
```

  • Option 2

Use a local machine to download the release directly from GitHub (Releases · kubesphere/kubekey), upload it to /root/kubesphere on the server, and extract it.

```bash
tar zxf kubekey-v3.1.5-linux-amd64.tar.gz
```
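To confirm the binary runs on the target machine, you can print its version (kk provides a version subcommand):

```bash
./kk version
```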

2.3 Edit the manifest file for the artifact

When generating the artifact with the example from the official docs, all sorts of image errors came up, so no images are downloaded here at all (older kk versions required at least one image); the images are handled by shell scripts instead. The OS ISO is not downloaded either; the dependency package built in step 2.1 is used in its place. Advantages of this approach:

  • Smaller artifact (938 MB; skipping Harbor saves another 300+ MB, and the image files are a separate 1.9 GB)
  • More flexibility to change images
  • Components can be added or removed as needed

Disadvantages:

  • More scripts to write
  • Extra steps during offline deployment
```bash
vim manifest-kylin.yaml
```

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: kylin
    version: "V10"
    osImage: Kylin Linux Advanced Server V10
    repository:
      iso:
        localPath:
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.22.12
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.3
    crictl:
      version: v1.29.0
    #docker-registry:
    #  version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
```

Notes

  • If the exported artifact should include OS dependency files (e.g. conntrack, chrony), set .repository.iso.url in the operatingSystems entry to the download URL of the matching ISO, or download the ISO in advance, put its path in localPath, and delete the url field. Since this article targets a domestic OS with no matching ISO, and the latest kk no longer requires one, it is left empty here (see the sketch below).
  • harbor plus docker-compose, or docker-registry: enable one of the two; the chosen registry is what KubeKey later deploys as the self-hosted repository to push images to.
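For reference, if a matching ISO did exist, the repository block in the manifest would be filled in roughly like this (the path and URL below are hypothetical placeholders, not real artifacts):

```yaml
    repository:
      iso:
        # Option A: a pre-downloaded ISO (delete the url key)
        localPath: /root/kubesphere/kylin-v10-deps.iso   # hypothetical path
        # Option B: let kk download it (delete the localPath key)
        url: https://example.com/kylin-v10-deps.iso      # hypothetical URL
```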

2.4 Export the offline artifact

```bash
export KKZONE=cn
./kk artifact export -m manifest-kylin.yaml -o ks3.4-artifact.tar.gz
```

Note: an artifact is a tgz package, exported according to the given manifest file, that contains the image tarballs and related binary files. KubeKey's commands for initializing the image registry, creating a cluster, adding nodes, and upgrading a cluster all accept an artifact; KubeKey unpacks it automatically and uses the unpacked files directly while executing the command.

  • Keep the network connection up during the export.

After a successful export, the tail of the output looks like this:

```
15:08:55 CST success: [LocalHost]
15:08:55 CST [ChownOutputModule] Chown output file
15:08:55 CST success: [LocalHost]
15:08:55 CST [ChownWorkerModule] Chown ./kubekey dir
15:08:55 CST success: [LocalHost]
15:08:55 CST Pipeline[ArtifactExportPipeline] execute successfully
```

2.5 Manually pull the k8s-related images

```bash
vim pull-images.sh
```

```bash
#!/bin/bash
# pull-images.sh
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:v1.2.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.14.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.9.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-curator:v0.0.5
```

```bash
# Run the script to pull all images
source pull-images.sh
```

2.6 Retag the images

```bash
vim tag-images.sh
```

Adjust the Harbor address and project name to match your own registry:

```bash
#!/bin/bash
# tag-images.sh

HarborAddr="dockerhub.kubekey.local/kubesphereio"
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3 $HarborAddr/kube-controllers:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3 $HarborAddr/cni:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3 $HarborAddr/pod2daemon-flexvol:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3 $HarborAddr/node:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine $HarborAddr/haproxy:2.9.6-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1 $HarborAddr/ks-installer:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1 $HarborAddr/ks-console:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1 $HarborAddr/ks-controller-manager:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1 $HarborAddr/ks-apiserver:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0 $HarborAddr/notification-manager:v2.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0 $HarborAddr/notification-manager-operator:v2.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0 $HarborAddr/thanos:v0.31.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0 $HarborAddr/opensearch:2.6.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 $HarborAddr/k8s-dns-node-cache:1.22.20
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:v1.2.0 $HarborAddr/log-sidecar-injector:v1.2.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1 $HarborAddr/prometheus:v2.39.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0 $HarborAddr/kube-state-metrics:v2.6.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.14.0 $HarborAddr/fluentbit-operator:v0.14.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12 $HarborAddr/kube-apiserver:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12 $HarborAddr/kube-controller-manager:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12 $HarborAddr/kube-scheduler:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12 $HarborAddr/kube-proxy:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 $HarborAddr/provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0 $HarborAddr/linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.9.4 $HarborAddr/fluent-bit:v1.9.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1 $HarborAddr/configmap-reload:v0.7.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1 $HarborAddr/prometheus-config-reloader:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1 $HarborAddr/prometheus-operator:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1 $HarborAddr/node-exporter:v1.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0 $HarborAddr/kubectl:v1.22.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0 $HarborAddr/notification-tenant-sidecar:v3.2.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0 $HarborAddr/alertmanager:v0.23.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0 $HarborAddr/kube-rbac-proxy:v0.11.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03 $HarborAddr/docker:19.03
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5 $HarborAddr/pause:3.5
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0 $HarborAddr/snapshot-controller:v4.0.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0 $HarborAddr/coredns:1.8.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1 $HarborAddr/busybox:1.31.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4 $HarborAddr/defaultbackend-amd64:1.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-curator:v0.0.5 $HarborAddr/opensearch-curator:v0.0.5
```

```bash
# Run the script to retag all images
source tag-images.sh
```

2.7 Export and save the images

```bash
mkdir ks3.4.1-images
cd ks3.4.1-images
vim save-images.sh
```

This part differs slightly from the previous article, so that individual image versions are easier to change later:

```bash
#!/bin/bash

HarborAddr="dockerhub.kubekey.local/kubesphereio"
docker save -o kube-controllers:v3.27.3.tar $HarborAddr/kube-controllers:v3.27.3
docker save -o cni:v3.27.3.tar $HarborAddr/cni:v3.27.3
docker save -o pod2daemon-flexvol:v3.27.3.tar $HarborAddr/pod2daemon-flexvol:v3.27.3
docker save -o node:v3.27.3.tar $HarborAddr/node:v3.27.3
docker save -o haproxy:2.9.6-alpine.tar $HarborAddr/haproxy:2.9.6-alpine
docker save -o ks-installer:v3.4.1.tar $HarborAddr/ks-installer:v3.4.1
docker save -o ks-console:v3.4.1.tar $HarborAddr/ks-console:v3.4.1
docker save -o ks-controller-manager:v3.4.1.tar $HarborAddr/ks-controller-manager:v3.4.1
docker save -o ks-apiserver:v3.4.1.tar $HarborAddr/ks-apiserver:v3.4.1
docker save -o notification-manager:v2.3.0.tar $HarborAddr/notification-manager:v2.3.0
docker save -o notification-manager-operator:v2.3.0.tar $HarborAddr/notification-manager-operator:v2.3.0
docker save -o thanos:v0.31.0.tar $HarborAddr/thanos:v0.31.0
docker save -o opensearch:2.6.0.tar $HarborAddr/opensearch:2.6.0
docker save -o k8s-dns-node-cache:1.22.20.tar $HarborAddr/k8s-dns-node-cache:1.22.20
docker save -o log-sidecar-injector:v1.2.0.tar $HarborAddr/log-sidecar-injector:v1.2.0
docker save -o prometheus:v2.39.1.tar $HarborAddr/prometheus:v2.39.1
docker save -o kube-state-metrics:v2.6.0.tar $HarborAddr/kube-state-metrics:v2.6.0
docker save -o fluentbit-operator:v0.14.0.tar $HarborAddr/fluentbit-operator:v0.14.0
docker save -o kube-apiserver:v1.22.12.tar $HarborAddr/kube-apiserver:v1.22.12
docker save -o kube-controller-manager:v1.22.12.tar $HarborAddr/kube-controller-manager:v1.22.12
docker save -o kube-scheduler:v1.22.12.tar $HarborAddr/kube-scheduler:v1.22.12
docker save -o kube-proxy:v1.22.12.tar $HarborAddr/kube-proxy:v1.22.12
docker save -o provisioner-localpv:3.3.0.tar $HarborAddr/provisioner-localpv:3.3.0
docker save -o linux-utils:3.3.0.tar $HarborAddr/linux-utils:3.3.0
docker save -o fluent-bit:v1.9.4.tar $HarborAddr/fluent-bit:v1.9.4
docker save -o configmap-reload:v0.7.1.tar $HarborAddr/configmap-reload:v0.7.1
docker save -o prometheus-config-reloader:v0.55.1.tar $HarborAddr/prometheus-config-reloader:v0.55.1
docker save -o prometheus-operator:v0.55.1.tar $HarborAddr/prometheus-operator:v0.55.1
docker save -o node-exporter:v1.3.1.tar $HarborAddr/node-exporter:v1.3.1
docker save -o kubectl:v1.22.0.tar $HarborAddr/kubectl:v1.22.0
docker save -o notification-tenant-sidecar:v3.2.0.tar $HarborAddr/notification-tenant-sidecar:v3.2.0
docker save -o alertmanager:v0.23.0.tar $HarborAddr/alertmanager:v0.23.0
docker save -o kube-rbac-proxy:v0.11.0.tar $HarborAddr/kube-rbac-proxy:v0.11.0
docker save -o docker:19.03.tar $HarborAddr/docker:19.03
docker save -o pause:3.5.tar $HarborAddr/pause:3.5
docker save -o snapshot-controller:v4.0.0.tar $HarborAddr/snapshot-controller:v4.0.0
docker save -o coredns:1.8.0.tar $HarborAddr/coredns:1.8.0
docker save -o busybox:1.31.1.tar $HarborAddr/busybox:1.31.1
docker save -o defaultbackend-amd64:1.4.tar $HarborAddr/defaultbackend-amd64:1.4
docker save -o opensearch-curator:v0.0.5.tar $HarborAddr/opensearch-curator:v0.0.5
```

Write the push script, load-push.sh:

```bash
#!/bin/bash
# load-push.sh
FILES=$(find . -type f \( -iname "*.tar" -o -iname "*.tar.gz" \) -printf '%P\n')

Harbor="dockerhub.kubekey.local"

docker login -u admin -p Harbor12345 ${Harbor}
echo "--------[Login Harbor succeed]--------"

# Load each ".tar" / ".tar.gz" file and push the resulting image
for file in ${FILES}
do
    echo "--------[Loading Docker image from $file]--------"
    docker load -i "$file" > loadimages
    IMAGE=`cat loadimages | grep 'Loaded image:' | awk '{print $3}' | head -1`
    echo "--------[$IMAGE]--------"
    docker push $IMAGE
done
echo "--------[All Docker images push successfully]--------"

Compress the k8s and ks images:

```bash
cd ..
tar -czvf ks3.4.1-images.tar.gz ks3.4.1-images
```
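Since pull-images.sh, tag-images.sh, and save-images.sh differ only in the command applied to the same image list, the three steps can also be driven from a single list. Below is an optional sketch (not one of the original scripts; the list is abbreviated) that keeps them in sync:

```bash
#!/bin/bash
# sync-images.sh -- pull, retag, and save each image in one pass (illustrative sketch)
SRC="registry.cn-beijing.aliyuncs.com/kubesphereio"
DST="dockerhub.kubekey.local/kubesphereio"
IMAGES=(
  kube-controllers:v3.27.3
  cni:v3.27.3
  ks-installer:v3.4.1
  # ... append the remaining entries from pull-images.sh
)
for img in "${IMAGES[@]}"; do
  docker pull "$SRC/$img"
  docker tag "$SRC/$img" "$DST/$img"
  docker save -o "${img}.tar" "$DST/$img"   # ':' is a valid character in Linux filenames
done
```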

3. Installing the cluster offline

3.1 Remove the podman that ships with Kylin

podman is the container engine bundled with Kylin. Uninstall it outright to avoid conflicts with Docker later; otherwise coredns/nodelocaldns will fail to start and assorted Docker permission problems will follow. Run on all nodes:

```bash
yum remove podman
```
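An optional sanity check (not from the original article) that podman is really gone:

```bash
rpm -qa | grep -i podman || echo "podman removed"
```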

3.2 Copy the packages to the offline environment

Copy the downloaded KubeKey binary, the artifact, the scripts, and the exported images to the installation node in the offline environment via a USB drive or similar medium.

3.3 Install the k8s dependency packages

On every node, upload k8s-init-KylinV10.tar.gz, extract it, and run install.sh, as sketched below.
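A minimal sketch, assuming the tarball was uploaded to /root and that install.sh was packed alongside the RPMs as in section 2.1:

```bash
cd /root
tar -zxvf k8s-init-KylinV10.tar.gz
cd k8s-init
bash install.sh   # runs: rpm -ivh *.rpm --force --nodeps
```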

3.4 Modify the config-sample.yaml file

Update the node and Harbor details as needed.

  • A registry deployment node must be specified (KubeKey uses it to deploy the self-hosted Harbor registry).
  • Under registry, set type to harbor; otherwise a plain docker registry is installed by default.

If no config-sample.yaml exists yet, generate one first, as sketched below; then edit it as follows.
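The flags below follow standard kk 3.x usage (adjust versions to match this article):

```bash
./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.4.1 -f config-sample.yaml
```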
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # If you install Kubernetes on ARM, add "arch: arm64". For example, {...user: ubuntu, password: Qcloud@123, arch: arm64}.
  - {name: node1, address: 192.168.120.190, internalAddress: "192.168.120.190", user: root, password: "123***"}
  - {name: node2, address: 192.168.120.191, internalAddress: "192.168.120.191", user: root, password: "123***"}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
      #  - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    - node2
    registry:
      - node1
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""      
    port: 6443
  system:
    ntpServers:
      - node1 # Set the node name in `hosts` as ntp server if no public ntp servers access.
    timezone: "Asia/Shanghai"

  kubernetes:
    version: v1.22.12
    containerManager: docker
    clusterName: cluster.local
    # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    autoRenewCerts: true
    # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    maxPods: 210
  etcd:
    type: kubekey  
    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #     - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key
    dataDir: "/var/lib/etcd"
    heartbeatInterval: 250
    electionTimeout: 5000
    snapshotCount: 10000
    autoCompactionRetention: 8
    metrics: basic
    quotaBackendBytes: 2147483648 
    maxRequestBytes: 1572864
    maxSnapshots: 5
    maxWals: 5
    logLevel: info
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /var/openebs/local # base path of the local PV provisioner
  registry:
    type: harbor
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "admin"
        password: Harbor12345
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons: [] # You can install cloud-native addons (Chart or YAML) by using this field.

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
...
```

3.5 Install the Harbor private registry from the artifact

```bash
./kk init registry -f config-sample.yaml -a ks3.4-artifact.tar.gz
```

After the Harbor installation finishes, some containers fail to start. After troubleshooting, the fix is to grant them privileged: true in Harbor's docker-compose configuration; the following Harbor services all need it: core, harbor-db, redis, registryctl, jobservice. A sketch of the change follows.
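For example, in /opt/harbor/docker-compose.yml each affected service gets one added line (a sketch; the rest of each service definition stays as kk generated it):

```yaml
services:
  core:
    image: goharbor/harbor-core:v2.10.1
    privileged: true   # added line; repeat for harbor-db, redis, registryctl, jobservice
    # ... rest of the service definition unchanged
```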

Verify that the Harbor containers are running.
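One way to check, assuming the default install path /opt/harbor:

```bash
cd /opt/harbor
docker-compose ps   # every service should be in an Up/healthy state
```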

If any service fails to start, restart Harbor:

```bash
cd /opt/harbor
systemctl restart docker
docker-compose down
docker-compose up -d
```

Create the Harbor projects:

```bash
[root@node1 ks3.4.1-kylin]# cat create_project_harbor.sh
#!/usr/bin/env bash

url="https://dockerhub.kubekey.local"  # set url to the private registry address
user="admin"
passwd="Harbor12345"

harbor_projects=(
    kubesphereio
    kubesphere
    other
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{\"project_name\": \"${project}\", \"public\": true}" -k  # -k skips TLS verification for the self-signed certificate
done
```
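Then execute it:

```bash
bash create_project_harbor.sh
```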

3.6 Push the ks images to Harbor

Extract the image package saved in section 2.7; a minimal sketch follows.
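Assuming ks3.4.1-images.tar.gz was copied to the current directory:

```bash
tar -zxvf ks3.4.1-images.tar.gz
cd ks3.4.1-images
```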

Then run the push script:

```bash
./load-push.sh
```

3.7 Create the cluster

The -a ks3.4-artifact.tar.gz flag is not added here, because the artifact was already unpacked and its contents extracted when Harbor was created in the previous step. Passing -a again would raise errors if the artifact contains no images or the ISO is problematic.

```bash
./kk create cluster -f config-sample.yaml
```

3.8 Verify the deployment

For reasons of length, the verification results are not shown here; see the previous article. Everything worked normally.

4. Problems encountered

4.1 Harbor issues

  • docker-compose.yaml issue

After Harbor 2.10.1 is deployed from the kk offline artifact, it fails to start; several containers need privileged: true added, as described in section 3.5.

  • Missing helm-chart support

Harbor 2.10.1 has removed helm-chart support. If your project stores Helm charts in Harbor, use Harbor 2.5.3 instead; if you deploy from kk artifacts frequently, you can change the Harbor version in the kk source code.

4.2 kubectl broken on worker nodes

Symptom:

```bash
[root@node2 network-scripts]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@node2 home]# kubectl config view
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
```

Solution: configure kubectl on the worker node to point at the API server on the Kubernetes master node. The master keeps the cluster configuration in /etc/kubernetes/admin.conf; copy this file to the worker node and use it as kubectl's config.

```bash
# make sure the target directory exists on the worker node first: mkdir -p /root/.kube
# generic form (run on the master node)
scp /etc/kubernetes/admin.conf user@work-node:/root/.kube/config
# concrete example for the worker node (node2) in this environment
scp /etc/kubernetes/admin.conf root@192.168.120.191:/root/.kube/config
```

5. Summary

  • When building an artifact, kk 3.1.5 no longer requires at least one image to be included, so only the ks- and k8s-related services need to be downloaded.
  • For offline deployment, kk 3.1.5 no longer requires the OS dependency ISO, which greatly reduces the difficulty of adapting domestic operating systems.
  • With kk 3.1.5, if the artifact contains no images, installing the cluster with -a will fail; create the cluster with ./kk create cluster -f config-sample.yaml instead, without the -a flag.
  • The Harbor deployed by kk 3.1.5 has a few problems that must be fixed by hand. If Harbor is not needed, skip it when building the artifact or use docker-registry instead, which also trims a few hundred MB off the artifact.

kk 3.1.5 still has rough edges, but it is a big improvement over earlier versions. In particular, it no longer checks that the system matches an ISO file, which makes offline deployment on domestic operating systems far friendlier. Well worth trying.
