Xinchuang (信创): Hygon (x86) + Kylin V10 offline deployment of k8s and KubeSphere (Part 1)

2024-08-30 11:35:12

After the previous article on Kunpeng + Kylin offline deployment, some readers asked for an x86 + Kylin offline deployment guide, hence this article. 天行1st, WeChat official account 编码如写诗: 信创:鲲鹏(arm64) 麒麟(kylin v10)离线部署k8s和kubesphere(二)

Server configuration

| Hostname | IP | CPU | OS | Purpose |
| --- | --- | --- | --- | --- |
| node1 | 10.11.5.117 | Hygon C86 3250 | Kylin V10 SP3 | Master node and image registry node in the offline environment |
| deploy | 192.168.200.7 | Hygon C86 3250 | Kylin V10 SP3 | Internet-connected host used to build the offline package |

Software versions used in this walkthrough

  • Server CPU: Hygon C86 3250
  • Operating system: Kylin V10 SP3 x86_64
  • Docker: 24.0.7
  • Harbor: v2.7.1
  • KubeSphere: v3.3.1
  • Kubernetes: v1.22.12
  • KubeKey: v2.3.0

1. Introduction

This article describes how to build an artifact and deploy KubeSphere and a Kubernetes cluster offline on x86_64 servers running Kylin V10. For x86 deployments of KubeSphere the images are essentially the same across operating systems; the main differences lie in the dependency packages needed for Kubernetes initialization and in the repository ISO used by KubeKey. The artifact build and offline deployment process is recorded in detail below.

1.1 Confirm the operating system configuration

Before performing the tasks below, confirm the relevant operating system configuration.

  • Operating system type
[root@localhost ~]# cat /etc/os-release 
NAME="Kylin Linux Advanced Server"
VERSION="V10 (Lance)"
ID="kylin"
VERSION_ID="V10"
PRETTY_NAME="Kylin Linux Advanced Server V10 (Lance)"
ANSI_COLOR="0;31"
  • Operating system kernel
[root@node1 kubesphere]# uname -a
Linux node1 4.19.90-52.22.v2207.ky10.x86_64 #1 SMP Tue Mar 14 12:19:10 CST 2023 x86_64 x86_64 x86_64 GNU/Linux
  • Server CPU information
[root@localhost ~]# lscpu
架构:                           x86_64
CPU 运行模式:                   32-bit, 64-bit
字节序:                         Little Endian
Address sizes:                   43 bits physical, 48 bits virtual
CPU:                             16
在线 CPU 列表:                  0-15
每个核的线程数:                 2
每个座的核数:                   8
座:                             1
NUMA 节点:                      1
厂商 ID:                        HygonGenuine
CPU 系列:                       24
型号:                           2
型号名称:                       Hygon C86 3250  8-core Processor
步进:                           2
CPU MHz:                        2806.567
BogoMIPS:                       5600.35
虚拟化:                         AMD-V
L1d 缓存:                       256 KiB
L1i 缓存:                       512 KiB
L2 缓存:                        4 MiB
L3 缓存:                        16 MiB
NUMA 节点0 CPU:                 0-15
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
标记:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid a
                                 perfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoex
                                 t perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd sev ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsave
                                 erptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca

2. Building the offline installation package

The offline package build here differs slightly from the official air-gapped installation guide[1]: when following the official steps, for various reasons not all of the images could be pulled successfully, so the artifact build never completed.

2.1 Download the Kubernetes dependency packages for Kylin

This is the first of the main differences between x86 operating systems when installing Kubernetes.

mkdir -p /root/kubesphere/k8s-init
# The following command only downloads the packages into the specified directory
yum -y install openssl socat conntrack ipset ebtables chrony ipvsadm --downloadonly --downloaddir /root/kubesphere/k8s-init
# Write the install script
vim install.sh
#!/bin/bash
# 

rpm -ivh *.rpm --force --nodeps

# Pack everything into a tarball for later offline deployment
tar -czvf k8s-init-KylinV10.tar.gz ./k8s-init/*

2.2 Download the repository ISO

This is the second of the main differences between x86 operating systems when installing Kubernetes; together with the previous step, these two cover the main OS-specific differences.

Download address: KubeKey releases ISO page[2]

Kylin can use the CentOS 7 ISO directly, because the system dependency packages were already downloaded in the previous step; the ISO is only needed here so that kk can get through the subsequent steps. If you want to use Kylin packages exclusively, you can build an ISO from the Kylin package repository[3].

It is recommended to download the ISO on a local machine (using a proxy if necessary) and upload it to a directory on the server. In this article it is uploaded to /home/k8s/centos-7-amd64.iso.

2.3 Download KubeKey (kk)

  • Option 1
[root@localhost kubesphere]# export KKZONE=cn

[root@localhost kubesphere]# curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -

Downloading kubekey v2.3.0 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v2.3.0/kubekey-v2.3.0-linux-amd64.tar.gz ...


Kubekey v2.3.0 Download Complete!

[root@localhost kubesphere]# ls
kk  kubekey-v2.3.0-linux-amd64.tar.gz
  • Option 2

Use a local machine to download kk directly from the KubeKey releases page[4], upload it to /root/kubesphere on the server, and extract it:

tar zxf kubekey-v2.3.0-linux-amd64.tar.gz

2.4 Edit the artifact manifest file

Generating the artifact with the official example manifest produced various image errors, so only a single image (busybox) is listed here, just enough to generate the artifact; the other images are handled with separate scripts. Advantages:

  • Smaller artifact
  • More flexible image changes
  • Components can be added or removed as needed

Disadvantages:

  • More scripts to write
  • An extra step in the offline deployment process
[root@node1 k8s]# cat manifest.yaml 
---

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: kylin
    version: "V10"
    osImage: Kylin Linux Advanced Server V10 (Halberd)
    repository:
      iso:
        localPath: /home/k8s/centos-7-amd64.iso
        url: 
  kubernetesDistributions:
  - type: kubernetes
    version: v1.22.12
  components:
    helm:
      version: v3.9.0
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
   ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
   ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.5.3
    docker-compose:
      version: v2.2.2
  images:
  ##k8s-images
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1

Notes

  • If the exported artifact should include operating system dependency packages (such as conntrack, chrony, etc.), set .repository.iso.url in the operatingSystems element to the download URL of the ISO dependency file, or download the ISO in advance, fill its local path into localPath and remove the url field.
  • Enable the harbor and docker-compose entries; they are used later when KubeKey sets up its own Harbor registry for pushing images.
  • The image list in the default manifest is pulled from docker.io.
  • You can adjust the contents of the manifest-sample.yaml file as needed to export the artifact you want.
  • You can download the ISO files from https://github.com/kubesphere/kubekey/releases/tag/v2.3.0.

2.5 Export the offline artifact

./kk artifact export -m manifest.yaml -o kubesphere.tar.gz

Note: an artifact is a tgz package, exported according to the specified manifest file, that contains image tarballs and the related binaries. An artifact can be passed to the KubeKey commands for initializing the image registry, creating a cluster, adding nodes and upgrading a cluster; KubeKey unpacks it automatically and uses the unpacked files while running the command.

  • Make sure the network connection stays up during the export.
  • KubeKey parses the image names in the image list; if a registry in an image name requires authentication, configure it in the .registry.auths field of the manifest file.
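
After the export completes, a quick sanity check of the artifact is possible (a minimal, optional sketch; the file name is the one used in the command above):

ls -lh kubesphere.tar.gz
tar -tzf kubesphere.tar.gz | head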

2.6 Manually pull the Kubernetes and KubeSphere images

vim pull-images.sh

#!/bin/bash
#
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  ##kubesphere-images
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-upgrade:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  ##kubesphere-monitoring-images
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  ##kubesphere-logging-images
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  ##example-images
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
source pull-images.sh
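
After the pulls finish, a quick count can confirm nothing was missed (an optional check; it simply compares the number of local images against the number of pull commands in the script):

docker images | grep "registry.cn-beijing.aliyuncs.com/kubesphereio" | wc -l
grep -c "^docker pull" pull-images.sh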

2.7 Retag the images

vim tag-images.sh

Adjust the Harbor address and project name to match your own Harbor registry.

#!/bin/bash
#
HarborAddr="dockerhub.kubekey.local/kubesphereio"
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12 $HarborAddr/kube-apiserver:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12 $HarborAddr/kube-controller-manager:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12 $HarborAddr/kube-proxy:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12 $HarborAddr/kube-scheduler:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5 $HarborAddr/pause:3.5
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0 $HarborAddr/coredns:1.8.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2 $HarborAddr/cni:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2 $HarborAddr/kube-controllers:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2 $HarborAddr/node:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2 $HarborAddr/pod2daemon-flexvol:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2 $HarborAddr/typha:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0 $HarborAddr/flannel:v0.12.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 $HarborAddr/provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0 $HarborAddr/linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3 $HarborAddr/haproxy:2.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2 $HarborAddr/nfs-subdir-external-provisioner:v4.0.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12 $HarborAddr/k8s-dns-node-cache:1.15.12
  ##kubesphere-images
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1 $HarborAddr/ks-installer:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1 $HarborAddr/ks-apiserver:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1 $HarborAddr/ks-console:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1 $HarborAddr/ks-controller-manager:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-upgrade:v3.3.1 $HarborAddr/ks-upgrade:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0 $HarborAddr/kubectl:v1.22.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0 $HarborAddr/kubectl:v1.21.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0 $HarborAddr/kubectl:v1.20.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1 $HarborAddr/kubefed:v0.8.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0 $HarborAddr/tower:v0.2.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z $HarborAddr/minio:RELEASE.2019-08-07T01-59-21Z
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z $HarborAddr/mc:RELEASE.2019-08-07T23-14-43Z
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0 $HarborAddr/snapshot-controller:v4.0.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0 $HarborAddr/nginx-ingress-controller:v1.1.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4 $HarborAddr/defaultbackend-amd64:1.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2 $HarborAddr/metrics-server:v0.4.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine $HarborAddr/redis:5.0.14-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine $HarborAddr/haproxy:2.0.25-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14 $HarborAddr/alpine:3.14
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0 $HarborAddr/openldap:1.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0 $HarborAddr/netshoot:v1.0
  ##kubesphere-monitoring-images
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0 $HarborAddr/configmap-reload:v0.5.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0 $HarborAddr/prometheus:v2.34.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1 $HarborAddr/prometheus-config-reloader:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1 $HarborAddr/prometheus-operator:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0 $HarborAddr/kube-rbac-proxy:v0.11.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0 $HarborAddr/kube-state-metrics:v2.5.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1 $HarborAddr/node-exporter:v1.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0 $HarborAddr/alertmanager:v0.23.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2 $HarborAddr/thanos:v0.25.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3 $HarborAddr/grafana:8.3.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0 $HarborAddr/kube-rbac-proxy:v0.8.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0 $HarborAddr/notification-manager-operator:v1.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0 $HarborAddr/notification-manager:v1.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0 $HarborAddr/notification-tenant-sidecar:v3.2.0
  ##kubesphere-logging-images
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6 $HarborAddr/elasticsearch-curator:v5.7.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22 $HarborAddr/elasticsearch-oss:6.8.22
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0 $HarborAddr/fluentbit-operator:v0.13.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03 $HarborAddr/docker:19.03
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11 $HarborAddr/fluent-bit:v1.8.11
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1 $HarborAddr/log-sidecar-injector:1.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0 $HarborAddr/filebeat:6.7.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0 $HarborAddr/kube-events-operator:v0.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0 $HarborAddr/kube-events-exporter:v0.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0 $HarborAddr/kube-events-ruler:v0.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0 $HarborAddr/kube-auditing-operator:v0.2.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0 $HarborAddr/kube-auditing-webhook:v0.2.0
  ##example-images
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1 $HarborAddr/busybox:1.31.1
source tag-images.sh

2.8 Export the images

vim save-images.sh

#!/bin/bash
#
# Harbor registry domain and project name
HarborAddr="dockerhub.kubekey.local/kubesphereio"

docker save -o ks-images.tar  $HarborAddr/kube-apiserver:v1.22.12  $HarborAddr/kube-controller-manager:v1.22.12  $HarborAddr/kube-proxy:v1.22.12  $HarborAddr/kube-scheduler:v1.22.12  $HarborAddr/pause:3.5  $HarborAddr/coredns:1.8.0  $HarborAddr/cni:v3.23.2  $HarborAddr/kube-controllers:v3.23.2  $HarborAddr/node:v3.23.2  $HarborAddr/pod2daemon-flexvol:v3.23.2  $HarborAddr/typha:v3.23.2  $HarborAddr/flannel:v0.12.0  $HarborAddr/provisioner-localpv:3.3.0  $HarborAddr/linux-utils:3.3.0  $HarborAddr/haproxy:2.3  $HarborAddr/nfs-subdir-external-provisioner:v4.0.2  $HarborAddr/k8s-dns-node-cache:1.15.12  $HarborAddr/ks-installer:v3.3.1  $HarborAddr/ks-apiserver:v3.3.1  $HarborAddr/ks-console:v3.3.1  $HarborAddr/ks-controller-manager:v3.3.1  $HarborAddr/ks-upgrade:v3.3.1  $HarborAddr/kubectl:v1.22.0  $HarborAddr/kubectl:v1.21.0  $HarborAddr/kubectl:v1.20.0  $HarborAddr/kubefed:v0.8.1  $HarborAddr/tower:v0.2.0  $HarborAddr/minio:RELEASE.2019-08-07T01-59-21Z  $HarborAddr/mc:RELEASE.2019-08-07T23-14-43Z  $HarborAddr/snapshot-controller:v4.0.0  $HarborAddr/nginx-ingress-controller:v1.1.0  $HarborAddr/defaultbackend-amd64:1.4  $HarborAddr/metrics-server:v0.4.2  $HarborAddr/redis:5.0.14-alpine  $HarborAddr/haproxy:2.0.25-alpine  $HarborAddr/alpine:3.14  $HarborAddr/openldap:1.3.0  $HarborAddr/netshoot:v1.0  $HarborAddr/configmap-reload:v0.5.0  $HarborAddr/prometheus:v2.34.0  $HarborAddr/prometheus-config-reloader:v0.55.1  $HarborAddr/prometheus-operator:v0.55.1  $HarborAddr/kube-rbac-proxy:v0.11.0  $HarborAddr/kube-state-metrics:v2.5.0  $HarborAddr/node-exporter:v1.3.1  $HarborAddr/alertmanager:v0.23.0  $HarborAddr/thanos:v0.25.2  $HarborAddr/grafana:8.3.3  $HarborAddr/kube-rbac-proxy:v0.8.0  $HarborAddr/notification-manager-operator:v1.4.0  $HarborAddr/notification-manager:v1.4.0  $HarborAddr/notification-tenant-sidecar:v3.2.0  $HarborAddr/elasticsearch-curator:v5.7.6  $HarborAddr/elasticsearch-oss:6.8.22  $HarborAddr/fluentbit-operator:v0.13.0  $HarborAddr/docker:19.03  $HarborAddr/fluent-bit:v1.8.11  $HarborAddr/log-sidecar-injector:1.1  $HarborAddr/filebeat:6.7.0  $HarborAddr/kube-events-operator:v0.4.0  $HarborAddr/kube-events-exporter:v0.4.0  $HarborAddr/kube-events-ruler:v0.4.0  $HarborAddr/kube-auditing-operator:v0.2.0  $HarborAddr/kube-auditing-webhook:v0.2.0  $HarborAddr/busybox:1.31.1
# compress the archive
gzip ks-images.tar
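
The resulting archive is fairly large; before moving it, an optional size check (and, if a FAT32 USB stick is used, a split into smaller chunks) can look like this — a sketch, with an arbitrarily chosen chunk size:

ls -lh ks-images.tar.gz
split -b 3800M ks-images.tar.gz ks-images.tar.gz.part-
# reassemble on the target: cat ks-images.tar.gz.part-* > ks-images.tar.gz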

3. Offline cluster installation

3.1 Remove the podman bundled with Kylin

podman is the container engine that ships with Kylin. To avoid conflicts with Docker later, uninstall it; otherwise coredns/nodelocaldns may fail to start and various Docker permission problems can follow. Run on all nodes:

yum remove podman
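
To confirm it is gone on each node (an optional quick check):

rpm -qa | grep -i podman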

3.2 Copy the installation packages to the offline environment

Copy the downloaded KubeKey, the artifact, the scripts, and the exported images to the installation node in the offline environment via a USB drive or similar medium.
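
A sketch of what typically gets copied, plus a checksum to catch corrupted transfers (file names taken from the earlier steps; adjust as needed):

# on the internet-connected host
sha256sum kk kubesphere.tar.gz k8s-init-KylinV10.tar.gz ks-images.tar.gz *.sh > SHA256SUMS
# on the offline node, after copying everything into one directory
sha256sum -c SHA256SUMS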

3.3 Install the Kubernetes dependency packages

Run on all nodes: upload k8s-init-KylinV10.tar.gz, extract it, and run install.sh, for example as shown below.
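
A minimal sketch (this assumes install.sh was packed inside the k8s-init directory; otherwise copy it in next to the RPMs first):

tar -zxvf k8s-init-KylinV10.tar.gz
cd k8s-init
bash install.sh   # runs rpm -ivh *.rpm --force --nodeps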

3.4 Modify the config-sample.yaml configuration file

Modify the node and Harbor information as needed (a template can be generated first; see the sketch after this list).

  • A registry deployment node must be specified (used by KubeKey to deploy the self-hosted Harbor registry).
  • The registry type must be set to harbor; otherwise a plain docker registry is installed by default.
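
If you do not already have a config-sample.yaml, kk can generate a template to edit (a sketch; versions as used in this article):

./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.12 -f config-sample.yaml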


apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 10.11.5.117, internalAddress: 10.11.5.117, user: root, password: "123xxx"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /data/openebs/local
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

3.5 Install the Harbor private registry from the artifact

./kk init registry -f config-sample.yaml -a kubesphere.tar.gz

On Kylin, /opt/harbor/common needs 777 permissions, otherwise some Harbor services fail to start; one way to do this is shown below.
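
A sketch, run on the registry node after kk init registry has laid down /opt/harbor:

chmod -R 777 /opt/harbor/common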

Verify

If any service fails to start, restart Harbor:

cd /opt/harbor
systemctl restart docker
docker-compose down
docker-compose up -d

Access the Harbor web UI
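
If the machine running the browser cannot resolve the registry domain, a hosts entry pointing at the registry node helps (a sketch for a Linux host; 10.11.5.117 is node1 in this article, default credentials admin / Harbor12345 as configured above):

echo "10.11.5.117 dockerhub.kubekey.local" >> /etc/hosts
# then open https://dockerhub.kubekey.local in the browser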

Create the Harbor projects

vim create_project_harbor.sh

#!/usr/bin/env bash
   
# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
   
url="https://dockerhub.kubekey.local"  # set url to https://dockerhub.kubekey.local
user="admin"
passwd="Harbor12345"
   
harbor_projects=(
    kubesphereio
    kubesphere
    other
)
   
for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k  # note the -k at the end of the curl command
done

Notes

  • Change the value of url to https://dockerhub.kubekey.local.
  • The project names created in the registry must match the project name used in the image list.
  • Add -k to the end of the curl command in the script.
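
Then run the script (it simply calls the Harbor projects API once per entry):

bash create_project_harbor.sh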

Log in to Harbor and check that the projects exist.

3.6 Push the KubeSphere-related images to Harbor

vim push-images.sh

When the offline package was built, save-images.sh saved the Kubernetes and KubeSphere images into ks-images.tar.gz. If the image names or the Harbor project name were changed, remember to update them in push-images.sh as well.

#!/bin/bash
#
HarborAddr="dockerhub.kubekey.local/kubesphereio"
# log in to harbor
docker login -u admin -p Harbor12345 dockerhub.kubekey.local

# load the saved images
docker load < ks-images.tar.gz

docker push $HarborAddr/kube-apiserver:v1.22.12
docker push $HarborAddr/kube-controller-manager:v1.22.12
docker push $HarborAddr/kube-proxy:v1.22.12
docker push $HarborAddr/kube-scheduler:v1.22.12
docker push $HarborAddr/pause:3.5
docker push $HarborAddr/coredns:1.8.0
docker push $HarborAddr/cni:v3.23.2
docker push $HarborAddr/kube-controllers:v3.23.2
docker push $HarborAddr/node:v3.23.2
docker push $HarborAddr/pod2daemon-flexvol:v3.23.2
docker push $HarborAddr/typha:v3.23.2
docker push $HarborAddr/flannel:v0.12.0
docker push $HarborAddr/provisioner-localpv:3.3.0
docker push $HarborAddr/linux-utils:3.3.0
docker push $HarborAddr/haproxy:2.3
docker push $HarborAddr/nfs-subdir-external-provisioner:v4.0.2
docker push $HarborAddr/k8s-dns-node-cache:1.15.12
  ##kubesphere-images
docker push $HarborAddr/ks-installer:v3.3.1
docker push $HarborAddr/ks-apiserver:v3.3.1
docker push $HarborAddr/ks-console:v3.3.1
docker push $HarborAddr/ks-controller-manager:v3.3.1
docker push $HarborAddr/ks-upgrade:v3.3.1
docker push $HarborAddr/kubectl:v1.22.0
docker push $HarborAddr/kubectl:v1.21.0
docker push $HarborAddr/kubectl:v1.20.0
docker push $HarborAddr/kubefed:v0.8.1
docker push $HarborAddr/tower:v0.2.0
docker push $HarborAddr/minio:RELEASE.2019-08-07T01-59-21Z
docker push $HarborAddr/mc:RELEASE.2019-08-07T23-14-43Z
docker push $HarborAddr/snapshot-controller:v4.0.0
docker push $HarborAddr/nginx-ingress-controller:v1.1.0
docker push $HarborAddr/defaultbackend-amd64:1.4
docker push $HarborAddr/metrics-server:v0.4.2
docker push $HarborAddr/redis:5.0.14-alpine
docker push $HarborAddr/haproxy:2.0.25-alpine
docker push $HarborAddr/alpine:3.14
docker push $HarborAddr/openldap:1.3.0
docker push $HarborAddr/netshoot:v1.0
  ##kubesphere-monitoring-images
docker push $HarborAddr/configmap-reload:v0.5.0
docker push $HarborAddr/prometheus:v2.34.0
docker push $HarborAddr/prometheus-config-reloader:v0.55.1
docker push $HarborAddr/prometheus-operator:v0.55.1
docker push $HarborAddr/kube-rbac-proxy:v0.11.0
docker push $HarborAddr/kube-state-metrics:v2.5.0
docker push $HarborAddr/node-exporter:v1.3.1
docker push $HarborAddr/alertmanager:v0.23.0
docker push $HarborAddr/thanos:v0.25.2
docker push $HarborAddr/grafana:8.3.3
docker push $HarborAddr/kube-rbac-proxy:v0.8.0
docker push $HarborAddr/notification-manager-operator:v1.4.0
docker push $HarborAddr/notification-manager:v1.4.0
docker push $HarborAddr/notification-tenant-sidecar:v3.2.0
  ##kubesphere-logging-images
docker push $HarborAddr/elasticsearch-curator:v5.7.6
docker push $HarborAddr/elasticsearch-oss:6.8.22
docker push $HarborAddr/fluentbit-operator:v0.13.0
docker push $HarborAddr/docker:19.03
docker push $HarborAddr/fluent-bit:v1.8.11
docker push $HarborAddr/log-sidecar-injector:1.1
docker push $HarborAddr/filebeat:6.7.0
docker push $HarborAddr/kube-events-operator:v0.4.0
docker push $HarborAddr/kube-events-exporter:v0.4.0
docker push $HarborAddr/kube-events-ruler:v0.4.0
docker push $HarborAddr/kube-auditing-operator:v0.2.0
docker push $HarborAddr/kube-auditing-webhook:v0.2.0
  ##example-images
docker push $HarborAddr/busybox:1.31.1

Run the push:

source push-images.sh

3.7 Run the following command to install the KubeSphere cluster

./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages

Open another terminal to watch the deployment.
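
For example, watching the pods come up gives a quick view of progress (optional):

kubectl get pods -A -w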

Check the installer logs, option 1:

kubectl logs -f ks-installer-d6dcd67b9-7c26m -n kubesphere-system

Option 2:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

After roughly ten minutes you should see the message that the deployment succeeded.

3.8 Verify the deployment

  • Log in to the management console
  • System component status
  • Container logs
  • Cluster status
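
Beyond the console checks above, a couple of kubectl commands give a quick confirmation from the shell (a minimal sketch):

kubectl get nodes -o wide
kubectl get pods -A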

4. Summary

This article walked through, on an x86 Kylin V10 server, downloading the base dependencies and images in an internet-connected environment, saving them as an offline package, and generating an artifact containing a single image for the subsequent offline deployment. A follow-up will consolidate the installation packages, adapt them to NeoKylin, openEuler, Anolis OS and others, and simplify the deployment process; stay tuned for part two. Key points of the offline installation:

  • Remove podman
  • Install the Kubernetes dependency packages
  • Install the image registry with kk
  • Write scripts to push the images to Harbor
  • Deploy the cluster with kk

References

[1] Air-gapped installation: https://kubesphere.io/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation/

[2] KubeKey releases ISO page: https://github.com/kubesphere/kubekey/releases/download/v2.3.0/centos7-rpms-amd64.iso

[3] Kylin package repository: https://update.cs2c.com.cn/NS/V10/V10SP2/os/adv/lic/base/x86_64/Packages/

[4] KubeKey releases page: https://github.com/kubesphere/kubekey/releases
