Using NFS File Systems with TKE

2024-07-31 20:10:19

Background

This article walks through using NFS file systems in a TKE cluster, covering cfs-csi (creating a new CFS instance per volume, and sharing a new instance), static NFS mounts, and shared mounts on an existing instance.

Prerequisites

  1. A Kubernetes cluster; for cfs-csi a TKE cluster is recommended (Tencent Cloud TKE).
  2. Static NFS mounts and shared mounts of an existing instance require an NFS instance prepared in advance (either self-built or Tencent Cloud CFS); a minimal self-built export is sketched right after this list.
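
If you build the NFS server yourself rather than using CFS, a minimal export configuration is sketched below (assumptions: a Linux host with nfs-kernel-server/nfs-utils installed, /data/nfs as the exported directory, and 10.0.0.0/16 as the cluster's VPC CIDR; adjust these to your environment):

# /etc/exports: export /data/nfs to the cluster network
/data/nfs 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)

# reload the export table and confirm the share is visible
exportfs -ra
showmount -e localhost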

Using NFS file systems

Static NFS

Kubernetes supports static NFS natively: both Pod volumes and PersistentVolumes can reference an NFS server.

Using NFS through a PersistentVolume

The complete YAML is given below; a few notes:

  1. Create a PV of type nfs (the path under nfs must already exist on the NFS server, otherwise the Pod events report a mount failure saying the directory does not exist).
  2. Create a PVC and bind it directly to the PV via the volumeName field.
  3. The storageClassName of the PV and PVC must match (the StorageClass itself does not have to exist in the cluster).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-cfs-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  nfs:
    path: /staticcfs1
    server: 10.0.7.15
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  volumeMode: Filesystem

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-cfs-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: slow
  volumeMode: Filesystem
  volumeName: static-cfs-pv

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: nginx
      qcloud-app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: nginx
        qcloud-app: nginx
    spec:
      affinity: {}
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 256Mi
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /mnt
          name: vol
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: qcloudregistrykey
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: vol
        persistentVolumeClaim:
          claimName: static-cfs-pvc

After applying the YAML above, the Pod runs normally and the NFS path is mounted inside the container:

$ kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7f499bd79c-8nd65   1/1     Running   0          23m

root@nginx-7f499bd79c-8nd65:/# df -h /mnt/
Filesystem             Size  Used Avail Use% Mounted on
10.0.7.15:/staticcfs1   10G   42M   10G   1% /mnt
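
As noted above, the /staticcfs1 directory must exist on the NFS server before the PV can be mounted. One way to create it, sketched here under the assumption that the server exports its root directory over NFSv4 (as Tencent Cloud CFS does by default), is to mount the root export on any node and create the subdirectory:

mkdir -p /tmp/cfsroot
mount -t nfs -o vers=4 10.0.7.15:/ /tmp/cfsroot
mkdir -p /tmp/cfsroot/staticcfs1
umount /tmp/cfsroot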

Using NFS through a Pod volume

Reference YAML:

  1. Kubernetes volumes support NFS out of the box, so the server and path can be configured directly on the volume; again, the path directory must already exist on the CFS instance.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nfs-volume
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: nginx-nfs-volume
      qcloud-app: nginx-nfs-volume
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: nginx-nfs-volume
        qcloud-app: nginx-nfs-volume
    spec:
      affinity: {}
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 256Mi
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /mnt
          name: vol
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: qcloudregistrykey
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: vol
        nfs:
          server: 10.0.7.15
          path: /static-volume

After applying the YAML above, the Pod runs normally and the NFS path is mounted inside the container:

$ kubectl get po | grep nginx-nfs-volume
nginx-nfs-volume-66db74f76f-gg52s   1/1     Running   0          4m8s

root@nginx-nfs-volume-66db74f76f-gg52s:/# df -h /mnt/
Filesystem                Size  Used Avail Use% Mounted on
10.0.7.15:/static-volume   10G   42M   10G   1% /mnt
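
A quick way to confirm that writes really land on the NFS export (and survive Pod restarts) is to write a file through the mount, restart the Deployment, and read the file back from the replacement Pod; for example:

kubectl exec deploy/nginx-nfs-volume -- sh -c 'echo hello-nfs > /mnt/test.txt'
kubectl rollout restart deploy/nginx-nfs-volume
kubectl rollout status deploy/nginx-nfs-volume
kubectl exec deploy/nginx-nfs-volume -- cat /mnt/test.txt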

Shared mounts on an existing NFS instance

For provisioning on an existing NFS instance, nfs-subdir-external-provisioner is recommended. The older open-source nfs-client-provisioner may fail with "selfLink was empty, can't make reference": SelfLink has been removed in newer Kubernetes versions, and from 1.24 on the RemoveSelfLink feature gate can no longer be set.

Deploy it following the nfs-subdir-external-provisioner documentation:

  • Install Helm (see the Helm install documentation).
  • Add the Helm repo:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

  • Install with Helm. The NFS service must already exist and be reachable from the cluster, and the configured path must already exist, otherwise the mount fails with "mounting <IP>:/<path> failed, reason given by server: No such file or directory" (a quick connectivity check is sketched after the command below). Also, pulling the chart's default image may time out; you can substitute a matching image from Docker Hub.
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path \
    --set image.repository=eipwork/nfs-subdir-external-provisioner
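
Before installing, you can check from a cluster node that the NFS server is reachable and that the export path exists (a sketch; it assumes the NFS client tools, nfs-common or nfs-utils, are installed on the node, and x.x.x.x and /exported/path are the same placeholders as in the Helm command):

# list the exports offered by the server
showmount -e x.x.x.x

# optionally mount it once to confirm the path is usable
mkdir -p /tmp/nfscheck
mount -t nfs x.x.x.x:/exported/path /tmp/nfscheck && ls /tmp/nfscheck && umount /tmp/nfscheck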

  • After a successful deployment you can see the provisioner Pod as well as the new StorageClass nfs-client:
$ kubectl get sc nfs-client
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   64s

$ kubectl get po
NAME                                               READY   STATUS    RESTARTS   AGE
nfs-subdir-external-provisioner-745d595dfc-jxdd9   1/1     Running   0          67s

  • Create a PVC that uses the new StorageClass; reference YAML:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    nfs.io/storage-path: "test-path" # not required, depending on whether this annotation was shown in the storage class description
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

  • Once the PVC is created, nfs-client provisions a PV and binds it to the PVC. The bound PV points to a subdirectory of the path configured when installing nfs-subdir-external-provisioner, named <namespace>-<pvc>-<pv> (this can also be verified on the server itself; see the sketch after the output below):
$ kubectl get pvc  test-claim
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-fb9f42a5-57dc-44d6-b1da-15cab95829fb   1Gi        RWX            nfs-client     2m28s

$ kubectl get pv  pvc-fb9f42a5-57dc-44d6-b1da-15cab95829fb -o custom-columns=:.spec.nfs

map[path:/static-nfs-subdir/default-test-claim-pvc-fb9f42a5-57dc-44d6-b1da-15cab95829fb server:10.0.2.25]
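
The same layout can be checked directly on the NFS export: mounting the path used at install time should show one subdirectory per PVC, named <namespace>-<pvc>-<pv>. A sketch, run from any machine that can reach the NFS server:

mkdir -p /tmp/nfsroot
mount -t nfs 10.0.2.25:/static-nfs-subdir /tmp/nfsroot
ls /tmp/nfsroot
# default-test-claim-pvc-fb9f42a5-57dc-44d6-b1da-15cab95829fb
umount /tmp/nfsroot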

  • Create a workload that uses the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-deploy
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: nfs-client-deploy
      qcloud-app: nfs-client-deploy
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: nfs-client-deploy
        qcloud-app: nfs-client-deploy
    spec:
      affinity: {}
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 256Mi
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /mnt
          name: vol
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: qcloudregistrykey
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: vol
        persistentVolumeClaim:
          claimName: test-claim

  • The test workload's Pod runs normally and the volume is mounted successfully:
$ kubectl get po
NAME                                               READY   STATUS    RESTARTS   AGE
nfs-client-deploy-55f79cbc9c-zbmk8                 1/1     Running   0          2m56s

root@nfs-client-deploy-55f79cbc9c-zbmk8:/# df -h /mnt/
Filesystem                                                                                Size  Used Avail Use% Mounted on
10.0.2.25:/static-nfs-subdir/default-test-claim-pvc-fb9f42a5-57dc-44d6-b1da-15cab95829fb   10G   32M   10G   1% /mnt
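
One related setting worth knowing: the chart exposes a storageClass.archiveOnDelete value that controls whether a PVC's subdirectory is archived (renamed with an archived- prefix) or deleted together with its data when the PVC is removed; check values.yaml of your chart version for the default. For example, to delete the data on PVC removal, the install could be updated roughly like this:

helm upgrade nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --reuse-values \
    --set storageClass.archiveOnDelete=false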

cfs-csi

TKE provides the cfs-csi add-on, which makes it easy for a Kubernetes cluster to consume Tencent Cloud CFS; see the cfs-csi add-on documentation.

New instance

With this StorageClass, every PV provisioned for a PVC creates a new CFS instance.

  • Install the cfs-csi add-on: in the console, go to "Add-on Management" -> "Create", select the CFS add-on, click Install, and wait for the installation to finish.
  • Create a StorageClass: in the console, go to "Storage" -> "StorageClass" -> "Create"; choose "File Storage (CFS)" as the Provisioner and set the instance creation mode to "Create new instance".
  • Create PVCs: in the console, go to "Storage" -> "PersistentVolumeClaim" -> "Create"; choose "File Storage (CFS)" as the Provisioner and select the newly created StorageClass. To verify that new instances really are created, create two PVCs.
  • Once the two PVCs are created they are bound to PVs, and the two PVs point to different CFS instances. The PVs are created by the StatefulSet kube-system/csi-provisioner-cfsplugin, whose logs record the CFS instance and PV creation (a sample log command follows the output below):
$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
newcfs-pvc1      Bound    pvc-df26a64c-3303-4110-ada8-50615d0b3519   10Gi       RWX            newscf-sc      4m3s
newcfs-pvc2      Bound    pvc-8b689604-62ab-4178-b94e-39ad1e98af31   10Gi       RWX            newscf-sc      3m53s

$ kubectl get pv pvc-df26a64c-3303-4110-ada8-50615d0b3519 -o yaml| grep volumeHandle
    volumeHandle: cfs-ioien2ah
$ kubectl get pv pvc-8b689604-62ab-4178-b94e-39ad1e98af31 -o yaml| grep volumeHandle
    volumeHandle: cfs-m6uiekm3
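
To see the log lines where the add-on calls the CFS API and creates the PV, read the provisioner StatefulSet's logs; a sketch (container layout may differ between add-on versions):

kubectl -n kube-system logs statefulset/csi-provisioner-cfsplugin --all-containers --tail=200 | grep -i pvc-df26a64c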

Shared new instance (multiple PVCs share one CFS instance, each with its own path)

With this StorageClass, the first provisioned PV creates a new CFS instance, and all subsequent PVs share that instance.

  • Install the cfs-csi add-on, as in the "New instance" steps.
  • Create a StorageClass: in the console, go to "Storage" -> "StorageClass" -> "Create"; choose "File Storage (CFS)" as the Provisioner and set the instance creation mode to "Shared instance" (as opposed to "Create new instance" used in the previous section).
  • Create PVCs using the StorageClass from the previous step; create two of them for comparison.
  • Once the two PVCs are created they are automatically bound to PVs. The two PVs use the same CFS instance, while each PVC gets its own unique NFS path (a one-line comparison command follows the output below):
$ kubectl get pvc  | grep sharecfs-pvc
sharecfs-pvc1    Bound    pvc-64031d20-4346-4df2-9d2b-ab5a977334bd   10Gi       RWX            sharecfs-sc    68m
sharecfs-pvc2    Bound    pvc-1c454b82-c95b-4148-bd86-0e1dd636861c   10Gi       RWX            sharecfs-sc    68m


$ kubectl get pv pvc-64031d20-4346-4df2-9d2b-ab5a977334bd -o custom-columns=:.spec.csi.volumeAttributes

map[fsid:c7e6gjwo host:10.0.2.25 path:/default-sharecfs-pvc1-pvc-64031d20-4346-4df2-9d2b-ab5a977334bd]
$ kubectl get pv  pvc-1c454b82-c95b-4148-bd86-0e1dd636861c  -o custom-columns=:.spec.csi.volumeAttributes

map[fsid:c7e6gjwo host:10.0.2.25 path:/default-sharecfs-pvc2-pvc-1c454b82-c95b-4148-bd86-0e1dd636861c]
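
A one-liner that compares all the shared-instance PVs at once (printing the CFS fsid and path from each PV's CSI attributes) makes the relationship obvious: same fsid, distinct paths.

kubectl get pv -o custom-columns=NAME:.metadata.name,FSID:.spec.csi.volumeAttributes.fsid,PATH:.spec.csi.volumeAttributes.path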
