Deploying ZooKeeper as a StatefulSet on Kubernetes

2021-12-14 11:01:30

Creating the persistent volumes

The ZooKeeper cluster needs storage, so PersistentVolumes (PVs) have to be prepared first. The YAML below creates 3 PVs, which the PersistentVolumeClaims created later for the 3 ZooKeeper replicas will bind to.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: zookeeper
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Retain
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: zookeeper
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Retain
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: zookeeper
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Retain

Applying the manifest and checking volume status

At this point the PVs are not yet bound to any PVC, so their status shows as Available.

kubectl apply -f persistent-volume.yaml 
kubectl get pv 
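The PVs should all appear with STATUS Available, since no claim has bound them yet. The output below is illustrative only; column widths, the STORAGECLASS value, and ages will differ on your cluster:

NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
k8s-pv-zk1   3Gi        RWO            Retain           Available           anything                15s
k8s-pv-zk2   3Gi        RWO            Retain           Available           anything                15s
k8s-pv-zk3   3Gi        RWO            Retain           Available           anything                15s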

Creating and using volumes the newer way

Using the newer storageClassName style is recommended.

Kubernetes previously used the annotation volume.beta.kubernetes.io/storage-class instead of the storageClassName attribute. The annotation still works for now, but it will be fully deprecated in a future Kubernetes release.

Creating the volume

kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  labels:
    type: zookeeper
spec:
  storageClassName: disk
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Retain

The PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir
spec:
  storageClassName: disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

Referencing the volume in the Pod

          ····
    volumeMounts:
      - name: datadir
        mountPath: /var/lib/zookeeper
volumeClaimTemplates:
- metadata:
    name: datadir
  spec:
    storageClassName: disk
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 3Gi

Note: if you are on a cloud provider such as Alibaba Cloud, make sure the cloud disks you purchase are in the same zone as the nodes that will use them.
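If you are not sure which zone each node is in, the well-known topology labels can be inspected with kubectl (on older clusters the label key may be failure-domain.beta.kubernetes.io/zone instead):

kubectl get nodes -L topology.kubernetes.io/zone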

Creating a ZooKeeper ensemble

The manifest below contains a headless Service, a regular Service, a PodDisruptionBudget, and a StatefulSet.

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "guglecontainers/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper 
          --servers=3 
          --data_dir=/var/lib/zookeeper/data 
          --data_log_dir=/var/lib/zookeeper/data/log 
          --conf_dir=/opt/zookeeper/conf 
          --client_port=2181 
          --election_port=3888 
          --server_port=2888 
          --tick_time=2000 
          --init_limit=10 
          --sync_limit=5 
          --heap=512M 
          --max_client_cnxns=60 
          --snap_retain_count=3 
          --purge_interval=12 
          --max_session_timeout=40000 
          --min_session_timeout=4000 
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 3Gi

Creating the resources

The command below creates the zk-hs headless Service, the zk-cs Service, the zk-pdb PodDisruptionBudget, and the zk StatefulSet.

kubectl apply -f zookeeper.yml --namespace=zookeeper

service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zk-pdb created
statefulset.apps/zk created
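Note that the command assumes the zookeeper namespace already exists; if it does not, create it before applying the manifest:

kubectl create namespace zookeeper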

Checking the status

kubectl get poddisruptionbudgets -n zookeeper
kubectl get pods -n zookeeper
kubectl get pods -n zookeeper -w -l app=zk
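Because podManagementPolicy is Parallel, the three Pods are created at the same time. Once the readiness probes pass, the watch output should end up looking roughly like this (illustrative only):

NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          2m
zk-1   1/1     Running   0          2m
zk-2   1/1     Running   0          2m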

If you find that a Pod did not start, check its logs:

kubectl logs zk-0 -n zookeeper

If the logs show that the process has no permission to create its data directory because there is no zookeeper user on the node, create the user and grant it ownership of the directory.

Creating the user and granting ownership

useradd -s /sbin/nologin zookeeper

chown zookeeper:zookeeper /var/lib/zookeeper/

Note: the user creation and ownership change must be performed on every node that runs a ZooKeeper replica.

If your Kubernetes cluster has only three nodes, note the following:

For security reasons, Pods are not scheduled onto the master node by default; in other words, the master does not take on regular workloads.

If you want Pods to be scheduled on the master as well, adjust this with taints and tolerations, as sketched below.
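As a sketch, on a kubeadm cluster you can either remove the master taint or add a matching toleration to the Pod template. The taint key depends on your Kubernetes version (node-role.kubernetes.io/master on older releases, node-role.kubernetes.io/control-plane on newer ones), and <master-node-name> is a placeholder:

# Remove the NoSchedule taint from the master so Pods can be scheduled there
kubectl taint nodes <master-node-name> node-role.kubernetes.io/master:NoSchedule-
# On newer versions the key is control-plane instead
kubectl taint nodes <master-node-name> node-role.kubernetes.io/control-plane:NoSchedule-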

Facilitating leader election

Get the hostnames of the Pods in the zk StatefulSet:

for i in 0 1 2; do kubectl exec --namespace zookeeper zk-$i -- hostname; done

Verify that the servers are actually running in replicated (cluster) mode:

for i in 0 1 2; do kubectl exec --namespace zookeeper zk-$i -- zkServer.sh status; done
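In replicated mode exactly one server should report Mode: leader and the other two Mode: follower. The output per Pod looks roughly like the following (the config path depends on the image; here --conf_dir points to /opt/zookeeper/conf):

ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/conf/zoo.cfg
Mode: follower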

Check the contents of each server's myid file:

for i in 0 1 2; do echo "myid zk-$i";kubectl exec --namespace zookeeper  zk-$i -- cat /var/lib/zookeeper/data/myid; done
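The start-zookeeper script in this image derives each server id from the Pod ordinal (ordinal + 1), so the ids should come out unique and consecutive, roughly:

myid zk-0
1
myid zk-1
2
myid zk-2
3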

Get the fully qualified domain name (FQDN) of each Pod in the zk StatefulSet:

for i in 0 1 2; do kubectl exec --namespace zookeeper zk-$i -- hostname -f; done
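Each FQDN follows the pattern <pod-name>.<headless-service>.<namespace>.svc.cluster.local, so with the zk-hs Service in the zookeeper namespace (and the default cluster.local cluster domain) you should see:

zk-0.zk-hs.zookeeper.svc.cluster.local
zk-1.zk-hs.zookeeper.svc.cluster.local
zk-2.zk-hs.zookeeper.svc.cluster.local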

View the contents of the zoo.cfg file inside a Pod:

kubectl exec --namespace zookeeper zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
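The file is generated by the start-zookeeper script from the StatefulSet parameters. The part that wires the ensemble together is the server.N entries, which should look roughly like this (domain names as derived above):

server.1=zk-0.zk-hs.zookeeper.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.zookeeper.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.zookeeper.svc.cluster.local:2888:3888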

Ensemble health check

The most basic health check is to write some data to one ZooKeeper server and then read it back from another.

kubectl exec --namespace zookeeper zk-0 -- zkCli.sh create /hello world


WATCHER::

WatchedEvent state:SyncConnected type:None path:null
Created /hello

Get the data from the zk-1 Pod:

kubectl exec --namespace zookeeper zk-1 -- zkCli.sh get /hello


WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x100000014
ctime = Thu Mar 18 03:21:38 UTC 2021
mZxid = 0x100000014
mtime = Thu Mar 18 03:21:38 UTC 2021
pZxid = 0x100000014
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0

If duplicate myid values show up, log in to the affected node, correct the id under /var/lib/zookeeper/data/, and then redeploy; a sketch follows.
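A minimal sketch, assuming the hostPath layout used above and that zk-1 is the replica with the wrong id (both the value 2 and the Pod name are examples, adjust to your case):

# On the node hosting the conflicting replica
cat /var/lib/zookeeper/data/myid
echo 2 > /var/lib/zookeeper/data/myid   # set the id to the Pod ordinal + 1
# Recreate the Pod so it picks up the corrected id
kubectl delete pod zk-1 -n zookeeper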

Reference: https://kubernetes.io/zh/docs/tutorials/stateful-application/zookeeper/
