Building a Highly Available RabbitMQ Cluster on K8s

2022-04-21 14:28:57

We already run these RabbitMQ clusters in production, and deployment is very convenient: fill in as little as one parameter, click Create, and a cluster like the one described below is up within a minute. (This uses OpenShift Templates, a capability similar to Helm.)

Recommended audience:

  • Architects
  • Application architects
  • Container platform administrators
  • Open-source technology enthusiasts

Summary: the setup follows the official blog post Peer Discovery Subsystem in RabbitMQ 3.7 and uses the official rabbitmq-peer-discovery-k8s plugin, combined with an OpenShift Template for one-click deployment. (The Template itself is not covered in detail in this article.)

RabbitMQ 3.7 introduced a new subsystem: the Peer Discovery Subsystem.

Why Peer Discovery Is Needed

Users of open-source message brokers such as RabbitMQ have increasingly high expectations for operational automation. This includes so-called Day 1 operations: initial cluster provisioning.

When a RabbitMQ cluster is first formed, newly started nodes need a way to discover each other. In the 3.6.x series there were two ways to do this:

  • CLI tools
  • a node list in the configuration file

The former is used by some provisioning tools but is generally not automation-friendly. The latter is more convenient, but has its own limitations: the set of nodes is fixed, and changing it requires redeploying the configuration file and restarting the nodes.
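In RabbitMQ 3.7's new-style configuration format, this fixed-list approach lives on as the classic_config backend. A minimal sketch (the node names here are illustrative):

```ini
cluster_formation.peer_discovery_backend = classic_config

cluster_formation.classic_config.nodes.1 = rabbit@hostname1
cluster_formation.classic_config.nodes.2 = rabbit@hostname2
cluster_formation.classic_config.nodes.3 = rabbit@hostname3
```

Changing the node set still means editing this file and restarting nodes, which is exactly the limitation dynamic mechanisms remove.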

A Better Way

There has in fact been a third option in the community for several years: rabbitmq-autocluster, a plugin originally developed by Gavin Roy. The plugin modifies the RabbitMQ boot process and injects a peer discovery step. The list of peers then no longer has to come from a configuration file: it can be retrieved from an AWS autoscaling group or from an external tool such as etcd.

The rabbitmq-autocluster authors concluded that there is no single true way to perform peer discovery, and that different approaches make sense for different deployment scenarios. They therefore introduced a pluggable interface; a concrete implementation of this interface is called a peer discovery mechanism. Given the proliferation of platforms and deployment automation stacks in recent years, this was a wise decision.

For RabbitMQ 3.7.0, the core ideas of rabbitmq-autocluster were adopted and integrated, with some modifications informed by experience supporting production RabbitMQ installations and by community feedback.

The result is a new peer discovery subsystem.

How Does It Work?

When a node starts and detects that it has no previously initialized database, it checks whether a peer discovery mechanism is configured. If so, it performs discovery and tries to contact each discovered peer in order. Finally, it attempts to join the cluster of the first reachable peer.

Some mechanisms assume that all cluster members are known ahead of time (for example, by being listed in the configuration file); others are dynamic (nodes can scale up and down).

RabbitMQ 3.7 ships with several peer discovery mechanisms out of the box:

  • AWS (EC2 instance tags or autoscaling group membership)
  • Kubernetes (container platform)
  • etcd
  • Consul (service mesh)
  • pre-configured DNS records
  • configuration file

Support for more options can easily be added in the future.

Since listing cluster nodes in the configuration file is nothing new, let's focus on the new capabilities.

Node Registration and Unregistration

Some mechanisms use a data store to keep track of the node list; a newly joined cluster member updates the data store to signal its presence. The etcd and Consul plugins work this way.

With other mechanisms, cluster membership is managed out of band, by systems outside the RabbitMQ nodes' control. For example, the AWS mechanism uses EC2 instance filtering or autoscaling group membership, both of which are managed and updated by AWS.

The focus of this article, the Kubernetes plugin, builds on this peer discovery mechanism: it calls the Kubernetes API (the endpoints resource) to obtain information about the cluster's member instances.
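Concretely, the plugin asks the API server for the endpoints object backing the cluster's Service. A sketch of the URL it requests, assuming the plugin's default scheme and port and the service/namespace names used later in this article:

```shell
# Sketch: the endpoints URL queried by rabbitmq-peer-discovery-k8s.
# https and port 443 are the plugin defaults; the names below come from this article's example.
K8S_HOST="kubernetes.default.svc.cluster.local"
MY_POD_NAMESPACE="test-rabbitmq"
K8S_SERVICE_NAME="rabbitmq"

URL="https://${K8S_HOST}:443/api/v1/namespaces/${MY_POD_NAMESPACE}/endpoints/${K8S_SERVICE_NAME}"
echo "$URL"
```

Inside a pod, the plugin authenticates this request with the ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, which is why the RBAC setup below matters.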

Installing and Deploying a RabbitMQ Cluster in Containers

The detailed steps break down as follows:

Overview:

  1. Create a namespace (straightforward, omitted)
  2. Configure RBAC permissions

RBAC Configuration

To look up information about the other instances, the cluster first needs the corresponding Kubernetes RBAC permissions. A minimal YAML example:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-peer-discovery-rbac
  namespace: test-rabbitmq
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]
# - apiGroups: [""]
#   resources: ["events"]
#   verbs: ["create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-peer-discovery-rbac
  namespace: test-rabbitmq
subjects:
- kind: ServiceAccount
  name: rabbitmq
  namespace: test-rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rabbitmq-peer-discovery-rbac

In detail:

  1. A ServiceAccount named rabbitmq is created; RabbitMQ uses this account for peer discovery.
  2. A Role is created whose only permission is get on endpoints. (On OpenShift, this permission alone is not enough, so the standard view role is bound instead.)
  3. The RoleBinding grants the rabbitmq ServiceAccount the get endpoints permission.
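On OpenShift, the binding to the standard view role mentioned above could be sketched like this (a hypothetical example; note that it grants broader read access than get endpoints alone):

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-view
  namespace: test-rabbitmq
subjects:
- kind: ServiceAccount
  name: rabbitmq
  namespace: test-rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
```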
RabbitMQ Cluster Configuration - ConfigMap

Next comes the RabbitMQ cluster configuration, which is mounted into the containers via a ConfigMap.

A minimal configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-config
  namespace: test-rabbitmq
data:
  enabled_plugins: |
      [rabbitmq_management,rabbitmq_peer_discovery_k8s].
  rabbitmq.conf: |
      ## Cluster formation. See https://www.rabbitmq.com/cluster-formation.html to learn more.
      cluster_formation.peer_discovery_backend  = rabbit_peer_discovery_k8s
      cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
      ## Should RabbitMQ node name be computed from the pod's hostname or IP address?
      ## IP addresses are not stable, so using [stable] hostnames is recommended when possible.
      ## Set to "hostname" to use pod hostnames.
      ## When this value is changed, so should the variable used to set the RABBITMQ_NODENAME
      ## environment variable.
      cluster_formation.k8s.address_type = hostname
      ## How often should node cleanup checks run?
      cluster_formation.node_cleanup.interval = 30
      ## Set to false if automatic removal of unknown/absent nodes
      ## is desired. This can be dangerous, see
      ##  * https://www.rabbitmq.com/cluster-formation.html#node-health-checks-and-cleanup
      ##  * https://groups.google.com/forum/#!msg/rabbitmq-users/wuOfzEywHXo/k8z_HWIkBgAJ
      cluster_formation.node_cleanup.only_log_warning = true
      cluster_partition_handling = autoheal
      ## See https://www.rabbitmq.com/ha.html#master-migration-data-locality
      queue_master_locator=min-masters
      ## This is just an example.
      ## This enables remote access for the default user with well known credentials.
      ## Consider deleting the default user and creating a separate user with a set of generated
      ## credentials instead.
      ## Learn more at https://www.rabbitmq.com/access-control.html#loopback-users
      loopback_users.guest = false

There are 2 main pieces of configuration:

  • Enabled plugins: [rabbitmq_management,rabbitmq_peer_discovery_k8s]. enables the rabbitmq_management and rabbitmq_peer_discovery_k8s plugins.
    • If the cluster needs monitoring, the rabbitmq_prometheus plugin can be enabled as well.
  • The main config **rabbitmq.conf**:
    • cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s selects the Kubernetes peer discovery backend
    • cluster_formation.k8s.host = kubernetes.default.svc.cluster.local points the plugin at the Kubernetes API
    • cluster_formation.k8s.address_type = hostname — on K8s, hostnames are preferred over (unstable) IP addresses
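For example, with monitoring enabled, the enabled_plugins entry in the ConfigMap would become (a sketch; rabbitmq_prometheus ships with RabbitMQ 3.8+, and the trailing dot is required by the Erlang term format):

```erlang
[rabbitmq_management,rabbitmq_peer_discovery_k8s,rabbitmq_prometheus].
```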
Cluster Configuration - StatefulSet & Service

Because the cluster is stateful, the cluster instances are deployed via a Kubernetes StatefulSet.

A minimal YAML:

STATEFULSET.YAML
apiVersion: apps/v1
# See the Prerequisites section of https://www.rabbitmq.com/cluster-formation.html#peer-discovery-k8s.
kind: StatefulSet
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
spec:
  serviceName: rabbitmq
  # Three nodes is the recommended minimum. Some features may require a majority of nodes
  # to be available.
  replicas: 3
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      serviceAccountName: rabbitmq
      terminationGracePeriodSeconds: 10
      nodeSelector:
        # Use Linux nodes in a mixed OS kubernetes cluster.
        # Learn more at https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os
        kubernetes.io/os: linux
      containers:
      - name: rabbitmq-k8s
        image: rabbitmq:3.8
        volumeMounts:
          - name: config-volume
            mountPath: /etc/rabbitmq
        # Learn more about what ports various protocols use
        # at https://www.rabbitmq.com/networking.html#ports
        ports:
          - name: http
            protocol: TCP
            containerPort: 15672
          - name: amqp
            protocol: TCP
            containerPort: 5672
        livenessProbe:
          exec:
            # This is just an example. There is no "one true health check" but rather
            # several rabbitmq-diagnostics commands that can be combined to form increasingly comprehensive
            # and intrusive health checks.
            # Learn more at https://www.rabbitmq.com/monitoring.html#health-checks.
            #
            # Stage 2 check:
            command: ["rabbitmq-diagnostics", "status"]
          initialDelaySeconds: 60
          # See https://www.rabbitmq.com/monitoring.html for monitoring frequency recommendations.
          periodSeconds: 60
          timeoutSeconds: 15
        readinessProbe:
          exec:
            # This is just an example. There is no "one true health check" but rather
            # several rabbitmq-diagnostics commands that can be combined to form increasingly comprehensive
            # and intrusive health checks.
            # Learn more at https://www.rabbitmq.com/monitoring.html#health-checks.
            #
            # Stage 2 check:
            command: ["rabbitmq-diagnostics", "status"]
            # To use a stage 4 check:
            # command: ["rabbitmq-diagnostics", "check_port_connectivity"]
          initialDelaySeconds: 20
          periodSeconds: 60
          timeoutSeconds: 10
        imagePullPolicy: Always
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: RABBITMQ_USE_LONGNAME
            value: "true"
          # See a note on cluster_formation.k8s.address_type in the config file section
          - name: K8S_SERVICE_NAME
            value: rabbitmq
          - name: RABBITMQ_NODENAME
            value: rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
          - name: K8S_HOSTNAME_SUFFIX
            value: .$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
          - name: RABBITMQ_ERLANG_COOKIE
            value: "mycookie"
      volumes:
        - name: config-volume
          configMap:
            name: rabbitmq-config
            items:
            - key: rabbitmq.conf
              path: rabbitmq.conf
            - key: enabled_plugins
              path: enabled_plugins

A detailed explanation:

  1. replicas: 3 — three nodes is the recommended minimum cluster size.
  2. serviceAccountName: rabbitmq is the ServiceAccount created earlier.
  3. The image is rabbitmq:3.8.
  4. The ConfigMap from the previous section is mounted at /etc/rabbitmq.
  5. The container exposes 2 ports:
    1. HTTP port for management: 15672
    2. TCP port for the AMQP protocol: 5672
  6. Both probes run command: ["rabbitmq-diagnostics", "status"].
  7. The MY_POD_NAME and MY_POD_NAMESPACE env vars provide the pod's name and the namespace it lives in.
  8. RABBITMQ_USE_LONGNAME — node names on K8s are long FQDNs, so this must be set to "true".
  9. K8S_SERVICE_NAME matches the Service defined in the next section.
  10. RABBITMQ_NODENAME follows the pattern rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local.
  11. K8S_HOSTNAME_SUFFIX is .$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local.
  12. RABBITMQ_ERLANG_COOKIE — all nodes in a RabbitMQ cluster must share the same Erlang cookie; here it is set to mycookie. (For production, consider injecting it from a Secret rather than hard-coding it.)
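To see how items 10 and 11 expand at runtime, here is a small shell sketch for the first pod of the StatefulSet (the pod name rabbitmq-0 follows from the StatefulSet naming convention):

```shell
# Simulate the env vars Kubernetes injects into the first pod
MY_POD_NAME="rabbitmq-0"           # from metadata.name
MY_POD_NAMESPACE="test-rabbitmq"   # from metadata.namespace
K8S_SERVICE_NAME="rabbitmq"

# The fully qualified node name the container ends up with
RABBITMQ_NODENAME="rabbit@${MY_POD_NAME}.${K8S_SERVICE_NAME}.${MY_POD_NAMESPACE}.svc.cluster.local"
echo "$RABBITMQ_NODENAME"
```

These are exactly the node names that appear later in the cluster_status output.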
SERVICES.YAML
kind: Service
apiVersion: v1
metadata:
  namespace: test-rabbitmq
  name: rabbitmq
  labels:
    app: rabbitmq
    type: LoadBalancer
spec:
  type: NodePort
  ports:
   - name: http
     protocol: TCP
     port: 15672
     targetPort: 15672
     nodePort: 31672
   - name: amqp
     protocol: TCP
     port: 5672
     targetPort: 5672
     nodePort: 30672
  selector:
    app: rabbitmq

This defines the Service. It exposes 2 ports of the RabbitMQ cluster for external access:

  1. http: 15672
  2. amqp: 5672

Note 1:

On OpenShift, a headless Service also needs to be created for communication within the RabbitMQ cluster:

apiVersion: v1
kind: Service
metadata:
  name: ${CLUSTER_NAME}
  labels:
    app: ${CLUSTER_NAME} 
spec:
  selector:
    app: ${CLUSTER_NAME}
  clusterIP: None
  ports:
    - name: amqp
      port: 5672
      targetPort: 5672
    - name: clustering
      port: 25672
      targetPort: 25672

The ports are 5672 and 25672, and clusterIP: None is what makes the Service headless.

Note 2:

If the rabbitmq_prometheus plugin is enabled, the Service also needs to expose port 15692 so that metrics can be scraped.

On OpenShift, scraping is wired up through a ServiceMonitor.
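As a sketch, the extra metrics port added to the Service's ports list might look like this (15692 is the rabbitmq_prometheus default; the port name is illustrative):

```yaml
   - name: prometheus
     protocol: TCP
     port: 15692
     targetPort: 15692
```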

kubectl Apply

Apply the YAML files above with kubectl create -f ....

Check the RabbitMQ Cluster Status

kubectl --namespace="test-rabbitmq" get pods

Check the pod status with the command above.

Then check the RabbitMQ cluster status with the rabbitmq-diagnostics cluster_status command:

FIRST_POD=$(kubectl get pods --namespace test-rabbitmq -l 'app=rabbitmq' -o jsonpath='{.items[0].metadata.name}')
kubectl exec --namespace=test-rabbitmq $FIRST_POD -- rabbitmq-diagnostics cluster_status

The expected output looks like this:

Cluster status of node rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local ...
Basics

Cluster name: rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local

Disk Nodes

rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local
rabbit@rabbitmq-1.rabbitmq.test-rabbitmq.svc.cluster.local
rabbit@rabbitmq-2.rabbitmq.test-rabbitmq.svc.cluster.local

Running Nodes

rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local
rabbit@rabbitmq-1.rabbitmq.test-rabbitmq.svc.cluster.local
rabbit@rabbitmq-2.rabbitmq.test-rabbitmq.svc.cluster.local

Versions

rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local: RabbitMQ 3.8.1 on Erlang 22.1.8
rabbit@rabbitmq-1.rabbitmq.test-rabbitmq.svc.cluster.local: RabbitMQ 3.8.1 on Erlang 22.1.8
rabbit@rabbitmq-2.rabbitmq.test-rabbitmq.svc.cluster.local: RabbitMQ 3.8.1 on Erlang 22.1.8

Alarms

(none)

Network Partitions

(none)

Listeners

Node: rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@rabbitmq-1.rabbitmq.test-rabbitmq.svc.cluster.local, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@rabbitmq-1.rabbitmq.test-rabbitmq.svc.cluster.local, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@rabbitmq-1.rabbitmq.test-rabbitmq.svc.cluster.local, interface: [::], port: 15672, protocol: http, purpose: HTTP API

Feature flags

Flag: drop_unroutable_metric, state: enabled
Flag: empty_basic_get_metric, state: enabled
Flag: implicit_default_bindings, state: enabled
Flag: quorum_queue, state: enabled
Flag: virtual_host_metadata, state: enabled

Connecting

  • amqp://guest:guest@{minikube_ip}:30672 — connect with an AMQP client
  • http://{minikube_ip}:31672 — open the management UI
Scaling the Number of RabbitMQ Cluster Instances

This is simply a matter of scaling the number of K8s pods.

An odd number of nodes is recommended.

# Odd numbers of nodes are necessary for a clear quorum: 3, 5, 7 and so on
kubectl scale statefulset/rabbitmq --namespace=test-rabbitmq --replicas=5
