Prometheus Monitoring Toolkit - Kubernetes (Part 2)

2020-09-10

In this post we manually deploy a StatefulSet-based Grafana in Kubernetes, persisting its data with a StorageClass. Before importing any dashboards we point the datasource at the in-cluster address of the Prometheus setup built in the previous part, and finally configure external access through ingress-nginx.

Environment

My local environment was brought up with sealos in one shot, mainly for ease of testing (a rough sketch of the invocation follows the table below).

| OS | Kubernetes | HostName | IP | Service |
| --- | --- | --- | --- | --- |
| Ubuntu 18.04 | 1.17.7 | sealos-k8s-m1 | 192.168.1.151 | node-exporter, prometheus-federate-0 |
| Ubuntu 18.04 | 1.17.7 | sealos-k8s-m2 | 192.168.1.152 | node-exporter, grafana, alertmanager-0 |
| Ubuntu 18.04 | 1.17.7 | sealos-k8s-m3 | 192.168.1.150 | node-exporter, alertmanager-1 |
| Ubuntu 18.04 | 1.17.7 | sealos-k8s-node1 | 192.168.1.153 | node-exporter, prometheus-0, kube-state-metrics |
| Ubuntu 18.04 | 1.17.7 | sealos-k8s-node2 | 192.168.1.154 | node-exporter, prometheus-1 |
| Ubuntu 18.04 | 1.17.7 | sealos-k8s-node3 | 192.168.1.155 | node-exporter, prometheus-2 |
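
For reference, a cluster shaped like this can be brought up with sealos v3 in a single command. The flags below are a sketch based on the sealos 3.x CLI; the password and offline package path are placeholders, not values from this environment:

```bash
# Hypothetical sealos v3 invocation; substitute your own SSH password and offline package
sealos init --passwd 'YOUR_SSH_PASSWORD' \
  --master 192.168.1.151 --master 192.168.1.152 --master 192.168.1.150 \
  --node 192.168.1.153 --node 192.168.1.154 --node 192.168.1.155 \
  --pkg-url /root/kube1.17.7.tar.gz --version v1.17.7
```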

Deploy Grafana

Create the Grafana ServiceAccount manifest

```yaml
mkdir /data/manual-deploy/grafana/
cat grafana-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grafana
  namespace: kube-system
```

Create the Grafana StorageClass manifest

```yaml
cat grafana-data-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: grafana-lpv
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```
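
With `kubernetes.io/no-provisioner` nothing is provisioned dynamically: the PVC created later will stay Pending until its consuming pod is scheduled, and only then bind to a matching pre-created local PV, which we define next.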

Create the Grafana PersistentVolume manifest

```yaml
cat grafana-data-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv-0
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: grafana-lpv
  local:
    path: /data/grafana-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - sealos-k8s-m2
```

On the node where the pod will be scheduled, create the PV directory and set ownership

```bash
mkdir /data/grafana-data
chown -R 65534.65534 /data/grafana-data
```
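
Note: grafana/grafana images since 5.1 run as uid 472, and the StatefulSet below sidesteps ownership with a chmod 777 init container. If you want tighter permissions, an alternative (my suggestion, not part of the original steps) is to hand the directory to Grafana's uid on the PV node:

```bash
# Alternative to the chmod-777 init container: chown to Grafana's default uid/gid
chown -R 472:472 /data/grafana-data
```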

The dashboard ConfigMap is too large to paste here; download it yourself and change the namespace to match your deployment.

grafana-dashboard-configmap.yaml

```yaml
# Download it locally first
cat grafana-dashboard-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: grafana-dashboards
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
data:
....
```
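
If you keep the dashboard JSON files locally, you can generate this ConfigMap rather than hand-editing it. A minimal sketch, assuming the JSON files sit in a local dashboards/ directory (on 1.17 the bare --dry-run flag works; newer kubectl wants --dry-run=client); re-add the labels shown above afterwards:

```bash
kubectl -n kube-system create configmap grafana-dashboards \
  --from-file=dashboards/ \
  --dry-run -o yaml > grafana-dashboard-configmap.yaml
```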

Create the Grafana ConfigMap manifests. The Prometheus URL in the datasource below is an in-cluster DNS address; adjust it to your own environment.

```yaml
cat grafana-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
    - access: proxy
      isDefault: true
      name: prometheus
      type: prometheus
      url: http://prometheus-0.prometheus.kube-system.svc.cluster.local:9090
      version: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboardproviders
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
data:
  dashboardproviders.yaml: |
    apiVersion: 1
    providers:
    - disableDeletion: false
      editable: true
      folder: ""
      name: default
      options:
        path: /var/lib/grafana/dashboards
      orgId: 1
      type: file
```
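
The datasource URL only resolves if `prometheus` is a headless Service governing the Prometheus StatefulSet from the previous part. You can sanity-check it from inside the cluster with a throwaway pod:

```bash
# busybox:1.28 because nslookup in newer busybox images is unreliable
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup prometheus-0.prometheus.kube-system.svc.cluster.local
```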

I am not using the Secret here; wire it in yourself if needed. The StatefulSet below shows how to reference it, commented out.

```yaml
cat grafana-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: grafana-secret
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
type: Opaque
data:
  admin-user: YWRtaW4=
  admin-password: MTIzNDU2  # base64 of "123456"; Secret data values must be base64-encoded
```
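
Rather than base64-encoding values by hand, you can let kubectl generate the same Secret:

```bash
kubectl -n kube-system create secret generic grafana-secret \
  --from-literal=admin-user=admin \
  --from-literal=admin-password=123456
```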

Create the Grafana StatefulSet manifest

```yaml
cat grafana-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: grafana
  namespace: kube-system
  labels: &Labels
    k8s-app: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
spec:
  serviceName: grafana
  replicas: 1
  selector:
    matchLabels: *Labels
  template:
    metadata:
      labels: *Labels
    spec:
      serviceAccountName: grafana
      initContainers:
          - name: "init-chmod-data"
            image: debian:9
            imagePullPolicy: "IfNotPresent"
            command: ["chmod", "777", "/var/lib/grafana"]
            volumeMounts:
            - name: grafana-data
              mountPath: "/var/lib/grafana"
      containers:
        - name: grafana
          image: grafana/grafana:7.1.0
          imagePullPolicy: Always
          volumeMounts:
            - name: dashboards
              mountPath: "/var/lib/grafana/dashboards"
            - name: datasources
              mountPath: "/etc/grafana/provisioning/datasources"              
            - name: grafana-dashboardproviders
              mountPath: "/etc/grafana/provisioning/dashboards"
            - name: grafana-data
              mountPath: "/var/lib/grafana"
          ports:
            - name: service
              containerPort: 80
              protocol: TCP
            - name: grafana
              containerPort: 3000
              protocol: TCP
          env:
            - name: GF_SECURITY_ADMIN_USER
              value: "admin"
              #valueFrom:
              #  secretKeyRef:
              #    name: grafana-secret
              #    key: admin-user
            - name: GF_SECURITY_ADMIN_PASSWORD
              value: "admin"
              #valueFrom:
              #  secretKeyRef:
              #    name: grafana-secret
              #    key: admin-password
          livenessProbe:
            httpGet:
              path: /api/health
              port: 3000
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 60
            timeoutSeconds: 30
            failureThreshold: 10
            periodSeconds: 10
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 100Mi
      volumes:
        - name: datasources
          configMap:
            name: grafana-datasources
        - name: grafana-dashboardproviders
          configMap:
            name: grafana-dashboardproviders
        - name: dashboards
          configMap:
            name: grafana-dashboards
  volumeClaimTemplates:
  - metadata:
      name: grafana-data
    spec:
      storageClassName: "grafana-lpv"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "2Gi"
```
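
The PVC template asks for 2Gi while the PV offers 10Gi; that is fine, since a PV binds as long as its capacity covers the request. Once the pod is Running, you can hit the same health endpoint the probes use through a port-forward:

```bash
kubectl -n kube-system port-forward statefulset/grafana 3000:3000 &
curl -s http://127.0.0.1:3000/api/health
```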

Create the Service manifest for the Grafana StatefulSet

```yaml
cat grafana-service-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    k8s-app: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    k8s-app: grafana
```

Deploy

```bash
cd /data/manual-deploy/grafana
ls
grafana-configmap.yaml
grafana-dashboard-configmap.yaml
grafana-data-pv.yaml
grafana-data-storageclass.yaml
grafana-secret.yaml
grafana-serviceaccount.yaml
grafana-service-statefulset.yaml
grafana-statefulset.yaml
kubectl apply -f .
```

Verify

```bash
kubectl -n kube-system get sa,pod,svc,ep,sc,secret|grep grafana
serviceaccount/grafana                              1         1h
pod/grafana-0                                  1/1     Running   0          1h
service/grafana                   ClusterIP   10.101.176.62    <none>        80/TCP                         1h
endpoints/grafana                   100.73.217.86:3000                                                         1h
storageclass.storage.k8s.io/grafana-lpv               kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  33h
secret/grafana-token-lrsbd                              kubernetes.io/service-account-token   3      1h
```

Deploy ingress-nginx

```yaml
cd /data/manual-deploy/ingress-nginx
# Create the ingress-nginx namespace and Service
cat ingress-nginx-svc.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

# Create the mandatory manifest
cat ingress-nginx-mandatory.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container
```
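
Note that `rbac.authorization.k8s.io/v1beta1` here, and the `extensions/v1beta1` Ingress used later, still work on 1.17 but were removed in Kubernetes 1.22; on newer clusters use `rbac.authorization.k8s.io/v1` and `networking.k8s.io/v1` instead.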

Deploy

```bash
cd /data/manual-deploy/ingress-nginx
ls
ingress-nginx-mandatory.yaml
ingress-nginx-svc.yaml
kubectl apply -f .
```

Verify

```bash
kubectl -n ingress-nginx get pod,svc,ep
NAME                                            READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-6ffc8fdf96-45ksg   1/1     Running   0          3d12h
pod/nginx-ingress-controller-6ffc8fdf96-76rxj   1/1     Running   0          3d13h
pod/nginx-ingress-controller-6ffc8fdf96-xrhlp   1/1     Running   0          3d13h

NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx   NodePort   10.110.106.22   <none>        80:31926/TCP,443:31805/TCP   3d13h

NAME                      ENDPOINTS                                                        AGE
endpoints/ingress-nginx   192.168.1.153:80,192.168.1.154:80,192.168.1.155:80 + 3 more...   3d13h
```
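
Since the controller pods run with hostNetwork: true, ports 80/443 and the 10254 health port bind directly on the nodes hosting them; from any machine that can reach such a node you can check the controller, for example:

```bash
curl http://192.168.1.153:10254/healthz
```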

Configure Ingress access.

Prometheus ingress-nginx manifest

```yaml
cat prometheus-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "prometheus-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: prom.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus
          servicePort: 9090
  tls:
  - hosts:
      - prom.example.com
```

Alertmanager ingress-nginx manifest

```yaml
cat alertmanager-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alertmanager-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "alert-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: alert.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: alertmanager-operated
          servicePort: 9093
  tls:
  - hosts:
      - alert.example.com
```

Grafana ingress-nginx manifest

```yaml
cat grafana-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "grafana-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: http
  tls:
  - hosts:
      - grafana.example.com
```

Deploy

```bash
cd /data/manual-deploy/ingress-nginx
ls
alertmanager-ingress.yaml
grafana-ingress.yaml
prometheus-ingress.yaml
kubectl apply -f alertmanager-ingress.yaml
kubectl apply -f prometheus-ingress.yaml
kubectl apply -f grafana-ingress.yaml
```

Verify

```bash
kubectl -n kube-system get ingresses
NAME                   HOSTS                 ADDRESS         PORTS     AGE
alertmanager-ingress   alert.example.com     10.110.106.22   80, 443   15h
grafana-ingress        grafana.example.com   10.110.106.22   80, 443   30h
prometheus-ingress     prom.example.com      10.110.106.22   80, 443   15h
```

You can then map the domain names to the node IPs in your local DNS server or hosts file and access everything by hostname. I have not configured SSL certificates here; if you need TLS you will have to set it up yourself.
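
For a quick test without touching DNS, point the hostnames at any node running a controller pod in /etc/hosts (the IP below is from my environment), or just send a Host header with curl:

```bash
cat >> /etc/hosts <<'EOF'
192.168.1.153 prom.example.com alert.example.com grafana.example.com
EOF
curl -H "Host: grafana.example.com" http://192.168.1.153/
```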

That wraps up the entire manual deployment walkthrough. The components used here are fairly new releases, so unknown issues may crop up among their interdependencies; keep versions as consistent as possible.
