Kubernetes on AWS EKS in Practice - Collecting Logs with an Agent

2023-08-23 10:18:23

The Kubernetes on AWS EKS in Practice series is based on real enterprise deployments; some settings will need to be adapted to your own company's requirements. If you would like to discuss or ask questions, feel free to add me on WeChat (reply 程序员修炼笔记 in the official account backend to get my contact information).

Sidecar-Based Log Collection

The sidecar approach is relatively flexible, but adding an extra container to every Pod also means extra resource consumption.

Agent-Based Log Collection

With the agent-based approach, all configuration lives in a single ConfigMap, so the configuration can get fairly long (filebeat.autodiscover helps with this), but because the agent runs as a DaemonSet it greatly reduces resource consumption.

Configuring the filebeat-config ConfigMap in kube-system

Edit the filebeat-config ConfigMap; this is effectively editing Filebeat's configuration file, as shown below.

Because certificate authentication is enabled on our production Elasticsearch, the CA certificate is also stored here so that the DaemonSet can use it when connecting. The input type I chose is container; you can see that I collect both our nginx logs (our nginx logs are in JSON format) and the xxx-app logs. If your log formats are uniform and handled in similar ways, I recommend filebeat.autodiscover, which can greatly reduce the amount of configuration you have to write; see the sketch after the ConfigMap below.

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  elasticsearch-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    XXXXXXX
    -----END CERTIFICATE-----
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      ignore_older: 1h
      paths:
        - /var/log/containers/ingress-nginx-controller*.log
      fields:
        project: k8s-nginx
      json.keys_under_root: true
      json.add_error_key: true
      json.overwrite_keys: true
      json.expand_keys: true

    - type: container
      paths:
        - "/var/log/containers/xxx-app*.log"
      multiline:
        type: pattern
        pattern: '^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}'
        negate: true
        match: after
      fields:
        project: xxx-app
      pipeline: application-service-log-pipeline

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      allow_older_versions: "true"
      protocol: "https"
      ssl.certificate_authorities: /etc/filebeat/certs/elasticsearch-ca.pem
      indices:
        - index: "%{[fields.project]}-%{+yyyy.MM.dd}"
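If every service emits logs in a uniform format, the explicit per-path inputs above can be replaced with filebeat.autodiscover, as mentioned earlier. The following is a minimal hints-based sketch following the general pattern in the Elastic documentation, not our production config; it relies on the NODE_NAME variable that the DaemonSet below already injects, and individual Pods opt in or customize collection through co.elastic.logs/* annotations:

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          # Fallback input applied to Pods that do not declare their own hints
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log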
 

Modifying the DaemonSet

The main change here is the tolerations, so that the Filebeat DaemonSet can also run on tainted nodes. You also need to set the Elasticsearch username, password, and port, as well as the SSL certificate, as shown below (the snippet only shows where my configuration differs from the official one; the complete configuration is at the end of this article):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  ...
    spec:
      ...
      tolerations:
        - key: subnet-type.kubernetes.io
          operator: Equal
          value: public
          effect: NoSchedule
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.6.1
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: xxx
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: xxxx
            - name: ELASTICSEARCH_PASSWORD
              value: xxxxx
          ...
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat/certs/elasticsearch-ca.pem
              readOnly: true
              subPath: elasticsearch-ca.pem
            ...
      volumes:
        - name: config
          configMap:
            defaultMode: 0640
            name: filebeat-config
        ...
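The toleration above only matches nodes tainted with subnet-type.kubernetes.io=public:NoSchedule. If you instead want the log agent to land on every node regardless of which taints are present, a blanket toleration is a common alternative (a sketch, not part of my original manifest):

      tolerations:
        # An empty key with operator Exists matches every taint,
        # so the DaemonSet gets scheduled onto all nodes.
        - operator: Exists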
 

Complete Filebeat DaemonSet Deployment YAML

After adjusting the parts mentioned above to match your own environment, you can apply the YAML below directly to deploy Filebeat. Once it is running, if you need to collect logs from a new source, simply edit the ConfigMap and redeploy the DaemonSet; a hypothetical example of such an additional input follows the manifest.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
  - apiGroups: ["apps"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["batch"]
    resources:
      - jobs
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  elasticsearch-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    XXXXXXX
    -----END CERTIFICATE-----
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      ignore_older: 1h
      paths:
        - /var/log/containers/ingress-nginx-controller*.log
      fields:
        project: k8s-nginx
      json.keys_under_root: true
      json.add_error_key: true
      json.overwrite_keys: true
      json.expand_keys: true

    - type: container
      paths:
        - "/var/log/containers/xxx-app*.log"
      multiline:
        type: pattern
        pattern: '^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}'
        negate: true
        match: after
      fields:
        project: xxx-app
      pipeline: application-service-log-pipeline

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      allow_older_versions: "true"
      protocol: "https"
      ssl.certificate_authorities: /etc/filebeat/certs/elasticsearch-ca.pem
      indices:
        - index: "%{[fields.project]}-%{+yyyy.MM.dd}"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:
        - key: subnet-type.kubernetes.io
          operator: Equal
          value: public
          effect: NoSchedule
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.6.1
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: xxx
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: xxxx
            - name: ELASTICSEARCH_PASSWORD
              value: xxxxx
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            # privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: config
              mountPath: /etc/filebeat/certs/elasticsearch-ca.pem
              readOnly: true
              subPath: elasticsearch-ca.pem
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0640
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
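As noted above, collecting a new log source later only requires appending another input to filebeat.yml in the ConfigMap and redeploying the DaemonSet. A hypothetical example for an additional service (the another-app name and path are placeholders):

    - type: container
      paths:
        - "/var/log/containers/another-app*.log"
      fields:
        # becomes the index prefix via %{[fields.project]} in output.elasticsearch.indices
        project: another-app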
 
