After changing the pipeline from multiple sources to multiple outputs, Fluentd stopped collecting logs from some of the original pods and reported the following error:
2022-12-05 02:23:16.388480528 +0000 fluent.warn: {"tag":"kubernetes.var.log.containers.flink-stream2sream-77d587bd84-kh5gn_flink-cluster_flink-job-manager-bf525d2c5de239b0a369946e6fb213d33bc1e9a63cd1aed7cb21f25107fc0d57.log",
"message":"no patterns matched tag=\"kubernetes.var.log.containers.flink-stream2sream-77d587bd84-kh5gn_flink-application-cluster_flink-job-manager-bf525d2c5de239b0a369946e6fb213d33bc1e9a63cd1aed7cb21f25107fc0d57.log\""}
Investigation showed that the match pattern in the output configuration no longer matched this tag.
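To see why the pattern failed, recall Fluentd's match-pattern rules: within a pattern, `*` matches exactly one dot-delimited tag part, `**` matches zero or more parts, and space-separated patterns inside one `<match>` are OR-ed. The sketch below is a rough Python approximation of these semantics (illustrative only, not Fluentd's actual implementation; the tag is shortened for readability):

```python
import re

def fluentd_match(patterns: str, tag: str) -> bool:
    """Approximate Fluentd <match> pattern semantics.

    *    matches exactly one dot-delimited tag part
    .**  matches zero or more trailing parts
    **   matches any sequence of parts
    Space-separated patterns are OR-ed, as in <match a.* b.**>.
    """
    for pattern in patterns.split():
        out = []
        i = 0
        while i < len(pattern):
            if pattern.startswith(".**", i):
                out.append(r"(\..*)?")  # zero or more extra parts
                i += 3
            elif pattern.startswith("**", i):
                out.append(r".*")
                i += 2
            elif pattern[i] == "*":
                out.append(r"[^.]+")    # exactly one tag part
                i += 1
            else:
                out.append(re.escape(pattern[i]))
                i += 1
        if re.fullmatch("".join(out), tag):
            return True
    return False

tag = "kubernetes.var.log.containers.some-pod.log"
print(fluentd_match("raw.kubernetes.*", tag))                # False: tag lacks the raw. prefix
print(fluentd_match("raw.kubernetes.* kubernetes.**", tag))  # True: kubernetes.** matches
```

The failing tag starts with `kubernetes.`, not `raw.kubernetes.`, so the narrowed pattern could never match it; adding `kubernetes.**` to the same `<match>` restores delivery.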
# Original match directive: <match **>
# Directive that triggered the error: <match raw.kubernetes.*>
# Fix: add a pattern for the reported tag to the match directive: <match raw.kubernetes.* kubernetes.**>
output.conf:
<match raw.kubernetes.* kubernetes.**>
  @id elasticsearch
  @type elasticsearch
  @log_level info
  logstash_format true
  logstash_prefix foobar
  pipeline foobar_pipe
  include_tag_key true
  include_timestamp true
  hosts localhost:9200
  request_timeout 30s
  <buffer>
    @type file
    path /data/logs/fluentd-buffers/foobar.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever
    retry_max_interval 20
    chunk_limit_size 2M
    total_limit_size 256M
    overflow_action block
    timekey 20
    timekey_wait 120
    timekey_zone +0800
  </buffer>
</match>
<match foobar.**>
  @id foobar-es
  @type elasticsearch
  @log_level info
  logstash_format true
  logstash_prefix lakehouse-task
  pipeline lakehouse_task_pipeline
  include_tag_key true
  include_timestamp true
  hosts localhost:9200
  request_timeout 30s
  <buffer>
    @type file
    path /data/logs/fluentd-buffers/foobar.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever
    retry_max_interval 20
    chunk_limit_size 2M
    total_limit_size 256M
    overflow_action block
    timekey 20
    timekey_wait 120
    timekey_zone +0800
  </buffer>
</match>