8. [Kafka Operations] Viewing data with kafka-dump-log.sh


1. Viewing log files with kafka-dump-log.sh

| Parameter | Description | Example |
| --- | --- | --- |
| --deep-iteration | If set, uses deep instead of shallow iteration when reading the log. | |
| --files <String: file1, file2, ...> | Required; the log file(s) to read, comma-separated. | --files 0000009000.log |
| --key-decoder-class | If set, used to deserialize the keys. The class should implement the kafka.serializer.Decoder trait; a custom jar should be placed in the kafka/libs directory. | |
| --max-message-size | Maximum message size; default: 5242880. | |
| --offsets-decoder | If set, log data will be parsed as offset data from the __consumer_offsets topic (example below). | |
| --print-data-log | Print the message contents. | |
| --transaction-log-decoder | If set, log data will be parsed as transaction metadata from the __transaction_state topic (example below). | |
| --value-decoder-class [String] | If set, used to deserialize the messages. The class should implement the kafka.serializer.Decoder trait; a custom jar should be placed in the kafka/libs directory. (default: kafka.serializer.StringDecoder) | |
| --verify-index-only | If set, just verify the index log without printing its content. | |
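
As a quick sketch of the two decoder switches from the table, they can be pointed at segments of the internal topics. The partition directories and segment name below (__consumer_offsets-0, __transaction_state-0, 00000000000000000000.log) are placeholders; substitute whatever actually exists under your log directory.

sh bin/kafka-dump-log.sh --files kafka-logs-0/__consumer_offsets-0/00000000000000000000.log --offsets-decoder --print-data-log

sh bin/kafka-dump-log.sh --files kafka-logs-0/__transaction_state-0/00000000000000000000.log --transaction-log-decoder --print-data-log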

Querying a log file

sh bin/kafka-dump-log.sh --files kafka-logs-0/test2-0/00000000000000000300.log

Querying the detailed contents of a log file with --print-data-log

sh bin/kafka-dump-log.sh --files kafka-logs-0/test2-0/00000000000000000300.log --print-data-log
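
If the keys and values are plain strings there is normally nothing extra to configure (the table lists kafka.serializer.StringDecoder as the default value decoder); the sketch below only spells the decoder classes out explicitly to show where --key-decoder-class and --value-decoder-class fit.

sh bin/kafka-dump-log.sh --files kafka-logs-0/test2-0/00000000000000000300.log --print-data-log --key-decoder-class kafka.serializer.StringDecoder --value-decoder-class kafka.serializer.StringDecoder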

Querying the detailed contents of an index file

sh bin/kafka-dump-log.sh --files kafka-logs-0/test2-0/00000000000000000300.index

The configuration item log.index.size.max.bytes controls the maximum size of the index file that gets created.
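
To only check that the index is consistent, without dumping its entries, add the --verify-index-only switch from the table:

sh bin/kafka-dump-log.sh --files kafka-logs-0/test2-0/00000000000000000300.index --verify-index-only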

Querying a timeindex file

sh bin/kafka-dump-log.sh --files kafka-logs-0/test2-0/00000000000000000300.timeindex
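
Since --files accepts a comma-separated list, the .log, .index, and .timeindex files of the same segment can also be dumped in a single invocation:

sh bin/kafka-dump-log.sh --files kafka-logs-0/test2-0/00000000000000000300.log,kafka-logs-0/test2-0/00000000000000000300.index,kafka-logs-0/test2-0/00000000000000000300.timeindex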
