Kafka Common Errors Collection (Part 1)

2024-03-03 15:38:36

1. FETCH_SESSION_TOPIC_ID_ERROR

When consuming a topic with the Java client, the consumer fails with:

The fetch session encountered inconsistent topic ID usage
Caused by: org.apache.kafka.common.KafkaException: Unexpected error in join group response: The fetch session encountered inconsistent topic ID usage

How to handle it

FETCH_SESSION_TOPIC_ID_ERROR here means automatic consumer group creation is disabled. Either enable it, or create the consumer group manually before consuming.

2. Symptom: the customer cannot connect using ip:port; the client reports kafka: client has run out of available brokers to talk to: EOF

Network connectivity is fine.

After adding config.Version = sarama.V1_1_1_0 to the client configuration, sends succeed.
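A minimal sketch of the fix with sarama; the broker address ip:port and the topic test-topic are placeholders, not from the original:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// Pin the protocol version to match the broker; without it, sarama
	// may negotiate request versions the broker cannot handle.
	config.Version = sarama.V1_1_1_0
	config.Producer.Return.Successes = true // required by SyncProducer

	producer, err := sarama.NewSyncProducer([]string{"ip:port"}, config)
	if err != nil {
		log.Fatalf("failed to create producer: %v", err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "test-topic",
		Value: sarama.StringEncoder("hello"),
	})
	if err != nil {
		log.Fatalf("send failed: %v", err)
	}
	log.Printf("sent to partition %d at offset %d", partition, offset)
}
```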

3. Using github.com/Shopify/sarama v1.32.0 to connect to Kafka 0.10.2 fails

panic: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

client/metadata fetching metadata for all topics from broker [xxx.xxx.xxx.xxx:6169]
Connected to broker at [xxx.xxx.xxx.xxx:6169] (unregistered)
client/metadata got error from broker [-1 824634286320] while fetching metadata: %!v(MISSING)
Closed connection to broker [xxx.xxx.xxx.xxx:6169]
[client/metadata no available broker to send metadata request to]
client/brokers resurrecting [1] dead seed brokers
[Closing Client]

Server-side error:

[2023-07-26 14:03:40,853] ERROR Closing socket for xxx.xxx.xxx.xxx:57918-92605684 because of error (kafka.network.Processor)

org.apache.kafka.common.errors.InvalidRequestException: Error getting request for apiKey: 3 and apiVersion: 5

Caused by: java.lang.IllegalArgumentException: Invalid version for API key 3: 5

The client needs to pin the protocol version explicitly. By default, sarama v1.32.0 negotiates request versions (here Metadata v5, apiKey 3) that a 0.10.2 broker cannot parse, so config.Version must be set to match the broker.
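A minimal sketch of the version pin with sarama, reusing the placeholder broker address from the logs:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// Match the 0.10.2 broker so sarama stops sending request versions
	// (such as Metadata v5) that the broker cannot parse.
	config.Version = sarama.V0_10_2_0

	client, err := sarama.NewClient([]string{"xxx.xxx.xxx.xxx:6169"}, config)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	defer client.Close()
	log.Println("metadata negotiation succeeded")
}
```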

4. Consumer reports a retriable exception

Error message: [Consumer clientId=consumer-6, groupId=-test] Asynchronous auto-commit of offsets {gactravel.charge.estimate.result.bi.topic-0=OffsetAndMetadata{offset=5499679, metadata=''}} failed: Offset commit failed with a retriable exception. You should retry committing the latest consumed offsets.

Could someone help look into the cause of this error? It is caused by the asynchronous commit being used.

See this article: https://baijiahao.baidu.com/s?id=1761383239525043271&wfr=spider&for=pc

You can switch to synchronous commits and observe whether the error persists.
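The log above comes from the Java client; as an illustration of the same change in Go, here is a sketch using confluent-kafka-go with auto-commit disabled and a blocking per-message commit (the library choice, broker address, group id, and topic are all assumptions, not from the original):

```go
package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers":  "ip:port",    // placeholder
		"group.id":           "test-group", // placeholder
		"enable.auto.commit": false,        // turn off asynchronous auto-commit
		"auto.offset.reset":  "earliest",
	})
	if err != nil {
		log.Fatalf("failed to create consumer: %v", err)
	}
	defer consumer.Close()

	if err := consumer.SubscribeTopics([]string{"test-topic"}, nil); err != nil {
		log.Fatalf("subscribe failed: %v", err)
	}

	for {
		msg, err := consumer.ReadMessage(-1) // block until a message arrives
		if err != nil {
			log.Printf("read error: %v", err)
			continue
		}
		// ... process msg.Value ...

		// CommitMessage blocks until the broker acknowledges the offset,
		// i.e. a synchronous commit: failures surface immediately.
		if _, err := consumer.CommitMessage(msg); err != nil {
			log.Printf("commit failed: %v", err)
		}
	}
}
```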

5. Solution for the CKafka message queue error kafka.Error {"event": "Application maximum poll interval (300000ms) exceeded by xxx ms"}

The error clearly indicates a maximum poll interval timeout: the consumer went longer than max.poll.interval.ms=300000 without polling and was evicted from the group.

When using Kafka's consumer group mechanism, this parameter is the maximum interval allowed between two calls to poll. If poll is not called again within that window, the consumer is considered failed, and the broker initiates a rebalance to reassign its partitions to other consumers.

Suggested fix: increase the max.poll.interval.ms parameter.
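A minimal sketch of the change, again assuming confluent-kafka-go (whose error format matches the log above); the broker address, group id, and the 600000 value are placeholders to adapt to your workload:

```go
package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "ip:port",    // placeholder
		"group.id":          "test-group", // placeholder
		// Allow up to 10 minutes between poll calls before the broker
		// treats this consumer as failed and rebalances its partitions.
		"max.poll.interval.ms": 600000,
	})
	if err != nil {
		log.Fatalf("failed to create consumer: %v", err)
	}
	defer consumer.Close()
}
```

If the interval cannot simply be raised, the alternative is to make per-message processing faster, for example by moving slow work off the polling goroutine.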

More errors will be added here as they come up; stay tuned.
