Testing the Flume-Kafka Connection



Flume configuration file (the configuration used to connect to Kafka):

# File name: kafka.properties

# Configuration contents:

On the Linux system, create two directories: one to store the configuration file (flumetest) and one to store the files that need to be read (flume).
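A minimal sketch of creating them, assuming the Flume installation path that appears later in the flume-ng command (the location of the flume data directory is an assumption):

[hadoop@hadoop02 ~]$ mkdir -p /home/hadoop/apps/apache-flume-1.8.0-bin/flumetest   # holds kafka.properties
[hadoop@hadoop02 ~]$ mkdir -p /home/hadoop/apps/apache-flume-1.8.0-bin/flume       # holds the files to be read (path assumed)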

a1.sources = s1
a1.channels = c1
a1.sinks = k1

# netcat source: listens for plain-text lines on 192.168.123.102:44455
a1.sources.s1.type = netcat
a1.sources.s1.bind = 192.168.123.102
a1.sources.s1.port = 44455

# memory channel: buffers events between the source and the sink
a1.channels.c1.type = memory

# Kafka sink: writes each event to topic t1 on the broker at 192.168.123.103:9092
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = t1
a1.sinks.k1.kafka.bootstrap.servers = 192.168.123.103:9092

# wire the source and the sink to the channel
a1.sources.s1.channels = c1
a1.sinks.k1.channel = c1
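The memory channel above relies on Flume's default sizing. If the defaults are too small, its capacity can be set explicitly; a sketch with illustrative values that are not part of the original configuration:

# optional: how many events the channel can hold, and how many per transaction (illustrative values)
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100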

ZooKeeper must be started first.
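How ZooKeeper is started depends on the installation; assuming a standalone ZooKeeper on the cluster nodes with zkServer.sh on the PATH (the original does not show this step), something like:

[hadoop@hadoop02 ~]$ zkServer.sh start   # run on each ZooKeeper node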

Start the Kafka cluster (every node in the configuration must be started):

[hadoop@hadoop02 kafka_2.11-1.0.0]$ bin/kafka-server-start.sh config/server.properties

The Kafka cluster must already have the topic t1, since that is the topic the sink writes to:

a1.sinks.k1.kafka.topic = t1
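If the topic does not exist yet, it can be created with Kafka's topic tool; a sketch assuming one partition and one replica (adjust to match the cluster):

[hadoop@hadoop02 kafka_2.11-1.0.0]$ bin/kafka-topics.sh --create --zookeeper hadoop02:2181 --replication-factor 1 --partitions 1 --topic t1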

Start Flume:

[hadoop@hadoop02 apache-flume-1.8.0-bin]$ flume-ng agent --conf conf --conf-file /home/hadoop/apps/apache-flume-1.8.0-bin/flumetest/kafka.properties --name a1 -Dflume.root.logger=INFO,console

On hadoop03, start a Kafka console consumer to watch the messages:

[hadoop@hadoop03 kafka_2.11-1.0.0]$ bin/kafka-console-consumer.sh --zookeeper hadoop02:2181 --from-beginning --topic t1       
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
ok
aaa
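As the warning above notes, the ZooKeeper-based console consumer is deprecated; the same check can be done with the new consumer by passing a broker address instead (address taken from the sink configuration):

[hadoop@hadoop03 kafka_2.11-1.0.0]$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.123.103:9092 --from-beginning --topic t1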

Then connect to the netcat source from hadoop02:

[hadoop@hadoop02 kafka_2.11-1.0.0]$ telnet 192.168.123.102 44455  
Trying 192.168.123.102...
Connected to 192.168.123.102.
Escape character is '^]'.
aaa
OK

The aaa sent here shows up in the Kafka consumer output on hadoop03, confirming that Flume is forwarding events to Kafka.

