Using Flume and Kafka Together


On Windows, open the first cmd window and run the following commands to start the ZooKeeper service:

cd C:\zookeeper\apache-zookeeper-3.7.1-bin

.\bin\zkServer.cmd
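
Before moving on, you can optionally confirm that ZooKeeper is listening by connecting with the CLI client that ships in the same bin directory (this assumes the default client port 2181 from a stock configuration):

.\bin\zkCli.cmd -server localhost:2181

If the prompt connects, ZooKeeper is up; type quit to leave the client.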

Open a second cmd window and run the following commands to start the Kafka service:

cd C:\kafka_2.12-2.4.0\kafka_2.12-2.4.0

.\bin\windows\kafka-server-start.bat .\config\server.properties
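
To check that the broker came up, you can query it from another cmd window (a quick smoke test; this assumes the broker is on the default listener localhost:9092):

> .\bin\windows\kafka-broker-api-versions.bat --bootstrap-server localhost:9092

If the broker is running, this prints the API versions it supports instead of a connection error.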

Open a third cmd window and run the following commands to create a Topic named test:

> cd c:\kafka_2.12-2.4.0\kafka_2.12-2.4.0

> .\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
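
To verify that the topic was created, list the topics registered in ZooKeeper with the same tool:

> .\bin\windows\kafka-topics.bat --list --zookeeper localhost:2181

The output should include test.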

In the conf subdirectory of the Flume installation directory, create a configuration file named kafka.conf with the following content:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = test
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
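
In this configuration, the netcat source turns each line received on localhost:44444 into a Flume event, the memory channel buffers events, and the KafkaSink writes them to the test topic. The kafka.producer.* lines are passed through to the Kafka producer that the sink creates internally. As a rough illustration of what those overrides mean, here is a minimal standalone Java producer using the same settings (a sketch only, assuming the standard kafka-clients library is on the classpath; the class name SinkEquivalent and the hard-coded "hadoop" record are invented for this example):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SinkEquivalent {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // matches kafka.bootstrap.servers
        props.put("acks", "1");                           // matches kafka.producer.acks
        props.put("linger.ms", "1");                      // matches kafka.producer.linger.ms
        props.put("compression.type", "snappy");          // matches kafka.producer.compression.type
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each Flume event body becomes a record value on the topic from kafka.topic
            producer.send(new ProducerRecord<>("test", "hadoop"));
        }
    }
}

Here acks = 1 makes the producer wait only for the partition leader's acknowledgement, linger.ms = 1 lets it hold records for up to one millisecond so they can be batched, and compression.type = snappy compresses each batch before it is sent.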

Open a fourth cmd window and run the following commands to start Flume:

> cd c:\apache-flume-1.9.0-bin

> .\bin\flume-ng.cmd agent --conf ./conf --conf-file ./conf/kafka.conf --name a1 -property flume.root.logger=INFO,console

Open a fifth cmd window and run the following command:

> telnet localhost 44444

After running the command above, type some words in that window (they will not be echoed on screen), for example "hadoop". Each word is sent to Flume, which then forwards it to Kafka.
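
On recent Windows versions the telnet client is disabled by default (it can be enabled under "Turn Windows features on or off"). Any TCP client works in its place; for example, here is a minimal Java sketch that sends one line to the netcat source (the class name SendToFlume is invented for this example):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SendToFlume {
    public static void main(String[] args) throws Exception {
        // Connect to the netcat source defined in kafka.conf (localhost:44444)
        try (Socket socket = new Socket("localhost", 44444);
             OutputStream out = socket.getOutputStream()) {
            // The netcat source treats each newline-terminated line as one Flume event
            out.write("hadoop\n".getBytes(StandardCharsets.UTF_8));
        }
    }
}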

Open a sixth cmd window and run the following commands:

> cd c:\kafka_2.12-2.4.0\kafka_2.12-2.4.0

> .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning

Once this command runs, "hadoop" appears on the screen, which shows that Kafka successfully received the data.
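
The console consumer is handy for a smoke test, but an application would normally read the topic through a Kafka consumer client. Here is a minimal Java sketch of the equivalent of the command above (assuming kafka-clients on the classpath; the class name ReadFromKafka and the group id flume-demo are invented for this example):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ReadFromKafka {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "flume-demo");        // invented group id for this example
        props.put("auto.offset.reset", "earliest"); // same effect as --from-beginning
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                // Poll for new records and print each value as it arrives
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // should print "hadoop"
                }
            }
        }
    }
}

Setting auto.offset.reset = earliest gives a new consumer group the same behavior as the --from-beginning flag.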
