Installing Kafka with Docker (docker-compose) and Connecting from the Host Machine

2022-09-25 10:58:05


  1. Install Docker and docker-compose. See the official documentation for installation instructions, then check that docker-compose is installed correctly (for example with docker-compose --version).
  2. Create a docker-compose.yml file
version: '2'
services:
  zookeeper:
    image: "zookeeper"
    hostname: "zookeeper.local"
    container_name: "zookeeper"
    # network alias; can be any name
    networks:
      local:
        aliases:
          - "zookeeper.local"
  kafka:
    image: "wurstmeister/kafka"
    hostname: "kafka.local"
    container_name: "kafka"
    ports:
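      # map the broker port to the host so clients outside Docker can reach it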
      - "9092:9092"
    networks:
      local:
        aliases:
          - "kafka.local"
    environment:
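      # the hostname the broker advertises to clients; the host machine must be able to resolve it too (see step 6.3)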
      KAFKA_ADVERTISED_HOST_NAME: kafka.local
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
# define a bridge network named local
networks:
  local:
    driver: bridge
  3. Enter the Kafka container
docker exec -it kafka /bin/bash
  4. Create a topic
/opt/kafka_2.13-2.7.0/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic test_topic
  5. Test producing and consuming from the command line
  • Producer
/opt/kafka_2.13-2.7.0/bin/kafka-console-producer.sh --broker-list kafka.local:9092 --topic test_topic
  • Consumer
/opt/kafka_2.13-2.7.0/bin/kafka-console-consumer.sh --bootstrap-server kafka.local:9092 --topic test_topic --from-beginning

6. Connect to Kafka from the host machine using code

6.1 Enter the Zookeeper container

# Enter the container
docker exec -it zookeeper /bin/bash

# Start the Zookeeper CLI
bin/zkCli.sh

6.2 View the broker registration info

get /brokers/ids/1001

The returned JSON should list kafka.local as the advertised host and 9092 as the port; this is the address the broker hands out to clients, so the host machine must be able to resolve it. (If the broker registered under a different id, ls /brokers/ids shows the registered ids.)

6.3 Configure the hosts file on the host machine

Since clients are told to connect to kafka.local, the host machine needs to resolve that name to the Docker host:

# Add the following entry to the hosts file
127.0.0.1	kafka.local
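
Optionally, connectivity from the host can be sanity-checked with the Kafka AdminClient before writing the consumer. The class below is a minimal sketch and not part of the original walkthrough; it assumes the kafka-clients dependency is on the classpath and the hosts entry above is in place (the class name KafkaConnectivityCheck is illustrative). It prints the broker addresses the cluster advertises, which should be kafka.local:9092.

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class KafkaConnectivityCheck {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        // Point the admin client at the advertised broker address
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.local:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Print the brokers as the cluster advertises them; expected output: kafka.local:9092
            admin.describeCluster().nodes().get()
                    .forEach(node -> System.out.println(node.host() + ":" + node.port()));
        }
    }
}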

6.4 Connect to Kafka with Java code

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaConsumerDemo {

	public static void main(String[] args) {
		// 1. Configure the consumer connection properties
		Properties props = new Properties();
		props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.local:9092");
		props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
		props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
		props.put(ConsumerConfig.GROUP_ID_CONFIG, "test_demo");

		// 2. Create the Kafka consumer
		KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

		// 3. Subscribe to topics
		consumer.subscribe(Arrays.asList("test_topic"));
		// consumer.assign(Arrays.asList(new TopicPartition("test_topic", 0)));
		// consumer.seekToBeginning(Arrays.asList(new TopicPartition("test_topic", 0))); // does not change the current offset

		// 4. Read messages in an endless loop
		for (; ; ) {
			ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));

			if (records != null && !records.isEmpty()) {
				records.forEach(r -> {
					System.out.println("key:" + r.key() + "----value:" + r.value());
				});
			}
		}
	}
}

Messages sent from the console producer inside the container are now printed by the consumer running on the host. Test successful.
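
For completeness, messages can also be produced from the host with the Java client rather than the console producer. The class below is a minimal sketch, not part of the original walkthrough; it assumes the same kafka-clients dependency and that kafka.local:9092 is reachable as configured above (the class name KafkaProducerDemo and the sample key/value are illustrative).

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaProducerDemo {

    public static void main(String[] args) {
        // Same bootstrap address as the consumer demo above
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.local:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send a single record; the running KafkaConsumerDemo should print it
            producer.send(new ProducerRecord<>("test_topic", "key1", "hello from the host"));
            producer.flush();
        }
    }
}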

