Kafka Cluster Installation


🧙 Questions

Install a Kafka cluster (kafka_2.12-2.6.2)

☄️ Ideas

Prerequisites

Three servers (public IP / internal IP):

main : 106.15.107.138 172.19.189.247
node1: 139.224.102.176 172.19.189.246
node2: 106.14.69.245 172.19.189.248
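
The nodes need to reach each other on the ZooKeeper ports (2181, 2888, 3888) and the Kafka port (9092). On cloud servers these are usually opened in the provider's security group; the snippet below is only a minimal sketch for hosts that also run firewalld locally, and can be skipped if firewalld is disabled.

# Only needed if firewalld is active on the host
for port in 2181 2888 3888 9092; do
  firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload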

Download

Download the package on all three servers

cd /tmp
wget https://mirrors.huaweicloud.com/apache/kafka/2.6.2/kafka_2.12-2.6.2.tgz --no-check-certificate
mkdir -p /data/kafka
tar -vzxf kafka_2.12-2.6.2.tgz -C /data/kafka
ln -s /data/kafka/kafka_2.12-2.6.2 /opt/kafka
rm -rf /tmp/kafka_2.12-2.6.2.tgz

Install Java

yum install java-1.8.0-openjdk-devel java-1.8.0-openjdk -y
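
A quick sanity check that the JDK is actually on the PATH before continuing:

java -version
# should print something like: openjdk version "1.8.0_..."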

Configure ZooKeeper and Kafka environment variables

vim /etc/profile

# === vim /etc/profile ===
export KAFKA_HOME=/opt/kafka
export ZOOKEEPER_HOME=/opt/kafka
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin
# === vim /etc/profile ===

source /etc/profile
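
To confirm the variables took effect in the current shell (the path is the symlink created above):

echo $KAFKA_HOME
# /opt/kafka
which kafka-topics.sh
# /opt/kafka/bin/kafka-topics.sh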

Configure the bundled ZooKeeper

vim /opt/kafka/config/zookeeper.properties

Note: in the server.N entries below, replace the current server's own IP with 0.0.0.0 (an example follows the config block).

# Newly added
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs

# ... other config

# Client connection port (default 2181)
clientPort=2181

# Cluster initialization timeout (in multiples of tickTime)
initLimit=10

# Follower sync timeout (in multiples of tickTime)
syncLimit=5

# Basic time unit (milliseconds)
tickTime=2000

# Maximum number of client connections
maxClientCnxns=60

# Number of snapshot files to retain
autopurge.snapRetainCount=3

# Purge interval (hours)
autopurge.purgeInterval=1

# Cluster members
server.1=172.19.189.247:2888:3888
server.2=172.19.189.246:2888:3888
server.3=172.19.189.248:2888:3888
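
As noted above, each server replaces its own entry with 0.0.0.0 so ZooKeeper binds on all local interfaces. For example, on main (172.19.189.247) the server list would look like this; node1 and node2 adjust their own line in the same way:

# zookeeper.properties on main
server.1=0.0.0.0:2888:3888
server.2=172.19.189.246:2888:3888
server.3=172.19.189.248:2888:3888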

Add the myid file (required on every server)

Note: the value must match the number in the corresponding server.${number} entry in zookeeper.properties

mkdir -p /data/zookeeper/data
mkdir -p /data/zookeeper/logs
vim /data/zookeeper/data/myid
1
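
The value differs per node and must match the server.N entries above; instead of vim it can also be written directly:

# main
echo 1 > /data/zookeeper/data/myid
# node1
echo 2 > /data/zookeeper/data/myid
# node2
echo 3 > /data/zookeeper/data/myid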

Start the ZooKeeper cluster

The first nodes to start will log connection errors until all three are up; once every node has started, the errors stop.

cd /opt/kafka/bin
./zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
tail -f /opt/kafka/logs/zookeeper.out

Verify the ZooKeeper cluster

# Check that the port is listening
netstat -ntpl | grep 2181
# 使用命令查看
zookeeper-shell.sh 172.19.189.247:2181,172.19.189.246:2181,172.19.189.248:2181 ls /zookeeper
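
To see which node was elected leader, the srvr four-letter command can be used; it is whitelisted by default in the ZooKeeper 3.5 bundled with this Kafka release. This assumes nc (ncat) is installed on the host:

echo srvr | nc 127.0.0.1 2181
# the "Mode:" line shows leader or follower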

Modify the Kafka configuration file

mkdir -p /data/kafka/logs
vim /opt/kafka/config/server.properties

broker.id must be unique on each broker
listeners must be an IP address, and it must be the internal IP
advertised.listeners must be an IP address, and it must be the public IP

# server.properties on main (internal 172.19.189.247, public 106.15.107.138)
broker.id=1
listeners=PLAINTEXT://172.19.189.247:9092
advertised.listeners=PLAINTEXT://106.15.107.138:9092
log.dirs=/data/kafka/logs
zookeeper.connect=172.19.189.247:2181,172.19.189.246:2181,172.19.189.248:2181
delete.topic.enable=true

# Replication factor for the __consumer_offsets topic
offsets.topic.replication.factor=3

# Default replication factor for new topics
default.replication.factor=3

# Minimum number of in-sync replicas
min.insync.replicas=2
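
The block above is for main. On node1 and node2 only broker.id, listeners, and advertised.listeners change, using each node's own internal and public IPs from the server list at the top; for example:

# node1 (internal 172.19.189.246, public 139.224.102.176)
broker.id=2
listeners=PLAINTEXT://172.19.189.246:9092
advertised.listeners=PLAINTEXT://139.224.102.176:9092

# node2 (internal 172.19.189.248, public 106.14.69.245)
broker.id=3
listeners=PLAINTEXT://172.19.189.248:9092
advertised.listeners=PLAINTEXT://106.14.69.245:9092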

Start the Kafka cluster

# To stop the broker later: ./kafka-server-stop.sh
cd /opt/kafka/bin
./kafka-server-start.sh -daemon /opt/kafka/config/server.properties
tail -f /opt/kafka/logs/server.log

Verify the Kafka cluster

# Check that the port is listening
netstat -ntpl | grep 9092

# Verify the cluster works: create a topic and check that it shows up on the other brokers
kafka-topics.sh --create --bootstrap-server 172.19.189.247:9092,172.19.189.246:9092,172.19.189.248:9092 --topic zhiqingyun-logs --replication-factor 1 --partitions 3
kafka-topics.sh --describe --zookeeper 172.19.189.247:2181,172.19.189.246:2181,172.19.189.248:2181
kafka-topics.sh --bootstrap-server 172.19.189.247:9092,172.19.189.246:9092,172.19.189.248:9092 --list 
kafka-topics.sh --bootstrap-server 172.19.189.247:9092,172.19.189.246:9092,172.19.189.248:9092 --describe --topic zhiqingyun-logs
kafka-topics.sh --delete --bootstrap-server 172.19.189.247:9092,172.19.189.246:9092,172.19.189.248:9092 --topic ispong_topic

# Produce on one node, consume on the others
kafka-console-producer.sh --broker-list 172.19.189.247:9092,172.19.189.246:9092,172.19.189.248:9092 --topic app2-logs
kafka-console-consumer.sh --bootstrap-server 172.19.189.247:9092,172.19.189.246:9092,172.19.189.248:9092 --topic app2-logs --from-beginning
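
A non-interactive variant of the same check, useful for scripting (the message text is arbitrary; the flags are standard console producer/consumer options):

echo "hello from main" | kafka-console-producer.sh --broker-list 172.19.189.247:9092 --topic app2-logs
kafka-console-consumer.sh --bootstrap-server 172.19.189.246:9092 --topic app2-logs --from-beginning --max-messages 1 --timeout-ms 10000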

# Show topic details
kafka-topics.sh --describe --bootstrap-server 192.168.25.175:30120 --topic flink_log

# Alter the number of partitions (can only be increased, never decreased)
kafka-topics.sh --alter --bootstrap-server 192.168.25.175:30120 --topic spark_client --partitions 3

# Check consumer group status and lag
kafka-consumer-groups.sh --bootstrap-server 192.168.25.175:30120 --all-groups --describe

# Sample output:
GROUP              TOPIC              PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                      HOST            CLIENT-ID
chunjun_client_log chunjun_client_log 0          14281985        14281985        0               consumer-10-0cda9770-638e-4202-b717-d3029eaf8683 /192.168.25.175 consumer-10

# CURRENT-OFFSET: the offset the consumer group has consumed up to
# LOG-END-OFFSET: the latest message offset in the topic
# LAG: message backlog (LOG-END-OFFSET - CURRENT-OFFSET)

# Show per-partition message offsets for a topic (--time -1 = latest)
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 192.168.25.175:30120 --topic spark_client --time -1

Force-delete a topic

zookeeper-shell.sh 172.19.189.247:2181,172.19.189.246:2181,172.19.189.248:2181 ls /brokers/topics
zookeeper-shell.sh 172.19.189.247:2181,172.19.189.246:2181,172.19.189.248:2181 deleteall /brokers/topics/ispong_topic
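
Deleting the znode only removes the topic metadata; the partition data may still sit on disk, so as a follow-up you may also need to clean the matching log directories on every broker (path taken from log.dirs above; ispong_topic is the topic being removed):

# on every broker
rm -rf /data/kafka/logs/ispong_topic-*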
