Flink ZooKeeper HA Cluster Installation

Last updated on September 15, 2024

🧙 Questions

  • Scenario 1: submitted jobs are automatically assigned to different servers based on available resources.
  • Scenario 2: if the leader node goes down, job submission and the other running jobs are unaffected.

☄️ Ideas

Principle:
Flink jobs are all submitted through the JobManager, so if the JobManager goes down, no jobs can be submitted.
By configuring multiple JobManagers, another JobManager is elected and takes over when one fails, avoiding this single point of failure.

Installation environment
  • Deployment environment: CentOS 7.9 64-bit, 2 vCPU, 8 GiB

name   ip(public)       ip(private)
main   39.103.182.135   172.26.34.187
node1  39.103.129.230   172.26.34.188
node2  8.142.183.118    172.26.34.189

Prerequisites
  • Install ZooKeeper

Configure passwordless login from the main node

Note:
Run these steps only on the main node, as the ispong user.
The primary JobManager node and the standby JobManager node must be able to log in to each other without a password.
In this installation, main is the primary node and node1 is the standby node.


# On the primary node (main)
ssh-keygen
ssh-copy-id ispong@main
ssh-copy-id ispong@node1
ssh-copy-id ispong@node2

# On the standby node (node1)
ssh ispong@node1
ssh-keygen
ssh-copy-id ispong@main
ssh-copy-id ispong@node1
ssh-copy-id ispong@node2
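The key setup above can be sanity-checked with a short script. This is a sketch: the key generation below uses a throwaway temp directory instead of `~/.ssh`, and the batch-mode login check is left commented out because it needs the real main/node1/node2 hosts.

```shell
# Generate a key pair non-interactively (ssh-keygen above does the same with prompts);
# a temp dir is used here so nothing under ~/.ssh is touched.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa"
ls "$keydir"    # id_rsa  id_rsa.pub

# After ssh-copy-id has run, BatchMode fails fast if a password would still be needed:
# for h in main node1 node2; do ssh -o BatchMode=yes "ispong@$h" hostname; done
```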
Create the installation directory
sudo mkdir -p /data/flink/
sudo chown -R ispong:ispong /data/flink/
Download the Flink distribution
nohup wget https://archive.apache.org/dist/flink/flink-1.13.3/flink-1.13.3-bin-scala_2.12.tgz >> download_flink.log 2>&1 &
tail -f download_flink.log
Distribute the package to the other nodes
scp flink-1.13.3-bin-scala_2.12.tgz ispong@node1:~/
scp flink-1.13.3-bin-scala_2.12.tgz ispong@node2:~/
Extract and create a symlink
tar -vzxf flink-1.13.3-bin-scala_2.12.tgz -C /data/flink/
sudo ln -s /data/flink/flink-1.13.3 /opt/flink
Download the Hadoop dependency
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
cp flink-shaded-hadoop-2-uber-2.8.3-10.0.jar /opt/flink/lib/
scp flink-shaded-hadoop-2-uber-2.8.3-10.0.jar ispong@node1:/opt/flink/lib/
scp flink-shaded-hadoop-2-uber-2.8.3-10.0.jar ispong@node2:/opt/flink/lib/
Configure environment variables
sudo vim /etc/profile
export FLINK_HOME=/opt/flink
export PATH=$PATH:$FLINK_HOME/bin
source /etc/profile
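A quick check that the variables took effect, as a sketch (run in the same shell after `source /etc/profile`):

```shell
# Same exports as in /etc/profile above
export FLINK_HOME=/opt/flink
export PATH=$PATH:$FLINK_HOME/bin
# Confirm the Flink bin directory is on PATH; prints /opt/flink/bin if present
echo "$PATH" | tr ':' '\n' | grep -x "$FLINK_HOME/bin"
```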
Create the HDFS recovery directory
hadoop fs -mkdir -p /flink/recovery
hadoop fs -chown -R ispong:ispong /flink/recovery

Configure ZooKeeper HA

Configure this on every node.

vim /opt/flink/conf/flink-conf.yaml
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/recovery
high-availability.zookeeper.quorum: main:2181,node1:2181,node2:2181
high-availability.zookeeper.path.root: /flink
# high-availability.cluster-id: /default_ns  # set this only when running multiple HA clusters on the same ZooKeeper
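As a sketch, the HA settings above can be appended and verified against a scratch copy of the file; the hostnames and ports are the ones used throughout this guide, and on a real node the target would be /opt/flink/conf/flink-conf.yaml instead of a temp file.

```shell
conf=$(mktemp)   # stand-in for /opt/flink/conf/flink-conf.yaml
cat >> "$conf" <<'EOF'
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/recovery
high-availability.zookeeper.quorum: main:2181,node1:2181,node2:2181
high-availability.zookeeper.path.root: /flink
EOF
# All four HA keys should now be present; prints 4
grep -c '^high-availability' "$conf"
```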
Configure the standby master

The port is the default Web UI port, 8081.

vim /opt/flink/conf/masters
main:8081
node1:8081
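The masters file can also be written non-interactively, sketched here against a temp file (the real path is /opt/flink/conf/masters):

```shell
masters=$(mktemp)   # stand-in for /opt/flink/conf/masters
# One "host:webui-port" entry per JobManager: primary (main) and standby (node1)
printf '%s\n' 'main:8081' 'node1:8081' > "$masters"
cat "$masters"
```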
Start the cluster

Run this as the ispong user: start-cluster.sh reaches the other nodes over SSH, and passwordless SSH was set up for ispong, not root.

cd /opt/flink
bash ./bin/start-cluster.sh
Stop the cluster
cd /opt/flink
bash ./bin/stop-cluster.sh

Author: ispong
Posted on: November 27, 2021