spark commands
Last updated on November 22, 2024
Check the spark-submit version
spark-submit --version
# or
spark-shell --version
Submit a job
spark-submit --class com.isxcode.star.Main \
--master yarn \
--deploy-mode cluster \
--driver-memory 2g \
--executor-memory 1g \
--executor-cores 1 \
--queue thequeue \
target/star-executor-0.0.1.jar
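In cluster deploy mode the driver runs on YARN, so the job is tracked through YARN rather than the local console. A sketch of the usual follow-up commands, assuming a running YARN cluster (the application id below is a placeholder):

```
# list running Spark applications on YARN
yarn application -list -appTypes SPARK

# inspect or stop a specific job (replace the application id with your own)
yarn application -status application_1700000000000_0001
yarn application -kill application_1700000000000_0001
```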
spark-sql terminal
spark-sql
CREATE TEMPORARY VIEW users_view
USING org.apache.spark.sql.jdbc
OPTIONS (
driver 'com.mysql.cj.jdbc.Driver',
url 'jdbc:mysql://39.98.36.112:30306/ispong_db',
user 'root',
password 'ispong123',
dbtable 'users'
);
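The temporary view above only lives for the current session. To keep a copy in the warehouse, a CTAS statement can materialize it as a Hive table (a sketch, assuming the session has write access; `users_copy` is a placeholder table name):

```
CREATE TABLE users_copy AS SELECT * FROM users_view;
```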
SELECT * FROM users_view;
spark-shell terminal
spark-shell
Connect directly to a Hive database
import org.apache.spark.sql.SparkSession;
val spark = SparkSession.builder().appName("Spark Hive Example").config("spark.sql.warehouse.dir", "/user/hive/warehouse").config("spark.hadoop.hive.metastore.uris","thrift://localhost:9083").enableHiveSupport().getOrCreate();
import spark.sql;
spark.sql("show tables").show();
Connect to both MySQL and Hive to sync data: read from MySQL and write the rows into Hive
import org.apache.spark.sql.SparkSession;
val spark = SparkSession.builder()
.appName("Spark Hive Example")
.config("spark.sql.warehouse.dir", "/user/hive/warehouse")
.config("spark.hadoop.hive.metastore.uris","thrift://localhost:9083")
.enableHiveSupport()
.getOrCreate();
val selectedColumns = Seq("username");
val jdbcDF = spark.read.format("jdbc")
.option("driver", "com.mysql.cj.jdbc.Driver")
.option("url", "jdbc:mysql://isxcode:30306/ispong_db")
.option("dbtable", "users")
.option("user", "root")
.option("password", "ispong123")
.load();
// keep only the columns to sync, then write them to Hive;
// saveAsTable creates the table if it does not exist yet
jdbcDF.select(selectedColumns.head, selectedColumns.tail: _*).write.mode("overwrite").saveAsTable("ispong_table");
import spark.sql;
spark.sql("select * from ispong_table").show();