The three hosts used here are named bigdata1, bigdata2, and bigdata3. Adjust the installation package names to match your own downloads; the packages can be obtained from the respective official websites.
1. Extract the archive
tar -zxvf /opt/software/spark-3.1.1-bin-hadoop3.2.tgz -C /opt/module/
Rename the extracted directory:
mv /opt/module/spark-3.1.1-bin-hadoop3.2 /opt/module/spark-3.1.1
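For reference, the two steps above can be scripted in one pass (a minimal sketch; the archive and target paths are assumed to match this guide):

```bash
#!/usr/bin/env bash
set -e

# Extract the Spark tarball into /opt/module
tar -zxvf /opt/software/spark-3.1.1-bin-hadoop3.2.tgz -C /opt/module/

# Rename the extracted directory to the shorter version-only name
mv /opt/module/spark-3.1.1-bin-hadoop3.2 /opt/module/spark-3.1.1

# Quick check that the layout is as expected
ls /opt/module/spark-3.1.1/bin/spark-submit
```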
2. Configure environment variables
vim /etc/profile
# Spark
export SPARK_HOME=/opt/module/spark-3.1.1
export PATH=$PATH:$SPARK_HOME/bin
Reload the environment:
source /etc/profile
Verify that the installation succeeded:
spark-submit --version
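If the version banner does not appear, a quick sanity check of the variables set above can help (a sketch):

```bash
# Both of these should resolve if /etc/profile was edited and re-sourced correctly
echo "$SPARK_HOME"    # expected: /opt/module/spark-3.1.1
which spark-submit    # expected: /opt/module/spark-3.1.1/bin/spark-submit
```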
3. Run on YARN
Spark on YARN needs HADOOP_CONF_DIR so that spark-submit can locate the cluster's Hadoop and YARN client configuration. Add it to the environment variables:
vim /etc/profile
```bash
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```
Reload the environment:
source /etc/profile
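To confirm the variable points at a real configuration directory (a sketch; HADOOP_HOME is assumed to have been exported when Hadoop was installed, here /opt/module/hadoop-3.1.3):

```bash
# The directory should contain the cluster's client configuration files
echo "$HADOOP_CONF_DIR"
ls "$HADOOP_CONF_DIR"/core-site.xml "$HADOOP_CONF_DIR"/yarn-site.xml
```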
Add the following to yarn-site.xml under hadoop-3.1.3/etc/hadoop. These two properties disable YARN's virtual- and physical-memory checks, which otherwise tend to kill Spark containers on small test machines:
vim /opt/module/hadoop-3.1.3/etc/hadoop/yarn-site.xml
```xml
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
```
Distribute hadoop-3.1.3/etc/hadoop/yarn-site.xml to the other nodes:
scp -r /opt/module/hadoop-3.1.3/etc/hadoop/yarn-site.xml root@bigdata2:/opt/module/hadoop-3.1.3/etc/hadoop/yarn-site.xml
scp -r /opt/module/hadoop-3.1.3/etc/hadoop/yarn-site.xml root@bigdata3:/opt/module/hadoop-3.1.3/etc/hadoop/yarn-site.xml
If the copy fails, replace the hostnames with the corresponding IP addresses.
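The NodeManagers only pick up the modified yarn-site.xml after a restart. A minimal sketch, assuming YARN is managed with the standard Hadoop sbin scripts from bigdata1:

```bash
# Restart YARN so every NodeManager reloads yarn-site.xml
/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh
/opt/module/hadoop-3.1.3/sbin/start-yarn.sh

# Confirm the ResourceManager/NodeManager processes are back up
jps
```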
Then run the SparkPi example on YARN:
spark-submit --master yarn --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.1.1.jar
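By default this submits in client deploy mode. A hedged variant for cluster mode, plus a way to check the application from YARN afterwards (the trailing 100 is SparkPi's optional partition count):

```bash
# Submit SparkPi in cluster mode; the driver runs inside a YARN container
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.12-3.1.1.jar 100

# List YARN applications to confirm the job was accepted and finished
yarn application -list -appStates ALL
```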