# Installing Hadoop and Spark on CentOS

## Introduction

Hadoop and Spark are the two core distributed computing frameworks in the big data ecosystem. This article walks through the complete process of setting up a Hadoop 3.x + Spark 3.x cluster on CentOS 7/8, covering environment preparation, component installation, configuration, and verification.
---
## I. Environment Preparation

### 1. System Requirements

- CentOS 7/8 minimal installation
- Java 8 or 11 (OpenJDK recommended)
- At least 4 GB of RAM
- 20 GB of disk space
- Firewall and SELinux disabled
### 2. Disable the Firewall and SELinux

```bash
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux (immediately and persistently)
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
```

### 3. Create the hadoop User

```bash
useradd hadoop
passwd hadoop
visudo  # grant the hadoop user sudo privileges
```
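Granting sudo via `visudo` means adding an entry for the hadoop user. One common form (an illustrative sketch, not the only option) is:

```
# /etc/sudoers (opened through visudo), below the root entry:
hadoop  ALL=(ALL)       ALL
```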
### 4. Set the Hostname and Hosts File

```bash
hostnamectl set-hostname master  # run on the master node
echo "192.168.1.10 master
192.168.1.11 worker1
192.168.1.12 worker2" >> /etc/hosts
```
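The Hadoop and Spark start scripts log in to the workers over SSH, so the hadoop user on the master needs passwordless access to every node. A minimal sketch, assuming the hadoop user already exists on all three hosts:

```bash
# Run as the hadoop user on the master node
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa   # key pair without a passphrase
for host in master worker1 worker2; do
  ssh-copy-id hadoop@$host   # appends the public key to ~/.ssh/authorized_keys on $host
done
```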
### 5. Install Java

```bash
yum install -y java-11-openjdk-devel
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
source /etc/profile
```

Verify the installation:

```bash
java -version
```
## II. Installing Hadoop

### 1. Download and Extract

```bash
wget https://downloads.apache.org/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz
tar -zxvf hadoop-3.3.4.tar.gz -C /opt/
mv /opt/hadoop-3.3.4 /opt/hadoop
chown -R hadoop:hadoop /opt/hadoop
```

### 2. Configure Environment Variables

```bash
echo 'export HADOOP_HOME=/opt/hadoop' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile
```
### 3. Configuration Files

Edit the following files under `$HADOOP_HOME/etc/hadoop/`.

`core-site.xml`:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

`hdfs-site.xml`:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```

`workers`:

```
worker1
worker2
```
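Hadoop 3.x does not reliably pick up `JAVA_HOME` from the login shell, so it is usually also set explicitly in `hadoop-env.sh`. A minimal sketch, assuming the same OpenJDK path used above:

```bash
# Append JAVA_HOME to $HADOOP_HOME/etc/hadoop/hadoop-env.sh
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
```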
### 4. Format and Start HDFS

```bash
su - hadoop
hdfs namenode -format  # only needed the first time
start-dfs.sh
jps  # the NameNode/DataNode processes should be listed
```
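To confirm that the workers actually registered with the NameNode, a cluster report can also be requested (a quick extra check, not part of the original steps):

```bash
hdfs dfsadmin -report   # lists the live DataNodes and their capacity
```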
## III. Installing Spark

### 1. Download and Extract

```bash
wget https://downloads.apache.org/spark/spark-3.3.2/spark-3.3.2-bin-hadoop3.tgz
tar -zxvf spark-3.3.2-bin-hadoop3.tgz -C /opt/
mv /opt/spark-3.3.2-bin-hadoop3 /opt/spark
chown -R hadoop:hadoop /opt/spark
```

### 2. Configure Environment Variables

```bash
echo 'export SPARK_HOME=/opt/spark' >> /etc/profile
echo 'export PATH=$PATH:$SPARK_HOME/bin' >> /etc/profile
source /etc/profile
```
### 3. spark-env.sh

```bash
cp $SPARK_HOME/conf/spark-env.sh.template $SPARK_HOME/conf/spark-env.sh
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_HOST=master' >> $SPARK_HOME/conf/spark-env.sh
```

### 4. workers

```
worker1
worker2
```
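Per-worker resources can also be capped in `spark-env.sh`. A hedged example: `SPARK_WORKER_CORES` and `SPARK_WORKER_MEMORY` are standard Spark settings, but the values below are placeholders and should match the actual hardware.

```bash
# Optional: limit what each worker advertises to the master (illustrative values)
echo 'export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g' >> $SPARK_HOME/conf/spark-env.sh
```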
### 5. Start the Spark Cluster

```bash
$SPARK_HOME/sbin/start-all.sh
```
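A quick way to confirm the standalone master is accepting applications is to submit the bundled SparkPi example. This is a sketch: the examples jar name below matches a default Spark 3.3.2 build (Scala 2.12), but the exact file name may differ in other distributions.

```bash
spark-submit \
  --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.12-3.3.2.jar 100
```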
## IV. Verification

### 1. HDFS Test

```bash
hdfs dfs -mkdir /test
hdfs dfs -put /etc/passwd /test
hdfs dfs -cat /test/passwd
```

### 2. Spark Test

```bash
spark-shell --master spark://master:7077
```

Inside the shell:

```scala
sc.parallelize(1 to 100).count()   // should return 100
```
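Since HDFS was populated in the previous test, a useful cross-check (a sketch; the path matches the `fs.defaultFS` configured earlier) is to read that file back from the same spark-shell session:

```scala
// Read the file uploaded during the HDFS test
val lines = sc.textFile("hdfs://master:9000/test/passwd")
lines.count()   // should equal the number of lines in /etc/passwd
```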
## V. Common Issues

- Passwordless SSH: check `~/.ssh/authorized_keys` and its permissions on every node.
- Memory: set the memory parameters in `yarn-site.xml` correctly (a hedged example follows this list).
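An illustration of the kind of memory settings meant here. The property names are standard YARN settings; the values are placeholders and must be sized to the node's actual RAM.

```xml
<!-- yarn-site.xml: illustrative values only -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>3072</value>   <!-- RAM the NodeManager may hand out to containers -->
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>3072</value>   <!-- largest single container allowed -->
  </property>
</configuration>
```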
## Summary

This article has walked through the full process of setting up a Hadoop + Spark cluster on CentOS. A real production deployment also needs to consider:

- Security hardening (Kerberos authentication)
- High availability (ZooKeeper)
- Monitoring (Prometheus + Grafana)