Hadoop Deployment Steps on Linux
sudo apt update && sudo apt install -y openjdk-11-jdk # Ubuntu/Debian
sudo yum install -y java-11-openjdk-devel # CentOS/RHEL
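The JDK installation path will be needed for JAVA_HOME later; one way to locate it (assuming the OpenJDK 11 package installed above) is:
readlink -f "$(which java)" | sed 's:/bin/java::' # prints the JDK home directory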
Verify the installation: java -version (it should print the Java version information).
Download the Hadoop 3.3.5 tarball and install it under a dedicated directory (e.g. /opt/hadoop):
wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.5/hadoop-3.3.5.tar.gz
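Optionally verify the integrity of the download; Apache publishes a .sha512 file alongside the tarball (the URL below assumes the standard mirror layout):
wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.5/hadoop-3.3.5.tar.gz.sha512
sha512sum -c hadoop-3.3.5.tar.gz.sha512 # should report OK; otherwise compare the hashes manually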
sudo mkdir -p /opt/hadoop
sudo tar -xzvf hadoop-3.3.5.tar.gz -C /opt/hadoop --strip-components=1
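Because the archive was extracted with sudo, the files may end up owned by root; assuming Hadoop will run as the current non-root user, hand the directory over:
sudo chown -R "$USER":"$USER" /opt/hadoop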
(The --strip-components=1 flag above strips the top-level directory inside the archive.) Next, edit the .bashrc file in the user's home directory (for a system-wide setup, /etc/profile can be edited instead) and add the Hadoop and Java paths:
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 # adjust to the actual Java installation path
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the configuration: source ~/.bashrc
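A quick sanity check that the environment variables took effect (both commands should print meaningful output):
hadoop version # prints the Hadoop release, e.g. 3.3.5
echo $JAVA_HOME # prints the JDK path set above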
Hadoop's main configuration files are located in the $HADOOP_HOME/etc/hadoop/ directory; the following files need to be modified:
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value> <!-- default filesystem URI for HDFS -->
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value> <!-- number of data replicas (1 for a single node; adjust for a cluster) -->
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/hadoop/tmp/dfs/name</value> <!-- NameNode metadata storage path -->
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoop/tmp/dfs/data</value> <!-- DataNode data storage path -->
</property>
</configuration>
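The name and data directories above do not need to exist yet (formatting the NameNode and the first DataNode start create them), but pre-creating them makes permission problems easier to spot; the paths below simply match the configuration above:
mkdir -p /opt/hadoop/tmp/dfs/name /opt/hadoop/tmp/dfs/data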
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value> <!-- run MapReduce on YARN -->
</property>
</configuration>
yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value> <!-- shuffle service support -->
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>localhost</value> <!-- ResourceManager host (localhost for a single node) -->
</property>
</configuration>
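Besides the XML files, Hadoop's launch scripts read JAVA_HOME from $HADOOP_HOME/etc/hadoop/hadoop-env.sh, so it is safest to set it there explicitly as well (same path assumption as in .bashrc):
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh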
(Note: for a multi-node cluster, replace localhost with the appropriate node IPs and list the worker nodes in the workers file, which was called slaves in Hadoop 2.x.) Before starting HDFS for the first time, format the NameNode (this erases any existing data and is only needed once):
hdfs namenode -format
Formatting creates the metadata directory under dfs.namenode.name.dir; the data directory under dfs.datanode.data.dir is created when the DataNode first starts. Start HDFS and YARN with the scripts in $HADOOP_HOME/sbin:
start-dfs.sh
start-yarn.sh
After startup, the running Java daemons can be listed with the jps command.
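On a single-node deployment with both HDFS and YARN running, the output should list roughly the following daemons (process IDs will differ):
jps # expected: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager (plus Jps itself)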
Web UIs (Hadoop 3.x ports):
HDFS NameNode: http://localhost:9870
YARN ResourceManager: http://localhost:8088
Basic HDFS commands to verify the deployment:
hdfs dfs -ls / # list the HDFS root directory
hdfs dfs -put ~/test.txt /user/hadoop/ # upload a local file to HDFS
hdfs dfs -get /user/hadoop/test.txt ~/ # download a file from HDFS back to the local home directory
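Note that the put command above assumes the /user/hadoop directory already exists in HDFS (the path simply mirrors the example above); create it first if needed:
hdfs dfs -mkdir -p /user/hadoop # create the target directory before the first upload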
The start scripts use SSH to launch the daemons, so passwordless SSH to localhost must be configured:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa # generate a key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # append the public key to the authorized keys file
chmod 600 ~/.ssh/authorized_keys # restrict permissions on the file
ssh localhost # test passwordless login (success if no password prompt appears)
If firewalld is in use (e.g. CentOS/RHEL), open the ports Hadoop needs:
sudo firewall-cmd --permanent --zone=public --add-port=9000/tcp
sudo firewall-cmd --permanent --zone=public --add-port=9870/tcp
sudo firewall-cmd --permanent --zone=public --add-port=8088/tcp
sudo firewall-cmd --reload
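To shut everything down later, stop the services in reverse order with the companion scripts in $HADOOP_HOME/sbin:
stop-yarn.sh
stop-dfs.sh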