Configuring the Hadoop Distributed File System (HDFS) on CentOS involves a number of steps. The guide below sets up a single-node (pseudo-distributed) HDFS cluster; note that the exact steps may vary with your Hadoop version and requirements.
In outline: install Java and enable passwordless SSH; download and unpack Hadoop; edit the /etc/profile file to add Hadoop's path and library path, then run source /etc/profile to apply the change; fill in the XML configuration files; run hdfs namenode -format to initialize the file system; start HDFS and use jps to confirm that the NameNode and DataNode processes are running; finally, access HDFS through its web interface (http://namenode-host:port).

First, install the Java 8 development kit:

yum install java-1.8.0-openjdk-devel -y
Next, set up passwordless SSH to localhost; Hadoop's start scripts log in over SSH to launch the daemons:

ssh-keygen -t rsa
ssh-copy-id localhost
Download Hadoop, unpack it under /opt, and add it to the environment. Note the single quotes in the echo commands: they keep $PATH and $HADOOP_HOME from being expanded before /etc/profile is sourced.

wget https://downloads.apache.org/hadoop/core/hadoop-3.2.4/hadoop-3.2.4.tar.gz
tar -zxvf hadoop-3.2.4.tar.gz -C /opt/
echo 'export HADOOP_HOME=/opt/hadoop-3.2.4' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile
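One step worth adding here: on Hadoop 3.x the start scripts read JAVA_HOME from etc/hadoop/hadoop-env.sh, and start-dfs.sh can abort with "ERROR: JAVA_HOME is not set" if it is missing. A minimal sketch, assuming the CentOS OpenJDK 1.8 package installed above (the JDK path is an assumption; check yours with readlink -f $(which java)):

```shell
# Record JAVA_HOME in hadoop-env.sh so Hadoop's start scripts can find Java.
# /usr/lib/jvm/java-1.8.0-openjdk is an assumption for the CentOS
# java-1.8.0-openjdk-devel package; adjust it to your actual JDK path.
echo 'export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk' \
  >> /opt/hadoop-3.2.4/etc/hadoop/hadoop-env.sh
```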
Edit $HADOOP_HOME/etc/hadoop/core-site.xml:

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml. The name and data directories must exist and be writable by the user running the HDFS daemons:

<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop/hdfs/datanode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
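The paths named in hadoop.tmp.dir, dfs.namenode.name.dir, and dfs.datanode.data.dir above are not created by unpacking the tarball; creating them up front avoids permission surprises at startup (the /usr/local/hadoop prefix simply mirrors the values used in the config):

```shell
# Create the storage directories referenced in core-site.xml and hdfs-site.xml.
# They must be writable by the user that runs the HDFS daemons.
mkdir -p /usr/local/hadoop/tmp \
         /usr/local/hadoop/hdfs/namenode \
         /usr/local/hadoop/hdfs/datanode
```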
Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml (only needed if you will run MapReduce on YARN; HDFS itself does not require it):

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml (again, only needed if you will run YARN):

<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:8032</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Format the NameNode (do this only once; reformatting wipes HDFS metadata), start HDFS, and verify:

hdfs namenode -format
/opt/hadoop-3.2.4/sbin/start-dfs.sh
jps

jps should list NameNode, DataNode, and SecondaryNameNode. The HDFS web interface is at http://localhost:9870 on Hadoop 3.x (port 50070 on Hadoop 2.x); port 9000 is the RPC address set by fs.defaultFS, not the web UI. If you also want YARN, run start-yarn.sh as well.
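Once the daemons are up, a quick way to exercise the file system from the command line (the /user/test path and the sample file are illustrative, not part of the setup):

```shell
# Create a directory in HDFS, upload a local file, and read it back.
hdfs dfs -mkdir -p /user/test
echo "hello hdfs" > /tmp/hello.txt
hdfs dfs -put /tmp/hello.txt /user/test/
hdfs dfs -ls /user/test
hdfs dfs -cat /user/test/hello.txt
```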