Configuring HDFS high availability (HA) on CentOS involves several steps, including installing and configuring Hadoop and ZooKeeper, and setting up the NameNodes and JournalNodes. The following is a basic guide to building a highly available HDFS cluster on a CentOS system.
Download and unpack ZooKeeper:
wget https://downloads.apache.org/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz
tar -xzf apache-zookeeper-3.8.0-bin.tar.gz
cd apache-zookeeper-3.8.0-bin
Configure ZooKeeper: copy conf/zoo_sample.cfg to conf/zoo.cfg, then set dataDir, the client port, and the server list for the ensemble. In the dataDir on each node, create a myid file whose content is that node's server number. Then start ZooKeeper on every node:
./bin/zkServer.sh start
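The ZooKeeper setup above can be sketched as follows. The hostnames (namenode1 through namenode3) and the production dataDir are assumptions carried over from this guide; the snippet writes into a local directory so it can run anywhere.

```shell
# Sketch: generate the ensemble config and the per-node myid file.
ZK_DATA=./zookeeper-data        # in production: e.g. /var/lib/zookeeper
mkdir -p "$ZK_DATA"

# Minimal zoo.cfg for a three-node ensemble (assumed hostnames).
cat > zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=namenode1:2888:3888
server.2=namenode2:2888:3888
server.3=namenode3:2888:3888
EOF

# On each node, myid must contain that node's server number (1, 2, or 3).
echo 1 > "$ZK_DATA/myid"
```

The `server.N` entries must be identical on all three nodes; only the myid content differs per node.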
Download and unpack Hadoop:
wget https://downloads.apache.org/hadoop/core/hadoop-3.3.0/hadoop-3.3.0.tar.gz
tar -xzf hadoop-3.3.0.tar.gz
cd hadoop-3.3.0
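Before editing the configuration files, Hadoop needs JAVA_HOME and HADOOP_HOME in the environment (typically set in ~/.bashrc or in etc/hadoop/hadoop-env.sh). The paths below are assumptions for this guide; adjust them to where you actually installed the JDK and Hadoop.

```shell
# Assumed install locations -- adjust to your system.
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export HADOOP_HOME=/usr/local/hadoop
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
```

With these set, the hdfs and start-dfs.sh commands used later in this guide resolve without full paths.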
Configure core-site.xml (in an HA setup, fs.defaultFS must point at the nameservice ID, not a single NameNode host):
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>namenode1:2181,namenode2:2181,namenode3:2181</value>
</property>
</configuration>
Configure hdfs-site.xml:
<configuration>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>namenode1,namenode2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.namenode1</name>
<value>namenode1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.namenode2</name>
<value>namenode2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.namenode1</name>
<value>namenode1:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.namenode2</name>
<value>namenode2:9870</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://namenode1:8485;namenode2:8485;namenode3:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
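One property the configuration above omits is where each JournalNode stores its copy of the edit log; the JournalNode daemons will not start without it. A typical addition to hdfs-site.xml looks like this (the directory path is an assumption, pick any local path with space for edit logs):

```xml
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/usr/local/hadoop/journalnode</value>
</property>
```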
Set up passwordless SSH between the nodes (required for fencing):
ssh-keygen -t rsa
ssh-copy-id root@namenode1
ssh-copy-id root@namenode2
ssh-copy-id root@namenode3
Start the JournalNodes on namenode1, namenode2, and namenode3 before formatting, so the shared edits directory is reachable:
hdfs --daemon start journalnode
Format the NameNode on namenode1:
hdfs namenode -format
Copy the formatted metadata to the standby NameNode on namenode2:
hdfs namenode -bootstrapStandby
Initialize the HA state in ZooKeeper (run once, on either NameNode):
hdfs zkfc -formatZK
Start HDFS from one node; start-dfs.sh brings up the NameNodes, DataNodes, JournalNodes, and ZKFC processes across the cluster:
/usr/local/hadoop/sbin/start-dfs.sh
Verify the state of the HDFS cluster through the NameNode web UI (http://namenode1:9870) or from the command line:
hdfs dfsadmin -report
hdfs haadmin -getServiceState namenode1
With the steps above, you have a highly available HDFS cluster running on CentOS. Adjust the configuration details to match your actual environment and requirements.