Setting up an HDFS high-availability (HA) cluster involves the following steps.

First, install ZooKeeper on each node of the ZooKeeper quorum (here namenode1, namenode2, and namenode3, matching the ha.zookeeper.quorum setting below). Download and extract the release:
wget https://downloads.apache.org/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz
tar -xzf apache-zookeeper-3.8.0-bin.tar.gz
cd apache-zookeeper-3.8.0-bin
Edit the conf/zoo.cfg file, setting dataDir, the server list, and any other required options. On each ZooKeeper node (not the DataNodes), create a myid file inside dataDir whose content is that node's ID, matching its server.N entry in zoo.cfg. Then start the ZooKeeper service on each node:
./bin/zkServer.sh start
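As a reference, a minimal zoo.cfg for this three-node quorum might look as follows; the dataDir path is an assumption, so adjust it to your layout:
tickTime=2000
initLimit=10
syncLimit=5
# dataDir is an assumed path; adjust to your layout
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=namenode1:2888:3888
server.2=namenode2:2888:3888
server.3=namenode3:2888:3888
With this file, namenode2, for example, would get a myid file containing just the digit 2:
echo 2 > /var/lib/zookeeper/myid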
Next, install Hadoop on every node in the cluster:
wget https://downloads.apache.org/hadoop/core/hadoop-3.3.0/hadoop-3.3.0.tar.gz
tar -xzf hadoop-3.3.0.tar.gz
cd hadoop-3.3.0
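Hadoop's daemons also need JAVA_HOME set before they will start. A minimal sketch, assuming an OpenJDK 8 installation at a typical Linux path (the path is an assumption; point it at your actual JDK):
# append JAVA_HOME to the Hadoop environment file (path below is assumed)
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> etc/hadoop/hadoop-env.sh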
Edit etc/hadoop/core-site.xml. With HA enabled, fs.defaultFS must point at the logical nameservice defined in hdfs-site.xml below, not at a single NameNode:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>namenode1:2181,namenode2:2181,namenode3:2181</value>
  </property>
</configuration>
Then edit etc/hadoop/hdfs-site.xml. Note that the SSH-based fencing method is named sshfence, and that in Hadoop 3.x the NameNode web UI listens on port 9870 by default (50070 was the 2.x default):
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>namenode1,namenode2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.namenode1</name>
    <value>namenode1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.namenode2</name>
    <value>namenode2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.namenode1</name>
    <value>namenode1:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.namenode2</name>
    <value>namenode2:9870</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://namenode1:8485;namenode2:8485;namenode3:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
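The JournalNodes additionally need a local directory for their copy of the edit log. A hedged addition to hdfs-site.xml (the path /data/journalnode is an assumption; use a directory that exists on each JournalNode host):
<!-- assumed local path for JournalNode edit logs -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/journalnode</value>
</property>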
Generate an SSH key pair on each node and copy the public key to all NameNode hosts, so that the sshfence method configured above can log in without a password:
ssh-keygen -t rsa
ssh-copy-id root@namenode1
ssh-copy-id root@namenode2
ssh-copy-id root@namenode3
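Before the NameNode can be formatted, the JournalNodes must already be running, because formatting writes the initial edit log into the shared qjournal directory. A sketch, assuming the JournalNodes run on namenode1, namenode2, and namenode3 as in the qjournal URI above, and that Hadoop lives under /usr/local/hadoop as in the start-dfs.sh path below; run this on each JournalNode host:
/usr/local/hadoop/bin/hdfs --daemon start journalnode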
Format the NameNode on namenode1:
hdfs namenode -format
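Two one-time initialization steps remain before the cluster is started: the standby NameNode must copy the freshly formatted metadata from the active one, and the HA state must be initialized in ZooKeeper. A sketch of those steps, again assuming Hadoop under /usr/local/hadoop:
# on namenode1: bring up the formatted NameNode so the standby can read from it
/usr/local/hadoop/bin/hdfs --daemon start namenode
# on namenode2: pull the metadata from the running NameNode
/usr/local/hadoop/bin/hdfs namenode -bootstrapStandby
# on namenode1: create the znode used for automatic failover
/usr/local/hadoop/bin/hdfs zkfc -formatZK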
Start HDFS. This script is run once, from a single node; because dfs.ha.automatic-failover.enabled is true, it also starts a ZKFailoverController on each NameNode host:
/usr/local/hadoop/sbin/start-dfs.sh
Verify the state of the HDFS cluster through the web UI (http://namenode1:9870) or from the command line:
hdfs dfsadmin -report
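To confirm that HA itself is working, you can also query each NameNode's role; one should report active and the other standby:
hdfs haadmin -getServiceState namenode1
hdfs haadmin -getServiceState namenode2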
These steps configure high availability for HDFS on a Linux system, so that the cluster keeps serving requests when a key component such as the active NameNode fails, preserving the availability and reliability of your data.