This article explains how to remove a node from a Hadoop cluster. The method described here is quick, simple, and practical, so let's walk through it step by step.
1. First check the cluster's average block replication. If it is greater than 2, you will not lose data even if you pull a node out directly.
hadoop fsck /    (checks the health of the cluster's file system)
You can also set a file's replication factor manually; as a safety backup for critical data, run: hadoop fs -setrep -w 3 -R <path>
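If you only care about that one figure, a quick way to pull it out of the fsck summary (assuming the usual English summary output of hadoop fsck and a standard grep) is:
hadoop fsck / | grep -i 'Average block replication'    # prints the cluster-wide average replication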
2. Add the configuration below to the following two files, and list the names of the nodes to decommission in the excludes file.
mapred-site.xml
<property>
  <name>mapred.hosts</name>
  <value></value>
  <description>Names a file that contains the list of nodes that may
    connect to the jobtracker. If the value is empty, all hosts are
    permitted.</description>
</property>
<property>
  <name>mapred.hosts.exclude</name>
  <value>HADOOP_HOME/conf/excludes</value>
  <description>Names a file that contains the list of hosts that
    should be excluded by the jobtracker. If the value is empty, no
    hosts are excluded.</description>
</property>
hdfs-site.xml
<property>
  <name>dfs.hosts</name>
  <value></value>
  <description>Names a file that contains a list of hosts that are
    permitted to connect to the namenode. The full pathname of the file
    must be specified. If the value is empty, all hosts are
    permitted.</description>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>HADOOP_HOME/conf/excludes</value>
  <description>Names a file that contains a list of hosts that are
    not permitted to connect to the namenode. The full pathname of the
    file must be specified. If the value is empty, no hosts are
    excluded.</description>
</property>
In the excludes file, simply list the hostnames of the machines to remove, one per line. (Since the descriptions above require the full pathname, replace HADOOP_HOME with the absolute path of your Hadoop installation.)
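As an illustration, an excludes file might look like this (the hostnames below are purely hypothetical; use the actual hostnames of the nodes you want to retire):
datanode-03.example.com
datanode-07.example.com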
run on namenode: hadoop dfsadmin -refreshNodes
run on jobtracker: hadoop mradmin -refreshNodes
Running "hadoop dfsadmin -refreshNodes" triggers the decommission process. During decommissioning, the cluster re-replicates the data stored on the decommissioning node to the other nodes, so the node can be removed without data loss once decommissioning finishes.
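To follow the progress, you can periodically check the per-node report (a minimal sketch; the exact wording of the report may differ slightly between Hadoop versions):
hadoop dfsadmin -report
# In the output, the node's "Decommission Status" should move from
# "Decommission in progress" to "Decommissioned"; only then is it safe
# to shut the node down.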
That covers how to remove a node from a Hadoop cluster. Now that you have a better understanding of the process, why not try it out in practice?