In this article I'd like to share how to set up a Hadoop 2.x HA test cluster with 4 nodes. Since many readers may not be familiar with the process yet, this walkthrough is offered as a reference; hopefully you'll get a lot out of it. Let's get started.
4 virtual machines:
10.211.55.22 node1
10.211.55.23 node2
10.211.55.24 node3
10.211.55.25 node4
node | namenode | datanode | zk | zkfc | jn | rm | applimanager |
---|---|---|---|---|---|---|---|
node1 | 1 | | 1 | 1 | | | |
node2 | 1 | 1 | 1 | 1 | 1 | | 1 |
node3 | | 1 | 1 | | 1 | 1 | 1 |
node4 | | 1 | | | 1 | 1 | 1 |
Summary:

node | processes started |
---|---|
node1 | 4 |
node2 | 7 |
node3 | 6 |
node4 | 5 |
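For reference, here is roughly what jps should list on each node once everything is running; the per-node totals above appear to count the Jps process itself as well. This is a sketch derived from the role table and the configuration further below, not output captured from a live cluster.

```bash
# Rough expected jps output per node (PIDs omitted; ordering will differ).
# node1: NameNode, DFSZKFailoverController, QuorumPeerMain            (+ Jps = 4)
# node2: NameNode, DataNode, DFSZKFailoverController, QuorumPeerMain,
#        JournalNode, NodeManager                                      (+ Jps = 7)
# node3: DataNode, QuorumPeerMain, JournalNode, ResourceManager,
#        NodeManager                                                   (+ Jps = 6)
# node4: DataNode, JournalNode, ResourceManager, NodeManager           (+ Jps = 5)
jps
```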
Change the virtual machines' hostnames.
On the Mac host, add hosts/DNS entries for node1 node2 node3 node4.
On node1 node2 node3 node4, set the hostname (node1 / node2 / node3 / node4 respectively):
hostname node1
vi /etc/sysconfig/network
On the host machine and on node1 node2 node3 node4:
vi /etc/hosts
10.211.55.22 node1
10.211.55.23 node2
10.211.55.24 node3
10.211.55.25 node4
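If you prefer not to edit /etc/hosts by hand on every machine, a small loop like the sketch below can push the same entries to all four nodes. It assumes root SSH access to each node (passwordless SSH is only set up in a later step, so you may be prompted for passwords here).

```bash
# Append the cluster host entries to /etc/hosts on every node (illustrative sketch).
for h in node1 node2 node3 node4; do
  ssh root@"$h" 'cat >> /etc/hosts' <<'EOF'
10.211.55.22 node1
10.211.55.23 node2
10.211.55.24 node3
10.211.55.25 node4
EOF
done
```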
Reboot.
Disable the firewall on all machines:
service iptables stop && chkconfig iptables off
Check:
service iptables status
DSA keys are used here.
On node1 node2 node3 node4, configure passwordless SSH to the machine itself:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Copy from node1 to node2, node3, and node4:
scp ~/.ssh/id_dsa.pub root@node2:~
scp ~/.ssh/id_dsa.pub root@node3:~
scp ~/.ssh/id_dsa.pub root@node4:~
Then on node2, node3, and node4, append the key:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
Copy from node2 to node1, node3, and node4:
scp ~/.ssh/id_dsa.pub root@node1:~
scp ~/.ssh/id_dsa.pub root@node3:~
scp ~/.ssh/id_dsa.pub root@node4:~
Then on node1, node3, and node4, append the key:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
Copy from node3 to node1, node2, and node4:
scp ~/.ssh/id_dsa.pub root@node1:~
scp ~/.ssh/id_dsa.pub root@node2:~
scp ~/.ssh/id_dsa.pub root@node4:~
Then on node1, node2, and node4, append the key:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
Copy from node4 to node1, node2, and node3:
scp ~/.ssh/id_dsa.pub root@node1:~
scp ~/.ssh/id_dsa.pub root@node2:~
scp ~/.ssh/id_dsa.pub root@node3:~
Then on node1, node2, and node3, append the key:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
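The copy-and-append steps above can also be collapsed into a single loop per machine using ssh-copy-id, which appends the public key to the remote authorized_keys for you. This is just a shortcut sketch, assuming the DSA key generated earlier already exists on each node.

```bash
# Run once on each of node1..node4 (illustrative sketch).
# ssh-copy-id appends ~/.ssh/id_dsa.pub to root@$h:~/.ssh/authorized_keys.
for h in node1 node2 node3 node4; do
  ssh-copy-id -i ~/.ssh/id_dsa.pub root@"$h"
done
```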
On all machines:
yum install ntp
ntpdate -u s2m.time.edu.cn
To be safe, sync the clock whenever a node starts up; ideally set up time synchronization inside the LAN so the nodes stay in sync.
Check: date
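To keep the clocks aligned continuously rather than only at boot, one simple option (an assumption on my part, not part of the original steps; running ntpd against a LAN time server would be the more robust choice) is a periodic ntpdate cron job using the same time server:

```bash
# Re-sync against the same NTP server every 10 minutes (illustrative sketch).
(crontab -l 2>/dev/null; echo '*/10 * * * * /usr/sbin/ntpdate -u s2m.time.edu.cn') | crontab -
```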
Install the JDK and configure environment variables
On all machines:
Remove OpenJDK:
java -version
rpm -qa | grep jdk
rpm -e --nodeps java-1.6.0-openjdk-javadoc-1.6.0.0-1.41.1.10.4.el6.x86_64 ...
rpm -qa | grep jdk
Install the JDK:
rpm -ivh jdk-7u67-linux-x64.rpm
vi ~/.bash_profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
source ~/.bash_profile
Check:
java -version
Upload hadoop-2.5.1_x64.tar.gz
scp /Users/mac/Documents/happyup/study/files/hadoop/hadoop-2.5.1_x64.tar.gz root@node1:/home
Repeat for node2, node3, and node4.
Upload ZooKeeper
scp /Users/mac/Documents/happyup/study/files/hadoop/ha/zookeeper-3.4.6.tar.gz root@node1:/home
Repeat for node2 and node3.
Extract:
On node1 node2 node3 node4:
tar -xzvf /home/hadoop-2.5.1_x64.tar.gz -C /home
On node1 node2 node3:
tar -xzvf /home/zookeeper-3.4.6.tar.gz -C /home
Hadoop full-HA preparation checklist:
3.1 Hostnames and the hosts/DNS file on every machine
3.2 Disable the firewall
3.3 Passwordless SSH between all machines
3.4 Time synchronization with ntp
3.5 Install the Java JDK
3.6 Upload and extract Hadoop and ZooKeeper
Take a snapshot at this point; it can be reused for the other machines as well.
ssh root@node1
cp /home/zookeeper-3.4.6/conf/zoo_sample.cfg /home/zookeeper-3.4.6/conf/zoo.cfg
vi zoo.cfg
Set dataDir=/opt/zookeeper, and add at the end:
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
:wq
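Put together, the resulting zoo.cfg should look roughly like the sketch below; only dataDir and the server.* lines are changes made here, the remaining values are the zoo_sample.cfg defaults.

```bash
# Write zoo.cfg in one go (a sketch; tickTime/initLimit/syncLimit/clientPort
# are the zoo_sample.cfg defaults).
cat > /home/zookeeper-3.4.6/conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
EOF
```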
In the dataDir directory:
mkdir /opt/zookeeper
cd /opt/zookeeper
ls
vi myid, write 1 in it
:wq
Copy the directory to node2 and node3:
scp -r /opt/zookeeper/ root@node2:/opt   (then change myid to 2 on node2)
scp -r /opt/zookeeper/ root@node3:/opt   (then change myid to 3 on node3)
Copy the ZooKeeper configuration to node2 and node3:
scp -r /home/zookeeper-3.4.6/conf root@node2:/home/zookeeper-3.4.6/
scp -r /home/zookeeper-3.4.6/conf root@node3:/home/zookeeper-3.4.6/
On node1 node2 node3:
Add to PATH:
vi ~/.bash_profile
export ZOOKEEPER_HOME=/home/zookeeper-3.4.6
append :$ZOOKEEPER_HOME/bin to PATH
source ~/.bash_profile
Start it: in the ZooKeeper bin directory, run:
zkServer.sh start
jps should show: 3214 QuorumPeerMain
Start node1, node2, and node3 in turn.
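Once all three nodes are started, zkServer.sh status confirms that the quorum actually formed: one node should report leader and the other two follower. A quick check might look like this (a sketch; the exact output wording can vary by version):

```bash
# Ask each ZooKeeper node for its role in the quorum (illustrative sketch).
for h in node1 node2 node3; do
  echo "== $h =="
  ssh root@"$h" /home/zookeeper-3.4.6/bin/zkServer.sh status
done
# Expected: one node prints "Mode: leader", the other two print "Mode: follower".
```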
cd /home/hadoop-2.5.1/etc/hadoop/
vi hadoop-env.sh, and change: export JAVA_HOME=/usr/java/jdk1.7.0_67
vi slaves, with the contents:
node2
node3
node4
vi hdfs-site.xml
<property>
  <name>dfs.nameservices</name>
  <value>cluster1</value>
</property>
<property>
  <name>dfs.ha.namenodes.cluster1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster1.nn1</name>
  <value>node1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster1.nn2</name>
  <value>node2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster1.nn1</name>
  <value>node1:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster1.nn2</name>
  <value>node2:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node2:8485;node3:8485;node4:8485/cluster1</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.cluster1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_dsa</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/journal/data</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
vi core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cluster1</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
vi yarn-site.xml (there is no need to configure the nodemanager/applicationmanager list separately; it runs on the same nodes as the datanodes)
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>rm</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>node3</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>node4</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
Sync the configuration to node2, node3, and node4:
scp /home/hadoop-2.5.1/etc/hadoop/* root@node2:/home/hadoop-2.5.1/etc/hadoop
scp /home/hadoop-2.5.1/etc/hadoop/* root@node3:/home/hadoop-2.5.1/etc/hadoop
scp /home/hadoop-2.5.1/etc/hadoop/* root@node4:/home/hadoop-2.5.1/etc/hadoop
On node1 node2 node3 node4:
vi ~/.bash_profile
export HADOOP_HOME=/home/hadoop-2.5.1
append :$HADOOP_HOME/bin:$HADOOP_HOME/sbin to PATH
source ~/.bash_profile
1. Start ZooKeeper on node1, node2, and node3
As in the ZooKeeper section above: in the ZooKeeper bin directory, run:
zkServer.sh start
jps should show: 3214 QuorumPeerMain
Start node1, node2, and node3 in turn.
2. Start the journalnodes (they must be running before the namenode can be formatted). If this is a second, repeated setup, first delete /opt/hadoop and /opt/journal/data on node1 node2 node3 node4.
Run on node2, node3, and node4:
./hadoop-daemon.sh start journalnode
Verify with jps that a JournalNode process is present.
3. Format one of the namenodes (node1)
cd bin
./hdfs namenode -format
Check the printed log, and check whether files were generated in the working directory.
4. Sync this namenode's metadata (edits/fsimage) to the other namenode (node2); the namenode being copied from (node1) must be started first.
cd sbin
./hadoop-daemon.sh start namenode
Check the log:
cd ../logs
tail -n50 hadoop-root-namenode
5. Run the metadata sync
Run this on the namenode that was NOT formatted (node2):
cd bin
./hdfs namenode -bootstrapStandby
Check whether files were generated on node2.
6. On node1, stop all services
cd sbin
./stop-dfs.sh
7. Initialize ZKFC (ZooKeeper must be running) on either one of the namenodes
cd bin
./hdfs zkfc -formatZK
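To confirm that formatZK actually created the HA state in ZooKeeper, you can look for the hadoop-ha znode with the ZooKeeper CLI; this check is not part of the original steps, just a quick sanity test:

```bash
# On any ZooKeeper node, open the CLI and list the HA parent znode.
/home/zookeeper-3.4.6/bin/zkCli.sh -server node1:2181
# Inside the CLI:
#   ls /hadoop-ha     -> should list [cluster1]
#   quit
```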
8. Start everything
cd sbin
./start-dfs.sh
./start-yarn.sh
jps: check for the resourcemanager and nodemanager processes; web UI: node1:8088
Alternatively, use start-all.sh.
In 2.x the resourcemanagers have to be started manually, on node3 and node4:
yarn-daemon.sh start resourcemanager
yarn-daemon.sh stop resourcemanager
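After both resourcemanagers are up, yarn rmadmin can report which one is active and which is standby; rm1 and rm2 are the ids defined in yarn-site.xml above.

```bash
# Query the HA state of each resourcemanager (one should be active, one standby).
yarn rmadmin -getServiceState rm1   # resourcemanager on node3
yarn rmadmin -getServiceState rm2   # resourcemanager on node4
```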
9. Check whether everything started successfully, and test
jps
HDFS web UI: http://node1:50070 and http://node2:50070 (standby)
RM web UI: http://node3:8088 and http://node4:8088
Upload a file:
cd bin
./hdfs dfs -mkdir -p /usr/file
./hdfs dfs -put /usr/local/jdk /usr/file
Stop one rm and observe the effect; stop one namenode and observe the effect.
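For a slightly more explicit namenode failover test, the standard hdfs haadmin commands can be used (a sketch; nn1 and nn2 are the ids from hdfs-site.xml):

```bash
# Check which namenode is active, kill it, and confirm the standby takes over.
hdfs haadmin -getServiceState nn1   # e.g. "active"
hdfs haadmin -getServiceState nn2   # e.g. "standby"

# On the active namenode (say node1), kill the NameNode process:
#   jps | grep NameNode
#   kill -9 <NameNode pid>

# After a short wait, the other namenode should report "active":
hdfs haadmin -getServiceState nn2
```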
10. Troubleshooting
1. Check the console output.
2. Check jps.
3. Check the logs on the affected node.
4. Before re-formatting, delete the hadoop working directory and the journalnode working directory.
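For item 4, a small helper like the sketch below can wipe the working directories on all four nodes before a re-format; the paths are the hadoop.tmp.dir and dfs.journalnode.edits.dir values configured earlier. Only run it when you really intend to reformat.

```bash
# Remove the HDFS and journalnode working directories on every node
# (illustrative sketch; this destroys all HDFS data).
for h in node1 node2 node3 node4; do
  ssh root@"$h" 'rm -rf /opt/hadoop /opt/journal/data'
done
```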
That is the full content of "How to set up a Hadoop 2.x HA test cluster with 4 nodes". Thanks for reading! Hopefully you now have a clearer picture of the process and the shared content is of some help. If you want to learn more, feel free to follow the Yisu Cloud (亿速云) industry news channel!