In this post I'd like to share how to set up a Hadoop environment on SUSE. I hope you take something away from it; let's work through it together!
【Environment】:
Version mismatches between dependent software cause problems all the time, and this time I got careless: I figured Java would not be an issue and went ahead with the OpenJDK 1.6 that had originally been installed via YaST. The result was predictable: one problem after another, round after round of debugging and searching Google and Baidu. In the end, prompted by a friend, I decided to switch JDK versions, and that solved it. So here is my environment for reference:
Java environment: java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
OS: openSUSE 11.2 (x86_64)
Hadoop version: hadoop-1.1.2.tar.gz
【Step1:】Create the hadoop user and group
Group: hadoop
User: hadoop -> /home/hadoop
Grant sudo rights: vi /etc/sudoers (visudo is the safer tool) and add the line hadoop ALL=(ALL:ALL) ALL
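A minimal sketch of the user and group creation (my own addition, using the standard shadow-utils commands available on openSUSE; run as root):

groupadd hadoop                 # create the hadoop group
useradd -m -g hadoop hadoop     # create the hadoop user with home /home/hadoop
passwd hadoop                   # set a password for the new user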
【Step2:】Install Hadoop
After unpacking with tar xf, my directory layout looked like this (for reference):
/home/hadoop/hadoop-home/[bin|conf]
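A minimal sketch of the unpacking step (assuming the tarball sits in /home/hadoop, and that hadoop-home is simply the extracted hadoop-1.1.2 directory renamed to match the layout above):

cd /home/hadoop
tar xf hadoop-1.1.2.tar.gz      # unpack the release tarball
mv hadoop-1.1.2 hadoop-home     # rename to match the layout above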
【Step3:】Configure SSH (so starting Hadoop does not prompt for passwords)
(Installing SSH itself is omitted here.)
ssh-keygen -t rsa -P "" [press Enter through every prompt]
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Try ssh localhost [check that it no longer asks for a password]
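If ssh localhost still prompts for a password, directory permissions are the usual culprit; a small fix of my own (standard OpenSSH requirements, not part of the original write-up):

chmod 700 ~/.ssh                    # sshd ignores keys in group/world-writable directories
chmod 600 ~/.ssh/authorized_keys    # authorized_keys must not be writable by others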
【Step4:】Install Java
See the 【Environment】 section above for the version.
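A quick verification sketch (my addition; the jdk1.7.0_xx path is a hypothetical stand-in for wherever your JDK actually landed, matching the JAVA_HOME used in Step 5):

/usr/java/jdk1.7.0_xx/bin/java -version   # hypothetical path; should report java version "1.7.0_..."
java -version                             # confirms which java your shell resolves first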
【Step5:】Configure conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_17xxx #[JDK install directory]
export HADOOP_INSTALL=/home/hadoop/hadoop-home
export PATH=$PATH:$HADOOP_INSTALL/bin #[directory containing the hadoop scripts]
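A sanity check that the setup took effect (my addition; note that hadoop-env.sh is only sourced by the Hadoop scripts themselves, so for the PATH line to help your interactive shell you would also mirror it in ~/.bashrc or re-login with it applied):

which hadoop      # should print /home/hadoop/hadoop-home/bin/hadoop
hadoop version    # should report Hadoop 1.1.2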
【Step6:】Try standalone (local) mode
hadoop version
mkdir input
man find > input/test.txt
hadoop jar hadoop-examples-1.1.2.jar wordcount input output
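In standalone mode the job writes straight to the local filesystem, so the word counts can be inspected directly (my addition; part-r-00000 is the usual output file name for the wordcount example):

cat output/*      # each line is a word followed by its count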
【Step7:】Pseudo-distributed mode (namenode, datanode, jobtracker, tasktracker, etc. all on one machine)
Edit conf/[core-site.xml, hdfs-site.xml, mapred-site.xml]
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
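One step the original glosses over: the directories referenced in these configs must exist and be writable by the hadoop user before HDFS is formatted. A minimal sketch (paths copied from the configs above; run as root):

mkdir -p /usr/local/hadoop/tmp
mkdir -p /usr/local/hadoop/datalog1 /usr/local/hadoop/datalog2
mkdir -p /usr/local/hadoop/data1 /usr/local/hadoop/data2
chown -R hadoop:hadoop /usr/local/hadoop    # hand the whole tree to the hadoop user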
【Step8:】Start everything
Format the namenode: hadoop namenode -format
cd bin
sh start-all.sh
hadoop@linux-peterguo:~/hadoop-home/bin> sh start-all.sh
starting namenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-namenode-linux-peterguo.out
localhost: starting datanode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-datanode-linux-peterguo.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-secondarynamenode-linux-peterguo.out
starting jobtracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-jobtracker-linux-peterguo.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-tasktracker-linux-peterguo.out
Run jps to check that all five Java processes are up: jobtracker / tasktracker / namenode / datanode / secondarynamenode
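Illustrative jps output (my addition; the PIDs are made up, but the format is jps's standard "PID MainClass" listing):

12001 NameNode
12002 DataNode
12003 SecondaryNameNode
12004 JobTracker
12005 TaskTracker
12006 Jps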
You can also confirm the services are healthy through Hadoop's built-in web interfaces for monitoring cluster status:
http://localhost:50030/ - Hadoop admin interface (JobTracker)
http://localhost:50060/ - Hadoop TaskTracker status
http://localhost:50070/ - Hadoop DFS status
【Step9:】Work with files on DFS
hadoop dfs -mkdir input
hadoop dfs -copyFromLocal input/test.txt input
hadoop dfs -ls input
【Step10:】Run MapReduce on the DFS data
hadoop jar hadoop-examples-1.1.2.jar wordcount input output
hadoop dfs -cat output/*
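To keep a local copy of the results and make the job rerunnable (my addition; -get and -rmr are standard FsShell commands in Hadoop 1.x):

hadoop dfs -get output output-local    # copy the output directory out of HDFS
hadoop dfs -rmr output                 # the job fails if the output dir already exists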
【Step11:】Shut down
stop-all.sh
Having read this article, you should now have a reasonable picture of how to set up a Hadoop environment on SUSE. For more on related topics, follow the 亿速云 (Yisu Cloud) industry news channel. Thanks for reading!