This article walks through a single-node installation of Hadoop 0.23.9 on Debian 6 running under VMware Workstation 9. The steps are described in detail and should be a useful reference.
1. Environment preparation
1.1 Debian 6; choose to install SSH when the installer prompts for it. (If you are running this under Windows, install VMware first; I used VMware Workstation 9.)
1.2 JDK 1.7 and Hadoop 0.23.9; the Hadoop tarball is available at http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz
2. Installation
2.1 Install sudo on Debian
root@debian:~# apt-get install sudo
2.2 Install JDK 1.7
First copy jdk-7u45-linux-i586.tar.gz to /root/ with an SSH client, then run the following (creating /usr/java first, since tar -C fails if the target directory does not exist):
root@debian:~# mkdir -p /usr/java
root@debian:~# tar -zxvf jdk-7u45-linux-i586.tar.gz -C /usr/java/
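If the `-C` flag is unfamiliar: it tells tar which directory to extract into. A self-contained sketch using a throwaway archive under /tmp (illustrative paths, not the real JDK tarball):

```shell
set -e
# Build a small demo archive the same way the JDK tarball is laid out
mkdir -p /tmp/jdkdemo/src/jdk-demo /tmp/jdkdemo/java
echo "demo" > /tmp/jdkdemo/src/jdk-demo/README
tar -czf /tmp/jdkdemo/jdk-demo.tar.gz -C /tmp/jdkdemo/src jdk-demo
# Extract it into a different target directory with -C
tar -zxf /tmp/jdkdemo/jdk-demo.tar.gz -C /tmp/jdkdemo/java
ls /tmp/jdkdemo/java/jdk-demo
```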
2.3 Download and install Hadoop
root@debian:~# wget http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz
root@debian:~# tar zxvf hadoop-0.23.9.tar.gz -C /opt/
root@debian:~# cd /opt/
root@debian:/opt# ln -s hadoop-0.23.9/ hadoop
The symlink lets /opt/hadoop point at hadoop-0.23.9, much like a Windows shortcut (.lnk file).
2.4 Create the hadoop user and grant it sudo rights
root@debian:~# groupadd hadoop
root@debian:~# useradd -m -g hadoop hadoop
root@debian:~# passwd hadoop
root@debian:~# vi /etc/sudoers
(The -m flag gives the user a home directory, which the SSH and .bashrc steps below rely on.)
In sudoers, add the hadoop user's privileges below the line "root ALL=(ALL) ALL":

hadoop ALL=(ALL:ALL) ALL
2.5 Configure SSH login
root@debian:~# su - hadoop
hadoop@debian:~$ ssh-keygen -t rsa -P ""        # pass "" for an empty passphrase, or supply your own
hadoop@debian:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
hadoop@debian:~$ chmod 600 ~/.ssh/authorized_keys
Test the login:
hadoop@debian:~$ ssh localhost
If you chose an empty passphrase but are still prompted for a password, check the sshd configuration (requires root):

root@debian:~# vi /etc/ssh/sshd_config

Find the following lines and remove the leading "#":

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

Then restart sshd (not needed if you kept a passphrase):

root@debian:~# service ssh restart
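The key-generation steps above can be sketched end to end in a scratch directory (paths under /tmp are illustrative; in the real setup the files live in ~/.ssh):

```shell
set -e
demo=/tmp/sshdemo
mkdir -p "$demo" && chmod 700 "$demo"
# Generate an RSA keypair with an empty passphrase (-N "")
ssh-keygen -q -t rsa -N "" -f "$demo/id_rsa"
# Append the public key to authorized_keys and lock down permissions;
# sshd refuses keys in a group/world-readable authorized_keys file
cat "$demo/id_rsa.pub" >> "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"
ls -l "$demo/authorized_keys"
```

The 700/600 permissions matter: a common reason key-based login silently falls back to password prompts is that sshd's StrictModes check rejects loosely-permissioned key files.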
 
2.6 Configure the hadoop user's environment
root@debian:~# chown -R hadoop:hadoop /opt/hadoop
root@debian:~# chown -R hadoop:hadoop /opt/hadoop-0.23.9
root@debian:~# su - hadoop
hadoop@debian:~$ vi .bashrc
Append the following:
 export JAVA_HOME=/usr/java/jdk1.7.0_45 
 export JRE_HOME=${JAVA_HOME}/jre 
 export HADOOP_HOME=/opt/hadoop 
 export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib 
 export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$PATH 
 export HADOOP_COMMON_HOME=$HADOOP_HOME 
 export HADOOP_HDFS_HOME=$HADOOP_HOME 
 export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop 
 export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop 
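After reloading .bashrc (log out and back in, or `source ~/.bashrc`), the derived paths can be sanity-checked; the exports below mirror the ones above (adjust the two root paths if your install differs):

```shell
# Mirror the .bashrc exports and confirm the derived values expand as intended
export JAVA_HOME=/usr/java/jdk1.7.0_45
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$PATH
echo "$HADOOP_CONF_DIR"    # /opt/hadoop/etc/hadoop
```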
 
 
hadoop@debian:~$ cd /opt/hadoop/etc/hadoop/
hadoop@debian:/opt/hadoop/etc/hadoop$ vi yarn-env.sh
Append the following (note: the original post misspelled HADOOP_PREFIX as "HADOOP_FREFIX"; the corrected name is used here):
 export HADOOP_PREFIX=/opt/hadoop 
 export HADOOP_COMMON_HOME=${HADOOP_PREFIX} 
 export HADOOP_HDFS_HOME=${HADOOP_PREFIX} 
 export PATH=$PATH:$HADOOP_PREFIX/bin 
 export PATH=$PATH:$HADOOP_PREFIX/sbin 
 export HADOOP_MAPRED_HOME=${HADOOP_PREFIX} 
 export YARN_HOME=${HADOOP_PREFIX} 
 export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop 
 export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop 
 
 
hadoop@debian:/opt/hadoop/etc/hadoop$ vi core-site.xml
<configuration> 
  <property> 
   <name>fs.defaultFS</name> 
   <value>hdfs://localhost:12200</value> 
  </property> 
  <property> 
   <name>hadoop.tmp.dir</name> 
   <value>/opt/hadoop/hadoop-root</value> 
  </property> 
  <property> 
   <name>fs.arionfs.impl</name> 
   <value>org.apache.hadoop.fs.pvfs2.Pvfs2FileSystem</value> 
   <description>The FileSystem for arionfs.</description> 
  </property> 
 </configuration> 
 
 
hadoop@debian:/opt/hadoop/etc/hadoop$ vi hdfs-site.xml
<configuration> 
  <property> 
   <name>dfs.namenode.name.dir</name> 
   <value>file:/opt/hadoop/data/dfs/name</value> 
   <final>true</final> 
  </property> 
  <property> 
   <name>dfs.datanode.data.dir</name> 
   <value>file:/opt/hadoop/data/dfs/data</value> 
   <final>true</final> 
  </property> 
  <property> 
   <name>dfs.replication</name> 
   <value>1</value> 
  </property> 
  <property> 
   <name>dfs.permissions</name> 
   <value>false</value> 
  </property> 
 </configuration> 
 
 
hadoop@debian:/opt/hadoop/etc/hadoop$ cp mapred-site.xml.template mapred-site.xml
hadoop@debian:/opt/hadoop/etc/hadoop$ vi mapred-site.xml
<configuration> 
     <property> 
         <name>mapreduce.framework.name</name> 
         <value>yarn</value> 
     </property> 
     <property> 
         <name>mapreduce.job.tracker</name> 
         <value>hdfs://localhost:9001</value> 
         <final>true</final> 
     </property> 
     <property> 
         <name>mapreduce.map.memory.mb</name> 
         <value>1536</value> 
     </property> 
     <property> 
         <name>mapreduce.map.java.opts</name> 
         <value>-Xmx1024M</value> 
     </property> 
     <property> 
         <name>mapreduce.reduce.memory.mb</name> 
         <value>3072</value> 
     </property> 
     <property> 
              <name>mapreduce.reduce.java.opts</name> 
         <value>-Xmx2560M</value> 
     </property> 
     <property> 
         <name>mapreduce.task.io.sort.mb</name> 
         <value>512</value> 
     </property> 
     <property> 
         <name>mapreduce.task.io.sort.factor</name> 
         <value>100</value> 
     </property>     
     <property> 
         <name>mapreduce.reduce.shuffle.parallelcopies</name> 
         <value>50</value> 
     </property> 
     <property> 
         <name>mapreduce.system.dir</name> 
         <value>file:/opt/hadoop/data/mapred/system</value> 
     </property> 
     <property> 
         <name>mapreduce.local.dir</name> 
         <value>file:/opt/hadoop/data/mapred/local</value> 
         <final>true</final> 
     </property> 
 </configuration> 
 
 
 
hadoop@debian:/opt/hadoop/etc/hadoop$ vi yarn-site.xml
<configuration> 
 <!-- Site specific YARN configuration properties --> 
   <property> 
     <name>yarn.nodemanager.aux-services</name> 
     <value>mapreduce.shuffle</value> 
   </property> 
   <property> 
     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> 
     <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
   </property> 
   <property> 
     <name>mapreduce.framework.name</name> 
     <value>yarn</value> 
   </property> 
   <property> 
     <name>user.name</name> 
     <value>hadoop</value> 
   </property> 
   <property> 
     <name>yarn.resourcemanager.address</name> 
     <value>localhost:54311</value> 
   </property> 
   <property> 
     <name>yarn.resourcemanager.scheduler.address</name> 
     <value>localhost:54312</value> 
   </property> 
   <property> 
     <name>yarn.resourcemanager.webapp.address</name> 
     <value>localhost:54313</value> 
   </property> 
   <property> 
     <name>yarn.resourcemanager.resource-tracker.address</name> 
     <value>localhost:54314</value> 
   </property> 
   <property> 
     <name>yarn.web-proxy.address</name> 
     <value>localhost:54315</value> 
   </property> 
   <property> 
     <name>mapred.job.tracker</name> 
     <value>localhost</value> 
        </property> 
 </configuration>
 
2.7 Start the daemons and run the WordCount example
Set JAVA_HOME in Hadoop's config script:

hadoop@debian:~$ vi /opt/hadoop/libexec/hadoop-config.sh

Under the comment "# Attempt to set JAVA_HOME if it is not set", just above the line "if [[ -z $JAVA_HOME ]]; then", add:

export JAVA_HOME=/usr/java/jdk1.7.0_45

then save and quit (:wq!).
Format the NameNode:

hadoop@debian:~$ hadoop namenode -format
Start HDFS and YARN:

hadoop@debian:~$ /opt/hadoop/sbin/start-dfs.sh
hadoop@debian:~$ /opt/hadoop/sbin/start-yarn.sh
Check the running daemons:

hadoop@debian:~$ jps
6365 SecondaryNameNode
7196 ResourceManager
6066 NameNode
7613 Jps
6188 DataNode
7311 NodeManager
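With all six daemons up, the classic WordCount job can be run roughly as follows. The jar path, HDFS paths, and sample text are illustrative; check the exact examples-jar name shipped under your 0.23.9 tree before running:

```shell
# Put some sample input into HDFS (file names here are examples)
echo "hello hadoop hello world" > /tmp/words.txt
hadoop fs -mkdir /user/hadoop/input
hadoop fs -put /tmp/words.txt /user/hadoop/input/

# Run WordCount from the bundled examples jar; verify the jar's
# location and version suffix under $HADOOP_HOME/share/ for your build
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.9.jar \
    wordcount /user/hadoop/input /user/hadoop/output

# Inspect the result (the output directory must not exist beforehand)
hadoop fs -cat /user/hadoop/output/part-r-00000
```

If the job hangs at the map stage, re-check the memory settings in mapred-site.xml against the RAM you gave the VM; the 1536/3072 MB values above assume a reasonably large guest.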
That concludes the single-node installation of Hadoop 0.23.9 on Debian 6 under VMware 9. Thanks for reading, and I hope it was helpful!