hadoop-006: Problems in a Fully Distributed Hadoop Cluster


This post shares the problems I ran into with a fully distributed Hadoop (2.7.2) setup and how each one was fixed. Hopefully it is useful; let's go through them.

1. A MapReduce job runs, but no application shows up in YARN.

bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar  wordcount /test /out1

My ResourceManager web UI is configured at http://192.168.31.136:8088.

The cause: mapred-site.xml was missing the following property:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
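
After adding this property, copy mapred-site.xml to the other nodes and restart YARN so jobs are actually submitted to the ResourceManager. A minimal sketch, assuming the stock Hadoop 2.7.2 layout from the command above (/out2 is just a fresh output path, since /out1 may already exist):

# from the Hadoop install directory on the ResourceManager node
sbin/stop-yarn.sh
sbin/start-yarn.sh
# re-submit the example job; it should now show up at http://192.168.31.136:8088
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /test /out2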

2. After fixing problem 1, running an MR job fails with "The auxService:mapreduce_shuffle does not exist".

The cause: yarn-site.xml was missing the yarn.nodemanager.aux-services property:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

The original error looked like this:

16/11/29 23:10:45 INFO mapreduce.Job: Task Id : attempt_1480432102879_0001_m_000000_2, Status : FAILED
Container launch failed for container_e02_1480432102879_0001_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
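
Besides adding the property, yarn-site.xml has to be synced to every node and the NodeManagers restarted so the mapreduce_shuffle auxiliary service is registered. A minimal sketch, assuming the same Hadoop 2.7.2 layout:

# run on each slave node (or simply stop-yarn.sh / start-yarn.sh from the master)
sbin/yarn-daemon.sh stop nodemanager
sbin/yarn-daemon.sh start nodemanager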

-----------------------------------------------------------------------------------------------------------------

Configuration file record

A three-node ZooKeeper cluster was set up beforehand; the configuration below enables both HDFS HA and YARN HA.
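
For reference, the usual first-time start-up order for an HA cluster like this one is roughly the following; this is only a sketch, assuming the Hadoop 2.7.2 directory layout and the node names used in the configs below:

# on each JournalNode host (master, slave01, slave02)
sbin/hadoop-daemon.sh start journalnode
# on the first NameNode (master): format and start it
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
# on the second NameNode (slave01): copy the metadata over and start it
bin/hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
# on either NameNode: initialize the automatic-failover znode in ZooKeeper
bin/hdfs zkfc -formatZK
# then bring up HDFS and YARN as a whole
sbin/start-dfs.sh
sbin/start-yarn.sh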

1. hadoop-env.sh (modified):

export JAVA_HOME=/usr/lib/jvm/jdk8/jdk1.8.0_111

2. yarn-env.sh (modified):

export JAVA_HOME=/usr/lib/jvm/jdk8/jdk1.8.0_111

3. core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- Set the nameservice name to mycluster -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>
    <!-- Path where the JournalNode stores edit logs -->
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/home/jxlgzwh/hadoop-2.7.2/data/jn</value>
    </property>
    
    <!-- Directory for Hadoop temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/jxlgzwh/hadoop-2.7.2/data/tmp</value>
    </property>
    <property>
       <name>ha.zookeeper.quorum</name>
       <value>master:2181,slave01:2181,slave02:2181</value>
     </property>
</configuration>

 

4. hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- Set the HDFS nameservice name to mycluster -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <!-- The two NameNodes of mycluster, named nn1 and nn2 -->
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    
    <!-- RPC addresses and ports for nn1 and nn2 -->
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>master:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>slave01:8020</value>
    </property>
    
    <!-- HTTP (web UI) addresses for nn1 and nn2 -->
    <property>
      <name>dfs.namenode.http-address.mycluster.nn1</name>
      <value>master:50070</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn2</name>
      <value>slave01:50070</value>
    </property>
    <!-- JournalNode quorum where the NameNode shared edit log is stored -->
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://master:8485;slave01:8485;slave02:8485/mycluster</value>
    </property>
    <!-- Failover proxy provider used by clients to locate the active NameNode -->
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing method -->
     <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>
    <!-- Location of the SSH private key used for fencing -->
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/jxlgzwh/.ssh/id_dsa</value>
    </property>
    <!-- Number of block replicas -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
        <!-- Disable permission checking -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    
    <property>
       <name>dfs.ha.automatic-failover.enabled</name>
       <value>true</value>
     </property>
    
</configuration>
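
Once HDFS is up, the HA state of the two NameNodes can be checked with haadmin; a quick sketch using the NameNode ids defined above (one should report "active", the other "standby"):

bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2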

 

5. mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
</configuration>

 

6. slaves

192.168.31.136
192.168.31.130
192.168.31.229

 

7. yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>cluster1</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>slave01</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>master:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>slave01:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>master:2181,slave01:2181,slave02:2181</value>
</property>

<!-- YARN restart / recovery settings -->
<!-- Enable ResourceManager recovery -->
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<!-- ZooKeeper path where ResourceManager state is stored -->
<property>
  <name>yarn.resourcemanager.zk-state-store.parent-path</name>
  <value>/rmstore</value>
</property>

<!-- Store ResourceManager state in ZooKeeper -->
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- Enable NodeManager recovery -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<!-- Fix the NodeManager port (needed for NodeManager recovery) -->
<property>
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:45454</value>
</property>
<!-- NodeManager recovery state directory -->
<property>
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/home/jxlgzwh/hadoop-2.7.2/data/tmp/yarn-nm-recovery</value>
</property>
<!-- Site specific YARN configuration properties -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
</configuration>
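
Similarly, with ResourceManager HA enabled, which RM is currently active can be checked with rmadmin; a quick sketch using the rm-ids configured above:

bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2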

8. Configure the /etc/hosts file and set up passwordless SSH login

192.168.31.136 master.com master
192.168.31.130 slave01
192.168.31.229 slave02
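
Passwordless SSH between the nodes can be set up roughly as follows. This is only a sketch for the jxlgzwh user implied by the paths above, and it generates a DSA key because the fencing configuration points at /home/jxlgzwh/.ssh/id_dsa:

# on master, as user jxlgzwh
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
ssh-copy-id -i ~/.ssh/id_dsa.pub jxlgzwh@master
ssh-copy-id -i ~/.ssh/id_dsa.pub jxlgzwh@slave01
ssh-copy-id -i ~/.ssh/id_dsa.pub jxlgzwh@slave02
# repeat on slave01 and slave02 so every node can reach every other node without a password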
