Hive on Spark Deployment

Published: 2020-07-16 18:40:53 · Author: navyaijm2012

I. Environment

1. ZooKeeper cluster

10.10.103.144:2181,10.10.103.246:2181,10.10.103.62:2181

2. Metastore database

10.10.103.246:3306
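
Before installing, it is worth confirming that both services are reachable from the Hive host. A minimal check, assuming nc (netcat) is available:

# ZooKeeper answers "imok" to the four-letter "ruok" command
for zk in 10.10.103.144 10.10.103.246 10.10.103.62; do
  echo ruok | nc $zk 2181; echo
done

# confirm the MySQL port is reachable
nc -z -v 10.10.103.246 3306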

II. Installation

1. Install and configure the database

yum -y install mysql55-server mysql55

Then, in the mysql client:

CREATE DATABASE IF NOT EXISTS metastore;
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'10.10.103.246' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'127.0.0.1' IDENTIFIED BY 'hive';
USE metastore;
SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-1.1.0.mysql.sql; # this script errors out partway through; once it fails, run the next script
SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-txn-schema-0.13.0.mysql.sql;
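
To sanity-check the schema load, list the metastore tables as the hive account (a quick check, assuming the grants above took effect):

# should print the tables created by the schema scripts (DBS, TBLS, SDS, TXNS, ...)
mysql -uhive -phive -h127.0.0.1 metastore -e 'SHOW TABLES;'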

2. Install Hive

yum -y install hive hive-jdbc hive-metastore hive-server2
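
A quick way to confirm the packages landed (assuming an RPM-based system, as the yum commands imply):

rpm -q hive hive-jdbc hive-metastore hive-server2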

3. Configuration

vim /etc/hive/conf/hive-site.xml 
<?xml version="1.0"?>

<configuration>
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>

<!-- note: newer Hive builds warn that this property does not exist (see the HiveConf warning in the verification output below) -->
<property>
  <name>hive.enable.spark.execution.engine</name>
  <value>true</value>
</property>

<property>
  <name>spark.master</name>
  <value>yarn-client</value>
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
</property>
<property>
  <name>spark.eventLog.dir</name>
  <value>hdfs://mycluster:8020/spark-log</value>
</property>
<property>
  <name>spark.serializer</name>
  <value>org.apache.spark.serializer.KryoSerializer</value>
</property>
<property>
  <name>spark.executor.memory</name>
  <value>1g</value>
</property>
<property>
  <name>spark.driver.memory</name>
  <value>1g</value>
</property>
<!-- the JVM options below are the placeholder example from the Spark docs; replace -Dkey/-Dnumbers with real options or drop them -->
<property>
  <name>spark.executor.extraJavaOptions</name>
  <value>-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://10.10.103.246:9083</value>
</property>
<!-- note: this property is obsolete and triggers a HiveConf warning at startup (see verification output below) -->
<property>
  <name>hive.metastore.local</name>
  <value>false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://10.10.103.246/metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.autoStartMechanism</name> 
  <value>SchemaTable</value>
</property> 
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>10.10.103.144:2181,10.10.103.246:2181,10.10.103.62:2181</value>
</property>

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/lib/hive/lib/zookeeper.jar</value>
</property>

<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>

</configuration>
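
Once the file is in place, two quick checks are worthwhile. The Spark event log directory referenced above must exist in HDFS, and "set <property>;" in the Hive CLI echoes the effective value without launching a job (the hdfs user below is an assumption for a stock CDH-style layout):

# create the Spark event log directory if it does not exist yet
sudo -u hdfs hdfs dfs -mkdir -p /spark-log

# should print: hive.execution.engine=spark
hive -e 'set hive.execution.engine;'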

4. Start the metastore service

/etc/init.d/hive-metastore start
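
The metastore listens on the thrift port from hive.metastore.uris (9083 above), so a quick check that it came up:

/etc/init.d/hive-metastore status
# the thrift port should now be listening
netstat -lntp | grep 9083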

5. Verification

[root@ip-10-10-103-246 conf]# hive
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
17/05/12 15:04:47 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
17/05/12 15:04:47 WARN conf.HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist

Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> create table navy1(ts BIGINT, line STRING);
OK
Time taken: 0.925 seconds
hive> select count(*) from navy1;
Query ID = root_20170512150505_8f7fb28e-cf32-4efc-bb95-6add37f13fb6
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Spark Job = f045ab15-baaa-40e7-9641-d821fa313abe
Running with YARN Application = application_1494472050574_0014
Kill Command = /usr/lib/hadoop/bin/yarn application -kill application_1494472050574_0014

Query Hive on Spark job[0] stages:
0
1

Status: Running (Hive on Spark job[0])
Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
2017-05-12 15:05:30,835 Stage-0_0: 0(+1)/1      Stage-1_0: 0/1
2017-05-12 15:05:33,853 Stage-0_0: 1/1 Finished Stage-1_0: 1/1 Finished
Status: Finished successfully in 16.05 seconds
OK
0
Time taken: 19.325 seconds, Fetched: 1 row(s)
hive>

6. Problems encountered

Error:

hive> select count(*) from test;
Query ID = root_20170512143232_48d9f363-7b60-4414-9310-e6348104f476
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.initiateSparkConf(HiveSparkClientFactory.java:74)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.setup(SparkSessionManagerImpl.java:81)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:102)
        at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:111)
        at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:99)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 24 more
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. org/apache/hadoop/hbase/HBaseConfiguration

Solution: the stack trace shows that HiveSparkClientFactory reads the HBase configuration at startup, so the HBase jars must be on Hive's classpath. Installing the hbase package provides them:

yum -y install hbase
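
After installing, you can confirm the missing class is now on disk before retrying the query (a rough check, assuming the package installs under /usr/lib/hbase like the other packages above):

# HBaseConfiguration ships in the hbase jars
ls /usr/lib/hbase/hbase*.jar

# restart the Hive CLI so the new jars are picked up, then retry
hive -e 'select count(*) from test;'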

