org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found

Published: 2020-06-30 14:29:23 · Author: yangws2004

The exception message is as follows:

        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)

        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)

        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)

        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)

        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

java.lang.RuntimeException: MetaException(message:java.lang.ClassNotFoundException Class org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found)

        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:290)

        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:281)

        at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:631)

        at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:189)

        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1017)

        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:950)

        at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:201)

        at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:262)

        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:161)

        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:161)

        at scala.Option.getOrElse(Option.scala:120)

        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:161)

        at org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:262)

        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:174)

        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:186)

        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:181)

        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:188)

        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:188)



Background:

CREATE TABLE apachelog (

  host STRING,

  identity STRING,

  user STRING,

  time STRING,

  request STRING,

  status STRING,

  size STRING,

  referer STRING,

  agent STRING)

ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'

WITH SERDEPROPERTIES (

  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\".*\") ([^ \"]*|\".*\"))?"

)

STORED AS TEXTFILE;


The table was created with 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe' instead of 'org.apache.hadoop.hive.serde2.RegexSerDe'. As a result, every access from spark-sql or spark-shell raised the exception above: the class could not be found, and importing the relevant JARs still did not resolve it.
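Since the root cause is the serde class name, an alternative to shipping the contrib JAR is to point the table at the built-in RegexSerDe, which lives in Hive's own libraries (under org.apache.hadoop.hive.serde2 since roughly Hive 0.10), so no extra JAR is needed. A sketch, assuming the apachelog table from the DDL above:

```sql
-- Switch the existing table to the built-in RegexSerDe; the
-- SERDEPROPERTIES (input.regex) set at creation time are preserved.
ALTER TABLE apachelog
  SET SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe';
```

After this change, spark-sql should resolve the serde class from Hive's bundled JARs without any --jars flag.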


Solution:

    When starting spark-shell or spark-sql, pass the corresponding JAR with --jars xxxxx.jar. (Note: under normal circumstances, adding the JAR is the obvious fix. In my case, however, if the JAR path was a symbolic link, the exception above still occurred and the class could not be found; only the JAR's actual path worked. This may be a bug in how Spark handles symlinked paths, though I am not certain.)
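To avoid the symlink pitfall described above, the link can be resolved to its real path before it is handed to --jars. A minimal sketch (the JAR file names and /tmp locations below are placeholders, not paths from the original setup):

```shell
# Simulate a symlinked JAR, then resolve it with readlink -f,
# which follows the link chain and prints the actual path.
touch /tmp/real-hive-contrib.jar
ln -sf /tmp/real-hive-contrib.jar /tmp/link-hive-contrib.jar
readlink -f /tmp/link-hive-contrib.jar
```

In practice this means launching with the resolved path, e.g. spark-shell --jars "$(readlink -f /path/to/the.jar)", so Spark always sees the real file rather than the link.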
