How to Install Snappy for Hadoop + HBase

Published: 2021-08-25 16:13:02  Author: chen
Source: Yisu Cloud  Views: 192

This article explains how to install Snappy for Hadoop + HBase. Many people run into trouble with this in day-to-day work, so the steps below collect the procedure into a simple, workable walkthrough. Follow along!

1. Check whether the Snappy native library is installed

Run: bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy
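As a sketch (the HBase install path is an example; adjust it to your environment, and note the actual HBase invocation is shown commented out since it needs a live installation):

```shell
# Create the small file the smoke test will try to compress.
echo "snappy compression smoke test" > /tmp/test.txt

# Run HBase's CompressionTest against it (install path is an example).
# /opt/hbase-0.98.6.1/bin/hbase \
#     org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy

cat /tmp/test.txt
```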

If the output looks like this:

12/12/03 10:30:02 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:30:02 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.lang.RuntimeException: native snappy library not available
     at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
     at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
     at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
     at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:264)
     at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:739)
     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:127)
     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:118)
     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:101)
     at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:394)
     at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)

then the Snappy native library is not installed.

2. Download a snappy-*.tar.gz source tarball (any version compatible with your HBase will do; here snappy-1.1.1.tar.gz) and extract it.

3. Enter the snappy source directory and build it with two commands:

      ./configure

      make

4. The build produces a libsnappy.so file (this is the library we need!). It normally ends up under the source directory at ./.libs/libsnappy.so (the libtool output directory), but it sometimes lands somewhere else. If make finished without errors, search from the source root (or from /) and you will find it.
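As an illustration (using a scratch directory standing in for a real build tree), the search looks like this:

```shell
# Scratch stand-in for a snappy build tree; libtool normally drops the
# shared object under .libs/ in the source directory.
mkdir -p /tmp/snappy-demo/.libs
touch /tmp/snappy-demo/.libs/libsnappy.so

# Search for the library from the tree root (search from / on a real
# system if you have lost track of where it went).
find /tmp/snappy-demo -name 'libsnappy.so'
```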

5. Copy the generated libsnappy.so into HBase's lib/native/Linux-ARCH directory, where ARCH is amd64 or i386-32. Note that an amd64 HBase install may not have this directory yet; in that case, create it manually:

     mkdir /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64
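Sketched with a scratch path standing in for the HBase install root (the real copy would use the libsnappy.so from your build, shown commented out):

```shell
# Scratch stand-in for the HBase root; in practice e.g. /opt/hbase-0.98.6.1.
HBASE_HOME=/tmp/hbase-demo

# Create the native-library directory if it does not exist, then copy
# the built library into it (cp shown for reference).
mkdir -p "$HBASE_HOME/lib/native/Linux-amd64-64"
# cp .libs/libsnappy.so "$HBASE_HOME/lib/native/Linux-amd64-64/"

ls -d "$HBASE_HOME/lib/native/Linux-amd64-64"
```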

6. If you are still unsure where HBase looks for native libraries, raise the log level in its log4j configuration and debug from the output.
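For example, adding lines like the following to HBase's conf/log4j.properties makes native-library loading verbose (the logger names follow the usual one-logger-per-class convention; treat them as an example, not an exhaustive list):

```properties
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=DEBUG
log4j.logger.org.apache.hadoop.io.compress=DEBUG
```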

7. Re-run the command from step 1. The output should now look like this:

12/12/03 10:34:35 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
12/12/03 10:34:35 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32C not available. 
12/12/03 10:34:35 DEBUG util.FSUtils: Creating file:file:/tmp/test.txtwith permission:rwxrwxrwx
12/12/03 10:34:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/12/03 10:34:35 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:34:35 WARN snappy.LoadSnappy: Snappy native library is available
12/12/03 10:34:35 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.lang.RuntimeException: native snappy library not available
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
    at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:264)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:739)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:127)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:118)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:101)
    at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:394)
    at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)
    at org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:138)

8. As you can see, Snappy is now found ("available") but still not loaded ("not loaded"). To get it loaded, you also need to copy Hadoop's native library into the same directory as libsnappy.so. Hadoop's native library path is:

      hadoop-1.2.1/lib/native/Linux-ARCH/libhadoop.so

      If the file is not there, download the tar.gz for your Hadoop version from https://archive.apache.org/dist/hadoop/core/ and extract it; the needed file is inside.
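Sketched again with scratch paths standing in for the real install roots (a touch stands in for the actual libhadoop.so):

```shell
# Scratch stand-ins; in practice e.g. /opt/hadoop-1.2.1 and /opt/hbase-0.98.6.1.
HADOOP_NATIVE=/tmp/hadoop-demo/lib/native/Linux-amd64-64
HBASE_NATIVE=/tmp/hbase-demo/lib/native/Linux-amd64-64
mkdir -p "$HADOOP_NATIVE" "$HBASE_NATIVE"
touch "$HADOOP_NATIVE/libhadoop.so"   # stands in for the real Hadoop native library

# Put libhadoop.so next to libsnappy.so so HBase can load both.
cp "$HADOOP_NATIVE/libhadoop.so" "$HBASE_NATIVE/"
ls "$HBASE_NATIVE"
```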

9. Run the test command from step 1 one more time; you should now get:

12/12/03 10:37:48 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32 not available.
12/12/03 10:37:48 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
12/12/03 10:37:48 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32C not available. 
12/12/03 10:37:48 DEBUG util.FSUtils: Creating file:file:/tmp/test.txtwith permission:rwxrwxrwx
12/12/03 10:37:48 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/12/03 10:37:48 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:37:48 WARN snappy.LoadSnappy: Snappy native library is available
12/12/03 10:37:48 INFO snappy.LoadSnappy: Snappy native library loaded
12/12/03 10:37:48 INFO compress.CodecPool: Got brand-new compressor
12/12/03 10:37:48 DEBUG hfile.HFileWriterV2: Initialized with CacheConfig:disabled
12/12/03 10:37:49 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:37:49 INFO compress.CodecPool: Got brand-new decompressor
SUCCESS

      The final SUCCESS line means the installation worked and Snappy compression is ready to use. Done.
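With the native library loaded, Snappy can then be enabled per column family from the HBase shell, for example (table and family names here are illustrative):

```
create 'testtable', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
```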

That concludes this walkthrough of installing Snappy for Hadoop + HBase; hopefully it clears up the process. Pairing the theory with hands-on practice is the best way to learn, so go give it a try!
