Implementing an HBase Secondary Index Based on a Coprocessor
The scenario is as follows: data is stored in the UC_TWEETS table, whose rowkey is designed as folderId_dayId_siteId_docId. Exports need to be driven by campaignId, so an index table keyed by campaignId is required.
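To make the rowkey design concrete, the small sketch below (with made-up IDs, not from the original data) shows how a UC_TWEETS rowkey and its campaignId value combine into the rowkey of the corresponding UC_INDEX row:

public class RowkeyMappingExample {
    public static void main(String[] args) {
        String dataRowkey = "12_20160301_5_889"; // folderId_dayId_siteId_docId (made-up values)
        String campaignId = "77";                // value stored in f1:campaignId (made-up value)
        // Index rowkey = campaignId prefixed to the original data rowkey.
        String indexRowkey = campaignId + "_" + dataRowkey;
        System.out.println(indexRowkey);         // prints 77_12_20160301_5_889
    }
}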
The implementation steps are as follows:
1. The code implementation is as follows:
// The package must match the class name registered on the table in step 4.
package com.prime.dsc.inputservice.coprocessor;

import java.io.IOException;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class HbaseCoprocessor extends BaseRegionObserver {

    @Override
    public void prePut(final ObserverContext<RegionCoprocessorEnvironment> e, final Put put,
            final WALEdit edit, final Durability durability) throws IOException {
        // Each put on UC_TWEETS opens a client connection to the UC_INDEX table.
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.regionserver.lease.period", "900000");
        configuration.set("hbase.rpc.timeout", "1800000");
        configuration.set("hbase.client.scanner.timeout.period", "1800000");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        configuration.set("hbase.zookeeper.quorum", "DEV-HADOOP-01,DEV-HADOOP-02,DEV-HADOOP-03");
        configuration.set("hbase.master", "DEV-HADOOP-01:60000");
        HTable table = new HTable(configuration, "UC_INDEX");
        // Pull the campaignId cell(s) out of the incoming Put on UC_TWEETS.
        List<Cell> kv = put.get("f1".getBytes(), "campaignId".getBytes());
        Iterator<Cell> kvItor = kv.iterator();
        while (kvItor.hasNext()) {
            KeyValue tmp = (KeyValue) kvItor.next();
            String rowkey = new String(tmp.getRow());
            String value = new String(tmp.getValue());
            // Index rowkey = campaignId + "_" + original rowkey (folderId_dayId_siteId_docId).
            String newRowkey = value + "_" + rowkey;
            Put indexPut = new Put(newRowkey.getBytes());
            indexPut.add("f1".getBytes(), tmp.getQualifier(), tmp.getValue());
            table.put(indexPut);
        }
        table.close();
    }
}
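For context, here is a minimal sketch (not part of the original steps; table and column names follow the code above, the campaignId value is made up, and the HBase client configuration is assumed to be on the classpath) of how an export job could use UC_INDEX: scan all index rows whose key starts with "<campaignId>_" and strip that prefix to recover the original UC_TWEETS rowkey.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class CampaignExportExample {
    public static void main(String[] args) throws IOException {
        String campaignId = "77"; // hypothetical campaignId to export
        Configuration conf = HBaseConfiguration.create();
        HTable index = new HTable(conf, "UC_INDEX");
        Scan scan = new Scan();
        // Keep only index rows whose key begins with "<campaignId>_".
        scan.setFilter(new PrefixFilter(Bytes.toBytes(campaignId + "_")));
        ResultScanner scanner = index.getScanner(scan);
        try {
            for (Result r : scanner) {
                String indexRowkey = Bytes.toString(r.getRow());
                // Original UC_TWEETS rowkey = index rowkey without the "<campaignId>_" prefix.
                String tweetsRowkey = indexRowkey.substring(campaignId.length() + 1);
                System.out.println(tweetsRowkey);
            }
        } finally {
            scanner.close();
            index.close();
        }
    }
}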
2. Export the HbaseCoprocessor class above: choose Export -> Jar File and export it as the ucTweet.jar file.
3. Upload the jar file to HDFS with the following command: ./hadoop fs -put /data/server/ucTweet_index.jar /jars. Note that the file name that ends up under /jars must match the jar name referenced by the coprocessor attribute in step 4 (ucTweet.jar).
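If you prefer to double-check the upload from code rather than the shell, the following sketch (assuming the same NameNode address used in step 4) tests whether the jar is present at the path the coprocessor attribute will reference:

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckCoprocessorJar {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.9.110:9000"), conf);
        Path jar = new Path("/jars/ucTweet.jar"); // must match the file name actually uploaded
        System.out.println(jar + " exists: " + fs.exists(jar));
        fs.close();
    }
}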
4. Set the coprocessor attribute on the UC_TWEETS table with the following hbase shell command: alter 'UC_TWEETS',METHOD=>'table_att','coprocessor'=>'hdfs://192.168.9.110:9000/jars/ucTweet.jar|com.prime.dsc.inputservice.coprocessor.HbaseCoprocessor|1001|'
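As an alternative to the shell command, the same table attribute can also be set from Java with the old admin API. The sketch below is an illustration only and assumes the same jar path, class name, and priority (1001) as the command above:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class AttachCoprocessor {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        TableName table = TableName.valueOf("UC_TWEETS");
        admin.disableTable(table);
        HTableDescriptor desc = admin.getTableDescriptor(table);
        // Same jar path, class name, and priority as the alter command above.
        desc.addCoprocessor("com.prime.dsc.inputservice.coprocessor.HbaseCoprocessor",
                new Path("hdfs://192.168.9.110:9000/jars/ucTweet.jar"), 1001, null);
        admin.modifyTable(table, desc);
        admin.enableTable(table);
        admin.close();
    }
}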
5. Insert data into the UC_TWEETS table. If the UC_INDEX table also receives rows and their rowkeys match the design (campaignId_folderId_dayId_siteId_docId), the secondary index has been built successfully.
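A quick way to run this check from Java is sketched below (made-up IDs, not from the original article): write one test row into UC_TWEETS, then read the expected campaignId_rowkey entry back from UC_INDEX. It assumes the client picks up the cluster configuration, e.g. from hbase-site.xml on the classpath:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class VerifySecondaryIndex {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // Hypothetical test row: rowkey follows folderId_dayId_siteId_docId.
        String dataRowkey = "12_20160301_5_889";
        String campaignId = "77";
        HTable tweets = new HTable(conf, "UC_TWEETS");
        Put put = new Put(Bytes.toBytes(dataRowkey));
        put.add(Bytes.toBytes("f1"), Bytes.toBytes("campaignId"), Bytes.toBytes(campaignId));
        tweets.put(put);
        tweets.close();
        // If the coprocessor fired, UC_INDEX should now contain campaignId_originalRowkey.
        HTable index = new HTable(conf, "UC_INDEX");
        Result r = index.get(new Get(Bytes.toBytes(campaignId + "_" + dataRowkey)));
        System.out.println("index row found: " + !r.isEmpty());
        index.close();
    }
}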