Integrating HBase with Spark on CentOS
Before integrating HBase with Spark on a CentOS system, complete the following basic preparation:
1. Configure the hostname-to-IP mappings of all nodes (/etc/hosts) and verify network connectivity; disable the firewall (systemctl stop firewalld.service) or open the required ports (e.g. 2181 for Zookeeper/HBase, 9000 for HDFS, 7077 for Spark).
2. Install the JDK and set JAVA_HOME, configure the Hadoop environment variables (HADOOP_HOME, HADOOP_CONF_DIR), and set up the Zookeeper cluster (node information in zoo.cfg).
3. Download the installation packages (hbase-2.5.7-bin.tar.gz, spark-3.3.2-bin-hadoop3.tgz) and upload them to the /opt directory on CentOS.

Install and configure HBase:
1. Extract the package with tar -xzf hbase-2.5.7-bin.tar.gz -C /opt/ and rename the extracted directory to hbase.
2. Add export HBASE_HOME=/opt/hbase and export PATH=$PATH:$HBASE_HOME/bin to ~/.bashrc, then run source ~/.bashrc to apply the changes.
3. Edit $HBASE_HOME/conf/hbase-site.xml to set the HBase root directory (an HDFS path) and enable distributed mode:
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-master:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
4. In the same hbase-site.xml, also add the Zookeeper quorum: <property><name>hbase.zookeeper.quorum</name><value>hadoop-master,hadoop-slave1,hadoop-slave2</value></property> (replace the values with your actual Zookeeper node hostnames or IPs).

Install and configure Spark:
1. Extract the package with tar -xzf spark-3.3.2-bin-hadoop3.tgz -C /opt/ and rename the extracted directory to spark.
2. Add export SPARK_HOME=/opt/spark and export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin to ~/.bashrc, then run source ~/.bashrc to apply the changes.
3. Edit $SPARK_HOME/conf/spark-env.sh (created by copying spark-env.sh.template) to set the Java and Hadoop paths and the Master node:
export JAVA_HOME=/usr/local/jdk1.8.0_161
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_HOST=hadoop-master
export SPARK_WORKER_MEMORY=2g
4. List the Worker nodes (e.g. hadoop-slave1, hadoop-slave2), one per line, in $SPARK_HOME/conf/workers (this file was named slaves in Spark 2.x).

Copy the required jar packages from HBase's lib directory into Spark's jars directory so that Spark can access the HBase API:
cd /opt/hbase/lib
cp hbase*.jar /opt/spark/jars/
cp guava-12.0.1.jar /opt/spark/jars/ # resolve guava version conflicts
cp htrace-core-3.1.0-incubating.jar /opt/spark/jars/
cp protobuf-java-2.5.0.jar /opt/spark/jars/
Alternatively, add the HBase classpath in $SPARK_HOME/conf/spark-env.sh:
export SPARK_DIST_CLASSPATH=$(/opt/hadoop/bin/hadoop classpath):$(/opt/hbase/bin/hbase classpath):/opt/spark/jars/hbase/*
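Before writing any jobs, it is worth confirming that the HBase classes are actually visible to Spark. The snippet below is an optional sketch of such a check from PySpark: it reaches the JVM through the py4j gateway (the private sc._jvm handle), so treat it as a debugging aid rather than a supported API; the master setting and app name are placeholders.
from pyspark import SparkContext, SparkConf

# Optional classpath smoke test (sketch): if the HBase jars copied above are on
# Spark's classpath, creating an HBaseConfiguration through the JVM gateway
# succeeds; a py4j error here usually means the jar copy or the
# SPARK_DIST_CLASSPATH setting was not picked up.
sc = SparkContext(conf=SparkConf().setMaster("local[1]").setAppName("HBaseClasspathCheck"))
try:
    hbase_conf = sc._jvm.org.apache.hadoop.hbase.HBaseConfiguration.create()
    print("HBase classes visible to Spark:", hbase_conf is not None)
finally:
    sc.stop()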
With the configuration in place, start the services in order:
1. Start Hadoop with start-dfs.sh and start-yarn.sh (check for the NameNode, DataNode, and ResourceManager processes with jps).
2. Start Zookeeper on each node with zkServer.sh start (check the Leader/Follower status with zkServer.sh status).
3. Start HBase with start-hbase.sh (check for the HMaster and HRegionServer processes with jps).
4. Start Spark with start-master.sh and start-workers.sh (check the cluster status in the Spark Web UI at http://hadoop-master:8080).

Create a table in the HBase shell and insert some test data:
create 'student', 'info'
put 'student', '1', 'info:name', 'Rongcheng'
put 'student', '2', 'info:name', 'Guanhua'
scan 'student'
Read data from the HBase table through the newAPIHadoopRDD interface. The org.apache.spark.examples.pythonconverters classes referenced below are shipped in a spark-examples jar, which must be reachable on Spark's classpath (e.g. under /opt/spark/jars/hbase/ as configured above):
from pyspark import SparkContext, SparkConf

def read_from_hbase():
    # HBase/Zookeeper connection settings and the input table.
    conf = {
        "hbase.zookeeper.quorum": "hadoop-master",
        "hbase.mapreduce.inputtable": "student"
    }
    # Converters (from the spark-examples jar) that turn HBase key/value types into strings.
    key_conv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
    value_conv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"
    hbase_rdd = sc.newAPIHadoopRDD(
        "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "org.apache.hadoop.hbase.client.Result",
        keyConverter=key_conv,
        valueConverter=value_conv,
        conf=conf
    )
    for (k, v) in hbase_rdd.collect():
        print(f"Rowkey: {k}, Value: {v}")

if __name__ == "__main__":
    conf = SparkConf().setMaster("local[2]").setAppName("SparkReadHBase")
    sc = SparkContext(conf=conf)  # sc is used as a module-level global in read_from_hbase()
    read_from_hbase()
    sc.stop()
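Each value string returned by HBaseResultToStringConverter is a sequence of JSON objects, one per cell and newline-separated when a row holds several cells, so it can be parsed into Python dicts for further processing. The helper below is a minimal sketch of that parsing step, assuming the converter's default output format; the name parse_hbase_value is just an illustrative choice.
import json

def parse_hbase_value(value_str):
    # One JSON object per cell; cells of the same row are separated by newlines.
    return [json.loads(cell) for cell in value_str.split("\n")]

# Example usage inside read_from_hbase():
#   for (k, v) in hbase_rdd.collect():
#       for cell in parse_hbase_value(v):
#           print(cell["row"], cell["columnFamily"], cell["qualifier"], cell["value"])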
Write data to the HBase table through the TableOutputFormat interface; the string-to-Put converters used below come from the same spark-examples package:
from pyspark import SparkContext, SparkConf

def write_to_hbase():
    conf = {
        "hbase.zookeeper.quorum": "hadoop-master",
        "hbase.mapred.outputtable": "student",
        "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
        "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"
    }
    # Converters from the spark-examples jar: the key string becomes an
    # ImmutableBytesWritable, and the [rowkey, family, qualifier, value] list becomes a Put.
    key_conv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
    value_conv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
    data = [("3", "info", "name", "SparkUser"), ("4", "info", "name", "HBaseUser")]
    rdd = sc.parallelize(data).map(lambda x: (
        x[0],                     # Rowkey
        [x[0], x[1], x[2], x[3]]  # [rowkey, column family, qualifier, value]
    ))
    rdd.saveAsNewAPIHadoopDataset(
        conf=conf,
        keyConverter=key_conv,
        valueConverter=value_conv
    )

if __name__ == "__main__":
    conf = SparkConf().setMaster("local[2]").setAppName("SparkWriteHBase")
    sc = SparkContext(conf=conf)
    write_to_hbase()
    sc.stop()
Run the scripts above, then verify that the new rows were written successfully with the scan 'student' command in the hbase shell.
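If you prefer to verify from the Spark side as well, the same newAPIHadoopRDD call used earlier can be turned into a small read-back check. The sketch below reuses the hadoop-master quorum, the student table, and the spark-examples converters assumed throughout this guide, and only tests that the newly written rowkeys are present.
from pyspark import SparkContext, SparkConf

# Optional read-back check (sketch): confirms that rowkeys "3" and "4" now exist.
conf = {"hbase.zookeeper.quorum": "hadoop-master",
        "hbase.mapreduce.inputtable": "student"}
sc = SparkContext(conf=SparkConf().setMaster("local[2]").setAppName("VerifyHBaseWrite"))
rowkeys = sc.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter="org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter",
    valueConverter="org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter",
    conf=conf
).keys().collect()
print("Rowkeys 3 and 4 present:", {"3", "4"}.issubset(set(rowkeys)))
sc.stop()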