This article shows how to install and configure HUE. The steps are concise and organized; hopefully they help you work through the installation and configuration described below.
HUE Installation and Configuration
1. Download HUE: http://cloudera.github.io/hue/docs-3.0.0/manual.html#_hadoop_configuration
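A minimal sketch of unpacking a downloaded release so that it matches the /opt/hue path used later (the tarball name hue-3.10.0.tgz and the extracted directory name are assumptions; use whatever release you actually downloaded):
$ tar -xzf hue-3.10.0.tgz -C /opt/    # unpack the release tarball (name assumed)
$ mv /opt/hue-3.10.0 /opt/hue         # rename so the commands below find /opt/hue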
2. Install the HUE build dependencies (as root)
Redhat | Ubuntu |
gcc | gcc |
g++ | g++ |
libxml2-devel | libxml2-dev |
libxslt-devel | libxslt-dev |
cyrus-sasl-devel | libsasl2-dev |
cyrus-sasl-gssapi | libsasl2-modules-gssapi-mit |
mysql-devel | libmysqlclient-dev |
python-devel | python-dev |
python-setuptools | python-setuptools |
python-simplejson | python-simplejson |
sqlite-devel | libsqlite3-dev |
ant | ant |
krb5-devel | libkrb5-dev |
libtidy (for unit tests only) | libtidy-0.99-0 (for unit tests only) |
mvn (from maven package or tarball) | mvn (from maven2 package or tarball) |
openldap-devel | openldap-dev / libldap2-dev |
$ yum install -y gcc gcc-c++ libxml2-devel libxslt-devel cyrus-sasl-devel cyrus-sasl-gssapi mysql-devel python-devel python-setuptools python-simplejson sqlite-devel ant krb5-devel libtidy maven openldap-devel
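On Ubuntu/Debian hosts the equivalent step, taking the package names from the Ubuntu column above, would look roughly like this (untested sketch; maven may come from the maven or maven2 package depending on the release):
$ apt-get install -y gcc g++ libxml2-dev libxslt-dev libsasl2-dev libsasl2-modules-gssapi-mit libmysqlclient-dev python-dev python-setuptools python-simplejson libsqlite3-dev ant libkrb5-dev libtidy-0.99-0 maven libldap2-dev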
3. Edit the pom.xml file
$ vim /opt/hue/maven/pom.xml
a.) Change the Hadoop and Spark versions
<hadoop-mr1.version>2.6.0</hadoop-mr1.version>
<hadoop.version>2.6.0</hadoop.version>
<spark.version>1.4.0</spark.version>
b.) Change hadoop-core to hadoop-common
<artifactId>hadoop-common</artifactId>
c.) Change the hadoop-test version to 1.2.1
<artifactId>hadoop-test</artifactId>
<version>1.2.1</version>
d.) Delete the two ThriftJobTrackerPlugin.java files, located in the following two directories:
/usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/thriftfs/ThriftJobTrackerPlugin.java
/usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/mapred/ThriftJobTrackerPlugin.java
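If you prefer to script these pom.xml edits, a rough sketch with sed is shown below (same paths and versions as in the steps above; the artifactId changes in steps b and c are easier to make by hand in an editor). Back up the file first:
$ cp /opt/hue/maven/pom.xml /opt/hue/maven/pom.xml.bak
$ sed -i 's|<hadoop-mr1.version>.*</hadoop-mr1.version>|<hadoop-mr1.version>2.6.0</hadoop-mr1.version>|' /opt/hue/maven/pom.xml
$ sed -i 's|<hadoop.version>.*</hadoop.version>|<hadoop.version>2.6.0</hadoop.version>|' /opt/hue/maven/pom.xml
$ sed -i 's|<spark.version>.*</spark.version>|<spark.version>1.4.0</spark.version>|' /opt/hue/maven/pom.xml
$ rm /usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/thriftfs/ThriftJobTrackerPlugin.java
$ rm /usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/mapred/ThriftJobTrackerPlugin.java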
4. Build
$ cd /opt/hue
$ make apps
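make apps needs a JDK, Maven, and Python 2 on the PATH; a quick pre-flight check before building (illustrative):
$ java -version
$ mvn -version
$ python --version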
5. Start the HUE service
$ ./build/env/bin/supervisor
$ ps aux | grep "hue"
$ kill -9 <PID>
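To keep HUE running after the terminal closes, the supervisor can also be started in the background (illustrative):
$ nohup ./build/env/bin/supervisor > hue.log 2>&1 &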
6. hue.ini configuration
$ vim /usr/hdp/hue/hue-3.10.0/desktop/conf/hue.ini
a.) [desktop] settings
[desktop]
# Webserver listens on this address and port
http_host=xx.xx.xx.xx
http_port=8888
# Time zone name
time_zone=Asia/Shanghai
# Webserver runs as this user
server_user=hue
server_group=hue
# This should be the Hue admin and proxy user
default_user=hue
# This should be the hadoop cluster admin
default_hdfs_superuser=hdfs
[hadoop]
[[hdfs_clusters]]
[[[default]]]
# Enter the filesystem uri
# If HDFS is not configured for HA, use:
fs_defaultfs=hdfs://xx.xx.xx.xx:8020 ## Hadoop NameNode host
# If HDFS is configured for HA, use:
fs_defaultfs=hdfs://mycluster ## logical name; must match fs.defaultFS in core-site.xml
# NameNode logical name.
## logical_name=carmecluster
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
# If HDFS is not configured for HA, use:
webhdfs_url=http://xx.xx.xx.xx:50070/webhdfs/v1
# If HDFS is configured for HA, HUE can only reach HDFS through HttpFS. Install it manually ($ sudo yum install hadoop-httpfs), start the service ($ ./hadoop-httpfs start &), then use:
webhdfs_url=http://xx.xx.xx.xx:14000/webhdfs/v1
[[yarn_clusters]]
[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=xx.xx.xx.xx
# The port where the ResourceManager IPC listens on
resourcemanager_port=8050
# Whether to submit jobs to this cluster
submit_to=True
# Resource Manager logical name (required for HA)
## logical_name=
# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false
# URL of the ResourceManager API
resourcemanager_api_url=http://xx.xx.xx.xx:8088
# URL of the ProxyServer API
proxy_api_url=http://xx.xx.xx.xx:8088
# URL of the HistoryServer API
history_server_api_url=http://xx.xx.xx.xx:19888
# URL of the Spark History Server
## spark_history_server_url=http://localhost:18088
[[mapred_clusters]]
[[[default]]]
# Enter the host on which you are running the Hadoop JobTracker
jobtracker_host=xx.xx.xx.xx
# The port where the JobTracker IPC listens on
jobtracker_port=8021
# JobTracker logical name for HA
## logical_name=
# Thrift plug-in port for the JobTracker
thrift_port=9290
# Whether to submit jobs to this cluster
submit_to=False
[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=xx.xx.xx.xx
# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/etc/hive/conf
# Timeout in seconds for thrift calls to Hive service
## server_conn_timeout=120
[hbase]
# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
# Use full hostname with security.
# If using Kerberos we assume GSSAPI SASL, not PLAIN.
hbase_clusters=(Cluster|xx.xx.xx.xx:9090)
# If connecting to HBase fails, start the HBase Thrift server: $ nohup hbase thrift start &
[zookeeper]
[[clusters]]
[[[default]]]
# Zookeeper ensemble. Comma separated list of Host/Port.
# e.g. localhost:2181,localhost:2182,localhost:2183
host_ports=xx.xx.xx.xx:2181,xx.xx.xx.xx:2181,xx.xx.xx.xx:2181
[liboozie]
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
oozie_url=http://xx.xx.xx.xx:11000/oozie
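Before restarting HUE with the new hue.ini, the endpoints referenced above can be sanity-checked from the HUE host. The commands below are illustrative; substitute the real hosts for xx.xx.xx.xx:
$ curl 'http://xx.xx.xx.xx:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue'   # WebHDFS endpoint reachable
$ curl 'http://xx.xx.xx.xx:8088/ws/v1/cluster/info'                         # ResourceManager REST API
$ nc -z xx.xx.xx.xx 10000 && echo 'HiveServer2 port open'                   # HiveServer2 Thrift port
$ nc -z xx.xx.xx.xx 9090 && echo 'HBase Thrift port open'                   # HBase Thrift server port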
b.) Hadoop configuration
hdfs-site.xml configuration file:
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
core-site.xml configuration file:
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
If the HUE server is outside the Hadoop cluster, HDFS can still be reached by running an HttpFS server; the HttpFS service needs only a single port open to the cluster.
httpfs-site.xml configuration file:
<property>
<name>httpfs.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>httpfs.proxyuser.hue.groups</name>
<value>*</value>
</property>
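After changing these proxy-user settings, restart the NameNode (and the HttpFS daemon, if you use it) for them to take effect. A minimal check of the HttpFS endpoint afterwards, assuming the default port 14000:
$ curl 'http://xx.xx.xx.xx:14000/webhdfs/v1/?op=GETHOMEDIRECTORY&user.name=hue'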
c.) MapReduce 0.20 (MR1) configuration
HUE communicates with the JobTracker through a plug-in jar that lives in the MapReduce lib directory.
If the JobTracker and HUE run on the same host, copy it there:
$ cd /usr/share/hue
$ cp desktop/libs/hadoop/java-lib/hue-plugins-*.jar /usr/lib/hadoop-0.20-mapreduce/lib
If the JobTracker runs on a different host, scp the Hue plugins jar to the JobTracker host.
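For the remote case, something along these lines (jobtracker-host is a placeholder for the real hostname):
$ scp desktop/libs/hadoop/java-lib/hue-plugins-*.jar root@jobtracker-host:/usr/lib/hadoop-0.20-mapreduce/lib/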
Add the following to mapred-site.xml and restart the JobTracker:
<property>
<name>jobtracker.thrift.address</name>
<value>0.0.0.0:9290</value>
</property>
<property>
<name>mapred.jobtracker.plugins</name>
<value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
<description>Comma-separated list of jobtracker plug-ins to be activated.</description>
</property>
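After the restart you can check, on the JobTracker host, that the Thrift plug-in is listening on the configured port (illustrative):
$ netstat -tlnp | grep 9290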
d.) Oozie configuration
oozie-site.xml configuration:
<property>
<name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
<value>*</value>
</property>
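With the proxy user in place and Oozie restarted, the oozie_url configured earlier can be verified through the REST API, or with the Oozie client if it is installed; both commands are illustrative:
$ curl 'http://xx.xx.xx.xx:11000/oozie/v1/admin/status'
$ oozie admin -oozie http://xx.xx.xx.xx:11000/oozie -status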