HDFS security can be built up along several axes: authentication, authorization, encryption, and monitoring, among others. The steps below are configuration measures for a Linux environment.
1. Kerberos authentication. Install the Kerberos client packages:
sudo yum install krb5-workstation krb5-libs
Edit /etc/krb5.conf to set the realm (REALM) and the KDC server address, then create a service principal for HDFS and export its key to a keytab file:
kadmin.local -q "addprinc -randkey hdfs/_HOST@YOUR.REALM.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/hdfs.keytab hdfs/_HOST@YOUR.REALM.COM"
Enable Kerberos authentication in core-site.xml:
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
In hdfs-site.xml, also configure the NameNode and DataNode principals and their keytab file paths.

2. Permission control. Enable HDFS permission checking in hdfs-site.xml:
<property>
<name>dfs.permissions.enabled</name>
<value>true</value>
</property>
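The property fragments above all sit under a single <configuration> root element in a Hadoop *-site.xml file. As an illustration of that file layout (not Hadoop code), a short Python sketch that assembles such a file and round-trips it:

```python
# Illustrative sketch: build a Hadoop-style *-site.xml, where every
# setting is a <property> with <name>/<value> children under one
# <configuration> root. Property names taken from the text above.
import xml.etree.ElementTree as ET

def build_site_xml(props: dict) -> str:
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")

xml_text = build_site_xml({
    "hadoop.security.authentication": "kerberos",
    "dfs.permissions.enabled": "true",
})

# Round-trip check: the generated document parses back cleanly
# and both properties are present.
parsed = ET.fromstring(xml_text)
names = [p.findtext("name") for p in parsed.findall("property")]
```

This is only a structural illustration; in practice you edit the files that ship with the Hadoop distribution rather than generating them.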
For finer-grained control than owner/group/other bits, use ACLs (ACL support is governed by dfs.namenode.acls.enabled in hdfs-site.xml):
hdfs dfs -setfacl -m user:user1:rwx /path/to/directory
hdfs dfs -setfacl -m group:group1:r /path/to/directory
hdfs dfs -getfacl /path/to/directory
hdfs dfs -setfacl -R -m user:user1:rwx /path
The first two commands grant user1 full access and group1 read access; -getfacl lists the current entries, and -R applies an entry recursively.
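To make the entry syntax concrete, here is a small Python model of how an ACL entry string such as user:user1:rwx decomposes into a principal plus read/write/execute bits. It mirrors the setfacl arguments above and is purely illustrative, not HDFS internals:

```python
# Illustration only: decompose an HDFS-style ACL entry string
# ("<type>:<name>:<perms>") into its parts.
def parse_acl_entry(entry: str) -> dict:
    kind, name, perms = entry.split(":")
    return {
        "type": kind,          # "user" or "group"
        "name": name,          # principal the entry applies to
        "read": "r" in perms,
        "write": "w" in perms,
        "execute": "x" in perms,
    }

acl = parse_acl_entry("user:user1:rwx")       # full access for user1
group_acl = parse_acl_entry("group:group1:r") # read-only for group1
```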
3. Encryption. For data in transit, enable SSL/TLS in core-site.xml:
<property>
<name>hadoop.ssl.enabled</name>
<value>true</value>
</property>
<property>
<name>hadoop.ssl.keystore.file</name>
<value>/path/to/keystore.jks</value>
</property>
For data at rest, point HDFS transparent encryption at a Hadoop KMS instance:
<property>
<name>dfs.encryption.key.provider.uri</name>
<value>kms://http@kms-server:16000/kms</value>
</property>
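The key-provider value packs several pieces of information into one URI: the kms scheme, the transport protocol in the user-info slot, the KMS host and port, and the context path. A quick, illustrative decomposition with Python's standard urllib.parse:

```python
from urllib.parse import urlparse

# The value from dfs.encryption.key.provider.uri above.
uri = urlparse("kms://http@kms-server:16000/kms")

# scheme is "kms"; the user-info slot carries the transport ("http");
# then the KMS host, port, and context path follow.
parts = (uri.scheme, uri.username, uri.hostname, uri.port, uri.path)
```

Reading the URI this way makes it easier to spot mistakes such as a wrong port or a missing /kms context path when encryption zones fail to initialize.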
4. Network hardening. Use the system firewall (firewalld) to restrict access to the HDFS default ports (8020 for NameNode RPC, 50070 for the web UI, etc.), opening only what is needed:
sudo firewall-cmd --add-port=8020/tcp --permanent
sudo firewall-cmd --reload
During maintenance windows, the NameNode can be placed in safe mode to block namespace changes:
hdfs dfsadmin -safemode enter
5. Audit logging. Enable the audit logger in hdfs-site.xml:
<property>
<name>dfs.audit.logger</name>
<value>INFO,audit</value>
</property>
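Audit events are written to the NameNode's audit log (the file name and location depend on your log4j setup). The snippet below fabricates two simplified sample lines, then filters them the way you might when reviewing audit activity; the line format is an illustrative stand-in, not the exact production format:

```python
# Illustration only: simplified stand-ins for NameNode audit log lines
# (real entries carry fields such as allowed, ugi, ip, cmd, src, dst, perm).
sample_log = [
    "2024-01-01 10:00:00 INFO FSNamesystem.audit: allowed=true ugi=user1 cmd=delete src=/tmp/a",
    "2024-01-01 10:00:01 INFO FSNamesystem.audit: allowed=true ugi=user1 cmd=open src=/tmp/b",
]

# Pull out only the delete operations, e.g. when investigating data loss.
deletes = [line for line in sample_log if "cmd=delete" in line]
```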
Finally, verify the setup: obtain a Kerberos ticket with kinit, then access HDFS as that user:
kinit username@REALM.COM
hdfs dfs -ls /
hdfs dfs -getfacl /test-dir
The steps above are drawn from reference material; adjust the parameters to match your actual environment.