When deploying Kafka on CentOS, security hardening should cover five core dimensions: authentication, encryption, authorization, network isolation, and auditing. The steps below walk through each, starting with OS-level account hardening as a prerequisite.

1. OS account hardening

Inspect /etc/passwd for accounts with UID 0 other than root, and lock any unneeded superuser account with passwd -l <username> or change its shell to /sbin/nologin (e.g. usermod -s /sbin/nologin testuser). Enforce a password policy, for example authconfig --passminlen=10 --passminclass=2 --update (CentOS 7; newer releases replace authconfig with authselect and pwquality). Protect the account databases from unauthorized deletion or tampering with chattr +i /etc/passwd /etc/shadow /etc/group /etc/gshadow.
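As a quick audit, here is a minimal sketch that enumerates UID-0 accounts and locks an unwanted one (testuser is a hypothetical account name):

```
# List accounts with UID 0 other than root
awk -F: '$3 == 0 && $1 != "root" {print $1}' /etc/passwd
# Lock the account's password and deny interactive login
passwd -l testuser
usermod -s /sbin/nologin testuser
```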
2. Authentication (SASL)

Pick an SASL mechanism: SCRAM-SHA-256 (more secure than PLAIN) or GSSAPI (Kerberos, suited to enterprise environments). Create a user and its password hash with the kafka-configs.sh tool:

```
kafka-configs.sh --zookeeper localhost:2181 --entity-type users --entity-name kafka-user \
  --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=test123]'
```
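To confirm the credential was stored, a describe call can be run against the same user (it prints the salt and iteration count, never the password):

```
kafka-configs.sh --zookeeper localhost:2181 --describe \
  --entity-type users --entity-name kafka-user
```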
Create kafka_server_jaas.conf under /etc/kafka/ (SASL/SCRAM example):

```
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="kafka-user"
    password="test123";
};
```
Edit server.properties to enable SASL:

```
listeners=SASL_PLAINTEXT://0.0.0.0:9092   # use SASL_SSL in production
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
```

Note that the broker's sasl.jaas.config property takes an inline JAAS entry, not a file path; to use the JAAS file above, hand it to the broker JVM via the java.security.auth.login.config system property, as in the startup sketch below.
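A minimal startup sketch, assuming Kafka is installed under /opt/kafka (adjust the paths to your layout):

```
# Hand the JAAS file to the broker JVM, then start the broker
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
```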
On the client side, set security.protocol=SASL_PLAINTEXT and sasl.mechanism=SCRAM-SHA-256, as in the sketch below.
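A minimal client config sketch (the file name client.properties is arbitrary):

```
# client.properties -- SASL/SCRAM client settings
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="kafka-user" password="test123";
```

Command-line tools accept it via their config flags, e.g. kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config client.properties.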
3. Encryption (SSL/TLS)

Generate a keystore and truststore with keytool:

```
# Generate the keystore (holds the broker's private key)
keytool -genkey -alias kafka-server -keystore kafka.server.keystore.jks \
  -storepass kafka123 -keypass kafka123 -validity 365 -keyalg RSA
# Export the broker certificate
keytool -export -alias kafka-server -file kafka.server.crt \
  -keystore kafka.server.keystore.jks -storepass kafka123
# Create a truststore and import the broker certificate (clients must trust it)
keytool -import -alias kafka-server -file kafka.server.crt \
  -keystore kafka.server.truststore.jks -storepass kafka123 -noprompt
```
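To verify the result, the keystore contents can be inspected with keytool:

```
# Confirm the key pair and the certificate validity period
keytool -list -v -keystore kafka.server.keystore.jks -storepass kafka123
```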
Edit server.properties to enable SSL and point at the stores:

```
listeners=SSL://0.0.0.0:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/etc/kafka/kafka.server.keystore.jks
ssl.keystore.password=kafka123
ssl.key.password=kafka123
ssl.truststore.location=/etc/kafka/kafka.server.truststore.jks
ssl.truststore.password=kafka123
ssl.enabled.protocols=TLSv1.2,TLSv1.3
ssl.cipher.suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384
```

The two cipher suites listed are TLSv1.3 suites, so TLSv1.3 must appear in ssl.enabled.protocols (and requires a JDK that supports it); add TLSv1.2 suites to the list if older clients must connect.
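After restarting the broker, the handshake can be checked from a client host with openssl (192.168.1.100 is the example broker address used below):

```
# Print the negotiated protocol and cipher
openssl s_client -connect 192.168.1.100:9093 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
```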
On the client side, set security.protocol=SSL, ssl.truststore.location (pointing at the truststore), and ssl.keystore.location (only if the broker requires client authentication), as in the sketch below.
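A minimal SSL client config sketch (the file name client-ssl.properties is arbitrary):

```
# client-ssl.properties -- SSL client settings
security.protocol=SSL
ssl.truststore.location=/etc/kafka/kafka.server.truststore.jks
ssl.truststore.password=kafka123
```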
4. Authorization (ACLs)

In server.properties, enable the authorizer and deny access by default:

```
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:admin   # superusers bypass ACL checks
```
Manage users and grants with kafka-configs.sh and kafka-acls.sh:

```
# Create a user
kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type users \
  --entity-name alice --add-config 'SCRAM-SHA-256=[password=alice123]'
# Grant alice read and write access to topic1
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add \
  --allow-principal User:alice --operation Read --operation Write --topic topic1
# Let alice consume topic2 through the consumer group group1
# (a consumer group is a resource selected with --group, not a principal)
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add \
  --allow-principal User:alice --operation Read --topic topic2 --group group1
```
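A quick smoke test of the grants, assuming a hypothetical client-alice.properties holding alice's SASL/SCRAM credentials (same shape as the client config sketch above):

```
# Should succeed for topic1; a principal without Write would be denied
kafka-console-producer.sh --broker-list localhost:9092 \
  --topic topic1 --producer.config client-alice.properties
```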
List existing rules with kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list.

5. Network isolation

Use firewall-cmd to expose the Kafka ports (9092 by default, 9093 for SSL) only to trusted sources. Opening a port zone-wide with --add-port would accept traffic from any address, so use a rich rule scoped to the trusted subnet instead:

```
firewall-cmd --permanent --zone=public \
  --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="9093" protocol="tcp" accept'
firewall-cmd --reload
```
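The active rules can be verified afterwards:

```
firewall-cmd --zone=public --list-rich-rules
firewall-cmd --zone=public --list-ports
```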
In server.properties, bind listeners to a specific IP rather than 0.0.0.0 to restrict the interfaces Kafka listens on:

```
listeners=SSL://192.168.1.100:9093
advertised.listeners=SSL://your.public.ip:9093   # the address clients connect to
```
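A quick check that the broker is bound only to the intended interface:

```
ss -tlnp | grep 9093
```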
6. Auditing

Configure the authorizer logger in Kafka's log4j.properties (not server.properties) so that client operations are recorded:

```
log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
log4j.appender.authorizerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.authorizerAppender.File=/var/log/kafka/audit.log
log4j.appender.authorizerAppender.MaxFileSize=100MB
log4j.appender.authorizerAppender.MaxBackupIndex=5
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c %x - %m%n
```

Analyze audit.log with grep, awk, and similar tools to detect anomalous access, such as repeated authorization failures; see the sketch below.
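A minimal analysis sketch; the exact log wording varies by Kafka version, so the grep pattern may need adjusting to what your broker actually emits:

```
# Count denied operations per principal in the audit log
grep -i 'denied' /var/log/kafka/audit.log \
  | grep -o 'User:[^ ,]*' | sort | uniq -c | sort -rn | head
```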
7. Additional hardening

Temporarily disable SELinux (setenforce 0) or, preferably, adjust its policy so it does not conflict with Kafka, and disable anonymous access to ZooKeeper if it is part of the deployment. Regularly back up the data directories (log.dirs) and the configuration files to guard against data loss.

Together, these steps build a layered security posture for Kafka on CentOS, covering authentication, encryption, authorization, network isolation, and auditing, and meet the security requirements of a production environment.