Before configuring Kafka security on CentOS, complete the following preparations:
Install Java 11 with yum install -y java-11-openjdk-devel and verify it with java -version. Download Kafka and extract it (for example to /usr/local/kafka). Start ZooKeeper with bin/zookeeper-server-start.sh config/zookeeper.properties.
SCRAM (Salted Challenge Response Authentication Mechanism) strengthens password security through per-user salts and iterated hashing, which makes it a good fit for most production scenarios.
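Before adding SCRAM users, it helps to confirm the prerequisites above are in place; a minimal sketch, assuming ZooKeeper is on its default port 2181:
java -version                # should report OpenJDK 11
ss -tlnp | grep 2181         # ZooKeeper should be listening on 2181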
Use the kafka-configs.sh tool to add users and their passwords (for example producer and consumer):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=producer-secret]' --entity-type users --entity-name producer
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=consumer-secret]' --entity-type users --entity-name consumer
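To confirm the credentials were written to ZooKeeper, the same tool can describe a user (the output format varies slightly across Kafka versions):
bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users --entity-name producer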
Create kafka_server_jaas.conf in the config directory to define the SCRAM login module and user credentials. SCRAM passwords themselves are stored in ZooKeeper by kafka-configs.sh; because the brokers also authenticate to each other over SCRAM-SHA-256 (see server.properties below), the module should also carry a username and password for the broker's own identity, here the admin user (create its SCRAM credential with kafka-configs.sh in the same way as producer and consumer before restarting the broker):
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret"
    user_producer="producer-secret"
    user_consumer="consumer-secret";
};
Edit server.properties to enable SASL/SCRAM authentication:
security.inter.broker.protocol=SASL_PLAINTEXT          # protocol for inter-broker communication
sasl.enabled.mechanisms=SCRAM-SHA-256                  # enabled authentication mechanisms
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256     # mechanism used between brokers
# kafka_server_jaas.conf is supplied to the broker JVM via -Djava.security.auth.login.config (see the startup command below)
listeners=SASL_PLAINTEXT://0.0.0.0:9092                # listen on all interfaces
advertised.listeners=SASL_PLAINTEXT://your-server-ip:9092   # address advertised to clients
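A sketch of (re)starting the broker so that it loads the JAAS file, assuming Kafka lives under /usr/local/kafka and is started from its own scripts (if you run it through a systemd unit instead, pass the same flag via the unit's Environment= setting):
cd /usr/local/kafka
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
bin/kafka-server-start.sh -daemon config/server.properties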
# Producer configuration (producer.properties)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="producer" password="producer-secret";
# Consumer configuration (consumer.properties)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="consumer" password="consumer-secret";
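After restarting the broker, an end-to-end check with the console tools is a quick way to confirm SCRAM works; a sketch, assuming test-topic exists (or auto-creation is enabled) and the two properties files above live in config/:
bin/kafka-console-producer.sh --broker-list your-server-ip:9092 --topic test-topic --producer.config config/producer.properties
bin/kafka-console-consumer.sh --bootstrap-server your-server-ip:9092 --topic test-topic --group consumer-group --consumer.config config/consumer.properties --from-beginning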
Restart Kafka with systemctl restart kafka so the configuration takes effect. If you only need a quick test, the PLAIN mechanism can be used instead (note the risk: passwords are transmitted in cleartext):
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
# As with SCRAM, supply kafka_server_jaas.conf to the broker JVM via -Djava.security.auth.login.config
# Client configuration (producer.properties / consumer.properties)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
SSL/TLS can be combined with SASL (as SASL_SSL) to get both authentication and encryption. The broker-side SSL settings in server.properties look like this (the example keeps a PLAINTEXT listener alongside SSL; see the hardening note at the end of this section for a SASL_SSL-only layout):
listeners=SSL://:9093,PLAINTEXT://:9092
advertised.listeners=SSL://your-server-ip:9093,PLAINTEXT://your-server-ip:9092
security.inter.broker.protocol=SSL
ssl.keystore.location=/usr/local/kafka/config/kafka.server.keystore.jks
ssl.keystore.password=kafka123
ssl.key.password=kafka123
ssl.truststore.location=/usr/local/kafka/config/kafka.truststore.jks
ssl.truststore.password=kafka123
ssl.enabled.protocols=TLSv1.2,TLSv1.3
Client-side SSL settings (for example in producer.properties and consumer.properties):
security.protocol=SSL
ssl.keystore.location=/usr/local/kafka/config/client.keystore.jks
ssl.keystore.password=kafka123
ssl.truststore.location=/usr/local/kafka/config/kafka.truststore.jks
ssl.truststore.password=kafka123
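The keystores and truststore referenced above must exist before the broker will start. A minimal sketch of creating them with keytool and openssl, assuming a self-signed CA, the kafka123 password from the config, and your-server-ip as the broker certificate's CN (client.keystore.jks for the client side is created the same way with its own alias):
# 1. Create a CA key and certificate
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -subj "/CN=kafka-ca" -passout pass:kafka123
# 2. Create the broker keystore with a new key pair
keytool -keystore kafka.server.keystore.jks -alias broker -genkeypair -keyalg RSA -validity 365 -storepass kafka123 -keypass kafka123 -dname "CN=your-server-ip"
# 3. Sign the broker certificate with the CA, then import the CA cert and the signed cert
keytool -keystore kafka.server.keystore.jks -alias broker -certreq -file broker.csr -storepass kafka123
openssl x509 -req -CA ca-cert -CAkey ca-key -in broker.csr -out broker.signed -days 365 -CAcreateserial -passin pass:kafka123
keytool -keystore kafka.server.keystore.jks -alias CARoot -importcert -file ca-cert -storepass kafka123 -noprompt
keytool -keystore kafka.server.keystore.jks -alias broker -importcert -file broker.signed -storepass kafka123 -noprompt
# 4. Create the truststore containing the CA certificate (shared by brokers and clients)
keytool -keystore kafka.truststore.jks -alias CARoot -importcert -file ca-cert -storepass kafka123 -noprompt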
ACLs (Access Control Lists) provide fine-grained authorization, restricting which operations each user may perform on topics, consumer groups, and other resources.
In server.properties, set the authorizer class and disable access when no matching ACL is found:
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
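One thing the two lines above leave implicit: once allow.everyone.if.no.acl.found=false is set, the principal the brokers themselves authenticate as also needs authorization for inter-broker operations. A common way to handle this, assuming the admin identity used elsewhere in this guide, is to declare it a super user in server.properties:
super.users=User:admin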
Create an administrator user (for example admin) to manage ACLs:
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret]' --entity-type users --entity-name admin
Use the kafka-acls.sh tool to grant permissions (example: allow producer to write to test-topic and consumer to read from test-topic):
# Produce permission
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:producer --operation Write --topic test-topic
# Consume permission (grants Read on both the topic and the consumer group)
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:consumer --operation Read --topic test-topic --group consumer-group
To review the ACL rules on test-topic:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic test-topic
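Rules can be revoked with --remove; for example, a sketch of withdrawing the producer's write permission (the tool asks for confirmation; add --force to skip the prompt):
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:producer --operation Write --topic test-topic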
Use the CentOS firewall (firewalld) to restrict access to the Kafka ports so that only trusted hosts can connect. To open the ports to all sources:
sudo firewall-cmd --permanent --add-port=9092/tcp
sudo firewall-cmd --permanent --add-port=9093/tcp
sudo firewall-cmd --reload
To allow only a trusted subnet (for example 192.168.1.0/24) to reach the Kafka ports, use rich rules instead; note that the --add-port rules above admit any source, so skip them (or remove them with --remove-port) if the source restriction should actually take effect:
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="9092" protocol="tcp" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="9093" protocol="tcp" accept'
sudo firewall-cmd --reload
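To verify the resulting firewall state (a quick check of the active zone):
sudo firewall-cmd --list-ports
sudo firewall-cmd --list-rich-rules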
In production, disable the PLAINTEXT protocol entirely and keep only SASL_SSL or SSL listeners, so no traffic crosses the network unencrypted; a sketch of such a layout follows.
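For illustration, a minimal sketch of what a SASL_SSL-only server.properties could look like, combining the SCRAM and SSL settings shown earlier (the port, paths, and passwords are the assumed values from the SSL section):
listeners=SASL_SSL://0.0.0.0:9093
advertised.listeners=SASL_SSL://your-server-ip:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
ssl.keystore.location=/usr/local/kafka/config/kafka.server.keystore.jks
ssl.keystore.password=kafka123
ssl.key.password=kafka123
ssl.truststore.location=/usr/local/kafka/config/kafka.truststore.jks
ssl.truststore.password=kafka123
Clients then set security.protocol=SASL_SSL together with their SCRAM sasl.jaas.config and the truststore settings shown above.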