# Ubuntu Kafka Security Configuration Guide
Securing Kafka revolves around five core dimensions: authentication, encryption, authorization, network isolation, and log monitoring. The steps and best practices below cover each in turn.
## Environment Preparation

Install a Java runtime (OpenJDK 11 or later):

```shell
sudo apt update && sudo apt install -y openjdk-11-jdk
```

Download Kafka and extract it to the `/opt` directory:

```shell
wget https://downloads.apache.org/kafka/3.8.0/kafka_2.13-3.8.0.tgz
tar -xzf kafka_2.13-3.8.0.tgz -C /opt/
cd /opt/kafka_2.13-3.8.0
```
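Before extracting, it is good practice to verify the release against the `.sha512` checksum file published alongside the tarball on downloads.apache.org. A self-contained sketch of the check itself, using a placeholder file instead of the real tarball:

```shell
# Placeholder standing in for the downloaded tarball (hypothetical file)
echo "release bytes" > /tmp/kafka.tgz.fake
# In reality you would fetch kafka_2.13-3.8.0.tgz.sha512 from the mirror;
# here we generate the checksum file ourselves for illustration
sha512sum /tmp/kafka.tgz.fake > /tmp/kafka.tgz.fake.sha512
# -c re-hashes the file and compares against the recorded digest;
# prints "<name>: OK" on success, fails loudly on a mismatch
sha512sum -c /tmp/kafka.tgz.fake.sha512
```

A tampered or truncated download fails this check before it ever reaches `/opt`.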
## Authentication: SASL/PLAIN

Create `kafka_server_jaas.conf` under the `config` directory, defining the broker's own credentials and the client users it accepts. Each `user_<name>="<password>"` entry declares one client user, so multiple users can be added:

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};
```
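The file can be created non-interactively with a heredoc. The sketch below writes to `/tmp` so it runs anywhere; on a real host the target path would be `/opt/kafka_2.13-3.8.0/config/kafka_server_jaas.conf` as laid out above:

```shell
# Quoting the heredoc delimiter ('EOF') stops the shell from expanding
# anything inside, so the JAAS content is written verbatim.
cat > /tmp/kafka_server_jaas.conf <<'EOF'
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};
EOF
# Sanity check: count the user_* entries (expect 2: admin and alice)
grep -c 'user_' /tmp/kafka_server_jaas.conf
```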
Edit `server.properties` to enable the SASL listener and mechanism:

```properties
listeners=SASL_PLAINTEXT://:9092   # use SASL_SSL in production (see the encryption section below)
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
```

Note that `sasl.jaas.config` expects an inline JAAS string, not a file path; the JAAS file created above is instead passed to the broker JVM via the `java.security.auth.login.config` system property in the next step.
Edit `kafka-server-start.sh` (or export the variable before starting the broker) so the JVM picks up the JAAS file:

```shell
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
```
## Authentication: SASL/SCRAM

SCRAM (Salted Challenge-Response Authentication Mechanism) stores salted password hashes rather than plaintext, making it more secure than PLAIN. Update the JAAS file accordingly:

```
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};
```
Create users and their hashed credentials with the `kafka-configs.sh` tool (an iteration count of at least 8192 is recommended):

```shell
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --add-config "SCRAM-SHA-256=[iterations=8192,password=admin-secret]" \
  --entity-type users --entity-name admin
```

(On older ZooKeeper-based clusters, the same command can be run with `--zookeeper localhost:2181` to bootstrap credentials before the brokers start.)
Then update `server.properties`: change both `sasl.mechanism.inter.broker.protocol` and `sasl.enabled.mechanisms` from `PLAIN` to `SCRAM-SHA-256`.

## Client Configuration (`producer.properties` / `consumer.properties`)

Specify the SASL mechanism and inline JAAS credentials on the client side:

```properties
security.protocol=SASL_PLAINTEXT   # use SASL_SSL in production
sasl.mechanism=PLAIN               # or SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" \
    password="admin-secret";
```
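The client config can likewise be written in one step. Written to `/tmp` here so the sketch is runnable anywhere; on a real host the path would be `/opt/kafka_2.13-3.8.0/config/producer.properties`:

```shell
# Quoted heredoc keeps the JAAS line (including the trailing semicolon) verbatim;
# Java properties files allow backslash line continuations.
cat > /tmp/producer.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" \
    password="admin-secret";
EOF
grep -q 'sasl.mechanism=PLAIN' /tmp/producer.properties && echo "client config written"
```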
## Encryption: SSL/TLS

Generate a self-signed certificate chain with OpenSSL (in production, use CA-issued certificates). The `-subj` flags keep the commands non-interactive:

```shell
mkdir -p config/certificates
cd config/certificates
# Generate the CA private key and certificate
openssl req -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca-cert.pem -x509 -days 3650 -subj "/CN=kafka-ca"
# Generate the broker private key and certificate signing request (CSR)
openssl req -newkey rsa:2048 -nodes -keyout broker-key.pem -out broker-req.pem -subj "/CN=localhost"
# Sign the broker CSR with the CA
openssl x509 -req -in broker-req.pem -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out broker-cert.pem -days 3650
# Generate a client certificate (optional, for mutual TLS)
openssl req -newkey rsa:2048 -nodes -keyout client-key.pem -out client-req.pem -subj "/CN=client"
openssl x509 -req -in client-req.pem -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out client-cert.pem -days 3650
```
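A quick way to sanity-check that the signed broker certificate actually chains back to the CA is `openssl verify`. The sketch below regenerates throwaway keys in `/tmp` so it is self-contained:

```shell
set -e
mkdir -p /tmp/kafka-cert-demo && cd /tmp/kafka-cert-demo
# Throwaway CA and broker certificate, mirroring the steps above
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca-cert.pem -days 1 -subj "/CN=kafka-ca"
openssl req -newkey rsa:2048 -nodes -keyout broker-key.pem -out broker-req.pem -subj "/CN=localhost"
openssl x509 -req -in broker-req.pem -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out broker-cert.pem -days 1
# Prints "broker-cert.pem: OK" if the chain is valid
openssl verify -CAfile ca-cert.pem broker-cert.pem
```

Running the same check against the real `config/certificates` directory catches a mis-signed CSR before the broker ever refuses TLS handshakes.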
Kafka manages certificates through JKS keystores:

```shell
# Import the CA certificate into the truststore
keytool -import -alias ca -file ca-cert.pem -keystore truststore.jks -storepass changeit -noprompt
# Bundle the broker certificate and private key, then import them into the keystore
openssl pkcs12 -export -in broker-cert.pem -inkey broker-key.pem -out broker.p12 -name kafka -CAfile ca-cert.pem -caname root -password pass:changeit
keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore keystore.jks -srckeystore broker.p12 -srcstoretype PKCS12 -srcstorepass changeit -alias kafka
```
Edit `server.properties` to enable the SSL listener and point it at the keystores (`SASL_SSL` combines authentication and encryption, and is the recommended production setting):

```properties
listeners=SASL_SSL://:9093
security.inter.broker.protocol=SASL_SSL
ssl.keystore.location=config/certificates/keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=config/certificates/truststore.jks
ssl.truststore.password=changeit
ssl.client.auth=required   # require client certificates (mutual TLS)
```
On the client side, configure the matching truststore (and a keystore, if mutual TLS is enabled):

```properties
security.protocol=SASL_SSL
ssl.truststore.location=config/certificates/truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=config/certificates/client-keystore.jks   # client certificate, needed for mutual TLS
ssl.keystore.password=changeit
ssl.key.password=changeit
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" \
    password="admin-secret";
```
## Authorization: ACLs

Use **ACLs (access control lists)** to restrict which operations (read, write, create, and so on) each user may perform on topics and partitions. ACL rules only take effect once an authorizer is enabled in `server.properties`.
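A minimal authorizer setup for a ZooKeeper-based cluster might look like the fragment below (KRaft clusters use `org.apache.kafka.metadata.authorizer.StandardAuthorizer` instead); naming `admin` as super-user matches the credentials used in the examples above and is otherwise an assumption:

```properties
# Enable ACL enforcement (ZooKeeper mode)
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
# Identity that bypasses ACL checks (used for inter-broker and admin traffic)
super.users=User:admin
# Deny access to resources that have no matching ACL
allow.everyone.if.no.acl.found=false
```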
Create a test topic (when the broker exposes only a secured listener, pass client credentials to the admin tools via `--command-config`):

```shell
bin/kafka-topics.sh --create --bootstrap-server localhost:9093 \
  --replication-factor 1 --partitions 1 --topic test_topic
```
Grant user `alice` read and write access to `test_topic` (producer/consumer roles):

```shell
bin/kafka-acls.sh --bootstrap-server localhost:9093 \
  --add --allow-principal User:alice \
  --operation Read --operation Write \
  --topic test_topic
```

List a topic's ACLs to confirm:

```shell
bin/kafka-acls.sh --bootstrap-server localhost:9093 --list --topic test_topic
```
## Network Isolation and Hardening

Restrict inbound traffic with the firewall:

```shell
sudo ufw allow 9093/tcp   # allow only the SSL port
sudo ufw enable
```
Run Kafka under a dedicated system user (e.g. `kafka`) rather than `root`:

```shell
sudo useradd -r -m -d /opt/kafka_2.13-3.8.0 -s /sbin/nologin kafka
sudo chown -R kafka:kafka /opt/kafka_2.13-3.8.0
sudo chmod -R 750 /opt/kafka_2.13-3.8.0/config /opt/kafka_2.13-3.8.0/data
```
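One way to tie the pieces together (the dedicated user, the JAAS file, and a supervised broker process) is a systemd unit. This is a sketch under the `/opt` layout assumed above, written to `/tmp` here for illustration; on a real host it would be installed as `/etc/systemd/system/kafka.service` and enabled with `systemctl`:

```shell
cat > /tmp/kafka.service <<'EOF'
[Unit]
Description=Apache Kafka broker
After=network.target

[Service]
User=kafka
Group=kafka
# Wire in the JAAS file exactly as the KAFKA_OPTS export did earlier
Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.13-3.8.0/config/kafka_server_jaas.conf"
ExecStart=/opt/kafka_2.13-3.8.0/bin/kafka-server-start.sh /opt/kafka_2.13-3.8.0/config/server.properties
ExecStop=/opt/kafka_2.13-3.8.0/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
grep -q 'User=kafka' /tmp/kafka.service && echo "unit written"
```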
## Optional: Kerberos (SASL/GSSAPI)

For stronger authentication (e.g. Active Directory integration), configure Kerberos:

- Create a service principal for each broker (e.g. `kafka/hostname@REALM`).
- Switch `kafka_server_jaas.conf` to use `Krb5LoginModule`.
- Clients obtain a Kerberos ticket with `kinit` before connecting to Kafka.

## Verification

Produce and consume messages using the secured client configs:

```shell
bin/kafka-console-producer.sh --bootstrap-server localhost:9093 \
  --topic test_topic \
  --producer.config config/producer.properties

bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 \
  --topic test_topic --from-beginning \
  --consumer.config config/consumer.properties
```
Additional checks:

- Capture traffic with `tcpdump` and confirm the data is encrypted in transit (no plaintext visible).
- Access `test_topic` as an unauthorized user; the request should be denied.

With the steps above you have a Kafka deployment protected by authentication, encryption, authorization, and network isolation, guarding against unauthorized access, data leakage, and malicious operations. In production, rotate certificates regularly, audit the logs, and adjust permission policies as business needs evolve.