Integrating Filebeat with Kafka on CentOS is a common way to centralize log collection and forwarding: Filebeat tails log files and ships each line as a message to a Kafka topic. The basic steps are below. First, download and extract Filebeat:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.0-linux-x86_64.tar.gz
tar -xzf filebeat-6.6.0-linux-x86_64.tar.gz
cd filebeat-6.6.0
Edit filebeat.yml (example configuration):
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
output.kafka:
  hosts: ["kafka:9092"]
  topic: 'nginx_access_logs'
  compression: gzip
  max_message_bytes: 1000000
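Beyond the minimal output above, the Kafka output accepts further options for acknowledgements, partitioning, and parallelism. A sketch of a tuned block (option names from Filebeat's kafka output; the values shown are illustrative, not recommendations):

```yaml
output.kafka:
  hosts: ["kafka:9092"]
  topic: 'nginx_access_logs'
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1            # 0 = fire and forget, 1 = leader ack, -1 = all in-sync replicas
  partition.round_robin:
    reachable_only: false     # spread events across all partitions, not just reachable ones
  worker: 1                   # concurrent publishers per Kafka host
```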
Start Filebeat in the foreground so errors are printed to the console:
./filebeat -e -c filebeat.yml
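If nginx is not yet producing traffic, you can generate a few sample entries in nginx "combined" log format to exercise the pipeline. The path /tmp/access.log below is a stand-in for testing; point the `paths` setting in filebeat.yml at it, or write to the real access log as root:

```shell
#!/bin/sh
# Append three sample nginx combined-format lines to a test log file.
LOG=/tmp/access.log     # stand-in; real deployments use /var/log/nginx/access.log
: > "$LOG"              # truncate so the count below is predictable
i=1
while [ "$i" -le 3 ]; do
  printf '127.0.0.1 - - [%s] "GET /test/%s HTTP/1.1" 200 612 "-" "curl/7.61.1"\n' \
    "$(date '+%d/%b/%Y:%H:%M:%S %z')" "$i" >> "$LOG"
  i=$((i + 1))
done
wc -l < "$LOG"          # prints the number of generated lines
```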
Next, install Kafka, which ships with ZooKeeper. Download and extract it (note that older releases are moved to archive.apache.org):
wget https://downloads.apache.org/kafka/2.8.1/kafka_2.12-2.8.1.tgz
tar -xzf kafka_2.12-2.8.1.tgz
cd kafka_2.12-2.8.1
Check config/zookeeper.properties (the defaults are usually fine):
dataDir=/tmp/zookeeper
clientPort=2181
Check config/server.properties:
log.dirs=/tmp/kafka
zookeeper.connect=localhost:2181
Start ZooKeeper first, then the Kafka broker (each in its own terminal, or add -daemon to run in the background):
./bin/zookeeper-server-start.sh config/zookeeper.properties
./bin/kafka-server-start.sh config/server.properties
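Running the scripts in a terminal is fine for a quick test, but on CentOS you may want the broker managed by systemd so it restarts on failure and starts at boot. A sketch of a unit file, assuming Kafka was unpacked to /opt/kafka_2.12-2.8.1 (adjust the paths to your install location; the zookeeper.service dependency assumes you created a similar unit for ZooKeeper):

```ini
# /etc/systemd/system/kafka.service -- illustrative unit, not an official one
[Unit]
Description=Apache Kafka broker
After=network.target zookeeper.service

[Service]
Type=simple
ExecStart=/opt/kafka_2.12-2.8.1/bin/kafka-server-start.sh /opt/kafka_2.12-2.8.1/config/server.properties
ExecStop=/opt/kafka_2.12-2.8.1/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, run systemctl daemon-reload and systemctl enable --now kafka.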
Create the topic that Filebeat will write to:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic nginx_access_logs
Verify that messages arrive by consuming the topic with kafkacat:
kafkacat -C -b localhost:9092 -t nginx_access_logs
If events do not arrive, SELinux or the firewall may be blocking traffic. For a quick test you can disable both (in production, open the required ports instead of turning the firewall off):
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
If the Kafka cluster requires SASL authentication, add the credentials to filebeat.yml (on older Filebeat versions, setting username and password enables SASL/PLAIN automatically):
output.kafka.username: your_username
output.kafka.password: your_password
output.kafka.sasl.mechanism: PLAIN
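SASL/PLAIN sends the password in cleartext, so brokers are usually configured to require it over TLS. A sketch of the combined settings, using Filebeat's ssl options; the port 9093 and the CA path are assumptions for illustration, substitute your broker's TLS listener and certificate:

```yaml
output.kafka:
  hosts: ["kafka:9093"]          # hypothetical TLS listener port
  username: your_username
  password: your_password
  sasl.mechanism: PLAIN
  ssl.certificate_authorities: ["/etc/pki/tls/certs/kafka-ca.pem"]  # hypothetical CA path
```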