Practical approaches to Filebeat data compression on Debian
1. Compression on the output link (recommended)
Enabling compression in the output section reduces network bandwidth between Filebeat and the downstream service, at the cost of some extra CPU on the shipping host.
# Output to Elasticsearch (enable gzip compression of requests)
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  # gzip level 1 (fastest) to 9 (smallest); 0 disables compression
  compression_level: 5
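A quick sanity check after editing the output section is Filebeat's built-in connectivity test, run on the same host as the config above:
# Verify that Filebeat can reach the configured Elasticsearch output
sudo filebeat test output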
# Output to Logstash (enable compression)
output.logstash:
  hosts: ["127.0.0.1:5044"]
  # 0 disables compression; 1-9 sets the gzip level (the default is 3)
  compression_level: 3
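On the Logstash side no compression-specific setting should be needed, since the beats input handles compressed frames from the client transparently. A minimal pipeline sketch, assuming the default port 5044 used above (the file location is illustrative):
# /etc/logstash/conf.d/beats.conf (illustrative location)
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}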
# Output to Kafka (enable compression and set the compression level)
output.kafka:
  hosts: ["kafka-broker:9092"]
  topic: "filebeat-logs"
  # supported codecs include none, snappy, lz4 and gzip; compression_level applies to gzip
  compression: gzip
  compression_level: 5
  required_acks: 1
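To confirm that events are actually arriving on the topic, one option is the Kafka console consumer; the broker address and topic mirror the assumptions above, and the script location depends on your Kafka installation:
# Read a few messages from the topic Filebeat writes to
kafka-console-consumer.sh --bootstrap-server kafka-broker:9092 \
  --topic filebeat-logs --from-beginning --max-messages 5
Note that the consumer shows decompressed messages, so this verifies delivery rather than the codec; with the broker default of compression.type=producer, batches are stored using the producer's codec.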
# Restart Filebeat to apply the new output settings
sudo systemctl restart filebeat
2. Event-content compression processor (optional, with a caveat)
Some guides suggest a `compress` processor that gzips event content inside the pipeline, but the documented Filebeat processor set does not include one, and such a configuration fails validation on standard releases. The input below therefore keeps the processor commented out; rely on the output-link compression from section 1 to reduce transfer size.
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
    # Not supported by standard Filebeat releases; shown only for reference:
    # processors:
    #   - compress:
    #       codec: gzip
sudo systemctl restart filebeat
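Because event-level compression is not available as a processor, a practical alternative for shrinking per-event payloads is the documented drop_fields processor; the field names below are only examples, adjust them to what your events actually carry:
processors:
  # Drop metadata fields that are not needed downstream to reduce event size
  - drop_fields:
      fields: ["agent.ephemeral_id", "ecs.version"]
      ignore_missing: true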
3. Verification and notes
After editing /etc/filebeat/filebeat.yml, validate the configuration and check that the service is healthy:
# Validate the configuration and log to stderr
sudo filebeat test config -e
# Check the service state
sudo systemctl status filebeat
# Follow Filebeat's journal for errors
journalctl -u filebeat -f
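If you want to see that requests to Elasticsearch really are compressed, a rough check is to watch the HTTP headers on the loopback interface; this assumes the plain-HTTP localhost:9200 output from section 1 (with TLS the headers are not visible this way) and that tcpdump is installed (apt install tcpdump):
# Compressed bulk requests carry a "Content-Encoding: gzip" header
sudo tcpdump -i lo -A -s 0 'tcp port 9200' | grep -i 'content-encoding'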