When Filebeat runs slowly on Ubuntu, the cause is usually one of a handful of configuration or resource issues. The sections below cover common optimization measures and troubleshooting steps.
Merge multi-line log entries (for example Java stack traces) into single events at the source, so downstream components are not flooded with fragment lines. These multiline.* options go under a log input; the filestream equivalent is sketched right after:

```yaml
multiline.pattern: '\['      # a new event starts with a line beginning with "["
multiline.negate: true       # lines that do NOT match the pattern...
multiline.match: after       # ...are appended to the preceding matching line
multiline.max_lines: 10000   # upper bound on lines merged into one event
```
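If you use the filestream input shown later in this article, the same merging is configured through a parser instead. A minimal sketch, with the pattern and limit carried over from the snippet above (the id and path are placeholders):

```yaml
filebeat.inputs:
  - type: filestream
    id: app-logs                # hypothetical id; filestream inputs need a unique id
    paths:
      - /var/log/app/*.log      # hypothetical path, adjust to your logs
    parsers:
      - multiline:
          type: pattern
          pattern: '\['
          negate: true
          match: after
          max_lines: 10000
```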
Decode JSON-formatted log lines in Filebeat itself, so Elasticsearch ingest pipelines have less work to do (the filestream form is sketched below):

```yaml
json.keys_under_root: true   # place decoded JSON keys at the top level of the event
json.overwrite_keys: true    # decoded values overwrite Filebeat's own fields on conflict
json.message_key: log        # JSON key that holds the raw log line
json.add_error_key: true     # add an error field when decoding fails
```
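With the filestream input, these json.* options move under an ndjson parser. A rough equivalent, assuming a recent Filebeat version where target: "" replaces keys_under_root:

```yaml
parsers:
  - ndjson:
      target: ""             # decode keys into the event root
      overwrite_keys: true
      message_key: log
      add_error_key: true
```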
Give the internal queue room to buffer bursts. Note that queue.type: persisted and queue.max_bytes are Logstash settings, not Filebeat ones; in Filebeat you tune the in-memory queue (a disk-backed alternative is sketched below):

```yaml
queue.mem:
  flush.min_events: 2048     # forward batches of at least 2048 events to the output
  flush.timeout: 1s          # ...or after 1s, whichever comes first
```
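If the intent behind the original persisted/max_bytes lines was durable buffering, recent Beats versions offer a disk-backed queue. A minimal sketch:

```yaml
queue.disk:
  max_size: 1GB              # roughly matches the 1024mb from the original snippet
```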
harvester_limit caps how many harvesters (one per open file) run at the same time, which keeps CPU and file-descriptor usage under control when a path glob matches many files. It is an input-level option; its placement is shown in the sketch below:

```yaml
harvester_limit: 512
```
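A sketch of where the option sits inside an input block (id and path are placeholders):

```yaml
filebeat.inputs:
  - type: filestream
    id: bulk-logs             # hypothetical id
    paths:
      - /var/log/myapp/*.log  # hypothetical path
    harvester_limit: 512
```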
bulk_max_size sets the maximum number of events sent to Elasticsearch in a single bulk request, which improves sending efficiency. Filebeat has no output.compression switch; request compression is controlled by compression_level on the Elasticsearch output:

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  bulk_max_size: 2048        # events per bulk request
  compression_level: 1       # gzip level 0-9; 0 disables compression
```
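Throughput can be raised further by adding connections. A hedged sketch assuming more than one reachable Elasticsearch node (host names are placeholders):

```yaml
output.elasticsearch:
  hosts: ["es-node1:9200", "es-node2:9200"]   # hypothetical nodes
  worker: 2                                   # concurrent connections per host
  bulk_max_size: 2048
```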
Prefer the filestream input (the replacement for the deprecated log input) and keep path globs as narrow as possible; options for skipping stale files are sketched below:

```yaml
filebeat.inputs:
  - type: filestream
    id: system-logs          # filestream inputs should each have a unique id
    paths:
      - /var/log/*.log
```
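Skipping files that no longer change also reduces harvester load. A sketch with commonly used filestream options (values are examples, not recommendations):

```yaml
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/*.log
    ignore_older: 48h                             # do not pick up files older than 48h
    prospector.scanner.exclude_files: ['\.gz$']   # skip rotated, compressed files
```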
If Filebeat ships to Logstash rather than directly to Elasticsearch, adjust the number of pipeline.workers and the pipeline.batch.size on the Logstash side so it does not become the bottleneck (see the sketch below). To keep an eye on Filebeat itself, enable internal monitoring; the setting is monitoring.enabled rather than setup.monitor.enabled:

```yaml
monitoring.enabled: true     # internal metrics, viewable in Kibana Stack Monitoring
```
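A minimal logstash.yml sketch for those two knobs (values are illustrative and depend on CPU count and event size):

```yaml
# logstash.yml
pipeline.workers: 4        # typically the number of CPU cores
pipeline.batch.size: 250   # events each worker collects before filtering/output
```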
For troubleshooting, start with the service status and Filebeat's own log:

```bash
sudo systemctl status filebeat
tail -f /var/log/filebeat/filebeat   # if the file does not exist, try: journalctl -u filebeat -f
```

Validate the configuration with Filebeat's built-in test command (there is no validate subcommand):

```bash
filebeat test config -c /etc/filebeat/filebeat.yml
```

Make sure the filebeat user can read the monitored files, and confirm the output endpoint is actually listening:

```bash
sudo chmod 644 /path/to/logfile
sudo netstat -tuln | grep <port>     # <port> = your output port, e.g. 9200 or 5044
```
Finally, a small Python script can confirm that Filebeat's local HTTP endpoint and Elasticsearch both respond (the endpoint on port 5066 must be enabled first, see the snippet after the script):

```python
import requests


def check_filebeat_status():
    """Ping Filebeat's local HTTP endpoint (default port 5066)."""
    try:
        response = requests.get('http://localhost:5066', timeout=5)
    except requests.exceptions.ConnectionError:
        print("Filebeat is not running (or its HTTP endpoint is disabled)")
        return
    if response.status_code == 200:
        print("Filebeat is running")
    else:
        print("Filebeat is not running")


def query_elasticsearch():
    """Fetch a few documents from Elasticsearch to confirm data is arriving."""
    es_url = 'http://localhost:9200'
    query = {
        "query": {"match_all": {}},
        "size": 10,
    }
    response = requests.post(f"{es_url}/_search", json=query, timeout=10)
    results = response.json()
    for hit in results['hits']['hits']:
        print(hit['_source'])


if __name__ == "__main__":
    check_filebeat_status()
    query_elasticsearch()
```
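The status check above talks to Filebeat's HTTP endpoint, which is disabled by default; it can be switched on in filebeat.yml:

```yaml
http.enabled: true
http.host: localhost
http.port: 5066
```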
With the configuration and optimizations above, Filebeat's performance on Ubuntu can improve noticeably. Choose the parameters that match your actual workload, and keep monitoring Filebeat's runtime state to make sure it processes log data efficiently and reliably.