To monitor Kafka on Debian, you can combine kafka_exporter with Prometheus and Grafana. The specific steps are as follows:
1. Download and unpack kafka_exporter:

```bash
cd /usr/local/appserver
wget https://github.com/danielqsj/kafka_exporter/releases/download/v1.2.0/kafka_exporter-1.2.0.linux-amd64.tar.gz
tar -xzf kafka_exporter-1.2.0.linux-amd64.tar.gz
mv kafka_exporter-1.2.0.linux-amd64 kafka_exporter
```
2. Run kafka_exporter under supervisor. Edit the `/etc/supervisord.d/kafka_exporter.ini` file and add the following (each `--kafka.server` flag names one broker in the cluster):

```ini
[program:kafka_exporter]
command=/usr/local/appserver/kafka_exporter/kafka_exporter --kafka.server=kafka-1:19091 --kafka.server=kafka-2:19092 --kafka.server=kafka-3:19093 --kafka.server=kafka-4:19094 --kafka.server=kafka-5:19095
autostart=true
autorestart=true
startsecs=5
priority=1
startretries=3
stopwaitsecs=1
stdout_logfile=/data/logs/kafka_exporter.log
```
3. Restart the supervisor service so the configuration takes effect:

```bash
systemctl restart supervisor
```
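Once the exporter is running, it serves Prometheus metrics on port 9308, and the `kafka_brokers` gauge should equal the number of brokers in the cluster. A minimal sketch of that sanity check, run against a sample payload in Prometheus text format (in practice you would fetch `http://localhost:9308/metrics` instead):

```python
import re

def broker_count(metrics_text: str) -> int:
    """Extract the kafka_brokers gauge from Prometheus text-format metrics."""
    m = re.search(r"^kafka_brokers (\d+)", metrics_text, re.MULTILINE)
    if m is None:
        raise ValueError("kafka_brokers metric not found")
    return int(m.group(1))

# Sample payload shaped like kafka_exporter's /metrics output:
sample = """# HELP kafka_brokers Number of Brokers in the Kafka Cluster.
# TYPE kafka_brokers gauge
kafka_brokers 5
"""
print(broker_count(sample))  # prints 5
```

A value lower than the expected broker count usually means the exporter cannot reach one or more of the brokers listed in the supervisor command line.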
4. Edit the `prometheus.yml` file and add a scrape job for Kafka under `scrape_configs`:

```yaml
# Kafka monitoring for the big-data platform
- job_name: 'kafka'
  scrape_interval: 5s
  file_sd_configs:
    - refresh_interval: 1m
      files:
        - "configs/SHN_Kafka_Service.yml"
```
5. Create the `configs/SHN_Kafka_Service.yml` file and add the Kafka instances as monitoring targets. File-based service discovery expects a list of target groups, each with its own labels:

```yaml
- targets:
    - "192.168.1.29:9308"
  labels:
    node: "in_service"
    type: "Kafka service monitoring"
    group: "critical middleware and platforms"
    role: "kafka-1"
    env: "production"
- targets:
    - "192.168.1.30:9308"
  labels:
    node: "in_service"
    type: "Kafka service monitoring"
    group: "critical middleware and platforms"
    role: "kafka-2"
    env: "production"
- targets:
    - "192.168.1.21:9308"
  labels:
    node: "in_service"
    type: "Kafka service monitoring"
    group: "critical middleware and platforms"
    role: "kafka-3"
    env: "production"
- targets:
    - "192.168.1.28:9308"
  labels:
    node: "in_service"
    type: "Kafka service monitoring"
    group: "critical middleware and platforms"
    role: "kafka-4"
    env: "production"
- targets:
    - "192.168.1.23:9308"
  labels:
    node: "in_service"
    type: "Kafka service monitoring"
    group: "critical middleware and platforms"
    role: "kafka-5"
    env: "production"
```
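Because the five target groups differ only in address and `role`, the discovery file can be generated instead of hand-edited when brokers come and go. A minimal sketch: Prometheus file_sd accepts JSON as well as YAML, so the standard library is enough (the addresses and label values mirror the section above and are illustrative):

```python
import json

# One kafka_exporter scrape endpoint (port 9308) per broker, with its role label.
brokers = [
    ("192.168.1.29:9308", "kafka-1"),
    ("192.168.1.30:9308", "kafka-2"),
    ("192.168.1.21:9308", "kafka-3"),
    ("192.168.1.28:9308", "kafka-4"),
    ("192.168.1.23:9308", "kafka-5"),
]

# Build one file_sd target group per broker, sharing the common labels.
groups = [
    {
        "targets": [addr],
        "labels": {
            "node": "in_service",
            "type": "Kafka service monitoring",
            "group": "critical middleware and platforms",
            "role": role,
            "env": "production",
        },
    }
    for addr, role in brokers
]

# Write this to e.g. configs/SHN_Kafka_Service.json and point file_sd at it.
print(json.dumps(groups, indent=2))
```

Prometheus re-reads the file on the `refresh_interval` configured in the job, so no restart is needed after regenerating it.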
6. Restart Prometheus so the new job is picked up:

```bash
systemctl restart prometheus
```
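After the restart you can confirm that Prometheus is actually scraping all five exporters by querying `up{job="kafka"}` through its HTTP API: each healthy target reports the value 1. A small sketch that builds the query URL (assuming Prometheus listens on 127.0.0.1:9090 as configured later for Grafana):

```python
from urllib.parse import urlencode

PROM = "http://127.0.0.1:9090"  # assumption: Prometheus listening locally

def instant_query_url(expr: str, base: str = PROM) -> str:
    """Build a Prometheus HTTP API instant-query URL for a PromQL expression."""
    return f"{base}/api/v1/query?{urlencode({'query': expr})}"

# up{job="kafka"} is 1 for every kafka_exporter target Prometheus can scrape,
# so five results with value "1" mean all five targets are healthy.
url = instant_query_url('up{job="kafka"}')
print(url)

# On the Prometheus host itself you could then fetch it, e.g.:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
#   for r in data["data"]["result"]:
#       print(r["metric"]["instance"], r["value"][1])
```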
7. Configure Grafana:
   - Open Grafana in a browser (http://localhost:3000).
   - Go to Manage -> Data Sources.
   - Click Add Data Source and select Prometheus.
   - Set the URL to http://127.0.0.1:9090, then click Save & Test.
   - Open Explore, search for kafka, and import the relevant dashboards.

Besides the approach above, other Kafka monitoring tools are also available.
Hopefully these steps and tools help you set up and use Kafka monitoring on Debian. If you run into any problems, feel free to ask.