
How to Use Filebeat with Elasticsearch on Ubuntu

小樊
2025-11-11 10:37:42
Category: Intelligent O&M (智能运维)

Installing Filebeat on Ubuntu
Before integrating Filebeat with Elasticsearch, you need to install Filebeat on your Ubuntu system. Use the Elastic APT repository for a streamlined installation:

  1. Update the package list: sudo apt update.
  2. Add the Elastic GPG key to verify package authenticity: wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -. (On Ubuntu 22.04 and later, apt-key is deprecated; store the key under /usr/share/keyrings and reference it with a signed-by option in the repository entry instead.)
  3. Add the Elastic APT repository for Filebeat (replace 7.x with your desired version, e.g., 8.x): echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list. (Using tee without -a avoids duplicate entries on repeated runs.)
  4. Update the package list again to include Elastic packages: sudo apt update.
  5. Install Filebeat: sudo apt install filebeat -y.
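Run end to end, the steps above look like this (the 7.x branch is used as an example; substitute the branch that matches your Elasticsearch cluster):

```shell
# Install Filebeat from the Elastic APT repository on Ubuntu.
# Assumes sudo privileges and network access; 7.x is an example branch choice.
sudo apt update
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt update
sudo apt install filebeat -y
filebeat version    # confirm the binary is installed and on the PATH
```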

Configuring Filebeat to Collect Logs
Filebeat requires input definitions to specify which logs to collect. The default configuration file is located at /etc/filebeat/filebeat.yml.

  1. Enable the system module (for collecting system logs like syslog and auth logs) or manually define log paths:
    • To enable the system module: sudo filebeat modules enable system. This activates the pre-configured input defined in /etc/filebeat/modules.d/system.yml.
    • For manual configuration, add an input section to /etc/filebeat/filebeat.yml (the log input type shown below still works on 7.x, but newer releases deprecate it in favor of filestream):
      filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /var/log/syslog
          - /var/log/apache2/*.log  # Example: Apache logs
          - /var/log/nginx/*.log    # Example: Nginx logs
      
  2. Save the file after making changes.
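A manual input can also carry extra metadata and filtering. For example (the service value here is an arbitrary tag, not a Filebeat built-in):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
  fields:
    service: nginx          # arbitrary tag, useful for filtering in Kibana
  exclude_lines: ['^#']     # drop lines beginning with '#'
```

Values under fields appear beneath the fields key of each event unless fields_under_root: true is also set.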

Configuring Filebeat to Output to Elasticsearch
To send collected logs to Elasticsearch, modify the output.elasticsearch section in /etc/filebeat/filebeat.yml:

output.elasticsearch:
  hosts: ["localhost:9200"]  # Replace with your Elasticsearch server's address (e.g., ["es-node1:9200", "es-node2:9200"] for a cluster)
  index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"  # Creates daily indices with the Filebeat version and date

Note: on Filebeat 7.x and later, overriding index also requires setting setup.template.name and setup.template.pattern (and typically setup.ilm.enabled: false); otherwise Filebeat will refuse to start.
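Before restarting anything, Filebeat's built-in checks can validate the configuration and the connection to the output (both commands require Filebeat to be installed and the config file in its default location):

```shell
# Syntax-check the configuration; prints "Config OK" on success.
sudo filebeat test config

# Verify connectivity to the configured Elasticsearch output.
sudo filebeat test output
```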

Starting and Enabling Filebeat
After configuring inputs and outputs, start the Filebeat service and enable it to launch on boot:

sudo systemctl start filebeat
sudo systemctl enable filebeat

Use sudo systemctl status filebeat to verify that the service is running without errors.

Verifying the Integration
To confirm that Filebeat is sending logs to Elasticsearch:

  1. Check Filebeat logs for real-time status: sudo journalctl -u filebeat -f. Look for messages indicating successful connections to Elasticsearch.
  2. Query Elasticsearch for Filebeat indices: curl -X GET "localhost:9200/_cat/indices?v". You should see indices named filebeat-* (e.g., filebeat-7.17.0-2025.11.11).
  3. If Kibana is installed, go to the Discover page and select the filebeat-* index pattern to view log data.
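Beyond listing indices, you can pull back a sample document to confirm that fields are populated (this assumes Elasticsearch is reachable on localhost without authentication):

```shell
# Fetch one recent event from any Filebeat index.
curl -s "localhost:9200/filebeat-*/_search?size=1&pretty"

# Count the total number of indexed Filebeat events.
curl -s "localhost:9200/filebeat-*/_count?pretty"
```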

Optional: Configuring Elasticsearch for Filebeat
Ensure your Elasticsearch cluster is ready to receive data from Filebeat:

  1. Enable Elasticsearch: Start the Elasticsearch service (sudo systemctl start elasticsearch) and enable it to start on boot (sudo systemctl enable elasticsearch).
  2. Adjust Network Settings: In /etc/elasticsearch/elasticsearch.yml, set network.host: 0.0.0.0 to allow remote connections (only needed if Filebeat runs on a different server; this exposes the node on all interfaces, so restrict access with a firewall) and keep http.port: 9200 (the default).
  3. Secure Communication (Recommended): Use SSL/TLS to encrypt traffic between Filebeat and Elasticsearch. Generate certificates with the elasticsearch-certutil tool and point output.elasticsearch.ssl.certificate_authorities in Filebeat at the CA certificate. (Elasticsearch 8.x enables security and TLS by default.)
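A minimal TLS-enabled output section might look like this (the host name and file paths are placeholders, and the elastic user assumes basic authentication is enabled):

```yaml
output.elasticsearch:
  hosts: ["https://es-node1:9200"]                              # placeholder host
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]   # CA certificate path (placeholder)
  username: "elastic"                                           # assumption: basic auth in use
  password: "${ES_PWD}"                                         # resolved from the Filebeat keystore
```

The keystore entry can be created with sudo filebeat keystore add ES_PWD, which avoids storing the password in plain text in the config file.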

Performance Tuning (Optional)
For high-volume log environments, optimize Filebeat and Elasticsearch configurations to improve throughput:

  1. Filebeat Tuning: In /etc/filebeat/filebeat.yml, adjust these parameters:
    • harvester_buffer_size: Increase the per-file read buffer (e.g., 65536 bytes; the default is 16384).
    • queue.mem.events: Raise the number of events buffered in memory before publishing (this replaces the spool_size option removed after Filebeat 5.x).
    • queue.mem.flush.timeout: Lower the flush timeout so partially filled batches are sent sooner (e.g., 1s; this replaces the old idle_timeout option).
  2. Elasticsearch Output Tuning: In the output.elasticsearch section, adjust these parameters:
    • worker: Set roughly one worker per Elasticsearch node (e.g., worker: 3 for a 3-node cluster).
    • bulk_max_size: Increase the number of events per bulk request (e.g., 4096; very large batches can overwhelm Elasticsearch, so raise this gradually).
      These changes help Filebeat sustain high log volumes without overloading the cluster.
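As a sketch, a tuned configuration for Filebeat 7.x/8.x might combine queue and output settings like this (all values are starting points to benchmark against your own workload, not universal recommendations):

```yaml
queue.mem:
  events: 65536             # events buffered in memory (successor to spool_size)
  flush.min_events: 2048    # minimum batch size that triggers a flush
  flush.timeout: 1s         # flush smaller batches after 1s (successor to idle_timeout)

output.elasticsearch:
  hosts: ["localhost:9200"]
  worker: 3                 # roughly one worker per Elasticsearch node
  bulk_max_size: 4096       # events per bulk request; raise gradually
```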
