Xiaosheng's blog: http://xsboke.blog.51cto.com
------- Thank you for reading; questions and discussion are welcome
Contents:
WEB configuration
Elasticsearch configuration
Nginx access control for Elasticsearch and Kibana
Filebeat input configuration
Pipeline: Filebeat collects logs -> Logstash filters/formats -> Elasticsearch stores -> Kibana displays
# Personal understanding
Both Logstash and Filebeat can collect logs and ship them directly to Elasticsearch.
Logstash simply does more than Filebeat, e.g. filtering and formatting,
while Filebeat is lighter than Logstash, so it collects logs faster.
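For example, a Logstash filter block can parse raw Nginx access-log lines into structured fields, something Filebeat alone cannot do. A minimal sketch, assuming Nginx's default "combined" log format:
filter {
  grok {
    # COMBINEDAPACHELOG also matches Nginx's default combined access-log format
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # use the timestamp parsed from the log line as the event timestamp
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}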
# Based on ELK 7.4, demonstrated by collecting Nginx logs.
centos7.2-web 172.16.100.251 nginx/filebeat/logstash
centos7.2-elasticsearch 172.16.100.252 elasticsearch/kibana
WEB
Configure Nginx
yum -y install yum-utils
vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
yum-config-manager --enable nginx-mainline
yum -y install nginx
nginx    # start nginx
JDK
tar zxf jdk-8u202-linux-x64.tar.gz
mv jdk1.8.0_202 /usr/local/jdk1.8
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8
export JRE_HOME=/usr/local/jdk1.8/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
source /etc/profile
# Without this symlink, logstash will fail with an error that it cannot find OpenJDK
ln -s /usr/local/jdk1.8/bin/java /usr/bin/java
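A quick check that the JDK is usable from the shell:
java -version    # should print the 1.8.0_202 version banner
echo $JAVA_HOME  # should print /usr/local/jdk1.8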
filebeat
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.4.0-x86_64.rpm
rpm -vi filebeat-7.4.0-x86_64.rpm
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log # log file to monitor
tags: ["access"] # tag used to tell multiple log streams apart
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
tags: ["error"]
output.logstash:
hosts: ["localhost:5044"] # the logstash pipeline config below listens on this port
# Comment out "output.elasticsearch", otherwise enabling the logstash module fails with: Error initializing beat: error unpacking config data: more than one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')
# Enabling the logstash module actually just edits "/etc/filebeat/modules.d/logstash.yml"
filebeat modules enable logstash
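Before starting the service, filebeat can check its own configuration and output connectivity (the output test only succeeds once logstash is listening on 5044):
filebeat test config    # validate filebeat.yml
filebeat test output    # try to connect to the configured logstash output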
logstash
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum -y install logstash
ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
# A few logstash.yml settings, briefly
path.data: data directory
config.reload.automatic: whether to reload config files automatically
config.reload.interval: how often to check for config changes
http.host: host to bind to
http.port: port
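A minimal sketch of those settings in /etc/logstash/logstash.yml, using the RPM install's defaults:
path.data: /var/lib/logstash
config.reload.automatic: true
config.reload.interval: 3s
http.host: "127.0.0.1"
http.port: 9600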
# Write your pipeline configs under /etc/logstash/conf.d/
vim /etc/logstash/conf.d/nginx.conf
input {
beats {
port => 5044
}
}
output {
if "access" in [tags] { # 通过判断标签名,为不同的日志配置不同的index
elasticsearch {
hosts => ["172.16.100.252:9200"]
index => "nginx-access-%{+YYYY.MM.dd}" # index names must be lowercase
sniffing => true
template_overwrite => true
}
}
if "error" in [tags] {
elasticsearch {
hosts => ["172.16.100.252:9200"]
index => "nginx-error-%{+YYYY.MM.dd}"
sniffing => true
template_overwrite => true
}
}
}
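The pipeline syntax can be checked before starting the service (using the logstash symlink created earlier):
logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit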
systemctl daemon-reload
systemctl enable logstash
systemctl start logstash
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
Elasticsearch
Configure JDK
tar zxf jdk-8u202-linux-x64.tar.gz
mv jdk1.8.0_202 /usr/local/jdk1.8
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8
export JRE_HOME=/usr/local/jdk1.8/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
source /etc/profile
elasticsearch
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum -y install elasticsearch
# Modify elasticsearch.yml
Key settings:
cluster.name: cluster name
node.name: node name
path.data: data directory
path.logs: log directory
bootstrap.memory_lock: whether to lock memory at startup
network.host: IP address to bind to; 0.0.0.0 means all addresses
http.port: listen port
discovery.seed_hosts: cluster hosts
cluster.initial_master_nodes: initial master-eligible nodes
sed -i "/#cluster.name: my-application/a\cluster.name: my-elk-cluster" /etc/elasticsearch/elasticsearch.yml
sed -i "/#node.name: node-1/a\node.name: node-1" /etc/elasticsearch/elasticsearch.yml
sed -i "s/path.data: \/var\/lib\/elasticsearch/path.data: \/data\/elasticsearch/g" /etc/elasticsearch/elasticsearch.yml
sed -i "/#bootstrap.memory_lock: true/a\bootstrap.memory_lock: false" /etc/elasticsearch/elasticsearch.yml
sed -i "/#network.host: 192.168.0.1/a\network.host: 0.0.0.0" /etc/elasticsearch/elasticsearch.yml
sed -i "/#http.port: 9200/a\http.port: 9200" /etc/elasticsearch/elasticsearch.yml
sed -i '/#discovery.seed_hosts: \["host1", "host2"\]/a\discovery.seed_hosts: \["172.16.100.252"\]' /etc/elasticsearch/elasticsearch.yml
sed -i '/#cluster.initial_master_nodes: \["node-1", "node-2"\]/a\cluster.initial_master_nodes: \["node-1"\]' /etc/elasticsearch/elasticsearch.yml
mkdir -p /data/elasticsearch
chown elasticsearch:elasticsearch /data/elasticsearch
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
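Once the service is up, confirm the node answers and the cluster is healthy:
curl http://127.0.0.1:9200/
curl http://127.0.0.1:9200/_cluster/health?pretty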
Kibana
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/kibana.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum -y install kibana
sed -i "/#server.port: 5601/a\server.port: 5601" /etc/kibana/kibana.yml
sed -i '/#server.host: "localhost"/a\server.host: "0.0.0.0"' /etc/kibana/kibana.yml
sed -i '/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/a\elasticsearch.hosts: \["http:\/\/localhost:9200"\]' /etc/kibana/kibana.yml
sed -i '/#kibana.index: ".kibana"/a\kibana.index: ".kibana"' /etc/kibana/kibana.yml
systemctl daemon-reload
systemctl enable kibana
systemctl start kibana
firewall-cmd --permanent --add-port=9200/tcp
# firewall-cmd --permanent --add-port=9300/tcp # cluster transport port
firewall-cmd --permanent --add-port=5601/tcp
firewall-cmd --reload
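Kibana can take a minute to initialize; its status endpoint shows when it is ready:
curl -s http://127.0.0.1:5601/api/status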
Accessing Elasticsearch and Kibana through Nginx
(use Nginx to restrict who can reach Elasticsearch and Kibana)
172.16.100.252
# Modify hosts
vim /etc/hosts
172.16.100.252 elk.elasticsearch
# Install nginx and add this configuration
server {
listen 80;
server_name elk.elasticsearch;
location / {
allow 127.0.0.1/32;
allow 172.16.100.251/32;
deny all;
proxy_pass http://127.0.0.1:9200;
}
}
server {
listen 80;
server_name elk.kibana;
location / {
allow "可以访问kibana的IP";
deny all;
proxy_pass http://127.0.0.1:5601;
}
}
# Modify the elasticsearch configuration
network.host: 127.0.0.1
discovery.seed_hosts: ["elk.elasticsearch"]
# Modify the kibana configuration
server.host: "127.0.0.1"
systemctl restart elasticsearch
systemctl restart kibana
172.16.100.251
# Modify hosts
vim /etc/hosts
172.16.100.252 elk.elasticsearch
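From this host (which the nginx ACL allows), the proxied Elasticsearch should now answer on port 80:
curl http://elk.elasticsearch/    # should return the elasticsearch banner JSON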
# logstash input output conf
vim /etc/logstash/conf.d/nginx.conf
input {
beats {
port => 5044
}
}
output {
if "access" in [tags] { # 通过判断标签名,为不同的日志配置不同的index
elasticsearch {
hosts => ["elk.elasticsearch:80"] # the port must be given explicitly, otherwise 9200 is assumed
index => "nginx-access-%{+YYYY.MM.dd}" # index names must be lowercase
sniffing => false
template_overwrite => true
}
}
if "error" in [tags] {
elasticsearch {
hosts => ["elk.elasticsearch:80"]
index => "nginx-error-%{+YYYY.MM.dd}"
sniffing => false
template_overwrite => true
}
}
}
systemctl restart logstash
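Once events flow, the daily indices should appear (queried here through the nginx proxy):
curl http://elk.elasticsearch/_cat/indices?v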
Filebeat input configuration
# Have filebeat merge multi-line entries into a single event
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
tags: ["access"]
multiline.pattern: '^\[[0-9]{4}' # regex to match; here, lines starting with "[YYYY"
multiline.negate: true # lines that do NOT match the pattern are treated as continuations
multiline.match: after # and appended after the preceding matching line
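To illustrate (a hypothetical log entry), with these settings the three raw lines below are shipped as a single event, because the continuation lines do not start with "[YYYY":
# [2019-10-14 12:00:00] upstream timed out
#     while reading response header from upstream
#     client: 172.16.100.1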
# Collect logs from a directory, including its subdirectories
filebeat.inputs:
- type: log
enabled: true
paths:
- "/var/log/**"
recursive_glob.enabled: true # enable recursive ** glob expansion
tags: ["LogAll"]
# Start filebeat with log output to the terminal
filebeat -e
# Start logstash with output to the terminal
logstash -f /etc/logstash/conf.d/nginx.conf
# Append anything to one of the collected logs
echo "1" >> /var/log/nginx/access.log
# Then inspect the filebeat and logstash output to spot errors