Deploying an ELKB 5.2.2 Cluster Environment
I have worked with ELK 1.4, 2.0, 2.4, 5.0, and 5.2 over the years, and frankly the earlier versions never left much of an impression; things only started to click with 5.2, which shows how slowly real understanding comes. This document therefore tries to be fairly detailed, even though I tend to prefer terse write-ups these days. Enough rambling, on to the main text. (Note: the configuration here is the pre-optimization setup; it is fine for normal use, but needs tuning under heavy load. See the optimization notes at the end.)
Notes:
This is a major version change with many differences; the main deployment changes are:
1. filebeat now outputs directly to kafka and drops unneeded fields, such as the beat.* ones
2. the elasticsearch cluster layout was reworked into 3 master nodes and 6 data nodes
3. the logstash filter adds urldecode so that Chinese characters in url, referrer, and agent display correctly
4. the logstash filter adds geoip to resolve the client IP to a region and city
5. logstash mutate rewrites escape sequences and removes unneeded fields, such as the kafka-related ones
6. the elasticsearch head plugin now requires a separate node.js deployment; it can no longer be bundled in as before
7. the nginx logs gain the request parameters and the request method
1. Architecture
Candidate architectures:
filebeat--elasticsearch--kibana
filebeat--logstash--kafka--logstash--elasticsearch--kibana
filebeat--kafka--logstash--elasticsearch--kibana
Since filebeat 5.2.2 supports many outputs (logstash, elasticsearch, kafka, redis, syslog, file, etc.), the following was chosen to optimize resource usage while supporting high-concurrency scenarios:
filebeat(18) -- kafka(3) -- logstash(3) -- elasticsearch(6) -- kibana(3) -- nginx load balancing
3 physical machines and 12 VMs in total, running CentOS 6.8. The breakdown:
Server 1 (192.168.188.186): kafka1 32G RAM / 700G disk / 4 CPU; logstash 8G / 100G / 4 CPU; elasticsearch2 40G / 1.4T / 8 CPU; elasticsearch3 40G / 1.4T / 8 CPU
Server 2 (192.168.188.187): kafka2 32G / 700G / 4 CPU; logstash 8G / 100G / 4 CPU; elasticsearch4 40G / 1.4T / 8 CPU; elasticsearch5 40G / 1.4T / 8 CPU
Server 3 (192.168.188.188): kafka3 32G / 700G / 4 CPU; logstash 8G / 100G / 4 CPU; elasticsearch6 40G / 1.4T / 8 CPU; elasticsearch7 40G / 1.4T / 8 CPU
Disk partitioning:
Logstash (100G): swap 8G, /boot 200M, rest /
Kafka (700G): swap 8G, /boot 200M, / 30G, rest /data
Elasticsearch (1.4T): swap 8G, /boot 200M, / 30G, rest /data
IP allocation:
elasticsearch2-7: 192.168.188.191-196
kibana1-3: 192.168.188.191/193/195
kafka1-3: 192.168.188.237-239
logstash: 192.168.188.197/198/240
2. Environment Preparation
yum -y remove java-1.6.0-openjdk
yum -y remove java-1.7.0-openjdk
yum -y remove perl-*
yum -y remove sssd-*
yum -y install java-1.8.0-openjdk
java -version
yum update
reboot
Set up /etc/hosts entries (kafka needs them):
cat /etc/hosts
192.168.188.191 ES191    (master and data)
192.168.188.192 ES192    (data)
192.168.188.193 ES193    (master and data)
192.168.188.194 ES194    (data)
192.168.188.195 ES195    (master and data)
192.168.188.196 ES196    (data)
192.168.188.237 kafka237
192.168.188.238 kafka238
192.168.188.239 kafka239
192.168.188.197 logstash297
192.168.188.198 logstash298
192.168.188.240 logstash340
3. Deploying the Elasticsearch Cluster
mkdir /data/esnginx
mkdir /data/eslog
rpm -ivh /srv/elasticsearch-5.2.2.rpm
chkconfig --add elasticsearch
chkconfig postfix off
rpm -ivh /srv/kibana-5.2.2-x86_64.rpm
chown elasticsearch:elasticsearch /data/eslog -R
chown elasticsearch:elasticsearch /data/esnginx -R
Configuration file (3 master + 6 data):
[root@ES191 elasticsearch]# cat elasticsearch.yml|grep -Ev '^#|^$'
cluster.name: nginxlog
node.name: ES191
node.master: true
node.data: true
node.attr.rack: r1
path.data: /data/esnginx
path.logs: /data/eslog
bootstrap.memory_lock: true
network.host: 192.168.188.191
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.188.191","192.168.188.192","192.168.188.193","192.168.188.194","192.168.188.195","192.168.188.196"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 5
gateway.recover_after_time: 5m
gateway.expected_nodes: 6
cluster.routing.allocation.same_shard.host: true
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
indices.recovery.max_bytes_per_sec: 30mb
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.system_call_filter: false   # needed on kernels older than 3.5 (e.g. CentOS 6); CentOS 7's 3.10 kernel does not need it
Pay special attention to the following:
/etc/security/limits.conf:
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch soft nofile 65536
elasticsearch hard nofile 131072
elasticsearch soft nproc 2048
elasticsearch hard nproc 4096
/etc/elasticsearch/jvm.options:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms20g
-Xmx20g
Start the cluster
service elasticsearch start
Health check
http://192.168.188.191:9200/_cluster/health?pretty=true
{
  "cluster_name" : "nginxlog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 6,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
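Related to the memlock settings above, it is also worth confirming that bootstrap.memory_lock actually took effect once the cluster is up. A minimal check against any node:

curl -s 'http://192.168.188.191:9200/_nodes?filter_path=**.mlockall&pretty'
# every node should report "mlockall" : true; false means the limits.conf
# settings were not picked up by the service environment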
elasticsearch-head plugin
http://192.168.188.215:9100/
Point it at any of the nodes above, e.g. 192.168.188.191:9200.
Shard settings
The official recommendation is to set these when the index is created:
curl -XPUT 'http://192.168.188.193:9200/_all/_settings?preserve_existing=true' -d '{
"index.number_of_replicas" : "1",
"index.number_of_shards" : "6"
}'
This did not take effect. It later turned out that shard settings can be specified when the index template is created; for now the defaults of 1 replica and 5 shards are kept.
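For reference, baking the shard counts into an index template looks like the sketch below (the template name nginx_shards is hypothetical, and the deployment here keeps the defaults):

curl -XPUT 'http://192.168.188.193:9200/_template/nginx_shards' -d '{
  "template": "filebeat-*",
  "settings": {
    "index.number_of_shards": 6,
    "index.number_of_replicas": 1
  }
}'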
Other errors (for reference only; the optimization section has the proper fix)
bootstrap.system_call_filter: false # for "system call filters failed to install"
See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-call-filter-check.html
[WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
4. Deploying the Kafka Cluster
Kafka cluster setup
1) Zookeeper cluster
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
tar zxvf zookeeper-3.4.10.tar.gz -C /usr/local/
ln -s /usr/local/zookeeper-3.4.10/ /usr/local/zookeeper
mkdir -p /data/zookeeper/data/
vim /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data/zookeeper/data
clientPort=2181
server.1=192.168.188.237:2888:3888
server.2=192.168.188.238:2888:3888
server.3=192.168.188.239:2888:3888
vim /data/zookeeper/data/myid
1
(the myid value is 1, 2, or 3, matching the server.N entry for each node)
/usr/local/zookeeper/bin/zkServer.sh start
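Before moving on to kafka, it is worth confirming the quorum actually formed. Two quick checks (the four-letter-word commands are built into zookeeper):

# one node should report Mode: leader, the other two Mode: follower
/usr/local/zookeeper/bin/zkServer.sh status
# liveness probe; a healthy server answers imok
echo ruok | nc 192.168.188.237 2181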
2) Kafka cluster
wget http://mirrors.hust.edu.cn/apache/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz
tar zxvf kafka_2.11-0.10.0.1.tgz -C /usr/local/
ln -s /usr/local/kafka_2.11-0.10.0.1 /usr/local/kafka
A diff of the shipped server.properties and zookeeper.properties showed little change; they can be used directly.
vim /usr/local/kafka/config/server.properties
broker.id=237
port=9092
host.name=192.168.188.237
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafkalog
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
zookeeper.connection.timeout.ms=6000
producer.type=async
broker.list=192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092
mkdir /data/kafkalog
Adjust the memory usage:
vim /usr/local/kafka/bin/kafka-server-start.sh
export KAFKA_HEAP_OPTS="-Xmx16G -Xms16G"
Start kafka:
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
Create the three front-end topics (one per nginx group):
/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx1-168 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx2-178 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx3-188 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
Check the topics:
/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
ngx1-168
ngx2-178
ngx3-188
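Beyond listing the topics, a describe plus a console produce/consume round trip verifies the partition layout and end-to-end delivery (a quick smoke test, not part of the original setup):

/usr/local/kafka/bin/kafka-topics.sh --describe --topic ngx1-168 --zookeeper 192.168.188.237:2181
# type a test line here, then read it back from another terminal
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.188.237:9092 --topic ngx1-168
/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.188.237:2181 --topic ngx1-168 --from-beginning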
3) Start on boot
cat /etc/rc.local
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
Note: if startup is configured in rc.local and Java was installed manually rather than as the yum openjdk-1.8.0 package, JAVA_HOME must be set there explicitly. Otherwise the Java environment is missing and the zookeeper and kafka services fail to start, because /etc/profile (where the Java environment is usually configured) is only sourced after rc.local has run. See the sketch below.
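A sketch of what that looks like (the JDK path is hypothetical; substitute wherever the tarball JDK actually lives):

# /etc/rc.local
# export JAVA_HOME here because /etc/profile has not been sourced at this point
export JAVA_HOME=/usr/local/jdk1.8.0_121   # hypothetical path for a manually installed JDK
export PATH=$JAVA_HOME/bin:$PATH
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties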
5. Deploying and Configuring Logstash
Installation
rpm -ivh logstash-5.2.2.rpm
mkdir /usr/share/logstash/config
#1. Copy the config files into the logstash home
cp -r /etc/logstash/* /usr/share/logstash/config/
#2. Configure the config path
vim /usr/share/logstash/config/logstash.yml
Before:
path.config: /etc/logstash/conf.d
After:
path.config: /usr/share/logstash/config/conf.d
#3. Modify startup.options
Before:
LS_SETTINGS_DIR=/etc/logstash
After:
LS_SETTINGS_DIR=/usr/share/logstash/config
After changing startup.options, run /usr/share/logstash/bin/system-install for the change to take effect.
Configuration
On the consumer side, each of the three logstash instances handles one topic:
in-kafka-ngx1-out-es.conf
in-kafka-ngx2-out-es.conf
in-kafka-ngx3-out-es.conf
[root@logstash297 conf.d]# cat in-kafka-ngx1-out-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
    group_id => "ngx1"
    topics => ["ngx1-168"]
    codec => "json"
    consumer_threads => 3
    decorate_events => true
  }
}
filter {
  mutate {
    gsub => ["message", "\\x", "%"]
    remove_field => ["kafka"]
  }
  json {
    source => "message"
    remove_field => ["message"]
  }
  geoip {
    source => "clientRealIp"
  }
  urldecode {
    all_fields => true
  }
}
output {
  elasticsearch {
    hosts => ["192.168.188.191:9200","192.168.188.192:9200","192.168.188.193:9200","192.168.188.194:9200","192.168.188.195:9200","192.168.188.196:9200"]
    index => "filebeat-%{type}-%{+YYYY.MM.dd}"
    manage_template => true
    template_overwrite => true
    template_name => "nginx_template"
    template => "/usr/share/logstash/templates/nginx_template"
    flush_size => 50000
    idle_flush_time => 10
  }
}
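Before starting, logstash can validate the pipeline syntax cheaply (logstash 5.x supports --config.test_and_exit, short form -t):

/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf --config.test_and_exit
# expect: Configuration OK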
Nginx template
[root@logstash297 logstash]# cat /usr/share/logstash/templates/nginx_template
{ "template" : "filebeat-*", "settings" : { "index.refresh_interval" : "10s" }, "mappings" : { "_default_" : { "_all" : {"enabled" : true, "omit_norms" : true}, "dynamic_templates" : [ { "string_fields" : { "match_pattern": "regex", "match" : "(agent)|(status)|(url)|(clientRealIp)|(referrer)|(upstreamhost)|(http_host)|(request)|(request_method)|(upstreamstatus)", "match_mapping_type" : "string", "mapping" : { "type" : "string", "index" : "analyzed", "omit_norms" : true, "fields" : { "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 512} } } } } ], "properties": { "@version": { "type": "string", "index": "not_analyzed" }, "geoip" : { "type": "object", "dynamic": true, "properties": { "location": { "type": "geo_point" } } } } } } }
Start
/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf &
The logstash service starts on boot by default.
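Once logstash is running and has pushed the template, the installation can be confirmed against any ES node:

curl -s 'http://192.168.188.191:9200/_template/nginx_template?pretty'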
Reference:
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.5/DEVELOPER.md
Error handling
[2017-05-08T12:24:30,388][ERROR][logstash.inputs.kafka ] Unknown setting 'zk_connect' for kafka
[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka ] Unknown setting 'topic_id' for kafka
[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka ] Unknown setting 'reset_beginning' for kafka
[2017-05-08T12:24:30,395][ERROR][logstash.agent ] Cannot load an invalid configuration {:reason=>"Something is wrong with your configuration."}
(zk_connect, topic_id, and reset_beginning are options from the old kafka input plugin; the 5.x plugin replaced them with bootstrap_servers, topics, and auto_offset_reset, as used in the config above.)
Verify via the logstash log:
[root@logstash297 conf.d]# cat /var/log/logstash/logstash-plain.log
[2017-05-09T10:43:20,832][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.188.191:9200/, http://192.168.188.192:9200/, http://192.168.188.193:9200/, http://192.168.188.194:9200/, http://192.168.188.195:9200/, http://192.168.188.196:9200/]}}
[2017-05-09T10:43:20,838][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.191:9200/, :path=>"/"}
[2017-05-09T10:43:20,919][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x59d1baad URL:http://192.168.188.191:9200/>}
[2017-05-09T10:43:20,920][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.192:9200/, :path=>"/"}
[2017-05-09T10:43:20,922][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x39defbff URL:http://192.168.188.192:9200/>}
[2017-05-09T10:43:20,924][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.193:9200/, :path=>"/"}
[2017-05-09T10:43:20,927][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x6e2b7f40 URL:http://192.168.188.193:9200/>}
[2017-05-09T10:43:20,927][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.194:9200/, :path=>"/"}
[2017-05-09T10:43:20,929][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x208910a2 URL:http://192.168.188.194:9200/>}
[2017-05-09T10:43:20,930][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.195:9200/, :path=>"/"}
[2017-05-09T10:43:20,932][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x297a8bbd URL:http://192.168.188.195:9200/>}
[2017-05-09T10:43:20,933][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.196:9200/, :path=>"/"}
[2017-05-09T10:43:20,935][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x3ac661af URL:http://192.168.188.196:9200/>}
[2017-05-09T10:43:20,936][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/usr/share/logstash/templates/nginx_template"}
[2017-05-09T10:43:20,970][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"filebeat-*", "settings"=>{"index.refresh_interval"=>"10s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"string_fields"=>{"match_pattern"=>"regex", "match"=>"(agent)|(status)|(url)|(clientRealIp)|(referrer)|(upstreamhost)|(http_host)|(request)|(request_method)", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "ignore_above"=>512}}}}}]}}}}
[2017-05-09T10:43:20,974][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/nginx_template
[2017-05-09T10:43:21,009][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x65ed1af5 URL://192.168.188.191:9200>, #<URI::Generic:0x2d2a52a6 URL://192.168.188.192:9200>, #<URI::Generic:0x6e79e44b URL://192.168.188.193:9200>, #<URI::Generic:0x531436ae URL://192.168.188.194:9200>, #<URI::Generic:0x5e23a48b URL://192.168.188.195:9200>, #<URI::Generic:0x2163628b URL://192.168.188.196:9200>]}
[2017-05-09T10:43:21,010][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.4-java/vendor/GeoLite2-City.mmdb"}
[2017-05-09T10:43:21,022][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-09T10:43:21,037][INFO ][logstash.pipeline ] Pipeline main started
[2017-05-09T10:43:21,086][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
6. Deploying and Configuring Filebeat
Installation
rpm -ivh filebeat-5.2.2-x86_64.rpm
The nginx log format must be JSON:
log_format access '{ "@timestamp": "$time_iso8601", '
                  '"clientRealIp": "$clientRealIp", '
                  '"size": $body_bytes_sent, '
                  '"request": "$request", '
                  '"method": "$request_method", '
                  '"responsetime": $request_time, '
                  '"upstreamhost": "$upstream_addr", '
                  '"http_host": "$host", '
                  '"url": "$uri", '
                  '"referrer": "$http_referer", '
                  '"agent": "$http_user_agent", '
                  '"status": "$status"} ';
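Note that $clientRealIp is not a built-in nginx variable; it has to be defined elsewhere in the nginx config. A common way to derive it from X-Forwarded-For with map, sketched here as an assumption about this setup (adjust to the actual proxy chain):

# http {} context: first address in X-Forwarded-For, else the peer address
map $http_x_forwarded_for $clientRealIp {
    ""                                    $remote_addr;
    ~^(?P<firstAddr>[0-9a-fA-F:.]+),?.*$  $firstAddr;
}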
Configure filebeat
vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/wwwlogs/*.log
  document_type: ngx1-168
  tail_files: true
  json.keys_under_root: true
  json.add_error_key: true

output.kafka:
  enabled: true
  hosts: ["192.168.188.237:9092","192.168.188.238:9092","192.168.188.239:9092"]
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  worker: 3

processors:
- drop_fields:
    fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"]

logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  rotateeverybytes: 10485760 # = 10MB
  keepfiles: 7
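filebeat can validate this file before the service starts (the -configtest flag exists in the 5.x binary; the path follows the 5.2 RPM layout):

/usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml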
For detailed filebeat configuration, see the official docs:
https://www.elastic.co/guide/en/beats/filebeat/5.2/index.html
Using kafka as the log output:
https://www.elastic.co/guide/en/beats/filebeat/5.2/kafka-output.html
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  # message topic selection + partitioning
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
Start
chkconfig filebeat on
/etc/init.d/filebeat start
Error handling
[root@localhost ~]# tail -f /var/log/filebeat/filebeat
2017-05-09T15:21:39+08:00 ERR Error decoding JSON: invalid character 'x' in string escape code
$uri reflects nginx-internal URL changes and rewrites, so for log output $request_uri (the original request line) can be used instead; unless a specific business need says otherwise, it is safe to substitute it. (nginx escapes unsafe bytes as \xHH, which is not valid JSON and is what triggers the decode error above.)
Reference:
http://www.mamicode.com/info-detail-1368765.html
7. Verification
1) Watch the kafka consumer:
/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic ngx1-168
2) Check the indices and shard info in elasticsearch-head.
8. Deploying and Configuring Kibana
1) Configure and start
cat /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.188.191"
elasticsearch.url: "http://192.168.188.191:9200"
chkconfig --add kibana
/etc/init.d/kibana start
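Before putting the kibana instances behind the load balancer, each one can be checked via its status API (part of kibana 5.x):

curl -s 'http://192.168.188.191:5601/api/status'
# look for "state":"green" in the overall status section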
2) Field format
{ "_index": "filebeat-ngx1-168-2017.05.10", "_type": "ngx1-168", "_id": "AVvvtIJVy6ssC9hG9dKY", "_score": null, "_source": { "request": "GET /qiche/奥迪A3/ HTTP/1.1", "agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36", "geoip": { "city_name": "Jinhua", "timezone": "Asia/Shanghai", "ip": "122.226.77.150", "latitude": 29.1068, "country_code2": "CN", "country_name": "China", "continent_code": "AS", "country_code3": "CN", "region_name": "Zhejiang", "location": [ 119.6442, 29.1068 ], "longitude": 119.6442, "region_code": "33" }, "method": "GET", "type": "ngx1-168", "http_host": "www.niubi.com", "url": "/qiche/奥迪A3/", "referrer": "http://www.niubi.com/qiche/奥迪S6/", "upstreamhost": "172.17.4.205:80", "@timestamp": "2017-05-10T08:14:00.000Z", "size": 10027, "beat": {}, "@version": "1", "responsetime": 0.217, "clientRealIp": "122.226.77.150", "status": "200" }, "fields": { "@timestamp": [ 1494404040000 ] }, "sort": [ 1494404040000 ] }
3) Views and dashboards
(1) Add AutoNavi (Gaode) map tiles
Edit the kibana config file kibana.yml and append at the end:
tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
Adjust the ES template: geo-points cannot rely on dynamic mapping, so this kind of field must be declared explicitly.
To map geoip.location as type geo_point, add an entry to the template's properties as shown below:
"properties": {
"@version": { "type": "string", "index": "not_analyzed" },
"geoip" : {
"type": "object",
"dynamic": true,
"properties": {
"location": { "type": "geo_point" }
}
}
}
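Once the template has been applied to a fresh index, the field mapping can be double-checked (the get-field-mapping API is standard):

curl -s 'http://192.168.188.191:9200/filebeat-*/_mapping/field/geoip.location?pretty'
# the field should come back as "type" : "geo_point"; otherwise the tile map cannot plot it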
4) Install the x-pack plugin
References:
https://www.elastic.co/guide/en/x-pack/5.2/installing-xpack.html#xpack-installing-offline
https://www.elastic.co/guide/en/x-pack/5.2/setting-up-authentication.html#built-in-users
Be sure to change the built-in passwords:
http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/1.json
http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/2.json
http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/3.json
Or:
curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "elasticpassword"
}
'
curl -XPUT 'localhost:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "kibanapassword"
}
'
curl -XPUT 'localhost:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "logstashpassword"
}
'
Below is the official documentation for installing X-Pack offline. We later found that the registered (free) version of X-Pack only provides monitoring, so in the end it was not installed.
Installing X-Pack on Offline Machines
The plugin install scripts require direct Internet access to download and install X-Pack. If your server doesn't have Internet access, you can manually download and install X-Pack:
1. Manually download the X-Pack zip file: https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.2.2.zip (sha1)
2. Transfer the zip file to a temporary directory on the offline machine. (Do NOT put the file in the Elasticsearch plugins directory.)
3. Run bin/elasticsearch-plugin install from the Elasticsearch install directory and specify the location of the X-Pack zip file. For example:
   bin/elasticsearch-plugin install file:///path/to/file/x-pack-5.2.2.zip
   Note: you must specify an absolute path to the zip file after the file:// protocol.
4. Run bin/kibana-plugin install from the Kibana install directory and specify the location of the X-Pack zip file. (The plugins for Elasticsearch, Kibana, and Logstash are included in the same zip file.) For example:
   bin/kibana-plugin install file:///path/to/file/x-pack-5.2.2.zip
5. Run bin/logstash-plugin install from the Logstash install directory and specify the location of the X-Pack zip file. For example:
   bin/logstash-plugin install file:///path/to/file/x-pack-5.2.2.zip
Enabling and Disabling X-Pack Features
By default, all X-Pack features are enabled. You can explicitly enable or disable X-Pack features in elasticsearch.yml and kibana.yml:
xpack.security.enabled    Set to false to disable X-Pack security. Configure in both elasticsearch.yml and kibana.yml.
xpack.monitoring.enabled  Set to false to disable X-Pack monitoring. Configure in both elasticsearch.yml and kibana.yml.
xpack.graph.enabled       Set to false to disable X-Pack graph. Configure in both elasticsearch.yml and kibana.yml.
xpack.watcher.enabled     Set to false to disable Watcher. Configure in elasticsearch.yml only.
xpack.reporting.enabled   Set to false to disable X-Pack reporting. Configure in kibana.yml only.
9. Nginx Load Balancing
1) Configure the load balancer
cat /usr/local/nginx/conf/nginx.conf
server
{
    listen 5601;
    server_name 192.168.188.215;
    index index.html index.htm index.shtml;
    location / {
        allow 192.168.188.0/24;
        deny all;
        proxy_pass http://kibanangx_niubi_com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        auth_basic "Please input Username and Password";
        auth_basic_user_file /usr/local/nginx/conf/.pass_file_elk;
    }
    access_log /data/wwwlogs/access_kibanangx.niubi.com.log access;
}
upstream kibanangx_niubi_com {
    ip_hash;
    server 192.168.188.191:5601;
    server 192.168.188.193:5601;
    server 192.168.188.195:5601;
}
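The auth_basic_user_file referenced above has to be created by hand. One way to do it, assuming openssl is available (the username elkadmin is just an example):

printf "elkadmin:$(openssl passwd -apr1 'YourStrongPassword')\n" > /usr/local/nginx/conf/.pass_file_elk
chmod 400 /usr/local/nginx/conf/.pass_file_elk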
2) Access
http://192.168.188.215:5601/app/kibana#
-------------------------------------------------------------------------------------------------
Optimization Notes
ELKB 5.2 Cluster Optimization Plan
1. Optimization Results
Before optimization:
log collection sustained 10,000 requests/s with latency within 10s (data refreshed every 10s by default).
After optimization:
log collection sustains 30,000 requests/s with latency within 10s, same default 10s refresh (estimated capacity up to 50,000 requests/s).
Drawback: CPU capacity is the bottleneck; dashboards that run aggregations over large time ranges can time out while generating the visualizations. The elasticsearch index structure, search syntax, and so on also leave further room for optimization.
2. Optimization Steps
1) Re-plan memory and CPU
(1) es: 16 CPU, 48G RAM
(2) kafka: 8 CPU, 16G RAM
(3) logstash: 16 CPU, 12G RAM
2) Kafka optimization
Watch consumption lag with kafka-manager.
The kafka heap size needs to be changed,
and one kafka-related parameter changes on the logstash side.
(1) Adjust the JVM heap size
vi /usr/local/kafka/bin/kafka-server-start.sh
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx8G -Xms8G"
export JMX_PORT="8999"
fi
(2) Broker parameter configuration
These optimizations are all parameter values in server.properties.
Network and IO thread tuning:
# max threads the broker uses to process network requests (default 3; the number of CPU cores is a reasonable value)
num.network.threads=4
# threads the broker uses for disk IO (default 4; roughly 2x the number of CPU cores)
num.io.threads=8
(3) Install kafka monitoring (kafka-manager)
/data/scripts/kafka-manager-1.3.3.4/bin/kafka-manager
http://192.168.188.215:8099/clusters/ngxlog/consumers
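For reference, a typical way to launch kafka-manager on that port (an assumption about this setup: kafka-manager keeps its own state in zookeeper, pointed at the existing quorum via ZK_HOSTS):

cd /data/scripts/kafka-manager-1.3.3.4
export ZK_HOSTS="192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181"
nohup bin/kafka-manager -Dhttp.port=8099 > /dev/null 2>&1 &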
3) Logstash optimization
Logstash needs changes in two configuration files, plus the pipeline .conf itself:
(1) Adjust the JVM parameters
vi /usr/share/logstash/config/jvm.options
-Xms2g
-Xmx6g
(2) Modify logstash.yml
vi /usr/share/logstash/config/logstash.yml
path.data: /var/lib/logstash
pipeline.workers: 16          # number of CPU cores
pipeline.output.workers: 4    # equivalent to the workers setting of the elasticsearch output
pipeline.batch.size: 5000     # set according to qps, load, and so on
pipeline.batch.delay: 5
path.config: /usr/share/logstash/config/conf.d
path.logs: /var/log/logstash
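To see whether the new worker and batch settings keep the pipeline saturated, logstash 5.x exposes a monitoring API on port 9600; a quick look:

curl -s 'http://localhost:9600/_node/stats/pipeline?pretty'
# reports per-plugin event counts and durations for the running pipeline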
(3) Modify the corresponding logstash .conf file
Input section:
vi /usr/share/logstash/config/in-kafka-ngx12-out-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
    group_id => "ngx1"
    topics => ["ngx1-168"]
    codec => "json"
    consumer_threads => 3
    auto_offset_reset => "latest"   # add this line
    # decorate_events => true       # remove this line
  }
}
Filter section:
filter {
  mutate {
    gsub => ["message", "\\x", "%"]   # escape fix: the URL encoding differs from the rest of the request; needed for Chinese characters to display
    # remove_field => ["kafka"]       # remove this line: with decorate_events disabled (default false) the kafka.{} fields are never added, so there is nothing to remove
  }
}
Output section
Before:
flush_size => 50000
idle_flush_time => 10
After (flush once 80,000 events accumulate, or every 4 seconds):
flush_size => 80000
idle_flush_time => 4
Logstash output after startup (pipeline.max_inflight is 80,000):
[2017-05-16T10:07:02,552][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>5000, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>80000}
[2017-05-16T10:07:02,553][WARN ][logstash.pipeline ] CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 80000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 5000), or changing the number of pipeline workers (currently 16)
4) Elasticsearch optimization
(1) Adjust the JVM parameters
vi /etc/elasticsearch/jvm.options
Set the heap to 24g, at most 50% of the VM's memory:
-Xms24g
-Xmx24g
(2) Change the GC algorithm (tentative, still under observation; not recommended unless you are sure about these parameters)
elasticsearch uses CMS GC by default.
With heaps over 6G, CMS struggles and stop-the-world pauses become likely.
G1 GC is the suggested alternative.
Comment out:
JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"
JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
Replace with:
JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"
JAVA_OPTS="$JAVA_OPTS -XX:MaxGCPauseMillis=200"
(3) Install Cerebro, an elasticsearch cluster monitoring tool
https://github.com/lmenezes/cerebro
Cerebro is a third-party elasticsearch cluster management tool that makes it easy to inspect cluster state:
https://github.com/lmenezes/cerebro/releases/download/v0.6.5/cerebro-0.6.5.tgz
After installation, access it at:
http://192.168.188.215:9000/
(4) Elasticsearch search parameter tuning (the hard part)
It turned out there was little left to do: the defaults are already good, and the bulk and refresh settings are already covered in the configs above.
(5) Elasticsearch cluster role optimization
es191, es193, es195 act only as master + ingest nodes.
es192, es194, es196 act only as data nodes (each pair of VMs on a host shares one RAID5 array; running all of them as data nodes performed poorly).
Two more data nodes were added on top of that, which greatly improved aggregation performance.
5) Filebeat optimization
(1) Ship the logs as JSON so logstash does not need to decode them, reducing back-end load:
json.keys_under_root: true
json.add_error_key: true
(2) Drop unnecessary fields as follows:
vim /etc/filebeat/filebeat.yml
processors:
- drop_fields:
fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"]
(3) Delete old indices via a scheduled task (cron example after the script below)
Indices are kept for 5 days by default:
cat /data/scripts/delindex.sh
#!/bin/bash
OLDDATE=`date -d -5days +%Y.%m.%d`
echo $OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx1-168-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx2-178-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx3-188-$OLDDATE
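The matching crontab entry might look like this (the 01:00 schedule is just an example; any off-peak time works):

# crontab -e
0 1 * * * /bin/bash /data/scripts/delindex.sh >> /var/log/delindex.log 2>&1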