Search results: 23 matching posts.
2023-08-11
linuxea: The quiet evolution of log collection
A short history of log collection

Viewing logs and alerting on them are the core reasons we collect logs at all; typically 99% of log lines are useless unless they are fed into aggregation, period-over-period comparison or other analysis. The traditional ELK stack is heavy: both Logstash and Elasticsearch consume a lot of system resources, and at large scale it is not easy to consume a Kafka stream in anything close to real time.

Observability

Most applications today are distributed or built as microservices. Microservice architectures let developers build and release faster, but as the services multiply we understand less and less about how they actually run. OpenTelemetry is one of the tools that addresses this: once services have fanned out, the relationships between a service and its dependencies are exposed in an observable way, so both development and operations regain visibility into the system. For that to work, the system has to be observable in the first place. Observability describes how well we can understand what is happening inside a system: is it running or has it stopped, do users perceive it as getting faster or slower, how do we agree on KPI and SLA targets — in other words, what is the worst state we are willing to accept. We should be able to answer these questions, point at the problem, and ideally respond and resolve it before the service is actually interrupted. In terms of vocabulary, observability splits into event logs, distributed traces and aggregated metrics.

With that in mind, some recent moves in the open-source world make sense: the Nightingale (夜莺) monitoring project released V6 at the end of July 2023 and pivoted to building an observability platform. Keeping constant track of runtime state is never free, and the cloud vendors sell expensive observability services — Alibaba Cloud's commercial ARMS and Tencent's commercial equivalent. So today's topic starts from the evolution of log collection, and it inevitably ends at event logs (logs), traces and metrics.

The ELK era

The earliest ELK collection pipeline looked like the diagram below. Later Graylog became another popular choice. As containers took off, Logstash was clearly too heavy as a collector; Fluentd and Fluent Bit, with their plugin model, are much lighter shippers. For log alerting, the options were Logstash itself or Elasticsearch plugins. They all share the same problem, though: if you only want to collect some logs rather than everything, neither Logstash nor Fluentd/Fluent Bit makes that easy — you end up writing filter rules or labels, and the configuration quickly runs to a hundred lines or so.

The interlude in between was Alibaba's open-source log-pilot, which stopped collecting from every pod and let you opt in by passing environment variables, the same way Alibaba's early platform collected logs. But the good times didn't last: log-pilot suddenly stopped being updated and no longer received new features, and in the short term things drifted back to Fluentd. During that period Shimo (石墨) open-sourced ClickVisual, which stores logs in ClickHouse instead of ES; with the same resources its cluster outperforms an ES cluster, and ClickHouse makes it easy to expire old data with a TTL. ClickVisual was built for internal use and open-sourced afterwards, so its UI never won over a wide audience and the community is quiet, although the maintainers are active.

And it doesn't end there. Two years after log-pilot stalled, Alibaba released a new open-source project, iLogtail, but iLogtail seems to share log-pilot's fate: its community is always a step behind — patches, PR merges, issue replies — which leaves bystanders suspecting it is yet another KPI project. Around the same time loggie quietly appeared. Loggie is NetEase's log collector and, like ClickVisual, an internal product that was later open-sourced; its maintainers (and ClickVisual's) are comparatively responsive, so it has attracted more users. Loggie has effectively filled the gap log-pilot left behind, and it ships more features. The topology at this point looks like the diagram below.

Through all of this, the one piece that was never displaced is Logstash — the key link in the classic log-processing chain, with almost every feature you could ever need. That changed when Datadog's Vector appeared. Vector is written in Rust, uses far fewer resources than the Java-based Logstash, and can handle collection, relaying, filtering and processing; it can replace Logstash almost entirely.

And the story still isn't over: VictoriaLogs is on its way, and openobserve picked up 6K GitHub stars in less than a year. It positions itself against Elasticsearch plus Kibana — it can replace both — and claims to cut log storage costs by roughly 140x compared with Elasticsearch. It supports logs, metrics and traces (OpenTelemetry), clustering on S3, alerting, and querying with SQL and PromQL. Perhaps thanks to its Parquet storage, openobserve claims a single node can process more than 2 TB a day; on a Mac M2 it processes roughly 31 MB/s, i.e. 1.8 GB per minute or 2.6 TB per day. The last time we heard numbers like that was when VictoriaMetrics' storage was compared with TimescaleDB. Both openobserve and VictoriaMetrics use stateless storage; they are not the same thing, but both scale horizontally, which makes the direction of travel all the more obvious.
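Since the post argues that Vector can take over Logstash's collect/transform/ship duties, here is a minimal sketch of what that looks like. This is only an illustration under assumptions: the file paths, endpoint and index name are placeholders, and option names differ between Vector releases, so check them against your version before using it.

# vector.toml — tail nginx logs, tag them, ship to Elasticsearch (sketch, not a drop-in config)
[sources.nginx_logs]
type = "file"
include = ["/data/wwwlogs/*.log"]        # placeholder path

[transforms.tag]
type = "remap"
inputs = ["nginx_logs"]
source = '''
.list_id = "172_nginx_access"            # mimic the fields.list_id label used with filebeat in later posts
'''

[sinks.es]
type = "elasticsearch"
inputs = ["tag"]
endpoints = ["http://10.10.240.113:9200"]    # placeholder endpoint
bulk.index = "vector-nginx-%Y.%m.%d"         # option name varies by Vector version

Run it with something like `vector --config /etc/vector/vector.toml`; the point is simply that one small Rust binary covers the collector and processor roles that filebeat plus logstash play in the older posts below.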
August 11, 2023 · 341 views · 0 comments · 0 likes
2018-08-16
linuxea: logstash 6 and filebeat 6 configuration notes
Before configuring filebeat you may want to review the earlier setup in [ELK 6.3.2 installation and configuration [a cross-network forwarding approach]](https://www.linuxea.com/1889.html). I have tuned the configuration again, simply because one of my directories holds several nginx logs.

Configuring filebeat

Previously each log file was filtered on its own; now *.log matches every file ending in .log before it is sent to redis. With this filebeat configuration, every file under /data/wwwlogs/ ending in .log is collected under the %{[fields.list_id]} variable — 172_nginx_access in this example — and output to redis with a key of the same name; that includes the error log.

[root@linuxea-0702-DTNode01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /data/wwwlogs/*.log
  fields:
    list_id: 172_nginx_access
  exclude_files:
    - ^access
    - ^error
    - \.gz$
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
output.redis:
  hosts: ["47.90.33.131:6379"]
  password: "OTdmOWI4ZTM4NTY1M2M4OTZh"
  db: 2
  timeout: 5
  key: "%{[fields.list_id]:unknow}"

Files can also be excluded explicitly, e.g. exclude_files: ["/var/wwwlogs/error.log"].

To improve performance, persistence is switched off on redis:

save ""
#save 900 1
#save 300 10
#save 60 10000
appendonly no
aof-rewrite-incremental-fsync no

logstash configuration

If you also installed logstash from RPM — what a coincidence, so did I. In logstash, tune pipeline.workers, the output worker count and batch.size; the worker count can match the number of CPU cores, and if logstash runs on a dedicated host it can be set somewhat higher. Stripped of comments the configuration looks like this:

[root@linuxea-VM-Node117 /etc/logstash]# cat logstash.yml
node.name: node1
path.data: /data/logstash/data
#path.config: *.yml
log.level: info
path.logs: /data/logstash/logs
pipeline.workers: 16
pipeline.output.workers: 16
pipeline.batch.size: 10000
pipeline.batch.delay: 10

pipelines configuration

The pipelines file lists every pipeline: where each pipeline config lives and how many workers it starts with.

[root@linuxea-VM-Node117 /etc/logstash]# cat pipelines.yml
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: 172_nginx_access
  pipeline.workers: 1
  path.config: "/etc/logstash/conf.d/172_nginx_access.conf"
- pipeline.id: 76_nginx_access
  pipeline.workers: 1
  path.config: "/etc/logstash/conf.d/76_nginx_access.conf"

jvm.options

In jvm.options set the initial and maximum heap sizes to suit your hardware:

-Xms4g
-Xmx7g

Directory tree:

[root@linuxea-VM-Node117 /etc/logstash]# tree ./
./
|-- conf.d
|   |-- 172_nginx_access.conf
|   `-- 76_nginx_access.conf
|-- GeoLite2-City.mmdb
|-- jvm.options
|-- log4j2.properties
|-- logstash.yml
|-- patterns.d
|   |-- nginx
|   |-- nginx2
|   `-- nginx_error
|-- pipelines.yml
`-- startup.options

2 directories, 20 files

nginx pipeline config

The conf.d directory holds the individual pipeline files; there can be several. A single one looks roughly like this:

input {
    redis {
        host => "47.31.21.369"
        port => "6379"
        key => "172_nginx_access"
        data_type => "list"
        password => "OTdmOM4OTZh"
        threads => "5"
        db => "2"
    }
}
filter {
    if [fields][list_id] == "172_nginx_access" {
        grok {
            patterns_dir => [ "/etc/logstash/patterns.d/" ]
            match => { "message" => "%{NGINXACCESS}" }
            match => { "message" => "%{NGINXACCESS_B}" }
            match => { "message" => "%{NGINXACCESS_ERROR}" }
            match => { "message" => "%{NGINXACCESS_ERROR2}" }
            overwrite => [ "message" ]
            remove_tag => ["_grokparsefailure"]
            timeout_millis => "0"
        }
        geoip {
            source => "clent_ip"
            target => "geoip"
            database => "/etc/logstash/GeoLite2-City.mmdb"
        }
        useragent {
            source => "User_Agent"
            target => "userAgent"
        }
        urldecode {
            all_fields => true
        }
        mutate {
            gsub => ["User_Agent","[\"]",""]    # strip the " characters from User_Agent
            convert => [ "response","integer" ]
            convert => [ "body_bytes_sent","integer" ]
            convert => [ "bytes_sent","integer" ]
            convert => [ "upstream_response_time","float" ]
            convert => [ "upstream_status","integer" ]
            convert => [ "request_time","float" ]
            convert => [ "port","integer" ]
        }
        date {
            match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
        }
    }
}
output {
    if [fields][list_id] == "172_nginx_access" {
        elasticsearch {
            hosts => ["10.10.240.113:9200","10.10.240.114:9200"]
            index => "logstash-172_nginx_access-%{+YYYY.MM.dd}"
            user => "elastic"
            password => "dtopsadmin"
        }
    }
    stdout {codec => rubydebug}
}

The match patterns referenced above live under /etc/logstash/patterns.d/:

patterns_dir => [ "/etc/logstash/patterns.d/" ]
match => { "message" => "%{NGINXACCESS}" }
match => { "message" => "%{NGINXACCESS_B}" }
match => { "message" => "%{NGINXACCESS_ERROR}" }
match => { "message" => "%{NGINXACCESS_ERROR2}" }

grok pattern for the nginx access log:

[root@linuxea-VM-Node117 /etc/logstash]# cat patterns.d/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IP:clent_ip} (?:-|%{USER:ident}) \[%{HTTPDATE:log_date}\] \"%{WORD:http_verb} (?:%{PATH:baseurl}\?%{NOTSPACE:params}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" (%{IPORHOST:url_domain}|%{URIHOST:ur_domain}|-)\[(%{BASE16FLOAT:request_time}|-)\] %{NOTSPACE:request_body} %{QS:referrer_rul} %{GREEDYDATA:User_Agent} \[%{GREEDYDATA:ssl_protocol}\] \[(?:%{GREEDYDATA:ssl_cipher}|-)\]\[%{NUMBER:time_duration}\] \[%{NUMBER:http_status_code}\] \[(%{BASE10NUM:upstream_status}|-)\] \[(%{NUMBER:upstream_response_time}|-)\] \[(%{URIHOST:upstream_addr}|-)\]

Because a layer-4 proxy sits in front, some nginx logs fall back to the compiled-in default log format, so a grok pattern was written for that case as well:

[root@linuxea-VM-Node117 /etc/logstash]# cat patterns.d/nginx2
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS_B %{IPORHOST:clientip} (?:-|(%{WORD}.%{WORD})) (?:-|%{USER:ident}) \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:http_status_code} %{NOTSPACE:request_body} "%{GREEDYDATA:User_Agent}"

grok for the nginx error log:

[root@linuxea-VM-Node117 /etc/logstash]# cat patterns.d/nginx_error
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS_ERROR (?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}(%{NUMBER:pid:int}#%{NUMBER}:\s{1,}\*%{NUMBER}|\*%{NUMBER}) %{DATA:err_message}(?:,\s{1,}client:\s{1,}(?<client_ip>%{IP}|%{HOSTNAME}))(?:,\s{1,}server:\s{1,}%{IPORHOST:server})(?:, request: %{QS:request})?(?:, host: %{QS:client_ip})?(?:, referrer: \"%{URI:referrer})?
NGINXACCESS_ERROR2 (?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}%{GREEDYDATA:err_message}
August 16, 2018 · 4,982 views · 0 comments · 0 likes
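Before restarting filebeat and logstash with the configuration shown in the post above, it is worth validating the files first. These are standard checks for filebeat 6 and logstash 6; the paths assume the RPM layout used above, so adjust them if yours differs.

# validate the filebeat YAML without touching the output
filebeat test config -c /etc/filebeat/filebeat.yml

# compile-check a single pipeline file without starting it
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/172_nginx_access.conf --config.test_and_exit

If either command reports a parse error, fix the file before restarting the services; a broken pipeline config otherwise only shows up in the logstash log after startup.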
2018-08-08
linuxea: logstash 6.3.2 with redis + filebeat, an example (3)
An earlier post mentioned the idea of using redis as a relay. The previous two posts covered installing ELK; this one walks through collecting and processing logs with filebeat on 6.3.2, using nginx as the example. There may or may not be follow-ups.

filebeat installation and configuration

filebeat sends the logs to redis; there are a few configuration tricks along the way, noted around the config file. Download and install:

[root@linuxea-VM_Node-113 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.2-x86_64.rpm -O $PWD/filebeat-6.3.2-x86_64.rpm
[root@linuxea-VM_Node_113 ~]# yum localinstall $PWD/filebeat-6.3.2-x86_64.rpm -y

Start it:

[root@linuxea-VM_Node-113 /etc/filebeat/modules.d]# systemctl start filebeat.service

Check the log:

[root@linuxea-VM_Node-113 /etc/filebeat/modules.d]# tail -f /var/log/filebeat/filebeat
2018-08-03T03:13:32.716-0400    INFO    pipeline/module.go:81   Beat name: linuxea-VM-Node43_241_158_113.cluster.com
2018-08-03T03:13:32.717-0400    INFO    instance/beat.go:315    filebeat start running.
2018-08-03T03:13:32.717-0400    INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-08-03T03:13:32.717-0400    INFO    registrar/registrar.go:80       No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2018-08-03T03:13:32.745-0400    INFO    registrar/registrar.go:117      Loading registrar data from /var/lib/filebeat/registry
2018-08-03T03:13:32.745-0400    INFO    registrar/registrar.go:124      States Loaded from registrar: 0
2018-08-03T03:13:32.745-0400    INFO    crawler/crawler.go:48   Loading Inputs: 1
2018-08-03T03:13:32.745-0400    INFO    crawler/crawler.go:82   Loading and starting Inputs completed. Enabled inputs: 0
2018-08-03T03:13:32.746-0400    INFO    cfgfile/reload.go:122   Config reloader started
2018-08-03T03:13:32.746-0400    INFO    cfgfile/reload.go:214   Loading of config files completed.
2018-08-03T03:14:02.719-0400    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s

Configuration file

The entries under paths are the log paths and may use wildcards, but a wildcard means every log in the directory is written under a single fields id; that id travels to redis, then to logstash, and finally reaches kibana as one id. For this test, two separate inputs are used instead:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /data/wwwlogs/1015.log
  fields:
    list_id: 113_1015_nginx_access
- input_type: log
  paths:
    - /data/wwwlogs/1023.log
  fields:
    list_id: 113_1023_nginx_access
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
output.redis:
  hosts: ["IP:PORT"]
  password: "OTdmOWI4ZTM4NTY1M2M4OTZh"
  db: 2
  timeout: 5
  key: "%{[fields.list_id]:unknow}"

In the output, key: "%{[fields.list_id]:unknow}" means: use the value of [fields.list_id] when it is set, otherwise fall back to unknow; that key is what ends up in redis.

redis installation

In this setup redis is only a relay for the data; it can be a cluster or a single node depending on the data volume. Following my usual habit, redis of course runs in docker — install it with:

curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/docker-alpine-Redis/master/Sentinel/install_redis.sh|bash

After installation there is a docker-compose.yaml under /data/rds:

[root@iZ /data/rds]# cat docker-compose.yaml
version: '2'
services:
  redis:
    build:
      context: https://raw.githubusercontent.com/LinuxEA-Mark/docker-alpine-Redis/master/Sentinel/Dockerfile
    container_name: redis
    restart: always
    network_mode: "host"
    privileged: true
    environment:
      - REQUIREPASSWD=OTdmOWI4ZTM4NTY1M2M4OTZh
      - MASTERAUTHPAD=OTdmOWI4ZTM4NTY1M2M4OTZh
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /data/redis-data:/data/redis:Z
      - /data/logs:/data/logs

Check that data is arriving in redis:

[root@iZ /etc/logstash/conf.d]# redis-cli -h 127.0.0.1 -a OTdmOWI4ZTM4NTY1M2M4OTZh
127.0.0.1:6379> select 2
OK
127.0.0.1:6379[2]> keys *
1) "113_1015_nginx_access"
2) "113_1023_nginx_access"
127.0.0.1:6379[2]> lrange 113_1023_nginx_access 0 -1
1) "{\"@timestamp\":\"2018-08-04T04:36:26.075Z\",\"@metadata\":{\"beat\":\"\",\"type\":\"doc\",\"version\":\"6.3.2\"},\"beat\":{\"name\":\"linuxea-VM-Node43_13.cluster.com\",\"hostname\":\"linuxea-VM-Node43_23.cluster.com\",\"version\":\"6.3.2\"},\"host\":{\"name\":\"linuxea-VM-Node43_23.cluster.com\"},\"offset\":863464,\"message\":\"IP - [\xe\xe9\x9797\xb4:0.005 [200] [200] \xe5\x9b4:[0.005] \\\"IP:51023\\\"\",\"source\":\"/data/wwwlogs/1023.log\",\"fields\":{\"list_id\":\"113_1023_nginx_access\"}}"

logstash installation and configuration

logstash is installed on the internal network; it pulls the data from the public-facing redis, ships it to ES, and from there it reaches kibana.

[root@linuxea-VM-Node117 ~]# curl -Lk https://artifacts.elastic.co/downloads/logstash/logstash-6.3.2.tar.gz|tar xz -C /usr/local && useradd elk && cd /usr/local/ && ln -s logstash-6.3.2 logstash && mkdir /data/logstash/{db,logs} -p && chown -R elk.elk /data/logstash/ /usr/local/logstash-6.3.2 && cd logstash/config/ && mv logstash.yml logstash.yml.bak

Configuration — preparation

Before writing the config, download the IP database; it is used for the map view and referenced in the config later. Install GeoLite2-City:

[root@linuxea-VM-Node117 ~]# curl -Lk http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz|tar xz -C /usr/local/logstash-6.3.2/config/

The nginx formatting was already worked out on version 5.5, so the grok can be reused. Prepare the nginx log_format:

log_format upstream2 '$proxy_add_x_forwarded_for $remote_user [$time_local] "$request" $http_host'
    '[$body_bytes_sent] $request_body "$http_referer" "$http_user_agent" [$ssl_protocol] [$ssl_cipher]'
    '[$request_time] [$status] [$upstream_status] [$upstream_response_time] [$upstream_addr]';

Prepare the nginx patterns. You can check a log line against the pattern in kibana's grok debugger, or on grokdebug — although on 6.3.2 the two give different results:

[root@linuxea-VM-Node117 /usr/local/logstash-6.3.2/config]# cat patterns.d/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IP:clent_ip} (?:-|%{USER:ident}) \[%{HTTPDATE:log_date}\] \"%{WORD:http_verb} (?:%{PATH:baseurl}\?%{NOTSPACE:params}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" (%{IPORHOST:url_domain}|%{URIHOST:ur_domain}|-)\[(%{BASE16FLOAT:request_time}|-)\] %{NOTSPACE:request_body} %{QS:referrer_rul} %{GREEDYDATA:User_Agent} \[%{GREEDYDATA:ssl_protocol}\] \[(?:%{GREEDYDATA:ssl_cipher}|-)\]\[%{NUMBER:time_duration}\] \[%{NUMBER:http_status_code}\] \[(%{BASE10NUM:upstream_status}|-)\] \[(%{NUMBER:upstream_response_time}|-)\] \[(%{URIHOST:upstream_addr}|-)\]

The config file is below. The key in the input section is the redis key; on the filebeat side the key was "%{[fields.list_id]:unknow}", so here we match on [fields.list_id] — expressed as if [fields][list_id] equals 113_1023_nginx_access — and the event is processed when it matches. The grok section uses the nginx patterns; geoip needs database pointed at the mmdb and source set to clent_ip; the user agent is parsed as well. The output section needs a user and password to connect to ES — of course, without the crack (or a genuine licence) you cannot use authentication, but you can refer to the x-pack crack post.

input {
    redis {
        host => "47"
        port => "6379"
        key => "113_1015_nginx_access"
        data_type => "list"
        password => "I4ZTM4NTY1M2M4OTZh"
        threads => "5"
        db => "2"
    }
}
filter {
    if [fields][list_id] == "113_1023_nginx_access" {
        grok {
            patterns_dir => [ "/usr/local/logstash-6.3.2/config/patterns.d/" ]
            match => { "message" => "%{NGINXACCESS}" }
            overwrite => [ "message" ]
        }
        geoip {
            source => "clent_ip"
            target => "geoip"
            database => "/usr/local/logstash-6.3.2/config/GeoLite2-City.mmdb"
        }
        useragent {
            source => "User_Agent"
            target => "userAgent"
        }
        urldecode {
            all_fields => true
        }
        mutate {
            gsub => ["User_Agent","[\"]",""]    # strip the " characters from User_Agent
            convert => [ "response","integer" ]
            convert => [ "body_bytes_sent","integer" ]
            convert => [ "bytes_sent","integer" ]
            convert => [ "upstream_response_time","float" ]
            convert => [ "upstream_status","integer" ]
            convert => [ "request_time","float" ]
            convert => [ "port","integer" ]
        }
        date {
            match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
        }
    }
}
output {
    if [fields][list_id] == "113_1023_nginx_access" {
        elasticsearch {
            hosts => ["10.10.240.113:9200","10.10.240.114:9200"]
            index => "logstash-113_1023_nginx_access-%{+YYYY.MM.dd}"
            user => "elastic"
            password => "linuxea"
        }
    }
    stdout {codec => rubydebug}
}

JSON

That still isn't very fancy, so a JSON log format is added this time, like so:

log_format json '{"@timestamp":"$time_iso8601",'
    '"clent_ip":"$proxy_add_x_forwarded_for",'
    '"user-agent":"$http_user_agent",'
    '"host":"$server_name",'
    '"status":"$status",'
    '"method":"$request_method",'
    '"domain":"$host",'
    '"domain2":"$http_host",'
    '"url":"$request_uri",'
    '"url2":"$uri",'
    '"args":"$args",'
    '"referer":"$http_referer",'
    '"ssl-type":"$ssl_protocol",'
    '"ssl-key":"$ssl_cipher",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"request_length":"$request_length",'
    '"request_body":"$request_body",'
    '"responsetime":"$request_time",'
    '"upstreamname":"$upstream_http_name",'
    '"upstreamaddr":"$upstream_addr",'
    '"upstreamresptime":"$upstream_response_time",'
    '"upstreamstatus":"$upstream_status"}';

Add it to nginx.conf and adjust it per server block. The raw logs become less readable, but logstash performance improves, because logstash no longer runs grok and simply forwards the collected events to ES. I should point out that I did not end up using the JSON format, because it does not handle the user agent well and I have not found a workable way around that — if you know one, tell me. You can still do things like feeding everything via *.log into redis, all the way through to kibana, and then grouping in kibana for display.

Start it:

nohup sudo -u elk /usr/local/logstash-6.3.2/bin/logstash -f ./conf.d/*.yml >./nohup.out 2>&1 &

If nothing goes wrong, you will see an index named logstash-113_1023_nginx_access-%{+YYYY.MM.dd} in kibana.
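Because redis here is only a buffer between filebeat and logstash, it is worth checking whether logstash keeps up with the producers. A quick way — a small sketch assuming the db 2 and key names used above — is to watch the list length for each key:

# length of the pending queue; a value that keeps growing means logstash is falling behind
redis-cli -h 127.0.0.1 -a OTdmOWI4ZTM4NTY1M2M4OTZh -n 2 llen 113_1023_nginx_access

A length that stays near zero (or small and stable) means logstash drains the list about as fast as filebeat writes to it.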
August 8, 2018 · 3,442 views · 0 comments · 0 likes
2018-08-08
linuxea: ELK 6.3.2 x-pack crack (2)
Elasticsearch ships with a 30-day trial. Following some articles by experts online, I found the licence check can be bypassed after the trial; the whole process is fairly simple, so here are my notes.

1. Cracking x-pack 6.3.2

I am not sure why it is called a crack — the point is simply to unlock the restricted features and get what we want. Personally I do not love the word crack (piracy): apart from the nicer login screen, I could achieve the same with nginx or IP restrictions, and grafana is also a decent alternative. Still, as a self-styled craftsman one has to have some craftsman's spirit, so in a slightly melancholy mood I went ahead, got it working, and am sharing it.

Before starting, the order of operations matters:

1. Install ELK and start it with x-pack disabled.
2. Repackage x-pack and modify the licence check.
3. After becoming a platinum user via the modified licence, set the passwords.
4. Enable x-pack.

Important: xpack.security.enabled may only be set to true after the crack and once SSL is configured; once the passwords are set you can log in.

1.1 Modify the licence verification

Prepare the two files LicenseVerifier.java and XPackBuild.java, which will replace the originals. LicenseVerifier.java:

package org.elasticsearch.license;

import java.nio.*;
import java.util.*;
import java.security.*;
import org.elasticsearch.common.xcontent.*;
import org.apache.lucene.util.*;
import org.elasticsearch.common.io.*;
import java.io.*;

public class LicenseVerifier {
    public static boolean verifyLicense(final License license, final byte[] encryptedPublicKeyData) {
        return true;
    }
    public static boolean verifyLicense(final License license) {
        return true;
    }
}

XPackBuild.java:

package org.elasticsearch.xpack.core;

import org.elasticsearch.common.io.*;
import java.net.*;
import org.elasticsearch.common.*;
import java.nio.file.*;
import java.io.*;
import java.util.jar.*;

public class XPackBuild {
    public static final XPackBuild CURRENT;
    private String shortHash;
    private String date;

    @SuppressForbidden(reason = "looks up path of xpack.jar directly")
    static Path getElasticsearchCodebase() {
        final URL url = XPackBuild.class.getProtectionDomain().getCodeSource().getLocation();
        try {
            return PathUtils.get(url.toURI());
        } catch (URISyntaxException bogus) {
            throw new RuntimeException(bogus);
        }
    }

    XPackBuild(final String shortHash, final String date) {
        this.shortHash = shortHash;
        this.date = date;
    }

    public String shortHash() { return this.shortHash; }
    public String date() { return this.date; }

    static {
        final Path path = getElasticsearchCodebase();
        String shortHash = null;
        String date = null;
        Label_0157: {
            shortHash = "Unknown";
            date = "Unknown";
        }
        CURRENT = new XPackBuild(shortHash, date);
    }
}

1.1.2 Compile to class files

Compile to class files and then replace the originals. If the installation lives under /usr/local, it looks roughly like this. LicenseVerifier:

javac -cp "/usr/local/elasticsearch-6.3.2/lib/elasticsearch-6.3.2.jar:/usr/local/elasticsearch-6.3.2/lib/lucene-core-7.3.1.jar:/usr/local/elasticsearch-6.3.2/modules/x-pack/x-pack-core/x-pack-core-6.3.2.jar" LicenseVerifier.java

XPackBuild:

javac -cp "/usr/local/elasticsearch-6.3.2/lib/elasticsearch-6.3.2.jar:/usr/local/elasticsearch-6.3.2/lib/lucene-core-7.3.1.jar:/usr/local/elasticsearch-6.3.2/modules/x-pack/x-pack-core/x-pack-core-6.3.2.jar:/usr/local/elasticsearch-6.3.2/lib/elasticsearch-core-6.3.2.jar" XPackBuild.java

1.1.3 Replace

Next, fetch x-pack-core/x-pack-core-6.3.2.jar and unpack it locally. Copy it over:

cp -a /usr/local/elasticsearch-6.3.2/modules/x-pack/x-pack-core/x-pack-core-6.3.2.jar .

At this point the directory holds five files:

[root@linuxea-vm-Node113 /es]# ll
total 1736
-rw-r--r-- 1 root root     410 Aug  7 20:53 LicenseVerifier.class
-rw-r--r-- 1 root root     593 Aug  7 20:50 LicenseVerifier.java
-rw-r--r-- 1 root root    1508 Aug  7 20:53 XPackBuild.class
-rw-r--r-- 1 root root    1358 Aug  7 20:51 XPackBuild.java
-rw-r--r-- 1 root root 1759804 Aug  7 20:49 x-pack-core-6.3.2.jar

To keep things clear, create a directory jardir, copy the jar in, unpack it, then delete the original jar (or back it up):

[root@linuxea-vm-Node113 /es]# mkdir jardir
[root@linuxea-vm-Node113 /es]# cp x-pack-core-6.3.2.jar jardir/
[root@linuxea-vm-Node113 /es]# cd jardir/
[root@linuxea-vm-Node113 /es/jardir]# jar -xf x-pack-core-6.3.2.jar
[root@linuxea-vm-Node113 /es/jardir]# \rm -rf x-pack-core-6.3.2.jar

Overwrite with the new class files:

[root@linuxea-vm-Node113 /es/jardir]# cd ..
[root@linuxea-vm-Node113 /es]# cp -a LicenseVerifier.class jardir/org/elasticsearch/license/
cp: overwrite 'jardir/org/elasticsearch/license/LicenseVerifier.class'? yes
[root@linuxea-vm-Node113 /es]# cp -a XPackBuild.class jardir/org/elasticsearch/xpack/core/
cp: overwrite 'jardir/org/elasticsearch/xpack/core/XPackBuild.class'? yes

Once the files have been copied over org/elasticsearch/xpack/core and org/elasticsearch/license inside jardir, repackage:

[root@linuxea-vm-Node113 /es]# cd jardir/
[root@linuxea-vm-Node113 /es/jardir]# jar -cvf x-pack-core-6.3.2.jar *
added manifest
adding: logstash-index-template.json (in = 994) (out = 339) (deflated 65%)
ignoring entry META-INF/
ignoring entry META-INF/MANIFEST.MF
adding: META-INF/LICENSE.txt (in = 13675) (out = 5247) (deflated 61%)

Copy the newly generated x-pack-core-6.3.2.jar back into /usr/local/elasticsearch-6.3.2/modules/x-pack/x-pack-core/; the licence modification is complete, then restart. Note that the old jar was removed before the replacement, and the new one is the jar -cvf rebuild.

[root@linuxea-vm-Node113 ~]# ps aux|egrep ^elk|awk '{print $2}'|xargs kill && sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d

1.1.4 Request a licence

Apply on the elastic registration page; the licence is sent to your mailbox. Download it and edit it:

change "expiry_date_in_millis":1565135999999 to "expiry_date_in_millis":2565135999999
change "type":"basic" to "type":"platinum"

It ends up looking roughly like this (of course, you cannot update with the format below as-is — use the licence requested from the official site, sent to the email address you provided):

{"license":{
    "uid":"2651b126-fef3-480e-ad4c-a60eb696a733",
    "type":"platinum",                          # platinum
    "issue_date_in_millis":1533513600000,
    "expiry_date_in_millis":2565135999999,      # expiry time
    "max_nodes":100,
    "issued_to":"mark tang (www.linuxea.com)",
    "issuer":"Web Form",
    "signature":"AAAAAwAAAA2Of4OxzPNK/yl15sO4AAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxaktJRVl5MUYvUWh3bHZVUTllbXNPbzBUemtnbWpBbmlWRmRZb25KNFlBR2x0TXc2K2p1Y1VtMG1UQU9TRGZVSGRwaEJGUjE3bXd3LzRqZ05iLzRteWFNekdxRGpIYlFwYkJiNUs0U1hTVlJKNVlXekMrSlVUdFIvV0FNeWdOYnlESDc3MWhlY3hSQmdKSjJ2ZTcvYlBFOHhPQlV3ZHdDQ0tHcG5uOElCaDJ4K1hob29xSG85N0kvTWV3THhlQk9NL01VMFRjNDZpZEVXeUtUMXIyMlIveFpJUkk2WUdveEZaME9XWitGUi9WNTZVQW1FMG1DenhZU0ZmeXlZakVEMjZFT2NvOWxpZGlqVmlHNC8rWVVUYzMwRGVySHpIdURzKzFiRDl4TmM1TUp2VTBOUlJZUlAyV0ZVL2kvVk10L0NsbXNFYVZwT3NSU082dFNNa2prQ0ZsclZ4NTltbU1CVE5lR09Bck93V2J1Y3c9PQAAAQAPymKvMYxtKy8+1tbaE0yvRbt4USyN5VYwY1+vBfxNyjUtrIgW3RQJfj/3McveTM7hiKHZXeDT+BAn9NdgFIBJ5ztA94s72RlkUJBQjSiqg50/1Nu5OTKloPKCs4R7pk42uapNISWubpRIXyGGer0KKLkpoBBlQkvwETNHk/aDGnzBzOJ/vppRYQgUtQx5ZXVo+U391w1sNj8lXuZrLwEByYU5ms25HVG1Ith0THelZMqoB0x2gvZklR5RQbEmWPGXOsBXLnfLPM571Op63TxGt+vsiNIvxBjsuq62tuhRkgAHkyqY2z+RLFDafQxUXtz41b6fgRLV5XPCDqiOWYvB",
    "start_date_in_millis":1533513600000}}

Once the expiry and the platinum type are edited, upload it: go to Management, choose License Management, click update license and upload the modified licence (mine is already done — that flashy PLATINUM badge is it). Back in License Management, the expiry now reads April 15, 2051 9:46 AM CST. With that the ES licence modification is complete — the crack has succeeded — so next let's use the SSL verification feature.

2. elasticsearch SSL

On 6.3, x-pack cannot enable password login without this, but that should not stop us from going through it. There are some permission issues — I found it unusable at first — so treat this section as a reference; it is not enabled in the later configuration. Pay attention to the permission notes:

chmod +r $PATH/cerp/*
chown -R elk.elk /data/elasticsearch

2.1.1 Issue certificates

Create the certificate authority (CA); this writes an elastic-stack-ca.p12 file to the current directory containing the CA's public certificate and the node signing material and private key.

[root@linuxea-vm-Node113 ~/crt]# /usr/local/elasticsearch-6.3.2/bin/elasticsearch-certutil ca

When prompted for a protection password, enter one and remember it (if you set one). Then generate the certificate and private key, entering that password (skip it if none was set):

[root@linuxea-vm-Node113 ~/crt]# /usr/local/elasticsearch-6.3.2/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

If all goes as expected you will see two files:

-rw------- 1 root root 3443 Aug  6 13:00 elastic-certificates.p12
-rw------- 1 root root 2527 Aug  6 12:59 elastic-stack-ca.p12

2.2.2 Use them

Create the certificate directory:

[root@linuxea-vm-Node113 ~/crt]# mkdir /usr/local/elasticsearch-6.3.2/config/certs/
[root@linuxea-vm-Node113 ~/crt]# cp elastic-* /usr/local/elasticsearch-6.3.2/config/certs/

Copy them to the other elasticsearch machines (create the directory there too):

[root@linuxea-vm-Node113 ~/crt]# scp elastic-* 10.10.240.114:/usr/local/elasticsearch-6.3.2/config/certs/
[root@linuxea-vm-Node113 ~/crt]# scp elastic-* 10.0.1.49:/usr/local/elasticsearch-6.3.2/config/certs/

Then fix the permissions, mainly so the java process can read the files, otherwise it fails with Caused by: java.nio.file.AccessDeniedException:

[root@linuxea-vm-Node113 ~/crt]# chmod +r /usr/local/elasticsearch-6.3.2/config/certs/
[root@linuxea-vm-Node113 ~/crt]# chown -R elk.elk /data/elasticsearch

2.2.3 Reference them in the configuration

Write the following into both elasticsearch nodes' configuration. If you are unsure how the earlier configuration looked, see ELK 6.3.2 installation and configuration [a cross-network forwarding approach] (https://www.linuxea.com/1889.html), which covers installation and configuration.

xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.ssl.verification_mode: none
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

Note: xpack.security.enabled is now true (it was false while x-pack was disabled earlier), and the keystore/truststore paths are relative. xpack.ssl.verification_mode must be none, otherwise it errors — effectively it skips server key verification and just uses the certificates, at the cost of some diagnostics. xpack.security.transport.ssl.enabled must be true, otherwise you get the [o.e.x.s.t.n.SecurityNetty4ServerTransport] [master] exception caught on transport layer [NettyTcpChannel] error.

3. Set the passwords

After the steps above, and provided the restart shows no errors, pick one elasticsearch node and run elasticsearch-setup-passwords interactive. If you followed along you will get the dialogue below; just enter the passwords:

[root@linuxea-vm-Node113 ~]# /usr/local/elasticsearch-6.3.2/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system,beats_system.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [elastic]

Once the passwords are entered they are synced to the other nodes; all that remains is updating the kibana and logstash configuration.

3.1 Configure password authentication in kibana

Set xpack.security.enabled to true, enable monitoring and add the credentials:

xpack.security.enabled: true
xpack.monitoring.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "linuxea"

Restart kibana and you can log in. Note: if the order is wrong this may fail — watch the errors in your logs; the correct order is always crack first, then enable x-pack.
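After the restart, the licence can also be confirmed from the API rather than only through Kibana — a quick check using the 6.x licence endpoint and the node address from the configs above (adjust host and credentials to your cluster):

# "type" should now read "platinum" and the expiry should match the edited expiry_date_in_millis
curl -u elastic 'http://10.10.240.113:9200/_xpack/license?pretty'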
August 8, 2018 · 4,930 views · 0 comments · 0 likes
2018-08-07
linuxea: ELK 6.3.2 installation and configuration [a cross-network forwarding approach] (1)
For various reasons I need to build an ELK platform on the internal network and pull in logs from cloud machines. The cloud nodes belong to more than one provider, which means they cannot reach each other over private networks and are widely scattered. The constraints are: ELK lives on the internal network and only ever pulls — no internal IP is exposed for the outside to call (no NAT), so as long as the internal network has outbound access it works — and internal hardware is cheap. Based on these three points the scenario is: the scattered cloud nodes push data to a redis node (ideally a cluster; kafka's password setup is too fiddly), and the internal ELK then pulls the logs from redis down to local storage. Take care to get the redis firewall rules right — this is a security concern (if you have the energy, go straight for kafka).

You can download either the RPM packages or the tar.gz binaries from the official site; I tested both, and both were used for the x-pack crack experiments (a crack example comes later in this series).

Prerequisite: install the JDK.

yum install http://10.10.240.145/windows-client/jdk/jdk-8u171-linux-x64.rpm -y

If the connection to 10.10.240.145 fails, don't worry — 10.10.240.145 is my internal mirror (^_^).

Adjust the system parameters:

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
echo "elk - nofile 65536" >> /etc/security/limits.conf

1. elasticsearch node install

Download the elasticsearch package and install it on the elasticsearch nodes (pulled from the internal mirror here). The one-liner below 1) creates the user, 2) creates the db and logs directories, 3) backs up the original config file, and 4) hands ownership of the unpacked directory, the data directory and the log directory to the elk user.

curl -Lk http://10.10.240.145/elk/elasticsearch-6.3.2.tar.gz|tar xz -C /usr/local/ && useradd elk && cd /usr/local/ && ln -s elasticsearch-6.3.2 elasticsearch && mkdir /data/elasticsearch/{db,logs} -p && chown -R elk.elk /data/elasticsearch/ /usr/local/elasticsearch* && cd elasticsearch/config/ && mv elasticsearch.yml elasticsearch.yml.bak

1.2 elasticsearch configuration files

There are three configuration files — one for node1, one for node2 and one for the coordinating node — and they differ only slightly.

1.2.1 elasticsearch_node1

cluster.name: linuxea-app_ds
node.name: master
path.data: /data/elasticsearch/db
path.logs: /data/elasticsearch/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.10.240.113
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.10.240.113"]
xpack.security.enabled: false

Start it:

[root@linux-vm-Node113 ~]# sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d

1.2.2 elasticsearch_node2

[root@linux-vm-Node114 /usr/local/elasticsearch-6.3.2/config]# cat /usr/local/elasticsearch-6.3.2/config/elasticsearch.yml
cluster.name: linuxea-app_ds
node.name: slave
path.data: /data/elasticsearch/db
path.logs: /data/elasticsearch/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.10.240.114
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.10.240.113"]
#xpack.monitoring.collection.enabled: true
xpack.security.enabled: false

Start it:

[root@linux-vm-Node114 ~]# sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d

1.2.3 Open the firewall

Add these rules to the firewall configuration:

-A INPUT -s 10.0.1.49 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "logstash" -j ACCEPT
-A INPUT -s 10.10.240.117 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "kibana" -j ACCEPT
-A INPUT -s 10.10.240.114 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-114" -j ACCEPT
-A INPUT -s 10.10.240.113 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-113" -j ACCEPT

Or add temporary rules opening 9200 and 9300:

iptables -I INPUT 5 -s 10.0.1.49 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "logstash" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.117 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "kibana" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.114 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-114" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.113 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-113" -j ACCEPT

So, once node2 is up, watch the logs to see whether anything went wrong.

2. Configure the ES coordinating node and kibana

The coordinating node is the load-balancing role. Requests such as searches or bulk indexing may touch data held on different data nodes. A search request, for example, runs in two phases coordinated by the node that received the client request. In the scatter phase the coordinating node forwards the request to the data nodes that hold the data; each data node executes it locally and returns its results. In the gather phase the coordinating node reduces the per-node results into a single global result set. With node.master, node.data and node.ingest set to false, the node acts purely as a coordinator.

2.1 Configure the coordinating node

The tar.gz package is used here, started as the elk user:

[root@linux-vm-Node49 ~]# curl -Lk http://10.10.240.145/elk/elasticsearch-6.3.2.tar.gz|tar xz -C /usr/local/
[root@linux-vm-Node49 ~]# useradd elk
[root@linux-vm-Node49 ~]# cd /usr/local/ && ln -s elasticsearch-6.3.2 elasticsearch
[root@linux-vm-Node49 /usr/local]# mkdir /data/elasticsearch/{db,logs} -p
[root@linux-vm-Node49 /usr/local]# chown -R elk.elk /data/elasticsearch/ /usr/local/elasticsearch-6.3.2
[root@linux-vm-Node49 /usr/local]# cd elasticsearch/config/
[root@linux-vm-Node49 /usr/local/elasticsearch/config]# mv elasticsearch.yml elasticsearch.yml.bak

Coordinating node configuration — the coordinating node and kibana share one machine and handle the forwarding:

cluster.name: linuxea-app_ds
node.name: coordinating
path.data: /data/elasticsearch/db
path.logs: /data/elasticsearch/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.0.1.49
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.10.240.113"]
node.master: false
node.data: false
node.ingest: false
search.remote.connect: false
node.ml: false
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 1

You still need the prerequisite settings; change the owner and group and then start it.

2.2 kibana install

[root@linux-vm-Node49 ~]# curl -Lk http://10.10.240.145/elk/kibana-6.3.2-linux-x86_64.tar.gz|tar xz -C /usr/local/
[root@linux-vm-Node49 ~]# mkdir /data/kibana/logs/ -p

server.name: kibana
server.port: 5601
server.host: "10.0.1.49"
elasticsearch.url: "http://10.10.240.113:9200"
logging.dest: /data/kibana/logs/kibana.log
#logging.dest: stdout
logging.silent: false
logging.quiet: false
kibana.index: ".kibana"
xpack.security.enabled: false
#xpack.monitoring.enabled: true
#elasticsearch.username: "elastic"
#elasticsearch.password: "linuxea"

Note that x-pack is not enabled here; kibana is opened straight away without it.
August 7, 2018 · 3,420 views · 0 comments · 0 likes
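Once both data nodes and the coordinating node from the post above are up, two standard Elasticsearch APIs (using the node address from the configs above) confirm that they actually joined one cluster:

curl -s 'http://10.10.240.113:9200/_cat/nodes?v'              # should list the master, slave and coordinating nodes
curl -s 'http://10.10.240.113:9200/_cluster/health?pretty'    # cluster_name should be linuxea-app_ds, status green/yellow

If the coordinating node is missing from _cat/nodes, check discovery.zen.ping.unicast.hosts and the 9300 firewall rules first.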
2017-09-13
linuxea: ELK - using AMap (Gaode) tiles with Kibana 5.5
Download the IP geolocation database from: https://dev.maxmind.com/zh-hans/geoip/geoip2/geolite2-%E5%BC%80%E6%BA%90%E6%95%B0%E6%8D%AE%E5%BA%93/ (take the country-level one). After unpacking you get this file:

[root@linuxea.com-Node49 /etc/logstash]# ll GeoLite2-City.mmdb
-rw-r--r-- 1 logstash logstash 58082983 Aug  3 08:57 GeoLite2-City.mmdb

Apply it in the configuration, for example for the nginx access log:

geoip {
    source => "clent_ip"
    target => "geoip"
    database => "/etc/logstash/GeoLite2-City.mmdb"
}

database points at the location of the map database. In kibana 5.5 only a config change is needed: append one line to the kibana configuration file:

tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
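To verify that the database path and the clent_ip field line up before touching the real pipeline, a throwaway stdin-to-stdout logstash config works. This is only a sketch — the field name and mmdb path are taken from the snippet above, and the logstash binary path assumes an RPM install:

# /tmp/geoip-test.conf: feed an IP on stdin and inspect the geoip block on stdout
input { stdin {} }
filter {
  mutate { add_field => { "clent_ip" => "%{message}" } }
  geoip {
    source   => "clent_ip"
    target   => "geoip"
    database => "/etc/logstash/GeoLite2-City.mmdb"
  }
}
output { stdout { codec => rubydebug } }

Run it with: echo "8.8.8.8" | /usr/share/logstash/bin/logstash -f /tmp/geoip-test.conf — the event printed should contain a populated geoip object if the lookup works.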
September 13, 2017 · 6,165 views · 0 comments · 0 likes
2017-09-11
linuxea: ELK 5.5 elasticsearch x-pack crack
For the ELK 6.3.2 x-pack crack, see https://www.linuxea.com/1895.html.

Create the LicenseVerifier.java file:

[root@linuxea.com-Node61 /elk/]# cat LicenseVerifier.java
package org.elasticsearch.license;

import java.nio.*;
import java.util.*;
import java.security.*;
import org.elasticsearch.common.xcontent.*;
import org.apache.lucene.util.*;
import org.elasticsearch.common.io.*;
import java.io.*;

public class LicenseVerifier {
    public static boolean verifyLicense(final License license, final byte[] encryptedPublicKeyData) {
        return true;
    }
    public static boolean verifyLicense(final License license) {
        return true;
    }
}

Compile the class file:

[root@linuxea.com-Node49 ~/elk]# javac -cp "/usr/share/elasticsearch/lib/elasticsearch-5.5.1.jar:/usr/share/elasticsearch/lib/lucene-core-6.6.0.jar:/usr/share/elasticsearch/plugins/x-pack/x-pack-5.5.1.jar" LicenseVerifier.java
[root@linuxea.com-Node49 ~/elk]# ls
LicenseVerifier.class  LicenseVerifier.java
[root@linuxea.com-Node49 ~/elk]# cd /usr/share/elasticsearch/plugins/x-pack/
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack]# mkdir test
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack]# mv x-pack-5.5.1.jar test/

Back up x-pack-5.5.1.jar:

[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# cp x-pack-5.5.1.jar /opt

Unpack it:

[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# jar xvf x-pack-5.5.1.jar

Replace the class:

[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# cd org/elasticsearch/license
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test/org/elasticsearch/license]# cp /root/elk/LicenseVerifier.class ./

Go back to the test directory and repackage:

[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test/org/elasticsearch/license]# cd /usr/share/elasticsearch/plugins/x-pack/test/
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# jar cvf x-pack-5.5.1.jar .

Put the repacked jar back into the x-pack directory:

[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# cp x-pack-5.5.1.jar ../

Request a licence at https://license.elastic.co/registration. It is emailed to you shortly after; then edit the licence file. There are different tiers with different privileges: open source, basic, gold, and platinum.

curl -XPUT -u elastic 'http://<host>:<port>/_xpack/license' -H "Content-Type: application/json" -d @license.json

Edit the licence: after requesting one and receiving it by email, just modify it as below and save the file as license.json.

{"license":{"uid":"d13W1FM-ef9XWi-45eAKLH6-afT5b4-b8erC7460","type":"platinum","issue_date_in_millis":11042324000000,"expiry_date_in_millis":2535123399999,"max_nodes":100,"issued_to":"sean wang (alibaba)","issuer":"Web Form","signature":"AAAAAwAAAA2kxmZrvpZZohthD/HAAAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxaktJRVl5MUYvUWWpBbmlWRmRZb25KNFlBR2x0TXc2K2p1Y1VtMG1UQU9TRGZVSGRwaEJGUjE3bXd3LzRqZ05iLzRteWFNekNUs0U1hTVlJK2E1AD93AD04A03C3DF7565FA377223916FA881A19A675E9BD2F78680EE545265lESDc3MWhlY3hSQmdKSjJ2ZTcvYlBFOHhPQlV3ZHdDQ0tHcG5uOElCaDJ4K1hob29xSG85N0kvTWV3THhlQk9NL01VMFRjNDZpZEVXeUtUMXIyMlIveFpJUkk2WUdveEZaME9XWitGUi9WNTZVQW1FMG1DenhC8rWVVUYzMwRGVySHpIdURzKzFiRDl4TmM1TUp2VTBOUlJZUlAyV0ZVL2kvVk10L0NsbXNFYVZwT3NSU082dFNNa2prQ0ZsclZ4NTltbU1CVE5lR09Bck93V2J1Y3c9PQAAAQBvSGrvXPAAtLbErFH431nJyyyuZ1A5Mqnq2mmEY2NiFA1GUTjzEorVn9rWD20vTAZaR/EUbdQ1xAKLH1/WK/Ur4ct5Gpv3KwPVI1Lvn7q5BqoO5F4AYGcaUJqu8erCuGYz9XHGipAYpCUDVppRC294MsR/o6XJLNn7VTp+FHXRIVAbgWidQQHxaT3MQo/y38t7pKZvMQQ7l5DEp0foPhgW9Nm4coK4WXoT87/LkhCwMtH5NLmD80rZKy0XKX8AXEK+usf+gtv1iIY35t7wB8EbHPO+mUlBT5rAb","start_date_in_millis":1504224000000}}

Before the change:

[root@linuxea.com-Node49 ~/elk]# curl -XGET -u elastic:linuxea 'http://10.0.1.49:9200/_license'
{
  "license" : {
    "status" : "active",
    "uid" : "427cbb8e-9d96-435f-b56d-fa2efeb438c5",
    "type" : "trial",
    "issue_date" : "2017-09-01T14:28:04.736Z",
    "issue_date_in_millis" : 1504276084736,
    "expiry_date" : "2017-10-01T14:28:04.736Z",
    "expiry_date_in_millis" : 1506868084736,
    "max_nodes" : 1000,
    "issued_to" : "linuxea-app",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}

Enter the password and apply the change:

[root@linuxea.com-Node49 ~/elk]# curl -XPUT -u elastic 'http://10.0.1.49:9200/_xpack/license' -H "Content-Type: application/json" -d @license.json
Enter host password for user 'elastic':
{"acknowledged":true,"license_status":"valid"}

Check again after the change:

[root@linuxea.com-Node49 ~/elk]# curl -XGET -u elastic:linuxea 'http://10.0.1.49:9200/_license'
{
  "license" : {
    "status" : "active",
    "uid" : "d13W1FM-ef9XWi-45eAKLH6-afT5b4-b8erC7460",
    "type" : "platinum",
    "issue_date" : "2017-09-01T00:00:00.000Z",
    "issue_date_in_millis" : 11042324000000,
    "expiry_date" : "2050-05-11T01:46:39.999Z",
    "expiry_date_in_millis" : 2535123399999,
    "max_nodes" : 100,
    "issued_to" : "sean wang (alibaba)",
    "issuer" : "Web Form",
    "start_date_in_millis" : 11042324000000
  }
}
[root@linuxea.com-Node49 ~/elk]#
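Before restarting elasticsearch it is worth confirming that the replacement class really made it into the repacked jar — plain JDK tooling, with the jar path from the steps above:

# the class should be listed inside the jar now sitting in the x-pack plugin directory
jar tf /usr/share/elasticsearch/plugins/x-pack/x-pack-5.5.1.jar | grep -i LicenseVerifier

If nothing is printed, the repack was done from the wrong directory and elasticsearch will keep enforcing the original licence check.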
September 11, 2017 · 10,982 views · 5 comments · 0 likes