2018-08-08
linuxea: ELK 6.3.2 x-pack Crack (Part 2)
Elasticsearch comes with a 30-day trial period. Following a few articles by the great minds of the internet, I tried it out and found the trial limits can be cracked. The whole process is fairly simple, hence these notes.

1. Cracking x-pack 6.3.2

I am not sure why this is called "cracking"; as the name suggests, it just means unlocking the restricted features to get what we want. Personally I disagree with the word crack (piracy), because for my purposes, apart from the prettier login screen, I could just as well restrict access with nginx or by IP address, and grafana would also be a decent choice. But for a craftsman who fancies himself a wandering swordsman, craftsmanship is non-negotiable, so with an "if I'm doing it, I'm doing the whole thing" attitude, and in a faintly melancholy mood, I got it all working and am sharing it here.

Before starting, the order of the steps is worth spelling out:

1. Install ELK and start it with x-pack disabled
2. Repackage x-pack and modify the license
3. Once the license change has made you a platinum user, change the passwords
4. Enable x-pack

Important: xpack.security.enabled may only be set to true after the crack, with SSL configured; once the passwords are set you can log in.

1.1 Modifying the license

Prepare the two files LicenseVerifier.java and XPackBuild.java, which will replace the originals.

LicenseVerifier.java:

```java
package org.elasticsearch.license;

import java.nio.*;
import java.util.*;
import java.security.*;
import org.elasticsearch.common.xcontent.*;
import org.apache.lucene.util.*;
import org.elasticsearch.common.io.*;
import java.io.*;

public class LicenseVerifier {
    public static boolean verifyLicense(final License license, final byte[] encryptedPublicKeyData) {
        return true;
    }
    public static boolean verifyLicense(final License license) {
        return true;
    }
}
```

XPackBuild.java:

```java
package org.elasticsearch.xpack.core;

import org.elasticsearch.common.io.*;
import java.net.*;
import org.elasticsearch.common.*;
import java.nio.file.*;
import java.io.*;
import java.util.jar.*;

public class XPackBuild {
    public static final XPackBuild CURRENT;
    private String shortHash;
    private String date;

    @SuppressForbidden(reason = "looks up path of xpack.jar directly")
    static Path getElasticsearchCodebase() {
        final URL url = XPackBuild.class.getProtectionDomain().getCodeSource().getLocation();
        try {
            return PathUtils.get(url.toURI());
        } catch (URISyntaxException bogus) {
            throw new RuntimeException(bogus);
        }
    }

    XPackBuild(final String shortHash, final String date) {
        this.shortHash = shortHash;
        this.date = date;
    }

    public String shortHash() {
        return this.shortHash;
    }

    public String date() {
        return this.date;
    }

    static {
        final Path path = getElasticsearchCodebase();
        String shortHash = null;
        String date = null;
        Label_0157: {
            shortHash = "Unknown";
            date = "Unknown";
        }
        CURRENT = new XPackBuild(shortHash, date);
    }
}
```

1.1.2 Compiling to class files

Compile both to .class files, then replace the originals. If your install lives under /usr/local, the commands look roughly like this.

LicenseVerifier:

```bash
javac -cp "/usr/local/elasticsearch-6.3.2/lib/elasticsearch-6.3.2.jar:/usr/local/elasticsearch-6.3.2/lib/lucene-core-7.3.1.jar:/usr/local/elasticsearch-6.3.2/modules/x-pack/x-pack-core/x-pack-core-6.3.2.jar" LicenseVerifier.java
```

XPackBuild:

```bash
javac -cp "/usr/local/elasticsearch-6.3.2/lib/elasticsearch-6.3.2.jar:/usr/local/elasticsearch-6.3.2/lib/lucene-core-7.3.1.jar:/usr/local/elasticsearch-6.3.2/modules/x-pack/x-pack-core/x-pack-core-6.3.2.jar:/usr/local/elasticsearch-6.3.2/lib/elasticsearch-core-6.3.2.jar" XPackBuild.java
```

1.1.3 Replacing

Next, fetch x-pack-core/x-pack-core-6.3.2.jar and unpack it locally.

Copy it into the working directory:

```bash
cp -a /usr/local/elasticsearch-6.3.2/modules/x-pack/x-pack-core/x-pack-core-6.3.2.jar .
```

At this point the directory holds 5 files:

```
[root@linuxea-vm-Node113 /es]# ll
总用量 1736
-rw-r--r-- 1 root root     410 8月  7 20:53 LicenseVerifier.class
-rw-r--r-- 1 root root     593 8月  7 20:50 LicenseVerifier.java
-rw-r--r-- 1 root root    1508 8月  7 20:53 XPackBuild.class
-rw-r--r-- 1 root root    1358 8月  7 20:51 XPackBuild.java
-rw-r--r-- 1 root root 1759804 8月  7 20:49 x-pack-core-6.3.2.jar
```

To keep things easier to tell apart, create a directory jardir, copy the jar in and unpack it, then delete (or back up) the original jar:

```bash
[root@linuxea-vm-Node113 /es]# mkdir jardir
[root@linuxea-vm-Node113 /es]# cp x-pack-core-6.3.2.jar jardir/
[root@linuxea-vm-Node113 /es]# cd jardir/
[root@linuxea-vm-Node113 /es/jardir]# jar -xf x-pack-core-6.3.2.jar
[root@linuxea-vm-Node113 /es/jardir]# \rm -rf x-pack-core-6.3.2.jar
```

Overwrite with the new class files:

```bash
[root@linuxea-vm-Node113 /es/jardir]# cd ..
[root@linuxea-vm-Node113 /es]# cp -a LicenseVerifier.class jardir/org/elasticsearch/license/
cp:是否覆盖"jardir/org/elasticsearch/license/LicenseVerifier.class"? yes
[root@linuxea-vm-Node113 /es]# cp -a XPackBuild.class jardir/org/elasticsearch/xpack/core/
cp:是否覆盖"jardir/org/elasticsearch/xpack/core/XPackBuild.class"? yes
```

With the files copied into jardir under org/elasticsearch/xpack/core and org/elasticsearch/license, repackage:

```bash
[root@linuxea-vm-Node113 /es]# cd jardir/
[root@linuxea-vm-Node113 /es/jardir]# jar -cvf x-pack-core-6.3.2.jar *
已添加清单
正在添加: logstash-index-template.json(输入 = 994) (输出 = 339)(压缩了 65%)
正在忽略条目META-INF/
正在忽略条目META-INF/MANIFEST.MF
正在添加: META-INF/LICENSE.txt(输入 = 13675) (输出 = 5247)(压缩了 61%)
```

This generates a new x-pack-core-6.3.2.jar; copy it over to /usr/local/elasticsearch-6.3.2/modules/x-pack/x-pack-core/. The license modification is done, so restart. Note that the old jar was deleted before the replacement, and the new one was freshly generated with jar -cvf.

```bash
[root@linuxea-vm-Node113 ~]# ps aux|egrep ^elk|awk '{print $2}'|xargs kill && sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d
```

1.1.4 Applying for a license

Open the elastic registration page and apply; the license is sent to your mailbox. Download it and edit:

- change "expiry_date_in_millis":1565135999999 to "expiry_date_in_millis":2565135999999
- change "type":"basic" to "type":"platinum"

It ends up looking roughly like the following (of course, you cannot update with the sample below; use the license you applied for on the official site, sent to the mailbox you filled in):

```
{"license":{
  "uid":"2651b126-fef3-480e-ad4c-a60eb696a733",
  "type":"platinum",                         # platinum tier
  "issue_date_in_millis":1533513600000,
  "expiry_date_in_millis":2565135999999,     # expiry time
  "max_nodes":100,
  "issued_to":"mark tang (www.linuxea.com)",
  "issuer":"Web Form",
  "signature":"AAAAAwAAAA2Of4OxzPNK/yl15sO4AAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxaktJRVl5MUYvUWh3bHZVUTllbXNPbzBUemtnbWpBbmlWRmRZb25KNFlBR2x0TXc2K2p1Y1VtMG1UQU9TRGZVSGRwaEJGUjE3bXd3LzRqZ05iLzRteWFNekdxRGpIYlFwYkJiNUs0U1hTVlJKNVlXekMrSlVUdFIvV0FNeWdOYnlESDc3MWhlY3hSQmdKSjJ2ZTcvYlBFOHhPQlV3ZHdDQ0tHcG5uOElCaDJ4K1hob29xSG85N0kvTWV3THhlQk9NL01VMFRjNDZpZEVXeUtUMXIyMlIveFpJUkk2WUdveEZaME9XWitGUi9WNTZVQW1FMG1DenhZU0ZmeXlZakVEMjZFT2NvOWxpZGlqVmlHNC8rWVVUYzMwRGVySHpIdURzKzFiRDl4TmM1TUp2VTBOUlJZUlAyV0ZVL2kvVk10L0NsbXNFYVZwT3NSU082dFNNa2prQ0ZsclZ4NTltbU1CVE5lR09Bck93V2J1Y3c9PQAAAQAPymKvMYxtKy8+1tbaE0yvRbt4USyN5VYwY1+vBfxNyjUtrIgW3RQJfj/3McveTM7hiKHZXeDT+BAn9NdgFIBJ5ztA94s72RlkUJBQjSiqg50/1Nu5OTKloPKCs4R7pk42uapNISWubpRIXyGGer0KKLkpoBBlQkvwETNHk/aDGnzBzOJ/vppRYQgUtQx5ZXVo+U391w1sNj8lXuZrLwEByYU5ms25HVG1Ith0THelZMqoB0x2gvZklR5RQbEmWPGXOsBXLnfLPM571Op63TxGt+vsiNIvxBjsuq62tuhRkgAHkyqY2z+RLFDafQxUXtz41b6fgRLV5XPCDqiOWYvB",
  "start_date_in_millis":1533513600000}}
```

With the expiry time and the platinum type changed, go to Management, choose License Management, click Update license and upload the edited license (mine is already in place; that flashy PLATINUM badge is it). Back in License Management the expiry now reads April 15, 2051 9:46 AM CST.

OK, the es License is modified, which is to say the crack succeeded. Next, let's try the ssl verification feature.

2. elasticsearch ssl

In 6.3, x-pack cannot enable password login out of the box, but that does not stop us from looking into it. It has some permission problems; in my own use I found it unusable, so treat the information here as reference; the configuration later on does not enable it.

Mind the permissions:

```bash
chmod +r $PATH/cerp/*
chown -R elk.elk /data/elasticsearch
```

2.1.1 Issuing

Create the certificate authority (ca). This writes a file named elastic-stack-ca.p12 into the current directory, containing the CA public certificate plus the node signing key and private key:

```bash
[root@linuxea-vm-Node113 ~/crt]# /usr/local/elasticsearch-6.3.2/bin/elasticsearch-certutil ca
```

When prompted for a protection password, enter one and remember it (assuming you entered one at all). Then generate the certificate and private key, typing the protection password you set (not needed if you set none):

```bash
[root@linuxea-vm-Node113 ~/crt]# /usr/local/elasticsearch-6.3.2/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
```

If all went as expected you will see two files:

```
-rw------- 1 root root 3443 8月  6 13:00 elastic-certificates.p12
-rw------- 1 root root 2527 8月  6 12:59 elastic-stack-ca.p12
```

2.2.2 Using the certificates

Create the certificate directory:

```bash
[root@linuxea-vm-Node113 ~/crt]# mkdir /usr/local/elasticsearch-6.3.2/config/certs/
[root@linuxea-vm-Node113 ~/crt]# cp elastic-* /usr/local/elasticsearch-6.3.2/config/certs/
```

Copy them to the other elasticsearch machines (creating the directory there as well):

```bash
[root@linuxea-vm-Node113 ~/crt]# scp elastic-* 10.10.240.114:/usr/local/elasticsearch-6.3.2/config/certs/
[root@linuxea-vm-Node113 ~/crt]# scp elastic-* 10.0.1.49:/usr/local/elasticsearch-6.3.2/config/certs/
```

Then fix the permissions, mainly so java can read the files; otherwise it errors with Caused by: java.nio.file.AccessDeniedException:

```bash
[root@linuxea-vm-Node113 ~/crt]# chmod +r /usr/local/elasticsearch-6.3.2/config/certs/
[root@linuxea-vm-Node113 ~/crt]# chown -R elk.elk /data/elasticsearch
```

2.2.3 Using them in the configuration file

Write the following into both elasticsearch configuration files. If you are unsure how they were configured in the first place, see ELK 6.3.2 Installation and Configuration [A Cross-Network Forwarding Approach] (https://www.linuxea.com/1889.html), which contains the installation and configuration details.

```yaml
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.ssl.verification_mode: none
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
```

Notes:

- switch xpack.security.enabled to true (before enabling it was false) and fill in the relative paths
- xpack.ssl.verification_mode must be none, or it errors; roughly, this skips server key verification and just uses the connection, losing some diagnostics in the process
- xpack.security.transport.ssl.enabled: true must be on, otherwise you get [o.e.x.s.t.n.SecurityNetty4ServerTransport] [master] exception caught on transport layer [NettyTcpChannel errors

3. Changing the passwords

After the operations above, with a restart that shows no errors, pick one elasticsearch node and run elasticsearch-setup-passwords interactive. If you followed the steps as shown here you will get the dialog below; just type the passwords:

```
[root@linuxea-vm-Node113 ~]# /usr/local/elasticsearch-6.3.2/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system,beats_system.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [elastic]
```

Once the passwords are entered they sync to the other nodes; you only need to update the kibana and logstash configurations.

3.1 kibana password authentication

Set xpack.security.enabled to true, enable monitoring and add the credentials:

```yaml
xpack.security.enabled: true
xpack.monitoring.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "linuxea"
```

Restart kibana and you can log in. Note that doing things in the wrong order will likely fail, so keep an eye on your log errors; the correct order is always to crack first, and only then can x-pack be used.
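To confirm the new license really took effect without clicking through Kibana, the license API can also be queried directly. A minimal sketch, assuming the node address and the elastic/linuxea credentials used elsewhere in this post:

```bash
# "type" should now read "platinum" and "expiry_date" should show the
# extended date patched in above (April 2051).
curl -u elastic:linuxea 'http://10.10.240.113:9200/_xpack/license?pretty'
```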
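As a rough check that the transport layer is really wrapped in TLS after the restart, openssl can probe port 9300. A sketch under the assumption that the node listens on the address configured above; since transport TLS may also demand a client certificate, the handshake itself can still end in an alert, but the server certificate is printed first:

```bash
# Print the subject and validity window of the certificate presented on
# the transport port; a plaintext 9300 would yield no certificate at all.
openssl s_client -connect 10.10.240.113:9300 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```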
2018-08-07
linuxea: ELK 6.3.2 Installation and Configuration [A Cross-Network Forwarding Approach] (Part 1)
For various reasons I need to build an elk platform on the internal network and collect logs from cloud machines, and the cloud nodes are not all from one vendor, which means their internal networks cannot reach each other and they are widely scattered. The constraints:

- the elk environment lives on the internal network, pull mode only; that is, no intranet IP is exposed for the outside to call (no NAT), and outbound internet access is all that is needed
- intranet hardware
- low resource cost

Based on these three points, the scenario is configured as follows: the scattered cloud nodes feed data into one redis node (in my imagination, a cluster of them; kafka's password setup is too convoluted), and the internal elk then pulls the logs from redis down to itself. Take care to match the redis firewall rules properly, since security is involved (those with time to spare can go straight for kafka).

We can download the RPM package or the tar.gz binary package from the official site to install. I tested both here; both were also used for the x-pack crack tests (there is a cracking example later).

Prerequisite: install the jdk

```bash
yum install http://10.10.240.145/windows-client/jdk/jdk-8u171-linux-x64.rpm -y
```

If the connection to 10.10.240.145 fails, don't panic: 10.10.240.145 is my intranet mirror (^_^)

Adjust the system parameters:

```bash
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
echo "elk - nofile 65536" >> /etc/security/limits.conf
```

1. elasticsearch node install

Download the elasticsearch package and install it on the elasticsearch nodes (downloaded here from the intranet mirror):

1. create the user
2. create the db and logs directories
3. back up the original configuration file
4. chown the unpacked directory, the data directory and the log directory

```bash
curl -Lk http://10.10.240.145/elk/elasticsearch-6.3.2.tar.gz|tar xz -C /usr/local/ && useradd elk && cd /usr/local/ && ln -s elasticsearch-6.3.2 elasticsearch && mkdir /data/elasticsearch/{db,logs} -p && chown -R elk.elk /data/elasticsearch/ /usr/local/elasticsearch* && cd elasticsearch/config/ && mv elasticsearch.yml elasticsearch.yml.bak
```

1.2 elasticsearch configuration files

The elk configuration comes in three copies: one for node1, one for node2 and one for the coordinating node; they differ little.

1.2.1 elasticsearch_node1

```yaml
cluster.name: linuxea-app_ds
node.name: master
path.data: /data/elasticsearch/db
path.logs: /data/elasticsearch/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.10.240.113
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.10.240.113"]
xpack.security.enabled: false
```

Start it:

```bash
[root@linux-vm-Node113 ~]# sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d
```

1.2.2 elasticsearch_node2

```
[root@linux-vm-Node114 /usr/local/elasticsearch-6.3.2/config]# cat /usr/local/elasticsearch-6.3.2/config/elasticsearch.yml
cluster.name: linuxea-app_ds
node.name: slave
path.data: /data/elasticsearch/db
path.logs: /data/elasticsearch/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.10.240.114
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.10.240.113"]
#xpack.monitoring.collection.enabled: true
xpack.security.enabled: false
```

Start it:

```bash
[root@linux-vm-Node114 ~]# sudo -u elk /usr/local/elasticsearch-6.3.2/bin/elasticsearch -d
```

1.2.3 Opening the firewall

Add to the rules file:

```
-A INPUT -s 10.0.1.49 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "logstash" -j ACCEPT
-A INPUT -s 10.10.240.117 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "kibana" -j ACCEPT
-A INPUT -s 10.10.240.114 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-114" -j ACCEPT
-A INPUT -s 10.10.240.113 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-113" -j ACCEPT
```

Or add temporary rules opening 9200 and 9300:

```bash
iptables -I INPUT 5 -s 10.0.1.49 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "logstash" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.117 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "kibana" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.114 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-114" -j ACCEPT
iptables -I INPUT 5 -s 10.10.240.113 -p tcp -m tcp -m state --state NEW -m multiport --dports 9200,9300 -m comment --comment "elasticsearch-113" -j ACCEPT
```

So, once node2 is started you should watch the log and check for errors.
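To double-check that the coordinating node carries no master/data/ingest role, _cat/nodes shows a role column. A sketch; the column names are those of the 6.x _cat API:

```bash
# On 6.x, node.role prints "mdi" for a master/data/ingest node and "-"
# for a pure coordinating node; "*" in the master column marks the master.
curl -s 'http://10.0.1.49:9200/_cat/nodes?v&h=ip,name,node.role,master'
```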
2017-09-13
linuxea: ELK - Using AMap (Gaode Maps) with kibana 5.5
Download the IP geolocation database. Download address: https://dev.maxmind.com/zh-hans/geoip/geoip2/geolite2-%E5%BC%80%E6%BA%90%E6%95%B0%E6%8D%AE%E5%BA%93/ (take the country one). After unpacking you get this file:

```
[root@linuxea.com-Node49 /etc/logstash]# ll GeoLite2-City.mmdb
-rw-r--r-- 1 logstash logstash 58082983 8月  3 08:57 GeoLite2-City.mmdb
```

Apply it in the configuration, for example for the nginx access log:

```
geoip {
  source => "clent_ip"
  target => "geoip"
  database => "/etc/logstash/GeoLite2-City.mmdb"
}
```

database points at the location of the map database.

In kibana 5.5, all that is needed is a configuration change: append one line to the kibana configuration file:

```
tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
```
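Since a geoip block pointing at a missing database path only blows up at startup, it is worth parse-checking the pipeline before restarting logstash. A minimal sketch, assuming the package layout used in these posts:

```bash
# Reads every pipeline file under conf.d and exits; prints
# "Configuration OK" when the geoip filter and friends are well-formed.
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/
```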
2017-09-11
linuxea: ELK 5.5 elasticsearch x-pack Crack
For the ELK 6.3.2 x-pack crack, see https://www.linuxea.com/1895.html

Create the LicenseVerifier.java file:

```
[root@linuxea.com-Node61 /elk/]# cat LicenseVerifier.java
package org.elasticsearch.license;

import java.nio.*;
import java.util.*;
import java.security.*;
import org.elasticsearch.common.xcontent.*;
import org.apache.lucene.util.*;
import org.elasticsearch.common.io.*;
import java.io.*;

public class LicenseVerifier {
    public static boolean verifyLicense(final License license, final byte[] encryptedPublicKeyData) {
        return true;
    }
    public static boolean verifyLicense(final License license) {
        return true;
    }
}
```

Compile the class file:

```bash
[root@linuxea.com-Node49 ~/elk]# javac -cp "/usr/share/elasticsearch/lib/elasticsearch-5.5.1.jar:/usr/share/elasticsearch/lib/lucene-core-6.6.0.jar:/usr/share/elasticsearch/plugins/x-pack/x-pack-5.5.1.jar" LicenseVerifier.java
[root@linuxea.com-Node49 ~/elk]# ls
LicenseVerifier.class  LicenseVerifier.java
[root@linuxea.com-Node49 ~/elk]# cd /usr/share/elasticsearch/plugins/x-pack/
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack]# mkdir test
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack]# mv x-pack-5.5.1.jar test/
```

Back up x-pack-5.5.1.jar:

```bash
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# cp x-pack-5.5.1.jar /opt
```

Unpack it:

```bash
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# jar xvf x-pack-5.5.1.jar
```

Replace the class:

```bash
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# cd org/elasticsearch/license
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test/org/elasticsearch/license]# cp /root/elk/LicenseVerifier.class ./
```

Go back to the test directory and repackage:

```bash
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test/org/elasticsearch/license]# cd /usr/share/elasticsearch/plugins/x-pack/test/
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# jar cvf x-pack-5.5.1.jar .
```

Put the repackaged file back into the x-pack directory:

```bash
[root@linuxea.com-Node49 /usr/share/elasticsearch/plugins/x-pack/test]# cp x-pack-5.5.1.jar ../
```

Apply for a license: https://license.elastic.co/registration

Once the application is done it arrives in your mailbox very quickly; then modify the license file. Licenses come in different tiers with different privileges:

- open source: the open-source tier
- basic: the basic tier
- gold: the gold tier
- PLATINUM: the platinum tier

```bash
curl -XPUT -u elastic 'http://<host>:<port>/_xpack/license' -H "Content-Type: application/json" -d @license.json
```

Modify the license. After you apply, the license lands in your mailbox; then just tweak it:

```json
{"license":{"uid":"d13W1FM-ef9XWi-45eAKLH6-afT5b4-b8erC7460","type":"platinum","issue_date_in_millis":11042324000000,"expiry_date_in_millis":2535123399999,"max_nodes":100,"issued_to":"sean wang (alibaba)","issuer":"Web Form","signature":"AAAAAwAAAA2kxmZrvpZZohthD/HAAAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxaktJRVl5MUYvUWWpBbmlWRmRZb25KNFlBR2x0TXc2K2p1Y1VtMG1UQU9TRGZVSGRwaEJGUjE3bXd3LzRqZ05iLzRteWFNekNUs0U1hTVlJK2E1AD93AD04A03C3DF7565FA377223916FA881A19A675E9BD2F78680EE545265lESDc3MWhlY3hSQmdKSjJ2ZTcvYlBFOHhPQlV3ZHdDQ0tHcG5uOElCaDJ4K1hob29xSG85N0kvTWV3THhlQk9NL01VMFRjNDZpZEVXeUtUMXIyMlIveFpJUkk2WUdveEZaME9XWitGUi9WNTZVQW1FMG1DenhC8rWVVUYzMwRGVySHpIdURzKzFiRDl4TmM1TUp2VTBOUlJZUlAyV0ZVL2kvVk10L0NsbXNFYVZwT3NSU082dFNNa2prQ0ZsclZ4NTltbU1CVE5lR09Bck93V2J1Y3c9PQAAAQBvSGrvXPAAtLbErFH431nJyyyuZ1A5Mqnq2mmEY2NiFA1GUTjzEorVn9rWD20vTAZaR/EUbdQ1xAKLH1/WK/Ur4ct5Gpv3KwPVI1Lvn7q5BqoO5F4AYGcaUJqu8erCuGYz9XHGipAYpCUDVppRC294MsR/o6XJLNn7VTp+FHXRIVAbgWidQQHxaT3MQo/y38t7pKZvMQQ7l5DEp0foPhgW9Nm4coK4WXoT87/LkhCwMtH5NLmD80rZKy0XKX8AXEK+usf+gtv1iIY35t7wB8EbHPO+mUlBT5rAb","start_date_in_millis":1504224000000}}
```

Save the file as license.json. Before the change:

```
[root@linuxea.com-Node49 ~/elk]# curl -XGET -u elastic:linuxea 'http://10.0.1.49:9200/_license'
{
  "license" : {
    "status" : "active",
    "uid" : "427cbb8e-9d96-435f-b56d-fa2efeb438c5",
    "type" : "trial",
    "issue_date" : "2017-09-01T14:28:04.736Z",
    "issue_date_in_millis" : 1504276084736,
    "expiry_date" : "2017-10-01T14:28:04.736Z",
    "expiry_date_in_millis" : 1506868084736,
    "max_nodes" : 1000,
    "issued_to" : "linuxea-app",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}
```

Enter the password and apply the change:

```
[root@linuxea.com-Node49 ~/elk]# curl -XPUT -u elastic 'http://10.0.1.49:9200/_xpack/license' -H "Content-Type: application/json" -d @license.json
Enter host password for user 'elastic':
{"acknowledged":true,"license_status":"valid"}
```

Check after the change:

```
[root@linuxea.com-Node49 ~/elk]# curl -XGET -u elastic:linuxea 'http://10.0.1.49:9200/_license'
{
  "license" : {
    "status" : "active",
    "uid" : "d13W1FM-ef9XWi-45eAKLH6-afT5b4-b8erC7460",
    "type" : "platinum",
    "issue_date" : "2017-09-01T00:00:00.000Z",
    "issue_date_in_millis" : 11042324000000,
    "expiry_date" : "2050-05-11T01:46:39.999Z",
    "expiry_date_in_millis" : 2535123399999,
    "max_nodes" : 100,
    "issued_to" : "sean wang (alibaba)",
    "issuer" : "Web Form",
    "start_date_in_millis" : 11042324000000
  }
}
```
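Before repackaging, it is easy to confirm that the compiled class really short-circuits the check: javap, which ships with the JDK next to javac, can disassemble it. A sketch, run in the directory holding the freshly compiled file:

```bash
# Both verifyLicense overloads should compile down to "iconst_1; ireturn",
# i.e. they unconditionally return true.
javap -c LicenseVerifier.class | grep -A4 verifyLicense
```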
2017-09-10
linuxea: ELK 5.5 Redis Log grok Processing (filebeat)
Collect the notice-level redis log. The log carries a lot of information, and routine collection only needs the error messages, so control is applied at collection time: only error and warning lines are collected:

```yaml
include_lines: ["WARNING","ERR"]
```

- include_lines: a list of regular expressions matching the lines you want Filebeat to include; Filebeat exports only lines that match a regex in the list. By default all lines are exported.
- exclude_files: a list of regular expressions matching the files you want Filebeat to ignore. By default no files are excluded.

Reference: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html
Multiline reference: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html#multiline

Log sample:

```
1:M 08 Sep 11:42:43.806 # Server started, Redis version 3.2.9
1:M 08 Sep 11:41:44.806 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 08 Sep 11:12:32.806 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 08 Sep 11:12:32.822 * DB loaded from disk: 0.016 seconds
1:M 08 Sep 11:12:32.822 * The server is now ready to accept connections on port 6379
(redis ASCII-art startup banner omitted)
1:M 08 Sep 11:40:45.806 # ERROR 123
```

The final collection contains only the ERR and WARNING entries (screenshots in the original post).

Install redis:

```bash
curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/docker-alpine-Redis/master/Sentinel/install_redis.sh|bash
```

redis log configuration:

```
loglevel notice
logfile "/data/logs/redis_6379.log"
```

Configuration file example:

```
[root@linuxea.com-Node98 /data/rds]# cat /etc/redis/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile "/var/run/redis_6379.pid"
loglevel notice
logfile "/data/logs/redis_6379.log"
databases 8
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/data/redis"
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 100
slowlog-max-len 1000
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
masterauth "OTdmOWI4ZTM4NTY1M2M4OTZh"
requirepass "OTdmOWI4ZTM4NTY1M2M4OTZh"
# Generated by CONFIG REWRITE
#slaveof 172.25. 6379
```

filebeat configuration:

```
[root@linuxea.com-Node117 /data/logs]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/logs/access_nginx.log
  document_type: nginx-access-117
- input_type: log
  paths:
    - /data/logs/slow_log.CSV
  document_type: mysql-slow-117
- input_type: log
  paths:
    - /data/logs/redis_6379.log
  document_type: redis-6379-117
  include_lines: ["WARNING","ERR"]
output.redis:
  hosts: ["10.10.0.98"]
  password: "OTdmOWI4ZTM4NTY1M2M4OTZh"
  key: "default_list"
  db: 5
  timeout: 5
  keys:
    - key: "%{[type]}"
      mapping:
        "nginx-access-117": "nginx-access-117"
        "mysql-slow-117" : "mysql-slow-117"
        "redis-6379-117" : "redis-6379-117"
```

logstash configuration

patterns:

```
[root@linuxea.com-Node49 /etc/logstash/patterns.d]# cat redis
REDISTIMESTAMP %{MONTHDAY} %{MONTH} %{TIME}
REDISLOG %{POSINT:pid}\:%{WORD:role} %{REDISTIMESTAMP:timestamp} %{DATA:loglevel} %{GREEDYDATA:msg}
```

logstash configuration file, input:

```
redis {
  host => "10.10.0.98"
  port => "6379"
  key => "redis-6379-117"
  data_type => "list"
  password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
  threads => "5"
  db => "5"
}
```

filter:

```
if [type] == "redis-6379-117" {
  grok {
    patterns_dir => "/etc/logstash/patterns.d"
    match => { "message" => "%{REDISLOG}" }
  }
  mutate {
    gsub => [
      "loglevel", "\.", "debug",
      "loglevel", "\-", "verbose",
      "loglevel", "\*", "notice",
      "loglevel", "\#", "warring",
      "role","X","sentinel",
      "role","C","RDB/AOF writing child",
      "role","S","slave",
      "role","M","master"
    ]
  }
  date {
    match => [ "timestamp" , "dd MMM HH:mm:ss.SSS" ]
    target => "@timestamp"
    remove_field => [ "timestamp" ]
  }
}
```

output:

```
if [type] == "redis-6379-117" {
  elasticsearch {
    hosts => ["10.0.1.49:9200"]
    index => "logstash-redis-6379-117-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "linuxea"
  }
}
```
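A fast way to iterate on the REDISLOG pattern is to feed one sample line to a throwaway logstash pipeline on stdin. A minimal sketch; with -e and no input/output given, logstash 5.x falls back to stdin and stdout, so the parsed event is printed straight back:

```bash
# The output event should carry the pid, role, loglevel ("#") and msg
# fields extracted by the pattern defined above.
echo '1:M 08 Sep 11:41:44.806 # WARNING overcommit_memory is set to 0!' | \
/usr/share/logstash/bin/logstash -e '
  filter {
    grok {
      patterns_dir => "/etc/logstash/patterns.d"
      match => { "message" => "%{REDISLOG}" }
    }
  }'
```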
2017-09-10
linuxea: ELK 5.5 Haproxy Log grok Processing (filebeat)
Logging haproxy through rsyslog is generally not recommended and is usually left off, but I figured elk was worth trying for some log-parsing experiments. Install haproxy first; for compiling from source see https://www.linuxea.com/1328.html. In the filebeat configuration below, exclude_lines filters out lines matching ["started","Pausing","Enabling","DOWN","UP","admin_stats","backend"]. The final collected result looks like the screenshots in the original post.

Enabling the log: edit rsyslog.conf as follows:

```
$ModLoad imudp
$UDPServerRun 514
local3.* /var/log/haproxy.log
```

Comment out:

```
#*.info;mail.none;authpriv.none;cron.none /var/log/messages
```

and add:

```
*.info;mail.none;authpriv.none;cron.none;local3.none /var/log/messages
```

Adjust rsyslog and restart it:

```bash
[root@LinuxEA haproxy]# vim /etc/sysconfig/rsyslog
SYSLOGD_OPTIONS="-r -m 0 -c 2"
[root@LinuxEA haproxy]# systemctl restart rsyslog.service
```

The log format looks like this:

```
2017-09-07T14:19:41+08:00 localhost haproxy[32171]: 10.10.0.96:50482 [07/Sep/2017:14:19:41.179] frontend-web.com linuxea-webgroup.com/<NOSRV> 0/-1/-1/-1/0 503 212 - - SC-- 0/0/0/0/0 0/0 "GET /favicon.ico HTTP/1.1"
```

filebeat configuration file:

```
[root@linuxea.com-Node117 /data/logs]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/logs/access_nginx.log
  document_type: nginx-access-117
- input_type: log
  paths:
    - /data/logs/slow_log.CSV
  document_type: mysql-slow-117
- input_type: log
  paths:
    - /data/logs/redis_6379.log
  document_type: redis-6379-117
  include_lines: ["WARNING","ERR"]
- input_type: log
  paths:
    - /data/logs/haproxy.log
  exclude_lines: ["started","Pausing","Enabling","DOWN","UP","admin_stats","backend"]
  document_type: haproxy-117
output.redis:
  hosts: ["10.10.0.98"]
  password: "OTdmOWI4ZTM4NTY1M2M4OTZh"
  key: "default_list"
  db: 5
  timeout: 5
  keys:
    - key: "%{[type]}"
      mapping:
        "nginx-access-117": "nginx-access-117"
        "mysql-slow-117" : "mysql-slow-117"
        "redis-6379-117" : "redis-6379-117"
        "haproxy-117" : "haproxy-117"
```

Logstash configuration

There are plenty of bundled patterns under /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.1.1/patterns:

```
[root@linuxea.com-Node49 /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.1.1/patterns]# ll
总用量 112
-rw-r--r-- 1 logstash logstash  1831 7月 19 05:15 aws
-rw-r--r-- 1 logstash logstash  4831 7月 19 05:15 bacula
-rw-r--r-- 1 logstash logstash   260 7月 19 05:15 bind
-rw-r--r-- 1 logstash logstash  2154 7月 19 05:15 bro
-rw-r--r-- 1 logstash logstash   879 7月 19 05:15 exim
-rw-r--r-- 1 logstash logstash 10095 7月 19 05:15 firewalls
-rw-r--r-- 1 logstash logstash  5338 7月 19 05:15 grok-patterns
-rw-r--r-- 1 logstash logstash  3251 7月 19 05:15 haproxy
-rw-r--r-- 1 logstash logstash   987 7月 19 05:15 httpd
-rw-r--r-- 1 logstash logstash  1265 7月 19 05:15 java
-rw-r--r-- 1 logstash logstash  1087 7月 19 05:15 junos
-rw-r--r-- 1 logstash logstash  1037 7月 19 05:15 linux-syslog
-rw-r--r-- 1 logstash logstash    74 7月 19 05:15 maven
-rw-r--r-- 1 logstash logstash    49 7月 19 05:15 mcollective
-rw-r--r-- 1 logstash logstash   190 7月 19 05:15 mcollective-patterns
-rw-r--r-- 1 logstash logstash   614 7月 19 05:15 mongodb
-rw-r--r-- 1 logstash logstash  9597 7月 19 05:15 nagios
-rw-r--r-- 1 logstash logstash   142 7月 19 05:15 postgresql
-rw-r--r-- 1 logstash logstash   845 7月 19 05:15 rails
-rw-r--r-- 1 logstash logstash   224 7月 19 05:15 redis
-rw-r--r-- 1 logstash logstash   188 7月 19 05:15 ruby
-rw-r--r-- 1 logstash logstash   404 7月 19 05:15 squid
```

You can also write your own in a directory of your choosing, e.g. patterns_dir => ["/etc/logstash/patterns.d"]:

```
[root@linuxea.com-Node49 /etc/logstash/patterns.d]# cat ../conf.d/redis-output.yml
input {
  redis {
    host => "10.10.0.98"
    port => "6379"
    key => "haproxy-117"
    data_type => "list"
    password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
    threads => "5"
    db => "5"
  }
}
filter {
  if [type] == "haproxy-117" {
    grok {
#      patterns_dir => ["/etc/logstash/patterns.d"]
      match => ["message", "%{HAPROXYHTTP}"]
    }
    date {
      match => ["accept_date", "dd/MMM/yyyy:HH:mm:ss.SSS"]
    }
    geoip {
      source => "client_ip"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
  }
}
output {
  if "_grokparsefailure" in [tags] {
    file {
      path => "/var/log/logstash/grokparsefailure-%{[type]}-%{+YYYY.MM.dd}.log"
    }
  }
  if [type] == "haproxy-117" {
    elasticsearch {
      hosts => ["10.0.1.49:9200"]
      index => "logstash-haproxy-117-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "linuxea"
    }
  }
  stdout {codec => rubydebug}
}
```
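Filebeat can check the prospectors and the redis output for mistakes before a restart. A sketch for the 5.x binaries used here (6.x renamed this to `filebeat test config`):

```bash
# Exits non-zero and names the offending line when filebeat.yml is invalid.
filebeat -configtest -c /etc/filebeat/filebeat.yml
```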
2017-09-09
linuxea: ELK 5.5 Tomcat Access Log grok Processing (filebeat)
The tomcat access log is highly adjustable; here we use %h %l %u %t [%r] %s [%{Referer}i] [%{User-Agent}i] %b %T, with the fields meaning:

```
%h  client IP address
%l  logical username (usually returns '-')
%u  authenticated username (usually returns '-')
%t  date and time of the request
%r  request method (post or get), the resource requested and the HTTP protocol version used
%s  HTTP status code returned
%b  bytes sent for the resource
%T  time taken to serve the request
[%{Referer}i] [%{User-Agent}i]  the Referer and User-Agent request headers
```

Other options: http://tomcat.apache.org/tomcat-8.5-doc/config/valve.html#Access_Logging

Edit the configuration file:

```
[root@linuxea.com-Node117 /data/tomcat]# tail -9 conf/server.xml
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="access_log" suffix=".log" rotatable="true" resolveHosts="false"
               pattern="%h %l %u %t [%r] %s [%{Referer}i] [%{User-Agent}i] %b %T" />
      </Host>
    </Engine>
  </Service>
</Server>
```

The format is as follows:

```xml
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="access_log" fileDateFormat="yyyy-MM-dd.HH" suffix=".log"
       rotatable="true" resolveHosts="false"
       pattern="%h %l %u %t [%r] %s [%{Referer}i] [%{User-Agent}i] %b %T" />
```

With that set, the log comes out like this:

```
10.10.0.96 - - [04/Sep/2017:19:54:07 +0800] [GET / HTTP/1.1] 200 [-] [Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36] 5 0.104
```

The Pattern looks like this:

```
[root@linuxea.com-Node49 /etc/logstash/patterns.d]# cat java
JETTYAUDIT %{IP:clent_ip} (?:-|%{USER:logic_user}) (?:-|%{USER:verification_user}) \[%{HTTPDATE:timestamp}\] \[(?:%{WORD:http_verb} %{NOTSPACE:request_url}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\] %{NUMBER:status} \[(?:-|%{NOTSPACE:request_url_2})\] \[%{GREEDYDATA:agent}\] (?:-|%{NUMBER:curl_size}) (?:-|%{NUMBER:responsetime})
```

The resulting charts look like the screenshots in the original post.

filebeat configuration:

```
[root@linuxea.com-Node117 /data/tomcat]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/logs/access_nginx.log
  document_type: nginx-access-117
- input_type: log
  paths:
    - /data/logs/slow_log.CSV
  document_type: mysql-slow-117
- input_type: log
  paths:
    - /data/logs/java.log
  document_type: java-117
output.redis:
  hosts: ["10.10.0.98"]
  password: "OTdmOWI4ZTM4NTY1M2M4OTZh"
  key: "default_list"
  db: 5
  timeout: 5
  keys:
    - key: "%{[type]}"
      mapping:
        "nginx-access-117": "nginx-access-117"
        "mysql-slow-117" : "mysql-slow-117"
        "java-117" : "java-117"
```

logstash configuration, input:

```
redis {
  host => "10.10.0.98"
  port => "6379"
  key => "java-117"
  data_type => "list"
  password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
  threads => "5"
  db => "5"
}
```

filter:

```
if [type] == "java-117" {
  grok {
    patterns_dir => "/etc/logstash/patterns.d"
    match => { "message" => "%{JETTYAUDIT}" }
  }
  useragent {
    source => "agent"
    target => "userAgent"
  }
  urldecode {
    all_fields => true
  }
  mutate {
    gsub => ["agent","[\"]",""]    # replace the " in agent with nothing
    convert => [ "response","integer" ]
    convert => [ "body_bytes_sent","integer" ]
    convert => [ "bytes_sent","integer" ]
    convert => [ "upstream_response_time","float" ]
    convert => [ "upstream_status","integer" ]
    convert => [ "request_time","float" ]
    convert => [ "port","integer" ]
  }
  geoip {
    source => "client_ip"
    database => "/etc/logstash/GeoLite2-City.mmdb"
  }
  if [params] {
    kv {
      field_split => ",?"
      source => "params"
    }
  }
  if [source] =~ /\/API/ {
    mutate { add_field => { "mode" => "API"} }
  } else {
    mutate { add_field => { "mode" => "ENT"} }
  }
  date {
    match => [ "date" , "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}
```

output:

```
if [type] == "java-117" {
  elasticsearch {
    hosts => ["10.0.1.49:9200"]
    index => "logstash-java-117-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "linuxea"
  }
}
```

The complete configuration:

```
[root@linuxea.com-Node49 /etc/logstash/patterns.d]# cat ../conf.d/redis-output.yml
input {
  redis {
    host => "10.10.0.98"
    port => "6379"
    key => "nginx-access-117"
    data_type => "list"
    password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
    threads => "5"
    db => "5"
  }
  redis {
    host => "10.10.0.98"
    port => "6379"
    key => "mysql-slow-117"
    data_type => "list"
    password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
    threads => "5"
    db => "5"
  }
  redis {
    host => "10.10.0.98"
    port => "6379"
    key => "java-117"
    data_type => "list"
    password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
    threads => "5"
    db => "5"
  }
}
filter {
  if [type] == "nginx-access-117" {
    grok {
      patterns_dir => [ "/etc/logstash/patterns.d" ]
      match => { "message" => "%{NGINXACCESS}" }
      overwrite => [ "message" ]
    }
    geoip {
      source => "clent_ip"
      target => "geoip"
#      database => "/etc/logstash/GeoLiteCity.dat"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
    useragent {
      source => "User_Agent"
      target => "userAgent"
    }
    urldecode {
      all_fields => true
    }
    mutate {
      gsub => ["User_Agent","[\"]",""]    # replace the " in User_Agent with nothing
      convert => [ "response","integer" ]
      convert => [ "body_bytes_sent","integer" ]
      convert => [ "bytes_sent","integer" ]
      convert => [ "upstream_response_time","float" ]
      convert => [ "upstream_status","integer" ]
      convert => [ "request_time","float" ]
      convert => [ "port","integer" ]
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
  #########################mysql-slow#####################################
  if [type] == "mysql-slow-117" {
    csv {
      columns => [ "timestamp", "user_host", "query_time", "lock_time",
                   "rows_sent", "rows_examined", "db", "last_insert_id",
                   "insert_id", "server_id", "sql_text", "thread_id", "rows_affected" ]
    }
    mutate {
      convert => { "rows_sent" => "integer" }
      convert => { "rows_examined" => "integer" }
      convert => { "last_insert_id" => "integer" }
      convert => { "insert_id" => "integer" }
      convert => { "server_id" => "integer" }
      convert => { "thread_id" => "integer" }
      convert => { "rows_affected" => "integer" }
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
      remove_field => [ "timestamp" ]
    }
#    mutate { remove_field => [ "message" ] }
    mutate {
      gsub => [
        "query_time", "(.*\.)(\d)(\d)\d+", "\1\2\3",
        "lock_time", "(.*\.)(\d)(\d)\d+", "\1\2\3"
      ]
    }
    ruby { code => "event.set('query_time' , event.get('query_time') ? event.get('query_time').split(':').inject(0){|a, m| a = a * 60 + m.to_f} : 0)"}
    ruby { code => "event.set('lock_time' , event.get('lock_time') ? event.get('lock_time').split(':').inject(0){|a, m| a = a * 60 + m.to_f} : 0)" }
  }
  #########################java#####################################
  if [type] == "java-117" {
    grok {
      patterns_dir => "/etc/logstash/patterns.d"
      match => { "message" => "%{JETTYAUDIT}" }
    }
    useragent {
      source => "agent"
      target => "userAgent"
    }
    urldecode {
      all_fields => true
    }
    mutate {
      gsub => ["agent","[\"]",""]    # replace the " in agent with nothing
      convert => [ "response","integer" ]
      convert => [ "body_bytes_sent","integer" ]
      convert => [ "bytes_sent","integer" ]
      convert => [ "upstream_response_time","float" ]
      convert => [ "upstream_status","integer" ]
      convert => [ "request_time","float" ]
      convert => [ "port","integer" ]
    }
    geoip {
      source => "client_ip"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
    if [params] {
      kv {
        field_split => ",?"
        source => "params"
      }
    }
    if [source] =~ /\/API/ {
      mutate { add_field => { "mode" => "API"} }
    } else {
      mutate { add_field => { "mode" => "ENT"} }
    }
    date {
      match => [ "date" , "yyyy-MM-dd HH:mm:ss.SSS" ]
    }
  }
  #########################java#####################################
}
output {
  if "_grokparsefailure" in [tags] {
    file {
      path => "/var/log/logstash/grokparsefailure-%{[type]}-%{+YYYY.MM.dd}.log"
    }
  }
  if [type] == "nginx-access-117" {
    elasticsearch {
      hosts => ["10.0.1.49:9200"]
      index => "logstash-nginx-access-117-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "linuxea"
    }
  }
  if [type] == "mysql-slow-117" {
    elasticsearch {
      hosts => ["10.0.1.49:9200"]
      index => "logstash-mysql-slow-117-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "linuxea"
    }
  }
  if [type] == "java-117" {
    elasticsearch {
      hosts => ["10.0.1.49:9200"]
      index => "logstash-java-117-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "linuxea"
    }
  }
  stdout {codec => rubydebug}
}
```

The logs as finally collected look like the screenshot in the original post.
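Once the whole filebeat, redis, logstash, elasticsearch chain is wired up, the quickest smoke test is counting documents in the day's index. A minimal sketch with the credentials used in this post:

```bash
# A growing count means tomcat access lines survive the JETTYAUDIT grok
# and are landing in the dated index.
curl -u elastic:linuxea 'http://10.0.1.49:9200/logstash-java-117-*/_count?pretty'
```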