Search results: 22 posts tagged "ELK Stack"
2017-09-06
linuxea: ELK 5.5 nginx access log grok parsing (filebeat)
Monitoring nginx access logs with filebeat + redis + logstash: filebeat collects the log and forwards it to redis; logstash reads from redis, applies grok and stores the result.
Install filebeat:
[root@linuxea.com-Node117 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.1-x86_64.rpm
[root@linuxea.com-Node117 ~]# yum install filebeat-5.5.1-x86_64.rpm -y
Configuration for shipping to redis:
[root@linuxea.com-Node117 /etc/filebeat]# cat filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/logs/access_nginx.log
  document_type: nginx-access117
output.redis:
  hosts: ["10.10.0.98"]
  password: "OTdmOWI4ZTM4NTY1M2M4OTZh"
  key: "default_list"
  db: 1
  timeout: 5
  keys:
    - key: "%{[type]}"
      mapping:
        "nginx-access117": "nginx-access117"
Start the service and watch its log:
[root@linuxea.com-Node117 /etc/filebeat]# systemctl restart filebeat
[root@linuxea.com-Node117 /etc/filebeat]# tail -f /var/log/filebeat/filebeat
2017-08-25T20:53:09+08:00 INFO States Loaded from registrar: 11
2017-08-25T20:53:09+08:00 INFO Loading Prospectors: 1
2017-08-25T20:53:09+08:00 INFO Prospector with previous states loaded: 1
2017-08-25T20:53:09+08:00 WARN DEPRECATED: document_type is deprecated. Use fields instead.
2017-08-25T20:53:09+08:00 INFO Starting prospector of type: log; id: 12123466383741208858
2017-08-25T20:53:09+08:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-08-25T20:53:09+08:00 INFO Metrics logging every 30s
2017-08-25T20:53:09+08:00 INFO Starting Registrar
2017-08-25T20:53:09+08:00 INFO Start sending events to output
2017-08-25T20:53:09+08:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-08-25T20:53:29+08:00 INFO Harvester started for file: /data/logs/access_nginx.log
2017-08-25T20:53:39+08:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.publisher.published_events=243 libbeat.redis.publish.read_bytes=1367 libbeat.redis.publish.write_bytes=126046 publish.events=245 registrar.states.current=11 registrar.states.update=245 registrar.writes=2
2017-08-25T20:54:09+08:00 INFO No non-zero metrics in the last 30s
Check redis. Once filebeat is running, lines written to access_nginx.log are pushed into redis; as long as nothing has consumed them yet, they can be inspected:
[root@linuxea.com-Node98 ~]# redis-cli -h 10.10.0.98 -a OTdmOWI4ZTM4NTY1M2M4OTZh
10.10.0.98:6379> select 1
OK
10.10.0.98:6379[1]> keys *
1) "nginx-access117"
10.10.0.98:6379[1]> type "nginx-access117"
list
10.10.0.98:6379[1]> lrange nginx-access117 0 -1
1) "{\"@timestamp\":\"2017-08-25T12:53:29.279Z\",\"beat\":{\"hostname\":\"linuxea.com-Node117.cluster.com\",\"name\":\"linuxea.com-Node117.cluster.com\",\"version\":\"5.5.1\"},\"input_type\":\"log\",\"message\":\"10.10.0.96 - - [25/Aug/2017:12:53:21 +0000] GET / HTTP/1.1 - 304 0 - Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36 -\",\"offset\":48321607,\"source\":\"/data/logs/access_nginx.log\",\"type\":\"nginx-access117\"}"
Create the patterns directory and file. First look at the log format; the nginx log_format is:
log_format upstream2 '$proxy_add_x_forwarded_for $remote_user [$time_local] "$request" $http_host'
    '[$body_bytes_sent] $request_body "$http_referer" "$http_user_agent" [$ssl_protocol] [$ssl_cipher]'
    '[$request_time] [$status] [$upstream_status] [$upstream_response_time] [$upstream_addr]';
On the logstash machine, create a patterns.d directory to hold the grok patterns:
[root@linuxea.com-Node49 /etc/logstash/conf.d]# mkdir /etc/logstash/patterns.d/ -p
Write the patterns to a file:
[root@linuxea.com-Node49 /etc/logstash/conf.d]# cat /etc/logstash/patterns.d/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IP:clent_ip} (?:-|%{USER:ident}) \[%{HTTPDATE:log_date}\] \"%{WORD:http_verb} (?:%{PATH:baseurl}\?%{NOTSPACE:params}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" (%{IPORHOST:url_domain}|%{URIHOST:ur_domain}|-)\[(%{BASE16FLOAT:request_time}|-)\] %{NOTSPACE:request_body} %{QS:referrer_rul} %{GREEDYDATA:User_Agent} \[%{GREEDYDATA:ssl_protocol}\] \[(?:%{GREEDYDATA:ssl_cipher}|-)\]\[%{NUMBER:time_duration}\] \[%{NUMBER:http_status_code}\] \[(%{BASE10NUM:upstream_status}|-)\] \[(%{NUMBER:upstream_response_time}|-)\] \[(%{URIHOST:upstream_addr}|-)\]
Once kibana is installed, Dev Tools includes a Grok Debugger; if your log format differs, adjust the pattern and test it with Simulate (see the screenshot in the original post).
Writing into elasticsearch. The config below uses GeoLite2-City.mmdb, which turned out not to be of much use. Download: https://dev.maxmind.com/zh-hans/geoip/geoip2/geolite2-%E5%BC%80%E6%BA%90%E6%95%B0%E6%8D%AE%E5%BA%93/ or just use the built-in database and comment out the database line.
[root@linuxea-Node49 /etc/logstash/conf.d]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
[root@linuxea.com-Node49 /etc/logstash]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
-> Downloading ingest-user-agent from elastic
[=================================================] 100%
-> Installed ingest-user-agent
[root@linuxea.com-Node49 /etc/logstash]#
The input pulls the data from redis and sends it on to elasticsearch:
[root@linuxea.com-Node49 /etc/logstash/conf.d]# cat /etc/logstash/conf.d/redis_input.conf
input {
  redis {
    host => "10.10.0.98"
    port => "6379"
    key => "nginx-access117"
    data_type => "list"
    password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
    threads => 10
    db => "1"
  }
}
filter {
  if [type] == "nginx-access-117" {
    grok {
      patterns_dir => [ "/etc/logstash/patterns.d" ]
      match => { "message" => "%{NGINXACCESS}" }
      overwrite => [ "message" ]
    }
    geoip {
      source => "clent_ip"
      target => "geoip"
      # database => "/etc/logstash/GeoLiteCity.dat"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
    useragent {
      source => "User_Agent"
      target => "userAgent"
    }
    urldecode {
      all_fields => true
    }
    mutate {
      gsub => ["User_Agent","[\"]",""]   # strip the double quotes from User_Agent
      convert => [ "response","integer" ]
      convert => [ "body_bytes_sent","integer" ]
      convert => [ "bytes_sent","integer" ]
      convert => [ "upstream_response_time","float" ]
      convert => [ "upstream_status","integer" ]
      convert => [ "request_time","float" ]
      convert => [ "port","integer" ]
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}
output {
  if [type] == "nginx-access117" {
    elasticsearch {
      hosts => ["10.0.1.49:9200"]
      index => "logstash-nginx-access-117-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "linuxea"
    }
  }
  stdout { codec => rubydebug }
}
Last steps: watch the logstash log while starting it. Open kibana, go to Management --> Create and enter logstash-nginx-access-117-* (screenshots in the original post). Once logs are written, the fields are grokked and show up in kibana. OK, the log parsing is basically done.
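A quick way to sanity-check each hop of this chain is to query redis and elasticsearch directly. The commands below are a minimal sketch using the hosts, password, credentials and index name shown in the excerpt above; adjust them to your own environment.

```bash
#!/usr/bin/env bash
# Spot checks for the filebeat -> redis -> logstash -> elasticsearch chain.
# Addresses, password and index name are taken from the post above (assumptions
# about your layout); change them to match your setup.

# 1) Is filebeat pushing events into redis db 1? The list should grow while
#    logstash is stopped and drain back to 0 once it starts consuming the key.
redis-cli -h 10.10.0.98 -a OTdmOWI4ZTM4NTY1M2M4OTZh -n 1 llen nginx-access117

# 2) Did logstash create today's index in elasticsearch?
curl -s -u elastic:linuxea "http://10.0.1.49:9200/_cat/indices/logstash-nginx-access-117-*?v"

# 3) Pull one parsed document to confirm the grok fields are present.
curl -s -u elastic:linuxea "http://10.0.1.49:9200/logstash-nginx-access-117-*/_search?size=1&pretty"
```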
2017-09-06 · 6,430 reads · 0 comments · 0 likes
2017-09-03
linuxea: Elasticsearch 5.5 cluster deployment (x-pack)
Install on both machines; one was installed earlier, so this time it only needs configuring. The earlier post "ELK 5.5 installation and configuration" covered a single-node setup; this time we add haproxy and build on x-pack (how x-pack is activated is covered later). With two elasticsearch nodes the data is redundant, based on the recovery mechanism (reference link in the original post). When one elasticsearch node fails, haproxy fails traffic over to the healthy one. The flow is filebeat --> redis --> logstash --> elasticsearch --> haproxy --> kibana, as shown in the diagram in the original post.
Install elasticsearch and x-pack:
[root@linuxea-Node61 ~]# yum install -y jdk-8u131-linux-x64.rpm
[root@linuxea-Node61 ~]# yum install -y elasticsearch-5.5.1.rpm
[root@linuxea-Node61 ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack
-> Downloading x-pack from elastic
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.io.FilePermission \\.\pipe\* read,write
* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* java.security.SecurityPermission createPolicy.JavaPolicy
* java.security.SecurityPermission getPolicy
* java.security.SecurityPermission putProviderProperty.BC
* java.security.SecurityPermission setPolicy
* java.util.PropertyPermission * read,write
* java.util.PropertyPermission sun.nio.ch.bugLevel write
* javax.net.ssl.SSLPermission setHostnameVerifier
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@        WARNING: plugin forks a native controller        @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
This plugin launches a native controller that is not subject to the Java security manager nor to system call filters.
Continue with installation? [y/N]y
-> Installed x-pack
[root@linuxea-Node61 ~]#
The username and password should be kept the same on both nodes (and if you use x-pack, it has to be installed on both):
[root@linuxea-Node61 ~]# curl -u elastic -XPUT '10.0.1.61:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "linuxea"
}
'
Enter host password for user 'elastic':
{ }
node1 (master) configuration file. Once configured, there is a start order: start the master first, then the slave.
[root@linuxea-Node49 ~]# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: linuxea-app
node.name: master
path.data: /elk/data
path.logs: /elk/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.0.1.49
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.0.1.49"]
[root@linuxea-Node49 ~/elk]# mkdir /elk/{logs,data} -p && chown elasticsearch.elasticsearch -R /elk/
node2 (slave) configuration file:
[root@linuxea-Node61 ~]# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: linuxea-app
node.name: slave
path.data: /elk/data
path.logs: /elk/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
network.host: 10.0.1.61
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.0.1.49"]
[root@linuxea-Node61 ~/elk]# mkdir /elk/{logs,data} -p && chown elasticsearch.elasticsearch -R /elk/
Start node1 first, then node2; logs are written out as they come up.
Install haproxy; a TCP-layer proxy is enough:
defaults
    mode tcp
    log global
    option dontlognull
    option httpclose
    option httplog
    option forwardfor
    option abortonclose
    option redispatch
    timeout connect 5000ms
    timeout client 500000
    timeout server 500000
    maxconn 100000
    retries 3
listen stats
    mode http
    bind *:1080
    stats refresh 30s
    stats uri /stats
    stats realm Haproxy Manager
    stats auth admin:admin
    #stats hide-version
frontend frontend-web.com
    bind *:2900
    mode tcp
    # option httplog
    # log global
    default_backend elk.linuxea.com
backend elk.linuxea.com
    # option forwardfor header X-REALL-IP
    # option httpchk HEAD / HTTP/1.0
    balance roundrobin
    server node1 10.0.1.61:9200 check inter 2000 rise 30 fall 15
    server node2 10.0.1.49:9200 check inter 2000 rise 30 fall 15
Configure kibana to connect through haproxy:
[root@linuxea-Node49 /elk/data]# egrep -v "^$|^#" /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.10.240.20:2900"
kibana.index: ".kibana"
elasticsearch.username: "elastic"
elasticsearch.password: "linuxea"
logging.dest: stdout
logging.silent: false
logging.quiet: false
Try starting it and log in with the username and password. Once logged in you can check the cluster status, machine load, and detailed state including indices (screenshots in the original post).
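Once both nodes and haproxy are up, the cluster state can be confirmed from the command line. This is a minimal sketch, assuming the node addresses, the haproxy frontend on port 2900 and the elastic/linuxea credentials from the excerpt above.

```bash
# Cluster health as seen by each node directly (status should be "green"):
curl -s -u elastic:linuxea "http://10.0.1.49:9200/_cluster/health?pretty"
curl -s -u elastic:linuxea "http://10.0.1.61:9200/_cluster/health?pretty"

# Node list; the elected master is marked with '*' in the master column:
curl -s -u elastic:linuxea "http://10.0.1.49:9200/_cat/nodes?v"

# The same health query through the haproxy TCP frontend that kibana uses
# (10.10.240.20 is the haproxy address from kibana.yml; replace with yours):
curl -s -u elastic:linuxea "http://10.10.240.20:2900/_cluster/health?pretty"
```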
2017-09-03 · 4,047 reads · 0 comments · 0 likes
2017-09-01
linuxea: ELK 5.5 installation and configuration
Kibana 5 is a big step up from the old version 3 and brings some very nice features, such as login authentication and other component plugins. Straight into the installation; note that x-pack is not free. The structure is as follows (diagram in the original post).
Package downloads:
https://artifacts.elastic.co/downloads/logstash/logstash-5.5.1.rpm
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.1.rpm
https://artifacts.elastic.co/downloads/kibana/kibana-5.5.1-x86_64.rpm
https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.1-x86_64.rpm
Install elasticsearch:
[root@linuxea-Node49 ~/elk]# yum install elasticsearch -y
1. Install x-pack. If you need to reinstall this plugin, delete /etc/elasticsearch/x-pack/ first.
[root@linuxea-Node49 ~/elk]# /usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack
-> Downloading x-pack from elastic
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.io.FilePermission \\.\pipe\* read,write
* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* java.security.SecurityPermission createPolicy.JavaPolicy
* java.security.SecurityPermission getPolicy
* java.security.SecurityPermission putProviderProperty.BC
* java.security.SecurityPermission setPolicy
* java.util.PropertyPermission * read,write
* java.util.PropertyPermission sun.nio.ch.bugLevel write
* javax.net.ssl.SSLPermission setHostnameVerifier
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@        WARNING: plugin forks a native controller        @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
This plugin launches a native controller that is not subject to the Java security manager nor to system call filters.
Continue with installation? [y/N]y
-> Installed x-pack
[root@linuxea-Node49 ~/elk]#
2. Edit the configuration file, changing network.host and friends:
[root@linuxea-Node49 ~/elk]# sed -i 's/#network.host: 192.168.0.1/network.host: 0.0.0.0/g' /etc/elasticsearch/elasticsearch.yml
[root@linuxea-Node49 ~/elk]# sed -i 's/#cluster.name: my-application/cluster.name: linuxea-app/g' /etc/elasticsearch/elasticsearch.yml
[root@linuxea-Node49 ~/elk]# mkdir /elk/logs && chown elasticsearch.elasticsearch -R /elk/
[root@linuxea-Node49 ~/elk]# sed -i 's@#path.logs: /path/to/logs@path.logs: /elk/logs@g' /etc/elasticsearch/elasticsearch.yml
[root@linuxea-Node49 ~/elk]# systemctl restart elasticsearch.service
Sample configuration file:
[root@linuxea-Node49 /etc/logstash]# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: linuxea-app
node.name: master
path.data: /elk/data
path.logs: /elk/logs
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,logstash-*
network.host: 0.0.0.0
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.0.1.49"]
#discovery.zen.minimum_master_nodes:
#xpack.security.audit.enabled: true
#xpack.security.authc.accept_default_password: false
[root@linuxea-Node49 /etc/logstash]#
3. Configure login authentication. Set the elastic password; when prompted, enter the default password changeme. An empty JSON response means it worked:
[root@linuxea-Node49 /data/logs]# curl -u elastic -XPUT '127.0.0.1:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "linuxea"
}
'
Enter host password for user 'elastic':   (enter: changeme)
{ }
This part can be skipped:
[root@linuxea-Node49 ~/elk]# /usr/share/elasticsearch/bin/x-pack/syskeygen
Storing generated key in [/etc/elasticsearch/x-pack/system_key]...
Ensure the generated key can be read by the user that Elasticsearch runs as, permissions are set to owner read/write only
Fix the permissions:
[root@linuxea-Node49 ~/elk]# chmod 400 /etc/elasticsearch/x-pack/system_key
[root@linuxea-Node49 ~/elk]# chown elasticsearch.elasticsearch /etc/elasticsearch/x-pack/system_key
[root@linuxea-Node49 ~/elk]# echo "xpack.security.audit.enabled: true" >> /etc/elasticsearch/elasticsearch.yml
Then check the logs (screenshot in the original post).
Install kibana:
[root@linuxea-Node49 ~/elk]# yum install kibana -y
Sample configuration file:
[root@linuxea-Node49 /etc/logstash]# egrep -v "^#|^$" /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.10.240.20:2900"
elasticsearch.username: "elastic"
elasticsearch.password: "linuxea"
tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
[root@linuxea-Node49 /etc/logstash]#
Install the plugin:
[root@linuxea-Node49 /elk/logs]# /usr/share/kibana/bin/kibana-plugin install x-pack
Attempting to transfer from x-pack
Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/x-pack/x-pack-5.5.1.zip
Transferring 119276972 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete
[root@linuxea-Node49 /elk/logs]#
The configuration settings are explained at https://www.elastic.co/guide/en/kibana/5.5/settings.html
[root@linuxea-Node49 /elk/logs]# sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/g' /etc/kibana/kibana.yml
[root@linuxea-Node49 /elk/logs]# sed -i 's/#elasticsearch.username: "user"/elasticsearch.username: "elastic"/g' /etc/kibana/kibana.yml
[root@linuxea-Node49 /elk/logs]# sed -i 's/#elasticsearch.password: "pass"/elasticsearch.password: "linuxea"/g' /etc/kibana/kibana.yml
[root@linuxea-Node49 /elk/logs]# sed -i 's@#elasticsearch.url: "http://localhost:9200"@elasticsearch.url: "http://127.0.0.1:9200"@g' /etc/kibana/kibana.yml
[root@linuxea-Node49 /var/log]# sed -i 's/#server.port: 5601/server.port: 5601/g' /etc/kibana/kibana.yml
Set the kibana user's password; when prompted, enter the password linuxea:
[root@linuxea-Node49 /data/logs]# curl -u elastic -XPUT '127.0.0.1:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "linuxea"
}
'
Enter host password for user 'elastic':
{ }
Install logstash; plain yum is enough. Sample configuration file:
[root@DS-VM-Node49 /etc/logstash]# cat logstash.yml
node.name: node1
path.data: /elk/logstash/data
path.config: /etc/logstash/conf.d
log.level: info
path.logs: /elk/logstash/logs
Install x-pack for logstash:
[root@linuxea-Node49 /elk/elasticsearch-head]# /usr/share/logstash/bin/logstash-plugin install x-pack
Downloading file: https://artifacts.elastic.co/downloads/logstash-plugins/x-pack/x-pack-5.5.1.zip
Downloading [=============================================================] 100%
Installing file: /tmp/studtmp-2d494e1b2f721643348e5d8787188f1234f43369beb164da7a73bc94b899/x-pack-5.5.1.zip
Install successful
[root@linuxea-Node49 /etc/logstash]# sed -i 's/#log.level: info/log.level: info/g' /etc/logstash/logstash.yml
The logstash_system password needs changing here as well:
[root@linuxea-Node49 /data/logs]# curl -u elastic -XPUT '127.0.0.1:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "linuxea"
}
'
Enter host password for user 'elastic':
{ }
Redis connection; redis is already deployed (running in docker here):
[root@linuxea-Node49 /etc/logstash/conf.d]# cat redis_input.conf
input {
  redis {
    host => "10.10.0.98"
    port => "6379"
    key => "filebeat"
    data_type => "list"
    password => "OTdmOWI4ZTM4NTY1M2M4OTZh"
    threads => 20
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-nginx-error-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "linuxea"
  }
  stdout { codec => rubydebug }
}
Install the ingest modules; they are used later:
[root@linuxea-Node49 /etc/logstash/conf.d]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
-> Downloading ingest-geoip from elastic
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission accessDeclaredMembers
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y
-> Installed ingest-geoip
[root@linuxea-Node49 /etc/logstash/conf.d]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
-> Downloading ingest-user-agent from elastic
[=================================================] 100%
-> Installed ingest-user-agent
[root@linuxea-Node49 /etc/logstash/conf.d]#
That completes the ELK installation.
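The excerpt changes the built-in x-pack passwords one curl call at a time; the same can be done for all three built-in users in a small loop. This is a sketch, assuming elasticsearch is reachable on 127.0.0.1:9200, that the current elastic password is accepted interactively, and that every account should end up with the password linuxea, as in the post.

```bash
# Set the password for the three built-in x-pack users in one pass.
# curl prompts for the current 'elastic' password on each iteration
# (changeme on a fresh install, linuxea after the first change).
for u in elastic kibana logstash_system; do
  curl -u elastic -XPUT "127.0.0.1:9200/_xpack/security/user/${u}/_password?pretty" \
       -H 'Content-Type: application/json' \
       -d '{ "password": "linuxea" }'
done
```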
2017-09-01 · 4,200 reads · 0 comments · 0 likes
2017-03-29
Graylog: an example of collecting file logs
Graylog offers many optional ways to collect logs; here we use the bundled collector-sidecar. There are plenty of collection approaches, so no rambling; a simple diagram is in the original post. Official docs: http://docs.graylog.org/en/latest/pages/collector_sidecar.html
For nxlog, installation is simply: yum install http://nxlog.co/system/files/products/files/1/nxlog-ce-2.9.1716-1_rhel7.x86_64.rpm
Client installation:
[root@linuxea-113 /etc]# yum install -y https://github.com/Graylog2/collector-sidecar/releases/download/0.0.9/collector-sidecar-0.0.9-1.x86_64.rpm
[root@linuxea-113 /etc]# graylog-collector-sidecar -service install
Configure it to send nginx's access.log to graylog:
[root@linuxea-113 /etc]# cat /etc/graylog/collector-sidecar/collector_sidecar.yml
server_url: http://10.10.240.117:9000/api/
update_interval: 10
tls_skip_verify: false
send_status: true
list_log_files:
  - /var/log/nginx/access.log
node_id: 10.0.1.49
collector_id: file:/etc/graylog/collector-sidecar/collector-id
log_path: /var/log/graylog/collector-sidecar
log_rotation_time: 86400
log_max_age: 604800
tags:
  - nginx_access
backends:
#    - name: nxlog
#      enabled: false
#      binary_path: /usr/bin/nxlog
#      configuration_path: /etc/graylog/collector-sidecar/generated/nxlog.conf
    - name: filebeat
      enabled: true
      binary_path: /usr/bin/filebeat
      configuration_path: /etc/graylog/collector-sidecar/generated/filebeat.yml
With the configuration in place, start collector-sidecar and set up the server side (a verification sketch follows the list below):
1. Open the web UI and choose Collectors from the System drop-down menu.
2. Select Manage configurations.
3. Create a configuration.
4. Create an output; hosts is the graylog IP address.
5. Create an input; for "type of input file" select the name of the output.
6. The tags must match the tags in the configuration file.
7. When that is done, go back to Collectors; seeing the node named after node_id together with the tag names means everything is OK.
8. Under System --> Inputs, create a new Beats input. Once you see traffic moving, the configuration has taken effect and the logs are being collected. Other features will be covered next time.
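A minimal sketch for verifying the client side, using the paths from collector_sidecar.yml above; the exact service and log file names may differ between sidecar versions, so treat them as assumptions.

```bash
# Did the "-service install" step register the sidecar service, and is it running?
systemctl status collector-sidecar --no-pager

# The sidecar writes its own logs under log_path from collector_sidecar.yml:
ls -l /var/log/graylog/collector-sidecar/

# Once the tags match a server-side configuration, the rendered filebeat
# configuration appears at the backend's configuration_path:
cat /etc/graylog/collector-sidecar/generated/filebeat.yml
```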
2017-03-29 · 17,013 reads · 3 comments · 1 like
2017-03-29
Graylog 2.2 detailed deployment and installation
ELK is not great at handling multi-line logs, and it does not preserve the original log format either. Graylog can collect and monitor logs from many different applications, including files and syslog, supports hand-written plugins on the client side, and the official site offers several installation methods. Installation is easy, a single host is enough for deployment, and it supports rule-based alerting. In short:
1. An all-in-one solution that is easy to install, without the integration problems between ELK's three separate systems.
2. It collects the raw log, and fields such as http_status_code or response_time can be added after the fact.
3. You can write your own collection scripts and send logs to the Graylog server with curl/nc in the custom GELF format; Fluentd and Logstash both have plugins that emit GELF messages. Rolling your own gives a lot of freedom: in practice it is enough to watch the log's modify events with inotifywait and ship the newly appended lines to the Graylog server with curl/netcat.
4. Search results are highlighted, just like Google.
5. The search syntax is simple, e.g. source:mongo AND reponse_time_ms:>5000, avoiding raw elasticsearch JSON query syntax.
6. Search conditions can be exported as elasticsearch query JSON, which makes it easy to write search scripts that call the elasticsearch REST API directly.
This installation uses the latest version, 2.2. Graylog (like ELK) has an operating pattern, inherent to log processing, of periodically creating new indices; because the data is already partitioned at a higher level (across indices), individual indices do not need further splitting. Retention = retention setting * maximum number of indices in the cluster.
Site: graylog.org; documentation: http://docs.graylog.org/en/latest/index.html
MongoDB installation. With docker: curl -Lk https://raw.githubusercontent.com/LinuxEA-Mark/docker-mongodb/master/docker_install_mongodb.sh | bash
Conventional installation:
[root@linuxea.com ~]# yum install java-1.8.0-openjdk-headless.x86_64 epel-release pwgen -y
[root@linuxea.com ~]# cat > /etc/yum.repos.d/mongodb-org-3.2.repo << EOF
[mongodb-org-3.2]
name = MongoDB Repository
baseurl = https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-3.2.asc
EOF
[root@linuxea.com ~]# yum install mongodb-org -y
[root@linuxea.com ~]# chkconfig --add mongod
[root@linuxea.com ~]# systemctl daemon-reload
[root@linuxea.com ~]# systemctl enable mongod.service
[root@linuxea.com ~]# systemctl start mongod.service
Elasticsearch installation:
[root@linuxea.com ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
[root@linuxea.com ~]# cat > /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
[root@linuxea.com ~]# yum install elasticsearch -y
Configuration:
[root@linuxea.com /etc/rsyslog.d]# egrep -v "^$|^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: graylog
path.data: /data/Elasticsearch
path.logs: /data/Elasticsearch/logs
network.host: 10.10.240.117
http.port: 9200
Start it:
[root@linuxea.com ~]# chkconfig --add elasticsearch
[root@linuxea.com ~]# systemctl daemon-reload
[root@linuxea.com ~]# systemctl enable elasticsearch.service
[root@linuxea.com ~]# systemctl restart elasticsearch.service
graylog-server installation:
[root@linuxea.com ~]# yum install -y java-1.8.0-openjdk-headless.x86_64
[root@linuxea.com ~]# rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-2.2-repository_latest.rpm
[root@linuxea.com ~]# yum install graylog-server -y
[root@linuxea.com /data/mongodb]# cat /etc/graylog/server/server.conf
is_master = true
node_id_file = /etc/graylog/server/node-id
root_username = admin
root_timezone = Asia/Shanghai
password_secret = ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f
root_password_sha2 = 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.10.240.117:9000/api/
web_listen_uri = http://10.10.240.117:9000/
web_endpoint_uri = http://10.10.240.117:9000/api
web_enable = true
web_enable_cors = true
elasticsearch_discovery_zen_ping_unicast_hosts = 10.10.240.117:9300
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
elasticsearch_cluster_name = graylog
allow_leading_wildcard_searches = false
allow_highlighting = true
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
Then log in directly with admin / password.
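The server.conf above contains a password_secret and a root_password_sha2; both are normally generated rather than typed in by hand. A short sketch, assuming pwgen (installed earlier in the excerpt) is available:

```bash
# Random secret used by graylog-server to encrypt/salt stored passwords:
pwgen -N 1 -s 96

# SHA-256 hash of the web-interface admin password; the hash in the excerpt is
# the hash of the string "password", which is why "admin / password" logs in:
echo -n 'password' | sha256sum | cut -d' ' -f1
```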
2017-03-29 · 8,023 reads · 0 comments · 0 likes
2016-04-02
Real-time log collection and analysis with the ELK Stack
The ELK stack is the combination of three open-source projects, Elasticsearch, Logstash and Kibana, which together form a powerful real-time log collection, analysis and display system.
Logstash: a log collection tool that can gather all kinds of logs from local disk, from network services (listening on its own port and accepting logs), and from message queues, then filter and analyse them and feed them into Elasticsearch.
Elasticsearch: distributed log storage and search with native cluster support; logs for a given period go into their own index, which speeds up querying and access.
Kibana: a web visualisation tool that displays the logs stored in Elasticsearch and can also build dazzling dashboards.
Topology: nginx proxies a two-node Elasticsearch cluster; on the clients, logstash ships the local logs to redis, and a logstash next to redis forwards the data on to ES.
Environment:
[root@localhost logs]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@localhost logs]# uname -rm
2.6.32-504.el6.x86_64 x86_64
[root@localhost logs]#
Software used: elasticsearch-1.7.4.tar.gz, kibana-4.1.1-linux-x64.tar.gz, logstash-1.5.5.tar.gz. Time sync: ntpdate time.nist.gov
Elasticsearch cluster installation and configuration.
One: 192.168.1.8, download and install elasticsearch:
yum -y install java-1.8.0 lrzsz git
wget -P /usr/local https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.4.tar.gz
cd /usr/local
tar xf elasticsearch-1.7.4.tar.gz
ln -s elasticsearch-1.7.4 elasticsearch
Edit the configuration file, vim elasticsearch/config/elasticsearch.yml:
cluster.name: LinuxEA                              # cluster name
node.name: "linuxEA-ES1"                           # node name
node.master: true                                  # master-eligible
node.data: true                                    # stores data
index.number_of_shards: 5                          # shards
index.number_of_replicas: 1
path.conf: /usr/local/elasticsearch/config/        # config path
path.data: /data/es-data                           # data path
path.work: /data/es-worker
path.logs: /usr/local/elasticsearch/logs/          # logs
path.plugins: /usr/local/elasticsearch/plugins     # plugins
bootstrap.mlockall: true                           # lock memory, do not swap
network.host: 192.168.1.8
http.port: 9200
Create the directories:
mkdir /data/es-data -p
mkdir /data/es-worker -p
mkdir /usr/local/elasticsearch/logs
mkdir /usr/local/elasticsearch/plugins
Download the service wrapper (init script):
git clone https://github.com/elastic/elasticsearch-servicewrapper.git
mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/
/usr/local/elasticsearch/bin/service/elasticsearch install
Edit its configuration, vim /usr/local/elasticsearch/bin/service/elasticsearch.conf:
set.default.ES_HOME=/usr/local/elasticsearch    # ES install path; must match the actual install location
set.default.ES_HEAP_SIZE=1024
Start it:
[root@elk1 local]# /etc/init.d/elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:4355
[root@elk1 local]# netstat -tlntp|grep -E "9200|9300"
tcp   0   0 ::ffff:192.168.1.8:9300   :::*   LISTEN   4357/java
tcp   0   0 ::ffff:192.168.1.8:9200   :::*   LISTEN   4357/java
[root@elk1 local]#
curl check:
[root@elk1 local]# curl http://192.168.1.8:9200
{
  "status" : 200,
  "name" : "linuxEA-ES1",
  "cluster_name" : "LinuxEA",
  "version" : {
    "number" : "1.7.4",
    "build_hash" : "0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e",
    "build_timestamp" : "2015-12-15T11:25:18Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
[root@elk1 local]#
Two: 192.168.1.7, Elasticsearch node 2:
[root@elk2 local]# vim elasticsearch/config/elasticsearch.yml
cluster.name: LinuxEA
node.name: "linuxEA-ES2"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.conf: /usr/local/elasticsearch/config/
path.data: /data/es-data
path.work: /data/es-worker
path.logs: /usr/local/elasticsearch/logs/
path.plugins: /usr/local/elasticsearch/plugins
bootstrap.mlockall: true
network.host: 192.168.1.7
http.port: 9200
Create the directories:
mkdir /data/es-data -p
mkdir /data/es-worker -p
mkdir /usr/local/elasticsearch/logs
mkdir /usr/local/elasticsearch/plugins
Download the service wrapper:
git clone https://github.com/elastic/elasticsearch-servicewrapper.git
mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/
/usr/local/elasticsearch/bin/service/elasticsearch install
Edit vim /usr/local/elasticsearch/bin/service/elasticsearch.conf:
set.default.ES_HOME=/usr/local/elasticsearch    # ES install path; must match the actual install location
set.default.ES_HEAP_SIZE=1024
Start it:
[root@elk2 local]# /etc/init.d/elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:4355
[root@elk2 ~]# netstat -tlntp|grep -E "9200|9300"
tcp   0   0 ::ffff:192.168.1.7:9300   :::*   LISTEN   4568/java
tcp   0   0 ::ffff:192.168.1.7:9200   :::*   LISTEN   4568/java
[root@elk2 ~]#
curl check:
[root@elk2 ~]# curl http://192.168.1.7:9200
{
  "status" : 200,
  "name" : "linuxEA-ES2",
  "cluster_name" : "LinuxEA",
  "version" : {
    "number" : "1.7.4",
    "build_hash" : "0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e",
    "build_timestamp" : "2015-12-15T11:25:18Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
[root@elk2 ~]#
Three: 192.168.1.7, install the elasticsearch-head cluster plugin. In its UI a five-pointed star marks the master node and a dot marks worker nodes.
[root@elk2 ~]# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head
Four: 192.168.1.6, install redis + logstash; this logstash mainly moves the data from redis into ES. Install the java dependency and logstash:
yum -y install java-1.8.0 lrzsz git
wget -P /usr/local https://download.elastic.co/logstash/logstash/logstash-1.5.5.tar.gz
cd /usr/local
tar xf logstash-1.5.5.tar.gz
ln -s logstash-1.5.5 logstash
Init script ([root@localhost local]# vim /etc/init.d/logstash):
#!/bin/sh
# Init script for logstash
# Maintained by Elasticsearch
# Generated by pleaserun.
# Implemented based on LSB Core 3.1:
#   * Sections: 20.2, 20.3
### BEGIN INIT INFO
# Provides:          logstash
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description:
# Description:       Starts Logstash as a daemon.
### END INIT INFO
PATH=/sbin:/usr/sbin:/bin:/usr/bin
export PATH
if [ `id -u` -ne 0 ]; then
  echo "You need root privileges to run this script"
  exit 1
fi
name=logstash
pidfile="/var/run/$name.pid"
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/usr/local/logstash
LS_HEAP_SIZE="500m"
LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}"
LS_LOG_DIR=/usr/local/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_FILE=/etc/logstash.conf
LS_OPEN_FILES=16384
LS_NICE=19
LS_OPTS=""
[ -r /etc/default/$name ] && . /etc/default/$name
[ -r /etc/sysconfig/$name ] && . /etc/sysconfig/$name
program=/usr/local/logstash/bin/logstash
args="agent -f ${LS_CONF_FILE} -l ${LS_LOG_FILE} ${LS_OPTS}"
start() {
  JAVA_OPTS=${LS_JAVA_OPTS}
  HOME=${LS_HOME}
  export PATH HOME JAVA_OPTS LS_HEAP_SIZE LS_JAVA_OPTS LS_USE_GC_LOGGING
  # set ulimit as (root, presumably) first, before we drop privileges
  ulimit -n ${LS_OPEN_FILES}
  # Run the program!
  nice -n ${LS_NICE} sh -c "
    cd $LS_HOME
    ulimit -n ${LS_OPEN_FILES}
    exec \"$program\" $args
  " > "${LS_LOG_DIR}/$name.stdout" 2> "${LS_LOG_DIR}/$name.err" &
  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile
  echo "$name started."
  return 0
}
stop() {
  # Try a few times to kill TERM the program
  if status ; then
    pid=`cat "$pidfile"`
    echo "Killing $name (pid $pid) with SIGTERM"
    kill -TERM $pid
    # Wait for it to exit.
    for i in 1 2 3 4 5 ; do
      echo "Waiting $name (pid $pid) to die..."
      status || break
      sleep 1
    done
    if status ; then
      echo "$name stop failed; still running."
    else
      echo "$name stopped."
    fi
  fi
}
status() {
  if [ -f "$pidfile" ] ; then
    pid=`cat "$pidfile"`
    if kill -0 $pid > /dev/null 2> /dev/null ; then
      # process by this pid is running.
      # It may not be our pid, but that's what you get with just pidfiles.
      # TODO(sissel): Check if this process seems to be the same as the one we
      # expect. It'd be nice to use flock here, but flock uses fork, not exec,
      # so it makes it quite awkward to use in this case.
      return 0
    else
      return 2 # program is dead but pid file exists
    fi
  else
    return 3 # program is not running
  fi
}
force_stop() {
  if status ; then
    stop
    status && kill -KILL `cat "$pidfile"`
  fi
}
case "$1" in
  start)
    status
    code=$?
    if [ $code -eq 0 ]; then
      echo "$name is already running"
    else
      start
      code=$?
    fi
    exit $code
    ;;
  stop) stop ;;
  force-stop) force_stop ;;
  status)
    status
    code=$?
    if [ $code -eq 0 ] ; then
      echo "$name is running"
    else
      echo "$name is not running"
    fi
    exit $code
    ;;
  restart) stop && start ;;
  reload) stop && start ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|force-stop|status|restart}" >&2
    exit 3
    ;;
esac
exit $?
Enable at boot:
[root@localhost local]# chmod +X /etc/init.d/logstash
chkconfig --add logstash
chkconfig logstash on
1. Edit the logstash configuration file:
[root@localhost local]# vim /etc/logstash.conf
input {        # collect from standard input
  stdin {}
}
output {
  elasticsearch {    # write the logs into ES
    host => ["172.16.4.102:9200","172.16.4.103:9200"]    # several hosts may be listed, or a single cluster member
    protocol => "http"
  }
}
2. Write some data by hand:
[root@localhost local]# /usr/local/logstash/bin/logstash -f /etc/logstash.conf
Logstash startup completed
hello word!
3. Once written, check ES; the data is there and an index has been created automatically.
4. redis
1) Install redis:
yum -y install redis
vim /etc/redis.conf
bind 192.168.1.6
/etc/init.d/redis start
2) Install logstash as above.
3) logstash + redis: logstash reads the redis contents into ES.
cat /etc/logstash.conf
input {
  redis {
    host => "192.168.1.6"
    data_type => "list"
    key => "nginx-access.log"
    port => "6379"
    db => "2"
  }
}
output {
  elasticsearch {
    host => ["192.168.1.7:9200","192.168.1.8:9200"]
    index => "nginx-access-log-%{+YYYY.MM.dd}"
    protocol => "http"
    workers => 5
    template_overwrite => true
  }
}
Five: 192.168.1.4, nginx + logstash example. Install logstash and nginx; this logstash only has to push the nginx data to redis. Install logstash as in step four.
yum -y install pcre pcre-devel openssl-devel openssl
wget http://nginx.org/download/nginx-1.6.3.tar.gz
groupadd -r nginx
useradd -g nginx -r nginx
ln -s /usr/local/nginx-1.6.3 /usr/local/nginx
Compile and install:
./configure \
  --prefix=/usr/local/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --user=nginx --group=nginx \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --with-http_ssl_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --with-http_flv_module \
  --with-http_mp4_module \
  --http-client-body-temp-path=/var/tmp/nginx/client \
  --http-proxy-temp-path=/var/tmp/nginx/proxy \
  --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi
make && make install
mkdir -pv /var/tmp/nginx/{client,fastcgi,proxy,uwsgi}
mkdir /usr/local/nginx/logs/
/usr/local/nginx/sbin/nginx
Change the log format, vim /etc/nginx/nginx.conf:
log_format logstash_json '{"@timestamp":"$time_iso8601",'
    '"host": "$server_addr",'
    '"client": "$remote_addr",'
    '"size": $body_bytes_sent,'
    '"responsetime": $request_time,'
    '"domain": "$host",'
    '"url":"$uri",'
    '"referer": "$http_referer",'
    '"agent": "$http_user_agent",'
    '"status":"$status"}';
access_log logs/access_json.access.log logstash_json;
The log is being generated:
[root@localhost nginx]# ll logs/
total 8
-rw-r--r--. 1 root root 6974 Mar 31 08:44 access_json.access.log
And the format has been changed as expected:
[root@localhost nginx]# cat /usr/local/nginx/logs/access_json.access.log
{"@timestamp":"2016-03-31T08:44:48-07:00","host": "192.168.1.4","client": "192.168.1.200","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
{"@timestamp":"2016-03-31T08:44:48-07:00","host": "192.168.1.4","client": "192.168.1.200","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
{"@timestamp":"2016-03-31T08:44:48-07:00","host": "192.168.1.4","client": "192.168.1.200","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
Pass the nginx log to redis:
[root@elk1 logs]# cat /etc/logstash.conf
input {
  file {
    path => "/usr/local/nginx/logs/access_json.access.log"
    codec => "json"
  }
}
output {
  redis {
    host => "192.168.1.6"
    data_type => "list"
    key => "nginx-access.log"
    port => "6379"
    db => "2"
  }
}
[root@elk1 logs]#
Start logstash on the redis host and on the nginx host respectively:
nohup /usr/local/logstash/bin/logstash -f /etc/logstash.conf
Six: 192.168.1.7, ES + kibana:
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar xf kibana-4.1.1-linux-x64.tar.gz
ln -sv kibana-4.1.1-linux-x64 kibana
vim /usr/local/kibana/config/kibana.yml
elasticsearch_url: "http://192.168.1.7:9200"
pid_file: /var/run/kibana.pid
log_file: /usr/local/kibana/kibana.log
nohup ./kibana/bin/kibana &
192.168.1.8, ES + kibana:
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar xf kibana-4.1.1-linux-x64.tar.gz
ln -sv kibana-4.1.1-linux-x64 kibana
vim /usr/local/kibana/config/kibana.yml
elasticsearch_url: "http://192.168.1.8:9200"
pid_file: /var/run/kibana.pid
log_file: /usr/local/kibana/kibana.log
nohup ./kibana/bin/kibana &
Seven: 192.168.1.200, nginx reverse proxy in front of ES + kibana (192.168.1.7 and 192.168.1.8), with access control by account and by IP. auth_basic "Only for VIPs"; sets the realm name, and auth_basic_user_file /etc/nginx/users/.htpasswd; points at the (hidden) file that holds the allowed users. deny 172.16.0.1; refuses 172.16.0.1 (allow grants access); for example, to allow only 172.16.0.1/16 and refuse everyone else: allow 172.16.0.1/16; deny all;. The full configuration:
[root@localhost nginx]# vim nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format logstash_json '{"@timestamp":"$time_iso8601",'
        '"host": "$server_addr",'
        '"client": "$remote_addr",'
        '"size": $body_bytes_sent,'
        '"responsetime": $request_time,'
        '"domain": "$host",'
        '"url":"$uri",'
        '"referer": "$http_referer",'
        '"agent": "$http_user_agent",'
        '"status":"$status"}';
    access_log logs/access_json.access.log logstash_json;
    sendfile on;
    keepalive_timeout 65;
    upstream kibana {                                # backend host group
        server 192.168.1.8:5601 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.7:5601 weight=1 max_fails=2 fail_timeout=2;
    }
    server {
        listen 80;
        server_name localhost;
        auth_basic "Only for ELK Stack VIPs";        # basic auth realm
        auth_basic_user_file /etc/nginx/.htpasswd;   # user/password file location
        allow 192.168.1.200;                         # allow 192.168.1.200
        allow 192.168.1.0/24;                        # allow the 192.168.1.0 network
        allow 10.0.0.1;                              # allow 10.0.0.1
        allow 10.0.0.254;                            # allow 10.0.0.254
        deny all;                                    # refuse everything else
        location / {                                 # reverse proxy: forward requests to the kibana upstream
            proxy_pass http://kibana/;
            index index.html index.htm;
        }
    }
}
Fix the permissions:
[root@localhost nginx]# chmod 400 /etc/nginx/.htpasswd
[root@localhost nginx]# chown nginx. /etc/nginx/.htpasswd
[root@localhost nginx]# cat /etc/nginx/.htpasswd
linuxea:$apr1$EGCdQ5wx$bD2CwXgww3y/xcCjVBcCD0
[root@localhost nginx]#
Add the user and password:
[root@localhost ~]# htpasswd -c -m /etc/nginx/.htpasswd linuxea
New password:
Re-type new password:
Adding password for user linuxea
[root@localhost ~]#
Now it can be reached via 192.168.1.4; what gets collected here is the proxy nginx's own log.
In kibana, click Settings, then Add. The index name must follow the fixed YYYY.MM.DD pattern; the existing index names can be looked up at http://IP:9200/_plugin/head/. Search examples: status:200 AND hosts:192.168.1.200, status:200 OR status:400, status:[400 TO 499]. If you have several indices they are suggested as you type; then click Create. For additional logs use "+ Add New". Then open Discover, pick a suitable time range, and search on whichever fields you need. Click Visualize and choose what to chart; you can also make a selection in Discover and click Visualize from there. More kibana charting examples: kibana.logstash.es
When one machine collects several logs, tell them apart with if conditions, keys and redis db numbers:
input {
  file {
    type => "apache"
    path => "/date/logs/access.log"
  }
  file {
    type => "php-error.log"
    path => "/data/logs/php-error.log"
  }
}
output {
  if [type] == "apache" {
    redis {
      host => "192.168.1.6"
      port => "6379"
      db => "1"
      data_type => "list"
      key => "access.log"
    }
  }
  if [type] == "php-error.log" {
    redis {
      host => "192.168.1.6"
      port => "6379"
      db => "2"
      data_type => "list"
      key => "php-error.log"
    }
  }
}
Document download (password: pe2n).
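End-to-end spot checks for this stack, reusing the addresses from the post (ES on 192.168.1.7/192.168.1.8, redis db 2 on 192.168.1.6, kibana behind the authenticating nginx on 192.168.1.200). This is a sketch only; adjust it to your own layout.

```bash
# elasticsearch cluster health and node list (both linuxEA-ES nodes should appear):
curl -s "http://192.168.1.8:9200/_cluster/health?pretty"
curl -s "http://192.168.1.8:9200/_cat/nodes?v"

# backlog of nginx events waiting in redis db 2:
redis-cli -h 192.168.1.6 -n 2 llen nginx-access.log

# kibana through the basic-auth nginx proxy; curl prompts for the htpasswd
# password and an HTTP 200 means the proxy, auth and kibana are all working:
curl -s -u linuxea -o /dev/null -w '%{http_code}\n' http://192.168.1.200/
```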
2016-04-02 · 3,736 reads · 0 comments · 0 likes
2016-03-21
logstash-nginx-json-es(6)
Install nginx:
yum -y install pcre pcre-devel openssl-devel
wget http://nginx.org/download/nginx-1.6.3.tar.gz
groupadd -r nginx
useradd -g nginx -r nginx
ln -s /usr/local/nginx-1.6.3 /usr/local/nginx
Compile:
./configure \
  --prefix=/usr/local/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --user=nginx --group=nginx \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --with-http_ssl_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --with-http_flv_module \
  --with-http_mp4_module \
  --http-client-body-temp-path=/var/tmp/nginx/client \
  --http-proxy-temp-path=/var/tmp/nginx/proxy \
  --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi
make && make install
mkdir -pv /var/tmp/nginx/{client,fastcgi,proxy,uwsgi}
mkdir /usr/local/nginx/logs/
/usr/local/nginx/sbin/nginx
Edit the nginx configuration (vim /etc/nginx/nginx.conf) and add the following:
#access_log logs/access.log main;
log_format logstash_json '{"@timestamp":"$time_iso8601",'
    '"host": "$server_addr",'
    '"client": "$remote_addr",'
    '"size": $body_bytes_sent,'
    '"responsetime": $request_time,'
    '"domain": "$host",'
    '"url":"$uri",'
    '"referer": "$http_referer",'
    '"agent": "$http_user_agent",'
    '"status":"$status"}';
Then change the access_log line to:
access_log logs/access_json.access.log logstash_json;
Hit the site to test:
[root@elk1 logs]# ab -n1000 -c10 http://192.168.1.4:81/
Check the log:
[root@elk1 nginx]# cat /usr/local/nginx/logs/access_json.access.log
{"@timestamp":"2016-03-20T05:46:57-07:00","host": "192.168.1.4","client": "192.168.1.3","size": 612,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"200"}
{"@timestamp":"2016-03-20T05:46:57-07:00","host": "192.168.1.4","client": "192.168.1.3","size": 570,"responsetime": 0.000,"domain": "192.168.1.4","url":"/favicon.ico","referer": "http://192.168.1.4:81/","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"404"}
{"@timestamp":"2016-03-20T05:46:59-07:00","host": "192.168.1.4","client": "192.168.1.3","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
{"@timestamp":"2016-03-20T05:46:59-07:00","host": "192.168.1.4","client": "192.168.1.3","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
[root@elk1 nginx]#
Generate some more logs for logstash to collect:
[root@elk1 nginx]# ab -n1000 -c10 http://192.168.1.4:81/
[root@elk1 logs]# ll
total 440
-rw-r--r-- 1 root root 449286 Mar 20 05:57 access_json.access.log
[root@elk1 logs]#
Once the test log looks good, change the logstash configuration to push access_json.access.log into redis:
[root@elk1 logs]# cat /etc/logstash.conf
input {
#  file {
#    path => "/var/log/messages"
#    type => "system-log"
#  }
  file {
    path => "/usr/local/nginx/logs/access_json.access.log"
    codec => "json"
  }
}
output {
#  redis {
#    host => "192.168.1.6"
#    data_type => "list"
#    key => "system.messages"
#    port => "6379"
#    db => "1"
#  }
  redis {
    host => "192.168.1.6"
    data_type => "list"
    key => "nginx-access.log"
    port => "6379"
    db => "2"
  }
}
[root@elk1 logs]#
Then generate some more traffic:
[root@elk1 logs]# ab -n1000 -c10 http://192.168.1.4:81/
Then check on the redis side whether the data arrived:
redis 192.168.1.6:6379[2]> select 2
OK
redis 192.168.1.6:6379[2]> keys *
1) "nginx-access.log"
redis 192.168.1.6:6379[2]> llen nginx-access.log
(integer) 1000
redis 192.168.1.6:6379[2]>
With the data confirmed, change the logstash file on the redis host to forward it to ES:
[root@yum-down ~]# cat /etc/logstash.conf
input {
#  redis {
#    host => "192.168.1.6"
#    data_type => "list"
#    key => "test.log"
#    port => "6379"
#    db => "1"
#  }
  redis {
    host => "192.168.1.6"
    data_type => "list"
    key => "nginx-access.log"    # key name must match the one used in redis
    port => "6379"
    db => "2"                    # db 2
  }
}
output {
#  elasticsearch {
#    host => ["192.168.1.4:9200","192.168.1.5:9200"]
#    index => "redis-system-messages-%{+YYYY.MM.dd.HH}"
#    protocol => "http"
#    workers => 5
#    template_overwrite => true
#  }
  elasticsearch {
    host => ["192.168.1.4:9200","192.168.1.5:9200"]
    index => "nginx-access-log-%{+YYYY.MM.dd.HH}"    # index name as it will appear in ES
    protocol => "http"
    workers => 5
    template_overwrite => true
  }
}
[root@yum-down ~]#
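Since the log_format writes one JSON object per line, the log can be validated before logstash ever reads it, and the redis key checked afterwards. A small sketch using the paths and hosts from the excerpt, assuming python is present on the host:

```bash
# Is each line valid JSON? An error here means the log_format is broken.
tail -n 1 /usr/local/nginx/logs/access_json.access.log | python -m json.tool

# Did the events reach redis db 2 under the expected key?
redis-cli -h 192.168.1.6 -n 2 llen nginx-access.log
```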
2016-03-21 · 4,523 reads · 0 comments · 0 likes
2016-03-20
logstash-redis-es(5)
Install redis. Logstash will store its events in redis, and the logstash instance on the redis host then forwards them to ES.
yum -y install redis
vim /etc/redis.conf
bind 192.168.1.6
/etc/init.d/redis start
Connect: redis-cli -h 192.168.1.6
Logstash configuration for the test:
[root@elk1 ~]# vim /etc/logstash.conf
input {
  file {
    path => "/var/log/messages"
    type => "system-log"
  }
  file {
    path => "/root/test.log"
    type => "test.log"
  }
}
output {
  if [type] == "system-log" {
    elasticsearch {
      host => ["192.168.1.4:9200","192.168.1.5:9200"]
      index => "system-messages-%{+YYYY.MM.dd.HH}"
      protocol => "http"
      workers => 5
      template_overwrite => true
    }
  }
  if [type] == "test.log" {
    elasticsearch {
      host => ["192.168.1.4:9200","192.168.1.5:9200"]
      index => "test.log-%{+YYYY.MM.dd.HH}"
      protocol => "http"
      workers => 5
      template_overwrite => true
    }
  }
  redis {
    host => "192.168.1.6"    # redis host IP
    data_type => "list"      # data type: list
    key => "test.log"        # key to store under
    port => "6379"           # port
    db => "1"                # db number; different dbs can separate different log types
  }
}
Append something to /var/log/messages so there is data to test with:
[root@elk1 ~]# cat /etc/logstash.conf >> /var/log/messages
[root@elk1 ~]# cat /etc/logstash.conf >> /var/log/messages
Log in to redis and take a look:
[root@yum-down ~]# redis-cli -h 192.168.1.6
redis 192.168.1.6:6379> select 1
OK
redis 192.168.1.6:6379[1]> keys *
1) "test.log"
redis 192.168.1.6:6379[1]> LLEN test.log        # how many entries
(integer) 75
redis 192.168.1.6:6379[1]> LINDEX test.log -1   # look at the last entry
"{\"message\":\"}\",\"@version\":\"1\",\"@timestamp\":\"2016-03-20T11:24:04.602Z\",\"host\":\"elk1\",\"path\":\"/var/log/messages\",\"type\":\"system-log\"}"
redis 192.168.1.6:6379[1]>
With the test done, install logstash on the redis machine to read the redis contents into ES:
tar xf logstash-1.5.5.tar.gz
ln -sv logstash-1.5.5 logstash
Logstash configuration on the shipper:
[root@elk1 ~]# cat /etc/logstash.conf
input {
  file {
    path => "/var/log/messages"
    type => "system-log"
  }
}
output {
  redis {
    host => "192.168.1.6"
    data_type => "list"
    key => "system.messages"
    port => "6379"
    db => "1"
  }
}
[root@elk1 ~]#
Logstash configuration on the redis host (redis to elasticsearch):
[root@yum-down ~]# cat /etc/logstash.conf
input {
  redis {
    host => "192.168.1.6"
    data_type => "list"
    key => "test.log"
    port => "6379"
    db => "1"
  }
}
output {
  elasticsearch {
    host => ["192.168.1.4:9200","192.168.1.5:9200"]
    index => "redis-system-messages-%{+YYYY.MM.dd.HH}"
    protocol => "http"
    workers => 5
    template_overwrite => true
  }
}
[root@yum-down ~]#
[root@elk1 ~]# cat /etc/shadow >> /var/log/messages
After appending this, log events can be seen flowing in.
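Once the consuming logstash on the redis host is running, the list it reads should drain back towards zero while the index appears in ES. A sketch with the host, key and index names from the excerpt; note that the key name has to match whatever the shipper actually writes.

```bash
# Watch the list length fall as events are consumed from db 1:
watch -n 2 'redis-cli -h 192.168.1.6 -n 1 llen test.log'

# Confirm the index is being created on the elasticsearch side:
curl -s "http://192.168.1.4:9200/_cat/indices/redis-system-messages-*?v"
```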
2016-03-20 · 3,399 reads · 0 comments · 0 likes
2016-03-20
logstash-1.5.5 test notes (4)
YUM installation:
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix, for example logstash.repo:
[logstash-2.2]
name=Logstash repository for 2.2.x packages
baseurl=http://packages.elastic.co/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Reference: http://udn.yyuap.com/doc/logstash-best-practice-cn/output/elasticsearch.html. The 2.2 release was unfamiliar to me and caused a lot of problems, so this time I am trying 1.5.5, drawing on an article by 西门飞冰, who is also a good friend of mine; thanks.
Tarball installation:
wget https://download.elastic.co/logstash/logstash/logstash-1.5.5.tar.gz
yum -y install java-1.8.0
tar zxf logstash-1.5.4.tar.gz
mv logstash-1.5.4 /usr/local/
ln -s /usr/local/logstash-1.5.4/ /usr/local/logstash
Init script (vim /etc/init.d/logstash):
#!/bin/sh
# Init script for logstash
# Maintained by Elasticsearch
# Generated by pleaserun.
# Implemented based on LSB Core 3.1:
#   * Sections: 20.2, 20.3
### BEGIN INIT INFO
# Provides:          logstash
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description:
# Description:       Starts Logstash as a daemon.
### END INIT INFO
PATH=/sbin:/usr/sbin:/bin:/usr/bin
export PATH
if [ `id -u` -ne 0 ]; then
  echo "You need root privileges to run this script"
  exit 1
fi
name=logstash
pidfile="/var/run/$name.pid"
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/usr/local/logstash
LS_HEAP_SIZE="500m"
LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}"
LS_LOG_DIR=/usr/local/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_FILE=/etc/logstash.conf
LS_OPEN_FILES=16384
LS_NICE=19
LS_OPTS=""
[ -r /etc/default/$name ] && . /etc/default/$name
[ -r /etc/sysconfig/$name ] && . /etc/sysconfig/$name
program=/usr/local/logstash/bin/logstash
args="agent -f ${LS_CONF_FILE} -l ${LS_LOG_FILE} ${LS_OPTS}"
start() {
  JAVA_OPTS=${LS_JAVA_OPTS}
  HOME=${LS_HOME}
  export PATH HOME JAVA_OPTS LS_HEAP_SIZE LS_JAVA_OPTS LS_USE_GC_LOGGING
  # set ulimit as (root, presumably) first, before we drop privileges
  ulimit -n ${LS_OPEN_FILES}
  # Run the program!
  nice -n ${LS_NICE} sh -c "
    cd $LS_HOME
    ulimit -n ${LS_OPEN_FILES}
    exec \"$program\" $args
  " > "${LS_LOG_DIR}/$name.stdout" 2> "${LS_LOG_DIR}/$name.err" &
  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile
  echo "$name started."
  return 0
}
stop() {
  # Try a few times to kill TERM the program
  if status ; then
    pid=`cat "$pidfile"`
    echo "Killing $name (pid $pid) with SIGTERM"
    kill -TERM $pid
    # Wait for it to exit.
    for i in 1 2 3 4 5 ; do
      echo "Waiting $name (pid $pid) to die..."
      status || break
      sleep 1
    done
    if status ; then
      echo "$name stop failed; still running."
    else
      echo "$name stopped."
    fi
  fi
}
status() {
  if [ -f "$pidfile" ] ; then
    pid=`cat "$pidfile"`
    if kill -0 $pid > /dev/null 2> /dev/null ; then
      # process by this pid is running.
      # It may not be our pid, but that's what you get with just pidfiles.
      # TODO(sissel): Check if this process seems to be the same as the one we
      # expect. It'd be nice to use flock here, but flock uses fork, not exec,
      # so it makes it quite awkward to use in this case.
      return 0
    else
      return 2 # program is dead but pid file exists
    fi
  else
    return 3 # program is not running
  fi
}
force_stop() {
  if status ; then
    stop
    status && kill -KILL `cat "$pidfile"`
  fi
}
case "$1" in
  start)
    status
    code=$?
    if [ $code -eq 0 ]; then
      echo "$name is already running"
    else
      start
      code=$?
    fi
    exit $code
    ;;
  stop) stop ;;
  force-stop) force_stop ;;
  status)
    status
    code=$?
    if [ $code -eq 0 ] ; then
      echo "$name is running"
    else
      echo "$name is not running"
    fi
    exit $code
    ;;
  restart) stop && start ;;
  reload) stop && start ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|force-stop|status|restart}" >&2
    exit 3
    ;;
esac
exit $?
Make it executable and start on boot:
chkconfig --add logstash
chkconfig logstash on
chkconfig --list logstash
Configuration file:
[root@elk1 ~]# cat /etc/logstash.conf
input {
  file {
    path => "/var/log/messages"
    type => "system-log"    # set a log type so several logs can be collected in one config file and told apart at output time
  }
  file {
    path => "/root/test.log"
    type => "test.log"      # same idea: the type distinguishes this log at output time
  }
}
output {
  if [type] == "system-log" {
    elasticsearch {
      host => ["192.168.1.4:9200","192.168.1.5:9200"]
      index => "system-messages-%{+YYYY.MM.dd.HH}"
      protocol => "http"
      workers => 5
      template_overwrite => true
    }
  }
  if [type] == "test.log" {   # test the input's type: only events of this type go to the output below
    elasticsearch {
      host => ["192.168.1.4:9200","192.168.1.5:9200"]
      index => "test.log-%{+YYYY.MM.dd.HH}"
      protocol => "http"
      workers => 5
      template_overwrite => true
    }
  }
}
[root@elk1 ~]#
Start it:
[root@elk1 ~]# /usr/local/logstash/bin/logstash -f /etc/logstash.conf
Logstash startup completed
Feed in some log lines to test:
[root@elk1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 >> /root/test.log
[root@elk1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 >> /var/log/messages
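Before putting the init script in charge, the configuration can be checked and run once in the foreground. This is a sketch; --configtest is the flag used by the 1.5-era logstash agent and was renamed in later releases, so treat the exact option name as an assumption for your version.

```bash
# Syntax-check the configuration (1.5-era flag; verify against your version):
/usr/local/logstash/bin/logstash agent -f /etc/logstash.conf --configtest

# Run in the foreground, wait for "Logstash startup completed", then append a
# test line to one of the watched files from another terminal:
/usr/local/logstash/bin/logstash agent -f /etc/logstash.conf
echo "logstash test $(date)" >> /root/test.log
```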
2016-03-20 · 3,289 reads · 0 comments · 0 likes
2016-03-08
Simple logstash input and output (3)
[root@node conf.d]# cat /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/messages"
  }
}
output {
  file {
    path => "/logstash-test/%{+YYYY-MM-dd-HH}.messages.gz"
    gzip => true
  }
#  elasticsearch {
#    hosts => "10.10.0.200"
#    protocol => "http"
#    index => "system-messages-%{+YYYY-MM-dd}"
#  }
}
[root@node conf.d]#
Create the directory and set ownership:
[root@node conf.d]# mkdir /logstash-test
[root@node conf.d]# chown logstash.logstash /logstash-test
[root@node conf.d]# chown logstash.logstash /var/log/messages
Try writing something:
[root@node conf.d]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 >> /var/log/messages
Check:
[root@node conf.d]# ll /logstash-test/
total 8
-rw-r--r-- 1 logstash logstash 126 Mar 8 07:51 2016-03-08-15.messages.gz
-rw-r--r-- 1 logstash logstash 431 Mar 8 07:50 2016-03-08.messages.gz
[root@node conf.d]#
About permissions: if I had not changed the permissions on messages, logstash would only have logged the warning below, and I did not test whether logging still works once the permissions are changed back. If you see a problem anywhere, please let me know, thanks!
{:timestamp=>"2016-03-08T07:42:00.876000-0800", :message=>"failed to open /var/log/messages: Permission denied - /var/log/messages", :level=>:warn}
{:timestamp=>"2016-03-08T07:43:14.534000-0800", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
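The file output writes gzip-compressed, hour-stamped files, so they can be inspected in place with zcat; the paths below are the ones from the excerpt.

```bash
# List the rolled output files and peek at the most recent events:
ls -l /logstash-test/
zcat /logstash-test/2016-03-08-15.messages.gz | tail -n 5
```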
2016-03-08 · 5,942 reads · 1 comment · 0 likes