marksugar — 675 posts, 140 comments received
Found 22 posts matching "nginx":
2022-04-18
linuxea: Monitoring ingress-nginx with kube-prometheus on Kubernetes
This assumes an ingress-nginx is already deployed (or that you are using the ingress-nginx that ships with ACK). Monitoring the state and traffic of ingress-nginx is well worth doing: exposing its metrics surfaces a lot of detail. To monitor it with the kube-prometheus project, open port 10254 in the nginx-ingress-controller manifest, add a Service for it, and finally register it in a ServiceMonitor.

If you installed with helm, adjust the values like this:

```yaml
controller:
  metrics:
    enabled: true
  service:
    annotations:
      prometheus.io/port: "10254"
      prometheus.io/scrape: "true"
```

If not, edit the manifests directly. The Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
spec:
  ports:
    - name: prometheus
      port: 10254
      targetPort: prometheus
```

The Deployment, exposing a container port named `prometheus` that the Service's `targetPort` refers to:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
spec:
  template:
    spec:
      containers:
        - ports:
            - name: prometheus
              containerPort: 10254
```

Verify that the /metrics URL on port 10254 is reachable:

```bash
bash-5.1$ curl 127.0.0.1:10254/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.9802e-05
go_gc_duration_seconds{quantile="0.25"} 3.015e-05
go_gc_duration_seconds{quantile="0.5"} 4.2054e-05
go_gc_duration_seconds{quantile="0.75"} 9.636e-05
go_gc_duration_seconds{quantile="1"} 0.000383868
go_gc_duration_seconds_sum 0.000972498
go_gc_duration_seconds_count 11
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 92
# HELP go_info Information about the Go environment.
```

Service and ServiceMonitor

A ServiceMonitor is also required; the details depend on your kube-prometheus release. The relevant spec fields:

```yaml
spec:
  endpoints:
    - interval: 15s      # scrape every 15s
      port: metrics      # the Service port name
      path: /metrics     # the metrics URL path
  namespaceSelector:
    matchNames:
      - kube-system      # the namespace ingress-nginx runs in
  selector:
    matchLabels:
      app: ingress-nginx # the ingress-nginx label
```

The final configuration looks like this. The Service is created in ingress-nginx's namespace, while the ServiceMonitor lives in kube-prometheus's monitoring namespace; `endpoints` names the port, `namespaceSelector.matchNames` points at the namespace of the ingress pods, and `selector.matchLabels` matches their labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-metrics
  namespace: kube-system
  labels:
    app: ingress-nginx
  annotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
spec:
  type: ClusterIP
  ports:
    - name: metrics
      port: 10254
      targetPort: 10254
      protocol: TCP
  selector:
    app: ingress-nginx
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ingress-nginx-metrics
  namespace: monitoring
spec:
  endpoints:
    - interval: 15s
      port: metrics
      path: /metrics
  namespaceSelector:
    matchNames:
      - kube-system
  selector:
    matchLabels:
      app: ingress-nginx
```

grafana

Searching the Grafana dashboards site for "ingress-nginx" turns up the same template as the one on the project's GitHub page: https://grafana.com/grafana/dashboards/9614?pg=dashboards&plcmt=featured-dashboard-4. Once everything is in place, the endpoints show up under Prometheus' targets.

References: ingress-nginx monitoring; prometheus and grafana install.
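Before pointing a ServiceMonitor at an endpoint, it can save time to sanity-check that it really returns the Prometheus text exposition format. A minimal sketch — the `METRICS_URL` default and the sample payload below are illustrative, not from a live cluster:

```shell
# Sanity-check that an endpoint speaks Prometheus text format before
# wiring up the ServiceMonitor. METRICS_URL is an assumption -- substitute
# the controller pod IP or the Service DNS name from your cluster.
METRICS_URL="${METRICS_URL:-http://127.0.0.1:10254/metrics}"

count_samples() {
  # Read exposition text on stdin and count non-comment sample lines.
  awk '!/^#/ && NF >= 2 { n++ } END { print n + 0 }'
}

# Example against a captured payload (real use: curl -s "$METRICS_URL" | count_samples):
samples=$(count_samples <<'EOF'
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 92
go_gc_duration_seconds_count 11
EOF
)
echo "$samples"   # -> 2
```

A result of 0 usually means the port serves something other than metrics (for example the health endpoint), which would leave the target up but empty in Prometheus.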
April 18, 2022 · 1,335 reads · 0 comments · 0 likes
2018-08-13
linuxea: Monitoring nginx traffic with the nginx-module-vts module
nginx-module-vts records traffic per page, per HTTP status, for proxied upstreams and for dynamic DNS backends, as well as traffic by region/country; it can also rate-limit, and it ships a status page that aggregates traffic and status codes per server_name. All of this takes only a little configuration and a rebuild. If you would rather use docker, even better — I have already prepared an example.

Installing nginx 1.14.0 with the vts module

Download the modules (luajit-2.0 is added here as well):

```bash
git clone git://github.com/vozlt/nginx-module-vts.git "/usr/local/nginx-module-vts"
git clone git://github.com/vozlt/nginx-module-sts.git "/usr/local/nginx-module-sts"
git clone git://github.com/vozlt/nginx-module-stream-sts.git "/usr/local/nginx-module-stream-sts"
### git clone lua_module
curl -Lk https://github.com/simplresty/ngx_devel_kit/archive/v0.3.1rc1.tar.gz | tar xz -C /usr/local
curl -Lk https://github.com/openresty/lua-nginx-module/archive/v0.10.13.tar.gz | tar xz -C /usr/local
curl -Lk http://luajit.org/download/LuaJIT-2.0.5.tar.gz | tar xz -C /usr/local
cd /usr/local/LuaJIT-2.0.5 && make && make install
export LUAJIT_LIB=/usr/local/lib
export LUAJIT_INC=/usr/local/include/luajit-2.0
```

Compile and install

Download the latest nginx, create the user, and compile with the modules added:

```bash
useradd www -s /sbin/nologin -M
curl -Lk http://nginx.org/download/nginx-1.14.0.tar.gz | tar xz -C /usr/local
cd /usr/local/nginx-1.14.0 && ./configure \
  --prefix=/usr/local/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --user=www \
  --group=www \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --with-http_ssl_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --with-http_flv_module \
  --with-http_mp4_module \
  --with-http_geoip_module \
  --http-client-body-temp-path=/var/tmp/nginx/client \
  --http-proxy-temp-path=/var/tmp/nginx/proxy \
  --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
  --add-module=/usr/local/lua-nginx-module-0.10.13 \
  --add-module=/usr/local/ngx_devel_kit-0.3.1rc1 \
  --add-module=/usr/local/nginx-module-vts \
  --with-stream \
  --add-module=/usr/local/nginx-module-sts \
  --add-module=/usr/local/nginx-module-stream-sts && make -j2 && make install
ln -s /usr/local/lib/libluajit-5.1.so.2 /lib/
```

nginx-module-vts configuration

I split the configuration into two conf files and include them from the main config. Loaded in the http block (nginx-module-vts_zone.conf):

```nginx
geoip_country /etc/nginx/GeoIP.dat;        # per-country traffic via GeoIP
vhost_traffic_status_zone;                 # required directive
vhost_traffic_status_filter_by_host on;    # group by server_name
#vhost_traffic_status_bypass_stats on;     # do not count traffic to the status page itself
vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;  # per-country traffic via GeoIP

map $http_user_agent $filter_user_agent {  # traffic per user agent
    default 'unknown';
    ~iPhone ios;
    ~Android android;
    ~(MSIE|Mozilla) windows;
}
vhost_traffic_status_filter_by_set_key $filter_user_agent agent::*;  # traffic per user agent
```

Loaded in the vhost's server block:

```nginx
vhost_traffic_status_set_by_filter $variable group/zone/name;  # read a stored status value from shared memory
vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;  # group by server_name
vhost_traffic_status_bypass_stats on;                          # do not count traffic to the status page itself
vhost_traffic_status_filter_by_set_key $status $server_name;   # traffic per detailed HTTP status code
vhost_traffic_status_filter_by_set_key $filter_user_agent agent::$server_name;  # traffic per user agent
```

Or, more simply, add directly in the http block:

```nginx
vhost_traffic_status_zone;
vhost_traffic_status_filter_by_host on;
```

and in a server block:

```nginx
server {
    listen 8295;
    server_name localhost;

    # disable stats for this server itself
    vhost_traffic_status off;

    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
}
```

Open web-name:8295/status in a browser. The module can be consumed by prometheus directly — just fetch /status/format/prometheus. Trying it locally and filtering a bit:

```bash
[root@linuxea-VM-Node203 /etc/nginx]# curl 10.10.240.203:8295/status/format/prometheus | grep nginx_vts_server_requests_total
# HELP nginx_vts_server_requests_total The requests counter
# TYPE nginx_vts_server_requests_total counter
nginx_vts_server_requests_total{host="10.10.240.203",code="1xx"} 0
nginx_vts_server_requests_total{host="10.10.240.203",code="2xx"} 1663
nginx_vts_server_requests_total{host="10.10.240.203",code="3xx"} 0
nginx_vts_server_requests_total{host="10.10.240.203",code="4xx"} 0
nginx_vts_server_requests_total{host="10.10.240.203",code="5xx"} 0
nginx_vts_server_requests_total{host="10.10.240.203",code="total"} 1663
nginx_vts_server_requests_total{host="linuxea.ds.com",code="1xx"} 0
nginx_vts_server_requests_total{host="linuxea.ds.com",code="2xx"} 0
nginx_vts_server_requests_total{host="linuxea.ds.com",code="3xx"} 294
nginx_vts_server_requests_total{host="linuxea.ds.com",code="4xx"} 0
nginx_vts_server_requests_total{host="linuxea.ds.com",code="5xx"} 0
nginx_vts_server_requests_total{host="linuxea.ds.com",code="total"} 294
nginx_vts_server_requests_total{host="*",code="1xx"} 0
nginx_vts_server_requests_total{host="*",code="2xx"} 1663
nginx_vts_server_requests_total{host="*",code="3xx"} 294
nginx_vts_server_requests_total{host="*",code="4xx"} 0
nginx_vts_server_requests_total{host="*",code="5xx"} 0
nginx_vts_server_requests_total{host="*",code="total"} 1957
```

Left wide open like this, of course, there is a security problem.

Configuring nginx authentication

When this runs on the public internet, besides iptables rules that only open the port to fixed IPs, do yourself a favour and add user authentication. Generate an htpasswd user and password (user: linuxea, password: www.linuxea.com):

```bash
[root@linuxea-VM-Node63 /etc/nginx/vhost]# htpasswd -c /usr/local/ngxpasswd linuxea
New password:
Re-type new password:
Adding password for user linuxea
```

Add the following to the status server:

```nginx
auth_basic "Please enter your id and password!";
auth_basic_user_file /etc/nginx/ngxpasswd;
```

So the server block becomes:

```nginx
server {
    listen 8295;
    server_name localhost;
    auth_basic "Please enter your id and password!";
    auth_basic_user_file /etc/nginx/ngxpasswd;

    # disable stats for this server itself
    vhost_traffic_status off;

    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
}
```

Opening the page now prompts for credentials, which completes the nginx authentication setup.

Configuring the prometheus scrape side

Edit the prometheus configuration (see the prometheus install reference). metrics_path must be /status/format/prometheus, and basic_auth carries the user and password configured above:

```yaml
- job_name: "nginx"
  metrics_path: /status/format/prometheus
  basic_auth:
    username: linuxea
    password: 'www.linuxea.com'
  static_configs:
    - targets:
        - '10.10.240.203:8295'
      labels:
        group: 'nginx'
```

Once added, prometheus can scrape; if anything is off, check whether the target is up.

Installing nginx-vts-exporter

After comparing the two, nginx-vts-exporter is actually better suited for prometheus scraping — it exposes some things nginx-module-vts does not — so let's install it:

```bash
docker pull sophos/nginx-vts-exporter:latest
docker run -ti --rm --env NGINX_STATUS="http://linuxea:www.linuxea.com@localhost:8295/status/format/json" sophos/nginx-vts-exporter
```

This starts port 9913, reachable from a browser (you may need firewall rules). Because authentication was added earlier, the status URL carries the user and password: http://linuxea:www.linuxea.com@localhost:8295/status/format/json. All metrics are then available on port 9913; testing with "linuxea" here:

```bash
[root@linuxea-VM-Node203 /etc/nginx/vhost]# curl http://10.10.240.203:9913/metrics | grep "linuxea"
nginx_server_bytes{direction="in",host="linuxea.ds.com"} 1.700065e+06
nginx_server_bytes{direction="out",host="linuxea.ds.com"} 1.240604e+06
nginx_server_cache{host="linuxea.ds.com",status="bypass"} 0
nginx_server_cache{host="linuxea.ds.com",status="expired"} 0
nginx_server_cache{host="linuxea.ds.com",status="hit"} 0
nginx_server_cache{host="linuxea.ds.com",status="miss"} 0
nginx_server_cache{host="linuxea.ds.com",status="revalidated"} 0
nginx_server_cache{host="linuxea.ds.com",status="scarce"} 0
nginx_server_cache{host="linuxea.ds.com",status="stale"} 0
nginx_server_cache{host="linuxea.ds.com",status="updating"} 0
nginx_server_requestMsec{host="linuxea.ds.com"} 0
nginx_server_requests{code="1xx",host="linuxea.ds.com"} 0
nginx_server_requests{code="2xx",host="linuxea.ds.com"} 40
nginx_server_requests{code="3xx",host="linuxea.ds.com"} 3792
nginx_server_requests{code="4xx",host="linuxea.ds.com"} 1
nginx_server_requests{code="5xx",host="linuxea.ds.com"} 0
nginx_server_requests{code="total",host="linuxea.ds.com"} 3833
```

Querying prometheus directly — for example the nginx_server_requests metric for host linuxea.ds.com over 30s, keeping only the code and host labels:

```
sum(irate(nginx_server_requests{host!="*",host="linuxea.ds.com",code!="total"}[30s])) by (code, host)
```

This pairs well with grafana; I imported the official template as well. You can download nginx-vts-stats_rev2 (1).json from my github, or fetch it from grafana. After importing the dashboard you get the usual overview.

Extra: nginx-module-sts configuration

In the nginx http block add stream_server_traffic_status_zone;, inside http add include vhost/stream.conf;, and outside http add include stream_server.conf;. The server-side file:

```nginx
[root@linuxea-VM-Node203 /etc/nginx]# cat vhost/stream.conf
server {
    listen 82;
    server_name linuxea.ds.com;
    location /status {
        stream_server_traffic_status_display;
        stream_server_traffic_status_display_format html;
    }
}
```

And stream_server.conf:

```nginx
[root@linuxea-VM-Node203 /etc/nginx]# cat stream_server.conf
stream {
    geoip_country /etc/nginx/GeoIP.dat;
    server_traffic_status_zone;
    server_traffic_status_filter_by_set_key $geoip_country_code country::*;
    server {
        server_traffic_status_filter_by_set_key $geoip_country_code country::$server_addr:$server_port;
    }
}
```

Selected parameters

Traffic for an individual storage matched by a location regex:

```nginx
http {
    vhost_traffic_status_zone;
    ...
    server {
        ...
        location ~ ^/storage/(.+)/.*$ {
            set $volume $1;
            vhost_traffic_status_filter_by_set_key $volume storage::$server_name;
        }
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
```

Traffic per user agent, keyed on $http_user_agent:

```nginx
http {
    vhost_traffic_status_zone;
    map $http_user_agent $filter_user_agent {
        default 'unknown';
        ~iPhone ios;
        ~Android android;
        ~(MSIE|Mozilla) windows;
    }
    vhost_traffic_status_filter_by_set_key $filter_user_agent agent::*;
    ...
    server {
        ...
        vhost_traffic_status_filter_by_set_key $filter_user_agent agent::$server_name;
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
```

Traffic per HTTP status code:

```nginx
http {
    vhost_traffic_status_zone;
    server {
        ...
        vhost_traffic_status_filter_by_set_key $status $server_name;
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
```

Traffic for dynamic DNS backends: if a domain has multiple DNS A records, per-IP traffic can be counted either with the filter feature or with a variable in proxy_pass.

```nginx
http {
    vhost_traffic_status_zone;
    upstream backend {
        server elb.example.org:80;
    }
    ...
    server {
        ...
        location /backend {
            vhost_traffic_status_filter_by_set_key $upstream_addr upstream::backend;
            proxy_pass http://backend;
        }
    }
}
```

This counts traffic per IP of elb.example.org; if it has multiple DNS A records, all IPs appear in filterZones. With this setup, when NGINX starts or reloads its configuration, it queries the DNS server to resolve the domain and caches the A records in memory — so even if the DNS administrator changes the A records, they do not change in memory until NGINX restarts or reloads. The variable-based variant behaves differently:

```nginx
http {
    vhost_traffic_status_zone;
    resolver 10.10.10.53 valid=10s;
    ...
    server {
        ...
        location /backend {
            set $backend_server elb.example.org;
            proxy_pass http://$backend_server;
        }
    }
}
```

This also counts traffic per IP of elb.example.org, but if its A records change, both old and new IPs are shown under ::nogroups. Unlike the first, upstream-group setup, this second one keeps working even after the DNS administrator changes the A records.

Persisting statistics:

```nginx
http {
    vhost_traffic_status_zone;
    vhost_traffic_status_dump /var/log/nginx/vts.db;
    ...
    server {
        ...
    }
}
```

vhost_traffic_status_filter_by_host on; groups the statistics by server_name.

Reference: https://github.com/vozlt/nginx-module-vts
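The irate() query above can be checked by hand: for a counter like nginx_vts_server_requests_total, the rate is just the delta between two scrapes divided by the interval. A minimal sketch — the two sample values below are made up for illustration:

```shell
# Reproduce by hand what the irate() query computes for one series:
# (value_at_t2 - value_at_t1) / interval_seconds.
rate_per_sec() {  # args: value_at_t1 value_at_t2 interval_seconds
  awk -v a="$1" -v b="$2" -v t="$3" 'BEGIN { printf "%.1f\n", (b - a) / t }'
}

# Two scrapes 30s apart of nginx_vts_server_requests_total{code="2xx"}
# (made-up values):
rate_per_sec 1663 1723 30   # -> 2.0 requests per second
```

This is also why the `code!="total"` filter matters in the query: the "total" series double-counts every per-code increment, so summing it together with the per-code series would inflate the rate.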
August 13, 2018 · 7,277 reads · 0 comments · 0 likes
2018-07-28
linuxea: nginx tuning in containers (Xiaomi's container CPU detection)
nginx tuning in containers

On a physical machine, Nginx is usually configured with as many worker processes as CPU cores, each worker pinned to a specific CPU. This improves the workers' cache hit rate and reduces memory-access overhead. The worker_processes directive is generally set to auto, which detects the system's CPU core count and starts that many workers. On Linux, Nginx obtains the core count via the sysconf(_SC_NPROCESSORS_ONLN) system call. A container is still only a lightweight isolation environment, not a real operating system, so the same syscall is used inside it — which means that with worker_processes auto, Nginx sees the host's CPU core count. (Typically we pin the CPUs and set the nginx process count to twice the core count to raise overall concurrency.)

Xiaomi's operations team solved this by hijacking the sysconf system call: on Unix-like systems, LD_PRELOAD can preload a hand-written dynamic library that intercepts sysconf and computes the usable CPU count dynamically from cgroup information. LXCFS, by contrast, currently only changes the container's CPU view (the contents of /proc/cpuinfo), and only the --cpuset-cpus parameter takes effect there; sysconf(_SC_NPROCESSORS_ONLN) still returns the physical machine's core count. I gave it a try and ran some tests.

Testing

The test machine has 4 CPU cores. We limit the container's CPU resources (without a limit it uses everything): cpuset: '1,3' binds it to the second and fourth CPUs, yet Nginx still sees 4 cores; setting cpu-shares and cpu-quota gives the same result.

Compose file with cpuset: '1,3':

```yaml
version: '2'
services:
  nginx_createrepo:
    image: marksugar/nginx_createrepo
    # build:
    #   context: https://raw.githubusercontent.com/LinuxEA-Mark/docker-createrepo/master/Dockerfile
    container_name: nginx_createrepo
    restart: always
    network_mode: "host"
    cpuset: '1,3'
    cpu_quota: 400000
    volumes:
      - /data/mirrors:/data
    environment:
      - NGINX_PORT=80
      - SERVER_NAME=localhost
```

nginx is configured with worker_processes auto;. With cpuset: '1,3' binding two CPUs, by our earlier logic there should be two workers — but filtering the process list shows four:

```bash
[root@linuxea-VM-Node_10_10_240_144 ~]$ docker exec -it nginx_createrepo ps aux | grep nginx
root 13 0.0 0.0 45912 6024 ? S 08:01 0:00 nginx: master p
www  35 0.0 0.3 66324 23704 ? S 08:01 0:00 nginx: worker p
www  36 0.0 0.3 66324 23704 ? S 08:01 0:00 nginx: worker p
www  37 0.0 0.3 66324 23704 ? S 08:01 0:00 nginx: worker p
www  38 0.0 0.3 66324 23704 ? S 08:01 0:00 nginx: worker p
```

container_cpu_detection

Clone Xiaomi's open-source container_cpu_detection and test it:

```bash
[root@linuxea-VM-Node_10_10_240_144 /data/mirror]$ git clone https://github.com/agile6v/container_cpu_detection.git
Cloning into 'container_cpu_detection'...
remote: Counting objects: 46, done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 46 (delta 19), reused 41 (delta 17), pack-reused 0
Unpacking objects: 100% (46/46), done.
```

Rename the test helper and build:

```bash
[root@linuxea-VM-Node_10_10_240_144 /data/mirror/container_cpu_detection]$ sed -i 's/test/linuxea/g' * && mv sysconf_test.c sysconf_linuxea.c
[root@linuxea-VM-Node_10_10_240_144 /data/mirror/container_cpu_detection]$ make
gcc -std=c99 -Wall -shared -g -fPIC -ldl detection.c -o detection.so
gcc sysconf_linuxea.c -o sysconf_linuxea
```

Mount it into the container and add these settings:

- resource limits: cpuset: '1,3' and cpu_quota: 200000
- environment: DETECTION_TARGETS=nginx and LD_PRELOAD=/usr/lib/detection.so
- volumes: /data/container_cpu_detection/detection.so:/usr/lib/detection.so and /data/container_cpu_detection/sysconf_linuxea:/tmp/sysconf_linuxea

The full compose file:

```yaml
version: '2'
services:
  nginx_createrepo:
    image: marksugar/nginx_createrepo
    # build:
    #   context: https://raw.githubusercontent.com/LinuxEA-Mark/docker-createrepo/master/Dockerfile
    container_name: nginx_createrepo
    restart: always
    network_mode: "host"
    cpuset: '1,3'
    cpu_quota: 200000
    volumes:
      - /data/mirrors:/data
      - /data/container_cpu_detection/detection.so:/usr/lib/detection.so
      - /data/container_cpu_detection/sysconf_linuxea:/tmp/sysconf_linuxea
    environment:
      - NGINX_PORT=80
      - SERVER_NAME=localhost
      - DETECTION_TARGETS=nginx
      - LD_PRELOAD=/usr/lib/detection.so
```

Bring it up. top inside the container still shows 4 CPUs:

```bash
[root@linuxea-VM-Node_10_10_240_144 /data/container_cpu_detection]$ docker exec -it nginx_createrepo top
top - 09:02:09 up 41 days, 2:40, 0 users, load average: 0.07, 0.02, 0.00
Tasks: 8 total, 1 running, 7 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0.0/0.0 0
%Cpu1 : 0.0/0.0 0
%Cpu2 : 0.0/0.8 1
%Cpu3 : 0.0/0.0 0
KiB Mem : 33.5/7188532
KiB Swap: 0.0/4190204
```

But the nginx worker count now matches the limited core count:

```bash
[root@linuxea-VM-Node_10_10_240_144 /data/mirror]$ docker exec -it nginx_createrepo ps aux | grep nginx
root 13 0.0 0.0 48028 6016 ? S 13:19 0:00 nginx: master p
www  32 0.0 0.3 68440 23696 ? S 13:20 0:00 nginx: worker p
www  33 0.0 0.3 68440 23696 ? S 13:20 0:00 nginx: worker p
```

CPU resource-constraint flags

- --cpu-quota=<value>: impose a CPU CFS quota on the container — the number of microseconds per --cpu-period the container is allowed, acting as an effective ceiling. On Docker 1.13 or later, prefer --cpus.
- --cpuset-cpus: limit the container to specific CPUs or cores, as a comma-separated list or hyphen-separated range; the first CPU is numbered 0. Valid values look like 0-3 (the first four CPUs) or 1,3 (the second and fourth).
- --cpu-shares: set above or below the default 1024 to give the container a larger or smaller proportion of the host's CPU cycles. Only enforced when CPU cycles are constrained; when cycles are plentiful, every container uses as much CPU as it needs, so this is a soft limit. It does not prevent scheduling in swarm mode; it prioritizes available CPU cycles rather than guaranteeing or reserving any specific CPU access.
- --cpu-period=<value>: the CPU CFS scheduler period, used together with --cpu-quota. Defaults to 100000 microseconds (100ms); most users leave the default. On Docker 1.13 or later, prefer --cpus.
- --cpus=<value>: how much of the available CPU resources the container may use. With two host CPUs, --cpus="1.5" guarantees at most one and a half CPUs — equivalent to --cpu-period="100000" with --cpu-quota="150000". Available in Docker 1.13 and later.
- -c, --cpu-shares=0: CPU shares (relative weight)
- --cpus=0.000: number of CPUs, as a decimal; 0.000 means no limit
- --cpu-period=0: limit the CPU CFS (Completely Fair Scheduler) period
- --cpuset-cpus="": CPUs allowed to execute (0-3, 0,1)
- --cpuset-mems="": memory nodes allowed (0-3, 0,1); only meaningful on NUMA systems
- --cpu-quota=0: limit the CPU CFS quota
- --cpu-rt-period=0: limit the CPU real-time period, in microseconds; requires a parent cgroup and may not exceed it; also check the rtprio ulimits
- --cpu-rt-runtime=0: limit the CPU real-time runtime, in microseconds; same caveats as above

References:
https://docs.docker.com/engine/reference/run/#cpuset-constraint
https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler
https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources
https://github.com/agile6v/container_cpu_detection
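The quota-based arithmetic a sysconf() shim like detection.so performs can be sketched without the preload machinery: derive the usable CPU count from the cgroup CFS quota and period. The exact rounding detection.so uses is its own; this is an assumption-labeled sketch of the idea:

```shell
# Sketch of what a sysconf(_SC_NPROCESSORS_ONLN) shim computes from cgroup
# CFS settings: quota/period CPUs, with quota <= 0 meaning "no limit"
# (fall back to the host count). Rounding here is an assumption.
effective_cpus() {  # args: cpu_quota_us cpu_period_us host_cpus
  awk -v q="$1" -v p="$2" -v h="$3" 'BEGIN {
    if (q <= 0) { print h; exit }   # unlimited: report the host count
    n = int(q / p)
    if (n < 1) n = 1                # never report fewer than one CPU
    print n
  }'
}

effective_cpus 200000 100000 4   # cpu_quota=200000, period=100000 -> 2
effective_cpus -1 100000 4       # no quota -> 4 (the host count)
```

With cpu_quota: 200000 and the default 100000µs period, this yields 2 — matching the two nginx workers observed in the test above.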
July 28, 2018 · 3,736 reads · 0 comments · 0 likes
2017-03-14
Nginx: collecting POST logs with the echo module via a smooth binary replacement
Nginx handles large volumes of HTTP traffic with ease. Every time NGINX handles a connection it writes a log entry storing certain information about it (remote IP address, response size, status code, and so on); the full set of recordable fields is documented elsewhere. In some cases you may prefer to also store the request body, especially for POST requests. Fortunately the NGINX ecosystem is rich and includes many handy modules. One of them is the Echo module, which provides useful commands such as echo, time, and sleep. For our use case — logging the request body — all we need is the echo_read_request_body command and the $request_body variable (populated by the Echo module). This module is not bundled with NGINX by default, however, so to use it we must build NGINX from source with the Echo module included.

The following steps show how to build NGINX with the Echo module (a complete build bash script is at the end). Download the Echo source:

```bash
[root@linuxea ]# curl -Lk https://github.com/openresty/echo-nginx-module/archive/v0.58.tar.gz -o /usr/local/v0.58.tar.gz
[root@linuxea ]# mkdir -p /tmp/echo-nginx-module
[root@linuxea ]# tar xf /usr/local/v0.58.tar.gz -C /tmp/echo-nginx-module --strip-components=1
```

The existing nginx is installed under /usr/local/webserver/nginx; download the same nginx version and configure it:

```bash
./configure --user=www \
  --group=www \
  --prefix=/usr/local/webserver/nginx \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --with-http_gunzip_module \
  --with-http_mp4_module \
  --with-http_flv_module \
  --with-pcre \
  --with-http_gzip_static_module \
  --with-http_realip_module \
  --with-ld-opt=-ljemalloc \
  --add-module=/tmp/echo-nginx-module
```

After configuring, run only make (no make install). Under /usr/local/webserver/nginx/sbin/, remove or rename the running binary:

```bash
mv nginx old_nginx
```

Even after the mv (or rm), nginx keeps running, so the binary can be swapped in place and reloaded:

```bash
[root@linuxea sbin]# ps aux | grep nginx
root  1796 0.0 0.9 124408 36180 ? Ss Mar06 0:02 nginx: master process /usr/local/webserver/nginx/sbin/nginx -c /usr/local/webserver/nginx/conf/nginx.conf
www  15851 0.3 2.0 169464 78680 ? S 05:05 0:00 nginx: worker process
www  15852 0.6 1.9 169464 78072 ? S 05:05 0:00 nginx: worker process
www  15853 0.5 1.9 169464 77168 ? S 05:05 0:00 nginx: worker process
www  15854 0.3 2.0 169464 78680 ? S 05:05 0:00 nginx: worker process
www  15855 0.3 2.0 169464 78600 ? S 05:05 0:00 nginx: worker process
www  15856 1.7 1.9 171512 77972 ? S 05:05 0:00 nginx: worker process
[root@linuxea sbin]# cp /usr/local/nginx-1.8.0/objs/nginx /usr/local/webserver/nginx/sbin/
[root@linuxea sbin]# /etc/init.d/nginx reload
```

Add to the main nginx config:

```nginx
log_format upstream2 '$proxy_add_x_forwarded_for $remote_user [$time_local] "$request" $http_host'
                     '$body_bytes_sent $request_body "$http_referer" "$http_user_agent" $ssl_protocol $ssl_cipher'
                     '$request_time [$status] [$upstream_status] [$upstream_response_time] "$upstream_addr"';
```

And in the server block:

```nginx
echo_read_request_body;
access_log /data/logs/wwwlogs/access.log upstream2;
```

Then inspect the log. The full build script:

```bash
if ! rpm -ql GeoIP-devel >/dev/null 2>&1; then yum install GeoIP-devel -y; fi
mkdir -p /tmp/nginx_build/{echo-nginx-module,ngx_http_geoip2_module}
curl -Lks http://nginx.org/download/nginx-1.10.3.tar.gz | tar -xz -C /tmp/nginx_build/ --strip-components=1
curl -Lks $(curl -Lks 'https://github.com/openresty/echo-nginx-module/releases' | awk -F'"' '/tar.gz"/{print "https://github.com"$2;exit}') | tar -xz -C /tmp/nginx_build/echo-nginx-module/ --strip-components=1
cd /tmp/nginx_build/ngx_http_geoip2_module
git clone https://github.com/voxxit/dockerfiles.git
mv dockerfiles/nginx-geoip2/ngx_http_geoip2_module-1.0/* . && cd ../
./configure --user=www --group=www --prefix=/usr/local/webserver/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_gunzip_module --with-http_mp4_module --with-http_flv_module --with-http_realip_module --with-pcre --with-http_gzip_static_module --with-ld-opt=-ljemalloc --add-module=./echo-nginx-module --with-http_geoip_module
make -j$(getconf _NPROCESSORS_ONLN) && make install && cd && \rm -rf /tmp/nginx_build
echo 'export PATH=/usr/local/webserver/nginx/sbin:$PATH' > /etc/profile.d/nginx.sh
. /etc/profile.d/nginx.sh && nginx -V
```

Reference: https://developers.redhat.com/blog/2016/05/23/configuring-nginx-to-log-post-data-on-linux-rhel/
March 14, 2017 · 4,899 reads · 0 comments · 0 likes
2016-12-16
CentOS 7 + LNMP + Discuz_X3.2_SC_UTF8 detailed deployment
Environment: nginx-1.10.2, php-5.6.29, Discuz_X3.2, mariadb-10.1.19.

Installing nginx

Create the nginx user:

```bash
[root@linuxea-com ~]# groupadd -r -g 499 nginx
[root@linuxea-com ~]# useradd -u 499 -s /sbin/nologin -c 'web server' -g nginx nginx -M
```

Download and unpack nginx:

```bash
cd /usr/local
curl -s http://nginx.org/download/nginx-1.10.2.tar.gz -o /usr/local/nginx-1.10.2.tar.gz
tar xf nginx-1.10.2.tar.gz && rm -rf nginx-1.10.2.tar.gz
```

Install dependencies:

```bash
yum install openssl-devel pcre pcre-devel gcc make -y
```

Compile:

```bash
cd nginx-1.10.2 && ./configure --prefix=/usr/local/nginx --conf-path=/etc/nginx/nginx.conf --user=nginx --group=nginx --error-log-path=/data/logs/nginx/error.log --http-log-path=/data/logs/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_flv_module --with-http_mp4_module --with-http_realip_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi && make && make install
```

Create the required directories:

```bash
mkdir -p /var/tmp/nginx/{client,fastcgi,proxy,uwsgi} /data/logs/nginx /data/wwwroot
```

Download the config file and the init script:

```bash
rm -rf /etc/nginx/nginx.conf
curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/nginx.conf -o /etc/nginx/nginx.conf
curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/nginx -o /etc/init.d/nginx && chmod +x /etc/init.d/nginx
```

Installing php-fpm

Download and unpack php, and create the php user:

```bash
curl -s http://tw1.php.net/distributions/php-5.6.29.tar.gz -o /usr/local/php-5.6.29.tar.gz && cd /usr/local
tar xf php-5.6.29.tar.gz && rm -rf php-5.6.29.tar.gz
groupadd -g 498 -r php-fpm && useradd -u 498 -g php-fpm -r php-fpm -s /sbin/nologin
```

Install dependencies:

```bash
yum install epel-release -y && yum install -y gcc automake autoconf libtool make libxml2-devel openssl openssl-devel bzip2 bzip2-devel libpng libpng-devel freetype freetype-devel libcurl-devel libcurl libjpeg libjpeg-devel libmcrypt-devel libmcrypt libtool-ltdl-devel libxslt-devel mhash mhash-devel axel
```

Compile and install:

```bash
cd php-5.6.29 && ./configure --prefix=/usr/local/php --disable-pdo --disable-debug --disable-rpath --enable-inline-optimization --enable-sockets --enable-sysvsem --enable-sysvshm --enable-pcntl --enable-mbregex --enable-xml --enable-zip --enable-fpm --enable-mbstring --with-pcre-regex --with-mysql --with-mysqli --with-gd --with-jpeg-dir --with-bz2 --with-zlib --with-mhash --with-curl --with-mcrypt --with-png-dir && make && make install
```

Create the log path, copy php.ini, set the timezone, and fetch the init script and php-fpm.conf:

```bash
mkdir /data/logs/php-fpm
cp /usr/local/php-5.6.29/php.ini-production /usr/local/php/lib/php.ini
sed -i 's/;date.timezone =/date.timezone = Asia\/Shanghai/' /usr/local/php/lib/php.ini
curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/php-fpm -o /etc/init.d/php-fpm && chmod +x /etc/init.d/php-fpm
curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/php-fpm.conf -o /usr/local/php/etc/php-fpm.conf
```

Installing mariadb

Download, unpack, create the user, and run the binary install:

```bash
cd /usr/local && axel -n 30 http://sgp1.mirrors.digitalocean.com/mariadb//mariadb-10.1.19/bintar-linux-x86_64/mariadb-10.1.19-linux-x86_64.tar.gz
tar xf mariadb-10.1.19-linux-x86_64.tar.gz && ln -s mariadb-10.1.19-linux-x86_64 mysql
groupadd -g 497 -r mysql && useradd -u 497 -g mysql -r mysql -s /sbin/nologin && mkdir /data/mysql
cd mysql && scripts/mysql_install_db --user=mysql --datadir=/data/mysql
chown -R mysql.mysql /data/mysql
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld && chmod +x /etc/init.d/mysqld
# cp /usr/local/mysql/support-files/my-large.cnf /etc/my.cnf
curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/my.cnf -o /etc/my.cnf
ln -s /usr/local/mysql/bin/mysql /usr/bin/
```

Start mysql, nginx, and php-fpm:

```bash
systemctl start mysqld && systemctl start nginx && systemctl start php-fpm
```

Database grants

Clean the defaults:

```bash
mysql -e "DELETE FROM mysql.user WHERE User='';"
mysql -e "DELETE FROM mysql.db WHERE Db LIKE 'test%';"
mysql -e "DROP DATABASE test;"
```

Create the bbs database (password 8K79Xucb5uXC; root password abc8K7123):

```bash
mysql -e "CREATE DATABASE bbs charset='utf8';"
mysql -e "GRANT ALL PRIVILEGES ON bbs.* TO 'bbs'@'%' IDENTIFIED BY '8K79Xucb5uXC';"
mysql -e "UPDATE mysql.user SET password = password('abc8K7123') WHERE user = 'root';"
mysql -e "flush privileges;"
mysql -uroot -pabc8K7123 -e "flush privileges;"
```

Deploying discuz

Download Discuz, remove the unneeded files, move the site into place, and fix ownership:

```bash
cd /data/wwwroot
wget http://download.comsenz.com/DiscuzX/3.2/Discuz_X3.2_SC_UTF8.zip && unzip Discuz_X3.2_SC_UTF8.zip
rm -rf readme utility/ Discuz_X3.2_SC_UTF8.zip
mv upload/* ./ && chown -R nginx.nginx /data/wwwroot/
```

Open the site in a browser, accept the license, choose a fresh install, and enter the database name, account, and forum admin credentials created above. After installation and login, uploaded images are stored under data/attachment/forum/ in the web root:

```bash
[root@DS-VM-Node49 /data/wwwroot]# ll data/attachment/forum/201612/16/114158ponqltotvq9ouuwl.jpg
-rw-r--r-- 1 nginx nginx 68028 12月 16 11:41 data/attachment/forum/201612/16/114158ponqltotvq9ouuwl.jpg
```

Delete the install directory:

```bash
rm -rf install/
```

Enable pseudo-static URLs: in the admin panel, click through to view the current Rewrite rules, copy the nginx rules into the server block of your nginx config, then reload with /etc/init.d/nginx reload.
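The grant commands above repeat the database name and password across several `mysql -e` invocations; keeping the bootstrap SQL in one templated function avoids typos between them. A minimal sketch (the values are the article's examples):

```shell
# Generate the bbs database bootstrap SQL from variables so the
# credentials live in one place (values are the article's examples).
DB_NAME=bbs DB_USER=bbs DB_PASS=8K79Xucb5uXC

bootstrap_sql() {
  cat <<EOF
CREATE DATABASE IF NOT EXISTS ${DB_NAME} CHARSET 'utf8';
GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO '${DB_USER}'@'%' IDENTIFIED BY '${DB_PASS}';
FLUSH PRIVILEGES;
EOF
}

bootstrap_sql   # in real use: bootstrap_sql | mysql -uroot -p
```

`CREATE DATABASE IF NOT EXISTS` also makes the script safe to re-run, which the plain `CREATE DATABASE` in the walkthrough is not.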
December 16, 2016 · 8,241 reads · 0 comments · 0 likes
2016-09-15
Compiling and installing Nginx 1.10.1 with a Lua environment
Install

Download nginx, pcre, and LuaJIT:

[root@LinuxEA local]# curl -sO http://nginx.org/download/nginx-1.10.1.tar.gz
[root@LinuxEA local]# curl -sO http://nchc.dl.sourceforge.net/project/pcre/pcre/8.39/pcre-8.39.tar.gz
[root@LinuxEA local]# curl -sO http://luajit.org/download/LuaJIT-2.0.4.tar.gz

Unpack:

[root@LinuxEA local]# tar xf nginx-1.10.1.tar.gz
[root@LinuxEA local]# ln -s nginx-1.10.1 nginx
[root@LinuxEA local]# tar xf pcre-8.39.tar.gz
[root@LinuxEA local]# tar xf LuaJIT-2.0.4.tar.gz

Build LuaJIT:

[root@LinuxEA local]# yum install gcc -y
[root@LinuxEA local]# cd LuaJIT-2.0.4
[root@LinuxEA LuaJIT-2.0.4]# make && make install

Build PCRE:

[root@LinuxEA local]# yum install gcc-c++ -y
[root@LinuxEA local]# cd pcre-8.39 && ./configure
[root@LinuxEA local]# make && make install

Set the environment variables so nginx's configure script can find LuaJIT:

[root@LinuxEA local]# export LUAJIT_LIB=/usr/local/lib
[root@LinuxEA LuaJIT-2.0.4]# export LUAJIT_INC=/usr/local/include/luajit-2.0/

Create the nginx user:

[root@LinuxEA nginx]# useradd -s /sbin/nologin -M nginx
[root@LinuxEA LuaJIT-2.0.4]# cd ../nginx

Get ngx_devel_kit & lua-nginx-module

Before compiling, download the two modules. ngx_devel_kit is at https://github.com/simpl/ngx_devel_kit#warning-using-ndk_all:

[root@LinuxEA local]# yum install git
[root@LinuxEA local]# git clone https://github.com/simpl/ngx_devel_kit.git

And lua-nginx-module:

[root@LinuxEA local]# git clone https://github.com/openresty/lua-nginx-module.git

Compile nginx, pointing --add-module at the two module directories. Install the build dependencies first:

[root@LinuxEA nginx]# yum install -y openssl openssl-devel
[root@LinuxEA nginx]# ./configure --prefix=/usr/local/nginx \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_stub_status_module \
--with-file-aio \
--add-module=../ngx_devel_kit/ \
--add-module=../lua-nginx-module/ \
--with-http_gzip_static_module \
--with-http_flv_module \
--with-pcre=/usr/local/pcre-8.39 \
--with-http_mp4_module \
--http-client-body-temp-path=/var/tmp/nginx/client \
--http-proxy-temp-path=/var/tmp/nginx/proxy \
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
--pid-path=/var/run/nginx/nginx.pid \
--lock-path=/var/lock/nginx.lock
[root@LinuxEA nginx]# make -j2 && make install
[root@LinuxEA nginx-1.10.1]# ln -s /usr/local/lib/libluajit-5.1.so.2 /lib64/
[root@LinuxEA nginx-1.10.1]# mkdir -p /var/tmp/nginx/{client,fastcgi,proxy,uwsgi}

Add a Lua location to the server block:

[root@LinuxEA conf]# vi nginx.conf
location /linuxea {
    default_type 'text/plain';
    content_by_lua 'ngx.say("hello,lua")';
}
[root@LinuxEA conf]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

Disable SELinux and the firewall, then start nginx:

[root@LinuxEA conf]# setenforce 0
[root@LinuxEA conf]# echo -e 'net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1' >> /etc/sysctl.conf && sysctl -p
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
[root@LinuxEA conf]# systemctl mask firewalld
[root@LinuxEA conf]# systemctl stop firewalld
[root@LinuxEA conf]# /usr/local/nginx/sbin/nginx
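Beyond the hello-world location above, content_by_lua can also read nginx variables. A sketch of a hypothetical /whoami location (the path and output format are my example, not part of the build):

```
location /whoami {
    default_type 'text/plain';
    content_by_lua '
        -- ngx.var exposes nginx variables to Lua
        ngx.say("client: ", ngx.var.remote_addr)
        ngx.say("uri: ", ngx.var.uri)
    ';
}
```

A curl against this location would print the caller's address and the requested URI, which is a quick way to verify the Lua module is actually compiled in.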
September 15, 2016
3,503 reads
0 comments
0 likes
2016-01-30
Keepalived split-brain detection and failover notes
The keepalived failover script: if `ps -ef` shows fewer than 2 nginx processes, nginx is judged to be down and keepalived is stopped:

[root@nginx-proxy ~]# cat keepalived.sh
#!/bin/bash
while true
do
    if [ `ps -ef|grep nginx|grep -v grep|wc -l` -lt 2 ]
    then
        /etc/init.d/keepalived stop
    fi
    sleep 5
done
[root@nginx-proxy ~]#

When two machines in the same datacenter run keepalived, make the heartbeat link as direct as possible, e.g. a serial cable:
1. a dedicated, directly connected NIC
2. a managed power switch (fencing)
3. a script that alerts on every failover so a human can step in

Example: if the backup node holds the VIP while the master node is still alive, treat it as split-brain. The test can run as a script on the backup that pings the master: if the master answers ping and the VIP has drifted to the backup, declare split-brain. We can also probe with arping, as follows:

[root@DS-VM-linuxea /etc/graylog/collector-sidecar/generated]# arping -c 1 10.10.194.100
ARPING 10.10.194.100 from 10.10.231.61 eth0
Unicast reply from 10.10.194.100 [88:88:2F:9A:97:84] 1.768ms
Unicast reply from 10.10.194.100 [88:88:2F:60:CD:40] 1.858ms
Sent 1 probes (1 broadcast(s))
Received 2 response(s)
[root@DS-VM-linuxea /etc/graylog/collector-sidecar/generated]#

If we arping the VIP from a bystander machine and split-brain has occurred, two MAC addresses answer!

More in these posts:
Heartbeat heartbeat problems, part 2: fencing ideas — http://www.linuxea.com/939.html
How split-brain occurs in heartbeat and how to prevent it — http://www.linuxea.com/941.html

[root@nginx-proxy scripts]# cat lienao.sh
#!/bin/sh
while true
do
    ping -c 5 -w 3 10.0.0.91 &>/dev/null
    if [ $? -eq 0 -a `ip add|grep 10.0.0.100|wc -l` -eq 1 ]
    then
        echo "HA is split-brain!"
    else
        echo "HA is running OK!"
    fi
    sleep 5
done
[root@nginx-proxy scripts]#

keepalived dual-master configuration:

[root@nginx-proxy scripts]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        734943463@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth1
    virtual_router_id 52
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.101/24
    }
}
[root@nginx-proxy scripts]#

[root@nginx-proxy2 scripts]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        734943463@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth1
    virtual_router_id 52
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.101/24
    }
}
[root@nginx-proxy2 scripts]#
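One weakness of the `ps -ef | grep nginx | grep -v grep` check is that it is easy to get the filtering wrong; pgrep avoids the self-match entirely. A sketch that splits the decision out into a function so the threshold logic can be tested in isolation (the function name and the "stop"/"keep" outputs are mine, not from the original script):

```shell
#!/bin/bash
# Decide whether keepalived should be stopped, given an nginx process count.
# Mirrors the article's rule: fewer than 2 nginx processes => nginx is down.
decide_keepalived() {
    local nginx_procs="$1"
    if [ "$nginx_procs" -lt 2 ]; then
        echo "stop"    # the real loop would run: /etc/init.d/keepalived stop
    else
        echo "keep"
    fi
}

# In the real loop the count would come from pgrep, which never matches
# itself the way a ps|grep pipeline does:
#   count=$(pgrep -c -x nginx)
decide_keepalived 1    # simulate a dead master/worker pair
decide_keepalived 3    # simulate a healthy master plus two workers
```

The `sleep 5` polling loop from the article would simply wrap the pgrep count and the function call.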
January 30, 2016
7,314 reads
0 comments
0 likes
2016-01-25
Simple failover with Nginx + keepalived
keepalived is cluster-management software that keeps a cluster highly available; its role is similar to heartbeat, guarding against single points of failure. keepalived is built on the VRRP protocol and fails over very quickly. It is lightweight and its feature set is smaller, but scripts can extend it.

Install keepalived. Download the package from http://www.keepalived.org/index.html:

[root@nginx-proxy ~]# ln -s /usr/src/kernels/2.6.32-504.el6.x86_64/ /usr/src/linux
[root@nginx-proxy ~]# tar xf keepalived-1.2.13.tar.gz
[root@nginx-proxy ~]# cd keepalived-1.2.13
[root@nginx-proxy keepalived-1.2.13]# ./configure
[root@nginx-proxy keepalived-1.2.13]# make
[root@nginx-proxy keepalived-1.2.13]# make install

Install the init script:

[root@nginx-proxy keepalived-1.2.13]# cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/

Copy the sysconfig file:

[root@nginx-proxy keepalived-1.2.13]# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/

Create the configuration directory:

[root@nginx-proxy keepalived-1.2.13]# mkdir /etc/keepalived

Copy the configuration template and the binary, then start:

[root@nginx-proxy keepalived-1.2.13]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@nginx-proxy keepalived-1.2.13]# cp /usr/local/sbin/keepalived /usr/sbin/
[root@nginx-proxy keepalived-1.2.13]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
[root@nginx-proxy local]# ps -ef|grep keep|grep -v grep
root 3355 1 0 05:55 ? 00:00:00 keepalived -D
root 3357 3355 0 05:55 ? 00:00:00 keepalived -D
root 3358 3355 0 05:55 ? 00:00:00 keepalived -D
[root@nginx-proxy local]#

Configuration file, annotated:

[root@nginx-proxy keepalived]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER            # master/backup
    interface eth0          # interface to listen on
    virtual_router_id 51    # instance id
    priority 100            # priority
    advert_int 1            # heartbeat interval
    authentication {
        auth_type PASS
        auth_pass 1111      # password
    }
    virtual_ipaddress {     # VIPs
        192.168.200.16
        192.168.200.17
        192.168.200.18
    }
}

Modified configuration:

[root@nginx-proxy keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        734943463@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
[root@nginx-proxy keepalived]# /etc/init.d/keepalived restart
Stopping keepalived: [ OK ]
Starting keepalived: [ OK ]
[root@nginx-proxy keepalived]#

After startup the VIP sits on the node with the higher priority:

[root@nginx-proxy keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5c:67:b6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.90/16 brd 10.0.255.255 scope global eth1
    inet 10.0.0.100/24 scope global eth1
    inet6 fe80::20c:29ff:fe5c:67b6/64 scope link
       valid_lft forever preferred_lft forever
[root@nginx-proxy keepalived]# ip addr|grep 10.0.0.100
    inet 10.0.0.100/24 scope global eth1
[root@nginx-proxy keepalived]#

The nginx-proxy2 configuration must differ in three places:
1. router_id must not be the same
2. state MASTER/BACKUP
3. priority must not be the same
Note also that the interface name must be identical on both ends here, and the passwords must match, etc.

[root@nginx-proxy2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        734943463@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL_2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
[root@nginx-proxy2 ~]#
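Instead of an external while-loop script stopping keepalived when nginx dies, keepalived can track a health check itself with vrrp_script. A sketch under the priorities used above (150 master / 100 backup); the check command and the weight value are my choices, not from the original post:

```
vrrp_script chk_nginx {
    script "pidof nginx"   # non-zero exit means the check failed
    interval 2             # run every 2 seconds
    weight -60             # on failure: 150 - 60 = 90 < 100, so the backup takes over
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
```

The advantage over stopping keepalived outright is that the node rejoins automatically once nginx comes back, since the priority recovers on its own.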
January 25, 2016
3,382 reads
0 comments
0 likes
2016-01-24
Simple tuning for an Nginx proxy
Changes to the sysctl.conf kernel settings:

net.ipv4.tcp_fin_timeout = 2
If this end requested the socket close, this decides how long it stays in FIN-WAIT-2; the default is 60 seconds. Path: /proc/sys/net/ipv4/tcp_fin_timeout (default 60).

net.ipv4.tcp_tw_reuse = 1
Allow TIME-WAIT sockets to be reused for new TCP connections; default 0 (off). Path: /proc/sys/net/ipv4/tcp_tw_reuse (default 0).

net.ipv4.tcp_tw_recycle = 1
Enable fast recycling of TIME-WAIT sockets in TCP connections; default 0 (off). Path: /proc/sys/net/ipv4/tcp_tw_recycle. (Note: this option misbehaves for clients behind NAT and was removed entirely in Linux 4.12.)

net.ipv4.tcp_syncookies = 1
Enable SYN cookies: when the SYN backlog overflows, cookies are used to handle the connections, defending against small SYN floods. CentOS 5 already defaults to 1 (on), so this line can be omitted there. Path: /proc/sys/net/ipv4/tcp_syncookies (default 1).

net.ipv4.tcp_keepalive_time = 600
How often TCP sends keepalive probes when keepalive is enabled; the default is 2 hours (7200s), lowered here to 10 minutes. Path: /proc/sys/net/ipv4/tcp_keepalive_time.

net.ipv4.ip_local_port_range = 4000 65000
The range of local ports the system may open, i.e. the ports used for outbound connections. Path: /proc/sys/net/ipv4/ip_local_port_range (default 32768 61000).

net.ipv4.tcp_max_syn_backlog = 8192
Length of the SYN queue, default 1024; raising it to 8192 lets the server hold more pending connections. This option is the maximum number of remembered connection requests that have not yet received an acknowledgement from the client. Path: /proc/sys/net/ipv4/tcp_max_syn_backlog.

net.ipv4.tcp_max_tw_buckets = 36000
Maximum number of TIME_WAIT sockets held simultaneously; beyond this they are cleared immediately and a warning is printed. Default 18000. For servers such as apache or nginx it can be set somewhat lower, e.g. 5000-30000; for other workloads such as lvs or squid it can be raised. For squid the effect is limited, but it does cap the number of TIME_WAIT sockets and keeps squid from being dragged down by a flood of them. Path: /proc/sys/net/ipv4/tcp_max_tw_buckets.

net.ipv4.route.gc_timeout = 100
Route cache refresh frequency: how long after a route fails before the kernel switches to another. Default 300.

net.ipv4.tcp_syn_retries = 1
Number of SYN packets sent before the kernel gives up on establishing a connection. Path: /proc/sys/net/ipv4/tcp_syn_retries (default 5).

net.ipv4.tcp_synack_retries = 1
Number of SYN+ACK packets sent before the kernel gives up on the connection. Path: /proc/sys/net/ipv4/tcp_synack_retries (default 5).

net.core.somaxconn = 16384
Default 128. This parameter caps the number of simultaneously initiated TCP connections (the accept backlog); under high-concurrency request loads the default can cause timeouts or retransmits, so tune it to the concurrency. Path: /proc/sys/net/core/somaxconn (default 128).

net.core.netdev_max_backlog = 16384
Maximum number of packets allowed in the queue when an interface receives packets faster than the kernel can process them. Path: /proc/sys/net/core/netdev_max_backlog (default 1000).

net.ipv4.tcp_max_orphans = 16384
Maximum number of TCP sockets in the system not attached to any user file handle; beyond this, orphaned connections are reset immediately and a warning is printed. The limit only guards against simple DoS attacks, and in practice it is more often raised than relied upon. Path: /proc/sys/net/ipv4/tcp_max_orphans (default 65536).

The following parameters tune the iptables connection-tracking tables; if the firewall is not running the kernel will complain about them, which can be ignored:

net.ipv4.ip_conntrack_max = 25000000
net.ipv4.netfilter.ip_conntrack_max = 25000000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 180
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
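A sketch that collects the settings above into a standalone fragment file so they can be reviewed before loading (the file path is my choice; applying requires root; tcp_tw_recycle is deliberately left out, since it breaks clients behind NAT and no longer exists in modern kernels):

```shell
#!/bin/bash
# Write the article's proxy tunings to a separate sysctl fragment for review.
cat > /tmp/nginx-proxy.conf <<'EOF'
net.ipv4.tcp_fin_timeout = 2
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.ip_local_port_range = 4000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.somaxconn = 16384
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384
EOF
# Load it later (root only):  sysctl -p /tmp/nginx-proxy.conf
wc -l < /tmp/nginx-proxy.conf
```

Keeping the tunings in a dedicated file makes it easy to diff against `sysctl -a` output and to roll back by deleting one file.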
January 24, 2016
3,314 reads
0 comments
0 likes
2016-01-23
linuxea: Several approaches to static/dynamic separation in Nginx
This post briefly covers nginx proxy_pass and some related configuration, for reference only.

proxy_pass belongs to ngx_http_proxy_module, which can forward requests to another server. proxy_pass goes inside a location block and can use http, as follows:

location /poy {
    proxy_pass http://10.0.0.90;
}

Parameters:

proxy_set_header — when the backend web server hosts multiple virtual hosts, a header is needed to tell it which host is being proxied, and to pass along the user's hostname and real IP as well as the proxy's address:

server {
    listen 80;
    server_name www.linuxea123.com;
    location / {
        root html;
        index index.html index.htm;
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }

To capture the user's real IP from the request (nginx just needs it enabled; apache additionally needs "%{X-Forwarded-For}i" added to its LogFormat):

proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;

client_body_buffer_size: size of the buffer for the client request body; the body is first saved locally and then passed to the backend.
proxy_connect_timeout: timeout for connecting to the backend, i.e. how long to wait for the handshake response.
proxy_send_timeout: the window within which the backend must return data; exceeding it drops the connection.
proxy_read_timeout: how long nginx waits when reading from the proxied backend.
proxy_buffer_size: response buffer size; the default buffers are set by the proxy_buffers directive.

location /poy {
    proxy_pass http://10.0.0.90;
    proxy_set_header Host $host;
}

proxy.conf looks like this:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

Splitting by directory and by extension (browser-based splitting comes later). "jingtai" is the static pool, "dongtai" the dynamic one:

upstream jingtai {
    server 10.0.1.10:80 weight=5 max_fails=10 fail_timeout=10s;
    server 10.0.1.11:80 weight=5 max_fails=10 fail_timeout=10s;
}
upstream dongtai {
    server 10.0.2.10:80 weight=5 max_fails=10 fail_timeout=10s;
    server 10.0.2.11:80 weight=5 max_fails=10 fail_timeout=10s;
}
server {
    listen 80;
    server_name www.linuxea.com;
    location / {
        root html;
        index index.html index.htm;
        proxy_pass http://dongtai;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
    location /image/ {
        proxy_pass http://jingtai;
    }
}

Splitting by extension:

location ~* \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
    proxy_pass http://jingtai;
    include proxy.conf;
}
location ~* \.(php|php3|php5)$ {
    proxy_pass http://dongtai;
    include proxy.conf;
}

With if:

if ($request_uri ~* ".*\.(jsp|do)$") {
    proxy_pass http://dongtai;
}

Android / iPhone:

listen 80;
server_name app.linuxea.com;
location / {
    if ($http_user_agent ~* "android") {
        proxy_pass http://android;
    }
    if ($http_user_agent ~* "iphone") {
        proxy_pass http://iphone;
    }
    proxy_pass http://pc;
    include proxy.conf;
}

The included proxy.conf is as above:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

And a per-directory dynamic location:

location /dongtai/ {
    proxy_pass http://dongtai;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
}

proxy parameters:

proxy_set_header Host $host; — pass the user's requested Host header through to the backend.
proxy_set_header X-Real-IP $remote_addr; — record the client IP and hand it to the backend.
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; — append each proxy hop so the backend sees the chain.

Proxy/backend timeout parameters:

proxy_connect_timeout 60;
proxy_send_timeout 30;
proxy_read_timeout 30;

Memory and disk buffering parameters:

proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
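The user-agent branching above can also be written without if blocks, using map, which is generally the more idiomatic nginx form. A sketch reusing the pool names from this post (the map variable name is my own):

```
# Pick an upstream pool from the User-Agent; default to the pc pool.
map $http_user_agent $device_pool {
    default     pc;
    ~*android   android;
    ~*iphone    iphone;
}

server {
    listen 80;
    server_name app.linuxea.com;
    location / {
        proxy_pass http://$device_pool;
        include proxy.conf;
    }
}
```

Because $device_pool resolves to named upstream blocks (pc, android, iphone), no resolver directive is needed even though proxy_pass uses a variable.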
January 23, 2016
3,979 reads
0 comments
0 likes