Found 38 posts for "Web"
2019-08-03
linuxea: Notes on common nginx / PHP 7.3.8 compilation errors
Common errors when compiling PHP 7.3.8, and the dependency packages to install first:

yum install gcc autoconf gcc-c++ -y
yum install libxml2 libxml2-devel openssl openssl-devel bzip2 bzip2-devel libcurl libcurl-devel libjpeg libjpeg-devel libpng libpng-devel freetype freetype-devel gmp gmp-devel readline readline-devel libxslt libxslt-devel -y
yum install systemd-devel -y
yum install openjpeg-devel -y
yum install -y curl-devel
yum install libzip libzip-devel -y

Error: "checking for libzip… configure: error: system libzip must be upgraded to version >= 0.11". Remove the distribution libzip and build a newer one from source:

yum remove -y libzip
wget https://nih.at/libzip/libzip-1.2.0.tar.gz
tar -zxvf libzip-1.2.0.tar.gz
cd libzip-1.2.0
./configure
make && make install

Error: "configure: error: off_t undefined; check your library configuration". Add the library directories to the loader path and refresh the cache:

echo '/usr/local/lib64 /usr/local/lib /usr/lib /usr/lib64'>>/etc/ld.so.conf
ldconfig -v

Error: "/usr/local/include/zip.h:59:21: fatal error: zipconf.h: No such file or directory". Copy the header into place:

cp /usr/local/lib/libzip/include/zipconf.h /usr/local/include/zipconf.h

Reference build scripts:

PHP:   curl -Lk https://raw.githubusercontent.com/marksugar/Maops/master/php/php7/phpInstall.sh |bash -s www 7.3.8
nginx: curl -Lk https://raw.githubusercontent.com/marksugar/Maops/master/nginx/nginxInstall.sh|bash -s www 1.16.0
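Not part of the original notes: a few hedged sanity checks before re-running ./configure, assuming pkg-config is installed and the newly built libzip shipped its .pc file.

pkg-config --modversion libzip      # should print 1.2.0 (or newer), not the old distro version
ldconfig -p | grep libzip           # confirm the new library is visible in the loader cache
php -v                              # after make install, confirm the resulting PHP binary runs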
2,844 reads · 0 comments · 0 likes
2018-08-13
linuxea: Using nginx-module-vts, the nginx traffic monitoring module
nginx-module-vts can record traffic per page, per HTTP status code, per proxied backend, per dynamically resolved DNS name, and per region/country. It can also apply traffic limits, and it ships a status page that aggregates traffic and status codes by server_name. All of this takes only a simple configuration and a recompile. If you would rather use Docker, even better: I have already prepared an example, a Docker build of nginx 1.14.0 with the vts module.

Download the modules (luajit-2.0 is added here as well):

[root@linuxea-VM-Node203 ~]# git clone git://github.com/vozlt/nginx-module-vts.git "/usr/local/nginx-module-vts"
[root@linuxea-VM-Node203 ~]# git clone git://github.com/vozlt/nginx-module-sts.git "/usr/local/nginx-module-sts"
[root@linuxea-VM-Node203 ~]# git clone git://github.com/vozlt/nginx-module-stream-sts.git "/usr/local/nginx-module-stream-sts"
### git clone lua_module
[root@linuxea-VM-Node203 ~]# curl -Lk https://github.com/simplresty/ngx_devel_kit/archive/v0.3.1rc1.tar.gz |tar xz -C /usr/local
[root@linuxea-VM-Node203 ~]# curl -Lk https://github.com/openresty/lua-nginx-module/archive/v0.10.13.tar.gz |tar xz -C /usr/local
[root@linuxea-VM-Node203 ~]# curl -Lk http://luajit.org/download/LuaJIT-2.0.5.tar.gz |tar xz -C /usr/local
[root@linuxea-VM-Node203 ~]# cd /usr/local/LuaJIT-2.0.5 && make && make install
[root@linuxea-VM-Node203 ~]# export LUAJIT_LIB=/usr/local/lib
[root@linuxea-VM-Node203 ~]# export LUAJIT_INC=/usr/local/include/luajit-2.0

Compile and install. Download the latest nginx, create the user, and build with the extra modules:

[root@linuxea-VM-Node203 ~]# useradd www -s /sbin/nologin -M
[root@linuxea-VM-Node203 ~]# curl -Lk http://nginx.org/download/nginx-1.14.0.tar.gz |tar xz -C /usr/local
[root@linuxea-VM-Node203 ~]# cd /usr/local/nginx-1.14.0 && ./configure \
 --prefix=/usr/local/nginx \
 --conf-path=/etc/nginx/nginx.conf \
 --user=www \
 --group=www \
 --error-log-path=/var/log/nginx/error.log \
 --http-log-path=/var/log/nginx/access.log \
 --pid-path=/var/run/nginx/nginx.pid \
 --lock-path=/var/lock/nginx.lock \
 --with-http_ssl_module \
 --with-http_stub_status_module \
 --with-http_gzip_static_module \
 --with-http_flv_module \
 --with-http_mp4_module \
 --with-http_geoip_module \
 --http-client-body-temp-path=/var/tmp/nginx/client \
 --http-proxy-temp-path=/var/tmp/nginx/proxy \
 --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
 --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
 --add-module=/usr/local/lua-nginx-module-0.10.13 \
 --add-module=/usr/local/ngx_devel_kit-0.3.1rc1 \
 --add-module=/usr/local/nginx-module-vts \
 --with-stream \
 --add-module=/usr/local/nginx-module-sts \
 --add-module=/usr/local/nginx-module-stream-sts && make -j2 && make install
[root@linuxea-VM-Node203 /usr/local/nginx-1.14.0]# ln -s /usr/local/lib/libluajit-5.1.so.2 /lib/

nginx-module-vts configuration. I split the directives into two conf files and include them from the main configuration.

Loaded in the http block (nginx-module-vts_zone.conf):

geoip_country /etc/nginx/GeoIP.dat;                                       # use GeoIP to count traffic per country/region
vhost_traffic_status_zone;                                                # required directive
vhost_traffic_status_filter_by_host on;                                   # group statistics by server_name
#vhost_traffic_status_bypass_stats on;                                    # do not count requests to the status page itself
vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;   # count traffic per country/region
map $http_user_agent $filter_user_agent {                                 # count traffic per user agent
    default 'unknown';
    ~iPhone ios;
    ~Android android;
    ~(MSIE|Mozilla) windows;
}
vhost_traffic_status_filter_by_set_key $filter_user_agent agent::*;       # count traffic per user agent

Loaded in the vhost server block (nginx-module-vts_zone.conf):

vhost_traffic_status_set_by_filter $variable group/zone/name;                        # read a specific status value stored in shared memory
vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;    # group by server_name
vhost_traffic_status_bypass_stats on;                                                 # do not count requests to the status page itself
vhost_traffic_status_filter_by_set_key $status $server_name;                          # count traffic per detailed HTTP status code
vhost_traffic_status_filter_by_set_key $filter_user_agent agent::$server_name;        # count traffic per user agent

Or, more simply, add this directly in the http block:

vhost_traffic_status_zone;
vhost_traffic_status_filter_by_host on;

and this directly in a server block:

server {
    listen 8295;
    server_name localhost;
    # disable counting for this status vhost
    vhost_traffic_status off;
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
}

Open a browser at web-name:8295/status.

The module can be scraped by Prometheus directly: just request /status/format/prometheus. Try it locally and filter one metric:

[root@linuxea-VM-Node203 /etc/nginx]# curl 10.10.240.203:8295/status/format/prometheus|grep nginx_vts_server_requests_total
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7631  100  7631    0     0  10.3M      0 --:--:-- --:--:-- --:--:-- 7452k
# HELP nginx_vts_server_requests_total The requests counter
# TYPE nginx_vts_server_requests_total counter
nginx_vts_server_requests_total{host="10.10.240.203",code="1xx"} 0
nginx_vts_server_requests_total{host="10.10.240.203",code="2xx"} 1663
nginx_vts_server_requests_total{host="10.10.240.203",code="3xx"} 0
nginx_vts_server_requests_total{host="10.10.240.203",code="4xx"} 0
nginx_vts_server_requests_total{host="10.10.240.203",code="5xx"} 0
nginx_vts_server_requests_total{host="10.10.240.203",code="total"} 1663
nginx_vts_server_requests_total{host="linuxea.ds.com",code="1xx"} 0
nginx_vts_server_requests_total{host="linuxea.ds.com",code="2xx"} 0
nginx_vts_server_requests_total{host="linuxea.ds.com",code="3xx"} 294
nginx_vts_server_requests_total{host="linuxea.ds.com",code="4xx"} 0
nginx_vts_server_requests_total{host="linuxea.ds.com",code="5xx"} 0
nginx_vts_server_requests_total{host="linuxea.ds.com",code="total"} 294
nginx_vts_server_requests_total{host="*",code="1xx"} 0
nginx_vts_server_requests_total{host="*",code="2xx"} 1663
nginx_vts_server_requests_total{host="*",code="3xx"} 294
nginx_vts_server_requests_total{host="*",code="4xx"} 0
nginx_vts_server_requests_total{host="*",code="5xx"} 0
nginx_vts_server_requests_total{host="*",code="total"} 1957
[root@linuxea-VM-Node203 /etc/nginx]#

Of course, exposing this raises some security concerns.

Configure nginx basic authentication. When this runs on the public internet, besides an iptables firewall that only allows the port from fixed IPs, you should go one step further and add user authentication. Generate an htpasswd user and password (user: linuxea, password: www.linuxea.com):

[root@linuxea-VM-Node63 /etc/nginx/vhost]# htpasswd -c /usr/local/ngxpasswd linuxea
New password:
Re-type new password:
Adding password for user linuxea

Add the following to the nginx status server block:

auth_basic "Please enter your id and password!";
auth_basic_user_file /etc/nginx/ngxpasswd;

So the server block becomes:

server {
    listen 8295;
    server_name localhost;
    auth_basic "Please enter your id and password!";
    auth_basic_user_file /etc/nginx/ngxpasswd;
    # disable counting for this status vhost
    vhost_traffic_status off;
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
}

Opening the page now requires authentication; the nginx authentication part is done.

Configure the Prometheus scrape side. Edit the Prometheus configuration (see the Prometheus installation reference). The metrics_path field must be set to /status/format/prometheus, and basic_auth carries the user and password configured for nginx above:

  - job_name: "nginx"
    metrics_path: /status/format/prometheus
    basic_auth:
      username: linuxea
      password: 'www.linuxea.com'
    static_configs:
      - targets:
        - '10.10.240.203:8295'
        labels:
          group: 'nginx'

Once added, Prometheus can scrape the endpoint; if anything goes wrong, check whether the target is up.

Install nginx-vts-exporter. After comparing the two, nginx-vts-exporter is actually a better fit for Prometheus scraping; it exposes a few things that nginx-module-vts alone does not, so let's install it:

[root@linuxea-VM-Node203 /etc/nginx/vhost]# docker pull sophos/nginx-vts-exporter:latest
[root@linuxea-VM-Node203 ~]# docker run -ti --rm --env NGINX_STATUS="http://linuxea:www.linuxea.com@localhost:8295/status/format/json" sophos/nginx-vts-exporter

This starts the exporter on port 9913, which you can reach from a browser (you may need to adjust the firewall rules). Because authentication was added earlier, the user and password are embedded in the URL: http://linuxea:www.linuxea.com@localhost:8295/status/format/json. All metrics are available on port 9913; here I filter on linuxea as a test:

[root@linuxea-VM-Node203 /etc/nginx/vhost]# curl http://10.10.240.203:9913/metrics|grep "linuxea"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7028  100  7028    0     0  2013k      0 --:--:-- --:--:-- --:--:-- 2287k
nginx_server_bytes{direction="in",host="linuxea.ds.com"} 1.700065e+06
nginx_server_bytes{direction="out",host="linuxea.ds.com"} 1.240604e+06
nginx_server_cache{host="linuxea.ds.com",status="bypass"} 0
nginx_server_cache{host="linuxea.ds.com",status="expired"} 0
nginx_server_cache{host="linuxea.ds.com",status="hit"} 0
nginx_server_cache{host="linuxea.ds.com",status="miss"} 0
nginx_server_cache{host="linuxea.ds.com",status="revalidated"} 0
nginx_server_cache{host="linuxea.ds.com",status="scarce"} 0
nginx_server_cache{host="linuxea.ds.com",status="stale"} 0
nginx_server_cache{host="linuxea.ds.com",status="updating"} 0
nginx_server_requestMsec{host="linuxea.ds.com"} 0
nginx_server_requests{code="1xx",host="linuxea.ds.com"} 0
nginx_server_requests{code="2xx",host="linuxea.ds.com"} 40
nginx_server_requests{code="3xx",host="linuxea.ds.com"} 3792
nginx_server_requests{code="4xx",host="linuxea.ds.com"} 1
nginx_server_requests{code="5xx",host="linuxea.ds.com"} 0
nginx_server_requests{code="total",host="linuxea.ds.com"} 3833

These can be queried in Prometheus directly. For example, to look at the nginx_server_requests metric for host linuxea.ds.com over 30s, showing only the code and host labels:

sum (irate(nginx_server_requests{host!="*",host="linuxea.ds.com",code!="total"}[30s])) by (code,host)

It also works with Grafana. I imported the upstream dashboard template; you can download nginx-vts-stats_rev2 (1).json from my github, or get it from grafana. After you import the dashboard you will see the panels.

Extra: nginx-module-sts configuration. Add stream_server_traffic_status_zone; to the nginx http block; include vhost/stream.conf inside the http block, and include stream_server.conf outside of it.

Create the server-block file:

[root@linuxea-VM-Node203 /etc/nginx]# cat vhost/stream.conf
server {
    listen 82;
    server_name linuxea.ds.com;
    location /status {
        stream_server_traffic_status_display;
        stream_server_traffic_status_display_format html;
    }
}

Create the stream_server.conf file:

[root@linuxea-VM-Node203 /etc/nginx]# cat stream_server.conf
stream {
    geoip_country /etc/nginx/GeoIP.dat;
    server_traffic_status_zone;
    server_traffic_status_filter_by_set_key $geoip_country_code country::*;
    server {
        server_traffic_status_filter_by_set_key $geoip_country_code country::$server_addr:$server_port;
    }
}

Some of the parameters:

Traffic for an individual "storage" matched by a location regular expression:

http {
    vhost_traffic_status_zone;
    ...
    server {
        ...
        location ~ ^/storage/(.+)/.*$ {
            set $volume $1;
            vhost_traffic_status_filter_by_set_key $volume storage::$server_name;
        }
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

Traffic per individual user agent, based on $http_user_agent:

http {
    vhost_traffic_status_zone;
    map $http_user_agent $filter_user_agent {
        default 'unknown';
        ~iPhone ios;
        ~Android android;
        ~(MSIE|Mozilla) windows;
    }
    vhost_traffic_status_filter_by_set_key $filter_user_agent agent::*;
    ...
    server {
        ...
        vhost_traffic_status_filter_by_set_key $filter_user_agent agent::$server_name;
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

Traffic per HTTP status code:

http {
    vhost_traffic_status_zone;
    server {
        ...
        vhost_traffic_status_filter_by_set_key $status $server_name;
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

Traffic for dynamic DNS. If a domain has multiple DNS A records, the traffic for each IP of the domain can be counted either with the filter feature or with a variable in proxy_pass:

http {
    vhost_traffic_status_zone;
    upstream backend {
        server elb.example.org:80;
    }
    ...
    server {
        ...
        location /backend {
            vhost_traffic_status_filter_by_set_key $upstream_addr upstream::backend;
            proxy_pass http://backend;
        }
    }
}

This counts traffic for each IP of elb.example.org; if elb.example.org has multiple DNS A records, all the IPs are shown in filterZones. With the setup above, when NGINX starts or reloads its configuration it queries the DNS server to resolve the domain and caches the A records in memory, so even if the DNS administrator changes the A records, the cached records do not change until NGINX is restarted or reloaded.

http {
    vhost_traffic_status_zone;
    resolver 10.10.10.53 valid=10s;
    ...
    server {
        ...
        location /backend {
            set $backend_server elb.example.org;
            proxy_pass http://$backend_server;
        }
    }
}

This also counts traffic for each IP of elb.example.org; if elb.example.org changes its DNS A records, both the old and the new IPs are shown under ::nogroups. Unlike the first, upstream-group setup, this second setup keeps working even after the DNS administrator changes the A records.

Persisting the statistics:

http {
    vhost_traffic_status_zone;
    vhost_traffic_status_dump /var/log/nginx/vts.db;
    ...
    server {
        ...
    }
}

vhost_traffic_status_filter_by_host on; makes the statistics be grouped per server_name.

Reference: https://github.com/vozlt/nginx-module-vts
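Not part of the original article: a quick way to confirm that the protected status endpoint and the exporter respond before wiring them into Prometheus, assuming the same host, port, user and password used above.

# the basic-auth protected vts endpoint: expect 401 without credentials, 200 with them
curl -s -o /dev/null -w '%{http_code}\n' http://10.10.240.203:8295/status/format/prometheus
curl -s -o /dev/null -w '%{http_code}\n' -u linuxea:www.linuxea.com http://10.10.240.203:8295/status/format/prometheus
# the exporter on 9913 should already be publishing metrics
curl -s http://10.10.240.203:9913/metrics | head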
7,277 reads · 0 comments · 0 likes
2017-03-14
Nginx: seamlessly adding the echo module to collect POST logs
Nginx can easily handle large volumes of HTTP traffic. Every time NGINX handles a connection it produces a log entry that stores some information about that connection (remote IP address, response size, status code, and so on); the complete set of available log variables is documented elsewhere. In some cases you may prefer to also store the body of the request, especially for POST requests. Fortunately the NGINX ecosystem is rich and includes many handy modules. One of them is the echo module, which provides useful features such as the echo, time and sleep commands. For our use case, logging the request body, all we need is the echo_read_request_body command and the $request_body variable (which carries the request body once the echo module has read it). This module is not shipped with NGINX by default, so to use it we have to build NGINX from source with the echo module included.

The steps below describe how to build NGINX with the echo module (a complete build bash script is at the end). Download the NGINX and echo sources:

[root@linuxea ]# curl -Lk https://github.com/openresty/echo-nginx-module/archive/v0.58.tar.gz -o /usr/local/v0.58.tar.gz
[root@linuxea ]# mkdir -p /tmp/echo-nginx-module && tar xf /usr/local/v0.58.tar.gz -C /tmp/echo-nginx-module --strip-components=1

The existing nginx is installed under /usr/local/webserver/nginx, so we download the same nginx version and configure it the same way, adding the echo module:

./configure --user=www \
 --group=www \
 --prefix=/usr/local/webserver/nginx \
 --with-http_stub_status_module \
 --with-http_ssl_module \
 --with-http_gunzip_module \
 --with-http_mp4_module \
 --with-http_flv_module \
 --with-pcre \
 --with-http_gzip_static_module \
 --with-http_realip_module \
 --with-ld-opt=-ljemalloc \
 --add-module=/tmp/echo-nginx-module

After configuring, only run make (do not run make install). In /usr/local/webserver/nginx/sbin/, delete or rename the old nginx binary:

mv nginx old_nginx

Even after the mv (or rm), nginx keeps running:

[root@linuxea sbin]# ps aux|grep nginx
root      1796  0.0  0.9 124408 36180 ?  Ss  Mar06  0:02 nginx: master process /usr/local/webserver/nginx/sbin/nginx -c /usr/local/webserver/nginx/conf/nginx.conf
www      15851  0.3  2.0 169464 78680 ?  S   05:05  0:00 nginx: worker process
www      15852  0.6  1.9 169464 78072 ?  S   05:05  0:00 nginx: worker process
www      15853  0.5  1.9 169464 77168 ?  S   05:05  0:00 nginx: worker process
www      15854  0.3  2.0 169464 78680 ?  S   05:05  0:00 nginx: worker process
www      15855  0.3  2.0 169464 78600 ?  S   05:05  0:00 nginx: worker process
www      15856  1.7  1.9 171512 77972 ?  S   05:05  0:00 nginx: worker process
[root@linuxea sbin]# cp /usr/local/nginx-1.8.0/objs/nginx /usr/local/webserver/nginx/sbin/
[root@linuxea sbin]# /etc/init.d/nginx reload

Add to the nginx main configuration:

log_format upstream2 '$proxy_add_x_forwarded_for $remote_user [$time_local] "$request" $http_host'
                     '$body_bytes_sent $request_body "$http_referer" "$http_user_agent" $ssl_protocol $ssl_cipher'
                     '$request_time [$status] [$upstream_status] [$upstream_response_time] "$upstream_addr"';

And in the server block:

echo_read_request_body;
access_log /data/logs/wwwlogs/access.log upstream2;

Then check the log. Build script:

if ! rpm -ql GeoIP-devel >/dev/null 2>&1;then yum install GeoIP-devel -y;fi
mkdir -p /tmp/nginx_build/{echo-nginx-module,ngx_http_geoip2_module}
curl -Lks http://nginx.org/download/nginx-1.10.3.tar.gz|tar -xz -C /tmp/nginx_build/ --strip-components=1
curl -Lks $(curl -Lks 'https://github.com/openresty/echo-nginx-module/releases'| awk -F'"' '/tar.gz"/{print "https://github.com"$2;exit}')| tar -xz -C /tmp/nginx_build/echo-nginx-module/ --strip-components=1
cd /tmp/nginx_build/ngx_http_geoip2_module
git clone https://github.com/voxxit/dockerfiles.git
mv dockerfiles/nginx-geoip2/ngx_http_geoip2_module-1.0/* . && cd ../
./configure --user=www --group=www --prefix=/usr/local/webserver/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_gunzip_module --with-http_mp4_module --with-http_flv_module --with-http_realip_module --with-pcre --with-http_gzip_static_module --with-ld-opt=-ljemalloc --add-module=./echo-nginx-module --with-http_geoip_module
make -j$(getconf _NPROCESSORS_ONLN) && make install && cd && \rm -rf /tmp/nginx_build
echo 'export PATH=/usr/local/webserver/nginx/sbin:$PATH' > /etc/profile.d/nginx.sh
. /etc/profile.d/nginx.sh && nginx -V

Reference: https://developers.redhat.com/blog/2016/05/23/configuring-nginx-to-log-post-data-on-linux-rhel/
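Not in the original post: a quick check that POST bodies actually show up with the new log format, assuming the vhost above writes to /data/logs/wwwlogs/access.log on this host.

# send a test POST and look for the body in the access log
curl -s -o /dev/null -d 'user=linuxea&action=test' http://127.0.0.1/
tail -n 1 /data/logs/wwwlogs/access.log    # the logged line should contain user=linuxea&action=test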
4,899 reads · 0 comments · 0 likes
2016-12-16
CentOS 7 + LNMP + Discuz_X3.2_SC_UTF8 detailed deployment
Environment: nginx-1.10.2, php-5.6.29, Discuz_X3.2, mariadb-10.1.19.

Install nginx. Create the user nginx will run as:

[root@linuxea-com ~]# groupadd -r -g 499 nginx
[root@linuxea-com ~]# useradd -u 499 -s /sbin/nologin -c 'web server' -g nginx nginx -M

Download nginx:

[root@linuxea-com ~]# cd /usr/local
[root@linuxea-com /usr/local]# curl -s http://nginx.org/download/nginx-1.10.2.tar.gz -o/usr/local/nginx-1.10.2.tar.gz

Unpack:

[root@linuxea-com /usr/local]# cd /usr/local && tar xf nginx-1.10.2.tar.gz && rm -rf nginx-1.10.2.tar.gz

Install the build dependencies:

[root@linuxea-com /usr/local]# yum install openssl-devel pcre pcre-devel gcc make -y

Compile:

[root@linuxea-com /usr/local]# cd nginx-1.10.2 && ./configure --prefix=/usr/local/nginx --conf-path=/etc/nginx/nginx.conf --user=nginx --group=nginx --error-log-path=/data/logs/nginx/error.log --http-log-path=/data/logs/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_flv_module --with-http_mp4_module --with-http_realip_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi && make && make install

Create the required directories:

[root@linuxea-com /usr/local]# mkdir -p /var/tmp/nginx/{client,fastcgi,proxy,uwsgi} /data/logs/nginx /data/wwwroot

Download the configuration file:

[root@linuxea-com /usr/local]# rm -rf /etc/nginx/nginx.conf
[root@linuxea-com /usr/local]# curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/nginx.conf -o /etc/nginx/nginx.conf

Download the init script:

[root@linuxea-com /usr/local]# curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/nginx -o /etc/init.d/nginx && chmod +x /etc/init.d/nginx

Install php-fpm. Download php:

[root@linuxea-com /usr/local]# curl -s http://tw1.php.net/distributions/php-5.6.29.tar.gz -o /usr/local/php-5.6.29.tar.gz && cd /usr/local

Unpack:

[root@linuxea-com /usr/local]# tar xf php-5.6.29.tar.gz && rm -rf php-5.6.29.tar.gz
[root@linuxea-com /usr/local]# cd php-5.6.29

Create the php user:

[root@linuxea-com /usr/local]# groupadd -g 498 -r php-fpm && useradd -u 498 -g php-fpm -r php-fpm -s /sbin/nologin

Install the dependencies:

[root@linuxea-com /usr/local]# yum install epel-release -y && yum install -y gcc automake autoconf libtool make libxml2-devel openssl openssl-devel bzip2 bzip2-devel libpng libpng-devel freetype freetype-devel libcurl-devel libcurl libjpeg libjpeg-devel libpng libpng-devel freetype freetype-devel libmcrypt-devel libmcrypt libtool-ltdl-devel libxslt-devel mhash mhash-devel axel

Compile and install:

[root@linuxea-com /usr/local]# cd php-5.6.29 && ./configure --prefix=/usr/local/php --disable-pdo --disable-debug --disable-rpath --enable-inline-optimization --enable-sockets --enable-sysvsem --enable-sysvshm --enable-pcntl --enable-mbregex --enable-xml --enable-zip --enable-fpm --enable-mbstring --with-pcre-regex --with-mysql --with-mysqli --with-gd --with-jpeg-dir --with-bz2 --with-zlib --with-mhash --with-curl --with-mcrypt --with-jpeg-dir --with-png-dir && make && make install

Create the log directory:

[root@linuxea-com /usr/local]# mkdir /data/logs/php-fpm

Copy php.ini into place:

[root@linuxea-com /usr/local]# cp /usr/local/php-5.6.29/php.ini-production /usr/local/php/lib/php.ini

Set the php.ini time zone:

[root@linuxea-com /usr/local]# sed -i 's/;date.timezone =/date.timezone = Asia\/Shanghai/' /usr/local/php/lib/php.ini

Download the php-fpm init script:

[root@linuxea-com /usr/local]# curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/php-fpm -o /etc/init.d/php-fpm && chmod +x /etc/init.d/php-fpm

Download the php-fpm.conf configuration file:

[root@linuxea-com /usr/local]# curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/php-fpm.conf -o /usr/local/php/etc/php-fpm.conf

Install mariadb. Download mariadb:

[root@linuxea-com /usr/local]# cd /usr/local && axel -n 30 http://sgp1.mirrors.digitalocean.com/mariadb//mariadb-10.1.19/bintar-linux-x86_64/mariadb-10.1.19-linux-x86_64.tar.gz

Unpack:

[root@linuxea-com /usr/local]# tar xf mariadb-10.1.19-linux-x86_64.tar.gz && ln -s mariadb-10.1.19-linux-x86_64 mysql

Create the user:

[root@linuxea-com /usr/local]# groupadd -g 497 -r mysql && useradd -u 497 -g mysql -r mysql -s /sbin/nologin && mkdir /data/mysql

Run the binary install:

[root@linuxea-com /usr/local]# cd mysql && scripts/mysql_install_db --user=mysql --datadir=/data/mysql

Fix the ownership:

[root@linuxea-com /usr/local]# chown -R mysql.mysql /data/mysql

Copy the init script:

[root@linuxea-com /usr/local]# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld && chmod +x /etc/init.d/mysqld
#cp /usr/local/mysql/support-files/my-large.cnf /etc/my.cnf

Download the mysql configuration file:

[root@linuxea-com /usr/local]# curl -Lks4 https://raw.githubusercontent.com/LinuxEA-Mark/nmp/master/my.cnf -o /etc/my.cnf

Create the mysql symlink:

[root@linuxea-com /usr/local]# ln -s /usr/local/mysql/bin/mysql /usr/bin/

Start mysql, nginx and php-fpm:

[root@linuxea-com /usr/local]# systemctl start mysqld && systemctl start nginx && systemctl start php-fpm

Database grants. Clean up the default accounts and databases:

[root@linuxea-com /usr/local]# mysql -e "DELETE FROM mysql.user WHERE User='';"
[root@linuxea-com /usr/local]# mysql -e "DELETE FROM mysql.db WHERE Db LIKE 'test%';"
[root@linuxea-com /usr/local]# mysql -e "DROP DATABASE test;"

Create the bbs database (bbs user password: 8K79Xucb5uXC) and set the root password (abc8K7123):

[root@linuxea-com /usr/local]# mysql -e "CREATE DATABASE bbs charset='utf8';"
[root@linuxea-com /usr/local]# mysql -e "GRANT ALL PRIVILEGES ON bbs.* To 'bbs'@'%' IDENTIFIED BY '8K79Xucb5uXC';"
[root@linuxea-com /usr/local]# mysql -e "UPDATE mysql.user SET password = password('abc8K7123') WHERE user = 'root';"
[root@linuxea-com /usr/local]# mysql -e "flush privileges;"
[root@linuxea-com /usr/local]# mysql -uroot -pabc8K7123 -e "flush privileges;"

Deploy Discuz. Download Discuz:

[root@linuxea-com /usr/local]# cd /data/wwwroot
[root@linuxea-com /data/wwwroot]# wget http://download.comsenz.com/DiscuzX/3.2/Discuz_X3.2_SC_UTF8.zip && unzip Discuz_X3.2_SC_UTF8.zip

Remove the files that are not needed:

[root@linuxea-com /data/wwwroot]# rm -rf readme utility/ Discuz_X3.2_SC_UTF8.zip

Move the web files to the current directory and fix the ownership:

[root@linuxea-com /data/wwwroot]# mv upload/* ./ && chown -R nginx.nginx /data/wwwroot/

Open the site in a browser, accept the license, and step through the installer (this step normally does not fail). Choose a fresh install, enter the database name, account and password created above along with the forum admin password, finish the installation, and log in. Uploaded images are stored under data/attachment/forum/ in the web root:

[root@DS-VM-Node49 /data/wwwroot]# ll data/attachment/forum/201612/16/114158ponqltotvq9ouuwl.jpg
-rw-r--r-- 1 nginx nginx 68028 12月 16 11:41 data/attachment/forum/201612/16/114158ponqltotvq9ouuwl.jpg
[root@DS-VM-Node49 /data/wwwroot]#

Delete the install directory:

rm -rf install/

Enable pseudo-static URLs: turn on the rewrite option in the admin panel as shown in the screenshots, click "view current rewrite rules", copy the nginx rules from the pop-up page into the server block of the nginx configuration, and reload with /etc/init.d/nginx reload.
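Not part of the original walkthrough: a few hedged sanity checks after starting the three services, assuming the install prefixes and the bbs credentials created above.

/usr/local/nginx/sbin/nginx -t                      # nginx configuration syntax check
/usr/local/php/sbin/php-fpm -t                      # php-fpm configuration check
mysql -ubbs -p8K79Xucb5uXC -e 'show databases;'     # the bbs database should be listed
curl -I http://127.0.0.1/                           # nginx should answer before running the Discuz installer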
8,241 reads · 0 comments · 0 likes
2016-09-15
Compiling and installing Nginx 1.10.1 with a Lua environment
Install. Download nginx, pcre and LuaJIT:

[root@LinuxEA local]# curl -sO http://nginx.org/download/nginx-1.10.1.tar.gz
[root@LinuxEA local]# curl -sO http://nchc.dl.sourceforge.net/project/pcre/pcre/8.39/pcre-8.39.tar.gz
[root@LinuxEA local]# curl -sO http://luajit.org/download/LuaJIT-2.0.4.tar.gz

Unpack:

[root@LinuxEA local]# tar xf nginx-1.10.1.tar.gz
[root@LinuxEA local]# ln -s nginx-1.10.1 nginx
[root@LinuxEA local]# tar xf pcre-8.39.tar.gz
[root@LinuxEA local]# tar xf LuaJIT-2.0.4.tar.gz

Build LuaJIT:

[root@LinuxEA local]# yum install gcc -y
[root@LinuxEA local]# cd LuaJIT-2.0.4
[root@LinuxEA LuaJIT-2.0.4]# make && make install

Build PCRE:

[root@LinuxEA local]# yum install gcc-c++ -y
[root@LinuxEA local]# cd pcre-8.39 && ./configure
[root@LinuxEA local]# make && make install

Set the environment variables:

[root@LinuxEA local]# export LUAJIT_LIB=/usr/local/lib
[root@LinuxEA LuaJIT-2.0.4]# export LUAJIT_INC=/usr/local/include/luajit-2.0/

Create the user:

[root@LinuxEA nginx]# useradd -s /sbin/nologin -M nginx
[root@LinuxEA LuaJIT-2.0.4]# cd ../nginx

### get ngx_devel_kit & lua-nginx-module

Before compiling, download the required modules from https://github.com/simpl/ngx_devel_kit#warning-using-ndk_all:

[root@LinuxEA local]# yum install git
[root@LinuxEA local]# git clone https://github.com/simpl/ngx_devel_kit.git

And also lua-nginx-module:

[root@LinuxEA local]# git clone https://github.com/openresty/lua-nginx-module.git

Start compiling nginx, pointing --add-module at the module directories. Install the build dependencies first:

[root@LinuxEA nginx]# yum install -y openssl openssl-devel
[root@LinuxEA nginx]# ./configure --prefix=/usr/local/nginx \
 --user=nginx \
 --group=nginx \
 --with-http_ssl_module \
 --with-http_stub_status_module \
 --with-file-aio \
 --add-module=../ngx_devel_kit/ \
 --add-module=../lua-nginx-module/ \
 --with-http_gzip_static_module \
 --with-http_flv_module \
 --with-pcre=/usr/local/pcre-8.39 \
 --with-http_mp4_module \
 --http-client-body-temp-path=/var/tmp/nginx/client \
 --http-proxy-temp-path=/var/tmp/nginx/proxy \
 --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
 --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
 --pid-path=/var/run/nginx/nginx.pid \
 --lock-path=/var/lock/nginx.lock
[root@LinuxEA nginx]# make -j2 && make install
[root@LinuxEA nginx-1.10.1]# ln -s /usr/local/lib/libluajit-5.1.so.2 /lib64/
[root@LinuxEA nginx-1.10.1]# mkdir -p /var/tmp/nginx/{client,fastcgi,proxy,uwsgi}

Add a small nginx Lua location to the server block:

[root@LinuxEA conf]# vi nginx.conf
location /linuxea {
    default_type 'text/plain';
    content_by_lua 'ngx.say("hello,lua")';
}
[root@LinuxEA conf]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

Disable SELinux and the firewall, then start nginx:

[root@LinuxEA conf]# setenforce 0
[root@LinuxEA conf]# echo -e 'net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1' >> /etc/sysctl.conf && sysctl -p
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
[root@LinuxEA conf]# systemctl mask firewalld
[root@LinuxEA conf]# systemctl stop firewalld
[root@LinuxEA conf]# /usr/local/nginx/sbin/nginx
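Not in the original article: a quick check that the Lua handler responds, assuming nginx is listening on the default port 80 of this host.

/usr/local/nginx/sbin/nginx -V 2>&1 | grep -o lua-nginx-module   # confirm the module was compiled in
curl -s http://127.0.0.1/linuxea                                  # should print: hello,lua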
3,503 reads · 0 comments · 0 likes
2016-04-01
Apache log rotation
Part 1: cronolog. Download the cronolog package:

wget -P /usr/local http://cronolog.org/download/cronolog-1.6.2.tar.gz

1. Install:

cd /usr/local && tar xf cronolog-1.6.2.tar.gz && cd cronolog-1.6.2 && ./configure && make && make install

2. Configure httpd.conf:

[tangzhengchao@Aliyun-live2 conf]$ vim httpd.conf

Search for the relevant lines and change them:

278 ErrorLog "|/usr/local/sbin/cronolog /alidata/server/httpd/logs/error_%w.log"
307 CustomLog "|/usr/local/sbin/cronolog /alidata/server/httpd/logs/access_%w.log" combined

The logs are stored under /alidata/server/httpd/logs/, named with an access_ or error_ prefix and a %w.log suffix; %w is the day of the week (0-6, starting from Sunday), and the files cycle week after week.

[tangzhengchao@Aliyun-live2 vhosts]$ ll /alidata/server/httpd/logs/
total 480
-rw-r--r-- 1 root root 1203 Mar 30 17:29 error_3.log

3. Configure dl.16889999.com.conf:

[tangzhengchao@Aliyun-live2 vhosts]$ vim dl.16889999.com.conf
ErrorLog "|/usr/local/sbin/cronolog /alidata/log/httpd/dl.16889999.com-error-%w.log"
CustomLog "|/usr/local/sbin/cronolog /alidata/log/httpd/dl.16889999.com-access-%w.log" vhost_common env=!dontlog

[tangzhengchao@Aliyun-live2 vhosts]$ ll /alidata/log/httpd/dl.16889999.com*
-rw-r--r-- 1 root root 165445 Mar 30 23:59 /alidata/log/httpd/dl.16889999.com-access-3.log
-rw-r--r-- 1 root root 266742 Mar 31 10:39 /alidata/log/httpd/dl.16889999.com-access-4.log

4. Configure jds.jince.com.conf:

[tangzhengchao@Aliyun-live2 logs]$ vim ../conf/vhosts/jds.jince.com.conf
ErrorLog "|/usr/local/sbin/cronolog /alidata/log/httpd/jds.jince.com-error-%w.log"
CustomLog "|/usr/local/sbin/cronolog /alidata/log/httpd/jds.jince.com-access-%w.log" vhost_common env=!dontlog

[tangzhengchao@Aliyun-live2 vhosts]$ ll /alidata/log/httpd/
total 69996
-rw-r--r-- 1 root root 57273 Mar 30 17:37 jds.jince.com-access-3.log
-rw-r--r-- 1 root root 38541 Mar 31 10:56 jds.jince.com-access-4.log

cronolog configuration summary: after installation the binary is at /usr/local/sbin/cronolog by default. In httpd.conf (or a vhost configuration file):

error log:  "|<cronolog path> /<log path>_%w.log"
    ErrorLog "|/usr/local/sbin/cronolog /alidata/server/httpd/logs/error_%w.log"
access log: "|<cronolog path> /<log path>_%w.log" combined
    CustomLog "|/usr/local/sbin/cronolog /alidata/server/httpd/logs/access_%w.log" combined

With this in place the logs are split automatically per day: access_0.log through access_6.log, and the next weekly cycle overwrites them, so only the most recent 7 days of logs are kept.

Part 2: splitting with rotatelogs. Find where rotatelogs lives:

[root@localhost local]# find / -name rotatelogs
/usr/sbin/rotatelogs

Change the configuration:

ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/error_%Y%m%d.log 86400 480"
CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/access_%Y%m%d.log 86400 480" common

86400: 86400 seconds is one day, so a new log file is created every day. 480: the offset in minutes relative to UTC. If omitted it defaults to 0 and UTC is used. For example, for a region 5 hours behind UTC the value should be -300; Beijing time is UTC+8, so it should be 480. This keeps the timestamps in the log consistent with the server time, which makes the logs easier to read.
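Not part of the original post: a quick illustration of the strftime patterns used above, so it is clear which file today's traffic lands in.

date +%w        # prints 0-6 (0 = Sunday); on a Wednesday this prints 3, so cronolog writes access_3.log / error_3.log that day
date +%Y%m%d    # the pattern used in the rotatelogs example, e.g. 20160401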
3,647 reads · 1 comment · 0 likes
2016-02-21
LAMT: load balancing with mod_proxy
LAMT scheduling with mod_proxy. Load balancing with mod_proxy requires the proxy_balancer_module (shared) module. Edit the mod_proxy.conf file and add the two tomcat hosts; lb is the name given to the balancer:

[root@nginx-proxy2 conf.d]# cat mod_proxy.conf
ProxyVia on
ProxyRequests off
ProxyPreserveHost on
<Proxy balancer://lb>
  BalancerMember http://10.0.0.53:8080 loadfactor=1 route=TomcatA
  BalancerMember http://10.0.0.54:8080 loadfactor=1 route=TomcatB
</Proxy>
ProxyPass / balancer://lb/ stickysession=JSESSIONID
ProxyPassReverse / balancer://lb/
<Location />
  Order Allow,Deny
  Allow from all
</Location>
[root@nginx-proxy2 conf.d]#

stickysession=JSESSIONID enables session stickiness.

Edit workers.properties and add the following; the workers listen on 8009 (mod_proxy itself can speak both http and ajp):

[root@nginx-proxy2 conf.d]# cat workers.properties
worker.list=lbcA,statA
worker.TomcatA.port=8009
worker.TomcatA.host=10.0.0.53
worker.TomcatA.type=ajp13
worker.TomcatA.lbfactor=1
worker.TomcatB.port=8009
worker.TomcatB.host=10.0.0.54
worker.TomcatB.type=ajp13
worker.TomcatB.lbfactor=1
worker.lbcA.type=lb
worker.lbcA.sticky_session=0
worker.lbcA.balance_workers = TomcatA,TomcatB
worker.statA.type = status
[root@nginx-proxy2 conf.d]#

Status page:

[root@nginx-proxy2 conf.d]# cat mod_proxy.conf
ProxyVia on
ProxyRequests off
ProxyPreserveHost on
<Proxy balancer://lb>
  BalancerMember http://10.0.0.53:8080 loadfactor=1 route=TomcatA
  BalancerMember http://10.0.0.54:8080 loadfactor=2 route=TomcatB
</Proxy>
ProxyPass / balancer://lb/
ProxyPassReverse / balancer://lb/
# ------------------------------ status page ------------------------------
<Location /lbmanager>
  SetHandler balancer-manager
</Location>
ProxyPass /lbmanager !
# --------------------------------------------------------------------------
<Location />
  Order Allow,Deny
  Allow from all
</Location>
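Not from the original post: two hedged checks after reloading httpd, assuming an rpm httpd on this proxy listening on port 80 and the /lbmanager handler configured above.

httpd -M 2>/dev/null | grep -E 'proxy_module|proxy_balancer'           # the required proxy modules should be loaded
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/lbmanager    # the balancer-manager page should answer 200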
2,997 reads · 0 comments · 0 likes
2016-02-20
LAMT: load balancing with mod_jk
Configuring mod_jk-based load balancing (see the earlier mod_jk article).

1. To keep users from reaching the backend Tomcat instances directly, which would undermine the load balancing, it is recommended to disable the HTTP/1.1 connector on each Tomcat 7 instance.

2. Add the jvmRoute parameter to the engine of every Tomcat 7 instance, giving each one a globally unique identifier, as shown below. Note that no two instances may use the same jvmRoute value.

<Engine name="Standalone" defaultHost="localhost" jvmRoute="TomcatA">

Then configure apache. Change /etc/httpd/extra/httpd-jk.conf to:

LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/extra/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /* lbcluster1
JkMount /jkstatus/ stat1

Edit /etc/httpd/extra/workers.properties and add:

worker.list = lbcluster1,stat1
worker.TomcatA.type = ajp13
worker.TomcatA.host = 172.16.100.1
worker.TomcatA.port = 8009
worker.TomcatA.lbfactor = 5
worker.TomcatB.type = ajp13
worker.TomcatB.host = 172.16.100.2
worker.TomcatB.port = 8009
worker.TomcatB.lbfactor = 5
worker.lbcluster1.type = lb
worker.lbcluster1.sticky_session = 1
worker.lbcluster1.balance_workers = TomcatA, TomcatB
worker.stat1.type = status

Example: 10.0.0.53 and 10.0.0.54 are the tomcat machines, 10.0.0.91 is the httpd load balancer.

Install the JDK:

[root@NFS-WEB2 local]# rpm -ivh jdk-7u9-linux-x64.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]

Install apache-tomcat:

[root@NFS-WEB2 local]# tar xf apache-tomcat-7.0.67.tar.gz -C /usr/local/
[root@NFS-WEB2 local]# ln -sv apache-tomcat-7.0.67 tomcat
`tomcat' -> `apache-tomcat-7.0.67'
[root@NFS-WEB2 bin]# cat /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export PATH=$CATALINA_HOME/bin:$PATH
[root@NFS-WEB2 bin]# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export PATH=$JAVA_HOME/bin:$PATH

Tomcat init.d startup script:

[root@NFS-WEB2 bin]# cat /etc/init.d/tomcat
#!/bin/sh
# Tomcat init script for Linux.
#
# chkconfig: 2345 96 14
# description: The Apache Tomcat servlet/JSP container.
JAVA_HOME=/usr/java/latest
CATALINA_HOME=/usr/local/tomcat
export JAVA_HOME CATALINA_HOME
case $1 in
start)
  exec $CATALINA_HOME/bin/catalina.sh start;;
stop)
  exec $CATALINA_HOME/bin/catalina.sh stop;;
restart)
  $CATALINA_HOME/bin/catalina.sh stop
  sleep 2
  exec $CATALINA_HOME/bin/catalina.sh start;;
*)
  echo "Usage: `basename $0` {start|stop|restart}"
  exit 1
  ;;
esac
[root@NFS-WEB2 bin]#
[root@NFS-WEB2 local]# chmod +x /etc/init.d/tomcat
[root@NFS-WEB2 local]# chkconfig --add tomcat

Provide a different test page on each of the two nodes.

On 10.0.0.53:

[root@NFS-WEB2 webapps]# cd /usr/local/tomcat/webapps
[root@NFS-WEB2 webapps]# mkdir testapp
[root@NFS-WEB2 webapps]# cd testapp/
[root@NFS-WEB2 testapp]# mkdir -p WEB-INF/{classes,lib}
[root@NFS-WEB2 webapps]# vim index.jsp

To demonstrate the effect, TomcatA serves the following page in a context (such as /test):

<%@ page language="java" %>
<html>
  <head><title>TomcatA</title></head>
  <body>
    <h1><font color="red">TomcatA </font></h1>
    <table align="centre" border="1">
      <tr>
        <td>Session ID</td>
        <% session.setAttribute("abc","abc"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>

On 10.0.0.54:

[root@NFS-WEB1 webapps]# cd /usr/local/tomcat/webapps
[root@NFS-WEB1 webapps]# mkdir testapp
[root@NFS-WEB1 webapps]# cd testapp/
[root@NFS-WEB1 testapp]# mkdir -p WEB-INF/{classes,lib}
[root@NFS-WEB1 webapps]# vim index.jsp

To demonstrate the effect, TomcatB serves the following page in a context (such as /test):

<%@ page language="java" %>
<html>
  <head><title>TomcatB</title></head>
  <body>
    <h1><font color="blue">TomcatB </font></h1>
    <table align="centre" border="1">
      <tr>
        <td>Session ID</td>
        <% session.setAttribute("abc","abc"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>

mod_jk on 10.0.0.91:

[root@nginx-proxy2 modules]# cat /etc/httpd/conf.d/mod_jk.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel notice
JkMount /* lbcA
JkMount /jkstatus/ statA
[root@nginx-proxy2 modules]# cat /etc/httpd/conf.d/workers.properties
worker.list=lbcA,statA
worker.TomcatA.port=8009
worker.TomcatA.host=10.0.0.53
worker.TomcatA.type=ajp13
worker.TomcatA.lbfactor=1
worker.TomcatB.port=8009
worker.TomcatB.host=10.0.0.54
worker.TomcatB.type=ajp13
worker.TomcatB.lbfactor=1
worker.lbcA.type=lb
worker.lbcA.sticky_session=0
worker.lbcA.balance_workers = TomcatA,TomcatB
worker.statA.type = status
[root@nginx-proxy2 modules]#

sticky_session=0 means sessions are not kept sticky.
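Not part of the original example: a quick check from a client that requests really alternate between the two nodes, assuming the testapp context above and httpd listening on port 80 of 10.0.0.91.

for i in $(seq 1 6); do
  curl -s http://10.0.0.91/testapp/index.jsp | grep -o 'Tomcat[AB]' | head -n 1
done
# with sticky_session=0 the output should alternate between TomcatA and TomcatB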
2,904 reads · 0 comments · 0 likes
2016-02-19
LAMT: configuring and using mod_jk
mod_jk configuration: connecting apache to Tomcat through the mod_jk module. mod_jk is an ASF project: a connector that runs on the apache side and talks to Tomcat over the AJP protocol. It is an apache module and the client side of AJP (the server side is Tomcat's AJP connector).

[root@nginx-proxy2 conf.d]# cd
[root@nginx-proxy2 ~]# tar xf tomcat-connectors-1.2.40-src.tar.gz
[root@nginx-proxy2 ~]# cd tomcat-connectors-1.2.40-src/native/
[root@nginx-proxy2 native]# rpm -ql httpd-devel|grep apxs

Make sure apxs exists, i.e. that httpd-devel is installed:

[root@nginx-proxy2 native]# yum -y install httpd-devel
[root@nginx-proxy2 native]# rpm -ql httpd-devel|grep apxs
/usr/sbin/apxs
/usr/share/man/man8/apxs.8.gz
[root@nginx-proxy2 native]#
[root@nginx-proxy2 native]# ./configure --with-apxs=/usr/sbin/apxs
[root@nginx-proxy2 native]# make && make install

Make sure mod_jk.so exists:

[root@nginx-proxy2 modules]# ls /usr/lib64/httpd/modules/ |grep mod_jk
mod_jk.so
[root@nginx-proxy2 modules]#

Edit the configuration file:

[root@nginx-proxy2 conf.d]# cat mod_jk.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /* TomcatA
JkMount /jkstatus/ stat-LinuxEA

Edit workers.properties. Since mod_jk speaks AJP, the port is 8009:

[root@nginx-proxy2 conf.d]# cat workers.properties
worker.list=TomcatA,stat-LinuxEA
worker.TomcatA.port=8009
worker.TomcatA.host=10.0.0.53
worker.TomcatA.type=ajp13
worker.TomcatA.lbfactor=1
worker.stat-LinuxEA.type = status
[root@nginx-proxy2 conf.d]# pwd
/etc/httpd/conf.d
[root@nginx-proxy2 conf.d]#

Check that the log looks normal:

[root@nginx-proxy2 conf.d]# tail /var/log/httpd/mod_jk.log
[Thu Feb 18 06:13:02.200 2016] [7345:139943077504992] [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1403): 03d0 75 6E 64 3A 20 23 44 32 41 34 31 43 3B 0A 20 20 - und:.#D2A41C;...
[Thu Feb 18 06:13:02.200 2016] [7345:139943077504992] [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1403): 03e0 7D 0A 20 20 74 64 2E 68 65 61 64 65 72 2D 6C 65 - }...td.header-le
[Thu Feb 18 06:13:02.200 2016] [7345:139943077504992] [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1403): 03f0 66 74 20 7B 0A 20 20 20 20 74 65 78 74 2D 61 6C - ft.{.....text-al
[Thu Feb 18 06:13:02.201 2016] [7345:139943077504992] [debug] ws_write::mod_jk.c (552): written 8065 out of 8065
[root@nginx-proxy2 conf.d]#

Once mod_jk works, raise the log level to notice (JkLogLevel notice):

[root@nginx-proxy2 conf.d]# cat mod_jk.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel notice
JkMount /* TomcatA
JkMount /jkstatus/ stat-LinuxEA
[root@nginx-proxy2 conf.d]#

Checking again, the log is normal:

[root@nginx-proxy2 conf.d]# tail /var/log/httpd/mod_jk.log
[Thu Feb 18 06:17:19.447 2016] [7346:139943077504992] [debug] jk_shm_close::jk_shm.c (700): Closed shared memory /etc/httpd/logs/jk-runtime-status.7337 childs=8
[Thu Feb 18 06:17:19.448 2016] [7345:139943077504992] [debug] jk_shm_close::jk_shm.c (700): Closed shared memory /etc/httpd/logs/jk-runtime-status.7337 childs=7
[Thu Feb 18 06:17:19.449 2016] [7344:139943077504992] [debug] jk_shm_close::jk_shm.c (700): Closed shared memory /etc/httpd/logs/jk-runtime-status.7337 childs=6
[Thu Feb 18 06:17:19.451 2016] [7343:139943077504992] [debug] jk_shm_close::jk_shm.c (700): Closed shared
[root@nginx-proxy2 conf.d]#

If httpd was compiled from source, point --with-apxs at the corresponding location:

[root@NFS-WEB1 native]# ./configure --with-apxs=/usr/local/apache/bin/apxs
[root@NFS-WEB1 native]# make && make install

Explanation. For apache to use the mod_jk connector, the module must be loaded at startup. To keep the mod_jk-related directives manageable, a dedicated configuration file /etc/httpd/extra/httpd-jk.conf holds them:

# Load the mod_jk
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/extra/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /* TomcatA
JkMount /status/ stat1

Besides loading the module with LoadModule, mod_jk needs a few more directives in the apache configuration to define how it works. JkWorkersFile points at the file that defines the workers and their properties, JkLogFile sets the mod_jk log file, JkLogLevel sets the log level (info, error, debug), and JkRequestLogFormat can customise the log format. JkMount (format: JkMount <URL to match> <Tomcat worker name>) controls which URLs map to which Tomcat workers.

To make apache read the /etc/httpd/extra/httpd-jk.conf configuration, edit /etc/httpd/httpd.conf and add this line:

Include /etc/httpd/extra/httpd-jk.conf

From the apache proxy's point of view, the engine of every backend Tomcat instance can be treated as a worker, and each worker's address, connector port and so on must be declared on the apache side so that apache can identify and use it. By convention this file is called workers.properties, and its path is the one given to JkWorkersFile; when apache starts, mod_jk scans the file to load each worker's configuration. Here it is /etc/httpd/extra/workers.properties.

A workers.properties file consists of two kinds of entries: the list of worker names mod_jk may connect to, and the property settings of each worker. They follow this syntax:

worker.list = < a comma separated list of worker names >
worker.<worker name>.<property> = <property value>

The worker.list directive may be given more than once, and the worker name is the value of the jvmRoute parameter of the Tomcat engine, for example:

worker.TomcatA.host=172.16.100.1

Workers come in several types depending on how they operate; the type is a property that must be defined for every worker as worker.<worker name>.type. Common types:

ajp13: the worker is a running Tomcat instance.
lb: load balancing, a worker dedicated to load-balancing scenarios; it does not handle user requests itself but dispatches them to workers of type ajp13.
status: a special worker that displays the working state of the real workers in a distributed setup; it handles no requests and is not tied to any real worker instance. See the configuration later for a concrete example.

Other common worker properties:

host: the host where the Tomcat 7 worker instance runs;
port: the port of the AJP 1.3 connector of the Tomcat 7 instance;
connection_pool_minsize: the minimum number of connections kept in the connection pool; defaults to pool_size/2;
connection_pool_timeout: timeout for connections in the pool;
mount: the context paths served by this worker, space-separated if there are several; this property can be replaced by the JkMount directive;
retries: number of retries when an error occurs;
socket_timeout: how long mod_jk waits for a worker to respond; default 0, i.e. wait forever;
socket_keepalive: whether keep-alive is enabled, 1 to enable, 0 to disable;
lbfactor: the worker's weight, used in load-balancing scenarios.

In load-balancing mode there are additional, dedicated properties:

balance_workers: the list of worker names taking part in load balancing; note that the names listed here must not be defined in any worker.list property, while the load-balancing worker itself must appear in worker.list. See the definition later for a concrete example.
method: R, T or B; default R, scheduling by number of requests; T schedules by the amount of traffic already sent to each worker; B schedules by actual busyness.
sticky_session: once a request has been dispatched to a worker, all subsequent requests from the same source go to that worker, binding the user session to it. Defaults to 1, i.e. enabled. If the backend workers replicate sessions among themselves, this can be set to 0.

Following the earlier description, /etc/httpd/extra/workers.properties defines a worker named TomcatA and sets a few of its properties:

worker.list=TomcatA,stat1
worker.TomcatA.port=8009
worker.TomcatA.host=172.16.100.1
worker.TomcatA.type=ajp13
worker.TomcatA.lbfactor=1
worker.stat1.type = status

At this point the configuration that lets apache talk to the backend worker named TomcatA through mod_jk is complete; restart the httpd service for it to take effect.
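Not in the original article: a couple of hedged checks after restarting httpd, assuming the conf.d layout used above on the rpm-installed httpd.

httpd -M 2>/dev/null | grep jk_module                                   # mod_jk should be listed as loaded
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/jkstatus/     # the status worker page should answer 200
tail -n 5 /var/log/httpd/mod_jk.log                                     # no new errors after the reload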
3,187 reads · 0 comments · 0 likes
2016-02-18
LAMT: scheduling with mod_proxy
Using a single nginx to schedule requests to tomcat directly looks like this:

http {
    upstream tomcat_server {
        server 10.0.0.20:8080;
        server 10.0.0.30:8080;
        # server 10.0.0.100:8080 backup;
    }
    server {
        location ~* \.(jsp|do)$ {
            proxy_pass http://tomcat_server;
        }
    }
}

Usually tomcat does not talk to end users directly; an nginx proxy (or, here, apache) sits in front, as in the diagram. LAMT: configuring apache to connect to Tomcat through mod_proxy.

To connect to Tomcat instances with mod_proxy, apache must have loaded mod_proxy, mod_proxy_http, mod_proxy_ajp and proxy_balancer_module (the last is used for Tomcat clustering):

# /usr/local/apache/bin/httpd -D DUMP_MODULES | grep proxy
proxy_module (shared)
proxy_connect_module (shared)
proxy_ftp_module (shared)
proxy_http_module (shared)
proxy_fcgi_module (shared)
proxy_scgi_module (shared)
proxy_ajp_module (shared)
proxy_balancer_module (shared)
proxy_express_module (shared)

With an rpm-installed httpd:

[root@nginx-proxy2 ~]# rpm -qa httpd
httpd-2.2.15-39.el6.centos.x86_64

The modules that need to be enabled:

[root@nginx-proxy2 ~]# ls /usr/lib64/httpd/modules/
proxy_module proxy_http_module proxy_ajp_module proxy_balancer_module
[root@nginx-proxy2 ~]# httpd -M
proxy_http_module (shared)
proxy_ajp_module (shared)
proxy_module (shared)
proxy_balancer_module (shared)

Compiling from source:

tar xf httpd-2.4.2.tar.gz
cd httpd-2.4.2
./configure \
 --prefix=/usr/local/apache \
 --sysconfdir=/etc/httpd \
 --enable-so --enable-ssl \
 --enable-cgi \
 --enable-rewrite \
 --with-zlib \
 --with-pcre \
 --with-apr=/usr/local/apr \
 --with-apr-util=/usr/local/apr-util \
 --enable-proxy --enable-proxy-http \
 --enable-proxy-ajp
make && make install

With a source build, httpd must be started manually; it is not started by default.

2. Add the following to the global section of httpd.conf or to a virtual host (ProxyRequests Off disables forward proxying):

ProxyVia Off
ProxyRequests Off
ProxyPreserveHost Off
<Proxy *>
  Require all granted
</Proxy>
ProxyPass / ajp://172.16.100.1:8009/
ProxyPassReverse / ajp://172.16.100.1:8009/
<Location / >
  Require all granted
</Location>

About the apache directives used above:

ProxyPreserveHost {On|Off}: when enabled, the proxy forwards the Host: header from the client request to the backend server instead of using the server address given in ProxyPass. Turn this on if the reverse proxy should support name-based virtual hosts; otherwise it is not needed.

ProxyVia {On|Off|Full|Block}: controls the use of the Via: header, mainly to steer proxied requests in a chain of proxies. The default is Off, i.e. disabled; On adds a Via: line to every request and response; Full additionally appends the current apache version to each Via: line; Block strips the Via: headers from proxied requests.

ProxyRequests {On|Off}: whether apache's forward-proxy function is enabled; the mod_proxy_http module must be loaded to proxy the http protocol. If ProxyPass is configured, ProxyRequests must be set to Off.

ProxyPass [path] !|url [key=value key=value ...]: maps a URL on the backend server to a virtual path on this server as the service path; path is the local virtual path, url is the URL on the backend server. ProxyRequests must be Off when this directive is used. Note that if path ends with "/", the url must also end with "/", and vice versa. Since httpd 2.1, mod_proxy also supports connection pooling to the backend: connections are created on demand and kept in a pool for further use. The pool size and other settings can be tuned with key=value pairs on ProxyPass. Common keys:

min: the minimum capacity of the connection pool; this is unrelated to the actual number of connections and only sets the minimum space initialised for the pool.
max: the maximum capacity of the connection pool; each MPM has its own pool, so the value depends on the MPM: with prefork it is always 1, otherwise it depends on the ThreadsPerChild directive.
loadfactor: in a load-balancing cluster, the weight of the backend server, 1-100.
retry: how many seconds apache waits before retrying after the backend returned an error response.

If the Proxy section starts with balancer://, i.e. it is used for a load-balancing cluster, some extra parameters are accepted:

lbmethod: the scheduling method apache uses; the default is byrequests, weighted scheduling by request count; bytraffic does weighted scheduling by traffic; bybusyness schedules by the current load of each backend.
maxattempts: how many failover attempts are made before giving up a request; defaults to 1 and should not exceed the total number of nodes.
nofailover: On or Off; set to On when a backend failure would break the user's session, i.e. when the backends do not replicate sessions.
stickysession: the name of the scheduler's sticky session; depending on the web application language it is JSESSIONID or PHPSESSIONID.

These settings can be given on the balancer:// definition or on ProxyPass, or set directly with the ProxySet directive, for example:

<Proxy balancer://hotcluster>
  BalancerMember http://www1.linuxea.com:8080 loadfactor=1
  BalancerMember http://www2.linuxea.com:8080 loadfactor=2
  ProxySet lbmethod=bytraffic
</Proxy>

ProxyPassReverse: makes apache rewrite the URLs in the Location, Content-Location and URI headers of HTTP redirect responses; in a reverse-proxy setup this directive is required so that redirects do not bypass the proxy.

Example:

[root@nginx-proxy2 conf.d]# pwd
/etc/httpd/conf.d
[root@nginx-proxy2 conf.d]# cat mod_proxy.conf
ProxyVia on
ProxyRequests off
ProxyPreserveHost on
ProxyPass / http://10.0.0.53:8080/
ProxyPassReverse / http://10.0.0.53:8080/
<Location />
  Order Allow,Deny
  Allow from all
</Location>
[root@nginx-proxy2 conf.d]#

The big advantage of apache here is that it can forward to the backend over ajp instead of http:

[root@nginx-proxy2 conf.d]# pwd
/etc/httpd/conf.d
[root@nginx-proxy2 conf.d]# cat mod_proxy.conf
ProxyVia on
ProxyRequests off
ProxyPreserveHost on
ProxyPass / ajp://10.0.0.53:8009/
ProxyPassReverse / ajp://10.0.0.53:8009/
<Location />
  Order Allow,Deny
  Allow from all
</Location>
[root@nginx-proxy2 conf.d]#

Defining admin-gui access in tomcat-users.xml:

[root@nginx-proxy2 conf.d]# vim tomcat-users.xml
<role rolename="manager-gui"/>
<role rolename="admin-gui"/>
<user username="tomcat" password="tomcat" roles="tomcat,manager-gui,admin-gui"/>
<!--
<user username="both" password="tomcat" roles="tomcat,role1"/>
<user username="role1" password="tomcat" roles="role1"/>
-->
</tomcat-users>
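Not from the original post: a hedged way to confirm the reverse proxy is really forwarding over AJP, assuming the ajp:// configuration above and Tomcat's AJP connector reachable on 10.0.0.53:8009.

httpd -t                                        # syntax check before reloading
curl -sI http://127.0.0.1/ | head -n 5          # the response headers should come from the proxied Tomcat application
ss -tn state established '( dport = :8009 )'    # established AJP connections from the proxy to 10.0.0.53:8009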
3,553 reads · 1 comment · 0 likes