Found 3 matching posts.
2018-10-13
linuxea: LVS-DR layer-4 proxying
We want to put the backend services behind LVS and forward them to nginx, with nginx acting as both a layer-4 and a layer-7 proxy. Since the DR model is used here, LVS itself can only do layer-4 forwarding. Services such as tomcat and mq can be proxied the same way; the goal is to reduce coupling by splitting services out and exposing each one as a VIP plus a port.

I prepared four machines plus one redis host for testing:

```
lvs          10.10.240.144 and 10.10.240.143
nginx-proxy  10.10.240.113 and 10.10.240.114
redis        10.10.240.145
VIP          10.10.240.188
```

Install lvs + keepalived

```
[root@linuxea-vm-Node_10_10_240_144 ~]# yum install keepalived ipvsadm -y
```

Edit the keepalived configuration file /etc/keepalived/keepalived.conf: add the VIP 10.10.240.188, as well as the IPs and ports of the backend proxy nodes.

```
[root@linuxea-vm-Node_10_10_240_144 /etc/keepalived]$ cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance platformtransfer {
    state MASTER
    interface eth0
    virtual_router_id 63
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.240.188/16 brd 10.10.255.255 dev eth0 label eth0:trans1
    }
}

virtual_server 10.10.240.188 880 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    nopreempt
    garp_master_delay 10

    real_server 10.10.240.114 880 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 880
        }
    }
    real_server 10.10.240.113 880 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 880
        }
    }
}

virtual_server 10.10.240.188 6379 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    nopreempt
    garp_master_delay 10

    real_server 10.10.240.114 6379 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6379
        }
    }
    real_server 10.10.240.113 6379 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6379
        }
    }
}
```

Add the LVS script /scripts/lvs.sh:

```
[root@linuxea-vm-Node_10_10_240_144 /etc/keepalived]$ cat /scripts/lvs.sh
#!/bin/bash
# Note: each line of the LVS_IP_INF variable below describes one service to
# configure; the fields are separated by @ and there are 7 of them. For this
# example: the LVS node forwards ports 880 and 6379 on VIP 10.10.240.188, in
# DR mode, to ports 880 and 6379 on nodes 10.10.240.114 and 10.10.240.113.
# [-g|i|m]: LVS type
#   -g: DR
#   -i: TUN
#   -m: NAT
# The default scheduling algorithm in this script is rr.
# Field 1: the interface the VIP is bound to; field 2: the VIP; field 3: the
# LVS real servers (several allowed, space separated); field 4: the real-server
# port; field 5: a comment describing the service; field 6: the port used on
# the LVS node; field 7: the LVS type.
LVS_IP_INF=`cat << EOF
eth0@10.10.240.188@10.10.240.114 10.10.240.113@880@test880@880@g
eth0@10.10.240.188@10.10.240.114 10.10.240.113@6379@redis6379@6379@g
EOF`

case "$1" in
start)
    /usr/sbin/ipvsadm -C
    echo "$LVS_IP_INF" | while read line;do
        read NET_FACE VIP PROJ SPORT MODE < <(echo "$line" | awk -F"@" '{print $1 " " $2 " " $5 " " $6 " " $7}')
        RIPS=$(echo $line | awk -F"@" '{print $3}')
        PORTS=$(echo $line | awk -F"@" '{print $4}')
        echo "Adding LVS service ${PROJ}: interface --- ${NET_FACE} VIP --- ${VIP} real servers --- ${RIPS} proxied ports --- ${PORTS}"
        for port in ${PORTS};do
            echo "Adding virtual server entry: ipvsadm -At ${VIP}:${SPORT} -s rr"
            /usr/sbin/ipvsadm -At ${VIP}:${SPORT} -s rr
            for rip in ${RIPS};do
                echo "Adding real server entry: ipvsadm -at ${VIP}:${SPORT} -r ${rip}:${port} -${MODE}"
                /usr/sbin/ipvsadm -at ${VIP}:${SPORT} -r ${rip}:${port} -${MODE}
            done
            echo
        done
        echo
    done
    echo "Current LVS state:"
    /usr/sbin/ipvsadm -Ln
    ;;
stop)
    /usr/sbin/ipvsadm -C
    /usr/sbin/ipvsadm -Ln
    ;;
add)
    date
    echo "$LVS_IP_INF" | while read line;do
        read NET_FACE VIP PROJ SPORT MODE < <(echo "$line" | awk -F"@" '{print $1 " " $2 " " $5 " " $6 " " $7}')
        RIPS=$(echo $line | awk -F"@" '{print $3}')
        PORTS=$(echo $line | awk -F"@" '{print $4}')
        echo "Adding LVS service ${PROJ}: interface---${NET_FACE} VIP---${VIP} real servers---${RIPS} proxied ports---${PORTS}"
        for port in ${PORTS};do
            /usr/sbin/ipvsadm -Ln | grep -v '-' | grep ${VIP}:${SPORT} > /dev/null
            [ $? -eq 0 ] && echo "${VIP}:${SPORT} already exists" && continue
            echo "Adding virtual server entry: ipvsadm -At ${VIP}:${SPORT} -s rr"
            /usr/sbin/ipvsadm -At ${VIP}:${SPORT} -s rr
            for rip in ${RIPS};do
                echo "Adding real server entry: ipvsadm -at ${VIP}:${SPORT} -r ${rip}:${port} -${MODE}"
                /usr/sbin/ipvsadm -at ${VIP}:${SPORT} -r ${rip}:${port} -${MODE}
            done
            echo
        done
        echo
    done
    echo "Current LVS state:"
    /usr/sbin/ipvsadm -Ln
    ;;
*)
    echo "Usage: $0 {start|stop|add}"
    ;;
esac
```

Then make it executable:

```
chmod +x /scripts/lvs.sh
```

Set up systemd-style unit files /usr/lib/systemd/system/addvip.service and /usr/lib/systemd/system/keepalived.service:

```
cat /usr/lib/systemd/system/addvip.service
[Unit]
Description=Add lvs vip
After=keepalived.service multi-user.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/scripts/lvs.sh start
ExecStop=/scripts/lvs.sh stop

[Install]
WantedBy=multi-user.target

cat /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target

[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
EnvironmentFile=-/etc/sysconfig/keepalived
ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
```

Allow VRRP (IP protocol 112) between the two keepalived nodes, and open the proxied TCP ports. On 10.10.240.144:

```
-A INPUT -p 112 -s 10.10.240.143 -j ACCEPT
-A INPUT -p tcp -m tcp -m state --state NEW -m multiport --dports 880,6379 -m comment --comment "lvs" -j ACCEPT
```

and on 10.10.240.143:

```
-A INPUT -p 112 -s 10.10.240.144 -j ACCEPT
-A INPUT -p tcp -m tcp -m state --state NEW -m multiport --dports 880,6379 -m comment --comment "lvs" -j ACCEPT
```

Start the services:

```
systemctl start keepalived.service addvip.service
```

Once started, the VIP is bound on the keepalived master node:

```
[root@linuxea-vm-Node_10_10_240_144 /etc/keepalived]$ ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 88:88:2f:f0:48:19 brd ff:ff:ff:ff:ff:ff
    inet 10.10.240.144/8 brd 10.255.255.255 scope global dynamic eth0
       valid_lft 84948sec preferred_lft 84948sec
    inet 10.10.240.188/16 brd 172.25.255.255 scope global eth0:trans1
       valid_lft forever preferred_lft forever
[root@linuxea-vm-Node_10_10_240_144 /etc/keepalived]$ ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.10.240.188:880 rr
  -> 10.10.240.113:880            Route   1      0          0
  -> 10.10.240.114:880            Route   1      0          0
TCP  10.10.240.188:6379 rr
  -> 10.10.240.113:6379           Route   1      0          0
  -> 10.10.240.114:6379           Route   1      0          0
```
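Only the master's configuration is shown above. The second LVS node, 10.10.240.143, would presumably run the same file with only the VRRP role and priority changed; a minimal sketch of that difference (the priority value 88 is an assumption, anything below the master's 99 works):

```
# /etc/keepalived/keepalived.conf on 10.10.240.143 -- sketch, not from the
# original article: identical to the master's file except state and priority.
vrrp_instance platformtransfer {
    state BACKUP
    interface eth0
    virtual_router_id 63
    priority 88          # assumed; must be lower than the master's 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.240.188/16 brd 10.10.255.255 dev eth0 label eth0:trans1
    }
}
```

The virtual_server blocks stay identical on both nodes, so the ipvsadm rules are ready the moment the VIP fails over.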
Configure the layer-4 load balancer

Configure layer-4 nginx proxying for the backend redis node. To keep the test simple, only one redis instance is configured. Before configuring nginx, the VIP must be bound to the lo interface on the nginx layer, as follows:

```
[root@linuxea-vm-Node113 /etc/nginx/stream]# cat /scripts/lvs.sh
#!/bin/bash
#########################################################################
# File Name: /scripts/lvs.sh
# Author: LookBack
# Email: admin#dwhd.org
# Version:
# Created Time: 2018-10-13 Saturday 10:20:41
#########################################################################
VIP1=10.10.240.188

case "$1" in
start)
    ip addr add ${VIP1}/32 brd $VIP1 dev lo label lo:0
    ip route add $VIP1 dev lo:0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    ip addr del ${VIP1}/32 brd $VIP1 dev lo label lo:0
    ip route del $VIP1 dev lo:0
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    ;;
esac
```
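The article runs this script by hand, so the lo:0 binding would not survive a reboot. A sketch of a oneshot unit mirroring the addvip.service above could make it persistent (the unit name realserver-vip.service is hypothetical):

```
# /usr/lib/systemd/system/realserver-vip.service -- hypothetical name, sketch only:
# re-runs the real-server VIP script at boot, mirroring addvip.service.
[Unit]
Description=Bind LVS DR VIP on lo for the nginx real server
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/scripts/lvs.sh start
ExecStop=/scripts/lvs.sh stop

[Install]
WantedBy=multi-user.target
```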
Both nginx nodes need the VIP bound; afterwards it looks like this:

```
[root@linuxea-vm-Node113 /etc/nginx/stream]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.10.188.188/32 brd 10.10.188.188 scope global lo:0
       valid_lft forever preferred_lft forever
    inet 10.10.240.188/32 brd 10.10.240.188 scope global lo:0
       valid_lft forever preferred_lft forever
[root@linuxea-vm-Node114 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.10.188.188/32 brd 10.10.188.188 scope global lo:0
       valid_lft forever preferred_lft forever
    inet 10.10.240.188/32 brd 10.10.240.188 scope global lo:0
       valid_lft forever preferred_lft forever
```

Also allow the proxied ports from LVS to nginx through the firewall:

```
-A INPUT -s 10.10.240.0/24 -p tcp -m tcp -m state --state NEW -m multiport --dports 880,6379 -j ACCEPT
```

Configure nginx

```
[root@linuxea-vm-Node113 ~]# cat docker-compose-nginx_vts.yml
version: '3'
services:
  nginx_vts:
    image: marksugar/nginx:v1.14.0-vts
    container_name: nginx
    restart: always
    network_mode: "host"
    volumes:
      - /etc/nginx:/etc/nginx/
      - /data/:/data/
    environment:
      - NGINXCONF=on
      - NGINX_PORT=80
      - SERVER_NAME=www.linuxea.net
      - PHP_FPM_SERVER=127.0.0.1:9000
    ports:
      - "80"
```

Add one stream block to /etc/nginx/nginx.conf:

```
stream {
    include stream/*.conf;
}
```

Then add the upstream for the redis node:

```
[root@linuxea-vm-Node113 ~]# cat /etc/nginx/stream/redis.conf
upstream 6379 {
    server 10.10.240.145:6379;
}
server {
    listen 6379;
    proxy_pass 6379;
}
```

Configure redis

redis uses the docker configuration written earlier:

```
[root@linuxea-vm-Node_10_10_240_145 /data1/redis]$ cat docker-compose-redis-4-0-11.yml
version: '2'
services:
  redis:
    image: marksugar/redis:4.0.11
    container_name: redis
    restart: always
    network_mode: "host"
    privileged: true
    environment:
      - REDIS_CONF=on
      - REQUIREPASSWD=OTdmOWI4ZTM4NTY1M2M4OTZh
      - MASTERAUTHPAD=OTdmOWI4ZTM4NTY1M2M4OTZh
      - MAXCLIENTS_NUM=600
      - MAXMEMORY_SIZE=4096
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/redis:/etc/redis
      - /data/redis-data:/data/redis:Z
      - /data/logs:/data/logs
```

Test

From the redis host, access redis through the VIP and port proxied by LVS DR:

```
[root@linuxea-vm-Node_10_10_240_145 /data1/redis]$ redis-cli -h 10.10.240.188 -p 6379 -a OTdmOWI4ZTM4NTY1M2M4OTZh info |grep cpu
used_cpu_sys:805.85
used_cpu_user:747.25
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
```
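The single redis-cli call proves the path works, but not that both nginx nodes are rotated. A small check, a sketch assuming bash and coreutils timeout on some third host, is to open repeated TCP connections to the VIP and then read ipvsadm's connection counters on the LVS master:

```bash
#!/bin/bash
# rr-check.sh -- hypothetical test helper, not from the original article.
# Open repeated TCP connections to each proxied port on the VIP; the LVS
# round-robin scheduler should spread them over the two nginx real servers.
VIP=10.10.240.188
for port in 880 6379; do
    for i in $(seq 1 10); do
        # /dev/tcp/<host>/<port> is a bash builtin: the redirect succeeds
        # only if the port accepts a connection.
        timeout 1 bash -c "echo > /dev/tcp/${VIP}/${port}" && echo "${VIP}:${port} ok"
    done
done
# Afterwards, on the LVS master, the ActiveConn/InActConn columns of
#   ipvsadm -Ln --stats
# should be split roughly evenly between 10.10.240.113 and 10.10.240.114.
```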
2015-05-26
LVS load balancing: lvs_dr (notes 2)
lvs-dr: direct routing. The Director forwards requests without modifying the IP header; it forwards by encapsulating a new MAC header, where the destination MAC is the MAC address of the RS chosen by the scheduling method. The topology differs from the NAT type.

Architecture characteristics:

(1) The frontend router must deliver packets whose destination address is the VIP to the Director after ARP resolution. Solutions:
    Static binding: on the frontend router, statically map the VIP to the Director's MAC address;
    arptables: on each RS, use arptables rules to refuse to answer ARP broadcast requests for the VIP (see the sketch after this list);
    Kernel parameters: on each RS, tune kernel parameters and choose the address configuration accordingly, so the RS refuses to answer ARP broadcasts for the VIP.
(2) The RSs' RIPs may be private addresses, but they can also be public, in which case the RSs can be managed directly from hosts on the Internet.
(3) Request traffic must pass through the Director, but response traffic must not.
(4) All RIPs must be on the same physical network as the DIP.
(5) Port mapping is not supported.
(6) The RSs can run most operating systems.
(7) The RSs' gateway must never point to the Director.

Notes:

(1) Each RS responds to the Client directly, so every RS must carry the VIP; but only the VIP on the Director may communicate directly with the local router.
(2) The Director does not strip or modify the request's IP header; it schedules by encapsulating a new frame header (source MAC: the Director's; destination MAC: the chosen RS's).

Kernels 2.4.26 and 2.6.4 introduced two parameters, exposed under /proc/sys/net/ipv4/conf/INTERFACE:
    arp_announce: controls the ARP announcement level;
    arp_ignore: controls the level at which ARP requests or announcements are ignored.
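Before the kernel-parameter walkthrough below, here is what the arptables alternative listed in (1) could look like, as a sketch: chain names vary between arptables implementations (INPUT/OUTPUT here), and the IPs follow the example that follows.

```bash
# Sketch of the arptables alternative to arp_ignore/arp_announce; run on each
# RS. Assumes arptables is installed; VIP/RIP values match the example below.
VIP=172.16.100.202
RIP=172.16.249.157   # this RS's own address (assumed for illustration)
# Drop ARP requests asking who-has the VIP, so only the Director answers.
arptables -A INPUT -d $VIP -j DROP
# Rewrite the ARP source of outgoing traffic so the RS never announces the VIP.
arptables -A OUTPUT -s $VIP -j mangle --mangle-ip-s $RIP
```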
node3

Work under /proc/sys/net/ipv4/conf.

1. Set the parameters on both interfaces, all and lo:

```
[root@node3 conf]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@node3 conf]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@node3 conf]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@node3 conf]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@node3 conf]# sysctl -a | grep arp
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
```

2. Add the lo:0 address:

```
[root@node3 ~]# ifconfig lo:0 172.16.100.202 netmask 255.255.255.255 broadcast 172.16.100.202 up
[root@node3 ~]# ifconfig lo:0
lo:0      Link encap:Local Loopback
          inet addr:172.16.100.202  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
```

3. Add a route so that traffic for 172.16.100.202 must enter and leave via lo:0:

```
[root@node3 ~]# route add -host 172.16.100.202 dev lo:0
```

4. Start httpd and create a test page.

node2

1. Set the parameters on both interfaces, all and lo:

```
[root@node2 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@node2 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@node2 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@node2 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@node2 ~]# sysctl -a | grep arp
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
```

2. Add the lo:0 address:

```
[root@node2 ~]# ifconfig lo:0 172.16.100.202 netmask 255.255.255.255 broadcast 172.16.100.202 up
[root@node2 ~]# ifconfig lo:0
lo:0      Link encap:Local Loopback
          inet addr:172.16.100.202  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
```

3. Add the same host route:

```
[root@node2 ~]# route add -host 172.16.100.202 dev lo:0
```

4. Start httpd and create a test page.

node1 (the Director)

1. Add the eth1:0 address (visible only to itself; the broadcast address confines the broadcast domain):

```
[root@node1 ~]# ifconfig eth1:0 172.16.100.202 netmask 255.255.255.255 broadcast 172.16.100.202 up
[root@node1 ~]# ifconfig eth1:0
eth1:0    Link encap:Ethernet  HWaddr 00:0C:29:78:10:11
          inet addr:172.16.100.202  Bcast:172.16.100.202  Mask:255.255.255.255
```

2. Clear the ipvsadm rules:

```
[root@node1 ~]# ipvsadm -C
```

3. Clear the iptables rules:

```
[root@node1 ~]# iptables -F
```

4. Add a local route for the VIP:

```
[root@node1 ~]# route add -host 172.16.100.202 dev eth1:0
```

5. Make sure IP forwarding is on:

```
[root@node1 ~]# sysctl -a |grep ip_forward
net.ipv4.ip_forward = 1
```

6. Add the cluster service; -s selects the rr scheduling method:

```
[root@node1 ~]# ipvsadm -A -t 172.16.100.202:80 -s rr
```

7. Add real server 172.16.249.157 for 172.16.100.202:80, type -g (DR), weight 5:

```
[root@node1 ~]# ipvsadm -a -t 172.16.100.202:80 -r 172.16.249.157 -g -w 5
```

8. Add real server 172.16.249.186 for 172.16.100.202:80, type -g (DR), weight 1:

```
[root@node1 ~]# ipvsadm -a -t 172.16.100.202:80 -r 172.16.249.186 -g -w 1
```

In fact, under the rr scheduler the weights have no effect at all:

```
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.100.202:80 rr
  -> 172.16.249.157:80            Route   5      0          0
  -> 172.16.249.186:80            Route   1      0          0
```

Client: on the client machine, arp -a shows that both addresses resolve to the same MAC, the Director's:

```
172.16.100.202   00-0c-29-78-10-11
172.16.249.117   00-0c-29-78-10-11
```

Test in a browser.

Building the same setup across different networks only requires making the networks reachable first; the rest is just as simple:

```
[root@node3 ~]# ifconfig eth2 192.168.0.101/24 up
[root@node3 ~]# route add default gw 192.168.0.254
[root@node2 ~]# ifconfig eth2 192.168.0.102/24 up
[root@node2 ~]# route add default gw 192.168.0.254
[root@node1 ~]# ifconfig eth1 192.168.0.100/24 up
[root@node1 ~]# ipvsadm -A -t 172.16.100.202:80 -s rr
[root@node1 ~]# ipvsadm -a -t 172.16.100.202:80 -r 192.168.0.101 -g -w 1
[root@node1 ~]# ipvsadm -a -t 172.16.100.202:80 -r 192.168.0.102 -g -w 3
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.100.202:80 rr
  -> 192.168.0.101:80             Route   1      0          0
  -> 192.168.0.102:80             Route   3      0          0
```
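One way to watch DR's MAC-only rewriting in action, as a sketch not part of the original notes: capture frames on an RS and compare the link-level and IP headers (the interface name eth2 is an assumption).

```bash
# Sketch: on node3 (an RS), requests should arrive with the VIP as the
# destination IP but with this RS's MAC as the destination MAC -- the
# Director rewrote only the frame header, never the IP header.
tcpdump -e -n -i eth2 host 172.16.100.202 and tcp port 80
# -e prints link-level (MAC) headers: the source MAC is the Director's,
# while the IP header still reads client -> 172.16.100.202 untouched.
```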
2015-05-25
lvs-nat (1)
lvs-nat: similar to DNAT, but with support for multiple targets; it forwards by rewriting the request's destination address to the RIP of the RS picked by the scheduling algorithm.

Architecture characteristics:

(1) The RSs should use private addresses, i.e. the RIPs should be private; every RS's gateway must point to the DIP.
(2) Both request and response traffic pass through the Director; under heavy load the Director easily becomes the system bottleneck.
(3) Port mapping is supported.
(4) The RSs may run any type of OS.
(5) The RSs' RIPs must be on the same network as the Director's DIP.

Configure rs1, RIP 192.168.131.2:

1. Install httpd.
2. Create a test page:

```
vim /var/www/html/index.html
node2.linuxea.com
```

3. Add the default route:

```
route add default gw 192.168.131.1
```

Configure rs2, RIP 192.168.131.3:

1. Install httpd.
2. Create a test page:

```
vim /var/www/html/index.html
node3.linuxea.com
```

3. Add the default route:

```
route add default gw 192.168.131.1
```

Configure the Director, VIP 172.16.249.117, DIP 192.168.131.1:

1. Check whether the kernel supports ipvsadm (it is already compiled into the kernel):

```
[root@node1 ~]# grep -i "ipvs" -A 5 /boot/config-2.6.32-504.el6.x86_64
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
--
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
--
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_PE_SIP=m
```

2. Install ipvsadm:

```
[root@node1 ~]# yum -y install ipvsadm
```

3. Check whether any rules exist (note the flag is lowercase -n):

```
[root@node1 ~]# ipvsadm -L -N
-N: unknown option
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
```

4. Define the ipvsadm cluster. Requests to 172.16.249.117:80 are all scheduled with -s rr:

```
[root@node1 ~]# ipvsadm -A -t 172.16.249.117:80 -s rr
```

Traffic for 172.16.249.117:80 is dispatched with -r to 192.168.131.2, type NAT (-m), weight 1:

```
[root@node1 ~]# ipvsadm -a -t 172.16.249.117:80 -r 192.168.131.2 -m -w 1
```

and to 192.168.131.3, type NAT (-m), weight 3:

```
[root@node1 ~]# ipvsadm -a -t 172.16.249.117:80 -r 192.168.131.3 -m -w 3
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.249.117:80 rr
  -> 192.168.131.2:80             Masq    1      0          0
  -> 192.168.131.3:80             Masq    3      0          0
```

5. Enable kernel forwarding:

```
[root@node1 ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@node1 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
```

Test

Switch the scheduler to wrr and watch the per-RS statistics:

```
[root@localhost ~]# ipvsadm -E -t 172.16.249.117:80 -s wrr
[root@localhost ~]# ipvsadm -Ln --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  172.16.249.117:80                 199      987      969   111348    91785
  -> 192.168.131.2:80                   78      387      381    43326    38539
  -> 192.168.131.3:80                  121      600      588    68022    53246
```
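A quick way to observe the 1:3 wrr split from a client, as a sketch not part of the original note: it relies on the test pages returning node2.linuxea.com and node3.linuxea.com as set up above.

```bash
#!/bin/bash
# Hypothetical helper: fire 20 requests at the VIP and tally which RS
# answered, based on the hostname each RS's test page returns.
VIP=172.16.249.117
for i in $(seq 1 20); do
    curl -s http://$VIP/
done | sort | uniq -c
# With -s wrr and weights 1:3, roughly 5 responses should read
# node2.linuxea.com and roughly 15 node3.linuxea.com.
```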