Found 675 posts by marksugar
2023-02-06
linuxea: collecting pod crash and OOM event logs with robusta
Robusta can do far more than what this post covers: it monitors Kubernetes, provides observability, integrates with Prometheus for secondary alert processing and automatic remediation, and it also keeps an event timeline. I previously used Alibaba's kube-eventer, which only forwards events, so all it can do is deliver event-triggered notifications. If robusta stopped there, there would not be much reason to switch. It offers another very useful feature: event alerting. When robusta detects a matching event, it sends the configured pod state together with the most recent log lines to Slack. That is the main reason this post exists.

Prerequisites: the Python version must be >= 3.7, so we upgrade Python first:

```
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tar.xz
yum install gcc zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libffi-devel -y
yum install libffi-devel -y
yum install zlib* -y
tar xf Python-3.9.16.tar.xz
cd Python-3.9.16
./configure --with-ssl --prefix=/usr/local/python3
make
make install
rm -rf /usr/bin/python3 /usr/bin/pip3
ln -s /usr/local/python3/bin/python3 /usr/bin/python3
ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3
```

Configure a domestic pip mirror:

```
mkdir -p ~/.pip/
cat > ~/.pip/pip.conf << EOF
[global]
trusted-host = mirrors.aliyun.com
index-url = http://mirrors.aliyun.com/pypi/simple
EOF
```

robusta.dev

Following the official documentation, install the CLI and generate a configuration:

```
pip3 install -U robusta-cli --no-cache
robusta gen-config
```

Because of network problems I configure it through the dockerized wrapper instead:

```
curl -fsSL -o robusta https://docs.robusta.dev/master/_static/robusta
chmod +x robusta
./robusta gen-config
```

Before starting, be sure to pull the image I mirrored to a domestic registry and retag it with the upstream name the wrapper expects:

```
docker pull registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-cli:latest
docker tag registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-cli:latest us-central1-docker.pkg.dev/genuine-flight-317411/devel/robusta-cli:latest
```

Run gen-config. Configuring the Slack integration is highly recommended; the token fetch timed out a couple of times for me before succeeding:

```
[root@master1 opt]# ./robusta gen-config
Robusta reports its findings to external destinations (we call them "sinks").
We'll define some of them now.

Configure Slack integration? This is HIGHLY recommended. [Y/n]: y
If your browser does not automatically launch, open the below url:
https://api.robusta.dev/integrations/slack?id=64a3ee7c-5691-466f-80da-85e8ece80359
======================================================================
Error getting slack token
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
======================================================================
======================================================================
Error getting slack token
HTTPSConnectionPool(host='api.robusta.dev', port=443): Max retries exceeded with url: /integrations/slack/get-token?id=64a3ee7c-5691-466f-80da-85e8ece80359 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f50b1f18cd0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
======================================================================
You've just connected Robusta to the Slack of: crow as a cock
Which slack channel should I send notifications to?
```

Open the URL shown in the prompt in a browser and tick "Allow". At this point the robusta app already shows up in Slack. Continue, and after choosing a channel at the prompt (devops here), that channel receives a message. A complete run looks like this:

```
[root@master1 opt]# ./robusta gen-config
Robusta reports its findings to external destinations (we call them "sinks").
We'll define some of them now.

Configure Slack integration? This is HIGHLY recommended. [Y/n]: y
If your browser does not automatically launch, open the below url:
https://api.robusta.dev/integrations/slack?id=d1fcbb13-5174-4027-a176-a3dcab10c27a
======================================================================
Error getting slack token
HTTPSConnectionPool(host='api.robusta.dev', port=443): Max retries exceeded with url: /integrations/slack/get-token?id=d1fcbb13-5174-4027-a176-a3dcab10c27a (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f0ec508eee0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
======================================================================
You've just connected Robusta to the Slack of: crow as a cock
Which slack channel should I send notifications to? devops
Configure MsTeams integration? [y/N]: n
Configure Robusta UI sink? This is HIGHLY recommended. [Y/n]: y
Enter your Gmail/Google address. This will be used to login: user@gmail.com
Choose your account name (e.g your organization name): marksugar
Successfully registered.
Robusta can use Prometheus as an alert source. If you haven't installed it yet, Robusta can install a pre-configured Prometheus. Would you like to do so? [y/N]: y
Please read and approve our End User License Agreement: https://api.robusta.dev/eula.html
Do you accept our End User License Agreement? [y/N]: y
Last question! Would you like to help us improve Robusta by sending exception reports? [y/N]: n
Saved configuration to ./generated_values.yaml - save this file for future use!
Finish installing with Helm (see the Robusta docs). Then login to Robusta UI at https://platform.robusta.dev
By the way, we'll send you some messages later to get feedback. (We don't store your API key, so we scheduled future messages using Slack'sAPI)
```

Once this completes, a generated_values.yaml has been created:

```
globalConfig:
  signing_key: 92a8195-a3fa879b3f88
  account_id: 79efaf9c433294
sinksConfig:
- slack_sink:
    name: main_slack_sink
    slack_channel: devops
    api_key: xoxb-4715825756487-4749501ZZylPy1f
- robusta_sink:
    name: robusta_ui_sink
    token: eyJhY2NvjIn0=
enablePrometheusStack: true
enablePlatformPlaybooks: true
runner:
  sendAdditionalTelemetry: false
rsa:
  private: LS0tLS1CRUdJTiBRCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  public: LS0tLS1CRUdJTiBQTElDIEtFWS0tLS0tCg==
```

Helm comes next, using the file created above, but first we adjust it. There are many kinds of triggers; see example-triggers, java-troubleshooting, event-enrichment, miscellaneous and kubernetes-triggers. We can filter on a particular group of pods or on a namespace to monitor specific information. I picked out a few for testing and added them to generated_values.yaml, as follows:

```
globalConfig:
  signing_key: 92a8195-a3fa879b3f88
  account_id: 79efaf9c433294
sinksConfig:
- slack_sink:
    name: main_slack_sink
    slack_channel: devops
    api_key: xoxb-4715825756487-4749501ZZylPy1f
- robusta_sink:
    name: robusta_ui_sink
    token: eyJhY2NvjIn0=
enablePrometheusStack: false
enablePlatformPlaybooks: true
rsa:
  private: LS0tLS1CRUdJTiBRCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  public: LS0tLS1CRUdJTiBQTElDIEtFWS0tLS0tCg==
customPlaybooks:
- triggers:
  - on_deployment_update: {}
  actions:
  - resource_babysitter:
      omitted_fields: []
      fields_to_monitor: ["spec.replicas"]
- triggers:
  - on_pod_crash_loop:
      restart_reason: "CrashLoopBackOff"
      restart_count: 1
      rate_limit: 3600
  actions:
  - report_crash_loop: {}
- triggers:
  - on_pod_oom_killed:
      rate_limit: 900
      exclude:
      - name: "oomkilled-pod"
        namespace: "default"
  actions:
  - pod_graph_enricher:
      resource_type: Memory
      display_limits: true
- triggers:
  - on_container_oom_killed:
      rate_limit: 900
      exclude:
      - name: "oomkilled-container"
        namespace: "default"
  actions:
  - oomkilled_container_graph_enricher:
      resource_type: Memory
- triggers:
  - on_job_failure:
      namespace_prefix: robusta
  actions:
  - create_finding:
      title: "Job $name on namespace $namespace failed"
      aggregation_key: "Job Failure"
  - job_events_enricher: { }
runner:
  sendAdditionalTelemetry: false
  image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-runner:0.10.10
  imagePullPolicy: IfNotPresent
kubewatch:
  image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/kubewatch:v2.0
  imagePullPolicy: IfNotPresent
```

Now install with helm:

```
helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
helm upgrade --install robusta --namespace robusta --create-namespace robusta/robusta -f ./generated_values.yaml \
  --set clusterName=test

# the release can also be debugged with a dry run:
helm upgrade --install robusta --namespace robusta robusta/robusta -f ./generated_values.yaml --set clusterName=test --dry-run
```

The output looks like this:

```
[root@master1 opt]# helm upgrade --install robusta --namespace robusta --create-namespace robusta/robusta -f ./generated_values.yaml \
> --set clusterName=test
Release "robusta" does not exist. Installing it now.
NAME: robusta
LAST DEPLOYED: Thu Feb  2 15:58:32 2023
NAMESPACE: robusta
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Robusta 0.10.10

As an open source project, we collect general usage statistics.
This data is extremely limited and contains only general metadata to help us understand usage patterns.
If you are willing to share additional data, please do so! It really help us improve Robusta.
You can set sendAdditionalTelemetry: true as a Helm value to send exception reports and additional data.
This is disabled by default.
To opt-out of telemetry entirely, set a ENABLE_TELEMETRY=false environment variable on the robusta-runner deployment.

Visit the web UI at: https://platform.robusta.dev/
```

Wait for the pods to become ready:

```
[root@master1 opt]# kubectl -n robusta get pod -w
NAME                                 READY   STATUS              RESTARTS   AGE
robusta-forwarder-78964b4455-vnt77   1/1     Running             0          2m55s
robusta-runner-758cf9c986-87l4x      0/1     ContainerCreating   0          2m55s
robusta-runner-758cf9c986-87l4x      1/1     Running             0          7m6s
```

From now on, if a pod in the cluster crashes with an abnormal status, its recent logs are sent to Slack before the pod is deleted. When the message arrives in Slack, click "expand inline" to view the details.
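To check end to end that the report_crash_loop playbook fires, a throwaway pod that exits immediately is enough: after its first restart it enters CrashLoopBackOff and, with the restart_count: 1 trigger above, a report should land in the devops channel. A minimal sketch, where the pod name, namespace and image are arbitrary choices and not from the original post:

```
apiVersion: v1
kind: Pod
metadata:
  name: crashloop-test        # hypothetical test pod
  namespace: default
spec:
  restartPolicy: Always
  containers:
  - name: crasher
    image: busybox
    # exit non-zero so the kubelet keeps restarting the container,
    # driving it into CrashLoopBackOff and matching on_pod_crash_loop
    command: ["sh", "-c", "echo simulated failure; exit 1"]
```

Afterwards the pod can be removed again with kubectl delete pod crashloop-test.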
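Separately, sinksConfig is a list, so findings can in principle be fanned out to more than one destination. Below is a sketch of a second Slack sink next to the existing one; the channel devops-crashes and the second api_key are hypothetical, and this assumes the chart accepts multiple slack_sink entries, so check the robusta sink documentation before relying on it:

```
sinksConfig:
- slack_sink:
    name: main_slack_sink
    slack_channel: devops
    api_key: xoxb-4715825756487-4749501ZZylPy1f
- slack_sink:
    name: crash_slack_sink        # hypothetical second sink
    slack_channel: devops-crashes # hypothetical channel
    api_key: xoxb-...             # token for the second Slack app
- robusta_sink:
    name: robusta_ui_sink
    token: eyJhY2NvjIn0=
```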
2023-02-06 · 74 reads · 0 comments · 0 likes
2023-02-04
linuxea: istio bookinfo configuration demo (11)
bookinfo

The istio samples include a bookinfo example. The application imitates a category in an online bookstore and shows information about a book: a description, the book's details (ISBN, number of pages and so on), and a few reviews. The Bookinfo application is split into four separate microservices:

- productpage: calls the details and reviews microservices to render the page.
- details: contains the book information.
- reviews: contains the book reviews; it also calls the ratings microservice.
- ratings: contains ranking information derived from book reviews.

The reviews microservice has 3 versions:

- v1 does not call the ratings service.
- v2 calls the ratings service and shows the rating as 1 to 5 black stars.
- v3 calls the ratings service and shows the rating as 1 to 5 red stars.

The topology follows from that. The Bookinfo microservices are written in different languages. They have no dependency on Istio, but together they make a representative service-mesh example: multiple services, multiple languages (behind a unified API), and a reviews service with multiple versions.

Installation

After unpacking istio, the bookinfo material is under samples/bookinfo; see getting-started in the official docs:

```
[root@linuxea_48 /usr/local/istio-1.14.1]# ls samples/bookinfo/ -ll
total 20
-rwxr-xr-x 1 root root 3869 Jun  8 10:11 build_push_update_images.sh
drwxr-xr-x 2 root root 4096 Jun  8 10:11 networking
drwxr-xr-x 3 root root   18 Jun  8 10:11 platform
drwxr-xr-x 2 root root   46 Jun  8 10:11 policy
-rw-r--r-- 1 root root 3539 Jun  8 10:11 README.md
drwxr-xr-x 8 root root  123 Jun  8 10:11 src
-rw-r--r-- 1 root root 6329 Jun  8 10:11 swagger.yaml
```

Then apply the platform/kube/bookinfo.yaml file:

```
[root@linuxea_48 /usr/local/istio-1.14.1]# kubectl -n java-demo apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
```

Error handling

One of the pods (the IBM JDK based container) crashed with a bus error:

```
Unhandled exception
Type=Bus error vmState=0x00000000
J9Generic_Signal_Number=00000028 Signal_Number=00000007 Error_Value=00000000 Signal_Code=00000002
Handler1=00007F368FD0AD30 Handler2=00007F368F5F72F0 InaccessibleAddress=00002AAAAAC00000
RDI=00007F369017F7D0 RSI=0000000000000008 RAX=00007F369018CBB0 RBX=00007F369017F7D0
RCX=00007F369003A9D0 RDX=0000000000000000 R8=0000000000000000 R9=0000000000000000
R10=00007F36900008D0 R11=0000000000000000 R12=00007F369017F7D0 R13=00007F3679C00000
R14=0000000000000001 R15=0000000000000080
RIP=00007F368DA7395B GS=0000 FS=0000 RSP=00007F3694D1E4A0
EFlags=0000000000010202 CS=0033 RBP=00002AAAAAC00000 ERR=0000000000000006
TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=00002AAAAAC00000
xmm0 0000003000000020 (f: 32.000000, d: 1.018558e-312)
xmm1 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm2 ffffffff00000002 (f: 2.000000, d: -nan)
xmm3 40a9000000000000 (f: 0.000000, d: 3.200000e+03)
xmm4 dddddddd000a313d (f: 667965.000000, d: -1.456815e+144)
xmm5 0000000000000994 (f: 2452.000000, d: 1.211449e-320)
xmm6 00007f369451ac40 (f: 2488380416.000000, d: 6.910614e-310)
xmm7 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm8 dd006b6f6f68396a (f: 1869101440.000000, d: -9.776703e+139)
xmm9 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm10 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm11 0000000049d70a38 (f: 1238829568.000000, d: 6.120632e-315)
xmm12 000000004689a022 (f: 1183424512.000000, d: 5.846894e-315)
xmm13 0000000047ac082f (f: 1202456576.000000, d: 5.940925e-315)
xmm14 0000000048650dc0 (f: 1214582272.000000, d: 6.000833e-315)
xmm15 0000000046b73e38 (f: 1186414080.000000, d: 5.861665e-315)
Module=/opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so
Module_base_address=00007F368D812000
Target=2_90_20200901_454898 (Linux 3.10.0-693.el7.x86_64)
CPU=amd64 (32 logical CPUs) (0x1f703dd000 RAM)
----------- Stack Backtrace -----------
(0x00007F368DA7395B [libj9jit29.so+0x26195b])
(0x00007F368DA7429B [libj9jit29.so+0x26229b])
(0x00007F368D967C57 [libj9jit29.so+0x155c57])
J9VMDllMain+0xb44 (0x00007F368D955C34 [libj9jit29.so+0x143c34])
(0x00007F368FD1D041 [libj9vm29.so+0xa7041])
(0x00007F368FDB4070 [libj9vm29.so+0x13e070])
(0x00007F368FC87E94 [libj9vm29.so+0x11e94])
(0x00007F368FD2581F [libj9vm29.so+0xaf81f])
(0x00007F368F5F8053 [libj9prt29.so+0x1d053])
(0x00007F368FD1F9ED [libj9vm29.so+0xa99ed])
J9_CreateJavaVM+0x75 (0x00007F368FD15B75 [libj9vm29.so+0x9fb75])
(0x00007F36942F4305 [libjvm.so+0x12305])
JNI_CreateJavaVM+0xa82 (0x00007F36950C9B02 [libjvm.so+0xab02])
(0x00007F3695ADDA94 [libjli.so+0xfa94])
(0x00007F3695CF76DB [libpthread.so.0+0x76db])
clone+0x3f (0x00007F36955FAA3F [libc.so.6+0x121a3f])
---------------------------------------
JVMDUMP039I Processing dump event "gpf", detail "" at 2022/07/20 08:59:38 - please wait.
JVMDUMP032I JVM requested System dump using '/opt/ibm/wlp/output/defaultServer/core.20220720.085938.1.0001.dmp' in response to an event
JVMDUMP010I System dump written to /opt/ibm/wlp/output/defaultServer/core.20220720.085938.1.0001.dmp
JVMDUMP032I JVM requested Java dump using '/opt/ibm/wlp/output/defaultServer/javacore.20220720.085938.1.0002.txt' in response to an event
JVMDUMP012E Error in Java dump: /opt/ibm/wlp/output/defaultServer/javacore.20220720.085938.1.0002.txt
JVMDUMP032I JVM requested Snap dump using '/opt/ibm/wlp/output/defaultServer/Snap.20220720.085938.1.0003.trc' in response to an event
JVMDUMP010I Snap dump written to /opt/ibm/wlp/output/defaultServer/Snap.20220720.085938.1.0003.trc
JVMDUMP032I JVM requested JIT dump using '/opt/ibm/wlp/output/defaultServer/jitdump.20220720.085938.1.0004.dmp' in response to an event
JVMDUMP013I Processed dump event "gpf", detail "".
```

The workaround is to disable huge pages on the node (see issues 34510 and 13389):

```
echo 0 > /proc/sys/vm/nr_hugepages
```

With that configured, the pods come up:

```
(base) [root@k8s-01 bookinfo]# kubectl -n java-demo get pod
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-6d89cf9847-46c4z       2/2     Running   0          27m
productpage-v1-f44fc594c-fmrf4    2/2     Running   0          27m
ratings-v1-6c77b94555-twmls       2/2     Running   0          27m
reviews-v1-765697d479-tbprw       2/2     Running   0          6m30s
reviews-v2-86855c588b-sm6w2       2/2     Running   0          6m2s
reviews-v3-6ff967c97f-g6x8b       2/2     Running   0          5m55s
sleep-557747455f-46jf5            2/2     Running   0          5d
```

1. gateway

The gateway's hosts is set to *, the default, which matches everything, so the application can be reached by IP address. The entry point in the VirtualService is:

```
  http:
  - match:
    - uri:
        exact: /productpage
```

As long as the pods are up, north-south traffic is pulled into the mesh and the page can be reached at ip/productpage. The yaml:

```
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```

apply:

```
(base) [root@k8s-01 bookinfo]# kubectl -n java-demo apply -f networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
```

The page can now be opened in a browser. The reviews version keeps changing as you refresh: reviews-v1, reviews-v3, reviews-v2. Kiali shows a simple topology: requests enter through the ingress-gateway, reach productpage v1, then details v1, while the reviews traffic is split evenly across v1, v2 and v3; v2 and v3 additionally call the ratings service, as shown in the graph.

Mesh tests

With the installation done, we run a few tests such as request routing and fault injection.

1. Request routing

To start, the subset rules must be configured in the destination rules, as follows:

```
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v2-mysql
    labels:
      version: v2-mysql
  - name: v2-mysql-vm
    labels:
      version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: details
spec:
  host: details
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
```

apply:

```
> kubectl -n java-demo apply -f samples/bookinfo/networking/destination-rule-all.yaml
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created
```

Then, for non-logged-in users, send all traffic to the v1 versions; the expanded yaml is:

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        host: details
        subset: v1
---
```

Create it with:

```
> kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage created
virtualservice.networking.istio.io/reviews created
virtualservice.networking.istio.io/ratings created
virtualservice.networking.istio.io/details created
```

Now every request goes to v1. That rests on the three reviews subsets defined above:

```
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```

together with the VirtualService that pins reviews to v1:

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
```

Without this VirtualService entry for reviews, requests would keep switching between the three versions.

2. Routing by user identity

Now we want a particular logged-in user to be sent to a particular version: if the end-user header equals jason, forward to v2.

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```

On /productpage, log in as user jason. After logging in, the kiali graph changes accordingly.

3. Fault injection

Understanding fault injection takes a little chaos engineering background: in a cloud-native setup we want some resilience against partial, local failures, for example by letting clients retry or time out. Istio natively supports two kinds of fault injection to simulate this, injected delays (timeouts) and injected aborts, which the client may then retry.

Building on the two previous configurations:

```
$ kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$ kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
```

With this configuration the request flow is:

- productpage → reviews:v2 → ratings (only for user jason)
- productpage → reviews:v1 (for everyone else)

Injecting a delay fault

To test the resilience of the Bookinfo microservices, inject a 7s delay between the reviews:v2 and ratings microservices for user jason. The test uncovers a bug that was deliberately introduced into the Bookinfo application.

If the user is jason, inject a 7-second delay on 100% of the traffic and route it to ratings v1; everyone else is also routed to v1, the only difference being that jason's requests are delayed. The yaml:

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      delay:
        percentage:
          value: 100.0
        fixedDelay: 7s
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
```

apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
virtualservice.networking.istio.io/ratings configured
```

Open the page in a browser to test; it shows "Sorry, product reviews are currently unavailable for this book."

Injecting an abort fault

Another way to test microservice resilience is to introduce an HTTP abort fault. In this task we introduce an HTTP abort on the ratings microservice for the test user jason. In this case we expect the page to load immediately and show the "Ratings service is currently unavailable" message.

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      abort:
        percentage:
          value: 100.0
        httpStatus: 500
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
```

apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml
virtualservice.networking.istio.io/ratings configured
```

Kiali shows the result.

4. Traffic shifting

This task shows how to shift traffic from one version of a microservice to another. A common use case is to gradually migrate traffic from an old version to a new one. In Istio this is done with a series of routing rules that redirect a percentage of traffic from one destination to another. Here we send 50% of the traffic to reviews:v1 and 50% to reviews:v3, and then complete the migration by sending 100% to reviews:v3.

First, route all traffic to the v1 version of every microservice by applying the same all-v1 VirtualServices shown in step 1:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage unchanged
virtualservice.networking.istio.io/reviews configured
virtualservice.networking.istio.io/ratings configured
virtualservice.networking.istio.io/details unchanged
```

Now, no matter how often you refresh, the review section of the page shows no star ratings, because Istio routes all reviews traffic to reviews:v1 and that version does not call the ratings service.

Transfer 50% of the traffic from reviews:v1 to reviews:v3 with the following manifest:

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
```

apply:

```
> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
virtualservice.networking.istio.io/reviews configured
```

Kiali shows the split.

5. Request timeouts

HTTP request timeouts are specified with the timeout field of a route rule. By default request timeouts are disabled; in this task we override the reviews service timeout, and to see the effect we also artificially introduce a 2-second delay when calling the ratings service.

Before starting, route all requests to v1:

```
kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
```

Then route reviews to v2:

```
kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF
```

Inject a 2-second delay on ratings v1 (reviews:v2 calls ratings):

```
kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF
```

Now, whenever the application calls ratings, the call is delayed by 2 seconds. Since a slow upstream inevitably hurts the user experience, we next decide that if the upstream takes longer than 0.5s we stop waiting for it:

```
kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF
```

Refreshing now shows "Sorry, product reviews are currently unavailable for this book.", because the reviews path takes longer than 0.5 seconds and the request is abandoned. If we decide that 3 seconds is acceptable, we change the timeout to 3 and ratings becomes reachable again.
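A sketch of that last adjustment, raising the timeout on the reviews route from 0.5s to 3s so the injected 2-second ratings delay no longer trips it (the same manifest as above, only the timeout value changes):

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 3s   # tolerate the 2s delay injected into ratings
```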
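Section 4 mentions completing the migration by sending 100% of the traffic to reviews:v3 but does not show that manifest; a minimal sketch of it, following the same pattern as the 50/50 rule:

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3   # all reviews traffic now goes to v3
```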
2023-02-04 · 128 reads · 0 comments · 0 likes
2023-02-03
linuxea: istio fault injection / retries and fault tolerance / traffic mirroring (10)
6. Fault injection

Istio supports two kinds of fault injection: delay faults and abort faults.

- Delay fault: simulates a timeout, after which the request may be resent.
- Abort fault: the request is aborted and the client may retry.

Fault injection is still defined at the http layer.

Abort fault:

```
    fault:
      abort:             # abort fault
        percentage:
          value: 20      # share of the traffic to inject on
        httpStatus: 567  # response code returned for the fault
```

Delay fault:

```
    fault:
      delay:
        percentage:
          value: 20      # inject on 20% of the traffic
        fixedDelay: 6s   # inject a 6-second delay
```

The yaml:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"           # corresponds to gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway    # these definitions apply only on the Ingress Gateway
  - mesh
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
    fault:
      abort:
        percentage:
          value: 20
        httpStatus: 567
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    fault:
      delay:
        percentage:
          value: 20
        fixedDelay: 6s
```

Now, when we curl dpment.linuxea.com, 20% of the requests are delayed by 6 seconds:

```
(base) [root@master1 7]# while true;do date;curl dpment.linuxea.com; date;sleep 0.$RANDOM;done
2022年 08月 07日 星期日 18:10:40 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:40 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:41 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:41 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:47 CST
2022年 08月 07日 星期日 18:10:47 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:53 CST
2022年 08月 07日 星期日 18:10:54 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:54 CST
2022年 08月 07日 星期日 18:10:55 CST
```

If we request dpment.linuxea.com/version/, 20% of the responses come back with status code 567:

```
(base) [root@master1 7]# while true;do echo -e "===============";curl dpment.linuxea.com/version/ -I ; sleep 0.$RANDOM;done
===============
HTTP/1.1 567 Unknown
content-length: 18
content-type: text/plain
date: Sun, 07 Aug 2022 10:16:40 GMT
server: istio-envoy
===============
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 07 Aug 2022 10:17:31 GMT
content-type: text/html
content-length: 93
last-modified: Wed, 03 Aug 2022 07:59:37 GMT
etag: "62ea2ae9-5d"
accept-ranges: bytes
x-envoy-upstream-service-time: 2
===============
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 07 Aug 2022 10:17:32 GMT
content-type: text/html
content-length: 93
last-modified: Wed, 03 Aug 2022 07:59:37 GMT
etag: "62ea2ae9-5d"
accept-ranges: bytes
x-envoy-upstream-service-time: 1
===============
HTTP/1.1 567 Unknown
content-length: 18
content-type: text/plain
date: Sun, 07 Aug 2022 10:16:42 GMT
server: istio-envoy
===============
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 07 Aug 2022 10:17:33 GMT
content-type: text/html
content-length: 93
last-modified: Wed, 03 Aug 2022 07:59:37 GMT
etag: "62ea2ae9-5d"
accept-ranges: bytes
x-envoy-upstream-service-time: 3
...
```

If we curl the path directly instead of with -I, the aborted requests show "fault filter abort" in the body:

```
(base) [root@master1 7]# while true;do echo -e "\n";curl dpment.linuxea.com/version/ ; sleep 0.$RANDOM;done
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
fault filter abort
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
fault filter abort
fault filter abort
...
```

The same picture appears back in kiali.

6.1 Retries and fault tolerance

Request retry conditions:

- 5xx: the upstream host returned a 5xx response code, or did not respond at all (connect failure, reset, read timeout)
- gateway-error: like the 5xx policy, but retries only on 502, 503 and 504
- connection-failure: retry when the TCP connection to the upstream fails
- retriable-4xx: retry when the upstream returns a retriable 4xx response code
- refused-stream: retry when the upstream resets the stream with the REFUSED_STREAM error code
- retriable-status-codes: retry when the upstream response code matches one defined in the retry policy or in the x-envoy-retriable-status-codes header
- reset: retry when the upstream does not respond at all (disconnect/reset/read timeout)
- retriable-headers: retry when the upstream response matches any header listed in the retry policy or in the x-envoy-retriable-header-names header
- envoy-ratelimited: retry when the x-envoy-ratelimited header is present

Retry conditions, part 2 (same as the x-envoy-retry-grpc-on header):

- cancelled: retry when the gRPC status code in the response headers is "cancelled"
- deadline-exceeded: retry when the gRPC status code is "deadline-exceeded"
- internal: retry when the gRPC status code is "internal"
- resource-exhausted: retry when the gRPC status code is "resource-exhausted"
- unavailable: retry when the gRPC status code is "unavailable"

By default envoy performs no retries of any kind unless they are explicitly defined.

Suppose there are several services, A -> B -> C, and B, which A proxies to, starts responding slowly. We configure fault tolerance on A, as follows:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"          # corresponds to gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway   # these definitions apply only on the Ingress Gateway
  http:
  - name: default
    route:
    - destination:
        host: A
    timeout: 1s                   # if the upstream takes more than 1s, return a timeout
    retries:                      # retry policy
      attempts: 5                 # number of retries
      perTryTimeout: 1s           # timeout per attempt
      retryOn: 5xx,connect-failure,refused-stream  # conditions that trigger a retry
```

If the upstream does not respond within 1 second, retry; on 5xx response codes, failed TCP connections, or gRPC REFUSED_STREAM resets, retry up to five times, each attempt with a 1-second budget. If any of the 5 retries succeeds within its 1s window, the request succeeds.

7. Traffic mirroring

Traffic mirroring, also called shadowing, copies production traffic to another environment for testing or development. With mirror we can point the copy at a particular subset:

```
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    mirror:
      host: dpment
      subset: v12
```

So we modify the earlier configuration:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"          # corresponds to gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway   # these definitions apply only on the Ingress Gateway
  - mesh
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    mirror:
      host: dpment
      subset: v12
```

Start a curl loop:

```
while ("true"){ curl http://dpment.linuxea.com/ ;sleep 1}
```

Then check the access log inside v12 to confirm the traffic is being mirrored in; the mirrored requests arrive with the -shadow host suffix:

```
(base) [root@master1 10]# kubectl -n java-demo exec -it dpment-linuxea-c-568b9fcb5c-ltdcg -- /bin/bash
bash-5.0# curl 127.0.0.1
linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0
bash-5.0# tail -f /data/logs/access.log
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:27:59 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:00 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:01 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:02 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:03 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:04 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
...
```
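To tie the retry fields back to the services used throughout this series, here is a sketch of the same policy applied to the dpment VirtualService itself; the v11 subset is reused from the earlier posts and the thresholds are the example values from above, not tuned recommendations:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    timeout: 1s          # give up if the upstream needs more than 1s overall
    retries:
      attempts: 5        # retry up to five times
      perTryTimeout: 1s  # each attempt gets its own 1s budget
      retryOn: 5xx,connect-failure,refused-stream
```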
2023-02-03 · 118 reads · 0 comments · 0 likes
2023-01-17
linuxea: istio routing based on request headers (9)
5. Routing on request headers

Normally traffic from a client is sent to the sidecar proxy (envoy), the request is forwarded to the upstream pod, the response comes back to envoy, and envoy returns it to the client. In this flow, the client-to-envoy leg cannot be modified; only the request from envoy to the upstream service can be manipulated, because that message is issued by envoy. The response from the service to envoy cannot be modified either, while the response from envoy back to the client, being issued by envoy, can be. In total only two things are configurable: the request sent to the upstream service and the response sent to the downstream client. The other two legs are not generated by envoy, so their headers cannot be touched.

- request: the request sent to the upstream service
- response: the response returned to the downstream client

1. If the request carries the header x-for-canary with the value true, route it to v11, rewrite the User-Agent header sent upstream to Mozilla, and add an x-canary header to the response returned to the client:

```
  - name: canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v11
    headers:
      request:            # set User-Agent: Mozilla on the request sent upstream
        set:
          User-Agent: Mozilla
      response:           # add x-canary: "true" to the response sent to the client
        add:
          x-canary: "true"
```

Requests that match none of these rules fall through to the default rule: route to v10 and add a header, X-Envoy: linuxea, to the downstream response:

```
  - name: default
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: dpment
        subset: v10
```

The yaml:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v11
    headers:
      request:
        set:
          User-Agent: Mozilla
      response:
        add:
          x-canary: "marksugar"
  - name: default
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: dpment
        subset: v10
```

We add the gateway part:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"         # corresponds to gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway  # these definitions apply only on the Ingress Gateway
  - mesh
  http:
  - name: canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v11
    headers:
      request:
        set:
          User-Agent: Mozilla
      response:
        add:
          x-canary: "marksugar"
  - name: default
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: dpment
        subset: v10
```

5.1 Testing

Simulate a request with curl:

```
# curl dpment.linuxea.com
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
```

Add the header with -H "x-for-canary: true":

```
# curl -H "x-for-canary: true" dpment.linuxea.com
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
```

Then check the user-agent in the access log. Here it is Mozilla, although the request was made with curl (whose default user-agent is curl):

```
130.130.0.0, 127.0.0.6 - [07/Aug/2022:08:18:33 +0000] "GET / HTTP/1.1" dpment.linuxea.com94 "-" "Mozilla" - -0.000 [200] [-] [-] "-"
```

With -I we can see the added response header x-canary: marksugar:

```
PS C:\Users\Administrator> curl -H "x-for-canary: true" dpment.linuxea.com -I
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 07 Aug 2022 09:47:45 GMT
content-type: text/html
content-length: 94
last-modified: Wed, 03 Aug 2022 07:58:30 GMT
etag: "62ea2aa6-5e"
accept-ranges: bytes
x-envoy-upstream-service-time: 4
x-canary: marksugar
```

Without -H "x-for-canary: true", the response carries x-envoy: linuxea instead:

```
PS C:\Users\Administrator> curl dpment.linuxea.com -I
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 07 Aug 2022 09:51:53 GMT
content-type: text/html
content-length: 93
last-modified: Wed, 03 Aug 2022 07:59:37 GMT
etag: "62ea2ae9-5d"
accept-ranges: bytes
x-envoy-upstream-service-time: 7
x-envoy: linuxea
```
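The exact match on x-for-canary is the simplest case; Istio header matches also accept prefix and regex forms. As a sketch, routing real browser traffic by matching the User-Agent header could look like the following; the .*Mozilla.* pattern and the v11/v10 split are illustrative and not part of the original post:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: browser
    match:
    - headers:
        user-agent:
          regex: ".*Mozilla.*"   # hypothetical browser detection
    route:
    - destination:
        host: dpment
        subset: v11
  - name: default
    route:
    - destination:
        host: dpment
        subset: v10
```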
2023-01-17 · 158 reads · 0 comments · 0 likes
2023-01-15
linuxea: istio weight-based routing (8)
Following on from the previous posts, this time we want 90% of the requests to the dpment service to go to the original v10 pods and 10% to the new v11 pods, so we configure weight to split the traffic by ratio.

First deploy dpment-a and dpment-b; we still need a service that selects the backend pod labels:

```
apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP
```

Configure a DestinationRule with subsets selected by label; the labels pick out the two pod groups and complete the subset definitions. The deployments and their labels were configured in earlier posts, so the subsets simply reference them:

```
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment
  subsets:
  - name: v11
    labels:
      version: v1.1
  - name: v10
    labels:
      version: v1.0
```

In the VirtualService, add a weight to each subset:

```
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v10
      weight: 90
    - destination:
        host: dpment
        subset: v11
      weight: 10
```

The complete yaml is simply the three objects above concatenated. Then test from the cli container, or any other pod in the mesh:

```
/ $ while true;do curl dpment;sleep 0.3;done
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0
...
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-695mr.com-127.0.0.1/8 130.130.1.108/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-p4prt.com-127.0.0.1/8 130.130.0.76/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-695mr.com-127.0.0.1/8 130.130.1.108/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-ktz84.com-127.0.0.1/8 130.130.1.107/24 version number 1.0
```

The kiali UI now shows this traffic flow.

To change the ratio, just edit the weights, for example 60% to v10 and 40% to v11:

```
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v10
      weight: 60
    - destination:
        host: dpment
        subset: v11
      weight: 40
```

Kiali then shows the new proportions. To send all traffic to one side, set its weight to 100:

```
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v10
      weight: 0
    - destination:
        host: dpment
        subset: v11
      weight: 100
```

Once all traffic goes to one side, the service whose weight is 0 no longer receives traffic and disappears from the kiali graph.

4.1 gateway

For a front-end page some extra configuration is needed. Suppose the dpment service is currently at version 1.1 and we want to roll out 1.2 by weight. So far only the v1.0 and v1.1 subsets exist, so we add a v1.2 pod group and a matching DestinationRule entry. The DestinationRule is associated by label, so the pod labels have to be set accordingly:

```
  matchLabels:
    app: linuxea_app
    version: v1.2
```

On top of the previous DestinationRule we add

```
  - name: v12
    labels:
      version: v1.2
```

all under the same dpment host. The yaml:

```
apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-c
  namespace: java-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: linuxea_app
      version: v1.2
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v1.2
    spec:
      containers:
      - name: dpment-linuxea-c
#        imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0
        ports:
        - name: http
          containerPort: 80
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment              # must match the service
  subsets:                  # logical groups
  - name: v11               # define v11; select pods labelled v1.1 into the v11 subset
    labels:
      version: v1.1
  - name: v10               # define v10; select pods labelled v1.0 into the v10 subset
    labels:
      version: v1.0
  - name: v12
    labels:
      version: v1.2
```

Then adjust the ratio in the vs:

```
    - destination:
        host: dpment
        subset: v10
      weight: 90
    - destination:
        host: dpment
        subset: v12
      weight: 10
```

The yaml:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"         # corresponds to gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway  # these definitions apply only on the Ingress Gateway
  - mesh
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v11
      weight: 90
    - destination:
        host: dpment
        subset: v12
      weight: 10
```

curl:

```
PS C:\Users\usert> while ("true"){ curl http://dpment.linuxea.com/ ;sleep 1}
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-c-568b9fcb5c-phn77.com-127.0.0.1/8 130.130.0.24/24 version number 3.0
...
```

Because the configured hosts are

```
  hosts:
  - "dpment.linuxea.com"
  - "dpment"
```

the service can also be reached from inside a pod. For it to be reachable both in-mesh and through the ingress at the same time, the "- mesh" entry is essential.

Create a cli pod:

```
kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash
```

and curl from it:

```
bash-4.4# while true;do curl dpment;sleep 0.4;done
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
...
linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
```

The kiali diagram has changed accordingly.
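To finish the 1.2 rollout the same way the earlier 0/100 example did, the weights can simply be flipped; a sketch reusing the same subsets:

```
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"
  - "dpment"
  gateways:
  - istio-system/dpment-gateway
  - mesh
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: dpment
        subset: v11
      weight: 0     # the old version no longer receives traffic
    - destination:
        host: dpment
        subset: v12
      weight: 100   # all traffic on the new version
```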
2023-01-15 · 95 reads · 0 comments · 0 likes
2023-01-03
linuxea: istio publishing a web service outside the cluster (7)
3. Exposing the service to the outside network

To reach these two pods from outside the cluster through a domain name, we need to define a Gateway and a VirtualService; the VirtualService is bound to the gateway, and the Gateway opens the listener.

The Gateway must be created in the namespace where the mesh's ingress gateway pod runs, otherwise it may not take effect.

The VirtualService defines the routing rules. The VirtualService defined earlier did not reference any gateway; when no gateway is given, its rules are only used by the sidecars inside the mesh. Conversely, once only the gateway is listed, the mesh interior no longer uses these rules; to keep mesh-internal access as well, add - mesh to the gateways list. Inside the cluster, access normally goes through the service name anyway.

Configure the Gateway to accept the ingress traffic for the given hosts:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dpment-gateway
  namespace: istio-system          # must be the namespace of the ingress gateway pod
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "dpment.linuxea.com"
    - "dpment1.linuxea.com"

Configure the VirtualService and attach it to istio-system/dpment-gateway; its hosts correspond to the hosts declared on the Gateway above, so the two ends match:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"            # corresponds to gateways/proxy-gateway
  - "dpment1.linuxea.com"
  gateways:
  - istio-system/dpment-gateway     # these rules apply only on the ingress gateway
  #- mesh
  http:
  - name: dpment
    route:
    - destination:
        host: dpment
---

Configure a Service:

apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP
---

At this point the site can be reached from a browser, but this simply round-robins traffic across every pod carrying the app: linuxea_app label. So we add a URL path rule for external access: requests without /version/ should be sent to v11, and requests with /version/ should be rewritten to / and sent to v10.

  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11

We modify the VirtualService to use subsets, so an extra DestinationRule is added:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
#  hosts:
#  - dpment
  hosts:
  - "dpment.linuxea.com"            # corresponds to gateways/proxy-gateway
  gateways:
  - istio-system/dpment-gateway     # these rules apply only on the ingress gateway
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment
  subsets:
  - name: v11
    labels:
      version: v1.1
  - name: v10
    labels:
      version: v1.0

Configure local hosts entries and test:

PS C:\Users\usert> while ("true"){ curl http://dpment.linuxea.com/ http://dpment.linuxea.com/version/ ;sleep 1}
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0

The traffic graph drawn in kiali now reflects the change as well.
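For a quick check that does not require editing the local hosts file, the same two requests can be pinned to the ingress gateway address with curl's --resolve option. A minimal sketch, assuming the gateway is reachable at 172.16.100.110 (the externalIP assigned to istio-ingressgateway elsewhere in this series; substitute your own address):

# resolve the gateway hosts to the ingress address without touching /etc/hosts
curl --resolve dpment.linuxea.com:80:172.16.100.110 http://dpment.linuxea.com/
curl --resolve dpment.linuxea.com:80:172.16.100.110 http://dpment.linuxea.com/version/

As in the loop above, requests for / should land on the version 2.0 pods and requests for /version/ on the version 1.0 pods.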
2023-01-03
217 reads
0 comments
0 likes
2022-12-27
linuxea: istio - defining subsets (6)
Defining subsets

We fold the two versions under a single host and tell them apart through labels: for multiple versions behind the same host, each version is marked with its own label, and the versions are then referenced from the VirtualService.

Subsets are configured on the DestinationRule. The DestinationRule works at the cluster level, and traffic is steered to it through routes. In this example the subsets are keyed on the version label, similar to:

 selector:
    app: linuxea_app
    version: v0.2

service

First, as before, create a Service that selects on the labels:

apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
  type: ClusterIP

Defining the DestinationRule cluster

Then create a DestinationRule. Within a single host, subsets use the version labels to tie together the two services, each of which selects a different group of pods running a different version.

host stays consistent with the Service
subsets defines v11, selecting pods labelled v1.1 into the v11 subset
and defines v10, selecting pods labelled v1.0 into the v10 subset

We adjust the pod labels accordingly. The dpment-b yaml is as follows:

---
apiVersion: v1
kind: Service
metadata:
  name: dpment-b
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-b
  namespace: java-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: linuxea_app
      version: v1.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v1.1
    spec:
      containers:
      - name: nginx-b
#        imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0
        ports:
        - name: http
          containerPort: 80

dpment-a is as follows:

---
apiVersion: v1
kind: Service
metadata:
  name: dpment-a
  namespace: java-demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: linuxea_app
    version: v1.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-a
  namespace: java-demo
spec:
  replicas:
  selector:
    matchLabels:
      app: linuxea_app
      version: v1.0
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v1.0
    spec:
      containers:
      - name: nginx-a
#        imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
        ports:
        - name: http
          containerPort: 80

After creation:

(base) [root@master1 2]# kubectl -n java-demo get svc,pod
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/dpment           ClusterIP   10.96.155.138    <none>        80/TCP    22h
service/dpment-a         ClusterIP   10.99.74.80      <none>        80/TCP    12s
service/dpment-b         ClusterIP   10.101.155.240   <none>        80/TCP    33s
NAME                                    READY   STATUS    RESTARTS   AGE
pod/cli                                 2/2     Running   0          22h
pod/dpment-linuxea-a-777847fd74-fsnsv   2/2     Running   0          12s
pod/dpment-linuxea-b-55694cb7f5-576qs   2/2     Running   0          32s
pod/dpment-linuxea-b-55694cb7f5-lhkrb   2/2     Running   0          32s

DestinationRule

If there are more versions, the subsets list can simply carry one entry per version label, matching each differently versioned group of pods. The dr is as follows:

---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment
  namespace: java-demo
spec:
  host: dpment        # same as the Service name
  subsets:            # logical groups
  - name: v11         # define v11 and select pods labelled v1.1 into the v11 subset
    labels:
      version: v1.1
  - name: v10         # define v10 and select pods labelled v1.0 into the v10 subset
    labels:
      version: v1.0
---

Once the dr is created, the related information shows up in the cluster listing.

IMP=$(kubectl -n java-demo get pod -l app=linuxea_app -o jsonpath={.items[0].metadata.name})

Use istioctl proxy-config cluster $IMP.java-demo to inspect the clusters that were defined:

(base) [root@master1 2]# istioctl proxy-config cluster $IMP.java-demo
...
dpment-a.java-demo.svc.cluster.local   80   -     outbound   EDS
dpment-b.java-demo.svc.cluster.local   80   -     outbound   EDS
dpment.java-demo.svc.cluster.local     80   -     outbound   EDS   dpment.java-demo
dpment.java-demo.svc.cluster.local     80   v10   outbound   EDS   dpment.java-demo
dpment.java-demo.svc.cluster.local     80   v11   outbound   EDS   dpment.java-demo
...

As you can see, in the cluster view every Service is an Envoy cluster, and all of them can be reached:

bash-4.4# curl dpment
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
bash-4.4# curl dpment-b
linuxea-dpment-linuxea-b-59b448f49c-nfkfh.com-127.0.0.1/8 130.130.0.13/24 version number 2.0

So we delete the now-redundant dpment-a and dpment-b. dpment is already the Service backing our subsets, and both the dr and the vs reference dpment, so deleting the other two does not affect it. Once dpment-a and dpment-b are removed, their listeners, clusters and routes are removed as well.

(base) [root@master1 2]# kubectl -n java-demo delete svc dpment-a dpment-b
service "dpment-a" deleted
service "dpment-b" deleted

VirtualService

We still need a VirtualService for URL path routing. The routing rules stay the same, but the destination host is now the same for both routes, dpment; only the subset differs.

spec:
  hosts:
  - dpment            # service
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment  # service
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment  # service
        subset: v11

If the request URL starts with /version/ it is rewritten to / and routed to subset v10; otherwise it is routed to subset v11. The yaml is as follows:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11

After creation, the finished configuration can be reviewed in kiali. The services are now wired up in both the vs and the dr. Test from the command line:

bash-4.4# while true;do curl dpment; curl dpment/version/;sleep 0.$RANDOM;done
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-j7gk9.com-127.0.0.1/8 130.130.1.121/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-j7gk9.com-127.0.0.1/8 130.130.1.121/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-j7gk9.com-127.0.0.1/8 130.130.1.121/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-nfkfh.com-127.0.0.1/8 130.130.0.13/24 version number 2.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-b-59b448f49c-nfkfh.com-127.0.0.1/8 130.130.0.13/24 version number 2.0

We still see the same effect as before.
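To confirm that the v10/v11 subsets were actually pushed down to the sidecar, the per-subset Envoy clusters can also be queried directly. A small sketch reusing the $IMP variable set above; the cluster names follow the outbound|port|subset|host pattern visible in the listing:

# endpoints backing each subset of the dpment host
istioctl proxy-config endpoints $IMP.java-demo --cluster "outbound|80|v10|dpment.java-demo.svc.cluster.local"
istioctl proxy-config endpoints $IMP.java-demo --cluster "outbound|80|v11|dpment.java-demo.svc.cluster.local"

Each command should list only the pod IPs whose version label matches that subset.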
2022-12-27
221 reads
0 comments
0 likes
2022-12-03
linuxea: istio - URL path based routing (5)
流量治理在istio1.5后的版本,istiod充当控制平面,并且将配置分发到所有的sidecar代理和网关,能够支持网格的应用实现只能话的负载均衡机制。而envoy通过简单的二次开发后在istio中称为istio-proxy,被用于围绕某个pod,以sidecar的模式允许在某个应用的pod中。除此之外envoy还负责ingress和egress,分别是入向和出向网关。在k8s中,一旦istio部署完成,并添加了标签到名称空间下后,sidecar是以自动注入的方式注入到pod中,而无需手动配置。无论手动还是自动都是借助istio 控制器。ingress作为统一入口,是必须有的,而egress并非必然。pod中的应用程序,向外发送的时候是通过sidecar来进行处理,而当前的代理作为接收端的时候只是仅仅进行转发到后端应用程序即可。如果是 集群外的,则先从ingerss引入进来,而后在往后端转发。而在istio中服务注册表是用来发现所有的服务,并且将每个服务的服务名称为主机,pod为上游集群,访问主机的流量默认没有路由条件的转给名称对应的所有pod。每个seidcar会拿到服务的所有配置,每一个服务对应的服务转换为虚拟主机的配置。虚拟主机适配的主机头就是服务的服务名,主机头适配的所有流量都转发给后端的所有pod, service的作用是负责发现pod,并不介入流量转发在envoy中定义一个组件listencr,并且定义一个上游服务器组cluster,流量进入后去往哪里,需要定义多个vhosts,根据主机头就那些匹配到某虚拟主机,根据路由匹配规则url等,来判定转发给上游某个集群。cluster的角色和nginx的upstrm很像,调度,以及会话等。如果此时要进行流量比例,就需要在这里进行配置。调度算法由destnartionrule来进行定义,如:路由等。而虚拟机主机是由virtualService定义hosts。而如果是外部流量ingress gateway就需要定义一个gateway的crd。在istio中,如果使用envoy的方式那就太复杂了,因此,要想配置流量治理,我们需要了解配置一些CRD,如:Gateway为网格引入外部流量,但是不会下发到整个网格,只会将配置下发到ingress-gateway这个pod,而这个pod是没用sidecar的serviceEntty对于出站流量统一配置需要serviceEntty来定义, 也会转换成envoy的原生api配置,这些配置只会下发到egress gateway用来管控出向流量vitrual services只要定义了网格,istiod就会将k8s集群上控制平面和数据平面的所有service(istio-system和打了标签的namespace)自动发现并转换为envoy配置,下发到各个sidecar-proxy。这些配置的下发是所有的,service和service之间本身都是可以互相访问的, 每个service都会被转换成pod envoy的egress listerners, 因此,只要service存在, service之间通过envoy配置的listeners,以及路由,cluster,这些本身就可以进行互相访问。istio将网格中每个service端口创建为listener,而其匹配到的endpoint将组和为一个cluster而vitrual services是对网格内的流量配置的补充,对于一个service到达另外一个cluster之间的扩充,一个到另外一个的调度算法等其他高级功能,比如:1.路由规则,子集2.url3.权重等vitrual services就是配置在listeners上的vitrual hosts和router configDestination rulesdestination rules将配置好的配置指向某个后端的cluster,在cluster上指明均衡机制,异常探测等类的流量分发机制。这些配置应用后会在所有的网格内的每个sidecar内被下发,大部分都在outbound出站上Destination rules和vitrual services是配置的扩充,因此Destination和vitrual services每次并非都需要配置,只有在原生默认配置无法满足的时候,比如需要配置高级功能的时候才需要扩充配置要配置这些流量治理,需要virtualService,并且需要定义destnartionrule。实际上,我们至少需要让集群被外界访问,而后配置ingress-gateway,指定虚拟主机配置virtualService和destnartionrule外部的入站流量会经由ingress gateway到达集群内部,也就是南北流量经由gateway定义的ingress的vhsots包括目标流量访问的"host",以及虚拟主机监听的端口号集群内部的流量仅会在sidecar之间流动,也就是东西向流量, 大都在egress gateway发挥作用virtualService为sedecar envoy定义的listener(定义流量路由机制等)DestinationRule为sedecar envoy定义的cluster(包括发现端点等)网格内的流量无论是发布测试等,都是通过访问发起段的正向代理出站envoy进行配置,并且网格内的流量配置与动向都是在数据平面完成。而控制平面只是在进行下发配置策略的定义。要想在egress或者ingress 定义相应的配置,需要通过virtualService来进行定义,1. 
基于url路径路由新的版本与旧版本之间,我们希望百分之一的流量在新版本之上,而百分之99还在旧的版本上我们重新配置下清单,我准备了两个pod,当打开根目录的时候显示版本是1nginx:v1.0linuxea-dpment-linuxea-x-xxxxx version number 1.0nginx:v1.1linuxea-dpment-linuxea-x-xxxxx version number 1.1/version/显示同样的信息。准备两个版本的pod,根路径和/version/都存在,且版本号不一样,而后配置进行测试、registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.1使用如上两个版本来测试1.1 dpment-a要想被istio发现,我们必须创建一个service,而后创建一个dpment-a的deployment的pod清单如下--- apiVersion: v1 kind: Service metadata: name: dpment-a namespace: java-demo spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app version: v0.1 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: dpment-linuxea-a namespace: java-demo spec: replicas: selector: matchLabels: app: linuxea_app version: v0.1 template: metadata: labels: app: linuxea_app version: v0.1 spec: containers: - name: nginx-a # imagePullPolicy: Always image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 ports: - name: http containerPort: 80此时我们可以通过命令来查看当前创建的这个pod在istio的表现获取当前的pod名称(base) [root@linuxea.com test]# INGS=$(kubectl -n java-demo get pod -l app=linuxea_app -o jsonpath={.items[0].metadata.name}) (base) [root@linuxea.com test]# echo $INGS dpment-linuxea-a-68dc49d5d-c9pcb查看proxy-status(base) [root@linuxea.com test]# istioctl proxy-status NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION dpment-linuxea-a-68dc49d5d-c9pcb.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 dpment-linuxea-a-68dc49d5d-h6v6v.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 dpment-linuxea-a-68dc49d5d-svl52.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 istio-egressgateway-65b46d7874-xdjkr.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-8689fcd796-mqd8n 1.14.1 istio-ingressgateway-559d4ffc58-7rgft.istio-system Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 sleep-557747455f-46jf5.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1查看routes的80端口,可见dpment-a已经被创建(base) [root@linuxea.com test]# istioctl proxy-config routes $INGS.java-demo --name 80 NAME DOMAINS MATCH VIRTUAL SERVICE 80 argocd-server.argocd, 10.98.127.60 /* 80 dpment-a, dpment-a.java-demo + 1 more... /* 80 ingress-nginx.ingress-nginx, 10.99.195.253 /* 80 istio-egressgateway.istio-system, 10.97.213.128 /* 80 istio-ingressgateway.istio-system, 10.97.154.56 /* 80 kuboard.kube-system, 10.97.104.136 /* 80 skywalking-ui.skywalking, 10.104.119.238 /* 80 sleep, sleep.java-demo + 1 more... 
/* 80 tracing.istio-system, 10.104.76.74 /* 80 web-nginx.test, 10.104.18.194 /* 查看cluster也被发现到(base) [root@linuxea.com test]# istioctl proxy-config cluster $INGS.java-demo | grep dpment-a dpment-a.java-demo.svc.cluster.local 80 - outbound EDS 在endpionts中能看到后端的ip(base) [root@linuxea.com test]# istioctl proxy-config endpoints $INGS.java-demo | grep dpment-a 130.130.0.3:80 HEALTHY OK outbound|80||dpment-a.java-demo.svc.cluster.local 130.130.0.4:80 HEALTHY OK outbound|80||dpment-a.java-demo.svc.cluster.local 130.130.1.119:80 HEALTHY OK outbound|80||dpment-a.java-demo.svc.cluster.local或者使用cluster来过滤(base) [root@linuxea.com test]# istioctl proxy-config endpoints $INGS.java-demo --cluster "outbound|80||dpment-a.java-demo.svc.cluster.local" ENDPOINT STATUS OUTLIER CHECK CLUSTER 130.130.0.3:80 HEALTHY OK outbound|80||dpment-a.java-demo.svc.cluster.local 130.130.0.4:80 HEALTHY OK outbound|80||dpment-a.java-demo.svc.cluster.local 130.130.1.119:80 HEALTHY OK outbound|80||dpment-a.java-demo.svc.cluster.local这里的ip就是pod的ip(base) [root@linuxea.com test]# kubectl -n java-demo get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dpment-linuxea-a-68dc49d5d-c9pcb 2/2 Running 0 30m 130.130.0.4 master2 <none> <none> dpment-linuxea-a-68dc49d5d-h6v6v 2/2 Running 0 31m 130.130.0.3 master2 <none> <none> dpment-linuxea-a-68dc49d5d-svl52 2/2 Running 0 30m 130.130.1.119 k8s-03 <none> <none>a.查看而后我们run一个pod来这个pod也会被加入到istio中来kubectl run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash如下(base) [root@linuxea.com test]# kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash If you don't see a command prompt, try pressing enter. bash-4.4# 通过service名称访问bash-4.4# curl dpment-a linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0 bash-4.4# curl dpment-a linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0 bash-4.4# curl dpment-a linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0并且listeners的端口也在这个pod内bash-4.4# ss -tlnpp State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 0.0.0.0:15021 0.0.0.0:* LISTEN 0 128 0.0.0.0:15021 0.0.0.0:* LISTEN 0 128 0.0.0.0:15090 0.0.0.0:* LISTEN 0 128 0.0.0.0:15090 0.0.0.0:* LISTEN 0 128 127.0.0.1:15000 0.0.0.0:* LISTEN 0 128 0.0.0.0:15001 0.0.0.0:* LISTEN 0 128 0.0.0.0:15001 0.0.0.0:* LISTEN 0 128 127.0.0.1:15004 0.0.0.0:* LISTEN 0 128 0.0.0.0:15006 0.0.0.0:* LISTEN 0 128 0.0.0.0:15006 0.0.0.0:* LISTEN 0 128 *:15020 *:* 此时,可以过滤listeners的80端口来查看他的侦听器bash-4.4# curl -s 127.0.0.1:15000/listeners | grep 80 10.102.80.102_10257::10.102.80.102:10257 0.0.0.0_8080::0.0.0.0:8080 0.0.0.0_80::0.0.0.0:80 10.104.119.238_80::10.104.119.238:80 10.109.18.63_8083::10.109.18.63:8083 0.0.0.0_11800::0.0.0.0:11800 10.104.18.194_80::10.104.18.194:80 10.96.124.32_12800::10.96.124.32:12800 10.103.47.163_8080::10.103.47.163:8080 0.0.0.0_8060::0.0.0.0:8060 10.96.59.20_8084::10.96.59.20:8084 10.106.152.2_8080::10.106.152.2:8080 10.96.171.119_8080::10.96.171.119:8080 10.96.132.151_8080::10.96.132.151:8080 10.99.185.170_8080::10.99.185.170:8080 10.105.132.58_8082::10.105.132.58:8082 10.96.59.20_8081::10.96.59.20:8081 0.0.0.0_8085::0.0.0.0:8085查看clusterbash-4.4# curl -s 127.0.0.1:15000/clusters|grep dpment-a outbound|80||dpment-a.java-demo.svc.cluster.local::observability_name::outbound|80||dpment-a.java-demo.svc.cluster.local 
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_connections::4294967295 outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_pending_requests::4294967295 outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_requests::4294967295 outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_retries::4294967295 outbound|80||dpment-a.java-demo.svc.cluster.local::high_priority::max_connections::1024 outbound|80||dpment-a.java-demo.svc.cluster.local::high_priority::max_pending_requests::1024 outbound|80||dpment-a.java-demo.svc.cluster.local::high_priority::max_requests::1024 outbound|80||dpment-a.java-demo.svc.cluster.local::high_priority::max_retries::3 outbound|80||dpment-a.java-demo.svc.cluster.local::added_via_api::true outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::cx_active::2 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::cx_connect_fail::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::cx_total::2 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_active::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_error::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_success::2 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_timeout::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_total::2 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::hostname:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::health_flags::healthy outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::weight::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::region:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::zone:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::sub_zone:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::canary::false outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::priority::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::success_rate::-1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::local_origin_success_rate::-1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::cx_active::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::cx_connect_fail::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::cx_total::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_active::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_error::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_success::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_timeout::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_total::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::hostname:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::health_flags::healthy outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::weight::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::region:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::zone:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::sub_zone:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::canary::false outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::priority::0 
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::success_rate::-1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::local_origin_success_rate::-1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::cx_active::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::cx_connect_fail::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::cx_total::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_active::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_error::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_success::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_timeout::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_total::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::hostname:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::health_flags::healthy outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::weight::1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::region:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::zone:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::sub_zone:: outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::canary::false outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::priority::0 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::success_rate::-1 outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::local_origin_success_rate::-1此时的,我们在run起的这个pod中通过curl命令来请求dpment-adpment-a本身是在service中实现的,但是在istio介入后,就委托给istio实现while true;do curl dpment-a;sleep 0.5;done(base) [root@linuxea.com ~]# kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash If you don't see a command prompt, try pressing enter. 
bash-4.4# while true;do curl dpment-a;sleep 0.5;done linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0 linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0kialicli pod请求的是dpment-a,经过自己的sidecar-envoy(egress-listener)根据请求调度了dpment-a的请求,请求先在cli的sidecar上发生的,出站流量通过egress listener的dpment-a的服务,对于这个主机的请求是通过egress listener的cluster调度到后端进行响应b.ingress-gw如果此时要被外部访问,就需要配置ingress-gw因此,配置即可apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: dpment-gateway namespace: istio-system # 要指定为ingress gateway pod所在名称空间 spec: selector: app: istio-ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "dpment.linuxea.com" - "dpment1.linuxea.com" --- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - "dpment.linuxea.com" # 对应于gateways/proxy-gateway - "dpment1.linuxea.com" gateways: - istio-system/dpment-gateway # 相关定义仅应用于Ingress Gateway上 #- mesh http: - name: dpment-a route: - destination: host: dpment-a ---apply后在本地解析域名即可1.2 dpment-b此时 ,我们在创建一个dpment-b的service--- apiVersion: v1 kind: Service metadata: name: dpment-b namespace: java-demo spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app version: v0.2 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: dpment-linuxea-b namespace: java-demo spec: replicas: 2 selector: matchLabels: app: linuxea_app version: v0.2 template: metadata: labels: app: linuxea_app version: v0.2 spec: containers: - name: nginx-b # imagePullPolicy: Always image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 ports: - name: http containerPort: 80创建完成(base) [root@linuxea.com test]# kubectl -n java-demo get pod,svc NAME READY STATUS RESTARTS AGE pod/cli 2/2 Running 0 5h59m pod/dpment-linuxea-a-68dc49d5d-c9pcb 2/2 Running 0 23h pod/dpment-linuxea-a-68dc49d5d-h6v6v 2/2 Running 0 23h pod/dpment-linuxea-a-68dc49d5d-svl52 2/2 Running 0 23h pod/dpment-linuxea-b-59b448f49c-j7gk9 2/2 Running 0 29m pod/dpment-linuxea-b-59b448f49c-nfkfh 2/2 Running 0 29m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/dpment-a ClusterIP 10.107.148.63 <none> 80/TCP 23h service/dpment-b ClusterIP 10.109.153.119 <none> 80/TCP 29m如下(base) [root@linuxea.com test]# kubectl -n java-demo get pod,svc -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/cli 2/2 Running 0 5h59m 130.130.0.9 master2 <none> <none> pod/dpment-linuxea-a-68dc49d5d-c9pcb 2/2 Running 
0 23h 130.130.0.4 master2 <none> <none> pod/dpment-linuxea-a-68dc49d5d-h6v6v 2/2 Running 0 23h 130.130.0.3 master2 <none> <none> pod/dpment-linuxea-a-68dc49d5d-svl52 2/2 Running 0 23h 130.130.1.119 k8s-03 <none> <none> pod/dpment-linuxea-b-59b448f49c-j7gk9 2/2 Running 0 29m 130.130.1.121 k8s-03 <none> <none> pod/dpment-linuxea-b-59b448f49c-nfkfh 2/2 Running 0 29m 130.130.0.13 master2 <none> <none> NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/dpment-a ClusterIP 10.107.148.63 <none> 80/TCP 23h app=linuxea_app,version=v0.1 service/dpment-b ClusterIP 10.109.153.119 <none> 80/TCP 29m app=linuxea_app,version=v0.21.3 dpment此时dpment-a和dpment-b已经被创建,他们会生成相应的listener,clusters,routes,endpions,而后我们在创建一个dpment而后我们创建一个dpment的VirtualService在网格内做url转发如果是/version/的就重定向到/,并转发到dpment-b否则就转发到dpment-a配置如下--- apiVersion: v1 kind: Service metadata: name: dpment namespace: java-demo spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: dpment type: ClusterIP --- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - dpment http: - name: version match: - uri: prefix: /version/ rewrite: uri: / route: - destination: host: dpment-b - name: default route: - destination: host: dpment-a关键配置注释spec: hosts: - dpment # 与service名称一致 http: # 7层路由机制 - name: version match: - uri: # 请求报文中的url prefix: /version/ # 如果以/version/为前缀 rewrite: # 重写 uri: / # 如果以/version/为前缀就重写到/ route: - destination: host: dpment-b # 如果以/version/为前缀就重写到/,并且发送到 dpment-b 的host - name: default # 不能匹配/version/的都会发送到default,并且路由到dpment-a route: - destination: host: dpment-a我们定义了一个路由规则,如果访问的是/version/的url就重写为/并且路由到dpment-b,否则就路由到dpment-a创建dpment , 现在多了一个svc(base) [root@linuxea.com test]# kubectl -n java-demo get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dpment ClusterIP 10.96.155.138 <none> 80/TCP 6s dpment-a ClusterIP 10.107.148.63 <none> 80/TCP 23h dpment-b ClusterIP 10.109.153.119 <none> 80/TCP 51m还有一个vs(base) [root@master2 ~]# kubectl -n java-demo get vs NAME GATEWAYS HOSTS AGE dpment ["dpment"] 19s此时我们查看routes,如下图(base) [root@linuxea.com ~]# istioctl proxy-config routes $IMP.java-demo | grep 80 web-nginx.test.svc.cluster.local:80 * /* 8060 webhook-dingtalk.monitoring, 10.107.177.232 /* 8080 argocd-applicationset-controller.argocd, 10.96.132.151 /* 8080 cloud-his-gateway-nodeport.default, 10.96.171.119 /* 8080 cloud-his-gateway.default, 10.103.47.163 /* 8080 devops-system-nodeport.default, 10.106.152.2 /* 8080 devops-system.default, 10.99.185.170 /* 8080 jenkins-master-service.devops, 10.100.245.168 /* 8080 jenkins-service.jenkins, 10.98.131.142 /* 8085 cloud-base-uaa.devops, 10.109.0.226 /* 80 argocd-server.argocd, 10.98.127.60 /* 80 dpment-a, dpment-a.java-demo + 1 more... /* 80 dpment-b, dpment-b.java-demo + 1 more... /* 80 dpment, dpment.java-demo + 1 more... /version/* dpment.java-demo 80 dpment, dpment.java-demo + 1 more... 
/* dpment.java-demo 80 ingress-nginx.ingress-nginx, 10.99.195.253 /* 80 istio-egressgateway.istio-system, 10.97.213.128 /* 80 istio-ingressgateway.istio-system, 10.97.154.56 /* 80 kuboard.kube-system, 10.97.104.136 /* 80 skywalking-ui.skywalking, 10.104.119.238 /* 80 tracing.istio-system, 10.104.76.74 /* 80 web-nginx.test, 10.104.18.194 /* argocd-applicationset-controller.argocd.svc.cluster.local:8080 * /* devops-system-nodeport.default.svc.cluster.local:8080 * /* argocd-metrics.argocd.svc.cluster.local:8082 * 我们在java-demo 的pod内进行测试kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash仍然起一个cli进行测试如果请求直接访问的发送到v0.1, 如果请求携带version的url,发送到0.2bash-4.4# while true;do curl dpment ; sleep 0.$RANDOM;done linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0我们循环的访问后,在kiali页面能看到不通的状态 while true;do curl dpment; curl dpment/version/;sleep 0.$RANDOM;done打开web页面观测
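The introduction above states the eventual goal of keeping 99% of the traffic on the old version and 1% on the new one, while this installment only demonstrates path-based routing. A minimal sketch of what the weighted variant could look like, assuming the dpment-a and dpment-b services created above (the exact split is illustrative):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: canary
    route:
    - destination:
        host: dpment-a     # old version keeps most of the traffic
      weight: 99
    - destination:
        host: dpment-b     # new version receives a small share
      weight: 1

The weights are relative shares (conventionally summing to 100); the same split can also be expressed against subsets once a DestinationRule defines them.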
2022-12-03
269 reads
0 comments
0 likes
2022-11-18
linuxea: istio 1.14.1 kiali configuration (4)
4.kiali配置ingress和egress分别处理的出和入的流量,而大部分相关资源都需要定义资源来完成。每个pod访问外部的时候,流量到达sedcar上,而sedcar上有对应的配置。这些配置包括:集群内有多少个服务服务对应的pod是什么访问给服务流量的比例是多少而virtualServIce就是为了这些来定义,这类似于nginx的虚拟主机。virtualServIce为每一个服务定义一个虚拟主机或者定义一个path路径,一旦用户流量到达虚拟主机上,根据访问将请求发送到“upstrem server”。而在istio之上是借助kubernetes的service,为每一个虚拟主机,虚拟主机的名称就是服务service的名字,虚拟主机背后上游中有多少个节点和真正的主机是借助kubernetes的服务来发现pod,每一个pod在istio之上被称为Destination,一个目标。一个对应的服务对应一个主机名字来进行匹配,客户端流量请求的就是主机头的流量就会被这个服务匹配到目标上,而目标有多少个pod取决于kubernetes集群上的服务有多少个pod, 在envoy中这些pod被称为cluster.virtualServIce就是用来定义虚拟主机有多少个,发往主机的流量匹配的不同规则,调度到那些,Destination就是定义后端集群中有多少个pod,这些pod如果需要分组就需要Destination rule来进行定义子集。比如说,一个主机服务是A,MatchA,对于根的流量,送给由A服务的所有pod上,流量就被调度给这些pod,负载均衡取决于istio和envoy的配置。对于这些流量而言,可以将pod分为两部分,v1和v2, 99%的流量给v1,1%的流量给v2,并且对1%的流量进行超时,重试,故障注入等。要想配置流量治理,我们需要配置virtualService,并且需要定义destnartionrule。实际上,我们至少需要让集群被外界访问,而后配置ingress-gateway,紧接着配置virtualService和destnartionrule4.1 测试一个pod此前有一个java-demo此时再创建一个pod,满足app和version这两个标签 app: linuxea_app version: v1.0如下--- apiVersion: v1 kind: Service metadata: name: dpment spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app version: v1.0 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: dpment-linuxea spec: replicas: 1 selector: matchLabels: app: linuxea_app version: v0.1 template: metadata: labels: app: linuxea_app version: v0.1 spec: containers: - name: nginx-a image: marksugar/nginx:1.14.a ports: - name: http containerPort: 80 apply> kubectl.exe -n java-demo apply -f .\linuxeav1.yaml service/dpment created deployment.apps/dpment-linuxea createdistioctl ps能看到网格中有sidcar的pod和gateway> kubectl.exe -n java-demo get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dpment ClusterIP 10.68.212.243 <none> 80/TCP 20h java-demo NodePort 10.68.4.28 <none> 8080:31181/TCP 6d21h > kubectl.exe -n java-demo get pod NAME READY STATUS RESTARTS AGE dpment-linuxea-54b8b64c75-b6mqj 2/2 Running 2 (43m ago) 20h java-demo-79485b6d57-rd6bm 2/2 Running 2 (43m ago) 42h[root@linuxea-48 ~]# istioctl ps NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION dpment-linuxea-54b8b64c75-b6mqj.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-56d9c5557-tffdv 1.14.1 istio-egressgateway-7fcb98978c-8t685.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-56d9c5557-tffdv 1.14.1 istio-ingressgateway-55b6cffcbc-9rn99.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-56d9c5557-tffdv 1.14.1 java-demo-79485b6d57-rd6bm.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-56d9c5557-tffdv 1.14.14.2 配置LoadBalancer我们配置一个VIP172.16.100.110/24来模拟LoadBalancer[root@linuxea-11 ~]# ip addr add 172.16.100.110/24 dev eth0 [root@linuxea-11 ~]# ip a | grep 172.16.100.110 inet 172.16.100.110/24 scope global secondary eth0 [root@linuxea-11 ~]# ping 172.16.100.110 PING 172.16.100.110 (172.16.100.110) 56(84) bytes of data. 
64 bytes from 172.16.100.110: icmp_seq=1 ttl=64 time=0.030 ms 64 bytes from 172.16.100.110: icmp_seq=2 ttl=64 time=0.017 ms 64 bytes from 172.16.100.110: icmp_seq=3 ttl=64 time=0.024 ms 64 bytes from 172.16.100.110: icmp_seq=4 ttl=64 time=0.037 ms ^C --- 172.16.100.110 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3081ms rtt min/avg/max/mdev = 0.017/0.027/0.037/0.007 ms而后使用kubectl -n istio-system edit svc istio-ingressgateway编辑 27 clusterIP: 10.68.113.92 28 externalIPs: 29 - 172.16.100.110 30 clusterIPs: 31 - 10.68.113.92 32 externalTrafficPolicy: Cluster 33 internalTrafficPolicy: Cluster 34 ipFamilies: 35 - IPv4如下一旦修改,可通过命令查看[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl -n istio-system get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana ClusterIP 10.68.57.153 <none> 3000/TCP 45h istio-egressgateway ClusterIP 10.68.66.165 <none> 80/TCP,443/TCP 2d16h istio-ingressgateway LoadBalancer 10.68.113.92 172.16.100.110 15021:31787/TCP,80:32368/TCP,443:30603/TCP,31400:30435/TCP,15443:32099/TCP 2d16h istiod ClusterIP 10.68.7.43 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d16h jaeger-collector ClusterIP 10.68.50.134 <none> 14268/TCP,14250/TCP,9411/TCP 45h kiali ClusterIP 10.68.203.141 <none> 20001/TCP,9090/TCP 45h prometheus ClusterIP 10.68.219.101 <none> 9090/TCP 45h tracing ClusterIP 10.68.193.43 <none> 80/TCP,16685/TCP 45h zipkin ClusterIP 10.68.101.144 <none> 9411/TCP 45h4.3.1 nodeportistio-ingressgateway一旦修改为nodeport打开就需要使用ip:port来进行访问nodeport作为clusterip的增强版,但是如果是在一个云环境下可能就需要LoadBalancer事实上LoadBalancer并非是在上述例子中的自己设置的ip,而是一个高可用的ip开始修改nodeport编辑 kubectl.exe -n istio-system edit svc istio-ingressgateway,修改为 type: NodePort,如下: selector: app: istio-ingressgateway istio: ingressgateway sessionAffinity: None type: NodePort status: loadBalancer: {}随机一个端口PS C:\Users\usert> kubectl.exe -n istio-system get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana ClusterIP 10.110.43.38 <none> 3000/TCP 14d istio-egressgateway ClusterIP 10.97.213.128 <none> 80/TCP,443/TCP 14d istio-ingressgateway NodePort 10.97.154.56 <none> 15021:32514/TCP,80:30142/TCP,443:31060/TCP,31400:30785/TCP,15443:32082/TCP 14d istiod ClusterIP 10.98.150.70 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 14d jaeger-collector ClusterIP 10.111.33.218 <none> 14268/TCP,14250/TCP,9411/TCP 14d kiali ClusterIP 10.111.90.166 <none> 20001/TCP,9090/TCP 14d prometheus ClusterIP 10.99.141.97 <none> 9090/TCP 14d tracing ClusterIP 10.104.76.74 <none> 80/TCP,16685/TCP 14d zipkin ClusterIP 10.100.238.112 <none> 9411/TCP 14d4.3.2 kiali开放至外部要想开放kiali至集群外部,需要定义并创建kiali VirtualService,Gateway,DestinationRule资源对象当安装完成,默认安装一些crds(base) [root@linuxea-master1 ~]# kubectl -n istio-system get crds | grep istio authorizationpolicies.security.istio.io 2022-07-14T02:28:00Z destinationrules.networking.istio.io 2022-07-14T02:28:00Z envoyfilters.networking.istio.io 2022-07-14T02:28:00Z gateways.networking.istio.io 2022-07-14T02:28:00Z istiooperators.install.istio.io 2022-07-14T02:28:00Z peerauthentications.security.istio.io 2022-07-14T02:28:00Z proxyconfigs.networking.istio.io 2022-07-14T02:28:00Z requestauthentications.security.istio.io 2022-07-14T02:28:00Z serviceentries.networking.istio.io 2022-07-14T02:28:00Z sidecars.networking.istio.io 2022-07-14T02:28:00Z telemetries.telemetry.istio.io 2022-07-14T02:28:00Z virtualservices.networking.istio.io 2022-07-14T02:28:00Z wasmplugins.extensions.istio.io 2022-07-14T02:28:00Z workloadentries.networking.istio.io 2022-07-14T02:28:00Z workloadgroups.networking.istio.io 
2022-07-14T02:28:00Z和api(base) [root@linuxea-master1 ~]# kubectl -n istio-system api-resources | grep istio wasmplugins extensions.istio.io true WasmPlugin istiooperators iop,io install.istio.io true IstioOperator destinationrules dr networking.istio.io true DestinationRule envoyfilters networking.istio.io true EnvoyFilter gateways gw networking.istio.io true Gateway proxyconfigs networking.istio.io true ProxyConfig serviceentries se networking.istio.io true ServiceEntry sidecars networking.istio.io true Sidecar virtualservices vs networking.istio.io true VirtualService workloadentries we networking.istio.io true WorkloadEntry workloadgroups wg networking.istio.io true WorkloadGroup authorizationpolicies security.istio.io true AuthorizationPolicy peerauthentications pa security.istio.io true PeerAuthentication requestauthentications ra security.istio.io true RequestAuthentication telemetries telemetry telemetry.istio.io true Telemetry并且可以通过命令过滤--api-group=networking.istio.io(base) [root@linuxea-master1 ~]# kubectl -n istio-system api-resources --api-group=networking.istio.io NAME SHORTNAMES APIGROUP NAMESPACED KIND destinationrules dr networking.istio.io true DestinationRule envoyfilters networking.istio.io true EnvoyFilter gateways gw networking.istio.io true Gateway proxyconfigs networking.istio.io true ProxyConfig serviceentries se networking.istio.io true ServiceEntry sidecars networking.istio.io true Sidecar virtualservices vs networking.istio.io true VirtualService workloadentries we networking.istio.io true WorkloadEntry workloadgroups wg networking.istio.io true WorkloadGroup这些可以通过帮助看来是如何定义的,比如gw的配置kubectl explain gw.spec.server定义gatewayGateway标签匹配istio-ingressgateway selector: app: istio-ingressgateway创建到istio-ingressgaetway下,如下apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: kiali-gateway namespace: istio-system spec: selector: app: istio-ingressgateway servers: - port: number: 20001 name: http-kiali protocol: HTTP hosts: - "kiali.linuxea.com" ---创建只会可用 istioctl proxy-status查看此时我们通过标签获取到istio-ingress的pod名称kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath={.items[0].metadata.name} && echo(base) [root@linuxea-master1 ~]# kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath={.items[0].metadata.name} && echo istio-ingressgateway-559d4ffc58-7rgft并且配置成变量进行调用(base) [root@linuxea-master1 ~]# INGS=$(kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath={.items[0].metadata.name}) (base) [root@linuxea-master1 ~]# echo $INGS istio-ingressgateway-559d4ffc58-7rgft随后查看以及定义的侦听器(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config listeners $INGS ADDRESS PORT MATCH DESTINATION 0.0.0.0 8080 ALL Route: http.8080 0.0.0.0 15021 ALL Inline Route: /healthz/ready* 0.0.0.0 15090 ALL Inline Route: /stats/prometheus* 0.0.0.0 20001 ALL Route: http.20001可以看到,此时的0.0.0.0 20001 ALL Route: http.20001以及被添加但是在Ingress中是不会自动创建routed,因此在routes中VIRTUAL SERVICE是404(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config routes $INGS NAME DOMAINS MATCH VIRTUAL SERVICE http.8080 * /productpage bookinfo.java-demo http.8080 * /static* bookinfo.java-demo http.8080 * /login bookinfo.java-demo http.8080 * /logout bookinfo.java-demo http.8080 * /api/v1/products* bookinfo.java-demo http.20001 * /* 404 * /stats/prometheus* * /healthz/ready*gateway创建完成在名称空间下(base) [root@linuxea-master1 package]# kubectl -n istio-system get gw NAME AGE kiali-gateway 
3m于是,我们创建VirtualServiceVirtualService在gateway中配置了hosts,于是在VirtualService中需要指明hosts,并且需要指明流量规则适配的位置,比如Ingress-gateway,在上的配置里面ingress-gateway的名字是kiali-gateway,于是在这里就配置上。gateway中的端口是20001,默认。将流量路由到一个kiali的serivce,端口是20001,该service将流量调度到后端的podVirtualService要么配置在ingress gateway作为接入流量,要么就在配在集群内处理内部流量hosts确保一致关联gateway的名称(base) [root@master2 ~]# kubectl -n istio-system get gw NAME AGE kiali-gateway 1m18sroute 的host指向的是上游集群的cluster,而这个cluster名称和svc的名称是一样的。请注意,这里的流量不会发送到svc ,svc负责发现,流量发从到istio的cluster中,这与ingress-nginx的发现相似(base) [root@master2 ~]# kubectl -n istio-system get svc kiali NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kiali ClusterIP 10.111.90.166 <none> 20001/TCP,9090/TCP 14m如下:apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: kiali-virtualservice namespace: istio-system spec: hosts: - "kiali.linuxea.com" gateways: - kiali-gateway http: - match: - port: 20001 route: - destination: host: kiali port: number: 20001 ---此时routes中就会发现到http.20001 kiali.linuxea.com /* kiali-virtualservice.istio-system(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config routes $INGS NAME DOMAINS MATCH VIRTUAL SERVICE http.8080 * /productpage bookinfo.java-demo http.8080 * /static* bookinfo.java-demo http.8080 * /login bookinfo.java-demo http.8080 * /logout bookinfo.java-demo http.8080 * /api/v1/products* bookinfo.java-demo http.20001 kiali.linuxea.com /* kiali-virtualservice.istio-system * /stats/prometheus* * /healthz/ready*创建完成后vs也会在名称空间下被创建(base) [root@linuxea-master1 package]# kubectl -n istio-system get vs NAME GATEWAYS HOSTS AGE kiali-virtualservice ["kiali-gateway"] ["kiali.linuxea.com"] 26m同时查看cluster(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config cluster $INGS|grep kiali kiali.istio-system.svc.cluster.local 9090 - outbound EDS kiali.istio-system.svc.cluster.local 20001 - outbound EDS要想通过web访问,svc里面必然需要有这个20001端口,如果没用肯定是不能够访问的,因为我们在配置里配置的就是20001,因此修改(base) [root@linuxea-master1 ~]# kubectl -n istio-system edit svc istio-ingressgateway .... 
- name: http-kiali nodePort: 32653 port: 20001 protocol: TCP targetPort: 20001 ...svc里面以及由了一个20001端口被映射成32653(base) [root@linuxea-master1 ~]# kubectl -n istio-system get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana ClusterIP 10.110.43.38 <none> 3000/TCP 14d istio-egressgateway ClusterIP 10.97.213.128 <none> 80/TCP,443/TCP 15d istio-ingressgateway LoadBalancer 10.97.154.56 172.16.15.111 15021:32514/TCP,80:30142/TCP,20001:32653/TCP,443:31060/TCP,31400:30785/TCP,15443:32082/TCP 15d这两个端口都可以访问2000132653并且在proxy-config的routes中也会体现出来(base) [root@linuxea-master1 ~]# istioctl proxy-config routes $INGS.istio-system NAME DOMAINS MATCH VIRTUAL SERVICE http.8080 * /productpage bookinfo.java-demo http.8080 * /static* bookinfo.java-demo http.8080 * /login bookinfo.java-demo http.8080 * /logout bookinfo.java-demo http.8080 * /api/v1/products* bookinfo.java-demo http.20001 kiali.linuxea.com /* kiali-virtualservice.istio-system * /stats/prometheus* * /healthz/ready* DestinationRuleDestinationRule默认会自动生成,并非必须要定义的,这却决于是否需要更多的功能扩展定义该kiali的serivce将流量调度到后端的pod。此时ingress-gateway将流量转到kiali的pod之间是否需要流量加密,或者不加密,这部分的调度是由cluster决定。而其中使用什么样的调度算法,是否启用链路加密,是用DestinationRule来定义的。如:tls:mode: DISABLE 不使用链路加密 trafficPolicy: tls: mode: DISABLEhost: kiali : 关键配置,匹配到service的name如下apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: kiali namespace: istio-system spec: host: kiali trafficPolicy: tls: mode: DISABLEapply只会会创建一个DestinationRule(base) [root@linuxea-master1 ~]# kubectl -n istio-system get dr NAME HOST AGE kiali kiali 88s配置完成查看cluster(base) [root@linuxea-master1 ~]# istioctl proxy-config cluster $INGS.istio-system ... kiali.istio-system.svc.cluster.local 9090 - outbound EDS kiali.istio-system kiali.istio-system.svc.cluster.local 20001 - outbound EDS kiali.istio-system ...gateway是不会生效到网格内部的istio-system是istio安装组件的名称空间,也就是控制平面(base) [root@linuxea-master1 ~]# istioctl proxy-config listeners $INGS.istio-system ADDRESS PORT MATCH DESTINATION 0.0.0.0 8080 ALL Route: http.8080 0.0.0.0 15021 ALL Inline Route: /healthz/ready* 0.0.0.0 15090 ALL Inline Route: /stats/prometheus* 0.0.0.0 20001 ALL Route: http.20001java-demo是数据平面的名称空间在java-demo中只有出站反向的PassthroughCluster,而这个PassthroughCluster是由service生成的一旦创建service就自动创建,在这里创建到网关上,并没有在网格内(base) [root@linuxea-master1 ~]# istioctl -n java-demo proxy-config listeners marksugar --port 20001 ADDRESS PORT MATCH DESTINATION 0.0.0.0 20001 Trans: raw_buffer; App: http/1.1,h2c Route: 20001 0.0.0.0 20001 ALL PassthroughCluster80端口的清单如下:apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: kiali-virtualservice namespace: istio-system spec: hosts: - "kiali.linuxea.com" gateways: - kiali-gateway http: - match: - uri: prefix: / route: - destination: host: kiali port: number: 20001 --- apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: kiali-gateway namespace: istio-system spec: selector: app: istio-ingressgateway servers: - port: number: 80 name: http-kiali protocol: HTTP hosts: - "kiali.linuxea.com" --- apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: kiali namespace: istio-system spec: host: kiali trafficPolicy: tls: mode: DISABLE ---apply> kubectl.exe apply -f .\kiali.linuxea.com.yaml virtualservice.networking.istio.io/kiali-virtualservice created gateway.networking.istio.io/kiali-gateway created destinationrule.networking.istio.io/kiali created此时我们通过istioctl proxy-config查看[root@linuxea-48 ~]# istioctl -n istio-system proxy-config all istio-ingressgateway-55b6cffcbc-4vc94 SERVICE FQDN PORT 
SUBSET DIRECTION TYPE DESTINATION RULE BlackHoleCluster - - - STATIC agent - - - STATIC 此处删除,省略 skywalking-ui.skywalking.svc.cluster.local 80 - outbound EDS tracing.istio-system.svc.cluster.local 80 - outbound EDS tracing.istio-system.svc.cluster.local 16685 - outbound EDS xds-grpc - - - STATIC zipkin - - - STRICT_DNS zipkin.istio-system.svc.cluster.local 9411 - outbound EDS ADDRESS PORT MATCH DESTINATION 0.0.0.0 8080 ALL Route: http.8080 0.0.0.0 15021 ALL Inline Route: /healthz/ready* 0.0.0.0 15090 ALL Inline Route: /stats/prometheus* NAME DOMAINS MATCH VIRTUAL SERVICE http.8080 kiali.linuxea.com /* kiali-virtualservice.istio-system * /stats/prometheus* * /healthz/ready* RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE default Cert Chain ACTIVE true 244178067234775886684219941410566024258 2022-07-18T06:25:04Z 2022-07-17T06:23:04Z ROOTCA CA ACTIVE true 102889470196612755194280100451505524786 2032-07-10T16:33:20Z 2022-07-13T16:33:20Z同时可以使用命令查看路由[root@linuxea-48 /usr/local]# istioctl -n java-demo pc route dpment-linuxea-54b8b64c75-b6mqj NAME DOMAINS MATCH VIRTUAL SERVICE 80 argocd-server.argocd, 10.68.36.89 /* 80 dpment, dpment.java-demo + 1 more... /* 此处省略 /* inbound|80|| * /* 15014 istiod.istio-system, 10.68.7.43 /* 16685 tracing.istio-system, 10.68.193.43 /* 20001 kiali.istio-system, 10.68.203.141 /* * /stats/prometheus* 默认就是cluster,因此VIRTUAL SERVICE是空的,可以通过命令查看EDS是envoy中的,表示能够通过eds动态的方式来发现后端的pod并生成一个集群的我们使用命令过滤后端有几个pod,此时我们只有一个[root@linuxea-48 /usr/local]# istioctl -n java-demo pc endpoint dpment-linuxea-54b8b64c75-b6mqj |grep dpment 172.20.1.12:80 HEALTHY OK outbound|80||dpment.java-demo.svc.cluster.local如果我们进行scale,将会发生变化[root@linuxea-11 /usr/local]# istioctl -n java-demo pc endpoint dpment-linuxea-54b8b64c75-b6mqj |grep dpment 172.20.1.12:80 HEALTHY OK outbound|80||dpment.java-demo.svc.cluster.local 172.20.2.168:80 HEALTHY OK outbound|80||dpment.java-demo.svc.cluster.local此时我们准备在访问kiali修改本地Hosts172.16.100.110 kiali.linuxea.comkiali.linuxea.com4.3.3 grafana在kiali中,配置了20002的端口进行访问,并且还修改添加了service的pod才完成访问,而后又补充了一个80端口的访问,于是乎,我们将grafana也配置成80端口访问一旦配置了80端口,hosts就不能配置成*了配置关键点apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: grafana-gateway namespace: istio-system spec: selector: app: istio-ingressgateway servers: - port: number: 80 # 80端口侦听器 name: http protocol: HTTP hosts: - "grafana.linuxea.com" # 域名1,这里可以是多个域名,不能为* --- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: grafana-virtualservice namespace: istio-system spec: hosts: - "grafana.linuxea.com" # 匹配Gateway的hosts gateways: - grafana-gateway # 匹配Gateway的name,如果不是同一个名称空间的需要加名称空间引用 http: - match: # 80端口这里已经不能作为识别标准,于是match url - uri: prefix: / # 只要是针对grafana.linuxea.com发起的请求,无论是什么路径 route: - destination: host: grafana port: number: 3000 --- # DestinationRule 在这里是可有可无的 apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: grafana namespace: istio-system spec: host: grafana trafficPolicy: tls: mode: DISABLE ---于是,我们先创建gatewayapiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: grafana-gateway namespace: istio-system spec: selector: app: istio-ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "grafana.linuxea.com"apply(base) [root@linuxea-master1 ~]# kubectl -n istio-system get gw NAME AGE grafana-gateway 52s kiali-gateway 24hgateway创建后,这里的端口是808080端口是作为流量拦截的,这里的80端口都会被转换成8080,访问仍然是请求80端口(base) [root@linuxea-master1 ~]# istioctl proxy-config listeners $INGS.istio-system ADDRESS PORT MATCH DESTINATION 
0.0.0.0 8080 ALL Route: http.8080 0.0.0.0 15021 ALL Inline Route: /healthz/ready* 0.0.0.0 15090 ALL Inline Route: /stats/prometheus* 0.0.0.0 20001 ALL Route: http.20001定义virtualserviceapiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: grafana-virtualservice namespace: istio-system spec: hosts: - "grafana.linuxea.com" gateways: - grafana-gateway http: - match: - uri: prefix: / route: - destination: host: grafana port: number: 3000apply(base) [root@linuxea-master1 ~]# kubectl -n istio-system get vs NAME GATEWAYS HOSTS AGE grafana-virtualservice ["grafana-gateway"] ["grafana.linuxea.com"] 82s kiali-virtualservice ["kiali-gateway"] ["kiali.linuxea.com"] 24hroutes中已经配置有了DOMAINS和VIRTUAL SERVICE的配置(base) [root@linuxea-master1 opt]# istioctl -n istio-system proxy-config routes $INGS NAME DOMAINS MATCH VIRTUAL SERVICE http.8080 * /productpage bookinfo.java-demo http.8080 * /static* bookinfo.java-demo http.8080 * /login bookinfo.java-demo http.8080 * /logout bookinfo.java-demo http.8080 * /api/v1/products* bookinfo.java-demo http.8080 grafana.linuxea.com /* grafana-virtualservice.istio-system http.20001 kiali.linuxea.com /* kiali-virtualservice.istio-system而在cluster中的grafana的出站是存在的(base) [root@linuxea-master1 opt]# istioctl -n istio-system proxy-config cluster $INGS|grep grafana grafana.istio-system.svc.cluster.local 3000 - outbound EDS grafana.monitoring.svc.cluster.local 3000 - outbound EDS 此时 ,我们 配置本地hosts后就可以打开grafana其中默认已经添加了模板4.4 简单测试于是,我们在java-demo pod里面ping dpment[root@linuxea-48 ~]# kubectl -n java-demo exec -it java-demo-79485b6d57-rd6bm -- /bin/bash Defaulting container name to java-demo. Use 'kubectl describe pod/java-demo-79485b6d57-rd6bm -n java-demo' to see all of the containers in this pod. bash-5.1$ while true;do curl dpment; sleep 0.2;done linuxea-dpment-linuxea-54b8b64c75-b6mqj.com-127.0.0.1/8 172.20.1.254/24如下图此时 ,我们的访问的dpment并没有走service,而是服务网格的sidecat而后返回kiali在所属名称空间查看4.5 dpment开放至外部在上面,我们将kiali ui开放至外部访问,现在我们将dpment也开放至外部。因此,我除了deplpoyment的yaml,我们还需要配置其他的部分此前的deployment.yaml--- apiVersion: v1 kind: Service metadata: name: dpment spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: linuxea_app version: v0.1 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: dpment-linuxea spec: replicas: 1 selector: matchLabels: app: linuxea_app version: v0.1 template: metadata: labels: app: linuxea_app version: v0.1 spec: containers: - name: nginx-a image: marksugar/nginx:1.14.a ports: - name: http containerPort: 80 配置istio外部访问apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment-virtualservice namespace: java-demo spec: hosts: - "kiali.linuxea.com" gateways: - kiali-gateway http: - match: - uri: prefix: / route: - destination: host: dpment port: number: 80 --- apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: dpment-gateway namespace: java-demo spec: selector: app: istio-ingressgateway servers: - port: number: 80 name: dpment protocol: HTTP hosts: - "kiali.linuxea.com" --- apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: dpment-destinationRule namespace: java-demo spec: host: kiali trafficPolicy: tls: mode: DISABLE ---而后就可以通过浏览器进行访问如下关键概念listenersservice本身就会发现后端的pod的ip端口信息,而listeners是借助于发现的service实现的
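When istio-ingressgateway is kept as a NodePort service instead of a LoadBalancer, the mapped port can be looked up and the gateway exercised with an explicit Host header. A rough sketch; <node-ip> stands for any reachable node address and is not taken from the article:

# which nodePort was assigned to the gateway's port 80
kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
# send a request for the kiali virtual host through that port
curl -H "Host: kiali.linuxea.com" http://<node-ip>:<nodePort>/

This goes through the same http.8080 route that istioctl proxy-config routes showed above, without relying on local hosts entries.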
2022-11-18
398 reads
0 comments
0 likes
2022-10-18
linuxea: a first look at the istio service mesh (3)
网格中有很多service,sidecar就会看到很多egerss listeners通过正向代理来确保pod访问之外服务的时候是通过sidecar来代理,ingress是来接受外部访问内部的,但这并不多,该pod被后端端点的service 的端口所属的服务会在该pod的sidecar生成ingress listeners,通过ingress反向代理完成访问如: istio-system和被打上标签的名称空间,这两个名称空间下的service会被发现并转换成sidecar的envoy的网格内配置,经过网格的流量转化为sidecar转发。而service主要被istio用于服务发现服务而存在的 sidecar通过VirtualService来管理的,流量到达sidecar后被拦截且重定向到一个统一的端口,所有出去的流量也会被在该pod被iptables拦截重定向到这个唯一的端口,分别是15001和15006的虚拟端口,这个过程会生成很多iptables规则,这个功能就称为流量拦截,拦截后被交给eneoy作为正向或者反向代理此前的prox-status能看到配置下发的状态PS C:\Users\usert> istioctl.exe proxy-status NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-6d89cf9847-46c4z.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 istio-egressgateway-65b46d7874-xdjkr.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-8689fcd796-mqd8n 1.14.1 istio-ingressgateway-559d4ffc58-7rgft.istio-system Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 marksugar.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 productpage-v1-f44fc594c-fmrf4.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 ratings-v1-6c77b94555-twmls.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 reviews-v1-765697d479-tbprw.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 reviews-v2-86855c588b-sm6w2.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 reviews-v3-6ff967c97f-g6x8b.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1 sleep-557747455f-46jf5.java-demo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8689fcd796-mqd8n 1.14.1proxy-config能查看对应pod之上sidecar的配置信息,这比config_dump查看的要直观3.1 查看listeners查看marksugar之上的listenersistioctl -n java-demo proxy-config listeners marksugar查看marksugar之上sidecar的listeners,默认是格式化后的格式展示PS C:\Users\usert> istioctl.exe proxy-config listeners marksugar.java-demo ADDRESS PORT MATCH DESTINATION 10.96.0.10 53 ALL Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local 0.0.0.0 80 Trans: raw_buffer; App: http/1.1,h2c Route: 80 0.0.0.0 80 ALL PassthroughCluster 10.104.119.238 80 Trans: raw_buffer; App: http/1.1,h2c Route: skywalking-ui.skywalking.svc.cluster.local:80 10.104.119.238 80 ALL Cluster: outbound|80||skywalking-ui.skywalking.svc.cluster.local 10.104.18.194 80 Trans: raw_buffer; App: http/1.1,h2c Route: web-nginx.test.svc.cluster.local:80 10.104.18.194 80 ALL Cluster: outbound|80||web-nginx.test.svc.cluster.local 10.107.112.228 80 Trans: raw_buffer; App: http/1.1,h2c Route: marksugar.java-demo.svc.cluster.local:80 10.107.112.228 80 ALL Cluster: outbound|80||marksugar.java-demo.svc.cluster.local 10.102.45.140 443 ALL Cluster: outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local 10.107.160.181 443 ALL Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local 10.109.235.93 443 ALL Cluster: outbound|443||prometheus-adapter.monitoring.svc.cluster.local 10.96.0.1 443 ALL Cluster: outbound|443||kubernetes.default.svc.cluster.local ........对于每一个网格内的服务都会有两个listeners,一个向外outbound,一个向内的Route对于这些,我们针对性进行--port过滤查看istioctl -n java-demo proxy-config listeners marksugar --port 80PS C:\Users\usert> istioctl.exe -n java-demo proxy-config listeners marksugar --port 80 ADDRESS PORT MATCH DESTINATION 0.0.0.0 80 Trans: raw_buffer; App: http/1.1,h2c Route: 80 0.0.0.0 80 ALL PassthroughCluster 10.104.119.238 80 Trans: raw_buffer; App: 
http/1.1,h2c Route: skywalking-ui.skywalking.svc.cluster.local:80 10.104.119.238 80 ALL Cluster: outbound|80||skywalking-ui.skywalking.svc.cluster.local 10.104.18.194 80 Trans: raw_buffer; App: http/1.1,h2c Route: web-nginx.test.svc.cluster.local:80 10.104.18.194 80 ALL Cluster: outbound|80||web-nginx.test.svc.cluster.local 10.107.112.228 80 Trans: raw_buffer; App: http/1.1,h2c Route: marksugar.java-demo.svc.cluster.local:80 10.107.112.228 80 ALL Cluster: outbound|80||marksugar.java-demo.svc.cluster.local在或者添加ip过滤 --addressistioctl -n java-demo proxy-config listeners marksugar --port 80 --address 10.104.119.238PS C:\Users\usert> istioctl.exe -n java-demo proxy-config listeners marksugar --port 80 --address 10.104.119.238 ADDRESS PORT MATCH DESTINATION 10.104.119.238 80 Trans: raw_buffer; App: http/1.1,h2c Route: skywalking-ui.skywalking.svc.cluster.local:80 10.104.119.238 80 ALL Cluster: outbound|80||skywalking-ui.skywalking.svc.cluster.local如果需要查看更详细的配置需要在后添加-o yaml,其他参考--help3.2 查看routes当路由进入侦听器后,路由的匹配规则是先匹配虚拟主机,而后在虚拟主机内部匹配流量匹配路由条件MATCHistioctl -n java-demo proxy-config routes marksugar过滤80istioctl -n java-demo proxy-config routes marksugar --name 80匹配DOMAINS匹配MATCH路由目标VIRTUAL SERVICE没有显示VIRTUAL SERVICE会被路由到DOMAINS到后端端点PS C:\Users\usert> istioctl.exe -n java-demo proxy-config routes marksugar --name 80 NAME DOMAINS MATCH VIRTUAL SERVICE 80 argocd-server.argocd, 10.98.127.60 /* 80 details.java-demo.svc.cluster.local /* details.java-demo 80 dpment-a, dpment-a.java-demo + 1 more... /* 80 dpment-b, dpment-b.java-demo + 1 more... /* 80 dpment, dpment.java-demo + 1 more... /* dpment.java-demo 80 dpment, dpment.java-demo + 1 more... /* dpment.java-demo 80 istio-egressgateway.istio-system, 10.97.213.128 /* 80 istio-ingressgateway.istio-system, 10.97.154.56 /* 80 kuboard.kube-system, 10.97.104.136 /* 80 marksugar, marksugar.java-demo + 1 more... /* 80 productpage.java-demo.svc.cluster.local /* productpage.java-demo 80 ratings.java-demo.svc.cluster.local /* ratings.java-demo 80 reviews.java-demo.svc.cluster.local /* reviews.java-demo 80 skywalking-ui.skywalking, 10.104.119.238 /* 80 sleep, sleep.java-demo + 1 more... 
/* 80 tracing.istio-system, 10.104.76.74 /* 80 web-nginx.test, 10.104.18.194 /*其他参考--help3.3 查看cluster查看istioctl.exe -n java-demo proxy-config cluster marksugar过滤端口istioctl -n java-demo proxy-config cluster marksugar --port 80inbound为入站侦听器,outbound为出站PS C:\Users\usert> istioctl.exe -n java-demo proxy-config cluster marksugar --port 80 SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE 80 - inbound ORIGINAL_DST argocd-server.argocd.svc.cluster.local 80 - outbound EDS dpment-a.java-demo.svc.cluster.local 80 - outbound EDS dpment-b.java-demo.svc.cluster.local 80 - outbound EDS dpment.java-demo.svc.cluster.local 80 - outbound EDS istio-egressgateway.istio-system.svc.cluster.local 80 - outbound EDS istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS kuboard.kube-system.svc.cluster.local 80 - outbound EDS marksugar.java-demo.svc.cluster.local 80 - outbound EDS skywalking-ui.skywalking.svc.cluster.local 80 - outbound EDS sleep.java-demo.svc.cluster.local 80 - outbound EDS tracing.istio-system.svc.cluster.local 80 - outbound EDS web-nginx.test.svc.cluster.local 80 - outbound EDS除此之外可以使用 --direction查看特定方向的详情,其他参考--help istioctl.exe -n java-demo proxy-config cluster marksugar --port 80 --direction inbound3.4 查看endpoints对于集群而言还可以看endpoints使用 --port 80过滤80端口PS C:\Users\usert> istioctl.exe -n java-demo proxy-config endpoints marksugar --port 80 ENDPOINT STATUS OUTLIER CHECK CLUSTER 130.130.0.106:80 HEALTHY OK outbound|80||marksugar.java-demo.svc.cluster.local 130.130.0.12:80 HEALTHY OK outbound|80||kuboard.kube-system.svc.cluster.local 130.130.0.16:80 HEALTHY OK outbound|80||web-nginx.test.svc.cluster.local 130.130.0.17:80 HEALTHY OK outbound|80||web-nginx.test.svc.cluster.local 130.130.0.18:80 HEALTHY OK outbound|80||web-nginx.test.svc.cluster.local 130.130.1.103:80 HEALTHY OK outbound|80||sleep.java-demo.svc.cluster.local 130.130.1.60:80 HEALTHY OK outbound|80||web-nginx.test.svc.cluster.local 130.130.1.61:80 HEALTHY OK outbound|80||web-nginx.test.svc.cluster.local如果要查看所有的信息,使用all即可istioctl -n java-demo proxy-config all marksugar --port 80PS C:\Users\usert> istioctl.exe -n java-demo proxy-config all marksugar --port 80 SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE 80 - inbound ORIGINAL_DST argocd-server.argocd.svc.cluster.local 80 - outbound EDS dpment-a.java-demo.svc.cluster.local 80 - outbound EDS dpment-b.java-demo.svc.cluster.local 80 - outbound EDS dpment.java-demo.svc.cluster.local 80 - outbound EDS nginx.test.svc.cluster.local ........... 10.107.112.228 80 Trans: raw_buffer; App: http/1.1,h2c Route: marksugar.java-demo.svc.cluster.local:80 10.107.112.228 80 ALL Cluster: outbound|80||marksugar.java- .............. RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE default Cert Chain ACTIVE true 102059676829591632788425320896870277908 2022-07-27T21:03:04Z 2022-07-26T21:01:04Z ROOTCA CA ACTIVE true 301822650017575269000203210584654904630 2032-07-11T02:27:37Z 2022-07-14T02:27:37Z3.5 查看bootstrap有很多配置是之后加载的,而bootstrap是启动的基础配置istioctl -n java-demo proxy-config bootstrap marksugar
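The proxy-config views above are a friendlier rendering of what the sidecar's config_dump exposes. If the raw Envoy configuration is needed, it can be pulled from the admin interface on port 15000. A minimal sketch, assuming the same marksugar pod in java-demo:

# open the Envoy admin dashboard of the sidecar via port-forward
istioctl dashboard envoy marksugar.java-demo
# or fetch the full dump non-interactively through pilot-agent
kubectl -n java-demo exec marksugar -c istio-proxy -- pilot-agent request GET config_dump > config_dump.json

The JSON dump contains the listener, route and cluster configuration that the istioctl subcommands summarize.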
2022-10-18
380 reads
0 comments
0 likes