2023-03-07
linuxea: Building a PostgreSQL 12 primary/standby pair with docker-compose
For PostgreSQL synchronous streaming replication we will simply call the two roles the primary (master) and the standby (slave). This post walks through building the pair. To cut down on installation steps I use the official Docker image (simply mirrored to Aliyun) and orchestrate it with docker-compose:

- Docker-Compose version: v2.16.0
- image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/postgres:12.14-alpine3.17

docker and docker-compose therefore need to be installed in advance.

| ID | IP | Role | Spec |
| --- | --- | --- | --- |
| 1 | 172.168.204.41 | master | 4c8g |
| 2 | 172.168.204.42 | slave | 4c8g |

The relevant startup parameters are as follows (this excerpt is taken from another page):

```
wal_level = replica                # this host writes WAL for replication
max_wal_senders = 5                # maximum number of streaming replication connections; roughly one per standby
wal_keep_segments = 128            # maximum number of WAL segments retained for streaming replication
wal_sender_timeout = 60s           # timeout for the primary when sending data over the stream
max_connections = 200              # a standby serving mostly reads should allow more connections than the primary
hot_standby = on                   # this machine is not only for archiving, it also serves queries
max_standby_streaming_delay = 30s  # maximum delay allowed for the streaming standby
wal_receiver_status_interval = 10s # longest interval between standby status reports to the primary
hot_standby_feedback = on          # report replication conflicts back to the primary
wal_log_hints = on                 # also do full page writes of non-critical updates
```

**master**

With docker-compose, these parameters are passed with `-c` in the `command` field:

```yaml
version: '3.3'
services:
  postgresql12-m:
    container_name: postgresql12-m
    image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/postgres:12.14-alpine3.17
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=mysecretpassword
      - PGDATA=/var/lib/postgresql/data/pgdata
    command: "postgres -c wal_level=replica -c max_wal_senders=15 -c wal_keep_segments=128 -c wal_sender_timeout=60s -c max_connections=200 -c hot_standby=on -c max_standby_streaming_delay=30s -c wal_receiver_status_interval=10s -c hot_standby_feedback=on -c wal_log_hints=on"
    volumes:
      - /etc/localtime:/etc/localtime:ro  # timezone
      - /data/postgresql12:/var/lib/postgresql/data
      - /data/postgresql12/pg_hba.conf:/var/lib/postgresql/data/pgdata/pg_hba.conf
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    ports:
      - 8080:8080
      - 5432:5432
```

Then start it:

```bash
docker-compose -f docker-compose.yaml up -d
```

Enter the container and create the replication user:

```sql
create role replica with replication login password '123456';
alter user replica with password '123456';
-- or
CREATE USER replica WITH REPLICATION LOGIN CONNECTION LIMIT 10 ENCRYPTED PASSWORD '123456';
```

As follows:

```
docker exec --user=postgres -it postgresql12-m bash
2860b7926327:/$ psql
psql (12.14)
Type "help" for help.

postgres=# create role replica with replication login password '123456';
CREATE ROLE
postgres=# alter user replica with password '123456';
ALTER ROLE
```

After the first start we need to delete the auto-generated configuration file and replace it before restarting; the change only takes effect then. postgres generates its configuration automatically, so the file can only be replaced after the container has generated it. Rewrite pg_hba.conf as follows:

```bash
docker rm -f postgresql12-m
rm -f /data/postgresql12/pgdata/pg_hba.conf
cat > /data/postgresql12/pgdata/pg_hba.conf << EOFFF
local   all             all                     trust
host    all             all     0.0.0.0/0       md5
host    all             all     ::1/128         trust
host    replication     replica 0.0.0.0/0       md5
EOFFF
```

Besides that, append the settings required for streaming replication directly to /data/postgresql12/pgdata/postgresql.conf:

```bash
PG_FILE=/data/postgresql12/pgdata/postgresql.conf
echo "synchronous_standby_names = 'standbydb1' # only set for synchronous streaming replication" >> $PG_FILE
echo "synchronous_commit = 'remote_write'" >> $PG_FILE
```

Finally, start it again:

```bash
docker-compose -f docker-compose.yaml up -d
```

If necessary, stop the firewall or open port 5432 through it.

**slave**

The standby's `command` is identical to the master's; only the names change, with a `-s` suffix to tell them apart:

```yaml
version: '3.3'
services:
  postgresql12-s:
    container_name: postgresql12-s
    image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/postgres:12.14-alpine3.17
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=mysecretpassword
      - PGDATA=/var/lib/postgresql/data/pgdata
    command: "postgres -c wal_level=replica -c max_wal_senders=15 -c wal_keep_segments=128 -c wal_sender_timeout=60s -c max_connections=200 -c hot_standby=on -c max_standby_streaming_delay=30s -c wal_receiver_status_interval=10s -c hot_standby_feedback=on -c wal_log_hints=on"
    volumes:
      - /etc/localtime:/etc/localtime:ro  # timezone
      - /data/postgresql12:/var/lib/postgresql/data
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    ports:
      - 8080:8080
      - 5432:5432
```

Start the database:

```bash
docker-compose -f docker-compose.yaml up -d
```

**Backing up the data**

The data can be backed up with either pg_start_backup or pg_basebackup; here we use pg_basebackup.

1. pg_basebackup

The image ships with pg_basebackup, so on the standby node, inside the container, run:

```bash
pg_basebackup -h 172.168.204.41 -p 5432 -U replica -Fp -Xs -Pv -R -D /var/lib/postgresql/data/pgdata-latest
```

This backs up the remote primary into /var/lib/postgresql/data/pgdata-latest, which is inside the container's mounted path; the data first lands in pgdata-latest and we switch directories afterwards.

```
[root@node2 pgdata-m]# docker exec -it postgresql12-s bash
142531a6f29e:/# pg_basebackup -h 172.168.204.41 -p 5432 -U replica -Fp -Xs -Pv -R -D /var/lib/postgresql/data/pgdata-latest
Password:
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/6000028 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_76"
24656/24656 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 0/6000100
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: base backup completed
```

The backed-up data directory is now /var/lib/postgresql/data/pgdata-latest.

**Skipping pg_start_backup**

If you used pg_basebackup, you can skip this method. If pg_basebackup is not an option, run pg_start_backup on the primary and then copy the data directory:

```
docker exec -it --user=postgres postgresql12-m bash
8e481427e025:/$ psql -U postgres
psql (12.14)
Type "help" for help.

postgres=# select pg_start_backup('$DATE',true);
 pg_start_backup
-----------------
 0/2000028
(1 row)
```

After pg_start_backup, writes still go to the WAL but are no longer flushed to the data files until pg_stop_backup() is executed. Copy the data directory and scp it to the standby:

```bash
cp -r /data/postgresql/pgdata ./pgdata-m
tar -zcf pgdata-m.tar.gz pgdata-m
scp -r ./pgdata-m.tar.gz 172.168.204.42:~/
```

When the copy is done, stop the backup:

```
postgres=# select pg_stop_backup();
NOTICE:  WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup
 pg_stop_backup
----------------
 0/2000138
(1 row)
```

Back on the standby, remove the container and unpack the data taken from the primary:

```
[root@node2 postgres]# docker rm -f postgresql12-s
postgresql12-s
[root@node2 ~]# tar xf pgdata-m.tar.gz -C /data/postgresql12/
[root@node2 ~]# ll /data/postgresql12/
total 8
drwx------  2 70   root   35 Mar  6 17:31 pgdata
drwx------ 19 root root 4096 Mar  6 17:33 pgdata-m
-rw-r--r--  1 70   root  113 Mar  6 17:25 pg_hba.conf
```

2. Change the data directory

The backup now lives in /var/lib/postgresql/data/pgdata-latest, so point the environment variable in docker-compose at that directory:

```yaml
- PGDATA=/var/lib/postgresql/data/pgdata-latest
```

In PostgreSQL 12 a standby declares itself by creating the file standby.signal, and standby.signal also takes precedence over everything else. Creating the file is enough; you can put anything in it. For nostalgia's sake we write the old-style line:

```bash
echo "standby_mode = 'on'" > /data/postgresql12/pgdata-latest/standby.signal
```

Now check the standby's configuration: make sure the settings in postgresql.auto.conf match expectations. postgresql.auto.conf matters because it is read with priority over postgresql.conf. `pg_basebackup -R` writes the connection credentials into postgresql.auto.conf, so after running pg_basebackup you need to adjust it again. I added the following, including the replication account, to both postgresql.auto.conf and postgresql.conf:

```
hot_standby = 'on'
primary_conninfo = 'application_name=standbydb1 user=replica password=123456 host=172.168.204.41 port=5432 sslmode=disable sslcompression=0 gssencmode=disable krbsrvname=postgres target_session_attrs=any'
```

Editing the files requires a restart; the same change can also be made at runtime:

```sql
show primary_conninfo; -- view
alter system set primary_conninfo = 'application_name=standbydb1 user=replica password=123456 host=172.168.204.41 port=5432 sslmode=disable sslcompression=0 gssencmode=disable krbsrvname=postgres target_session_attrs=any';
```

Then start the standby:

```bash
docker-compose -f docker-compose.yaml up -d
```

**Verifying replication**

Back on the primary:

```
linuxea=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 144
usesysid         | 16384
usename          | replica
application_name | standbydb1
client_addr      | 172.168.204.42
client_port      | 47492
backend_start    | 2023-03-07 10:18:45.56999+08
backend_xmin     | 502
state            | streaming
sent_lsn         | 0/F3449D8
write_lsn        | 0/F3449D8
flush_lsn        | 0/F3449D8
replay_lsn       | 0/F3449D8
sync_priority    | 1
sync_state       | sync
reply_time       | 2023-03-07 10:25:38.354315+08

linuxea=# select pid,state,client_addr,sync_priority,sync_state from pg_stat_replication;
 pid |   state   |  client_addr   | sync_priority | sync_state
-----+-----------+----------------+---------------+------------
 207 | streaming | 172.168.204.42 |             1 | sync
(1 row)

linuxea=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)
```

Create a database on the primary and insert some data:

```sql
CREATE DATABASE linuxea OWNER postgres;
\c linuxea
CREATE TABLE test( id integer, test integer)WITH (OIDS=FALSE);
ALTER TABLE test OWNER TO postgres;
```

As follows:

```
postgres=# \c linuxea
You are now connected to database "linuxea" as user "postgres".
linuxea=# CREATE TABLE test( id integer, test integer)WITH (OIDS=FALSE);
CREATE TABLE
linuxea=# ALTER TABLE test OWNER TO postgres;
ALTER TABLE
linuxea=# insert into test SELECT generate_series(1,1000000) as key, (random()*(10^3))::integer;
INSERT 0 1000000
```

Over on the standby:

```
[root@node2 pgdata-m]# docker exec -it --user=postgres postgresql12-s bash
da91ac9e2a19:/$ psql -U postgres
psql (12.14)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

postgres=# \c linuxea
You are now connected to database "linuxea" as user "postgres".
linuxea=# SELECT * FROM test;
   id    | test
---------+------
       1 |  935
       2 |  652
       3 |  204
       4 |  367
       5 |  100
       6 |  743
--More--
```

**Promoting the standby**

Suppose the primary dies and will not recover any time soon; we simply switch the standby to primary so it can serve traffic. To start, shut down the primary to simulate the outage, then enter the standby's container and run `pg_ctl promote -D`:

```
c683e39637ea:/$ pg_ctl promote -D /var/lib/postgresql/data/pgdata-latest
waiting for server to promote.... done
server promoted
```

Write data to 42 to confirm that writes work:

```sql
CREATE DATABASE linuxea2 OWNER postgres;
\c linuxea2
CREATE TABLE test( id integer, test integer)WITH (OIDS=FALSE);
ALTER TABLE test OWNER TO postgres;
insert into test SELECT generate_series(1,1000000) as key, (random()*(10^3))::integer;
SELECT * FROM test;
```

The former standby now accepts writes.

**Reconfiguring the old primary as a standby**

Now the old primary is back. Assuming the proxy already points traffic at 42, we sync 42's data onto node 41 and configure 41 as the standby.

1. Back up:

```
[root@node1 pgdata]# docker-compose -f ~/postgresql/docker-compose.yaml up -d
[+] Running 1/1
 ⠿ Container postgresql12-m  Started  0.4s
[root@node1 pgdata]# docker exec -it postgresql12-m bash
d7108d41f908:/# pg_basebackup -h 172.168.204.42 -p 5432 -U replica -Fp -Xs -Pv -R -D /var/lib/postgresql/data/pgdata-latest
Password:
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/14000028 on timeline 2
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_100"
147294/147294 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 0/14000100
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: base backup completed
```

2. Change the data directory:

```yaml
- PGDATA=/var/lib/postgresql/data/pgdata-latest
```

Create the signal file:

```bash
echo "standby_mode = 'on'" > /data/postgresql12/pgdata-latest/standby.signal
```

Add the following to both postgresql.auto.conf and postgresql.conf, this time pointing at 42:

```
hot_standby = 'on'
primary_conninfo = 'application_name=standbydb1 user=replica password=123456 host=172.168.204.42 port=5432 sslmode=disable sslcompression=0 gssencmode=disable krbsrvname=postgres target_session_attrs=any'
```

Remove the container, then bring it back up:

```bash
docker-compose -f ~/postgresql/docker-compose.yaml down
docker-compose -f ~/postgresql/docker-compose.yaml up -d
```

6. Verify. On the new primary, 42:

```sql
CREATE DATABASE linuxea23 OWNER postgres;
\c linuxea23
CREATE TABLE test( id integer, test integer)WITH (OIDS=FALSE);
ALTER TABLE test OWNER TO postgres;
insert into test SELECT generate_series(1,1000000) as key, (random()*(10^3))::integer;
```

Then check on the standby, 41:

```sql
\c linuxea23
SELECT * FROM test;
```

As follows:

```
[root@node1 postgresql12]# docker exec -it --user=postgres postgresql12-m bash
06a1e5ecc51b:/$ psql -U postgres
psql (12.14)
Type "help" for help.

postgres=# \c linuxea23
You are now connected to database "linuxea23" as user "postgres".
linuxea23=# SELECT * FROM test;
   id    | test
---------+------
       1 |  785
       2 |  654
       3 |  881
       4 |   19
       5 |   37
       6 |  482
       7 |  938
       8 |   25
--More--
```

Reference: PostgreSQL 12 recovery configuration summary
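The `*_lsn` columns in pg_stat_replication above are WAL positions; the byte distance between two of them is what tools usually report as replication lag. A small sketch of that arithmetic (the helper names are my own, not part of PostgreSQL):

```shell
# Convert a PostgreSQL LSN such as "0/F3449D8" into an absolute byte position.
# An LSN is two hex words: high word * 2^32 + low word.
lsn_to_bytes() {
  local hi=${1%%/*} lo=${1##*/}
  echo $(( 0x$hi * 4294967296 + 0x$lo ))
}

# Byte lag between two LSNs, e.g. sent_lsn on the primary vs the standby's replay_lsn.
lag_bytes() {
  echo $(( $(lsn_to_bytes "$1") - $(lsn_to_bytes "$2") ))
}

lag_bytes 0/F3449D8 0/F344000   # prints 2520
```

In the healthy output above all four LSNs are identical, so the lag is 0 bytes; a persistently growing gap between sent_lsn and replay_lsn would indicate the standby falling behind.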
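One sizing note on the parameters used above: with PostgreSQL's default 16 MB WAL segment size, `wal_keep_segments = 128` can pin up to 2 GB of WAL on the primary, which is worth budgeting for on the data volume. A quick sanity check of that arithmetic:

```shell
# WAL retention pinned by wal_keep_segments, assuming the default 16 MB segment size
segments=128
segment_mb=16
echo "$(( segments * segment_mb )) MB"   # prints "2048 MB"
```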
2023-02-06
linuxea: Collecting pod crash and OOM event logs with robusta
robusta can do far more than what this post covers: it can monitor Kubernetes, provide observability, integrate with Prometheus for secondary alert processing and automatic remediation, and it offers an event timeline. Previously I used Alibaba's kube-eventer, which only forwards events, so it only solves event-triggered notification. If robusta stopped there too, there would be little reason to adopt it. But it offers another very useful capability: event alerting. When robusta detects a preset condition, it sends the configured pod state along with the most recent log lines to Slack. That is the most important reason for this post.

**Prerequisites**

The Python version must be 3.7 or later, so we upgrade Python:

```bash
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tar.xz
yum install gcc zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libffi-devel -y
yum install libffi-devel -y
yum install zlib* -y
tar xf Python-3.9.16.tar.xz
cd Python-3.9.16
./configure --with-ssl --prefix=/usr/local/python3
make
make install
rm -rf /usr/bin/python3 /usr/bin/pip3
ln -s /usr/local/python3/bin/python3 /usr/bin/python3
ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3
```

Configure a domestic pip mirror:

```bash
mkdir -p ~/.pip/
cat > ~/.pip/pip.conf << EOF
[global]
trusted-host = mirrors.aliyun.com
index-url = http://mirrors.aliyun.com/pypi/simple
EOF
```

**robusta.dev**

Following the official docs, install the CLI and generate a configuration:

```bash
pip3 install -U robusta-cli --no-cache
robusta gen-config
```

Because of network issues, I personally use the docker wrapper instead:

```bash
curl -fsSL -o robusta https://docs.robusta.dev/master/_static/robusta
chmod +x robusta
./robusta gen-config
```

Before starting, be sure to pull my mirrored image and retag it under the upstream name the wrapper expects:

```bash
docker pull registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-cli:latest
docker tag registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-cli:latest us-central1-docker.pkg.dev/genuine-flight-317411/devel/robusta-cli:latest
```

```
[root@master1 opt]# ./robusta gen-config
Robusta reports its findings to external destinations (we call them "sinks").
We'll define some of them now.

Configure Slack integration? This is HIGHLY recommended.
[Y/n]: y    # configuring Slack is strongly recommended
If your browser does not automatically launch, open the below url:
https://api.robusta.dev/integrations/slack?id=64a3ee7c-5691-466f-80da-85e8ece80359    # open this in a browser
======================================================================
Error getting slack token
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
======================================================================
======================================================================
Error getting slack token
HTTPSConnectionPool(host='api.robusta.dev', port=443): Max retries exceeded with url: /integrations/slack/get-token?id=64a3ee7c-5691-466f-80da-85e8ece80359 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f50b1f18cd0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
======================================================================
You've just connected Robusta to the Slack of: crow as a cock
Which slack channel should I send notifications to? # devops
```

Open the URL from the prompt and approve the requested permissions; the robusta app then appears in Slack. After choosing a channel at the prompt (devops here), the channel receives a message. The complete run looks like this:

```
[root@master1 opt]# ./robusta gen-config
Robusta reports its findings to external destinations (we call them "sinks").
We'll define some of them now.

Configure Slack integration? This is HIGHLY recommended.
[Y/n]: y
If your browser does not automatically launch, open the below url:
https://api.robusta.dev/integrations/slack?id=d1fcbb13-5174-4027-a176-a3dcab10c27a
======================================================================
Error getting slack token
HTTPSConnectionPool(host='api.robusta.dev', port=443): Max retries exceeded with url: /integrations/slack/get-token?id=d1fcbb13-5174-4027-a176-a3dcab10c27a (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f0ec508eee0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
======================================================================
You've just connected Robusta to the Slack of: crow as a cock
Which slack channel should I send notifications to? # devops
Configure MsTeams integration? [y/N]: n
Configure Robusta UI sink? This is HIGHLY recommended. [Y/n]: y
Enter your Gmail/Google address. This will be used to login: user@gmail.com
Choose your account name (e.g your organization name): marksugar
Successfully registered.

Robusta can use Prometheus as an alert source.
If you haven't installed it yet, Robusta can install a pre-configured Prometheus.
Would you like to do so? [y/N]: y

Please read and approve our End User License Agreement: https://api.robusta.dev/eula.html
Do you accept our End User License Agreement? [y/N]: y

Last question! Would you like to help us improve Robusta by sending exception reports? [y/N]: n

Saved configuration to ./generated_values.yaml - save this file for future use!
Finish installing with Helm (see the Robusta docs). Then login to Robusta UI at https://platform.robusta.dev

By the way, we'll send you some messages later to get feedback.
(We don't store your API key, so we scheduled future messages using Slack's API)
```

When this finishes, a generated_values.yaml has been created:

```yaml
globalConfig:
  signing_key: 92a8195-a3fa879b3f88
  account_id: 79efaf9c433294
sinksConfig:
- slack_sink:
    name: main_slack_sink
    slack_channel: devops
    api_key: xoxb-4715825756487-4749501ZZylPy1f
- robusta_sink:
    name: robusta_ui_sink
    token: eyJhY2NvjIn0=
enablePrometheusStack: true
enablePlatformPlaybooks: true
runner:
  sendAdditionalTelemetry: false
rsa:
  private: LS0tLS1CRUdJTiBRCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  public: LS0tLS1CRUdJTiBQTElDIEtFWS0tLS0tCg==
```

**helm**

Next we install with the file created above, after some adjustments. There are many kinds of triggers; see example-triggers, java-troubleshooting, event-enrichment, miscellaneous and kubernetes-triggers. Triggers can be filtered to monitor particular pods or namespaces. I picked a few for testing and added them to generated_values.yaml:

```yaml
globalConfig:
  signing_key: 92a8195-a3fa879b3f88
  account_id: 79efaf9c433294
sinksConfig:
- slack_sink:
    name: main_slack_sink
    slack_channel: devops
    api_key: xoxb-4715825756487-4749501ZZylPy1f
- robusta_sink:
    name: robusta_ui_sink
    token: eyJhY2NvjIn0=
enablePrometheusStack: false
enablePlatformPlaybooks: true
customPlaybooks:
- triggers:
  - on_deployment_update: {}
  actions:
  - resource_babysitter:
      omitted_fields: []
      fields_to_monitor: ["spec.replicas"]
- triggers:
  - on_pod_crash_loop:
      restart_reason: "CrashLoopBackOff"
      restart_count: 1
      rate_limit: 3600
  actions:
  - report_crash_loop: {}
- triggers:
  - on_pod_oom_killed:
      rate_limit: 900
      exclude:
      - name: "oomkilled-pod"
        namespace: "default"
  actions:
  - pod_graph_enricher:
      resource_type: Memory
      display_limits: true
- triggers:
  - on_container_oom_killed:
      rate_limit: 900
      exclude:
      - name: "oomkilled-container"
        namespace: "default"
  actions:
  - oomkilled_container_graph_enricher:
      resource_type: Memory
- triggers:
  - on_job_failure:
      namespace_prefix: robusta
  actions:
  - create_finding:
      title: "Job $name on namespace $namespace failed"
      aggregation_key: "Job Failure"
  - job_events_enricher: { }
runner:
  sendAdditionalTelemetry: false
  image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/robusta-runner:0.10.10
  imagePullPolicy: IfNotPresent
kubewatch:
  image: registry.cn-zhangjiakou.aliyuncs.com/marksugar-k8s/kubewatch:v2.0
  imagePullPolicy: IfNotPresent
```

Now install with helm:

```bash
helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
helm upgrade --install robusta --namespace robusta --create-namespace robusta/robusta -f ./generated_values.yaml \
  --set clusterName=test

# the install can also be debugged with a dry run:
helm upgrade --install robusta --namespace robusta robusta/robusta -f ./generated_values.yaml --set clusterName=test --dry-run
```

As follows:

```
[root@master1 opt]# helm upgrade --install robusta --namespace robusta --create-namespace robusta/robusta -f ./generated_values.yaml \
> --set clusterName=test
Release "robusta" does not exist. Installing it now.
NAME: robusta
LAST DEPLOYED: Thu Feb  2 15:58:32 2023
NAMESPACE: robusta
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Robusta 0.10.10

As an open source project, we collect general usage statistics.
This data is extremely limited and contains only general metadata to help us understand usage patterns.
If you are willing to share additional data, please do so! It really help us improve Robusta.

You can set sendAdditionalTelemetry: true as a Helm value to send exception reports and additional data.
This is disabled by default.

To opt-out of telemetry entirely, set a ENABLE_TELEMETRY=false environment variable on the robusta-runner deployment.

Visit the web UI at: https://platform.robusta.dev/
```

Wait for the pods to become ready:

```
[root@master1 opt]# kubectl -n robusta get pod -w
NAME                                 READY   STATUS              RESTARTS   AGE
robusta-forwarder-78964b4455-vnt77   1/1     Running             0          2m55s
robusta-runner-758cf9c986-87l4x      0/1     ContainerCreating   0          2m55s
robusta-runner-758cf9c986-87l4x      1/1     Running             0          7m6s
```

Now, if a pod on the cluster enters an abnormal state and crashes, its logs are sent to Slack before the pod is deleted; the channel receives the log message, and clicking "expand inline" shows the details.
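To watch the `on_pod_crash_loop` playbook fire end to end, it helps to create a pod that crashes on purpose. A minimal sketch of such a manifest (the pod name and namespace here are arbitrary test values; busybox exits non-zero so the kubelet drives it into CrashLoopBackOff):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crashloop-demo      # arbitrary demo name
  namespace: default
spec:
  restartPolicy: Always
  containers:
  - name: crasher
    image: busybox
    # print something so report_crash_loop has log lines to forward, then fail
    command: ["sh", "-c", "echo boom; sleep 2; exit 1"]
```

After `kubectl apply -f crashloop-demo.yaml`, the restart threshold configured above (`restart_count: 1`) should trigger the playbook, and a message containing the pod's recent log should land in the Slack channel, subject to the `rate_limit` setting.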
2023-02-04
linuxea: istio bookinfo configuration walkthrough (11)
bookinfo其中包包中有一个 bookinfo的示例,这个应用模仿在线书店的一个分类,显示一本书的信息。 页面上会显示一本书的描述,书籍的细节(ISBN、页数等),以及关于这本书的一些评论。Bookinfo 应用分为四个单独的微服务:productpage. 这个微服务会调用 details 和 reviews 两个微服务,用来生成页面。details. 这个微服务中包含了书籍的信息。reviews. 这个微服务中包含了书籍相关的评论。它还会调用 ratings 微服务。ratings. 这个微服务中包含了由书籍评价组成的评级信息。reviews 微服务有 3 个版本:v1 版本不会调用 ratings 服务。v2 版本会调用 ratings 服务,并使用 1 到 5 个黑色星形图标来显示评分信息。v3 版本会调用 ratings 服务,并使用 1 到 5 个红色星形图标来显示评分信息。拓扑结构如下:Bookinfo 应用中的几个微服务是由不同的语言编写的。 这些服务对 Istio 并无依赖,但是构成了一个有代表性的服务网格的例子:它由多个服务、多个语言构成(接口API统一),并且 reviews 服务具有多个版本安装解压istio后,在samples/bookinfo目录下是相关bookinfo目录,参考官网中的getting-startrd[root@linuxea_48 /usr/local/istio-1.14.1]# ls samples/bookinfo/ -ll total 20 -rwxr-xr-x 1 root root 3869 Jun 8 10:11 build_push_update_images.sh drwxr-xr-x 2 root root 4096 Jun 8 10:11 networking drwxr-xr-x 3 root root 18 Jun 8 10:11 platform drwxr-xr-x 2 root root 46 Jun 8 10:11 policy -rw-r--r-- 1 root root 3539 Jun 8 10:11 README.md drwxr-xr-x 8 root root 123 Jun 8 10:11 src -rw-r--r-- 1 root root 6329 Jun 8 10:11 swagger.yaml而后安装 platform/kube/bookinfo.yaml文件[root@linuxea_48 /usr/local/istio-1.14.1]# kubectl -n java-demo apply -f samples/bookinfo/platform/kube/bookinfo.yaml service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created错误处理Unhandled exception Type=Bus error vmState=0x00000000 J9Generic_Signal_Number=00000028 Signal_Number=00000007 Error_Value=00000000 Signal_Code=00000002 Handler1=00007F368FD0AD30 Handler2=00007F368F5F72F0 InaccessibleAddress=00002AAAAAC00000 RDI=00007F369017F7D0 RSI=0000000000000008 RAX=00007F369018CBB0 RBX=00007F369017F7D0 RCX=00007F369003A9D0 
RDX=0000000000000000 R8=0000000000000000 R9=0000000000000000 R10=00007F36900008D0 R11=0000000000000000 R12=00007F369017F7D0 R13=00007F3679C00000 R14=0000000000000001 R15=0000000000000080 RIP=00007F368DA7395B GS=0000 FS=0000 RSP=00007F3694D1E4A0 EFlags=0000000000010202 CS=0033 RBP=00002AAAAAC00000 ERR=0000000000000006 TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=00002AAAAAC00000 xmm0 0000003000000020 (f: 32.000000, d: 1.018558e-312) xmm1 0000000000000000 (f: 0.000000, d: 0.000000e+00) xmm2 ffffffff00000002 (f: 2.000000, d: -nan) xmm3 40a9000000000000 (f: 0.000000, d: 3.200000e+03) xmm4 dddddddd000a313d (f: 667965.000000, d: -1.456815e+144) xmm5 0000000000000994 (f: 2452.000000, d: 1.211449e-320) xmm6 00007f369451ac40 (f: 2488380416.000000, d: 6.910614e-310) xmm7 0000000000000000 (f: 0.000000, d: 0.000000e+00) xmm8 dd006b6f6f68396a (f: 1869101440.000000, d: -9.776703e+139) xmm9 0000000000000000 (f: 0.000000, d: 0.000000e+00) xmm10 0000000000000000 (f: 0.000000, d: 0.000000e+00) xmm11 0000000049d70a38 (f: 1238829568.000000, d: 6.120632e-315) xmm12 000000004689a022 (f: 1183424512.000000, d: 5.846894e-315) xmm13 0000000047ac082f (f: 1202456576.000000, d: 5.940925e-315) xmm14 0000000048650dc0 (f: 1214582272.000000, d: 6.000833e-315) xmm15 0000000046b73e38 (f: 1186414080.000000, d: 5.861665e-315) Module=/opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so Module_base_address=00007F368D812000 Target=2_90_20200901_454898 (Linux 3.10.0-693.el7.x86_64) CPU=amd64 (32 logical CPUs) (0x1f703dd000 RAM) ----------- Stack Backtrace ----------- (0x00007F368DA7395B [libj9jit29.so+0x26195b]) (0x00007F368DA7429B [libj9jit29.so+0x26229b]) (0x00007F368D967C57 [libj9jit29.so+0x155c57]) J9VMDllMain+0xb44 (0x00007F368D955C34 [libj9jit29.so+0x143c34]) (0x00007F368FD1D041 [libj9vm29.so+0xa7041]) (0x00007F368FDB4070 [libj9vm29.so+0x13e070]) (0x00007F368FC87E94 [libj9vm29.so+0x11e94]) (0x00007F368FD2581F [libj9vm29.so+0xaf81f]) (0x00007F368F5F8053 [libj9prt29.so+0x1d053]) 
(0x00007F368FD1F9ED [libj9vm29.so+0xa99ed]) J9_CreateJavaVM+0x75 (0x00007F368FD15B75 [libj9vm29.so+0x9fb75]) (0x00007F36942F4305 [libjvm.so+0x12305]) JNI_CreateJavaVM+0xa82 (0x00007F36950C9B02 [libjvm.so+0xab02]) (0x00007F3695ADDA94 [libjli.so+0xfa94]) (0x00007F3695CF76DB [libpthread.so.0+0x76db]) clone+0x3f (0x00007F36955FAA3F [libc.so.6+0x121a3f]) --------------------------------------- JVMDUMP039I Processing dump event "gpf", detail "" at 2022/07/20 08:59:38 - please wait. JVMDUMP032I JVM requested System dump using '/opt/ibm/wlp/output/defaultServer/core.20220720.085938.1.0001.dmp' in response to an event JVMDUMP010I System dump written to /opt/ibm/wlp/output/defaultServer/core.20220720.085938.1.0001.dmp JVMDUMP032I JVM requested Java dump using '/opt/ibm/wlp/output/defaultServer/javacore.20220720.085938.1.0002.txt' in response to an event JVMDUMP012E Error in Java dump: /opt/ibm/wlp/output/defaultServer/javacore.20220720.085938.1.0002.txt JVMDUMP032I JVM requested Snap dump using '/opt/ibm/wlp/output/defaultServer/Snap.20220720.085938.1.0003.trc' in response to an event JVMDUMP010I Snap dump written to /opt/ibm/wlp/output/defaultServer/Snap.20220720.085938.1.0003.trc JVMDUMP032I JVM requested JIT dump using '/opt/ibm/wlp/output/defaultServer/jitdump.20220720.085938.1.0004.dmp' in response to an event JVMDUMP013I Processed dump event "gpf", detail "".如下echo 0 > /proc/sys/vm/nr_hugepages见34510,13389配置完成,pod准备结束(base) [root@k8s-01 bookinfo]# kubectl -n java-demo get pod NAME READY STATUS RESTARTS AGE details-v1-6d89cf9847-46c4z 2/2 Running 0 27m productpage-v1-f44fc594c-fmrf4 2/2 Running 0 27m ratings-v1-6c77b94555-twmls 2/2 Running 0 27m reviews-v1-765697d479-tbprw 2/2 Running 0 6m30s reviews-v2-86855c588b-sm6w2 2/2 Running 0 6m2s reviews-v3-6ff967c97f-g6x8b 2/2 Running 0 5m55s sleep-557747455f-46jf5 2/2 Running 0 5d1.gateway配置hosts为*,也就是默认的配置,匹配所有。也就意味着,可以使用ip地址访问在VirtualService中的访问入口如下 http: - match: - uri: exact: 
/productpage只要pod正常启动,南北流量的访问就能够被引入到网格内部,并且可以通过ip/productpage进行访问yaml如下apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "*" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080apply(base) [root@k8s-01 bookinfo]# kubectl -n java-demo apply -f networking/bookinfo-gateway.yaml gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created而后便可以通过浏览器打开同时。这里的版本是随着刷新一直在变化reviews-v1reviews-v3reviews-v2此时在kiali中能看到一个简单的拓扑:请求从ingress-gateway进入后,到达productpage的v1版本,而后调度到details的v1, 其中reviews流量等比例的被切割到v1,v2,v3,并且v2,v3比v1还多了一个ratings服务,如下图网格测试安装完成,我们进行一些测试,比如:请求路由,故障注入等1.请求路由要开始,需要将destination rules中配置的子集规则,如下apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: productpage spec: host: productpage subsets: - name: v1 labels: version: v1 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: reviews spec: host: reviews subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 - name: v3 labels: version: v3 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ratings spec: host: ratings subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 - name: v2-mysql labels: version: v2-mysql - name: v2-mysql-vm labels: version: v2-mysql-vm --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: details spec: host: details subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 ---applykubectl -n java-demo apply -f 
samples/bookinfo/networking/destination-rule-all.yaml> kubectl -n java-demo apply -f samples/bookinfo/networking/destination-rule-all.yaml destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created而后,对于非登录用户,将流量全发送到v1的版本,展开的yaml如下apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: productpage spec: hosts: - productpage http: - route: - destination: host: productpage subset: v1 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - route: - destination: host: reviews subset: v1 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: ratings spec: hosts: - ratings http: - route: - destination: host: ratings subset: v1 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: details spec: hosts: - details http: - route: - destination: host: details subset: v1 ---使用如下命令创建即可kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml> kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml virtualservice.networking.istio.io/productpage created virtualservice.networking.istio.io/reviews created virtualservice.networking.istio.io/ratings created virtualservice.networking.istio.io/details created PS E:\ops\k8s-1.23.1-latest\istio-企鹅通\istio-1.14此时在去访问,流量都会到v1这取决于定义了三个reviews的子集,如下apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: reviews spec: host: reviews subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 - name: v3 labels: version: v3随后指明了reviews调度到v1apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - route: - destination: host: reviews subset: 
v1

如果没有VirtualService中针对reviews的这条配置,就会在三个版本中不断切换。

2.用户标识调度

此时我们希望某个用户登录后,就将他转发到某个版本:如果end-user等于jason就转发到v2

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

在 Bookinfo 应用程序的/productpage页面上,以用户jason身份登录。登录后kiali变化如下

3.故障注入

要了解故障注入,需要了解混沌工程。在云原生上,在某些时候希望能够抵御某种程度的局部故障,比如希望允许客户端重试、超时来解决局部问题。istio原生支持两种故障注入来模拟混沌工程的效果:注入延迟故障,或者中断故障。

基于此前应用的两个配置之上

$ kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$ kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

使用上述配置,请求流程如下:

productpage→ reviews:v2→ ratings(仅限用户jason)
productpage→ reviews:v1(对于其他所有人)

注入延迟故障

要测试 Bookinfo 应用程序微服务的弹性,为用户jason在reviews:v2和ratings微服务之间注入 7s 延迟。此测试将发现一个有意引入 Bookinfo 应用程序的错误。如果是jason,在100%的流量上注入7秒延迟,并路由到v1版本;其他人也路由到v1版本,唯一不同是jason的访问是有延迟的,其他人正常。yaml如下

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      delay:
        percentage:
          value: 100.0
        fixedDelay: 7s
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1

apply

> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
virtualservice.networking.istio.io/ratings configured

我们打开浏览器测试,并且可以看到Sorry, product reviews are currently unavailable for this book.

中断故障注入

测试微服务弹性的另一种方法是引入 HTTP 中止故障。在此任务中,将为测试用户jason的ratings微服务引入 HTTP 中止。在这种情况下,希望页面立即加载并显示Ratings service is currently unavailable消息。

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      abort:
        percentage:
          value: 100.0
        httpStatus: 500
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1

apply

> kubectl.exe -n java-demo
apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml
virtualservice.networking.istio.io/ratings configured

kiali如下

4.流量迁移

如何将流量从一个微服务版本转移到另一个版本?一个常见的用例是将流量从旧版本的微服务逐渐迁移到新版本。在 Istio 中,可以通过配置一系列路由规则来实现这一目标,这些规则将一定比例的流量从一个目的地重定向到另一个目的地。在此任务中,将先把 50% 的流量发送到reviews:v1、50% 发送到reviews:v3,然后再将 100% 的流量发送到reviews:v3来完成迁移。

首先,运行此命令将所有流量路由到每个微服务的v1版本。

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        host: details
        subset: v1
---

apply

> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage unchanged
virtualservice.networking.istio.io/reviews configured
virtualservice.networking.istio.io/ratings configured
virtualservice.networking.istio.io/details unchanged

现在无论刷新多少次,页面的评论部分都不会显示评分星。这是因为将 Istio 配置为把reviews服务的所有流量路由到reviews:v1版本,而该版本的服务不访问星级评分服务。

接下来使用以下清单,将 50% 的流量转移到reviews:v3

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

apply

> kubectl.exe -n java-demo apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
virtualservice.networking.istio.io/reviews configured

kiali如下

5.请求超时

使用路由规则的timeout字段指定 HTTP 请求的超时。默认情况下,请求超时被禁用,但在此任务中,将reviews服务的超时覆盖为 0.5 秒。但是,为了查看其效果,还可以在调用ratings服务时人为地引入 2
秒延迟。

在开始之前,将所有的请求指派到v1

kubectl -n java-demo apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

而后将reviews调度到v2

kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF

在ratings的v1上注入一个2秒钟的延迟(reviews会访问到ratings)

kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF

访问如下图,当应用程序调用到ratings的时候,会超时2秒。一旦应用程序的上游响应缓慢,势必影响到服务体验,于是我们调整为:如果上游服务响应超过0.5s就不再请求

kubectl -n java-demo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF

此时刷新提示Sorry, product reviews are currently unavailable for this book.,因为服务响应超过0.5秒,就不去请求了。如果此时我们认为3秒是可以接受的,就把timeout改成3s,服务就可以访问到ratings了
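上面"改成3s"对应的 VirtualService 大致如下(示意配置:仅把前面清单中的 timeout 值放宽到 3s,大于 ratings 上注入的 2s 延迟,其余保持不变):

```yaml
# 示意:reviews 的超时放宽为 3s,2s 的注入延迟不再触发超时
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 3s
```

应用后再刷新页面,评论区即可重新拿到 ratings 的结果。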
2023年02月04日
2023-02-03
linuxea:istio 故障注入/重试和容错/流量镜像(10)
6.故障注入

istio支持两种故障注入,分别是延迟故障和中断故障:

延迟故障(delay):注入延迟,模拟超时、上游响应缓慢
中断故障(abort):注入错误响应码,模拟上游中断请求

故障注入仍然在http层进行定义。

中断故障

fault:
  abort:              # 中断故障
    percentage:
      value: 20       # 在多大的比例流量上注入
    httpStatus: 567   # 故障响应码

延迟故障

fault:
  delay:
    percentage:
      value: 20       # 在百分之20的流量上注入
    fixedDelay: 6s    # 注入6秒的延迟

yaml如下

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"          # 对应于gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway   # 相关定义仅应用于Ingress Gateway上
  - mesh
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment
        subset: v10
    fault:
      abort:
        percentage:
          value: 20
        httpStatus: 567
  - name: default
    route:
    - destination:
        host: dpment
        subset: v11
    fault:
      delay:
        percentage:
          value: 20
        fixedDelay: 6s

此时,当我们用curl访问 dpment.linuxea.com的时候,有20%的流量会被延迟6秒

(base) [root@master1 7]# while true;do date;curl dpment.linuxea.com; date;sleep 0.$RANDOM;done
2022年 08月 07日 星期日 18:10:40 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:40 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:41 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:41 CST
2022年 08月 07日 星期日 18:10:41 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:47 CST
2022年 08月 07日 星期日 18:10:47 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:53 CST
2022年 08月 07日 星期日 18:10:54 CST
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
2022年 08月 07日 星期日 18:10:54 CST
2022年 08月 07日 星期日 18:10:55 CST

如果我们访问dpment.linuxea.com/version/的时候,有20%的流量返回的状态码是567

(base) [root@master1 7]# while true;do echo -e
"===============";curl dpment.linuxea.com/version/ -I ; sleep 0.$RANDOM;done =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:40 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:31 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:32 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:42 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 3 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:33 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 3 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:34 GMT content-type: text/html content-length: 93 last-modified: 
Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:34 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:35 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:35 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:36 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:36 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 2 =============== HTTP/1.1 567 Unknown content-length: 18 content-type: text/plain date: Sun, 07 Aug 2022 10:16:46 GMT server: istio-envoy =============== HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 10:17:37 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 1如果使用curl命令直接访问会看到fault filter abort(base) [root@master1 7]# while true;do echo -e "\n";curl dpment.linuxea.com/version/ ; sleep 0.$RANDOM;done linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 fault filter abort linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0 linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0回到kiali6.1. 重试和容错请求重试条件:5xx:上游主机返回5xx响应码,或者根本未响应(端口,重置,读取超时)gateway-error: 网关错误,类似于5xx策略,但仅为502,503,504应用进行重试connection-failure:在tcp级别与上游服务建立连接失败时进行重试retriable-4xx:上游服务器返回可重复的4xx响应码时进行重试refused-stream:上游服务器使用REFUSED-STREAM错误码重置时进行重试retrable-status-codes:上游服务器的响应码与重试策略或者x-envoy-retriable-status-codes标头值中定义的响应码匹配时进行重试reset:上游主机完全不响应(disconnect/reset/read超时),envoy将进行重试retriable-headers:如果上游服务器响应报文匹配重试策略或x-envoy-retriable-header-names标头中包含的任何标头,则envoy将尝试重试envoy-rateliited:标头中存在x-envoy-ratelimited时重试重试条件2(同x-envoy-grpc-on标头):cancelled: grpc应答标头中的状态码是"cancelled"时进行重试deadline-exceeded: grpc应答标头中的状态码是"deadline-exceeded"时进行重试internal: grpc应答标头中的状态码是“internal”时进行重试resource-exhausted:grpc应答标头中的状态码是"resource-exhausted"时进行重试unavailable:grpc应答标头中的状态码是“unavailable”时进行重试默认情况下,envoy不会进行任何类型的重试操作,除非明确定义我们假设现在有多个服务,A->B->C,A向后代理,或者访问其中的B出现了响应延迟,在A上配置容错机制,如下apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: java-demo spec: hosts: - "dpment.linuxea.com" # 对应于gateways/proxy-gateway - "dpment" gateways: - istio-system/dpment-gateway # 相关定义仅应用于Ingress Gateway上 http: - name: default route: - destination: host: A timeout: 1s # 如果上游超过1秒响应,就返回超时结果 retries: # 重试 attempts: 5 # 重试次数 perTryTimeout: 1s # 重试时间 retryOn: 5xx,connect-failure,refused-stream # 对那些条件进行重试如果上游服务超过1秒未响应就进行重试,对于5开头的响应码,tcp链接失败的,或者是GRPC的Refused-stream的建立链接也拒绝了,就重试五次,每次重试1秒。这个重试的 5次过程中,如果在1s内,有成功的则会成功 。7.流量镜像流量镜像,也叫影子流量(Traffic shadowing),是一种通过复制生产环境的流量到其他环境进行测试开发的工作模式。在traffic-mirror中,我们可以直接使用mirror来指定给一个版本 - name: default route: - destination: host: dpment subset: v11 mirror: host: dpment subset: v12于是,我们在此前的配置上修改apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: dpment namespace: 
java-demo spec: hosts: - "dpment.linuxea.com" # 对应于gateways/proxy-gateway - "dpment" gateways: - istio-system/dpment-gateway # 相关定义仅应用于Ingress Gateway上 - mesh http: - name: version match: - uri: prefix: /version/ rewrite: uri: / route: - destination: host: dpment subset: v10 - name: default route: - destination: host: dpment subset: v11 mirror: host: dpment subset: v12我们发起curl请求 while ("true"){ curl http://dpment.linuxea.com/ ;sleep 1}而后在v12中查看日志以获取是否流量被镜像进来(base) [root@master1 10]# kubectl -n java-demo exec -it dpment-linuxea-c-568b9fcb5c-ltdcg -- /bin/bash bash-5.0# curl 127.0.0.1 linuxea-dpment-linuxea-c-568b9fcb5c-ltdcg.com-127.0.0.1/8 130.130.1.125/24 version number 3.0 bash-5.0# tail -f /data/logs/access.log 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:27:59 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:00 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:01 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:02 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:03 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:04 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:05 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:06 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 
130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:07 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:08 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:11 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:12 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:13 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:14 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:15 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:16 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:17 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:18 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:19 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:20 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:21 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" 
"curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:23 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:24 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-" 130.130.0.0,130.130.1.96, 127.0.0.6 - [08/Aug/2022:04:28:25 +0000] "GET / HTTP/1.1" dpment.linuxea.com-shadow94 "-" "curl/7.83.1" - -0.000 [200] [-] [-] "-"
2023年02月03日
2023-01-17
linuxea:istio 基于headers请求头路由(9)
5.请求首部条件路由

正常情况下,客户端请求的流量会先发送到sidecar-proxy(envoy),再由envoy发送到上游的pod,而后响应到envoy,并由envoy响应给客户端。在这个过程中,客户端发给envoy的请求报文是不能被修改的;只有envoy发送给上游service的请求才可以被操作,因为这段报文是由envoy发起的。同样,service响应给envoy的报文不能被修改,而envoy响应给客户端的报文由envoy发起,因此也可以被修改。总之,可配置的只有发送给上游service的请求和发给下游客户端的响应,另外两段报文不是envoy生成的,因此没有办法操作其标头。

request: 发送给上游service的请求
response: 响应给下游客户端的响应

1.如果请求的首部有x-for-canary等于true则路由到v11,并且修改发送给上游的请求标头的 User-Agent: Mozilla,其次,在响应给客户端的标头添加一个 x-canary: "true"

- name: canary
  match:
  - headers:
      x-for-canary:
        exact: "true"
  route:
  - destination:
      host: dpment
      subset: v11
    headers:
      request:        # 修改发送给上游的请求标头的 User-Agent: Mozilla
        set:
          User-Agent: Mozilla
      response:       # 响应给客户端的标头添加一个 x-canary: "true"
        add:
          x-canary: "true"

没有匹配到这些规则的请求,就由默认规则处理:路由给v10,并且在给下游的响应报文中添加一个全局标头X-Envoy: linuxea

- name: default
  headers:
    response:
      add:
        X-Envoy: linuxea
  route:
  - destination:
      host: dpment
      subset: v10

yaml如下

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v11
      headers:
        request:        # 修改发送给上游的请求标头的 User-Agent: Mozilla
          set:
            User-Agent: Mozilla
        response:       # 响应给客户端的标头添加一个 x-canary
          add:
            x-canary: "marksugar"
  - name: default
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: dpment
        subset: v10

我们添加上gateway部分

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"          # 对应于gateways/proxy-gateway
  - "dpment"
  gateways:
  - istio-system/dpment-gateway   # 相关定义仅应用于Ingress Gateway上
  - mesh
  http:
  - name: canary
    match:
    - headers:
        x-for-canary:
          exact: "true"
    route:
    - destination:
        host: dpment
        subset: v11
      headers:
        request:
          set:
            User-Agent: Mozilla
        response:
          add:
            x-canary: "marksugar"
  - name: default
    headers:
      response:
        add:
          X-Envoy: linuxea
    route:
    - destination:
        host: dpment
        subset: v10

5.1 测试

此时可以使用curl来模拟访问请求

# curl dpment.linuxea.com
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0添加头部, -H "x-for-canary: true"# curl -H "x-for-canary: true" dpment.linuxea.com linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0而后查看日志的user-agent ,这里是Mozilla,默认是curl130.130.0.0, 127.0.0.6 - [07/Aug/2022:08:18:33 +0000] "GET / HTTP/1.1" dpment.linuxea.com94 "-" "Mozilla" - -0.000 [200] [-] [-] "-"我们使用curl发起请求,模拟的是Mozilla此时使用-I,查看标头的添加信息x-canary: marksugarPS C:\Users\Administrator> curl -H "x-for-canary: true" dpment.linuxea.com -I HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 09:47:45 GMT content-type: text/html content-length: 94 last-modified: Wed, 03 Aug 2022 07:58:30 GMT etag: "62ea2aa6-5e" accept-ranges: bytes x-envoy-upstream-service-time: 4 x-canary: marksugar如果不加头部-H "x-for-canary: true"则响应报文的是x-envoy: linuxeaPS C:\Users\Administrator> curl dpment.linuxea.com -I HTTP/1.1 200 OK server: istio-envoy date: Sun, 07 Aug 2022 09:51:53 GMT content-type: text/html content-length: 93 last-modified: Wed, 03 Aug 2022 07:59:37 GMT etag: "62ea2ae9-5d" accept-ranges: bytes x-envoy-upstream-service-time: 7 x-envoy: linuxea
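除了示例中用到的 set 和 add,headers 还支持 remove 操作,用于删除标头。下面是一个示意配置(并非上文已应用的规则),在响应给下游客户端之前移除 x-envoy-upstream-service-time 标头:

```yaml
# 示意:default 路由在响应中移除 envoy 添加的耗时标头
- name: default
  headers:
    response:
      remove:
      - x-envoy-upstream-service-time
  route:
  - destination:
      host: dpment
      subset: v10
```

应用类似规则后,再用 curl -I 查看,响应中将不再出现该标头;这常用于避免向外部暴露内部链路信息。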
2023年01月17日