2023-11-25
linuxea: Argo Rollouts Analysis test
A friendly note up front: this is not a rigorous test, just a trial of the feature. When the Analysis checks its metric through Prometheus and the metric looks normal, the rollout keeps shifting the traffic ratio. If the AnalysisRun's custom metric query reports healthy, the Rollout proceeds; if the metric reports failure, it rolls back automatically; if the metric cannot give a success/failure answer, the release pauses.

(Argo Rollouts architecture diagram omitted.)

Argo Rollouts offers several ways to run analysis to drive progressive delivery. A few CRDs to understand first:

- Rollout: a drop-in replacement for the Deployment resource. It adds blueGreen and canary update strategies, which can create AnalysisRuns and Experiments during an update to either advance or abort it.
- AnalysisTemplate: a template that defines how canary analysis is performed — which metrics to run, how often, and which values count as success or failure. It can be parameterized with input values.
- ClusterAnalysisTemplate: like AnalysisTemplate, but cluster-scoped; it can be used by any Rollout in the cluster.
- AnalysisRun: an instantiation of an AnalysisTemplate. Like a Job, it runs to completion; a completed run is Successful, Failed, or Inconclusive, and the result decides whether the Rollout's update continues, aborts, or pauses.

This means the metric that decides success or failure is critical: it has to measure both precisely.

1. Prepare the environment

1) Prepare a metric

We configure a simple nginx and use its status codes as the health signal that manages the pod versions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vts-demo
  namespace: marksugar
  labels:
    app: vts-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vts-demo
  template:
    metadata:
      labels:
        app: vts-demo
    spec:
      containers:
      - name: vts-metrics
        image: uhub.service.ucloud.cn/marksugar-k8s/nginx-vts-exporter:latest
        env:
        - name: NGINX_STATUS
          value: "http://localhost:40080/status/format/json"
        ports:
        - containerPort: 9913
      - name: web
        image: registry.cn-zhangjiakou.aliyuncs.com/marksugar/nginx:1.22.1-vts
        ports:
        - containerPort: 80
        - containerPort: 40080
---
apiVersion: v1
kind: Service
metadata:
  name: vts-demo
  namespace: marksugar
  labels:
    app: vts-demo
spec:
  type: NodePort
  ports:
  - name: nginx
    nodePort: 30888
    port: 80
    targetPort: 40080
  - name: init
    nodePort: 30913
    port: 9913
    targetPort: 9913
  selector:
    app: vts-demo
```

We can now open nginx's monitoring page.

2) Add it to monitoring

Next we add it to the Prometheus scrape config:

```yaml
  - job_name: vts-demo
    static_configs:
      - targets:
        - vts-demo.marksugar:9913
```

3) Install Prometheus

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm -n marksugar show values prometheus-community/prometheus > values.yaml
egrep -v "^%|^#|^ #|^$|^  #|^   #|^    #|^     #" values.yaml > latest.yaml
# after adjusting the PVC and adding the vts-demo job above, install
helm upgrade --install prometheus --namespace prometheus --create-namespace --dry-run -f ./latest.yaml ./
```

Once installed, we have a metric in Prometheus and can query it.
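If Prometheus comes from the Helm chart above, the scrape job can also be carried in the chart values instead of hand-editing the generated config. A minimal sketch, assuming the prometheus-community/prometheus chart's extraScrapeConfigs value (check your chart version's values.yaml for the exact key):

```yaml
# latest.yaml fragment: this string is appended to Prometheus's scrape_configs
extraScrapeConfigs: |
  - job_name: vts-demo
    static_configs:
      - targets:
          - vts-demo.marksugar:9913
```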
2. Configure the canary

Analysis can run in the background while the rollout is executing its deployment steps. The following example gradually increases the canary weight by 20% every 10 minutes until it reaches 100%. In the background, an AnalysisRun is started from an AnalysisTemplate named success-rate. The success-rate template queries the Prometheus server, measuring HTTP success rate at 5-minute intervals/samples; it has no end time and keeps running until stopped or failed. If the measured metric is below 95%, and there are three such measurements, the analysis is considered failed. A failed analysis aborts the Rollout, sets the canary weight back to zero, and marks the Rollout as degraded. Otherwise, if the Rollout completes all of its canary steps, the rollout is considered successful and the controller stops the analysis run.

Note: the nginx vts module configured above only simulates the Prometheus metric queried by the success-rate template. The metric has no real relation to the test itself — it is purely for demonstration. Once the 404 count in vts exceeds the threshold, the rollout stops and rolls back, which simulates how a real update would be measured. In practice you need a metric that genuinely measures whether the service is unhealthy as the success-rate query; this example is only a minimal implementation.

The Rollout resource from the official docs:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: guestbook
spec:
  # ...
  strategy:
    canary:
      analysis:
        templates:
        - templateName: success-rate
        startingStep: 2 # delay starting analysis until step 3
        # args pass the service name down to the template
        args:
        - name: service-name
          value: guestbook-svc.default.svc.cluster.local
      steps:
      - setWeight: 20
      - pause: { duration: 10m }
      - setWeight: 40
      - pause: { duration: 10m }
      - setWeight: 60
      - pause: { duration: 10m }
      - setWeight: 80
      - pause: { duration: 10m }
```

The args are consumed like this:

```yaml
...
    failureLimit: 3
    provider:
      prometheus:
        address: http://prometheus.example.com:9090
        query: |
          sum(irate(
            istio_requests_total{reporter="source",destination_service=~"{{args.service-name}}",response_code!~"5.*"}[5m]
          )) /
          sum(irate(
            istio_requests_total{reporter="source",destination_service=~"{{args.service-name}}"}[5m]
          ))
...
```

So we mimic that metric and adapt it as follows: if the value of increase(nginx_server_requests{code="4xx", host="*"}[5m]) is greater than or equal to 10, three times, the analysis is treated as failed.

Other settings: failureCondition defines what makes an analysis run fail, and failureLimit is the maximum number of failed measurements allowed. The official example it derives from continuously polls the defined Prometheus server every 5 minutes for the total number of errors (i.e., HTTP response codes >= 500); ten or more errors fail a measurement, and after three failed measurements the whole analysis run is considered failed. successCondition, conversely, defines what the analysis must satisfy to pass.

The YAML below sets the canary weight to 20%, pauses 10 minutes, and runs the analysis starting from the second stage onwards. If the analysis succeeds the rollout continues, otherwise it aborts. Several setWeight stages are used for demonstration; two would be enough.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-analysis-step
  namespace: marksugar
spec:
  replicas: 4
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-analysis-step
  template:
    metadata:
      labels:
        app: rollout-analysis-step
    spec:
      containers:
      - name: rollouts-demo
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
          requests:
            memory: 32Mi
            cpu: 5m
  strategy:
    canary:
      analysis:
        templates:
        - templateName: success-rate
        startingStep: 2 # delay analysis; it begins at step 3 (setWeight: 40%)
        args: # not removing this for now, just keep it
        - name: service-name
          value: guestbook-svc.default.svc.cluster.local
      steps:
      - setWeight: 20
      - pause: {duration: 10m}
      - setWeight: 40
      - pause: {duration: 10m}
      - setWeight: 60
      - pause: {duration: 10m}
      - setWeight: 80
      - pause: {duration: 10m}
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: marksugar
spec:
  args:
  - name: service-name
  metrics:
  - name: success-rate
    interval: 5m
    # note: prometheus queries return results as a vector,
    # so it is common to index element 0 of the returned array
    # result[0] >= 10
    failureCondition: result[0] <= 0.1
    failureLimit: 3
    provider:
      prometheus:
        address: http://prometheus-server.prometheus.svc.cluster.local:9090
        query: |
          sum(increase(nginx_server_requests{code="4xx", host="*"}[5m])) / 100
```
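The template above keys on failureCondition only. For comparison, a sketch of the same idea expressed through successCondition, requiring at least 95% of requests to be non-4xx — note that the code="total" label value is an assumption about the nginx-vts-exporter's nginx_server_requests series, not something taken from this post:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate-ratio
  namespace: marksugar
spec:
  metrics:
  - name: success-rate
    interval: 5m
    # the run stays Successful while at least 95% of requests are non-4xx
    successCondition: result[0] >= 0.95
    failureLimit: 3
    provider:
      prometheus:
        address: http://prometheus-server.prometheus.svc.cluster.local:9090
        query: |
          1 - (
            sum(increase(nginx_server_requests{code="4xx", host="*"}[5m]))
            /
            sum(increase(nginx_server_requests{code="total", host="*"}[5m]))
          )
```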
The first creation is a normal creation:

```
[root@master1 argo-rollouts]# kubectl argo rollouts get rollout rollout-analysis-step -n marksugar
Name:            rollout-analysis-step
Namespace:       marksugar
Status:          Healthy
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 (stable)
Replicas:
  Desired:  4  Current:  4  Updated:  4  Ready:  4  Available:  4

NAME                                               KIND        STATUS     AGE   INFO
⟳ rollout-analysis-step                            Rollout     ✔ Healthy  5m2s
└──# revision:1
   └──⧉ rollout-analysis-step-56bc85bd8f           ReplicaSet  ✔ Healthy  5m2s  stable
      ├──□ rollout-analysis-step-56bc85bd8f-5wffr  Pod         ✔ Running  5m2s  ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-mtqxc  Pod         ✔ Running  5m2s  ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-shtm6  Pod         ✔ Running  5m2s  ready:1/1
      └──□ rollout-analysis-step-56bc85bd8f-vjnhj  Pod         ✔ Running  5m2s  ready:1/1
```

Now we change the version to test. After switching to the v2.0 image, the canary release begins:

```
Name:            rollout-analysis-step
Namespace:       marksugar
Status:          Progressing
Message:         more replicas need to be updated
Strategy:        Canary
  Step:          0/8
  SetWeight:     20
  ActualWeight:  0
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (canary)
Replicas:
  Desired:  4  Current:  5  Updated:  1  Ready:  4  Available:  4

NAME                                               KIND        STATUS               AGE  INFO
⟳ rollout-analysis-step                            Rollout     ◌ Progressing        22m
├──# revision:2
│  └──⧉ rollout-analysis-step-567978485d           ReplicaSet  ◌ Progressing        4s   canary
│     └──□ rollout-analysis-step-567978485d-bqvdj  Pod         ◌ ContainerCreating  4s   ready:0/1
└──# revision:1
   └──⧉ rollout-analysis-step-56bc85bd8f           ReplicaSet  ✔ Healthy            22m  stable
      ├──□ rollout-analysis-step-56bc85bd8f-5wffr  Pod         ✔ Running            22m  ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-mtqxc  Pod         ✔ Running            22m  ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-shtm6  Pod         ✔ Running            22m  ready:1/1
      └──□ rollout-analysis-step-56bc85bd8f-vjnhj  Pod         ✔ Running            22m  ready:1/1
```

The first pod finishes updating:

```
[root@master1 argo-rollouts]# kubectl -n marksugar get pod -w
NAME                                     READY  STATUS             RESTARTS  AGE
rollout-analysis-step-567978485d-bqvdj   0/1    ContainerCreating  0         17s
rollout-analysis-step-56bc85bd8f-5wffr   1/1    Running            0         22m
rollout-analysis-step-56bc85bd8f-mtqxc   1/1    Running            0         22m
rollout-analysis-step-56bc85bd8f-shtm6   1/1    Running            0         22m
rollout-analysis-step-56bc85bd8f-vjnhj   1/1    Running            0         22m
vts-demo-54fff48556-m6lvz                2/2    Running            0         7h27m
rollout-analysis-step-567978485d-bqvdj   1/1    Running            0         54s
```

We wait ten minutes.

1.1 Failure from misconfiguration

Because of a configuration mistake, the canary gets torn down:

```
Name:            rollout-analysis-step
Namespace:       marksugar
Status:          ✖ Degraded
Message:         RolloutAborted: Rollout aborted update to revision 2: Metric "success-rate" assessed Error due to consecutiveErrors (5) > consecutiveErrorLimit (4): "Error Message: Post "http://prometheus-server.prometheus.svc.cluster.local:9090/api/v1/query": dial tcp 10.68.167.148:9090: connect: connection refused"
Strategy:        Canary
  Step:          0/8
  SetWeight:     0
  ActualWeight:  0
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 (stable)
Replicas:
  Desired:  4  Current:  4  Updated:  0  Ready:  4  Available:  4

NAME                                               KIND         STATUS        AGE    INFO
⟳ rollout-analysis-step                            Rollout      ✖ Degraded    38m
├──# revision:2
│  ├──⧉ rollout-analysis-step-567978485d           ReplicaSet   • ScaledDown  15m    canary
│  └──α rollout-analysis-step-567978485d-2         AnalysisRun  ⚠ Error       4m59s  ⚠ 5
└──# revision:1
   └──⧉ rollout-analysis-step-56bc85bd8f           ReplicaSet   ✔ Healthy     38m    stable
      ├──□ rollout-analysis-step-56bc85bd8f-5wffr  Pod          ✔ Running     38m    ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-mtqxc  Pod          ✔ Running     38m    ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-shtm6  Pod          ✔ Running     38m    ready:1/1
      └──□ rollout-analysis-step-56bc85bd8f-ccjjq  Pod          ✔ Running     4m19s  ready:1/1
```
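Note the abort reason: it was measurement errors (the connection refused), not failed measurements, that crossed the threshold — consecutiveErrors (5) > consecutiveErrorLimit (4). If the metrics backend can be briefly unreachable, the error tolerance can be raised per metric; a sketch built on the template above (consecutiveErrorLimit is a standard AnalysisTemplate metric field, defaulting to 4):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: marksugar
spec:
  metrics:
  - name: success-rate
    interval: 5m
    failureCondition: result[0] <= 0.1
    failureLimit: 3            # failed measurements allowed
    consecutiveErrorLimit: 6   # measurement errors tolerated in a row
    provider:
      prometheus:
        address: http://prometheus-server.prometheus.svc.cluster.local:9090
        query: |
          sum(increase(nginx_server_requests{code="4xx", host="*"}[5m])) / 100
```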
The image is rolled back as a result:

```
[root@master1 argo-rollouts]# kubectl -n marksugar get pod -w
NAME                                     READY  STATUS             RESTARTS  AGE
rollout-analysis-step-567978485d-bqvdj   0/1    ContainerCreating  0         17s
rollout-analysis-step-56bc85bd8f-5wffr   1/1    Running            0         22m
rollout-analysis-step-56bc85bd8f-mtqxc   1/1    Running            0         22m
rollout-analysis-step-56bc85bd8f-shtm6   1/1    Running            0         22m
rollout-analysis-step-56bc85bd8f-vjnhj   1/1    Running            0         22m
vts-demo-54fff48556-m6lvz                2/2    Running            0         7h27m
rollout-analysis-step-567978485d-bqvdj   1/1    Running            0         54s
rollout-analysis-step-56bc85bd8f-vjnhj   1/1    Terminating        0         33m
rollout-analysis-step-567978485d-frp8t   0/1    Pending            0         0s
rollout-analysis-step-567978485d-frp8t   0/1    Pending            0         0s
rollout-analysis-step-567978485d-frp8t   0/1    ContainerCreating  0         0s
rollout-analysis-step-567978485d-frp8t   1/1    Running            0         1s
rollout-analysis-step-56bc85bd8f-vjnhj   0/1    Terminating        0         33m
rollout-analysis-step-56bc85bd8f-vjnhj   0/1    Terminating        0         33m
rollout-analysis-step-56bc85bd8f-vjnhj   0/1    Terminating        0         33m
rollout-analysis-step-567978485d-frp8t   1/1    Terminating        0         40s
rollout-analysis-step-567978485d-bqvdj   1/1    Terminating        0         11m
rollout-analysis-step-56bc85bd8f-ccjjq   0/1    Pending            0         0s
rollout-analysis-step-56bc85bd8f-ccjjq   0/1    Pending            0         0s
rollout-analysis-step-56bc85bd8f-ccjjq   0/1    ContainerCreating  0         0s
rollout-analysis-step-567978485d-frp8t   0/1    Terminating        0         40s
rollout-analysis-step-567978485d-bqvdj   0/1    Terminating        0         11m
rollout-analysis-step-56bc85bd8f-ccjjq   1/1    Running            0         1s
rollout-analysis-step-567978485d-frp8t   0/1    Terminating        0         50s
rollout-analysis-step-567978485d-frp8t   0/1    Terminating        0         50s
rollout-analysis-step-567978485d-bqvdj   0/1    Terminating        0         11m
rollout-analysis-step-567978485d-bqvdj   0/1    Terminating        0         11m
[root@master1 argo-rollouts]# kubectl -n marksugar get pod rollout-analysis-step-56bc85bd8f-ccjjq -o yaml|grep image
  - image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
```

1.2 A correct update

The error above happened because we wrote 9090 where port 80 was needed, so no metric could be fetched and the update failed. We fix the manifest and apply it again:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-analysis-step
  namespace: marksugar
spec:
  # https://argoproj.github.io/argo-rollouts/features/specification/
  replicas: 4
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-analysis-step
  template:
    metadata:
      labels:
        app: rollout-analysis-step
    spec:
      containers:
      - name: rollouts-demo
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
          requests:
            memory: 32Mi
            cpu: 5m
  strategy:
    canary:
      analysis:
        templates:
        - templateName: success-rate
        startingStep: 2 # delay analysis until step 3
        args:
        - name: service-name
          value: guestbook-svc.default.svc.cluster.local
      steps:
      - setWeight: 20
      - pause: {duration: 3m}
      - setWeight: 40
      - pause: {duration: 3m}
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: marksugar
spec:
  args:
  - name: service-name
  metrics:
  - name: success-rate
    interval: 30s
    # note: prometheus queries return results as a vector,
    # so it is common to index element 0 of the returned array
    # result[0] >= 10
    #successCondition: result[0] > 0.1
    failureCondition: result[0] > 0.1
    failureLimit: 3 # 3 times
    provider:
      prometheus:
        address: http://prometheus-server.prometheus.svc.cluster.local
        query: |
          sum(increase(nginx_server_requests{code="4xx", host="*"}[3m])) / 100
```

After the change, we restart the rollout:

```bash
kubectl argo rollouts retry rollout rollout-analysis-step -n marksugar
```

The update begins:

```
# kubectl argo rollouts get rollout rollout-analysis-step -n marksugar
Name:            rollout-analysis-step
Namespace:       marksugar
Status:          ॥ Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          1/8
  SetWeight:     20
  ActualWeight:  20
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (canary)
Replicas:
  Desired:  4  Current:  5  Updated:  1  Ready:  5  Available:  5

NAME                                               KIND         STATUS     AGE  INFO
⟳ rollout-analysis-step                            Rollout      ॥ Paused   44m
├──# revision:2
│  ├──⧉ rollout-analysis-step-567978485d           ReplicaSet   ✔ Healthy  22m  canary
│  │  └──□ rollout-analysis-step-567978485d-jjbq4  Pod          ✔ Running  6s   ready:1/1
│  └──α rollout-analysis-step-567978485d-2         AnalysisRun  ⚠ Error    11m  ⚠ 5
└──# revision:1
   └──⧉ rollout-analysis-step-56bc85bd8f           ReplicaSet   ✔ Healthy  44m  stable
      ├──□ rollout-analysis-step-56bc85bd8f-5wffr  Pod          ✔ Running  44m  ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-mtqxc  Pod          ✔ Running  44m  ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-shtm6  Pod          ✔ Running  44m  ready:1/1
      └──□ rollout-analysis-step-56bc85bd8f-ccjjq  Pod          ✔ Running  10m  ready:1/1
```

```
[root@master1 argo-rollouts]# kubectl -n marksugar get pod rollout-analysis-step-567978485d-jjbq4 -o yaml|grep image:
  - image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0
    image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0
```
We observe for ten minutes:

```
# kubectl argo rollouts get rollout rollout-analysis-step -n marksugar
Name:            rollout-analysis-step
Namespace:       marksugar
Status:          ॥ Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          3/8
  SetWeight:     40
  ActualWeight:  40
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (canary)
Replicas:
  Desired:  4  Current:  5  Updated:  2  Ready:  5  Available:  5

NAME                                               KIND         STATUS     AGE  INFO
⟳ rollout-analysis-step                            Rollout      ॥ Paused   68m
├──# revision:2
│  ├──⧉ rollout-analysis-step-567978485d           ReplicaSet   ✔ Healthy  46m  canary
│  │  ├──□ rollout-analysis-step-567978485d-q62j7  Pod          ✔ Running  10m  ready:1/1
│  │  └──□ rollout-analysis-step-567978485d-9wrn2  Pod          ✔ Running  5s   ready:1/1
│  ├──α rollout-analysis-step-567978485d-2         AnalysisRun  ⚠ Error    35m  ⚠ 5
│  └──α rollout-analysis-step-567978485d-2.2       AnalysisRun  ◌ Running  5s   ⚠ 1
└──# revision:1
   └──⧉ rollout-analysis-step-56bc85bd8f           ReplicaSet   ✔ Healthy  68m  stable
      ├──□ rollout-analysis-step-56bc85bd8f-5wffr  Pod          ✔ Running  68m  ready:1/1
      ├──□ rollout-analysis-step-56bc85bd8f-mtqxc  Pod          ✔ Running  68m  ready:1/1
      └──□ rollout-analysis-step-56bc85bd8f-shtm6  Pod          ✔ Running  68m  ready:1/1
```

The second pod is already running:

```
[root@master1 argo-rollouts]# kubectl -n marksugar get pod
NAME                                     READY  STATUS   RESTARTS  AGE
rollout-analysis-step-567978485d-9wrn2   1/1    Running  0         14s
rollout-analysis-step-567978485d-q62j7   1/1    Running  0         10m
rollout-analysis-step-56bc85bd8f-5wffr   1/1    Running  0         68m
rollout-analysis-step-56bc85bd8f-mtqxc   1/1    Running  0         68m
rollout-analysis-step-56bc85bd8f-shtm6   1/1    Running  0         68m
vts-demo-54fff48556-m6lvz                2/2    Running  0         8h
```

Until the update completes:

```
# kubectl argo rollouts get rollout rollout-analysis-step -n marksugar
Name:            rollout-analysis-step
Namespace:       marksugar
Status:          ✔ Healthy
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
Replicas:
  Desired:  4  Current:  4  Updated:  4  Ready:  4  Available:  4

NAME                                               KIND         STATUS        AGE    INFO
⟳ rollout-analysis-step                            Rollout      ✔ Healthy     101m
├──# revision:2
│  ├──⧉ rollout-analysis-step-567978485d           ReplicaSet   ✔ Healthy     78m    stable
│  │  ├──□ rollout-analysis-step-567978485d-hgzwx  Pod          ✔ Running     24m    ready:1/1
│  │  ├──□ rollout-analysis-step-567978485d-kmzwl  Pod          ✔ Running     22m    ready:1/1
│  │  ├──□ rollout-analysis-step-567978485d-wtm6c  Pod          ✔ Running     20m    ready:1/1
│  │  └──□ rollout-analysis-step-567978485d-788mt  Pod          ✔ Running     10m    ready:1/1
│  ├──α rollout-analysis-step-567978485d-2         AnalysisRun  ⚠ Error       67m    ⚠ 5
│  └──α rollout-analysis-step-567978485d-2.1       AnalysisRun  ✔ Successful  22m    ✔ 4,✖ 1
└──# revision:1
   └──⧉ rollout-analysis-step-56bc85bd8f           ReplicaSet   • ScaledDown  101m
```

1.3 Injected failure

Now we manually refresh some 404 pages; the 404 status is picked up by the metric and the update fails. During the update we keep generating 404s by hand; once the probe exceeds the allowed count, the rollout aborts:

```
# kubectl argo rollouts get rollout rollout-analysis-step -n marksugar
Name:            rollout-analysis-step
Namespace:       marksugar
Status:          Degraded
Message:         RolloutAborted: Rollout aborted update to revision 5: Metric "success-rate" assessed Failed due to failed (4) > failureLimit (3)
Strategy:        Canary
  Step:          0/4
  SetWeight:     0
  ActualWeight:  0
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
Replicas:
  Desired:  4  Current:  4  Updated:  0  Ready:  4  Available:  4

NAME                                               KIND         STATUS        AGE    INFO
⟳ rollout-analysis-step                            Rollout      ✖ Degraded    2d21h
├──# revision:5
│  ├──⧉ rollout-analysis-step-6566f66f8c           ReplicaSet   • ScaledDown  4m50s  canary
│  └──α rollout-analysis-step-6566f66f8c-5         AnalysisRun  ✖ Failed      108s   ✖ 4
├──# revision:4
│  ├──⧉ rollout-analysis-step-567978485d           ReplicaSet   ✔ Healthy     2d21h  stable
│  │  ├──□ rollout-analysis-step-567978485d-pxmfz  Pod          ✔ Running     20m    ready:1/1
│  │  ├──□ rollout-analysis-step-567978485d-ccgxt  Pod          ✔ Running     17m    ready:1/1
│  │  ├──□ rollout-analysis-step-567978485d-jtlvg  Pod          ✔ Running     14m    ready:1/1
│  │  └──□ rollout-analysis-step-567978485d-x2d7h  Pod          ✔ Running     18s    ready:1/1
│  └──α rollout-analysis-step-567978485d-4         AnalysisRun  ✔ Successful  17m    ✔ 7
├──# revision:3
│  └──⧉ rollout-analysis-step-56bc85bd8f           ReplicaSet   • ScaledDown  2d21h
```
And the pods roll back to the pre-update version:

```
[root@master1 argo-rollouts]# kubectl -n marksugar get pod -w
NAME                                     READY  STATUS             RESTARTS  AGE
rollout-analysis-step-567978485d-4vt7q   1/1    Running            0         10m
rollout-analysis-step-567978485d-jsvbg   1/1    Running            0         7m32s
rollout-analysis-step-567978485d-xkx5h   1/1    Running            0         4m31s
rollout-analysis-step-56bc85bd8f-54gd2   1/1    Running            0         17s
rollout-analysis-step-56bc85bd8f-pjxhd   1/1    Running            0         3m18s
vts-demo-54fff48556-m6lvz                2/2    Running            2         32h
rollout-analysis-step-56bc85bd8f-54gd2   1/1    Terminating        0         3m
rollout-analysis-step-56bc85bd8f-pjxhd   1/1    Terminating        0         6m1s
rollout-analysis-step-567978485d-hsl6g   0/1    Pending            0         0s
rollout-analysis-step-567978485d-hsl6g   0/1    Pending            0         0s
rollout-analysis-step-567978485d-hsl6g   0/1    ContainerCreating  0         0s
rollout-analysis-step-567978485d-hsl6g   1/1    Running            0         1s
rollout-analysis-step-56bc85bd8f-54gd2   0/1    Terminating        0         3m1s
rollout-analysis-step-56bc85bd8f-pjxhd   0/1    Terminating        0         6m2s
rollout-analysis-step-56bc85bd8f-54gd2   0/1    Terminating        0         3m10s
rollout-analysis-step-56bc85bd8f-54gd2   0/1    Terminating        0         3m10s
rollout-analysis-step-56bc85bd8f-pjxhd   0/1    Terminating        0         6m11s
rollout-analysis-step-56bc85bd8f-pjxhd   0/1    Terminating        0         6m11s
```

The same information is visible in the controller logs:

```
# kubectl -n argo-rollouts logs -f argo-rollouts-5dc7bfb5f-g9d7m
time="2023-06-05T06:54:45Z" level=info msg="Patched: {\"status\":{\"availableReplicas\":4,\"conditions\":[{\"lastTransitionTime\":\"2023-06-05T06:50:12Z\",\"lastUpdateTime\":\"2023-06-05T06:50:12Z\",\"message\":\"Rollout is not healthy\",\"reason\":\"RolloutHealthy\",\"status\":\"False\",\"type\":\"Healthy\"},{\"lastTransitionTime\":\"2023-06-05T06:50:12Z\",\"lastUpdateTime\":\"2023-06-05T06:50:12Z\",\"message\":\"RolloutCompleted\",\"reason\":\"RolloutCompleted\",\"status\":\"False\",\"type\":\"Completed\"},{\"lastTransitionTime\":\"2023-06-05T06:53:15Z\",\"lastUpdateTime\":\"2023-06-05T06:53:15Z\",\"message\":\"Rollout is paused\",\"reason\":\"RolloutPaused\",\"status\":\"True\",\"type\":\"Paused\"},{\"lastTransitionTime\":\"2023-06-05T06:54:44Z\",\"lastUpdateTime\":\"2023-06-05T06:54:44Z\",\"message\":\"Rollout aborted update to revision 5: Metric \\\"success-rate\\\" assessed Failed due to failed (4) \\u003e failureLimit (3)\",\"reason\":\"RolloutAborted\",\"status\":\"False\",\"type\":\"Progressing\"},{\"lastTransitionTime\":\"2023-06-05T06:54:45Z\",\"lastUpdateTime\":\"2023-06-05T06:54:45Z\",\"message\":\"Rollout has minimum availability\",\"reason\":\"AvailableReason\",\"status\":\"True\",\"type\":\"Available\"}],\"readyReplicas\":4}}" generation=5 namespace=marksugar resourceVersion=249301 rollout=rollout-analysis-step
```

Finally, let's inspect the AnalysisRun:
```
# kubectl -n marksugar describe AnalysisRun rollout-analysis-step-6566f66f8c-5
...
Spec:
  Args:
    Name:   service-name
    Value:  guestbook-svc.default.svc.cluster.local
  Metrics:
    Failure Condition:  result[0] > 0.1
    Failure Limit:      3
    Interval:           30s
    Name:               success-rate
    Provider:
      Prometheus:
        Address:  http://prometheus-server.prometheus.svc.cluster.local
        Query:    sum(increase(nginx_server_requests{code="4xx", host="*"}[3m])) / 100
Status:
  Dry Run Summary:
  Message:  Metric "success-rate" assessed Failed due to failed (4) > failureLimit (3)
  Metric Results:
    Count:   4
    Failed:  4
    Measurements:
      Finished At:  2023-06-05T06:53:14Z
      Phase:        Failed
      Started At:   2023-06-05T06:53:14Z
      Value:        [0.6]
      Finished At:  2023-06-05T06:53:44Z
      Phase:        Failed
      Started At:   2023-06-05T06:53:44Z
      Value:        [0.6]
      Finished At:  2023-06-05T06:54:14Z
      Phase:        Failed
      Started At:   2023-06-05T06:54:14Z
      Value:        [0.6]
      Finished At:  2023-06-05T06:54:44Z
      Phase:        Failed
      Started At:   2023-06-05T06:54:44Z
      Value:        [0.6]
    Metadata:
      Resolved Prometheus Query:  sum(increase(nginx_server_requests{code="4xx", host="*"}[3m])) / 100
    Name:   success-rate
    Phase:  Failed
  Phase:    Failed
  Run Summary:
    Count:     1
    Failed:    1
  Started At:  2023-06-05T06:53:14Z
Events:
  Type     Reason             Age    From                 Message
  ----     ------             ----   ----                 -------
  Warning  MetricFailed       2m22s  rollouts-controller  Metric 'success-rate' Completed. Result: Failed
  Warning  AnalysisRunFailed  2m22s  rollouts-controller  Analysis Completed. Result: Failed
```

failureLimit is the maximum number of failures; once it is reached, the whole run is considered failed. So here, we terminated the update based on Prometheus returning a value outside the expected range.

1.5 dryRun

dryRun can be set on a metric to control whether it is evaluated in dry-run mode. A metric running in dry-run mode does not affect the final state of the rollout or experiment, even if it fails or the evaluation is inconclusive:

```yaml
  dryRun:
  - metricName: success-rate
```

RegEx wildcards are also supported, so the update continues even when several metric measurements fail:

```yaml
  dryRun:
  - metricName: success.*
  measurementRetention:
  - metricName: success.*
    limit: 20
```

metricName in measurementRetention is pattern-matched as well, and limit: 20 means the 20 most recent measurements are retained instead of the default 10. The full YAML:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-analysis-step
  namespace: marksugar
spec:
  # https://argoproj.github.io/argo-rollouts/features/specification/
  replicas: 4
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-analysis-step
  template:
    metadata:
      labels:
        app: rollout-analysis-step
    spec:
      containers:
      - name: rollouts-demo
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
          requests:
            memory: 32Mi
            cpu: 5m
  strategy:
    canary:
      analysis:
        templates:
        - templateName: success-rate
        startingStep: 2 # delay analysis until step 3
        args:
        - name: service-name
          value: guestbook-svc.default.svc.cluster.local
        dryRun:
        - metricName: success.*
        measurementRetention:
        - metricName: success.*
          limit: 20
      steps:
      - setWeight: 20
      - pause: {duration: 3m}
      - setWeight: 40
      - pause: {duration: 3m}
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: marksugar
spec:
  args:
  - name: service-name
  metrics:
  - name: success-rate
    # initialDelay: 5m # delay this metric; very useful when one of several metrics should start later
    # https://argoproj.github.io/argo-rollouts/features/analysis/#delay-analysis-runs
    interval: 30s
    # count can be combined with interval to take multiple measurements over a longer duration
    # note: prometheus queries return results as a vector,
    # so it is common to index element 0 of the returned array
    # result[0] >= 10
    #successCondition: result[0] > 0.1
    failureCondition: result[0] > 0.1
    failureLimit: 3 # 3 times
    provider:
      prometheus:
        address: http://prometheus-server.prometheus.svc.cluster.local
        query: |
          sum(increase(nginx_server_requests{code="4xx", host="*"}[3m])) / 100
```
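The commented-out knobs above (initialDelay, count) can be spelled out. A sketch: initialDelay defers only this metric's first measurement, and count bounds how many measurements are taken, after which the metric completes on its own:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: marksugar
spec:
  metrics:
  - name: success-rate
    initialDelay: 5m   # wait before the first measurement
    interval: 30s
    count: 10          # take 10 measurements (~5 minutes total), then finish
    failureCondition: result[0] > 0.1
    failureLimit: 3
    provider:
      prometheus:
        address: http://prometheus-server.prometheus.svc.cluster.local
        query: |
          sum(increase(nginx_server_requests{code="4xx", host="*"}[3m])) / 100
```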
Then we update again:

```
Name:            rollout-analysis-step
Namespace:       marksugar
Status:          Healthy
Strategy:        Canary
  Step:          4/4
  SetWeight:     100
  ActualWeight:  100
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
Replicas:
  Desired:  4  Current:  4  Updated:  4  Ready:  4  Available:  4

NAME                                            KIND         STATUS        AGE    INFO
rollout-analysis-step                           Rollout      ✔ Healthy     2d23h
├──# revision:8
│  ├── rollout-analysis-step-567978485d         ReplicaSet   ✔ Healthy     2d23h  stable
│  │  ├──□ rollout-analysis-step-567978485d-jp2mb  Pod       ✔ Running     6m15s  ready:1/1
│  │  ├──□ rollout-analysis-step-567978485d-vqc5x  Pod       ✔ Running     3m14s  ready:1/1
│  │  ├──□ rollout-analysis-step-567978485d-hsl5g  Pod       ✔ Running     12s    ready:1/1
│  │  └──□ rollout-analysis-step-567978485d-j2vjk  Pod       ✔ Running     12s    ready:1/1
│  └── rollout-analysis-step-567978485d-8       AnalysisRun  ✔ Successful  3m14s  ✖ 4
```

As you can see, even though the AnalysisRun failed, the update did not stop:

```
# kubectl -n marksugar describe AnalysisRun rollout-analysis-step-567978485d-8
...
Spec:
  Args:
    Name:   service-name
    Value:  guestbook-svc.default.svc.cluster.local
  Dry Run:
    Metric Name:  success.*
  Metrics:
    Failure Condition:  result[0] > 0.1
    Failure Limit:      3
    Interval:           30s
    Name:               success-rate
    Provider:
      Prometheus:
        Address:  http://prometheus-server.prometheus.svc.cluster.local
        Query:    sum(increase(nginx_server_requests{code="4xx", host="*"}[3m])) / 100
Status:
  Dry Run Summary:
    Count:   1
    Failed:  1
  Metric Results:
    Count:    4
    Dry Run:  true
    Failed:   4
    Measurements:
      Finished At:  2023-06-05T09:42:19Z
      Phase:        Failed
      Started At:   2023-06-05T09:42:19Z
      Value:        [0.6]
      Finished At:  2023-06-05T09:42:49Z
      Phase:        Failed
      Started At:   2023-06-05T09:42:49Z
      Value:        [0.6]
      Finished At:  2023-06-05T09:43:19Z
      Phase:        Failed
      Started At:   2023-06-05T09:43:19Z
      Value:        [0.6]
      Finished At:  2023-06-05T09:43:49Z
      Phase:        Failed
      Started At:   2023-06-05T09:43:49Z
      Value:        [0.6]
    Message:  Metric assessed Failed due to failed (4) > failureLimit (3)
    Metadata:
      Resolved Prometheus Query:  sum(increase(nginx_server_requests{code="4xx", host="*"}[3m])) / 100
    Name:   success-rate
    Phase:  Failed
  Phase:    Successful
  Run Summary:
  Started At:  2023-06-05T09:42:19Z
Events:
  Type     Reason                 Age    From                 Message
  ----     ------                 ----   ----                 -------
  Warning  MetricFailed           2m10s  rollouts-controller  Metric 'success-rate' Completed. Result: Failed
  Normal   AnalysisRunSuccessful  2m10s  rollouts-controller  Analysis Completed. Result: Successful
```

https://argoproj.github.io/argo-rollouts/features/analysis/#measurements-retention

Reference: https://argoproj.github.io/argo-rollouts/features/analysis/
2023-11-24
linuxea: Argo Rollouts test
Argo Rollouts is a Kubernetes Operator implementation that brings more advanced deployment capabilities to Kubernetes — blue-green, canary, canary analysis, experimentation, and progressive delivery — enabling automated, GitOps-based gradual delivery for cloud-native applications and services. It supports:

- Blue-green update strategy
- Canary update strategy
- More fine-grained, weighted traffic splitting
- Automated rollbacks
- Manual judgement (gating)
- Customizable metric queries and business KPI analysis
- Ingress controller integrations: NGINX, ALB
- Service mesh integrations: Istio, Linkerd, SMI
- Metrics integrations: Prometheus, Wavefront, Kayenta, Web, Kubernetes Jobs, Datadog, New Relic

The Argo Rollouts controller manages the creation, scaling, and deletion of ReplicaSets defined by spec.template in the Rollout resource, using the same pod template as a Deployment object. When spec.template changes, it signals the controller that a new ReplicaSet will be introduced; the controller uses the strategy under spec.strategy to determine how the rollout proceeds from the old ReplicaSet to the new one. Once the new ReplicaSet is scaled up (optionally gated by an Analysis), the controller marks it as stable. If another change happens while spec.template is transitioning from the stable ReplicaSet to the new one (i.e., the application version is changed mid-release), the previous new ReplicaSet is scaled down and the controller tries to roll out a ReplicaSet reflecting the updated spec.template.

Installation

```bash
kubectl create namespace argo-rollouts
wget https://github.com/argoproj/argo-rollouts/releases/download/v1.5.1/install.yaml
```

We again swap the image registry:

```bash
sed -i 's@quay.io/argoproj/argo-rollouts:v1.5.1@registry.cn-zhangjiakou.aliyuncs.com/marksugar/argo-rollouts:v1.5.1@g' install.yaml
kubectl -n argo-rollouts apply -f install.yaml
```

Installation complete:

```
[root@master1 argo-rollouts]# kubectl get pods -n argo-rollouts
NAME                            READY   STATUS    RESTARTS   AGE
argo-rollouts-5dc7bfb5f-vjkv4   1/1     Running   0          19m
```

kubectl-argo-rollouts

Next we download the CLI plugin:

```bash
curl -LO https://github.com/argoproj/argo-rollouts/releases/download/v1.5.1/kubectl-argo-rollouts-linux-amd64
cp -r kubectl-argo-rollouts-linux-amd64 /usr/local/sbin/kubectl-argo-rollouts
chmod +x /usr/local/sbin/kubectl-argo-rollouts
```

And check that it works:

```
[root@master1 argo-rollouts]# kubectl argo rollouts version
kubectl-argo-rollouts: v1.5.1+839f05d
  BuildDate: 2023-05-24T19:09:27Z
  GitCommit: 839f05d46f838c04b44eff0e573227d40e89ac7d
  GitTreeState: clean
  GoVersion: go1.19.9
  Compiler: gc
  Platform: linux/amd64
```

dashboard

Argo Rollouts ships a dashboard, started with:

```bash
kubectl argo rollouts dashboard
```

```
[root@master1 argo-rollouts]# kubectl argo rollouts dashboard
INFO[0000] Argo Rollouts Dashboard is now available at https://localhost:3100/rollouts
```

It can then be opened via ip:3100.

Deploy a rollout

Before starting the rollout — update strategies always operate on already-existing pods — we create the workload. This pod group pauses indefinitely after reaching 20% and must be promoted manually; it then pauses 10 seconds at 40%, 60%, and 80%, and finally updates everything.

1) The template:

```yaml
# basic-rollout.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
  namespace: marksugar
spec:
  replicas: 5 # 5 replicas
  strategy: # update strategy
    canary: # canary release
      steps: # release cadence
      - setWeight: 20
      - pause: {} # pause indefinitely
      - setWeight: 40
      - pause: { duration: 10 } # pause 10s
      - setWeight: 60
      - pause: { duration: 10 }
      - setWeight: 80
      - pause: { duration: 10 }
  revisionHistoryLimit: 2
  # the part below is Deployment-compatible
  selector:
    matchLabels:
      app: rollouts-demo
  template:
    metadata:
      labels:
        app: rollouts-demo
    spec:
      containers:
      - name: rollouts-demo
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
          requests:
            memory: 32Mi
            cpu: 5m
---
# basic-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rollouts-demo
  namespace: marksugar
spec:
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: rollouts-demo
```

The pods are created:

```
[root@master1 ~]# kubectl -n marksugar get pod
NAME                            READY   STATUS    RESTARTS   AGE
rollouts-demo-9d5b487bd-48f2d   1/1     Running   0          63s
rollouts-demo-9d5b487bd-48xlj   1/1     Running   0          63s
rollouts-demo-9d5b487bd-lpbzd   1/1     Running   0          63s
rollouts-demo-9d5b487bd-r8rhz   1/1     Running   0          63s
rollouts-demo-9d5b487bd-zlzn2   1/1     Running   0          63s
```
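The Rollout above approximates weights purely by scaling ReplicaSets (1 of 5 pods ≈ 20%). With the ingress/mesh integrations listed earlier, weights can instead be enforced at the traffic layer. A sketch of the NGINX ingress variant — the two Services and the Ingress named below are assumptions for illustration, not objects from this post:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
  namespace: marksugar
spec:
  # replicas/selector/template as in basic-rollout.yaml above
  strategy:
    canary:
      stableService: rollouts-demo-stable   # hypothetical Service selecting stable pods
      canaryService: rollouts-demo-canary   # hypothetical Service selecting canary pods
      trafficRouting:
        nginx:
          stableIngress: rollouts-demo-ingress  # hypothetical existing Ingress for the stable Service
      steps:
      - setWeight: 20
      - pause: {}
```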
We can also track the state of the process with:

```
[root@master1 ~]# kubectl argo rollouts get rollout rollouts-demo --watch -n marksugar
Name:            rollouts-demo
Namespace:       marksugar
Status:          ✔ Healthy
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 (stable)
Replicas:
  Desired:  5  Current:  5  Updated:  5  Ready:  5  Available:  5

NAME                                       KIND        STATUS     AGE  INFO
⟳ rollouts-demo                            Rollout     ✔ Healthy  14m
└──# revision:1
   └──⧉ rollouts-demo-9d5b487bd            ReplicaSet  ✔ Healthy  14m  stable
      ├──□ rollouts-demo-9d5b487bd-48f2d   Pod         ✔ Running  14m  ready:1/1
      ├──□ rollouts-demo-9d5b487bd-48xlj   Pod         ✔ Running  14m  ready:1/1
      ├──□ rollouts-demo-9d5b487bd-lpbzd   Pod         ✔ Running  14m  ready:1/1
      ├──□ rollouts-demo-9d5b487bd-r8rhz   Pod         ✔ Running  14m  ready:1/1
      └──□ rollouts-demo-9d5b487bd-zlzn2   Pod         ✔ Running  14m  ready:1/1
```

Of course, -w tracks the state live. After creation, pick the namespace in the dashboard's top-right corner and you can observe the created configuration in the web UI.

2) Update the image

We update the image version directly from the command line to watch the update process.

1. Update the image:

```bash
kubectl argo rollouts set image rollouts-demo \
  rollouts-demo=registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 -n marksugar
```

Then observe again:

```
[root@master1 ~]# kubectl argo rollouts get rollout rollouts-demo --watch -n marksugar -w
Name:            rollouts-demo
Namespace:       marksugar
Status:          ◌ Progressing
Message:         more replicas need to be updated
Strategy:        Canary
  Step:          0/8
  SetWeight:     20
  ActualWeight:  0
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (canary)
Replicas:
  Desired:  5  Current:  5  Updated:  1  Ready:  4  Available:  4

NAME                                       KIND        STATUS               AGE  INFO
⟳ rollouts-demo                            Rollout     ◌ Progressing        48m
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  ◌ Progressing        3s   canary
│     └──□ rollouts-demo-586dcdd879-8zvkg  Pod         ◌ ContainerCreating  3s   ready:0/1
└──# revision:1
   └──⧉ rollouts-demo-9d5b487bd            ReplicaSet  ✔ Healthy            48m  stable
      ├──□ rollouts-demo-9d5b487bd-48f2d   Pod         ✔ Running            48m  ready:1/1
      ├──□ rollouts-demo-9d5b487bd-48xlj   Pod         ✔ Running            48m  ready:1/1
      ├──□ rollouts-demo-9d5b487bd-lpbzd   Pod         ◌ Terminating        48m  ready:0/1
      ├──□ rollouts-demo-9d5b487bd-r8rhz   Pod         ✔ Running            48m  ready:1/1
      └──□ rollouts-demo-9d5b487bd-zlzn2   Pod         ✔ Running            48m  ready:1/1
```

One of the pods is Terminating, then the replacement comes up Running:

```
Name:            rollouts-demo
Namespace:       marksugar
Status:          ॥ Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          1/8
  SetWeight:     20
  ActualWeight:  20
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (canary)
Replicas:
  Desired:  5  Current:  5  Updated:  1  Ready:  5  Available:  5

NAME                                       KIND        STATUS     AGE  INFO
⟳ rollouts-demo                            Rollout     ॥ Paused   49m
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  ✔ Healthy  56s  canary
│     └──□ rollouts-demo-586dcdd879-8zvkg  Pod         ✔ Running  56s  ready:1/1
└──# revision:1
   └──⧉ rollouts-demo-9d5b487bd            ReplicaSet  ✔ Healthy  49m  stable
      ├──□ rollouts-demo-9d5b487bd-48f2d   Pod         ✔ Running  49m  ready:1/1
      ├──□ rollouts-demo-9d5b487bd-48xlj   Pod         ✔ Running  49m  ready:1/1
      ├──□ rollouts-demo-9d5b487bd-r8rhz   Pod         ✔ Running  49m  ready:1/1
      └──□ rollouts-demo-9d5b487bd-zlzn2   Pod         ✔ Running  49m  ready:1/1
```

The same can be seen via get pod:

```
[root@master1 ~]# kubectl -n marksugar get pod
NAME                             READY   STATUS    RESTARTS   AGE
rollouts-demo-586dcdd879-8zvkg   1/1     Running   0          90s
rollouts-demo-9d5b487bd-48f2d    1/1     Running   0          50m
rollouts-demo-9d5b487bd-48xlj    1/1     Running   0          50m
rollouts-demo-9d5b487bd-r8rhz    1/1     Running   0          50m
rollouts-demo-9d5b487bd-zlzn2    1/1     Running   0          50m
```

Continue the update

In the configuration above, the rollout sets a 20% traffic weight for the canary and keeps the rollout paused until the user cancels or promotes the release. We now let it continue:

```bash
kubectl argo rollouts promote rollouts-demo -n marksugar
```

After promoting, over a few rounds of updates all pods finish updating:
```
Name:            rollouts-demo
Namespace:       marksugar
Status:          ✔ Healthy
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
Replicas:
  Desired:  5  Current:  5  Updated:  5  Ready:  5  Available:  5

NAME                                       KIND        STATUS     AGE  INFO
⟳ rollouts-demo                            Rollout     ✔ Healthy  61m
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  ✔ Healthy  12m  stable
│     ├──□ rollouts-demo-586dcdd879-8zvkg  Pod         ✔ Running  12m  ready:1/1
│     ├──□ rollouts-demo-586dcdd879-vgc2q  Pod         ✔ Running  57s  ready:1/1
│     ├──□ rollouts-demo-586dcdd879-chwlt  Pod         ✔ Running  44s  ready:1/1
│     ├──□ rollouts-demo-586dcdd879-8lmk7  Pod         ✔ Running  33s  ready:1/1
│     └──□ rollouts-demo-586dcdd879-gm6pr  Pod         ✔ Running  22s  ready:1/1
```

Check via get pod:

```
[root@master1 ~]# kubectl -n marksugar get pod -w
NAME                             READY   STATUS    RESTARTS   AGE
rollouts-demo-586dcdd879-8lmk7   1/1     Running   0          83s
rollouts-demo-586dcdd879-8zvkg   1/1     Running   0          13m
rollouts-demo-586dcdd879-chwlt   1/1     Running   0          94s
rollouts-demo-586dcdd879-gm6pr   1/1     Running   0          72s
rollouts-demo-586dcdd879-vgc2q   1/1     Running   0          107s
```

As expected, it pauses 10s between image updates; back in the dashboard, the version is updated in step as well.

3) Abort an update

In the YAML we wrote above, the first update to 20% pauses, is then released manually, and only then does the rest complete automatically. So we update to version 3:

```bash
kubectl argo rollouts set image rollouts-demo \
  rollouts-demo=registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0 -n marksugar
```

The update has started:

```
Name:            rollouts-demo
Namespace:       marksugar
Status:          ◌ Progressing
Message:         more replicas need to be updated
Strategy:        Canary
  Step:          0/8
  SetWeight:     20
  ActualWeight:  0
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0 (canary)
Replicas:
  Desired:  5  Current:  5  Updated:  1  Ready:  4  Available:  4

NAME                                       KIND        STATUS               AGE  INFO
⟳ rollouts-demo                            Rollout     ◌ Progressing        80m
├──# revision:3
│  └──⧉ rollouts-demo-b4ff7678             ReplicaSet  ◌ Progressing        5s   canary
│     └──□ rollouts-demo-b4ff7678-xblv6    Pod         ◌ ContainerCreating  4s   ready:0/1
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  ✔ Healthy            31m  stable
│     ├──□ rollouts-demo-586dcdd879-8zvkg  Pod         ✔ Running            31m  ready:1/1
│     ├──□ rollouts-demo-586dcdd879-vgc2q  Pod         ✔ Running            19m  ready:1/1
│     ├──□ rollouts-demo-586dcdd879-chwlt  Pod         ✔ Running            19m  ready:1/1
│     └──□ rollouts-demo-586dcdd879-8lmk7  Pod         ✔ Running            19m  ready:1/1
```

Once the image is pulled, the pod comes up Running:

```
Name:            rollouts-demo
Namespace:       marksugar
Status:          ॥ Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          1/8
  SetWeight:     20
  ActualWeight:  20
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0 (canary)
Replicas:
  Desired:  5  Current:  5  Updated:  1  Ready:  5  Available:  5

NAME                                       KIND        STATUS     AGE    INFO
⟳ rollouts-demo                            Rollout     ॥ Paused   82m
├──# revision:3
│  └──⧉ rollouts-demo-b4ff7678             ReplicaSet  ✔ Healthy  2m51s  canary
│     └──□ rollouts-demo-b4ff7678-xblv6    Pod         ✔ Running  2m50s  ready:1/1
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  ✔ Healthy  34m    stable
│     ├──□ rollouts-demo-586dcdd879-8zvkg  Pod         ✔ Running  34m    ready:1/1
│     ├──□ rollouts-demo-586dcdd879-vgc2q  Pod         ✔ Running  22m    ready:1/1
│     ├──□ rollouts-demo-586dcdd879-chwlt  Pod         ✔ Running  22m    ready:1/1
│     └──□ rollouts-demo-586dcdd879-8lmk7  Pod         ✔ Running  22m    ready:1/1
```

Now, if we interrupt the update process at this point, the pods are downgraded back to the starting version. We interrupt it with the provided abort command:

```bash
kubectl argo rollouts abort rollouts-demo -n marksugar
```

It now returns to the v2 version and will not continue updating. Observed:
```
Name:            rollouts-demo
Namespace:       marksugar
Status:          ✖ Degraded
Message:         RolloutAborted: Rollout aborted update to revision 3
Strategy:        Canary
  Step:          0/8
  SetWeight:     0
  ActualWeight:  0
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
Replicas:
  Desired:  5  Current:  5  Updated:  0  Ready:  4  Available:  4

NAME                                       KIND        STATUS               AGE   INFO
⟳ rollouts-demo                            Rollout     ✖ Degraded           104m
├──# revision:3
│  └──⧉ rollouts-demo-b4ff7678             ReplicaSet  • ScaledDown         24m   canary
│     └──□ rollouts-demo-b4ff7678-xblv6    Pod         ◌ Terminating        24m   ready:0/1
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  ◌ Progressing        55m   stable
│     ├──□ rollouts-demo-586dcdd879-8zvkg  Pod         ✔ Running            55m   ready:1/1
│     ├──□ rollouts-demo-586dcdd879-vgc2q  Pod         ✔ Running            44m   ready:1/1
│     ├──□ rollouts-demo-586dcdd879-chwlt  Pod         ✔ Running            44m   ready:1/1
│     ├──□ rollouts-demo-586dcdd879-8lmk7  Pod         ✔ Running            44m   ready:1/1
│     └──□ rollouts-demo-586dcdd879-hlwh5  Pod         ◌ ContainerCreating  3s    ready:0/1
└──# revision:1
   └──⧉ rollouts-demo-9d5b487bd            ReplicaSet  • ScaledDown         104m
```

These operations are reflected in the dashboard as well.

4) Resume the update

We update to v3 again, this time using retry:

```bash
kubectl argo rollouts retry rollout rollouts-demo -n marksugar
```

As follows:

```
Name:            rollouts-demo
Namespace:       marksugar
Status:          ॥ Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          1/8
  SetWeight:     20
  ActualWeight:  20
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0 (canary)
Replicas:
  Desired:  5  Current:  5  Updated:  1  Ready:  5  Available:  5

NAME                                       KIND        STATUS        AGE   INFO
⟳ rollouts-demo                            Rollout     ॥ Paused      120m
├──# revision:3
│  └──⧉ rollouts-demo-b4ff7678             ReplicaSet  ✔ Healthy     40m   canary
│     └──□ rollouts-demo-b4ff7678-q65tz    Pod         ✔ Running     118s  ready:1/1
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  ✔ Healthy     72m   stable
│     ├──□ rollouts-demo-586dcdd879-8zvkg  Pod         ✔ Running     72m   ready:1/1
│     ├──□ rollouts-demo-586dcdd879-vgc2q  Pod         ✔ Running     60m   ready:1/1
│     ├──□ rollouts-demo-586dcdd879-chwlt  Pod         ✔ Running     60m   ready:1/1
│     └──□ rollouts-demo-586dcdd879-8lmk7  Pod         ✔ Running     60m   ready:1/1
└──# revision:1
   └──⧉ rollouts-demo-9d5b487bd            ReplicaSet  • ScaledDown  120m
```

It did not run through an update because the Rollout detected this was a rollback rather than an update, and will quickly deploy the stable ReplicaSet by skipping analysis and steps.
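This fast-track-on-rollback behaviour can also be made explicit. A sketch using the rollbackWindow field (available in recent Argo Rollouts releases) — going back to one of the last few revisions then skips the steps and analysis:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
  namespace: marksugar
spec:
  rollbackWindow:
    revisions: 3   # rollbacks to any of the last 3 revisions are fast-tracked
  # strategy/selector/template as in basic-rollout.yaml above
```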
Following the configured logic, we should now promote:

```bash
kubectl argo rollouts promote rollouts-demo -n marksugar
```

And then watch the pods get updated one after another:

```
Name:            rollouts-demo
Namespace:       marksugar
Status:          ॥ Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          3/8
  SetWeight:     40
  ActualWeight:  40
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 (stable)
                 registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0 (canary)
Replicas:
  Desired:  5  Current:  5  Updated:  2  Ready:  5  Available:  5

NAME                                       KIND        STATUS     AGE    INFO
⟳ rollouts-demo                            Rollout     ॥ Paused   121m
├──# revision:3
│  └──⧉ rollouts-demo-b4ff7678             ReplicaSet  ✔ Healthy  41m    canary
│     ├──□ rollouts-demo-b4ff7678-q65tz    Pod         ✔ Running  2m39s  ready:1/1
│     └──□ rollouts-demo-b4ff7678-zsxv7    Pod         ✔ Running  7s     ready:1/1
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  ✔ Healthy  72m    stable
│     ├──□ rollouts-demo-586dcdd879-8zvkg  Pod         ✔ Running  72m    ready:1/1
│     ├──□ rollouts-demo-586dcdd879-vgc2q  Pod         ✔ Running  61m    ready:1/1
│     └──□ rollouts-demo-586dcdd879-chwlt  Pod         ✔ Running  61m    ready:1/1
└──# revision:1

Name:            rollouts-demo
Namespace:       marksugar
Status:          ◌ Progressing
Message:         updated replicas are still becoming available
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0 (canary)
Replicas:
  Desired:  5  Current:  5  Updated:  5  Ready:  4  Available:  4

NAME                                       KIND        STATUS               AGE    INFO
⟳ rollouts-demo                            Rollout     ◌ Progressing        121m
├──# revision:3
│  └──⧉ rollouts-demo-b4ff7678             ReplicaSet  ◌ Progressing        41m    canary
│     ├──□ rollouts-demo-b4ff7678-q65tz    Pod         ✔ Running            3m6s   ready:1/1
│     ├──□ rollouts-demo-b4ff7678-zsxv7    Pod         ✔ Running            34s    ready:1/1
│     ├──□ rollouts-demo-b4ff7678-xvzvd    Pod         ✔ Running            23s    ready:1/1
│     ├──□ rollouts-demo-b4ff7678-2pj6c    Pod         ✔ Running            11s    ready:1/1
│     └──□ rollouts-demo-b4ff7678-7ltb8    Pod         ◌ ContainerCreating  0s     ready:0/1
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  • ScaledDown         73m    stable
│     └──□ rollouts-demo-586dcdd879-8zvkg  Pod         ◌ Terminating        73m    ready:1/1
└──# revision:1
   └──⧉ rollouts-demo-9d5b487bd            ReplicaSet  • ScaledDown         121m
```

Until everything is updated:

```
Name:            rollouts-demo
Namespace:       marksugar
Status:          ✔ Healthy
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v3.0 (stable)
Replicas:
  Desired:  5  Current:  5  Updated:  5  Ready:  5  Available:  5

NAME                                       KIND        STATUS        AGE    INFO
⟳ rollouts-demo                            Rollout     ✔ Healthy     122m
├──# revision:3
│  └──⧉ rollouts-demo-b4ff7678             ReplicaSet  ✔ Healthy     42m    stable
│     ├──□ rollouts-demo-b4ff7678-q65tz    Pod         ✔ Running     3m18s  ready:1/1
│     ├──□ rollouts-demo-b4ff7678-zsxv7    Pod         ✔ Running     46s    ready:1/1
│     ├──□ rollouts-demo-b4ff7678-xvzvd    Pod         ✔ Running     35s    ready:1/1
│     ├──□ rollouts-demo-b4ff7678-2pj6c    Pod         ✔ Running     23s    ready:1/1
│     └──□ rollouts-demo-b4ff7678-7ltb8    Pod         ✔ Running     12s    ready:1/1
├──# revision:2
│  └──⧉ rollouts-demo-586dcdd879           ReplicaSet  • ScaledDown  73m
└──# revision:1
   └──⧉ rollouts-demo-9d5b487bd            ReplicaSet  • ScaledDown  122m
```
2023-07-30
linuxea: Argo CD 2.7.5 basic configuration/usage/notifications and monitoring
Argo CD is commonly used in the GitOps model: a Git repository serves as the source of truth defining the desired application state, and the desired state is deployed automatically into specified target environments. Application deployments can track updates to branches or tags on each Git commit, or be pinned to a specific version of the manifests. In Kubernetes, Argo CD supports several formats:

- kustomize
- helm charts
- ksonnet applications
- jsonnet files
- Plain directory of YAML/json manifests
- Any custom config management tool configured as a config management plugin

A controller in the cluster continuously watches the state of running applications and compares it with the state in the Git repository; any difference yields OutOfSync, and those differences are picked up by Argo CD and then synced manually or automatically. A gRPC/REST-based API provides the web UI, CLI, and CI/CD interfaces, covering:

- Application management and status reporting
- Performing application operations (e.g., sync, rollback, user-defined actions)
- Repository and cluster credential management (stored as K8s Secrets objects)
- Authentication and authorization delegated to external identity providers
- RBAC
- A listener/forwarder for Git webhook events

Besides that, a repository service generates the K8s manifests from:

- the repository URL
- the revision (commit, tag, branch)
- the application path
- template configuration: parameters, ksonnet environments, helm values.yaml, and so on

It offers many more capabilities than what this post records.

Installation

Someone will surely ask why we always install a fairly new version. The answer is that new versions generally fix old problems. Don't celebrate too early though: even as old problems are fixed, new ones follow. That's just how it is.

We install the latest 2.7.5 single-node version; for production, the HA version is recommended:

```bash
wget https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.5/manifests/install.yaml
sed -i 's@ghcr.io/dexidp/dex:v2.36.0@registry.cn-zhangjiakou.aliyuncs.com/marksugar/dex:v2.36.0@g' install.yaml
sed -i 's@redis:7.0.11-alpine@registry.cn-zhangjiakou.aliyuncs.com/marksugar/redis:7.0.11-alpine@g' install.yaml
kubectl -n argocd apply -f install.yaml
```

If you need HA, try 2.7.6; here we also swap the image registries:

```bash
wget https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.6/manifests/ha/install.yaml -O v2.7.6.yaml
kubectl create namespace argocd
sed -i 's@ghcr.io/dexidp/dex:v2.36.0@uhub.service.ucloud.cn/marksugar-k8s/dex:v2.36.0@g' v2.7.6.yaml
sed -i 's@haproxy:2.6.14-alpine@uhub.service.ucloud.cn/marksugar-k8s/haproxy:2.6.14-alpine@g' v2.7.6.yaml
sed -i 's@quay.io/argoproj/argocd:v2.7.6@uhub.service.ucloud.cn/marksugar-k8s/argocd:v2.7.6@g' v2.7.6.yaml
sed -i 's@redis:7.0.11-alpine@uhub.service.ucloud.cn/marksugar-k8s/redis:7.0.11-alpine@g' v2.7.6.yaml
kubectl apply -n argocd -f v2.7.6.yaml
```

Then, to deal with the error msg="gpg --no-permission-warning --logger-fd 1 --batch --gen-key /tmp/gpg-key-recipe3098385539 failed exit status 2", we need to delete the following field from the Deployment named argocd-repo-server (see issues 9809 and 11647):

```yaml
seccompProfile:
  type: RuntimeDefault
```

Then re-apply.

Next, download a CLI client:

```bash
VERSION=$(curl --silent "https://api.github.com/repos/argoproj/argo-cd/releases/latest" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/')
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/$VERSION/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
```

We can expose the service via Ingress; configuration for other Ingress controllers is documented at https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/. Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443), which makes a single nginx ingress object and rule for the argocd service a bit awkward, because the nginx.ingress.kubernetes.io/backend-protocol annotation accepts only one backend protocol (HTTP, HTTPS, GRPC, GRPCS). To expose the Argo CD API server with a single ingress rule and hostname, the nginx.ingress.kubernetes.io/ssl-passthrough annotation must be used to pass the TLS connection through and terminate TLS at the Argo CD API server itself:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
```

Then we add --insecure to the argocd-server args in the Deployment, like so:
```yaml
...
      containers:
      - args:
        - /usr/local/bin/argocd-server
        - --insecure
        env:
...
```

You will need ingress-nginx installed:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx --namespace ingress-nginx --create-namespace -f ./latest.yaml ingress-nginx/ingress-nginx
```

Or, simply set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap instead.
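A sketch of that ConfigMap change (the argocd-server pod must be restarted afterwards for the flag to take effect):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  server.insecure: "true"
```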
After configuring local hosts entries, the page opens. Get the password with the command below; the username is admin:

```
[root@master1 argocd]# kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
X5sqkyi-ANK5LrGu
```

Apart from that, since each ingress-nginx Ingress object supports only one protocol, another approach is to define two Ingress objects: one for HTTP/HTTPS and another for gRPC:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
    host: argocd.k8s.local
  tls:
  - hosts:
    - argocd.k8s.local
    secretName: argocd-secret # do not change, this is provided by Argo CD
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
    host: grpc.argocd.k8s.local
  tls:
  - hosts:
    - grpc.argocd.k8s.local
    secretName: argocd-secret # do not change, this is provided by Argo CD
```

We create the gRPC ingress and, with the downloaded CLI, log in to grpc.argocd.k8s.local:

```
[root@master1 argocd]# argocd login grpc.argocd.k8s.local
WARNING: server certificate had error: x509: certificate is valid for ingress.local, not grpc.argocd.k8s.local. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'grpc.argocd.k8s.local' updated
```

Then we change the password.

1) Generate a hash:

```
# argocd account bcrypt --password www.linuxea.com && echo
$2a$10$64Ywt88aWJD.LYeAzA0UfelaUSENF.paiYSyw9QehsawqW8Nokc/.
```

2) Patch the secret:

```bash
kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {
    "admin.password": "$2a$10$64Ywt88aWJD.LYeAzA0UfelaUSENF.paiYSyw9QehsawqW8Nokc/.",
    "admin.passwordMtime": "'$(date +%FT%T%Z)'"
  }}'
```

Log in again:

```
[root@master1 argocd]# argocd login grpc.argocd.k8s.local
WARNING: server certificate had error: x509: certificate is valid for ingress.local, not grpc.argocd.k8s.local. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'grpc.argocd.k8s.local' updated
[root@master1 argocd]# argocd version
argocd: v2.7.2+cbee7e6
  BuildDate: 2023-05-12T14:06:49Z
  GitCommit: cbee7e6011407ed2d1066c482db74e97e0cc6bdb
  GitTreeState: clean
  GoVersion: go1.19.9
  Compiler: gc
  Platform: linux/amd64
argocd-server: v2.7.2+cbee7e6.dirty
  BuildDate: 2023-05-12T13:43:26Z
  GitCommit: cbee7e6011407ed2d1066c482db74e97e0cc6bdb
  GitTreeState: dirty
  GoVersion: go1.19.6
  Compiler: gc
  Platform: linux/amd64
  Kustomize Version: v5.0.1 2023-03-14T01:32:48Z
  Helm Version: v3.11.2+g912ebc1
  Kubectl Version: v0.24.2
  Jsonnet Version: v0.19.1
```

API: append swagger-ui to the URL: https://argocd.k8s.local/swagger-ui

Create an application

argocd also supports being configured outside the cluster and then registering clusters, but since it is installed inside the cluster here, that is not needed. We create an application directly.

Create an application with the CLI:

```bash
argocd app create marksugar-cli \
  --repo https://gitee.com/marksugar/argocd-example.git \
  --path marksugar \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
```

The newly created project then shows up in the UI. By default Argo CD checks the Git repository every 3 minutes to judge whether the live state matches the desired state declared in Git; if not, the status becomes OutOfSync. By default no update is triggered unless automatic sync is configured via syncPolicy.

CRD creation

Besides the CLI, applications can be created through CRDs:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: marksugar-ui
  namespace: argocd
  labels:
    marksugar/marksugar-ui: prod # label
spec:
  project: my-linuxea # the project name defined below
  source:
    repoURL: https://gitee.com/marksugar/argocd-example.git
    targetRevision: master
    path: marksugar
  destination:
    server: https://kubernetes.default.svc
    namespace: default
```

Or scope it to an AppProject, restricting its permissions:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: marksugar
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-linuxea
  namespace: argocd
spec:
  description: Example Project(测试)
  sourceRepos:
  - '*'
  destinations:
  - namespace: marksugar
    server: 'https://kubernetes.default.svc'
  namespaceResourceWhitelist:
  - group: 'apps'
    kind: 'Deployment'
  - group: ''
    kind: 'Service'
  - group: ''
    kind: 'ConfigMap'
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: marksugar-ui-crd
  namespace: argocd
  labels:
    marksugar/marksugar-ui: prod # label
spec:
  project: my-linuxea # the project name defined above
  source:
    repoURL: https://gitee.com/marksugar/argocd-example.git
    targetRevision: master
    path: marksugar
  destination:
    server: https://kubernetes.default.svc
    namespace: marksugar
```

Once created, the app deploys.

Auto-sync toggle:

```yaml
  syncPolicy: # enable auto sync
    automated:
      prune: false
      selfHeal: false
```

Manual:

```yaml
  syncPolicy:
    automated: null
```

Configured as automatic, syncing starts on its own:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: marksugar
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-linuxea-auto
  namespace: argocd
spec:
  description: Example Project(测试)
  sourceRepos:
  - '*'
  destinations:
  - namespace: marksugar
    server: 'https://kubernetes.default.svc'
  namespaceResourceWhitelist:
  - group: 'apps'
    kind: 'Deployment'
  - group: ''
    kind: 'Service'
  - group: ''
    kind: 'ConfigMap'
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: marksugar-ui-auto
  namespace: argocd
  labels:
    marksugar/marksugar-ui: prod # label
spec:
  project: my-linuxea # the project name defined above
  source:
    repoURL: https://gitee.com/marksugar/argocd-example.git
    targetRevision: master
    path: marksugar
  destination:
    server: https://kubernetes.default.svc
    namespace: marksugar
  syncPolicy: # enable auto sync
    automated:
      prune: false
      selfHeal: false
```

Once created, it begins:

```
[root@master1 argocd]# kubectl -n marksugar get pod
NAME                               READY   STATUS    RESTARTS   AGE
marksugar-nginx-69ccfd5bb4-jvxsg   1/1     Running   0          21s
```
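The example above enables automation but leaves prune and selfHeal off. A fuller sketch of the spec.syncPolicy stanza — prune deletes resources removed from Git, selfHeal reverts live drift, and CreateNamespace provisions the target namespace on first sync:

```yaml
syncPolicy:
  automated:
    prune: true     # delete resources that disappeared from Git
    selfHeal: true  # re-sync when the live state drifts from Git
  syncOptions:
  - CreateNamespace=true
```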
Manual operations via the CLI

We can still drive deployments manually with the argocd binary.

1) List:

```
[root@master1 ~]# argocd app list
NAME                      CLUSTER                         NAMESPACE  PROJECT     STATUS     HEALTH   SYNCPOLICY  CONDITIONS                REPO                                            PATH       TARGET
argocd/marksugar-cli      https://kubernetes.default.svc  default    default     OutOfSync  Missing  <none>      <none>                    https://gitee.com/marksugar/argocd-example.git  marksugar
argocd/marksugar-ui-auto  https://kubernetes.default.svc  marksugar  my-linuxea  Synced     Healthy  Auto        <none>                    https://gitee.com/marksugar/argocd-example.git  marksugar  master
argocd/marksugar-ui-crd   https://kubernetes.default.svc  marksugar  my-linuxea  OutOfSync  Healthy  <none>      SharedResourceWarning(2)  https://gitee.com/marksugar/argocd-example.git  marksugar  master
```

2) Check status:

```
[root@master1 ~]# argocd app get marksugar-ui-auto
Name:               argocd/marksugar-ui-auto
Project:            my-linuxea
Server:             https://kubernetes.default.svc
Namespace:          marksugar
URL:                https://grpc.argocd.k8s.local/applications/marksugar-ui-auto
Repo:               https://gitee.com/marksugar/argocd-example.git
Target:             master
Path:               marksugar
SyncWindow:         Sync Allowed
Sync Policy:        Automated
Sync Status:        Synced to master (df898c3)
Health Status:      Healthy

GROUP  KIND        NAMESPACE  NAME             STATUS  HEALTH   HOOK  MESSAGE
       Service     marksugar  marksugar-ui     Synced  Healthy        service/marksugar-ui configured
apps   Deployment  marksugar  marksugar-nginx  Synced  Healthy        deployment.apps/marksugar-nginx unchanged
```

3) Sync manually:

```
[root@master1 ~]# argocd app sync marksugar-ui-auto
TIMESTAMP                  GROUP  KIND        NAMESPACE  NAME             STATUS  HEALTH   HOOK  MESSAGE
2023-06-08T10:35:35+08:00         Service     marksugar  marksugar-ui     Synced  Healthy
2023-06-08T10:35:35+08:00  apps   Deployment  marksugar  marksugar-nginx  Synced  Healthy
2023-06-08T10:35:36+08:00         Service     marksugar  marksugar-ui     Synced  Healthy        service/marksugar-ui unchanged
2023-06-08T10:35:36+08:00  apps   Deployment  marksugar  marksugar-nginx  Synced  Healthy        deployment.apps/marksugar-nginx unchanged

Name:               argocd/marksugar-ui-auto
Project:            my-linuxea
Server:             https://kubernetes.default.svc
Namespace:          marksugar
URL:                https://grpc.argocd.k8s.local/applications/marksugar-ui-auto
Repo:               https://gitee.com/marksugar/argocd-example.git
Target:             master
Path:               marksugar
SyncWindow:         Sync Allowed
Sync Policy:        Automated
Sync Status:        Synced to master (df898c3)
Health Status:      Healthy

Operation:          Sync
Sync Revision:      df898c3db89f1a156d7c9889bb44f0d0d56f2937
Phase:              Succeeded
Start:              2023-06-08 10:35:35 +0800 CST
Finished:           2023-06-08 10:35:36 +0800 CST
Duration:           1s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME             STATUS  HEALTH   HOOK  MESSAGE
       Service     marksugar  marksugar-ui     Synced  Healthy        service/marksugar-ui unchanged
apps   Deployment  marksugar  marksugar-nginx  Synced  Healthy        deployment.apps/marksugar-nginx unchanged
```

DingTalk notifications

Among the Argo CD components, ArgoCD Notifications supports message notifications and was already installed as part of the install above, so we only need to modify the argocd-notifications-cm ConfigMap. Before that, we create a DingTalk robot.

1) Create the robot.

2) Modify the config file:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.webhook.dingtalk: |
    url: https://oapi.dingtalk.com/robot/send?access_token=8923fbb89cc6adc7a07163be
    headers:
      - name: Content-Type
        value: application/json
  context: |
    argocdUrl: https://argocd.k8s.local
  template.app-sync-change: |
    webhook:
      dingtalk:
        method: POST
        body: |
          {
            "msgtype": "markdown",
            "markdown": {
              "title":"ArgoCD同步状态",
              "text": "### ArgoCD同步状态\n> - app名称: {{.app.metadata.name}}\n> - app同步状态: {{ .app.status.operationState.phase}}\n> - 时间:{{.app.status.operationState.startedAt}}\n> - URL: [点击跳转ArgoCD]({{.context.argocdUrl}}/applications/{{.app.metadata.name}}?operation=true) \n"
            }
          }
  trigger.on-deployed: |
    - description: Application is synced and healthy. Triggered once per commit.
      oncePer: app.status.sync.revision
      send: [app-sync-change] # template names
      # trigger condition
      when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'
  trigger.on-health-degraded: |
    - description: Application has degraded
      send: [app-sync-change]
      when: app.status.health.status == 'Degraded'
  trigger.on-sync-failed: |
    - description: Application syncing has failed
      send: [app-sync-change] # template names
      when: app.status.operationState.phase in ['Error', 'Failed']
  trigger.on-sync-running: |
    - description: Application is being synced
      send: [app-sync-change] # template names
      when: app.status.operationState.phase in ['Running']
  trigger.on-sync-status-unknown: |
    - description: Application status is 'Unknown'
      send: [app-sync-change] # template names
      when: app.status.sync.status == 'Unknown'
  trigger.on-sync-succeeded: |
    - description: Application syncing has succeeded
      send: [app-sync-change] # template names
      when: app.status.operationState.phase in ['Succeeded']
  subscriptions: |
    - recipients: [dingtalk]
      triggers: [on-sync-running, on-deployed, on-sync-failed, on-sync-succeeded]
```

Apply it to the argocd namespace:

```
[root@master1 argocd]# kubectl -n argocd apply -f argocd-message.yaml
configmap/argocd-notifications-cm configured
```

It can then be inspected with:

```bash
kubectl -n argocd get configmap argocd-notifications-cm -o json
```
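One note on the ConfigMap above: rather than embedding the robot token in plain text, the notifications engine can substitute $<key> references from the argocd-notifications-secret Secret. A sketch — the key name dingtalk-token is an arbitrary choice:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-notifications-secret
  namespace: argocd
stringData:
  dingtalk-token: 8923fbb89cc6adc7a07163be
# and in argocd-notifications-cm:
#   service.webhook.dingtalk: |
#     url: https://oapi.dingtalk.com/robot/send?access_token=$dingtalk-token
```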
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/name":"argocd-server-metrics","app.kubernetes.io/part-of":"argocd"},"name":"argocd-server-metrics","namespace":"argocd"},"spec":{"ports":[{"name":"metrics","port":8083,"protocol":"TCP","targetPort":8083}],"selector":{"app.kubernetes.io/name":"argocd-server"}}} creationTimestamp: "2023-06-06T09:18:46Z" # kubectl edit svc argocd-repo-server -n argocd apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" prometheus.io/port: "8084" # 指定8084端口为指标端口 kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"repo-server","app.kubernetes.io/name":"argocd-repo-server","app.kubernetes.io/part-of":"argocd"},"name":"argocd-repo-server","namespace":"argocd"},"spec":{"ports":[{"name":"server","port":8081,"protocol":"TCP","targetPort":8081},{"name":"metrics","port":8084,"protocol":"TCP","targetPort":8084}],"selector":{"app.kubernetes.io/name":"argocd-repo-server"}}} creationTimestamp: "2023-06-06T09:18:46Z"而后就可以发现这几个指标或者手动创建ServiceMonitor 对象来创建指标对象。 而后在Grafana 中导入 Argo CD 的 Dashboard
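作为补充,除了给 Service 加 prometheus.io/scrape 注解之外,如果 Prometheus 是由 Operator 管理、通过 ServiceMonitor 做服务发现的,也可以参考下面这个最小示意(这是假设性的写法:前提是集群内已安装 Prometheus Operator;labels 中的 release: prometheus 是假设值,需要与实际 Prometheus 实例的 serviceMonitorSelector 匹配):
cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  namespace: argocd
  labels:
    release: prometheus   # 假设值:需与 Prometheus 的 serviceMonitorSelector 匹配
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-metrics
  endpoints:
  - port: metrics         # argocd-metrics Service 中 8082 端口的端口名
EOF
argocd-server-metrics 与 argocd-repo-server 的写法相同,把 selector 和端口名换成对应 Service 的即可。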
2023年07月30日
2022-07-11
linuxea:jenkins基于钉钉的构建通知(11)
在之前的几篇中,我分别介绍了基础环境的配置,skywaling+nacos的配置,nexus3的配置,围绕sonarqube的配置和构建镜像的配置。这一篇中,主要配置消息通知阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(已实现)jenkins凭据使用(已实现)juit配置(已实现)sonarqube简单扫描(已实现)sonarqube覆盖率(已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (上一章已实现)配置docker中构建docker (上一章已实现)mvn打包(上一章已实现)sonarqube简单分支扫描(上一章已实现)基于gitlab来管理kustomize的k8s配置清单 (上一章已实现)kubectl部署 (上一章已实现)kubeclt deployment的状态跟踪 (上一章已实现)钉钉消息的构建状态推送(本章实现)前面我们断断续续的将最简单的持续集成做好,在cd阶段,使用了kustomize和argocd,并且搭配了kustomize和argocd做了gitops的部分事宜,现在们在添加一个基于钉钉的构建通知我们创建一个钉钉机器人,关键字是DEVOPS我们创建一个函数,其中采用markdown语法,如下:分别需要向DingTalk传递几个行参,分别是:mdTitle 标签,这里的标签也就是我们创建的关键字: DEVOPSmdText 详细文本atUser 需要@谁atAll @所有人SedContent 通知标题函数体如下:def DingTalk(mdTitle, mdText, atAll, atUser = '' ,SedContent){ webhook = "https://oapi.dingtalk.com/robot/send?access_token=55d35d6f09f05388c1a8f7d73955cd9b7eaf4a0dd38" sh """ curl --location --request POST ${webhook} \ --header 'Content-Type: application/json' \ --data '{ "msgtype": "markdown", "markdown": { "title": "${mdTitle}", "text": "${SedContent}\n ${mdText}" }, "at": { "atMobiles": [ "${atUser}" ], "isAtAll": "${atAll}" } }' """ }而在流水线阶段添加post,如下 post { success{ script{ // ItmesName="${JOB_NAME.split('/')[-1]}" env.SedContent="构建通知" mdText = "### ✅ \n ### 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } failure{ script{ env.SedContent="构建通知" mdText = "### ❌ \n 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } }当然,现在你看到了上面的函数传递中有很多变量,这些需要我们去获取我们在任意一个阶段中的script中,并用env.声明到全局环境变量,添加如下GIT_COMMIT_DESCRIBE: 提交信息GIT_COMMIT_TAGSHA:提交的SHA值TIMENOW_CN:可阅读的时间格式 env.GIT_COMMIT_DESCRIBE = "${sh(script:'git log --oneline --no-merges|head -1', returnStdout: true)}" env.GIT_COMMIT_TAGSHA=sh(script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() env.TIMENOW_CN=sh(script: """date +%Y年%m月%d日%H时%M分%S秒""",returnStdout: true).trim()进行构建,一旦构建完成,将会发送一段消息到钉钉如下而最终的管道流水线试图如下:完整的流水线管道代码如下try { if ( "${onerun}" == "gitlabs"){ println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" } }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git" def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . 
-f 1""",returnStdout: true).trim() def Projects_Area="dev" def apps_name="java-demo" def projectGroup="java-demo" def PACK_PATH="/usr/local/package" } //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("CheckOut"){ steps { script { println("下载代码 --> 分支: ${env.branch}") checkout( [$class: 'GitSCM', branches: [[name: "${branch}"]], extensions: [], userRemoteConfigs: [[ credentialsId: 'gitlab-mark', url: "${BASEURL}"]]]) } } } stage("unit Test"){ steps{ script{ env.GIT_COMMIT_DESCRIBE = "${sh(script:'git log --oneline --no-merges|head -1', returnStdout: true)}" env.TIMENOW_CN=sh(returnStdout: true, script: 'date +%Y年%m月%d日%H时%M分%S秒') env.GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() sh """ cd linuxea && mvn test -s /var/jenkins_home/.m2/settings.xml2 """ } } post { success { script { junit 'linuxea/target/surefire-reports/*.xml' } } } } stage("coed sonar"){ environment { def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "https://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="https://172.16.100.47" } steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=${GitLab_Address}:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" \ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} \ -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 """ } } } } stage("mvn build"){ steps { script { sh """ cd linuxea mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml2 """ } } } stage("docker build"){ steps{ script{ sh """ cd linuxea docker ps -a cp -r /usr/local/package/skywalking-agent ./ docker build -f ./Dockerfile -t $IPATH . docker push $IPATH docker rmi -f $IPATH """ } } } stage('Deploy') { steps { sh ''' [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES} git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name} echo "push latest images: $IPATH" echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log cd overlays/$Projects_Area ${PACK_PATH}/kustomize edit set image $IPATH cd ../.. git add .
git config --global push.default matching git config user.name zhengchao.tang git config user.email usertzc@163.com git commit -m "image tag $IPATH-> ${imageUrlPath}" git push -u origin ${apps_name} ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name} ''' // ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name} } // ${PACK_PATH}/kustomize build overlays/$Projects_Area/ | ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev apply -f - } stage('status watch') { steps { sh ''' ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev -n ${projectGroup} rollout status deployment ${apps_name} --watch --timeout=10m ''' } } } post { success{ script{ // ItmesName="${JOB_NAME.split('/')[-1]}" env.SedContent="构建通知" mdText = "### ✅ \n ### 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } failure{ script{ env.SedContent="构建通知" mdText = "### ❌ \n 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } } } def DingTalk(mdTitle, mdText, atAll, atUser = '' ,SedContent){ webhook = "https://oapi.dingtalk.com/robot/send?access_token=55d35d6f09f05388c1a8f7d73955cd9b7eaf4a0dd3803abdd1452e83d5b607ab" sh """ curl --location --request POST ${webhook} \ --header 'Content-Type: application/json' \ --data '{ "msgtype": "markdown", "markdown": { "title": "${mdTitle}", "text": "${SedContent}\n ${mdText}" }, "at": { "atMobiles": [ "${atUser}" ], "isAtAll": "${atAll}" } }' """ }现在,一个最简单的gitops的demo项目搭建完成参考gitops
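补充一个快速自测的办法:在把通知接入流水线之前,可以先用 curl 单独验证机器人是否可用(下面的 access_token 是占位符,需要替换成自己机器人的;由于机器人配置的关键字是 DEVOPS,消息内容里必须包含它,否则会被钉钉拒收):
webhook="https://oapi.dingtalk.com/robot/send?access_token=<你的access_token>"
curl --location --request POST "${webhook}" \
  --header 'Content-Type: application/json' \
  --data '{
    "msgtype": "markdown",
    "markdown": { "title": "DEVOPS", "text": "### DEVOPS 构建通知\n> 测试消息" }
  }'
返回 {"errcode":0,"errmsg":"ok"} 即说明 webhook 和关键字配置正常,剩下的就只是流水线内变量取值的问题了。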
2022-07-10
linuxea:基于kustomize的argocd发布实现(10)
在此前我们配置了Kustomize清单,并且通过kubectl将清单应用到k8s中,之后又做另一个状态跟踪,但这还不够。我们希望通过一个cd工具来配置管理,并且提供一个可视化界面。我们选择argocd我不会在这篇章节中去介绍uI界面到底怎么操作,因为那些显而易见。我只会介绍argocd的二进制程序客户端的操作使用,但是也仅限于完成一个app的创建,集群的添加,项目的添加。仅此而已。argocd是一个成熟的部署工具,如果有时间,我将会在后面的时间里更新其他的必要功能。阅读此篇,你将了解argocd客户端最简单的操作,和一些此前的流水线实现方式列表如下:jenkins和gitlab触发(已实现)jenkins凭据使用(已实现)juit配置(已实现)sonarqube简单扫描(已实现)sonarqube覆盖率(已实现)打包基于java的skywalking agent(已实现)sonarqube与gitlab关联 (已实现)配置docker中构建docker (已实现)mvn打包(已实现)sonarqube简单分支扫描(已实现)基于gitlab来管理kustomize的k8s配置清单(已实现)kubectl部署(已实现)kubeclt deployment的状态跟踪(已实现)kustomize和argocd(本章实现)钉钉消息的构建状态推送1.1 安装2.4.2我们在gitlab上获取此配置文件,并修改镜像此前我拉取了2.4.0和2.4.2的镜像,如下2.4.0 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:dex-v2.30.2 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:haproxy-2.0.25-alpine image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:v2.4.0 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:redis-7.0.0-alpine2.4.2 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:dex-v2.30.2 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:haproxy-2.0.25-alpine image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:v2.4.2 image: registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:redis-7.0.0-alpine分别替换所有镜像地址,如果是install.yaml就替换,如果是ha-install.yaml也替换sed -i 's@redis:7.0.0-alpine@registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:redis-7.0.0-alpine@g' sed -i 's@ghcr.io/dexidp/dex:v2.30.2@registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:dex-v2.30.2@g' sed -i 's@quay.io/argoproj/argocd:v2.4.0@registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:v2.4.0@g' sed -i 's@haproxy:2.0.25-alpine@registry.cn-hangzhou.aliyuncs.com/marksugar/argocd:haproxy-2.0.25-alpine@g'创建名称空间并applykubectl create namespace argocd kubectl apply -n argocd -f argocd.yaml更新删除不掉的时候的解决办法kubectl patch crd/appprojects.argoproj.io -p '{"metadata":{"finalizers":[]}}' --type=merge等待,到argocd组件准备完成[root@linuxea-11 ~/argocd]# kubectl -n argocd get pod NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 7m33s argocd-applicationset-controller-7bbcd5c9bd-rqn84 1/1 Running 0 7m33s argocd-dex-server-75c668865-s9x5d 1/1 Running 0 7m33s argocd-notifications-controller-bc5954bd7-gg4ks 1/1 Running 0 7m33s argocd-redis-ha-haproxy-8658c76475-hdzkv 1/1 Running 0 7m33s argocd-redis-ha-haproxy-8658c76475-jrrtl 1/1 Running 0 7m33s argocd-redis-ha-haproxy-8658c76475-rk868 1/1 Running 0 7m33s argocd-redis-ha-server-0 2/2 Running 0 7m33s argocd-redis-ha-server-1 2/2 Running 0 5m3s argocd-redis-ha-server-2 2/2 Running 0 4m3s argocd-repo-server-567dd6c487-6k89z 1/1 Running 0 7m33s argocd-repo-server-567dd6c487-rt4vq 1/1 Running 0 7m33s argocd-server-677d79497b-k72h2 1/1 Running 0 7m33s argocd-server-677d79497b-pb5gt 1/1 Running 0 7m33s配置域名访问apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-server-ingress namespace: argocd annotations: cert-manager.io/cluster-issuer: letsencrypt-prod kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" spec: rules: - host: argocd.linuxea.com http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: https创建[root@linuxea-11 ~/argocd]# kubectl apply -f argocd-ingress.yaml ingress.networking.k8s.io/argocd-server-ingress created [root@linuxea-11 ~/argocd]# kubectl -n argocd get ingress NAME CLASS HOSTS ADDRESS PORTS AGE argocd-server-ingress nginx argocd.linuxea.com 80 11s配置nodeport我们直接使用nodeport来配置apiVersion: v1 kind: Service 
metadata: labels: app.kubernetes.io/component: server app.kubernetes.io/name: argocd-server app.kubernetes.io/part-of: argocd name: argocd-server namespace: argocd spec: ports: - name: http port: 80 nodePort: 31080 protocol: TCP targetPort: 8080 - name: https port: 443 nodePort: 31443 protocol: TCP targetPort: 8080 selector: app.kubernetes.io/name: argocd-server type: NodePort用户名admin, 获取密码[root@linuxea-11 ~/argocd]# kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo QOOMW76CV8bEczKO1.2 客户端登录安装完成后,我们通过一个二进制的客户端来操作整个流程,于是我们需要下载一个Linux客户端注意: 和此前的其他包一样,如果是docker运行的jenkins,要将二进制包放到容器内,因此我提供了两种方式wget https://github.com/argoproj/argo-cd/releases/download/v2.4.2/argocd-linux-amd64如果你用私有域名的话,你本地hosts解析需要配置[root@linuxea-48 ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.100.11 argocd.linuxea.com下载二进制文件后进行登录即可, 我使用的是nodeportargocd login 172.16.100.11:31080 --grpc-web[root@linuxea-48 ~/.kube]# argocd login 172.16.100.11:31080 --grpc-web WARNING: server certificate had error: x509: cannot validate certificate for 172.16.100.11 because it doesn't contain any IP SANs. Proceed insecurely (y/n)? y Username: admin Password: 'admin:login' logged in successfully Context '172.16.100.11:31080' updated登录会在一段时间后失效,于是我门需要些一个脚本过一段时间登录一次argocd login 172.16.100.11:31080 --grpc-web # 登录 argocd login 172.16.15.137:31080 --grpc-web最好写在脚本里面登录即可容器外脚本# cat /login.sh KCONFIG=/root/.kube/config-1.23.1-dev argocd login 172.16.100.11:31080 --username admin --password $(kubectl --kubeconfig=$KCONFIG -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ) --insecure --grpc-web容器内下载argocd二进制文件存放到已经映射的目录内,并添加执行权限[root@linuxea-48 /data/jenkins-latest/jenkins_home]# cp /usr/local/sbin/argocd /data/jenkins-latest/package/ [root@linuxea-48 /data/jenkins-latest/jenkins_home]# ll /data/jenkins-latest/package/ total 251084 drwxr-xr-x 6 root root 99 Sep 5 2021 apache-maven-3.8.2 -rw-r--r-- 1 root root 131352410 Jul 9 17:24 argocd drwxr-xr-x 6 root root 105 Sep 6 2021 gradle-6.9.1 drwxr-xr-x 2 root root 16 Oct 18 2021 jq-1.6 -rwxr-xr-x 1 root root 40230912 Jul 9 15:08 kubectl -rwxr-xr-x 1 root root 11976704 Jul 9 15:08 kustomize drwxr-xr-x 6 1001 1001 108 Aug 31 2021 node-v14.17.6-linux-x64 drwxrwxr-x 10 1001 1002 221 Jun 18 11:37 skywalking-agent -rw-r--r-- 1 root root 30443381 Jun 29 23:46 skywalking-java-8.11.0.tar.gz drwxr-xr-x 6 root root 51 May 7 2021 sonar-scanner-4.6.2.2472-linux -rw-r--r-- 1 root root 43099390 Sep 11 2021 sonar-scanner-cli-4.6.2.2472-linux.zip [root@linuxea-48 /data/jenkins-latest/jenkins_home]# chmod +x /data/jenkins-latest/package/argocd 还需要k8s的config配置文件,如果你阅读了上一篇基于jenkins的kustomize配置发布(9),那这里当然是轻车熟路了我的二进制文件存放在/usr/local/package - /data/jenkins-latest/package:/usr/local/package由于我门在容器里面,我门复制config文件到一个位置而后指定即可[root@linuxea-48 ~]# cp -r ~/.kube /data/jenkins-latest/jenkins_home/ [root@linuxea-48 ~]# ls /data/jenkins-latest/jenkins_home/.kube/ cache config config-1.20.2-test config-1.22.1-prod config-1.22.1-test config-1.23.1-dev config2 marksugar-dev-1 marksugar-prod-1容器内登录KUBE_PATH=/usr/local/package KCONFIG=/var/jenkins_home/.kube/config-1.23.1-dev ${KUBE_PATH}/argocd login 172.16.100.11:31080 --username admin --password $(${KUBE_PATH}/kubectl --kubeconfig=$KCONFIG -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ) --insecure --grpc-web如下bash-5.1# 
KUBE_PATH=/usr/local/package bash-5.1# KCONFIG=/var/jenkins_home/.kube/config-1.23.1-dev bash-5.1# ${KUBE_PATH}/argocd login 172.16.100.11:31080 --username admin --password $(${KUBE_PATH}/kubectl --kubeconfig=$KCONFIG -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ) --insecure --grpc-web 'admin:login' logged in successfully Context '172.16.100.11:31080' updated在上面我们说过,一旦登录了只会,登录的凭据是会失效的,因此我们需要在计划任务里面,5个小时登录一次。而后使用计划任务进行登录即可0 5 * * * /bin/bash /login.sh查看版本信息[root@linuxea-48 ~]# argocd version --grpc-web argocd: v2.4.2+c6d0c8b BuildDate: 2022-06-21T21:03:41Z GitCommit: c6d0c8baaa291cd68465acd7ad6bef58b2b6f942 GitTreeState: clean GoVersion: go1.18.3 Compiler: gc Platform: linux/amd64 argocd-server: v2.4.2+c6d0c8b BuildDate: 2022-06-21T20:42:05Z GitCommit: c6d0c8baaa291cd68465acd7ad6bef58b2b6f942 GitTreeState: clean GoVersion: go1.18.3 Compiler: gc Platform: linux/amd64 Kustomize Version: v4.4.1 2021-11-11T23:36:27Z Helm Version: v3.8.1+g5cb9af4 Kubectl Version: v0.23.1 Jsonnet Version: v0.18.01.2.1. 集群凭据管理通常可能存在多个集群,因此,我们使用配置参数指定即可如果只有一个,无需指定,默认config[root@linuxea-48 ~]# ll ~/.kube/ total 56 drwxr-x--- 4 root root 35 Jun 22 00:09 cache -rw-r--r-- 1 root root 6254 Jun 21 23:58 config-1.20.2-test -rw-r--r-- 1 root root 6277 Jun 22 00:07 config-1.22.1-prod -rw-r--r-- 1 root root 6277 Jun 22 00:06 config-1.22.1-test -rw-r--r-- 1 root root 6193 Jun 22 00:09 config-1.23.1-dev -rw-r--r-- 1 root root 6246 Mar 4 23:55 config2 -rw-r--r-- 1 root root 6277 Aug 22 2021 marksugar-dev-1 -rw-r--r-- 1 root root 6277 Aug 22 2021 marksugar-prod-1 如果有多个,需要指定配置文件[root@linuxea-48 ~/.kube]# kubectl --kubeconfig /root/.kube/config-1.23.1-dev -n argocd get pod NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 1 (12m ago) 23h argocd-applicationset-controller-7bbcd5c9bd-rqn84 1/1 Running 1 (12m ago) 23h argocd-dex-server-75c668865-s9x5d 1/1 Running 1 (12m ago) 23h argocd-notifications-controller-bc5954bd7-gg4ks 1/1 Running 1 (12m ago) 23h argocd-redis-ha-haproxy-8658c76475-hdzkv 1/1 Running 1 (12m ago) 23h argocd-redis-ha-haproxy-8658c76475-jrrtl 1/1 Running 1 (12m ago) 23h argocd-redis-ha-haproxy-8658c76475-rk868 1/1 Running 1 (12m ago) 23h argocd-redis-ha-server-0 2/2 Running 2 (12m ago) 23h argocd-redis-ha-server-1 2/2 Running 2 (12m ago) 23h argocd-redis-ha-server-2 2/2 Running 2 (12m ago) 23h argocd-repo-server-567dd6c487-6k89z 1/1 Running 1 (12m ago) 23h argocd-repo-server-567dd6c487-rt4vq 1/1 Running 1 (12m ago) 23h argocd-server-677d79497b-k72h2 1/1 Running 1 (12m ago) 23h argocd-server-677d79497b-pb5gt 1/1 Running 1 (12m ago) 23h\1.2.2 将集群加入argocd仍然需要重申下环境变量的配置export KUBECONFIG=$HOME/.kube/config-1.23.1-dev而后在查看当前的集群[root@linuxea-48 ~/.kube]# kubectl config get-contexts -o name context-cluster1将此集群加入到argocd[root@linuxea-48 ~/.kube]# argocd cluster add context-cluster1 WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `context-cluster1` with full cluster level privileges. Do you want to continue [y/N]? 
y INFO[0008] ServiceAccount "argocd-manager" created in namespace "kube-system" INFO[0008] ClusterRole "argocd-manager-role" created INFO[0008] ClusterRoleBinding "argocd-manager-role-binding" created Cluster 'https://172.16.100.11:6443' added这里添加完成后,在settings->Clusters 中也将会看到容器内首先将config文件复制到映射的目录内,比如/var/jenkins_home/# 配置kubeconfig位置 bash-5.1# export KUBECONFIG=/var/jenkins_home/.kube/config-1.23.1-dev # 复制二进制文件到sbin,仅仅是方便操作 bash-5.1# cp /usr/local/package/argocd /usr/sbin/ bash-5.1# cp /usr/local/package/kubectl /usr/sbin/ # 测试 bash-5.1# kubectl get pod NAME READY STATUS RESTARTS AGE nfs-client-provisioner-59bd97ddb-qcrpj 1/1 Running 18 (7h51m ago) 26d # 查看当前contexts名称 bash-5.1# kubectl config get-contexts -o name context-cluster1 # 添加到argocd bash-5.1# argocd cluster add context-cluster WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `kubernetes-admin@kubernetes` with full cluster level privileges. Do you want to continue [y/N]? WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `kubernetes-admin@kubernetes` with full cluster level privileges. Do you want to continue [y/N]? y INFO[0003] ServiceAccount "argocd-manager" created in namespace "kube-system" INFO[0003] ClusterRole "argocd-manager-role" created INFO[0003] ClusterRoleBinding "argocd-manager-role-binding" created Cluster 'https://172.16.100.11:6443' added添加完成1.3 定义repo存储库定于存储库有两种方式分别是ssh和http,都可以使用,参考官方文档1.3.1 密钥如果已经有现成的密钥,则不需要创建,如果没有,可以使用ssh-keygen -t ed25519 生成密钥, 并且添加到gitlab中# ssh-keygen -t ed25519 -f /home/jenkins_home/.ssh/ # ls /home/jenkins_home/.ssh/ -ll 总用量 8 -rw------- 1 root root 399 7月 8 16:44 id_rsa -rw-r--r-- 1 root root 93 7月 8 16:44 id_rsa.pubargocd添加git,指定~/.ssh/id_rsa,并使用--insecure-ignore-host-key选项[root@linuxea-48 ~/.kube]# argocd repo add git@172.16.100.47:pipeline-ops/marksugar-ui.git --ssh-private-key-path ~/.ssh/id_rsa --insecure-ignore-host-key Repository 'git@172.16.100.47:pipeline-ops/marksugar-ui.git' added这里添加完成在settings->repositories界面将会看到一个存储库容器内和上面一样,如果已经有现成的密钥,则不需要创建,如果没有,可以使用ssh-keygen -t ed25519 生成密钥, 并且将id_rsa.pub添加到gitlab中下面是docker-compose的密钥 volumes: .... 
- /home/jenkins_home/.ssh/:/root/.ssh我们在上面已经添加了marksugar-ui, 如果有多个项目,多次添加即可我们开始添加 java-demogit@172.16.100.47:devops/k8s-yaml.git是kustmoize配置清单的地址argocd repo add git@172.16.100.47:devops/k8s-yaml.git --ssh-private-key-path ~/.ssh/id_rsa --insecure-ignore-host-keybash-5.1# argocd repo add git@172.16.100.47:devops/k8s-yaml.git --ssh-private-key-path ~/.ssh/id_rsa --insecure-ignore-host-key Repository 'git@172.16.100.47:devops/k8s-yaml.git' added1.3.2 http我门仍然可以考虑使用http来使用,官方的示例如下argocd repo add https://github.com/argoproj/argocd-example-apps --username <username> --password <password>我的环境如下配置:argocd repo add https://172.16.15.136:180/devops/k8s-yaml --username root --password gitlab.com # 添加repo root@ca060212e6f6:/var/jenkins_home# argocd repo add https://172.16.15.136:180/devops/k8s-yaml.git --username root --password gitlab.com Repository 'https://172.16.15.136:180/devops/k8s-yaml.git' added1.4 定义项目AppProject CRD 是代表应用程序逻辑分组的 Kubernetes 资源对象。它由以下关键信息定义:sourceRepos引用项目中的应用程序可以从中提取清单的存储库。destinations引用项目中的应用程序可以部署到的集群和命名空间(不要使用该name字段,仅server匹配该字段)。roles定义了他们对项目内资源的访问权限的实体列表。一个示例规范如下:在创建之前,我们先在集群内创建一个名称空间:marksugarkubectl create ns marksugar声明式配置如下,指定name,指定marksugar部署的名称空间,其他默认 destinations: - namespace: marksugar server: 'https://172.16.100.11:6443'更多时候我们限制项目内使用的范围,比如我们只配置使用的如:deployment,service,configmap,这些配置取决于控制器apiVersion: v1 kind: ConfigMap ... --- apiVersion: v1 kind: Service ...and DeploymentapiVersion: apps/v1 kind: Deployment如果此时有ingress,那么配置就如下 - group: 'networking.k8s.io' kind: 'Ingress'以此推论。最终我的配置如下: namespaceResourceWhitelist: - group: 'apps' kind: 'Deployment' - group: '' kind: 'Service' - group: '' kind: 'ConfigMap'一个完整的配置如下:apiVersion: argoproj.io/v1alpha1 kind: AppProject metadata: name: my-linuxea # 名称 # name: marksugar namespace: argocd # finalizers: # - resources-finalizer.argocd.argoproj.io spec: description: Example Project(测试) # 更详细的内容 sourceRepos: - '*' destinations: - namespace: marksugar # 名称空间 server: 'https://172.16.100.11:6443' # k8s api地址 # clusterResourceWhitelist: # - group: '' # kind: Namespace # namespaceResourceBlacklist: # - group: '' # kind: ResourceQuota # - group: '' # kind: LimitRange # - group: '' # kind: NetworkPolicy namespaceResourceWhitelist: - group: 'apps' kind: 'Deployment' # 名称空间的内允许让argocd当前app使用的的kind - group: '' kind: 'Service' # 名称空间的内允许让argocd当前app使用的的kind - group: '' kind: 'ConfigMap' # 名称空间的内允许让argocd当前app使用的的kind # kind: Deployment # - group: 'apps' # kind: StatefulSet # roles: # - name: read-only # description: Read-only privileges to my-project # policies: # - p, proj:my-project:read-only, applications, get, my-project/*, allow # groups: # - test-env # - name: ci-role # description: Sync privileges for guestbook-dev # policies: # - p, proj:my-project:ci-role, applications, sync, my-project/guestbook-dev, allow # jwtTokens: # - iat: 1535390316上面的这个有太多注释,精简一下,并进行成我门实际的参数,最终如下:apiVersion: argoproj.io/v1alpha1 kind: AppProject metadata: name: my-linuxea-java-demo namespace: argocd spec: description: Example Project(测试) sourceRepos: - '*' destinations: - namespace: java-demo server: 'https://172.16.100.11:6443' namespaceResourceWhitelist: - group: 'apps' kind: 'Deployment' - group: '' kind: 'Service' - group: '' kind: 'ConfigMap'执行PS E:\ops\k8s-1.23.1-latest\gitops\argocd> kubectl.exe apply -f .\project-new.yaml appproject.argoproj.io/my-linuxea-java-demo created执行完成后,将会创建一个projects,在settings->projects查看1.5 定义应用Application CRD 是 Kubernetes 资源对象,表示环境中已部署的应用程序实例。它由两个关键信息定义:source对 Git 
中所需状态的引用(存储库、修订版、路径、环境)destination对目标集群和命名空间的引用。对于集群,可以使用 server 或 name 之一,但不能同时使用两者(这将导致错误)。当服务器丢失时,它会根据名称进行计算并用于任何操作。一个最小的应用程序规范如下:apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: marksugar-ui namespace: argocd labels: marksugar/marksugar-ui: prod # 标签 spec: project: my-linuxea # 定义的项目名 source: repoURL: git@172.16.100.47:pipeline-ops/marksugar-ui.git # git地址 targetRevision: master # git分支 path: overlays/marksugar-ui/prod/ # git路径对应到目录下的配置 destination: server: https://172.16.100.11:6443 # k8s api namespace: marksugar # 名称空间有关其他字段,请参阅application.yaml。只要您完成了入门的第一步,您就可以应用它kubectl apply -n argocd -f application.yaml,Argo CD 将开始部署留言簿应用程序。或者使用下面客户端命令进行配置,比如我此前配置去的marksugar-ui就是命令行配置的,如下:argocd app create marksugar-ui --repo git@172.16.100.47:pipeline-ops/marksugar-ui.git --revision master --path overlays/marksugar-ui/prod/ --dest-server https://172.16.100.11:6443 --dest-namespace marksugar --project=my-linuxea --label=marksugar/marksugar-ui=prod我门仍然进行修改成我门希望的配置样子,yaml如下我这里使用的是httpapiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: java-demo namespace: argocd labels: marksugar/app: java-demo spec: project: my-linuxea-java-demo source: repoURL: git@172.16.100.47:devops/k8s-yaml.git targetRevision: java-demo path: overlays/dev/ destination: server: https://172.16.100.11:6443 namespace: java-demo此时创建了一个appPS E:\ops\k8s-1.23.1-latest\gitops\argocd\java-demo> kubectl.exe apply -f .\app.yaml application.argoproj.io/java-demo created如下只有同步正常,healthy才会变绿如果有多个名称空间,不想混合显示,我们在页面中在做左侧,选择cluster的名称空间后,才能看到名称空间下的app,也就是应用如果你配置的是http的git地址就会是下面这个样子配置apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: java-demo namespace: argocd labels: marksugar/app: java-demo spec: project: my-linuxea-java-demo source: repoURL: https://172.16.15.136:180/devops/k8s-yaml.git targetRevision: java-demo path: overlays/dev/ destination: server: https://172.16.15.137:6443 namespace: java-demo视图1.6 手动同步我门可以点击web页面的上面的sync来进行同步,也可以用命令行手动同步使其生效我门通过argocd app list查看当前的已经有的项目示例:密钥root@9c0cad5ebce8:/# argocd app list NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET java-demo https://172.16.15.137:6443 java-demo my-linuxea-java-demo Unknown Healthy <none> ComparisonError git@172.16.15.136:23857/devops/k8s-yaml.git overlays/dev/ java-demohttproot@ca060212e6f6:/var/jenkins_home# argocd app list NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET java-demo https://172.16.15.137:6443 java-demo my-linuxea-java-demo OutOfSync Missing <none> <none> https://172.16.15.136:180/devops/k8s-yaml.git overlays/dev/ java-demo而我们现在的是这样的bash-5.1# argocd app list NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET java-demo https://172.16.100.11:6443 java-demo my-linuxea-java-demo OutOfSync Missing <none> <none> git@172.16.100.47:devops/k8s-yaml.git overlays/dev/ java-demo marksugar-ui https://172.16.100.11:6443 marksugar my-linuxea Synced Healthy <none> <none> git@172.16.100.47:pipeline-ops/marksugar-ui.git overlays/marksugar-ui/prod/ master而后进行同步即可argocd app sync java-demo --retry-backoff-duration=10s -l marksugar/app=java-demo如下bash-5.1# argocd app sync java-demo --retry-backoff-duration=10s -l marksugar/app=java-demo TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE 2022-07-09T19:20:26+08:00 ConfigMap java-demo envinpod-74t9b8htb6 Synced 2022-07-09T19:20:26+08:00 Service java-demo java-demo OutOfSync Missing 2022-07-09T19:20:26+08:00 apps Deployment java-demo java-demo Synced Healthy 
2022-07-09T19:20:27+08:00 Service java-demo java-demo OutOfSync Healthy 2022-07-09T19:20:27+08:00 ConfigMap java-demo envinpod-74t9b8htb6 Synced configmap/envinpod-74t9b8htb6 unchanged 2022-07-09T19:20:27+08:00 Service java-demo java-demo OutOfSync Healthy service/java-demo created 2022-07-09T19:20:27+08:00 apps Deployment java-demo java-demo Synced Healthy deployment.apps/java-demo configured Name: java-demo Project: my-linuxea-java-demo Server: https://172.16.100.11:6443 Namespace: java-demo URL: https://172.16.100.11:31080/applications/java-demo Repo: git@172.16.100.47:devops/k8s-yaml.git Target: java-demo Path: overlays/dev/ SyncWindow: Sync Allowed Sync Policy: <none> Sync Status: Synced to java-demo (fd1286f) Health Status: Healthy Operation: Sync Sync Revision: fd1286f64d1edac2def43d4a37bcc13a9f0286d0 Phase: Succeeded Start: 2022-07-09 19:20:26 +0800 CST Finished: 2022-07-09 19:20:27 +0800 CST Duration: 1s Message: successfully synced (all tasks run) GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE ConfigMap java-demo envinpod-74t9b8htb6 Synced configmap/envinpod-74t9b8htb6 unchanged Service java-demo java-demo Synced Healthy service/java-demo created apps Deployment java-demo java-demo Synced Healthy deployment.apps/java-demo configured TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE 2022-07-09T19:20:28+08:00 apps Deployment java-demo java-demo Synced Healthy 2022-07-09T19:20:28+08:00 ConfigMap java-demo envinpod-74t9b8htb6 Synced 2022-07-09T19:20:28+08:00 Service java-demo java-demo Synced Healthy 2022-07-09T19:20:28+08:00 apps Deployment java-demo java-demo Synced Healthy deployment.apps/java-demo configured 2022-07-09T19:20:28+08:00 ConfigMap java-demo envinpod-74t9b8htb6 Synced configmap/envinpod-74t9b8htb6 unchanged 2022-07-09T19:20:28+08:00 Service java-demo java-demo Synced Healthy service/java-demo unchanged Name: java-demo Project: my-linuxea-java-demo Server: https://172.16.100.11:6443 Namespace: java-demo URL: https://172.16.100.11:31080/applications/java-demo Repo: git@172.16.100.47:devops/k8s-yaml.git Target: java-demo Path: overlays/dev/ SyncWindow: Sync Allowed Sync Policy: <none> Sync Status: Synced to java-demo (fd1286f) Health Status: Healthy Operation: Sync Sync Revision: fd1286f64d1edac2def43d4a37bcc13a9f0286d0 Phase: Succeeded Start: 2022-07-09 19:20:27 +0800 CST Finished: 2022-07-09 19:20:28 +0800 CST Duration: 1s Message: successfully synced (all tasks run) GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE ConfigMap java-demo envinpod-74t9b8htb6 Synced configmap/envinpod-74t9b8htb6 unchanged Service java-demo java-demo Synced Healthy service/java-demo unchanged apps Deployment java-demo java-demo Synced Healthy deployment.apps/java-demo configured同步完成后状态就会发生改变命令行查看bash-5.1# argocd app list NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET java-demo https://172.16.100.11:6443 java-demo my-linuxea-java-demo Synced Healthy <none> <none> git@172.16.100.47:devops/k8s-yaml.git overlays/dev/ java-demo marksugar-ui https://172.16.100.11:6443 marksugar my-linuxea Synced Healthy <none> <none> git@172.16.100.47:pipeline-ops/marksugar-ui.git overlays/marksugar-ui/prod/ master打开页面查看如果是http的这里会显示http此时正在拉取镜像状态是 Progressing,我们等待拉取完成,而后选中后会点击进入详情页面项目内的仪表盘功能如下图一旦镜像完成拉取,并且runing起来,则显示健康仪表盘功能如下图回到k8s查看[root@linuxea-01 .ssh]# kubectl get all -n java-demo NAME READY STATUS RESTARTS AGE pod/java-demo-6474cb8fc8-6zwlt 1/1 Running 0 7m45s pod/java-demo-6474cb8fc8-92sw7 1/1 Running 0 7m45s pod/java-demo-6474cb8fc8-k8985 1/1 
Running 0 7m45s pod/java-demo-6474cb8fc8-ndzpl 1/1 Running 0 7m45s pod/java-demo-6474cb8fc8-rxg2k 1/1 Running 0 7m45s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/java-demo NodePort 10.111.26.148 <none> 8080:31180/TCP 24h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/java-demo 5/5 5 5 7m45s NAME DESIRED CURRENT READY AGE replicaset.apps/java-demo-6474cb8fc8 5 5 5 7m45s
1.7 加入流水线
阅读过上一篇基于jenkins的kustomize配置发布(9)你大概就知道,整个简单的流程是怎么走的,我们复制过来修改一下。当前的流水线阶段,步骤大致如下:
1.判断本地是否有git的目录,如果有就删除
2.拉取git,并切换到分支
3.追加当前的镜像版本到一个buildhistory的文件中
4.cd到目录中修改镜像
5.修改完成后将修改推送到git仓库
6.argocd同步
与之不同的就是将kustomize和kubectl改成了argocd,代码块如下: stage('Deploy') { steps { sh ''' [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES} git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name} echo "push latest images: $IPATH" echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log cd overlays/$Projects_Area ${PACK_PATH}/kustomize edit set image $IPATH cd ../.. git add . git config --global push.default matching git config user.name zhengchao.tang git config user.email usertzc@163.com git commit -m "image tag $IPATH-> ${imageUrlPath}" git push -u origin ${apps_name} ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name} ''' } } 仅此而已,在上一篇中忘了截图。与此同时,gitlab上已经有了一个版本的历史记录。argocd最简单的示例到此告一段落。参考gitops
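补充:如果不希望每次都在流水线里显式执行 argocd app sync,也可以给应用开启自动同步策略,让 argocd 在检测到 git 变更后自行同步。下面是一个最小示意(应用名沿用上文的 java-demo;--auto-prune 和 --self-heal 按需选用):
# 仅开启自动同步
argocd app set java-demo --sync-policy automated
# 可选:自动清理已从 git 中删除的资源,并自动纠正集群内的手工改动
argocd app set java-demo --sync-policy automated --auto-prune --self-heal
开启之后,流水线中手动 sync 的那一步通常就可以省略了。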
2022-07-09
linuxea:基于jenkins的kustomize配置发布(9)
在之前的几篇中,我分别介绍了基础环境的配置,skywaling+nacos的配置,nexus3的配置,围绕sonarqube的配置和构建镜像的配置。这一篇中,基于构建的镜像进行清单编排。我们需要一种工具来管理配置清单。阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(已实现)jenkins凭据使用(已实现)juit配置(已实现)sonarqube简单扫描(已实现)sonarqube覆盖率(已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (上一章已实现)配置docker中构建docker (上一章已实现)mvn打包(上一章已实现)sonarqube简单分支扫描(上一章已实现)基于gitlab来管理kustomize的k8s配置清单(本章实现)kubectl部署(本章实现)kubeclt deployment的状态跟踪(本章实现)钉钉消息的构建状态推送没错,我移情别恋了,在Helm和kustomize中,我选择后者。最大的原因是因为kustomize简单,易于维护。无论从那个角度,我都找不到不用kustomize的理由。这倒不是因为kustomize是多么优秀,仅仅是因为kustomize的方式让一切变得都简单。Helm和kustomizehelm几乎可以完成所有的操作,但是helm的问题是学习有难度,对于小白不友好,配置一旦过多调试将会更复杂。也是因为这种限制,那么使用helm的范围就被缩小了,不管在什么条件下,它都不在是优选。kustomize更直白,无论是开发,还是运维新手,都可以快速上手进行修改添加等基础配置。kustomizekustomize用法在官网的github上已经有所说明了,并且这里温馨的提供了中文示例。讨论如何学习kustomize不在本章的重点遵循kustmoize的版本,在https://github.com/kubernetes-sigs/kustomize/releases找到一个版本,通过https://toolwa.com/github/加速下载Kubectl 版本自定义版本< v1.14不适用v1.14-v1.20v2.0.3v1.21v4.0.5v1.22v4.2.0[root@k8s-01 linuxea]# kustomize version {Version:kustomize/v4.5.5 GitCommit:daa3e5e2c2d3a4b8c94021a7384bfb06734bcd26 BuildDate:2022-05-20T20:25:40Z GoOs:linux GoArch:amd64}创建必要的目录结构阅读示例中的示例:devops和开发配合管理配置数据有助于理解kustomize配置方法场景:在生产环境中有一个基于 Java 由多个内部团队对于业务拆分了不通的组并且有不同的项目的应用程序。这些服务在不同的环境中运行:development、 testing、 staging 和 production,有些配置需要频繁修改的。如果只是维护一个大的配置文件是非常麻烦且困难的 ,而这些配置文件也是需要专业运维人员或者devops工程师来进行操作的,这里面包含了一些片面且偏向运维的工作是开发人员不必知道的。例如:生产环境的敏感数据关键的登录凭据等这些在kustomize中被分成了不通的类因此,kustomize提供了混合管理办法基于相同的 base 创建 n 个 overlays 来创建 n 个集群环境的方法我们将使用 n==2,例如,只使用 development 和 production ,这里也可以使用相同的方法来增加更多的环境。运行 kustomize build 基于 overlay 的 target 来创建集群环境。为了让这一切开始运行,准备如下创建kustomize目录结构创建并配置kustomize配置文件最好创建gitlab项目,将配置存放在gitlab开始此前我写了一篇kustomize变量传入有过一些介绍,我们在简单补充一下。kustomize在1.14版本中已经是Kubectl内置的命令,并且支持kubernetes的原生可复用声明式配置的插件。它引入了一种无需模板的方式来自定义应用程序配置,从而简化了现成应用程序的使用。Kustomize 遍历 Kubernetes 清单以添加、删除或更新配置选项。它既可以作为独立的二进制文件使用,也可以作为kubectl来使用更多的背景可参考它的白皮书,这些在github的Declarative application management in Kubernetes存放。因为总的来说,这篇不是让你如何去理解背后的故事,而是一个最简单的示例常见操作在项目中为所有 Kubernetes 对象设置贯穿性字段是一种常见操作。 贯穿性字段的一些使用场景如下:为所有资源设置相同的名字空间为所有对象添加相同的前缀或后缀为对象添加相同的标签集合为对象添加相同的注解集合为对象添加相同的资源限制以及以及副本数这些通过在overlays目录下不同的配置来区分不通的环境所用的清单信息安装遵循github版本对应规则Kubectl versionKustomize version< v1.14n/av1.14-v1.20v2.0.3v1.21v4.0.5v1.22v4.2.0我的集群是1.23.1,因此我下载4.5.4PS E:\ops\k8s-1.23.1-latest\gitops> kustomize version {Version:kustomize/v4.5.4 GitCommit:cf3a452ddd6f83945d39d582243b8592ec627ae3 BuildDate:2022-03-28T23:12:45Z GoOs:windows GoArch:amd64}java-demo我这里已经配置了一个已经配置好的环境,我将会在这里简单介绍使用方法和配置,我不会详细说明deployment控制器的配置清单,也不会说明和kustomize基本使用无关的配置信息,我只会尽可能的在这个简单的示例中说明整个kustomize的在本示例中的用法。简述:kustomize需要base和Overlays目录,base可以是多个,overlays也可以是多个,overlays下的文件最终会覆盖到base的配置之上,只要配置是合理的,base的配置应该将有共性的配置最终通过overlays来进行配置,以此应对多个环境的配置。java-demo是一个无状态的java应用,使用的是Deployment控制器进行配置,并且创建一个service,于此同时传入skywalking的环境变量信息。1. 
目录结构目录结构如下:# tree ./ ./ ├── base │ ├── deployment.yaml │ ├── kustomization.yaml │ └── service.yaml ├── overlays │ ├── dev │ │ ├── env.file │ │ ├── kustomization.yaml │ │ └── resources.yaml │ └── prod │ ├── kustomization.yaml │ ├── replicas.yaml │ └── resources.yaml └── README.md 4 directories, 11 files其中两目录如下:./ ├── base ├── overlays └── README.mdbase: 目录作为基础配置目录,真实的配置文件在这个文件下overlays: 目录作为场景目录,描述与 base 应用配置的差异部分来实现资源复用而在overlays目录下,又有两个目录,分别是dev和prod,分别对应俩个环境的配置,这里可以任意起名来区分,因为在这两个目录下面存放的是各自不通的配置./ ├── base ├── overlays │ ├── dev │ └── prod └── README.md1.1 imagePullSecrets除此之外,我们需要一个拉取镜像的信息使用cat ~/.docker/config.json |base64获取到base64字符串编码,而后复制到.dockerconfigjson: >-下即可apiVersion: v1 data: .dockerconfigjson: >- ewoJImkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTIgKGxpbnV4KSIKCX0KfQ== kind: Secret metadata: name: 156pull namespace: java-demo type: kubernetes.io/dockerconfigjson2. base目录base目录下分别有三个文件,分别如下├── base │ ├── deployment.yaml │ ├── kustomization.yaml │ └── service.yaml在deployment.yaml中定义必要的属性不定义场景的指标,如标签,名称空间,副本数量和资源限制定义名称,镜像地址,环境变量名这些不定义的属性通过即将配置的overlays中的配置进行贯穿覆盖到这个基础配置之上必须定义的属性表明了贯穿的属性和基础的配置是一份这里的环境变量用的是configmap的方式,值是通过后面传递过来的。如下deployment.yamlapiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: selector: matchLabels: template: metadata: labels: spec: containers: - image: harbor.marksugar.com/java/linuxea-2022 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_NAME - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_TRACE_IGNORE_PATH - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_COLLECTOR_BACKEND_SERVICES imagePullSecrets: - name: 156pull restartPolicy: Alwaysservice.yamlapiVersion: v1 kind: Service metadata: name: java-demo spec: type: NodePort ports: - port: 8080 targetPort: 8080 nodePort: 31180kustomization.yamlkustomization.yaml引入这两个配置文件resources: - deployment.yaml - service.yaml执行 kustomize build /base ,得到的结果如下,这就是当前的原始清单apiVersion: v1 kind: Service metadata: name: java-demo spec: ports: - nodePort: 31180 port: 8080 targetPort: 8080 type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: selector: matchLabels: null template: metadata: labels: null spec: containers: - env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: key: SW_AGENT_NAME name: envinpod - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: key: SW_AGENT_TRACE_IGNORE_PATH name: envinpod - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: key: SW_AGENT_COLLECTOR_BACKEND_SERVICES name: envinpod image: harbor.marksugar.com/java/linuxea-2022:202207091551 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 imagePullSecrets: - name: 156pull restartPolicy: Always3. 
overlays目录首先,在overlays目录下是有dev和prod目录的,我们先看在dev目录下的kustomization.yamlkustomization.yaml中的内容,包含一组资源和相关的自定义信息,如下更多用法参考官方文档或者github社区kustomization.yamlapiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization patchesStrategicMerge: - resources.yaml # 当如当前的文件 namespace: java-demo # 名称空间 images: - name: harbor.marksugar.com/java/linuxea-2022 # 镜像url必须保持和base中一致 newTag: '202207072119' # 镜像tag bases: - ../../base # 引入bases基础文件 # configmap变量 configMapGenerator: - name: envinpod # 环境变量名称 env: env.file # 环境变量位置 # 副本数 replicas: - name: java-demo # 名称必须保持一致 count: 5 # namePrefix: dev- # pod前缀 # nameSuffix: "-001" # pod后缀 commonLabels: app: java-demo # 标签 # logging: isOk # commonAnnotations: # oncallPager: 897-001删掉那些注释后如下apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization patchesStrategicMerge: - resources.yaml namespace: java-demo images: - name: harbor.marksugar.com/java/linuxea-2022 newTag: '202207071059' bases: - ../../base configMapGenerator: - name: envinpod env: env.file replicas: - name: java-demo count: 5 commonLabels: app: java-demoresources.yaml resources.yaml 中的name必须保持一致apiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: template: spec: containers: - name: java-demo resources: limits: cpu: "1" memory: 2048Mi requests: cpu: "1" memory: 2048Mienv.fileenv.file定义的变量是对应在base中的,这些是skwayling中的必要信息,参考kubernetes中skywalking9.0部署使用,env的用法参考kustomize变量引入SW_AGENT_NAME=test::java-demo SW_AGENT_TRACE_IGNORE_PATH=GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/** SW_AGENT_COLLECTOR_BACKEND_SERVICES=skywalking-oap.skywalking:11800查看 kustomize build overlays/dev/后的配置清单。如下所示:apiVersion: v1 data: SW_AGENT_COLLECTOR_BACKEND_SERVICES: skywalking-oap.skywalking:11800 SW_AGENT_NAME: test::java-demo SW_AGENT_TRACE_IGNORE_PATH: GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/** kind: ConfigMap metadata: labels: app: java-demo name: envinpod-74t9b8htb6 namespace: java-demo --- apiVersion: v1 kind: Service metadata: labels: app: java-demo name: java-demo namespace: java-demo spec: ports: - nodePort: 31180 port: 8080 targetPort: 8080 selector: app: java-demo type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: java-demo name: java-demo namespace: java-demo spec: replicas: 5 selector: matchLabels: app: java-demo template: metadata: labels: app: java-demo spec: containers: - env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: key: SW_AGENT_NAME name: envinpod-74t9b8htb6 - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: key: SW_AGENT_TRACE_IGNORE_PATH name: envinpod-74t9b8htb6 - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: key: SW_AGENT_COLLECTOR_BACKEND_SERVICES name: envinpod-74t9b8htb6 image: harbor.marksugar.com/java/linuxea-2022:202207071059 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 resources: limits: cpu: "1" memory: 2048Mi requests: cpu: "1" memory: 2048Mi imagePullSecrets: - name: 156pull restartPolicy: Alwaysbase作为基础配置,Overlays作为覆盖来区分。base是包含 kustomization.yaml 文件的一个目录,其中包含一组资源及其相关的定制。 base可以是本地目录或者来自远程仓库的目录,只要其中存在 kustomization.yaml 文件即可。 Overlays 也是一个目录,其中包含将其他 kustomization 目录当做 bases 来引用的 kustomization.yaml 文件。 base不了解Overlays的存在,且可被多个Overlays所使用。 Overlays则可以有多个base,且可针对所有base中的资源执行操作,还可以在其上执行定制。通过sed替换Overlays下的文件内容或者kustomize edit set,如:在Overlays下执行kustomize edit set image harbor.marksugar.com/java/linuxea-2022:202207091551:202207071059:1.14.b替换镜像文件。一切符合预期后,使用kustomize.exe build .\overlays\dev\ | kubectl apply -f -使其生效。4. 
部署到k8s命令部署两种方式kustomizekustomize build overlays/dev/ | kubectl apply -f -kubectlkubectl apply -k overlays/dev/使用kubectl apply -k生效,如下PS E:\ops\k8s-1.23.1-latest\gitops> kubectl.exe apply -k .\overlays\dev\ configmap/envinpod-74t9b8htb6 unchanged service/java-demo created deployment.apps/java-demo created如果使用的域名是私有的,需要在本地hosts填写本地解析172.16.100.54 harbor.marksugar.com并且需要修改/etc/docker/daemon.json{ "data-root": "/var/lib/docker", "exec-opts": ["native.cgroupdriver=systemd"], "insecure-registries": ["harbor.marksugar.com"], "max-concurrent-downloads": 10, "live-restore": true, "log-driver": "json-file", "log-level": "warn", "log-opts": { "max-size": "50m", "max-file": "1" }, "storage-driver": "overlay2" }查看部署情况PS E:\ops\k8s-1.23.1-latest\gitops\kustomize-k8s-yaml> kubectl.exe -n java-demo get all NAME READY STATUS RESTARTS AGE pod/java-demo-6474cb8fc8-6xs8t 1/1 Running 0 41s pod/java-demo-6474cb8fc8-9z9sd 1/1 Running 0 41s pod/java-demo-6474cb8fc8-jfqv6 1/1 Running 0 41s pod/java-demo-6474cb8fc8-p5ztd 1/1 Running 0 41s pod/java-demo-6474cb8fc8-sqt7b 1/1 Running 0 41s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/java-demo NodePort 10.111.26.148 <none> 8080:31180/TCP 41s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/java-demo 5/5 5 5 41s NAME DESIRED CURRENT READY AGE replicaset.apps/java-demo-6474cb8fc8 5 5 5 42s与此同时,skywalking也加入成功创建git项目在gitlab创建了一个组,在组织里面创建了一个项目,名称以项目命名,在项目内每个应用对应一个分支如: devops组内内新建一个k8s-yaml的项目,项目内创建一个java-demo分支,java-demo分支中存放java-demo的配置文件现在创建key,将密钥加入到项目中ssh-keygen -t ed25519将文件推送到git上$ git clone git@172.16.100.47:devops/k8s-yaml.git Cloning into 'k8s-yaml'... remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 Receiving objects: 100% (3/3), done. $ cd k8s-yaml/ $ git checkout -b java-demo Switched to a new branch 'java-demo $ ls -ll total 1024 -rw-r--r-- 1 Administrator 197121 12 Jul 7 21:09 README.MD drwxr-xr-x 1 Administrator 197121 0 Jun 28 20:15 base/ -rw-r--r-- 1 Administrator 197121 774 Jul 6 18:05 imagepullsecrt.yaml drwxr-xr-x 1 Administrator 197121 0 Jun 28 20:15 overlays/ $ git add . $ git commit -m "first commit" [java-demo a9701f7] first commit 11 files changed, 185 insertions(+) create mode 100644 base/deployment.yaml create mode 100644 base/kustomization.yaml create mode 100644 base/service.yaml create mode 100644 imagepullsecrt.yaml create mode 100644 overlays/dev/env.file create mode 100644 overlays/dev/kustomization.yaml create mode 100644 overlays/dev/resources.yaml create mode 100644 overlays/prod/kustomization.yaml create mode 100644 overlays/prod/replicas.yaml create mode 100644 overlays/prod/resources.yaml $ git push -u origin java-demo Enumerating objects: 19, done. Counting objects: 100% (19/19), done. Delta compression using up to 8 threads Compressing objects: 100% (15/15), done. Writing objects: 100% (17/17), 2.90 KiB | 329.00 KiB/s, done. 
Total 17 (delta 2), reused 0 (delta 0), pack-reused 0 remote: remote: To create a merge request for java-demo, visit: remote: https://172.16.100.47/devops/k8s-yaml/-/merge_requests/new?merge_request%5Bsource_branch%5D=java-demo remote: To 172.16.100.47:devops/k8s-yaml.git bb67227..a9701f7 java-demo -> java-demo Branch 'java-demo' set up to track remote branch 'java-demo' from 'origin'.
添加到流水线
首先,kustomize的配置文件是存放在gitlab上的,因此这个git仓库需要我们拉取下来,而后修改镜像名称,应用kustomize的配置后,再push到gitlab上。在这里kustomize仅仅用来管理yaml清单文件,在后面将使用argocd来做。我们在流水线里面配置一个环境变量,指向kustomize配置文件的git地址,并切出git拉取后的目录名称。尽可能让gitlab和jenkins上的项目名称保持一致,流水线取值或者切出值的时候才方便def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git" def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() 但是kustomize是不能直接去访问集群的,因此还必须用kubectl,那就意味着需要config文件。我们使用命令指定配置文件位置kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev另外,如果你的jenkins的docker镜像没有kustomize,或者kubectl,需要挂载进去,因此我的就变成了 environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git" def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_Area="dev" def apps_name="java-demo" def projectGroup="java-demo" def PACK_PATH="/usr/local/package" }并且在容器内生成一个密钥,而后加到gitlab中,以供git拉取和上传bash-5.1# ssh-keygen -t rsa而后再复制到/var/jenkins_home下,并且挂载到容器内- /data/jenkins-latest/jenkins_home/.ssh:/root/.ssh第一次拉取需要输入yes,我们规避它echo ' Host * StrictHostKeyChecking no UserKnownHostsFile=/dev/null' >>/root/.ssh/config如果你使用的是宿主机运行的Jenkins,这一步可省略。因为资源不足的问题,我们手动修改副本数为1。
流水线阶段,步骤大致如下:
1.判断本地是否有git的目录,如果有就删除
2.拉取git,并切换到分支
3.追加当前的镜像版本到一个buildhistory的文件中
4.cd到目录中修改镜像
5.修改完成后将修改推送到git仓库
6.kustomize和kubectl应用配置清单
代码块如下: stage('Deploy') { steps { sh ''' [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES} git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name} echo "push latest images: $IPATH" echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log cd overlays/$Projects_Area ${PACK_PATH}/kustomize edit set image $IPATH cd ../.. git add . git config --global push.default matching git config user.name zhengchao.tang git config user.email usertzc@163.com git commit -m "image tag $IPATH-> ${imageUrlPath}" git push -u origin ${apps_name} ${PACK_PATH}/kustomize build overlays/$Projects_Area/ | ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev apply -f - ''' } } 
观测状态
配置清单被生效后,不一定符合预期,此时有很多种情况出现,特别是在使用原生的这些命令和脚本更新的时候,我们需要追踪更新后的状态,以便于我们随时做出正确的动作。我此前写过一篇关于kubernetes检测pod部署状态简单实现,如果感兴趣可以查看。仍然使用此前的方式,如下 stage('status watch') { steps { sh ''' ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev -n ${projectGroup} rollout status deployment ${apps_name} --watch --timeout=10m ''' } }构建一次到服务器上查看[root@linuxea-11 ~]# kubectl -n java-demo get pod NAME READY STATUS RESTARTS AGE java-demo-66b98564f6-xsc6z 1/1 Running 0 9m24s
其他参考kubernetes中skywalking9.0部署使用,kustomize变量引入
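作为补充,若 rollout status 在超时时间内没有就绪,这个阶段会失败,这时通常还需要一个回滚动作。下面是一个最小示意(它不是原文流水线的一部分,变量沿用上文的定义):
# 回滚到上一个版本,并再次跟踪状态确认回滚完成
${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev \
  -n ${projectGroup} rollout undo deployment ${apps_name}
${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev \
  -n ${projectGroup} rollout status deployment ${apps_name} --watch --timeout=5m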
2022-07-07
linuxea:jenkins流水线集成sonar分支扫描/关联gitlab/docker和mvn打包配置二(8)
在前面的jenkins流水线集成juit/sonarqube/覆盖率扫描配置一中介绍了juilt,覆盖率以及soanrqube的一些配置实现。接着上一篇中,我们继续。阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(上一章已实现)jenkins凭据使用(上一章已实现)juit配置(上一章已实现)sonarqube简单扫描(上一章已实现)sonarqube覆盖率(上一章已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (本章实现)配置docker中构建docker (本章实现)mvn打包 (本章实现)sonarqube简单分支扫描(本章实现)基于gitlab来管理kustomize的k8s配置清单kubectl部署kubeclt deployment的状态跟踪钉钉消息的构建状态推送4.6 分支扫描我们可能更希望扫描某一个分支,于是我们需要sonarqube-community-branch-plugin插件我们在https://github.com/mc1arke/sonarqube-community-branch-plugin/releases中,留意支持的版本Note: This version supports Sonarqube 8.9 and above. Sonarqube 8.8 and below or 9.0 and above are not supported in this release使用下表查找每个 SonarQube 版本的正确插件版本SonarQube 版本插件版本9.1+1.12.09.01.9.08.91.8.28.7 - 8.81.7.08.5 - 8.61.6.08.2 - 8.41.5.08.11.4.07.8 - 8.01.3.27.4 - 7.71.0.2于是,我们在nexus3上下载1.8.1版本https://github.com/mc1arke/sonarqube-community-branch-plugin/releases/download/1.8.0/sonarqube-community-branch-plugin-1.8.0.jar 或者 https://github.91chifun.workers.dev//https://github.com/mc1arke/sonarqube-community-branch-plugin/releases/download/1.8.0/sonarqube-community-branch-plugin-1.8.0.jar根据安装提示https://github.com/mc1arke/sonarqube-community-branch-plugin#manual-install而后直接将 jar包下载在/data/sonarqube/extensions/plugins/下即可wget https://172.16.100.48/jenkins/sonar-plugins/sonarqube-community-branch-plugin-1.8.0.jar -o /data/sonarqube/extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar实际上/data/sonarqube/extensions/目录被挂载到nexus的容器内的/opt/sonarqube/extensions下而容器内的位置是不变的,因此挂载映射关系如下: volumes: - /etc/localtime:/etc/localtime - /data/sonarqube/conf:/opt/sonarqube/conf - /data/sonarqube/extensions:/opt/sonarqube/extensions - /data/sonarqube/logs:/opt/sonarqube/logs - /data/sonarqube/data:/opt/sonarqube/data[root@linuxea-47 /data/sonarqube/extensions]# ll plugins/ total 17552 -rwx------ 1 1000 1000 10280677 Oct 10 2021 sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar而后,我们在本地是/data/sonarqube/conf下的创建一个配置文件sonar.properties,内容如下sonar.web.javaAdditionalOpts=-javaagent:./extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar=web sonar.ce.javaAdditionalOpts=-javaagent:./extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar=ce这个配置文件被映射到容器内的/opt/sonarqube/conf进入容器查看[root@linuxea-47 /data/sonarqube]# ls extensions/plugins/ -ll total 17552 -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar分支扫描参数增加 –Dsonar.branch.name=-Dsonar.branch.name=master那现在的projetctkey就不需要加分支名字了 -Dsonar.projectKey=${JOB_NAME}_${branch} \ -Dsonar.projectName=${JOB_NAME}_${branch} \直接在一个项目中就可以看到多个分支的扫描结果了 stage("coed sonar"){ steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=https://172.16.100.47:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} """ } } } }此时我们分别构建master和web后,在sonarqube的UI中就会有两个分支的扫描结果注意事项如果你使用的是不同的版本,而不同的版本配置是不一样的。见github的每个分支,比如:1.5.04.7 关联gitlab在https://github.com/gabrie-allaigre/sonar-gitlab-plugin下载插件,参阅用法中版本对应,我们下载4.1.0https://github.com/gabrie-allaigre/sonar-gitlab-plugin/releases/download/4.1.0/sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar而后仍然存放到sonarqube的plugin目录下[root@linuxea-47 ~]# ls /data/sonarqube/extensions/plugins/ -ll total 17552 -rwx------ 1 1000 1000 10280677 Oct 10 2021 sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar这在启动的时候,实际上可以看到日志加载根据文档,要完成扫描必须提供如下必要参数-Dsonar.gitlab.commit_sha=1632c729e8f78f913cbf0925baa2a8c893e4473b \ 版本sha -Dsonar.gitlab.ref_name=master \ 分支 -Dsonar.gitlab.project_id=16 \ 项目id -Dsonar.dynamicAnalysis=reuseReports \ 扫描方式 -Dsonar.gitlab.failure_notification_mode=commit-status \ 更改提交状态 -Dsonar.gitlab.url=https://192.168.1.200 \ gitlab地址 -Dsonar.gitlab.user_token=k8xLe6dYTzdtoewSysmy \ gitlab token -Dsonar.gitlab.api_version=v41.配置一个全局token至少需要如下权限令牌如下K8DtxxxifxU1gQeDgvDK其他信息根据现有的项目输入即可-Dsonar.gitlab.commit_sha=4a5bb3db1c845cddc86290d137ef694b3b076d0e \ 版本sha -Dsonar.gitlab.ref_name=master \ 分支 -Dsonar.gitlab.project_id=19 \ 项目id -Dsonar.dynamicAnalysis=reuseReports \ 扫描方式 -Dsonar.gitlab.failure_notification_mode=commit-status \ 更改提交状态 -Dsonar.gitlab.url=https://172.16.100.47 \ gitlab地址 -Dsonar.gitlab.user_token=K8DtxxxifxU1gQeDgvDK \ gitlab token -Dsonar.gitlab.api_version=v42.将上述命令添加到sonarqube的流水线中/var/jenkins_home/package/sonar-scanner/bin/sonar-scanner \ -Dsonar.host.url=https://172.16.15.136:9000 \ -Dsonar.projectKey=java-demo \ -Dsonar.projectName=java-demo \ -Dsonar.projectVersion=120 \ -Dsonar.login=636558affea60cc5f264247de36e7c27c817530b \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" \ -Dsonar.links.homepage=https://172.16.15.136:180/devops/java-demo.git \ -Dsonar.links.ci=https://172.16.15.136:8088/job/java-demo/120/ \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.branch.name=main \ -Dsonar.gitlab.commit_sha=9353e89a7b42e0d93ddf95520408ecfde9a5144a \ -Dsonar.gitlab.ref_name=main \ -Dsonar.gitlab.project_id=2 \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=https://172.16.15.136:180 \ -Dsonar.gitlab.user_token=9mszu2KXx7nHXiwJveBs \ -Dsonar.gitlab.api_version=v4运行测试正常是什么样的呢,换一个环境配置下/usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=https://172.16.100.47:9000 \ -Dsonar.projectKey=java-demo \ -Dsonar.projectName=java-demo \ -Dsonar.projectVersion=20 \ -Dsonar.login=bc826f124d691127c351388274667d7deb1cc9b2 \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=www.baidu.com \ -Dsonar.links.ci=20 \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=master \ -Dsonar.gitlab.commit_sha=4a5bb3db1c845cddc86290d137ef694b3b076d0e \ -Dsonar.gitlab.ref_name=master \ -Dsonar.gitlab.project_id=19 \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=https://172.16.100.47 \ -Dsonar.gitlab.user_token=K8DtxxxifxU1gQeDgvDK \ -Dsonar.gitlab.api_version=v4 执行之后INFO: SCM Publisher SCM provider for this project is: git INFO: SCM Publisher 2 source files to be analyzed INFO: SCM Publisher 2/2 source files have been analyzed (done) | time=704ms INFO: CPD Executor 2 files had no CPD blocks INFO: CPD Executor Calculating CPD for 0 files INFO: CPD Executor CPD calculation finished (done) | time=0ms INFO: Analysis report generated in 42ms, dir size=74 KB INFO: Analysis report compressed in 14ms, zip size=13 KB INFO: Analysis report uploaded in 468ms INFO: ANALYSIS SUCCESSFUL, you can browse https://172.16.100.47:9000/dashboard?id=java-demo&branch=master INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report INFO: More about the report processing at https://172.16.100.47:9000/api/ce/task?id=AYHOP018DZyaRsN1subY INFO: Executing post-job 'GitLab Commit Issue Publisher' INFO: Waiting quality gate to complete... INFO: Quality gate status: OK INFO: Duplicated Lines : 0 INFO: Lines of Code : 18 INFO: Report status=success, desc=SonarQube reported QualityGate is ok, with 2 ok, no issues INFO: Analysis total time: 7.130 s INFO: ------------------------------------------------------------------------ INFO: EXECUTION SUCCESS INFO: ------------------------------------------------------------------------ INFO: Total time: 7.949s INFO: Final Memory: 17M/60M INFO: ------------------------------------------------------------------------流水线已通过3.获取参数现在的问题是,手动输入gitlab的这些值不可能在jenkins中输入,我们需要自动获取这些。分支的环境变量通过传递来,用变量获取即可commit_sha通过读取当前代码中的文件实现gitlab token放到密钥管理当中于是,我们通过jq来获取格式化gitlab api返回值获取缺省的项目id需要下载一个jq程序在jenkins节点上。于是我们在https://stedolan.github.io/jq/download/页面下载一个 binaries二进制的即可https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64获取项目id curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "https://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.name == "java-demo")'|jq .id示例1:如果项目名称在所有组内是唯一的,就可以使用jq -rc '.[]|select(.name == "java-demo")',如下.name == "java-demo": 项目名curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "https://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.name == "java-demo")' | jq .id示例2:如果项目名称在所有组内不是唯一,且有多个的,用jq -rc '.[]|select(.path_with_namespace == "java/java-demo")',如下.path_with_namespace == java/java-demo : 组名/项目名curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "https://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'|jq .id获取当前的sha版本号获取办版本号只需要在当前项目目录内读取文件或者命令即可,it log --pretty=oneline|head -1| cut -b 1-40,如下[root@linuxea-48 /data/jenkins-latest/jenkins_home/workspace/linuxea-2022]# git log --pretty=oneline|head -1| cut -b 1-40 4a5bb3db1c845cddc86290d137ef694b3b076d0e除此之外使用cut -b -40 
The same sha can also be obtained with cut -b -40 .git/refs/remotes/origin/master:

[root@linuxea-48 /data/jenkins-latest/jenkins_home/workspace/linuxea-2022]# cut -b -40 .git/refs/remotes/origin/master
4a5bb3db1c845cddc86290d137ef694b3b076d0e

Project name

For the project name we could use the Jenkins job name, but it does not always match the Git project name, so instead we take the last segment of the project URL and strip the suffix:

JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()

Now we have all of the key parameters above; name them GIT_COMMIT_TAGSHA, Projects_GitId and JOB_NAMES:

environment {
    def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim()
    def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()
    def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "https://gitlab.marksugar.com/api/v4/projects?simple=true"| ${buildMap["jq"]} -rc '.[]|select(.path_with_namespace == "java/java-demo")'| ${buildMap["jq"]} .id""",returnStdout: true).trim()
}

So the environment block now becomes:

    environment {
        def tag_time = new Date().format("yyyyMMddHHmm")
        def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}"
        def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim()
        def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()
        def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "https://gitlab.marksugar.com/api/v4/projects?simple=true"| ${buildMap["jq"]} -rc '.[]|select(.path_with_namespace == "java/java-demo")'| ${buildMap["jq"]} .id""",returnStdout: true).trim()
        def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK"
        def GitLab_Address="https://172.16.100.47"
    }

And the newly added scanner arguments are:

    -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \
    -Dsonar.gitlab.ref_name=${branch} \
    -Dsonar.gitlab.project_id=${Projects_GitId} \
    -Dsonar.dynamicAnalysis=reuseReports \
    -Dsonar.gitlab.failure_notification_mode=commit-status \
    -Dsonar.gitlab.url=${GitLab_Address} \
    -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \
    -Dsonar.gitlab.api_version=v4
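Since failure_notification_mode=commit-status writes the quality-gate result back to GitLab as a commit status, one optional way to confirm the round trip is GitLab's commit statuses API. A sketch, reusing the project id and sha from the earlier manual run:

# list the statuses attached to the scanned commit (values from the manual run above)
curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" \
  "https://172.16.100.47/api/v4/projects/19/repository/commits/4a5bb3db1c845cddc86290d137ef694b3b076d0e/statuses"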
After one build you can see that the values are picked up. The complete stage, after a successful build, looks like this:

        stage("code sonar"){
            environment {
               def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim()
               def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()
               def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "https://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim()
               def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK"
               def GitLab_Address="https://172.16.100.47"
            }
            steps{
                script {
                    withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) {
                    sh """
                    cd linuxea && \
                    /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \
                    -Dsonar.host.url=${GitLab_Address}:9000 \
                    -Dsonar.projectKey=${JOB_NAME} \
                    -Dsonar.projectName=${JOB_NAME} \
                    -Dsonar.projectVersion=${BUILD_NUMBER} \
                    -Dsonar.login=${SONAR_TOKEN} \
                    -Dsonar.ws.timeout=30 \
                    -Dsonar.projectDescription="my first project!" \
                    -Dsonar.links.homepage=${env.BASEURL} \
                    -Dsonar.links.ci=${BUILD_URL} \
                    -Dsonar.sources=src \
                    -Dsonar.sourceEncoding=UTF-8 \
                    -Dsonar.java.binaries=target/classes \
                    -Dsonar.java.test.binaries=target/test-classes \
                    -Dsonar.java.surefire.report=target/surefire-reports \
                    -Dsonar.core.codeCoveragePlugin=jacoco \
                    -Dsonar.jacoco.reportPaths=target/jacoco.exec \
                    -Dsonar.branch.name=${branch} \
                    -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \
                    -Dsonar.gitlab.ref_name=${branch} \
                    -Dsonar.gitlab.project_id=${Projects_GitId} \
                    -Dsonar.dynamicAnalysis=reuseReports \
                    -Dsonar.gitlab.failure_notification_mode=commit-status \
                    -Dsonar.gitlab.url=${GitLab_Address} \
                    -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \
                    -Dsonar.gitlab.api_version=v4
                    """
                    }
                }
            }
        }

4.8 Packaging with mvn

We package with a single command:

-Dmaven.test.skip=true: do not run the test cases and do not compile the test classes
-Dmaven.test.failure.ignore=true: ignore unit-test failures
-s ~/.m2/settings.xml: specify the location of the Maven settings file for the build

mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml

The stage looks like this:

        stage("mvn build"){
          steps {
            script {
            sh """
              cd linuxea
              mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml
            """
            }
          }
        }

4.9 Pushing the image

Docker needs to be set up first: the container needs a Docker client, and the Docker socket is mounted in.

If your host's libraries match the container's, you can mount the local docker binary into the container. Since the image used here is Alpine-based, Docker was installed inside the container instead, so only the directories and the socket need to be mounted. Mounting docker into the container looks like this:

    - /usr/bin/docker:/usr/bin/docker
    - /etc/docker:/etc/docker
    - /var/run/docker.sock:/var/run/docker.sock

Then log in to the registry inside the container (logging in from a pipeline stage also works):

[root@linuxea-48 /data/jenkins-latest/jenkins_home]# docker exec -it jenkins bash
bash-5.1# cat ~/.docker/config.json
{
        "auths": {
                "harbor.marksugar.com": {
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="
                }
        }
}

Either copy the config to the host and mount it into the container, or log in on the host and mount that in:

    - /data/jenkins-latest/.docker:/root/.docker

Now the docker command works inside the container:

bash-5.1# docker ps -a
CONTAINER ID   IMAGE                                                                                                  COMMAND                  CREATED             STATUS             PORTS     NAMES
536cb1dbeb3f   registry.cn-hangzhou.aliyuncs.com/marksugar/jenkins:2.332-3-alpine-ansible-maven3-nodev16.15-latest    "/sbin/tini -- /usr/…"   About an hour ago   Up About an hour             jenkins

Then configure the docker push stage. Before it starts, set the environment variables that produce the timestamp-based image tag, tag_time:

    agent any
    environment {
        def tag_time = new Date().format("yyyyMMddHHmm")
        def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}"
    }

The docker stage. Note: when COPYing skywalking-agent, the package must first be copied into the current directory so that it can be COPYed into the image:

        stage("docker build"){
          steps{
            script{
            sh """
              cd linuxea
              docker ps -a
              cp -r /usr/local/package/skywalking-agent ./
              docker build -f ./Dockerfile -t $IPATH .
              docker push $IPATH
              docker rmi -f $IPATH
            """
            }
          }
        }

At the same time, the COPY path in the Dockerfile needs to be adjusted to match.
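For reference, a minimal Dockerfile sketch of what this stage expects. The base image, jar path and JVM flags are illustrative assumptions, not the exact file from the project:

# a minimal sketch; base image, jar path and entrypoint are assumptions
FROM openjdk:8-jre-alpine
# the pipeline copied skywalking-agent into the build context one step earlier
COPY skywalking-agent /skywalking-agent
# the artifact produced by the mvn build stage (path is illustrative)
COPY target/*.jar /app/app.jar
ENTRYPOINT ["java", "-javaagent:/skywalking-agent/skywalking-agent.jar", "-jar", "/app/app.jar"]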
Then create the project in Harbor and start a build. Once the build completes, the image is pushed to the Harbor registry.

The complete pipeline now looks like this:

try {
    if ( "${onerun}" == "gitlabs"){
        println("Trigger Branch: ${info_ref}")
        RefName="${info_ref.split("/")[-1]}"
        // custom display name
        currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}"
        // custom description
        currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}"
        BUILD_TRIGGER_BY="${info_user_username}"
        BASEURL="${info_project_git_http_url}"
    }
}catch(e){
    BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}"
    currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}"
}
pipeline{
    // node this pipeline runs on
    agent any
    environment {
        def tag_time = new Date().format("yyyyMMddHHmm")
        def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}"
    }
    // pipeline run options
    options {
        skipDefaultCheckout true
        skipStagesAfterUnstable()
        buildDiscarder(logRotator(numToKeepStr: '2'))
    }
    // pipeline stages
    stages{
        // stage 1: fetch the code
        stage("CheckOut"){
            steps {
                script {
                    println("下载代码 --> 分支: ${env.branch}")
                    checkout(
                        [$class: 'GitSCM',
                        branches: [[name: "${branch}"]],
                        extensions: [],
                        userRemoteConfigs: [[
                            credentialsId: 'gitlab-mark',
                            url: "${BASEURL}"]]])
                }
            }
        }
        stage("unit Test"){
            steps{
                script{
                    sh """
                    cd linuxea && mvn test -s /var/jenkins_home/.m2/settings.xml
                    """
                }
            }
            post {
                success {
                    script {
                        junit 'linuxea/target/surefire-reports/*.xml'
                    }
                }
            }
        }
        stage("code sonar"){
            environment {
               def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim()
               def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim()
               def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "https://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim()
               def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK"
               def GitLab_Address="https://172.16.100.47"
            }
            steps{
                script {
                    withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) {
                    sh """
                    cd linuxea && \
                    /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \
                    -Dsonar.host.url=${GitLab_Address}:9000 \
                    -Dsonar.projectKey=${JOB_NAME} \
                    -Dsonar.projectName=${JOB_NAME} \
                    -Dsonar.projectVersion=${BUILD_NUMBER} \
                    -Dsonar.login=${SONAR_TOKEN} \
                    -Dsonar.ws.timeout=30 \
                    -Dsonar.projectDescription="my first project!" \
                    -Dsonar.links.homepage=${env.BASEURL} \
                    -Dsonar.links.ci=${BUILD_URL} \
                    -Dsonar.sources=src \
                    -Dsonar.sourceEncoding=UTF-8 \
                    -Dsonar.java.binaries=target/classes \
                    -Dsonar.java.test.binaries=target/test-classes \
                    -Dsonar.java.surefire.report=target/surefire-reports \
                    -Dsonar.core.codeCoveragePlugin=jacoco \
                    -Dsonar.jacoco.reportPaths=target/jacoco.exec \
                    -Dsonar.branch.name=${branch} \
                    -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \
                    -Dsonar.gitlab.ref_name=${branch} \
                    -Dsonar.gitlab.project_id=${Projects_GitId} \
                    -Dsonar.dynamicAnalysis=reuseReports \
                    -Dsonar.gitlab.failure_notification_mode=commit-status \
                    -Dsonar.gitlab.url=${GitLab_Address} \
                    -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \
                    -Dsonar.gitlab.api_version=v4
                    """
                    }
                }
            }
        }
        stage("mvn build"){
            steps {
                script {
                    sh """
                    cd linuxea
                    mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml
                    """
                }
            }
        }
        stage("docker build"){
            steps{
                script{
                    sh """
                    cd linuxea
                    docker ps -a
                    cp -r /usr/local/package/skywalking-agent ./
                    docker build -f ./Dockerfile -t $IPATH .
                    docker push $IPATH
                    docker rmi -f $IPATH
                    """
                }
            }
        }
    }
}
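One possible extension, a sketch rather than part of the pipeline above: a post section at the pipeline level can tidy the workspace regardless of the build result. cleanWs() comes from the Workspace Cleanup plugin:

    // optional: append at the pipeline level, after stages{}
    post {
        always {
            // wipe the workspace after every run; requires the Workspace Cleanup plugin
            cleanWs()
        }
    }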
2022-07-07