Search results: 20 posts matching "harbor"
2022-07-11
linuxea:jenkins基于钉钉的构建通知(11)
在之前的几篇中,我分别介绍了基础环境的配置,skywaling+nacos的配置,nexus3的配置,围绕sonarqube的配置和构建镜像的配置。这一篇中,主要配置消息通知阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(已实现)jenkins凭据使用(已实现)juit配置(已实现)sonarqube简单扫描(已实现)sonarqube覆盖率(已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (上一章已实现)配置docker中构建docker (上一章已实现)mvn打包(上一章已实现)sonarqube简单分支扫描(上一章已实现)基于gitlab来管理kustomize的k8s配置清单 (上一章已实现)kubectl部署 (上一章已实现)kubeclt deployment的状态跟踪 (上一章已实现)钉钉消息的构建状态推送(本章实现)前面我们断断续续的将最简单的持续集成做好,在cd阶段,使用了kustomize和argocd,并且搭配了kustomize和argocd做了gitops的部分事宜,现在们在添加一个基于钉钉的构建通知我们创建一个钉钉机器人,关键字是DEVOPS我们创建一个函数,其中采用markdown语法,如下:分别需要向DingTalk传递几个行参,分别是:mdTitle 标签,这里的标签也就是我们创建的关键字: DEVOPSmdText 详细文本atUser 需要@谁atAll @所有人SedContent 通知标题函数体如下:def DingTalk(mdTitle, mdText, atAll, atUser = '' ,SedContent){ webhook = "https://oapi.dingtalk.com/robot/send?access_token=55d35d6f09f05388c1a8f7d73955cd9b7eaf4a0dd38" sh """ curl --location --request POST ${webhook} \ --header 'Content-Type: application/json' \ --data '{ "msgtype": "markdown", "markdown": { "title": "${mdTitle}", "text": "${SedContent}\n ${mdText}" }, "at": { "atMobiles": [ "${atUser}" ], "isAtAll": "${atAll}" } }' """ }而在流水线阶段添加post,如下 post { success{ script{ // ItmesName="${JOB_NAME.split('/')[-1]}" env.SedContent="构建通知" mdText = "### ✅ \n ### 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } failure{ script{ env.SedContent="构建通知" mdText = "### ❌ \n 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } }当然,现在你看到了上面的函数传递中有很多变量,这些需要我们去获取我们在任意一个阶段中的script中,并用env.声明到全局环境变量,添加如下GIT_COMMIT_DESCRIBE: 提交信息GIT_COMMIT_TAGSHA:提交的SHA值TIMENOW_CN:可阅读的时间格式 env.GIT_COMMIT_DESCRIBE = "${sh(script:'git log --oneline --no-merges|head -1', returnStdout: true)}" env.GIT_COMMIT_TAGSHA=sh(script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() env.TIMENOW_CN=sh(script: """date +%Y年%m月%d日%H时%M分%S秒""",returnStdout: true).trim()进行构建,一旦构建完成,将会发送一段消息到钉钉如下而最终的管道流水线试图如下:完整的流水线管道代码如下try { if ( "${onerun}" == "gitlabs"){ println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" } }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git" def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . 
-f 1""",returnStdout: true).trim() def Projects_Area="dev" def apps_name="java-demo" def projectGroup="java-demo" def PACK_PATH="/usr/local/package" } //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("CheckOut"){ steps { script { println("下载代码 --> 分支: ${env.branch}") checkout( [$class: 'GitSCM', branches: [[name: "${branch}"]], extensions: [], userRemoteConfigs: [[ credentialsId: 'gitlab-mark', url: "${BASEURL}"]]]) } } } stage("unit Test"){ steps{ script{ env.GIT_COMMIT_DESCRIBE = "${sh(script:'git log --oneline --no-merges|head -1', returnStdout: true)}" env.TIMENOW_CN=sh(returnStdout: true, script: 'date +%Y年%m月%d日%H时%M分%S秒') env.GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() sh """ cd linuxea && mvn test -s /var/jenkins_home/.m2/settings.xml2 """ } } post { success { script { junit 'linuxea/target/surefire-reports/*.xml' } } } } stage("coed sonar"){ environment { def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --heade "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="http://172.16.100.47" } steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=${GitLab_Address}:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" \ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} \ -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 """ } } } } stage("mvn build"){ steps { script { sh """ cd linuxea mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml2 """ } } } stage("docker build"){ steps{ script{ sh """ cd linuxea docker ps -a cp -r /usr/local/package/skywalking-agent ./ docker build -f ./Dockerfile -t $IPATH . docker push $IPATH docker rmi -f $IPATH """ } } } stage('Deploy') { steps { sh ''' [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES} } git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name} echo "push latest images: $IPATH" echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log cd overlays/$Projects_Area ${PACK_PATH}/kustomize edit set image $IPATH cd ../.. git add . 
git config --global push.default matching git config user.name zhengchao.tang git config user.email usertzc@163.com git commit -m "image tag $IPATH-> ${imageUrlPath}" git push -u origin ${apps_name} ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name} ''' // ${PACK_PATH}/argocd app sync ${apps_name} --retry-backoff-duration=10s -l marksugar/app=${apps_name} } // ${PACK_PATH}/kustomize build overlays/$Projects_Area/ | ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev apply -f - } stage('status watch') { steps { sh ''' ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev -n ${projectGroup} rollout status deployment ${apps_name} --watch --timeout=10m ''' } } } post { success{ script{ // ItmesName="${JOB_NAME.split('/')[-1]}" env.SedContent="构建通知" mdText = "### ✅ \n ### 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } failure{ script{ env.SedContent="构建通知" mdText = "### ❌ \n 发起人: ${BUILD_TRIGGER_BY} \n ### 项目: ${JOB_NAME} \n ### 标签: $IPATH \n ### 时间: ${TIMENOW_CN} \n ### 提交SHA: ${GIT_COMMIT_TAGSHA} \n ### Commit Info: ${GIT_COMMIT_DESCRIBE} \n ### By:  \n" DingTalk("DEVOPS", mdText, true, SedContent) } } } } def DingTalk(mdTitle, mdText, atAll, atUser = '' ,SedContent){ webhook = "https://oapi.dingtalk.com/robot/send?access_token=55d35d6f09f05388c1a8f7d73955cd9b7eaf4a0dd3803abdd1452e83d5b607ab" sh """ curl --location --request POST ${webhook} \ --header 'Content-Type: application/json' \ --data '{ "msgtype": "markdown", "markdown": { "title": "${mdTitle}", "text": "${SedContent}\n ${mdText}" }, "at": { "atMobiles": [ "${atUser}" ], "isAtAll": "${atAll}" } }' """ }现在,一个最简单的gitops的demo项目搭建完成参考gitops
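The function above posts to the DingTalk robot from the Jenkins post block. Before wiring it into the pipeline, it can help to verify the robot outside Jenkins. Below is a minimal sketch, assuming a custom robot whose keyword filter is "DEVOPS"; the access token is a placeholder and the markdown body only mimics the fields the pipeline sends.

```bash
#!/bin/sh
# Standalone test of the DingTalk robot webhook (placeholder token).
ACCESS_TOKEN="your-robot-access-token"
WEBHOOK="https://oapi.dingtalk.com/robot/send?access_token=${ACCESS_TOKEN}"

curl --location --request POST "${WEBHOOK}" \
  --header 'Content-Type: application/json' \
  --data '{
    "msgtype": "markdown",
    "markdown": {
      "title": "DEVOPS",
      "text": "### Build notification\n### project: java-demo\n### tag: harbor.marksugar.com/java/java-demo:202207111200\n### status: SUCCESS"
    },
    "at": { "isAtAll": true }
  }'
```

If the keyword filter does not match (the title or text must contain "DEVOPS"), DingTalk returns an error JSON instead of delivering the message, which is a common reason a pipeline notification silently fails.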
2022-07-09
linuxea: Kustomize-based configuration release with Jenkins (9)
在之前的几篇中,我分别介绍了基础环境的配置,skywaling+nacos的配置,nexus3的配置,围绕sonarqube的配置和构建镜像的配置。这一篇中,基于构建的镜像进行清单编排。我们需要一种工具来管理配置清单。阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(已实现)jenkins凭据使用(已实现)juit配置(已实现)sonarqube简单扫描(已实现)sonarqube覆盖率(已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (上一章已实现)配置docker中构建docker (上一章已实现)mvn打包(上一章已实现)sonarqube简单分支扫描(上一章已实现)基于gitlab来管理kustomize的k8s配置清单(本章实现)kubectl部署(本章实现)kubeclt deployment的状态跟踪(本章实现)钉钉消息的构建状态推送没错,我移情别恋了,在Helm和kustomize中,我选择后者。最大的原因是因为kustomize简单,易于维护。无论从那个角度,我都找不到不用kustomize的理由。这倒不是因为kustomize是多么优秀,仅仅是因为kustomize的方式让一切变得都简单。Helm和kustomizehelm几乎可以完成所有的操作,但是helm的问题是学习有难度,对于小白不友好,配置一旦过多调试将会更复杂。也是因为这种限制,那么使用helm的范围就被缩小了,不管在什么条件下,它都不在是优选。kustomize更直白,无论是开发,还是运维新手,都可以快速上手进行修改添加等基础配置。kustomizekustomize用法在官网的github上已经有所说明了,并且这里温馨的提供了中文示例。讨论如何学习kustomize不在本章的重点遵循kustmoize的版本,在https://github.com/kubernetes-sigs/kustomize/releases找到一个版本,通过http://toolwa.com/github/加速下载Kubectl 版本自定义版本< v1.14不适用v1.14-v1.20v2.0.3v1.21v4.0.5v1.22v4.2.0[root@k8s-01 linuxea]# kustomize version {Version:kustomize/v4.5.5 GitCommit:daa3e5e2c2d3a4b8c94021a7384bfb06734bcd26 BuildDate:2022-05-20T20:25:40Z GoOs:linux GoArch:amd64}创建必要的目录结构阅读示例中的示例:devops和开发配合管理配置数据有助于理解kustomize配置方法场景:在生产环境中有一个基于 Java 由多个内部团队对于业务拆分了不通的组并且有不同的项目的应用程序。这些服务在不同的环境中运行:development、 testing、 staging 和 production,有些配置需要频繁修改的。如果只是维护一个大的配置文件是非常麻烦且困难的 ,而这些配置文件也是需要专业运维人员或者devops工程师来进行操作的,这里面包含了一些片面且偏向运维的工作是开发人员不必知道的。例如:生产环境的敏感数据关键的登录凭据等这些在kustomize中被分成了不通的类因此,kustomize提供了混合管理办法基于相同的 base 创建 n 个 overlays 来创建 n 个集群环境的方法我们将使用 n==2,例如,只使用 development 和 production ,这里也可以使用相同的方法来增加更多的环境。运行 kustomize build 基于 overlay 的 target 来创建集群环境。为了让这一切开始运行,准备如下创建kustomize目录结构创建并配置kustomize配置文件最好创建gitlab项目,将配置存放在gitlab开始此前我写了一篇kustomize变量传入有过一些介绍,我们在简单补充一下。kustomize在1.14版本中已经是Kubectl内置的命令,并且支持kubernetes的原生可复用声明式配置的插件。它引入了一种无需模板的方式来自定义应用程序配置,从而简化了现成应用程序的使用。Kustomize 遍历 Kubernetes 清单以添加、删除或更新配置选项。它既可以作为独立的二进制文件使用,也可以作为kubectl来使用更多的背景可参考它的白皮书,这些在github的Declarative application management in Kubernetes存放。因为总的来说,这篇不是让你如何去理解背后的故事,而是一个最简单的示例常见操作在项目中为所有 Kubernetes 对象设置贯穿性字段是一种常见操作。 贯穿性字段的一些使用场景如下:为所有资源设置相同的名字空间为所有对象添加相同的前缀或后缀为对象添加相同的标签集合为对象添加相同的注解集合为对象添加相同的资源限制以及以及副本数这些通过在overlays目录下不同的配置来区分不通的环境所用的清单信息安装遵循github版本对应规则Kubectl versionKustomize version< v1.14n/av1.14-v1.20v2.0.3v1.21v4.0.5v1.22v4.2.0我的集群是1.23.1,因此我下载4.5.4PS E:\ops\k8s-1.23.1-latest\gitops> kustomize version {Version:kustomize/v4.5.4 GitCommit:cf3a452ddd6f83945d39d582243b8592ec627ae3 BuildDate:2022-03-28T23:12:45Z GoOs:windows GoArch:amd64}java-demo我这里已经配置了一个已经配置好的环境,我将会在这里简单介绍使用方法和配置,我不会详细说明deployment控制器的配置清单,也不会说明和kustomize基本使用无关的配置信息,我只会尽可能的在这个简单的示例中说明整个kustomize的在本示例中的用法。简述:kustomize需要base和Overlays目录,base可以是多个,overlays也可以是多个,overlays下的文件最终会覆盖到base的配置之上,只要配置是合理的,base的配置应该将有共性的配置最终通过overlays来进行配置,以此应对多个环境的配置。java-demo是一个无状态的java应用,使用的是Deployment控制器进行配置,并且创建一个service,于此同时传入skywalking的环境变量信息。1. 
目录结构目录结构如下:# tree ./ ./ ├── base │ ├── deployment.yaml │ ├── kustomization.yaml │ └── service.yaml ├── overlays │ ├── dev │ │ ├── env.file │ │ ├── kustomization.yaml │ │ └── resources.yaml │ └── prod │ ├── kustomization.yaml │ ├── replicas.yaml │ └── resources.yaml └── README.md 4 directories, 11 files其中两目录如下:./ ├── base ├── overlays └── README.mdbase: 目录作为基础配置目录,真实的配置文件在这个文件下overlays: 目录作为场景目录,描述与 base 应用配置的差异部分来实现资源复用而在overlays目录下,又有两个目录,分别是dev和prod,分别对应俩个环境的配置,这里可以任意起名来区分,因为在这两个目录下面存放的是各自不通的配置./ ├── base ├── overlays │ ├── dev │ └── prod └── README.md1.1 imagePullSecrets除此之外,我们需要一个拉取镜像的信息使用cat ~/.docker/config.json |base64获取到base64字符串编码,而后复制到.dockerconfigjson: >-下即可apiVersion: v1 data: .dockerconfigjson: >- ewoJImkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTIgKGxpbnV4KSIKCX0KfQ== kind: Secret metadata: name: 156pull namespace: java-demo type: kubernetes.io/dockerconfigjson2. base目录base目录下分别有三个文件,分别如下├── base │ ├── deployment.yaml │ ├── kustomization.yaml │ └── service.yaml在deployment.yaml中定义必要的属性不定义场景的指标,如标签,名称空间,副本数量和资源限制定义名称,镜像地址,环境变量名这些不定义的属性通过即将配置的overlays中的配置进行贯穿覆盖到这个基础配置之上必须定义的属性表明了贯穿的属性和基础的配置是一份这里的环境变量用的是configmap的方式,值是通过后面传递过来的。如下deployment.yamlapiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: selector: matchLabels: template: metadata: labels: spec: containers: - image: harbor.marksugar.com/java/linuxea-2022 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_NAME - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_TRACE_IGNORE_PATH - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: name: envinpod key: SW_AGENT_COLLECTOR_BACKEND_SERVICES imagePullSecrets: - name: 156pull restartPolicy: Alwaysservice.yamlapiVersion: v1 kind: Service metadata: name: java-demo spec: type: NodePort ports: - port: 8080 targetPort: 8080 nodePort: 31180kustomization.yamlkustomization.yaml引入这两个配置文件resources: - deployment.yaml - service.yaml执行 kustomize build /base ,得到的结果如下,这就是当前的原始清单apiVersion: v1 kind: Service metadata: name: java-demo spec: ports: - nodePort: 31180 port: 8080 targetPort: 8080 type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: selector: matchLabels: null template: metadata: labels: null spec: containers: - env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: key: SW_AGENT_NAME name: envinpod - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: key: SW_AGENT_TRACE_IGNORE_PATH name: envinpod - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: key: SW_AGENT_COLLECTOR_BACKEND_SERVICES name: envinpod image: harbor.marksugar.com/java/linuxea-2022:202207091551 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 imagePullSecrets: - name: 156pull restartPolicy: Always3. 
overlays目录首先,在overlays目录下是有dev和prod目录的,我们先看在dev目录下的kustomization.yamlkustomization.yaml中的内容,包含一组资源和相关的自定义信息,如下更多用法参考官方文档或者github社区kustomization.yamlapiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization patchesStrategicMerge: - resources.yaml # 当如当前的文件 namespace: java-demo # 名称空间 images: - name: harbor.marksugar.com/java/linuxea-2022 # 镜像url必须保持和base中一致 newTag: '202207072119' # 镜像tag bases: - ../../base # 引入bases基础文件 # configmap变量 configMapGenerator: - name: envinpod # 环境变量名称 env: env.file # 环境变量位置 # 副本数 replicas: - name: java-demo # 名称必须保持一致 count: 5 # namePrefix: dev- # pod前缀 # nameSuffix: "-001" # pod后缀 commonLabels: app: java-demo # 标签 # logging: isOk # commonAnnotations: # oncallPager: 897-001删掉那些注释后如下apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization patchesStrategicMerge: - resources.yaml namespace: java-demo images: - name: harbor.marksugar.com/java/linuxea-2022 newTag: '202207071059' bases: - ../../base configMapGenerator: - name: envinpod env: env.file replicas: - name: java-demo count: 5 commonLabels: app: java-demoresources.yaml resources.yaml 中的name必须保持一致apiVersion: apps/v1 kind: Deployment metadata: name: java-demo spec: template: spec: containers: - name: java-demo resources: limits: cpu: "1" memory: 2048Mi requests: cpu: "1" memory: 2048Mienv.fileenv.file定义的变量是对应在base中的,这些是skwayling中的必要信息,参考kubernetes中skywalking9.0部署使用,env的用法参考kustomize变量引入SW_AGENT_NAME=test::java-demo SW_AGENT_TRACE_IGNORE_PATH=GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/** SW_AGENT_COLLECTOR_BACKEND_SERVICES=skywalking-oap.skywalking:11800查看 kustomize build overlays/dev/后的配置清单。如下所示:apiVersion: v1 data: SW_AGENT_COLLECTOR_BACKEND_SERVICES: skywalking-oap.skywalking:11800 SW_AGENT_NAME: test::java-demo SW_AGENT_TRACE_IGNORE_PATH: GET:/health,GET:/aggreg/health,/eureka/**,xxl-job/** kind: ConfigMap metadata: labels: app: java-demo name: envinpod-74t9b8htb6 namespace: java-demo --- apiVersion: v1 kind: Service metadata: labels: app: java-demo name: java-demo namespace: java-demo spec: ports: - nodePort: 31180 port: 8080 targetPort: 8080 selector: app: java-demo type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: java-demo name: java-demo namespace: java-demo spec: replicas: 5 selector: matchLabels: app: java-demo template: metadata: labels: app: java-demo spec: containers: - env: - name: SW_AGENT_NAME valueFrom: configMapKeyRef: key: SW_AGENT_NAME name: envinpod-74t9b8htb6 - name: SW_AGENT_TRACE_IGNORE_PATH valueFrom: configMapKeyRef: key: SW_AGENT_TRACE_IGNORE_PATH name: envinpod-74t9b8htb6 - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES valueFrom: configMapKeyRef: key: SW_AGENT_COLLECTOR_BACKEND_SERVICES name: envinpod-74t9b8htb6 image: harbor.marksugar.com/java/linuxea-2022:202207071059 imagePullPolicy: IfNotPresent name: java-demo ports: - containerPort: 8080 resources: limits: cpu: "1" memory: 2048Mi requests: cpu: "1" memory: 2048Mi imagePullSecrets: - name: 156pull restartPolicy: Alwaysbase作为基础配置,Overlays作为覆盖来区分。base是包含 kustomization.yaml 文件的一个目录,其中包含一组资源及其相关的定制。 base可以是本地目录或者来自远程仓库的目录,只要其中存在 kustomization.yaml 文件即可。 Overlays 也是一个目录,其中包含将其他 kustomization 目录当做 bases 来引用的 kustomization.yaml 文件。 base不了解Overlays的存在,且可被多个Overlays所使用。 Overlays则可以有多个base,且可针对所有base中的资源执行操作,还可以在其上执行定制。通过sed替换Overlays下的文件内容或者kustomize edit set,如:在Overlays下执行kustomize edit set image harbor.marksugar.com/java/linuxea-2022:202207091551:202207071059:1.14.b替换镜像文件。一切符合预期后,使用kustomize.exe build .\overlays\dev\ | kubectl apply -f -使其生效。4. 
部署到k8s命令部署两种方式kustomizekustomize build overlays/dev/ | kubectl apply -f -kubectlkubectl apply -k overlays/dev/使用kubectl apply -k生效,如下PS E:\ops\k8s-1.23.1-latest\gitops> kubectl.exe apply -k .\overlays\dev\ configmap/envinpod-74t9b8htb6 unchanged service/java-demo created deployment.apps/java-demo created如果使用的域名是私有的,需要在本地hosts填写本地解析172.16.100.54 harbor.marksugar.com并且需要修改/etc/docker/daemon.json{ "data-root": "/var/lib/docker", "exec-opts": ["native.cgroupdriver=systemd"], "insecure-registries": ["harbor.marksugar.com"], "max-concurrent-downloads": 10, "live-restore": true, "log-driver": "json-file", "log-level": "warn", "log-opts": { "max-size": "50m", "max-file": "1" }, "storage-driver": "overlay2" }查看部署情况PS E:\ops\k8s-1.23.1-latest\gitops\kustomize-k8s-yaml> kubectl.exe -n java-demo get all NAME READY STATUS RESTARTS AGE pod/java-demo-6474cb8fc8-6xs8t 1/1 Running 0 41s pod/java-demo-6474cb8fc8-9z9sd 1/1 Running 0 41s pod/java-demo-6474cb8fc8-jfqv6 1/1 Running 0 41s pod/java-demo-6474cb8fc8-p5ztd 1/1 Running 0 41s pod/java-demo-6474cb8fc8-sqt7b 1/1 Running 0 41s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/java-demo NodePort 10.111.26.148 <none> 8080:31180/TCP 41s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/java-demo 5/5 5 5 41s NAME DESIRED CURRENT READY AGE replicaset.apps/java-demo-6474cb8fc8 5 5 5 42s与此同时,skywalking也加入成功创建git项目在gitlab创建了一个组,在组织里面创建了一个项目,名称以项目命名,在项目内每个应用对应一个分支如: devops组内内新建一个k8s-yaml的项目,项目内创建一个java-demo分支,java-demo分支中存放java-demo的配置文件现在创建key,将密钥加入到项目中ssh-keygen -t ed25519将文件推送到git上$ git clone git@172.16.100.47:devops/k8s-yaml.git Cloning into 'k8s-yaml'... remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 Receiving objects: 100% (3/3), done. $ cd k8s-yaml/ $ git checkout -b java-demo Switched to a new branch 'java-demo $ ls -ll total 1024 -rw-r--r-- 1 Administrator 197121 12 Jul 7 21:09 README.MD drwxr-xr-x 1 Administrator 197121 0 Jun 28 20:15 base/ -rw-r--r-- 1 Administrator 197121 774 Jul 6 18:05 imagepullsecrt.yaml drwxr-xr-x 1 Administrator 197121 0 Jun 28 20:15 overlays/ $ git add . $ git commit -m "first commit" [java-demo a9701f7] first commit 11 files changed, 185 insertions(+) create mode 100644 base/deployment.yaml create mode 100644 base/kustomization.yaml create mode 100644 base/service.yaml create mode 100644 imagepullsecrt.yaml create mode 100644 overlays/dev/env.file create mode 100644 overlays/dev/kustomization.yaml create mode 100644 overlays/dev/resources.yaml create mode 100644 overlays/prod/kustomization.yaml create mode 100644 overlays/prod/replicas.yaml create mode 100644 overlays/prod/resources.yaml $ git push -u origin java-demo Enumerating objects: 19, done. Counting objects: 100% (19/19), done. Delta compression using up to 8 threads Compressing objects: 100% (15/15), done. Writing objects: 100% (17/17), 2.90 KiB | 329.00 KiB/s, done. 
Total 17 (delta 2), reused 0 (delta 0), pack-reused 0 remote: remote: To create a merge request for java-demo, visit: remote: http://172.16.100.47/devops/k8s-yaml/-/merge_requests/new?merge_request%5Bsource_branch%5D=java-demo remote: To 172.16.100.47:devops/k8s-yaml.git bb67227..a9701f7 java-demo -> java-demo Branch 'java-demo' set up to track remote branch 'java-demo' from 'origin'.添加到流水线首先,kustomize是配置文件是存放在gitlab上,因此,这个git需要我们拉取下来,而后修改镜像名称,应用kustomize的配置后,在push到gitlab上在这里的是kustomize是仅仅来管理yaml清单文件,在后面将使用argocd来做我们在流水线里面配置一个环境变量,指向kustomize配置文件的git地址,并切除git拉取后的目录地址尽可能的在gitlab和jenkins上的项目名称保持一直,才能做好流水线取值或者切出值的时候方便def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git" def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() 但是kustomize是不能直接去访问集群的,因此还必须用kubectl,那就以为这需要config文件我们使用命令指定配置文件位置kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev另外,如果你的jenkins的docker镜像没有kustomize,或者kubectl,需要挂载进去,因此我的就变成了 environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" def kustomize_Git="git@172.16.100.47:devops/k8s-yaml.git" def JOB_NAMES=sh (script: """echo ${kustomize_Git.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_Area="dev" def apps_name="java-demo" def projectGroup="java-demo" def PACK_PATH="/usr/local/package" }并且在容器内生成一个密钥,而后加到gitlab中,以供git拉取和上传bash-5.1# ssh-keygen -t rsa而后在复制到/var/jenkins_home下,并且挂载到容器内- /data/jenkins-latest/jenkins_home/.ssh:/root/.ssh第一次拉取需要输入yes,我们规避它echo ' Host * StrictHostKeyChecking no UserKnownHostsFile=/dev/null' >>/root/.ssh/config如果你使用的是宿主机运行的Jenkins,这一步可省略因为资源不足的问题,我们手动修改副本数为1流水线阶段,步骤大致如下:1.判断本地是否有git的目录,如果有就删除2.拉取git,并切换到分支3.追加当前的镜像版本到一个buildhistory的文件中4.cd到目录中修改镜像5.修改完成后上传修改你被人6.kustomize和kubectl应用配置清单代码快如下: stage('Deploy') { steps { sh ''' [ ! -d ${JOB_NAMES} ] || rm -rf ${JOB_NAMES} } git clone ${kustomize_Git} && cd ${JOB_NAMES} && git checkout ${apps_name} echo "push latest images: $IPATH" echo "`date +%F-%T` imageTag: $IPATH buildId: ${BUILD_NUMBER} " >> ./buildhistory-$Projects_Area-${apps_name}.log cd overlays/$Projects_Area ${PACK_PATH}/kustomize edit set image $IPATH cd ../.. git add . git config --global push.default matching git config user.name zhengchao.tang git config user.email usertzc@163.com git commit -m "image tag $IPATH-> ${imageUrlPath}" git push -u origin ${apps_name} ${PACK_PATH}/kustomize build overlays/$Projects_Area/ | ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev apply -f - ''' } } 观测状态配置清单被生效后,不一定符合预期,此时有很多种情况出现,特别是在使用原生的这些命令和脚本更新的时候我们需要追踪更新后的状态,以便于我们随时做出正确的动作。我此前写过一篇关于kubernetes检测pod部署状态简单实现,如果感兴趣可以查看仍然使用此前的方式,如下 stage('status watch') { steps { sh ''' ${PACK_PATH}/kubectl --kubeconfig /var/jenkins_home/.kube/config-1.23.1-dev -n ${projectGroup} rollout status deployment ${apps_name} --watch --timeout=10m ''' } }构建一次到服务器上查看[root@linuxea-11 ~]# kubectl -n java-demo get pod NAME READY STATUS RESTARTS AGE java-demo-66b98564f6-xsc6z 1/1 Running 0 9m24s其他参考kubernetes中skywalking9.0部署使用,kustomize变量引入
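The dev overlay above is edited by the pipeline with kustomize edit set image. The prod overlay (overlays/prod) appears in the tree but is not shown; here is a hypothetical sketch of preparing it with kustomize edit subcommands instead of hand-editing kustomization.yaml. The tag, replica count and label values are illustrative only.

```bash
# Illustrative prod overlay preparation; values are placeholders.
cd overlays/prod

# pin the image tag (matches the image name declared in base/)
kustomize edit set image harbor.marksugar.com/java/linuxea-2022:202207091551

# prod-specific replica count, namespace and common label
kustomize edit set replicas java-demo=3
kustomize edit set namespace java-demo
kustomize edit set label app:java-demo

# render and review before applying
cd ../..
kustomize build overlays/prod | kubectl apply --dry-run=client -f -
```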
2022-07-07
linuxea: Jenkins pipeline with Sonar branch scanning, GitLab integration, Docker and mvn packaging, part 2 (8)
在前面的jenkins流水线集成juit/sonarqube/覆盖率扫描配置一中介绍了juilt,覆盖率以及soanrqube的一些配置实现。接着上一篇中,我们继续。阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(上一章已实现)jenkins凭据使用(上一章已实现)juit配置(上一章已实现)sonarqube简单扫描(上一章已实现)sonarqube覆盖率(上一章已实现)打包基于java的skywalking agent(上一章已实现)sonarqube与gitlab关联 (本章实现)配置docker中构建docker (本章实现)mvn打包 (本章实现)sonarqube简单分支扫描(本章实现)基于gitlab来管理kustomize的k8s配置清单kubectl部署kubeclt deployment的状态跟踪钉钉消息的构建状态推送4.6 分支扫描我们可能更希望扫描某一个分支,于是我们需要sonarqube-community-branch-plugin插件我们在https://github.com/mc1arke/sonarqube-community-branch-plugin/releases中,留意支持的版本Note: This version supports Sonarqube 8.9 and above. Sonarqube 8.8 and below or 9.0 and above are not supported in this release使用下表查找每个 SonarQube 版本的正确插件版本SonarQube 版本插件版本9.1+1.12.09.01.9.08.91.8.28.7 - 8.81.7.08.5 - 8.61.6.08.2 - 8.41.5.08.11.4.07.8 - 8.01.3.27.4 - 7.71.0.2于是,我们在nexus3上下载1.8.1版本https://github.com/mc1arke/sonarqube-community-branch-plugin/releases/download/1.8.0/sonarqube-community-branch-plugin-1.8.0.jar 或者 https://github.91chifun.workers.dev//https://github.com/mc1arke/sonarqube-community-branch-plugin/releases/download/1.8.0/sonarqube-community-branch-plugin-1.8.0.jar根据安装提示https://github.com/mc1arke/sonarqube-community-branch-plugin#manual-install而后直接将 jar包下载在/data/sonarqube/extensions/plugins/下即可wget http://172.16.100.48/jenkins/sonar-plugins/sonarqube-community-branch-plugin-1.8.0.jar -o /data/sonarqube/extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar实际上/data/sonarqube/extensions/目录被挂载到nexus的容器内的/opt/sonarqube/extensions下而容器内的位置是不变的,因此挂载映射关系如下: volumes: - /etc/localtime:/etc/localtime - /data/sonarqube/conf:/opt/sonarqube/conf - /data/sonarqube/extensions:/opt/sonarqube/extensions - /data/sonarqube/logs:/opt/sonarqube/logs - /data/sonarqube/data:/opt/sonarqube/data[root@linuxea-47 /data/sonarqube/extensions]# ll plugins/ total 17552 -rwx------ 1 1000 1000 10280677 Oct 10 2021 sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar而后,我们在本地是/data/sonarqube/conf下的创建一个配置文件sonar.properties,内容如下sonar.web.javaAdditionalOpts=-javaagent:./extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar=web sonar.ce.javaAdditionalOpts=-javaagent:./extensions/plugins/sonarqube-community-branch-plugin-1.8.0.jar=ce这个配置文件被映射到容器内的/opt/sonarqube/conf进入容器查看[root@linuxea-47 /data/sonarqube]# ls extensions/plugins/ -ll total 17552 -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar分支扫描参数增加 –Dsonar.branch.name=-Dsonar.branch.name=master那现在的projetctkey就不需要加分支名字了 -Dsonar.projectKey=${JOB_NAME}_${branch} \ -Dsonar.projectName=${JOB_NAME}_${branch} \直接在一个项目中就可以看到多个分支的扫描结果了 stage("coed sonar"){ steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=http://172.16.100.47:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} """ } } } }此时我们分别构建master和web后,在sonarqube的UI中就会有两个分支的扫描结果注意事项如果你使用的是不同的版本,而不同的版本配置是不一样的。见github的每个分支,比如:1.5.04.7 关联gitlab在https://github.com/gabrie-allaigre/sonar-gitlab-plugin下载插件,参阅用法中版本对应,我们下载4.1.0https://github.com/gabrie-allaigre/sonar-gitlab-plugin/releases/download/4.1.0/sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar而后仍然存放到sonarqube的plugin目录下[root@linuxea-47 ~]# ls /data/sonarqube/extensions/plugins/ -ll total 17552 -rwx------ 1 1000 1000 10280677 Oct 10 2021 sonar-gitlab-plugin-4.1.0-SNAPSHOT.jar -rwx------ 1 1000 1000 61903 Sep 11 2021 sonar-l10n-zh-plugin-8.9.jar -rwx------ 1 1000 1000 7623167 Oct 10 2021 sonarqube-community-branch-plugin-1.8.0.jar这在启动的时候,实际上可以看到日志加载根据文档,要完成扫描必须提供如下必要参数-Dsonar.gitlab.commit_sha=1632c729e8f78f913cbf0925baa2a8c893e4473b \ 版本sha -Dsonar.gitlab.ref_name=master \ 分支 -Dsonar.gitlab.project_id=16 \ 项目id -Dsonar.dynamicAnalysis=reuseReports \ 扫描方式 -Dsonar.gitlab.failure_notification_mode=commit-status \ 更改提交状态 -Dsonar.gitlab.url=http://192.168.1.200 \ gitlab地址 -Dsonar.gitlab.user_token=k8xLe6dYTzdtoewSysmy \ gitlab token -Dsonar.gitlab.api_version=v41.配置一个全局token至少需要如下权限令牌如下K8DtxxxifxU1gQeDgvDK其他信息根据现有的项目输入即可-Dsonar.gitlab.commit_sha=4a5bb3db1c845cddc86290d137ef694b3b076d0e \ 版本sha -Dsonar.gitlab.ref_name=master \ 分支 -Dsonar.gitlab.project_id=19 \ 项目id -Dsonar.dynamicAnalysis=reuseReports \ 扫描方式 -Dsonar.gitlab.failure_notification_mode=commit-status \ 更改提交状态 -Dsonar.gitlab.url=http://172.16.100.47 \ gitlab地址 -Dsonar.gitlab.user_token=K8DtxxxifxU1gQeDgvDK \ gitlab token -Dsonar.gitlab.api_version=v42.将上述命令添加到sonarqube的流水线中/var/jenkins_home/package/sonar-scanner/bin/sonar-scanner \ -Dsonar.host.url=http://172.16.15.136:9000 \ -Dsonar.projectKey=java-demo \ -Dsonar.projectName=java-demo \ -Dsonar.projectVersion=120 \ -Dsonar.login=636558affea60cc5f264247de36e7c27c817530b \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" \ -Dsonar.links.homepage=http://172.16.15.136:180/devops/java-demo.git \ -Dsonar.links.ci=http://172.16.15.136:8088/job/java-demo/120/ \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.branch.name=main \ -Dsonar.gitlab.commit_sha=9353e89a7b42e0d93ddf95520408ecfde9a5144a \ -Dsonar.gitlab.ref_name=main \ -Dsonar.gitlab.project_id=2 \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=http://172.16.15.136:180 \ -Dsonar.gitlab.user_token=9mszu2KXx7nHXiwJveBs \ -Dsonar.gitlab.api_version=v4运行测试正常是什么样的呢,换一个环境配置下/usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=http://172.16.100.47:9000 \ -Dsonar.projectKey=java-demo \ -Dsonar.projectName=java-demo \ -Dsonar.projectVersion=20 \ -Dsonar.login=bc826f124d691127c351388274667d7deb1cc9b2 \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=www.baidu.com \ -Dsonar.links.ci=20 \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=master \ -Dsonar.gitlab.commit_sha=4a5bb3db1c845cddc86290d137ef694b3b076d0e \ -Dsonar.gitlab.ref_name=master \ -Dsonar.gitlab.project_id=19 \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=http://172.16.100.47 \ -Dsonar.gitlab.user_token=K8DtxxxifxU1gQeDgvDK \ -Dsonar.gitlab.api_version=v4 执行之后INFO: SCM Publisher SCM provider for this project is: git INFO: SCM Publisher 2 source files to be analyzed INFO: SCM Publisher 2/2 source files have been analyzed (done) | time=704ms INFO: CPD Executor 2 files had no CPD blocks INFO: CPD Executor Calculating CPD for 0 files INFO: CPD Executor CPD calculation finished (done) | time=0ms INFO: Analysis report generated in 42ms, dir size=74 KB INFO: Analysis report compressed in 14ms, zip size=13 KB INFO: Analysis report uploaded in 468ms INFO: ANALYSIS SUCCESSFUL, you can browse http://172.16.100.47:9000/dashboard?id=java-demo&branch=master INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report INFO: More about the report processing at http://172.16.100.47:9000/api/ce/task?id=AYHOP018DZyaRsN1subY INFO: Executing post-job 'GitLab Commit Issue Publisher' INFO: Waiting quality gate to complete... INFO: Quality gate status: OK INFO: Duplicated Lines : 0 INFO: Lines of Code : 18 INFO: Report status=success, desc=SonarQube reported QualityGate is ok, with 2 ok, no issues INFO: Analysis total time: 7.130 s INFO: ------------------------------------------------------------------------ INFO: EXECUTION SUCCESS INFO: ------------------------------------------------------------------------ INFO: Total time: 7.949s INFO: Final Memory: 17M/60M INFO: ------------------------------------------------------------------------流水线已通过3.获取参数现在的问题是,手动输入gitlab的这些值不可能在jenkins中输入,我们需要自动获取这些。分支的环境变量通过传递来,用变量获取即可commit_sha通过读取当前代码中的文件实现gitlab token放到密钥管理当中于是,我们通过jq来获取格式化gitlab api返回值获取缺省的项目id需要下载一个jq程序在jenkins节点上。于是我们在https://stedolan.github.io/jq/download/页面下载一个 binaries二进制的即可https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64获取项目id curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.name == "java-demo")'|jq .id示例1:如果项目名称在所有组内是唯一的,就可以使用jq -rc '.[]|select(.name == "java-demo")',如下.name == "java-demo": 项目名curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.name == "java-demo")' | jq .id示例2:如果项目名称在所有组内不是唯一,且有多个的,用jq -rc '.[]|select(.path_with_namespace == "java/java-demo")',如下.path_with_namespace == java/java-demo : 组名/项目名curl --silent --header "PRIVATE-TOKEN: K8DtxxxifxU1gQeDgvDK" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'|jq .id获取当前的sha版本号获取办版本号只需要在当前项目目录内读取文件或者命令即可,it log --pretty=oneline|head -1| cut -b 1-40,如下[root@linuxea-48 /data/jenkins-latest/jenkins_home/workspace/linuxea-2022]# git log --pretty=oneline|head -1| cut -b 1-40 4a5bb3db1c845cddc86290d137ef694b3b076d0e除此之外使用cut -b -40 .git/refs/remotes/origin/master 
能获得一样的效果[root@linuxea-48 /data/jenkins-latest/jenkins_home/workspace/linuxea-2022]# cut -b -40 .git/refs/remotes/origin/master 4a5bb3db1c845cddc86290d137ef694b3b076d0e项目名称项目名称,我们可以使用Jenkins的项目名字。但是,这个名字有时候未必和git的项目名称一样,于是,我们直接截取项目的地址名称JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() 那么现在已经具备上面的几个关键参数,现在分别命名GIT_COMMIT_TAGSHA和Projects_GitId,JOB_NAMESenvironment { def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| ${buildMap["jq"]} -rc '.[]|select(.path_with_namespace == "java/java-demo")'| ${buildMap["jq"]} .id""",returnStdout: true).trim() }那么现在的环境变量就是 environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --header "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| ${buildMap["jq"]} -rc '.[]|select(.path_with_namespace == "java/java-demo")'| ${buildMap["jq"]} .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="http://172.16.100.47" } 而新增的调用的命令如下 -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 构建一次能够看到已经获取到的值,构建成功的完整的阶段代码如下: stage("coed sonar"){ environment { def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --heade "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="http://172.16.100.47" } steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=${GitLab_Address}:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} \ -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 """ } } } }4.8 mvn 打包我们是哟个一条命令直接进行打包-Dmaven.test.skip=true,不执行测试用例,也不编译测试用例类-Dmaven.test.failure.ignore=true ,忽略单元测试失败-s ~/.m2/settings.xml,指定mvn构建的配置文件位置mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml阶段如下 stage("mvn build"){ steps { script { sh """ cd linuxea mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml """ } } }4.9 推送镜像我们先需要将docker配置好,首先容器内需要安装docker,而后挂载socket如果你的系统是和容器系统的库文件一样,你可以将本地的docker二进制文件挂载到容器内,但是我使用的是alpine,因此我在容器内安装了docker,此时只需要挂载目录和sock即可也可以将docker挂载到容器内即可 - /usr/bin/docker:/usr/bin/docker - /etc/docker:/etc/docker - /var/run/docker.sock:/var/run/docker.sock并在容器内登录docker容器内登录,或者在流水线阶段中登录也可以[root@linuxea-48 /data/jenkins-latest/jenkins_home]# docker exec -it jenkins bash bash-5.1# cat ~/.docker/config.json { "auths": { "harbor.marksugar.com": { "auth": "YWRtaW46SGFyYm9yMTIzNDU=" } } }将配置复制到主机并挂载到容器内,或者在主机登录挂载到容器都可以- /data/jenkins-latest/.docker:/root/.docker能够在容器内查看docker命令bash-5.1# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 536cb1dbeb3f registry.cn-hangzhou.aliyuncs.com/marksugar/jenkins:2.332-3-alpine-ansible-maven3-nodev16.15-latest "/sbin/tini -- /usr/…" About an hour ago Up About an hour jenkins而后配置docker推送阶段开始之前要配置环境变量,用于获取镜像的时间tag_time随机时间 agent any environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" }docker阶段请注意:此时在COPY skywalking-agent的时候,需要将包拷贝到当前目录才能COPY到容器内 stage("docker build"){ steps{ script{ sh """ cd linuxea docker ps -a cp -r /usr/local/package/skywalking-agent ./ docker build -f ./Dockerfile -t $IPATH . 
docker push $IPATH docker rmi -f $IPATH """ } } }与此同时需要修改Dockerfile中的COPY 目录而后创建harbor仓库开始构建一旦构建完成,镜像将会推送到harbor仓库此时的pipeline流水线i清单如下try { if ( "${onerun}" == "gitlabs"){ println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" } }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any environment { def tag_time = new Date().format("yyyyMMddHHmm") def IPATH="harbor.marksugar.com/java/${JOB_NAME}:${tag_time}" } //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("CheckOut"){ steps { script { println("下载代码 --> 分支: ${env.branch}") checkout( [$class: 'GitSCM', branches: [[name: "${branch}"]], extensions: [], userRemoteConfigs: [[ credentialsId: 'gitlab-mark', url: "${BASEURL}"]]]) } } } stage("unit Test"){ steps{ script{ sh """ cd linuxea && mvn test -s /var/jenkins_home/.m2/settings.xml2 """ } } post { success { script { junit 'linuxea/target/surefire-reports/*.xml' } } } } stage("coed sonar"){ environment { def GIT_COMMIT_TAGSHA=sh (script: """cut -b -40 .git/refs/remotes/origin/master""",returnStdout: true).trim() def JOB_NAMES=sh (script: """echo ${BASEURL.split("/")[-1]} | cut -d . -f 1""",returnStdout: true).trim() def Projects_GitId=sh (script: """curl --silent --heade "PRIVATE-TOKEN: zrv1vpfZTtEFCJGrJczB" "http://gitlab.marksugar.com/api/v4/projects?simple=true"| /usr/local/package/jq-1.6/jq -rc '.[]|select(.path_with_namespace == "java/java-demo")'| /usr/local/package/jq-1.6/jq .id""",returnStdout: true).trim() def SONAR_git_TOKEN="K8DtxxxifxU1gQeDgvDK" def GitLab_Address="http://172.16.100.47" } steps{ script { withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_TOKEN')]) { sh """ cd linuxea && \ /usr/local/package/sonar-scanner-4.6.2.2472-linux/bin/sonar-scanner \ -Dsonar.host.url=${GitLab_Address}:9000 \ -Dsonar.projectKey=${JOB_NAME} \ -Dsonar.projectName=${JOB_NAME} \ -Dsonar.projectVersion=${BUILD_NUMBER} \ -Dsonar.login=${SONAR_TOKEN} \ -Dsonar.ws.timeout=30 \ -Dsonar.projectDescription="my first project!" 
\ -Dsonar.links.homepage=${env.BASEURL} \ -Dsonar.links.ci=${BUILD_URL} \ -Dsonar.sources=src \ -Dsonar.sourceEncoding=UTF-8 \ -Dsonar.java.binaries=target/classes \ -Dsonar.java.test.binaries=target/test-classes \ -Dsonar.java.surefire.report=target/surefire-reports \ -Dsonar.core.codeCoveragePlugin=jacoco \ -Dsonar.jacoco.reportPaths=target/jacoco.exec \ -Dsonar.branch.name=${branch} \ -Dsonar.gitlab.commit_sha=${GIT_COMMIT_TAGSHA} \ -Dsonar.gitlab.ref_name=${branch} \ -Dsonar.gitlab.project_id=${Projects_GitId} \ -Dsonar.dynamicAnalysis=reuseReports \ -Dsonar.gitlab.failure_notification_mode=commit-status \ -Dsonar.gitlab.url=${GitLab_Address} \ -Dsonar.gitlab.user_token=${SONAR_git_TOKEN} \ -Dsonar.gitlab.api_version=v4 """ } } } } stage("mvn build"){ steps { script { sh """ cd linuxea mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s /var/jenkins_home/.m2/settings.xml2 """ } } } stage("docker build"){ steps{ script{ sh """ cd linuxea docker ps -a cp -r /usr/local/package/skywalking-agent ./ docker build -f ./Dockerfile -t $IPATH . docker push $IPATH docker rmi -f $IPATH """ } } } } }
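Two of the values fed to the scanner above come from shell one-liners: the commit SHA from .git/refs/remotes/origin/master and the GitLab project id via the API plus jq. A small sketch of equivalent commands follows; git rev-parse is slightly more robust because the refs file no longer exists once refs are packed, and the token below is a placeholder.

```bash
# 40-character commit SHA for -Dsonar.gitlab.commit_sha
git rev-parse origin/master   # remote-tracking branch tip (same result as cut -b -40 .git/refs/remotes/origin/master)
git rev-parse HEAD            # commit actually checked out by the pipeline

# GitLab project id for -Dsonar.gitlab.project_id (PRIVATE-TOKEN is a placeholder)
curl --silent --header "PRIVATE-TOKEN: <gitlab-token>" \
  "http://gitlab.marksugar.com/api/v4/projects?simple=true" \
  | jq -r '.[] | select(.path_with_namespace == "java/java-demo") | .id'
```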
2022-07-01
linuxea: automatic and manual build triggering with GitLab and Jenkins (6)
在前面几章里面,我们配置了基本的组件,围绕java构建配置了nexus3,配置了skywalking,能够打包和构建镜像,但是我们需要将这些串联起来,组成一个流水线,并且需要将skywalking的agent打包在镜像内,并配置必要的参数。与此同时,我们使用一个简单的实现方式用作在jenkins上,那就是pipeline和部分groovy语法的函数,至少来完成一下场景。场景1: A方希望提交代码或者打TAG来触发jenkins构建,在构建之前使用sonarqube进行代码扫描,并且配置简单的阈值。而后去上述的流水线整合。按道理,sonarqube的配置是有一些讲究的,处于整体考虑sonarqube只用作流水线管道的一部分,本次不去考虑sonarqube的代码扫描策略,也不会将扫描结果关联到gitlab,只是仅仅将文件反馈到Jenkins。这些在后面如果有时间在进行配置在本次中我只仅仅使用pipeline,并不是共享库。阅读此篇,你将了解如下列表中简单的实现方式:jenkins和gitlab触发(本章实现)jenkins凭据使用(本章实现)juit配置sonarqube简单扫描配置docker中构建docker打包基于java的skywalking agent(本章实现)基于gitlab来管理kustomize的k8s配置清单kubectl部署kubeclt deployment的状态跟踪钉钉消息的构建状态推送拓扑如下图:1.添加skywalking agent此前在基于nexus3代码构建和harbor镜像打包(3)一骗你中,我们已经有了一个java-hello-world的包,提供了一个8086的端口,并且我们将Dockerfile之类的都已准备妥当,此时延续此前的流程继续走。如果没有,在此页面克隆。1.现在我们下载一个skywaling的agent(8.11.0)端来到Dockerfile中,要实现,需要下载包到jenkins服务器上,或者打在镜像内。https://www.apache.org/dyn/closer.cgi/skywalking/java-agent/8.11.0/apache-skywalking-java-agent-8.11.0.tgz2.gitlab上创建一个java组,创建一个java-demo的项目,将代码和代码中的Dockerfile推送到gitlab仓库中3.在Dockerfile中添加COPY agent,并在启动的时候添加到启动命令中,如下docker-compose中映射关系中,/data/jenkins-latest/package:/usr/local/package。于是我们将skywalking包存放在/data/jenkins-latest/package下,而后在Dockerfile中/usr/local/package的路径即可COPY /usr/local/package/skywalking-agent /skywalking-agent而后启动的时候引入到启动命令中 -javaagent:/skywalking-agent/skywalking-agent.jarCMD java ${JAVA_OPTS} -javaagent:/skywalking-agent/skywalking-agent.jar -jar *.jarDockerfile如下我们需要修改下目录 结构,提前创建/skywalking-agent/logs并且授权并且,skywalking-agent目录需要提前在流水线中复制到当前目录中来FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 MAINTAINER by mark ENV JAVA_OPTS="\ -server \ -Xms2048m \ -Xmx2048m \ -Xmn512m \ -Xss256k \ -XX:+UseConcMarkSweepGC \ -XX:+UseCMSInitiatingOccupancyOnly \ -XX:CMSInitiatingOccupancyFraction=70 \ -XX:+HeapDumpOnOutOfMemoryError \ -XX:HeapDumpPath=/data/logs" \ MY_USER=linuxea \ MY_USER_ID=316 RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \ && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \ && mkdir /data/logs /skywalking-agent -p COPY target/*.jar /data/ COPY skywalking-agent /skywalking-agent/ RUN chown -R 316.316 /skywalking-agent WORKDIR /data USER linuxea CMD java ${JAVA_OPTS} -javaagent:/skywalking-agent/skywalking-agent.jar -jar *.jar 注意事项我们需要通过trace-ignore-plugin来过滤跟踪系统需要忽略的url,因此我们需要根据trace-ignore-plugin进行配置。这很有必要匹配规则遵循 Ant Path 匹配风格,如 /path/、/path/*、/path/?。将apm-trace-ignore-plugin-x.jar复制到agent/plugins,重启agent即可生效插件。于是,我们将这个插件复制到plugins下tar xf apache-skywalking-java-agent-8.11.0.tar.gz cd skywalking-agent/ cp optional-plugins/apm-trace-ignore-plugin-8.11.0.jar plugins/忽略参数(忽略参数在k8syaml中进行配置)有两种方法可以配置忽略模式。通过系统环境设置具有更高的优先级。1.系统环境变量设置,需要在系统变量中添加skywalking.trace.ignore_path,值为需要忽略的路径,多条路径之间用,分隔2.将/agent/optional-plugins/apm-trace-ignore-plugin/apm-trace-ignore-plugin.config 复制到/agent/config/ 目录下,并添加过滤跟踪的规则trace.ignore_path=/your/path/1/,/your/path/2/4.将gitlab的java-demo项目拉到本地后,将java-helo-word项目文件移动到私有gitlab,并且将Dockerfile放入[root@Node-172_16_100_48 /data]# git clone git@172.16.100.47:java/java-demo.git Cloning into 'java-demo'... remote: Enumerating objects: 3, done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3 Receiving objects: 100% (3/3), done. [root@Node-172_16_100_48 /data]# mv java-helo-word/* java-demo/ [root@Node-172_16_100_48 /data]# tree java-demo/linuxea/ java-demo/ ├── bbin.png ├── cn-site-service.iml ├── Dockerfile .......... 
23 directories, 26 files放入完成后的Dockerfile的内容如下FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 MAINTAINER by mark ENV JAVA_OPTS="\ -server \ -Xms2048m \ -Xmx2048m \ -Xmn512m \ -Xss256k \ -XX:+UseConcMarkSweepGC \ -XX:+UseCMSInitiatingOccupancyOnly \ -XX:CMSInitiatingOccupancyFraction=70 \ -XX:+HeapDumpOnOutOfMemoryError \ -XX:HeapDumpPath=/data/logs" \ MY_USER=linuxea \ MY_USER_ID=316 RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \ && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \ && mkdir /data/logs /skywalking-agent -p COPY target/*.jar /data/ COPY skywalking-agent /skywalking-agent/ RUN chown -R 316.316 /skywalking-agent WORKDIR /data USER linuxea CMD java ${JAVA_OPTS} -javaagent:/skywalking-agent/skywalking-agent.jar -jar *.jar Dockerfile添加skywalking到此完成。[root@linuxea-48 /data/java-demo]# git add . && git commit -m "first commit" && git push -u origin main [main 2f6d866] first commit 25 files changed, 545 insertions(+) create mode 100644 Dockerfile create mode 100644 bbin.png create mode 100644 cn-site-service.iml create mode 100644 fenghuang.png create mode 100644 index.html create mode 100644 pom.xml create mode 100644 src/main/java/com/dt/info/InfoSiteServiceApplication.java create mode 100644 src/main/java/com/dt/info/controller/HelloController.java create mode 100644 src/main/resources/account.properties create mode 100644 src/main/resources/application.yml create mode 100644 src/main/resources/log4j.properties ........... remote: remote: To create a merge request for main, visit: remote: http://172.16.100.47/java/java-demo/-/merge_requests/new?merge_request%5Bsource_branch%5D=main remote: To git@172.16.100.47:java/java-demo.git * [new branch] main -> main Branch main set up to track remote branch main from origin.代码上传到gitlab后开始配置jenkinsB.new jar你也可以生成一个空的java包来测试准备一个jar包,可以是一个java已有的程序或者下载一个空的,如下在https://start.spring.io/页面默认选择,选择java 8,而后点击CENERATE下载demo包,解压这个包将代码推送到gitlab将项目拉到本地后在上传demo包Administrator@DESKTOP-RD8S1SJ MINGW64 /h/k8s-1.20.2/gitops $ git clone git@172.16.100.47:pipeline-ops/2022-test.git Cloning into '2022-test'... remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 Receiving objects: 100% (3/3), done. Administrator@DESKTOP-RD8S1SJ MINGW64 /h/k8s-1.20.2/gitops $ unzip demo.zip -d 2022-test $ git add . 
&& git commit -m "first commit" && git push2.关联jenkins和gitlab为了能够实现gitlab自动触发,我们需要配置一个webhook,并且jenkins安装插件来完成。首先我们需要插件,并且在gitlab配置一个webhook,一旦gitlab发生事件后就会触发到jenkins,jenkins启动流水线作业。我将会在流水线作业来判断作业被触发是通过gitlab还是其他的。2.1 jenkins插件安装Generic Webhook Trigger插件,而后点击新建imtes->输入一个名称—>选择pipeline例如,创建了一个Linuxea-2022的项目,勾选了Generic Webhook Trigger,并且在下方的token,输入了一个marksugar测试pipelinepipeline{ //指定运行此流水线的节点 agent any //管道运行选项 options { skipStagesAfterUnstable() } //流水线的阶段 stages{ //阶段1 获取代码 stage("CheckOut"){ steps{ script{ println("获取代码") } } } stage("Build"){ steps{ script{ println("运行构建") } } } } post { always{ script{ println("流水线结束后,经常做的事情") } } success{ script{ println("流水线成功后,要做的事情") } } failure{ script{ println("流水线失败后,要做的事情") } } aborted{ script{ println("流水线取消后,要做的事情") } } } }手动触发测试curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar'运行一次[root@linuxea-47 ~]# curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar' {"jobs":{"linuxea-2022":{"regexpFilterExpression":"","triggered":true,"resolvedVariables":{},"regexpFilterText":"","id":4,"url":"queue/item/4/"}},"message":"Triggered jobs."}You have new mail in /var/spool/mail/root2.2 配置gitlab webhook1.在右上角的preferences中的最下方Localization选择简体中文保存2.管理元->设置->网络->下拉菜单中的“出战请求”勾选 允许来自 web hooks 和服务对本地网络的请求回到gitlab首先进入项目后选择settings->webhooks->urlurl输入http://172.16.100.48:58080/generic-webhook-trigger/invoke/?token=marksugar?marksugar, 问号后面为设置的token而后选中push events和tag push events和最下面的ssl verfication : Enable SSL verification兵点击Add webhook测试在最下方 -> test下拉菜单中选择一个被我们选中过的事件,而后点击。模拟一次push在edit中 -> 最下方 View details 查看Request body,Request body就是发送的内容,这些内容可以被获取到并且解析回到jenkins,查看已经开始构建3.1 自动与手动关联在上面已经配置了自动触发jenkins构建,但是这还不够,我们想在jenkins上体现出来,那一次是自动构建,那一次是手动点击,于是我们添加try和catch因此,我们配置try,并且在现有的阶段添加两个必要的环境变量来应对手动触发3.2 添加手动参数branch: 分支BASEURL:git地址3.2 配置识别try语法我们需要获取请求过来的数据,因此我们获取所有的json请求,配置如下自动触发而后获取的变量方式解析后,我将必要的值进行拼接后如下println("Trigger User: ${info_user_username}") println("Trigger Branch: ${info_ref}" ) println("Trigger event: ${info_event_name}") println("Trigger application: ${info_project_name}") println("Trigger version number: ${info_checkout_sha}") println("Trigger commit message: ${info_commits_0_message}") println("Trigger commit time: ${info_commits_0_timestamp}")而我们只需要部分,因此就变成了如下这般try { println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 非gitlab自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 手动触发 \n branch: ${branch} \n git url: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("env"){ steps{ script{ println("${BASEURL}") } } } } }一旦被触发后,如下图3.3 判断gitlab但这还没玩,尽管此时已经能够识别自动触发了,但是我们无法识别到底是从哪里来的自动触发。但是,我们只需要知道那些是gitlab来的请求和不是gitlab来的请求即可。简单来说,就是需要一个参数来判断触发这次构建的来源。于是,我们配置请求参数来识别判断在Request parameters-> 输入onerunonerun: 用来判断的参数而后在gitlab的url中添加上传递参数http://172.16.100.48:58080/generic-webhook-trigger/invoke/?onerun=gitlabs&token=marksugar这里的onerun=gitlabs,如下我们在try中进行判断即可onerun=gitlabstry { if ( 
"${onerun}" == "gitlabs"){ println("从带有gitlabs请求来的构建") } }catch(e){ println("从没有带有gitlabs请求来的构建") }在本次中,配置如下try { if ( "${onerun}" == "gitlabs"){ println("Trigger Branch: ${info_ref}") RefName="${info_ref.split("/")[-1]}" //自定义显示名称 currentBuild.displayName = "#${info_event_name}-${RefName}-${info_checkout_sha}" //自定义描述 currentBuild.description = "Trigger by user ${info_user_username} 自动触发 \n branch: ${RefName} \n commit message: ${info_commits_0_message}" BUILD_TRIGGER_BY="${info_user_username}" BASEURL="${info_project_git_http_url}" } }catch(e){ BUILD_TRIGGER_BY="${currentBuild.getBuildCauses()[0].userId}" currentBuild.description = "Trigger by user ${BUILD_TRIGGER_BY} 非自动触发 \n branch: ${branch} \ngit: ${BASEURL}" } pipeline{ //指定运行此流水线的节点 agent any //管道运行选项 options { skipDefaultCheckout true skipStagesAfterUnstable() buildDiscarder(logRotator(numToKeepStr: '2')) } //流水线的阶段 stages{ //阶段1 获取代码 stage("env"){ steps{ script{ println("${BASEURL}") } } } } }手动构建一次通过命令构建[root@linuxea-48 ~]# curl --location --request GET 'http://172.16.100.48:58080/generic-webhook-trigger/invoke/?onerun=gitlabs&token=marksugar' && echo {"jobs":{"linuxea-2022":{"regexpFilterExpression":"","triggered":true,"resolvedVariables":{"info":"","onerun":"gitlabs","onerun_0":"gitlabs"},"regexpFilterText":"","id":14,"url":"queue/item/14/"}},"message":"Triggered jobs."}如下通过这样的配置,我们至少能从jenkins上看到构建的触发类型而已。
2022-06-29
linuxea: code builds with Nexus3 and image packaging with Harbor (3)
在前两篇中,主要叙述了即将做什么以及基础环境搭建。此时我们需要一个通用的java程序来配置这些东西,但是这其中又需要配置不少的东西,比如nexus3等。因此,本章将围绕nexus3配置,而后通过nexus3配置maven打包。将jar包构建,接着配置harbor,并且将打包的镜像推送到harbor镜像仓库。如下图红色阴影部分内容:阅读此篇,你将了解如下信息:nexus3配置java编译打包与nexus3harbor安装配置和使用基于alpine构建jdk编写Dockerfile技巧和构建和推送镜像我们仅仅使用非Https的harbor仓库,如果要配置https已经helm仓库,阅读habor2.5的helm仓库和镜像仓库使用(5)进行配置即可配置java和node环境变量[root@linuxea-01 local]# tar xf apache-maven-3.8.6-bin.tar.gz -C /usr/local/ [root@linuxea-01 local]# tar xf node-v16.15.1-linux-x64.tar.xz -C /usr/local/ [root@linuxea-01 local]# MAVEN_PATH=/usr/local/apache-maven-3.8.6 [root@linuxea-01 local]# NODE_PATH=/usr/local/node-v16.15.1-linux-x64 [root@linuxea-01 local]# PATH=$PATH:$NODE_PATH/bin:$PATH:$MAVEN_PATH/bin1.修改为阿里源settings.xml修改阿里云源<?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <pluginGroups> </pluginGroups> <proxies> </proxies> <servers> <server> <id>maven-releases</id> <username>admin</username> <password>admin</password> </server> <server> <id>maven-snapshots</id> <username>admin</username> <password>admin</password> </server> </servers> <mirrors> <!-- <mirror> <id>nexus</id> <mirrorOf>local</mirrorOf> <name>nexus</name> <url>http://172.16.15.136:8081/repository/maven-public/</url> </mirror>--> <mirror> <id>alimaven</id> <name>aliyun maven</name> <url>http://maven.aliyun.com/nexus/content/groups/public/</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors> <profiles> </profiles> </settings>可以使用-s指定settings.yaml你也可以指定pom.xml </parent> 之下 <repositories> <repository> <id>alimaven</id> <name>aliyun maven</name> <url>http://maven.aliyun.com/nexus/content/groups/public/</url> </repository> </repositories>2 配置nexus32.1 创建Blob Stores如果剩余10G就报警2.2 创建proxy创建repositories->选择maven2(proxy)http://maven.aliyun.com/nexus/content/groups/public我们着重修改和存储桶如法炮制,继续将下面的都创建maven2-proxy1. aliyun http://maven.aliyun.com/nexus/content/groups/public 2. apache_snapshot https://repository.apache.org/content/repositories/snapshots/ 3. apache_release https://repository.apache.org/content/repositories/releases/ 4. atlassian https://maven.atlassian.com/content/repositories/atlassian-public/ 5. central.maven.org http://central.maven.org/maven2/ 6. datanucleus http://www.datanucleus.org/downloads/maven2 7. maven-central (安装后自带,仅需设置Cache有效期即可) https://repo1.maven.org/maven2/ 8. nexus.axiomalaska.com http://nexus.axiomalaska.com/nexus/content/repositories/public 9. 
oss.sonatype.org https://oss.sonatype.org/content/repositories/snapshots 10.pentaho https://public.nexus.pentaho.org/content/groups/omni/ 11.central http://maven.aliyun.com/nexus/content/repositories/central2.3 创建local在创建一个maven2-local2.4 创建group创建group,将上面所有创建的拉入到当前group2.5 配置xml文件配置settings.xml,修改nexus3地址,如下所示[root@linuxea-01 linuxea]# cat ~/.m2/settings.xml <?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <pluginGroups> </pluginGroups> <proxies> </proxies> <servers> <server> <id>maven-releases</id> <username>admin</username> <password>admin</password> </server> <server> <id>maven-snapshots</id> <username>admin</username> <password>admin</password> </server> <server> <id>alimaven</id> <username>admin</username> <password>admin</password> </server> </servers> <mirrors> <!-- <mirror> <id>nexus</id> <mirrorOf>local</mirrorOf> <name>nexus</name> <url>http://172.16.15.136:8081/repository/maven-public/</url> </mirror>--> <mirror> <id>alimaven</id> <name>aliyun maven</name> <url>http://172.16.15.136:8081/repository/maven2-group/</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors> <profiles> </profiles> </settings>打包测试mvn clean install -Dautoconfig.skip=true -Dmaven.test.skip=false -Dmaven.test.failure.ignore=true -s ~/.m2/settings.xml截图如下s) Downloaded from alimaven: http://172.16.15.136:8081/repository/maven2-group/commons-codec/commons-codec/1.6/commons-codec-1.6.jar (233 kB at 991 kB/s) [INFO] Installing /data/java-helo-word/linuxea/target/hello-world-0.0.6.jar to /root/.m2/repository/com/dt/hello-world/0.0.6/hello-world-0.0.6.jar [INFO] Installing /data/java-helo-word/linuxea/pom.xml to /root/.m2/repository/com/dt/hello-world/0.0.6/hello-world-0.0.6.pom [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 01:04 min [INFO] Finished at: 2022-06-24T17:07:10+08:00 [INFO] ------------------------------------------------------------------------ [root@linuxea-01 linuxea]#3. 配置harbor登录harbor项目后创建一个项目而后可以看到推送命令docker tag SOURCE_IMAGE[:TAG] 172.16.100.54/linuxea/REPOSITORY[:TAG] docker push 172.16.100.54/linuxea/REPOSITORY[:TAG]我们直接把镜像打成172.16.100.54/linuxea/java-demo:TAG即可,而不是要去修改tag,而后直接上传即可4. 打包和构建使用alpine的最大好处就是可以适量的最小化缩减镜像体积。这也是alpine流行的最大因素。由于一直使用的都是jdk8,因此仍然使用jdk8版本,基础镜像仍然使用alpine:3.15,我参考了dockerhub上一个朋友的镜像,重新构建了jdk8u202,整个镜像大小大概在453M左右。可以通过如下地址进行获取docker pull registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u2024.1 构建基础镜像jdl的基础镜像已经构建完成,在本地,仍然按照这里的dockerfile进行构建而后我们创建一个base仓库来存放登录并推送[root@linuxea-48 ~]# docker login harbor.marksugar.com Authenticating with existing credentials... Stored credentials invalid or expired Username (admin): admin Password: WARNING! Your password will be stored unencrypted in /root/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded [root@linuxea-48 ~]# docker push harbor.marksugar.com/base/jdk:8u202 The push refers to repository [harbor.marksugar.com/base/jdk] 788766eb7d3e: Pushed 8d3ac3489996: Pushed 8u202: digest: sha256:516cd5bd65041d4b00587127417c1a9a3aea970fa533d330f60b07395aa5e5ca size: 7414.2 打包java镜像此前我找了一个java的hello world的包,现在在我的github上可以找到将它拉到本地构建,进行测试[root@linuxea-48 /data]# git clone https://ghproxy.futils.com/https://github.com/marksugar/java-helo-word.git Cloning into 'java-helo-word'... remote: Enumerating objects: 110, done. remote: Total 110 (delta 0), reused 0 (delta 0), pack-reused 110 Receiving objects: 100% (110/110), 28.09 KiB | 0 bytes/s, done.开始打包jar包构建频繁出错,需要解决的是依赖包,可能需要添加nexus3仓库的代理,这些通过搜索引擎解决。一旦构建完成,在target目录下就会有一个jar包[root@linuxea-48 /data/java-helo-word/linuxea]# ll target/hello-world-0.0.6.jar -rw-r--r-- 1 root root 17300624 Jun 26 01:00 target/hello-world-0.0.6.jar而后这个jar可以进行启动的,并监听了一个8086的端口号[root@linuxea-48 /data/java-helo-word/linuxea]# java -jar target/hello-world-0.0.6.jar . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.5.10.RELEASE) 2022-06-26 01:05:52.217 INFO 38183 --- [ main] com.dt.info.InfoSiteServiceApplication : Starting InfoSiteServiceApplication v0.0.6 on Node172_16_100_48.marksugar.me with PID 38183 (/data/java-helo-word/linuxea/target/hello-world-0.0.6.jar started by root in /data/java-helo-word/linuxea) 2022-06-26 01:05:52.219 INFO 38183 --- [ main] com.dt.info.InfoSiteServiceApplication : No active profile set, falling back to default profiles: default 2022-06-26 01:05:52.265 INFO 38183 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sun Jun 26 01:05:52 CST 2022]; root of context hierarchy 2022-06-26 01:05:53.118 INFO 38183 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8086 (http) 2022-06-26 01:05:53.126 INFO 38183 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2022-06-26 01:05:53.129 INFO 38183 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.27 2022-06-26 01:05:53.180 INFO 38183 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2022-06-26 01:05:53.180 INFO 38183 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 916 ms 2022-06-26 01:05:53.256 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/] 2022-06-26 01:05:53.257 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*] 2022-06-26 01:05:53.258 INFO 38183 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*] 2022-06-26 01:05:53.283 INFO 38183 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : 
Initializing ExecutorService 2022-06-26 01:05:53.287 INFO 38183 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'getThreadPoolTaskScheduler' 2022-06-26 01:05:53.459 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sun Jun 26 01:05:52 CST 2022]; root of context hierarchy 2022-06-26 01:05:53.502 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/index]}" onto public java.lang.String com.dt.info.controller.HelloController.hello() 2022-06-26 01:05:53.505 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) 2022-06-26 01:05:53.505 INFO 38183 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest) 2022-06-26 01:05:53.529 INFO 38183 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-26 01:05:53.529 INFO 38183 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-26 01:05:53.551 INFO 38183 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-26 01:05:53.568 INFO 38183 --- [ main] oConfiguration$WelcomePageHandlerMapping : Adding welcome page: class path resource [static/index.html] 2022-06-26 01:05:53.639 INFO 38183 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup 2022-06-26 01:05:53.680 INFO 38183 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8086 (http) 2022-06-26 01:05:53.682 INFO 38183 --- [ main] com.dt.info.InfoSiteServiceApplication : Started InfoSiteServiceApplication in 2.392 seconds (JVM running for 2.647)访问现在jar包和nexus3准备好了4.3 编写Dockerfile当这一切准备妥当,开始编写Dockerfile,我们需要注意以下其他配置配置内存限制资源配置普通用户,并已普通用户启动pod应用程序如下FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 MAINTAINER www.linuxea.com by mark ENV JAVA_OPTS="\ -server \ -Xms2048m \ -Xmx2048m \ -Xmn512m \ -Xss256k \ -XX:+UseConcMarkSweepGC \ -XX:+UseCMSInitiatingOccupancyOnly \ -XX:CMSInitiatingOccupancyFraction=70 \ -XX:+HeapDumpOnOutOfMemoryError \ -XX:HeapDumpPath=/data/logs" \ MY_USER=linuxea \ MY_USER_ID=316 RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} \ && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} \ && mkdir /data/logs -p COPY target/*.jar /data/ WORKDIR /data USER linuxea CMD java ${JAVA_OPTS} -jar *.jar开始构建我们指定配置文件位置进行创建docker build -t hello-java -f ./Dockerfile .如下[root@linuxea-48 /data/java-helo-word/linuxea]# docker build -t hello-java -f ./Dockerfile . 
Sending build context to Docker daemon 17.5MB Step 1/7 : FROM registry.cn-hangzhou.aliyuncs.com/marksugar/jdk:8u202 ---> 5919494d49c0 Step 2/7 : MAINTAINER www.linuxea.com by mark ---> Running in 51ea254cd0c3 Removing intermediate container 51ea254cd0c3 ---> 109317878a94 Step 3/7 : ENV JAVA_OPTS=" -server -Xms2048m -Xmx2048m -Xmn512m -Xss256k -XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs" MY_USER=linuxea MY_USER_ID=316 ---> Running in 5745dbc7928b Removing intermediate container 5745dbc7928b ---> a7d40e22389a Step 4/7 : RUN addgroup -g ${MY_USER_ID} -S ${MY_USER} && adduser -u ${MY_USER_ID} -S -H -s /sbin/nologin -g 'java' -G ${MY_USER} ${MY_USER} && mkdir /data/logs -p ---> Running in 2e4c34e11b62 Removing intermediate container 2e4c34e11b62 ---> d2fdac4de2fa Step 5/7 : COPY target/*.jar /data/ ---> 5538b183318b Step 6/7 : WORKDIR /data ---> Running in 7d0ac5b1dcc2 Removing intermediate container 7d0ac5b1dcc2 ---> e03a5699e97c Step 7/7 : CMD java ${JAVA_OPTS} jar *.jar ---> Running in 58ff0459e4d7 Removing intermediate container 58ff0459e4d7 ---> d1689a9a179f Successfully built d1689a9a179f Successfully tagged hello-java:latest接着我们run起来[root@linuxea-48 /data/java-helo-word/linuxea]# docker run --rm hello-java . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.5.10.RELEASE) 2022-06-25 17:26:22.052 INFO 1 --- [ main] com.dt.info.InfoSiteServiceApplication : Starting InfoSiteServiceApplication v0.0.6 on f18e65565a19 with PID 1 (/data/hello-world-0.0.6.jar started by linuxea in /data) 2022-06-25 17:26:22.054 INFO 1 --- [ main] com.dt.info.InfoSiteServiceApplication : No active profile set, falling back to default profiles: default 2022-06-25 17:26:22.121 INFO 1 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sat Jun 25 17:26:22 GMT 2022]; root of context hierarchy 2022-06-25 17:26:23.079 INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8086 (http) 2022-06-25 17:26:23.087 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2022-06-25 17:26:23.089 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.27 2022-06-25 17:26:23.148 INFO 1 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2022-06-25 17:26:23.149 INFO 1 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1028 ms 2022-06-25 17:26:23.236 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/] 2022-06-25 17:26:23.240 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*] 2022-06-25 17:26:23.240 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*] 2022-06-25 17:26:23.240 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*] 2022-06-25 17:26:23.240 INFO 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : 
Mapping filter: 'requestContextFilter' to: [/*] 2022-06-25 17:26:23.273 INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 2022-06-25 17:26:23.279 INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'getThreadPoolTaskScheduler' 2022-06-25 17:26:23.459 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48533e64: startup date [Sat Jun 25 17:26:22 GMT 2022]; root of context hierarchy 2022-06-25 17:26:23.508 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/index]}" onto public java.lang.String com.dt.info.controller.HelloController.hello() 2022-06-25 17:26:23.511 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) 2022-06-25 17:26:23.511 INFO 1 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest) 2022-06-25 17:26:23.534 INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-25 17:26:23.534 INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-25 17:26:23.559 INFO 1 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] 2022-06-25 17:26:23.654 INFO 1 --- [ main] oConfiguration$WelcomePageHandlerMapping : Adding welcome page: class path resource [static/index.html] 2022-06-25 17:26:23.786 INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup 2022-06-25 17:26:23.841 INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8086 (http) 2022-06-25 17:26:23.845 INFO 1 --- [ main] com.dt.info.InfoSiteServiceApplication : Started InfoSiteServiceApplication in 2.076 seconds (JVM running for 2.329)进入容器可以看到当前使用的是linuxea用户bash-5.1$ ps aux PID USER TIME COMMAND 1 linuxea 0:00 bash 15 linuxea 0:00 ps aux4.4 推送仓库将构建的镜像推送到仓库,以备用途,于是,我们登录harbor创建一个项目存放修改镜像名称并pushdocker tag hello-java harbor.marksugar.com/linuxea/hello-world:latest docker push harbor.marksugar.com/linuxea/hello-world:latest如下[root@linuxea-48 /data/java-helo-word/linuxea]# docker tag hello-java harbor.marksugar.com/linuxea/hello-world:latest [root@linuxea-48 /data/java-helo-word/linuxea]# docker push harbor.marksugar.com/linuxea/hello-world:latest The push refers to repository [harbor.marksugar.com/linuxea/hello-world] 9435dbe70451: Pushed 8c3c8b0adf90: Pushed 788766eb7d3e: Mounted from base/jdk 8d3ac3489996: Mounted from base/jdk latest: digest: sha256:2248bf99e35cf864d521441d8d2efc9aedbed56c24625e4f60e93df5e8fc65c3 size: 1161此时harbor仓库已经有了已经已经打包完成的镜像,也就是所谓的一个制品
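到这里,“mvn打包 -> docker build -> docker push”还是手工分步执行的。为了后面接入流水线,可以先用一个本地脚本把这几步串起来,下面是一个最小示意(镜像名沿用上文的harbor.marksugar.com/linuxea/hello-world,标签改用时间戳,路径与参数请按实际环境调整):
#!/bin/bash
# 串联打包、构建与推送的最小示意脚本(非正式实现)
set -e
TAG=$(date +%Y%m%d%H%M)                                   # 以时间戳作为镜像标签
IMAGE=harbor.marksugar.com/linuxea/hello-world:${TAG}

mvn clean package -Dmaven.test.skip=true -s ~/.m2/settings.xml    # 使用上文配置的nexus3代理源打包
docker build -t ${IMAGE} -f ./Dockerfile .                        # 使用上文编写的Dockerfile构建镜像
docker push ${IMAGE}                                               # 推送前需已docker login harbor.marksugar.com
echo "pushed: ${IMAGE}"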
2022年06月29日
193 阅读
0 评论
0 点赞
2022-06-28
linuxea:harbor2.5的helm仓库和镜像仓库使用(5)
在早先,容器仓库有Portus和harbor最为瞩目,harbor是vmware的产品,而前者则是由suse团队维护,但在2019portus提供了最后一个版本。但随着时间的推移,harbor提供了更多与时俱进的功能服务,这也导致harbor愈发的收到关注和使用。并且harbor是由vmware中国区成员参与,众所周知,一款提供中文界面且有优秀的产品,往往都是更受欢迎的。harbor不仅可以存放容器镜像,还可以提供helm的仓库,简单列出他具备的功能云原生注册表:支持容器镜像和Helm图表,Harbor 充当云原生环境(如容器运行时和编排平台)的注册表。基于角色的访问控制:用户通过“项目”访问不同的存储库,并且用户可以对项目下的图像或 Helm 图表具有不同的权限。基于策略的复制:可以使用过滤器(存储库、标签和标签)基于策略在多个注册表实例之间复制(同步)图像和图表。如果遇到任何错误,Harbor 会自动重试复制。这可用于辅助负载平衡、实现高可用性以及促进混合和多云场景中的多数据中心部署。漏洞扫描:Harbor 定期扫描映像以查找漏洞,并进行策略检查以防止部署易受攻击的映像。LDAP/AD 支持:Harbor 与现有的企业 LDAP/AD 集成以进行用户身份验证和管理,并支持将 LDAP 组导入 Harbor,然后可以授予特定项目的权限。OIDC 支持:Harbor 利用 OpenID Connect (OIDC) 来验证由外部授权服务器或身份提供者认证的用户的身份。可以启用单点登录以登录到 Harbor 门户。图像删除和垃圾收集:系统管理员可以运行垃圾收集作业,以便可以删除图像(悬空清单和未引用的 blob)并定期释放它们的空间。Notary:支持使用 Docker Content Trust(利用 Notary)对容器镜像进行签名,以保证真实性和出处。此外,还可以激活防止部署未签名映像的策略。图形用户门户:用户可以轻松浏览、搜索存储库和管理项目。审计:通过日志跟踪对存储库的所有操作。RESTful API:提供 RESTful API 是为了方便管理操作,并且易于用于与外部系统集成。嵌入式 Swagger UI 可用于探索和测试 API。易于部署:Harbor 可以通过 Docker compose 以及 Helm Chart 进行部署,最近还添加了一个 Harbor Operator。以上内容特征从github获取,阅读本章,你将了解如何利用harbor配置基本的docker容器仓库和helm仓库的使用harbor如果你是nginx,则可以直接使用如下命令创建。openssl req -x509 -nodes -days 36500 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com1.准备证书我们按照官方文档进行创建ssl证书与之不同的是我们尽可能的将证书配置的旧一些。因为无论在什么环境,证书最大的问题就是会过期,后期替换的时候会波及到正常使用。域名:harbor.local.com创建的证书目录:/data/cert-date +%F复制下面的命令在sh脚本中修改${YOU_DOMAIN}和${CERT_PATH}后执行即可CERT_PATH=/data/cert-`date +%F`/ YOU_DOMAIN=harbor.local.com mkdir -p ${CERT_PATH} openssl genrsa -out ca.key 4096 openssl req -x509 -new -nodes -sha512 -days 365000 \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ca.key \ -out ca.crt openssl genrsa -out ${YOU_DOMAIN}.key 4096 openssl req -sha512 -new \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ${YOU_DOMAIN}.key \ -out ${YOU_DOMAIN}.csr cat > v3.ext <<-EOF authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1=${YOU_DOMAIN} DNS.2=yourdomain DNS.3=hostname EOF openssl x509 -req -sha512 -days 365000 \ -extfile v3.ext \ -CA ca.crt -CAkey ca.key -CAcreateserial \ -in ${YOU_DOMAIN}.csr \ -out ${YOU_DOMAIN}.crt cp ${YOU_DOMAIN}.crt ${YOU_DOMAIN}.key ${CERT_PATH} 2.harbor和docker接着将证书复制到harbor和docker的目录,使其生效1.复制crt和key 到harbor的证书目录cp ${YOU_DOMAIN}.crt ${YOU_DOMAIN}.key ${CERT_PATH} 2.转换yourdomain.com.crt为yourdomain.com.cert, 供 Docker 使用openssl x509 -inform PEM -in harbor.local.com.crt -out harbor.local.com.cert本机mkdir -p /etc/docker/certs.d/harbor.local.com/ cp harbor.local.com.cert /etc/docker/certs.d/harbor.local.com/ cp harbor.local.com.key /etc/docker/certs.d/harbor.local.com/ cp ca.crt /etc/docker/certs.d/harbor.local.com/ systemctl reload docker其他节点scp harbor.local.com.cert harbor.local.com.key ca.crt 172.16.15.136:/etc/docker/certs.d/harbor.local.com/ systemctl reload docker如果不是80端口请创建文件夹/etc/docker/certs.d/yourdomain.com:port或/etc/docker/certs.d/harbor_IP:port.3.harbor.yml经过过滤得到如下配置[root@b.linuxea.com harbor]# egrep -v "^$|^#|^ #|^ #" harbor.yml hostname: reg.mydomain.com http: port: 80 https: port: 443 certificate: /your/certificate/path private_key: /your/private/key/path harbor_admin_password: Harbor12345 database: password: root123 max_idle_conns: 100 max_open_conns: 900 data_volume: /data trivy: ignore_unfixed: false skip_update: false offline_scan: false insecure: false jobservice: max_job_workers: 10 
notification: webhook_job_max_retry: 10 chart: absolute_url: disabled log: level: info local: rotate_count: 50 rotate_size: 200M location: /var/log/harbor _version: 2.5.0 proxy: http_proxy: https_proxy: no_proxy: components: - core - jobservice - trivy upload_purging: enabled: true age: 168h interval: 24h dryrun: false3.1.将服务器证书和密钥复制到 Harbor 主机上的 certficates 文件夹中。创建一个存储目录:mkdir /data/harbor/data配置文件。进行sed替换即可,主要修改如下:hostname:域名certificate: cert目录地址private_key: key目录地址harbor_admin_password: 登录密码data_volume: docker-compose的所有挂载目录sed -i 's@hostname: reg.mydomain.com@hostname: harbor.local.com@g' harbor.yml sed -i 's@certificate: /your/certificate/path@certificate: /data/cert-2022-06-28/harbor.local.com.crt@g' harbor.yml sed -i 's@private_key: /your/private/key/path@private_key: /data/cert-2022-06-28/harbor.local.com.key@g' harbor.yml sed -i 's@harbor_admin_password: Harbor12345@harbor_admin_password: admin@g' harbor.yml sed -i 's@data_volume: /data@data_volume: /data/harbor/data@g' harbor.yml4.安装harbor一切准备妥当,执行./install.sh脚本自动安装[root@b.linuxea.com harbor]# ./install.sh [Step 0]: checking if docker is installed ... Note: docker version: 19.03.6 [Step 1]: checking docker-compose is installed ... ....... [Step 5]: starting Harbor ... Creating network "harbor_harbor" with the default driver Creating harbor-log ... done Creating harbor-db ... done Creating registry ... done Creating redis ... done Creating harbor-portal ... done Creating registryctl ... done Creating harbor-core ... done Creating harbor-jobservice ... done Creating nginx ... done ✔ ----Harbor has been installed and started successfully.----5.测试容器仓库回到其他节点测试docker的应有配置目录结构如下[root@a.linuxea.com ~]# tree /etc/docker/ /etc/docker/ ├── certs.d │ └── harbor.local.com │ ├── ca.crt │ ├── harbor.local.com.cert │ └── harbor.local.com.key ├── daemon.json └── key.json 2 directories, 5 files (base) [root@a.linuxea.com ~]# cat /etc/docker/daemon.json {"insecure-registries":["172.16.100.150:8443",harbor.local.com]}登录测试[root@a.linuxea.com ~]# docker login harbor.local.com Username: admin Password: WARNING! Your password will be stored unencrypted in /root/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded上传镜像测试创建一个仓库harbor提供的命令如下docker push harbor.local.com/base/REPOSITORY[:TAG]测试[root@a.linuxea.com ~]# docker tag mysql:8.0.16 harbor.local.com/base/mysql:8.0.16 [root@a.linuxea.com ~]# docker push harbor.local.com/base/mysql:8.0.16 The push refers to repository [harbor.local.com/base/mysql] 605d208195c7: Pushed 9d87c3455758: Pushed 80f1020054a4: Pushed b0425df45fae: Pushed 680666c6bf72: Pushed 7e7fffcdabb3: Pushed 77737de99484: Pushed 2f1b41b24201: Pushed 007a7f930352: Pushed c6926fcee191: Pushed b78ec9586b34: Pushed d56055da3352: Pushed 8.0.16: digest: sha256:036b8908469edac85afba3b672eb7cbc58d6d6b90c70df0bb3fe2ab4fd939b22 size: 2828docker没有问题后配置helm仓库helm31.helm3安装在官网下载一个helm,解压后并将可执行文件放置sbin下wget https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz tar xf helm-v3.8.2-linux-amd64.tar.gz cp linux-amd64/helm /usr/local/sbin安装完成[root@a.linuxea.com ~]# helm version version.BuildInfo{Version:"v3.8.2", GitCommit:"5cb9af4b1b271d11d7a97a71df3ac337dd94ad37", GitTreeState:"clean", GoVersion:"go1.17.5"}2.添加helm源添加一个azure源,并将其更新helm repo add stable http://mirror.azure.cn/kubernetes/charts/ helm repo list helm repo update helm search repo stable3.登录helm[root@a.linuxea.com ~]# helm registry login harbor.local.com Username: admin Password: Login Succeeded我们还需要让系统信任这个ca,于是我们将 /etc/docker/certs.d/harbor.local.com/ca.crt的内容追加到如/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem文件中,并复制ca.crt到/etc/pki/ca-trust/source/anchors/cp /etc/docker/certs.d/harbor.local.com/ca.crt /etc/pki/ca-trust/source/anchors/3.中转charts我们一个共有的mirros的包中转到私有的仓库上。在中转之前,需要添加一个源来提供现有的charts下载redishelm fetch stable/redis[root@a.linuxea.com ~]# ls redis-10.5.7.tgz redis-10.5.7.tgz推送到harbor.local.com推送 chart 到当前项目helm push redis-10.5.7.tgz oci://harbor.local.com/redis登录harbor查看点进这个Artifacts内,就能看到更多的信息4.上传本地charts现在在本地创建一个charts用作测试,上传到harbor中[root@a.linuxea.com data]# helm create test Creating test [root@a.linuxea.com data]# ls test/ charts Chart.yaml templates values.yaml打包推送[root@a.linuxea.com data]# helm package test Successfully packaged chart and saved it to: /data/test-0.1.0.tgz推送到helm服务器[root@a.linuxea.com data]# helm push test-0.1.0.tgz oci://harbor.local.com/redis Pushed: harbor.local.com/redis/test:0.1.0 Digest: sha256:1a86bc2ae87a8760398099a9c0966ce41141eacc7270673d03dfc4005bc349db5.使用私有charts回到仓库里面,鼠标放在拉取按钮上将会显示拉取的命令如下helm pull oci://harbor.local.com/redis/redis --version 10.5.7拉到本地[root@a.linuxea.com opt]# helm pull oci://harbor.local.com/redis/redis --version 10.5.7 Pulled: harbor.local.com/redis/redis:10.5.7 Digest: sha256:41643fa64d23797d0a874a2b264c9fc1f7323b08b9a02fb3010d72805b54bc3a [root@a.linuxea.com opt]# ls redis-10.5.7.tgz解压后使用template可以看到模板的配置清单信息[root@a.linuxea.com opt]# tar xf redis-10.5.7.tgz [root@a.linuxea.com redis]# helm template test ./安装测试helm upgrade --install -f values.yaml test-redis stable/redis --namespace redis --create-namespace--create-namespace: 如果名称空间不存在就创建upgrade: 如果存在就更新,不存在就创建[root@a.linuxea.com redis]# helm install -f values.yaml test-redis stable/redis --namespace redis --create-namespace WARNING: This chart is deprecated NAME: test-redis LAST DEPLOYED: Tue Jun 28 14:50:56 2022 NAMESPACE: redis STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: This Helm chart is deprecated Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Redis Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/). 
The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keepinghere these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`) $ helm repo add bitnami https://charts.bitnami.com/bitnami$ helm install my-release bitnami/<chart> # Helm 3$ helm install --name my-release bitnami/<chart> # Helm 2 To update an exisiting _stable_ deployment with a chart hosted in the bitnami repository you can executerepo add bitnami https://charts.bitnami.com/bitnami $ helm upgrade my-release bitnami/<chart> Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion. ** Please be patient while the chart is being deployed ** Redis can be accessed via port 6379 on the following DNS name from within your cluster: test-redis-master.redis.svc.cluster.local To get your password run: export REDIS_PASSWORD=$(kubectl get secret --namespace redis test-redis -o jsonpath="{.data.redis-password}" | base64 --decode) To connect to your Redis server: 1. Run a Redis pod that you can use as a client: kubectl run --namespace redis test-redis-client --rm --tty -i --restart='Never' \ --env REDIS_PASSWORD=$REDIS_PASSWORD \ --image docker.io/bitnami/redis:5.0.7-debian-10-r32 -- bash 2. Connect using the Redis CLI: redis-cli -h test-redis-master -a $REDIS_PASSWORD To connect to your database from outside the cluster execute the following commands: kubectl port-forward --namespace redis svc/test-redis-master 6379:6379 & redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD查看密码[root@a.linuxea.com redis]# kubectl get secret --namespace redis test-redis -o jsonpath="{.data.redis-password}" | base64 --decode VeiervwDUG查看运行状态由于一些配置没有准备,此时redis是pending的,但是helm安装是成功的。我们的目的达到了[root@k8s-02 ~]# kubectl -n redis get all NAME READY STATUS RESTARTS AGE pod/test-redis-master-0 0/1 Pending 0 2m53s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/test-redis-headless ClusterIP None <none> 6379/TCP 2m53s service/test-redis-master ClusterIP 10.101.161.177 <none> 6379/TCP 2m53s NAME READY AGE statefulset.apps/test-redis-master 0/1 2m53s已经测试完成,现在卸载掉即可([root@a.linuxea.com redis]# helm -n redis ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION test-redis redis 1 2022-06-28 14:50:56.56116374 +0800 CST deployed redis-10.5.7 5.0.7 [root@a.linuxea.com redis]# helm -n redis uninstall test-redis release "test-redis" uninstalled [root@a.linuxea.com redis]# helm -n redis ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION基于ip的helm在很多场景中,我们需要把一个http的改成https并且还要他支持https,并且还是ip,这在harbor官网已经有说明,老话重提的。1.准备证书与此前不同的是,我们需要将subjectAltName = @alt_names的值也改成ip地址subjectAltName = IP:IPADDRESS如下CERT_PATH=/data/cert-`date +%F`/ YOU_DOMAIN=harbor.local.com mkdir -p ${CERT_PATH} openssl genrsa -out ca.key 4096 openssl req -x509 -new -nodes -sha512 -days 365000 \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ca.key \ -out ca.crt openssl genrsa -out ${YOU_DOMAIN}.key 4096 openssl req -sha512 -new \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ${YOU_DOMAIN}.key \ -out ${YOU_DOMAIN}.csr cat > v3.ext <<-EOF authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = 
digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1=${YOU_DOMAIN} DNS.2=yourdomain DNS.3=hostname EOF openssl x509 -req -sha512 -days 365000 \ -extfile v3.ext \ -CA ca.crt -CAkey ca.key -CAcreateserial \ -in ${YOU_DOMAIN}.csr \ -out ${YOU_DOMAIN}.crt cp ${YOU_DOMAIN}.crt ${YOU_DOMAIN}.key ${CERT_PATH} 以172.16.100.150为例CERT_PATH=/data/cert-`date +%F`/ YOU_DOMAIN=172.16.100.150:8443 mkdir -p ${CERT_PATH} openssl genrsa -out ca.key 4096 openssl req -x509 -new -nodes -sha512 -days 365000 \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ca.key \ -out ca.crt openssl genrsa -out ${YOU_DOMAIN}.key 4096 openssl req -sha512 -new \ -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=${YOU_DOMAIN}" \ -key ${YOU_DOMAIN}.key \ -out ${YOU_DOMAIN}.csr cat > v3.ext <<-EOF authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = IP:172.16.100.150 [alt_names] DNS.1=${YOU_DOMAIN} DNS.2=yourdomain DNS.3=hostname EOF openssl x509 -req -sha512 -days 365000 \ -extfile v3.ext \ -CA ca.crt -CAkey ca.key -CAcreateserial \ -in ${YOU_DOMAIN}.csr \ -out ${YOU_DOMAIN}.crt cp ${YOU_DOMAIN}.crt ${YOU_DOMAIN}.key ${CERT_PATH}2.harbor.yaml配置harbor.yaml部分配置hostname: 172.16.100.150 http: port: 8080 https: port: 8443 certificate: /etc/ssl/certs/172.16.100.150:8443.crt private_key: /etc/ssl/certs/172.16.100.150:8443.key配置完成后需要执行./prepare并且重启3.docker和ca仍然进行证书拷贝首先拷贝当前节点的 cp 172.16.100.150\:8443.* /etc/docker/certs.d/172.16.100.150\:8443/在将当前的证书打包拷贝到其他需要使用helm上传下载的节点 tar -zcf 8443.tar 172.16.100.150\:8443 scp 8443.tar 172.16.15.136:/etc/docker/certs.d/目录结构如下[root@docker-156 certs.d]# tree /etc/docker/certs.d/ /etc/docker/certs.d/ ├── 172.16.100.150:8443 │ ├── 172.16.100.150:8443.cert │ ├── 172.16.100.150:8443.crt │ ├── 172.16.100.150:8443.csr │ ├── 172.16.100.150:8443.key │ └── ca.crt仍然需要让系统信任这个ca,于是我们将 ca.crt的内容追加到如/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem文件中,并复制ca.crt到/etc/pki/ca-trust/source/anchors/cp /etc/docker/certs.d/harbor.local.com/ca.crt /etc/pki/ca-trust/source/anchors/而后就可以正常推送和下载了
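在CentOS 7上,除了手工把ca.crt的内容追加到tls-ca-bundle.pem,也可以直接用ca-trust工具重建系统信任库,helm、curl等走系统信任链的客户端都会生效。一个简单示意如下:
# 将harbor的自签CA加入系统信任(CentOS 7/RHEL系,仅供参考)
cp /etc/docker/certs.d/harbor.local.com/ca.crt /etc/pki/ca-trust/source/anchors/harbor-ca.crt
update-ca-trust extract      # 重建系统信任库,效果等同于手工追加tls-ca-bundle.pem
systemctl reload docker      # docker对仓库证书的信任走/etc/docker/certs.d,更新后reload即可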
2022年06月28日
194 阅读
0 评论
0 点赞
2022-06-27
linuxea:gitops持续集成组件快速搭建
我想我多少有些浮夸,因为我将这几句破烂的文字描述的一个持续集成的幼形称作“gitops”。不禁我有些害臊这充其量只是一个持续集成的组件整合,远远算不上gitops,更别说什么devops,那是个什么东西呢。不知道从什么时候开始,我逐渐厌烦有人枉谈devops,随意的描述devops,更可恶的是有些人做了一个流水线管道就妄言从事了devops的工作,我不想与他们为伍。我肤浅的认为只有无知才会大言不惭。为此,为了能和这些所谓的devops划清界限,并跨远。我利用业余时间将一些小项目的实施交付文档经过修改改为所谓的基于git的持续集成和持续发布,很明显,这里面引入了gitlab。gitlab作为管理jenkins的共享库和k8s的yaml配置清单。当然,这是一个幼形。并且,如果我的描述和形容使你感到不适,那当我什么都没说。好的,那么我们正式开始在一些场合中,我们希望快速构建一个项目,项目里面一套持续集成的流水线,我们至少需要一些必要的组件,如:jenkins,gitlab,sonarqube,harbor,nexus3,k8s集群等。我们的目的是交付一套持续集成和持续交付的幼形,来应对日益变换的构建和发布。拓扑如下为此,这篇文章简单介绍如何快速使用docker来部署这些必要的组件。首要条件安装docker和docker-compose离线安装docker如果你准备了离线包就可以使用本地的包进行安装centos7.9:cd docker/docker-rpm yum localinstall * -y离线安装docker-compose我们至少下载一个较新的版本来应对harbor的配置要求,一般来说都够用cd docker/docker-compose cp docker-compose-Linux-x86_64 /usr/loca/sbin/docker-compose chmod +x /usr/loca/sbin/docker-compose验证docker verson docker-compsoe -v在线安装:yum install epel* -y yum install docker-ce docker-compose -yjenkins如果本地有旧的包,解压即可tar xf jenkins.tar.gz -C /data/ chown -R 1000:1000 /data/jenkins cd /data/jenkins docker-compose -f jenkins.yaml up -d安装新的version: '3.5' services: jenkins: image: registry.cn-hangzhou.aliyuncs.com/marksugar/jenkins:2.332-3-alpine-ansible-maven3-nodev16.15-latest container_name: jenkins restart: always network_mode: host environment: - JAVA_OPTS=-Duser.timezone=Asia/Shanghai # 时区1 volumes: - /etc/localtime:/etc/localtime:ro # 时区2 - /data/jenkins-latest/jenkins_home:/var/jenkins_home #chown 1000:1000 -R jenkins_home - /data/jenkins-latest/ansiblefile:/etc/ansible - /data/jenkins-latest/local_repo:/data/jenkins-latest/local_repo - /data/jenkins-latest/package:/usr/local/package #- /data/jenkins-latest/package/node-v14.17.6-linux-x64/bin/node:/sbin/node #- /data/jenkins-latest/package/node-v14.17.6-linux-x64/bin/npm:/sbin/npm #- /data/jenkins-latest/latest_war_package/jenkins.war:/usr/share/jenkins/jenkins.war # jenkins war新包挂载 # ports: # - 58080:58080 user: root logging: driver: "json-file" options: max-size: "1G" deploy: resources: limits: memory: 30720m reservations: memory: 30720m 查看密钥[root@linuxea.com data]# cat /data/jenkins-latest/jenkins_home/secrets/initialAdminPassword c3e5dd22ea5e4adab28d001a560302bc第一次卡住,修改# cat /data/jenkins-latest/jenkins_home/hudson.model.UpdateCenter.xml <?xml version='1.1' encoding='UTF-8'?> <sites> <site> <id>default</id> <url>https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json</url> </site> </sites>跳过,不安装任何插件选择none如果没有修改上面的插件源,我们就在Manage Jenkins->Plugin Manager->Advanced->最下方的Update Site修改https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json必要安装的jenkins插件1.Credentials: 凭据 localization: 中文插件 localization: chinase(simplified) 2.AnsiColor: 颜色插件 "echo -en \\033[1;32m" 3.Rebuilder: 重复上次构建插件 4.build user vars:变量 变量分为如下几种: Full name :全名 BUILD_USER_FIRST_NAME :名字 BUILD_USER_LAST_NAME :姓 BUILD_USER_ID :Jenkins用户ID BUILD_USER_EMAIL :用户邮箱 5.Workspace Cleanup: 清理workspace 6.Role-based Authorization Strategy 用户角色 7.Git Plugin 8.Gogs 9.GitLab 10.Generic Webhook TriggerVersion 11.Pipeline 12.Pipeline: Groovy 13.JUnit Attachments 14.Performance 15.Html Publisher 16.Gitlab Authentication 17.JIRA 18.LDAP 19.Parameterized Triggersonarqubesonarqube:8.9.2-community docker pull sonarqube:8.9.8-communityversion: '3.3' services: sonarqube: container_name: sonarqube image: registry.cn-hangzhou.aliyuncs.com/marksugar/sonarqube:8.9.8-community restart: always hostname: 172.16.100.47 environment: - stop-timeout: 3600 - "ES_JAVA_OPTS=-Xms16384m -Xmx16384m" ulimits: memlock: soft: -1 hard: -1 logging: driver: 
"json-file" options: max-size: "50M" deploy: resources: limits: memory: 16384m reservations: memory: 16384m ports: - '9000:9000' volumes: - /etc/localtime:/etc/localtime - /data/sonarqube/conf:/opt/sonarqube/conf - /data/sonarqube/extensions:/opt/sonarqube/extensions - /data/sonarqube/logs:/opt/sonarqube/logs - /data/sonarqube/data:/opt/sonarqube/dataharbortar xf harbor-offline-installer-v2.5.1.tgz cd harbor cp harbor.yml.tmpl harbor.yml Nodeip=`ip a s ${NETWORK_DEVIDE:-eth0}|awk '/inet/{print $2}'|sed -r 's/\/[0-9]{1,}//'` sed -i "s/hostname: reg.mydomain.com/hostname: ${NodeIp}/g" harbor.yml sed -i "s@https:@#https:@g" harbor.yml sed -i "s@port: 443@#port: 443@g" harbor.yml sed -i "s@certificate: /your/certificate/path@#certificate: /your/certificate/path@g" harbor.yml sed -i "s@private_key: /your/private/key/path@#private_key: /your/private/key/path@g" harbor.yml bash install.sh默认密码:Harbor12345nexusmkdir /data/nexus/data -p && chown -R 200.200 /data/nexus/datayamlversion: '3.3' services: nexus3: image: sonatype/nexus3:3.39.0 container_name: nexus3 network_mode: host restart: always environment: - INSTALL4J_ADD_VM_PARAMS=-Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=8192m -Djava.util.prefs.userRoot=/nexus-data # - NEXUS_CONTEXT=/ # ports: # - 8081:8081 volumes: - /etc/localtime:/etc/localtime:ro - /data/nexus/data:/nexus-data logging: driver: "json-file" options: max-size: "50M" deploy: resources: limits: memory: 8192m reservations: memory: 8192mgitlabversion: '3' services: gitlab-ce: container_name: gitlab-ce image: gitlab/gitlab-ce:15.0.3-ce.0 restart: always # network_mode: host hostname: 192.168.100.22 environment: TZ: 'Asia/Shanghai' GITLAB_OMNIBUS_CONFIG: | external_url 'http://192.168.100.22' gitlab_rails['time_zone'] = 'Asia/Shanghai' gitlab_rails['gitlab_shell_ssh_port'] = 23857 # unicorn['port'] = 8888 # nginx['listen_port'] = 80 ports: - '80:80' - '443:443' - '23857:22' volumes: - /etc/localtime:/etc/localtime - /data/gitlab/config:/etc/gitlab - /data/gitlab/logs:/var/log/gitlab - /data/gitlab/data:/var/opt/gitlab logging: driver: "json-file" options: max-size: "50M" deploy: resources: limits: memory: 13312m reservations: memory: 13312mgitlab-ce启动完成后使用如下命令查看登录密码docker exec -it gitlab-ce grep 'Password:' /etc/gitlab/initial_root_password
2022年06月27日
160 阅读
0 评论
0 点赞
2019-05-27
linuxea: 三小时快速入门docker指南
在此之前,我记录了很多章关于docker使用的基础,从安装到编写,其中还有一些常见的使用技巧,这其中还包括一些docker-compsoe的简单操作案例,其中有一些由于编写的时间太久,bug很多,但用来学习绰绰有余,但也仅供参考。现在,我将所有的文章汇聚在一个页面中方便查看。假如你是一个docker新手,没有太多时间,你不妨从本章入手学习,假如你想详细了解docker,那我推荐你查看我记录的白话容器26章系列详细学习docker。如果你已经阅读了这所有的文章,你可以查看我的github上的dockerMops,在这个项目中,有我学习的从之前到现在私下编写的一些镜像原文件。请不要误会,这一些仅仅只是为了更好的理解docker,仅供参考。如果你在生产中使用,请妥善思考如何使用。如果文章中出现错误,请留言或者QQ讨论群:47355295 ,加群注明来意这是一篇似乎多余的文章,因为这里提到的都是些基础。这是因为考虑到很多新手朋友接触docker学习的困难,我花了一个下午断断续续写好,它不需要你付费,它仅仅只希望可以帮助到大家。正所谓,取自网络,回馈网络。请不要吝啬你的赞美安装docker和 compose脚本部署:[root@www.linuxea.com-Node99 ~]# curl -Lks https://raw.githubusercontent.com/marksugar/MySysOps/master/scripts/docker-init-Usage.sh|bash Please input one arguement: Usage: bash {centos_install|debian_install|ubuntu_install}当你执行这个脚本,他会提示你:Please input one arguement: Usage: bash docker-init-Usage.sh{centos_install|debian_install|ubuntu_install}只需要输入对应的系统安装即可,如centos[root@www.linuxea.com-Node99 ~]# curl -Lks https://raw.githubusercontent.com/marksugar/MySysOps/master/scripts/docker-init-Usage.sh|bash -s centos_install docker 已经安装 docker-compose 已经安装脚本细节可直接访问github链接查看即可快速入门如果你没基础,你可以从docker的基础命令看起,进行简单的操作,并且了解docker后台运行,以及在运行时候可以使用一些命令来提升学习兴趣,假如报错,可以使用日志来判断哪里有问题。当你简单了解后,可以进一步开始dockerfile的阅读。假如你觉得这些都很乱,你可以尝试学习我在上面提到过的白话容器26章系列详细学习docker。linuxea: docker后台运行模式linuxea:docker中运行bash或其他命令linuxea:docker命令如何过滤docker容器linuxea:如何从命令行删除docker容器linuxea:docker run的十个常用选项linuxea:十个初学Dcoker cli指令linuxea:有效使用docker logs查看日志linuxea:docker标签的简单介绍dockerfile编写指南如果你只是浅用户,并不需要自己编写dockfile,你可以直接跳到docker网络学习docekr的ip和网络配置linuxea:白话容器之简单制作镜像与hub使用(7)linuxea:白话容器之使用dockerfile创建简单镜像1(18)linuxea:白话容器之使用dockerfile指令使用2(19)linuxea:白话容器之dockerfile CMD/entrypoint详解3(20)linuxea:白话容器之dockerfile COPY与ADD的最佳实践(4)(21)linuxea:白话容器之dockerfile health check使用(5)(22) linuxea:白话容器之dockerfile ARG和ONBUILD使用(6)(23) dockerfile常见的使用技巧linuxea:docker run与exec的使用差异linuxea:docker不能忽视的.dockerignore用法linuxea:dockerfile中的RUN指令对镜像大小的影响linuxea:缩减docker镜像大小的5个步骤docker网络如果你也不用docker-compose对容器做简单的编排,仅仅就用一个容器尝试,你大可不必费时间看这么多,直接跳到端口暴露阅读使用即可。假如你有多个容器需要编排,你又只想简单的使用,除了docker网络里面的ip,你或许还要学习,我推荐你查看linuxea:如何使用docker-compose优雅的运行多个容器。相信你会喜欢的linuxea:docker-compose设置静态ip和link与depends_on的区别linuxea:白话容器之虚拟化网络与容器网络(8) linuxea:白话容器之docker网络(9)linuxea:白话容器之docker网络名称空间(10)linuxea:白话容器之自定义docker0网络(13)linuxea:白话容器之sock远程连接docker(14)linuxea:白话容器之docker创建自定义的网桥(15)Docker指定网桥和指定网桥IPdocker端口暴露如果对端口暴露对于你来说过于啰嗦,你直接查看linuxea:白话容器之联盟式容器与host网络模式(12),可以使用--net=host即可完成端口直接暴露,且不会隔离网络名称空间,倘若你是使用iptables,你会惊讶的发现,无论你怎么动防火墙,容器仍然可以使用。但我仍然建议你查看上述docker网络的几篇,了解docker网络的原理,这在有些时候是有必要的。linuxea:简单解释docker的端口和端口暴露(EXPOSE)linuxea:白话容器之docker的4种端口暴露方式(11)linuxea:白话容器之联盟式容器与host网络模式(12)docker仓库如果你已经在本地使用了docker有很多,想构建自己的docker仓库,你可以大致阅读这几章节的文章,选择一个。不过我推荐你使用harbor,因为这是harbor对china支持最好的中文版。相信你会喜欢的。忘了说,你已经使用了docker仓库,想必你会用上发布更新,在早期,我写过一篇jenkins+gitlab+docker快速部署发布回滚示例,这篇文章或许可以给你一些思路,尽管这看起来一点都不时髦。linuxea:白话容器之Registry与Harbor的构建和使用 (26)linuxea:docker仓库harbor-https的配置和使用docker-Portusv2.1镜像仓库快速部署使用docker-harbor0.5.0镜像仓库快速部署docker多阶段构建镜像多阶段构建可以加快构建速度,在某一些场景下,这必不可缺linuxea:Distroless与多阶段构建linuxea:docker多阶段构建Multi-Stage与Builder对比总结linuxea:Docker多阶段构建与- target和--cache-fromdocker数据卷linuxea:白话容器之docker存储卷概述(16)linuxea:白话容器之docker存储卷使用的几种方式(17)docker守护进程docker为什么要用守护进程?这似乎并不符合docker一贯的作风,我们知道,每个容器内ID为1的通常是启动容器的唯一一个进程,进程终止也就意味着容器终止。我们也知道,每个容器内进程ID为1的也是第一个启动的进程,支撑整个容器的框架的进程。我们提高过多次,容器内的进程必须是要在前台运行。为什么?这似乎要去了解更多的知识框架来支持这种操作的合理性。思考1:容器内ID为1的进程是容器内唯一一个在前台运行的进程,这样做的好处是什么? 
当你从本章中的文章中了解到后你的思路将会跟明确那么为什么有需要守护进程?当你真的需要守护进程的时候你应该了解到它的必要性了。简单的说,就是在容器内需要被运行多个进程的时候,你可能需要守护进程来管理。linuxea:docker的supervisor与inotifywait的使用技巧docker-composelinuxea:如何使用docker-compose优雅的运行多个容器docker变量传递linuxea:compose中的变量传递与docker-createrepo构建linuxea:如何使用docker和docker-compose的Entrypointdocker安全linuxea:docker与gVisor沙箱linuxea:Distroless与多阶段构建linuxea:docker特权模式与--cap-add和--cap-droplinuxea:了解uid和gid如何在docker容器中工作linuxea:docker容器中程序不应该以root用户身份运行linuxea:docker卷和文件系统权限linuxea:尽可能不在docker镜像中嵌入配置或者密码linuxea:docker的安全实践初识dockerDocker简单安装和命令使用Docker网络和数据卷构建一个简单的docker镜像使用dockerfile构建一个简单的镜像Docker Hub简单使用Docker数据管理-备份和恢复Docker本地仓库简单使用docker构建示例linuxea:构建redis4.0.11-Docker镜像技巧和思路Docker alpine构建nginxDocker分离构建lnmp部署wordpressDocker部署Redis cluster3.2.5集群Docker构建二进制mariaDB环境docker构建subversion1.9.4Docker构建NTP服务器Docker一步步构建Tomcat思路docker常见问题linuxea:使用单个命令清理docker镜像,容器和卷linuxea:如何设置docker日志轮换linuxea:什么是docker <none><none> image(镜像)?其他docker相关Centos7 Install rancherlinuxea:nginx容器优化方案(小米容器cpu检测)docker其他使用jenkins+gitlab+docker快速部署发布回滚示例docker工具linuxea:如何复现查看docker run参数命令swarmlinuxea:Docker swarm集群入门简单使用(1)linuxea:Docker swarm集群节点服务更新(2)linuxea:Docker swarm集群节点路由网络(3)linuxea:docker config的配置使用docker修改容器时间linuxea:如何单单修改docker容器的系统时间docker资源限制linuxea:白话容器之CPU与内存资源限制概述(24) linuxea:白话容器之CPU与内存资源限制测试(25)
2019年05月27日
5,393 阅读
0 评论
0 点赞
2019-03-27
linuxea:gitlab-ci之docker镜像质量品质报告
在此前的两篇关于gitlab-ci的镜像的安全和质量的问题上做了一些简单的描述,现在就着此前的,我们在使用另外一个开源工具dive,使用dive用来对 镜像每个图层做分析,分析效率和镜像是否有浪费的空间,最后打印一个测试品质的报告。此前在另外一篇文章如何从docker镜像恢复Dockerfile提到如何简单使用dive查看dockerfile.阅读本章节,你将了解dive的基本使用和在gitlab-ci中的集成方式。我们先来看gitlab上作者给出的基本功能如下描述显示按层细分的Docker镜像内容当你在左侧选择一个图层时,将显示该图层的内容以及右侧的所有先前图层。此外,还可以使用箭头键完全浏览文件树(可以查看一些 dockerfile指令)。<当你使用dive image的时候你会进入一个交互式的接口>指出每层中发生了哪些变化已更改,已修改,添加或删除的文件在文件树中指示。可以调整此值以显示特定图层的更改,或直到此图层的聚合更改。估算“镜像效率”左下方窗格显示基本图层信息和一个实验指标,用于猜测镜像所包含的空间浪费。这可能是跨层重复文件,跨层移动文件或不完全删除文件。提供了百分比“得分”和总浪费的文件空间。快速构建/分析周期你可以构建Docker镜像并使用一个命令立即进行分析: dive build -t some-tag .你只需要docker build使用相同的dive build 命令替换你的命令。CI集成 分析和成像,并根据镜像效率和浪费的空间获得通过/失败结果。CI=true在调用任何有效的dive命令时,只需在环境中进行设置。安装centoscurl -OL https://github.com/wagoodman/dive/releases/download/v0.7.0/dive_0.7.0_linux_amd64.rpm rpm -i dive_0.7.0_linux_amd64.rpmdockerdocker run --rm -i -v /var/run/docker.sock:/var/run/docker.sock \ -e CI=true \ wagoodman/dive:v0.7 registry.linuxea.com/dev/linuxeabbs:latest使用我们主要围绕CI展开如果要查看Dockerfile或者其他层的详情,可以使用dive IMAGE,参考如何从docker镜像恢复Dockerfile[root@linuxea.com ~]# CI=true dive registry.linuxea.com/dev/linuxeabbs:latest Fetching image... (this can take a while with large images) Parsing image... Analyzing image... efficiency: 99.6914 % wastedBytes: 1858891 bytes (1.9 MB) userWastedPercent: 0.5040 % Run CI Validations... Using default CI config PASS: highestUserWastedPercent SKIP: highestWastedBytes: rule disabled PASS: lowestEfficiency Result:PASS [Total:3] [Passed:2] [Failed:0] [Warn:0] [Skipped:1]如何理解这些内容背后的含义是什么?参考github作者的回复,总体就如下3条规则efficiency:这基本上是(总镜像大小)/(总和(所有层中的字节数))。我们的想法是,如果你不删除任何文件或添加任何文件两次,那么你的“效率”为100%。如果你开始复制/删除文件,则会根据效率计算(按浪费文件的大小加权)。wastedBytes:发现在多个层上重复的原始字节数,或者发现在更高层中删除的原始字节数(因此,如果最终未使用,则可能不应该在任何层中。userWastedPercent:这基本上是效率的倒数,除了不考虑基础镜像的任何修改。更具体地说,这是wastedBytes / sum(所有层中的字节,基本镜像层除外)。集成到管道4/4 Docker_Dive: <<: *bash_init script: - Docker_Dive artifacts: name: "$CI_JOB_STAGE-$CI_COMMIT_REF_NAME" paths: [Docker_Dive.log] 函数部分 function Docker_Dive() { export PROJECT_NAME=$(echo "$CI_PROJECT_PATH_SLUG" |awk -F- '{print $2}') export IMAGE_TAG_LATEST="$REPOSITORY_URL"/"$PROJECT_NAME":latest docker run --rm -i -v /var/run/docker.sock:/var/run/docker.sock -e CI=true \ wagoodman/dive:v0.7 "$IMAGE_TAG_LATEST" |tee Docker_Dive.log } 部分截图如下:延伸阅读linuxea:如何使用gitlab-ci/cd来构建docker镜像和发布如何从docker镜像恢复Dockerfilelinuxea:gitlab-ci/cd docker容器漏洞扫描clair-scannerlinuxea:gitlab-ci/cd runner配置和安装(一)linuxea:gitlab-ci的定时任务linuxea:docker仓库harbor-https的配置和使用linuxea:白话容器之Registry与Harbor的构建和使用 (26)linuxea:Docker多阶段构建与- target和--cache-from阅读更多devopsgitlabgitlab-ci/cdjenkins
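输出中的“Using default CI config”说明这次用的是dive内置的默认阈值。按照dive的文档,可以在运行目录放一个.dive-ci文件自定义这三条规则的阈值(规则名与上面输出一致),下面是一个示意,具体语法请以所用dive版本为准:
# 自定义dive的CI阈值(.dive-ci),规则名对应上文输出中的三项
cat > .dive-ci <<EOF
rules:
  lowestEfficiency: 0.95
  highestWastedBytes: 20MB
  highestUserWastedPercent: 0.10
EOF

CI=true dive registry.linuxea.com/dev/linuxeabbs:latest    # 在该目录下以CI模式运行,dive会读取.dive-ci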
2019年03月27日
41,896 阅读
0 评论
0 点赞
2019-03-26
linuxea:gitlab-ci之docker-bench-security
在前面的两篇中,记录了gitlab-ci构建构建docker镜像和安全漏洞扫描,但在github上还有一个不错的项目--> Docker Bench for SecurityDocker Bench for Security是一个脚本,用于检查有关在生产中部署Docker容器的许多常见最佳实践。附:DOCKER安全性和最佳实践这其中, 检查的东西很多样化,包括:资源限制,明文密码,swarm以及其他的一些辅助性质或者参考价值的信息。阅读本章,你将了解Docker Bench for Security在gitlab-ci中的集成实践,这将有助于推动gitlab-cI自动化。运行Docker Bench有提供现成的Docker容器镜像,我们直接拿来使用。请注意,此容器正以大量特权运行- 共享主机的文件系统,pid和网络命名空间,因为基准测试的部分应用于正在运行的主机。不要忘记根据你的操作系统调整共享卷,例如它可能不使用systemddocker run -it --net host --pid host --userns host --cap-add audit_control \ -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \ -v /var/lib:/var/lib \ -v /var/run/docker.sock:/var/run/docker.sock \ -v /usr/lib/systemd:/usr/lib/systemd \ -v /etc:/etc --label docker_bench_security \ docker/docker-bench-security默认情况下,会检查所有的容器项目和镜像,我们要通过选项来规避一些我们想要的结果: -b optional Do not print colors -h optional Print this help message -l FILE optional Log output in FILE -c CHECK optional Comma delimited list of specific check(s) -e CHECK optional Comma delimited list of specific check(s) to exclude -i INCLUDE optional Comma delimited list of patterns within a container name to check -x EXCLUDE optional Comma delimited list of patterns within a container name to exclude from check -t TARGET optional Comma delimited list of images name to checktype一共会检查如下几项,我们通过-c参数:-c docker_daemon_configuration 来检查我们想要一个函数的结果。如果你的粒度更小 ,可以使用:-c check_2找到。[INFO] 1 - Host Configuration [INFO] 2 - Docker daemon configuration [INFO] 3 - Docker daemon configuration files [INFO] 4 - Container Images and Build File [INFO] 5 - Container Runtime [INFO] 6 - Docker Security Operations [INFO] 7 - Docker Swarm Configuration示例:仅仅检查-c docker_daemon_configuration,并且镜像只是registry.linuxea.com/dev/linuxeabbs:latest[root@linuxea.com ~]# docker run -it --net host \ --pid host \ --userns host \ --cap-add audit_control \ -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \ -v /var/lib:/var/lib \ -v /var/run/docker.sock:/var/run/docker.sock \ -v /usr/lib/systemd:/usr/lib/systemd \ -v /etc:/etc \ --label docker_bench_security \ docker/docker-bench-security -t registry.linuxea.com/dev/linuxeabbs:latest -c docker_daemon_configuration在或者检查某一个小项添加到管道参数:-t 指定镜像-c 指定CIS,只检查指定的项目-l 结果输出到文件(似乎不好用,用tee代替)添加新的项3/3 DockerBench_Security: <<: *bash_init script: - docker_bench_security artifacts: name: "$CI_JOB_STAGE-$CI_COMMIT_REF_NAME" paths: [docker_bench_security.log]函数部分 function docker_bench_security() { export PROJECT_NAME=$(echo "$CI_PROJECT_PATH_SLUG" |awk -F- '{print $2}') export IMAGE_TAG_LATEST="$REPOSITORY_URL"/"$PROJECT_NAME":latest docker run -i --rm --net host \ --pid host \ --userns host \ --cap-add audit_control \ -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \ -v /var/lib:/var/lib \ -v /var/run/docker.sock:/var/run/docker.sock \ -v /usr/lib/systemd:/usr/lib/systemd \ -v /etc:/etc \ --label docker_bench_security \ docker/docker-bench-security -t "$IMAGE_TAG_LATEST" -c container_images |tee docker_bench_security.log }运行部分结果如下:延伸阅读linuxea:如何使用gitlab-ci/cd来构建docker镜像和发布linuxea:gitlab-ci/cd docker容器漏洞扫描clair-scannerlinuxea:gitlab-ci/cd runner配置和安装(一)linuxea:gitlab-ci的定时任务linuxea:docker仓库harbor-https的配置和使用linuxea:白话容器之Registry与Harbor的构建和使用 (26)linuxea:Docker多阶段构建与- target和--cache-from阅读更多devopsgitlabgitlab-ci/cdjenkins
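如果想让这一步在流水线里真正起到“卡口”作用,一个朴素的做法是统计日志中[WARN]条目的数量,超过阈值就让任务失败。下面是一个简单示意(阈值与统计方式可按需调整):
# 基于docker_bench_security.log做一个简单的CI卡口示意(非正式实现)
MAX_WARN=5
WARN_COUNT=$(grep -c '\[WARN\]' docker_bench_security.log || true)
echo "WARN count: ${WARN_COUNT}"
if [ "${WARN_COUNT}" -gt "${MAX_WARN}" ]; then
  echo "too many WARN items, failing the job"
  exit 1
fi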
2019年03月26日
2,728 阅读
1 评论
0 点赞