搜索到 75 篇与 kubernetes 相关的结果
2022-05-15
linuxea:kubernetes中skywalking9.0部署使用
8.9.0是skywalking发布的最后一个功能版本,从2018年开始,skywalking一直是在服务,端点,实例间依赖的关系和拓扑结构,基于代理跟踪监控发展到全栈,包括日志,跟踪,指标和事件等。也添加了更多,如vm,k8s监控,服务网格。同时也引入了更多的方式来观测,如:ebpf但是在8.x的版本中使用了组的概念来解决混合的问题,但在v9核心中最重要的概念是LAYER层代表计算机科学中的一个抽象框架,例如操作系统(VM 层)、Kubernetes(k8s 层)、Service Mesh(典型的 Isto+Envoy 层),这种层将是从不同技术检测到的不同服务的所有者。相比较v8的组。显然,一个新layer概念要好得多。此外,group概念将被保留,因为它在每个layer,组将被设计为最终用户在内部对他们的服务进行分组。使用no group,它将在默认组中。这些在可视化UI中已经作为一个管理后台的dashboard的样式可视化(UI)databaseskubernetesservice meshgeneral servicebrowser这样导致SkyWalking 9.0.0 看起来是一个全栈 APM 系统,查看更多讨论部署skywalking9.0创建名称空间apiVersion: v1 kind: Namespace metadata: name: skywalking给es创建pvcapiVersion: apps/v1 kind: Deployment metadata: name: nfs-client-provisioner labels: app: nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default spec: replicas: 1 strategy: type: Recreate selector: matchLabels: app: nfs-client-provisioner template: metadata: labels: app: nfs-client-provisioner spec: serviceAccountName: nfs-client-provisioner containers: - name: nfs-client-provisioner image: quay.io/external_storage/nfs-client-provisioner:latest imagePullPolicy: IfNotPresent volumeMounts: - name: nfs-client-root mountPath: /persistentvolumes env: - name: PROVISIONER_NAME value: fuseim.pri/ifs - name: NFS_SERVER value: 192.168.3.19 - name: NFS_PATH value: /data/nfs-k8s volumes: - name: nfs-client-root nfs: server: 192.168.3.19 path: /data/nfs-k8s --- apiVersion: v1 kind: ServiceAccount metadata: name: nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: nfs-client-provisioner-runner rules: - apiGroups: [""] resources: ["persistentvolumes"] verbs: ["get", "list", "watch", "create", "delete"] - apiGroups: [""] resources: ["persistentvolumeclaims"] verbs: ["get", "list", "watch", "update"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["events"] verbs: ["create", "update", "patch"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: run-nfs-client-provisioner subjects: - kind: ServiceAccount name: nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default roleRef: kind: ClusterRole name: nfs-client-provisioner-runner apiGroup: rbac.authorization.k8s.io --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: leader-locking-nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default rules: - apiGroups: [""] resources: ["endpoints"] verbs: ["get", "list", "watch", "create", "update", "patch"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: leader-locking-nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default subjects: - kind: ServiceAccount name: nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default roleRef: kind: Role name: leader-locking-nfs-client-provisioner apiGroup: rbac.authorization.k8s.io创建pvcapiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-storage namespace: default provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME' parameters: archiveOnDelete: "false" # Supported policies: Delete、 Retain , default is Delete reclaimPolicy: Retain --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-skywalking namespace: skywalking spec: accessModes: - 
ReadWriteMany storageClassName: nfs-storage resources: requests: storage: 10Gi创建es pod# Source: skywalking/charts/elasticsearch/templates/statefulset.yaml apiVersion: v1 kind: Service metadata: name: elasticsearch namespace: skywalking labels: app: elasticsearch spec: type: ClusterIP ports: - name: elasticsearch port: 9200 protocol: TCP selector: app: elasticsearch --- apiVersion: apps/v1 kind: Deployment metadata: name: elasticsearch namespace: skywalking labels: app: elasticsearch spec: selector: matchLabels: app: elasticsearch replicas: 1 template: metadata: name: elasticsearch labels: app: elasticsearch spec: initContainers: - name: configure-sysctl securityContext: runAsUser: 0 privileged: true image: "docker.elastic.co/elasticsearch/elasticsearch:6.8.6" imagePullPolicy: "IfNotPresent" # command: ["sysctl", "-w", "vm.max_map_count=262144"] command: ["/bin/sh"] args: ["-c", "sysctl -w DefaultLimitNOFILE=65536; sysctl -w DefaultLimitMEMLOCK=infinity; sysctl -w DefaultLimitNPROC=32000; sysctl -w vm.max_map_count=262144"] resources: {} containers: - name: "elasticsearch" securityContext: capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1000 image: "docker.elastic.co/elasticsearch/elasticsearch:6.8.6" imagePullPolicy: "IfNotPresent" livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 2 successThreshold: 1 tcpSocket: port: 9300 timeoutSeconds: 2 readinessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 2 successThreshold: 2 tcpSocket: port: 9300 timeoutSeconds: 2 ports: - name: http containerPort: 9200 - name: transport containerPort: 9300 resources: limits: cpu: 1000m memory: 2Gi requests: cpu: 100m memory: 2Gi env: - name: node.name valueFrom: fieldRef: fieldPath: metadata.name - name: cluster.name value: "elasticsearch" - name: network.host value: "0.0.0.0" - name: ES_JAVA_OPTS value: "-Xmx1g -Xms1g -Duser.timezone=Asia/Shanghai" - name: discovery.type value: single-node # value: "-Xmx1g -Xms1g -Duser.timezone=Asia/Shanghai MAX_OPEN_FILES=655350 MAX_LOCKED_MEMORY=unlimited" # - name: node.data # value: "true" # - name: node.ingest # value: "true" # - name: node.master # value: "true" # - name: http.cors.enabled # value: "true" # - name: http.cors.allow-origin # value: "*" # - name: http.cors.allow-headers # value: "X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization" # - name: bootstrap.memory_lock # value: "true" volumeMounts: - mountPath: /usr/share/elasticsearch/data name: elasticsearch-data restartPolicy: Always volumes: - name: elasticsearch-data persistentVolumeClaim: claimName: pvc-skywalking创建kabanakabana使用来管理es的,也可以使用其他的套件,如果有的话apiVersion: v1 kind: Service metadata: labels: app: kibana name: kibana namespace: skywalking spec: ports: - name: http port: 5601 protocol: TCP targetPort: 5601 selector: app: kibana type: ClusterIP --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: kibana-ui namespace: skywalking spec: ingressClassName: nginx rules: - host: local.kabana.com http: paths: - path: / pathType: Prefix backend: service: name: kibana port: number: 5601 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: kibana name: kibana namespace: skywalking spec: replicas: 1 selector: matchLabels: app: kibana template: metadata: labels: app: kibana spec: containers: - env: - name: ELASTICSEARCH_HOSTS value: http://elasticsearch:9200 image: kibana:6.8.6 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 2 successThreshold: 1 tcpSocket: port: 5601 
timeoutSeconds: 2 name: kibana ports: - containerPort: 5601 name: http protocol: TCP readinessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 2 successThreshold: 2 tcpSocket: port: 5601 timeoutSeconds: 2 resources: limits: cpu: "2" memory: 512Mi requests: cpu: 100m memory: 128Mi创建alarm configmap文件并且配置一个简单的告警模板apiVersion: v1 kind: ConfigMap metadata: name: alarm-configmap namespace: skywalking data: alarm-settings.yml: |- rules: # Rule unique name, must be ended with `_rule`. service_resp_time_rule: metrics-name: service_resp_time op: ">" threshold: 1000 period: 5 count: 3 silence-period: 5 message: Response time of service {name} is more than 1000ms in 3 minutes of last 10 minutes. service_sla_rule: # Metrics value need to be long, double or int metrics-name: service_sla op: "<" threshold: 8000 # The length of time to evaluate the metrics period: 10 # How many times after the metrics match the condition, will trigger alarm count: 2 # How many times of checks, the alarm keeps silence after alarm triggered, default as same as period. silence-period: 3 message: Successful rate of service {name} is lower than 80% in 2 minutes of last 10 minutes service_resp_time_percentile_rule: # Metrics value need to be long, double or int metrics-name: service_percentile op: ">" threshold: 1000,1000,1000,1000,1000 period: 10 count: 3 silence-period: 5 message: Percentile response time of service {name} alarm in 3 minutes of last 10 minutes, due to more than one condition of p50 > 1000, p75 > 1000, p90 > 1000, p95 > 1000, p99 > 1000 service_instance_resp_time_rule: metrics-name: service_instance_resp_time op: ">" threshold: 1000 period: 10 count: 2 silence-period: 5 message: Response time of service instance {name} is more than 1000ms in 2 minutes of last 10 minutes database_access_resp_time_rule: metrics-name: database_access_resp_time threshold: 1000 op: ">" period: 10 count: 2 message: Response time of database access {name} is more than 1000ms in 2 minutes of last 10 minutes endpoint_relation_resp_time_rule: metrics-name: endpoint_relation_resp_time threshold: 1000 op: ">" period: 10 count: 2 message: Response time of endpoint relation {name} is more than 1000ms in 2 minutes of last 10 minutes dingtalkHooks: textTemplate: |- { "msgtype": "text", "text": { "content": "Apache SkyWalking Alarm: \n %s." 
} } webhooks: - url: https://oapi.dingtalk.com/robot/send?access_token=0ca06927f1cd962ed8b47086 secret: SEC4c70c124f6148869de3285配置skywalking#ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: labels: app: skywalking name: skywalking-oap namespace: skywalking --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: skywalking namespace: skywalking labels: app: skywalking rules: - apiGroups: [""] resources: ["pods","configmaps"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: skywalking namespace: skywalking labels: app: skywalking roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: skywalking subjects: - kind: ServiceAccount name: skywalking-oap namespace: skywalking --- # es # apiVersion: v1 # kind: Service # metadata: # name: elasticsearch-master # namespace: skywalking # labels: # app: "elasticsearch-master" # spec: # type: ClusterIP # ports: # - name: elasticsearch-master # port: 9200 # protocol: TCP # --- # apiVersion: v1 # kind: Endpoints # metadata: # name: elasticsearch-master # namespace: skywalking # labels: # app: "elasticsearch-master" # subsets: # - addresses: # - ip: 192.168.0.13 # ports: # - name: elasticsearch-master # port: 9200 # protocol: TCP --- # oap apiVersion: v1 kind: Service metadata: name: skywalking-oap namespace: skywalking labels: app: skywalking-oap spec: type: ClusterIP ports: - port: 11800 name: grpc - port: 12800 name: rest selector: app: skywalking-oap chart: skywalking-4.2.0 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: skywalking-oap name: skywalking-oap namespace: skywalking spec: replicas: 1 selector: matchLabels: app: skywalking-oap template: metadata: labels: app: skywalking-oap chart: skywalking-4.2.0 spec: serviceAccountName: skywalking-oap affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: topologyKey: kubernetes.io/hostname labelSelector: matchLabels: app: "skywalking" release: "skywalking" component: "oap" initContainers: - name: wait-for-elasticsearch image: busybox:1.30 imagePullPolicy: IfNotPresent command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 elasticsearch 9200 && exit 0 || sleep 5; done; exit 1'] containers: - name: oap image: skywalking.docker.scarf.sh/apache/skywalking-oap-server:9.0.0 # docker pull apache/skywalking-oap-server:8.8.1 imagePullPolicy: IfNotPresent livenessProbe: tcpSocket: port: 12800 initialDelaySeconds: 15 periodSeconds: 20 readinessProbe: tcpSocket: port: 12800 initialDelaySeconds: 15 periodSeconds: 20 ports: - containerPort: 11800 name: grpc - containerPort: 12800 name: rest env: - name: JAVA_OPTS value: "-Dmode=no-init -Xmx2g -Xms2g" - name: SW_CLUSTER value: kubernetes - name: SW_CLUSTER_K8S_NAMESPACE value: "default" - name: SW_CLUSTER_K8S_LABEL value: "app=skywalking,release=skywalking,component=oap" # 记录数据。 - name: SW_CORE_RECORD_DATA_TTL value: "2" # Metrics数据 - name: SW_CORE_METRICS_DATA_TTL value: "2" - name: SKYWALKING_COLLECTOR_UID valueFrom: fieldRef: fieldPath: metadata.uid - name: SW_STORAGE value: elasticsearch - name: SW_STORAGE_ES_CLUSTER_NODES value: "elasticsearch:9200" volumeMounts: - name: alarm-settings mountPath: /skywalking/config/alarm-settings.yml subPath: alarm-settings.yml volumes: - configMap: name: alarm-configmap name: alarm-settings --- # ui apiVersion: v1 kind: Service metadata: labels: app: skywalking-ui name: skywalking-ui namespace: skywalking spec: type: ClusterIP ports: - port: 80 targetPort: 8080 
protocol: TCP selector: app: skywalking-ui --- apiVersion: apps/v1 kind: Deployment metadata: name: skywalking-ui namespace: skywalking labels: app: skywalking-ui spec: replicas: 1 selector: matchLabels: app: skywalking-ui template: metadata: labels: app: skywalking-ui spec: affinity: containers: - name: ui image: skywalking.docker.scarf.sh/apache/skywalking-ui:9.0.0 # docker pull apache/skywalking-ui:9.0.0 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 name: page env: - name: SW_OAP_ADDRESS value: http://skywalking-oap:12800 --- # job apiVersion: batch/v1 kind: Job metadata: name: "skywalking-es-init" namespace: skywalking labels: app: skywalking-job spec: template: metadata: name: "skywalking-es-init" labels: app: skywalking-job spec: serviceAccountName: skywalking-oap restartPolicy: Never initContainers: - name: wait-for-elasticsearch image: busybox:1.30 imagePullPolicy: IfNotPresent command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 elasticsearch 9200 && exit 0 || sleep 5; done; exit 1'] containers: - name: oap image: skywalking.docker.scarf.sh/apache/skywalking-oap-server:9.0.0 # docker pull apache/skywalking-oap-server:9.0.0 imagePullPolicy: IfNotPresent env: - name: JAVA_OPTS value: "-Xmx2g -Xms2g -Dmode=init" - name: SW_STORAGE value: elasticsearch - name: SW_STORAGE_ES_CLUSTER_NODES value: "elasticsearch:9200" # 记录数据。 # - name: SW_CORE_RECORD_DATA_TTL # value: "2" # Metrics数据 # - name: SW_CORE_METRICS_DATA_TTL # value: "2" volumeMounts: volumes: # --- # apiVersion: v1 # kind: Pod # metadata: # name: "skywalking-qyouc-test" # annotations: # "helm.sh/hook": test-success # spec: # containers: # - name: "skywalking-ggmnx-test" # image: "docker.elastic.co/elasticsearch/elasticsearch:6.8.6" # command: # - "sh" # - "-c" # - | # #!/usr/bin/env bash -e # curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s' # restartPolicy: Never --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: skywalking-ui namespace: skywalking spec: ingressClassName: nginx rules: - host: local.skywalking.com http: paths: - path: / pathType: Prefix backend: service: name: skywalking-ui port: number: 80创建skywalking service groupkubectl apply -f ns.yaml kubectl apply -f nas-to-es.yaml kubectl apply -f es.yaml kubectl apply -f kabana.yaml kubectl apply -f alarm.yaml kubectl apply -f 9.0.yamlpod引入在skywalking的donwload页面中选择Agent-> java agent。如,下载8.10.0https://www.apache.org/dyn/closer.cgi/skywalking/java-agent/8.10.0/apache-skywalking-java-agent-8.10.0.tgz将agent解压并添加到Dockerfile中,启动并指定jar包位置,如下..... COPY ./skywalking-agent /devops/skywalking-agent .... 
CMD java -javaagent:/devops/skywalking-agent/skywalking-agent.jar .....而后在pod的环境变量中配置必要的参数服务自动分组,根据${服务名称} = [${组名称}::]${逻辑名称},一旦服务名称包含双冒号(::),冒号之前的文字字符串将被视为组名。在最新的 GraphQL 查询中,组名已作为选项参数提供。value: mark::test1 mark是组,test1是应用名称 - name: SW_AGENT_NAME value: mark::test1 - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES value: skywalking-oap.skywalking:11800 而在第二个应用程序的时候就变成,mark::test2这样一来test1和test2都属于mark组,在界面中展示其他agent envirnment variable见Table of Agent Configuration Properties忽略url并不是所有的url都值得被关注,因此出于各方面考虑,配置忽略是有必要的。有两种方法可以配置忽略模式。通过系统环境设置具有更高的优先级。通过系统环境变量设置,需要添加skywalking.trace.ignore_path到系统变量中,值为需要忽略的路径,多个路径之间用,复制/agent/optional-plugins/apm-trace-ignore-plugin/apm-trace-ignore-plugin.config到/agent/config/目录,并添加规则以过滤跟踪trace.ignore_path=/your/path/1/**,/your/path/2/**实际中,在optional-plugins下将apm-trace-ignore-plugin-8.10.0.jar复制到plugins即可# cp optional-plugins/apm-trace-ignore-plugin-8.10.0.jar plugins/配置文件和环境变量任选其一,环境变量优先添加配置文件# cat config/apm-trace-ignore-plugin.config trace.ignore_path=${SW_AGENT_TRACE_IGNORE_PATH:GET:/health,/eureka/**}添加环境变量忽略GET:/health和/eureka/**的路径 - name: SW_AGENT_TRACE_IGNORE_PATH value: GET:/health,/eureka/**但是,忽略的URL不一定会马上生效,极大可能会延迟生效。参考忽略模式
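补充一个仅作示意的 Deployment 片段，把上文提到的分组（SW_AGENT_NAME）、OAP 地址（SW_AGENT_COLLECTOR_BACKEND_SERVICES）与忽略路径（SW_AGENT_TRACE_IGNORE_PATH）这几个环境变量汇总在一起。其中 registry.example.com/test1:latest 为假设的业务镜像，假设已按上文 Dockerfile 打包了 skywalking-agent 并以 -javaagent 启动：

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test1
  template:
    metadata:
      labels:
        app: test1
    spec:
      containers:
      - name: app
        # 假设的业务镜像，镜像内已包含 /devops/skywalking-agent
        image: registry.example.com/test1:latest
        env:
        # mark 为组名，test1 为逻辑服务名
        - name: SW_AGENT_NAME
          value: mark::test1
        - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES
          value: skywalking-oap.skywalking:11800
        # 忽略健康检查与 eureka 相关的路径
        - name: SW_AGENT_TRACE_IGNORE_PATH
          value: GET:/health,/eureka/**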
2022年05月15日 · 55 阅读 · 0 评论 · 0 点赞
2022-05-15
linuxea:kube-prometheus远程存储victoriametrics
我们知道,在使用promentheus的过程中,默认的数据量一旦到一个量级后,查询区间的数据会非常缓慢,甚至一个查询就可能导致promentheus的崩溃,尽管我们不需要存储多久的数据,但是集群pod在一定的数量后,短期的数据仍然非常多,对于Promentheus本身的存储引擎来讲,仍是一个不小的问题,而使用外部存储就显得很有必要。早期流行的influxDB,由于社区对Promentheus并不友好,因此早些就放弃。此前,尝试了Prometheus远程存储Promscale和TimescaleDB测试,而后在讨论中发现VictoriaMetrics是更可取的方式。而VictoriaMetrics也有自己的一套系统监控。而在官方的介绍中,VictoriaMetrics强烈diss了TimescaleDBIt provides high data compression, so up to 70x more data points may be crammed into limited storage comparing to TimescaleDB and up to 7x less storage space is required compared to Prometheus, Thanos or Cortex.VictoriaMetrics可用于 Prometheus 监控数据做长期远程存储的时序数据库之一,而在github上是这样介绍的,截取部分如下可以直接用于 Grafana 作为 Prometheus 数据源使用指标数据摄取和查询具备高性能和良好的可扩展性,性能比 InfluxDB 和 TimescaleDB 高出 20 倍内存方面也做了优化,比 InfluxDB 少 10x 倍,比 Prometheus、Thanos 或 Cortex 少 7 倍其他有能够理解的部分话术针对具有高延迟 IO 和低 IOPS 的存储进行了优化提供全局的查询视图,多个 Prometheus 实例或任何其他数据源可能会将数据摄取到 VictoriaMetricsVictoriaMetrics 由一个没有外部依赖的小型可执行文件组成所有的配置都是通过明确的命令行标志和合理的默认值完成的所有数据都存储在 - storageDataPath 命令行参数指向的目录中可以使用 vmbackup/vmrestore 工具轻松快速地从实时快照备份到 S3 或 GCS 对象存储中支持从第三方时序数据库获取数据源由于存储架构原因,它可以保护存储在非正常关机(即 OOM、硬件重置或 kill -9)时免受数据损坏同样支持指标的 relabel 操作注意VictoriaMetrics 不支持prometheus本身读取,但是为了解决报警的问题,开发人员建议配置--storage.tsdb.retention.time=24h保留24小时的数据在prometheus中,而其他的数据写入到远程VictoriaMetrics ,通过grafana展示。VictoriaMetrics wiki说不支持prometheus读取,因为它发送的数据量很大; remote_read api 可以解决警报问题。我们可以启动一个 prometheus 实例,它只有 remote_read 配置部分和规则部分。victoriaMetrics 警报非常好!由于Prometheus中的这个问题,Prometheus 远程读取 API 不是为读取由其他 Prometheus 实例写入远程存储的数据而设计的。至于 Prometheus 中的警报,则将 Prometheus 本地存储保留设置为涵盖所有已配置警报规则的持续时间。通常 24 小时就足够了:--storage.tsdb.retention.time=24h. 在这种情况下,Prometheus 将对本地存储的数据执行警报规则,同时remote_write像往常一样将所有数据复制到配置的 url。而这些在github的wiki中以及为什么 VictoriaMetrics 不支持Prometheus 远程读取 API?有过说明远程读取 API 需要在给定时间范围内传输所有请求指标的所有原始数据。例如,如果一个查询包含 1000 个指标,每个指标有 10K 个值,那么远程读取 API 必须1000*10K向 Prometheus 返回 =10M 个指标值。这是缓慢且昂贵的。Prometheus 的远程读取 API 不适用于查询外部数据——也就是global query view. 有关详细信息,请参阅此问题。因此,只需通过vmui、Prometheus Querying API 或Grafana 中的 Prometheus 数据源直接查询 VictoriaMetrics 。VictoriaMetrics在VictoriaMetrics 中介绍如下VictoriaMetrics uses their modified version of LSM tree (Logging Structure Merge Tree). All the tables and indexes on the disk are immutable once created. When it's making the snapshot, they just create the hard link to the immutable files.VictoriaMetrics stores the data in MergeTree, which is from ClickHouse and similar to LSM. The MergeTree has particular design decision compared to canonical LSM.MergeTree is column-oriented. Each column is stored separately. And the data is sorted by the "primary key", and the "primary key" doesn't have to be unique. It speeds up the look-up through the "primary key", and gets the better compression ratio. The "parts" is similar to SSTable in LSM; it can be merged into bigger parts. But it doesn't have strict levels.The Inverted Index is built on "mergeset" (A data structure built on top of MergeTree ideas). 
It's used for fast lookup by given the time-series selector.提到的技术点, LSM 树,以及MergeTreeVictoriaMetrics 将数据存储在 MergeTree 中,MergeTree 来自 ClickHouse,类似于 LSM。与规范 LSM 相比,MergeTree 具有特定的设计决策。MergeTree 是面向列的。每列单独存储。并且数据按“主键”排序,“主键”不必是唯一的。它通过“主键”加快查找速度,获得更好的压缩比。“部分”类似于 LSM 中的 SSTable;它可以合并成更大的部分。但它没有严格的等级。倒排索引建立在“mergeset”(建立在 MergeTree 思想之上的数据结构)之上。通过给定时间序列选择器,它用于快速查找。为了能够有更多的理解,可以参考LSM Tree原理详解):https://www.jianshu.com/p/b43b856e09bb应用到kube-prometheus对照如下kubernetes版本安装对应的kube-prometheus版本kube-prometheus stackKubernetes 1.19Kubernetes 1.20Kubernetes 1.21Kubernetes 1.22Kubernetes 1.23release-0.7✔✔✗✗✗release-0.8✗✔✔✗✗release-0.9✗✗✔✔✗release-0.10✗✗✗✔✔main✗✗✗✔✔Quickstart找到符合集群对应的版本进行安装,如果你是ack,需要卸载ack-arms-prometheus替换镜像k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1 v5cn/prometheus-adapter:v0.9.1k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.4.2 bitnami/kube-state-metrics:2.4.2quay.io/brancz/kube-rbac-proxy:v0.12.0 bitnami/kube-rbac-proxy:0.12.0开始部署$ cd kube-prometheus $ git checkout main kubectl.exe create -f .\manifests\setup\ kubectl.exe create -f .\manifests配置ingress-nginx> kubectl.exe -n monitoring get svc NAME TYPE CLUSTER-IP PORT(S) alertmanager-main ClusterIP 192.168.31.49 9093/TCP,8080/TCP alertmanager-operated ClusterIP None 9093/TCP,9094/TCP,9094/UDP blackbox-exporter ClusterIP 192.168.31.69 9115/TCP,19115/TCP grafana ClusterIP 192.168.130.3 3000/TCP kube-state-metrics ClusterIP None 8443/TCP,9443/TCP node-exporter ClusterIP None 9100/TCP prometheus-adapter ClusterIP 192.168.13.123 443/TCP prometheus-k8s ClusterIP 192.168.118.39 9090/TCP,8080/TCP prometheus-operated ClusterIP None 9090/TCP prometheus-operator ClusterIP None 8443/TCP ingress-nginxapiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: monitoring-ui namespace: monitoring spec: ingressClassName: nginx rules: - host: local.grafana.com http: paths: - path: / pathType: Prefix backend: service: name: grafana port: number: 3000 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: prometheus-ui namespace: monitoring spec: ingressClassName: nginx rules: - host: local.prom.com http: paths: - path: / pathType: Prefix backend: service: name: prometheus-k8s port: number: 9090配置nfs测试apiVersion: apps/v1 kind: Deployment metadata: name: nfs-client-provisioner labels: app: nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default spec: replicas: 1 strategy: type: Recreate selector: matchLabels: app: nfs-client-provisioner template: metadata: labels: app: nfs-client-provisioner spec: serviceAccountName: nfs-client-provisioner containers: - name: nfs-client-provisioner image: quay.io/external_storage/nfs-client-provisioner:latest imagePullPolicy: IfNotPresent volumeMounts: - name: nfs-client-root mountPath: /persistentvolumes env: - name: PROVISIONER_NAME value: fuseim.pri/ifs - name: NFS_SERVER value: 192.168.3.19 - name: NFS_PATH value: /data/nfs-k8s volumes: - name: nfs-client-root nfs: server: 192.168.3.19 path: /data/nfs-k8s --- apiVersion: v1 kind: ServiceAccount metadata: name: nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: nfs-client-provisioner-runner rules: - apiGroups: [""] resources: ["persistentvolumes"] verbs: ["get", "list", "watch", "create", "delete"] - apiGroups: [""] resources: ["persistentvolumeclaims"] verbs: ["get", "list", "watch", "update"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", 
"list", "watch"] - apiGroups: [""] resources: ["events"] verbs: ["create", "update", "patch"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: run-nfs-client-provisioner subjects: - kind: ServiceAccount name: nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default roleRef: kind: ClusterRole name: nfs-client-provisioner-runner apiGroup: rbac.authorization.k8s.io --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: leader-locking-nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default rules: - apiGroups: [""] resources: ["endpoints"] verbs: ["get", "list", "watch", "create", "update", "patch"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: leader-locking-nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default subjects: - kind: ServiceAccount name: nfs-client-provisioner # replace with namespace where provisioner is deployed namespace: default roleRef: kind: Role name: leader-locking-nfs-client-provisioner apiGroup: rbac.authorization.k8s.iovm配置创建一个pvc-victoriametricsapiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-storage namespace: default provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME' parameters: archiveOnDelete: "false" # Supported policies: Delete、 Retain , default is Delete reclaimPolicy: Retain --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-victoriametrics namespace: monitoring spec: accessModes: - ReadWriteMany storageClassName: nfs-storage resources: requests: storage: 10Gi准备pvc[linuxea.com ~/victoriametrics]# kubectl apply -f pvc.yaml storageclass.storage.k8s.io/nfs-storage created persistentvolumeclaim/pvc-victoriametrics created [linuxea.com ~/victoriametrics]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM ... pvc-97bea5fe-0131-4fb5-aaa9-66eee0802cb4 10Gi RWX Retain Bound monitoring/pvc-victoriametrics ... [linuxea.com ~/victoriametrics]# kubectl get pvc -A NAMESPACE NAME STATUS VOLUME CAPACITY ... 
monitoring pvc-victoriametrics Bound pvc-97bea5fe-0131-4fb5-aaa9-66eee0802cb4 10Gi 创建victoriametrics,并配置上面的pvc1w : 一周# vm-grafana.yaml apiVersion: apps/v1 kind: Deployment metadata: name: victoria-metrics namespace: monitoring spec: selector: matchLabels: app: victoria-metrics template: metadata: labels: app: victoria-metrics spec: containers: - name: vm image: victoriametrics/victoria-metrics:v1.76.1 imagePullPolicy: IfNotPresent args: - -storageDataPath=/var/lib/victoria-metrics-data - -retentionPeriod=1w ports: - containerPort: 8428 name: http resources: limits: cpu: "1" memory: 2048Mi requests: cpu: 100m memory: 512Mi readinessProbe: httpGet: path: /health port: 8428 initialDelaySeconds: 30 timeoutSeconds: 30 livenessProbe: httpGet: path: /health port: 8428 initialDelaySeconds: 120 timeoutSeconds: 30 volumeMounts: - mountPath: /var/lib/victoria-metrics-data name: victoriametrics-storage volumes: - name: victoriametrics-storage persistentVolumeClaim: claimName: nas-csi-pvc-oms-fat-victoriametrics --- apiVersion: v1 kind: Service metadata: name: victoria-metrics namespace: monitoring spec: ports: - name: http port: 8428 protocol: TCP targetPort: 8428 selector: app: victoria-metrics type: ClusterIPapply[linuxea.com ~/victoriametrics]# kubectl apply -f vmctoriametrics.yaml deployment.apps/victoria-metrics created service/victoria-metrics created [linuxea.com ~/victoriametrics]# kubectl -n monitoring get pod NAME READY STATUS RESTARTS AGE alertmanager-main-0 2/2 Running 88 268d blackbox-exporter-55c457d5fb-6rc8m 3/3 Running 114 260d grafana-756dc9b545-b2skg 1/1 Running 38 260d kube-state-metrics-76f6cb7996-j2hx4 3/3 Running 153 260d node-exporter-4hxzp 2/2 Running 120 316d node-exporter-54t9p 2/2 Running 124 316d node-exporter-8rfht 2/2 Running 120 316d node-exporter-hqzzn 2/2 Running 126 316d prometheus-adapter-59df95d9f5-7shw5 1/1 Running 78 260d prometheus-k8s-0 2/2 Running 89 268d prometheus-operator-7775c66ccf-x2wv4 2/2 Running 115 260d promoter-66f6dd475c-fdzrx 1/1 Running 3 8d victoria-metrics-56d47f6fb-qmthh 0/1 Running 0 15s [linuxea.com ~/victoriametrics]# kubectl -n monitoring get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE alertmanager-main NodePort 10.68.30.147 <none> 9093:30092/TCP 316d alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 316d blackbox-exporter ClusterIP 10.68.25.245 <none> 9115/TCP,19115/TCP 316d etcd-k8s ClusterIP None <none> 2379/TCP 316d external-node-k8s ClusterIP None <none> 9100/TCP 315d external-pve-k8s ClusterIP None <none> 9221/TCP 305d external-windows-node-k8s ClusterIP None <none> 9182/TCP 316d grafana NodePort 10.68.133.224 <none> 3000:30091/TCP 316d kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 316d node-exporter ClusterIP None <none> 9100/TCP 316d prometheus-adapter ClusterIP 10.68.138.175 <none> 443/TCP 316d prometheus-k8s NodePort 10.68.207.185 <none> 9090:30090/TCP 316d prometheus-operated ClusterIP None <none> 9090/TCP 316d prometheus-operator ClusterIP None <none> 8443/TCP 316d promoter ClusterIP 10.68.26.69 <none> 8080/TCP 11d victoria-metrics ClusterIP 10.68.225.139 <none> 8428/TCP 18s修改prometheus的远程存储配置,我们主要修改如下,其他参数可在官方文档查看首先修改远程写如到vm remoteWrite: - url: "http://victoria-metrics:8428/api/v1/write" queueConfig: capacity: 5000 remoteTimeout: 30s并且prometheus的存储时间为1天retention: 1d一天的本地存储只是为了应对告警,而远程写入到vm后通过grafana来看Prometheus-prometheus.yamlapiVersion: monitoring.coreos.com/v1 kind: Prometheus metadata: labels: app.kubernetes.io/component: prometheus app.kubernetes.io/instance: k8s app.kubernetes.io/name: 
prometheus app.kubernetes.io/part-of: kube-prometheus app.kubernetes.io/version: 2.35.0 name: k8s namespace: monitoring spec: retention: 1d alerting: alertmanagers: - apiVersion: v2 name: alertmanager-main namespace: monitoring port: web enableFeatures: [] externalLabels: {} image: quay.io/prometheus/prometheus:v2.35.0 nodeSelector: kubernetes.io/os: linux podMetadata: labels: app.kubernetes.io/component: prometheus app.kubernetes.io/instance: k8s app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: kube-prometheus app.kubernetes.io/version: 2.35.0 podMonitorNamespaceSelector: {} podMonitorSelector: {} probeNamespaceSelector: {} probeSelector: {} replicas: 1 resources: requests: memory: 400Mi remoteWrite: - url: "http://victoria-metrics:8428/api/v1/write" queueConfig: capacity: 5000 remoteTimeout: 30s ruleNamespaceSelector: {} ruleSelector: {} securityContext: fsGroup: 2000 runAsNonRoot: true runAsUser: 1000 serviceAccountName: prometheus-k8s serviceMonitorNamespaceSelector: {} serviceMonitorSelector: {} version: 2.35.0而此时的配置不出意外会被应用到URL/configremote_write: - url: http://victoria-metrics:8428/api/v1/write remote_timeout: 5m follow_redirects: true queue_config: capacity: 5000 max_shards: 200 min_shards: 1 max_samples_per_send: 500 batch_send_deadline: 5s min_backoff: 30ms max_backoff: 100ms metadata_config: send: true send_interval: 1m查看日志level=info ts=2022-04-28T15:26:12.047Z caller=main.go:944 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml ts=2022-04-28T15:26:12.053Z caller=dedupe.go:112 component=remote level=info remote_name=1a1964 url=http://victoria-metrics:8428/api/v1/write msg="Starting WAL watcher" queue=1a1964 ts=2022-04-28T15:26:12.053Z caller=dedupe.go:112 component=remote level=info remote_name=1a1964 url=http://victoria-metrics:8428/api/v1/write msg="Starting scraped metadata watcher" ts=2022-04-28T15:26:12.053Z caller=dedupe.go:112 component=remote level=info remote_name=1a1964 url=http://victoria-metrics:8428/api/v1/write msg="Replaying WAL" queue=1a1964 .... 
totalDuration=55.219178ms remote_storage=85.51µs web_handler=440ns query_engine=719ns scrape=45.6µs scrape_sd=1.210328ms notify=4.99µs notify_sd=352.209µs rules=47.503195ms回到nfs查看[root@Node-172_16_100_49 /data/nfs-k8s/monitoring-pvc-victoriametrics-pvc-97bea5fe-0131-4fb5-aaa9-66eee0802cb4]# ll total 0 drwxr-xr-x 4 root root 48 Apr 28 22:37 data -rw-r--r-- 1 root root 0 Apr 28 22:37 flock.lock drwxr-xr-x 5 root root 71 Apr 28 22:37 indexdb drwxr-xr-x 2 root root 43 Apr 28 22:37 metadata drwxr-xr-x 2 root root 6 Apr 28 22:37 snapshots drwxr-xr-x 3 root root 27 Apr 28 22:37 tmp修改grafana的配置此时看到的数据是用promenteus中获取到的,修改grefana来从vm读取数据 datasources.yaml: |- { "apiVersion": 1, "datasources": [ { "access": "proxy", "editable": false, "name": "prometheus", "orgId": 1, "type": "prometheus", "url": "http://victoria-metrics:8428", "version": 1 } ] }顺便修改时区stringData: # 修改 时区 grafana.ini: | [date_formats] default_timezone = CST如下apiVersion: v1 kind: Secret metadata: labels: app.kubernetes.io/component: grafana app.kubernetes.io/name: grafana app.kubernetes.io/part-of: kube-prometheus app.kubernetes.io/version: 8.5.0 name: grafana-datasources namespace: monitoring stringData: # 修改链接的地址 datasources.yaml: |- { "apiVersion": 1, "datasources": [ { "access": "proxy", "editable": false, "name": "prometheus", "orgId": 1, "type": "prometheus", "url": "http://victoria-metrics:8428", "version": 1 } ] } type: Opaque --- apiVersion: v1 kind: Secret metadata: labels: app.kubernetes.io/component: grafana app.kubernetes.io/name: grafana app.kubernetes.io/part-of: kube-prometheus app.kubernetes.io/version: 8.5.0 name: grafana-config namespace: monitoring stringData: # 修改 时区 grafana.ini: | [date_formats] default_timezone = CST type: Opaque # grafana: # sidecar: # datasources: # enabled: true # label: grafana_datasource # searchNamespace: ALL # defaultDatasourceEnabled: false # additionalDataSources: # - name: Loki # type: loki # url: http://loki-stack.loki-stack:3100/ # access: proxy # - name: VictoriaMetrics # type: prometheus # url: http://victoria-metrics-single-server.victoria-metrics-single:8428 # access: proxy而此时的datasources就变成了vm,远程写入到了vm,grafana读取的是vm,而Prometheus还是读的是prometheus监控vmdashboards与版本有关,https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/dashboards并且添加监控# victoriametrics-metrics apiVersion: v1 kind: Service metadata: name: victoriametrics-metrics namespace: monitoring labels: app: victoriametrics-metrics annotations: prometheus.io/port: "8428" prometheus.io/scrape: "true" spec: type: ClusterIP ports: - name: metrics port: 8428 targetPort: 8428 protocol: TCP selector: # 对应victoriametrics的service app: victoria-metrics --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: victoriametrics-metrics namespace: monitoring spec: endpoints: - interval: 15s port: metrics path: /metrics namespaceSelector: matchNames: - monitoring selector: matchLabels: app: victoriametrics-metrics参考Prometheus远程存储Promscale和TimescaleDB测试victoriametricsLSM Tree原理详解
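补充示例：如果希望像上文的 grafana、prometheus 一样，通过 ingress-nginx 直接访问 VictoriaMetrics 的查询接口（以及较新版本自带的 vmui），可以参考下面这个示意的 Ingress，其中 local.vm.com 为假设的域名：

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: victoria-metrics-ui
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  # local.vm.com 为假设的域名，仅作示意
  - host: local.vm.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # 对应上文创建的 victoria-metrics Service 及 8428 端口
            name: victoria-metrics
            port:
              number: 8428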
2022年05月15日 · 48 阅读 · 0 评论 · 0 点赞
2022-05-10
linuxea:helm3调试和基础函数if/with/range(3)
在开发一个chart的时候就需要对整个模板进行了解,从使用角度来说,是简单的。但是去开发的时候就需要一定的认识才能够完成一个模板的开发。而在其中用的较多的就是内置的一些函数,或者说是对象,通常,在helm的模板变量中,以大写开头的都是系统提供的,这由Go的函数约定。内置对象releaserelease:release是作为顶级对象的,与下面的其他不同的是她是内置函数。字面上release是发布的意思,这个对象描述了也是关于发布的一些信息,如下:Release.Name: 顾名思义,release名称Release.Namespace: release名称空间Release.IsUpgrade: release当在升级或者回滚的时候,值为trueRelease.IsInstall: 如果当前是安装则为trueRelease.Revision: release版本号,第一次安装是1,升级和回滚都会增加Release.Service: 渲染helm的服务values.yaml除此之外,还有values.yaml,这个文件的提供信息是传递到values的,而默认情况下是空的, 所以我们可以进去定制。values.yaml也是用的最多的文件Chart.yaml而chart.yaml的内容的值最终是被渲染到helm中的,这些使用helm ls的时候就可以看到,比如name和version等,这些信息可以进行定制值name: linuea Version: 1.1.1渲染后就变成了linuxea-1.1.1FilesFiles可以访问chart的非特殊文件,无法访问模板,提供以下参数Files.Get 或许文件,如: .Files.Get confuig.iniFiles.GetBytes 以bytes数组获取文件内函数Files.Glob 用于返回名称给到shell glob模式匹配的文件列表Files.Lines 逐行读取文件的函数,遍历每行内容Files.Assecrets 以Base64编码字符串返回Files.AsConfig 以yaml字典返回CapabilitiesCapabilities用于获取与kubernetes集群支持功能信息的对象Capabilities.APIVersion: 支持的版本Capabilities.APIVersion.Has.version 判断版本或者资源是否可用Capabilities.Kube.Version k8s版本,同时也是Capabilities.Kube 的缩写Capabilities.Kube.Major k8s主版本Capabilities.Kube.Minor k8s次版本TemplateTemplate包含当前正在执行的模板信息Name: 当前模板文件路径BasePath: 当前chart模板目录的路径创建chart使用create就可以进行创建,helm create --help可以看到帮助信息中有想要的参数,直接创建的将会是一个nginx的Deployment的yaml的清单,尝试创建即可helm create linuxea[root@linuxea.com /data/helm/mysql]# helm create linuxea WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config Creating linuxea目录结构如下[root@linuxea.com /data/helm/mysql]# tree linuxea/ linuxea/ ├── charts ├── Chart.yaml ├── templates │ ├── deployment.yaml │ ├── _helpers.tpl │ ├── hpa.yaml │ ├── ingress.yaml │ ├── NOTES.txt │ ├── serviceaccount.yaml │ ├── service.yaml │ └── tests │ └── test-connection.yaml └── values.yaml 3 directories, 10 filesvalues.yaml实际上我们不需要创建的这些文件,只需要一个目录结构即可,最重要的是values.yaml这个文件可以使用-f传递给helm install或helm upgradehelm install -f values.yaml linuxea再者,使用--set传递各个参数helm install --set image=123 linuxeavalues.yaml的是被用来渲染模板,而--set的值是优先于values.yaml的,因此--set可以覆盖values.yaml的值既然默认创建的chart并不是期望的,可以删除模板文件和vlaues.yaml的内容,或者手动创建目录结构即可手动创建目录mkdir -p liunuxea/{templates,charts} touch values.yaml liunuxea/ touch configmap.yaml liunuxea/templates/configmap.yamlcat > liunuxea/Chart.yaml << EOF apiVersion: v2 name: linuxea description: A Helm chart for Kubernetes type: application version: 0.1.0 appVersion: "1.16.0" EOF结构如下[root@linuxea.com /data/helm/mysql]# tree liunuxea/ liunuxea/ ├── charts ├── Chart.yaml ├── templates │ └── configmap.yaml └── values.yaml 2 directories, 3 files示例1此时渲染一个参数的值,比如,有一个configmapapiVersion: v1 kind: ConfigMap metadata: name: linuxea-cmp labels: app: linuxea-cmp data: test: 这里的test是空的,在values.yaml中定义一行mydata: hi mark, my www.linuxea.com而现在要想在configmap中使用values.yaml的mydata,就需要使用{{ .Vaules }}来使用apiVersion: v1 kind: ConfigMap metadata: name: linuxea-cmp labels: app: linuxea-cmp data: test: {{ .Values.mydata }}{{ .Values.mydata }}这种方式是go的模板方式helm使用template可以进行渲染,直接在windows就可以渲染模板以共查看PS H:\k8s-1.20.2\helm> helm.exe template test .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: linuxea-cmp labels: app: linuxea-cmp data: test: hi mark, my www.linuxea.com除此之外,当我们输入的是test的时候,我们希望test被替换城什么值的时候就可以使用Release比如,Release.NameapiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-cmp labels: app: {{ .Release.Name }}-cmp data: test: {{ .Values.mydata }}这次chart名称换 
marksugarhelm.exe template marksugar .\liunuxea\PS H:\k8s-1.20.2\helm> helm.exe template marksugar .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: marksugar-cmp labels: app: marksugar-cmp data: test: hi mark, my www.linuxea.com这里引用{{ .Release.Name }}的地方就被替换城了marksugar--set除此之外,还可以使用helm的--set来覆盖 --set mydata=linuxea.comPS H:\k8s-1.20.2\helm> helm.exe template marksugar --set mydata=linuxea.com .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: marksugar-cmp labels: app: marksugar-cmp data: test: linuxea.com--dry-runtemplate只是用来本地模板渲染,但是--dry-run是用来模拟安装,这两个在调试的时候有一些差别,--dry-run也可以使用--debughelm install marksugar --dry-run --set mydata=linuxea.com liunuxea/如下[root@linuxea.com /data/helm]# helm install marksugar --dry-run --set mydata=linuxea.com liunuxea/ WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME: marksugar LAST DEPLOYED: Sat Apr 16 02:09:37 2022 NAMESPACE: default STATUS: pending-install REVISION: 1 TEST SUITE: None HOOKS: MANIFEST: --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: marksugar-cmp labels: app: marksugar-cmp data: test: linuxea.com示例2而values.yaml有多个值的时候,如mydata: names: linuxea data: hi mark, my www.linuxea.com引用的时候就发生了变化apiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-cmp labels: app: {{ .Release.Name }}-cmp data: name: {{ .Values.mydata.names }} test: {{ .Values.mydata.data }}本地渲染PS H:\k8s-1.20.2\helm> helm.exe template marksugar .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: marksugar-cmp labels: app: marksugar-cmp data: name: linuxea test: hi mark, my www.linuxea.com--set覆盖也是一样PS H:\k8s-1.20.2\helm> helm.exe template marksugar --set mydata.data=linuxea.com .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: marksugar-cmp labels: app: marksugar-cmp data: name: linuxea test: linuxea.com基本语法函数和管道函数values.yaml有些托拉,有时候更希望有一些函数更快的方式转换来提供数据,而这些函数大多数都由golang的template提供。这里面包含了一些逻辑和运算。另外,还有一些模板在sprig的包,如日期,时间等。阅读模板函数能更快了解quote: 转换成字符串,加上双引号upper: 转换大写with: 作用域.Values中的点(.)其实就是作用域。而with可以控制变量的作用域并且重新使用,调用就是对当前作用域的引用,.values是在当前作用域下查找values对象indent: 空格indent的用户是要顶行进行配置,比如想让test: ok前有两个空格,如下data: name: {{ .Values.mydata.names | default "supper" }} test: {{ .Values.mydata.data | upper | repeat 5 }} status: ok namestatus: "true" defaultstaus: true {{ indent 2 "test: ok" }}管道管道非常常用,并没有什么特别的区别参数 | 函数 参数 | 函数 | 函数2如上:转换字符串的话{{ .Values.mydata.name | quote | upper }} {{ .Values.mydata.name | upper | quote }}repeat : repeat COUNT STRING 将字符串重复多少次如下apiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-cmp labels: app: {{ .Release.Name }}-cmp data: name: {{ .Values.mydata.names }} test: {{ .Values.mydata.data | upper | repeat 5 }}渲染PS H:\k8s-1.20.2\helm> helm.exe template marksugar .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: marksugar-cmp labels: app: marksugar-cmp data: name: linuxea test: HI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMdefault : 如果没有给定参数就使用默认参数这个在shell中也很常用,在helm也非常常用,特别是在你的values.yaml中不能使用一个静态配置的时候,default就排上用场了举个例子,如果names变量为空,就默认赋值supperapiVersion: v1 kind: 
ConfigMap metadata: name: {{ .Release.Name }}-cmp labels: app: {{ .Release.Name }}-cmp data: name: {{ .Values.mydata.names | default "supper" }} test: {{ .Values.mydata.data | upper | repeat 5 }}values.yaml中把值删掉mydata: names: data: hi mark, my www.linuxea.com渲染下PS H:\k8s-1.20.2\helm> helm.exe template marksugar .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: marksugar-cmp labels: app: marksugar-cmp data: name: supper test: HI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMprintfprintf和在go中的printf类似,用来打印字符串或者其他的数组,如下test: {{ .Vaules.test | default (printf "%s-1" (include "test" ))}}条件语句流程控制在go template中也是非常重要的一部分,和大多数判断语法一样,不但如此,还有一些声明和命名模板的操作,如下:if/else 条件with 作用域范围range 遍历的一种,类似与for eachdefine 在模板内声明新的命名模板template 导入一个命名模板blok 声明特殊的可填充模块区域if/else if 条件判断不管是在什么语言类型中,都是有一个结束标记的,而在helm中使用的是end结束{{ if TEST }} test {{ else if TEST2 }} test2 {{ elese }} default test {{ end }}如果test为真则test,如果test为test2则test2,否则defalut test判断的是管道而非values值,这里控制结构可以执行这个判断语句管道,而不仅仅是一个值,如果结果是如下几项,则为false,否则则是true:布尔 false0空字符串nil或者null空的集合,如map,slice,tuple,dict,array示例1 if以上个configmap为例,如果此时希望用一个判断语句来确定是否添加一个字段的话,就可以使用if来进行操作{{ if eq .Values.mydata.status "isok" }}status: ok{{ end }}如果此时使用的是换行的if,如{{ if eq .Values.mydata.status "isok" }} status: ok {{ end }}那么很有可能不在一行,解决这个问题的版本是加一个--是用来删除if之后和end之前的空格的,不可以加载if之前,加在前面就跳到上一行了{{ if eq .Values.mydata.status "isok" -}} status: ok {{- end }}如果Values.mydata.status等于"isok",就添加status: okapiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-cmp labels: app: {{ .Release.Name }}-cmp data: name: {{ .Values.mydata.names | default "supper" }} test: {{ .Values.mydata.data | upper | repeat 5 }} {{ if eq .Values.mydata.status "isok" }}status: ok{{ end }}values.yamlmydata: names: marksugar data: hi mark, my www.linuxea.com status: isok渲染PS H:\k8s-1.20.2\helm> helm.exe template marksugar .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: marksugar-cmp labels: app: marksugar-cmp data: name: marksugar test: HI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COM status: okelse if除此之外,还可以进行嵌套else {{ if eq .Values.mydata.status "isok" -}} status: ok {{ else if eq .Values.mydata.names "marksugar" -}} namestatus: "true" {{ else -}} defaultstaus: true {{- end }}如果等于isok,就结束,否则就继续判断第二个else if,如果第二个else if等于marksugar就结束,否则就else的默认值。如下NAME -}}{{- NAME这里的-号是删除空格的意思apiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-cmp labels: app: {{ .Release.Name }}-cmp data: name: {{ .Values.mydata.names | default "supper" }} test: {{ .Values.mydata.data | upper | repeat 5 }} {{ if eq .Values.mydata.status "isok" -}} status: ok {{ else if eq .Values.mydata.names "marksugar" -}} namestatus: "true" {{ else -}} defaultstaus: true {{- end }}执行PS H:\k8s-1.20.2\helm> helm template .\liunuxea\ --- # Source: linuxea/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: release-name-cmp labels: app: release-name-cmp data: name: marksugar test: HI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COMHI MARK, MY WWW.LINUXEA.COM status: ok由于第一个if执行完成就是true,逻辑结束示例2 with上述的.Values是对当前模板引擎下的作用域内查找.vaules对象,而with是控制变量的作用域,并且和if语句类似with中默认是不能使用内置的函数,比如:.Release.Name之类的,如果要使用,需要另外的办法使用示例1的文件重新配置。如下 {{ with .Values.mydata -}} {{ if eq .status "isok" -}} status: 
ok {{- else if eq .names "marksugar" -}} namestatus: "true" {{ else -}} defaultstaus: true {{- end }} {{- end }}一旦 {{ with .Values.mydata -}}配置之后,在whth中,就不在使用.Vaules.mydata.names来调用了,而是.namesapiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-cmp labels: app: {{ .Release.Name }}-cmp data: name: {{ .Values.mydata.names | default "supper" }} test: {{ .Values.mydata.data | upper | repeat 5 }} {{ with .Values.mydata -}} {{ if eq .status "isok" -}} status: ok {{- else if eq .names "marksugar" -}} namestatus: "true" {{ else -}} defaultstaus: true {{- end }} {{- end }} {{ indent 2 "test: ok" }}示例3 range通过在helm中使用range来遍历,比如使用range遍历一个列表或者字典首先定义一个列表mylistmylist: - mark - edwin - sean 而后使用range mylist: |- {{- range .Values.mylist }} - {{ . | title | quote }} {{- end }}这里的|-的作用,|是将以下的数据放在一个字符串中,如果不加|就不是一个字符串了。而-是换行而这里的.的作用域是在range中的,作用域与range开始,end结束|-和|+|-:|-的意思是将文末的换行符删掉|+|+的意思是将文末的换行符保留参考helm3的简单使用(1)helm3模板渲染(2)kubernetes helm概述(49)kubernetes helm简单使用(50)kubernetes 了解chart(51)kubernetes helm安装efk(52)
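补充示例：把上面的 mylist 与 range 串成一个最小的完整例子，渲染结果仅作示意，具体缩进以 helm template 的实际输出为准：

# values.yaml
mylist:
  - mark
  - edwin
  - sean

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-cmp
data:
  mylist: |-
    {{- range .Values.mylist }}
    - {{ . | title | quote }}
    {{- end }}

# helm template marksugar ./liunuxea/ 渲染结果大致为
apiVersion: v1
kind: ConfigMap
metadata:
  name: marksugar-cmp
data:
  mylist: |-
    - "Mark"
    - "Edwin"
    - "Sean"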
2022年05月10日 · 74 阅读 · 0 评论 · 0 点赞
2022-05-08
linuxea:kubernetes检测pod部署状态简单实现
通常,无状态的应用多数情况下以deployment控制器下运行,在deployment更新中,当配置清单发生变化后,应用这些新的配置。我们假设一些都ok,也成功拉取镜像,并且以默认的25%进行滚动更新,直到更新完成。kubectl apply -f test1.yaml然而这一切按照预期进行,没有任何问题。kubectl只是将配置推送到k8s后,只要配置清单没有语法或者冲突问题,返回的是0,状态就是成功的而整个过程有很多不确定性,比如,不存在的镜像,没有足够的资源调度,配置错误导致的一系列问题,而捕捉这种问题也是比较关键的事情之一。这并不单纯的简单观测问题,pod并不是拉取镜像就被running起,一旦runing就意味着接收到流量,而程序准备需要时间,如果此时程序没有准备好,流量就接入,势必会出现错误。为了解决这个问题,就需要配置就绪检测或者Startup检测pod在被真正的处于ready起来之前,通常会做就绪检测,或者启动检测。在之前的几篇中,我记录了就绪检测和健康检测的重要性,而在整个就绪检测中,是有一个初始化时间的问题。如果此时,配置清单发送变化,调度器开始执行清单任务。假设此时的初始化准备时间是100秒,有30个pod,每次至少保证有75%是正常运行的,默认按照25%滚动更新Updating a Deployment。此时的准备时间(秒)至少是30 / 25% * (100)readiness probe time 如果pod越多,意味着等待所有pod就绪完成的总时间就越长,如果放在cd管道中去运行,势必会让反馈时间越久。当一个重量级的集群中,每一条全局遍历都非常消耗资源,因此操作非常昂贵。整个集群有可能因此产生大的延迟,在集群外部调用API间隔去获取远比实时获取消耗资源要少。如果pod并不多,这个问题不值得去考量。使用rollout足以解决。获取清单被推送到节点的方式,有如下:事件监控pod在更新的时候,如果有问题会触发事件watch状态以及其他第三方的编码来达到这个需求,比如由额外的程序来间隔时间差去检测状态,而不是一直watch通常,使用kubectl rollout或者helm的--wait,亦或者argocd的平面控制来观测rollout在kubernetes的文档中,rollout的页面中提到的检查状态rollout能够管理资源类型如:部署,守护进程,状态阅读rolling-back-a-deployment中的status watch,得到以下配置kubectl -n NAMESPACE rollout status deployment NAME --watch --timeout=Xmrollout下的其他参数history列出deployment/testv1的历史kubectl -n default rollout history deployment/testv1查看历史记录的版本1信息kubectl -n default rollout history deployment/testv1 --revision=1pause停止,一旦停止,更新将不会生效kubectl rollout pause deployment/testv1需要恢复,或者重启resume恢复,恢复后延续此前暂停的部署kubectl rollout resume deployment/testv1status此时可以配置status查看更新过程的状态kubectl rollout status deployment/testv1status提供了一下参数,比如常用的超时比如,--timeout=10m,最长等待10m,超过10分钟就超时kubectl rollout status deployment/testv1 --watch --timeout=10m 其他参数如下NameShorthandDefaultUsagefilenamef[]Filename, directory, or URL to files identifying the resource to get from a server.kustomizek Process the kustomization directory. This flag can't be used together with -f or -R.recursiveRfalseProcess the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.revision 0Pin to a specific revision for showing its status. Defaults to 0 (last revision).timeout 0sThe length of time to wait before ending watch, zero means never. Any other values should contain a corresponding time unit (e.g. 1s, 2m, 3h).watchwtrueWatch the status of the rollout until it's done.undo回滚回滚到上一个版本kubectl rollout undo deployment/testv1 回滚到指定的版本1,查看已有版本# kubectl -n default rollout history deployment/testv1 deployment.apps/testv1 REVISION CHANGE-CAUSE 1 <none> 2 <none> 3 <none>查看版本信息# kubectl rollout history deployment/testv1 deployment.apps/testv1 REVISION CHANGE-CAUSE 2 <none> 7 <none> 8 <none>2,回滚到2# kubectl rollout undo deployment/testv1 --to-revision=2 deployment.apps/testv1 rolled backhelm--waitif set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. 
It will wait for as long as --timeout只有在控制器的pod出于就绪状态才会结束,默认时间似乎是600秒·看起来像是这样helm upgrade --install --namespace NAMESPACE --create-namespace --wait APP FILEAPI上面两种方式能够完成大部分场景,但是watch是非常占用资源,如果希望通过一个脚本自己的逻辑去处理,可以使用clent-go的包手动for循环查看状态clent-go手动去for循环package main import ( "context" "flag" "fmt" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" typev1 "k8s.io/client-go/kubernetes/typed/apps/v1" "k8s.io/client-go/rest" "k8s.io/client-go/util/retry" "os" "time" ) type args struct { namespace string image string deployment string } const ( numberOfPoll = 200 pollInterval = 3 ) func parseArgs() *args { namespace := flag.String("n", "", "namespace") deployment := flag.String("deploy", "", "deployment name") image := flag.String("image", "", "image for update") flag.Parse() var _args args if *namespace == "" { fmt.Fprintln(os.Stderr, "namespace must be specified") os.Exit(1) } _args.namespace = *namespace if *deployment == "" { fmt.Fprintln(os.Stderr, "deployment must be specified") os.Exit(1) } _args.deployment = *deployment if *image == "" { fmt.Fprintln(os.Stderr, "image must be specified") os.Exit(1) } _args.image = *image return &_args } func main() { _args := parseArgs() // creates the in-cluster config config, err := rest.InClusterConfig() if err != nil { panic(err.Error()) } // creates the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } deploymentsClient := clientset.AppsV1().Deployments(_args.namespace) ctx := context.Background() retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error { // Retrieve the latest version of Deployment before attempting update // RetryOnConflict uses exponential backoff to avoid exhausting the apiserver result, getErr := deploymentsClient.Get(ctx, _args.deployment, metav1.GetOptions{}) if getErr != nil { fmt.Fprintf(os.Stderr, "Failed to get latest version of Deployment %s: %v", _args.deployment, getErr) os.Exit(1) } result.Spec.Template.Spec.Containers[0].Image = _args.image _, updateErr := deploymentsClient.Update(ctx, result, metav1.UpdateOptions{}) return updateErr }) if retryErr != nil { fmt.Fprintf(os.Stderr, "Failed to update image version of %s/%s to %s: %v", _args.namespace, _args.deployment, _args.image, retryErr) os.Exit(1) } _args.pollDeploy(deploymentsClient) fmt.Println("Updated deployment") } // watch 太浪费资源了,而且时间太长,还是轮询吧 func (p *args) pollDeploy(deploymentsClient typev1.DeploymentInterface) { ctx := context.Background() for i := 0; i <= numberOfPoll; i++ { time.Sleep(pollInterval * time.Second) result, getErr := deploymentsClient.Get(ctx, p.deployment, metav1.GetOptions{}) if getErr != nil { fmt.Fprintf(os.Stderr, "Failed to get latest version of Deployment %s: %v", p.deployment, getErr) os.Exit(1) } resourceStatus := result.Status fmt.Printf("%s -> replicas: %d, ReadyReplicas: %d, AvailableReplicas: %d, UpdatedReplicas: %d, UnavailableReplicas: %d\n", result.Name, resourceStatus.Replicas, resourceStatus.ReadyReplicas, resourceStatus.AvailableReplicas, resourceStatus.UpdatedReplicas, resourceStatus.UnavailableReplicas) if resourceStatus.Replicas == resourceStatus.ReadyReplicas && resourceStatus.ReadyReplicas == resourceStatus.AvailableReplicas && resourceStatus.AvailableReplicas == resourceStatus.UpdatedReplicas { return } } fmt.Fprintf(os.Stderr, "应用在 %d 秒内没有启动成功,视作启动失败,请查看日志。\n", numberOfPoll*pollInterval) os.Exit(1) }其他参考kubernetes-deployment-status-in-jenkinsKubernetes探针补充Kubernetes Liveness 和 Readiness 探测避免给自己挖坑续集重新审视kubernetes活跃探针和就绪探针 
如何避免给自己挖坑2Kubernetes Startup Probes避免给自己挖坑3
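补充示例：结合上文 30 个副本、25% 滚动更新、约 100 秒就绪时间的场景，一个仅作示意的 Deployment 片段如下（镜像与探针路径均为假设值）。部署后即可用 kubectl -n default rollout status deployment testv1 --watch --timeout=10m 等待其完成：

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testv1
  namespace: default
spec:
  replicas: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # 默认即 25%：每批最多 25% 副本不可用 / 超出期望副本数 25%
      maxUnavailable: 25%
      maxSurge: 25%
  selector:
    matchLabels:
      app: testv1
  template:
    metadata:
      labels:
        app: testv1
    spec:
      containers:
      - name: app
        # 假设的业务镜像
        image: registry.example.com/testv1:latest
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            # 假设的健康检查路径
            path: /health
            port: 8080
          # 对应上文约 100 秒的就绪准备时间
          initialDelaySeconds: 100
          periodSeconds: 5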
2022年05月08日 · 85 阅读 · 0 评论 · 0 点赞
2022-05-01
linuxea:helm3模板渲染(2)
helm说白了其实就是一个模板渲染系统,核心就在templates和values,模板使用了go template编写的,并且增加了sprig库,共计50个左右的附加模板函数和其他的一些函数。如果要进行使用,必然要遵循template的模板的约定),比如{{ if pipeline }} T1 {{ else }} {{ if pipeline }} T0 {{end}} { end} {{ range pipeline }} T1 {{ end }}if或者range开头的都需要end结尾,并且可以指定引用a模板{{ templete "a" }}这些,可以在go文档库中找到,但是这些还不够,sprig还能解决一些大小写等的问题。除此之外,helm的docs中也有自己的一些函数。而这些模板存储在templates目录下, 当helm渲染charts,就会通过模板引擎传递目录中文件,而values可以通过两种方式提供:Chart 开发人员可以在 chart 内部提供一个名为 values.yaml 的文件,该文件可以包含默认的 values 值内容。Chart 用户可以提供包含 values 值的 YAML 文件,可以在命令行中通过 helm install 来指定该文件。当用户提供自定义 values 值的时候,这些值将覆盖 chart 中 values.yaml 文件中的相应的值。简单示例mysql的包,目录结构如下[root@linuxea.com /data/helm/mysql]# tree ./ ./ ├── Chart.yaml ├── README.md ├── templates │ ├── configurationFiles-configmap.yaml │ ├── deployment.yaml │ ├── _helpers.tpl │ ├── initializationFiles-configmap.yaml │ ├── NOTES.txt │ ├── pvc.yaml │ ├── secrets.yaml │ ├── serviceaccount.yaml │ ├── servicemonitor.yaml │ ├── svc.yaml │ └── tests │ ├── test-configmap.yaml │ └── test.yaml └── values.yaml 2 directories, 15 files打开svc.conf你会看到如下的格式,这种就是template的样式,这些值是通过value来进行替换成value中的值[root@linuxea.com /data/helm/mysql]# cat templates/svc.yaml apiVersion: v1 kind: Service metadata: name: {{ template "mysql.fullname" . }} namespace: {{ .Release.Namespace }} labels: app: {{ template "mysql.fullname" . }} chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" release: "{{ .Release.Name }}" heritage: "{{ .Release.Service }}" annotations: {{- if .Values.service.annotations }} {{ toYaml .Values.service.annotations | indent 4 }} {{- end }} {{- if and (.Values.metrics.enabled) (.Values.metrics.annotations) }} {{ toYaml .Values.metrics.annotations | indent 4 }} {{- end }} spec: type: {{ .Values.service.type }} {{- if (and (eq .Values.service.type "LoadBalancer") (not (empty .Values.service.loadBalancerIP))) }} loadBalancerIP: {{ .Values.service.loadBalancerIP }} {{- end }} ports: - name: mysql port: {{ .Values.service.port }} targetPort: mysql {{- if .Values.service.nodePort }} nodePort: {{ .Values.service.nodePort }} {{- end }} {{- if .Values.mysqlx.port.enabled }} - name: mysqlx port: 33060 targetPort: mysqlx protocol: TCP {{- end }} {{- if .Values.metrics.enabled }} - name: metrics port: 9104 targetPort: metrics {{- end }} selector: app: {{ template "mysql.fullname" . 
}} 以上面port: {{ .Values.service.port }}为例 ,values.yaml的值如下,意思就是替换城3306端口service: annotations: {} type: ClusterIP port: 3306而这种方式也可以通过--set 来进行替换预定义 Values在模板中用 .Values 可以获取到 values.yaml 文件(或者 --set 参数)提供的 values 值,此外,还可以在模板中访问其他预定义的数据。预定义可用于每个模板、并且不能被覆盖的 values 值,与所有 values 值一样,名称都是区分大小写的,而预定义的都是大写开头的,如下的values都可以在模板中获取到Release.Name:release 的名称(不是 chart),通过helm ls查看到的Release.Namespace:release 被安装到的命名空间Release.Service:渲染当前模板的服务,在 Helm 上,实际上该值始终为 HelmRelease.IsUpgrade:如果当前操作是升级或回滚,则该值为 trueRelease.IsInstall:如果当前操作是安装,则该值为 trueChart:Chart.yaml 文件的内容,可以通过 Chart.Version 来获得 Chart 的版本,通过 Chart.Maintainers 可以获取维护者信息Files: 一个包含 chart 中所有非特殊文件的 map 对象,这不会给你访问模板的权限,但是会给你访问存在的其他文件的权限(除非使用 .helmignore 排除它们),可以使用 {{ index .Files "file.name" }} 或者 {{ .Files.Get name }} 或者 {{ .Files.GetString name }} 函数来访问文件,你还可以使用 {{ .Files.GetBytes }} 以 []byte 的形式获取访问文件的内容Capabilities:也是一个类 map 的对象,其中包含有关 Kubernetes 版本({{ .Capabilities.KubeVersion }})和支持的 Kubernetes API 版本({{ .Capabilities.APIVersions.Has "batch/v1" }})信息。常被用来判断版本信息等注意任何未知的 Chart.yaml 字段都会被删除,在 Chart 对象内部无法访问他们,所以,Chart.yaml 不能用于将任意结构化的数据传递到模板中,但是可以使用 values 文件来传递。模板渲染当我们有一些模板是想渲染而不是执行到kubernetes的时候,就可以使用templatehelm template mysql stable/mysqlhelm install只是将上面的渲染直接安装了而已除次之外,也可以使用--dry-run --debug来查看整个渲染和执行的过程helm install --dry-run --debug mysql123 stable/mysql当然,他们都不会真的运行CRDhelm3中, 当定义后,CRD被使用之前都会先安装CRD目录下所有的CRD,而这些CRD不能使用模板。而一旦CRD被使用,就会等到CRD安装成功,否则是不会渲染模板并安装的因为CRD是全局安装的,所以在卸载的时候需要手动去卸载,并且CRD只有在安装操作的时候才会被创建,如果helm中的CRDS已经存在,且无论是什么版本,helm都不会重新安装或者升级。而一旦删除CRD,将会自动删除集群中所有namespace的CRD
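补充示例：以文中提到的 Capabilities 为例，一个常见用法是根据集群支持的 API 版本渲染不同的资源，下面是一个示意的模板片段（仅展示 apiVersion 的选择部分）：

# templates/ingress.yaml（片段）
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else }}
apiVersion: networking.k8s.io/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
  labels:
    # Chart 对象可以直接取到 Chart.yaml 中的字段
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"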
2022年05月01日
66 阅读
0 评论
0 点赞
2022-04-28
linuxea:ingress-nginx basic auth认证的优雅实现
ingress-nginx 的 basic auth 认证是最简单和基础的认证方式。按常规做法,需要安装 httpd 软件包来提供 htpasswd,先用 htpasswd -c auth NAME 的方式生成一个密码文件,而后使用 create 创建 secret,如下
kubectl create secret generic bauth --from-file=NAME
这通常就能满足使用,但需要额外安装一个软件包,因此或许需要另外一种方式来解决:通过 htaccesstools.com 或者 https://wtools.io/generate-htpasswd-online 在线生成加密后的密钥信息。如下,我通过 https://wtools.io/generate-htpasswd-online 生成
用户名: linuxea
密码: OpSOQKs,qDJ1dSvzs
生成结果如下
linuxea:$apr1$btmgi74s$JEKIq8dTE3OI8o5a1qQvq0
手动用 base64 加密
[root@linuxea.com ~]# echo 'linuxea:$apr1$btmgi74s$JEKIq8dTE3OI8o5a1qQvq0' |base64
bGludXhlYTokYXByMSRidG1naTc0cyRKRUtJcThkVEUzT0k4bzVhMXFRdnEwCg==
而后直接复制这串字符串添加到配置清单中
apiVersion: v1
data:
  auth: bGludXhlYTokYXByMSRidG1naTc0cyRKRUtJcThkVEUzT0k4bzVhMXFRdnEwCg==
kind: Secret
metadata:
  name: basic-auth
  namespace: monitoring
type: Opaque
再应用到 ingress-nginx 的配置中即可
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monitoring-ui
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - input: Trump "
spec:
  ingressClassName: nginx
  rules:
  - host: local.prom.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              number: 9090
配置完成后打开这个域名即可看到认证提示。
参考
ingress-nginx的rewrite与canary
ingress-nginx应用常见的两种方式
k8s下kube-prometheus监控ingress-nginx
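补充一个不依赖在线工具的做法(仅为示意,用户名和密码沿用上文示例):htpasswd 的 apr1 哈希也可以用 openssl 生成,注意 secret 中的 key 需要叫 auth,这是 ingress-nginx 默认读取的键名。
# 生成 htpasswd 格式的条目并写入名为 auth 的文件
printf 'linuxea:%s\n' "$(openssl passwd -apr1 'linuxea.com')" > auth
# 从文件创建 secret,key 即文件名 auth
kubectl -n monitoring create secret generic basic-auth --from-file=auth
# 或者像正文那样手动 base64 后贴进清单
base64 -w0 auth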
2022年04月28日
63 阅读
0 评论
0 点赞
2022-04-22
linuxea:helm3的简单使用(1)
无论是debian还是redhat,亦或者其他linux发行版,都有一个包管理用来解决依赖问题,而在kubernetes中,helm是用来管理kubernetes应用程序,其中charts是可以定义一个可以进行安装升级的应用程序,同时也容易创建起来,并且进行版本管理。而在越复杂的应用程序来讲,helm可以作为一个开箱即用的,单单从使用角度来看,类似于yum或者apt的,使用起来,会更加流行。比如:我们创建一个应用程序,控制器使用deployment,同时需要一个service和关联其他的对象,并且还需配置一个ingress配置域名等作为入口,还可能需要部署一个有状态的,类似mysql的后端数据存储等等。这些如果需要一个个去安装维护将会麻烦很多,特别对于一个使用者来讲,更多时候,无需关注里面发生了什么,而更多的时候只想拿来即用的,helm就是用来打包这些程序。一个流行kubernetes生态的组件库中,你会发现必然会提供一个helm的方式。这正是因为helm的特色,得益于这种的易用性使得helm愈发的普及。作为一个charts提供的参数,对其中的内容进行渲染,从而生成yaml文件,安装到kubernetes中。helm就是解决这些事情的除此之外,我们还有一个kustomize也可以进行配置清单管理,kustomize解决的是另外一个问题,有机会在写这个kustomize。而helm2和helm3是有些不同的。安装helm是读取kubeconfig文件来访问集群的,因此,你至少能够使用kubectl访问集群才能使用helm在使用版本上v3版本比v2更好用一些,简化了集群内的一个服务换城了kubernetes CRD, 在v2中需要大量的权限控制,这样也会带来一个安全问题,而在v3中变成了一个客户端, 因此,我们使用v3稳定版本即可如果需要了解更多的概念,可以参考helm2的时候的一些文章对于helm2,可以查看如下kubernetes helm概述(49)kubernetes helm简单使用(50)kubernetes 了解chart(51)kubernetes helm安装efk(52)在helm的github下载对应系统的版本,比如:3.8.1的amd版本wget https://get.helm.sh/helm-v3.8.1-linux-amd64.tar.gz tar xf helm-v3.8.1-linux-amd64.tar.gz cp helm /usr/local/sbin/查看版本信息这里温馨的提示说我们的配置文件的权限太高# helm version WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config version.BuildInfo{Version:"v3.8.1", GitCommit:"5cb9af4b1b271d11d7a97a71df3ac337dd94ad37", GitTreeState:"clean", GoVersion:"go1.17.5"}常用的命令- helm search: 搜索以恶个 charts - helm pull: 下载 chart - helm install: 安装到 Kubernetes - helm list: 查看 chartshelm installhelm install可以通过多个源进行安卓,大致如下chart仓库本地chart压缩包本地解开的压缩包的目录中的路径在线url总之要能够访问的到,首先通过在线安装1.添加chart仓库源我们需要安装一个chart源来使用,这类似于yum的源一样,我们使用azure的仓库helm repo add stable http://mirror.azure.cn/kubernetes/charts/ helm repo list[root@linuxea.com ~]# helm repo add stable "stable" has been added to your repositories [root@linuxea.com ~]# helm repo list NAME URL stable http://mirror.azure.cn/kubernetes/charts/我们可以使用 helm search repo stable查看当前的包于此同时,使用helm repo update更新到最新的状态[root@linuxea.com ~]# helm repo update WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "stable" chart repository Update Complete. ⎈Happy Helming!⎈ [root@linuxea.com ~]# helm search repo stable WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME CHART VERSION APP VERSION DESCRIPTION stable/acs-engine-autoscaler 2.2.2 2.1.1 DEPRECATED Scales worker nodes within agent pools stable/aerospike 0.3.5 v4.5.0.5 DEPRECATED A Helm chart for Aerospike in Kubern... stable/airflow 7.13.3 1.10.12 DEPRECATED - please use: https://github.com/air... stable/ambassador 5.3.2 0.86.1 DEPRECATED A Helm chart for Datawire Ambassador stable/anchore-engine 1.7.0 0.7.3 Anchore container analysis and policy evaluatio... stable/apm-server 2.1.7 7.0.0 DEPRECATED The server receives data from the El... stable/ark 4.2.2 0.10.2 DEPRECATED A Helm chart for ark stable/artifactory 7.3.2 6.1.0 DEPRECATED Universal Repository Manager support... stable/artifactory-ha 0.4.2 6.2.0 DEPRECATED Universal Repository Manager support... stable/atlantis 3.12.4 v0.14.0 DEPRECATED A Helm chart for Atlantis https://ww... 
......2.安装 chart安装一个mysql,在安装之前我们可以show一下 helm show chart stable/mysql查看它的版本号等信息更详细的信息可以通过helm show all stable/mysql ,all来查看[root@linuxea.com ~]# helm show chart stable/mysql WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config apiVersion: v1 appVersion: 5.7.30 deprecated: true description: DEPRECATED - Fast, reliable, scalable, and easy to use open-source relational database system. home: https://www.mysql.com/ icon: https://www.mysql.com/common/logos/logo-mysql-170x115.png keywords: - mysql - database - sql name: mysql sources: - https://github.com/kubernetes/charts - https://github.com/docker-library/mysql version: 1.6.9安装--generate-name生成一个名称# helm install stable/mysql --generate-name WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config WARNING: This chart is deprecated NAME: mysql-1649580933 LAST DEPLOYED: Sun Apr 10 04:55:35 2022 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: MySQL can be accessed via port 3306 on the following DNS name from within your cluster: mysql-1649580933.default.svc.cluster.local To get your root password run: MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1649580933 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo) To connect to your database: 1. Run an Ubuntu pod that you can use as a client: kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il 2. Install the mysql client: $ apt-get update && apt-get install mysql-client -y 3. Connect using the mysql cli, then provide your password: $ mysql -h mysql-1649580933 -p To connect to your database directly from outside the K8s cluster: MYSQL_HOST=127.0.0.1 MYSQL_PORT=3306 # Execute the following command to route the connection: kubectl port-forward svc/mysql-1649580933 3306 mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}而后我们可以观察到pod的状态[root@linuxea.com ~]# kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ... mysql-1649580933-8466b76578-gphkp 0/1 Pending 0 106s <none> <none> <none> <none> ...和svc[root@linuxea.com ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ... mysql-1649580933 ClusterIP 10.68.106.229 <none> 3306/TCP 2m23s ...以及一个pvc[root@linuxea.com ~]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql-1649580933 Pending 4m33s一旦安装完成可以通过ls查看她的版本[root@linuxea.com ~]# helm ls WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mysql-1649580933 default 1 2022-04-10 04:55:35.427415297 -0400 EDT deployed mysql-1.6.9 5.7.30 当我们能看到这个name的时候,就可以使用uninstall删除uninstalll 会删除这个包下的所有相关的这个包的资源。同时,可以使用--keep-history参数保留release的记录而使用了--keep-history的时候就可以使用helm ls -a查看被卸载掉的记录[root@linuxea.com ~]# helm uninstall mysql-1649580933 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. 
Location: /root/.kube/config release "mysql-1649580933" uninstalled参数配置如果直接install是默认的配置,但是更多时候,我们需要调整一下配置的参数,比如类似于端口等其他的选项参数,当然,这些参数必须是可以配置的的,一旦配置后,就会覆盖掉默认的值,通过helm show values来查看这些参数[root@linuxea.com ~]# helm show values stable/mysql比如配置密码,初始化,参数,调度,容忍,是否持久化等等。既然,我们要修改这些参数,那就需要一个覆盖的文件来进行操作,于是,创建一个文件,比如mvalule.yaml,在文件中配置想要修改的值, 如下指定用户和密码,并创建一个linuea的库,并且不进行数据持久化mysqlUser: linuxea mysqlPassword: linuxea.com mysqlDatabase: linuxea persistence: enabled: false而后只需要指定这个配置文件即可当你不使用 --generate-name的时候,只需要指定名称即可helm install mysql -f mvalule.yaml stable/mysql[root@linuxea.com /data/helm]# helm install -f mvalule.yaml stable/mysql --generate-name WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config WARNING: This chart is deprecated NAME: mysql-1649582722 LAST DEPLOYED: Sun Apr 10 05:25:23 2022 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: MySQL can be accessed via port 3306 on the following DNS name from within your cluster: mysql-1649582722.default.svc.cluster.local To get your root password run: MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1649582722 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo) To connect to your database: 1. Run an Ubuntu pod that you can use as a client: kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il .....此时,可以通过kubectl describe 来查看传入的变量[root@linuxea.com ~]# kubectl describe pod mysql-1649582722-dbcdcb895-tjvsr Name: mysql-1649582722-dbcdcb895-tjvsr Namespace: default .... Environment: MYSQL_ROOT_PASSWORD: <set to the key 'mysql-root-password' in secret 'mysql-1649582722'> Optional: false MYSQL_PASSWORD: <set to the key 'mysql-password' in secret 'mysql-1649582722'> Optional: false MYSQL_USER: linuxea MYSQL_DATABASE: linuxea ...pod启动完成,我们通过上面的提示进入到mysql[root@linuxea.com /data/helm]# MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1649582722 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo) [root@linuxea.com /data/helm]# echo $MYSQL_ROOT_PASSWORD 8FFSmw66je [root@linuxea.com /data/helm]# kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il If you don't see a command prompt, try pressing enter. root@ubuntu:/# root@ubuntu:/# apt-get update && apt-get install mysql-client -y ... Setting up mysql-client-5.7 (5.7.33-0ubuntu0.16.04.1) ... Setting up mysql-client (5.7.33-0ubuntu0.16.04.1) ... Processing triggers for libc-bin (2.23-0ubuntu11.3) ... ... root@ubuntu:/# mysql -h mysql-1649582722 -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 12 Server version: 5.7.30 MySQL Community Server (GPL) Copyright (c) 2000, 2021, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | linuxea | | mysql | | performance_schema | | sys | +--------------------+ 5 rows in set (0.00 sec) mysql> 环境变量基本上使用两种方式来传递配置信息:value那么除了使用value 或者-f指定yaml文件来覆盖values的值外,还可以指定多个值set直接在命令行指定需要覆盖的配置,但是对于深度嵌套的不建议使用--set通常--set优先于-f,-f将值持久化在configmap中如果我们使用--value配置文件已经配置了enabled: false,同时有配置了--set persistence.enabled: true, 而此时的enabled是等于true的,--set优先于--value对于value可以通过get value查看[root@linuxea.com /data/helm]# helm get values mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config USER-SUPPLIED VALUES: mysqlDatabase: linuxea mysqlPassword: linuxea.com mysqlUser: linuxea persistence: enabled: false对于一个已经运行的chart而言,使用helm upgrade来更新,或者使用--reset来删除--set--set可以接受0个或者多个键值对,最直接的常用的修改镜像,就是--set image:11,而对于多个,使用逗号隔开即可--set name:linuxea,image:11,如果在yaml文件中就换行,如下name: linuxea image: 11如果参数配置中所示,假如我们要修改的是mysql的参数[root@linuxea.com /data/helm]# cat mvalule.yaml mysqlUser: linuxea mysqlPassword: linuxea.com mysqlDatabase: linuxea persistence: enabled: false两种方式sethelm install mysql -f mvalule.yaml stable/mysql --set mysqlUser:linuxea,mysqlPassword:linuxea.com,mysqlDatabase:linuxea,persistence.enabled:false对于有换行的空格,使用.来拼接, persistence.enabled:false对应如下persistence: enabled: false其他1,如果有更多的参数,比如:--set args={run,/bin/start,--devel}args: - run - /bin/start - --devel2,除此之外,我们可以借用索引的方式,如下metadata: name: etcd-k8s namespace: monitoring这样的话,就变成了metadata[0].name=etcd-k8s,metadata[0].namespace=monitoring3,对于特殊字符可以使用反斜杠和双引号来做name: "a,b"这样的set明天就变成了--set name=a\,b4,其他包含反斜杠的nodeSelector: kubernetes.io/role: master这时候的--set就需要转义:--set nodeSelector."kubernetes\.io/role"=master本地安装通过fetch可以将chart放到本地[root@linuxea.com /data/helm]# helm fetch stable/mysql WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config [root@linuxea.com /data/helm]# ls mysql-1.6.9.tgz -ll -rw-r--r-- 1 root root 11589 Apr 10 06:14 mysql-1.6.9.tgz而后就可以直接使用helm安装[root@linuxea.com /data/helm]# helm install mysql mysql-1.6.9.tgz 或者解压[root@linuxea.com /data/helm]# tar xf mysql-1.6.9.tgz [root@linuxea.com /data/helm]# ls mysql Chart.yaml README.md templates values.yaml [root@linuxea.com /data/helm]# tree mysql mysql ├── Chart.yaml ├── README.md ├── templates │ ├── configurationFiles-configmap.yaml │ ├── deployment.yaml │ ├── _helpers.tpl │ ├── initializationFiles-configmap.yaml │ ├── NOTES.txt │ ├── pvc.yaml │ ├── secrets.yaml │ ├── serviceaccount.yaml │ ├── servicemonitor.yaml │ ├── svc.yaml │ └── tests │ ├── test-configmap.yaml │ └── test.yaml └── values.yaml 2 directories, 15 files安装[root@linuxea.com /data/helm]# helm install mysql ./mysql升级与回滚helm的upgrade命令会更新你提供的信息,并且只会更新上一个版本,这种较小的更新更快捷每,进行一次upgrade都会生成新的配置版本,比如secret,默认似乎有15个版本,这将会是一个问题。添加一个mysqlRootPassword: www.linuxea.com 进行upgrademysqlUser: linuxea mysqlPassword: linuxea.com mysqlDatabase: linuxea mysqlRootPassword: www.linuxea.com persistence: enabled: falseupgradehelm upgrade mysql-1649582722 stable/mysql -f mvalule.yaml如下[root@linuxea.com /data/helm]# helm upgrade mysql-1649582722 stable/mysql -f mvalule.yaml WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. 
This is insecure. Location: /root/.kube/config WARNING: This chart is deprecated Release "mysql-1649582722" has been upgraded. Happy Helming! NAME: mysql-1649582722 LAST DEPLOYED: Sun Apr 10 06:29:00 2022 NAMESPACE: default STATUS: deployed REVISION: 2 NOTES: MySQL can be accessed via port 3306 on the following DNS name from within your cluster: mysql-1649582722.default.svc.cluster.local ...更新完成后REVISION已经变成2了通过helm ls查看[root@linuxea.com /data/helm]# helm ls WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mysql-1649582722 default 2 2022-04-10 06:29:00.717252842 -0400 EDT deployed mysql-1.6.9 5.7.30 而后可以通过helm get values mysql-1649582722 查看[root@linuxea.com /data/helm]# helm get values mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config USER-SUPPLIED VALUES: mysqlDatabase: linuxea mysqlPassword: linuxea.com mysqlRootPassword: www.linuxea.com mysqlUser: linuxea persistence: enabled: false此时的mysql的新密码已经更新到secret,但是并没有在mysql生效的 ,我们就进行回滚下[root@linuxea.com /data/helm]# kubectl get secret --namespace default mysql-1649582722 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo www.linuxea.comrollbackls查看helm的名称[root@linuxea.com /data/helm]# helm ls WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mysql-1649582722 default 2 2022-04-10 06:29:00.717252842 -0400 EDT deployed mysql-1.6.9 5.7.30 查看mysql-1649582722历史版本[root@linuxea.com /data/helm]# helm history mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION 1 Sun Apr 10 05:25:23 2022 superseded mysql-1.6.9 5.7.30 Install complete 2 Sun Apr 10 06:29:00 2022 deployed mysql-1.6.9 5.7.30 Upgrade complete进行rollback[root@linuxea.com /data/helm]# helm rollback mysql-1649582722 1 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config Rollback was a success! Happy Helming!在来查看values[root@linuxea.com /data/helm]# helm get values mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config USER-SUPPLIED VALUES: mysqlDatabase: linuxea mysqlPassword: linuxea.com mysqlUser: linuxea persistence: enabled: false在查看密码[root@linuxea.com /data/helm]# kubectl get secret --namespace default mysql-1649582722 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo 8FFSmw66je而现在的history就是三个版本了[root@linuxea.com /data/helm]# helm history mysql-1649582722 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. 
This is insecure. Location: /root/.kube/config REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION 1 Sun Apr 10 05:25:23 2022 superseded mysql-1.6.9 5.7.30 Install complete 2 Sun Apr 10 06:29:00 2022 superseded mysql-1.6.9 5.7.30 Upgrade complete 3 Sun Apr 10 06:39:44 2022 deployed mysql-1.6.9 5.7.30 Rollback to 1 这是因为版本是一直在新增,而3的版本就是回滚到1了,Rollback to 1 其他参数在整个helm中有一些非常有意思且重要的参数,比如常见的install和upgrade,当我们不确定一个程序是否被安装的时候,我们就需要安装,否则就是更新,于是可以使用upgrade --install,一般而言,我们可能还需要一个名称空间,那么就有了另外一个参数--create-namesapce,如果不存在就创建helm upgrade --install --create-namespace --namespace linuxea hmysql ./mysql如果名称空间不存在就创建,如果mysql没有install就install,否则就upgrade同时,当helm执行完成后, list列表中的状态已经为deployed,但是并不能说明pod已经装备好了,这两者之间并没有直接关系的,此时需要一些配置参数辅助--wait等待所有pod就绪,包含共享存储的pvc,就绪状态准备情况,以及svc,如果超过五分钟,这个版本就会标记失败-- timeout等待kubernetes命令完成,默认五分钟--no-hooks跳过命令的运行的hooks--recreate-pods仅适用于upgrade和rollback,在helm3中这个标志将导致重新创建所有的pod参考kubernetes helm概述(49)kubernetes helm简单使用(50)kubernetes 了解chart(51)kubernetes helm安装efk(52)
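再补充一个小示例(release 名称沿用上文的 mysql-1649582722,仅作演示):在升级与回滚前后,可以用 helm get 对比某个历史 revision 实际生效的 values 与渲染出的清单,确认改动是否符合预期;不确定时也可以先 dry-run。
# 查看指定 revision 的用户提供的 values
helm get values mysql-1649582722 --revision 1
# 查看指定 revision 渲染出的完整清单
helm get manifest mysql-1649582722 --revision 2
# 只验证渲染与变更而不真正提交
helm upgrade mysql-1649582722 stable/mysql -f mvalule.yaml --dry-run --debug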
2022年04月22日
89 阅读
0 评论
0 点赞
2022-04-20
linuxea:ingress-nginx的rewrite与canary
ingress-nginx的官网提供了更多的一些配置信息,包括url重写,金丝雀,尽管金丝雀支持并不完美,ingress-nginx仍然是最受欢迎的ingress之一。在上一篇中,我介绍了ingress-nginx应用常见的两种方式,并且采用的最新的版本,早期有非常陈旧的版本在使用。鉴于此,随后打算将ingress-nginx重新理一遍,于是就有了这篇,后续可能还会有ingress-nginx本身只是做一个声明,从哪里来到哪里去而已,并不会做一些流量转发,而核心是annotations的class是可以借助作一些操作的,比如修改城Traefik或者自己定制创建一个deployment的pod,我们至少需要指定标签如下 selector: matchLabels: app: linuxea_app version: v0.1.32 template: metadata: labels: app: linuxea_app version: v0.1.32而后在service关联 selector: app: linuxea_app version: v0.1.32并配置一个ingress,name: myapp必须是这个service的name , 且必须在同一个名称空间,如下apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test namespace: default spec: tls: - hosts: - linuxea.test.com secretName: nginx-ingress-secret ingressClassName: nginx rules: - host: linuxea.test.com http: paths: - path: / pathType: Prefix backend: service: name: myapp port: number: 80 --- apiVersion: v1 kind: Service metadata: name: myapp namespace: default spec: selector: app: linuxea_app version: v0.1.32 ports: - name: http targetPort: 80 port: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: dpment-linuxea namespace: default spec: replicas: 7 selector: matchLabels: app: linuxea_app version: v0.1.32 template: metadata: labels: app: linuxea_app version: v0.1.32 spec: containers: - name: nginx-a image: marksugar/nginx:1.14.b ports: - name: http containerPort: 80这样就完成了一个最简单不过的ingress-nginx的域名配置,当然也可以配置一个404的页面之类的而整个过程大致如下请求到达LB后被分发到ingress-controller,controller会一直关注ingress对象,匹配到对应的信息后将请求转发到其中的某一个pod之上,而service对后端的pod做发现和关联。意思就是ingress-nginx不是直接到service,而是只从service中获取pod的信息,service仍然负责发现后端的pod状态,而请求是通过ingress通过ed到pod的url重写ingress-nginx大致如下,除此之外,我们可以对annotation做一些配置,较为常见的rewrite功能在实际中,访问的url可能如下linuxea.test.com/qsv1 linuxea.test.com/qsv2 linuxea.test.com/qsv3诸如此类,而要进行这种跳转,需要前端代码支持,或者配置rewrite进行转发,如下NameDescriptionValuesnginx.ingress.kubernetes.io/rewrite-targetTarget URI where the traffic must be redirectedstringnginx.ingress.kubernetes.io/ssl-redirectIndicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate)boolnginx.ingress.kubernetes.io/force-ssl-redirectForces the redirection to HTTPS even if the Ingress is not TLS Enabledboolnginx.ingress.kubernetes.io/app-rootDefines the Application Root that the Controller must redirect if it's in / contextstringnginx.ingress.kubernetes.io/use-regexIndicates if the paths defined on an Ingress use regular expressionsboolCaptured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.nginx.ingress.kubernetes.io/rewrite-target 将请求转发到目标比如现在要将/app转发到/app/modiy,那么就可以如下正则表达/app(/|$)(.*)并且rewrite-target,的值是一个$2占位符 annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 ... 
paths: - path: /app(/|$)(.*)而这种方式i还有一个问题,就是你的js代码很有可能是绝对路径的,因此你不能够打开js,js会404 。 要么修改为相对路径,要么就需要重新配置一个重定向假设你的jss样式在/style下,还可能有图片是在image下以及js的路径,和其他增删改查的页面,现在的跳转后央视404,可以使用configuration-snippet重写configuration-snippet annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/app-root: /linuxea.html nginx.ingress.kubernetes.io/configuration-snippet: | rewrite ^/style/(.*)$ /app/style/$1 redirect; rewrite ^/image/(.*)$ /app/image/$1 redirect; rewrite ^/javascripts/(.*)$ /app/javascripts/$1 redirect; rewrite ^/modiy/(.*)$ /app/modiy/$1 redirect; rewrite ^/create/(.*)$ /app/create/$1 redirect; rewrite ^/delete/(.*)$ /app/delete/$1 redirect; |表示换行,而后rewrite以/style/路径下的所有跳转到/app/style/下,完成对style添加前缀/appapp-root如果此时我们希望访问的根目录不是默认的,可以使用app-root来进行跳转,比如跳转到linuxea.html如果是一个目录,就可以写一个路径,比如/app/ annotations: nginx.ingress.kubernetes.io/app-root: /linuxea.html[root@Node-172_16_100_50 ~/ingress]# kubectl apply -f ingress.yaml ingress.networking.k8s.io/test configured现在就完成了自动跳转basic auth认证在nginx里面是可以配置basic auth认证的,非常简单的一个配置,在ingress-nginx中也是可以的我们可以进行yum安装一个httpd的应用,或者在搜索引擎搜索一个在线 htpasswd 生成器来生成一个用户mark,密码linuxea.comyum install httpd -y# htpasswd -c auth mark New password: Re-type new password: Adding password for user mark在线生成即可# cat auth1 mark:$apr1$69ocxsQr$omgzB53m59LeCVxlOAsTr/创建一个secret,将这个文件配置即可kubectl create secret generic bauth --from-file=auth1# kubectl create secret generic bauth --from-file=auth1 secret/bauth created # kubectl get secret bauth NAME TYPE DATA AGE bauth Opaque 1 21s # kubectl describe secret bauth Name: bauth Namespace: default Labels: <none> Annotations: <none> Type: Opaque Data ==== auth1: 43 bytes而后添加到ingress-nginx中,如下 annotations: ... nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: auth-1 nginx.ingress.kubernetes.io/auth-realm: 'Authentication failed, please try again'auth-secret是引入刚创建的bauth,而auth-type指定了类型,如下apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test namespace: default annotations: nginx.ingress.kubernetes.io/app-root: /linuxea.html nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: auth-1 nginx.ingress.kubernetes.io/auth-realm: 'Authentication failed, please try again' spec: tls: - hosts: - linuxea.test.com secretName: nginx-ingress-secret ingressClassName: nginx rules: - host: linuxea.test.com http: paths: - path: / pathType: Prefix backend: service: name: myapp port: number: 80执行一下# kubectl apply -f ingress.yaml ingress.networking.k8s.io/test configured如下灰度我们通常使用最多的滚动更新,蓝绿,灰度,而ingress-ngiinx是通过annotations配置来实现的,能满足金丝雀,蓝绿、ab测试缺少描述 部分此前我们配置了一个pod和一个service,要配置金丝雀那就需要在配置一组,而后我们在ingress中使用annotations来进行调用其他的一些class来完成一些操作配置nginx:v1.14.aapiVersion: apps/v1 kind: Deployment metadata: name: testv1 labels: app: testv1 spec: replicas: 5 selector: matchLabels: app: testv1 template: metadata: labels: app: testv1 spec: containers: - name: testv1 image: marksugar/nginx:1.14.a ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: testv1-service spec: ports: - port: 80 targetPort: 80 selector: app: testv1testv2apiVersion: apps/v1 kind: Deployment metadata: name: testv2 labels: app: testv2 spec: replicas: 5 selector: matchLabels: app: testv2 template: metadata: labels: app: testv2 spec: containers: - name: testv2 image: marksugar/nginx:1.14.b ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: testv2-service spec: ports: - port: 80 targetPort: 80 selector: app: testv2apply# kubectl apply -f testv1.yaml # kubectl 
apply -f testv2.yaml可以看到现在已经有两组# kubectl get pod NAME READY STATUS RESTARTS AGE testv1-9c974bd5d-c46dh 1/1 Running 0 19s testv1-9c974bd5d-j7fzn 1/1 Running 0 19s testv1-9c974bd5d-qp4tv 1/1 Running 0 19s testv1-9c974bd5d-thx4r 1/1 Running 0 19s testv1-9c974bd5d-x9rpf 1/1 Running 0 19s testv2-5767685995-f8z5s 1/1 Running 0 6s testv2-5767685995-htm74 1/1 Running 0 6s testv2-5767685995-k8sdv 1/1 Running 0 6s testv2-5767685995-mjd6c 1/1 Running 0 6s testv2-5767685995-prhld 1/1 Running 0 6s给testv1配置一个 ingress-v1.yamlapiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: testv1 namespace: default spec: ingressClassName: nginx rules: - host: test.mark.com http: paths: - path: / pathType: Prefix backend: service: name: testv1-service port: number: 80# kubectl apply -f ingress-v1.yaml 而后我们查看的版本信息# for i in $(seq 1 10);do curl -s test.mark.com/linuxea.html ;done linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0 linuxea-testv1-9c974bd5d-c46dh.com ▍ 4c0e80c7d9a34 ▍version number 1.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0 linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0canary而后我们配置canary nginx.ingress.kuberentes.io/canary: "true" # 开启灰度发布机制,首先启用canary nginx.ingress.kuberentes.io/canary-weight: "30" # 分配30%的流量到当前的canary版本如下给testv2配置一个 ingress-v2.yaml 并配置canary权重apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: testv2 namespace: default annotations: nginx.ingress.kubernetes.io/canary: "true" nginx.ingress.kubernetes.io/canary-weight: "30" spec: ingressClassName: nginx rules: - host: test.mark.com http: paths: - path: / pathType: Prefix backend: service: name: testv2-service port: number: 80此时由于版本问题你或许会发现 有一个问题Error from server (BadRequest): error when creating "ingress-1.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "linuxea.test.com" and path "/" is already defined in ingress default/test而这个问题的根源在于没有验证webhook忽略具有不同的ingressclass的入口controller.admissionWebhooks.enabled=false并且在1.1.2修复我们安装1.1.2的ingress-nginxdocker pull k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c找到一个aliyun的docker pull registry.cn-shanghai.aliyuncs.com/wanfei/ingress-nginx-controller:v1.1.2 docker pull registry.cn-shanghai.aliyuncs.com/wanfei/kube-webhook-certgen:v1.1.1 docker pull registry.cn-shanghai.aliyuncs.com/wanfei/defaultbackend-amd64:1.5修改ingress-nginx的deployment.yaml而后在配置下应用声明式的文件# kubectl apply -f ingress-v2.yaml # kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE linuxea nginx linuxea.test.com 172.16.100.50 80 22h testv1 nginx test.mark.com 172.16.100.50 80 3m23s testv2 nginx test.mark.com 172.16.100.50 80 70sfor i in $(seq 1 10);do curl -s linuxea.test.com ;done# for i in $(seq 1 10);do curl -s test.mark.com/linuxea.html ;done linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0 linuxea-testv1-9c974bd5d-c46dh.com ▍ 4c0e80c7d9a34 ▍version number 1.0 linuxea-testv2-5767685995-mjd6c.com ▍ 1fa571f0e1e0e ▍version number 2.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 
1.0 linuxea-testv1-9c974bd5d-thx4r.com ▍ b9e074d68c3c7 ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-qp4tv.com ▍ f6f9a8a43d4ee ▍version number 1.0 linuxea-testv1-9c974bd5d-x9rpf.com ▍ 9cbb453617c73 ▍version number 1.0 linuxea-testv1-9c974bd5d-j7fzn.com ▍ d471888034671 ▍version number 1.0这里的比例是大致的一个算法,而并不是固定的此时可以将weight配置成0撤销更新 nginx.ingress.kubernetes.io/canary-weight: "0"或者将weight配置成100完成更新 nginx.ingress.kubernetes.io/canary-weight: "100"参考:validating webhook should ignore ingresses with a different ingressclassvalidating webhook should ignore ingresses with a different ingressclassslack讨论Error: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "xyz" and path "/" is already defined in ingress xxx #821kubernetes Ingress Controller (15)kubernetes Ingress nginx http以及7层https配置 (17)kubernetes Ingress nginx配置 (16)
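补充:除了按权重分流,ingress-nginx 的 canary 也支持按请求头分流,便于先让内部请求验证新版本。下面是把上文 testv2 的注解换成 canary-by-header 的示意(域名与 service 沿用上文示例):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testv2
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    nginx.ingress.kubernetes.io/canary-by-header-value: "v2"
spec:
  ingressClassName: nginx
  rules:
  - host: test.mark.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testv2-service
            port:
              number: 80
# 携带匹配请求头的请求会全部路由到 testv2,其余仍走 testv1
curl -s -H "X-Canary: v2" test.mark.com/linuxea.html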
2022年04月20日
88 阅读
0 评论
0 点赞
2022-04-18
linuxea:k8s下kube-prometheus监控ingress-nginx
首先需要已经配置好了一个 ingress-nginx,亦或者使用 ACK 上的 ingress-nginx。鉴于对 ingress-nginx 的状态或者流量做监控有一定的必要性,配置监控指标有助于了解更多细节。
通过使用 kube-prometheus 项目来监控 ingress-nginx,首先需要在 nginx-ingress-controller 的 yaml 中配置 10254 的端口,并且配置一个 service,最后加入到 ServiceMonitor 即可。
start
如果是 helm,则需要如下修改
..
controller:
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
..
如果不是 helm,则必须像这样编辑清单:
服务清单:
- name: prometheus
  port: 10254
  targetPort: prometheus
prometheus 将会在 service 中被调用
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
..
spec:
  ports:
  - name: prometheus
    port: 10254
    targetPort: prometheus
..
deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
..
spec:
  ports:
  - name: prometheus
    containerPort: 10254
..
测试 10254 的 /metrics 的 url 能够被访问到
bash-5.1$ curl 127.0.0.1:10254/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.9802e-05
go_gc_duration_seconds{quantile="0.25"} 3.015e-05
go_gc_duration_seconds{quantile="0.5"} 4.2054e-05
go_gc_duration_seconds{quantile="0.75"} 9.636e-05
go_gc_duration_seconds{quantile="1"} 0.000383868
go_gc_duration_seconds_sum 0.000972498
go_gc_duration_seconds_count 11
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 92
# HELP go_info Information about the Go environment.
Service And ServiceMonitor
另外需要配置一个 ServiceMonitor,这取决于 kube-prometheus 的发行版,spec 部分字段如下
spec:
  endpoints:
  - interval: 15s  # 15s频率
    port: metrics  # port的名称
    path: /metrics # url路径
  namespaceSelector:
    matchNames:
    - kube-system  # ingress-nginx所在的名称空间
  selector:
    matchLabels:
      app: ingress-nginx # ingress-nginx的标签
最终配置如下:service 在 ingress-nginx 的名称空间下配置,而 ServiceMonitor 在 kube-prometheus 的 monitoring 名称空间下,使用 endpoints 定义 port 名称与抓取路径 /metrics,使用 namespaceSelector.matchNames 指定 ingress pod 所在的名称空间,selector.matchLabels 匹配标签
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-metrics
  namespace: kube-system
  labels:
    app: ingress-nginx
  annotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
spec:
  type: ClusterIP
  ports:
  - name: metrics
    port: 10254
    targetPort: 10254
    protocol: TCP
  selector:
    app: ingress-nginx
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ingress-nginx-metrics
  namespace: monitoring
spec:
  endpoints:
  - interval: 15s
    port: metrics
    path: /metrics
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: ingress-nginx
grafana
在 grafana 的 dashboards 中搜索 ingress-nginx,得到的和 github 官网的模板一样
https://grafana.com/grafana/dashboards/9614?pg=dashboards&plcmt=featured-dashboard-4
或者下面这个模板
这些 target 会在 prometheus 的 targets 中被发现
参考
ingress-nginx monitoring
prometheus and grafana install
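补充:ServiceMonitor 应用之后,可以先在集群内自查一遍抓取链路(service 名与名称空间沿用上文示例,命令仅为示意):
# 确认 service 的 endpoints 已经关联到 ingress-nginx 的 pod
kubectl -n kube-system get endpoints ingress-nginx-metrics
# 另开一个终端把 10254 转发到本地,确认指标前缀存在
kubectl -n kube-system port-forward svc/ingress-nginx-metrics 10254:10254
curl -s 127.0.0.1:10254/metrics | grep -c 'nginx_ingress_controller'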
2022年04月18日
99 阅读
0 评论
0 点赞
2022-04-16
linuxea:windows远程调试k8s环境
有些朋友问怎么在windows上调试自己的环境,刚好最近也在自己的虚拟环境调试,就整理下了文档以kubectl和helm以及kustomize为例下载对应的包你要正常使用当你包,必须是与你kubernetes版本匹配的,这些信息在他们的readme.md中都有介绍假如你的k8s 是1.20的,那你就不能使用与此版本差距太大的版本以免出现未知的问题而其他的大版本的包使用方式一直在发送变化https://dl.k8s.io/release/v1.20.11/bin/windows/amd64/kubectl.exe https://get.helm.sh/helm-v3.8.2-windows-amd64.zip https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv3.10.0将exe放置在一个位置,比如:C:k8sbinPS C:\k8sbin> dir 目录: C:\k8sbin Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 2022-04-14 1:47 46256128 helm.exe -a---- 2022-04-16 12:59 41438208 kubectl.exe -a---- 2021-02-10 8:03 15297536 kustomize.exe以win10为例,在左下角的搜索栏中,或者有一个放大镜,输入"环境变量"重新打开一个窗口PS C:\WINDOWS\system32> kubectl.exe version Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"windows/amd64"} Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it. PS C:\WINDOWS\system32> kustomize.exe version {Version:kustomize/v3.10.0 GitCommit:602ad8aa98e2e17f6c9119e027a09757e63c8bec BuildDate:2021-02-10T00:00:50Z GoOs:windows GoArch:amd64} PS C:\WINDOWS\system32> helm version version.BuildInfo{Version:"v3.8.2", GitCommit:"6e3701edea09e5d55a8ca2aae03a68917630e91b", GitTreeState:"clean", GoVersion:"go1.17.5"} PS C:\WINDOWS\system32>将kubernetes的config文件拿到本地cat /etc/kubernetes/kubelet.kubeconfig 在windwos上当前用户的加目录创建.kube,并将kubelet.kubeconfig 内容复制到一个config的文件中C:\Users\Administrator\.kube\configgetPS C:\Users\Administrator\.kube> kubectl.exe get pod NAME READY STATUS RESTARTS AGE dpment-linuxea-6bdfbd7b77-fr4pn 1/1 Running 9 10d dpment-linuxea-a-5b98f7fb86-9ff2f 1/1 Running 17 23d hello-run-96whr-pod 0/1 Completed 0 10d hello-run-pod 0/1 Completed 0 10d mysql-1649582722-dbcdcb895-tjvsr 1/1 Running 6 5d20h nfs-client-provisioner-597f7dd4b-h2nsg 1/1 Running 71 248d testv1-9c974bd5d-gl52m 1/1 Running 9 10d testv2-5767685995-mjd6c 1/1 Running 16 22d traefik-6866c896d5-dqlv6 1/1 Running 9 10d ubuntu 0/1 Error 0 5d19h whoami-7d666f84d8-8wmk4 1/1 Running 15 20d whoami-7d666f84d8-vlgb9 1/1 Running 9 10d whoamitcp-744cc4b47-24prx 1/1 Running 9 10d whoamitcp-744cc4b47-xrgqp 1/1 Running 9 10d whoamiudp-58f6cf7b8-b6njt 1/1 Running 9 10d whoamiudp-58f6cf7b8-jnq6c 1/1 Running 15 20d PS C:\Users\Administrator\.kube>如下图挂在windows共享目录仅限于内网共享使用如果是传统的共享,你需要创建用户,需要共享文件,权限指定,而后使用netstat -aon来过滤139,145,138端口权限是否开启添加用户权限到共享文件夹查看是否打开共享yum install cifs-utils -y挂载mount -t cifs -o username=share,password=share //172.16.100.3/helm /data/helm[root@liinuxea.com /data]# mkdir helm [root@liinuxea.com /data]# mount -t cifs -o username=share,password=share //172.16.100.3/helm /data/helm [root@liinuxea.com /data]# df -h|grep helm //172.16.100.3/helm 282G 275G 6.5G 98% /data/helm
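补充:如果希望这个 cifs 挂载开机自动生效,可以把凭据放到单独的文件再写入 fstab(路径、账号沿用上文示例,仅为示意):
# 凭据文件,避免密码明文写进 fstab
cat > /root/.smbcred <<'EOF'
username=share
password=share
EOF
chmod 600 /root/.smbcred
# 追加 fstab 条目并验证挂载
echo '//172.16.100.3/helm /data/helm cifs credentials=/root/.smbcred,_netdev 0 0' >> /etc/fstab
mount -a && df -h | grep helm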
2022年04月16日
85 阅读
0 评论
0 点赞