linuxea: Kubernetes Pod Controller DaemonSet (11)


A DaemonSet runs a copy of a given pod on every node in the cluster, or on a subset of nodes chosen by a selector. It is typically used for system-level management functions or background tasks; a directory on the node can be mounted into the pod as a volume so the pod can perform such management work. When a new node joins the cluster, a replica of the pod is automatically scheduled onto it. Label selectors are needed here as well, and DaemonSets also support rolling updates.
Such services are usually stateless and run as daemons.
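To run the DaemonSet only on selected nodes rather than all of them, the pod template can carry a nodeSelector. A minimal sketch — the disktype: ssd label is an assumed example, not something from this cluster:

```yaml
spec:
  template:
    spec:
      nodeSelector:        # pods are scheduled only onto nodes carrying this label
        disktype: ssd      # hypothetical label; set with: kubectl label node <node> disktype=ssd
```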

  • Parameters

updateStrategy: the update strategy (rolling update by default)
​ RollingUpdate: when type is RollingUpdate, only maxUnavailable can be set, since each node runs exactly one pod
​ maxUnavailable: the number of pods (that is, nodes) that may be unavailable during the update
​ OnDelete: update a pod only after it has been deleted
Each node runs at most one pod of the DaemonSet, so a pod is replaced only after the old one on that node is deleted; the count here refers to nodes, not pods. Note that kubectl describe ds ds-filebeat-linuxea does not display the update strategy.
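As a sketch, the strategy described above can be written out explicitly in the DaemonSet spec (maxUnavailable: 1 is already the default; it is shown here only for illustration):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate      # or OnDelete: replace a pod only after it is deleted
    rollingUpdate:
      maxUnavailable: 1      # number of nodes whose pod may be down during the update
```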

env: a list of objects, each with a variable name and a value to pass in

  • Scenario

Such programs usually run at the node level and do their work on the node itself, so they are deployed per node and ship data from the node to storage outside it, as filebeat does. How many of these pods exist depends on the size of the cluster.

I. filebeat configuration

The YAML file is shown below.
In the redis part, kind is Deployment and the name is redis; filebeat references this name through the env value redis.default.svc.cluster.local. This matters because the pods reach each other through the Service hostname.


In the filebeat part, kind is DaemonSet, which means each node runs at most one filebeat replica, and the variables are passed into filebeat, as follows:

[root@linuxea linuxea]# cat ds-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: Logstorage
  template:
    metadata:
      labels:
        app: redis
        role: Logstorage
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-filebeat-linuxea
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info

Apply the manifest to start everything:

[root@linuxea linuxea]# kubectl apply -f ds-demo.yaml 
deployment.apps/redis created
daemonset.apps/ds-filebeat-linuxea created

Once everything is running, check with kubectl get pods:

[root@linuxea linuxea]# kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
ds-filebeat-linuxea-2p6hc         1/1       Running   0          6s
ds-filebeat-linuxea-c722n         1/1       Running   0          6s
ds-filebeat-linuxea-nqxll         1/1       Running   0          6s
redis-66ccc9b5df-kcx4q            1/1       Running   0          6s

You can see that each node runs one replica of the ds-filebeat-linuxea pod:

[root@linuxea linuxea]# kubectl get pods -o wide
NAME                              READY     STATUS    RESTARTS   AGE       IP             NODE
ds-filebeat-linuxea-2p6hc         1/1       Running   0          43s       172.16.3.43    linuxea.node-3.com   <none>
ds-filebeat-linuxea-c722n         1/1       Running   0          43s       172.16.1.39    linuxea.node-1.com   <none>
ds-filebeat-linuxea-nqxll         1/1       Running   0          43s       172.16.2.42    linuxea.node-2.com   <none>

You can also filter by label with kubectl get pods -l app=filebeat -o wide:

[root@linuxea linuxea]# kubectl get pods -l app=filebeat -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP            NODE                 NOMINATED NODE
ds-filebeat-linuxea-2p6hc   1/1       Running   0          33m       172.16.3.43   linuxea.node-3.com   <none>
ds-filebeat-linuxea-c722n   1/1       Running   0          33m       172.16.1.39   linuxea.node-1.com   <none>
ds-filebeat-linuxea-nqxll   1/1       Running   2          33m       172.16.2.42   linuxea.node-2.com   <none>

Logs can be viewed with kubectl logs ds-filebeat-linuxea-2p6hc:

[root@linuxea linuxea]# kubectl  logs ds-filebeat-linuxea-2p6hc
2018/09/02 06:59:33.111320 beat.go:297: INFO Home path: [/usr/local/bin] Config path: [/usr/local/bin] Data path: [/usr/local/bin/data] Logs path: [/usr/local/bin/logs]
2018/09/02 06:59:33.111357 beat.go:192: INFO Setup Beat: filebeat; Version: 5.6.5
2018/09/02 06:59:33.111488 redis.go:140: INFO Max Retries set to: 3
2018/09/02 06:59:33.111516 outputs.go:108: INFO Activated redis as output plugin.
2018/09/02 06:59:33.111586 publish.go:300: INFO Publisher name: ds-filebeat-linuxea-2p6hc
2018/09/02 06:59:33.111750 async.go:63: INFO Flush Interval set to: 1s
2018/09/02 06:59:33.111766 async.go:64: INFO Max Bulk Size set to: 2048
2018/09/02 06:59:33.111867 modules.go:95: ERR Not loading modules. Module directory not found: /usr/local/bin/module
2018/09/02 06:59:33.112317 beat.go:233: INFO filebeat start running.
2018/09/02 06:59:33.112795 registrar.go:68: INFO No registry file found under: /var/log/containers/filebeat_registry. Creating a new registry file.
2018/09/02 06:59:33.114126 registrar.go:106: INFO Loading registrar data from /var/log/containers/filebeat_registry
2018/09/02 06:59:33.114215 registrar.go:123: INFO States Loaded from registrar: 0
2018/09/02 06:59:33.114272 crawler.go:38: INFO Loading Prospectors: 1
2018/09/02 06:59:33.114424 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2018/09/02 06:59:33.114814 metrics.go:23: INFO Metrics logging every 30s
2018/09/02 06:59:33.114909 registrar.go:236: INFO Starting Registrar
2018/09/02 06:59:33.114902 sync.go:41: INFO Start sending events to output
2018/09/02 06:59:33.114984 config.go:95: WARN DEPRECATED: document_type is deprecated. Use fields instead.
2018/09/02 06:59:33.115113 prospector.go:124: INFO Starting prospector of type: log; id: 11998382299604891537 
2018/09/02 06:59:33.115154 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2018/09/02 06:59:33.115002 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2018/09/02 07:00:03.115273 metrics.go:39: INFO Non-zero metrics in the last 30s: registrar.writes=1

Exposing the port

Expose the port with kubectl expose:

[root@linuxea linuxea]# kubectl expose deployment redis --port=6379
service/redis exposed

Now redis is exposed. Importantly, this means redis is resolvable through CoreDNS. The IP reachable from within pods is 10.106.219.113:

[root@linuxea linuxea]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          10d
redis        ClusterIP   10.106.219.113   <none>        6379/TCP         7s
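The kubectl expose command above is roughly equivalent to applying a Service manifest like the following sketch (the selector is assumed to match the Deployment's pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:               # must match the labels on the redis pods
    app: redis
    role: Logstorage
  ports:
  - port: 6379            # Service port; targetPort defaults to the same value
```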

Enter the redis pod and verify that the port is listening:

[root@linuxea linuxea]# kubectl exec -it redis-66ccc9b5df-kcx4q -- /bin/sh
/data # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      
tcp        0      0 :::6379                 :::*                    LISTEN      

Check DNS resolution; this is the name passed in the environment variable, and nslookup resolves it successfully:

/data # nslookup redis
nslookup: can't resolve '(null)': Name does not resolve

Name:      redis
Address 1: 10.106.219.113 redis.default.svc.cluster.local

filebeat log collection

In the filebeat pod, look at the configuration file: the variable in output.redis: { hosts: ${REDIS_HOST} } is expanded into the configuration, as shown below.

[root@linuxea linuxea]# kubectl exec -it ds-filebeat-linuxea-nqxll -- /bin/sh
/ # ps aux
PID   USER     TIME   COMMAND
    1 root       0:00 /usr/local/bin/filebeat -e -c /etc/filebeat/filebeat.yml
   13 root       0:00 /bin/sh
   19 root       0:00 ps aux

Check the redis section of the output configuration:

/ # tail -3 /etc/filebeat/filebeat.yml
output.redis:
  hosts: ${REDIS_HOST:?No Redis host configured. Use env var REDIS_HOST to set host.}
  key: "filebeat"

Then check the environment variables: the redis address has been passed in as ${REDIS_HOST}:

/ # printenv|grep REDIS_HOST
REDIS_HOST=redis.default.svc.cluster.local

It resolves successfully, which means the pod can reach redis:

/ # nslookup redis.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      redis.default.svc.cluster.local
Address 1: 10.106.219.113 redis.default.svc.cluster.local

Then run the filebeat process once more, so the logs are written to redis immediately as a test.
Prepare some log data by hand ^_^

/var/log/containers # touch 1.log
/var/log/containers # echo linuxea.ds.com > 1.log 

Start it:

/ # /usr/local/bin/filebeat -e -c /etc/filebeat/filebeat.yml
2018/09/02 07:19:10.108151 beat.go:297: INFO Home path: [/usr/local/bin] Config path: [/usr/local/bin] Data path: [/usr/local/bin/data] Logs path: [/usr/local/bin/logs]
2018/09/02 07:19:10.108199 beat.go:192: INFO Setup Beat: filebeat; Version: 5.6.5
2018/09/02 07:19:10.108395 metrics.go:23: INFO Metrics logging every 30s
2018/09/02 07:19:10.108507 redis.go:140: INFO Max Retries set to: 3
2018/09/02 07:19:10.108539 outputs.go:108: INFO Activated redis as output plugin.
2018/09/02 07:19:10.108762 publish.go:300: INFO Publisher name: ds-filebeat-linuxea-nqxll
2018/09/02 07:19:10.108966 async.go:63: INFO Flush Interval set to: 1s
2018/09/02 07:19:10.108992 async.go:64: INFO Max Bulk Size set to: 2048
2018/09/02 07:19:10.109313 modules.go:95: ERR Not loading modules. Module directory not found: /usr/local/bin/module
2018/09/02 07:19:10.109497 beat.go:233: INFO filebeat start running.
2018/09/02 07:19:10.109580 registrar.go:85: INFO Registry file set to: /var/log/containers/filebeat_registry
2018/09/02 07:19:10.109657 registrar.go:106: INFO Loading registrar data from /var/log/containers/filebeat_registry
2018/09/02 07:19:10.109741 registrar.go:123: INFO States Loaded from registrar: 0
2018/09/02 07:19:10.109820 crawler.go:38: INFO Loading Prospectors: 1
2018/09/02 07:19:10.109958 registrar.go:236: INFO Starting Registrar
2018/09/02 07:19:10.109983 sync.go:41: INFO Start sending events to output
2018/09/02 07:19:10.110154 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2018/09/02 07:19:10.110216 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2018/09/02 07:19:10.110531 config.go:95: WARN DEPRECATED: document_type is deprecated. Use fields instead.
2018/09/02 07:19:10.110555 prospector.go:124: INFO Starting prospector of type: log; id: 11998382299604891537 
2018/09/02 07:19:10.110603 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2018/09/02 07:19:40.108752 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/09/02 07:20:10.108693 metrics.go:34: INFO No non-zero metrics in the last 30s

redis

Back in redis, the log entry has been written:

redis.default.svc.cluster.local:6379> keys *
1) "filebeat"
redis.default.svc.cluster.local:6379> type filebeat
list
redis.default.svc.cluster.local:6379> lrange filebeat 0 -1
1) "{\"@timestamp\":\"2018-09-02T07:18:05.554Z\",\"beat\":{\"hostname\":\"ds-filebeat-linuxea-nqxll\",\"name\":\"ds-filebeat-linuxea-nqxll\",\"version\":\"5.6.5\"},\"input_type\":\"log\",\"json_error\":\"Error decoding JSON: invalid character 'm' looking for beginning of value\",\"log\":\"linuxea.ds.com\",\"offset\":12,\"source\":\"/var/log/containers/1.log\",\"type\":\"kube-logs\"}"
redis.default.svc.cluster.local:6379> 

II. Rolling update

  • set image
[root@linuxea linuxea]# kubectl set image daemonsets ds-filebeat-linuxea filebeat=ikubernetes/filebeat:5.6.6-alpine
daemonset.extensions/ds-filebeat-linuxea image updated

After the change, check IMAGES with kubectl get ds -o wide:

[root@linuxea linuxea]# kubectl get ds -o wide
NAME                  DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE       CONTAINERS   IMAGES                              SELECTOR
ds-filebeat-linuxea   3         3         2         0            2           <none>          45m       filebeat     ikubernetes/filebeat:5.6.6-alpine   app=filebeat,release=stable
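Equivalently, the update can be done declaratively: bump the image tag in the manifest and run kubectl apply again. A sketch of the changed container section:

```yaml
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.6-alpine   # bumped from 5.6.5-alpine; applying triggers the rollout
```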

Watch the whole rolling process with kubectl get pods -w: the new image is pulled, and each pod goes to Running once its pull completes.

[root@linuxea linuxea]# kubectl get pods -w
ds-filebeat-linuxea-nqz84   1/1       Running   0         11s
ds-filebeat-linuxea-c722n   1/1       Terminating   0         45m
ds-filebeat-linuxea-c722n   0/1       Terminating   0         45m
ds-filebeat-linuxea-c722n   0/1       Terminating   0         45m
ds-filebeat-linuxea-c722n   0/1       Terminating   0         45m
ds-filebeat-linuxea-97l8x   0/1       Pending   0         0s
ds-filebeat-linuxea-97l8x   0/1       ContainerCreating   0         0s
ds-filebeat-linuxea-97l8x   1/1       Running   0         9s
ds-filebeat-linuxea-2p6hc   1/1       Terminating   0         45m
ds-filebeat-linuxea-2p6hc   0/1       Terminating   0         45m
ds-filebeat-linuxea-2p6hc   0/1       Terminating   0         45m
ds-filebeat-linuxea-2p6hc   0/1       Terminating   0         45m
ds-filebeat-linuxea-6mkd7   0/1       Pending   0         0s
ds-filebeat-linuxea-6mkd7   0/1       ContainerCreating   0         0s
ds-filebeat-linuxea-6mkd7   1/1       Running   0         10s

III. Host namespace fields

hostIPC, hostNetwork, hostPID
Accessing Kubernetes pods from outside the cluster
The hostNetwork setting applies to Kubernetes pods. When a pod is configured with hostNetwork: true, applications running in it can see the network interfaces of the host that started the pod, and an application that listens on all interfaces becomes reachable on all of the host's interfaces. Here is an example definition of a pod that uses the host network:

[root@linuxea linuxea]# cat nginx.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-linuxea
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
      release: www
  template:
    metadata:
      labels:
        app: nginx
        release: www
    spec:
      hostNetwork: true
      containers:
      - name: linuxea
        image: "marksugar/nginx:1.14.a"
        ports:
        - name: http
          containerPort: 80
          hostPort: 80

Start the pod:

[root@linuxea linuxea]# kubectl apply -f nginx.yaml 
daemonset.apps/nginx-linuxea created

The IPs shown are the host machines' IP addresses:

[root@linuxea linuxea]# kubectl get pods -l app=nginx -o wide
NAME                  READY     STATUS    RESTARTS   AGE       IP              NODE                 NOMINATED NODE
nginx-linuxea-gbmn5   1/1       Running   0          11s       10.10.240.202   linuxea.node-1.com   <none>
nginx-linuxea-kj848   1/1       Running   0          11s       10.10.240.146   linuxea.node-3.com   <none>
nginx-linuxea-rh2kg   1/1       Running   0          11s       10.10.240.203   linuxea.node-2.com   <none>

However, this means Kubernetes may reschedule the pod to a different node on each restart. In addition, two pods using the same port cannot run on the same node, or the ports will conflict. Most importantly, creating a pod with hostNetwork: true is a privileged operation on OpenShift, and it bypasses Flannel. For these reasons, this is not a good way to make an application reachable from outside.
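Given those drawbacks, a NodePort Service is usually the safer way to reach such pods from outside the cluster. A minimal sketch — the nodePort value 30080 is an arbitrary choice from the default 30000-32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-linuxea
  namespace: default
spec:
  type: NodePort
  selector:              # must match the labels on the nginx pods
    app: nginx
    release: www
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080      # reachable on every node's IP at this port
```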

Of course, none of this stops us from testing it.

But if you need to guarantee exactly one pod per node, then a DaemonSet is your only option.


Date: 2018-09-17 Category: kubernetes

Tags: kubernetes
