This is the sixth day of my participation in the Gwen Challenge.

There is a common scenario where a Pod needs to run on every node and the set of Pods must automatically change as nodes are added or removed, always matching the nodes one to one, with exactly one Pod per node; or the same must hold only for nodes carrying specified labels. This scenario is essential for log collection, monitoring agents, scheduled tasks, and daemons. The Kubernetes controllers include an object dedicated to exactly this scenario: the DaemonSet.

DaemonSet introduction

Let's look at the characteristics of a DaemonSet.

Characteristics

  • Ensures that each node runs one copy of the Pod
  • When a node joins or leaves the cluster, a Pod copy is automatically added or removed
  • Deleting a DaemonSet automatically deletes all the Pod copies it created
  • Combines with labels for flexible placement configuration (see the minimal sketch after this list)
  • Provides the same Pod maintenance capabilities as the Deployment controller
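
As a minimal sketch of these points: the DaemonSet below runs one Pod on every node that carries a given label, and runs on every node if the nodeSelector block is removed. The name demo-logger, the label disk: ssd, and the busybox command are hypothetical and only for illustration.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-logger                 # hypothetical name
spec:
  selector:
    matchLabels:
      app: demo-logger              # must match the Pod template labels
  template:
    metadata:
      labels:
        app: demo-logger
    spec:
      nodeSelector:                 # optional: only schedule on nodes with this label
        disk: ssd
      containers:
      - name: logger
        image: busybox:1.34
        command: ["sh", "-c", "while true; do date; sleep 60; done"]

Note that a DaemonSet has no replicas field; the number of Pods is derived from the number of (matching) nodes.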

Deployment

Next, use the official Elastic example to deploy Filebeat. The repository address: github.com/elastic/bea…

The documentation address: www.elastic.co/guide/en/be…

The kube-system namespace is used directly.

Elasticsearch is deployed with Docker beforehand (juejin.cn/post/684490…).

Download the YAML file

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.13/deploy/kubernetes/filebeat-kubernetes.yaml

Modify the Elasticsearch connection configuration

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:esip}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.13.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "esip"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
kubectl apply -f filebeat-kubernetes.yaml

After executing this command, you can see that the filebeat ServiceAccount has been created and granted get, watch, and list permissions on the namespaces, pods, and nodes cluster resources.
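
A quick sketch for double-checking those permissions, assuming you run it with a cluster-admin account that is allowed to impersonate service accounts:

# Both commands should print "yes"
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:filebeat
kubectl auth can-i watch nodes --as=system:serviceaccount:kube-system:filebeat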

kubectl get ds,pods -n kube-system -l k8s-app=filebeat -o wide

One Pod copy is running on each node of the cluster.
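
To confirm the one-Pod-per-node relationship, a simple check is to compare the node count with the DaemonSet's desired count:

# Number of nodes in the cluster
kubectl get nodes --no-headers | wc -l

# DESIRED / CURRENT / READY of the DaemonSet should match that number
kubectl get ds filebeat -n kube-system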

Processing flow

After a DaemonSet is deployed, the DaemonSet controller retrieves the list of nodes from etcd (through the API server) and iterates over all of them.

If no node labels are specified, a Pod copy is started on every node; if labels are specified, a Pod copy is started only on the nodes that carry those labels.
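
For label-based placement the flow looks roughly like this, assuming the DaemonSet's Pod template carries a matching nodeSelector as in the earlier sketch; the node name node-1 and the label logging=enabled are hypothetical:

# Add the label: the DaemonSet controller creates a Pod on node-1
kubectl label node node-1 logging=enabled

# Remove the label: the controller deletes that Pod again
kubectl label node node-1 logging-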

For each node, the check generally produces one of the following results (a quick way to observe this is shown after the list):

  • The node has no such Pod: create a Pod on the node.
  • The node has more than one such Pod: delete the extra Pods from the node.
  • The node has exactly one Pod: nothing needs to be done.
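
You can observe this reconciliation by deleting one of the Filebeat Pods by hand and watching the controller recreate it on the same node; substitute a real Pod name from the earlier kubectl get output:

# Delete one Filebeat Pod
kubectl delete pod <filebeat-pod-name> -n kube-system

# Watch: the DaemonSet controller recreates a Pod on that node almost immediately
kubectl get pods -n kube-system -l k8s-app=filebeat -o wide -w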

Viewing the logs

Deploy Kibana with Docker (juejin.cn/post/684490…), then create an index pattern to view the logs.
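
Before creating the index pattern, you can confirm that Filebeat is actually writing to Elasticsearch by listing its indices; esip and changeme are the placeholder host and password used in the manifest above:

curl -u elastic:changeme "http://esip:9200/_cat/indices/filebeat-*?v"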

DaemonSet advantages

  • DaemonSet Pods come with Kubernetes' built-in monitoring and self-healing
  • DaemonSet Pods are deployed in the same uniform way as any other service
  • DaemonSet Pods support resource limits, which avoids resource contention (see the snippet after this list)
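
The resource-limit point is just the standard Pod resources block; the Filebeat manifest above already sets, for example:

resources:
  limits:
    memory: 200Mi
  requests:
    cpu: 100m
    memory: 100Mi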
