In some scenarios, such as node performance monitoring and log collection, a related service needs to be deployed on every node. This can be implemented either with affinity-related features or with a DaemonSet.

DaemonSet

type DaemonSet struct {
    metav1.TypeMeta `json:",inline"`

    metav1.ObjectMeta `json:"metadata,omitempty"`

    // Specific definition of DaemonSet
    Spec DaemonSetSpec `json:"spec,omitempty"`

    // The current state of the DaemonSet, read-only
    Status DaemonSetStatus `json:"status,omitempty"`
}

type DaemonSetSpec struct {
    // Label selector used to find Pods managed by this DaemonSet
    // It must match the Pod labels in Template
    Selector *metav1.LabelSelector `json:"selector"`

    // Pod definition
    Template v1.PodTemplateSpec `json:"template"`

    // Update strategy configuration for the Pods
    UpdateStrategy DaemonSetUpdateStrategy `json:"updateStrategy,omitempty"`

    // Minimum time a new Pod must be ready before it is considered available; defaults to 0 (available as soon as it is ready)
    MinReadySeconds int32 `json:"minReadySeconds,omitempty"`

    // Number of historical revisions to retain. The default is 10
    RevisionHistoryLimit *int32 `json:"revisionHistoryLimit,omitempty"`
}

type DaemonSetUpdateStrategy struct {
    // Update strategy for Pods in the DaemonSet: RollingUpdate or OnDelete. The default is RollingUpdate
    Type DaemonSetUpdateStrategyType `json:"type,omitempty"`

    // Rolling-update configuration
    RollingUpdate *RollingUpdateDaemonSet `json:"rollingUpdate,omitempty"`
}

type RollingUpdateDaemonSet struct {
    // Maximum number of Pods that can be unavailable during an update; may be an absolute number or a percentage. Defaults to 1 and cannot be 0
    // During an update, the controller stops up to this many old Pods, starts new Pods in their place, then continues with the remaining Pods
    MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty"`
}
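For reference, here is a sketch of how these spec fields map to YAML; the concrete values below are illustrative, not taken from the example later in this article:

```yaml
# Illustrative fragment: tuning the update behaviour of a DaemonSet
spec:
  minReadySeconds: 10        # wait 10s after Ready before counting the Pod as available
  revisionHistoryLimit: 5    # keep 5 old revisions instead of the default 10
  updateStrategy:
    type: RollingUpdate      # or OnDelete
    rollingUpdate:
      maxUnavailable: 1      # an absolute number or a percentage such as "10%"
```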

A simple example

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ab
  labels:
    app: ab
spec:
  selector:
    matchLabels:
      app: ab
  template:
    metadata:
      labels:
        app: ab
    spec:
      containers:
      - name: ab
        image: jordi/ab
        args:
        - -n100
        - -c10
        - -k
        - -r
        - URL

This configuration uses the jordi/ab Docker image (ApacheBench) to implement a simple stress-test function.

Since ab is the command running in the container's foreground, the container exits as soon as the command finishes. The Pod then reports a failure, and the DaemonSet replaces it with a new one.

In practice, this scenario is better suited to a scheduled job, but a DaemonSet works here too, since it schedules one Pod per node without any extra configuration.
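As a sketch of the scheduled-job alternative, the same container could run under a CronJob; the schedule and names below are illustrative:

```yaml
# Illustrative: running the same benchmark as a scheduled job instead
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ab
spec:
  schedule: "0 * * * *"          # hourly
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never   # a finished container is expected, not restarted
          containers:
          - name: ab
            image: jordi/ab
            args: ["-n100", "-c10", "-k", "-r", "URL"]
```

Note that unlike a DaemonSet, a CronJob does not place one Pod per node; it trades per-node placement for scheduled execution.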

Here is the output when it runs:

$ kubectl get ds/ab -o wide
NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES     SELECTOR
ab     2         2         2       2            2           <none>          59s   ab           jordi/ab   app=ab

$ kubectl get pods -l app=ab -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
ab-5gsbr   1/1     Running   0          85s   172.40.0.2   tx     <none>           <none>
ab-g9rzd   1/1     Running   0          85s   172.32.0.3   ks     <none>           <none>

$ kubectl logs ds/ab
Found 2 pods, using pod/ab-5gsbr
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking www.mi0ffice.cn (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
...