To make it easier for everyone to learn Kubernetes, I have put together a series of articles covering the basics of Kubernetes, installation steps, and the rest of the Kubernetes ecosystem. I believe that after reading this series you will have a deeper understanding of Kubernetes.

0. Summary

Kubernetes provides many Controller resources to manage and schedule Pods, including ReplicationController, ReplicaSet, Deployment, StatefulSet, DaemonSet, and more. This article describes the functions and usage of these controllers. A controller is a Kubernetes resource that makes Pods easier to manage. Think of a controller as a process manager responsible for maintaining the state of its processes: if a process dies, it restarts it; if more processes are needed, it adds them; and it can scale the number of processes up or down based on their resource consumption. The difference is that in Kubernetes, controllers manage Pods rather than processes. A controller monitors the current state of each resource object in the cluster in real time through the interface provided by the API Server, and when faults of various kinds push the system away from the desired state, it tries to restore the system to that state.

1. ReplicationController

ReplicationController is often abbreviated to RC (or rcs in the plural). Like RS, an RC keeps the number of Pods at the expected value, and Pods created by an RC are automatically restarted on failure. An RC orchestration file must have the apiVersion, kind, metadata, .spec.replicas, and .spec.template fields. .spec.template.spec.restartPolicy can only be Always, which is also the default when the field is left empty. Take a look at an RC orchestration file.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/nginx
        ports:
        - containerPort: 80

1.1 Common Management Operations

  • Delete the RC together with its Pods: deleting an RC with kubectl delete also deletes the Pods it created
  • Delete only the RC: kubectl delete --cascade=false removes the RC but leaves its Pods running
  • Isolate Pods from the RC by modifying their labels (see the command sketch after this list)
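A minimal sketch of these operations, assuming the nginx RC from the manifest above; the Pod name nginx-abcde is a hypothetical placeholder.

kubectl delete rc nginx                    # deletes the RC and its Pods
kubectl delete rc nginx --cascade=false    # deletes only the RC; its Pods keep running
kubectl label pod nginx-abcde app=debug --overwrite    # relabel a Pod so the RC no longer selects it; the RC starts a replacement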

1.2 Common Scenarios

  • Rescheduling: the RC controller ensures the cluster is always running exactly as many Pods as you set
  • Scaling: scale up or down simply by changing the replicas field
  • Rolling updates: use the command-line tool kubectl rolling-update to perform a rolling upgrade (see the sketch after this list)
  • Multiple release tracks: combined with labels and Services, an RC enables canary releases
  • Working with Services
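A sketch of a rolling upgrade with the legacy kubectl rolling-update command, assuming the nginx RC above; note that this command only works for ReplicationControllers and has since been superseded by Deployments.

kubectl rolling-update nginx --image=docker.io/nginx:1.9.1    # replace Pods one by one with the new image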

Note that an RC has no health-detection capability and no autoscaling capability; it only maintains the Pod count.

2. ReplicaSet

RS is the next generation of RC and differs only in label selection: RS supports set-based selectors, while RC supports only equality-based selectors. A ReplicaSet ensures that a specified number of Pod replicas are running in the cluster at any given time. Take a look at an RS orchestration file.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: docker.io/nginx
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

apiVersion, kind, metadata, .spec.replicas, .spec.template, and .spec.selector are the required fields of the orchestration file.

Although a ReplicaSet can be used on its own, Deployment is now the recommended way to orchestrate (create, delete, update) Pods. Deployment is a higher-level abstraction that provides RS management on top; unless you need custom update orchestration or do not want all Pods to be updated, you will rarely use RS directly.

2.1 Common Management Operations

  • Delete the RS together with its Pods: kubectl delete <rs-name>
  • Delete only the RS: kubectl delete <rs-name> --cascade=false
  • Pod isolation: modify a Pod's labels to remove it from the RS for testing or data recovery
  • HPA autoscaling: a ReplicaSet can be the scale target of a HorizontalPodAutoscaler, as in the following manifest
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-scaler
spec:
  scaleTargetRef:
    kind: ReplicaSet
    name: nginx
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
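Applying the HPA and checking it is straightforward; a minimal sketch, assuming the manifest above is saved as hpa.yaml:

kubectl apply -f hpa.yaml
kubectl get hpa nginx-scaler    # shows current/target CPU utilization and replica count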

3. Deployments

A Deployment manages ReplicaSets and Pods: it always creates an RS first, and the RS in turn creates the Pods. The RS created by a Deployment is named [deployment-name]-[pod-template-hash-value], and manually maintaining RSes created by a Deployment is not recommended. A Deployment rollout happens only when the Pod template is updated.

Several typical Deployment scenarios are described below.

3.1 Creating a Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # Deployment name
  labels:
    app: nginx
spec:
  replicas: 3                   # number of replicas
  selector:                     # Pod selection rule
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:                   # Pod labels
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/nginx:1.7.9
        ports:
        - containerPort: 80
Apply the manifest and check the rollout status:
kubectl apply -f dp.yaml
kubectl get deployment nginx-deployment
kubectl rollout status deployment nginx-deployment
kubectl get pods --show-labels

3.2 Updating the Deployment

If you need to update a Deployment that has already been created, there are two methods: modify the orchestration file and apply it, or update the Deployment directly from the command line.

Method 1: update the image version from the command line.

kubectl set image deployment/nginx-deployment nginx=docker.io/nginx:1.9.1

Method 2: update via the orchestration file. First modify the file, then apply it:

kubectl apply -f dp.yaml

If the Deployment has already been created, updating it creates a new RS that gradually replaces the old one (new Pods are created at a controlled rate, and old ones are deleted only after the new ones are confirmed to be running properly). So if you look at the Pods during a rollout, you may find that their total number temporarily exceeds the number specified by replicas. If a Deployment is still being created when you update it, the newly created Pods are killed immediately and creation of the updated Pods begins.

3.3 Rolling Back updates

Sometimes there are problems with the deployed version and we need to roll back to a previous one; Deployment provides this as well. By default, Deployment revisions are kept in the system, so we can roll back accordingly.

Only updates to .spec.template trigger a new revision. Scaling alone does not record history, so rolling back does not change the number of Pods.

kubectl apply -f dp.yaml
kubectl set image deployment/nginx-deployment nginx=docker.io/nginx:1.91
kubectl rollout status deployment/nginx-deployment
kubectl get rs
kubectl get pods
kubectl rollout history deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment --revision=2
kubectl rollout undo deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=2
kubectl get deployment

By default, 10 revisions are kept; this can be changed via .spec.revisionHistoryLimit.

3.4 Scaling

# kubectl scale deployment nginx-deployment --replicas=5

If autoscaling is enabled in the cluster, you can also set the conditions under which the Deployment scales automatically.

# kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80

3.5 Pausing a Deployment

After a Deployment has been created, we can pause and then resume its rollout, performing operations in between without triggering intermediate rollouts.

$ kubectl rollout pause deployment/nginx-deployment
$ # ... perform operations here, e.g. image or resource updates ...
$ kubectl rollout resume deployment/nginx-deployment

3.6 Deployment status

A Deployment can be in one of several states; you can inspect them with the commands shown after the list below.

  • Progressing

    • A new ReplicaSet is created
    • Scaling up of the new ReplicaSet is being performed
    • Scaling down is being performed on the old ReplicaSet
    • The new Pods are ready
  • Complete

    • All replicas have been updated to the latest version
    • All replicas are available
    • No old replicas are running
  • Failed

    • Insufficient quota
    • Readiness probe failures
    • Image pull failures
    • Insufficient permissions
    • Application runtime errors
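A sketch of how to inspect these states: kubectl rollout status follows the rollout as it progresses, and the Conditions section of kubectl describe shows the Progressing/Available condition types.

kubectl rollout status deployment/nginx-deployment
kubectl describe deployment nginx-deployment    # see the Conditions section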

3.7 Parameters

  • Strategy: .spec.strategy has two options, Recreate and RollingUpdate, the latter being the default. Recreate kills all old Pods before creating new ones; RollingUpdate creates new Pods while old ones are being killed
  • Max Unavailable: .spec.strategy.rollingUpdate.maxUnavailable, the maximum number or percentage of Pods allowed to be unavailable during the update. Defaults to 25%
  • Max Surge: .spec.strategy.rollingUpdate.maxSurge, the maximum number or percentage of Pods allowed above replicas during the update. Defaults to 25%
  • Progress Deadline Seconds: .spec.progressDeadlineSeconds, optional; how many seconds the Deployment may fail to progress before the system reports it as failed
  • Min Ready Seconds: .spec.minReadySeconds, the minimum number of seconds a newly created Pod must be ready before it is considered available
  • Revision History Limit: .spec.revisionHistoryLimit, optional; how many old revisions to retain for rollback (see the sketch after this list)
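A sketch of where these parameters sit in a Deployment spec; the 25% values are the defaults mentioned above, while minReadySeconds, progressDeadlineSeconds, and revisionHistoryLimit are illustrative.

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  minReadySeconds: 10
  progressDeadlineSeconds: 600
  revisionHistoryLimit: 10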

4. StatefulSets

I have covered StatefulSets in a dedicated article, which you can refer to here.

5. DaemonSet

A DaemonSet ensures that a copy of a Pod runs on all nodes, and a Pod is created on a node as soon as it joins the cluster. Typical scenarios include running a storage daemon (glusterd, ceph), a log-collection daemon (fluentd, logstash), or a monitoring daemon (Prometheus Node Exporter, collectd, Datadog, etc.). By default, DaemonSet Pods are scheduled by the DaemonSet controller; if the nodeAffinity parameter is set, they are scheduled by the default scheduler instead.

A typical orchestration file is as follows.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-es
  template:
    metadata:
      labels:
        name: fluentd-es
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-es
        image: docker.io/fluentd:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
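Applying and verifying the DaemonSet; a minimal sketch, assuming the manifest above is saved as ds.yaml:

kubectl apply -f ds.yaml
kubectl get daemonset fluentd-es -n kube-system
kubectl get pods -n kube-system -l name=fluentd-es -o wide    # one Pod per node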

6. Garbage Collection

Some objects in Kubernetes have dependencies; for example, an RS owns a set of Pods. The GC in Kubernetes deletes objects that once had an owner but no longer do. Owned objects carry a metadata.ownerReferences field pointing to their owner. Since Kubernetes 1.8, the system automatically sets ownerReferences on objects created by ReplicationController, ReplicaSet, StatefulSet, DaemonSet, Deployment, Job, and CronJob.

We have mentioned cascading deletion for the various controllers above, and this field is what makes it work. Cascading deletion comes in two forms, Foreground and Background. In Foreground mode, once cascading deletion is selected, the GC first deletes every dependent object whose ownerReference has blockOwnerDeletion=true, and deletes the owner object last. In Background mode, the owner object is deleted immediately, and the GC removes the dependent objects in the background. If we choose not to cascade when deleting an RS, the Pods it created become orphans with no owner.
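The propagation policy can be chosen per delete request. A sketch: kubectl uses background cascading by default, and the raw API call below requests foreground deletion explicitly, assuming kubectl proxy is listening on localhost:8080 and reusing the nginx RS name from the earlier examples.

kubectl delete rs nginx                   # background cascading delete (default)
kubectl delete rs nginx --cascade=false   # orphan the Pods, delete only the RS
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/nginx \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  -H "Content-Type: application/json"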

7. Jobs

A Job creates one or more Pods to run a specific task and ends when the number of Pods that complete the task successfully reaches the configured value. Deleting a Job deletes all the Pods it created.

A typical orchestration file.

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 5                # number of successful completions required, executed in sequence
  parallelism: 2                # number of Pods to run in parallel
  backoffLimit: 4               # number of retries allowed on failure
  activeDeadlineSeconds: 100    # maximum time the Job is allowed to run
  template:
    spec:
      containers:
      - name: pi
        image: docker.io/perl
        command: ["perl"."-Mbignum=bpi"."-wle"."print bpid(2000)"]
      restartPolicy: Never
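Running the Job and checking its results; a minimal sketch, assuming the manifest above is saved as job.yaml (pi-xxxxx is a placeholder Pod name):

kubectl apply -f job.yaml
kubectl get jobs
kubectl get pods -l job-name=pi    # Pods created by the Job carry the job-name label
kubectl logs pi-xxxxx              # prints pi to 2000 decimal places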

There are three main types of jobs

  • A non-parallel Job, which typically starts only one Pod to perform the task
  • A parallel Job with a fixed completion count: set .spec.completions to a non-zero value
  • A parallel Job working off a queue: leave .spec.completions unset and set .spec.parallelism

Note that even if you specify .spec.parallelism = 1, .spec.completions = 1, and .spec.template.spec.restartPolicy = "Never", the same program may occasionally be started twice. This is a pitfall to watch out for.

The parallel Jobs provided by Kubernetes are not well suited to scientific computing or tightly coupled tasks; they fit independent tasks such as sending email, rendering, and file transcoding.

8. CronJob

A CronJob automatically creates Job objects on a schedule. Like crontab, it executes a task periodically, creating a Job object for each run. It can happen that two Jobs are created for one run, or none at all, so Jobs should be kept idempotent.

For every CronJob, the CronJob controller checks how many schedules it missed between its last scheduled time and now. If there are more than 100 missed schedules, it does not start the Job and logs the error.

A typical orchestration file.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: docker.io/busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
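Creating the CronJob and watching it fire; a minimal sketch, assuming the manifest above is saved as cronjob.yaml:

kubectl apply -f cronjob.yaml
kubectl get cronjob hello
kubectl get jobs --watch    # a new Job appears roughly every minute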

All the orchestration files in this article have been uploaded to my GitHub, where you can download them.

References

  1. Kubernetes ReplicaSet
  2. Running Automated Tasks with a CronJob