1. Controller description

Pod classification:

  • Autonomous Pods: Pods that have no controller (no manager). Whether they exit abnormally or normally, no replacement Pod is created for them.

  • Controller-managed Pods: the controller manages these Pods throughout their life cycle and always maintains the desired number of Pod replicas. This is the type used in day-to-day work, because autonomous Pods cannot guarantee stability.

1.1 Deployment introduction

Deployment provides a declarative way to define Pods and ReplicaSets: you only describe the target state in the Deployment, and the Deployment controller operates on the ReplicaSet to drive it toward that desired state. Deployment is designed to replace ReplicationController and make application management more convenient.

Typical application scenarios are as follows:

  • Define a Deployment to create a ReplicaSet and Pods

  • Roll out updates and roll back applications

  • Scale up and down

  • Pause and resume a Deployment

The flowchart for creating an RS is as follows:

To emphasize the first point above: a Deployment does not create or manage Pods directly; it creates and manages Pods through a ReplicaSet. If the Deployment is named nginx-deploy, the RS it creates is named nginx-deploy-xxx, where xxx is a random suffix.
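A quick way to confirm this ownership chain (not shown in the original; the resource names below are placeholders) is to check the "Controlled By" field in the describe output:

# The Pod is controlled by the ReplicaSet, and the ReplicaSet by the Deployment
kubectl describe pod <nginx-deploy-xxx-pod> | grep "Controlled By"
kubectl describe rs <nginx-deploy-xxx> | grep "Controlled By"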

1.2 DaemonSet introduction

A DaemonSet ensures that a copy of a Pod runs on all (or some) Nodes. When a node joins the cluster, a Pod is added to it; when a node is removed from the cluster, its Pod is reclaimed. Deleting a DaemonSet deletes all the Pods it created.

If multiple such Pods need to run on each Node, multiple DaemonSets can be defined to achieve this.

Typical application scenarios of DaemonSet:

  • Run a cluster storage daemon on each Node, such as glusterd or ceph

  • Run a log collection daemon on each Node, such as Fluentd or Logstash

  • Run a monitoring daemon on each Node, such as Prometheus Node Exporter or collectd

1.3 StatefulSet introduction

StatefulSet is used to manage stateful applications. It was created to solve the problem of stateful services, whereas Deployment and ReplicaSet are better suited to deploying stateless services.

A StatefulSet maintains a sticky identity for each of its Pods: every Pod keeps a permanent ID no matter how it is rescheduled.

StatefulSet application scenarios (a minimal manifest sketch follows the list):

  • Stable, unique network identifiers.

  • Stable, persistent storage.

  • Ordered, graceful deployment and scaling.

  • Ordered, automated rolling updates.
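The StatefulSet instance test is deferred to the storage chapter, but as a rough sketch (the name and the headless Service it references are illustrative, reusing the mynginx image from the examples below), a minimal StatefulSet manifest looks like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-headless"   # headless Service that gives each Pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: mynginx
        image: hub.test.com/library/mynginx:v1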

1.4 Job introduction

A Job runs a batch task that executes only once; it ensures that one or more Pods of the batch task complete successfully.

You can set the required number of successful completions, and the Job tracks the number of successfully completed Pods. When that number reaches the specified threshold, the Job is finished.

You do not need to worry about whether the program runs successfully: if a Pod does not exit successfully with code 0, the Job re-runs it.

1.5 CronJob introduction

A CronJob is like crontab on Linux. Its schedule is written in Cron format, and it can run a Job once at a given point in time or run a Job periodically on a given schedule.

Typical application scenarios:

Creating Jobs that run periodically, most commonly database backups.

2. Controller instance creation tests

2.1 RS and Deployment instance test

In newer versions of K8s, ReplicationController (RC) is no longer used; ReplicaSet (RS) is used in its place.

RS supports set-based label selectors:

Pods (and the containers inside them) are labeled when they are created. When you need to delete them or perform other operations, you can select them by label.

Let's demonstrate RS labels.

The rs_frontend.yaml manifest is as follows:

apiVersion: apps/v1
kind: ReplicaSet
metadata:                # RS metadata
  name: frontend         # RS name
  labels:                # custom labels
    app: guestbook       # RS label
    tier: frontend       # RS label
spec:
  replicas: 3            # number of Pod replicas
  selector:
    matchLabels:         # matching labels
      tier: frontend     # matches the frontend label
  template:              # Pod template
    metadata:            # Pod metadata
      labels:            # custom labels
        tier: frontend   # frontend label
    spec:
      containers:
      - name: mynginx                           # container name
        image: hub.test.com/library/mynginx:v1  # image address

Create the RS

[root@k8s-master01 ~]# kubectl create -f rs_frontend.yaml 
replicaset.apps/frontend created

View the Pod and RS status

View the Pods and their labels; all three Pods carry the tier=frontend label.
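The RS and the Pods with their labels can be listed with:

kubectl get rs
kubectl get pod --show-labels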

Next, modify the label of one of the Pods to test.

[root@k8s-master01 ~]# kubectl label pod frontend-8wsl2 tier=frontend1 --overwrite
pod/frontend-8wsl2 labeled

Check the Pod status and labels again

At this point there are four Pods, because the RS matches Pods by the tier=frontend label. When the RS detects that a Pod with that label is missing, it creates a new Pod carrying its label to meet the expected replica count. The Pod frontend-8wsl2 is no longer under the RS's control, because its label is now tier=frontend1.

After the RS is deleted, the effect of labels can be seen even more intuitively.

[root@k8s-master01 ~]# kubectl delete rs frontend
replicaset.apps "frontend" deleted
[root@k8s-master01 ~]# kubectl get pod --show-labels

The other three Pods have been removed; only the Pod labeled tier=frontend1 remains, because its label does not match the RS selector, so the RS does nothing to it.

Deployment instance test

Here is a Deployment example. It actually creates a ReplicaSet that starts three Nginx Pods.

The nginx-deployment.yaml manifest is shown below:

apiVersion: apps/v1
kind: Deployment
metadata:                  # Deployment metadata
  name: nginx-deployment   # Deployment name
  labels:
    app: nginx
spec:
  replicas: 3              # number of Pod replicas
  selector:
    matchLabels:
      app: nginx           # matches the nginx label
  template:
    metadata:
      labels:
        app: nginx         # Pod label
    spec:
      containers:
      - name: mynginx
        image: hub.test.com/library/mynginx:v1  # image address
        ports:
        - containerPort: 80

Create a Deployment

[root@k8s-master01 ~]# kubectl apply -f nginx-deployment.yaml  --record
deployment.apps/nginx-deployment created

The --record flag records the command that was run, which makes it convenient to see what changed in each revision when rolling back.

Check the Deployment, RS, and Pod status; all Pods are Running.

View the Pod details and test access.

[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP            NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-644f95f9bc-fb6wv   1/1     Running   0          7m2s   10.244.1.63   k8s-node01   <none>           <none>
nginx-deployment-644f95f9bc-hbhj7   1/1     Running   0          7m2s   10.244.2.46   k8s-node02   <none>           <none>
nginx-deployment-644f95f9bc-j8q2k   1/1     Running   0          7m2s   10.244.1.64   k8s-node01   <none>           <none>

Deployment scaling operation

[root@k8s-master01 ~]# kubectl scale deployment nginx-deployment --replicas=10

As you can see, the Deployment has been scaled out to 10 Pods. Scaling a Deployment is very simple: a single command scales it horizontally, relieving the pressure of serving external traffic.

The above is manual scaling; can the Deployment be scaled automatically according to Pod load?

Yes, K8s provides the HPA (Horizontal Pod Autoscaler) feature; see my notes on HPA for details.
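As a rough sketch (not part of this test; the thresholds are illustrative, and a metrics source such as metrics-server is required), an HPA for this Deployment could be created with kubectl autoscale:

# Keep nginx-deployment between 3 and 10 replicas, targeting 80% CPU utilization
kubectl autoscale deployment nginx-deployment --min=3 --max=10 --cpu-percent=80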

Deployment image update operation

[root@k8s-master01 ~]# kubectl set image deployment/nginx-deployment mynginx=hub.test.com/library/mynginx:v2

mynginx here is the container name.
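To watch the rolling update in progress (not shown in the original), the rollout status can be followed with:

kubectl rollout status deployment/nginx-deployment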

View the RS status: a new RS has been created, and the old RS is kept so it can be rolled back to later.

Access one of the Pods to test.

Deployment rollback operation

[root@k8s-master01 ~]# kubectl rollout undo deployment/nginx-deployment 
deployment.apps/nginx-deployment rolled back

It has been rolled back to the previous RS:

[root@k8s-master01 ~]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5cf56df4d6   0         0         0       8m2s
nginx-deployment-644f95f9bc   10        10        10      37m

By default, the rollback goes to the previous revision; access it to test.

You can also roll forward or roll back to a specified revision.

View the revision history

[root@k8s-master01 ~]# kubectl rollout history deployment/nginx-deployment

The command to roll back to a specified revision is as follows (not tested here):

[root@k8s-master01 ~]# kubectl rollout undo deployment/nginx-deployment --to-revision=3

Revision cleanup policy

The .spec.revisionHistoryLimit field of a Deployment specifies how many old ReplicaSets to retain; the rest are garbage collected in the background. By default, this value is 10.
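As a sketch of where this field sits (the limit value is illustrative), it goes directly under the Deployment spec, alongside replicas:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  revisionHistoryLimit: 5   # keep only the 5 most recent old ReplicaSets
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: mynginx
        image: hub.test.com/library/mynginx:v1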

2.2 DaemonSet instance test

The daemonset.yaml manifest is listed below:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: daemonset-example
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      containers:
      - name: mynginx
        image: hub.test.com/library/mynginx:v1

Create DaemonSet

[root@k8s-master01 ~]# kubectl create -f daemonset.yaml 
daemonset.apps/daemonset-example created

Check the DaemonSet Pod status

One Pod runs on each Node, but none is created on the Master, because of the taint concept in K8s: the master node carries a taint that keeps ordinary Pods off it.
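The master's taint can be checked as follows (the value in the comment is the typical kubeadm default and may differ in your cluster):

# Typically prints something like: node-role.kubernetes.io/master:NoSchedule
kubectl describe node k8s-master01 | grep Taints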

Delete the Pod on one of the nodes, and the DaemonSet automatically creates a new Pod to meet its expectation of one copy per Node, as shown in the figure.

2.3 Job instance test

The job.yaml manifest is listed below; the Job calculates pi to 2000 decimal places and prints the result.

apiVersion: batch/v1
kind: Job
metadata:
  name: pi               # Job name
spec:
  template:
    spec:
      containers:
      - name: pi         # container name
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4        # number of retries before the Job is marked as failed

Create a Job and view its status

[root@k8s-master01 ~]# kubectl create -f job.yaml 
job.batch/pi created
[root@k8s-master01 ~]# kubectl get pod -w

You can watch the Job's Pod go from Running to Completed; the Job has finished the calculation.

View specific results
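The result can be read from the Job's Pod log, for example with one of the following (the exact Pod name suffix will differ):

kubectl logs job/pi          # let kubectl pick one of the Job's Pods
kubectl logs <pi-pod-name>   # or read a specific Pod's log directly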

Other Parameters

.spec.completions specifies how many Pods must complete successfully for the Job to be considered complete. (Default: 1)

.spec.parallelism specifies how many Pods may run in parallel. (Default: 1)

.spec.activeDeadlineSeconds limits the total lifetime of the Job: once the Job has been running for the set number of seconds, all of its running Pods are terminated. (Unit: seconds)
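As a sketch (the values are illustrative, not from the test above), these fields sit directly under the Job spec:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-batch                 # illustrative name
spec:
  completions: 5                 # the Job is complete after 5 Pods succeed
  parallelism: 2                 # run at most 2 Pods at the same time
  activeDeadlineSeconds: 600     # terminate all running Pods once the Job has run for 600s
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never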

2.4 CronJob instance test

The cronjob.yaml manifest is shown below:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"       # run every minute
  jobTemplate:                  # template of the Job to run
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: hub.test.com/library/busybox:latest
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

If you are unsure about the schedule expression for a scheduled task, you can verify it with this small tool website.

Crontab schedule calculator: tool.lu/crontab/

Create a CronJob

[root@k8s-master01 ~]# kubectl apply -f cronjob.yaml 
cronjob.batch/hello created

Check the CronJob status

[root@k8s-master01 ~]# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        21s             21m

View the status of the Jobs it creates. By default, the records of the three most recent successful Jobs are kept.
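The Jobs spawned by the CronJob can be listed with:

kubectl get jobs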

Looking at the logs of a couple of the Pods, you can see that one Job runs every minute, exactly as the schedule specifies.

[root@k8s-master01 ~]# kubectl logs hello-27053804-jc6jd
Wed Jun  9 08:44:00 UTC 2021
Hello from the Kubernetes cluster
[root@k8s-master01 ~]# kubectl logs hello-27053805-hfr5l
Wed Jun  9 08:45:00 UTC 2021
Hello from the Kubernetes cluster

Other Parameters

.spec.successfulJobsHistoryLimit sets how many successful Jobs to keep. (Default: 3)

.spec.failedJobsHistoryLimit sets how many failed Jobs to keep. (Default: 1)

.spec.concurrencyPolicy sets the concurrency policy, i.e. how concurrent executions of a Job are handled: Allow, Forbid, or Replace. (Default: Allow)
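As a sketch (the values are illustrative), these fields sit directly under the CronJob spec, next to schedule:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3   # keep the 3 most recent successful Jobs
  failedJobsHistoryLimit: 1       # keep only the most recent failed Job
  concurrencyPolicy: Forbid       # skip a new run if the previous Job is still running
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: hub.test.com/library/busybox:latest
            command: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure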

Reading this far, you may notice that there is no StatefulSet instance test. That is because StatefulSet needs to work together with K8s storage, so it will be demonstrated later when we cover storage deployment.