Containers fall into two categories:

  1. Online service-class containers

    The service must be provided continuously without interruption, so the container needs to be running at all times.
  2. Offline job-class containers

    The container can exit once a one-off task, such as collecting log data, is completed.

Job

cat job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: job-test
spec:
  template:
    metadata:
      name: job-test
    spec:
      containers:
      - name: test-job
        image: busybox
        command: ["echo", "test job!"]
      restartPolicy: Never

For a Job, the restartPolicy parameter can only be Never or OnFailure.

Start the Job

[root@master01 ~]# kubectl apply -f job.yaml 
job.batch/job-test created

Running state

[root@master01 ~]# kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
job-test-t2gbw   0/1     ContainerCreating   0          12s

[root@master01 ~]# kubectl get pod
NAME             READY   STATUS      RESTARTS   AGE
job-test-t2gbw   0/1     Completed   0          24s

Viewing the output

[root@master01 ~]# kubectl logs job-test-t2gbw
test job!

What happens when a Job fails?

Make the Job fail to start by setting an invalid command:

        command: ["echo123", "test job!"]
      restartPolicy: Never

With restartPolicy: Never, K8s keeps creating new Pods after each failure, but not indefinitely: the default spec.backoffLimit: 6 caps the number of retries.

[root@master01 ~]# kubectl get pod
NAME             READY   STATUS               RESTARTS   AGE
job-test-2zrt4   0/1     ContainerCannotRun   0          87s
job-test-5z884   0/1     ContainerCannotRun   0          77s
job-test-cxgmm   0/1     ContainerCannotRun   0          107s
job-test-nt9gt   0/1     ContainerCannotRun   0          57s
job-test-wxv7l   0/1     ContainerCreating    0          17s
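The retry cap can be tuned explicitly. A minimal sketch (the value 3 here is just an illustration; the field sits directly under the Job's spec, and 6 is the default):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-test
spec:
  backoffLimit: 3          # give up after 3 failed retries instead of the default 6
  template:
    spec:
      containers:
      - name: test-job
        image: busybox
        command: ["echo123", "test job!"]   # still an invalid command
      restartPolicy: Never
```

Once the limit is reached, the Job is marked Failed and no further Pods are created.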

With restartPolicy: OnFailure, K8s does not create new Pods; it keeps restarting the same Pod instead.

[root@master01 ~]# kubectl get pod
NAME             READY   STATUS             RESTARTS   AGE
job-test-bd695   0/1     CrashLoopBackOff   4          3m15s

What if the Job doesn't report an error but never ends?

Setting spec.activeDeadlineSeconds: 100 terminates all of the Job's Pods after 100 seconds. The Pods' termination reason is DeadlineExceeded.
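As a sketch, the field goes directly under the Job's spec (the sleep command here is a hypothetical stand-in for a task that never finishes):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-test
spec:
  activeDeadlineSeconds: 100   # terminate all Pods of this Job after 100 seconds
  template:
    spec:
      containers:
      - name: test-job
        image: busybox
        command: ["sleep", "3600"]   # simulates a task that never ends
      restartPolicy: Never
```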

Parallel Job

cat job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: job-test
spec:
  parallelism: 2
  ......

The parallelism: 2 parameter allows at most two Pods to run at the same time.
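For reference, assuming the elided part of the manifest is the same template as the earlier Job, the full file might look like:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-test
spec:
  parallelism: 2               # run at most 2 Pods concurrently
  template:
    spec:
      containers:
      - name: test-job
        image: busybox
        command: ["echo", "test job!"]
      restartPolicy: Never
```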

Viewing the running state

[root@master01 ~]# kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
job-test-6ttz9   0/1     ContainerCreating   0          8s
job-test-mknjc   0/1     ContainerCreating   0          8s

[root@master01 ~]# kubectl get pod
NAME             READY   STATUS      RESTARTS   AGE
job-test-6ttz9   0/1     Completed   0          68s
job-test-mknjc   0/1     Completed   0          68s

[root@master01 ~]# kubectl get job
NAME       COMPLETIONS   DURATION   AGE
job-test   0/1 of 2      10s        10s
[root@master01 ~]# kubectl get job
NAME       COMPLETIONS   DURATION   AGE
job-test   2/1 of 2      19s        66s

Add the completions parameter

apiVersion: batch/v1
kind: Job
metadata:
  name: job-test
spec:
  parallelism: 2
  completions: 8

The completions: 8 parameter means the Job succeeds only after eight Pods have finished successfully; Pods that fail are handled according to restartPolicy. It can be thought of as the Job's completion check.

Viewing the running state

[root@master01 ~]# kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
job-test-6xwpt   0/1     ContainerCreating   0          8s
job-test-grk4q   0/1     ContainerCreating   0          8s

[root@master01 ~]# kubectl get job
NAME       COMPLETIONS   DURATION   AGE
job-test   0/8           11s        11s

[root@master01 ~]# kubectl get job
NAME       COMPLETIONS   DURATION   AGE
job-test   6/8           76s        76s

[root@master01 ~]# kubectl get pod
NAME             READY   STATUS      RESTARTS   AGE
job-test-2jxqx   0/1     Completed   0          28s
job-test-6xwpt   0/1     Completed   0          90s
job-test-gndq7   0/1     Completed   0          70s
job-test-grk4q   0/1     Completed   0          90s
job-test-n96k6   0/1     Completed   0          70s
job-test-np4n7   0/1     Completed   0          49s
job-test-scc9c   0/1     Completed   0          28s
job-test-zcnbp   0/1     Completed   0          49s

[root@master01 ~]# kubectl get job
NAME       COMPLETIONS   DURATION   AGE
job-test   8/8           82s        91s

CronJob

A CronJob periodically creates Job objects.

cat cronjob.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "test cron job!"]
          restartPolicy: OnFailure

Viewing the output

[root@master01 ~]# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        67s             3m45s

[root@master01 ~]# kubectl get jobs
NAME               COMPLETIONS   DURATION   AGE
hello-1595319480   1/1           27s        3m23s
hello-1595319540   1/1           19s        2m23s
hello-1595319600   1/1           24s        83s
hello-1595319660   0/1           22s        22s

[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS              RESTARTS   AGE
hello-1595319480-2kl8h   0/1     Completed           0          3m21s
hello-1595319540-gwthv   0/1     Completed           0          2m21s
hello-1595319600-54548   0/1     Completed           0          81s
hello-1595319660-8xqn6   0/1     ContainerCreating   0          20s

As you can see, the Job object is executed periodically.

Conclusion

The Job and CronJob controllers handle offline workloads that do not need a long-running Pod.

Pods managed by these controllers do not provide continuous online service: once the task completes, the container exits. This makes them well suited to scheduled data-processing work.

To contact me

Wechat official account: IT Struggling Youth