This is the fifth day of my participation in the First Challenge 2022. For details: First Challenge 2022
ReplicaSet
ReplicationController is the K8S resource that keeps a specified number of pod replicas running and reschedules pods when a node fails. ReplicaSet was introduced to replace ReplicationController
How is ReplicationController different from ReplicaSet?
ReplicationController and ReplicaSet behave in much the same way, but ReplicaSet's pod selector is more expressive
- A ReplicationController can only match pods whose label exactly equals one specified value
- A ReplicaSet can match pods with any of several values for the same label, such as env=dev and env=pro
- A ReplicaSet can also match pods based simply on whether a label key is present or missing, regardless of its value
No matter how many label values it is given, a ReplicationController cannot match on a label key alone. For example, it cannot express env=*, but a ReplicaSet can
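As a sketch of what that looks like (the env key here is only an example), a ReplicaSet selector can use the Exists operator to match every pod that carries an env label, whatever its value:
selector:
  matchExpressions:
  - key: env
    operator: Exists
matchExpressions is covered in more detail further down.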
Write a ReplicaSet Demo
rs is short for ReplicaSet. Let's write a ReplicaSet demo
- The API version is apps/v1
The API version here is slightly different from what we wrote earlier, so a quick explanation
apps here is the API group
v1 is the version within the apps group, the same as the path we usually write
- There are 3 replicas
- The selector matches the label app=xmt-kubia (in a ReplicationController, the label selector is written directly under selector, without matchLabels)
- The image specified in the template is xiaomotong888/xmtkubia
kubia-rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: kubia-rs
spec:
replicas: 3
selector:
matchLabels:
app: xmt-kubia
template:
metadata:
labels:
app: xmt-kubia
spec:
containers:
- name: rs-kubia
image: xiaomotong888/xmtkubia
ports:
- containerPort: 8080
Deploy the rs
kubectl create -f kubia-rs.yaml
After deploying the rs, we can run kubectl get rs to view basic information about the newly created ReplicaSet
It has no effect on the existing 3 pods labeled app=xmt-kubia; the rs does not create additional pods, and that is expected
The rs looks up how many pods in the environment match its label selector and compares that with the desired count in its spec. If the actual number is lower than expected, it creates more pods; if it is higher, it removes the surplus
If you are interested, you can also run kubectl describe on the rs; the output is essentially the same as for an rc
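For example, using the rs name from the manifest above, we could inspect it and then rescale it; this is just a quick sketch, and the target of 5 replicas is arbitrary:
kubectl describe rs kubia-rs
kubectl scale rs kubia-rs --replicas=5
kubectl get pods -l app=xmt-kubia
After the scale command, the rs sees that only 3 matching pods exist and creates 2 more to reach the desired 5.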
More on ReplicaSet selectors
In the example above, the ReplicaSet uses matchLabels much like a ReplicationController uses its selector. We can make the selection richer by using matchExpressions
For example, when we add matchExpressions to the YAML, we could write it like this
Omit multiple lines...
selector:
matchExpressions:
- key: env
operator: In
values:
- dev
Omit multiple lines...
The YAML snippet above means:
- The matching label key is env
- The operator is In
- The value of env must be dev
key
The specific label key to match
operator
The operator to apply; there are four of them
- In
The value of the label must match one of the specified values
- NotIn
The value of the label must not match any of the specified values
- Exists
The pod must contain a label with the specified key, regardless of its value. The values field must not be set in this case
- DoesNotExist
The pod must not contain a label with the specified key. The values field must not be set here either
Note
If we specify multiple expressions, all of them must evaluate to true for a pod to match
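For instance, a selector like the sketch below (the disk key is made up for illustration) only matches pods whose env label is dev and that also carry a disk label of any value:
selector:
  matchExpressions:
  - key: env
    operator: In
    values:
    - dev
  - key: disk
    operator: Exists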
Delete the rs
Deleting an rs works the same as deleting an rc: by default, all pods managed by the rs are deleted as well. If we do not want the corresponding pods to be deleted, we can add --cascade=false or --cascade=orphan
With that flag specified, removing the rs has no effect on the pods
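Using the rs from the earlier demo, the command would look like this; the pods keep running and simply become unmanaged:
kubectl delete rs kubia-rs --cascade=orphan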
DaemonSet
ReplicationController and ReplicaSet both deploy a specific number of pods in a K8S cluster, but they do not care which nodes those pods run on.
Now let's look at DaemonSet, which is another resource in K8S
A DaemonSet is used when we want exactly one of our pods to run on every node
A DaemonSet has no notion of a replica count. It checks each node for a pod matching the label it manages: if one exists, it keeps it; if not, it creates one
Here is a simple diagram of what ReplicaSet and DaemonSet manage and how:
In the figure, the DaemonSet places one pod on each node, while the ReplicaSet only ensures that a total of 4 pods with the matching label exist somewhere in the cluster
A small DaemonSet example
DaemonSet resources also use the apps/v1 API version
- Matches the label app=ssd
- In the pod template, nodeSelector restricts the pod to nodes labeled disk=ssdnode
- The image is xiaomotong888/xmtkubia
daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kubia-ds
spec:
selector:
matchLabels:
app: ssd
template:
metadata:
labels:
app: ssd
spec:
nodeSelector:
disk: ssdnode
containers:
- name: rs-kubia
image: xiaomotong888/xmtkubia
ports:
- containerPort: 8080
Deploy DaemonSet
We deploy the DaemonSet with the following command
kubectl create -f daemonset.yaml
Check the ds (ds is short for DaemonSet)
kubectl get ds
Check the node status
kubectl get nodes
As can be seen from the output above, right after the DaemonSet is deployed all of its counts are 0
The reason is that the DaemonSet found no node labeled disk=ssdnode in the environment
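We can confirm what labels the nodes currently carry with:
kubectl get nodes --show-labels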
Label the specified node with disk=ssdnode
kubectl label node minikube disk=ssdnode
After labeling, we can see in the output that all the DaemonSet counts have changed to 1, and when listing pods we can also see the corresponding pod
The demo uses minikube, so there is only one node
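To confirm which node the DaemonSet pod was scheduled on, we can also run:
kubectl get pods -o wide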
Modify the node label again
If we change the node's label again, will the existing pod be terminated? Let's try it
kubectl label node minikube disk=hddnode --overwrite
As expected, once we modified the node's label, the pod was terminated, because the DaemonSet could no longer find any node in the environment matching the label specified in its configuration
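We can verify this from the DaemonSet side as well; after the relabel, its counts drop back to 0:
kubectl get ds kubia-ds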
Job
Let's introduce the Job resource in K8S
With a Job, we run a pod, and once the program inside finishes successfully the pod exits and the Job is done; the Job does not restart the pod
If an exception occurs while the pod managed by the Job is running, we can configure the Job to restart the pod
How ReplicaSet and Job manage pods is shown below:
As can be seen from the figure above, pods managed by ReplicaSet or Job resources are restarted when a node or the pod itself fails, without any manual intervention
But a pod that is not managed by any of these resources has nothing responsible for restarting it if an exception occurs
A Job example
A Job resource is also created with YAML
- The kind is Job
- The restartPolicy in the template is set to restartPolicy: OnFailure. It cannot be Always for a Job, because Always would keep restarting the pod
- The image is luksa/batch-job
This image is available on Docker Hub; the program inside it runs for about 2 minutes and then exits
myjob.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: batchjob
spec:
template:
metadata:
labels:
app: batchjob-xmt
spec:
restartPolicy: OnFailure
containers:
- name: xmt-kubia-batch
image: luksa/batch-job
Deploy the Job
kubectl create -f myjob.yaml
You can see that the Job resource has been deployed successfully and a pod is already being created
While the pod is running, let's take a look at its log
kubectl logs -f batchjob-gpckc
You can see the program's startup output in the log
After the pod has run for about 2 minutes, we can check again: the program has finished successfully, the pod is in the Completed state, and the Job is complete
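A quick way to confirm this (the pod name suffix will differ in your environment):
kubectl get jobs
kubectl get pods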
A Job can also be configured to run multiple pod instances, either sequentially or in parallel, depending on our business requirements
For sequential runs, we can write the YAML like this:
When defining the Job, configure completions. The Job then creates its pods one at a time: only after one pod finishes is the next one created
apiVersion: batch/v1
kind: Job
metadata:
name: batchjob
spec:
completions: 10
template:
Omit multiple lines...
For parallel runs, we can write the YAML like this:
To set up parallelism, we simply add the parallelism field to the YAML above, indicating how many pods may run at the same time
apiVersion: batch/v1
kind: Job
metadata:
name: batchjob
spec:
completions: 10
parallelism: 4
template:
Omit multiple lines...
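As a side note, parallelism can be adjusted while a Job is running. One way to do it, a sketch that assumes the Job name batchjob from the manifest above, is to patch the field directly:
kubectl patch job batchjob -p '{"spec":{"parallelism":6}}'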
CronJob
A Job runs its pods as soon as it is created, either once or a specified number of times. But what if we want pods to run periodically, on a schedule?
Of course that can be done in K8S: we can use the CronJob resource to achieve it
We only need to write the CronJob configuration in a YAML file and specify the schedule on which the pod should run
A CronJob demo
- The resource type is CronJob
- The schedule is "* * * * *", which means the pod is run once every minute
cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: mycronjob
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
metadata:
labels:
app: cronjob-xmt
spec:
restartPolicy: OnFailure
containers:
- name: cronjobxmt
image: luksa/batch-job
Here we set the pod to run once per minute. If we have other requirements, we can set the schedule ourselves. The five fields of the cron expression mean, from left to right:
- minute
- hour
- day of month
- month
- day of week
For example, if I want something to run at 8:00 every Monday, I could write:
"0 8 * * 1"
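A few more schedules, purely as illustration:
- "*/15 * * * *" runs every 15 minutes
- "0 0 * * *" runs every day at midnight
- "0 0 1 * *" runs at midnight on the first day of every month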
Deploy the CronJob
kubectl create -f cronjob.yaml
Check the CronJob
kubectl get cj
After deploying it, we can see the cj is up (cj is short for CronJob), but no corresponding pod seems to have been created yet
That is expected: it can take up to 1 minute before the first pod is created
Looking at the cj again, we can see ACTIVE is already 1, indicating that a pod has been created by the CronJob
The pod was created successfully and is running, just as expected
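Behind the scenes, the CronJob creates a new Job object each time the schedule fires, and that Job in turn creates the pod. We can watch the Jobs appear with:
kubectl get jobs --watch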
When using CronJob resources, we may run into the following situation: the Job or pod starts relatively late compared to its scheduled time. For that we can set a deadline
For example, if the pod must start no more than 20 seconds after the scheduled time, we can set the value to 20; if it starts later than that, the run is considered to have failed
We can write yaml like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: mycronjob
spec:
schedule: "* * * * *"
startingDeadlineSeconds: 20
jobTemplate:
Omit multiple lines...
That's ReplicaSet, DaemonSet, Job, and CronJob. Let's put them into practice
That's it for today. If anything here is off, please point it out
Feel free to like, follow, and bookmark
Friends, your support and encouragement are what keep me sharing and improving the quality of these posts
All right, that’s it for this time
Technology is open, and so should our mindset be. Embrace change, live in the sunlight, and keep moving forward.
I am Nezha, welcome to like, see you next time ~