A Pod (a group of containers) is the smallest scheduling unit in Kubernetes and can be created directly from a YAML definition file. However, a Pod by itself has no self-healing capability: if the node hosting the Pod fails, if scheduling itself fails, or if the Pod is evicted because the node runs out of resources or enters maintenance, the Pod is deleted and cannot recover on its own.
For this reason, in Kubernetes we generally do not create Pods directly, but manage them through controllers.
Controller
A controller provides the following features for Pods:
- Horizontal scaling: controls the number of Pod replicas that run
- Rollout: rolling version updates
- Self-healing: when a node fails, the controller automatically schedules a Pod with exactly the same configuration on another node to replace the Pod on the failed node
Controllers supported in Kubernetes include:
- ReplicationController: the original controller used to maintain a stable set of Pod replicas
- ReplicaSet: an updated version of ReplicationController that additionally supports set-based selectors; it does not support rolling updates
- Deployment: owns a ReplicaSet and can update the ReplicaSet and its Pods in a declarative, rolling-update manner; recommended for stateless applications
- StatefulSet: used to manage stateful applications
- DaemonSet: runs a copy of a specified Pod on each node as a daemon, for example for node monitoring or collecting logs on the node
- CronJob: creates Jobs on a schedule, similar to crontab in Linux
- Job: executes a one-off task
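For example, the Job/CronJob pair at the end of the list can be sketched with a minimal CronJob manifest (the name, schedule, and command are arbitrary examples; on clusters older than v1.21 the apiVersion is batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello              # hypothetical name
spec:
  schedule: "*/5 * * * *"  # crontab syntax: every 5 minutes
  jobTemplate:             # the Job created on each schedule tick
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello"]
```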
ReplicaSet
In Kubernetes, although Pods are generally managed through Deployment, a Deployment in turn uses a ReplicaSet to maintain its Pod replicas, so ReplicaSet is briefly introduced here as well.
ReplicaSet’s definition consists of three parts:
- Selector: a label selector that specifies which Pods are managed by the ReplicaSet; matchLabels is matched against the Pods' labels
- Replicas: the expected number of Pod replicas the ReplicaSet should maintain; defaults to 1
- Template: the Pod template the ReplicaSet uses to create Pods
A sample ReplicaSet definition is shown below:
```yaml
apiVersion: apps/v1    # API version
kind: ReplicaSet       # resource type
metadata:              # metadata definition
  name: nginx-ds       # ReplicaSet name
spec:
  replicas: 2          # number of Pod replicas, default 1
  selector:            # label selector
    matchLabels:
      app: nginx
  template:            # Pod template
    metadata:          # Pod metadata definition
      labels:
        app: nginx     # Pod labels
    spec:
      containers:      # container definitions
      - name: nginx
        image: nginx
```
ReplicaSet creates and deletes Pods to ensure that the number of Pods matching the selector equals the number specified by replicas. Each Pod created by a ReplicaSet carries a metadata.ownerReferences field that identifies which ReplicaSet the Pod belongs to; you can view it with kubectl get pod <pod-name> -o yaml.
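For reference, the ownerReferences section of such a Pod looks roughly like the following excerpt (the names and uid are hypothetical):

```yaml
metadata:
  name: nginx-deploy-59c9f8dff-47bgd
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet               # the owning controller
    name: nginx-deploy-59c9f8dff
    uid: 1a2b3c4d-0000-0000-0000-000000000000  # hypothetical
    controller: true
    blockOwnerDeletion: true
```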
ReplicaSet uses the selector field to determine which Pods it should manage, regardless of whether a Pod was created by the ReplicaSet itself: Pods created externally that match the selector are managed as well. Therefore, make sure .spec.selector.matchLabels matches .spec.template.metadata.labels, and avoid selectors that overlap with those of other controllers, which would cause chaos.
ReplicaSet does not support rolling updates, so stateless applications are typically deployed with Deployment rather than by using ReplicaSet directly; ReplicaSet is primarily used inside Deployment as the mechanism for creating, deleting, and updating Pods.
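As a side note, the set-based selectors that distinguish ReplicaSet from ReplicationController are written with matchExpressions; a minimal sketch (the keys and values are illustrative):

```yaml
selector:
  matchExpressions:   # set-based selector
  - key: app
    operator: In      # also: NotIn, Exists, DoesNotExist
    values:
    - nginx
  - key: tier
    operator: Exists  # matches any Pod that has a "tier" label
```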
Deployment
A Deployment owns ReplicaSets as dependent objects and can update a ReplicaSet and its Pods in a declarative, rolling-update manner. ReplicaSet is now primarily used inside Deployment as the mechanism for creating, deleting, and updating Pods. When using Deployment, you do not have to worry about the ReplicaSets it creates; the Deployment takes care of all the related details. A Deployment manages Pods and ReplicaSets "declaratively" (essentially, a series of operational steps for a particular scenario, hardened for quick and accurate execution) and also provides revision rollback.
An example Deployment definition:
```yaml
apiVersion: apps/v1
kind: Deployment            # object type, fixed to Deployment
metadata:
  name: nginx-deploy        # name of the Deployment
  namespace: default        # namespace, defaults to "default"
  labels:
    app: nginx              # labels
spec:
  replicas: 4               # number of Pod replicas, default 1
  strategy:
    rollingUpdate:          # with replicas=4, the Pod count stays between 3 and 5 during the update
      maxSurge: 1           # max Pods above replicas during a rolling update; may also be a percentage of replicas; default 1
      maxUnavailable: 1     # max unavailable Pods during a rolling update; may also be a percentage of replicas; default 1
  selector:                 # label selector; selects the Pods managed by this Deployment
    matchLabels:
      app: nginx
  template:                 # Pod template
    metadata:
      labels:
        app: nginx          # Pod labels
    spec:                   # container template; may contain multiple containers
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```
Use kubectl explain <resource> to see which configuration options are supported:
```shell
# View the Deployment configuration fields
[root@kmaster ~]# kubectl explain deployment
...

# View the configuration items of Deployment.spec
[root@kmaster ~]# kubectl explain deployment.spec
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the Deployment.

     DeploymentSpec is the specification of the desired behavior of the
     Deployment.

FIELDS:
   minReadySeconds    <integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   paused    <boolean>
     Indicates that the deployment is paused.

   progressDeadlineSeconds    <integer>
     The maximum time in seconds for a deployment to make progress before it is
     considered to be failed. The deployment controller will continue to process
     failed deployments and a condition with a ProgressDeadlineExceeded reason
     will be surfaced in the deployment status. Note that progress will not be
     estimated during the time a deployment is paused. Defaults to 600s.

   replicas    <integer>
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.

   revisionHistoryLimit    <integer>
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified. Defaults to
     10.

   selector    <Object> -required-
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment. It must match the pod
     template's labels.

   strategy    <Object>
     The deployment strategy to use to replace existing pods with new ones.

   template    <Object> -required-
```
Other configuration items:
- .spec.minReadySeconds: controls the rollout speed. It defines how long a newly created Pod must remain ready (without any of its containers crashing) before it is considered available; the next round of replacement is blocked during that time. Defaults to 0 (a Pod counts as available as soon as it is ready).
- .spec.progressDeadlineSeconds: the number of seconds the Deployment may go without progress before the system reports it as failed, surfaced as a status condition with Type=Progressing, Status=False, Reason=ProgressDeadlineExceeded. The Deployment controller will keep retrying the Deployment. If specified, this value must be greater than .spec.minReadySeconds.
- .spec.revisionHistoryLimit: the number of old ReplicaSets (revisions) to retain for rollback; defaults to 10. Once an old ReplicaSet is deleted, the Deployment can no longer roll back to that revision. If set to 0, all old ReplicaSets with zero Pod replicas are deleted, and the Deployment cannot be rolled back because its revision history has been cleaned up.
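These fields sit directly under the Deployment's spec; an illustrative fragment (the values are examples, not recommendations):

```yaml
spec:
  replicas: 4
  minReadySeconds: 10          # a new Pod must stay ready for 10s before it counts as available
  progressDeadlineSeconds: 600 # report failure if no progress within 600s (must be > minReadySeconds)
  revisionHistoryLimit: 5      # keep at most 5 old ReplicaSets for rollback
```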
1. Create
```shell
[root@kmaster test]# kubectl apply -f nginx-deploy.yaml --record
```
--record writes the command into the Deployment's kubernetes.io/change-cause annotation, so you can later see the reason for each change in the Deployment's revision history.
2. View
Once the Deployment is created, the Deployment controller immediately creates a ReplicaSet, which in turn creates the required Pods.
```shell
# View the Deployment
[root@kmaster test]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   0/2     2            0           64s

# View the ReplicaSet
[root@kmaster test]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
nginx-deploy-59c9f8dff   2         2         1       2m16s

# View the Pods, showing scheduling nodes and labels
[root@kmaster test]# kubectl get pod -o wide --show-labels
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES   LABELS
nginx-deploy-59c9f8dff-47bgd   1/1     Running   0          5m14s   10.244.1.91   knode2   <none>           <none>            app=nginx,pod-template-hash=59c9f8dff
nginx-deploy-59c9f8dff-q4zb8   1/1     Running   0          5m14s   10.244.3.47   knode3   <none>           <none>            app=nginx,pod-template-hash=59c9f8dff
```
The pod-template-hash label is added by the Deployment to the ReplicaSet when it creates the ReplicaSet, which in turn adds the label to its Pods. This label distinguishes which ReplicaSet in a Deployment created which Pods; its value is the hash of .spec.template, so do not modify it. As seen above, ReplicaSets and Pods are named in the formats <deployment-name>-<pod-template-hash> and <deployment-name>-<pod-template-hash>-<random-suffix>, respectively.
3. Rollout
A Deployment rollout (release update) is triggered if and only if the Deployment's Pod template (the .spec.template field) changes, for example when a label or container image is changed. Changes to other Deployment fields (such as .spec.replicas) do not trigger a rollout.
When the Pod definition in a Deployment is updated (for example, a new container image version is released), the Deployment controller creates a new ReplicaSet for the Deployment and gradually creates Pods in the new ReplicaSet while removing Pods from the old one, achieving a rolling update.
For example, change the container image of the Deployment above:

```shell
# Method 1: set the image directly with kubectl
[root@kmaster ~]# kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record
deployment.apps/nginx-deploy image updated

# Method 2: edit the YAML with kubectl edit
[root@kmaster ~]# kubectl edit deploy nginx-deploy
```
View the status of the rollout
```shell
[root@kmaster ~]# kubectl rollout status deploy nginx-deploy
Waiting for deployment "nginx-deploy" rollout to finish: 2 out of 4 new replicas have been updated...
```
View the ReplicaSets:

```shell
[root@kmaster ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
nginx-deploy-59c9f8dff   1         1         1       3d6h
nginx-deploy-d47dbbb7c   4         4         2       3m41s
```
We can see that the Deployment performs the update by creating a new ReplicaSet with 4 replicas and scaling the old ReplicaSet's replica count down to zero.
Since we set both maxSurge and maxUnavailable to 1, the two ReplicaSets together have at most five Pods at any given time during the update (4 replicas + 1 maxSurge), and at least three Pods are available (4 replicas - 1 maxUnavailable).
Use the kubectl describe command to view the Events section of the Deployment, as shown below:
```shell
[root@kmaster ~]# kubectl describe deploy nginx-deploy
...
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled up replica set nginx-deploy-d47dbbb7c to 1
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled down replica set nginx-deploy-59c9f8dff to 3
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled up replica set nginx-deploy-d47dbbb7c to 2
  Normal  ScalingReplicaSet  10m    deployment-controller  Scaled down replica set nginx-deploy-59c9f8dff to 2
  Normal  ScalingReplicaSet  10m    deployment-controller  Scaled up replica set nginx-deploy-d47dbbb7c to 3
  Normal  ScalingReplicaSet  8m56s  deployment-controller  Scaled down replica set nginx-deploy-59c9f8dff to 1
  Normal  ScalingReplicaSet  8m56s  deployment-controller  Scaled up replica set nginx-deploy-d47dbbb7c to 4
  Normal  ScalingReplicaSet  5m55s  deployment-controller  Scaled down replica set nginx-deploy-59c9f8dff to 0
```
As the events show, when the Deployment's Pod template was updated, the Deployment controller created a new ReplicaSet (nginx-deploy-d47dbbb7c) and scaled the old ReplicaSet (nginx-deploy-59c9f8dff) down to three replicas. The controller then kept scaling the new ReplicaSet up and the old one down until the new ReplicaSet reached the desired number of replicas and the old ReplicaSet was scaled to 0. This process is called a rollout.
The update strategy is specified through the .spec.strategy field. Besides the RollingUpdate used above, the other possible value is Recreate: the Deployment deletes all Pods in the original ReplicaSet first and only then creates the new ReplicaSet and its Pods, so the application is unavailable for a period during the update. Online environments therefore typically use RollingUpdate.
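To use the Recreate strategy instead, the strategy field would be declared like this (a minimal sketch):

```yaml
spec:
  strategy:
    type: Recreate   # delete all old Pods before creating new ones; the default is RollingUpdate
```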
4. Rollback
By default, Kubernetes saves the entire rollout history of a Deployment. The number of revisions to keep can be set through the .spec.revisionHistoryLimit configuration item.
Kubernetes creates a Deployment Revision (version) for a Deployment if and only if the.spec.template field of the Deployment is modified (for example, modifying the image of the container). Other updates to the Deployment (for example, changing the.spec.replicas field) will not create a new Deployment Revision.
View the Deployment revisions:

```shell
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=nginx-deploy.yaml --record=true
2         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
```
If --record=true had not been added when the Deployment was updated earlier, CHANGE-CAUSE here would be empty.
We now simulate a failed update by changing the image to a non-existent version, and then roll back to the previous version:
```shell
# 1. Change the image to a version that does not exist
[root@kmaster ~]# kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record
deployment.apps/nginx-deploy image updated

# 2. View the ReplicaSets
[root@kmaster ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-58f69cfc57   2         2         0       2m7s
nginx-deploy-59c9f8dff    0         0         0       3d7h
nginx-deploy-d47dbbb7c    3         3         3       81m

# 3. View the Pod status
[root@kmaster ~]# kubectl get pod
NAME                            READY   STATUS              RESTARTS   AGE
nginx-deploy-58f69cfc57-5968g   0/1     ContainerCreating   0          42s
nginx-deploy-58f69cfc57-tk7c5   0/1     ErrImagePull        0          42s
nginx-deploy-d47dbbb7c-2chgx    1/1     Running             0          77m
nginx-deploy-d47dbbb7c-8fcb9    1/1     Running             0          80m
nginx-deploy-d47dbbb7c-gnwjj    1/1     Running             0          78m

# 4. View Deployment details
[root@kmaster ~]# kubectl describe deploy nginx-deploy
...
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m57s  deployment-controller  Scaled up replica set nginx-deploy-58f69cfc57 to 1
  Normal  ScalingReplicaSet  3m57s  deployment-controller  Scaled down replica set nginx-deploy-d47dbbb7c to 3
  Normal  ScalingReplicaSet  3m57s  deployment-controller  Scaled up replica set nginx-deploy-58f69cfc57 to 2

# 5. View the Deployment revision history
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=nginx-deploy.yaml --record=true
2         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
3         kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true

# 6. View the details of a revision
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy --revision=3
deployment.apps/nginx-deploy with revision #3
Pod Template:
  Labels:       app=nginx
                pod-template-hash=58f69cfc57
  Annotations:  kubernetes.io/change-cause: kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
  Containers:
   nginx:
    Image:        nginx:1.161
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

# 7. Roll back to the previous revision
[root@kmaster ~]# kubectl rollout undo deploy nginx-deploy
deployment.apps/nginx-deploy rolled back

# 8. Roll back to a specified revision
[root@kmaster ~]# kubectl rollout undo deploy nginx-deploy --to-revision=1
deployment.apps/nginx-deploy rolled back

# 9. View the revision history again
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
3         kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
4         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
5         kubectl apply --filename=nginx-deploy.yaml --record=true
```
The kubectl rollout undo command rolls back to the previous revision or to a specified one. As the example shows, rolling back to a historical revision renumbers it as the latest revision. As mentioned above, the Deployment's .spec.revisionHistoryLimit specifies how many old ReplicaSets (revisions) to retain; anything beyond that number is garbage-collected in the background. If the field is set to 0, Kubernetes cleans up all of the Deployment's revision history and the Deployment cannot be rolled back.
5. Scaling
A Deployment can be scaled to increase or decrease the number of Pod replicas, either with the kubectl scale command or by modifying the definition with kubectl edit.
```shell
# Scale the Pod count to 2
[root@kmaster ~]# kubectl scale deploy nginx-deploy --replicas=2
deployment.apps/nginx-deploy scaled

# View the Pods
[root@kmaster ~]# kubectl get pod
NAME                           READY   STATUS        RESTARTS   AGE
nginx-deploy-59c9f8dff-7bpjp   1/1     Running       0          9m48s
nginx-deploy-59c9f8dff-tpxzf   0/1     Terminating   0          8m57s
nginx-deploy-59c9f8dff-v8fgz   0/1     Terminating   0          10m
nginx-deploy-59c9f8dff-w8s9z   1/1     Running       0          10m

# View the ReplicaSets; DESIRED changes to 2
[root@kmaster ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-58f69cfc57   0         0         0       22m
nginx-deploy-59c9f8dff    2         2         2       3d8h
nginx-deploy-d47dbbb7c    0         0         0       102m
```
6. Automatic scaling (HPA)
If Horizontal Pod Autoscaling (HPA) is enabled in the cluster, a Deployment can be scaled automatically within a maximum/minimum range based on CPU and memory utilization.
```shell
# Create an HPA
[root@kmaster ~]# kubectl autoscale deploy nginx-deploy --min=2 --max=4 --cpu-percent=80
horizontalpodautoscaler.autoscaling/nginx-deploy autoscaled

# View the HPA
[root@kmaster ~]# kubectl get hpa
NAME           REFERENCE                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   <unknown>/80%   2         4         2          16s

# Delete the HPA
[root@kmaster ~]# kubectl delete hpa nginx-deploy
horizontalpodautoscaler.autoscaling "nginx-deploy" deleted
```
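The same autoscaler can also be declared as a manifest instead of using kubectl autoscale; a sketch using the autoscaling/v1 API (newer clusters may prefer autoscaling/v2):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deploy
spec:
  scaleTargetRef:          # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy
  minReplicas: 2
  maxReplicas: 4
  targetCPUUtilizationPercentage: 80
```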
7. Pause and resume
We can pause a Deployment and then make one or more updates to it. While paused, the Deployment does not trigger an update; only after it is resumed are all updates made in the meantime rolled out together. This allows multiple changes between pause and resume without triggering unnecessary rolling updates.
```shell
# 1. Pause the Deployment
[root@kmaster ~]# kubectl rollout pause deploy nginx-deploy
deployment.apps/nginx-deploy paused

# 2. Update the container image
[root@kmaster ~]# kubectl set image deploy nginx-deploy nginx=nginx:1.9.1 --record
deployment.apps/nginx-deploy image updated

# 3. View the revision history; no update has been triggered
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
3         kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
4         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
5         kubectl apply --filename=nginx-deploy.yaml --record=true

# 4. Update the resource limits; still no update is triggered
[root@kmaster ~]# kubectl set resources deploy nginx-deploy -c=nginx --limits=memory=512Mi,cpu=500m
deployment.apps/nginx-deploy resource requirements updated

# 5. View the changes; the Pod template has been updated
[root@kmaster ~]# kubectl describe deploy nginx-deploy
...
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     500m
      memory:  512Mi

# 6. Resume the Deployment
[root@kmaster ~]# kubectl rollout resume deploy nginx-deploy
deployment.apps/nginx-deploy resumed

# 7. View the revision history; the two changes produced a single rollout
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
3         kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
4         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
5         kubectl apply --filename=nginx-deploy.yaml --record=true
6         kubectl set image deploy nginx-deploy nginx=nginx:1.9.1 --record=true
```
While the Deployment was paused, updating the container image did not generate a new revision; the updates took effect only when the Deployment was resumed, performing a single rolling update and producing a new revision. Because no revision is created while paused, a paused Deployment cannot be rolled back; a rollback can only be performed after resume.
8. Canary Release
Canary release is also called grayscale release. When we need to release a new version, we can create a new Deployment for it and attach it to the same Service as the old version (via label matching). The Service's load balancing then distributes part of the user traffic to the new Deployment's Pods, letting us observe how the new version behaves. If there is no problem, we update the old Deployment to the new version to complete the rolling update, and finally delete the canary Deployment. Clearly this kind of canary release has limitations: traffic cannot be split by user or geography. A fuller canary implementation may require introducing something like Istio.
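A sketch of the label arrangement described above (all names, labels, and the image tag are hypothetical): both Deployments carry the app label the Service selects on, while a separate track label tells them apart.

```yaml
# Service: routes traffic to Pods of both versions via the shared "app" label
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx            # matches Pods of both the old and the canary Deployment
  ports:
  - port: 80
---
# Canary Deployment: few replicas, new image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  replicas: 1             # receives a small share of traffic
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx        # selected by the Service
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.17 # hypothetical new version
```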
The origin of the name: miners once faced the serious danger of gas in mines, so they carried a canary underground as a way to detect it. Canaries are less resistant to the toxic gas than humans and collapse first, serving as an early warning. The idea behind it is trial and error at small cost: even if something goes terribly wrong (poison gas), the damage to the overall system is tolerable or very small (losing a canary).
Conclusion
In Kubernetes, the smallest scheduling unit is the Pod. ReplicaSet is the workload that creates Pods and keeps them running at a specified number of replicas, while Deployment manages Pods and ReplicaSets "declaratively" and additionally provides rolling updates and revision rollback. Applications are therefore usually deployed with Deployment rather than by operating ReplicaSets or Pods directly.
Welcome to follow the author's WeChat public account, Halfway Rain Song, for more technical articles.