Deployment – Stateless Deployment
When a regular application is deployed to a Kubernetes cluster, we rarely create Pods directly; instead we use the logical encapsulation that a Deployment provides. A Deployment wraps a set of Pods: you specify in the Deployment how many Pod instances should run, and the kube-controller-manager in the Kubernetes cluster monitors them and makes sure the number of Pod instances in the cluster stays at the number set in the Deployment.
A Deployment is, as the name implies, a description of how to deploy a collection of Pods. The Pods are described by a Pod template inside the Deployment's YAML file, and that template can use all of the properties and features we covered in the previous section.
1. Get to know Deployment
The Pods in a Kubernetes cluster have a defined life cycle. For example, once a Pod is running in your cluster, a fatal error on the node where it runs means that all Pods on that node fail. Kubernetes treats such a failure as final: even if the node later recovers, a new Pod has to be created to resume the application. To make life easier, however, you do not have to manage each Pod directly. Instead, a workload resource can manage a set of Pods on your behalf. These resources configure controllers that make sure the right number of Pods of the right kind are running, consistent with the state you specified. Deployment is ideal for managing stateless applications on a cluster: all Pods in a Deployment are equivalent to each other and can be replaced as needed.
Next, let's create a Deployment and do some basic exercises to get a feel for how convenient and powerful it is. When we defined a Pod on its own in the previous section, changing it meant deleting the Pod, editing its definition, and creating it again. With a Deployment, we only modify the Pod template, and the Deployment replaces the old Pods with new ones for us.
The first step is to create a new YAML file with the following contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2        # tells Deployment to run two Pod instances
  template:          # Pod template
    metadata:
      labels:
        app: nginx   # Pod label
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
The second step, create the Deployment:
kubectl apply -f 1-deploy.yaml
The third step, check the running status of the Deployment. As shown below, the Deployment has been created successfully. The columns in the output have the following meanings:
(1) NAME: the name of the Deployment in the cluster.
(2) READY: the number of available replicas of the application, displayed as ready/desired.
(3) UP-TO-DATE: the number of replicas that have been updated to reach the desired state.
(4) AVAILABLE: the number of replicas available to users.
(5) AGE: how long the application has been running.
[root@kubernetes-master01 deploy]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 5m7s
Now look at the corresponding Pods, as shown below. Two Pods have been created as defined in the YAML; their names are generated automatically.
[root@kubernetes-master01 deploy]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6b474476c4-dx7b6 1/1 Running 0 6m5s
nginx-deployment-6b474476c4-xljrj 1/1 Running 0 6m5s
The fourth step, change the number of Pods in the Deployment by setting the replicas parameter to 3.
  replicas: 3        # tells Deployment to run three Pod instances
  template:          # Pod template
    metadata:
      labels:
        app: nginx   # Pod label
The fifth step, simply reapply the YAML file; there is no need to make any manual changes to the running Deployment.
kubectl apply -f 1-deploy.yaml
The sixth step, look at the Deployment again. As shown below, three Pod instances are now running, as defined in the YAML.
[root@kubernetes-master01 deploy]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6b474476c4-dx7b6 1/1 Running 0 10m
nginx-deployment-6b474476c4-glfzt 1/1 Running 0 28s
nginx-deployment-6b474476c4-xljrj 1/1 Running 0 10m
In this way we can easily scale the number of Pod instances up or down by hand, which is very convenient. In the next section we will describe how to scale Pod instances automatically, based on how the application is running.
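As a quick aside, the same scaling can also be done imperatively, and the controller behaviour described earlier can be observed directly. The commands below are only a sketch based on the Deployment created above (the Pod name is taken from the output in step 6; yours will differ):

# Scale the Deployment to 3 replicas without editing the YAML file
kubectl scale deployment nginx-deployment --replicas=3

# Delete one Pod and watch the Deployment's controller create a replacement
kubectl delete pod nginx-deployment-6b474476c4-dx7b6
kubectl get pod

Note that if you scale imperatively and later re-apply the YAML file, the replica count written in the file takes effect again.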
The following are typical use cases for Deployments. This book will not walk you through them one by one; instead we have selected the two most commonly used, rolling upgrade and version rollback, to study together.
(1) Create a Deployment to roll out a ReplicaSet. The ReplicaSet creates Pods in the background; check the status of the rollout to see whether it succeeded.
(2) Declare a new state for the Pods by updating the Deployment's PodTemplateSpec. A new ReplicaSet is created, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
(3) Roll back to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback also updates the Deployment's revision.
(4) Scale up the Deployment to take on more load.
(5) Pause the Deployment to apply several changes to the PodTemplateSpec, then resume it to start a new rollout.
(6) Use the status of the Deployment to determine whether a rollout is stuck.
(7) Clean up older ReplicaSets that are no longer needed.
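Several of these use cases map directly to kubectl rollout subcommands. A small sketch for reference only; none of these commands are required for the examples that follow:

# Check whether a rollout has finished or is stuck (use cases 1 and 6)
kubectl rollout status deployment/nginx-deployment

# Pause the Deployment, make several PodTemplateSpec changes, then resume (use case 5)
kubectl rollout pause deployment/nginx-deployment
kubectl rollout resume deployment/nginx-deployment

# View the revision history that rollbacks are based on (use case 3)
kubectl rollout history deployment/nginx-deployment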
2. Rolling upgrade
What is a rolling upgrade? When we upgrade an application to a new version and want the application to keep serving users without interruption, we can upgrade the application instances part by part instead of all at once. This is what is called a rolling upgrade. Implementing a rolling upgrade for a traditional application is quite troublesome, but in a Kubernetes cluster it is very convenient.
To define the upgrade strategy, add the following to the YAML file that defines the Deployment:
  strategy:
    rollingUpdate:
      maxSurge: <number1>%
      maxUnavailable: <number2>%
    type: RollingUpdate
The meanings of the parameters are as follows:
(1) strategy: defines the upgrade strategy.
(2) type: the default is RollingUpdate, the rolling-upgrade strategy. The other option is Recreate, which starts the new Pods only after all the old Pods have been shut down; it is used when the new version of an application is incompatible with the old one.
(3) maxSurge: the number of extra Pods that may be created in the cluster during the upgrade. For example, if the Deployment defines 4 Pods and maxSurge is 75%, at most 4*75%=3 extra Pods can be created in the cluster to replace the old ones.
(4) maxUnavailable: the maximum number of Pods that may be unavailable during the upgrade. For example, if the Deployment defines 4 Pods and maxUnavailable is 25%, at most 4*25%=1 Pod may be unavailable in the cluster. Both fields can also be written as absolute numbers instead of percentages; see the sketch after this list.
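A minimal sketch of the two alternative ways of writing the strategy mentioned above (not used in this chapter's walkthrough):

  # Alternative 1 - Recreate: shut down all old Pods first, then start the new ones
  strategy:
    type: Recreate

  # Alternative 2 - RollingUpdate with absolute numbers instead of percentages
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod during the upgrade
      maxUnavailable: 1    # at most 1 Pod unavailable during the upgrade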
Next, let’s practice a rolling upgrade example.
The first step, create a YAML file with the following contents, which adds the rolling upgrade strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  strategy:
    rollingUpdate:
      maxSurge: 75%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  replicas: 4        # tells Deployment to run four Pod instances
  template:          # Pod template
    metadata:
      labels:
        app: nginx   # Pod label
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create the Deployment with the --record flag so that the command is recorded in the upgrade history.
kubectl apply -f 2-deploy-roll.yaml --record
The second step, look at the Deployment. As shown below, four Pods have been created and nginx version 1.14.2 is running.
[root@kubernetes-master01 deploy]# kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES
nginx-deployment   4/4     4            4           119s   nginx        nginx:1.14.2
The third step, change the nginx image version of the Pod defined in the Deployment, leaving everything else unchanged.
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1   # upgraded from 1.14.2 to 1.16.1
        ports:
        - containerPort: 80
Apply the new YAML:
kubectl apply -f 2-deploy-roll.yaml --record
The fourth step, see what is running in the Deployment. As shown below, nginx has changed to version 1.16.1.
[root@kubernetes-master01 deploy]# kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES
nginx-deployment   4/4     4            4           9m2s   nginx        nginx:1.16.1
The fifth step, analyze the upgrade process. Each Deployment manages its Pods through a ReplicaSet (RS). After the version upgrade there are two RS under the Deployment, as shown below: the RS whose name ends in c4 has been running for 12 minutes, while the RS whose name ends in 49 has been running for just over 5 minutes. That is to say:
(1) nginx-deployment-6b474476c4: the RS associated with the Deployment before the version upgrade.
(2) nginx-deployment-7b45d69949: the RS associated with the Deployment after the version upgrade.
[root@kubernetes-master01 deploy]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-6b474476c4 0 0 0 12m
nginx-deployment-7b45d69949 4 4 4 5m7s
Now look at the upgrade process itself, as shown below. The new RS is first scaled straight up to 3, the old RS is scaled down from 4 to 3, the new RS is scaled up to 4, and the old RS then goes down to 2, 1 and 0. This matches the expectations we defined: up to three extra Pods of the new version can be created at once, at most one old Pod is removed at a time before another new Pod takes its place, and once the target of four new Pods is reached, the remaining old Pods are removed one by one.
[root@kubernetes-master01 deploy]# kubectl describe deployment nginx-deployment
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 26m deployment-controller Scaled up replica set nginx-deployment-6b474476c4 to 4
Normal ScalingReplicaSet 19m deployment-controller Scaled up replica set nginx-deployment-7b45d69949 to 3
Normal ScalingReplicaSet 19m deployment-controller Scaled down replica set nginx-deployment-6b474476c4 to 3
Normal ScalingReplicaSet 19m deployment-controller Scaled up replica set nginx-deployment-7b45d69949 to 4
Normal ScalingReplicaSet 19m deployment-controller Scaled down replica set nginx-deployment-6b474476c4 to 2
Normal ScalingReplicaSet 19m deployment-controller Scaled down replica set nginx-deployment-6b474476c4 to 1
Normal ScalingReplicaSet 18m deployment-controller Scaled down replica set nginx-deployment-6b474476c4 to 0
To summarize, with maxSurge set to 75%, this RollingUpdate is allowed the extra resources of three additional Pods, so four old Pods and three new Pods can run at the same time. Meanwhile, with maxUnavailable set to 25%, at most one Pod of the whole Deployment may be unavailable at any point during the upgrade.
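If you want to observe this hand-over while it happens, you can watch the ReplicaSets change in real time. A small sketch, run in another terminal before applying the new YAML:

# Watch the old and new ReplicaSets being scaled down and up during the upgrade
kubectl get rs --watch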
3. Version rollback
When we find problems with the upgraded application, we can roll back the Deployment.
Following on from the upgrade above, view the upgrade history of the current Deployment, as shown below.
[root@kubernetes-master01 deploy]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 kubectl apply --filename=2-deploy-roll.yaml --record=true
2 kubectl apply --filename=2-deploy-roll.yaml --record=true
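To see which Pod template (and therefore which nginx image) a given revision corresponds to before rolling back, the details of a single revision can be inspected. A hedged sketch:

kubectl rollout history deployment/nginx-deployment --revision=1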
Currently, as shown below, the nginx image version in the Deployment in the cluster is 1.16.1.
[root@kubernetes-master01 deploy]# kubectl get deployment -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deployment 4/4 4 4 58m nginx nginx:1.16.1 app=nginx
Roll back Deployment to version 1 by executing the following command:
kubectl rollout undo deployment nginx-deployment --to-revision=1
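As a side note, if the --to-revision flag is omitted, the Deployment is simply rolled back to the immediately preceding revision:

kubectl rollout undo deployment nginx-deployment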
Look at the current version of nginx in the Deployment; as shown below, it has been restored to 1.14.2.
[root@kubernetes-master01 deploy]# kubectl get deployment -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deployment 4/4 4 4 61m nginx nginx:1.14.2 app=nginx
Look at the RS (replica sets) corresponding to the Deployment, as shown below. No new RS has been added; instead, the earlier RS has had its Pod count restored.
[root@kubernetes-master01 deploy]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-6b474476c4 4 4 4 62m
nginx-deployment-7b45d69949 0 0 0 54m
By changing the version of the nginx image in the Deployment, we have worked through a version upgrade and a version rollback, and from this we can verify the following conclusions:
(1) A Deployment manages the definition and number of its Pods through a ReplicaSet.
(2) Modifying the Pod definition in a Deployment is implemented by creating a new ReplicaSet.
(3) Based on the rollout history, the state of a Deployment can be rolled back to a historical revision.
(4) Updates to Deployment attributes other than the Pod definition, such as changing the number of Pods, do not create a new ReplicaSet and do not add a new revision to the rollout history.
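If you want to check the mapping between revisions and replica sets yourself, each ReplicaSet records the Deployment revision it belongs to in the deployment.kubernetes.io/revision annotation. A hedged sketch (the RS name comes from the output above):

# Look for the deployment.kubernetes.io/revision annotation in the output
kubectl describe rs nginx-deployment-6b474476c4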
4. You may also like
If you are interested in learning more about containerization, you can read the Kubernetes column "Beautiful Containerization".
Features of this column:
- Combining theory with practice
- Easy-to-follow explanations