Author: Zhang Jie (Bingyu) | Source: Alibaba Cloud Native public account

Background

Before we get started, let's review the concept and design philosophy of unitized deployment. In edge computing scenarios, compute nodes are geographically distributed, and the same application may need to be deployed on compute nodes in different regions. Taking Deployment as an example, as shown in the figure below, the traditional approach is to first set the same label on the compute nodes in each region, then create multiple Deployments, each selecting a different label through its NodeSelector. In this way the same application can be deployed to different regions.

However, as the geographical distribution grows, operation and maintenance becomes more and more complex, mainly in the following aspects:

  • When the image version is upgraded, the image version configuration of every related Deployment needs to be changed.
  • A custom Deployment naming convention is required to indicate that the Deployments belong to the same application.
  • There is no higher-level view for unified management and O&M of these Deployments. O&M complexity grows linearly with the number of applications and regions.

To address these requirements and problems, the UnitedDeployment provided by the yurt-app-manager component of OpenYurt manages these child Deployments uniformly through a higher-level abstraction: automatic creation/update/deletion, which greatly simplifies O&M.

yurt-app-manager component: https://github.com/openyurtio/yurt-app-manager

As shown below:

UnitedDeployment abstracts these workloads at a higher level. It contains two main configurations: WorkloadTemplate and Pools. The workloadTemplate can be in Deployment or StatefulSet format. Pools is a list; each entry is a Pool configuration with its own Name, Replicas, and nodeSelector. A nodeSelector selects a group of machines, so in the edge scenario a Pool can simply be thought of as representing a group of machines in a certain region. With the WorkloadTemplate + Pools definition, we can easily distribute a Deployment or StatefulSet application to different regions.

Here is a specific UnitedDeployment example:

apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: test
  namespace: default
spec:
  selector:
    matchLabels:
      app: test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: test
      spec:
        selector:
          matchLabels:
            app: test
        template:
          metadata:
            labels:
              app: test
          spec:
            containers:
            - image: nginx:1.18.0
              imagePullPolicy: Always
              name: nginx
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - hangzhou
      replicas: 2

The logic of the UnitedDeployment controller is as follows:

The user defines a UnitedDeployment CR with a DeploymentTemplate and two pools.

  • The DeploymentTemplate is a standard Deployment definition; the image used in this example is nginx:1.18.0.
  • Pool1 is named beijing, with replicas=1 and nodeSelector apps.openyurt.io/nodepool=beijing. This means the UnitedDeployment controller will create a child Deployment with replicas of 1 and nodeSelector apps.openyurt.io/nodepool=beijing; the other configurations are inherited from the DeploymentTemplate.
  • Pool2 is named hangzhou, with replicas=2 and nodeSelector apps.openyurt.io/nodepool=hangzhou. This means the UnitedDeployment controller will create a child Deployment with replicas of 2 and nodeSelector apps.openyurt.io/nodepool=hangzhou; the other configurations are inherited from the DeploymentTemplate.

When the UnitedDeployment controller detects that a UnitedDeployment CR instance named test has been created, it first generates a Deployment template object based on the configuration in DeploymentTemplate. Then, according to the Pool1 and Pool2 configurations and this template object, it generates two Deployment resource objects with the name prefixes test-beijing- and test-hangzhou- respectively. These two Deployments have their own nodeSelector and replicas configurations. With the workloadTemplate + Pools format, workloads can be distributed to different regions without users having to maintain a large number of Deployment resources.
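To make the inheritance concrete, the child Deployment generated for the beijing pool would look roughly like the sketch below. The random name suffix is illustrative, and the node constraint is shown as a plain nodeSelector for simplicity; the exact labels and scheduling fields written by the controller may differ.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-beijing-xxxxx      # random suffix generated by the controller (illustrative)
  namespace: default
spec:
  replicas: 1                   # from the beijing pool
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeSelector:             # derived from the pool's nodeSelectorTerm (simplified here)
        apps.openyurt.io/nodepool: beijing
      containers:
      - image: nginx:1.18.0     # inherited from the deploymentTemplate
        imagePullPolicy: Always
        name: nginx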

Problems solved by UnitedDeployment

With a single UnitedDeployment instance, UnitedDeployment automatically maintains multiple Deployment or StatefulSet resources, all following a unified naming convention. At the same time, it supports differentiated configuration of Name, NodeSelector, and Replicas. This greatly reduces O&M complexity for users in edge scenarios.

New requirements

UnitedDeployment can meet most user requirements. However, during its adoption, customer onboarding, and discussions with community members, we gradually found that the features provided by UnitedDeployment fall a little short in some special scenarios, such as the following:

  • During the upgrade of an application image, the user plans to verify the image in one node pool first; only if the verification succeeds will it be published to all node pools.
  • To speed up image pulling, users may set up private image registries in different node pools, so the image name of an application may differ between node pools.
  • The number of servers, their specifications, and the service access pressure differ between node pools, so the Pod CPU and memory configurations of an application may also differ between node pools.
  • An application may use different ConfigMap resources in different node pools.

These requirements push us to provide per-pool personalized configuration in UnitedDeployment, allowing users to customize items such as the image and the Pod requests and limits according to the actual situation of each node pool. To provide maximum flexibility, after discussion we decided to add a Patch field to the Pool, allowing users to fill in custom patch content, which must comply with the Kubernetes strategic merge patch specification. Its behavior is similar to that of kubectl patch.

Patch is added to the pool as shown in the following example:

pools:
- name: beijing
  nodeSelectorTerm:
    matchExpressions:
    - key: apps.openyurt.io/nodepool
      operator: In
      values:
      - beijing
  replicas: 1
  patch:
    spec:
      template:
        spec:
          containers:
          - image: nginx:1.19.3
            name: nginx
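The same mechanism covers the other per-pool differences listed earlier, such as CPU and memory. A hedged sketch (an illustration, not taken from the original article) of a pool patch that overrides container resources:

  patch:
    spec:
      template:
        spec:
          containers:
          - name: nginx              # entries are merged by container name
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 500m
                memory: 256Mi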

The content defined in patch must follow the Kubernetes strategic merge patch specification, so anyone who has used kubectl patch will find it easy to write; for details, see "Update API Objects in Place Using kubectl patch" in the Kubernetes documentation. Next, we will demonstrate the UnitedDeployment patch capability.
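As a point of reference before the demo, the image change in the pool patch above, expressed as a direct strategic-merge kubectl patch against an ordinary Deployment (a hypothetical one named my-nginx; this command is an illustration, not part of the original walkthrough), would be:

kubectl patch deployment my-nginx --type=strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx:1.19.3"}]}}}}'

Strategic merge matches list entries such as containers by their name field, which is why only the nginx container's image is replaced.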

Feature demonstration

1. Environment preparation

  • Prepare a K8s or OpenYurt cluster with at least two nodes. One node is labeled apps.openyurt.io/nodepool=beijing, the other apps.openyurt.io/nodepool=hangzhou (see the label commands below).
  • The yurt-app-manager component must be installed in the cluster.

yurt-app-manager component: https://github.com/openyurtio/yurt-app-manager
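If the two nodes do not yet carry these labels, they can be added with kubectl. The node names node-1 and node-2 below are placeholders; substitute your own:

kubectl label node node-1 apps.openyurt.io/nodepool=beijing
kubectl label node node-2 apps.openyurt.io/nodepool=hangzhou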

2. Create a UnitedDeployment instance

cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: test
  namespace: default
spec:
  selector:
    matchLabels:
      app: test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: test
      spec:
        selector:
          matchLabels:
            app: test
        template:
          metadata:
            labels:
              app: test
          spec:
            containers:
            - image: nginx:1.18.0
              imagePullPolicy: Always
              name: nginx
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - hangzhou
      replicas: 2
EOF

The workloadTemplate in the example uses a Deployment template, in which the container named nginx uses the image nginx:1.18.0. Two pools, beijing and hangzhou, are defined in topology, with 1 and 2 replicas respectively.

3. View the Deployments created by the UnitedDeployment

# kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
test-beijing-rk8g8    1/1     1            1           6m4s
test-hangzhou-kfhvj   2/2     2            2           6m4s

The yurt-app-manager controller creates two Deployments corresponding to the beijing and hangzhou pools. The Deployment naming convention uses the prefix {UnitedDeployment name}-{pool name}-. Looking at the two Deployment configurations, their replicas and nodeSelector inherit the configuration of the corresponding pool, while the other configurations inherit from the workloadTemplate.

4. View the created Pods

# kubectl get pod
NAME                                   READY   STATUS    RESTARTS   AGE
test-beijing-rk8g8-5df688fbc5-ssffj    1/1     Running   0          3m36s
test-hangzhou-kfhvj-86d7c64899-2fqdj   1/1     Running   0          3m36s
test-hangzhou-kfhvj-86d7c64899-8vxqk   1/1     Running   0          3m36s

One Pod with the name prefix test-beijing and two Pods with the name prefix test-hangzhou have been created.

5. Use the patch capability to differentiate configurations

Run kubectl edit ud test to add a patch to the beijing pool. In the patch field, change the image of the container named nginx to nginx:1.19.3.

The format is as follows:

- name: beijing
  nodeSelectorTerm:
    matchExpressions:
    - key: apps.openyurt.io/nodepool
      operator: In
      values:
      - beijing
  replicas: 1
  patch:
    spec:
      template:
        spec:
          containers:
          - image: nginx:1.19.3
            name: nginx
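If you prefer a non-interactive command over kubectl edit, the same change can be applied with a JSON patch against the UnitedDeployment resource. This is a hedged alternative, not part of the original walkthrough, and it assumes beijing is the first entry in spec.topology.pools, because the path is index-based:

kubectl patch ud test --type='json' -p='[
  {"op": "add",
   "path": "/spec/topology/pools/0/patch",
   "value": {"spec": {"template": {"spec": {"containers": [{"name": "nginx", "image": "nginx:1.19.3"}]}}}}}
]'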

6. View the Deployment configuration

Looking again at the Deployment with the prefix test-beijing, you can see that the container image has changed to nginx:1.19.3.

# kubectl get deployment test-beijing-rk8g8 -o yaml
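To check just the image rather than reading the full YAML, a jsonpath query works as well (the Deployment name suffix will differ in your cluster):

kubectl get deployment test-beijing-rk8g8 \
  -o jsonpath='{.spec.template.spec.containers[0].image}'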

Conclusion

With the workloadTemplate + Pools form, UnitedDeployment can quickly distribute workloads to different regions through template inheritance. On top of that inheritance, the Pool patch capability provides more flexible differentiated configuration, which can basically meet the special needs of most customers in edge scenarios.

If you have any questions about OpenYurt, you are welcome to join the DingTalk discussion group (group number: 31993519).