Author: Zhang Jie (Bingyu) | Source: Alibaba Cloud Native official account

Introduction

OpenYurt, open sourced by Alibaba Cloud, is the industry's first non-intrusive edge computing project built on native Kubernetes. Its goal is to extend Kubernetes to seamlessly support edge computing scenarios. It provides full Kubernetes API compatibility; supports all Kubernetes workloads, services, operators, CNI plug-ins, and CSI plug-ins; and provides strong node autonomy: even if an edge node is disconnected from the cloud, the applications running on it are not affected. OpenYurt can be easily deployed into any Kubernetes cluster, extending powerful cloud-native capabilities to the edge.

OpenYurt v0.3.0: a major release

On January 8, 2021 (Beijing time), OpenYurt released v0.3.0, which introduces the concepts of node pools and unitized deployment for the first time and adds the cloud-side Yurt-App-Manager component, improving the efficiency of application deployment in edge scenarios and reducing the complexity of operating and maintaining edge nodes and applications. The release also substantially optimizes the performance of the core YurtHub and Yurt-Tunnel components, and Yurtctl gains a kubeadm provider that can quickly and conveniently convert a Kubernetes cluster created by kubeadm into an OpenYurt cluster.

1. Yurt-App-Manager, designed for edge application O&M

After extensive discussion in the community, OpenYurt now provides the Yurt-App-Manager component. Yurt-App-Manager is a standard Kubernetes extension that can be used alongside native Kubernetes and provides the NodePool and UnitedDeployment controllers, delivering O&M capabilities for edge scenarios along two dimensions: hosts (nodes) and applications.

1) Node pool: NodePool

In edge scenarios, nodes usually show strong local, regional, or other logical grouping characteristics (for example, the same CPU architecture, the same carrier, or the same cloud provider). Different groups of nodes are often clearly isolated from one another: their networks are not interconnected, resources are not shared, resources are heterogeneous, and applications are independent. This is where NodePool comes from.

As the name implies, a NodePool is a pool of nodes, also called a node group or node unit. For managing worker nodes that share common attributes, the traditional approach is to classify and manage hosts with labels. However, as the number of nodes and labels grows, classifying and operating node hosts (for example, setting scheduling policies or taints in batches) becomes less and less efficient and flexible, as shown in the following figure:

NodePool abstracts node grouping into a higher-level dimension: from the perspective of a node pool, hosts in different edge regions can be managed, operated, and maintained centrally, as shown in the following figure:
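
To make this concrete, here is a minimal sketch of what a NodePool object could look like. The apiVersion and field names follow my reading of the yurt-app-manager CRD, and the pool name and values are invented for illustration; check the NodePool tutorial for the authoritative schema.

```yaml
# Minimal NodePool sketch; field names per my reading of the yurt-app-manager CRD,
# values invented for illustration -- verify against the official tutorial.
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge                 # a pool of edge worker nodes
  annotations:               # annotations applied in batch to every node in the pool
    node.openyurt.io/region: hangzhou
  labels:                    # labels applied in batch to every node in the pool
    region: hangzhou
  taints:                    # taints applied in batch to every node in the pool
    - key: dedicated
      value: edge
      effect: NoSchedule
```

Labels, annotations, and taints declared on the pool are applied to all member nodes at once, which is exactly the kind of batch host O&M that per-node labeling makes cumbersome.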

2) Unitized deployment: UnitedDeployment

In edge scenarios, the same application may need to be deployed on compute nodes in different regions. Taking Deployment as an example, the traditional approach is to put the same label on the compute nodes of each region and then create one Deployment per region; each Deployment selects its region's nodes with a different nodeSelector, so the same application ends up deployed to every region. However, this yields multiple Deployments of the same application that differ only in name, nodeSelector, replicas, and a few other fields, as shown below:
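
As a concrete (hypothetical) illustration of this pattern, the two Deployments below ship the same nginx application to a "beijing" and a "hangzhou" group of nodes and differ only in name, nodeSelector, and replicas.

```yaml
# Two near-identical Deployments for the same app, one per region (hypothetical example).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-beijing          # name encodes the region by convention
spec:
  replicas: 2
  selector:
    matchLabels: {app: nginx, region: beijing}
  template:
    metadata:
      labels: {app: nginx, region: beijing}
    spec:
      nodeSelector:
        region: beijing        # schedules only onto Beijing nodes
      containers:
        - name: nginx
          image: nginx:1.19
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hangzhou
spec:
  replicas: 3
  selector:
    matchLabels: {app: nginx, region: hangzhou}
  template:
    metadata:
      labels: {app: nginx, region: hangzhou}
    spec:
      nodeSelector:
        region: hangzhou       # schedules only onto Hangzhou nodes
      containers:
        - name: nginx
          image: nginx:1.19
```

Every change to the application, such as an image upgrade, now has to be repeated once per region.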

However, as applications spread across more regions and requirements diverge between regions, operation and maintenance becomes increasingly complex, mainly in the following ways:

  • To upgrade the image version, every Deployment must be changed individually.
  • A custom naming convention is needed so that the Deployments can be recognized as the same application.
  • As edge scenarios grow more complex and requirements increase, the Deployment in each node pool accumulates differentiated configuration that is hard to manage.

UnitedDeployment manages these child Deployments through a higher level of abstraction: it creates, updates, and deletes them automatically, as shown below:

The UnitedDeployment controller provides a single template to define the application and manages multiple workloads to match the different regions underneath. Each region's workload under a UnitedDeployment is called a pool, and two workload types are currently supported for pools: StatefulSet and Deployment. The controller creates child workload resource objects according to the pool configuration in the UnitedDeployment, and each object carries the expected number of replicas. In other words, a single UnitedDeployment instance automatically maintains multiple Deployment or StatefulSet resources, each with differentiated configuration such as replicas. For a more intuitive experience, see the Yurt-App-Manager user tutorial and developer tutorial.
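
For illustration, the per-region Deployments above could be collapsed into a single UnitedDeployment along the lines of the sketch below. The field names (workloadTemplate, topology.pools, nodeSelectorTerm) reflect my understanding of the yurt-app-manager API, and the pool names and label key are examples; the user tutorial has the authoritative schema.

```yaml
# UnitedDeployment sketch: one template, one pool per region with its own replica count.
# Field names per my understanding of the yurt-app-manager API; verify before use.
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: nginx-ud
spec:
  selector:
    matchLabels:
      app: nginx-ud
  workloadTemplate:
    deploymentTemplate:            # the shared Deployment template for all pools
      metadata:
        labels:
          app: nginx-ud
      spec:
        selector:
          matchLabels:
            app: nginx-ud
        template:
          metadata:
            labels:
              app: nginx-ud
          spec:
            containers:
              - name: nginx
                image: nginx:1.19  # upgrading the image here updates every pool
  topology:
    pools:
      - name: beijing
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool   # example label key for pool membership
              operator: In
              values: [beijing]
        replicas: 2
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values: [hangzhou]
        replicas: 3
```

One image change in deploymentTemplate now rolls out to every pool, while the per-pool replicas keep the regional differences.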

For more discussion of Yurt-App-Manager, see the community issues and pull requests below:

  • issue 124: UnitedDeployment usages
  • issue 171: [feature request] the definition of NodePool and UnitedDeployment
  • pull request 173: [proposal] add nodepool and uniteddployment crd proposal

2. YurtHub, the node autonomy component

YurtHub is a daemon that runs on every node in the Kubernetes cluster. It acts as a proxy for the outbound traffic of the node daemons (kubelet, kube-proxy, the CNI plug-in, and so on) and caches the state of every resource those daemons may access in the local storage of the edge node. If an edge node goes offline, the daemons can recover from this cache after a restart, achieving edge autonomy. In v0.3.0, the community made a number of functional enhancements to YurtHub, including:

  • When YurtHub connects to the cloud kube-apiserver, it automatically requests a certificate from kube-apiserver and rotates the certificate automatically before it expires.

  • Added a timeout mechanism for watch requests on cloud resources.

  • Optimized the response when the requested data is not in the local cache.
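
To show how the proxying works in practice, the sketch below is the kind of minimal kubeconfig kubelet can be pointed at so that its apiserver traffic flows through the local YurtHub instead of going directly to the cloud. The address 127.0.0.1:10261 is YurtHub's default proxy endpoint to the best of my knowledge; confirm it in the OpenYurt node conversion docs.

```yaml
# kubeconfig sketch pointing kubelet at the local YurtHub proxy
# (assumes YurtHub's default HTTP proxy address of 127.0.0.1:10261).
apiVersion: v1
kind: Config
clusters:
  - name: default-cluster
    cluster:
      server: http://127.0.0.1:10261   # YurtHub, not the remote kube-apiserver
contexts:
  - name: default-context
    context:
      cluster: default-cluster
      namespace: default
current-context: default-context
```

Because responses pass through YurtHub, they are cached on the node's local disk, which is what lets kubelet and the other daemons restart successfully while the cloud is unreachable.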

3. Yurt-Tunnel, the cloud-edge O&M channel component

Yurt-Tunnel consists of a TunnelServer in the cloud and a TunnelAgent running on each edge node. The TunnelServer establishes a connection with the TunnelAgent daemon on each edge node through a reverse proxy, providing secure network access from the control plane on the public cloud to the edge nodes on the internal network. In v0.3.0, the community made a number of enhancements to Yurt-Tunnel in reliability, stability, and integration testing.
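
To make the topology concrete, the sketch below shows roughly how the agent side could be deployed: a DaemonSet restricted to edge nodes (OpenYurt marks them with the openyurt.io/is-edge-worker label), with one TunnelAgent per node dialing back to the cloud TunnelServer. The image tag is a placeholder and the agent's command-line flags are deliberately omitted; use the yurt-tunnel manifests from the OpenYurt repository in practice.

```yaml
# Rough sketch of the agent side of the tunnel: one TunnelAgent per edge node.
# Image tag is a placeholder and flags are omitted on purpose;
# use the official yurt-tunnel-agent manifest from the OpenYurt repo in practice.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: yurt-tunnel-agent
  namespace: kube-system
spec:
  selector:
    matchLabels: {k8s-app: yurt-tunnel-agent}
  template:
    metadata:
      labels: {k8s-app: yurt-tunnel-agent}
    spec:
      nodeSelector:
        openyurt.io/is-edge-worker: "true"   # run only on edge nodes
      hostNetwork: true                      # the reverse tunnel uses the node's network
      containers:
        - name: yurt-tunnel-agent
          image: openyurt/yurt-tunnel-agent:latest   # placeholder tag
```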

4. Yurtctl, the OpenYurt O&M component

In v0.3.0, Yurtctl adds a kubeadm provider: by running yurtctl convert with this provider, a native Kubernetes cluster created by kubeadm can be quickly and conveniently converted into an OpenYurt cluster adapted to the weak networks of edge environments, greatly improving the OpenYurt experience.

Get started with OpenYurt: playing with OpenYurt on a Raspberry Pi

Future plans

The release of OpenYurt v0.3.0 further improves the ability of native Kubernetes to extend into edge scenarios, and the Yurt-App-Manager component released alongside it targets application deployment at the edge. Going forward, the OpenYurt community will continue to invest in device management, edge O&M and scheduling, community governance, and the contributor experience. Thanks again to the contributors from Intel and VMware, and everyone who is interested is welcome to join us in building a stable, reliable, and fully cloud-native edge computing platform.

For more details about the community, see: github.com/alibaba/ope… .

Related links

  • OpenYurt Release v0.3.0

  • OpenYurt v0.3.0 CHANGELOG

  • OpenYurt usage tutorials

  • OpenYurt website

If you have any questions about OpenYurt, you are welcome to join the DingTalk discussion group by searching for group number 31993519 in DingTalk.