The author

Wang Dong, senior R&D engineer at Tencent Cloud, focuses on Kubernetes, containers and other cloud native fields and is a core developer of SuperEdge. He is currently responsible for the privatization of TKE Edge, Tencent Cloud's edge container product.

Background

On September 27, 2021, at the edge computing session of the 2021 Intelligent Cloud Edge Open Source Summit, jointly held by VMware with Intel, PingCAP and other partner companies, Wang Dong, a senior engineer from Tencent Cloud, gave the talk “New features and future road of SuperEdge”.

SuperEdge is an edge computing distributed container management system open sourced by Tencent in 2020 together with Intel, VMware, Huya Live, Cambricon, Capital Online and Meituan. It aims to seamlessly extend Kubernetes' centralized resource management capabilities to edge computing and distributed resource management scenarios, managing edge devices and applications and enabling a wide range of IoT devices.

The following is the full text of the talk.

Four main features of SuperEdge

  • The origin of SuperEdge

    SuperEdge is the open source product in TKE Edge, Tencent Cloud's edge container product family. Its corresponding commercial products are the TKE Edge public cloud service and the TKE Edge privatized service. The public cloud service was incubated internally in 2018 and officially opened for external trial in 2019; it is currently completely free to external users. The privatized product is currently delivered and maintained externally by Linque Cloud.

  • SuperEdge and TKE Edge

    SuperEdge is the open source edge capability component of TKE Edge and does not include the cluster creation and management parts of the commercial products. The edge capability components are completely open source, and the edge capabilities of the open source product and the commercial products are exactly the same; SuperEdge features are even updated earlier than in the commercial products, because Tencent currently maintains only one repository internally and externally, namely SuperEdge on GitHub.

SuperEdge has four core edge capabilities:

L3 level edge autonomy

This capability is mainly provided by lite-apiserver (light red in the architecture diagram). Why is this capability needed? There are two reasons:

  • The first reason is that the cloud-edge network is generally weak and may be disconnected, and edge services should remain stable even under a weak or broken network. When the cloud-edge network is normal, lite-apiserver simply forwards requests to the cloud kube-apiserver; when requests can no longer reach the cloud, lite-apiserver serves the relevant data to the requesting components from its local cache, ensuring the stability of edge services.
  • Second, edge nodes or entire edge sites may be powered off and restarted. In particular, if an edge node restarts while the cloud-edge network is down, the business containers on it could not be pulled up again. lite-apiserver's local cache avoids this problem: the data needed to bring the business containers back up is loaded from local storage.

lite-apiserver also provides several other capabilities, such as:

  • Accessing kube-apiserver in InCluster mode
  • Caching of all types of resources, including CRDs
  • Edge node security: lite-apiserver requests kube-apiserver with the permissions of the component it proxies, not with super-user permissions
  • Multiple cache storage backends: light edge nodes can use local file storage, while heavy edge nodes can use SQLite and other KV stores
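To make this concrete, below is a minimal sketch of how an edge node's kubelet (or any other node-local client) can be pointed at the local lite-apiserver instead of the cloud kube-apiserver, so that its requests are answered from the cache when the cloud-edge network is down. The listen address 127.0.0.1:51003 and the certificate paths are assumptions based on common SuperEdge defaults; check them against your own deployment.

# Sketch only: point kubelet at the local lite-apiserver.
# 127.0.0.1:51003 is an assumed default listen address of lite-apiserver;
# the client certificate paths below are also assumptions.
cat > /etc/kubernetes/kubelet.conf <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:51003
    insecure-skip-tls-verify: true
  name: lite-apiserver
contexts:
- context:
    cluster: lite-apiserver
    user: default-auth
  name: default-context
current-context: default-context
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
EOF
systemctl restart kubelet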

Cloud-edge collaboration

This capability is mainly provided by the tunnel-cloud and tunnel-edge components (light green in the diagram). These two components form the cloud-edge tunnel developed by the Tencent Cloud edge computing team; currently they can proxy TCP, HTTP, HTTPS and SSH requests. Why is a cloud-edge tunnel needed?

  • First, edge nodes generally do not have public IP addresses. An edge node can actively access the cloud kube-apiserver, but the cloud cannot directly access the edge node, so a reverse cloud-edge tunnel is needed to get through.
  • Second, it prepares for cloud-edge data transmission. Part of the edge data needs to be sent back to the cloud for analysis and processing, and an efficient, securely encrypted tunnel is a necessary condition for that.

The cloud-edge tunnel is not exclusive to SuperEdge: anywhere a tunnel is needed, it can be taken, configured for the scenario at hand and used directly.
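As a result, everyday cloud-to-edge operations simply flow through this tunnel. The commands below are ordinary kubectl usage (the pod name is illustrative); the underlying requests travel kube-apiserver → tunnel-cloud → tunnel-edge → kubelet even though the edge node has no public IP:

# Illustrative names: nginx-edge-7c9d8 is a pod scheduled to an edge node without a public IP.
kubectl logs -f nginx-edge-7c9d8                      # stream logs from the edge pod
kubectl exec -it nginx-edge-7c9d8 -- /bin/sh          # open a shell inside the edge pod
kubectl port-forward pod/nginx-edge-7c9d8 8080:80     # forward a local port to the edge pod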

Massive site management capability

This capability is provided by the application-grid-controller and application-grid-wrapper components (light purple in the diagram). Why are these two components needed?

  • First, there are many similar sites at the edge, and the same set of applications needs to be deployed to all of them. Deploying them site by site is not feasible, and simply looping over the sites cannot express the differences between sites. application-grid-controller was created to solve this: with a single submission in the cloud, a user's application can be deployed to multiple edge sites at the same time, and its site-level gray release capability allows differences in the configuration of individual sites.

  • Second, edge applications must be prevented from accessing services across sites. Because each site provides essentially the same edge services, a service might otherwise be accessed from another site, and cross-site access causes two problems: data belonging to site A may be wrongly written at site B, and cross-site access introduces uncontrollable latency.

This is the problem application-grid-wrapper solves: it locks a site's traffic inside that site by intelligently configuring the backend endpoints, so that services are only reachable within the intended scope.

The diagram above shows a typical use of these two components. One ServiceGroup, servicegroup-1, can be deployed to nodeunit-1 and nodeunit-2 at the same time, while servicegroup-2 is deployed only to nodeunit-3; the services at each site can be accessed only within that site. A small machine room can be divided into one or more sites, and the nodes in it can also belong to multiple sites, so different sites can deploy different services to make full use of the resources in the machine room.
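The sketch below shows how such a ServiceGroup is typically declared, based on the DeploymentGrid and ServiceGrid CRDs in the SuperEdge repository; treat the exact API version and field names as illustrative and check them against the release you run. Nodes are first labeled with a grid key (here zone) to form NodeUnits, and one submission then fans the workload out to every matching site, while application-grid-wrapper keeps Service traffic inside each site.

# Label edge nodes into sites (NodeUnits); the label key acts as the gridUniqKey.
kubectl label node edge-node-1 edge-node-2 zone=nodeunit-1
kubectl label node edge-node-3 edge-node-4 zone=nodeunit-2

# Sketch of a ServiceGroup: one DeploymentGrid plus one ServiceGrid sharing a gridUniqKey.
cat <<'EOF' | kubectl apply -f -
apiVersion: superedge.io/v1
kind: DeploymentGrid
metadata:
  name: echo-grid
spec:
  gridUniqKey: zone            # one Deployment is generated per distinct "zone" value
  template:                    # an ordinary Deployment spec
    replicas: 2
    selector:
      matchLabels:
        app: echo
    template:
      metadata:
        labels:
          app: echo
      spec:
        containers:
        - name: echo
          image: nginx:1.21
---
apiVersion: superedge.io/v1
kind: ServiceGrid
metadata:
  name: echo-grid
spec:
  gridUniqKey: zone            # traffic to the generated Services stays inside each NodeUnit
  template:                    # an ordinary Service spec
    selector:
      app: echo
    ports:
    - port: 80
      targetPort: 80
EOF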

Distributed Health Check

This capability is mainly provided by the edge-health-admission and edge-health components (light yellow in the diagram). Why are these two components needed?

  • The first is the need to report the health status of edge nodes as accurately as possible when the cloud-edge network is broken. For example, when a small edge machine room in one availability zone loses its connection to the cloud, the cloud temporarily has no way to know whether the cloud-edge link is broken or the availability zone itself is down; the cloud cannot see the real health status, but the nodes in that machine room can periodically check each other's health. edge-health does exactly that.

  • The second is to keep edge services stable and avoid repeated rebuilding. When native Kubernetes is pushed to the edge, its eviction behavior is not fully adapted to edge scenarios: under a weak or broken cloud-edge network, a node's status may repeatedly flap to NotReady even though the edge services on it are running normally and are not affected by the weak network. Kubernetes in the cloud does not see it that way; a NotReady node triggers eviction of the edge services, which are then repeatedly migrated and rebuilt, making them unstable. edge-health-admission solves this by feeding the real health status of edge nodes, as reported by edge-health, to kube-apiserver, preventing edge services from being evicted by mistake.

New features and principles of SuperEdge

Since it was open sourced in December last year, SuperEdge has released five versions and brought many new features. Here are four typical ones; the rest can be followed in the SuperEdge community.

New feature 1: Ease of use, with one-click creation and one-click integration

From the day it was open sourced until now, SuperEdge has focused on simplicity and ease of use.

  • One-click creation of an edge K8s cluster

    If you do not have a K8s cluster, you can use edgeadm init to create an edge K8s cluster:

    ./edgeadm init --apiserver-cert-extra-sans=...
    ./edgeadm join kube-api-addr --token xxxx ...

    A cluster with only one master node and one worker node, each with 2C2G resources, is enough to play with edge scenarios and take care of edge nodes and edge devices that users place anywhere.

    edgeadm init initializes the node, installs the container runtime, and adds on the CNI network plugin and the edge capability components mentioned above.

    edgeadm is used in the same way as kubeadm, with only two more parameters than kubeadm. For details, see "Install edge K8s cluster and native K8s cluster with edgeadm".

  • One-click integration of edge capabilities

    Users who already have a native K8s cluster can integrate edge capabilities with one click using edgeadm addon:

    ./edgeadm addon edge-apps --master-addr ...
    ./edgeadm join kube-api-addr --token=...

    After edge capabilities are integrated, the original K8s cluster can manage both central nodes and central applications and edge nodes and edge applications, realizing mixed deployment and unified management of center and edge. Any edge node can be joined without SSH access to it; as long as the edge node can reach the central kube-apiserver, it can be joined. And of course the cluster then has all the edge capabilities of SuperEdge.

The implementation principle is as follows:

The user builds a native K8s cluster in any way; edgeadm addon SuperEdge first configures it into a standard kubeadm Kubernetes cluster (this step is skipped if it is already a kubeadm Kubernetes cluster), and then prepares the prerequisites for addon SuperEdge and for joining edge nodes. The more challenging part is preparing the conditions for joining edge nodes, so that any K8s cluster can join edge nodes located anywhere with one command. For details, see "Addon SuperEdge enables native K8s clusters to manage edge applications and nodes".

New feature 2: Edge managed cluster + edge independent cluster + edge cascading cluster

The diagram below shows SuperEdge's first step towards edge distributed multi-cluster.

At present, through Clusternet, a newly open sourced distributed multi-cluster project from the Tencent Cloud edge computing team, SuperEdge can realize unified management in the center of edge managed clusters, independent edge clusters and even cascading edge clusters.

The managed independent edge clusters are not limited to SuperEdge-type K8s clusters; they also include lightweight K3s clusters, MicroK8s clusters and other native K8s clusters.

New feature 3: Remote login to intranet nodes and HPA through the tunnel

tunnel-cloud and tunnel-edge are the two ends of the cloud-edge tunnel. SuperEdge does not keep connections to all edge nodes on every tunnel-cloud instance (Pod); instead, each tunnel-cloud instance carries only a subset of the tunnel-edge connections, that is, each cloud-edge tunnel is a single long-lived connection. Most other cloud-edge tunnel projects require every cloud-side instance to maintain long-lived connections to all edge nodes:

Number of long-lived connections = number of tunnel-cloud instances × number of edge nodes

The main purpose of this design is to support automatic scale-out and scale-in of tunnel-cloud.

As the number of edge nodes in an edge cluster keeps breaking through SuperEdge's previous upper limits, tunnel-cloud can no longer run a statically fixed number of instances; it needs to scale out dynamically so that it can accept more long-lived connections and manage more edge nodes. This is where the requirement for automatic HPA of tunnel-cloud comes from.
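As a hedged illustration of what this can look like, below is a standard autoscaling/v2 HorizontalPodAutoscaler targeting the tunnel-cloud Deployment. The namespace, replica bounds and the custom metric name tunnel_cloud_nodes are assumptions for the sketch; SuperEdge's actual HPA is driven by a per-instance connection metric exposed through the custom metrics pipeline, so substitute whatever your prometheus-adapter configuration actually exports.

# Sketch: scale tunnel-cloud on an assumed per-pod custom metric.
# On clusters older than v1.23, use apiVersion autoscaling/v2beta2 instead.
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tunnel-cloud
  namespace: edge-system             # assumed namespace of the tunnel-cloud Deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tunnel-cloud
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: tunnel_cloud_nodes     # hypothetical metric: edge connections per instance
      target:
        type: AverageValue
        averageValue: "300"          # scale out when an instance carries about 300 connections
EOF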

Finally, with the help of the cloud-edge tunnel, SuperEdge supports secure remote SSH to edge nodes that have no public IP, which brings great convenience to users who need to operate and maintain such nodes remotely.
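The gist, as described in the "SSH operation and maintenance of edge nodes from the cloud" article listed at the end, is that the SSH request is addressed to the edge node by its node name and relayed through a port exposed by tunnel-cloud. A rough sketch, assuming tunnel-cloud exposes an HTTP CONNECT-style proxy on a NodePort (the address, port and protocol here are placeholders; follow the referenced article for the exact setup):

# Placeholders: <tunnel-cloud-ip> is a cloud node fronting the tunnel-cloud Service,
# <nodeport> is the port it exposes for SSH proxying, edge-node-1 is the target node name.
ssh -o ProxyCommand="nc -X connect -x <tunnel-cloud-ip>:<nodeport> %h %p" root@edge-node-1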

New feature 4: Add edge nodes in batches remotely

The final new feature is remote batch addition of edge nodes. This came from the requirements of landing SuperEdge in production at scale, and the code has been open sourced as the SuperEdge penetrator module. Remote batch addition of edge nodes covers two situations:

  • Edge nodes that the cloud can SSH to

    For edge nodes that the cloud can SSH to, this is the common case: penetrator delivers an installation Job that runs remote SSH commands in batches to add the edge nodes. The key question is how to achieve batch addition for edge nodes that the cloud cannot SSH to directly.

  • Edge nodes that the cloud cannot SSH to

    Edge nodes that the cloud cannot SSH to directly, as shown below, can still be added to the edge K8s cluster through a proxy. An edge node that is already in the cluster is used as a springboard: the installation Job is delivered to the springboard node, which then adds the edge nodes on the same intranet in batches. In this way, intranet edge nodes that cannot be reached by SSH from the cloud can still be added remotely in batches.
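The flow is declarative in both cases: the user describes the nodes to add (and, for the intranet case, which in-cluster node to use as the springboard), and the penetrator controller delivers the installation Job. The sketch below is purely illustrative: the API group, kind and every field name are hypothetical stand-ins, not the exact penetrator schema; consult the "add hundreds of edge nodes from the cloud at a time" article listed at the end for the real CRD.

# Hypothetical sketch only; field names and API group are illustrative, not the real schema.
cat <<'EOF' | kubectl apply -f -
apiVersion: nodetask.apps.superedge.io/v1beta1   # assumed group/version of the penetrator CRD
kind: NodeTask
metadata:
  name: add-intranet-edge-nodes
spec:
  nodeNamePrefix: edge                  # hypothetical: prefix for generated node names
  proxyNode: edge-springboard-1         # hypothetical: in-cluster node used as the springboard
  nodeIPs:                              # hypothetical: intranet IPs of the nodes to add
  - 192.168.1.10
  - 192.168.1.11
  sshCredentialSecret: edge-ssh-login   # hypothetical: Secret holding the SSH user and password or key
EOF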

SuperEdge's future: cloud, edge and device

The cloud of SuperEdge's future

The Tencent Cloud edge team has recently open sourced its second project, Clusternet. Despite the name, it is not a cluster networking project but a distributed multi-cluster management project, whose goal is to let users access their K8s clusters as easily as surfing the Internet. Why is this project needed?

  • The first is to manage massive numbers of edge nodes

    The number of edge nodes that one K8s cluster can manage is limited; the current limit given by the community for a native K8s cluster is 5,000 nodes. The more nodes a K8s cluster manages, the more its maintenance cost and technical difficulty grow. Putting a huge number of nodes into one cluster is inherently risky: once something goes wrong in the center, the applications on the nodes may all be affected. With tens of thousands of edge nodes to manage, a single giant cluster is not elegant; many small and beautiful clusters are safer and more stable. Clusternet currently manages a wide variety of K8s clusters, including public cloud, privatized and edge K8s clusters; it provides unified management and access to each K8s cluster from one central control plane, and also enables mutual access between the managed clusters.

  • The second is to meet the requirements of site and application disaster recovery

    Managing the various K8s clusters is only the first step of distributed multi-cluster management; cluster disaster recovery and application disaster recovery are the real goals. Outages are more likely and more frequent at edge sites than in the center. After a site goes down, the services of that site must continue to be provided at an adjacent or backup site, so cluster migration and intra-city active-active are urgent requirements. An edge application is also not deployed at only one site; when one site fails, it must continue to serve from other sites.

The diagram above shows the Clusternet architecture, which currently consists of two components, clusternet-agent and clusternet-hub. clusternet-agent is responsible for registering a child K8s cluster with the parent cluster; clusternet-hub is responsible for handling these registrations, aggregating the kube-apiservers of the child K8s clusters, and deploying applications to multiple K8s clusters.

The edge of SuperEdge's future

The following figure describes the current situation of cloud-edge and edge-edge service access in edge K8s clusters.

Most cloud-edge services are exposed through NodePort; few edge projects achieve seamless service access within the cluster the way a native K8s cluster does.

Edge-to-edge service access is even more difficult. If there is at least a one-way network between edge sites, access can go through a tunnel; if the physical network is completely blocked, traffic can only be relayed through the cloud. And even when cloud-edge and edge-edge service access is achieved, how do we avoid performance loss and overcome the instability of the underlying cloud-edge and edge-edge physical networks?

The solutions here can be found in the SuperEdge community, and more will follow.

The device side of SuperEdge's future

SuperEdge has implemented a native EdgeX Foundry addon, and EdgeX Foundry components can be optionally deployed with the following command:

./edgeadm addon edgex -h
Addon edgex to Kubernetes cluster

Usage:
  edgeadm addon edgex [flags]

Flags:
  --core       Addon the edgex core-services to cluster.
  --app        Addon the edgex application-services to cluster.
  --device     Addon the edgex device-services to cluster.
  --ui         Addon the edgex ui web to your cluster.

For details, see "Connect to IoT devices with the EdgeX Foundry on SuperEdge".
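For example, combining the flags from the help output above, a site that needs device access and the web console but not the application services could install just those pieces (which components to select is entirely up to the user):

# Install only the EdgeX core services, device services and web UI, per the flags above.
./edgeadm addon edgex --core --device --ui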

EdgeX Foundry is just one of a number of device management platforms that SuperEdge will abstract and integrate in order to provide a seamless solution for managing edge devices across platforms. Whatever that solution looks like, SuperEdge will give users the freedom to choose in an addon way and will not be strongly tied to any particular edge device platform.

The last chart shows how SuperEdge and EdgeX Foundry are currently deployed at the edge and how devices are connected. A site can manage its edge devices by deploying a single set of edge services from SuperEdge and EdgeX Foundry.

In the future, SuperEdge will also provide a series of site-level capabilities, including site autonomy, site workloads and site disaster recovery, managing users' edge sites in a unified manner from the cloud.

Finally, a word for you:

Edge computing technology will be the key to the success of the Internet of Everything, enabling 5G and digitization with low latency and low cost!

Original video of the talk: attlee-1251707795.cos.ap-chengdu.myqcloud.com/superedge/v…

Follow the [Tencent Cloud Native] official account and reply with the keyword [Cloud Edge Open Source Summit] in the background to get the PPT of this talk.

SuperEdge related information:

  • Tencent Cloud, together with a number of ecosystem partners, open sources the SuperEdge edge container project

  • 【TKE Edge Container Series 】SuperEdge is easy to learn and use.

  • TKE Edge Containers from 0 to N Learn about SuperEdge

  • 【TKE Edge Container Series 】 Understand the architecture and principle of SuperEdge edge container

  • Install edge K8s cluster and native K8s cluster with edgeadm

  • Addon SuperEdge enables native K8s clusters to manage edge applications and nodes

  • Connect to IoT devices with the EdgeX Foundry on SuperEdge

  • 【TKE Edge Container Series 】 Break the internal network barrier, add hundreds of edge nodes from the cloud at a time

  • 【TKE Edge Container Series 】SuperEdge Cloud Edge tunnel new feature: From the cloud SSH operation and maintenance of edge nodes

  • 【TKE Edge Container Series 】 What are the features of SuperEdge High Availability Cloud side tunnel?

Landing case related information:

  • Edge container practice of Tencent WeMake Industrial Internet platform: Build a more efficient industrial Internet
  • Mind-blowing: with edge containers, the weekly workload of a team of seven or eight can be done in seconds
  • Construction of industrial Internet platform based on edge container technology
  • Deploy the EdgeX Foundry using TKE Edge

About us

For more cloud native cases and knowledge, follow the official account of the same name, [Tencent Cloud Native].

Benefits: reply "Manual" in the background of the official account to get the "Tencent Cloud Native Roadmap Manual" and "Best Practices of Tencent Cloud Native".