[Abstract] In 2018, the Ministry of Industry and Information Technology launched a national Innovation and Development Project that proposed establishing a national industrial big data center, in which China Mobile undertook the research and development of the edge-collaboration and data-acquisition functions. This article describes the problems and challenges we faced in this project and how we selected our technology.
I. Problems and challenges
Requirements
Collect production and operation data from factories and aggregate it in the cloud
The cloud exercises unified control: what data to collect and how to process it
Challenges
The edge can actively connect to the cloud, but the cloud cannot actively connect to the edge (edge nodes have no public IP address)
Versatile and flexible enough to accommodate all kinds of industrial devices and protocols
Edge autonomy: edge nodes keep working even when the network is unstable
Edge computing capability: various applications can run on edge nodes
Low resource consumption and low power consumption
II. Technology selection
Our technology selection was driven by our actual needs. Before this project we used EdgeX for data collection and management. EdgeX is quite mature in that area, with strong functionality, but in my view it is a purely edge-autonomous architecture with no built-in cloud synchronization. We did consider workarounds, such as running a VPN from each EdgeX node to our central cloud, but the VPN scheme scales poorly.
The second option was K3s/K8s, which lack cloud-edge collaboration. K8s consumes too many resources to suit our factory environments; K3s has a small footprint, but on the one hand it lacks cloud-edge collaboration, and on the other hand it also lacks device management.
The third was OpenNESS. It is a very general framework, but too general for us: whatever we do, we have to write adapters, with Kubernetes underneath. It is a bit too flexible, the development workload is large, and it lacks device management.
The fourth was OpenYurt, which is similar to KubeEdge in its feature descriptions, but it appeared late: our project was already halfway done at that point, and its overall maturity still lagged behind KubeEdge.
And finally, KubeEdge. It has cloud-edge collaboration with relatively small resource overhead, and it also has device management. I think these two capabilities together make it distinctive; it may be one of the few projects on the market that has both at the same time.
Architecture design
The overall framework
This is the architecture we actually use in the National Industrial Internet Big Data Center. The core is KubeEdge, which plays the roles of both device management and application management. The cloud side runs a K8s cluster, where we deploy KubeEdge's CloudCore; all data, including management data, is stored in the cloud's K8s. The edge side is responsible for running our containerized applications, including applications that do device management and data collection.
Mapper is the standard defined by the community for device management and data collection. The community currently provides Modbus and Bluetooth Mappers. If I want to manage, say, a camera or my own device, I write a Mapper according to the community specification. On top of that we added a layer of encapsulation: management services built with Java and Spring Cloud. Why this layer? If we exposed the KubeEdge or K8s API directly to users, there would be security risks: the interface is quite open, it touches data isolation and cluster-level functions, and a user could perform destructive operations. So we wrap the business logic in an external layer.
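To make the encapsulation idea concrete, here is an illustrative sketch (not the project's actual Java/Spring Cloud code, and all names are hypothetical): a thin business-layer facade that exposes only a whitelist of device operations to tenants instead of the raw Kubernetes/KubeEdge API.

```python
# Hypothetical sketch of the business-layer facade: tenants can only call
# whitelisted operations; anything destructive (e.g. deleting nodes or CRDs)
# is rejected before it ever reaches the K8s API server.

ALLOWED_OPS = {"get_device", "list_devices", "set_desired_twin"}

class DeviceFacade:
    def __init__(self, tenant):
        self.tenant = tenant
        self._devices = {}  # stand-in for calls into the real API server

    def call(self, op, *args):
        # Reject any operation outside the whitelist.
        if op not in ALLOWED_OPS:
            raise PermissionError(f"operation {op!r} is not exposed to tenants")
        return getattr(self, op)(*args)

    def set_desired_twin(self, device, prop, value):
        self._devices.setdefault(device, {})[prop] = value
        return self._devices[device]

    def get_device(self, device):
        return self._devices.get(device, {})

    def list_devices(self):
        return sorted(self._devices)
```

A tenant would call `facade.call("set_desired_twin", "sensor-1", "threshold", 80)`, while a call such as `facade.call("delete_node", ...)` is refused at the facade, which is the point of the extra layer.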
Finally, we also built an industrial app marketplace. The main reason is to set a standard: a developer or manufacturer can build a standards-compliant Mapper application and publish it to the marketplace once it is finished, sharing it with other users either for free or for a charge. We want to build an ecosystem that encourages people to build Mappers on KubeEdge, and we hope Mapper developers can also profit from them. That is one of our considerations.
Data collection
Some of the improvements we made to KubeEdge during the project:
(1) Support for a wider range of industrial devices and protocols
During the project we found that the protocols KubeEdge supported were limited, only Bluetooth and Modbus, and they were fixed in the CRDs with no way for us to modify them, so adding our own protocols was inflexible and required code-level changes. Given how many industrial protocols exist, we added support for custom extensions. The first is extending existing protocols: for example, devices all speaking Modbus may each need some additional configuration, so fields can now be customized to match. The second is protocols the community does not provide at all: in that case a fully custom protocol definition can be used.
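Conceptually, a custom-protocol device definition carries a protocol name plus free-form configuration data. The sketch below models that shape as a plain Python dict with a minimal validity check; the field names echo the KubeEdge Device CRD's customized-protocol idea, but treat the exact names and values here as illustrative.

```python
# Sketch of a custom-protocol device spec (field names illustrative,
# modeled on the customizedProtocol idea: a protocolName plus free-form
# configData that the matching Mapper interprets).

def validate_custom_protocol(spec):
    """Minimal check: a custom protocol needs a name and a config mapping."""
    proto = spec.get("protocol", {}).get("customizedProtocol")
    if proto is None:
        return False
    return isinstance(proto.get("protocolName"), str) and \
           isinstance(proto.get("configData"), dict)

device_spec = {
    "protocol": {
        "customizedProtocol": {
            "protocolName": "opc-da",                       # hypothetical protocol
            "configData": {"host": "10.0.0.5", "rate": 9600},
        }
    }
}
```

Because `configData` is free-form, any vendor-specific field can ride along without changing KubeEdge itself, which is exactly the flexibility the change was after.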
(2) Support more convenient device collection configuration
Industry and IT differ a little here. In IT we generally define a template and then define instances from it; industry generally defines one instance and then copies it, modifying the contents. This reflects reality: if I have ten identical temperature sensors connected to the same industrial bus, their attributes are all the same, and the only difference is their offsets on Modbus, so I only need to change the offset in each Instance. Based on this, we moved PropertyVisitor from the original DeviceModel to DeviceInstance, and then added some more flexible configuration items. For example, the collection cycle used to be a single global setting that could not be configured per point; in industry it must be configurable per measurement point, e.g. temperature collected hourly while energy-consumption data does not need to be collected as often. So we added configuration items such as CollectCycle to PropertyVisitor, and extracted the serial-port and TCP configuration into a common part.
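The per-point CollectCycle idea can be sketched as a tiny scheduler: each property in an instance carries its own cycle, and at every tick the collector picks the points that are due. This is an illustrative sketch, not the Mapper's actual code; field names are assumptions.

```python
# Sketch of per-property collection cycles: each property has its own
# collectCycle (seconds), so temperature can be sampled frequently while
# energy data is collected far less often.

def due_properties(properties, now, last_collected):
    """Return the property names whose collect cycle has elapsed at `now`."""
    due = []
    for name, cfg in properties.items():
        cycle = cfg.get("collectCycle", 60)  # illustrative default cycle
        if now - last_collected.get(name, float("-inf")) >= cycle:
            due.append(name)
    return sorted(due)

props = {
    "temperature": {"collectCycle": 5},     # sample every 5 seconds
    "energy":      {"collectCycle": 3600},  # energy data once an hour
}
```

On the first tick every point is due; afterwards each point reappears only when its own cycle has elapsed, which is the flexibility a single global cycle could not provide.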
(3) Optimize the delivery of twin attributes
(4) Support for bypass data configuration
Bypass data processing
Mappers can push time-series data to an edge MQTT broker (EdgeCore does not process it), and the target topic is configurable
Integrate with EMQX Kuiper, which supports reading metadata from DeviceProfile
The cleaning rules are sent from KubeEdge to Kuiper
Third-party applications get data directly from edge MQTT
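The bypass path above boils down to: the Mapper publishes each sample to a configurable topic on the edge broker, and Kuiper or third-party apps subscribe directly. The sketch below builds such a topic and JSON payload; the topic layout and payload fields are assumptions for illustration, not the project's actual convention.

```python
# Sketch of bypass-data publishing: build a (topic, payload) pair that a
# Mapper would publish to the edge MQTT broker. Topic layout and payload
# schema here are hypothetical.
import json
import time

def make_sample(topic_prefix, device, prop, value, ts=None):
    topic = f"{topic_prefix}/{device}/{prop}"          # e.g. factory/dev1/temp
    payload = json.dumps({"value": value, "timestamp": ts or int(time.time())})
    return topic, payload
```

An actual Mapper would hand the pair to an MQTT client's publish call; everything downstream (Kuiper rules, third-party consumers) only depends on agreeing on the topic and schema.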
Status monitoring
For a commercial product, status monitoring is very important, and I think KubeEdge is still lacking here. The community provides a channel called CloudStream that can work with metrics-server and Prometheus, but you need to configure iptables to intercept traffic, and a misconfiguration can block all of the traffic, so this approach has some problems.
**So we built another scheme:** a scheduled-task container on each edge node does something very simple, pulling metrics from the node every 5 seconds and pushing them to PushGateway, an official Prometheus component. PushGateway sits in the cloud, so we can monitor the whole system that way.
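A minimal sketch of that cron-container, using only the standard library: render the node's metrics in the Prometheus text-exposition format and PUT them to PushGateway's `/metrics/job/<job>/instance/<instance>` endpoint. The gateway URL and metric names are examples, and the network call is defined but not executed here.

```python
# Sketch of the PushGateway scheme: format metrics as Prometheus text
# exposition and push them with an HTTP PUT. Would run every 5 s on the edge.
from urllib import request

def exposition(metrics):
    """Render a {name: value} mapping in Prometheus text format."""
    return "".join(f"{name} {value}\n" for name, value in sorted(metrics.items()))

def push(gateway, job, instance, metrics):
    # PushGateway groups pushed metrics by job/instance labels in the URL path.
    url = f"{gateway}/metrics/job/{job}/instance/{instance}"
    req = request.Request(url, data=exposition(metrics).encode(), method="PUT")
    return request.urlopen(req)

# Example (not executed): push("http://pushgateway.example:9091", "edge",
#                              "node-1", {"node_cpu_usage": 0.42})
```

Prometheus in the cloud then scrapes PushGateway as usual, so no iptables interception is needed on the edge.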
III. Other problems encountered in the project
Multi-tenant Sharing
K8s itself has a multi-tenant design, but when KubeEdge was built, Device did not take Namespace into account. As a result there is a bug when a Namespace is used on a Device, so KubeEdge currently cannot place different devices under different namespaces. We can only handle this in the business layer, for example by attaching labels to Devices and filtering by label. Edge worker nodes cannot belong to a Namespace either, but in our scenario a node belongs to a single tenant and is exclusively owned by that tenant, so again we encapsulate this in the upper-layer business logic.
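The label workaround is simple enough to sketch: tag each device with a tenant label and filter on it in the business layer. The label key and device shape below are hypothetical.

```python
# Sketch of the business-layer workaround for the missing Device namespace
# support: filter devices by a tenant label instead of by namespace.

def devices_for_tenant(devices, tenant):
    """Return names of devices labeled for the given tenant."""
    return [d["name"] for d in devices
            if d.get("labels", {}).get("tenant") == tenant]

devices = [
    {"name": "sensor-a", "labels": {"tenant": "t1"}},
    {"name": "sensor-b", "labels": {"tenant": "t2"}},
    {"name": "sensor-c", "labels": {}},  # unlabeled: visible to no tenant
]
```

The same label can also drive a K8s label selector on the API query itself, so the filtering does not have to happen client-side.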
IP Address Restriction
In our current design, each tenant gets a K8s cluster, and the tenant's edge devices connect to it. This means each cloud cluster requires a public IP, and IP resources are quite limited. How do we cope with a limited address pool, for example, if a project gives us 200 public IP addresses but we have 1,000 users?
**1) IPv6, the most thorough solution:** so far the community says it is supported, but we have not tried it yet
**2) Port reuse:** KubeEdge actually needs only a few ports (10003 by default, at most 4-5 ports in total), so one public IP can be reused by multiple KubeEdge instances
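A back-of-the-envelope sketch of that multiplexing: if each cloud-side instance needs only a handful of ports, one public IP can host many instances on distinct base ports. The port range and per-instance count below are illustrative, not the project's actual allocation.

```python
# Sketch of public-IP multiplexing: carve a port range on each public IP
# into fixed-size slices, one slice per KubeEdge cloud instance.

def allocate_endpoints(ips, ports_per_instance, port_range=(10000, 30000)):
    """Return (ip, base_port) pairs, one per cloud-side instance."""
    endpoints = []
    lo, hi = port_range
    per_ip = (hi - lo) // ports_per_instance  # instances that fit on one IP
    for ip in ips:
        for i in range(per_ip):
            endpoints.append((ip, lo + i * ports_per_instance))
    return endpoints
```

With 5 ports per instance and a 20,000-port range, one IP serves 4,000 instances, so 200 IPs comfortably cover 1,000 tenants.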
**3) High availability:** the community provides this, using Kubernetes' own functionality: Service + Deployment and status checks
Application cases
Case 1: OPC-UA data collection and processing
Through our app marketplace, a user can order the OPC-UA Mapper onto an edge gateway, and then, through one of our configuration pages, collect data from industrial devices at the edge, for example:
- OPC-UA Mapper collects temperature data
- Edge node alarm applications directly obtain data from the edge
- When the threshold is exceeded, an alarm is generated and the device is suspended
- The threshold can be adjusted from the cloud through KubeEdge
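The alarm step in this case reduces to comparing each collected sample against a threshold that the cloud can adjust through the device twin. A minimal sketch, with hypothetical names and a hypothetical "suspend" action string:

```python
# Sketch of the edge alarm logic in Case 1: when a sample exceeds the
# cloud-adjustable threshold, raise an alarm and request a device stop.

def check_point(value, threshold):
    """Return (alarm, action) for one collected sample."""
    if value > threshold:
        return True, "suspend-device"  # hypothetical action identifier
    return False, None
```

Because the edge app reads the threshold locally, this keeps working during a network outage, and the cloud merely updates the desired twin value when connectivity returns.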
Case 2: industrial video security
This is a mostly edge-autonomous application with little cloud interaction: once delivered to the edge side it runs independently, mainly doing AI inference at the edge. When we do want to combine it with the cloud, we train the model in the cloud and push the trained model to the edge through KubeEdge. It mainly includes:
KubeEdge manages the configuration of video security applications on edge nodes
Edge video security applications run autonomously on edge nodes
Pulling streams from cameras and running AI inference
Detection of safety-helmet and work-clothes wearing
Detection of entry into prohibited dangerous areas
IV. Summary
(1) Industrial data collection based on KubeEdge
Currently, CustomizedProtocol and CustomizedValue support various industrial protocols
ConfigMaps enable the cloud to control edge data-collection applications (Mappers)
Bypass data (Spec/Data) provides convenient support for processing time-series data
(2) Productization of KubeEdge
Multi-tenant scheme
Multiple monitoring schemes
High availability solution
Public IP address multiplexing scheme
This article is shared from the Huawei Cloud community post "KubeEdge Architecture Design and Application in the National Industrial Internet Big Data Center", original author: technology torch bearer.