In November 2015, the first KubeCon was held in San Francisco in the United States as a small conference of about 200 people. In July 2019, KubeCon came to China for the second time, drawing more than 3,500 cloud native and open source engineers; even Linus Torvalds, the creator of Linux and Git, came to KubeCon China in Shanghai. Jim Zemlin, executive director of the Linux Foundation, said: “I have identified two big events in the open source community: the success of Linux and the explosion of Kubernetes and cloud native. Open source is one of the most successful global innovation enablers in history, Linux has grown into the world’s most important software platform, and cloud native is exploding with unstoppable momentum.”
China has contributed significantly to the overall cloud native movement: China’s contributions to Kubernetes already rank second among all contributors globally, more than 10% of CNCF members come from China, and 26% of Kubernetes certified service providers are Chinese.
As one of the earliest Chinese companies to become a CNCF member, Alibaba Cloud has been continuously practicing and exploring in the field of cloud native technology. Building on Alibaba Cloud’s platinum membership, Ant Financial has also recently joined CNCF as a gold member.
So what new technologies did Alibaba Cloud release at KubeCon China?
Highlight 1: Embracing the community and serving the widest range of developers – the release of a cloud native application management and delivery system
The cloud native application management and delivery system as a whole comprises five projects. Two of them were launched first at this KubeCon: Cloud Native App Hub, the first open cloud native application center in China, and OpenKruise, a cloud native application automation engine.
Cloud Native App Hub – an open cloud native application center
The Cloud Native App Hub aims to become the hub for hosting and distributing applications on the cloud native “highway,” as well as an important base repository for developers in China to consume cloud native applications. In the Kubernetes ecosystem, an “application” is a set of YAML description files, and the Cloud Native App Hub provides a completely open, interactive platform for searching, using, and sharing these application description files.

Helm is one of the most widely used application definition standards in the current Kubernetes application ecosystem, so hosting, searching, and distributing applications in Helm format became the first capabilities the center brought online. To let Chinese developers make better use of Helm Hub, the Alibaba Cloud developer center has established a series of technical collaborations with the Helm community, providing in China a synchronized mirror repository and hub site of the official Helm Hub site hosted in North America. Helm Hub, in turn, officially recommends the Cloud Native App Hub in its core charts repository as the first choice for Chinese developers using Helm charts. In the Cloud Native App Hub, all default Helm charts (applications in Helm format) are regularly synchronized from the official North American Helm Hub site and hosted on GitHub.
During this process, the Cloud Native App Hub automatically “localizes” every chart it synchronizes: it replaces image URLs that are hard to reach from China, such as gcr.io and quay.io, with domestic mirror sources; replaces application artifacts hosted in Google Cloud Storage with domestic mirror addresses; and continuously verifies these charts against the Alibaba Cloud Kubernetes service through a background CI system. Thanks to this work, Chinese developers can search for cloud native applications and install them directly onto any Kubernetes cluster with the helm install command.
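The registry-rewriting part of this “localization” step can be sketched in a few lines. This is a minimal illustration, not Alibaba Cloud’s actual tooling, and the mirror host names below are hypothetical placeholders:

```python
# Hypothetical mapping from hard-to-reach registries to domestic mirrors.
# The mirror hostnames are illustrative, not real Alibaba Cloud endpoints.
MIRRORS = {
    "gcr.io": "gcr.mirror.example.cn",
    "quay.io": "quay.mirror.example.cn",
}

def localize_image(image: str) -> str:
    """Rewrite the registry prefix of a container image reference, if mirrored."""
    registry, _, rest = image.partition("/")
    if registry in MIRRORS and rest:
        return f"{MIRRORS[registry]}/{rest}"
    return image  # images from other registries pass through unchanged

print(localize_image("gcr.io/google-samples/hello-app:1.0"))
print(localize_image("docker.io/library/nginx:latest"))
```

A real pipeline would apply such a rewrite to every image reference found in a chart’s templates and values files before re-hosting the chart.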
It is particularly worth mentioning that the back-end storage used by the Cloud Native App Hub is the same enterprise-grade container image service with which Alibaba Cloud supports the traffic peaks of the Double 11 shopping festival. Behind this service lies the core technology that hosts Alibaba Group’s 100,000 images and has served a cumulative 200 million container image downloads. In addition, the hub provides a “one-click installation” experience: users only need to supply access information for a remote Kubernetes cluster to deploy an application.
OpenKruise – cloud native application automation engine
During the overall migration of the Alibaba economy to the cloud, Alibaba’s technical teams have gradually accumulated a set of technical concepts and best practices that both track upstream community standards and fit the large-scale scenarios of the Internet. The most important of these is undoubtedly how to automate the release, operation, and management of applications.
The name Kruise comes from “cruise,” with a “K” for Kubernetes; the image is of a Kubernetes application cruising on autopilot, carrying Alibaba’s years of application deployment and management experience. Kruise’s aim is to automate everything on Kubernetes.

The project is distilled from best practices in large-scale application deployment, release, and management across the Alibaba economy over many years, from the container platform team’s experience operating group applications and building sites at scale, and from the needs of the thousands of customers served by the Alibaba Cloud Kubernetes service. Kruise draws on the strengths of the cloud native community, integrates the essence of Alibaba’s cloud native practice, feeds it back to the community, and offers guidance on industry best practices for going cloud native so that others can avoid detours.

Kruise’s core is automation: it addresses the automation of Kubernetes applications along several dimensions, including deployment, upgrades, elastic scaling, QoS adjustment, health checking, and migration and repair. The initial open source release focuses on application deployment and upgrades, namely Kruise Controllers: a set of enhanced controller components for deploying, releasing, and operating applications on top of Kubernetes. Kruise will subsequently open source intelligent elastic scaling components as well as components for application QoS self-regulation. As is well known, the core principle of the Kubernetes project is the “controller pattern.”
At present, the Kubernetes project provides a set of default controller components such as Deployment, StatefulSet, and DaemonSet, which offer rich application deployment and management functions. However, as Kubernetes is adopted more and more widely, mismatches between business demands and upstream controller functionality are increasingly common in real enterprise and large-scale scenarios. Take Alibaba as an example: Alibaba’s internal Kubernetes clusters need to serve multiple business units and tens of thousands of applications, a volume that poses huge challenges for scale and high availability. At the same time, the Kubernetes service on Alibaba Cloud serves thousands of enterprise customers and collects and supports a wide variety of customer requirements. These demands, together with the practical experience of the Alibaba economy, ultimately led to the birth of the Kruise open source project.
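The “controller pattern” mentioned above is a control loop that compares desired state with observed state and computes the actions needed to converge them. Real controllers (Deployment, StatefulSet, or Kruise’s enhanced controllers) watch the Kubernetes API server; in this toy sketch both states are plain dictionaries mapping workload names to replica counts:

```python
def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to drive observed state toward desired state.

    This is a toy model of a Kubernetes-style reconcile loop: each action is a
    (verb, workload, count) tuple rather than a real API call.
    """
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("scale-up", name, want - have))
        elif have > want:
            actions.append(("scale-down", name, have - want))
    for name, have in observed.items():
        if name not in desired:
            actions.append(("delete", name, have))  # garbage-collect strays
    return actions

print(reconcile({"web": 3}, {"web": 1, "old-job": 2}))
```

A production controller runs this loop continuously, so transient failures are simply corrected on the next iteration; Kruise’s controllers extend the same idea with richer release and operations strategies.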
Highlight 2: Managed edge containers (ACK@Edge) released
With the rapid growth in the number of intelligent Internet terminal devices and the advent of the 5G and Internet of Things era, the traditional model of centralized storage and computation in cloud data centers can no longer meet terminal devices’ demands for latency, capacity, and computing power. Sinking cloud capabilities down to the edge and the device side, with unified delivery, operations, and control from the center, will be an important trend in cloud computing. IDC estimates that by 2020 more than 50 billion terminals and devices will be connected to the Internet worldwide, and more than 40% of data will be analyzed, processed, and stored at the edge of the network, leaving ample room for edge computing scenarios and imagination.
By function and role, edge computing is usually divided into three parts:
- Cloud: the central nodes of traditional cloud computing, with rich cloud product forms and abundant resources. The cloud is the control plane of edge computing, responsible for unified management, scheduling, and storage of computing power and data across the whole network.
- Edge: the edge side of cloud computing, further divided into infrastructure edge and device edge. The infrastructure edge usually sits in an IDC, has sufficient computing and storage capacity, and is connected to the center by leased lines or backbone networks; CDN nodes are a typical example. The device edge refers to edge nodes outside traditional IT infrastructure that sit closer to devices and data sources, typically data gateways.
- End: terminal devices such as mobile phones, smart home appliances, sensors of all kinds, and cameras.
The main challenges facing edge computing are:
- Cloud-edge-end collaboration: a lack of unified standards for delivery, operations, and control.
- Security: the security risks of edge services and edge data are difficult to control.
- Network: the reliability and bandwidth of edge networks are limited.
- Heterogeneous resources: different hardware architectures, hardware specifications, and communication protocols must be supported, and unified services must be provided on top of heterogeneous resources, networks, and scales.
On the other hand, cloud native technology, represented by Kubernetes, is one of the fastest-developing areas of cloud computing in recent years. Kubernetes has become the de facto standard for container orchestration and is expanding its coverage of the cloud computing landscape at great speed, which can greatly improve the efficiency with which cloud technology extends to the edge.
One of the core values of Kubernetes-based cloud native technology is that, through unified standards, it provides consistent functionality and experience on any infrastructure. With cloud native technology, cloud-edge-end integrated distributed applications become possible, meeting the demand to deliver, operate, and control massive numbers of edge and device applications in a unified, large-scale way. On security, cloud native technology provides more secure workload runtime environments such as containers, along with traffic control, network policy, and other capabilities that effectively improve the security of edge services and edge data. In edge network environments, edge container capabilities built on cloud native technology can guarantee autonomy under weak or disconnected networks, provide effective self-recovery, and tolerate complex network access environments well. Backed by strong community and vendor support, the applicability of cloud native technology to heterogeneous resources has also steadily improved: in the IoT field it already supports multiple CPU architectures (x86-64/ARM/ARM64) and communication protocols while keeping resource consumption low.
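The “autonomy under weak or disconnected networks” described above usually means an edge node caches the last desired state it received from the cloud and keeps operating on that cache when the control plane becomes unreachable. Here is a minimal sketch of that idea; the class and method names are illustrative, not ACK@Edge’s actual API:

```python
class EdgeAgent:
    """Toy edge-node agent illustrating offline autonomy via a local cache."""

    def __init__(self):
        self.cached_spec = None  # last desired state pulled from the cloud

    def sync(self, fetch_from_cloud):
        """Try the cloud first; fall back to the local cache when offline."""
        try:
            self.cached_spec = fetch_from_cloud()
        except ConnectionError:
            pass  # disconnected: keep reconciling against the cached spec
        return self.cached_spec

agent = EdgeAgent()
print(agent.sync(lambda: {"replicas": 2}))  # online: fetches fresh spec
```

When the network recovers, the next successful sync simply refreshes the cache, so no special resynchronization protocol is needed in this simplified model.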
Against this background, Alibaba Cloud released ACK@Edge, committed to realizing cloud-edge-end integration and coordination and extending the boundary of cloud native through non-invasive enhancements.
With the advent of 5G and the Internet of Things, the boundary of cloud computing is expanding. ACK@Edge relies on the managed Alibaba Cloud Kubernetes service to build a general-purpose cloud native infrastructure for edge containers that is applicable to a wide range of scenarios. Following the mainstream cloud native principle of non-invasive design, it delivers a consistent experience across cloud and edge. At the same time, the combination of native Kubernetes plus add-ons makes rapid business integration and extension easy, and building PaaS offerings in the IoT and CDN domains incurs no additional edge resource or maintenance costs.
ACK@Edge is a cloud native edge container product dedicated to cloud-edge-end integration. Its managed edge cluster service helps build cloud native edge computing infrastructure and promotes integration with other cloud products. Upward, it serves as a base for building PaaS in the edge computing field; downward, it supports access to edge computing resources such as ENS and customers’ own IoT nodes, along with edge autonomy, edge secure containers, edge intelligence, and more. It is also committed to building channels and platforms for sinking cloud capabilities such as AI and stream computing down to the edge, broadening the boundary of cloud products. As demand for edge computing explodes and the scale of IoT, CDN, and other edge scenarios grows, ACK@Edge will continue to invest in scale and stability, helping improve the innovation efficiency of edge computing businesses.
Highlight 3: 9 years of technology accumulation: creating the world’s largest cloud native application practice
Alibaba was the first company in China to adopt cloud native technology. As early as 2011, before the industry had even coined the term, Alibaba began exploring container technology. Today, the group’s core businesses such as e-commerce and City Brain have adopted cloud native technology at scale, and Double 11 is regarded as the world’s largest cloud native application practice. Last year, all of Double 11’s online services were containerized: 1,000+ servers could be deployed within 10 minutes, container deployments reached the million scale, and a peak of 325,000 transactions per second was handled successfully. To date, Alibaba Group’s internal container image service hosts 100,000 images and has served a cumulative 200 million image downloads.

This technical capability is also continuously exported: Alibaba Cloud offers its internally accumulated experience with full-link stress testing and rapid elastic scaling as services, and operates the largest public cloud container clusters in China, the largest public image repository in China, and the richest set of scenario best practices. Taking e-commerce as an example, enterprises can use a cloud native architecture to simplify rehearsals and drills on the cloud and improve the efficiency and reliability of handling traffic peaks: ACK provides high business elasticity at the container application level, PolarDB provides horizontal and vertical database scaling, and the PTS performance testing service simulates real business traffic for full-link stress tests. With Alibaba all in on cloud, these capabilities have been distilled into a comprehensive upgrade of cloud native capability.
Even more worth mentioning, Alibaba Cloud has an industry-leading cloud native technology stack and is the only Chinese enterprise included in Gartner’s 2019 competitive landscape report on public cloud container services.
Alibaba Cloud is committed to deep engagement with the open source community, distilling its experience and giving it back to the ecosystem as open source code. The Alibaba Cloud container platform team has already released more than 20 open source components covering Kubernetes, networking, logging, application containerization, serverless, AI, and other directions, such as the high-performance network plugin Terway, the deep learning accelerator Arena, and GPU Sharing for shared GPU scheduling. Today, as more and more enterprises begin to evolve toward cloud native, Alibaba Cloud keeps refining and upgrading its cloud native technology capabilities, aiming to help developers and users achieve greater business value in a standard, efficient, and easy-to-use way.
This article is original content from Alibaba’s Yunqi Community and may not be reproduced without permission.