Authors: Wang Chen, Mu Huan, Xi Yang et al
Review & proofread: Dosa Dosa, Hong Rya, Zhang Lei, Chi Min
Editing & Typesetting: Wine round
The essence of a container is isolation technology, and it solves the problems its predecessor, virtualization, left unsolved: slow environment startup and low resource utilization. The two core mechanisms of container technology, Namespace and Cgroup, address exactly these two problems. Namespaces replace the hypervisor and guest OS as the "apparent" isolation technology: two operating systems collapse into one, making containers lighter and faster to start. Cgroups serve as the resource isolation technology, limiting a process to consuming only part of the machine's CPU and memory.
Of course, container technology is popular chiefly because it provides a standardized software delivery artifact: the container image. Only on the basis of the container image can continuous delivery truly happen.
There are many more reasons to use container technology that we won’t go into here.
Meanwhile, cloud computing solved elastic scaling at the basic resource layer, but not the batch, rapid deployment problems that elastic scaling creates at the PaaS layer. Thus, container orchestration systems came into being.
According to third-party research, containers and Kubernetes have become the mainstream choice of the cloud native era, yet many enterprises run into trouble when they actually adopt them. We try to summarize some common pain points and solutions here, which may serve as a reference for enterprises implementing container technology.
What’s hard to use?
Containers and Kubernetes are undoubtedly advanced, but a large number of enterprises get into trouble when they start to embrace Kubernetes, the de facto standard for container orchestration. "K8s is like a double-edged sword: it is the best container orchestration technology, but at the same time it has high complexity and a high barrier to entry, which often leads to common mistakes." Even Google, the creator and core driver of Kubernetes, has acknowledged the problem.
In an interview, Alibaba senior technical expert Lei Zhang analyzed the nature of Kubernetes, pointing out that "Kubernetes itself is a distributed system rather than a simple SDK or programming framework, which in itself raises its complexity to the level of a system-grade distributed open source project. In addition, Kubernetes was the first to popularize declarative APIs in open source infrastructure, and built on them a series of usage paradigms such as the container design pattern and the controller model. These advanced, forward-looking designs also give the Kubernetes project a steep learning curve on its way to public acceptance."
We have summarized four major sources of complexity in Kubernetes.
1. Cognitive complexity: Kubernetes differs from familiar backend development systems. It extends a whole new theory and provides a whole new set of technical concepts, such as Pod, Sidecar, Service, resource management, scheduling algorithms and CRDs. These concepts are designed mainly for platform teams rather than application developers, and offer many powerful, flexible capabilities. However, this not only creates a steep learning curve that hurts the application developer experience, but in many cases also leads to misoperation and even production failures.
2. Development complexity: K8s uses a declarative approach to orchestrate and manage containers. To use it, you describe the desired state in YAML files, but this extra step in complex applications affects developer productivity and agility. In addition, K8s lacks a built-in programming model, so developers must rely on third-party libraries to handle dependencies between programs, which hampers development efficiency and adds unnecessary DevOps overhead.
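To make the declarative model concrete, here is a minimal Deployment manifest (the name and image are placeholders): you declare the desired end state, three replicas of one container, and K8s works out how to get there. This is also the extra YAML step the paragraph above refers to.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # placeholder name
spec:
  replicas: 3                # desired end state: three Pods
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: web
          image: nginx:1.21        # placeholder image
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m            # the scheduler uses these for placement
              memory: 128Mi
```

Applied with `kubectl apply -f`, the same file works unchanged in every environment; the cluster, not the operator, figures out the steps to reach this state.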
3. Migration complexity: Migrating an existing application to K8s is complicated, especially for a microservice architecture. In many cases specific components, or even the whole architecture, must be refactored along cloud native principles: for example, removing state dependencies such as writes to a local directory or assumptions about startup order, removing network dependencies such as hard-coded IP addresses, and removing quantity dependencies such as a fixed replica count.
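One of the smallest such refactors can be sketched in a few lines: replacing a hard-coded peer address with configuration injected from the environment, so replicas stay interchangeable and relocatable. The variable and service names below are hypothetical.

```python
import os

# Hypothetical refactor sketch: instead of a hard-coded Pod IP, read the
# backend address from an environment variable that K8s can inject via the
# Pod spec or a ConfigMap; fall back to a stable Service name, not an IP.
DEFAULT_ENDPOINT = "inventory-svc:8080"  # a Service name, not a Pod IP

def backend_endpoint() -> str:
    # The same image now runs unchanged in dev, test and production;
    # only the injected configuration differs.
    return os.environ.get("BACKEND_ENDPOINT", DEFAULT_ENDPOINT)

print(backend_endpoint())
```

The same pattern applies to the other dependencies listed above: state moves to volumes or external stores, and replica counts come from the Deployment spec rather than the application.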
4. Operations complexity: The declarative API of K8s subverts the traditional procedural operations model; a declarative API corresponds to end-state-oriented operations. As cluster scale grows, the difficulty of operating self-managed clusters built directly on open source K8s grows with it, showing up in cluster management, application release, monitoring, logging and other areas, and cluster stability faces extremely high challenges.
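The end-state-oriented model behind the declarative API can be illustrated with a toy reconcile loop (purely a sketch, not real controller code): the operator states a desired replica count, and a controller repeatedly nudges the observed state toward it.

```python
# Toy sketch of the controller/reconcile model: the operator declares the
# desired end state; the loop converges the current state toward it one
# step at a time, much as a K8s controller converges cluster state.
def reconcile(desired: int, current: int) -> int:
    if current < desired:
        return current + 1  # start one more replica
    if current > desired:
        return current - 1  # stop one surplus replica
    return current          # end state reached; nothing to do

replicas = 0
while replicas != 3:        # declared end state: 3 replicas
    replicas = reconcile(3, replicas)
print(replicas)  # 3
```

The operational shift is that you no longer script the steps; you describe the destination, and must learn to reason about a system that converges on its own.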
Is there another solution?
Technology always has two sides. Containers are revolutionizing cloud computing infrastructure and becoming the new computing interface, and Kubernetes has created a unified infrastructure abstraction layer that shields platform teams from the "compute", "network" and "storage" concerns we used to have to focus on. It lets us easily build any vertical business system we want on top of Kubernetes without worrying about any detail of the infrastructure layer. This is the fundamental reason Kubernetes is called the Linux of cloud computing and the "Platform for Platforms".
But is working directly with Kubernetes the only way to apply container technology? The answer is no. In the evolution of container technology we have also seen many open source projects and commercial products that lower the barriers to container orchestration. In the following sections we introduce them one by one, from the lowest level of abstraction to the highest.
Open source tools around the Kubernetes ecosystem
OAM/KubeVela is an open source project hosted by the CNCF that aims to reduce the complexity of developing and operating applications on K8s. It was initially launched jointly by Alibaba Cloud and Microsoft.
KubeVela, the standard implementation of the Open Application Model (OAM), is infrastructure-neutral, natively extensible and, most importantly, completely application-centric. In KubeVela, "applications" are designed as first-class citizens of the entire platform. Application teams only need to work with a few cross-platform, cross-environment upper-level abstractions, such as components, O&M traits and workflows, to deliver and manage applications, without attending to any infrastructure details or differences. Platform administrators, in turn, can use IaC to configure which component types and O&M capability sets the platform supports, adapting it to any application hosting scenario.
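As an illustration, an application in KubeVela is described roughly like this (adapted from the project's getting-started examples; the component and trait types depend on what the platform team has installed, and fields may vary across KubeVela versions):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: hello-app            # placeholder name
spec:
  components:
    - name: web              # a component: what to run
      type: webservice       # component type registered by the platform team
      properties:
        image: oamdev/hello-world   # placeholder image
        port: 8000
      traits:
        - type: scaler       # an O&M trait: how to operate it
          properties:
            replicas: 2
```

Note that nothing here is a raw K8s resource: the application team declares components and traits, and the platform translates them into whatever Deployments, Services and CRDs the infrastructure needs.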
KubeVela is built entirely on top of K8s, so it has natural integration ability and universality: it surfaces all the capabilities of K8s and its ecosystem directly, rather than layering an extra abstraction on top. KubeVela therefore suits technical teams that already have some K8s platform development and operations capability and want to use the full range of K8s capabilities to extend their platform.
Containers have evolved from an isolation technology into an ecosystem, and open source tools like KubeVela that greatly reduce the complexity of using K8s will gradually release their vitality, allowing developers to enjoy the efficiency and convenience of cloud native without having to become K8s experts.
Sealer is an open source solution for packaging and delivering distributed applications that greatly simplifies the delivery complexity and consistency problems of container projects. The artifact sealer builds, called a "cluster image", has K8s embedded in it and can be pushed to a registry to share with other users; very common distributed software is also available ready-made in the official repositories.
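The build input is a Kubefile, which follows a Dockerfile-like syntax. The sketch below mirrors the shape shown in the project's documentation; the base image tag and chart name are placeholders:

```
FROM kubernetes:v1.19.8                # base cluster image with K8s embedded
COPY mysql-chart .                     # bundle the application's artifacts
CMD helm install mysql ./mysql-chart   # runs when the cluster comes up
```

Building this with something like `sealer build` yields a cluster image that can be pushed to a registry and run elsewhere, so the whole cluster plus application ships as a single artifact.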
Delivery is another pain point of the container ecosystem, plagued by complexity and consistency problems, especially in industrial-scale Kubernetes delivery projects with long delivery cycles and demanding quality requirements. Sealer is a good fit for software developers, ISVs and similar enterprises, and can shorten deployment times to the hour level.
Open, standardized, enterprise-class Kubernetes service
Most cloud vendors provide Kubernetes-as-a-Service container platforms, such as AWS EKS and Alibaba Cloud ACK. These services greatly simplify K8s cluster deployment, operations, network and storage, and security management; they provide K8s services certified against CNCF conformance standards, meet the workload requirements of almost all scenarios, and offer rich extension and customization capabilities. In addition, most cloud vendors build varying degrees of encapsulation on top of the open source Kubernetes framework to suit enterprises of different backgrounds and scenarios, offering distributions and Pro editions. Alibaba Cloud ACK Pro, for example, provides hosted masters and fully managed node pools, and fully integrates IaaS capabilities to become more efficient, secure and intelligent, delivering best practices and full-stack optimizations for various container cluster scenarios to enterprises as built-in services.
Judging by the current user base, this is the mainstream choice of most Internet enterprises adopting container technology.
For more information, see "Alibaba Cloud Container Service Release: A New Generation of Efficient, Intelligent, Safe and Boundless Platform".
Kubernetes service evolving to Serverless
Traditional Kubernetes adopts a node-centric architecture: nodes are the carriers on which Pods run, and the Kubernetes scheduler selects a suitable node from the worker node pool to run each Pod.
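A toy sketch of that node-centric placement step (not the real kube-scheduler, which filters and scores on many more dimensions): among the nodes with enough free capacity, pick the one with the most headroom.

```python
from typing import Dict, Optional

# Toy node-centric scheduler: filter nodes that can fit the Pod's CPU
# request, then pick the one with the most free capacity.
def schedule(pod_cpu: int, free_cpu: Dict[str, int]) -> Optional[str]:
    fitting = {node: free for node, free in free_cpu.items() if free >= pod_cpu}
    if not fitting:
        return None  # no node fits: the Pod stays Pending
    return max(fitting, key=fitting.get)

nodes = {"node-a": 2, "node-b": 6, "node-c": 4}  # free CPU cores per node
print(schedule(3, nodes))  # node-b has the most headroom
```

The Serverless variant described next removes exactly this coupling: there is no fixed node pool for the scheduler to pick from, so capacity appears on demand.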
For Serverless Kubernetes, the most important idea is decoupling the container runtime from the specific node environment. Users no longer need to attend to node operations and security, which reduces operations costs; container elasticity becomes much simpler to implement, since Pods can be created on demand without capacity planning; and the Serverless container runtime can be backed by the cloud's entire elastic compute infrastructure, which underwrites both the cost and the scale of that elasticity.
Many cloud vendors are also merging containers and Serverless further, for example Alibaba Cloud's Serverless container service ASK and Google GKE Autopilot. This node-free operations model reduces the complexity customers face in operating K8s nodes and clusters: container applications can be deployed directly without purchasing servers. Meanwhile, applications can still be deployed through the K8s command line and API, taking full advantage of K8s orchestration, with on-demand billing based on the CPU and memory resources configured for the application.
This kind of service is very good at handling job-like tasks, such as algorithm model training in the AI field, and offers a development experience largely consistent with ordinary K8s environments, making it a very good complement to the container service ecosystem.
More information can be found at Serverless Kubernetes: Ideals, Realities and The Future.
A new generation of PaaS services powered by container and Serverless technologies
Demand in the enterprise market is always layered and diverse, which is closely tied to the distribution of technical talent: not every enterprise can build a sufficiently strong technical team, especially outside the first-tier cities. And whenever a new technology is born, its adoption always proceeds in phases, which leaves market space for more product forms.
K8s provides full lifecycle management for container applications, but it is very rich, complex and flexible, which is a strength and sometimes also a burden. For R&D and operations staff who grew used to managing systems from the application's perspective in the virtual machine era, even though AWS EKS and Alibaba Cloud ASK reduce the operational complexity of K8s to a degree, there remains a desire to lower the threshold of using container technology even further.
Containers and K8s do not have to be used together. In some newer PaaS services, such as Alibaba Cloud Serverless App Engine (SAE), the underlying virtualization is replaced by container technology, making full use of container isolation to improve startup time and resource utilization, while at the application management level the familiar management paradigm for microservice applications is retained: users do not need to learn the vast and complex K8s to manage their applications. These new PaaS services usually come with a full set of microservice governance capabilities built in, so customers need not worry about framework selection, data isolation, distributed transactions, circuit breaker design, traffic throttling and degradation, and so on, nor about secondary custom development on components with limited community maintenance.
In addition, once the underlying compute resources are pooled, their inherent Serverless nature means users no longer purchase and hold servers separately, but simply configure the CPU and memory the application needs, so that container + Serverless + PaaS combine into one: technical advancement, better resource utilization and an unchanged development and operations experience come together. Compared with the other solutions in this article, this type of solution is characterized by a PaaS experience, letting the new technology land more smoothly.
Most traditional industries, Internet enterprises whose technical strength leans toward the business layer, and startups that do not want backend constraints to slow down business iteration tend to prefer products in PaaS form. Whatever the enterprise's profile, PaaS services have delivery advantages in the following scenarios:
- Launching a new project that you want to validate quickly and without fuss, while keeping staffing costs under control;
- Business volume and user numbers are growing fast, business stability is getting hard to manage, new version releases and online application management are becoming nerve-racking, and the team's technical reserves cannot keep up with the pace of change;
- Deciding to upgrade a monolithic architecture to microservices, but finding, after evaluating the project, that the upgrade risk is relatively high because the team lacks microservice experts.
For more information, see "Breaking the Serverless Landing Boundary: Alibaba Cloud SAE Releases 5 New Features".
More extreme Serverless service – FaaS
With the advent of FaaS, business scenarios with elasticity demands have a better option. More and more large and medium-sized enterprises are peeling elastically scaling execution units out of their traditional backends and running them on Serverless architecture.
This makes FaaS (Function as a Service) an alternative to containers and K8s as a form of general-purpose computing power.
Like Serverless services such as Google Cloud Run and AWS App Runner, the product form of FaaS is becoming more and more open, with fewer and fewer restrictions on what can run. It suits not only event-driven computing models but also single Web applications and Job scenarios, helping users maximize elasticity and further improve the utilization of computing resources.
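The FaaS programming model itself is tiny: you write a stateless handler that the platform invokes once per event, while scaling, scheduling and pay-per-use billing happen outside your code. A minimal, vendor-neutral sketch (the handler signature varies by provider, so treat this one as illustrative):

```python
import json

# Minimal FaaS-style handler sketch: no server, port or process lifecycle
# to manage; the platform calls this function once per incoming event.
def handler(event: bytes, context=None) -> str:
    payload = json.loads(event)
    # business logic only
    return json.dumps({"greeting": "Hello, %s!" % payload["name"]})

print(handler(b'{"name": "FaaS"}'))
```

Because each invocation is independent, the platform can run thousands of copies in parallel or none at all, which is exactly the elasticity the scenarios above demand.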
For example, Lilith in the game industry applies Function Compute to battle validation, verifying that the battles a player's client uploads are not rigged. Battle validation usually has to be computed frame by frame, and CPU consumption is very high: if a 1v1 battle takes N ms, a 5v5 battle takes a corresponding 5N ms, which demands high elasticity. Moreover, the SLB instances mounted in a container architecture cannot perceive the actual load of Pods because of their round-robin mechanism, resulting in load imbalance, which led to infinite loops and stability risks.
Function Compute's scheduling system helps Lilith place each request appropriately, and for the infinite-loop problem it also provides a mechanism that kills processes on timeout, sinking the complexity of the scheduling system into the infrastructure. In addition, after deep optimization of Function Compute, cold start latency has dropped dramatically: the path from scheduling, to obtaining compute resources, to service startup now takes roughly a second or so.
In addition, the emergence of FaaS has greatly freed up the energy that full-stack engineers at startups used to spend on DevOps to host mini programs, websites and other monolithic Web applications. For example, Function Compute lowers the server maintenance threshold for frontend languages such as Node.js: anyone who can write JS code can maintain a Node service.
For more information, you can go to: Overcoming industry stumbling Blocks, Alibaba Cloud Function Computing releases 7 Technical Breakthroughs
What fits is best
More demands always mean more investment; that never changes. After deciding to introduce container technology, and before adopting K8s, we need to figure out why we need K8s.
If we want to make full use of the complete capabilities of K8s and the team has the corresponding technical reserves, KubeVela is the ideal open source choice, and Sealer can also help us reduce delivery complexity. If we would rather hand the packaging of K8s at various levels over to cloud vendors so as to meet the needs of different business scenarios more efficiently, the commercial container services that cloud vendors provide are a good choice. And if containers and K8s do not fit the needs of elastic, bursty workloads, we can choose FaaS.
However, if our applications are not that complex and we simply want to simplify application lifecycle management and the underlying infrastructure, ensure high business availability and focus on business development, then we may not need K8s to orchestrate container applications at all. After all, K8s was derived from Google Borg, which was built to manage Google's vast fleet of container applications.
Reference articles:
- "The Past Life of Cloud Computing", Liu Chao
- "Flexible and Efficient Cloud Native Cluster Management Experience: Using K8s to Manage K8s", Huaiyou, Linshi
- "Will Complexity Be Kubernetes' Achilles Heel?", Zhao Yuying
- "Simplifying Kubernetes For Developers", Rishidot Research
- "KubeVela Officially Open Source: A Highly Extensible Cloud Native Application Platform and Core Engine", OAM project maintainer
- "KubeVela 1.0: Unlocking the Future of Programmable Application Platforms", OAM project maintainer
Related links:
1. Project Address: github.com/oam-dev/kub…
2. Project Address: github.com/alibaba/sea…