Author: Zhengcaiyun | Wang Xun

In recent years, with the rapid development of the Internet, new technologies have sprung up like bamboo shoots after rain to keep pace with fast-growing business. Container-centric cloud native technology is growing rapidly, and Kubernetes, as the new infrastructure and de facto standard for container orchestration, is undoubtedly the brightest star.

However, although Kubernetes solves large-scale application deployment, resource management, and scheduling well, it is not friendly to business delivery, and deploying Kubernetes itself is fairly complex. Among the applications constantly emerging around the Kubernetes ecosystem, there has long been no tool that can package business, middleware, and the cluster together for integrated delivery.

sealer, an open source project initiated by the Alibaba Cloud intelligent cloud native application platform team and co-built by Zhengcaiyun and Harmonycloud, fills this gap in Kubernetes' integrated delivery story with an elegant design that treats the cluster plus its distributed applications as a single deliverable. Zhengcaiyun, as a representative of the government procurement industry, has used sealer to complete the private delivery of a large distributed application as a whole; that delivery practice fully demonstrates sealer's flexible and powerful integrated delivery capabilities.

Background

Zhengcaiyun's privatized delivery customers are government and enterprise scenarios, and the business scale to be delivered is large: with 300+ business components and 20+ middleware services, the target infrastructure is heterogeneous and uncontrollable, and the network is strictly restricted; some sensitive scenarios are even completely air-gapped. In this context, the biggest pain points of business delivery are handling deployment dependencies and guaranteeing delivery consistency. Delivering the business uniformly on Kubernetes ensures a consistent runtime environment, but a series of problems still needed urgent solutions: how to carry all the images the deployment depends on, how to handle the various packages uniformly, and how to keep the delivery system itself consistent.

As shown in the figure above, privatized delivery at Zhengcaiyun involves six steps: confirm user requirements -> present resource requirements to the user -> obtain the resource list provided by the user -> generate and prepare configuration based on the resource list -> prepare deployment scripts and dependencies -> actual delivery. The preparation steps and the actual delivery consume a great deal of manpower and time.

The Pain of Privatized Delivery

In the cloud native era, the emergence of Docker solved the environment consistency and packaging problems of a single application, so business delivery no longer spends large amounts of time on the deployment environment as traditional delivery did. Kubernetes and other container orchestration systems then solved the unified scheduling of underlying resources and of the application runtime. However, delivering a complex business as a whole remains a huge problem. Take Zhengcaiyun's scenario as an example: deploying and configuring resource objects such as Helm charts, RBAC, Istio gateways, CNI plugins, and middleware, plus delivering more than 300 business components, makes every private delivery cost a large amount of labor and time.

Zhengcaiyun's business is growing rapidly, the demand for privatized deployment projects keeps increasing, and the high-cost delivery method can no longer support actual demand. Reducing delivery cost while guaranteeing delivery consistency was the most urgent problem for the operations team to solve.

Discovering sealer

In its early days, Zhengcaiyun used Ansible for business delivery. The Ansible solution achieved a degree of automation and reduced delivery costs, but several problems remained:

1. Ansible only covers the deployment process itself; deployment dependencies must be prepared independently, which brings extra cost and availability verification. Moreover, Zhengcaiyun's privatized scenarios generally restrict external network access strictly, so fetching dependencies directly from the Internet is not feasible either.

2. Using Ansible to cope with differentiated requirements is tiring. In private delivery scenarios where user needs and business dependencies differ, re-editing the Ansible playbook for each delivery costs significant debugging time.

3. Ansible's declarative language is weak at expressing complex control logic.

4. The Ansible runtime environment must itself be prepared before deployment and delivery, so zero-dependency delivery is impossible.

Ansible is more of a glue layer, better suited to operations with simple logic. As privatization projects multiplied, Ansible's delivery shortcomings began to emerge: each project required a large amount of time, and the Zhengcaiyun team began exploring how to optimize the delivery system. We investigated a number of technical solutions, but existing Kubernetes delivery tools focus on delivering the cluster itself rather than the business layer. Although encapsulation could be built on top of a cluster deployment tool, that approach is no different from deploying upper-layer services with Ansible onto a deployed cluster.

Fortunately, we discovered the sealer project: a packaged distributed application delivery solution that solves the delivery of complex applications by packing distributed applications and their dependencies together. Its elegant design manages the packaged delivery of an entire cluster through an image ecosystem modeled on container images.

When using Docker, we define the runtime environment and packaging of a single application through a Dockerfile. sealer's technical principle can be explained by analogy: treat the whole cluster as one machine and Kubernetes as the operating system. A Kubefile defines the applications installed in this "operating system" and packages everything into a cluster image; sealer run then delivers the entire cluster and its applications just as docker run delivers a single application.
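A minimal sketch of the analogy, assuming sealer's documented Kubefile syntax and CLI (the base image tag, manifest name, IP address, and password are placeholders):

```
# Kubefile: the cluster "OS" plus an application layered on top
FROM kubernetes:v1.19.8
COPY recommended.yaml manifests
CMD kubectl apply -f manifests/recommended.yaml
```

```shell
# Build the cluster image, then deliver cluster + application in one step
sealer build -f Kubefile -t my-cluster:v1.0 .
sealer run my-cluster:v1.0 --masters 192.168.0.2 --passwd xxxx
```

Just as with Docker, the built cluster image is self-contained: running it on bare hosts brings up Kubernetes and the application together.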

Excited to find sealer, we invited community partners to the company for an exchange. sealer was still a new project, only a few months old, and we hit many problems and pitfalls in actual use. But we did not give up, because we had great expectations of and confidence in sealer's design model, and we chose to build and grow together with the community. The final successful landing also proved that our choice was correct.

Community collaboration

When we decided to work with the community, we conducted a comprehensive evaluation of sealer and, combined with our requirement scenarios, identified the following key issues:

1. High image caching cost. Initially sealer offered only Cloud Build, which packaged a sealer cluster image by first pulling up a real cluster on cloud resources. Driven by this requirement, we proposed and contributed the Lite Build approach, which parses Helm charts, resource definition YAML, and image manifests to resolve and cache images directly. Lite Build is the cheapest build method: it needs no running cluster, and a single host that can run sealer is enough.

2. After business delivery there was no verification mechanism, so the status of each component in the Kubernetes cluster had to be checked manually. We therefore contributed cluster and component status check functionality.

3. Some early sealer configurations were hard-coded in the rootfs. For example, the Registry deployment host was fixed to the first master node.

4. After deploying a cluster with sealer, nodes need to be added to it, so we contributed the sealer join capability.
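The core of the Lite Build idea described above is resolving image references statically instead of pulling up a cluster. The following is a simplified illustration of that idea, not sealer's actual implementation: it only scans plain `image:` fields in already-rendered `*.yaml` manifests.

```python
import re
from pathlib import Path

# Matches lines like "  image: nginx:1.21" or "- image: repo.example.com/app:v1"
IMAGE_RE = re.compile(
    r"""^\s*(?:-\s*)?image:\s*["']?([\w./:@-]+)["']?\s*$""",
    re.MULTILINE,
)

def collect_images(manifest_dir):
    """Walk rendered manifests and collect every container image reference.

    Each collected image would then be pulled once and stored in the cluster
    image's built-in registry cache, so delivery needs no external network.
    """
    images = set()
    for path in Path(manifest_dir).rglob("*.yaml"):  # .yml omitted for brevity
        images |= set(IMAGE_RE.findall(path.read_text()))
    return sorted(images)
```

A real resolver must also render Helm charts first and read image manifests, as the contributed Lite Build does; this sketch shows only the YAML-scanning step.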
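The contributed status check boils down to comparing desired and ready replica counts for every workload until all are ready. A minimal sketch of that logic (the workload dicts mimic trimmed-down fields from `kubectl get deployments -o json`; the polling wrapper is illustrative):

```python
import time

def unready_components(workloads):
    """Return names of workloads whose ready replicas lag the desired count."""
    return [w["name"] for w in workloads
            if w.get("readyReplicas", 0) < w["replicas"]]

def wait_until_ready(fetch_workloads, timeout=300, interval=5):
    """Poll the cluster until every component is ready or the timeout expires.

    Returns the list of still-unready component names (empty on success).
    """
    deadline = time.time() + timeout
    pending = unready_components(fetch_workloads())
    while pending and time.time() < deadline:
        time.sleep(interval)
        pending = unready_components(fetch_workloads())
    return pending
```

In practice `fetch_workloads` would query the Kubernetes API for Deployments, StatefulSets, and DaemonSets; here it is just a callable so the check logic stands alone.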

In addition, a few useful and powerful sealer features from our landing are worth mentioning:

1. Cluster images produced by sealer build can be pushed directly to a private Docker image registry such as Harbor. Then, as with Docker images, you can build on an existing image and extend its functionality.

2. The sealer community has optimized Registry and Docker to support multi-source, multi-domain proxy caching, which is a very useful feature. When handling image dependencies, we would normally need to cache an image and rewrite its address, for example caching a public image into a private registry, after which the image address referenced by the corresponding resource objects must be changed to the private registry's address in step. sealer's built-in Registry, however, is optimized to hit the cache without modifying image addresses. In addition, the built-in Registry can act as a proxy for multiple private registries, which is useful in scenarios with several private repositories.
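For comparison, the stock components only support a single upstream: Docker Distribution's pull-through cache points at exactly one `remoteurl`, and dockerd's `registry-mirrors` setting only applies to Docker Hub pulls. sealer's fork lifts these restrictions. The standard, documented single-upstream configuration looks like this (shown only as a reference point):

```yaml
# Docker Distribution (registry) config: pull-through cache, one upstream only
proxy:
  remoteurl: https://registry-1.docker.io
```

With the stock registry, caching images from several domains would require one proxy instance per upstream plus address rewriting in every manifest; the sealer-optimized Registry avoids both.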

Landing Practice

With sealer we redefined the delivery process: business components, containerized middleware, image caches, and other components are delivered directly by sealer, defined through a Kubefile. In Lite Build mode, image dependency parsing and built-in caching happen automatically at build time.
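A sketch of what such a Kubefile can look like, assuming sealer's documented directives; the chart and script names are illustrative, not our actual delivery definition:

```
FROM kubernetes:v1.19.8
# Middleware and business charts; Lite Build scans them for image references
COPY charts charts
COPY scripts scripts
RUN chmod +x scripts/init.sh
CMD bash scripts/init.sh && helm install middleware charts/middleware && helm install business charts/business
```

Everything the CMD needs, including every image referenced by the charts, is resolved at build time and carried inside the cluster image, so an air-gapped delivery needs only the image file and sealer.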

Using sealer shields a large amount of complex application delivery process logic and dependency handling logic, greatly simplifying implementation, and the continuously simplified implementation logic makes delivery at scale possible. In our practical scenario, the new delivery system shortened the delivery cycle from 15 person-days to 2 person-days, and successfully delivered a cluster containing a 20 GB business image cache on a footprint of 2000 GB+ memory and 800+ CPU cores. Next, we plan to keep simplifying the delivery process so that a novice can complete the delivery of an entire project after simple training.

Future

This successful landing is not only an achievement of the delivery system; it also reflects the power of open source and explores a new model of co-construction with the community. In the future, Zhengcaiyun will continue to support and participate in the sealer community and contribute more based on actual business scenarios.

As a young open source project, sealer is not yet perfect: some problems remain to be solved and optimized, and more business-scenario requirements are waiting to be implemented. We hope that through continuous contribution, sealer can serve more users, and that more partners will join in building the community and make this star shine even brighter!

sealer: one-click Kubernetes installation, whole-cluster packaged delivery!