Preface
With the development of container technology, the advantages of containers — easy packaging, reproducibility, isolation, and low overhead — have led more and more applications to move from traditional physical and virtual machines to containers. The birth and growth of Kubernetes has reduced the difficulty of standardized application deployment and management, greatly accelerating the migration of applications to containers. So what methods and approaches are available for deploying a distributed system with a front end, a back end, and a database to K8S with one click?
Application model on K8S
Kubernetes organizes container applications by defining their runtime, configuration, service exposure, storage, and images as a variety of resource objects. Each resource object can be described and defined in a fixed format. For example, environment variables can be defined in ConfigMap objects or in container configurations, while the application runtime takes the form of one of several workloads: Deployment, StatefulSet, DaemonSet, and so on.
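As a minimal sketch, a stateless application and its configuration might be described with the following two resource objects (all names and values here are illustrative, not from any real system):

```yaml
# Hypothetical ConfigMap supplying environment variables to the app
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
---
# Deployment defining the workload: container image, resource
# requirements, and runtime environment, referencing the ConfigMap above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: nginx:1.21        # container image
        resources:               # resource requirements
          requests:
            cpu: 100m
            memory: 128Mi
        envFrom:
        - configMapRef:
            name: app-config     # environment comes from the ConfigMap
```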
Resource type | Function |
---|---|
Deployment | Defines workload-related aspects of an application, such as container image, resource requirements, and runtime environment; stateless, can be scaled out or in at will |
StatefulSet | Defines workload-related aspects such as container image, resource requirements, and runtime environment; keeps state information and starts and scales instances in order |
DaemonSet | Defines workload-related aspects such as container image, resource requirements, and runtime environment; typically for agent-like applications, with one instance running on each node |
Job | Defines workload-related aspects such as container image, resource requirements, and runtime environment; for short-lived tasks that exit on completion |
ConfigMap | Defines configurations that can be mounted to environment variables or files in the container |
Secret | Defines sensitive configuration information that can be mounted to container environment variables or files |
Service | Defines an application's access endpoint |
Ingress | Defines an application's HTTP access endpoint |
… | … |
An application has at least one workload, along with configuration, access endpoints, and other resource objects. By controlling these resource objects, one or more applications can be managed as a whole; this is called application orchestration on K8S. The most primitive way, of course, is to manipulate each resource object directly. But with multiple scattered files and no variable substitution, those files can only be executed against one environment; once the environment changes, their contents must be modified by hand. The objects must also be executed in a certain order, or the process fails. As the number of applications grows, the execution process becomes ever more complex.
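The manual approach described above might look like the following, a sketch with hypothetical file names, where each file must be applied before the objects that reference it:

```shell
# Order-sensitive manual deployment: configuration objects first,
# then the workloads that mount them, then the access endpoints.
kubectl apply -f configmap.yaml
kubectl apply -f secret.yaml
kubectl apply -f deployment.yaml   # mounts the ConfigMap and Secret above
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
```

Any environment-specific value (image tag, replica count, hostname) must be edited directly in these files before each run, which is exactly the pain point orchestration tools set out to remove.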
Orchestration requirements for distributed systems
The following is a typical distributed system, consisting of front-end programs, back-end programs, a back-end database, a load balancer, and so on. When an application system grows to a certain scale, it is natural to split it by the role of each component. Each role can then be scaled horizontally or vertically, ensuring the system's scalability and removing single points of failure.
Such a system would have the following requirements for application orchestration:
1. Multiple applications can be defined in one system
2. Control dependencies can be defined among the resource objects within each application
3. Configuration can be passed between different applications
4. Startup-order dependencies can be defined between different applications
Helm: Community native package management tool
Since K8S resource objects follow a formatting specification, the community provides Helm to handle K8S application packages, which are called charts. A chart package is structured as follows: a `Chart.yaml` file defines the package name, version, and so on; a `templates` directory holds go-template descriptions of all the application's resource objects; and the variables used by the templates are declared in a `values.yaml` file.
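A minimal chart might be laid out as follows (the chart name and values are illustrative):

```yaml
# Layout of a chart package (hypothetical name "mychart"):
#   mychart/Chart.yaml       - package name, version, etc.
#   mychart/values.yaml      - declarations of the template variables
#   mychart/templates/*.yaml - resource descriptions in go-template syntax
#
# templates/deployment.yaml (excerpt): the {{ ... }} placeholders are
# filled in from values.yaml and release metadata when the chart is rendered
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
```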
Helm's implementation principle is very simple: variable rendering and substitution happen on the client side, and the fully rendered content is passed to the server side (Tiller). Tiller then calls the K8S interfaces in a fixed order to create the application's resources.
Helm also supports declaring dependencies between chart packages, and parameter passing between applications can be achieved by sharing the same variable input. In this way, requirements 1 through 4 for the distributed system appear to be satisfied.
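Chart dependencies are declared roughly as follows (a sketch: in Tiller-era Helm v2 this lives in a `requirements.yaml` file, while Helm v3 moved the same `dependencies` field into `Chart.yaml`; the chart names and repository URL are hypothetical):

```yaml
# requirements.yaml of a hypothetical parent chart pulling in two sub-charts
dependencies:
- name: backend
  version: 1.0.0
  repository: file://../backend            # local sub-chart
- name: database
  version: 2.1.0
  repository: https://example.com/charts   # illustrative chart repository
```

Sub-charts can then share configuration through the parent chart's values, so a single input feeds multiple applications.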
In practice, however, Helm runs into several problems:
- Within an application, Helm only controls the order in which creation requests are sent, not the order in which the objects are actually created. As a result, a Deployment can sometimes fail because it references an object, such as a Secret, that has not been created yet.
- For workloads such as Deployment, Helm does not actually check whether the instances have started and become ready; even if an application instance fails for some reason, Helm is not aware of it.
- Helm's dependency mechanism only controls references between chart packages; when multiple applications are created together, all their objects are still expanded and executed in the same per-type order.
- Writing chart templates requires the user to have a basic understanding of K8S resource object structures; moreover, the introduction of go-template turns the templates into non-structured documents that cannot be validated or processed normally, which easily leads to errors.
The problems Helm encounters with distributed systems are not intractable. What is needed at the core is a mature DAG framework that precisely controls every step and process, and that manages and passes along the result of each step. The framework's format should have a simple structure, so that users do not need to be proficient in K8S, go-template, and the like, preferably with a graphical interface to assist with authoring.
Application Orchestration Service: resolving dependencies and configuration delivery
Huawei Cloud Application Orchestration Service (AOS for short) is exactly such a system, able to meet the various requirements of complex distributed applications on K8S. AOS connects to Huawei Cloud's Cloud Container Engine (CCE) and provides a normalized model structure for defining the various resource objects on K8S. Through templates designed with this model, a complex distributed system can be orchestrated on a K8S cluster with one click.
- An excellent graphical designer: the application structure and the various resources the application needs can be designed by drag and drop, with precise control over the execution order of all objects to achieve maximum parallelism.
- The orchestration syntax can output information such as parameters shared between applications and application access addresses.
- A good variable input/output mechanism distills the configuration needed to deploy an application in different environments, achieving one template, many reuses.
Orchestrated with AOS, the earlier distributed system becomes the diagram below. In this system, the DB database is created first, and then the Job that creates the database table structure is executed. The database access information is recorded in a Secret, which is mounted and used by the two back-end programs, App1-Back2 and app-Backend1. Both back-end programs also reference the common configuration AppConfig. The front-end application accesses the back-end programs through Ingress and Service, with the Ingress providing external access. The load balancer is not shown in the figure because the Ingress is configured to connect to Huawei Cloud's ELB service.
The Application Orchestration service also supports orchestrating and managing various cloud services, such as RDS databases and ELB load balancers. Using these services simplifies the cloud architecture of a distributed system and lets users focus on their own business, without spending much effort on service deployment, operations, and maintenance.