Abstract: When adopting containers, the learning and usage costs for both development and operations teams are very high. Is there a simple, easy-to-use platform? This article introduces the Kepler Cloud Platform, an application management platform built on Kubernetes.

1. Background

To adapt quickly to market demands, more and more small, fast-moving applications are being built. How to deploy and manage these scattered applications has become a headache for everyone. Giving each of them its own VM wastes resources, so containerizing the applications is clearly a good choice. But many companies face the same problem: adopting containers is hard.

During container adoption, the learning and usage costs for development and operations are very high. Is there a simple, easy-to-use platform?

The Kepler Cloud Platform is an open-source, Kubernetes-based application management solution from the Jinke Fortune Technology Department. It aims to solve the problems of containers being hard to adopt, Kubernetes being hard to use, and operations costs being high. With the Kepler Cloud Platform, an application only needs a very simple Dockerfile to be deployed on Kubernetes, greatly lowering the barrier to use.

2. The Kepler Cloud Platform

A previous article, “Kubernetes Container Cloud Practice”, gave a basic introduction to the Kepler platform.

After some tweaking, we have finally open-sourced the platform: github.com/kplcloud/kp…

The Kepler Cloud Platform is designed for development and operations staff: with only basic knowledge, they can quickly deploy applications to Kubernetes. The platform’s infrastructure is as follows:

The Kepler platform can run on Kubernetes as a container or be deployed independently.

To complete the deployment, you need to add the app.cfg configuration file on the Kubernetes Master node.

$ git clone https://github.com/kplcloud/kplcloud.git && cd kplcloud/
$ kubectl apply -f install/kubernetes/kpaas/

The figure below shows the platforms that Kepler Cloud integrates with and how they interact.

The Kepler Cloud Platform manages applications by calling the Jenkins, GitLab (GitHub), Kubernetes, and other APIs.

Consul’s KV store can be used as a configuration center. The Kepler Cloud Platform calls the Consul API directly, and you can decide in the configuration file whether to enable the Consul KV feature.

Jenkins is currently responsible only for compiling code and uploading Docker images to the registry. Kepler creates and triggers Jobs by calling the Jenkins API and listens for Job status.

The Kepler platform can also call the GitHub or GitLab API to retrieve the project branches and tags to be released. This information is passed to Jenkins, which pulls the code and runs the relevant build process.

3. Usage

The resources the platform submits to the Kubernetes API and the Jenkins API, as well as alert messages, are all generated from templates. Administrators can adjust the templates for the relevant resources to suit their company’s environment.

Beyond the basic requirements of production, the platform also supports additional needs that testers have in the test environment.

  • Application cloning: testers may need several environments for the same version. The platform treats each space as a scenario: once all applications have been deployed in one space, the same applications often need to be reproduced in other spaces. To make this easy, you can use the toolset-clone function to complete the copy with one click.
  • Adjusting container time: financial products almost always need to adjust container time. Testing a feature often requires changing a service’s clock, but because Docker uses the host’s kernel time, a container cannot adjust the kernel clock on its own, so another tool is needed. We recommend the open-source tool github.com/wolfcw/libf… By compiling it on the host and mounting it into the container, we can shift the clock of a single container without affecting the others, as sketched below.
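
Assuming the tool above is wolfcw’s libfaketime, the snippet below is a rough sketch (not taken from the platform) of how a host-compiled library could be mounted into a single container and activated through environment variables. The image name, file paths, and the "+2d" offset are all assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: hello-faketime-demo
spec:
  containers:
    - name: hello
      image: kplcloud/hello:latest                           # hypothetical application image
      env:
        - name: LD_PRELOAD
          value: /usr/local/lib/faketime/libfaketime.so.1    # preload the fake clock library
        - name: FAKETIME
          value: "+2d"                                       # pretend it is two days later
      volumeMounts:
        - name: faketime-lib
          mountPath: /usr/local/lib/faketime
          readOnly: true
  volumes:
    - name: faketime-lib
      hostPath:
        path: /usr/local/lib/faketime                        # where the library was built on the node

Only this container sees the shifted time; other containers on the same node keep the host clock.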

The Kepler Cloud Platform has many features. Below are a few common ones we care about most (see docs.nsini.com for more details):

  • Creating an application
  • Releasing a new version
  • Log collection
  • Monitoring and alarms
  • Persistent storage

3.1 Creating an Application

Creating an application is very simple: fill in some basic information, the administrator reviews it, and then the build starts. Upgrading an application is just a matter of selecting a tag and running a build.

Take creating a Go application as an example:

Dockerfile:

FROM golang:latest as build-env

ENV GO111MODULE=on
ENV BUILDPATH=github.com/kplcloud/hello
RUN mkdir -p /go/src/${BUILDPATH}
COPY ./ /go/src/${BUILDPATH}
RUN cd /go/src/${BUILDPATH} && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go install -v

FROM alpine:latest

COPY --from=build-env /go/bin/hello /go/bin/hello

WORKDIR /go/bin/
CMD ["/go/bin/hello"]

Place the above Dockerfile in the project directory and fill in the relevant information:

After an application is created, the administrator reviews the submitted information: if it does not pass review, it is rejected; if it passes, the application is deployed.

Deployment takes the predefined base template, fills it in with the information submitted by the user, generates resources that Kubernetes understands, and calls the Kubernetes API to create them. Once these resources are created, the Jenkins API is called to create the Job, and finally the build is executed.
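
As a minimal sketch of the kind of resource that might be rendered from the base template (the actual templates ship with the platform and can differ), a Deployment for the hello example could look like the following. The namespace, labels, registry address, and resource limits are assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: default                  # the "space" the application belongs to
  labels:
    app: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: registry.example.com/kplcloud/hello:v1.0.0   # filled in after the Jenkins build
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi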

After Jenkins finishes the build and uploads the Docker image to the registry, Kepler updates the version of the corresponding application in Kubernetes.

To add more operations to this process, modify the JenkinsCommand template.

3.2 Releasing a New Version

Building an application works off the information submitted when the application was created, and proceeds as follows:

  • Get the list of tags from the Git repository.
  • Call the Jenkins API, passing it the application’s parameters and version information, and trigger the build.
  • The Jenkins Job runs shell commands to perform the docker build and push the image to the Docker image registry.
  • When the platform detects that the Job has finished successfully, it calls the Kubernetes API to update the application’s image address (illustrated below).
  • Monitor the upgrade status.
  • Send notifications.
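
Conceptually, the image update in the fourth step amounts to a patch like the one below against the application’s Deployment (an illustration only; the platform’s actual API calls may differ, and the container name, registry, and tag are assumptions). Kubernetes then performs a rolling update to the new version.

spec:
  template:
    spec:
      containers:
        - name: hello
          image: registry.example.com/kplcloud/hello:v1.0.1   # the tag selected in the build dialog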

Click the “Build” button on the application details page, select the tag you want to release in the pop-up dialog, and submit, as shown below:

Click the “Build log” tab on the details page to see recent build records. Click a version on the left to view that version’s build status, or to interrupt a build in progress, as shown below:

3.3 Log Collection

Our log collection is loosely coupled, scalable, and easy to maintain and upgrade.

  • Filebeat collects host logs on each node.
  • A Filebeat container is injected into each Pod to collect business logs.

Filebeat is deployed alongside the application container; the application does not need to know it exists and only has to write its logs to the agreed directory. Filebeat reads its configuration from a ConfigMap, so you only need to maintain the log collection rules.
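
A minimal sketch of such a ConfigMap is shown below, assuming the application writes its logs to /var/log/app and that the output is the Kafka cluster described next. All names and addresses are assumptions, not the platform’s shipped template.

apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-filebeat-config
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/app/*.log          # directory the application logs into
        fields:
          app: hello                    # extra field to identify the service
    output.kafka:
      hosts: ["kafka-0.kafka:9092"]     # assumed Kafka broker address
      topic: "app-logs"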

When this collector is enabled, a Filebeat sidecar is injected into the Pod where the service runs to collect the application’s business logs. The collected logs are pushed into a Kafka cluster, and Logstash then processes and formats the messages.
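
The Pod template fragment below sketches what the injected sidecar might look like, with the application and Filebeat sharing the log directory through an emptyDir volume. The image versions and names are assumptions.

spec:
  containers:
    - name: hello
      image: kplcloud/hello:latest
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app                # application writes logs here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:7.17.0
      args: ["-c", "/etc/filebeat/filebeat.yml", "-e"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true                         # Filebeat only reads the logs
        - name: filebeat-config
          mountPath: /etc/filebeat
  volumes:
    - name: app-logs
      emptyDir: {}
    - name: filebeat-config
      configMap:
        name: hello-filebeat-config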

After processing, the logs land in the ES cluster, and finally we can query the business logs through Kibana.

The Filebeat container and Filebeat ConfigMap can also be modified using templates.

3.4 Monitoring and Alarms

Application monitoring and alerting is also a very important part: we use Prometheus + Grafana for monitoring, and Prometheus + Alertmanager for alert handling.
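
As an illustration of the kind of rule Prometheus evaluates and hands to Alertmanager (not a rule shipped by the platform), the alert below fires when a container restarts too often. It assumes kube-state-metrics is installed, and the threshold is arbitrary.

groups:
  - name: kplcloud-app-alerts
    rules:
      - alert: PodRestartingTooOften
        expr: increase(kube_pod_container_status_restarts_total{namespace="default"}[15m]) > 3
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.pod }} is restarting frequently"
          description: "Container {{ $labels.container }} restarted more than 3 times in 15 minutes."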

If you set up a subscription on the Kepler Cloud Platform, alert messages of the subscribed types will be sent to the corresponding tools.

Under “Personal Settings - Message Subscription Settings” we can choose which message types to subscribe to and which tools should receive them:

The following is an operation notification received via WeChat:

Please refer to our documentation at docs.nsini.com for more tutorials.

3.5 Persistent Storage

Kubernetes cluster administrators can offer different StorageClasses to meet users’ storage needs according to quality-of-service levels, backup policies, or other criteria. Dynamic volume provisioning is implemented with StorageClass, allowing storage volumes to be created on demand. Without dynamic provisioning, cluster administrators would have to create new storage volumes by hand; with it, Kubernetes creates storage automatically as users request it.
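
A minimal sketch of dynamic provisioning, assuming an AWS EBS-backed cluster (swap the provisioner for whatever your environment uses); the class name, claim name, and size are assumptions:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                          # assumed class name
provisioner: kubernetes.io/aws-ebs        # replace with your cluster's provisioner
parameters:
  type: gp2
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi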

In the menu, open “Configuration and Storage” -> “Persistent Volume Claims”, choose the application’s space, and click the “Create” button to create a storage volume claim first. Then find the application that needs persistent storage, open its details page, switch to the “Persistent storage” tab, and mount the claim that was just created, as sketched below.
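
The resulting Pod template fragment might look roughly like this, with the claim created above mounted into the container; the mount path and names are assumptions.

spec:
  containers:
    - name: hello
      image: kplcloud/hello:latest
      volumeMounts:
        - name: data
          mountPath: /data               # where the application stores its files
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: hello-data            # the claim created in the previous step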

4. Conclusion

The Kepler platform is now open source, and a demo environment is available, along with complete documentation. The documentation describes in detail how to set up the related services and offers several deployment options.

  • GitHub: github.com/kplcloud/kp…
  • Documentation: docs.nsini.com
  • Demo: kplcloud.nsini.com

Author: Wang Cong

Creditease Institute of Technology