Introducing Serverless

The Serverless concept was first introduced in 2012 by Ken Fromm, vice president of the cloud infrastructure provider Iron.io, in his article “Why the Future of Software and Apps is Serverless” [4]. Serverless became popular after Amazon released AWS Lambda in 2014, with major cloud vendors worldwide scrambling to follow suit.

In August 2016, the article “Serverless” [5] published on Martinfowler.com elaborated on the concept. In simple terms, Serverless can be understood as follows:

Server-side logic implemented by developers runs in stateless compute containers: functions that are triggered by events, fully managed by a third party, and commonly referred to as Functions as a Service (FaaS). Currently, AWS Lambda is the most widely used.

The relationship between Serverless, containers, and virtual machines is shown in the following figure:

Figure 1 From physical machines to function computing

As the figure shows, Serverless builds on top of virtual machines and containers and sits closer to the application.

Kubeless is an open source framework originally developed by Skippbox, a software vendor acquired by Bitnami in March 2017. It is a Kubernetes-native serverless framework with a command-line interface (CLI) compliant with the AWS Lambda CLI.

The official definition of Kubeless reads:

Kubeless is a Kubernetes-native serverless framework that lets you deploy small bits of code (functions) without having to worry about the underlying infrastructure. It is designed to be deployed on top of a Kubernetes cluster and take advantage of all the great Kubernetes primitives. If you are looking for an open source serverless solution that clones what you can find on AWS Lambda, Azure Functions, and Google Cloud Functions, Kubeless is for you! [1]


The main features of Kubeless can be summarized as follows:

  • Support for Python, Node.js, Ruby, PHP, Golang, .NET, Ballerina, and custom runtimes.

  • The Kubeless CLI conforms to the AWS Lambda CLI.

  • Event triggers based on the Kafka messaging system and HTTP.

  • Prometheus monitoring of function calls and latency by default.

  • A Serverless Framework plugin.

Since Kubeless is built on top of Kubernetes, it is very easy to deploy for anyone familiar with Kubernetes. Its main implementation converts user-written functions into CRD (Custom Resource Definition) objects in Kubernetes and runs them in the cluster as containers.
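For illustration, a function submitted through the CLI is stored as a Function object along these lines (field names based on the Kubeless v1beta1 CRD; the exact schema varies between versions, and all example values are hypothetical):

```yaml
# Sketch of the Function custom resource that the Kubeless CLI
# generates from user code (hypothetical example values).
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: hello
  namespace: default
spec:
  runtime: python2.7        # language runtime image to use
  handler: test.hello       # <file>.<function> entry point
  function: |               # the function source itself
    def hello(event, context):
        return "hello world"
  deps: ""                  # runtime dependencies (e.g. requirements.txt)
```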

1 Basic components of Kubeless

Kubeless consists of the following three parts:

  • Functions

  • Triggers

  • Runtime

The three components are introduced in detail below.

1.Functions

Functions represent the code to execute; in Kubeless they contain metadata about runtime dependencies, builds, and so on. Functions have an independent life cycle and support the following operations:

(1) Deploy: deploys the function to the Kubernetes cluster as a Pod. This step involves building an image for the function.

(2) Execute: executes the function directly, without invoking it through any event source.

(3) Update: modifies the function's metadata.

(4) Delete: deletes all of the function's related resources in the Kubernetes cluster.

(5) List: displays the list of functions.

(6) Logs: displays the logs generated by the function instance running in Kubernetes.

2.Triggers

A Trigger represents the event source of a function. Kubeless ensures that a function is called at most once when an event occurs, and a Trigger can be associated with one or more functions, depending on the event source type. Triggers are decoupled from the function life cycle and support the following operations:

(1) Create: creates a Trigger from the details of the event source and the associated functions.

(2) Update: updates the Trigger's metadata.

(3) Delete: deletes the Trigger and any resources configured for it.

(4) List: displays the list of Triggers.
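As an illustration of an HTTP event source (field names based on the Kubeless v1beta1 API and may differ between versions; all values are hypothetical), a Trigger object associating an HTTP endpoint with a function might look like:

```yaml
# Sketch of a Kubeless HTTPTrigger object (hypothetical example values).
apiVersion: kubeless.io/v1beta1
kind: HTTPTrigger
metadata:
  name: hello-http
  namespace: default
spec:
  function-name: hello   # the function this trigger invokes
  path: hello            # URL path exposed for the function
  hostname: example.com  # host used in the generated Ingress rule
```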

3.Runtime

Functions are written in different languages depending on user preference. Kubeless provides almost all of the mainstream function runtimes, currently including [3]:

(1) Python: versions 2.7, 3.4, and 3.6.

(2) Node.js: versions 6 and 8.

(3) Ruby: version 2.4.

(4) PHP: version 7.2.

(5) Golang: version 1.10.

(6) .NET: version 2.0.

(7) Ballerina: version 0.975.0.

In Kubeless, each function runtime is packaged as a container image. These images are referenced in the Kubeless configuration, and their sources can be inspected with the Docker CLI.

2 Kubeless design approach

Like other development frameworks, Kubeless has its own design approach. It reuses many Kubernetes concepts to deploy function instances, relying mainly on the following Kubernetes features [2]:

(1) CRD (Custom Resource Definition) objects represent functions.

(2) Each event source is treated as a separate Trigger CRD object.

(3) CRD controllers handle the CRUD operations on the corresponding CRD objects.

(4) Deployments/Pods run the appropriate runtime.

(5) ConfigMaps inject the function code into the runtime Pod.

(6) Init containers load the function's dependencies.

(7) Services expose functions inside the cluster (ClusterIP).

(8) Ingress resource objects expose functions externally.

The Kubernetes CRD and CRD controller form the core of Kubeless's design philosophy. Using separate CRDs for Functions and Triggers keeps the concerns clearly distinguished, and using independent CRD controllers keeps the code decoupled and modular.

After Kubeless is deployed, three CRDs appear in the Kubeless namespace of the cluster to represent the Functions and Triggers of the Kubeless architecture, as shown in Figure 2. Every Function and Trigger subsequently created through the Kubeless CLI belongs to one of these three CRD endpoints.

Figure 2. CRD of Kubeless

The process of deploying a function on Kubeless can be divided into the following three steps [2]:

  • The Kubeless CLI reads the function's run configuration entered by the user, generates a Function object, and submits it to the Kubernetes API Server.

  • When the Kubeless Function Controller (kubeless-controller-manager) detects a new function, it reads the function's information and first generates a ConfigMap containing the function code and its dependencies, then a Service for internal access via HTTP or other protocols, and finally a Deployment based on the runtime base image. This order matters: if the controller cannot create the ConfigMap or the Service, the Deployment is not created, and a failure at any step aborts the process.

  • After the function's Deployment is created, a Pod runs in the cluster and dynamically reads the function's contents at startup.
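As a rough sketch of the second step (the exact keys are determined by the Kubeless controller; the names below are illustrative), the generated ConfigMap carries the function source and its dependency list:

```yaml
# Sketch of the ConfigMap the Kubeless controller generates for a
# function named "hello" (hypothetical contents).
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello          # same name as the Function object
  namespace: default
data:
  handler: test.hello  # entry point passed to the runtime
  function: |          # mounted into the runtime Pod at startup
    def hello(event, context):
        return "hello world"
  requirements.txt: "" # dependencies installed by the init container
```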

1 Pre-deployment installation

The author’s test environment is Ubuntu 16.04; the Kubernetes cluster consists of node3 and node4, with node3 as the master node. The following practice is based on this environment and will not be described further.

The Kubeless installation is divided into three main parts:

  • Install Kubeless CLI

First, download the CLI archive, choosing the appropriate version as shown in Figure 3. The package address is: https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless_linux-amd64.zip

Figure 3 Kubeless CLI package version

After downloading, decompress and move:

unzip kubeless_linux-amd64.zip
sudo mv bundles/kubeless_linux-amd64/kubeless /usr/local/bin/

Then test whether the installation succeeded, as shown in Figure 4:

Figure 4 Viewing the Kubeless CLI installation

  • Install the Kubeless framework

For different Kubernetes environments, Kubeless provides several installation options: non-RBAC, RBAC, and OpenShift, as shown in Figure 5:

Figure 5 Kubeless provides an optional Kubernetes environment

Since the author’s test cluster uses RBAC, the RBAC installation mode is selected and the following commands are executed to deploy Kubeless into the cluster:

kubectl create ns kubeless
kubectl create -f https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless-v1.0.0-alpha.7.yaml

The kubeless-v1.0.0-alpha.7.yaml manifest creates the following resources in the cluster, shown in Figures 6 to 11:

 

Figure 6 Deployment: kubeless-controller-manager

Figure 7 ServiceAccount: controller-acct

 

Figure 8 ClusterRole: kubeless-controller-deployer

Figure 9 ClusterRoleBinding: kubeless-controller-deployer (bound to controller-acct)

 

Figure 10 CustomResourceDefinitions: functions.kubeless.io, httptriggers.kubeless.io, cronjobtriggers.kubeless.io

Figure 11 ConfigMap (Node.js, Python, Java, Perl, etc.)

  • Install Kubeless UI

The installation command is:

kubectl create -f https://raw.githubusercontent.com/kubeless/kubeless-ui/master/k8s.yaml

After the installation is complete, view the deployment with the following command; the result is shown in the figure below:

kubectl get po,svc,deploy -n kubeless -o wide

Figure 12 Kubeless UI deployment view

The deployed Kubeless UI can be accessed in a browser, as shown below:

Figure 13 Kubeless UI

2 Write functions and run instances

Since Kubeless supports multiple languages, Python is used here as an example.

Write a Python-based function, test.py, as shown below:

Figure 14. Python-based test.py function

As the function definition in Figure 14 shows, the function receives two parameters, event and context. In the Kubeless framework, every function takes these two parameters regardless of the runtime: the first contains information about the event source that triggered the function, and the second contains general information about the function, such as its name and maximum timeout. The function ultimately returns a string as the response to the caller.
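Figure 14 itself is not reproduced here, but based on the description above, a minimal test.py of the kind deployed in this walkthrough might look like the following sketch (the exact body and return string are assumptions; only the test.hello handler name comes from the deploy command):

```python
# test.py - a minimal Kubeless Python handler (illustrative sketch).
def hello(event, context):
    # event carries information about the triggering event source,
    # e.g. event['data'] holds the request payload.
    data = event.get('data') if isinstance(event, dict) else None
    # context carries general function metadata (name, timeout, ...).
    # The handler returns a string, which becomes the response body.
    return "hello world, got: {}".format(data)
```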

Run the instance for the test.py function with the following command:

kubeless function deploy serverlessdemo --runtime python2.7 --from-file test.py --handler test.hello

For space reasons, the flags beginning with -- are not described here; details can be found via kubeless function deploy --help

View the deployed functions via kubeless function ls or kubectl get function, as shown below:

Figure 15 Viewing the deployed function instance

There are three ways to call a deployed function:

  • Call directly via the kubeless CLI

kubeless function call serverlessdemo --data '{"dsds":"dsd"}'

Figure 16 Calling a function instance

  • Call through kubectl proxy

kubectl proxy -p 8889 &   # start the proxy on the specified port

Figure 17 Enabling the specified port proxy

Then call the function through the proxy:

curl -L --data '{"Another": "Echo"}' --header "Content-Type:application/json" localhost:8889/api/v1/namespaces/default/services/serverlessdemo:http-function-port/proxy/
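The URL in the curl command follows the standard Kubernetes API service-proxy path format. As a small illustrative helper (not part of Kubeless; the function name is hypothetical), such a path can be built like this:

```python
# Build the Kubernetes API service-proxy path used to reach a
# Kubeless function through `kubectl proxy` (illustrative helper).
def service_proxy_path(namespace, service, port_name):
    return ("/api/v1/namespaces/{ns}/services/{svc}:{port}/proxy/"
            .format(ns=namespace, svc=service, port=port_name))

# e.g. service_proxy_path("default", "serverlessdemo", "http-function-port")
```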
  • Called through the Kubeless UI

The Kubeless UI operation is shown below:

Figure 18. Calling a function instance through the Kubeless UI

As the figure above shows, function calls support the POST and GET methods, and the data format supports TEXT and JSON. You can edit or delete the function, and also see the logs from its execution.

You can view the number of function calls with the following command, as shown in the figure below:

kubeless function top

Figure 19 Viewing the number of times a function instance is called

You can also modify test.py and run kubeless function update serverlessdemo --from-file test-update.py to update the function.

To see how functions run in the cluster, first look at the function instance deployed in the cluster, as shown below:

Figure 20 A function instance deployed in the cluster

The Pod instance serverlessdemo-07ccc4dxxxxxx is deployed on node4, as shown in the figure below.

Figure 21 The containers running the Pod instance on node4

As the figure shows, this function instance is deployed on node4 and runs two containers: one pause container and one for test.py.

Node3:

Figure 22 Related images on node3

node4:

Figure 23 Related images on node4

From the two figures above, it is easy to see that after a function instance is scheduled to a node, the node pulls the runtime image for that function, such as a Python or Node.js environment image.

If an error occurs during deployment, common errors are covered in the official documentation at: https://kubeless.io/docs/debug-functions/

Kubeless solves the problem of serverless deployment on Kubernetes, but it still has some disadvantages. For example, when scaling a function instance, if the node where the new instance runs does not have the function's runtime image, it must download it, and these images are currently quite large: a Node.js runtime image is about 600 MB. In a multi-node environment where requests must be widely load-balanced, every node hosting the function has to download the runtime image, which is inefficient.

In addition, Kubeless currently does not cache the base image, which means every new image build downloads the base image again. Kubeless is working on this issue.

A good current workaround is to configure a private or shared Docker registry. Kubeless then pushes the built image to the registry after each deployment; if the same function is deployed again, the image can be pulled directly from the registry. If the function changes, Kubeless updates the corresponding image and uploads it to the registry, solving the efficiency problem.

References:

[1] https://kubeless.io/ 

[2] https://kubeless.io/docs/

[3] https://kubeless.io/docs/runtimes/

[4] https://readwrite.com/2012/10/15/why-the-future-of-software-and-apps-is-serverless/ 

[5] https://martinfowler.com/bliki/Serverless.html 

Content editor: Cloud Security Laboratory, Pu Ming. Responsible editor: Xiao Qing.
