Author | Liu Yu
Knative is a Serverless framework built on Kubernetes. Its goal is to define a cloud-native, cross-platform standard for Serverless orchestration.
Knative introduction
Knative implements its Serverless standard by integrating container construction (or functions), workload management (dynamic scaling), and event models.
In the Knative architecture, the collaboration between the various roles is shown in the figure below.
The collaborative relationship between various roles in the Knative architecture
- Developers are Serverless service developers, who can deploy Serverless services based on Knative directly through the native Kubernetes API.
- Contributors are primarily community contributors.
- Knative can be integrated into any supported environment, such as a cloud vendor's platform or an enterprise's internal infrastructure. Knative is currently implemented on top of Kubernetes, so it can be assumed that wherever Kubernetes is available, Knative can be deployed.
- Users are end users who access the service through the Istio gateway, or who trigger Serverless services in Knative through the event system.
As a general-purpose Serverless framework, Knative consists of three core components.
- Tekton: Provides generic build capabilities from source code to image. The Tekton component is responsible for fetching source code from a repository, compiling it into an image, and pushing the image to an image registry. All of these steps run in Kubernetes Pods.
- Eventing: Provides a complete set of event-management capabilities, covering event ingestion, triggering, and more. The Eventing component has a complete design for the Serverless event-driven model, including access to external event sources, event registration, subscription, and event filtering. The event model effectively decouples producers from consumers: producers can generate events before consumers start, and consumers can listen for events before producers start.
- Serving: Manages Serverless workloads. It combines well with events, provides request-based automatic scaling, and scales down to zero when there is no traffic. The duty of the Serving component is to manage workloads that serve external requests. Its most important feature is automatic scaling, and the scaling range is currently unbounded. Serving also provides gray (canary) release capabilities.
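To make the Serving component concrete, a minimal Knative Service manifest might look like the following sketch. The service name and namespace are illustrative; the `helloworld-go` image is the standard Knative sample image, and from this single resource Serving derives the Route, Configuration, and Revision objects and autoscales the Pods based on request traffic.

```yaml
# Minimal Knative Service (sketch; names are illustrative).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go        # illustrative service name
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # Knative sample image
          env:
            - name: TARGET
              value: "Knative"
```

Applying this manifest with `kubectl apply -f` is the declarative equivalent of the console-based deployment described later in this article.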
Knative deployment
This article takes deploying a Knative service on Ali Cloud as an example to explain in detail how to deploy Knative-related services. First, log in to the Container Service management console, as shown in the figure.
Ali Cloud Container Service management console
If no cluster exists, create one, as shown in the following figure.
Configure and create clusters
Creating a cluster takes a while. After the cluster is created successfully, the page shown in the following figure is displayed.
A cluster is created successfully
Once in the cluster, select “Application” on the left, find “Knative”, and click “One-click Deployment”, as shown in the figure.
Create a Knative application
Wait a moment; after the Knative installation completes, you can see that the core components are in the “Deployed” state, as shown in the figure.
The Knative application is deployed
At this point, we have completed the deployment of Knative.
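Besides checking the console, you can optionally verify the installation from the command line. This is a sketch assuming the upstream Knative default namespaces; the namespace names may differ in the Ali Cloud environment.

```shell
# List the Knative Serving control-plane Pods
# (upstream default namespace: knative-serving).
kubectl get pods -n knative-serving

# List the Knative Eventing control-plane Pods, if Eventing is installed.
kubectl get pods -n knative-eventing
```

All Pods should reach the Running state before you move on to deploying applications.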
Acceptance testing
First, you need to create an EIP (Elastic IP) and bind it to the API Server service, as shown in the figure below.
The picture shows API Server binding to EIP
Once complete, test the Serverless application. Select “Knative Application” under Applications, then select “Create from Template” in Service Management, as shown in the figure.
Create a sample application quickly
Once created, you can see that a Serverless application has appeared on the console, as shown in the figure.
The example application is created successfully
At this point, we can click the application name to view the details of the application, as shown below.
View details about the example application
To facilitate testing, you can add a hosts entry locally:
101.200.87.158 helloworld-go.default.example.com
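As an alternative to editing the hosts file, you can pass the domain name in the Host header directly. This is a sketch using the EIP and the system-assigned domain from above; the Istio gateway uses the Host header to route the request to the helloworld-go service.

```shell
# Send the request to the EIP, overriding the Host header so the
# Istio gateway routes it to the helloworld-go Knative service.
curl -H "Host: helloworld-go.default.example.com" http://101.200.87.158
```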
After the configuration is complete, open the system-assigned domain name in the browser, and you can see that the expected result is displayed, as shown in the figure.
Browser test sample application
At this point, we have completed the deployment and testing of a Knative Serverless application.
We can also manage the cluster through CloudShell. On the cluster list page, choose CloudShell, as shown in the figure.
Cluster Management List
Manage the created cluster through CloudShell, as shown in the figure.
CloudShell window
Execute the following command:
kubectl get knative
You can see the newly deployed Knative application as shown in the figure below.
CloudShell views Knative applications
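Note that `kubectl get knative` as used above may be specific to the Ali Cloud environment; with upstream Knative, the equivalent information is usually retrieved with the `ksvc` short name. A sketch (the service name `helloworld-go` matches the sample created earlier):

```shell
# List Knative services; ksvc is the short name for
# services.serving.knative.dev.
kubectl get ksvc

# Show a single service, including its URL and ready status.
kubectl get ksvc helloworld-go
```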
About the author: Liu Yu (Jiang Yu) is a doctoral candidate in electronic information at the National University of Defense Technology, an Ali Cloud Serverless product manager, an Ali Cloud Serverless evangelist, and a distinguished lecturer at CIO College.