Author | Yuan Yi

**Knative is an open-source Serverless application orchestration framework built on Kubernetes. On top of community Knative, Alibaba Cloud Knative is deeply integrated with Alibaba Cloud products to bring you the purest containerized Serverless experience.**

About Knative

Knative is an open-source Serverless application orchestration framework built on Kubernetes. Beyond workload management, it also includes a Kubernetes-native workflow orchestration engine and a complete event system. Knative aims to standardize Serverless workload orchestration on top of Kubernetes. Its core modules are Eventing, the event-driven framework, and Serving, which deploys and hosts workloads.

1. Serverless Service Engine – Serving

The core capability of Knative Serving is concise, efficient application hosting, which is the foundation of its Serverless offering. As a Serverless framework, it also has to allocate resources on demand: Knative automatically scales out instances during peak periods based on the application's request volume, and scales them back in when request volume drops, which helps you save costs automatically.

Serving also provides traffic management and flexible grayscale release capabilities: traffic can be split by percentage, and new versions can be rolled out gradually according to that percentage split.

1) Simple application model

Knative provides a simple application model, the Knative Service, which meets the requirements of service deployment, service access, and grayscale release. It can be expressed with the following formula: Knative Service = workload (Deployment) + service access (Service) + grayscale traffic (Ingress).

The application model is shown below

  • Service: the abstraction of the Serverless application model; the application lifecycle is managed through the Service;
  • Configuration: holds the desired state of the application; the Configuration is updated every time the Service is updated;
  • Revision: each Configuration update creates a snapshot, which is used for version management;
  • Route: routes requests to Revisions and forwards different percentages of traffic to different Revisions.
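To make this concrete, here is a minimal sketch of a Knative Service manifest; the name and image are placeholders, not from the original article. Applying this single resource is what drives Knative to generate the corresponding Configuration, Revision, and Route.

```yaml
# Minimal Knative Service sketch (name and image are placeholders).
# This single resource stands in for a Deployment + Service + Ingress.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/helloworld-go:latest
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "Knative"
```

Each update to `spec.template` produces a new Revision, and the Route decides how traffic is split across Revisions.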

  • Application hosting

    • Kubernetes is an abstraction over IaaS management; deploying an application directly through Kubernetes means maintaining many resources.
    • With a Knative Service, a single resource defines how an application is hosted.

  • Traffic management

    • Knative exposes application traffic through a Gateway and can then split that traffic by percentage, which lays the foundation for elasticity, grayscale release, and other capabilities.

  • Grayscale release (see the traffic-split sketch after this list)

    • Multi-version management is supported, so it is easy to keep multiple versions of an application serving online at the same time;
    • Different versions can be assigned different traffic percentages, which makes grayscale release and similar features easy to implement.

  • Elasticity

    • Elasticity is the core capability through which Knative helps applications save costs: capacity expands automatically when traffic increases and shrinks when traffic decreases;
    • Each grayscale version has its own elasticity policy, which is associated with the traffic allocated to that version; Knative makes scale-out and scale-in decisions based on the traffic each version receives.
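As a sketch of what the traffic management and grayscale release described above look like in practice (the service and revision names here are hypothetical), the `traffic` section of a Knative Service splits requests by percentage across Revisions:

```yaml
# Hypothetical grayscale rollout: 90% of traffic stays on the previous
# revision while 10% is shifted to the latest revision.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/helloworld-go:v2
  traffic:
    - revisionName: helloworld-go-00001   # previous revision (placeholder name)
      percent: 90
    - latestRevision: true                # revision built from the template above
      percent: 10
```

Raising the percentage on the latest revision step by step completes the grayscale release; the elasticity policy of each revision then scales it according to the share of traffic it actually receives.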

2) Rich elasticity strategies

As a Serverless framework, Knative's core capability is automatic elasticity, and it offers a rich set of elasticity strategies (a configuration sketch follows the list below):

  • Automatic scaling based on request traffic – KPA
  • Automatic scaling based on CPU and memory – HPA
  • Scheduled scaling combined with HPA
  • An event gateway that provides one-to-one request-to-Pod processing
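As a sketch of how a strategy is selected, autoscaling in Knative is configured through annotations on the Service's revision template; the values below are illustrative assumptions, not settings from the article.

```yaml
# Illustrative autoscaling annotations on a Knative Service revision template.
# The "class" annotation selects KPA (request-based) or HPA (resource-based).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: autoscale-demo
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev"
        autoscaling.knative.dev/metric: "concurrency"  # request-based metric ("rps" also works)
        autoscaling.knative.dev/target: "10"           # target concurrent requests per Pod
        autoscaling.knative.dev/min-scale: "0"         # allow scale-to-zero
        autoscaling.knative.dev/max-scale: "20"
    spec:
      containers:
        - image: registry.example.com/autoscale-demo:latest  # placeholder image
```

Switching the class to `hpa.autoscaling.knative.dev` with a `cpu` metric gives the CPU/Memory-based behavior listed above.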

2. Serverless Event-Driven Framework – Eventing

Event-driven capability comes standard with Serverless, and Knative provides it through Eventing. Knative Eventing offers a complete event model that makes it easy to plug in events from various external systems. Once an event is ingested, it flows internally in the CloudEvents format. Knative Eventing provides two ways to forward events:

  • The event source forwards events directly to the service;
  • The event source forwards events to a Broker/Trigger, which filters them and then delivers them to the service.

Which forwarding mode should you use? The Broker/Trigger model is implemented on top of an underlying messaging system. For event sources such as GitHub, GitLab, and the Kubernetes API server, events need to be buffered to guarantee reliable delivery, so for these we recommend forwarding events through a Broker/Trigger. For systems where the event source is itself a messaging system, such as MNS, Kafka, and RocketMQ, it is more efficient to forward events from the source directly to the service.

This brings us to Knative's event sources. I liken them to event-driven engines: Knative Eventing drives the flow of events through these sources. The Knative community provides a rich set of event sources such as Kafka and GitHub, and event sources for messaging cloud products such as MNS and RocketMQ have also been integrated.
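As a sketch of the Broker/Trigger path (the broker name, filter attribute, and subscriber service below are assumptions for illustration), a Trigger subscribes a service to the events that pass its filter:

```yaml
# Events ingested into the "default" Broker are matched by this Trigger on
# a CloudEvents "type" attribute and delivered to a Knative Service.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: demo-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.samples.demo   # hypothetical event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display              # hypothetical subscriber service
```

The direct-forward mode skips the Broker entirely: the event source's `sink` points straight at the service, as the KafkaSource sketch in the example later in this article shows.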

Alibaba Cloud Knative

Building on community Knative, Alibaba Cloud Knative is fully integrated with the Alibaba Cloud resource ecosystem, providing richer capabilities and cloud-product-level support.

1. Integration with Alibaba Cloud products

  • Rich messaging cloud product event sources: Kafka, MNS, RocketMQ
  • Service access: SLB
  • Storage: NAS and cloud disks
  • Observability: Log Service, ARMS
  • IaaS resources: ECS and ECI

2. Native integration with the Alibaba Cloud Kubernetes ecosystem

  • Supports Alibaba Cloud standard Kubernetes and dedicated Kubernetes;
  • Supports Alibaba Cloud Serverless Kubernetes (ASK), with the Knative control components fully managed in ASK, saving users resource and O&M costs.

A hands-on example

Next, I will show how to use Alibaba Cloud Knative with a bullet-screen (live comment) demo. Take a look at the result:

Architecture diagram:

Process description:

  • A user sends a bullet-screen comment to Alibaba Cloud Kafka through the bullet-screen web service.
  • The Kafka event source (KafkaSource) listens for the comment messages and forwards them to the comment-processing service (a sketch of this wiring follows the list).
  • On receiving messages, the comment-processing service automatically scales out instances to process them and sends the processed messages back to the bullet-screen service.
  • Finally, the bullet-screen comments are displayed to users through the web service.
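A minimal sketch of the KafkaSource wiring in step 2, forwarding messages from Kafka directly to the comment-processing service; the broker address, topic, and service names are placeholders, not values from the original demo.

```yaml
# KafkaSource forwards bullet-screen messages from a Kafka topic
# directly to the processing Knative Service (direct-forward mode).
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: danmaku-kafka-source
spec:
  consumerGroup: danmaku-group
  bootstrapServers:
    - "alikafka-bootstrap.example.com:9092"   # placeholder Kafka endpoint
  topics:
    - danmaku-topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: danmaku-processor                 # placeholder processing service
```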

Conclusion

Finally, let's summarize the capabilities that Alibaba Cloud Knative brings:

  • Low-barrier, easy-to-use service deployment
  • Serverless, on-demand use of resources
  • Event-driven architecture that works seamlessly with messaging cloud products
  • Native integration with the Alibaba Cloud Kubernetes ecosystem
  • Deep integration with Alibaba Cloud products

We hope these capabilities bring you true on-demand resource usage and lower O&M and resource costs, which is also the goal of the Serverless philosophy.