Serverless has become one of the key concerns of the cloud native community. Some say it is the successor to microservices and will revolutionize software development. This article introduces our observations of the Serverless market, the challenges of putting it into production, and Ant Financial's Serverless practice.

Serverless is the trend

When we look back at the evolution of cloud computing, we can see that the infrastructure has evolved from physical machines to virtual machines, and from virtual machines to containers. Application architecture has evolved in step, from monoliths to multi-tier architectures, and now to microservices. Behind this change lie three constant pursuits: improving resource utilization, optimizing the development and operations experience, and better supporting the business.

Against this backdrop, Serverless has become one of the key concerns of the cloud native community. Serverless offers more fine-grained resource management than container technology, helps developers get to grips with cloud native faster, and promotes an event-driven model that supports business growth. It helps users solve problems such as complex resource management and resources sitting idle for low-frequency services, shifting from paying for allocated resources to paying for the resources actually used. According to a CNCF survey of about 2,400 respondents at the end of 2018, 38% of organizations were using Serverless technology, up 22% from the same period in 2017. (Source: CNCF Survey)

Image source: Gartner report, Evolution of Server Computing – VMs to Containers to Serverless – Which to Use When

At present, cloud vendors provide a variety of Serverless products and solutions in the market, which can be roughly divided into:

1. Function compute services: such as AWS Lambda, which runs workloads in units of code fragments (functions) and imposes certain requirements on code style (a minimal handler sketch follows this list).

2. Application-oriented Serverless services: such as Knative, which runs workloads as container-based services and provides the ability to build from code packages to images.

3. Container hosting services: such as AWS Fargate, which runs workloads in units of container images, though users still need to be aware of containers.
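
To make the code-style constraint of function compute services concrete, here is a minimal sketch of an AWS Lambda style handler in Java. The class name and payload shape are illustrative; it assumes only the aws-lambda-java-core interfaces.

```java
package example;

// Dependency (interfaces only): com.amazonaws:aws-lambda-java-core
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// A handler class like this is the "unit of code" the FaaS platform invokes;
// the class and method are registered with the platform rather than started
// by a main() method that the developer controls.
public class HelloHandler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // The platform injects the event payload and a Context with runtime
        // metadata (request id, remaining time, logger, ...).
        String name = event.getOrDefault("name", "world");
        context.getLogger().log("handling request for " + name);
        return "hello, " + name;
    }
}
```

The point is that the platform, not the developer, owns the process lifecycle: the handler is registered by name and invoked once per event, which is precisely the constraint that the application-oriented and container-hosting categories relax.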

From the community perspective, the Cloud Native Computing Foundation (CNCF) is coordinating community discussion and promoting the formation of specifications and standards through its Serverless Working Group, which has produced the Serverless Whitepaper and the Serverless Landscape. The landscape divides the current ecosystem into four modules: the platform layer, framework layer, tool chain layer, and security layer.

Image source: https://landscape.cncf.io/

Challenges in production

In our conversations with customers, we found that Serverless addresses several of their needs well: improving resource utilization over time and reducing cost through 0-1-0 scaling; supporting code packages, so that legacy applications can adopt cloud native without being containerized first; and supporting flexible trigger configuration, guiding users to refactor applications toward an event-driven model that keeps pace with fast-moving business. These advantages have made Serverless shine in mini-program development. At the same time, there has been plenty of exploration of Serverless for enterprise-grade applications in production, and the Serverless Working Group discussed the topic at KubeCon China 2019. The current core challenges can be summarized as follows:

1. Platform portability

Many platforms have launched their own Serverless standards, covering code formats, frameworks, and operation and maintenance tools. Users face not only high learning costs and pressure when choosing among them, but also the worry that they will not be able to migrate Serverless applications flexibly between platforms.

2. 0-M-N performance

Online applications have strict requirements on request latency. Users therefore need to carefully verify that a Serverless platform's 0-1 cold start speed, M-N scale-out speed, and stability meet production requirements.
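
As an illustration of what such verification might look like, rather than a feature of any particular platform, the following Java sketch times the first request to an application that has scaled down to zero; the endpoint URL is a placeholder.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Rough sanity check for 0-1 cold start latency: let the application scale
// to zero, then time the first request. The URL below is hypothetical.
public class ColdStartProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://my-serverless-app.example.com/ping")) // placeholder endpoint
                .timeout(Duration.ofSeconds(30))
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // The first request after scale-to-zero includes scheduling, container
        // start, and application initialization.
        System.out.println("status=" + response.statusCode() + " latency=" + elapsedMs + "ms");
    }
}
```

Repeating the call immediately afterwards gives the warm latency to compare against, which separates platform scheduling time from application startup time.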

3. Debugging and monitoring

Users have no visibility into the underlying resources and can only debug and monitor applications through the platform's capabilities. They need the platform to provide powerful logging for troubleshooting and multi-dimensional monitoring to keep track of application status.

4. Event source integration

With a Serverless architecture, applications tend to be split into finer-grained units and chained together through events. Users therefore expect the platform to integrate the most common event sources and to support custom events, making the triggering mechanism more flexible.
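
CNCF's CloudEvents specification is the common format for this kind of integration (Knative eventing, for example, is built around it). As a hedged illustration of what a custom event could look like on the wire, the sketch below posts a CloudEvents 1.0 binary-mode event over HTTP; the broker URL and event type are made up for the example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

// Emits a custom event in CloudEvents 1.0 "binary" HTTP mode: the event
// attributes travel as ce-* headers and the payload as the request body.
// The broker URL and event type are placeholders for this sketch.
public class CustomEventEmitter {
    public static void main(String[] args) throws Exception {
        String payload = "{\"orderId\":\"12345\",\"status\":\"settled\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://broker.example.com/events")) // hypothetical event sink
                .header("ce-specversion", "1.0")
                .header("ce-id", UUID.randomUUID().toString())
                .header("ce-type", "com.example.settlement.completed") // custom event type
                .header("ce-source", "/apps/daily-settlement")
                .header("content-type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("event accepted, status=" + response.statusCode());
    }
}
```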

5. Workflow support

Completing a piece of business logic usually involves cooperation among multiple Serverless applications. As their number grows, users want workflow tools for unified orchestration and a consolidated view of status to improve efficiency.

Ant Financial's practice

SOFAStack is committed to solving customers' real pain points on the cloud through its products and technology, distilling Ant Financial's technical practice, and helping users migrate to a cloud native architecture efficiently and at low cost. Serverless Application Service (SOFA SAS) is a one-stop Serverless platform derived from Ant Financial's practice. SAS is built on SOFAStack CAFE (Cloud Application Fabric Engine), whose container service has passed CNCF's conformance certification and is standard Kubernetes.

The Serverless Application Service product is compatible with standard Knative and integrates the application lifecycle management capabilities derived from Ant Financial's practice. It provides Serverless engine management, application and service management, version management and traffic control, 0-M-N-0 automatic scaling driven by service requests or events, as well as metering, logging, and monitoring. To address the real pain points of customers on the financial cloud, the product is offered in two forms, a dedicated edition and a shared edition, and supports three development modes, traditional code packages, container images, and pure functions, to meet different user needs and lower the barrier to adoption.

One-click deployment: Users can deploy applications with one click, using either code packages or container images, and run test executions at any time.

Engine management: SAS provides rich engine lifecycle management, diagnosis, and monitoring capabilities, enabling dedicated-edition customers to manage, operate, and maintain their Serverless engines and engine data.

Service and version: SAS provides application management, application service management, and version management. A version can be deployed either as a container image or as a code package in the traditional VM-style release mode. In many cases, user code can be migrated without modification and without writing or maintaining Dockerfiles.

0-M-N: SAS provides rapid 0-M-N-M-0 Serverless scaling, supporting 0-M scaling triggered by events or traffic and M-N scaling driven by various metrics (such as QPS, CPU, and memory).

Logging, monitoring, and metering: Built-in logging, monitoring, and metering capabilities help users debug applications and keep track of their status.

Flow control: Based on SOFAMesh, SAS provides basic traffic control capabilities and will be further integrated with the service mesh to provide large-scale, multi-dimensional traffic control across regions and hybrid clouds.

Trigger management: The product supports cron triggers for common cycles with second-level accuracy, which can be bound to and trigger Serverless applications. More IaaS, PaaS, and data event sources will be supported later.
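
The exact cron syntax SAS accepts is not documented here; purely as an assumption, the sketch below uses the Quartz-style six-field format (which adds a seconds field to the usual five) to illustrate what second-level scheduling expressions look like.

```java
import java.util.Date;
import org.quartz.CronExpression;

// Illustrates second-level cron accuracy with a Quartz-style 6-field
// expression (seconds minutes hours day-of-month month day-of-week).
// Whether SAS uses exactly this syntax is an assumption of this sketch.
public class CronTriggerExample {
    public static void main(String[] args) throws Exception {
        // Fire every 30 seconds.
        CronExpression every30s = new CronExpression("0/30 * * * * ?");
        // Fire at 01:05:10 every day.
        CronExpression daily = new CronExpression("10 5 1 * * ?");

        Date now = new Date();
        System.out.println("next 30s tick:  " + every30s.getNextValidTimeAfter(now));
        System.out.println("next daily run: " + daily.getNextValidTimeAfter(now));
    }
}
```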

Performance summary: In the figure, the horizontal axis is the number of Java applications cold-started at exactly the same time, and the vertical axis is the average and minimum cold start time. As the load increases, cold-starting 50 Java applications at the same time takes about 2.5 seconds on average, and 100 Java applications take 3-4 seconds on average to be scheduled and cold-started, with the fastest taking 1.5 to 2 seconds.

Performance analysis: The relationship between pool capacity and the actual number of applications per unit of time is shown in the figure (blue indicates the actual number of applications, green the pool capacity). The product currently supports the Serverless mode for mini-programs in the production environment, and its 0-M-N-M-0 capability greatly reduces their operating cost. On the enterprise side, an insurance company recently decided to migrate part of its daily settlement front end and long-tail applications to the Serverless platform, another important breakthrough for the product. Going forward, we are committed to building SAS into a financial-grade Serverless platform.

At KubeCon China 2019, Serverless Application Service made its official debut, attracting more than 100 participants to the SOFAStack Workshop to experience how easily cloud applications can be built with Serverless.