In 2012, Iron.io put forward the concept of "Serverless," arguing that the future of software and applications would be serverless. In February 2019, Berkeley published A Berkeley View on Serverless Computing, predicting the next decade of cloud computing and arguing that Serverless will dominate the cloud era.

Today, people are not just talking about Serverless but about "Serverless First" — the discussion has shifted from "whether to use it" to "how to use it." So is Serverless a publicity stunt by cloud vendors, a niche solution for front-end developers, or a real game-changer?

To answer this question, let's take a closer look at the misconceptions, challenges, and opportunities facing Serverless. We have also obtained first-hand material on the Serverless practice of Huawei AppGallery Connect (AGC for short), which we hope will offer some inspiration.

 

Serverless and microservices are not substitutes for each other

 

Serverless = FaaS (Function as a Service) + BaaS (Backend as a Service) — this is currently the most widely accepted definition of Serverless. The relationship between Serverless and microservices, however, is something few people can explain clearly; many even believe the two are mutually exclusive and that you can only choose one.
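To make the FaaS + BaaS split concrete, here is a minimal, generic sketch (not AGC-specific): the function holds only business logic, while data storage is delegated to a managed backend service. The Db interface, event shape, and getOrders handler are hypothetical stand-ins, not any platform's real API.

```typescript
// FaaS side: a stateless handler, scaled and billed per invocation.
// BaaS side: the database client is provisioned and operated by the platform.
// Both the Db interface and the event shape are hypothetical illustrations.
interface Db {
  query(sql: string, args: unknown[]): Promise<unknown[]>;
}

export async function getOrders(event: { userId: string }, db: Db) {
  // The function contains only business logic; storage, scaling, and
  // availability of the database are the platform's responsibility.
  return db.query("SELECT * FROM orders WHERE user_id = ?", [event.userId]);
}
```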

 

In fact, Serverless and microservices each have their own advantages in different scenarios. Taking Huawei AppGallery Connect Serverless as an example, Serverless has the following characteristics:

 

1. Low cost: developers pay only for the resources actually consumed, not for idle resources, which significantly reduces operation, maintenance, and usage costs;

2. No operations burden: developers do not need to manage the operation and maintenance of back-end services; complex tasks such as automatic elastic scaling, which had to be handled manually in the era of traditional cloud services, are completed automatically by the Serverless platform;

3. Fast time to market: in a Serverless architecture, function-granularity development and deployment units and an event-triggered execution model greatly simplify code logic and speed up the launch of new business features;

4. Cross-platform: the AGC platform also provides cross-platform support for its services, helping developers connect users across different platforms and further improving development efficiency.

 

Therefore, Serverless is often a better fit for compute-intensive scenarios such as real-time computing, parallel task processing, and edge computing. Microservices, characterized by simple individual services, flexible scaling, easy maintenance, independent evolution, mixed-technology development, and continuous delivery, are better suited to large, complex business systems.

 

However, a microservices architecture is a real test of an R&D team's technical ability — how to split service granularity alone has become a recurring hot topic at technology conferences. Looking at the architecture as a whole, the challenges in framework selection, service governance, elastic scaling, and other areas are even greater, requiring the team to have very rich experience in building and operating services.

 

In fact, one of the most innovative service models at the moment is Serverless microservices. Compared with a traditional microservices architecture, Serverless microservices have two distinguishing characteristics:

 

1. Fully managed applications: Serverless microservices provide full life-cycle management covering microservice code packaging, deployment, monitoring, call-chain tracing, service governance, elastic scaling, and version upgrade and rollback.

2. Billing by actual running time: charges are based on the CPU and memory actually consumed by microservices. During idle periods with no access requests, running instances are automatically scaled down to 1 or even 0 to reduce resource usage and cost.

 

In general, we can think of a Serverless microservice as: CI/CD pipeline + microservice framework (including the registry and the microservice governance framework) + Kubernetes/containers + cloud operations (including call-chain tracing, logging, alarms, performance monitoring, etc.) + elastic scaling service + traffic governance service.

 

The key to adopting Serverless: no one-size-fits-all approach

 

If Serverless microservices are so good, shouldn't every team immediately push to adopt them? From our analysis of the Huawei AppGallery Connect Serverless case, the conclusion is that teams should pay close attention and plan carefully, but should not apply a one-size-fits-all approach.

 

First, a brief introduction to the background of the case. The Huawei AppGallery Connect platform is dedicated to providing full life-cycle services covering application ideation, development, distribution, operation, and analysis, improving the efficiency of application development and operations and accelerating application innovation and commercial success. AppGallery Connect deeply integrates high-quality services from across Huawei, opening up Huawei's long-accumulated capabilities in technology R&D, global operations, quality, security, engineering management, and other fields to developers, greatly reducing the difficulty of application development and operation, improving application quality, and providing open distribution and operation services. The architecture is as follows:

Built on Serverless, the platform provides mobile developers with full life-cycle services for application ideation, development, distribution, operation, and analysis through the cross-device SDK (AppGallery Connect Kit), the AppGallery Connect Portal, and RESTful APIs.

 

The AppGallery Connect Serverless solution focuses on solving the efficiency problems of mobile application development and has the following technical characteristics:

 

1. Data security: the cloud database uses an original device-cloud collaborative full-encryption technology, encrypting data cooperatively on both the device side and the cloud side. Keys encrypted with the user's password are backed up in the cloud, fully safeguarding the security of user data.

2. High performance: function cold start has been heavily optimized in code transfer and loading, using resource pooling, code caching, call-chain prediction, and other techniques to bring cold-start latency down to 10–20 ms without any changes to the operating system. Through network and protocol optimizations, the cloud database achieves device-cloud data synchronization in under 120 milliseconds (the industry norm is typically 200 milliseconds or more).

3. Elastic database scaling: CloudDB, a Serverless cloud database, was built to solve device-cloud data synchronization, multi-device data synchronization, and mass data storage. Compared with traditional database services, it provides mobile-oriented features such as real-time data synchronization between client and cloud and between clients, as well as offline availability on mobile devices. The underlying database engine uses a distributed architecture that separates storage from compute, automatically expanding storage capacity or compute nodes according to the demands of mobile devices and shielding developers from problems such as database capacity expansion and migration.

4. Lower labor cost: AGC cloud hosting removes the need for developers to handle CDN setup, domain name management, SSL certificate management, and other work for their application websites, with built-in global CDN acceleration and global domain name management, saving developers operations effort and cost.

 

At present, the AppGallery Connect Serverless solution has been applied within Huawei to projects such as the AppGallery Connect app, Huawei Quick App, the translation service, and the AppGallery joint-operation campaign flash-sale system. Compared with the previous microservices architecture, R&D efficiency has improved significantly.

 

Take Huawei AppGallery Connect Serverless's support for the translation service as an example. InfoQ learned that the development team used Serverless cloud functions, cloud storage, and cloud database services to efficiently build a highly available, on-demand-scaling translation service. Compared with the traditional architecture, manpower was reduced by 45% and the R&D cycle shortened by 50%.

 

In terms of division of labor, the Serverless solution also differs from a traditional organization. Architects are mainly responsible for overall architecture design, domain model design, data model design, and function decomposition. The R&D engineer role splits into two categories: function development and business launch.

 

Engineers responsible for function development handle function coding, unit testing, and integration testing. The responsibilities of engineers who bring the service online are distinctly Serverless in flavor: they independently enable cloud functions, cloud storage, cloud database, and other services, and they are also responsible for uploading and publishing functions and configuring triggers.

 

Triggers are at the heart of event-driven programming with cloud functions. This project involves HTTP, CloudDB (cloud database), CloudStorage (cloud storage), and other triggers, requiring engineers accustomed to serial API-call programming to change their thinking and habits and quickly learn asynchronous, event-triggered programming. The comparison below illustrates the difference between the two programming mindsets:
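To make the two mindsets easier to compare in code, here is a hedged TypeScript sketch; the storage, db, and notify helpers and the event shapes are hypothetical placeholders, not AGC APIs.

```typescript
// Hypothetical helpers used only for illustration.
declare const storage: { upload(file: Buffer): Promise<string> };
declare const db: { insert(row: object): Promise<{ id: string }> };
declare function notify(id: string): Promise<void>;

// Serial API-call mindset: one caller orchestrates every step in order.
async function handleUploadSerial(file: Buffer) {
  const url = await storage.upload(file);   // 1. upload the file
  const record = await db.insert({ url });  // 2. write a database record
  await notify(record.id);                  // 3. notify downstream systems
}

// Event-driven mindset: each function reacts to one trigger and does one thing.
export async function onFileUploaded(event: { objectUrl: string }) {
  // Fired by a (hypothetical) cloud storage trigger when the file lands.
  await db.insert({ url: event.objectUrl });
}

export async function onRecordInserted(event: { record: { id: string } }) {
  // Fired by a (hypothetical) cloud database trigger when the record is written.
  await notify(event.record.id);
}
```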

 

 

 

Admittedly, because teams are highly familiar with microservices and far less familiar with Serverless, promoting Serverless may also meet some organizational resistance. For leaders, the key to adoption is that technology decisions must not be one-size-fits-all, and Serverless should not be used for its own sake. Leaders need to become deeply familiar with the business processes and technical pain points, and then adapt and promote Serverless where its advantages apply.

 

The technical architecture diagram of the final implementation of AGC translation service is as follows:

 

 

 

Technical barriers to adopting Serverless: stateless functions, running-time limits, and cold starts

 

That said, the Serverless architecture still has its problems. Here we list three of the most important ones for reference; before committing to "Serverless First," you should have a clear understanding of them.

 

The first is the stateless-function problem. For better horizontal scaling and failover, cloud functions are stateless and provide no caching. In practice, however, although cloud functions are stateless, business processes are usually stateful. The usual workaround is for engineers to keep state in a piece of external storage and guard it with read/write locks.

 

This makes development more complex and is not naturally suited to low-latency scenarios.
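As a rough illustration of that workaround, here is a minimal sketch of keeping state in external storage and guarding it with a lock; the KVStore interface and its setIfAbsent semantics are assumptions for illustration, not a specific product's API.

```typescript
// Hypothetical external key-value store with an atomic set-if-absent primitive.
interface KVStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
  setIfAbsent(key: string, value: string, ttlMs: number): Promise<boolean>;
  del(key: string): Promise<void>;
}

async function withLock<T>(kv: KVStore, key: string, fn: () => Promise<T>): Promise<T> {
  const lockKey = `lock:${key}`;
  // Spin until the lock is acquired; the TTL guards against a crashed holder.
  while (!(await kv.setIfAbsent(lockKey, "1", 5000))) {
    await new Promise((resolve) => setTimeout(resolve, 50));
  }
  try {
    return await fn();
  } finally {
    await kv.del(lockKey);
  }
}

// Inside the stateless cloud function: read, update, and write back state under the lock.
export async function handler(kv: KVStore, event: { delta: number }) {
  return withLock(kv, "counter", async () => {
    const current = Number((await kv.get("counter")) ?? "0");
    const next = current + event.delta;
    await kv.set("counter", String(next));
    return next;
  });
}
```

The extra round trips to external storage and the lock contention are exactly the complexity and latency cost mentioned above.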

 

But this, too, is being addressed. In the case of Huawei AppGallery Connect Serverless, InfoQ learned that Huawei AGC is developing a stateful function programming model and a consistency model for concurrent multi-function access, to solve the deadlocks and inconsistent state caused by concurrent access to state data and to enable efficient reads and writes of that state. Here is a simple schematic for reference:

 

 

 

The second is the running-time limit. Serverless is billed by usage time, and function instances are destroyed after they finish running. There is therefore usually a cap on running time, to avoid problems such as synchronous waits and service congestion that could leave cloud functions hanging for long periods while consuming resources. This affects some strongly state-dependent services.

 

Although cloud vendors generally allow developers to adjust the default running-time limit of cloud functions, this is not a complete solution.

 

One approach is to use stateful functions rather than externalizing state data to a third-party system (for example, fetching state from a third party through a REST interface), because once state data depends on a third-party system, it is hard to guarantee latency, performance, and other metrics. Another direction for future exploration is to use new techniques such as dynamic pricing to lower the cost of long-running functions and to let running-time limits be configured progressively according to demand.

 

The third is the cloud function cold-start problem. Keeping a function resident in memory wastes resources and raises costs, but if every invocation starts cold, it takes roughly 200 milliseconds (this figure varies greatly across programming languages and is for reference only), which is unacceptable for latency-sensitive services. Solving function cold start is a huge technical challenge, and one that cloud functions must overcome.
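Beyond platform-level optimizations, one common application-level mitigation (not specific to AGC, and itself a cost trade-off because it keeps an instance resident) is to have a timer trigger ping the function periodically and short-circuit those pings inside the handler. A minimal sketch, with hypothetical event shapes:

```typescript
// Warm-up pattern: a scheduled trigger invokes the function regularly so an
// instance stays resident; real requests skip straight to business logic.
interface FnEvent {
  warmup?: boolean;   // set by the (hypothetical) timer trigger payload
  payload?: unknown;  // present on ordinary invocations
}

export async function handler(event: FnEvent) {
  if (event.warmup) {
    // Keep-alive ping: return immediately without doing real work.
    return { status: "warm" };
  }
  // ... real business logic for ordinary invocations ...
  return { status: "ok", echo: event.payload };
}
```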

 

On the other hand, cold start is both a challenge and an opportunity for Serverless. Once cloud function cold starts become fast enough, the applicable domain of Serverless will expand greatly and mainstream business architectures could change fundamentally. AppGallery Connect Serverless continuously improves function startup and scaling efficiency through cold-start optimization, intelligent function scheduling, fast traffic awareness, and rapid instance startup.

 

Middleware, modeling, and low code: the next directions for Serverless

 

Of course, beyond solving cloud function cold starts, middleware, modeling, and low code are also core directions for the next stage of Serverless development and evolution.

 

Middleware

An important future trend in Serverless is the emergence of more and more Serverless middleware. Rewriting traditional businesses built on Spring MVC, Spring Cloud, or other microservices frameworks entirely as functions can be very expensive.

 

But if Serverless microservices are available, existing business code can go Serverless directly with only minor adaptation and enjoy the operations-free, elastically scaling capabilities Serverless brings — and then any architect will start seriously weighing the feasibility of migrating from the existing architecture to a Serverless one.
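To illustrate what "minor adaptation" could look like in practice, here is a hedged sketch in which the existing business logic stays untouched and only a thin handler maps a function event onto it; translateText and the event/response shapes are hypothetical placeholders, not a specific framework's API.

```typescript
// Existing business logic, reused as-is (simplified placeholder body here).
async function translateText(text: string, targetLang: string): Promise<string> {
  // ... call the existing translation engine ...
  return `[${targetLang}] ${text}`;
}

// The only new code: a thin adapter exposing the logic as an HTTP-triggered function.
export async function handler(event: { body: string }) {
  const { text, targetLang } = JSON.parse(event.body);
  const result = await translateText(text, targetLang);
  return { statusCode: 200, body: JSON.stringify({ result }) };
}
```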

 

Modeling

If a business depends on only a small number of Serverless services, it can be deployed through each service's management console or CLI tools, following that service's development and deployment rules.

 

For more complex businesses that use multiple Serverless services at the same time, however, every deployment and upgrade becomes costly without a unified application description and deployment tool.

 

If Serverless applications can be modeled and standardized, with dependent services provisioned automatically to enable one-click deployment, it will save R&D engineers a great deal of effort.
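As a purely hypothetical sketch of what such a unified application model might look like, the following declares the dependent services and functions once and deploys them in a single pass; every name here (AppModel, provision, publish) is invented for illustration and does not correspond to any real tool.

```typescript
// A hypothetical unified application description: dependent services and
// functions are declared once, then provisioned and published in one pass.
interface AppModel {
  name: string;
  services: Array<"cloud-function" | "cloud-db" | "cloud-storage">;
  functions: Array<{ name: string; trigger: "http" | "db" | "storage" | "timer" }>;
}

const model: AppModel = {
  name: "translation-service",
  services: ["cloud-function", "cloud-db", "cloud-storage"],
  functions: [
    { name: "translate", trigger: "http" },
    { name: "onDocumentAdded", trigger: "db" },
  ],
};

async function deploy(app: AppModel) {
  for (const svc of app.services) await provision(svc); // open dependent services
  for (const fn of app.functions) await publish(fn);    // upload functions and bind triggers
}

// provision() and publish() stand in for whatever tooling the platform provides.
async function provision(svc: string) { console.log(`provisioning ${svc}`); }
async function publish(fn: { name: string }) { console.log(`publishing ${fn.name}`); }
```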

 

Low-code platforms

As enterprise digital transformation accelerates, the delivery efficiency of the traditional software development model can no longer keep up with business needs, and enterprises' digital construction lags behind demand, making improved development efficiency urgent. Low-code platforms have therefore gradually become a technology hotspot.

 

With a low-code platform, applications can be built through graphical drag-and-drop, configuration, and scripting, greatly reducing the difficulty and cost of development compared with the traditional model.

 

Because Serverless is inherently operations-free, highly available, and elastically scalable, a low-code platform built on Serverless will further reduce the amount of code developers write, their development cost, and their post-launch operations workload, truly achieving low code and low workload across the whole application life cycle.

 

Serverless low-code platforms will therefore be an important direction for Serverless evolution. It has been revealed that the core of the next-generation Serverless solution of Huawei AppGallery Connect is a low-code platform aimed at solving application development and operations efficiency problems; the architecture is as follows:

 

 

 

Reading this far, you may wonder: with stateful functions here and cold-start optimization there, where does Huawei AppGallery's enthusiasm for Serverless come from? As we know, Huawei's consumer business continues to follow its "1+8+N" all-scenario smart life strategy to create an all-scenario smart life experience for consumers.

 

For Huawei AGC, the mission is to "carry the source of innovation" — helping millions of application developers accelerate application innovation and jointly create a better digital life experience for global users — which is one way that consumer business strategy is carried forward. With that in mind, it is not surprising to see Huawei's app market investing heavily in Serverless.

 

In any case, it is a great time to be a developer. We believe that both the Serverless First strategy and more scenario-based Serverless applications will bring developers more architectural choices, and ever more efficient application delivery will be the main theme in the evolution of related platforms and tools for some time to come.