Source | Serverless WeChat official account

Background

The concept of Serverless was proposed in 2012, but related cloud products only truly arrived in 2014, when AWS launched Lambda. If we compare Serverless to a child, it is already six years old.

While there is no agreed-upon definition of Serverless, I'm sure most developers think of Lambda when they hear the word, along with terms like "function", "pay per call", and "event-driven". Indeed, the newly born Serverless was like the lovely "Purple Potato Man" below: purple is full of mystery (when it first launched, it really was black technology), and it left a deep impression.

Serverless when it was just born

The enormous influence of AWS and the cutting-edge technology behind Lambda made Serverless stick in people's minds. Yet precisely because of that impressive birth, many people, six years on, are still stuck on Serverless = Lambda or Serverless = FC (Function Compute), which is a pity.

Serverless today

Today, enterprises are undergoing comprehensive digital transformation, and entire technical architectures are eager to draw on cloud native for its huge technical dividends. Serverless has been cloud native from the day it was born, so it is worth systematically understanding the Serverless concept and the products born around it over these years. Whether you are a front-end or back-end developer, an architect, an SRE, or a CTO, I believe you will gain something and be better able to use the technical value of Serverless to contribute to business success.

Definition

The industry has long tried to define Serverless. For example, CNCF describes it in terms of NoOps and Pay as You Run, while Berkeley says Serverless = FaaS + BaaS. But I would argue Serverless doesn't need a definition; the name is already clear enough. "Server + less" is a concept whose core idea is that you no longer need to pay attention to the Server. By contrast, in the IaaS era you bought a Server, installed various tools on it, and built your business on top.

Servers will not disappear, but ordinary developers' attention will be taken away from them. That shift brings "intelligent elasticity", "fast delivery", and "lower cost", which are also the typical features of Serverless products.

So there is no need to define Serverless; the idea is clear enough. Let's set the concept aside and look at the products in each specific technology area; I believe this will give you a more intuitive understanding.

The rebirth of PaaS in the Serverless era

PaaS itself is a fairly broad concept that, loosely speaking, sits between IaaS and SaaS. Let's start with a specific product: GAE (Google App Engine). In 2006, AWS launched cloud computing in the form of IaaS. Google believed cloud computing should not be something as low-level as IaaS, so in 2008 it launched its own cloud computing product, GAE. (For the background of this development, see Zhang Lei's article "A Decade of Containers: A Chronicle of Software Delivery".)

GAE, like Lambda, made a splash at launch, but developers soon found it extremely restrictive (in today's terms, a classic case of "it doesn't matter what you think, only what I think"), and the result was a rush back to IaaS.

Later PaaS products, such as Cloud Foundry, were more practical: the underlying IaaS is still provided by cloud vendors, while the upper layer offers an application management ecosystem. The idea behind them is that developers should not have to consume cloud computing in something as low-level as IaaS; they should start from PaaS. But this is not Serverless either: you still have to think about server maintenance, updates, scaling, capacity planning, and so on.

SAE (Serverless App Engine)

Now, with the maturity of container technology and the further development of the Serverless concept, PaaS and Serverless have begun to merge. Such products not only have the PaaS characteristic of fast delivery, but also the Serverless characteristics of intelligent elasticity and lower cost. A typical example is Alibaba Cloud's Serverless App Engine (SAE), launched in 2019.

**First, it is a PaaS, and more specifically an application-oriented PaaS.** This means most developers will find its concepts familiar: application publishing, restarts, grayscale releases, environment variables, configuration management, and so on.

**Second, it is also Serverless.** This means you no longer need to worry about servers: no applying for machines, maintaining servers, or installing piles of tools. Instead, you use resources on demand and pay by the minute, and combined with flexible elasticity (scheduled scaling and metric-based scaling) this drives cost down to a minimum.

Finally, thanks to the development of container technology represented by Docker, SAE solves the prominent problems of classic PaaS (heavy restrictions and strong binding): by relying on container images, it can run applications written in any language.
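To make these PaaS-style concepts a bit more concrete, here is a minimal, hypothetical sketch of a containerizable Python web app whose behavior is driven by environment variables injected at deployment time (the variable names are illustrative and not defined by SAE):

```python
import os
from flask import Flask, jsonify

app = Flask(__name__)

# Configuration is injected by the platform as environment variables.
# The names APP_GREETING / APP_STAGE / PORT are illustrative assumptions.
GREETING = os.environ.get("APP_GREETING", "hello")
STAGE = os.environ.get("APP_STAGE", "dev")

@app.route("/")
def index():
    return jsonify(greeting=GREETING, stage=STAGE)

if __name__ == "__main__":
    # Listen on all interfaces so the app is reachable inside a container.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```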

By this point, I believe most developers already have a rough picture of a product that combines PaaS and Serverless. According to the China Cloud Native User Survey Report (2020), this form of Serverless product is being adopted by more and more developers.

On top of that, another topic worth discussing is microservices and Serverless.

Microservices and Serverless

Part of the industry currently understands microservices and Serverless like this: the representative technology of today's cloud computing is microservices, and the representative technology of the next generation is Serverless. This framing suggests that Serverless is more advanced than microservices, and even that microservices will disappear in a Serverless future, as in the following picture:

Personally, I think this perception comes from equating the Serverless idea with specific products such as Function Compute (FaaS). When we talk about microservices today we think of the technical frameworks behind them, such as Spring Cloud and Dubbo, but the term has long outgrown pure technical frameworks. There are core supporting ideas behind it, including:

  1. Microservices increase technical complexity to a certain extent, but at a certain scale they reduce system complexity and organizational complexity.

  2. Modern business systems are becoming more and more complex, and many business systems are based on domain-driven design (DDD). Microservices are actually the supporting technology behind DDD.

Monolith, microservices, and complexity

So if microservices could not be used in the Serverless era, I'm sure many developers would feel lost, even "afraid of the future", because they would feel as though someone had shown them a vision of the future without telling them how to get there.

Setting aside specific technical implementations and returning to the idea behind it, Serverless stands for shielding developers from servers and lowering the barrier to using cloud services. It therefore does not conflict with microservices; the two can coexist perfectly. In Alibaba Cloud's SAE, the built-in microservice capabilities (backed by the Alibaba Cloud product MSE) mean:

  1. Applications deployed on Serverless platforms such as SAE can continue to be developed using microservices without any modifications.

  2. There are even a number of microservices enhancements on SAE, including registry hosting, service governance, and more, that further reduce the barrier and burden of using microservices for developers.

Therefore, in Serverless-style PaaS products, Serverless and microservices are no longer at odds. Developers can keep building with microservice technologies while also enjoying the "intelligent elasticity" and "lower cost" that the Serverless concept brings.

Function Compute (FC)

After talking about Serverless applications, let's look at Serverless functions. As the most "orthodox" Serverless product, FC needs little introduction. After years of development, it has been applied successfully in front-end Serverless, multimedia processing, AI, event-driven scenarios (cloud product events, database change events, etc.), IoT messaging, and more. A growing number of companies even build their business entirely on FC, such as the Serverless practice of Century Lianhua.
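To make the programming model concrete, here is a minimal sketch of a function in the style of FC's Python runtime, which invokes a `handler(event, context)` entry point per event; the event payload and fields below are assumptions for illustration:

```python
import json

# Minimal function sketch: the platform calls handler(event, context)
# for each request or event. The event layout here is an assumption.
def handler(event, context):
    data = json.loads(event)            # event arrives as raw JSON bytes
    name = data.get("name", "world")
    # Business logic goes here; there is no server to provision or maintain.
    return json.dumps({"message": f"hello, {name}"})
```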

In addition, there are now solutions to many of the earlier technical limitations:

  1. Most early function computing products had limits on disk size, code package size, execution time, memory specifications, and so on. Alibaba Cloud Function Compute introduced performance instances, which largely remove these limits.

  2. To address the cold start problem, you can use reserved instances.

Let’s take a look at some typical scenarios using FC.

Front-end Serverless

After successive waves of technology such as Ajax, Node.js, and React, the front end has formed a relatively mature technical system. Node.js in particular gave the front end a path to the server side.

The division of labor between front end and back end lets each side play to its strengths, but collaboration also brings friction: back-end engineers usually provide interfaces oriented to the domain and its services, while the front end needs data interfaces oriented to users. Even a simple requirement can get bogged down in interface definition and joint debugging between the two sides. Hence the BFF (Backends For Frontends) layer, developed by the consumers of the interfaces themselves, which handles the conversion from domain models to UI models.

The ideal is appealing, but the reality is harsher: once front-end engineers take on the BFF layer, they find they also have to learn back-end DevOps, high availability, capacity planning, and more, which is exactly what they did not want to deal with. The Serverless era addresses this pain point well: moving from BFF to SFF (Serverless For Frontend), front-end engineers only need to write a few functions and leave the rest to the Serverless platform.
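As a rough illustration of the SFF idea, here is a hypothetical function that aggregates two back-end domain APIs into a single UI-friendly payload; the URLs and field names are made up, and while such a layer is often written in Node.js, the shape is the same in Python:

```python
import json
import urllib.request

# Hypothetical back-end domain APIs; URLs and fields are illustrative only.
USER_API = "https://api.example.com/users/{id}"
ORDER_API = "https://api.example.com/users/{id}/orders"

def fetch_json(url):
    with urllib.request.urlopen(url, timeout=3) as resp:
        return json.loads(resp.read())

def handler(event, context):
    """Aggregate domain APIs into one UI-oriented response (the SFF role)."""
    user_id = json.loads(event).get("userId")
    user = fetch_json(USER_API.format(id=user_id))
    orders = fetch_json(ORDER_API.format(id=user_id))
    # Reshape domain models into exactly what the page needs.
    return json.dumps({
        "nickname": user.get("nickname"),
        "recentOrders": [o.get("title") for o in orders[:3]],
    })
```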

Server-Side Rendering (SSR) is a similar story. After the front end and back end split, the back end only writes interfaces and the front end handles rendering. But for SEO friendliness and fast first-screen rendering, an SSR scheme is sometimes needed, which again pulls the front end toward the server side. With Serverless, front-end engineers can handle SSR happily as well.

In fact, for many front-end products (various mini programs, Yuque, and so on), front-end engineers take on full-stack development end to end, and more and more of them are adopting Serverless-related technologies.

Of course, using Serverless well requires a complete ecosystem, including frameworks, runtimes, toolchains, and configuration conventions. For this, see Midway.

Multimedia processing

Online education, live streaming, short video, and similar industries are booming, which creates heavy video-processing demands: editing, splitting, merging, transcoding, resolution adjustment, client adaptation, and so on. Typical scenarios include:

  • Every Friday, hundreds of 1080p videos larger than 4 GB are generated on a schedule, and they are expected to be processed within a few hours.

  • There may even be more advanced, customized requirements, such as writing transcoding details to a database after transcoding finishes, or automatically pre-warming popular videos to the CDN to relieve pressure on the origin.

In a "Serverfull" world you might need to build a complex system to support these requirements, but with FC everything becomes much simpler.
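As a rough sketch of the pattern (the event fields and the ffmpeg invocation are assumptions, not FC's actual trigger schema), a transcoding function might look like this, with the platform fanning out one invocation per video:

```python
import json
import os
import subprocess

def handler(event, context):
    """Hypothetical sketch: transcode one video referenced by the event to 720p.

    The event fields below are assumptions for illustration.
    """
    payload = json.loads(event)
    src = payload["inputPath"]                        # e.g. "/tmp/raw.mp4"
    dst = payload.get("outputPath", "/tmp/out-720p.mp4")
    # Shell out to ffmpeg; each invocation handles one video, and the
    # platform scales invocations out in parallel.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", "scale=-2:720", dst],
        check=True,
    )
    return json.dumps({"output": dst, "sizeBytes": os.path.getsize(dst)})
```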

AI Serverless

A typical application scenario for Function Compute is AI model serving. After training a model, a data scientist often needs to work with software engineers to turn it into a system or service, a process commonly referred to as model serving. The operations-free, elastically scaling nature of function computing is exactly what data scientists want from a highly available distributed system.
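A minimal sketch of the model serving pattern inside a function is shown below; the model path, the pickle format, and the feature layout are assumptions, and a real deployment might load the model from object storage instead:

```python
import json
import pickle

# Load the model once at cold start; warm invocations reuse the same process.
# "/code/model.pkl" is an assumed path bundled with the function package.
with open("/code/model.pkl", "rb") as f:
    MODEL = pickle.load(f)

def handler(event, context):
    """Score a single request with the preloaded model."""
    features = json.loads(event)["features"]      # e.g. [5.1, 3.5, 1.4, 0.2]
    prediction = MODEL.predict([features])[0]     # assumes a scikit-learn-style model
    return json.dumps({"prediction": float(prediction)})
```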

Serverless container – ASK

As a production-grade container orchestration system, Kubernetes has become the de facto standard for container orchestration and is widely used for automated deployment, scaling, and management of containerized applications. There are also Serverless Kubernetes products, such as Alibaba Cloud's ASK and AWS Fargate. With these products, you can deploy container applications without purchasing nodes, without node maintenance or cluster capacity planning, and pay on demand based on the CPU and memory configured for the application. An ASK cluster provides full Kubernetes compatibility while lowering the barrier to using Kubernetes, letting you focus on the application rather than the underlying infrastructure.
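Because a Serverless Kubernetes cluster is still a standard Kubernetes endpoint, the usual tooling works unchanged. As a small sketch using the official `kubernetes` Python client (assuming a kubeconfig for the cluster), deploying an application specifies only the image and the CPU/memory you pay for, with no node pool in sight:

```python
from kubernetes import client, config

# Assumes a kubeconfig pointing at the (Serverless) Kubernetes cluster.
config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="demo",
                    image="nginx:1.25",
                    # Billing follows the requested CPU/memory, not nodes.
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "500m", "memory": "512Mi"},
                    ),
                ),
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```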

If you are a heavy Kubernetes user, Serverless Kubernetes is a good choice. Typical customer scenarios include:

  • Weibo: rapidly scales out 500 application instances within 30 seconds to cope with New Year's Eve activities and trending events;
  • Megvii: builds an intelligent, O&M-free AI application platform on ASK;
  • Qutoutiao: builds a Serverless big data computing platform on ASK.

BaaS

Everything mentioned so far (FC, SAE, ASK, and so on) is a "compute" Serverless product, but development involves more than compute logic; there are many other dependencies, such as storage and middleware. **BaaS (Backend as a Service)** products expose these dependencies as API-based services. Such APIs are generally consumed on demand, operations-free, and automatically scaled, so they too are Serverless.

A typical example is Alibaba Cloud OSS (Object Storage Service), which provides platform-independent RESTful APIs to store and access any type of data from any application, anytime, anywhere.
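As a small illustration of how BaaS is consumed (the endpoint, bucket name, and credentials below are placeholders), using OSS through its Python SDK `oss2` takes only a few calls, with no servers to manage:

```python
import oss2

# Placeholder credentials, endpoint, and bucket name; substitute your own.
auth = oss2.Auth("<access_key_id>", "<access_key_secret>")
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "my-demo-bucket")

# Store and fetch an object via the API; capacity and availability are
# handled by the service, which is what makes it "Serverless" BaaS.
bucket.put_object("greetings/hello.txt", b"hello, serverless")
print(bucket.get_object("greetings/hello.txt").read().decode())
```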

It is also worth mentioning the middleware we know so well from enterprise application development. Taking Alibaba as an example, its ongoing "4.0" architecture upgrade is turning middleware comprehensively into BaaS, with unified operations, delivery, billing, and support models; the offerings work out of the box, and their degree of productization keeps improving.

Conclusion

To sum up, the Serverless products discussed above span the front end, back end, containers, and BaaS, and many products not mentioned here (such as CDN) are in fact also Serverless. So while I don't agree with Berkeley's "Serverless = FaaS + BaaS", I do agree with its other view: "Serverless will dominate cloud computing".

Serverless is an idea, not a specific technology. When 99% of cloud products are Serverless, cloud computing itself will be Serverless. This change will not be black and white, nor a revolution that tears everything down and starts over; it is about lowering the cost of using the cloud for users and improving R&D efficiency for developers.

About the author: Chen Tao has 10 years of software development experience and 4 years of entrepreneurial experience. He previously worked at Taobao and Didi, focuses on cloud native, microservices, Serverless, and related technical fields, and has accumulated R&D, management, and business experience across cloud computing, e-commerce, and zero-to-one startups. He currently works at Alibaba Cloud on the design and development of Serverless App Engine (SAE) in the Cloud Native Application Platform team.