Brief introduction: Function Compute (FC) launched the industry's first serverless GPU instances, the industry's first instance-level observability and debugging, and the first end-to-end cloud-local collaboration and multi-environment deployment capabilities, while cutting GB-scale image startup times to seconds and VPC network connection setup to 200 ms. Serverless App Engine (SAE) supports seamless migration of microservice frameworks without containerization rework and offers the industry's first hybrid elasticity strategy. These innovations and breakthroughs resolve long-standing technical problems in the serverless field and clear away the obstacles that have kept serverless out of enterprises' core production scenarios.
“Even with the rise of cloud computing, it’s still all about servers, but not for long. Cloud applications are going serverless.”
This was Ken Fromm's view of the future of cloud computing in his 2012 article "Why The Future of Software and Apps is Serverless".
Serverless First: From cloud vendor advocacy to customer initiative
Serverless, with its inherent elasticity and fault tolerance, meets enterprises' dual demands for flexibility and stability in online business, and has become a new direction in the evolution of enterprise cloud architecture.
Today, more and more large and medium-sized enterprises are carving the elastically scaling execution units out of their traditional backend systems and running them on serverless architecture, while startup teams that prize R&D and delivery efficiency are going serverless for their entire business. The idea of Serverless First is gaining traction, with more and more cloud workloads running on serverless.
Changes in the numbers reflect the technology's market maturity.
According to Datadog's report this year, Lambda is used by half of the AWS customers on Datadog and by 80% of AWS container customers, and these users invoke their functions 3.5 times more often per day than two years ago, with functions running 900 hours per day on average. Turning to the Chinese market, the "2020 China Cloud Native Survey Report" released by CNCF this year found that 31% of enterprises are using serverless technology in production, 41% are evaluating it, and 12% plan to use it in the next 12 months.
On October 21, Alibaba Cloud Serverless unveiled a series of technological breakthroughs at the Cloud Native Summit of the Apsara Conference, targeting the difficulties and pain points facing the industry, followed by large-scale serverless practices from major enterprises. For example, NetEase Cloud Music used serverless technology to build its offline audio and video processing platform, and Pumpkin Movie went fully serverless within 7 days, establishing its business monitoring, release, and elasticity systems on that foundation.
From First to Faster: FC's seven technological breakthroughs clear the stumbling blocks to serverless development
The essence of serverless is to shield developers from the underlying computing resources so they can focus freely on business-layer development. But the higher the abstraction, the more complex the cloud vendor's implementation underneath. Function Compute splits services further, down to the granularity of individual functions, which inevitably raises new challenges in development, operations, and delivery: how do you debug a function jointly across cloud and local environments, how do you observe and debug a function, how do you optimize the cold start of a GB-scale image? None of these were problems at the old service granularity; now they are stumbling blocks to running enterprises' core production business on serverless at scale.
Since entering the Forrester leaders quadrant last year, the Alibaba Cloud Function Compute team has kept working through these industry-wide technical problems and announced seven technological innovations and breakthroughs at this year's conference.
Serverless Devs 2.0: the industry's first desktop client, with end-to-end cloud-local collaboration and multi-environment deployment
The serverless developer platform Serverless Devs 2.0 has been released after nearly a year as an open source project. Compared with 1.0, version 2.0 improves performance and user experience across the board. Serverless Desktop, the industry's first desktop client in this space, combines careful, aesthetically minded design with pragmatism and brings stronger enterprise-grade service capabilities.
As the industry's first cloud-native full-lifecycle management platform supporting mainstream serverless services and frameworks, Serverless Devs is committed to giving developers one-stop serverless application development. Serverless Devs 2.0 introduces multi-mode debugging that bridges online and offline environments, including cloud-local joint debugging, local debugging, and online/remote debugging against the live cloud runtime. It also supports one-click deployment of more than 30 frameworks, including Django, Express, Koa, Egg, Flask, Zblog, and WordPress.
Industry-first instance-level observability and debugging
An instance is the smallest schedulable atomic unit of function resources, analogous to a Pod in a container cluster. Because serverless abstracts away the heterogeneous underlying resources so completely, the resulting "black box problem" is the core obstacle to its large-scale adoption. Comparable products in the industry expose neither the concept of an "instance" nor instance-level metrics such as CPU and memory. Yet observability is the developer's eyes: without it, how can anyone talk about high availability?
Function Compute's headline release is instance-level observability, which monitors function instances in real time, collects performance data, and visualizes it, giving developers an end-to-end path for monitoring and troubleshooting function instances. Instance-level metrics let you view core indicators such as CPU and memory usage, instance network status, and the number of requests inside an instance, so the black box is no longer black. Function Compute will also open up login access to selected instances, making them not only observable but debuggable.
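As a rough illustration of the kind of signals instance-level metrics surface, a function can self-report its own instance's resource usage. A minimal sketch using the third-party psutil package; the metric names are illustrative, and in practice the platform's instance metrics collect these for you:

```python
import json

import psutil  # third-party; would need to ship with the function's dependencies

def handler(event, context):
    """Self-report per-instance signals of the kind instance-level
    observability exposes: CPU, memory, and network counters."""
    net = psutil.net_io_counters()
    metrics = {
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        "memory_rss_mb": psutil.Process().memory_info().rss / (1024 * 1024),
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }
    return json.dumps(metrics)
```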
The industry's first instance reservation policy with fixed-count, scheduled, and water-level-based automatic scaling
Cold start in Function Compute is influenced by many factors: code and image size, container startup, language runtime initialization, process initialization, and execution logic. Minimizing it requires optimization from both sides. The cloud vendor automatically provisions the most appropriate number of instances for each function and optimizes cold start at the platform level; but some online services are highly latency-sensitive, and the vendor cannot do deeper business-level optimization on the user's behalf, such as trimming code and dependencies, choosing a programming language, tuning process initialization, or improving algorithms.
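On the user side, one common optimization is to move heavy one-time setup out of the request path. A minimal sketch of a Python function that does this, assuming the runtime's optional initializer hook; load_model and the model path are hypothetical placeholders:

```python
import json

model = None  # populated once per instance, then reused across requests

def load_model(path):
    # Hypothetical stand-in for expensive setup such as loading model
    # weights or opening connection pools.
    class Model:
        def predict(self, x):
            return x
    return Model()

def initializer(context):
    """Runs once when the instance starts, before any request is served."""
    global model
    model = load_model("/code/model.bin")  # placeholder path

def handler(event, context):
    """The per-request path stays lean: parse input, run the model."""
    payload = json.loads(event)
    return json.dumps({"result": model.predict(payload["input"])})
```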
Comparable products in the industry generally adopt a fixed-count reservation policy: the user configures a concurrency value N, and once N instances are allocated they are never scaled up or down unless adjusted manually. This only mitigates cold-start latency during certain peak periods, while greatly increasing operations and resource costs, and it is unfriendly to businesses with irregular peaks and valleys, such as red-envelope promotions.
Function Compute therefore leads the industry in handing users scheduling control over part of their instance resources, letting them reserve an appropriate number of function instances through multi-dimensional reservation policies: fixed count, scheduled scaling, water-level (utilization-based) scaling, and mixed scaling. These meet the demands of different scenarios: relatively stable load curves (such as AI/ML), clearly defined peak and off-peak periods (gaming and interactive entertainment, online education, new retail), unpredictable traffic bursts (e-commerce promotions, advertising), and mixed workloads (web backends, data processing). The result is less cold-start impact on latency-sensitive services and the long-sought combination of elasticity and performance.
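A water-level policy typically behaves like target tracking: keep the utilization of the reserved instances near a target by resizing the reservation. A minimal sketch of that control loop; the function name, target, and bounds are all assumptions for illustration:

```python
import math

def desired_reserved(in_use: int, target_util: float,
                     min_reserved: int, max_reserved: int) -> int:
    """Size the reservation so that the currently busy instances sit
    near the target utilization, clamped to the configured band."""
    if in_use <= 0:
        return min_reserved
    desired = math.ceil(in_use / target_util)
    return max(min_reserved, min(desired, max_reserved))

# Example: 45 busy instances at a 60% utilization target -> reserve 75,
# clamped to a [10, 300] band configured for the function.
print(desired_reserved(45, 0.6, 10, 300))  # 75
```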
The industry's first GPU instances
Function Compute provides two instance types: elastic instances and performance instances. Elastic instances range from 128 MB to 3 GB; with the smallest isolation granularity in the cloud ecosystem, they can genuinely reach 100% resource utilization in general-purpose scenarios. Performance instances come in 4 GB, 8 GB, 16 GB, and 32 GB sizes; their higher resource ceilings mainly suit compute-intensive scenarios such as audio and video processing, AI modeling, and enterprise Java applications.
With the rapid development of dedicated hardware acceleration, GPU vendors have shipped special-purpose ASICs for video encoding and decoding. Nvidia, for example, has integrated dedicated video-encoding circuitry since the Kepler architecture and dedicated video-decoding circuitry since the Fermi architecture.
Function Compute has now officially launched GPU instances based on the Turing architecture, letting serverless developers offload video encoding and decoding to GPU hardware acceleration and dramatically improving the efficiency of video production and transcoding.
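For illustration, a GPU-backed function might shell out to ffmpeg and use NVENC for hardware-accelerated transcoding. A minimal sketch, assuming a custom container image that bundles an NVENC-enabled ffmpeg build; the input and output paths are placeholders:

```python
import subprocess

def handler(event, context):
    """Transcode a clip with NVIDIA's hardware encoder (NVENC).
    A real function would download the source from object storage
    first and upload the result afterwards."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-hwaccel", "cuda",      # decode on the GPU
            "-i", "/tmp/input.mp4",  # placeholder input
            "-c:v", "h264_nvenc",    # encode on the GPU
            "-b:v", "3M",
            "/tmp/output.mp4",       # placeholder output
        ],
        check=True,
    )
    return "done"
```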
Delivery of up to 20,000 instances per minute
"Serverless" does not mean software runs without servers; it means users no longer need to care about the state, resources (CPU, memory, disk, network), or quantity of the underlying servers: the computing resources an application needs are provisioned dynamically by the cloud vendor. Users do, however, still care about the vendor's resource delivery capability and about the access jitter caused by resource shortfalls under sudden traffic.
Relying on Alibaba Cloud's powerful infrastructure services, Function Compute uses its DCP bare-metal resource pool and ECS resource pool as mutual backups to deliver up to 20,000 instances per minute at business peaks, further strengthening its delivery capability for customers' core business.
VPC network connection: optimized from 10 seconds to 200 ms
To access resources inside a VPC, such as RDS or NAS, a function must first connect to the VPC network. FaaS offerings in the industry generally do this by dynamically attaching elastic network interfaces (ENIs): an ENI is created in the user's VPC and attached to the machine that executes the function. This makes it easy for users to reach their backend cloud services, but attaching an ENI typically takes more than 10 seconds, a significant overhead in latency-sensitive scenarios.
Function Compute decouples compute from network by turning the VPC gateway into a standalone service, so the scaling of compute nodes is no longer limited by ENI attachment capacity. The gateway service takes care of ENI attachment and of the high availability and auto scaling of gateway nodes, while Function Compute focuses on scheduling compute nodes. As a result, the cold-start cost of establishing a VPC connection falls to 200 ms.
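Conceptually, the gateway keeps a pool of nodes with ENIs already attached, and a new compute node simply borrows an established tunnel instead of waiting for its own ENI. A toy sketch of the idea; the names, types, and timings are illustrative, not the actual implementation:

```python
import random

class VpcGatewayPool:
    """Toy model: gateway nodes attach ENIs ahead of time, so compute
    nodes reuse established tunnels instead of attaching their own."""

    def __init__(self, gateway_nodes):
        # ENI attachment (~10 s each) happened once, out of band.
        self.gateway_nodes = gateway_nodes

    def connect(self, compute_node: str) -> str:
        # Borrowing a tunnel is cheap (~200 ms path setup), so compute
        # scaling no longer waits on ENI attachment.
        gateway = random.choice(self.gateway_nodes)
        return f"{compute_node} -> {gateway} -> user VPC"

pool = VpcGatewayPool(["gw-1", "gw-2"])
print(pool.connect("compute-42"))
```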
GB-scale image startup: optimized from minutes to seconds
Function Compute was the first to release container-image deployment of functions, in August 2020; AWS Lambda followed at re:Invent in December 2020, and a domestic competitor announced container support for its FaaS in June 2021. Cold start has always been a FaaS pain point, and introducing container images dozens of times larger than zipped code packages aggravates cold-start latency.
Function Compute's answer is Serverless Caching: building on the characteristics of different storage services, it constructs a data-driven, intelligent, and efficient caching system, co-optimizing software and hardware to further improve the Custom Container experience. Function Compute has now pushed image acceleration to a high level. Our public test cases are available in the function evaluation repository (github.com/awesome-fc)…
Experimental results show that Function Compute has taken GB-scale image cold starts from minutes down to seconds.
From specialized to universal and from complex to simple, SAE makes All on Serverless possible
If FaaS needs technological breakthroughs to bring enterprises' core production business on board at scale, Serverless PaaS, represented by SAE, has broken through further on product usability and scenario coverage.
From proprietary to general-purpose: SAE naturally fits the large-scale adoption of a company's core business
Unlike FaaS-style serverless, SAE is "application-centric": it provides application-oriented UIs and APIs that preserve the user experience of servers and classic PaaS, where the application remains something you can see and touch. It avoids the application rewrites FaaS demands and FaaS's comparatively weak observability and debuggability, enabling zero-code-change, smooth migration of online applications.
SAE has broken serverless's adoption boundary, so serverless is no longer the preserve of front-end full-stack work and mini-programs: backend microservices, SaaS services, IoT applications, and more can also be built on it, making it naturally suited to the large-scale adoption of enterprises' core business. SAE also supports deploying source packages in multiple languages, such as PHP and Python, along with multiple runtimes and custom extensions, truly taking serverless from specialized to general-purpose.
From complex to simple: SAE naturally fits rapid containerization of enterprise applications
Traditional PaaS is criticized as complex to use, hard to migrate to, and troublesome to scale. Under the hood, SAE swaps virtualization for container technology, making full use of container isolation to improve startup time and resource utilization and achieve rapid containerization of applications. For application management, it retains the familiar paradigms of Spring Cloud, Dubbo, and other microservice frameworks, so there is no need to wield the large and complex Kubernetes machinery to manage applications.
Moreover, with the underlying computing resources pooled, SAE's inherently serverless nature lets users configure the CPU and memory they need rather than separately purchasing and continuously maintaining servers. Added to this are advanced microservice governance capabilities battle-tested across many Double 11 shopping festivals. Container, serverless, and PaaS thus combine into one, uniting technical advancement, optimized resource utilization, and an efficient development and operations experience, and making new technology easier and smoother to adopt.
It is fair to say that SAE covers almost all the scenarios of moving applications to the cloud; it is not only the best choice for that journey but also a model of All on Serverless.
Four big changes: serverless accelerates innovation in enterprises' modern application architecture
Leading technology alone cannot move an industry forward. The first-hand changes serverless brings to enterprise customers and developers form the second wheel of market maturity: technology evolving on its own while customers feed back from real-world adoption is the right posture for the sustainable development of any new technology.
Change 1: Servers vs. code
A full-stack engineer at a startup: “My job no longer revolves around cold, boring servers, which used to eat more of my time than writing code. I can spend more time on the business and keep the application running stably with the code I know best.”
The daily routine of a front-end-leaning full-stack engineer might look like this: master at least one front-end stack such as Node.js or Puppeteer, write some API endpoints, fix some backend bugs, and pour a great deal of energy into server operations. The more business the company has, the more time goes to operations.
Function Compute lowers the barrier to maintaining servers behind front-end languages like Node.js: anyone who can write JS code can keep a Node service running without learning DevOps.
Change 2: Computing clusters vs. a computing resource pool
A Java engineer in the algorithms field: “I no longer worry about server specs, complicated procurement, and painful operations as algorithms multiply and grow more complex. Instead, a practically unlimited resource pool, fast cold starts, and reserved instances give me elasticity and freedom.”
NetEase Cloud Music runs 60+ audio and video algorithms across 100+ business scenarios on 1,000+ cloud hosts and physical machines of various specifications. Although it had simplified the wiring between internal scenarios and algorithms in many ways, ever more algorithms handling both existing and incremental data, scenarios at different traffic scales, and different scenarios reusing the same algorithms left the team with less and less time for the business itself.
NetEase Cloud Music rebuilt its offline audio and video processing platform on Function Compute and applied it to scenarios such as music playback, karaoke, and song recognition, landing new business ten times faster while sharply cutting the compute and operations costs of its sparsely invoked algorithms.
Change 3: Load vs. scheduling
A game's lead programmer: “I no longer worry about SLB's round-robin polling being blind to actual Pod load and causing imbalance. Function Compute's scheduling system places every request sensibly, sustaining the high CPU consumption and high elasticity that the battle verification scenario demands.”
Battle verification is a mandatory business scenario in many of Lilith's combat games: it checks whether the battle data uploaded from a player's client involves cheating. Verification usually has to replay the battle frame by frame and is extremely CPU-intensive: if a 1v1 battle takes N ms, a 5v5 battle takes roughly 5N ms, which demands high elasticity. Moreover, in the container architecture, the attached SLBs' round-robin polling cannot perceive actual Pod load, causing load imbalance, and infinite loops in verification become stability risks.
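In outline, server-side verification deterministically re-simulates the uploaded battle and compares the outcome with what the client claims. A minimal sketch; the data shapes and the simulation step are assumptions for illustration:

```python
import hashlib
import json

def step(state: dict, frame_inputs) -> dict:
    # Stand-in for the real deterministic game simulation step.
    state["tick"] = state.get("tick", 0) + 1
    return state

def state_hash(state: dict) -> str:
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def verify_battle(initial_state: dict, inputs: list, claimed_hash: str) -> bool:
    """Replay the battle frame by frame and compare final-state hashes.
    CPU cost grows with frames and units, hence 5v5 ~ 5x the 1v1 cost."""
    state = dict(initial_state)
    for frame_inputs in inputs:  # one entry per frame
        state = step(state, frame_inputs)
    return state_hash(state) == claimed_hash
```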
Function Compute's scheduling system helps Lilith place every request properly and adds a timeout-and-kill mechanism against infinite loops. The complexity of scheduling sinks into the infrastructure, and after the vendor's deep optimization, cold-start latency, from scheduling to acquiring compute resources to service startup, is down to roughly one second.
Change 4: Scripting vs. automation
An operations engineer in the interactive entertainment industry: “I no longer worry about the slow, error-prone releases, hard-to-guarantee environment consistency, tedious permission assignment, and painful rollbacks of the traditional server model. SAE's full suite of service governance capabilities raises development and operations efficiency by 70%, and its elastic resource pool shortens business-side scale-out time by 70%.”
After one hit movie, Pumpkin Movie's daily registrations exceeded 800,000, bringing down the API gateway at the traffic entrance and putting every backend service under severe stability pressure. The emergency response took 4 hours in all: buying ECS instances, uploading scripts to the servers, running the scripts, and scaling out the database. A traffic explosion this organic accelerated Pumpkin Movie's thinking about a technology upgrade.
With the serverless application engine SAE, Pumpkin Movie went fully serverless within 7 days, embracing Kubernetes with zero barrier to entry and easily absorbing the burst traffic of hit releases. Compared with the traditional server operations model, development and operations efficiency rose 70%, costs fell 40%, and scale-out efficiency improved more than tenfold.
One step ahead, aim for a thousand miles
In 2009, Berkeley made six predictions about the then-emerging cloud computing, including pay-as-you-go services and a huge increase in the utilization of physical hardware, all of which have come true over the past 12 years. In 2019, Berkeley predicted again that serverless computing would become the default computing paradigm of the cloud era, replacing the serverful (traditional cloud) paradigm.
Measured against cloud computing's 12-year arc, Berkeley's serverless prediction is only three years into its test, barely a quarter of the way. In those three years, the industry has moved from a beautiful vision of the cloud's future, to cloud vendors advocating Serverless First and investing at scale, to enterprise users exploiting serverless to optimize their existing architectures while squarely confronting the obstacles to adopting it for core business at scale, to today's technological breakthroughs resolving the industry's common pain points. That takes not only the courage to go one step ahead but also the sense of mission to aim a thousand miles.
This article is original content from Alibaba Cloud and may not be reproduced without permission.