Author: Dosa | Head of Alibaba Cloud Serverless

“Only transcendence can keep us going.”

This is Dosa's tenth year at Alibaba. Since joining Alibaba Cloud in 2010, he has worked on the development of the Apsara distributed system, serving successively as batch computing architect and R&D manager for Table Store (NoSQL), and has been deeply involved in the evolution of Alibaba Cloud's systems and products. In 2016, Dosa became head of product development for Alibaba Cloud Function Compute, dedicated to building the next generation of elastic, highly available serverless computing platforms.

Serverless is a technical challenge to be tackled over the next decade. In this wave, Alibaba Cloud has been at the forefront: in both technology and the breadth of its product portfolio, it leads the domestic market. "Never take it lightly. Serverless is still at an early stage in China. Only when the technology and products are polished and mature will the user experience improve, and only then can this battle be won."

We conducted a brief interview with Dosa to hear his thoughts on the development of Serverless, its technical difficulties, and how it is being put into practice.

Adopt, or wait and see?

In the future, cloud computing will become infrastructure for all of society and business. Using the cloud should be as easy as using water or electricity today: you don't need to know where the water comes from, how it is filtered, or how the pipes are laid; you just turn on the tap and fill a glass. The Serverless concept moves cloud computing in this direction. It holds that, beyond their application logic, people should not have to care about server-related matters such as management, configuration, and operations and maintenance, and should pay only for what they use.

From this perspective, Serverless is a path toward truly turning cloud computing into infrastructure for society and business. It is also closer to the cloud-native approach the industry now advocates, so adopting Serverless should be a natural part of using the cloud.

Developers overseas are noticeably more receptive to Serverless than developers in China, because many overseas companies built their businesses on the Lambda ecosystem from day one. In China, some large enterprises have started to use Serverless tools and products, but a large number are still waiting and watching.

Any new product needs an adaptation period, so after a wave of Serverless products appeared, users had many questions: whether to use them, whether to migrate, and how to migrate. Enterprises often ask how Function Compute guarantees security and stability, and whether migrating a traditional project to a Serverless architecture carries significant transformation cost and risk. These concerns are natural, but I believe they will be resolved as Serverless develops, as the definition of FaaS broadens, and as the tool chain matures. A problem that technology can solve is, in the end, not really a problem.

Don't build your own Serverless without scale

The extreme elasticity, cost savings, and development efficiency that Serverless brings are very attractive. To develop and launch a traditional application, a team has to cooperate: everyone builds part of the business, merges code, and does joint debugging, followed by capacity assessment, setting up test and production environments, release testing, and ongoing operations and maintenance. In the Serverless era, developers only need to build their own features or functions and deploy them to the test and production environments; a large part of the subsequent operations work is no longer their concern.

It is no exaggeration to say that if an enterprise builds its own database service on cloud hosts, its availability will generally fall short of the database services offered by cloud vendors. Likewise, vendor-provided products such as API gateways and data storage services perform better and are more secure and reliable.

Small businesses are better off not building Serverless themselves, because the core element of Serverless is scale: when today's volume is small you consume very few resources, and when it is large you must mobilize additional resources. On Double Eleven, traffic is on the order of hundreds of millions of requests. If your enterprise does not have machine resources sized for that kind of traffic, how can you schedule resources for others to use? Without the ability to schedule by volume, there is no Serverless. Enterprises that lack elastic resource pools are not advised to build their own Serverless capabilities; they can instead practice Serverless through public cloud products.

All of the major vendors have now identified Serverless as the future: even if it is not the final form of cloud computing, it is a path toward that final form. On the one hand, Serverless solves many practical problems and is more "like", or closer to, what cloud computing should really be; on the other hand, no one wants to be left behind in the cloud. Serverless is therefore a battleground that must be won.

The competition over Serverless capabilities has three main dimensions:

The first is performance, which includes security, stability, and elasticity. If performance is poor, there is little point in doing Serverless, or even cloud computing, because performance is the core of Serverless; everything rests on security, stability, and performance.

The second is functionality. To do Serverless well, features are indispensable: Serverless is not just FaaS, and even FaaS is not just running code online; it also encompasses BaaS, triggers, logging, monitoring, alerting, and more. Developers will only be willing to use it if it meets their needs feature by feature.

The third is experience. The Serverless experience matters enormously, and it spans everything: ease of use, stability, security, product elasticity, completeness of the tool chain, and so on. Beyond these three dimensions, community, ecosystem, and openness also matter a great deal.

Alibaba Cloud was one of the first public cloud vendors in China to launch a Serverless platform; its FaaS product is called Function Compute. Function Compute has much worth highlighting in terms of event triggering, supported languages, and user experience:

  • **Event triggers:** Alibaba Cloud Function Compute can be triggered by events from other Alibaba Cloud services, such as Object Storage Service (OSS), Log Service (SLS), Message Service (MNS), Table Store (OTS), API Gateway, and CDN. Its callback mechanism greatly reduces the architectural and coding cost of asynchronous models (a minimal handler sketch follows this list).

  • **Supported languages:** Alibaba Cloud Function Compute currently supports mainstream development languages such as Node.js, Java, and Python, while the Custom Runtime adds support for Go, C/C++, Ruby, Lua, and more.

  • **User experience:** Alibaba Cloud Function Compute provides a web console and SDKs; users can manage applications through the console or through an interactive command-line tool.

  • **Service model:** Functions can be organized and managed by service and application, and a single function instance can handle multiple requests concurrently, effectively reducing compute costs.
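To make the event-trigger item above concrete, here is a minimal sketch of a Function Compute handler invoked by an OSS trigger. It assumes the standard Python entry-point signature `handler(event, context)`; the event field names follow the commonly documented OSS event payload but should be treated as illustrative and verified against the current documentation.

```python
# -*- coding: utf-8 -*-
# Minimal sketch of an Alibaba Cloud Function Compute handler driven by an
# OSS trigger. The event field names are illustrative assumptions based on
# the typical OSS event payload; verify them against the official docs.
import json
import logging

logger = logging.getLogger()


def handler(event, context):
    """Entry point configured for the function (e.g. index.handler)."""
    # The OSS trigger delivers a JSON document describing one or more events.
    payload = json.loads(event)
    for record in payload.get("events", []):
        bucket = record["oss"]["bucket"]["name"]
        key = record["oss"]["object"]["key"]
        logger.info("object %s uploaded to bucket %s", key, bucket)
        # Business logic would go here: fetch the object, process it,
        # write results back to OSS or another downstream service.
    return "OK"
```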

A tough nut to crack

The pain points of Serverless are genuinely hard: how to migrate traditional projects to Serverless quickly and smoothly, how to debug effectively under a Serverless architecture, how to save more cost, and so on. My colleague Xu Xiaobin discussed the challenges facing Serverless in his article "Behind the Hubbub: Concepts and Challenges of Serverless":

Landing Serverless at scale in mainstream scenarios is not easy; there are many challenges, which I analyze below:

Challenge 1: Making applications lightweight is difficult

Achieving fully automatic elasticity and paying only for the resources actually used means the platform must be able to scale business instances out in seconds or even milliseconds. This is a challenge for the infrastructure and places high demands on the business, especially on larger applications: if distributing and starting an application takes ten minutes, automatic elasticity can barely keep up with changes in traffic…

Challenge 2: The infrastructure must be extremely responsive

Once instances of Serverless applications or functions can scale in seconds or even milliseconds, the supporting infrastructure quickly comes under tremendous strain. The most common examples are service discovery and log monitoring systems: instances in a cluster used to change a few times per hour, but now change several times per second. If the responsiveness of these systems cannot keep up with the rate of instance churn, the whole experience suffers.

Challenge 3: The business process lifecycle is inconsistent with the container

A Serverless platform relies on a standardized application lifecycle to deliver fully automated container migration, application self-healing, and other capabilities. In systems built on standard containers and Kubernetes, the lifecycle the platform can control is that of the container, so the business must keep its process lifecycle consistent with the container's, including startup, shutdown, and the conventions for readiness and liveness probes…
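As a rough illustration of this point (not taken from the article), the sketch below shows a Python process that cooperates with its container: it exposes a health endpoint that readiness/liveness probes can poll and exits cleanly on SIGTERM. The path `/healthz` and port 8080 are assumptions; a real platform defines its own probe configuration.

```python
# Minimal sketch: keeping a business process's lifecycle aligned with its
# container. The /healthz path and port 8080 are illustrative assumptions.
import signal
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Readiness/liveness probes poll this endpoint.
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()


server = HTTPServer(("0.0.0.0", 8080), HealthHandler)


def graceful_shutdown(signum, frame):
    # The container runtime sends SIGTERM on stop; exit promptly so the
    # platform can replace this instance without a forced kill.
    raise SystemExit(0)


signal.signal(signal.SIGTERM, graceful_shutdown)

try:
    server.serve_forever()
finally:
    server.server_close()
```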

Challenge 4: Observability needs to improve

In the serverful model, when something goes wrong in production the server is still there, and users naturally want to log on to it. In the Serverless model, users do not need to care about servers; by default they cannot even see them. What happens when the system fails and the platform cannot heal itself? … When observability around the Serverless model is insufficient, users will not feel reassured.

Challenge 5: The R&D and operations mindset needs to change

Almost every developer's first production deployment targets a specific server, a specific IP, and that habit runs deep. As Serverless is gradually adopted, developers need to change some of their thinking: get used to the idea that "the IP may change at any time", and operate and maintain their systems in terms of service versions and traffic instead.

To use a metaphor, Serverless today has its frame in place, but many of the squares in that frame (the problems) have yet to be filled in (solved). This is one reason people hesitate to adopt Serverless today: there are not yet enough success stories. **In fact, though, Alibaba successfully applied Serverless during the 2019 Double Eleven, and Alibaba Cloud has also guided a number of enterprises in adopting Function Compute, saving them substantial IT costs.**

"Become the Serverless that users need"

Function Compute has several typical application scenarios, such as web applications, AI inference, audio and video processing, image and text processing, real-time file processing, and real-time stream processing. It already serves a large customer base, including Graphite Document, Mango TV, Sina Weibo, and MLONG Technology.

Take Weibo as an example: Function Compute handles billions of Weibo requests per day on average. Its ability to scale compute resources at millisecond granularity keeps application latency stable even when a trending event hits, so the user experience is unaffected by the volume of traffic. By running its image processing service on Function Compute, Weibo has achieved sustained cost savings: it no longer needs to reserve idle machines in advance to absorb peak traffic surges, and because it no longer has to maintain complex machine state, its engineers can focus on working with the product team to create business value instead of spending time managing infrastructure.

It is not only established Internet companies like Sina that have adopted Serverless; new startups are joining the camp as well.

Blue Ink (Lanmo) is a high-tech company founded by returnees who studied in the United States, focusing on new technology research and platform operations for digital publishing and mobile learning in the mobile Internet era. With the explosion of demand for online education, Blue Ink has stepped up its efforts to integrate high-quality course resources across the industry and keeps expanding its business boundaries. Along with these opportunities, the technical team has also faced unprecedented challenges.

Video processing is one of the hardest problems the Blue Ink technical team has faced. Blue Ink handles a large volume of instructional video material every day, involving a series of complex tasks such as editing, splitting, merging, transcoding, resolution adjustment, and client adaptation. Over the past few years the team had built a self-managed, controllable video processing pipeline around FFmpeg and related technologies, which supported the rapid growth of the business. This year, however, growth far exceeded the engineers' expectations: peak video processing demand was dozens of times higher than in previous years, and the existing architecture was overwhelmed, seriously affecting the user experience.

Blue Ink's core requirements boil down to three things: cost savings, extreme elasticity, and freedom from operations and maintenance. These are exactly the problems Serverless is best at solving. After evaluating the Serverless services offered by domestic cloud vendors, the Blue Ink technical team agreed that Alibaba Cloud Function Compute was the most suitable solution for their video processing workload.

Because FC is fully compatible with existing code logic and supports a wide range of mainstream development languages, the Blue Ink team could migrate code from its original architecture to FC almost seamlessly and at very low cost. **By connecting to an OSS trigger, a Function Compute instance is pulled up automatically whenever a new video source file is uploaded to OSS, starting a video processing lifecycle.**
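As a rough sketch of this pattern (not Blue Ink's actual code), an OSS-triggered function can download the newly uploaded source file, run FFmpeg on it, and upload the result. The event field names, the credential attributes on the context object, the output key, and the availability of an `ffmpeg` binary in the runtime are all assumptions for illustration; `oss2` is the Alibaba Cloud OSS Python SDK.

```python
# -*- coding: utf-8 -*-
# Rough sketch of an OSS-triggered transcoding function. Event field names,
# the credential attributes on the context object, and the presence of an
# ffmpeg binary in the runtime are assumptions for illustration only.
import json
import os
import subprocess

import oss2  # Alibaba Cloud OSS Python SDK


def handler(event, context):
    evt = json.loads(event)["events"][0]
    bucket_name = evt["oss"]["bucket"]["name"]
    key = evt["oss"]["object"]["key"]

    # Temporary credentials issued to the function instance (assumed attribute names).
    creds = context.credentials
    auth = oss2.StsAuth(creds.access_key_id, creds.access_key_secret, creds.security_token)
    endpoint = "https://oss-{}-internal.aliyuncs.com".format(context.region)
    bucket = oss2.Bucket(auth, endpoint, bucket_name)

    # /tmp is the function instance's writable scratch space.
    src = "/tmp/" + os.path.basename(key)
    dst = src + ".transcoded.mp4"
    bucket.get_object_to_file(key, src)

    # Transcode with ffmpeg; assumes the binary ships with the function
    # (for example via a layer or a custom runtime).
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vcodec", "libx264", dst], check=True)

    bucket.put_object_from_file("transcoded/" + os.path.basename(dst), dst)
    return "done"
```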

By integrating with Serverless Workflow, Blue Ink can also orchestrate distributed tasks in a unified way: large files are sliced, processed in parallel, and finally merged, and computing resources equivalent to tens of thousands of instances can be mobilized within a short time, so video processing jobs complete quickly.

Compared with the traditional approach, the FC-based Serverless solution saves Blue Ink about 60% of its IT costs in video processing scenarios.

The main battleground for the next decade

The ideal Serverless would offer a more complete product form, more extreme elasticity, a better tool chain, greater cost savings, higher development efficiency, faster and easier migration, and a simpler yet more powerful cloud experience. Developers should be able to focus on business code in a single way of working, without worrying about differences between runtime platforms: write once, run anywhere, and switch between different kinds of business without new learning costs once that one way of working has been mastered.

From a developer's perspective, the Serverless development model also challenges the existing R&D system. For front-end engineers, Serverless not only extends their existing capabilities but may also reposition the entire front-end profession. It used to be that the front end had it easy: build the UI and leave the rest to the back end. With the combination of the front end and Serverless, what is asked of front-end engineers is no longer just a page but the delivery of an entire application.

For back-end engineers, the first reaction might be: does this make me obsolete, so I'm out of a job? Not at all. The evolution of the Serverless development model pushes them further down the stack, letting them focus on the parts that genuinely need deep technical work, such as making data capabilities and service capabilities better and more solid. That is what we want to see.

Through the combination of tool chain, community, and product capabilities, Alibaba Cloud is playing an interesting and beneficial hand for the overall development of Serverless. **Alibaba Cloud Serverless aims to be "the Serverless that users need", which sets it apart from other cloud vendors.** Only Serverless vendors who put user needs first can build good Serverless products.

**In the future, Serverless will be everywhere, and any sufficiently complex technical solution may be delivered as a fully managed, Serverless back-end service**, and not just cloud products but also services from partners and third parties. The capabilities of the cloud and its ecosystem will be expressed through API + Serverless. In fact, Serverless will become a key part of any platform product or organization that exposes its capabilities through APIs, such as DingTalk, Didi, and WeChat.

Recommended course

To let more developers enjoy the dividends of Serverless, we have brought together more than ten Alibaba technical experts in the Serverless field to create an open Serverless course tailored for developers, one they can learn from and apply immediately, and easily embrace the new paradigm of cloud computing: Serverless.

Take the free course: https://developer.aliyun.com/learning/roadmap/serverless

The Serverless official WeChat account publishes the latest Serverless technology news, collects the most complete Serverless content, follows Serverless trends, and pays close attention to the confusion and problems you encounter when putting Serverless into practice.