Author | Yang Haoran (Buchen)

For enterprises, the Serverless architecture has great application potential. As cloud products mature, as their integration capabilities improve, and as software delivery processes become more automated, we believe that under a Serverless architecture, enterprise agility can potentially improve tenfold. This sharing is divided into four parts:

1. The challenges of DevOps, and how to reduce the cost of implementing it
2. Why Serverless is the inevitable result of cloud development
3. Serverless + DevOps = ?
4. Real-world case studies

Challenges of DevOps

1. DevOps challenges

The entire application delivery process usually involves three parts: development, testing, and operations, which in a traditional organization are usually handled by three different teams. Each step has its own focus, but in practice, making the application delivery process smooth and efficient, and keeping the application highly available once it goes live, often requires the three teams to break down the walls between them.

The walls here are not just barriers caused by organizational boundaries, but also differences in focus across the three areas. For example, developers need to care about testability and operability, and these concerns profoundly affect application architecture and design. If developers do not fully consider the testability of their code, testers will run into many problems, such as how to implement fault injection and precise flow control, which must be thought through clearly at development time.
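As a concrete illustration of designing for testability, here is a minimal sketch (not from the original talk) of a fault-injection hook built into a service so that testers can exercise failure and latency paths without patching business code. The class name, environment variables, and handler below are hypothetical.

```python
import os
import random
import time

class FaultInjector:
    """Testability hook designed into the service: testers turn on faults or
    extra latency via environment variables, without modifying business code."""

    def __init__(self):
        self.fault_rate = float(os.getenv("FAULT_RATE", "0"))   # e.g. 0.05 = fail 5% of calls
        self.delay_ms = int(os.getenv("FAULT_DELAY_MS", "0"))   # injected latency per call

    def before_call(self, operation: str) -> None:
        if self.delay_ms:
            time.sleep(self.delay_ms / 1000.0)
        if random.random() < self.fault_rate:
            raise RuntimeError(f"injected fault in {operation}")

injector = FaultInjector()

def query_order(order_id: str) -> dict:
    # The hook sits in front of the real dependency call, so fault-tolerance
    # tests can exercise retry and fallback paths on demand.
    injector.before_call("query_order")
    return {"order_id": order_id, "status": "PAID"}  # placeholder business logic
```

Because the hook is part of the design rather than an afterthought, the test team can drive fault-tolerance and flow-control scenarios purely through configuration.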

The same applies to operability. During development you also need to consider, for example, how to take a service offline smoothly without losing data, and such designs need to integrate deeply with the operations system so that the service can be connected reliably and safely and operational efficiency can be improved.

Many internal failures at Alibaba were also caused by inconsistent design information between development and operations. For example, a design might guarantee high reliability with three replicas, but the operations side might believe that the machine hosting a replica was not serving traffic and mistakenly take it offline.

So DevOps is really about two things: first, having development, testing, and operations work as one team; second, aligning that team's mindset, which is the real challenge of DevOps.

2. DevOps challenges – Development

Here is a quick review of what needs to be considered at each stage of DevOps. In the development stage, we first need to sort out business requirements and scenarios and turn them into a system design; at the same time we need to think about how to design the data model so that the database does not become a single point of failure or a bottleneck. In an Internet company like Alibaba, applications carry huge volumes of user traffic, so load balancing, fault-tolerant design, traffic control, and so on also need to be considered.

If you use a microservices architecture where the application is composed of multiple services, then service governance needs to be considered as well. Once all of the above has been thought through and translated into a system design, development, debugging, and unit testing follow. After these are complete, the application can be handed over to testing.

3. DevOps challenges – Testing

Many aspects and dimensions need to be covered during testing to ensure software quality, including integration testing, end-to-end (E2E) testing, performance testing, stress testing, fault-tolerance testing, compatibility testing, and destructive testing.

4. DevOps challenges – Operations

When an application passes testing, it produces a deliverable that is considered ready for release. What follows is operational work: gray (canary) releases, upgrades and rollbacks, bringing servers online and offline, monitoring and alerting, security patching and upgrades, network configuration, operation auditing, traffic steering in the production environment, and so on.

5. DevOps challenges – How can DevOps implementation costs be reduced?

Looking closely at the work items DevOps covers, we can see that building a resilient, highly reliable application requires considering a great many things, and once DevOps is implemented, all of these considerations fall on the same team across the entire application lifecycle. This places very high demands on the team's mindset and skills.

The DevOps application delivery pipeline consists of many steps, and how these steps are smoothly connected and automated is also important.

After reviewing the challenges of DevOps, and based on practice within Alibaba and across the industry, we can see the need for platforms and tools to reduce the complexity of DevOps.

Introduction to Serverless

1. Cloud trends

Before introducing Serverless, let’s review cloud trends and see why Serverless is a natural consequence of cloud development.

In the past decade, cloud computing has grown dramatically, enabling users to easily access almost unlimited computing power via APIs and virtual machines. This model has many advantages: it is compatible with existing application development and runtime environments, which allows legacy applications to migrate to the cloud very smoothly.

The first phase of the cloud is the cloud of infrastructure, that is, the cloud hosting model: applications are built on cloud storage, networking, and other infrastructure, and the core value of this model is resource elasticity and cost. In the next phase, cloud architecture goes far beyond infrastructure, with cloud services emerging in every area. So today we need to think about how to use the capabilities of cloud services to build applications more quickly, in a building-block fashion, rather than reinventing the wheel. This is the cloud native model.

2. The cloud product portfolio is rapidly going Serverless

Currently, mainstream cloud vendors are rapidly making their product portfolios Serverless. This is not a prediction about the future but a fact that is already happening. The data in the figure below is based on statistics of new features and new service forms released by AWS, Microsoft, and Alibaba Cloud; the vast majority of new services are Serverless.

3. Cloud programming model

Cloud computing has produced a large number of services that, in terms of form, are higher-level Serverless abstractions, which makes sense. If you re-examine the cloud product architecture from the perspective of the cloud programming model, the lowest layer is the infrastructure layer, consisting of two parts: IaaS and containers. On top of the infrastructure sits the cloud native application operating system, and Kubernetes is the de facto standard for managing the underlying IaaS infrastructure. Above the operating system there is a very rich set of APIs, that is, a fully managed cloud service system. Looking at Alibaba Cloud's portfolio, we find a rich set of products, including databases, big data, and middleware, all offered in a Serverless, fully managed mode.

With so many cloud APIs, the question today is how to design a general-purpose computing framework that connects very tightly with these Serverless cloud services and cloud APIs to help customers quickly build resilient, highly available applications. This is why Serverless computing appears at the framework layer: it needs a close chemical reaction with cloud APIs to help users improve the efficiency of building and operating applications, and to help customers build a new generation of distributed, data-driven, and intelligent cloud native applications.

4. Differences between cloud hosting and Serverless applications

Here are the essential differences between cloud hosted applications and Serverless applications. An application's build pattern can be broken down into three layers: infrastructure management at the bottom, integration of external services in the middle, and application logic on top. In the cloud hosting pattern, applications are effectively built at the infrastructure layer, so the level of abstraction is relatively low. This brings a lot of work: users must integrate different components and services themselves, make and implement many decisions, delivery is slow, and there is a great deal of repetitive operations work.

If users build applications in the Serverless mode, which is equivalent to building on higher-level APIs, the glue logic and infrastructure management are taken on by the cloud provider, and the integration and decision-making cost for users is relatively low; the main consideration is how to map business logic and requirements onto cloud services. The benefit of building on highly efficient cloud APIs is that applications are cheap to build, can be delivered in days or even hours, and the future operations burden is greatly reduced.

5. What is Serverless computing?

Serverless computing has four characteristics. First, there is no cloud infrastructure to maintain, and application construction is more abstract and efficient. Second, it scales elastically in real time: data-driven, load-aware scheduling can meet very low latency requirements while keeping resource utilization high. Third, metering is very fine grained and fully on demand, billed by the second or finer, so from the user's perspective resource utilization is effectively 100%. Finally, high availability is built into the platform layer.
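To make the fine-grained billing point concrete, here is a back-of-the-envelope calculation with entirely hypothetical prices (the real Function Compute price list differs); it only illustrates how pay-per-use billing tracks actual execution time rather than provisioned capacity.

```python
# Hypothetical prices, for illustration only -- not actual Function Compute pricing.
PRICE_PER_GB_SECOND = 0.00002    # compute price, currency units per GB-second
PRICE_PER_MILLION_REQ = 0.2      # request price per million invocations

requests_per_day = 200_000
avg_duration_s = 0.1             # 100 ms per invocation
memory_gb = 0.5                  # 512 MB function

# Billing is proportional to actual execution time * memory, not to idle servers.
gb_seconds = requests_per_day * avg_duration_s * memory_gb
daily_cost = (gb_seconds * PRICE_PER_GB_SECOND
              + requests_per_day / 1e6 * PRICE_PER_MILLION_REQ)

print(f"GB-seconds billed per day: {gb_seconds:,.0f}")   # 10,000
print(f"Estimated daily cost: {daily_cost:.4f}")         # idle time costs nothing
```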

6. Alibaba Cloud Serverless product system

One clarification: Serverless computing is only one part of Alibaba Cloud's Serverless products; there are also Serverless storage, API, analytics, middleware, and more. From this perspective, Serverless is not a very new concept: the earliest OSS object storage is already a Serverless product, and the whole cloud product portfolio has been going Serverless. What has emerged only in recent years is a general-purpose Serverless computing platform such as Function Compute, which links Serverless products together to build complete Serverless applications.

Serverless DevOps

Once you have these Serverless capabilities, how do you combine them with DevOps?

1. Simplified infrastructure management and operations

The figure below looks at this from the perspective of building a highly available application. The application is divided into four areas: infrastructure, runtime, data, and application. The infrastructure layer has to handle machine-level operations such as dealing with faulty machines. The runtime layer requires resource isolation and flow control for the application. The data layer mainly concerns the database and cache: how to design the table structure, how to design the caching strategy, how to achieve load balancing, and how to make sure horizontal scaling has no bottleneck.

At the application layer, you need to handle application-level operations such as bad code packages, configuration errors, and heartbeat exceptions. The parts in the blue dotted boxes in the figure below can be handled entirely by the platform, without the user even being aware of them; the blue solid boxes indicate areas where the platform does much of the work but the user still needs to be aware and make certain decisions; the red boxes represent the parts the user still has to manage. In terms of fault tolerance, the platform provides very strong capabilities, including multi-AZ disaster recovery, rapid elasticity, built-in flow control, and multi-level, multi-dimensional monitoring and alerting. With these capabilities, the complexity of managing infrastructure is greatly reduced for the user.

2. An agile application delivery process

The figure below shows the application delivery process: code is stored and managed in a centrally managed code repository, turned into a deliverable through continuous integration, and stored in a deliverable repository. The deliverable can be a container image or a code package. Once a deliverable is produced, it can be automatically deployed to the test environment for release validation and finally to production. The key to the application delivery process is a high degree of automation, and the key to automation lies in two parts: infrastructure as code, and automating each link in the pipeline.

3. Automate the application delivery pipeline

The diagram below shows the automated application delivery pipeline. As you can see, many functions need to be implemented at each step, and much of the work is repetitive, which is where infrastructure as code comes in.

4. Infrastructure as code

The diagram below illustrates infrastructure as code. The Serverless application model describes application resources declaratively, enabling standardization, automation, and visualization.

You can pass different parameters to the template and dynamically generate the application running environment.
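As a minimal sketch of this idea, the snippet below renders a declarative application description from a few parameters, so the same template yields a test environment and a production environment. The schema, resource names, and fields are illustrative only, not the actual ROS or Serverless application model syntax.

```python
import json

def render_app_template(env: str, memory_mb: int, instances: int) -> dict:
    """Declarative description of one application; parameters vary per environment."""
    return {
        "service": f"order-service-{env}",          # hypothetical service name
        "function": {
            "handler": "index.handler",
            "runtime": "python3",
            "memory_mb": memory_mb,
        },
        "scaling": {"min_instances": instances},
        "triggers": [{"type": "http", "path": "/orders"}],
    }

# The same declaration, parameterized per environment:
test_env = render_app_template("test", memory_mb=256, instances=0)
prod_env = render_app_template("prod", memory_mb=1024, instances=2)
print(json.dumps(prod_env, indent=2))
```

Because the environment is generated from the template rather than assembled by hand, it can be recreated, reviewed, and versioned like any other piece of code.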

5. Service versions and gray release

In Function Compute, an application has the concept of a version. A version is an immutable entity, which prevents unexpected modifications from breaking the online application. Alibaba Cloud avoids such problems through service versions and gray releases, and clients access the application through an alias.
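The sketch below models the "immutable versions plus weighted alias" idea: versions never change, and the alias decides what share of traffic each version receives. This is a simplification written for illustration, not the platform's actual routing implementation.

```python
import random

# Immutable versions: each entry is a fixed handler that is never modified in place.
versions = {
    "1": lambda event: {"version": "1", "result": "stable handler"},
    "2": lambda event: {"version": "2", "result": "new handler"},
}

# Alias configuration: main version plus additional versions with traffic weights.
alias = {"main": "1", "additional": {"2": 0.05}}   # 5% of traffic goes to version 2

def invoke(alias_cfg: dict, event: dict) -> dict:
    """Route one request through the alias according to the gray-release weights."""
    for candidate, weight in alias_cfg["additional"].items():
        if random.random() < weight:
            return versions[candidate](event)
    return versions[alias_cfg["main"]](event)

# Clients always call the alias, so promoting version 2 (or rolling back)
# is just an alias update -- no client change and no mutation of any version.
print(invoke(alias, {"order_id": "42"}))
```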

6. Serverless workflow

Alibaba Cloud provides Serverless Workflow to help users chain DevOps steps together. Users can quickly create workflows using the accompanying services and tools and view them visually, so the effect of a workflow can be seen clearly.

7. Automate the application delivery pipeline

With these capabilities in place, let's review how to automate the application delivery pipeline. In the source stage, static checks on code quality ensure that checked-in code is in good shape. When code is checked into the repository, unit tests run automatically and deliverables are produced. In the testing stage, seamless integration with Alibaba Cloud ROS enables automatic deployment to the test environment and execution of test cases. After that, a release manager confirms the deployment, the tasks are chained together through a workflow, the application is released to the pre-release environment, and then deployed to production. Every step is automated, and R&D efficiency is greatly improved.
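As a rough sketch of how these stages could be chained, the pipeline below runs the automated steps in order with a single manual approval gate. Stage names and the approval stub are illustrative and do not correspond to any specific CI/CD product's API.

```python
# Each stage is a plain function; a failure (exception) stops the pipeline.
def static_check():
    print("lint / static analysis passed")

def unit_test():
    print("unit tests passed, deliverable built")

def deploy_test():
    print("deployed to test environment, test cases executed")

def approve() -> bool:
    # The only manual gate (release manager confirmation); stubbed as approved here.
    print("release manager approval requested")
    return True

def deploy_staging():
    print("deployed to pre-release environment")

def deploy_prod():
    print("deployed to production")

def run_pipeline():
    for stage in (static_check, unit_test, deploy_test):
        stage()
    if not approve():
        print("release rejected")
        return
    for stage in (deploy_staging, deploy_prod):
        stage()

if __name__ == "__main__":
    run_pipeline()
```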

8. Collect and query logs

The Serverless computing platform natively provides many log and metric collection capabilities, such as simple and advanced log queries, giving users advanced data analysis capabilities on top of their logs.

9. Metric collection and visualization

In addition to the basic metric views, the Serverless computing platform supports user-defined metric views. Users can search custom keywords and metrics to analyze data related to their services.

When Serverless and DevOps are combined, R&D efficiency can be greatly improved: on the one hand, the development team's mental burden is greatly reduced; on the other hand, tooling makes the entire DevOps pipeline highly automated.

Case sharing

Finally, some successful cases. Alibaba Cloud Serverless computing supports the Alibaba economy's mini program platform, saving 40% of R&D resources. It also supports Yuque, which uses Function Compute for document processing and other compute-intensive workloads, greatly reducing operations costs, and it cut Shimo Docs' operations cost by 58%. It helps Weibo improve R&D efficiency as well, shortening the time to bring a function online from two weeks to a few hours.

It is clear that in 2020 the industry's acceptance of Serverless has grown greatly, and Serverless capabilities have become more universal.

Yang Haoran, head of Serverless computing at Alibaba Cloud, joined Alibaba Cloud in 2010 and was deeply involved in the development and product iteration of the Alibaba Cloud Feitian distributed system. He has a deep understanding of large-scale distributed computing and large-scale data storage and processing.

Course recommendation

To help more developers enjoy the dividends of Serverless, we have brought together 10+ technical experts in the Serverless field from Alibaba to create a Serverless open course that developers can learn from and apply immediately, and easily embrace the new paradigm of cloud computing: Serverless. Click the link for the free course: developer.aliyun.com/learning/ro…

Follow the Serverless official WeChat account for the latest Serverless news and the most complete collection of Serverless content, to keep up with Serverless trends, and to focus on the confusion and problems you encounter in your own adoption practice.