Author | Chen Tao   Source | Alibaba Cloud Native official account

First, Serverless is naturally cloud native

1. The cloud native era

With the development of container technology, represented by Docker in 2013, and later the CNCF Foundation and K8s, cloud native became known to the majority of developers. Before the cloud native era there were two stages: the first was building a self-managed IDC room, and the second was simply lifting existing applications onto the cloud. With a self-built IDC room it is difficult to achieve high availability, high scalability, or improvements in O&M efficiency. The second stage, the cloud computing era, made some progress compared with IDC, but most usage remained relatively primitive and it was still hard to use the cloud well. Resources at this stage are nearly unlimited, but an approach based on virtual machines and a variety of self-built services still leaves much to be improved.

In the cloud native era, applications are designed to run in the cloud environment and take full advantage of cloud resources, such as the elasticity and distributed nature of cloud services. As shown in the figure above, cloud native can be divided into several parts:

One is cloud native technologies, including containers, K8s, microservices and DevOps. These technologies are only tools; to use them well they require best practices and combinations, namely cloud native architecture.

Cloud native architecture is a set of architectural principles and design patterns built on top of cloud native technologies. These are guiding principles, for example: observability must be in place first, and only on that premise should elasticity be pursued; high availability must be designed in and infrastructure concerns should sink into the platform; and as much non-business code as possible should be stripped out of the application. Guided by such technologies and architectural designs, cloud native applications can then be built.

Cloud native applications are lightweight, agile and highly automated, which allows them to give full play to the advantages of the cloud and better adapt to business development and change in the era of digital transformation.

2. Serverless is naturally cloud native

Why is Serverless naturally cloud native? Although Serverless appeared slightly earlier than cloud native, if we go back to AWS and its first Serverless product, Lambda, its on-demand billing and extreme scaling fit the definition of cloud native very well, for example the sinking of infrastructure. With Lambda there is no need to manage servers; it scales on request, achieving a high degree of automation; and it organizes code in the form of functions, which are lighter and faster to deliver than whole applications. The disadvantage of this model, however, is the high cost of transformation, because many existing applications are large monoliths or microservice applications that are difficult to refactor into functions.

3. Meet SAE

The Serverless concept and related products have been around for almost 7 years, and during that time cloud native technologies such as Docker and K8s have also matured. In 2018, Alibaba Cloud began to think about another form of Serverless, namely the Serverless Application, which became the SAE product. It was launched in September 2018 and has been commercialized for 3 years.

SAE features:

  • Immutable infrastructure, observability, automatic recovery

SAE is built on a K8s base, behind which stand immutable infrastructure (such as images), observability and automatic recovery: if request failures are detected, traffic is automatically cut off or the instance is restarted.

  • O&M-free, extreme elasticity, extreme cost efficiency

Server resources are hosted, so users do not need to operate and maintain servers themselves, and SAE also provides the corresponding capabilities for extreme elasticity and extreme cost efficiency.

  • Easy to get started, zero transformation, integration

As shown in the figure above, the top layer is what customers perceive: the product form is aPaaS, an application PaaS. After more than three years of practice, it has achieved the goal of being genuinely easy to use with zero transformation, and a great deal of integration work has been done.

SAE, a product built on K8s, Serverless and aPaaS, fully conforms to the characteristics of cloud native. At the technical level, the bottom layer uses containers and K8s and integrates microservices and various DevOps tools. At the architecture level, because it relies on these technologies underneath, it is very convenient for users to follow cloud native architecture principles when designing their own applications, so that customer applications can enjoy the cloud native dividend: lightweight, agile and highly automated to the greatest extent, which greatly lowers the barrier to entering the cloud native era.

SAE product architecture diagram

SAE is an application-oriented Serverless PaaS with a zero-threshold, zero-container-base product form, enabling users to easily enjoy the dividends of Serverless, K8s and microservices. It also supports a variety of microservice frameworks, a variety of deployment channels (its own console UI, Cloud Effect, Jenkins, plug-in deployment, etc.) and a variety of deployment forms (WAR, JAR, image, etc.).

The bottom layer is the IaaS resource layer, above which sits the K8s cluster; both are transparent to users, who do not need to buy servers or understand K8s. The layer above that provides two core capabilities: application hosting and microservice governance. Application hosting covers the application life cycle and so on, while microservice governance covers service discovery, graceful offline and so on; both are well integrated in SAE.

SAE's core features can be summed up in three points: zero code modification, 15-second elastic efficiency, and 57% cost reduction and efficiency improvement.

Second, SAE design concepts

1. Kubernetes base

  • Containers

In the K8s container orchestration ecosystem, the most basic element is the container, or image. Relying on images, users gain immutable infrastructure. The advantage is that an image can be distributed and replicated anywhere, which gives portability without vendor lock-in. For users who are not familiar with images or do not want the extra complexity, we also provide WAR/JAR-level deployment, which greatly lowers the threshold for enjoying this dividend.

  • Oriented to the final state

In traditional O&M, many issues are hard to handle. For example, when a server suddenly becomes overloaded or its CPU spikes for various reasons, a lot of manual operations are required. In K8s, by combining observability with liveness and readiness health checks, operations can be automated: K8s will automatically cut off traffic and reschedule instances, greatly reducing O&M costs.
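To make the liveness and readiness idea more concrete, here is a minimal sketch (not taken from SAE itself) of a health endpoint an application could expose for such probes, using only the JDK's built-in HTTP server; the port 8080 and the /healthz path are illustrative assumptions rather than SAE requirements.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthCheckServer {
    public static void main(String[] args) throws Exception {
        // Port and path are illustrative assumptions, not an SAE requirement.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/healthz", exchange -> {
            // Return 200 when the app can serve traffic: a readiness probe
            // pointed at this path would gate traffic, while a liveness probe
            // would trigger a restart on repeated failures.
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```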

  • Managed resources

Not only are the ECS machines managed, but K8s itself is also operated and maintained internally. Customers do not need to buy servers, buy K8s or run K8s themselves, and do not even need to understand K8s, which greatly reduces their threshold and labor cost.

2. Serverless characteristics

  • Extreme elasticity

We have achieved 15 seconds end to end, meaning a Pod can be created and the user's application started within 15 seconds. In terms of elasticity, we support elasticity on basic metrics (such as CPU and memory), conditional elasticity on business metrics (such as QPS and RT) and scheduled elasticity. Setting elasticity metrics manually still carries some threshold and burden, because customers do not know what values the metrics should be set to. Against this background, we are also considering intelligent elasticity, which automatically calculates the elasticity metrics and recommends them to users to further lower the threshold.
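To illustrate how metric-based elasticity can decide the number of instances, the sketch below uses the well-known proportional formula popularized by the K8s HPA (desired = ceil(current × observed / target)); it only illustrates the idea and is not SAE's internal scaling algorithm.

```java
public class MetricScalerSketch {
    /**
     * Proportional scaling heuristic (same shape as the K8s HPA formula):
     * desired = ceil(current * observedMetric / targetMetric).
     * Shown here only to illustrate metric-based elasticity.
     */
    static int desiredReplicas(int currentReplicas, double observedCpuPercent, double targetCpuPercent) {
        int desired = (int) Math.ceil(currentReplicas * observedCpuPercent / targetCpuPercent);
        return Math.max(desired, 1); // never scale to zero in this sketch
    }

    public static void main(String[] args) {
        // 2 instances at 90% CPU with a 45% target -> scale out to 4 instances.
        System.out.println(desiredReplicas(2, 90.0, 45.0));
    }
}
```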

  • Lean cost

SAE eliminates the resource hosting and O&M costs that previously required customers to run large numbers of ECS servers, which can be expensive when security updates, bug fixes and especially high-density deployments are needed. In addition, SAE is billed by the minute, so users can achieve lean cost: for example, scaling out to 10 instances during a one-hour peak and back down to 2 instances after the peak.
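As a rough illustration of how minute-level billing plus elasticity adds up to lean cost, the sketch below compares instance-minutes for the peak/off-peak scenario above with keeping peak capacity running all day; the unit price is a made-up placeholder, not SAE's actual pricing.

```java
public class MinuteBillingExample {
    public static void main(String[] args) {
        // Hypothetical unit price; real SAE pricing depends on CPU/memory spec and region.
        double pricePerInstanceMinute = 0.002;

        // One-hour peak at 10 instances, remaining 23 hours at 2 instances.
        long peakInstanceMinutes = 10 * 60;
        long offPeakInstanceMinutes = 2 * 23 * 60;
        long elastic = peakInstanceMinutes + offPeakInstanceMinutes; // 3360 instance-minutes

        // Keeping 10 instances running all day would be 10 * 24 * 60 = 14400 instance-minutes.
        long alwaysOn = 10 * 24 * 60;

        System.out.printf("elastic: %.2f, always-on: %.2f%n",
                elastic * pricePerInstanceMinute, alwaysOn * pricePerInstanceMinute);
    }
}
```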

  • Language enhancements

In terms of elasticity, we have also made language-specific enhancements. For Java, for example, combining Alibaba's large-scale Java application practice, Alibaba's own JDK, Dragonwell 11, can make Java applications start 40% faster than other open source JDKs. We will explore more possibilities for other languages in the future.

3. Product form of PaaS

  • Application hosting

Application hosting is essentially the management of the application life cycle, including application release, restart, scaling, grayscale release and so on. It follows the same mental model people already have when using applications or other PaaS platforms, so the threshold to get started is very low.

  • Integration

There are hundreds of cloud products, and learning to use each one is an extra cost. Therefore, we have integrated the most commonly used cloud services, including basic monitoring, ARMS application monitoring, NAS storage and SLS log collection, to lower the threshold for using the product.

In addition, we have made extra microservice enhancements, including a hosted registry, graceful offline and microservice governance. Because microservices typically require a registry, SAE's built-in hosted registry lets users register their applications directly without having to purchase and maintain one themselves, further reducing barriers and costs.
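For illustration, registering a service with a Nacos-compatible registry usually takes little more than a discovery starter and a registry address. The sketch below assumes the open source Spring Cloud Alibaba Nacos discovery starter and uses a placeholder address; the exact protocol and endpoint of SAE's hosted registry are not specified here.

```java
// Minimal Spring Cloud service registering with a Nacos-compatible registry.
// Assumes spring-boot-starter-web and spring-cloud-starter-alibaba-nacos-discovery
// are on the classpath; the registry address below is a placeholder.
//
// src/main/resources/application.properties:
//   spring.application.name=user-center
//   spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class UserCenterApplication {
    public static void main(String[] args) {
        SpringApplication.run(UserCenterApplication.class, args);
    }
}
```

With a hosted registry, the only change for the application is where that registry address points; the registry itself no longer has to be purchased or operated by the user.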

SAE combines these capabilities to enable users to enjoy the technology dividend behind this product at zero threshold when migrating traditional monolithic or microservice applications.

Third, SAE technical architecture

1. SAE technical architecture diagram

The technical architecture behind the K8s that SAE hosts for users is shown in the figure above. The top layer is the SAE PaaS interface, the next layer is the K8s master (API server and so on), and the bottom layer is the hosts on which the K8s workloads actually run. All of this is fully hosted by SAE; Pod resources only need to be created inside the user's VPC or network segment and connected, and the application can run properly.

There are two core issues here:

First, anti-penetration, that is, preventing container escape. Our Pods or containers, if they used a traditional container technology such as Docker, would place public cloud user A and user B on the same physical machine, which carries a very high security risk: user B could potentially break into user A's container and obtain user information. So the key here is to prevent users from escaping the container.

The second is connectivity with the user's network and cloud ecosystem. We need to connect to the user's network so that the application can conveniently reach the user's security groups, security rules, RDS instances and so on. This is also a core issue.

2. Secure containers

This addresses the specific anti-escape problem. The table above lists the most widely discussed secure container technologies. A secure container can simply be understood as applying the virtual machine idea: with traditional containerization technology like Docker it is difficult to achieve strong security protection or isolation, whereas a secure container can be understood as a lightweight virtual machine that combines the startup speed of a container with the security of a virtual machine.

Today, secure containers have gone beyond security alone: they provide not only security isolation but also performance isolation and fault isolation. Take fault isolation as an example: with container technology like Docker, certain kernel problems can cause the failure of one Docker container to affect the host machine and thus other users; with secure container technology this does not happen.

SAE adopts the Kata secure container technology. Kata is, both in timing and as a matter of open source lineage, the merger of runV and Clear Containers, and it is more mature than Firecracker and gVisor.

Fourth, SAE best practices

Best practice 1: Low-barrier microservice architecture transformation

Customers familiar with microservices know that operating a microservice architecture yourself requires considering many factors, not just the open source framework but also resource-level and follow-up concerns, including the registry, distributed tracing, monitoring, service governance and so on. As shown on the left side of the figure above, in the traditional development model these capabilities all have to be hosted and operated by the users themselves.

With SAE, users can hand these non-business capabilities over to SAE and focus only on their own business, such as the user center and group center of their microservices, and by integrating with SAE's CI/CD tools they can quickly implement a microservice architecture.

Best practice 2: One-click start/stop of development and test environments to reduce costs and increase efficiency

Some medium and large enterprises have multiple sets of test environments, which are generally not used at night. In the ECS model these application instances have to be kept running all the time, and the cost of idle waste is relatively high.

In SAE, this can be combined with namespaces and the one-click or scheduled start/stop capability: all applications of the test environment are created under a test-environment namespace, and a rule is configured to, for example, start all instances in that namespace at 8:00 in the morning and stop them all at 8:00 in the evening. Once instances are stopped on schedule they are no longer billed, which lets users reduce costs to the greatest extent.

By our calculation, in the extreme case users can save roughly two thirds of the hardware cost without paying any additional O&M cost; they only need to configure the scheduled start/stop rules. For example, a test environment that runs only 12 hours on weekdays and stays fully stopped over the weekend is billed for about 60 of the 168 hours in a week.

Best practice 3: Precise capacity evaluation + extreme elasticity solution

During this year's epidemic, a large number of students studied online at home, and many customers in the online education sector saw business traffic grow to seven or eight times the normal level. With a self-operated ECS-based architecture, such users would have to upgrade their architecture in a very short time, not only the O&M architecture but also the application architecture, which is a very big challenge in terms of cost and effort.

It is much easier to rely on SAE's various integrations and the highly automated K8s platform underneath. For example, the PTS load-testing tool can be used to assess the capacity water level. If a problem shows up during load testing, basic monitoring and application monitoring, including call chains and diagnostic reports, can be combined to analyze where the bottleneck is and whether it can be resolved in a short time. If the bottleneck is hard to resolve, the Application High Availability Service can be used for traffic throttling and degradation, ensuring that services do not collapse under a sudden traffic peak.
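To make traffic throttling and degradation concrete at the code level, here is a minimal sketch based on the open source Sentinel flow-control library (assuming sentinel-core is on the classpath); the resource name and the 100 QPS threshold are arbitrary examples, and this is not necessarily how SAE's high availability integration is configured.

```java
import com.alibaba.csp.sentinel.Entry;
import com.alibaba.csp.sentinel.SphU;
import com.alibaba.csp.sentinel.slots.block.BlockException;
import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;
import java.util.Collections;

public class ThrottlingSketch {
    public static void main(String[] args) {
        // Limit the hypothetical "queryCourse" resource to 100 requests per second.
        FlowRule rule = new FlowRule();
        rule.setResource("queryCourse");
        rule.setGrade(RuleConstant.FLOW_GRADE_QPS);
        rule.setCount(100);
        FlowRuleManager.loadRules(Collections.singletonList(rule));

        Entry entry = null;
        try {
            entry = SphU.entry("queryCourse");
            // ... protected business logic would run here ...
        } catch (BlockException e) {
            // Request exceeded the threshold: degrade instead of crashing,
            // e.g. return a cached or simplified response.
            System.out.println("throttled, returning degraded response");
        } finally {
            if (entry != null) {
                entry.exit();
            }
        }
    }
}
```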

Finally, based on the capacity model obtained from load testing, SAE can be configured with the corresponding elasticity policies, such as CPU, memory, RT or QPS thresholds, together with scheduled policies, so that actual usage is matched at low cost while the architecture is upgraded to the greatest extent.

Fifth, summary

Digital transformation has penetrated all walks of life. Whether driven by the times or by the epidemic, in digital transformation enterprises need the ability to use the cloud well in order to cope with rapid business change and high-peak, high-traffic scenarios. This process includes several stages: Rehost (re-hosting), Re-platform (re-platforming) and Refactor (re-architecting). The deeper the architecture transformation, the more value an enterprise can obtain from the cloud, but the cost of migration and transformation also rises. If an application is simply re-hosted on the cloud, it is difficult to obtain the elastic capability of the cloud, and it is still hard to deal with problems in a timely manner.

Through SAE, we hope to enable users to enjoy the value dividend of Serverless + K8s + microservices with zero transformation, zero threshold and zero container base, and ultimately help users better face business challenges.

This article is compiled from the January 29 morning session of the Serverless live broadcast series. Replay link: developer.aliyun.com/topic/serve…

Serverless Ebook download

Book highlights:

  • Starting from the evolution of architecture, it introduces the Serverless architecture, technology selection and how to build Serverless thinking;
  • Understand the operating principles of the Serverless architectures popular in the industry;
  • Master 10 real Serverless cases that can be learned and applied right away.

Download link: developer.aliyun.com/topic/downl…