By Lo Ho
Component Architecture of the Serverless Application Engine
In the early days, software was built as a monolith: the application, its database, its storage, and so on were deployed together on a physical server. The problem with a monolith is that as the enterprise grows, scalability is poor and release velocity is low. Then came the era of microservices, whose mainstream frameworks were largely Java-based. Microservice architectures iterate quickly and scale well, but their resource footprint and cost are relatively high. As technology evolved, containerization accelerated the adoption of microservices. Still, microservices do not suit every enterprise: as system complexity grows, so do the development overhead and operations cost of a microservice architecture. Whether an enterprise chooses a monolith or microservices should depend on the complexity of its system.
With the growth of the public cloud, more and more users deploy their services to the cloud, and the deeper the cloud is used, the more pronounced its architectural advantages become. The first stage is Rehost: replace local physical servers with cloud hosts without changing the application. This "lift and shift" is the most basic way to use the cloud, and it does not extract the cloud's full value. The next stage is Replatform: replace self-built application infrastructure with managed cloud services while leaving the application essentially unchanged. Even this is not the best approach. Going further, the application itself can be rebuilt, i.e. Refactor: both the underlying infrastructure and the software architecture are restructured around microservices and containers to maximize the value of the cloud. In the long run this yields the largest overall benefit, although in the short run the migration cost is relatively high.
Cloud computing delivers the most value when applications are refactored and developed with cloud-native products and services. At the same time, this approach raises several concerns:
- Upfront cost (migration and refactoring);
- Degree of cloud vendor lock-in;
- Ease of use of the cloud (learning curve and maintenance);
- Security.
To address these, Alibaba Cloud launched Serverless Application Engine (SAE), a fully managed platform for applications and microservices. For example, Java microservices can currently be migrated to the cloud with zero code changes, with full microservice governance supported. Users who want to do containerized upgrades can use the platform as well.
Core Technologies of the Serverless Application Engine
What are the components of SAE, and how do its product capabilities fit together? Consider the component architecture. The part users need to focus on (green in the architecture diagram) is their business applications. Alongside them, SAE provides various tools, such as the Cloud Toolkit plug-in for deploying local code to the cloud, and pipeline capabilities that integrate with Yunxiao, Alibaba Cloud's DevOps service. The SAE platform itself (orange in the diagram) offers a range of capabilities. For example, in an e-commerce application, the storefront can be an independent service module that is iterated, developed, and managed separately. An elasticity policy can be configured for such front-end services, for instance automatically scaling out and back in with actual traffic during promotion peaks, which is a core value of SAE. SAE is a fully managed application platform providing resource management, application lifecycle management, and microservice governance. At the resource layer, SAE encapsulates a Kubernetes cluster; beneath Kubernetes sits the infrastructure, built on X-Dragon (Shenlong) bare-metal servers and secure containers. On top of these resources, SAE provides resource management and scheduling for users.
Now to SAE's core capabilities. First, consider the full process a traditional enterprise goes through to deploy an application: purchase ECS resources, build and initialize a cluster, and set up the environment; after R&D is complete, begin test deployment, and additionally deploy monitoring, logging, and other components; once all services are online, enter the maintenance phase, covering both resource and service operations. Using SAE eliminates many of these steps. First, the underlying Kubernetes cluster is maintained by the cloud vendor, so users simply submit an image or a JAR package to deploy to SAE. Second, monitoring and logging systems are provided by the platform, so users can focus solely on business logic without maintaining resources.
What if a user wants grayscale releases? SAE provides single-batch, multi-batch, and canary release strategies, and zero-downtime updates for applications deployed on the platform are provided by default.
For canary releases, where user demand is very strong, SAE can route by request content and split traffic by ratio. For example, a batch release can roll out to 50% of instances at a time and finish in two batches, while a canary release can also follow a precise traffic ratio.
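The two grayscale modes above can be sketched as a routing decision: a content-based rule (here, a hypothetical request header) always wins, and otherwise users are hashed into buckets so that a stable fraction of them reach the canary. The header name, ratio, and bucketing scheme are illustrative assumptions, not SAE's actual API.

```python
import hashlib

def route_request(user_id: str, headers: dict, canary_ratio: float = 0.1) -> str:
    """Decide whether a request goes to the canary or the stable version."""
    # Content-based rule: requests tagged as beta testers always hit the canary.
    if headers.get("x-user-group") == "beta":
        return "canary"
    # Ratio-based rule: hash the user ID so each user is routed consistently,
    # sending roughly `canary_ratio` of users to the canary version.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_ratio * 100 else "stable"
```

Hashing by user ID (rather than picking randomly per request) keeps a given user on one version for the whole grayscale window, which makes canary metrics easier to interpret.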
Customers on the platform also care deeply about elasticity, and SAE provides rich elasticity policies. Horizontal scaling can be triggered by basic monitoring metrics (CPU, memory) or business metrics (QPS, RT). This load-based autoscaling model generally suits applications with bursty or spiky traffic, such as online games and social platforms. The second model is scheduled elasticity, which suits applications with predictable peaks and troughs, such as catering and travel.
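The two elasticity models can be combined in one scaling decision: metric thresholds drive scale-out and scale-in, while a schedule guarantees a replica floor during known peak hours. All thresholds, the schedule, and the step sizes below are made-up illustrative values, not SAE defaults.

```python
from dataclasses import dataclass

@dataclass
class Metrics:
    cpu_percent: float   # average CPU utilization across instances
    qps: float           # requests per second per instance

def desired_replicas(current: int, m: Metrics, hour: int,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Illustrative autoscaling decision combining metric and scheduled models."""
    target = current
    # Metric-based model: scale out under load, scale in when mostly idle.
    if m.cpu_percent > 80 or m.qps > 1000:
        target = current * 2          # double on burst traffic
    elif m.cpu_percent < 20 and m.qps < 100:
        target = current // 2         # halve when idle
    # Scheduled model: keep a capacity floor during known peak hours
    # (e.g. meal times for a catering app).
    if 11 <= hour <= 13 or 17 <= hour <= 19:
        target = max(target, 10)
    # Clamp to the configured replica bounds.
    return max(min_replicas, min(target, max_replicas))
```

Real autoscalers also add cooldown windows and smoothing so that noisy metrics do not cause replica counts to flap; this sketch omits those for brevity.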
Can elastic efficiency keep up with this demand for elasticity? Normally, when an image is deployed to the platform, the system goes through resource scheduling, then creates the pod, pulls the user image, creates the container, and starts it. To improve this, SAE first implemented in-place upgrades: for an application upgrade release, the latest image is pulled onto the existing resources and deployed directly, avoiding pod re-creation and improving deployment efficiency by 42%.
Second, SAE implemented image acceleration, which improves elastic efficiency by about 30%: while the container is being created, image data is pulled on demand, reducing service startup time.
Third, SAE accelerates the startup of Java applications. With the Dragonwell JDK, a cache can be generated at JVM and process startup, and subsequent starts of the application load that cache to shorten startup time.
Finally, SAE provides monitoring and application diagnostics, letting users query service call chains, interface response times, GC counts, slow SQL queries, and more to locate problems quickly.
Best Practices for the Serverless Application Engine
Migrating microservices or applications to SAE takes only a few steps. For a monolith, you can package it as an archive and deploy it directly to the platform, but the monolith needs to separate its compute code from its database, with the compute part deployed to SAE. For microservice applications, you can write a Dockerfile, build an image, and push it to an image repository to complete deployment; alternatively, microservice applications can be deployed directly to SAE as JAR/WAR packages.
On cost reduction, SAE also introduced a one-click start/stop function: applications in a given environment can be started and stopped on a schedule. For example, when no one is using the test environment at night, it can simply be shut down to save cost.
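The scheduled start/stop idea above amounts to checking the current time against a running window. The weekday rule and the 08:00–20:00 window below are assumptions for illustration, not SAE's actual configuration.

```python
from datetime import datetime, time

# Hypothetical schedule: the test environment runs only on weekdays
# between 08:00 and 20:00; outside that window it is stopped to save cost.
WORK_START = time(8, 0)
WORK_END = time(20, 0)

def should_be_running(now: datetime) -> bool:
    """Return True if the test environment should be up at `now`."""
    if now.weekday() >= 5:            # Saturday/Sunday: keep it stopped
        return False
    return WORK_START <= now.time() < WORK_END
```

A scheduler would evaluate this periodically and start or stop the environment whenever the desired state changes.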
SAE provides a variety of tools and methods for building a DevOps pipeline. For CI/CD, you can choose Jenkins, which most enterprise users are familiar with, or cloud services such as Yunxiao. On the application side, you can configure scheduled start/stop and alarm monitoring on the SAE platform to complete service O&M.
For environment management and permission separation, which enterprise users care about greatly, SAE recommends using namespaces: applications in different namespaces cannot access one another. SAE also recommends using the Permission Assistant to generate per-team permissions scoped to namespaces or applications, so that different teams' applications are not visible to each other.
Users also ask how SAE compares with ECS and what enhancements it makes. The first is O&M-free full hosting; the second is one-stop application lifecycle management; beyond these come microservice governance and optimization, application monitoring, and other value-added experiences SAE provides.
Customer Cases for the Serverless Application Engine
The first case is Timing, an online course-learning app in the education field. It is a typical monolith that was refactored into microservices and migrated to the SAE platform. During the epidemic, Timing's traffic surged and the original architecture could no longer support the business, so the team began a microservice transformation. In the process, modules such as the user center, learning center, self-study center, and library center were split into independent services on SAE. Compared with self-built microservices on cloud hosts, this saved about 35% in cost.
Another example is iQIYI Sports, whose entire business is deployed on the SAE platform. In June and July this year, iQIYI Sports broadcast the European Championship and traffic was very high; after the sporting events, traffic fell back, so elasticity was especially important. SAE's elasticity capabilities saved significant O&M cost and increased capacity-expansion efficiency by 40%. Second, the built-in application monitoring improved troubleshooting efficiency by 30% when the business ran into problems. Overall, SAE helped iQIYI Sports improve resource utilization by nearly 50%.
As cloud computing develops, Serverless is expected to become the default computing paradigm of the cloud era, adopted by more and more enterprise customers.