Authors | Zhang Lei, Deng Hongchao
Recently, the AWS ECS team released an open source project named Amazon ECS for Open Application Model on GitHub. More and more vendors are exploring implementations of OAM. What is it about OAM that makes so many cloud vendors unite to embrace it?
Serverless and AWS
The term Serverless was first used in a 2012 article titled Why The Future of Software and Apps is Serverless. However, if you do some digging, you will find that the article was really talking about software engineering concepts such as continuous integration and version control, which are quite different from what Serverless means today: “scale to zero”, “pay as you go”, and FaaS/BaaS.
In 2014, AWS released a product called Lambda. The concept is simple: cloud computing is ultimately application-oriented, and when users want to deploy an application, they just need a place to write their own programs to perform specific tasks, regardless of which machine or virtual machine the application runs on.
It was the release of Lambda that took the “Serverless” paradigm to a whole new level. Serverless provides a new system architecture for deploying and running applications in the cloud, emphasizing that users need not care about complex server configurations, but only about their code and how it is packaged as a “runnable entity” that can be hosted by cloud computing platforms. It was with classic features such as scaling application instances based on actual traffic and billing by actual usage rather than pre-allocated resources that AWS gradually established the de facto standard in the Serverless space.
In 2017, AWS released the Fargate service, which extended the Serverless concept to container-based runnable entities. The idea was soon followed by Google Cloud Run and others, kicking off a wave of “next generation” container-based Serverless runtimes.
Serverless and the Open Application Model (OAM)
How does the Open Application Model (OAM) relate to AWS and Serverless?
First of all, OAM (Open Application Model) is a set of application description specifications (specs) jointly initiated by Alibaba Cloud and Microsoft and maintained by the cloud native community. The core philosophy of OAM is “application centric”: development and operations collaborate around a set of declarative, flexible, and extensible upper-level abstractions rather than working directly with complex and arcane infrastructure-level APIs.
For example, under the OAM specification, a containerized application that scales horizontally with the K8s HPA is defined by the following two YAML files.
YAML file prepared by R&D:
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: web-server
spec:
  # Details of the application to be deployed
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: Server
    spec:
      containers:
        - name: frontend
          image: frontend:latest
YAML file written by operations (or the PaaS platform):
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: helloworld
spec:
  components:
    - name: frontend
      # Operations capabilities required to run the application
      traits:
        - trait:
            apiVersion: autoscaling/v2beta2
            kind: HorizontalPodAutoscaler
            metadata:
              name: scale-hello
            spec:
              minReplicas: 1
              maxReplicas: 10
As you can see, under the OAM specification the concerns of development and operations are completely separated. Each side defines and publishes an application by writing only the small number of fields relevant to it, rather than complete K8s Deployment and HPA objects. This is exactly what “upper-level abstraction” means.
Once YAML files like the ones above are submitted to K8s, the OAM plug-in automatically converts them into the full Deployment and HPA objects that actually run.
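For intuition, here is a rough sketch of what the rendered objects could look like. The exact output depends on the platform's OAM implementation, so treat the field values below as illustrative rather than the plug-in's actual output:

# Illustrative only: a possible rendering of web-server + scale-hello,
# not the exact output of any particular OAM implementation
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
        - name: frontend
          image: frontend:latest
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: scale-hello
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-server
  minReplicas: 1
  maxReplicas: 10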
Because OAM standardizes a number of concepts for cloud native application delivery, such as “application”, “operational capability”, and “release boundary”, developers of application management platforms can use the specification to describe a wide variety of applications and operational strategies with much simpler YAML files. These YAML files are then mapped to actual K8s resources (including CRDs) by the OAM plug-in.
In other words, OAM provides a de facto standard for defining “upper-level abstractions”, and its most important function is to shield users from the unnecessary complexity of infrastructure details such as HPA, Ingress, containers, Pods, and Services. That is why, after its release, OAM was hailed as a “magic weapon” for building K8s application platforms.
But more importantly, the idea of “stripping infrastructure-level details out of application descriptions to provide the most developer-friendly upper-level abstractions” is consistent with Serverless's philosophy of removing infrastructure from view. In that sense, OAM is Serverless in nature.
Because of this, OAM received a lot of attention from the Serverless community as soon as it was released. AWS was, of course, no exception.
Ultimate experience: AWS ECS for OAM
At the end of March 2020, the AWS ECS team released an open source project called Amazon ECS for Open Application Model on GitHub.
Project address: github.com/awslabs/ama…
This project is the AWS team's attempt to support OAM on top of its Serverless services. Its underlying runtime is the Serverless container service mentioned earlier: Fargate. The AWS ECS for OAM project offers a very interesting developer experience. Let's take a look.
Preparation takes three steps, and it only needs to be done once (see the command sketch after the list):
1. Have local AWS account credentials, which can be generated by running the aws configure command with the official AWS CLI.
2. Build the project to get an executable called oam-ecs.
3. Run the oam-ecs env command to prepare the environment for later deployments. After this command completes, AWS automatically creates a VPC and the corresponding public and private subnets for you.
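A minimal sketch of the three steps as shell commands; the build step is an assumption based on a typical Go project layout, so check the project README for the exact target:

# 1. Store local AWS credentials
aws configure

# 2. Build the CLI from the project source (assumed build step; see the README)
make build          # produces the oam-ecs binary

# 3. Prepare the deployment environment (creates a VPC and public/private subnets)
oam-ecs env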
Once you are ready, you can define an OAM application YAML file locally (such as the helloworld example above) and then deploy a complete application, HPA included, on Fargate with a single command like the one below, after which it can be accessed directly from the public network:
oam-ecs app deploy -f helloworld-app.yaml
Simple, isn't it?
The official documentation for the AWS ECS for OAM project gives a more complex example, which we will walk through below.
The application we will deploy this time consists of three YAML files, again divided between development and operations concerns:
1. Written by R&D:
- server-component.yaml: the first component of the application, describing the application container we want to deploy.
- worker-component.yaml: the second component of the application, describing a looping job that checks network connectivity in the current environment.
2. Written by operations:
- example-app.yaml: the complete application, containing the component topology and the traits attached to each component. In particular, it describes a manual-scaler operations trait used to scale out worker-component.
So, the full app description of example-app.yaml looks like this:
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: example-app
spec:
  components:
    - componentName: worker-v1
      instanceName: example-worker
      traits:
        - name: manual-scaler
          properties:
            replicaCount: 2
    - componentName: server-v1
      instanceName: example-server
      parameterValues:
        - name: WorldValue
          value: Everyone
As you can see, it defines two components (worker-v1 and server-v1), and the worker-v1 component carries a manual scaling trait called manual-scaler.
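The two component files are not reproduced in full here. As a rough sketch only, under the v1alpha1 schema a component such as server-component.yaml generally looks like the following; the image, port, and environment values are illustrative and the actual files in the repo may differ:

# Illustrative sketch of a v1alpha1 component, not the repo's actual file
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: server-v1
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  parameters:
    - name: WorldValue        # consumed via parameterValues in example-app.yaml
      type: string
      required: false
  containers:
    - name: server
      image: nginx:latest     # placeholder image
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
      env:
        - name: WORLD
          fromParam: WorldValue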
All three YAML files in this demo can be found in this directory: github.com/awslabs/ama…
Deploying this more complex application is still a single command:
oam-ecs app deploy \
-f examples/example-app.yaml \
-f examples/worker-component.yaml \
-f examples/server-component.yaml
After the command completes, you can view the application's access information and DNS name with the oam-ecs app show command. Open the address in a browser and you can access the application directly.
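For example, the invocation below assumes app show takes the same -f file list as the deploy command; consult the project README for the exact usage:

# Assumed invocation: print the endpoint/DNS information for the deployed app
oam-ecs app show \
  -f examples/example-app.yaml \
  -f examples/worker-component.yaml \
  -f examples/server-component.yaml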
If you want to change the application configuration, such as updating the image or changing the replicaCount value, you just modify the YAML files and redeploy them; the whole process is declarative.
In fact, doing the same thing through the AWS Console would require jumping between at least five cloud product pages to complete the various configurations; alternatively, you would have to learn CloudFormation syntax and write a very, very long template to bring up the Fargate instances, load balancer, network, DNS configuration, and all the other resources the application needs.
With the OAM specification, however, defining and deploying the application above is not only extremely simple; the tedious cloud-service operations are turned into cleaner, friendlier declarative YAML files. And the work required to support the OAM spec here amounts to only a few hundred lines of code.
More importantly, when Serverless services like AWS Fargate are combined with developer-friendly application definitions like OAM, you truly feel that simplicity and a minimal mental burden are the ultimate experience Serverless brings to developers.
Finally: When the application model meets Serverless
The OAM model has drawn a strong response from the cloud native application delivery ecosystem. Alibaba Cloud's EDAS service has become the industry's first production-grade application management platform built on OAM and will soon launch a next-generation “application-centric” product experience. In the CNCF community, Crossplane, a well-known cross-cloud application management and delivery project, has also become an important adopter and maintainer of the OAM specification.
EDAS: help.aliyun.com/product/295…
Crossplane: github.com/crossplane/…
In fact, not just AWS Fargate: any Serverless service in the cloud computing ecosystem can easily use OAM as its presentation layer and application definition for developers, simplifying and abstracting away complex infrastructure APIs and upgrading what used to be complicated manual operations into Kubernetes-style, “one click” declarative application management. More importantly, thanks to OAM's high extensibility, you are not limited to deploying container applications on Fargate: you can also use OAM to describe functions, virtual machines, WebAssembly modules, and almost any workload type you can think of, then easily deploy them on Serverless services and even migrate them seamlessly between different cloud services. These seemingly “magical” capabilities are the spark of innovation that appears when a standardized, extensible application model meets a Serverless platform.
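To make the extensibility point concrete, here is a purely hypothetical sketch of an OAM Component wrapping a function workload. The Function kind, its API group, and all of its fields are invented for illustration; they are not part of the OAM core spec or any particular platform:

# Hypothetical example: every field of the Function workload below is invented
# for illustration; OAM only requires that the workload be a well-formed resource
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: image-resizer
spec:
  workload:
    apiVersion: functions.example.dev/v1alpha1   # hypothetical workload definition
    kind: Function
    spec:
      runtime: nodejs12
      handler: index.resize
      codeUri: s3://my-bucket/resizer.zip         # hypothetical code location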
“Application model + Serverless” has gradually become one of the hottest topics in the cloud native ecosystem. You are welcome to join the CNCF Application Delivery SIG (SIG App Delivery) and help push the cloud computing ecosystem toward an “application-centric” future!
AWS ECS for OAM project: github.com/awslabs/ama…
Open Application Model project: github.com/oam-dev/spe…
CNCF SIG App Delivery: github.com/cncf/sig-ap…
The OAM specification and model already solve many real problems, but their journey has only just begun. OAM is a neutral open source project, and we welcome more people to join us in defining the future of cloud native application delivery.
How to participate:
- Scan the QR code with DingTalk to join the OAM project's Chinese discussion group
- Participate directly in the discussion via Gitter
- Visit the OAM open source implementation repository
- And don't forget to give it a star
About the authors
Zhang Lei, Senior Technical Expert at Alibaba Cloud, is one of the maintainers of the Kubernetes project. He currently works on Alibaba's Kubernetes team, whose work covers Kubernetes and cloud native application management systems.
Deng Hongchao, Technical Expert on the Alibaba Cloud Container Platform team, is a former CoreOS engineer and one of the core authors of the K8s Operator project.
“Alibaba Cloud Native focuses on microservices, Serverless, containers, Service Mesh, and other technical fields, follows popular cloud native technology trends and large-scale cloud native adoption in practice, and aims to be the public account that best understands cloud native developers.”