In the past 24 hours, my WeChat account “ebook” has handled about 8,000 requests:
The majority of requests completed within 200ms, and during the initial flood (nearly 1,500 requests in the first 10 minutes after the push), the average response time was 50ms.
This also shows that Serverless is quite reliable. Interestingly, the response time actually got faster as more requests came in, which is counterintuitive: normally, response times get slower and slower as request volume grows.
There is no doubt that microservices have become a very popular architectural style in recent years; they have been gaining traction since 2014, as shown below:
Serverless began to attract developers’ attention in 2016, and judging by its trajectory, in two years it is likely to be where microservices are today. As you can see, it is an architecture with considerable potential.
What is the Serverless architecture?
To find out what Serverless really is, I tried example after example and built four or five applications of my own, until I had a general picture of Serverless.
Virtualization and Isolation
To ensure that the development environment behaves consistently (that is, that bugs are not caused by environmental differences), developers have come up with a series of isolation techniques: virtual machines, container virtualization, language virtual machines, application containers (such as Tomcat in Java), virtual environments (such as virtualenv in Python), and even language-independent DSLs.
Since the earliest physical servers, we have been abstracting or virtualizing them.
- We use virtualization technologies such as Xen and KVM to isolate the hardware from the operating systems that run on top of it.
- We use cloud computing to further automate the management of these virtualized resources.
- We use container technologies like Docker to isolate the application’s runtime environment from the server’s operating system.
Now that we have Serverless, we can isolate away even the operating system and the underlying technical details.
Why did it take 1,000 gigabytes?
Let me briefly explain the title: why did it take me 1,000 gigabytes to finally figure out what Serverless is? And what exactly is Serverless?
In practice, I used AWS Lambda as the compute engine behind my Serverless service. AWS Lambda is a Function-as-a-Service (FaaS) computing service. To put it simply, developers write functions that run on the cloud; the cloud provider supplies the operating system, runtime environment, gateway, and so on, and we just need to focus on writing our business code.
Yes, you heard that right: we only need to think about how the code provides value. We don’t even have to worry about scalability or blue-green deployment; Amazon’s brilliant operations engineers have built that infrastructure for us. And like traditional AWS services such as Elastic Compute Cloud (EC2), Lambda is billed by usage.
So again, how do you charge for a function? What would it cost me to run a hello world on a Lambda function?
If you charge for each function execution, only a few factors can reasonably matter: running time, CPU, memory footprint, and hard disk. Providing different CPUs for different needs would be cumbersome, and for code, the amount of disk space an application takes up is negligible (these apps keep a backup on your S3 anyway). So what AWS bills on is running time + memory.
When the program runs, AWS reports the time and memory used as follows:
REPORT RequestId: 041138f9-bc81-11e7-aa63-0dbab83f773d Duration: 2.49 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 20 MB
Memory Size is the package size we chose, Duration is the running time, and Max Memory Used is the memory our application actually used while it ran. Based on our Max Memory Used value and the amount of computation in the application, we can easily work out which package we need.
So if we take a 1024 MB package and each run takes 1 s (that is, 10 × 100 ms billing units), then 320 runs use 320 GB-seconds of computation. The running time is rounded up to the nearest 100 ms, so even a 2.49 ms run is billed as 100 ms. At $0.00001667 per GB-second, the cost we have to pay is 320 × 0.00001667 = $0.0053344; converted into RMB, that is roughly 0.036 RMB, well under four cents.
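The billing arithmetic above can be sketched as a small Python helper. The price constant is the per-GB-second rate quoted in this article, not necessarily today’s rate, so treat it as illustrative:

```python
import math

# Per-GB-second Lambda rate used in this article's example (an assumption
# for illustration; check the current AWS pricing page for real numbers).
PRICE_PER_GB_SECOND = 0.00001667

def lambda_cost(memory_mb, duration_ms, invocations):
    """Estimate Lambda compute usage and cost: each invocation's duration
    is rounded up to the nearest 100 ms, then weighted by memory in GB."""
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000) * invocations
    return gb_seconds, gb_seconds * PRICE_PER_GB_SECOND

gb_s, usd = lambda_cost(1024, 1000, 320)  # 320 one-second runs at 1024 MB
print(gb_s, usd)                          # 320.0 GB-seconds, ~$0.0053344
```

Note how the rounding matters: a 2.49 ms run on the same package still bills as a full 100 ms, i.e. 0.1 GB-seconds.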
If we had started with a 128 MB package instead, 2,000 such runs would be 250 GB-seconds of computation.
If we had started with a 128 MB package, 8,000 runs would be 1,000 GB-seconds of computation.
However, as shown in the table above, AWS offers a Lambda free tier (available indefinitely to both existing and new users) that includes 1 million free requests per month and 400,000 GB-seconds of compute time per month. That means that, for a long while, we don’t have to spend a penny.
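To make the free-tier claim concrete, here is the arithmetic for the workload described at the top of the article, assuming each run bills at a full 1 s:

```python
# Lambda free tier as described above: 1M requests and
# 400,000 GB-seconds of compute per month.
FREE_REQUESTS_PER_MONTH = 1_000_000
FREE_GB_SECONDS_PER_MONTH = 400_000

# The article's workload: 8,000 runs on a 128 MB package, 1 s each.
requests = 8_000
gb_seconds = (128 / 1024) * 1.0 * requests  # 0.125 GB * 1 s * 8000 runs

within_free_tier = (requests <= FREE_REQUESTS_PER_MONTH
                    and gb_seconds <= FREE_GB_SECONDS_PER_MONTH)
print(gb_seconds, within_free_tier)         # 1000.0 True
```

At 1,000 GB-seconds, the workload uses only a quarter of one percent of the monthly compute allowance.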
What is Serverless?
From the previous section, we can draw the following points:
- In Serverless applications, developers just need to focus on the business and don’t need to worry about operations and maintenance
- Serverless is truly on demand, running only when the request comes in
- Serverless is paid by running time and memory
- Serverless applications rely heavily on specific cloud platforms, third-party services
Of course, these points are still rather abstract.
AWS official introduction to Serverless is as follows:
A Serverless architecture is an Internet-based system in which application development does not use regular server processes. Instead, it relies only on a combination of third-party services (such as the AWS Lambda service), client-side logic, and service-hosted remote procedure calls.
In an AWS-based Serverless application, the application consists of:
- The Gateway API Gateway accepts and processes thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and more
- The compute service Lambda does all the computations related to the code, such as authorization validation, requests, output, and so on
- Infrastructure management CloudFormation creates and configures AWS infrastructure deployments, such as the names of S3 buckets to be used
- Static storage S3 serves as a repository for front-end code and static resources
- Database DynamoDB to store application data
- and so on
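As a sketch of how thin the Lambda piece of such an application can be, here is a minimal Python handler in the API Gateway proxy-integration format. The event shape follows the standard proxy integration; the greeting logic is purely illustrative and not code from this article:

```python
import json

def handler(event, context):
    """Minimal Lambda handler behind API Gateway (proxy integration):
    read a query-string parameter and return a JSON response."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

API Gateway routes the HTTP request to this function and translates the returned dict back into an HTTP response; everything else (scaling, the server, the OS) is the platform’s problem.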
Take a blogging system as an example. When we visit a blog, it is just a GET request, and S3 provides us with the front-end static resources and the corresponding HTML.
And when we create a blog:
- Our request first arrives at the API Gateway, and the API Gateway meter increments by 1
- The request then reaches Lambda for data processing, such as ID generation and setting the creation time, and the Lambda meter increments by 1
- After Lambda completes its computation, it stores the data in DynamoDB, and the DynamoDB meter increments by 1
- Finally, we generate the static blog to S3, which charges only for the storage actually used.
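The create-a-blog flow above can be sketched as the core Lambda logic. The helpers `put_item` and `put_object` are hypothetical stand-ins for the boto3 DynamoDB and S3 calls, injected as parameters here so the logic stays self-contained and testable:

```python
import time
import uuid

def create_post(title, body, put_item, put_object):
    """Core of the 'create blog post' Lambda: generate an id and creation
    time, store the record (DynamoDB in a real deployment, via put_item),
    then write the rendered static page (S3, via put_object)."""
    post = {
        "id": uuid.uuid4().hex,
        "title": title,
        "body": body,
        "created_at": int(time.time()),
    }
    put_item(post)                                # -> DynamoDB put_item
    html = f"<h1>{title}</h1>\n<p>{body}</p>"
    put_object(f"posts/{post['id']}.html", html)  # -> S3 put_object
    return post
```

In a real deployment the two callables would wrap `boto3` clients; the function itself stays a small, pure piece of business logic, which is exactly the point of the architecture.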
In the process, we used a stable set of cloud services that were billed only as they were used. Since these services can be called naturally and easily, we really only need to focus on our Lambda functions and how to use these services throughout the development process.
Therefore, Serverless does not mean that there is no server, just that the server exists as a third-party service with a specific function.
Of course, you don’t have to use these cloud services, such as AWS, to be Serverless. For example, my colleague used an IFTTT + WebTask + GitHub Webhook stack in Serverless Practice: Building a Personal Reading Tracking System. Serverless simply means that some of the services in your application directly use third-party services.
In this case, the layers of the system may each become a service of their own. In today’s mainstream microservice designs, each domain or subdomain is a service; in a Serverless application, these domains and subdomains may be further split into individual Serverless functions according to their capabilities.
However, these services and functions are more granular than before.
Excerpt from Phodal/Serverless