The advantages of Serverless

The most convenient thing about developing AWS Serverless applications with the Serverless Framework is that the first deployment is no different from the second or third. Just run serverless deploy, and a few minutes later our code is up and running. With a traditional AWS application, I would need to SSH into my server to write an automated deployment script, and I would also have to worry about which user the process runs as.

Besides being easy to deploy, the price is reasonable. My AWS EC2 instance runs my blog, along with a few other services. My blog only gets about 500 visits a day, though, so the server sits idle most of the time. That feels wasteful, whereas a Serverless service that charges only when code runs has no such problem. I feel free to use these services. There are, of course, other significant advantages.

Reduce start-up costs

When we develop a web application as a company, we need a set of basic services during development: a version control server, a continuous integration server, test servers, a repository for application releases, and so on. Once the application is live, we need a capable database server to handle the request volume. And when our application targets ordinary consumers, we also need:

  • A mail service, for registration confirmations, reminders, and other notifications
  • An SMS service (to comply with real-name registration regulations), for user actions such as registration and login

For big companies, this is ready-made infrastructure. But for startups, these are start-up costs.

Reduce operating costs

Startups don’t have that infrastructure, the money to build it, or, in many cases, the ability to build it. Adopting cloud services is often the best option and can save a lot of money, letting them focus on creating products that are valuable to users. A startup that goes to the cloud instead of building its own servers has more time to develop business features instead of maintaining infrastructure, and it only pays for the software while it runs.

The biggest difference between Serverless function computing and a cloud server is that a cloud server must run all the time, while function computing is on demand: a function runs only when a request comes in. No requests, no charges.

The number of users tends to grow slowly at the beginning of a project, yet we tend to size servers for the number of users we might eventually have, wasting money on unused capacity. A Serverless application, by contrast, easily absorbs a sudden burst of users: you just adjust the database configuration and redeploy.

Reduce development costs

A capable Serverless service provider offers a whole range of complementary services. That means you just write the table name in the configuration file, and your data is stored in the corresponding database. If the provider also supplies a set of function templates, then all we need to write is our configuration. All of this can be done automatically and efficiently.
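With the Serverless Framework, this might look like the following serverless.yml fragment; it is a sketch, the table and attribute names are illustrative, and the exact resource syntax depends on the provider:

```yaml
# serverless.yml (fragment): the table name lives in configuration,
# and the function receives it through an environment variable.
provider:
  name: aws
  runtime: nodejs6.10
  environment:
    TABLE_NAME: blog-posts

resources:
  Resources:
    BlogPostsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: blog-posts
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```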

In this case, using a cloud service is as simple as calling one of the system’s native APIs.

Of course, designing applications to be stateless can be a challenge early on. In addition, a system as large as AWS is not easy for novice programmers to digest.

Getting online fast

For a web project, getting started means a series of “Hello, world” milestones: setting up the environment locally gives us a local Hello World, and deploying the application to the development environment gives us a deployable Hello World. They look a little different, but in general: it works!

The advantages of Serverless in deployment make it easy to get online.

Faster deployment pipeline

In fact, Serverless applications have a deployment advantage because automated deployment is effectively built in: we are already building up deployment capability as we develop the application.

In everyday development, achieving automated deployment means deploying manually first in order to work out a correct deployment configuration, such as a Docker Dockerfile or an Ansible playbook. On top of that, we need to design blue-green deployment and so on.

For Serverless applications built on function computing, these are capabilities the vendor provides. After writing code, it is enough to run sls deploy. On a function-compute platform such as AWS Lambda, a function is typically ready to be invoked within seconds of being uploaded.

This means we can develop applications from a template, just as we do every day, and complete the first deployment within minutes of cloning the code.

The only difficulty may be choosing the right service configuration, such as which DynamoDB throughput level to use and how much memory to allocate to each Lambda function.

Faster development

Thanks to the Serverless service provider, a number of basic services are in place. As developers, we just need to focus on how best to implement the business, not on technical limitations.

The service provider has prepared and tested this suite of services for us; they are basically stable and reliable and rarely run into major problems. In fact, when our code is robust enough, say covered by tests and backed by continuous integration, we can deploy straight to production on every push. And if that is not necessary, we can simply add a pre-deploy hook that runs automated tests before releasing the new version.
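One way to wire in such a hook is a tiny Serverless Framework plugin, sketched below. The hook name follows the framework’s before:deploy:deploy convention; the npm test command is an assumption about the project’s setup.

```javascript
// A sketch of a minimal Serverless Framework plugin that runs the test
// suite before every deploy. Registered under "plugins" in serverless.yml.
class PreDeployTests {
  constructor(serverless) {
    this.serverless = serverless;
    this.hooks = {
      'before:deploy:deploy': this.runTests.bind(this),
    };
  }

  runTests() {
    const { execSync } = require('child_process');
    this.serverless.cli.log('Running tests before deployment...');
    // execSync throws on a non-zero exit code, aborting the deploy.
    execSync('npm test', { stdio: 'inherit' });
  }
}

module.exports = PreDeployTests;
```

With this in place, a failing test suite simply stops the deployment.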

In this process, we don’t have to worry too much about release.

Higher system security

In my experience of maintaining my blog, it is not easy to keep a server running all the time. Crackers attack your website when you least expect it, and you need to guard against many kinds of attack. On my server, for example, attackers constantly try to log in with passwords, even though the blog’s server accepts only key-based login. After one bizarre wave of login attempts, my SSH daemon crashed, and I could only restart the server from the EC2 console.

With Serverless, I no longer have to worry about someone trying to log in to the system because I don’t even know how to log in to the server.

I no longer need to worry about the underlying security of the system. Every time I logged into AWS EC2, I had to update software. Whenever a vulnerability was reported in some piece of software, as with OpenSSH in the past, I would log in to check the version and update it. That takes a lot of time and effort and brings no real benefit.

The only thing left to worry about might be a DDoS attack (“Could Zombie Toasters DDoS My Serverless Deployment?”). At about $0.20 per million requests, 360 million requests per hour would cost about $72.

A good fit for microservice architecture

As we’ve seen in recent years, microservices have not replaced monolithic applications in large numbers; after all, replacing an old system with a new architecture rarely makes business sense. For many enterprises there is no strong need or urgency here: survival is the more pressing matter.

Serverless is a natural complement to a microservices architecture. A Serverless application has its own gateway, database, and interface, and you can develop services in your preferred language (within the provider’s limits). In other words, a Serverless application can be a perfect instance of a microservice.

Within a year or two, Serverless will replace some components and services in some systems.

Automatic scaling

Behind Serverless is FaaS (Function as a Service) such as AWS Lambda.

For traditional applications, handling more requests means deploying more instances, and by the time we do, it is often too late. With FaaS we don’t need to: FaaS scales automatically, starting as many instances as needed without lengthy deployment and configuration delays.

This depends on our services being stateless, so new instances can be spun up freely.

The problems of Serverless

Like any architecture, Serverless also has problems that we need to face.

Not suitable for long-running applications

A Serverless function runs only when a request comes in. When the application is idle, it effectively goes to sleep, and the next request pays a startup delay known as a cold start. To mitigate this, you can use a CRON-style schedule, such as CloudWatch scheduled events, to wake your application up periodically.

If your application needs to run continuously for long periods and handle a large volume of requests, the Serverless architecture may not suit you; in that case, a cloud server such as EC2 is often the better choice, simply because it is cheaper.

To quote a comment by Lu Zou on “It Took Me 1,000 Gigabytes to Finally Figure Out What Serverless Is (Part 1): What Is the Serverless Architecture?”:

EC2 is the equivalent of buying a car; Lambda is the equivalent of renting one.

Renting a car long-term is certainly more expensive than buying one, but you save on maintenance. So this trade-off is genuinely worth calculating.
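That calculation can be made concrete. A back-of-the-envelope sketch in which all prices are assumptions for illustration (roughly: $0.20 per million Lambda requests, about $0.0000167 per GB-second, and about $0.012 per hour for a small EC2 instance):

```javascript
// Rough monthly cost comparison: pay-per-use Lambda vs. an always-on
// EC2 instance. Prices here are illustrative assumptions, not quotes.
const LAMBDA_PER_MILLION_REQUESTS = 0.2; // USD
const LAMBDA_PER_GB_SECOND = 0.0000167;  // USD
const EC2_PER_HOUR = 0.012;              // USD, small instance

function lambdaMonthlyCost(requestsPerMonth, avgDurationMs, memoryGb) {
  const requestCost = (requestsPerMonth / 1e6) * LAMBDA_PER_MILLION_REQUESTS;
  const gbSeconds = requestsPerMonth * (avgDurationMs / 1000) * memoryGb;
  return requestCost + gbSeconds * LAMBDA_PER_GB_SECOND;
}

function ec2MonthlyCost() {
  // The instance bills around the clock, requests or not.
  return EC2_PER_HOUR * 24 * 30;
}

// A low-traffic blog: ~500 requests a day, 200 ms per request at 128 MB.
const lambdaCost = lambdaMonthlyCost(500 * 30, 200, 0.128);
const ec2Cost = ec2MonthlyCost();
// Under these assumptions, Lambda comes to under one cent a month,
// while the always-on instance costs several dollars.
```

The break-even point shifts quickly as traffic grows, which is exactly why the rent-versus-buy question deserves real numbers.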

Complete dependence on third-party services

Yes, when you decide to use a cloud service, you may be heading down a one-way street. In that case, only non-essential APIs should be placed on Serverless.

Serverless is not a good choice when you already have a lot of infrastructure. Adopting the Serverless architecture binds us to a particular service provider: we use AWS’s own services, so migrating to Google Cloud would not be easy.

Migrating would mean modifying a lot of underlying code, and the usual remedy is to build an isolation layer. This means that when designing your application, you need to:

  • Isolate the API gateway
  • Isolate the database layer (given the lack of mature cross-vendor ORM tools, you may need to support both Firebase and DynamoDB yourself)
  • And so on
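The database-isolation point can be sketched as a thin repository interface: business logic talks only to the interface, and each vendor gets its own adapter. The names below are hypothetical; a DynamoDBStore or FirebaseStore would implement the same put/get pair using the vendor SDK.

```javascript
// A minimal isolation layer. Callers depend only on the put/get
// interface, so swapping DynamoDB for Firebase means writing one
// new adapter rather than touching business code.
class InMemoryStore {
  constructor() {
    this.items = new Map();
  }
  async put(id, item) {
    this.items.set(id, item);
  }
  async get(id) {
    return this.items.get(id);
  }
}

// Business logic: unaware of which storage backend is underneath.
async function savePost(store, post) {
  await store.put(post.id, post);
  return store.get(post.id);
}
```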

These measures bring extra costs of their own and may create more problems than they solve.

Cold start time

As mentioned above, Serverless applications suffer from cold start time.

According to the New Relic blog post “Understanding AWS Lambda Performance: How Much Do Cold Starts Really Matter?”, AWS Lambda cold start times look like this:

(Figure: AWS Lambda startup time)

Or take the request response times I measured earlier:

(Figure: Serverless request time)

This cold start time is under 50 ms in most cases, but that figure applies to Node.js applications; Java and C#, which run on virtual machines, may not be so lucky.

Lack of debugging and development tools

When working with the Serverless Framework, I ran into exactly this problem: a lack of debugging and development tools. Later I discovered a series of plugins such as serverless-offline and dynamodb-local, and things improved somewhat.

Even so, working with logs remains a formidable challenge.

Every time you debug, you have to upload the code again, and every upload is effectively a server deployment. Then, infuriatingly, I couldn’t pinpoint the problem quickly, so I changed the code, added a line of console.log, and deployed again. Once the problem was resolved, I deleted the console.log and deployed yet again.

Later, I learned my lesson and adopted a Node.js logging library, Winston (comparable to Log4j), which supports leveled logging: error, warn, info, verbose, debug, and silly.
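The idea behind those levels is simple severity filtering; here is a minimal sketch of it without the library (Winston itself adds transports, formatting, and much more):

```javascript
// A tiny leveled logger in the spirit of Winston's npm levels:
// messages above the configured threshold are filtered out.
const LEVELS = { error: 0, warn: 1, info: 2, verbose: 3, debug: 4, silly: 5 };

function createLogger(level) {
  const threshold = LEVELS[level];
  // Returns the formatted line, or null when the message is filtered.
  const log = (lvl, msg) => (LEVELS[lvl] <= threshold ? `${lvl}: ${msg}` : null);
  return {
    error: (m) => log('error', m),
    warn: (m) => log('warn', m),
    info: (m) => log('info', m),
    debug: (m) => log('debug', m),
  };
}

const logger = createLogger('info');
// logger.error('boom') yields "error: boom"; logger.debug('noise') yields null.
```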

Complex builds

Serverless is cheap, but that doesn’t mean it’s easy.

Early on, after learning about AWS Lambda, I wanted to try it out. But CloudFormation was too much for me: its configuration is complex and hard to read and write (JSON format).

Given CloudFormation’s complexity, it was only after encountering the Serverless Framework that I regained some confidence.

The Serverless Framework’s configuration is simpler and written in YAML. At deployment time, the framework generates a CloudFormation configuration from ours.

In the earlier article on persisting statistics data to S3 with Kinesis Firehose, we walked through a Serverless Framework configuration. That configuration is a bit more complicated than a typical Lambda configuration, yet it is still not a production configuration; real application scenarios are far more complicated.

Language versions lag behind

When Node.js 6 came out, AWS Lambda only supported Node.js 4.3.2; by the time Node.js 9.0 came out, AWS Lambda had only reached 6.10.3.

AWS Lambda supports the following runtime versions:

  • Node.js – v4.3.2 and 6.10.3
  • Java – Java 8
  • Python – Python 3.6 and 2.7
  • .NET Core – .NET Core 1.0.1 (C#)

For Java and Python, those versions are probably adequate; I can’t speak for C#. The Node.js version is clearly dated (the current release is Node.js 9.2.0), though that may have something to do with the flood of releases the front-end world inherited from Chrome’s release pace.

Excerpt from Serverless Architecture Application Development Guide