Author: Jiang Hang Github: github.com/nodejh

Serverless has been getting more attention recently. It may seem to have little to do with the front end, but it actually has a long history with front-end development and will have a transformative impact on the front-end development model. Based on my own understanding and experience, this article discusses the Serverless front-end development model from three angles: the evolution of front-end development, cases of Serverless-based front-end development, and best practices for Serverless development. I also had the honor of sharing this topic at QCon 2019.

The evolution of the front-end development pattern

Let's start by reviewing the evolution of the front-end development model, which I think falls into four main stages:

  1. Dynamic pages based on template rendering
  2. Ajax-based separation of front and back ends
  3. Front-end engineering based on Node.js
  4. Full stack development based on Node.js

Dynamic pages based on template rendering

In the early Internet age, web pages were simple static or dynamic documents, mainly for displaying and spreading information. Developing a web page at that time was easy: dynamic templates were written with technologies such as JSP and PHP, the web server parsed the templates into HTML files, and the browser was only responsible for rendering that HTML. At this stage there was no division of labor between front end and back end; back-end engineers usually wrote the front-end pages as a side task.

Ajax-based separation of front and back ends

AJAX was formally named in 2005, opening a new chapter of web development. With AJAX, the Web could be split into a front end responsible for interface and interaction, and a back end responsible for business logic, with the two sides communicating through interfaces. We no longer needed to write hard-to-maintain HTML inside various back-end languages, and the complexity of web pages shifted from the back-end web server to browser-side JavaScript. That is how the position of front-end engineer came into being.

Front-end engineering based on Node.js

The release of Node.js in 2009 was another historic moment for front-end engineers. Along with Node.js came the CommonJS module specification and the npm package management mechanism, and a series of Node.js-based front-end development and build tools, such as Grunt, Gulp, and webpack, soon followed.

Around 2013, the first versions of the three major front-end frameworks React/Angular/Vue.js were released, and we could move from page-by-page development to component-based development. After development is complete, tools such as webpack bundle and build the code, and Node.js-based command-line tools publish the build output online. Front-end development became standardized and engineered.

Full stack development based on Node.js

Another important thing Node.js did for the front end is that JavaScript, which used to run only in the browser, can now run on the server, so front-end engineers can write server-side code in the language they know best. As a result, front-end engineers began to use Node.js for full-stack development, turning from front-end engineers into full-stack engineers. This is the front end pushing its own boundaries.

On the other hand, as the front end evolved, so did the back end. Around the time Node.js was born, back ends generally began to shift from monolithic applications to microservice architectures, which further changed the division of labor between front and back ends. With microservices, back-end interfaces became atomic: they no longer mapped directly to pages, so calling them from the front end became complicated. Backend For Frontend (BFF) addresses this by adding a BFF layer between the microservices and the front end; the BFF aggregates and trims the interfaces before exposing them to the front end. The BFF layer is not core back-end work, yet it sits closest to the front end and matters most to it, so front-end engineers naturally chose Node.js to implement it. This is also where Node.js is most widely used on the server side today.

Next generation front-end development patterns

As you can see, every change in the front-end development model has been triggered by a transformative technology: first AJAX, then Node.js. So what is the next transformative technology? It goes without saying: Serverless.

Front-end solution in Serverless services

Introduction of Serverless

As defined by the CNCF, Serverless refers to the concept of building and running applications that do not require server management. (serverless-overview)

Serverless computing refers to the concept of building and running applications that do not require server management. — CNCF

Serverless is already connected to the front end, even if we are not aware of it. For example, after we publish static resources to a CDN, we do not need to care how many nodes the CDN has, how they are distributed, how load balancing is done, or how network acceleration is achieved; to the front end, a CDN is Serverless. The same goes for object storage: like a CDN, we only need to upload files and use them, without caring how the files are stored or how permissions are controlled. Even some third-party API services are Serverless, because we do not need to care about any server when using them.

Of course, an intuitive impression is not enough; we still need a more precise definition. From a technical point of view, Serverless is a combination of FaaS and BaaS.

Serverless = FaaS + BaaS.

To put it simply, FaaS (Function as a Service) is a platform for running functions, such as Alibaba Cloud's Function Compute and AWS Lambda.

Backend as a Service (BaaS) refers to back-end cloud services, such as cloud databases, object storage, and message queues. With BaaS, application development can be greatly simplified.

Serverless can be understood as: functions running on FaaS, using BaaS.

The main features of Serverless are:

  • Event-driven
    • On a FaaS platform, functions need events to drive their execution.
  • Stateless
    • Each function execution may use a different container, so memory and data cannot be shared between executions. To share data, you have to go through third-party services such as Redis.
  • No ops
    • With Serverless we do not need to care about servers or about operations and maintenance; this is the heart of Serverless thinking.
  • Low cost
    • Serverless is cheap because we only pay for each function execution. If a function does not run, it costs nothing and wastes no server resources.

Architecture of front-end solutions in Serverless services

The figure above shows some of the major Serverless services and their corresponding front-end solutions.

Going from the bottom up: first, infrastructure and development tools.

The infrastructure is mainly provided by cloud computing vendors, including cloud computing platforms and various BaaS services, as well as FaaS platforms running functions.

The front end is mainly a consumer of Serverless, so for the front end the most important layer is development tools: we rely on them for Serverless development, debugging, and deployment.

Frameworks

Currently there is no unified Serverless standard, and the Serverless services provided by different cloud platforms can differ, which means our code cannot be migrated between them smoothly. One main job of a Serverless framework is to simplify Serverless development and deployment; the other is to hide the differences between Serverless services, so that a function can run on another Serverless service with no changes, or only minor ones.

Common Serverless frameworks include Serverless Framework, ZEIT Now, and Apex. However, these are basically built by foreign companies; there is no comparable platform in China yet.

Web IDE

The Web IDEs closely associated with Serverless are mainly the ones offered by the cloud platforms themselves. With a Web IDE, we can easily develop and debug functions in the cloud and deploy them directly to the corresponding FaaS platform, which avoids installing development tools and configuring environments locally. Common Web IDEs include AWS Cloud9, Alibaba Cloud's Function Compute Web IDE, and Tencent Cloud's Cloud Studio. Among them, AWS Cloud9 offers the best experience.

Command-line tools

Of course, the main development method is still local development, so command-line tools for local Serverless development are also essential.

There are two main kinds of command-line tools. One kind is provided by the cloud platforms themselves, such as aws for AWS, az for Azure, and fun for Alibaba Cloud. The other kind comes with the Serverless frameworks, such as serverless and now.

Most of these tools, such as serverless and fun, are implemented in Node.js.

Here are some examples of command line tools.

Create

# serverless
$ serverless create --template aws-nodejs --path myService
# fun
$ fun init -n qcondemo helloworld-nodejs8

Deploy

# serverless
$ serverless deploy
# fun
$ fun deploy

Debug

# serverless
$ serverless invoke [local] --function functionName
# fun
$ fun local invoke functionName

Application scenarios

One layer above the development tools are vertical application scenarios for Serverless. Besides traditional server-side development, Serverless is currently used for mini program development, and in the future it may also be applied to fields such as the Internet of Things (IoT).

Comparison of different Serverless services

The figure above compares different Serverless services in terms of supported languages, triggers, price, and other aspects. You can see both differences and commonalities.

For example, almost all Serverless services support languages such as Node.js, Python, and Java.

In terms of triggers, almost all services support HTTP, object storage, scheduled tasks, message queues, and so on. Of course, these triggers are tied to each platform's own back-end services: Alibaba Cloud's object storage trigger fires on events of its OSS product, such as file access, while AWS's object storage trigger fires on AWS S3 events, and the two are not interchangeable. This is a problem Serverless currently faces: the standards are not uniform.

From a billing point of view, prices are roughly the same across platforms. As mentioned earlier, Serverless charges per invocation. Each platform typically offers 1 million free invocations per month, and charges almost ¥1.3 per million after that, plus 400,000 GB-s of free execution time, with ¥0.0001108/GB-s afterwards. So Serverless is very cost-effective when the application's traffic is small.
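To make the arithmetic concrete, here is a small sketch of this billing model using the example prices above (illustrative assumptions, not an official price sheet):

```javascript
// Rough monthly cost sketch using the example prices above (illustrative
// assumptions, not an official price sheet): the first 1,000,000
// invocations are free, then about ¥1.3 per million; the first 400,000
// GB-s of execution time are free, then ¥0.0001108 per GB-s.
function monthlyCost(invocations, gbSeconds) {
  const billableCalls = Math.max(invocations - 1000000, 0);
  const callCost = (billableCalls / 1000000) * 1.3;
  const billableGbs = Math.max(gbSeconds - 400000, 0);
  const durationCost = billableGbs * 0.0001108;
  return callCost + durationCost;
}

// A small app: 500k calls a month at 128 MB x 200 ms each stays in the free tier
console.log(monthlyCost(500000, 500000 * 0.128 * 0.2)); // prints 0
```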

Front-end development mode based on Serverless

This chapter uses several cases to illustrate the Serverless-based front-end development model and how it differs from previous front-end development.

Before getting into the specific cases, let's look at the traditional development process.

In a traditional development process, front-end engineers write pages and back-end engineers write interfaces. After the back-end interfaces are written and deployed, front-end and back-end joint debugging begins; after that comes testing and release. Once the system is online, operations engineers maintain it. The whole process involves many roles, the chain is long, and communication and coordination are costly.

With Serverless, however, the back end becomes very simple: the previous back-end applications are split into individual functions, and we only need to write the functions and deploy them to the Serverless service, with no server operations to care about. The bar for back-end development drops dramatically, so a single front-end engineer can complete all of the development work.

Of course, front-end engineers writing a back end on Serverless still need some back-end knowledge. Scenarios involving complex back-end systems, or where Serverless is not a good fit, still require back-end engineers, and the back end moves further back.

Serverless-based BFF

On the one hand, different devices need different APIs; on the other hand, microservices make front-end interface calls complicated. So front-end engineers began to aggregate and trim interfaces in a BFF layer to obtain interfaces suitable for the front end.

The following is a generic BFF architecture.



BFF @ SoundCloud

At the bottom are the various back-end microservices, and at the top the various front-end applications. Between the microservices and the applications sits the BFF, usually developed by front-end engineers.

This architecture solves the problem of interface coordination, but it also introduces some new problems.

For example, developing a separate BFF application for each device leads to duplicated work. And whereas the front end used to just develop pages and focus on browser-side rendering, it now has to maintain various BFF applications. The front end never used to worry about concurrency, but now the concurrency pressure lands on the BFF. Overall, the operations cost is very high, and operations is usually not the front end's strength.

Serverless can help us solve these problems. With Serverless, we can implement the aggregation and trimming of interfaces with one function each. A request from the front end to the BFF acts like an HTTP trigger in FaaS: it triggers the execution of a function that implements the business logic for that request, for example calling several microservices to fetch data, and then returns the result to the front end. The operations burden thus shifts from the former BFF server to the FaaS service, and the front end no longer needs to care about servers.

The image above shows a Serverless-based BFF architecture. To manage the various APIs better, we can also add a gateway layer (such as Alibaba Cloud's API Gateway) to manage all APIs, for example grouping them and managing their environments. With an API gateway, the front end no longer triggers functions directly over HTTP; instead it sends requests to the gateway, which triggers the specific function to execute.

Serverless-based server-side rendering

With the three most popular front-end frameworks (React/Angular/Vue.js), most rendering today happens on the client. When the page initializes, only a minimal HTML document and the corresponding JS files are loaded, and the page is then rendered by JS. The main issues with this approach are the white-screen time and SEO.

To solve these problems, the front end turned again to server-side rendering. The essential idea is the same as the original template rendering: the front end makes a request, the server produces an HTML document and returns it to the browser. The difference is that what used to be templates in server-side languages such as JSP and PHP is now an isomorphic application built with React or Vue, which is the advantage of today's server-side rendering.

But server-side rendering brings the front end an extra burden: operations cost, since the front end now needs to maintain the servers used for rendering.

The biggest advantage of Serverless is that it helps us reduce operation and maintenance. Can Serverless be used for server-side rendering? Of course you can.

In traditional server-side rendering, each requested path corresponds to a server route that renders the HTML document for that path; the server application for rendering is the application that bundles all these routes together.

Using Serverless for server-side rendering means splitting each route into its own function and deploying those functions on FaaS, so that each path a user requests corresponds to an individual function. Operations work is thereby transferred to the FaaS platform, and the front end can do server-side rendering without caring about server deployment or maintenance.

ZEIT's Next.js does a good job of implementing Serverless server-side rendering. Here is a simple example.

The code structure is as follows:

.
├── next.config.js
├── now.json
├── package.json
└── pages
    ├── about.js
    └── index.js
// next.config.js
module.exports = {
  target: 'serverless'
}

pages/about.js and pages/index.js are two pages, and next.config.js configures the serverless target for ZEIT's service.

The now command then deploys the code as Serverless functions. During deployment, pages/about.js and pages/index.js are converted into two functions that render their corresponding pages.

Serverless-based mini program development

At present, the most common Serverless scenario in China may be mini program development, in the concrete form of mini program cloud development: both Alipay and WeChat mini programs now provide cloud development capabilities.

In traditional mini program development, a front-end engineer develops the mini program and a back-end engineer develops the server side. A mini program back end is essentially the same as any other back-end application: someone has to care about load balancing, backup and redundancy, monitoring, alerting, and other deployment and operations work. If the team is small, the front-end engineer may even have to implement the server side too.

With cloud development, however, developers only need to focus on implementing the business; a single front-end engineer can complete the front and back ends of the whole application. Cloud development encapsulates the back end as BaaS services and provides developers with corresponding SDKs, so developers can use the back-end services as easily as calling functions. Operations also shifts to the cloud development provider.

Here are some examples using Alipay's Basement; the functions are defined in its FaaS service.

Operate the database

// 'basement' is a global variable
// Operate the database
basement.db.collection('users')
  .insertOne({
    name: 'node',
    age: 18,
  })
  .then(() => {
    resolve({ success: true });
  })
  .catch((err) => {
    reject({ success: false });
  });

Upload an image

// Upload the image
basement.file
  .uploadFile(options)
  .then((image) => {
    this.setData({
      iconUrl: image.fileUrl,
    });
  })
  .catch(console.error);

Call a function

// Call the function
basement.function
  .invoke('getUserInfo')
  .then((res) => {
    this.setData({
      user: res.result
    });
  })
  .catch(console.error);

Common Serverless architecture

Based on the Serverless development examples above, we can summarize a general Serverless architecture.

At the lowest layer is the back end (Backend), which implements complex services. Above it, the FaaS layer implements business logic through a series of functions and serves the front end directly. For front-end developers, the front end can implement server-side logic by writing functions; for back-end developers, the back end moves further back. If the business is light enough, the FaaS layer alone may suffice and no microservices layer is needed at all.

At the same time, the back end, the FaaS layer, and even the front end can all call the BaaS services provided by the cloud platform, which greatly reduces development difficulty and cost. Mini program cloud development is an example of calling BaaS services directly from the front end.

Serverless development best practices

The biggest difference between the Serverless development model and the traditional one is this: traditionally we develop applications, and after development we unit-test and integration-test the application; with Serverless, what we develop are functions. So how should Serverless functions be tested? How does testing them differ from ordinary unit testing?

Another important question: how do Serverless-based applications perform, and how can we improve the performance of a Serverless application?

This chapter focuses on best practices for testing Serverless functions and for function performance.

Function testing

While Serverless makes business development easy, its characteristics pose some challenges for testing, mainly in the following respects.

Serverless functions run distributed: we don't know, and don't need to know, which machines the functions are deployed or run on, so each function needs its own unit tests. Meanwhile, a Serverless application is made up of a group of functions that may internally depend on other back-end services (BaaS), so we also need integration tests for the application.

The FaaS and BaaS that functions run on are also hard to emulate locally. Moreover, the FaaS environments offered by different platforms may be inconsistent, and so may the SDKs or interfaces of the BaaS services they provide, which not only complicates testing but also raises the cost of migrating applications.

Finally, function execution is event-driven, and the events that drive execution are difficult to simulate locally.

So how to solve these problems?

According to the test pyramid proposed by Mike Cohn, unit tests are the cheapest and most efficient, while UI (integration) tests are the most expensive and least efficient, so we should write as many unit tests as possible and keep integration tests to a minimum. The same applies to testing Serverless functions.



Martinfowler.com/bliki/TestP…

To make functions easy to unit test, we need to separate the business logic from the function's dependencies on FaaS (such as Function Compute) and BaaS (such as cloud databases). Once FaaS and BaaS are separated out, we can test the function's business logic just as in traditional unit testing, and then write integration tests to verify that the function works correctly with the other services.

A bad example

Here is an example function implemented in Node.js. All it does is store a user's information in the database and then send the user an email.

const db = require('db').connect();
const mailer = require('mailer');

module.exports.saveUser = (event, context, callback) => {
  const user = {
    email: event.email,
    created_at: Date.now()
  };

  db.saveUser(user, function (err) {
    if (err) {
      callback(err);
    } else {
      mailer.sendWelcomeEmail(event.email);
      callback();
    }
  });
};

There are two main problems with this example:

  1. The business logic and FaaS are coupled together. All of the business logic sits inside the saveUser function, whose event and context parameters are provided by the FaaS platform.
  2. The business logic and BaaS are coupled together. The function directly uses the two back-end services db and mailer, so testing the function requires db and mailer to be available.

Write testable functions

The code above can be refactored by separating the business logic from the function's FaaS and BaaS dependencies.

class Users {
  constructor(db, mailer) {
    this.db = db;
    this.mailer = mailer;
  }

  save(email, callback) {
    const user = {
      email: email,
      created_at: Date.now()
    };

    // Use an arrow function so that `this.mailer` keeps its binding
    this.db.saveUser(user, (err) => {
      if (err) {
        callback(err);
      } else {
        this.mailer.sendWelcomeEmail(email);
        callback();
      }
    });
  }
}

module.exports = Users;
const db = require('db').connect();
const mailer = require('mailer');
const Users = require('users');

let users = new Users(db, mailer);

module.exports.saveUser = (event, context, callback) => {
  users.save(event.email, callback);
};

In the refactored code, all the business logic lives in the Users class, which does not depend on any external services. When testing, we can pass in mocked services instead of a real db or mailer.

Here is an example of a simulated Mailer.

// Mock the mailer
const mailer = {
  sendWelcomeEmail: (email) => {
    console.log(`Send email to ${email} success!`);
  }
};

As long as Users is sufficiently unit-tested, we can be confident the business code runs as expected.

We can then pass in the real db and mailer and run a simple integration test to check that the whole function works.

The refactored code has another benefit: it makes function migration easier. To move the function from one platform to another, we only need to adapt how Users is invoked to the parameters each platform provides, without touching the business logic.

Summary

To sum up, when testing a function, you need to keep the pyramid principle in mind and follow the following principles:

  1. Separate the business logic from the function's FaaS and BaaS dependencies
  2. Thoroughly unit test the business logic
  3. Run integration tests on the function to verify that the code works

Function performance

When developing with Serverless, another common concern is the performance of functions.

A traditional application starts once and stays resident in memory; a Serverless function does not.

When an event arrives to drive a function, the platform needs to download the code, start a container, start the runtime environment inside the container, and only then execute the code. The first three steps are collectively called a cold start; traditional applications have no cold-start process.

Here is a schematic of the function life cycle:



www.youtube.com/watch?v=oQF…

The length of the cold start is a key factor in function performance. To optimize a function, you need to optimize each stage of its life cycle.

The impact of different programming languages on cold start times

Many people have already tested how different programming languages affect cold-start time, for example:



Cold start / Warm start with AWS Lambda

There are some consistent conclusions from these tests:

  • Increasing the memory of functions can reduce the cold start time
  • Programming languages such as C# and Java have cold-start times roughly 100 times those of Node.js and Python

Based on this, if you want Java's cold-start time to be as short as Node.js's, you can allocate more memory to the Java function. But more memory means higher cost.

The cold start time of the function

Developers new to Serverless may assume that every function execution requires a cold start. That is not the case.

After the first request (the event that drives function execution) arrives, the runtime environment is started and the function executes; the runtime environment is then kept alive for a while to serve subsequent executions. This reduces the number of cold starts and shortens response times. When requests exceed what one runtime environment can handle, the FaaS platform automatically scales out to additional runtime environments.

In AWS Lambda, for example, after a function finishes executing, Lambda keeps the execution context for some time in anticipation of another invocation: the service freezes the execution context after the function completes, and if Lambda chooses to reuse it on the next invocation, it thaws the context for reuse.

Here are two small tests to illustrate the above.

I implemented a Serverless function with Alibaba Cloud Function Compute, driven by an HTTP trigger, and then sent 100 requests to the function at varying levels of concurrency.

First, the case with a concurrency of 1:

You can see that the first request took 302 ms and the others took around 50 ms. Essentially, the first request hit a cold start, while the remaining 99 were warm starts that directly reused the first request's environment.

Here is the case with a concurrency of 10:

The first 10 requests took about 200 ms to 300 ms, while the rest took about 50 ms. We can conclude that the first 10 concurrent requests were cold starts that launched 10 runtime environments simultaneously, and the following 90 requests were warm starts.

This confirms the earlier conclusion: functions do not cold start on every execution, and the previous runtime environment is reused for a certain period.

Reuse the execution context

How does this conclusion help us improve function performance? Since the runtime environment is preserved, we can reuse the execution context within it.

Here’s an example:

const mysql = require('mysql');

module.exports.saveUser = (event, context, callback) => {
  // Initialize the database connection
  const connection = mysql.createConnection({ /* ... */ });
  connection.connect();

  connection.query('...');
};

The example above initializes a database connection inside the saveUser function. The problem is that every execution re-initializes the connection, which is time-consuming and obviously bad for the function's performance.

Since a function's execution context can be reused for a short period, we can move the database connection outside the function:

const mysql = require('mysql');

// Initialize the database connection
const connection = mysql.createConnection({ /* ... */ });
connection.connect();

module.exports.saveUser = (event, context, callback) => {
  connection.query('...');
};

This way the database connection is initialized only once, when the runtime environment first starts. Subsequent requests can use the connection already present in the execution context, improving the performance of all later executions.

In most cases, sacrificing the performance of one request for the performance of the vast majority is perfectly acceptable.

Warm up the function

Since a function's runtime environment is kept alive for a while, we can also invoke the function proactively on a schedule to keep a runtime environment started, so that normal requests hit warm starts and are not affected by cold-start latency.

This is currently the most effective method, but there are some caveats:

  1. Do not invoke the function too frequently; an interval of at least 5 minutes is enough
  2. Invoke the function directly, rather than indirectly through a gateway or similar
  3. Handle the warm-up call with dedicated logic, rather than running the normal business logic

This is only a workaround available today. If your business can accept "sacrificing the first request's performance for the performance of the majority", you do not need it at all.

Summary

In general, optimizing function performance means optimizing cold-start time. All the solutions above are optimizations on the developer's side; performance also depends on the FaaS platform itself.

To sum up the above plans, the main points are as follows:

  1. Choose a programming language with short cold startup time, such as Node.js/Python
  2. Allocate appropriate running memory for the function
  3. Reuse the execution context
  4. Warm up the function

Conclusion

As front-end engineers, we often talk about where the boundary of the front end lies. Front-end development is no longer what it used to be: the front end can build not just web pages, but also mini programs, apps, desktop programs, and even servers. The front end keeps pushing its boundaries and exploring more territory in the hope of generating more value, preferably with the tools and approaches we already know.

The birth of the Serverless architecture helps front-end engineers realize these ambitions to the greatest extent. With Serverless, we no longer need to pay much attention to server-side operations or care about unfamiliar areas; we only need to focus on business development and product delivery. We have less to worry about, yet we can do more.

Serverless will also greatly change the front-end development model, and the role of the front-end engineer will return to that of an application engineer.

If there is one sentence to sum up Serverless, it is Less is More.