1. Introduction

Serverless is an architecture that lets developers focus on business logic without caring about the application's runtime environment, resources, or capacity.

Now that companies are DevOps-oriented and moving toward Serverless, why should the front end care about Serverless?

For front-end developers working on business logic:

  1. It changes the front-end/back-end interface definition specifications.
  2. It will certainly change how the front end and back end integrate and debug together, pull the front end into server-side logic development, and may even mix Node and Java.
  3. The barrier to maintaining a Node.js server drops sharply: anyone who can write JS can maintain a Node service, without learning DevOps.

For a freelance developer:

  1. Future server deployments will be more flexible and cost-effective.
  2. Deployment will be faster and less error-prone.

Front-end frameworks always bring back-end thinking, whereas Serverless brings front-end thinking to back-end operations.

Front-end developers were actually the first to enjoy the benefits of "Serverless". They don't need to own servers, or even browsers, yet their JS code runs, evenly distributed and evenly loaded, on every user's computer.

And every user's browser is like today's most fashionable, sophisticated Serverless cluster: it cold-starts by loading JS code remotely, and it is even better at cold starts, using JIT compilation to bring cold-start times down to the millisecond level. Not only that, the browser is the perfect environment for BAAS services: we can call any function to get cookies, environment information, or local database services, regardless of what computer the user is on, what network they are connected to, or how big their hard drive is.

This is the Serverless concept: FAAS (Function as a Service) and BAAS (Backend as a Service) try to recreate, on the server side, the development environment front-end developers take for granted, so front-end developers should understand the benefits of Serverless better than anyone.

2. Intensive reading

FAAS (Function as a Service) + BAAS (Backend as a Service) together can be called a complete Serverless implementation; beyond that there is also the concept of PAAS (Platform as a Service). Platform environments are often implemented with container technology, ultimately aiming for NoOps (no operations), or at least DevOps (development & operations).

Here are a few terms to keep you from getting confused:

FAAS – Function as a Service

Each function is a service. Functions can be written in any language, and operational details such as computing resources, elastic scaling, pay-per-use billing, and event-driven triggers are handled for you. All the major cloud vendors support FAAS, and each provides a workbench or visual workflow to manage these functions.
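As a sketch of what function granularity looks like in practice, here is a minimal FAAS function written in the style of an AWS Lambda Node.js HTTP handler. The `(event)` signature and the `statusCode`/`body` response shape follow Lambda's convention; other vendors differ, and the handler content itself is illustrative:

```javascript
// A minimal FAAS function, Lambda-style: the platform provisions resources,
// scales instances, and bills per invocation; we only write the function body.
const handler = async (event) => {
  const name = (event.queryStringParameters || {}).name || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};

module.exports = { handler };
```

Deploying this snippet to a FAAS workbench is the whole deployment story; there is no server, process manager, or OS to configure.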

BAAS – Backend as a Service

Backend as a Service integrates many middleware technologies so that services can be invoked without caring about the environment: data as a service (database services), cache services, and so on. There are many more XAAS terms below, but only FAAS + BAAS make up the Serverless concept.

PAAS – Platform as a Service

With Platform as a Service, users simply upload source code to get continuous integration and highly available services automatically; if it is fast enough, it can be considered close to Serverless. However, with the rise of container technology represented by Docker, container-granularity PAAS deployment has gradually become mainstream and is now the most common way to deploy applications, middleware, databases, operating systems, and so on.

DAAS – Data as a Service

Data as a Service packages data collection, governance, aggregation, and delivery as services. DAAS services can adopt the Serverless architecture.

IAAS – Infrastructure as a Service

Infrastructure as a Service provides infrastructure such as computing, storage, networks, and servers as services.

SAAS – Software as a Service

Software as a Service provides services at the granularity of whole software products, such as ERP, CRM, and email services.

Containers

A container is a virtual program execution environment isolated from the physical environment that can be described and migrated. One of the more popular container technologies is Docker.

As the number of containers grew, technologies emerged to manage container clusters, with Kubernetes being the best-known container orchestration platform. Container technology is both an alternative path to, and a foundation of, Serverless implementations.

NoOps

NoOps means no human operations at all, which is quite idealistic; fully unmanned operations may only be achievable with the help of AI.

NoOps does not equal Serverless. Serverless probably still needs human maintenance (at least for now), but developers no longer need to care about the environment.

DevOps

After all, developers are held accountable when things go wrong, and a mature DevOps system lets developers take on more of the operations responsibilities, or at least work more closely with operations staff.


Back to Serverless: the back-end development experience of the future is likely to resemble today's front end. You don't care which server your code runs on (which browser), you don't care about the server environment (the browser version), you don't worry about load balancing (the front end never does), and middleware services can be called at any time (LocalStorage, Service Worker).

Front-end students should be particularly excited about Serverless. Take my own experience as an example.

Start by making a game

The author is fond of idle/incremental games. The most common ones revolve around building, resource collection, and per-second tick rules that compute resources even while you are away. When developing such a game, I initially split the client-side code and the server-side code into two completely separate implementations:

```
// ... In the UI layer, draw a countdown progress bar for the lumberyard under construction
const currentTime = await requestBuildingProcess();
const leftTime = new Date().getTime() - currentTime;
// ... keep counting down
// When the countdown finishes, apply the +100 wood per hour via a client-side timer
store.woodIncrement += 100;
```

For the sake of the game experience, the user should see a progress bar counting down the lumberyard construction, and then, bang, it's done, without refreshing the browser, and the extra +100 wood income kicks in! But if the browser is refreshed at any point before, during, or after the mill is built, the logic must stay consistent, so the data also has to be computed offline on the back end. Time to write the back-end code:

```
// Verify on each login
const currentTime = new Date().getTime();
// Get the current state of the lumberyard
if (/* under construction */) {
  // Return the remaining time to the client
  const leftTime = building.startTime - currentTime;
  res.body = leftTime;
} else {
  // Construction finished
  store.woodIncrement += 100;
}
```

Soon there were more building types, each with different output per state and level, and the cost of maintaining the front and back ends in parallel kept growing. We needed configuration synchronization.

Configuration synchronization

To synchronize configuration between the front and back ends, the configuration can be hosted separately and shared by both. For example, create a configuration file storing the game data:

```
export const buildings = {
  wood: {
    name: "..",
    maxLevel: 100,
    increamentPerLevel: 50,
    initIncreament: 100
  }
  /* .. and so on .. */
};
```

Although the configuration is now reused, the front and back ends still share logic that could be reused, such as determining a building's state from its construction time, or computing a building's output after N seconds. Serverless creates room for further optimization.

Play games in a Serverless environment

Imagine executing code at function granularity on the server. We could then abstract the game logic like this:

```
// Determine the state of the building from its construction time
export const getBuildingStatusByTime = (instanceId: number, time: number) => {
  /* */
};

// Determine the building's production
export const getBuildingProduction = (instanceId: number, lastTime: number) => {
  const status = getBuildingStatusByTime(instanceId, new Date().getTime());
  switch (status) {
    case "building":
      return 0;
    case "finished":
      // Total output = (current time - last opened time) * output per second
      return; /* */
  }
};

// The front-end UI layer calls getBuildingProduction every second to update production data
// Front-end entry function
export const frontendMain = () => {
  /* */
};

// The back end calls getBuildingProduction once each time the game is opened
// Back-end entry function
export const backendMain = () => {
  /* */
};
```

Writing the front-end and back-end logic together in a PAAS service, and uploading the getBuildingProduction function fragment to a FAAS service, the front-end and back-end logic can be shared at the same time!

In the folder view, you can plan the following structure:

```
.
├── client   # front-end entry
├── server   # back-end entry
└── common   # shared utility functions; can contain 80% of the general game logic
```

One might object: sharing code with the back end doesn't require Serverless.

Indeed, with good enough code abstraction and a mature engineering setup, a piece of code can already be built separately for the browser and for the server. But Serverless's function granularity fits the idea of front-end/back-end code reuse more naturally, and its rise is likely to drive much broader reuse of code with the back end. It isn't new, but it is a big change.
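The kind of code both ends can share is any pure, environment-free function. A tiny hypothetical sketch of the shared `common` game logic (names are illustrative, not from the original project):

```javascript
// common/production.js (hypothetical): pure game logic with no browser or
// server dependency, so the same module can be bundled for the client or
// deployed as part of a FAAS function.
function getProduction(perSecond, elapsedMs) {
  // Total output accrued over the elapsed time, rounded down to whole units
  return Math.floor((elapsedMs / 1000) * perSecond);
}

module.exports = { getProduction };
```

Because the function touches neither `window` nor `process`, neither side needs a shim to reuse it.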

Front-end and back-end perspectives

For front-end developers, back-end services become simpler. For back-end developers, the challenge shifts to providing thicker services.

Simpler back-end services

Renting a traditional ECS server is annoying enough just choosing between CentOS and AliyunOS. For an individual developer, building a complete continuous-integration pipeline is hard, and there is a dizzying number of options:

  • Install the database and other services on the server, and develop locally against the server's database.
  • Install Docker locally, connect to a local database service, and package the whole environment into an image deployed to the server.
  • Develop the front-end code locally and the server code on the server.

Even keeping the server stable requires tools such as PM2. When the server is attacked, restarts, or suffers a disk failure, you have to open a complicated workbench or log into a shell to recover. How is anyone supposed to stay focused on what they should be doing?

Serverless solves this problem: all we upload is a snippet of code, and we no longer have to deal with servers, system environments, or resources. External capabilities are provided by encapsulated BAAS services.

In fact, before Serverless came along, many back-end teams used FAAS concepts to simplify the development process.

To reduce the interference of environment and deployment issues when writing back-end business logic, many teams abstract business logic into blocks (code fragments or Blockly blocks) that can be maintained and published independently, and are finally injected into the main program or loaded dynamically. If you are used to this style of development, Serverless is easier to accept.

Thicker back-end services

From the back-end perspective, things get more complicated. Instead of providing a bare server or container, the goal is now to shield the execution environment from users entirely, which makes the service thicker.

From various articles, the author has learned that Serverless implementations still face the following challenges:

  • Serverless vendors implement their services differently; to support multi-cloud deployment, these differences need to be smoothed over.
  • Mature PAAS services are actually pseudo-Serverless; how will they be standardized later?
  • A FAAS cold start has to reload code and allocate resources dynamically, so cold starts are slow; besides pre-warming, an economical optimization approach is needed.
  • For high-concurrency scenarios (such as Double 11 flash sales), skipping capacity estimation is dangerous, but if scaling can be made fully elastic, the annoying capacity estimation goes away.
  • How do existing applications migrate? Most Serverless vendors in the industry have not yet addressed the migration of stock applications.
  • Serverless is inherently stateless, while complex Internet applications are stateful, so the challenge is to support state without changing development habits.
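The last point, statelessness, can be illustrated with a sketch: module-level state only survives while one particular instance stays warm, so anything that must persist belongs in a BAAS store. The `store` interface below is hypothetical:

```javascript
// Module-level state: lives only as long as this warm instance, and is not
// shared between concurrently running instances.
let warmHits = 0;

// Durable state goes through an external (BAAS) store instead.
async function countHit(store) {
  warmHits += 1; // resets to 0 on every cold start
  const total = ((await store.get("hits")) || 0) + 1;
  await store.set("hits", total);
  return { warmHits, total };
}

module.exports = { countHit };
```

The unchanged-development-habits challenge is exactly about hiding the `store` round-trip so code can read like the in-memory version.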

Fortunately, all of these problems are being actively addressed, and many solutions have already been implemented.

Serverless brings the back end more benefits than challenges:

  • It will drive front-end/back-end integration, further lowering the bar for writing server-side code in Node and removing the cost of learning application operations. The author once had an application break because a database service he had provisioned was migrated to another machine room. In the future there is no such worry: as a BAAS service, the database doesn't require you to care where it is deployed, whether it spans machine rooms, or how migration is done.
  • It improves resource utilization. Replacing exclusively-held applications with on-demand loading reduces wasted resources, and services are distributed evenly across the cluster's machines, leveling the cluster's CPU load.
  • It lowers the bar for using cloud platforms. No operations, elastic scaling, pay-as-you-go, and high availability attract more customers, while fully on-demand billing also lowers users' costs, a win-win.

Trying out service openness with Serverless

The author is responsible for building a large BI analysis platform at the company, and one of its underlying capabilities is visual page building.

So how can visual building be opened up to third parties? Opening up components is relatively easy today, because the front end can be fairly well decoupled from back-end design, and the AMD loading system is mature.

The harder challenge is opening up back-end capabilities: when there are custom data-fetching requirements, the back-end data-processing logic may need to be customized. At present that means setting up a local development and test environment with Maven3 and JDK7, and going online still requires help from back-end colleagues.

If the back end built a dedicated Serverless BAAS service, online coding, debugging, and even grayscale publishing could be pre-validated just like front-end components. There has already been a lot of mature exploration of front-end cloud development; Serverless can unify the cloud development experience of front-end and back-end code, regardless of environment.

Serverless application architecture design

Looking at various Serverless application architecture diagrams, I found that most businesses fit a pattern like this:

Business functions are abstracted into FAAS functions, and services such as database, cache and acceleration are abstracted into BAAS services.

The upper layer provides Restful or event-triggered invocation, which corresponds to different ends (PC and mobile).

If you want to expand the capabilities of the platform, you only need to open up on the end (component access) and FAAS service (back-end access).

Benefits and Challenges

The benefits and challenges of Serverless coexist; this article discusses them from the front-end point of view.

Benefit 1: The front end can focus more on front-end experience technology, without needing much application-management knowledge.

Recently I read many retrospective articles by front-end veterans, and the strongest impression is their reflection on "what role the front end has played all these years". We tend to exaggerate our own importance; in fact, the front end exists to solve human-computer interaction problems, and in most scenarios that is icing on the cake rather than a necessity.

Think about it: your proudest work experience may be knowledge of Node application operations, front-end engineering systems, R&D efficiency optimization, or standards and specifications, but the part that really matters to the business is the business code you feel is the least worth writing. The front end spends too much time on peripheral technology and too little thinking about the business and the interaction.

Even for large companies, it is hard to hire someone who is proficient in Node.js and operations, has solid front-end expertise, and deeply understands the business. Having it all is almost impossible.

Serverless effectively solves this problem: front-end developers only need to write JS, without mastering any operations knowledge, to quickly realize their entire set of ideas.

Admittedly, understanding the server side is still necessary, but from the perspective of a sensible division of labor, the front end should focus on front-end technology. The front end's core competence and business value are not strengthened by learning more operations knowledge; rather, that eats up time that could have created more business value.

The evolution of languages, browsers, and servers has all moved from complex to simple, from low-level to encapsulated, and Serverless is a further encapsulation of the back end plus operations as a whole.

Benefit 2: Logic orchestration brings highly reusable, maintainable code, and extends "cloud + client" capabilities.

The "cloud + client" model is the next form of front-end development: it provides strong cloud coding capabilities, or turns the local editor into a cloud-like development environment through plugins. Its biggest benefit is shielding front-end development environment details, a concept similar to Serverless.

While several teams have tried to make interfaces "more flexible" with GraphQL, Serverless is a more radical approach.

My own team tried GraphQL, but because the business was too complex to describe every scenario's requirements in a standard model, GraphQL was not a good fit. What did stick was a visual back-end development platform based on Blockly, which achieved amazing development efficiency. After generalization and abstraction, that Blockly platform could almost be replaced by Serverless. So Serverless can also solve back-end development efficiency problems in complex scenarios.

Combined with cloud development, Serverless can go further and visually orchestrate function execution order and dependencies.

The author used such a platform to compute offline logs in Baidu's advertising data-processing team. With each MapReduce compute node visualized, it is easy to see which node is blocking when a failure occurs, find the longest execution chain, and reassign execution weights to nodes. Even if logic orchestration doesn't solve every development pain point, it is certainly useful in specific business scenarios.

Challenge 1: Does Serverless completely remove the back-end barrier for front-end developers?

The most common problem in Node code is running out of memory.

The browser-plus-tab model is naturally use-and-dispose: UI components and logic are created and destroyed frequently, so very few front-end developers ever worry about GC. In back-end development, by contrast, thinking about GC is an ingrained habit, and Node.js memory leaks are a major concern.

Serverless applications are loaded dynamically and released if unused for a long time, so generally you don't need to worry much about GC. Even if memory leaks, the process will probably be released before memory runs out, or the anomaly will be detected and the process forcibly killed.

However, the loading and release of FAAS functions is entirely controlled by the cloud, and a frequently used function may stay resident for a long time without being released, so FAAS functions still need to keep their side effects under control.
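A typical side effect to watch is a module-level cache, which keeps growing across invocations on a warm instance. A crude sketch of bounding it (a real service might prefer an LRU; the names and size cap are illustrative):

```javascript
// Module scope survives between invocations on a warm FAAS instance, so an
// unbounded cache here would grow until the platform recycles the instance.
const cache = new Map();
const MAX_ENTRIES = 1000;

function memoized(key, compute) {
  if (cache.has(key)) return cache.get(key);
  const value = compute(key);
  if (cache.size >= MAX_ENTRIES) cache.clear(); // crude cap; an LRU is gentler
  cache.set(key, value);
  return value;
}

module.exports = { memoized, cache };
```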

So while Serverless smooths away the operations environment, server-side fundamentals still need to be understood: you must always be aware of whether your code runs on the front end or the back end.

Challenge 2: Performance

Serverless cold starts cause performance problems, forcing the business side to actively care about call frequency and performance requirements, and then set up warm-up services, dragging development back into the abyss of operations.

Even Amazon's Serverless offering, the most mature cloud service in the industry, cannot easily handle flash-sale scenarios without caring about call frequency.

Therefore, at this stage Serverless is better used in the scenarios that suit it, rather than forcing every application onto Serverless.

Although the program can be kept warm by invoking the FAAS service periodically, I think this violates the philosophy of Serverless.

Challenge 3: How do you make your code portable?

Here's a classic description of Serverless's positioning:

The network, storage, services, virtualization, operating system, middleware, runtime, and data are all handled for you; even at the application layer you only need to care about the function itself, not its startup and destruction.

This has always been touted as a strength, but it can also be a weakness: when your code depends entirely on a public cloud environment, you lose control of the overall environment, and your code may even only run on one particular cloud platform.

Different cloud platforms may provide different BAAS service specifications and different FAAS entry points and execution methods; these differences must be overcome to achieve multi-cloud deployment.

Many Serverless platforms are now considering standardization, and there are also bottom-up toolsets that smooth over some of the differences, such as the Serverless Framework.

When we write FAAS functions, we should keep the platform-bound entry function as thin as possible, putting the real entry logic in a generic function such as main.
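A sketch of that split, with an AWS Lambda-style shim as one example of a vendor-bound entry (the names here are illustrative; only the shim would change per cloud platform):

```javascript
// Platform-agnostic business logic with a plain-object interface: no vendor
// event shapes leak into it, so it ports between clouds unchanged.
function main({ a = 0, b = 0 } = {}) {
  return { sum: a + b };
}

// The only vendor-bound part, kept as thin as possible: it just translates
// the platform's event shape to and from main's plain objects.
const lambdaHandler = async (event) => ({
  statusCode: 200,
  body: JSON.stringify(main(JSON.parse(event.body || "{}"))),
});

module.exports = { main, lambdaHandler };
```

Moving to another vendor then means rewriting a few lines of shim, not the business logic.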

3. Summary

The value of Serverless far outweighs its challenges, and its concepts can genuinely solve many R&D efficiency problems.

However, Serverless is still at an early stage; domestic offerings are also in trial, and the implementation environment has many restrictions. In other words, the good ideas of Serverless are not yet fully realized, so moving everything onto it now will certainly mean stepping into pits.

Those pits will probably be filled within 3-5 years, so will you join the pit-filling army, or use Serverless only where it fits?

Discussion: What does Serverless bring to the front end? · Issue #135 · dt-fe/weekly

If you'd like to join the discussion, please click here; there is a new topic every week, released on weekends or Mondays. Front End Intensive Reading – helping you filter the right content.