Takeaway: Front-end developers were the first to enjoy the benefits of "Serverless", because the browser is an environment that works out of the box and whose computing power the developer doesn't even pay for. Serverless brings the front-end development experience to the back end, using FaaS and BaaS to create an out-of-the-box back-end development environment. This article explains the benefits and challenges of Serverless from a front-end perspective.

Introduction

Serverless, literally "serverless architecture", lets users focus on business logic without caring about the application's runtime environment, its resources, or how many of them there are.

Now that companies are DevOps-oriented and moving toward Serverless, why should the front end care about Serverless?

For business front end students:

  1. It will change how front-end/back-end interfaces are specified;
  2. It will change how the front and back ends do joint debugging, letting the front end participate in server-side logic development, even mixing Node and Java;
  3. The threshold for maintaining a Node.js server drops dramatically: being able to write JS code is enough to maintain a Node service, with no need to learn DevOps.

For freelance developers:

  1. More flexible and cost-effective server deployments in the future;
  2. Deployment is faster and less error-prone.

Front-end frameworks have always brought back-end thinking to the front end, whereas Serverless brings front-end thinking to back-end operations. Front-end developers were actually the first to enjoy the benefits of "Serverless": they don't need to own servers, or even browsers, yet their JS code runs, evenly load-balanced, on every user's computer.

Each user's browser is like today's most fashionable and sophisticated Serverless cluster: it cold-starts by loading JS code remotely, and it even excels at cold starts, using JIT compilation to bring cold-start times down to the millisecond level. Not only that, the browser is the perfect BaaS environment: we can call any function to read cookies, environment information, or local database services, regardless of what computer the user is on, what network they are connected to, or even how big their hard drive is.

This is the Serverless concept: FaaS (Function as a Service) and BaaS (Backend as a Service) attempt to recreate on the server the development environment that front-end developers take for granted, so front-end developers are well placed to understand the benefits of Serverless.

Intensive reading

FaaS (Function as a Service) + BaaS (Backend as a Service) together can be called a complete Serverless implementation. Beyond that, there is also the concept of PaaS (Platform as a Service). Platform environments are often implemented with container technology, with NoOps (no-one operations) as the ultimate goal, or at least DevOps (development & operations).

Here are a few words to keep you from getting confused:

  • FaaS – Function as a service

Each function is a service. Functions can be written in any language, and the developer need not care about any operational detail: computing resources, elastic scaling, pay-per-use billing, and event-driven invocation are all provided. FaaS is supported by the industry's major cloud vendors, each offering a workbench or visual workflow for managing these functions.

  • BaaS – Backend as a service

Backend as a service integrates many middleware technologies so that services can be invoked without caring about the environment, such as data as a service (database services), cache services, and so on. While there are many more XaaS terms below, only FaaS + BaaS make up the Serverless concept.

  • PaaS – Platform as a service

With platform as a service, users get automatic continuous integration and highly available services just by uploading source code; if it is fast enough, it can be considered close to Serverless. However, with the rise of container technology represented by Docker, container-granularity PaaS deployment has gradually become mainstream and is now the most common way to deploy applications such as middleware, databases, and operating systems.

  • DaaS – Data as a service

Data as a service packages data collection, governance, aggregation, and delivery as services. DaaS can itself be built on a Serverless architecture.

  • IaaS – Infrastructure as a Service

Infrastructure such as storage, networking, and servers is provided as a service.

  • SaaS – Software as a Service

Software as a service provides services at the granularity of whole software products, such as ERP, CRM, and email services.

  • Containers

A container is a virtual program execution environment isolated from the physical environment, and the environment can be described and migrated. The popular container technology is Docker. As the number of containers increased, techniques emerged to manage clusters of containers, with Kubernetes being the best known container orchestration platform. Container technology is an alternative to and the foundation of the Serverless architecture implementation.

  • NoOps

NoOps means unmanned operations, which is fairly idealistic; with the help of AI, fully unattended operation and maintenance may eventually be achievable.

Unattended operation is not what Serverless promises: Serverless probably still needs human maintenance (at least for now), but developers no longer need to care about the environment.

  • DevOps

DevOps combines development and operations. Developers are, after all, held accountable when things go wrong, and a mature DevOps system lets more developers assume the operator's responsibilities, or at least work more closely with operators.

Back to Serverless: the back-end development experience of the future is likely to resemble today's front end. You won't care which server (browser) your code runs on, you won't care about the server environment (browser version), you won't worry about load balancing (the front end never has), and middleware services can be called at any time (LocalStorage, Service Worker).

Front-end students should be particularly excited about Serverless. Take my own experience as an example.

Start by making a game

The author is fascinated by idle games. The most common idle games revolve around resource building and collection, with per-second accounting rules that keep accumulating resources even while the player is away. When I developed such a game, I initially split the client-side code and the server-side code into two completely separate implementations:

```js
// ...
const currentTime = await requestBuildingProcess();
const leftTime = new Date().getTime() - currentTime;
// ...
woodIncrement += 100;
```

For the sake of the game experience, the user watches a progress bar as the lumberyard is built, and "bam!" it finishes without a browser refresh, then produces an extra 100 wood per second! But if the browser is refreshed at any point before, during, or after the mill is built, the logic must stay consistent, and the data must be settled offline on the back end. So it's time to write the back-end code:

```js
const currentTime = new Date().getTime();
if (/* under construction */) {
  // Return the remaining time to the client
  const leftTime = building.startTime - currentTime;
  res.body = leftTime;
} else {
  woodIncrement += 100;
}
```

Soon there were more building types, each with different output per state and level, and the cost of keeping the front and back ends in sync kept rising, so we needed configuration synchronization.

Configuration synchronization

To synchronize configuration between the front and back ends, you can host the configuration independently so both ends share it. For example, create a configuration file storing the game information:

```js
export const buildings = {
  wood: {
    name: "..",
    maxLevel: 100,
    increamentPerLevel: 50,
    initIncreament: 100
  }
  /* .. and so on .. */
};
```

Although the configuration is now reused, the front and back ends still share common logic worth reusing, such as deriving a building's state from its construction time, or computing its output after N seconds. Serverless leaves room for further optimization here.
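As a sketch of that shared logic (the helper names and the production formula are illustrative assumptions, not the game's actual rules), both ends could build on the shared config like this:

```javascript
// Config shared by client and server (mirrors the article's example)
const buildings = {
  wood: {
    maxLevel: 100,
    increamentPerLevel: 50,
    initIncreament: 100
  }
};

// Hypothetical shared helper: production per second at a given level
function productionPerSecond(type, level) {
  const b = buildings[type];
  return b.initIncreament + b.increamentPerLevel * (level - 1);
}

// Hypothetical shared helper: total output accumulated between two timestamps (ms),
// usable by the front end for live display and by the back end for offline settlement
function offlineOutput(type, level, fromMs, toMs) {
  return Math.floor((toMs - fromMs) / 1000) * productionPerSecond(type, level);
}
```

Both the browser's per-second UI tick and the server's settlement-on-login can then call `offlineOutput`, so the two numbers never drift apart.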

Play games in a Serverless environment

Imagine executing code at a functional granularity on the server, where we could abstract the game logic like this:

```ts
// Determine the state of the building according to its construction time
export const getBuildingStatusByTime = (instanceId: number, time: number) => {
  /**/
};

// Determine the building's production
export const getBuildingProduction = (instanceId: number, lastTime: number) => {
  const status = getBuildingStatusByTime(instanceId, new Date().getTime());
  switch (status) {
    case "building":
      return 0;
    case "finished":
      // Total output = (current time - last opened time) * output per second
      return; /**/
  }
};

// The front-end UI layer calls getBuildingProduction every second to update the production data
export const frontendMain = () => {
  /**/
};

// On each session open, the back end calls getBuildingProduction and persists the result
// Back-end entry function
export const backendMain = () => {
  /**/
};
```

Writing the front-end and back-end logic together and uploading the getBuildingProduction function fragment to the FaaS service lets both ends share the same logic at once!

In the folder view, you can plan the following structure:

```
.
├── client   # front-end entry
├── server   # back-end entry
└── common   # shared utility functions; can hold 80% of the general game logic
```

One might ask: sharing code with the back end doesn't require Serverless.

Indeed, if the code abstraction is good enough and backed by a mature engineering setup, a piece of code can already be exported to the browser and to the server separately. But Serverless's function granularity better matches the idea of reusing code on the back end, and its arrival is likely to drive far wider code reuse there. It's not brand new, but it's a big change.

Front-end and back-end perspectives

  • For front-end developers, back-end services will feel simpler;
  • For back-end developers, services become thicker and more challenging.


Simpler back-end services

When renting a traditional ECS server, choosing between CentOS and AliyunOS is annoying enough. For individual developers, building a complete continuous-integration pipeline is hard, and the options are dizzying:

  • Install the database and other services on the server, and develop locally against the remote database;
  • Install Docker locally, connect to a local database service, and package the whole environment into an image deployed to the server;
  • Develop the front-end code locally and the server code on the server.

Even keeping the server stable requires tools such as PM2. When the server is attacked, restarted, or suffers disk failure, you have to open a complex workbench or log in via a shell to recover. How can anyone stay focused on what they should be doing?

Serverless solves this problem: all we upload is a code snippet, and we no longer deal with servers, system environments, or resources; external capabilities are provided by the encapsulated BaaS services.

In fact, before Serverless came along, many back-end teams used FaaS concepts to simplify the development process.

To keep environment and deployment issues from interfering with back-end business logic, many teams abstract business logic into blocks, as code fragments or Blockly, which can be independently maintained and published, and are eventually injected into the main program or loaded dynamically. If you're used to this style of development, Serverless is easier to accept.

Thicker back-end services

From the back-end perspective, things get more complicated. Instead of providing a bare server or container, the goal is now to make the service thicker, shielding the execution environment from the user.

The author learned from some articles that the implementation of Serverless still faces the following challenges:

  • Vendors implement Serverless services differently, so multi-cloud deployment requires smoothing out those differences;
  • Many mature PaaS services are actually pseudo-Serverless and need to be standardized going forward;
  • FaaS cold starts require reloading code and dynamically allocating resources, making cold starts slow; beyond pre-warming, an economical optimization method is needed;
  • In high-concurrency scenarios (such as a Double 11 flash sale), skipping capacity estimation is dangerous, yet truly complete elasticity would eliminate that annoying estimation work;
  • How existing applications migrate: most Serverless vendors in the industry have not solved migration of legacy applications;
  • Serverless is by nature stateless, while complex Internet applications are stateful, so the challenge is supporting state without changing development habits.

Fortunately, all of these problems are being actively addressed, and many solutions have already been implemented.

Serverless brings more backend benefits than challenges:

  • It promotes integration of the front and back ends, further lowering the threshold for writing server-side code in Node and eliminating the cost of learning application operations. The author once suffered a service outage because a database service he had applied for was migrated to another data center; in the future there is no such worry, because a database offered as a BaaS service hides where it is deployed, whether it spans data centers, and how migration is done;
  • It improves resource utilization. Ending resource monopolization by applications and loading on demand will certainly cut unnecessary resource consumption, and spreading services evenly across the cluster's machines levels out the cluster's CPU load;
  • It lowers the threshold for using cloud platforms. No operations work, elastic scaling, pay-for-value billing, and high availability attract more customers, while fully on-demand billing also cuts user costs: a win-win.

Trying service openness with Serverless

The author is responsible for building a large BI analysis platform at the company, and one of its underlying capabilities is visual page building.

So how can visual building be opened up? Opening up components is relatively easy now, because the front end can be designed fairly decoupled from the back end, and the AMD loading system is mature.

The real challenge is opening up back-end capabilities: when there are custom data-fetching requirements, the back-end data-processing logic may need customization. At present we can set up a local development environment for testing with Maven 3 and JDK 7, but going live still requires help from back-end colleagues.

If the back end provided a dedicated Serverless BaaS service, online coding, debugging, and even grayscale releases could be pre-tested just like front-end components. There has already been plenty of mature exploration of front-end cloud development; Serverless can unify the front-end and back-end development experience in the cloud, with no concern for the environment.

Serverless application architecture design

Looking at some Serverless application architecture diagrams, I found that most businesses fit a structure like this:



  • Business functions are abstracted into FaaS functions; database, cache, acceleration, and similar services become BaaS services;
  • The upper layer exposes RESTful or event-triggered invocation for the different ends (PC, mobile);
  • To extend the platform's capabilities, openness is needed only on the end (component access) and in the FaaS services (back-end access).

Benefits and Challenges

Serverless's benefits and challenges coexist; this article discusses them from a front-end point of view.

Benefit 1: The front end is more focused on front-end experience technology, and does not require much application management knowledge.

Recently I read many retrospective articles by front-end veterans, and my biggest takeaway was reflecting on "what role the front end has played all these years". We tend to exaggerate our own sense of presence; in fact, the front end exists to solve human-computer interaction problems, which in most scenarios is icing on the cake rather than a necessity.

Your proudest work may be your knowledge of Node application operations, front-end engineering systems, R&D performance optimization, or standards and specifications, yet the part the business really cares about is the business code you feel is least worth writing. The front end spends too much time on peripheral technology and too little thinking about the business and interaction.

Even for large companies, it is hard to hire someone proficient in Node.js with extensive operations knowledge who also has front-end expertise and a deep understanding of the business; having it all is nearly impossible.

Serverless can effectively solve this problem: front-end developers only need to write JS code, with no operations knowledge at all, to quickly realize an entire idea end to end.

Admittedly, understanding the server side is necessary, but under a sensible division of labor the front end should focus on front-end technology. The front end's core competence and business value are not strengthened by learning more about operations; doing so eats the time that could have produced more business value.

The evolution of languages, browsers, and servers has all been a process from complex to simple, from low-level to encapsulated, and Serverless is a further encapsulation of the back end + operations as a whole.

Benefit 2: Logic orchestration brings highly reusable and maintainable code, and extends "cloud + client" capabilities.

"Cloud + client" is the next form of front-end development: either providing strong cloud coding capabilities, or turning the local client into a cloud-like development environment through plug-ins. The biggest benefit is that it hides the details of the front-end development environment, conceptually similar to Serverless.

While several teams have tried to make interfaces “more resilient” with GraphQL, Serverless is a more radical solution.

My own team tried the GraphQL solution, but the business was too complex to describe every scenario's requirements in a standard model, so GraphQL was not a good fit. What has endured is a visual back-end development platform based on Blockly, which achieved amazing development efficiency. After generalization and abstraction, that Blockly layer could almost be replaced by Serverless, so Serverless can solve the problem of back-end development efficiency in complex scenarios.

Serverless combined with cloud development goes further, letting you visually orchestrate function execution order and dependencies through logic orchestration.

In Baidu's advertising data processing team, the author used such a platform to compute offline logs. With each MapReduce compute node visualized, it was easy to see which node was blocking when a fault occurred, find the longest execution chain, and reassign execution weights to nodes. Even if logic orchestration doesn't solve every development pain point, it is certainly useful in specific business scenarios.

Challenge 1: Can Serverless completely remove the barrier between front-end and back-end development?

The most common problem with Node code is memory overflow.

Browser + tab is naturally a use-and-discard scenario, and UI components and logic are created and destroyed frequently, so very few front-end developers ever worry about GC. In back-end development, by contrast, minding GC is an ingrained habit, which is why Node.js memory overflows are a major concern.

Serverless applications are loaded dynamically and released when unused for a long time, so generally you need not worry much about GC. Even if memory leaks, the process may be released before memory runs out, or the anomaly is detected and the process forcibly killed.

However, the loading and release of FaaS functions are entirely controlled by the cloud, and a frequently used function may stay resident for a long time without being released, so FaaS functions still need to keep their side effects under control.
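Module-level state is the classic side effect to watch for: it survives across warm invocations of the same instance. A hypothetical sketch (not any vendor's API) of bounding such state so a long-lived warm instance cannot leak:

```javascript
// Module scope lives as long as the FaaS instance stays warm
let recentEvents = [];

// Hypothetical handler: the cache is explicitly bounded so a warm instance
// that is kept alive for a long time cannot grow its memory without limit
function handler(event) {
  recentEvents.push(event);
  if (recentEvents.length > 100) {
    recentEvents = recentEvents.slice(-100);
  }
  return { cached: recentEvents.length };
}
```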

So Serverless smooths out the operations environment, but server-side basics still need to be understood: you must always know whether your code runs on the front end or the back end.

Challenge 2: Performance

Serverless cold starts cause performance problems. Making the business side actively care about invocation frequency or performance requirements, and then stand up warm-up services, drags development back into the abyss of operations.

Even Amazon's Serverless offering, the most mature in the industry, cannot easily handle flash-sale scenarios without caring about call frequency.

Therefore, Serverless is probably better used in conjunction with appropriate scenarios at this point, rather than forcing Serverless into any application.

Although you can keep a FaaS service warm by invoking it periodically, I think this violates the philosophy of Serverless.

Challenge 3: How do you make your code portable?

Here’s a classic Serverless location description:



The network, storage, services, virtualization, operating system, middleware, runtime, and data are all managed by the platform; the application layer only needs to care about the function itself, not startup, destruction, or anything else.

This has always been seen as a strength, but it can also be a weakness: when your code depends entirely on a public cloud environment, you lose control of the overall environment, and your code may only run on one particular cloud platform.

Different cloud platforms may provide different BaaS service specifications, as well as different FaaS entry and execution methods, which must be overcome in order to adopt a multi-cloud deployment.

Many Serverless platforms are now considering standardization, but there are also bottom-up toolsets to smooth out some of the differences, such as the Serverless Framework.

When writing FaaS functions, we should keep the platform-bound entry function as light as possible, putting the real logic in a generic function such as main.

Conclusion

The value of Serverless far outweighs its challenges, and its concept can solve many R&D efficiency problems.

However, Serverless is still at an early stage of development; domestic Serverless offerings are still experimental, and the runtime environment has many restrictions. In other words, the promise of Serverless is not yet fully realized, and moving everything onto it now will certainly mean stepping in some pits.

Those pits will probably be filled within three to five years, so will you join the army filling them, or use Serverless only in the scenarios where it already fits?


This article is original content of the cloud habitat community and may not be reproduced without permission.