The Front-End Early Chat conference is jointly held with Juejin (Nuggets).




The main text follows.

This is the 16th session of Front-End Early Chat's componentization series, and the 113th session overall, shared by Vincent from Tencent.

Hi, I'm very happy to give you a technical talk today. My topic is how we put components as a service into practice.

First, a brief self-introduction.

My name is Vincent. I graduated from South China University of Technology, and I now work in the IVWEB team at Tencent.

Today's topic is components as a service, or CaaS for short. The sharing is divided into four parts:

  1. First, the application scenarios of this scheme. Every scheme has its own scope, so we need to be clear about the actual scenarios where its advantages really pay off.
  2. Second, the focus of today's sharing: how we practiced the scheme, including the technology choices and the pitfalls we hit along the way, so that you have a reference when you meet similar scenarios.
  3. Third, the results this scheme has achieved in our actual business.
  4. Finally, a summary reviewing the whole scheme.

I. Application scenarios

Let's start with the first part: application scenarios.

The evolution of front-end page complexity

Let's start by thinking about how front-end pages today differ from those of years past. In my view, a front-end page is no longer the simple UI presentation it used to be; there is now a great deal of logic behind it. One of the main drivers of this evolution is product design. It is no longer as primitive as before, when a product manager might decide on a whim what a page should do. Product design today is far more scientific and industrialized, which means there are many more logical, data-driven ways to determine what our product should finally look like.

As product design advances, what a front-end page shows on the surface diverges more and more from what sits behind it; we only ever see a small part.

Page complexity – ABTest

Why does page complexity keep climbing? A typical example is ABTest, which has been used heavily in our business over the past two years. An ABTest is a controlled experiment: we build different variants of a page's UI, interaction, or product logic, expose them to real users, and analyze the resulting data reports to decide which variant performs better. But ABTest introduces a problem: the page accumulates more and more components. Suppose a page is divided into several modules, each module can run an experiment, and each experiment has two arms, a control group and a test group. The permutations and combinations can produce dozens of distinct page renderings, so page complexity grows exponentially.
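
The combinatorial explosion above can be sketched in a few lines. This is an illustrative back-of-the-envelope model, not the real experiment platform; the function names and the hash-based bucketing are assumptions for demonstration.

```javascript
// Why ABTest multiplies page states: each module runs an experiment with
// several arms, so the number of distinct page renderings is the product
// of the arm counts, i.e. it grows exponentially with the module count.
function countVariants(armsPerModule) {
  return armsPerModule.reduce((total, arms) => total * arms, 1);
}

// A page with 5 modules, each with a 2-arm (control/test) experiment:
console.log(countVariants([2, 2, 2, 2, 2])); // 32 distinct renderings

// Deterministic bucketing: hash the user id so the same user always
// lands in the same arm (the hash scheme here is illustrative).
function bucket(userId, arms) {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % arms;
}
```

A real experiment platform would layer traffic splitting and mutual exclusion on top, but the core cost (variants multiplying across modules) is exactly this product.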

Page complexity – cross-platform

Another factor is the many cross-platform scenarios in our business. The same page may be shipped to many different platforms. A common example is Android versus iOS, which may differ slightly. Another is WeChat versus QQ: their client environments differ, and since we often need to call host-client capabilities, the client interfaces differ as well, all of which our code must accommodate. This cross-platform nature adds yet more complexity to the page. Putting the two together: the pages we see today are out of proportion to the complexity behind them, many are extremely complex, and we have to consider their performance when developing them.

Page performance optimization – SSR

An important tool for performance is server-side rendering (SSR). Compared with client-side rendering, SSR's first advantage is faster first-screen rendering. There are several reasons: for one, the server's request to the backend data service travels a much shorter network path than a request from the user's browser, so the data comes back faster.

Second, server-side rendering results can themselves be cached, making rendering even more efficient than on the client. Another obvious advantage is SEO friendliness: the output of server-side rendering is an HTML fragment that can be displayed directly, so search-engine crawlers have an easier time capturing the key information.

To sum up the analysis: on the one hand our pages keep getting more complex, and on the other hand we must not let that complexity drag performance down. To put the scene more vividly, a page contains more and more components, and, more strikingly, behind the components it displays there are many more that are never shown.

Problem to be solved

So the problem we need to solve can be abstracted into three points:

The first is isolation of components. Faced with a complex page, every front-end engineer naturally reaches for the same move: split it into components, pull each component out into its own place, and develop and maintain it there. That is standard practice, so the first requirement is to guarantee isolated component development.

The second is dynamism, because which components a page displays is decided at runtime. In the traditional approach, if every possible component is bundled into the page, the page becomes bloated and redundant; we need the ability to load components dynamically, on demand.

The third is the performance issue mentioned above: we want our components to support server-side rendering and, combined with other optimizations, improve the first-screen rendering performance of the whole page.

To solve these problems, our team asked whether components could be turned into services. Concepts such as cloud functions and Serverless are very popular today, so we leaned on Serverless capabilities and moved our components onto it. Each component can be understood as a miniature server-rendered page: the main page requests rendering results from these cloud functions and displays them. This achieves server-side rendering at the finer granularity of the component, with each component relatively isolated.

Deploying as cloud functions has a further advantage: the front end does not need to care about deployment or operations. In typical scenarios, cloud functions simplify the operational cost of the whole CI/CD pipeline, which was the original motivation for the scheme.

II. Scheme design and evolution

The second part covers the design and evolution of the whole scheme: the many technology choices we faced when we hit different problems, how we chose among them, and how we arrived at the final solution.

Dynamic loading – BigPipe

First, dynamic loading. We use BigPipe. Briefly, BigPipe is a solution proposed by Facebook to speed up web-page loading. It has two steps: first, divide the page into blocks called pagelets, which is a very common practice; then deliver them progressively. Let's look at an example of what this process looks like.

The first piece is something like a skeleton screen, which everyone is familiar with by now. The page has a basic placeholder layout that is static and can be returned to the user immediately, reducing the perceived wait for rendering and improving the user experience.

Our BigPipe-based approach: when the user's browser makes a request to the server, we first return the static skeleton-screen HTML directly, so the user sees a basic result in the shortest time. The server then makes requests to the component services; as each component finishes rendering, its result is returned and progressively rendered into the user's page. That is the basic scheme.
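
The flow above can be sketched as follows. This is a minimal illustration of the BigPipe idea, not our actual service code; `res` stands in for anything with a `write()` method, such as a Node.js `http.ServerResponse` using chunked transfer encoding, and `mount` is an assumed client-side helper.

```javascript
// BigPipe sketch: flush the static skeleton immediately, then stream
// each component's server-rendered HTML as soon as its render finishes,
// instead of waiting for the slowest component.
async function renderBigPipe(res, components) {
  // 1. The skeleton goes out first so the user sees placeholders at once.
  res.write('<html><body><div id="skeleton">loading…</div>');

  // 2. Kick off all component renders in parallel and flush each one the
  //    moment it completes.
  await Promise.all(
    components.map(async ({ id, render }) => {
      const html = await render();
      res.write(`<script>mount(${JSON.stringify(id)}, ${JSON.stringify(html)})</script>`);
    })
  );

  res.write('</body></html>');
}
```

The key property is that the first byte of useful HTML leaves the server before any component has rendered at all.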

Page slices

Guided by this idea, we need to do two things. The first is slicing the page, via the skeleton screen: we pull every component out of the page, whether it needs to be displayed or not, make it independent, and maintain it as a separate repository. This keeps development and maintenance costs low.

The second concerns deployment. As mentioned earlier, we deploy components directly to Serverless, so developers need not worry about server-side deployment, maintenance, operations, and other tedious work. As for the main page, we treat it as a base. On the base you can develop normally; components that do not need this scheme, ones that hardly depend on data and can be displayed directly, can be developed and server-rendered in the usual way. When the server needs to render the dynamic components, it introduces them dynamically through an SDK and renders them.

Loading on demand

There are different demands on on-demand loading. The first is direct rendering, which is very simple: I declare whichever component I need and render it.

You can see that each component service has its own specific name, which amounts to a name service: the name can be resolved to its corresponding Serverless function, and the rendered result fetched directly. The second option is to insert the component into the original page with JSX and render it via CSR. React itself has lazy-loading support, such as React.lazy: we pull back the component service's result and render it lazily into the page. While the component has not yet arrived, a loading state is shown; once it arrives, its rendered result is displayed. In this way we get components that load at runtime: first show the static skeleton screen, then have the main JS request the component service and pull in the component's rendering result.
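
A toy version of this name-service idea might look like the following. The registry entries, component names, and the lazy-style wrapper are all illustrative assumptions, not our real SDK API.

```javascript
// Name service sketch: each component service is addressable by name,
// and the page resolves the name to a loader that fetches the
// server-rendered result for that component.
const registry = {
  'live-card': async () => '<div>live card html</div>',
  'rank-list': async () => '<div>rank list html</div>',
};

async function loadComponent(name) {
  const loader = registry[name];
  if (!loader) throw new Error(`unknown component service: ${name}`);
  return loader();
}

// React.lazy-like behaviour in miniature: render a placeholder first,
// then swap in the component's rendered result once it arrives.
function renderLazily(name, update) {
  update('<div class="loading">loading…</div>');
  loadComponent(name).then(update);
}
```

In the real scheme the loader would hit the component's cloud function over HTTP rather than an in-memory table, but the addressing model is the same.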

Asynchronous components

Beyond the dynamism and isolation of the components themselves, the second thing we must focus on is component performance, and it is for this reason that we did not directly adopt the community's existing dynamic-import solutions. You may wonder why: many people know that React itself provides the ability to import a component dynamically, so that the component is not bundled into your page at build time but introduced at runtime. The problem is that this does not support server-side rendering.

If we run that scheme directly on the server, it throws an exception: React's dynamic import does not support server-side rendering. For SSR, the community points to a library called Loadable Components. We tried it, but it did not meet our requirements, mainly because its server-side rendering is not exhaustive: it only ends up rendering the loading part.

Look at this official code snippet: deployed on the server, it really renders only the loading state, and the component itself still has to be executed by the client, so the server-side result is incomplete.

Of course, one might think of another idea: could we collect component dependencies at build time, then, when rendering on the server, read the component's code via fs (the file system) and render it that way?

This is in fact the approach Next.js takes. Here is a sample; let me briefly describe what the code does:

I've put a condition in here to simulate an ABTest: an ABTest essentially judges something about the user and, if certain criteria are met, renders what we call the white card; otherwise it renders the black card. This is a very typical ABTest scenario.

In this example, if the condition is true it dynamically loads the WhiteCard component, which supports server-side rendering; if false, it dynamically loads the BlackCard component. This seems to match our earlier vision: first, components can be pulled dynamically; second, server-side rendering is supported.

But it is not as good as it looks, because there is a subtle problem. Notice that the demo condition, `1 > 0.5`, is synchronous code: it executes synchronously. In a real ABTest, however, deciding which strategy to render is usually asynchronous. The page fires a request to the central ABTest service, an algorithm assigns the user to an experimental or control group according to various conditions, the result comes back, and only then does the page decide whether to render the white card or the black card. In the Next.js model, this fails as soon as the decision itself is asynchronous, that is, asynchrony nested inside asynchrony. Since that nesting is utterly typical of ABTest, the scheme could not fully meet our needs. After comparing the options, we settled on our own approach.
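
The async-inside-async case described above can be made concrete with a small sketch. The experiment service, component names, and simulated arm assignment below are all illustrative assumptions.

```javascript
// The arm is only known after an asynchronous call to the experiment
// service, and only then do we know which component to load. A static,
// build-time dependency analysis cannot resolve this.
async function fetchExperimentArm(userId) {
  // In reality this hits the central ABTest service; simulated here.
  return userId.endsWith('1') ? 'white-card' : 'black-card';
}

const componentServices = {
  'white-card': async () => '<div class="white">white card</div>',
  'black-card': async () => '<div class="black">black card</div>',
};

async function renderExperiment(userId) {
  const arm = await fetchExperimentArm(userId); // async decision…
  return componentServices[arm]();              // …then async load
}
```

Because the component name is a runtime value, the renderer must be able to resolve and server-render any arm on demand, which is exactly what the component-service model provides.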

We can look at a sequence diagram like this:

Sequence diagram of CaaS BigPipe SSR

When the request comes from the user, it goes to a cloud service. The cloud service can be understood as a server-rendered page service, but we can do some logic processing inside it, such as ABTest logic or the cross-platform logic mentioned above. What comes out is which components need to be rendered; the cloud service then requests the corresponding component services on the cloud, and those component services call their respective backend interfaces. Since the cloud service, cloud functions, and backend interfaces all communicate over the internal network, this is necessarily faster than the user's client querying the backend interface directly.

In fact, the cloud service has already returned the skeleton screen at the first moment, so the user sees the skeleton result immediately. Then, as each cloud-function component finishes rendering, it is returned to the user, and the user's page progressively renders the components' content.

Several reasons make this approach better for performance. First, as mentioned, intranet calls to the backend interface are faster than the client's long network path. Second, it supports chunked return: components that finish first can be returned and displayed first, unlike traditional server rendering, which must wait for all components before returning anything. Third, back to chunking: server-side rendering results can be cached, but traditionally that cache is page-granular. For a page like ours, a whole-page server-rendering cache has a very poor hit ratio, because most of the time the cache is useless.

This page is very typical. At the top is a relatively fixed component showing that the anchor's current live broadcast has ended; it basically does not change. Unless the anchor changes their avatar or tweaks their nickname, which is very infrequent, a cache here is genuinely valuable, because it is valid most of the time.

The section below is called "Everyone is watching": it shows recently popular anchors and refreshes often. Caching that component makes little sense and actually degrades the experience, because the user would see a stale result flash and then be replaced by the latest correct one. With CaaS we can cache the stable component, where hits are almost always valid, and skip caching the constantly changing part below. The page's caching granularity becomes component-level, which is far more meaningful than traditional page-level caching.
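
Per-component caching as described above can be sketched with a tiny TTL wrapper. This is a minimal illustration, not our production cache; the render functions and TTL values are assumptions.

```javascript
// Component-level caching sketch: each component service decides its own
// TTL, so the stable "anchor info" block can cache for minutes while the
// volatile "everyone is watching" block simply opts out (no wrapper).
function cachedRenderer(render, ttlMs) {
  let cached = null;
  let expires = 0;
  return async (now = Date.now()) => {
    if (cached !== null && now < expires) return cached; // cache hit
    cached = await render(); // miss or expired: re-render and re-arm TTL
    expires = now + ttlMs;
    return cached;
  };
}
```

The `now` parameter is only there to make the behaviour easy to demonstrate deterministically; in real use the default `Date.now()` applies.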

The call chain

Of course, this makes the whole request chain longer and more complex. There are many links in the middle, and for the stability of the whole service we need to monitor every one of them. We can leverage the cloud function's own monitoring, which depends on the capabilities of the cloud vendor you choose. Depending on the actual business, we can also consider merging the cloud service and the cloud functions, that is, integrating them into your server-rendering service.

The advantage is one fewer layer of links, fewer places for errors in the middle, and a slight gain in network access speed. Whether to do it is a judgment call based on your actual business; the basic criterion is how much the component services on cloud functions are reused. If reuse is high, say a component used on every page, I would prefer deploying it separately. If the reuse rate is not that high, integrating it directly into the cloud service reduces the overall complexity of the solution.

Another point is service degradation. The service's own stability can falter, through a sudden traffic spike, a service exception, and so on. When we find that the whole request times out or fails, we forward it to the conventional CDN path: a normal request for HTML, JS, and CSS, followed by client-side rendering. It is less efficient and slower, but it guarantees the page still opens, so this degradation scheme guards against abnormal situations.
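
One common way to wire up such a degradation path is a race between the SSR pipeline and a timeout. This is a sketch under stated assumptions (names, timings, and the fallback shape are illustrative), not our actual gateway code.

```javascript
// Degradation sketch: race the SSR pipeline against a timeout, and fall
// back to the plain CDN page shell (client-side rendering) on timeout or
// error, so the page still opens when the service misbehaves.
function withFallback(ssrRender, fallbackHtml, timeoutMs) {
  const timeout = new Promise((resolve) => {
    const t = setTimeout(() => resolve(fallbackHtml), timeoutMs);
    if (t.unref) t.unref(); // in Node.js, don't keep the process alive for it
  });
  // A rejected/throwing SSR also degrades to the fallback.
  return Promise.race([ssrRender().catch(() => fallbackHtml), timeout]);
}
```

The same pattern applies one level down: each individual component request can race its own timeout, so a single slow component degrades alone instead of stalling the page.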

Component conflicts

After all this, another thing to consider is that components may conflict. There are three main points:

The first is JS isolation: especially if your JS touches global capabilities, or a component's code has global side effects, different components may interfere with each other more than the design intended.

The second is CSS isolation. Simply put, this is about style pollution: preventing different components from writing the same style and overwriting each other. Isolation here is relatively simple, and there are many approaches. We use CSS Modules, which automatically rewrites CSS class names per component, so identical selectors in different components cannot collide. There are other practices in the industry, covered by many micro-frontend solutions, so I won't go into them here.
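
For reference, a minimal webpack rule enabling CSS Modules might look like this. This is a generic sketch, not our team's actual build config, and it assumes `style-loader` and `css-loader` are installed.

```javascript
// CSS Modules rewrite class names per component (for example `.title`
// becomes something like `.LiveCard__title--x7f2a`), so two components
// can both declare `.title` without clobbering each other.
module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          'style-loader',
          {
            loader: 'css-loader',
            options: {
              modules: {
                // File name + local class + hash keeps names unique.
                localIdentName: '[name]__[local]--[hash:base64:5]',
              },
            },
          },
        ],
      },
    ],
  },
};
```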

The third is the dependency-sharing problem. Think of it this way: component A depends on some base library, and component B depends on the same library. If A bundles the library and B bundles it too, the page ends up loading the library repeatedly. We need to solve this by extracting such common dependencies, while non-shared dependencies stay bundled into the component itself. I'll present a dedicated solution for separating these JS dependencies from the components later.

The sandbox

JS isolation basically comes down to a concept called the sandbox: creating a separate environment for each component to isolate it from the others. Eliminating side effects is also important: component A's timers and component B's timers should not affect each other. There are many ways to do this, mostly from the micro-frontend field, and they all generally cover the same ground.

There are two main kinds of sandbox. The first is the singleton pattern: a single sandbox on the page, with everything done inside it. The second is the multi-instance pattern: each component gets its own sandbox. There are many ways to implement each; this is just an overview, and the specifics depend on your actual business scenario.

The first is the closure approach. Simply put, using function scope, you wrap a function around the code so that an isolated scope is created inside the global scope. You pass in your own `window` object and assign to it, so that within the function scope the component's `window` is isolated from the truly global `window` outside.

One question, of course: if my component lives inside a function scope (that is, in some sandbox) and it needs, say, the `location` capability of `window`, what then? There are two ideas. The first is to simulate these basic browser capabilities in code yourself, which is more complicated; you can imagine how much work it would be to reimplement them all. The second is a trick: create a new iframe inside the component and attach the component's `window` to the iframe's. With just a few lines of code you get a complete global `window` capability. There are some minor issues to avoid around cross-domain situations, but closures are one major class of solution.
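
A common variant of the closure approach backs the fake `window` with a Proxy: reads fall through to the real global, writes stay local. This is a simplified sketch of that idea (similar in spirit to micro-frontend sandboxes), not a complete production sandbox.

```javascript
// Closure + Proxy sandbox sketch: component code runs inside a function
// whose `window` parameter shadows the real global. Reads of properties
// the component never set (e.g. `location`) fall through to the real
// window; writes land in the sandbox and never leak out.
function createSandbox(realWindow) {
  const local = Object.create(null);
  return new Proxy(local, {
    get: (target, key) => (key in target ? target[key] : realWindow[key]),
    set: (target, key, value) => {
      target[key] = value; // writes stay inside the sandbox
      return true;
    },
    has: () => true,
  });
}

// Component code is wrapped so that its `window` is the sandbox.
function runInSandbox(code, sandbox) {
  new Function('window', code)(sandbox);
}
```

A full implementation would also intercept `delete`, property enumeration, and APIs like timers, but the shadowing principle is the same.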

The second category is the snapshot, which relates to the singleton pattern mentioned earlier: there is only one sandbox globally, but each component, when it uses these global capabilities, must save the scene before it starts and restore the original state when it finishes; when another component needs the environment, and later this component needs it again, the same dance repeats. There is inevitably quite a bit of scene-switching code, but it is one viable way. Approaches based on Proxy can also be considered. We are still exploring this area; there is no final, fully settled approach, and each has its own advantages.
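
The save-the-scene / restore-the-scene dance can be sketched as a snapshot sandbox. This is a minimal illustration of the pattern (a shallow diff against a saved copy), not a complete implementation; real sandboxes also handle deleted keys and non-enumerable properties.

```javascript
// Snapshot (singleton) sandbox sketch: before a component runs, snapshot
// the global object; after it yields, diff against the snapshot, record
// the component's own changes, and restore the original state.
class SnapshotSandbox {
  constructor(global) {
    this.global = global;
    this.mods = {}; // this component's recorded changes
  }
  activate() {
    this.snapshot = { ...this.global };    // save the scene
    Object.assign(this.global, this.mods); // replay this component's prior changes
  }
  deactivate() {
    for (const key of Object.keys(this.global)) {
      if (this.global[key] !== this.snapshot[key]) {
        this.mods[key] = this.global[key];     // record what changed
        this.global[key] = this.snapshot[key]; // restore the original
      }
    }
  }
}
```

Because there is only one real global, only one component's sandbox can be active at a time, which is exactly the singleton trade-off mentioned above.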

Next, how to solve component dependency sharing. If multiple components rely on the same library, for example when in most cases my components all rely on the React base library, bundling it into every component is out of the question. What then?

The way we used to do this is Webpack externals: if a component has public dependencies, they are stripped from the build output and accessed at runtime through global variables. This keeps every component from bundling the base libraries, keeps each component's code relatively small, and avoids duplicate loading. The approach is very simple, just a slight change to the Webpack configuration, but it has potential problems: some public packages do not support this mode, because they cannot be exported as a variable attached to the global object, and then this solution cannot be used.
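
For reference, the externals setup described above is a few lines of webpack config. This is a generic sketch rather than our exact build file.

```javascript
// Externals sketch: React is stripped from each component bundle and
// resolved from a global variable at runtime, so the base page loads it
// once (e.g. from a CDN <script> tag) and every component shares it.
module.exports = {
  externals: {
    react: 'React',          // `import React from 'react'` → window.React
    'react-dom': 'ReactDOM', // likewise for ReactDOM
  },
};
```

This is exactly where the limitation bites: a package that cannot expose itself as a single global variable has nothing for the external mapping to point at.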

Module Federation

Luckily, Webpack 5 was released a short while ago, and it has a very interesting feature called Module Federation. It lets one application depend on components or dependencies of another application, and the nice part is that neither application has to worry about those dependencies during development. At runtime, a special mode dynamically loads the basic modules from the other application.

Here is my example code for Module Federation: you add a plugin to your build that declares the other application and the libraries you need to rely on from it. For example, if you need the React base package from another application, it will not be bundled at build time; at runtime it is read in from the other application. One application can consume code from another, which is an elegant solution to dependency sharing between our components.
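
A Module Federation host configuration might look like the following. The application names and the remote URL are illustrative assumptions; only the plugin shape comes from Webpack 5 itself.

```javascript
// Module Federation sketch (Webpack 5): the host declares the remote it
// consumes and the dependencies it shares, instead of bundling them.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        // Load the `components` app's exposed modules at runtime from
        // its remote entry (URL is illustrative).
        components: 'components@https://cdn.example.com/remoteEntry.js',
      },
      shared: {
        // React is loaded once and shared across apps rather than
        // bundled into each one.
        react: { singleton: true },
        'react-dom': { singleton: true },
      },
    }),
  ],
};
```

Unlike externals, the shared packages don't need to exist as global variables; the federation runtime negotiates a single copy between applications.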

Summary

We have covered several points, all revolving around the three core problems we set out to solve:

  • First, isolation: how can components be developed independently? We deploy each as a separate service, not tied to the original page.
  • Second, on-demand loading: BigPipe lets us return quickly and pull components dynamically.
  • Third, conflicts among multiple components: the JS sandbox solution, the CSS isolation solution, and the dependency-sharing solution.

Through the approaches above, both the page and the components themselves support server-side rendering, so we keep the advantages of SSR and hold the overall rendering performance in a good state.

III. Practice and results

So let's look at the concrete results of this practice in our business.

Development costs

As mentioned above, this solution carries some development cost: the component itself must be modified so it can be deployed as a service, and the main page needs basic modifications so it can load the required components. We built a toolchain to address both. For components, most of the work, whether CI/CD deployment or the Webpack configuration for development, is fixed and can be handled quickly with scaffolding.

For what we call the base, the basic page, the SDK encapsulates most of the ability to call component services and generate skeleton screens. With this toolchain in place, for a business heavy on ABTest, say, the migration cost for developers is quite low: you don't need to care about much and can follow your normal development workflow.

Another important point is performance. Take a business page currently online as an example:

You can imagine that, first of all, its performance is clearly better than a non-server-rendered page. Even compared with ordinary server-rendered pages, it beats the full-page rendering time thanks to chunked rendering. This is especially true on slow networks.

In an experiment on a slow network, first-screen performance improved from 4 seconds to 2000 milliseconds, and the detailed report shows gains under the various test conditions; this is also borne out by live-network speed measurements. Overall, the performance improvement is significant. That said, the current scheme is not perfect, and some links can be optimized further. For instance, consider the connection between client and server: after servitization, the component is deployed on a central cloud function, whereas the traditional way leans on the CDN for acceleration. Various CDN vendors are now exploring dynamic compute on the CDN, which leads into another topic, the field of edge computing. We could deploy our component services on CDN nodes and establish connections with users faster.

Performance optimization scheme

Another point: our business turns out to be heavily data-dependent. A page asks for a lot of data, some of it fast to query and some slow. We can optimize the interface data with a so-called hot/cold separation: return the hot, fast data as soon as possible, and let the cold data come back later, after the initial render. That is another improvement point. Serverless itself also has a cold-start problem: if a function is not called for a long time, its instances may scale down, and the next call pays a slow startup cost. Serverless platforms have the usual tools for optimizing this scenario.
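
The hot/cold separation mentioned above can be sketched as follows. The function names and the example fields are illustrative assumptions, not our real interface layer.

```javascript
// Hot/cold split sketch: answer with the fast ("hot") fields immediately
// and fill in the slow ("cold") fields afterwards, instead of blocking
// the whole response on the slowest query.
async function renderWithHotColdSplit(fetchHot, fetchCold, emit) {
  const hot = await fetchHot();      // fast query: title, anchor info…
  emit({ phase: 'hot', data: hot }); // first paint can happen now
  const cold = await fetchCold();    // slow query: recommendations…
  emit({ phase: 'cold', data: cold });
}
```

Combined with BigPipe-style chunked delivery, the hot phase maps naturally onto the first flushed component fragments.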

IV. Summary

Finally, the fourth part: a summary.

We know there is no silver bullet in software engineering, meaning there is no cure-all solution in the field. CaaS is no different; it focuses on specific scenarios: in complex, component-heavy pages, achieve component isolation and dynamic loading while still taking page performance into account, so that overall performance stays in a relatively good state. What we ended up with is components running in services, and putting components in services opens up a whole new space of imagination.

At present, many cloud vendors talk about the concept of edge computing. One form is based on physical edge servers and edge containers; another builds on the CDN, doing edge computing directly on CDN nodes. Simply put, the traditional CDN only accelerates access to static resources such as JS and CSS; the CDN is closer to the user, so you fetch them faster. In the future, CDN nodes may gain dynamic capability, running computation on the node itself. We could do server-side rendering there and deliver the rendered result to the user even faster.

Another point: we can also work on code management and component reuse. As previous speakers have said, component reuse can be further optimized. There is also an HTTP/2 capability, Server Push, that may not get much attention: with Server Push, the server can proactively push files to the client in advance. For example, the JS and CSS of our components can be pushed to the user ahead of time and cached on the client, further reducing delivery time. Besides rendering a result, a page must finish loading its JS before it can respond to the user's operations; if we can push the JS into users' hands earlier, users can interact sooner, which is another point that can be optimized. In addition, the interface-layer separation mentioned earlier is an optimization of the interfaces themselves.

About the team

The last part is a team introduction. We are Tencent's IVWEB team, based in Shenzhen, mainly working on live-streaming-related products; some of the products inside Tencent Live are continuously developed by our team. The team also focuses heavily on community building and external exchange, such as the annual TLC conference, and team members regularly share technology at conferences, so it is a team with a good technical atmosphere. We also have some open-source products; visit our GitHub if you are interested.

Recommended books

Finally, a book recommendation: WebAssembly, written by Yu Hang, a senior engineer at PayPal in Shanghai. Why this book? On the one hand, WebAssembly is a technology I find very cool and future-oriented; there is great room for development on top of it, and it can push the frontier of the front end much wider. Another important reason is that I have heard Yu Hang share offline. He is a WebAssembly evangelist in China, with a very deep understanding of it and many excellent practices. There is a lot to learn from his offline talks, so read this book if you are interested in WebAssembly.

That’s all for today’s sharing. Thank you.
