Hello again, everyone.

Preface

With the rapid development of Internet technology, front-end code has become increasingly complex, and that complexity translates into larger client bundles: users must download more content before the page can render. Server-side rendering is one of several powerful techniques front-end engineers have developed to reduce first-screen rendering time and improve the user experience.

Server-side rendering

To explain what server-side rendering is, let's start with a flow chart of traditional client-side rendering (CSR):

As the chart shows, the CSR pipeline is long and must go through:

  1. Request the HTML
  2. Request the JS
  3. Request the data
  4. Execute the JS

To reduce First Paint (FP) time, front-end engineers introduced server-side rendering (SSR):

However, although SSR reduces FP time, it leaves a long non-interactive gap between FP and Time To Interactive (TTI). In extreme cases, users are confused: "Hey, there is already content on the page, why can't I click or scroll?"

To sum up:

What CSR and SSR have in common is that both return the HTML first, because HTML is the foundation of everything.

After that, CSR returns the JS first and then the data, so the page becomes interactive before its first meaningful render.

SSR returns the data first and then the JS, so the page completes its first render before becoming interactive, and users see content sooner.

However, there is no inherent conflict between returning JS first and returning data first: the two should not block each other serially, but run in parallel.

Their mutual blocking produces an unsatisfactory stretch between FP and TTI. To make them parallel and further improve rendering speed, we need to introduce the concept of streaming server-side rendering.
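The gain from breaking that serial chain can be sketched with plain Promises (the 30 ms / 20 ms delays below are invented stand-ins for the JS and data requests):

```javascript
// Stand-ins for the two round-trips a page needs after the HTML arrives.
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms))
const fetchJs = () => delay(30, "js")     // pretend JS takes 30 ms
const fetchData = () => delay(20, "data") // pretend data takes 20 ms

// Serial: the data request only starts after JS has loaded (~50 ms total).
async function serial() {
  const js = await fetchJs()
  const data = await fetchData()
  return [js, data]
}

// Parallel: both requests start immediately (~30 ms total).
function parallel() {
  return Promise.all([fetchJs(), fetchData()])
}

async function main() {
  let t = Date.now()
  await serial()
  const serialMs = Date.now() - t
  t = Date.now()
  await parallel()
  const parallelMs = Date.now() - t
  console.log(`serial ~${serialMs}ms, parallel ~${parallelMs}ms`)
}
main()
```

Streaming SSR applies the same idea at the protocol level: the JS request and the data-bearing HTML chunks travel concurrently instead of one after the other.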

The basic idea

To sum up, the ideal streaming server rendering flow is as follows:

At the same time, to maximize loading speed, we need to reduce the Time To First Byte (TTFB). The best way is to reuse requests, so only two requests need to be sent:

  1. Request the HTML: the server first returns the skeleton screen's HTML, then returns the required data (or HTML with the data baked in), and finally closes the request.
  2. Request the JS: once it returns and executes, the page is interactive.

Why "streaming server-side rendering"? Because the body of the HTML response is a stream: it first returns the synchronous HTML (the skeleton screen / fallback), then, once the data request succeeds, returns the corresponding asynchronous HTML, and finally closes the HTTP connection.

The advantages are:

  • The data request runs in parallel with the JS request, whereas most previous solutions were serial.
  • In the optimal case, only two requests are sent, significantly reducing total TTFB time.

However, SSR frameworks usually execute the render function only once. For the framework to know which parts are in a loading state and which already have data, we need to upgrade it, starting with lazy and Suspense.

lazy with Suspense

Next, let's take a closer look at how they serve streaming server rendering by briefly walking through their implementation. The simplest lazy looks like this:

```js
function lazy(loader) {
  let p
  let Comp
  let err
  return function Lazy(props) {
    if (!p) {
      p = loader()
      p.then(
        exports => (Comp = exports.default || exports),
        e => (err = e)
      )
    }
    if (err) throw err
    if (!Comp) throw p
    return <Comp {...props} />
  }
}
```

The main logic is to load the target component. While it is still loading, the corresponding Promise is thrown (or the error, if loading failed); otherwise the component renders normally.

Why choose a design like throw? Because at the syntax level, only throw can jump out of multiple layers of function calls and continue execution at the nearest catch; other control-flow keywords such as break, continue, and return only operate within a single function or statement block.

This may surprise readers who only ever use throw with errors, but sometimes you need the ability to think outside the box.
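Stripped of React entirely, the throw-a-Promise control flow looks like this (a toy scheduler with invented names, not React's real reconciler):

```javascript
// A "component" that throws its pending Promise on the first call
// and returns real markup once the data has arrived.
let data
const loadData = () =>
  new Promise(resolve => setTimeout(() => resolve("hello"), 10))

function Component() {
  if (data === undefined) throw loadData().then(v => (data = v))
  return `<div>${data}</div>`
}

// A toy scheduler: catch the thrown Promise, emit a fallback,
// and re-render when the Promise resolves.
function render(component, onResult) {
  try {
    onResult(component())
  } catch (e) {
    if (e instanceof Promise) {
      onResult("<div>loading...</div>")         // fallback first
      e.then(() => render(component, onResult)) // retry after resolve
    } else {
      throw e // real errors still propagate
    }
  }
}

const frames = []
render(Component, html => frames.push(html))
```

The throw escapes `Component` and every intermediate call in one jump, which is exactly why no other control-flow keyword would do here.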

Suspense

lazy is usually used with Suspense, and a simple Suspense looks like this:

```js
function Suspense({ children, fallback }) {
  const forceUpdate = useForceUpdate()
  const addedRef = useRef(false)
  try {
    return children
  } catch (e) {
    if (e instanceof Promise) {
      if (!addedRef.current) {
        e.then(forceUpdate)
        addedRef.current = true
      }
      return fallback
    } else {
      throw e
    }
  }
}
```

The main logic is: try to render children; if children throws a Promise, render the fallback instead, and re-render when the Promise resolves.

It doesn't really matter whether the Promise comes from lazy or from a data fetch. Inside real frameworks, however, Suspense is not usually written this way; the simplest implementation of Suspense is:

```js
function Suspense({ children }) {
  return children
}
```

Yes, that's it: the same code as a Fragment. It merely provides a flag bit for the scheduler.

To improve scalability and robustness, React uses Symbol internally as the flag bit, but the principle is the same.
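The flag-bit idea can be sketched as follows. `Symbol.for('react.suspense')` is the tag React really registers, but the element factory and the scheduler branch below are invented for illustration:

```javascript
// Special components are tagged with a Symbol instead of carrying logic;
// Symbol.for returns the same symbol even across duplicate module copies.
const SUSPENSE_TYPE = Symbol.for("react.suspense")
const FRAGMENT_TYPE = Symbol.for("react.fragment")

// A toy element factory and the scheduler branch that reads the flag.
const createElement = (type, props, ...children) => ({ type, props, children })

function renderKind(element) {
  switch (element.type) {
    case SUSPENSE_TYPE:
      return "suspense" // set up a Promise catch and a fallback
    case FRAGMENT_TYPE:
      return "fragment" // nothing special, just render children
    default:
      return "host"     // an ordinary DOM tag such as "div"
  }
}

console.log(renderKind(createElement(SUSPENSE_TYPE, { fallback: "..." }))) // "suspense"
console.log(renderKind(createElement("div", null)))                        // "host"
```

The component body stays empty; all the special behavior lives in the scheduler's branch on the tag.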

When the scheduler processes this component and is interrupted by a throw, it falls back:

```js
try {
  updateComponent(WIP) // interrupted by throw
} catch (e) {
  WIP = WIP.parent               // back up to the Suspense component
  WIP.child = WIP.props.fallback // swap the child pointer to the fallback
}
```

Some frameworks, such as Vue and Preact, whose underlying data structure is not a Fiber-style linked list, instead set up two placeholders and decide which one to render based on the state at scheduling time:

```html
<Suspense>
  <template #default>
    <article-info/>
  </template>
  <template #fallback>
    <div>Loading...</div>
  </template>
</Suspense>
```

Since this is not the focus here, I won't expand on it; interested readers can dig into the source code.

The last building block

Having explored the principles of lazy and Suspense, let’s put in the last building block for streaming server rendering: the SSR framework.

```js
app.get("/", (req, res) => {
  res.write("<!DOCTYPE html><html><head><title>My Page</title></head><body>")
  res.write("<div id='root'>")
  const stream = ReactDOMServer.renderToNodeStream(<App />)
  stream.pipe(res, { end: false })
  stream.on("end", () => {
    res.write("</div></body></html>")
    res.end()
  })
})
```

In renderToNodeStream, each component is rendered directly into the stream. From the browser’s point of view, you might receive an HTML string like this:

```html
<html>
  <body>
    <div id="root">
      <input />
      <div>some content</
```

It looks like half of it. How do you show that?

Don't worry: modern browsers are so fault-tolerant of HTML that they can render even this half intact, and that tolerance is the foundation of streaming server rendering.

In Suspense, when the work-in-progress node needs a fallback, the fallback is written into the stream and the Promise is kept; when the Promise resolves, the corresponding replacement code is written into the stream. A simple example:

```html
<html>
  <body>
    <div id="root">
      <div className="loading" data-react-id="123" />
```

When the Promise resolves, return:

```html
<div data-react-id="456">{content}</div>
<script>
  // illustrative only: this API does not actually exist
  React.replace("123", "456")
</script>
```

Inline JS scripts replace the DOM nodes as the stream loads.

It looks like this:

```html
<html>
  <body>
    <div id="root">
      <div className="loading" data-react-id="123" />
      <!-- JS returns and executes -->
      <script src="./index.js"></script>
      <!-- The client merges the server HTML with the client virtual DOM using
           the "partial hydration" algorithm, skipping nodes managed by Suspense -->
      <div data-react-id="456">{content}</div>
      <script>
        React.replace("123", "456")
      </script>
    </div>
  </body>
</html>
```
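The swap performed by that inline script can be simulated outside the browser (a Map stands in for the DOM, and the `replace` helper is invented, as the example notes; in a real page it would boil down to `querySelector` plus `replaceWith`):

```javascript
// A minimal stand-in for the DOM: data-react-id -> rendered HTML.
const dom = new Map()
dom.set("123", '<div class="loading"></div>') // the streamed fallback node

// What the streamed inline <script> would call (hypothetical API).
function replace(fallbackId, contentId, contentHtml) {
  dom.delete(fallbackId)          // remove the skeleton node
  dom.set(contentId, contentHtml) // mount the real content in its place
}

replace("123", "456", "<div>actual content</div>")
console.log(dom.has("123"))  // false: the skeleton is gone
console.log(dom.get("456"))  // the real content
```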

Conclusion

Streaming server rendering opens a new door for reducing rendering time and improving the user experience. However, it is still largely theoretical: the major frameworks are still working on it and no demo is available yet, so stay tuned.

The original link: bytedance.feishu.cn/wiki/wikcn5…


The Data Platform front-end team is responsible for developing big-data products such as Fengshen, TEA, Libra, and Dorado. We are passionate about front-end technology: beyond product development, we have explored and accumulated expertise in data visualization, massive-data processing optimization, Web Excel, WebIDE, private deployment, and engineering tooling. Visit the recruitment page on our team homepage and send us your resume.