Single-page applications (SPAs) are now widely used. Unfortunately, we are moving faster than search engine crawlers and older browsers can keep up. It is hard to build a good single-page application, only to have SEO and browser compatibility problems render the effort pointless.

I’m sure many of you have thought about server-side rendering. However, a look at the Vue/React documentation on server-side rendering may have put you off: existing code does not migrate seamlessly, and every project needs to run its own Node service. Of course, teams with strong infrastructure skills can manage this centrally.

As a result, when I tried to adopt Vue or React in my projects, I ran into strong resistance. While I was racking my brain over this, I found a new approach.

Not long ago, a colleague shared Puppeteer in a group chat, along with its description from GitHub:

Puppeteer is a Node library which provides a high-level API to control headless Chrome over the DevTools Protocol. It can also be configured to use full (non-headless) Chrome.

In other words: a Node library that provides an API for controlling headless Chrome.

More specifically, it lets you “simulate” real Chrome visiting a page from within a Node environment, including simulating user interactions, accessing the DOM, and so on.

Since it can visit pages and output the rendered HTML just like real Chrome, why not use it for server-side rendering?

Imagine we had a service, A, that could visit any given page the way Chrome does and return the final rendered DOM to you.

Your original business server, B, then only needs to determine whether a request comes from a crawler or an older version of IE; if so, it calls service A, receives the rendered HTML, and returns that HTML to the user. That is the whole server-side rendering implementation. The general flow chart is as follows:

Once we have this idea, we can try to put it into practice, and practice is largely a matter of solving problems. Thinking it through carefully, we will run into the following questions:

Q1: Even when emulating Chrome to request pages, views are often rendered asynchronously. For example, the page requests a list API and only renders the list DOM once the data arrives. We have no control over that timing, so when should the service return the loaded HTML?

The first place to look is the Puppeteer API. Happily, we found several methods for exactly this:

page.waitFor(selectorOrFunctionOrTimeout[, options[, ...args]])
page.waitForFunction(pageFunction[, options[, ...args]])
page.waitForNavigation(options)
page.waitForSelector(selector[, options])

These let us make the page return only under certain conditions. For example, with page.waitForSelector(‘#app’), the HTML content is returned only once the page contains an element with id=”app”.

Or by setting page.waitForFunction(‘window.innerWidth < 100’), the HTML content is returned only when the page width is less than 100px.

In this way, we can control when, and with what content, pages are served to the crawler.
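Putting these pieces together, the heart of service A can be sketched as follows. This assumes puppeteer is installed; the ‘#app’ default selector and 10-second timeout are our own illustrative choices:

```javascript
// Core of rendering service A: open the page in headless Chrome, wait
// until the asynchronously rendered view actually exists, then return
// the fully rendered HTML.
async function renderPage(url, selector = '#app') {
  const puppeteer = require('puppeteer'); // required lazily inside the function
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle0' });
    // Block until the element we care about has been rendered.
    await page.waitForSelector(selector, { timeout: 10000 });
    return await page.content(); // the complete rendered HTML document
  } finally {
    await browser.close();
  }
}
```

When readiness is not a single element, page.waitForFunction can replace waitForSelector here with an arbitrary condition, as described above.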

Q2: What if many of our visitors use Internet Explorer? With this system, we can render pages even for browsers (below IE9) that cannot render the SPA themselves. However, each such request is relatively time-consuming, which is not acceptable.

So we simply add a caching layer. On each request, we check whether there is an unexpired cached HTML for this URL; if there is, we return the cached HTML directly; otherwise we render the page and save the result to the cache.
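The cache itself can be as simple as an in-memory map with a time-to-live. The sketch below is illustrative; a production service would more likely use Redis or similar:

```javascript
// Minimal in-memory HTML cache with a TTL (time-to-live).
class HtmlCache {
  constructor(ttlMs = 5 * 60 * 1000) { // default TTL: 5 minutes
    this.ttlMs = ttlMs;
    this.store = new Map(); // url -> { html, expiresAt }
  }

  // Returns the cached HTML, or null on a miss or expired entry.
  get(url, now = Date.now()) {
    const entry = this.store.get(url);
    if (!entry) return null;
    if (now > entry.expiresAt) {
      this.store.delete(url); // expired: evict and report a miss
      return null;
    }
    return entry.html;
  }

  set(url, html, now = Date.now()) {
    this.store.set(url, { html, expiresAt: now + this.ttlMs });
  }
}
```

On each request the service checks `cache.get(url)` first, and only falls back to rendering with Puppeteer (followed by `cache.set(url, html)`) on a miss.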

Q3: Even once the page renders, IE users still cannot use the JS-driven interactions.

We can’t solve this at the service level, but we can add a friendlier prompt on the front end. If the user is identified as running an older version of Internet Explorer, a small tip appears, suggesting they download a modern browser for a better experience.
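On the front end, such a tip might look like the following sketch. The UA patterns and wording are our own, and the script sticks to ES5 syntax so it at least parses in old IE:

```javascript
// Detect legacy IE from the User-Agent and show a small upgrade tip.
// "MSIE x.x" covers IE 10 and below; "Trident/" also matches IE 11.
function isLegacyIE(ua) {
  return /MSIE \d+\.\d+|Trident\//.test(ua || '');
}

if (typeof document !== 'undefined' && isLegacyIE(navigator.userAgent)) {
  var tip = document.createElement('div');
  tip.innerText = 'Your browser is outdated. For a better experience, please use a modern browser.';
  tip.style.cssText =
    'position:fixed;top:0;left:0;right:0;padding:8px;background:#fffbe6;text-align:center;';
  document.body.appendChild(tip);
}
```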

Q4: Single-page application routing is usually implemented with the URL hash (hash mode), and the server never receives the hash fragment, so it cannot request the correct page.

This can be solved by switching to HTML5 History-mode routing (supported by vue-router, for example). Route links are best written into the page as <a> tags with real href attributes, rather than JS jumps triggered by onclick, so that crawlers can follow them and crawl the entire site.
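To illustrate the difference, here is a small hypothetical helper showing how a hash-mode URL maps to its history-mode equivalent — the fragment after “#” becomes a real path that the server (and the crawler) can see. Note that history mode also requires the server to serve the app for every such route:

```javascript
// Map a hash-mode URL to its history-mode equivalent, e.g.
// '/#/user/1?tab=posts' -> '/user/1?tab=posts'.
// Browsers never send the part after '#' to the server, which is
// exactly why hash-mode routes are invisible to it.
function hashToHistoryUrl(url) {
  var i = url.indexOf('#');
  if (i === -1) return url; // already a history-mode URL
  var base = url.slice(0, i).replace(/\/$/, ''); // drop trailing slash
  var fragment = url.slice(i + 1);
  if (fragment.charAt(0) !== '/') fragment = '/' + fragment;
  return base + fragment;
}
```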

Once we have figured out how to solve all these problems, we can start coding.

Pop pop, pop pop => SSR-service

Ok, and there we go: in less than 200 lines of code, we have implemented a generic, service-oriented server-side rendering solution for single-page applications.