The Jingdong PLUS membership program is the first paid e-commerce membership program in China, and the number of officially enrolled members has exceeded 10 million. My team took over the front-end development of this project in 2016, witnessed its rapid growth along the way, and contributed our share to it.

This project has several features:

First, the volume of requirements is high. Why use H5 for the mobile side instead of native or RN? In my view, given the number of requirements and the pace of iteration on this project, even H5 can barely keep up, so don’t even think about native or RN.

Second, there are many product managers. On a typical project we deal with one or two product managers; on this one we deal with a whole “team” of product managers spread across different locations. Ordinary projects swap product managers one at a time; this project swaps them in batches… We have seen off quite a few PLUS membership product managers. As the saying goes: the R&D team is the iron camp, and product managers are the flowing soldiers.

So on the PLUS membership project, the business side, the project managers, and the product managers all change, but we in R&D remain. Every time I think of this, Ye Qianwen’s old song rings in my ears: “Heaven and earth endure, passers-by hurry past, the tide rises and falls…”.

Back to the point. With so many users and such frequent iteration of requirements, keeping the live site safe and stable is always the top priority. So we have been very cautious about architecture adjustments and performance optimizations, making small fixes along the way and saving major upgrades for major redesigns. That said, our thinking and experimentation on these problems has never stopped, and we have verified a number of effective optimizations that will be applied in the next redesign.

I’m not entirely convinced that front-end development doubles in difficulty every 18 months, but the industry really does iterate fast. If I wait until all of these optimizations have landed before writing about them, they may no longer seem so fresh. So I have decided to share the plans first and discuss them with interested peers, to improve them further.

These solutions are mainly aimed at mobile, and the core goal is to improve the loading speed of the home page, especially the first screen and performance on weak networks. The smaller changes cover persistent caching, trimming code, optimizing interface requests, and improving perceived speed; the bigger ones are introducing PWA and upgrading the architecture. PWA offline caching can greatly improve the user experience, but it does not speed up the first load; that depends on the other optimizations, so the two form a one-two punch. Let’s start with the architecture upgrade.

1. Architecture upgrade

The project plans to migrate to the Gaea 4.0 scaffold [1], a general-purpose Vue single-page application scaffold developed by our team on top of webpack 4. Earlier versions in the series have been proven across dozens of projects and are quite stable, and the recently released 4.0 is a significant step up from its predecessors:

  • Webpack was upgraded to 4.0

  • Babel has been upgraded to 7.0

  • Vue-loader was upgraded to 15

  • A refactored upload plugin, for faster and more stable one-click uploads to the test server

  • Integrated our Carefree solution [2] to work around the problem of phones and computers sitting on different LANs that cannot reach each other, making real-device testing and debugging easier

  • Integrated the NutUI component library [3], with UI components loaded on demand

  • Integrated SMOCK [4], our self-developed Swagger-based data mocking tool

  • Supports automatic skeleton screen generation [5]

  • Supports PWA

The migration has several main purposes:

First, to upgrade the project’s build tool, webpack, to version 4. The project was previously built on webpack 2, and webpack 4 brings many improvements, such as:

  • Scope hoisting (introduced in webpack 3), which reduces the number of closure wrappers and speeds up JS execution

  • Smaller bundles in production builds

  • Faster development builds thanks to an optimized incremental build mechanism, along with more detailed errors and hints

Second, Gaea 4.0 uses Babel 7, which makes smarter on-demand loading of polyfills possible.

Third, Gaea 4.0 provides the groundwork for PWA, skeleton screens, and the other schemes tried in this optimization plan.

Finally, Gaea 4.0’s integration with Carefree, the new upload plugin, and other features will make future development and real-device debugging easier.

2. Loading Babel polyfills on demand

Modern web applications go through a local build step, so newer JS can be compiled down to older syntax at build time; we get to write new syntax while staying compatible with older browsers. The best-known tool for this transformation is Babel. One thing Babel has long been criticized for is the polyfill problem.

By default, Babel only converts JavaScript syntax, not new APIs such as the global objects Promise, Generator, Set, Map, and Symbol, and methods defined on global objects (such as Object.assign) are not transpiled either. If you want an untranspiled API to work in an older environment, you need a polyfill.

There are several polyfill approaches, each with its own problems. The babel-polyfill scheme is commonly used in applications, while the babel-runtime and babel-plugin-transform-runtime schemes are commonly used in third-party libraries.

babel-polyfill provides a complete environment shim containing fallback implementations for all of the new APIs, so it can cover both new global objects and methods on existing globals. Its main disadvantage is size: roughly 80 to 90 KB after compression. This is the scheme currently used in the project, and this time we want to optimize it to reduce the amount of loaded code.

As mentioned above, this round of work migrates the project to the Gaea 4.0 scaffold, whose Babel has been upgraded to the latest version 7. Babel 7 is a big jump from Babel 6, which came out nearly three years ago, and brings a number of new features; one of the most notable is support for smarter, on-demand polyfills.

Babel 7 implements on-demand polyfill loading mainly through the @babel/preset-env preset it provides.

Using @babel/preset-env still requires installing @babel/polyfill first, but the final bundle will not include all of its polyfills.

npm install @babel/polyfill --save

Also, you need to specify the browser range to be compatible with, in a .browserslistrc file or in the targets field of .babelrc.

Then configure @babel/preset-env in the .babelrc file.

The @babel/preset-env option related to on-demand polyfill loading is useBuiltIns, which has two values worth focusing on: entry and usage.
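
For reference, a minimal .babelrc sketch along these lines might look like the following; the browser range is purely illustrative, debug is optional, and Babel 7 allows comments in .babelrc:

{
  "presets": [
    ["@babel/preset-env", {
      "targets": {
        "browsers": ["Android >= 4.0", "iOS >= 8"]
      },
      "useBuiltIns": "usage",  // or "entry"
      "debug": true            // prints which polyfills are added per file at build time
    }]
  ]
}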

When the value is entry, Babel replaces the import "@babel/polyfill" or require("@babel/polyfill") statement with imports of the individual polyfills needed for the environment we specify.

For example,

import "@babel/polyfill";Copy the code

is replaced with

import "core-js/modules/es7.string.pad-start";
import "core-js/modules/es7.string.pad-end";Copy the code

When the value is usage, things get smarter: Babel adds specific polyfills based on what each file actually needs and on the specified environment. Better still, the same polyfill is loaded only once per bundle, which also helps keep the bundle small. Babel presumably achieves this precise, on-demand polyfilling through static analysis of the files.

For example,

var a = new Promise();

After conversion (if the target environment does not support Promise):

import "core-js/modules/es6.promise";
var a = new Promise();

After conversion (if the target environment already supports Promise):

var a = new Promise();

We tried this out: we specified the range of browsers to support, installed @babel/polyfill, and set the useBuiltIns option of @babel/preset-env to usage. Babel then automatically analyzed each file and loaded exactly the polyfills it needed, taking the specified browser range into account. In the end only part of the polyfill set was included in the project, and we measured that the bundled code (minified) was more than 60 KB smaller than with the full babel-polyfill scheme, while also avoiding global-variable pollution.

With debug mode enabled in the Babel configuration, you can see at build time which polyfills have been added to each file:

(A nitpicker on Zhihu asked: “What year is it, and you’re still supporting Android 4.0 and iOS 8.0?” I can only sigh, shrug, and shake hands with my fellow code monkeys.)

A further thought on this question:

This way of loading polyfills is much better than the traditional approach, but it is still not perfect. For example, a polyfill introduced to cover the oldest browsers in our specified range is unnecessary for users on newer browsers, yet they still have to download it.

In my opinion, an ideal solution would determine, through static analysis at compile time, the set of APIs that may need polyfills, but not bundle the polyfills themselves. Instead, when the user opens the page, a small script embedded in the page checks one by one whether the current browser supports these APIs, and for the unsupported ones requests the corresponding polyfill files from a server. Of course, this requires a server-side polyfill service along the lines of polyfill.io. We will keep exploring this direction.
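
As a rough illustration of the idea (not something we have shipped), the in-page script might detect missing APIs and request only those from a service such as polyfill.io; the feature list and URL parameters below are illustrative:

(function () {
  // Detect a few representative APIs; a real list would come from build-time static analysis
  var missing = [];
  if (typeof Promise === 'undefined') missing.push('Promise');
  if (typeof Object.assign !== 'function') missing.push('Object.assign');
  if (typeof window.fetch !== 'function') missing.push('fetch');

  if (missing.length) {
    // Request only the polyfills this browser actually lacks
    var script = document.createElement('script');
    script.src = 'https://polyfill.io/v3/polyfill.min.js?features=' + missing.join(',');
    document.head.appendChild(script);
  }
})();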

3. Persistent caching

PWA is all the rage; these days a project without PWA is almost too embarrassed to say hello. The most important PWA feature for us is offline caching. H5 did have the Application Cache API before PWA came along, but PWA offline caching beats it hands down.

From a business point of view, this project is not really suited to offline access, but we can still use PWA to cache static resources offline and speed up page access.

In this scheme, the service worker does not cache the page’s HTML or interface data, only static resources, using a cache-first strategy. On anything other than the first visit, static resources come from the cache and page access speeds up dramatically.

But there is a problem: page updates. A cache-first policy means that every time the page is entered, the cache is used directly if it exists. When the cache has an update, the page has to be refreshed after the cache is updated before the change becomes visible. Refreshing the page automatically badly hurts the user experience, prompting users to refresh manually looks odd inside the app, and not all users will do it. For a program like PLUS membership, with a long backlog and frequent updates, the impact may be even greater. HTML5’s offline cache API suffers from the same issue; it is not a bug, just a consequence of the “cache first” policy, which does not quite meet our needs.

Our solution is that when a file is updated, we change both the cache version number and the version number in the URL that references the file, so the browser bypasses the cache and fetches the new file directly. After the page loads, the cache is updated and the old cache is cleaned up, so the next visit is served from the new cache.
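
A minimal service worker sketch of this cache-first strategy; the cache name, version and file-matching rule are illustrative:

// sw.js
var CACHE_NAME = 'plus-static-v2'; // bump the version when cached files change

// Remove caches left over from previous versions once the new worker takes over
self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys().then(function (keys) {
      return Promise.all(
        keys
          .filter(function (key) { return key !== CACHE_NAME; })
          .map(function (key) { return caches.delete(key); })
      );
    })
  );
});

// Cache-first for static resources only; HTML and API requests always go to the network
self.addEventListener('fetch', function (event) {
  if (!/\.(js|css|png|jpg|gif|webp)(\?.*)?$/.test(event.request.url)) return;
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      if (cached) return cached;
      return fetch(event.request).then(function (response) {
        var copy = response.clone();
        caches.open(CACHE_NAME).then(function (cache) {
          cache.put(event.request, copy); // cache is filled after the page has loaded
        });
        return response;
      });
    })
  );
});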

There is still room for optimization. Only the files that actually changed need a new version number in their URL; the other static resources on the page that have not changed can and should keep using the cache. The idea is to extract the stable, rarely changing modules (such as Vue and its plugins) so they can stay cached as long as possible, while still being updatable the same way when necessary. The frequently changing parts, such as business code, should be packaged separately and kept as small as possible to reduce the cost of page and cache updates.

To extract these stable shared modules we use webpack’s built-in DllPlugin and DllReferencePlugin, which let us compile them independently ahead of time into a vendor.dll.js bundle. As long as this code does not change it is never compiled again, so everyday builds become much faster. The vendor.dll.js bundle stands on its own and its hash does not change, which makes it particularly well suited to persistent caching.
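
A simplified sketch of the DLL setup; the entry contents and paths are illustrative:

// webpack.dll.config.js: pre-build the stable modules into vendor.dll.js
var path = require('path');
var webpack = require('webpack');

module.exports = {
  mode: 'production',
  entry: {
    vendor: ['vue', 'vue-router', 'vuex'] // stable, rarely changing modules
  },
  output: {
    path: path.resolve(__dirname, 'dll'),
    filename: '[name].dll.js',
    library: '[name]_dll' // global variable exposed by the DLL bundle
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]_dll', // must match output.library
      path: path.resolve(__dirname, 'dll/[name].manifest.json')
    })
  ]
};

// webpack.config.js (excerpt): reference the prebuilt DLL so these modules
// are not compiled again during normal builds
// plugins: [
//   new webpack.DllReferencePlugin({
//     context: __dirname,
//     manifest: require('./dll/vendor.manifest.json')
//   })
// ]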

Therefore, when our business code changes, we only need to release the business package (app.js) with the new version number, and vendor.dll.js still uses the local cache.

Let’s take a look at the actual loading.

On first access there is no PWA cache, so all resources come from the network. After the page loads, the PWA caches the static resources.

For subsequent access, static resources are loaded from the cache first, which is extremely fast.

When the business code is updated, we change the version number in the URL that references app.js in the page, so app.js bypasses the cache while the other cached static resources keep using it. We also change the cache version number; the cache is updated after the page loads, and the new app.js file is cached.

On the next visit, all static resources, including the new app.js, are served from the cache again.

4. Request optimization

The front end of this project is a standard Vue SPA, and all data exchange with the back end goes through interfaces. The PLUS membership business logic is itself quite complex, involving many user states, and the page logic is complex too: different users see different pages, depending on user status and back-office configuration.

Some interfaces depend on each other. For example, an interface that takes the user’s status as a parameter means we first have to obtain that status from the user-info interface; likewise, the product-data interface first needs the floor-configuration interface to know which floors are on the current page before deciding which floors’ data to request.

These serial interface requests slow down first-screen rendering; they are currently the main drag on home page performance and a key target of this optimization.

Server-side rendering (such as Vue SSR), with the first screen rendered straight from the server, is of course the ideal solution, but it does not look realistic for now. The R&D setup of this project is complicated: the front end and back end are two teams across different offices and departments, the requirements are heavy, and pages change frequently. Keeping the front and back ends fully separated helps clarify responsibilities, improve efficiency, and reduce finger-pointing.

The other option is a compromise: the page is served through a back-end template, and our back-end colleagues inject data such as the user’s status and the floor configuration directly into the page via that template. When the page initializes in the browser it reads this information straight away and then requests the interfaces that depend on it. This avoids the serial-request problem and also saves several requests, which helps speed up page loading and rendering. For this round of optimization we plan to adopt this scheme.
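
A minimal sketch of how the page could consume the injected data, assuming the template exposes a global such as window.__PAGE_CONTEXT__; the global name, its fields, and the fetch helpers below are hypothetical:

// Injected by the back-end template, e.g.:
// window.__PAGE_CONTEXT__ = { userStatus: 'plus', floors: ['banner', 'coupon', 'goods'] };

var context = window.__PAGE_CONTEXT__ || {};

if (context.userStatus && context.floors) {
  // Data is already on the page: skip the user-info and floor-config requests
  fetchFloorData(context.floors, context.userStatus);
} else {
  // Fallback: the original serial requests
  fetchUserStatus().then(function (status) {
    return fetchFloorConfig(status).then(function (floors) {
      return fetchFloorData(floors, status);
    });
  });
}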

Before optimization:

After optimization, key requests are significantly advanced:

Before optimization:

After optimization, the page starts rendering time significantly earlier:

Still, one has to have dreams. Front-end/back-end separation is an improvement, but complete separation is not perfect either; first-screen loading speed and SEO remain issues. Separation plus server-side rendering of the first screen looks like a better answer: it combines the advantages of both approaches, keeps the front and back ends decoupled, guarantees home page rendering speed, and helps SEO. However, with front-end frameworks like Vue and React everywhere today, server rendering is no longer as simple as assembling HTML pages, even if only the first screen is rendered. Front-end/back-end isomorphism may be the better route, and in that case the server-rendering work clearly belongs with the front end, so a Node.js middle layer becomes necessary.

5. Skeleton screen

Beyond the real reduction in first-screen rendering time achieved by the optimizations above, we also added a skeleton screen to the page, which makes loading and rendering feel faster to the user than it really is. A little sleight of hand, all in the name of user experience.

Let’s start with what a skeleton screen is. A skeleton screen shows the user a rough outline of the page before the data has loaded, then replaces it with the rendered page content once it arrives. It has been a popular loading pattern over the past couple of years; essentially it is a transition effect for the interface’s loading process.

Displaying the outline of the page before loading completes, then gradually filling in the real content, not only eases the anxiety of waiting but also makes loading feel more natural and smooth, avoiding long white screens or flicker. A skeleton screen gives the impression that the page is already “partially rendered”, which beats traditional loading indicators.

Our team has studied skeleton screen techniques in depth and developed a webpack plugin called @nutui/draw-page-structure [5], which uses Puppeteer to automatically generate a pure-DOM skeleton screen and can insert it into specified pages automatically. If you are not satisfied with the generated result, it can also be customized and adjusted.

We experimented with this plugin in the project and the results were good. A pure-DOM skeleton screen carries less data than image- or canvas-based ones and is more flexible to adjust.

6. Image formats

The PLUS membership channel home page is a typical e-commerce page with lots of images. Using newer image formats can greatly reduce the size of the images loaded and help with decoding and rendering speed, which in turn speeds up page rendering. For the mobile web there is another important benefit: saving users’ data (China Mobile charges 5 yuan for 30 MB, haha).

We started using WebP in the project last year and it worked well. For example, one background image is 35 KB as a compressed PNG but only 4 KB as WebP, with basically no visible difference in quality.

WebP’s browser support, however, is a mixed bag. It is well supported by Google’s own browser and by Opera, and recent versions of Firefox and Edge support it too, but Apple has not followed suit and Safari shows no sign of supporting it. If you want to support it inside an iOS app, the app has to bundle a decoding library itself (after testing, the Jingdong app on iOS already supports it; thumbs up).

The way we use WebP: a script on the page checks whether the current browser supports WebP; if so, it adds a class named “webp” to the body and writes the result to localStorage, so subsequent visits read localStorage directly instead of running the check every time. The page’s CSS then uses the “.webp” selector, and the Vue image filter decides based on the stored result whether to load the WebP version of an image.

document.createElement('canvas').toDataURL('image/webp').indexOf('data:image/webp') === 0
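
Putting those pieces together, a minimal sketch of the flow might look like this; the localStorage key name is illustrative:

function supportsWebp() {
  var cached = localStorage.getItem('supportWebp');
  if (cached !== null) return cached === '1'; // reuse the previously stored result

  var supported = document.createElement('canvas')
    .toDataURL('image/webp')
    .indexOf('data:image/webp') === 0;

  localStorage.setItem('supportWebp', supported ? '1' : '0');
  return supported;
}

if (supportsWebp()) {
  // CSS rules can then target images via the ".webp" selector
  document.body.className += ' webp';
}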

For this round of optimization, we are also considering adding support for our company’s DPG image format.

DPG is an image compression technology from our infrastructure arm, the Intelligent Storage department. DPG-compressed images are JPEG-compatible and supported by all platforms and browsers. DPG is a lossy compression technique that can cut image size by about 50%, reduce CDN bandwidth by about 50%, and speed up image rendering on users’ devices.

As I understand it, DPG re-compresses JPEG images with some additional algorithm, and the result is still essentially a JPEG (just with a different extension), which is what makes “supported by all platforms and browsers” possible. It is therefore particularly suitable as a drop-in replacement for JPEG images, provided, of course, that a DPG version exists on the server. Our company’s image system automatically generates a DPG counterpart for every uploaded image, so the conditions we set for using DPG are that the original image is a JPEG and that it is hosted on our image system. Combining this with the existing WebP loading logic, the resulting image loading flow is shown in the figure below:
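
For illustration only, the decision flow might look roughly like the sketch below; the host check, the URL rewriting, and the priority between DPG and WebP are all assumptions, not the actual rules of our image system:

function chooseImageUrl(url, webpSupported) {
  // Hypothetical check: a JPEG hosted on our image system (the host name is made up)
  var isJpegOnImageSystem = /\.jpe?g$/i.test(url) && url.indexOf('//img.example.com/') !== -1;

  if (isJpegOnImageSystem) {
    // DPG is essentially re-compressed JPEG, so any browser can render it
    return url.replace(/\.jpe?g$/i, '.dpg');
  }
  if (webpSupported) {
    return url.replace(/\.(png|jpe?g)$/i, '.webp');
  }
  return url;
}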

Let’s stop here for now; I’m off to a requirements review for the PLUS membership program…

7. Read more

[1] https://www.npmjs.com/package/gaea-cli

[2] http://carefree.jd.com

[3] http://nutui.jd.com

[4] http://smock.jd.com

[5] https://www.npmjs.com/package/@nutui/draw-page-structure