Preface

Time slips away easily: the cherries have reddened, the plantains turned green. Quietly, the clock on the wall has come round to the middle of 2020; time is like a gentle hand hidden in the dark, moving past us almost unnoticed. 2020 has been an unusually rough year for humanity: from the bushfires in Australia to the pandemic sweeping the world, the earth has been draped in a gray veil. But humanity has not sat idle; it has fought back. JD, as a national brand, has shouldered its social responsibility and fought at the front line of the struggle throughout.

JD PLUS membership is a service JD launched to give its core customers a better shopping experience. In the first half of this Gengzi year, JD PLUS has grown vigorously: the pages of several major channels have been revised in turn; the co-branded card has welcomed strong new partners in Tencent QQ Music and Mango TV; the calculator page has been redesigned; and the PLUS members' self-built risk-control system has been established.

An envelope pop-up animation was added to the channel home page to make it more engaging, along with a pop-up to reduce the number of casual users who come and go without taking away a single cloud;

The revised pages add immersive effects and floor-skin features to set the atmosphere and enhance user perception, and the whole page is configurable, reducing front-end maintenance costs.

In addition, in the first half of 2020 we mainly supported requirements around page revisions, new membership benefits, feature optimization, and R&D optimization:

At present, official members number close to 20 million; everything is thriving and vigorous.

However, whether in technology upgrades or in perfecting the project, the optimizations mentioned in our earlier article "JD PLUS Membership Front-End Development Road 2019" were only the first step of a long journey. There is still much to do, especially as the rapid iteration of time and demand has gradually exposed new problems. We came to realize that a good project must establish a sound architecture and surrounding systems to guarantee continuous iteration and efficient development. Admittedly, there is still a lot we could do right now, but it is worth pausing to look back and sort out the road traveled over the past six months, keeping the essence and discarding the dregs.

Next, this article will cover improving development efficiency, optimizing the project structure, improving the user experience, and more, sharing our experience from the first half of 2020 in the spirit of casting a brick to attract jade, so we can learn together.

I. Improve development efficiency

In project development, seemingly insignificant optimizations can often bring unexpected rewards. Continuously spotting problems during development and improving the process to raise development efficiency is our tireless pursuit.

1.1 Automatically Generating a New Template

As requirements iterated, the existing channel pages gradually could no longer keep up, especially with the PLUS members' self-built risk-control system and the channel-page revision both requiring new pages. So how many steps does it take to add a page?

As shown above, adding a page takes five steps:

1. Add a new HTML page to mount static resources and the skeleton screen;

2. Add an entry JS file, the entry point for the new page, i.e. the entry file for the Webpack build;

3. Add the main Vue file, where the new page's logic is developed;

4. Modify the Webpack configuration file, adding the entry and the corresponding HtmlWebpackPlugin configuration, for example:

new HtmlWebpackPlugin({
    template: './src/template/new-expired.html',
    filename: path.resolve(__dirname, 'build/new-expired.html'),
    inject: false
}),

5. Modify the upload-code component and add the new entry JS.

So every time a page is added, all five configurations above must be modified. The steps are tedious, and occasionally omitting one, leading to page errors, is not unheard of. How can adding a page be simplified? Taking a page from our team's NutUI component library, which adds components by automatically generating the corresponding configuration files, we introduced Inquirer, a library for interacting with users on the command line. Now a single command automatically generates and modifies the five configuration files above.

First, use the command-line interaction tool to ask for the new page's name:

// Key code
const inquirer = require('inquirer');
inquirer.prompt([
    {
        type: 'input',
        name: 'pageName',
        message: 'New page name:',
        validate(value) {
            const pass = value && value.length <= 20;
            if (pass) {
                return true;
            }
            return 'Cannot be empty and cannot exceed 20 characters';
        },
    }
])
.then(function (answers) {
    createDir(answers);
});

Run the file, and the command line displays the following:

After we enter the new file's English name and Chinese title, generation can proceed, for example writing the new HTML file into the specified folder:

function createHtml(value) {
    // templateHtml is the HTML template string read earlier
    const htmlCode = templateHtml.replace(/\{template\}/g, value.pageName).replace(/\{title\}/g, value.pageTitle);
    const createHtml = path.resolve(__dirname, `../createTemplate/${value.pageName}.html`);
    fs.writeFileSync(createHtml, htmlCode);
}

In addition, the JSON-format configuration file needs to be updated automatically:

function createJson(value) {
    entrys[value.pageName] = `./src/entry/${value.pageName}.js`;
    const createJson = path.resolve(__dirname, './entrys.json');
    fs.writeFileSync(createJson, JSON.stringify(entrys));
}

Based on the automatically generated JSON file, Webpack can then generate the corresponding entries and configuration items such as HtmlWebpackPlugin when the local service starts or the code is built:

const entryConfigs = require('./templates/entrys.json');
Reflect.ownKeys(entryConfigs).forEach(key => {      // Loop over the configured entries
    webpackConfig.plugins = (webpackConfig.plugins || []).concat([
        new HtmlWebpackPlugin({
            template: `./src/template/${key}.html`,
            filename: path.resolve(__dirname, `build/${key}.html`),
            inject: false
        })
    ]);
});

With this, the days of modifying or creating five files for every new page are gone for good. A single command completes the scaffolding of a new page, improving efficiency and removing the risk of omissions caused by manual steps.

1.2 Automatic prompt before the code goes online

As the saying goes, details determine success or failure; that is true in life, and true of process too. Because requirements iterate fast, with five or six developed in parallel in an average week, we adopt multi-branch parallel development and bump the version number on every release so users do not hit cached static resources. Now imagine finishing a hard-fought requirement, going live full of anticipation, and suddenly discovering that a branch was never merged, or the version number never changed! Thunderstruck, with ten thousand alpacas galloping through your mind… Don't ask how I thought of that; it must be a painful memory. Never rely on human memory to guarantee the mandatory pre-release steps, or you will feel my pain. So how can we avoid this kind of problem? Could something notify us and check the mandatory operations before going online?

To implement this feature, the following requirements must be met:

  1. The prompt is triggered when a developer commits code to the master branch;

  2. The commit process is intercepted and the developer is shown the checklist; if the developer confirms, the commit continues, otherwise it is aborted.

So we turned to Git hooks.

When code is committed with Git, the pre-commit hook can automatically run scripts to check the code the moment `git commit` is invoked. If a check fails, the commit is blocked and cannot be pushed, ensuring that problem code stays local and is never submitted to the remote repository.

Git hooks live in the .git/hooks folder of the repository; sample files are generated automatically when the repository is created. Open the hooks folder and you'll see:

The .sample suffix means the hook is disabled by default, so all we need to do is create a pre-commit file, whose code runs when code is committed, and develop the prompt logic in that file.

If the branch is master, a release is imminent, so the code in the pre-commit file runs, as shown in the figure below:

Only if all answers are Y will the commit process continue.
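To make the idea concrete, here is a minimal sketch of such a hook written as a Node script; the checklist questions are illustrative, not our actual list:

#!/usr/bin/env node
// A minimal pre-commit sketch; it only guards the release branch.
const { execSync } = require('child_process');
const fs = require('fs');
const readline = require('readline');

const branch = execSync('git rev-parse --abbrev-ref HEAD').toString().trim();
if (branch !== 'master') process.exit(0);

// Reattach stdin to the terminal so prompts work inside a git hook
const rl = readline.createInterface({
    input: fs.createReadStream('/dev/tty'),
    output: process.stdout
});

const questions = ['Merged all feature branches? (y/n) ', 'Updated the version number? (y/n) '];
let i = 0;
(function ask() {
    if (i === questions.length) { rl.close(); return process.exit(0); }
    rl.question(questions[i++], answer => {
        if (answer.toLowerCase() !== 'y') {
            console.log('Commit aborted.');
            process.exit(1); // a non-zero exit blocks the commit
        }
        ask();
    });
})();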

So how do we get the whole team to use this feature? Surely not by asking each member to add the file locally by hand. When running the project, everyone runs `npm run dev` to start the local service, so we piggyback on this mandatory step: the code that writes the pre-commit file runs there, and every member gets the hook installed when they start the local service. The pre-commit feature was rolled out without the team even noticing it was there.
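The installation step itself can be tiny; a sketch, assuming the hook script is kept in a scripts folder in the repository:

// Runs as part of `npm run dev` (paths are illustrative)
const fs = require('fs');
fs.copyFileSync('./scripts/pre-commit', '.git/hooks/pre-commit');
fs.chmodSync('.git/hooks/pre-commit', 0o755); // hooks must be executable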

1.3 Automatic check before submitting the code

The method in 1.2 only prompts and intercepts right before going live; is there also room for error during development? Here are some common scenarios:

1. The main branch may have been updated many times during development, so we need to merge it into the current development branch to keep it up to date;

2. When several developers share a branch, someone occasionally forgets to merge others' code before compiling and uploading, so others' code gets overwritten;

3. After merging the trunk branch, we occasionally forget to push.

The goal

These problems can all be checked item by item by a script. Before looking at the implementation, let's define what we want to accomplish: automatically check the status of the trunk branch and the current local branch.

1. All files need to be fetched locally

A correct judgment can only be made when local and remote are in sync; this operation only downloads from the server to local and does not merge.

git fetch

2. Check whether the trunk branch has uncommitted or un-pulled content

"ahead" or "behind" describes the relationship with the remote branch: commits waiting to be pushed, or updates waiting to be pulled. The figure below shows a "waiting to be pushed" state.

git status

3. Determine the status of the current branch relative to the trunk branch

This judgment is only required when the current branch is not the trunk. If the trunk has not been merged in, the user is prompted to merge the code. To determine whether the trunk branch has been merged, run the following command to list the branches already merged into the current branch.

git branch --merged

4. Determine whether the current branch has uncommitted or un-pulled content

This step applies only if the current branch is not the main branch.

The flowchart is as follows:

The implementation

We use script execution instead of manual operation. Our front-end scaffolding environment is Node, so the first step was to select a Git library from npm; I chose simple-git.

1. First, determine whether a remote address exists. If there is none, fetch cannot run and the comparisons below would not be correct.

const git = require('simple-git');
git().getRemotes(true, (err, res) => {
    // do something
});

2. Determine whether the trunk has content to push or pull, via the isBehind and isAhead flags:

const mainBranch = 'master';
const behindReg = /behind(?= \d+\])/gi;
const aheadReg = /ahead(?= \d+\])/gi;
git().branch(['-vv'], (err, res) => {
    const mainBranchInfo = res.branches[mainBranch];
    const isBehind = mainBranchInfo.label.search(behindReg) > -1;
    const isAhead = mainBranchInfo.label.search(aheadReg) > -1;
});

3. Determine whether the current branch has merged the trunk, using the following method:

const mainBranch = 'master';

git().branch(['--merged'], (err, res) => {
    const isMerged = res.all.includes(mainBranch);
});

4. Determine whether the current branch has updates to pull, by checking the behind count:

// A method on our gitFn helper object
getCurrentBehind() {
    return new Promise((resolve, reject) => {
        git().status((err, res) => {
            if (err) reject(err);
            else resolve(res.behind);
        });
    });
}

// Elsewhere, inside an async function:
const Behind = await gitFn.getCurrentBehind();
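Putting the pieces together, the overall flow might be glued up roughly like this; apart from getCurrentBehind, the helper names here are illustrative:

// An illustrative sketch of chaining the four checks
async function preCheck() {
    await gitFn.fetchAll();                            // step 1: git fetch
    const trunkDirty = await gitFn.checkTrunkStatus(); // step 2: trunk ahead/behind?
    const merged = await gitFn.checkTrunkMerged();     // step 3: trunk merged in?
    const behind = await gitFn.getCurrentBehind();     // step 4: current branch behind?
    if (trunkDirty || !merged || behind > 0) {
        console.log('Please sync and merge before continuing!');
        process.exit(1);
    }
}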

Finally, the following prompts are formed:

1.4 Automatically generating documentation

The M-side (mobile) PLUS member project keeps growing, and so does the difficulty of maintaining and debugging it. Sometimes the utility functions or business logic we encapsulate lack comments, making them hard for teammates to understand when using or extending them. To improve the readability and robustness of the program, we added API documentation to the project so team members can quickly look things up and get started with development. To keep the maintenance cost of that documentation low, we use jsDoc to generate it automatically.

jsDoc can automatically generate interface documentation from normalized comments. For example:

/**
 * @description Whether the page was opened from a mini program
 * @returns {Boolean} result
 */
export function isMiniprogram() {
    const plusFrom = getCookie('plusFrom');
    return !!~['xcx', 'xcxplus'].indexOf(plusFrom);
}

This function is collected by jsDoc and placed into the development documentation. We can then create our own npm-script workflow to launch it from the command line:

"docs": "rimraf docs && jsdoc -c ./jsdoc-conf.js && live-server docs"

jsdoc-conf.js is jsDoc's configuration file, containing the documentation's configuration items. With that in place, we can start building the docs automatically!
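For reference, a minimal jsdoc-conf.js might look like this; the paths and plugin list are illustrative, not our exact configuration:

module.exports = {
    plugins: ['node_modules/jsdoc-vue'], // parse Vue single-file components
    source: {
        include: ['./src'],
        includePattern: '\\.(js|vue)$'
    },
    opts: {
        destination: './docs', // output folder served by live-server
        recurse: true
    }
};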

Run `npm run docs`, or merge the command into `npm run dev`.

The browser will then automatically open the document page locally:

In the API directory on the left of the page, the entries under Globals are the utility functions from the JS files, while the Vue components are listed as modules. For example, here is the comment block for a drag component:

/**
 * @module drag
 * @description A drag component
 * @vue-prop {Boolean} [isSide=true] - Whether the dragged element snaps to the edge
 * @vue-prop {String} [direction='h'] - The drag direction of the dragged element
 * @vue-prop {Number|String} [zIndex=11] - The stacking order of the dragged element
 * @vue-prop {Number|String} [opacity=1] - The opacity of the dragged element
 * @vue-prop {Object} [boundary={top: 0, left: 0, right: 0, bottom: 0}] - The drag boundary of the dragged element
 * @vue-data {Object} position - The position of the pointer, including x and y offsets
 */
// Business code...

Since native jsDoc does not support Vue components, we use jsdoc-vue here, so component annotations are slightly different; you can check the official documentation for the specific specification. We can then see the component's comments in the API documentation.

The annotation specification for Vue components can be found in the official jsdoc-vue documentation. Once the comments are written, run `npm run docs` again to regenerate the documentation.

1.5 Optimized version number logic

To ensure users get the latest code after each release, instead of cached resources, the version number must change with every release, such as the 4.1.2 in the following link:

https://static.360buyimg.com/exploit/mplus/4.1.2/v4/js/index.js

Since we are hooked into the company's header-and-footer system, we only need to change the version number there.

What is the header-and-footer system? It can be understood simply as: the server imports a configuration file A from the system; the front end enters code into file A and pushes it to the specified servers with one click, updating the front-end resources those servers reference.

However, we found that every release required pushing to dozens of servers in the header-and-footer system, that pushes often took a long time or failed, and that when a problem was found in pre-release after going live, the version number had to be rolled back urgently, meaning the header-and-footer files had to be pushed all over again.

So we wondered: is there a better way, or at least a way to alleviate the problem? This is where the version-number comparison logic came back into view. "Back", because we had this logic before, but it could not guarantee the execution order of the dynamically generated static-resource scripts, which could hang the page (see the earlier article "JD PLUS Membership Front-End Development Road 2019" for that problem), so the feature was removed. It was time to revisit it:

1. Maintain a version number V1 in the HTML template on the server;

2. The front end maintains a version number V2 in the header-and-footer system;

3. The front end implements version-comparison logic in the HTML, using the larger of the two version numbers to dynamically generate the corresponding static resources.

/* V1 is the version number placed in the HTML; V2 is the version number the
   front end maintains. The larger of the two wins (e.g. '4.1.2' compares as 412). */
if (typeof V2 !== 'undefined'
    && Number(V2.replace(/\./g, '')) > Number(V1.replace(/\./g, ''))) {
    V1 = V2;
}

There are two corresponding cases:

1. If only the front end needs to release, the front end pushes the updated version number in the header-and-footer system, as before;

2. If both the front end and back end release together, the back end changes version number V1 in the HTML; by the comparison logic in the HTML, the larger of the two version numbers is used, so only the back end needs to release;

However, this solved one problem only to raise another: although the front end no longer had to push the header-and-footer files as often, the scheme requires version numbers to keep their size order. We assigned version numbers according to the planned release order, but after running this way for a while, urgent requirements and emergency bug fixes all needed version numbers inserted between existing ones. It is like an ordered array: insert an element near the front, and every element after it has to shift. After discussion, we changed to the following strategy:

1. With only three segments available for the version number, use the first two as the major version number, e.g. 4.1.0, 4.2.0 … 4.99.0;

2. Use the third segment for urgent requirements. For example, if the current version is 4.2.0, an inserted urgent requirement becomes 4.2.1.

This expands the pool of major version numbers, since the second segment can grow as large as needed; an inserted emergency version no longer disturbs the existing ordered versions; and release steps are reduced. Three birds with one stone!

1.6 Automatic image compression

Image compression has always been a very important part of front-end optimization, an essential part of the development process, even. Before this, image compression in the PLUS project had been in a spontaneous, manual state, which heavily tested everyone's care and awareness.

To standardize this process, we introduced automatic image compression and WebP conversion into our Gaea scaffolding. Enough talk; on to the code:

const imagemin = require('imagemin');
const imageminWebp = require('imagemin-webp');
const path = require('path');
const fs = require('fs');
const chalk = require('chalk'); // for colored console output
const tinify = require('tinify');
const config = require('./package.json');

tinify.key = config.tinypngkey;
const filePath = './src/asset/img';
const files = fs.readdirSync(filePath);
const reg = /\.(jpg|png)$/;

async function compress() {
    for (let file of files) {
        let filePathAll = path.join(filePath, file);
        if (reg.test(file)) {
            await new Promise((resolve, reject) => {
                fs.readFile(filePathAll, (err, sourceData) => {
                    // Compress via the TinyPNG API
                    tinify.fromBuffer(sourceData).toBuffer((err, resultData) => {
                        // Overwrite the source file with the compressed result
                        fs.writeFile(filePathAll, resultData, err => {
                            resolve();
                        });
                    });
                });
            });
        }
    }
    // Convert the compressed images to WebP copies
    imagemin(['./src/asset/img/*.{jpg,png}'], 'src/asset/img/webp', {
        use: [imageminWebp()]
    }).then(() => {
        console.log(chalk.green('WebP conversion completed ~'));
    });
}
compress();

How to use it in CSS:

@mixin webpbg($url, $name) {
  background-image: url($url + $name);
  background-repeat: no-repeat;
  @at-root .webp & {
    background-image: url($url + "webp/" + str-slice($name, 0, str-index($name, ".") - 1) + ".webp");
  }
}

str-slice(string, start, end) cuts a substring out of a string; start and end set the start and end positions, and if end is omitted the slice runs to the end of the string. str-index(string, substring) returns the position of the first occurrence of substring in string, or null if there is no match.

A component then calls the mixin like this:

@include webpbg("../../asset/img/index-formal/", "formal-title.png");

But it got off to a bad start: an error! The component complained that the webpbg mixin could not be found.

That is because a mixin is normally only visible within the Sass file that defines it, while webpbg is defined in the shared common-mixin.scss and its calls are scattered across components. To use the mixin globally, it needs to be imported globally in webpack.config.js.
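One way to do that (a sketch, assuming sass-resources-loader; our actual configuration may differ) is to inject common-mixin.scss into every Sass compilation:

// In webpack.config.js
{
    test: /\.scss$/,
    use: [
        'vue-style-loader',
        'css-loader',
        'sass-loader',
        {
            loader: 'sass-resources-loader',
            options: {
                // Prepend the shared mixins to every compiled Sass file
                resources: path.resolve(__dirname, './src/style/common-mixin.scss')
            }
        }
    ]
}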

OK: image compression, automatic WebP conversion, and WebP support in styles were all going smoothly.

However, as the number of images and compression runs grew, the problem returned: since we use TinyPNG to compress images, we must register an email address to obtain a key, and for free users the same key can only compress 500 images per month.

So we needed to break this limit. Besides applying for more keys, could we improve the optimization strategy itself?

In fact, image cuts rarely change once made, especially the ones already online. So we can switch from full compression to incremental compression, compressing only modified or newly added images. Under this strategy, the number of images compressed per run is small, and the limit is no longer a problem.

Here is the snippet that generates a file's hash value:

const fs = require('fs');
const crypto = require('crypto');

// Wrapped in a function here; `result` maps file paths to their hashes
async function hashFile(filedir, result) {
    let rs = fs.createReadStream(filedir); // Open a readable file stream (an fs.ReadStream)
    let hash = crypto.createHash('md5');   // Create a Hash object for generating digests
    let hex;

    return await new Promise((resolve, reject) => {
        // The stream emits 'data' events as chunks are read
        rs.on('data', hash.update.bind(hash)); // hash.update feeds each chunk into the hash

        // The 'end' event means the stream is exhausted
        rs.on('end', function () {
            hex = hash.digest('hex'); // Digest everything fed in, as a hex string
            // Normalize the path so the same key works on both macOS and Windows,
            // and store the generated hash as the value
            result[filedir.replace(/\/|\\/g, '/')] = hex;
            resolve();
        });

        // The 'error' event means reading failed
        rs.on('error', function (msg) {
            console.log('error', filedir);
            reject();
        });
    });
}
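The incremental step itself is then simple. A minimal sketch, assuming the hash map is persisted to a JSON file between runs (file names illustrative):

// Compare freshly computed hashes against the previous run and
// compress only the files whose hash changed
const previous = fs.existsSync('./imageHash.json')
    ? JSON.parse(fs.readFileSync('./imageHash.json', 'utf8'))
    : {};
const changed = Object.keys(result).filter(file => previous[file] !== result[file]);
// ...run the tinify compression above over `changed` only...
fs.writeFileSync('./imageHash.json', JSON.stringify(result, null, 2));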

II. Optimize the project structure

Why continue to optimize the project architecture? A project is like a well-built building: if a load-bearing wall is knocked through for redecoration from time to time and maintenance is not timely, it will eventually end up in a disastrous, dangerous state. Likewise, JD PLUS membership is a long-term maintenance project; with rapid requirement iteration and urgent requirements inserted into the schedule, problems we failed to consider at the start, or that hinder development progress, have risen to the surface. To keep the project running long term and avoid ever more bloated code, we mainly did the following work:

2.1 Extracting basic components

There are a thousand Hamlets in a thousand people's eyes: every developer has a different idea of how to divide components. So how are the components of the PLUS membership project divided?

For example, a popover like this:

We first used the basic Dialog popover component from the NutUI component library, then developed the business calculator popover on top of it. To improve component reusability and reduce the impact of business-logic changes on components, the structure should take the following form:

The components in our current project are also moving in this direction. We noticed that some NutUI base components introduced into the PLUS project, after some adaptation to business requirements and a long period of stable operation, changed very little. So, to slim the project down, we extracted these base components and published them to npm; they are now installed into the node_modules folder, which removes their code from the project and means they no longer need to be packaged and compiled every time.

It is worth mentioning that all the components worked perfectly in local development but failed once packaged and deployed. Investigation showed that only the components' source code had been published to npm, uncompiled, so using the packages directly in the project threw errors. Should we build separate scaffolding for each component? A clever teammate came up with an idea: use the team's NutUI component-library scaffolding as the carrier, create the PLUS base components inside the component library, then compile and export the components that need packaging. By publishing the compiled code to npm, the project can install the dependencies directly. The result is as follows:

2.2 Reducing existing branches

On average, the PLUS project develops five or six requirements in parallel every week. To keep parallel requirements from interfering with one another, we adopted a multi-branch approach: every branch is created from the trunk branches v2, v3, and v4, named with the pinyin initials of the requirement at the time; for example, the official-revision requirement became ZSGB. After development, each branch is merged into the Master branch in preparation for each release, as shown below:

Notice anything? Is there a problem with naming branches this way? After half a year, we found more and more branches in the code base. Many branches were never used again after development, yet had to be deleted manually every time, and anything that relies on people remembering to do it is unreliable. After some thought, we decided to use each developer's initials as the branch name instead; for example, a developer called "Zhang Dapang" gets the branch "ZDP", and if several people develop the same requirement, they work on one person's branch. The benefits are as follows:

  1. The number of sub-branches under each trunk branch is fixed, and every developer has a corresponding sub-branch to work on;

  2. No more manually deleting redundant branches from the code base, which also reduces accidental branch deletions;

  3. The number of sub-branches no longer keeps growing.

After this change, the dozens of sub-branches in the code base shrank to just a few, greatly reducing repository size and making code downloads faster.

2.3 PC scaffold optimization

For historical reasons, the PC side of the PLUS member project is developed on the React stack. As technology has advanced, the share of mobile traffic keeps rising while the PC share falls year by year, and many features have been diverted to the M side. This leads to a situation where we change the PC side less and less, yet the original PC scaffolding was old and problems kept surfacing during compilation, such as:

1. Packaging was very slow, long enough to drink tea and crack melon seeds while waiting;

2. Output files were named with hash values, and every debug or version rollback meant replacing each hash one by one, which is bad for morale and apt to make people irritable;

3. No hot-reload support, so every change required manually refreshing the page, a side effect that aggravates any perfectionist's compulsions;

4. No on-demand packaging: every build packaged all the files, even though only a few went online.

We upgraded Webpack from version 2 to version 4, redeveloped all the configuration files, and made improvements accordingly:

Low packaging efficiency and troublesome development and joint debugging meant the scaffolding needed to be challenged and optimized; and as technology develops, many new techniques can be applied to our project to improve development efficiency. Since a lot of code is involved, we only give the direction here; if you have any questions, please leave a comment in the comments section.

2.4 Code submission specifications

The goal of standardizing code commits is to trace code better, filter it, and quickly locate the scope and implementation of a given commit. For an iterative project like PLUS, commit standardization is necessary; as the saying goes, nothing can be accomplished without norms and standards. So we introduced vue-cli-plugin-commitlint into the project to constrain and regulate code commits. It strengthens the team's sense of the commit specification, unifies our commit style, and, more importantly, can automatically generate a CHANGELOG, making it easier to find a given release and quickly locate problems later.

vue-cli-plugin-commitlint is an out-of-the-box Git commit specification that combines commitizen, commitlint, conventional-changelog-cli, and husky.

Let’s take a look at how it can be used in a project.

Install dependencies

npm i vue-cli-plugin-commitlint commitizen commitlint conventional-changelog-cli husky -D 

Add the following to package.json:

{
    ...
    "scripts": {
        "log": "conventional-changelog --config ./node_modules/vue-cli-plugin-commitlint/lib/log -i CHANGELOG.md -s -r 0",
        "cz": "npm run log && git add . && git status && git cz"
    },
    "husky": {
        "hooks": {
            "commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
        }
    },
    "config": {
        "commitizen": {
            "path": "./node_modules/vue-cli-plugin-commitlint/lib/cz"
        }
    }
}

Add a commitlint.config.js file:

module.exports = {
    extends: ['./node_modules/vue-cli-plugin-commitlint/lib/lint']
};

Then run `npm run cz`. You will be prompted to select a commit type and fill in the message accordingly, generating a commit message that matches the specification.

After the commit completes, a CHANGELOG.md log file is generated in the project root; clicking a commitId in the file jumps to that commit's content.

2.5 Improve the scaffolding's packaging speed

As the project iterates, the number of files increases and the project grows large; Webpack builds become slower and slower. Every build takes a certain amount of time, so speeding up the build became necessary.

As we all know, Webpack runs on Node.js with a single-threaded model, executing its processing tasks one after another. Is there a way to let Webpack process multiple tasks concurrently?

To solve this problem, we added HappyPack. Its core principle is to split the work into multiple child processes that execute concurrently and send their results back to the main process, thus reducing the overall build time.

Next, let’s look at how to plug it into a project.

Install dependencies

npm install happypack -D

Configure webpack.config.js

The most time-consuming part of the Webpack build process is the loaders' conversion of files, because there is so much file data to convert and the conversion operations are queued one after another.

Configure the loader; on with the code:

module.exports = (env, argv) => {
    const webpackConfig = {
        // ...
        module: {
            rules: [{
                test: /\.js$/,
                loader: 'happypack/loader?id=happyBabel',
                // Exclude files in node_modules
                exclude: [
                    path.resolve(__dirname, 'node_modules'),
                    path.resolve(__dirname, 'jssdk.min.js')
                ]
            },
            // ...
            ]
        },
        // ...
    };
};

We hand the .js files to happypack/loader, and the id in the loader query string determines which HappyPack instance processes them.

Then add the HappyPack instance:

const HappyPack = require('happypack');
const os = require('os');
const happyThreadPool = HappyPack.ThreadPool({ size: os.cpus().length });
// ...
module.exports = (env, argv) => {
    const webpackConfig = {
        // ...
        plugins: [
            new HappyPack({
                // Unique id, matching the loader query above
                id: 'happyBabel',
                // How to process .js files; same form as a loader configuration
                loaders: [{
                    loader: 'babel-loader?cacheDirectory=true',
                }],
                // Use the shared process pool
                threadPool: happyThreadPool,
                // Let HappyPack output logs; defaults to true
                verbose: true
            }),
        ],
        // ...
    };
};

We create a shared process pool containing os.cpus().length child processes, using Node.js's os module. Then we add the HappyPack instance, passing in the id defined earlier so happypack/loader knows this instance handles the .js files. The loaders property takes the same form as an ordinary loader configuration, and the threadPool property receives the pre-defined happyThreadPool, telling the instance to take child processes from the shared pool.

We then run a packaging build; when it completes, we can see the HappyPack build log.

As you can see, HappyPack started eight child processes to process tasks in parallel. Applause!

Finally, let's compare the build speed: 1488 ms saved, an improvement of nearly 20%.

2.6 Analyzing the build output

There are many visual resource-analysis tools; we chose webpack-bundle-analyzer. Compared with other tools it presents results graphically, which makes the output simpler and more intuitive and lets us pinpoint problems quickly. Let's see how to use it in a project.

Install dependencies

npm install webpack-bundle-analyzer -D

Configure webpack.config.js

const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;
// ...
module.exports = (env, argv) => {
    const webpackConfig = {
        // ...
        plugins: [
            // ...
            new BundleAnalyzerPlugin()
        ],
        // ...
    };
};

The configuration is simple, and the default options are generally sufficient without modification.

Next, add a script to package.json:

 "analyz": "NODE_ENV=production npm_config_report=true npm run build"

After running `npm run analyz`, the browser automatically opens http://127.0.0.1:8888/ and displays the analysis view.

If you don't want it to pop up every time, change the openAnalyzer option to false and open the view manually as needed.

Through the view, you can see the size of every module in the project and the real contents of files after packaging and compression, and then identify the files that take up a large share. With an analysis approach comes an optimization target.

The final analysis diagram showed that the shared toast.js file takes a relatively large share, so it can be optimized. In addition, swiper.js and lazyload.js, third-party libraries with a large share, could be extracted into a DLL to free up further space.

2.7 Repartition components

In the PLUS project we have a Component folder containing common components and the sub-components of pages. For historical reasons, as the project iterated and pages were added, the folder accumulated many redundant components and the whole directory became bloated. Repartitioning the components was therefore imperative; let's take a look at how we did it.

First, let's see what's inside.

(Sorry, it doesn't even fit in one screenshot!)

Based on the figure above, we can roughly sort the contents of the Component folder into the following categories:

1. Historical legacy: veteran files that date back to the project's creation, when the directory was still planned as a single page. Most are sub-component files for the home page, plus some page-level component files keyed to user state.

2. Page components: as the project iterated, new pages were added, and the complex sub-components of those pages ended up scattered through the Component folder.

3. Functional components: public components such as popovers, back-to-top, and the calculator are also placed under this folder.

How do you divide it?

First, the historical-legacy page files are deliberately sorted into their corresponding page folders during revisions, for easier maintenance. The scattered page components are likewise unified into their corresponding page folders.

For common business page components, we created a shared folder called plus-components and put them together.

Components such as Dialog and Countdown are now referenced via npm, so they no longer occupy local folders, slimming the Component directory. After optimization:

As the figure above shows, the optimized structure creates a plus-components folder in the root directory to store the common business components, a page folder at the same level to store the various pages, and an othercomponents folder below it to manage the remaining functional components. The component directory structure is now cleaner and easier to manage.

2.8 Vuex optimization

Vuex is the state-management pattern for Vue projects; generally speaking, it is a mechanism for centrally managing the state shared by all of a project's components. Vuex generally suits medium-to-large single-page applications; it is not recommended for simple single-page projects, where it would be cumbersome and redundant.

The presence of Vuex is necessary for the PLUS project, which has so many states to share.

Let’s take a look at the original Vuex code for the project:

import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

export default new Vuex.Store({
    state: { ... },
    getters: { ... },
    mutations: { ... },
    actions: { ... }
});

The original Vuex code writes all the public state in the same index file, with state, getters, mutations, and actions all managed in one place. There is nothing wrong with that, and most small and medium projects do the same. But for a large project, keeping the shared state of all pages together makes the whole Vuex file bloated and hard to maintain. Moreover, some public methods are iterated and changed by requirements, leaving a lot of redundant code, and some public methods reference different pages, causing coupling problems.

So to optimize Vuex, we introduced Modules to modularize it.

We create a modules folder in the root of store and a file called refund in that folder.

The modules folder can hold whatever modules you want to manage separately; refund here is just an example. After creating the JS or TS module in the modules folder, import it in the index root file:

import Vue from 'vue';
import Vuex, { StoreOptions } from 'vuex';
import state from './states';
import mutations from './mutations';
import actions from './actions';
import getters from './getters';
import refundModule from './modules/refund';

Vue.use(Vuex);

export default new Vuex.Store({
    state,
    mutations,
    actions,
    getters,
    modules: {
        refundModule
    }
});

The code above imports refund into index.ts at the root of store and registers it as refundModule for calling. In the refund file, we write the needed state, actions, and mutations just as we did with Vuex before:

const state = {
    orderId: null
};

const actions = {
    getOrderId: ({ commit }, data) => {
        commit("setOrderId", data);
    }
};

const mutations = {
    setOrderId: (state, data) => {
        state.orderId = data;
    }
};

export default {
    namespaced: true, // Enable the module's namespace
    state,
    actions,
    mutations
};

As the code shows, the state, actions, and mutations related to the refund page are kept in this file. You may also notice the extra line namespaced: true in the code: it turns on the module's namespace, so that when the module is registered, all of its getters, actions, and mutations are automatically namespaced according to the module's registration path. In simple terms, it identifies which module's state, getters, actions, and mutations you are calling.

After that, how do you call it?

If you write Vuex in JS, you can reference the module like this:

import { mapState, mapActions } from "vuex";

export default {
    computed: {
        // The first argument is the module name, e.g. 'refundModule'
        ...mapState('refundModule', {
            a: state => state.a,
            b: state => state.b
        })
    },
    methods: {
        ...mapActions('refundModule', ['foo', 'bar'])
    }
};

There are many similar examples on the Vuex official website; interested readers can consult the official documentation on the usage of Modules.

If you’re writing Vuex in TS, you can call modules from a page like this:

import { Component, Prop, Vue, Watch } from 'vue-property-decorator';
import { State, Action, Getter, namespace } from 'vuex-class';

const refundMdl = namespace('refundModule');

@Component
export default class Refund extends Vue {
    @refundMdl.State(state => state.orderId) orderId;
    @refundMdl.Action('getOrderId') getOrderId;
}

namespace must be imported when calling, and it is through the namespace that the module's public state and methods are accessed.

After the Vuex modularization above, we can manage the module state that needs separating and slim down the Vuex file.

III. Optimize the user experience

At the beginning of 2020, several PLUS member channel home pages were repeatedly revised with a new look, and the product team presented users with friendlier interfaces and features. As the front-end team, besides completing requirements, we have pursuits of our own; as part of that, we also wanted to give the user experience the optimization it deserves:

3.1 Reduce user waiting time

After the revision of multiple PLUS member channel pages, the floors each user sees, and their order, may differ, so the front end relies on data from back-end interfaces to display the floors. This means a page must wait for the interface to return data before it can display as configured; once an interface is slow, the page takes a long time to render, which is quite bad for the user experience. We cannot control how fast the back end returns, so on the front end we did the following:

1. Add skeleton screens to show the page's overall style before it renders, reducing white-screen time;

2. Place user-information data directly in the HTML page; the front end no longer requests an interface and can directly render parts of the personal card;

3. Confirm with product the floors that will definitely appear on the first screen, such as the personal-card floor and the membership-privilege floor. The front end reserves space for these floors and shows their preliminary styles before the interface returns, avoiding floor-order jitter when the configuration arrives;

4. For these floors, the front end sets preliminary data matching the back end's data format to render a preliminary floor page; after the interface returns, the key fields in the page are replaced.

As shown in the figure above, the basic structure of the page can be displayed before the data interface responds, reducing white-screen time and user waiting.
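As a concrete illustration of item 2, the straight-out data might be embedded like this (a sketch; the global name and fields are illustrative, not our actual contract):

// Inlined by the server in a <script> tag when the HTML is generated
window.__PLUS_USER_INFO__ = {
    nickname: '...',
    plusStatus: '...'
};
// The personal card renders from this object immediately, with no extra request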

3.2 Multiple safeguards for floor display

Beyond reducing waiting time as described above, we also added multiple safeguards to the page display. Because every floor of the page is configured and displayed from interface data, once no interface returns data, the page shows nothing and the user is left with a skeleton screen. To pursue a better experience, can we take out multiple layers of insurance to minimize the damage when one interface fails? First, we asked the back end to write the floor configuration directly into the HTML page, and the front end renders the first screen's floors from that information; this lets us determine which floors to show directly from the inlined data. If that layer fails, we call the configuration interface for the corresponding floors. And if, worse still, that interface is also down, then, to spare the user a table-flipping mood, we directly call the interfaces of the floors the first screen will definitely request. Through these three layers of interface safeguards, the risk of a blank screen caused by a problem with any one interface is reduced.

To summarize the safeguarded floor display: first use the first screen's straight-out data; failing that, fetch the floor-configuration interface rather than directly requesting the first-screen floors' internal data, avoiding white screens caused by bad floor data and reducing user complaints.
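In pseudo-form, the three layers might chain like this (all names are illustrative):

// A sketch of the three-layer floor-configuration fallback
async function getFloorConfig() {
    // Layer 1: configuration written straight into the HTML by the back end
    if (window.__FLOOR_CONFIG__) return window.__FLOOR_CONFIG__;
    try {
        // Layer 2: the floor-configuration interface
        return await fetchFloorConfig();
    } catch (e) {
        // Layer 3: request the first screen's guaranteed floors directly
        return requestFirstScreenFloors();
    }
}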

3.3 Improve and optimize PWA

After adding PWA caching to the risk-control user-status page last year, we wanted to strike while the iron was hot and extend it, but found that all requests under the same domain would be intercepted by the serviceWorker, so we stopped and took the long view.

So how do we make the PWA take effect only in the specified user states?

Plan 1

We tried a lot. In the PLUS project, different user states live in different HTML pages. Could the HTML send a message to the serviceWorker, so that it only takes effect in certain scenarios? In the HTML:

navigator.serviceWorker.controller.postMessage()


And in the serviceWorker:

self.addEventListener('message', function (e) { /* ... */ });

However, this plan does not work: once the serviceWorker is registered, on the next visit the PWA starts before the HTML has even been read successfully, so this approach has problems.

Plan 2

A PWA can be registered with a specified scope:

 navigator.serviceWorker.register('service-worker.js', {scope: './xxx'})

But our different states all live under plus.m.jd.com/index, so this plan does not suit us either.

Plan 3

Intercept the serviceWorker's fetch, and control page reads by checking certain flags. For example, use the user state returned by the getUserInfo interface to decide whether fetch interception currently needs to be enabled, and disable the PWA in some states via a blacklist. The code is as follows:

self.addEventListener('fetch', function (event) {
    event.respondWith(
        caches.match(event.request).then(
            (response) =>
                response ||
                fetch(event.request.clone()).then(function (httpRes) {
                    if (/getUserInfo/gi.test(event.request.url)) {
                        // Clone before reading the body, so the original
                        // response can still be returned to the page
                        httpRes.clone().json().then((res) => {
                            // do something with the returned user state
                        });
                    }
                    return httpRes;
                })
        )
    );
});

After the processing above, once you visit the risk-control home page, the serviceWorker intercepts all states under the same domain, as shown below:

After the modification, PWA interception is disabled on the other user-status pages, as shown below:

3.4 Image Processing

Pages now use large numbers of images, which give users a more direct visual impact; as the entrance for PLUS members, we also display many products, so we put some thought into image processing. PLUS member pages use JD's image system, and what delighted us is that images can be processed by configuring parameters on the URL, for example:

http://img30.360buyimg.com/test/s720x540_jfs/t2362/199/2707005502/100242/616257ce/56e66b21N7b8c2be8.jpg

s720x540_jfs, a parameter added between the business name and the file address, scales the image to 720 wide and 540 high; adding the .webp suffix directly to the end of the URL serves the image in WebP format. Server-side images can then be accessed like this:

function imgCut(item, str) {
    if (/(((img){1}\d{2})|m{1}).360buyimg.com/.test(item)) {
        if (str) {
            item = item.replace('jfs', 's' + str + '_jfs');
        }
        if (check_support_webp()) {
            return item + '.webp'; // Append the webp suffix when WebP is supported
        } else {
            return item;
        }
    } else {
        return item;
    }
}
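The check_support_webp helper is not shown above; a common minimal implementation (a sketch, not necessarily what our project uses) tests whether canvas can encode WebP:

function check_support_webp() {
    try {
        // Browsers that support WebP can export a canvas as image/webp
        return document.createElement('canvas')
            .toDataURL('image/webp')
            .indexOf('data:image/webp') === 0;
    } catch (e) {
        return false;
    }
}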

Requesting server-side images this way not only crops them, ensuring consistent image sizes on a page, but also converts seamlessly to WebP; it is truly a great tool for development! Note that this handles images requested from the image server, not images bundled locally by the front end.

Conclusion

Looking back, the PLUS member project has, in the blink of an eye, gathered dozens of pages with complex logic, iterated through multiple versions, and established several branches. Stepping back from the project, we recently found ourselves lacking the habit of steering it from the big picture in a way that guarantees it can carry iterating requirements. The PLUS member project still has many areas to perfect; along the way we have also received strong support and good suggestions from other teams. We will keep polishing it, establish sound mechanisms, improve code quality, improve the user experience, and escort PLUS members all the way.

I will end with one of my favorite lines: "I walk alone, yet as if leading a million men. Though my body is in the corner of a well, my heart reaches for the stars; with poetry in my eyes, the distance lies within reach!" I am full of hope for life and the future.