Background
Some time ago, while working on new requirements for a project that is packaged and built with webpack 3, I found that packaging was slow and the development experience was poor, so I did some simple optimization and organized the optimization plan here.
Analyzing packaging speed
The first step in tuning is to know how slow our build actually is. speed-measure-webpack-plugin measures the time spent in each phase of a webpack build:
const SpeedMeasurePlugin = require("speed-measure-webpack-plugin")
const smp = new SpeedMeasurePlugin()
// ...
module.exports = smp.wrap(prodWebpackConfig)
From the measurements we can see that packaging is slow mainly because the loaders that process style files and JS files take a long time.
When a file is modified and recompiled, the time spent in each packaging phase is as follows.
The recompile itself is not that long, yet it actually takes noticeably more time before the new changes show up in the browser. Why?
Analyzing the bundle
- Generate an analysis file for the packaged bundle
"analyse": "webpack --config ./webpack.config.js --profile --json>states.json"Copy the code
- Analyze the JSON file
We can see that with webpack 3.5.6, 705 modules were involved in the build, generating 2 chunks and taking about 51s.
- Analyze the chunks file further
The two chunks are the app and vendor files respectively, and app.js alone is as large as 6 MB.
In the vast majority of cases, not all modules are required when the application starts. If all of these modules are packaged together, the whole bundle.js has to be loaded even when the application only needs one or two modules to work. Since front-end applications usually run in the browser, this directly hurts response time and wastes a lot of traffic and bandwidth.
The bundle dependency table in the development environment also shows that most of the content of node_modules is packaged into app.js, which is the main cause of our slow packaging.
- When we update our code, HMR repackages app.js, which means that the contents of the unmodified node_modules are repackaged into app.js as well
This means that every time the code changes, the browser has to reload that 6 MB file, so it takes a long time to see even a minor change.
- Bundle Optimize Helper optimization suggestions
Entrypoints are code that is loaded on page load. To get the best possible user experience, you should keep the total size of entrypoints to less than 200 KB and load the rest dynamically by using code splitting.
When we upload the JSON file to Bundle Optimize Helper, the recommendation is the same: split the code, and keep the entry file size under 200 KB.
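As a sketch of what "load the rest dynamically" can look like, a dynamic import() tells webpack to split a module into a separate chunk that is only fetched when it is actually needed (the module path and exported function below are hypothetical, and in webpack 3 this syntax also needs Babel's syntax-dynamic-import plugin):

```js
// Hypothetical example: fetch a heavy report module only when the user opens the report view,
// instead of shipping it in the entry bundle
function openReport() {
  import(/* webpackChunkName: "report" */ './report/chart')
    .then(({ drawChart }) => {
      drawChart(document.getElementById('report')) // drawChart is a hypothetical export
    })
    .catch((err) => console.error('failed to load the report chunk', err))
}
```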
Optimization
Code splitting
An entry file of up to 6 MB obviously hurts the experience, so the first step of the optimization is code splitting. Code splitting packages the project's resource modules into different bundles according to rules we design, which reduces the application's startup cost and improves response speed.
- The project is already configured with multiple entries, and third-party library files such as Lodash are packaged separately into a vendor.js file
- We can reduce the size of app.js by packing the node_modules content that the entry file depends on into a common chunk and packaging the business code separately, as shown below
// in webpack.config.js (requires: const path = require('path'), const webpack = require('webpack'))
new webpack.optimize.CommonsChunkPlugin({
  name: 'common',
  minChunks: function(module) {
    // only pull JS modules that live inside node_modules into the common chunk
    return (
      module.resource &&
      /\.js$/.test(module.resource) &&
      module.resource.indexOf(path.join(__dirname, './node_modules')) === 0
    )
  }
})
- Use CommonsChunkPlugin again to extract a manifest.js file, to ensure that the hash of common.js does not change with every build
new webpack.optimize.CommonsChunkPlugin({
  name: 'manifest',
  // pull the webpack runtime out of these chunks so their hashes stay stable
  chunks: ['vendor', 'common', 'app']
})
- HashedModuleIdsPlugin is used to keep the module ids that modules reference stable; CommonsChunkPlugin extracts the dependencies specified in the entries, and the manifest chunk stores the runtime functions and module identifiers
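For reference, a minimal sketch of where HashedModuleIdsPlugin sits in the config alongside the two CommonsChunkPlugin instances above (the rest of the config is omitted):

```js
const webpack = require('webpack')

module.exports = {
  // ...entry, output and loaders omitted
  plugins: [
    // derive short hash-based module ids, so adding or removing one module
    // does not shift the ids (and therefore the hashes) of unrelated chunks
    new webpack.HashedModuleIdsPlugin(),
    // ...the two CommonsChunkPlugin instances shown above go here
  ]
}
```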
You can see that at this point some of the third-party libraries have been extracted out of app.js, and its size has dropped from 6 MB to 3.49 MB.
- Let's look at the newly packaged files when we modify the business code
When the business code is modified it is repackaged, but the third-party libraries we depend on are not. The repackaged business code app.js is still 3.49 MB.
We can see that the unchanged dependency bundles go through the 304 negotiated cache, while the changed app.js is re-requested; since it is smaller than before, loading performance improves.
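This behaviour assumes the output filenames carry each chunk's chunkhash, which is what the manifest extraction above is meant to keep stable; a sketch of such an output config (the exact naming pattern used in the project may differ):

```js
const path = require('path')

module.exports = {
  // ...
  output: {
    path: path.resolve(__dirname, 'dist'),
    // chunkhash only changes when the chunk's own content changes,
    // so an unchanged chunk such as common.js keeps the same URL and stays cacheable
    filename: '[name].[chunkhash:8].js',
    chunkFilename: '[name].[chunkhash:8].js'
  }
}
```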
Speed optimization
From the earlier speed analysis we already know that most of the packaging time is spent in loaders.
Caching is clearly essential during development. We add cache-loader in front of the loaders that process style files and JS files to cache their results to disk, which significantly speeds up subsequent builds:
module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        // `loaders` stands for the existing loader chain for this rule;
        // cache-loader writes their results to disk and reuses them on the next build
        use: ['cache-loader', ...loaders],
        include: path.resolve('src'),
      },
      {
        test: /\.scss$/,
        use: ['cache-loader', ...loaders],
        include: path.resolve('src'),
      },
    ],
  },
};
With caching in place we can see a significant improvement in packaging speed.
After optimization, the entry bundle is 42% smaller and the packaging time has dropped from 51s to 11s.
Do More
There is no fixed recipe for webpack packaging optimization; we have to split, unbundle and compress according to the project itself. Common optimization ideas fall into four areas (a small config sketch for the first two follows the list):
- Optimize search time, that is, the time to retrieve all dependent modules when packaging begins
- Optimize the parsing time, that is, the time it takes to parse corresponding files based on the configured Loader
- Optimize compression time, that is, the time webpack spends compressing and optimizing the code
- Optimize the secondary packaging time, which is the time it takes to repackage
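As a hedged illustration of the first two points, narrowing the module search scope and limiting what the loaders have to parse looks roughly like this (the directories and loader chain are assumptions, not the project's actual config):

```js
const path = require('path')

module.exports = {
  // ...
  resolve: {
    // search these directories only, instead of walking up through every parent node_modules
    modules: [path.resolve(__dirname, 'src'), 'node_modules'],
    // fewer extensions to try means fewer file-system lookups per import
    extensions: ['.js', '.json']
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        // only let the loader parse our own source code
        include: path.resolve(__dirname, 'src'),
        exclude: /node_modules/,
        use: ['babel-loader'] // assumed loader chain
      }
    ]
  }
}
```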
In the current production build, UglifyJsPlugin is used for code compression, but this plugin is single-threaded. During compression the code is first parsed into an AST (abstract syntax tree), the AST is then analyzed and transformed according to the rules, and finally the processed AST is turned back into JS code. This kind of computation-intensive work is very time consuming. webpack 4 has TerserPlugin built in to handle JS compression, and we can enable its multi-process mode to further improve packaging speed.
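For reference, after a webpack 4 upgrade the multi-process compression mentioned above would look roughly like this with terser-webpack-plugin (a sketch, not something we have applied yet):

```js
const TerserPlugin = require('terser-webpack-plugin')

module.exports = {
  mode: 'production',
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        // spawn multiple worker processes for compression (defaults to CPU count - 1)
        parallel: true
      })
    ]
  }
}
```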
We have already split the chunks, but app.js is still large, at 3.49 MB before compression. splitChunks in webpack 4 provides richer configuration rules, so we can extract the common parts of the code as well as asynchronously loaded modules to further reduce code size.
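And a sketch of the richer splitting rules splitChunks offers in webpack 4; the cache-group names and thresholds here are illustrative rather than tuned for this project:

```js
module.exports = {
  // ...
  optimization: {
    splitChunks: {
      // consider initial as well as asynchronously loaded chunks
      chunks: 'all',
      cacheGroups: {
        // everything imported from node_modules goes into a vendors chunk
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          priority: -10
        },
        // code shared by at least two chunks is pulled into a common chunk
        common: {
          name: 'common',
          minChunks: 2,
          priority: -20,
          reuseExistingChunk: true
        }
      }
    }
  }
}
```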
Considering the stability of the project, we will postpone the upgrade of Webpack.
The above summarizes the optimization experience from developing this webpack-based project. Understanding how webpack packages code and making good use of webpack's new features will definitely bring us a better development experience.