Original address: mp.weixin.qq.com/s/l1LRRRwsV…

The author has spent nearly half a year on a large-scale project refactor and applied a great deal of knowledge about "performance optimization" and "design patterns" along the way. Both topics come up constantly, whether at work or in job interviews. Taking advantage of this refactor, I have carefully sorted out some general-purpose performance optimization suggestions and combined them with whatever I found useful during four years of daily development practice at NetEase. Sharing it all here! (As space is limited, design patterns get their own separate article.)

Some of the performance tuning tips are probably already well known, but I have also listed details that are easy to overlook in day-to-day work.

Performance optimization is usually treated as a disordered pile of tricks, but in my view it is an ordered scenario: many optimizations pave the way for one another, or even drive one another. From the process perspective, performance optimization divides into the "network level" and the "rendering level". From the result perspective, it divides into the "time dimension" and the "volume dimension". Put simply: when a user visits a website, make it fast and accurate, and present it to the user immediately.

All performance optimization is carried out around two major and two minor levels. The core levels are the network level and the rendering level; the auxiliary levels are the time dimension and the volume dimension, and the auxiliary levels permeate the core levels. On that basis, the author has sorted out "nine strategies" and "six metrics" for front-end performance optimization. These strategies and metrics are the author's own definitions, intended to give performance optimization some structure.

Combining these characteristics at work or in an interview is a perfect way to present the knowledge that performance optimization extends into. "High energy ahead — even if you don't read it all now, bookmark it!"

To keep the focus on the theme, all code examples show only the core configuration; other settings are omitted. Let's start with the network-level optimizations among the nine strategies. There is no question that the goal is to make resources smaller and load faster, so the suggestions come from the following four aspects.

- "Build strategy": based on build tools (webpack/Rollup/Parcel/esbuild/Vite/Gulp)
- "Image strategy": based on image types (JPG/PNG/SVG/WebP/Base64)
- "Distribution strategy": based on content delivery networks (CDN)
- "Cache strategy": based on the browser cache (strong cache/negotiated cache)

These four aspects are implemented step by step and permeate the whole project process. The "build strategy" and "image strategy" happen in development; the "distribution strategy" and "cache strategy" happen in production. At each stage, check whether the relevant strategies have been applied in order — that way you maximize the scenarios where performance optimization applies.

This strategy mainly deals with webpack and is the most common performance optimization strategy. Other build tools handle much the same things, perhaps just with different configuration. When it comes to webpack performance optimization, it naturally starts from the time dimension and the volume dimension.

The author finds that webpack v5's overall compatibility is not yet great — some features may still have problems with third-party tools — so we have not upgraded for now and continue to use v4 as the production tool. The configurations below are therefore based on v4, though they differ little from v5. The author makes six performance optimization suggestions for each of the two dimensions, twelve in total, each summarized in a short phrase to aid memory and digestion. ⏱ marks suggestions that reduce packaging time; 📦 marks suggestions that reduce bundle size.

- "Reduce packaging time": reduce scope, cache copies, targeted search, build ahead, parallel build, visual structure
- "Reduce bundle size": split code, tree shaking, dynamic shim, load on demand, scope hoisting, compress resources

⏱ Reduce scope

Configure include/exclude to narrow the range of files a loader searches and avoid unnecessary transpilation. node_modules is such a huge directory — how much extra time would it take to scan every file in it?

include/exclude is usually configured in each loader. Using src as the source directory, you can do the following; of course, include/exclude can be adjusted as needed.

```js
export default {
  // ...
  module: {
    rules: [{
      exclude: /node_modules/,
      include: /src/,
      test: /\.js$/,
      use: "babel-loader"
    }]
  }
};
```

⏱ Cache copies

Configure cache options so that loaders cache compiled copies of files. The advantage is that only modified files are recompiled on a rebuild. Why recompile unmodified files along with modified ones?

Most loaders/plugins provide an option to use a compile cache, usually containing the word "cache". Take babel-loader and eslint-webpack-plugin as examples.

```js
import EslintPlugin from "eslint-webpack-plugin";

export default {
  // ...
  module: {
    rules: [{
      // ...
      test: /\.js$/,
      use: [{
        loader: "babel-loader",
        options: { cacheDirectory: true }
      }]
    }]
  },
  plugins: [
    new EslintPlugin({ cache: true })
  ]
};
```

⏱ Targeted search

Configure resolve to improve file search speed by pointing webpack at the required file paths. This helps when third-party libraries imported in the normal form would cause errors, or when you want the program to automatically index certain file types.

alias maps module paths, extensions lists file suffixes that can be omitted, and noParse skips files that have no dependencies. Configuring alias and extensions is usually enough.

```js
export default {
  // ...
  resolve: {
    alias: {
      "#": AbsPath(""), // shortcut for the root directory
      "@": AbsPath("src"), // shortcut for the src directory
      swiper: "swiper/js/swiper.min.js" // shortcut for a module import
    },
    extensions: ["js", "ts", "jsx", "tsx", "json", "vue"] // suffixes that can be omitted in import paths
  }
};
```

⏱ Build ahead

"Configure DllPlugin to pre-package third-party dependencies" — the benefit is that the DLL is completely separated from business code, and only business code is built each time. This is an ancient configuration dating back to webpack v2; it is largely unnecessary from webpack v4+ onward, because the performance gains of newer versions make DllPlugin's benefits negligible.

"DLL" stands for dynamic link library — a code library that multiple programs can use at the same time. In the front-end field it acts as a kind of alternative cache: common code is packaged into DLL files and stored on disk, and on the next build those DLL files are dynamically linked rather than packaged again, improving build speed and reducing packaging time.

In general, configuring DLLs is more complex than other configurations. The process can be roughly divided into three steps.

First, tell the build script which dependencies to make into DLLs, generating the DLL files and the DLL manifest file.

```js
import { DefinePlugin, DllPlugin } from "webpack";

export default {
  // ...
  entry: {
    vendor: ["react", "react-dom", "react-router-dom"]
  },
  mode: "production",
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          chunks: "all",
          name: "vendor",
          test: /node_modules/
        }
      }
    }
  },
  output: {
    filename: "[name].dll.js", // output file name
    library: "[name]", // global variable name: other modules get module paths from this variable
    path: AbsPath("dist/static") // output directory path
  },
  plugins: [
    new DefinePlugin({
      "process.env.NODE_ENV": JSON.stringify("development") // in DLL mode, overwrite production with development (enables third-party dependency debugging)
    }),
    new DllPlugin({
      name: "[name]", // global variable name, must match output.library
      path: AbsPath("dist/static/[name]-manifest.json") // manifest output path
    })
  ]
};
```

Then configure an execution script in package.json and run it before each build to package out the DLL files.

```json
{
  "scripts": {
    "dll": "webpack --config webpack.dll.js"
  }
}
```

Finally, use html-webpack-tags-plugin to automatically insert the DLL files at build time.

```js
import { DllReferencePlugin } from "webpack";
import HtmlTagsPlugin from "html-webpack-tags-plugin";

export default {
  // ...
  plugins: [
    // ...
    new DllReferencePlugin({
      manifest: AbsPath("dist/static/vendor-manifest.json") // manifest file path
    }),
    new HtmlTagsPlugin({
      append: false, // insert before the generated resources
      publicPath: "/", // use publicPath
      tags: ["static/vendor.dll.js"] // resource path
    })
  ]
};
```

Whether those few saved seconds are worth the configuration cost is up to you. Of course, you can also use autodll-webpack-plugin instead of configuring it manually.

⏱ Parallel build

Configure thread-loader to turn loader processing from single-threaded to multi-threaded, releasing the concurrency advantage of multi-core CPUs. When webpack builds a project, there are many files to parse and process; building is a computationally intensive operation that gets slower as the number of files increases.

Webpack runs on Node's single-threaded model, which simply means that the tasks webpack needs to handle are processed one by one, not several at the same time.

File I/O and computation are unavoidable — so can builds be sped up by having webpack handle multiple tasks at once, harnessing the power of a multi-core CPU? thread-loader does exactly that, spawning workers based on the number of CPUs.

One caveat: if the project does not contain many files, skip this suggestion — spawning multiple worker threads has its own performance cost.

```js
import Os from "os";

export default {
  // ...
  module: {
    rules: [{
      // ...
      test: /\.js$/,
      use: [{
        loader: "thread-loader",
        options: { workers: Os.cpus().length }
      }, {
        loader: "babel-loader",
        options: { cacheDirectory: true }
      }]
    }]
  }
};
```

⏱ Visual structure

"Configure BundleAnalyzer to analyze the structure of the bundle" — the benefit is finding out what makes the output too large, and from that cause deriving an optimization plan that also reduces build time. BundleAnalyzer is a well-known webpack analysis plugin that visualizes the bundle's module composition, size proportions, inclusion relationships, dependency relationships, file duplication, compressed-size comparisons, and other data.

With webpack-bundle-analyzer configured, we can quickly locate the problems involved.

```js
import { BundleAnalyzerPlugin } from "webpack-bundle-analyzer";

export default {
  // ...
  plugins: [
    // ...
    new BundleAnalyzerPlugin()
  ]
};
```

📦 Split code

"Separate the code of each module and extract the common parts" — the benefit is reducing the frequency of duplicated code in the bundle. webpack v4 uses splitChunks in place of CommonsChunkPlugin for code splitting.

splitChunks can be configured in many ways; for details, refer to the official documentation.

```js
export default {
  // ...
  optimization: {
    runtimeChunk: { name: "manifest" }, // split out the webpack runtime
    splitChunks: {
      cacheGroups: {
        common: {
          minChunks: 2,
          name: "common",
          priority: 5,
          reuseExistingChunk: true, // reuse existing chunks
          test: AbsPath("src")
        },
        vendor: {
          chunks: "initial", // chunk type
          name: "vendor", // chunk name
          priority: 10, // priority
          test: /node_modules/ // file-matching regular expression
        }
      }, // cache groups
      chunks: "all" // chunk splitting type: all modules / async modules / initial modules
    } // code splitting
  }
};
```

📦 Tree shaking

"Remove unreferenced code from the project" — the benefit is removing duplicate and unused code. Tree shaking first appeared in Rollup and is a core concept of Rollup; webpack v2 later borrowed it.

Tree shaking only works with the ESM specification and is ineffective for other module specifications, because static structure analysis is only possible with the static import/export semantics that import/export provide. Therefore, business code must be written with the ESM specification for tree shaking to remove duplicate and unused code.
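To make the static-analysis idea concrete, here is a toy simulation (not webpack's actual implementation; all names are hypothetical) of what a bundler can do once imports are statically known:

```javascript
// With ESM, import/export are static, so a bundler knows exactly which named
// exports are ever imported and can drop the rest from the output.
const moduleExports = {
  square: "export const square = n => n * n;",
  cube: "export const cube = n => n * n * n;" // never imported anywhere
};
const importedNames = ["square"]; // statically collectable from import statements
const bundled = Object.entries(moduleExports)
  .filter(([name]) => importedNames.includes(name))
  .map(([, code]) => code)
  .join("\n");
console.log(bundled.includes("cube")); // false — the dead export is shaken out
```

With CommonJS, `require` calls can be dynamic (computed at runtime), so this kind of analysis is not reliably possible — which is why tree shaking demands ESM.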

In webpack, you only need to set the build mode to production for tree shaking to take effect. Meanwhile, write business code with the ESM specification: use import to import modules and export to export them.

```js
export default {
  // ...
  mode: "production"
};
```

📦 Dynamic shim

"Return a code shim for the current browser, based on its UA, from a shim service" — the benefit is not bundling heavyweight polyfills. Every build configured with @babel/preset-env and core-js bundles polyfills in for certain needs, which undoubtedly adds to the code size again.

The useBuiltIns option of @babel/preset-env can import polyfills on demand.

- "false": ignore target.browsers and bundle all polyfills
- "entry": bundle some polyfills based on target.browsers (only polyfills unsupported by the target browsers are included; requires `import "core-js/stable"` in the entry file)
- "usage": bundle some polyfills based on target.browsers and the ES6+ usage detected in the code (no need to `import "core-js/stable"` in the entry file)

The dynamic shim is recommended here. A dynamic shim service returns the polyfills for the current browser based on its UserAgent: from the UserAgent, it works out which features the current browser does not support and returns polyfills for exactly those features. Those interested in this area can look into polyfill-library and polyfill-service.
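For reference, a minimal @babel/preset-env configuration sketch (the targets shown are illustrative assumptions, not the article's actual values) that bundles polyfills on demand with `useBuiltIns: "usage"`:

```json
{
  "presets": [
    ["@babel/preset-env", {
      "targets": { "browsers": ["> 0.5%", "ie >= 11"] },
      "useBuiltIns": "usage",
      "corejs": 3
    }]
  ]
}
```

With "usage", babel injects only the polyfills that the code actually uses and the target browsers lack.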

Two dynamic shim services are provided below; open the links in different browsers to see the different polyfill outputs. IE surely gets the most polyfills, proudly declaring: "I am what I am, a firework of a different color."

- "Official CDN service": polyfill.io/v3/polyfill…
- "Alibaba CDN service": polyfill.alicdn.com/polyfill.mi…

Use html-webpack-tags-plugin to automatically insert the dynamic shim at build time.

```js
import HtmlTagsPlugin from "html-webpack-tags-plugin";

export default {
  plugins: [
    new HtmlTagsPlugin({
      append: false, // insert before the generated resources
      publicPath: false, // do not use publicPath
      tags: ["polyfill.alicdn.com/polyfill.mi…"] // resource path
    })
  ]
};
```

📦 Load on demand

"Package routing pages / triggerable features as separate files and load them only when they are used" — the benefit is reducing the burden of first-screen rendering. The more features a project has, the larger the bundle, and the slower the first screen renders.

First-screen rendering only needs its own JS code and nothing else, so on-demand loading fits perfectly. webpack v4 provides module splitting and on-demand loading, which combined with import() shrink the first-screen bundle and thus speed up first-screen rendering; the JS code for a feature is loaded only when that feature is triggered.

webpack v4 provides magic comments to name split chunks; without the comment, a split chunk cannot be attributed to a particular business module, so a business module usually shares one chunk name via the comment.

```js
const Login = () => import( /* webpackChunkName: "login" */ "../../views/login");
const Logon = () => import( /* webpackChunkName: "logon" */ "../../views/logon");
```

If the console reports an error at runtime, add @babel/plugin-syntax-dynamic-import to the babel configuration in package.json.

```json
{
  "babel": {
    "plugins": [
      "@babel/plugin-syntax-dynamic-import"
    ]
  }
}
```

📦 Scope hoisting

"Analyze module dependencies and merge the packaged modules into one function" — the benefit is reducing function declarations and memory costs. Scope hoisting first appeared in Rollup and is a core concept of Rollup; webpack v3 later borrowed it.

Before scope hoisting is enabled, built code contains a large number of function closures. Because of module dependencies, webpack wraps each module in an IIFE when bundling, and the mass of closure-wrapped code increases the bundle size (the more modules, the more noticeable). More scoped functions are also created while the code runs, resulting in greater memory overhead.

After scope hoisting is enabled, built code is placed into a single function scope in the order the modules were imported, with some variables renamed appropriately to prevent name conflicts — thereby reducing function declarations and memory costs.
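A rough sketch of the difference (a simplified, hypothetical stand-in for webpack's real output — the tiny module runtime shown here is not webpack's actual one):

```javascript
// Before scope hoisting: each module is wrapped in its own function, executed
// by a small module runtime — one closure per module.
const modules = {
  "./util.js": (exports) => { exports.double = n => n * 2; },
  "./index.js": (exports, req) => { exports.result = req("./util.js").double(21); }
};
function run(id) {
  const exports = {};
  modules[id](exports, dep => run(dep));
  return exports;
}

// After scope hoisting: both modules are concatenated into one scope in import
// order, with variables renamed to avoid collisions — no per-module wrappers.
const util_double = n => n * 2;
const index_result = util_double(21);

console.log(run("./index.js").result, index_result); // 42 42
```

Both forms compute the same value, but the hoisted form declares no per-module wrapper functions and creates no extra scopes at runtime.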

In webpack, simply set the build mode to production for scope hoisting to take effect, or explicitly enable concatenateModules.

```js
export default {
  // ...
  mode: "production"
};

// or explicitly:
export default {
  // ...
  optimization: {
    // ...
    concatenateModules: true
  }
};
```

📦 Compress resources

"Compress HTML/CSS/JS code, and compress font/image/audio/video files" — the benefit is cutting bundle size more effectively. Optimizing code to the extreme may be less effective than optimizing the size of a single resource file.

For HTML code, use the html-webpack-plugin to enable compression.

```js
import HtmlPlugin from "html-webpack-plugin";

export default {
  // ...
  plugins: [
    // ...
    new HtmlPlugin({
      // ...
      minify: {
        collapseWhitespace: true,
        removeComments: true
      }
    })
  ]
};
```

For CSS/JS code, use the following plugins to enable compression. optimize-css-assets-webpack-plugin is packaged on top of cssnano, while uglifyjs-webpack-plugin and terser-webpack-plugin are official webpack plugins. Also note that compressing JS code requires distinguishing ES5 from ES6.

- optimize-css-assets-webpack-plugin: compress CSS code
- uglifyjs-webpack-plugin: compress ES5 JS code
- terser-webpack-plugin: compress ES6 JS code

```js
import OptimizeCssAssetsPlugin from "optimize-css-assets-webpack-plugin";
import TerserPlugin from "terser-webpack-plugin";
import UglifyjsPlugin from "uglifyjs-webpack-plugin";

const compressOpts = type => ({
  cache: true, // cache files
  parallel: true, // parallel processing
  [`${type}Options`]: {
    beautify: false,
    compress: { drop_console: true }
  } // compression options
});
const compressCss = new OptimizeCssAssetsPlugin({
  cssProcessorOptions: {
    autoprefixer: { remove: false }, // set autoprefixer to keep outdated prefixes
    safe: true // avoid cssnano recalculating z-index
  }
});
const compressJs = USE_ES6
  ? new TerserPlugin(compressOpts("terser"))
  : new UglifyjsPlugin(compressOpts("uglify"));

export default {
  // ...
  optimization: {
    // ...
    minimizer: [compressCss, compressJs] // code compression
  }
};
```

For font/audio/video files there is really no plugin available, so compress them with the appropriate tools before releasing the project to production. For image files, most loader/plugin packages rely on image-processing tools whose binaries are hosted on overseas servers, so installation often fails. Check out my post "Talking about the Dangerous Pits of NPM Mirroring" for solutions.

In view of this, the author developed a webpack plugin for compressing images; for details, see tinyimg-webpack-plugin.

```js
import TinyimgPlugin from "tinyimg-webpack-plugin";

export default {
  // ...
  plugins: [
    // ...
    new TinyimgPlugin()
  ]
};
```

The build strategies above are integrated into my open-source bruce-cli, an automated build scaffold for React/Vue applications. It works out of the box with zero configuration, making it ideal for beginners, intermediate developers, and rapid project development; you can also override its defaults by creating a brucerc.js file. Focus on writing business code instead of build code and keep the project structure cleaner. For details, click through — remember to check the documentation when using it, and support it with a Star!

Image strategy

This strategy mainly deals with image types and is a performance optimization strategy with a low access cost. Just do the following two things.

- "Image selection": understand the characteristics of each image type and which application scenarios suit it best
- "Image compression": compress images in the build process, or with standalone tools before release

Image selection requires knowing the relative size/quality/compatibility/request/compression/transparency/scenario trade-offs of each image type, so you can quickly judge which type to use in which scenario.

| Type | Size | Quality | Compatibility | Request | Compression | Transparency | Scenario |
| --- | --- | --- | --- | --- | --- | --- | --- |
| JPG | small | medium | high | yes | lossy | not supported | backgrounds, carousels, colorful images |
| PNG | large | high | high | yes | lossless | supported | icons, transparent images |
| SVG | small | high | high | yes | lossless | supported | icons, vector graphics |
| WebP | small | medium | low | yes | both | supported | depends on compatibility |
| Base64 | depends | high | high | no | lossless | supported | icons |

Image compression can be done in the build strategy's "compress resources" step, or with standalone tools. Since most webpack image-compression tools fail to install or run into various environment issues (you know what I mean), I recommend compressing images with a standalone tool before releasing the project to production — it is stable and does not increase packaging time.

Good image compression tools are more or less the following — QuickPicture, ShrinkMe, Squoosh, TinyJpg, and TinyPng. They differ mainly in which file types they can compress, how well they preserve texture, and their quantity and file-size limits. If you know a better tool, add it in the comments!

If you don't want to drag image files back and forth across websites, you can use the author's open-source batch image-processing tool img-master instead. Besides compression, it also offers grouping, marking, and transformation features. I now use it for all the projects I am responsible for, and it has served me well!

The image strategy is a very low-cost but highly effective performance optimization: a single image can easily outweigh everything the build strategy saves.

Distribution strategy

This strategy mainly deals with content delivery networks. It is the performance optimization strategy with the highest access cost and requires sufficient financial support.

Although the access cost is high, most companies buy CDN servers anyway, so you rarely need to worry about deployment — just use them. To maximize the CDN's effect, follow these two points as far as possible.

- "Route all static resources through the CDN": during development, determine which files count as static resources
- "Put static resources and the main page under different domain names": avoid requests carrying cookies

A "content delivery network" (CDN) is a group of distributed servers that store copies of resources and serve data requests based on the proximity principle. Its core features are caching and back-to-origin: caching copies resources to the CDN servers; back-to-origin means requesting the upstream server and copying the resource to the CDN server when the resource has expired or does not exist.

A CDN reduces network congestion and improves users' response speed and hit rate. Relying on servers deployed in various locations, this intelligent virtual network built on top of the existing network lets users obtain the resources they need from a nearby node, through the central platform's scheduling, load balancing, and content-distribution modules. That is the CDN's ultimate mission.
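The cache and back-to-origin behavior can be sketched as a toy model (the path, TTL, and in-memory "edge cache" below are hypothetical illustrations, not a real CDN API):

```javascript
// An edge node serves from its cache while the copy is fresh, and goes back to
// the origin server (then re-caches the copy) when it is missing or expired.
const originServer = { "/app.js": { body: "console.log('hi')", maxAge: 60 } };
const edgeCache = new Map();

function cdnFetch(path, now = Date.now()) {
  const hit = edgeCache.get(path);
  if (hit && hit.expires > now) return { source: "edge", body: hit.body }; // cache hit
  const fresh = originServer[path]; // back to origin on miss or expiry
  edgeCache.set(path, { body: fresh.body, expires: now + fresh.maxAge * 1000 });
  return { source: "origin", body: fresh.body };
}

console.log(cdnFetch("/app.js").source); // prints "origin" (first request misses)
console.log(cdnFetch("/app.js").source); // prints "edge" (served from cached copy)
```

Real CDNs add many layers (tiered caches, TTLs from response headers, purging), but the hit/miss/back-to-origin cycle is the core of it.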

Based on the advantages of the CDN's proximity principle, all of a website's static resources can be deployed to CDN servers. What counts as a static resource? Typically, resources that can be served without server-side computation and that rarely change: style files, script files, and multimedia files (fonts/images/audio/video).

If you need to set up a CDN yourself, consider Alibaba Cloud OSS, NetEase Shufan NOS, or Qiniu Cloud Kodo, along with the corresponding CDN service for each product. For reasons of space I won't walk through the setup here; each product provides tutorials after purchase that you can try for yourself.

I naturally recommend NetEase Shufan NOS as a first choice — I'm quite confident in our own product, so forgive the accidental little ad, haha!

Cache strategy

This strategy mainly deals with the browser cache and is the performance optimization strategy with the lowest access cost. It significantly reduces the losses caused by network transmission and speeds up page access — a performance optimization strategy well worth using.

To get the most out of the browser cache, the strategy follows these five steps.

- "Consider whether to reject all caching": Cache-Control: no-store
- "Consider whether the resource is revalidated with the server on every request": Cache-Control: no-cache
- "Consider whether the resource may be cached by proxy servers": Cache-Control: public/private
- "Consider the resource's expiration time": Expires: t / Cache-Control: max-age=t, s-maxage=t
- "Consider the negotiated cache": Last-Modified/ETag

The caching mechanism is also one of the most frequently asked interview questions. Fully understanding the terms above, in this order, makes the role browser caching plays in performance optimization clear.

The cache strategy is implemented by setting HTTP headers. It is classified into the strong cache (also called the mandatory cache) and the negotiated cache (also called the comparison cache).

The overall cache mechanism is very clear: try the strong cache first; if it misses, fall back to the negotiated cache. If the strong cache hits, use it directly. If the strong cache misses, a request is sent to the server to check whether the negotiated cache hits. If the negotiated cache hits, the server returns 304, telling the browser to use its local copy; otherwise, it returns the latest resource.
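Server-side revalidation for the negotiated cache can be sketched as follows (a simplified model with hypothetical names; real servers give If-None-Match precedence over If-Modified-Since rather than treating the two validators interchangeably):

```javascript
// Compare the request's cache validators against the resource's current state:
// a match means 304 (reuse the local copy); otherwise return the latest body.
function revalidate(requestHeaders, resource) {
  const etagMatch = requestHeaders["if-none-match"] === resource.etag;
  const timeMatch = requestHeaders["if-modified-since"] === resource.lastModified;
  if (etagMatch || timeMatch) return { status: 304 }; // negotiated cache hit
  return { status: 200, body: resource.body, etag: resource.etag }; // latest resource
}

const resource = {
  etag: '"abc123"',
  lastModified: "Mon, 01 Jan 2024 00:00:00 GMT",
  body: "v2"
};
console.log(revalidate({ "if-none-match": '"abc123"' }, resource).status); // 304
console.log(revalidate({ "if-none-match": '"old"' }, resource).status); // 200
```

The browser fills in If-None-Match / If-Modified-Since automatically from the ETag / Last-Modified values of its cached copy; no application code is needed on the client side.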

Two common application scenarios are worth covering with a cache strategy; more can be tailored to project requirements.

- "Frequently changing resources": set Cache-Control: no-cache so that the browser sends a request to the server each time, and use Last-Modified/ETag to verify whether the resource is still valid
- "Rarely changing resources": set Cache-Control: max-age=31536000 and hash the file names; when the code is modified, a new file name is generated, and updating the reference in the HTML file makes the browser download the latest file

Rendering level

Performance optimization at the "rendering level" is, without question, about making code parse better and execute faster. The author therefore makes suggestions from the following five aspects.

- "CSS strategy": based on CSS rules
- "DOM strategy": based on DOM operations
- "Blocking strategy": based on script loading
- "Reflow & repaint strategy": based on reflow and repaint
- "Async update strategy": based on asynchronous updates

These five aspects are carried out while writing code and permeate the development stage of the whole project. Pay attention to each of the points below during development, build good habits, and render-level performance optimization will come naturally.

Performance optimization at the rendering level is more about coding details than about hard configuration. Simply put: follow certain coding rules to get the most out of render-level performance optimization.

The "reflow & repaint strategy" is the most common render-level performance optimization. Last year I published a Juejin booklet, "Playing with the Beauty of CSS Art," which devotes an entire chapter to reflow and repaint; that chapter is open for free reading — click through for details.

CSS strategy

- Avoid nesting rules more than three levels deep
- Avoid adding redundant selectors to an ID selector
- Avoid using tag selectors instead of class selectors
- Avoid using wildcard selectors; declare rules only on the target nodes
- Avoid repeated matching and repeated definitions; make good use of inheritable properties

DOM strategy

- Cache computed DOM properties
- Avoid excessive DOM operations
- Use DocumentFragment to batch DOM operations

Blocking strategy

- When a script has strong dependencies on the DOM or other scripts: set defer on the script
- When a script has weak dependencies on the DOM or other scripts: set async on the script
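A minimal markup sketch of the blocking strategy (the script URLs are hypothetical): defer preserves execution order and runs after the document is parsed, so it suits scripts with strong DOM/script dependencies; async runs as soon as its download finishes, so it suits independent scripts:

```html
<!-- strong dependency on the DOM/other scripts: keep order, run after parsing -->
<script defer src="/static/vendor.js"></script>
<script defer src="/static/app.js"></script>
<!-- weak or no dependency (e.g. analytics): run whenever the download completes -->
<script async src="/static/analytics.js"></script>
```

Either attribute stops the script from blocking HTML parsing while it downloads, which is the point of the blocking strategy.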

The six metrics capture most of the details of performance optimization and complement the nine strategies. According to the characteristics of each performance optimization suggestion, the author divides the indicators into the following six aspects.

- "Loading optimization": performance optimizations that can be done while resources are loading
- "Execution optimization": performance optimizations that can be done while resources are executing
- "Rendering optimization": performance optimizations that can be done while resources are rendering
- "Style optimization": performance optimizations that can be done while writing styles
- "Script optimization": performance optimizations that can be done while writing scripts
- "V8 engine optimization": performance optimizations based on the characteristics of the V8 engine

To summarize: "performance optimization" is a cliché you are bound to encounter at work or in a job interview. Most of the time, it is not about doing or reciting whatever suggestion comes to mind, but about having an overall understanding of why you are optimizing and what you are optimizing with.

Performance tuning can't be covered in a single article; covering it in detail would take two whole books. What this article offers is a direction and an attitude — learn it and apply it flexibly. I hope reading it helps you.

Finally, the author has organized the contents of this article into a high-definition mind map. Because the file is too large to upload here, follow the author's personal public account "IQ Front End" and reply "performance optimization" to get the pocket knowledge map!

Other popular Juejin posts by the author, each with over 50,000 views:

- A guide to ES6+ features: 4500+ likes, 165k reads
- 40 pitfalls to watch out for in mobile H5 development | three years of practice at NetEase: 3800+ likes, 57k reads