Preface
The author has spent nearly half a year on a large-scale project refactor and applied a great deal of performance-optimization and design-pattern knowledge along the way. Both topics come up constantly, whether in day-to-day work or in job interviews. Taking advantage of this refactor, the author has carefully compiled some conventional and practical performance-optimization advice, combined it with four years of development experience at NetEase, and shares here every suggestion that proved useful. (Due to limited space, design patterns will get a dedicated article later.)
Some of these suggestions may already be well known, but that does not make them less worth sharing; the author also lists some details that are easy to overlook in everyday work.
Performance optimization is usually treated as an unordered pile of tricks, but in the author's view it is an ordered scenario: many optimizations pave the way for one another and interlock. Viewed by process, performance optimization splits into the network level and the rendering level; viewed by result, it splits into the time dimension and the volume dimension. Put simply: when a user visits a website, make it visible and usable as quickly as possible.
All performance optimization revolves around two cores and two assists: the core layers are the network level and the rendering level, the auxiliary dimensions are time and volume, and the auxiliary dimensions serve the core layers. On this basis, the author has organized nine strategies and six indicators of front-end performance optimization in this article. These names are the author's own, intended only to give performance optimization some structure.
Armed with these, you can give a well-organized, extensible account of performance optimization on the job or in an interview. High-value content ahead; even if you don't read it all now, bookmark it. Let's begin!
All code examples show only the core configuration needed to illustrate the point; the rest of the configuration is omitted, so please fill in the blanks yourself.
Nine strategies
Network level
Performance optimization at the network level is, without question, about making resources smaller and load faster. The author therefore makes suggestions in the following four areas.
- Build strategy: based on build tools (Webpack/Rollup/Parcel/Esbuild/Vite/Gulp)
- Image strategy: based on image types (JPG/PNG/SVG/WebP/Base64)
- Distribution strategy: based on content delivery networks (CDN)
- Caching strategy: based on browser caching (strong caching/negotiated caching)
These four areas map onto the project lifecycle step by step: the build and image strategies belong to the development phase, and the distribution and caching strategies belong to the production phase. At each phase you can check whether the corresponding strategies are in place, which lets you cover the largest possible set of performance-optimization scenarios.
Build strategy
This strategy mainly deals with Webpack and is the most commonly encountered performance-optimization strategy. Other build tools address much the same concerns, differing mainly in configuration. When it comes to Webpack performance optimization, it unquestionably starts from the time and volume dimensions.
The author has found that Webpack v5's overall compatibility is still not great; some features can clash with third-party tools, so we have not upgraded yet and continue to use v4 as the production tool. The configurations below are therefore all based on v4, though overall they differ little from v5.
The author offers six performance-optimization suggestions for each of the two dimensions, twelve in total, each summarized in a short phrase for easy memorization. ⏱ marks suggestions that reduce build time, and 📦 marks suggestions that reduce bundle size.
- Reduce build time: narrow the scope, cache copies, directed search, build ahead, parallel build, visualize the structure
- Reduce bundle size: split the code, tree shaking, dynamic polyfill, load on demand, scope hoisting, compress resources
⏱ Narrow the scope
Configure include/exclude to narrow the set of files a Loader searches, which avoids unnecessary transpilation. Given how large node_modules is, how much extra time would it take to walk all of its files?
include/exclude is usually configured per Loader. src is usually the source directory, so the following works as a baseline; adjust include/exclude to your actual situation.
export default {
// ...
module: {
rules: [{
exclude: /node_modules/,
include: /src/,
test: /\.js$/,
use: "babel-loader"
}]
}
};
⏱ Cache copies
Configure cache options so a Loader keeps compiled copies of files; on recompilation, only modified files are recompiled. Why should an unmodified file be recompiled along with a modified one?
Most Loaders/Plugins offer an option to use a compile cache, usually containing the word cache. Take babel-loader and eslint-webpack-plugin as examples.
import EslintPlugin from "eslint-webpack-plugin";
export default {
// ...
module: {
rules: [{
// ...
test: /\.js$/,
use: [{
loader: "babel-loader",
options: { cacheDirectory: true }
}]
}]
},
plugins: [
new EslintPlugin({ cache: true })
]
};
⏱ Directed search
Configure resolve to speed up file lookup by pointing Webpack straight at the required files. Use it when a third-party library's conventional import path may resolve incorrectly, or when you want certain file types indexed automatically.
alias maps module paths, extensions lists file suffixes that can be omitted on import, and noParse skips parsing files that have no dependencies. Configuring alias and extensions is usually enough.
export default {
  // ...
  resolve: {
    alias: {
      "#": AbsPath(""), // root directory shortcut
      "@": AbsPath("src"), // src directory shortcut
      swiper: "swiper/js/swiper.min.js" // module import shortcut
    },
    extensions: ["js", "ts", "jsx", "tsx", "json", "vue"] // suffixes that can be omitted on import
  }
};
⏱ Build ahead
Configure DllPlugin to prepackage third-party dependencies, completely separating them from business code so that each build compiles only business code. This is a veteran configuration that has existed since Webpack v2, but it is no longer recommended in Webpack v4+, because the performance gains of newer versions are enough to make the DllPlugin's benefits negligible.
DLL stands for dynamic-link library: a code library that can be shared by multiple programs at once. In front-end terms, think of it as an alternative cache: public code is packaged into DLL files and stored on disk, and on repackaging the DLL files are linked dynamically instead of being rebuilt, which speeds up the build and reduces build time.
Configuring a DLL is generally more complex than other options; the process breaks down into three steps.
First, tell the build script which dependencies should become DLLs, and generate the DLL files and the DLL manifest file.
import { DefinePlugin, DllPlugin } from "webpack";

export default {
  // ...
  entry: {
    vendor: ["react", "react-dom", "react-router-dom"]
  },
  mode: "production",
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          chunks: "all",
          name: "vendor",
          test: /node_modules/
        }
      }
    }
  },
  output: {
    filename: "[name].dll.js", // output file name
    library: "[name]", // global variable name: other modules will get the module from this variable
    path: AbsPath("dist/static") // output directory
  },
  plugins: [
    new DefinePlugin({
      "process.env.NODE_ENV": JSON.stringify("development") // override production with development in DLL mode (enables debugging of third-party dependencies)
    }),
    new DllPlugin({
      name: "[name]", // global variable name: narrows the lookup scope, used together with output.library
      path: AbsPath("dist/static/[name]-manifest.json") // manifest output path
    })
  ]
};
Then add a script to package.json and run it before each build, so the DLL files are packaged first.
{
"scripts": {
"dll": "webpack --config webpack.dll.js"
}
}
Finally, link the DLL files and tell Webpack where to read them. Use html-webpack-tags-plugin to insert the DLL files into the HTML automatically during packaging.
import { DllReferencePlugin } from "webpack";
import HtmlTagsPlugin from "html-webpack-tags-plugin";

export default {
  // ...
  plugins: [
    // ...
    new DllReferencePlugin({
      manifest: AbsPath("dist/static/vendor-manifest.json") // manifest file path
    }),
    new HtmlTagsPlugin({
      append: false, // insert before the generated resources
      publicPath: "/", // prepend publicPath
      tags: ["static/vendor.dll.js"] // resource path
    })
  ]
};
Given the seconds this saves on every build, the author suggests it is still worth configuring. You can also use autodll-webpack-plugin instead of configuring it by hand.
⏱ Parallel build
Configure thread-loader to turn single-process Loader work into multi-process work, releasing the concurrency of multi-core CPUs. When Webpack builds a project it must parse and process a great many files; the build is a computationally intensive operation, and the more files there are, the slower it gets.
Webpack runs on Node in a single-threaded model, which means its tasks are processed one at a time rather than concurrently.
File reads, writes, and computation are unavoidable, so can Webpack run multiple tasks at once and use a multi-core CPU to speed up the build? thread-loader can: it spawns worker threads according to the number of CPU cores.
One caveat: skip this suggestion if the project's file count is small, because spawning threads carries its own performance overhead.
import Os from "os";
export default {
// ...
module: {
rules: [{
// ...
test: /\.js$/,
use: [{
loader: "thread-loader",
options: { workers: Os.cpus().length }
}, {
loader: "babel-loader",
options: { cacheDirectory: true }
}]
}]
}
};
⏱ Visualize the structure
Configure webpack-bundle-analyzer to analyze the structure of the bundle and identify the causes of excessive size; from that analysis you can derive an optimization plan. The plugin presents visual data on the bundle's module composition, each module's share of the volume, module inclusion and dependency relationships, file duplication, compressed-size comparisons, and more.
With it configured, relevant problems can be spotted quickly.
import { BundleAnalyzerPlugin } from "webpack-bundle-analyzer";
export default {
// ...
plugins: [
// ...
new BundleAnalyzerPlugin()
]
};
📦 Split the code
Split each module's code and extract the shared parts, reducing how often duplicated code appears in the bundle. Webpack v4 uses splitChunks in place of CommonsChunkPlugin for code splitting.
For details see the official documentation; common configuration is pasted below.
export default {
  // ...
  optimization: {
    runtimeChunk: { name: "manifest" }, // extract the runtime code
    splitChunks: { // split chunks
      cacheGroups: { // cache groups
        common: {
          minChunks: 2,
          name: "common",
          priority: 5,
          reuseExistingChunk: true, // reuse existing chunks
          test: AbsPath("src")
        },
        vendor: {
          chunks: "initial", // split type
          name: "vendor", // chunk name
          priority: 10,
          test: /node_modules/ // file-matching regular expression
        }
      },
      chunks: "all" // split type: all modules, async async modules, initial entry modules
    }
  }
};
📦 Tree shaking
Tree shaking deletes code the project never uses, stripping duplicated and unreferenced code from the bundle. It first appeared in Rollup, where it is a core concept, and was later adopted by Webpack v2.
Tree shaking works only with the ESM specification, not with other module specifications, because static structure analysis relies on the static imports and exports that only import/export provide. So when writing business code, use the ESM specification if you want tree shaking to strip dead code.
In Webpack, setting the mode to production enables tree shaking, as long as the business code uses the ESM specification with import and export.
export default {
// ...
mode: "production"
};
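To make the mechanism concrete, here is a minimal two-file sketch (the file names are illustrative): because import/export are statically analyzable, the bundler can prove that `unused` has no importer and drop it from the production bundle.

```js
// utils.js — both functions are exported, but only one is ever imported
export function used() { return "kept"; }
export function unused() { return "dropped"; } // no importer → stripped in production mode

// index.js
import { used } from "./utils";
console.log(used()); // the final bundle contains no trace of `unused`
```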
📦 Dynamic polyfill
A dynamic polyfill service returns, based on the UA, exactly the polyfill code the current browser needs, so heavyweight polyfills need not be packed into the bundle. Configuring @babel/preset-env with core-js to bundle polyfills for fixed targets on every build inevitably inflates the code size.
The useBuiltIns option of @babel/preset-env controls how polyfills are imported on demand.
- false: ignores target.browsers and loads all polyfills
- entry: loads a subset of polyfills according to target.browsers (only those the target browsers lack; requires import "core-js/stable" in the entry file)
- usage: loads polyfills according to target.browsers and the ES6+ features actually used in the code (no import "core-js/stable" needed in the entry file)
The author recommends dynamic polyfills. A dynamic polyfill service returns the polyfills for the current browser based on its userAgent: it looks up in browserslist which features that browser lacks, then returns exactly the corresponding polyfills. Those interested can read the source code of polyfill-library and polyfill-service.
Two dynamic polyfill services are listed below. Open the links in different browsers and you will see different polyfill output; IE is surely served the most polyfills of all, proudly declaring: I am what I am, a firework of a different color.
- The official CDN service: https://polyfill.io/v3/polyfill.min.js
- Ali CDN service: https://polyfill.alicdn.com/polyfill.min.js
Use html-webpack-tags-plugin to insert the dynamic polyfill automatically during packaging.
import HtmlTagsPlugin from "html-webpack-tags-plugin";

export default {
  plugins: [
    new HtmlTagsPlugin({
      append: false, // insert before the generated resources
      publicPath: false, // do not prepend publicPath
      tags: ["https://polyfill.alicdn.com/polyfill.min.js"] // resource path
    })
  ]
};
📦 Load on demand
Package each route page or triggered feature into a separate file and load it only when it is used, easing the burden of first-screen rendering. The more features a project has, the larger its bundle and the slower the first screen renders.
First-screen rendering needs only the JS for the first screen and nothing else, so on-demand loading applies. Webpack v4 provides module splitting and on-demand loading; combined with import(), code not needed for the first screen is split out, which speeds it up. The JS for a feature is then loaded only when that feature is triggered.
Webpack v4 provides magic comments to name split chunks; without them you cannot tell which business module a chunk belongs to, so the chunks of one business module generally share one comment name.
const Login = () => import( /* webpackChunkName: "login" */ "../../views/login");
const Logon = () => import( /* webpackChunkName: "logon" */ "../../views/logon");
The console may report an error at runtime; just add @babel/plugin-syntax-dynamic-import to the babel configuration in package.json.
{
  // ...
  "babel": {
    // ...
    "plugins": [
      // ...
      "@babel/plugin-syntax-dynamic-import"
    ]
  }
}
📦 Scope hoisting
Scope hoisting analyzes the dependency relationships between modules and merges the bundled modules into as few functions as possible, reducing function declarations and memory overhead. It first appeared in Rollup, where it is a core concept, and was adopted by Webpack v3.
Before scope hoisting, the built code contains a large number of function closures: because of module dependencies, Webpack converts each module into an IIFE when packaging, and the mass of closures enlarges the bundle (the more modules, the more obvious), while more scoped functions are created at runtime, raising memory overhead.
With scope hoisting enabled, the built code is placed into a single function scope in import order, with some variables renamed where necessary to prevent name collisions, thereby cutting function declarations and memory overhead.
In Webpack, setting the mode to production enables scope hoisting; you can also enable it explicitly through concatenateModules.
export default {
  // ...
  mode: "production"
};

// or set it explicitly
export default {
  // ...
  optimization: {
    // ...
    concatenateModules: true
  }
};
📦 Compress resources
Compress HTML/CSS/JS code and compress font/image/audio/video assets; this reduces bundle size more effectively than almost anything else. Polishing code to the extreme may matter less than shrinking resource files.
For HTML code, use html-webpack-plugin to enable compression.
import HtmlPlugin from "html-webpack-plugin"; export default { // ... Plugins: [//... htmlPlugin ({//... minify: {collapseWhitespace: true, removeComments: true} // CollapseWhitespace: true})};
For CSS/JS code, use the following plugins to enable compression. optimize-css-assets-webpack-plugin is a wrapper around cssnano, while uglifyjs-webpack-plugin and terser-webpack-plugin are maintained under the webpack-contrib organization. Note that JS compression must distinguish between ES5 and ES6.
- optimize-css-assets-webpack-plugin: compresses CSS code
- uglifyjs-webpack-plugin: compresses ES5 JS code
- terser-webpack-plugin: compresses ES6 JS code
import OptimizeCssAssetsPlugin from "optimize-css-assets-webpack-plugin";
import TerserPlugin from "terser-webpack-plugin";
import UglifyjsPlugin from "uglifyjs-webpack-plugin";

const compressOpts = type => ({
  cache: true, // use cache
  parallel: true, // parallel processing
  [`${type}Options`]: {
    beautify: false,
    compress: { drop_console: true }
  } // compression options
});
const compressCss = new OptimizeCssAssetsPlugin({
  cssProcessorOptions: {
    autoprefixer: { remove: false }, // keep outdated prefixes
    safe: true // stop cssnano from recalculating z-index
  }
});
const compressJs = USE_ES6
  ? new TerserPlugin(compressOpts("terser"))
  : new UglifyjsPlugin(compressOpts("uglify"));

export default {
  // ...
  optimization: {
    // ...
    minimizer: [compressCss, compressJs] // code compression
  }
};
For font/audio/video files there is really no plugin to lean on, so you can only compress them with the appropriate tools before releasing the project to production. For image files, most Loaders/Plugins invoke image-processing tools during packaging, and some functions of those tools are hosted on overseas servers, so installation often fails. See the author's article on npm mirror pitfalls for details.
With that in mind, the author developed a Webpack plugin that compresses images with a few small tricks; see tinyimg-webpack-plugin for details.
import TinyimgPlugin from "tinyimg-webpack-plugin";
export default {
// ...
plugins: [
// ...
TinyimgPlugin()
]
};
The build strategies above are integrated into the author's open-source bruce-cli, a React/Vue automated build scaffold with zero configuration out of the box, suitable for beginner, intermediate, and rapid-development projects. You can override its defaults by creating a brucerc file, letting you focus on business code without worrying about build code and keeping the project structure cleaner. Details are linked here; remember to check the documentation when using it, and a Star would be appreciated!
Image strategy
This strategy mainly deals with image types and is a low-cost performance-optimization strategy to adopt. It requires only two things.
- Image selection: understand the characteristics of each image type and which scenarios suit it best
- Image compression: compress images with tools or scripts before deploying to production
Image selection requires knowing the relative size/quality/compatibility/request/compression/transparency/scenario trade-offs of each image type, so you can quickly judge which type to use in which scene.
| Type | Size | Quality | Compatibility | Request | Compression | Transparency | Scenario |
|---|---|---|---|---|---|---|---|
| JPG | Small | Medium | High | Yes | Lossy | No | Backgrounds, carousels, color-rich images |
| PNG | Large | High | High | Yes | Lossless | Yes | Icons, transparent images |
| SVG | Small | High | High | Yes | Lossless | Yes | Icons, vector graphics |
| WebP | Small | Medium | Low | Yes | Both | Yes | Depends on compatibility |
| Base64 | Depends | Medium | High | No | Lossless | Yes | Small icons |
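Given WebP's weak compatibility, a common pattern is to offer WebP with a fallback. A minimal sketch (file paths are illustrative):

```html
<picture>
  <!-- browsers that understand WebP pick this source -->
  <source srcset="banner.webp" type="image/webp">
  <!-- everyone else falls back to the JPG -->
  <img src="banner.jpg" alt="banner">
</picture>
```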
Image compression can be done in the compress-resources step of the build strategy described above, or with standalone tools. Since most Webpack image-compression tools fail to install or have environment issues (you get the idea), the author recommends compressing images with a standalone tool before releasing to production: it runs reliably and adds nothing to build time.
Good image-compression tools boil down to the following; if you know better ones, please add them in the comments!
| Tool | Open source | Paid | API | Free-tier notes |
|---|---|---|---|---|
| QuickPicture | ✖️ | ✔️ | ✖️ | Many supported types, good compression quality, size and quantity limits |
| ShrinkMe | ✖️ | ✖️ | ✖️ | Many supported types, average compression quality, no quantity or size limits |
| Squoosh | ✔️ | ✖️ | ✔️ | Fewer supported types, average compression quality, no quantity limit, size limit |
| TinyJpg | ✖️ | ✔️ | ✔️ | Fewer supported types, excellent compression quality, quantity and size limits |
| TinyPng | ✖️ | ✔️ | ✔️ | Fewer supported types, excellent compression quality, quantity and size limits |
| Zhitu | ✖️ | ✖️ | ✖️ | Average supported types, average compression quality, quantity and size limits |
If you don't want to drag image files from site to site, you can use the author's open-source batch image tool img-master instead. Besides compression it offers grouping, tagging, and conversion. All projects the author maintains are processed with this tool, and it has been smooth sailing so far!
The image strategy is cheap but effective: a single unoptimized image can wipe out the gains of every build strategy combined.
Distribution strategy
This strategy mainly deals with content delivery networks and is a performance-optimization strategy with a high access cost, requiring sufficient budget.
Although the access cost is high, most companies buy CDN servers anyway, so there is little to worry about in deployment; just put them to good use. Following the two points below as far as possible lets the CDN play its maximum role.
- Route all static resources through the CDN: determine in the development phase which files count as static resources
- Serve static resources from a different domain than the main page: prevents requests from carrying cookies
A content delivery network (CDN) is a group of servers distributed across regions that store copies of data and serve each request from the nearest node. Its core features are caching and back-to-source: caching copies resources to CDN servers, and back-to-source fetches a resource from the upstream server and copies it to the CDN server when the cached copy has expired or does not exist.
Using a CDN reduces network congestion and improves users' response speed and the cache hit rate. The ultimate mission of a CDN is to build an intelligent virtual network on top of the existing one: with servers deployed everywhere and a central platform handling scheduling, load balancing, and content distribution, users obtain the resources they need from nearby nodes.
Thanks to this proximity principle, all of a website's static resources can be deployed to CDN servers. Which files are static resources? Typically, resources the server can return without computing anything: style files, script files, and multimedia files (fonts/images/audio/video) that rarely change.
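In a Webpack build, routing static resources through the CDN is then mostly a matter of output.publicPath; a sketch with a hypothetical CDN domain:

```js
export default {
  // ...
  output: {
    // ...
    publicPath: "https://cdn.example.com/assets/" // hypothetical CDN domain, distinct from the page's own domain so asset requests carry no cookies
  }
};
```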
If you need to configure CDN servers yourself, consider Aliyun OSS, NetEase NOS, or Qiniu KODO, each paired with the corresponding product's CDN service. Space does not allow a configuration walkthrough here; related tutorials will follow for those who want to try.
The author recommends NetEase NOS; after all, one should have confidence in one's own product. An accidental little ad for it, haha!
Caching strategy
This strategy focuses on the browser cache and is the cheapest performance-optimization strategy to adopt. It markedly reduces the losses of network transfer and improves page access speed, which makes it well worth using.
To squeeze the most out of the browser cache, follow the five points below.
- Consider whether to reject all caching: Cache-Control: no-store
- Consider whether the resource is revalidated with the server on every request: Cache-Control: no-cache
- Consider whether the resource may be cached by proxy servers: Cache-Control: public/private
- Consider the resource's expiration time: Expires: t / Cache-Control: max-age=t, s-maxage=t
- Consider negotiated caching: Last-Modified/ETag
The browser cache is also a high-frequency interview topic. The author believes that once you can explain the terms above fluently in any order, you truly understand the role browser caching plays in performance optimization.
Caching policies are implemented by setting HTTP headers and formally divide into strong caching (also called forced caching) and negotiated caching (also called comparison caching).
The overall mechanism is clear: try the strong cache first, and fall back to the negotiated cache if it misses. If the strong cache hits, use the local copy directly; if it misses, send a request to the server to check the negotiated cache. If the negotiated cache hits, the server returns 304, telling the browser to use its local copy; otherwise it returns the latest resource.
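The server side of this negotiation can be sketched as a pure function. The header names are real HTTP fields; the resource record and the negotiate helper itself are illustrative, not a framework API.

```javascript
// Decide between 304 (cache hit) and 200 (fresh resource) for a request.
// `resource` is an illustrative record: { etag, lastModified, content }.
function negotiate(requestHeaders, resource) {
  const etagHit = requestHeaders["if-none-match"] === resource.etag;
  const timeHit = requestHeaders["if-modified-since"] === resource.lastModified;
  if (etagHit || timeHit) {
    return { status: 304, body: null }; // hit: the browser keeps using its local copy
  }
  return {
    status: 200,
    body: resource.content, // miss: ship the latest resource plus fresh validators
    headers: { ETag: resource.etag, "Last-Modified": resource.lastModified }
  };
}
```

Note that in real servers, If-None-Match takes precedence over If-Modified-Since when both are present.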
Two common scenarios are worth adopting; more can be tailored to project requirements.
- Frequently changing resources: set Cache-Control: no-cache so the browser requests the server every time, combined with Last-Modified/ETag to verify whether the resource is still valid
- Rarely changing resources: set Cache-Control: max-age=31536000 and hash the file names; when the code changes a new file name is generated, the reference in the HTML changes with it, and the browser downloads the latest file
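The two scenarios can be sketched as a header-picking function; the function name, the hash pattern, and the immutable directive are assumptions of a build that emits hash-named assets.

```javascript
// Map a resource path to its Cache-Control header (illustrative policy).
function cacheHeadersFor(path) {
  if (path.endsWith(".html")) {
    return { "Cache-Control": "no-cache" }; // entry HTML: revalidate on every visit
  }
  if (/\.[0-9a-f]{8}\.(js|css|png|woff2)$/.test(path)) {
    return { "Cache-Control": "max-age=31536000, immutable" }; // hash-named asset: cache for a year
  }
  return { "Cache-Control": "no-store" }; // anything unclassified: play it safe
}
```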
Rendering level
Performance optimization at the rendering level is, without question, about making code parse and execute faster. The author therefore makes suggestions in the following five areas.
- CSS strategy: based on CSS rules
- DOM strategy: based on DOM operations
- Blocking strategy: based on script loading
- Reflow/repaint strategy: based on reflow and repaint
- Async update strategy: based on asynchronous updates
These five areas are handled while writing code and run through the development phase of the project. Pay attention to each point below as you code, build good habits, and the performance optimizations will follow naturally.
Rendering-level optimization lies more in coding details than in hard configuration: follow certain coding rules and you get the most out of rendering-level performance optimization.
The reflow/repaint strategy is the most frequently encountered rendering-level optimization. Last year the author published a Juejin booklet, "The Art of CSS", which devotes a whole chapter to reflow and repaint; that chapter is open for trial reading, so click through for details.
CSS strategy
- Avoid nesting rules more than three levels deep
- Avoid adding extra selectors to ID selectors
- Avoid using tag selectors instead of class selectors
- Avoid using the universal selector; declare rules only on the target nodes
- Avoid repeatedly matching and repeatedly defining inheritable properties
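For instance (selectors are illustrative):

```css
/* avoid: four-level nesting ending in a tag selector */
.nav ul li span { color: #333; }

/* avoid: extra qualifiers on an ID selector */
nav#header .link { color: #333; }

/* prefer: one flat class declared directly on the target node */
.nav-link-text { color: #333; }
```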
DOM strategy
- Cache computed DOM properties
- Avoid excessive DOM manipulation
- Use DocumentFragment to cache and batch DOM operations
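The DocumentFragment point can be sketched as follows; the document object is passed in as a parameter only to keep the sketch testable outside a browser.

```javascript
// Batch-append list items via a DocumentFragment so the live container
// is touched exactly once instead of once per item.
function appendItems(doc, container, texts) {
  const fragment = doc.createDocumentFragment(); // off-document: building it causes no reflow
  for (const text of texts) {
    const li = doc.createElement("li");
    li.textContent = text;
    fragment.appendChild(li);
  }
  container.appendChild(fragment); // one insertion instead of texts.length insertions
}

// in a browser: appendItems(document, document.querySelector("ul"), ["a", "b", "c"]);
```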
Blocking strategy
- When a script depends strongly on the DOM or other scripts: set defer on `<script>`
- When a script depends weakly on the DOM or other scripts: set async on `<script>`
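In markup (the script names are illustrative):

```html
<!-- strong dependency on the DOM/other scripts: defer preserves order and runs after parsing -->
<script defer src="main.js"></script>
<!-- weak dependency: async runs as soon as it downloads, order not guaranteed -->
<script async src="analytics.js"></script>
```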
Reflow/repaint strategy
- Cache computed DOM properties
- Use classes to merge style changes instead of changing styles one by one
- Use display to toggle DOM visibility, so the DOM can be operated on offline
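The first two points boil down to separating layout reads from style writes: reading a layout property such as offsetWidth after a write forces a synchronous reflow, so batch all reads first, then all writes. A sketch (the widenAll helper is illustrative):

```javascript
// Read phase then write phase: no element's offsetWidth is read after
// another element's style has been written, avoiding forced reflows.
function widenAll(elements, extra) {
  const widths = elements.map(el => el.offsetWidth); // read phase: batch layout reads
  elements.forEach((el, i) => {
    el.style.width = `${widths[i] + extra}px`; // write phase: batch style writes
  });
}
```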
Asynchronous update strategy
- When modifying the DOM in an asynchronous task, wrap the modification as a microtask
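This is the idea behind Vue-style async update queues, and it can be sketched in a few lines (the names queueUpdate and flush are illustrative): duplicate jobs queued in the same tick collapse into one, and the flush runs as a microtask after the current synchronous code finishes.

```javascript
// Collect DOM-updating jobs and run each distinct job once per tick, in a microtask.
const queue = new Set();
let scheduled = false;

function queueUpdate(job) {
  queue.add(job); // a Set deduplicates: queueing the same job twice runs it once
  if (!scheduled) {
    scheduled = true;
    Promise.resolve().then(flush); // flush after the current synchronous code finishes
  }
}

function flush() {
  const jobs = [...queue];
  queue.clear();
  scheduled = false;
  jobs.forEach(job => job());
}
```

Calling queueUpdate(render) many times while handling one event still triggers a single render.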
The six indicators
The author ranks the nine strategies above the six indicators by the importance and practicality of each optimization, though all of them are living, usable advice; some suggestions simply have little impact whether applied or not. The nine strategies are best wired into the development and production phases, while during project review the rules of the six indicators can be hooked in according to the actual application scenarios.
The six indicators basically cover most of the finer details of performance optimization and serve as a supplement to the nine strategies. By the characteristics of each suggestion, they divide into the following six aspects.
- Loading optimization: performance optimizations available while a resource is loading
- Execution optimization: performance optimizations available while a resource is executing
- Rendering optimization: performance optimizations available while a resource is rendering
- Style optimization: performance optimizations available while styles are being coded
- Script optimization: performance optimizations available while scripts are being coded
- V8 engine optimization: performance optimizations targeting the characteristics of the V8 engine
Loading optimization
Execution optimization
Rendering optimization
Style optimization
Script optimization
V8 engine optimization
Conclusion
Performance optimization is perennial knowledge, bound to come up at work or in interviews. Most of the time, rather than recalling individual suggestions to apply or recite, it pays to hold an overall picture: know why each technique is designed the way it is and what effect that design aims to achieve.
One article cannot cover all of performance optimization, and even two books might not exhaust the details. What this article offers is a direction and an attitude; take what works for you, and I hope it helps.
Finally, the author has organized the whole article into a high-definition mind map. It is too large to upload here, so follow the author's official account "IQ Frontend" and reply "performance optimization" to receive the full map!