Recently I did a round of performance optimization on a Vue CLI project. I got to apply a lot of previously learned knowledge to a real project, ran into many difficulties, and learned a lot.
Before reading this article, you should be familiar with common front-end optimization techniques (HTTP caching, code compression, etc.) and with webpack.
Project introduction
This is a PC-side web application scaffolded with Vue CLI 3, globally multi-page and locally single-page (split by user role). It basically uses the Vue CLI defaults and had never been specifically optimized for performance. The goal of this round of optimization is to speed up first load without hurting runtime performance afterwards.
Performance indicators
Before optimizing, we must first know how to evaluate the project's performance, i.e. which indicators to measure. That lets us set a goal, quantify the project's performance, and tell whether our optimizations worked and by how much.
At present, there are two kinds of front-end performance monitoring: Synthetic Monitoring and Real User Monitoring (RUM). Synthetic monitoring runs a page in a simulated environment using tools and preset rules, records the relevant performance indicators, and produces a report. One example of a synthetic monitoring tool is Lighthouse, which ships with the Chrome developer tools. However, synthetic monitoring produces little data, and the results depend heavily on the network conditions and hardware of the test machine, so they cannot represent real users. RUM collects actual performance data from online users by instrumenting the code and calling browser APIs. The sample size is large and better reflects how the page performs in real scenarios. Therefore, RUM data is the reference for this optimization.
RUM performance indicators fall into two groups: user experience indicators and technical performance indicators.
User Experience Indicators
The common user-centric experience indicators are as follows:
Indicator | Description | Calculation method |
---|---|---|
FP (First Paint) | Time of the page's first paint (blank-screen time) | The time from when the user starts visiting the page until the screen is no longer blank. Defined in the W3C Paint Timing specification draft; obtained in the browser via `performance.getEntriesByType('paint')` from the Performance Timing API. |
FCP (First Contentful Paint) | Time of the first contentful paint | The time from when the user starts visiting the page until any actual content is painted. Also obtained via `performance.getEntriesByType('paint')`. |
FMP (First Meaningful Paint) | Time of the first meaningful paint (above-the-fold time) | The time from when the user starts visiting the page until the largest layout change of the overall page is painted. See "Time to First Meaningful Paint" (link requires a VPN to access from China). |
TTI (Time To Interactive) | Time until the page is fully interactive | The time from when the user starts visiting the page until the page is in a fully interactive state. See "First Interactive and Consistently Interactive" (link requires a VPN to access from China). |
FID (First Input Delay) | Delay of the user's first interaction | How long the user's first interaction with the page is delayed. Using the Event Timing API, listen for first-input events; FID = the time the event starts being processed - the time the event occurred. |
MPFID (Max Potential First Input Delay) | Maximum possible delay of the first interaction | The maximum time the user's first interaction could be delayed. Using the Long Tasks API, MPFID = the duration of the longest long task. |
LOAD | Time until the page fully loads (when the load event fires) | LOAD = loadEventStart - fetchStart |
(Table adapted from www.jianshu.com/p/456e6eff5…)
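As a hedged illustration of how such RUM data can be collected, here is a minimal sketch that reads FP/FCP from the Paint Timing API and computes FID with the Event Timing API, both mentioned in the table; a real setup would report to a collection endpoint instead of logging:

```js
// FP and FCP via the Paint Timing API
window.addEventListener('load', () => {
  performance.getEntriesByType('paint').forEach((entry) => {
    // entry.name is 'first-paint' (FP) or 'first-contentful-paint' (FCP)
    console.log(entry.name, Math.round(entry.startTime), 'ms');
  });
});

// FID via the Event Timing API: processing start time - event occurrence time
new PerformanceObserver((list) => {
  const [firstInput] = list.getEntries();
  console.log('FID', Math.round(firstInput.processingStart - firstInput.startTime), 'ms');
}).observe({ type: 'first-input', buffered: true });
```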
Technical performance indicators
Technical performance indicators are computed from the timestamps of the events that occur while the page loads. Their impact on users is less intuitive than that of user experience indicators, but they expose performance problems better and are easier to optimize against. And unlike user experience metrics, which every company defines differently, they are calculated in a very clear and uniform way.
Navigation Timing 2.0 defines a model of the page loading phases. According to that model, the common technical performance indicators are:
Indicator | Description | Calculation method |
---|---|---|
DNS | DNS lookup time | domainLookupEnd - domainLookupStart |
TCP | Time to establish the TCP connection | connectEnd - connectStart |
SSL | Time to establish the SSL connection | connectEnd - secureConnectionStart |
TTFB | Time to first byte of the response | responseStart - requestStart |
Content transfer | Response transfer time | responseEnd - responseStart |
DOM parsing | Time the browser spends parsing the DOM | domInteractive - responseEnd |
Resource loading | Loading time of external resources | loadEventStart - domContentLoadedEventEnd |
First byte | Time until the browser receives the first response byte | responseStart - fetchStart |
DOM Ready | DOM loading time | domContentLoadedEventEnd - fetchStart |
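All of these can be computed in the browser from a single Navigation Timing 2.0 entry; a minimal sketch (logging instead of reporting):

```js
window.addEventListener('load', () => {
  const [nav] = performance.getEntriesByType('navigation');
  console.table({
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    tcp: nav.connectEnd - nav.connectStart,
    // secureConnectionStart is 0 when no TLS handshake took place
    ssl: nav.secureConnectionStart ? nav.connectEnd - nav.secureConnectionStart : 0,
    ttfb: nav.responseStart - nav.requestStart,
    contentTransfer: nav.responseEnd - nav.responseStart,
    domParsing: nav.domInteractive - nav.responseEnd,
    resourceLoading: nav.loadEventStart - nav.domContentLoadedEventEnd,
    firstByte: nav.responseStart - nav.fetchStart,
    domReady: nav.domContentLoadedEventEnd - nav.fetchStart,
  });
});
```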
Main indicators for this optimization
Since the project is an application driven by user operations rather than content display (such as a news site), less weight is given to paint indicators like FP, and TTI is the primary concern. FCP, FMP and DOM Ready are tracked as secondary indicators: improving FCP and FMP reduces the user's impatience while waiting for content, and DOM Ready reflects the project's overall load time.
The main objective of this optimization is to reduce TTI by 30%.
Analysis tools
- webpack-chart: an interactive pie chart of webpack stats
- webpack-visualizer: visualizes your bundles so you can see which modules take up space and which might be duplicated
- webpack-bundle-analyzer: a plugin and CLI tool that presents bundle content as a convenient, interactive, zoomable treemap. In Vue CLI, run `npx vue-cli-service build --report` to generate `report.html` under `/dist` with the packaging analysis
- Run `npx vue-cli-service inspect > output.js --mode production` to dump the resolved webpack production configuration
- Debugging webpack plugins: `node --inspect-brk ./node_modules/@vue/cli-service/bin/vue-cli-service.js serve --inline --progress`
- Install `@vue/cli` globally, then run `vue ui` to visualize the dependency structure
- Lighthouse: the performance reporting tool built into Chrome DevTools; it offers plenty of recommendations
Optimizations vue-cli applies by default
Vue CLI's default configuration already includes many common optimizations:
- `cache-loader` is enabled by default for Vue/Babel/TypeScript compilation; results are cached in `node_modules/.cache`
- Images and other media files go through `url-loader`; images smaller than 4K are base64-encoded and inlined into the JS files
- In production, `mini-css-extract-plugin` extracts CSS into separate files
- `thread-loader` parallelizes Babel/TypeScript transpilation on multi-core machines
- Common code is extracted into two cache groups: `chunk-vendors` and `chunk-common`
- Code is minified (`terser-webpack-plugin`)
- `preload-webpack-plugin`: all entry JS and CSS files get `preload`; files loaded on demand get `prefetch`
Examine where there is room for improvement in the project
The essence of optimization is doing fewer unnecessary things, so we need to know what is necessary and what is not. Loading the entire page before the user has even scrolled toward the bottom, for example, is unnecessary in most cases. It is therefore important to understand the complete behavior of the project as it runs; this point is often overlooked, and overlooking it makes good results hard to achieve.
Before starting, we should survey where the project actually has room for optimization, to avoid wasted effort: a furious burst of work followed by a sad look at unchanged numbers.
Static resources
- Images: avatars and covers have room for compression; currently about 100K on average, with large ones over 1M
- JSON returned by Ajax: currently about 1K per request on average; not much to compress
- JS, CSS: the main first-screen bundles add up to several MB; code splitting could reduce the initial load
- Video, fonts, docs: under 50K in total; no performance impact, no compression needed
- Other: too many prefetched files are taking up too much server bandwidth
TTFB
- Ajax: 200-800 ms
- JS, CSS: 150-800 ms, mostly 700+ ms (it later turned out I had a proxy enabled; the real numbers are 20-100 ms)
- Image, video, font: 50-250ms
- Doc: 60 ms
Other
- Some places could add caching to reduce the number of Ajax requests; candidate mechanisms include sessionStorage, Cache Storage, IndexedDB and HTTP caching (see the sketch after this list)
- Some pages could be rendered on the server side, given their document structure
- There are some unnecessary repaints
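For the caching point above, here is a minimal sketch of the simplest option, sessionStorage, wrapped around an Ajax call; the endpoint and cache key are hypothetical, not this project's real API:

```js
// Minimal sketch: memoize an Ajax response in sessionStorage for the session.
async function fetchDict() {
  const key = 'cache:dict'; // hypothetical cache key
  const hit = sessionStorage.getItem(key);
  if (hit) return JSON.parse(hit); // served from cache, no network request
  const data = await fetch('/api/dict').then((res) => res.json()); // hypothetical endpoint
  sessionStorage.setItem(key, JSON.stringify(data));
  return data;
}
```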
Techniques that might help
- Responsive images: the picture element
- Lazy loading with Intersection Observer (see the sketch below)
- Critical CSS inlined, with the rest of the CSS loaded asynchronously
- dns-prefetch, preconnect, prefetch, prerender, preload
- Tree shaking, scope hoisting
- Adjusting webpack chunk splitting
- Code minification (further compression)
- Lazy route loading via dynamic import
- Image compression
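For the Intersection Observer item, a minimal lazy-image sketch, assuming placeholders of the form `<img data-src="...">`:

```js
// Minimal sketch: swap in the real image URL when the <img> nears the viewport.
const io = new IntersectionObserver((entries, observer) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src; // start the real download
    observer.unobserve(img);   // each image only needs this once
  });
}, { rootMargin: '200px' });   // begin loading a bit before it scrolls into view

document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));
```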
Attempted optimizations
Sharding optimization
splitChunks is webpack's mechanism for splitting code into chunks; see the webpack website or this blog post.
Before optimization:
vue-cli-service inspect > output.js --mode production

splitChunks: {
  cacheGroups: {
    vendors: {
      name: 'chunk-vendors',
      test: /[\\/]node_modules[\\/]/,
      priority: -10,
      chunks: 'initial'
    },
    common: {
      name: 'chunk-common',
      minChunks: 2,
      priority: -20,
      chunks: 'initial',
      reuseExistingChunk: true
    }
  }
}
The meaning of each parameter is not explained here; refer to the links above.
After packaging, the main entries were 7-8 MB each.
An entry is where webpack starts building the bundle and where the packed code starts executing; multi-page applications usually have multiple entries.
Here's a quick overview of how webpack packaging and sharding work:
- For each entry, a chunk starts from the entry file; every file it `import`s (not including dynamic `import` and `require`) is added, then the files those files depend on, i.e. all files along the dependency chain. These are the initial chunks.
- Every file loaded through a dynamic `import` encountered in the previous step becomes its own chunk. These are the async chunks.
- Common code is extracted per the `cacheGroups` rules: files duplicated across multiple chunks are pulled out as new chunks.
- Files of the same type within each chunk are merged into one.
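As a hypothetical illustration of these rules (file names invented, not from this project):

```js
// pageA.js (entry): statically imported files land in initial chunk "pageA"
import utils from './utils';     // also imported by pageB
import table from './bigTable';  // only used by pageA

export function boot() {
  import('./heavyEditor');       // dynamic import: split into its own async chunk
}

// pageB.js (another entry) also statically imports './utils', so './utils'
// is duplicated in chunks "pageA" and "pageB"; a cacheGroup with minChunks: 2
// extracts it into a shared chunk that both entries load.
```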
As an aside, webpack's progress output at any moment reads like 767/833: 767 modules have been compiled out of the 833 currently known to need compiling; as each module is compiled, its dependencies are added to the TODO count, so both numbers keep growing until every module is done. The percentage on the left is therefore not the true overall progress; you don't know how many modules there are until compilation finishes.
Before modifying cacheGroups, let's look at the current chunk layout with the visualization tool. Here is the visualized sharding result after running the packaging analysis:
As you can see, chunk-vendors is very large because the project depends on many libraries. chunk-common is also very large because the common files of all entries are packed into this single chunk. Both chunks are loaded by every entry, so they drag down performance. There is also a lot of duplicate code, and some rarely used code has been packed into the entry chunks. The following sections address these problems.
Principles for the new sharding rules
- Files under node_modules are still packed into chunk-vendors: they are referenced in too many places to switch to on-demand loading, and they change rarely, so the browser cache (304) stays valid for a long time.
- Files that are rarely referenced but large are split into separate chunks.
- The rest of the common code is split into several chunks, so that visiting one entry doesn't download all of the common code, and changing part of the code doesn't invalidate the cache for all of it.
- Small async chunks are merged into related async chunks, or into the entry chunks.
Route lazy loading
The entry chunks are too big. Checking the routes, many were statically imported; I converted a batch of them to dynamic import, leaving only a few frequently used routes static.
Route lazy loading is a feature provided by vue-router: `const Foo = () => import('./Foo.vue')` loads the Foo module on demand. Each on-demand module is split into its own async chunk, which is downloaded only when Foo is actually used.
Rules for deciding which routes become async chunks:

 | Frequently used | Rarely used |
---|---|---|
Small file | Merge into the entry chunk | Separate chunk |
Large file | Separate chunk (plus prefetch) | Separate chunk |
Advantages of splitting large chunks into smaller ones: the files users must download and the code they must execute on first visit shrink, and duplicate code between entries is reduced.
Drawbacks: users may have to download new files later on, adding wait time, and an async chunk may depend on files already present in the entry chunk, causing duplicate code to be downloaded.
Dividing by route, each route matching the rules above gets its own chunk; the rest stay statically imported or are merged into other chunks (routes given the same chunk name are merged into one chunk).
{
  path: '/user/XXX',
  name: 'user-XXX',
  // Change to dynamic import
  component: () => import(
    /* webpackChunkName: "chunk-user-XXX" */
    'src/pages/XXX'
  ),
},
After the change there are a few more small chunks, but the impact is minor: they are async chunks, so the extra files don't slow down the initial request.
Splitting out infrequently used libraries
- Take tinymce, a rich text editor control, as an example; large modules like this can be switched to on-demand loading:
const tinymce = () => import(
  /* webpackChunkName: "chunk-tinymce" */
  '../plugins/tinymce'
);
...
beforeMount() {
  // Dynamically import tinymce; this chunk (~300K compressed) downloads before the first mount
  tinymce().then(() => {
    this.loading = false;
  });
},
After these two steps, the total size of the chunks went from 16 MB to 10.8 MB (parsed).
The new sharding shows much less duplication, but the two big chunks are still huge. Keep dividing.
- echarts (672K): a chart library. Not every entry uses it, but it is used in so many places that on-demand loading is impractical, so split it into its own chunk:
// Modify vue.config.js
cacheGroups: {
  // Add a cacheGroup
  echarts: {
    name: 'chunk-echarts',
    test: /[\\/]node_modules[\\/]echarts[\\/]/,
    priority: 0,
    chunks: 'all',
  }
}
Increase the number of extracted common chunks
The entry chunks are still too large, so the number of common chunks that may be extracted can be raised. More chunks means less duplicate code, but the file count grows, so higher is not always better.
splitChunks: {
  maxInitialRequests: 5, // default: 3
  maxAsyncRequests: 6,   // default: 5
  ...
},
Vendors extraction
The default strategy extracts every file under node_modules that the entry chunks depend on. This has a problem: async chunks pack their node_modules files again, so browsers download some libraries twice. Moreover, a library referenced only once gains nothing from extraction: it neither reduces the total size of the chunks nor the number of files, it just inflates chunk-vendors, increasing the total file size of every entry.
cacheGroups: {
  ...
  vendors: {
    name: 'chunk-vendors',
    test: /[\\/]node_modules[\\/]/,
    priority: -10,
    // Add the following two lines
    minChunks: 2,
    chunks: 'all',
  }
}
Pitfall
After the sharding rules were modified, some JS files went missing from the DOM: HtmlWebpackPlugin only inserts the chunks explicitly listed in its chunks option, and the new chunk names no longer matched (more on this below).
Splitting chunk-common
Originally, code shared by the entry chunks (referenced at least twice) was all packed into one chunk. In practice, though, users of a given role are unlikely to visit the other entries, yet every entry must download chunk-common. So chunk-common is split into multiple pieces (the common code is no longer merged into a single file).
common: {
  name: true, // default is name: 'chunk-common'
  minChunks: 2,
  minSize: 60000, // only extract modules larger than 60KB, to prevent a heap of tiny chunks
  priority: -20,
  chunks: 'initial',
  reuseExistingChunk: true,
},
After these steps, the total size of the chunks went from 10.8 MB to 15.8 MB (parsed).
The total looks bigger because there is now more duplicate code, but the files each entry must load dropped from 7-8 MB to 3-4 MB, an average reduction of 60%. Next to that, taking up 5 MB more storage on the server is trivial.
Why split chunk-common but not chunk-vendors?
As mentioned above, chunk-vendors holds the libraries the project depends on, which change rarely, so the 304 cache keeps working; splitting it would only increase the file count and the number of requests on first load. chunk-common, by contrast, is the project's own code, which changes constantly, so its cache often goes stale and users would have to re-download all of it each time.
Code minimization
Vue-cli configures terser by default for code compression; its default configuration can be modified to compress further.
Terser reduces code size by simplifying expressions and functions, and some of its options also speed up execution and reduce runtime memory usage.
Make the following changes to vue.config.js:
// webpack-chain usage: https://github.com/neutrinojs/webpack-chain
chainWebpack: config => {
  if (process.env.NODE_ENV !== 'development') {
    config.optimization.minimizer('terser').tap((args) => {
      args[0] = {
        test: /\.m?js(\?.*)?$/i,
        chunkFilter: () => true,
        warningsFilter: () => true,
        extractComments: false, // whether to extract comments into a separate file
        sourceMap: true,
        cache: true,
        cacheKeys: defaultCacheKeys => defaultCacheKeys,
        parallel: true,
        include: undefined, // which files to apply to
        exclude: undefined,
        minify: undefined, // custom minify function
        // Complete options: https://github.com/terser/terser#minify-options
        terserOptions: {
          compress: {
            arrows: true, // convert functions to arrow functions
            collapse_vars: false, // may have side effects, so off
            comparisons: true, // simplify comparisons, e.g. !(a <= b) → a > b
            computed_props: true, // turn computed properties into constants, e.g. {["computed"]: 1} → {computed: 1}
            drop_console: true, // remove console.* calls
            hoist_funs: false, // hoist function declarations
            hoist_props: false, // e.g. var o = {p: 1, q: 2}; f(o.p, o.q) → f(1, 2)
            hoist_vars: false, // hoist var declarations; off because it increases output size
            inline: true, // inline calls to functions with return statements; levels: 0 (false), 1, 2, 3 (true)
            loops: true, // optimize do/while/for loops when the condition can be determined statically
            negate_iife: false, // negate immediately-invoked function expressions when the return value is discarded
            properties: false, // rewrite property access with the dot operator, e.g. foo["bar"] → foo.bar
            reduce_funcs: false, // legacy option
            reduce_vars: true, // replace variables that are assigned constant values with the constants
            switches: true, // remove duplicate branches and unused parts of switch statements
            toplevel: false, // drop unused functions and variables in the top-level scope
            typeofs: false, // convert typeof foo == "undefined" to foo === void 0; off for pre-IE10 compatibility
            booleans: true, // simplify boolean expressions, e.g. !!a ? b : c → a ? b : c
            if_return: true, // optimize if/return and if/continue
            sequences: true, // join consecutive simple statements with the comma operator; a positive integer caps the sequence length (default 200)
            unused: true, // drop unused functions and variables
            conditionals: true, // optimize if statements and conditional expressions
            dead_code: true, // drop unreachable code
            evaluate: true, // try to evaluate constant expressions
            // passes: 2, // maximum number of compress passes; default 1
          },
          mangle: {
            safari10: true,
          },
        },
      };
      return args;
    });
  }
  ...
}
Each parameter should be set according to the project's specifics. The price of a faster runtime or smaller code is a longer packaging time, and if you don't know what a parameter means, it's best not to touch it.
Chunks: 15.8 MB => 15.7 MB (parsed)
HtmlWebpackPlugin
HtmlWebpackPlugin is a webpack plugin that simplifies the creation of HTML files by letting you specify an HTML template for each entry. The pages option in vue.config.js is in fact specifying HtmlWebpackPlugin options.
I upgraded it to v4.3, which supports inserting tags based on an entry's chunks.
The reason for the upgrade is the pitfall described earlier.
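For reference, a hedged sketch of what the pages option looks like; the page names and paths here are hypothetical, not this project's real structure:

```js
// vue.config.js: each key becomes an entry with its own HtmlWebpackPlugin instance
module.exports = {
  pages: {
    admin: {
      entry: 'src/pages/admin/main.js', // entry chunk for this page
      template: 'public/admin.html',    // HTML template handed to HtmlWebpackPlugin
      filename: 'admin.html',           // output file under /dist
    },
    user: {
      entry: 'src/pages/user/main.js',
      template: 'public/user.html',
      filename: 'user.html',
    },
  },
};
```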
PreloadWebpackPlugin
This is a plugin for HtmlWebpackPlugin that inserts `<link rel="preload">` and `<link rel="prefetch">` tags. PreloadWebpackPlugin v2.3 doesn't support HtmlWebpackPlugin v4, so upgrade to v3.0.0-beta.3 (see the issues linked in the code below). However, under multiple entries v3.0.0-beta.3 inserts prefetch tags for all async chunks by default, so they must be filtered with regexes, and it drops the original per-entry insertion option. Prefetch must be targeted precisely rather than abused.
prefetch tells the browser to download, during idle time, files the user will probably need as they continue browsing. preload downloads, at high priority, files the current page will need; this matters because the browser blocks DOM parsing while handling CSS.
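What the plugin ultimately emits are plain `<link>` tags; a minimal sketch of their runtime equivalent (chunk file names hypothetical):

```js
function addHint(rel, href, as) {
  const link = document.createElement('link');
  link.rel = rel;        // 'prefetch' (likely needed later) or 'preload' (needed soon)
  link.href = href;
  if (as) link.as = as;  // preload requires an `as` type, e.g. 'script' or 'style'
  document.head.appendChild(link);
}

addHint('preload', '/js/chunk-vendors.abc123.js', 'script'); // current page, high priority
addHint('prefetch', '/js/chunk-user-settings.def456.js');    // idle-time download for later
```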
The policy is now: prefetch the commonly used async chunks, and preload chunk-vendors plus the entry's own chunks.
const HtmlWebpackPlugin = require('html-webpack-plugin');
const PreloadWebpackPlugin = require('preload-webpack-plugin');
...
chainWebpack: config => {
  Object.keys(config.entryPoints.entries()).forEach(page => {
    config.plugins.delete(`html-${page}`);
    config.plugins.delete(`preload-${page}`);
    config.plugins.delete(`prefetch-${page}`);
    /**
     * The HtmlWebpackPlugin v3.2 built into vue-cli requires the inserted chunks
     * to be specified explicitly. vue-cli specifies chunks
     * ['chunk-vendors', 'chunk-common', page] by default, but after changing the
     * sharding rules the chunk names aren't known until splitting is done, so
     * replace it with HtmlWebpackPlugin v4.3, which inserts all of an entry's chunks.
     */
    config
      .plugin(`html-${page}`)
      .use(HtmlWebpackPlugin, [{
        filename: `${page}.html`,
        // In v4.3 the option is still called "chunks" but actually takes entries
        chunks: [page],
        template: templates[page],
      }]);
    /**
     * PreloadWebpackPlugin v2.3.0 does not support HtmlWebpackPlugin v4;
     * replace it with v3.0.0-beta.3.
     * @see https://github.com/GoogleChromeLabs/preload-webpack-plugin/issues/79
     * Under multiple entries, v3.0.0-beta.3 inserts all asyncChunks by default,
     * so they must be filtered with regexes.
     * @see https://github.com/GoogleChromeLabs/preload-webpack-plugin/issues/96
     * Someone opened a pull request for per-entry insertion; maybe this
     * workaround won't be needed in the future.
     * @see https://github.com/GoogleChromeLabs/preload-webpack-plugin/pull/109
     */
    config
      .plugin(`prefetch-${page}`)
      .use(PreloadWebpackPlugin, [{
        rel: 'prefetch',
        include: 'asyncChunks',
        fileWhitelist: [
          // your RegExp here
        ],
        fileBlacklist: [
          /\.map$/,
          /hot-update\.js$/,
        ],
        // v3.0.0-beta.3 has no includeHtmlNames option, only excludeHtmlNames
        excludeHtmlNames: Object.keys(config.entryPoints.entries())
          .filter(entry => entry !== page)
          .map(entry => `${entry}.html`),
      }]);
    config
      .plugin(`preload-${page}`)
      .use(PreloadWebpackPlugin, [{
        rel: 'preload',
        include: ['chunk-vendors', page],
        fileBlacklist: [
          /\.map$/,
          /hot-update\.js$/,
        ],
        excludeHtmlNames: Object.keys(config.entryPoints.entries())
          .filter(entry => entry !== page)
          .map(entry => `${entry}.html`),
      }]);
  });
  ...
}
Note that preload must come before prefetch. Also, don't abuse prefetch, or it will consume a lot of server bandwidth.
Note: PreloadWebpackPlugin does not automatically add preload/prefetch for scripts added directly in an HTML template.
CorsPlugin
This is a webpack plugin built into vue-cli that adds the crossorigin attribute to the tags inserted by HtmlWebpackPlugin (for cross-origin scripts). Our project's static resources (including JS and CSS) will eventually live on a CDN, so their origin differs from the project's domain. Without the crossorigin attribute, JS errors surface only as "Script error", and the front-end monitoring cannot collect where the errors occurred.
However, CorsPlugin doesn't support HtmlWebpackPlugin v4 either, so it has to be rewritten: after PreloadWebpackPlugin runs, use regular expressions to patch the prefetch and preload tags in the HTML.
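A hedged sketch of that idea (not the project's actual code): a tiny plugin tapped at emit time, assuming the HTML assets already exist by that phase, that patches crossorigin onto the inserted tags with a regex:

```js
// Patch crossorigin="anonymous" onto <script ...> and <link rel="preload|prefetch" ...>
// tags in the emitted HTML that don't already carry the attribute.
class CrossoriginPatchPlugin {
  apply(compiler) {
    compiler.hooks.emit.tap('CrossoriginPatchPlugin', (compilation) => {
      Object.keys(compilation.assets)
        .filter((name) => name.endsWith('.html'))
        .forEach((name) => {
          const html = compilation.assets[name].source().toString();
          const patched = html.replace(
            /<(script |link rel="(?:preload|prefetch)" )(?![^>]*crossorigin)/g,
            '<$1crossorigin="anonymous" '
          );
          compilation.assets[name] = {
            source: () => patched,
            size: () => patched.length,
          };
        });
    });
  }
}

module.exports = CrossoriginPatchPlugin;
```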
Image compression
Add an image compression loader, image-webpack-loader, in vue.config.js:

npm install image-webpack-loader --save-dev
chainWebpack: config => {
  config.module
    .rule('images')
    .use('image-webpack-loader')
    .loader('image-webpack-loader')
    .options({
      bypassOnDebug: true, // not executed in webpack 'debug' mode
    })
    .end();
}
/dist size: 91 MB => 83.1 MB; chunks: 16.3 MB => 15.7 MB (parsed)
Adjust the file loading sequence
HtmlWebpackPlugin inserts the entry chunks at the end of `<body>`, so scripts written directly into the HTML template load before the entry chunks and block DOM parsing until they finish. So: add the defer attribute to scripts that don't need to load first, and preload (placed before all CSS files) the scripts that do.
defer: this boolean attribute tells the browser that the script should execute after the document has been parsed but before the DOMContentLoaded event fires.
The loading order of a page's files is easy to overlook. Tune it so that downloading and execution run fully in parallel and the browser is never left "idle"; this can greatly improve FMP.
HTTP/2 introduced multiplexing, which nicely solves the browser's limit on concurrent requests to one domain, so the number of files stops being a worry. Unfortunately, the project's CDN doesn't support HTTP/2.
Speeding up webpack packaging
The project is huge, so packaging was already slow; after adding image-webpack-loader, each package-and-deploy took about a minute longer, which was unbearable. Deployment pulls the code and packages it inside a Docker container, and the bulk of the packaging time goes to npm install and the various loaders, so not starting from zero on every build would help a lot. First, add cache-loader to the slow image-webpack-loader:
config.module
  .rule('images')
  // Add a cache to image-webpack-loader to speed up compilation
  .use('cache-loader')
  .before('url-loader')
  .loader('cache-loader')
  .options({
    cacheDirectory: path.join(__dirname, 'node_modules/.cache/image-webpack-loader'),
  });
Note: only loaders that take long to run are worth caching, since reading and writing cache files also costs time; overusing the cache can make things slower.
node_modules is packaged into the image once when the image is built. On each deployment, after pulling the code, move the image's node_modules into the code directory and then run npm install and npm run build. Alternatively, use npm install --cache-min Infinity to install from the npm cache, moving the image's .cache directory into node_modules first. Either way, npm install then largely avoids slow downloads caused by network instability.
There are also offline package installation tools such as Freight and npmbox; use them as the project requires.
Also, if production doesn't need source maps, turn them off in vue.config.js:
productionSourceMap: false
Comparison of packaging time after processing:
 | With `.cache` directory | Without `.cache` directory |
---|---|---|
Original deployment time | 190s | 309s |
After removing `image-webpack-loader` | 159s | 332s |
After adding `cache-loader` to `image-webpack-loader` | 151s | 314s |
After removing SourceMap | 102s | 257s |
Everyday coding suggestions
In fact, most projects' performance problems are introduced by bad habits in everyday coding. If everyone writes a little carelessly, the codebase becomes very hard to optimize later, because fixing it means touching a large area. Some suggestions:
- For a new route, consider whether it should be lazy loaded, whether an existing chunk already contains the same files, whether it should merge into an existing chunk, and whether to prefetch it.
- Consider lazy loading tabs (render a tab only when switching to it); see the sketch after this list.
- Large dependencies are best loaded on demand.
- Pick image resolutions appropriate to where the images are shown.
- Consider whether user-uploaded images should be compressed.
- Opening a new browser tab re-executes a lot of JS, so consider whether it's necessary.
- Avoid changing the depended-on libraries frequently, and batch such changes into as few releases as possible.
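For the tab suggestion above, a minimal sketch using Vue async components, so a tab's chunk is downloaded only the first time it is shown (component names hypothetical):

```js
// In the template: <component :is="activeTab" /> plus buttons that set activeTab.
// Inactive tabs are neither downloaded nor rendered until first selected.
export default {
  components: {
    TabOrders: () => import(/* webpackChunkName: "tab-orders" */ './TabOrders.vue'),
    TabReports: () => import(/* webpackChunkName: "tab-reports" */ './TabReports.vue'),
  },
  data() {
    return { activeTab: 'TabOrders' };
  },
};
```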
The optimization effect
Sharding results
Chunks: 16.09 MB (parsed)
Entry size: 2.3-4.8 MB (-60%) (parsed)
Main indicator data
After all the work above, let's see how much effect it produced.
 | avg | 50th percentile | 90th percentile |
---|---|---|---|
TTI | -11% | -11% | -11% |
FCP | -20% | -18% | -23% |
FMP | -11% | -11% | -14% |
DOM Ready | -15% | -16% | -14% |
In addition, the slow-open ratio dropped by 67% and the first-load bounce rate dropped by 47%.
TTI still fell short of the 30% goal. On the one hand, too many resources are loaded up front, and some could still move to on-demand loading; on the other, some Ajax requests are slow, which is beyond the front end's control. Other approaches will be explored later (a service worker is being considered).
To be continued (continuing to update)