Performance optimization indicators and measurement tools
The industry standard
# Google Chrome Network panel
- Understand the loading waterfall diagram
Read a single resource's loading horizontally: hover the mouse over a resource's bar in the waterfall to show a tooltip with the timing breakdown.
Queueing: Time spent queuing resources.
DNS Lookup: Time spent resolving the domain name. When the browser requests a resource under a domain, it first needs to obtain the server's IP address through the DNS resolver.
Initial Connection/Connecting: Indicates the time spent to establish a Connection, including the TCP handshake and retry time.
SSL: indicates the time of SSL certificate encryption negotiation.
Request sent: Time spent sending a Request.
Waiting (TTFB): The time between sending the request and receiving the first byte of the response. This is the parameter that affects the user the most; the two biggest factors are ① the processing power of the server and ② the network.
Content Download: Time spent downloading resources.
Read the waterfall vertically
① Check whether resources load serially (blocking one another) or in parallel; serial loads can often be parallelized to speed things up. ② Check the key milestones of page loading: the blue line marks when the DOM has been fully loaded and parsed; the red line marks when all DOM, CSS, JS and images on the page have finished loading.
- Save performance test results as a HAR file for later reuse or for analysis in other performance tools
- Use Lighthouse for analysis
We'll focus on two metrics. First Contentful Paint: the time at which the first piece of content is painted; it can be text or an image, as long as the screen is no longer blank.
Speed Index: The industry standard is 4 seconds
- Interactive experience
① Feedback time for interactive actions – fast enough
② FPS – The animation is smooth enough, 60 frames per second is standard
How to view FPS: open DevTools, press Ctrl + Shift + P and search for the FPS meter.
③ Asynchronous request completion time – fast enough. Ideally every asynchronous request returns data within 1s; if not, compress the response; if it is still too slow after compression, consider front-end interaction optimizations such as a loading state.
The optimization model
RAIL measurement model – Google
It lets us quantify the results of our optimization and tells us how well we are doing.
- R: Response (event handling should complete within 50ms) – whether the site gives feedback after the user clicks a page element or enters content
- A: Animation (one frame every 10ms) – whether the Animation is smooth enough for the user to see
- I: Idle (maximize idle time) – we want the browser's main thread to have enough idle time. For example, if the page suddenly freezes while we browse, the main thread is too busy to respond to what you are doing.
- L: Load (content is loaded within 5s and interaction can occur) – refers to the time when network resources are loaded
Measurement tools
- Chrome DevTools development and debugging, performance evaluation
- Lighthouse: overall website quality assessment
- WebPageTest: multi-location testing and comprehensive performance reports – webpagetest.org/ can also be deployed locally
Performance related APIs
Key timing formulas based on the Performance Timing APIs:
- DNS resolution time: domainLookupEnd - domainLookupStart
- TCP connection time: connectEnd - connectStart
- SSL handshake time: connectEnd - secureConnectionStart
- Network request time (TTFB): responseStart - requestStart
- Data transmission time: responseEnd - responseStart
- DOM parsing time: domInteractive - responseEnd
- Resource loading time: loadEventStart - domContentLoadedEventEnd
- First Byte time: responseStart - domainLookupStart
- White screen time: responseEnd - fetchStart
- First interactive time: domInteractive - fetchStart
- DOM Ready time: domContentLoadedEventEnd - fetchStart
- Full page load time: loadEventStart - fetchStart
- HTTP header size: transferSize - encodedBodySize
- Number of redirects: performance.navigation.redirectCount
- Redirect time: redirectEnd - redirectStart
```js
// Calculate some key performance metrics
window.addEventListener('load', (event) => {
  // Time to Interactive (TTI)
  let timing = performance.getEntriesByType('navigation')[0];
  console.log(timing.domInteractive);
  console.log(timing.fetchStart);
  let diff = timing.domInteractive - timing.fetchStart;
  console.log("TTI: " + diff);
});

// Observe long tasks
// The PerformanceObserver receives all long task entries
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry);
  }
});
// Listen for long tasks
observer.observe({ entryTypes: ['longtask'] });

// Visibility state monitoring (we can react to whether the user is actually
// using the page, e.g. pause a video, save game state...)
let vEvent = 'visibilitychange';
if (document.webkitHidden != undefined) {
  // WebKit event name
  vEvent = 'webkitvisibilitychange';
}
function visibilityChanged() {
  if (document.hidden || document.webkitHidden) {
    // The page is hidden
    console.log("Web page is hidden.");
  } else {
    // The page is visible
    console.log("Web page is visible.");
  }
}
document.addEventListener(vEvent, visibilityChanged, false);

// Determine the user's current network status (e.g. load images of
// different resolutions depending on the connection)
var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
var type = connection.effectiveType;
function updateConnectionStatus() {
  console.log("Connection type changed from " + type + " to " + connection.effectiveType);
  type = connection.effectiveType;
}
// Fires when the network status changes
connection.addEventListener('change', updateConnectionStatus);
```
Rendering optimization
Modern browser rendering principles
URL parsing –> DNS resolution –> TCP connection –> sending the HTTP request –> HTTP response –> browser rendering
Browsers are multi-threaded, but page rendering runs on a single main thread. After fetching index.html, the browser allocates memory and a main thread that parses the document top down, left to right:
① When link, img, video... tags are encountered, the browser starts a new thread to load the resource without executing it, and the main thread continues parsing.
② Styles imported with @import are loaded synchronously: no new thread is opened; the main thread fetches the resource itself before continuing to parse and execute the DOM.
③ When a script tag with external JS is encountered, the resource is fetched and executed, which blocks DOM construction. Optimization: use defer and async to fetch the resource asynchronously while DOM parsing continues.
- Modern browsers have a preload scanner: even while a script is being loaded and executed synchronously, the scanner keeps reading ahead and kicks off requests for any asynchronous resources it finds. Because JS may manipulate element styles, even an asynchronously requested JS file will not execute until the preceding CSS has been loaded and applied.
- defer scripts execute in document order after their asynchronous fetch.
- async scripts execute out of order: whichever finishes loading first executes first.
- DOMContentLoaded fires when the DOM structure has been parsed; jQuery's $(function(){}) and $(document).ready(function(){}) are equivalent to it.
- load fires when everything on the page, including resources, has finished loading.
④ The main thread builds the DOM Tree. ⑤ It then combines the DOM Tree with the loaded CSS resources into the Render Tree, and the browser notifies the GPU (graphics card) to start rendering graphics according to the Render Tree.
Rendering stages that can be optimized, and how
Critical rendering path
- Layout and paint – these are the two most expensive steps of the critical rendering path; avoiding them wherever possible is a good optimization point.
The render tree contains only the nodes needed for the web page, the layout calculates the exact position and size of each node, and the rendering is the process of pixelating each node
Repaint: a change in element style that does not affect its width, height, size or position triggers a repaint. Reflow: a change in element size or position triggers a relayout, which forces the render tree to be recalculated and the element to be repainted. Reflow always triggers repaint, but repaint does not necessarily trigger reflow.
- Avoid reflow
Operations that cause reflow:
* First page render
* Browser window resize
* Element size or position changes
* Element content changes (amount of text, image size, etc.)
* Element font size changes
* Adding or removing visible DOM elements
* Activating CSS pseudo-classes (e.g. :hover)
* Querying certain properties or calling certain methods; common ones: offsetWidth, offsetHeight, offsetTop, offsetLeft, scrollWidth, scrollHeight, scrollTop, scrollLeft, scrollIntoView(), scrollIntoViewIfNeeded(), getComputedStyle(), getBoundingClientRect(), scrollTo()

How to avoid reflow:
① To move an element, use transform: translate(...) instead of top, bottom, etc.; this triggers neither reflow nor repaint, only compositing.
② Separate reads from writes – batch reads of layout information, then batch writes that change layout. If reads and writes are interleaved, the page will thrash with forced layouts. Plugin: https://github.com/wilsonpage/fastdom
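A minimal sketch of the read/write separation idea (the .box selector and the sizing rule are made up for illustration): reading a layout property right after a style write forces a synchronous layout, so collect all reads first, then apply all writes.

```js
const boxes = document.querySelectorAll('.box'); // hypothetical elements

// Bad: interleaved read/write forces a layout on every iteration
boxes.forEach((box) => {
  const width = box.offsetWidth;         // read (may trigger layout)
  box.style.height = width / 2 + 'px';   // write (invalidates layout)
});

// Better: batch all reads, then batch all writes
const widths = Array.from(boxes, (box) => box.offsetWidth); // reads only
boxes.forEach((box, i) => {
  box.style.height = widths[i] / 2 + 'px';                  // writes only
});
```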
- Compositor thread and layers
What does the compositor thread do? It splits the page into layers, rasterizes (draws) each layer, and then composites them. How is the page split into layers, and by what rules? By default the browser decides: it analyses whether elements affect one another, and if an element has a large impact on other elements it is promoted to its own layer. The benefit: if that element changes, only its layer needs to be redrawn. We can also manually promote a frequently changing element to its own layer, e.g. with positioning / float + z-index. Use DevTools to inspect how the page is split into layers: Ctrl + Shift + P and search for Layers. Which styles only affect compositing?
- Position: transform: translate(npx, npx);
- Scale: transform: scale(n);
- Rotation: transform: rotate(ndeg);
- Opacity: opacity: 0 ... 1;
Code optimization
JavaScript optimization
- JavaScript overhead and how to shorten parsing time?
Where does the cost come from? Downloading, parsing/compiling and executing the JS. How to reduce it: code splitting and loading on demand; tree shaking to slim the code down; avoid long tasks; avoid inline scripts larger than 1KB.
- Debounce high-frequency event handlers
```js
/*
 * fn    [function] the handler to debounce
 * delay [number]   milliseconds to wait
 */
function debounce(fn, delay) {
  let timer = null;
  return function () {
    if (timer) {
      // Entering this branch means a timer is already running and the same
      // event fired again: cancel it and restart the countdown
      clearTimeout(timer);
      timer = setTimeout(fn, delay);
    } else {
      timer = setTimeout(fn, delay);
    }
  };
}
```
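A possible usage sketch, with a hypothetical onResize handler: the work runs only once the events stop firing for the chosen delay.

```js
function onResize() {
  console.log('recalculate layout once the user stops resizing');
}
// onResize runs only after resize events have been quiet for 300ms
window.addEventListener('resize', debounce(onResize, 300));
```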
- Object optimization
Initialize object members in the same order, avoiding hidden class adjustments;
The V8 engine creates hidden classes while parsing, and identical object shapes reuse them.

```js
class RectArea {               // HC0
  constructor(l, w) {
    this.l = l;                // HC1
    this.w = w;                // HC2
  }
}
const rect1 = new RectArea(3, 4); // creates hidden classes HC0, HC1, HC2
const rect2 = new RectArea(5, 6); // same object structure: all previous hidden classes are reused

const car1 = { color: 'red' };    // HC0
car1.seats = 4;                   // HC1
const car2 = { seats: 2 };        // no reusable hidden class, creates HC2
car2.color = 'blue';              // no reusable hidden class, creates HC3
```
Avoid adding new attributes after instantiation;
```js
const car1 = { color: 'red' };  // in-object property
car1.seats = 4;                 // normal/fast property, stored in the property store:
                                // slower to access than a property of the object itself
```
Use a real Array instead of array-like objects (pseudo-arrays, e.g. arguments) whenever possible;
```js
// Iterating the array-like object directly is less efficient than iterating a real array
Array.prototype.forEach.call(arrObj, (value, index) => {
  console.log(`${index}: ${value}`);
});

// Convert to a real array first, then iterate
const arr = Array.prototype.slice.call(arrObj, 0);
arr.forEach((value, index) => {
  console.log(`${index}: ${value}`);
});
```
Avoid reading more than the array length
```js
function foo(array) {
  for (let i = 0; i <= array.length; i++) { // reads one element past the end
    if (array[i] > 1000) {                  // 1. comparison against undefined
      console.log(array[i]);                // 2. invalid business logic, error
    }
  }
}
```
Avoid element type conversions
```js
// If an array starts out holding only integers, V8 optimizes for that; pushing a
// double invalidates the optimization and the element kind is downgraded.
// The more specific the element type stays, the more the compiler can optimize.
const array = [3, 2, 1];  // PACKED_SMI_ELEMENTS
array.push(4.4);          // PACKED_DOUBLE_ELEMENTS
```
- Minimize the use of closures and avoid nested loops and infinite loops
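A small sketch of how a closure can unintentionally keep a large object alive (the names are made up for illustration); dropping the references when the handler is no longer needed lets the garbage collector reclaim the memory.

```js
function createHandler() {
  const bigData = new Array(1e6).fill('*'); // stays alive as long as the closure does
  return function onClick() {
    console.log(bigData.length);
  };
}

let handler = createHandler();
document.addEventListener('click', handler);

// When it is no longer needed, remove the listener and drop the reference
// so the closed-over bigData can be garbage collected
document.removeEventListener('click', handler);
handler = null;
```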
HTML optimization
- Reduce iframes
Reason: iframes block loading of the parent document, and creating elements inside an iframe is much more expensive than creating the same elements in the parent document. If an iframe is unavoidable, set its src dynamically:

```html
<iframe id="a"></iframe>
<script>
  document.getElementById('a').setAttribute('src', url); // url: the iframe address
</script>
```
- Compress whitespace characters
- Avoid deep nesting of nodes
- Avoid table layouts
- Remove the comments
- Use external links for CSS & JS as much as possible
- Delete element default attributes
These can be automated with the html-minifier tool.
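A minimal sketch of calling html-minifier programmatically (check the option names against the version you install):

```js
const { minify } = require('html-minifier');

const result = minify('<p  title="intro">  hello  </p> <!-- note -->', {
  collapseWhitespace: true,   // compress whitespace characters
  removeComments: true,       // remove the comments
  removeAttributeQuotes: true // drop attribute quotes where safe
});
console.log(result); // minified markup
```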
CSS optimization
- CSS parsing rules run from right to left, so minimize hierarchy nesting
- Reduce CSS blocking for rendering
Put the CSS stylesheet at the top of the head; Reduce the CSS size.
- Use GPU to complete animation
- Use contain property
contain: layout; indicates that the outside of the element does not affect its internal layout, and vice versa. The contain property lets developers declare the current element and its contents as independent of the rest of the DOM tree as far as possible. This allows the browser to recalculate layout, style, paint, size, or any combination of these for a limited area of the DOM rather than the whole page, which is a significant performance improvement.
- Use the font-display attribute
It helps the text appear on the page earlier and reduces the flash that occurs when the web font swaps in.
Resource optimization
Compress & Merge
- HTML compression
kangax.github.io/html-minifi… Use npm tools such as html-minifier.
- CSS compression
kangax.github.io/html-minifi… Use npm tools such as clean-css.
- Js compression and obfuscation – Webpack for JS build time compression
- Merge CSS JS files
Merging a few small files: OK. Files that don't conflict and belong to the same business module: OK. Merging purely to optimize network loading: No.
Image format
- JPEG/JPG
Advantages: high compression ratio while preserving image quality well. Use case: large images where quality still matters, such as a home-page carousel. Drawback: images with strong textures and edges show jagged blur. Tool: https://github.com/imagemin/imagemin, or an online JPG compression tool.
- PNG
Advantages: supports transparent backgrounds. Use case: compensates for JPG's weaknesses and renders lines, textures and edges well; mostly used for small images such as logos and icons. Drawback: relatively large file size. Tool: https://github.com/imagemin/imagemin-pngquant, or an online PNG compression tool.
- webp
Advantages: the same quality as PNG with a higher compression ratio. Disadvantage: browser compatibility.
- Image base64
Base64-encoding an image bloats the code considerably and is inconvenient to develop and maintain (use with care). Webpack loaders (url-loader, which falls back to file-loader for larger files) can automatically Base64-inline small images.
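A possible webpack configuration sketch using url-loader to inline only small images (the 8 KB threshold is an arbitrary example):

```js
// webpack.config.js (sketch)
module.exports = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif)$/i,
        use: [
          {
            loader: 'url-loader',
            options: {
              limit: 8 * 1024,              // inline as Base64 only below ~8 KB
              name: '[name].[hash:8].[ext]' // larger files are emitted as files
            }
          }
        ]
      }
    ]
  }
};
```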
Image to load
- Lazy loading of images
<img loading="lazy" SRC ="https://xxxxx" > loading="lazy" if the browser supports this property yall.js BlazyCopy the code
- Use progressive images
JPG has two basic formats: ① baseline JPG, which is scanned row by row from top to bottom while loading, and ② progressive JPG, which loads from low resolution up to full resolution.
Progressive image tools: progressive-image, ImageMagick, libjpeg, jpegtran, jpeg-recompress, imagemin
- Use responsive images
① Use srcset ② use sizes ③ use the picture element
Font optimization
Problem: until the font file has downloaded, the browser either hides the text or falls back to a default font, causing a flash of invisible text or a flash of unstyled text. These problems are unavoidable: a web font takes time to download, and until it arrives the browser must make a choice, either show nothing until the download completes, or show a default font first and swap in the downloaded font afterwards.
- Use font-display to control this behavior in the browser
font-display: auto | block | swap | fallback | optional;
- Character set splitting using the Unicode-range attribute
- Load fonts using Ajax + Base64
Build optimization
Depend on the optimization
- noParse
Improves build speed by telling webpack to skip parsing larger libraries entirely; the excluded libraries must not use import, require or define module syntax.
```js
// webpack.config.js
module.exports = {
  module: {
    noParse: /lodash/,
  },
};
```
- DllPlugin
Avoids rebuilding libraries that never change during packaging, increasing build speed. Typical application scenario: the development environment.
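A rough two-step sketch of the DllPlugin approach (file names, paths and the library list are illustrative): build the vendor dll once, then point the main build at its manifest.

```js
// webpack.dll.config.js (sketch): build rarely-changing vendor libraries once
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: { vendor: ['react', 'react-dom'] },
  output: {
    path: path.resolve(__dirname, 'dll'),
    filename: '[name].dll.js',
    library: '[name]_dll'
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]_dll',
      path: path.resolve(__dirname, 'dll/[name].manifest.json')
    })
  ]
};

// webpack.config.js (sketch): tell the main build to reuse the prebuilt dll
// plugins: [
//   new webpack.DllReferencePlugin({
//     manifest: require('./dll/vendor.manifest.json')
//   })
// ]
```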
Code splitting
- Manually defined entry
- Use webpack's SplitChunks to extract common code and split business code
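A possible optimization.splitChunks sketch (cache-group names and thresholds are just examples):

```js
// webpack.config.js (sketch)
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/, // third-party libraries
          name: 'vendors',
          priority: -10
        },
        common: {
          minChunks: 2,                   // shared by at least two chunks
          name: 'common',
          priority: -20,
          reuseExistingChunk: true
        }
      }
    }
  }
};
```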
Code Compression – Resource compression based on WebPack4 (Minification)
- Terser to compress JS
- mini-css-extract-plugin for CSS
- HtmlWebpackPlugin's minify option to compress HTML
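A hedged sketch of wiring these together (the template path is a placeholder):

```js
// webpack.config.js (sketch)
const TerserPlugin = require('terser-webpack-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  optimization: {
    minimize: true,
    minimizer: [new TerserPlugin()]   // Terser compresses JS
  },
  plugins: [
    new HtmlWebpackPlugin({
      template: './src/index.html',   // hypothetical template path
      minify: {
        collapseWhitespace: true,     // HTML minification
        removeComments: true
      }
    })
  ]
};
```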
Persistent cache – WebPack-based persistent cache of resources
- Each packaged resource file has a unique hash value
- After the modification, only the hash of affected files changes
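A minimal output-naming sketch using content hashes (file name patterns are illustrative):

```js
// webpack.config.js (sketch)
module.exports = {
  output: {
    filename: '[name].[contenthash:8].js',        // hash changes only when the chunk's content changes
    chunkFilename: '[name].[contenthash:8].chunk.js'
  }
};
// Extracted CSS can be named the same way, e.g.
// new MiniCssExtractPlugin({ filename: '[name].[contenthash:8].css' })
```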
Monitoring and Analysis – WebPack-based application size monitoring and analysis
- Stats analysis and visualization
https://alexkuz.github.io/webpack-chart/ For further analysis: npm i source-map-explorer, then add an analyze script: "scripts": { "analyze": "source-map-explorer 'build/*.js'" }
- webpack-bundle-analyzer for bundle size analysis
- speed-measure-webpack-plugin for build speed analysis
Load on demand
- Component dynamic loading
- Dynamic route loading
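A sketch of both, using dynamic import(); the component, route and file names are placeholders.

```js
// Dynamic component loading (React sketch)
import React, { lazy, Suspense } from 'react';

const Chart = lazy(() => import('./Chart')); // fetched only when first rendered

function Page() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <Chart />
    </Suspense>
  );
}

// Dynamic route loading (Vue Router sketch)
// const routes = [
//   { path: '/report', component: () => import('./views/Report.vue') }
// ];
```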
Transfer load optimization
GZip
```nginx
# Enable gzip
gzip on;
# Minimum file size to compress
gzip_min_length 1k;
# Compression level 1-9
gzip_comp_level 2;
# File types to compress
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
# Add the "Vary: Accept-Encoding" response header (recommended)
gzip_vary on;
# Serve pre-compressed .gz files directly for static resources
gzip_static on;
# Compression buffers: 4 buffers of 16k each
gzip_buffers 4 16k;
# Minimum HTTP version to compress for
gzip_http_version 1.1;
```
KeepAlive
It lets us reuse a TCP connection: after the first TCP connection to a server is established, subsequent requests do not have to set up a new one. KeepAlive is part of the HTTP standard and is enabled by default since HTTP/1.1.
```nginx
# How long an idle keep-alive connection stays open, in seconds
keepalive_timeout 65;
# Maximum number of requests that can be served over one keep-alive connection
keepalive_requests 100;
```
HTTP cache
developer.mozilla.org/zh-CN/docs/…
The cache location
Memory Cache, Disk Cache, Push Cache (HTTP/2)
- Negotiation cache – two mechanisms: ETag / If-None-Match and Last-Modified / If-Modified-Since
www.cnblogs.com/tugenhua070…
**Negotiation cache principle:** the client sends a request to the server, and the server checks whether the request carries the corresponding identifier. If it does not, the server returns the resource along with an identifier. On the next request the client sends the identifier back; the server validates it, and if validation passes it responds with 304, telling the browser to read from its cache. If validation fails, the requested resource is returned in full. (These are enabled by default in nginx.)
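A toy Node.js sketch of the negotiation handshake using Last-Modified / If-Modified-Since (hand-rolled purely to illustrate the 304 flow; real servers such as nginx do this for you, and ./index.html is a placeholder path):

```js
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  const lastModified = fs.statSync('./index.html').mtime.toUTCString();

  if (req.headers['if-modified-since'] === lastModified) {
    res.writeHead(304);  // identifier validated: tell the browser to use its cache
    return res.end();
  }

  // No identifier or validation failed: return the full resource plus the identifier
  res.writeHead(200, { 'Last-Modified': lastModified });
  res.end(fs.readFileSync('./index.html'));
}).listen(3000);
```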
- Strong cache – Expires and Cache-Control
```nginx
# inside a server/location block
if ($request_filename ~* .*\.(?:htm|html)$) {
    add_header Cache-Control "no-cache, must-revalidate";
    add_header "Pragma" "no-cache";
    add_header "Expires" "0";
}
if ($request_filename ~* .*\.(?:js|css)$) {
    expires 7d;   # 7 days
}
if ($request_filename ~* .*\.(?:gif|jpg|jpeg|png|bmp|swf|ico|cur|gz|svg)$) {
    expires 7d;
}
index index.html index.htm;
```
The Service Worker cache
Vue and React each have ready-made ways to implement Service Worker offline caching.
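A bare-bones, framework-agnostic sketch (the file names and cache name are arbitrary): register a service worker, pre-cache a few assets at install time, and serve them cache-first.

```js
// main.js: register the worker
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/sw.js');
  });
}

// sw.js: pre-cache at install time and answer fetches cache-first
const CACHE = 'static-v1';
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/index.css', '/index.js']))
  );
});
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```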
HTTP/2
Advantages: ① binary transmission ② request response multiplexing ③ Server push
What is multiplexing? In HTTP/1.1, each request needs its own HTTP connection, i.e. a TCP three-way handshake to open and four-way wave to close. That setup takes a considerable share of a single request's time and is logically unnecessary for a series of consecutive requests: it would be far more efficient to establish the connection once and reuse the channel to download the other files.

To address this, HTTP/1.1 provides keep-alive, which lets a single HTTP connection serve multiple requests. But two problems remain: ① HTTP/1.1 transfers data serially, so requests over one connection must be answered in order; we only save the connection-setup time, not the time to fetch the data. ② Maximum concurrency: if the server (say Apache) allows 300 concurrent connections and the browser limits itself to 6 connections per domain, the server can serve at most about 50 users concurrently.

HTTP/2 introduces binary data frames and streams. Frames carry sequence identifiers, so the browser can receive data out of order and merge it back correctly without losing anything, which means the server can transfer data in parallel. All requests to the same domain share one stream-based connection, no matter how many files are fetched. With the same Apache limit of 300 connections, the number of concurrently served users can rise to 300, a six-fold increase.
SSR
Speeds up first-screen loading; better SEO.
- nuxt
- next
Use CDN resources
When the client and server exchange information, transmit multiple pieces of data together in JSON format where possible
More Optimization Techniques
DNS prefetching
Our site may reference many domains, and the first request to each domain takes time for DNS resolution. We can optimize with DNS prefetch (declared early in the head).
Use SVG for icons
Advantages: richer colors, semantic, resolution-independent vector graphics.
FlexBox layout
Advantages: a higher-performance layout solution; the container can control the size, order, alignment and spacing of its children; supports bidirectional layout.
Optimize the loading sequence of resources
- Preload: Preloads resources that appear late but are important to the current page
- Prefetch: Loads resources required by subsequent pages in advance. The priority is low
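With webpack, both hints can be attached to dynamically imported chunks via magic comments (module paths are placeholders):

```js
// Emits <link rel="prefetch"> for a chunk likely needed on a later page
import(/* webpackPrefetch: true */ './routes/Settings');

// Emits <link rel="preload"> for a chunk needed soon on the current page
import(/* webpackPreload: true */ './charts/HeavyChart');
```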
Pre-rendering
Performance bottlenecks for large single-page applications: JS download + parsing + execution
Main problems with SSR: Sacrificing TTFB to remedy first Paint; Implementation complexity;
Pre-rendering renders pages in advance at build (packaging) time, with no server involvement;
- react-snap – can be used with both React and Vue
github.com/stereoboost…
Improved list performance – Virtual lists
The windowing technique renders only the content in and near the visible area, reducing the time spent re-rendering components and creating DOM nodes; see the sketch after this list.
- vue-virtual-scroll-list
- react-window
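A hedged react-window sketch (list length, row height and row content are placeholders): only the rows inside the scroll window, plus a small overscan, are actually mounted.

```js
import React from 'react';
import { FixedSizeList } from 'react-window';

// Each row receives a style prop that positions it inside the virtual list
const Row = ({ index, style }) => <div style={style}>Row #{index}</div>;

// Only the rows visible inside the 400px-tall window are rendered
const VirtualList = () => (
  <FixedSizeList height={400} width={300} itemCount={10000} itemSize={35}>
    {Row}
  </FixedSizeList>
);

export default VirtualList;
```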
Skeleton components
Placeholders reduce the blank first-screen area and layout shift, and improve the user's perception of loading.
- react-placeholder
- vue-skeleton-webpack-plugin