Overview:
Performance tuning tools
chrome devtool: Network
Ability: view network requests and resource loading times
- Queueing: time the browser keeps the request waiting in its queue
- Stalled: time the request is stalled before it can be sent, including queueing
- DNS Lookup: DNS resolution time
- Initial Connection: time to establish the TCP connection
- SSL: time the browser spends establishing a secure connection with the server
- Waiting (TTFB): time spent waiting for the server to return the first byte of data
- Content Download: time the browser spends downloading the resource
chrome devtool: Lighthouse
- First Contentful Paint: time to first render; green within 1s
- Speed Index: green within 4s
- Time to Interactive: the time until the page becomes interactive
chrome devtool: Performance
Professional website performance analysis tool
webPageTest
It can simulate access under different scenarios, such as different browsers, different countries, and so on.
webpack-bundle-analyzer
Resource packaging analysis tool
Front-end performance parameters
You can obtain the loading time of the front-end page in the following way:
window.addEventListener('DOMContentLoaded', (event) => {
    let timing = performance.getEntriesByType('navigation')[0];
    console.log(timing.domInteractive);
    console.log(timing.fetchStart);
    let diff = timing.domInteractive - timing.fetchStart;
    console.log("TTI: " + diff);
});
More performance parameters are as follows:
- DNS resolution time: domainLookupEnd - domainLookupStart
- TCP connection time: connectEnd - connectStart
- SSL connection time: connectEnd - secureConnectionStart
- Network request time (TTFB): responseStart - requestStart
- Data transmission time: responseEnd - responseStart
- DOM parsing time: domInteractive - responseEnd
- Resource loading time: loadEventStart - domContentLoadedEventEnd
- First byte time: responseStart - domainLookupStart
- White screen time: responseEnd - fetchStart
- First interactive time: domInteractive - fetchStart
- DOM Ready time: domContentLoadedEventEnd - fetchStart
- Full page load time: loadEventStart - fetchStart
- HTTP header size: transferSize - encodedBodySize
- Redirect count: redirectCount
- Redirect time: redirectEnd - redirectStart
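As a sketch, these formulas can be wrapped in a small helper that takes a PerformanceNavigationTiming-like object. The helper accepts the timing entry as a parameter (an illustrative choice, not a standard API) so it can also be exercised with mock data:

```javascript
// Compute common page metrics from a PerformanceNavigationTiming-like object.
function computeMetrics(t) {
    return {
        dns: t.domainLookupEnd - t.domainLookupStart,
        tcp: t.connectEnd - t.connectStart,
        ttfb: t.responseStart - t.requestStart,
        whiteScreen: t.responseEnd - t.fetchStart,
        firstInteractive: t.domInteractive - t.fetchStart,
        fullLoad: t.loadEventStart - t.fetchStart,
    };
}

// In the browser:
// const timing = performance.getEntriesByType('navigation')[0];
// console.log(computeMetrics(timing));
```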
Resource optimization
Image resource optimization
- Transfer image resources to a CDN
Transferring image resources to CDN has several benefits:
First, browsers limit the number of concurrent connections per domain (around 6 per host in mainstream HTTP/1.1 browsers).
Serving images from a CDN domain keeps image requests from competing with requests to the primary domain for those connection slots.
Second, many CDNs today can dynamically crop and compress images, so the site can load an image at the resolution it is actually displayed at, saving the front end from scaling images itself.
There are two reasons not to scale images in the front end:
- Shrinking an image via HTML only reduces its displayed size, not its transfer size, and the image may be distorted
- Enlarging means the image was not served at the right size, and the page pays an unnecessary load cost
- CSS sprites
Sprite sheets also aim to reduce the number of HTTP requests; the principle will not be repeated here. In addition, a sprite sheet can reduce the total size of the images to some extent.
- Lazy loading of images
Use JS to detect when an image is about to become visible on the page, and only then set its `src`.
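A minimal sketch of this with IntersectionObserver (the `data-src` attribute name and the split-out `applySrc` helper are illustrative choices, not a standard API):

```javascript
// Swap the real URL into src only when the image becomes visible.
// applySrc is split out so the swap logic can be tested with a plain object.
function applySrc(img) {
    if (img.dataset && img.dataset.src) {
        img.src = img.dataset.src; // this assignment starts the actual download
    }
    return img;
}

function observeLazyImages() {
    const observer = new IntersectionObserver((entries, obs) => {
        entries.forEach((entry) => {
            if (entry.isIntersecting) {
                applySrc(entry.target);
                obs.unobserve(entry.target); // each image only needs loading once
            }
        });
    });
    document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```

In the HTML, images would then carry `data-src` instead of `src` (plus a small placeholder if desired).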
- Responsive images
In addition to controlling CDN parameters to dynamically crop images, native responsive images can switch between different image resources in different environments:
<picture>
    <source srcset="banner_w1000.jpg" media="(min-width: 801px)">
    <source srcset="banner_w800.jpg" media="(max-width: 800px)">
    <img src="banner_w800.jpg" alt="">
</picture>
- Compress images
Compress image files: convert PNGs without transparent backgrounds to JPG, and shrink files further with lossless compression.
Where possible, serving WebP images reduces size even further.
- For simple effects, use CSS instead of images
Resource preloading
preload
Adding preload to a link tag causes the resource to be fetched early in the page-load lifecycle, instead of waiting until it is actually needed for rendering.
A detailed draft can be found here
<link rel="preload" href="style.css" as="style" onload="preloadFinished()">
<link rel="preload" href="main.js" as="script" onload="preloadFinished()">
Preload features the following:
- Loading and execution are separated: the resource is loaded ahead of time and executed when needed
- The resource is loaded whether or not it ends up being used
- preload takes an `as` attribute, which sets the correct resource load priority; for example, `as="style"` gets the highest priority, while `as="script"` gets low or medium priority
- You can define an `onload` event for the resource
- When preloading cross-origin resources, the `crossorigin` attribute must be added
- Preloading fonts without `crossorigin` causes the font file to be fetched twice
- Resources that are preloaded but never used trigger a warning in Chrome's console about 3 seconds after onload
- Preload vs. HTTP/2 server push
With HTTP/2 server push, the server knows a resource is needed as soon as it receives the request for the HTML, so it can push it directly to the client; with preload, the browser does not discover the preloaded files until it has received and scanned the HTML.
However, HTTP/2 cannot push third-party resources, and preload additionally lets the browser assign correct priorities to resource loads.
prefetch
Prefetch tells the browser which resources are likely to be needed by the next page, so the next navigation loads faster.
<link rel="prefetch">
In pages generated by Vue SSR, resources for the current page use preload, while the resources of other route pages use prefetch.
Do not mix prefetch and preload for the same resource; doing so causes it to be downloaded twice.
Font compression
Here are two tools:
- font-spider automatically detects the fonts and text referenced in a web page and generates a trimmed font file
- fontmin minimizes a font file, for example:
var Fontmin = require('fontmin');

var fontmin = new Fontmin()
    .src('fonts/Microsoft Yahei.ttf') // source font file on the server
    .dest('build/fonts') // directory for the generated font
    .use(Fontmin.glyph({
        text: 'Font compression', // set as needed: only these glyphs are kept
    }));

fontmin.run(function (err, files) { // generate the font
    if (err) {
        throw err;
    }
    console.log(files[0]); // a Buffer containing the generated font
});
Fontmin also provides a webpack plug-in; detailed instructions can be found here
Network optimization
Static resources use CDN
A content delivery network (CDN) is a set of Web servers distributed in multiple geographic locations.
The principles of CDN are as follows:
- When a user visits a website, the browser performs DNS resolution and then requests the resource from the resolved server IP
- If the site has a CDN deployed, the DNS resolution path changes:
- The local DNS queries the root, top-level-domain, and authoritative name servers and obtains the IP address of the global server load balancing system (GSLB)
- The local DNS then queries the GSLB, which determines the user's location from the local DNS IP address, selects a nearby server load balancing system (SLB), and returns the SLB's IP address to the local DNS
- Based on the resource and address requested by the browser, the SLB selects the optimal cache server to serve the content
- The cache server checks whether the resource is cached; if not, it fetches it from the origin server, returns it to the browser, and stores a copy in its cache
Reduce HTTP requests (for HTTP1.1)
An HTTP request has significant fixed overhead: when the body carries only a small amount of data, the relative cost of parsing headers and running the protocol grows.
We can cache Ajax requests and get duplicate requests directly from the cache, reducing the number of HTTP requests.
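A minimal sketch of such a cache: an in-memory map keyed by URL, with the fetch function injected so the idea works with any request library (this is an illustration, not a library API):

```javascript
// Cache GET requests: repeated calls for the same URL reuse the stored promise
// instead of issuing another network round trip.
function createCachedFetch(fetchFn) {
    const cache = new Map();
    return function cachedFetch(url) {
        if (!cache.has(url)) {
            cache.set(url, fetchFn(url)); // store the promise itself
        }
        return cache.get(url);
    };
}

// Usage (in the browser): const cachedFetch = createCachedFetch((url) => fetch(url));
```

Real implementations also need an eviction or expiry policy for data that can go stale.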
Using Http2
HTTP/2 has many advantages over HTTP/1.1, such as faster parsing, multiplexing, header compression, server push, and flow and priority control.
For details, see: From HTTP1 to HTTP3
Optimize the use of cookies
The advantage of cookies is good compatibility; they let the client exchange state with the backend without explicit parameters, enabling things like automatic login.
The disadvantages are:
- Older versions of Internet Explorer limit the number of cookies
- If the cookie domain is set improperly, cookie data is attached to every request
- Cookie read/write performance is very poor
The optimization method is as follows:
- Minimize the size of cookies used on your site
- Set a reasonable expiration time for cookies
- Serve static resources from a separate, cookie-free domain (cookie isolation)
Reducing DNS queries
DNS translates a domain name in the URL into the IP address of the host server.
DNS lookup process:
- Check to see if the browser cache exists
- View the local DNS cache
- Access the local DNS server
- ...
It usually takes 20-120ms for the browser to look up the IP address of a given URL.
TTL (Time to Live) indicates how long a DNS record returned by a lookup remains valid; once the TTL expires, the record is discarded.
The TTL value of the DNS cache is affected by several factors:
- The server can set a TTL on its DNS records; the local DNS cache uses this value to decide when to discard them. The TTL cannot be set very large, because that would slow down failover
- The browser's DNS cache has its own expiration time, independent of the OS-level DNS cache, and it is comparatively short: Chrome's, for example, is about one minute
- The number of DNS records a browser caches is limited; if you visit many sites with different domain names in a short period, older records are evicted
Therefore, for DNS optimization we should appropriately reduce the number of host names. However, too few host names limit parallel downloads (note: under HTTP/1.1); 2-4 host names is a more appropriate number.
Avoid redirection
Redirection is used to reroute users from one URL to another.
- 301: permanent redirect
- 302: temporary redirect
- 304: Not Modified (strictly a cache revalidation response, though it is often discussed alongside redirects)
For details, see the front-end interview FAQ 03: From HTTP1 to HTTP3
Page redirection delays the transfer of the actual HTML document, increasing the duration of the white screen.
When will redirection be used:
- Tracking internal traffic: redirects can track where users go after leaving the home page, although internal referer logs are the better tool for this
- Tracking outbound traffic: for links that leave the site, wrapping them in a 302-redirecting link solves the tracking problem
To enable Gzip
Gzip further reduces the size of front-end resource files.
The browser declares the compression encodings it supports in its request headers (Accept-Encoding). The server is configured with which file types to compress and which algorithm to use. When a request arrives, the server parses the headers and, if the client supports gzip, compresses the resource before returning it; the browser then decompresses it on its side.
How to enable Gzip is not covered here
Cache control
Browser cache can refer to the front-end interview Common 06: Browser cache
- Frequently changing resources:
Cache-Control: no-cache
- Resources that rarely change:
Cache-Control: max-age=31536000
The details will not be developed here
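Still, the two policies above can be sketched as a tiny server-side helper (the file-extension test is a hypothetical policy for illustration, not a standard):

```javascript
// Pick a Cache-Control header by resource type: hashed static assets are
// effectively immutable, while HTML must be revalidated on every visit.
function cacheControlFor(path) {
    if (/\.(js|css|png|jpg|woff2)$/.test(path)) {
        return 'max-age=31536000, immutable';
    }
    return 'no-cache';
}
```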
Build optimization
Today’s mainstream front-end projects have a build process with many optimization techniques
tree shaking
Tree shaking means removing code that is never used from JS files during bundling.
This feature is well supported by webpack 2+ and Rollup.
It is similar to dead code elimination (DCE), a common practice in programming language compilers.
Its general principle can be summarized as:
- ES6 modules allow static analysis, so which modules are loaded can be determined at compile time
- The bundler statically analyzes the program flow, determines which modules and variables are unused or unreferenced, and removes the corresponding code
In webpack, you can signal that a package is safe to tree-shake by configuring sideEffects in package.json:
{
  "name": "your-project",
  "sideEffects": false
}
Or list the files that do have side effects:
{
  "name": "your-project",
  "sideEffects": [
    "./src/some-side-effectful-file.js"
  ]
}
Rollup supports tree shaking by default.
Code compression
Bundlers such as webpack can compress code for production, reducing resource file size and improving front-end loading performance.
Compression is now built into the bundler and only needs to be enabled in the production configuration.
For example, setting mode: 'production' in the webpack configuration enables compression:
module.exports = {
mode: 'production'
};
Rollup requires the installation of the corresponding plug-in, such as Terser:
import { terser } from "rollup-plugin-terser";

export default {
    plugins: [
        terser({ compress: { drop_console: true } })
    ]
};
Bundle Splitting
The idea behind bundle splitting: if you ship one huge file and change a single line of code, users still have to download the entire file again; but if you split it into two files, users only download the changed file and take the other one straight from the cache.
From this perspective, bundle splitting is all about caching, so it makes no difference to first-time visitors.
Webpack can be easily configured to:
module.exports = {
    entry: path.resolve(__dirname, 'src/index.js'),
    output: {
        path: path.resolve(__dirname, 'dist'),
        filename: '[name].[contenthash].js',
    },
    optimization: {
        splitChunks: {
            chunks: 'all',
        },
    },
};
The result is a main.js plus a vendor chunk containing the third-party libraries: with this configuration, everything from node_modules ends up in vendors~main.js.
Or we could do something like this:
const path = require('path');
const webpack = require('webpack');

module.exports = {
    entry: path.resolve(__dirname, 'src/index.js'),
    plugins: [
        new webpack.HashedModuleIdsPlugin(), // so that file hashes don't change unexpectedly
    ],
    output: {
        path: path.resolve(__dirname, 'dist'),
        filename: '[name].[contenthash].js',
    },
    optimization: {
        runtimeChunk: 'single',
        splitChunks: {
            chunks: 'all',
            maxInitialRequests: Infinity,
            minSize: 0,
            cacheGroups: {
                vendor: {
                    test: /[\\/]node_modules[\\/]/,
                    name(module) {
                        // get the name. E.g. node_modules/packageName/not/this/part.js
                        // or node_modules/packageName
                        const packageName = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/)[1];
                        // npm package names are URL-safe, but some servers don't like @ symbols
                        return `npm.${packageName.replace('@', '')}`;
                    },
                },
            },
        },
    },
};
Here we have only briefly introduced the idea of bundle splitting with simple examples. For detailed principles, see: Understanding webpack bundle splitting in depth
Load on demand
The goal of loading on demand is to minimize the file size the user has to download on first access, by dynamically loading code that is not needed right away.
Take this code for example:
window.document.getElementById('btn').addEventListener('click', function () {
    // Load the show.js file after the button is clicked, and call the
    // function it exports once the file has loaded successfully
    import(/* webpackChunkName: "show" */ './show').then((show) => {
        show('Webpack');
    });
});
When webpack encounters a statement like this, it does the following:
- It creates a new chunk with ./show.js as the entry
- When execution reaches the import statement, it loads the file generated from that chunk
- import() returns a promise; once the file loads successfully, the content exported by show.js can be used in the promise's then callback
/* webpackChunkName: "show" */ gives the dynamically generated chunk a name, which makes the code easier to trace and debug. If you do not name it, the chunk gets the default name [id].js.
DllPlugin improves build speed
The DllPlugin and DllReferencePlugin split out bundles in a way that greatly improves build speed, by precompiling dependencies that rarely change.
Refer to webpack-dllplugin for details
SSR: server rendering
At present, front-end projects built with Vue/React generate their page views dynamically with JS and often need to load a complex runtime first, so first-screen rendering is slow. SSR moves this rendering to the server side, so the browser receives ready-made DOM content when it requests the page.
This is good not only for first-screen rendering but also for SEO.
The drawbacks are increased load on the server and a certain migration cost.
Code optimization
HTML Performance Optimization
HTML optimizations are mainly to standardize the use of tags, such as:
- Always close HTML tags
- Move script tags to the end of the HTML file, because JS blocks rendering of the content after it
- Reduce iframe usage
- Simplify id and class names
- Keep case consistent
- Remove whitespace
- Reduce unnecessary nesting
- Reduce comments
- Remove useless and empty tags
- Reduce use of deprecated tags
- Avoid empty img src attributes
CSS Performance Optimization
CSS performance is a broad category, with four main aspects:
- Loading performance: mainly about reducing file size, reducing render-blocking loads, and improving concurrency
- Selector performance: selectors have little real effect on overall performance; judge them instead from the perspective of consistency, maintainability, and robustness. See this article: githubs-CSS-Performance
- Rendering performance: the most important focus of CSS optimization. If the page renders with too much jank, check whether text-shadow is used, whether font anti-aliasing is enabled, how CSS animations are implemented, and whether GPU acceleration is used appropriately
- Maintainability and robustness: is the naming sound, is the structural hierarchy robust, are repeated styles abstracted
- Reduce repaints and reflows
JS performance optimization
There are more directions for JS performance optimization
From an engineering point of view:
- Remove functional code that is not being used
- Remove redundant dependent libraries
- Remove the common template code
From the perspective of memory usage:
- Avoid the Array and Object constructors; use literals
- Avoid unnecessary global variables
- Make reasonable use of client-side storage: localStorage, sessionStorage, cookies, etc.
- Reduce the work done inside loops
- Avoid unnecessary variable declarations
- Be careful with closures to avoid memory leaks
- Optimize long lists
- Avoid long-running JS: decompose tasks sensibly and defer expensive work
- Make good use of Web Workers, Service Workers, and similar APIs
- Use WebAssembly
- Use optimization techniques such as debouncing/throttling and tail recursion
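A minimal sketch of the debouncing/throttling technique mentioned above (illustrative implementations; production code often uses a library such as lodash):

```javascript
// Debounce: collapse a burst of calls into a single call that fires
// after `wait` ms of silence (e.g. resize or input handlers).
function debounce(fn, wait) {
    let timer = null;
    return function (...args) {
        clearTimeout(timer);
        timer = setTimeout(() => fn.apply(this, args), wait);
    };
}

// Throttle: allow at most one call per `wait` ms (e.g. scroll handlers).
function throttle(fn, wait) {
    let last = 0;
    return function (...args) {
        const now = Date.now();
        if (now - last >= wait) {
            last = now;
            fn.apply(this, args);
        }
    };
}
```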
Vue performance optimization tips
- Use functional components
- Split large child components
- Use local variables
- Use v-show instead of v-if where the element toggles frequently
- Use keep-alive to cache component DOM
- Use deferred components to delay and batch rendering
- Use time slicing to split up heavy rendering work
- Use non-reactive data wisely
- Use a virtual scrolling component
- ...
Wrapping up
Performance optimization runs deep; in this short article I can only list some techniques and introduce them briefly, as discussing each in depth would make it far too long. Keep accumulating and summarizing, and your techniques will keep getting better.
References
- Save your year-end KPI: Front-end performance optimization
- Change a point of view, webpage performance optimization
- 24 Suggestions for Front-end Performance Optimization (2020)
- HTTP/2 header compression technology introduction
- Web Front-end Performance Optimization Tutorial 06: Reduce DNS lookups and Avoid redirects