Other chapters in this series:
- HTML & CSS
- JavaScript Part 1
- JavaScript Part 2
- Vue
- React
- Network
- Performance

Performance
Reduce HTTP requests
Use HTTP/2
Use server-side rendering
Client-side rendering: the browser fetches the HTML file, downloads the JavaScript files as needed, runs them, generates the DOM, and renders the page.
Server-side rendering: the server returns a fully rendered HTML file, and the client only needs to parse the HTML.
- Advantages: fast first-screen rendering, good SEO.
- Disadvantages: more complicated configuration, more computing load on the server.
Here I use Vue SSR as an example to briefly describe the SSR process.
Client-side rendering process
- Visit a client-rendered website.
- The server returns an HTML file that contains resource import statements and an empty mount element (typically something like `<div id="app"></div>`).
- The client requests the resources from the server over HTTP. Once all necessary resources are loaded, it executes `new Vue()` to instantiate the app and render the page.
Server-side rendering process
- Visit a server-rendered website.
- The server looks up which resource files the current route's components need, fills their content into the HTML, prefetches the data for any Ajax requests and fills that in as well, and then returns the rendered HTML page.
- When the client receives the HTML page, it can start rendering it immediately. Meanwhile it loads the page's resources, and once all necessary resources are loaded, it executes `new Vue()` to instantiate the app and take over the page.
As you can see from the two processes above, the difference lies in the second step: a client-rendered website returns a bare HTML file directly, while a server-rendered website renders the page first and then returns the HTML file.
What is the benefit of this? A faster time-to-content.
Suppose your website needs to load four files, a, b, c, and d, to finish rendering, and each file is 1 MB.
A client-rendered site has to load those 4 files plus the HTML file to render the home page, about 4 MB in total (ignoring the size of the HTML file). A server-rendered site only has to load the already rendered HTML file, and such a file is usually not large, typically a few hundred KB (the HTML file loaded by my personal blog, which uses SSR, is about 400 KB). This is why server-side rendering is faster.
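To make the process concrete, here is a minimal Vue SSR sketch based on vue-server-renderer's basic usage; it deliberately omits the route-based resource lookup, data prefetching, and client-side takeover described above.

```js
// Minimal Vue 2 SSR sketch (assumes express, vue and vue-server-renderer are installed).
const express = require('express')
const Vue = require('vue')
const { createRenderer } = require('vue-server-renderer')

const app = express()
const renderer = createRenderer()

app.get('*', (req, res) => {
  // A fresh Vue instance is created per request on the server.
  const vm = new Vue({
    data: { url: req.url },
    template: '<div>The visited URL is: {{ url }}</div>'
  })

  renderer.renderToString(vm, (err, html) => {
    if (err) {
      res.status(500).end('Internal Server Error')
      return
    }
    // The browser receives fully rendered HTML and can start painting immediately.
    res.end(`<!DOCTYPE html><html><body>${html}</body></html>`)
  })
})

app.listen(8080)
```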
Place CSS at the top of the page and JavaScript files at the bottom
Use icon fonts (e.g. iconfont) instead of image icons
Make good use of caching, and don't load the same resource twice
To prevent the browser from requesting a file on every visit, we can control this behavior with the Expires or max-age headers. Expires sets an absolute time; before that time the browser will not request the file again and will use its cache instead. max-age is a relative time, so it is recommended over Expires.
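As a rough sketch (assuming an Express static server, which the original does not specify), a long max-age can be set like this:

```js
// Serve built assets with a one-year cache lifetime so repeat visits
// hit the browser cache instead of the network.
const express = require('express')
const app = express()

app.use(express.static('dist', {
  maxAge: '365d' // sent to the browser as Cache-Control: public, max-age=31536000
}))

app.listen(3000)
```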
But this raises a question: what happens when a file is updated? How do we tell the browser to request it again?
You can make the browser drop its cache and load the new resource by changing the URL referenced in the page.
The specific approach is to tie the resource URL to the file's content, so that only a change in the content changes the URL; this achieves precise cache control at the level of individual files. What can be tied to the file's content? A digest produced by a hashing algorithm: the digest corresponds one-to-one with the content, which gives us a cache-control basis accurate to the granularity of a single file.
Compress files
Compressing files reduces download time and gives a better user experience.
Thanks to webpack and Node, it is now very easy to compress files.
In webpack you can use the following plugins for compression:
- JavaScript: UglifyPlugin
- CSS: MiniCssExtractPlugin
- HTML: HtmlWebpackPlugin
In fact, we can do even better by using gzip compression. It is enabled by including gzip in the Accept-Encoding request header; of course, the server must support it as well.
Gzip is currently the most popular and effective compression method. For example, the app.js file built from one of my Vue projects was 1.4 MB, but only 573 KB after gzip, a reduction of nearly 60%.
Here are the webpack and Node configurations for enabling gzip.
Install the packages

```
npm install compression-webpack-plugin --save-dev
npm install compression
```
Webpack configuration

```js
const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  plugins: [new CompressionPlugin()],
}
```
Node configuration

```js
const compression = require('compression')
// register the compression middleware before other middleware and routes
app.use(compression())
```
Image optimization
(1). Image lazy loading
Don't set the image path up front; load the real image only when it enters the browser's visible area. This is lazy loading. For a website with many images, loading them all at once badly hurts the user experience, so lazy loading is necessary.
First, set up the image so that it will not load while it is not visible:

```html
<img data-src="https://avatars0.githubusercontent.com/u/22117876?s=460&u=7bd8f32788df6988833da6bd155c3cfbebc68006&v=4">
```
When it becomes visible, load the image with JavaScript:

```js
const img = document.querySelector('img')
img.src = img.dataset.src
```
The image now loads. The complete code can be found in the reference below; a rough sketch follows it.
References:
- The principle of lazy loading of web front-end images
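As a rough illustration, one common way to detect visibility is IntersectionObserver (the referenced article may implement it with scroll events instead):

```js
// Lazy-load every image that declares its real URL in data-src.
const lazyImages = document.querySelectorAll('img[data-src]')

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target
      img.src = img.dataset.src // swap in the real URL only when visible
      obs.unobserve(img)        // each image only needs to be loaded once
    }
  })
})

lazyImages.forEach(img => observer.observe(img))
```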
(2). Responsive images
The advantage of responsive images is that the browser automatically loads the appropriate image for the screen size.
Using `<picture>`:
```html
<picture>
  <source srcset="banner_w1000.jpg" media="(min-width: 801px)">
  <source srcset="banner_w800.jpg" media="(max-width: 800px)">
  <img src="banner_w800.jpg" alt="">
</picture>
```
Using `@media`:

```css
@media (min-width: 769px) {
  .bg {
    background-image: url(bg1080.jpg);
  }
}

@media (max-width: 768px) {
  .bg {
    background-image: url(bg768.jpg);
  }
}
```
(3). Adjust image sizes
For example, if you have a 1920x1080 image, show the user a thumbnail and display the full image only when the user hovers over it. If the user never hovers over the thumbnail, downloading the full image up front is wasted time.
So we can optimize with two images. Initially only the thumbnail is loaded; the large image is loaded when the user hovers over it. Another option is to lazy-load the large image: after all other elements have loaded, manually set the src of the large image to download it, as sketched below.
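A small sketch of both options (the `.thumb` element and `data-full` attribute are hypothetical names used for illustration):

```js
const thumb = document.querySelector('.thumb')
const fullSrc = thumb.dataset.full // URL of the 1920x1080 original

// Option 1: download the full-size image only when the user hovers.
thumb.addEventListener('mouseenter', () => {
  thumb.src = fullSrc
}, { once: true })

// Option 2: prefetch it after everything else has loaded,
// so displaying it later is instant.
window.addEventListener('load', () => {
  new Image().src = fullSrc
})
```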
(4). Reduce image quality
For JPG images, the difference between 100% quality and 90% quality is usually imperceptible, especially for background images. When I cut a background image in Photoshop, I often export it as JPG compressed to 60% quality and can barely tell the difference.
There are two ways to compress: the webpack plugin image-webpack-loader, or an online compression service.
Using the webpack plugin image-webpack-loader looks like this.
```
npm i -D image-webpack-loader
```
Webpack configuration
```js
{
  test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
  use: [
    {
      loader: 'url-loader',
      options: {
        // images smaller than the limit (10000 bytes) are inlined as base64
        limit: 10000,
        name: utils.assetsPath('img/[name].[hash:7].[ext]')
      }
    },
    // compress the images
    {
      loader: 'image-webpack-loader',
      options: {
        bypassOnDebug: true,
      }
    }
  ]
}
```
(5). Use CSS3 effects instead of images where possible
Many images can be drawn with CSS effects (gradients, shadows, and so on), and in these cases CSS3 is the better choice: the code is usually a fraction, sometimes a tenth, of the size of the image.
(6). Use WebP images
WebP's advantage lies in its superior compression algorithm, which produces smaller files with image quality that is indistinguishable to the naked eye. It supports both lossless and lossy compression, alpha transparency, and animation, and it converts well from JPEG and PNG.
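Not from the original article, but one common runtime check for WebP support uses the canvas API, so the page can decide whether to request .webp files or fall back to JPG:

```js
// Browsers that support WebP can encode a canvas to the image/webp MIME type.
function supportsWebP() {
  const canvas = document.createElement('canvas')
  if (!canvas.getContext || !canvas.getContext('2d')) return false
  return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0
}

// Hypothetical naming scheme: each image declares its base name in data-name.
const ext = supportsWebP() ? '.webp' : '.jpg'
document.querySelectorAll('img[data-name]').forEach(img => {
  img.src = `/images/${img.dataset.name}${ext}`
})
```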
Load code on demand with webpack, extract third-party library code, and reduce redundant code when converting ES6 to ES5
Lazy loading, or loading on demand, is a great way to optimize a website or application. It means splitting your code at logical breakpoints and loading a block only when the user has done, or is about to do, something that requires it. This speeds up the initial load of the application and lightens its overall weight, since some blocks may never be loaded at all.
Generate file names based on file contents, and import components dynamically to achieve on-demand loading
You can do this by configuring the filename attribute of output. Among its placeholder options is [contenthash], which creates a unique hash from the file's content; when the content changes, the [contenthash] changes with it.
```js
output: {
  filename: '[name].[contenthash].js',
  chunkFilename: '[name].[contenthash].js',
  path: path.resolve(__dirname, '../dist'),
},
```
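For the dynamic-import half, a typical sketch (assuming Vue Router, as in the SSR example above; the component paths are hypothetical) looks like this. webpack turns each `import()` into a separate chunk that is fetched only when the route is visited:

```js
import Vue from 'vue'
import VueRouter from 'vue-router'

Vue.use(VueRouter)

// Each dynamic import becomes its own chunk, loaded on first navigation.
const Home = () => import(/* webpackChunkName: "home" */ './views/Home.vue')
const About = () => import(/* webpackChunkName: "about" */ './views/About.vue')

export default new VueRouter({
  routes: [
    { path: '/', component: Home },
    { path: '/about', component: About }
  ]
})
```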
Extracting third-party libraries
Since third-party libraries are generally stable and rarely change, it is a good idea to extract them into a separate chunk for long-term caching. This uses the cacheGroups option of webpack 4's splitChunks.
```js
optimization: {
  runtimeChunk: {
    name: 'manifest' // split the webpack runtime code into a separate chunk
  },
  splitChunks: {
    cacheGroups: {
      vendor: {
        name: 'chunk-vendors',
        test: /[\\/]node_modules[\\/]/,
        priority: -10,
        chunks: 'initial'
      },
      common: {
        name: 'chunk-common',
        minChunks: 2,
        priority: -20,
        chunks: 'initial',
        reuseExistingChunk: true
      }
    }
  }
},
```
- test: controls which modules are matched by this cache group. If omitted, all modules are selected by default. Accepted types: RegExp, String, and Function.
- priority: the extraction weight; a larger number means a higher priority. A module may satisfy the conditions of several cacheGroups, and the one with the highest priority decides where it is extracted.
- reuseExistingChunk: whether to reuse an existing chunk. If true, no new chunk is created when the current chunk contains modules that have already been extracted.
- minChunks (default: 1): the minimum number of chunks that must share a module before it is split out.
- chunks (default: async): valid values are initial, async, and all.
- name (the name of the split chunk): a string or a function (a function lets you name the chunk based on conditions).
Reduce redundant code when converting ES6 to ES5
To implement the same functionality as the original code, Babel-transpiled code relies on helper functions. For example:
```js
class Person {}
```
Will be converted to:
"use strict";
function _classCallCheck(instance, Constructor) {
if(! (instanceinstanceof Constructor)) {
throw new TypeError("Cannot call a class as a function"); }}var Person = function Person() {
_classCallCheck(this, Person);
};
Copy the code
Here _classCallCheck is a helper function. If classes are declared in many files, many copies of such helpers are generated.
The @babel/runtime package contains all the helper functions that are needed, and @babel/plugin-transform-runtime rewrites the code to import them from @babel/runtime instead of inlining them:
"use strict";
var _classCallCheck2 = require("@babel/runtime/helpers/classCallCheck");
var _classCallCheck3 = _interopRequireDefault(_classCallCheck2);
function _interopRequireDefault(obj) {
return obj && obj.__esModule ? obj : { default: obj };
}
var Person = function Person() {(0, _classCallCheck3.default)(this, Person);
};
Copy the code
Here the classCallCheck helper is no longer compiled into the file; instead it is referenced from @babel/runtime/helpers/classCallCheck.
Installation

```
npm i -D @babel/plugin-transform-runtime @babel/runtime
```
Use it in the .babelrc file:

```json
"plugins": [
  "@babel/plugin-transform-runtime"
]
```
Reduce reflows and repaints
Browser rendering process
- Parsing the HTML generates a DOM tree.
- Parsing the CSS generates a CSSOM rule tree.
- JS is parsed and executed; it can manipulate the DOM tree and the CSSOM rule tree.
- The DOM tree and the CSSOM rule tree are combined into a render tree.
- The render tree is traversed to lay out the page, calculating the position and size of each node.
- The browser sends the data of each layer to the GPU, which composites the layers and displays them on the screen.
Reflow
When the position or size of a DOM element changes, the browser has to regenerate the render tree; this process is called reflow.
Repaint
After the render tree is regenerated, each of its nodes is drawn to the screen; this process is called repaint. Not every change triggers a reflow: changing the font color, for example, only triggers a repaint. Remember: a reflow causes a repaint, but a repaint does not cause a reflow.
Reflow and repaint are both expensive operations, because the JavaScript engine thread and the GUI rendering thread are mutually exclusive and only one of them can run at a time.
What actions cause a reflow?
- Adding or removing visible DOM elements
- An element's position changes
- An element's size changes
- Content changes
- The browser window is resized
How to reduce reflows and repaints?
- When changing styles with JavaScript, avoid writing style properties directly; change styles by swapping classes instead.
- If you need to perform a series of operations on a DOM element, take it out of the document flow, modify it, and then put it back into the document. Hidden elements (display: none) or a DocumentFragment are recommended for this (see the sketch below).
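A small sketch of both ideas (the `#list` container and the `highlighted` class are hypothetical):

```js
// Batch DOM insertions with a DocumentFragment so the page reflows once
// instead of once per inserted node.
const list = document.querySelector('#list')
const fragment = document.createDocumentFragment()

for (let i = 0; i < 100; i++) {
  const item = document.createElement('li')
  item.textContent = `item ${i}`
  fragment.appendChild(item) // no reflow yet: the fragment is off-document
}

list.appendChild(fragment) // a single reflow/repaint happens here

// Prefer swapping a class over writing inline style properties one by one.
list.classList.add('highlighted')
```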
Reduce the complexity of CSS selectors
(1). Browsers read selectors from right to left.
Consider this example:
```css
#block .text p {
  color: red;
}
```
- Find all p elements.
- For each element found in step 1, check whether it has an ancestor with the class name text.
- For each element found in step 2, check whether it has an ancestor with the id block.
(2). CSS selector specificity
Inline styles > ID selectors > Class selectors > Tag selectors
From the two points above, we can draw some conclusions.
- The shorter the selector, the better.
- Try to use higher-priority selectors, such as ID and class selectors.
- Avoid the wildcard character *.
Finally, I should point out that, according to my research, there is no need to obsess over optimizing CSS selectors, because the performance difference between the slowest and fastest selectors is very small.