Anyway, front-end performance optimization is important: if you never learn it, how are you going to level up to that 20K+ salary?!

I have been interested in performance optimization for a long time, but unless you go back to the projects you are responsible for and actually practice it, there is always that feeling of "knowledge from books alone is shallow."

From the early basics of putting CSS in `<head>` and JS before `</body>` and using sprite images, to compressing and merging static resources and replacing small icons with iconfont, and more recently to enabling gzip compression and setting cache-related HTTP header fields…

Gzip compression, setting ETags, and so on have long been covered in the two High Performance Web Sites guides, but I always assumed these were back-end concerns we front-enders didn't need to pay much attention to. Recently, as my understanding of front-end/back-end separation filled out and the deployment of the whole project and its resource requests became clearer and clearer, it suddenly hit me: with front-end/back-end separation, the front end deploys and serves its own resources!

Below I'll describe the optimization methods I have practiced myself, organized by the following aspects:

The process by which a browser renders a page

Before optimizing anything, the first thing to understand is: what to optimize, and where to start. The front-end page is rendered in the browser, so how does the browser render a page? How does it handle static CSS and JS resources? I won't go into the details here; there are plenty of resources on the web, and I've written one of my own, Website Performance Optimization — CRP, a translation of Google's documentation.

Once you understand how the browser renders the page, you can understand why CSS is placed in `<head>`, why JS is placed before `</body>`, and JS loading optimizations such as async and defer.
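As a quick illustration of those placements (file names here are hypothetical):

```html
<head>
  <!-- CSS in <head>: the browser can build the CSSOM early and avoid a flash of unstyled content -->
  <link rel="stylesheet" href="main.css">
  <!-- defer: download in parallel, execute in document order after parsing finishes -->
  <script defer src="app.js"></script>
  <!-- async: download in parallel, execute as soon as it arrives (order not guaranteed) -->
  <script async src="analytics.js"></script>
</head>
<body>
  <!-- ...page content... -->
  <!-- plain (blocking) scripts go before </body> so they don't block the initial render -->
  <script src="legacy.js"></script>
</body>
```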

Reducing HTTP requests

  • Combine and bundle CSS/JS files
  • Replace small icons with iconfont: each icon is a single DOM node whose size, color, and so on can be set in CSS, which is very convenient. I suggest the front end maintain the font package itself. Whenever a new icon is needed, have the designer provide the corresponding SVG file, then go to icomoon.io, import the existing selection, and add the new icon. I always thought iconfont had to be monochrome, but it can be multicolor: just use multiple paths in the SVG, and designers can work that out. Once the font is generated, the front end can reference it as usual (a multicolor glyph just produces a few extra tags when referenced).
  • Use Base64 images: for small images with complex colors, iconfont is not appropriate. Convert them to Base64 (note they are then not cached separately) and embed them directly in the `src` attribute, for example by setting a `limit` with webpack's url-loader.
  • Use sprite images: setting the background position and size is a hassle, and sprite sheets are hard to maintain. Sprites seem to be used less and less, having been largely replaced by iconfont.

Reduce the volume of static resources

  • Compress static resources: merged and bundled JS and CSS files are generally large, and so are some images, so they must be compressed. In front-end/back-end-separated projects, both gulp and webpack have toolkits for this. Individual images can sometimes be handled separately; I personally often use tinypng.com for online compression.
  • Write efficient CSS: code-level optimization can get very detailed, and developers at different levels will certainly write different CSS, so I won't analyze it further here. But why single CSS out? Because CSS preprocessors (Less, Sass, Stylus, etc.) are used in nearly every project these days, which leads some junior front-enders to abuse nesting: 5 or 6 levels deep, even 7 or 8. This not only slows down the browser's selector matching but also produces a certain amount of redundant bytes. It should be corrected and pointed out; the general recommendation is to nest at most 3 levels. "Writing Efficient CSS Selectors" is a recommended article on the topic.
  • Enable gzip compression on the server: a big win that I only learned about recently, and it is really powerful. Typical CSS and JS files compress by 60-70%, and of course the compression level is configurable. If the front end is deployed with Node and Express as the server, gzip can be enabled with the compression middleware:

    // server.js
    var express = require('express');
    var compress = require('compression'); // gzip middleware
    var app = express();
    app.use(compress()); // compress all compressible responses

    For a typical SPA, if the Node server's job is simple, say just forwarding API requests, many people prefer to use Nginx as the server instead: Nginx handles proxying, compression, caching, and so on better. However, most front-enders are new to Nginx, so to practice the technique I use the Node stack I'm familiar with; for real production deployment, leave it to the people who specialize in it.
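For reference, the Nginx equivalent is only a few lines of config (the values below are common defaults I chose for illustration, not from the original article):

```nginx
# nginx.conf (fragment): serve text assets gzipped
gzip            on;
gzip_comp_level 5;      # 1-9; higher = smaller output, more CPU
gzip_min_length 1024;   # skip responses too small to benefit
gzip_types      text/css application/javascript application/json image/svg+xml;
```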

Use the cache

Setting cache-related fields in the HTTP headers takes the optimization a step further.

Express also has settings for serving static resources that are easy to overlook:
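A sketch of those settings (the option names `etag`, `lastModified`, and `maxAge` are real `express.static` options; the `public` directory is my own example):

```javascript
// Hypothetical static-file setup; in a real app these options are
// passed to express.static(), as in the commented line below.
const staticOptions = {
  etag: true,          // send an ETag validator with each file
  lastModified: true,  // send a Last-Modified validator
  maxAge: '1d',        // Cache-Control: max-age=86400 (one day)
};

// app.use(express.static('public', staticOptions));
```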

You can set ETag, maxAge, and so on; going further, there is the difference between a 200 cache hit and a 304 cache hit:

200 OK (from cache) means the browser used its cache directly, without asking the server at all. 304 Not Modified means the browser revalidated with the server (a conditional request) before using its cache.

For related discussion, see this Zhihu thread: how does Alibaba Cloud storage make browsers always serve cached images with 200 (from cache)?

Out of memory

This kind of optimization varies from problem to problem; the most important thing is to get good at using the Performance and Memory panels in Chrome DevTools to analyze and locate the problem.

So far, I have only encountered a memory overflow once. A colleague used ECharts to draw a candlestick (K-line) chart, but there was a problem in the JS logic: on every click event the canvas was re-rendered, so memory grew steadily. I rewrote the JS logic, made some optimizations to the canvas usage, and fixed the bug.
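The shape of that fix, reduced to a sketch (everything below is hypothetical, not the actual ECharts code): create the expensive chart object once and reuse it, instead of re-creating it on every click.

```javascript
// Cache the chart instance so repeated clicks reuse it, instead of
// allocating a new canvas plus listeners each time (the leak in the bug).
let chartInstance = null;
let initCount = 0; // for illustration: counts how often we really init

function initChart() {
  initCount += 1;
  // In the real code this was roughly echarts.init(dom) + setOption(...).
  return { setOption(opts) { this.opts = opts; } };
}

function getChart() {
  if (chartInstance === null) {
    chartInstance = initChart(); // expensive, so do it at most once
  }
  return chartInstance;
}

// Simulated click handler: update the existing chart, don't rebuild it.
function onClick(opts) {
  getChart().setOption(opts);
}
```

The same pattern applies to any heavyweight object tied to DOM events: initialize once, update on each event.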

I don't have much experience with this kind of analysis yet; I'll practice it as I run into problems later.


Performance optimization is accumulated bit by bit: you hit a problem in a project, analyze and optimize, solve it, and pick up a lot of knowledge along the way. The above are the optimization methods I have used as a front-end developer. To the experts they may not be worth mentioning, but for newcomers they should still be a useful reference.

Some advanced optimizations I haven't practiced yet, such as domain sharding and the finer details of infinite-scroll optimization; I'll keep learning.

Endless scrolling complexity: I've been looking at this infinite-scroll optimization for a long time and still haven't figured it out 😓