Preface

This article covers optimization strategies from multiple angles, and tries to keep every strategy practical, simple, and effective. If you find my writing useful, feel free to like it ~

Optimizing requests

For us front-end engineers, there is relatively little we can do in this area; it is usually handled by the company's ops team. Still, it doesn't hurt to know.

First, we can use HTTP headers to cache content: Cache-Control, ETag, and Last-Modified. (Click the corresponding entry to jump to the MDN reference.)

For example, for our static resources, we could consider setting this in the response header:

Cache-Control:public, max-age=31536000

It is worth noting that max-age is measured from the time the response was fetched: the cached copy counts as fresh until that many seconds have passed. For the exact meaning of these three headers, refer to MDN; here we focus on how they are applied in practice.
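The freshness check the browser performs can be sketched in a few lines (`isFresh` is an illustrative helper, not a browser API):

```javascript
// Sketch of the browser's freshness check: a cached response may be
// reused without contacting the server while its age is below max-age.
function isFresh(responseTimeMs, nowMs, maxAgeSeconds) {
  const ageSeconds = (nowMs - responseTimeMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}

const oneYear = 31536000; // the max-age value we set above
isFresh(Date.now() - 60 * 60 * 1000, Date.now(), oneYear); // true: only an hour old
```

Once the age exceeds max-age, the browser goes back to the server to revalidate or refetch.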

In our current project, suppose index.js contains the code below and public/index.html is any HTML file. If we run node index.js in that directory and open port 3000 in the browser, we can see in the Network panel that our Cache-Control settings take effect: the HTML response comes back with no-cache.

const express = require('express')
const path = require('path')
const serveStatic = require('serve-static')

const app = express()

app.use(serveStatic(path.join(__dirname, 'public'), {
  setHeaders: setCustomCacheControl
}))

app.listen(3000)

function setCustomCacheControl (res, path) {
  if (serveStatic.mime.lookup(path) === 'text/html') {
    // HTML entry files should always be revalidated
    res.setHeader('Cache-Control', 'no-cache')
  }
}

For static resources that rarely change, such as images, downloadable spreadsheets, and base CSS, you can cache them for a long time via Cache-Control.

However, we need to be careful with resources that change frequently. Once they are cached, releasing a new version can leave users on stale files that the browser never re-requests. When I started working, our solution was to append a query suffix such as ?v=100 to every link. The company didn't use a bundler back then, so we had to edit the files by hand every time, which was extremely tedious.

Nowadays we generally don't worry about this, because the bundler does it for us. If the file produced by the last build was called app.2a34hd2fH3h.js, then whenever its contents change the bundler emits a new file name, and the browser requests the new file. In other words, it is fine to cache these files for a long time.

That leaves entry files such as index.html, whose file name usually does not change. How do we handle those?

In this case, we usually set Cache-Control to no-cache. This does not mean the cache is skipped entirely: the browser revalidates with the server using the ETag and Last-Modified headers mentioned above, and if the server concludes the file has not changed, it answers 304 and the cached copy is used without re-downloading the body. If you want to bypass the cache completely every time, use no-store instead.

A CDN is another way to optimize HTTP requests. We can use a CDN to host third-party libraries and image resources. Compared with our own server, a CDN dynamically picks the node nearest to the user, which shortens the round-trip time (RTT). A request that takes 1 s against our own server might take only 0.2 s through a CDN.

In addition, some CDN services can process images for us: by adding flags to the image request we can get back a compressed version (I have used Tencent Cloud's service for this). We'll cover the details in the image-optimization section.

Cookies are familiar to everyone: they are set by the Set-Cookie response header, and once set, every subsequent request carries them. If too much content is stored in them, every request pays that cost.

We can also use Gzip compression. Gzip is something you run into every day, though some of you may never have noticed it. With Gzip, we reduce the size of the transferred files; when the browser receives a compressed response, it decompresses it automatically and parses it as usual.

If you want to use Gzip compression in Express, you need to introduce a middleware:

const compression = require('compression');

// Other middleware code omitted
// Compress all responses
app.use(compression());

Suppose we have a request that returns a long string:

app.get('/longText', (req, res) => {
  const text = 'a very long text';
  res.send(text.repeat(10_000));
})

Without Gzip, about 160 KB is transferred. With Gzip enabled, only about 644 B goes over the wire, roughly 0.4% of the original size. The comparison can only be described as dramatic.

One final piece of this section on HTTP optimization: a quick comparison of HTTP/1.x, HTTP/2, and HTTP/3.

HTTP/2 was developed primarily to address the performance problems of HTTP/1.x. We saw above how dramatic Gzip compression of the body can be, but HTTP/1.x cannot compress request headers; HTTP/2 adds header compression.

At the same time, HTTP/2 allows multiplexing many requests over a single TCP connection. Under HTTP/1.x, each connection carries only one request at a time, so browsers cap the number of parallel connections per origin (Chrome allows six).

If you want to learn more about HTTP/2, you can read this article

Currently, HTTP/2 is supported by all major browsers.

HTTP/2 is a significant performance improvement over HTTP/1.x, but one problem remains.

As mentioned above, its multiplexing feature lets multiple requests be sent and received simultaneously on a single connection. But TCP delivers bytes strictly in order, so if one packet is lost, every stream on that connection stalls until the packet is retransmitted. This is TCP head-of-line blocking.

HTTP/3 solves this problem by moving from TCP to UDP (via the QUIC protocol): a lost packet only stalls its own stream and does not affect the others.

However, HTTP/3 support is not particularly good at this point, so we’ll just wait and see.

Optimizing images

If we cannot change the code, how do we make the page load faster? Optimizing images is a good choice.

At present, the images in our company's older projects are stored locally and used exactly as handed over by the UI designer, without any processing; images of several hundred KB are everywhere. That is probably the worst possible approach.

To optimize images, we first need to choose the right format.

Images fall into two main categories: vector and bitmap. A vector image is built from points, lines, and polygons, so it does not lose quality when scaled up, but it struggles with complex, detailed pictures. For those we have to choose a bitmap; bitmaps do degrade when enlarged, so we may need to prepare several resolutions for different scenarios.

The common vector format is SVG; common bitmap formats are PNG and JPEG. For icons and logos, SVG is a good choice. IconFont makes hosting SVG very convenient, and its icon library is rich; for everyday use it is more than enough.

If you use IconFont, skip the Unicode and Font class modes, which are cumbersome to work with; use the Symbol mode instead.

Where SVG is not an option, consider how high the quality requirements are. If high, go with PNG; if low, a smaller JPEG will do. PNG files can get quite large, though, and if your target audience runs newer browsers, consider WebP, which is typically smaller than the older formats at comparable quality.

There are npm packages to help us here: imagemin-webp can convert images in older formats to WebP.

If you think WebP is great but still need to support older browsers, consider this:

<picture>
  <source type="image/webp" srcset="flower.webp">
  <source type="image/jpeg" srcset="flower.jpg">
  <img src="flower.jpg" alt="">
</picture>

When I was building a mini program, the review report flagged our GIF images. I didn't pay attention at first, but the submission was later rejected for exactly this reason, so I had to fix it. It is no exaggeration to say some of those GIFs were several MB each……

A common way to optimize GIFs is to use the video tag. We can use FFmpeg to convert GIFs into a video format. MP4 is the first format that comes to mind, but there is a better one called WebM, which is smaller than MP4, though not every browser supports it.

In order to be compatible with older browsers, we also adopt the following strategy:

<video autoplay loop muted playsinline>
  <source src="my-animation.webm" type="video/webm">
  <source src="my-animation.mp4" type="video/mp4">
</video>

After choosing the image format, we can also consider lazy loading. Some students may think this is complicated, but it is actually very easy to add: just put loading="lazy" on the img tag. With this attribute, the browser only requests an image once it gets close to the viewport.
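A minimal example (the file name is a placeholder):

```html
<img src="flower.jpg" loading="lazy" width="400" height="300" alt="flower">
```

Specifying width and height alongside it lets the browser reserve space before the image loads, which avoids layout shift.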

However, that old tumor IE does not support this attribute. Writing it does no harm, though: browsers that don't support it simply ignore it. To be honest, I have no desire to support IE at all, and no strategy for doing so either.

Another way to optimize images is a CDN. Generally speaking, we can get different qualities and sizes of an image by adding query parameters to its link, depending on the provider. For example: www.some-cdn.com/mysteryven…. This is just to show the idea; the exact syntax depends on your provider's documentation.

Optimizing JavaScript files

We approach this part mainly from the bundler's perspective. As with the two sections above, we can minify JS as well; students who use Webpack may have used TerserWebpackPlugin.

There are also small changes we can make in how we write code. For example, when we use Lodash, importing it like this pulls the entire library into the bundle:

import _ from  'lodash';

But if we only use one method, we can write it like this:

import cloneDeep from 'lodash/cloneDeep'

With this small change, the bundle includes only the code we actually use instead of the whole library (with lodash-es and named imports, the bundler's tree-shaking achieves the same effect).

In addition, to avoid a single huge JS file blocking first-screen rendering, we can consider code splitting.

Optimizing CSS Files

The first thing that comes to mind is minification. Beyond that, I also found a very useful tool called Tailwind CSS, which many of you probably know. It predefines utility classes for common styles, so we hardly have to write CSS at all:

<div class="w-32 ml-5"> </div>

Why mention it here? Not only does it let you style elements without leaving the HTML, it also cuts down on repetitive CSS as usage grows, which means much smaller CSS files. Some of you may not accept it at first; neither did I. After all, you lose semantic class names, and some elements end up with a long list of classes. In practice, though, the approach works well where style requirements are not especially demanding, and for areas that need fine-grained control you can still write separate CSS.

If you use VS Code or WebStorm, you can install the Tailwind extension to get auto-completion, which makes development very smooth.

In addition, there is a CSS-in-JS solution based on it, which I did not adopt, but interested students can read about it.

Optimizations when writing JS code

In this section, I'll introduce one of the less common optimizations, one I found amazing even after I understood it.

Do you think it’s possible to optimize code like this?

let nums = [/* ... */]
for (let i = 0; i < nums.length; i++) {
    process(nums[i])
}

Not only can it be optimized, the improvement can be significant.

In this code, the loop body is just a single call to process. Even a trivial loop body adds up when the iteration count is very high, because the loop machinery itself (the condition check and increment) has a small cost each time around. The optimization strategy here is to unroll the loop: do more work per iteration so fewer iterations are needed. This technique is known as Duff's device.

Originally, our code might look something like this:

var nums = new Array(10_000_000).fill(0).map((i, index) => index);

function process(item) {
  JSON.stringify(item);
}

console.time('Ordinary loop')
for (let i = 0; i < nums.length; i++) {
  process(nums[i]);
}
console.timeEnd('Ordinary loop')

With Duff's device, the code becomes a bit more complicated:

console.time("Duff's device")

// Process the leftover items first, then the rest in groups of eight.
var iterations = nums.length % 8;
var i = nums.length - 1;

while (iterations--) {
  process(nums[i--])
}

iterations = Math.floor(nums.length / 8);

while (iterations--) {
  process(nums[i--])
  process(nums[i--])
  process(nums[i--])
  process(nums[i--])
  process(nums[i--])
  process(nums[i--])
  process(nums[i--])
  process(nums[i--])
}

console.timeEnd("Duff's device")

It packs as much work as possible into each pass through the loop, and the performance comparison may surprise some of you: in my test it was nearly 90% faster.

That said, the gap is much less obvious at low iteration counts; with the 10 iterations I tested, the ordinary loop was actually faster. This strategy is best reserved for loops with many iterations.
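If you adopt this pattern, it is worth sanity-checking that the unrolled loop visits every element exactly once (`processAll` here is an illustrative wrapper around the code above; note that it walks the array back to front, which is fine as long as iterations are independent):

```javascript
// Duff's device wrapped in a function: leftover items first, then 8 per pass.
function processAll(items, process) {
  let iterations = items.length % 8;
  let i = items.length - 1;
  while (iterations--) {
    process(items[i--]);
  }
  iterations = Math.floor(items.length / 8);
  while (iterations--) {
    process(items[i--]); process(items[i--]);
    process(items[i--]); process(items[i--]);
    process(items[i--]); process(items[i--]);
    process(items[i--]); process(items[i--]);
  }
}

const seen = [];
processAll([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], x => seen.push(x));
console.log(seen); // [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]: every index once, in reverse
```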

Optimizations when using React

Common optimization strategies include virtual lists, shouldComponentUpdate, useMemo, and useCallback.

Joking aside, long lists really do hurt React. Fortunately, virtual lists are not complicated to implement, and there are many good implementations in the community. Sometimes, depending on your usage scenario, beyond using a virtual list you may also want to hand each item's state to its own component to maintain.
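The core of a virtual list is simple arithmetic: given the scroll offset, compute which slice of items actually needs DOM nodes (`visibleRange` is an illustrative helper; libraries such as react-window do this calculation plus the rendering for you):

```javascript
// For fixed-height rows: which items [start, end) should be rendered,
// with a few extra "overscan" rows above and below to avoid flicker.
function visibleRange(scrollTop, viewportHeight, rowHeight, total, overscan = 3) {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(total, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { start, end };
}

visibleRange(0, 600, 30, 10000);    // { start: 0, end: 23 }
visibleRange(3000, 600, 30, 10000); // { start: 97, end: 123 }
```

However many items the list holds, only a couple of dozen rows exist in the DOM at any moment; the rest are represented by a spacer element sized to total * rowHeight pixels.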

Some students might use a changing key to force a component to remount:

<Modal key={Math.random()}>
    ...
</Modal>

Sometimes this Hack works, but in my experience it introduces more potential problems than it solves.

In general, React makes it hard to write code with significant performance problems. For a detailed treatment of these topics, the Optimizing Performance section of the official documentation covers them well; if you are interested, go read it.

Afterword

Today I focused on optimizing requests and images; the rest I only skimmed. I hope this article differs from what you've read elsewhere, and above all that it brings you something of value.

A few months ago, I did several tasks to optimize web page performance, some as simple as caching certain values so the next query reads them from the cache. The net effect: certain high-frequency operations do feel faster, and everyone is happy with that.

In truth, this is the "clever" approach: optimize a few hot spots and the effect looks outsized, and results visible in the short term are easy to account for to product and leadership. But it is not sustainable in the long run and does not address the underlying performance problems; it is just patching one hole after another, and before long the same problems resurface.

When Shang Yang went lobbying, Duke Xiao of Qin made the same kind of choice. What ruler could sit on the throne for decades without demanding immediate results? Who doesn't want to make a name during their own reign? Who can keep waiting patiently for flowers to bloom with no future in sight? From their standpoint, such a choice is entirely understandable. As engineers, though, we should still hold on to higher pursuits; otherwise, what distinguishes us from a salted fish? Perhaps one day we will meet our own King of Zhou.