The first screen is the first thing a user sees, so its importance is self-evident. How to speed up its loading is an important lesson.

This article describes the author's experience optimizing the loading speed of his own personal blog.

Address: http://biaochenxuying.cn

1. The speed experience that users expect

In August 2018, the White Paper 4.0 on Baidu Mobile Search Landing Page Experience, released by the Baidu Search Resource Platform, stated that the first-screen content of a page should load within 1.5 seconds.

Some may wonder: why within 1.5 seconds? What are some ways to speed up loading? The following will answer these questions for you!

In the era of mobile Internet, users have higher and higher requirements for the opening speed of web pages. According to the research of Baidu User Experience Department, the relationship between page abandonment rate and page opening time is shown in the figure below:

According to the research results of the Baidu User Experience Department, the page load time that ordinary users expect and can accept is under 3 seconds. If a page loads too slowly, users lose patience and leave, which is a big loss for both users and webmasters.

To safeguard user experience and give excellent sites more exposure to users, the Baidu Search Resource Platform launched the "Lightning Algorithm" in early October 2017.

The specific content of lightning algorithm is as follows:

If the first screen of a mobile web page opens within 2 seconds, the page receives preferential treatment: a better page evaluation and a traffic boost in mobile search. Meanwhile, pages whose first screen loads very slowly (3 seconds or more) are demoted in mobile search.

2. Analyze the problem

Before optimization, the first screen took about 7-10 seconds, which was painfully slow.

To analyze the problem, let's first take a look at the Network panel:

Main issues:

  • The article list interface took 4.42 seconds
  • The other back-end interfaces were also slow
  • In addition, static files such as JS and CSS were very large, and their request times were also long

I also used Lighthouse to test and analyze my site.

Lighthouse is an open-source automated tool for improving the quality of web applications. You can run it as a Chrome extension or from the command line. Give Lighthouse a URL to audit, and it runs a series of tests against the page, then generates a report on the page's performance.

Before optimization:

The top row shows the scores for Performance, PWA (Progressive Web App), Accessibility, Best Practices, and SEO.

Below it, each performance metric is broken down with detailed measurements.

Take a look at Lighthouse’s recommendations for performance issues, and how much time each tuning action is expected to save us:

As can be seen from the above, the main problems are:

  • The images are too large
  • Too many images are loaded up front

Knowing what the problem is is half the battle, so start optimizing.

3. The path to optimization

There are many ways to optimize web page speed; this article covers only the methods actually used.

3.1 Front-end Optimization

The front end of this project uses React and Ant Design, but Webpack was still on 3.8.x.

3.1.1 Webpack packaging optimization

Since Webpack 4 does a lot of packaging optimization out of the box, such as tree-shaking, I re-scaffolded the project with the latest create-react-app and migrated the code over. All dependencies are now on their latest stable versions, and Webpack has been upgraded to 4.28.3.

In a project created with the latest create-react-app, most of the configuration is already good; I made only two changes.

    1. The packaging configuration: modify this line of webpack.config.js:

```js
// Source maps are resource heavy and can cause out of memory issues for large source files.
const shouldUseSourceMap = process.env.GENERATE_SOURCEMAP !== 'false';

// change it to:
const shouldUseSourceMap = process.env.NODE_ENV === 'production' ? false : true;
```

In production, with source maps removed, the static files are much smaller: from 13 MB down to 3 MB.

    2. We also changed the size limit for image packaging, so that any image under 40 KB is inlined as base64:

```js
{
  test: [/\.bmp$/, /\.gif$/, /\.jpe?g$/, /\.png$/, /\.svg$/],
  loader: require.resolve('url-loader'),
  options: {
    limit: 40000, // changed from the default 10000 to 40000
    name: 'static/media/[name].[hash:8].[ext]',
  },
},
```
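Why does the limit matter? Inlining an image as base64 saves a request but grows the payload by about a third, so very large images should stay as separate files. A quick standalone check (the numbers are illustrative, not from the project):

```javascript
// Base64 encodes every 3 bytes as 4 characters, so inlined
// assets are ~33% larger than the original binary.
const raw = Buffer.alloc(30000); // stands in for a hypothetical 30 KB image
const inlined = raw.toString('base64');
console.log(raw.length, inlined.length); // 30000 40000
```

This is the trade-off behind the 40 KB cutoff: below it, the saved round trip outweighs the size inflation.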

3.1.2 Remove useless files

For example, files that seemed useful before turned out to be unused later, so I commented them out or deleted them, such as the home module in the reducers:

```js
import { combineReducers } from 'redux'
import { connectRouter } from 'connected-react-router'
// import { home } from './module/home'
import { user } from './module/user'
import { articles } from './module/articles'

const rootReducer = (history) => combineReducers({
  // home,
  user,
  articles,
  router: connectRouter(history),
})
```

3.1.3 Image processing

  • I used Photoshop to convert some static images to a new format or compress them; the logo, for example, went from 111 KB down to 23 KB.

  • The images in the home-page article list were switched to lazy loading.

I didn't want to pull in a plug-in just for lazy loading, so I decided to implement it myself. I read some lazy-loading code online and, adapting it to this project, implemented an image lazy-loading function, including throttling and debouncing.

The code is as follows:

```js
// fn is the event callback, delay is the interval threshold
function throttle(fn, delay) {
  // last is when the callback last fired, timer holds the trailing timer
  let last = 0, timer = null;
  // the throttled result is returned as a function
  return function () {
    // preserve the calling context and arguments
    let context = this;
    let args = arguments;
    // record the time of this trigger
    let now = +new Date();
    // is the gap since the last firing smaller than the interval threshold?
    if (now - last < delay) {
      // yes: re-arm a trailing timer for this trigger
      clearTimeout(timer);
      timer = setTimeout(function () {
        last = now;
        fn.apply(context, args);
      }, delay);
    } else {
      // no: the threshold has passed, so fire immediately
      last = now;
      fn.apply(context, args);
    }
  };
}

// height of the viewport
const viewHeight = window.innerHeight || document.documentElement.clientHeight;

// wrap the scroll callback with the throttle
const lazyload = throttle(() => {
  // grab all the image tags in the list
  const imgs = document.querySelectorAll('#list .wrap-img img');
  // num counts how many images are already shown, so we need not re-check them
  let num = 0;
  for (let i = num; i < imgs.length; i++) {
    // distance between the top of the element and the bottom of the viewport
    let distance = viewHeight - imgs[i].getBoundingClientRect().top;
    // if the element is at least 100px into the viewport, it is exposed
    if (distance >= 100) {
      // write the real src into the element to show the image
      let hasLaySrc = imgs[i].getAttribute('data-has-lazy-src');
      if (hasLaySrc === 'false') {
        imgs[i].src = imgs[i].getAttribute('data-src');
        imgs[i].setAttribute('data-has-lazy-src', true);
      }
      // remember how far we have got
      num = i + 1;
    }
  }
}, 1000);
```

Note: after writing the real src into the element, set data-has-lazy-src to true, so that when the user scrolls back the browser does not request the image again, which would waste server bandwidth.
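A quick way to see the throttle in action: a burst of synchronous calls collapses into a single immediate invocation, with the rest deferred to one trailing timer. A standalone sketch (the throttle is reproduced here in condensed form so the snippet runs on its own):

```javascript
// Condensed version of the throttle above, for a standalone demo.
function throttle(fn, delay) {
  let last = 0, timer = null;
  return function (...args) {
    const now = Date.now();
    if (now - last < delay) {
      // within the threshold: re-arm a single trailing call
      clearTimeout(timer);
      timer = setTimeout(() => { last = Date.now(); fn.apply(this, args); }, delay);
    } else {
      // past the threshold: fire immediately
      last = now;
      fn.apply(this, args);
    }
  };
}

let calls = 0;
const onScroll = throttle(() => { calls++; }, 1000);
for (let i = 0; i < 100; i++) onScroll(); // simulate a burst of scroll events
console.log(calls); // 1 -- only the first call fired immediately
```

This is exactly why the scroll handler stays cheap: no matter how fast scroll events arrive, the image check runs at most once per interval.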

See the document article list for details

3.2 Back-end Optimization

The back-end stack is Node, Express, and MongoDB.

The main problem on the back end is that the interfaces are slow, especially the article list interface, which already uses paginated requests. Why is it so slow?

So I inspected the content returned by the interface and found that it returned many fields the list never displays. In particular the article content, which is very large, consumed a lot of resources and bandwidth and stretched out the response time.

As the figure above shows, the article list interface only needs to return the title, description, cover image, view count, comment count, like count, and date of each article.

So comment out or delete fields that don’t need to be shown to the front end.

```js
// Fields to be returned
let fields = {
  title: 1,
  // author: 1,
  // keyword: 1,
  // content: 1,
  desc: 1,
  img_url: 1,
  tags: 1,
  category: 1,
  // state: 1,
  // type: 1,
  // origin: 1,
  // comments: 1,
  // like_User_id: 1,
  meta: 1,
  create_time: 1,
  // update_time: 1,
};
```
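Conceptually, such a fields object is a MongoDB-style inclusion projection: only the keys marked 1 survive the query (in Mongoose it is typically passed as the second argument to `Model.find(conditions, fields)`). A plain-JS sketch of that behavior, using a hypothetical article document rather than the project's real data:

```javascript
// Mongo-style inclusion projection in plain JS: keep only fields marked 1.
function project(doc, fields) {
  const out = {};
  for (const key of Object.keys(fields)) {
    if (fields[key] === 1 && key in doc) out[key] = doc[key];
  }
  return out;
}

// Hypothetical document; `content` is the heavy field the list never shows.
const article = {
  title: 'Hello',
  content: 'x'.repeat(50000), // ~50 KB of article body
  desc: 'intro',
  meta: { views: 10 },
};

const slim = project(article, { title: 1, desc: 1, meta: 1 });
console.log(Object.keys(slim)); // [ 'title', 'desc', 'meta' ]
```

Dropping the 50 KB `content` field from every list item is where the bulk of the bandwidth saving comes from.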

We did the same for all the other interfaces.

After these back-end changes, every interface got faster. The article list interface in particular now takes only 0.04-0.05 seconds, down from 4.3 seconds, roughly 100 times faster. How cool is that? The effect is as follows:

My current mood is as follows:

3.3 Server Optimization

Did you think the article would end with the front-end and back-end optimizations? You'd be naive to think so: the most important part comes last!

The author's server uses Nginx as a reverse proxy.

The optimizations are as follows:

  • Hide the Nginx version number

In general, software vulnerabilities are version-specific, so we should hide or remove sensitive information that the web server reveals to visitors.

How do you check the Nginx version number? Look directly at the Response Headers of any interface or static-file request in the Network panel.

Before setting this, you can see the version number, for example, my website version number is as follows:

```
Server: nginx/1.6.2
```

After hiding the version number (e.g. with `server_tokens off;` in the http block of nginx.conf), the header shows only:

```
Server: nginx
```
  • Enable Gzip compression

Nginx serves static files far more efficiently than a web framework can: it can apply gzip compression to shrink static files and speed up their loading, and it can enable caching with expiry times to reduce the number of requests for them.

After I turned on gzip compression, the requested static file size was reduced by about two-thirds.

```nginx
gzip on;
# Turns the gzip module on or off (on/off).

gzip_buffers 16 8k;
# Number and size of the buffers used to hold the compressed result stream:
# here, 16 buffers of 8 KB each.

gzip_comp_level 6;
# Compression level, 1-9: 1 compresses least but is fastest,
# 9 compresses most but is slowest.

gzip_http_version 1.1;
# Lowest HTTP protocol version for which to compress.

gzip_min_length 256;
# Minimum response size (taken from the Content-Length header) to compress.
# The default 0 compresses everything regardless of size; here it is 256 bytes.

gzip_proxied any;
# Compress proxied responses unconditionally, regardless of their headers.

gzip_vary on;
# Add "Vary: Accept-Encoding" to the response headers for proxy servers.

gzip_types
    text/xml application/xml application/atom+xml application/rss+xml application/xhtml+xml image/svg+xml
    text/javascript application/javascript application/x-javascript
    text/x-json application/json application/x-web-app-manifest+json
    text/css text/plain text/x-component
    font/opentype font/ttf application/x-font-ttf application/vnd.ms-fontobject
    image/x-icon;
# MIME types to compress; note the font types added explicitly.

gzip_disable "MSIE [1-6]\.(?!.*SV1)";
# Disable gzip for IE 6 and below.
```

Add the above to the http block of the Nginx configuration file nginx.conf.

To verify, check whether the Content-Encoding of a file request is gzip.

  • Set Expires to enable caching

```nginx
server {
    listen       80;
    server_name  localhost;

    location / {
        root   /home/blog/blog-react/build/;
        index  index.html;
        try_files $uri $uri/ @router;
        autoindex on;
        expires 7d;  # cache for 7 days
    }
}
```
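With `expires 7d`, nginx emits both a `Cache-Control: max-age` and an `Expires` header; they express the same 7-day lifetime in two forms. A quick sanity check of the numbers:

```javascript
// expires 7d => Cache-Control: max-age=604800 plus an Expires date 7 days out.
const sevenDays = 7 * 24 * 60 * 60; // seconds in 7 days

// If the request was made on March 16, 2019, the Expires date is March 23, 2019.
const requested = new Date('2019-03-16T00:00:00Z');
const expires = new Date(requested.getTime() + sevenDays * 1000);

console.log(sevenDays, expires.toISOString().slice(0, 10)); // 604800 2019-03-23
```

Modern browsers prefer `Cache-Control` when both headers are present; `Expires` is kept for older clients.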

When I refreshed again on March 16, 2019, whether the setting took effect can be seen from the following fields:

  1. The "(from memory cache)" in Status Code shows the file was served straight from the local browser cache, not requested from the server.
  2. Cache-Control: max-age=604800, which is exactly 7 days in seconds.
  3. Expires: March 23, 2019, likewise 7 days away.

Note: Disable cache (circled in red) makes the browser request the server every time, regardless of whether the local file has expired, so you must uncheck it to see the caching behavior.

The ultimate trick, server-side rendering (SSR), is the author's next step.

3.4 Test Scenarios

Any optimization result quoted without its real test scenario is meaningless, and network speed at different times of day also has a big influence on the results.

Therefore, the author’s test scenarios are as follows:

  • A. The author's server is an Alibaba Cloud instance with the entry-level student-package configuration, as follows:

  • B. The test network is 10 Mbps fiber broadband.

3.5 Optimization Results

The optimized first screen speed is 2.07 seconds.

The final result with caching is 0.388 seconds.

Take a look at Lighthouse’s test results again:

Compared with before the optimization, every metric has improved considerably.

4. Finally

The road of optimization is long and never-ending. As the martial-arts saying goes: of all the kung fu in the world, only speed is unbeatable.

The optimized project is already open source on GitHub: https://github.com/biaochenxuying/blog-react

If you found this article good or helpful, please give it a thumbs-up; your support is my biggest motivation to keep writing.