1. Start with the 100,000-item list

The examples in this chapter are taken from a blog post by Yunzhongqiao.

Recently I have often seen technical blogs discussing how to render a huge list without freezing the page. This is the classic DOM rendering optimization problem, and the original post covers it in great detail, so I will reuse its example directly here.

<ul id="container"></ul>
// Record the start time of the task
let now = Date.now();
// Insert 100,000 pieces of data
const total = 100000;
// Get the container
let ul = document.getElementById('container');
// Insert the data into the container
for (let i = 0; i < total; i++) {
    let li = document.createElement('li');
    // ~~x is equivalent to: x < 0 ? Math.ceil(x) : Math.floor(x)
    li.innerText = ~~(Math.random() * total);
    ul.appendChild(li);
}

console.log('JS runtime:', Date.now() - now);
setTimeout(() => {
    console.log('Total elapsed time:', Date.now() - now);
}, 0);

The running results are as follows:

The whole page stays blank for a long time before the first screen of data finally appears. Clearly there is no performance bottleneck in the JS itself: there is no algorithm, not even any data-processing logic. The lag comes almost entirely from the huge number of high-frequency DOM operations during rendering.

The remedy follows from the cause: since rendering everything at once is too slow, we can render in batches instead. This is the idea of time slicing, and the most common implementation is asynchronous rendering with setTimeout.

// The container to insert
let ul = document.getElementById('container');
// Insert 100,000 pieces of data
let total = 100000;
// Insert 20 strips at a time
let once = 20;
// The total number of pages
let page = total / once;
// The index of each record
let index = 0;
// Loop to load the data in batches
function loop(curTotal, curIndex) {
    if (curTotal <= 0) {
        return false;
    }
    // How many items to render in this batch
    let pageCount = Math.min(curTotal, once);
    setTimeout(() => {
        for (let i = 0; i < pageCount; i++) {
            let li = document.createElement('li');
            li.innerText = curIndex + i + ': ' + ~~(Math.random() * total);
            ul.appendChild(li);
        }
        loop(curTotal - pageCount, curIndex + pageCount);
    }, 0);
}
loop(total, index);

Of course, if each list item's DOM structure is fairly complex, whether the browser can cope with that many DOM nodes once they are all rendered is another question. A follow-up scheme is the virtual list, which is essentially the lazy-loading idea applied at the rendering level rather than at the data-request level; I will not cover it in detail here. And when the page is not a simple list but a complex structure such as a tree, this kind of virtual windowing is basically impractical.
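That said, for readers who have not seen one, here is a minimal virtual-list sketch. This is my own illustration, not code from the original post: it assumes a fixed row height, that #container is a scrollable element (position: relative, overflow: auto, fixed height), and that a data array already holds the full list in memory; all the names are illustrative.

const itemHeight = 30;                                     // px per row (assumed fixed)
const container = document.getElementById('container');    // scrollable wrapper
const phantom = document.createElement('div');              // keeps the real scrollbar height
phantom.style.height = data.length * itemHeight + 'px';
container.appendChild(phantom);

const viewport = document.createElement('div');             // holds only the visible rows
viewport.style.position = 'absolute';
viewport.style.top = '0';
viewport.style.width = '100%';
container.appendChild(viewport);

function renderVisible() {
    const start = Math.floor(container.scrollTop / itemHeight);
    const count = Math.ceil(container.clientHeight / itemHeight) + 1;
    let html = '';
    for (let i = start; i < Math.min(start + count, data.length); i++) {
        html += '<div style="height:' + itemHeight + 'px">' + data[i] + '</div>';
    }
    // Shift the rendered window down to where the user has scrolled
    viewport.style.transform = 'translateY(' + start * itemHeight + 'px)';
    viewport.innerHTML = html;
}
container.addEventListener('scroll', renderVisible);
renderVisible();

Only a couple of dozen rows ever exist in the DOM at once, which is exactly why this approach scales where the full render does not.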

2. Handling the frequent websocket pushes encountered in projects

In fact, the 100,000-item rendering from the previous chapter rarely shows up in real projects; pagination and similar techniques take care of it. After all, who could cope with the scrollbar of a hundred-thousand-item list ~~

One of the most common rendering performance problems I have actually hit in projects is a websocket pushing data constantly, in large volumes and at high frequency, which makes DOM rendering stutter. Note that unlike the previous chapter, where all of the data is requested up front, pushed data arrives dynamically and never stops, so we can introduce a data buffer pool.

let bufferPool = [];    // Caches the data pushed during a time window
let bufferTimer;
// The function that receives webSocket data
function onMsg(data) {
    bufferPool.push(data);
    // Schedule a single render for all the data buffered in this window
    if (!bufferTimer) {
        bufferTimer = setTimeout(() => {
            // Process the cached data in one go
            render(bufferPool);
            bufferPool = [];
            bufferTimer = null;
        }, 500);
    }
}
ws.on('message', onMsg);

The ever-useful setTimeout appears again: the time-slicing idea is applied to the pushed data, so it gets rendered in batches.

However, the buffer pool alone is not enough. If a huge amount of data is pushed within the 500ms window, rendering it in one go is still a big problem. We can therefore add another setTimeout to the DOM rendering depending on the data length, i.e. slice the rendering itself into fragments, just as we did in chapter 1.

function render(bufferPool) {
    if (bufferPool.length > 100) {
        // Too much data at once: render it in time-sliced batches...
    } else {
        // Small batch: render it in a single loop...
    }
}
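For completeness, here is a hedged sketch of how that skeleton could be filled in, reusing the chapter-1 loop idea. The names renderChunk and appendItem and the thresholds 100 / 20 are my own illustrative choices, not from the original post.

const listEl = document.getElementById('container');   // the same #container as before

function render(bufferPool) {
    if (bufferPool.length > 100) {
        // Too much data at once: render it in time-sliced chunks
        renderChunk(bufferPool, 0);
    } else {
        // Small batch: render it in one loop
        for (let i = 0; i < bufferPool.length; i++) {
            appendItem(bufferPool[i]);
        }
    }
}

function renderChunk(data, index) {
    if (index >= data.length) return;
    const end = Math.min(index + 20, data.length);
    for (let i = index; i < end; i++) {
        appendItem(data[i]);
    }
    // Yield so the browser can paint before the next chunk
    setTimeout(() => renderChunk(data, end), 0);
}

// Stand-in for however a single pushed record is turned into a row
function appendItem(item) {
    const li = document.createElement('li');
    li.innerText = item;
    listEl.appendChild(li);
}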

Of course, the hand-rolled buffer pool looks rather inelegant… If your project already uses RxJS, this is exactly where it shines: RxJS is genuinely powerful for handling data streams.

Rx.Observable
    .fromEvent(ws, 'message')
    .bufferTime(1000)
    .subscribe(bufferPool => render(bufferPool))
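One small caveat, in case it helps (my own note, not from the original post): bufferTime emits an array every interval even when nothing was pushed, so the empty buffers are usually filtered out before rendering.

Rx.Observable
    .fromEvent(ws, 'message')
    .bufferTime(1000)
    .filter(bufferPool => bufferPool.length > 0)   // skip the empty intervals
    .subscribe(bufferPool => render(bufferPool))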

3. Do you really need to introduce Web Workers?

To continue with performance optimization we have to mention Web Workers, which have become popular in recent years. There is no mystery to them; a quick look at the example in the MDN documentation makes the idea clear:

var worker = new Worker("js/worker.js");
// Send data to worker
worker.postMessage(data);
// The callback function after the worker processes the data
worker.onmessage = function (e) {
	var data = e.data;  // The data returned by the worker after processing
}

// js/worker.js
onmessage = function (e) {
	var data = e.data;  // The data the worker receives
	// Process the data here, then send the result back
	postMessage(data);  // Send the processed data
}

Compare that to the JS timer.

setTimeout is still essentially single-threaded: it achieves concurrency through the event loop, but that concurrency is fake. The callback inside the timer still blocks the thread, so complex computation and rendering will still make the page feel stuck, and the multi-core CPU in the machine sits idle and cannot be used effectively.
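A tiny illustration of that point (my own snippet, not from the original post): even when heavy work is scheduled with setTimeout, it still runs on the main thread and freezes the page while it executes.

setTimeout(() => {
    // "Asynchronous" only means deferred: once this callback starts,
    // it still occupies the single main thread.
    let sum = 0;
    for (let i = 0; i < 1e9; i++) {
        sum += i;
    }
    console.log(sum);  // clicks and rendering are blocked until this finishes
}, 0);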

A Web Worker, on the other hand, is genuine multithreading. We can move some of the JS data processing into a worker thread; the main thread is then unaffected and interacting with the page stays smooth.

Returning to the example from the previous section: even though the websocket data is buffered and the rendering is sliced, if the pushed data needs significant processing, that processing will occupy the single JS thread for a long time. If you want to receive and process pushed data without affecting how the page currently performs, consider introducing a Web Worker, subscribing to the websocket inside the worker, and processing the data there.
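A hedged sketch of that arrangement (my own illustration; the file name, endpoint, and process() helper are hypothetical): the worker owns the websocket connection, buffers and processes the pushed data, and only posts processed batches back to the main thread for rendering.

// main.js
var dataWorker = new Worker('js/socket-worker.js');  // hypothetical file name
dataWorker.onmessage = function (e) {
    render(e.data);  // only the rendering stays on the main thread
};

// js/socket-worker.js
var buffer = [];
var ws = new WebSocket('wss://example.com/push');    // hypothetical endpoint

// Stand-in for the real filtering/splicing logic
function process(raw) {
    return raw;
}

ws.onmessage = function (e) {
    buffer.push(process(e.data));  // heavy processing happens off the main thread
};

// Flush a processed batch to the main thread every 500ms
setInterval(function () {
    if (buffer.length) {
        postMessage(buffer);
        buffer = [];
    }
}, 500);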

In my current project the pushed data volume is large and the raw data has to go through a series of filtering and splicing steps, but is a worker really necessary for such logically complex processing? The key is whether the user needs to operate the page while the processing runs. If your scenario is "click, request the data, wait for it to be processed and rendered", users have a psychological expectation of that wait and will not scroll or interact with the page during such a short time, so there is no need for a worker at all. In my case, however, the continuous websocket updates are entirely a background behaviour that users cannot perceive in time, and the heavy processing must not affect whatever they are doing at the moment, so it is best placed in a worker thread.

4. Reference

  • Web Worker

  • RxJS

  • Virtual list