The requirements first

The requirement was simple: we already had a waterfall-layout page that loads more cards as you scroll, and on top of that we needed to add video support, so a card could contain either pictures or a short video. On Wi-Fi, videos should autoplay inline, muted and looping, with no other interaction.

Not a difficult requirement, and the bar isn't high.

I checked the compatibility of inline video playback; not a big problem.

Muted playback is also trivial.

After mounted, call play(), or just use the autoplay attribute.

Looping is just the loop attribute.

Detecting a Wi-Fi environment is done through the client bridge. That part is business-specific, so I won't go into detail; in short, one API call and it's done.
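Put together, the video part is roughly this. A minimal sketch, not the real markup: isWifi would be set from the bridge call above, and playsinline / webkit-playsinline are what allow inline playback on iOS.

<video
  v-if="isWifi"
  :src="videoUrl"
  muted
  loop
  autoplay
  playsinline
  webkit-playsinline
></video>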

It looks like it’s done.

Then I started hitting the pitfalls.

Running into performance issues

According to the requirements, the back end controls how often videos appear: at most one video per five cards, because the effect isn't good when the screen is full of video. But this is a waterfall stream, and in theory a user can scroll through hundreds or thousands of cards. What happens when one in every five of them is playing a video?

CPU temperature climbed to 90°C and usage to 300% (on a four-core, eight-thread machine), or 600% when using iphone-inline-video for iOS 9 compatibility.

Scrolling was simply unusable on mobile; even on an iPhone X it was incredibly sluggish.

Trying to optimize

In practice, a user only sees a few cards on screen at a time, rarely more than 10, yet they can load hundreds of cards, all of which keep playing automatically, and the CPU can't take it.

So the optimization idea is to stop the videos that aren't needed. My approach is a bit more radical, modeled on the effect of the review app's home page: scroll to the bottom and back up, and the images reload, or at least it looks that way. Presumably image and video resources get garbage-collected there, rather than the videos merely being paused.

My approach is to replace images and videos that are outside the visible area with empty divs, and swap them back when the user scrolls up again. As for whether the images actually get garbage-collected, that's left to the browser.

So how do you achieve this effect?

At first I struggled with how to listen for scrolling in the parent list, figure out which child cards are visible, and notify certain children that they have been optimized away.

That didn't look easy, so what about having each child component listen for scrolling itself and check whether it is in the visible area? That felt bad too: the scroll event fires many, many times, and there would be a huge number of listeners.

Later I looked at how lazysizes does it: at initialization it simply runs querySelectorAll('.custom-class-name') and hangs the list of lazy elements on the global window... The general idea is to give each element a special class name, collect them into a window-level list at initialization, then listen for scrolling and forEach over all the elements, deciding whether each needs to be loaded based on its position (whether it's in the visible area).

Since that's all lazysizes does, and we use it heavily, I might as well do the same. The main implementation is as follows:

export default {
  mounted() {
    this.addOptimizeScrollListener();
  },
  beforeDestroy() {
    document.removeEventListener('scroll', this._optimizeListener, false);
  },
  methods: {
    addOptimizeScrollListener() {
      const windowHeight = window.innerHeight;
      const TOLERANCE_HEIGHT = 300;

      // throttleByRAF: a requestAnimationFrame-based throttle helper (see sketch below)
      this._optimizeListener = throttleByRAF(() => {
        if (this.$refs.cards) {
          this.$refs.cards.forEach(card => {
            const rect = card.$el.getBoundingClientRect();

            // Outside the viewport (plus tolerance): swap the card's media for a placeholder
            if (rect.top > windowHeight + TOLERANCE_HEIGHT || rect.bottom < 0 - TOLERANCE_HEIGHT) {
              card.isOptimized = true;
            } else {
              card.isOptimized = false;
            }
          });
        }
      });

      document.addEventListener('scroll', this._optimizeListener, false);
    },
  },
};

The scroll listener is added in mounted. What it does: get the list of card components via $refs, then, lazysizes-style, forEach over them, check whether each is in the viewport, and assign isOptimized on the card instance. There are a couple of small optimizations: throttleByRAF limits how often the check runs, and TOLERANCE_HEIGHT adds a margin of error so cards aren't optimized strictly at the window boundary, which reduces the reloading caused by slight back-and-forth scrolling.
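throttleByRAF itself isn't shown above; it's just a requestAnimationFrame-based throttle, something along these lines:

function throttleByRAF(fn) {
  let scheduled = false;
  return function (...args) {
    // Ignore further calls until the next animation frame has run the wrapped function
    if (scheduled) return;
    scheduled = true;
    requestAnimationFrame(() => {
      scheduled = false;
      fn.apply(this, args);
    });
  };
}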

That's the main implementation. Inside the card component, you just use isOptimized to decide whether to render the image or video, or render blank instead.

One important point: you must know the height of the original image or video, otherwise the page jitters and the waterfall layout can even break. The nice thing here is that the back-end interface already returns the width and height, so the height can be predicted in advance and the placeholder only needs to be rendered at the given height.
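As a sketch of what the card's media area might look like (not the real component; the prop names src, isVideo, mediaWidth, mediaHeight and the column width are made up for illustration):

<template>
  <div class="card-media">
    <!-- Optimized: an empty div at the predicted height, so the waterfall layout doesn't jump -->
    <div v-if="isOptimized" :style="{ height: predictedHeight + 'px' }"></div>
    <video v-else-if="isVideo" :src="src" muted loop autoplay playsinline></video>
    <img v-else :src="src">
  </div>
</template>

<script>
export default {
  props: ['src', 'isVideo', 'mediaWidth', 'mediaHeight'],
  data() {
    // Toggled from the parent list's scroll listener (card.isOptimized = true / false)
    return { isOptimized: false };
  },
  computed: {
    predictedHeight() {
      // Scale the back-end height to the column width the card is rendered at
      const renderedWidth = 180; // assumed column width
      return Math.round((this.mediaHeight / this.mediaWidth) * renderedWidth);
    },
  },
};
</script>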

In fact, you can also replace the entire card with blank space (not pure white space, of course, but something colorful, if you want to design placeholders that improve the visual effect). So how do we get the height in that case?

The answer is window.getComputedStyle! Read the height in mounted and save it; when the card is optimized, render the placeholder at that saved height.
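For example, inside the card component (a sketch of the idea):

mounted() {
  // Remember the rendered height so the whole card can later be replaced by a
  // placeholder of the same size without disturbing the waterfall layout.
  this._renderedHeight = window.getComputedStyle(this.$el).height; // e.g. "486px"
},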

Performance comparison

First, an intuitive comparison of the experience: CPU usage is no longer outrageous, the page scrolls happily on mobile, and there isn't much difference compared with using only images.

On the Performance panel, the number of DOM Nodes drops significantly: about 16,000 before the optimization versus 3,000 to 6,000 after.

Since I had been misled into thinking the problem might be excessive memory usage, I also compared the memory usage of the different strategies in detail.

Sure enough… it makes little difference… Optimizing the entire card does save a bit of memory. A quick look at the details: without optimization there are 632 VueComponents occupying 8.9 MB; with optimization only 343, occupying 5.1 MB. The difference of 289 matches the roughly 300 loaded cards minus the 11 or so actually rendered on screen, as expected.

That difference is modest, though, because I tested with 300 images straight from production, each only about 20 KB. The main problem was always the CPU usage; the memory savings from reclaiming the DOM are just a bonus.

Other problems

The Wi-Fi check causes lag

I tested the page on an older OPPO phone and noticed some lag; as an experiment I removed the getNetworkType call, and scrolling was smooth again.

The reason is that every time a card with a video loads, it tries to determine the network type, in case the user has switched from Wi-Fi to 4G in the meantime. Since the app bridge doesn't provide a corresponding listener, that was the only option. But clearly it isn't worth the cost.

The browser does have online and offline events for going on and off the network. It would be perfect if switching from Wi-Fi to 4G looked like going offline and then back online. Of course it isn't that perfect: neither event fires during the network switch…

The final solution is a compromise: poll, calling getNetworkType every 10 seconds.
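A sketch of that compromise (getNetworkType again stands in for the business-specific bridge call):

let isWifi = true;

// Poll the bridge every 10 seconds and cache the result; video cards read this
// flag instead of calling the bridge themselves on every load.
const networkPoller = setInterval(async () => {
  const type = await getNetworkType();
  isWifi = type === 'wifi';
}, 10 * 1000);

// Remember to clearInterval(networkPoller) when the page is destroyed.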

Muted playback bug

Muted playback seemed fine, but when I tapped into a detail page (which opens a new WebView) and then returned to the current page, there was suddenly sound… We still haven't figured out why.

All for nothing?

Then I found out that the review app's home page actually uses an animated WebP instead of a video…

Change the plan…

Change interface…

The optimization is basically pointless now. Or rather, you can still optimize, you just don't need to…

(Learned something)

The end

It turned out the platform's support for WebP is poor, and GIFs are not processed directly either, so animated images can't be supported for the time being…

So we’re still going to use video.

The good news is that the optimization wasn't wasted after all; the bad news is that I still had to fix the bug where muted playback becomes unmuted.

In the end I filed a work order with the platform and got one "feature" removed: the WebView appear event can be enabled through manual configuration, and listening for it lets me re-apply the mute when the page comes back.
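The workaround looks roughly like this; 'webviewAppear' is a stand-in name, since the real event depends on the platform configuration:

// When the WebView becomes visible again, force every video back to muted.
window.addEventListener('webviewAppear', () => {
  document.querySelectorAll('video').forEach(video => {
    video.muted = true;
  });
});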

So this is… a happy ending.