The front-end refactoring engineer is an odd breed: someone who cares about the code, but also about the experience. Code optimization is hard, but at least there are plenty of performance testing tools to prove that an optimization worked. How do we prove that something as intangible as the experience has gotten better or worse?

I. Optimization of the visual experience

  1. Page loading
  2. Data request
  3. Image rendering

II. Proving the experience with data

Today, using the “WebNovel” PC site as the example, I will walk through these two points: how to approach refactoring and optimization from the perspective of experience, and how to use data to prove that your optimization works.

I. Optimization of the visual experience

When we set out to optimize the experience, we first need to know what a good experience is.

I once heard the saying, “The best experience is no experience.” It may sound like an oxymoron, but it is in line with the industry’s popular “Don’t make me think” philosophy. You want people to come to “WebNovel” to read novels, not to marvel at how great our web experience is. The moment users start paying attention to the website experience itself is the moment our experience has failed.

Where do users most easily notice the experience of our website?

1. Page loading

The first place is page loading. Here we could clearly see the “WebNovel” logo and three small icons flash when the page refreshes: for a moment, nothing was rendered at all.

Our “WebNovel” site targets overseas users all over the world, which freed us from the strange cycle of Internet Explorer compatibility we face in China. So for the small icons we chose SVG, which demands more of the browser but renders better. SVG icons are vectors: they stay small in file size and can be scaled up and down without distortion. We referenced the JavaScript code automatically generated by Iconfont (Alibaba’s online icon management tool) in the page, and the result is the rendering above.

The principle is simple: the DOM structure of our SVG icons is only injected into the page after the JS executes, so the page renders first and the icons render afterwards. The natural fix was to put the JS-generated DOM structure directly into the HTML.
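For context, here is roughly what the mechanism looks like. This is a hand-written sketch, not the actual generated file; the icon name is hypothetical and the path data is elided.

```html
<!-- What the Iconfont script effectively does after it runs:
     inject a hidden SVG sprite of <symbol> definitions into <body>. -->
<svg aria-hidden="true" style="position:absolute;width:0;height:0;overflow:hidden">
  <symbol id="icon-search" viewBox="0 0 1024 1024">
    <path d="..."/>
  </symbol>
</svg>

<!-- Each icon on the page references a symbol by id. Until the
     script runs, this <use> resolves to nothing, and that gap is
     exactly the flash we saw. -->
<svg class="icon"><use xlink:href="#icon-search"></use></svg>
```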

However, when we copied the DOM structure out, we found there was far too much SVG code. Putting all of it into the page would greatly inflate the HTML and slow down the rendering of the entire page.

In the end, the solution we adopted was to split the SVG icons. For the icons displayed on the first screen, we added their DOM directly into the HTML; for the remaining icons, we kept the old method of dynamically loading them via JS. Icons outside the first screen may still flash, but users can hardly see it. This solved the flashing of first-screen icons while avoiding the code-bloat problem of inlining every SVG icon (see the sketch below). Users never notice an icon flash, and never spend time wondering whether our page loads fast or slow.
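A minimal sketch of the split, assuming hypothetical icon names and a placeholder script URL:

```html
<!-- First-screen icons: their <symbol> definitions are inlined
     directly in the HTML, so they render together with the page. -->
<svg aria-hidden="true" style="position:absolute;width:0;height:0;overflow:hidden">
  <symbol id="icon-logo" viewBox="0 0 1024 1024"><path d="..."/></symbol>
  <!-- ...the other first-screen symbols... -->
</svg>

<!-- Remaining icons: keep the old dynamic approach and let the
     Iconfont script (placeholder URL) inject their sprite later. -->
<script src="//at.alicdn.com/t/font_xxxxxx.js" async></script>
```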

Loading first-screen content in advance can, to some extent, improve the experience when users refresh or jump to a page.

2. Data request

The second place is when we request data. When we switch tabs, we see a Loading effect. The user’s action triggers the data request, and during the wait we show a Loading state to inform the user, reducing the anxiety of waiting. This is standard practice for improving the data-loading experience.

However, no matter how fast our data requests are, users can still see the effect of this Loading. That’s not what we want.

Our approach: when the user’s mouse hovers over a button, we assume they intend to click, so we initiate the data request in advance. By the time the user actually clicks, the request may already have completed, meaning the Loading state has already disappeared and the user sees the loaded data directly. In other words, the user very likely never sees the Loading state at all. Isn’t that exactly the experience we want?
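A minimal sketch of the idea, assuming hypothetical helpers fetchTabData(), showLoading(), hideLoading() and renderTab():

```js
// Cache in-flight requests so hover and click share one promise.
const prefetchCache = new Map();

function prefetchTab(tabId) {
  if (!prefetchCache.has(tabId)) {
    prefetchCache.set(tabId, fetchTabData(tabId)); // hypothetical fetcher
  }
  return prefetchCache.get(tabId);
}

document.querySelectorAll('.tab').forEach((tab) => {
  const tabId = tab.dataset.tabId;

  // Hovering signals intent: start the request early.
  tab.addEventListener('mouseenter', () => prefetchTab(tabId));

  // By click time the promise has often already resolved,
  // so the Loading state never gets a chance to appear.
  tab.addEventListener('click', () => {
    showLoading();
    prefetchTab(tabId).then((data) => {
      hideLoading();
      renderTab(data);
    });
  });
});
```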

A good way to enhance the data request experience is to take advantage of the user’s idle time and do some pre-processing in advance.

3. Image rendering

As we all know, images and icons are the most visible elements on a page, especially the book covers crafted with great care by the “WebNovel” designers. Naturally we prefer larger, higher-quality HD images. To avoid blocking the presentation of the text, we adopted an image lazy-load mechanism, so images load lazily.
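The article does not show its lazy-load code; here is a minimal sketch of the general technique using IntersectionObserver, assuming the real image URL sits in data-src and a lightweight placeholder in src:

```js
// Swap in the real image once it approaches the viewport.
const lazyObserver = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;   // triggers the real download
      lazyObserver.unobserve(img); // each image only needs this once
    }
  });
}, { rootMargin: '200px' });       // start a little before it is visible

document.querySelectorAll('img[data-src]').forEach((img) => lazyObserver.observe(img));
```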

Obviously, though, this has the same problem as the data Loading logic earlier: it needs a placeholder image to indicate the loading state. Once the image loads, the placeholder is replaced by the HD book cover in an instant, producing an unexpected flash.

As soon as users see that flash, they start wondering whether the page is loading fast or slow. But we want “Don’t make me think”: we don’t want users to think about anything other than our product. So we needed a way to weaken, or even eliminate, that flash.

At this point, remember that the browser has an image caching mechanism. An image the browser has fetched within a certain period is cached; the next time it is shown, it is read directly from the browser cache and appears instantly, with no flash.

So we wondered whether we could use this mechanism to solve the flashing. The difficulty is that the book cover on our details page is larger than the one on the home page. Being different sizes, they are different images, so the cache does not apply.

If the details page reused the home page’s small cover, the small image would look blurry when stretched. If the home page used the details page’s large cover, the image payload on the home page would be far too heavy.

Our solution was a three-layer stack. We stacked the placeholder, the small book cover, and the large book cover exactly on top of each other, in the order shown above. When the user enters the details page from the home page, the placeholder and the small cover display instantly from the browser cache; since the small cover sits on top of the placeholder, the user never sees the placeholder. Meanwhile the large cover is loading, and once loaded it covers the small one. By the time the user looks closely at the cover, the large version may already be in place.

As you can see from the GIF above, the whole process has changed from a flash from placeholder to large cover into a gradual transition from small cover to large cover, which is very hard for users to notice.
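A minimal sketch of the three-layer stack, with illustrative class names, file names and sizes:

```html
<div class="book-cover">
  <!-- Bottom layer: placeholder, only visible on direct visits. -->
  <img class="layer" src="placeholder.png" alt="">
  <!-- Middle layer: the home page's small cover, usually cached. -->
  <img class="layer" src="cover-150x200.jpg" alt="">
  <!-- Top layer: the HD cover; hides the small one once loaded. -->
  <img class="layer" src="cover-300x400.jpg" alt="Book cover">
</div>

<style>
  .book-cover { position: relative; width: 300px; height: 400px; }
  /* All three layers share the same box; later siblings in the DOM
     paint on top, giving the placeholder < small < large order. */
  .book-cover .layer {
    position: absolute;
    top: 0; left: 0;
    width: 100%; height: 100%;
  }
</style>
```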

Some of you might ask: if the placeholder can never be seen, why load it at all? The reason is simple: not everyone arrives from the home page. A user may open the details link directly, in which case the placeholder reverts to its original role: you see the placeholder first, then the cover. Admittedly, for such users we do load one extra small cover image, but for the sake of the experience I personally think that cost is acceptable.

Another interesting side effect: because the placeholder, the small cover, and the large cover are all loaded together, all three are cached by the browser. When the user jumps to another page that uses any of the same images, they appear instantly. In this way we take full advantage of the browser cache to further improve the experience across the site.

Image caching makes a good experience even better.

II. Proving the experience with data

Experience optimization at the visual level is WYSIWYG: you see the results immediately. But many experience problems stem from browser or user differences, so we cannot directly and quickly demonstrate the effect of an optimization. We have to rely on data to prove it.

To speak with data, we must first have data. Data collection on our overseas PC site is handled by Google Analytics. Here’s a funny story about that.

The other day I was testing our “WebNovel” pages with WebPageTest (a well-known tool for measuring site load speed) when I saw the result above.

What to do? We took the crude but effective option of inlining the CSS directly into the HTML file. With no separate CSS file, there is no extra DNS lookup or SSL handshake to pay for. Wow! Great.
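Concretely, the change looks roughly like this (URLs and file names are illustrative):

```html
<!-- Before: a separate stylesheet costs an extra DNS lookup,
     SSL handshake and request. -->
<link rel="stylesheet" href="//static.example.com/main.css">

<!-- After: the build step injects the contents of main.css
     straight into the HTML. -->
<style>
  /* ...contents of main.css... */
</style>
```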

I took this to my boss to claim credit. Instead of applause, he raised two points I hadn’t considered. First, you were obviously measuring a first-visit load. How can you prove this optimization helps the majority of users, who already have the CSS file cached?

Second, “WebNovel” is a global site. How do you prove how strong this optimization is for users in every region?

Yeah, how am I supposed to prove it? “WebNovel” has so many users around the world; using a single WebPageTest report to stand in for every terminal and user group is obviously not credible. And how do I prove how strong the optimization is?

At this point my good buddy appeared: my front-end logic partner helped me implement random A/B sample bucketing in our page-rendering framework. The idea is simple: 50% of users load the CSS the original way, and the other 50% get the CSS inlined into the HTML. I just needed to add an identifier to distinguish the two samples and report it to GA (Google Analytics); GA then automatically aggregates the samples for us. Finally, I only have to compare the averages of the two samples in GA to know exactly how strong our optimization is.
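A minimal sketch of the tagging side, assuming the bucket is decided when the page is rendered, and that a hypothetical measuredMs value is reported with the classic analytics.js timing API:

```js
// The server decides the bucket at render time; shown client-side
// here only to illustrate the 50/50 split. Labels are illustrative.
const bucket = Math.random() < 0.5 ? 'css-inline' : 'css-external';

// analytics.js timing hit:
// ga('send', 'timing', category, variable, valueInMs, label)
// GA then aggregates the samples per label for us.
// measuredMs: the metric we settle on below (hypothetical here).
ga('send', 'timing', 'perf', 'render', measuredMs, bucket);
```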

But another question arises: what data should we report to GA? Obviously, using the load time of the entire page to evaluate a CSS optimization is wrong. What about the time at which the CSS finishes loading? But how do I prove the page is rendered once the CSS is loaded? And aren’t we optimizing the experience? How does the load time of a CSS file prove that the user experience is better?

At this point I was a mess. No, hold it together. Don’t panic.

Experience, experience. If the CSS hasn’t loaded, the user can’t even see the page, so what experience is there to speak of? Therefore the moment the user first sees the page is, to some extent, a measure of the experience.

How do you get this time? Using the Performance API, you can obtain the browser’s loading phases shown in the figure above.

To get the time when the page starts rendering: in Chrome you can use window.chrome.loadTimes().firstPaintTime, and in IE9+ you can use window.performance.timing.msFirstPaint.

The consensus is that firstPaintTime - navigationStart can roughly be understood as the time between the user requesting the page and first seeing it.
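Putting this together, here is a sketch of the measurement. Note that chrome.loadTimes() has since been deprecated in favor of the standard Paint Timing API, but it was the available option at the time:

```js
function getFirstPaintMs() {
  const timing = window.performance && window.performance.timing;
  if (!timing) return null;

  if (window.chrome && window.chrome.loadTimes) {
    // Chrome: loadTimes() reports seconds since epoch.
    const lt = window.chrome.loadTimes();
    return Math.round(lt.firstPaintTime * 1000 - timing.navigationStart);
  }
  if (timing.msFirstPaint) {
    // IE: msFirstPaint is milliseconds since epoch.
    return timing.msFirstPaint - timing.navigationStart;
  }
  return null;
}

// Report to GA, tagged with the bucket from the A/B sketch earlier.
const firstPaintMs = getFirstPaintMs();
if (firstPaintMs != null) {
  ga('send', 'timing', 'perf', 'first-paint', firstPaintMs, bucket);
}
```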

So I happily reported this time to GA, the version went live, and we waited for the data to accumulate. Finally we obtained the following statistics:

Of course, we also fed this bottleneck back to our server-side colleagues. The reply was that “WebNovel” is still a new project among our overseas sites, and more overseas servers need to be provisioned. I believe this pain point will be solved as the infrastructure catches up.

Data reporting gives your experience optimization solid ground to stand on.

Conclusion

Space is limited, so I have only listed a few areas where I think you can optimize from the user’s perspective, and shown how to use data to prove how effective your optimization is. I hope it inspires and helps you. If you have any questions, leave me a message and we can discuss in depth.