Infinite Scrolling Best Practices is a popular UX Planet post that examines infinite scrolling design practices from a UX perspective.
Infinite scrolling is everywhere on the Internet: the Douban homepage, the Facebook timeline, Twitter topic lists. As you scroll down, new content seems to appear out of nowhere — a user experience that has been widely praised.
The technical challenges behind infinite scroll loading are greater than you might think, especially when page performance needs to be pushed to its limits. This article implements an infinite scroll loading effect through code examples. More importantly, along the way it analyzes page performance and tries to squeeze out as much of it as possible. I hope it inspires readers, and I welcome discussion.
In addition, the "Keep thinking" section outside the code of this article draws on Wang Peng's translation of the article "Complexities of an Infinite Scroller" (link in the original post). My deep respect and gratitude to its original author.
Performance measurement
Before we start on the code, it's worth looking at some common ways to measure performance:
1) Use window.performance
The Performance API introduced with HTML5 is powerful. We can use performance.now() to accurately measure program execution time. Unlike Date.now(), performance.now() returns time with microsecond precision (millionths of a second). Also unlike Date.now(), which is affected by system time (which can be adjusted manually or by software), performance.now() increases at a constant rate, unaffected by clock changes. You can also use performance.mark() to mark timestamps (like dots on a map), combine them into measures with performance.measure() (measuring the distances between the dots), and analyze the data in batches.
2) Use the console.time and console.timeEnd methods
The console.time method is used to mark the start time, the console.timeEnd method is used to mark the end time, and the number of milliseconds that have passed between the end time and the start time is printed in the console.
3) Use a professional measuring tool/platform: jsPerf
In this implementation we use the second approach, because it fully meets our requirements and has broader compatibility.
Overall idea and scheme design
The sample page we're going to implement (shown as a screenshot in the original post) supports pulling down to load content indefinitely. I'll call each outlined region in that screenshot a block-item, and I'll keep using that name below.
1) As for the design, the most basic and obvious idea is to send an asynchronous Ajax request once the user scrolls to the bottom, then splice the new content into the page in the success callback.
2) But looking at the page layout, there are obviously many images — each block-item contains one, with a caption. When loaded content is inserted into the page, the browser begins fetching the images. This means all the images download at the same time and the browser's download channels fill up. And because content is loaded before the user sees it, you may be forced to download images at the bottom that the user will never see. Therefore, we need lazy loading to make the page faster, save the user's bandwidth, and extend battery life.
3) Lazy loading as described above avoids making the user wait a long time for everything at the actual bottom of the page to load and render. We can also set a reasonable threshold so that pre-loading begins before the user scrolls all the way to the bottom.
4) In addition, page scroll events must be listened for. Scrolling is itself a thorny problem, and will be analyzed separately below.
5) DOM manipulation is known to be slow and inefficient. If you're interested, take a look at some classic benchmarks on jsPerf. Many community articles have analyzed the reasons for this slowness, which I won't repeat here. But to summarize and add: merely finding a DOM node is inherently slower than reading a value from memory, and some DOM operations force recalculation before a value can be read. A further problem is that DOM operations are blocking: while one is in progress, nothing else can happen, including user interaction with the page (except scrolling). This hurts the user experience immensely.
So, in the implementation below, I used a lot of "crazy" DOM caching — even extreme caching of everything. The benefits of doing so show up in the final section.
Scroll problem
The scroll problem is easy to imagine: the scroll event fires, and its handler runs, at a very high frequency. In my own tests, in extreme cases scrolling stuttered badly. Even when scrolling doesn't visibly stall, open the Chrome DevTools and you'll see the frame rate is very low. For frame rates we have the famous 16.7-millisecond budget (60 fps); many community articles analyze this number, so I won't expand on it here.
Many readers will immediately think of “Throttle and Debounce functions” for this reason. A quick summary:
1) Throttle limits how often the callback can run: at most so many invocations per second — or, put the other way around, a minimum amount of time must pass before the next callback is allowed to fire;
2) Debounce means that when an event occurs, we do not activate the callback immediately. Instead we wait a certain amount of time and check whether the same event fires again. If it does, we reset the timer and wait again. Only if the same event does not occur during the wait do we activate the callback.
I won’t do it in code here. Once you understand the principle, it shouldn’t be hard to write.
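Still, for readers who want a starting point, here is a minimal sketch of both — my own illustration, not code from the implementation in this article:

```javascript
// Throttle: invoke fn at most once every `wait` milliseconds.
function throttle(fn, wait) {
    var last = 0;
    return function () {
        var now = Date.now();
        // Only invoke if at least `wait` ms have passed since the last run
        if (now - last >= wait) {
            last = now;
            fn.apply(this, arguments);
        }
    };
}

// Debounce: invoke fn only after `wait` ms with no further events.
function debounce(fn, wait) {
    var timer = null;
    return function () {
        var ctx = this, args = arguments;
        // Each new event resets the timer; fn runs only after a quiet period
        clearTimeout(timer);
        timer = setTimeout(function () {
            fn.apply(ctx, args);
        }, wait);
    };
}
```

Usage: `window.addEventListener('scroll', throttle(onScroll, 100))` would cap the handler at roughly ten runs per second, while the debounced version would run only once scrolling pauses.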
But I want to think about it in terms of the way scrolling is handled in the major browsers on mobile:
1) On Android machines, scrolling events occur with a high frequency when the user scrolls — on Galaxy-SIII phones, the frequency is about 100 times per second. This means that the scroll handlers are also called hundreds of times, and these are expensive functions.
2) On Safari, we have the opposite problem: the scroll event fires only when the scroll animation stops. Code that updates the interface doesn't run while the user is scrolling on an iPhone — it runs once, when scrolling stops.
On the other hand, some readers might think of rAF (requestAnimationFrame). But from what I've observed, many front-end developers don't really understand how requestAnimationFrame works or the problems it solves — it's just a term mechanically associated with animation performance and dropped frames, often without ever having been used in a real project, let alone with its compatibility considered. I'm not going to use rAF in this scenario: a setTimeout interval of at least 16.7 ms is commonly recommended (see the community discussions for why), we won't go below that limit, and setTimeout has better compatibility. If you have questions about this choice, please leave a comment.
Based on the above, my solution differs from Throttle and Debounce but borrows from both ideas, especially Throttle: replace the scroll-event handler with a timer that simply checks every 100 milliseconds whether the user has scrolled during that interval. If not, do nothing; if so, do the processing.
User experience optimization tips
When an image finishes loading, it fades in. In practice this is slightly slower — the transition adds to the execution time — but to the user the experience feels faster. This is a proven, ubiquitous trick, and from what I can tell it works. Our code uses it too. But this kind of "social psychology" is obviously not the focus of this article.
To summarize
The code will use: lazy loading with a pre-load threshold + DOM cache and image cache + simulated scroll throttling + CSS fade-in animation. Read on for the details of how each piece is encapsulated and implemented.
Code implementation
DOM structure
The overall structure is as follows:
<div class="exp-list-box" id="expListBox">
<ul class="exp-list" id="expList">
</ul>
<div class="ui-refresh-down"></div>
</div>
The body content lives in a container with id "expListBox". The ul with id "expList" is the container for loaded content. Because the HTML appended on each load is relatively large, I used a template instead of traditional string concatenation. The front-end template engine is the open-source work of my colleague Yan Haijing. The template structure is as follows:
<# dataList.forEach(function (v) { #>
<div id="s-<#=v.eid#>" class="slide">
<li>
<a href="<#=v.href#>">
<img class="img" src="data:image/gif;base64,R0lGODdhAQABAPAAAP%2F%2F%2FwAAACwAAAAAAQABAEACAkQBADs%3D"
data-src="<#=v.src#>">
<strong><#=v.title#></strong>
<span class="writer"><#=v.writer#></span>
<span class="good-num"><#=v.succNum#></span>
</a>
</li>
</div>
<# }); #>
The template above is filled with data from each Ajax request and appended to the page, making up the block-items. Studying it should help you follow the logic below. In the page, the id attribute of each block-item's div stores the block-item's eid, and the corresponding class is "slide". A descendant node contains an image tag whose initial src is a 1px blank placeholder image; the actual image URL is stored in "data-src". The dataList returned by each request can be understood as an array of nine objects — that is, nine block-items are loaded per request.
The style part
Style is not the focus of this article, so I'll pick out just the core lines:
.slide .img{
display: inline-block;
width: 90px;
height: 90px;
margin: 0 auto;
opacity: 0;
-webkit-transition: opacity 0.25s ease-in-out;
-moz-transition: opacity 0.25s ease-in-out;
-o-transition: opacity 0.25s ease-in-out;
transition: opacity 0.25s ease-in-out;
}
The only thing to note is that the image's opacity is initially 0 and is set to 1 after the image is successfully requested and rendered; the transition property then produces the fade-in effect — the "trick" mentioned above.
The logical part
I designed this entirely around the business requirements rather than as an abstraction. In fact, such a pull-down loading feature could be fully abstracted out; interested readers can encapsulate it themselves. Let's focus on the logic. To prevent global pollution, the core logic lives in an immediately invoked function expression:
(function() {
var fetching = false;
var page = 1;
var slideCache = [];
var itemMap = {};
var lastScrollY = window.pageYOffset;
var scrollY = window.pageYOffset;
var innerHeight;
var topViewPort;
var bottomViewPort;
function isVisible (id) {
// ... determine whether the element is in the visible region
}
function updateItemCache (node) {
// ... update the DOM cache
}
function fetchContent () {
// ... request data via Ajax
}
function handleDefer () {
// ... lazy-loading implementation
}
function handleScroll (e, force) {
// ... scroll handler
}
window.setTimeout(handleScroll, 100);
fetchContent();
}());
I think it's good programming practice to declare all variables at the top of a program, both to avoid surprises from hoisting and to help keep an overview of the program as a whole. Let's look at the variables:
// Loading state lock
1) var fetching = false;
// Request parameter sent with each load, indicating which page of content to fetch; starts at 1 and increments by 1 per request
2) var page = 1;
// Caches only the DOM nodes generated from the most recently pulled data, i.e. the array of nodes to insert
3) var slideCache = [];
// Stores each item's offsetTop and offsetHeight, indexed by DOM node id
4) var itemMap = {};
// pageYOffset returns the Y position of the current page relative to the top-left corner of the window's display area
5) var lastScrollY = window.pageYOffset;
6) var scrollY = window.pageYOffset;
// The viewport height of the browser window
7) var innerHeight;
// Upper and lower threshold bounds used by isVisible
8) var topViewPort;
9) var bottomViewPort;
Detailed explanations of DOM cache variables are provided below.
Again, we have five functions. In the code above, comments have been written to clarify what each method does. Next, let’s break it down one by one.
The scroll handler handleScroll
It takes two parameters; the second is a Boolean, force, indicating whether the handler should run unconditionally.
If 100 milliseconds pass with no scrolling (lastScrollY === window.scrollY) and no forced trigger, do nothing except schedule the next check in 100 ms, then return. When scrolling has occurred within the last 100 ms (or the call is forced), determine whether the page has scrolled near the bottom; if so, pull data by calling fetchContent, and run the lazy-loading method handleDefer. This handler also computes the upper and lower thresholds of the isVisible region — 1000px above the viewport and 600px below it — so that images within a certain range are loaded in advance, saving the user waiting time. If we were abstracting this, those values could be parameters.
function handleScroll (e, force) {
// If no scrolling occurs and loading is not forced, do nothing and query again after 100 ms
if (!force && lastScrollY === window.scrollY) {
window.setTimeout(handleScroll, 100);
return;
}
else {
// Update the document scroll position
lastScrollY = window.scrollY;
}
scrollY = window.scrollY;
// The viewport height of the browser window is assigned
innerHeight = window.innerHeight;
// Calculate isVisible upper and lower thresholds
topViewPort = scrollY - 1000;
bottomViewPort = scrollY + innerHeight + 600;
// Determine whether the load is required
// document.body.offsetHeight returns the current page height
if (window.scrollY + innerHeight + 200 > document.body.offsetHeight) {
fetchContent();
}
// Implement lazy loading
handleDefer();
window.setTimeout(handleScroll, 100);
}
Pull the data
Here I use my own wrapped Ajax method, based on Zepto's ajax but manually wrapped in a layer of Promise. The implementation is fairly simple; interested readers can ask me for the code, so I won't detail it here. We use the front-end template for HTML rendering and call updateItemCache to cache the DOM nodes generated from the pulled data. handleScroll is then triggered manually (forced) to update the document scroll position and run lazy loading.
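The wrapper itself isn't shown in this article. As a hedged sketch, promisifying a callback-style ajax function (such as Zepto's $.ajax, which takes success/error callbacks) might look like this — promisifyAjax is my own illustrative name, not the author's API:

```javascript
// A minimal sketch (not the author's actual code) of wrapping a
// callback-style ajax function in a Promise.
function promisifyAjax(ajaxImpl) {
    return function (options) {
        return new Promise(function (resolve, reject) {
            ajaxImpl(Object.assign({}, options, {
                success: function (data) { resolve(data); },
                error: function (xhr, type) { reject(new Error(type || 'Ajax Error')); }
            }));
        });
    };
}
```

With `var ajax = promisifyAjax($.ajax);`, the `.then(onData, onError)` style used in fetchContent below falls out naturally.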
function fetchContent () {
// Set the load status lock
if (fetching) {
return;
}
else {
fetching = true;
}
ajax({
url: (location.pathname.indexOf('/m/') === 0 ? '/m' : '')
+ '/list/asyn?page=' + page + '&t=' + (+new Date), // '&t=' cache-busting separator assumed; the source ran the timestamp straight into the query string
timeout: 300000,
dataType: 'json'
}).then(function (data) {
if (data.errno) {
return;
}
console.time('render');
var dataList = data.data.list;
var len = dataList.length;
var ulContainer = document.getElementById('expList');
var str = ' ';
var frag = document.createElement('div');
var tpl = __inline('content.tmpl');
// The template itself iterates over dataList, so a single call renders all items
str = tpl({dataList: dataList});
frag.innerHTML = str;
ulContainer.appendChild(frag);
// Update the cache
updateItemCache(frag);
// Release the loading lock
fetching = false;
// Force-trigger the scroll handler
handleScroll(null, true);
page++;
console.timeEnd('render');
}, function (xhr, type) {
console.log('Refresh:Ajax Error! ');
});
}
Cache object
As mentioned in the variable list above, there are two cache structures:
1) slideCache: an array caching the DOM content generated from the most recently loaded data:
slideCache = [
    {
        id: "s-97r45",
        img: /* img DOM node */,
        node: /* parent container DOM node, like <div id="s-<#=v.eid#>" class="slide"></div> */,
        src: /* real image URL taken from data-src */
    },
    ...
]
slideCache is updated by the updateItemCache function and is mainly used by the lazy loader when assigning src. This lets us only write to the DOM, never read from it.
2) itemMap: caches the offsetTop and offsetHeight of DOM nodes, indexed by node id. Storage format:
itemMap = {
    "s-97r45": {
        node: /* DOM node, like <div id="s-<#=v.eid#>" class="slide"></div> */,
        offTop: 300,
        offsetHeight: 90
    }
}
itemMap is both read and updated inside the isVisible method. This greatly reduces DOM reads when determining visibility.
Lazy loader
In the scroll handler above, we called the handleDefer function. Let’s look at the implementation of this function:
function handleDefer () {
// Time record
console.time('defer');
// Get the DOM cache
var list = slideCache;
// Reuse a single variable for the current item's img rather than declaring one on every loop iteration — squeezing out performance
var thisImg;
for (var i = 0, len = list.length; i < len; i++) {
thisImg = list[i].img; // Here we are reading from memory instead of DOM nodes
var deferSrc = list[i].src; // Here we are reading from memory instead of DOM nodes
// Determine if the element is visible
if (isVisible(list[i].id)) {
// This function is the image onload logic
var handler = function () {
var node = thisImg;
var src = deferSrc;
// Return a closure capturing this item's img node and real src
return function () {
node.src = src;
node.style.opacity = 1;
};
};
var img = new Image();
img.onload = handler();
img.src = list[i].src;
}
}
console.timeEnd('defer');
}Copy the code
The idea is to loop through each item in the DOM cache. In the loop, determine whether each item has entered the isVisible region. If the isVisible area is entered, a true SRC assignment is made to the current item and opacity is set to 1.
Updating the DOM cache generated from pulled data
For each element with class "slide", we cache the corresponding DOM node, its id, and its child img node:
function updateItemCache (node) {
var list = node.querySelectorAll('.slide');
var len = list.length;
slideCache = [];
var obj;
for (var i=0; i < len; i++) {
obj = {
node: list[i],
id: list[i].getAttribute('id'),
img: list[i].querySelector('.img')
}
obj.src = obj.img.getAttribute('data-src');
slideCache.push(obj);
}
}
The visible-region check: isVisible
The function takes a DOM id and evaluates visibility. If the condition is hard to follow, draw a diagram by hand — or if you're too lazy to draw one, see the (admittedly a little ugly) diagram in the original post…
function isVisible (id) {
var offTop;
var offsetHeight;
var node;
// If this node's geometry is already cached, read it from memory — whether the item has been lazily loaded on screen or is off-screen in the DOM awaiting its image
if (itemMap[id]) {
// Get offTop, offsetHeight
offTop = itemMap[id].offTop;
offsetHeight = itemMap[id].offsetHeight;
}
else {
// Look up the node and compute its geometry
node = document.getElementById(id);
// offsetHeight is the element's own height
offsetHeight = parseInt(node.offsetHeight);
// offsetTop is the distance from the element's top edge to its offset parent
offTop = parseInt(node.offsetTop);
// Cache into itemMap so future checks read from memory instead of the DOM
itemMap[id] = {
node: node,
offTop: offTop,
offsetHeight: offsetHeight
};
}
if (offTop + offsetHeight > topViewPort && offTop < bottomViewPort) {
return true;
}
else {
return false;
}
}
Performance gains
In the code above, there are two main performance considerations:
1) Lazy-load processing time
2) DOM rendering time
The overall revenue is as follows:
Lazy-load pass before optimization: average 49.2 ms, median 43 ms;
Lazy-load pass after optimization: average 17.1 ms, median 11 ms;
Rendering before optimization: average 2129.6 ms, median 2153.5 ms;
Rendering after optimization: average 120.5 ms, median 86 ms.
Keep thinking
All of this is still far from the "ultimate" performance experience. We did all sorts of DOM caching, mapping, and lazy loading; if we keep analyzing edge cases, there is much more we could do, such as DOM recycling, tombstones, and scroll anchoring. A lot of this borrows from client-side (native) development ideas, and the ever-ahead-of-their-time Google developers have their own implementations — for example, in a post from July last year, "Complexities of an Infinite Scroller". Here is an introduction at the level of principle (not code).
DOM recycling
The idea is that instead of actively creating every needed DOM node with createElement (such as for the content pulled in on each load), we recycle the nodes that have scrolled out of the window and are not needed for the time being. As shown in the figure in the original post:
While DOM nodes themselves are not huge resource hogs, they aren't free either: each node adds some memory, layout, style, and paint cost. It is also important to note that every reflow or restyle (the process triggered by adding or removing nodes and styles) in a large DOM is expensive. So DOM recycling means keeping the number of DOM nodes low, which speeds up all of the processing above.
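As a rough sketch of the pooling idea (my own illustration — the factory and pool API here are assumptions, not Google's or anyone's actual code): nodes scrolled out of view are parked and handed back out for new items instead of being created from scratch.

```javascript
// A minimal recycling pool. createNode is a factory, e.g. a function that
// builds one empty block-item via document.createElement.
function createNodePool(createNode) {
    var pool = [];
    return {
        // Take a recycled node if one is available, otherwise create a new one
        acquire: function () {
            return pool.length ? pool.pop() : createNode();
        },
        // Park a node that has scrolled out of the window for later reuse
        release: function (node) {
            pool.push(node);
        },
        size: function () {
            return pool.length;
        }
    };
}
```

The scroll handler would call release() on items leaving the window and acquire() (then refill the node's content) for items entering it, keeping the total node count roughly constant.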
As far as I can tell, few real product lines use this technique, probably because the ratio of implementation complexity to benefit is not very high. However, Taobao's mobile search page implements a similar idea, as in the figure below (see the original post):
A ".page-container .j-pageContainer_<pageNumber>" div is generated each time data loads. After scrolling past several screens, the child nodes of divs that have moved out of the window are removed, and — to keep the scroll bar proportions correct and prevent height collapse — each emptied div explicitly declares its height (e.g. 2956px).
Tombstones
As mentioned earlier, if network latency is high and the user scrolls fast, the user can easily race far past the DOM nodes we've rendered, which makes for a very poor experience. In this case we need tombstone placeholders: show a tombstone entry until the data arrives, then replace it. Tombstones can have their own separate DOM element pool, and you can design nice transitions for the swap. This technique has already appeared on some of the "tech-leading" sites abroad. Here's an example from Facebook (image in the original post):
I have also seen a similar scheme in the Jianshu app — though that one is native…
Scroll anchoring
Scroll anchoring is needed in two cases: when a tombstone is replaced with real content, and when the window size changes (including when the device is rotated). In both cases, the scroll position must be adjusted so the content the user is looking at doesn't visually jump.
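The adjustment itself is simple arithmetic. A hedged sketch (function name and signature are my own, not from the referenced article):

```javascript
// When an item changes height (e.g. a tombstone is swapped for real
// content), shift the scroll position by the height delta — but only if the
// item sits entirely above the viewport, since only those items push the
// visible content around.
function anchorScroll(scrollY, itemTop, oldHeight, newHeight) {
    if (itemTop + oldHeight <= scrollY) {
        return scrollY + (newHeight - oldHeight);
    }
    return scrollY;
}
```

The caller would apply the returned value via window.scrollTo(0, newScrollY) immediately after the DOM swap, inside the same frame, so the user never perceives the shift.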
Conclusion
A technically simple problem becomes a complex one when you want it to deliver a high-performance user experience — this article is a case in point. As Progressive Web Apps become first-class citizens on mobile devices (will they?), high-performance experiences will matter more and more, and developers must keep exploring patterns for dealing with performance constraints. Of course, these designs should be built on mature technology.
This article references "Building Touch Interfaces with HTML5" by Stephen Woods, a Flickr engineer formerly of Yahoo, and Wang Peng's partial translation of "Complexities of an Infinite Scroller".