1. What is it?
LruCache is a data storage container that can be used as a data cache: it keeps the most recently used entries and evicts the least recently used ones first when it runs out of room. LRU stands for Least Recently Used.
2. How to use it?
2.1 Simple Use
Taking Kotlin as an example, the simplest usage looks like this:
var lruCache: LruCache<Int, String>? = null
val maxCacheSize = 8

fun main() {
    lruCache = LruCache(maxCacheSize)
    for (index in 0..10) {
        lruCache?.put(index, "$index")
    }
    print("\n===end===")
}
The output is as follows; as you can see, the data added to the LruCache earliest is evicted.
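A quick way to verify this in code (an extra line, not part of the original example): snapshot() returns a copy of the cache ordered from least to most recently used, so printing its keys after the loop shows which entries survived.

// With maxCacheSize = 8 and keys 0..10 inserted, keys 0, 1 and 2 should be gone.
println(lruCache?.snapshot()?.keys)  // expected: [3, 4, 5, 6, 7, 8, 9, 10]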
2.2 Overriding sizeOf
In the example above, each stored object has a default size of 1. In real scenarios it is usually the memory footprint that matters, so you need to override sizeOf to tell LruCache the size of each stored object.
var imgCache: LruCache<Int, Bitmap>? = null
var maxImgCacheSize: Int = 3072

@Test
fun test() {
    imgCache = object : LruCache<Int, Bitmap>(maxImgCacheSize) {
        override fun sizeOf(key: Int, value: Bitmap): Int {
            // Measure each entry by its pixel count.
            return value.width * value.height
        }
    }
    val tempBm50 = Bitmap.createBitmap(50, 50, Bitmap.Config.ARGB_8888)
    imgCache?.put(0, tempBm50)  // size is now 2500
    val tempBm40 = Bitmap.createBitmap(40, 40, Bitmap.Config.ARGB_8888)
    imgCache?.put(1, tempBm40)  // 2500 + 1600 > 3072, so the 50x50 bitmap is evicted
    print("\n===end===")
}
The output looks like this: when the 40x40 bitmap is added, the total size (2500 + 1600) exceeds maxImgCacheSize, so the 50x50 bitmap is evicted.
3. Who uses it, and what does it use?
3.1 Who uses it?
Glide's memory cache is implemented with LruCache; the concrete implementation class is LruResourceCache. DiskLruCache follows a similar caching strategy.
3.2 What does it use?
LruCache does not contain much code; internally it holds a LinkedHashMap.
4. See how it works
LruCache inherits directly from Object. Its core attribute is a map of type LinkedHashMap.
4.1 LinkedHashMap
LinkedHashMap inherits from HashMap and implements the Map interface.
4.1.1 HashMap structure diagram
HashMap is a composite data structure of an array plus linked lists. Since JDK 1.8, when a bucket's linked list grows longer than 8 entries it is converted to a red-black tree, which keeps operations such as lookup and insertion efficient.
4.1.2 Structure diagram of LinkedHashMap
LinkedHashMap inherits from HashMap, so the basic structure is still an array combined with singly linked lists. The difference is that all entries are additionally linked together by a doubly linked list.
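A minimal sketch (plain JVM Kotlin, not from the article) that makes this extra doubly linked list visible: by default LinkedHashMap iterates in insertion order, and with accessOrder enabled it iterates in access order, which is exactly the property LruCache relies on (see 4.1.4).

fun main() {
    // Insertion order is preserved by the doubly linked list.
    val linked = LinkedHashMap<String, Int>()
    linked["banana"] = 1; linked["apple"] = 2; linked["cherry"] = 3
    println(linked.keys)              // [banana, apple, cherry]

    // With accessOrder = true, reading an entry moves it to the tail.
    val accessOrdered = LinkedHashMap<String, Int>(16, 0.75f, true)
    accessOrdered.putAll(linked)
    println(accessOrdered["banana"])  // reading "banana" moves it to the end
    println(accessOrdered.keys)       // [apple, cherry, banana]
}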
4.1.3 Adding elements
Calling put goes through the superclass HashMap's put method, which calls putVal; one of the key hook methods invoked there, afterNodeAccess, is overridden by LinkedHashMap.
afterNodeAccess moves the accessed node to the tail of the doubly linked list and reassigns the tail field. This is how LruCache keeps the most recently used element at the end, with the least recently used one at the head, ready for eviction.
void afterNodeAccess(Node<K,V> e) { // move node to last
    LinkedHashMapEntry<K,V> last;
    if (accessOrder && (last = tail) != e) {
        LinkedHashMapEntry<K,V> p =
            (LinkedHashMapEntry<K,V>)e, b = p.before, a = p.after;
        p.after = null;
        if (b == null)
            head = a;
        else
            b.after = a;
        if (a != null)
            a.before = b;
        else
            last = b;
        if (last == null)
            head = p;
        else {
            p.before = last;
            last.after = p;
        }
        tail = p;
        ++modCount;
    }
}
4.1.4 Getting the element
When accessOrder is true, get also calls afterNodeAccess, so reading an entry moves it to the tail of the doubly linked list as well.
public V get(Object key) {
    Node<K,V> e;
    if ((e = getNode(hash(key), key)) == null)
        return null;
    if (accessOrder)
        afterNodeAccess(e);
    return e.value;
}
In the constructor of LruCache, accessOrder is set to true.
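For reference, the LruCache constructor in the AOSP source looks roughly like this; the third LinkedHashMap argument is accessOrder.

public LruCache(int maxSize) {
    if (maxSize <= 0) {
        throw new IllegalArgumentException("maxSize <= 0");
    }
    this.maxSize = maxSize;
    this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
}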
4.1.5 Deleting elements
Calling remove goes through HashMap's remove method, which calls removeNode; removeNode in turn calls afterNodeRemoval, which LinkedHashMap overrides.
afterNodeRemoval is simple: it unlinks the node from the doubly linked list and, if head or tail pointed to it, moves them to the neighboring node.
void afterNodeRemoval(Node<K,V> e) { // unlink
    LinkedHashMapEntry<K,V> p =
        (LinkedHashMapEntry<K,V>)e, b = p.before, a = p.after;
    p.before = p.after = null;
    if (b == null)
        head = a;
    else
        b.after = a;
    if (a == null)
        tail = b;
    else
        a.before = b;
}
4.1.6 Capacity expansion
This is HashMap's resize method: when the number of entries exceeds the container's threshold, the table is generally doubled in size. For example, with the default capacity of 16 and load factor of 0.75 the threshold is 12, so adding the 13th entry grows the table to 32.
4.2 Adding elements to LruCache
LruCache's put calls LinkedHashMap's put and also calls sizeOf internally to compute the size of the element being added. If sizeOf is not overridden, each element has a default size of 1.
protected int sizeOf(@NonNull K key, @NonNull V value) {
    return 1;
}
Finally, trimToSize is called to decide whether the least recently used elements need to be evicted and, if so, to evict them. Whenever an element is evicted, entryRemoved is called; the caller can override it to be notified of the eviction (see the sketch after the source below).
public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }

            if (size <= maxSize || map.isEmpty()) {
                break;
            }

            Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }

        entryRemoved(true, key, value, null);
    }
}
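A small sketch of the entryRemoved hook mentioned above (the Connection class and its close() method are made-up placeholders): it lets the cache release a resource whenever an entry is evicted or replaced.

import android.util.LruCache

// Illustrative placeholder for something that must be released when dropped.
class Connection(val name: String) {
    fun close() = println("closing $name")
}

val connectionCache = object : LruCache<String, Connection>(4) {
    override fun entryRemoved(evicted: Boolean, key: String, oldValue: Connection, newValue: Connection?) {
        // evicted == true  -> trimToSize evicted it to stay under maxSize
        // evicted == false -> remove() was called, or put() replaced it (newValue != null)
        oldValue.close()
    }
}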
4.3 Getting elements from LruCache
When the key is not found in the LinkedHashMap, LruCache calls create(key) to try to create a value for that key; the default implementation simply returns null. If create returns a value, it is put into the LinkedHashMap (a small override sketch follows the source below).
public final V get(@NonNull K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        mapValue = map.get(key);
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        missCount++;
    }

    /*
     * Attempt to create a value. This may take a long time, and the map
     * may be different when create() returns. If a conflicting value was
     * added to the map while create() was working, we leave that value in
     * the map and release the created value.
     */
    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        mapValue = map.put(key, createdValue);

        if (mapValue != null) {
            // There was a conflict so undo that last put
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}
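And a quick sketch of the create() branch described above (the squares cache is a made-up example where values can always be recomputed from the key): when create returns a non-null value on a miss, get stores it and returns it.

import android.util.LruCache

val squares = object : LruCache<Int, Long>(100) {
    override fun create(key: Int): Long = key.toLong() * key
}

fun demo() {
    println(squares.get(7))  // miss: create(7) runs, 49 is cached and returned
    println(squares.get(7))  // hit: served from the cache, create is not called again
}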
4.4 Removing elements from LruCache
Calling remove also invokes entryRemoved afterwards, again so that the caller can be notified that the element was removed.
public final V remove(@NonNull K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V previous;
    synchronized (this) {
        previous = map.remove(key);
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) {
        entryRemoved(false, key, previous, null);
    }

    return previous;
}
5. Application scenarios
LruCache suits any scenario that needs an LRU caching policy, such as View buffer pools, image resource caches, and so on.
The caller in the figure could be an ImageView, a RecyclerView, a Fragment, and so on. LruCache caches and filters the source data so that the caller can immediately use the most frequently needed data, improving the program's efficiency.
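A minimal sketch of that flow, assuming bitmaps keyed by URL and a caller-supplied loadFromDiskOrNetwork function (neither is part of the original article); the one-eighth-of-heap budget is a common convention, not a requirement.

import android.graphics.Bitmap
import android.util.LruCache

object ImageMemoryCache {
    // Use one eighth of the available heap, measured in kilobytes.
    private val maxSizeKb = (Runtime.getRuntime().maxMemory() / 1024 / 8).toInt()

    private val cache = object : LruCache<String, Bitmap>(maxSizeKb) {
        override fun sizeOf(key: String, value: Bitmap): Int = value.byteCount / 1024
    }

    // The caller (an ImageView, a RecyclerView adapter, a Fragment, ...) asks the
    // cache first and only falls back to the slower source on a miss.
    fun load(url: String, loadFromDiskOrNetwork: (String) -> Bitmap): Bitmap =
        cache.get(url) ?: loadFromDiskOrNetwork(url).also { cache.put(url, it) }
}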
6. Points to note
6.1 Executing the Java main method
The code examples in this article were run as androidTest tests; if you want to run them directly as plain Java/Kotlin code, you need the extra configuration shown below. Of course, you can also run snippets from Tools -> Groovy Console.
<?xml version="1.0" encoding="utf-8"?>
<project version="4">
  <component name="GradleSettings">
    <option name="linkedExternalProjectsSettings">
      <GradleProjectSettings>
        <option name="delegatedBuild" value="false" />
        <option name="distributionType" value="DEFAULT_WRAPPED" />
        <option name="externalProjectPath" value="$PROJECT_DIR$" />
        <option name="modules">
          <set>
            <option value="$PROJECT_DIR$" />
            <option value="$PROJECT_DIR$/app" />
          </set>
        </option>
        <option name="resolveModulePerSourceSet" value="false" />
        <option name="testRunner" value="PLATFORM" />
      </GradleProjectSettings>
    </option>
  </component>
</project>
6.2 Multi-Threaded Scenario
LruCache already takes multi-threaded use into account: as the source above shows, its methods guard the map with synchronized blocks. If you use a LinkedHashMap on its own, you have to handle concurrent access yourself; the problems HashMap has under concurrency also apply to LinkedHashMap.
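If you do need a bare LinkedHashMap shared between threads, a minimal sketch of one common approach is to wrap it; note that with accessOrder = true even get() is a structural modification, so every access must be synchronized.

import java.util.Collections

// Every call through the wrapper, including get(), locks the same mutex.
val sharedLru: MutableMap<String, Int> =
    Collections.synchronizedMap(LinkedHashMap<String, Int>(16, 0.75f, true))

// Iteration is not covered by the wrapper; hold the lock manually while iterating.
fun snapshotKeys(): List<String> = synchronized(sharedLru) { sharedLru.keys.toList() }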