One, introduction

This is a summary of LruCache; the write-up has been put off for quite a while, so I'm taking a moment to get it down.

I believe many readers are like me: we originally used LruCache as one link in an ImageLoader-style image cache, but while using it we never looked into how its “recently used” behavior is actually implemented in the source code; we only cared that it has this behavior. So the work this article sets out to do is:

  • explain the “recently used” behavior through basic usage;
  • analyze how “recently used” is implemented, with the help of the source code.

Two, from use to source code

Let’s start with the “recently used” behavior. Picture the cache as a queue where the head is the most recently used end and the tail the least recently used: the element inserted first drifts toward the tail, the element inserted last sits at the head, and whenever we use an element it moves back to the head. Once the capacity limit is reached, inserting another element removes the least recently used element at the tail.
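A minimal sketch of this ordering, using a plain java.util.ArrayDeque purely as a model of the behavior (this is not how LruCache is implemented):

    // Toy model: head = most recently used, tail = next to be evicted
    final int capacity = 3;
    Deque<Integer> order = new ArrayDeque<>();

    order.addFirst(1);
    order.addFirst(2);
    order.addFirst(3);
    System.out.println(order);     // [3, 2, 1] -> 1 is now closest to eviction

    order.remove(1);               // "using" element 1 ...
    order.addFirst(1);             // ... moves it back to the head
    System.out.println(order);     // [1, 3, 2]

    order.addFirst(4);             // inserting beyond the capacity ...
    if (order.size() > capacity) {
        System.out.println("evict: " + order.removeLast());   // ... evicts the tail: 2
    }
    System.out.println(order);     // [4, 1, 3]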

Normally we use LruCache like this. Assuming a memory budget of 5 MB, we can write:

    // Limit the cache to 5 MB (maxSize and sizeOf must use the same unit; here both are in bytes)
    final int maxSize = 5 * 1024 * 1024;
    LruCache<String, Bitmap> lruCache = new LruCache<String, Bitmap>(maxSize) {
        @Override
        protected int sizeOf(String key, Bitmap value) {
            // Report the size of one entry, in bytes
            return value.getRowBytes() * value.getHeight();
        }

        @Override
        protected void entryRemoved(boolean evicted, String key, Bitmap oldValue, Bitmap newValue) {
            // Recycling here while a list is scrolling quickly will cause memory churn, so in practice
            // you may want to collect the bitmaps that need cleaning up and recycle them at a calmer moment
            if (!oldValue.isRecycled()) {
                oldValue.recycle();
            }
        }
    };
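With the cache in place, usage is just put and get. The url key and the downloadBitmap helper below are purely illustrative names, not part of LruCache:

    String url = "https://example.com/avatar.png";    // hypothetical key
    Bitmap bitmap = lruCache.get(url);
    if (bitmap == null) {
        bitmap = downloadBitmap(url);                  // hypothetical loader
        if (bitmap != null) {
            lruCache.put(url, bitmap);
        }
    }
    // Every successful get() refreshes the entry's "recently used" position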

Following its constructor, we can see that it creates a LinkedHashMap and builds everything on this single data structure:

    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

Let’s verify the “recently used” feature described earlier:


    LinkedHashMap<Integer, Integer> linkedHashMap = new LinkedHashMap<>(0, 0.75f, true);
    linkedHashMap.put(1, 1);
    linkedHashMap.put(2, 2);
    linkedHashMap.put(3, 3);
    linkedHashMap.put(4, 4);
    linkedHashMap.put(5, 5);

    for (Map.Entry<Integer, Integer> e : linkedHashMap.entrySet()) {
        Log.i(TAG, e.getKey() + "<->" + e.getValue());
    }
    // Simulate using elements 4 and 3, in that order
    linkedHashMap.get(4);
    linkedHashMap.get(3);
    Log.i(TAG, "-----------------");
    for (Map.Entry<Integer, Integer> e : linkedHashMap.entrySet()) {
        Log.i(TAG, e.getKey() + "<->" + e.getValue());
    }
    // Simulate hitting the memory limit: the least recently used element would be deleted
    Log.i(TAG, "delete: " + linkedHashMap.keySet().iterator().next());


The output is as follows:

    1<->1
    2<->2
    3<->3
    4<->4
    5<->5
    -----------------
    1<->1
    2<->2
    5<->5
    4<->4
    3<->3
    delete: 1    // the element that would be removed

As the output shows, this matches what we described earlier. But that was only a simulation, so let's look at what LruCache actually does for this part. When an element is put in and the space limit is exceeded, the least recently used entry is removed:

public final V put(K key, V value) {
    // Key and value cannot be null
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }
    V previous;
    synchronized (this) {
        // Count how many elements have been put into the container
        putCount++;
        // Add this entry's size to the space used by the container
        size += safeSizeOf(key, value);
        previous = map.put(key, value);
        // If an element already existed for this key, it has been replaced, so subtract its size again
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }
    // Notify that the old element for this key has been replaced
    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }
    // Evict the least recently used entries if the capacity is exceeded
    trimToSize(maxSize);
    return previous;
}

Following put, we reach trimToSize. Note that it is not the same as the trimToSize found on some collections: ArrayList.trimToSize, for example, only releases unused backing capacity and never removes elements, whereas LruCache.trimToSize actually evicts entries until the cache fits within maxSize. The names are similar, but the behavior is not.
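A quick illustration of the difference, using the standard JDK ArrayList method:

    // java.util.ArrayList: trimToSize() only shrinks the backing array, it never removes elements
    ArrayList<String> list = new ArrayList<>(100);   // backing capacity for 100 elements
    list.add("a");
    list.add("b");
    list.trimToSize();                               // capacity shrinks to 2, both elements survive
    // LruCache.trimToSize(maxSize), by contrast, keeps evicting entries until size <= maxSize

With that distinction out of the way, here is the LruCache.trimToSize source: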

private void trimToSize(int maxSize) {

    while (true) {
        K key;
        V value;
        synchronized (this) {
            // 1. The size bookkeeping is inconsistent (a broken sizeOf implementation)
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }
            // 2. There is still enough space, so nothing needs to be evicted
            if (size <= maxSize) {
                break;
            }
            // 3. Capacity exceeded
            Map.Entry<K, V> toEvict = null;
            // Walk to the last entry in iteration order and mark it for eviction
            for (Map.Entry<K, V> entry : map.entrySet()) {
                toEvict = entry;
            }
            // If the container has no elements, exit directly
            if (toEvict == null) {
                break;
            }
            key = toEvict.getKey();
            value = toEvict.getValue();
            // Otherwise remove the entry and update the space bookkeeping
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }
        // Notify that the entry has been evicted from the container so cleanup can be done
        entryRemoved(true, key, value, null);
    }
}
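To see trimToSize at work end to end, a tiny cache whose entries each count as 1 (the default sizeOf returns 1) is enough. This is only a sketch, and the comments assume the standard framework behavior of evicting the least recently used entry:

    LruCache<String, String> cache = new LruCache<String, String>(3) {   // at most 3 entries
        @Override
        protected void entryRemoved(boolean evicted, String key, String oldValue, String newValue) {
            if (evicted) {
                Log.i(TAG, "evicted by trimToSize: " + key);
            }
        }
    };

    cache.put("a", "1");
    cache.put("b", "2");
    cache.put("c", "3");
    cache.get("a");        // "a" becomes the most recently used entry
    cache.put("d", "4");   // capacity exceeded, so "b", the least recently used entry, is evicted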

About LinkedHashMap

Note: the LinkedHashMap and HashMap discussed in this article are based on JDK 8; on Android, that corresponds to Android N and later.

public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V>

LinkedHashMap extends HashMap, so let's first review the characteristics of HashMap and of its old predecessor Hashtable:

Internal structure:

  • When collisions occur (keys whose hashes are equal but which are not equals) and the amount of colliding data is small (fewer than 8 entries in a bucket), both HashMap and Hashtable resolve them with an array + singly linked list structure;
  • When a bucket grows large (8 or more entries), HashMap, unlike Hashtable, switches that bucket to an array + red-black tree structure;
  • HashMap is not thread-safe, while Hashtable is (its public methods are synchronized).

Those are the main differences between HashMap and Hashtable, and they are why HashMap wins on both performance and range of applicable scenarios; Hashtable can essentially be abandoned. When thread safety is required, use ConcurrentHashMap instead.
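If a thread-safe map really is needed, the usual modern choices look something like this (a small sketch, unrelated to the cache code above):

    // java.util.concurrent.ConcurrentHashMap: fine-grained internal locking, good under contention
    Map<String, Bitmap> concurrent = new ConcurrentHashMap<>();
    // java.util.Collections: a simple wrapper that synchronizes every call on a single lock
    Map<String, Bitmap> wrapped = Collections.synchronizedMap(new HashMap<String, Bitmap>());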

Going back to HashMap, why switch between array + singly linked list and array + red-black tree at all? The reason is simple: when the amount of colliding data is small, a singly linked list is efficient enough to operate on, and its simple structure avoids extra space overhead; accordingly, a red-black tree is used when a bucket grows large, because a red-black tree guarantees a worst-case time complexity of O(log n) per operation, which suits large amounts of data and frequent operations.
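For reference, the thresholds in the JDK 8 HashMap source are roughly the following (treeification additionally requires the table itself to have at least 64 buckets):

    // Constants from java.util.HashMap (JDK 8)
    static final int TREEIFY_THRESHOLD = 8;      // a bucket's list becomes a tree once it reaches this length
    static final int UNTREEIFY_THRESHOLD = 6;    // a tree shrinks back to a list below this length
    static final int MIN_TREEIFY_CAPACITY = 64;  // treeify only when the table has at least this many buckets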

OK, with that review in mind, we can take a look at the new ordering rules LinkedHashMap adds.

Its three-argument constructor takes initialCapacity, loadFactor and accessOrder; the last parameter, accessOrder, controls whether the “recently used” ordering is enabled:

    public LinkedHashMap(int initialCapacity,
                         float loadFactor,
                         boolean accessOrder) {
        super(initialCapacity, loadFactor);
        this.accessOrder = accessOrder;
    }
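The third parameter is the interesting one. A quick comparison of accessOrder = false (plain insertion order, the default in the other constructors) and accessOrder = true makes its effect clear:

    // Insertion order: get() does not change the iteration order
    LinkedHashMap<Integer, String> insertion = new LinkedHashMap<>(0, 0.75f, false);
    insertion.put(1, "a");
    insertion.put(2, "b");
    insertion.get(1);
    Log.i(TAG, String.valueOf(insertion.keySet()));   // [1, 2]

    // Access order: get() moves the accessed entry to the tail of the iteration order
    LinkedHashMap<Integer, String> access = new LinkedHashMap<>(0, 0.75f, true);
    access.put(1, "a");
    access.put(2, "b");
    access.get(1);
    Log.i(TAG, String.valueOf(access.keySet()));      // [2, 1]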

Assuming access ordering is turned on, fetching an element looks like this:

    public V get(Object key) {
        Node<K,V> e;
        if ((e = getNode(hash(key), key)) == null)
            return null;
        if (accessOrder)  
            afterNodeAccess(e);  
        return e.value;
    }

The nodes here are doubly linked list nodes that extend HashMap.Node, as follows:

    static class LinkedHashMapEntry<K,V> extends HashMap.Node<K,V> {
        LinkedHashMapEntry<K,V> before, after;
        LinkedHashMapEntry(int hash, K key, V value, Node<K,V> next) {
            super(hash, key, value, next);
        }
    }

Note afterNodeAccess, shown below: the accessed node is moved to the tail, so in LinkedHashMap's own layout the tail holds the most recently used node and the head holds the eldest one (the mirror image of the conceptual picture in section two, where the head was drawn as the most recent end).

void afterNodeAccess(Node<K,V> e) {
        LinkedHashMapEntry<K,V> last;
        if (accessOrder && (last = tail) != e) {   // Only act if ordering is on and the node is not already the tail
            LinkedHashMapEntry<K,V> p = (LinkedHashMapEntry<K,V>) e,   // The current node
                    b = p.before,   // Its previous node
                    a = p.after;    // Its next node
            p.after = null;   // Cut the link from this node to its successor

            // 1. Detach the node from the list
            if (b == null)    // No node before it, so it was the head; the next node becomes the head
                head = a;
            else              // There is a node before it; link the previous node straight to the next one
                b.after = a;

            if (a != null)    // Because this is a doubly linked list, the successor's back-link must be fixed up too
                a.before = b;
            else              // The node was the tail, so its previous node b is now the last node
                last = b;

            // 2. Re-attach the node at the tail
            if (last == null)   // The list is otherwise empty, so this node also becomes the head
                head = p;
            else {              // Otherwise hook the node up after the current last node
                p.before = last;
                last.after = p;
            }
            tail = p;   // Reassign the tail
            ++modCount;
        }
    }

Compared with HashMap, LinkedHashMap threads its entries onto a doubly linked list rather than relying only on the singly linked bucket chains, and it adds ordering rules that let the entries be iterated in insertion or access order.
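As an aside, LinkedHashMap can be turned into a simple LRU cache all by itself by overriding the standard JDK hook removeEldestEntry. The sketch below shows that pattern; LruCache does not use this hook, presumably because its size is measured through sizeOf rather than by entry count:

    // A count-bounded LRU map built directly on LinkedHashMap
    class SimpleLruMap<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        SimpleLruMap(int maxEntries) {
            super(0, 0.75f, true);   // enable access ordering
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Called by put()/putAll(); returning true removes the eldest entry
            return size() > maxEntries;
        }
    }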

Three, summary

Well, having reached the end, I find there isn't much left to say… hehe.

This article mainly analyzed the “recently used” principle behind LruCache, then reviewed some knowledge related to LinkedHashMap, such as how it compares with HashMap and Hashtable, and looked specifically at the ordering rules it applies when getting elements.

Overall, LruCache acts as a thin proxy layer over LinkedHashMap: it keeps track of the cache size itself and delegates the actual storage operations to the LinkedHashMap.