What is LRU?

Before we can understand LruCache, we first need to know what LRU is.

LRU stands for Least Recently Used, a commonly used page replacement algorithm: it evicts the page that has gone unused for the longest time. The algorithm gives each page an access field that records the time T elapsed since the page was last visited. When a page must be evicted, the page with the largest T value, that is, the least recently used page, is the one chosen.

After a furious round of Baidu searching, let’s illustrate how LRU works.

When the memory is full, LRU replacement splits into two cases:

(1) The key does not exist in the cache yet: the least recently used entry is evicted, and the new item is inserted as the most recently used one.

(2) The key already exists in the cache: no eviction is needed; the entry is simply refreshed and moved to the most recently used position.

Usage and results

Import Glide’s library into the project and call its internal LruCache to see the effect.

LruCache<String, Integer> lruCache = new LruCache<>(2);
lruCache.put("1", 1);
lruCache.put("2", 2);
lruCache.put("1", 1);
lruCache.put("3", 3);
System.out.println(lruCache.get("1"));
System.out.println(lruCache.get("2"));
System.out.println(lruCache.get("3"));

Create a cache with capacity 2 (never mind its internal structure for now), store the data with put(), fetch each entry once with get(), and look at the results.

Oh my God!! Where did "2" go? What’s going on here? Those of you who looked carefully at the illustration above already know the answer, but let’s go into Glide’s library and see how it works.
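If Glide isn’t on your classpath, you can reproduce exactly the same behavior with plain java.util.LinkedHashMap. The sketch below (the names LruDemo and lruMap are mine, not Glide’s): accessOrder = true keeps entries ordered from least to most recently used, and removeEldestEntry() evicts the head once the size limit is exceeded.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // Build a size-bounded LRU map: accessOrder=true keeps iteration order
    // from least to most recently used; removeEldestEntry evicts the head.
    static <K, V> Map<K, V> lruMap(final int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = lruMap(2);
        cache.put("1", 1);   // [1]
        cache.put("2", 2);   // [1, 2]
        cache.put("1", 1);   // re-putting an existing key counts as an access: [2, 1]
        cache.put("3", 3);   // full, so the head "2" is evicted: [1, 3]
        System.out.println(cache.get("1")); // 1
        System.out.println(cache.get("2")); // null, "2" was evicted
        System.out.println(cache.get("3")); // 3
    }
}
```

Because the re-put of "1" refreshed its recency, "2" became the least recently used entry and was the one evicted when "3" arrived.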

LruCache source guide

Let’s take a look at LruCache’s member variables.

public class LruCache<T, Y> {
  // A LinkedHashMap in access order (accessOrder = true),
  // initial capacity 100, load factor 0.75:
  // internally a doubly linked list laid over a hash table
  private final Map<T, Y> cache = new LinkedHashMap<>(100, 0.75f, true);
  private final long initialMaxSize; // the initial maximum capacity
  private long maxSize;              // the current maximum capacity
  private long currentSize;          // the capacity currently in use
}

As with HashMap, there are really only three operations that matter, so I will explore LruCache through those three steps. But let’s keep one question in mind along the way: what is initialMaxSize actually for?

new LruCache<T, Y>(size)

  public LruCache(long size) {
    this.initialMaxSize = size;
    this.maxSize = size;
  }

So by this point you can already see what it’s doing: the constructor simply initializes both initialMaxSize and maxSize to the same value. On to the next step.

put(key, value)

public synchronized Y put(@NonNull T key, @Nullable Y item) {
    // getSize() returns 1 by default
    final int itemSize = getSize(item);
    // If a single item is already as large as the whole cache,
    // caching it is pointless, so nothing is stored
    // (this also means a cache constructed with size 1 can never hold anything)
    if (itemSize >= maxSize) {
      // Hook method reserved for subclasses to override
      onItemEvicted(key, item);
      return null;
    }
    // Add the item's size to the current capacity
    if (item != null) {
      currentSize += itemSize;
    }
    @Nullable final Y old = cache.put(key, item);
    if (old != null) {
      currentSize -= getSize(old);

      if (!old.equals(item)) {
        onItemEvicted(key, old);
      }
    }
    evict(); // 1 ->

    return old;
  }

// The method called directly at comment 1
private void evict() {
    trimToSize(maxSize); // 2 ->
  }

// The method called directly at comment 2
protected synchronized void trimToSize(long size) {
    Map.Entry<T, Y> last;
    Iterator<Map.Entry<T, Y>> cacheIterator;
    // While the current capacity exceeds the target capacity,
    // keep removing the eldest (least recently used) entry
    while (currentSize > size) {
      cacheIterator = cache.entrySet().iterator();
      last = cacheIterator.next();
      final Y toRemove = last.getValue();
      currentSize -= getSize(toRemove);
      final T key = last.getKey();
      cacheIterator.remove();
      onItemEvicted(key, toRemove);
    }
  }

These methods are synchronized; by comparing the current capacity against the maximum capacity, they decide whether data needs to be removed. But the question remains: what is initialMaxSize for? So far only maxSize is used to control the capacity.

get()

 public synchronized Y get(@NonNull T key) {
    return cache.get(key);
  }

That simply delegates to the LinkedHashMap, and still doesn’t tell us what initialMaxSize does.

About initialMaxSize

I won’t keep you in suspense: as far as I can tell, initialMaxSize is nearly useless. Hahahahaha!! But there is one other place where it is used.

public synchronized void setSizeMultiplier(float multiplier) {
    if (multiplier < 0) {
      throw new IllegalArgumentException("Multiplier must be >= 0");
    }
    maxSize = Math.round(initialMaxSize * multiplier);
    evict();
  }

setSizeMultiplier() rescales the maximum capacity, and because the new maxSize is always computed from the untouched initialMaxSize, repeated calls never compound. Still, I couldn’t find any other code calling it, so it appears to be a method for us to invoke directly during use, possibly multiple times, when the cache size must track something else, a scenario much like Glide loading images into its image cache. I’d be glad if a reader who knows the intended use case can enlighten me.
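Here is a self-contained sketch (my own code, not Glide’s; the class name ShrinkableCache is hypothetical) of what the initialMaxSize + multiplier pair enables: shrink the cache when memory is tight, then restore it later, without the scaling compounding across calls.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the initialMaxSize / multiplier idea: maxSize is always
// derived from the untouched initialMaxSize, so repeated multiplier changes
// never compound. Each entry counts as size 1 (like the default getSize()).
public class ShrinkableCache<K, V> {
    private final long initialMaxSize;
    private long maxSize;
    private final Map<K, V> cache = new LinkedHashMap<>(100, 0.75f, true);

    public ShrinkableCache(long size) {
        this.initialMaxSize = size;
        this.maxSize = size;
    }

    public synchronized void setSizeMultiplier(float multiplier) {
        if (multiplier < 0) {
            throw new IllegalArgumentException("Multiplier must be >= 0");
        }
        maxSize = Math.round(initialMaxSize * multiplier);
        trimToSize(maxSize);
    }

    public synchronized void put(K key, V value) {
        cache.put(key, value);
        trimToSize(maxSize);
    }

    public synchronized V get(K key) { return cache.get(key); }

    public synchronized int size() { return cache.size(); }

    private void trimToSize(long size) {
        // Remove the eldest (least recently used) entries until we fit
        while (cache.size() > size) {
            Iterator<Map.Entry<K, V>> it = cache.entrySet().iterator();
            it.next();
            it.remove();
        }
    }
}
```

With a cache of size 4, calling setSizeMultiplier(0.5f) halves the capacity and immediately evicts the two least recently used entries; a later setSizeMultiplier(1f) restores the original capacity because it is derived from initialMaxSize, not from the already-shrunken maxSize.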

LinkedHashMap

Since its basic operations are the same as HashMap’s, we won’t repeat them; let’s just look at what its node looks like.

static class LinkedHashMapEntry<K,V> extends HashMap.Node<K,V> {
        // Each entry has both a predecessor and a successor,
        // which is what makes it a doubly linked list
        LinkedHashMapEntry<K,V> before, after;
        LinkedHashMapEntry(int hash, K key, V value, Node<K,V> next) {
            super(hash, key, value, next);
        }
}

But at this point I have another question: I don’t see any data actually moving. The most recently used entry is supposed to move to the tail of the list, so where does that happen? A reasonable guess is LinkedHashMap.put(), since put() is what triggers the change.

For those of you interested in exploring on your own: don’t try to trace put() by reading it top-down, because you will only land on interface methods or abstract overrides. Set breakpoints and step through instead.

After some hard work digging through the deeper calls, you eventually arrive at these methods.

// Callbacks to allow LinkedHashMap post-actions
void afterNodeAccess(Node<K,V> p) { }      // moves the accessed node to the tail
void afterNodeInsertion(boolean evict) { }
void afterNodeRemoval(Node<K,V> p) { }

These are methods we skipped when reading HashMap; they are explicitly reserved for LinkedHashMap to override. Wow!!!!! So the move we are looking for must be in one of them.

// Near line 656 of the HashMap source, inside putVal(),
// this callback is invoked when an existing key is accessed:
afterNodeAccess(e);
// --> LinkedHashMap implements it:
// move the current node to the tail of the list,
// i.e. mark it as the most recently used entry
void afterNodeAccess(Node<K,V> e) { // move node to last
        LinkedHashMapEntry<K,V> last;
        if (accessOrder && (last = tail) != e) {
            LinkedHashMapEntry<K,V> p = (LinkedHashMapEntry<K,V>) e, b = p.before, a = p.after;
            p.after = null;
            if (b == null)
                head = a;
            else
                b.after = a;
            if (a != null)
                a.before = b;
            else
                last = b;
            if (last == null)
                head = p;
            else {
                p.before = last;
                last.after = p;
            }
            tail = p;
            ++modCount;
        }
}

Ok, so now we know how to transform the list.
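We can also watch afterNodeAccess() at work without a debugger. The demo below (my own code, not Glide’s or the JDK’s) builds a LinkedHashMap with accessOrder = true and prints the key order before and after a get(): the accessed key jumps to the tail.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    // Join the current key order into a string such as "a,b,c"
    static String keyOrder(Map<String, Integer> map) {
        return String.join(",", map.keySet());
    }

    public static void main(String[] args) {
        // accessOrder=true: afterNodeAccess moves the touched node to the tail
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        System.out.println(keyOrder(map)); // a,b,c
        map.get("a");                      // "a" moves to the tail
        System.out.println(keyOrder(map)); // b,c,a
    }
}
```

With the default accessOrder = false, the second printout would still be "a,b,c", which is exactly why LruCache passes true in its constructor call.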

Hands-on: writing our own LruCache

This is the exciting part, so let’s get our ideas in order before writing the code.

(1) What do we use as the storage container? Rebuilding the LinkedHashMap approach would take too long, so we will build the whole thing on a plain array.

(2) The key methods put(), get(), and move() all involve shifting the existing entries around.

Wow! It doesn’t look like that much work after all, so here is the first skeleton.

public class LruCache {

    private Object[] objects;
    private int maxSize;
    private int currentSize;

    public LruCache(int size) {
        objects = new Object[size];
        maxSize = size;
    }

    /**
     * Insert an item
     * @param item
     */
    public void put(Object item) { }

    /**
     * Get an item
     * @param item
     */
    public Object get(Object item) {
        return null;
    }

    /**
     * Shift the array based on the given index
     * @param index
     */
    public void move(int index) { }
}

Since any change to an array involves shifting, the shift operation is essential. Our job now is to fill in the logic: what should each operation do, and how should the corresponding shift work?

public class LruCache {

    public Object[] objects;
    private int maxSize;
    public int currentSize;

    public LruCache(int size) {
        objects = new Object[size];
        maxSize = size;
    }

    /**
     * Insert an item
     * @param item
     */
    public void put(Object item) {
        // When the item is new there are two cases:
        // 1. the container is not full yet
        // 2. the container is full
        int index = search(item);
        if (index == -1) {
            if (currentSize < maxSize) { // the container is not full
                objects[currentSize] = item;
                currentSize++;
            } else { // the container is full: evict the head, then insert
                move(0);
                objects[currentSize - 1] = item;
            }
        } else { // already present: refresh its position
            move(index);
        }
    }

    /**
     * Get an item
     * @param item
     */
    public Object get(Object item) {
        int index = search(item);
        return index == -1 ? null : objects[index];
    }

    /**
     * Shift the entries after the given index one slot forward
     * and move the entry at that index to the tail
     * @param index
     */
    public void move(int index) {
        Object temp = objects[index];
        for (int i = index; i < currentSize - 1; i++) {
            objects[i] = objects[i + 1];
        }
        objects[currentSize - 1] = temp;
    }

    /**
     * Return the index of the item, or -1 if it is absent
     * @param item
     * @return
     */
    private int search(Object item) {
        for (int i = 0; i < currentSize; i++) {
            if (item.equals(objects[i])) return i;
        }
        return -1;
    }
}

Because the code above is commented in detail, it only took me about twenty minutes to write, and I hope it helps you get up to speed just as quickly. A test sample follows to close out this topic.
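The test sample below replays the same 1, 2, 1, 3 sequence we fed Glide’s cache at the start against our array-based implementation. The class is repeated here inside a wrapper (the name LruCacheTest and the nested-class layout are mine) purely so the sample compiles on its own.

```java
public class LruCacheTest {
    // The array-backed cache from this article: index 0 is the least
    // recently used slot, the last filled slot is the most recently used.
    static class LruCache {
        private final Object[] objects;
        private final int maxSize;
        private int currentSize;

        LruCache(int size) {
            objects = new Object[size];
            maxSize = size;
        }

        void put(Object item) {
            int index = search(item);
            if (index == -1) {
                if (currentSize < maxSize) {  // not full: append
                    objects[currentSize++] = item;
                } else {                      // full: evict head, then append
                    move(0);
                    objects[currentSize - 1] = item;
                }
            } else {                          // already present: refresh
                move(index);
            }
        }

        Object get(Object item) {
            int index = search(item);
            return index == -1 ? null : objects[index];
        }

        private void move(int index) {
            Object temp = objects[index];
            for (int i = index; i < currentSize - 1; i++) {
                objects[i] = objects[i + 1];
            }
            objects[currentSize - 1] = temp;
        }

        private int search(Object item) {
            for (int i = 0; i < currentSize; i++) {
                if (item.equals(objects[i])) return i;
            }
            return -1;
        }
    }

    public static void main(String[] args) {
        LruCache lruCache = new LruCache(2);
        lruCache.put(1);
        lruCache.put(2);
        lruCache.put(1);                     // refreshes 1, so 2 becomes LRU
        lruCache.put(3);                     // full: 2 is evicted
        System.out.println(lruCache.get(1)); // 1
        System.out.println(lruCache.get(2)); // null
        System.out.println(lruCache.get(3)); // 3
    }
}
```

The output is 1, null, 3, matching what Glide’s LruCache produced at the beginning of the article.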

Conclusion

We all know operating systems courses pose exactly this kind of question: the page replacement problem triggered by a page fault. Let’s use one such example to thoroughly understand LruCache’s algorithm.

For example, suppose the sequence of page accesses is (1, 2, 1, 3, 2) and the memory capacity is 2.

Tracing it with LRU: after 1 and 2 the memory holds [1, 2]; re-accessing 1 reorders it to [2, 1]; loading 3 evicts 2, leaving [1, 3]; loading 2 again evicts 1, leaving [3, 2]. Every access except the repeated 1 is a page fault.

In practice, LruCache is mainly used for cache handling, which chiefly means the memory cache and the disk cache.