This is the 7th day of my participation in the November Gwen Challenge. Check out the event details: The Last Gwen Challenge 2021.

Preface

Caching strategies are common in Android development, and they play an especially important role in image-heavy business scenarios. Images are among the most frequently used UI elements in mobile applications, and image resources are often loaded from the network. To load images faster and reduce network traffic, a complete image loading framework therefore needs a caching strategy.

The basic logic of an image loading library is as follows: first look for the image in the memory cache; if it is not there, look in the local (disk) cache; only when the image is in neither is it requested from the network.

```mermaid
flowchart TD
    A[Image load] --> B{In memory cache?}
    B -->|Yes| C[Load the image from memory]
    B -->|No| D{In local cache?}
    D -->|Yes| E[Load the image from local storage]
    D -->|No| F[Request the image from the network]
    C --> G
    E --> G
    F --> G
```
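The three-level lookup above can be sketched as follows. This is only an illustration: `memoryCache`, `diskCache`, and `fetchFromNetwork` are hypothetical placeholders (here backed by plain maps and a stub), not part of any specific library.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a three-level image lookup: memory -> local storage -> network.
public class TripleCacheSketch {
    private final Map<String, byte[]> memoryCache = new HashMap<>();
    private final Map<String, byte[]> diskCache = new HashMap<>(); // stands in for local storage

    public byte[] load(String url) {
        byte[] image = memoryCache.get(url);   // 1. memory cache
        if (image != null) return image;

        image = diskCache.get(url);            // 2. local (disk) cache
        if (image != null) {
            memoryCache.put(url, image);       // promote to the memory cache
            return image;
        }

        image = fetchFromNetwork(url);         // 3. network request
        if (image != null) {
            diskCache.put(url, image);         // write back to both cache levels
            memoryCache.put(url, image);
        }
        return image;
    }

    // Simulated network fetch; a real loader would issue an HTTP request here.
    private byte[] fetchFromNetwork(String url) {
        return new byte[] {1, 2, 3};
    }
}
```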

Memory caching strategy

For the memory caching strategy, the Android framework provides the LruCache class. Its core algorithm is LRU (Least Recently Used): when the cache exceeds its maximum size, the least recently used cache objects are evicted. A local storage cache can also apply the LRU strategy, and combining the two yields the complete basic resource caching strategy shown in the figure above.

LruCache

LruCache is a generic class that internally uses a LinkedHashMap<K, V> to store cache objects. It provides put and get operations; when the cache exceeds its maximum size, the least recently used cache objects are removed to make room for new ones. It also provides a remove method for actively evicting cache objects to free up cache capacity.
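As a minimal sketch of the LRU idea behind LruCache (not the Android class itself), a LinkedHashMap constructed with accessOrder = true keeps entries ordered from least to most recently used, and overriding removeEldestEntry evicts automatically. The maxEntries limit is an assumption for this illustration; the real LruCache tracks sizes rather than entry counts and is thread-safe.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal entry-count-based LRU cache built on LinkedHashMap (accessOrder = true).
public class SimpleLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLru(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: get() moves entries to the tail
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least recently used entry
    }
}
```

For example, with a capacity of 2, accessing "a" before inserting "c" makes "b" the least recently used entry, so "b" is the one evicted.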

  • get
```java
public final V get(K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        mapValue = map.get(key);
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        missCount++;
    }

    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // There was a conflict so undo that last put
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}
```
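Note the create(key) fallback in get(): on a miss, LruCache calls create(), which returns null by default but can be overridden to synthesize a value. A simplified, single-threaded sketch of that miss path (the class and method names here are illustrative, not the Android implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of get()'s fallback path: on a miss, create(key) may synthesize a value.
public class CreateOnMissSketch {
    private final Map<String, String> map = new HashMap<>();

    // Default mirrors LruCache: no value is created on a miss.
    protected String create(String key) {
        return null;
    }

    public String get(String key) {
        String value = map.get(key);
        if (value != null) return value;        // cache hit
        String created = create(key);           // cache miss: try to create one
        if (created != null) map.put(key, created);
        return created;
    }

    public void put(String key, String value) {
        map.put(key, value);
    }
}
```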
  • put
```java
public final V put(K key, V value) {
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }

    V previous;
    synchronized (this) {
        putCount++;
        size += safeSizeOf(key, value);
        previous = map.put(key, value);
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }

    trimToSize(maxSize);
    return previous;
}
```
  • remove
```java
public final V remove(K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V previous;
    synchronized (this) {
        previous = map.remove(key);
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) {
        entryRemoved(false, key, previous, null);
    }

    return previous;
}
```

In the code above, the safeSizeOf method is used to calculate the size of each cache object when the total cache size is updated. In addition, the put method calls trimToSize to check whether the cache pool exceeds the maximum cache size and, if so, to evict the least recently used entries.
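The interaction between the size accounting and trimToSize can be sketched as follows. This is a simplified, single-threaded illustration, not the Android implementation: the byte-array values and the sizeOf function (playing the role of safeSizeOf) are assumptions for the example.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Size-aware LRU sketch: each value contributes sizeOf(value) to the total,
// and trimToSize evicts least recently used entries until the total fits maxSize.
public class SizedLruSketch {
    private final LinkedHashMap<String, byte[]> map = new LinkedHashMap<>(16, 0.75f, true);
    private final int maxSize; // maximum total size (illustrative unit: bytes)
    private int size;

    public SizedLruSketch(int maxSize) {
        this.maxSize = maxSize;
    }

    private int sizeOf(byte[] value) {
        return value.length; // plays the role of safeSizeOf(key, value)
    }

    public void put(String key, byte[] value) {
        byte[] previous = map.put(key, value);
        size += sizeOf(value);
        if (previous != null) {
            size -= sizeOf(previous); // replaced entry no longer counts
        }
        trimToSize(maxSize); // same check LruCache.put performs
    }

    public int size() {
        return size;
    }

    // Evict entries in LRU order until the cache fits within maxSize.
    private void trimToSize(int maxSize) {
        Iterator<Map.Entry<String, byte[]>> it = map.entrySet().iterator();
        while (size > maxSize && it.hasNext()) {
            Map.Entry<String, byte[]> eldest = it.next();
            it.remove();
            size -= sizeOf(eldest.getValue());
        }
    }
}
```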