An analysis of Android's LruCache
Preface
In daily development, most of our work is presenting the information users want to see, so dealing with data is unavoidable. For data the user genuinely cares about, we must fetch the latest version from the network every time.
For image data, however, reading from the network on every display would be a waste of resources: it burns the user's traffic and hurts the app's performance. The usual practice, therefore, is to cache the images.
I'm sure you've all heard of three-level image caching, but what exactly is it?
What is three-level caching
The three-level cache is mainly composed of three parts:
- Memory
- Disk
- Network
We know that memory is the fastest place to read data from, followed by disk; data read from the network has to be loaded into memory anyway before it can be displayed.
So we can keep the images the user cares about and uses most often in memory and on disk, letting us display them quickly the next time without wasting the user's traffic.
For example, suppose we want to load an image from a given url. With a three-level cache in place, displaying the image goes through the following steps:
- Look the image up in the memory cache; if it is there, display it directly.
- On a memory miss, look it up in the disk cache; if found, display it and put it back into the memory cache.
- On a disk miss, download the image from the network, display it, and write it into both the disk cache and the memory cache.
That is the complete image-loading process with a three-level cache. Walk through the flow carefully and you will know exactly what a three-level cache is.
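In code, the lookup might look like the minimal sketch below. Note the assumptions: memoryCache, diskCache, and downloadFromNetwork are hypothetical stand-ins for your own memory cache, disk cache, and network layer.

Bitmap loadImage(String url) {
    // 1. Memory: fastest
    Bitmap bitmap = memoryCache.get(url);
    if (bitmap != null) {
        return bitmap;
    }
    // 2. Disk: slower, but survives process restarts
    bitmap = diskCache.get(url);
    if (bitmap != null) {
        memoryCache.put(url, bitmap); // promote back into memory
        return bitmap;
    }
    // 3. Network: slowest, and costs the user traffic
    bitmap = downloadFromNetwork(url); // hypothetical network call
    diskCache.put(url, bitmap);
    memoryCache.put(url, bitmap);
    return bitmap;
}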
The core of caching: the LRU algorithm
As we know, memory and disk space are limited. Neither the memory cache nor the disk cache can grow without bound; we have to set an appropriate amount of space for cached data, and once that space fills up we must remove some data before new data can be added.
This presents a problem: when the space is full, which data should be removed first?
We should remove the user's least-used data first and keep the frequently used data in the cache, so the user can still access it quickly. This is exactly what the LRU algorithm does.
The LRU (Least Recently Used) algorithm evicts data based on its historical access pattern. Its core idea: "if data has been accessed recently, it has a higher chance of being accessed again; if it has not been accessed recently, it has a lower chance of being needed and is deleted first".
Next, let's take a look at how LruCache applies the LRU algorithm on Android (DiskLruCache works on a similar principle and is not covered in this article).
LruCache
For memory caching we can use LruCache, a class the framework has provided since Android 3.1, which implements the LRU algorithm.
The official description
A cache that holds strong references to a limited number of values. Each time a value is accessed, it is moved to the head of a queue. When a value is added to a full cache, the value at the end of that queue is evicted and may become eligible for garbage collection.
If your cached values hold resources that need to be explicitly released, override entryRemoved(boolean, K, V, V).
If a cache miss should be computed on demand for the corresponding keys, override create(K). This simplifies the calling code, allowing it to assume a value will always be returned, even when there’s a cache miss.
In other words: LruCache holds strong references to a limited number of values, moves each value to the head of the queue whenever it is accessed, and, when a value is added to a full cache, evicts the value at the end of the queue so it can be garbage collected.
If cached values hold resources that must be explicitly released, override entryRemoved(boolean, K, V, V) and do your own cleanup there.
If a missing value should be computed on demand, override create(K); this simplifies the calling code, which can then assume a value is always returned, even on a cache miss.
Let's first take a look at the member variables and the constructor:
public class LruCache<K, V> {
    // The LinkedHashMap at the core of LruCache
    private final LinkedHashMap<K, V> map;
    // The current cache size
    private int size;
    // The maximum cache size
    private int maxSize;
    // Number of insertions
    private int putCount;
    // Number of values produced by create(K); changes only when create(K) is overridden
    private int createCount;
    // Number of evictions: incremented when the cache is full, new data is
    // inserted, and old data is removed
    private int evictionCount;
    // Number of hits, i.e. how many times get() found an element
    private int hitCount;
    // Number of misses, i.e. how many times get() failed to find an element
    private int missCount;

    /**
     * @param maxSize If sizeOf() is not overridden, maxSize is the maximum
     * number of entries in the cache. If sizeOf() is overridden, maxSize is
     * the maximum total size of all cached entries (the sum of each entry's size).
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        // Default capacity, default load factor, and accessOrder = true,
        // i.e. a LinkedHashMap ordered by access
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }
}
The key points above are the maxSize that gets set and the internal LinkedHashMap created with access ordering enabled.
If sizeOf() is overridden, maxSize represents the maximum total size of all entries we can add (the sum of each entry's own size); otherwise it is simply the maximum number of entries.
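For instance, a common pattern is a bitmap memory cache that measures entries in kilobytes. A minimal sketch, where the one-eighth-of-max-memory budget is a conventional choice (not something LruCache prescribes) and url/bitmap stand in for your own variables:

// Budget: one eighth of the app's max memory, in kilobytes (an assumption;
// adjust to your app's needs)
final int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
final int cacheSize = maxMemoryKb / 8;

LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        // Measure each entry in kilobytes so size and maxSize share a unit
        return bitmap.getByteCount() / 1024;
    }
};

bitmapCache.put(url, bitmap); // cache the decoded bitmap under its url
Bitmap cached = bitmapCache.get(url);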
If accessOrder is set to true when the LinkedHashMap is created, its internal entries are reordered on every access rather than kept in insertion order. This access ordering is the crucial ingredient for implementing LruCache.
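A quick standalone illustration of access ordering (plain JDK LinkedHashMap, nothing Android-specific):

import java.util.LinkedHashMap;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Third constructor argument accessOrder = true: iteration order
        // becomes least recently used first, most recently used last
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // accessing "a" moves it to the tail
        System.out.println(map.keySet()); // prints [b, c, a]
    }
}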
But setting accessOrder to true is not enough on its own, because at some point LruCache must evict the least recently accessed entries. At what point? And how are they removed?
The answer lies in LruCache itself. Below we go through LruCache's add, remove, update, and query operations in the usual way; by analyzing one complete flow, you can understand essentially the whole implementation of LruCache.
Analysis of common methods
To use a cache, we must first add data to it before we can read anything back. Data is added to an LruCache with the put() method:
put() adds to the cache
public final V put(K key, V value) {
    // Neither key nor value may be null
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }

    // The previous value mapped to this key, if any
    V previous;
    // Synchronized block: only one thread may operate on this object at a time
    synchronized (this) {
        putCount++;
        // Measure the new entry with safeSizeOf() and add it to the current size
        size += safeSizeOf(key, value);
        // Insert into the LinkedHashMap; a non-null return means the same key
        // already existed, and the old value is captured in previous
        previous = map.put(key, value);
        // If an old value existed, subtract its size from the current size
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    // If a value was replaced, entryRemoved() is called. It is an empty
    // implementation by default; users can override it to release resources.
    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }

    // The most critical step: check whether the current size still fits
    trimToSize(maxSize);
    // Return the old value
    return previous;
}
In the put() method we can see that LruCache does not allow null keys or values, and that the insertion is wrapped in a synchronized block, which keeps the operation thread-safe.
It then computes the size of the inserted value and adds it to size. If the same key already had a value, the size of that previous value is subtracted from size after the insertion; if not, nothing extra happens.
If a value was replaced, the entryRemoved() method is also called back, giving the user a chance to release resources (it is a no-op unless overridden).
Finally, put() calls trimToSize(maxSize), a core method that checks whether the current size exceeds the configured maximum. If it does, the least recently used entries are removed.
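A small illustration of the replacement semantics just described (default sizeOf(), so maxSize counts entries):

LruCache<String, String> cache = new LruCache<>(10);
cache.put("key", "first");
String previous = cache.put("key", "second");
// previous == "first": the old value is returned, its size is subtracted,
// and entryRemoved(false, "key", "first", "second") fires if overridden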
trimToSize() controls the cache size
In LruCache, the key to keeping the cache size from exceeding the maximum we set is the trimToSize() method:
public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        // Thread synchronization here as well
        synchronized (this) {
            // Sanity check: size must be consistent with the map's contents
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }

            // If the current size is within the configured maximum, exit the loop
            if (size <= maxSize || map.isEmpty()) {
                break;
            }

            // map.entrySet().iterator().next() returns the head node of the
            // LinkedHashMap, i.e. the least recently accessed entry;
            // toEvict is that head entry
            Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
            // Key of the entry being evicted
            key = toEvict.getKey();
            // Value of the entry being evicted
            value = toEvict.getValue();
            // Remove the entry via LinkedHashMap's remove()
            map.remove(key);
            // Recalculate the current size
            size -= safeSizeOf(key, value);
            // Eviction count +1
            evictionCount++;
        }

        // Calls entryRemoved(), which the user may have overridden
        entryRemoved(true, key, value, null);
    }
}
First, an endless loop is entered. Inside the synchronized block, it checks whether the current size exceeds the maximum capacity maxSize; if not, the loop ends.
If the cache is over capacity, the head entry is fetched with map.entrySet().iterator().next(). As mentioned in the earlier LinkedHashMap source-code analysis, entrySet() ultimately goes through nextNode() in LinkedHashIterator, and that iterator starts from the head node, so next() returns the head entry.
Since the map is access-ordered, that head entry is the least recently accessed one. It is removed with LinkedHashMap's remove(), its size is subtracted from the current size, evictionCount is incremented, and entryRemoved(true, key, value, null) is called.
So the gist of trimToSize() is: if the capacity is within the maximum, return; if it exceeds the maximum, remove head entries one by one until the capacity fits the configured maximum.
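A tiny demonstration of that eviction order (default sizeOf(), so maxSize counts entries):

LruCache<String, String> cache = new LruCache<>(2);
cache.put("a", "1");
cache.put("b", "2");
cache.get("a");      // "a" becomes the most recently used entry
cache.put("c", "3"); // over capacity: trimToSize() evicts the head, "b"
// cache.get("b") == null; "a" and "c" are still cached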
remove() removes a cached entry
public final V remove(K key) {
    // Null keys are not allowed
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    // The removed value
    V previous;
    // Synchronized block to ensure thread safety
    synchronized (this) {
        // Remove the entry and capture its value in previous
        previous = map.remove(key);
        // If the key had a value before, subtract its size
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    // If the key had a value, invoke entryRemoved(), which the user may have overridden
    if (previous != null) {
        entryRemoved(false, key, previous, null);
    }

    return previous;
}
Here, too, the work is straightforward: the element is removed via the internal LinkedHashMap, and its size is subtracted from the cache's current size.
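This is where the evicted flag of entryRemoved() matters: it is true when trimToSize() evicted the entry and false when the entry was removed by remove() or replaced by put(). A sketch of releasing resources on removal, where CloseableImage is a hypothetical type used purely for illustration:

LruCache<String, CloseableImage> imageCache =
        new LruCache<String, CloseableImage>(20) {
    @Override
    protected void entryRemoved(boolean evicted, String key,
                                CloseableImage oldValue, CloseableImage newValue) {
        // evicted == true: trimToSize() pushed the entry out
        // evicted == false: remove() or a replacing put() took it out
        oldValue.close(); // hypothetical resource release
    }
};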
get() reads from the cache
public final V get(K key) {
    // Null keys are not allowed
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    // The value found for the key
    V mapValue;
    // Synchronized block to keep the current instance thread-safe
    synchronized (this) {
        // Look the key up with LinkedHashMap's get()
        mapValue = map.get(key);
        // Found: hit count +1, return the value
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        // Not found: miss count +1
        missCount++;
    }

    // On a miss, create() gets a chance to produce a value. It returns null
    // by default; override it to compute values on demand.
    V createdValue = create(key);
    // create() produced nothing, so return null
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        // Insert createdValue; if a value for this key appeared in the
        // meantime, it is captured in mapValue
        mapValue = map.put(key, createdValue);

        if (mapValue != null) {
            // There was a conflict, so undo that last put: keep the value
            // that was already in the cache instead of the newly created one
            map.put(key, mapValue);
        } else {
            // No conflict: add the created value's size to the current size
            size += safeSizeOf(key, createdValue);
        }
    }

    // On a conflict, discard createdValue (notifying entryRemoved()) and
    // return the value that was already there
    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        // Call trimToSize() to see whether old data needs to be evicted
        trimToSize(maxSize);
        return createdValue;
    }
}
The first part of get() simply looks the value up in the map and returns it if found.
If it is not found and the create(K) method has been overridden, the value produced by create(K) is put into the cache. If, by the time the created value is stored, another value has already appeared under that key, the created value is discarded in favor of the existing one, and entryRemoved() is called back to notify the caller.
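A sketch of create() in action, assuming a hypothetical computeValue() helper that performs some expensive computation:

LruCache<String, String> cache = new LruCache<String, String>(100) {
    @Override
    protected String create(String key) {
        // Called on a cache miss; the result is stored and returned by get()
        return computeValue(key); // hypothetical expensive computation
    }
};
String value = cache.get("some-key"); // a miss triggers create() and caches the result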
As mentioned earlier, LruCache orders entries by access. When LinkedHashMap's get() is called (or put() replaces an existing value), afterNodeAccess() runs and moves the accessed entry to the tail, keeping the internal entries sorted by access order. This was covered in the earlier LinkedHashMap article.
evictAll() clears all cached data
public final void evictAll() {
    trimToSize(-1); // -1 will evict 0-sized elements
}
evictAll() simply calls trimToSize() with -1. As we have seen, there is a loop inside trimToSize() that keeps executing this check:
if (size <= maxSize || map.isEmpty()) {
break;
}
With -1 passed in, size <= maxSize can never be true (size is never negative), so the loop only terminates via map.isEmpty(), i.e. once every entry has been evicted and the cache is cleared.
Finally
Having read this far, you should now have a clear picture of the LruCache that Android provides.
LruCache maintains a LinkedHashMap created with accessOrder set to true, so its internal entries are ordered by access rather than by insertion. When put() is called, the element is added to the internal map and trimToSize() is invoked to check whether the cache is full; if it is, the least recently accessed entries at the head of the LinkedHashMap are removed. When get() is called to read a cached object, LinkedHashMap's get() fetches the element and its internal afterNodeAccess() moves that element to the tail node.
In the end, LRU is just an algorithm; the concrete implementation is up to each of us. Google's LruCache is an excellent one, but we could also build something similar ourselves, as sketched below.
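For example, a minimal LRU cache of our own can lean on LinkedHashMap's removeEldestEntry() hook. This is a sketch only: unlike the real LruCache it has no size accounting, no hit statistics, and no synchronization.

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        super(0, 0.75f, true); // accessOrder = true, just like LruCache
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true evicts the head (LRU) entry
        return size() > maxEntries;
    }
}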
The most important thing is to understand the idea behind it.