Recently, a friend of mine was preparing for interviews, and we would discuss interview questions in a group chat. Every time, I got hit hard and failed to answer many of them. In day-to-day development, Google and third-party libraries make problems easy to solve, but looking back, my skills had not actually grown. For example, I know image loading uses a three-level cache and the LRU algorithm, but asked to hand-write an image cache utility class without looking anything up, I found my thinking was not clear. For an in-memory cache I would think of a strong-reference cache and a soft-reference cache, but which classes are best suited for each? Both LruCache and LinkedHashMap can be used to cache data, so why did LruCache choose LinkedHashMap rather than some other storage class? Ask a few whys and you will find you don't really know anything. I also came to understand why so many companies focus on underlying implementation principles and algorithms in interviews. Only by understanding the implementation can you learn the essence, appreciate the image-loading frameworks written by masters, and prescribe the right remedy when a problem appears, instead of blindly trying every answer found online without knowing why any change works. To make sense of the questions above, I studied the implementations of HashMap, LinkedHashMap, LRU, and memory caches, and have now put them together into this article.
A few words up front
Before digging into the underlying principles of HashMap and LinkedHashMap, it is best to review the basics of arrays, singly linked lists, doubly linked lists, hash tables, hash functions, and hash algorithms; they help a great deal in understanding HashMap and LinkedHashMap. This article only walks through the implementations of HashMap, LinkedHashMap, and the cache; it does not cover all of those basics. After reading it, you will see why LruCache uses LinkedHashMap.
HashMap
A HashMap implements a hash table with an array as its backbone. Adding or finding an element in an array by its index takes a single step: the array is the internal storage space, and an array index corresponds to an address in memory. Each record in a HashMap is an Entry object, and each Entry contains a key-value pair. These objects are stored in the array. Now let's look at how the storage location of each object is calculated.
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
Hash value = (hashcode) ^ (hashcode >>> 16): the key's hashCode is XORed with itself shifted right by 16 bits, spreading the high bits into the low bits. The next step is to compute the array subscript from this hash value.
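As a sketch, the way this spread hash feeds into the index computation can be reproduced in plain Java (the table size of 16 and the sample key are made up for the example):

```java
public class HashSpread {
    // Mirrors JDK 8's HashMap.hash(): XOR the high 16 bits into the low 16 bits
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // Mirrors the addressing step: index = (n - 1) & hash, where n is a power of two
    static int indexFor(int hash, int n) {
        return (n - 1) & hash;
    }

    public static void main(String[] args) {
        int n = 16; // HashMap's default capacity
        int h = hash("name");
        System.out.println("hash=" + h + ", index=" + indexFor(h, n));
        // A null key always lands in bucket 0
        System.out.println(indexFor(hash(null), n)); // prints 0
    }
}
```

Because `n` is always a power of two, `(n - 1) & hash` is a fast modulo that keeps only the low bits; the XOR above ensures the high bits of hashCode still influence which bucket is chosen.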
i = (n - 1) & hash;  // hash is the value we computed in the previous step
p = tab[i];          // use the computed i as the index into the array
if (p == null) {     // if this slot is empty, build a Node from key and value and place it at index i
    tab[i] = newNode(hash, key, value, null);
}
i is the index we hashed out, which is where the entry is stored. If tab[i] already holds an element, a hash collision has occurred and the new entry has to be handled differently. The relevant source is as follows:
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    ......
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        // The array element's hash and key both match: the existing value will be overwritten
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // This bucket has already become a red-black tree (list length grew past 8)
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            // Same bucket, list length < 8: traverse the list; overwrite if found, otherwise append
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    // Append a new node to the end of the list
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        // List length >= 8: convert the list into a red-black tree
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
Ignoring the red-black tree conversion, the code above can be summarized as follows:
1. If an element with the same hash and the same key already exists, the new value overwrites the old one, and the old value is returned.
2. Equal hash codes mean equal index positions. If the keys differ in that case, a hash collision has occurred.
3. Hash collisions are resolved with a singly linked list: the list is traversed looking for the element; if found, it is overwritten; if not, a new node is created.
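To see points 2 and 3 concretely, here is a toy key class (invented for the demo) whose hashCode is fixed, so every instance collides into the same bucket; HashMap still keeps the entries apart via equals():

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // A key that always collides: every instance has the same hashCode
    static class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }
        @Override public int hashCode() { return 42; } // forces every key into one bucket
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, String> map = new HashMap<>();
        map.put(new BadKey("a"), "first");
        map.put(new BadKey("b"), "second");   // same bucket, different key -> chained
        map.put(new BadKey("a"), "replaced"); // same bucket, same key -> overwritten

        System.out.println(map.size());               // 2
        System.out.println(map.get(new BadKey("a"))); // replaced
        System.out.println(map.get(new BadKey("b"))); // second
    }
}
```

Both entries live in one bucket's linked list, yet lookups still distinguish them by key, exactly the traversal-and-compare logic in putVal above.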
The above is how entries are stored, which is the complicated part. Retrieving data is comparatively simple; the basic flow is:
1. Compute the array subscript from the key's hashCode and the addressing algorithm. If the array element's hash and key both match, return that element.
2. If they do not match and the element has a linked list attached, traverse the list looking for a match.
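A stripped-down sketch of that lookup (field and method names are simplified from the JDK source; this is not the real implementation):

```java
public class MiniGet {
    static class Node<K, V> {
        final int hash; final K key; V value; Node<K, V> next;
        Node(int hash, K key, V value, Node<K, V> next) {
            this.hash = hash; this.key = key; this.value = value; this.next = next;
        }
    }

    // Step 1: locate the bucket via (n - 1) & hash.
    // Step 2: check the head node, then walk the chain comparing hash and key.
    static <K, V> V get(Node<K, V>[] tab, int hash, K key) {
        if (tab == null || tab.length == 0) return null;
        for (Node<K, V> p = tab[(tab.length - 1) & hash]; p != null; p = p.next) {
            if (p.hash == hash && (p.key == key || (key != null && key.equals(p.key))))
                return p.value;
        }
        return null;
    }

    // Helper for the demo: a 4-slot table whose bucket 1 holds the chain a -> b
    @SuppressWarnings("unchecked")
    static Node<String, Integer>[] demoTable() {
        Node<String, Integer>[] tab = (Node<String, Integer>[]) new Node[4];
        Node<String, Integer> second = new Node<>(1, "b", 2, null);
        tab[1] = new Node<>(1, "a", 1, second);
        return tab;
    }
}
```

If the chain is short, this walk is cheap; the performance problem discussed next only appears when chains grow long.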
When no hash collisions occur and no linked lists form, HashMap finds elements quickly: the get() method can locate an element directly. Once a singly linked list appears, a bucket stores a chain of Entries instead of a single Entry, and the system has to traverse the chain in order until it finds the one it wants. If the Entry being looked up happens to sit at the end of the chain (the Entry that was placed in the bucket first), the loop has to run all the way to the tail. To address the performance cost of long linked lists, JDK 1.8 added red-black trees on top of JDK 1.7's design: when a list grows past 8 nodes it is converted into a red-black tree, and HashMap's performance improves thanks to the red-black tree's fast insertion, deletion, and lookup.
Here is a diagram of the HashMap structure, which I found online.
To summarize HashMap's characteristics:
- Keys cannot be repeated.
- The underlying hash table does not guarantee order (such as insertion order).
LinkedHashMap
As mentioned above, HashMap offers fast access but is unordered: iterating over a HashMap yields elements in a different order than they were put in. For real-world applications that need both fast access and ordered access, LinkedHashMap is a good choice. LinkedHashMap is a subclass of HashMap: it is ordered (by insertion order or access order), and adding, modifying, and deleting elements are all efficient. LinkedHashMap adds a double linked list to the storage structure, while data manipulation works the same way as in HashMap. The big difference lies in the Entry. Take a look at the fields in LinkedHashMap's Entry:
1. K key
2. V value
3. Entry<K, V> next
4. int hash
5. Entry<K, V> before
6. Entry<K, V> after
The last two are unique to LinkedHashMap: next maintains the order of entries within the bucket at the table position specified by HashMap, while before and after maintain the order in which entries were inserted. The LinkedHashMap structure looks like this:
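A small sketch (keys invented for the demo) of the two orderings that before/after make possible: insertion order by default, and access order when the third constructor argument is true:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        // Default mode: iteration follows insertion order
        Map<String, Integer> insertion = new LinkedHashMap<>();
        insertion.put("banana", 2);
        insertion.put("apple", 1);
        insertion.put("cherry", 3);
        System.out.println(String.join(" ", insertion.keySet())); // banana apple cherry

        // accessOrder = true: get() relinks the entry (via before/after) to the tail
        Map<String, Integer> access = new LinkedHashMap<>(16, 0.75f, true);
        access.put("a", 1);
        access.put("b", 2);
        access.put("c", 3);
        access.get("a"); // "a" moves to the tail: order becomes b, c, a
        System.out.println(String.join(" ", access.keySet())); // b c a
    }
}
```

The access-order mode is what makes LinkedHashMap a natural fit for LRU: the least recently used entry is always the one at the head of the double linked list.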
Image cache
HashMap was introduced above mainly as groundwork for understanding LinkedHashMap. In short, LinkedHashMap's advantages are that adding, modifying, and deleting elements are efficient, and that its order can be either insertion order or access order. LinkedHashMap is used inside LruCache: LruCache is implemented on top of LinkedHashMap, with the LRU algorithm at its core. A soft-reference cache can also use a LinkedHashMap to store its data. A simple image-caching design will show LinkedHashMap's advantages:
1. When the strong reference cache is full, the images that have not been used recently are transferred to the soft reference cache according to the LRU algorithm
2. When the number of soft references exceeds 20, the oldest soft references will be removed from the chained hash table
Because LinkedHashMap is ordered, and the order can be insertion order or access order, finding the least recently used image is easy: each time an image in the strong-reference cache is accessed, it is moved to one end of the LinkedHashMap, so the entry at the opposite end is always the one that has gone unused the longest. When memory is full, the image at that opposite end is evicted, since it has not been used recently. A LinkedHashMap is also used to store the soft references, which makes it just as easy to remove the oldest soft reference from the linked hash table. LruCache's internal implementation uses LinkedHashMap for exactly this reason: it keeps the order consistent between insertion and retrieval, with good efficiency.
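The idea above can be sketched as a minimal LRU cache built directly on LinkedHashMap (class name and capacity are invented for the example; Android's LruCache adds size accounting and thread safety on top of the same concept):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public SimpleLruCache(int maxSize) {
        // accessOrder = true keeps the least recently used entry at the head
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    // Called after every put(); returning true evicts the eldest (head) entry
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
    }
}
```

With a capacity of 2: after put(a), put(b), get(a), put(c), the cache evicts b, because a was touched more recently.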
Today, powerful third-party libraries solve image caching for us, but I still want to paste the memory cache code written by Ren Yugang, which I personally feel is worth studying.
public class ImageMemoryCache {
    private static final String TAG = "ImageMemoryCache";
    // Strong-reference cache
    private static LruCache<String, Bitmap> mLruCache;
    // Soft-reference cache
    private static LinkedHashMap<String, SoftReference<Bitmap>> mSoftCache;
    // Capacity of the strong-reference cache: 4MB
    private static final int LRU_CACHE_SIZE = 4 * 1024 * 1024;
    // Maximum number of soft references
    private static final int SOFT_CACHE_NUM = 20;

    // Initialize the strong-reference cache and the soft-reference cache
    public ImageMemoryCache() {
        mLruCache = new LruCache<String, Bitmap>(LRU_CACHE_SIZE) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                if (value != null) {
                    return value.getRowBytes() * value.getHeight();
                }
                return 0;
            }

            @Override
            protected void entryRemoved(boolean evicted, String key, Bitmap oldValue, Bitmap newValue) {
                super.entryRemoved(evicted, key, oldValue, newValue);
                if (oldValue != null) {
                    // When the strong-reference cache is full, the LRU algorithm moves the
                    // least recently used image into the soft-reference cache
                    Log.d(TAG, "LruCache is full, move to SoftReferenceCache");
                    mSoftCache.put(key, new SoftReference<Bitmap>(oldValue));
                }
            }
        };

        mSoftCache = new LinkedHashMap<String, SoftReference<Bitmap>>(SOFT_CACHE_NUM, 0.7f, true) {
            private static final long serialVersionUID = 1L;

            // When the number of soft references exceeds 20, remove the eldest one
            @Override
            protected boolean removeEldestEntry(Entry<String, SoftReference<Bitmap>> eldest) {
                if (size() > SOFT_CACHE_NUM) {
                    Log.d(TAG, "should remove the eldest from SoftReference");
                    return true;
                }
                return false;
            }
        };
    }

    public Bitmap getBitmapFromMemory(String url) {
        Bitmap bitmap;
        // Look in the strong-reference cache first
        synchronized (mLruCache) {
            bitmap = mLruCache.get(url);
            if (bitmap != null) {
                // If found, re-insert the element so that it is the last one
                // the LRU algorithm will evict
                mLruCache.remove(url);
                mLruCache.put(url, bitmap);
                Log.d(TAG, "get bmp from LruCache, url = " + url);
                return bitmap;
            }
        }
        // Not found in the strong-reference cache: try the soft-reference cache
        synchronized (mSoftCache) {
            SoftReference<Bitmap> bitmapReference = mSoftCache.get(url);
            if (bitmapReference != null) {
                bitmap = bitmapReference.get();
                if (bitmap != null) {
                    // Move the image back into LruCache
                    mLruCache.put(url, bitmap);
                    mSoftCache.remove(url);
                    Log.d(TAG, "get bmp from SoftReferenceCache, url = " + url);
                    return bitmap;
                } else {
                    // The Bitmap has been reclaimed: drop the dead reference
                    mSoftCache.remove(url);
                }
            }
        }
        return null;
    }

    public void addBitmapToMemory(String url, Bitmap bitmap) {
        if (bitmap != null) {
            synchronized (mLruCache) {
                mLruCache.put(url, bitmap);
            }
        }
    }

    public void clearCache() {
        mSoftCache.clear();
    }
}
Conclusion
Android involves a great deal of knowledge like HashMap and LinkedHashMap, whose underlying implementations we look at repeatedly and then forget after a while. Understanding them through practical examples makes their advantages much easier to grasp. Third-party libraries give us a lot of help, but it is not enough to know how to use them; we should also understand how they work and why they are built this way. Think through the whys and understand them, and maybe you will be able to talk about them in your next interview.