LRU keeps showing up on interview hot-question lists, so let's take a closer look.
LRU algorithm
LRU (Least Recently Used) is a cache eviction algorithm: it decides what to evict based on the historical access records of the data. The idea is that if data has been accessed recently, it is more likely to be accessed again in the future.
Alongside it, a few sibling strategies have to be mentioned:

- LRU (Least Recently Used): evicts the entry that has gone unused for the longest time
- FIFO (First In, First Out): evicts the entry that was inserted earliest
- LFU (Least Frequently Used): evicts the entry that has been accessed the fewest times

These are all cache eviction strategies.
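As a taste of FIFO in Java, here is a minimal sketch (class name is mine; it happens to use the JDK's LinkedHashMap, which is discussed in depth below). With insertion order, accesses never reorder entries, so eviction ignores usage entirely:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// FIFO cache: insertion order + removeEldestEntry; get() never protects an entry.
public class FIFOCache<K, V> extends LinkedHashMap<K, V> {
    private final int size;

    public FIFOCache(int size) {
        super(size, 0.75f, false); // false = insertion order
        this.size = size;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > size;
    }

    public static void main(String[] args) {
        FIFOCache<Integer, Integer> cache = new FIFOCache<>(3);
        cache.put(1, 1); cache.put(2, 2); cache.put(3, 3);
        cache.get(1);      // does not protect 1 from eviction
        cache.put(4, 4);   // evicts 1, the first one inserted
        System.out.println(cache); // {2=2, 3=3, 4=4}
    }
}
```

Swapping the constructor's `false` for `true` turns the same class into the LRU cache built later in this article.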
The source code
LinkedHashMap in the JDK is an "ordered version" of HashMap, and it can serve as an LRU cache out of the box.
Documentation: docs.oracle.com/javase/8/do…
A special constructor is provided to create a linked hash map whose order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access-order). This kind of map is well-suited to building LRU caches. Invoking the put, putIfAbsent, get, getOrDefault, compute, computeIfAbsent, computeIfPresent, or merge methods results in an access to the corresponding entry (assuming it exists after the invocation completes). The replace methods only result in an access of the entry if the value is replaced. The putAll method generates one entry access for each mapping in the specified map, in the order that key-value mappings are provided by the specified map’s entry set iterator. No other methods generate entry accesses. In particular, operations on collection-views do not affect the order of iteration of the backing map.
In other words: a special constructor creates a LinkedHashMap whose iteration order is the order in which its entries were last accessed, from least-recently to most-recently accessed (access order). This kind of map is ideal for building LRU caches.
Worth highlighting for interviews: not only get counts as an access. put, putIfAbsent, get, getOrDefault, compute, computeIfAbsent, computeIfPresent, and merge all count as accesses.
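A quick sketch to verify this (class name is mine): getOrDefault on an access-ordered LinkedHashMap moves the entry to the tail just like get does.

```java
import java.util.LinkedHashMap;

public class AccessDemo {
    public static void main(String[] args) {
        LinkedHashMap<String, Integer> map =
                new LinkedHashMap<>(16, 0.75f, true); // true = access order
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.getOrDefault("a", 0); // counts as an access, moves "a" to the tail
        System.out.println(map);  // {b=2, c=3, a=1}
    }
}
```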
Continuing with the source code, we see an access-order-related constructor:
/**
 * Constructs an empty <tt>LinkedHashMap</tt> instance with the
 * specified initial capacity, load factor and ordering mode.
 *
 * @param  initialCapacity the initial capacity
 * @param  loadFactor      the load factor
 * @param  accessOrder     the ordering mode - <tt>true</tt> for
 *         access-order, <tt>false</tt> for insertion-order
 * @throws IllegalArgumentException if the initial capacity is negative
 *         or the load factor is nonpositive
 */
public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder) {
    super(initialCapacity, loadFactor);
    this.accessOrder = accessOrder;
}
accessOrder: true means access order, false means insertion order; the default is false.
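To see the difference the flag makes, here is a small side-by-side comparison (class and variable names are mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        Map<Integer, String> insertion = new LinkedHashMap<>(16, 0.75f, false);
        Map<Integer, String> access    = new LinkedHashMap<>(16, 0.75f, true);
        for (int i = 1; i <= 3; i++) {
            insertion.put(i, "v" + i);
            access.put(i, "v" + i);
        }
        insertion.get(1); // does not change iteration order
        access.get(1);    // moves key 1 to the tail
        System.out.println(insertion); // {1=v1, 2=v2, 3=v3}
        System.out.println(access);    // {2=v2, 3=v3, 1=v1}
    }
}
```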
Testing it
How can we see LRU in action? Let's write it out:
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private int size;

    public static void main(String[] args) {
        LRUCache<Integer, Integer> cache = LRUCache.newInstance(3);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.put(3, 3);
        cache.put(4, 4);
        System.out.println(cache);
        cache.get(2);
        System.out.println(cache);
        cache.put(5, 5);
        System.out.println(cache);
    }

    private LRUCache(int size) {
        super(size, 0.75f, true);
        this.size = size;
    }

    // Override removeEldestEntry so the eldest entry (the head) is removed automatically
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > size;
    }

    public static <K, V> LRUCache<K, V> newInstance(int size) {
        return new LRUCache<K, V>(size);
    }
}
The output is:
{2=2, 3=3, 4=4}
{3=3, 4=4, 2=2}
{4=4, 2=2, 5=5}
Explanation:

1. I created an LRU cache with initial capacity 3, load factor 0.75, and access order enabled.
2. I overrode removeEldestEntry to remove the head entry whenever the size limit is exceeded.
3. Putting 1, 2, 3, 4 in order exceeds the capacity of 3, so the eldest entry 1=1 is evicted, leaving {2=2, 3=3, 4=4}.
4. By rights, 2=2 was next in line to be cleared, but I accessed it with get(2), which moved it to the tail of the queue.
5. So when 5=5 was put, it was 3=3 at the head, not 2=2, that got squeezed out.

The result is exactly what I expected: {4=4, 2=2, 5=5}
Exploring the implementation
A question worth asking: when does LinkedHashMap change the order, and how is that implemented?
Source:
public V get(Object key) {
    Node<K,V> e;
    if ((e = getNode(hash(key), key)) == null)
        return null;
    // If access order is enabled, move this entry to the tail
    if (accessOrder)
        afterNodeAccess(e);
    return e.value;
}
void afterNodeAccess(Node<K,V> e) { // move node to last
    LinkedHashMap.Entry<K,V> last;
    if (accessOrder && (last = tail) != e) {
        // Relink the list: fix up the before/after pointers around the node
        LinkedHashMap.Entry<K,V> p =
            (LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after;
        p.after = null;
        if (b == null)
            head = a;
        else
            b.after = a;
        if (a != null)
            a.before = b;
        else
            last = b;
        if (last == null)
            head = p;
        else {
            p.before = last;
            last.after = p;
        }
        tail = p;
        ++modCount;
    }
}
Very readable: it just manipulates the before and after pointers to unlink the node and append it at the tail, nothing more to it.
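To internalize that pointer surgery, here is a minimal hand-rolled LRU cache (my own sketch, not JDK code) built from a HashMap plus a doubly linked list, the same structure LinkedHashMap maintains internally:

```java
import java.util.HashMap;
import java.util.Map;

public class SimpleLRU<K, V> {
    private static class Node<K, V> {
        K key; V value;
        Node<K, V> before, after;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private Node<K, V> head, tail; // head = eldest, tail = most recently used

    public SimpleLRU(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Node<K, V> n = map.get(key);
        if (n == null) return null;
        moveToTail(n); // same role as afterNodeAccess
        return n.value;
    }

    public void put(K key, V value) {
        Node<K, V> n = map.get(key);
        if (n != null) {
            n.value = value;
            moveToTail(n);
            return;
        }
        n = new Node<>(key, value);
        map.put(key, n);
        append(n);
        if (map.size() > capacity) { // same role as removeEldestEntry
            map.remove(head.key);
            unlink(head);
        }
    }

    private void moveToTail(Node<K, V> n) {
        if (n == tail) return;
        unlink(n);
        append(n);
    }

    private void unlink(Node<K, V> n) {
        if (n.before == null) head = n.after; else n.before.after = n.after;
        if (n.after == null) tail = n.before; else n.after.before = n.before;
        n.before = n.after = null;
    }

    private void append(Node<K, V> n) {
        if (tail == null) { head = tail = n; return; }
        n.before = tail;
        tail.after = n;
        tail = n;
    }
}
```

With capacity 3, putting 1 through 4, then get(2), then put(5) reproduces the earlier demo: 1=1 and later 3=3 are evicted while 2=2 survives.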
Conclusion
LinkedHashMap inherits from HashMap and adds a doubly linked list on top of the original hash array, which both preserves ordering and, thanks to that ordering, makes an LRU implementation possible. The LRU algorithm is generally used in in-memory cache scenarios and is a classic trade of "space" for "time": it keeps hot data within a bounded size as far as possible, while evicted data must be fetched again by other means. Used appropriately it can indeed save I/O or network requests, though in my view the applicable scenarios are few.
Takeaways
Although this is how Java implements it, the linked-list structure costs two extra pointers per entry. So I also read up on how the ancestor of caches, Redis, handles eviction: roughly, it tracks an idle time per key via timestamps, randomly samples several keys, and evicts one at a time; the eviction mechanism has been improved in later versions, as I vaguely recall.
Gaining a deeper understanding of LinkedHashMap was not without its rewards.
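As a rough sketch of that sampling idea (my own toy simplification, not Redis source code): keep a last-access clock per key, sample a few random keys when over capacity, and evict the one that has been idle the longest. No per-entry list pointers are needed, at the cost of only approximating true LRU.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy approximation of Redis-style sampled LRU eviction (not Redis code).
public class SampledLRU<K, V> {
    private static class Holder<V> {
        V value;
        long lastAccess; // analogous to a per-key idle timestamp
        Holder(V value, long lastAccess) { this.value = value; this.lastAccess = lastAccess; }
    }

    private final int capacity;
    private final int sampleSize;
    private final Map<K, Holder<V>> map = new HashMap<>();
    private final Random random = new Random();
    private long clock = 0; // logical clock instead of wall time, for determinism

    public SampledLRU(int capacity, int sampleSize) {
        this.capacity = capacity;
        this.sampleSize = sampleSize;
    }

    public V get(K key) {
        Holder<V> h = map.get(key);
        if (h == null) return null;
        h.lastAccess = ++clock;
        return h.value;
    }

    public void put(K key, V value) {
        if (!map.containsKey(key) && map.size() >= capacity) {
            evictOne();
        }
        map.put(key, new Holder<>(value, ++clock));
    }

    // Sample up to sampleSize random keys and evict the one idle the longest.
    private void evictOne() {
        List<K> keys = new ArrayList<>(map.keySet());
        K victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < sampleSize && !keys.isEmpty(); i++) {
            K candidate = keys.get(random.nextInt(keys.size()));
            long t = map.get(candidate).lastAccess;
            if (t < oldest) { oldest = t; victim = candidate; }
        }
        map.remove(victim);
    }

    public int size() { return map.size(); }
}
```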
Original text: github.com/pkwenda/Blo…