What is a cache
In the computer storage hierarchy, each faster level of storage can serve as a cache for the slower level below it. For example, the CPU cache is a cache for main memory, main memory is a cache for the hard disk, the hard disk is a cache for the network, and so on.
Caching effectively resolves the tension between storage performance and capacity, but it is not as simple as it seems. If the cache algorithm is poorly designed, it will not improve access speed and may even make the system slower.
Essentially, caching works because of locality in programs and data. Programs tend to execute instructions in sequence, and data tends to be stored contiguously and read and written repeatedly. These properties let us cache frequently used data and thereby improve read and write speed.
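As a small standalone illustration (not from the original article), the loop below exhibits both kinds of locality that make hardware caches effective:

```java
// Illustrative only: this loop benefits from caching because of locality.
int[] arr = new int[1_000_000];
int sum = 0;
for (int i = 0; i < arr.length; i++) {
    sum += arr[i]; // spatial locality: array elements are read in contiguous order
                   // temporal locality: sum is reused on every iteration
}
```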
The size of a cache is fixed, so it should hold only the data that is accessed most frequently. However, the future is unpredictable; we can only make predictions from past access sequences, which is why there are many different cache replacement strategies. This article introduces a simple one: the Least Recently Used (LRU) algorithm.
Implementing LRU
Let’s use memory access as an example to explain how caching works. Assume the cache has a fixed size and starts empty. On every memory read, the system first checks whether the requested data is in the cache. If it is, this is a cache hit and the data is returned directly. If not, it is a cache miss: the data is read from memory and added to the cache. When adding data to a cache that is already full, the entry that was accessed least recently must be evicted first. This policy for updating the cache is called LRU.
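In code, the read path looks roughly like the sketch below. The `Cache` interface and `readFromMemory` are hypothetical placeholders standing in for whatever slower storage level is being cached, not part of the implementation that follows:

```java
// Sketch of the LRU read path described above. The Cache interface and
// readFromMemory are hypothetical placeholders, not the article's code.
interface Cache {
    boolean contains(int key);
    int get(int key);            // also marks the entry as most recently used
    void put(int key, int value);
    boolean isFull();
    void evictLeastRecentlyUsed();
}

class ReadPath {
    private final Cache cache;

    ReadPath(Cache cache) { this.cache = cache; }

    int read(int address) {
        if (cache.contains(address)) {
            return cache.get(address);      // cache hit: serve from the cache
        }
        int data = readFromMemory(address); // cache miss: go to the slower level
        if (cache.isFull()) {
            cache.evictLeastRecentlyUsed(); // make room: drop the oldest entry
        }
        cache.put(address, data);
        return data;
    }

    private int readFromMemory(int address) {
        return 0; // stand-in for a read from the slower storage level
    }
}
```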
When implementing LRU, we need to pay attention to its read and write performance. An ideal LRU cache should be able to read or update an entry in O(1) time; that is, both reads and writes should have O(1) time complexity.
It is natural to reach for a HashMap, which can look up data by key in O(1) time. However, a HashMap alone cannot update the cache in O(1), because eviction requires finding the entry with the earliest access time, and that means traversing the whole cache.
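To see the problem concretely, here is a naive version that uses only a HashMap and stores a logical timestamp with each value (an illustrative sketch, not the article's implementation). Lookups are O(1), but eviction must scan every entry to find the oldest one, which is O(n):

```java
import java.util.HashMap;
import java.util.Map;

// Naive LRU built on a HashMap alone (illustrative sketch).
// get and put lookups are O(1), but eviction is O(n).
class NaiveLRU {
    private final int capacity;
    private long clock = 0; // logical timestamp, incremented on every access
    private final HashMap<Integer, long[]> map = new HashMap<>(); // key -> {value, lastAccess}

    NaiveLRU(int capacity) { this.capacity = capacity; }

    int get(int key) {
        long[] e = map.get(key);
        if (e == null) return -1;
        e[1] = ++clock;   // O(1): refresh the access time
        return (int) e[0];
    }

    void put(int key, int value) {
        if (!map.containsKey(key) && map.size() >= capacity) {
            Integer oldest = null; // O(n): scan all entries for the smallest timestamp
            long oldestTime = Long.MAX_VALUE;
            for (Map.Entry<Integer, long[]> e : map.entrySet()) {
                if (e.getValue()[1] < oldestTime) {
                    oldestTime = e.getValue()[1];
                    oldest = e.getKey();
                }
            }
            map.remove(oldest);
        }
        map.put(key, new long[]{value, ++clock});
    }
}
```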
Therefore, we need a data structure that both keeps entries ordered by access time and supports constant-time access by key.
This can be done with a HashMap plus a doubly linked list. The HashMap gives O(1) access to an entry by key, while the doubly linked list threads the entries in order of access time. A doubly linked list is chosen over a singly linked one because the list can be restructured at any intermediate node without traversing from the head.
As shown in the figure below, the black part is the HashMap structure, and the red arrows are the forward links of the doubly linked list (the reverse links are not drawn). You can clearly see that the data was accessed in the order 1 -> 3 -> 5 -> 6 -> 10. After each access, we only need to re-link the affected nodes.
HashMap + doubly linked list
The implementation code is as follows:
```java
import java.util.HashMap;

/**
 * LRU (Least Recently Used) cache.
 * Reads look up the key in the HashMap. Updates touch both the HashMap and the
 * doubly linked list, which is always kept in order of access time.
 *
 * @author wjg
 */
public class LRUCache {

    /**
     * Test program. The access sequence is
     * [[1,1], [2,2], [1], [3,3], [2], [4,4], [1], [3], [4]]:
     * number pairs call put, single numbers call get.
     * Expected output: 1, -1, -1, 3, 4 (-1 means a cache miss).
     */
    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1)); // 1
        cache.put(3, 3);                  // evicts key 2
        System.out.println(cache.get(2)); // -1
        cache.put(4, 4);                  // evicts key 1
        System.out.println(cache.get(1)); // -1
        System.out.println(cache.get(3)); // 3
        System.out.println(cache.get(4)); // 4
    }

    private final int capacity;
    private HashMap<Integer, Entry> map;
    private Entry head; // sentinel on the least recently used end
    private Entry tail; // sentinel on the most recently used end

    public LRUCache(int capacity) {
        this.capacity = capacity;
        map = new HashMap<Integer, Entry>((int) (capacity / 0.75 + 1), 0.75f);
        head = new Entry(0, 0);
        tail = new Entry(0, 0);
        head.next = tail;
        tail.prev = head;
    }

    /** Returns the value for key, or -1 if the key is not in the cache. */
    public int get(int key) {
        if (map.containsKey(key)) {
            Entry entry = map.get(key);
            popToTail(entry);
            return entry.value;
        }
        return -1;
    }

    /**
     * Inserts or updates a value in the cache.
     * @param key   the key to update
     * @param value the value to store
     */
    public void put(int key, int value) {
        if (map.containsKey(key)) {
            Entry entry = map.get(key);
            entry.value = value;
            popToTail(entry);
        } else {
            Entry newEntry = new Entry(key, value);
            if (map.size() >= capacity) {
                Entry first = removeFirst(); // evict the least recently used entry
                map.remove(first.key);
            }
            addToTail(newEntry);
            map.put(key, newEntry);
        }
    }

    /** Node of the doubly linked list. */
    class Entry {
        int key;
        int value;
        Entry prev;
        Entry next;

        Entry(int key, int value) {
            this.key = key;
            this.value = value;
        }
    }

    /** Unlinks an existing entry and moves it to the tail (most recently used). */
    private void popToTail(Entry entry) {
        Entry prev = entry.prev;
        Entry next = entry.next;
        prev.next = next;
        next.prev = prev;
        Entry last = tail.prev;
        last.next = entry;
        tail.prev = entry;
        entry.prev = last;
        entry.next = tail;
    }

    /** Removes and returns the least recently used entry (right after head). */
    private Entry removeFirst() {
        Entry first = head.next;
        Entry second = first.next;
        head.next = second;
        second.prev = head;
        return first;
    }

    /** Appends a new entry at the tail (most recently used). */
    private void addToTail(Entry entry) {
        Entry last = tail.prev;
        last.next = entry;
        tail.prev = entry;
        entry.prev = last;
        entry.next = tail;
    }
}
```
Each method and member variable is documented with a comment, so little further explanation is needed.
It’s worth noting that the Java API already provides a data type with exactly the functionality we need: the LinkedHashMap class. Internally, it is also implemented as a HashMap plus a doubly linked list. Implementing LRU with this class is much simpler.
```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A simpler and more practical LRU cache built on LinkedHashMap.
 * LinkedHashMap can maintain entries in access order, internally using a
 * HashMap plus a doubly linked list. We only need to override the
 * removeEldestEntry method: when it returns true, LinkedHashMap removes
 * the eldest (least recently used) node.
 *
 * @author wjg
 */
public class LRUCacheSimple {

    public static void main(String[] args) {
        LRUCacheSimple cache = new LRUCacheSimple(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1)); // 1
        cache.put(3, 3);                  // evicts key 2
        System.out.println(cache.get(2)); // -1
        cache.put(4, 4);                  // evicts key 1
        System.out.println(cache.get(1)); // -1
        System.out.println(cache.get(3)); // 3
        System.out.println(cache.get(4)); // 4
    }

    private LinkedHashMap<Integer, Integer> map;
    private final int capacity;

    public LRUCacheSimple(int capacity) {
        this.capacity = capacity;
        // accessOrder = true keeps entries ordered by access time
        map = new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity;
            }
        };
    }

    public int get(int key) {
        return map.getOrDefault(key, -1);
    }

    public void put(int key, int value) {
        map.put(key, value);
    }
}
```
Simply override LinkedHashMap's removeEldestEntry method to return true when the cache is over capacity, and the oldest entry is removed automatically.
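The key is the three-argument LinkedHashMap constructor: passing accessOrder = true makes iteration follow access order rather than insertion order. A standalone demonstration (illustrative, not part of the article's code):

```java
import java.util.LinkedHashMap;

// Demonstrates accessOrder = true: iteration follows access order, not insertion order.
public class AccessOrderDemo {
    public static void main(String[] args) {
        LinkedHashMap<Integer, String> m = new LinkedHashMap<>(16, 0.75f, true);
        m.put(1, "a");
        m.put(2, "b");
        m.put(3, "c");
        m.get(1);                       // touching key 1 moves it to the most-recent end
        System.out.println(m.keySet()); // prints [2, 3, 1]: key 2 is now the eldest
    }
}
```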
This is exactly LeetCode 146 (LRU Cache): how to implement this algorithm.
Author: Mr King. Link: www.jianshu.com/p/b1ab4a170… Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please indicate the source.