Small talk
Recently, a very popular interview-related project on GitHub made it onto the Trending front page within just a few days. The project collects interview questions and answers from several large Internet companies in China, and many of its algorithm problems are worth working through. This article focuses on the LRU cache question from Ali's interviews. Answers are attached on GitHub, but unfortunately only in Python and C++ versions, with no Java version. What the heck, Java is the number one language in the world!!
LRU caching mechanism
Design and implement an LRU (Least Recently Used) cache data structure. It should support the following operations: get and put.
get(key) – Gets the value of the key if it exists in the cache (the value is always positive); otherwise returns -1.
put(key, value) – Sets or inserts the value if the key does not exist. When the cache reaches its capacity, it should invalidate the least recently used item before inserting a new one.
Why practice on LeetCode?
LRU is an abbreviation of Least Recently Used, and the corresponding LeetCode problem is 146. LRU Cache. To verify our solution, let's go straight to the original LeetCode problem.
The original problem
146. LRU Cache
Design and implement a data structure for Least Recently Used (LRU) cache. It should support the following operations: get and put.
get(key) – Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1. put(key, value) – Set or insert the value if the key is not already present. When the cache reached its capacity, it should invalidate the least recently used item before inserting a new item.
The cache is initialized with a positive capacity.
Follow up: Could you do both operations in O(1) time complexity?
Example:
LRUCache cache = new LRUCache( 2 /* capacity */ );
cache.put(1, 1);
cache.put(2, 2);
cache.get(1);       // returns 1
cache.put(3, 3);    // evicts key 2
cache.get(2);       // returns -1 (not found)
cache.put(4, 4);    // evicts key 1
cache.get(1);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.get(4);       // returns 4
146. LRU Cache (Chinese version, translated)
Design and implement an LRU (least recently used) caching mechanism. It should support the following operations: get and put.
get(key) – Gets the value (always positive) if the key exists in the cache; otherwise returns -1.
put(key, value) – Writes the value if the key does not exist. When the cache reaches its capacity, the least recently used entry is deleted before the new data is written, to make room for it.
Advanced:
Can you do it in O(1) time?
Example:
LRUCache cache = new LRUCache(2 /* Cache capacity */);
cache.put(1, 1);
cache.put(2, 2);
cache.get(1);       // returns 1
cache.put(3, 3);
cache.get(2);       // returns -1 (not found)
cache.put(4, 4);
cache.get(1);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.get(4);       // returns 4
- The problem is filed under the Design category on LeetCode.
Method 1: Implementing LRU with a doubly linked list queue
Concept design
Given the "least recently used" requirement in the problem, a queue-like structure naturally comes to mind. To make insertion and lookup as fast as possible and keep both get and put at O(1) time, we combine a HashMap (for O(1) key lookup) with a doubly linked list serving as the queue; a singly linked list would require traversal to remove an arbitrary node, so its time complexity is too high. Once the doubly linked list queue is in place, two points need to be handled:
1. When put pushes the cache past its capacity, delete the oldest (longest-unused) node, i.e. the tail node of the queue: recently used nodes are kept at the head, so the tail is the least recently used.
2. When get retrieves a node, move that node to the head of the queue, because it has just been used.
Coding practices
import java.util.HashMap;

public class LRUCache {

    /**
     * Node: the doubly linked list node used to hold a cache entry.
     * key and value store the entry; before and after point to the
     * previous and next nodes of the current node.
     */
    class Node {
        int key;
        int value;
        Node before;
        Node after;
    }

    /**
     * Maps a key to its node so lookups are O(1).
     */
    private HashMap<Integer, Node> cache = new HashMap<Integer, Node>();

    /**
     * Maximum capacity; beyond it the oldest unused node is deleted.
     */
    private int capacity;

    /**
     * Sentinel head and tail nodes of the doubly linked list.
     */
    private Node head, tail;

    public LRUCache(int capacity) {
        this.capacity = capacity;

        head = new Node();
        head.before = null;

        tail = new Node();
        tail.after = null;

        head.after = tail;
        tail.before = head;
    }

    /**
     * Insert a node right after the head.
     *
     * @param node
     */
    private void addToHead(Node node) {
        node.before = head;
        node.after = head.after;
        head.after.before = node;
        head.after = node;
    }

    private void removeNode(Node node) {
        Node before = node.before;
        Node after = node.after;
        before.after = after;
        after.before = before;
    }

    private void moveToHead(Node node) {
        removeNode(node);
        addToHead(node);
    }

    /**
     * Remove the last valid data node (the one just before the tail sentinel).
     *
     * @return the removed tail node
     */
    private Node popTail() {
        Node res = tail.before;
        this.removeNode(res);
        return res;
    }

    public int get(int key) {
        Node node = cache.get(key);
        if (node == null) {
            return -1; // could also throw an exception here
        }
        // The node was just used, so move it to the head of the queue.
        moveToHead(node);
        return node.value;
    }

    public void put(int key, int value) {
        Node node = cache.get(key);
        if (node == null) {
            Node newNode = new Node();
            newNode.key = key;
            newNode.value = value;

            cache.put(key, newNode);
            addToHead(newNode);

            if (cache.size() > capacity) {
                // Evict the least recently used node at the tail of the queue.
                Node tail = this.popTail();
                this.cache.remove(tail.key);
            }
        } else {
            node.value = value;
            moveToHead(node);
        }
    }
}
Notes
See the comments in the code.
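As a quick sanity check, the call sequence from the problem's example can be replayed against the class above. The small harness below (the LRUCacheTest class name and the println calls are mine, purely for illustration) assumes the LRUCache class from Method 1 is available:

public class LRUCacheTest {

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2 /* capacity */);

        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1)); // expected 1
        cache.put(3, 3);                  // evicts key 2
        System.out.println(cache.get(2)); // expected -1 (not found)
        cache.put(4, 4);                  // evicts key 1
        System.out.println(cache.get(1)); // expected -1 (not found)
        System.out.println(cache.get(3)); // expected 3
        System.out.println(cache.get(4)); // expected 4
    }
}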
Method 2: The JDK's LinkedHashMap
Train of thought
In fact, the Java standard library already ships a data structure that can be used directly as an LRU cache: LinkedHashMap, whose underlying implementation follows the same principle as Method 1 (a hash table combined with a doubly linked list).
Coding practices
import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache extends LinkedHashMap<Integer, Integer> {

    private int capacity;

    public LRUCache(int capacity) {
        // accessOrder = true: iteration order follows access order, which is exactly what LRU needs.
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    public int get(int key) {
        return super.getOrDefault(key, -1);
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    @Override
    public boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;
    }
}
Notes
LinkedHashMap provides a removeEldestEntry method that can be overridden. Here is that method as it appears in the LinkedHashMap source code:
/**
* Returns <tt>true</tt> if this map should remove its eldest entry.
* This method is invoked by <tt>put</tt> and <tt>putAll</tt> after
* inserting a new entry into the map. It provides the implementor
* with the opportunity to remove the eldest entry each time a new one
* is added. This is useful if the map represents a cache: it allows
* the map to reduce memory consumption by deleting stale entries.
*
* <p>Sample use: this override will allow the map to grow up to 100
* entries and then delete the eldest entry each time a new entry is
* added, maintaining a steady state of 100 entries.
* <pre>
* private static final int MAX_ENTRIES = 100;
*
* protected boolean removeEldestEntry(Map.Entry eldest) {
* return size() > MAX_ENTRIES;
* }
* </pre>
*
* <p>This method typically does not modify the map in any way,
* instead allowing the map to modify itself as directed by its
* return value. It <i>is</i> permitted for this method to modify
* the map directly, but if it does so, it <i>must</i> return
* <tt>false</tt> (indicating that the map should not attempt any
* further modification). The effects of returning <tt>true</tt>
* after modifying the map from within this method are unspecified.
*
* <p>This implementation merely returns <tt>false</tt> (so that this
* map acts like a normal map - the eldest element is never removed).
*
* @param eldest The least recently inserted entry in the map, or if
* this is an access-ordered map, the least recently accessed
* entry. This is the entry that will be removed it this
* method returns <tt>true</tt>. If the map was empty prior
* to the <tt>put</tt> or <tt>putAll</tt> invocation resulting
* in this invocation, this will be the entry that was just
* inserted; in other words, if the map contains a single
* entry, the eldest entry is also the newest.
* @return <tt>true</tt> if the eldest entry should be removed
* from the map; <tt>false</tt> if it should be retained.
*/
protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
return false;
}
In other words, when a new entry is inserted, the eldest entry can be removed, which is exactly what a cache needs. If the default return value of false is kept, the container never removes the eldest unused entry and has no size limit, so it behaves just like an ordinary Map.
Note also that the Javadoc includes a very helpful demo:
private static final int MAX_ENTRIES = 100;
protected boolean removeEldestEntry(Map.Entry eldest) {
return size() > MAX_ENTRIES;
}
Here the eldest entry is removed as soon as the map grows beyond MAX_ENTRIES, which is why our cache returns size() > capacity to decide whether the eldest node should be removed. One more detail: get must return -1 when the key is absent, which is exactly what getOrDefault(key, -1) does.
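To see this access-order behavior on its own, outside the LeetCode wrapper class, here is a minimal standalone sketch (the LruDemo class name and the capacity of 2 are mine, chosen only for illustration) that builds a LinkedHashMap with accessOrder = true and overrides removeEldestEntry:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {

    public static void main(String[] args) {
        final int capacity = 2; // illustrative capacity

        // accessOrder = true: iteration order is access order, eldest entry first.
        Map<Integer, Integer> cache = new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity;
            }
        };

        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(1);    // touching key 1 makes it the most recently used
        cache.put(3, 3); // size exceeds capacity, so the eldest entry (key 2) is evicted

        System.out.println(cache.keySet()); // expected: [1, 3]
    }
}

This is essentially what Method 2 does, just wrapped in a class exposing the get and put signatures required by the problem.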
Easter egg
The addToHead method adds a node to the head and the popTail method removes a node from the tail. This is the Easter egg.
Conclusion
This article analyzed two solutions to Ali's LRU interview question. My personal guess about the question writer's intent: the question probes your understanding of the LRU idea; once you show you understand it, the interviewer may extend into the source code of LinkedHashMap, and if you know LinkedHashMap well they will casually drill further into its parent class HashMap. Once HashMap comes up, you will surely be asked about load factors, hash collisions, red-black tree splitting and left/right rotations, and then about the differences between HashTable and HashMap... You will find there is always something that can hit a blind spot; a bit scary if you think about it too hard, haha~ Don't panic: scan the QR code and follow my WeChat subscription account, where a series of source-code analyses of important data structures such as LinkedHashMap is on the way. Please look forward to it! Finally, if you found this article helpful or inspiring, give it a thumbs up. 0.0