LRU background

LRU, full name Least Recently Used, is a memory eviction algorithm. The author first encountered it in an undergraduate operating systems course, in the context of virtual memory page replacement.

This classic eviction algorithm is also used elsewhere, most often as a cache eviction strategy. A cache speeds up queries by loading the hot parts of data persisted on disk into faster memory (the locality principle). But memory cannot hold everything on disk, so what happens when memory is already full while the cache is loading? Which entries should be evicted? The natural answer is to evict the least popular data in memory, and the LRU algorithm defines a way to track exactly this least-recently-used data.

LRU in the operating system

Virtual memory

The operating system acts as an intermediary between user applications and hardware devices, and is responsible for shielding applications from the quirks of the underlying hardware, such as memory modules.



A user application should not have to care how many memory modules the machine has, what model they are, or how much physical memory is installed. Nor should it worry about reading another application's memory, or about writing into another process's memory and corrupting its data. The operating system encapsulates all of this, and the mechanism it uses is virtual memory.

This encapsulation of the underlying hardware, exposing APIs to the applications above, reflects a core idea in computer science: layering. Each layer performs only its own encapsulation, hides the heterogeneity of whatever lies beneath it, and presents a unified abstract API upward. Learning to abstract is central to how programmers think; the operating system, the TCP/IP network stack, and Java's Spring container are all embodiments of this idea.

The operating system's virtual memory uses a page table, sitting between the hardware and user applications, to provide both a fixed memory size and isolation between the memory spaces of different processes. This article focuses on the fixed memory size. Taking a 32-bit operating system as an example: no matter how many gigabytes of RAM are installed, each process sees a 4 GB address space (user mode plus kernel mode). If the physical memory is smaller than that, some data lives in RAM while the rest is stored on disk, which raises the classic philosophical question: who stays in memory, and who goes to disk?
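As a toy illustration of the two facts above (the 4 GB address space of a 32-bit process, and page-table translation between virtual and physical addresses), here is a minimal Python sketch. The page size, the page-table contents, and the `translate` function are all invented for illustration; real hardware does this in the MMU.

```python
PAGE_SIZE = 4096  # hypothetical 4 KiB pages

# Hypothetical page table for one process: virtual page number -> physical frame
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_addr):
    """Translate a virtual address to a physical one via the page table."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        # The page is not resident in memory: this is a page fault,
        # and the OS would have to load it from disk (evicting by LRU if full)
        raise LookupError("page fault on virtual page %d" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(2**32 == 4 * 1024**3)   # a 32-bit address space is exactly 4 GiB
print(translate(4096 + 42))   # virtual page 1 -> frame 3: 3*4096 + 42 = 12330
```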

Virtual memory with LRU

Operating systems often use the LRU algorithm to identify frequently accessed hot data and keep it in memory, while the least recently used pages are evicted to disk. A vivid metaphor: imagine you are an emperor and the operating system is the eunuch manager. Based on your history of flipping name plaques, the manager predicts which concubines you will visit tonight. There are three thousand ladies in the harem and the emperor has limited patience for choosing, so the manager simply removes the concubines whose plaques have been flipped least recently from the plaque pool.
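The eviction rule the metaphor describes can be sketched directly. Below is a minimal, illustrative LRU page-replacement simulator (not how a kernel actually implements it; real systems use approximations such as the clock algorithm):

```python
def lru_pager(reference_string, frames):
    """Count page faults under LRU replacement for a given number of frames."""
    memory = []   # front = least recently used, back = most recently used
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.remove(page)        # re-access: refresh its recency
        else:
            faults += 1
            if len(memory) == frames:  # memory full: evict the LRU page
                memory.pop(0)
        memory.append(page)            # this page is now the most recent
    return faults

# 3 frames, a small made-up reference string
print(lru_pager([7, 0, 1, 2, 0, 3, 0, 4], 3))  # -> 6 page faults
```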

Other page replacement (elimination) algorithms

  • Optimal replacement algorithm (OPT): the eunuch can see the future and knows which concubines will never be favored again, so he evicts them directly. This is impossible in practice, so OPT serves instead as the benchmark against which other eviction algorithms are measured
  • First-in, first-out replacement algorithm (FIFO): the eunuch manager is blunt but scrupulously fair, evicting concubines strictly in the order they entered the pool
  • Least frequently used replacement algorithm (LFU): LRU looks at when each concubine was last visited, while LFU looks at how frequently each has been favored historically. For example, Concubine Hua was visited every day in the past but happens not to have been visited in the last week: LRU might evict her, but LFU would keep her. Zhen Huan, in her early days, was not visited daily, but in the last week she became the emperor's favorite: under LRU she would rank first, but under LFU the new favorite might not even make it into the plaque pool
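To make the FIFO/LRU contrast concrete, here is a small self-contained sketch that counts page faults under both policies on the same reference string. The string and frame count are made up for illustration; note how LRU benefits from re-accesses while FIFO ignores them:

```python
def count_faults(refs, frames, policy):
    """Count page faults for a toy 'fifo' or 'lru' replacement policy."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            if policy == "lru":      # LRU refreshes recency on a hit
                memory.remove(page)
                memory.append(page)
            # FIFO does nothing on a hit: eviction order is arrival order
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict front (oldest / least recent)
            memory.append(page)
    return faults

refs = [1, 2, 3, 1, 2, 4, 1, 2]  # pages 1 and 2 are "hot"
print(count_faults(refs, 3, "fifo"), count_faults(refs, 3, "lru"))  # 6 4
```

Because pages 1 and 2 keep being re-accessed, LRU keeps them resident and faults less; FIFO evicts them anyway once they are oldest.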

Emmm… so LRU is clearly the more sensible choice…

Redis and LRU

How Redis expires keys

In general, Redis uses the EXPIRE KEY_NAME TIME_IN_SECONDS command to set a fixed expiration time on a key. Redis then removes expired keys from memory via lazy deletion or periodic deletion.

Lazy deletion and periodic deletion in Redis: there are three candidate strategies for handling cache expiration:

  1. Timed deletion: set a timer for each key so that it is deleted the moment it expires. The advantage is memory savings, since expired keys are cleaned up immediately; the disadvantage is that it wastes CPU time on cleanup and creates a large number of timers, with a serious performance impact
  2. Lazy deletion: on every GET, check whether the key has expired. The advantages and disadvantages are the exact opposite of (1): it saves CPU time, but expired keys that are never read again may linger in memory, wasting it (effectively a memory leak)
  3. Periodic deletion: a compromise between (1) and (2), like a cleaner who tidies the room on a schedule; it inherits a mix of the advantages and disadvantages of both

In the end, Redis adopts a combination of lazy deletion and periodic deletion.
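The combined strategy can be sketched in a few lines. This is a toy cache in the spirit of the Redis approach described above, not Redis's actual implementation; the class and method names are invented, and the sweep samples random keys much as Redis's active expire cycle does:

```python
import random
import time

class ExpiringCache:
    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expiration timestamp

    def set(self, key, value, ttl):
        self.data[key] = value
        self.expires[key] = time.time() + ttl

    def _expired(self, key):
        return key in self.expires and time.time() >= self.expires[key]

    def get(self, key):
        # Lazy deletion: purge the key at access time if it has expired
        if self._expired(key):
            del self.data[key], self.expires[key]
        return self.data.get(key)

    def periodic_sweep(self, sample_size=20):
        # Periodic deletion: scan a random sample of keys that carry a TTL,
        # so a single sweep never blocks on the whole keyspace
        keys = random.sample(list(self.expires),
                             min(sample_size, len(self.expires)))
        for key in keys:
            if self._expired(key):
                del self.data[key], self.expires[key]
```

Lazy deletion alone would leave never-read expired keys in `self.data` forever; the periodic sweep bounds that waste without the per-key timers of strategy (1).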

LRU in Redis

The cleanup above applies when Redis still has enough memory and only expired keys need removing. But when Redis serves as a data-cache layer, it also runs into the operating-system scenario described earlier: as more and more data is cached, what happens when memory runs out? Redis provides six eviction policies, set for example via:

config set maxmemory-policy volatile-lru

maxmemory-policy supports six values:

  • volatile-lru: apply LRU only among keys that have an expiration time set
  • allkeys-lru: apply LRU across all keys
  • volatile-random: evict a random key among those with an expiration time set
  • allkeys-random: evict a random key across all keys
  • volatile-ttl: evict the key with the nearest expiration time (smallest TTL)
  • noeviction: never evict; return an error on writes when memory is full (the default)

LRU in a module the author works on

A module the author works on can be understood as encapsulating the query, filter, pagination, and sorting functionality of various data sources, exposing a simple select XXX from view where… interface. It borrows the MySQL view concept and provides read-only functionality. This module also keeps an in-memory cache of query results, and its cache eviction algorithm is likewise the classic LRU.

LRU implementation: doubly linked list + hash table

Since an LRU cache is still a cache, its PUT/GET operations must run in O(1) time, so a hash table is needed to store the key-to-node mapping. We also need a data structure that keeps the most recently visited node at the front and lets us remove the least recently visited node cheaply. Recording a lastVisitTime timestamp on each entry of a Node[] array and sorting before each eviction would cost O(n log n) per sort. To keep eviction at O(1) as well, we use a doubly linked list: the most recently accessed node sits at the head, and the node at the tail is the one kicked out.

To sum up, an LRU implementation combines a hash table with a doubly linked list, trading one redundant layer of data structure for O(1) PUT/GET performance.

The structure of the LRU cache is as follows

GIF demonstration of GET operation process of LRU cache

GIF demonstration of the PUT operation process of the LRU cache

The pseudocode for the LRU cache is as follows:

class LRUCache:
	hash_table   # key -> Node(key, value)
	linked_list  # doubly linked: Node <-> Node <-> Node

	capacity = 10  # once size exceeds capacity, evict according to LRU

	def get(key):
		if key not in hash_table:
			return -1
		else:
			# move the node to the head of the doubly linked list,
			# equivalent to calling put(key, value) once
			return hash_table.get(key).value

	def put(key, value):
		node = Node(key, value)
		if key in hash_table:  # key exists: refresh its position
			linked_list.remove(hash_table.get(key))
			linked_list.addFirst(node)
			hash_table.put(key, node)
		else:  # key does not exist
			hash_table.put(key, node)
			linked_list.addFirst(node)
			if size > capacity:
				last = linked_list.removeLast()
				hash_table.remove(last.key)

reference

  1. LRU – Baidu Baike
  2. LRU policy explanation and implementation – LRU Cache – LeetCode
  3. LRU principle and its Redis implementation – a Toutiao interview question – Zhihu
  4. Redis Design and Implementation
  5. Modern Operating Systems
  6. Redis LRU algorithm