1. Introduction to LRU

1.1 Overview

Cache resources are usually expensive and limited, while the data set is large, so a small cache should serve as many accesses as possible. The working hypothesis is that data accessed recently is likely to be accessed again soon. Based on this assumption, all data is ordered by access time and the oldest data is evicted, so the cache holds only the hot data. This saves memory while still serving most accesses. The LRU (Least Recently Used) algorithm is a cache replacement algorithm built on this assumption.

1.2 Algorithm Flow

Assume that the cache size is 4 and the write sequence is A, B, C, D, E, D, F. Accesses are divided into write and read operations: a write updates the access time and, once the cache is full, triggers an eviction, while a read only updates the access time. The write/replacement flow proceeds as follows.

Before the cache is full, every write simply stores the data and records the write order. When E is written, the cache is full and E is not present, so the least recently used item, A, is evicted; the cache content is then E, D, C, B. Next, D is written; D is already in the cache, so only its access order is updated, and the cache content is D, E, C, B. Next, F is written; F is not in the cache, so the last item in the cache, B, is evicted, and the cache content becomes F, D, E, C.
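To make the walkthrough concrete, here is a minimal, self-contained sketch (not the implementation from section 2) that replays the access sequence above against a capacity-4 cache built from container/list and a map, printing the cache contents from most to least recently used after each access:

package main

import (
	"container/list"
	"fmt"
)

func main() {
	const capacity = 4
	lru := list.New()                   // front = most recently used
	index := map[string]*list.Element{} // key -> element in the list

	access := func(key string) {
		if ele, ok := index[key]; ok {
			// Already cached: just refresh its position.
			lru.MoveToFront(ele)
		} else {
			// Cache full: evict the least recently used item at the back.
			if lru.Len() == capacity {
				oldest := lru.Back()
				lru.Remove(oldest)
				delete(index, oldest.Value.(string))
			}
			index[key] = lru.PushFront(key)
		}
		// Print the current contents, most recent first.
		fmt.Printf("after %s:", key)
		for e := lru.Front(); e != nil; e = e.Next() {
			fmt.Printf(" %s", e.Value.(string))
		}
		fmt.Println()
	}

	for _, k := range []string{"A", "B", "C", "D", "E", "D", "F"} {
		access(k)
	}
}

The final line of output is "after F: F D E C", matching the walkthrough.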

2. Go Implementation

2.1 Approach

In Go, an LRU cache can be implemented with a list (container/list) and a map. The idea is as follows. On a write, first look the key up in the map: if it is found, update its value and move the corresponding element to the front of the list. If it is not found, write the value into the map and push it to the front of the list; if this takes the map past its maximum size, remove the last element of the list and delete its key from the map.

On a read, look the key up in the map. If it is found, move the corresponding element to the front of the list and return the value.

To ensure concurrency safety, a read/write lock is introduced. In addition, eviction needs to map from a list element back to its map key, so a container object is declared to hold both the key and the value, and both the list and the map store references to the same container object. Atomic counters are introduced to track the numbers of hits and misses.
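Concretely, the supporting types might look like the sketch below. The names (MemCache, entry, maxItemSize, NewMemCache, RemoveOldest) are inferred from the methods shown in section 2.2, and the choice of sync/atomic.Int64 for the counters is an assumption; the exact definitions are in the full code.

import (
	"container/list"
	"sync"
	"sync/atomic"
)

// entry is the container object stored in both the list and the map, so an
// evicted list element also carries the key needed to delete it from the map.
type entry struct {
	key   string
	value interface{}
}

// MemCache is an LRU cache guarded by a read/write lock.
type MemCache struct {
	mutex       sync.RWMutex
	maxItemSize int                           // 0 means no limit
	cacheList   *list.List                    // front = most recently used
	cache       map[interface{}]*list.Element // key -> element in cacheList
	hits, gets  atomic.Int64                  // hit/miss statistics (assumed type)
}

// NewMemCache returns an LRU cache that holds at most maxItemSize entries.
// (Assumed constructor; the real one may differ.)
func NewMemCache(maxItemSize int) *MemCache {
	return &MemCache{
		maxItemSize: maxItemSize,
		cacheList:   list.New(),
		cache:       make(map[interface{}]*list.Element),
	}
}

// RemoveOldest evicts the least recently used entry (the back of the list).
// The caller must hold the write lock.
func (c *MemCache) RemoveOldest() {
	if c.cacheList == nil {
		return
	}
	if ele := c.cacheList.Back(); ele != nil {
		c.cacheList.Remove(ele)
		delete(c.cache, ele.Value.(*entry).key)
	}
}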

2.2 Key Code

See cache for the complete code.

  • Set (write operation)
func (c *MemCache) Set(key string, value interface{}) {
	c.mutex.Lock()
	defer c.mutex.Unlock()
	// Lazily initialize the map and the list on first use.
	if c.cache == nil {
		c.cache = make(map[interface{}]*list.Element)
		c.cacheList = list.New()
	}
	// If the key is already in the map, update its value and move its
	// element to the front of the list.
	if ele, ok := c.cache[key]; ok {
		c.cacheList.MoveToFront(ele)
		ele.Value.(*entry).value = value
		return
	}
	// Otherwise store the value at the front of the list and index it in the map.
	ele := c.cacheList.PushFront(&entry{key: key, value: value})
	c.cache[key] = ele
	// If the capacity limit is exceeded, evict the last element of the list.
	if c.maxItemSize != 0 && c.cacheList.Len() > c.maxItemSize {
		c.RemoveOldest()
	}
}
  • Get (read operation)
func (c *MemCache) Get(key string) (interface{}, bool) {
	c.mutex.RLock()
	defer c.mutex.RUnlock()
	c.gets.Add(1)
	// If the key is cached, move its element to the front of the list
	// and return the value.
	if ele, hit := c.cache[key]; hit {
		c.hits.Add(1)
		c.cacheList.MoveToFront(ele)
		return ele.Value.(*entry).value, true
	}
	return nil, false
}
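To see the cache in action, here is a hypothetical usage sketch; it assumes the NewMemCache constructor from the sketch in section 2.1 (plus an import of fmt) and replays the access sequence from section 1.2:

func main() {
	c := NewMemCache(4) // capacity of 4, as in the walkthrough above
	for _, k := range []string{"A", "B", "C", "D", "E", "D", "F"} {
		c.Set(k, k)
	}
	if _, ok := c.Get("A"); !ok {
		fmt.Println("A was evicted") // A was the least recently used entry
	}
	if v, ok := c.Get("D"); ok {
		fmt.Println("D ->", v) // D is still cached
	}
}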

3. References

wiki_LRU