Overview

The LRU (Least Recently Used) algorithm evicts data based on historical access patterns. Its core idea is that “if data has been accessed recently, it has a higher chance of being accessed again in the future”. The algorithm follows three rules:

  1. Insert new data at the head of the linked list;
  2. On a cache hit (that is, when cached data is accessed), move that data to the head of the list;
  3. When the list is full, discard the data at the tail of the list.

The process is as follows (a toy code sketch replaying it comes after the list):

  1. Initially the memory space is empty, so A, B, and C enter in sequence without any problem
  2. When D is added, the memory space is full; according to the LRU algorithm, A, which has been idle the longest, is selected for eviction
  3. When B is referenced again, B becomes active again, and C becomes the least recently used entry
  4. When E is added, the memory space is insufficient again, and C, which has now stayed unused the longest, is evicted. At this point the memory space holds E->B->D (most to least recently used)
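
To make the rules concrete, here is a minimal, self-contained toy sketch (not the real implementation, which uses a doubly linked list so that moves are O(1)) that replays the A to E sequence above:

// Toy illustration of the three LRU rules using a plain Array.
// Index 0 is the most recently used position.
struct ToyLRU {
    private(set) var keys: [String] = []
    let capacity = 3

    mutating func touch(_ key: String) {
        if let index = keys.firstIndex(of: key) {
            keys.remove(at: index)      // rule 2: a hit is pulled out...
        } else if keys.count == capacity {
            keys.removeLast()           // rule 3: full, evict the tail
        }
        keys.insert(key, at: 0)         // rules 1 & 2: (re)insert at the head
    }
}

var lru = ToyLRU()
["A", "B", "C", "D", "B", "E"].forEach { lru.touch($0) }
print(lru.keys)   // ["E", "B", "D"]: the E->B->D state from step 4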

Swift implementation

Check out the implementation at github.com/nicklockwoo… which the walkthrough below follows.

Create a Container, which is a node in the doubly linked list (in the original source it is nested inside the cache class, so Key and Value refer to the cache’s generic parameters):

final class Container {
    var value: Value
    var cost: Int
    let key: Key

    // unowned(unsafe): the cache's dictionary keeps a strong reference to
    // every node, so these links can skip reference counting entirely.
    unowned(unsafe) var prev: Container?
    unowned(unsafe) var next: Container?

    init(value: Value, cost: Int, key: Key) {
        self.value = value
        self.cost = cost
        self.key = key
    }
}

Next, define the operations that remove a node from the list and append one to its tail:

// Remove container from the list
func remove(_ container: Container) {
    if head === container { // If the node being removed is the head, the head moves to the next node
        head = container.next
    }
    if tail === container { // If the node being removed is the tail, the tail moves to the previous node
        tail = container.prev
    }
    // Splice the neighbors together and clear the stale forward link
    container.next?.prev = container.prev
    container.prev?.next = container.next
    container.next = nil
}

// Append container to the tail of the list (the most recently used position)
func append(_ container: Container) {
    assert(container.next == nil)
    if head == nil {
        head = container
    }
    container.prev = tail
    tail?.next = container
    tail = container
}
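
Note that the convention here is the mirror image of the overview: fresh and freshly used nodes go to the tail, so the head is always the least recently used entry. A cache hit therefore becomes a remove followed by an append. A hypothetical helper (not in the library; the real code simply inlines the two calls) makes the idiom explicit:

// Hypothetical convenience: mark a node as most recently used by
// unlinking it and re-appending it at the tail.
func markUsed(_ container: Container) {
    remove(container)
    append(container)
}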

clean() trims the cache rather than emptying it: it keeps evicting the head, the least recently used entry, until the cache is back under both its cost and count limits. Notice the lock here:

func clean() {
    lock.lock()
    defer { lock.unlock() }
    // Evict from the head while either limit is still exceeded
    while totalCost > totalCostLimit || count > countLimit,
          let container = head
    {
        remove(container)
        values.removeValue(forKey: container.key)
        totalCost -= container.cost
    }
}
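
The two limits that drive clean() are set when the cache is created. Assuming an initializer along the lines of the upstream library’s, where both limits default to .max, configuration might look like this:

import Foundation

// Keep at most 100 entries totalling at most ~10 MB of cost.
let imageCache = LRUCache<String, Data>(
    totalCostLimit: 10_000_000,
    countLimit: 100
)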

Now let’s look at the stored properties that tie the whole thing together:

private var values: [Key: Container] = [:]   // key -> node, for O(1) lookup
private unowned(unsafe) var head: Container? // least recently used entry
private unowned(unsafe) var tail: Container? // most recently used entry
var count: Int { values.count }
var isEmpty: Bool { values.isEmpty }
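
The snippets also reference lock, totalCost, and the two limits without declaring them. In the upstream source these all live on the enclosing generic class; a condensed sketch of that declaration (simplified, with Container nested inside as noted earlier) might look like:

import Foundation

final class LRUCache<Key: Hashable, Value> {
    private var values: [Key: Container] = [:]
    private unowned(unsafe) var head: Container?
    private unowned(unsafe) var tail: Container?
    private let lock = NSLock()      // guards all mutable state
    private var totalCost = 0        // running sum of all stored costs
    var totalCostLimit: Int = .max   // clean() evicts while totalCost exceeds this
    var countLimit: Int = .max       // clean() evicts while count exceeds this

    // Container, remove, append, clean, setValue, and removeValue
    // as shown in this walkthrough.
}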

The values dictionary maps each key to its node in the list, which is what makes lookups O(1) while the list order tracks recency. When we have new data to add to the cache, we call setValue:

func setValue(_ value: Value?, forKey key: Key, cost: Int = 0) {
    guard let value = value else { // Passing nil removes any existing entry
        removeValue(forKey: key)
        return
    }
    lock.lock()
    if let container = values[key] {
        // Existing entry: update it in place and move it to the tail
        container.value = value
        totalCost -= container.cost
        container.cost = cost
        remove(container)
        append(container)
    } else {
        // New entry: wrap it in a node and append it at the tail
        let container = Container(
            value: value,
            cost: cost,
            key: key
        )
        values[key] = container
        append(container)
    }
    totalCost += cost
    lock.unlock()
    clean()
}
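
A quick usage sketch, assuming the no-argument initializer from the sketch above (both limits defaulting to .max):

import Foundation

let cache = LRUCache<String, Data>()
let payload = Data("hello".utf8)
cache.setValue(payload, forKey: "greeting", cost: payload.count)
cache.setValue(nil, forKey: "greeting")   // passing nil removes the entry again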

And when you want to remove an entry explicitly:

func removeValue(forKey key: Key) -> Value? {
    lock.lock()
    defer { lock.unlock() }
    guard let container = values.removeValue(forKey: key) else {
        return nil
    }
    remove(container)
    totalCost -= container.cost
    return container.value
}
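
One piece this walkthrough never shows is the read path, i.e. rule 2 from the overview. In the upstream library a hit reuses the same remove-then-append move; roughly:

func value(forKey key: Key) -> Value? {
    lock.lock()
    defer { lock.unlock() }
    guard let container = values[key] else {
        return nil
    }
    // A hit refreshes the entry: unlink it and re-append it at the
    // tail so it becomes the most recently used node.
    remove(container)
    append(container)
    return container.value
}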

The lock taken around every mutation is what makes LRUCache safe to use concurrently from multiple threads.
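
Putting it all together, the eviction walkthrough from the overview can be replayed against the cache itself, assuming the initializer and the value(forKey:) read path sketched above:

let cache = LRUCache<String, Int>(countLimit: 3)
cache.setValue(1, forKey: "A")
cache.setValue(2, forKey: "B")
cache.setValue(3, forKey: "C")
cache.setValue(4, forKey: "D")   // over the limit: A, the head, is evicted
_ = cache.value(forKey: "B")     // hit: B moves to the tail (most recent)
cache.setValue(5, forKey: "E")   // over the limit again: C is evicted
// Surviving keys from most to least recently used: E, B, D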