YYCache is a thread-safe, high-performance cache component open-sourced by the developer ibireme. Its code style is concise and clear, and it has 1600+ stars on GitHub.

Reading its source code helps you build a more complete picture of cache design, and also consolidates related knowledge: doubly linked lists, thread locks, and database operations. If you have not read the YYCache source yet, congratulations — this article should be a great help in understanding it.

Before we get into the source code, let’s take a quick look at how to use the framework.

Basic usage

Taking a cached user name as an example, here is a look at the YYCache APIs:

    // The object to be cached
    NSString *userName = @"Jack";

    // The cache key of the object to be cached
    NSString *key = @"user_name";

    // Create a YYCache instance: userInfoCache
    YYCache *userInfoCache = [YYCache cacheWithName:@"userInfo"];

    // Store the key-value pair
    [userInfoCache setObject:userName forKey:key withBlock:^{
        NSLog(@"caching object succeed");
    }];
    
    // Check whether the cache exists
    [userInfoCache containsObjectForKey:key withBlock:^(NSString * _Nonnull key, BOOL contains) {
        if (contains) {
            NSLog(@"object exists");
        }
    }];

    // Read data according to key
    [userInfoCache objectForKey:key withBlock:^(NSString * _Nonnull key, id<NSCoding> _Nonnull object) {
        NSLog(@"user name : %@", object);
    }];

    // Remove cache based on key
    [userInfoCache removeObjectForKey:key withBlock:^(NSString * _Nonnull key) {
        NSLog(@"remove user name %@",key);
    }];
    
    // Remove all caches
    [userInfoCache removeAllObjectsWithBlock:^{
        NSLog(@"removing all cache succeed");
    }];

    // Remove all caches, with progress
    [userInfoCache removeAllObjectsWithProgressBlock:^(int removedCount, int totalCount) {
        NSLog(@"remove all cache objects: removedCount : %d totalCount : %d", removedCount, totalCount);
    } endBlock:^(BOOL error) {
        if (!error) {
            NSLog(@"remove all cache objects: succeed");
        } else {
            NSLog(@"remove all cache objects: failed");
        }
    }];

Overall, these APIs are similar to NSCache's. Now take a look at the framework's architecture diagram and the division of responsibilities among its members.

Architecture diagram and member responsibility division

Architecture diagram

Division of Responsibilities of members

From the architecture diagram, there are not many members in this component:

  • YYCache: provides the outermost interfaces, which internally call the relevant methods of YYMemoryCache and YYDiskCache.
  • YYMemoryCache: handles the memory cache, which is small in capacity but relatively fast. It is thread safe and supports automatic and manual cache cleanup, among other features.
  • _YYLinkedMap: the doubly linked list class used by YYMemoryCache.
  • _YYLinkedMapNode: the node class used by _YYLinkedMap.
  • YYDiskCache: handles the disk cache, which is large in capacity but relatively slow. It is thread safe and supports asynchronous operations plus automatic and manual cache cleanup, among other features.
  • YYKVStorage: the underlying implementation class of YYDiskCache, used to manage the disk cache.
  • YYKVStorageItem: a class built into YYKVStorage, used internally to encapsulate a single cache record.

Into the code

Now that we know the YYCache architecture diagram and the members' responsibilities, let's start the formal walkthrough in combination with the code. The presentation is divided into the following six parts:

  • YYCache
  • YYMemoryCache
  • YYDiskCache
  • Different scenarios for thread safety
  • Several attempts to improve cache performance
  • Other knowledge points

YYCache

YYCache provides the user with all the outermost cache operation interfaces; internally, these interfaces call the relevant methods of the YYMemoryCache and YYDiskCache objects.

Let’s take a look at YYCache properties and interfaces:

YYCache properties and interfaces


@interface YYCache : NSObject

@property (copy, readonly) NSString *name;               // Cache name
@property (strong, readonly) YYMemoryCache *memoryCache; // Memory cache
@property (strong, readonly) YYDiskCache *diskCache;     // Disk cache

// Whether a cache exists, no callback
- (BOOL)containsObjectForKey:(NSString *)key;
// Whether a cache exists, with a callback
- (void)containsObjectForKey:(NSString *)key withBlock:(nullable void(^)(NSString *key, BOOL contains))block;

// Get a cached object, no callback
- (nullable id<NSCoding>)objectForKey:(NSString *)key;
// Get a cached object, with a callback
- (void)objectForKey:(NSString *)key withBlock:(nullable void(^)(NSString *key, id<NSCoding> object))block;

// Write a cache object, no callback
- (void)setObject:(nullable id<NSCoding>)object forKey:(NSString *)key;
// Write a cache object, with a callback
- (void)setObject:(nullable id<NSCoding>)object forKey:(NSString *)key withBlock:(nullable void(^)(void))block;

// Remove a cache, no callback
- (void)removeObjectForKey:(NSString *)key;
// Remove a cache, with a callback
- (void)removeObjectForKey:(NSString *)key withBlock:(nullable void(^)(NSString *key))block;

// Remove all caches, no callback
- (void)removeAllObjects;
// Remove all caches, with a callback
- (void)removeAllObjectsWithBlock:(void(^)(void))block;
// Remove all caches, with progress and completion callbacks
- (void)removeAllObjectsWithProgressBlock:(nullable void(^)(int removedCount, int totalCount))progress
                                 endBlock:(nullable void(^)(BOOL error))end;

@end

As can be seen above, the YYCache interface is very similar to NSCache's, and each operation is offered with and without a callback. Here's how these interfaces are implemented:

YYCache interface implementation

The interfaces with callbacks are omitted below, because they closely mirror the ones without.

- (BOOL)containsObjectForKey:(NSString *)key {
    
    // Check whether the memory cache exists first, then check whether the disk cache exists
    return [_memoryCache containsObjectForKey:key] || [_diskCache containsObjectForKey:key];
}

- (id<NSCoding>)objectForKey:(NSString *)key {

    // First try the memory cache, then the disk cache
    id<NSCoding> object = [_memoryCache objectForKey:key];

    // If the memory cache misses, look in the disk cache. If the disk cache
    // hits, write the object back into the memory cache. Otherwise return nil
    if (!object) {
        object = [_diskCache objectForKey:key];
        if (object) {
            [_memoryCache setObject:object forKey:key];
        }
    }
    return object;
}


- (void)setObject:(id<NSCoding>)object forKey:(NSString *)key {
    // Write to the memory cache first, then to the disk cache
    [_memoryCache setObject:object forKey:key];
    [_diskCache setObject:object forKey:key];
}


- (void)removeObjectForKey:(NSString *)key {
    // Remove the memory cache first, then the disk cache
    [_memoryCache removeObjectForKey:key];
    [_diskCache removeObjectForKey:key];
}

- (void)removeAllObjects {
    // Remove all memory caches first, then all disk caches
    [_memoryCache removeAllObjects];
    [_diskCache removeAllObjects];
}


From the implementation above we can see that YYCache always accesses the memory cache first and only then the disk cache (for write, read, query, and delete operations alike). The _memoryCache operations have no block callbacks.

It is worth mentioning that during a cache read, if the object cannot be found in the memory cache, the disk cache is searched; if it is found there, the object is written back into the memory cache, ensuring that the next read of the same key can be served from memory, which improves speed.

Next, let's dig into YYMemoryCache.

YYMemoryCache

YYMemoryCache handles small, relatively fast memory caches: it associates the object to be cached with the incoming key, similar to NSCache.

But unlike NSCache, YYMemoryCache internally has:

  • A cache eviction algorithm: the LRU (least recently used) algorithm is used to evict (clean up) the less recently used caches.
  • A cache cleanup strategy: caches are measured along three dimensions — count (number of caches), cost, and age (time since last access). YYMemoryCache provides cache-clearing interfaces for all three dimensions, so users can clear whatever exceeds the limit in a given dimension according to their needs (policies).

One is an eviction algorithm and the other is a set of cleanup dimensions, which at first glance may seem like much the same thing. Let me draw a quick distinction:

The purpose of the eviction algorithm is to separate frequently used caches from rarely used ones, so that when the number of caches hits a limit, the least recently used caches are cleared first. A cache that has not been used for a while is likely to stay unused in the future.

The cache cleanup dimension is the markup added to each cache:

  • If a user needs to delete caches that are more than one day old, YYMemoryCache starts from the least recently used cache and keeps clearing until every cache older than one day is gone.

  • If a user needs to bring the total cache cost down to some value, the least recently used caches are cleared until the total cost is less than or equal to that value.

  • If a user needs to bring the total number of caches down to some value, the least recently used caches are cleared until the total count is less than or equal to that value.

As you can see, whichever dimension is used to clear the cache, clearing starts from the least recently used entries — and "least recently used" is determined by the LRU algorithm.

Now that the difference between the two is clear, let's look at the cache eviction algorithm and the cache cleanup strategy in detail:

The YYMemoryCache cache eviction algorithm

Before going into more detail about this algorithm, I think it’s important to say a few words about the core of the algorithm:

Personally, I think the core of the LRU replacement strategy is the assumption that the more recently a cache has been accessed, the more likely it is to be accessed again. So in this algorithm, the most recently accessed cache is moved to the front, while caches written long ago and accessed rarely drift to the back. With a fixed number of retained caches, the ones that remain are the frequently accessed ones, which improves the cache hit ratio. There is no point holding on to a cache users will hardly ever touch, because keeping it costs resources.

This is the same logic as the recommendation feed in a shopping app: if the home page can only show ten items, then for a programmer it makes sense to recommend things similar to recent purchases — mechanical keyboards, mice, technical books, or monitors — rather than dolls or pens.

So what does the LRU algorithm do?

In YYMemoryCache, a doubly linked list is used as the data structure for storing these caches:

  • When a new cache is written, its node is placed at the head of the list, and the node that was at the head becomes the second node.
  • When an existing cache is accessed, its node is moved to the head of the list, the nodes on either side of its old position are joined together, and the node that was at the head becomes the second node.
  • When the cache is cleared automatically (along some cleanup dimension), nodes are removed one by one from the tail of the list.

This ensures that the caches at the front of the list are the recently written and frequently accessed ones. And since the algorithm always removes from the tail of the list, what remains is the "fresh" part.

Now let's walk through the implementation of this algorithm in code:

YYMemoryCache uses a linked-list node class to store the information of a single memory cache (key, value, cache time, etc.), and then uses a doubly linked list class to store and manage these nodes. The two classes are:

  • _YYLinkedMapNode: a node class in the linked list, which can be viewed as the encapsulation of a single memory cache.
  • _YYLinkedMap: the doubly linked list class that holds and manages all memory caches (nodes).

_YYLinkedMapNode

The _YYLinkedMapNode can be thought of as the encapsulation of a single cache: it contains pointers to the previous and next nodes, the cache's key and corresponding value (object), and the cache's cost and access time.

@interface _YYLinkedMapNode : NSObject {
    @package
    __unsafe_unretained _YYLinkedMapNode *_prev; // retained by dic
    __unsafe_unretained _YYLinkedMapNode *_next; // retained by dic
    id _key;                // The cache key
    id _value;              // The value corresponding to the key
    NSUInteger _cost;       // Cache cost
    NSTimeInterval _time;   // Access time
}
@end

@implementation _YYLinkedMapNode
@end

Let’s look at the bidirectional linked list class:

_YYLinkedMap

@interface _YYLinkedMap : NSObject {
    @package
    CFMutableDictionaryRef _dic;    // Holds the nodes
    NSUInteger _totalCost;          // Total cost
    NSUInteger _totalCount;         // Total number of nodes
    _YYLinkedMapNode *_head;        // The head node of the list
    _YYLinkedMapNode *_tail;        // The tail node of the list
    BOOL _releaseOnMainThread;      // Whether to release on the main thread. Default is NO
    BOOL _releaseAsynchronously;    // Whether to release on a background queue. Default is YES
}

// Insert a node at the head of the list
- (void)insertNodeAtHead:(_YYLinkedMapNode *)node;

// Move a node inside the list to the head of the list
- (void)bringNodeToHead:(_YYLinkedMapNode *)node;

// Remove a node
- (void)removeNode:(_YYLinkedMapNode *)node;

// Remove the tail node of the list and return it
- (_YYLinkedMapNode *)removeTailNode;

// Remove all nodes (released on a background queue by default)
- (void)removeAll;

@end

The list class has a built-in CFMutableDictionaryRef that maps each cache key to its node. The class also keeps the total cost, the total number of nodes, and the head and tail nodes of the list.

Here’s a diagram to see the relationship:

Take a look at the implementation of the _YYLinkedMap interface:

Insert the node into the list header:

- (void)insertNodeAtHead:(_YYLinkedMapNode *)node {

    // Store the node in the dictionary under its key
    CFDictionarySetValue(_dic, (__bridge const void *)(node->_key), (__bridge const void *)(node));

    // Increase the total cost and total cache count
    _totalCost += node->_cost;
    _totalCount++;

    if (_head) {

        // If the list already has a head node, make it the node after the new node
        // (the old first node becomes the second node)
        node->_next = _head;

        // Point the old head node's _prev at the new node
        _head->_prev = node;

        // Make the new node the list's head node (it becomes the current first node)
        _head = node;

    } else {

        // If the list has no head node, the list is empty: on the first insertion,
        // both the head and the tail of the list are set to the current node
        _head = _tail = node;
    }
}

To understand the node-manipulation code, you only need the defining feature of a doubly linked list:

  • Each node has two pointers, one to the node before it and one to the node after it, so every node knows both of its neighbours.
  • The head node's pointer to the node before it is nil, and the tail node's pointer to the node after it is also nil.

For the sake of intuition, compare this abstract structure to kindergarten children holding hands in a line: each child's left hand holds the right hand of the child in front, each child's right hand is held by the left hand of the child behind, and the front child's left hand and the last child's right hand hold no one.

Move a node to the list header:


- (void)bringNodeToHead:(_YYLinkedMapNode *)node {

    // If the node is already the head of the list, return immediately and do nothing
    if (_head == node) return;

    if (_tail == node) {

        // If the node is the tail of the list:
        // 1. Make the node before it the new tail (the second-to-last node becomes the last)
        _tail = node->_prev;

        // 2. Set the new tail node's _next to nil
        _tail->_next = nil;

    } else {

        // If the node is neither head nor tail (an intermediate node):
        // 1. Point the next node's _prev at the previous node
        node->_next->_prev = node->_prev;

        // 2. Point the previous node's _next at the next node
        node->_prev->_next = node->_next;
    }

    // Make the old head node the node after this one (the old first node becomes the second)
    node->_next = _head;

    // The new head node has nothing before it
    node->_prev = nil;

    // Point the old head node's _prev at this node (_head still refers to the old head here)
    _head->_prev = node;

    // Make this node the head of the list
    _head = node;
}

The first time I saw the code above I was confused, but the hand-holding analogy makes it quick to grasp. If you want one child to move to the front of the line, you need to:

  • Have the children before and after that child join hands.
  • Then have that child's right hand take the left hand of the child who used to be first.

This is a little brief, but I believe it helps in understanding the whole process.

It can also be combined with the diagram of the linked list:

The reader can also use this thinking to understand the following code:

To remove a node from a linked list:

- (void)removeNode:(_YYLinkedMapNode *)node {

    // Remove the node stored under this key
    CFDictionaryRemoveValue(_dic, (__bridge const void *)(node->_key));

    // Subtract the node's cost and decrement the total cache count
    _totalCost -= node->_cost;
    _totalCount--;

    // Node operations:
    // 1. Point the next node's _prev at the previous node
    if (node->_next) node->_next->_prev = node->_prev;

    // 2. Point the previous node's _next at the next node
    if (node->_prev) node->_prev->_next = node->_next;

    // 3. If the node is the head of the list, make the node after it the new head
    //    (the second node becomes the first)
    if (_head == node) _head = node->_next;

    // 4. If the node is the tail of the list, make the node before it the new tail
    //    (the second-to-last node becomes the last)
    if (_tail == node) _tail = node->_prev;
}

Remove and return the trailing node:

- (_YYLinkedMapNode *)removeTailNode {

    // Return nil if there is no tail node
    if (!_tail) return nil;

    _YYLinkedMapNode *tail = _tail;

    // Remove the node stored under the tail node's key
    CFDictionaryRemoveValue(_dic, (__bridge const void *)(_tail->_key));

    // Subtract the node's cost and decrement the total cache count
    _totalCost -= _tail->_cost;
    _totalCount--;

    if (_head == _tail) {

        // If the head and tail are the same node, the list has only one node: empty it
        _head = _tail = nil;

    } else {

        // Make the node before the old tail the new tail (the second-to-last becomes the last)
        _tail = _tail->_prev;

        // Set the new tail node's _next to nil
        _tail->_next = nil;
    }
    return tail;
}

OK, now you know the code for the underlying node operations of YYMemoryCache. Now look at how YYMemoryCache uses them.

YYMemoryCache properties and interfaces

//YYMemoryCache.h
@interface YYMemoryCache : NSObject

#pragma mark - Attribute

// Cache name, which defaults to nil
@property (nullable, copy) NSString *name;

// Total number of caches
@property (readonly) NSUInteger totalCount;

// Total cache cost
@property (readonly) NSUInteger totalCost;


#pragma mark - Limit

// The count limit, which defaults to NSUIntegerMax, meaning no limit
@property NSUInteger countLimit;

// The cost limit, which defaults to NSUIntegerMax, meaning no limit
@property NSUInteger costLimit;

// The age limit, which defaults to DBL_MAX, meaning no limit
@property NSTimeInterval ageLimit;

// The interval at which over-limit caches are cleared. Default is 5s
@property NSTimeInterval autoTrimInterval;

// Whether to clear all caches when a memory warning is received. Default is YES
@property BOOL shouldRemoveAllObjectsOnMemoryWarning;

// Whether to clear all caches when entering the background. Default is YES
@property BOOL shouldRemoveAllObjectsWhenEnteringBackground;

// Callback block invoked when a memory warning is received
@property (nullable, copy) void(^didReceiveMemoryWarningBlock)(YYMemoryCache *cache);

// Callback block invoked when entering the background
@property (nullable, copy) void(^didEnterBackgroundBlock)(YYMemoryCache *cache);

// Whether objects are released on the main thread. Default is NO
@property BOOL releaseOnMainThread;

// Whether objects are released asynchronously. Default is YES
@property BOOL releaseAsynchronously;


#pragma mark - Access Methods

// Whether a cache exists
- (BOOL)containsObjectForKey:(id)key;

// Get a cached object
- (nullable id)objectForKey:(id)key;

// Write a cache object
- (void)setObject:(nullable id)object forKey:(id)key;

// Write a cache object and record its cost
- (void)setObject:(nullable id)object forKey:(id)key withCost:(NSUInteger)cost;

// Remove a cache
- (void)removeObjectForKey:(id)key;

// Remove all caches
- (void)removeAllObjects;

#pragma mark - Trim

// =========== Cache clearing interfaces ===========
// Trim the cache down to the specified count
- (void)trimToCount:(NSUInteger)count;

// Trim the cache down to the specified cost
- (void)trimToCost:(NSUInteger)cost;

// Remove caches whose age (time since last access) exceeds the specified value
- (void)trimToAge:(NSTimeInterval)age;

Implementation of YYMemoryCache interface

In YYMemoryCache's initialization method, a _YYLinkedMap instance is created and assigned to the _lru member variable.


- (instancetype)init {
    ...
    _lru = [_YYLinkedMap new];
    ...
}

All cache operations then go through the _lru member variable, because it is the doubly linked list class that actually holds the caches (nodes) underneath. Let's take a look at the implementation of these cache operation interfaces:


// Whether a cache object exists
- (BOOL)containsObjectForKey:(id)key {

    // Check the built-in dictionary for the key
    if (!key) return NO;
    pthread_mutex_lock(&_lock);
    BOOL contains = CFDictionaryContainsKey(_lru->_dic, (__bridge const void *)(key));
    pthread_mutex_unlock(&_lock);
    return contains;
}

// Get a cache object
- (id)objectForKey:(id)key {

    if (!key) return nil;

    pthread_mutex_lock(&_lock);
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    if (node) {
        // If the node exists, update its time information (the last time it was
        // accessed) and move it to the head of the list
        node->_time = CACurrentMediaTime();
        [_lru bringNodeToHead:node];
    }
    pthread_mutex_unlock(&_lock);
    return node ? node->_value : nil;
}
}

// Write a cache object. The cost defaults to 0
- (void)setObject:(id)object forKey:(id)key {
    [self setObject:object forKey:key withCost:0];
}


// Write a cache object and record its cost
- (void)setObject:(id)object forKey:(id)key withCost:(NSUInteger)cost {

    if (!key) return;

    if (!object) {
        [self removeObjectForKey:key];
        return;
    }

    pthread_mutex_lock(&_lock);

    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    NSTimeInterval now = CACurrentMediaTime();
    if (node) {
        // If a node matching the key exists, update its value, cost, and time,
        // and move the node to the head of the list

        // Update the total cost
        _lru->_totalCost -= node->_cost;
        _lru->_totalCost += cost;

        // Update the node
        node->_cost = cost;
        node->_time = now;
        node->_value = object;

        // Move the node to the head of the list
        [_lru bringNodeToHead:node];

    } else {

        // If no node matches the key, create a new node, assign its key, value,
        // cost, and time, and insert it at the head of the list
        node = [_YYLinkedMapNode new];
        node->_cost = cost;
        node->_time = now;
        node->_key = key;
        node->_value = object;

        // Insert the node at the head of the list
        [_lru insertNodeAtHead:node];
    }

    // If the total cost exceeds the limit, trim from the tail of the list until it fits
    if (_lru->_totalCost > _costLimit) {
        dispatch_async(_queue, ^{
            [self trimToCost:_costLimit];
        });
    }

    // If the total count exceeds the limit, remove the tail node of the list
    if (_lru->_totalCount > _countLimit) {
        _YYLinkedMapNode *node = [_lru removeTailNode];
        if (_lru->_releaseAsynchronously) {
            dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                [node class]; //hold and release in queue
            });
        } else if (_lru->_releaseOnMainThread && !pthread_main_np()) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [node class]; //hold and release in queue
            });
        }
    }
    pthread_mutex_unlock(&_lock);
}

// Remove a cache object
- (void)removeObjectForKey:(id)key {

    if (!key) return;

    pthread_mutex_lock(&_lock);
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    if (node) {

        // Calls the linked list's removeNode: method internally
        [_lru removeNode:node];
        if (_lru->_releaseAsynchronously) {
            dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                [node class]; //hold and release in queue
            });
        } else if (_lru->_releaseOnMainThread && !pthread_main_np()) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [node class]; //hold and release in queue
            });
        }
    }
    pthread_mutex_unlock(&_lock);
}


// Calls the linked list's removeAll method internally
- (void)removeAllObjects {
    pthread_mutex_lock(&_lock);
    [_lru removeAll];
    pthread_mutex_unlock(&_lock);
}

The implementations above cover the cache's query, write, read, and removal operations. Let's take a look at the cache cleanup strategy.

Cache clearing policy of YYMemoryCache

As mentioned above, YYCache can clear caches along three dimensions: total count, total cost, and time since last access. Cleanup along each dimension can be performed both automatically and manually.

Automatic cache clearing

Automatic cache cleanup starts right after YYMemoryCache is initialized, and is implemented as a recursive call:

//YYMemoryCache.m
- (instancetype)init {

    ...

    // Start clearing on a timer
    [self _trimRecursively];
    ...
}

// Recursive cleanup with interval _autoTrimInterval, first scheduled right after initialization
- (void)_trimRecursively {

    __weak typeof(self) _self = self;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(_autoTrimInterval * NSEC_PER_SEC)),

        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{

        __strong typeof(_self) self = _self;
        if (!self) return;

        // Clean up in the background
        [self _trimInBackground];

        // Call itself: the recursive step
        [self _trimRecursively];

    });
}

// Clear every cache that exceeds its limit, in the order: cost, count, age
- (void)_trimInBackground {
    dispatch_async(_queue, ^{

        [self _trimToCost:self->_costLimit];
        [self _trimToCount:self->_countLimit];
        [self _trimToAge:self->_ageLimit];

    });
}

As you can see, YYMemoryCache automatically clears caches in the order: cost, count, age. Let's look at the code for clearing by count (the other two methods are similar, so they are omitted here):

//YYMemoryCache.m

// Reduce the number of memory caches to at most the number passed in;
// if the value passed in is 0, the entire memory cache is deleted
- (void)_trimToCount:(NSUInteger)countLimit {

    BOOL finish = NO;

    pthread_mutex_lock(&_lock);

    // If the passed-in parameter is 0, delete all memory caches
    if (countLimit == 0) {

        [_lru removeAll];
        finish = YES;

    } else if (_lru->_totalCount <= countLimit) {

        // If the total count is already no more than the limit, finish without clearing
        finish = YES;
    }
    pthread_mutex_unlock(&_lock);
    if (finish) return;

    NSMutableArray *holder = [NSMutableArray new];
    while (!finish) {
        // A return value of 0 means the lock was acquired and the work can proceed;
        // otherwise sleep for 10 ms and try again
        if (pthread_mutex_trylock(&_lock) == 0) {
            if (_lru->_totalCount > countLimit) {
                _YYLinkedMapNode *node = [_lru removeTailNode];
                if (node) [holder addObject:node];
            } else {
                finish = YES;
            }
            pthread_mutex_unlock(&_lock);
        } else {
            usleep(10 * 1000); //10 ms
        }
    }
    if (holder.count) {
        dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
        dispatch_async(queue, ^{
            [holder count]; // release in queue
        });
    }
}

Manual cache clearing

In fact, the three clearing methods above are wrapped as public interfaces on YYMemoryCache, so users can also go through YYCache's memoryCache property to manually clear, along the corresponding dimension, whatever fails the passed-in standard:

//YYMemoryCache.h

// =========== Cache clearing interface ===========
// Clear the cache to the specified number
- (void)trimToCount:(NSUInteger)count;

// Clear the cache to the specified overhead
- (void)trimToCost:(NSUInteger)cost;

// Remove objects that were last accessed more than the specified age ago
- (void)trimToAge:(NSTimeInterval)age;

Take a look at their implementation:

// Clear the cache to the specified number
- (void)trimToCount:(NSUInteger)count {
    if (count == 0) {
        [self removeAllObjects];
        return;
    }
    [self _trimToCount:count];
}

// Clear the cache to the specified overhead
- (void)trimToCost:(NSUInteger)cost {
    [self _trimToCost:cost];
}

// Remove objects that were last accessed more than the specified age ago
- (void)trimToAge:(NSTimeInterval)age {
    [self _trimToAge:age];
}

YYDiskCache

YYDiskCache handles the disk cache: large in capacity, but slower to access. It is thread safe and supports asynchronous operations. As YYCache's second-level cache, it resembles the first-level cache, YYMemoryCache:

  • Both provide interfaces for querying, writing, reading, and deleting caches.
  • Neither operates on the cache directly; each works through another class (YYKVStorage here, _YYLinkedMap for the memory cache).
  • Both use the LRU algorithm to evict caches.
  • Both support trimming nonconforming caches along the cost, count, and age dimensions.

The differences between YYDiskCache and YYMemoryCache are:

    1. Caching takes different forms depending on the size of the cached data:
    • SQLite only: for small caches, both the data and the metadata are stored in the database.
    • File + database: for large caches, the data is written to the file system and its metadata is stored in the database.
    2. In addition to the cost, count, and age dimensions, a disk capacity dimension is added.

Although the source effectively uses only two storage forms, the cache type enumeration defines three:

typedef NS_ENUM(NSUInteger, YYKVStorageType) {
    
    YYKVStorageTypeFile   = 0,
    YYKVStorageTypeSQLite = 1,
    YYKVStorageTypeMixed  = 2,
};

That is, I only found the second and third storage forms; I could not find a pure file storage (YYKVStorageTypeFile) implementation that bypasses the database entirely. In practice, YYKVStorageTypeFile and YYKVStorageTypeMixed behave the same way: both store the data in a file and place the metadata in the database.

So when is the type set to YYKVStorageTypeFile? Look at the initializers:

//YYDiskCache.m

- (instancetype)init {
    @throw [NSException exceptionWithName:@"YYDiskCache init error" reason:@"YYDiskCache must be initialized with a path. Use 'initWithPath:' or 'initWithPath:inlineThreshold:' instead." userInfo:nil];
    return [self initWithPath:@"" inlineThreshold:0];
}

- (instancetype)initWithPath:(NSString *)path {
    return [self initWithPath:path inlineThreshold:1024 * 20]; // 20KB
}

- (instancetype)initWithPath:(NSString *)path
             inlineThreshold:(NSUInteger)threshold {

    ...
    YYKVStorageType type;
    if (threshold == 0) {
        type = YYKVStorageTypeFile;
    } else if (threshold == NSUIntegerMax) {
        type = YYKVStorageTypeSQLite;
    } else {
        type = YYKVStorageTypeMixed;
    }
    ...
}

As the code above shows, passing 0 as the second parameter of the designated initializer initWithPath:inlineThreshold: yields YYKVStorageTypeFile. The more common initializer, initWithPath:, passes 20KB to the designated initializer, which sets the type to YYKVStorageTypeMixed.

And I can’t figure out how the metadata would be stored if there were only file caches. If readers know, please let me know, thank you very much ~~

For the rest of this article, the "file + database" form mentioned above is referred to simply as the file cache.

In terms of interface design, YYDiskCache closely mirrors YYMemoryCache. However, accessing large files can be time-consuming, so besides keeping the same synchronous interfaces as YYMemoryCache, the framework author adds block-based asynchronous variants to avoid blocking the calling thread. YYDiskCache's interface (comments omitted):

//YYDiskCache.h

- (BOOL)containsObjectForKey:(NSString *)key;
- (void)containsObjectForKey:(NSString *)key withBlock:(void(^)(NSString *key, BOOL contains))block;


- (nullable id<NSCoding>)objectForKey:(NSString *)key;
- (void)objectForKey:(NSString *)key withBlock:(void(^)(NSString *key, id<NSCoding> _Nullable object))block;


- (void)setObject:(nullable id<NSCoding>)object forKey:(NSString *)key;
- (void)setObject:(nullable id<NSCoding>)object forKey:(NSString *)key withBlock:(void(^)(void))block;


- (void)removeObjectForKey:(NSString *)key;
- (void)removeObjectForKey:(NSString *)key withBlock:(void(^)(NSString *key))block;


- (void)removeAllObjects;
- (void)removeAllObjectsWithBlock:(void(^)(void))block;
- (void)removeAllObjectsWithProgressBlock:(nullable void(^)(int removedCount, int totalCount))progress
                                 endBlock:(nullable void(^)(BOOL error))end;


- (NSInteger)totalCount;
- (void)totalCountWithBlock:(void(^)(NSInteger totalCount))block;


- (NSInteger)totalCost;
- (void)totalCostWithBlock:(void(^)(NSInteger totalCost))block;


#pragma mark - Trim
- (void)trimToCount:(NSUInteger)count;
- (void)trimToCount:(NSUInteger)count withBlock:(void(^)(void))block;


- (void)trimToCost:(NSUInteger)cost;
- (void)trimToCost:(NSUInteger)cost withBlock:(void(^)(void))block;

- (void)trimToAge:(NSTimeInterval)age;
- (void)trimToAge:(NSTimeInterval)age withBlock:(void(^)(void))block;

As the interface code above shows, YYDiskCache and YYMemoryCache are very similar in interface design. However, YYDiskCache has one very important property that serves as the watershed between SQLite storage and file storage:

//YYDiskCache.h
@property (readonly) NSUInteger inlineThreshold;

The default value of this property is 20480 bytes, i.e. 20KB. If the cached data is larger than this threshold, it goes into a file; otherwise it is stored in SQLite. Let's see how the property is used:

First we will see this property in YYDiskCache’s specified initialization method:

//YYDiskCache.m
- (instancetype)initWithPath:(NSString *)path
             inlineThreshold:(NSUInteger)threshold {
    ...
    _inlineThreshold = threshold;
    ...
}

This is where _inlineThreshold is assigned, and this is the only assignment. If the size of the write cache is larger than this threshold, then use the file cache:

//YYDiskCache.m
- (void)setObject:(id<NSCoding>)object forKey:(NSString *)key {
   
    ...
    NSString *filename = nil;
    if (_kv.type != YYKVStorageTypeSQLite) {
        // If the value exceeds the threshold, generate a file name so that filename is not nil
        if (value.length > _inlineThreshold) {
            filename = [self _filenameForKey:key];
        }
    }
    
    Lock();
    // Inside, the method checks whether filename is nil: if nil, SQLite is used; otherwise the file cache is used
    [_kv saveItemWithKey:key value:value filename:filename extendedData:extendedData];
    Unlock();
}
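The routing decision above can be captured in a few lines. A hedged Python sketch (helper names are invented for illustration; YYDiskCache itself derives the file name from a hash of the key): values above the threshold get a file name and go to the file system, while smaller ones stay inline in SQLite.

```python
import hashlib

INLINE_THRESHOLD = 20 * 1024  # 20 KB, YYDiskCache's default

def filename_for_key(key: str) -> str:
    # Stand-in for _filenameForKey:; it only needs to be deterministic
    return hashlib.md5(key.encode()).hexdigest()

def route_for_value(key: str, value: bytes, storage_type: str = "mixed"):
    """Return a filename or None; a non-None filename means file storage."""
    filename = None
    if storage_type != "sqlite":           # mirrors _kv.type != YYKVStorageTypeSQLite
        if len(value) > INLINE_THRESHOLD:  # mirrors value.length > _inlineThreshold
            filename = filename_for_key(key)
    return filename
```

A value of exactly 20KB stays in SQLite; only strictly larger values spill into a file.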

Now we know that the biggest difference between YYDiskCache and YYMemoryCache is the storage form. Careful readers will notice that the write method above (saveItemWithKey:value:filename:extendedData:) actually belongs to _kv. This _kv is the YYKVStorage instance mentioned earlier, assigned in YYDiskCache's initializer:

//YYDiskCache.m
- (instancetype)initWithPath:(NSString *)path
             inlineThreshold:(NSUInteger)threshold {
    ...
    
    YYKVStorage *kv = [[YYKVStorage alloc] initWithPath:path type:type];
    if (!kv) return nil;
    _kv = kv;
    ...
}

Similarly, for the other two interfaces, the _kv method is called internally:

- (BOOL)containsObjectForKey:(NSString *)key {
    if (!key) return NO;
    Lock();
    BOOL contains = [_kv itemExistsForKey:key];
    Unlock();
    return contains;
}

- (void)removeObjectForKey:(NSString *)key {
    if (!key) return;
    Lock();
    [_kv removeItemForKey:key];
    Unlock();
}

Now it's time to look at YYKVStorage's interface and implementation.

YYKVStorage

The YYKVStorage instance is responsible for storing and managing all disk caches. Just as _YYLinkedMap in YYMemoryCache wraps each cache entry in a node class, _YYLinkedMapNode, YYKVStorage wraps each disk cache entry in a class of its own: YYKVStorageItem, which holds a cache entry's information (key, value, file name, size, etc.):

//YYKVStorageItem.h

@interface YYKVStorageItem : NSObject

@property (nonatomic, strong) NSString *key;                  // key
@property (nonatomic, strong) NSData *value;                  // value
@property (nullable, nonatomic, strong) NSString *filename;   // file name
@property (nonatomic) int size;                               // size of the value in bytes
@property (nonatomic) int modTime;                            // modification timestamp
@property (nonatomic) int accessTime;                         // last access timestamp
@property (nullable, nonatomic, strong) NSData *extendedData; // extended data

@end

Since each cache entry is encapsulated as a YYKVStorageItem instance, YYKVStorage, as the manager of the cache, must expose interfaces for operating on YYKVStorageItem:

//YYKVStorage.h

// Write an item
- (BOOL)saveItem:(YYKVStorageItem *)item;

// Write a key-value pair with an NSData object
- (BOOL)saveItemWithKey:(NSString *)key value:(NSData *)value;

// Write a key-value pair, including the file name and data information
- (BOOL)saveItemWithKey:(NSString *)key
                  value:(NSData *)value
               filename:(nullable NSString *)filename
           extendedData:(nullable NSData *)extendedData;

#pragma mark - Remove Items

// Remove the item of a key
- (BOOL)removeItemForKey:(NSString *)key;

// Remove items with multiple keys
- (BOOL)removeItemForKeys:(NSArray<NSString *> *)keys;

// Remove items larger than the parameter size
- (BOOL)removeItemsLargerThanSize:(int)size;

// Remove items whose time is earlier than the parameter time
- (BOOL)removeItemsEarlierThanTime:(int)time;

// Remove item so that the total cache size is smaller than the size parameter
- (BOOL)removeItemsToFitSize:(int)maxSize;

// Remove item so that the cache number is smaller than the size parameter
- (BOOL)removeItemsToFitCount:(int)maxCount;

// Remove all items
- (BOOL)removeAllItems;

// Remove all items, with progress and end blocks
- (void)removeAllItemsWithProgressBlock:(nullable void(^)(int removedCount, int totalCount))progress
                               endBlock:(nullable void(^)(BOOL error))end;


#pragma mark - Get Items
// Read the item corresponding to the key parameter
- (nullable YYKVStorageItem *)getItemForKey:(NSString *)key;

// Read data corresponding to parameter key
- (nullable NSData *)getItemValueForKey:(NSString *)key;

// Read the item array corresponding to the parameter array
- (nullable NSArray<YYKVStorageItem *> *)getItemForKeys:(NSArray<NSString *> *)keys;

// Read the item dictionary corresponding to the parameter array
- (nullable NSDictionary<NSString *, NSData *> *)getItemValueForKeys:(NSArray<NSString *> *)keys;

Next, the implementation of the write interfaces. As a reminder, the three declarations are:

// Write an item
- (BOOL)saveItem:(YYKVStorageItem *)item;

// Write a key-value pair with an NSData object
- (BOOL)saveItemWithKey:(NSString *)key value:(NSData *)value;

// Write a key-value pair, including the file name and data information
- (BOOL)saveItemWithKey:(NSString *)key
                  value:(NSData *)value
               filename:(nullable NSString *)filename
           extendedData:(nullable NSData *)extendedData;

The first two methods simply forward to the third, fullest one. Before walking through the write code in detail, here is the general logic of a write, to help you follow the whole YYDiskCache write path:

  1. First check whether the passed key and value are valid; if not, return NO immediately and the write fails.
  2. Then check whether type == YYKVStorageTypeFile while the file name is empty (or nil): if so, return NO immediately and the write fails.
  3. Then branch on whether filename is empty:
  • If it is not empty: the value is written to the file system, and the metadata (including the key, but not the value itself) is written to the database.
  • If it is empty:
    • If the cache type is not YYKVStorageTypeSQLite, look up and delete any old cache file for this key first.
    • Then write the key, the value, and the rest of the metadata into the database.
- (BOOL)saveItem:(YYKVStorageItem *)item {
    return [self saveItemWithKey:item.key value:item.value filename:item.filename extendedData:item.extendedData];
}

- (BOOL)saveItemWithKey:(NSString *)key value:(NSData *)value {
    return [self saveItemWithKey:key value:value filename:nil extendedData:nil];
}

- (BOOL)saveItemWithKey:(NSString *)key value:(NSData *)value filename:(NSString *)filename extendedData:(NSData *)extendedData {
    
    if (key.length == 0 || value.length == 0) return NO;
    
    if (_type == YYKVStorageTypeFile && filename.length == 0) {
        return NO;
    }
    
    if (filename.length) {
        
        // The file name is not empty, so write the value to a file
        if (![self _fileWriteWithName:filename data:value]) {
            return NO;
        }
        
        // Write the metadata to the database
        if (![self _dbSaveWithKey:key value:value fileName:filename extendedData:extendedData]) {
            // If saving the metadata fails, delete the file just written
            [self _fileDeleteWithName:filename];
            return NO;
        }
        
        return YES;
        
    } else {
        
        // The file name is empty, so no file is written
        if (_type != YYKVStorageTypeSQLite) {
            // If the cache type is not SQLite-only, look up and delete any old file for this key
            NSString *filename = [self _dbGetFilenameWithKey:key];
            if (filename) {
                [self _fileDeleteWithName:filename];
            }
        }
        // Write the metadata and the value itself into the database
        return [self _dbSaveWithKey:key value:value fileName:nil extendedData:extendedData];
    }
}

As the code above shows, the bottom-level write method is _dbSaveWithKey:value:fileName:extendedData:, and it is called in two places:

  • When storing the cache as a file (and database)
  • When storing the cache as a database

But even though it is called twice, the calls differ in the filename argument: the second call passes nil. Let's see how _dbSaveWithKey:value:fileName:extendedData: distinguishes the two cases internally:

  • If fileName is empty, no file has been written, so the value itself is written into the database.
  • If fileName is not empty, the value already lives in an external file, so it is not written into the database.

Here’s a look at the code:

// Save into the database
- (BOOL)_dbSaveWithKey:(NSString *)key value:(NSData *)value fileName:(NSString *)fileName extendedData:(NSData *)extendedData {
    
    // SQL statement
    NSString *sql = @"insert or replace into manifest (key, filename, size, inline_data, modification_time, last_access_time, extended_data) values (?1, ?2, ?3, ?4, ?5, ?6, ?7);";
    
    sqlite3_stmt *stmt = [self _dbPrepareStmt:sql];
    
    if (!stmt) return NO;
    
    int timestamp = (int)time(NULL);
    
    //key
    sqlite3_bind_text(stmt, 1, key.UTF8String, -1, NULL);
    
    //filename
    sqlite3_bind_text(stmt, 2, fileName.UTF8String, -1, NULL);
    
    //size
    sqlite3_bind_int(stmt, 3, (int)value.length);
    
    //inline_data
    if (fileName.length == 0) {
        
        // The file name is empty, so store the value itself in the database
        sqlite3_bind_blob(stmt, 4, value.bytes, (int)value.length, 0);
        
    } else {
        
        // The file name is not empty, so do not store the value in the database
        sqlite3_bind_blob(stmt, 4, NULL, 0, 0);
    }
    
    //modification_time
    sqlite3_bind_int(stmt, 5, timestamp);
    
    //last_access_time
    sqlite3_bind_int(stmt, 6, timestamp);
    
    //extended_data
    sqlite3_bind_blob(stmt, 7, extendedData.bytes, (int)extendedData.length, 0);
    
    int result = sqlite3_step(stmt);
    
    if (result != SQLITE_DONE) {
        if (_errorLogsEnabled) NSLog(@"%s line:%d sqlite insert error (%d): %s", __FUNCTION__, __LINE__, result, sqlite3_errmsg(_db));
        return NO;
    }
    
    return YES;
}

The framework author uses a single record in the database to hold all the information about a cache. In addition, the fourth field of the database holds the data corresponding to the cache. From the above code, you can see the difference in the processing when the filename is empty and not empty.

The sqlite3_stmt above can be thought of as an internal data structure holding the parsed SQL statement in SQLite's own representation. sqlite3_bind_text and sqlite3_bind_int are binding functions; think of them as plugging variables into the statement's numbered placeholders.
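The same manifest row layout can be reproduced with any SQLite binding. A Python sketch using the stdlib sqlite3 module (the schema is simplified from the one above; the column names follow the SQL string, everything else is illustrative): when a filename is present, inline_data is stored as NULL; otherwise the blob goes into the row.

```python
import sqlite3
import time

def open_manifest(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""create table if not exists manifest (
        key text primary key, filename text, size integer,
        inline_data blob, modification_time integer,
        last_access_time integer, extended_data blob)""")
    return db

def db_save(db, key, value, filename=None, extended_data=None):
    now = int(time.time())
    # With a filename the value lives in a file, so inline_data is NULL;
    # without one the value itself is stored in the row.
    inline = None if filename else value
    db.execute(
        "insert or replace into manifest "
        "(key, filename, size, inline_data, modification_time, "
        " last_access_time, extended_data) values (?, ?, ?, ?, ?, ?, ?)",
        (key, filename, len(value), inline, now, now, extended_data))
```

Note how "insert or replace" makes repeated writes for the same key idempotent, just like in the Objective-C code.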

OK, now that we’ve looked at writing to the cache, let’s look at fetching the cache again:

//YYKVStorage.m
- (YYKVStorageItem *)getItemForKey:(NSString *)key {
    
    if (key.length == 0) return nil;
    
    YYKVStorageItem *item = [self _dbGetItemWithKey:key excludeInlineData:NO];
    
    if (item) {
        // Update the last access time
        [self _dbUpdateAccessTimeWithKey:key];
        
        if (item.filename) {
            // If there is a file name, try to read the file data
            item.value = [self _fileReadWithName:item.filename];
            // If reading the file fails, delete the corresponding database record
            if (!item.value) {
                [self _dbDeleteItemWithKey:key];
                item = nil;
            }
        }
    }
    return item;
}

From the code above, the method that actually produces the YYKVStorageItem instance is _dbGetItemWithKey:excludeInlineData:. Its implementation:

  1. A stmt is first prepared from the SQL statement that looks up the key.
  2. The passed key is then bound to the stmt.
  3. Finally, the stmt is stepped to fetch the record for the key and build the item.

Look at the code:

- (YYKVStorageItem *)_dbGetItemWithKey:(NSString *)key excludeInlineData:(BOOL)excludeInlineData {
    NSString *sql = excludeInlineData ? @"select key, filename, size, modification_time, last_access_time, extended_data from manifest where key = ?1;" : @"select key, filename, size, inline_data, modification_time, last_access_time, extended_data from manifest where key = ?1;";
    sqlite3_stmt *stmt = [self _dbPrepareStmt:sql];
    if (!stmt) return nil;
    sqlite3_bind_text(stmt, 1, key.UTF8String, -1, NULL);
    
    YYKVStorageItem *item = nil;
    int result = sqlite3_step(stmt);
    if (result == SQLITE_ROW) {
        // Build a YYKVStorageItem instance from the stmt
        item = [self _dbGetItemFromStmt:stmt excludeInlineData:excludeInlineData];
    } else {
        if (result != SQLITE_DONE) {
            if (_errorLogsEnabled) NSLog(@"%s line:%d sqlite query error (%d): %s", __FUNCTION__, __LINE__, result, sqlite3_errmsg(_db));
        }
    }
    return item;
}

As we can see, the YYKVStorageItem instance is built by _dbGetItemFromStmt:excludeInlineData::

- (YYKVStorageItem *)_dbGetItemFromStmt:(sqlite3_stmt *)stmt excludeInlineData:(BOOL)excludeInlineData {
    
    // Extract the data
    int i = 0;
    char *key = (char *)sqlite3_column_text(stmt, i++);
    char *filename = (char *)sqlite3_column_text(stmt, i++);
    int size = sqlite3_column_int(stmt, i++);
    
    // Honor excludeInlineData
    const void *inline_data = excludeInlineData ? NULL : sqlite3_column_blob(stmt, i);
    int inline_data_bytes = excludeInlineData ? 0 : sqlite3_column_bytes(stmt, i++);
    
    int modification_time = sqlite3_column_int(stmt, i++);
    int last_access_time = sqlite3_column_int(stmt, i++);
    const void *extended_data = sqlite3_column_blob(stmt, i);
    int extended_data_bytes = sqlite3_column_bytes(stmt, i++);
    
    // Assign the data to the item's properties
    YYKVStorageItem *item = [YYKVStorageItem new];
    if (key) item.key = [NSString stringWithUTF8String:key];
    if (filename && *filename != 0) item.filename = [NSString stringWithUTF8String:filename];
    item.size = size;
    if (inline_data_bytes > 0 && inline_data) item.value = [NSData dataWithBytes:inline_data length:inline_data_bytes];
    item.modTime = modification_time;
    item.accessTime = last_access_time;
    if (extended_data_bytes > 0 && extended_data) item.extendedData = [NSData dataWithBytes:extended_data length:extended_data_bytes];
    return item;
}

The above code is divided into two parts:

  • Get the data for each field in the database
  • Assign data to an instance of YYKVStorageItem

Note that:

  1. String columns need to be converted to NSString with stringWithUTF8String:.
  2. excludeInlineData is checked:
  • If FALSE, the stored inline data is extracted
  • If TRUE, the inline data is skipped
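The whole read path can be sketched the same way: fetch the row, then decide whether the value is inline or must be loaded from a file, and drop the stale row if the backing file has gone missing, mirroring getItemForKey:. A Python sketch under the same simplified manifest schema (the read_file callback stands in for _fileReadWithName:):

```python
import sqlite3

def get_item(db, key, read_file):
    """read_file(filename) returns bytes or None if the file is missing."""
    row = db.execute(
        "select filename, inline_data from manifest where key = ?",
        (key,)).fetchone()
    if row is None:
        return None
    filename, inline_data = row
    if filename:
        value = read_file(filename)
        if value is None:
            # The backing file disappeared: delete the stale row,
            # like _dbDeleteItemWithKey: in the Objective-C source
            db.execute("delete from manifest where key = ?", (key,))
            return None
        return value
    return inline_data
```

A plain dict's get method can play the role of the file system in a quick experiment.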

A thread-safe scheme

I believe that every design decision is made to solve a particular problem in a particular scenario.

As can be seen from the above:

  • YYMemoryCache uses a pthread_mutex thread lock (mutex) to ensure thread safety
  • YYDiskCache chose dispatch_semaphore which is more suitable for it.

Mutex for memory cache operations

In YYMemoryCache, a mutex ensures thread safety. The mutex is created in YYMemoryCache's initializer, and every interface acquires it, including the setters, the getters, and the cache-operation implementations. A few examples:

- (NSUInteger)totalCost {
    pthread_mutex_lock(&_lock);
    NSUInteger totalCost = _lru->_totalCost;
    pthread_mutex_unlock(&_lock);
    return totalCost;
}

- (void)setReleaseOnMainThread:(BOOL)releaseOnMainThread {
    pthread_mutex_lock(&_lock);
    _lru->_releaseOnMainThread = releaseOnMainThread;
    pthread_mutex_unlock(&_lock);
}

- (BOOL)containsObjectForKey:(id)key {
    
    if (!key) return NO;
    pthread_mutex_lock(&_lock);
    BOOL contains = CFDictionaryContainsKey(_lru->_dic, (__bridge const void *)(key));
    pthread_mutex_unlock(&_lock);
    return contains;
}

- (id)objectForKey:(id)key {
    
    if (!key) return nil;
    
    pthread_mutex_lock(&_lock);
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    if (node) {
        // If the node exists, update its access time and move it to the head
        node->_time = CACurrentMediaTime();
        [_lru bringNodeToHead:node];
    }
    pthread_mutex_unlock(&_lock);
    return node ? node->_value : nil;
}
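The guarded-accessor pattern above carries over directly to other languages. A small Python sketch (names are mine): every read and write of the shared state goes through one threading.Lock, mirroring how YYMemoryCache guards _lru with its pthread_mutex, so concurrent updates never race.

```python
import threading

class GuardedCounter:
    """Every access to _total_cost goes through the same lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._total_cost = 0

    @property
    def total_cost(self):
        with self._lock:       # pthread_mutex_lock / unlock pair
            return self._total_cost

    def add_cost(self, cost):
        with self._lock:
            self._total_cost += cost

counter = GuardedCounter()
threads = [threading.Thread(target=lambda: [counter.add_cost(1) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

With the lock in place, four threads adding 1 a thousand times each always leave the counter at exactly 4000.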

And you need to destroy the lock in the dealloc method:

- (void)dealloc {
    
    ...
    
    // Destroy the mutex
    pthread_mutex_destroy(&_lock);
}

Disk caching uses semaphores in place of locks

The framework authors took the semaphore approach by instantiating a semaphore first during initialization:

- (instancetype)initWithPath:(NSString *)path
             inlineThreshold:(NSUInteger)threshold {
    ...
    _lock = dispatch_semaphore_create(1);
    _queue = dispatch_queue_create("com.ibireme.cache.disk", DISPATCH_QUEUE_CONCURRENT);
    ...
}

The author then wrapped the lock and unlock calls in a pair of macros:

#define Lock() dispatch_semaphore_wait(self->_lock, DISPATCH_TIME_FOREVER)
#define Unlock() dispatch_semaphore_signal(self->_lock)

A quick word about semaphores:

Dispatch_semaphore is a method of synchronization used by GCD. There are three functions associated with dispatch_semaphore, which are

  • dispatch_semaphore_create: creates a semaphore with an initial value
  • dispatch_semaphore_signal: increments the semaphore (+1)
  • dispatch_semaphore_wait: decrements the semaphore (-1), blocking while it is 0

When the semaphore is zero, wait blocks, which is what makes other threads wait when they try to enter. So if the semaphore is initially set to 1, it can act as a lock:

  • Before executing a piece of code, call dispatch_semaphore_wait to decrement the semaphore from 1 to 0, then run the code.
  • If another thread arrives at this point, it has to wait.
  • When the code finishes, the thread calls dispatch_semaphore_signal to increment the semaphore back to 1, so a waiting thread can proceed.

Note that if multiple threads are waiting, they gain access in the order in which they called dispatch_semaphore_wait.

This is the difference between a semaphore and a mutex: a mutex provides mutual exclusion between threads, while a semaphore can also synchronize them.

  • Mutually exclusive: Allows only one visitor to access a resource at a time, which is unique and exclusive. However, mutual exclusion cannot limit the order in which visitors access resources, that is, access is out of order.

  • Synchronization: On the basis of mutual exclusion (in most cases), visitors can access resources in order through other mechanisms. In most cases, synchronization is already mutually exclusive, and in particular all writes to resources must be mutually exclusive. That is, using a semaphore allows multiple threads to access a resource in order.

The question is: why does the memory cache use pthread_mutex and the disk cache use dispatch_semaphore?

The answer can be found in the framework author’s article YYCache design ideas:

Why does memory cache use a mutex (pthread_mutex)?

The framework authors initially used OSSpinLock as a thread lock for memory caching, but later learned that this was not safe enough, so they used pthread_mutex instead.

Why does disk caching use a “dispatch_semaphore”?

Dispatch_semaphore is a semaphore, but can also be used as a lock when the total number of signals is set to 1. It performs better than pthread_mutex when there are no waits, but it degrades a lot when there are waits. The advantage over OSSpinLock is that it consumes no CPU while waiting. It is appropriate for disk caching.

YYDiskCache may have a long wait time when writing to a large cache. Dispatch_semaphore does not consume CPU resources at this time. Therefore, dispatch_semaphore is suitable.

Several attempts to improve cache performance

Select the appropriate thread lock

See the previous section for the different locks used by YYMemoryCache and YYDiskCache and why.

Choose the appropriate data structure

In YYMemoryCache, the author chose a doubly linked list to hold the cache nodes. Think about it: why a doubly linked list rather than a singly linked list or an array?

  • Why not choose a unidirectional linked list: the node of a singly linked list only knows the node after it (only a pointer to the node after it), but not the node before it. So if you want to move one of the nodes, it’s hard to connect the nodes before and after it.

  • Why not select arrays: Array elements are arranged contiguously in memory, which is convenient for addressing operations; However, for insertion and deletion, the operation is very inconvenient, which requires a whole move. The more elements are moved, the greater the cost. A linked list, on the other hand, is convenient for insertion and deletion because its nodes are associated only by Pointers, while addressing is time-consuming. Because there is so much movement, insertion, and deletion of nodes in an LRU policy, there is an advantage to using a two-way linked list.
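The O(1) node moves that motivate the doubly linked list can be shown concretely. A minimal Python sketch of bring-to-head (the node and list classes are mine, loosely modeled on _YYLinkedMapNode and _YYLinkedMap): because each node knows both neighbors, unlinking it and re-linking it at the head touches a constant number of pointers, with no element shifting.

```python
class Node:
    __slots__ = ("key", "prev", "next")
    def __init__(self, key):
        self.key, self.prev, self.next = key, None, None

class LinkedList:
    def __init__(self):
        self.head = self.tail = None

    def push_head(self, node):
        node.prev, node.next = None, self.head
        if self.head: self.head.prev = node
        self.head = node
        if self.tail is None: self.tail = node

    def bring_to_head(self, node):
        # O(1): only the neighboring pointers change, no shifting
        if node is self.head: return
        node.prev.next = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev   # node was the tail
        node.prev, node.next = None, self.head
        self.head.prev = node
        self.head = node

    def keys(self):
        out, n = [], self.head
        while n:
            out.append(n.key); n = n.next
        return out
```

With an array, the same "move to front" would shift every element in between; here it is a handful of pointer assignments regardless of list length.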

Select the appropriate thread to operate the different tasks

Whether it is automatic trimming of the cache or the release of evicted objects, the author dispatches these tasks to background threads by default:

Take a look at freeing all memory caches:

- (void)removeAll {
    
    // Reset the total cost and count to 0
    _totalCost = 0;
    _totalCount = 0;
    
    // Clear the head and tail of the list
    _head = nil;
    _tail = nil;
    
    if (CFDictionaryGetCount(_dic) > 0) {
        
        CFMutableDictionaryRef holder = _dic;
        _dic = CFDictionaryCreateMutable(CFAllocatorGetDefault(), 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        
        // Decide which thread releases the old dictionary
        if (_releaseAsynchronously) {
            dispatch_queue_t queue = _releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                CFRelease(holder); // hold and release in specified queue
            });
        } else if (_releaseOnMainThread && !pthread_main_np()) {
            dispatch_async(dispatch_get_main_queue(), ^{
                CFRelease(holder); // hold and release in specified queue
            });
        } else {
            CFRelease(holder);
        }
    }
}

YYMemoryCacheGetReleaseQueue() here is an inline function that returns the low-priority global concurrent queue:

// An inline function that returns the lowest priority global concurrent queue
static inline dispatch_queue_t YYMemoryCacheGetReleaseQueue() {
    return dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
}

Select the underlying class

For the dictionary, the author uses the lower-level, faster CFDictionary instead of NSDictionary.

Other knowledge points

Disabling the native initializers and marking the designated initializer

YYCache has four initialization interfaces for external invocation, and either object method or class method needs to be passed in a string (name or path).

Two native initialization methods were banned by the framework authors:

- (instancetype)init UNAVAILABLE_ATTRIBUTE;
+ (instancetype)new UNAVAILABLE_ATTRIBUTE;

If the user uses the above two initialization methods, an error will be reported at compile time.

Of the four remaining initializers available, one is the specified initializer, which is marked by NS_DESIGNATED_INITIALIZER.

- (nullable instancetype)initWithName:(NSString *)name;
- (nullable instancetype)initWithPath:(NSString *)path NS_DESIGNATED_INITIALIZER;

+ (nullable instancetype)cacheWithName:(NSString *)name;
+ (nullable instancetype)cacheWithPath:(NSString *)path;

The specified initialization method is the method that all available initialization methods must call. For a more detailed introduction, please refer to my two articles below:

  • This section of the iOS code specification describes “classes.”
  • Article 16 of the Effective Objc Dry Goods Trilogy (Part 3) : Tips.

Techniques for releasing objects asynchronously

To asynchronously release an object, you can do this by sending a message to it in a GCD block. This technique is common in the framework, as an example of removing a memory cache:

The cached Node class is first fetched and then asynchronously released.

- (void)removeObjectForKey:(id)key {
    
    if (!key) return;
    
    pthread_mutex_lock(&_lock);
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    if (node) {
        [_lru removeNode:node];
        if (_lru->_releaseAsynchronously) {
            dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                [node class]; //hold and release in queue
            });
        } else if (_lru->_releaseOnMainThread && !pthread_main_np()) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [node class]; //hold and release in queue
            });
        }
    }
    pthread_mutex_unlock(&_lock);
}

To release the node asynchronously, a `class` message is sent to it inside a block dispatched to the target queue (the main queue or a dedicated release queue). Which message is sent does not matter; the call exists only to make the block capture, and therefore retain, the node. Once the method returns and drops its own reference, the block holds the last strong reference, so when the block finishes executing and is destroyed on that queue, the node's final release (and its potentially expensive dealloc) happens there instead of on the calling thread. A bare reference to the object inside the block would be flagged by the compiler, hence the harmless `[node class]`.

To be honest, I am still a little uncertain about this point myself; if you understand it more thoroughly, please leave a comment below ~ ^^
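The underlying idea, unlink under the lock and pay the destruction cost somewhere else, is not specific to ARC or GCD. Below is a rough C analog of the pattern (my own sketch, not YYCache code): a mutex guards the unlink, while a worker thread stands in for YYCache's release queue. Spawning one thread per removal is purely illustrative; a real implementation would reuse a queue or thread pool.

```c
#include <pthread.h>
#include <stdlib.h>

/* A node whose teardown is expensive enough to keep out of the locked section. */
typedef struct Node {
    int key;
    char *payload;
} Node;

void node_destroy(Node *n) {         /* stands in for -[_YYLinkedMapNode dealloc] */
    free(n->payload);
    free(n);
}

void *release_worker(void *arg) {    /* stands in for the GCD release queue */
    node_destroy((Node *)arg);
    return NULL;
}

pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

/* Unlink under the lock; destroy outside it, on another thread. */
void remove_node_async(Node **slot, pthread_t *worker) {
    pthread_mutex_lock(&cache_lock);
    Node *node = *slot;              /* unlink, e.g. from the LRU linked map */
    *slot = NULL;
    pthread_mutex_unlock(&cache_lock);
    /* Ownership passes to the worker; teardown no longer blocks the caller. */
    pthread_create(worker, NULL, release_worker, node);
}
```

The lock is held only for the unlink, which keeps the critical section short; the expensive free happens off the caller's thread, which is the same win YYMemoryCache gets from its release queue.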

Listening for memory warnings and entering the background

By default, YYCache clears the entire memory cache when the app receives a memory warning or enters the background. Accordingly, the YYMemoryCache initializer registers observers for both events:

    //YYMemoryCache.m

    - (instancetype)init {

        ...

        // Observe the app life cycle
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(_appDidReceiveMemoryWarningNotification) name:UIApplicationDidReceiveMemoryWarningNotification object:nil];
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(_appDidEnterBackgroundNotification) name:UIApplicationDidEnterBackgroundNotification object:nil];

        ...
    }

It then implements the handlers for the two notifications:

    // Remove all memory caches on memory warning
    - (void)_appDidReceiveMemoryWarningNotification {
        if (self.didReceiveMemoryWarningBlock) {
            self.didReceiveMemoryWarningBlock(self);
        }
        if (self.shouldRemoveAllObjectsOnMemoryWarning) {
            [self removeAllObjects];
        }
    }

    // Remove all memory caches when entering the background
    - (void)_appDidEnterBackgroundNotification {
        if (self.didEnterBackgroundBlock) {
            self.didEnterBackgroundBlock(self);
        }
        if (self.shouldRemoveAllObjectsWhenEnteringBackground) {
            [self removeAllObjects];
        }
    }
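Stripped of NSNotificationCenter, each handler has the same shape: invoke an optional user hook first, then apply a policy flag. A hedged C sketch of that shape (field and function names are invented for illustration, not YYCache API):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct MemCache MemCache;
struct MemCache {
    int object_count;                                /* stand-in for the cached objects */
    bool should_remove_all_on_memory_warning;        /* mirrors shouldRemoveAllObjectsOnMemoryWarning */
    void (*did_receive_memory_warning)(MemCache *);  /* mirrors didReceiveMemoryWarningBlock */
};

void remove_all_objects(MemCache *c) {
    c->object_count = 0;
}

/* Same shape as -_appDidReceiveMemoryWarningNotification: hook first, policy second. */
void on_memory_warning(MemCache *c) {
    if (c->did_receive_memory_warning) c->did_receive_memory_warning(c);
    if (c->should_remove_all_on_memory_warning) remove_all_objects(c);
}
```

Running the user hook before clearing matters: it gives the caller a last chance to inspect or persist state while the cache is still populated.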

Conditionally importing header files

    #if __has_include(<YYCache/YYCache.h>)
    #import <YYCache/YYMemoryCache.h>
    #import <YYCache/YYDiskCache.h>
    #import <YYCache/YYKVStorage.h>
    #elif __has_include(<YYWebImage/YYCache.h>)
    #import <YYWebImage/YYMemoryCache.h>
    #import <YYWebImage/YYDiskCache.h>
    #import <YYWebImage/YYKVStorage.h>
    #else
    #import "YYMemoryCache.h"
    #import "YYDiskCache.h"
    #import "YYKVStorage.h"
    #endif

Here the author uses `__has_include` to check which framework provides the headers. Since YYWebImage already bundles YYCache, a project that imports YYWebImage does not need to import YYCache separately; the final branch covers plain source-file integration.
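`__has_include` is a compiler feature (Clang, and GCC 5+), usable from plain C as well as Objective-C. A small self-contained demo follows; the header probed here is `<stdio.h>` simply because it is always present, standing in for `<YYCache/YYCache.h>`:

```c
/* __has_include is answered by the preprocessor: it reports whether a header
 * can be found on the search path, before any #include is attempted. */
#if defined(__has_include)           /* guard for compilers without the feature */
#  if __has_include(<stdio.h>)
#    define HAVE_STDIO 1
#  else
#    define HAVE_STDIO 0
#  endif
#else
#  define HAVE_STDIO 0               /* old compilers: fall back gracefully */
#endif

int have_stdio(void) {
    return HAVE_STDIO;
}
```

The `defined(__has_include)` guard is the conventional way to keep such code portable to toolchains that predate the operator.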

Closing thoughts

By reading this component's source code, I gained not only ideas about cache design, but also:

  • The doubly linked list concept and its related operations
  • Database usage
  • The use of mutexes and semaphores
  • How to implement a thread-safe scheme
  • How to name variables and methods and design interfaces

I believe you will take something away from this article too ~ and if you strike while the iron is hot and download the YYCache source code to read for yourself, even better ~


This post has been synchronized to a personal blog: Portal

---------------- Updated July 17, 2018 ----------------

A quick announcement!

I recently started a personal public account, mainly for sharing articles on programming, reading notes, and reflections.

  • Programming: a selection of my earlier technical articles plus new (mostly original) ones; the focus will gradually move beyond iOS toward improving general programming ability.
  • Reading notes: my notes on books about programming, thinking, psychology, and the workplace.
  • Reflections: my thoughts on technology and life.

Because the account limits how many posts can be published per day, not all of my past selected articles are up yet; they will be posted gradually.

Also, because of the various restrictions on the major blogging platforms, some short, concise, high-value articles will later be published only on the account ~

Scan the QR code below and tap Follow; I look forward to growing together with you ~