Preface
Recently I've planned to start doing various optimizations in my spare time. Before that, I'll compare the cache frameworks I reviewed earlier, to make future reference easier, and also make a record of application re-signing.
With that done, given Swift's steady progress toward API stability and its brand-new performance, I'm going to start a series solving one algorithm problem per day in Swift. It's also the start of learning Swift (PS: I actually spent half a year on it back in version 2.3, but every time the system upgraded, the pile of Swift errors was too time-consuming to fix and the project could never run stably, so I did less of it later).
I looked at SDWebImage's cache before but never wrote it up, so I'll include it in this comparison.
NSCache, YYCache, PINCache, SDImageCache — this comparison covers:
- Threads and locks
- Time complexity of each
- Eviction strategy of each
- Differences in framework design
SDImageCachesManager is a class that operates over caches, guarding its operations with `os_unfair_lock`. I'm only looking at the cache part, not the rest, so here I'll only talk about SDImageCache and its internals.
Main body

Memory
NSCache:
- `_objects = [NSMapTable strongToStrongObjectsMapTable];` — used for lookup.
- `_accesses = [NSMutableArray new];` — used for the caching (eviction) policy.

So by default each cached entry is referenced in two places in memory.
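The layout described above — a strong-to-strong map for lookup plus an array recording access order — can be modeled with a toy sketch. This is a guess at the behavior, not NSCache's actual code (the real implementation is private and its eviction is more nuanced):

```python
# Toy model of the described NSCache internals; the names _objects and
# _accesses follow the text above, not any public API.
class ToyNSCache:
    def __init__(self, count_limit=4):
        self._objects = {}    # models the strong-to-strong NSMapTable (lookup)
        self._accesses = []   # models the NSMutableArray (eviction policy)
        self._count_limit = count_limit

    def set_object(self, key, obj):
        if key in self._objects:          # update: drop the old position first
            self._accesses.remove(key)    # O(n) scan of the access array
        self._objects[key] = obj
        self._accesses.append(key)
        while len(self._objects) > self._count_limit:
            oldest = self._accesses.pop(0)   # front of the array = least recent
            del self._objects[oldest]

    def object_for_key(self, key):
        obj = self._objects.get(key)
        if obj is not None:               # hit: move to the back (most recent)
            self._accesses.remove(key)    # O(n), which is why queries are O(n)
            self._accesses.append(key)
        return obj
```

Note how every operation that touches `_accesses` pays an O(n) scan — this is exactly the complexity cost discussed later.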
YYCache:
YYCache is divided into a memory cache and a disk cache, which handle different responsibilities.
YYMemoryCache:
- The memory cache uses `_lru = [_YYLinkedMap new];` — a dictionary plus a doubly linked list. The objects are all held by `_YYLinkedMap`'s `_dic`; the linked list merely wires nodes together through their `_prev` and `_next` properties, whose modifier is `__unsafe_unretained`, so the list itself takes no extra ownership of the memory.
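That dictionary-plus-doubly-linked-list structure can be sketched in a language-neutral way (simplified; the real `_YYLinkedMap` also tracks cost and timestamps):

```python
# Sketch of the _YYLinkedMap idea: the dict is the only strong owner of
# the nodes; the doubly linked list just wires them via prev/next.
class _Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LinkedMap:
    def __init__(self):
        self._dic = {}
        self._head = self._tail = None

    def _bring_to_head(self, node):
        if node is self._head:
            return
        # unlink: only prev/next pointers are touched, so this is O(1)
        node.prev.next = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self._tail = node.prev
        node.prev, node.next = None, self._head
        self._head.prev = node
        self._head = node

    def set(self, key, value):
        node = self._dic.get(key)
        if node:                   # update: reassign and promote, O(1)
            node.value = value
            self._bring_to_head(node)
            return
        node = _Node(key, value)   # insert at the head, O(1)
        self._dic[key] = node
        node.next = self._head
        if self._head:
            self._head.prev = node
        self._head = node
        if self._tail is None:
            self._tail = node

    def get(self, key):
        node = self._dic.get(key)
        if node is None:
            return None
        self._bring_to_head(node)  # a query promotes the node, O(1)
        return node.value

    def remove_tail(self):         # trim step: drop the LRU entry, O(1)
        if self._tail is None:
            return None
        node = self._tail
        self._tail = node.prev
        if self._tail:
            self._tail.next = None
        else:
            self._head = None
        del self._dic[node.key]
        return node.key
```

Because every operation only moves pointers and does one dictionary access, all four operations are constant time — which is the point of this design.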
YYDiskCache:
- The database stores `key`, `inline_data`, `filename`, `last_access_time`, and so on.
- When the data (`value`) is larger than 20 KB, `filename` holds a value (the MD5 hash of the key) and the data is written to a file on store.
- When the data (`value`) is 20 KB or smaller, it is stored as a binary stream in `inline_data`.
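The size-based routing above can be sketched like this (a hypothetical helper, not YYDiskCache code — `db_row` stands in for the SQLite row, and the 20 KB threshold is configurable in the real framework):

```python
import hashlib

INLINE_LIMIT = 20 * 1024  # the 20 KB threshold described above

def store(db_row, value, key, file_dir=None):
    """Sketch of the inline-vs-file decision in YYDiskCache."""
    if len(value) > INLINE_LIMIT:
        # filename = MD5 of the key; the blob would go to a separate file
        name = hashlib.md5(key.encode("utf-8")).hexdigest()
        db_row["filename"] = name
        db_row["inline_data"] = None
        # a real implementation would now write `value` to file_dir/name
    else:
        # small values are stored directly in the database as a blob
        db_row["filename"] = None
        db_row["inline_data"] = value
    return db_row
```

Small blobs stay inline because SQLite reads them faster than the file system; large blobs go to files because the file system wins at that size.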
PINCache:
- `PINCaching`: the protocol that defines the PIN cache interface; the cache classes implement it.
- `PINMemoryCache`: uses five dictionaries in memory to store different pieces of information about the same `key`.
- `PINDiskCache`: `_metadata` is a mutable dictionary filled by reading data from disk; disk cache operations update `_metadata` and the on-disk data in sync.
SDImageCache:
In SDMemoryCache:

```objc
// strong key, weak value: keys are held strongly, values weakly
self.weakCache = [[NSMapTable alloc] initWithKeyOptions:NSPointerFunctionsStrongMemory
                                           valueOptions:NSPointerFunctionsWeakMemory
                                               capacity:0];

// the lock is an os_unfair_lock when available
#define SD_LOCK_INIT(lock) if (@available(iOS 10, tvOS 10, watchOS 3, macOS 10.12, *)) lock = OS_UNFAIR_LOCK_INIT;
```
That is, the memory cache's weak table does not actually retain the stored object (the `UIImage`).
PS: SDMemoryCache inherits from NSCache, so all operations come from the parent class; the subclass adds a layer of fault tolerance on top.
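The strong-key/weak-value behavior can be mimicked with a weak-value map (Python's `weakref` stands in for NSMapTable's weak value option; `Image` is a stand-in for `UIImage`):

```python
import weakref

class WeakCacheDemo:
    """Analogue of SDMemoryCache's extra weak table: strong keys, weak
    values, so the table alone never keeps an image alive."""
    def __init__(self):
        self._weak_cache = weakref.WeakValueDictionary()

    def set(self, key, obj):
        self._weak_cache[key] = obj   # no strong reference is taken

    def get(self, key):
        return self._weak_cache.get(key)

class Image:                          # stand-in for UIImage
    def __init__(self, name):
        self.name = name
```

While some other object (a view, say) still holds the image, the weak table can hand it back; once the last strong reference dies, the entry simply vanishes — no eviction code needed.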
In SDDiskCache, data is written directly into a file. The file name comes from hashing the key with MD5 into 16 bytes and splicing those bytes into a hex string. The file holds a copy of the data.
Threads and locks
NSCache: the system rarely designs its collection classes to be thread-safe, since that costs efficiency and performance — but NSCache is thread-safe. Performance is a perennial topic.
YYCache:
YYMemoryCache:
- Normal add, delete, update, and query operations use a `pthread_mutex_t` mutex.
- `YYMemoryCacheGetReleaseQueue`: a global low-priority queue that controls which thread released objects are deallocated on.
- `_queue`: a serial queue; when any trimming policy in the memory cache triggers, the work is dispatched to it asynchronously.
- Inside that queue, `pthread_mutex_lock` still guards `_lru`, keeping operations on `_lru` thread-safe.
YYDiskCache:
- Uses a `dispatch_semaphore_t` for thread safety when manipulating data.
- `_queue`: a concurrent queue, called asynchronously; it is used when disk cache results need to be delivered in a callback block. For normal use without callbacks, only the semaphore is needed to keep operations safe.
- `_globalInstances`: a strong-to-weak `NSMapTable` mapping a disk cache path to its `YYDiskCache` instance. Access to `_globalInstances` is also guarded by a `dispatch_semaphore_t`.
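The `_globalInstances` pattern — one live cache per path, guarded by a semaphore used as a mutex — can be sketched as follows (Python stand-ins for `NSMapTable` and `dispatch_semaphore_t`; names are illustrative):

```python
import threading
import weakref

_global_lock = threading.Semaphore(1)              # dispatch_semaphore_t analogue
_global_instances = weakref.WeakValueDictionary()  # strong path -> weak cache

class DiskCacheDemo:
    def __init__(self, path):
        self.path = path

def disk_cache_for_path(path):
    """One instance per path: check the weak map first, create on miss.
    A semaphore with initial value 1 acts as a cheap mutex, as in YYDiskCache."""
    _global_lock.acquire()
    try:
        cache = _global_instances.get(path)
        if cache is None:
            cache = DiskCacheDemo(path)
            _global_instances[path] = cache
        return cache
    finally:
        _global_lock.release()
```

Because the map holds the instances weakly, a cache that no one uses anymore is deallocated normally; the registry only deduplicates live instances.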
PINCache:
- `_operationQueue`: a `PINOperationQueue` object used globally.
- Inside `PINOperationQueue`:
  - `_concurrentQueue`: a concurrent queue that executes priority tasks and extra tasks asynchronously.
  - `_serialQueue`: a serial queue that mainly executes tasks in order.
  - `_semaphoreQueue`: a serial queue that mainly controls the maximum number of concurrent tasks.
  - `_group`: a dispatch group, mainly used for blocking on tasks.
  - A recursive mutex (a `pthread_mutex_t` configured via `pthread_mutexattr_t`) is used mainly when adding and removing tasks. PIN was covered in detail in a previous article.
PINMemoryCache:
- Synchronous interface: uses a `pthread_mutex_t` mutex to ensure thread safety.
- Asynchronous interface: forwards the task to `_operationQueue`.

PINDiskCache:
- Synchronous interface: uses a `pthread_mutex_t` mutex to ensure thread safety.
- Asynchronous interface: forwards the task to `_operationQueue`.
SDImageCache:
- `ioQueue`: a serial queue; add, delete, update, and query are called on it asynchronously. `SDMemoryCache` and `SDDiskCache` do no additional thread handling internally.
- `SDMemoryCache` uses `os_unfair_lock` to protect thread safety.
Complexity
NSCache: the analysis below is for when the cache policy executes; if the cache policy does not run, everything defaults to O(1).
- Insert: added directly, O(1).
- Delete: the element must be removed from the access array, O(n).
- Update: after lookup, delete first, then add with the new value, O(n).
- Query: once the key is found, the cache policy runs — the element is removed from the access array and re-appended, O(n).
YYCache:
- Insert: add directly to `_dic` and insert at the head of the linked list, O(1).
- Delete: remove the element from the dictionary and unlink the node; since only the node's `_prev`/`_next` pointers are touched, O(1).
- Update: take the node out of `_dic`, move it to the head of the list, and assign the new value, O(1).
- Query: take the node out of the dictionary and move it to the head of the list, O(1).
PINCache:
- Ordinary insert, delete, update, and query all go straight to the dictionaries, so each is O(1); sorting only happens when a trim policy runs.
SDImageCache:
- Insert: add to the weak map and write to disk, O(1).
- Delete: remove from memory, or delete from disk by file name, O(1).
- Update: there is no real update; inserting again simply overwrites, O(1).
- Query: look up by key in memory, or by file name on disk, O(1).
Eviction strategy
NSCache: uses an enumerator, `NSEnumerator *e = [_accesses objectEnumerator];`
- Iterate the access array from front to back to find the oldest-used entries.
- For each entry, decide whether it joins the removal queue based on its access frequency.
- The eviction ends once enough data has been removed to drop below the limit.
YYCache:
The YYCache author uses recursion to trim the cache periodically: the trim method schedules itself again after a 5-second delay, on a low-priority global queue.
The difference is that memory trimming runs on a serial queue while disk trimming runs on a concurrent queue.
- YYMemoryCache: a doubly linked list with head insertion, so trimming removes the tail node each time. Removing a node also removes its key from `_dic` and fixes up the neighboring nodes' pointers. This implements LRU.
  - When executed asynchronously, trimming runs on the same serial queue as add, delete, update, and query.
- YYDiskCache: deletes rows from the database ordered by the `last_access_time` field, also implementing LRU.
  - When trimmed from multiple threads, it executes asynchronously on the disk cache's concurrent queue.
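That recursive, delayed self-call can be sketched with a timer (`threading.Timer` stands in for `dispatch_after` on a low-priority queue; the class and names are illustrative, not YYCache code):

```python
import threading

TRIM_INTERVAL = 5.0   # the 5-second delay mentioned above

class TrimLoop:
    """Sketch of the recursive trim: each run schedules the next one
    after a delay, like dispatch_after re-invoking the trim method."""
    def __init__(self, trim_fn, interval=TRIM_INTERVAL):
        self._trim_fn = trim_fn
        self._interval = interval
        self._stopped = False

    def _tick(self):
        if self._stopped:
            return
        self._trim_fn()   # trim to cost / count / age limits
        t = threading.Timer(self._interval, self._tick)  # "recurse" after delay
        t.daemon = True   # background work: don't keep the process alive
        t.start()

    def start(self):
        self._tick()

    def stop(self):
        self._stopped = True
```

The appeal of this shape over a repeating timer is that the next run is only scheduled after the current trim finishes, so slow trims never pile up.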
PINCache:
PINMemoryCache:
- Trimming actually sorts; here is the code:

```objc
- (void)trimToCostLimit:(NSUInteger)limit {
    NSArray *keysSortedByCost = [_costs keysSortedByValueUsingSelector:@selector(compare:)];
    for (NSString *key in [keysSortedByCost reverseObjectEnumerator]) {
        // ...
    }
}
```
- It sorts one of the dictionaries by value — time, cost, age, and so on — then removes entries in that order.
PINDiskCache:
- Iterate `_metadata`, take each object, and sort the keys by `lastModifiedDate` (or another property), recording the sorted keys. Then remove the local files in that order, and finally the corresponding entries in `_metadata`.
- Asynchronous trim policies are also queued, with the lowest priority, and handled through the unified task flow.
SDImageCache:
- Memory eviction is inherited from NSCache.
Framework design
NSCache: a memory-cache design; to some extent Apple recommends NSCache over NSDictionary for caching. Its API exposure and interface design are a foundation many caching frameworks build on.
YYCache: from interface design down to memory, disk, and database, each class has a distinct, independent responsibility and realizes it well. A very, very good framework to learn from.
PINCache: exposes interfaces through protocols — aggregated externally, loosely coupled internally. Generally speaking, since all data is stored on disk and policy execution requires sorting dictionaries by value, performance is bound to be unexceptional (unfriendly for small data, relatively better for large data).
- So for a certain range of data sizes, PINCache might be more efficient than other frameworks.
- Add, delete, update, and query are also all implemented at constant complexity.
SDImageCache: a cross-platform framework whose cache design serves its own needs; more suitable for internal use. KVO is used to observe the setting of properties such as costLimit.
Points worth learning
NSCache:
It may be a very old design, but there is a general sense of restraint in its memory design. Like NSMutableArray, most system collection classes are not designed to be thread-safe; callers handle locking themselves. As an exercise, you could design a thread-safe dictionary or array, or even your own NSCache.
Most thread-safety approaches use serial queues with synchronous tasks, concurrent queues with dispatch_barrier, or recursive locks. There is plenty of material online to study.
YYCache:
High cohesion, low coupling; each class's interface delivers its corresponding function, independent yet consistent.
- Look at its test demos; if you don't know how to benchmark performance, they are worth studying.
- Controlling which thread released objects are deallocated on: I picked this up when reading it earlier. It looks simple and unobtrusive, but it's not easy to come up with yourself.
- Different locks are chosen for different scenarios: a mutex in the memory cache and a semaphore in the disk cache.
PINCache:
- `PINOperationQueue`: this class encapsulates thread handling; study it when it's time to write your own cache.
- `setObject:forKeyedSubscript:` and `objectForKeyedSubscript:` are the two methods you must implement to support the subscript shorthand, i.e. `dic[@"key"] = value`.
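The same subscript shorthand exists in other languages; in Python the equivalent pair is `__setitem__`/`__getitem__` (a trivial analogue for illustration, not PINCache code):

```python
class SubscriptCache:
    """Implementing __setitem__ / __getitem__ enables cache[key] syntax,
    just as the two forKeyedSubscript: methods do in Objective-C."""
    def __init__(self):
        self._store = {}

    def __setitem__(self, key, value):   # cache["key"] = value
        self._store[key] = value

    def __getitem__(self, key):          # cache["key"]
        return self._store.get(key)
```

Here a missing key returns `None` rather than raising, mirroring `objectForKey:` returning `nil`.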
SDImageCache:
- First, `SDImageCache` uses a shared `SDImageCacheConfig`, effectively a configuration class for passing around all kinds of settings — somewhat like how `PINCache` defines a global queue at the start and passes it in.
- The `memoryCache` and `diskCache` properties are both `id<Protocol>`. Unlike other frameworks, this better protects internal data: the member variable is not typed as a concrete class, which is why external code operating the cache cannot call the concrete classes directly.
- The SD framework's image encoding and decoding is excellent material; if you build your own storage, you can use it directly.
- Since I've only looked at a small portion of the cache, I won't go into details here, to avoid misleading you.
Supplement: os_unfair_lock
As shown in the figure above, while the thread performs a time-consuming operation it enters kernel code through a system call. I don't know exactly which system call it is, but judging by the name it should be some kind of busy wait.
Don't we more often see the thread suspended, waiting?
- In iOS 10 it replaced OSSpinLock.
- Since it is a replacement for a spin lock, its essence should still be contending for the resource.
- Since it was designed that way, a replacement lock shouldn't violate that idea. If the goal were to suspend the thread, release CPU resources, and wait for a signal, a mutex would already do — so why design it this way?

The documentation doesn't explain. Switching between kernel mode and user mode is a significant performance cost, and an unfair lock enters a kernel system call while waiting. The principle should be similar to a spin lock, but it avoids the priority-inversion problem: the waiter keeps going after the resource until it acquires it.
Perhaps looking at locks on other systems would give some inspiration about what the actual flow looks like.
This code lives in libsystem_pthread.dylib, a system dynamic library, so it can't be dug into further; I'll look for more information later.
Conclusion
This knowledge looks simple, but producing output and actually writing it down are the hardest parts. Many things look easy — don't call them easy until you've tried, until you've actually done them. I suggest proving it with action; it may really not be simple, however simple it looks...