What are the iOS persistence schemes?
To recap, iOS offers a number of local caching schemes, each with its own application scenarios:
- NSUserDefaults: the simplest key-value local storage provided by the system, suitable for lightweight data such as business flags. Because it is backed by a plist file, performance can degrade as the amount of data grows.
- Archiving / writing files: either serializing business data to binary and writing it to disk, or archiving directly with the system's NSKeyedArchiver. Both are file-storage schemes. The advantage is simple development, and the business can control how much data goes into a single file to avoid performance problems.
- SQLite / FMDB: a database-backed scheme, better suited to large data sets and scenarios that need querying or sorting. The downside is slightly more complex development.
- Core Data and other ORM schemes: in practice, Core Data does not seem to be all that widely used, does it?
- Key-value cache libraries: this refers specifically to cache libraries that expose a key-value interface; the backing store can be files or SQLite. YYCache, discussed in this article, uses a hybrid file + SQLite store. Its interface is as simple as NSUserDefaults but its performance is much better, so it fits most business scenarios, though not database-style scenarios that need querying or sorting.
The good parts of YYCache's design
In fact, YY himself gives a detailed introduction in his blog post "YYCache design ideas"; I suggest reading that first. This article discusses some of YYCache's advantages over other cache libraries.
High-performance thread-safe solution
First of all, high performance is a core design goal of YYCache, and much of its code logic is organized around that goal.
As a comparison, YY pointed out performance defects in TMMemoryCache. TMMemoryCache guarantees thread safety with the fairly common dispatch_barrier scheme: parallel reads, serialized writes (introduced in my previous article, "AFNetworking source code analysis and interview points"). So could TMMemoryCache really have performance problems because of this dispatch_barrier thread-safety scheme?
The answer lies in the design of its synchronous interface:
```objc
- (id)objectForKey:(NSString *)key {
    if (!key) return nil;

    __block id objectForKey = nil;

    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);

    [self objectForKey:key block:^(TMMemoryCache *cache, NSString *key, id object) {
        objectForKey = object;
        dispatch_semaphore_signal(semaphore);
    }];

    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

#if !OS_OBJECT_USE_OBJC
    dispatch_release(semaphore);
#endif

    return objectForKey;
}
```
TMCache blocks the current thread with a semaphore in its synchronous interface, then switches to another thread to perform the read (the asynchronous interface it calls does a dispatch_async to a concurrent queue). According to YY, the main performance loss is in this thread switch, and a synchronous interface has no need to switch threads at all.
YY's idea is to guarantee thread safety with a spin lock while still performing the read on the calling thread, saving the overhead of a thread switch. (Note that the latest YYCache code uses an ordinary mutex, pthread_mutex, rather than a spin lock; YY later switched away from OSSpinLock after it was shown to be unsafe under priority inversion.)
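The scheme itself (take a lock, then read right there on the calling thread) is language-agnostic. A minimal Python sketch of the idea, with `threading.Lock` standing in for pthread_mutex — the class and method names are illustrative, not YYCache's API:

```python
import threading

class LockedMemoryCache:
    """Thread-safe in-memory cache: every operation takes a mutex and
    runs on the *calling* thread, so there is no thread-switch overhead."""

    def __init__(self):
        self._lock = threading.Lock()
        self._dict = {}

    def get(self, key):
        with self._lock:                 # serialize access...
            return self._dict.get(key)   # ...but read right here, no dispatch

    def set(self, key, value):
        with self._lock:
            self._dict[key] = value

cache = LockedMemoryCache()
cache.set("user_id", 42)
print(cache.get("user_id"))  # -> 42
```

The whole point is what is absent: no semaphore, no queue hop — the lock's critical section is short enough that contention is cheaper than a context switch.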
Besides serializing with a lock, is dispatch_sync a feasible way to implement the synchronous interface?
In addition to the locking scheme YY adopted, let's see whether the barrier-based parallel-read/serial-write scheme introduced above necessarily has the performance problem. With that scheme, the synchronous interface might look like this:
```objc
- (id)objectForKey:(NSString *)key {
    __block id object = nil;
    // Plain dispatch_sync: no barrier, so reads run in parallel.
    dispatch_sync(concurrent_queue, ^{
        object = cache[key];
    });
    return object;
}

- (void)setObject:(id)object forKey:(NSString *)key {
    // Barrier: the write is exclusive against all reads and writes.
    dispatch_barrier_sync(concurrent_queue, ^{
        cache[key] = object;
    });
}
```
A demo confirms that although the block is submitted to a concurrent_queue, dispatching it synchronously with dispatch_sync does not switch to a new thread. In other words, this scheme also avoids the thread-switch overhead.
dispatch_sync generally executes the block on the calling thread rather than switching threads.
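The barrier scheme's semantics — many concurrent readers, one exclusive writer — correspond to a readers-writer lock. Python's stdlib has no such lock, so here is the classic condition-variable sketch of one (an illustration of the semantics, not GCD itself):

```python
import threading

class RWLock:
    """Many readers may hold the lock at once; a writer gets exclusive
    access, like a dispatch_barrier block on a concurrent queue."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:              # brief exclusion just to bump the count
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()   # wake a waiting writer

    def acquire_write(self):
        self._cond.acquire()          # blocks new readers from entering
        while self._readers > 0:      # wait for in-flight readers to drain
            self._cond.wait()

    def release_write(self):
        self._cond.release()

rw = RWLock()
rw.acquire_read()     # read shared state here, concurrently with other readers
rw.release_read()
```

A cache would wrap `get` in `acquire_read`/`release_read` and `set` in `acquire_write`/`release_write`; both run on the calling thread, like the dispatch_sync variant above.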
So how does this scheme compare with the serial locking scheme?
Barrier parallel-read/serial-write vs. mutex serial processing
Single-threaded test
First, I ran YYCache's own benchmark, which measures the total time for a single thread to perform 200,000 reads or writes. "TMCache new" is the result after TMCache's synchronous interface is changed to the dispatch_sync scheme.
```
===========================
Memory cache set 200000 key-value pairs
NSDictionary:     67.53
NSDict+Lock:      73.47
YYMemoryCache:   133.08
PINMemoryCache:  257.59
NSCache:         457.63
TMCache:        7638.25
TMCache new:     297.58
===========================
Memory cache get 200000 key-value pairs
NSDictionary:     43.32
NSDict+Lock:      53.68
YYMemoryCache:    93.15
PINMemoryCache:  141.12
NSCache:          73.89
TMCache:        7446.88
TMCache new:     210.80
```
In this single-threaded test, the dispatch_sync scheme is a little over 2x slower than YYCache's lock-based scheme, but 25 to 35 times faster than the original semaphore-based forced synchronization.
Therefore, synchronous-interface implementations like the original TMCache's should be avoided in development.
Multithreaded testing
dispatch_barrier is a parallel-read/serial-write scheme, so in theory it should have an advantage in multi-threaded concurrent scenarios. I therefore tried to write a multi-threaded benchmark to compare performance, with the following code:
```objc
typedef void(^exec_block)(id key, id value);

+ (void)benchmark:(NSString *)type exec:(exec_block)block keys:(NSArray *)keys values:(NSArray *)values {
    int count = 10000;
    printf("Memory cache %s %i pairs\n", type.UTF8String, count);

    __block NSTimeInterval begin, end, time;
    begin = CACurrentMediaTime();

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_queue_create(type.UTF8String, DISPATCH_QUEUE_CONCURRENT);
    for (int i = 0; i < count; i++) {
        dispatch_group_async(group, queue, ^{
            block(keys[i], values[i]);
        });
    }

    dispatch_group_notify(group, queue, ^{
        end = CACurrentMediaTime();
        time = end - begin;
        printf("%s: %8.2f\n", type.UTF8String, time * 1000);
    });
}
```
Because execution is concurrent, the end time is obtained via dispatch_group. The function takes an exec_block as input; inside the block it runs the specific YYCache or TMCache set/get call.
There is a problem with this test: the total time is dominated by dispatch_group_async itself, and whether the cache get/set actually runs inside the block barely affects the result. In the end I did not get an accurate measurement. It would probably have made more sense to create a fixed number of threads and have each one loop over cache operations.
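The fix suggested above — spawn a few threads up front and let each loop over cache operations, so per-dispatch overhead is amortized — might look like this Python sketch (the function name and the dict-plus-lock cache are placeholders for the real caches under test):

```python
import threading
import time

def benchmark_threads(cache_get, cache_set, n_threads=4, ops_per_thread=5000):
    """Each worker thread performs ops_per_thread set+get pairs.
    Returns elapsed wall-clock milliseconds; the loop inside each thread
    means the cache calls dominate, not the task-submission machinery."""
    def worker(tid):
        for i in range(ops_per_thread):
            key = f"{tid}-{i}"
            cache_set(key, i)
            cache_get(key)

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    begin = time.perf_counter()
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return (time.perf_counter() - begin) * 1000

# Example subject: a plain dict guarded by a mutex.
lock, store = threading.Lock(), {}
def locked_set(k, v):
    with lock:
        store[k] = v
def locked_get(k):
    with lock:
        return store.get(k)

elapsed = benchmark_threads(locked_get, locked_set)
print(f"dict+lock: {elapsed:8.2f} ms")
```

Running the same harness against each candidate cache gives comparable numbers, since the harness cost is identical across subjects.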
High-performance local storage solutions
In addition to its high-performance multithreading, YYCache has a small strategy for improving local-persistence performance. The core question: is it faster to read and write a piece of binary data as a file, or as a blob in SQLite? The SQLite website has published test results:
(The values in the referenced table are the file save time divided by the database save time; a value greater than 1, shown in green, means the database is faster.)
Based on this conclusion and his own measurements, YYCache stores values larger than 20K directly as files, keeping only the meta information (such as the file name) in SQLite, while values smaller than 20K are stored inline in SQLite.
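The 20K split can be sketched with Python's built-in sqlite3 module — the schema, class, and threshold constant here are illustrative, not YYCache's actual table:

```python
import os
import sqlite3
import tempfile

THRESHOLD = 20 * 1024  # 20K split point, per YYCache's measurements

class HybridStore:
    """Small blobs live inline in SQLite; large blobs go to files,
    with only the file name recorded in the database."""

    def __init__(self, root):
        self.root = root
        self.db = sqlite3.connect(os.path.join(root, "manifest.sqlite"))
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS manifest "
            "(key TEXT PRIMARY KEY, inline_data BLOB, filename TEXT)")

    def save(self, key, value):
        if len(value) < THRESHOLD:
            # inline: one SQLite write, no filesystem round trip
            self.db.execute("REPLACE INTO manifest VALUES (?, ?, NULL)",
                            (key, value))
        else:
            # large: write the file, record only its name as meta info
            filename = key + ".dat"
            with open(os.path.join(self.root, filename), "wb") as f:
                f.write(value)
            self.db.execute("REPLACE INTO manifest VALUES (?, NULL, ?)",
                            (key, filename))
        self.db.commit()

    def load(self, key):
        row = self.db.execute(
            "SELECT inline_data, filename FROM manifest WHERE key = ?",
            (key,)).fetchone()
        if row is None:
            return None
        inline, filename = row
        if inline is not None:
            return inline
        with open(os.path.join(self.root, filename), "rb") as f:
            return f.read()

store = HybridStore(tempfile.mkdtemp())
store.save("small", b"x" * 100)      # goes inline into SQLite
store.save("big", b"y" * 50_000)     # written as a file
```

Reads go through a single lookup either way; the caller never needs to know which side of the threshold a value landed on.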
Data integrity assurance
For associated data, storage integrity must be guaranteed: it is all or nothing. For example, when YYCache stores a file, writing the database meta information and writing the actual file must be atomic as a pair. If the meta information is saved but the file write fails (or vice versa), logic problems follow. The relevant YYCache code:
```objc
if (![self _fileWriteWithName:filename data:value]) {
    return NO;
}
if (![self _dbSaveWithKey:key value:value fileName:filename extendedData:extendedData]) {
    [self _fileDeleteWithName:filename];
    return NO;
}
return YES;
```
As you can see, the database metadata is written only after the file is saved successfully; and if the metadata write fails, the file that was just written is deleted.
We need to pay the same attention to this logic when business code stores associated data.
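The same write-file-first, roll-back-on-failure ordering can be expressed as a standalone Python sketch; the `db_save` hook is a stand-in for a real metadata write like YYCache's `_dbSaveWithKey:`:

```python
import os
import tempfile

def save_atomically(path, data, db_save):
    """Write the file first, then the DB meta row; if the meta write
    fails, delete the file so neither half survives alone."""
    try:
        with open(path, "wb") as f:
            f.write(data)
    except OSError:
        return False              # file write failed: nothing to undo
    if not db_save(path):         # caller-supplied metadata hook
        os.remove(path)           # roll back the orphaned file
        return False
    return True

# Demo: a metadata hook that always succeeds.
path = os.path.join(tempfile.mkdtemp(), "value.dat")
ok = save_atomically(path, b"payload", lambda p: True)
```

The ordering matters: an orphaned file with no metadata is merely wasted space, but a metadata row pointing at a missing file is a logic bug, which is why the file is written first.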
Cache elimination strategy
In addition to performance, YYCache also offers some practical features: LRU eviction, and cache-control policies based on storage age, count, and total size.
The LRU algorithm is implemented with the classic doubly linked list + hash table combination, which is well worth studying if you are unfamiliar with it; I won't expand on it here.
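For reference, the doubly-linked-list + hash-table LRU pattern can be sketched as follows — a generic textbook version in Python, not YY's actual code:

```python
class _Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """O(1) get/set: a dict maps keys to list nodes; the linked list
    keeps usage order, most recent at the head, so eviction pops the tail."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}
        self.head = _Node()   # sentinel: head.next is most recent
        self.tail = _Node()   # sentinel: tail.prev is least recent
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)        # promote: this key is now most recent
        self._push_front(node)
        return node.value

    def set(self, key, value):
        node = self.map.get(key)
        if node:
            node.value = value
            self._unlink(node)
        else:
            node = _Node(key, value)
            self.map[key] = node
            if len(self.map) > self.capacity:
                lru = self.tail.prev          # evict the least recent
                self._unlink(lru)
                del self.map[lru.key]
        self._push_front(node)
```

The dict gives O(1) lookup and the list gives O(1) reordering; neither structure alone can do both, which is why the pattern pairs them.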
Summary: there are many ways to persist data on iOS, and which one is appropriate depends on the project scenario. I hope this has been of some help. Thank you for reading.