preface

Locking is nothing new to us, yet it is easy to get wrong. When multiple threads operate on the same resource, we need to protect that resource to keep access safe, and that is what locks are for. Common locks include @synchronized, NSRecursiveLock, NSLock, and the implicit lock behind atomic properties. What are the differences between them, and how do we understand and use them correctly?

Take a look at the following example

    - (void)viewDidLoad {
        [super viewDidLoad];
        // Suppose there are 100 movie tickets
        self.ticketCount = 100;
        // Thread 1
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            for (int i = 0; i < 60; i++) {
                [self saleTicket];
            }
        });
        // Thread 2
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            for (int i = 0; i < 50; i++) {
                [self saleTicket];
            }
        });
    }

    - (void)saleTicket {
        if (self.ticketCount > 0) {
            self.ticketCount--;
            usleep(100000); // sleep 0.1s (sleep(0.1) would truncate to 0)
            NSLog(@"Currently remaining tickets: %lu", (unsigned long)self.ticketCount);
        } else {
            NSLog(@"Tickets are currently sold out");
        }
    }

Analysis: without a lock, the logged remaining-ticket counts come out disordered, whereas we would expect them to decrease monotonically. Adding a @synchronized lock inside saleTicket solves the data-safety problem caused by multiple threads:

- (void)saleTicket {
    @synchronized (self) {
        if (self.ticketCount > 0) {
            self.ticketCount--;
            usleep(100000); // sleep 0.1s
            NSLog(@"Currently remaining tickets: %lu", (unsigned long)self.ticketCount);
        } else {
            NSLog(@"Tickets are currently sold out");
        }
    }
}

So let’s look at the following example

// Initialize the NSLock
self.mylock = [[NSLock alloc] init];
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    static void (^testMethod)(int);
    testMethod = ^(int value) {
        // The block handles the business logic
        [self.mylock lock];
        if (value > 0) {
            NSLog(@"current value = %d", value);
            testMethod(value - 1);
        }
        [self.mylock unlock];
    };
    testMethod(10);
});

Output:

current value = 10

testMethod is a block that calls itself recursively. Guarding it with an NSLock produces a deadlock: the recursive call tries to take the lock again before the outer call has released it, so the output stops at 10 when, in theory, the log should count down from 10. Replacing NSLock with @synchronized avoids this:

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    static void (^testMethod)(int);
    testMethod = ^(int value) {
        @synchronized (self) {
            if (value > 0) {
                NSLog(@"current value = %d", value);
                testMethod(value - 1);
            }
        }
    };
    testMethod(10);
});

Output:

current value = 10
current value = 9
current value = 8
current value = 7
current value = 6
current value = 5
current value = 4
current value = 3
current value = 2
current value = 1

Analysis: at the very least we now know that @synchronized avoids the recursive deadlock that NSLock hits, which tells us NSLock is a non-recursive lock. Swapping NSLock for a recursive lock, NSRecursiveLock, also solves the recursion problem. But does NSRecursiveLock also handle the multithreaded case?

self.recursiveLock = [[NSRecursiveLock alloc] init];
for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testMethod)(int);
        testMethod = ^(int value) {
            [self.recursiveLock lock];
            if (value > 0) {
                NSLog(@"current value = %d", value);
                testMethod(value - 1);
            }
            [self.recursiveLock unlock];
        };
        testMethod(10);
    });
}

Result: NSRecursiveLock crashes when the recursion runs on multiple threads at once, while @synchronized handles the same recursive code without trouble even under multithreading. @synchronized is looking rather capable: it supports both multithreaded locking and recursive calls. Let's analyze the source code to see how it is implemented.

@synchronized

Write an arbitrary @synchronized block and check which symbols it calls in assembly, then follow those into the source; alternatively, use clang to rewrite the source file and see what @synchronized compiles into. The assembly shows that @synchronized calls objc_sync_enter to lock and objc_sync_exit to unlock, so let's go into the objc runtime source and look at these two functions.

objc_sync_enter

int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;
    // If the object is not nil, lock it
    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        ASSERT(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }
    return result;
}

Analysis: if obj is not nil, a SyncData object is constructed for it and its mutex is locked; if obj is nil, nothing happens. So when using @synchronized(obj), you must pass a non-nil object or nothing gets locked. The mutex that lock() is called on lives inside the SyncData object. Let's look at the SyncData structure and at how the id2data function produces a SyncData.

SyncData

typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;        // singly linked list: points to the next SyncData
    DisguisedPtr<objc_object> object; // a uniform wrapper around the obj passed in
    int32_t threadCount;              // number of threads using this lock
    recursive_mutex_t mutex;          // the recursive lock
} SyncData;

Analysis: with the SyncData structure in hand, it's clear that threadCount is what lets @synchronized lock across multiple threads, and recursive_mutex_t is what lets it recurse. As for nextData, it makes each SyncData a linked-list node. Now let's look at the function that creates this object, id2data().

static SyncData* id2data(id object, enum usage why)
{
    // The bucket lock from SyncList that keeps this function itself thread-safe
    spinlock_t *lockp = &LOCK_FOR_OBJ(object);
    // The address of the object's SyncData list in the hash table SyncList
    SyncData **listp = &LIST_FOR_OBJ(object);
    SyncData* result = NULL;

#if SUPPORT_DIRECT_THREAD_KEYS
    bool fastCacheOccupied = NO;
    // First, look for the SyncData in thread-local storage (TLS)
    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    if (data) {
        // Same thread, same object: lockCount++ on lock, lockCount-- on unlock.
        // Single-threaded recursive locking on the same object lands here.
        if (data->object == object) {
            // ...
        }
    }
#endif

    // Next, look for the SyncData in the per-thread cache; same lockCount
    // bookkeeping on lock/unlock
    SyncCache *cache = fetch_cache(NO);
    if (cache) {
        // ...
    }

    lockp->lock();
    // Path taken when other threads already use this object's lock: threadCount++
    {
        // ...omitted
    }

    // First creation of a SyncData: threadCount defaults to 1, and nextData is
    // pointed at the previous list head (head insertion)
    posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
    result->object = (objc_object *)object;
    result->threadCount = 1;
    new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
    result->nextData = *listp;
    // Store the new node as the head of the bucket's list
    *listp = result;

 done:
    lockp->unlock();
    if (result) {
        // First SyncData on this thread: store it in TLS
        if (!fastCacheOccupied) {
            tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
        } else {
            // Otherwise save it in the thread cache
            if (!cache) cache = fetch_cache(YES);
            cache->list[cache->used].data = result;
            cache->list[cache->used].lockCount = 1;
            cache->used++;
        }
    }
    return result;
}

Analysis: The process for creating a SyncData is as follows

  • From the hash table SyncList, fetch the address of the object's SyncData list; it is null the first time the object is locked.
  • From TLS (thread-local storage), fetch the SyncData bound to object. Each object is bound to one SyncData, and TLS is per-thread. If it is the same thread and the same object, adjust that entry's lockCount: increment on lock, decrement on unlock.
  • From the thread cache: if it is the same thread and the same object, adjust lockCount there in the same way.
  • If other threads are already using the object's lock, threadCount++.
  • If neither TLS nor the thread cache has an entry, create a SyncData for object and point its nextData at the previous list head: a separate-chaining (linked-list) hash table using head insertion.
  • The first SyncData a thread acquires is stored in TLS; subsequent ones on the same thread go into the thread cache.

Hash table SyncList source:

static StripedMap<SyncList> sDataLists;

class StripedMap {
#if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
    enum { StripeCount = 8 };
#else
    enum { StripeCount = 64 };
#endif
    // ...
};

struct SyncList {
    SyncData *data;
    spinlock_t lock;
    constexpr SyncList() : data(nil), lock(fork_unsafe_lock) { }
};

sDataLists is a global hash table; on a physical device it holds 8 SyncList entries, and 64 in other environments. A SyncList is a structure that wraps a bucket's SyncData list together with its spinlock.

conclusion

  • @synchronized supports both recursion and multithreading.
  • @synchronized uses a global hash table that stores SyncData via separate chaining (linked lists).
  • @synchronized stores per-thread state in two places: TLS and the thread cache.
  • @synchronized(self) locks on the self object, which ties the lock to self's life cycle and makes it easy to store and release.