Resources to prepare

  • objc4-818.2 source code

  • swift-corelibs-foundation source code

Introduction to locks

Types of locks

  • Spin lock: The thread repeatedly checks whether the lock variable is available. Because the thread keeps executing during this check, it is busy-waiting. Once acquired, the spin lock is held until the thread explicitly releases it. Spin locks avoid the overhead of thread scheduling and context switching, so they are effective when threads block only for a very short time:

    • OSSpinLock
    • os_unfair_lock
  • Mutex: A mechanism used in multithreaded programming to prevent two threads from reading or writing the same shared resource, such as a global variable, at the same time. This is achieved by dividing code into critical sections and letting only one thread into a critical section at a time:

    • NSLock
    • pthread_mutex
    • @synchronized
  • Condition lock: Based on a condition variable. When a resource the thread needs is not available, the thread sleeps, i.e. the lock is closed. When the resource is allocated, the condition lock is opened and the thread continues:

    • NSCondition
    • NSConditionLock
  • Recursive lock: The same thread can acquire the lock N times without causing a deadlock:

    • NSRecursiveLock
    • pthread_mutex(recursive)
  • Semaphore: A more general synchronization mechanism. A mutex is the special case of a semaphore whose value is only 0/1. A semaphore can take a wider range of values, enabling more complex synchronization than simple mutual exclusion between threads:

    • dispatch_semaphore
  • Read/write lock: A special kind of spin lock. It divides users of the shared resource into readers and writers: readers only read the shared resource, while writers write to it:

    • It improves concurrency compared with a plain spin lock. In a multiprocessor system it allows multiple readers to access the shared resource at the same time (the maximum number of concurrent readers is the number of logical CPUs). Writers are exclusive: at any moment a read/write lock can have either one writer or multiple readers, but never both. Preemption is also disabled while a read/write lock is held;

    • If the read/write lock currently has no readers and no writers, a writer can acquire it immediately; otherwise the writer must spin until there are no readers or writers;

    • If the read/write lock has no writer, a reader can acquire it immediately; otherwise the reader must spin until the writer releases the lock.

In fact, there are only three basic kinds of lock: spin locks, mutexes, and read/write locks. The others, such as condition locks, recursive locks, and semaphores, are higher-level encapsulations built on top of them:

  • Mutex locks = mutual exclusion + synchronization. Mutual exclusion guarantees thread safety: while one thread executes, the others sleep. Synchronization guarantees ordering: multiple threads execute in sequence:

    • NSLock, pthread_mutex, @synchronized, NSCondition, NSConditionLock, NSRecursiveLock, pthread_mutex(recursive), dispatch_semaphore

  • Spin locks = mutual exclusion + busy waiting, for example a do…while loop (a toy sketch follows this list). The advantage is that the caller never sleeps, so there is no thread scheduling or CPU time-slice switching overhead. The disadvantage is that waiting burns CPU, so spin locks are not suitable for longer tasks:

    • OSSpinLock, os_unfair_lock
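
To make the busy-wait idea concrete, here is a minimal toy spin lock built on C11 atomics (illustration only, not how OSSpinLock is actually implemented):

#include <stdatomic.h>

static atomic_flag g_flag = ATOMIC_FLAG_INIT;

static void toy_spin_lock(void) {
    // Busy-wait: keep testing-and-setting the flag until we are the one who set it.
    while (atomic_flag_test_and_set_explicit(&g_flag, memory_order_acquire)) {
        // spin
    }
}

static void toy_spin_unlock(void) {
    atomic_flag_clear_explicit(&g_flag, memory_order_release);
}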

Lock performance

Lock performance comparison, measured on a real iPhone 12 in 2021:

  • In descending order of performance: OSSpinLock (spin lock) > os_unfair_lock (spin lock) > NSCondition (condition lock) > pthread_mutex (mutex) > NSLock (mutex) > dispatch_semaphore (semaphore) > pthread_mutex(recursive) (recursive lock) > NSRecursiveLock (recursive lock) > @synchronized (mutex) > NSConditionLock (condition lock)

Test plan

Loop one hundred thousand times, locking and unlocking on each iteration, and compute the elapsed time from the start and end timestamps:

    int kc_runTimes = 100000;
  • OSSpinLock:
{
    OSSpinLock kc_spinlock = OS_SPINLOCK_INIT;

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        OSSpinLockLock(&kc_spinlock);
        // unlock
        OSSpinLockUnlock(&kc_spinlock);
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"OSSpinLock: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • dispatch_semaphore_t:
{
    dispatch_semaphore_t kc_sem = dispatch_semaphore_create(1);

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        dispatch_semaphore_wait(kc_sem, DISPATCH_TIME_FOREVER);
        dispatch_semaphore_signal(kc_sem);
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"dispatch_semaphore_t: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • os_unfair_lock_lock:
{
    os_unfair_lock kc_unfairlock = OS_UNFAIR_LOCK_INIT;

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        os_unfair_lock_lock(&kc_unfairlock);
        os_unfair_lock_unlock(&kc_unfairlock);
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"os_unfair_lock_lock: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • pthread_mutex_t:
{
    pthread_mutex_t kc_metext = PTHREAD_MUTEX_INITIALIZER;

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        pthread_mutex_lock(&kc_metext);
        pthread_mutex_unlock(&kc_metext);
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"pthread_mutex_t: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • NSLock:
{
    NSLock *kc_lock = [NSLock new];

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        [kc_lock lock];
        [kc_lock unlock];
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"NSlock: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • NSCondition:
{
    NSCondition *kc_condition = [NSCondition new];

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        [kc_condition lock];
        [kc_condition unlock];
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"NSCondition: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • PTHREAD_MUTEX_RECURSIVE:
{
    pthread_mutex_t kc_metext_recurive;
    pthread_mutexattr_t attr;
    pthread_mutexattr_init (&attr);
    pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init (&kc_metext_recurive, &attr);

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        pthread_mutex_lock(&kc_metext_recurive);
        pthread_mutex_unlock(&kc_metext_recurive);
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"PTHREAD_MUTEX_RECURSIVE: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • NSRecursiveLock:
{
    NSRecursiveLock *kc_recursiveLock = [NSRecursiveLock new];

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        [kc_recursiveLock lock];
        [kc_recursiveLock unlock];
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"NSRecursiveLock: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • NSConditionLock:
{
    NSConditionLock *kc_conditionLock = [NSConditionLock new];

    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();
    for (int i=0 ; i < kc_runTimes; i++) {
        [kc_conditionLock lock];
        [kc_conditionLock unlock];
    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"NSConditionLock: %f ms",(kc_endTime - kc_beginTime)*1000);
}
  • @synchronized:
{
    double_t kc_beginTime = CFAbsoluteTimeGetCurrent();

    for (int i=0 ; i < kc_runTimes; i++) {
        @synchronized(self) {}

    }
    double_t kc_endTime = CFAbsoluteTimeGetCurrent() ;
    KCLog(@"@synchronized: %f ms",(kc_endTime - kc_beginTime)*1000);
}

iPhone X real-device execution results:

OSSpinLock: 0.485063 ms
dispatch_semaphore_t: 0.717044 ms
os_unfair_lock_lock: 0.768065 ms
pthread_mutex_t: 0.838041 ms
NSlock: 1.092076 ms
NSCondition: 1.066923 ms
PTHREAD_MUTEX_RECURSIVE: 1.288056 ms
NSRecursiveLock: 1.695037 ms
NSConditionLock: 5.481005 ms
@synchronized: 3.665924 ms
  • You can see that @synchronized is not the slowest, indicating that the system has optimized it.

iPhone X simulator execution results:

OSSpinLock: 0.509977 ms
dispatch_semaphore_t: 0.768065 ms
os_unfair_lock_lock: 0.856042 ms
pthread_mutex_t: 1.099944 ms
NSlock: 1.405954 ms
NSCondition: 1.512051 ms
PTHREAD_MUTEX_RECURSIVE: 1.899004 ms
NSRecursiveLock: 3.090978 ms
NSConditionLock: 4.673004 ms
@synchronized: 5.235076 ms
  • You can see that in the simulator @synchronized is much less efficient. Why is that? We will come back to it below.

The results for @synchronized are much faster than in earlier system versions, suggesting an official optimization.

On the simulator, the results differ slightly from the real device, because the underlying implementation is handled differently on device and simulator.

Exploring @synchronized

The underlying implementation

To analyze @synchronized, we need to know which functions it calls. We can explore this in two ways: 1. rewrite it to C++ with clang; 2. trace the assembly together with the source code.

Rewriting to C++ with clang

int main(int argc, char * argv[]) {
    NSString * appDelegateClassName;
    @autoreleasepool {
        // Setup code that might create autoreleased objects goes here.
        appDelegateClassName = NSStringFromClass([AppDelegate class]);
        @synchronized (appDelegateClassName) {
        }
    }
    return UIApplicationMain(argc, argv, nil, appDelegateClassName);
}

Generate the .cpp file using xcrun:

xcrun -sdk iphoneos clang -arch arm64 -rewrite-objc main.m -o main.cpp

Open the main.cpp file and find the implementation of main:

  • A local struct instance _sync_exit is constructed with _sync_obj; this is equivalent to calling the struct's constructor and destructor. The constructor does nothing, and the destructor calls the objc_sync_exit function, passing sync_exit, which equals _sync_obj:

    • The objc_sync_enter(_sync_obj) function performs the locking;

    • The objc_sync_exit(_sync_obj) function performs the unlocking.

  • A try…catch is used, indicating that exceptions may be thrown while the lock is held (a trimmed sketch of the generated code follows).
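
A trimmed, re-indented sketch of what the relevant part of the generated main.cpp roughly looks like (the _SYNC_EXIT name comes from the clang rewriter; the exact output varies with the toolchain version):

// roughly what clang -rewrite-objc emits for the @synchronized block
{
    id _rethrow = 0;
    id _sync_obj = (id)appDelegateClassName;
    objc_sync_enter(_sync_obj);                          // lock on entry
    try {
        struct _SYNC_EXIT {
            _SYNC_EXIT(id arg) : sync_exit(arg) {}
            ~_SYNC_EXIT() { objc_sync_exit(sync_exit); } // unlock when the scope ends
            id sync_exit;
        } _sync_exit(_sync_obj);

        // ... body of the @synchronized block ...
    } catch (id e) { _rethrow = e; }
    // the rewriter also emits a _FIN struct that rethrows _rethrow if one was caught
}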

Tracing with assembly

Set a breakpoint in the main function, run the app, and view the assembly code:

  • The assembly trace shows the same thing: objc_sync_enter and objc_sync_exit come in pairs, locking and unlocking respectively.

Set a symbolic breakpoint on objc_sync_enter:

  • You can see that this function comes from libobjc.A.dylib.

Underlying source code analysis

objc_sync_enter

Open objc4-818.2 and find the objc_sync_enter function:

  • If obj exists, the id2data function is executed to obtain the corresponding SyncData, and threadCount / lockCount are incremented;
  • Otherwise, the objc_sync_nil function is executed.

Enter the objc_sync_nil function:

Find the macro definition for BREAKPOINT_FUNCTION:

In fact, objc_sync_nil is the result of passing void objc_sync_nil(void) into this macro, which expands to the following code:

void objc_sync_nil(void) { asm(""); }
  • This is effectively a no-op; no locking takes place.

Look at the assembly code for objc_sync_nil:

libobjc.A.dylib`objc_sync_nil:
->  0x7fff2018a7a3 <+0>: ret
  • Do nothing, just return

So, when @synchronized is given nil, it is effectively a no-op and does not lock.
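
A minimal sketch of why this matters in practice (the _array property is a hypothetical example, not code from this article):

// If _array can be set to nil on another thread, @synchronized(nil) no longer locks
// and the mutation below stops being thread-safe.
- (void)appendObject:(id)obj {
    @synchronized (_array) {          // safer to use @synchronized(self) here
        [_array addObject:obj];
    }
}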

objc_sync_exit:

  • If obj exists, id2data is called to obtain the corresponding SyncData, and threadCount / lockCount are decremented;
  • If obj is nil, nothing is done.

Thus, the core of both objc_sync_enter and objc_sync_exit is the id2data function; they pass different parameters (ACQUIRE vs RELEASE), but both end up with a SyncData object.

  • In objc_sync_enter, the mutex in the SyncData object is locked;

  • In objc_sync_exit, tryUnlock is called to unlock the mutex in the SyncData object.

The SyncData structure

Find the structural definition of SyncData

SyncData is a node of a singly linked list (a trimmed definition follows the field list):

  • nextData points to the next node;
  • DisguisedPtr<objc_object> wraps the associated object;
  • threadCount records how many threads are operating on this node;
  • recursive_mutex_t is a recursive lock; it can be locked recursively, but on its own it does not handle recursion across multiple threads.
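
For reference, the definition in objc4-818.2 looks roughly like this (abridged):

// objc-sync.mm (abridged)
typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;        // next node of the singly linked list
    DisguisedPtr<objc_object> object; // the object passed to @synchronized
    int32_t threadCount;              // number of THREADS using this block
    recursive_mutex_t mutex;          // the recursive lock that is actually taken
} SyncData;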

The structure of SyncData shows that @synchronized is a recursive mutex that also supports recursion across multiple threads, which makes it more capable than a bare recursive_mutex_t.

id2data

Enter id2data function:

static SyncData* id2data(id object, enum usage why)
{
    // [Step 1] Hash the object into the global table to get its bucket lock and list head
    spinlock_t *lockp = &LOCK_FOR_OBJ(object);
    SyncData **listp = &LIST_FOR_OBJ(object);
    SyncData* result = NULL;

#if SUPPORT_DIRECT_THREAD_KEYS
    // [Step 2] Check per-thread single-entry fast cache (TLS) for a matching object
    bool fastCacheOccupied = NO;
    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    if (data) {
        fastCacheOccupied = YES;

        if (data->object == object) {
            // Found a match in the fast cache.
            uintptr_t lockCount;

            result = data;
            lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
            if (result->threadCount <= 0  ||  lockCount <= 0) {
                _objc_fatal("id2data fastcache is buggy");
            }

            switch(why) {
            case ACQUIRE: {
                // lockCount++ and update TLS
                lockCount++;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                break;
            }
            case RELEASE:
                // lockCount-- and update TLS
                lockCount--;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                if (lockCount == 0) {
                    // remove from fast cache
                    tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
                    // atomic because it may collide with a concurrent ACQUIRE;
                    // threadCount - 1 because the current thread has fully unlocked this object
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }

            return result;
        }
    }
#endif

    // [Step 3] Check the per-thread cache of already-owned locks for a matching object
    SyncCache *cache = fetch_cache(NO);
    if (cache) {
        unsigned int i;
        for (i = 0; i < cache->used; i++) {
            SyncCacheItem *item = &cache->list[i];
            if (item->data->object != object) continue;

            // Found a match.
            result = item->data;
            if (result->threadCount <= 0  ||  item->lockCount <= 0) {
                _objc_fatal("id2data cache is buggy");
            }

            switch(why) {
            case ACQUIRE:
                // lockCount + 1
                item->lockCount++;
                break;
            case RELEASE:
                // lockCount - 1
                item->lockCount--;
                if (item->lockCount == 0) {
                    // remove from per-thread cache
                    cache->list[i] = cache->list[--cache->used];
                    // atomic because it may collide with a concurrent ACQUIRE;
                    // threadCount - 1 because the current thread has fully unlocked this object
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }

            return result;
        }
    }

    // Thread cache didn't find anything.
    // [Step 4] Walk the in-use list looking for a matching object.
    // The spinlock prevents multiple threads from creating multiple
    // locks for the same new object.
    // We could keep the nodes in some hash table if we find that there are
    // more than 20 or so distinct locks active, but we don't do that now.

    // Lock to keep the allocation of SyncData thread-safe
    lockp->lock();

    {
        SyncData* p;
        SyncData* firstUnused = NULL;
        for (p = *listp; p != NULL; p = p->nextData) {
            if ( p->object == object ) {
                // Found it: threadCount + 1
                result = p;
                // atomic because it may collide with a concurrent RELEASE
                OSAtomicIncrement32Barrier(&result->threadCount);
                goto done;
            }
            // Remember the first unused SyncData seen during the walk
            if ( (firstUnused == NULL) && (p->threadCount == 0) )
                firstUnused = p;
        }

        // no SyncData currently associated with object
        if ( (why == RELEASE) || (why == CHECK) )
            goto done;

        // an unused one was found, use it
        if ( firstUnused != NULL ) {
            result = firstUnused;
            result->object = (objc_object *)object;
            result->threadCount = 1;
            goto done;
        }
    }

    // [Step 5] Allocate a new SyncData and add it to the list.
    // XXX allocating memory with a global lock held is bad practice,
    // might be worth releasing the lock, allocating, and searching again.
    // But since we never free these guys we won't be stuck in allocation very often.
    posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
    result->object = (objc_object *)object;
    result->threadCount = 1;
    new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
    // Head insertion: the new node becomes the head of the SyncList
    result->nextData = *listp;
    *listp = result;

 done:
    lockp->unlock();
    if (result) {
        // Only a new ACQUIRE should get here.
        // All RELEASE and CHECK and recursive ACQUIRE are
        // handled by the per-thread caches above.
        if (why == RELEASE) {
            // Probably some thread is incorrectly exiting
            // while the object is held by another thread.
            return nil;
        }
        if (why != ACQUIRE) _objc_fatal("id2data is buggy");
        if (result->object != object) _objc_fatal("id2data is buggy");

#if SUPPORT_DIRECT_THREAD_KEYS
        // [Step 6] Save the new SyncData into TLS or the per-thread cache
        if (!fastCacheOccupied) {
            // Save in fast thread cache
            tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
        } else
#endif
        {
            // Save in thread cache
            if (!cache) cache = fetch_cache(YES);
            cache->list[cache->used].data = result;
            cache->list[cache->used].lockCount = 1;
            cache->used++;
        }
    }

    return result;
}
  • [Step 1] Use the object passed in to look things up in the global hash table:

    • object is hashed to obtain a lock, which is used to keep the allocation of SyncData thread-safe;

    • object is hashed to obtain the address of the SyncData list head, i.e. the SyncList.

  • [Step 2] Look in the TLS (thread-local storage) of the current thread:

    • Get the SyncData from TLS;

    • If it exists, compare the object in the SyncData with the object passed in:

      • If they are the same, lockCount is incremented or decremented depending on the usage passed in, and TLS is updated. In the RELEASE case, once lockCount reaches 0 the threadCount of the SyncData is decremented, because the current thread has fully unlocked this object. The logic is complete and result is returned directly;

      • If they differ, go to [Step 3].

    • If it does not exist, go to [Step 3].

  • [Step 3] Nothing was found in the TLS fast slot, so search the per-thread cache:

    • Call the fetch_cache function to get the SyncCache;

    • If it exists, iterate over the cache and compare the object of each item with the object passed in:

      • If a match is found, lockCount is incremented or decremented depending on the usage passed in, and the cache entry is updated. In the RELEASE case, once lockCount reaches 0 the threadCount of the SyncData is decremented, because the current thread has fully unlocked this object. The logic is complete and result is returned directly;

      • If no matching object is found after the traversal, go to [Step 4].

    • If no cache exists, go to [Step 4].

  • [Step 4] Traverse the SyncList looking for a matching SyncData; this list is shared across all threads:

    • Compare the object of each SyncData with the object passed in, and remember any unused SyncData encountered during the walk:

      • If a match is found, threadCount is incremented (+1) and control jumps to done;

      • If no match is found, check whether an unused SyncData was seen:

        • If so, overwrite the object of that unused SyncData and reset its threadCount to 1;

        • If not, a new SyncData has to be allocated; go to [Step 5].

    • If the list is empty, the object is being locked for the first time and a new SyncData has to be allocated; go to [Step 5].

  • [Step 5] Allocate a new SyncData and add it to the SyncList:

    • Assign the object and initialize threadCount to 1;

    • With head insertion into the singly linked list, the new node is always inserted at the head;

    • Go to [Step 6].

  • [Step 6] Save the SyncData into the TLS or the cache of the current thread:

    • fastCacheOccupied: true if TLS already held a SyncData;

      • If true, the TLS fast slot is already in use, so save into the cache;

      • If false, save into the TLS of the current thread.

From the source code analysis: lockCount counts recursive locking within a single thread, which means @synchronized can be used recursively; threadCount in SyncData, on the other hand, means @synchronized can be used across multiple threads.
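
A small sketch that exercises both dimensions, recursion within a thread plus locking from several threads (method and variable names are illustrative):

// Recursive use of @synchronized from several threads at once.
- (void)kc_testSynchronizedRecursion {
    for (int t = 0; t < 4; t++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            [self kc_countDownFrom:5];
        });
    }
}

- (void)kc_countDownFrom:(int)value {
    @synchronized (self) {            // re-entering on the same thread: lockCount++
        if (value > 0) {
            NSLog(@"value = %d, thread = %@", value, [NSThread currentThread]);
            [self kc_countDownFrom:value - 1];
        }
    }                                 // each exit: lockCount--
}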

Hash table structure

  • Both lock and data are taken from sDataLists, whose type is StripedMap. It is declared static, so there is only one instance in the process.

Go to the definition of StripedMap

  • StripedMap is a hash-table structure: on a real device it reserves 8 slots, while on the simulator it reserves 64. This is why the earlier lock performance tests, run on both a real device and the simulator, gave slightly different results (an abridged definition is sketched below).
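
For reference, the relevant pieces in objc4-818.2 look roughly like this (abridged):

// objc-private.h / objc-sync.mm (abridged)
template<typename T>
class StripedMap {
#if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
    enum { StripeCount = 8 };     // real device
#else
    enum { StripeCount = 64 };    // simulator / macOS
#endif
    // (storage array, locking helpers, etc. omitted)
    static unsigned int indexForPointer(const void *p) {
        uintptr_t addr = reinterpret_cast<uintptr_t>(p);
        return ((addr >> 4) ^ (addr >> 9)) % StripeCount;  // hash of the object address
    }
};

struct SyncList {
    SyncData *data;    // head of this bucket's SyncData linked list
    spinlock_t lock;   // protects allocation of SyncData in this bucket
};

// Use multiple parallel lists to decrease contention among unrelated objects.
#define LOCK_FOR_OBJ(obj) sDataLists[obj].lock
#define LIST_FOR_OBJ(obj) sDataLists[obj].data
static StripedMap<SyncList> sDataLists;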

As for the linked list: when different objects hash to the same slot, i.e. there is a hash collision, they are simply chained onto that slot's linked list.

In the other case, taking the simulator as an example: when all 64 slots of the StripedMap are occupied, no resizing is performed. When a new object appears it collides with an existing slot, and its SyncData is inserted into that slot's linked list.

In terms of code logic this is [Step 4]: the SyncList is traversed, and because the object in each SyncData differs from the object passed in, an unused SyncData in the list is reused (overwritten) if one exists; otherwise a new SyncData is allocated and added to the SyncList.

Notes on TLS (thread-local storage)

Thread Local Storage (TLS): a private storage space the operating system provides to each thread, usually with limited capacity.

The pthread library API on Linux and other POSIX systems (a minimal usage sketch follows the list):

  • pthread_key_create()

  • pthread_getspecific()

  • pthread_setspecific()

  • pthread_key_delete()
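
A minimal sketch of per-thread storage built from these calls (key and value names are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_key_t g_key;

static void destroy_value(void *value) {
    free(value);                                   // called when the owning thread exits
}

static void *thread_entry(void *arg) {
    pthread_setspecific(g_key, strdup((const char *)arg));   // value visible only to this thread
    printf("my value: %s\n", (const char *)pthread_getspecific(g_key));
    return NULL;
}

int main(void) {
    pthread_key_create(&g_key, destroy_value);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_entry, "thread-1");
    pthread_create(&t2, NULL, thread_entry, "thread-2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(g_key);
    return 0;
}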

Points to note

  • @synchronized is a recursive mutex: lockCount supports recursive use within a thread, and threadCount supports use across multiple threads;

  • Do not pass nil to @synchronized; locking on nil does nothing;

  • In everyday development, self is often passed in to ensure that the lifecycle is synchronized and objects are not released prematurely;

  • A non-Objective-C object cannot be used as the lock object, because the parameter is of type id;

  • The underlying cache and linked-list lookups are linear scans, so performance is relatively low. But it is easy to use in development and requires no explicit unlock, so it is used very frequently.

pthread_mutex

POSIX Threads (pthreads) defines a set of mutex functions specifically for thread synchronization.

A mutex ensures that only one thread can access the protected object at a time. When acquiring the lock fails, the thread goes to sleep and is woken up when the lock is released.

Let’s take a look at the API:

// declare the lock
pthread_mutex_t _lock;
// initialize the lock
pthread_mutex_init(&_lock, NULL);
// lock
pthread_mutex_lock(&_lock);
// unlock
pthread_mutex_unlock(&_lock);
// destroy the lock
pthread_mutex_destroy(&_lock);

NSLock

NSLock wraps pthread_mutex under the hood, so it is also a mutex. However, it is a non-recursive mutex and cannot be used recursively.

Let’s take a look at the API:

NSLock *lock = [[NSLock alloc] init];
// lock
[lock lock];
// unlock
[lock unlock];

Source code analysis

NSLock source code in the Foundation framework:

Since OC’s Foundation framework was not open source, we used Swift’s Foundation source code instead

Open the swift-corelibs-foundation project and find the implementation of NSLock:

NSLock conforms to the NSLocking protocol.

NSLock initializes, destroys, locks, and unlocks pthread_mutex.

  • The initialization method of NSLock encapsulates the initialization of pthread_mutex.

  • To use NSLock, you must call its init method;

  • pthread_cond_broadcast is used to broadcast and wake up all threads waiting on the lock.

Shortcomings of NSLock

NSLock is a non-recursive mutex, so it cannot be used recursively.

Here’s an example:

- (void)lg_testRecursive {
    NSLock *lock = [[NSLock alloc] init];
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            static void (^testMethod)(int);
            testMethod = ^(int value){
                if (value > 0) {
                    NSLog(@"current value = %d", value);
                    testMethod(value - 1);
                }
            };
            [lock lock];
            testMethod(10);
            [lock unlock];
        });
    }
}
  • Locking and unlocking with NSLock around the asynchronous work keeps it thread-safe: the values are printed in order from 10 down to 1, and the loop prints this 10 times.

If the lock is instead taken inside the recursive block, the fact that NSLock is a non-recursive lock becomes a problem: the recursion calls lock again before unlock has run, which deadlocks (a sketch of this variant follows):
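
A minimal sketch of that variant, with the lock/unlock moved inside the recursive block (a rearrangement of the example above):

- (void)lg_testRecursiveDeadlock {
    NSLock *lock = [[NSLock alloc] init];
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            static void (^testMethod)(int);
            testMethod = ^(int value){
                [lock lock];              // the second, recursive call blocks here forever
                if (value > 0) {
                    NSLog(@"current value = %d", value);
                    testMethod(value - 1);
                }
                [lock unlock];
            };
            testMethod(10);
        });
    }
}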

  • Only 10 is printed; the rest of the output never appears.

NSRecursiveLock

NSRecursiveLock also encapsulates pthread_mutex and is a recursive lock. It allows the same thread to lock more than once without causing a deadlock.

  • In contrast to NSLock, it can be used in recursive scenarios;
  • But it is not as powerful as @synchronized, because it does not support recursive locking across multiple threads.

Take a look at the API call:

NSRecursiveLock *recursiveLock = [[NSRecursiveLock alloc] init];
// lock
[recursiveLock lock];
// unlock
[recursiveLock unlock];

Source code analysis

Find the implementation of NSRecursiveLock, which also follows the NSLocking protocol. NSRecursiveLock initializes, destroys, locks, and unlocks pthread_mutex:

The implementation of NSRecursiveLock is very similar to that of NSLock. The main difference is initialization: NSRecursiveLock sets the PTHREAD_MUTEX_RECURSIVE attribute on the pthread_mutex, so NSRecursiveLock is a recursive mutex.

pthread_mutexattr_settype(attrs, Int32(PTHREAD_MUTEX_RECURSIVE))

Shortcomings of NSRecursiveLock

NSRecursiveLock keeps track of how many times it gets locked. Each successful lock must balance calls to unlock. Only when this balance is reached can the lock finally be released for use by other threads. Here’s an example:
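
A sketch of the kind of test that triggers the problem: the recursive block from the earlier NSLock example, switched to NSRecursiveLock and run from several threads at once (names are illustrative):

- (void)lg_testNSRecursiveLock {
    NSRecursiveLock *recursiveLock = [[NSRecursiveLock alloc] init];
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            static void (^testMethod)(int);
            testMethod = ^(int value){
                [recursiveLock lock];     // the same thread may re-lock, but other threads also contend
                if (value > 0) {
                    NSLog(@"current value = %d", value);
                    testMethod(value - 1);
                }
                [recursiveLock unlock];
            };
            testMethod(10);
        });
    }
}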

  • From the output, a single thread prints normally from 10 down to 1, but once multiple threads recurse at the same time an error is reported.

NSCondition

NSCondition is a condition lock, used in a way similar to a semaphore. A thread continues to execute only when its condition is met; otherwise it is blocked and put to sleep.

An NSCondition object actually acts as both a lock and a thread checker:

  • The lock protects the data source while the condition is being checked and while the task triggered by the condition runs;
  • The thread checker decides, based on the condition, whether the thread keeps running or is blocked.

Take a look at the API:

// lock
[condition lock];
// unlock, used together with lock
[condition unlock];
// make the current thread wait
[condition wait];
// signal a waiting thread that it can stop waiting and continue executing
[condition signal];

Usage

Producer-consumer example:

#import "ViewController.h"

@interface ViewController ()

@property (nonatomic, assign) NSUInteger ticketCount;
@property (nonatomic, strong) NSCondition *testCondition;

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self lg_testConditon];
}

#pragma mark -- NSCondition
- (void)lg_testConditon {
    _testCondition = [[NSCondition alloc] init];
    // Create producers and consumers
    for (int i = 0; i < 50; i++) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_producer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_producer];
        });
    }
}

- (void)lg_producer {
    // Guard against the effects of multithreaded access
    [_testCondition lock];
    self.ticketCount = self.ticketCount + 1;
    NSLog(@"produced one, current count %zd", self.ticketCount);
    [_testCondition signal];
    [_testCondition unlock];
}

- (void)lg_consumer {
    // Guard against the effects of multithreaded access
    [_testCondition lock];
    while (self.ticketCount == 0) {
        NSLog(@"waiting, current count %zd", self.ticketCount);
        [_testCondition wait];
    }
    // Consume one
    self.ticketCount -= 1;
    NSLog(@"consumed one, remaining count %zd", self.ticketCount);
    [_testCondition unlock];
}

@end

Both producers and consumers are locked to ensure thread safety

When the consumer finds that the inventory is zero, the thread waits

A signal is sent when the producer increases inventory. The consumer receives the signal and the previously waiting thread resumes execution

The consumer checks for zero inventory with a while loop rather than an if:

  • With an if, when the thread wakes up it resumes from the code after wait without re-checking the condition, and simply continues with the code after the if block;

  • With a while loop, the thread also resumes after wait, but the loop condition is re-evaluated after waking up: if the condition no longer holds, execution proceeds past the while block; otherwise the thread waits again.

Source code analysis

Find an implementation of NSCondition:

  • pthread_mutex + pthread_cond:

    • pthread_mutex: the wrapped mutex;

    • pthread_cond: the condition variable that controls execution.

  • wait: blocks the thread and puts it to sleep;

  • signal: wakes one sleeping, waiting thread, which then continues with the code after wait;

  • broadcast: wakes up all threads waiting on the lock.

NSConditionLock

NSConditionLock is a condition lock: once one thread acquires the lock, the others must wait. Internally it wraps NSCondition and adds condition checking, which makes it simpler and more flexible to use.

Take a look at the API:

Unconditional lock:

[conditionLock lock];

[conditionLock unlock];
  • Means the caller wants to acquire the lock regardless of the condition. If no other thread holds the lock (the internal condition is not checked), the following line of code can be executed;

  • If another thread has already acquired the lock (conditionally or unconditionally), it waits until that thread unlocks.

Conditional lock:

[conditionLock lockWhenCondition:A];   // A is an integer condition value
  • If the lock's internal condition is not equal to condition A, the caller still cannot acquire the lock and waits;

  • If the internal condition equals condition A and no other thread holds the lock, it enters the code section and marks the lock as acquired; any other thread then waits until it unlocks;

  • Condition A is just an integer; conditions are compared internally as integers.

Releasing a conditional lock:

[conditionLock unlockWithCondition:A];
  • Releases the lock and sets the internal condition to A.

Conditional lock + timeout:

BOOL ret = [conditionLock lockWhenCondition:A beforeDate:time];
  • Means that if the lock has not been acquired by the given time, the thread stops blocking. Note that the return value is then NO, and the state of the lock is unchanged. The purpose of this method is to let you handle both outcomes.

NSConditionLock compared with NSCondition:

  • Similarities:

    • Both are mutexes;

    • Both use a condition to control locking and unlocking, and thereby to block and wake threads.

  • Difference:

    • NSCondition is based on pthread_mutex, and NSConditionLock is based on NSCondition.

    • NSCondition manually blocks the thread and sends a signal to wake it up, while NSConditionLock automatically decides whether to block or wake the thread based on a value passed in from the outside.

Usage

Case demonstration:

- (void)lg_testConditonLock {
    NSConditionLock *conditionLock = [[NSConditionLock alloc] initWithCondition:2];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        [conditionLock lockWhenCondition:1];
        NSLog(@"thread 1");
        [conditionLock unlockWithCondition:0];
    });

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        [conditionLock lockWhenCondition:2];
        NSLog(@"thread 2");
        [conditionLock unlockWithCondition:1];
    });

    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [conditionLock lock];
        NSLog(@"thread 3");
        [conditionLock unlock];
    });
}

Print result:

  • The result is, with high probability, thread 3, thread 2, thread 1.

NSConditionLock is initialized with condition 2. In thread priority order:

  • Thread 1 has the highest priority, but does not meet the condition. The code block cannot be executed and enters the waiting state.

  • Thread 3 has the default priority and locks unconditionally, so it can execute its code block;

  • Thread 2 has the lowest priority, but its condition is met, so it can execute its code block.

Therefore, there is a high probability that thread 3 will print before thread 2, while thread 1 must wait for thread 2 to complete and release the conditional lock.

Source code analysis

Initialization:

  • The NSCondition and the condition value are member variables of NSConditionLock;
  • The condition value is assigned at initialization.

Unconditional lock:

open func lock() {
    let _ = lock(before: Date.distantFuture)
}

open func lock(before limit: Date) -> Bool {
    _cond.lock()
    while _thread != nil {
        if !_cond.wait(until: limit) {
            _cond.unlock()
            return false
        }
    }
#if os(Windows)
    _thread = GetCurrentThread()
#else
    _thread = pthread_self()
#endif
    _cond.unlock()
    return true
}
  • Calling the lock method ends up calling lock(before limit: Date); there is no condition check, so execution can proceed directly.

Release unconditional lock:

open func unlock() { 
    _cond.lock() 
#if os(Windows) 
    _thread = INVALID_HANDLE_VALUE 
#else
    _thread = nil 
#endif 
    _cond.broadcast() 
    _cond.unlock() 
}
  • Calling the unlock method actually operates on the NSCondition.

Conditional lock:

open func lock(whenCondition condition: Int) {
    let _ = lock(whenCondition: condition, before: Date.distantFuture)
}

open func lock(whenCondition condition: Int, before limit: Date) -> Bool {
    _cond.lock()
    while _thread != nil || _value != condition {
        if !_cond.wait(until: limit) {
            _cond.unlock()
            return false
        }
    }
#if os(Windows)
    _thread = GetCurrentThread()
#else
    _thread = pthread_self()
#endif
    _cond.unlock()
    return true
}
  • Internal lock and unlock processes ensure thread safety.

  • If the condition does not match, the wait method is called and the thread enters the waiting state;

  • Otherwise, pthread_self is called to record the current thread as the owner.

Release conditional lock:

open func unlock(withCondition condition: Int) {
    _cond.lock() 
#if os(Windows) 
    _thread = INVALID_HANDLE_VALUE
#else 
    _thread = nil
#endif 
    _value = condition 
    _cond.broadcast() 
    _cond.unlock() 
}
  • Internal lock and unlock processes ensure thread safety.

  • Update condition value;

  • Call the broadcast method to broadcast and wake up all threads waiting in the lock.

Assembly analysis

initWithCondition

Set a symbolic breakpoint on -[NSConditionLock initWithCondition:]:

  • For complex assembly, we can set breakpoints on every branch instruction (e.g. bl, b), then read the x0 and x1 registers to see the message receiver and selector; when a ret instruction is reached, read x0 to see the return value.

Execute the bl 0x188FA9E34 command:

(lldb) register read x0 
        x0 = 0x000000016f7b99c8 
        
(lldb) register read x1 
        x1 = 0x00000001d0748898 

(lldb) po 0x000000016f7b99c8 
6165338568 
        
(lldb) po (SEL)0x00000001d0748898
"init"
  • The init method is called on an as-yet-unknown object.

Execute the bl 0x1893db4FC command:

(lldb) register read x0 
        x0 = 0x0000000283e72670 
        
(lldb) register read x1 
        x1 = 0x00000001d0748898 
        
(lldb) po 0x0000000283e72670 
<NSConditionLock: 0x283e72670>{condition = 0, name = nil} 

(lldb) po (SEL)0x00000001d0748898 
"init"
  • The init method of the NSConditionLock object is called.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
        x0 = 0x0000000283e72670 
        
(lldb) register read x1
        x1 = 0x00000001cffedc93 
     
(lldb) po 0x0000000283e72670 
<NSConditionLock: 0x283e72670>{condition = 0, name = nil} 

(lldb) po (SEL)0x00000001cffedc93 
"zone"
  • The zone method of the NSConditionLock object is called, in preparation for allocating memory;

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
        x0 = 0x00000001de1d0e20 (void *)0x00000001de1d0e48: NSCondition 
        
(lldb) register read x1
        x1 = 0x00000001d0c3d418
      
(lldb) po (SEL)0x00000001d0c3d418 
"allocWithZone:" 

(lldb) register read x2 
        x2 = 0x00000001ded98000 libsystem_malloc.dylib`virtual_default_zone
  • The allocWithZone: method of the NSCondition class object is called;
  • The parameter passed in is virtual_default_zone.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
        x0 = 0x0000000280220510 
        
(lldb) register read x1 
        x1 = 0x00000001d0748898 
        
(lldb) po 0x0000000280220510 
<NSCondition: 0x280220510>{name = nil} 

(lldb) po (SEL)0x00000001d0748898
"init"
  • callNSConditionThe object’sinitMethods.

  • The x8 and x9 registers are compared;
  • b.ne: if the comparison result is not equal, jump to the instruction at the label; otherwise continue.

Print the x8 and x9 registers:

(lldb) register read x9 
        x9 = 0x933ba4ad46660065 
        
(lldb) register read x8
        x8 = 0x933ba4ad46660065
  • They are equal, so execution continues.

Execute the RET instruction, print out the X0 register, and view the return value

(lldb) register read x0 
        x0 = 0x0000000283e72670 
        
(lldb) po 0x0000000283e72670 
<NSConditionLock: 0x283e72670>{condition = 2, name = nil
}
  • The NSConditionLock instance object is returned.

Use the x/8g command to view the memory structure of the instance object:

(lldb) x/8g 0x283e72670 
0x283e72670: 0x000021a1de1d0ec1 0x0000000000000000 
0x283e72680: 0x0000000280220510 0x0000000000000000 
0x283e72690: 0x0000000000000002 0x0000000000000000 
0x283e726a0: 0x0000a817e1de26a0 0x00000000006000cc

(lldb) po 0x0000000280220510
<NSCondition: 0x280220510>{name = nil
}
  • The NSCondition and the default condition set at initialization are stored as member variables in the NSConditionLock instance object;
  • The default condition is 0x0000000000000002, i.e. the 2 passed in at initialization.

lockWhenCondition:

Set a symbolic breakpoint on -[NSConditionLock lockWhenCondition:]:

Print the x0, x1, and x2 registers:

(lldb) register read x0
    x0 = 0x0000000283fcbb40 

(lldb) register read x1
    x1 = 0x00000001d0543629 
    
(lldb) register read x2 
    x2 = 0x0000000000000001 
    
(lldb) po 0x0000000283fcbb40 
<NSConditionLock: 0x283fcbb40>{condition = 2, name = nil} 

(lldb) po (SEL)0x00000001d0543629 
"lockWhenCondition:"
  • This is thread 1's code executing;
  • x2: the condition value passed in is 1.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x00000001de1c7030 (void *)0x00000001de1c7058: NSDate 
    
(lldb) register read x1 
    x1 = 0x00000001d0f8b6aa
     
(lldb) po (SEL)0x00000001d0f8b6aa 
"distantFuture"
  • The distantFuture method of NSDate is called.

Execute the b 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x00000002813fc360 
    
(lldb) register read x1 
    x1 = 0x00000001d070122e 
    
(lldb) po 0x00000002813fc360
<NSConditionLock: 0x2813fc360>{condition = 2, name = nil} 

(lldb) po (SEL)0x00000001d070122e
"lockWhenCondition:beforeDate:" 

(lldb) register read x2 
    x2 = 0x0000000000000001 
    
(lldb) register read x3 
    x3 = 0x00000001d64d7168 CoreFoundation`_NSConstantDateDistantFuture
  • The lockWhenCondition:beforeDate: method of the NSConditionLock object is called;

  • x2: the condition value passed in, 1;

  • x3: the value returned by the distantFuture call at the previous bl.

lockWhenCondition:beforeDate:

Set a symbolic breakpoint on -[NSConditionLock lockWhenCondition:beforeDate:]:

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x0000000280220510 
    
(lldb) register read x1 
    x1 = 0x00000001d0195618 
    
(lldb) po 0x0000000280220510 
<NSCondition: 0x280220510>{name = nil} 

(lldb) po (SEL)0x00000001d0195618
"lock"
  • callNSConditionThe object’slockMethods.

  • The x8 and x21 registers are compared;
  • b.eq: if the comparison result is equal, jump to the instruction at the label; otherwise continue.

Print the x8 and x21 registers:

(lldb) register read x8 
    x8 = 0x0000000000000002 
    
(lldb) register read x21
    x21 = 0x0000000000000001
  • They are not equal, so execution continues.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x0000000281ad5050
     
 (lldb) register read x1
     x1 = 0x00000001d099aac3
     
 (lldb) po 0x0000000281ad5050
 <NSCondition: 0x281ad5050>{name = nil}
 
 (lldb) po (SEL)0x00000001d099aac3 
 "waitUntilDate:"
  • callNSConditionThe object’swaitUntilDate:Methods;
  • The thread enters the wait state.

The program continues execution, at which point the thread is switched, jumping to the lockWhenCondition: method for thread 2:

  • x2: the condition value passed in is 2.

Continue the program and enter thread 2's lockWhenCondition:beforeDate: method:

(lldb) register read x0 
    x0 = 0x0000000283e72670
    
(lldb) register read x1 
    x1 = 0x00000001d070122e 
    
(lldb) po 0x0000000283e72670 
<NSConditionLock: 0x283e72670>{condition = 2, name = nil} 

(lldb) po (SEL)0x00000001d070122e 
"lockWhenCondition:beforeDate:" 

(lldb) register read x2
    x2 = 0x0000000000000002 
    
(lldb) register read x3
    x3 = 0x00000001d64d7168 CoreFoundation`_NSConstantDateDistantFuture
  • The difference from the earlier thread 1 call is that the condition value passed in is 2.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x0000000280220510 
    
(lldb) register read x1 
    x1 = 0x00000001d0195618 
    
(lldb) po 0x0000000280220510 
<NSCondition: 0x280220510>{name = nil} 

(lldb) po (SEL)0x00000001d0195618
"lock"
  • callNSConditionThe object’slockMethods.

  • The x8 and x21 registers are compared;
  • b.eq: if the comparison result is equal, jump to the instruction at the label; otherwise continue.

Print the x8 and x21 registers:

(lldb) register read x8 
    x8 = 0x0000000000000002 
    
(lldb) register read x21
    x21 = 0x0000000000000002
  • Equal, execute the instruction at the label:

Execute the bl 0x188FAA140 instruction:

(lldb) register read x0 
    x1 = 0x0000000000000001
  • We can only see that the register value is 1; the exact code executed is unknown.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x0000000280220510 
    
(lldb) register read x1
    x1 = 0x00000001d03bca18 
    
(lldb) po 0x0000000280220510 
<NSCondition: 0x280220510>{name = nil}

(lldb) po (SEL)0x00000001d03bca18 
"unlock"
  • callNSConditionThe object’sunlockMethods.

Execute the ret command, print the X0 register, and check the return value:

(lldb) register read x0 
    x0 = 0x0000000000000001
  • The return value is 1, indicating that the condition was met and the code block has finished executing.

If the condition is not met, the thread enters the waiting state; if the deadline passes, the code block is not executed, the lock is released, and 0 is returned:

unlockWithCondition:

Set a symbolic breakpoint on -[NSConditionLock unlockWithCondition:]:

Enter thread 2’s unlockWithCondition: method:

(lldb) register read x2 
    x2 = 0x0000000000000001
  • The condition value passed in is 1.

Execute the bl 0x1893db4FC command:

(lldb) register read x0
    x0 = 0x0000000282688330 
    
(lldb) register read x1
    x1 = 0x00000001d025a602 
    
(lldb) po 0x0000000282688330 
<NSConditionLock: 0x282688330>{condition = 2, name = nil} 

(lldb) po (SEL)0x00000001d025a602
"unlockWithCondition:"
  • callNSConditionLockThe object’sunlockWithCondition:Methods.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x0000000281ad5050 
    
(lldb) register read x1 
    x1 = 0x00000001d0195618 
    
(lldb) po 0x0000000281ad5050
<NSCondition: 0x281ad5050>{name = nil} 

(lldb) po (SEL)0x00000001d0195618
"lock"
  • callNSConditionThe object’slockMethods.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x0000000281ad5050 
    
(lldb) register read x1 
    x1 = 0x00000001d004280d 
    
(lldb) po 0x0000000281ad5050
<NSCondition: 0x281ad5050>{name = nil} 

(lldb) po (SEL)0x00000001d004280d 
"broadcast"
  • callNSConditionThe object’sbroadcastMethod is broadcast to wake up all threads waiting in the lock.

Execute the b 0x188FA9E28 command:

(lldb) register read x0 
     x0 = 0x0000000281ad5050 
     
(lldb) register read x1 
     x1 = 0x00000001d03bca18 
     
(lldb) po 0x0000000281ad5050 
<NSCondition: 0x281ad5050>{name = nil}

(lldb) po (SEL)0x00000001d03bca18
"unlock"
  • callNSConditionThe object’sunlockMethods.

The b instruction here is an unconditional jump with no return; when this method completes, thread 2 has finished executing its code.

However, thread 2 called the broadcast method of the NSCondition object, triggering a broadcast. Thread 1, which was in the waiting state, is therefore woken up and continues executing its code.

Waking up thread 1

The program continues, at which point it switches threads to the lockWhenCondition: method of thread 1

Thread 1 releases the wait state and continues executing the code:

  • The x8 and x21 registers are compared;
  • b.eq: if the comparison result is equal, jump to the instruction at the label; otherwise continue.

Print the x8 and x21 registers:

(lldb) register read x8 
    x8 = 0x0000000000000001
    
(lldb) register read x21 
    x21 = 0x0000000000000001
  • Equal, execute the instruction at the label.

Execute the bl 0x188FAA140 instruction:

(lldb) register read x0 
    x1 = 0x0000000000000001
  • We can only see that the register value is 1; the exact code executed is unknown.

Execute the bl 0x188FA9E28 command:

(lldb) register read x0 
    x0 = 0x0000000280220510 
    
(lldb) register read x1 
    x1 = 0x00000001d03bca18
    
(lldb) po 0x0000000280220510
<NSCondition: 0x280220510>{name = nil} 

(lldb) po (SEL)0x00000001d03bca18 
"unlock"
  • callNSConditionThe object’sunlockMethods.

Execute the ret command, print the X0 register, and check the return value:

(lldb) register read x0 
    x0 = 0x0000000000000001
  • The return value is 1.

The conditional lock is then released using thread 1’s unlockWithCondition: method.

Spin locks

atomic

atomic is a property attribute, not a lock.

Under the hood it uses a spin lock, but only to guarantee the integrity of individual reads and writes of the property; it does not guarantee overall multithreaded safety. It is also not recommended because of its lower efficiency.

Case demonstration:

In thread A, the array property is reassigned on each iteration of a loop; when the loop index is even it is given three elements.

In thread B, whenever the array contains at least two elements, an element is fetched from it.

If both pieces of logic ran on the same thread, nothing would go wrong. However, because they run on different threads and atomic cannot guarantee multithreaded safety, the program can still crash.
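
A sketch reconstructed from the description above (the property name, element counts, and index are assumptions):

@property (atomic, strong) NSArray *array;    // atomic alone does not make this pattern safe

// Thread A: keeps replacing the array with arrays of different lengths.
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    for (int i = 0; i < 100000; i++) {
        self.array = (i % 2 == 0) ? @[@"1", @"2", @"3"] : @[@"1"];
    }
});

// Thread B: checks the count, then indexes into the array.
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    for (int i = 0; i < 100000; i++) {
        if (self.array.count >= 2) {
            // The array may be replaced between the check and the access,
            // so this can throw an out-of-bounds exception despite atomic.
            id obj = [self.array objectAtIndex:1];
            (void)obj;
        }
    }
});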

OSSpinLock

OSSpinLock is a spin lock. Because of a priority-inversion safety problem, it was replaced by os_unfair_lock starting with iOS 10. With OSSpinLock, a waiting thread busy-waits. OSSpinLock is still available, but it is no longer recommended.

Case demonstration:

#import "ViewController.h"
#import <libkern/OSAtomic.h>
#import <os/lock.h>

@interface ViewController () {
    OSSpinLock _spinLock;
}
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self lg_testOSSPinLock];
}

#pragma mark -- OSSpinLock
- (void)lg_testOSSPinLock {
    _spinLock = OS_SPINLOCK_INIT;

    // thread 1
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"thread 1 ready to lock, currentThread: %@", [NSThread currentThread]);
        OSSpinLockLock(&self->_spinLock);
        NSLog(@"thread 1");
        sleep(3);
        OSSpinLockUnlock(&self->_spinLock);
        NSLog(@"thread 1 unlock complete");
    });

    // thread 2
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"thread 2 ready to lock, currentThread: %@", [NSThread currentThread]);
        OSSpinLockLock(&self->_spinLock);
        NSLog(@"thread 2");
        sleep(3);
        OSSpinLockUnlock(&self->_spinLock);
        NSLog(@"thread 2 unlock complete");
    });
}

@end

os_unfair_lock

os_unfair_lock is the replacement for OSSpinLock. Unlike a spin lock, a thread waiting on os_unfair_lock is put to sleep rather than busy-waiting.

See the API:

#import <libkern/OSAtomic.h>
#import <os/lock.h>

// initialize
os_unfair_lock _unfairLock = OS_UNFAIR_LOCK_INIT;
// lock
os_unfair_lock_lock(&_unfairLock);
// unlock
os_unfair_lock_unlock(&_unfairLock);

Read-write lock

In general, when a read/write lock is held in read mode and another thread attempts to acquire it in write mode, the lock blocks subsequent read-mode requests. This prevents the read-mode lock from being held indefinitely while a pending write-mode request never gets satisfied.

Read/write locks are well suited to data structures that are read far more often than they are written. Because the read-mode lock is shared and the write-mode lock is exclusive, read/write locks are also called shared-exclusive locks.

Read/write locks implement the following:

  • Reads may proceed concurrently

  • Reading and writing are mutually exclusive

  • Writes are mutually exclusive with each other

  • Normal task execution must not be blocked

Read/write locks are implemented in two ways:

  • Implemented with pthread

  • Implemented with GCD

Implementation with pthread

// initialize
int pthread_rwlock_init(pthread_rwlock_t *restrict rwlock, const pthread_rwlockattr_t *restrict attr);
// destroy
int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);
  • Returns 0 on success, or an error number on failure;
  • As with a mutex, before freeing the memory of a read/write lock you should call pthread_rwlock_destroy to clean it up and release the resources allocated by init.
// read lock
int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);
// write lock
int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock);
// unlock
int pthread_rwlock_unlock(pthread_rwlock_t *rwlock);
  • Returns 0 on success, error number on error;

  • These three functions acquire the read lock, acquire the write lock, and release the lock, respectively;

  • The two functions that acquire locks are blocking operations.

Non-blocking lock acquisition operations:

// non-blocking read lock
int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwlock);
// non-blocking write lock
int pthread_rwlock_trywrlock(pthread_rwlock_t *rwlock);
  • Returns 0 on success, or an error number on failure.
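
A minimal usage sketch combining these calls (class and ivar names are illustrative):

#import <Foundation/Foundation.h>
#import <pthread.h>

@interface KCRWCache : NSObject {
    pthread_rwlock_t _rwlock;
    NSMutableDictionary *_storage;
}
@end

@implementation KCRWCache

- (instancetype)init {
    if (self = [super init]) {
        pthread_rwlock_init(&_rwlock, NULL);
        _storage = [NSMutableDictionary dictionary];
    }
    return self;
}

- (id)objectForKey:(NSString *)key {
    pthread_rwlock_rdlock(&_rwlock);     // many readers may hold the lock at once
    id obj = _storage[key];
    pthread_rwlock_unlock(&_rwlock);
    return obj;
}

- (void)setObject:(id)obj forKey:(NSString *)key {
    pthread_rwlock_wrlock(&_rwlock);     // writers are exclusive
    _storage[key] = obj;
    pthread_rwlock_unlock(&_rwlock);
}

- (void)dealloc {
    pthread_rwlock_destroy(&_rwlock);
}

@end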

Implementation with GCD

Create a custom concurrent queue using GCD:

  • Reads are performed with the synchronous dispatch function

  • Writes are performed with the asynchronous barrier function

The synchronous reads and the asynchronous barrier writes are submitted to the same custom concurrent queue, which gives read-write mutual exclusion. Moreover, the asynchronous barrier only blocks tasks in its own queue and does not affect tasks on other queues.

Declarations:

#import "ViewController.h" 

@interface ViewController () 

@property (nonatomic,strong) NSMutableDictionary *dic; 
@property (nonatomic,strong) dispatch_queue_t queue;

@end

Initialization:

- (void)viewDidLoad { 
    [super viewDidLoad]; 
    
    _dic = [NSMutableDictionary dictionary];
    _queue = dispatch_queue_create("queue", DISPATCH_QUEUE_CONCURRENT); 
}

Read:

- (NSString *)safrGetter:(NSString *)strKey {
    __block NSString *strValue;
    dispatch_sync(self.queue, ^{
        strValue = [self.dic objectForKey:strKey];
        NSLog(@"safrGetter: %@, %@, %@", strKey, strValue, [NSThread currentThread]);
    });
    return strValue;
}

Write operations:

- (void)safeSetter:(NSString *)strKey strValue:(NSString *)strValue {
    dispatch_barrier_async(self.queue, ^{
        [self.dic setObject:strValue forKey:strKey];
        NSLog(@"safeSetter: %@, %@, %@", strKey, strValue, [NSThread currentThread]);
    });
}

Test code:

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [self test];
}

- (void)test {
    for (int intIndex = 0; intIndex < 50; intIndex++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            [self safeSetter:[NSString stringWithFormat:@"key_%i", intIndex]
                    strValue:[NSString stringWithFormat:@"value_%i", intIndex]];
            [self safrGetter:[NSString stringWithFormat:@"key_%i", intIndex]];
        });
    }
}