Preface

In development we often encounter situations where data is written while it is being read, which can lead to data corruption or even crashes. This is a resource-contention problem caused by thread-unsafe access, and it is solved by locking.

Classification of locks

There are two broad types of locks in OC: spin locks and mutexes

Spin locks

  • A spin lock is a lock used to protect a shared resource accessed by multiple threads. It differs from a mutex in that when it tries to acquire the lock it busy-waits, repeatedly checking whether the lock is available. While the previous thread’s task is not finished (the lock is held), the next thread waits without sleeping, and as soon as the previous thread finishes, the next thread executes immediately. In a multi-CPU environment, using a spin lock instead of an ordinary mutex can often improve performance for code whose critical sections are short.

  • Advantages: a spin lock does not put the caller to sleep, so it avoids time-consuming operations such as thread scheduling and CPU time-slice switching. If the lock can be acquired in a very short time, a spin lock is far more efficient than a mutex

  • Disadvantages: a spin lock keeps the CPU busy; while it spins without holding the lock it still occupies the CPU, so if the lock cannot be acquired in a short time, CPU efficiency drops noticeably

Summary: highly efficient, but it keeps occupying CPU and resources and cannot be used recursively.

Common spin locks: atomic, OSSpinLock, dispatch_semaphore_t
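As a rough illustration of how a spin lock is used (not from the original article), here is a minimal sketch with OSSpinLock, which is listed above. Note that OSSpinLock has been deprecated since iOS 10 in favor of os_unfair_lock, and self.counter is a hypothetical property used only for the example:

#import <libkern/OSAtomic.h>

static OSSpinLock counterLock = OS_SPINLOCK_INIT;

- (void)incrementCounter {
    OSSpinLockLock(&counterLock);   // busy-waits (spins) if another thread holds the lock
    self.counter += 1;              // keep the critical section as short as possible
    OSSpinLockUnlock(&counterLock);
}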

Mutexes

  • While the previous thread’s task is not finished (the lock is held), the next thread enters a sleep state and waits; the CPU can schedule other threads in the meantime. When the previous thread completes its task, the waiting thread is automatically woken up and then executes its task.

  • With synchronization, one task must finish before the next one can execute.
    • Synchronization = mutual exclusion + ordering

Common mutexes: @synchronized, NSLock, pthread_mutex, NSConditionLock, NSCondition (condition lock), NSRecursiveLock

Performance comparison of locks

You can often see performance diagrams of locks on the Internet:

  • Performance from high to low: OSSpinLock > dispatch_semaphore > pthread_mutex > NSLock > NSCondition > NSRecursiveLock > NSConditionLock > @synchronized

  • In the past the performance of @synchronized was very poor, but Apple later optimized it. To verify, take several locks, run each through 100000 iterations, and measure the completion time with CFAbsoluteTimeGetCurrent (a sketch of this measurement appears after the comparison below):

  • iPhone 11 simulator:



  • iPhone 11 real device:



By comparison, lock performance improved significantly on the real device. As for @synchronized, Apple finally optimized it because of how frequently it is used.
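The timing approach described above can be sketched roughly as follows (not the article’s exact benchmark; kIterations and the choice of NSLock are just examples, and every other lock is measured the same way):

#import <Foundation/Foundation.h>

static const int kIterations = 100000;

void benchmarkNSLock(void) {
    NSLock *lock = [[NSLock alloc] init];
    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    for (int i = 0; i < kIterations; i++) {
        [lock lock];
        [lock unlock];
    }
    CFAbsoluteTime end = CFAbsoluteTimeGetCurrent();
    // elapsed time in milliseconds for kIterations lock/unlock pairs
    NSLog(@"NSLock: %f ms", (end - start) * 1000);
}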

The role of locks

Here’s a simulation of selling tickets:

- (void)testGlobalQueueSell {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) { [self sellingTickets]; }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 8; i++) { [self sellingTickets]; }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 10; i++) { [self sellingTickets]; }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 12; i++) { [self sellingTickets]; }
    });
}

- (void)sellingTickets {
    if (self.remainTickets > 0) {
        self.remainTickets--;
        sleep(1);
        NSLog(@"Sold one, current remaining tickets: %d", self.remainTickets);
    } else {
        NSLog(@"Tickets are sold out");
    }
}

// call
self.remainTickets = 30;
[self testGlobalQueueSell];

The results are as follows:



Because multiple threads are selling tickets at the same time, they all decrement the count without seeing the actual remaining number, which leads to data corruption. Locks are needed here to ensure thread safety

- (void)sellingTickets {
    @synchronized (self) {
        if (self.remainTickets > 0) {
            self.remainTickets--;
            sleep(1);
            NSLog(@"Sold one, current remaining tickets: %d", self.remainTickets);
        } else {
            NSLog(@"Tickets are sold out");
        }
    }
}

The mutex @synchronized is selected to ensure that only one thread can handle ticket changes at a time, resulting in the following:
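For comparison (not in the original article), the same critical section could also be protected with dispatch_semaphore_t, which was listed among the common locks earlier. In this sketch self.semaphore is a hypothetical property created once, e.g. in init, with an initial value of 1:

// e.g. in init: self.semaphore = dispatch_semaphore_create(1);

- (void)sellingTickets {
    dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER); // enter the critical section
    if (self.remainTickets > 0) {
        self.remainTickets--;
        sleep(1);
        NSLog(@"Sold one, current remaining tickets: %d", self.remainTickets);
    } else {
        NSLog(@"Tickets are sold out");
    }
    dispatch_semaphore_signal(self.semaphore); // leave the critical section
}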



So how does @synchronized ensure thread safety? Let’s look at the objc4-818.2 source.

The principle of @synchronized

Write an @synchronized block in main.m and rewrite it to C++ with clang (clang -rewrite-objc main.m); the generated code looks like this:

{ id _rethrow = 0;
    id _sync_obj = (id)appDelegateClassName; // the argument passed in
    objc_sync_enter(_sync_obj);
    try {
        struct _SYNC_EXIT {
            _SYNC_EXIT(id arg) : sync_exit(arg) {}
            ~_SYNC_EXIT() { objc_sync_exit(sync_exit); }
            id sync_exit;
        } _sync_exit(_sync_obj);
    } catch (id e) { _rethrow = e; }
    { struct _FIN { _FIN(id reth) : rethrow(reth) {}
        ~_FIN() { if (rethrow) objc_exception_throw(rethrow); }
        id rethrow;
    } _fin_force_rethow(_rethrow); }
}
  • In the generated code, @synchronized first calls the objc_sync_enter function and then declares a _SYNC_EXIT struct. The struct has a constructor and a destructor: the constructor stores the object in sync_exit, and the destructor calls objc_sync_exit. So the research focuses on objc_sync_enter and objc_sync_exit

Structure analysis

  • The source of objc_sync_enter and objc_sync_exit is as follows:

    int objc_sync_enter(id obj)
    {
        int result = OBJC_SYNC_SUCCESS;
        if (obj) {
            SyncData* data = id2data(obj, ACQUIRE);
            ASSERT(data);
            data->mutex.lock();
        } else {
            // @synchronized(nil) does nothing
            if (DebugNilSync) {
                _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
            }
            objc_sync_nil();
        }
        return result;
    }
    
    int objc_sync_exit(id obj)
    {
        int result = OBJC_SYNC_SUCCESS;
        if (obj) {
            SyncData* data = id2data(obj, RELEASE);
            if (!data) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            } else {
                bool okay = data->mutex.tryUnlock();
                if (!okay) {
                    result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
                }
            }
        } else {
            // @synchronized(nil) does nothing
        }
        return result;
    }

    Both functions obtain a SyncData through id2data: objc_sync_enter calls data->mutex.lock() to lock, and objc_sync_exit calls data->mutex.tryUnlock() to unlock. The only difference between the two id2data calls is the why argument (ACQUIRE vs RELEASE). That brings us to SyncData

  • SyncData has the following structure:

    typedef struct alignas(CacheLineSize) SyncData {
        struct SyncData* nextData;
        DisguisedPtr<objc_object> object;
        int32_t threadCount;  // number of THREADS using this block
        recursive_mutex_t mutex;
    } SyncData;
  • It is a structure:

    • nextData is also of type SyncData, which shows that SyncData forms a singly linked list
    • object is the associated object
    • threadCount records the number of threads using this block, so it can be accessed by multiple threads
    • mutex is a recursive lock (recursive_mutex_t)

Conclusion: @synchronized is a recursive mutex that supports multiple threads

Obtaining SyncData (id2data)

  • The id2data source analysis is as follows:



      1. First a spinlock_t lock is created, and the creation and assignment of SyncData are performed under this lock to ensure thread safety
      2. Then the linked list is obtained through LIST_FOR_OBJ, which has the following structure:
      #define LIST_FOR_OBJ(obj) sDataLists[obj].data
      static StripedMap<SyncList> sDataLists;
      
      // StripedMap structure
      class StripedMap {
      #if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
          enum { StripeCount = 8 };
      #else
          enum { StripeCount = 64 };
      #endif
      };

      sDataLists is a global hash table that stores SyncList entries; it has a capacity of 8 on real devices and 64 on the simulator.

      struct SyncList {
          SyncData *data;
          spinlock_t lock;
          constexpr SyncList() : data(nil), lock(fork_unsafe_lock) {}
      };

      Each SyncList holds SyncData, and SyncData is a singly linked list, so the structure of StripedMap looks like this:



      3. If a SyncData is obtained, it is processed according to the why argument
      4. If no SyncData is obtained, the creation and assignment are done under the lock
      5. Finally execution reaches done, where the related storage is performed

To understand the specific flow in detail, we need to look at concrete cases

Case analysis

Since SyncData supports multiple threads and recursion, the analysis can be divided into the following four cases. StripeCount is adjusted to 1 under the simulator to increase the probability of hash collisions
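The collision probability depends on how an object’s address is mapped to one of the StripeCount buckets of sDataLists. Roughly (the shift constants below are recalled from the objc4 StripedMap source and may differ between versions, so treat this as an approximation, not the literal source):

#include <stdint.h>

static unsigned int indexForPointer(const void *p, unsigned int stripeCount) {
    uintptr_t addr = (uintptr_t)p;
    return ((addr >> 4) ^ (addr >> 9)) % stripeCount;
}

With stripeCount forced to 1, every object lands in the same bucket, so different objects share one SyncList and the list-walking branches of id2data are easier to hit.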

A single thread recursively locks the same object

void oneThreadOneObject() {
    LGPerson *p = [[LGPerson alloc] init];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (p) {
            NSLog(@"0");
            @synchronized (p) {
                NSLog(@"1");
                @synchronized (p) {
                    NSLog(@"2");
                }
            }
        }
    });
}
  • Set a breakpoint at each @synchronized and run under objc4-818.2. After a breakpoint is hit, add further breakpoints inside the id2data function at the data check, the lock handling, and the unlock handling

    1. On the first entry, all 64 SyncList slots of StripedMap are empty, so neither data nor cached data can be obtained the first time



    Execution then reaches the creation path; because *listp is empty, the for loop is not entered



    So posix_memalign is used to allocate space for a new SyncData



    The main work here is on result:

    • Memory is allocated for it
    • It is associated with the object
    • threadCount is set to 1
    • Its recursive lock is created
    • It is inserted at the head of the list: the first node’s nextData is empty, and *listp is pointed at result

    Once result is processed, it is saved into the current thread’s storage



    At this point the first entry is complete; continue past the breakpoints to the second @synchronized

    2. On the second entry, StripedMap already has a value; because it is the same object, listp has a value



    • Next, the data stored on the previous entry is fetched via SYNC_DATA_DIRECT_KEY. Because it is the same object, the object check passes, and since the why argument passed in is ACQUIRE, lockCount++ is performed and the new lockCount is stored



    Continue past the breakpoints to the third lock

    3. The third entry is the same as the second, ending with another lockCount++



    Continuing with the breakpoint will take you to objc_sync_exit

    4. Before unlocking, id2data is processed again: data has a value at this point, and execution eventually reaches the RELEASE branch, which performs lockCount--



    • lockCount-- is performed and stored; when it reaches 0 the thread storage is cleared and OSAtomicDecrement32Barrier is called to atomically decrement threadCount by 1

A single thread recursively locks different objects

Case as follows

void oneThreadAnyObject() {
    LGPerson *p1 = [[LGPerson alloc] init];
    LGPerson *p2 = [[LGPerson alloc] init];
    LGPerson *p3 = [[LGPerson alloc] init];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (p1) {
            NSLog(@"0");
            @synchronized (p2) {
                NSLog(@"1");
                @synchronized (p3) {
                    NSLog(@"2");
                }
            }
        }
    });
}
  • The first @synchronized cannot obtain data because the StripedMap is empty, so it goes through the creation flow and then points *listp at result using head insertion



    • Then the related result and lockCount are stored



  • On the second @synchronized entry (with p2), the fast cache already holds the data for p1, so fastCacheOccupied is set to YES but the object does not match



    • Execution then enters the for loop check



    • But at this moment threadCount = 1 and the object does not match, so no assignment happens here and execution goes to creation again



    • A new SyncData is created, assigned, inserted at the head of the list, and then stored in the cache



  • On the third @synchronized entry (with p3), fastCacheOccupied is again YES and the object still does not match



    • Because the object is different, continue moves past the cached entries and execution reaches the for loop, but threadCount is 1 at this time



    • So execution goes to the creation path again



    • A new SyncData is created, assigned, inserted at the head of the list, and then stored in the cache

Multiple threads recursively lock the same object

void anyThreadOneObject() {
    LGPerson *p = [[LGPerson alloc] init];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (p) {
            NSLog(@"0");
            dispatch_async(dispatch_queue_create("wushuang.concurrent1", DISPATCH_QUEUE_CONCURRENT), ^{
                @synchronized (p) {
                    NSLog(@"1");
                    dispatch_async(dispatch_queue_create("wushuang.concurrent2", DISPATCH_QUEUE_CONCURRENT), ^{
                        @synchronized (p) {
                            NSLog(@"2");
                        }
                    });
                }
            });
        }
    });
}
  • The first entry, as in the previous two cases, goes through the creation flow, points *listp at result using head insertion, and finally stores the relevant information

  • The second time @synchronized goes to the for loop:



    • The node for p is then assigned to result and OSAtomicIncrement32Barrier is called to increment threadCount by 1
    • Finally execution reaches done for the related storage
  • On the third @synchronized entry, threadCount+1 is applied as it was on the second entry

Multiple threads recursively lock different objects

void anyThreadAnyObject() {
    LGPerson *p1 = [[LGPerson alloc] init];
    LGPerson *p2 = [[LGPerson alloc] init];
    LGPerson *p3 = [[LGPerson alloc] init];

    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (p1) {
            NSLog(@"0");
            dispatch_async(dispatch_queue_create("wushuang.concurrent1", DISPATCH_QUEUE_CONCURRENT), ^{
                @synchronized (p2) {
                    NSLog(@"1");
                    dispatch_async(dispatch_queue_create("wushuang.concurrent2", DISPATCH_QUEUE_CONCURRENT), ^{
                        @synchronized (p3) {
                            NSLog(@"2");
                        }
                    });
                }
            });
        }
    });
}
  • When @synchronized is entered for the first time and there is no existing data, a SyncData is created and stored

  • On the second @synchronized entry, the for loop check is performed; because the object is different, a new assignment is made



    • Finally, relevant storage is carried out
  • The third @synchronized entry is the same as the second

Conclusion

The core flow of the whole locking process is as follows:



NSLock and NSRecursiveLock

NSLock

NSLock is a common mutex used to solve thread-safety problems. First look at the following case:

- (void)threadDemo {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testBlock)(int num);
        
        testBlock = ^(int num) {
            if (num > 0) {
                NSLog(@"current num value %d", num);
                testBlock(num - 1);
            }
        };
        testBlock(10);
    });
}
  • Inside the global queue a block calls itself recursively, and it obviously prints in order



  • But putting this code inside a for loop makes the printing problematic



    In this case, a lock is needed to solve the order problem, using NSLock



  • Adding an NSLock around the business code inside the block gives the following result:



    • Because the lock has not been released when the block calls itself again, a deadlock occurs and only one line is printed. Note: NSLock can be used across multiple threads, but it is not recursive
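Since the screenshots are not reproduced here, a minimal sketch of the situation just described (self.testLock is a hypothetical NSLock property; the for loop and queue mirror the earlier demo):

self.testLock = [[NSLock alloc] init];

for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testBlock)(int num);
        testBlock = ^(int num) {
            [self.testLock lock];   // the recursive call below tries to lock again and blocks forever
            if (num > 0) {
                NSLog(@"current num value %d", num);
                testBlock(num - 1);
            }
            [self.testLock unlock];
        };
        testBlock(10);
    });
}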

NSRecursiveLock

  • Now take a look at NSRecursiveLock; its usage is similar to NSLock



  • Add the recursive lock around the same block inside the queued task:



    • The result is one complete ordered print followed by a crash. Note: NSRecursiveLock supports recursion, but it does not support being called like this from multiple threads
  • Earlier in the article we analyzed that @synchronized can be used multithreaded and recursively. Try using it again:



  • The printout shows ten complete sequential executions; @synchronized solves the problem perfectly
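A minimal sketch of this @synchronized variant (mirroring the earlier demo; the NSRecursiveLock version the article shows crashing is identical except that the @synchronized block is replaced by lock/unlock on the recursive lock):

for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testBlock)(int num);
        testBlock = ^(int num) {
            @synchronized (self) {          // recursive and usable from multiple threads
                if (num > 0) {
                    NSLog(@"current num value %d", num);
                    testBlock(num - 1);
                }
            }
        };
        testBlock(10);
    });
}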

NSCondition

  • An NSCondition object actually acts as both a lock and a thread checker: the lock protects the data source, and the task is performed when the condition is met; the thread checker decides, based on the condition, whether the thread should continue running, i.e. whether the thread is blocked. NSCondition has four main APIs

    • [condition lock]: generally used when multiple threads access and modify the same data source at the same time; it ensures that at any moment the data source is accessed and modified by only one thread. Other threads wait outside the lock and can proceed only after unlock
    • [condition unlock]: used in pairs with lock
    • [condition wait]: makes the thread wait
    • [condition signal]: signals a waiting thread that it no longer needs to wait and can continue executing
  • Case study (producer/consumer model, e.g. selling milk tea):

    - (void)testCondition {
        for (int i = 0; i < 50; i++) {
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
                [self ws_product];
            });
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
                [self ws_consumer];
            });
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
                [self ws_product];
            });
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
                [self ws_consumer];
            });
        }
    }
    
    - (void)ws_product {
        self.milkTeaCount += 1;
        NSLog(@"Produced one cup, %zd available", self.milkTeaCount);
    }
    
    - (void)ws_consumer {
        if (self.milkTeaCount == 0) {
            NSLog(@"Waiting for production, count: %zd", self.milkTeaCount);
        }
        self.milkTeaCount -= 1;
        NSLog(@"Sold one cup, %zd left", self.milkTeaCount);
    }

    The following output is displayed:



    At this point there is obviously a data problem: after multiple productions the count is still 1, and after repeated consumption it is still 0. How to solve it? This is where the condition lock is needed: lock the production and consumption steps, and when stock is sold out, wait for production before continuing to sell
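    A sketch of that fix (self.condition is a hypothetical NSCondition property created once, e.g. in init):

    - (void)ws_product {
        [self.condition lock];
        self.milkTeaCount += 1;
        NSLog(@"Produced one cup, %zd available", self.milkTeaCount);
        [self.condition signal];       // wake a consumer waiting for stock
        [self.condition unlock];
    }
    
    - (void)ws_consumer {
        [self.condition lock];
        while (self.milkTeaCount == 0) {
            NSLog(@"Waiting for production, count: %zd", self.milkTeaCount);
            [self.condition wait];     // sleeps until a producer signals
        }
        self.milkTeaCount -= 1;
        NSLog(@"Sold one cup, %zd left", self.milkTeaCount);
        [self.condition unlock];
    }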

Foundation source code interpretation

  • NSLock, NSRecursiveLock, and NSCondition are all encapsulations of pthread locks, but their underlying implementation cannot be seen directly from the Objective-C headers



    lock and unlock belong to the NSLocking protocol, and only the protocol declarations are visible. Unfortunately these locks live in the Foundation framework, which is not open source. What can we do? Look at Swift’s Foundation framework instead: swift-corelibs-foundation

    Go into the source, find the NSLock.swift file, and then analyze the locks one by one

  • NSLock analysis: find NSLock in the source, then locate where it is initialized, locked, and unlocked

    • init: calls pthread_mutex_init(mutex, nil) to initialize
    • lock: calls pthread_mutex_lock(mutex) to lock
    • unlock: calls pthread_mutex_unlock(mutex) to unlock
  • NSRecursiveLock analysis:

    • init:
      withUnsafeMutablePointer(to: &attrib) { attrs in
          pthread_mutexattr_init(attrs)
          pthread_mutexattr_settype(attrs, Int32(PTHREAD_MUTEX_RECURSIVE))
          pthread_mutex_init(mutex, attrs)
      }
    • lock: pthread_mutex_lock(mutex)
    • unlock: pthread_mutex_unlock(mutex)

During initialization NSRecursiveLock sets the mutex attribute type to PTHREAD_MUTEX_RECURSIVE, and that is why NSRecursiveLock supports recursion
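A minimal Objective-C sketch of the same idea, creating a recursive pthread mutex directly (this mirrors the Swift init above as an illustration; it is not the Foundation source):

#import <pthread.h>

static void recursiveMutexDemo(void) {
    pthread_mutex_t recursiveMutex;
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&recursiveMutex, &attr);
    pthread_mutexattr_destroy(&attr);

    // The same thread can now lock recursively without deadlocking:
    pthread_mutex_lock(&recursiveMutex);
    pthread_mutex_lock(&recursiveMutex);
    pthread_mutex_unlock(&recursiveMutex);
    pthread_mutex_unlock(&recursiveMutex);

    pthread_mutex_destroy(&recursiveMutex);
}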

NSConditionLock

  • NSConditionLock is also a condition lock: once one thread has acquired the lock, the other threads must wait. It has the following APIs:

      1. [conditionLock lock]: conditionLock expects to acquire the lock. If no other thread holds the lock (the internal condition is not checked), it can execute the following code; if another thread already holds the lock (conditionally or not), it waits until that thread unlocks
      2. [conditionLock unlock]: unlocks
      3. [conditionLock lockWhenCondition:A]: if no other thread holds the lock but the internal condition is not equal to A, it still cannot acquire the lock and keeps waiting. If the internal condition equals A and no other thread holds the lock, it enters the code block and acquires the lock; any other thread then waits for its code to finish until it unlocks
      4. [conditionLock unlockWithCondition:A]: releases the lock and sets the internal condition to A
      5. return = [xxx lockWhenCondition:A beforeDate:t]: if the lock has not been acquired and this time is exceeded, the thread is no longer blocked. Note that the return value is then NO and the state of the lock is not changed; the purpose of this method is to allow handling in both states
      6. condition: the so-called condition is an integer; conditions are compared internally by integer value
  • Let’s take a look at the case:

    - (void)testConditionLock {
        NSConditionLock *lock = [[NSConditionLock alloc] initWithCondition:2];
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [lock lockWhenCondition:1];
            NSLog(@"1");
            [lock unlockWithCondition:0];
        });
    
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
            sleep(1);
            [lock lockWhenCondition:2];
            NSLog(@"2");
            [lock unlockWithCondition:1];
        });
    
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            [lock lock];
            NSLog(@"3");
            [lock unlock];
        });
    }
    • The printing order is most likely 3 -> 2 -> 1: 3 has no condition and is just a plain lock; the lock is created with condition 2, so only the block waiting on condition 2 can run, and after [lock unlockWithCondition:1] the block waiting on condition 1 runs. This raises four questions:
        1. NSConditionLock and NSCondition look alike; what is the difference?
        2. What is the 2 passed at creation?
        3. What does lockWhenCondition do?
        4. What does unlockWithCondition do?
    1. The creation code initWithCondition: in Swift is as follows:

      When the object is created, an NSCondition member variable _cond is created, and then the condition value is assigned to another member variable _value
    2. The lockWhenCondition: code is as follows:
    open func lock() {
        let _ = lock(before: Date.distantFuture)
    }
    
    open func lock(whenCondition condition: Int) {
        let _ = lock(whenCondition: condition, before: Date.distantFuture)
    }
    
    open func lock(whenCondition condition: Int, before limit: Date) -> Bool {
        _cond.lock() // _cond calls lock
        while _thread != nil || _value != condition { // loop while another thread owns the lock or the condition does not match
            if !_cond.wait(until: limit) { // if the wait times out, unlock and return false; otherwise keep checking until the condition changes
                _cond.unlock()
                return false
            }
        }
    #if os(Windows)
        _thread = GetCurrentThread()
    #else
        _thread = pthread_self() // the current thread records itself as the owner
    #endif
        _cond.unlock()  // unlock before the method returns
        return true
    }
    • In Swift, locking ultimately calls the lock(whenCondition:before:) method. It first calls lock on the NSCondition, then checks in the while loop: if _thread has a value or the condition does not match, it waits. If the wait times out, it calls unlock and returns false, meaning locking failed; if there is no timeout, the while loop keeps checking until the condition changes and it exits the loop. After acquiring the lock, _thread is assigned pthread_self(), i.e. the thread marks itself as the owner, and unlock is called before the method returns
    3. The unlock / unlockWithCondition code is as follows:
    open func unlock(withCondition condition: Int) {
        _cond.lock()
    #if os(Windows)
        _thread = INVALID_HANDLE_VALUE
    #else
        _thread = nil // clear the owner
    #endif
        _value = condition // update _value to the new condition
        _cond.broadcast()
        _cond.unlock()
    }
    • When unlocking, lock is called first, _thread is set to nil, and _value is changed to the newly passed condition; unlock is called before the method completes. But in the process _cond also calls the broadcast method. What does that do?
    • broadcast:
      open func broadcast() {
      #if os(Windows)
          WakeAllConditionVariable(cond)
      #else
          pthread_cond_broadcast(cond)
      #endif
      }

      The method ultimately calls pthread_cond_broadcast, which wakes up all threads blocked on the condition variable. This also confirms that with a mutex a blocked thread sleeps and is woken up when the lock is released

Read-write locks

  • A read-write lock is actually a special mutex that divides the users of a shared resource into readers and writers: readers only read the shared resource, while writers write to it. Compared with a spin lock, this kind of lock can improve concurrency, because in a multiprocessor system it allows multiple readers to access the shared resource at the same time; the maximum number of concurrent readers is the number of logical CPUs. Writers are exclusive: at any moment a read-write lock can be held either by one writer or by multiple readers (up to the CPU count), but never by readers and a writer at the same time. If the read-write lock currently has no readers and no writers, a writer can acquire it immediately; otherwise it must spin until there is no writer or reader left. If the read-write lock has no writer, a reader can acquire it immediately; otherwise the reader must spin until the writer releases the lock.
  • Read-write locks are suitable for data structures that are read far more often than they are written: read-mode locking can be shared, while write-mode locking is exclusive, which is why a read-write lock is also called a shared-exclusive lock

Read-write behavior can be implemented with pthread_rwlock_t or with the barrier function dispatch_barrier_async:

    1. pthread_rwlock_t APIs:
    • pthread_rwlock_init: initializes the lock
    • pthread_rwlock_rdlock: read lock
    • pthread_rwlock_tryrdlock: read try-lock
    • pthread_rwlock_wrlock: write lock
    • pthread_rwlock_trywrlock: write try-lock
    • pthread_rwlock_unlock: unlock
    • pthread_rwlock_destroy: destroys the lock

Usage:

#import <pthread.h>

@property (nonatomic, assign) pthread_rwlock_t rwlock;
@property (nonatomic, strong) NSMutableDictionary *dic;

- (void)testRWLock {
    pthread_rwlock_init(&_rwlock, NULL);
    self.dic = [NSMutableDictionary dictionary];

    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            if (i % 2 == 0) {
                [self ws_writeName:[NSString stringWithFormat:@"name_%d", i]];
            } else {
                [self ws_readName];
            }
        });
    }
}

// read
- (void)ws_readName {
    pthread_rwlock_rdlock(&_rwlock); // read lock
    NSString *name = [self.dic valueForKey:@"name"];
    NSLog(@"read name ___ : %@ 🎈", name);

    pthread_rwlock_unlock(&_rwlock); // unlock after reading
}

// write
- (void)ws_writeName:(NSString *)name {
    pthread_rwlock_wrlock(&_rwlock); // write lock
    [self.dic setValue:name forKey:@"name"];
    NSLog(@"write name ___ : %@ 🎉", name);

    pthread_rwlock_unlock(&_rwlock); // unlock after writing
}

The output is as follows:

Now let’s look at the fence scheme

    2. dispatch_barrier_async is used as follows:
@property (nonatomic, strong) NSMutableDictionary *dic;
@property (nonatomic, strong) dispatch_queue_t myQueue;

- (void)testRWLock {
    self.dic = [NSMutableDictionary dictionary];
    self.myQueue = dispatch_queue_create("com.current.thread", DISPATCH_QUEUE_CONCURRENT);

    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            if (i % 2 == 0) {
                [self ws_barrier_writeName:[NSString stringWithFormat:@"name_%d", i]];
            } else {
                [self ws_barrier_readName];
            }
        });
    }
}

// read
- (void)ws_barrier_readName {
    dispatch_sync(self.myQueue, ^{
        NSString *name = [self.dic valueForKey:@"name"];
        NSLog(@"read name ___ : %@ 🎈", name);
    });
}

// write
- (void)ws_barrier_writeName:(NSString *)name {
    dispatch_barrier_async(self.myQueue, ^{
        [self.dic setValue:name forKey:@"name"];
        NSLog(@"write name ___ : %@ 🎉", name);
    });
}

The following output is displayed: