The lock

At the lowest level there are two basic kinds of lock: mutexes and spin locks. Most higher-level locks are built on top of them. The purpose of locking is to ensure that only one thread can access a shared resource at any given time, avoiding the data corruption that results when multiple threads modify it concurrently.

A mutex is an “exclusive lock”. For example, once thread A successfully locks the mutex, the mutex is owned by thread A. As long as thread A does not release it, thread B’s attempt to lock it will fail, and thread B will give up the CPU to other threads; the code thread B is trying to protect naturally blocks.

The blocking on a failed mutex lock is implemented by the operating system kernel. When locking fails, the kernel puts the thread into a “sleep” state; after the lock is released, the kernel wakes the thread up at an appropriate time, and once the thread has acquired the lock it resumes execution.
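A minimal sketch of this behavior with a raw pthread_mutex (the names counter and increment are purely illustrative):

#import <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

void increment(void) {
    pthread_mutex_lock(&mutex);   // blocks (sleeps) if another thread holds the lock
    counter += 1;                 // critical section: only one thread at a time
    pthread_mutex_unlock(&mutex); // the kernel may now wake a sleeping waiter
}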

A spin lock is the simplest kind of lock: it just keeps spinning, burning CPU cycles, until the lock becomes available. Note that on a single-core CPU you need a preemptive scheduler (one that keeps interrupting the running thread via the clock so other threads can run); otherwise spin locks cannot work on a single CPU, because a spinning thread never gives up the CPU.

If you can be sure that the locked code has a very short execution time, prefer a spin lock over a mutex; otherwise use a mutex.

Spin locks are similar to mutexes in usage but completely different in implementation: when locking fails, a mutex does a “thread switch”, whereas a spin lock does a “busy wait”.
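For illustration, here is a toy spin lock built on a C11 atomic flag (a sketch only, not a production lock; in real code prefer os_unfair_lock, since OSSpinLock is deprecated):

#include <stdatomic.h>

typedef struct { atomic_flag flag; } toy_spinlock_t;
// initialize with: toy_spinlock_t l = { ATOMIC_FLAG_INIT };

void toy_spin_lock(toy_spinlock_t *l) {
    // busy wait: keep retrying until the flag was previously clear
    while (atomic_flag_test_and_set_explicit(&l->flag, memory_order_acquire)) {
        // spinning: the thread stays on the CPU instead of sleeping
    }
}

void toy_spin_unlock(toy_spinlock_t *l) {
    atomic_flag_clear_explicit(&l->flag, memory_order_release);
}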

Mutexes can be further classified into recursive and non-recursive locks. Let’s look at some common locks.

@synchronized

Suppose we have a total of twenty tickets and several ticket windows selling them at the same time, each on its own thread. Once the sale starts, we find that the remaining count goes wrong: for example, 17 tickets are actually left but 18 is displayed. The ticket count is inaccurate because multiple threads are operating on the same memory at the same time. If we add a lock, only one thread can operate on that memory at any moment.
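The original demo’s code is not reproduced here; a minimal sketch of the idea, with hypothetical names (ticketCount, saleTicket), might look like this:

@property (assign) NSInteger ticketCount; // starts at 20

- (void)saleTicket {
    @synchronized (self) {          // only one "window" sells at a time
        if (self.ticketCount > 0) {
            self.ticketCount -= 1;
            NSLog(@"Sold one, %ld left", (long)self.ticketCount);
        } else {
            NSLog(@"Sold out");
        }
    }
}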

In the example above we used @synchronized, which acts as a lock. We will also sometimes see it nested, as below; @synchronized is recursively reentrant. So what does @synchronized actually look like under the hood, and how does it achieve the locking effect? Let’s analyze the source code.

@synchronized (p) {
    @synchronized (p1) {

    }
}

Add a breakpoint and look at the assembly: @synchronized compiles down to paired calls to objc_sync_enter and objc_sync_exit.

Find the objc source code and search for objc_sync_enter:

int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        ASSERT(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }

    return result;
}

If obj is nil, it does nothing.

Now look at objc_sync_exit:

int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, RELEASE);
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            bool okay = data->mutex.tryUnlock();
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }

    return result;
}

So when obj is not nil, enter performs data->mutex.lock() and exit performs data->mutex.tryUnlock().

The main work happens in SyncData* data = id2data(obj, ACQUIRE):


static SyncData* id2data(id object, enum usage why)
{
    spinlock_t *lockp = &LOCK_FOR_OBJ(object);
    SyncData **listp = &LIST_FOR_OBJ(object);
    SyncData* result = NULL;

#if SUPPORT_DIRECT_THREAD_KEYS
    bool fastCacheOccupied = NO;
    // #define __PTK_FRAMEWORK_OBJC_KEY1 41
    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    // Fast path: if this thread's TLS already holds the lock for this object,
    // bump the per-thread lockCount (on RELEASE: lockCount--, and threadCount--
    // when the thread is done with the lock)
    if (data) { ... }
#endif

    // Check the per-thread cache of already-owned locks for a matching object;
    // the same lockCount++/lockCount-- logic applies on a cache hit
    SyncCache *cache = fetch_cache(NO);
    if (cache) { ... }

    lockp->lock();

    // Walk listp looking for an existing SyncData whose object == object;
    // if found, another thread already owns it, so threadCount++
    { ... }

    // Otherwise create a new SyncData and prepend it to the list
    posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
    result->object = (objc_object *)object;
    result->threadCount = 1;
    new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
    result->nextData = *listp;
    *listp = result;

 done:
    lockp->unlock();
    if (result) {
        // Cache the SyncData in TLS / the per-thread cache, etc.
    }
    return result;
}
The data structures:

spinlock_t *lockp = &LOCK_FOR_OBJ(object);
SyncData **listp = &LIST_FOR_OBJ(object);

#define LOCK_FOR_OBJ(obj) sDataLists[obj].lock
#define LIST_FOR_OBJ(obj) sDataLists[obj].data

static StripedMap<SyncList> sDataLists;

Let’s take a look at SyncList and StripedMap:

struct SyncList {
    SyncData *data;
    spinlock_t lock;

    constexpr SyncList() : data(nil), lock(fork_unsafe_lock) { }
};

template<typename T>
class StripedMap {
#if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
    enum { StripeCount = 8 };
#else
    enum { StripeCount = 64 };
#endif
    ...
};
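For context, StripedMap maps an object’s address to one of its StripeCount buckets by hashing the pointer; in objc4 the index computation looks roughly like this (a paraphrase, not a verbatim quote):

#include <stdint.h>

enum { StripeCount = 64 };

static unsigned int indexForPointer(const void *p) {
    uintptr_t addr = (uintptr_t)p;
    // mix two shifted copies of the address, then reduce modulo the bucket count
    return ((addr >> 4) ^ (addr >> 9)) % StripeCount;
}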

// SyncData wraps a recursive mutex and links into a singly linked list
typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;        // singly linked list
    DisguisedPtr<objc_object> object;
    int32_t threadCount;              // number of THREADS using this block
    recursive_mutex_t mutex;          // the underlying recursive lock
} SyncData;

You can see that sDataLists is a global static hash map (a StripedMap). Since we are running on the simulator, StripeCount = 64, and printing it shows the following structure:

(StripedMap<SyncList>::PaddedT [64]) array = {
  [0] = {
    value = {
      data = NULL
      lock = {
        mLock = (_os_unfair_lock_opaque = 0)
      }
    }
  }
  [1] = {
    value = {
      data = NULL
      lock = {
        mLock = (_os_unfair_lock_opaque = 0)
      }
    }
  }
  ...
  [63] = {
    value = {
      data = NULL
      lock = {
        mLock = (_os_unfair_lock_opaque = 0)
      }
    }
  }
}
summary

objc_sync_enter and objc_sync_exit occur in symmetric pairs. @synchronized encapsulates an underlying recursive lock (recursive_mutex_t) and uses two layers of storage: the TLS fast cache and the per-thread SyncCache. The first time a thread locks an object, a SyncData is created with threadCount = 1 and a per-thread lockCount of 1; a re-entrant lock on the same thread is just lockCount++. If the object is not found in TLS or the cache but already exists in listp, another thread owns it, so threadCount++. On exit, lockCount-- (and threadCount-- once the thread is done with the lock).

@synchronized is reentrant, recursive, and multithread-capable: 1. thread-local storage (TLS) keeps the per-thread lockCount, which is what makes it recursive; 2. threadCount records how many threads are locking the same object, which is what makes it work across multiple threads.
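One caveat follows directly from the source we saw above: @synchronized(nil) does nothing, so synchronizing on an object that can become nil silently removes all protection. A small illustrative sketch:

NSObject *token = nil; // e.g. a weak reference that has already been released

@synchronized (token) {
    // objc_sync_enter(nil) returns immediately without taking any lock,
    // so this block is NOT protected at all
}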

NSLock, NSRecursiveLock

for (int i = 0; i < 200000; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        _testArray = [NSMutableArray array];
    });
}

Running this, the app can crash: many threads assign _testArray at the same time, and the old value can be over-released. Locking solves the problem.

NSLock *lock = [[NSLock alloc] init];

for (int i = 0; i < 200000; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [lock lock];
        _testArray = [NSMutableArray array];
        [lock unlock];
    });
}

If we jump into the definition of NSLock, we see it conforms to the NSLocking protocol, which has just two methods: lock and unlock.
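Beyond the NSLocking methods, NSLock also offers non-blocking and time-limited variants, for example:

NSLock *lock = [[NSLock alloc] init];

if ([lock tryLock]) {   // returns NO immediately if the lock is already held
    // ... critical section ...
    [lock unlock];
}

if ([lock lockBeforeDate:[NSDate dateWithTimeIntervalSinceNow:1]]) {
    // acquired the lock within one second
    [lock unlock];
}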

for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testMethod)(int);
        testMethod = ^(int value){
            if (value > 0) {
                NSLog(@"current value = %d", value);
                testMethod(value - 1);
            }
        };
        testMethod(10);
    });
}

How should we lock this example? Without a lock, we find the output gets scrambled.

NSLock *lock = [[NSLock alloc] init];

for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [lock lock];
        static void (^testMethod)(int);
        testMethod = ^(int value){
            if (value > 0) {
                NSLog(@"current value = %d", value);
                testMethod(value - 1);
            }
        };
        testMethod(10);
        [lock unlock];
    });
}

But if we add the lock in the following position instead:

NSLock *lock = [[NSLock alloc] init];

for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testMethod)(int);
        testMethod = ^(int value){
            [lock lock];
            if (value > 0) {
                NSLog(@"current value = %d", value);
                testMethod(value - 1);
            }
            [lock unlock];
        };
        testMethod(10);
    });
}

Now we only ever see 10 printed: each recursive call locks again before anything has unlocked, so the thread deadlocks against itself after the first print. We know NSRecursiveLock is a recursive lock, so if we swap NSLock for NSRecursiveLock we get 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 and then a crash: NSRecursiveLock only supports recursion on a single thread, and multiple threads recursing through it cause problems. @synchronized, as we saw, supports both recursion and multithreading, so switching to @synchronized works fine.
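The NSRecursiveLock variant described above is not shown in the original; as a sketch it would look like this:

NSRecursiveLock *lock = [[NSRecursiveLock alloc] init];

for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testMethod)(int);
        testMethod = ^(int value){
            [lock lock];   // the same thread may re-enter without deadlocking
            if (value > 0) {
                NSLog(@"current value = %d", value);
                testMethod(value - 1);
            }
            [lock unlock];
        };
        // still unsafe: another thread can reassign the shared static
        // testMethod mid-recursion, which is what leads to the crash
        testMethod(10);
    });
}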

for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testMethod)(int);
        testMethod = ^(int value){
            @synchronized (self) {
                if (value > 0) {
                    NSLog(@"current value = %d", value);
                    testMethod(value - 1);
                }
            }
        };
        testMethod(10);
    });
}

We can see that NSRecursiveLock also conforms to the NSLocking protocol.

NSCondition

Consider a producer-consumer scenario: if the stock is 0, consumers should wait for production; when the stock becomes positive, a signal is sent to a waiting consumer that it may consume, and consumption begins. NSCondition fits this case.

The API is fairly simple; for example:

- (void)lg_testConditon {

    _testCondition = [[NSCondition alloc] init];

    // Create producers and consumers
    for (int i = 0; i < 50; i++) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_producer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_producer];
        });
    }
}

- (void)lg_producer {
    [_testCondition lock];   // guard the count against concurrent access
    self.ticketCount = self.ticketCount + 1;
    NSLog(@"Produced one, current count %zd", self.ticketCount);
    [_testCondition signal]; // wake one waiting consumer
    [_testCondition unlock];
}

- (void)lg_consumer {
    [_testCondition lock];   // guard the count against concurrent access
    if (self.ticketCount == 0) {
        NSLog(@"Waiting, count %zd", self.ticketCount);
        [_testCondition wait];
    }
    // Note: consume only after the condition has been checked around the wait
    // (strictly, the `if` above should be a `while` to guard against spurious wakeups)
    self.ticketCount -= 1;
    NSLog(@"Consumed one, count left %zd", self.ticketCount);
    [_testCondition unlock];
}

NSConditionLock

Some simple uses of the API:

NSConditionLock *conditionLock = [[NSConditionLock alloc] initWithCondition:2];

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    [conditionLock lockWhenCondition:1]; // waits until the condition value becomes 1
    NSLog(@"Thread 1");
    [conditionLock unlockWithCondition:0];
});

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    [conditionLock lockWhenCondition:2]; // the initial condition is 2, so this runs first
    sleep(1); // sleep takes whole seconds; a fractional argument would truncate to 0
    NSLog(@"Thread 2");
    [conditionLock unlockWithCondition:1]; // now thread 1 may proceed
});

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [conditionLock lock]; // a plain lock ignores the condition value
    NSLog(@"Thread 3");
    [conditionLock unlock];
});

summary

All of these locks are encapsulations of the underlying pthread_mutex. From the (swift-corelibs-foundation) NSLock source:

open class NSLock: NSObject, NSLocking {
    ...
    open func lock() {
        pthread_mutex_lock(mutex)
    }

    open func unlock() {
        pthread_mutex_unlock(mutex)
    }
    ...
}

For the NSRecursiveLock source:

open class NSRecursiveLock: NSObject, NSLocking {

    public override init() {
        super.init()
        ...
        withUnsafeMutablePointer(to: &attrib) { attrs in
            pthread_mutexattr_init(attrs)
            // The one difference from NSLock: the mutex is marked recursive
            pthread_mutexattr_settype(attrs, Int32(PTHREAD_MUTEX_RECURSIVE))
            pthread_mutex_init(mutex, attrs)
        }
        ...
    }

    open func lock() {
        pthread_mutex_lock(mutex)
    }

    open func unlock() {
        pthread_mutex_unlock(mutex)
    }
}

The NSCondition source:

open class NSCondition: NSObject, NSLocking {

    internal var mutex = _MutexPointer.allocate(capacity: 1)
    internal var cond = _ConditionVariablePointer.allocate(capacity: 1)

    public override init() {
        pthread_mutex_init(mutex, nil)
        pthread_cond_init(cond, nil)
    }

    deinit {
        pthread_mutex_destroy(mutex)
        pthread_cond_destroy(cond)
    }

    open func lock() {
        pthread_mutex_lock(mutex)
    }

    open func unlock() {
        pthread_mutex_unlock(mutex)
    }

    open func wait() {
        pthread_cond_wait(cond, mutex)
    }

    open func wait(until limit: Date) -> Bool {
        guard var timeout = timeSpecFrom(date: limit) else {
            return false
        }
        return pthread_cond_timedwait(cond, mutex, &timeout) == 0
    }

    open func signal() {
        pthread_cond_signal(cond)
    }

    open func broadcast() {
        pthread_cond_broadcast(cond) // wakes all waiting threads
    }

    open var name: String?
}

The NSConditionLock source:

open class NSConditionLock : NSObject, NSLocking {

    internal var _cond = NSCondition()
    internal var _value: Int
    internal var _thread: _swift_CFThreadRef?

    public convenience override init() {
        self.init(condition: 0)
    }

    public init(condition: Int) {
        _value = condition
    }

    open func lock() {
        let _ = lock(before: Date.distantFuture)
    }

    open func unlock() {
        _cond.lock()
        _thread = nil
        _cond.broadcast()
        _cond.unlock()
    }

    open var condition: Int {
        return _value
    }

    open func lock(whenCondition condition: Int) {
        let _ = lock(whenCondition: condition, before: Date.distantFuture)
    }

    open func `try`() -> Bool {
        return lock(before: Date.distantPast)
    }

    open func tryLock(whenCondition condition: Int) -> Bool {
        return lock(whenCondition: condition, before: Date.distantPast)
    }

    open func unlock(withCondition condition: Int) {
        _cond.lock()
        _thread = nil
        _value = condition
        _cond.broadcast()
        _cond.unlock()
    }

    open func lock(before limit: Date) -> Bool {
        _cond.lock()
        while _thread != nil {
            if !_cond.wait(until: limit) {
                _cond.unlock()
                return false
            }
        }
        _thread = pthread_self()
        _cond.unlock()
        return true
    }

    open func lock(whenCondition condition: Int, before limit: Date) -> Bool {
        _cond.lock()
        while _thread != nil || _value != condition {
            if !_cond.wait(until: limit) {
                _cond.unlock()
                return false
            }
        }
        _thread = pthread_self()
        _cond.unlock()
        return true
    }

    open var name: String?
}

So NSConditionLock is an NSCondition plus a condition value (_value).

At the bottom, all of these locks are encapsulations of pthread primitives.

Read-write lock

In some scenarios, such as operations on files, we require that:

  • reads and writes are mutually exclusive
  • writes are mutually exclusive with each other
  • multiple reads can happen at the same time
  • writing does not block the current thread

For writes we can use dispatch_barrier_async. As we know from multithreading, dispatch_barrier_async acts as a barrier on its queue but does not block the calling thread, so writes are mutually exclusive without blocking the current thread.
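The examples below assume read_write_queue is a custom concurrent queue (dispatch_barrier_async only acts as a barrier on such a queue, not on a global one). The original does not show its creation; a plausible setup, with hypothetical property names and label:

@property (nonatomic, strong) dispatch_queue_t read_write_queue;
@property (nonatomic, strong) NSMutableDictionary *dataDic;

self.read_write_queue = dispatch_queue_create("com.demo.readwrite", DISPATCH_QUEUE_CONCURRENT);
self.dataDic = [NSMutableDictionary dictionary];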

- (void)setAryWithValue:(NSString *)value time:(int)time {
    sleep(time);
    dispatch_barrier_async(self.read_write_queue, ^{
        [self.dataDic setValue:value forKey:@"name"];
        NSLog(@"Write state --%@--%@", self.dataDic[@"name"], [NSThread currentThread]);
    });
}

For reads we use dispatch_sync. Reads and writes are dispatched to the same queue, and the write is a barrier, so reads and writes are mutually exclusive. dispatch_sync only blocks the calling thread, and we issue reads from multiple threads onto a concurrent queue, so multiple reads can run at the same time.

Why not dispatch_async for reads? The method has a return value: with dispatch_async, return value; could execute before value = self.dataDic[@"name"]; runs, and the caller would get nil.

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            NSLog(@"Current value --%@", [self valueForDataDicWithKey:@"name"]);
        });
    }
}

- (NSString *)valueForDataDicWithKey:(NSString *)key {
    __block NSString *value;
    dispatch_sync(self.read_write_queue, ^{
        value = self.dataDic[key];
        NSLog(@"Read state --%@--%@", self.dataDic[key], [NSThread currentThread]);
    });
    return value;
}
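As an alternative, pthreads provide a dedicated read-write lock, pthread_rwlock_t. A minimal sketch with hypothetical helper methods (note that, unlike the barrier approach, pthread_rwlock_wrlock does block the writing thread):

#import <pthread.h>

static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

- (NSString *)readValueForKey:(NSString *)key {
    pthread_rwlock_rdlock(&rwlock);  // many readers may hold this at once
    NSString *value = self.dataDic[key];
    pthread_rwlock_unlock(&rwlock);
    return value;
}

- (void)writeValue:(NSString *)value forKey:(NSString *)key {
    pthread_rwlock_wrlock(&rwlock);  // exclusive: no readers, no other writers
    self.dataDic[key] = value;
    pthread_rwlock_unlock(&rwlock);
}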