-
The types of locks
-
Mutual exclusion (Mutex)
A mechanism that prevents two threads from simultaneously reading and writing the same shared resource (such as a global variable). When a thread fails to acquire the lock, it goes to sleep and waits to be awakened when the lock is released. Mutexes are divided into recursive and non-recursive locks.
- Recursive locking
A reentrant lock: the same thread can acquire the lock again before it is released, so it can be used in recursive calls.
Such as: @synchronized
- Non-recursive locks
The lock is not reentrant and cannot be acquired again until it is released.
Such as: NSLock, pthread_mutex
-
Spin lock
The thread repeatedly checks whether the lock variable is available. Because the thread keeps executing during this check, it is a busy wait. Once a spin lock is acquired, the thread holds it until it is explicitly released. Spin locks avoid the overhead of a context switch and of scheduling, so they are efficient when threads only block for very short periods. The main difference from a mutex is that a mutex goes to sleep and waits to be awakened, while a spin lock does not sleep and waits in a busy-wait state.
-
Condition lock
A condition variable: when some resource the thread needs is not available, the thread is locked and goes to sleep; when the resource is allocated, the condition lock is opened and the thread continues to run.
NSCondition
NSConditionLock
-
Recursive lock
The same thread can lock it N times without causing a deadlock.
NSRecursiveLock
pthread_mutex(recursive)
-
Semaphore
A more advanced synchronization mechanism; a mutex is the special case of a semaphore whose value is restricted to 0/1. Semaphores have a larger value space and can implement more complex synchronization than simple mutual exclusion between threads.
dispatch_semaphore
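A minimal dispatch_semaphore sketch; created with a value of 1 it behaves like a mutex:

```objc
dispatch_semaphore_t sem = dispatch_semaphore_create(1);

dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // decrement; blocks once the value drops below 0
// ... critical section ...
dispatch_semaphore_signal(sem);                      // increment; wakes a waiting thread if there is one
```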
-
Read-write lock
A read-write lock is a special kind of spin lock. It divides the visitors to a shared resource into readers and writers: readers only read the shared resource, while writers write to it. This improves concurrency over a plain spin lock, because on a multiprocessor system it allows multiple readers to access the shared resource at the same time, the maximum number of concurrent readers being the number of logical CPUs. A read-write lock can be held by a single writer or by multiple readers at the same time, but never by readers and a writer together. Preemption is also disabled while a read-write lock is held. If the lock currently has no readers and no writers, a writer can acquire it immediately; otherwise the writer must spin until there are no readers or writers left. If the lock has no writer, a reader can acquire it immediately; otherwise the reader must spin until the writer releases the lock.
Only one thread at a time can hold a read-write lock in write mode, but multiple threads can hold it in read mode simultaneously. Because of this, when a read-write lock is held in write mode, all threads attempting to lock it are blocked until it is unlocked. When it is held in read mode, threads attempting to lock it in read mode are granted access, but a thread that wants to lock it in write mode must wait until all readers have released the lock. In general, when a read-write lock is held in read mode and another thread attempts to lock it in write mode, the lock blocks all subsequent read-mode requests; this prevents the read-mode lock from being held indefinitely while pending write-mode requests are starved. Read-write locks suit data structures that are read far more often than they are written. A read-write lock is also called a shared-exclusive lock, because it is shared when locked in read mode and exclusive when locked in write mode.
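A minimal pthread read-write lock sketch (the shared data here is a placeholder):

```objc
#import <pthread.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

// Readers: several threads may hold the lock in read mode at once
pthread_rwlock_rdlock(&rwlock);
// ... read the shared data ...
pthread_rwlock_unlock(&rwlock);

// Writer: exclusive access, blocks until all readers and writers are gone
pthread_rwlock_wrlock(&rwlock);
// ... modify the shared data ...
pthread_rwlock_unlock(&rwlock);
```
-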
Conclusion
There are really only three basic kinds of lock: spin locks, mutexes, and read-write locks. The others, such as condition locks, recursive locks, and semaphores, are higher-level encapsulations built on top of these.
-
-
Lock usage and source exploration
Before exploring the usage and source code, take a look at a diagram showing the performance comparison of all locks
-
The preparatory work
Before exploring the source code, write a ticket-selling demo to see what happens when it runs without a lock:
```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.ticketCount = 20;
    [self lg_testSaleTicket];
}

- (void)lg_testSaleTicket {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            [self saleTicket];
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            [self saleTicket];
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 3; i++) {
            [self saleTicket];
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 10; i++) {
            [self saleTicket];
        }
    });
}

- (void)saleTicket {
    if (self.ticketCount > 0) {
        self.ticketCount--;
        [NSThread sleepForTimeInterval:0.1]; // simulate some work
        NSLog(@"%ld tickets left", (long)self.ticketCount);
    } else {
        NSLog(@"tickets are sold out");
    }
}
```
Running result:
-
@synchronized
To avoid this problem, we can add a lock so that only one thread accesses the data at a time. The specific usage is shown below. Looking at the result of running it again, the test no longer shows any of these problems.
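A minimal sketch of guarding the demo's saleTicket method with @synchronized (assuming the same ticketCount property as in the demo above):

```objc
- (void)saleTicket {
    @synchronized (self) {                       // only one thread at a time can enter this block
        if (self.ticketCount > 0) {
            self.ticketCount--;
            [NSThread sleepForTimeInterval:0.1]; // simulate some work
            NSLog(@"%ld tickets left", (long)self.ticketCount);
        } else {
            NSLog(@"tickets are sold out");
        }
    }
}
```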
Look at the source code implementation. Start with assembly debugging: the top layer turns out to call two functions, objc_sync_enter and objc_sync_exit (you can also verify this in the file generated by clang). Next, set symbolic breakpoints on objc_sync_enter and objc_sync_exit; they land in the libobjc.A.dylib library, so go into that library to find the source code.
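Roughly, a @synchronized block lowers to the following shape (a simplified illustration; the code clang actually generates uses a helper object, but the effect is the same):

```objc
// Inside some method, @synchronized(obj) { ... } boils down to:
objc_sync_enter(obj);       // acquire the recursive lock associated with obj
@try {
    // ... protected code ...
} @finally {
    objc_sync_exit(obj);    // released even if an exception is thrown
}
```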
First look at the SyncData structure:

```cpp
typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;
    DisguisedPtr<objc_object> object;
    int32_t threadCount;  // number of THREADS using this block
    recursive_mutex_t mutex;
} SyncData;
```
It is a singly linked list structure:
- nextData: points to the next SyncData
- object: an object pointer (objc_object, i.e. an OC object); as you might guess, it holds a pointer to the locked object obj
- threadCount: records the number of threads using this block of code
- mutex: a recursive mutex; after obtaining the SyncData object, you call its lock() method
Next, take a look at the id2data source code. The annotated source diagram explains the specifics.
To summarize the general process:
- If the thread cache shows that the current thread already holds the lock, just do lockCount + 1 and return; the objc_sync_exit method correspondingly subtracts one.
- If it cannot be found in the thread cache, look it up in the cache; if found, do lockCount + 1 and return; objc_sync_exit correspondingly subtracts one.
- If it is not found in the cache either, the current thread has not taken this lock yet. The cache is then traversed to see whether any other thread has used the lock: if so, threadCount is incremented by 1 and the entry is stored in the cache; if no other thread is using it, threadCount is set to 1 and the entry is stored in the cache.
Cache structure diagram:
-
OSSpinLock
OSSpinLock has been deprecated. Its replacement, os_unfair_lock, is what gets used under the hood instead, and a thread that fails to acquire an os_unfair_lock sleeps while it waits, rather than busy-waiting like a spin lock.
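A minimal os_unfair_lock sketch:

```objc
#import <os/lock.h>

os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;

os_unfair_lock_lock(&lock);
// ... critical section ...
os_unfair_lock_unlock(&lock);
```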
-
Atomic lock
atomic is a property attribute in OC. It comes with a spin lock (spinlock_t): the property is locked while its setter and getter methods run.
```cpp
static inline void reallySetProperty(id self, SEL _cmd, id newValue, ptrdiff_t offset,
                                     bool atomic, bool copy, bool mutableCopy)
{
    ...
    id *slot = (id*) ((char*)self + offset);
    ...
    if (!atomic) {
        oldValue = *slot;
        *slot = newValue;
    } else {
        // Lock around the swap for atomic properties
        spinlock_t& slotlock = PropertyLocks[slot];
        slotlock.lock();
        oldValue = *slot;
        *slot = newValue;
        slotlock.unlock();
    }
    ...
}
```
```cpp
id objc_getProperty(id self, SEL _cmd, ptrdiff_t offset, BOOL atomic) {
    if (offset == 0) {
        return object_getClass(self);
    }

    // Retain release world
    id *slot = (id*) ((char*)self + offset);
    if (!atomic) return *slot;

    // Atomic retain release world
    spinlock_t& slotlock = PropertyLocks[slot];
    slotlock.lock();
    id value = objc_retain(*slot);
    slotlock.unlock();

    // for performance, we (safely) issue the autorelease OUTSIDE of the spinlock.
    return objc_autoreleaseReturnValue(value);
}
```
You can see that spinlock_t is what locks atomic properties; although the type is still named like a spin lock, it no longer uses OSSpinLock underneath.
-
pthread_mutex
pthread_mutex is a mutex. When the lock is already held and another thread requests it, the requesting thread does not busy-wait; it blocks and goes to sleep instead.
```objc
#import <pthread.h>

// Declare a global mutex
pthread_mutex_t _lock;

// Initialize the mutex
pthread_mutex_init(&_lock, NULL);

// Lock
pthread_mutex_lock(&_lock);
// ... critical section ...
// Unlock
pthread_mutex_unlock(&_lock);

// Destroy the mutex when it is no longer needed
pthread_mutex_destroy(&_lock);
```
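For the recursive variant mentioned earlier, a minimal sketch of initializing the same _lock as a recursive mutex:

```objc
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); // make the mutex reentrant
pthread_mutex_init(&_lock, &attr);
pthread_mutexattr_destroy(&attr); // the attribute object can be destroyed after init
```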
-
NSLock
First use breakpoint debugging to locate where the NSLock source lives. It turns out that NSLock comes from the Foundation framework. Because the OC Foundation framework is closed source, you cannot read its implementation, but the Swift Foundation framework is open source, so we look at the Swift version instead; the syntax differs but the implementation logic is basically the same. From it you can see that NSLock is, at the bottom, a wrapper around pthread_mutex. NSLock is a mutex that blocks the thread until the task finishes, so when using NSLock be careful not to re-enter the lock, which makes threads wait on each other and causes a deadlock.
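A minimal NSLock usage sketch:

```objc
NSLock *lock = [[NSLock alloc] init];

[lock lock];
// ... critical section ...
[lock unlock];
```
-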
NSRecursiveLock
NSRecursiveLock is a recursive mutex: it can be acquired multiple times by the same thread without deadlocking. A thread that has acquired the lock starts executing the code the lock protects (without the lock having been released); if that code calls another function that also wants the lock, it can still acquire it and keep executing without deadlock. At the bottom it is also a wrapper around pthread_mutex, just like NSLock; its lock and unlock methods are essentially the same as NSLock's, the difference being that on init NSRecursiveLock sets the lock type to recursive.
Example:
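A minimal sketch of recursive locking inside a method (the block and the starting value are illustrative):

```objc
NSRecursiveLock *recursiveLock = [[NSRecursiveLock alloc] init];

static void (^testBlock)(int);
testBlock = ^(int value) {
    [recursiveLock lock];       // the same thread re-acquires the lock on each recursion level
    if (value > 0) {
        NSLog(@"value = %d", value);
        testBlock(value - 1);
    }
    [recursiveLock unlock];     // every lock is balanced by an unlock
};
testBlock(5);                   // no deadlock, because the lock is recursive
```
-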
NSCondition
NSCondition is also a mutex. The difference from NSLock is that NSLock automatically puts the thread to sleep when the lock cannot be acquired and automatically wakes it when the lock is released, whereas NSCondition lets us control the thread's state more flexibly, putting it to sleep or waking it up whenever necessary.
- The main API
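The main methods (lock and unlock come from the NSLocking protocol that NSCondition conforms to):

```objc
- (void)lock;                            // acquire the underlying mutex
- (void)unlock;                          // release it
- (void)wait;                            // atomically release the lock and sleep until signaled
- (BOOL)waitUntilDate:(NSDate *)limit;   // like wait, but with a timeout
- (void)signal;                          // wake one waiting thread
- (void)broadcast;                       // wake all waiting threads
```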
- Usage scenarios and examples
For example, production and consumption: goods can only be sold to consumers after they have been produced; if a consumer wants to buy when nothing has been produced yet, it waits until the producer has produced before purchasing. The example code is as follows:
```objc
- (void)td_testConditon {
    _testCondition = [[NSCondition alloc] init];
    // Create producers and consumers
    for (int i = 0; i < 50; i++) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self td_producer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self td_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self td_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self td_producer];
        });
    }
}

- (void)td_producer {
    [_testCondition lock];               // lock
    self.ticketCount = self.ticketCount + 1;
    NSLog(@"produced one, current count %zd", self.ticketCount);
    [_testCondition signal];             // signal a waiting consumer
    [_testCondition unlock];
}

- (void)td_consumer {
    [_testCondition lock];               // lock
    if (self.ticketCount == 0) {
        NSLog(@"waiting, current count %zd", self.ticketCount);
        [_testCondition wait];           // sleep until a producer signals
    }
    self.ticketCount -= 1;               // consume one
    NSLog(@"count left after consuming one: %zd", self.ticketCount);
    [_testCondition unlock];
}
```
- Source code exploration
At the bottom it is, like NSLock, a wrapper around pthread_mutex_t; the difference is simply that it also makes use of the pthread_cond_t condition variable.
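A minimal sketch of that underlying pthread condition-variable pattern (the ready flag is a placeholder predicate):

```objc
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int ready = 0; // placeholder predicate

// Waiting side
pthread_mutex_lock(&mutex);
while (!ready) {
    pthread_cond_wait(&cond, &mutex); // atomically unlocks, sleeps, re-locks on wake-up
}
pthread_mutex_unlock(&mutex);

// Signaling side
pthread_mutex_lock(&mutex);
ready = 1;
pthread_cond_signal(&cond);           // wake one waiter
pthread_mutex_unlock(&mutex);
```
-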
NSConditionLock
A condition lock, colloquially known as a conditional mutex.
With an NSConditionLock object, you can ensure that a thread acquires the lock only when a certain condition is met. Once it has acquired the lock and executed the critical section, the thread can release the lock and set the associated condition to a new value. The conditions themselves are arbitrary: you define them according to the needs of your application.
- Usage example
```objc
#pragma mark - NSConditionLock
- (void)td_testConditonLock {
    // Initialize with condition value 2
    NSConditionLock *conditionLock = [[NSConditionLock alloc] initWithCondition:2];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        [conditionLock lockWhenCondition:1]; // only locks when the internal condition == 1
        // -[NSConditionLock lockWhenCondition:beforeDate:] is the timeout variant
        NSLog(@"thread 1");
        [conditionLock unlockWithCondition:0];
    });

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        [conditionLock lockWhenCondition:2]; // the condition starts at 2, so this locks first
        [NSThread sleepForTimeInterval:0.1];
        NSLog(@"thread 2");
        [conditionLock unlockWithCondition:1]; // condition 2 -> 1, which lets thread 1 proceed
    });

    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [conditionLock lock];                // plain lock, no condition check
        NSLog(@"thread 3");
        [conditionLock unlock];
    });
}
```
- Sample code analysis
- Source code analysis
It's not hard to see from the source code that NSConditionLock is actually a lock that combines and encapsulates NSCondition and NSLock.
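To illustrate that relationship, a rough sketch (not the real Foundation source) of how a condition lock can be built on top of NSCondition:

```objc
#import <Foundation/Foundation.h>

@interface MyConditionLock : NSObject
- (instancetype)initWithCondition:(NSInteger)condition;
- (void)lockWhenCondition:(NSInteger)condition;
- (void)unlockWithCondition:(NSInteger)condition;
@end

@implementation MyConditionLock {
    NSCondition *_cond;
    NSInteger   _value;
}

- (instancetype)initWithCondition:(NSInteger)condition {
    if (self = [super init]) {
        _cond = [[NSCondition alloc] init];
        _value = condition;
    }
    return self;
}

- (void)lockWhenCondition:(NSInteger)condition {
    [_cond lock];
    while (_value != condition) {  // sleep until the stored value matches
        [_cond wait];
    }
    // returns while still holding the lock
}

- (void)unlockWithCondition:(NSInteger)condition {
    _value = condition;            // update the value, then wake the waiters
    [_cond broadcast];
    [_cond unlock];
}
@end
```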
-