Preface
In development we often run into situations where data is written while it is also being read, which can lead to corrupted data or even crashes. This is a resource-contention problem caused by thread unsafety, and in such cases we need to lock.
Classification of locks
There are broadly two kinds of locks in Objective-C: spin locks and mutexes.
Spin locks
- A spin lock protects a shared resource in multithreaded code. It differs from a mutex in that when it tries to acquire the lock it busy-waits, repeatedly checking whether the lock is available. While the previous thread's task is not finished (the lock is held), the next thread waits without sleeping, and as soon as the previous thread finishes, the next thread executes immediately. In a multi-CPU environment, using a spin lock instead of an ordinary mutex can improve performance when critical sections are short.
- Advantages: a spin lock does not put the caller to sleep, so it avoids costly operations such as thread scheduling and CPU time-slice switching. If the lock can be acquired in a very short time, a spin lock is far more efficient than a mutex.
- Disadvantages: a spin lock keeps occupying the CPU; it spins until it gets the lock, so if the lock cannot be acquired quickly, CPU efficiency drops.
- Summary: high efficiency, but it keeps occupying the CPU, and it cannot be used recursively.
Common spin locks:
- atomic
- OSSpinLock
- dispatch_semaphore_t
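As a quick illustration (not part of the original article), here is a minimal sketch of OSSpinLock usage. Note that OSSpinLock has been deprecated since iOS 10 because of a priority-inversion problem; it is shown only to make the spin-lock idea concrete.

#import <libkern/OSAtomic.h>

// Minimal OSSpinLock sketch (deprecated API, illustration only).
static OSSpinLock spinLock = OS_SPINLOCK_INIT;
static int sharedCounter = 0;

void incrementShared(void) {
    OSSpinLockLock(&spinLock);    // busy-waits (spins) until the lock is free
    sharedCounter++;              // keep the critical section short: that is where spin locks shine
    OSSpinLockUnlock(&spinLock);
}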
Mutexes
- When the previous thread's task is not finished (the lock is held), the next thread goes to sleep and waits, so the CPU can schedule other threads. When the previous thread finishes its task, the waiting thread is automatically woken up and executes.
- When synchronization is involved, one task must finish before the next one can execute. Synchronization = mutual exclusion + ordering.
Common mutexes:
- @synchronized
- NSLock
- pthread_mutex
- NSConditionLock
- NSCondition (condition lock)
- NSRecursiveLock
Performance comparison of locks
Performance diagrams of locks are easy to find online:
- Performance from high to low: OSSpinLock > dispatch_semaphore > pthread_mutex > NSLock > NSCondition > NSRecursiveLock > NSConditionLock > @synchronized
- The performance of @synchronized used to be very poor, but Apple optimized it later. To compare, take a set of locks, run each one through 100,000 lock/unlock cycles, and measure the elapsed time with CFAbsoluteTimeGetCurrent (a minimal benchmark sketch appears at the end of this section):
- iPhone 11 simulator:
- iPhone 11 real device:
Comparing the two, lock performance improves noticeably on the real device, and @synchronized in particular has clearly been optimized by Apple, presumably because it is used so often.
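A minimal sketch of such a timing loop, assuming NSLock as the example (the other locks follow the same pattern; the 100,000 iteration count matches the description above):

// Rough benchmark sketch: time 100,000 lock/unlock cycles with CFAbsoluteTimeGetCurrent.
static void benchmarkNSLock(void) {
    NSLock *lock = [[NSLock alloc] init];
    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    for (int i = 0; i < 100000; i++) {
        [lock lock];
        [lock unlock];
    }
    CFAbsoluteTime end = CFAbsoluteTimeGetCurrent();
    NSLog(@"NSLock: %f ms", (end - start) * 1000);
}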
The role of the lock
Here’s a simulation of selling tickets:
- (void)testGlobalQueueSell {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            [self sellingTickets];
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 8; i++) {
            [self sellingTickets];
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 10; i++) {
            [self sellingTickets];
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 12; i++) {
            [self sellingTickets];
        }
    });
}

- (void)sellingTickets {
    if (self.remainTickets > 0) {
        self.remainTickets--;
        sleep(1);
        NSLog(@"sold one, current remaining tickets: %d", self.remainTickets);
    } else {
        NSLog(@"Tickets are sold out");
    }
}

// call
self.remainTickets = 30;
[self testGlobalQueueSell];
The results are as follows:
Because multiple threads are selling tickets at the same time without knowing the real remaining count, more tickets are sold than actually exist and the data becomes inconsistent. A lock is needed to keep this thread-safe:
- (void)sellingTickets {
    @synchronized (self) {
        if (self.remainTickets > 0) {
            self.remainTickets--;
            sleep(1);
            NSLog(@"sold one, current remaining tickets: %d", self.remainTickets);
        } else {
            NSLog(@"Tickets are sold out");
        }
    }
}
The mutex @synchronized is chosen so that only one thread at a time can modify the ticket count, which produces the correct results:
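For comparison, the same critical section could also be protected with dispatch_semaphore_t (listed among the common locks above). A minimal sketch, assuming a hypothetical ticketSemaphore property created with dispatch_semaphore_create(1):

// Assumed setup elsewhere: self.ticketSemaphore = dispatch_semaphore_create(1);
- (void)sellingTicketsWithSemaphore {
    dispatch_semaphore_wait(self.ticketSemaphore, DISPATCH_TIME_FOREVER); // "lock": wait until the count is > 0
    if (self.remainTickets > 0) {
        self.remainTickets--;
        NSLog(@"sold one, current remaining tickets: %d", self.remainTickets);
    } else {
        NSLog(@"Tickets are sold out");
    }
    dispatch_semaphore_signal(self.ticketSemaphore); // "unlock": let the next waiting thread in
}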
So how does @synchronized ensure thread safety? Let's analyze it against the objc4-818.2 source.
The principle of @synchronized
Rewrite a file that uses @synchronized with clang (clang -rewrite-objc) and the block expands to roughly the following C++:
{
    id _rethrow = 0;
    id _sync_obj = (id)appDelegateClassName; // the argument passed in
    objc_sync_enter(_sync_obj);
    try {
        struct _SYNC_EXIT {
            _SYNC_EXIT(id arg) : sync_exit(arg) {}
            ~_SYNC_EXIT() { objc_sync_exit(sync_exit); }
            id sync_exit;
        } _sync_exit(_sync_obj);
    } catch (id e) { _rethrow = e; }
    {
        struct _FIN {
            _FIN(id reth) : rethrow(reth) {}
            ~_FIN() { if (rethrow) objc_exception_throw(rethrow); }
            id rethrow;
        } _fin_force_rethow(_rethrow);
    }
}
- In this rewritten code, @synchronized first calls objc_sync_enter and then declares a struct _SYNC_EXIT with a constructor and a destructor: the constructor stores the argument in sync_exit, and the destructor calls objc_sync_exit on it. So the focus of the research is objc_sync_enter and objc_sync_exit.
Structure analysis
- The source of objc_sync_enter and objc_sync_exit is as follows:

int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        ASSERT(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }

    return result;
}

int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, RELEASE);
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            bool okay = data->mutex.tryUnlock();
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }

    return result;
}
Both functions obtain data of type SyncData through id2data. objc_sync_enter calls data->mutex.lock() to lock, and objc_sync_exit calls data->mutex.tryUnlock() to unlock; the only difference between the two id2data calls is the usage flag (ACQUIRE vs. RELEASE). That brings us to SyncData.
- SyncData has the following structure:

typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;
    DisguisedPtr<objc_object> object;
    int32_t threadCount;  // number of THREADS using this block
    recursive_mutex_t mutex;
} SyncData;
- It is a structure in which:
- nextData is also of type SyncData, which shows that SyncData forms a singly linked list
- object is the associated object
- threadCount records the number of threads using this block, which shows it can be accessed from multiple threads
- mutex is a recursive lock
Conclusion: @synchronized is a recursive mutex that supports multiple threads.
So how is the SyncData obtained? Through id2data.
- The id2data source can be analyzed as follows:
- A spinlock_t is taken first, and the lock is held while the SyncData is created and assigned in memory, so that id2data itself is thread-safe.
- The linked list is then obtained through LIST_FOR_OBJ, which is defined as follows:

#define LIST_FOR_OBJ(obj) sDataLists[obj].data
static StripedMap<SyncList> sDataLists;

// StripedMap structure
class StripedMap {
#if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
    enum { StripeCount = 8 };
#else
    enum { StripeCount = 64 };
#endif
};

sDataLists is a global hash table that stores SyncList entries; it has a capacity of 8 on real devices and 64 on the simulator.

struct SyncList {
    SyncData *data;
    spinlock_t lock;

    constexpr SyncList() : data(nil), lock(fork_unsafe_lock) { }
};

Since SyncData is a singly linked list, the StripedMap is effectively a hash table whose buckets are linked lists of SyncData.
- The SyncData for the object is then looked up and handled according to the why argument (ACQUIRE or RELEASE).
- If no SyncData is found, one is created and assigned while the lock is held.
- Finally done is executed and the result is stored.
What this process looks like in detail needs to be examined case by case.
Case analysis
Since SyncData supports multiple threads and recursion, the analysis can be split into the following four cases. StripeCount is set to 1 under the simulator to increase the probability of hash collisions.
Single thread recursing on the same object
void oneThreadOneObject() {
    LGPerson *p = [[LGPerson alloc] init];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (p) {
            NSLog(@"0");
            @synchronized (p) {
                NSLog(@"1");
                @synchronized (p) {
                    NSLog(@"2");
                }
            }
        }
    });
}
- Set a breakpoint at each @synchronized and run inside objc4-818.2; once a breakpoint hits, add breakpoints inside id2data at the data check, the locking path, and the unlocking path.
- On the first entry, all 64 SyncLists in the StripedMap are empty, so neither data nor cached data can be found. Execution then reaches the creation path; because *listp is empty the loop is not entered, so posix_memalign allocates a new SyncData. The main work here is on result: allocate the memory, associate the object, set threadCount to 1, and create the recursive lock; the node is then inserted at the head of the list, its nextData (the first one) being empty, and *listp points to it. Once result is ready, it is saved into the current thread's storage. The first entry is now complete; continue past the breakpoint into the second @synchronized.
- On the second entry the StripedMap already has a value, and because it is the same object, listp is non-empty. The data stored on the previous entry is fetched through SYNC_DATA_DIRECT_KEY; since it is the same object, the object check passes, and because the why argument is ACQUIRE, lockCount++ is performed and the new lockCount is stored. Continue the breakpoint into the third lock.
- The third entry is the same as the second, ending in another lockCount++. Continuing past the breakpoint then leads into objc_sync_exit.
- Before unlocking, id2data runs again: data now has a value, and since the flag is RELEASE, a lockCount-- is performed and stored. When lockCount reaches 0, the thread-local storage is cleared and OSAtomicDecrement32Barrier atomically decrements threadCount by 1.
Single-threaded recursion is differentobject
Case as follows
void oneThreadAnyObject() {
    LGPerson *p1 = [[LGPerson alloc] init];
    LGPerson *p2 = [[LGPerson alloc] init];
    LGPerson *p3 = [[LGPerson alloc] init];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (p1) {
            NSLog(@"0");
            @synchronized (p2) {
                NSLog(@"1");
                @synchronized (p3) {
                    NSLog(@"2");
                }
            }
        }
    });
}
- The first @synchronized cannot find any data because the StripedMap is empty, so it goes through the creation process and points *listp at result by head insertion. The result and lockCount are then stored.
- On the second @synchronized, fastCacheOccupied is already YES (the fast per-thread slot is taken by the first object's data), so execution falls into the for-loop check; but the existing node's threadCount is 1, so it cannot be reused here, and creation runs again: the new SyncData is created, assigned, and inserted at the head, and then stored in the per-thread cache.
- The third @synchronized is similar: because the object is different, continue skips the non-matching entries, and since threadCount is 1 at this point there is nothing to reuse in the for loop, so it goes to creation again: create, assign, insert at the head, then store in the cache.
Multiple threads on the same object
void anyThreadOneObject() {
    LGPerson *p = [[LGPerson alloc] init];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (p) {
            NSLog(@"0");
            dispatch_async(dispatch_queue_create("wushuang.concurrent1", DISPATCH_QUEUE_CONCURRENT), ^{
                @synchronized (p) {
                    NSLog(@"1");
                    dispatch_async(dispatch_queue_create("wushuang.concurrent2", DISPATCH_QUEUE_CONCURRENT), ^{
                        @synchronized (p) {
                            NSLog(@"2");
                        }
                    });
                }
            });
        }
    });
}
- The first entry, as in the two previous cases, goes through the creation process, points *listp at result by head insertion, and finally stores the relevant data.
- The second @synchronized runs on a different thread, so it goes into the for loop: the SyncData associated with p is found and assigned to result, and OSAtomicIncrement32Barrier performs threadCount+1; finally done carries out the relevant storage.
- On the third @synchronized entry, threadCount+1 is applied again, just as on the second.
Multiple threads on different objects
void anyThreadAnyObject() {
    LGPerson *p1 = [[LGPerson alloc] init];
    LGPerson *p2 = [[LGPerson alloc] init];
    LGPerson *p3 = [[LGPerson alloc] init];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (p1) {
            NSLog(@"0");
            dispatch_async(dispatch_queue_create("wushuang.concurrent1", DISPATCH_QUEUE_CONCURRENT), ^{
                @synchronized (p2) {
                    NSLog(@"1");
                    dispatch_async(dispatch_queue_create("wushuang.concurrent2", DISPATCH_QUEUE_CONCURRENT), ^{
                        @synchronized (p3) {
                            NSLog(@"2");
                        }
                    });
                }
            });
        }
    });
}
- The first @synchronized finds no data, so it creates and stores a SyncData.
- The second @synchronized enters the for-loop check; because the object is different, no existing node matches, so a new SyncData is assigned, and finally the relevant storage is carried out.
- The third @synchronized behaves the same as the second.
Conclusion
The core flow of the locking process is as follows: id2data first checks the thread-local fast slot and the per-thread cache, then walks the bucket list in the global StripedMap; recursing on the same object from the same thread only bumps lockCount, locking the same object from another thread bumps threadCount, and a new object gets a freshly created SyncData inserted at the head of its bucket's list.
NSLock and NSRecursiveLock
NSLock
NSLock is an ordinary thread lock. Before using it to solve a thread-safety problem, look at the following case first:
- (void)threadDemo {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        static void (^testBlock)(int num);
        testBlock = ^(int num) {
            if (num > 0) {
                NSLog(@"current num value %d", num);
                testBlock(num - 1);
            }
        };
        testBlock(10);
    });
}
- Inside the global queue is a recursive block, which naturally prints in order.
- But putting the dispatch inside a for loop makes the printing interleave.
In this case a lock is needed to restore the ordering; let's try NSLock first.
- Wrapping the business code inside the block with NSLock gives the following result: because the lock has not been unlocked when the block calls itself again, it deadlocks and only one line is printed (a sketch of this attempt follows). Note that NSLock can be used across multiple threads, but it is not recursive.
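The article does not show the exact code, so the following is a reconstruction under the stated assumptions (the lock wraps the recursive block inside each dispatched block):

// Hypothetical sketch of the NSLock attempt: deadlocks after the first print.
- (void)threadDemoWithNSLock {
    NSLock *lock = [[NSLock alloc] init];
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            static void (^testBlock)(int num);
            testBlock = ^(int num) {
                [lock lock];                // a second acquisition from the same thread blocks here forever: NSLock is not reentrant
                if (num > 0) {
                    NSLog(@"current num value %d", num);
                    testBlock(num - 1);     // the recursion re-enters [lock lock] above and deadlocks
                }
                [lock unlock];
            };
            testBlock(10);
        });
    }
}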
NSRecursiveLock
- Now look at NSRecursiveLock; its usage is similar to NSLock.
- Put the recursive lock around the block dispatched by the queue function so that each block's recursion runs as a unit (see the sketch after this item). The result: the printing stays in order until the program crashes. Note that NSRecursiveLock supports recursion but does not support this kind of multithreaded use.
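Again the exact code is not shown in the article, so this is a reconstruction under the same assumptions, with NSLock swapped for NSRecursiveLock:

// Hypothetical sketch of the NSRecursiveLock attempt: the same thread may re-lock, so each block's
// recursion prints in order, but the article reports that this version eventually crashes when the
// for loop runs it from multiple threads at once.
- (void)threadDemoWithRecursiveLock {
    NSRecursiveLock *lock = [[NSRecursiveLock alloc] init];
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            static void (^testBlock)(int num);
            testBlock = ^(int num) {
                [lock lock];
                if (num > 0) {
                    NSLog(@"current num value %d", num);
                    testBlock(num - 1);
                }
                [lock unlock];
            };
            testBlock(10);
        });
    }
}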
- Earlier in the article we analyzed that @synchronized supports both multithreading and recursion, so try it here by replacing the lock with @synchronized (self): the printout shows all ten blocks executing their sequences in order, and @synchronized solves the problem perfectly.
NSCondition
- An NSCondition object acts as both a lock and a thread checker: the lock protects the data source while the condition-triggered task runs, and the thread checker decides, based on the condition, whether the thread should continue or block. NSCondition has four main APIs:
- [condition lock]: generally used when multiple threads access and modify the same data source at the same time, guaranteeing that the data source is accessed and modified by only one thread at a time; other threads must wait outside the lock until unlock is called
- [condition unlock]: used together with lock
- [condition wait]: makes the current thread wait
- [condition signal]: signals a waiting thread that it no longer needs to wait and can continue executing
- Case study (a producer-consumer model, e.g. selling milk tea):
- (void)testCondition {
    for (int i = 0; i < 50; i++) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self ws_product];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self ws_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self ws_product];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self ws_consumer];
        });
    }
}

- (void)ws_product {
    self.milkTeaCount += 1;
    NSLog(@"Produce one cup, available %zd cups", self.milkTeaCount);
}

- (void)ws_consumer {
    if (self.milkTeaCount == 0) {
        NSLog(@"Wait to make, count: %zd", self.milkTeaCount);
    }
    self.milkTeaCount -= 1;
    NSLog(@"Sold one cup, %zd left.", self.milkTeaCount);
}
The output is as follows:
There is clearly a problem with the data: several productions still report a count of 1, and repeated consumptions still report 0. How do we fix it? This is where a condition lock is needed: lock the produce and consume operations, and when the stock is sold out, wait for production before continuing to sell (a sketch follows).
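The article does not include the fixed code, but a minimal sketch using NSCondition could look like this (the condition property and the NSInteger milkTeaCount property are assumptions):

// Assumed property, created once: self.condition = [[NSCondition alloc] init];
- (void)ws_product {
    [self.condition lock];
    self.milkTeaCount += 1;
    NSLog(@"Produce one cup, available %zd cups", self.milkTeaCount);
    [self.condition signal];        // wake a consumer that is waiting for stock
    [self.condition unlock];
}

- (void)ws_consumer {
    [self.condition lock];
    while (self.milkTeaCount == 0) {
        NSLog(@"Sold out, waiting for production...");
        [self.condition wait];      // releases the lock and sleeps until signaled
    }
    self.milkTeaCount -= 1;
    NSLog(@"Sold one cup, %zd left.", self.milkTeaCount);
    [self.condition unlock];
}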
Foundation source code interpretation
- NSLock, NSRecursiveLock, and NSCondition are all wrappers around pthread, but how exactly do they use it? lock and unlock belong to the NSLocking protocol, and these classes live in the Foundation framework, which is not open source. So where can we look? In Swift's Foundation: swift-corelibs-foundation. Open the source, find the NSLock.swift file, and analyze the locks one by one.
- NSLock analysis: find NSLock in the source, then look at its initializer and its lock/unlock methods:
- init: calls pthread_mutex_init(mutex, nil) to initialize
- lock: calls pthread_mutex_lock(mutex) to lock
- unlock: calls pthread_mutex_unlock(mutex) to unlock
- NSRecursiveLock analysis:
- init:

withUnsafeMutablePointer(to: &attrib) { attrs in
    pthread_mutexattr_init(attrs)
    pthread_mutexattr_settype(attrs, Int32(PTHREAD_MUTEX_RECURSIVE))
    pthread_mutex_init(mutex, attrs)
}

- lock: pthread_mutex_lock(mutex)
- unlock: pthread_mutex_unlock(mutex)
The only difference is that NSRecursiveLock initializes its mutex with an attribute of type PTHREAD_MUTEX_RECURSIVE; that is why NSRecursiveLock can be locked recursively.
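To mirror what NSRecursiveLock does under the hood, a recursive pthread_mutex can be configured directly in Objective-C; a minimal sketch (names are placeholders):

#import <pthread.h>

// A recursive pthread_mutex, matching the PTHREAD_MUTEX_RECURSIVE setup shown above.
static pthread_mutex_t recursiveMutex;

static void setupRecursiveMutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&recursiveMutex, &attr);
    pthread_mutexattr_destroy(&attr);
}

static void recurse(int n) {
    pthread_mutex_lock(&recursiveMutex);   // the same thread can lock again without deadlocking
    if (n > 0) {
        recurse(n - 1);
    }
    pthread_mutex_unlock(&recursiveMutex);
}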
NSConditionLock
- NSConditionLock is also a condition lock: once one thread has acquired it, other threads must wait. It has the following APIs:
- [conditionLock lock]: conditionLock tries to acquire the lock. If no other thread holds it (the internal condition is not checked), the following code can execute; if another thread already holds the lock (conditionally or not), it waits until that thread unlocks.
- [conditionLock unlock]: releases the lock.
- [conditionLock lockWhenCondition:A]: if no other thread holds the lock but the internal condition is not equal to A, it still cannot acquire the lock and keeps waiting. If the internal condition equals A and no other thread holds the lock, it enters the code section and takes the lock; any other thread then waits until its code completes and it unlocks.
- [conditionLock unlockWithCondition:A]: releases the lock and sets the internal condition to A.
- return = [xxx lockWhenCondition:A beforeDate:t]: if the lock is not acquired by time t, the thread is no longer blocked. Note that the return value is then NO, and the state of the lock is not changed; the point of this method is to be able to handle both outcomes.
- condition: the so-called condition is simply an integer; conditions are compared internally as integers.
- Let's look at a case:
- (void)testConditionLock {
    NSConditionLock *lock = [[NSConditionLock alloc] initWithCondition:2];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        [lock lockWhenCondition:1];
        NSLog(@"1");
        [lock unlockWithCondition:0];
    });

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        sleep(1);
        [lock lockWhenCondition:2];
        NSLog(@"2");
        [lock unlockWithCondition:1];
    });

    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [lock lock];
        NSLog(@"3");
        [lock unlock];
    });
}
- The print order is most likely 3 -> 2 -> 1: 3 uses a plain lock with no condition, the lock was created with condition 2 so only the block waiting on 2 can proceed, and after [lock unlockWithCondition:1] the block waiting on 1 executes. This raises four questions:
- NSConditionLock and NSCondition look alike; what is the difference?
- What is the 2 passed at creation?
- What does lockWhenCondition: do?
- What does unlockWithCondition: do?
- The creation code initWithCondition: corresponds to the following in Swift: when the object is created, a member variable _cond of type NSCondition is created, and the condition argument is assigned to another member variable _value.
- lockWhenCondition: is implemented as follows:

open func lock() {
    let _ = lock(before: Date.distantFuture)
}

open func lock(whenCondition condition: Int) {
    let _ = lock(whenCondition: condition, before: Date.distantFuture)
}

open func lock(whenCondition condition: Int, before limit: Date) -> Bool {
    _cond.lock()                                    // lock the internal NSCondition
    while _thread != nil || _value != condition {   // another thread owns the lock, or the condition does not match
        if !_cond.wait(until: limit) {              // timed out: unlock and report failure
            _cond.unlock()
            return false
        }
    }
#if os(Windows)
    _thread = GetCurrentThread()
#else
    _thread = pthread_self()                        // record the current thread as the owner
#endif
    _cond.unlock()                                  // release the internal lock before returning
    return true
}
- In Swift, locking eventually calls lock(whenCondition:before:). The method first locks the internal NSCondition, then loops: as long as _thread has a value (another thread owns the lock) or _value does not equal the requested condition, it waits. If the wait times out, it unlocks and returns false, meaning locking failed; if there is no timeout, the while loop keeps checking until the condition changes and the loop exits. After acquiring the lock, _thread is assigned pthread_self(), i.e. the current thread marks itself as the owner, and the method unlocks the internal condition before returning true.
- unlockWithCondition: is implemented as follows:

open func unlock(withCondition condition: Int) {
    _cond.lock()
#if os(Windows)
    _thread = INVALID_HANDLE_VALUE
#else
    _thread = nil            // clear the owning thread
#endif
    _value = condition       // update _value to the new condition
    _cond.broadcast()        // wake up all threads waiting on the condition
    _cond.unlock()
}
- When unlocking, the internal lock is taken first, _thread is set to nil, and _value is changed to the newly passed condition; the method then unlocks before completing. But in the middle _cond calls the broadcast method; what does that do?

open func broadcast() {
#if os(Windows)
    WakeAllConditionVariable(cond)
#else
    pthread_cond_broadcast(cond)
#endif
}

broadcast ultimately calls pthread_cond_broadcast, whose job is to wake up all threads blocked on the condition variable. This also confirms that a mutex puts a blocked thread to sleep and wakes it up when the lock is released.
Read/write lock
- A read/write lock is a special kind of mutex. It divides accessors of a shared resource into readers and writers: a reader only reads the shared resource, while a writer writes to it. Compared with a spin lock, this kind of lock improves concurrency, because on a multiprocessor system it allows multiple readers to access the shared resource at the same time; the maximum number of concurrent readers is, in practice, the number of logical CPUs. A writer is exclusive: at any moment a read/write lock can have either one writer or multiple readers (depending on the number of CPUs), but never both, and preemption is also disabled while the lock is held. If the lock currently has no readers and no writers, a writer can acquire it immediately; otherwise it must spin until there is no writer or reader left. If the lock has no writer, a reader can acquire it immediately; otherwise the reader must spin until the writer releases the lock.
- Read/write locks suit data structures that are read far more often than they are written. Because read-mode locking is shared while write-mode locking is exclusive, a read/write lock is also called a shared-exclusive lock.
Read/write locks can be implemented with pthread_rwlock_t, or simulated with the barrier function dispatch_barrier_async:
- The pthread_rwlock_t API:
- pthread_rwlock_init: initializes the lock
- pthread_rwlock_rdlock: read lock
- pthread_rwlock_tryrdlock: read try-lock
- pthread_rwlock_wrlock: write lock
- pthread_rwlock_trywrlock: write try-lock
- pthread_rwlock_unlock: unlock
- pthread_rwlock_destroy: destroys the lock
Usage:
#import <pthread.h>

@property (nonatomic, assign) pthread_rwlock_t rwlock;
@property (nonatomic, strong) NSMutableDictionary *dic;

- (void)testRWLock {
    pthread_rwlock_init(&_rwlock, NULL);
    self.dic = [NSMutableDictionary dictionary];
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            if (i % 2 == 0) {
                [self ws_writeName:[NSString stringWithFormat:@"name_%d", i]];
            } else {
                [self ws_readName];
            }
        });
    }
}

// read
- (void)ws_readName {
    pthread_rwlock_rdlock(&_rwlock);   // read lock
    NSString *name = [self.dic valueForKey:@"name"];
    NSLog(@"read name ___ : %@ 🎈", name);
    pthread_rwlock_unlock(&_rwlock);   // read unlock
}

// write
- (void)ws_writeName:(NSString *)name {
    pthread_rwlock_wrlock(&_rwlock);   // write lock
    [self.dic setValue:name forKey:@"name"];
    NSLog(@"write name ___ : %@ 🎉", name);
    pthread_rwlock_unlock(&_rwlock);   // write unlock
}
The output is as follows:
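Before moving on to the barrier approach, note the try variants from the API list above: they fail immediately instead of blocking. A minimal sketch (ws_tryReadName is a hypothetical helper reusing the _rwlock and dic from the example above):

// pthread_rwlock_tryrdlock returns 0 when the read lock was acquired,
// and a non-zero error when it cannot be taken without blocking.
- (BOOL)ws_tryReadName {
    if (pthread_rwlock_tryrdlock(&_rwlock) == 0) {
        NSString *name = [self.dic valueForKey:@"name"];
        NSLog(@"try read name ___ : %@", name);
        pthread_rwlock_unlock(&_rwlock);
        return YES;
    }
    return NO; // a writer currently holds the lock; the caller can retry later
}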
Now let's look at the barrier scheme.
- dispatch_barrier_async is used as follows:
@property (nonatomic, strong) NSMutableDictionary *dic;
@property (nonatomic, strong) dispatch_queue_t myQueue;

- (void)testRWLock {
    self.dic = [NSMutableDictionary dictionary];
    // Note: barrier blocks only act as barriers on a custom concurrent queue, not on a global queue.
    self.myQueue = dispatch_queue_create("com.current.thread", DISPATCH_QUEUE_CONCURRENT);
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            if (i % 2 == 0) {
                [self ws_barrier_writeName:[NSString stringWithFormat:@"name_%d", i]];
            } else {
                [self ws_barrier_readName];
            }
        });
    }
}

// read: plain sync read on the concurrent queue, so reads can run in parallel
- (void)ws_barrier_readName {
    dispatch_sync(self.myQueue, ^{
        NSString *name = [self.dic valueForKey:@"name"];
        NSLog(@"read name ___ : %@ 🎈", name);
    });
}

// write: the barrier block waits for earlier blocks and holds back later ones, so writes are exclusive
- (void)ws_barrier_writeName:(NSString *)name {
    dispatch_barrier_async(self.myQueue, ^{
        [self.dic setValue:name forKey:@"name"];
        NSLog(@"write name ___ : %@ 🎉", name);
    });
}
The following output is displayed: