Preface
Because third-party libraries encapsulate so much, locks are rarely used directly in iOS development. I was recently asked about concurrency locks in an interview and realized I didn't know them well enough, so I decided to sort out what I know about locks in iOS and help fill in the gaps.
Directory
Part One: What is a lock
Part Two: Classification of locks
Part Three: Performance comparison
Part Four: Common deadlocks
Part Five: Summary (with attached demo)
Body
Part One: What is a lock
In concurrent-programming research over the past few decades, locks have always played the villain, accused of causing deadlocks, lock convoying (multiple threads of the same priority repeatedly compete for the same lock, so large numbers of threads are woken only to fail to acquire it and be scheduled away again, which badly hurts system performance), starvation, unfairness, data races, and many other concurrency sins. Interestingly, the real heavy lifting in shared-memory parallel software is still done by, you guessed it, locking.
In computer science, a lock is a synchronization mechanism used to restrict access to resources in a multithreaded environment. You can think of it as a strategy to eliminate concurrency.
if (lock == 0) {
    lock = myPID;
}
The code above does not make the locking step atomic, so it can be executed by multiple tasks at the same time. Several tasks may all observe that the lock is free, and two or more will then try to set it, each unaware that the others are doing the same. That is when things go wrong. Compare the following code (Swift):
class Account {
    private(set) var val: Int = 0

    // add
    public func add(x: Int) {
        objc_sync_enter(self)
        defer { objc_sync_exit(self) }
        val += x
    }

    // minus
    public func minus(x: Int) {
        objc_sync_enter(self)
        defer { objc_sync_exit(self) }
        val -= x
    }
}
This prevents multiple tasks from modifying val at the same time.
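Under the hood, real locks rely on an atomic test-and-set or compare-and-swap so that the "check free, then take" step of the naive code above cannot be split between tasks. A minimal C sketch of the idea, with illustrative names (try_acquire and release are not from any library):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int lock_owner = 0;  // 0 means "free"

// Atomically acquire the lock for `pid`; returns 1 on success, 0 if already taken.
int try_acquire(int pid) {
    int expected = 0;
    // The compare-exchange either swaps 0 -> pid atomically or fails;
    // two tasks can no longer both observe "free" and both take the lock.
    return atomic_compare_exchange_strong(&lock_owner, &expected, pid);
}

void release(int pid) {
    int expected = pid;
    atomic_compare_exchange_strong(&lock_owner, &expected, 0);
}
```

The key difference from the broken `if (lock == 0) lock = myPID;` is that the check and the write happen as one indivisible operation.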
Part Two: Classification of locks
Locks can be classified into different categories according to their properties.
Each task that accesses a shared resource first consults the lock's state to decide whether access is allowed. If access is granted, the lock's state changes to locked, so that no other thread can access the resource; when access finishes, the lock is released and other tasks may access the resource. Some systems provide mandatory locks: if an unauthorized access touches locked data, an exception is raised.
In iOS, locks are divided into mutex locks, recursive locks, semaphores, condition locks, spin locks, read/write locks (a special kind of spin lock), and distributed locks.
For database locks, the classification looks like this:

Basis | Types |
---|---|
By lock granularity | Table-level, row-level, page-level locks |
By lock level | Shared lock, exclusive lock |
By how the lock is acquired | Automatic lock, explicit lock |
By how the lock is used | Optimistic lock, pessimistic lock |
By operation | DML lock, DDL lock |
These are not covered in detail here; interested readers can consult the relevant Wikipedia entries.
1. Mutex
In programming, the mutex is introduced to ensure the integrity of operations on shared data: each shared object corresponds to a tag, the mutex, which guarantees that only one thread can access the object at any moment.
1.1 @synchronized
- @synchronized takes one argument, an object that serves as the lock token; internally it behaves like a recursive mutex associated with that object.
// Used to prevent multiple threads from accessing the property at once
- (void)setTestInt:(NSInteger)testInt {
    @synchronized (self) {
        _testInt = testInt;
    }
}
1.2 NSLock
- Block and macro definitions
// Define a block type
typedef void(^MMBlock)(void);

// Define a macro that runs the block repeatedly on a global queue
#define MM_GLOBAL_QUEUE(block) \
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ \
    while (1) { \
        block(); \
    } \
})
- The test code
NSLock *lock = [[NSLock alloc] init];
MMBlock block = ^{
[lock lock];
NSLog(@"Perform operation");
sleep(1);
[lock unlock];
};
MM_GLOBAL_QUEUE(block);
1.3 pthread
Besides mutexes, pthreads can also create recursive locks, read/write locks, and once locks; their usage is covered later. If you want to learn pthreads in depth, please consult the relevant documentation and study them separately.
- Static initialization: pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
- Dynamic initialization: the pthread_mutex_init() function creates a mutex at run time, with the attr parameter specifying the attributes of the new mutex; if attr is NULL, the default attributes are used, and 0 is returned on successful initialization. This method can initialize both normal and recursive locks (like NSRecursiveLock), though it is somewhat more involved.
- Mutex types: PTHREAD_MUTEX_NORMAL does not detect deadlocks; PTHREAD_MUTEX_ERRORCHECK provides error checking; PTHREAD_MUTEX_RECURSIVE allows the same thread to lock recursively; PTHREAD_MUTEX_DEFAULT maps to PTHREAD_MUTEX_NORMAL.
- Here is a helper copied from YYKit:
#import <pthread.h>
//YYKit
static inline void pthread_mutex_init_recursive(pthread_mutex_t *mutex, bool recursive) {
#define YYMUTEX_ASSERT_ON_ERROR(x_) do { \
    __unused volatile int res = (x_); \
    assert(res == 0); \
} while (0)
    assert(mutex != NULL);
    if (!recursive) {
        // Ordinary lock
        YYMUTEX_ASSERT_ON_ERROR(pthread_mutex_init(mutex, NULL));
    } else {
        // Recursive lock
        pthread_mutexattr_t attr;
        YYMUTEX_ASSERT_ON_ERROR(pthread_mutexattr_init(&attr));
        YYMUTEX_ASSERT_ON_ERROR(pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE));
        YYMUTEX_ASSERT_ON_ERROR(pthread_mutex_init(mutex, &attr));
        YYMUTEX_ASSERT_ON_ERROR(pthread_mutexattr_destroy(&attr));
    }
#undef YYMUTEX_ASSERT_ON_ERROR
}
- The test code
__block pthread_mutex_t lock;
pthread_mutex_init_recursive(&lock,false);
MMBlock block0=^{
NSLog(@"Thread 0: Lock");
pthread_mutex_lock(&lock);
NSLog(@"Thread 0: Sleep for 1 second");
sleep(1);
pthread_mutex_unlock(&lock);
NSLog(@"Thread 0: Unlocked");
};
MM_GLOBAL_QUEUE(block0);
MMBlock block1=^(){
NSLog(@"Thread 1: Lock");
pthread_mutex_lock(&lock);
NSLog(@"Thread 1: Sleep for 2 seconds");
sleep(2);
pthread_mutex_unlock(&lock);
NSLog(@"Thread 1: Unlock");
};
MM_GLOBAL_QUEUE(block1);
MMBlock block2=^{
NSLog(@"Thread 2: Lock");
pthread_mutex_lock(&lock);
NSLog(@"Thread 2: Sleep for 3 seconds");
sleep(3);
pthread_mutex_unlock(&lock);
NSLog(@"Thread 2: Unlocked");
};
MM_GLOBAL_QUEUE(block2);
- Output result (one possible interleaving):
Thread 2: Lock
Thread 0: Lock
Thread 1: Lock
Thread 2: Sleep for 3 seconds
Thread 2: Unlocked
Thread 0: Sleep for 1 second
Thread 2: Lock
Thread 0: Unlocked
Thread 1: Sleep for 2 seconds
Thread 0: Lock
...
2. Recursive locks
The same thread can acquire the lock multiple times without causing a deadlock.
An example:
NSLock *lock = [[NSLock alloc] init];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
static void (^RecursiveMethod)(int);
RecursiveMethod = ^(int value) {
[lock lock];
if (value > 0) {
NSLog(@"value = %d", value);
sleep(2);
RecursiveMethod(value - 1);
}
[lock unlock];
};
RecursiveMethod(5);
});
This code is a typical deadlock case. Inside the thread, RecursiveMethod is called recursively, so every entry into the block locks again. From the second call on, the lock is already held and not yet released, so the thread waits for an unlock that can never come: a deadlock, and the thread blocks. The console outputs the following:
value = 5
*** -[NSLock lock]: deadlock ('(null)')
*** Break on _NSLockError() to debug.
2.1 NSRecursiveLock
- The implementation code
NSRecursiveLock *lock = [[NSRecursiveLock alloc] init];
MM_GLOBAL_QUEUE(^{
static void (^RecursiveBlock)(int);
RecursiveBlock = ^(int value) {
[lock lock];
if (value > 0) {
NSLog(@"Number of lock layers %d", value);
sleep(1);
RecursiveBlock(--value);
}
[lock unlock];
};
RecursiveBlock(3);
});
- Output (you can see that no deadlock occurred):
Number of lock layers 3
Number of lock layers 2
Number of lock layers 1
Number of lock layers 3
Number of lock layers 2
Number of lock layers 1
Number of lock layers 3
Number of lock layers 2
...
2.2 pthread
- Code implementation
__block pthread_mutex_t lock;
// The second argument is true, which creates a recursive lock
pthread_mutex_init_recursive(&lock, true);
MM_GLOBAL_QUEUE(^{
static void (^RecursiveBlock)(int);
RecursiveBlock = ^(int value) {
pthread_mutex_lock(&lock);
if (value > 0) {
NSLog(@"Number of lock layers %d", value);
sleep(1);
RecursiveBlock(--value);
}
pthread_mutex_unlock(&lock);
};
RecursiveBlock(3);
});
- Output (again, no deadlock occurred):
Number of lock layers 3
Number of lock layers 2
Number of lock layers 1
Number of lock layers 3
Number of lock layers 2
Number of lock layers 1
Number of lock layers 3
Number of lock layers 2
...
3. Semaphore
A semaphore is a facility used in multithreaded environments to ensure that two or more critical sections of code are not executed concurrently. Before entering a critical section, a thread must acquire the semaphore; once the critical section is complete, it must release it. Any other thread wanting to enter the critical section must wait until the semaphore is released.
3.1 dispatch_semaphore_t
- Synchronous implementation
// dispatch_semaphore_signal: count +1
// dispatch_semaphore_wait: count -1 (blocks while the count is 0)
__block dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
MM_GLOBAL_QUEUE(^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"Here is a simple usage; you can build your own producer/consumer on top of it.");
    sleep(1);
    dispatch_semaphore_signal(semaphore);
});
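dispatch_semaphore is Apple-specific, but the same counting-semaphore pattern exists in portable C as POSIX sem_t. A sketch with illustrative names (note: unnamed sem_init is deprecated on macOS, where dispatch_semaphore or named semaphores are used instead):

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t sem;
static int shared_counter = 0;

// Each worker enters the critical section with sem_wait (count -1)
// and leaves it with sem_post (count +1). With an initial count of 1
// the semaphore behaves like a mutex.
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++) {
        sem_wait(&sem);
        shared_counter++;   // protected increment: no lost updates
        sem_post(&sem);
    }
    return NULL;
}

int run_semaphore_demo(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);   // initial count 1 -> binary semaphore
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&sem);
    return shared_counter;  // 20000 when every increment is protected
}
```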
3.2 pthread
- The test code
__block pthread_mutex_t mutex=PTHREAD_MUTEX_INITIALIZER;
__block pthread_cond_t cond=PTHREAD_COND_INITIALIZER;
MM_GLOBAL_QUEUE(^{
//NSLog(@"Thread 0: Lock");
pthread_mutex_lock(&mutex);
pthread_cond_wait(&cond, &mutex);
NSLog(@"Thread 0: wait");
pthread_mutex_unlock(&mutex);
//NSLog(@"Thread 0: Unlocked");
});
MM_GLOBAL_QUEUE(^{
    //NSLog(@"Thread 1: Lock");
    sleep(3);
    pthread_mutex_lock(&mutex);
    NSLog(@"Thread 1: Signal");
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
    //NSLog(@"Thread 1: Unlocked");
});
4. Condition locks
4.1 NSCondition
An NSCondition object actually acts as both a lock and a thread checker: the lock protects the data source and runs the task triggered when the condition is met; the thread checker decides, based on the condition, whether a thread should continue running, i.e. whether it blocks.
- NSCondition also implements the NSLocking protocol, so it has that protocol's lock and unlock methods and can be used just like NSLock to solve thread synchronization problems; the usage is identical.
- (void)getImageName:(NSMutableArray *)imageNames {
    NSCondition *lock = [[NSCondition alloc] init];
    NSString *imageName;
    [lock lock];
    if (imageNames.count > 0) {
        imageName = [imageNames lastObject];
        [imageNames removeObject:imageName];
    }
    [lock unlock];
}
- NSCondition also offers a more advanced usage: wait and signal, similar to a condition semaphore. Say we want to watch the count of the imageNames array and clear it whenever the count rises above zero. The idea: a consumer thread clears the array when the count is greater than 0, otherwise it waits; a producer calls signal after adding to imageNames, waking the waiting thread so it resumes and performs the clear.
- Unlike NSLock or @synchronized, NSCondition lets each thread wait separately without blocking other threads from entering the critical section after locking. This is powerful, but precisely because of it, a wait alone cannot prevent races on the shared resource. Suppose we require m never to fall below 0, with m == 0: thread A checks m > 0, finds it false, and waits. Thread B increments m to 1 and signals. A wakes, but before it runs, thread C decrements m back to 0. A then continues on its stale check and decrements m to -1, violating the requirement. This is why the predicate must be re-checked in a loop after every wake-up.
- In my array-deletion experiments, the add/delete race did not appear every run; it showed up after roughly 3 to 4 attempts. Simply using lock and unlock showed no problem.
- (void)executeNSCondition {
    NSCondition *lock = [[NSCondition alloc] init];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        for (NSUInteger i = 0; i < 3; i++) {
            sleep(2);
            if (i == 2) {
                [lock lock];
                [lock broadcast];
                [lock unlock];
            }
        }
    });
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(1);
        [self threadMethodOfNSCondition:lock];
    });
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(1);
        [self threadMethodOfNSCondition:lock];
    });
}

- (void)threadMethodOfNSCondition:(NSCondition *)lock {
    [lock lock];
    [lock wait];
    [lock unlock];
}
4.2 NSConditionLock
- The lock is conditional: plain lock acquires it directly, without checking the condition.
- unlock does not clear the condition; whichever lockWhenCondition: matches the current condition can then proceed.
- unlockWithCondition: unlocks and sets the condition value (there is only one condition at a time; setting it again overwrites the previous one).
- lockWhenCondition: acquires the lock only when the condition matches the given value.
- NSConditionLock also implements the NSLocking protocol, but its performance measured very low in testing.
- Like NSCondition, NSConditionLock can coordinate waiting between threads, and it is thread-safe.
- (void)executeNSConditionLock {
    NSConditionLock *lock = [[NSConditionLock alloc] init];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        for (NSUInteger i = 0; i < 3; i++) {
            sleep(2);
            if (i == 2) {
                [lock lock];
                [lock unlockWithCondition:i];
            }
        }
    });
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(1);
        [self threadMethodOfNSConditionLock:lock];
    });
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(1);
        [self threadMethodOfNSConditionLock:lock];
    });
}

- (void)threadMethodOfNSConditionLock:(NSConditionLock *)lock {
    [lock lockWhenCondition:2];
    [lock unlock];
}
4.3 POSIX Conditions
- POSIX condition locks require a mutex and a condition variable together. They may look unrelated, but at run time they cooperate: a thread is woken by the combination of the mutex and the condition signal.
- After the condition and mutex are initialized, the waiting thread enters a loop while ready_to_go is false and suspends inside it, until another thread sets ready_to_go to true and sends a signal to wake it.
- The test code:
pthread_mutex_t mutex;
pthread_cond_t condition;
Boolean ready_to_go = true;
void MyCondInitFunction()
{
pthread_mutex_init(&mutex, NULL);
pthread_cond_init(&condition, NULL);
}
void MyWaitOnConditionFunction()
{
// Lock the mutex.
pthread_mutex_lock(&mutex);
// If the predicate is already set, then the while loop is bypassed;
// otherwise, the thread sleeps until the predicate is set.
while(ready_to_go == false)
{
pthread_cond_wait(&condition, &mutex);
}
// Do work. (The mutex should stay locked.)
// Reset the predicate and release the mutex.
ready_to_go = false;
pthread_mutex_unlock(&mutex);
}
void SignalThreadUsingCondition()
{
// At this point, there should be work for the other thread to do.
pthread_mutex_lock(&mutex);
ready_to_go = true;
// Signal the other thread to begin work.
pthread_cond_signal(&condition);
pthread_mutex_unlock(&mutex);
}
5. Distributed locks
Distributed locking is a way to control synchronized access to shared resources between distributed systems, which often need to coordinate their actions. If different systems, or different hosts of one system, share one or a group of resources, access must be mutually exclusive to prevent interference and ensure consistency; this is where distributed locks come in.
5.1 NSDistributedLock
- Handles mutual exclusion between multiple processes or programs.
- If the process holding the lock crashes before unlocking, the lock is not released; it can then be forced open with breakLock.
- It is rarely used on iOS (it is primarily a macOS facility), so it is not examined in detail here.
6. Read/write locks
A read/write lock is actually a special kind of spin lock. It divides visitors to a shared resource into readers and writers: readers only read the resource, writers write to it. This improves concurrency over a plain spin lock, because on a multiprocessor system it allows multiple readers to access the resource simultaneously (up to the number of logical CPUs). Writers are exclusive: at any moment, a read/write lock can be held by one writer or by multiple readers, but never by readers and a writer at the same time.
6.1 dispatch_barrier_async/dispatch_barrier_sync
- Let's start with a requirement: suppose we have six tasks. We want to insert a barrier task that does not run until tasks 1, 2, and 3 (executing concurrently) have all finished, and tasks 4, 5, and 6 are allowed to run concurrently only after the barrier task completes. The general idea is shown below.
- Straight to the code:
- (void)rwLockOfBarrier {
dispatch_queue_t queue = dispatch_queue_create("thread", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
NSLog(@"test1");
});
dispatch_async(queue, ^{
NSLog(@"test2");
});
dispatch_async(queue, ^{
NSLog(@"test3");
});
dispatch_barrier_sync(queue, ^{
for (int i = 0; i <= 500000000; i++) {
if (5000 == i) {
NSLog(@"point1");
}else if (6000 == i) {
NSLog(@"point2");
}else if (7000 == i) {
NSLog(@"point3");
}
}
NSLog(@"barrier");
});
NSLog(@"aaa");
dispatch_async(queue, ^{
NSLog(@"test4");
});
dispatch_async(queue, ^{
NSLog(@"test5");
});
dispatch_async(queue, ^{
NSLog(@"test6");
});
}
- What they have in common: 1. both wait for tasks queued before them to finish; 2. both finish their own task before tasks queued after them run.
- The difference: dispatch_barrier_sync blocks the current thread, so the statements after it are not even submitted to the queue until its own task has finished; dispatch_barrier_async returns immediately, so later tasks are submitted to the queue right away, but they still wait for the barrier task to finish before executing.
6.2 pthread
- Initialization mirrors the mutex: static PTHREAD_RWLOCK_INITIALIZER, dynamic pthread_rwlock_init(), and pthread_rwlock_destroy() to destroy the lock.
#import <pthread.h>

__block pthread_rwlock_t rwlock;
pthread_rwlock_init(&rwlock, NULL);
// Read
MM_GLOBAL_QUEUE(^{
    //NSLog(@"Thread 0: Sleep for 1 second");
    //sleep(1);
    NSLog(@"Thread 0: Lock");
    pthread_rwlock_rdlock(&rwlock);
    NSLog(@"Thread 0: Read");
    pthread_rwlock_unlock(&rwlock);
    NSLog(@"Thread 0: Unlocked");
});
// Write
MM_GLOBAL_QUEUE(^{
    //NSLog(@"Thread 1: Sleep for 3 seconds");
    sleep(3);
    NSLog(@"Thread 1: Lock");
    pthread_rwlock_wrlock(&rwlock);
    NSLog(@"Thread 1: Write");
    pthread_rwlock_unlock(&rwlock);
    NSLog(@"Thread 1: Unlocked");
});
7. Spin locks
What is a spin lock? It is a locking mechanism for protecting shared resources. Spin locks are similar to mutexes: both solve mutual exclusion over a resource, and with either one, at most one execution unit can hold the lock at any moment. The scheduling behavior differs, though. With a mutex, if the resource is already occupied, the applicant goes to sleep; a spin lock does not put the caller to sleep. Instead, if the lock is held by another execution unit, the caller keeps looping, checking whether the holder has released the lock, hence the name "spin".
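The "keep looping until the holder releases" behavior can be written in a few lines of C11 with atomic_flag (a toy implementation for illustration only; production spin locks add backoff and, on Apple platforms, priority handling):

```c
#include <assert.h>
#include <stdatomic.h>

// A minimal spin lock: an atomic test-and-set in a busy loop.
static atomic_flag flag = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    // Keep retrying ("spinning") until the flag was previously clear.
    while (atomic_flag_test_and_set_explicit(&flag, memory_order_acquire)) {
        // busy-wait; a real implementation would add a pause/yield hint here
    }
}

void spin_unlock(void) {
    atomic_flag_clear_explicit(&flag, memory_order_release);
}

// Tiny single-threaded exercise of the lock/unlock pair.
int spin_demo(void) {
    int n = 0;
    for (int i = 0; i < 5; i++) {
        spin_lock();
        n++;            // pretend this is the protected critical section
        spin_unlock();
    }
    return n;
}
```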
7.1 OSSpinLock
- Usage
// Initialize
OSSpinLock spinLock = OS_SPINLOCK_INIT;
// Lock
OSSpinLockLock(&spinLock);
// Unlock
OSSpinLockUnlock(&spinLock);
However, as the YYKit author explained in "OSSpinLock is no longer safe", this spin lock suffers from a priority inversion problem.
7.2 os_unfair_lock
- Apple then introduced os_unfair_lock, which solves the priority inversion problem.
os_unfair_lock unfairLock = OS_UNFAIR_LOCK_INIT;
os_unfair_lock_lock(&unfairLock);
os_unfair_lock_unlock(&unfairLock);
8. Atomic property set/get
Using a property's atomic set/get accessors provides atomic operations, ensuring that "shared" variables are read and written safely across threads. This alone, however, cannot satisfy every multithreaded synchronization need.
- When defining a property, atomic and nonatomic are the attributes that control this.
- For atomic properties, the system-generated getter/setter guarantees the integrity of get and set operations against other threads. For example, if thread A is halfway through the getter when thread B calls the setter, thread A's getter still returns an intact object.
- nonatomic has no such guarantee, so nonatomic is faster than atomic.
Quoted from raw3d:
Atomic
- Is the default
- Ensures the CPU completes the current operation before another thread can access the property
- Slower, because it must ensure the operation completes
Non-Atomic
- Not the default
- Faster
- Not thread-safe
- If two threads access the same property at once, unexpected results can occur
Quoted from Vijayendra Tripathi:
- Suppose "name" is an atomic property. If thread A calls [self setName:@"A"], thread B calls [self setName:@"B"], and thread C calls [self name], these operations on different threads are serialized: while one thread is in the getter/setter, the others must wait. In this sense, the property name is read/write safe.
- However, if another thread D calls [name release] at the same time, the program may crash, because release is not covered by the getter/setter serialization. That is, the property is read/write safe but not thread-safe, since other threads can do things beyond reading and writing. Thread safety must be ensured by the developers themselves.
- If name were nonatomic, then threads A, B, C, and D in the example above could all run simultaneously, potentially producing unexpected results. With atomic, A, B, and C are serialized, while D still runs in parallel.
- In simple terms, atomic adds a lock to keep accesses thread-safe and retains the object (reference count +1) to assure the caller that the object stays alive through the accessor. Without this, if another thread called the setter mid-access, a race could drop the reference count to zero and deallocate the original object.
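The effect of an atomic accessor can be illustrated in portable C with C11 atomics (a sketch of the concept, not Apple's actual accessor implementation; the names are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int atomic_val = 0;   // analogous to an `atomic` property
static int plain_val = 0;           // analogous to `nonatomic`

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        atomic_fetch_add(&atomic_val, 1);  // indivisible read-modify-write
        plain_val++;                       // racy: two threads may lose updates
    }
    return NULL;
}

int run_atomic_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    // atomic_val is exactly 200000; plain_val may come out lower.
    return atomic_load(&atomic_val);
}
```

As in the discussion above, the atomic counter is read/write safe but nothing more: it does not make any larger sequence of operations thread-safe.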
9. Once
9.1 GCD
- It is used to create singletons.
+ (instancetype) sharedInstance {
static id __instance = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
__instance = [[self alloc] init];
});
return __instance;
}
9.2 pthread
- Without further ado, straight to the code:
// Define the function to run once
void fun() {
    NSLog(@"%@", [NSThread currentThread]);
}

- (void)onceOfPthread {
    static pthread_once_t once = PTHREAD_ONCE_INIT;
    int i = 0;
    while (i < 5) {
        pthread_once(&once, fun); // fun runs only on the first call
        i++;
    }
}
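The same once-semantics in plain C (run_once_demo and init_count are illustrative names):

```c
#include <assert.h>
#include <pthread.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;
static int init_count = 0;

static void init_once(void) {
    init_count++;   // should only ever run once, no matter how often called
}

// Call pthread_once `n` times and report how often the initializer ran.
int run_once_demo(int n) {
    for (int i = 0; i < n; i++) {
        pthread_once(&once, init_once);
    }
    return init_count;
}
```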
Part Three: Performance comparison
Basic performance – Operation time
This is a typical lock performance test (iOS 7.0 SDK, iPhone 6 simulator, Yosemite 10.10.5). The vertical axis is the time spent in seconds, and the horizontal axis lists the different kinds of locks; the chart also includes read/write locks (ANReadWriteLock) built on NSLock and on recursive locks. In concrete terms, atomic operations performed best (0.057412 seconds) and @synchronized was the most time-consuming (1.753565 seconds) (test code).
Multithreaded array-deletion performance test
- Simulator environment: i5 2.6 GHz + 8 GB RAM, Xcode 7.2.1 (7C1002) + iPhone 6s Plus (9.2) simulator
- Real-device environment: Xcode 7.2.1 (7C1002) + iPhone 6 (China model)
- Testing shows a large difference between the simulator and a real device: the obvious step pattern seen on the simulator does not appear on the device, and NSConditionLock performs very poorly on the simulator but works fine on a real device.
- These performance tests are just a reference point, so there is no need to obsess over them: front-end programming rarely stresses locks this heavily, and you can usually optimize elsewhere. Avoid the thread-safety pitfalls and pick the approach you prefer, so you can study it more deeply and use it more proficiently.
Disclaimer: the results are only a reference; many factors influence them, so they are not exact.
Comprehensive comparison
You can see that after OSSpinLock, dispatch_semaphore and pthread_mutex have the highest performance. Reports say Apple optimized pthread_mutex in newer systems, which is why it does not lag far behind dispatch_semaphore.
Part Four: Common deadlocks
First, we need to clarify a few concepts
1. Serial and parallel
When using GCD, we put each task to be processed into a block and append it to an appropriate queue, the Dispatch Queue. Dispatch queues come in two kinds: a Serial Dispatch Queue waits for the previous task to finish before running the next one, and a Concurrent Dispatch Queue can start the next task without waiting for the previous one to finish. Both dequeue tasks following the FIFO principle.
For a simple example, if you print 1, 2, and 3 in three tasks, the serial queue outputs are ordered 1, 2, and 3, but the order of the parallel queue is not necessarily the same.
Although multiple tasks can be processed at the same time, how many a parallel queue actually runs at once depends on the current system state. If the system can currently handle at most two tasks, tasks 1 and 2 start first, and task 3 starts as soon as either of them finishes.
Serial and parallel are only touched on briefly here; there are many more details about them worth exploring.
2. Synchronous and asynchronous
Serial and parallel describe queues; synchronous and asynchronous describe how tasks are submitted. The biggest difference is that a synchronous submission blocks the current thread, which must wait for the submitted task to complete and return before continuing with the next task; an asynchronous submission does not wait.
3.GCD API
There are many GCD apis, and here are only the ones used in this article.
- Two queues provided by the system:
// Global parallel queue
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
// Main queue (serial, on the main thread)
dispatch_get_main_queue()
- You can also create your own queues:
// Serial queue
dispatch_queue_create("com.demo.serialQueue", DISPATCH_QUEUE_SERIAL)
// Parallel (concurrent) queue
dispatch_queue_create("com.demo.concurrentQueue", DISPATCH_QUEUE_CONCURRENT)
- Finally, submitting work synchronously or asynchronously:
dispatch_sync(... , ^(block)) // Synchronous submission
dispatch_async(... , ^(block)) // Asynchronous submission
Case analysis
Case one
NSLog(@"1"); // Task 1
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"2"); // Task 2
});
NSLog(@"3"); // Task 3
- As a result, the console outputs:
1
Analysis
- dispatch_sync means a synchronous submission.
- dispatch_get_main_queue means the main queue, which runs on the main thread.
- Task 2 is the synchronously submitted task.
Task 1 executes first; no problem there. Then the program reaches the synchronous dispatch and must wait for task 2 to finish before executing task 3. But this is a queue: an arriving task is appended to the end and executed in FIFO order. So task 2 is appended after task 3, and now there is a problem:
Task 3 cannot run until task 2 finishes, yet task 2 comes after task 3 in the queue, meaning task 2 cannot run until task 3 finishes. They end up waiting on each other: a deadlock.
Case two
NSLog(@"1"); // Task 1
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSLog(@"2"); // Task 2
});
NSLog(@"3"); // Task 3
- As a result, the console outputs:
1
2
3
Analysis
Task 1 runs first; at the synchronous dispatch the program waits, executing task 3 only after task 2 completes. dispatch_get_global_queue means task 2 is appended to the global parallel queue; after the parallel queue finishes task 2, control returns to the main queue and task 3 runs.
Case three
dispatch_queue_t queue = dispatch_queue_create("com.demo.serialQueue", DISPATCH_QUEUE_SERIAL);
NSLog(@"1"); // Task 1
dispatch_async(queue, ^{
    NSLog(@"2"); // Task 2
    dispatch_sync(queue, ^{
        NSLog(@"3"); // Task 3
    });
    NSLog(@"4"); // Task 4
});
NSLog(@"5"); // Task 5
- As a result, the console outputs:
1, 5, 2  // the order of 5 and 2 is not deterministic
Analysis
Instead of using a queue provided by the system, this example creates its own serial queue with the dispatch_queue_create function and DISPATCH_QUEUE_SERIAL.
- Task 1 executes.
- At the asynchronous dispatch, [task 2, sync dispatch, task 4] is added to the serial queue as one block. Because the dispatch is asynchronous, task 5 on the main thread does not wait for it.
- Since task 5 does not wait, the output order of 2 and 5 is not deterministic.
- After task 2 executes, the synchronous dispatch is reached, and task 3 is appended to the serial queue.
- The block containing task 4 joined the serial queue before task 3, so task 3 can only run after that block finishes; but dispatch_sync blocks the block until task 3 completes. Each waits on the other forever, and we have a deadlock.
Case four
NSLog(@"1"); // Task 1
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"2"); // Task 2
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"3"); // Task 3
    });
    NSLog(@"4"); // Task 4
});
NSLog(@"5"); // Task 5
- As a result, the console outputs:
1, 2, 5, 3, 4  // the order of 5 and 2 is not deterministic
Analysis
First, add [task 1, asynchronous thread, task 5] to the Main Queue. Tasks in the asynchronous thread are: [Task 2, synchronous thread, task 4].
So, task 1 is executed first, and then the task from the asynchronous thread is added to the Global Queue. Because of the asynchronous thread, task 5 does not wait, and the result is that the output order of 2 and 5 is not necessarily the same.
Then look at the order in which tasks are executed in asynchronous threads. After task 2 completes, a synchronous thread is encountered. Add tasks from the synchronous thread to the Main Queue, with task 3 following task 5.
When task 3 is finished, there is no blocking and the program continues task 4.
From the above analysis, two results follow: 1. the order of 2 and 5 is not deterministic; 2. 4 must come after 3.
Case five
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"1"); // Task 1
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"2"); // Task 2
    });
    NSLog(@"3"); // Task 3
});
NSLog(@"4"); // Task 4
while (1) {
}
NSLog(@"5"); // Task 5
- As a result, the console outputs:
1, 4  // the order of 1 and 4 is not deterministic
Analysis
Similar to the analysis of the previous cases, let’s start by looking at which tasks are added to the Main Queue: [asynchronous thread, task 4, infinite loop, task 5].
Tasks added to the asynchronous thread in the Global Queue are: [Task 1, synchronous thread, task 3].
First comes the asynchronous dispatch: task 4 does not wait for it, so the order of task 1 and task 4 is not deterministic.
After task 4 is complete, the program enters an infinite loop and the Main Queue blocks. However, asynchronous threads added to the Global Queue are not affected and continue to execute synchronous threads following task 1.
In the synchronous thread, task 2 is added to the main thread, and task 3 waits until task 2 completes. The main thread, at this point, is blocked by an infinite loop. So task 2 cannot be executed, of course task 3 cannot be executed, and task 5 after an infinite loop cannot be executed.
In the end, you get 1 and 4 in random order.
Part Five: Summary
- In general, pthread_mutex is recommended as the first choice for real projects.
- For time-consuming, read-mostly workloads with few write conflicts, read/write locks can replace pthread_mutex.
- If you are sure a variable is accessed only through set/get, use atomic properties.
- Where performance is critical, OSSpinLock can be considered, provided each locked section is short enough.
- For condition locks, the object-oriented NSCondition and NSConditionLock basically cover it.
- @synchronized suits low-frequency scenarios such as initialization or emergency fixes.
Apple provides a variety of synchronization solutions (locks) for multithreaded, shared-memory code, and most comparisons of these solutions discuss their ease of use and locking overhead. In my opinion, the best choice depends on the application scenario: high-frequency vs. low-frequency interfaces, limited conflict vs. fierce contention, and the length of the critical section are all important bases for selection. The solution that fits the current scenario wins.
Finally, due to time pressure, please point out any errors or omissions. Attached is a collection of all the demo code; below are my GitHub and blog.
Contact: [email protected]
github
blog
Reference documentation
- Threading Programming Guide
- OSSpinLock is no longer safe
- iOS lock simple implementation and summary
- Various locks in iOS
- Use of NSRecursiveLock recursive locks
- iOS GCD deadlocks