What are threads and multithreading?
Before we look at iOS multithreading, let’s look at what threads are.
- A thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of work of that process: a thread is a single sequential flow of control within a process.
- Each running application in the system is a process, and the system allocates a separate memory space for each process. All tasks in a process are executed in threads, so each process must have at least one thread, which is commonly referred to as the main thread.
- A process can open multiple threads that perform different tasks in parallel. This is called multithreading.
- Speaking of multithreading, we have to mention the CPU: a single core can only execute one machine instruction at any moment, and a thread can only execute instructions while it holds the CPU.
- Multithreaded concurrency on a single core is, in fact, the CPU switching rapidly among multiple threads. Because the switching is fast enough, it creates the illusion that the threads are executing simultaneously. The amount of time the CPU allocates to each thread is called a time slice, and to keep execution efficient the system schedules threads with a round-robin time-slice algorithm.
The thread scheduling described above applies to single-core devices; multi-core devices can execute multiple threads simultaneously through true parallelism.
A common multithreading scheme in iOS
There are four multithreading schemes in iOS, compared below:
Scheme | Description | Language | Thread lifecycle | Usage frequency |
---|---|---|---|---|
pthread | A set of general-purpose multithreading APIs available on Unix, Linux, Windows, etc.; cross-platform and portable, but difficult to use | C | Managed manually by the developer | Almost never |
NSThread | Built on pthread; more object-oriented and easy to use, with thread objects you can operate on directly | Objective-C | Managed manually by the developer | Occasionally |
GCD | Replaces NSThread; takes full advantage of multi-core devices | C | Managed automatically | Commonly used |
NSOperation | An encapsulation of GCD; more object-oriented, with some extra features | Objective-C | Managed automatically | Commonly used |
pthread
pthread is a set of C-based multithreading APIs. Because it is written in C, pthread can be used across different operating systems and is highly portable. However, pthread is cumbersome to use and requires manual management of the thread lifecycle, so it is rarely used in iOS development and is not covered in depth here.
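Since pthread is only mentioned in passing, here is a minimal sketch in plain C of what its manual lifecycle management looks like (the function names and thread counts are illustrative, not taken from any particular codebase):

```c
#include <pthread.h>

// A shared counter protected by a mutex; each worker adds its argument.
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long n = (long)arg;
    pthread_mutex_lock(&counter_lock);
    counter += n;
    pthread_mutex_unlock(&counter_lock);
    return NULL;
}

// Creating and joining every thread is entirely up to the developer --
// the "managed manually" burden the comparison table refers to.
long run_pthread_demo(void) {
    pthread_t threads[4];
    for (long i = 1; i <= 4; i++) {
        pthread_create(&threads[i - 1], NULL, worker, (void *)i);
    }
    for (int i = 0; i < 4; i++) {
        pthread_join(threads[i], NULL);
    }
    return counter;  // 1 + 2 + 3 + 4 = 10
}
```

Every `pthread_create` must be paired with a `pthread_join` (or detach) by hand; forgetting either leaks resources, which is exactly the bookkeeping GCD removes.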
NSThread
NSThread is an API provided by Apple for operating on threads. It is object-oriented, lightweight, and flexible. But like pthread, NSThread requires the developer to manage the thread lifecycle manually, so it too is rarely used. Still, NSThread provides some very useful methods:
```objc
#pragma mark - Thread creation
// Get the current thread
+ (NSThread *)currentThread;
// Create and start a thread
+ (void)detachNewThreadSelector:(SEL)selector toTarget:(id)target withObject:(id)argument;
// Sleep
+ (void)sleepUntilDate:(NSDate *)date;
+ (void)sleepForTimeInterval:(NSTimeInterval)ti;
// Cancel the thread
- (void)cancel;
// Start the thread
- (void)start;
// Exit the thread
+ (void)exit;
// Get the main thread
+ (NSThread *)mainThread;
// Initializer
- (id)initWithTarget:(id)target selector:(SEL)selector object:(id)argument;
// Whether the thread is executing
- (BOOL)isExecuting;
// Whether the thread has finished
- (BOOL)isFinished;
// Whether the thread was cancelled
- (BOOL)isCancelled;

#pragma mark - Thread communication
// Communicate with the main thread
- (void)performSelectorOnMainThread:(SEL)aSelector withObject:(id)arg waitUntilDone:(BOOL)wait modes:(NSArray *)array;
// Equivalent to the method above with kCFRunLoopCommonModes
- (void)performSelectorOnMainThread:(SEL)aSelector withObject:(id)arg waitUntilDone:(BOOL)wait;
// Communicate with another thread
- (void)performSelector:(SEL)aSelector onThread:(NSThread *)thr withObject:(id)arg waitUntilDone:(BOOL)wait modes:(NSArray *)array;
// Equivalent to the method above with kCFRunLoopCommonModes
- (void)performSelector:(SEL)aSelector onThread:(NSThread *)thr withObject:(id)arg waitUntilDone:(BOOL)wait;
// Implicitly create and start a thread
- (void)performSelectorInBackground:(SEL)aSelector withObject:(id)arg;
```
The usage of NSThread is also very simple and is not introduced in detail here; if you are interested, you can experiment with the APIs above.
NSThread still appears in everyday development. For example, we often use [NSThread currentThread] to get the current thread and [NSThread mainThread] to get the main thread, and thread keep-alive is implemented with NSThread together with a run loop.
GCD (important)
GCD is Apple's solution for parallel computation on multi-core devices, and it makes full use of the CPU's multiple cores. GCD also manages the thread lifecycle automatically (thread creation, task scheduling, thread destruction, and so on); we only need to tell GCD which tasks to execute, without writing any thread code ourselves. Combined with blocks, GCD is very easy to use, so it is the preferred choice in multithreaded development.
Tasks and queues
Before we look at GCD, let's look at two important concepts: tasks and queues.
Tasks
A task is the action we need to perform, and in GCD, we usually put the action we need to perform in a block. There are two ways to execute tasks: synchronous and asynchronous.
- Synchronous: once a synchronous task is submitted, the caller must wait until the task returns before doing anything else. A synchronous task executes on the current thread and never creates a new one.
- Asynchronous: an asynchronous task returns immediately after being submitted, without blocking the caller's next operation. The task is usually executed on a newly created thread.
Therefore, the biggest difference between synchronous and asynchronous is whether a new thread can be started.
Queues
There are two main types of queues in GCD: serial queues and concurrent queues
- Serial queue: only one task executes at a time; the next task starts only after the previous one finishes. A serial queue opens at most one thread to execute its tasks.
- Concurrent queue: multiple tasks can be executing at the same time, which means a concurrent queue can open multiple threads to execute tasks simultaneously.
In both serial and concurrent queues, tasks are enqueued FIFO (first in, first out): new tasks are always appended to the tail. In a serial queue the task at the head executes first and later tasks wait for it to finish; in a concurrent queue multiple tasks are dequeued and executed simultaneously without waiting for each other, so their execution order and progress are unpredictable.
Using GCD
The procedure of using GCD is actually very simple, mainly divided into two steps
- Create a queue
- Adding tasks (synchronous or asynchronous) to the queue
Create a queue
There are two types of queues in GCD: serial and concurrent. In addition, GCD provides two special queues: the main queue (actually a serial queue) and the global queue (a concurrent queue).
The queue is created with the dispatch_queue_create function, which takes two arguments:
- The first parameter is the queue's unique identifier, of type const char *. It is recommended to give each custom queue a globally unique identifier to avoid conflicts.
- The second parameter is the queue type, DISPATCH_QUEUE_SERIAL for creating serial queues and DISPATCH_QUEUE_CONCURRENT for creating concurrent queues.
The code to create the queue is as follows:
```objc
// Create a serial queue
dispatch_queue_t serialQueue = dispatch_queue_create("serialQueue", DISPATCH_QUEUE_SERIAL);
// Create a concurrent queue
dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
// Get the global concurrent queue (parameter 1: queue priority; parameter 2: reserved, pass 0)
dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
// Get the main queue
dispatch_queue_t mainQueue = dispatch_get_main_queue();
```
The important thing to note here is that the main queue is just an ordinary serial queue; any task added to the main queue is executed on the main thread.
Add tasks synchronously or asynchronously
In GCD, tasks are added with dispatch_sync (synchronous) or dispatch_async (asynchronous). Either way, you must specify the dispatch_queue_t the task is submitted to.
- Add synchronous and asynchronous tasks to the serial queue
```objc
dispatch_queue_t serialQueue = dispatch_queue_create("serialQueue", DISPATCH_QUEUE_SERIAL);
NSLog(@"Task 1");
dispatch_async(serialQueue, ^{
    sleep(3);
    NSLog(@"Task 2 -- %@", [NSThread currentThread]);
});
NSLog(@"Task 3");
dispatch_sync(serialQueue, ^{
    sleep(1);
    NSLog(@"Task 4 -- %@", [NSThread currentThread]);
});
NSLog(@"Task 5");
```
The final output is as follows:
Tasks 1 and 3 print first, then task 2; task 4 executes only after task 2 completes, and task 5 only after task 4. This verifies the conclusions above:
- Asynchronous tasks do not block the current thread and are executed in a newly created thread. (Task 1 and task 3 are executed first, and Task 2 is executed later)
- The synchronization task blocks the current thread, and subsequent tasks are executed only after the synchronization task is completed (task 5 is executed only after task 4 is executed).
- Tasks in a serial queue follow the FIFO (first in, first out) principle. The tasks added first are executed first, and the subsequent tasks (task 2 and task 4) are executed only after the previous tasks are completed.
- Add synchronous and asynchronous tasks to the concurrent queue
```objc
dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"Task 1");
dispatch_async(concurrentQueue, ^{
    NSLog(@"Start task 2");
    sleep(3);
    NSLog(@"Task 2 -- %@", [NSThread currentThread]);
});
NSLog(@"Task 3");
dispatch_sync(concurrentQueue, ^{
    NSLog(@"Start task 4");
    sleep(3);
    NSLog(@"Task 4 -- %@", [NSThread currentThread]);
});
NSLog(@"Task 5");
```
The result is as follows:
- Asynchronous tasks do not block the current thread, so tasks 1 and 3 are executed first, followed by task 2
- Multiple tasks in a concurrent queue can be executed simultaneously, so tasks 2 and 4 are executed concurrently
- An asynchronous task initiates a new thread, and a synchronous task executes on the current thread. So task 2 is executed in the child thread and task 4 is executed in the main thread.
- The synchronous task blocks the current thread, so task 5 is not executed until task 4 is complete.
Task and queue combination execution effect
There are two queue types, serial and concurrent. Adding the system-provided main queue gives three kinds of queues in total (since tasks added to the main queue always execute on the main thread, the main queue is treated separately as a special case).
Tasks come in two kinds, synchronous and asynchronous, so queue plus task gives six combinations, compared as follows:
 | Serial queue (manually created) | Main queue | Concurrent queue |
---|---|---|---|
Synchronous task (sync) | No new thread; tasks execute serially | Deadlock | No new thread; tasks execute serially |
Asynchronous task (async) | Opens one new thread; tasks execute serially | No new thread; tasks execute serially | Opens new threads; tasks execute concurrently |
- Only asynchronous tasks start new threads
- Tasks are executed concurrently only when asynchronous tasks are added to the concurrent queue
Also note that adding a synchronous task to the main queue with dispatch_sync (from the main thread) causes a deadlock. Task nesting is not considered here for now.
Deadlock generation
- The first case: adding a synchronous task to the main queue, which causes a deadlock
```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    NSLog(@"Task 1");
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"Task 2");
    });
    NSLog(@"Task 3");
}
```
The execution diagram is as follows
First of all, viewDidLoad itself runs as a task on the main queue; since it is at the head of the queue, it executes first.
There are three tasks inside viewDidLoad, all executed on the main thread. After task 1 completes, task 2 is appended to the main queue via dispatch_sync. Because a synchronous call must complete before it returns, task 3 cannot run until task 2 has finished.
The main queue now holds two tasks: viewDidLoad and task 2. As a serial queue, the main queue cannot start task 2 until viewDidLoad finishes, but viewDidLoad cannot finish until task 2 (and then task 3) has run. Each is waiting for the other, causing a deadlock.
- The second case: a synchronous task nested inside an asynchronous task, both added to the same serial queue, causes a deadlock
```objc
dispatch_queue_t serialQueue = dispatch_queue_create("serialQueue", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{   // block1
    NSLog(@"Task 1");
    dispatch_sync(serialQueue, ^{   // block2
        NSLog(@"Task 2");
    });
    NSLog(@"Task 3");
});
```
The execution diagram is as follows:
First, dispatch_async opens a new thread, so block1 executes on a child thread; because it was added to the serial queue, block1 sits at the head of that queue.
On the child thread, block1 first executes task 1 and then calls dispatch_sync, which appends the synchronous block2 to the same serial queue; task 3 cannot run until block2 completes.
The serial queue now holds two tasks, block1 and block2. To execute block2, the queue must wait for block1 to finish; but block1 cannot finish until block2 and task 3 have run. So block1 waits for block2, block2 waits for block1, and we have a deadlock.
- The third case: a synchronous task nested inside another synchronous task, both added to the same serial queue
```objc
dispatch_queue_t serialQueue = dispatch_queue_create("serialQueue", DISPATCH_QUEUE_SERIAL);
dispatch_sync(serialQueue, ^{
    NSLog(@"Task 1");
    dispatch_sync(serialQueue, ^{
        NSLog(@"Task 2");
    });
    NSLog(@"Task 3");
});
```
In fact, the deadlock mechanism here is essentially the same as above: the outer synchronous task occupies the serial queue while the nested synchronous task waits for it, and vice versa. The only difference from the first case is that the synchronous tasks are issued against a custom serial queue instead of the main queue, so the details are not repeated here.
Other uses of GCD
Barrier: dispatch_barrier_async
The barrier is used to separate groups of operations so that the groups execute in order. For example, if there are two groups of operations and the first group must finish before the second begins, use dispatch_barrier_async. The code is as follows:
```objc
dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
// Task group 1
for (int i = 0; i < 5; i++) {
    dispatch_async(concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Execute group 1 task %d", i);
    });
}
// Barrier
dispatch_barrier_async(concurrentQueue, ^{
    NSLog(@"Barrier");
});
// Task group 2
for (int i = 0; i < 5; i++) {
    dispatch_async(concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Execute group 2 task %d", i);
    });
}
```
The premise is that all tasks must be added to the same queue.
The result is as follows:
As you can see, the five tasks in group 1 execute concurrently; after they complete, the barrier block executes, and only then do the group 2 tasks run, as shown in the figure below:
It is also important to note that the queue passed in must be a concurrent queue created manually with dispatch_queue_create. If it is a serial queue or a global queue, this function behaves the same as dispatch_async.
Queue group: dispatch_group
A queue group is a very practical feature: it lets you perform a follow-up step after a set of asynchronous tasks has completed. For example, when several network requests must all return before the UI is updated on the main thread.
Queue groups can be used in three ways:
- Type 1: dispatch_group_async and dispatch_group_notify
```objc
- (void)testGroup1{
    dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Network task 1: %@", [NSThread currentThread]);
    });
    dispatch_group_async(group, concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Network task 2: %@", [NSThread currentThread]);
    });
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"Main thread update UI: %@", [NSThread currentThread]);
    });
}
```
- The second option is dispatch_group_enter, dispatch_group_leave, and dispatch_group_notify
```objc
- (void)testGroup2{
    dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_enter(group);
    dispatch_async(concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Network task 1: %@", [NSThread currentThread]);
        dispatch_group_leave(group);
    });
    dispatch_group_enter(group);
    dispatch_async(concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Network task 2: %@", [NSThread currentThread]);
        dispatch_group_leave(group);
    });
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"Main thread update UI: %@", [NSThread currentThread]);
    });
}
```
- The third option: dispatch_group_async used together with dispatch_group_wait
```objc
- (void)testGroup3{
    dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Network task 1: %@", [NSThread currentThread]);
    });
    dispatch_group_async(group, concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Network task 2: %@", [NSThread currentThread]);
    });
    // Block the current thread until all tasks in the group finish
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"Main thread update UI: %@", [NSThread currentThread]);
    });
}
```
All three approaches above produce the same result:
Semaphore: dispatch_semaphore
A semaphore is a token used to control how many threads may access a resource. By setting a semaphore and wrapping thread access with semaphore operations, we tell the system to let at most that many threads run at once. A semaphore is implemented with a counter: calling wait decrements the counter, and if the resulting value is less than zero the calling thread blocks until a signal arrives; otherwise the thread continues immediately without waiting.
GCD provides three functions to help us use semaphores:
Function | Role |
---|---|
dispatch_semaphore_create | Creates a semaphore with a given initial count |
dispatch_semaphore_signal | Sends a signal: count +1 (waking a waiting thread, if any) |
dispatch_semaphore_wait | If the count is greater than 0, decrements it and continues; if the count is <= 0, blocks the current thread until a signal makes it > 0, then decrements |
Example code is as follows:
```objc
- (void)testSemaphore{
    dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
    // Create a semaphore with an initial count of 0
    dispatch_semaphore_t seq = dispatch_semaphore_create(0);
    NSLog(@"Task 1");
    dispatch_async(concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Task 2");
        dispatch_semaphore_signal(seq);
    });
    // Wait until task 2 signals
    dispatch_semaphore_wait(seq, DISPATCH_TIME_FOREVER);
    dispatch_async(concurrentQueue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"Task 3");
        dispatch_semaphore_signal(seq);
    });
    // Wait until task 3 signals
    dispatch_semaphore_wait(seq, DISPATCH_TIME_FOREVER);
    NSLog(@"Task 4");
}
```
The result is as follows:
When dispatch_semaphore_wait executes, the count would drop below zero, so the thread blocks and waits. When task 2 completes, it calls dispatch_semaphore_signal; the count is incremented and the program continues.
Therefore, semaphores can also be used to make multiple asynchronous tasks execute in order, or to run some final operation after several asynchronous tasks have all completed.
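The counting behavior described above is not unique to GCD. As an illustrative aside (using POSIX semaphores, not Apple's dispatch API), the same signal/wait arithmetic looks like this in C:

```c
#include <semaphore.h>
#include <errno.h>

// Illustrative POSIX analogue: sem_post plays the role of
// dispatch_semaphore_signal (+1), and sem_trywait the role of a
// non-blocking dispatch_semaphore_wait (-1, failing when the count is 0).
int semaphore_demo(void) {
    sem_t sem;
    sem_init(&sem, 0, 0);            // initial count 0
    sem_post(&sem);                  // signal: count -> 1
    int first = sem_trywait(&sem);   // succeeds: count -> 0
    int second = sem_trywait(&sem);  // fails: count is already 0
    int blocked = (second == -1 && errno == EAGAIN);
    sem_destroy(&sem);
    return (first == 0 && blocked) ? 1 : 0;
}
```

A blocking sem_wait here would park the thread just as dispatch_semaphore_wait with DISPATCH_TIME_FOREVER does.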
NSOperation
NSOperation is actually a higher-level, fully object-oriented encapsulation of GCD. It is easier to use than GCD and the code is more readable, and it also provides some useful features that GCD lacks. For example:
- You can set dependencies between operations to control the sequential execution of multiple asynchronous tasks
- You can set the priority of an operation
- You can cancel operations.
- You can set the maximum number of concurrent operations
Subclasses of NSOperation
NSOperation is an abstract class and cannot be used directly. To use its functionality, use its subclasses NSInvocationOperation and NSBlockOperation, or define your own subclass of NSOperation.
NSBlockOperation
NSBlockOperation puts the task in a block and executes it at the appropriate time.
```objc
NSBlockOperation *operation1 = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Task 1: %@", [NSThread currentThread]);
}];
[operation1 start];
```
NSBlockOperation can also carry additional tasks via the addExecutionBlock: method, and tasks added through addExecutionBlock: may execute concurrently with the task supplied to blockOperationWithBlock:, on different threads.
```objc
- (void)testBlock{
    NSBlockOperation *op = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Main task: %@", [NSThread currentThread]);
    }];
    [op addExecutionBlock:^{
        NSLog(@"Additional task 1: %@", [NSThread currentThread]);
    }];
    [op addExecutionBlock:^{
        NSLog(@"Additional task 2: %@", [NSThread currentThread]);
    }];
    [op start];
}
```
The result is as follows:
The task created with blockOperationWithBlock: executes synchronously on the current thread by default, but when blockOperationWithBlock: and addExecutionBlock: are used together, the additional tasks run on other threads; and if enough blocks are added via addExecutionBlock:, even the blockOperationWithBlock: task may be executed on a child thread.
With addExecutionBlock:, a new thread is always opened to execute the additional task.
NSInvocationOperation
NSInvocationOperation lets you specify a target and a selector to execute.
```objc
- (void)testOp{
    NSInvocationOperation *invocationOp = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(operation) object:nil];
    [invocationOp start];
}

- (void)operation{
    NSLog(@"Task %@", [NSThread currentThread]);
}
```
By default, NSInvocationOperation does not open a new thread when its start method is called; it executes synchronously on the current thread. Only when the operation is added to an NSOperationQueue is a new thread opened to execute it asynchronously.
Setting NSOperation dependencies
NSOperation lets you set dependencies between tasks so that they execute in a predetermined order.
```objc
- (void)testOp{
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    NSBlockOperation *op1 = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Task 1: %@", [NSThread currentThread]);
    }];
    NSBlockOperation *op2 = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Task 2: %@", [NSThread currentThread]);
    }];
    NSInvocationOperation *op3 = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(method3) object:nil];
    // op2 waits for op1; op3 waits for op2
    [op2 addDependency:op1];
    [op3 addDependency:op2];
    [queue addOperations:@[op1, op2, op3] waitUntilFinished:NO];
}

- (void)method3{
    NSLog(@"Task 3: %@", [NSThread currentThread]);
}
```
Originally the three tasks would have executed concurrently, but after the dependencies are added they execute sequentially, as follows:
Since the three tasks execute sequentially, only one thread is needed.
NSOperationQueue
NSOperation also has a queue concept, NSOperationQueue, and the two are usually used together: once an NSOperation is added to an NSOperationQueue, a new thread is automatically opened to execute it asynchronously.
```objc
- (void)testOperation{
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    NSBlockOperation *operation1 = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Task 1: %@", [NSThread currentThread]);
    }];
    [operation1 start];
    NSBlockOperation *operation2 = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Task 2: %@", [NSThread currentThread]);
    }];
    [queue addOperation:operation2];
}
```
The result is as follows:
As you can see, task 1 is not added to the NSOperationQueue and is executed in the main thread, and task 2 is added to the NSOperationQueue and is executed in the child thread.
Note: After NSOperation is added to NSOperationQueue, the start method is automatically executed without manual invocation.
- NSOperationQueue can set the maximum number of concurrent tasks
```objc
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
// Set the maximum number of concurrent operations
queue.maxConcurrentOperationCount = 1;
for (int i = 0; i < 5; i++) {
    NSBlockOperation *op = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Task %d: %@", i, [NSThread currentThread]);
    }];
    [queue addOperation:op];
}
```
Setting the maximum concurrency to 1 in the code causes the tasks to be executed sequentially, resulting in the following:
- NSOperationQueue can cancel/suspend/resume queue operations
```objc
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
// Suspend the queue
queue.suspended = YES;
// Resume the queue
queue.suspended = NO;
// Cancel all operations in the queue
[queue cancelAllOperations];
```
- NSOperationQueue can get the main queue and the current queue in the following ways
```objc
// Get the current queue
[NSOperationQueue currentQueue];
// Get the main queue
[NSOperationQueue mainQueue];
```
NSOperation summary
Summary of NSOperation properties and methods
- Operation priority
```objc
// Set the operation's priority
@property NSOperationQueuePriority queuePriority;
```
- Checking operation state
```objc
// Whether the operation is executing
@property (readonly, getter=isExecuting) BOOL executing;
// Whether the operation has finished
@property (readonly, getter=isFinished) BOOL finished;
// Whether the operation executes concurrently
@property (readonly, getter=isConcurrent) BOOL concurrent;
// Whether the operation executes asynchronously
@property (readonly, getter=isAsynchronous) BOOL asynchronous;
// Whether the operation is ready
@property (readonly, getter=isReady) BOOL ready;
```
- Cancel the operation
```objc
// Whether the operation was cancelled
@property (readonly, getter=isCancelled) BOOL cancelled;
// Cancel the operation
- (void)cancel;
```
- Dependencies and synchronization
```objc
// Add a dependency on another operation
- (void)addDependency:(NSOperation *)op;
// Remove a dependency
- (void)removeDependency:(NSOperation *)op;
// All dependencies of the current operation
@property (readonly, copy) NSArray<NSOperation *> *dependencies;
// Block the current thread until the operation finishes
- (void)waitUntilFinished;
// Block invoked when the operation finishes
@property (nullable, copy) void (^completionBlock)(void);
```
Summary of NSOperationQueue properties and methods
- Add tasks
```objc
// Add a single operation
- (void)addOperation:(NSOperation *)op;
// Add multiple operations
- (void)addOperations:(NSArray<NSOperation *> *)ops waitUntilFinished:(BOOL)wait;
// Add a block directly to the queue as an NSBlockOperation
- (void)addOperationWithBlock:(void (^)(void))block;
// Add a barrier block
- (void)addBarrierBlock:(void (^)(void))barrier;
```
- Maximum concurrency
```objc
// Set the maximum number of concurrent operations
@property NSInteger maxConcurrentOperationCount;
```
- The queue status
```objc
// Suspend (YES) or resume (NO) the queue
@property (getter=isSuspended) BOOL suspended;
// Cancel all operations in the queue
- (void)cancelAllOperations;
// Block the current thread until all operations in the queue finish
- (void)waitUntilAllOperationsAreFinished;
```
- Access to the queue
```objc
// Get the current queue
@property (class, readonly, strong, nullable) NSOperationQueue *currentQueue;
// Get the main queue
@property (class, readonly, strong) NSOperationQueue *mainQueue;
```
Multithreading safety risks
With a single thread, tasks execute serially, so there is no safety problem. Multithreading can greatly improve a program's efficiency, but it also carries hidden risks: when multiple threads access the same resource, data corruption and data-safety issues arise easily. For example, suppose two threads read and modify the same variable at the same time:
Thread A and thread B both read the Integer's value, 17, and then increment it at the same time; both write back 18, so one increment is lost and the data is corrupted. The solution is thread synchronization, which makes the threads execute in a predetermined order. The most common thread synchronization technique is locking. Applying a lock to the same Integer example gives:
Thread A locks, reads the value 17, changes it to 18, and unlocks. Only then can thread B access the Integer: it locks, reads 18, changes it to 19, and unlocks. Locking therefore solves the safety problem of multithreading.
iOS thread synchronization schemes
There are several common thread synchronization techniques in iOS. Let’s use a simple Demo to compare them.
Example: suppose a bank account currently holds 5,000 yuan. Using multiple threads, deposit into and withdraw from the account several times, and ensure the final balance is correct.
If we were to use multithreading but not thread synchronization, the code would look like this:
```objc
- (void)moneyTest{
    __block int totalMoney = 5000;
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            [NSThread sleepForTimeInterval:2];
            totalMoney += 100;
            NSLog(@"Deposit 100, account balance: %d", totalMoney);
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            [NSThread sleepForTimeInterval:2];
            totalMoney -= 200;
            NSLog(@"Withdraw 200, account balance: %d", totalMoney);
        }
    });
}
```
Following the normal flow, after five deposits and five withdrawals the balance should end up at 4,500 yuan, but the actual result is quite different:
The balance calculations go wrong throughout the process, and after all ten operations the balance ends up at 4,700 yuan instead of 4,500. This is exactly the hazard multithreading introduces.
Now let's use each of the following techniques to fix the deposit-and-withdraw demo.
The implementations of the locks below can be found in GNUstep. Although GNUstep is not Apple's official source code, it is close enough to be a useful reference.
OSSpinLock
OSSpinLock is a "spin lock": as the name implies, a thread waiting for the lock stays busy, continuously consuming CPU time until the lock is released.
OSSpinLock is no longer considered safe because it can cause priority inversion. Suppose a low-priority thread acquires the lock first. If a high-priority thread then tries to lock, the scheduler gives CPU time to the high-priority thread, which spins while it waits. The low-priority thread is then starved of CPU time, cannot finish its work, and so cannot release the lock, while the high-priority thread keeps busy-waiting for it. This is the priority inversion problem.
OSSpinLock Api:
#import <libkern/OSAtomic.h>

// Initialize
OSSpinLock lock = OS_SPINLOCK_INIT;
// Try to lock (if the lock is held, return false immediately without waiting;
// otherwise lock and return true)
bool result = OSSpinLockTry(&lock);
// Lock
OSSpinLockLock(&lock);
// Unlock
OSSpinLockUnlock(&lock);
Go back to the above Demo and lock the save/withdraw operation as follows:
- (void)moneyTest{
    // Initialize the lock
    __block OSSpinLock lock = OS_SPINLOCK_INIT;
    __block int totalMoney = 5000;
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            OSSpinLockLock(&lock);      // lock
            [NSThread sleepForTimeInterval:.1];
            totalMoney += 100;
            OSSpinLockUnlock(&lock);    // unlock
            NSLog(@"Deposit 100, account balance: %d", totalMoney);
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            OSSpinLockLock(&lock);      // lock
            [NSThread sleepForTimeInterval:.1];
            totalMoney -= 200;
            OSSpinLockUnlock(&lock);    // unlock
            NSLog(@"Withdraw 200, account balance: %d", totalMoney);
        }
    });
}
The running results are as follows:
As can be seen, the whole process is carried out in sequence, starting with deposit and then withdrawal, and the final account balance is 4500 yuan, which solves the problem of data confusion.
os_unfair_lock
os_unfair_lock replaces OSSpinLock and is available from iOS 10. A thread waiting for an os_unfair_lock is put to sleep (unlike OSSpinLock's busy-wait) and does not consume CPU resources, so os_unfair_lock does not suffer from the priority inversion problem.
The os_unfair_lock API is as follows:
#import <os/lock.h>

// Initialize
os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;
// Try to lock
bool result = os_unfair_lock_trylock(&lock);
// Lock
os_unfair_lock_lock(&lock);
// Unlock
os_unfair_lock_unlock(&lock);
Its usage is the same as OSSpinLock's.
pthread_mutex
pthread_mutex is a mutual-exclusion lock (mutex). When a thread acquires the lock on a shared resource, any other thread that tries to acquire it is blocked and put to sleep until the resource is unlocked. If several threads are sleeping on the same lock, they are woken when it is released, but only one of them — typically the first waiter — actually acquires the lock; the rest go back to sleep.
A thread waiting on a mutex sleeps and does not consume CPU resources.
The pthread_mutex Api is as follows:
#import <pthread.h>
/*
* Mutex type attributes
*/
#define PTHREAD_MUTEX_NORMAL 0
#define PTHREAD_MUTEX_ERRORCHECK 1
#define PTHREAD_MUTEX_RECURSIVE 2 // recursive lock
#define PTHREAD_MUTEX_DEFAULT PTHREAD_MUTEX_NORMAL

// Initialize the lock attributes
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);
// Initialize the lock
pthread_mutex_t mutex;
pthread_mutex_init(&mutex, &attr);
// Try to lock
pthread_mutex_trylock(&mutex);
// Lock
pthread_mutex_lock(&mutex);
// Unlock
pthread_mutex_unlock(&mutex);
// Destroy
pthread_mutexattr_destroy(&attr);
pthread_mutex_destroy(&mutex);
Normal lock
Use pthread_mutex to lock deposits and withdrawals as follows:
- (void)moneyTest{
    // Initialize the lock
    __block pthread_mutex_t mutex;
    pthread_mutex_init(&mutex, NULL);
    __block int totalMoney = 5000;
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            pthread_mutex_lock(&mutex);     // lock
            [NSThread sleepForTimeInterval:.1];
            totalMoney += 100;
            pthread_mutex_unlock(&mutex);   // unlock
            NSLog(@"Deposit 100, account balance: %d", totalMoney);
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            pthread_mutex_lock(&mutex);     // lock
            [NSThread sleepForTimeInterval:.1];
            totalMoney -= 200;
            pthread_mutex_unlock(&mutex);   // unlock
            NSLog(@"Withdraw 200, account balance: %d", totalMoney);
        }
    });
    // Note: do NOT destroy the mutex here — the async blocks may still be using it.
    // Destroy it (pthread_mutex_destroy) only after all work with the lock is done,
    // e.g. in dealloc.
}
Recursive locking
When initializing the lock, we can specify the lock type to be PTHREAD_MUTEX_RECURSIVE, at which point we create a recursive lock. A recursive lock means that the same thread can acquire the lock of a shared resource multiple times. If other threads want to acquire the lock, they must wait for the thread to release all the locks. Let’s create a recursive function Demo to understand the use of recursive locking:
#import "XLMutexRecursiveTest.h"
#import <pthread.h>
@interface XLMutexRecursiveTest ()
@property(nonatomic, assign)pthread_mutex_t mutex;
@end
@implementation XLMutexRecursiveTest
- (instancetype)init
{
self = [super init];
    if (self) {
        // Initialize the lock attributes
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        // pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        // Initialize the lock
        pthread_mutex_init(&_mutex, &attr);
        // Destroy the attributes
        pthread_mutexattr_destroy(&attr);
    }
    return self;
}
- (void)recursiveTask{
pthread_mutex_lock(&_mutex);
NSLog(@"recursiveTask");
static int count = 0;
if (count < 5) {
count++;
[self recursiveTask];
}
pthread_mutex_unlock(&_mutex);
}
- (void)dealloc{
pthread_mutex_destroy(&_mutex);
}
@end
First create the lock with the normal type PTHREAD_MUTEX_NORMAL, instantiate an XLMutexRecursiveTest, and make the call:
XLMutexRecursiveTest *recursiveTest = [[XLMutexRecursiveTest alloc] init];
[recursiveTest recursiveTask];
After execution, the program prints the first NSLog and then hangs. The reason: the first call to recursiveTask locks _mutex and executes the NSLog; because count < 5, it calls recursiveTask again, which tries to lock _mutex a second time. The second call waits for the lock to be released, but the first call can only release it after the second call returns — a deadlock.
Let’s change the lock to a recursive lock and re-execute it. We’ll see that all tasks print normally, as shown below
Note: when a pthread_mutex is no longer needed, call pthread_mutexattr_destroy and pthread_mutex_destroy to destroy the lock and its attributes.
Condition variables
Condition variables are a common way to implement "wait -> wake up" logic between threads, similar to semaphores in GCD. A condition variable synchronizes threads around a shared variable. Its use follows three steps:
- Thread one is suspended while waiting for the condition variable to be true
- Thread two makes the condition variable true
- Wake up thread one
To prevent races on the shared state itself, a condition variable is almost always used together with a mutex: the shared variable the condition is based on is accessed by multiple threads, so it must be protected by the mutex before it can be used safely. The pthread condition-variable API is as follows:
// Initialize the lock
pthread_mutex_t mutex;
pthread_mutex_init(&mutex, NULL);
// Initialize the condition variable
pthread_cond_t condt;
pthread_cond_init(&condt, NULL);
// Wait for the condition (atomically unlocks the mutex and sleeps)
pthread_cond_wait(&condt, &mutex);
// Wake one thread waiting on the condition
pthread_cond_signal(&condt);
// Wake all threads waiting on the condition
pthread_cond_broadcast(&condt);
// Destroy
pthread_cond_destroy(&condt);
pthread_mutex_destroy(&mutex);
A typical use of condition variables is the producer-consumer model. Let’s create a simple producer-consumer Demo to understand the use of condition variables and mutex. The code is as follows:
#import "XLMutexConditionLockTest.h"
#import <pthread.h>

@interface XLMutexConditionLockTest ()
@property(nonatomic, strong)NSMutableArray *cupsRemain;
@property(nonatomic, assign)pthread_mutex_t mutex;
@property(nonatomic, assign)pthread_cond_t condt;
@end

@implementation XLMutexConditionLockTest

- (instancetype)init {
    self = [super init];
    if (self) {
        // The inventory array must be created, or adding to it silently does nothing
        _cupsRemain = [NSMutableArray array];
        // Initialize the lock
        pthread_mutex_init(&_mutex, NULL);
        // Initialize the condition variable
        pthread_cond_init(&_condt, NULL);
    }
    return self;
}

- (void)testSaleAndProduce {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self _saleCup];
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self _produceCup];
    });
}

// Sell a cup
- (void)_saleCup {
    pthread_mutex_lock(&_mutex);
    if (self.cupsRemain.count == 0) {
        // No cups in stock; wait for one to be produced
        NSLog(@"No cups currently in stock.");
        pthread_cond_wait(&_condt, &_mutex);
    }
    // A cup is available to sell
    [self.cupsRemain removeLastObject];
    NSLog(@"Sold a cup.");
    pthread_mutex_unlock(&_mutex);
}

// Produce a cup
- (void)_produceCup {
    pthread_mutex_lock(&_mutex);
    sleep(2); // simulate production taking time
    [self.cupsRemain addObject:@"yellow cup"];
    NSLog(@"Produced a yellow cup.");
    // Wake the thread waiting for stock
    pthread_cond_signal(&_condt);
    pthread_mutex_unlock(&_mutex);
}

- (void)dealloc {
    pthread_cond_destroy(&_condt);
    pthread_mutex_destroy(&_mutex);
}
@end
The result is as follows:
You can see that although the _produceCup method sleeps for 2s, the _saleCup method waits for the _produceCup execution to complete. Thus, the whole process of condition variables can be summarized as follows:
- First, the saleCup and produceCup methods are called in separate threads.
- The thread running saleCup acquires and locks mutex first, then checks whether there is stock. The stock is 0, so it calls pthread_cond_wait. pthread_cond_wait does three things:
- The current thread is placed on the thread queue waiting for the condition to be satisfied.
- Unlock mutex
- Suspends (blocks) the current thread and waits to be woken up. (Pthread_cond_wait does not return at this point)
- produceCup is then called. Since the mutex was released by pthread_cond_wait, the produceCup thread can lock it and add an element to the array. It then calls pthread_cond_signal to wake the saleCup thread, and finally calls pthread_mutex_unlock to release the mutex.
- Upon receipt of the pthread_cond_signal, the thread on which saleCup is located is activated and the pthread_cond_wait function returns. When pthread_cond_wait returns, mutex is automatically locked again.
- Remove the last element of the array, and finally unlock mutex.
dispatch_semaphore
dispatch_semaphore is GCD's semaphore. It controls the maximum number of threads allowed to access a resource concurrently by maintaining a global counter. With an initial value of 1, only one thread at a time may enter the critical section, which gives thread synchronization. Usage is as follows:
- (void)testDispatch{
    // Initial semaphore value
    int semaphoreValue = 1;
    // Create the semaphore
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(semaphoreValue);
    __block int totalMoney = 5000;
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            // If the semaphore value is <= 0, dispatch_semaphore_wait puts the
            // thread to sleep; otherwise it decrements the value and continues
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER); // semaphore -1
            [NSThread sleepForTimeInterval:.1];
            totalMoney += 100;
            dispatch_semaphore_signal(semaphore); // semaphore +1
            NSLog(@"Deposit 100, account balance: %d", totalMoney);
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 5; i++) {
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER); // semaphore -1
            [NSThread sleepForTimeInterval:.1];
            totalMoney -= 200;
            dispatch_semaphore_signal(semaphore); // semaphore +1
            NSLog(@"Withdraw 200, account balance: %d", totalMoney);
        }
    });
}
If the initial semaphore value is 1, dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER) determines whether the semaphore value is greater than 0. If the semaphore value is greater than 0, subsequent operations are performed and the value of the semaphore is -1. If the semaphore <=0, this method puts the current thread in a dormant wait state until the semaphore value >0.
A call to dispatch_semaphore_signal(Semaphore) causes the semaphore to +1, and the two methods are used together to achieve thread synchronization.
dispatch_queue(DISPATCH_QUEUE_SERIAL)
dispatch_queue(DISPATCH_QUEUE_SERIAL) is a serial queue. Whether tasks are added synchronously or asynchronously, a serial queue executes them one after another, which also achieves thread synchronization. Usage is as follows:
- (void)testDispatchQueue{
    dispatch_queue_t queue = dispatch_queue_create("lock_queue", DISPATCH_QUEUE_SERIAL);
    __block int totalMoney = 5000;
    dispatch_async(queue, ^{
        for (int i = 0; i < 5; i++) {
            [NSThread sleepForTimeInterval:.2];
            totalMoney += 100;
            NSLog(@"Deposit 100, account balance: %d", totalMoney);
        }
    });
    dispatch_async(queue, ^{
        for (int i = 0; i < 5; i++) {
            [NSThread sleepForTimeInterval:.2];
            totalMoney -= 200;
            NSLog(@"Withdraw 200, account balance: %d", totalMoney);
        }
    });
}
NSLock && NSRecursiveLock && NSCondition && NSConditionLock
NSLock, NSRecursiveLock, NSCondition, and NSConditionLock are object-oriented wrappers around pthread_mutex's normal lock, recursive lock, and condition variable, and are easier to use. Since their usage closely mirrors pthread_mutex, it is not repeated here; only the common APIs are listed.
NSLock
@protocol NSLocking
// Lock
- (void)lock;
// Unlock
- (void)unlock;
@end

@interface NSLock : NSObject <NSLocking>
// Try to lock
- (BOOL)tryLock;
// Lock, giving up at the given deadline
- (BOOL)lockBeforeDate:(NSDate *)limit;
@end
NSRecursiveLock
@interface NSRecursiveLock : NSObject <NSLocking>
// Try to lock
- (BOOL)tryLock;
// Lock, giving up at the given deadline
- (BOOL)lockBeforeDate:(NSDate *)limit;
@end
NSCondition
NSCondition wraps a mutex and a condition variable together: it unifies the former's lock/unlock methods and the latter's wait/signal methods in a single object. Locking and unlocking work the same way as with NSLock.
@interface NSCondition : NSObject <NSLocking>
- (void)wait;
- (BOOL)waitUntilDate:(NSDate *)limit;
- (void)signal;
- (void)broadcast;
@end
NSConditionLock
NSConditionLock is another encapsulation of NSCondition. Different from NSCondition, NSConditionLock can set specific condition values
@interface NSConditionLock : NSObject <NSLocking>
// Lock when the condition equals the given value
- (void)lockWhenCondition:(NSInteger)condition;
// Try to lock
- (BOOL)tryLock;
- (BOOL)tryLockWhenCondition:(NSInteger)condition;
// Unlock and set the condition value
- (void)unlockWithCondition:(NSInteger)condition;
// Lock, giving up at the given deadline
- (BOOL)lockBeforeDate:(NSDate *)limit;
- (BOOL)lockWhenCondition:(NSInteger)condition beforeDate:(NSDate *)limit;
@end
@synchronized
@synchronized is implemented with a recursive mutex under the hood. When you pass in an object obj, the runtime looks up (or creates) a recursive lock for obj in a global hash table, keyed by obj's memory address. To see the internal implementation of @synchronized, download the objc4 source code and look at the objc_sync_enter and objc_sync_exit functions in objc-sync.mm.
The use of @synchronized is simple, as follows:
@synchronized (obj) {
    // Synchronized code
}
Apply @synchronized to the case of saving and withdrawing money as follows:
- (void)testSynchronized{
    __block int totalMoney = 5000;
    NSObject *obj = [NSObject new];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (obj) {
            for (int i = 0; i < 5; i++) {
                [NSThread sleepForTimeInterval:.2];
                totalMoney += 100;
                NSLog(@"Deposit 100, account balance: %d", totalMoney);
            }
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        @synchronized (obj) {
            for (int i = 0; i < 5; i++) {
                [NSThread sleepForTimeInterval:.2];
                totalMoney -= 200;
                NSLog(@"Withdraw 200, account balance: %d", totalMoney);
            }
        }
    });
}
The obj passed in must be non-nil; @synchronized(nil) does nothing. And for multiple threads to synchronize with each other, they must all pass the same obj.
Performance comparison of various locks
For a performance comparison of these locks, see ibireme's article "OSSpinLock is no longer safe".
Additional notes on locks
Identifying a spin lock or a mutex from assembly code
A spin lock and a mutex can be distinguished by what a thread does while waiting for the lock: if the waiting thread sleeps, it is a mutex; if it busy-waits, it is a spin lock. We can confirm this by tracing the assembly code. Take OSSpinLock and os_unfair_lock as examples:
#import "XLLockTest.h"
#import <libkern/OSAtomic.h>
#import <os/lock.h>
@interface XLLockTest ()
@property(nonatomic, assign)OSSpinLock lock;
@end
@implementation XLLockTest
- (instancetype)init{
self = [super init];
if (self) {
_lock = OS_SPINLOCK_INIT;
}
return self;
}
- (void)test{
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self thread2];
});
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self thread1];
});
}
- (void)thread1{
OSSpinLockLock(&_lock);
NSLog(@"thread1");
OSSpinLockUnlock(&_lock);
}
- (void)thread2{
OSSpinLockLock(&_lock);
sleep(60);
NSLog(@"thread2");
OSSpinLockUnlock(&_lock);
}
@end
OSSpinLock
Set a breakpoint in the thread1 method, call the test method, and step through the assembly with the LLDB `si` instruction. First we enter the OSSpinLockLock method
The _OSSpinLockLockSlow function is called inside the OSSpinLockLock method
If you enter the _OSSpinLockLockSlow function and execute si, you will find that the program has been looping through an assembly instruction as follows:
If you are familiar with assembly, you can see that this code is essentially a while loop: the thread keeps looping while it waits, so OSSpinLock is indeed a spin lock.
os_unfair_lock
Change the lock in the Demo to os_unfair_lock, then trace the assembly the same way. Execution first enters the os_unfair_lock_lock method, which internally calls the _os_unfair_lock_lock_slow function
_os_unfair_lock_lock_slow in turn calls __ulock_wait
Inside __ulock_wait, syscall is invoked. syscall is a system-level call, after which the current thread goes to sleep.
Since the waiting thread sleeps, os_unfair_lock is a mutex.
Spin locks versus mutexes
Spin locks
With a spin lock, while one thread holds the lock, any other thread that wants it stays in a busy-wait state. A busy-waiting thread remains active but performs no useful work; it simply loops, repeatedly checking whether the lock holder has released the lock.
Spin locks are suitable in the following situations:
- The thread waits for a lock for a short time
- Locking code (critical sections) is often invoked, but contention rarely occurs.
- It is more efficient to use spin locks when CPU resources are abundant
- Multicore processors are also suitable for using spinlocks
Mutexes
With a mutex, when one thread holds the lock, other threads are blocked and enter a sleeping-wait state. Sleeping threads consume no CPU resources and are not woken until the lock is released.
Mutexes are suitable in the following situations:
- The thread is expected to wait a long time for the lock
- Single-core processors are suitable for mutex
- Use mutex when there are IO operations on critical sections
- Use mutex when critical section code is complex, or when there is a large number of loops
- There are many competing resources in critical sections, and mutex is used in the case of intense competition
Atomic properties in OC
In OC, a property can be declared atomic or nonatomic. Loosely speaking, an atomic property's accessors are thread-safe, while a nonatomic property's are not. Why are atomic properties' accessors thread-safe? The objc-accessors.mm file in the objc4 source shows the underlying implementation: for atomic properties, the runtime wraps the property's setter and getter bodies in a spin lock.
id objc_getProperty(id self, SEL _cmd, ptrdiff_t offset, BOOL atomic) {
    ......
    if (!atomic) return *slot;
    // Obtain the spin lock from the global hash table
    spinlock_t& slotlock = PropertyLocks[slot];
    slotlock.lock();
    id value = objc_retain(*slot);
    slotlock.unlock();
    return objc_autoreleaseReturnValue(value);
}

void objc_setProperty(id self, SEL _cmd, ptrdiff_t offset, id newValue, BOOL atomic, signed char shouldCopy) {
    ......
    if (!atomic) {
        oldValue = *slot;
        *slot = newValue;
    } else {
        // Obtain the spin lock from the global hash table
        spinlock_t& slotlock = PropertyLocks[slot];
        slotlock.lock();
        oldValue = *slot;
        *slot = newValue;
        slotlock.unlock();
    }
    ......
}
However, atomic only guarantees that the getter and setter are thread-safe, not that the entire object is thread-safe. Suppose we use atomic to declare a property of type NSArray:
@property(atomic, strong)NSArray *sourceArray;
If multiple threads operate on sourceArray's contents at the same time, problems can still occur, because atomic only protects sourceArray's own getter and setter. A call such as [_sourceArray objectAtIndex:index] is not thread-safe — it has nothing to do with sourceArray's setter or getter. To make such accesses thread-safe, you must add your own lock around the uses of _sourceArray.
Read and write security schemes in iOS
During development there is a special case: I/O in a critical section. If we protect the critical section with any of the locks above, only one read or write can happen at a time. But multiple threads reading at the same time causes no data problems; problems arise only when writes are involved. In summary, the requirements are:
- Only one thread can write data at a time.
- Multiple threads can read data at the same time.
- Read and write operations cannot be performed at the same time.
The above scenario is the classic "multiple readers, single writer" pattern, often used when reading and writing data such as files. In iOS there are two common ways to achieve this effect:
- Pthread_rwlock read-write lock
- Dispatch_barrier_async Asynchronous fence call
The usage of dispatch_barrier_async is described in detail in GCD above. This section mainly describes the use of pthread_rwlock.
The pthread_rwlock Api is as follows:
- (void)testRwLock{
    // Initialize the lock
    pthread_rwlock_t rwLock;
    pthread_rwlock_init(&rwLock, NULL);
    // Read - lock
    pthread_rwlock_rdlock(&rwLock);
    // Read - try to lock
    pthread_rwlock_tryrdlock(&rwLock);
    // Write - lock
    pthread_rwlock_wrlock(&rwLock);
    // Write - try to lock
    pthread_rwlock_trywrlock(&rwLock);
    // Unlock
    pthread_rwlock_unlock(&rwLock);
    // Destroy
    pthread_rwlock_destroy(&rwLock);
}
To simulate read and write operations, the code is as follows:
#import "XLLockTest.h"
#import <pthread.h>
@interface XLLockTest ()
@property(nonatomic, assign)pthread_rwlock_t rwlock;
@property (strong, nonatomic) dispatch_queue_t queue;
@end
@implementation XLLockTest
- (instancetype)init{
self = [super init];
if (self) {
pthread_rwlock_init(&_rwlock, NULL);
self.queue = dispatch_queue_create("rw_queue", DISPATCH_QUEUE_CONCURRENT);
}
return self;
}
- (void)test{
for (int i = 0; i < 10; i++) {
dispatch_async(self.queue, ^{
[self readThread];
});
dispatch_async(self.queue, ^{
[self writeThread];
});
}
}
- (void)readThread{
pthread_rwlock_rdlock(&_rwlock);
sleep(1);
NSLog(@"Read operation");
pthread_rwlock_unlock(&_rwlock);
}
- (void)writeThread{
pthread_rwlock_wrlock(&_rwlock);
sleep(1);
NSLog(@"Write operation");
pthread_rwlock_unlock(&_rwlock);
}
- (void)dealloc{
pthread_rwlock_destroy(&_rwlock);
}
@end
Call the test method of XLLockTest and print the following:
As you can see, two reads may be performed at the same time, but only one write is performed.
References
OSSpinLock is no longer secure
In-depth understanding of locks in iOS development
Conclusion
The above is purely my personal understanding; if anything is wrong, feel free to point it out in the comments.
Let's learn and make progress together.