I. Basics
Let's start with a few basic concepts in multithreading.
1.1 Process
The official definition, "a process is the smallest unit of resource allocation," is somewhat abstract. Put simply, a process is a running program: each app we run, or a widget attached to an app, is a process.
1.2 Thread
A thread is the basic execution unit of a process; every process has at least one thread.
1.3 Main thread
When a program starts, the operating system creates a process, and at the same time one thread immediately begins to run. This thread is called the main thread. The main thread, also known as the UI thread, is the thread on which all UI updates must be performed.
There is only one main thread per process; all other threads are called child threads or background threads.
1.4 Single-core and multi-core, parallelism and concurrency
Although we speak of multithreading, at any given moment a single CPU core is executing only one thread.
Concurrency
On a single-core CPU, multithreading is in a sense an illusion: the CPU uses time slicing to switch rapidly between threads, creating the impression that they are executing at the same time. This rapid switching of a single core between tasks is called concurrency.
Since execution is still sequential, one might wonder whether a single-core CPU benefits from multithreaded programming at all. The answer is yes. We usually divide tasks into computation-intensive and IO-intensive. For computation-intensive tasks, multithreading gains little, but for IO-intensive tasks the CPU is largely idle while waiting on IO; the more such tasks run, the higher the CPU utilization (up to a limit), so enabling multithreading still greatly improves the efficiency of task execution.
Parallelism
On multi-core CPUs, multithreading is realized in the true sense: multiple threads can be assigned to multiple cores and execute simultaneously, which is called parallelism. Whether multiple tasks can execute at the same instant is the criterion for distinguishing parallelism from concurrency.
Although parallelism and concurrency are distinct concepts, they are similar enough that the terms are sometimes used interchangeably. For example, when "serial" is contrasted with "parallel," the word "parallel" often covers not only multi-core parallelism but single-core concurrency as well.
1.5 Serial and concurrent, synchronous and asynchronous
GCD usually involves three basic concepts: tasks, queues, and synchronous/asynchronous dispatch.
All we do is submit tasks. How a task executes is determined jointly by the queue and by whether the dispatch is synchronous or asynchronous.
Whether a queue is serial or concurrent determines whether its tasks may execute simultaneously (a relative notion).
Synchronous/asynchronous dispatch determines whether a task executes immediately on the current thread, and whether a new thread may be started.
We can understand these concepts with an analogy: lanes are queues, toll booth windows are threads, and vehicles passing through the toll booth are tasks.
The main queue is a special serial queue that works only on the main thread. In the analogy, the main queue is allowed exactly one window and may never use any other.
If all cars on the road may drive in only one lane, that is a serial queue. If cars may choose any lane to drive in, that is a concurrent queue. Synchronous means no queue-jumping is allowed: only one toll window is open, and every car must pass through that one window. Asynchronous means queue-jumping is allowed: multiple toll windows are open, and a vehicle may choose any window to pass through.
We can now see an interesting relationship between queues and synchronous/asynchronous dispatch.
- Serial + synchronous: the road has one lane and one open toll window, so vehicles drive in that lane and pass through that window one by one.
- Serial + asynchronous: the road has one lane but multiple open windows, so cars still drive in that single lane and pass through the window at its end, but queue-jumping is allowed.
- Concurrent + synchronous: the road has multiple lanes but only one open window, so cars can only pass through the lane where that window is, one by one.
- Concurrent + asynchronous: the road has multiple lanes and multiple open windows, so a car may pass from any lane through any window, and queue-jumping is allowed. (If the current car cannot pay for some reason, say it has stalled or run out of fuel, the car behind it may move ahead and pay first; once the stalled car recovers, it pays and passes in its turn.)
That is:
- Serial + synchronous = tasks execute serially; no new thread is started.
- Serial + asynchronous = tasks execute serially; a new thread is started; the asynchronous task is allowed not to execute immediately.
- Concurrent + synchronous = tasks execute serially; no new thread is started.
- Concurrent + asynchronous = tasks execute concurrently; new threads are started; the asynchronous tasks are allowed not to execute immediately.
In particular:
- Main queue + synchronous = deadlock.
- Main queue + asynchronous = tasks execute serially; no new thread is started; the asynchronous task is allowed not to execute immediately.
Note: an important distinction between synchronous and asynchronous dispatch is whether the task must execute immediately. Take the main queue as an example: in both cases the tasks execute serially and no new thread is started, but a synchronous task must execute immediately, whereas an asynchronous task may be skipped so that the next task runs first. This is precisely what determines whether the queue deadlocks.
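The main-thread rule above can be sketched in code (a hedged example: `self.imageView` and the image URL are placeholders, not from the original). Heavy work goes to a global queue, and the UI update hops back to the main queue:

```objective-c
// Hypothetical download-and-display flow; imageView and url are placeholders.
NSURL *url = [NSURL URLWithString:@"https://example.com/image.png"];
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    // Time-consuming work runs off the main thread.
    NSData *data = [NSData dataWithContentsOfURL:url];
    dispatch_async(dispatch_get_main_queue(), ^{
        // All UI updates must happen on the main thread.
        self.imageView.image = [UIImage imageWithData:data];
    });
});
```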
II. Queues in Practice
2.1 Obtaining and creating queues
iOS queues mainly come in three kinds: the main queue, global queues, and custom queues.

// Main queue
dispatch_queue_t mainQueue = dispatch_get_main_queue();
// Global (concurrent) queue
dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
// Custom serial queue
dispatch_queue_t serialQueue = dispatch_queue_create("serial", DISPATCH_QUEUE_SERIAL);
// Custom concurrent queue
dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrent", DISPATCH_QUEUE_CONCURRENT);
2.2 Synchronous and asynchronous dispatch
Start by defining a few tasks.

- (void)doTask1 {
    for (int i = 1; i < 5; i++) {
        NSLog(@"Task 1: %d current thread: %@", i, [NSThread currentThread]);
    }
}

- (void)doTask2 {
    for (int i = 1; i < 5; i++) {
        NSLog(@"Task 2: %d current thread: %@", i, [NSThread currentThread]);
    }
}

- (void)doTask3 {
    for (int i = 1; i < 5; i++) {
        NSLog(@"Task 3: %d current thread: %@", i, [NSThread currentThread]);
    }
}

In GCD, tasks are submitted with dispatch_sync/dispatch_async(queue, ^{ ... });
We said above that two factors affect how a task executes: synchronous/asynchronous dispatch and the target queue. Is there any other factor in practice?
Yes: the type of the queue the dispatching code is currently running on.
What does that mean? dispatch_sync/dispatch_async adds a task to a target queue, but the call itself is already executing on some queue, and the type of that current queue also indirectly affects how the task executes.
Let's experiment with some code.
// 1. Current queue serial; add a synchronous task to the current serial queue
// Deadlock
dispatch_async(serialQueue, ^{
dispatch_sync(serialQueue, ^{
[self doTask1];
});
[self doTask2];
});
// 2. Current queue serial; add a synchronous task to another serial queue
// No new thread is started; tasks execute serially
dispatch_async(serialQueue, ^{
dispatch_sync(mainQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 1: 1 current thread: <_NSMainThread: 0x6000014b87c0>{number = 1, name = main}
Task 1: 2 current thread: <_NSMainThread: 0x6000014b87c0>{number = 1, name = main}
Task 1: 3 current thread: <_NSMainThread: 0x6000014b87c0>{number = 1, name = main}
Task 1: 4 current thread: <_NSMainThread: 0x6000014b87c0>{number = 1, name = main}
Task 2: 1 current thread: <NSThread: 0x60000148fa40>{number = 6, name = (null)}
Task 2: 2 current thread: <NSThread: 0x60000148fa40>{number = 6, name = (null)}
Task 2: 3 current thread: <NSThread: 0x60000148fa40>{number = 6, name = (null)}
Task 2: 4 current thread: <NSThread: 0x60000148fa40>{number = 6, name = (null)}
// 3. Current queue serial; add an asynchronous task to the current serial queue
// No new thread is started; the current serial queue's remaining work runs first, and the asynchronously added task runs last
dispatch_async(serialQueue, ^{
dispatch_async(serialQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 2: 1 current thread: <NSThread: 0x600002d38300>{number = 7, name = (null)}
Task 2: 2 current thread: <NSThread: 0x600002d38300>{number = 7, name = (null)}
Task 2: 3 current thread: <NSThread: 0x600002d38300>{number = 7, name = (null)}
Task 2: 4 current thread: <NSThread: 0x600002d38300>{number = 7, name = (null)}
Task 1: 1 current thread: <NSThread: 0x600002d38300>{number = 7, name = (null)}
Task 1: 2 current thread: <NSThread: 0x600002d38300>{number = 7, name = (null)}
Task 1: 3 current thread: <NSThread: 0x600002d38300>{number = 7, name = (null)}
Task 1: 4 current thread: <NSThread: 0x600002d38300>{number = 7, name = (null)}
// 4. Current queue serial; add an asynchronous task to another serial queue (not the main queue)
// A new thread is started; tasks execute concurrently
dispatch_async(serialQueue, ^{
dispatch_async(serialQueue2, ^{
[self doTask1];
});
[self doTask2];
});
Task 1: 1 current thread: <NSThread: 0x6000028b50c0>{number = 4, name = (null)}
Task 2: 1 current thread: <NSThread: 0x6000028e4fc0>{number = 7, name = (null)}
Task 1: 2 current thread: <NSThread: 0x6000028b50c0>{number = 4, name = (null)}
Task 2: 2 current thread: <NSThread: 0x6000028e4fc0>{number = 7, name = (null)}
Task 1: 3 current thread: <NSThread: 0x6000028b50c0>{number = 4, name = (null)}
Task 2: 3 current thread: <NSThread: 0x6000028e4fc0>{number = 7, name = (null)}
Task 2: 4 current thread: <NSThread: 0x6000028e4fc0>{number = 7, name = (null)}
Task 1: 4 current thread: <NSThread: 0x6000028b50c0>{number = 4, name = (null)}
// 5. Current queue serial; add an asynchronous task to the main queue
// No new thread is started; the current serial queue's remaining work runs first, and the main-queue task runs last
dispatch_async(serialQueue, ^{
dispatch_async(mainQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 2: 1 current thread: <NSThread: 0x600003575480>{number = 6, name = (null)}
Task 2: 2 current thread: <NSThread: 0x600003575480>{number = 6, name = (null)}
Task 2: 3 current thread: <NSThread: 0x600003575480>{number = 6, name = (null)}
Task 2: 4 current thread: <NSThread: 0x600003575480>{number = 6, name = (null)}
Task 1: 1 current thread: <_NSMainThread: 0x60000353c140>{number = 1, name = main}
Task 1: 2 current thread: <_NSMainThread: 0x60000353c140>{number = 1, name = main}
Task 1: 3 current thread: <_NSMainThread: 0x60000353c140>{number = 1, name = main}
Task 1: 4 current thread: <_NSMainThread: 0x60000353c140>{number = 1, name = main}
// 6. Current queue serial; add a synchronous task to a concurrent queue
// No new thread is started; tasks execute serially
dispatch_async(serialQueue, ^{
dispatch_sync(globalQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 1: 1 current thread: <NSThread: 0x600003f45c40>{number = 4, name = (null)}
Task 1: 2 current thread: <NSThread: 0x600003f45c40>{number = 4, name = (null)}
Task 1: 3 current thread: <NSThread: 0x600003f45c40>{number = 4, name = (null)}
Task 1: 4 current thread: <NSThread: 0x600003f45c40>{number = 4, name = (null)}
Task 2: 1 current thread: <NSThread: 0x600003f45c40>{number = 4, name = (null)}
Task 2: 2 current thread: <NSThread: 0x600003f45c40>{number = 4, name = (null)}
Task 2: 3 current thread: <NSThread: 0x600003f45c40>{number = 4, name = (null)}
Task 2: 4 current thread: <NSThread: 0x600003f45c40>{number = 4, name = (null)}
// 7. Current queue serial; add an asynchronous task to a concurrent queue
// A new thread is started; tasks execute concurrently
dispatch_async(serialQueue, ^{
dispatch_async(globalQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 2: 1 current thread: <NSThread: 0x600003edf780>{number = 6, name = (null)}
Task 1: 1 current thread: <NSThread: 0x600003eb2700>{number = 5, name = (null)}
Task 2: 2 current thread: <NSThread: 0x600003edf780>{number = 6, name = (null)}
Task 1: 2 current thread: <NSThread: 0x600003eb2700>{number = 5, name = (null)}
Task 2: 3 current thread: <NSThread: 0x600003edf780>{number = 6, name = (null)}
Task 1: 3 current thread: <NSThread: 0x600003eb2700>{number = 5, name = (null)}
Task 2: 4 current thread: <NSThread: 0x600003edf780>{number = 6, name = (null)}
Task 1: 4 current thread: <NSThread: 0x600003eb2700>{number = 5, name = (null)}
// 8. Current queue concurrent; add a synchronous task to the current concurrent queue
// No new thread is started; tasks execute serially
dispatch_async(globalQueue, ^{
dispatch_sync(globalQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 1: 1 current thread: <NSThread: 0x600000e08f80>{number = 7, name = (null)}
Task 1: 2 current thread: <NSThread: 0x600000e08f80>{number = 7, name = (null)}
Task 1: 3 current thread: <NSThread: 0x600000e08f80>{number = 7, name = (null)}
Task 1: 4 current thread: <NSThread: 0x600000e08f80>{number = 7, name = (null)}
Task 2: 1 current thread: <NSThread: 0x600000e08f80>{number = 7, name = (null)}
Task 2: 2 current thread: <NSThread: 0x600000e08f80>{number = 7, name = (null)}
Task 2: 3 current thread: <NSThread: 0x600000e08f80>{number = 7, name = (null)}
Task 2: 4 current thread: <NSThread: 0x600000e08f80>{number = 7, name = (null)}
// 9. Current queue concurrent; add a synchronous task to another concurrent queue
// No new thread is started; tasks execute serially
dispatch_async(globalQueue, ^{
dispatch_sync(concurrentQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 1: 1 current thread: <NSThread: 0x6000024d8b80>{number = 5, name = (null)}
Task 1: 2 current thread: <NSThread: 0x6000024d8b80>{number = 5, name = (null)}
Task 1: 3 current thread: <NSThread: 0x6000024d8b80>{number = 5, name = (null)}
Task 1: 4 current thread: <NSThread: 0x6000024d8b80>{number = 5, name = (null)}
Task 2: 1 current thread: <NSThread: 0x6000024d8b80>{number = 5, name = (null)}
Task 2: 2 current thread: <NSThread: 0x6000024d8b80>{number = 5, name = (null)}
Task 2: 3 current thread: <NSThread: 0x6000024d8b80>{number = 5, name = (null)}
Task 2: 4 current thread: <NSThread: 0x6000024d8b80>{number = 5, name = (null)}
// 10. Current queue concurrent; add an asynchronous task to the current concurrent queue
// A new thread is started; tasks execute concurrently
dispatch_async(globalQueue, ^{
dispatch_async(globalQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 2: 1 current thread: <NSThread: 0x600000001e00>{number = 8, name = (null)}
Task 1: 1 current thread: <NSThread: 0x600000001f80>{number = 7, name = (null)}
Task 2: 2 current thread: <NSThread: 0x600000001e00>{number = 8, name = (null)}
Task 1: 2 current thread: <NSThread: 0x600000001f80>{number = 7, name = (null)}
Task 2: 3 current thread: <NSThread: 0x600000001e00>{number = 8, name = (null)}
Task 1: 3 current thread: <NSThread: 0x600000001f80>{number = 7, name = (null)}
Task 2: 4 current thread: <NSThread: 0x600000001e00>{number = 8, name = (null)}
Task 1: 4 current thread: <NSThread: 0x600000001f80>{number = 7, name = (null)}
// 11. Current queue concurrent; add an asynchronous task to another concurrent queue
// A new thread is started; tasks execute concurrently
dispatch_async(globalQueue, ^{
dispatch_async(concurrentQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 1: 1 current thread: <NSThread: 0x600003dbab80>{number = 7, name = (null)}
Task 2: 1 current thread: <NSThread: 0x600003dcf6c0>{number = 5, name = (null)}
Task 1: 2 current thread: <NSThread: 0x600003dbab80>{number = 7, name = (null)}
Task 2: 2 current thread: <NSThread: 0x600003dcf6c0>{number = 5, name = (null)}
Task 1: 3 current thread: <NSThread: 0x600003dbab80>{number = 7, name = (null)}
Task 2: 3 current thread: <NSThread: 0x600003dcf6c0>{number = 5, name = (null)}
Task 2: 4 current thread: <NSThread: 0x600003dcf6c0>{number = 5, name = (null)}
Task 1: 4 current thread: <NSThread: 0x600003dbab80>{number = 7, name = (null)}
// 12. Current queue concurrent; add a synchronous task to a serial queue
// No new thread is started; tasks execute serially
dispatch_async(globalQueue, ^{
dispatch_sync(serialQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 1: 1 current thread: <NSThread: 0x6000028ec0c0>{number = 7, name = (null)}
Task 1: 2 current thread: <NSThread: 0x6000028ec0c0>{number = 7, name = (null)}
Task 1: 3 current thread: <NSThread: 0x6000028ec0c0>{number = 7, name = (null)}
Task 1: 4 current thread: <NSThread: 0x6000028ec0c0>{number = 7, name = (null)}
Task 2: 1 current thread: <NSThread: 0x6000028ec0c0>{number = 7, name = (null)}
Task 2: 2 current thread: <NSThread: 0x6000028ec0c0>{number = 7, name = (null)}
Task 2: 3 current thread: <NSThread: 0x6000028ec0c0>{number = 7, name = (null)}
Task 2: 4 current thread: <NSThread: 0x6000028ec0c0>{number = 7, name = (null)}
// 13. Current queue concurrent; add an asynchronous task to a serial queue
// A new thread is started; tasks execute concurrently
dispatch_async(globalQueue, ^{
dispatch_async(serialQueue, ^{
[self doTask1];
});
[self doTask2];
});
Task 2: 1 current thread: <NSThread: 0x600000b6ee00>{number = 5, name = (null)}
Task 1: 1 current thread: <NSThread: 0x600000b24fc0>{number = 6, name = (null)}
Task 2: 2 current thread: <NSThread: 0x600000b6ee00>{number = 5, name = (null)}
Task 1: 2 current thread: <NSThread: 0x600000b24fc0>{number = 6, name = (null)}
Task 1: 3 current thread: <NSThread: 0x600000b24fc0>{number = 6, name = (null)}
Task 2: 3 current thread: <NSThread: 0x600000b6ee00>{number = 5, name = (null)}
Task 1: 4 current thread: <NSThread: 0x600000b24fc0>{number = 6, name = (null)}
Task 2: 4 current thread: <NSThread: 0x600000b6ee00>{number = 5, name = (null)}
Conclusion:
- Current queue serial, synchronous task added to the current serial queue: deadlock.
- Current queue serial, synchronous task added to another serial queue: no new thread; tasks execute serially.
- Current queue serial, asynchronous task added to the current serial queue: no new thread; tasks execute serially, with the current queue's remaining work running first and the asynchronously added task running last.
- Current queue serial, asynchronous task added to another serial queue (not the main queue): new thread started; tasks execute concurrently.
- Current queue serial, asynchronous task added to the main queue: no new thread; tasks execute serially, with the current queue's remaining work running first and the main-queue task running last.
- Current queue serial, synchronous task added to a concurrent queue: no new thread; tasks execute serially.
- Current queue serial, asynchronous task added to a concurrent queue: new thread started; tasks execute concurrently.
- Current queue concurrent, synchronous task added to the current concurrent queue: no new thread; tasks execute serially.
- Current queue concurrent, synchronous task added to another concurrent queue: no new thread; tasks execute serially.
- Current queue concurrent, asynchronous task added to the current concurrent queue: new thread started; tasks execute concurrently.
- Current queue concurrent, asynchronous task added to another concurrent queue: new thread started; tasks execute concurrently.
- Current queue concurrent, synchronous task added to a serial queue: no new thread; tasks execute serially.
- Current queue concurrent, asynchronous task added to a serial queue: new thread started; tasks execute concurrently.
On task execution order:
Synchronous or asynchronous tasks on a serial queue, and synchronous tasks on a concurrent queue, execute strictly in the order they were submitted. Only asynchronous tasks on a concurrent queue execute concurrently.
On whether a new thread can be started:
Synchronous dispatch never starts a new thread. Asynchronous dispatch can start a new thread, except when the asynchronous task is added to the current serial queue or to the main queue.
On whether tasks execute serially or concurrently:
Synchronously dispatched tasks execute serially. Asynchronously dispatched tasks usually execute concurrently, but a task added asynchronously to the current serial queue still executes serially, after the tasks already in that queue.
On deadlock:
Adding a synchronous task to the current serial queue blocks the queue on itself, resulting in a deadlock.
2.3 Deadlock
Given the deadlock case above, let's analyze how a deadlock arises.

// Example 1: deadlock
dispatch_async(serialQueue, ^{
    dispatch_sync(serialQueue, ^{
        [self doTask1];
    });
    [self doTask2];
});

// Example 2: no deadlock
dispatch_async(serialQueue, ^{
    dispatch_sync(serialQueue2, ^{
        [self doTask1];
    });
    [self doTask2];
});

The interesting point: adding task1 synchronously to the current serialQueue deadlocks, while adding it to another serial queue does not.
Example 1
Treat the block passed to the outer dispatch_async(serialQueue, ^{ ... }) as task0.
Synchronous dispatch requires the current thread to execute the task immediately. task0 is added to the serial queue and begins executing. Inside task0, task1 is dispatched synchronously to the same serial queue and must execute immediately, but the queue is FIFO: task1 must wait for task0 to finish, while task0 is waiting for task1 to finish. Each waits on the other, producing a deadlock.
Example 2
Again, synchronous dispatch requires immediate execution. task0 is added to serialQueue and begins executing. Inside it, task1 is dispatched synchronously to serialQueue2 and executes there immediately; once task1 completes, task0 continues and finishes. The two tasks live on different queues, so neither waits on the other and there is no deadlock.
Deadlock is most commonly seen when a synchronous task is added to the main queue from the main thread. The main queue is itself a special serial queue, so the reasoning of Example 1 applies directly.
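As a minimal sketch of that classic case (assuming this line runs on the main thread, e.g. inside viewDidLoad):

```objective-c
// The currently running main-queue task blocks waiting for this
// synchronous block, while the block waits its FIFO turn behind
// that very task: each waits on the other, so the app freezes.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"never reached");
});
```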
It is the serial queue's tasks waiting on each other that causes the deadlock, so:
Deadlock is a property of queues; do not confuse it with threads.
Adding a synchronous task to the current serial queue blocks the queue on itself, resulting in a deadlock.
III. Advanced Usage
Beyond the basic usage above, GCD has some advanced features for special needs in complex multithreaded scenarios, including dispatch groups, barrier functions, and semaphores.
3.1 Dispatch groups
A dispatch group is an implementation of the observer pattern for work that depends on multiple asynchronous tasks: when all the asynchronous tasks complete, the group is notified so that follow-up work can proceed. The official description:
A group of blocks submitted to queues for asynchronous invocation.
What does that mean in practice? Say we want to draw a poster that depends on two other images, so we download the two images asynchronously and then draw the poster on the main thread.
How would we do it with plain asynchronous dispatch?
dispatch_async(concurrentQueue, ^{
    NSLog(@"Image 1 downloading");
    sleep(2); // simulate a time-consuming operation
    NSLog(@"Image 1 downloaded");
    dispatch_async(concurrentQueue, ^{
        NSLog(@"Image 2 downloading");
        sleep(3); // simulate a time-consuming operation
        NSLog(@"Image 2 downloaded");
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"Draw the poster");
        });
    });
});

// Output (abridged)
2022-01-21 22:18:43.936010+0800 Image 1 downloaded
2022-01-21 22:18:46.943587+0800 Image 2 downloaded
2022-01-21 22:18:46.944198+0800 Draw the poster
On the one hand the code nests layer upon layer, which is ugly to read; on the other hand, look at the timing: total time = image 1 time + image 2 time. Very bad!
This is where dispatch groups come in handy.
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t downloadQueue = dispatch_queue_create("download", DISPATCH_QUEUE_CONCURRENT);
dispatch_group_async(group, downloadQueue, ^{
    NSLog(@"Image 1 downloading");
    sleep(2);
    NSLog(@"Image 1 downloaded");
});
dispatch_group_async(group, downloadQueue, ^{
    NSLog(@"Image 2 downloading");
    sleep(3);
    NSLog(@"Image 2 downloaded");
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"Draw the poster");
});

// Output
2022-01-21 22:41:12.017102+0800 Image 1 downloading
2022-01-21 22:41:12.017126+0800 Image 2 downloading
2022-01-21 22:41:14.023804+0800 Image 1 downloaded
2022-01-21 22:41:15.022048+0800 Image 2 downloaded
2022-01-21 22:41:15.022342+0800 Draw the poster
On the one hand there is no more nesting, so the code is clean; on the other hand, total time = max(image 1 time, image 2 time).
3.2 Barrier functions
As the name implies, a barrier function erects a barrier in a queue: the barrier task waits for all tasks submitted before it to complete before executing, similar in spirit to a dispatch group.
The official documentation says:
// Submits a barrier block for asynchronous execution and returns immediately.
dispatch_barrier_async(dispatch_queue_t queue, dispatch_block_t block);
// Submits a barrier block object for execution and waits until that block completes.
dispatch_barrier_sync(dispatch_queue_t queue, dispatch_block_t block);
From the detailed documentation:
Calls to this function always return immediately after the block is submitted and never wait for the block to be invoked. When the barrier block reaches the front of a private concurrent queue, it is not executed immediately. Instead, the queue waits until its currently executing blocks finish executing. At that point, the barrier block executes by itself. Any blocks submitted after the barrier block are not executed until the barrier block completes.
The queue you specify should be a concurrent queue that you create yourself using the dispatch_queue_create function. If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_async function.
This function submits a barrier block to a dispatch queue for synchronous execution. Unlike dispatch_barrier_async, this function does not return until the barrier block has finished. Calling this function and targeting the current queue results in deadlock.
When the barrier block reaches the front of a private concurrent queue, it is not executed immediately. Instead, the queue waits until its currently executing blocks finish executing. At that point, the queue executes the barrier block by itself. Any blocks submitted after the barrier block are not executed until the barrier block completes.
The queue you specify should be a concurrent queue that you create yourself using the dispatch_queue_create function. If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_sync function.
Unlike with dispatch_barrier_async, no retain is performed on the target queue. Because calls to this function are synchronous, it "borrows" the reference of the caller. Moreover, no Block_copy is performed on the block. As an optimization, this function invokes the barrier block on the current thread when possible.
The dispatch_barrier_async/dispatch_barrier_sync documentation gives several details worth noting:
- Unlike dispatch groups, which can span queues, a barrier only orders tasks within a single concurrent queue.
- The barrier's queue must be a custom concurrent queue; otherwise the call behaves just like dispatch_sync/dispatch_async.
- dispatch_barrier_async can start a new thread and executes the barrier task asynchronously; dispatch_barrier_sync does not start a new thread.
- dispatch_barrier_async returns without waiting for the barrier task to complete; dispatch_barrier_sync waits for the barrier task to complete before the code after it runs.
Experiment with the code
// dispatch_barrier_async
dispatch_async(concurrentQueue, ^{
    NSLog(@"Image 1 downloading");
    sleep(2);
    NSLog(@"Image 1 downloaded currentThread %@", [NSThread currentThread]);
});
dispatch_async(concurrentQueue, ^{
    NSLog(@"Image 2 downloading");
    sleep(2);
    NSLog(@"Image 2 downloaded currentThread %@", [NSThread currentThread]);
});
dispatch_barrier_async(concurrentQueue, ^{
    sleep(2);
    NSLog(@"Back to draw the poster currentThread %@", [NSThread currentThread]);
});
NSLog(@"Continue on");

// Output (abridged): "Continue on" prints immediately; the barrier block runs only after both downloads finish
2022-01-24 15:04:57.002133+0800 Image 1 downloading
2022-01-24 15:04:57.002136+0800 Image 2 downloading
2022-01-24 15:04:59.006487+0800 Image 1 downloaded currentThread <NSThread: 0x600003ea1200>{number = 6, name = (null)}
2022-01-24 15:04:59.006487+0800 Image 2 downloaded currentThread <NSThread: 0x600003ef2040>{number = 7, name = (null)}
2022-01-24 15:05:01.011588+0800 Back to draw the poster currentThread <NSThread: 0x600003ef2040>{number = 7, name = (null)}

// dispatch_barrier_sync: the same code with dispatch_barrier_sync in place of dispatch_barrier_async

// Output (abridged): "Continue on" waits for the barrier block, which here runs on the current (main) thread
2022-01-24 15:05:53.837821+0800 Image 1 downloading
2022-01-24 15:05:55.839467+0800 Image 2 downloaded currentThread <NSThread: 0x6000010b1ec0>{number = 8, name = (null)}
2022-01-24 15:05:55.839467+0800 Image 1 downloaded currentThread <NSThread: 0x6000010adc00>{number = 6, name = (null)}
2022-01-24 15:05:57.840356+0800 Back to draw the poster currentThread <NSThread: 0x6000010a8000>{number = 1, name = main}
2022-01-24 15:05:57.840628+0800 Continue on
Besides expressing this kind of task dependency, barrier functions can also implement thread synchronization, which is covered in the thread-safety section.
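As a sketch of that thread-synchronization use (the class name and queue label here are made up for illustration, not from the original), a barrier on a custom concurrent queue gives a multi-reader/single-writer pattern: reads run concurrently, while each write excludes everything else.

```objective-c
@interface Store : NSObject
- (id)objectForKey:(NSString *)key;
- (void)setObject:(id)obj forKey:(NSString *)key;
@end

@implementation Store {
    dispatch_queue_t _queue;    // must be a custom concurrent queue for barriers to work
    NSMutableDictionary *_dict; // shared state guarded by the queue
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("store.rw", DISPATCH_QUEUE_CONCURRENT);
        _dict = [NSMutableDictionary dictionary];
    }
    return self;
}

- (id)objectForKey:(NSString *)key {
    __block id result;
    // Reads may overlap with each other, but never with a write.
    dispatch_sync(_queue, ^{ result = self->_dict[key]; });
    return result;
}

- (void)setObject:(id)obj forKey:(NSString *)key {
    // The barrier waits for in-flight reads, runs alone, then lets reads resume.
    dispatch_barrier_async(_queue, ^{ self->_dict[key] = obj; });
}
@end
```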
3.3 Semaphores
A semaphore is another observer-style mechanism in iOS. Suppose we start N threads but allow only 5 to work at the same time; whenever one finishes, the next is allowed in.
In multithreaded programming this amounts to capping the maximum number of concurrent threads. With NSOperationQueue we can simply set maxConcurrentOperationCount; in GCD, we implement the same thing with a semaphore.
Back to the image-download example: suppose we now have 20 images to download before drawing the poster on the main thread. Do we just download all 20 asynchronously as before? We could, but kicking off 20 downloads at once may hurt performance noticeably; it is better to allow only 5 downloads at a time.
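For comparison, the NSOperationQueue approach mentioned above is a one-line cap (a sketch; the operation body is a placeholder):

```objective-c
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 5; // at most 5 operations execute at once
for (int i = 0; i < 20; i++) {
    [queue addOperationWithBlock:^{
        // placeholder for one image download
        sleep(2);
    }];
}
```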
The semaphore API is very simple, essentially three functions.

// Creates a semaphore with an initial value.
dispatch_semaphore_t dispatch_semaphore_create(intptr_t value);
// If the semaphore value is > 0, decrements it by 1 and continues; otherwise waits (up to the timeout) until the semaphore is signaled.
intptr_t dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout);
// Increments the semaphore value by 1, waking a waiting thread if there is one.
intptr_t dispatch_semaphore_signal(dispatch_semaphore_t dsema);
Let’s do it in code
- (void)viewDidLoad {
    [super viewDidLoad];
    _semaphore = dispatch_semaphore_create(5);
    for (int i = 0; i < 20; i++) {
        [[[NSThread alloc] initWithTarget:self selector:@selector(downloadImage) object:nil] start];
    }
}

- (void)downloadImage {
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"Image downloading");
    sleep(2);
    NSLog(@"Image downloaded");
    dispatch_semaphore_signal(_semaphore);
}

// Output (abridged): downloads proceed five at a time
2022-01-24 11:42:03.030225+0800 Image downloading
2022-01-24 11:42:03.030229+0800 Image downloading
2022-01-24 11:42:03.030240+0800 Image downloading
2022-01-24 11:42:03.030321+0800 Image downloading
...
2022-01-24 11:42:05.031118+0800 Image downloaded
2022-01-24 11:42:05.031137+0800 Image downloaded
...
2022-01-24 11:42:05.031547+0800 Image downloading
2022-01-24 11:42:05.031548+0800 Image downloading
...
2022-01-24 11:42:11.047130+0800 Image downloaded
IV. Thread Safety
The semaphore above nicely caps the maximum number of concurrent threads. But add one more requirement: what if we want to do something after exactly 8 images have been downloaded?
- (void)viewDidLoad {
    [super viewDidLoad];
    _semaphore = dispatch_semaphore_create(5);
    _count = 0;
    for (int i = 1; i <= 20; i++) {
        [[[NSThread alloc] initWithTarget:self selector:@selector(downloadImage) object:nil] start];
    }
}

- (void)downloadImage {
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"Image downloading");
    sleep(2);
    _count++;
    NSLog(@"Image %d downloaded", _count);
    if (_count == 8) {
        NSLog(@"do something");
    }
    dispatch_semaphore_signal(_semaphore);
}

// Output (abridged): the counter is corrupted by concurrent updates
2022-01-24 13:04:18.287153+0800 Image 2 downloaded
2022-01-24 13:04:18.287181+0800 Image 4 downloaded
2022-01-24 13:04:18.287183+0800 Image 3 downloaded
...
2022-01-24 13:04:20.287862+0800 Image 7 downloaded
2022-01-24 13:04:20.288072+0800 do something
2022-01-24 13:04:20.288075+0800 do something
...
Clearly, the image numbering comes out wrong. Why? The semaphore above only caps the number of threads downloading at the same time at five; nothing guarantees that the download count is modified by only one thread at a time.
When multiple threads access the same resource, data corruption is easy to cause. We usually allow simultaneous reads but never simultaneous writes, which means that writes require thread synchronization. How is thread synchronization implemented? Generally with locking: wrap the work that needs synchronization in a lock, so that only one thread at a time can enter it.
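The write-side problem and its lock-based fix can be shown in miniature with a shared counter in plain C. This is an illustrative sketch, not code from the post: `pthread_mutex_t` stands in for whatever lock the platform provides, and `run_synced_count` and the iteration counts are made-up names and values.

```c
#include <pthread.h>

#define WRITERS 8
#define ITERS   100000

static long count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *writer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);   /* only one writer at a time */
        count++;                     /* unlocked, increments would be lost */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Returns the final counter value after 8 threads each add 100000. */
long run_synced_count(void) {
    pthread_t t[WRITERS];
    count = 0;
    for (int i = 0; i < WRITERS; i++)
        pthread_create(&t[i], NULL, writer, NULL);
    for (int i = 0; i < WRITERS; i++)
        pthread_join(t[i], NULL);
    return count;
}
```

With the lock held around `count++` the result is always 800000; delete the lock/unlock pair and the total usually comes up short, which is exactly the kind of data corruption described above.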
What are the locking techniques in GCD?
4.1 Semaphore locking
A semaphore created with an initial value of 1 allows only one thread at a time to access a resource. In the example above, we add a second semaphore just for modifying the count.
- (void)viewDidLoad {
    [super viewDidLoad];
    _semaphore = dispatch_semaphore_create(5);
    _indexSemaphore = dispatch_semaphore_create(1);
    _count = 0;
    for (int i = 1; i <= 20; i++) {
        [[[NSThread alloc] initWithTarget:self selector:@selector(downloadImage) object:nil] start];
    }
}

- (void)downloadImage {
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
    sleep(2);
    dispatch_semaphore_wait(_indexSemaphore, DISPATCH_TIME_FOREVER);
    _count++;
    NSLog(@"image %d download ready", _count);
    dispatch_semaphore_signal(_indexSemaphore);
    if (_count == 8) {
        NSLog(@"do something");
    }
    dispatch_semaphore_signal(_semaphore);
}

// Console output (trimmed): the images are now numbered 1 through 20 in order,
// and "do something" fires exactly once, after the 8th image:
// ...
// 2022-01-24 16:10:50.783722+0800 Demo-OC[7331:234618] image 8 download ready
// 2022-01-24 16:10:50.783989+0800 Demo-OC[7331:234618] do something
// ...
// 2022-01-24 16:10:54.791026+0800 Demo-OC[7331:234629] image 20 download ready
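The "initial value 1" trick maps directly onto POSIX semaphores as well. In the hedged C sketch below (the name `run_indexed` and the thread count are illustrative, not from the post), `sem_init(&gate, 0, 1)` plays the role of `dispatch_semaphore_create(1)`.

```c
#include <pthread.h>
#include <semaphore.h>

#define THREADS 20

static sem_t gate;          /* initial value 1: behaves as a mutual-exclusion lock */
static int next_index = 0;

static void *worker(void *arg) {
    (void)arg;
    sem_wait(&gate);        /* analogous to dispatch_semaphore_wait(..., FOREVER) */
    next_index++;           /* critical section: one thread at a time */
    sem_post(&gate);        /* analogous to dispatch_semaphore_signal(...) */
    return NULL;
}

/* 20 threads each bump the index once under the binary semaphore. */
int run_indexed(void) {
    pthread_t t[THREADS];
    next_index = 0;
    sem_init(&gate, 0, 1);
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&gate);
    return next_index;      /* always 20: no increment is lost */
}
```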
4.2 Barrier function lock
Wrapping the work that requires thread synchronization in a barrier also makes it execute exclusively. The queue passed to the barrier function must be a concurrent queue you created yourself; passing a serial queue or a global concurrent queue makes dispatch_barrier_async behave just like dispatch_async.
- (void)viewDidLoad {
    [super viewDidLoad];
    _semaphore = dispatch_semaphore_create(5);
    _concurrentQueue = dispatch_queue_create("concurrent", DISPATCH_QUEUE_CONCURRENT);
    _count = 0;
    for (int i = 1; i <= 20; i++) {
        [[[NSThread alloc] initWithTarget:self selector:@selector(downloadImage) object:nil] start];
    }
}

- (void)downloadImage {
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
    sleep(2);
    dispatch_barrier_async(self.concurrentQueue, ^{
        self.count++;
        NSLog(@"image %d download ready", self.count);
        if (self.count == 8) {
            NSLog(@"do something");
        }
    });
    dispatch_semaphore_signal(_semaphore);
}
More commonly, the barrier function is used for multiple-reader, single-writer access, i.e. as a read/write lock.
- (void)viewDidLoad {
    [super viewDidLoad];
    _queue = dispatch_queue_create("concurrent", DISPATCH_QUEUE_CONCURRENT);
    for (int i = 0; i < 1000; i++) {
        [self read];
        [self read];
        [self read];
        [self read];
        [self write];
        [self write];
        [self write];
        [self write];
    }
}

- (void)read {
    dispatch_async(_queue, ^{
        NSLog(@"read");
        sleep(1);
    });
}

- (void)write {
    dispatch_barrier_async(_queue, ^{
        NSLog(@"write");
        sleep(1);
    });
}