Introduction to GCD
GCD (Grand Central Dispatch) was introduced in macOS 10.6 and later in iOS 4.0. GCD exists mainly because it is more convenient to use than traditional multithreading approaches such as NSThread, NSOperationQueue, and NSInvocationOperation, and because it is implemented at the system level. Being part of the system, it is generally more efficient than the earlier thread-based approaches.
GCD also uses block syntax, which makes the code more concise.
What multithreading is, and the pros and cons of multithreaded programming, are not discussed here; this article focuses on how to use GCD.
Introducing the Dispatch Queue
The most straightforward description Apple gives of GCD is: add the tasks you want to execute to a Dispatch Queue. So the Dispatch Queue will be the key to the rest of the discussion.
Let’s start with the following code:
dispatch_async(queue, ^{
// The task to be performed
});
dispatch_async() is a function that adds tasks to a queue. This code adds a task to the queue in the form of a block, and the queue processes its tasks in order.
In addition, the Dispatch Queue can be divided into two types according to different processing modes:
The Serial Dispatch Queue executes tasks in sequence: the next task in the Queue can start only after the previous task has completed. This is what we call a serial queue.
In a Concurrent Dispatch Queue, tasks from the Queue are distributed across multiple threads and executed simultaneously. This is what we call a concurrent (parallel) queue. ⚠️ Note: The number of tasks that can execute simultaneously depends on the current processing capacity of the system.
⚠️ Note: A Dispatch Queue is not a thread, as one might imagine. It is a task Queue: it is responsible only for managing and scheduling tasks, not for executing them.
Creating a Dispatch Queue
Now that you know what a Dispatch Queue is, let’s take a look at how it is created. Let’s start with a piece of code:
dispatch_queue_t aSerialDispatchQueue =
dispatch_queue_create("MySerialDispatchQueue", NULL);
This code gets a Dispatch Queue from the dispatch_queue_create() function.
The first parameter is the name of the Dispatch Queue. It can be set to NULL, but this is not recommended, because Xcode and Instruments use this parameter as the display name when debugging. It is therefore recommended to give each Dispatch Queue a proper name.
Setting the second parameter to NULL (equivalent to DISPATCH_QUEUE_SERIAL) results in a Queue of type Serial Dispatch Queue, like this:
dispatch_queue_t aSerialDispatchQueue =
dispatch_queue_create("MySerialDispatchQueue", DISPATCH_QUEUE_SERIAL);
If we want a Queue of type Concurrent Dispatch Queue, the second parameter is set to DISPATCH_QUEUE_CONCURRENT, like this:
dispatch_queue_t aConcurrentDispatchQueue =
dispatch_queue_create("MyConcurrentDispatchQueue", DISPATCH_QUEUE_CONCURRENT);
The return values are of type dispatch_queue_t.
Validation
Let’s use code to verify that both queues perform as described above.
Verify the Serial Dispatch Queue:
dispatch_queue_t serialQueue
= dispatch_queue_create("queue_1", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{
NSLog(@"task 1 begin");
[NSThread sleepForTimeInterval:3.f];
NSLog(@"task 1 stop");
});
dispatch_async(serialQueue, ^{
NSLog(@"task 2 begin");
[NSThread sleepForTimeInterval:2.f];
NSLog(@"task 2 stop");
});
dispatch_async(serialQueue, ^{
NSLog(@"task 3 begin");
[NSThread sleepForTimeInterval:1.f];
NSLog(@"task 3 stop");
});
Take a look at the output:
Verify the Concurrent Dispatch Queue again:
dispatch_queue_t concurrentQueue
= dispatch_queue_create("queue_2", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(concurrentQueue, ^{
NSLog(@"task 1 begin");
[NSThread sleepForTimeInterval:3.f];
NSLog(@"task 1 stop");
});
dispatch_async(concurrentQueue, ^{
NSLog(@"task 2 begin");
[NSThread sleepForTimeInterval:2.f];
NSLog(@"task 2 stop");
});
dispatch_async(concurrentQueue, ^{
NSLog(@"task 3 begin");
[NSThread sleepForTimeInterval:1.f];
NSLog(@"task 3 stop");
});
Take a look at the output:
Relationships between multiple Dispatch queues
As the verification above shows, the Serial Dispatch Queue executes serially and the Concurrent Dispatch Queue executes in parallel. What if we create multiple Serial Dispatch Queues: are they also executed in sequence? No. They execute concurrently with respect to each other, which means that multiple Dispatch Queues execute in parallel.
What if you want multiple Serial Dispatch queues to still execute serially? More on that later.
Holding and releasing the Dispatch Queue
Since macOS 10.8 and iOS 6.0, GCD objects have been managed by ARC, so there is no need to manually manage the holding and releasing of a Dispatch Queue.
Here are two functions that manage the Dispatch Queue in MRC mode:
dispatch_retain(aSerialDispatchQueue);
dispatch_release(aSerialDispatchQueue);
Dispatch Queue provided by the system
In addition to the manually created Dispatch Queue, we are provided with several ready-made queues, the Main Dispatch Queue and the Global Dispatch Queue:
The Main Dispatch Queue is the Dispatch Queue that executes on the Main thread. Since there is only one Main thread and the tasks on it are executed in sequence, the Main Dispatch Queue is naturally a Serial Dispatch Queue. Tasks appended to the Main Dispatch Queue are executed in the Main thread's RunLoop. Some work, such as interface updates, must also be performed on this thread.
Method of acquisition:
dispatch_queue_t mainDispatchQueue = dispatch_get_main_queue();
The Global Dispatch Queue is a Concurrent Dispatch Queue that all applications can use. It comes in four priorities: High, Default, Low, and Background.
// High priority
dispatch_queue_t globalDispatchQueueHigh
= dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
// Default priority
dispatch_queue_t globalDispatchQueueDefault
= dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
// Low priority
dispatch_queue_t globalDispatchQueueLow
= dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
// Background priority
dispatch_queue_t globalDispatchQueueBackground
= dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
For the Main and Global Dispatch Queues, there is no need to worry about holding and releasing even in MRC mode; calling dispatch_retain() or dispatch_release() on them changes nothing.
Setting a Dispatch Queue's target queue
The GCD dispatch_set_target_queue() function assigns a dispatch_object_t object to a target queue for processing. The dispatch_queue_t objects above are a kind of dispatch_object_t.
First, verify again that separate Serial Dispatch Queues execute in parallel with one another:
dispatch_queue_t queue1
= dispatch_queue_create("queue_1", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queue2
= dispatch_queue_create("queue_2", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queue3
= dispatch_queue_create("queue_3", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue1, ^{
NSLog(@"task 1 begin");
[NSThread sleepForTimeInterval:3.f];
NSLog(@"task 1 stop");
});
dispatch_async(queue2, ^{
NSLog(@"task 2 begin");
[NSThread sleepForTimeInterval:2.f];
NSLog(@"task 2 stop");
});
dispatch_async(queue3, ^{
NSLog(@"task 3 begin");
[NSThread sleepForTimeInterval:1.f];
NSLog(@"task 3 stop");
});
Print result:
As the printed results show, although three Serial Dispatch Queues were created, they execute in parallel with one another.
What would happen if we added the three Serial Dispatch Queue queues we created to a target Queue?
dispatch_queue_t targetQueue
= dispatch_queue_create("target_queue", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queue1 = dispatch_queue_create("queue_1", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queue2 = dispatch_queue_create("queue_2", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queue3 = dispatch_queue_create("queue_3", DISPATCH_QUEUE_SERIAL);
dispatch_set_target_queue(queue1, targetQueue);
dispatch_set_target_queue(queue2, targetQueue);
dispatch_set_target_queue(queue3, targetQueue);
dispatch_async(queue1, ^{
NSLog(@"task 1 begin");
[NSThread sleepForTimeInterval:3.f];
NSLog(@"task 1 stop");
});
dispatch_async(queue2, ^{
NSLog(@"task 2 begin");
[NSThread sleepForTimeInterval:2.f];
NSLog(@"task 2 stop");
});
dispatch_async(queue3, ^{
NSLog(@"task 3 begin");
[NSThread sleepForTimeInterval:1.f];
NSLog(@"task 3 stop");
});
Print result:
As you can see from the output, the three queues attached to the target queue now execute in serial order. Execution follows the nature of the target queue.
If we change targetQueue to a concurrent queue, the three queues should execute in parallel again; I have verified this as well:
Continuing, we now swap targetQueue back to a Serial Dispatch Queue and swap the three attached queues for Concurrent Dispatch Queues, adding two additional tasks to each of them. The tasks in the attached queues are now: 1-1, 1-2, 1-3; 2-1, 2-2, 2-3; 3-1, 3-2, 3-3. Look at the print result again:
The results show that all tasks execute in serial order: the concurrency of the three attached concurrent queues is overridden.
We are not done yet: what if the target Queue were instead a Concurrent Dispatch Queue?
As you can see from the printed results, all tasks are executed in parallel.
From the above tests it can be seen that no matter what kind of queue is attached, the tasks it contains (these tasks are, of course, no longer executed by the original queue) are executed according to the nature of the target queue, and their priority also follows the priority of the target queue.
Dispatch Queues generated by dispatch_queue_create(), whether Serial or Concurrent, use threads of the same priority as the default-priority Global Dispatch Queue; this can be changed using the dispatch_set_target_queue() function.
Here is the official documentation (in English):
Appending a task after a delay
dispatch_queue_t mainDispatchQueue = dispatch_get_main_queue();
dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 3 * NSEC_PER_SEC);
dispatch_after(time, mainDispatchQueue, ^{
// task…
});
The above code appends a task to the queue after a delay of 3 seconds. Note that what happens after 3 seconds is that the task is appended to the queue, not that it is executed at that moment.
The first parameter time is of type dispatch_time_t, which can be obtained from the dispatch_time() or dispatch_walltime() function.
The dispatch_time() function computes the time obtained by adding the interval in the second parameter to the time specified by the first parameter. DISPATCH_TIME_NOW represents the current time, and the result is of type dispatch_time_t. NSEC_PER_SEC is the number of nanoseconds per second, and NSEC_PER_MSEC the number per millisecond. The dispatch_walltime() function computes an absolute (wall-clock) time instead.
Dispatch Group
In practical applications, it is often necessary to execute a specific task after completing some tasks. If you use a Serial Dispatch Queue, you simply add all the tasks to the Queue and append the tasks you want to execute at the end. However, this requirement is more difficult to implement when using queues of the type Concurrent Dispatch Queue or multiple Dispatch queues simultaneously.
This is where the Dispatch Group comes in. Here’s how the Dispatch Group is used:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, queue, ^{ NSLog(@"task 1"); });
dispatch_group_async(group, queue, ^{ NSLog(@"task 2"); });
dispatch_group_async(group, queue, ^{ NSLog(@"task 3"); });
dispatch_group_notify(group, dispatch_get_main_queue(), ^{ NSLog(@"last task to be performed"); });
The group is created using the dispatch_group_create() function of type dispatch_group_t.
The dispatch_group_async() function is the same as the dispatch_async() function, which adds tasks to the queue. The difference is that the first parameter of the dispatch_group_async() function specifies which Dispatch Group the current task belongs to.
The first parameter of dispatch_group_notify() specifies the Dispatch Group to monitor. After all tasks of that Group have finished, the block in the third parameter is appended to the queue in the second parameter.
In addition to being notified when tasks finish, you can also wait on the group. Look at the following code:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, queue, ^{ NSLog(@"task 1"); });
dispatch_group_async(group, queue, ^{ NSLog(@"task 2"); });
dispatch_group_async(group, queue, ^{ NSLog(@"task 3"); });
dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, 10 * NSEC_PER_SEC));
The dispatch_group_wait() function waits for the group's tasks to be processed. The second parameter is the wait deadline, of type dispatch_time_t. The function returns a value of type long: 0 if all tasks completed within the allotted time, nonzero if some tasks are still in progress. If the second parameter is DISPATCH_TIME_FOREVER, the function waits indefinitely until all tasks have completed, so it always returns 0.
So what does waiting really mean here? It means that once dispatch_group_wait() is called, it does not return immediately: the thread that called it stops and waits. When the function returns, that thread continues.
If you set the second parameter to DISPATCH_TIME_NOW, the function returns immediately, so you can check whether all the tasks in the group have finished without waiting at all.
The dispatch_barrier_async() function
In general, multiple tasks can read the same data simultaneously without problems, but data races may occur when multiple tasks write simultaneously. This matters especially for a complex series of reads and writes: using a Serial Dispatch Queue is safe but reduces read efficiency, while using a Concurrent Dispatch Queue risks races between write tasks and, because of concurrent execution, reads and writes happening out of order.
So we use the dispatch_barrier_async() function together with a Concurrent Dispatch Queue to solve this problem.
Take a look at the following code:
dispatch_queue_t queue = dispatch_queue_create("OneConcurrentDispatchQueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, block_mission1_reading);
dispatch_async(queue, block_mission2_reading);
dispatch_async(queue, block_mission3_reading);
dispatch_async(queue, block_mission4_reading);
dispatch_barrier_async(queue, block_mission5_writing);
dispatch_async(queue, block_mission6_reading);
dispatch_async(queue, block_mission7_reading);
dispatch_async(queue, block_mission8_reading);
The dispatch_barrier_async() function waits for block_mission1_reading, block_mission2_reading, block_mission3_reading, and block_mission4_reading to finish before block_mission5_writing runs, and while the writing task runs, no other task in the queue executes. Once the writing task completes, the queue returns to its normal behavior and continues processing the subsequently appended tasks in parallel.
Suspending and resuming a Dispatch Queue
To suspend a Dispatch Queue:
dispatch_suspend(queue);
To resume:
dispatch_resume(queue);
When a Dispatch Queue is suspended, tasks appended to the Queue that have not yet started executing stop being scheduled from that point on, and they resume executing after the queue is resumed. A block that is already executing is not interrupted.
Executing a specified task only once
A task specified via the dispatch_once() function is performed only once, which is useful for things like initializing a singleton.
The normal way we write singletons is:
static NSObject *obj = nil;
@synchronized (self) {
if (obj == nil) {
obj = …
}
}
This is done using the dispatch_once() function:
static NSObject *obj = nil;
static dispatch_once_t pred;
dispatch_once( &pred, ^{
obj = …
});
The dispatch_once() function is guaranteed to be safe even in multithreaded environments.
Difference between dispatch_sync() and dispatch_async()
The dispatch_async() function is asynchronous: it only adds a task to the queue and returns immediately, without caring whether the task has completed.
The dispatch_sync() function, in contrast, is synchronous: it not only adds the task to the queue but also waits for that task to complete before returning. The thread that called dispatch_sync() is blocked until dispatch_sync() returns, at which point the thread resumes.
One important issue with the dispatch_sync() function is deadlock. Why do deadlocks occur? Suppose you have a serial queue, and from within that queue you call dispatch_sync() targeting the same queue: the thread serving the serial queue is blocked by the dispatch_sync() call, yet the task added by dispatch_sync() needs that very thread to run. The call can never return, so the thread stays blocked forever.
The heart of this (admittedly somewhat convoluted) problem is: the thread that calls dispatch_sync() and the thread that must execute the task appended by it are the same thread. (Thread, not queue: on a concurrent queue this may not happen, unless the calling code and the appended task end up assigned to the same thread.) When they are the same thread, a deadlock occurs.
Here are two examples:
dispatch_queue_t mainQueue = dispatch_get_main_queue();
dispatch_sync(mainQueue, ^{ NSLog(@"task"); });
// deadlock
dispatch_queue_t queue = dispatch_queue_create("OneSerialDispatchQueue", NULL);
dispatch_async(queue, ^{
dispatch_sync(queue, ^{ NSLog(@"task"); });
});
// deadlock
The dispatch_apply() function
This function adds multiple tasks to a queue at once and, like dispatch_sync(), is synchronous: it does not return until all the tasks it added have completed. dispatch_apply() appends block tasks that take a parameter, and it passes the iteration index to the block as that argument.
Take a look at the following code:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(8, queue, ^(size_t index){
NSLog(@"This is task %zu", index);
});
NSLog(@"All tasks completed");
Execution Result:
Although the results here happen to print in order, they could just as well print out of order, because a concurrent queue is used. But whatever the order of the first eight tasks, all of them must be processed before the final log statement, which is always executed last.
Understanding the concept of synchronization will help you understand why.
(Reprinted.)