Series: Multithreading (I) - Principles · Multithreading (II) - GCD Basics · Multithreading (III) - GCD Functions · Multithreading (IV) - GCD Timer
Introduction

What is GCD?

GCD stands for Grand Central Dispatch. It is a multi-core programming solution developed by Apple, implemented in pure C, and it provides very powerful functions for performing operations across multiple threads.
The advantages of GCD:

- GCD automatically makes use of available CPU cores (e.g. dual-core, quad-core)
- GCD automatically manages the thread life cycle (thread creation, task scheduling, thread destruction)
- The programmer only needs to tell GCD what to do, without writing any thread-management code
Tasks and queues
Two important concepts have been added to GCD: tasks and queues. Tasks are the actions we want to perform, and queues indicate how multiple actions can be performed. In short, the core of GCD is to add tasks to queues and specify functions to perform them.
Tasks

A task is encapsulated in a block that has no parameters and no return value.
typedef void (^dispatch_block_t)(void);
dispatch_block_t block = ^{
};
There are two ways to execute tasks: synchronously and asynchronously. The main differences between the two are whether the call waits for the queued task to finish, and whether it has the ability to start new threads.
- Synchronous execution (sync):
  - Synchronously adds a task to a specified queue; the call does not return until the added task has finished executing.
  - Can only execute tasks on the current thread; does not have the ability to start new threads.

dispatch_sync(queue, ^{
    // task to execute
});
- Asynchronous execution (async):
  - Asynchronously adds a task to the specified queue; the call returns immediately without waiting, and the current thread continues.
  - Can execute tasks on new threads; has the ability to start new threads.

dispatch_async(queue, ^{
    // task to execute
});
So a synchronous task blocks the current thread and waits for the task in the block to finish before moving on to the next statement, while an asynchronous task lets the current thread continue to the next statement without waiting.
Note that although async has the ability to start a new thread, it does not necessarily do so; that depends on the type of queue the task is dispatched to.
Queues

Queues are used to store tasks. There are two types of queue in GCD: serial queues and concurrent queues.

- Serial queue (Serial Dispatch Queue): only one task is executed at a time; tasks execute one after another. (Only one thread is used; the next task starts after the previous one finishes.)
- Concurrent queue (Concurrent Dispatch Queue): multiple tasks can execute concurrently. (Multiple threads can be started and tasks executed simultaneously.)
Because queues are combined with the two ways of executing tasks, there are four combinations:

- Synchronous function + serial queue:
  - Does not start a new thread; tasks execute on the current thread
  - Tasks execute sequentially, one after another
  - A time-consuming task will block the thread
- Synchronous function + concurrent queue:
  - Does not start a new thread; tasks execute on the current thread
  - Tasks execute one after another
- Asynchronous function + serial queue:
  - Starts one new thread
  - Tasks execute one after another, in order
- Asynchronous function + concurrent queue:
  - Starts new threads to execute tasks
  - Tasks execute asynchronously, in no fixed order, depending on CPU scheduling
The concept of GCD

The steps to use GCD are simple: first create a queue (serial or concurrent), then append a task to it and choose how it is executed (synchronously or asynchronously).
Queue creation
#define DISPATCH_TARGET_QUEUE_DEFAULT NULL
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
return _dispatch_lane_create_with_target(label, attr,
DISPATCH_TARGET_QUEUE_DEFAULT, true);
}
Its parameters are as follows:

- const char *label: a unique identifier for the queue; may be NULL.
- dispatch_queue_attr_t attr: identifies the queue type, serial or concurrent:
  - DISPATCH_QUEUE_SERIAL: serial queue
  - DISPATCH_QUEUE_CONCURRENT: concurrent queue
How to create a serial queue
dispatch_queue_t queue = dispatch_queue_create("serial_queue", DISPATCH_QUEUE_SERIAL);
A commonly used serial queue is the main queue, dispatch_get_main_queue():

- A queue dedicated to scheduling tasks on the main thread, sometimes called the UI queue
- It does not start new threads
- If a task is already executing, a newly added task waits until it finishes

In fact, the main queue is not special. By default, if no other thread is started, the program's code is placed on the main queue, and the main queue is executed on the main thread; this is what makes the main queue appear special.
How to create a concurrent queue
dispatch_queue_t queue = dispatch_queue_create("concurrent_queue", DISPATCH_QUEUE_CONCURRENT);
A commonly used concurrent queue is the global queue, dispatch_get_global_queue(long identifier, unsigned long flags), which can be used to execute asynchronous tasks directly. The first parameter is the queue priority; the default is DISPATCH_QUEUE_PRIORITY_DEFAULT, a macro whose value is 0, so you can also pass 0. The second parameter, flags, is reserved for future use; pass 0 for now.
This introduces the concept of queue priority: the higher the priority, the sooner its tasks tend to be scheduled.

- DISPATCH_QUEUE_PRIORITY_HIGH: 2
- DISPATCH_QUEUE_PRIORITY_DEFAULT: 0
- DISPATCH_QUEUE_PRIORITY_LOW: -2
- DISPATCH_QUEUE_PRIORITY_BACKGROUND: INT16_MIN
So how is a queue actually created? First we create a few queues and print them on the console; then we trace into the libdispatch source to see how they were created. Whether we create a concurrent queue or a serial queue, we end up in the following code:
DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
		dispatch_queue_t tq, bool legacy)
{
	// DISPATCH_TARGET_QUEUE_DEFAULT == NULL
	// Serial queue: dqa == NULL, so dqai is an empty struct;
	// concurrent queue: dqai.dqai_concurrent is set
	dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
	_dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
	// ...
	if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
		// Assign the overcommit value:
		// concurrent queues get _dispatch_queue_attr_overcommit_disabled,
		// serial queues get _dispatch_queue_attr_overcommit_enabled
		overcommit = dqai.dqai_concurrent ?
				_dispatch_queue_attr_overcommit_disabled :
				_dispatch_queue_attr_overcommit_enabled;
	}
	if (!tq) {
		// Set tq. DISPATCH_QOS_UNSPECIFIED == 0, DISPATCH_QOS_DEFAULT == 4.
		// qos is 0 at creation, so the first argument is always 4;
		// the second is true for serial queues and false for concurrent queues
		tq = _dispatch_get_root_queue(
				qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
				overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
		if (unlikely(!tq)) {
			DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
		}
	}
	// Allocate the queue object
	dispatch_lane_t dq = _dispatch_object_alloc(vtable,
			sizeof(struct dispatch_lane_s));
	// Constructor: width is DISPATCH_QUEUE_WIDTH_MAX for concurrent queues, 1 for serial
	_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
			DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
			(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
	dq->dq_label = label;
	dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
			dqai.dqai_relpri);
	if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
		dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
	}
	if (!dqai.dqai_inactive) {
		_dispatch_queue_priority_inherit_from_target(dq, tq);
		_dispatch_lane_inherit_wlh_from_target(dq, tq);
	}
	_dispatch_retain(tq);
	// Assign tq to the target queue
	dq->do_targetq = tq;
	_dispatch_object_debug(dq, "%s", __func__);
	return _dispatch_trace_queue_create(dq)._dq;
}

// Builds dqai: serial queues pass NULL and get back an empty struct
dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
	dispatch_queue_attr_info_t dqai = { };
	if (!dqa) return dqai;
	// ... fill in dqai for concurrent queues ...
	return dqai;
}

// qos is always 4 here; overcommit is true for serial queues, false for concurrent
DISPATCH_ALWAYS_INLINE DISPATCH_CONST
static inline dispatch_queue_global_t
_dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit)
{
	// 2 * (4 - 1) + 0/1 = index 6 or 7 into _dispatch_root_queues
	return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
}

static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
		uint16_t width, uint64_t initial_state_bits)
{
	uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
	dispatch_queue_t dq = dqu._dq;
	dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
			DISPATCH_QUEUE_INACTIVE)) == 0);
	if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
		dq_state |= DISPATCH_QUEUE_INACTIVE + DISPATCH_QUEUE_NEEDS_ACTIVATION;
		dq->do_ref_cnt += 2; // rdar://8181908 see _dispatch_lane_resume
		if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
			dq->do_ref_cnt++; // released when DSF_DELETED is set
		}
	}
	dq_state |= (initial_state_bits & DISPATCH_QUEUE_ROLE_MASK);
	dq->do_next = DISPATCH_OBJECT_LISTLESS;
	dqf |= DQF_WIDTH(width);
	os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
	dq->dq_state = dq_state;
	dq->dq_serialnum = os_atomic_inc_orig(&_dispatch_queue_serial_numbers,
			relaxed);
	return dqu;
}
According to the code, the data related to the concurrent queue and serial queue can be obtained as follows:
- Concurrent queue
_DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT, DISPATCH_PRIORITY_FLAG_FALLBACK,
.dq_label = "com.apple.root.default-qos",
.dq_serialnum = 10,
)
- Serial queues
_DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT,
DISPATCH_PRIORITY_FLAG_FALLBACK | DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
.dq_label = "com.apple.root.default-qos.overcommit",
.dq_serialnum = 11,
)
This matches the printed result exactly, confirming the creation path. Similarly, from the printed data we can find the main queue's definition:
struct dispatch_queue_static_s _dispatch_main_q = {
	DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
	.do_targetq = _dispatch_get_default_queue(true),
#endif
	.dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
			DISPATCH_QUEUE_ROLE_BASE_ANON,
	.dq_label = "com.apple.main-thread",
	.dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
	.dq_serialnum = 1,
};
Having looked at the information about GCD queues, let’s look at the use of queues and tasks together.
The use of GCD
The use of GCD is mainly the cooperation between tasks and queues. We know that the execution of tasks can be divided into synchronous and asynchronous, and queues can be divided into serial and concurrent.
1. Synchronous function + serial queue:

Running synchronous tasks on a serial queue shows that:

- By default, all tasks execute on the main thread
- A synchronous task does not have the ability to start a new thread and can only execute on the current thread
- Tasks in a serial queue execute sequentially, one after another
- A time-consuming task will block the thread
Since the main queue is also a serial queue, let's see how the synchronous function behaves with the main queue:
The program crashes at the first synchronous task because a deadlock occurs there. We call the syncTaskMainQueue method on the main thread, which means syncTaskMainQueue itself is running as a task on the main queue. When we then synchronously append the "print 1" task to the main queue, "print 1" must wait for syncTaskMainQueue to finish (the main queue is serial), while syncTaskMainQueue waits for "print 1" to return before it can continue. Each waits on the other, which is a deadlock.
Will syncTaskMainQueue deadlock if we execute it on another thread? Call the following method:
[NSThread detachNewThreadSelector:@selector(syncTaskMainQueue) toTarget:self withObject:nil];
This time the program runs normally, because syncTaskMainQueue executes on another thread while the print tasks run on the main thread; the main thread is free to drain its own queue.
2. Synchronous function + concurrent queue

- By default, all tasks execute on the main thread
- Although the queue is concurrent, a synchronous task does not have the ability to start a new thread, so it still executes on the current thread
- Tasks execute sequentially, one after another
- A time-consuming task will block the thread

In summary: a synchronous task never starts a new thread, whether the queue is serial or concurrent. Tasks execute one after another on the current thread, and a time-consuming task will block it.
3. Asynchronous function + serial queue

- One new thread is started
- Tasks execute one after another, in order
4. Asynchronous function + concurrent queue

- New threads are started to execute the tasks
- Tasks execute asynchronously, in no fixed order, depending on CPU scheduling
In summary: an asynchronous task on a serial queue starts exactly one new thread, and the tasks execute on it sequentially (a time-consuming task can still delay the ones behind it). On a concurrent queue, multiple new threads may be started, and the tasks execute asynchronously in no particular order, depending on CPU scheduling.
Conclusion
The core of GCD is to add tasks to queues and specify functions to perform them. Tasks are divided into synchronous tasks and asynchronous tasks, and queues are divided into serial queues and concurrent queues. The primary queue dispatch_get_main_queue() is a common serial queue, and the global queue dispatch_get_global_queue(0, 0) is a common concurrent queue.
The cooperative use of tasks and queues yields four combinations: synchronous function + serial queue, synchronous function + concurrent queue, asynchronous function + serial queue, and asynchronous function + concurrent queue.
When synchronous tasks are executed, no new thread is started, whether the queue is serial or concurrent; tasks execute one after another on the current thread, and a time-consuming task will block it. Note that adding a synchronous task to the main queue from the main thread causes a deadlock.
When asynchronous tasks are executed on a serial queue, only one new thread is started, and the tasks execute on it sequentially (a time-consuming task can still delay the ones behind it). On a concurrent queue, multiple new threads may be started, and the tasks execute asynchronously, in no particular order, depending on CPU scheduling.