Welcome to the iOS Basics series (suggested in order)

iOS Low-level – Alloc and init exploration

iOS Low-level – isa for everything

iOS Low-level – Analysis of the nature of classes

iOS Low-level – cache_t process analysis

iOS Low-level – Method lookup process analysis

iOS Low-level – Message forwarding process analysis

iOS Low-level – How dyld loads an app

iOS Low-level – Class load analysis

iOS Low-level – Category load analysis

iOS Exploration – Principles related to multithreading

In this article

This article lists the GCD APIs commonly used in daily development and explains their usage scenarios and the details worth attention.

1. Multi-threaded GCD

1.1 Why GCD?

There are many ways to do multithreaded programming: pthread, NSThread, GCD, and NSOperation. Why choose GCD?

Approach | Features | Language | Thread lifecycle
pthread | Generic API, cross-platform, hard to use | C | Managed by the programmer
NSThread | Object-oriented, easy to use, can operate threads directly | Objective-C | Managed by the programmer
GCD | Takes full advantage of multi-core devices | C | Managed automatically
NSOperation | Object-oriented, based on GCD, more features | Objective-C | Managed automatically

GCD is Apple's solution for multi-core parallel computing. It is a pure C API that automatically makes use of multiple CPU cores and automatically manages the life cycle of threads (creating threads, scheduling tasks, and destroying threads).

Managing the life cycle of threads is a tricky and error-prone task and should be avoided whenever possible. Therefore, GCD and GCD-based NSOperation are the first choice for multi-threading in daily development. Programmers only need to tell GCD what tasks they want to perform without writing any thread management code.

GCD vs. NSOperation:

NSOperation is a higher-level abstraction over threads. Subclassing NSOperation brings the advantages of object orientation: special requirements can be met through customization, multithreading support is wrapped behind a simple interface, and the project's program structure stays cleaner. It is recommended for complex projects.

GCD itself is simple and easy to use. For uncomplicated multithreaded operations it saves code, and block parameters make the code more readable, so it is recommended for simple projects.

1.2 How to Use GCD

In a nutshell: add the task to a queue and specify the function that executes the task. This is the core idea of using GCD.

This sentence involves three concepts: tasks, queues, and functions:

  • Task: the code to be executed, i.e. the part inside a block
  • Queue: the queue used to execute tasks, that is, the queue that stores tasks
  • Function: determines whether the task can be executed on a new thread, i.e. whether there is the ability to start a new thread

1.3 Queues and Functions

Serial queue:

  • No new threads are opened, and only one task is executed at a time, one task after another.

Concurrent queue (Concurrent):

  • Multiple threads can be opened to allow multiple tasks to execute concurrently (simultaneously).

Main queue:

  • The queue whose tasks are executed on the main thread; UI tasks are added to the main queue by default
  • Essentially a serial queue

Global queue:

  • A queue provided by the system; it can be borrowed for asynchronous tasks when there are no special requirements on the queue
  • Essentially a concurrent queue

Sync function:

  • Synchronous: after the task is added to the specified queue, the function waits until the task has finished before returning
  • Executes tasks only on the current thread; has no ability to start a new thread

Async function:

  • Asynchronous: after the task is added to the specified queue, execution continues immediately without waiting for the task
  • Can execute tasks on a new thread; has the ability to start new threads

Different combinations of queues and functions behave differently with respect to threads.

1.4 Combination of queues and functions

Global queues behave like concurrent queues and need no separate analysis, so there are the following six combinations:

1. Synchronous function + serial queue
2. Synchronous function + concurrent queue
3. Asynchronous function + serial queue
4. Asynchronous function + concurrent queue
5. Synchronous function + main queue
6. Asynchronous function + main queue

The effects of the different queue and function combinations when called from the main thread:

Function \ Queue | Serial queue | Concurrent queue | Main queue
Synchronous function | No thread started; tasks execute sequentially | No thread started; tasks execute sequentially | Deadlock
Asynchronous function | Starts one thread; tasks execute serially | Starts threads; tasks execute concurrently | No thread started; tasks execute sequentially

  • A synchronous function does not start a thread and executes tasks serially
  • An asynchronous function may not start a thread either; see asynchronous + main queue
  • On the main thread, synchronous + main queue produces a deadlock: the synchronous task appended to the main queue and the main thread itself wait for each other, blocking the main queue
  • Synchronous + main queue does not always deadlock, e.g. when called from a child thread
  • Deadlock is not limited to the main queue; other serial queues can deadlock too

These points will be verified later in the code section.

Synchronous + concurrent and asynchronous + serial are probably used less often and are relatively hard to understand, so they deserve elaboration:

Synchronous + concurrent executes tasks sequentially and starts no thread. Although a concurrent queue can run multiple tasks at once, the synchronous function has no ability to start a thread, so no new threads are created and tasks can only run on the current thread. Each subsequent task must wait until the tasks already in the queue complete. So there is no real concurrency: tasks execute one after another, not simultaneously.

Asynchronous + serial executes tasks sequentially and starts one thread. The asynchronous function can start new threads, but a serial queue executes only one task at a time, one after another, so only a single thread is started.

In a nutshell:

Synchronous + concurrent starts no new thread, so tasks run on the same thread and therefore execute in sequence.

Asynchronous + serial starts one thread, but because the queue is serial, tasks still execute in sequence.

Although both combinations execute tasks in order, the causes differ: for synchronous + concurrent it is the synchronous function; for asynchronous + serial it is the serial queue.

2. API usage instructions

2.1 dispatch_block_t

① dispatch_block_t creates the task
② dispatch_queue_t adds the task to a queue
③ dispatch_async specifies the function that executes the task

These three steps best illustrate the idea of adding a task to a queue and specifying a function to execute it.

In practice, however, the three steps are usually merged:

dispatch_async(dispatch_queue_create("com.juejin.cn", NULL), ^{
    NSLog(@"hello world");
});

2.2 dispatch_queue_t

  • Main queue: obtained with dispatch_get_main_queue(). Synchronizing to the main queue on the main thread causes a deadlock
  • Global queue: obtained with dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0). The first argument is the priority and defaults to DISPATCH_QUEUE_PRIORITY_DEFAULT; the priorities are as follows:
#define DISPATCH_QUEUE_PRIORITY_HIGH 2
#define DISPATCH_QUEUE_PRIORITY_DEFAULT 0
#define DISPATCH_QUEUE_PRIORITY_LOW (-2)
#define DISPATCH_QUEUE_PRIORITY_BACKGROUND INT16_MIN
  • Serial queue: created with dispatch_queue_create("...", DISPATCH_QUEUE_SERIAL); dispatch_queue_create("...", NULL) also works, because:
#define DISPATCH_QUEUE_SERIAL NULL
  • Concurrent queue: created with dispatch_queue_create("...", DISPATCH_QUEUE_CONCURRENT)

2.3 dispatch_sync and dispatch_async

① Synchronization + serial:

dispatch_queue_t queue = dispatch_queue_create("juejin", DISPATCH_QUEUE_SERIAL);
for (int i = 0; i < 20; i++) {
    dispatch_sync(queue, ^{
        NSLog(@"%d-%@", i, [NSThread currentThread]);
    });
}
------------------------------------------------------------
2020-12-09 16:01:41.064806+0800 001--Functions and queues[26946:1461774] 0-<NSThread: 0x2801ef080>{number = 1, name = main}
2020-12-09 16:01:41.064905+0800 001--Functions and queues[26946:1461774] 1-<NSThread: 0x2801ef080>{number = 1, name = main}
2020-12-09 16:01:41.064951+0800 001--Functions and queues[26946:1461774] 2-<NSThread: 0x2801ef080>{number = 1, name = main}
...
------------------------------------------------------------
No thread is started; tasks execute sequentially.

② Synchronization + concurrency:

dispatch_queue_t queue = dispatch_queue_create("juejin", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 20; i++) {
    dispatch_sync(queue, ^{
        NSLog(@"%d-%@", i, [NSThread currentThread]);
    });
}
------------------------------------------------------------
2020-12-09 16:06:04.335006+0800 001--Functions and queues[27053:1463520] 0-<NSThread: 0x283627040>{number = 1, name = main}
2020-12-09 16:06:04.335107+0800 001--Functions and queues[27053:1463520] 1-<NSThread: 0x283627040>{number = 1, name = main}
2020-12-09 16:06:04.335153+0800 001--Functions and queues[27053:1463520] 2-<NSThread: 0x283627040>{number = 1, name = main}
...
------------------------------------------------------------
No thread is started; tasks execute sequentially.

③ Asynchronous + serial:

dispatch_queue_t queue = dispatch_queue_create("juejin", DISPATCH_QUEUE_SERIAL);
for (int i = 0; i < 20; i++) {
    dispatch_async(queue, ^{
        NSLog(@"%d-%@", i, [NSThread currentThread]);
    });
}
------------------------------------------------------------
2020-12-09 16:09:56.688148+0800 [27187:1465347] 0-<NSThread: 0x282435900>{number = 6, name = (null)}
2020-12-09 16:09:56.688234+0800 [27187:1465347] 1-<NSThread: 0x282435900>{number = 6, name = (null)}
2020-12-09 16:09:56.688281+0800 [27187:1465347] 2-<NSThread: 0x282435900>{number = 6, name = (null)}
...
------------------------------------------------------------
One thread is started; tasks execute sequentially.

④ Asynchronous + concurrent:

dispatch_queue_t queue = dispatch_queue_create("juejin", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 20; i++) {
    dispatch_async(queue, ^{
        NSLog(@"%d-%@", i, [NSThread currentThread]);
    });
}
------------------------------------------------------------
2020-12-09 16:12:47.086771+0800 001--Functions and queues[27221:1466848] 0-<NSThread: 0x2827d8b40>{number = 6, name = (null)}
2020-12-09 16:12:47.086875+0800 001--Functions and queues[27221:1466848] 1-<NSThread: 0x2827d8b40>{number = 6, name = (null)}
2020-12-09 16:12:47.086931+0800 001--Functions and queues[27221:1466848] 2-<NSThread: 0x2827f74c0>{number = 4, name = (null)}
...
------------------------------------------------------------
Multiple threads are started; tasks execute out of order.

⑤ Synchronization + main queue:

On the main thread, synchronizing to the main queue is verified to produce a deadlock.


dispatch_async(dispatch_get_global_queue(0, 0), ^{
    for (int i = 0; i < 20; i++) {
        dispatch_sync(dispatch_get_main_queue(), ^{
            NSLog(@"%d-%@", i, [NSThread currentThread]);
        });
    }
});
------------------------------------------------------------
2020-12-09 16:18:29.902827+0800 001--Functions and queues[27272:1468868] 0-<NSThread: 0x282d0a200>{number = 1, name = main}
2020-12-09 16:18:29.912577+0800 001--Functions and queues[27272:1468868] 1-<NSThread: 0x282d0a200>{number = 1, name = main}
2020-12-09 16:18:29.913928+0800 001--Functions and queues[27272:1468868] 2-<NSThread: 0x282d0a200>{number = 1, name = main}
...
------------------------------------------------------------
No thread is started; tasks execute sequentially.

In child threads, it is verified that synchronization of the main queue does not result in deadlock.

⑥ Asynchronous + main queue:

for (int i = 0; i < 20; i++) {
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"%d-%@", i, [NSThread currentThread]);
    });
}
------------------------------------------------------------
2020-12-09 16:31:28.193584+0800 [27289:1470863] 0-<NSThread: 0x2818aaec0>{number = 1, name = main}
2020-12-09 16:31:28.193679+0800 [27289:1470863] 1-<NSThread: 0x2818aaec0>{number = 1, name = main}
2020-12-09 16:31:28.193725+0800 [27289:1470863] 2-<NSThread: 0x2818aaec0>{number = 1, name = main}
...
------------------------------------------------------------
No thread is started; tasks execute sequentially.

The main queue tasks are executed on the main thread, so the asynchronous function does not necessarily start the thread.

⑦ Synchronous serial + synchronous serial:


Whether on the main thread or a child thread, appending a synchronous task to the same serial queue that is already executing the current block causes a deadlock. The newly appended task and the task already running in the serial queue wait for each other.

Deadlocks are therefore not confined to the main queue; other serial queues can deadlock as well.
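
This point can be sketched in a few lines (the queue label and log strings are illustrative, not from the original):

```objectivec
dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

dispatch_async(serial, ^{
    NSLog(@"outer task started");
    // The outer block occupies the serial queue; the sync task below is
    // appended behind it and can never start, so both wait forever.
    dispatch_sync(serial, ^{
        NSLog(@"never reached");
    });
    NSLog(@"never reached either");
});
```

Replacing `serial` with a concurrent queue removes the deadlock, because the inner task does not have to wait for the outer block to finish.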

2.4 dispatch_barrier_async and dispatch_barrier_sync

Like the synchronous function, the barrier function can achieve synchronization.

This synchronizing effect means the barrier function is usually used with asynchronous + concurrent.

The following structure is available when dispatch_barrier_async is used:

dispatch_async(concurrentQueue, ^{ /* child-thread task A */ });
dispatch_barrier_async(concurrentQueue, ^{ });
/* main-thread task A */
dispatch_async(concurrentQueue, ^{ /* child-thread task B */ });

Output order: main-thread task A -> child-thread task A -> child-thread task B

The following structure is available when dispatch_barrier_sync is used:

dispatch_async(concurrentQueue, ^{ /* child-thread task A */ });
dispatch_barrier_sync(concurrentQueue, ^{ });
/* main-thread task A */
dispatch_async(concurrentQueue, ^{ /* child-thread task B */ });

Output order: child-thread task A -> main-thread task A -> child-thread task B

According to the results, the following conclusions can be drawn:

  • Against the same queue, the barrier controls the order of task execution, acting as synchronization
  • dispatch_barrier_sync blocks both the queue and the current thread; dispatch_barrier_async blocks only the queue

Attention to detail:

  • The asynchronous barrier function blocks the queue, and it must be a custom queue; someone else's queue cannot be blocked this way. For example, trying to block AFNetworking's network requests often fails, because its internal queue is created by the framework itself; you would need to obtain that queue to block it
  • The global queue behaves like a concurrent queue, but using a barrier function to block the global queue while it performs time-consuming tasks can crash the program. The global queue is used by the system: many system tasks execute on it while the app runs, and blocking it can have many unknown consequences

To summarize the characteristics of the barrier function:

  • Synchronization
  • Blocks the queue
  • Optionally blocks the thread (the sync variant does, the async variant does not)
  • Requires a custom queue
  • Must not be used on the global queue
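
As a hedged sketch of the typical "multi-read, single-write" use of the barrier function (the class and queue names are made up for illustration):

```objectivec
@interface SafeStore : NSObject
@end

@implementation SafeStore {
    NSMutableDictionary *_dict;
    dispatch_queue_t _queue; // custom concurrent queue -- not the global queue
}

- (instancetype)init {
    if (self = [super init]) {
        _dict = [NSMutableDictionary dictionary];
        _queue = dispatch_queue_create("com.example.store", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

// Reads run concurrently with each other on the concurrent queue.
- (id)objectForKey:(NSString *)key {
    __block id obj;
    dispatch_sync(_queue, ^{ obj = self->_dict[key]; });
    return obj;
}

// The barrier waits for in-flight reads, runs alone, then lets reads resume.
- (void)setObject:(id)obj forKey:(NSString *)key {
    dispatch_barrier_async(_queue, ^{ self->_dict[key] = obj; });
}
@end
```

Because the queue is created by this class, the barrier is guaranteed to actually partition the tasks on it, which is exactly the "custom queue" requirement listed above.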

2.5 dispatch_group_t

dispatch_group_t represents a group of block objects submitted to queues for asynchronous invocation. The group internally maintains a count of outstanding associated tasks, updated when tasks are associated and again when they complete (in the libdispatch source, entering decrements the value and leaving increments it). This count lets the application determine when all tasks associated with the group have completed, so related tasks can be executed as a group and the completion of the whole group can be monitored and handled.

In daily development, controlling the order of network requests with the barrier function is cumbersome; using dispatch_group_t is both feasible and common.

The dispatch_group_t APIs include:

dispatch_group_create:

  • Create a new scheduling group to which block objects can be assigned.

dispatch_group_async:

void dispatch_group_async(dispatch_group_t group,
                          dispatch_queue_t queue,
                          dispatch_block_t block);
  • Asynchronously schedules a block for execution on the specified queue and simultaneously associates it with the specified scheduling group

dispatch_group_enter:

void dispatch_group_enter(dispatch_group_t group);
  • Marks a block as having entered the group, increasing the number of outstanding associated tasks by one (in the libdispatch source the internal value is decremented, i.e. -1)

dispatch_group_leave:

void dispatch_group_leave(dispatch_group_t group);
  • The counterpart of dispatch_group_enter: marks a block in the group as finished, decreasing the number of outstanding tasks by one (in the source the internal value is incremented, i.e. +1). When the count reaches zero, dispatch_group_wait unblocks and the dispatch_group_notify block executes

dispatch_group_wait:

long dispatch_group_wait(dispatch_group_t group, dispatch_time_t timeout);
  • Blocks the current thread and waits for the group to complete. Returns 0 if the group completes in time, or non-zero if the timeout expires; either way the current thread is then unblocked

dispatch_group_notify:

void dispatch_group_notify(dispatch_group_t group,dispatch_queue_t queue, dispatch_block_t block);
  • Similar to dispatch_group_wait in that it responds when the group finishes executing, but it does not block the thread

Attention to detail:

  • dispatch_group_async controls the task count automatically; dispatch_group_enter and dispatch_group_leave control it manually. The former is easier to use; the latter pair is better suited to complex situations.

  • dispatch_group_enter and dispatch_group_leave must appear in pairs: however many times you enter, you must leave that many times. If enters outnumber leaves, notify never executes; if leaves outnumber enters, the program crashes.

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_queue_t queue1 = dispatch_queue_create("com.juejin.cn", DISPATCH_QUEUE_CONCURRENT);

dispatch_group_async(group, queue, ^{
    NSLog(@"first task");
});
dispatch_group_async(group, queue, ^{
    dispatch_async(queue1, ^{
        NSLog(@"second task");
    });
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"both tasks completed");
});
----------------------------------------------------------------
Expected order: first task -> second task -> both tasks completed
----------------------------------------------------------------
Actual result:
2021-02-04 09:28:22.897517+0800 005--GCD advanced[11694:2915771] first task
2021-02-04 09:28:22.897753+0800 005--GCD advanced[11694:2915771] both tasks completed
2021-02-04 09:28:22.897755+0800 005--GCD advanced[11694:2915769] second task

Special attention:

  • When a task inside the group itself dispatches work asynchronously to a child thread, dispatch_group_notify may execute while that work is still incomplete; the count then needs to be controlled manually to prevent errors
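
A hedged sketch of the manual fix for this situation, using enter/leave to cover the nested asynchronous work (the log strings are illustrative):

```objectivec
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

dispatch_group_async(group, queue, ^{
    NSLog(@"first task");
});

dispatch_group_enter(group);         // count the nested task by hand
dispatch_async(queue, ^{
    NSLog(@"second task");
    dispatch_group_leave(group);     // balance the enter only when it truly finishes
});

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"both tasks completed");  // now guaranteed to run after both tasks
});
```

Because the leave happens inside the nested block, the group's count stays non-zero until the second task has actually run, so notify can no longer fire early.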

2.6 dispatch_once_t

static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{

});

The block passed to dispatch_once executes only once (in principle) during the entire run of the app. onceToken defaults to 0 and becomes -1 after the block has executed.

Attention to detail:

  • dispatch_once_t is not irrevocably once-only: if the block needs to execute again, reset onceToken to 0

Usage Scenarios:

  • Creating a singleton, which must be created only once
  • Method Swizzling "dark magic", to prevent methods from being swapped back again
  • UI that is constructed only once
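
The singleton scenario can be sketched like this (the NetworkManager class name is an assumption for illustration):

```objectivec
@implementation NetworkManager

+ (instancetype)sharedInstance {
    static NetworkManager *instance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Runs exactly once for the lifetime of the app, even if many
        // threads call sharedInstance simultaneously.
        instance = [[NetworkManager alloc] init];
    });
    return instance;
}

@end
```

Every caller gets the same object, and the thread safety of the first initialization is handled entirely by dispatch_once.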

2.7 dispatch_after

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    // delayed task
});

This function waits until the specified time and then asynchronously adds the task to the specified queue.

Usage Scenarios:

  • Popover that disappears automatically
  • Prevent the button from being clicked repeatedly
  • Other scenarios that require delayed execution
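
For example, an auto-dismissing popover might be sketched as follows (the tipView property is an assumption for illustration):

```objectivec
// Show a tip view, then remove it on the main queue 2 seconds later.
[self.view addSubview:self.tipView];
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [self.tipView removeFromSuperview];
});
```

Note that dispatch_after only enqueues the block after the delay; the exact execution moment still depends on how busy the target queue is.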

2.8 dispatch_semaphore_t

dispatch_semaphore_create:

dispatch_semaphore_t dispatch_semaphore_create(long value);
  • Creates a semaphore of type dispatch_semaphore_t. The initial value must be greater than or equal to 0

dispatch_semaphore_wait:

long dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout);

This function decrements the semaphore by one. If the resulting value is less than 0, it waits, blocking the current thread, until a signal arrives or the timeout expires.

dispatch_semaphore_signal:

long dispatch_semaphore_signal(dispatch_semaphore_t dsema)

This function increments the semaphore by one, waking a waiting thread if any is blocked.

Attention to detail:

  • dispatch_semaphore_wait and dispatch_semaphore_signal come in pairs
  • When creating the semaphore, a value greater than 1 can be used to control the maximum concurrency
  • dispatch_semaphore_t can also serve as a lock

Usage Scenarios:

  • Synchronization
  • Locking for thread safety
  • Controlling the maximum number of concurrent tasks
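
Controlling the maximum concurrency can be sketched as follows (the queue choice and task count are illustrative):

```objectivec
// Allow at most 2 of the 6 tasks to run at the same time.
dispatch_semaphore_t sema = dispatch_semaphore_create(2);
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

for (int i = 0; i < 6; i++) {
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER); // -1; blocks when value < 0
        NSLog(@"task %d running", i);
        [NSThread sleepForTimeInterval:1];                    // simulated work
        dispatch_semaphore_signal(sema);                      // +1; wakes one waiter
    });
}
```

With an initial value of 1 instead of 2, the same pattern degenerates into a mutual-exclusion lock.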

2.9 dispatch_source_t

An object that coordinates the processing of specific underlying system events, such as file system events, timers, and UNIX signals.

One of its functions, dispatch_source_merge_data, can be called from any thread, and doing so causes the source's event handler to execute. This kind of event is called a Custom Event, a user event, one of the event types a dispatch source supports.

dispatch_source_create:

dispatch_source_t
dispatch_source_create(dispatch_source_type_t type,
	uintptr_t handle,
	unsigned long mask,
	dispatch_queue_t _Nullable queue)
  • Creates a new dispatch source to monitor underlying system objects and automatically submits a handler block to the dispatch queue in response to events

dispatch_source_set_event_handler:

void dispatch_source_set_event_handler(dispatch_source_t source,
	dispatch_block_t _Nullable handler)
  • Sets the event handler (callback) for the dispatch source

dispatch_source_merge_data:

void dispatch_source_merge_data(dispatch_source_t source, unsigned long value)
  • Merges data into the dispatch source and submits its event handler block to its target queue. Use this function to indicate that an application-defined event has occurred on a dispatch source of type DISPATCH_SOURCE_TYPE_DATA_ADD or DISPATCH_SOURCE_TYPE_DATA_OR.

dispatch_source_get_data:

unsigned long dispatch_source_get_data(dispatch_source_t source)
  • Gets the pending event data of the dispatch source

dispatch_resume:

void dispatch_resume(dispatch_object_t object)
  • Resumes the dispatch source

dispatch_suspend:

void dispatch_suspend(dispatch_object_t object)
  • Suspends the dispatch source

Usage Scenarios:

  • Source-based timer
  • Listen for changes in data
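
The source-based timer scenario can be sketched like this (keeping a strong reference, here an assumed self.timer property, is required, or the source is released and never fires):

```objectivec
// A GCD timer does not depend on the runloop, unlike NSTimer.
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);

// Fire every second, starting now, with 0.1 s of allowed leeway.
dispatch_source_set_timer(timer,
                          dispatch_time(DISPATCH_TIME_NOW, 0),
                          1 * NSEC_PER_SEC,
                          NSEC_PER_SEC / 10);
dispatch_source_set_event_handler(timer, ^{
    NSLog(@"tick");
});
dispatch_resume(timer);  // sources are created suspended and must be resumed
self.timer = timer;      // keep the source alive
```

To stop the timer, call dispatch_source_cancel and release the reference; suspending it with dispatch_suspend merely pauses delivery.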

3. Write in the back

There are many more GCD APIs; the list above covers only the common ones. When working with an API, knowing its underlying implementation makes it much easier to use.

The next chapter is the underlying principles of multithreading. Stay tuned.