This section focuses on GCD, the most frequently used tool in multithreading:
- GCD introduction
- The four combinations of functions and queues (synchronous, asynchronous, serial, parallel)
- The performance cost of dispatching
- Interview questions
- Thread resource sharing
- Barrier function
- Dispatch group
- The GCD singleton
- Semaphore
1 GCD introduction
GCD, which stands for Grand Central Dispatch, is developed purely in C and provides many powerful functions.
- Advantages:
- GCD is Apple's solution for multi-core parallel computing.
- GCD automatically makes use of the available CPU cores.
- GCD automatically manages the life cycle of threads (creating threads, scheduling tasks, destroying threads).
- The programmer only tells GCD what to do and does not need to write any thread-management code (no scheduling, no destruction).
- Core: Adds a task to a queue and specifies the function to execute the task
This brings up three things: the task, the queue, and the function that executes the task. Let's analyze them one by one.
// Task (block)
dispatch_block_t block = ^{ NSLog(@"hello GCD"); };
// Queue
dispatch_queue_t queue = dispatch_queue_create("syncTest", DISPATCH_QUEUE_SERIAL);
// Function to execute the task
dispatch_async(queue, block);
1.1 the task
A GCD task is wrapped in a block, which takes no parameters and returns no value. Once created, a task is added to a queue by the function that will execute it, where it waits to run.
1.2 the queue
GCD queues include serial queues and parallel queues. Serial queue: Only one task is allowed to execute at a time. Parallel queue: Allows multiple tasks to be executed at the same time.
1.3 Functions for performing tasks
Tasks are performed by synchronous functions and asynchronous functions:
- dispatch_sync (synchronous function):
- The next statement is executed only after the current statement has completed
- The block task is executed on the current thread; no new thread is started
- dispatch_async (asynchronous function):
- The next statement can be executed without waiting for the current statement to complete
- A thread can be started to execute the block task
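A minimal sketch of the difference (the queue name "demo" is arbitrary): dispatch_sync returns only after its block has run, while dispatch_async returns immediately:
dispatch_queue_t queue = dispatch_queue_create("demo", DISPATCH_QUEUE_CONCURRENT);

dispatch_sync(queue, ^{ NSLog(@"sync task"); });   // runs before the next line
NSLog(@"after sync");                              // always printed after "sync task"

dispatch_async(queue, ^{ NSLog(@"async task"); }); // returns immediately
NSLog(@"after async");                             // usually printed before "async task"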
2 The four combinations of functions and queues (synchronous, asynchronous, serial, parallel)
- Main queue dispatch_get_main_queue:
- A serial queue dedicated to scheduling tasks on the main thread
- Does not start a new thread
- If the main thread is currently executing a task, other tasks are scheduled only after the current task completes
- Global concurrent queue dispatch_get_global_queue:
- To make things easier for programmers, Apple provides a global queue (a concurrent queue, a shortcut for common multithreading needs)
- In multithreaded development, if there is no special requirement on the queue, the global queue can be used directly to execute asynchronous tasks (see the sketch below)
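A minimal sketch of the common pattern (the URL and the work inside the blocks are placeholders): do time-consuming work asynchronously on the global queue, then dispatch back to the main queue to update the UI:
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    // Time-consuming work on a background thread (placeholder)
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"https://example.com"]];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main thread; update the UI here (placeholder)
        NSLog(@"loaded %lu bytes", (unsigned long)data.length);
    });
});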
Q & A:
Q: How many kinds of queues are there?
// Serial queue
dispatch_queue_t serial = dispatch_queue_create("ypy", DISPATCH_QUEUE_SERIAL);
// Concurrent queue
dispatch_queue_t concurrent = dispatch_queue_create("ypy", DISPATCH_QUEUE_CONCURRENT);
// Main queue (a serial queue)
dispatch_queue_t mainQueue = dispatch_get_main_queue();
// Global queue (a concurrent queue)
dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
NSLog(@"\n%@ \n%@ \n%@ \n%@", serial, concurrent, mainQueue, globalQueue);
A: Fundamentally there are only two kinds: serial queues and parallel (concurrent) queues. The main queue is a serial queue, and the global queue is a concurrent queue.
2.1 Synchronous + Serial (deadlock)
- (void)mainSyncTest {
    NSLog(@"0 %@", [NSThread currentThread]);
    // Waits here forever: deadlock
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"1 %@", [NSThread currentThread]);
    });
    NSLog(@"2 %@", [NSThread currentThread]);
}
Analysis:
- The main queue is a serial queue and dispatch_sync is a synchronous function, so this is the synchronous function & serial queue case.
- After 0 is printed, dispatch_sync appends the block to the main queue, behind the task that is currently running (mainSyncTest). The block therefore cannot run until mainSyncTest finishes.
- But dispatch_sync does not return until the block has run, so mainSyncTest cannot proceed either.
- mainSyncTest waits for dispatch_sync to return, while dispatch_sync waits for mainSyncTest to finish so its block can run: they wait on each other forever, which is a deadlock.
Q1: If the code above is changed from synchronous + main queue to synchronous + custom serial queue, will it still deadlock?
dispatch_queue_t serial = dispatch_queue_create("ypy", DISPATCH_QUEUE_SERIAL);
NSLog(@"0 %@", [NSThread currentThread]);
dispatch_sync(serial, ^{
NSLog(@"1 %@", [NSThread currentThread]);
});
NSLog(@"2 %@", [NSThread currentThread]);
Summary: switching from the main queue to a custom serial queue removes the deadlock, because the custom queue is empty and dispatch_sync can run the block immediately on the current thread. Does the result depend on the queue and thread that the calling code itself runs on? In the deadlocking code, both the calling function and the block are on the main queue + main thread; in the new code, the calling function is on the main queue while the block is on the custom queue, still on the main thread.
Q2: If mainSyncTest itself is executed inside this custom queue, will it deadlock?
dispatch_queue_t serial = dispatch_queue_create("ypy", DISPATCH_QUEUE_SERIAL);
dispatch_sync(serial, ^{
    [self mainSyncTest];
});

- (void)mainSyncTest {
    NSLog(@"0 %@", [NSThread currentThread]);
    // Waits here forever: deadlock
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"1 %@", [NSThread currentThread]);
    });
    NSLog(@"2 %@", [NSThread currentThread]);
}
A: Yes, it still deadlocks. Note that everything is still running on the main thread: the main queue is a special queue, bound to the main thread when the app starts, and it never switches threads. The inner dispatch_sync targets the main queue while the main queue is still busy with the original caller, so the same mutual wait occurs.
Q3: Can a synchronous dispatch from one custom queue to another custom queue also deadlock?
Q4: Is the main thread the key factor? If the current function runs on a child thread and the task is dispatched synchronously back to the main queue, is the deadlock avoided?
Again, note that the code is still running on the main thread. The rule: a synchronous dispatch onto the same queue from code already running on that queue will deadlock (and can crash).
Continuing with Q4: if a task running on a custom queue dispatches synchronously to the main queue, is the deadlock avoided? Bottom line: the main queue is a special queue that is bound to the main thread when the app starts and never switches threads.
- Current environment: main queue + main thread; dispatch_sync to the main queue deadlocks
- Current environment: main queue + main thread; dispatch_sync to a custom serial queue does not create a thread (the block runs on the main thread) and does not deadlock
- Current environment: custom serial queue + main thread; dispatch_sync to the main queue deadlocks (the main queue is still busy with the original caller)
- Current environment: custom serial queue + main thread; dispatch_sync to a new custom serial queue does not create a thread (the block runs on the main thread) and does not deadlock
- Current environment: custom serial queue + child thread; dispatch_sync to the main queue does not deadlock: the main queue is bound to the main thread, so the block runs on the main thread and execution then returns to the child thread for the subsequent tasks (see the sketch below)
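A minimal sketch of the last case (assuming it is called from the main thread, for example in viewDidLoad): the outer block runs on a child thread, and the inner dispatch_sync to the main queue does not deadlock because the main thread is not waiting on the child thread:
dispatch_queue_t serial = dispatch_queue_create("ypy.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serial, ^{
    // Runs on a child thread
    NSLog(@"before %@", [NSThread currentThread]);
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Runs on the main thread
        NSLog(@"on main %@", [NSThread currentThread]);
    });
    // Back on the child thread
    NSLog(@"after %@", [NSThread currentThread]);
});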
2.2 Synchronous + Parallel
for (int i = 0; i<20; i++) {
dispatch_sync(dispatch_get_global_queue(0, 0), ^{
NSLog(@"%d-%@",i,[NSThread currentThread]);
});
}
NSLog(@"hello queue");
Summary: this does not deadlock, but dispatch_sync still runs the tasks one at a time on the current thread, so the loop as a whole is a time-consuming (blocking) operation.
2.3 Asynchronous + Serial
- (void)mainAsyncTest{
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"1 %@", [NSThread currentThread]);
});
NSLog(@"2 %@", [NSThread currentThread]);
}
As you can see, with async + serial (the main queue here), the block in the asynchronous call (print 1) executes only after mainAsyncTest has finished (print 2). Because the target is the main queue, the block still runs on the main thread; no new thread is started.
We can compare this with the synchronous + serial deadlock in 2.1 above. The difference between the two:
Synchronous + serial:
- dispatch_sync appends the block to the tail of the main queue, but the block cannot run until the task ahead of it (mainSyncTest) has finished.
- dispatch_sync does not return until the block has finished, so mainSyncTest can never finish: the two wait on each other, hence the deadlock.
Asynchronous + serial:
- dispatch_async appends the block to the tail of the queue without waiting for mainAsyncTest to complete.
- dispatch_async does not wait for the block to execute; enqueueing the block is all it does, so there is no mutual waiting.
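For contrast, a minimal sketch of asynchronous + custom serial queue (the queue name is arbitrary): here GCD may start a child thread, and the tasks still execute one after another on the serial queue:
dispatch_queue_t serial = dispatch_queue_create("ypy.serial", DISPATCH_QUEUE_SERIAL);
for (int i = 0; i < 3; i++) {
    dispatch_async(serial, ^{
        // The three blocks run in order, typically on the same child thread
        NSLog(@"%d %@", i, [NSThread currentThread]);
    });
}
NSLog(@"hello queue"); // may print before the blocks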
2.4 Asynchronous + parallel
for (int i = 0; i<20; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"%d-%@",i,[NSThread currentThread]);
    });
}
NSLog(@"hello queue");
Summary: multiple threads are started, and the tasks execute in an indeterminate order.
3 The performance cost of dispatching
CFAbsoluteTime time = CFAbsoluteTimeGetCurrent();
dispatch_queue_t queue = dispatch_queue_create("thread", DISPATCH_QUEUE_SERIAL);
// dispatch_async(queue, ^{
//     NSLog(@"async");
// });
dispatch_sync(queue, ^{
    NSLog(@"sync");
});
NSLog(@"%f", CFAbsoluteTimeGetCurrent() - time);
Compare the elapsed time of: doing nothing, creating a queue, creating a queue plus an asynchronous dispatch, and creating a queue plus a synchronous dispatch:
- With no operation at all, essentially no time is consumed
CFAbsoluteTime time = CFAbsoluteTimeGetCurrent();
NSLog(@"%f", CFAbsoluteTimeGetCurrent() - time);
- Creating a queue: about 0.00009 seconds
CFAbsoluteTime time = CFAbsoluteTimeGetCurrent();
dispatch_queue_t queue = dispatch_queue_create("thread", DISPATCH_QUEUE_SERIAL);
NSLog(@"%f", CFAbsoluteTimeGetCurrent() - time);
- Creating a queue and dispatching an asynchronous task: about 0.00040 seconds
CFAbsoluteTime time = CFAbsoluteTimeGetCurrent();
dispatch_queue_t queue = dispatch_queue_create("thread", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
    NSLog(@"async");
});
NSLog(@"%f", CFAbsoluteTimeGetCurrent() - time);
- Creating a queue and dispatching a synchronous task: about 0.000232 seconds
CFAbsoluteTime time = CFAbsoluteTimeGetCurrent();
dispatch_queue_t queue = dispatch_queue_create("thread", DISPATCH_QUEUE_SERIAL);
dispatch_sync(queue, ^{
    NSLog(@"sync");
});
NSLog(@"%f", CFAbsoluteTimeGetCurrent() - time);
Conclusion:
- Every time a queue (or thread) is created, there is some time cost
- After the queue is created, a synchronous dispatch costs more than an asynchronous one, because dispatch_sync waits for the task to finish before returning
4. Interview questions
Interview question 1:
// Serial queue
dispatch_queue_t queue = dispatch_queue_create("ypy", DISPATCH_QUEUE_SERIAL);
NSLog(@"1");
dispatch_async(queue, ^{
    NSLog(@"2");
    dispatch_sync(queue, ^{
        NSLog(@"3");
    });
});
NSLog(@"5");
Answer: 1, 5 and 2 are printed, then the program deadlocks (crash): the inner dispatch_sync targets the serial queue that is already executing the outer block.
Interview Question 2:
// Concurrent queue
dispatch_queue_t queue = dispatch_queue_create("ypy", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"1 %@", [NSThread currentThread]);
dispatch_async(queue, ^{
    NSLog(@"2 %@", [NSThread currentThread]);
    dispatch_sync(queue, ^{
        NSLog(@"3 %@", [NSThread currentThread]);
    });
    NSLog(@"4 %@", [NSThread currentThread]);
});
NSLog(@"5 %@", [NSThread currentThread]);
Answer:
- Print order: 1 -> 5 -> 2 -> 3 -> 4
- Unlike interview question 1, the queue here is DISPATCH_QUEUE_CONCURRENT, a concurrent queue.
- See the analysis in 2.2 Synchronous + Parallel: on a concurrent queue, dispatch_sync does not deadlock; it simply runs its block in place before continuing, so 3 prints before 4.
Interview Question 3:
// Concurrent queue
dispatch_queue_t queue = dispatch_queue_create("ypy", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"1");
dispatch_async(queue, ^{
    NSLog(@"2");
    dispatch_async(queue, ^{
        NSLog(@"3");
    });
    NSLog(@"4");
});
NSLog(@"5");
Answer: print order 1 -> 5 -> 2 -> 4 -> 3 (since 3 is dispatched asynchronously, it usually prints after 4, but that ordering is not strictly guaranteed).
Interview question 4:
Select the possible options in print order: A: 1230789 B: 1237890 C: 3120798 D: 2137890
// Concurrent queue
dispatch_queue_t queue = dispatch_queue_create("ypy", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
    NSLog(@"1");
});
dispatch_async(queue, ^{
    NSLog(@"2");
});
dispatch_sync(queue, ^{
    NSLog(@"3");
});
NSLog(@"0");
dispatch_async(queue, ^{
    NSLog(@"7");
});
dispatch_async(queue, ^{
    NSLog(@"8");
});
dispatch_async(queue, ^{
    NSLog(@"9");
});
Answer: A and C.
- The queue is a concurrent queue.
- Asynchronous + concurrent tasks execute in no fixed order, so 1 and 2 can print in any order, and so can 7, 8 and 9.
- dispatch_sync waits for its block, so 3 must print before 0; and 7, 8, 9 are dispatched after the NSLog of 0, so they must print after 0.
- With 3 before 0, and 7, 8, 9 after 0, only options A and C are possible.
5 Thread resource sharing
- Typical scenario: multiple threads read and write a shared resource, for example several windows selling tickets at the same time.
- In the example below, asynchronous dispatch to the global queue lets several "windows" sell tickets concurrently, while a synchronous dispatch to a serial queue ensures the shared ticket count is modified by only one task at a time.
@interface ViewController ()
@property (nonatomic, assign) NSInteger tickets;
@property (nonatomic, strong) dispatch_queue_t queue;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    _tickets = 20;
    _queue = dispatch_queue_create("ypy", DISPATCH_QUEUE_SERIAL);
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    // First thread sells tickets
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self saleTickes];
    });
    // Second thread sells tickets
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self saleTickes];
    });
}

- (void)saleTickes {
    while (self.tickets > 0) {
        // Simulate latency
        [NSThread sleepForTimeInterval:1.0];
        // Apple does not recommend mutexes here; a synchronous task on a serial queue achieves the same effect
        // @synchronized
        dispatch_sync(_queue, ^{
            // Check the remaining tickets
            if (self.tickets > 0) {
                self.tickets--;
                NSLog(@"remaining %zd %@", self.tickets, [NSThread currentThread]);
            } else {
                NSLog(@"no tickets left");
            }
        });
    }
}

@end
6. Barrier function
Controls the order in which tasks execute; the barrier acts as a synchronization point within a queue.
- dispatch_barrier_async: the barrier task does not execute until all tasks submitted before it have finished (the current thread is not blocked).
- dispatch_barrier_sync: blocks the current thread until all tasks submitted before the barrier have finished; tasks submitted after it wait until the barrier task completes.
Important: a barrier function only controls tasks in the same concurrent queue.
- The barrier function is used with a concurrent queue and asynchronous functions: it listens for several (asynchronous) tasks and fires once they have all completed.
(Tasks on a serial queue, or tasks dispatched synchronously, already execute one after another, so a barrier is not needed there.)
Pitfall: why can't the barrier function be used with dispatch_get_global_queue? Because many system tasks also run on the global queue, and a barrier submitted there does not act as a barrier (it behaves like a normal dispatch). Instead, manually create a clean concurrent queue with dispatch_queue_create, put the tasks to be monitored on it, and then use the barrier function to react to their completion.
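A minimal sketch (arbitrary queue name and task contents) showing a barrier on a custom concurrent queue:
dispatch_queue_t queue = dispatch_queue_create("ypy.concurrent", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(queue, ^{ NSLog(@"task 1"); });
dispatch_async(queue, ^{ NSLog(@"task 2"); });

// Runs only after task 1 and task 2 finish; does not block the current thread
dispatch_barrier_async(queue, ^{ NSLog(@"barrier (async)"); });

dispatch_async(queue, ^{ NSLog(@"task 3"); }); // runs only after the barrier

// dispatch_barrier_sync would additionally block this thread until the barrier block itself finished
NSLog(@"submitted everything");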
@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    __weak typeof(self) weakSelf = self;
    __block CFAbsoluteTime time = CFAbsoluteTimeGetCurrent();
    // Request the token first
    [self requestToken:^(id value) {
        // Use the token to request the data
        [weakSelf requestDataWithToken:value handle:^(BOOL success) {
            success ? NSLog(@"success") : NSLog(@"failure");
            NSLog(@"%f", CFAbsoluteTimeGetCurrent() - time);
        }];
    }];
}

/** Token request */
- (void)requestToken:(void (^)(id value))successBlock {
    NSLog(@"start requesting token");
    [NSThread sleepForTimeInterval:1];
    if (successBlock) {
        successBlock(@"b2a8f8523ab41f8b4b9b2a79ff47c3f1");
    }
}

/** Request all data */
- (void)requestDataWithToken:(NSString *)token handle:(void (^)(BOOL success))successBlock {
    dispatch_queue_t queue = dispatch_queue_create("ypy", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(queue, ^{
        [self requestHeadDataWithToken:token handle:^(id value) {
            NSLog(@"%@", value);
        }];
    });
    dispatch_async(queue, ^{
        [self requestListDataWithToken:token handle:^(id value) {
            NSLog(@"%@", value);
        }];
    });
    dispatch_barrier_async(queue, ^{
        successBlock(true);
    });
}

/** Header data request */
- (void)requestHeadDataWithToken:(NSString *)token handle:(void (^)(id value))successBlock {
    if (token.length == 0) {
        NSLog(@"no token, cannot request data for security reasons");
        return;
    }
    [NSThread sleepForTimeInterval:2];
    if (successBlock) {
        successBlock(@"I'm the boss, listen to me");
    }
}

/** List data request */
- (void)requestListDataWithToken:(NSString *)token handle:(void (^)(id value))successBlock {
    if (token.length == 0) {
        NSLog(@"no token, cannot request data for security reasons");
        return;
    }
    [NSThread sleepForTimeInterval:1];
    if (successBlock) {
        successBlock(@"I'm the list data");
    }
}

@end
7 Dispatch group
Similar to the barrier function, a dispatch group controls the order in which tasks execute.
- dispatch_group_create: create a group
- dispatch_group_async: submit a task to the group (entering and leaving the group are managed automatically)
- dispatch_group_notify: get notified when all tasks in the group have completed
- dispatch_group_wait: wait (with a timeout) for the group's tasks to finish
- dispatch_group_enter: manually enter the group
- dispatch_group_leave: manually leave the group
dispatch_group_enter and dispatch_group_leave must be used in pairs.
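A minimal sketch (arbitrary queue and task contents) of the automatic variant, dispatch_group_async, together with dispatch_group_notify and dispatch_group_wait:
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_queue_create("ypy.group", DISPATCH_QUEUE_CONCURRENT);

// Enter/leave are handled automatically by dispatch_group_async
dispatch_group_async(group, queue, ^{ NSLog(@"task 1"); });
dispatch_group_async(group, queue, ^{ NSLog(@"task 2"); });

// Called once both tasks have finished
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all tasks done");
});

// Alternatively, block the current thread for at most 5 seconds:
// dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, 5 * NSEC_PER_SEC));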
@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    __weak typeof(self) weakSelf = self;
    __block CFAbsoluteTime time = CFAbsoluteTimeGetCurrent();
    // Request the token first
    [self requestToken:^(id value) {
        dispatch_group_t group = dispatch_group_create();
        dispatch_queue_t concurrent = dispatch_queue_create("ypy", DISPATCH_QUEUE_CONCURRENT);

        dispatch_group_enter(group);
        dispatch_async(concurrent, ^{
            [weakSelf requestHeadDataWithToken:value handle:^(id value) {
                NSLog(@"%@", value);
                dispatch_group_leave(group);
            }];
        });

        dispatch_group_enter(group);
        dispatch_async(concurrent, ^{
            [weakSelf requestListDataWithToken:value handle:^(id value) {
                NSLog(@"%@", value);
                dispatch_group_leave(group);
            }];
        });

        dispatch_group_notify(group, concurrent, ^{
            NSLog(@"%f", CFAbsoluteTimeGetCurrent() - time);
        });
    }];
}

/** Token request */
- (void)requestToken:(void (^)(id value))successBlock {
    NSLog(@"start requesting token");
    [NSThread sleepForTimeInterval:1];
    if (successBlock) {
        successBlock(@"b2a8f8523ab41f8b4b9b2a79ff47c3f1");
    }
}

/** Header data request */
- (void)requestHeadDataWithToken:(NSString *)token handle:(void (^)(id value))successBlock {
    if (token.length == 0) {
        NSLog(@"no token, cannot request data for security reasons");
        return;
    }
    [NSThread sleepForTimeInterval:2];
    if (successBlock) {
        successBlock(@"I'm the boss, listen to me");
    }
}

/** List data request */
- (void)requestListDataWithToken:(NSString *)token handle:(void (^)(id value))successBlock {
    if (token.length == 0) {
        NSLog(@"no token, cannot request data for security reasons");
        return;
    }
    [NSThread sleepForTimeInterval:1];
    if (successBlock) {
        successBlock(@"I'm the list data");
    }
}

@end
8 The GCD singleton
Singletons use the lifetime of a static variable to guarantee that only one object ever exists. Override allocWithZone: so that whenever outside code creates the object with alloc, the same statically stored object is returned. Here is the core code of KCImageManger:
static id instance;

@implementation KCImageManger

/**
 1. +load is called when the class is loaded. If a class does not implement +load itself, the system will not call it for that class, regardless of whether its superclasses implement the method. +load is invoked in the order SuperClass --> SubClass --> Category.
 2. +initialize is called before the class or its subclass receives its first message, but after its superclass's +initialize. If the program never sends a message to a class or its subclasses, +initialize is never called.
 3. Only the thread executing +initialize may operate on the class or its instances; all other threads block until +initialize completes.
 */
+ (void)initialize {
    NSLog(@"superclass");
    if (instance == nil) {
        instance = [[self alloc] init];
    }
}

/** Return the shared instance */
+ (instancetype)manager {
    return instance;
}

/** Every method that allocates space for an object of this class goes through allocWithZone:, so overriding it guarantees uniqueness */
+ (instancetype)allocWithZone:(struct _NSZone *)zone {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        instance = [super allocWithZone:zone];
    });
    return instance;
}

/** Simple singleton accessor */
+ (instancetype)shareManager {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        instance = [[self alloc] init];
    });
    return instance;
}

@end
9 Semaphore
A semaphore controls the maximum concurrency in GCD, i.e. the maximum number of tasks allowed to execute at the same time.
- dispatch_semaphore_create: create a semaphore with an initial value
- dispatch_semaphore_wait: decrement the semaphore; wait if the value would drop below zero
- dispatch_semaphore_signal: increment (release) the semaphore
Every dispatch_semaphore_wait must be balanced by a matching dispatch_semaphore_signal; otherwise a crash can occur.
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
// At most two tasks may execute at the same time
dispatch_semaphore_t sem = dispatch_semaphore_create(2);

// Task 1
dispatch_async(queue, ^{
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    sleep(1);
    NSLog(@"executing task 1");
    sleep(1);
    NSLog(@"task 1 completed");
    dispatch_semaphore_signal(sem);
});
// Task 2
dispatch_async(queue, ^{
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    sleep(1);
    NSLog(@"executing task 2");
    sleep(1);
    NSLog(@"task 2 completed");
    dispatch_semaphore_signal(sem);
});
// Task 3
dispatch_async(queue, ^{
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    sleep(1);
    NSLog(@"executing task 3");
    sleep(1);
    NSLog(@"task 3 completed");
    dispatch_semaphore_signal(sem);
});
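Another common use, sketched below: a semaphore created with an initial value of 0 turns an asynchronous callback into a synchronous wait (requestToken: stands in for any asynchronous API, such as the one in the earlier examples). Avoid waiting like this on the main thread for work that itself needs the main thread.
dispatch_semaphore_t sem = dispatch_semaphore_create(0);
__block NSString *token = nil;

// Asynchronous API; the callback may arrive on another thread
[self requestToken:^(id value) {
    token = value;
    dispatch_semaphore_signal(sem); // wake up the waiter
}];

// Block the current thread until the callback has fired
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
NSLog(@"got token: %@", token);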