1. GCD

  • What is GCD?

    Grand Central Dispatch (GCD) is a pure C API that provides many powerful functions for managing concurrent work.
  • The advantage of the GCD

    GCD is Apple's solution for multi-core parallel computing. It automatically makes use of the available CPU cores (dual-core, quad-core, and so on) and automatically manages the thread lifecycle (creating threads, scheduling tasks onto them, destroying them). The programmer only needs to tell GCD what tasks to run and never has to write any thread-management code.

The basic model: use functions to submit tasks to queues.

2. Functions and queues

  • Synchronous vs. asynchronous execution; serial queues vs. concurrent queues

  • Synchronous execution: when multiple threads access the same resource at once, a lock such as @synchronized allows only one task to run at a time; whether the queue is serial or concurrent, a synchronous submission blocks the caller until the task finishes.

  • Asynchronous execution does not block the current thread.

  • Serial queue: tasks are executed one at a time, in FIFO order.

  • Concurrent queue: multiple tasks can execute at the same time (when they are submitted asynchronously).

  • Async + concurrent queue: threads are opened for the tasks currently executing, so several tasks run in parallel.

  • Async + serial queue: only one new thread is opened (in the example below it is number = 6, while the main thread is number = 1). A minimal sketch of creating these queue types follows this list.
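
A minimal sketch of the queue types mentioned above (the queue labels such as "com.demo.serial" are placeholders, not from the article):

dispatch_queue_t serialQueue     = dispatch_queue_create("com.demo.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrentQueue = dispatch_queue_create("com.demo.concurrent", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_t mainQueue       = dispatch_get_main_queue();        // serial, bound to the main thread
dispatch_queue_t globalQueue     = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); // concurrent

// submit work synchronously (blocks the caller) or asynchronously (returns immediately)
dispatch_sync(serialQueue,      ^{ NSLog(@"sync on serial");      });
dispatch_async(concurrentQueue, ^{ NSLog(@"async on concurrent"); });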

2.1 Functions

Asynchronous (dispatch_async): does not wait for the submitted block to finish; the next statement can run immediately. It can start a new thread to execute the block. Async is the prerequisite for multithreading.

Synchronous (dispatch_sync): must wait for the submitted block to finish before the next statement runs. It does not start a new thread; the block is executed on the current thread.

Note: async has the ability to start a new thread, but it does not have to start one; whether it does depends on the type of queue the task is submitted to.
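
A small illustration of that note (a sketch, not the article's example code): dispatch_async to the main queue never creates a new thread, while dispatch_async to a global queue usually hands the block to a worker thread.

// async to the main queue: the block still runs on the main thread, no new thread is created
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"main queue: %@", [NSThread currentThread]);   // number = 1, name = main
});

// async to a global (concurrent) queue: GCD may run the block on a worker thread
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"global queue: %@", [NSThread currentThread]); // some worker thread, name = (null)
});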

Examples: github.com/tanghaitao/…

A personal analogy: a queue is like a bank's ticketing system and threads are like the bank windows; tasks are the business to be handled. The ticketing system (the queue) controls how the work is dispatched, whether several items at a time or one at a time, and whichever window (thread) a number is called to is where that task gets handled.

2.2 Queues

• Serial (DISPATCH_QUEUE_SERIAL)

• Concurrent (DISPATCH_QUEUE_CONCURRENT)

- (void)syncTest {
    // 1: create a queue (serial here; passing NULL also yields a serial queue)
    dispatch_queue_t queue = dispatch_queue_create("Cooci", DISPATCH_QUEUE_SERIAL);
    // dispatch_queue_t queue = dispatch_queue_create("Cooci", NULL);
    // 2: create the task block
    dispatch_block_t taskBlock = ^{
        NSLog(@"%@", [NSThread currentThread]);
    };
    // 3: submit the task synchronously
    dispatch_sync(queue, taskBlock);
}

2.2.1 Serial Sync Queue

Serial queue + sync: tasks execute in FIFO order (first in, first out), on the current thread.

/** Serial sync queue: FIFO */
- (void)serialSyncTest {
    // 1: create a serial queue
    dispatch_queue_t queue = dispatch_queue_create("Cooci", DISPATCH_QUEUE_SERIAL);
    for (int i = 0; i < 20; i++) {
        dispatch_sync(queue, ^{
            NSLog(@"%d-%@", i, [NSThread currentThread]);
            // 1-<NSThread: 0x6000001e4900>{number = 1, name = main}
        });
    }
}

2.2.2 Serial Async Queue

/** Serial async queue */
- (void)serialAsyncTest {
    NSLog(@"current thread %@", [NSThread currentThread]);
    // <NSThread: 0x6000024d41c0>{number = 1, name = main}
    // 1: create a serial queue
    dispatch_queue_t queue = dispatch_queue_create("Cooci", DISPATCH_QUEUE_SERIAL);
    for (int i = 0; i < 20; i++) {
        dispatch_async(queue, ^{
            NSLog(@"%d-%@", i, [NSThread currentThread]);
            // {number = 6, name = (null)}
        });
    }
    for (int i = 0; i < 1000000; i++) {
        // busy loop, only to keep the current thread occupied for a while
    }
    NSLog(@"hello queue");
}

Only one new thread is opened (number = 6; the main thread is number = 1).

2.2.3 Concurrent Sync Queue

Each dispatch_sync blocks until its task finishes, so even on a concurrent queue the tasks run one after another on the same (current) thread.

/** Concurrent sync queue */
- (void)concurrentSyncTest {
    // 1: create a concurrent queue
    dispatch_queue_t queue = dispatch_queue_create("Cooci", DISPATCH_QUEUE_CONCURRENT);
    for (int i = 0; i < 20; i++) {
        dispatch_sync(queue, ^{
            NSLog(@"%d-%@", i, [NSThread currentThread]);
        });
    }
    for (int i = 0; i < 1000000; i++) {
    }
    NSLog(@"hello queue");
}

2.2.4 Concurrent Async Queue

Async + concurrent: with asynchronous functions, not every task gets its own new thread; GCD reuses a pool of worker threads. Only the async combinations (async + concurrent, async + serial) open new threads at all.

/** Concurrent async queue: asynchronous functions do not open one thread per task; GCD reuses a small pool of worker threads */
- (void)concurrentAsyncTest {
    // 1: create a concurrent queue
    dispatch_queue_t queue = dispatch_queue_create("Cooci", DISPATCH_QUEUE_CONCURRENT);
    for (int i = 0; i < 20; i++) {
        dispatch_async(queue, ^{
            NSLog(@"%d-%@", i, [NSThread currentThread]);
            // 0-<NSThread: 0x600003dc5dc0>{number = 6, name = (null)}
            // 1-<NSThread: 0x600003dc1000>{number = 5, name = (null)}
            // 5-<NSThread: 0x600003d98700>{number = 4, name = (null)}
            // 2-<NSThread: 0x600003d9cbc0>{number = 3, name = (null)}
        });
    }
    // for (int i = 0; i < 1000000; i++) {
    // }
    NSLog(@"hello queue");
}

3. Combine functions and queues

The following code: nested async on a concurrent queue. Async opens threads (which costs memory), and the async blocks run later than the surrounding statements.

- (void)textDemo {
    dispatch_queue_t queue = dispatch_queue_create("haitao", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_async(queue, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
    // typical output: 1 5 2 4 3 (the async blocks may interleave differently)
}

The following code: the inner dispatch_sync blocks the current (background) thread until its block finishes, so 3 always prints before 4.

- (void)textDemo1 {
    dispatch_queue_t queue = dispatch_queue_create("cooci", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_sync(queue, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
    // output: 1 5 2 3 4
}

The following code: on a serial queue, the inner synchronous submission blocks the current thread while FIFO order requires the outer block to finish first, which produces a deadlock.

- (void)textDemo2 {
    dispatch_queue_t queue = dispatch_queue_create("cooci", DISPATCH_QUEUE_SERIAL);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_sync(queue, ^{   // deadlock: waits for the enclosing block on the same serial queue
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
    // output: 1 5 2, then a crash (deadlock)
}

On the serial queue: after NSLog(@"2"), the outer block cannot reach NSLog(@"4") until the synchronous block (3) has run, but the serial queue cannot start block 3 until the outer block has finished. The two wait on each other: a deadlock.

Executing the following code in viewDidLoad also causes a deadlock, a circular wait on the same queue:

dispatch_sync(dispatch_get_main_queue(),^{
  [self doSomething];
});

Two tasks end up in the main queue: viewDidLoad itself and the block, and both must run on the main thread. viewDidLoad is ahead of the block in the queue, so the block cannot start until viewDidLoad finishes; but the dispatch_sync call means viewDidLoad cannot finish until the block has run. Each is waiting for the other, so the program deadlocks.
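
For comparison, two common ways to avoid this particular deadlock (a sketch, not the article's code):

// 1. Submit asynchronously: the block is simply queued behind viewDidLoad and runs after it returns
dispatch_async(dispatch_get_main_queue(), ^{
    [self doSomething];
});

// 2. Submit synchronously to a *different* queue than the one we are currently running on
//    (this still blocks the main thread until the block finishes, but it cannot deadlock)
dispatch_sync(dispatch_get_global_queue(0, 0), ^{
    [self doSomething];
});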

dispatch_get_main_queue() is a serial queue; dispatch_get_global_queue(0, 0) is a concurrent queue. If there is no special requirement, you can use the global queue directly when executing asynchronous tasks.
__block int a = 0;
// dispatch_get_main_queue() is serial; dispatch_get_global_queue(0, 0) is concurrent
while (a < 5) {
    // the loop keeps submitting blocks for as long as a < 5
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"%@===%d", [NSThread currentThread], a);
        a++;
    });
}

Here the variable a is not thread safe: as shown below, the printed value can go far beyond 5 (for example ===90), and reading a on the main thread is also unreliable, because the result differs on every run.

{number = 1, name = main}****7

{number = 1, name = main}****5

Inside the while loop the asynchronous dispatches go to the concurrent global queue, which opens new threads. a starts at 0 and each block does a++, but because the loop keeps submitting blocks for as long as a < 5, the same piece of work is scheduled across many different threads, back and forth, and a++ ends up executing far more than 5 times. For example, the first block reads a = 0, increments and prints ===1; a second thread that has not yet seen the increment also prints ===1; then two threads increment at almost the same time and the value jumps to 3. Having many threads run the same task simultaneously causes exactly this kind of problem, like the same job being dispatched to several departments at once, which produces a lot of duplicated work.

One way to make this thread safe: move the while loop inside a single asynchronous block, so the counting runs on only one thread (number = 3). It is safe, but not efficient.

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    while (a < 5) {
        // inside a single block the counting behaves normally
        NSLog(@"%@===%d", [NSThread currentThread], a);
        a++;
    }
});

4. GCD application: the barrier (fence) function

4.1 Background

When making network requests we often hit the situation shown below: a token request must complete before the other requests can run, which leads to nested network requests. If the logic is complex this becomes hard to read and hard to maintain. Turning async into sync to guarantee ordering and thread safety (for example running everything on one thread, like the while loop in a single block above, or blocking with sync) allows only one task at a time, blocks the caller, is very inefficient, and still gives no good way to know when all the work has finished.

    [self requestToken:^(id value) {
        weakSelf.token = value;

        [weakSelf requestHeadDataWithToken:value handle:^(id value) {
            NSLog(@"%@",value);
            weakSelf.headData = value;
        }];

        [weakSelf requestListDataWithToken:value handle:^(id value) {
            NSLog(@"%@",value);
            weakSelf.listData = value;
        }];
    }];

A first attempt: use sync for the token request and async for the other two. Two problems: 1) sync blocks and waits, so the user experience is poor; 2) with async we do not know when the requests complete and cannot track them.

dispatch_block_t task = ^{
    // sync: blocks while waiting for the token request (poor user experience)
    dispatch_sync(queue, ^{
        [self requestToken:^(id value) {
            weakSelf.token = value;
        }];
    });
    // async: we do not know when these requests complete and cannot track them
    dispatch_async(queue, ^{
        [weakSelf requestHeadDataWithToken:self.token handle:^(id value) {
            NSLog(@"%@", value);
            weakSelf.headData = value;
        }];
    });
    dispatch_async(queue, ^{
        [weakSelf requestListDataWithToken:self.token handle:^(id value) {
            NSLog(@"%@", value);
            weakSelf.listData = value;
        }];
    });
};
dispatch_async(queue, task);
NSLog(@"That's it? I have other things to do");

This is where the barrier function dispatch_barrier_async comes in.
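
Applied to the token scenario from 4.1, the idea looks roughly like this (a sketch reusing the hypothetical requestToken / requestHeadDataWithToken / requestListDataWithToken methods from the example above; as 4.3.3 will show, the barrier only orders the blocks on our own queue and cannot wait for work the requests complete asynchronously on other queues):

dispatch_queue_t queue = dispatch_queue_create("com.demo.request", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(queue, ^{
    [self requestToken:^(id value) { weakSelf.token = value; }];
});

// the barrier runs only after the block above has returned, and before the blocks below start
dispatch_barrier_async(queue, ^{
    NSLog(@"token request submitted, continue with the other requests");
});

dispatch_async(queue, ^{
    [weakSelf requestHeadDataWithToken:weakSelf.token handle:^(id value) { weakSelf.headData = value; }];
});
dispatch_async(queue, ^{
    [weakSelf requestListDataWithToken:weakSelf.token handle:^(id value) { weakSelf.listData = value; }];
});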

4.2 Characteristics

  1. Sequencing: tasks submitted before the barrier finish before it runs; tasks submitted after it wait for it
  2. Thread safety
for (int i = 0; i < 1000; i++) {
    dispatch_async(concurrentQueue, ^{
        NSString *imageName = [NSString stringWithFormat:@"%d.jpg", (i % 10)];
        NSURL *url = [[NSBundle mainBundle] URLForResource:imageName withExtension:nil];
        NSData *data = [NSData dataWithContentsOfURL:url];
        UIImage *image = [UIImage imageWithData:data];
        NSLog(@"%zd --- %@ ---- %d", self.mArray.count, [NSThread currentThread], i);
        // dispatch_barrier_async(concurrentQueue, ^{ // an async barrier blocks the queue; a sync barrier blocks the queue and the thread
        [self.mArray addObject:image];   // unprotected write from many threads at once
        NSLog(@"%zd ===== %@ ====== %d", self.mArray.count, [NSThread currentThread], i);
        // });
        // @synchronized (self) {
        //     [self.mArray addObject:image];
        // }
        if (i == 199) {
            NSLog(@"++++++++:%zd", self.mArray.count);
        }
    });
}

Without the barrier function, the array count is abnormal: it does not reliably reach the number of iterations.

With the barrier function:

[self.mArray removeAllObjects];
dispatch_queue_t concurrentQueue = dispatch_queue_create("haitao", DISPATCH_QUEUE_CONCURRENT);
// dispatch_queue_t concurrentQueue = dispatch_get_global_queue(0, 0);
// signal
for (int i = 0; i < 10000; i++) {
    dispatch_async(concurrentQueue, ^{
        NSString *imageName = [NSString stringWithFormat:@"%d.jpg", (i % 10)];
        NSURL *url = [[NSBundle mainBundle] URLForResource:imageName withExtension:nil];
        NSData *data = [NSData dataWithContentsOfURL:url];
        UIImage *image = [UIImage imageWithData:data];
        NSLog(@"%zd --- %@ ---- %d", self.mArray.count, [NSThread currentThread], i);
        dispatch_barrier_async(concurrentQueue, ^{
            // an async barrier blocks the queue; a sync barrier would also block the current thread
            [self.mArray addObject:image];
            NSLog(@"%zd ===== %@ ====== %d", self.mArray.count, [NSThread currentThread], i);
        });
        // @synchronized (self) {
        //     [self.mArray addObject:image];
        // }
        if (i == 199) {
            NSLog(@"++++++++:%zd", self.mArray.count);
        }
    });
}


With dispatch_barrier_async, only one barrier task is scheduled at a time and the additions all happen on a single thread (for example {number = 64, name = (null)} in the ===== log), so the writes are thread safe. It is similar to sync in effect, but better: it does not block the current thread.
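
The same property is what makes the classic "many readers, one writer" pattern work (a sketch, assuming a custom concurrent queue readWriteQueue and a mutable dictionary storage that it protects): reads go through dispatch_sync and may run in parallel, while every write goes through dispatch_barrier_async and has the queue to itself.

@property (nonatomic, strong) dispatch_queue_t readWriteQueue;   // created with DISPATCH_QUEUE_CONCURRENT
@property (nonatomic, strong) NSMutableDictionary *storage;

- (id)objectForKey:(NSString *)key {
    __block id result = nil;
    dispatch_sync(self.readWriteQueue, ^{            // reads may run concurrently
        result = self.storage[key];
    });
    return result;
}

- (void)setObject:(id)object forKey:(NSString *)key {
    dispatch_barrier_async(self.readWriteQueue, ^{   // writes run exclusively, without blocking the caller
        self.storage[key] = object;
    });
}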

4.3 Three limitations of the barrier function

4.3.1 FIFO is not guaranteed

dispatch_barrier_async does not guarantee that the barrier blocks run in the order the outer tasks were submitted (FIFO), because the outer tasks are scheduled concurrently and their start and end times are not controllable:

As you can see from the print below,

Order in which five of the tasks started (the --- lines): 102 111 104 130 131

Order in which their barrier blocks ran (the ===== lines): 104 131 111 102 130

 55 ===== <NSThread: 0x600001e958c0>{number = 5, name = (null)} ====== 104
 56 ===== <NSThread: 0x600001e958c0>{number = 5, name = (null)} ====== 131
 57 ===== <NSThread: 0x600001e958c0>{number = 5, name = (null)} ====== 111
 58 ===== <NSThread: 0x600001e958c0>{number = 5, name = (null)} ====== 102
 59 ===== <NSThread: 0x600001e958c0>{number = 5, name = (null)} ====== 130
The corresponding start log (the --- lines):
 0 --- <NSThread: 0x600001e92380>{number = 81, name = (null)} ---- 102
 0 --- <NSThread: 0x600001eb4900>{number = 46, name = (null)} ---- 111
 0 --- <NSThread: 0x600001ea0380>{number = 42, name = (null)} ---- 105
 0 --- <NSThread: 0x600001e83740>{number = 82, name = (null)} ---- 103
 0 --- <NSThread: 0x600001eb9e00>{number = 43, name = (null)} ---- 104
 0 --- <NSThread: 0x600001ebb840>{number = 72, name = (null)} ---- 113
 0 --- <NSThread: 0x600001e85b40>{number = 79, name = (null)} ---- 115
 0 --- <NSThread: 0x600001e48700>{number = 75, name = (null)} ---- 116
 0 --- <NSThread: 0x600001eba800>{number = 77, name = (null)} ---- 112
 0 --- <NSThread: 0x600001e92300>{number = 78, name = (null)} ---- 117
 0 --- <NSThread: 0x600001efeb80>{number = 73, name = (null)} ---- 114
 0 --- <NSThread: 0x600001eba4c0>{number = 74, name = (null)} ---- 118
 0 --- <NSThread: 0x600001e48700>{number = 75, name = (null)} ---- 119
 0 --- <NSThread: 0x600001e85b40>{number = 79, name = (null)} ---- 120
 0 --- <NSThread: 0x600001ebb840>{number = 72, name = (null)} ---- 121
 0 --- <NSThread: 0x600001e48700>{number = 75, name = (null)} ---- 122
 0 --- <NSThread: 0x600001e85b40>{number = 79, name = (null)} ---- 123
 0 --- <NSThread: 0x600001ebb840>{number = 72, name = (null)} ---- 124
 0 --- <NSThread: 0x600001e48700>{number = 75, name = (null)} ---- 125
 0 --- <NSThread: 0x600001eba4c0>{number = 74, name = (null)} ---- 126
 0 --- <NSThread: 0x600001efeb80>{number = 73, name = (null)} ---- 127
 0 --- <NSThread: 0x600001ebb840>{number = 72, name = (null)} ---- 128
 0 --- <NSThread: 0x600001e92300>{number = 78, name = (null)} ---- 129
 0 --- <NSThread: 0x600001eba700>{number = 80, name = (null)} ---- 130
 0 --- <NSThread: 0x600001ea0380>{number = 42, name = (null)} ---- 131
 0 --- <NSThread: 0x600001e83740>{number = 82, name = (null)} ---- 132
 0 --- <NSThread: 0x600001eba800>{number = 77, name = (null)} ---- 133
 0 --- <NSThread: 0x600001eb9e00>{number = 43, name = (null)} ---- 134

4.3.2 It cannot guarantee that all the work has actually completed

dispatch_barrier_async only orders the tasks within its own queue; it cannot guarantee that everything has finished, because work that those tasks start asynchronously (for example a network callback) is not part of the queue and is not waited for.
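
A small sketch of what this means (the queue name is a placeholder): the barrier waits only until the submitted block returns, not until work that the block started asynchronously has finished.

dispatch_queue_t queue = dispatch_queue_create("com.demo.barrier", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(queue, ^{
    NSLog(@"task 1: kicks off its real work asynchronously");
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        sleep(1);                                    // stands in for a network callback
        NSLog(@"task 1's real work finished");       // usually prints AFTER the barrier
    });
});

dispatch_barrier_async(queue, ^{
    NSLog(@"barrier: task 1's block has returned, but its async work may still be running");
});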

4.3.3 AFNetworking requests cannot be ordered with a barrier

Can dispatch_barrier_async be placed between network requests to guarantee that they execute in order, all on the same thread? The answer is no: a barrier only works when all the tasks are submitted to the same queue, and AFNetworking uses its own internal queues.

The queue we create is not the queue AFNetworking actually runs its work on:

dispatch_queue_t concurrentQueue = dispatch_queue_create("cooci", DISPATCH_QUEUE_CONCURRENT);
dispatch_barrier_async(concurrentQueue, ^{
    // ... but AFNetworking runs its work on its own queue:
    // af_url_session_manager_creation_queue =
    //     dispatch_queue_create("com.alamofire.networking.session.manager.creation", DISPATCH_QUEUE_SERIAL);
    // These are two different queues, so the barrier cannot guarantee that the requests execute in order.
});

5. GCD application: dispatch groups

5.1 Background

Limitations 4.3.2 and 4.3.3 of the barrier function are what lead to the dispatch group: even if the tasks run on different queues, as long as they belong to the same group, the group knows when they have all finished, so the follow-up task can be kept in order.

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_group_async(group, queue, ^{
    // task 1
});
dispatch_group_async(group, queue, ^{
    // task 2
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // task 3: runs only after task 1 and task 2 have both finished
});

If the work is network requests, I recommend YTKNetwork: multiple requests are fired at the same time and reported back together.

YKBLoginSendMessageAPI *loginSendAPI = [YKBLoginSendMessageAPI new];
YTKBatchRequest *batchRequest = [[YTKBatchRequest alloc] initWithRequestArray:@[recommendAPI, adlistAPI, hotArticeAPI, loginSendAPI]];
kWeakSelf(self);
[MBProgressHUD showActivityMessageInWindow:nil];
[batchRequest startWithCompletionBlockWithSuccess:^(YTKBatchRequest * _Nonnull batchRequest) {
    [MBProgressHUD hideHUD];
    [weakself.tableView.mj_header endRefreshing];
    [weakself recommendAPISuccess:(YKBCardRecommendAPI *)batchRequest.requestArray[0]];
    [weakself adlistAPISuccess:(YKBAdListAPI *)batchRequest.requestArray[1]];
    [weakself hotArticeAPiSuuccess:(YKBQuoteArticeListCheckApi *)batchRequest.requestArray[2]];
} failure:^(YTKBatchRequest * _Nonnull batchRequest) {
    [weakself.tableView.mj_header endRefreshing];
    [MBProgressHUD hideHUD];
    [MBProgressHUD showErrorMessage:@"Request failed, please try again later"];
}];

5.2 Waiting: dispatch_group_wait

long result = dispatch_group_wait(group, 0);   // 0 == DISPATCH_TIME_NOW: check without waiting

if (result == 0) {
    // 0 means every task in the group has already finished; safe to do the "notify" work here
}
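
A fuller sketch (the 3-second timeout is only illustrative): dispatch_group_wait blocks the calling thread until the group becomes empty or the deadline passes, and its return value says which happened, so avoid calling it on the main thread.

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

dispatch_group_async(group, queue, ^{ sleep(1); NSLog(@"task 1 done"); });
dispatch_group_async(group, queue, ^{ sleep(2); NSLog(@"task 2 done"); });

// wait at most 3 seconds; dispatch_group_wait returns 0 if every task finished in time
dispatch_time_t deadline = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(3 * NSEC_PER_SEC));
if (dispatch_group_wait(group, deadline) == 0) {
    NSLog(@"all tasks finished within the timeout");
} else {
    NSLog(@"timed out, some tasks are still running");
}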

5.3 enter / leave

// Problem: every dispatch_group_enter must be balanced by a dispatch_group_leave.
// If a leave is missing (as in the second task below), the notify block never runs;
// an extra leave without a matching enter crashes.
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_enter(group);
dispatch_async(queue, ^{
    NSLog(@"first task done");
    dispatch_group_leave(group);
});

dispatch_group_enter(group);
dispatch_async(queue, ^{
    NSLog(@"second task done");
    // missing dispatch_group_leave(group);
});

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all tasks completed, the UI can be updated");   // never reached
});
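
The fix is to balance every enter with a leave, typically inside the completion callback of the asynchronous work (a sketch reusing the hypothetical request methods from section 4.1):

dispatch_group_t group = dispatch_group_create();

dispatch_group_enter(group);
[self requestHeadDataWithToken:self.token handle:^(id value) {   // hypothetical request method
    self.headData = value;
    dispatch_group_leave(group);   // leave only when the callback actually arrives
}];

dispatch_group_enter(group);
[self requestListDataWithToken:self.token handle:^(id value) {   // hypothetical request method
    self.listData = value;
    dispatch_group_leave(group);
}];

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all requests completed, the UI can be updated");
});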

6. GCD application: limiting the maximum number of concurrent tasks

The initial value of a dispatch_semaphore_t controls the maximum number of concurrent tasks. This limits the pressure on the program, memory consumption, and so on.

dispatch_semaphore_t lock = dispatch_semaphore_create(1);
begin = CACurrentMediaTime();
for (int i = 0; i < count; i++) {
    dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER);   // decrement; blocks while the value would drop below 0
    dispatch_semaphore_signal(lock);                        // increment; wakes one waiting thread
}

The default maximum concurrency for SDWebImage's downloader is 6, and for AFNetworking's image downloader it is 4.

// AFN: the image downloader allows at most 4 active download tasks
return [self initWithSessionManager:sessionManager
             downloadPrioritization:AFImageDownloadPrioritizationFIFO
             maximumActiveDownloads:4
                         imageCache:[[AFAutoPurgingImageCache alloc] init]];

Specific code: github.com/tanghaitao/…

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
// Semaphore with an initial value of 2: at most two tasks run at the same time
dispatch_semaphore_t semaphore = dispatch_semaphore_create(2);

dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"--%@--task 1", [NSThread currentThread]);
    NSLog(@"perform task 1");
    sleep(6);
    NSLog(@"task 1 completed");
    dispatch_semaphore_signal(semaphore);
});

dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"--%@--task 2", [NSThread currentThread]);
    NSLog(@"perform task 2");
    sleep(5);
    NSLog(@"task 2 completed");
    dispatch_semaphore_signal(semaphore);
});

dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"--%@--task 3", [NSThread currentThread]);
    NSLog(@"perform task 3");
    NSLog(@"task 3 completed");
    dispatch_semaphore_signal(semaphore);
});
--<NSThread: 0x600003c9f740>{number = 5, name = (null)}--task 2
--<NSThread: 0x600003cb4180>{number = 4, name = (null)}--task 1
perform task 2
perform task 1
task 2 completed
--<NSThread: 0x600003c90d80>{number = 3, name = (null)}--task 3
perform task 3
task 3 completed
task 1 completed

Analysis: only two tasks execute at a time. Task 2 sleeps for less time than task 1 (sleep(5) < sleep(6)), so task 2 finishes first; at that moment task 1 is still sleeping and task 3 starts immediately. Because task 1 still needs about one more second (6 - 5) while task 3 does not sleep at all, task 3 completes first, and then task 1 completes.

That is the point of the semaphore: it acts like a valve that allows only a fixed number of tasks to run at a time; as soon as one of them completes, the next waiting task can start.

With dispatch_semaphore_create(1), the effect looks serial: only one task runs at a time.
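
For example, a semaphore created with a value of 1 can act as a simple lock around the unsafe counter from section 3 (a sketch):

__block int a = 0;
dispatch_semaphore_t lock = dispatch_semaphore_create(1);   // at most one task inside the critical section

for (int i = 0; i < 5; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER);
        a++;                                                // protected increment
        NSLog(@"%@ === %d", [NSThread currentThread], a);
        dispatch_semaphore_signal(lock);
    });
}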

7. Dispatch_source

Application scenarios: countdowns, banner carousels, progress reporting, and so on.

// setup (e.g. in viewDidLoad)
{
    self.queue = dispatch_queue_create("com.haitao", 0);
    // 1: create the source  2: bind the event handler  3: resume it
    self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue());
    // the event handler is stored and triggered later by dispatch_source_merge_data
    dispatch_source_set_event_handler(self.source, ^{
        NSUInteger value = dispatch_source_get_data(self.source);
        self.totalComplete += value;
        NSLog(@"progress: %.2f", self.totalComplete / 100.0);
        self.progressView.progress = self.totalComplete / 100.0;
    });
    self.isRunning = YES;
    dispatch_resume(self.source);   // sources are created suspended and must be resumed
}

- (IBAction)didClickStartOrPauseAction:(id)sender {
    if (self.isRunning) {
        // currently running, so pause
        dispatch_suspend(self.source);
        dispatch_suspend(self.queue);
        self.isRunning = NO;
        [sender setTitle:@"pause..." forState:UIControlStateNormal];
    } else {
        dispatch_resume(self.source);
        dispatch_resume(self.queue);
        self.isRunning = YES;
        [sender setTitle:@"loading..." forState:UIControlStateNormal];
    }
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    NSLog(@"touch began, start loading");
    for (NSUInteger index = 0; index < 100; index++) {
        dispatch_async(self.queue, ^{
            if (!self.isRunning) {
                NSLog(@"download paused");
                return;
            }
            sleep(2);
            dispatch_source_merge_data(self.source, 1);   // triggers the event handler on the main queue
        });
    }
}

The countdown:

// GCD countdown
- (void)startCoundown {
    __block int timeout = showtime + 1;   // total seconds to count down
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_source_t _timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    // fire every 1 second, starting now
    dispatch_source_set_timer(_timer, dispatch_walltime(NULL, 0), 1.0 * NSEC_PER_SEC, 0);
    dispatch_source_set_event_handler(_timer, ^{
        if (timeout <= 0) {
            // finished: cancel the timer and dismiss on the main queue
            dispatch_source_cancel(_timer);
            dispatch_async(dispatch_get_main_queue(), ^{
                [self dismiss];
            });
        } else {
            dispatch_async(dispatch_get_main_queue(), ^{
                [_countBtn setTitle:[NSString stringWithFormat:@"Skip %d", timeout] forState:UIControlStateNormal];
            });
            timeout--;
        }
    });
    dispatch_resume(_timer);
}