Multithreading series chapter plan: iOS Multithreaded Programming (1) Multithreading Basics; (2) Pthread; (3) NSThread; (4) GCD; (5) GCD Underlying Principles; (6) NSOperation; (7) Synchronization Mechanisms and Locks; (8) RunLoop

Preface

This article mainly introduces the concepts and usage of GCD: its core concepts, the pairings of functions and queues, more complex combinations of the two, and GCD's thread synchronization mechanisms, each analyzed in detail. It does not analyze the underlying source code; if you want to understand how GCD works under the hood, see iOS Multithreaded Programming (5) GCD Underlying Principles.

Supplement: the multithreading schemes available in iOS:

1. GCD

GCD, short for Grand Central Dispatch, is a multi-core programming solution provided by Apple, with a multithreading mechanism implemented in C. It allows a program to be split into individual tasks that are then submitted to queues for concurrent or serial execution. It first shipped in Mac OS X 10.6 Snow Leopard and is available on iOS 4 and later.

Its advantages include:

  • GCD can perform parallel computation on multiple cores and automatically makes reasonable use of the available CPU cores (such as dual-core or quad-core).
  • GCD is simple to use; all the developer has to do is define the task to execute, append it to the appropriate queue, and specify the function to execute the task. With Block, it is more convenient and flexible to use.
  • GCD automatically manages the thread lifecycle (thread creation, task scheduling, thread destruction).

2. Core concepts of GCD

This is how we usually use GCD in development.

dispatch_async(dispatch_queue_create("com.xxx.testqueue", DISPATCH_QUEUE_CONCURRENT), ^{
    NSLog(@"gcd test");
});

The essence of using GCD is to define the "task" to be executed, add the task to a "queue", and specify the "function" that executes the task. How should we understand this sentence? Let's break down the sample code above:

// 1. Define the task to execute (print "GCD test")
dispatch_block_t task = ^{
    NSLog(@"gcd test");
};
// 2. Specify the target queue of the task (concurrent queue)
dispatch_queue_t queue = dispatch_queue_create("com.xxx.testqueue", DISPATCH_QUEUE_CONCURRENT);
// 3. Use an asynchronous function to submit the task to the target queue
dispatch_async(queue, task);

This makes it clear that there are three elements to using GCD: “tasks,” “queues,” and “functions.”

2.1 Tasks

A task is an operation to be performed, which in GCD is a block that takes no arguments and returns no value.

2.2 Functions

Functions determine how tasks are executed: synchronously or asynchronously. The main differences are whether the caller must wait for the current task's result and whether a new thread can be started.

2.2.1 Synchronous Functions (sync)

  • When a task is submitted with a synchronous function, the caller must wait for the task to complete before continuing with subsequent work.
  • Does not have the ability to start new threads (tasks can only be executed on the current thread).

2.2.2 Asynchronous Functions (Async)

  • When a task is submitted with an asynchronous function, the caller can continue with subsequent work without waiting for the task to complete.

  • Ability to start new threads (you can execute tasks in new threads).

Note that although asynchronous execution has the ability to start a new thread, it does not always do so; it also depends on the queue the task belongs to (for example, tasks dispatched asynchronously to the main queue never start a new thread).

In general, we also refer to tasks executed synchronously as synchronous tasks and tasks executed asynchronously as asynchronous tasks.

2.3 Queues

A queue is a special linear list that allows deletions only at the front and insertions only at the back, so the element that entered the queue earliest is the first to leave. The basic property of a queue is FIFO (first in, first out).

In GCD, we need to add tasks to the queue, new tasks are always inserted at the end of the queue, and tasks are always scheduled from the head of the queue. Each time a task is scheduled, the task is removed from the queue.

There are two types of queues in GCD: serial queues and concurrent queues. Both follow the FIFO principle; the main differences between them are the execution order and the number of threads used.

2.3.1 Serial Queue

A serial queue schedules only one task at a time; tasks are executed one after another, all on the same thread. Task 2 can be scheduled only after task 1 has completed, and so on.

2.3.2 Concurrent queue

Concurrent queues allow multiple tasks to execute concurrently ("simultaneously"). How many actually run at once depends on how many threads are available: if two threads are available, task 1 and task 2 can be executed concurrently on different threads.

It is important to note that the concurrency capability of concurrent queues only works with asynchronous functions.

If the difference between the two is compared in terms of execution time, the serial queue has only one task executing at the same time, while the concurrent queue may have multiple tasks executing at the same time, and it is also uncertain which task is finished first (which is affected by task complexity and CPU scheduling).

As shown in the figure above, in the concurrent queue, the red line position indicates that task 2, 3, and 4 are all executed at the same time, and task 4 is scheduled before task 3 (CPU scheduling), but task 3 has a low complexity, so task 3 is completed before task 4 (task complexity). For serial queues, only one task can be executed at a time, and the tasks are executed strictly in the order in the queue.

More rigorously, there is a difference between parallelism, which is true simultaneous execution across multiple cores, and concurrency, which is a time-slice rotation mechanism on the CPU that merely makes tasks look as if they execute simultaneously. At the macro level we can treat the two as the same thing, because CPU time slices are so small.

2.3.3 The Main Queue

The main queue is a special serial queue that is created and bound to the main thread when libdispatch is initialized (libdispatch_init). This all happens before the main() function runs.

This means that by the time the program finishes launching, it already has a main queue, and all tasks placed on the main queue are executed on the main thread. Whether dispatched synchronously or asynchronously, no new thread is created; tasks are executed only on the main thread. This is why tasks that refresh the UI are usually put on the main queue.

The main queue can be obtained via dispatch_get_main_queue().

2.3.4 Global Concurrent Queue

The global concurrent queue is a concurrent queue provided by the system for convenience; you can use it without creating it. You obtain it with dispatch_get_global_queue(long identifier, unsigned long flags). This function takes two arguments. The first is the queue priority, usually written as 0, which means the default priority; Quality of Service (QoS) class values can also be used to obtain global concurrent queues of different priorities:

 *  - QOS_CLASS_USER_INITIATED
 *  - QOS_CLASS_DEFAULT
 *  - QOS_CLASS_UTILITY
 *  - QOS_CLASS_BACKGROUND

The older dispatch queue priority constants can also be used; they map to QoS classes as follows:

 *  - DISPATCH_QUEUE_PRIORITY_HIGH:         QOS_CLASS_USER_INITIATED
 *  - DISPATCH_QUEUE_PRIORITY_DEFAULT:      QOS_CLASS_DEFAULT
 *  - DISPATCH_QUEUE_PRIORITY_LOW:          QOS_CLASS_UTILITY
 *  - DISPATCH_QUEUE_PRIORITY_BACKGROUND:   QOS_CLASS_BACKGROUND

The second parameter is reserved; passing 0 is recommended, because Apple notes that passing any value other than zero may result in a NULL return value.


One article describes queues, tasks, and threads with an easy-to-understand analogy, reproduced here:

Suppose five people have to pass through a gate that has 10 entrances in total. The administrator can decide how many entrances to open at the same time, and whether to let one person or several people through at once. By default, however, the administrator opens only one entrance, and each entrance lets only one person through at a time.

In this story, people are tasks, the administrator is the system, and entrances are threads.

  • Five people means five tasks, and 10 entrances represent 10 threads.
  • A serial queue is like the five people standing in one long line.
  • A concurrent queue is like the five people standing in several queues, say two or three.
  • A synchronous task is like the administrator opening only one entrance (the current thread).
  • An asynchronous task is like the administrator opening several entrances at the same time (the current thread + new threads).

“Asynchronous execution + concurrent queue” can be interpreted as: now the administrator opens multiple entrances (e.g., 3 entrances) and 5 people form multiple queues (e.g., 3 queues) so that all 5 people can pass through the gate together at the same time.

"Synchronous execution + concurrent queue" can be understood as: the administrator opens only one entrance, and the 5 people line up in several queues. Although they stand in multiple lines, there is only 1 entrance, and it can only pass 1 person at a time; even though all 5 want to hurry through, they have to go one by one. The result: they pass through the entrance in order.

In GCD terms:

  • “Asynchronous execution + concurrent queue” means that the system starts multiple threads (main thread + other sub-threads), and multiple tasks can run simultaneously.
  • "Synchronous execution + concurrent queue" means the system does not start any new thread; although the tasks sit in a concurrent queue, they can only be executed one by one on the current thread.

3. Collocation of queues and functions

There are two types of queue (serial/concurrent) and two types of task execution (synchronous/asynchronous) in GCD, so naturally there are four combinations:

1. Synchronous execution + serial queue 2. Synchronous execution + concurrent queue 3. Asynchronous execution + serial queue 4. Asynchronous execution + concurrent queue

The global concurrent queue gives the same synchronous/asynchronous results as a custom concurrent queue. The main queue, however, is a special serial queue: tasks on the main queue are always executed on the main thread. If we consider the main queue separately as well, there are six combinations:

5. Synchronous execution + main queue 6. Asynchronous execution + main queue

So what are the differences between these different combinations?

3.1 Synchronous + Serial

// Synchronous + serial tasks
- (void)sync_serial {

    // Print the current thread
    NSLog(@"currentThread---%@", [NSThread currentThread]);
    NSLog(@"begin");
    
    // Serial queue
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_SERIAL);
    
    // Task 1
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task1---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 2
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task2---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 3
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task3---%@", [NSThread currentThread]);  // Print the current thread
    });
    
    NSLog(@"end");
}

Print result:

currentThread---<NSThread: 0x600002314640>{number = 1, name = main}
begin
task1---<NSThread: 0x600002314640>{number = 1, name = main}
task2---<NSThread: 0x600002314640>{number = 1, name = main}
task3---<NSThread: 0x600002314640>{number = 1, name = main}
end

As you can see, for a synchronous + serial combination:

  • All tasks are executed on the current thread (here, the main thread); no new thread is started (synchronous execution cannot start new threads).
  • Tasks are executed strictly from top to bottom, in order (synchronous execution waits for the current task to complete before continuing).

3.2 Synchronous + Concurrent

// Synchronous + concurrent tasks
- (void)sync_concurrent {
    
    // Print the current thread
    NSLog(@"currentThread---%@", [NSThread currentThread]);
    NSLog(@"begin");
    
    // Concurrent queue
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_CONCURRENT);
    
    // Task 1
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task1---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 2
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task2---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 3
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task3---%@", [NSThread currentThread]);  // Print the current thread
    });
    
    NSLog(@"end");
}

Print result:

currentThread---<NSThread: 0x600002aa0100>{number = 1, name = main}
begin
task1---<NSThread: 0x600002aa0100>{number = 1, name = main}
task2---<NSThread: 0x600002aa0100>{number = 1, name = main}
task3---<NSThread: 0x600002aa0100>{number = 1, name = main}
end

The synchronous + concurrent result is the same as the synchronous + serial result above.

  • All tasks are executed on the current thread (here, the main thread); no new thread is started (synchronous execution cannot start new threads).
  • Tasks are executed from top to bottom, in order (synchronous execution waits for the current task to complete before continuing).

When we discussed concurrent queues, we said they allow multiple tasks to execute concurrently, so why are the tasks still executed sequentially? Because tasks are performed by threads, not by queues. Although a concurrent queue supports executing multiple tasks at once, synchronous execution cannot start threads, so only the current thread is available. There is only one current thread (the main thread), so the tasks execute one by one on it.

3.3 Asynchronous + Serial

// Asynchronous + serial tasks
- (void)async_serial {
    
    // Print the current thread
    NSLog(@"currentThread---%@", [NSThread currentThread]);
    NSLog(@"begin");
    
    // Serial queue
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_SERIAL);
    
    // Task 1
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task1---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 2
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task2---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 3
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task3---%@", [NSThread currentThread]);  // Print the current thread
    });
    
    NSLog(@"end");
}

Print result:

currentThread---<NSThread: 0x600000470340>{number = 1, name = main}
begin
end
task1---<NSThread: 0x60000043dc00>{number = 6, name = (null)}
task2---<NSThread: 0x60000043dc00>{number = 6, name = (null)}
task3---<NSThread: 0x60000043dc00>{number = 6, name = (null)}

As you can see, for an asynchronous + serial combination:

  • A new thread is started (asynchronous execution can start threads), but only one, no matter how many tasks there are (a serial queue executes all its tasks on the same thread).
  • All tasks are executed after begin and end (asynchronous execution continues without waiting for the task to complete).
  • Tasks are executed in their order in the queue (a serial queue executes only one task at a time, one after another).

3.4 Asynchronous + Concurrent

// Asynchronous + concurrent tasks
- (void)async_concurrent {
    
    // Print the current thread
    NSLog(@"currentThread---%@", [NSThread currentThread]);
    NSLog(@"begin");
    
    // Concurrent queue
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_CONCURRENT);
    
    // Task 1
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task1---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 2
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task2---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 3
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task3---%@", [NSThread currentThread]);  // Print the current thread
    });
    
    NSLog(@"end");
}

Print result:

currentThread---<NSThread: 0x600003e7c180>{number = 1, name = main}
begin
end
task1---<NSThread: 0x600003e35580>{number = 3, name = (null)}
task3---<NSThread: 0x600003e353c0>{number = 6, name = (null)}
task2---<NSThread: 0x600003e34bc0>{number = 7, name = (null)}

As you can see, for an asynchronous + concurrent combination:

  • In this example three threads are started, and the tasks execute out of order, interleaved (asynchronous execution can start new threads, and a concurrent queue can use multiple threads to run multiple tasks at the same time).
  • Tasks are executed after begin and end (asynchronous execution continues without waiting for the task to complete).

3.5 Synchronous + Main Queue

// Synchronous + main queue tasks
- (void)sync_main {
    
    // Print the current thread
    NSLog(@"currentThread---%@", [NSThread currentThread]);
    NSLog(@"begin");
    
    // Main queue
    dispatch_queue_t queue = dispatch_get_main_queue();
    
    // Task 1
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task1---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 2
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task2---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 3
    dispatch_sync(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task3---%@", [NSThread currentThread]);  // Print the current thread
    });
    
    NSLog(@"end");
}

Print result:

currentThread---<NSThread: 0x6000021a49c0>{number = 1, name = main}
begin
(lldb) 

In the synchronous + main queue case, the program crashes right after printing begin. The reason:

When task 1 is appended, it is added to the main queue. Since the main queue is a serial queue, task 1 must wait for the currently executing main-queue work, sync_main, to finish; but since the dispatch is synchronous, sync_main must wait for task 1 to finish. The two wait on each other, producing a deadlock.

So, when performing a synchronous + main queue task on the main thread:

  • A deadlock occurs.

In fact, synchronously dispatching a task to the serial queue the code is currently running on always causes a deadlock (see Examples 3 and 4 in Section 4, Examples of complex combinations of GCD functions and queues).

However, if you put the sync + main queue task on another thread (not the main thread), no deadlock will occur.

  • All tasks are executed on the main thread rather than the calling thread, and the tasks are executed sequentially.

3.6 Asynchronous + Main Queue

// Asynchronous + main queue tasks
- (void)async_main {
    
    // Print the current thread
    NSLog(@"currentThread---%@", [NSThread currentThread]);
    NSLog(@"begin");
    
    // Main queue
    dispatch_queue_t queue = dispatch_get_main_queue();
    
    // Task 1
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task1---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 2
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task2---%@", [NSThread currentThread]);  // Print the current thread
    });
    // Task 3
    dispatch_async(queue, ^{
        sleep(1);                                        // Simulate a time-consuming operation
        NSLog(@"task3---%@", [NSThread currentThread]);  // Print the current thread
    });
    
    NSLog(@"end");
}

Print result:

currentThread---<NSThread: 0x6000012641c0>{number = 1, name = main}
begin
end
task1---<NSThread: 0x6000012641c0>{number = 1, name = main}
task2---<NSThread: 0x6000012641c0>{number = 1, name = main}
task3---<NSThread: 0x6000012641c0>{number = 1, name = main}

For the asynchronous + main queue combination:

  • All tasks are executed on the main thread (asynchronous execution can start threads, but because this is the main queue, everything runs on the main thread).
  • Tasks are executed after begin and end (asynchronous execution continues without waiting for the task to complete).
  • Tasks are executed sequentially (the main queue is a serial queue that executes only one task at a time, one after another).

Summary

The first thing to be clear about is that tasks are executed on threads. Threads are created by the system; neither queues nor functions can create threads, they can only cause threads to be used.

A queue is just a container for tasks. Whether serial or concurrent, tasks leave the queue first in, first out, which is the basic property of a queue. In a serial queue, however, only one task is dispatched at a time, and the next can be dispatched only after the previous one completes. In a concurrent queue, tasks are dispatched according to the number of available threads: with 3 available threads, three tasks are dispatched at once; with only 1, tasks are dispatched one at a time, just like a serial queue.

Functions determine how tasks are executed.

With synchronous execution, the current thread must wait for the task to complete before continuing with subsequent work. With asynchronous execution, the current thread submits the task and continues immediately.

No matter how “queue” is paired with “function”, it does not affect the basic nature of queues and functions.

For a synchronous function, there is no need to start a thread, because the caller is waiting for the task's result anyway. For an asynchronous function, the current thread does not care about the task's result right now and must continue immediately, so the task cannot run on the current thread; otherwise it would still block it, contradicting the nature of asynchrony.

So:

  • (1) Synchronous execution never starts a new thread; the task is executed either on the current thread or on the main thread (for the main queue).
  • (2) Asynchronous execution never runs the task on the thread that dispatched it (except for the main queue); the task is executed either on a new thread or on the main thread (for the main queue).
  • (3) The main queue is special: any task appended to it is always executed by the main thread; no matter how busy the main thread is, the task waits until it is free. And dispatching synchronously to the main queue from the main thread deadlocks. (In fact, synchronously appending a task to the serial queue you are currently running on always deadlocks; see Examples 3 and 4 in Section 4, Examples of complex combinations of GCD functions and queues.)
  • (4) Only asynchronous + concurrent executes tasks out of order.

4. Examples of complex combinations of GCD functions and queues

Now that you understand the properties of functions and queues, let’s look at how they work together.

Example 1: What is the output of the following code for the execution of an asynchronous concurrent task?

- (void)textDemo1 {
    
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_async(queue, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
}

The code executes from top to bottom. The current thread (the main thread) first creates a concurrent queue, then prints "1" and appends async task 1 to the concurrent queue. Because of the nature of asynchrony, the main thread continues downward and prints "5". The asynchronous task on the concurrent queue opens a child thread, so "2" is printed next on child thread 1. Then async task 2 is appended to the concurrent queue; again because of asynchrony, child thread 1 does not wait and continues directly, printing "4". Finally async task 2 executes on another child thread, printing "3".

So it’s 1, 5, 2, 4, 3.

Note that similarly nested code does not necessarily execute in this order. If, for example, we simulate extra work on the main thread by adding sleep(1) before NSLog(@"5"), the result becomes 1, 2, 4, 3, 5. As mentioned earlier, when a task completes depends on task complexity and CPU scheduling.

Example 2: What is the output of the following code when a synchronization task is executed in a concurrent queue?

- (void)textDemo2 {
    
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_sync(queue, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
}

A concurrent queue is created and "1" is printed; then an asynchronous task is appended to the concurrent queue. Because of asynchrony, the main thread continues and prints "5". The asynchronous task opens child thread 1, which prints "2", then appends a synchronous task to the concurrent queue. Because it is synchronous, child thread 1 must wait for the sync task to finish; in other words, "4" can only be printed after "3" completes. So the answer is: 1 5 2 3 4.

Example 3: What is the output of the following code when a synchronization task is executed in the current serial queue?

- (void)textDemo3 {
    
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_SERIAL);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_sync(queue, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
}

A serial queue is created and "1" is printed; async task 1 is appended to the serial queue, and since it is asynchronous, the main thread continues directly and prints "5". The serial queue then dispatches async task 1. Inside that task's block the code still executes in order; breaking the work down, the tasks in the serial queue are laid out as shown below.

Following FIFO, NSLog(@"2") runs first, printing "2"; then the synchronous task is appended to the serial queue, so NSLog(@"3") joins the queue. At this point the tasks in the serial queue are arranged as shown in the following figure.

Because the synchronous task was appended last, the sync call waits for NSLog(@"3") to complete before continuing. In addition, because this is a serial queue, the tasks must execute in sequence: NSLog(@"3") waits for NSLog(@"4"), and NSLog(@"4") waits for the sync call.

Everything ends up waiting on everything else, so the output is 1, 5, 2 followed by a deadlock.

Put another way: "2" is printed on child thread 1, then sync task 1 is appended to the serial queue, so the queue now holds two tasks (async task 1, which has not finished, and the newly added sync task 1). Because NSLog(@"3") is dispatched synchronously, the running async task must wait for NSLog(@"3") to complete; and because the queue is serial, NSLog(@"3") must wait for the async task to complete. This mutual waiting produces the deadlock.

Example 4: What if the NSLog(@"4") task is removed?

- (void)textDemo4 {
    
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_SERIAL);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_sync(queue, ^{
            NSLog(@"3");
        });
    });
    NSLog(@"5");
}

The analysis proceeds as in Example 3: create the serial queue, print "1" on the main thread, append async task 1 to the serial queue, print "5" on the main thread. The serial queue then dispatches the asynchronous task, which prints "2" and appends sync task 1 to the serial queue. The queue now holds two tasks: the unfinished async task 1 and the newly appended sync task 1. Although the async task no longer contains an NSLog(@"4"), the async task as a whole still has to wait for the sync task; and because the queue is serial, the sync task has to wait for the async task to complete. This mutual waiting still causes a deadlock.

Example 5: Let's look at a multiple-choice example.

- (void)textDemo5 {
    dispatch_queue_t queue = dispatch_queue_create("com.lg.cooci.cn", DISPATCH_QUEUE_CONCURRENT);
    // 1 2 3
    // 0 (7 8 9)
    dispatch_async(queue, ^{
        NSLog(@"1");
    });
    dispatch_async(queue, ^{
        NSLog(@"2");
    });
    
    dispatch_sync(queue, ^{
        NSLog(@"3");
    });
    
    NSLog(@"0");

    dispatch_async(queue, ^{
        NSLog(@"7");
    });
    dispatch_async(queue, ^{
        NSLog(@"8");
    });
    dispatch_async(queue, ^{
        NSLog(@"9");
    });
    // A: 1230789
    // B: 1237890
    // C: 3120798
    // D: 2137890
}

The queue here is concurrent. Under asynchronous + concurrent, tasks at the same level have no fixed order. Printing "3" is a synchronous task, so printing "0" must wait for "3" to complete; therefore "3" must come before "0". The tasks after "0" are asynchronous and concurrent, so "7", "8", "9" must come after "0". The answers are therefore A and C.

5. Communication between GCD threads

In development we usually put time-consuming work on a child thread while the main thread continues with other work; when the child thread finishes, it notifies the main thread to refresh the UI or perform other operations. So how do we, from a child thread, hand a task back to the main thread?

In fact, it is very simple to use the main queue, because the tasks in the main queue must be executed in the main thread.

/** * Communication between threads */
- (void)communication {
    
    NSLog(@"begin");
    // Get the global concurrent queue
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    // Get the main queue
    dispatch_queue_t mainQueue = dispatch_get_main_queue();
    
    dispatch_async(queue, ^{
        // The child thread executes
        // 1. Get imageUrl
        NSURL *imageUrl = [NSURL URLWithString:@"https://xxxxx.jpg"];
        // 2. Read data from imageUrl (download image) -- time-consuming operation
        NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
        // 3. Create an image from binary data
        UIImage *image = [UIImage imageWithData:imageData];
        
        NSLog(@"1---%@", [NSThread currentThread]);
        
        // Return to the main thread
        dispatch_async(mainQueue, ^{
            self.imageView.image = image;
            NSLog(@"2---%@", [NSThread currentThread]);
        });
    });
    NSLog(@"end");
}

The above code requests the image data in the child thread, and when the data is obtained, it returns to the main thread to update the image. The printed result is as follows, which is in line with our expectations.

begin
end
1---<NSThread: 0x600003540980>{number = 3, name = (null)}
2---<NSThread: 0x60000350c240>{number = 1, name = main}

Extension: Why update the UI on the main thread?

First of all, UIKit is not thread-safe. When multiple threads operate on the UI at the same time, they compete for resources, which can lead to crashes, UI anomalies, and so on. For example, if two threads set the same background image, the image may be released twice and crash the program; or if one thread is traversing the subviews while another thread removes a subview, the traversal is corrupted.

So why not make UIKit thread-safe? For performance and efficiency. Multithreaded access implies thread-synchronization overhead, and UIKit is a huge framework: UI operations touch rendering and the attributes of many view objects. Guaranteeing thread-safe access across the whole view hierarchy and all its properties would carry an enormous cost, and the locking required would consume resources and slow everything down, without necessarily making anything more efficient.

So UI operations are best confined to a single thread. Which thread? In the Cocoa Touch framework, UIApplication is initialized on the main thread, and (from a memory-management perspective) every view on screen hangs off the UIApplication instance, so all user-interaction events are delivered on the main thread and responded to on the main thread.

Working with the UI on the main thread helps us avoid unnecessary complications and bugs, and it becomes a common development rule.

So, can child threads update the UI at all?

Sometimes it appears to work, but it is unreliable. A "UI update from a child thread" is largely an illusion: after the child thread's code finishes, control returns to the main thread, which then flushes the UI updates queued by the child thread. That gap is so short that it looks as if the child thread updated the UI itself. If the child thread keeps running, however, the main thread knows nothing about the queued UI updates, and the screen cannot update until the child thread ends.

6. GCD common functions

With GCD, we usually use an “asynchronous + concurrent” format to perform related tasks in child threads, which allows the main thread to continue to perform subsequent operations and allows the CPU core to perform multiple tasks more quickly. But asynchronously concurrent tasks are executed out of order, and it is uncertain which task finishes first. What if the execution of one (group) task in the queue depends on data from another (group) task?

GCD provides fence functions, scheduling groups, and semaphores to solve this problem.

6.1 Fence functions: dispatch_barrier_async/sync

The fence function inserts a "fence" at a specific position in the queue, and the fence itself can also carry a task.

Only after every task ahead of the fence has finished is the fence task scheduled, and only after the fence task finishes are the tasks behind it scheduled.

/** * dispatch_barrier_async */
- (void)barrier {
    
    dispatch_queue_t queue = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"begin");
    dispatch_async(queue, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];               // Simulate time-consuming operations
        NSLog(@"1---%@", [NSThread currentThread]);      // Prints the current thread
    });
    dispatch_async(queue, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];               // Simulate time-consuming operations
        NSLog(@"2---%@", [NSThread currentThread]);      // Prints the current thread
    });
    
    dispatch_barrier_async(queue, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];               // Simulate time-consuming operations
        NSLog(@"barrier---%@", [NSThread currentThread]);// Prints the current thread
    });
    
    dispatch_async(queue, ^{
        // Append task 3
        [NSThread sleepForTimeInterval:2];               // Simulate time-consuming operations
        NSLog(@"3---%@", [NSThread currentThread]);      // Prints the current thread
    });
    dispatch_async(queue, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];               // Simulate time-consuming operations
        NSLog(@"4---%@", [NSThread currentThread]);      // Prints the current thread
    });
    NSLog(@"end");
}


Print result:

begin
end
2      ---<NSThread: 0x600003cefbc0>{number = 5, name = (null)}
1      ---<NSThread: 0x600003ceff80>{number = 4, name = (null)}
barrier---<NSThread: 0x600003ceff80>{number = 4, name = (null)}
3      ---<NSThread: 0x600003ceff80>{number = 4, name = (null)}
4      ---<NSThread: 0x600003cefbc0>{number = 5, name = (null)}

If you replace the dispatch_barrier_async function with dispatch_barrier_sync in the above example, the following output is printed:

begin
2      ---<NSThread: 0x60000186a740>{number = 6, name = (null)}
1      ---<NSThread: 0x600001868140>{number = 5, name = (null)}
barrier---<NSThread: 0x600001820680>{number = 1, name = main}
end
3      ---<NSThread: 0x600001868140>{number = 5, name = (null)}
4      ---<NSThread: 0x600001818bc0>{number = 8, name = (null)}

Note that the fence function is meaningful only on a custom concurrent queue. On a global concurrent queue, dispatch_barrier_async behaves like an ordinary dispatch_async and the fence has no effect; on a serial queue, tasks already execute one after another, so a fence adds nothing.
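As an aside, a common application of the fence pattern is a "multi-read, single-write" container: reads go through dispatch_sync and may run concurrently, while writes go through dispatch_barrier_async and run exclusively. Below is a minimal sketch of that idea; the class name SafeDictionary and the queue label are made up for illustration.

```objectivec
// A minimal multi-read, single-write container built on a custom
// concurrent queue plus dispatch_barrier_async.
// The class name and queue label are hypothetical.
@interface SafeDictionary : NSObject
- (id)objectForKey:(NSString *)key;
- (void)setObject:(id)object forKey:(NSString *)key;
@end

@implementation SafeDictionary {
    dispatch_queue_t _queue;        // Must be a custom concurrent queue for the barrier to work
    NSMutableDictionary *_storage;
}

- (instancetype)init {
    if (self = [super init]) {
        _queue = dispatch_queue_create("com.xxx.safedict", DISPATCH_QUEUE_CONCURRENT);
        _storage = [NSMutableDictionary dictionary];
    }
    return self;
}

// Reads may run concurrently with each other
- (id)objectForKey:(NSString *)key {
    __block id result = nil;
    dispatch_sync(_queue, ^{
        result = self->_storage[key];
    });
    return result;
}

// A write waits for in-flight reads, runs alone, then lets reads resume
- (void)setObject:(id)object forKey:(NSString *)key {
    dispatch_barrier_async(_queue, ^{
        self->_storage[key] = object;
    });
}
@end
```

Because the write is asynchronous, the caller is never blocked by a write, yet any read issued after the write will observe the new value, since the barrier orders it behind the write in the queue.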

6.2 Dispatch Group: dispatch_group

A common requirement in our programs is to run some follow-up operation only after several tasks appended to queues have all finished. With a single serial queue this is trivial: append every task you want to execute, then append the finishing operation last. But what if you use a concurrent queue, or several different queues? The requirement suddenly becomes much harder.

A serial queue can satisfy it, but the time-consuming tasks then run one after another, which is inefficient. Usually we want tasks to run asynchronously on child threads so they do not block user interaction or the main thread's work. That means concurrent queues; yet asynchronous concurrent tasks are scheduled across multiple child threads and finish in no fixed order, and monitoring tasks spread across different queues is just as hard. For this, GCD provides dispatch_group.

The dispatch_group functions are as follows:

  • dispatch_group_notify
  • dispatch_group_enter / dispatch_group_leave
  • dispatch_group_wait

The scheduling group keeps a count of the tasks associated with it: associating a new task increments the count (+1), and a completing task decrements it (-1). When all tasks associated with the dispatch group are complete, the group unblocks dispatch_group_wait and schedules the dispatch_group_notify block.

  • dispatch_group_notify: monitors the completion of the tasks in the group

When in use, we call the dispatch_group_async function to append tasks to specified queues and associate them with the scheduling group. These queues can be unrelated; tasks may be submitted to different queues as long as they are associated with the same group. The group monitors the execution of its tasks, and when all of them are complete, the block passed to dispatch_group_notify is executed.

Let's implement a case with four tasks: task 1, task 2, task 3, and task 4. Task 3 must run after task 2, task 4 must run after the first three tasks, and task 4 must run on the main thread.

Analysis is as follows:

Task 3 must run after task 2, so those two should execute sequentially; as a pair, they can run concurrently with task 1. Finally, task 4 runs after the first three complete. With a scheduling group this can be implemented as follows:

Case 1:

/** * dispatch_group_notify */
- (void)groupNotify {
    
    // Get the global concurrent queue
    dispatch_queue_t globalQuene = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    // Create a serial queue
    dispatch_queue_t serialQuene = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_SERIAL);
    // Create a scheduling group
    dispatch_group_t group = dispatch_group_create();
    
    NSLog(@"begin");
    
    // Submit task 1 to the global concurrent queue and associate the scheduling group
    dispatch_group_async(group, globalQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"1---%@", [NSThread currentThread]); // Prints the current thread
    });
    // Submit task 2 to the serial queue and associate the scheduling group
    dispatch_group_async(group, serialQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"2---%@", [NSThread currentThread]); // Prints the current thread
    });
    // Submit task 3 to the serial queue and associate the scheduling group
    dispatch_group_async(group, serialQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"3---%@", [NSThread currentThread]); // Prints the current thread
    });
    // Submit task 4 to the main queue once the group completes
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"4---%@", [NSThread currentThread]); // Prints the current thread
        NSLog(@"group---end");
    });
    
    NSLog(@"end");
}

Print the following:

begin
end
2     ---<NSThread: 0x600000c74600>{number = 6, name = (null)}
1     ---<NSThread: 0x600000c79cc0>{number = 7, name = (null)}
3     ---<NSThread: 0x600000c79cc0>{number = 4, name = (null)}
4     ---<NSThread: 0x6000013445c0>{number = 1, name = main}
group---end

The result of the example may also be 1234. However, the dispatch_group_notify task is called only after the three asynchronous tasks have been executed. This does not depend on the queue to which the task belongs or the location of dispatch_group_notify. Even if we put the dispatch_group_notify function before the three asynchronous tasks, the result is the same.

The dispatch_group_notify function listens to all tasks associated with the dispatch group before executing its own tasks. The underlying layer is achieved through semaphores.

  • dispatch_group_enter / dispatch_group_leave

In the above example, we submit tasks to queues and associate scheduling groups with the dispatch_group_async function. In fact, there is a more flexible way to use dispatch_group_enter with dispatch_group_leave.

// Method 1:
dispatch_group_async(group, queue, ^{
    // ...
});

// Method 2:
dispatch_group_enter(group);
dispatch_async(queue, ^{
    // ...
    dispatch_group_leave(group);
});

To some extent, method one and method two are equivalent. (dispatch_group_async is itself implemented with dispatch_group_enter and dispatch_group_leave.)

dispatch_group_enter marks entry into the group: each call increments the group's count of outstanding tasks by 1. dispatch_group_leave marks a task as finished: each call decrements that count by 1. When the count reaches 0, dispatch_group_wait unblocks and the dispatch_group_notify block is called back.

Let’s change the above case to dispatch_group_enter/dispatch_group_leave as follows:

Case 2:

/** * Dispatch group: dispatch_group_enter and dispatch_group_leave */
- (void)groupEnter_leave {
    
    // Get the global concurrent queue
    dispatch_queue_t globalQuene = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    // Create a serial queue
    dispatch_queue_t serialQuene = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_SERIAL);
    // Create a scheduling group
    dispatch_group_t group = dispatch_group_create();
    
    NSLog(@"begin");
    
    // Submit task 1 to the global concurrent queue and associate the scheduling group
    dispatch_group_enter(group);
    dispatch_async(globalQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"1---%@", [NSThread currentThread]); // Prints the current thread
        dispatch_group_leave(group);
    });
    
    // Submit task 2 to the serial queue and associate the scheduling group
    dispatch_group_enter(group);
    dispatch_async(serialQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"2---%@", [NSThread currentThread]); // Prints the current thread
        dispatch_group_leave(group);
    });
    // Submit task 3 to the serial queue and associate the scheduling group
    dispatch_group_enter(group);
    dispatch_async(serialQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"3---%@", [NSThread currentThread]); // Prints the current thread
        dispatch_group_leave(group);
    });
    
    // Submit task 4 to the main queue once the group completes
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"4---%@", [NSThread currentThread]); // Prints the current thread
        NSLog(@"group---end");
    });
    
    NSLog(@"end");
}


The result is the same as the preceding example.

When using this method, ensure that dispatch_group_enter and dispatch_group_leave are used in matched pairs; otherwise the dispatch_group_notify block may never be called.

  • dispatch_group_wait

Once called, dispatch_group_wait blocks the thread that calls it. The function returns only when

  • the wait time specified by timeout is reached, or
  • all operations associated with the dispatch group have completed.

That is, dispatch_group_wait waits, blocking the current thread, until one of these conditions occurs.

A timeout of DISPATCH_TIME_FOREVER means waiting forever; a timeout of DISPATCH_TIME_NOW means the function returns immediately, letting you check whether all tasks associated with the dispatch group have finished without waiting at all.

If the function returns 0, all tasks associated with the scheduling group have completed; a non-zero return value means some tasks did not complete within the specified time. You can branch on the return value to detect that the wait period was exceeded.

Add the dispatch_group_wait function to the code for case 1 as follows:

/** * dispatch_group_notify + dispatch_group_wait */
- (void)groupNotify {
    
    // Get the global concurrent queue
    dispatch_queue_t globalQuene = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    // Create a serial queue
    dispatch_queue_t serialQuene = dispatch_queue_create("com.xxx.queue", DISPATCH_QUEUE_SERIAL);
    // Create a scheduling group
    dispatch_group_t group = dispatch_group_create();
    
    NSLog(@"begin");
    
    // Submit task 1 to the global concurrent queue and associate the scheduling group
    dispatch_group_async(group, globalQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"1---%@", [NSThread currentThread]); // Prints the current thread
    });
    // Submit task 2 to the serial queue and associate the scheduling group
    dispatch_group_async(group, serialQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"2---%@", [NSThread currentThread]); // Prints the current thread
    });
    // Submit task 3 to the serial queue and associate the scheduling group
    dispatch_group_async(group, serialQuene, ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"3---%@", [NSThread currentThread]); // Prints the current thread
    });
    // Submit task 4 to the main queue once the group completes
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        [NSThread sleepForTimeInterval:2];          // Simulate time-consuming operations
        NSLog(@"4---%@", [NSThread currentThread]); // Prints the current thread
        NSLog(@"group---end");
    });
    
    long time = dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    NSLog(@"time = %ld", time);
    
    NSLog(@"end");
}

The print result is as follows:

begin
2     ---<NSThread: 0x6000036dcd40>{number = 6, name = (null)}
1     ---<NSThread: 0x6000036d8fc0>{number = 4, name = (null)}
3     ---<NSThread: 0x6000036dcd40>{number = 6, name = (null)}
time = 0
end
4     ---<NSThread: 0x60000369c040>{number = 1, name = main}
group---end

When timeout is set to DISPATCH_TIME_FOREVER, the dispatch_group_wait will block the current thread until all tasks associated with the dispatch group are completed. Therefore, the main thread code will not continue to execute until tasks 2, 1, and 3 are completed. If the printed time value is 0, all associated tasks are completed. The dispatch_group_notify function is called back to perform task 4.

If timeout is set to DISPATCH_TIME_NOW instead, the output is as follows:

begin
time = 49
end
1     ---<NSThread: 0x6000038b0ac0>{number = 6, name = (null)}
2     ---<NSThread: 0x6000038c6c00>{number = 3, name = (null)}
3     ---<NSThread: 0x6000038c6c00>{number = 3, name = (null)}
4     ---<NSThread: 0x6000038f0100>{number = 1, name = main}
group---end

6.3 Semaphore: dispatch_semaphore

dispatch_semaphore is built on the Mach kernel's semaphore interface, which is complex because it sits on top of operating-system primitives. But we can think of it more simply: a semaphore is essentially a signal that holds a count. Signals can be passed between threads, and a thread decides whether to proceed or block based on the count the semaphore holds.

At the same time, semaphores are powerful.

  • A semaphore can control the maximum number of concurrent accesses by a thread;
  • Semaphores can keep threads synchronized, synchronize asynchronous tasks, and allow multiple asynchronous threads with different queues to execute serially;
  • Semaphores can ensure thread safety and lock the thread;

Semaphores allow you to control the access of multiple consumers to a limited number of resources. If you create a semaphore with two resources, a maximum of two threads can access a critical section at a time. Other threads that want to use the resource must wait in a FIFO queue. If you create a semaphore with only one resource, then only one thread can access the critical section at a time.
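For instance, creating a semaphore with a count of 2 caps the number of tasks inside the critical section at two at any moment. The sketch below is illustrative only: the queue choice, the task count of 6, and the sleep interval are arbitrary.

```objectivec
// Limit a concurrent queue to at most 2 tasks in flight at once.
// The task count of 6 and the 1-second sleep are illustrative.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_semaphore_t semaphore = dispatch_semaphore_create(2); // 2 "resources"

for (int i = 0; i < 6; i++) {
    // Blocks here once 2 tasks are already running; resumes when one signals
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    dispatch_async(queue, ^{
        NSLog(@"task %d start---%@", i, [NSThread currentThread]);
        [NSThread sleepForTimeInterval:1];    // Simulate a time-consuming operation
        NSLog(@"task %d end", i);
        dispatch_semaphore_signal(semaphore); // Release one "resource"
    });
}
```

In the log you would see the tasks start roughly two at a time: a third task cannot start until one of the first two calls dispatch_semaphore_signal.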

Common functions in semaphores are:

  • dispatch_semaphore_create: creates a semaphore with an initial count.
  • dispatch_semaphore_signal: sends a signal, incrementing the semaphore count by 1.
  • dispatch_semaphore_wait: waits for a signal, decrementing the count by 1; if the count then drops below 0, the thread blocks according to the specified timeout.

How does a semaphore work?

Let’s first look at AFNetworking’s semaphores

- (NSArray *)tasksForKeyPath:(NSString *)keyPath {
    __block NSArray *tasks = nil;
    // Initialize the semaphore
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
    [self.session getTasksWithCompletionHandler:^(NSArray *dataTasks, NSArray *uploadTasks, NSArray *downloadTasks) {
        if ([keyPath isEqualToString:NSStringFromSelector(@selector(dataTasks))]) {
            tasks = dataTasks;
        } else if ([keyPath isEqualToString:NSStringFromSelector(@selector(uploadTasks))]) {
            tasks = uploadTasks;
        } else if ([keyPath isEqualToString:NSStringFromSelector(@selector(downloadTasks))]) {
            tasks = downloadTasks;
        } else if ([keyPath isEqualToString:NSStringFromSelector(@selector(tasks))]) {
            tasks = [@[dataTasks, uploadTasks, downloadTasks] valueForKeyPath:@"@unionOfArrays.self"];
        }
        // Send the signal
        dispatch_semaphore_signal(semaphore);
    }];
    // Wait for the signal
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

    return tasks;
}

The dispatch_semaphore_create function first creates a semaphore with an initial count.

dispatch_semaphore_t
dispatch_semaphore_create(intptr_t value);

The parameter specifies the starting count of the semaphore, which is the maximum number of threads that can access the critical section at once. (Note: here the initial value is 0, so any thread that waits on this semaphore blocks until another thread signals it.)

Since the completion handler runs asynchronously, we can set it aside for the moment and follow the calling thread, which proceeds directly to the wait:

intptr_t
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)

Waits for a signal, decrementing the semaphore count by 1. If the count is then less than zero, the function waits (blocking the thread) according to the specified timeout.

This is similar to dispatch_group_wait in the dispatch group, which is actually implemented by semaphores.

In the AFNetworking code above, the semaphore was created with an initial count of 0, so dispatch_semaphore_wait drops the count below zero; and because the timeout is DISPATCH_TIME_FOREVER, it waits, blocking the calling thread, until it receives a signal.

Since the calling thread is blocked, we turn to the asynchronous completion block. After the child thread finishes its work, it calls the dispatch_semaphore_signal function:

intptr_t
dispatch_semaphore_signal(dispatch_semaphore_t dsema);

The dispatch_semaphore_signal function sends signals that increase the count of the semaphore by 1. If the previous value is less than zero, this function will wake up the thread currently waiting at dispatch_semaphore_wait (return non-zero if a thread is awakened, zero otherwise).

Finally, the main thread is woken up and the asynchronous task is returned correctly for use by the other AFNetworking modules.

For a low-level implementation of semaphores, see iOS Multithreaded Programming (v) GCD fundamentals.

6.4 One-time execution: dispatch_once

The dispatch_once function guarantees that a block of code is executed only once for the lifetime of the program, and can be used wherever that guarantee is needed. Its most common use is creating singletons.

/** * dispatch_once */
- (void)once {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Code here executes only once (thread-safe by default)
    });
}
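As the most common application, a singleton's shared-instance accessor can be sketched like this; the class name Manager is hypothetical:

```objectivec
// A typical thread-safe singleton built on dispatch_once.
// The class name "Manager" is hypothetical.
@implementation Manager

+ (instancetype)sharedInstance {
    static Manager *instance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Runs exactly once for the lifetime of the process,
        // even if sharedInstance is called from multiple threads at once
        instance = [[Manager alloc] init];
    });
    return instance;
}

@end
```

Every caller of [Manager sharedInstance] receives the same object, and the initialization is safe without any explicit lock.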

6.5 Delay execution function: dispatch_after

If you need a task to execute after a specified number of seconds, you can use the dispatch_after function. The task is appended to the queue after the specified time elapses, but note that this timing is not exact.

/** * dispatch_after */
- (void)after {

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        // After 2.0 seconds, append the task to the main queue asynchronously and execute it
        NSLog(@"after---%@", [NSThread currentThread]);  // Prints the current thread
    });
}

6.6 Quick iteration function: dispatch_apply

Usually we traverse with a for loop or for...in. GCD also provides a fast-iteration function: dispatch_apply.

The dispatch_apply function submits a block to a queue the specified number of times. It is fast because the iterations can execute on multiple threads simultaneously.

If dispatch_apply is used with a serial queue, the iterations still run one after another, so the fast iteration is pointless; with a concurrent queue, the iterations can run asynchronously on multiple threads at the same time.

/** * quick iteration function: dispatch_apply */
- (void)apply {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    
    NSLog(@"apply---begin");
    dispatch_apply(6, queue, ^(size_t index) {
        NSLog(@"%zd---%@",index, [NSThread currentThread]);
    });
    NSLog(@"apply---end");
}

The print result is as follows:

apply---begin
0---<NSThread: 0x600003d4c640>{number = 1, name = main}
1---<NSThread: 0x600003d7f000>{number = 5, name = (null)}
2---<NSThread: 0x600003d02880>{number = 6, name = (null)}
3---<NSThread: 0x600003d00940>{number = 4, name = (null)}
5---<NSThread: 0x600003d7f000>{number = 5, name = (null)}
4---<NSThread: 0x600003d4c640>{number = 1, name = main}
apply---end
Copy the code

As you can see, on a concurrent queue dispatch_apply executes the iterations on different threads, and their order is not fixed.

Note, however, that apply---end is printed last with both serial and concurrent queues: dispatch_apply is synchronous and does not return until all iterations have completed.