This is the sixth day of my participation in the August More Text Challenge. For details, see: August More Text Challenge.

A note up front: exploring the underlying principles of iOS is a road I walk step by step through continuous accumulation in my day-to-day development and learning. I record my journey of discovery here in the hope that it helps other readers.

The directory is as follows:

  1. iOS underlying principles: exploring alloc
  2. iOS underlying principles: memory alignment of structs
  3. iOS underlying principles: the nature of objects & the underlying implementation of isa
  4. iOS underlying principles: from isa to class (part 1)
  5. iOS underlying principles: from isa to class (part 2)
  6. iOS underlying principles: from isa to class (part 3)
  7. iOS underlying principles: the nature of Runtime & methods
  8. iOS underlying principles: objc_msgSend
  9. iOS underlying principles: the Runtime slow lookup process
  10. iOS underlying principles: dynamic method resolution
  11. iOS underlying principles: the message forwarding process
  12. iOS underlying principles: application loading (part 1)
  13. iOS underlying principles: application loading (part 2)
  14. iOS underlying principles: class loading
  15. iOS underlying principles: category loading
  16. iOS underlying principles: associated objects
  17. iOS underlying principles: KVC
  18. iOS underlying principles: KVO | August More Text Challenge
  19. iOS underlying principles: rewriting KVO | August More Text Challenge
  20. iOS underlying principles: multithreading | August More Text Challenge

Summary column for the series above:

  • iOS underlying principles: a phase summary

Preface

In iOS underlying principles: multithreading | August More Text Challenge, we sorted out the life cycle of threads, the concepts of threads and processes, and the relationship between them, and also touched on the meaning and principles of multithreading.

In fact, Apple's Threading Programming Guide has a section on alternatives to threads, because one of the problems with creating your own threads is that they add uncertainty to your code. Threads are a relatively low-level and complicated way to support concurrency in an application, and if you do not fully understand the implications of your choices, you can easily run into synchronization or timing problems whose severity ranges from subtle behavioral changes to application crashes and data corruption. So today we will look at the tool we use most, GCD, and explore GCD's functions and queues.

Dispatch queues

A Grand Central Dispatch (GCD) dispatch queue is a powerful tool for performing tasks. Dispatch queues let you execute arbitrary blocks of code either asynchronously or synchronously with respect to the caller. You can use dispatch queues to perform nearly all of the tasks that you used to perform on separate threads. The advantage of dispatch queues is that they are simpler to use and much more efficient at executing those tasks than the corresponding threaded code.

About Dispatch Queues

Dispatch queues are an easy way to perform tasks asynchronously and concurrently in your application. A task is simply some work that your application needs to perform. For example, you could define a task to perform some calculations, create or modify a data structure, process some data read from a file, or any number of things. You define a task by placing the corresponding code inside a function or block object and adding it to a dispatch queue.

A dispatch queue is an object-like structure that manages the tasks you submit to it. All dispatch queues are first-in, first-out data structures, so the tasks you add to a queue are always started in the order they were added. GCD provides some dispatch queues for you automatically, but you can create others for specific purposes. Table 3-1 lists the types of dispatch queues available to your application and how you use them.

Table 3-1 Dispatch queue types

  • Serial: Serial queues (also known as private dispatch queues) execute one task at a time, in the order they were added to the queue. The currently executing task runs on a distinct thread (which can vary from task to task) that is managed by the dispatch queue. Serial queues are often used to synchronize access to a specific resource. You can create as many serial queues as you need, and each queue operates concurrently with respect to all the others. In other words, if you create four serial queues, each queue executes only one task at a time, but up to four tasks could still execute concurrently, one from each queue. For information on how to create serial queues, see Create a serial dispatch queue.
  • Concurrent: Concurrent queues (also known as global dispatch queues) execute one or more tasks concurrently, but tasks are still started in the order they were added to the queue. The currently executing tasks run on distinct threads that are managed by the dispatch queue. The exact number of tasks executing at any given point is variable and depends on system conditions. In iOS 5 and later, you can create concurrent dispatch queues yourself by specifying the DISPATCH_QUEUE_CONCURRENT queue type. In addition, there are four predefined global concurrent queues for your application to use. For more information on how to get the global concurrent queues, see Get the global concurrent dispatch queues.
  • Main dispatch queue: The main dispatch queue is a globally available serial queue that executes tasks on the application's main thread. This queue works with the application's run loop (if one is present) to interleave the execution of queued tasks with the execution of other event sources attached to the run loop. Because it runs on your application's main thread, the main queue is often used as a key synchronization point for an application. Although you do not need to create the main dispatch queue, you do need to make sure your application drains it appropriately. For more information on how this queue is managed, see Perform tasks on the main thread.

Dispatch queues have several advantages over threads when it comes to adding concurrency to an application. The most direct advantage is the simplicity of the work-queue programming model. With threads, you have to write code both for the work you want to perform and for the creation and management of the threads themselves. Dispatch queues let you focus on the work you actually want to perform without having to worry about thread creation and management; the system handles all of that for you. The advantage is that the system can manage threads much more efficiently than any single application ever could: it can scale the number of threads dynamically based on the available resources and current system conditions. In addition, the system is usually able to start running your task more quickly than you could if you created the thread yourself.

Although you might think rewriting your code for dispatch queues would be difficult, writing code for dispatch queues is often easier than writing code for threads. The key is to design tasks that are self-contained and able to run asynchronously. (This is actually true for both threads and dispatch queues.) Where dispatch queues have an advantage is in predictability. If you have two tasks that access the same shared resource but run on different threads, either thread could modify the resource first, and you would need a lock to ensure that both tasks do not modify that resource at the same time. With dispatch queues, you can add both tasks to a serial dispatch queue to ensure that only one task modifies the resource at any given time.
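For example, a minimal sketch of this pattern (the queue name, counter, and counts here are illustrative, not from the original text):

__block int counter = 0;  // the shared resource

// The guard queue runs at most one of these blocks at a time,
// so no lock is needed to protect the counter.
dispatch_queue_t guard = dispatch_queue_create("com.example.counter-guard", NULL);

for (int i = 0; i < 10; i++) {
    dispatch_async(guard, ^{
        counter += 1;   // only ever touched from the guard queue
    });
}

// Read the result synchronously so the read is ordered after the writes.
dispatch_sync(guard, ^{
    printf("counter = %d\n", counter);  // prints: counter = 10
});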

While it is true that two tasks running in a serial queue do not run at the same time, remember that if two threads both take a lock at the same time, any concurrency offered by the threads is lost or significantly reduced. More importantly, the threaded model requires the creation of two threads, which take up both kernel and user-space memory. Dispatch queues do not pay the same memory penalty for their threads, and the threads they do use are kept busy and not blocked.

Some other key points to remember about dispatch queues include:

  • Dispatch queues execute their tasks concurrently with respect to other dispatch queues. The serialization of tasks is limited to the tasks in a single dispatch queue.
  • The system determines the total number of tasks executing at any one time. Thus, an application with 100 tasks in 100 different queues may not execute all of those tasks concurrently (unless it has 100 or more effective cores).
  • Queue priority levels are taken into account when choosing which new tasks to start. For information about how to set the priority of a serial queue, see Providing a clean-up function for a queue.
  • Tasks in a queue must be ready to execute at the time they are added to the queue. (If you have used Cocoa operation objects before, notice that this behavior differs from the model operations use.)
  • Private dispatch queues are reference-counted objects. In addition to retaining the queue in your own code, be aware that dispatch sources can also be attached to a queue and increment its retain count. Thus, you must make sure that all dispatch sources are canceled and all retain calls are balanced with an appropriate release call. For more information about retaining and releasing queues, see Memory management for dispatch queues. For more information about dispatch sources, see About dispatch sources.

Queue-related technologies

In addition to dispatch queues, Grand Central Dispatch provides several technologies that use queues to help manage your code. Table 3-2 lists these technologies and provides links to where you can find more information about them.

Table 3-2 Technologies that use dispatch queues

  • Dispatch groups: A dispatch group is a way to monitor a set of block objects for completion. (You can monitor the blocks synchronously or asynchronously, depending on your needs.) Groups provide a useful synchronization mechanism for code that depends on the completion of other tasks. For more information about using groups, see Waiting on groups of queued tasks.
  • Dispatch semaphores: A dispatch semaphore is similar to a traditional semaphore but is generally more efficient. Dispatch semaphores call down to the kernel only when the calling thread needs to be blocked because the semaphore is unavailable. If the semaphore is available, no kernel call is made. For an example of how to use dispatch semaphores, see Using dispatch semaphores to regulate the use of finite resources.
  • Dispatch sources: A dispatch source generates notifications in response to specific types of system events. You can use dispatch sources to monitor events such as process notifications, signals, and descriptor events. When an event occurs, the dispatch source submits your task code asynchronously to the specified dispatch queue for processing. For more information about creating and using dispatch sources, see Dispatch sources.
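For instance, a minimal dispatch group sketch (the queue choice and task bodies are illustrative):

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

// Submit two independent tasks and associate them with the group.
dispatch_group_async(group, queue, ^{ printf("task A done\n"); });
dispatch_group_async(group, queue, ^{ printf("task B done\n"); });

// Block until every task in the group has finished
// (dispatch_group_notify is the asynchronous alternative).
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
printf("all tasks done\n");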

Use blocks to implement tasks

Block objects are a C-based language feature that you can use in your C, Objective-C, and C++ code. Blocks make it easy to define a self-contained unit of work. Although they might seem akin to function pointers, a block is actually represented by an underlying data structure that resembles an object and is created and managed for you by the compiler. The compiler packages the code you provide (along with any related data) into a form that can live on the heap and be passed around your application.

One of the key advantages of blocks is their ability to use variables from outside their own lexical scope. When you define a block inside a function or method, the block acts in some ways as a traditional code block would. For example, a block can read the values of variables defined in the parent scope. Variables accessed by the block are copied to the block's data structure on the heap so that the block can access them later. When blocks are added to a dispatch queue, these values must typically be left in a read-only format. However, blocks that are executed synchronously can also use variables that have the __block keyword prepended, to return data back to the parent's calling scope.
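As a small sketch of that __block behavior (the names here are illustrative):

__block int sum = 0;   // writable from inside the block
int limit = 100;       // captured read-only, copied by value

dispatch_queue_t queue = dispatch_queue_create("com.example.sum", NULL);

// Because the block runs synchronously, it is safe for it to write
// back into the parent scope through the __block variable.
dispatch_sync(queue, ^{
    for (int i = 1; i <= limit; i++) sum += i;
});

printf("sum = %d\n", sum);  // prints: sum = 5050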

You declare blocks inline in your code using a syntax that is similar to the syntax used for function pointers. The main difference between a block and a function pointer is that the block name is preceded by a caret (^) instead of an asterisk (*). Like a function pointer, you can pass arguments to a block and receive a return value from it. Listing 3-1 shows how to declare and execute a block synchronously in your code. The variable aBlock is declared to be a block that takes a single integer parameter and returns no value. An actual block matching that prototype is then assigned to aBlock and declared inline. The last line executes the block immediately, printing the specified integers to standard out.

Listing 3-1 is an example of a simple block

int x = 123;
int y = 456;
 
// Block declaration and assignment
void (^aBlock)(int) = ^(int z) {
    printf("%d %d %d\n", x, y, z);
};
 
// Execute the block
aBlock(789);   // prints: 123 456 789

The following is a summary of some key guidelines to consider when designing your blocks:

  • For blocks that you plan to execute asynchronously using a scheduling queue, it is safe to capture scalar variables from a parent function or method and use them in the block. However, you should not attempt to capture large structures or other pointer-based variables that are allocated and deleted by the calling context. By the time your block is executed, the memory referenced by this pointer may have disappeared. Of course, it is safe to allocate memory (or objects) yourself and explicitly transfer ownership of that memory to a block.
  • The scheduling queue copies the blocks that are added to it and releases the blocks when the execution is complete. In other words, you do not need to explicitly copy blocks before adding them to the queue.
  • Although queues are more efficient than raw threads for performing small tasks, there is still overhead in creating blocks and executing them on the queue. If a block does too little work, it may be cheaper to execute it inline than to dispatch it to a queue. The way to determine if a block is doing too little work is to use a performance tool to collect metrics for each path and compare them.
  • Do not cache data associated with the underlying thread with the expectation that it will be accessible from different blocks. If tasks in the same queue need to share data, the context pointer of the schedule queue is used to store the data.
  • If your block creates more than a few Objective-C objects, you might want to enclose parts of your block's code in an @autoreleasepool block to handle the memory management for those objects. Although GCD dispatch queues have their own autorelease pools, they make no guarantees as to when those pools are drained. If your application is memory constrained, creating your own autorelease pool allows you to free the memory for autoreleased objects at more regular intervals (a sketch follows this list).
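On that last point, a minimal sketch of wrapping object-heavy block work in an explicit pool (the loop body and counts are illustrative):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    for (int i = 0; i < 10000; i++) {
        @autoreleasepool {
            // Temporary objects created here are released every
            // iteration instead of whenever GCD drains its own pool.
            NSString *line = [NSString stringWithFormat:@"item %d", i];
            (void)line;  // process the object here
        }
    }
});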

Create and manage dispatch queues

Before you add your tasks to a queue, you have to decide what type of queue to use and how you intend to use it. Dispatch queues can execute tasks either serially or concurrently. In addition, if you have a specific use for the queue in mind, you can configure the queue attributes accordingly. The following sections show you how to create dispatch queues and configure them for use.

Get the global concurrent dispatch queues

A concurrent dispatch queue is useful when you have multiple tasks that can run in parallel. A concurrent queue is still a queue in that it dequeues tasks in a first-in, first-out order; however, a concurrent queue may dequeue additional tasks before any previous tasks finish. The actual number of tasks executed by a concurrent queue at any given moment is variable and can change dynamically as conditions in your application change. Many factors affect the number of tasks executed by concurrent queues, including the number of available cores, the amount of work being done by other processes, and the number and priority of tasks in other serial dispatch queues.

The system provides each application with four concurrent dispatch queues. These queues are global to the application and are differentiated only by their priority level. Because they are global, you do not create them explicitly. Instead, you ask for one of the queues using the dispatch_get_global_queue function, as shown in the following example:

dispatch_queue_t aQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

In addition to getting the default concurrent queue, you can also get queues with high and low priority levels by passing in the DISPATCH_QUEUE_PRIORITY_HIGH and DISPATCH_QUEUE_PRIORITY_LOW constants instead, or get a background queue by passing the DISPATCH_QUEUE_PRIORITY_BACKGROUND constant. As you might expect, tasks in the high-priority concurrent queue execute before those in the default and low-priority queues, and tasks in the default queue execute before those in the low-priority queue.
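For reference, the four predefined global queues can be fetched like this (a minimal sketch):

// From highest to lowest priority.
dispatch_queue_t high = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_queue_t def  = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_queue_t low  = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t bg   = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);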

Note: The second argument to the dispatch_get_global_queue function is reserved for future expansion. For now, you should always pass 0 for this argument.

Although dispatch queues are reference-counted objects, you do not need to retain and release the global concurrent queues. Because they are global to your application, retain and release calls for these queues are ignored. Therefore, you do not need to store references to these queues; you can just call dispatch_get_global_queue whenever you need a reference to one of them.

Create a serial dispatch queue

Serial queues are useful when you want your tasks to execute in a specific order. A serial queue executes only one task at a time and always pulls tasks from the head of the queue. You might use a serial queue instead of a lock to protect a shared resource or mutable data structure. Unlike a lock, a serial queue ensures that tasks execute in a predictable order, and as long as you submit your tasks to a serial queue asynchronously, the queue can never deadlock.

Unlike concurrent queues, which are created for you, you must explicitly create and manage any serial queues you want to use. You can create any number of serial queues for your application, but you should avoid creating large numbers of serial queues solely as a means of executing as many tasks simultaneously as possible. If you want to execute large numbers of tasks concurrently, submit them to one of the global concurrent queues. When creating serial queues, try to identify a purpose for each queue, such as protecting a resource or synchronizing some key behavior of your application.

Listing 3-2 shows the steps required to create a custom serial queue. The dispatch_queue_create function takes two parameters: the queue name and a set of queue attributes. The debugger and performance tools display the queue name to help you track how your tasks are being executed. The queue attributes are reserved for future use and should be NULL.

Listing 3-2 creates a new serial queue

dispatch_queue_t queue;
queue = dispatch_queue_create("com.example.MyQueue", NULL);

In addition to any custom queues you create, the system automatically creates a serial queue and binds it to your application's main thread. For more information about getting the queue for the main thread, see Get common queues at run time.

Get common queues at run time

Grand Central Dispatch provides functions to let you access several common dispatch queues from your application:

  • Use the dispatch_get_current_queue function for debugging purposes or to test the identity of the current queue. Calling this function from inside a block object returns the queue to which the block was submitted (and on which it is presumably now running). Calling this function from outside of a block returns the default concurrent queue for your application.
  • Use the dispatch_get_main_queue function to get the serial dispatch queue associated with your application's main thread. This queue is created automatically for Cocoa applications and for applications that either call the dispatch_main function or configure a run loop (using either the CFRunLoopRef type or an NSRunLoop object) on the main thread.
  • Use the dispatch_get_global_queue function to get any of the shared concurrent queues.

Add a task to a queue

To execute a task, you must dispatch it to an appropriate dispatch queue. You can dispatch tasks synchronously or asynchronously, and you can dispatch them singly or in groups. Once in a queue, the queue becomes responsible for executing your tasks as soon as possible, given its constraints and the existing tasks already in the queue.

Add a single task to a queue

There are two ways to add a task to a queue: asynchronously or synchronously. Where possible, asynchronous execution using the dispatch_async and dispatch_async_f functions is preferred over the synchronous alternative. When you add a block object or function to a queue, there is no way to know when that code will execute. As a result, adding blocks or functions asynchronously lets you schedule the execution of the code and continue to do other work from the calling thread. This is especially important if you are scheduling the task from your application's main thread, perhaps in response to some user event.
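For completeness, a small sketch of the function-based variant (the queue name, task function, and message are illustrative):

// A plain C function used as the task; the context pointer carries its data.
static void print_task(void *context) {
    printf("%s\n", (const char *)context);
}

dispatch_queue_t queue = dispatch_queue_create("com.example.ftask", NULL);

// dispatch_async_f takes a context pointer and a function instead of a block.
dispatch_async_f(queue, (void *)"hello from a function", print_task);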

Although you should add tasks asynchronously whenever possible, there may still be times when you need to add a task synchronously to prevent race conditions or other synchronization errors. In these instances, you can use the dispatch_sync and dispatch_sync_f functions to add the task to the queue. These functions block the current thread of execution until the specified task finishes executing.

Important: You should never call the dispatch_sync or dispatch_sync_f function from a task that is executing on the same queue that you plan to pass to the function. Doing so is guaranteed to deadlock serial queues, and it should also be avoided for concurrent queues.

The following example shows how to use the block-based variant to dispatch tasks asynchronously and synchronously:

dispatch_queue_t myCustomQueue;
myCustomQueue = dispatch_queue_create("com.example.MyCustomQueue", NULL);
 
dispatch_async(myCustomQueue, ^{
    printf("Do some work here.\n");
});
 
printf("The first block may or may not have run.\n");
 
dispatch_sync(myCustomQueue, ^{
    printf("Do some more work here.\n");
});
printf("Both blocks have completed.\n");

Execute a completion block when a task is done

By their nature, tasks dispatched to a queue run independently of the code that created them. However, when the task is done, your application might still want to be notified of that fact so that it can incorporate the results. With traditional asynchronous programming, you might do this using a callback mechanism; with dispatch queues, you can use a completion block.

The completion block is just another piece of code that you dispatch to a queue at the end of your original task. The calling code typically provides the completion block as a parameter when it starts the task. All the task code has to do is submit the specified block or function to the specified queue when it finishes its work.

Listing 3-4 shows an averaging function implemented using blocks. The last two parameters to the averaging function allow the caller to specify a queue and block to use when reporting the results. After the averaging function computes its value, it passes the result to the specified block and dispatches it to the queue. To prevent the queue from being released prematurely, it is critical to retain that queue initially and release it once the completion block has been dispatched.

Listing 3-4 performs the completion callback after the task

void average_async(int *data, size_t len,
   dispatch_queue_t queue, void (^block)(int))
{
   // Retain the queue provided by the user to make
   // sure it does not disappear before the completion
   // block can be called.
   dispatch_retain(queue);
 
   // Do the work on the default concurrent queue and then
   // call the user-provided block with the results.
   dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
      int avg = average(data, len);
      dispatch_async(queue, ^{ block(avg); });

      // Release the user-provided queue when done
      dispatch_release(queue);
   });
}

Perform tasks on the main thread

Grand Central Dispatch provides a special dispatch queue that you can use to execute tasks on your application's main thread. This queue is provided automatically for all applications and is drained automatically by any application that sets up a run loop (managed either by a CFRunLoopRef type or an NSRunLoop object) on its main thread. If you are not creating a Cocoa application and do not want to set up a run loop explicitly, you must call the dispatch_main function to drain the main dispatch queue explicitly. You can still add tasks to the queue, but if you do not call this function, those tasks are never executed.

You can get the dispatch queue for your application's main thread by calling the dispatch_get_main_queue function. Tasks added to this queue are performed serially on the main thread itself. Therefore, you can use this queue as a synchronization point for work being done in other parts of your application.
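A typical use of this, sketched under the assumption of some background work followed by a main-thread update (names and log text are illustrative):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Do the long-running work off the main thread.
    NSString *result = [NSString stringWithFormat:@"finished at %@", [NSDate date]];

    // Use the main queue as the synchronization point: hop back to it
    // for UI updates or other main-thread-only state.
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"back on the main thread: %@", result);
    });
});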

Summary

GCD

Grand Central Dispatch is a pure C API that provides many powerful functions.

Advantages

  • GCD is Apple's solution for multi-core parallel computing;
  • GCD automatically makes use of more of the CPU's cores;
  • GCD automatically manages the thread life cycle (creating threads, scheduling tasks, destroying threads);
  • In development, we only need to tell GCD what task to perform and hand it over to GCD; we do not need to write any thread-management code.

Usage

Using GCD means adding a task to a queue and specifying the function that executes the task.

Functions

The task is the content to execute; it is encapsulated in a block that takes no parameters and returns no value.

dispatch_async executes tasks asynchronously:

  • It can continue to the next statement without waiting for the current statement to finish;
  • It can start a new thread to execute the block's task;
  • Asynchrony is practically a byword for multithreading.

dispatch_sync executes tasks synchronously:

  • It must wait for the current statement to finish before executing subsequent statements;
  • It does not start a new thread;
  • It executes the block's task in the current thread (see the sketch below).
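A minimal sketch contrasting the two (queue name and log text are illustrative):

dispatch_queue_t queue = dispatch_queue_create("com.example.demo", NULL);

dispatch_async(queue, ^{ NSLog(@"async task"); });  // returns immediately
NSLog(@"after async");  // may print before or after "async task"

dispatch_sync(queue, ^{ NSLog(@"sync task"); });    // blocks until the block finishes
NSLog(@"after sync");   // always prints after "sync task"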

Queues

A queue is a first-in, first-out (FIFO) data structure: tasks added first are scheduled first.

A serial queue schedules only one task at a time for execution; a concurrent queue can schedule multiple tasks for execution at the same time.

The main queue

  • dispatch_get_main_queue()
  • A serial queue dedicated to scheduling tasks on the main thread;
  • If a task is currently executing on the main thread, any task added to the main queue will not be scheduled until that task finishes.

Global concurrent queue

  • For our convenience, Apple provides a global queue: dispatch_get_global_queue(0, 0);
  • The global queue is a concurrent queue;
  • In multithreaded development, if there is no special requirement for the queue, asynchronous tasks can be submitted directly to the global queue.

Functions and queues

Synchronous function + serial queue

  • No new thread is started; tasks execute in the current thread;
  • Tasks execute one after another;
  • It blocks the current thread.

Synchronous function + concurrent queue

  • No new thread is started; tasks execute in the current thread;
  • Tasks execute one after another.

Asynchronous function + serial queue

  • A new thread is started;
  • Tasks execute one after another.

Asynchronous function + concurrent queue

  • New threads are started;
  • Tasks execute concurrently, and their order depends on CPU scheduling (see the sketch below).
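A sketch covering all four combinations (queue names are illustrative; thread identities will vary from run to run):

dispatch_queue_t serial     = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

// 1. Synchronous + serial: current thread, blocking, one by one.
dispatch_sync(serial, ^{ NSLog(@"sync-serial on %@", [NSThread currentThread]); });

// 2. Synchronous + concurrent: still the current thread, still blocking.
dispatch_sync(concurrent, ^{ NSLog(@"sync-concurrent on %@", [NSThread currentThread]); });

// 3. Asynchronous + serial: one new thread, tasks in order.
dispatch_async(serial, ^{ NSLog(@"async-serial on %@", [NSThread currentThread]); });

// 4. Asynchronous + concurrent: new threads, order decided by the scheduler.
for (int i = 0; i < 3; i++) {
    dispatch_async(concurrent, ^{ NSLog(@"async-concurrent %d on %@", i, [NSThread currentThread]); });
}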

Validation

Cases

Time-consuming cases

Creating a queue is itself a time-consuming operation

A serial queue performing synchronous tasks

A serial queue performing asynchronous tasks

A concurrent queue performing synchronous tasks

A concurrent queue performing asynchronous tasks
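The original post demonstrates these cases with screenshots; as a rough sketch of how such timings can be taken (counts and task bodies are illustrative, and absolute numbers vary by machine):

CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
dispatch_queue_t queue = dispatch_queue_create("com.example.timing", DISPATCH_QUEUE_SERIAL);
NSLog(@"creating the queue took %f s", CFAbsoluteTimeGetCurrent() - start);

start = CFAbsoluteTimeGetCurrent();
for (int i = 0; i < 1000; i++) {
    dispatch_sync(queue, ^{ /* a small task */ });
}
NSLog(@"1000 synchronous tasks took %f s", CFAbsoluteTimeGetCurrent() - start);

start = CFAbsoluteTimeGetCurrent();
for (int i = 0; i < 1000; i++) {
    dispatch_async(queue, ^{ /* a small task */ });
}
// The loop returns before the async tasks actually finish, which is
// where the apparent time saving comes from.
NSLog(@"submitting 1000 asynchronous tasks took %f s", CFAbsoluteTimeGetCurrent() - start);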

Conclusions

  • Whether tasks are initiated synchronously or asynchronously, on a serial or a concurrent queue, they all take a certain amount of time;
  • Initiating tasks asynchronously can save some of that time.

A classic case

The order in which tasks are executed in a concurrent queue

Because task 1 is logged before the asynchronous block is submitted, 1 prints before 2. The block is dispatched asynchronously, so the calling thread continues and 5 also prints before 2. Task 3 is submitted synchronously from inside the block, so 4 prints only after 3 completes. The final order is therefore 1-5-2-3-4. Validation:
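The original demo is shown as a screenshot; a minimal reconstruction (the queue name is illustrative):

dispatch_queue_t queueT = dispatch_queue_create("com.example.queueT", DISPATCH_QUEUE_CONCURRENT);

NSLog(@"1");
dispatch_async(queueT, ^{
    NSLog(@"2");
    dispatch_sync(queueT, ^{
        NSLog(@"3");  // on a concurrent queue this does not deadlock
    });
    NSLog(@"4");      // runs only after 3, because of dispatch_sync
});
NSLog(@"5");          // the async block above has not run yet
// Typical output order: 1 5 2 3 4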

The order in which tasks are executed in a serial queue

With a serial queue, it crashes. Why is that? Once the queue is changed to a serial queue, tasks execute one at a time: 1 and 5 print first, then 2 prints, and up to this point everything is fine.

Submitting task 3 synchronously blocks the code after it, and because a serial queue can only execute tasks one at a time, task 3 must wait for the currently executing block (the one containing tasks 2 and 4) to finish; but that block is itself waiting for task 3 to return. Thus, a deadlock is created.

That is why the crash trace ends in _dispatch_sync_f_slow.
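A sketch of the serial variant (the same code as above, with only the queue type changed):

dispatch_queue_t queueT = dispatch_queue_create("com.example.queueT", DISPATCH_QUEUE_SERIAL);

NSLog(@"1");
dispatch_async(queueT, ^{
    NSLog(@"2");
    // Deadlock: this dispatch_sync waits for task 3, but task 3 is queued
    // behind the block we are still inside, and that block cannot finish
    // until dispatch_sync returns. The crash trace shows _dispatch_sync_f_slow.
    dispatch_sync(queueT, ^{
        NSLog(@"3");
    });
    NSLog(@"4");  // never reached
});
NSLog(@"5");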

Conclusion

Today we organized and summarized GCD's functions and queue-related concepts, and illustrated how those concepts are used in practice with a demo. In the next installment, we will dig into the GCD source code and explore how it works. Everybody, come on!!