Modernizing Grand Central Dispatch Usage. These are pure notes on the WWDC session at developer.apple.com/wwdc17/706, kept for my own convenience in case I forget, and a good reference to fall back on.

GCD for concurrency

How does code run in parallel on multiple cores at the same time? Concurrency is about how you structure your application out of independent components that can run at the same time; it can be implemented even on a single-core system, because it is about how the different tasks that make up your application interleave. Parallelism usually needs more support: multiple cores that you want to use all at the same time. So let's start with parallelism and how to use it when you are writing an application. For example, imagine you are developing an app that processes a lot of images, and you want to take advantage of the Mac Pro's multiple cores to process those images faster.

This increases your speed because multiple cores are processing different parts of the image at the same time

And how do we do that?

concurrentPerform

GCD expresses parallelism using an API called concurrentPerform

DispatchQueue.concurrentPerform(iterations: 1000) { i in /* iteration i */ }

DispatchQueue.concurrentPerform, introduced in Swift 3.0, is the same as the earlier C API dispatch_apply(1000, DISPATCH_APPLY_AUTO, ^(size_t i){ /* iteration i */ }).

How parallel optimization is implemented in GCD

The iteration count here is a subtle choice. For example, suppose we divide a task into three parts, execute them in parallel, and output the result once they have all finished. The execution might look like this:

Here you can see the ideal scenario: three blocks of code running in parallel on all three cores.

But the real world isn't always this perfect. What happens if the third core is taken over to perform UI rendering?

Because the third core is occupied, the load balancer has to move that third block of code onto the first core in order to execute it. Meanwhile a CPU sits idle, time we could have used to do more parallel work, so instead our work takes longer.

Use an iteration count large enough that the load balancer can flexibly fill gaps in the system and maximize the use of the available resources. But remember that not every CPU is always available; the system runs many tasks at the same time, and we can't chase maximum parallelism at the expense of those other tasks.
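As a minimal sketch of that advice (processTile and tileCount are illustrative names, not from the session): split the work into many small tiles rather than one chunk per core, so the load balancer has room to fill gaps.

import Dispatch

func processTile(at index: Int) {
    // placeholder for the per-tile image work
}

// Many small iterations rather than one chunk per core: if a core is
// stolen for other work, only one small tile is delayed instead of a
// third of the whole job.
let tileCount = 1000
DispatchQueue.concurrentPerform(iterations: tileCount) { i in
    processTile(at: i)
}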

Concurrency

Let's imagine that we are writing a simple news application. It has a user interface layer, a networking layer, and a data layer.

Let’s look at multithreading in terms of running tasks

The UI runs on the main thread, which has the highest priority; if a user action occurs while a secondary thread is performing a task, that thread may be preempted on the CPU so the higher-priority work can run. Concurrency is well supported on almost all platforms and languages, and its benefits are undeniable:

1. It improves resource utilization. When we talk about resources on the machine we generally mean the CPU, but network and disk I/O are much slower than memory I/O. If we start multiple threads, the CPU can execute thread B while thread A is doing I/O, and execute A while B is doing I/O, which yields higher CPU utilization than running A and B serially.
2. Faster response. After the main thread accepts the user's request, we hand the time-consuming operation over to a child thread, leaving the user free to do something else while waiting.
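A minimal sketch of point 2, with fetchArticles and render as hypothetical stand-ins for the news app's networking and UI code:

import Dispatch

func fetchArticles() -> [String] {
    // placeholder for slow network I/O
    return ["headline"]
}

func render(_ articles: [String]) {
    // placeholder for UI work; must happen on the main thread
}

// Hand the time-consuming work to a background queue, then hop back to
// the main queue so the UI stays responsive while the fetch runs.
DispatchQueue.global(qos: .userInitiated).async {
    let articles = fetchArticles()
    DispatchQueue.main.async {
        render(articles)
    }
}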

Let's see what that looks like for the CPU. The white lines up here are context switches between subsystems.

You can use Instruments for tracing: it shows you what the CPU and threads are doing while your application is running. If you want more information, refer to last year's session on system tracing, System Trace in Depth, where the Instruments team described how to use it.

Disadvantages of concurrency

Excessive thread switching, such as in the example below, wastes resources on switching between threads: a task may take only 20 µs of actual work, while a thread switch itself takes 10 µs.

Concurrency and locks

The concept most commonly associated with multiple threads is the lock. It is inevitable that multiple threads will be interested in the same resource, and to keep the resource's state consistent we need to control reads and writes to it:

NSLock: the most basic mutex.
NSRecursiveLock: a recursive lock, which the same thread can acquire multiple times without causing a deadlock.
NSCondition: contains a lock plus a condition variable that threads can wait on.
NSConditionLock: a condition lock; you don't need to check the condition yourself, because acquiring the lock checks whether the condition is met.
pthread_mutex: the POSIX mutex, a very easy-to-use and familiar mutex.
pthread_rwlock: a read-write lock; multiple threads can read the shared resource at once, but writing is exclusive.
OSSpinLock: a spin lock. The difference from a mutex: with a mutex, once one thread acquires the lock, other threads that want it block until the lock is released; with a spin lock, the other threads loop, repeatedly checking whether the lock has been released.
os_unfair_lock: the lock Apple provides to address priority inversion, discussed separately below.
dispatch_semaphore: a semaphore mechanism that can implement a lock; I use these a lot.
@synchronized: a convenient way to create a mutex that does everything the other mutexes do, but with poor performance.
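As a minimal sketch (SharedCounter is an illustrative name, not from the session), here are two of these in use: NSLock protecting a shared counter, and a dispatch semaphore with a count of 1 acting as a mutex.

import Foundation

final class SharedCounter {
    private let lock = NSLock()
    private var value = 0

    func increment() {
        lock.lock()
        value += 1     // critical section protected by the mutex
        lock.unlock()
    }
}

let semaphore = DispatchSemaphore(value: 1) // a count of 1 behaves as a mutex
semaphore.wait()    // acquire
// ... critical section ...
semaphore.signal()  // release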

Lock ownership: with pthread_mutex and os_unfair_lock, the lock can be owned by only one thread at a time, and other threads have to wait. With dispatch_semaphore and NSCondition, the locked resource is not owned by any particular thread; any thread may wait on or signal it. With pthread_rwlock, multiple threads can hold read access to the shared resource at the same time.

Let's start with fairness. When multiple threads compete for a lock at the same time, a fair lock tries to treat the different threads as fairly as possible, and this fairness comes at a significant performance cost. In the figure above, the lock is held by thread 1, then released and handed to thread 2, causing a context switch. Priority inversion can also occur with locks: once a lower-priority task holds the resource, it may run ahead of a higher-priority task that is waiting for it.
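A minimal Swift sketch of wrapping os_unfair_lock (the UnfairLock wrapper and its names are my own, not from the session; the lock is heap-allocated so it has a stable address):

import os

final class UnfairLock {
    private let lockPtr: UnsafeMutablePointer<os_unfair_lock>

    init() {
        // heap-allocate so the lock has a stable memory address
        lockPtr = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        lockPtr.initialize(to: os_unfair_lock())
    }

    deinit {
        lockPtr.deinitialize(count: 1)
        lockPtr.deallocate()
    }

    func lock() { os_unfair_lock_lock(lockPtr) }
    func unlock() { os_unfair_lock_unlock(lockPtr) }
}

Because the lock records which thread owns it, the system can resolve priority inversion by boosting the owning thread.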

Repeatedly switching between different operations

Example: each network connection corresponds to a serial queue, and the code on these queues executes when the network changes. In the figure below, S represents a source that fires on network changes, and Q represents the serial queue on which its code will execute.

When the network changes, the three queues bring up three threads.

This is where a context switch occurs

But if you target the three tasks at the same queue, then when the network changes, the code for all three sources executes on that single queue, with no context switches. In addition, if too many tasks are added to the global concurrent queue, too many threads will be created. Apple recommends that every subsystem (database, network, ...) have its own queue hierarchy, as shown below:
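A sketch of that setup (the label and helper are illustrative, not from the session): all connection sources target one serial queue for the networking subsystem, so their handlers serialize without context switching between threads.

import Dispatch

// One serial queue for the whole networking subsystem (illustrative label).
let networkQueue = DispatchQueue(label: "com.example.network")

func makeConnectionSource(fd: Int32) -> DispatchSourceRead {
    // Every connection's read source targets the same serial queue,
    // so their handlers run one after another on a single thread.
    let source = DispatchSource.makeReadSource(fileDescriptor: fd, queue: networkQueue)
    source.setEventHandler {
        // handle incoming data for this connection
    }
    source.activate()
    return source
}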

Unified Queue Identity

Presumably each queue has an identity in the kernel, and scheduling can be optimized based on that identity. Previously, when two source queues were targeted at one queue, S2 firing while S1 was being processed would create a new thread. After the optimization, no additional thread is created: when S2 fires, the system knows the corresponding queue is already executing and marks the event; when the current processing finishes, the queue then handles it.

How to handle this in code

Do not change a source's handlers or target queue after the activate call. The system creates a snapshot at activation and optimizes scheduling based on that snapshot.

let mySource = DispatchSource.makeReadSource(fileDescriptor: fd, queue: myQueue)
mySource.setEventHandler(qos: .userInteractive) { ... }
mySource.setCancelHandler { close(fd) }
mySource.activate()
mySource.setTarget(queue: otherQueue) // Wrong! Never mutate a source after activate()

Serial Dispatch Queue

Dispatch Source

Dispatch Source is a basic type in GCD; the name can be read literally as a dispatch source of events. Its job is to catch certain low-level system events when they occur and then perform whatever logic you attach. There are multiple types of dispatch source, each listening for the corresponding type of system event. Let's look at the types:

Timer Dispatch Source: a timer source.
Signal Dispatch Source: listens for UNIX signals, such as SIGSTOP, the suspend signal.
Descriptor Dispatch Source: listens for file-related and socket-related operations.
Process Dispatch Source: listens for process-related state.
Mach Port Dispatch Source: listens for Mach-related events.
Custom Dispatch Source: listens for custom events.

Using the GCD functions, you specify the type of system event to listen for, specify a closure or function as the callback that handles the event, and then specify the dispatch queue on which that callback executes. When the specified system event is detected, the callback is run as a task on the specified queue. In other words, when a system event is observed, a task is triggered and automatically enqueued.
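A minimal sketch of that pattern using a timer source (the queue label is illustrative): choose the event type, attach a handler, pick the queue, and activate.

import Dispatch

let timerQueue = DispatchQueue(label: "com.example.timer")
let timer = DispatchSource.makeTimerSource(queue: timerQueue)

// Fire one second from now, then every second afterwards.
timer.schedule(deadline: .now() + 1, repeating: .seconds(1))
timer.setEventHandler {
    print("timer fired") // runs as a task on timerQueue
}
timer.activate()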

Once the Dispatch Source is associated with a Dispatch Queue, whenever it observes a system event it automatically adds its task (the callback) to the associated queue. Sometimes the callback takes a long time to execute, and during that time the source may observe multiple system events, which in theory would form a backlog. Fortunately, Dispatch Source has a good mechanism to solve this: when multiple events back up, they are merged according to the event type, forming a single new event.
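A sketch of that coalescing behavior with a custom "add" source (names are illustrative): events merged while the handler is busy arrive as one, with data reporting the summed total.

import Dispatch

let eventQueue = DispatchQueue(label: "com.example.events")
let eventSource = DispatchSource.makeUserDataAddSource(queue: eventQueue)

eventSource.setEventHandler {
    // `data` is the coalesced sum of everything merged since the handler
    // last ran; five quick add(data: 1) calls may print "5" just once.
    print("coalesced count:", eventSource.data)
}
eventSource.activate()

for _ in 0..<5 { eventSource.add(data: 1) }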

Protecting the queue hierarchy

Specify the target queue when creating the queue.

// Preferred: specify the target queue atomically when creating the queue
Q1 = dispatch_queue_create_with_target("Q1", DISPATCH_QUEUE_SERIAL, EQ);

// Rather than creating the queue first and retargeting it afterwards:
Q1 = dispatch_queue_create("Q1", DISPATCH_QUEUE_SERIAL);
dispatch_set_target_queue(Q1, EQ);
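In Swift, assuming the same queues (EQ standing for the subsystem's root queue; labels illustrative), the target can likewise be fixed at creation time:

import Dispatch

let eq = DispatchQueue(label: "EQ")             // the subsystem's root queue
let q1 = DispatchQueue(label: "Q1", target: eq) // target set at creation time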

Use the new tools

Instruments has a GCD Performance tool for diagnosing these problems.

The end.