• The GCD Handbook
  • Original post by Soroush
  • The Nuggets Translation Project
  • Translator: LoneyIsError
  • Proofreader: woopqww111hsinshufan

Grand Central Dispatch, or GCD, is an extremely powerful tool. It gives you low-level components, like queues and semaphores, that you can combine in interesting ways to achieve useful multithreading effects. Unfortunately, the C-based API is a bit of a mystery, and it isn't obvious how to combine these low-level components into higher-level behaviors. In this article, I hope to describe some of the uses you can build with the low-level components that GCD provides.

Doing background work

Probably the simplest use: GCD lets you do some work on a background thread, and then hop back to the main thread to continue processing, since components like those in UIKit can (mostly) be used only from the main thread.

In this article, I'll use the doSomeExpensiveWork() method to represent a long-running task that returns a value.

The basic pattern looks like this:

let defaultPriority = DISPATCH_QUEUE_PRIORITY_DEFAULT
let backgroundQueue = dispatch_get_global_queue(defaultPriority, 0)
dispatch_async(backgroundQueue, {
	let result = doSomeExpensiveWork()
	dispatch_async(dispatch_get_main_queue(), {
		//use `result` somehow
	})
})

In practice, I never use any queue priority other than DISPATCH_QUEUE_PRIORITY_DEFAULT. This returns a global concurrent queue that can have hundreds of threads executing at once. You can also create your own queue with dispatch_queue_create if your expensive work should always happen on one specific background queue. dispatch_queue_create takes a name and a type, either serial or concurrent.
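
For example, here is a minimal sketch of creating your own queues (the queue names are just placeholders):

// a serial queue executes one block at a time, in the order they were added
let serialQueue = dispatch_queue_create("com.example.serial-work", DISPATCH_QUEUE_SERIAL)
// a concurrent queue may execute many blocks at once
let concurrentQueue = dispatch_queue_create("com.example.concurrent-work", DISPATCH_QUEUE_CONCURRENT)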

Note that both dispatches use dispatch_async, not dispatch_sync. dispatch_async returns before the block is executed, while dispatch_sync waits until the block has finished. The inner dispatch can use dispatch_sync (since it doesn't really matter when it returns), but the outer one must be dispatch_async (otherwise the main thread would be blocked).

To create a singleton

dispatch_once is an API that can be used to create singletons. It's no longer necessary in Swift, which has an easier way to create singletons. For posterity, though, here it is in Objective-C:

+ (instancetype) sharedInstance {  
	static dispatch_once_t onceToken;  
	static id sharedInstance;  
	dispatch_once(&onceToken, ^{  
		sharedInstance = [[self alloc] init];  
	});  
	return sharedInstance;  
}  
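
The easier Swift way mentioned above is a static constant property; a minimal sketch (the class name is a placeholder):

class APIClient {
	// Swift initializes `static let` properties lazily and atomically,
	// so no dispatch_once is needed
	static let sharedInstance = APIClient()
}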

Flattening a completion block

This is where GCD starts to get interesting. Using a semaphore, we can block a thread for an arbitrary amount of time, until another thread sends it a signal. Semaphores, like the rest of GCD, are thread-safe, and they can be triggered from anywhere.

You can use semaphores when you need to make an asynchronous API synchronous, but can't modify it.

// on a background queue
let semaphore = dispatch_semaphore_create(0)
doSomeExpensiveWorkAsynchronously(completionBlock: {
    dispatch_semaphore_signal(semaphore)
})
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
//the expensive asynchronous work is now done

dispatch_semaphore_wait blocks the thread until dispatch_semaphore_signal is called. This means that signal must be called from a different thread, since the current thread is totally blocked. Further, you should never call wait from the main thread, only from background threads.

You can pass any timeout to dispatch_semaphore_wait, but I tend to always pass DISPATCH_TIME_FOREVER.
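
If you ever do want a finite timeout, here is a sketch of how one could be built, reusing the semaphore from the example above (the two-second duration is arbitrary):

// construct a point in time 2 seconds from now
let timeout = dispatch_time(DISPATCH_TIME_NOW, Int64(2 * NSEC_PER_SEC))
// dispatch_semaphore_wait returns non-zero if it timed out before the signal arrived
if dispatch_semaphore_wait(semaphore, timeout) != 0 {
	// handle the timeout
}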

It may not be entirely obvious why you would ever want to flatten code that already has a completion block, but it comes in handy. One case where I've used it recently is performing a series of asynchronous tasks that must happen serially, one after another. A simple abstraction built on this technique is called AsyncSerialWorker:

typealias DoneBlock = () -> ()
typealias WorkBlock = (DoneBlock) -> ()

class AsyncSerialWorker {
	private let serialQueue = dispatch_queue_create("com.khanlou.serial.queue", DISPATCH_QUEUE_SERIAL)

	func enqueueWork(work: WorkBlock) {
		dispatch_async(serialQueue) {
			let semaphore = dispatch_semaphore_create(0)
			work({
				dispatch_semaphore_signal(semaphore)
			})
			dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
		}
	}
}

This small class creates a serial queue and lets you enqueue blocks of work onto it. When your work is done, the WorkBlock calls the DoneBlock, which trips the semaphore and allows the serial queue to continue.
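
A hypothetical usage sketch, where fetchUserData stands in for any asynchronous call with a completion block:

let worker = AsyncSerialWorker()
worker.enqueueWork({ done in
	fetchUserData(completionBlock: {
		// calling done() trips the semaphore, letting the next block start
		done()
	})
})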

Limiting the number of concurrent blocks

In the previous example, the semaphore is used as a simple flag, but it can also be used as a counter for finite resources. If you only want to open a certain number of connections to a specific resource, you can use something like the code below:

class LimitedWorker {
    private let concurrentQueue = dispatch_queue_create("com.khanlou.concurrent.queue", DISPATCH_QUEUE_CONCURRENT)
    private let semaphore: dispatch_semaphore_t

    init(limit: Int) {
    	semaphore = dispatch_semaphore_create(limit)
    }

    func enqueueWork(work: () -> ()) {
        dispatch_async(concurrentQueue) {
            dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER)
            work()
            dispatch_semaphore_signal(self.semaphore)
        }
    }
}

This example is taken from Apple's Concurrency Programming Guide, which explains what's going on better than I can:

When you create a semaphore, you specify the number of available resources. This value becomes the initial count variable for the semaphore. Each time you wait on the semaphore, the dispatch_semaphore_wait function decrements that count variable by 1. If the resulting value is negative, the function tells the kernel to block your thread. On the other end, the dispatch_semaphore_signal function increments the count variable by 1 to indicate that a resource has been freed up. If there are tasks blocked and waiting for a resource, one of them is subsequently unblocked and allowed to do its work.

The effect is similar to NSOperationQueue's maxConcurrentOperationCount. If you're using raw GCD queues instead of NSOperationQueue, you can use semaphores to limit the number of blocks that execute simultaneously.
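
For comparison, a sketch of the NSOperationQueue version (the limit of 3 is arbitrary):

let operationQueue = NSOperationQueue()
// at most 3 operations will execute simultaneously
operationQueue.maxConcurrentOperationCount = 3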

One thing to note: each call to enqueueWork will spin up a new thread if the semaphore's limit has been reached. If you have a low limit and lots of enqueued work, you can create hundreds of threads. As always, profile first, and change the code second.
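
A hypothetical usage sketch, where urls and downloadSynchronously are stand-ins for your own resource and blocking work:

let worker = LimitedWorker(limit: 3)
for url in urls {
	worker.enqueueWork({
		// no more than 3 of these downloads will run at the same time
		downloadSynchronously(url)
	})
}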

Waiting for many concurrent tasks to complete

If you have several blocks of work to perform, and you need to be notified when they have collectively finished, you can use a dispatch group. dispatch_group_async lets you add work to a queue (the work inside the block should be synchronous) while the group keeps track of how many items have been added. Note that work can be added to different queues and still be tracked by the same dispatch group. When all of the tracked work is complete, the block passed to dispatch_group_notify is fired, like a completion block.

let group = dispatch_group_create()
for item in someArray {
	dispatch_group_async(group, backgroundQueue, {
		performExpensiveWork(item: item)
	})
}
dispatch_group_notify(group, dispatch_get_main_queue(), {
	// all the work is complete
})

This is a good case for flattening a function with a completion block: the dispatch group considers the block complete when it returns, so you need the block to wait until the work is actually done.

There is a more manual way to use dispatch groups, particularly if your expensive work is already asynchronous:

// must be on a background thread
let group = dispatch_group_create()
for item in someArray {
	dispatch_group_enter(group)
	performExpensiveAsyncWork(item: item, completionBlock: {
		dispatch_group_leave(group)
	})
}

dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

// all the work is complete

This code is more complex, but reading it line by line will help you understand it. Like semaphores, dispatch groups are thread-safe, and they have an internal counter that you can manipulate. You can use this counter to make sure multiple long-running tasks all complete before executing a completion block. The counter is incremented with enter and decremented with leave. dispatch_group_async handles all these details for you, so I prefer to use it whenever possible.

The last piece of this code is the wait call: it blocks the thread, waiting for the counter to reach zero before continuing. Note that you can add a dispatch_group_notify block to the queue even if you use the enter/leave APIs. The reverse is also true: you can use dispatch_group_wait with the dispatch_group_async API.

dispatch_group_wait, like dispatch_semaphore_wait, accepts a timeout. Again, DISPATCH_TIME_FOREVER has always been sufficient for me, and I've never felt the need for anything else. And just like dispatch_semaphore_wait, never call dispatch_group_wait on the main thread.

The biggest difference between the two: notify can be invoked entirely from the main thread, whereas wait must happen on a background queue (at least the wait part, since it fully blocks the current thread).
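
Putting those two notes together, here is a sketch that pairs enter/leave with notify instead of wait, which is safe to start from the main thread:

let group = dispatch_group_create()
for item in someArray {
	dispatch_group_enter(group)
	performExpensiveAsyncWork(item: item, completionBlock: {
		dispatch_group_leave(group)
	})
}
// notify never blocks, so no background thread is required
dispatch_group_notify(group, dispatch_get_main_queue(), {
	// all the work is complete
})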

Isolation queues

Swift's Dictionary (and Array) types are value types. When they are modified, their reference is completely replaced with a new copy of the structure. However, because updating instance variables on Swift objects is not atomic, they are not thread-safe. Two threads can update a dictionary at the same time (for example, by adding a value), and both attempt to write to the same block of memory, which can cause memory corruption. We can use isolation queues to achieve thread safety. Let's build an identity map. An identity map is a dictionary that maps items from their ID property to the model object.

class IdentityMap<T: Identifiable> {
	var dictionary = Dictionary<String, T>()

	func object(withID ID: String) -> T? {
		return dictionary[ID] as T?
	}

	func addObject(object: T) {
		dictionary[object.ID] = object
	}
}
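
The Identifiable protocol isn't defined in the article; a minimal version that fits this code might look like this:

protocol Identifiable {
	var ID: String { get }
}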

This object is basically a wrapper around a dictionary. If our addObject method is called from multiple threads at the same time, it could corrupt memory, since the threads are acting on the same reference. This is known as the readers-writers problem. In short, we can have multiple readers reading at the same time, but only one thread may write at any given time. Fortunately, GCD gives us great tools for exactly this situation. There are four APIs we can use:

  • dispatch_sync
  • dispatch_async
  • dispatch_barrier_sync
  • dispatch_barrier_async

Ideally, reads happen synchronously, while writes can be asynchronous but must be the only thing happening to the referenced object. GCD's barrier set of APIs do something special: they wait until the queue is totally empty before executing their block. Using the barrier APIs for our dictionary writes will limit access so that no two writes ever happen at the same time, and no write ever overlaps with a read.

class IdentityMap<T: Identifiable> {
	var dictionary = Dictionary<String, T>()
	let accessQueue = dispatch_queue_create("com.khanlou.isolation.queue", DISPATCH_QUEUE_CONCURRENT)

	func object(withID ID: String) -> T? {
		var result: T? = nil
		dispatch_sync(accessQueue, {
			result = self.dictionary[ID] as T?
		})
		return result
	}

	func addObject(object: T) {
		dispatch_barrier_async(accessQueue, {
			self.dictionary[object.ID] = object
		})
	}
}

dispatch_sync adds the block to our isolation queue and waits for it to execute before returning. This way, we have the result of our read synchronously. (If we didn't read synchronously, our getter would need a completion block.) Because accessQueue is concurrent, these synchronous reads can happen simultaneously. dispatch_barrier_async adds the block to the isolation queue. The async part means that it will return before actually executing the block (which performs the write). This is good for performance, but it has the drawback that a "read" performed immediately after a "write" may return stale data, fetched before the change landed. The barrier part of dispatch_barrier_async means that the block won't run until every block currently in the queue has finished executing. Other blocks will queue up behind it and execute once the barrier block is done.
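
A hypothetical usage sketch, assuming a User model that conforms to Identifiable:

struct User: Identifiable {
	let ID: String
	let name: String
}

let map = IdentityMap<User>()
// the write is dispatched with a barrier, so it never overlaps other access
map.addObject(User(ID: "42", name: "Ada"))
// the read is synchronous, and concurrent reads can happen simultaneously
let user = map.object(withID: "42")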

Conclusion

Grand Central Dispatch is a framework with a lot of low-level primitives, and these are the higher-level behaviors I've been able to build with them. If there are other high-level uses of GCD that you rely on that I haven't listed here, I'd love to hear about them and add them to the list.