Concurrency and multithreading are a core part of iOS development. Let’s take a closer look at what makes them so powerful and how we can leverage them in our own Cocoa Touch applications.

Concurrency is the concept of multiple things happening at the same time. This is generally achieved through time-slicing, and true parallelism is possible when the host machine has multiple CPU cores. We’ve all experienced a lack of concurrency, most likely in the form of an application freezing up while running a heavy task. UI freezes are not necessarily due to a lack of concurrency (they can also be symptoms of ordinary software bugs), but they occur whenever software needs to do something resource-intensive without using all the computing power at its disposal. If you’ve ever profiled an app that hung this way, you might have seen a report like this.

Anything to do with file I/O, data processing, or networking usually belongs on a background task (unless you have a very compelling reason to halt the whole program). There are few reasons why these tasks should prevent your users from interacting with the rest of your application. Consider how much better the user experience might be if, instead, the profiler reported something like this.

Analyzing an image, processing a document or a piece of audio, or writing a chunk of data to disk are all tasks that benefit greatly from being delegated to background threads. Let’s look at how to achieve this in iOS apps.


A brief history of multithreading

In the old days, the maximum amount of work a computer could accomplish per CPU cycle was determined by clock speed. As processor designs became more compact, thermal and physical constraints began to limit higher clock speeds. As a result, chip makers began adding additional processor cores on each chip to improve overall performance. By increasing the number of cores, a single chip can execute more CPU instructions per cycle without increasing speed, size, or heat output. There’s just one problem…

How do we leverage these extra cores? Multithreading.

Multithreading is an implementation, handled by the host operating system, that allows the creation and use of an arbitrary number of threads. Its main purpose is to let two or more parts of a program execute simultaneously, making use of all available CPU time. Multithreading is a powerful technique to have in a programmer’s toolbox, but it comes with its own responsibilities. A common misconception is that multithreading requires a multi-core processor; that’s not the case. A single-core CPU is perfectly capable of working on many threads, but we’ll soon see why threads were needed in the first place. Before we delve further, let’s look at the nuances of concurrency and parallelism with a simple diagram.

In the first case shown above, tasks run concurrently, but not in parallel. This is similar to having multiple conversations in a chat room and interleaving between them (context switching), but never actually talking to two people at the same time. This is what we call concurrency: the illusion of multiple things happening at once, when in fact the system switches between them very quickly. Concurrency is about dealing with many things at once; in the parallel case, both tasks literally run simultaneously. Both execution models constitute multithreading, that is, multiple threads working toward a common goal. Multithreading is the general technique that brings concurrency and parallelism into programs.


The burden of threads

Modern multitasking operating systems like iOS have hundreds of programs (or processes) running at any given moment. However, most of these are system daemons or background processes with very low memory footprints, so what is really needed is a way for individual applications to take advantage of the additional available cores. An application (process) can have many threads running in shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency into an application required creating one or more threads. Threads are low-level constructs that need to be managed manually. A quick glance at Apple’s Threaded Programming Guide shows just how much complexity threaded code adds to a code base. In addition to building the app itself, developers also have to:

  • Create new threads responsibly, dynamically adjusting their number as system conditions change
  • Manage them carefully, deallocating them once they finish executing
  • Use synchronization mechanisms such as mutexes, locks, and semaphores to coordinate resource access between threads, adding yet more overhead to application code
  • Accept the risk that the application, rather than the host operating system, bears most of the costs of creating and maintaining whatever threads it uses

This is unfortunate because it adds enormous complexity and risk without guaranteeing performance gains.


Grand Central Dispatch

iOS takes an asynchronous approach to managing concurrent threads. Asynchronous functions are common in most programming environments and are often used to start tasks that may take a long time, such as reading a file from disk or downloading a file over the network. When an asynchronous function is called, it does some work behind the scenes to start the background task, but it returns immediately, regardless of how long the original task might take to actually complete.
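To make this concrete, here is a minimal sketch of the pattern just described. The `loadData` function, its completion handler, and the one-second simulated delay are illustrative assumptions, not an API from this article:

```swift
import Foundation

// A minimal sketch of an asynchronous function: it returns immediately
// while its work continues on a background queue.
func loadData(completion: @escaping (String) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        Thread.sleep(forTimeInterval: 1) // pretend to read a file from disk
        completion("data ready")
    }
}

let done = DispatchSemaphore(value: 0)
loadData { result in
    print(result)                          // printed second, about a second later
    done.signal()
}
print("loadData returned immediately")     // printed first
done.wait()  // keep the script alive until the background work finishes
```

The semaphore at the end exists only so the script doesn’t exit before the background work finishes; in a real app, the run loop keeps the process alive for you.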

The core technology iOS provides for launching tasks asynchronously is Grand Central Dispatch (GCD). GCD abstracts away thread management code, moving it down to the system level and exposing a lightweight API for defining tasks and submitting them to an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution while also offering better efficiency than raw threads.

Let’s look at the main components of GCD, starting on the left of the diagram.

  • DispatchQueue.main: the main thread, or UI thread, is backed by a serial queue. All tasks execute one after another, so execution order is guaranteed. Make sure all UI updates are dispatched to this queue, and never run blocking tasks on it. We want the application’s run loop (CFRunLoop) to stay unblocked so we can maintain the highest possible frame rate. The main queue has the highest priority, and any task pushed onto it executes as soon as possible.
  • DispatchQueue.global: a set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can decide which particular queue to execute it on, although you should use the default one most of the time. Because tasks on these queues execute concurrently, there is no guarantee that they will finish in the order they were enqueued.
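As a small runnable sketch of the serial behavior described above, a private serial queue (standing in for the main queue, since dispatching to the real main queue requires a running app) preserves submission order, while global concurrent queues make no such promise. The queue label is an arbitrary choice:

```swift
import Foundation

// A serial queue drains tasks strictly in FIFO order, just like DispatchQueue.main.
let serialQueue = DispatchQueue(label: "com.app.demoSerial")
let group = DispatchGroup()
var order: [Int] = []

for i in 1...5 {
    serialQueue.async(group: group) {
        order.append(i)   // safe: only this serial queue ever touches `order`
    }
}
group.wait()              // block until every enqueued task has run
print(order)              // [1, 2, 3, 4, 5]: a serial queue preserves submission order
```

Running the same loop against `DispatchQueue.global()` would require a lock around `order`, and the printed order would vary from run to run.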

Notice that we’re not dealing with individual threads anymore? We’re dealing with queues that manage thread pools internally, and you’ll soon see why queues are a more sustainable approach to multithreading.

Serial queues: the main thread

As an exercise, let’s look at the following code, which is activated when the user presses a button in the app. This expensive computation function can be anything. Let’s say it’s post-processing images stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance this may seem innocuous, but if you run it in a real application, the UI freezes completely until the loop terminates, which takes… a while. We can see exactly what happens by profiling the task in Instruments. You can launch the Time Profiler instrument by going to Xcode > Open Developer Tool > Instruments in the Xcode menu. Let’s look at the thread lanes in the profiler to see where CPU usage is highest.

We can see that the main thread is clearly at 100% capacity for about 5 seconds. This is a significant amount of time to block the UI. Looking at the call tree below the chart, we can see that the main thread worked at 99.9% capacity for 4.43 seconds! Since serial queues work in FIFO, tasks are always completed in the order they were inserted. Obviously, the compute() method is the culprit here. Can you imagine clicking a button and making the UI freeze for that long?

Background threads

How can we do better? DispatchQueue.global() to the rescue! This is what background threads are for. Referring to the GCD architecture diagram above, anything that is not the main thread is a background thread in iOS. Background threads can run alongside the main thread, leaving it free to handle other UI events such as scrolling, responding to user input, animations, and so on. Let’s make a small change to the button-tap handler above.

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified otherwise, code executes on the main queue by default, so to force it onto a different thread, we wrap our compute call in an asynchronous closure submitted to the DispatchQueue.global queue. Remember, we are not really managing threads here. We submit tasks (in the form of closures or blocks) to the desired queue and trust that they will be executed at some point. The queue decides which thread to assign each task to, and it does all the hard work of assessing system conditions and managing the actual threads. That’s the magic of Grand Central Dispatch. As the old saying goes, you can’t improve what you can’t measure. We measured our really bad button-tap handler, and now that we’ve improved it, we’ll measure it again to get some concrete performance data.

Looking at the profiler again, it’s clear that this is a huge step forward. This task takes the same amount of time, but this time in the background, without locking the UI. Even though our application is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the application processes.

You may have noticed that we accessed the global queue with the .userInitiated quality-of-service (QoS) class. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue with a qos of .background, iOS will treat it as less urgent and allocate fewer resources to executing it. So, while we can’t control exactly when our tasks execute, we can control their priority.
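A quick sketch of the QoS classes in question (the printed strings and the trailing comments are placeholders; exact scheduling is always up to the system, and a QoS only expresses intent):

```swift
import Foundation

// The global-queue QoS classes, roughly from most to least urgent.
let interactive = DispatchQueue.global(qos: .userInteractive) // UI-blocking work
let initiated   = DispatchQueue.global(qos: .userInitiated)   // user is actively waiting
let utility     = DispatchQueue.global(qos: .utility)         // long tasks with progress
let background  = DispatchQueue.global(qos: .background)      // prefetching, maintenance

let done = DispatchSemaphore(value: 0)
initiated.async {
    print("urgent task")      // scheduled generously by the system
    done.signal()
}
background.async {
    print("deferrable task")  // may be throttled when resources are scarce
    done.signal()
}
done.wait(); done.wait()      // wait for both tasks before the script exits
```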

A note on the main thread and the main queue

You may be wondering why the profiler shows “main thread” while we keep saying “main queue”. Looking back at the GCD architecture above, the main queue is solely responsible for managing the main thread. The Dispatch Queues section of the Concurrency Programming Guide says: “The main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application.”

The terms “execute on the main thread” and “execute on the main queue” are used interchangeably.
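A tiny script can confirm the relationship (assuming top-level script execution; in an app the same `Thread.isMainThread` check applies inside callbacks):

```swift
import Foundation

// Top-level code runs on the main thread, while global-queue work
// lands on a background worker thread.
print(Thread.isMainThread)   // true: we start out on the main thread

let finished = DispatchSemaphore(value: 0)
var wasMainThread = true
DispatchQueue.global().async {
    wasMainThread = Thread.isMainThread
    finished.signal()
}
finished.wait()
print(wasMainThread)         // false: the global queue used a worker thread
```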


Concurrent queue

So far, our tasks have executed in an entirely serial fashion. DispatchQueue.main is a serial queue, whereas DispatchQueue.global provides four concurrent dispatch queues, selected by the priority parameter you pass in.

Suppose we want to take five images and have our application process them in parallel on background threads. How do we do that? We can spin up a custom concurrent queue with an identifier of our choosing and dispatch the tasks there. All that’s required is the .concurrent attribute when constructing the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running this through the profiler, we can see that the application now spins up five discrete threads to parallelize the for-loop.
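As an aside, GCD also offers a convenience for exactly this pattern: `DispatchQueue.concurrentPerform` runs the iterations of a loop in parallel on a concurrent queue and blocks until all of them finish. A minimal sketch, with squaring standing in for image processing:

```swift
import Foundation

// concurrentPerform dispatches the iterations concurrently and returns
// only after every iteration has completed.
var results = [Int](repeating: 0, count: 5)
let lock = NSLock()

DispatchQueue.concurrentPerform(iterations: 5) { index in
    let value = index * index              // pretend to post-process image `index`
    lock.lock()
    results[index] = value                 // guard shared state across threads
    lock.unlock()
}
print(results)  // [0, 1, 4, 9, 16]
```

Because each iteration writes to a distinct index, the lock is arguably optional here, but it is shown to reinforce the thread-safety habit.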

Parallelization of N tasks

So far, we’ve looked at how to push computationally expensive tasks onto background threads without blocking the UI thread. But what about limiting how many tasks run in parallel? How can Spotify download multiple songs in parallel while capping the maximum number at three? There are several approaches to this problem, but now is a good time to explore another important construct in multithreaded programming: the semaphore.

Semaphores are a signaling mechanism, typically used to control access to a shared resource. Imagine a scenario where a thread locks access to a section of code while executing it and unlocks it when done, letting other threads run that code. You’ll see this kind of behavior in database writes and reads, for example. What if you want only one thread writing to the database, with no reads happening in the meantime? This is a common thread-safety problem known as the readers-writers problem. Semaphores let us control concurrency in our application by limiting the number of threads, n, that can access a resource at once.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how effectively this limits our download system to k simultaneous downloads. The moment a download completes (that is, a thread finishes executing), the semaphore is signaled, allowing the managing queue to spawn another thread and start downloading the next song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

You won’t usually see semaphore code quite like this example, but semaphores become especially powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above could also be accomplished with a custom NSOperationQueue and its maxConcurrentOperationCount, but it’s a technique worth knowing regardless.
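For comparison, here is a sketch of that OperationQueue alternative. The queue itself enforces the cap, so there is no semaphore bookkeeping; the 15 songs and the simulated sleep are placeholder assumptions standing in for real network code:

```swift
import Foundation

// The queue caps how many operations run at once.
let downloads = OperationQueue()
downloads.maxConcurrentOperationCount = 3  // at most 3 downloads in flight

for songId in 1...15 {
    downloads.addOperation {
        Thread.sleep(forTimeInterval: 0.01)  // pretend to download a song
        print("finished song \(songId)")
    }
}
downloads.waitUntilAllOperationsAreFinished()
```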


Use OperationQueue for finer control

GCD is great when you want to dispatch one-off tasks or closures to a queue in a lightweight, “set it and forget it” fashion. But what if we want to build repeatable, structured, long-running tasks that produce associated state or data? And what if we want to model this series of operations so they can be cancelled, suspended, and tracked, while still working with a friendly, closure-based API? Imagine an operation like this.

This would be quite cumbersome in GCD. We want to define a group of tasks in a more modular way, while maintaining readability and exposing more control. In this case, we can use Operation objects and schedule them on an OperationQueue, a high-level wrapper around DispatchQueue. Let’s look at some of the benefits of these abstractions and what they offer compared to the lower-level GCD API.

  • You might want to create dependencies between tasks, and while you can do this via GCD, you’d better define them specifically as Operation objects or units of work and push them to your own queue. This will allow for maximum reuse because you can use the same pattern elsewhere in the application.
  • The Operation and OperationQueue classes have a number of properties that can be observed using KVO (key-value observation). This is another important benefit if you want to monitor the status of operations or operation queues.
  • You can pause, resume, and cancel operations. Once you Dispatch a task with Grand Central Dispatch, you no longer have any control or insight into how that task will perform. The Operation API is more flexible in this respect, giving developers control over the life cycle of the Operation.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you finer control over concurrency.
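To illustrate the cancellation point in particular, here is a small sketch: cancelAllOperations() flips isCancelled on every operation, and a well-behaved operation checks that flag and bails out early. The chunked loop is an illustrative stand-in for real work:

```swift
import Foundation

let workQueue = OperationQueue()
let op = BlockOperation()
op.addExecutionBlock { [weak op] in
    for _ in 0..<1000 {
        // Cooperative cancellation: exit as soon as cancellation is requested.
        if op?.isCancelled ?? true { return }
        // ...process one chunk of work here...
    }
}
workQueue.addOperation(op)
workQueue.cancelAllOperations()          // request cancellation
workQueue.waitUntilAllOperationsAreFinished()
print(op.isCancelled)                    // true: the flag is set on the operation
```

Cancellation in the Operation model is always cooperative: the flag is set for you, but your code decides where it is safe to stop.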

The use of Operation and OperationQueue could fill an entire blog post, but let’s look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you’re better off breaking up large tasks into a series of subtasks that can be composed). To create a chain of interdependent operations, we can do this.

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            let image = Downloader.downloadImageWithURL(url: self.imageUrl)
            OperationQueue.main.addOperation {
                self.rawImage = image
            }
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for the higher level of abstraction and avoid GCD altogether? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction that fits a given problem, and for consistently scheduled, repeatable work, that abstraction is Operation. For one-off tasks or closures we simply want to fire off, it makes more sense to sprinkle in a little GCD. We can mix OperationQueue and GCD to get the best of both worlds.


Cost of concurrency

The purpose of DispatchQueue and friends is to make it easier for application developers to execute code concurrently. However, these technologies do not guarantee improvements to an application’s efficiency or responsiveness. You want to use queues in a way that is effective and does not overburden other resources. For example, it’s perfectly feasible to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for allocating and deallocating operation blocks. That’s the opposite of what you want! It’s best to profile your application thoroughly to make sure concurrency is enhancing its performance, not degrading it.

We’ve talked about how concurrency comes at a cost in complexity and system resources, but introducing concurrency also carries a number of other risks, such as:

  • Deadlock: a situation in which, for example, a thread locks a critical section of code and brings the application’s run loop to a complete stop. In the context of GCD, you should be very careful with DispatchQueue.sync {} calls, as you can easily end up in a situation where two synchronous operations wait for each other.
  • Priority inversion: a situation where a low-priority task blocks a high-priority task from executing, effectively inverting their priorities. GCD allows background queues to have different priorities, so this can easily happen.
  • Producer-consumer problem: a race condition in which one thread creates a data resource while another thread accesses it. This is a synchronization problem that can be solved using locks, semaphores, serial queues, or, if you’re using concurrent queues in GCD, a barrier dispatch.
  • …and many other kinds of locking and data-race conditions that are very difficult to debug! Thread safety is the chief concern when dealing with concurrency.
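To make the barrier approach concrete, here is a sketch of a reader-writer pattern on a concurrent queue. Reads run concurrently, while a `.barrier` write waits for in-flight reads and holds off new ones until it completes; the `PlayCounts` type and its names are invented for illustration:

```swift
import Foundation

final class PlayCounts {
    private var counts: [String: Int] = [:]
    private let queue = DispatchQueue(label: "com.app.playCounts",
                                      attributes: .concurrent)

    func count(for song: String) -> Int {
        queue.sync { counts[song, default: 0] }   // reads may run in parallel
    }

    func increment(_ song: String) {
        queue.async(flags: .barrier) {            // exclusive access while mutating
            self.counts[song, default: 0] += 1
        }
    }
}

let store = PlayCounts()
for _ in 1...10 { store.increment("song-1") }
print(store.count(for: "song-1"))  // 10: the sync read queues behind the barriers
```

Because the synchronous read is submitted after the barrier blocks, it cannot observe a partially updated dictionary, which is exactly the readers-writers guarantee described above.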

Parting thoughts + further reading

If you made it this far, I applaud you. Hopefully this article has given you a head start on multithreading on iOS and how to apply it in your applications. We didn’t cover many low-level constructs such as locks and mutexes and how they help us achieve synchronization, nor did we delve into specific examples of how concurrency can hurt your application. We’ll leave that for another day, but if you’re eager to dig deeper, here is some additional reading and viewing.

  • Building Concurrent User Interfaces on iOS (WWDC 2012)
  • Concurrency and parallelism: understanding I/O
  • Apple’s official guide to concurrent programming
  • Mutexes and Closure capture in Swift
  • Locks, thread safety and Swift
  • Advanced NSOperations(WWDC 2015)
  • NSHipster: NSOperation
