Parallel programming with Swift: Basics
Original: "Parallel Programming with Swift: Basics" by Jan Olbrich. Translation and editing by JACK.
About a year ago, my team started a new project. This time we wanted to apply everything we had learned from previous projects. One of the decisions we made was to make all of the model APIs asynchronous. This lets us change the model implementation without affecting the rest of the app: if the app can handle asynchronous calls, then communication with the back end, cache, and database is not a problem. It also helps when we develop collaboratively.
As developers, we need to understand concurrency and parallelism; otherwise they can cause us trouble and confusion when programming. So let's learn how to do concurrent programming.
Synchronous vs. Asynchronous
So what’s the real difference between synchronous and asynchronous? Imagine that we have a task queue. If it is synchronous, we need to finish the previous task before the next one starts. It behaves in the same way as a FIFO queue (first-in, first-out).
Each statement is executed in order.
func method1() {
    statement1()
    statement2()
    statement3()
    statement4()
}
So synchronization means that only one task can be processed at a time.
In contrast, asynchronous processing can make progress on multiple tasks at once. For example, it may start Item1, pause it to work on Item2, and then come back and finish Item1.
The following code shows an asynchronous call with a callback; statement2 may execute before callback1() is invoked.
func method2() {
    statement1 {
        callback1()
    }
    statement2()
}
Concurrency vs. Parallelism
Concurrency and parallelism are often used interchangeably (even Wikipedia misuses the terms in places). This causes confusion and problems that are easily avoided once the distinction is clear. Let's explain it with an example:
Imagine we have a bunch of boxes (threads) at position A and we want to move them to position B (the thread of execution). For this we can use workers (the CPU). In a synchronous environment, we can only use one worker, who carries the boxes from A to B one at a time.
But what if we can use multiple workers at the same time? Each picks up a box, which greatly increases our throughput: it scales with the number of workers we have. As long as at least two workers are carrying boxes at the same time, they are working in parallel.
Parallelism, then, is about multiple workers executing tasks at the same time.
Now consider what happens if we have only one worker but may still want to juggle several tasks. This is where concurrency comes in: CPU time is divided into slices, which are then allocated to individual threads in turn. Think of it as dividing the path from A to B. A worker can carry a box from A to the midpoint, go back to A for the next box, and so on. With multiple workers, each can carry boxes over a different segment. In this way we process the boxes asynchronously; when we have multiple workers, we process them in parallel.
The difference should now be clear. Parallelism means performing tasks simultaneously, while concurrency emphasizes the ability to make progress on multiple tasks at once, not necessarily at the same instant. Concurrency may or may not be parallel. Most computers and phones work in parallel (depending on the number of cores), but the applications we install run concurrently. (In operating-system terms, concurrency means several programs are in flight between start and finish at the same time, all on the same processor, with only one of them actually running at any given instant.)
Concurrency mechanisms
Every operating system provides different tools for concurrency. On iOS, we rely on processes and threads, as well as dispatch queues.
Process
A process is an instance of an application. It contains everything you need to execute your application, including the heap, stack, and all other resources.
Although iOS is a multitasking operating system, it does not support multiple processes per app, so your app has exactly one process. On macOS, you can use the Process class to spawn new child processes. A child is independent of its parent, but contains all the information the parent had at the moment of creation. On macOS, you can create and run a process like this:
let task = Process()
task.launchPath = "/bin/sh"   // the executable you want to run
task.arguments = arguments    // the arguments you want to pass
task.terminationHandler = { _ in
    // do something when the process terminates
}
task.launch()
Thread
Threads can be thought of as lightweight processes. In contrast to processes, threads share memory with the process they belong to, which can cause problems: for example, two threads change a resource (such as a variable) at the same time, and we then see an unreasonable value when we read it back. Threads are a limited resource on iOS (and any POSIX-compliant system): a process is limited to 64 threads at a time, although this limit can be exceeded in some cases. You can create and start a thread like this:
class CustomThread: Thread {
    override func main() {
        // do something
    }
}

let customThread = CustomThread()
customThread.start()
Dispatch queues
Since we only have one process and are limited to 64 threads, there must be some other way to execute code concurrently. Apple's solution is dispatch queues. You add a task to a queue and expect it to be executed at some point. Apple provides different types of dispatch queues. One is the serial queue, in which all tasks are processed in the order they were added. The other is the concurrent queue, where, as the name implies, tasks can execute concurrently.
At first glance this doesn't sound like concurrency at all. A serial queue has no obvious benefit, and a concurrent queue doesn't seem to make concurrency any easier; we already have threads, so what's the point of dispatch queues?
Consider what happens with multiple queues. Each queue can run on a thread, and whenever we schedule a task we add it to one of the queues. The system can then optimize resource usage by distributing tasks across queues according to priority and current workload.
Apple calls this implementation Grand Central Dispatch (GCD). So what does it look like on iOS?
DispatchQueue.main.async {
// execute async on main thread
}
The biggest advantage of dispatch queues is that they change how we think about concurrent programming. Instead of reasoning about a problem in terms of threads, it is much easier to think of work as blocks pushed onto different queues.
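As a quick sketch of the two queue types (the queue labels here are illustrative, not from the original article):

import Foundation

let serialQueue = DispatchQueue(label: "com.example.serial") // serial by default
let concurrentQueue = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)

serialQueue.async {
    // tasks added here run one after another
}
concurrentQueue.async {
    // tasks added here may run at the same time
}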
Operation queue
Operation queues are Cocoa's high-level abstraction on top of GCD. Instead of creating discrete units of work yourself, you push operations onto an operation queue and they are executed in the correct order. There are different types of queues: the main queue, which executes on the main thread, and custom queues, which execute off the main thread.
let operationQueue = OperationQueue()
let operation1 = BlockOperation { /* do something */ }
operationQueue.addOperations([operation1], waitUntilFinished: false)
An Operation object can be created in two ways: with a block, or by subclassing Operation. If you subclass, don't forget the final call to finish(), otherwise the operation will never stop.
class CustomOperation: Operation {
    override func main() {
        guard isCancelled == false else {
            finish()
            return
        }
        // do something
        finish()
    }

    func finish() {
        // notify KVO observers that the operation's state is changing
        willChangeValue(forKey: "isFinished")
        willChangeValue(forKey: "isExecuting")
        // ... update the backing state for isExecuting/isFinished here ...
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}
A very useful feature is dependencies between Operation objects: if operation2 depends on operation1, operation2 will start executing only after operation1 has finished.
operation2.addDependency(operation1) //execute operation1 before operation2
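A minimal, self-contained sketch of a dependency in action (the operations and queue here are illustrative):

import Foundation

let queue = OperationQueue()
let operation1 = BlockOperation { print("first") }
let operation2 = BlockOperation { print("second") }

operation2.addDependency(operation1)  // operation2 waits for operation1
queue.addOperations([operation1, operation2], waitUntilFinished: false)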
Run Loops
A Run Loop is similar to a queue: the system executes all the tasks in it and then starts over. Screen redrawing, for example, is driven by a Run Loop. Note that Run Loops are not really a way to create concurrency; each one is bound to a single thread. They do, however, let you execute code asynchronously on that thread, which reduces how much concurrency you have to think about. A thread does not automatically get a Run Loop; one is created the first time it is requested for that thread.
When using Run Loops, you need to consider that they have different modes. For example, while you scroll on the device, the main thread's Run Loop changes mode and defers other incoming events; once scrolling stops, it returns to its default mode and handles all the deferred events. Also remember that a Run Loop always needs an input source, otherwise it exits immediately.
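To make this concrete, here is a hedged sketch of keeping a background thread alive with a Run Loop, using a Timer as the input source (the thread and timer setup are illustrative, not from the original article):

import Foundation

let worker = Thread {
    // The Timer is the input source; without one, run() would return immediately.
    let timer = Timer(timeInterval: 1.0, repeats: true) { _ in
        print("tick")
    }
    RunLoop.current.add(timer, forMode: .default)
    RunLoop.current.run() // processes events until no input sources remain
}
worker.start()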
Methods for controlling concurrency
We have seen the different ways operating systems offer to achieve concurrency. But as mentioned above, concurrency can cause problems. The easiest problem to create, and the hardest to track down, is multiple concurrent tasks accessing the same resource. Without a mechanism to coordinate these accesses, we get race conditions. The most common solution is to lock access to the resource: while it is locked, other threads cannot touch it, which resolves the contention.
Priority inversion
To understand the different locking mechanisms, we first need to understand thread priority. Unsurprisingly, high-priority threads are scheduled before low-priority ones. But when a low-priority thread holds a lock on a resource that a high-priority thread needs, the high-priority thread is forced to wait on the low-priority one; their effective priorities are inverted. This is called priority inversion, and it can leave a high-priority thread frozen, never executing, so you definitely want to avoid it.
Imagine two high-priority threads (1 and 2) and one low-priority thread (3). If thread 3 locks the resource and thread 1 then wants to access it, thread 1 has to wait. Since thread 2 has high priority, the scheduler keeps running its tasks first. Thread 3, with its low priority, never gets CPU time, so it never releases the lock, and thread 1 blocks indefinitely.
Priority inheritance
The solution to priority inversion is priority inheritance: whenever thread 1 blocks on the lock, it donates its priority to thread 3, which inherits it. Thread 3 and thread 2 then both have high priority and both get executed (depending on the operating system). When thread 3 releases the lock, the high priority transfers back to thread 1, which continues execution.
Atomicity
Atomicity here means the same as in database transactions: a set of operations either completes as a whole or not at all. Suppose we want to write a value in a single operation. If the program runs on a 32-bit system and the value has type int64_t, very strange things can happen without atomicity. Why? Let's see:
int64_t x = 0
Thread1:
x = 0xFFFF
Thread2:
x = 0xEEDD
A non-atomic write can be interrupted: thread 1 starts writing x, but on a 32-bit system the 64-bit store must be split into two 32-bit writes.
If thread 2 decides to write to x at the same time, the writes might interleave in the following order:
Thread1: part1
Thread2: part1
Thread2: part2
Thread1: part2
So, finally we get:
x == 0xEEFF
That's not what we want: x is neither 0xFFFF nor 0xEEDD.
With atomic operations, each write is a single transaction that either completes entirely or not at all, so we get one of the two valid interleavings:
Thread1: part1
Thread1: part2
Thread2: part1
Thread2: part2
Here the final value of x is whatever thread 2 assigned. Swift itself has no atomic types; there is a proposal on Swift Evolution, but for now you have to build atomicity yourself.
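Until the language gains them, you can roll your own, for example with a lock (covered in the next section). A minimal sketch, with illustrative type and property names of my own:

import Foundation

final class AtomicInt64 {
    private var storage: Int64
    private let lock = NSLock()

    init(_ value: Int64) { storage = value }

    var value: Int64 {
        lock.lock()
        defer { lock.unlock() }
        return storage
    }

    func store(_ newValue: Int64) {
        lock.lock()
        defer { lock.unlock() }
        storage = newValue
    }
}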
Locks
Locking is a simple way to prevent multiple threads from accessing a resource at the same time. A thread first checks whether it may enter the protected section. If it may, it takes the lock and continues; when it exits, it releases the lock. If a thread finds the section locked, it waits, typically by sleeping and periodically waking up to check whether the lock is still held.
On iOS we can use NSLock. Note, however, that the lock must be released on the same thread that acquired it.
let lock = NSLock()
lock.lock()
//do something
lock.unlock()
There are many other kinds of locks. With a recursive lock, the same thread can lock a resource multiple times, and it must release it the same number of times; other threads are excluded until the lock is fully released (on iOS, see NSRecursiveLock).
Another type is the read-write lock. It suits large applications where many threads read a resource frequently but write rarely. As long as no thread is writing, any number of threads may read; whenever a thread wants to write, it locks the resource exclusively and no one can read until the lock is released.
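As a sketch of the read-write idea, Swift can call C's pthread_rwlock directly (assuming Darwin/Foundation; the variable names are illustrative):

import Foundation

var rwlock = pthread_rwlock_t()
pthread_rwlock_init(&rwlock, nil)

pthread_rwlock_rdlock(&rwlock)   // many readers may hold this simultaneously
// read the resource
pthread_rwlock_unlock(&rwlock)

pthread_rwlock_wrlock(&rwlock)   // exclusive: waits until all readers are done
// write the resource
pthread_rwlock_unlock(&rwlock)

pthread_rwlock_destroy(&rwlock)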
At the process level, there are distributed locks. The difference is that a process that fails to acquire one is not blocked: the failure is simply reported back to the process, which can decide how to handle the situation.
Spinlock
Locking and unlocking are not free: putting a thread to sleep and waking it again costs CPU time and context switches (saving and restoring register state). If a lock is only ever held for a very short time, a spinlock can be used instead. Its basic idea is to let the waiting thread poll the lock in a busy loop rather than sleep. Busy-waiting consumes more CPU than a sleeping thread would, but for small, short critical sections the saved switching overhead makes spinlocks faster and more efficient.
That sounds good in theory, but iOS is different. iOS has a concept called quality of service (DispatchQoS). Under QoS, the system gives CPU time preferentially to tasks with higher service levels, and low-priority threads may not run at all. If a low-priority thread holds a spinlock and a high-priority thread busy-waits on it, the high-priority thread starves the low-priority one, which can then never release the lock, and the high-priority thread has blocked itself forever. For this reason, spinlocks are prohibited on iOS.
Mutex
A mutex can be shared across threads and even across processes, and blocking on one involves a context switch (the waiting thread sleeps). Unfortunately, Swift doesn't provide a mutex, so you have to bring your own; you can use C's pthread_mutex:
var m = pthread_mutex_t()
pthread_mutex_init(&m, nil)    // the mutex must be initialized before first use
pthread_mutex_lock(&m)
// do something
pthread_mutex_unlock(&m)
pthread_mutex_destroy(&m)      // clean up when done
Semaphore
A semaphore is a data structure that supports mutual exclusion and thread synchronization. It consists of a counter, a FIFO queue, and the methods wait() and signal().
Whenever a thread wants to access a protected resource, it calls wait() on the semaphore, which decrements the counter. As long as the counter has not dropped below 0, the thread may continue; otherwise the semaphore parks the thread in its queue. Whenever a thread leaves the protected section, it calls signal(): the semaphore first checks whether any thread is waiting in its queue and, if so, wakes the first one; if not, it increments the counter. On iOS we can use DispatchSemaphore.
let s = DispatchSemaphore(value: 1)
_ = s.wait(timeout: DispatchTime.distantFuture)
// do something
s.signal()
One might think a binary semaphore (a semaphore whose counter starts at 1) is the same as a mutex, but a mutex is a locking mechanism while a semaphore is a signaling mechanism. That may not be obvious, so what's the difference?
A lock protects and manages access to a shared resource: it prevents multiple threads from entering at the same time. A signaling mechanism is more like "Hey, I'm done, carry on!". For example, if you're listening to music on your phone (thread A) and a call comes in (thread B), the phone takes over the shared resource (the headset); when the call ends, it signals the music player to continue. The practical consequence: a mutex must be locked and unlocked by the same thread, whereas a semaphore can be signaled by one thread and waited on by another.
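Here is a minimal sketch of that signaling use, with one thread waiting and a different one signaling (the setup is illustrative):

import Foundation

let done = DispatchSemaphore(value: 0)

DispatchQueue.global().async {
    // do some background work...
    done.signal()   // "Hey, I'm done! Move on!"
}

done.wait()         // blocks the current thread until signaled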
Why does this matter on iOS? Suppose a low-priority thread (1) is inside the protected section, and a high-priority thread (2) has just called wait() on the semaphore and is sleeping until it is signaled. Now add a thread (3) with a priority higher than thread (1). Combined with QoS, thread 3 can keep thread 1 from ever running, so thread 1 never calls signal(), starving the other two threads. Because a semaphore has no owning thread, the system cannot know whom to boost: semaphores on iOS have no priority inheritance.
Synchronized
In Objective-C, a mutex can be created with @synchronized. Swift doesn't have it, so we have to dig one level deeper: @synchronized simply calls objc_sync_enter.
let lock = self     // any object can serve as the lock token
objc_sync_enter(lock)
closure()           // the protected work
objc_sync_exit(lock)
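A common trick is to wrap this pair in a helper, giving Swift an ersatz @synchronized. A minimal sketch (the function name is my own, not a system API):

import Foundation

func synchronized(_ lock: AnyObject, _ body: () -> Void) {
    objc_sync_enter(lock)
    defer { objc_sync_exit(lock) }
    body()
}

// Usage: protect a shared resource, passing some object as the lock token.
// synchronized(sharedArrayLock) { sharedArray.append(1) }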
Since I have seen the question many times online, I'll answer it here as well: as far as I know, objc_sync_enter is not a private API, so using it will not get your app rejected from the App Store.
Synchronizing with dispatch queues
Since Swift has no mutex and no @synchronized, DispatchQueue has become the standard choice for Swift developers. Used synchronously, a serial dispatch queue behaves like a mutex: all operations line up in the same queue, which guarantees exclusive access. The downside is the cost of block allocation and context switching. That doesn't matter if your application needs no great computing power, but if you're dropping frames, you may want a different solution (such as a mutex).
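For instance, a hedged sketch of protecting a counter with a serial queue (the class and queue label are illustrative):

import Foundation

final class Counter {
    private var count = 0
    private let queue = DispatchQueue(label: "com.example.counter") // serial by default

    func increment() {
        queue.sync { count += 1 }   // only one block runs on the queue at a time
    }

    var value: Int {
        queue.sync { count }
    }
}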
Dispatch barriers
With a dispatch barrier we can create a synchronization point within a concurrent queue: a block submitted as a barrier waits until all previously submitted blocks have finished, runs alone, and only then do later blocks execute. That may sound odd, but imagine a time-consuming task that splits into several parts that must run in order, where each part can itself be broken into smaller pieces that may run asynchronously. The barrier synchronizes the larger parts, while the smaller pieces within each part execute concurrently.
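The classic example is the reader/writer pattern on a concurrent queue, sketched below (the names are illustrative, not from the original article):

import Foundation

let isolation = DispatchQueue(label: "com.example.rw", attributes: .concurrent)
var store: [String: Int] = [:]

// Reads may run concurrently with one another.
func read(_ key: String) -> Int? {
    isolation.sync { store[key] }
}

// The barrier block waits for in-flight reads, runs alone, then lets later work proceed.
func write(_ key: String, _ value: Int) {
    isolation.async(flags: .barrier) { store[key] = value }
}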
Trampoline
A trampoline is not a real mechanism provided by the system; rather, it is a technique for ensuring a method runs on the right thread. The idea is simple: at the start of the method, check whether you are on the correct thread; if not, call yourself on that thread and return. Sometimes you also need one of the locking mechanisms above to wait for the result.
func executeOnMain() {
    if !Thread.isMainThread {
        DispatchQueue.main.async {
            executeOnMain()
        }
        return
    }
    // do something
}
Remember not to overuse this. It does make sure you're on the right thread, but it can confuse the people you're developing with: they may not understand why you're hopping threads everywhere. At some point it clutters your code and obscures your logic.
Conclusion
Wow, that was a pretty heavy post. Concurrent programming offers many options, and this article has only scratched the surface; there are many mechanisms and many pitfalls to consider. I probably annoy everyone at work whenever I talk about threads, but threads matter, and slowly my colleagues have come around. Just today I fixed a bug where several operations accessed an array asynchronously; as we learned above, Swift has no atomic operations. Guess what? It ended in a crash. If we had all known more about concurrency, it probably wouldn't have happened, though to be honest, I didn't expect it at first either.
Knowing the tools at your disposal is the best advice I can give. I hope this post has given you a starting point for concurrency. Good luck!