This time I'd like to talk about multithreading in iOS development and about protecting shared resources, which brings us to the concept of a lock.

Multithreading is everywhere

First, we need to understand a basic point: almost every app runs under a multithreaded model. Every app has a main thread, which handles UI-related work. If you need to request data from the network, you should do it on another thread; otherwise you will block the main thread and the user interface will visibly stutter. Most of you are probably already familiar with this.
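As a quick illustration (not part of the example that follows; the URL and the updateUI(with:) method are placeholders of my own), the usual shape of this pattern looks roughly like the sketch below: the slow work happens off the main thread, and the UI update hops back onto it.

let url = URL(string: "https://example.com/data.json")!
URLSession.shared.dataTask(with: url) { data, _, _ in
    // This completion handler runs on a background queue
    guard let data = data else { return }
    DispatchQueue.main.async {
        // UI updates must happen on the main thread
        self.updateUI(with: data)
    }
}.resume()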

When we develop programs, it's easy to fall into single-threaded thinking, and that is exactly what this article is about. In most cases it causes no harm, but once multiple threads access the same resource, unexpected problems can occur.

Shared resources

To be more specific, suppose our app contains a file called data.json:

{
	"favorites": 20
}

Think of it as a piece of data displayed in the UI, say the number of times a particular video has been favorited. If we want to increase that count, we would typically modify the file from a background thread, for example:

let queue1 = DispatchQueue(label: "operate favorite 1")
queue1.async {
    self.addFavorite(num: 1)
    print(self.readFavorite())
}

Here the addFavorite method writes to the file and readFavorite reads its contents back. Assuming the initial value of favorites in the file is 20, the print statement will output 21 once the asynchronous operation completes, because addFavorite incremented the value by 1.
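The article only walks through addFavorite (shown in full later); readFavorite is not shown, but a minimal sketch, assuming it reads the same data.json that addFavorite writes, might look like this:

func readFavorite() -> Int {
    do {
        var fileURL = try FileManager.default.url(for: .documentDirectory,
                                                   in: .userDomainMask,
                                                   appropriateFor: nil,
                                                   create: true)
        fileURL = fileURL.appendingPathComponent("data.json")

        let jsonData = try Data(contentsOf: fileURL)
        if let jsonObj = try JSONSerialization.jsonObject(with: jsonData, options: .allowFragments) as? [String: Any],
           let count = jsonObj["favorites"] as? NSNumber {
            return count.intValue
        }
    } catch {
        // Fall through to the default value if reading fails
    }
    return 0
}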

So far everything works as expected, and the favorites count is updated correctly.

But there is one issue that many people miss: the file that addFavorite and readFavorite operate on is actually a shared resource. Other threads can read and write this file as well. Let's look at another example:

let queue1 = DispatchQueue(label: "operate favorite 1")
queue1.async {
    self.addFavorite(num: 1)
    print(self.readFavorite())
}

let queue2 = DispatchQueue(label: "operate favorite 2")
queue2.async {
    self.addFavorite(num: 1)
    print(self.readFavorite())
}

This time we start two asynchronous operations, each of which calls addFavorite and then prints the result of readFavorite. What do you think this program will output?

You might expect queue1 to print 21 and then queue2 to print 22. In fact, this code will most likely print 21 twice.

In other words, one of the addFavorite calls has mysteriously gone missing.

Let’s start with the definition of the addFavorite function:

func addFavorite(num: Int) {
    do {
        var fileURL = try FileManager.default.url(for: .documentDirectory,
                                                   in: .userDomainMask,
                                                   appropriateFor: nil,
                                                   create: true)
        fileURL = fileURL.appendingPathComponent("data.json")

        // Read the current JSON content of the file
        let jsonData = try Data(contentsOf: fileURL)

        if var jsonObj = try JSONSerialization.jsonObject(with: jsonData, options: .allowFragments) as? [String: Any] {
            if let count = jsonObj["favorites"] as? NSNumber {
                // Add num to the current value and write the result back
                jsonObj["favorites"] = NSNumber(value: count.intValue + num)

                let newData = try JSONSerialization.data(withJSONObject: jsonObj, options: .prettyPrinted)
                try newData.write(to: fileURL)
            }
        }
    } catch {
        // Errors are silently ignored in this example
    }
}

This is the complete code for the addFavorite function. Its job is straightforward: read the favorites value from data.json, add num to it, and write the result back to the file. With no concurrent operations, this logic is fine. But once multiple threads come into play, there is a problem.

For example, queue1 in our code reads and then writes data.json, but it is quite possible that queue2 is running at the same time and reads data.json before queue1's write has completed. In that case, both of them read the original value 20, and both add 1 to 20. The result is that both print 21.

That is why I said this code will most likely print 21 twice. Thread scheduling is controlled by the operating system: if queue2 happens to be scheduled after queue1 has already written the file, we get the correct output; if queue2 is scheduled before queue1's write completes, we get two identical 21s. Which one happens is entirely up to the operating system.
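The same lost update is easy to reproduce without any files at all. The following is purely an illustration of my own (none of it comes from the article's example): a plain counter incremented from a concurrent queue will, on most runs, end up below 1000, because the read, add, write steps of different threads interleave and overwrite each other.

// Deliberately racy code, for illustration only -- do not copy into real projects.
final class Counter {
    var value = 0
}

let counter = Counter()
let group = DispatchGroup()
let concurrentQueue = DispatchQueue(label: "race demo", attributes: .concurrent)

for _ in 0..<1000 {
    concurrentQueue.async(group: group) {
        counter.value += 1   // read, add 1, write back -- three steps, not atomic
    }
}

group.wait()
print(counter.value)   // very likely less than 1000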

This article won't go deeper into the basics of multithreading; there is an article at computing.llnl.gov/tutorials/p… that interested readers can study.

How to solve it

With all that said, how do we solve this problem? The industry has long had answers, and I'm going to show you one of them: NSLock. NSLock is an API provided by iOS that wraps the thread locking mechanism, and locking is the most common way to protect resources shared between threads. Let's see how NSLock is used in the example above:

let lock = NSLock()

let queue1 = DispatchQueue(label: "operate favorite 1")
queue1.async {
    lock.lock()
    self.addFavorite(num: 1)
    lock.unlock()
    print(self.readFavorite())
}

let queue2 = DispatchQueue(label: "operate favorite 2")
queue2.async {
    lock.lock()
    self.addFavorite(num: 1)
    lock.unlock()
    print(self.readFavorite())
}

This time we call lock() before and unlock() after each addFavorite call. If we run the program again, the result is what we expect. So what does NSLock actually do? Both threads try to acquire the lock before calling addFavorite, but the lock can be held by only one thread at a time. The thread that fails to acquire it is suspended by the operating system until the other thread releases the lock.

For example, if the operating system schedules queue1 first, it successfully acquires the lock through the lock() method, reads and writes the file, and then calls unlock() to release the lock.

If queue2 is scheduled while queue1 is still reading and writing the file, queue2's lock() call will find the lock already held, so the operating system suspends queue2. Only when queue1 calls unlock() to release the lock is queue2 allowed to continue.

This mechanism ensures that only one thread operates on the file at a time, so the shared-resource problem described earlier cannot occur. Locking does have a performance cost; queue2, for example, is temporarily suspended because it cannot acquire the lock. But for critical resources, that is a price worth paying.
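One refinement worth mentioning (my own addition, not part of the article's example): pairing lock() with defer guarantees the lock is released on every exit path, even if the protected code throws or returns early. A small helper keeps call sites tidy; whileLocked below is a name of my own, though recent versions of Foundation ship a similar withLock helper on NSLocking.

// whileLocked is our own convenience wrapper, not an Apple API.
extension NSLock {
    func whileLocked<T>(_ body: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }   // runs on every exit path, including throws
        return try body()
    }
}

// Usage, mirroring the queue1 example above:
queue1.async {
    lock.whileLocked {
        self.addFavorite(num: 1)
    }
    print(self.readFavorite())
}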

In closing

Protecting shared resources is easy to overlook. After all, in day-to-day development this kind of resource conflict does not occur very often. But when it does, it can be very hard to reproduce and debug. So it is worth getting into the habit, whenever you work with a resource, of asking whether multiple threads might access it at the same time, and writing the protective code up front.

If you found this article helpful, you can follow the WeChat official account Swift-cafe, where I share more of my original content.

The articles on this site are original content. If you need to repost them, please credit the source. Thank you.