To motivate the problem, let's look at some classic questions.

The pitfalls of multithreading

First of all, multithreading seems like an obvious win: we can do several things at the same time, which greatly improves efficiency. For example, when downloading videos, we can download several at once, saving a lot of time and giving a better user experience. However, there are safety risks when threads share the same resource: multiple threads may access the same resource at the same time, which can lead to data corruption and data safety problems. Let's look at some examples.

Case of deposit and withdrawal

For example, suppose I now have 1000 yuan, and two threads operate on it: one thread withdraws 100 yuan, and the other thread deposits 100 yuan. I made a schematic diagram, as follows:

Diagram of deposit and withdrawal

It should be clear that after depositing 100 and withdrawing 100, the balance should still be 1000 yuan. However, as the diagram above shows, the actual result may be either 900 yuan or 1100 yuan, which does not match the expected 1000 yuan. Clearly, there are hidden dangers in using multithreading.

Case of selling tickets

This case is slightly different from the one above: deposit and withdrawal are two different operations, whereas selling tickets is a single operation. Similarly, suppose I now have 1000 tickets and two threads selling them: one thread sells 100 tickets, and the other thread also sells 100 tickets. If they operate at the same time, exceptions will also occur.

We can also use code to demonstrate the effect:
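Here is a minimal sketch of both cases (the class is omitted, and the ivar and method names such as _ticketCount, saleTicket, saveMoney and drawMoney are only illustrative, not the original code). Without any synchronization, the final counts can easily come out wrong:

// Assumed ivars (illustrative): int _ticketCount; int _money;

- (void)ticketTest {
    _ticketCount = 15;
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    for (int i = 0; i < 3; i++) {
        dispatch_async(queue, ^{
            for (int j = 0; j < 5; j++) {
                [self saleTicket];
            }
        });
    }
}

- (void)saleTicket {
    int count = _ticketCount;   // read
    _ticketCount = count - 1;   // write: another thread may have read the same value in between
    NSLog(@"sold 1 ticket, %d left", _ticketCount);
}

- (void)moneyTest {
    _money = 1000;
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_async(queue, ^{
        for (int i = 0; i < 10; i++) [self saveMoney];   // deposit 100, ten times
    });
    dispatch_async(queue, ^{
        for (int i = 0; i < 10; i++) [self drawMoney];   // withdraw 100, ten times
    });
}

- (void)saveMoney {
    int old = _money;
    _money = old + 100;   // not atomic together with the read above
}

- (void)drawMoney {
    int old = _money;
    _money = old - 100;   // not atomic together with the read above
}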

You can try both of these yourself. Next, in view of the problems above, we introduce today's protagonist: thread synchronization technology.

Thread synchronization technology

The solution is thread synchronization (coordinating threads so that they run in a predetermined order). The most common technique for thread synchronization is locking. Here are the common options (roughly this many, and of course there are others):

OSSpinLock (spin lock)

os_unfair_lock (mutex)

pthread_mutex (mutex, recursive lock)

dispatch_queue (DISPATCH_QUEUE_SERIAL)

NSLock

NSRecursiveLock

NSCondition

NSConditionLock

@synchronized

These are all thread synchronization schemes. I will introduce them one by one, covering their advantages and disadvantages, their performance, and how to choose between them, using the examples above.

OSSpinLock (spin lock)

OSSpinLock is a "spin lock": a thread waiting for the lock stays in a busy-wait state, consuming CPU time. To use it, import <libkern/OSAtomic.h>.

Initialize the lock: OSSpinLock _spinLock = OS_SPINLOCK_INIT;

Lock: OSSpinLockLock(&_spinLock);

Unlock: OSSpinLockUnlock(&_spinLock);

Let’s look at selling tickets first:
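A first attempt might look like the following sketch (a minimal sketch, not the original code); note that the lock is a local variable created inside the method:

- (void)saleTicket {
    // PROBLEM: a brand-new lock is created on every call,
    // so every thread locks its own lock and nothing is actually protected
    OSSpinLock spinLock = OS_SPINLOCK_INIT;
    OSSpinLockLock(&spinLock);

    int count = _ticketCount;
    _ticketCount = count - 1;
    NSLog(@"sold 1 ticket, %d left", _ticketCount);

    OSSpinLockUnlock(&spinLock);
}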

I did lock and unlock it, so why is there still a problem in the code above? Have you spotted the reason? It is because the lock is a local variable: every call initializes a brand-new lock, so the lock protects nothing and defeats its own purpose. All threads must use the same lock for locking to work. Take a look at the following code:
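Here is a sketch of the corrected version, assuming _spinLock is an instance variable (or a static/global variable) shared by all threads:

// declared once, e.g. as an ivar, and initialized once: _spinLock = OS_SPINLOCK_INIT;

- (void)saleTicket {
    OSSpinLockLock(&_spinLock);    // only one thread at a time can pass this point

    int count = _ticketCount;
    _ticketCount = count - 1;
    NSLog(@"sold 1 ticket, %d left", _ticketCount);

    OSSpinLockUnlock(&_spinLock);  // release so the next waiting thread can enter
}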

Now there are indeed 85 tickets left, no problem. Each time saleTicket executes, it first reaches the lock line: OSSpinLockLock(&_spinLock). The first thread to arrive locks _spinLock normally; the second thread finds _spinLock already locked and waits there until the lock is released, then locks it itself, and so on. This guarantees that only one thread at a time executes that section of code, which solves the thread synchronization problem.

Now let's look at saving and withdrawing money. These are two operations: should we use the same lock or two locks? Think about it for a moment and the answer becomes clear: saving and withdrawing must not run at the same time, so they must share the same lock (using two locks is still problematic; you can try it yourself). Please look at the following code and let's verify:
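A minimal sketch of the idea, assuming a single shared ivar _moneyLock protecting both methods (the names are illustrative):

// one shared lock for both operations, e.g. an ivar: OSSpinLock _moneyLock;

- (void)saveMoney {
    OSSpinLockLock(&_moneyLock);
    int old = _money;
    _money = old + 100;
    NSLog(@"deposited 100, balance %d", _money);
    OSSpinLockUnlock(&_moneyLock);
}

- (void)drawMoney {
    OSSpinLockLock(&_moneyLock);   // same lock, so deposit and withdrawal can never overlap
    int old = _money;
    _money = old - 100;
    NSLog(@"withdrew 100, balance %d", _money);
    OSSpinLockUnlock(&_moneyLock);
}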

How does a spin lock wait while it is busy? Busy-waiting means the thread stays busy while it waits: something like while (lock not released) keeps executing and occupies the CPU until the lock is released. OSSpinLock is deprecated and no longer considered safe, because priority inversion may occur.

Let’s talk about thread scheduling

Look at the diagram: over time, the operating system gives thread1 a little time, then gives thread2 some time, then thread3, and keeps cycling. Because each slice is very short and the switching is very fast, it feels to us as if everything runs at the same time. That is how multithreading works; we can call it a time-slice scheduling algorithm, and it is used to schedule processes and threads.

There is also the issue of thread priority. If thread1 has a higher priority, the operating system gives thread1 more execution time, and other threads get less. This is where spin locks run into the priority inversion problem. For example, suppose thread1 has a very high priority, thread2 has a very low priority, and thread2 currently holds the lock. Because thread1 busy-waits with high priority, the CPU spends most of its time on thread1, so thread2 may never get enough time to run and release the lock, and thread1 ends up waiting for a very long time, which looks very much like a deadlock.

Think about it this way: if the high-priority thread does not busy-wait but instead sleeps while waiting, it does not consume CPU time, and the problem is solved.

os_unfair_lock (mutex)

os_unfair_lock is intended to replace the unsafe OSSpinLock, and is supported from iOS 10 onwards.

Judging from the underlying calls, a thread waiting for os_unfair_lock sleeps rather than busy-waits (this will be proved later).

Import the header: #import <os/lock.h>

Initialize the lock: os_unfair_lock _unfairLock = OS_UNFAIR_LOCK_INIT;

Lock: os_unfair_lock_lock(&_unfairLock);

Unlock: os_unfair_lock_unlock(&_unfairLock);

Let’s take a look at usage
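A minimal usage sketch for the ticket case, assuming an _unfairLock ivar initialized once with OS_UNFAIR_LOCK_INIT (names are illustrative):

#import <os/lock.h>

- (void)saleTicket {
    os_unfair_lock_lock(&_unfairLock);    // a waiting thread sleeps here instead of spinning

    int count = _ticketCount;
    _ticketCount = count - 1;
    NSLog(@"sold 1 ticket, %d left", _ticketCount);

    os_unfair_lock_unlock(&_unfairLock);
}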

The same goes for saving and withdrawing money. We can try it ourselves.

pthread_mutex (mutex)

pthread_mutex is a "mutual exclusion lock" (mutex); a thread waiting for the lock is put to sleep.

Its usage is much the same, but it involves a bit more code, so let's see how it works:

// Initialize the attribute
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);

// Initialize the lock (you can also pass NULL instead of &attr to use the default attributes and skip the lines above)
pthread_mutex_init(&_mutex, &attr);

// Lock
pthread_mutex_lock(&_mutex);

// Unlock
pthread_mutex_unlock(&_mutex);

// Destroy the related resources
pthread_mutexattr_destroy(&attr);
pthread_mutex_destroy(&_mutex);

PTHREAD_MUTEX_NORMAL is the mutex type, which I will explain later. For now we pass the default, PTHREAD_MUTEX_NORMAL.
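Putting these calls together, here is a sketch of how they might be wired into the ticket example (placing the setup in init and the cleanup in dealloc is my assumption, not the original code):

#import <pthread.h>

// assumed ivar: pthread_mutex_t _mutex;

- (instancetype)init {
    if (self = [super init]) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);
        pthread_mutex_init(&_mutex, &attr);
        pthread_mutexattr_destroy(&attr);   // the attribute is no longer needed after init
    }
    return self;
}

- (void)saleTicket {
    pthread_mutex_lock(&_mutex);
    int count = _ticketCount;
    _ticketCount = count - 1;
    NSLog(@"sold 1 ticket, %d left", _ticketCount);
    pthread_mutex_unlock(&_mutex);
}

- (void)dealloc {
    pthread_mutex_destroy(&_mutex);         // remember to destroy the lock
}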

First look at the results:

There is no problem with the result. Remember to destroy the lock: the two locks introduced earlier do not provide a destroy method, so there is nothing to call, but when a destroy method is provided we had better call it!

pthread_mutex (recursive lock)

Let’s look at the following code for another case:
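Since the original code is a screenshot, here is a sketch of the situation it describes: one method locks _mutex and then calls a second method, otherMutexTest2, which tries to lock the same _mutex (the name of the outer method is my assumption):

- (void)otherMutexTest {
    pthread_mutex_lock(&_mutex);
    NSLog(@"otherMutexTest - begin");

    [self otherMutexTest2];          // tries to take the same lock again

    pthread_mutex_unlock(&_mutex);
    NSLog(@"otherMutexTest - end");
}

- (void)otherMutexTest2 {
    pthread_mutex_lock(&_mutex);     // _mutex is already held by this thread -> waits forever
    NSLog(@"otherMutexTest2");
    pthread_mutex_unlock(&_mutex);
}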

What happens with the code above? It deadlocks: the inner call waits for a lock that the outer call already holds, so only the first NSLog is printed. What do we do about that? Give otherMutexTest2 its own, separate lock, so that the nested call is not waiting on the mutex that is already held.

Now let's look at another situation: what happens when the locked method calls itself recursively? See the sketch below.
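Here is a sketch of that situation, assuming a hypothetical recursiveTest method that calls itself while holding _mutex:

- (void)recursiveTest {
    pthread_mutex_lock(&_mutex);
    NSLog(@"recursiveTest");

    [self recursiveTest];            // the same thread tries to lock _mutex again -> waits on itself

    pthread_mutex_unlock(&_mutex);
}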

How do we deal with this situation? As in the screenshot, the thread ends up in a dormant wait on itself. If we want the recursion to proceed, we can solve the problem by setting the type of the lock; there are three types (PTHREAD_MUTEX_NORMAL, PTHREAD_MUTEX_ERRORCHECK, and PTHREAD_MUTEX_RECURSIVE).

We can solve this problem immediately by changing the lock type to a recursive lock, and by adding a stop condition to the recursion, otherwise it would run forever.
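A sketch of the fix, assuming the same hypothetical recursiveTest method and a _count ivar as the stop condition:

// in init, set the mutex type to recursive instead of PTHREAD_MUTEX_NORMAL:
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init(&_mutex, &attr);
pthread_mutexattr_destroy(&attr);

// the recursive method now needs a stop condition (here a hypothetical _count ivar):
- (void)recursiveTest {
    pthread_mutex_lock(&_mutex);     // the same thread can lock again without deadlocking
    if (_count > 0) {
        _count--;
        NSLog(@"recursiveTest, remaining %d", _count);
        [self recursiveTest];
    }
    pthread_mutex_unlock(&_mutex);
}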

One more thing to note: a recursive lock allows the same thread to lock it multiple times; it does not mean multiple threads can hold it at once.

Assembly analysis of spin lock and mutex

Spin lock: the waiting thread busy-waits, consuming CPU time and continuously executing code. Mutex: the waiting thread does not busy-wait; it sleeps and executes no code. How do we prove this? We can prove it from the assembly, starting with OSSpinLock:

First of all, ordinary stepping is not enough to see what is going on, because each step runs a large chunk of assembly; we need to execute assembly instructions one at a time. The plain s command steps one line of OC code, and one line of OC may correspond to a large block of assembly, so we add an i: si (step instruction) executes assembly instructions one by one. There is also ni (nexti), which also steps instruction by instruction but steps over function calls. Since we want to look into the implementation, we use si.

Let’s look at my code again:
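Since the code is a screenshot, here is a rough sketch of what it does (the ticket count and names are my assumptions):

- (void)ticketTest {
    _ticketCount = 15;
    for (int i = 0; i < 10; i++) {
        [[[NSThread alloc] initWithTarget:self
                                 selector:@selector(saleTicket)
                                   object:nil] start];
    }
}

- (void)saleTicket {
    OSSpinLockLock(&_spinLock);   // thread 2 will be stuck here while thread 1 sleeps

    int count = _ticketCount;
    sleep(100);                   // hold the lock long enough to inspect thread 2 in the debugger
    _ticketCount = count - 1;
    NSLog(@"sold 1 ticket, %d left", _ticketCount);

    OSSpinLockUnlock(&_spinLock);
}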

I create 10 threads to sell tickets and call sleep(100) in the middle of saleTicket, so the first thread goes in and stays there; we don't care about it, we only want to see what the second thread does while it waits. So let's focus on the assembly for thread 2. sleep(100) just makes the wait long enough for us to observe; if it were too short, the second thread would not have to wait and we would not see the effect, as shown below.

From the results above, once you step into OSSpinLockLock, it loops forever around 81aEF. This is the outer loop, and this is exactly what we mean by a spin lock: it keeps looping, consuming CPU time. As soon as someone releases the lock, the loop condition fails and the looping stops.

Next, let’s look at the mutex pthread_mutex

The procedure is the same as above; I have only captured the most critical screenshot, shown below:

At the end of the execution it goes straight to a syscall, calling into the system, just like the runloop sleep mechanism I mentioned before. We know that a sleeping thread does nothing and consumes no CPU time, and indeed, when I run it in the simulator, the waiting thread really does sleep and does not take up any CPU.

os_unfair_lock_lock can be examined with the same method; it also turns out to behave as a mutex (the waiting thread sleeps).

I will continue to cover locks in the next blog post, as there is more to cover.

I will keep working hard to write more blog posts; your support is my biggest motivation!

If you feel this post has helped you, please like and follow me; I will keep updating 😄

Thanks for supporting 🙏🙏🙏!

I’m GDCoder, and we’ll see you next time!