Classification of locks

Spin locks

A thread trying to acquire a spin lock repeatedly checks whether the lock variable is available. Because the thread keeps executing while it checks, this is a busy wait. Once a spin lock is acquired, the thread holds it until it explicitly releases it. Spin locks avoid the scheduling overhead of a context switch and are therefore effective in situations where threads block only for very short periods of time.

Mutexes

A mutex is a mechanism used in multithreaded programming to prevent two threads from simultaneously reading or writing the same shared resource (such as a global variable). This is accomplished by dividing code into critical sections. Two related concepts matter here: mutual exclusion and synchronization. Mutual exclusion means that when multiple threads want to perform the same operation, only one thread can perform it at a time. Synchronization means that multiple threads perform their operations in a prescribed order. Mutexes fall into two categories: recursive and non-recursive.

Common mutex implementations include:

  • NSLock
  • pthread_mutex
  • @synchronized

Condition locks

A condition lock is built on a condition variable: when some resource requirement is not met, the thread goes to sleep while the lock stays held. When the resource is allocated, the condition lock is opened and the thread continues.

  • NSCondition
  • NSConditionLock

Recursive locks

A recursive lock can be acquired N times by the same thread without causing a deadlock.

  • NSRecursiveLock
  • pthread_mutex(recursive)

Semaphores

A semaphore is a more general synchronization mechanism. A mutex is the special case of a semaphore whose value can only be 0 or 1. A semaphore can take a wider range of values, which supports more complex synchronization than simple mutual exclusion between threads.

  • dispatch_semaphore

Read-write lock

A read-write lock is a special mutex that divides accessors of a shared resource into readers and writers: readers only read the shared resource, while writers modify it. This kind of lock improves concurrency over a spin lock, because on a multiprocessor system it allows multiple readers to access the shared resource at the same time (the maximum number of concurrent readers being the number of logical CPUs). Writers are exclusive: at any moment a read-write lock can be held either by one writer or by multiple readers, but never by readers and a writer together. In spinning (kernel) implementations, preemption is also disabled while the lock is held.

If a read-write lock currently has no readers and no writer, a writer can acquire it immediately; otherwise the writer must spin until there are no readers or writers. If the lock has no writer, a reader can acquire it immediately; otherwise the reader must spin until the writer releases the lock.

Only one thread at a time can hold a read-write lock in write mode, but multiple threads can hold it in read mode simultaneously. Because of this, when a read-write lock is write-locked, all threads attempting to lock it block until it is unlocked. When it is read-locked, threads attempting to lock it in read mode are granted access, but a thread that wants to lock it in write mode must wait until all readers release the lock. Typically, while the lock is read-locked, if another thread requests a write lock, subsequent read-lock requests are blocked; this prevents the read lock from being occupied indefinitely while the waiting write request starves. Read-write locks are suitable for data structures that are read far more often than they are written. A read-write lock is also called a shared-exclusive lock, because the read lock is shared while the write lock is exclusive.

Conclusion

In fact, there are three basic kinds of lock: spin locks, mutexes, and read-write locks. Others, such as condition locks, recursive locks, and semaphores, are higher-level wrappers and implementations built on top of these!

pthread

POSIX Threads defines a set of functions for thread synchronization called mutexes, which ensure that only one thread can access an object at any given time. When an attempt to acquire the lock fails, the thread goes to sleep and is woken up when the lock is released.

  1. Create and destroy

POSIX defines the macro PTHREAD_MUTEX_INITIALIZER for statically initializing a mutex; int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr) initializes one dynamically; pthread_mutex_destroy() is used to destroy a mutex.

  2. Lock operations
  • int pthread_mutex_lock(pthread_mutex_t *mutex)
  • int pthread_mutex_unlock(pthread_mutex_t *mutex)
  • int pthread_mutex_trylock(pthread_mutex_t *mutex)
  • pthread_mutex_trylock() has semantics similar to pthread_mutex_lock(), except that when the lock is already held it returns EBUSY immediately instead of blocking.

Analysis of NSLock and NSRecursiveLock

Here we introduce NSLock and NSRecursiveLock through a few use cases.

  • Case 1

For example, without locking, the output prints out of order; when we lock before and unlock after testMethod(10), the output prints in order.

  • Case 2

Similarly, if we move the lock and unlock so that testMethod executes recursively inside the locked region, we get a deadlock. The reason is that NSLock is not recursive. This can be resolved with @synchronized or NSRecursiveLock. Since @synchronized was analyzed earlier, let's try NSRecursiveLock. NSRecursiveLock is also used very frequently; we can see it used in recursive-locking scenarios in many third-party libraries.

  • Case 3

When we use NSRecursiveLock, we find that the first pass prints but the second fails. This is because NSRecursiveLock is recursive, but it does not support recursion across multiple threads.

  • Case 4

We solved the problem of both recursive calls and multithreading with @synchronized.

Analysis of NSCondition

The NSCondition object actually acts as both a lock and a thread checker: the lock protects the data source and the critical section that runs when the condition is met, while the thread checker decides, based on the condition, whether a thread should continue running, that is, whether the thread is blocked.

Introduction to the NSCondition API:

  • [condition lock]: acquires the lock when multiple threads access and modify the same data source, ensuring that the data source is accessed and modified by only one thread at a time. Other threads wait outside the lock and can enter only after unlock.
  • [condition unlock]: used in a pair with lock.
  • [condition wait]: makes the current thread wait.
  • [condition signal]: signals a waiting thread to stop waiting and continue executing.

Case

- (void)cx_testConditon {
    _testCondition = [[NSCondition alloc] init];
    for (int i = 0; i < 50; i++) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self cx_producer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self cx_producer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self cx_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self cx_consumer];
        });
    }
}

- (void)cx_producer {
    [_testCondition lock];
    self.ticketCount = self.ticketCount + 1;
    NSLog(@"produce one, current count %zd", self.ticketCount);
    if (self.ticketCount > 0) {
        [_testCondition signal]; // signal a waiting consumer
    }
    [_testCondition unlock];
}

- (void)cx_consumer {
    [_testCondition lock];
    if (self.ticketCount == 0) {
        NSLog(@"wait, current count %zd", self.ticketCount);
        [_testCondition wait];
    }
    self.ticketCount -= 1;
    NSLog(@"consume one, remaining count %zd", self.ticketCount);
    [_testCondition unlock];
}

In this code we define the producer method cx_producer and the consumer method cx_consumer. Both lock before changing the ticketCount value. In the consumer method, when ticketCount is 0, consumption waits. In the producer method, once ticketCount is greater than 0, a signal is sent so that execution can continue. This keeps the transaction safe.

Foundation source code about lock encapsulation

NSLock, NSRecursiveLock, and NSCondition, for example, are all wrappers around pthread, but these locks are implemented inside the Foundation framework, and Foundation is not open source. So how can we explore their underlying implementation? Here we take a detour and use the source code of the Swift Foundation framework (swift-corelibs-foundation) to explore. The source code has been uploaded to GitHub; interested readers can download it.

NSLock

In our code, hold down the Control + Command keys and jump into the declaration of NSLock: we can see that NSLock is a class that inherits from NSObject and conforms to the NSLocking protocol. Since only the protocol declaration is visible here, let's open the source code to see the concrete implementation.

NSRecursiveLock

We know that, compared with NSLock, NSRecursiveLock is recursive. In its underlying pthread_mutex_init call, NSRecursiveLock passes an attrs parameter (configuring the recursive type); the underlying implementations of the lock and unlock methods are the same as NSLock's.

NSCondition

NSCondition is initialized with pthread_cond_init(cond, nil), and its wait method calls pthread_cond_wait(cond, mutex) underneath. Through the analysis of these locks we can see that they are all built on top of pthread; when we are unsure which lock to use, pthread itself is the most complete implementation.

NSConditionLock

NSConditionLock introduction

  • 1.1 NSConditionLock is a lock: once one thread acquires it, the other threads must wait.
  • 1.2 [conditionLock lock] means conditionLock expects to acquire the lock. If no other thread holds the lock, this line executes without checking the internal condition; if another thread already holds the lock (whether it was acquired with or without a condition), the caller waits until that thread unlocks.
  • 1.3 [conditionLock lockWhenCondition:A] means that even if no other thread holds the lock, the caller still cannot acquire it while the internal condition is not equal to A, and keeps waiting. If the internal condition equals A and no other thread holds the lock, the caller enters the code region and marks the lock as acquired; any other thread then waits until it unlocks.
  • 1.4 [conditionLock unlockWithCondition:A] means release the lock and set the internal condition to A.
  • 1.5 ret = [conditionLock lockWhenCondition:A beforeDate:time] means the thread stops blocking if the lock has not been acquired when the time expires; the return value is then NO, and the state of the lock does not change. This method exists to handle both outcomes.
  • 1.6 The so-called condition is an integer; conditions are compared internally as integers.

Case

- (void)cx_testConditonLock {
    NSConditionLock *conditionLock = [[NSConditionLock alloc] initWithCondition:2];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        [conditionLock lockWhenCondition:1];
        NSLog(@"thread 1");
        [conditionLock unlockWithCondition:0];
    });
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        [conditionLock lockWhenCondition:2];
        [NSThread sleepForTimeInterval:0.1];
        NSLog(@"thread 2");
        [conditionLock unlockWithCondition:1];
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [conditionLock lock];
        NSLog(@"thread 3");
        [conditionLock unlock];
    });
}
  • Thread 1 calls [NSConditionLock lockWhenCondition:] and waits, because the condition is not met.

  • Then thread 3 calls [NSConditionLock lock], which essentially calls [NSConditionLock lockBeforeDate:]; no condition comparison is needed, so thread 3 prints.

  • Then thread 2 executes [NSConditionLock lockWhenCondition:] and prints. When the print completes, [NSConditionLock unlockWithCondition:] is called, the value is set to 1, and a broadcast is sent; thread 1 receives the signal, wakes up, executes, and prints.

  • The print order is therefore thread 3 -> thread 2 -> thread 1.

  • [NSConditionLock lockWhenCondition:]: compares the passed-in condition with the current internal value and blocks the thread until they are equal.

  • [NSConditionLock unlockWithCondition:]: changes the current value to the given condition and broadcasts, waking the waiting threads.

NSConditionLock execution process analysis

Through the above case, we may have several questions:

  • What is the difference between NSConditionLock and NSCondition?
  • The initialization [[NSConditionLock alloc] initWithCondition:2] passes in the parameter 2. What does that value do?
  • How does lockWhenCondition control the flow?
  • What does unlockWithCondition do?

For the previous locks we examined the underlying implementation through source code, but what approach should we use to analyze a lock when no source code is available? Let's try to explore this with disassembly. The environment here uses a real device.

  • initWithCondition process tracking

Set a symbolic breakpoint on -[NSConditionLock initWithCondition:].

After the breakpoint hits we can see the assembly code, where x0, x1, and x2 hold the method caller, the selector, and the argument respectively. Printing them here corresponds one-to-one with our Objective-C call. We focus on tracking the bl instructions, since bl represents a jump (branch with link); we step through and break on each bl in turn.

You can’t see the x0 output here for now, but you can see that the init method is called with an argument of 2.

Here we trace NSConditionLock to calling init with an argument of 2.

NSConditionLock calls the allocWithZone method, that is, it allocates memory.

Here NSCondition calls allocWithZone.

Here NSCondition calls the init method.

Here is the return: x0 is the returned object. Printing the memory structure of x0, we can see that it holds two member variables, an NSCondition and the value 2.

  • lockWhenCondition process tracking

Here NSDate calls the distantFuture method with an argument of 1.

The waitUntilDate method is executed to wait.

Here NSConditionLock calls lockWhenCondition:beforeDate:, with 1 as the first parameter and the return value of [NSDate distantFuture] as the second. We add a new symbolic breakpoint here on -[NSConditionLock lockWhenCondition:beforeDate:].

Execution then breaks in the lockWhenCondition:beforeDate: method.

After lockWhenCondition:beforeDate: we come once again to the lockWhenCondition method, only now on thread 4, with the parameter 2.

After thread 4's lockWhenCondition we again reach lockWhenCondition:beforeDate:. Among its bl calls, NSCondition's lock method is invoked.

  • unlockWithCondition process tracking

Next we reach the unlockWithCondition method, which also takes the lock first.

Here NSCondition calls the broadcast method.

When the method ends, NSCondition calls the unlock method.

Then we come back to thread 1's lockWhenCondition:beforeDate: in our Objective-C code, where an unlock is performed; the two lock operations correspond one-to-one with the two unlocks above.

Execution finally returns 0x0000000000000001, that is, 1.

Finally, thread 1 of the Objective-C code executes the unlockWithCondition method, which goes through the unlockWithCondition assembly flow shown above. Here the 1 means it did not wait.

Disassembly analysis and source comparison

By contrast, we can see that the implementation process of disassembly analysis is consistent with the source logic.

GCD implements multi-read, single-write

Suppose we maintain a piece of data in memory, and it may be operated on from multiple places. How do we keep the data safe? The following three points must be satisfied:

  • 1. Reading and writing are mutually exclusive
  • 2. Writes are mutually exclusive with each other
  • 3. Reads proceed concurrently
- (instancetype)init {
    self = [super init];
    if (self) {
        _currentQueue = dispatch_queue_create("chenxi", DISPATCH_QUEUE_CONCURRENT);
        _dic = [NSMutableDictionary dictionary];
    }
    return self;
}

- (void)cx_setSafeObject:(id)object forKey:(NSString *)key {
    key = [key copy];
    __weak __typeof(self)weakSelf = self;
    dispatch_barrier_async(_currentQueue, ^{
        [weakSelf.dic setObject:object forKey:key];
    });
}

- (id)cx_safeObjectForKey:(NSString *)key {
    __block id temp;
    __weak __typeof(self)weakSelf = self;
    dispatch_sync(_currentQueue, ^{
        temp = [weakSelf.dic objectForKey:key];
    });
    return temp;
}
  • First we need to maintain a GCD queue, preferably not the global queue; after all, we all know that the global queue has pitfalls with barrier functions, which we will not analyze here.

  • Serial queues are ruled out for performance, deadlock, and congestion reasons, so we use a custom concurrent queue.

  • First let's look at the read operation, cx_safeObjectForKey. We cannot use an asynchronous function, because of multithreading:

    • Thread 2 fetches name, thread 3 fetches age.
    • With asynchronous concurrency the results could get confused: what should have read name reads age instead.
    • We allow multiple read tasks in at the same time, but a read must return its value synchronously, so we choose the synchronous function (concurrent reads).
  • Now let's look at the write operation. The key is copied during the write; here is a quote from the reference that explains why:

The function caller is free to pass an NSMutableString key and can modify it after the function returns. So we must copy the string passed in to make sure the function works correctly. If the string passed in is not mutable (that is, the normal NSString type), calling copy is essentially a null operation.

  • The write operation uses dispatch_barrier_async:

    • The barrier function's task starts only after all earlier tasks have completed, and no later task starts until it finishes, so writes proceed one after another (mutually exclusive writes), with no disorder.
    • Why not a plain asynchronous function? That should be easy to see; it would create chaos among the writes.
    • Why not a synchronous function? If reads and writes are mixed, a synchronous barrier would make the caller wait for all pending operations to finish before continuing; that is clearly unreasonable.