1 Resource contention and thread synchronization
When multiple threads contend for the same shared resource, thread safety problems arise. A race resource is a shared resource visible to multiple threads, including global (non-const) variables, static local variables, heap variables, resource files, and so on.
Contention between threads can cause a number of problems:
- A thread working on a shared resource is interrupted by another thread, for example when its time slice runs out and it is forced to yield
- A shared resource is modified by one thread without the others being notified, causing data inconsistency between threads
- Due to compiler and CPU optimizations, the execution order of instructions is rearranged, producing unpredictable results
1.1 Atomic Operation
An atomic operation is an indivisible operation: it completes within a single CPU time slice, so even when threads are switched, no thread ever sees half-finished data in the same block of memory.
An atom is the smallest indivisible unit: at the scale or layer in question, no finer internal implementation or structure can be observed. The smallest unit of execution in a computer program is a single instruction. We could implement atomic operations by consulting each CPU's instruction manual and writing assembly directly, but that approach is far too inefficient for everyday development.
In modern programming languages, some simple expressions might appear to be the smallest unit of execution, but many of them actually compile into more than one assembly instruction, so they are not truly atomic. Taking the addition x += n as an example, GCC compiles it into the following assembly:
movl 0xc(%ebp), %eax
addl $n, %eax
movl %eax, 0xc(%ebp)
Nor is such an expression safe in a multithreaded context:
dispatch_group_t group = dispatch_group_create();
__block int i = 1;
for (int k = 0; k < 300; k++) {
    dispatch_group_enter(group);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        ++i;
        dispatch_group_leave(group);
    });
    dispatch_group_enter(group);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        --i;
        dispatch_group_leave(group);
    });
}
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"----result=%d i=%d", self.pro1, i);
});
In the example above, the variable i should theoretically end up as 1, but in practice it ends up with values such as 0, -1, 2, -2, or 1.
To avoid such errors, many operating systems and compilers provide built-in functions or APIs for common atomic operations, including common expressions that actually combine multiple instructions. In the operation above, replacing ++i/--i with OSAtomicIncrement32(&i)/OSAtomicDecrement32(&i) yields the expected result of 1.
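Here is a minimal Swift sketch of the same counter experiment using the atomic APIs (the OSAtomic functions are deprecated in favor of C stdatomic, but they still illustrate the point):

import Foundation

var i: Int32 = 1
let group = DispatchGroup()
for _ in 0..<300 {
    DispatchQueue.global().async(group: group) { OSAtomicIncrement32(&i) } // atomic ++i
    DispatchQueue.global().async(group: group) { OSAtomicDecrement32(&i) } // atomic --i
}
group.notify(queue: .main) {
    print("result i=\(i)") // always 1, unlike the plain ++i/--i version
}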
Here are some examples of atomic operation APIs on different platforms:
Windows API | macOS/iOS API | GCC built-in | Purpose |
---|---|---|---|
InterlockedExchange | OSAtomicAdd32 | AO_SWAP | Atomically swaps two values |
InterlockedDecrement | OSAtomicDecrement32 | AO_DEC | Atomically decrements a value by one |
InterlockedIncrement | OSAtomicIncrement32 | AO_INC | Atomically increments a value by one |
InterlockedXor | OSAtomicXor32 | AO_XOR | Atomic XOR |
In Objective-C, the atomic modifier on a property performs a similar function to these APIs: the compiler uses a locking mechanism to ensure the atomicity of the modified variable, and it is the default. In practice, however, operating on a property value generally involves three steps (read, operate, write); even if each accessor is atomic, the sequence as a whole is not thread-safe. Moreover, the synchronization lock is expensive on iOS (macOS does not have a similar problem), so properties are usually declared nonatomic:
@property (nonatomic, assign) int pro1;
In real-world code, it is more common to lock the core business logic so that it is atomic as a whole, rather than relying on atomic property accessors.
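As a minimal sketch (the names are hypothetical), this is what locking the whole read-operate-write sequence looks like, instead of relying on an atomic accessor:

import Foundation

let lock = NSLock()
var balance = 0

func deposit(_ amount: Int) {
    lock.lock()
    let current = balance        // read
    balance = current + amount   // operate + write, protected as one unit
    lock.unlock()
}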
1.2 Reentrancy and thread safety
A function is reentered when, before it has finished executing, it is entered again due to external factors or an internal call. A function can be reentered in two cases:
- Multiple threads execute this function simultaneously
- The function calls itself, directly or indirectly (possibly through several intermediate calls)
A function that can safely be reentered is called reentrant, meaning that reentering it has no adverse consequences. Reentrant functions have the following characteristics:
- They do not use any static or global non-const variables
- They do not use any pointers to static or global non-const variables
- They rely only on the arguments provided by the caller
- They do not rely on locks on any single resource (mutexes, etc.)
- They do not call any non-reentrant functions
Reentrancy is a strong guarantee for concurrency: a reentrant function can be used safely in a multithreaded environment. When dealing with multithreading, we can split the program into reentrant functions and focus our attention on everything that is not reentrant.
In the functional programming paradigm, the system does not maintain redundant mutable state; state simply flows through functions. The whole program can therefore be thought of as a collection of reentrant functions, which gives functional programming an inherent advantage in highly concurrent programming.
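A small Swift sketch (a hypothetical example) contrasting the two:

var seed = 0 // shared mutable state

// Non-reentrant: performs a read-modify-write on a global, so being
// reentered mid-execution can lose updates or return duplicate ids.
func nextIdUnsafe() -> Int {
    seed += 1
    return seed
}

// Reentrant: relies only on its argument and local values.
func nextId(after current: Int) -> Int {
    return current + 1
}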
1.3 Excessive CPU optimization
1.3.1 Out-of-order optimization and memory barrier
The CPU has a dynamic scheduling mechanism that may reorder instructions during execution for efficiency. Some seemingly independent variables actually affect each other, and such reordering (by the CPU or by the compiler) can lead to incorrect results.
Memory barriers are the common countermeasure. A barrier forces the processor to complete every load and store that appears before it, before executing any load or store that appears after it, ensuring that a thread's memory operations always complete in the intended order. To use a memory barrier, simply call the OSMemoryBarrier() function wherever it is needed:
class A {
    let lock = NSRecursiveLock()
    var _a : A? = nil
    var a : A? {
        lock.lock()
        if _a == nil {
            let temp = A()
            // The barrier ensures temp is fully constructed before it becomes visible through _a
            OSMemoryBarrier()
            _a = temp
        }
        lock.unlock()
        return _a
    }
}
It is worth noting that most lock types incorporate memory barriers, ensuring that the loads and stores preceding the lock complete before the critical section is entered.
1.3.2 Register optimization and volatile variables
In some cases, the compiler loads variables into registers, and if those variables are visible to multiple threads, this optimization may prevent other threads from noticing any changes to the variables, causing thread synchronization problems.
Adding the volatile keyword before a variable forces the compiler to load it from memory every time it is used. Declare a variable volatile if its value can be changed at any time by something the compiler cannot detect. The volatile modifier is used extensively in atomic operation APIs; in the system library, for example, the atomic variables are all declared volatile:
<libkern/OSAtomic.h>
int32_t OSAtomicIncrement32( volatile int32_t *__theValue )
2 Locks
The most common method of thread synchronization is locking. A lock is a voluntary, non-mandatory mechanism: each thread attempts to acquire the lock before accessing data or a resource, and releases it when the access is complete. If a thread tries to acquire a lock that is already held, it waits until the lock is released.
2.1 Mutex
2.1.1 Basic Concepts
The mutex is a common lock type on many platforms. It is a sleep-waiting lock: when the lock is held, other threads that request it are suspended, and when the lock is released, all waiting threads wake up and compete for it again. Suspending and waking involve context switches between user mode and kernel mode, and this switching costs performance.
A mutex is similar to a binary semaphore, except that a mutex can only be released by the thread that acquired it, not by anyone else. On some platforms it is implemented with a binary semaphore. Semaphores are covered in more detail in section 2.3.
A mutex can be shared by multiple processes or visible only to threads within a process, and it can be a normal lock, an error-checking lock, or a recursive lock. Let's take a closer look at mutex usage and caveats using pthread_mutex from pthreads.
2.1.2 pthread_mutex
pthread_mutex is the mutex in pthreads and is cross-platform. Pthreads, short for POSIX Threads, is the POSIX (Portable Operating System Interface) standard for threads; POSIX is a Unix API design standard compatible with all major platforms. pthread_mutex is therefore relatively low-level and can serve as a cross-platform mutex implementation.
Let’s take a look at the most common invocation:
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_lock(&mutex);
block();
pthread_mutex_unlock(&mutex);
pthread_mutex lets you set its scope: shared across multiple processes, or visible only within one process. The default is the latter:
pthread_mutexattr_t mattr;
pthread_mutexattr_init(&mattr);
/* PTHREAD_PROCESS_SHARED: the mutex can be shared across processes.
   PTHREAD_PROCESS_PRIVATE: synchronization between threads within the same process (the default). */
pthread_mutexattr_setpshared(&mattr, PTHREAD_PROCESS_PRIVATE);
A pthread_mutex can be a normal lock, an error-checking lock, or a recursive lock; the type attribute selects the behavior:
/* The mutex type can be:
   PTHREAD_MUTEX_NORMAL (0): normal lock (the default). No deadlock detection is
   provided: attempting to relock the mutex deadlocks, and unlocking a mutex that
   the thread has not locked, or that is already unlocked, is undefined behavior.
   PTHREAD_MUTEX_ERRORCHECK (1): error-checking lock. An error is returned if a
   thread tries to relock a mutex it already holds, or to unlock a mutex it has
   not locked or that is already unlocked.
   PTHREAD_MUTEX_RECURSIVE (2): nested/recursive lock that keeps a lock count.
   The count is set to 1 when a thread first acquires the mutex, incremented on
   each relock, and decremented on each unlock; when it reaches zero, the mutex
   becomes available to other threads. An error is returned if a thread tries to
   unlock a mutex it has not locked or that is already unlocked. */
pthread_mutexattr_settype(&mattr, PTHREAD_MUTEX_NORMAL);
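Since this article's examples mix Objective-C and Swift, here is a small Swift sketch of the recursive type in action (the pthread API is callable directly from Swift):

var mattr = pthread_mutexattr_t()
pthread_mutexattr_init(&mattr)
pthread_mutexattr_settype(&mattr, PTHREAD_MUTEX_RECURSIVE)
var mutex = pthread_mutex_t()
pthread_mutex_init(&mutex, &mattr)
pthread_mutexattr_destroy(&mattr)

pthread_mutex_lock(&mutex)
pthread_mutex_lock(&mutex)   // relocking on the same thread succeeds; the count is now 2
pthread_mutex_unlock(&mutex)
pthread_mutex_unlock(&mutex) // count reaches 0; the mutex becomes available to other threads
pthread_mutex_destroy(&mutex)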
A mutex can also be created the convenient way, with the static initializer PTHREAD_MUTEX_INITIALIZER, as in the snippet earlier. Experiments show that such a lock has all default attributes: it is process-private and of the normal type.
pthreads also provides a non-blocking variant, pthread_mutex_trylock. If the mutex is already locked, or the maximum number of recursive locks has been exceeded, it returns immediately instead of suspending; the lock is acquired only if it is not currently held.
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
int res = pthread_mutex_trylock(&mutex);
if(res == 0){
block();
pthread_mutex_unlock(&mutex);
}else if(res == EBUSY){
printf("Mutex cannot be obtained because the mutex it points to is locked.");
}else if (res == EAGAIN){
printf("The mutex could not be acquired because the maximum number of recursive locks for Mutex was exceeded.");
}
2.1.3 NSLock and NSRecursiveLock
NSLock is the most common lock type in iOS and corresponds to a normal (non-recursive) mutex; its recursive counterpart is NSRecursiveLock. Let's start with the official documentation:
An NSLock object can be used to mediate access to an application's global data or to protect a critical section of code, allowing it to run atomically. Warning: The NSLock class uses POSIX threads to implement its locking behavior. When sending an unlock message to an NSLock object, you must be sure that message is sent from the same thread that sent the initial lock message. Unlocking a lock from a different thread can result in undefined behavior.
You should not use this class to implement a recursive lock. Calling the lock method twice on the same thread will lock up your thread permanently. Use the NSRecursiveLock class to implement recursive locks instead.
Unlocking a lock that is not locked is considered a programmer error and should be fixed in your code. The NSLock class reports such errors by printing an error message to the console when they occur.
From the documentation we learn:
- The implementation is based on pthreads
- It is illegal to attempt to release a lock held by another thread
- For recursively nested locking scenarios, use the related NSRecursiveLock class. Recursion does not arise in every scenario, and NSLock outperforms NSRecursiveLock, but when NSLock accidentally causes a deadlock, try replacing it with NSRecursiveLock (see the sketch after this list).
- lock and unlock must correspond one to one. Unlocking a lock that is not locked is a programmer error, which NSLock reports by printing a message to the console. A lock that never receives its matching unlock starves the waiting threads, leaving them suspended.
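A Swift sketch (a hypothetical example) of the recursive case: the function re-locks on the same thread while recursing. With NSLock the inner lock() would never return and the thread would deadlock on itself; NSRecursiveLock keeps a per-thread lock count instead:

import Foundation

let rlock = NSRecursiveLock()

func drain(_ items: inout [Int]) {
    rlock.lock()
    if let last = items.popLast() {
        print("removed \(last)")
        drain(&items) // re-enters and re-locks on the same thread
    }
    rlock.unlock()
}

var data = [1, 2, 3]
drain(&data)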
2.1.4 @synchronized
@synchronized(self) {
    // your code here
}
Under the hood, @synchronized prefixes the code block with a call to objc_sync_enter and appends objc_sync_exit at its end. Here are the two function declarations:
/**
 * Begin synchronizing on 'obj'.
 * Allocates recursive pthread_mutex associated with 'obj' if needed.
 *
 * @param obj The object to begin synchronizing on.
 *
 * @return OBJC_SYNC_SUCCESS once lock is acquired.
 */
OBJC_EXPORT int
objc_sync_enter(id _Nonnull obj)
    OBJC_AVAILABLE(10.3, 2.0, 9.0, 1.0, 2.0);

/**
 * End synchronizing on 'obj'.
 *
 * @param obj The object to end synchronizing on.
 *
 * @return OBJC_SYNC_SUCCESS or OBJC_SYNC_NOT_OWNING_THREAD_ERROR
 */
OBJC_EXPORT int
objc_sync_exit(id _Nonnull obj)
    OBJC_AVAILABLE(10.3, 2.0, 9.0, 1.0, 2.0);
These two functions live in runtime/objc-sync.mm, which is open source, so you can read the implementation. When objc_sync_enter(obj) is called, it looks up the appropriate SyncData by hashing obj's memory address and locks it; when objc_sync_exit(obj) is called, it looks up the appropriate SyncData and unlocks it. SyncData is a node in a linked list with the following data structure:
typedef struct SyncData {
struct SyncData* nextData;
id object;
int threadCount; // number of THREADS using this block
recursive_mutex_t mutex;
} SyncData;
typedef struct {
SyncData *data;
unsigned int lockCount; // number of times THIS THREAD locked this block
} SyncCacheItem;
typedef struct SyncCache {
unsigned int allocated;
unsigned int used;
SyncCacheItem list[0];
} SyncCache;
The locking code is as follows:
// Begin synchronizing on 'obj'.
// Allocates recursive mutex associated with 'obj' if needed.
// Returns OBJC_SYNC_SUCCESS once lock is acquired.
int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        require_action_string(data != NULL, done, result = OBJC_SYNC_NOT_INITIALIZED, "id2data failed");

        result = recursive_mutex_lock(&data->mutex);
        require_noerr_string(result, done, "mutex_lock failed");
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }

done:
    return result;
}

// End synchronizing on 'obj'.
// Returns OBJC_SYNC_SUCCESS or OBJC_SYNC_NOT_OWNING_THREAD_ERROR
int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, RELEASE);
        require_action_string(data != NULL, done, result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR, "id2data failed");

        result = recursive_mutex_unlock(&data->mutex);
        require_noerr_string(result, done, "mutex_unlock failed");
    } else {
        // @synchronized(nil) does nothing
    }

done:
    if (result == RECURSIVE_MUTEX_NOT_LOCKED)
        result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;

    return result;
}
As you can see, the core logic is recursive_mutex_lock and recursive_mutex_unlock. These two functions are in Apple's private library, so their implementation is unknown; however, the documentation shows they are based on the recursive type of pthread_mutex, discussed earlier.
It is important to note that the main purpose of the obj argument is to generate the hash index of the linked-list node. The object's life cycle has no effect on the code block or the locking process: once it has been passed in, it does not matter when you release it or set it to nil. However, if nil is passed in, no lock or unlock operation is performed at all.
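Swift has no @synchronized keyword, but the same runtime functions can be called directly. Here is a minimal helper sketching what the compiler emits for Objective-C (the helper's name is my own):

import ObjectiveC

func synchronized<T>(_ obj: AnyObject, _ body: () throws -> T) rethrows -> T {
    objc_sync_enter(obj)            // recursive lock keyed on obj's address
    defer { objc_sync_exit(obj) }
    return try body()
}

// Usage: synchronized(self) { counter += 1 }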
2.2 Spin locks
A spin lock is similar to a mutex, except that when the lock is held by one thread, other threads do not sleep (get suspended) but keep running (spinning, busy-waiting) until the lock is released. Because no switch between user mode and kernel mode is involved, it can be far more efficient than a mutex.
While it is more efficient than mutex, it has some drawbacks:
- A spin lock occupies the CPU the whole time it waits: it keeps running without holding the lock, so if the lock cannot be acquired quickly, CPU efficiency drops. Under highly concurrent execution (high collision probability, fierce competition), or when the critical section is time-consuming (for example file I/O, sockets, or thread operations that enter the kernel), spinning can easily send CPU usage soaring.
- A spin lock can deadlock, for example when it is acquired recursively.
- Spin locks can cause priority inversion. Specifically, if a low-priority thread holds the lock while a high-priority thread tries to acquire it, the high-priority thread busy-waits and consumes a lot of CPU; the low-priority thread cannot compete with it for CPU time, so its task is delayed and the lock is never released. Because of priority inversion, OSSpinLock is no longer safe on newer versions of iOS: unless developers can guarantee that all threads accessing the lock have the same priority, spin locks should not be used on iOS. os_unfair_lock is recommended from iOS 10 onward (a wrapper sketch follows the example code below).
Therefore, use spin locks with caution. They are only really needed on preemptible kernels or SMP systems; on a single-CPU, non-preemptible kernel, spin lock operations are no-ops. Spin locks suit cases where the holder keeps the lock for a very short time.
#import <libkern/OSAtomic.h>
OSSpinLock lock = OS_SPINLOCK_INIT;
OSSpinLockLock(&lock);
OSSpinLockUnlock(&lock);
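Since OSSpinLock is unsafe, os_unfair_lock is the recommended replacement from iOS 10 onward. A minimal Swift wrapper sketch; the lock must live at a stable memory address, hence the manual allocation:

import os.lock

final class UnfairLock {
    private let pointer: os_unfair_lock_t

    init() {
        pointer = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        pointer.initialize(to: os_unfair_lock())
    }

    deinit {
        pointer.deinitialize(count: 1)
        pointer.deallocate()
    }

    func lock() { os_unfair_lock_lock(pointer) }
    func unlock() { os_unfair_lock_unlock(pointer) }
}

Unlike OSSpinLock, a contended os_unfair_lock waits in the kernel instead of busy-waiting, so it avoids the priority inversion described above.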
2.3 Semaphores
A semaphore is a facility used in multithreaded environments to coordinate threads and ensure that they use shared resources properly.
Semaphores can be divided into several categories:
- Binary semaphore: the semaphore only takes the values 0 or 1. It has just two states, occupied and unoccupied, and can be held by only one thread at a time.
- Integer semaphore: the value is an integer, and the semaphore can be acquired by multiple threads at the same time until its value drops to 0.
- Record semaphore: each semaphore s has, in addition to an integer value (count), a wait queue listing the threads blocked on it. When the semaphore is released and its value incremented by one, the system automatically wakes a waiting thread from the queue to acquire the semaphore, decrementing the value by one again.
The semaphore controls access to a shared resource through a counter whose value is a non-negative integer. If the counter is greater than 0, access is allowed and the counter is decremented by 1; if it is 0, access is denied and any thread attempting to pass is put in a waiting state.
2.3.1 sem_t in pthread
Its typical usage is as follows:
#include <semaphore.h>

// Initialize the semaphore:
int sem_init(sem_t *sem, int pshared, unsigned int val);
// Wait: blocks until the semaphore is available, then occupies it and decrements its value by 1.
int sem_wait(sem_t *sem);
// Signal: releases the semaphore, incrementing its value by 1.
int sem_post(sem_t *sem);
// Destroy the semaphore:
int sem_destroy(sem_t *sem);
Note that the initialization function above has been deprecated by Apple: sem_init simply returns -1, so the call fails and all subsequent operations are invalid. iOS does not support unnamed semaphores, so the solution is to create a named semaphore instead; the snippet below creates one with an initial value of 2. When you are finished with it, call sem_unlink:
sem_t *semt = sem_open("sem name", O_CREAT, 0664, 2);
// ... use the semaphore ...
sem_unlink("sem name"); // note: sem_unlink takes the name, not the sem_t pointer
Let's look at a simple example. The result is clear: at any moment, at most two threads are between wait and post, and all the other threads are suspended. After one second, the two holders post, and two more waiting threads wake up and continue.
func testSem_t(name: String) {
    let semt = sem_open(name, O_CREAT, 0o664, 2)
    if semt != SEM_FAILED {
        for i in 0...5 {
            DispatchQueue.global().async {
                sem_wait(semt)
                print("wait \(i)")
                sleep(1)
                sem_post(semt)
                print("post \(i)")
            }
        }
        sem_unlink(name)
    } else {
        if errno == EEXIST {
            print("Semaphore with name \(name) already exists.\n")
        } else {
            print("Unhandled error: \(errno). name=\(name) \n")
        }
        let newName = name + "\(arc4random() % 500)"
        print("new name = \(newName)")
        testSem_t(name: newName)
    }
}
Note that creating a semaphore with a name that already exists returns an error, and re-running the program shortly afterwards tends to hit the same error. So while we try to make each semaphore name unique, we should also handle the duplicate-name error; the handling in this example is simplistic and only for reference. (errno is a global variable holding the error code returned by the kernel; see <errno.h>.)
2.3.2 dispatch_semaphore
dispatch_semaphore is GCD's semaphore, used to control the concurrent execution of multiple threads: wait/signal events bound the maximum number of threads executing concurrently. When that maximum is reduced to 1, the semaphore serves as a synchronization lock.
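A minimal sketch of the lock usage (the names are illustrative): with an initial value of 1, wait/signal give mutual exclusion:

import Foundation

let lock = DispatchSemaphore(value: 1)
var shared = [Int]()

func append(_ value: Int) {
    lock.wait()           // "lock": the value drops to 0, other threads block
    shared.append(value)
    lock.signal()         // "unlock": the value returns to 1
}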
The examples in 2.3.1 are implemented with dispatch_semaphore as follows:
let semt = DispatchSemaphore(value: 7)
for i in 0...20 {
    DispatchQueue.global().async {
        print(" \(i)")
        semt.wait()
        print("wait \(i)")
        sleep(1)
        semt.signal()
        print("post \(i)")
    }
}
2.3.3 Uses of semaphores
- A binary semaphore is equivalent to a mutex: when the semaphore starts at 1, wait is equivalent to lock and signal to unlock. Unlike a mutex, however, it allows locking in one thread and unlocking in another, which is more flexible but brings more uncertainty.
In the following code, queue A's tasks run one at a time, each waiting for queue B to signal. Doing the same with NSLock is theoretically not allowed (unlocking from another thread), yet it happens to work. With NSRecursiveLock, all the unlock operations fail and the thread does not hang; with pthread_mutex, unlocking likewise fails when the type is set to recursive. (A normal mutex may be implemented with semaphores on some platforms, which could explain why it works, but relying on this is not recommended.)
let semt = DispatchSemaphore(value: 1)
let q1 = DispatchQueue(label: "A")
let q2 = DispatchQueue(label: "B")
for i in 0...20 {
    q1.async {
        print(" \(i)")
        semt.wait()
        print("wait \(i)")
    }
    q2.asyncAfter(deadline: .now() + .seconds(i * 1)) {
        semt.signal()
        print("post \(i)")
    }
}
- Controlling the maximum concurrency of a block of code. By setting the semaphore's initial value, it is easy to bound how many threads execute a piece of code at once, or to cap the number of simultaneous accesses to a resource.
- When the semaphore's initial value is 0 and wait/signal are called from different threads, the classic producer-consumer model can be implemented: a one-to-one scheme of observation and notification. When the producer finishes producing, it immediately notifies a consumer to buy; when there is no product, consumers must wait.
var a: Int32 = 0
let semt = DispatchSemaphore(value: 0)
// Producers: 303 items in total
for i in 0..<303 {
    DispatchQueue.global().asyncAfter(deadline: .now() + .seconds(1)) {
        print("task start \(i) a= \(OSAtomicAdd32(1, &a))")
        semt.signal()
    }
}
// Consumers: 5 in total
for i in 0..<5 {
    DispatchQueue.global().async {
        var count: Int32 = 0
        while true {
            semt.wait()
            print("Observer \(i) finish a= \(OSAtomicAdd32(-1, &a)) count= \(OSAtomicAdd32(1, &count))")
        }
    }
}
In the example above, the semaphore's value corresponds to the inventory, which starts at 0. The producers create 303 items in total, and every item is sold promptly. There are 5 consumers (or dealers), and each time an item is produced, one of them snaps it up. As the output shows, the instantaneous inventory fluctuates under high concurrency, but the final inventory is 0: the five consumers together buy exactly the total production, and since 303 divided by 5 leaves a remainder of 3, the first three consumers each grab one extra item.
Producer-consumer models like the one above are better implemented with condition variables. Let's take a closer look.
2.4 Conditional Variables
As a synchronization device, a condition variable is similar to a fence: it allows threads to wait for a condition to occur in a non-contending manner. While the condition has not occurred, the thread stays dormant; when another thread notifies it that the condition has occurred, the thread wakes up and continues executing. Condition variables are relatively low-level synchronization primitives that are rarely used directly; they are more often used to implement higher-level synchronization. A classic example of using condition variables is the thread pool.
NSCondition is the iOS implementation of a condition variable. It is a special type of lock through which different threads can be scheduled: a thread blocks on a condition until another thread signals it that the condition is satisfied, at which point it continues correctly. For example, you might start one thread to download images and another to process them. The processing thread blocks while there are no images; when the download thread finishes, it signals the processing thread.
// assumed property on the enclosing class:
let conditionLock = NSCondition()

func consumer() {
    DispatchQueue.global().async {
        print("start to track")
        while true {
            self.conditionLock.wait()
            print("in \(Thread.current)")
        }
    }
}

func producer() {
    let queue1 = DispatchQueue.global()
    for i in 0...5 {
        queue1.asyncAfter(deadline: .now() + .milliseconds(i * 300), execute: {
            print(i)
            self.conditionLock.signal()
        })
    }
}
start to track
0
in <NSThread: 0x604000272dc0>{number = 3, name = (null)}
1
in <NSThread: 0x604000272dc0>{number = 3, name = (null)}
2
in <NSThread: 0x604000272dc0>{number = 3, name = (null)}
3
in <NSThread: 0x604000272dc0>{number = 3, name = (null)}
4
in <NSThread: 0x604000272dc0>{number = 3, name = (null)}
5
in <NSThread: 0x604000272dc0>{number = 3, name = (null)}
Just as lock and unlock correspond, wait() and signal() in NSCondition must also correspond. Multiple waiting threads are woken in sequence, one per signal. A thread that calls wait() without a matching signal() stays blocked forever; likewise, when multiple signal() calls arrive while no one is waiting, the extra signals are discarded. In the example above, with the delay set to 0, only one waiting thread is released at a time: each signal is a single key, and extra keys are thrown away.
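Note that Apple's documented NSCondition pattern locks the condition and re-checks a predicate in a loop around wait(), because a waiting thread may wake spuriously. A minimal sketch:

import Foundation

let condition = NSCondition()
var available = 0

func take() {                // consumer
    condition.lock()
    while available == 0 {
        condition.wait()     // releases the lock while waiting, re-acquires on wake
    }
    available -= 1
    condition.unlock()
}

func put() {                 // producer
    condition.lock()
    available += 1
    condition.signal()       // wakes one waiting thread
    condition.unlock()
}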
NSConditionLock is another condition-variable lock; the difference is that an integer condition can be passed in, making it possible to handle multiple conditions. As with any lock, lock(whenCondition:) pairs with unlock(withCondition:), and the lock can only be acquired when the condition value matches. Like NSLock, it also provides plain lock()/unlock(), which simply do not check or modify the condition.
let conditionLock = NSConditionLock()
let queue1 = DispatchQueue.global()
for i in 1...5 {
    queue1.asyncAfter(deadline: .now() + .milliseconds(0), execute: {
        conditionLock.lock()
        print("dosomething thread1 condition=\(i)")
        if i == 3 {
            conditionLock.unlock(withCondition: 3)
        } else {
            conditionLock.unlock()
        }
    })
    DispatchQueue.global().async {
        conditionLock.lock(whenCondition: 3)
        print("in \(Thread.current)")
        conditionLock.unlock()
    }
}
The code above yields the following results
dosomething thread1 condition=1
dosomething thread1 condition=2
dosomething thread1 condition=3
in <NSThread: 0x604000663600>{number = 4, name = (null)}
in <NSThread: 0x604000663700>{number = 5, name = (null)}
dosomething thread1 condition=5
in <NSThread: 0x6040006635c0>{number = 6, name = (null)}
in <NSThread: 0x60000026f340>{number = 3, name = (null)}
dosomething thread1 condition=4
in <NSThread: 0x600000275780>{number = 7, name = (null)}
In the code above, multiple threads wait until condition=3 to execute.
2.5 Read/write locks
In a broad logical sense, a read/write lock can be considered a shared version of a mutex. When a critical section is mostly read and only occasionally written, a read/write lock can reduce the cost of mutual exclusion.
A read/write lock can be acquired in two ways: shared and exclusive. Write operations acquire it exclusively; read operations acquire it shared:
Lock state | Acquire in shared mode | Acquire in exclusive mode |
---|---|---|
Free | Succeeds | Succeeds |
Shared | Succeeds | Waits |
Exclusive | Waits | Waits |

The following example uses pthread_rwlock to protect concurrent reads and writes of a file:
NSString *path = [[NSBundle mainBundle] pathForResource:@"t.txt" ofType:nil];
dispatch_group_t group = dispatch_group_create();
__block double start = CFAbsoluteTimeGetCurrent();
for (int k = 0; k <= 3000; k++) {
dispatch_group_enter(group);
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self readBookWithPath:path];
dispatch_group_leave(group);
});
dispatch_group_enter(group);
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self writeBook:path string:[NSString stringWithFormat:@"--i=%d--",k]];
dispatch_group_leave(group);
});
}
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
NSLog(@"----result=%@ time=%f",[self readBookWithPath:path],CFAbsoluteTimeGetCurrent()-start);
});
static pthread_rwlock_t rwLock = PTHREAD_RWLOCK_INITIALIZER; // lock declaration (assumed; not shown in the original)

- (NSString *)readBookWithPath:(NSString *)path {
    pthread_rwlock_rdlock(&rwLock);
    NSLog(@"start read ---- ");
    NSString *contentString = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
    NSLog(@"end read ---- %@", contentString);
    pthread_rwlock_unlock(&rwLock);
    return contentString;
}

- (void)writeBook:(NSString *)path string:(NSString *)string {
    pthread_rwlock_wrlock(&rwLock);
    NSLog(@"start write ---- ");
    [string writeToFile:path atomically:YES encoding:NSUTF8StringEncoding error:nil];
    NSLog(@"end write ---- %@", string);
    pthread_rwlock_unlock(&rwLock);
}
The output is as follows:
......
2017-12-24 17:24:20.506522+0800 lock[8591:299152] start write ----
2017-12-24 17:24:20.507522+0800 lock[8591:299152] end write ---- --i=2998--
2017-12-24 17:24:20.507685+0800 lock[8591:299162] start read ----
2017-12-24 17:24:20.507828+0800 lock[8591:299162] end read ---- --i=2998--
2017-12-24 17:24:20.507943+0800 lock[8591:299154] start write ----
2017-12-24 17:24:20.508872+0800 lock[8591:299154] end write ---- --i=2999--
2017-12-24 17:24:20.509065+0800 lock[8591:299161] start read ----
2017-12-24 17:24:20.509240+0800 lock[8591:299161] end read ---- --i=2999--
2017-12-24 17:24:20.509358+0800 lock[8591:299157] start write ----
2017-12-24 17:24:20.510294+0800 lock[8591:299157] end write ---- --i=3000--
2017-12-24 17:24:20.510443+0800 lock[8591:298979] start read ----
2017-12-24 17:24:20.510582+0800 lock[8591:298979] end read ---- --i=3000--
2017-12-24 17:24:20.510686+0800 lock[8591:298979] ----result=--i=3000-- time=5.968375
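For completeness, a small Swift sketch of the same pthread_rwlock pattern (the names are hypothetical): many readers may hold the lock concurrently, while a writer requires exclusive access:

var rwlock = pthread_rwlock_t()
pthread_rwlock_init(&rwlock, nil)
var sharedValue = 0

func readValue() -> Int {
    pthread_rwlock_rdlock(&rwlock)   // shared acquisition
    defer { pthread_rwlock_unlock(&rwlock) }
    return sharedValue
}

func writeValue(_ v: Int) {
    pthread_rwlock_wrlock(&rwlock)   // exclusive acquisition
    defer { pthread_rwlock_unlock(&rwlock) }
    sharedValue = v
}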
2.6 Critical sections
A critical section is a stricter synchronization facility than a mutex: it is visible only within its own process, and it is illegal for other processes to attempt to acquire it (whereas semaphores and mutexes can be shared across processes). Acquiring the lock is called entering the critical section, and releasing it is called leaving the critical section. Otherwise it has the same properties as a mutex.