Multithreading in iOS is what gives rise to locking in the first place. Imagine an iOS with no concept of multithreading: all code would run in a synchronous environment, there would be no contention for shared resources, and of course no way to take advantage of multiple cores either. Multithreading is applied broadly throughout an app, while locking is applied locally; the two complement each other to improve both performance and the stability of the program.
Thought one: understanding the types of locks
There are three basic types of locks: mutex, spin, and read-write.
With a mutex, a thread that cannot enter the critical section is put to sleep, and it waits to be scheduled again by the system once the lock is released. With a spin lock, the thread does not sleep; it keeps spinning until it can enter the critical section. A read-write lock is better thought of as an idea, implemented on top of mutexes for a specific scenario: reads run in parallel, reads and writes exclude each other, and writes exclude each other. Other kinds of locks, such as semaphores, condition locks, and recursive locks, can be understood as higher-level wrappers built on these basic types.
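To make the sleep-versus-spin distinction concrete, here is a minimal sketch using the system primitives (the function and variable names are made up for illustration; os_unfair_lock is Apple's replacement for the deprecated OSSpinLock and, strictly speaking, blocks rather than spins):

    #import <pthread.h>
    #import <os/lock.h>

    static pthread_mutex_t sleepingLock = PTHREAD_MUTEX_INITIALIZER;
    static os_unfair_lock  unfairLock   = OS_UNFAIR_LOCK_INIT;

    static void criticalSectionWithMutex(void) {
        pthread_mutex_lock(&sleepingLock);   // on contention the thread is put to sleep
        // ... critical section ...
        pthread_mutex_unlock(&sleepingLock);
    }

    static void criticalSectionWithUnfairLock(void) {
        os_unfair_lock_lock(&unfairLock);    // the low-level lock that fills the old spin-lock niche
        // ... critical section ...
        os_unfair_lock_unlock(&unfairLock);
    }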
Thought two: the implementation logic of a read-write lock
Suppose there is a shared resource that is time-consuming to write. While a write is in progress, the program must not write to it again or read from it, so that there is no contention for the resource; reads, on the other hand, can of course run in parallel with each other.
Let’s start with a piece of code that simulates a time-consuming write process and a fast read process.
    - (void)readAndWriteLock {
        // Write
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSLog(@"I start writing");
            for (int i = 0; i < 10000; i++) {}
            NSLog(@"I'm done");
        });

        // Read
        for (int i = 0; i < 10; i++) {
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                NSLog(@"I start reading %d", i);
                for (int j = 0; j < i; j++) {}
                NSLog(@"I finished reading %d", i);
            });
        }
    }
Inside the asynchronous write, a loop of 10,000 iterations simulates a time-consuming task; after that, several threads are spun up for reading, and each asynchronous read runs a simple loop of a different length to simulate work that takes far less time than the write.
Looking at the printed log, it is clear that many read operations are running while the write is still in progress. Whether a read or the write happens to start first does not really matter, since that is up to the system scheduler; what the code does need to control is that reads and writes are mutually exclusive, and that writes are mutually exclusive with each other as well. Multiple simultaneous reads need no interference at all, because parallel reading is exactly what the logic allows.
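As an aside, the same "parallel reads, exclusive writes" rule can also be expressed with GCD alone, using a private concurrent queue and dispatch_barrier_async. This is not the approach taken below, just a point of comparison; the queue label and helper names are invented for the sketch.

    #import <dispatch/dispatch.h>

    static dispatch_queue_t RWQueue(void) {
        static dispatch_queue_t queue;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            queue = dispatch_queue_create("com.example.readwrite", DISPATCH_QUEUE_CONCURRENT);
        });
        return queue;
    }

    static void ReadAsync(dispatch_block_t reader) {
        dispatch_async(RWQueue(), reader);          // reads run concurrently with each other
    }

    static void WriteAsync(dispatch_block_t writer) {
        dispatch_barrier_async(RWQueue(), writer);  // a barrier waits for earlier blocks, then runs alone
    }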
Thought three: a simple read-write lock wrapper that satisfies the read-write logic
Both the read lock/unlock pair and the write lock/unlock pair are wrapped around plain mutexes:
    #import <pthread.h>

    static pthread_mutex_t r_plock = PTHREAD_MUTEX_INITIALIZER;  // protects the reader count
    static pthread_mutex_t w_plock = PTHREAD_MUTEX_INITIALIZER;  // held for the duration of a write
    static int current_read_times = 0;                           // number of readers currently inside
Read lock
    // readLock
    - (void)readLock {
        pthread_mutex_lock(&r_plock);
        current_read_times++;
        if (current_read_times == 1) {
            pthread_mutex_lock(&w_plock);
        }
        pthread_mutex_unlock(&r_plock);
    }
The increment of current_read_times happens under r_plock, so concurrent readers update the counter safely. If current_read_times is 1, this is the first reader coming in, so it also takes the write lock; that write lock is what blocks the write process. The two locks are therefore applied to different parts of the code.
Read unlock
    - (void)readUnLock {
        pthread_mutex_lock(&r_plock);
        current_read_times--;
        if (current_read_times == 0) {
            pthread_mutex_unlock(&w_plock);
        }
        pthread_mutex_unlock(&r_plock);
    }
The decrement of current_read_times likewise happens under r_plock, and it is the state change of w_plock that keeps reads and writes mutually exclusive. When current_read_times drops to 0, the current batch of reads is complete, so the write lock is released. The == 0 check when unlocking and the == 1 check when locking form a pair, so every lock of w_plock has exactly one matching unlock condition; there is no longer any case of locking without unlocking.
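Putting the two halves together, an assumed interleaving with two readers and one writer (not a real log, just a trace of the counter and the locks) looks like this:

    // R1: readLock    -> current_read_times 0 -> 1, first reader also takes w_plock
    // R2: readLock    -> current_read_times 1 -> 2, w_plock is already held, nothing more to do
    // W : writeLock   -> blocks on w_plock while any reader is still inside
    // R1: readUnLock  -> current_read_times 2 -> 1
    // R2: readUnLock  -> current_read_times 1 -> 0, last reader releases w_plock
    // W : writeLock returns, and the write now runs with no readers or other writers present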
Write lock
    // writeLock
    - (void)writeLock {
        pthread_mutex_lock(&w_plock);
    }
This is just locking the write operation.
Write unlock
    // writeUnLock
    - (void)writeUnLock {
        pthread_mutex_unlock(&w_plock);
    }
Here only the write operation is unlocked.
The final code
    - (void)readAndWriteLock {
        // Write
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            [self writeLock];
            NSLog(@"I'm writing");
            for (int i = 0; i < 10000; i++) {}
            NSLog(@"I'm done");
            [self writeUnLock];
        });

        // Read
        for (int i = 0; i < 10; i++) {
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                [self readLock];
                NSLog(@"I start reading %d", i);
                for (int j = 0; j < i; j++) {}
                NSLog(@"I finished reading %d", i);
                [self readUnLock];
            });
        }
    }
Looking at the printed log this time, the write process appears as one complete, uninterrupted sequence, while the read operations still do not run in any fixed order relative to one another, because nothing locks readers against each other (and nothing needs to).
A simple read-write lock is done. Don’t laugh at bad code.
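For comparison, the system already ships a ready-made read-write lock, pthread_rwlock_t, which provides the same semantics out of the box; it also sidesteps the subtlety that the wrapper above may unlock w_plock on a different thread than the one that locked it, which a plain pthread mutex is not specified to support. A minimal sketch, with made-up function names:

    #import <pthread.h>

    static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

    static void readSomething(void) {
        pthread_rwlock_rdlock(&rwlock);   // shared: many readers may hold it at once
        // ... read ...
        pthread_rwlock_unlock(&rwlock);
    }

    static void writeSomething(void) {
        pthread_rwlock_wrlock(&rwlock);   // exclusive: no readers, no other writers
        // ... write ...
        pthread_rwlock_unlock(&rwlock);
    }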