Multithreading is one of the most important ways to achieve concurrent execution in software, and its role keeps growing.

Because the CPU can switch between threads quickly within its time slices, multithreading brings the following advantages:

  • Better resource utilization
  • Programming is simpler in some cases
  • Faster program response

But it is not perfect: multithreading comes with resource-contention problems. For a senior developer, concurrent programming is a must, and knowing how to guarantee thread safety is a fundamental we have to master.

Atomic operation

Under the hood, `atomic` encapsulates a spin lock (`spinlock_t`).

Spin lock: if the shared data is already locked by another thread, the waiting thread spins in a loop waiting for the lock. The moment the resource is unlocked, a thread waiting for it proceeds immediately.

Mutex: If the shared data has already been locked by another thread, the thread will go to sleep and wait for the lock. Once the accessed resource is unlocked, the thread waiting for the resource is woken up.

Here’s how the spinlock works:

bool lock = false; // Any thread can apply for the lock

do {
    while (test_and_set(&lock)); // test_and_set is an atomic operation
    // Critical section
    lock = false;                // Release the lock
    // Remainder section (code that does not need the lock)
} while (true);

There is an interesting article about atomicity worth linking here; it also discusses why atomic alone is sometimes not thread safe: OSSpinLock is no longer safe

At the bottom, an apparently single operation compiles into more than one assembly instruction, so the scheduler may interrupt it halfway through and run other code. A single atomic instruction, by contrast, cannot be interrupted mid-execution, which is what guarantees safety.

Problems with spin locks

Although atomic operations are very simple, they only suit relatively simple, specific situations. In complex cases, such as guaranteeing the atomicity of changes to a complex data structure, atomic instructions are inadequate.

If the critical section takes too long to execute, a spin lock is a bad idea. Earlier we introduced the round-robin time-slice algorithm: a thread gives up its time slice in several ways. One is exhausting the slice and being forcibly preempted by the operating system; another is actively yielding, as when the thread performs I/O or goes to sleep. A thread spinning in a while loop is busy-waiting, wasting CPU time until it is eventually preempted. If the critical section runs for a long time, say a file read or write, that wait is pure waste.

Here’s the lock we love to hate

iOS locks

You can also refer to this article to expand: iOS locks

Locking is not an optional mechanism: every thread must acquire the lock before accessing the data or resource and release it after access. If a thread tries to acquire a lock that is already held, it waits until the lock becomes available again!

Semaphore

A Binary Semaphore has only two states: occupied and unoccupied. It is suitable for resources that are exclusively accessed by a single thread. When the binary semaphore is in the unoccupied state, the first thread that attempts to acquire the binary semaphore acquires the lock and sets the binary semaphore in the occupied state. All other threads that attempt to acquire the binary semaphore wait until the lock is released

Now, building on this, if we generalize our thinking from binary to multi-valued, the counting semaphore is born, usually called simply a semaphore.

A thread acquires (waits on) the semaphore as follows:

  • Decrement the semaphore value by 1
  • If the value is now less than 0, enter the wait state; otherwise continue execution

After it has finished accessing the resource, the thread releases (signals) the semaphore:

  • Increment the semaphore value by 1
  • If the value is less than 1 (threads are still waiting), wake up one waiting thread

let sem = DispatchSemaphore(value: 1)

for index in 1...5 {
    DispatchQueue.global().async {
        sem.wait()
        print(index, Thread.current)
        sem.signal()
    }
}

// 1 <NSThread: 0x600003fa8200>{number = 3, name = (null)}
// 2 <NSThread: 0x600003f90140>{number = 4, name = (null)}
// 3 <NSThread: 0x600003f94200>{number = 5, name = (null)}
// 4 <NSThread: 0x600003fa0940>{number = 6, name = (null)}
// 5 <NSThread: 0x600003f94240>{number = 7, name = (null)}

Mutex

A mutex is similar to a binary semaphore, but with one difference: a semaphore can be acquired by one thread and released by any other thread in the system, whereas whichever thread locks a mutex must be the one to release it.

For details, see common Lock usage

Mutexes can be classified as recursive and non-recursive. A recursive lock is also called a reentrant mutex, and a non-recursive lock a non-reentrant mutex. The only difference:

  • The same thread can acquire the same recursive lock multiple times without causing a deadlock.
  • If a thread acquires the same non-recursive lock more than once, a deadlock occurs.

NSLock is the simplest mutex, but it is non-recursive! It encapsulates pthread_mutex and is very simple to use. @synchronized is the mutex we reach for most often, but it is also the slowest!

int main(int argc, const char * argv[]) {
    NSString *obj = @"Iceberg";
    @synchronized (obj) {
        NSLog(@"Hello, world! => %@", obj);
    }
}

What clang generates underneath:

int main(int argc, const char * argv[]) {
    
    NSString *obj = (NSString *)&__NSConstantStringImpl__var_folders_8l_rsj0hqpj42b9jsw81mc3xv_40000gn_T_block_main_54f70c_mi_0;
    
    {
        id _rethrow = 0;
        id _sync_obj = (id)obj;
        objc_sync_enter(_sync_obj);
        try {
                struct _SYNC_EXIT {
                    _SYNC_EXIT(id arg) : sync_exit(arg) {}
                    ~_SYNC_EXIT() {
                        objc_sync_exit(sync_exit);
                    }
                    id sync_exit;
                } _sync_exit(_sync_obj);

                NSLog((NSString *)&__NSConstantStringImpl__var_folders_8l_rsj0hqpj42b9jsw81mc3xv_40000gn_T_block_main_54f70c_mi_1 , obj);
                
            } catch (id e) {
                _rethrow = e;
            }
        
        {
            struct _FIN {
                _FIN(id reth) : rethrow(reth) {}
                ~_FIN() {
                    if (rethrow) objc_exception_throw(rethrow);
                }
                id rethrow;
            } _fin_force_rethow(_rethrow);
        }
    }
}

We find that the objc_sync_enter function is called before the try statement, taking the object to be locked. Because C++ has no try{}catch{}finally{} statement, objc_sync_exit cannot be called from a finally{} block; instead it is called from the destructor of the _SYNC_EXIT structure, again with the currently locked object. This is neat: _sync_exit, a local variable of type _SYNC_EXIT, has a lifetime bounded by the try{} block that contains the code the @synchronized{} body needs to execute. When that code completes, the _sync_exit local is popped off the stack, its destructor runs, and objc_sync_exit is called. Even if the code in the try{} block throws and control jumps to the catch{} statement, the local variable _sync_exit is still destroyed, perfectly mimicking finally.

For reasons of space, here is a very good blog to share: Low-level analysis synchronized

int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        require_action_string(data != NULL, done,
            result = OBJC_SYNC_NOT_INITIALIZED, "id2data failed");

        result = recursive_mutex_lock(&data->mutex);
        require_noerr_string(result, done, "mutex_lock failed");
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }

done:
    return result;
}

For every object you pass to @synchronized, the Objective-C runtime assigns a recursive lock and stores it in a hash table. Perfect.

In fact, if you find @synchronized too slow, you can use NSRecursiveLock, a ready-made encapsulated recursive lock:

NSRecursiveLock *lock = [[NSRecursiveLock alloc] init];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
   
    static void (^RecursiveBlock)(int);
    RecursiveBlock = ^(int value) {
        [lock lock];
        if (value > 0) {
            NSLog(@"value:%d", value);
            RecursiveBlock(value - 1);
        }
        [lock unlock];
    };
    RecursiveBlock(2);
});

2016-08-19 14:43:12.327 ThreadLockControlDemo[1878:145003] value:2
2016-08-19 14:43:12.327 ThreadLockControlDemo[1878:145003] value:1

Condition variables

As a synchronization primitive, a Condition Variable acts like a fence. A condition variable supports two operations:

  • First, a thread can wait on a condition variable, and a condition variable can be waited on by multiple threads.
  • Second, a thread can signal the condition variable, waking one or all of the waiting threads so that they continue execution.

In other words: condition variables let many threads wait together for some event to happen, and when that event occurs, the threads can all resume execution!

I’m sure you’ve seen NSCondition in lock usage, which encapsulates the condition variable pthread_cond_t and the mutex

- (void)signal {
    pthread_cond_signal(&_condition);
}

- (void)lock {
    int err = pthread_mutex_lock(&_mutex);
}

NSConditionLock is implemented with the help of NSCondition, which is essentially a producer-consumer model. “Condition met” can be understood as providing new content for the producer. NSConditionLock holds an NSCondition object and the _condition_value attribute, which is assigned when it is initialized:

- (id)initWithCondition:(NSInteger)value {
    if (nil != (self = [super init])) {
        _condition = [NSCondition new];
        _condition_value = value;
    }
    return self;
}

Critical section

A more rigorous means of synchronization than the mutex. In the terminology, acquiring a critical section's lock is called entering the critical section, and releasing it is called leaving the critical section. Differences from mutex and semaphore:

  • (1) A mutex or semaphore is visible to any process in the system.
  • (2) The scope of a critical section is limited to its own process; other processes cannot acquire the lock.
CRITICAL_SECTION g_cs; // Critical-section object
char g_cArray[10];

UINT ThreadProc10(LPVOID pParam) {
    // Enter the critical section
    EnterCriticalSection(&g_cs);
    // Write to the shared resource
    for (int i = 0; i < 10; i++) {
        g_cArray[i] = 'a';
        Sleep(1);
    }
    // Leave the critical section
    LeaveCriticalSection(&g_cs);
    return 0;
}

UINT ThreadProc11(LPVOID pParam) {
    // Enter the critical section
    EnterCriticalSection(&g_cs);
    // Write to the shared resource
    for (int i = 0; i < 10; i++) {
        g_cArray[10 - i - 1] = 'b';
        Sleep(1);
    }
    // Leave the critical section
    LeaveCriticalSection(&g_cs);
    return 0;
}
……
void CSample08View::OnCriticalSection() {
    // Initialize the critical section
    InitializeCriticalSection(&g_cs);
    // Start the threads
    AfxBeginThread(ThreadProc10, NULL);
    AfxBeginThread(ThreadProc11, NULL);
    Sleep(300);
    CString sResult = CString(g_cArray);
    AfxMessageBox(sResult);
}

Read-write lock

int pthread_rwlock_rdlock(pthread_rwlock_t *rwptr);
int pthread_rwlock_wrlock(pthread_rwlock_t *rwptr);
int pthread_rwlock_unlock(pthread_rwlock_t *rwptr);

ReadWriteLock manages a pair of locks: a read lock and a write lock. The read lock can be held by multiple threads at once as long as no thread holds the write lock; the write lock is exclusive.

#include <pthread.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER; // Define and initialize the read/write lock

// Write mode:
pthread_rwlock_wrlock(&rwlock); // Take the write lock
// ... write write write ...
pthread_rwlock_unlock(&rwlock); // Unlock

// Read mode:
pthread_rwlock_rdlock(&rwlock); // Take the read lock
// ... read read read ...
pthread_rwlock_unlock(&rwlock); // Unlock
  • Use condition variables to implement read/write locks

This is done with one condition variable + one mutex. Note: a condition variable must be used together with a mutex, and the mutex must be held both when waiting and when signaling.

#include <pthread.h> // Multithreading, mutex header file required

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; // Define and initialize the mutex
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;    // Define and initialize the condition variable

// Write mode:
pthread_mutex_lock(&mutex); // Lock
while (w != 0 || r > 0) {
    pthread_cond_wait(&cond, &mutex);
}
w = 1;
pthread_mutex_unlock(&mutex);
// ... write write write ...
pthread_mutex_lock(&mutex);
w = 0;
pthread_cond_broadcast(&cond);
pthread_mutex_unlock(&mutex); // Unlock

// Read mode:
pthread_mutex_lock(&mutex);
while (w != 0) {
    pthread_cond_wait(&cond, &mutex);
}
r++;
pthread_mutex_unlock(&mutex);
// ... read read read ...
pthread_mutex_lock(&mutex);
r--;
if (r == 0) pthread_cond_broadcast(&cond);
pthread_mutex_unlock(&mutex); // Unlock
  • Use mutex to implement read/write locks

This is done with two mutexes + one integer variable.

#include <pthread.h> // Multithreading, mutex header file required

pthread_mutex_t r_mutex = PTHREAD_MUTEX_INITIALIZER; // Define and initialize the mutexes
pthread_mutex_t w_mutex = PTHREAD_MUTEX_INITIALIZER;
int readers = 0;

// Write mode:
pthread_mutex_lock(&w_mutex);
// ... write write write ...
pthread_mutex_unlock(&w_mutex);

// Read mode:
pthread_mutex_lock(&r_mutex);
if (readers == 0)
    pthread_mutex_lock(&w_mutex);
readers++;
pthread_mutex_unlock(&r_mutex);
// ... read read read ...
pthread_mutex_lock(&r_mutex);
readers--;
if (readers == 0)
    pthread_mutex_unlock(&w_mutex);
pthread_mutex_unlock(&r_mutex);
  • Use semaphores to implement read and write locks

This is done using 2 semaphores +1 integer variable. Given an initial value of 1, the semaphore is equivalent to a mutex.

#include <semaphore.h> // Thread semaphore header file required

sem_t r_sem; // Define the semaphores
sem_init(&r_sem, 0, 1); // Initialize the semaphore
sem_t w_sem;
sem_init(&w_sem, 0, 1);
int readers = 0;

// Write mode:
sem_wait(&w_sem);
// ... write write write ...
sem_post(&w_sem);

// Read mode:
sem_wait(&r_sem);
if (readers == 0)
    sem_wait(&w_sem);
readers++;
sem_post(&r_sem);
// ... read read read ...
sem_wait(&r_sem);
readers--;
if (readers == 0)
    sem_post(&w_sem);
sem_post(&r_sem);

Thread safety is something every area of multithreaded development must get right! Only with a grasp of the underlying mechanisms can you perform well in real development. Much of today's iOS work, like other development, is surface-level application work; the road to becoming a true expert takes continued effort. May this post help us learn together!