In the previous article we introduced thread-safety hazards and explained these synchronization tools:

  • Atomic operations;
  • Memory barriers and volatile variables;
  • Locks.

This article will highlight the various types of locks and the differences and relationships between them.

The lock and other synchronization schemes introduced in this article are shown below:

As in the previous article, the demonstrations of each synchronization scheme reuse the earlier examples of depositing/withdrawing money and selling tickets; the sample code is linked at the end as "Thread synchronization scheme".

First, pthread

Earlier, we learned that pthreads are a set of general-purpose thread APIs written in C.

Using pthreads requires careful destruction of resources to prevent memory leaks.

1.1 Mutex

Mutex: the thread that acquires the lock gains access to the resource, while other threads must wait until the lock is released before they can continue. Threads compete for the lock in an "I leave, you come in" fashion, which prevents read/write contention on the resource.
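To make this concrete, here is a minimal pthread_mutex sketch in plain C (the deposit scenario and the helper name `run_deposits` are illustrative): two threads each increment a shared balance 100,000 times; without the lock the final value would usually be wrong.

```c
#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static long balance = 0;

// Each thread deposits 100000 times; the mutex makes each += atomic.
static void *deposit(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&mutex);   // block here if another thread holds the lock
        balance += 1;                 // critical section
        pthread_mutex_unlock(&mutex); // let a waiting thread proceed
    }
    return NULL;
}

// Run two depositors concurrently and return the final balance.
static long run_deposits(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return balance; // 200000 with the lock; typically less without it
}
```

With the lock removed, the two unsynchronized `+=` operations interleave and updates are lost, which is exactly the hazard the mutex exists to prevent.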

1.2 Recursive lock

pthread_mutex lets you specify the mutex type; setting it to recursive yields a recursive lock.

A recursive lock is a variant of the mutex: it is still held by only one thread at a time, but that thread may lock it again without first releasing it.

Recursive locks are often used in recursive calls, like this:

- (void)otherTest
{
    pthread_mutex_lock(&_mutex);    
    static int count = 0;
    if (count < 10) {
        count++;
        [self otherTest];
    }
    
    pthread_mutex_unlock(&_mutex);
}

Or when locked code calls another method that also takes the same lock:

- (void)otherTest
{
    pthread_mutex_lock(&_mutex);    
    [self otherTest2];
    pthread_mutex_unlock(&_mutex);
}

- (void)otherTest2
{
    pthread_mutex_lock(&_mutex);
    NSLog(@"%s", __func__);
    pthread_mutex_unlock(&_mutex);
}

1.3 Condition lock

Condition locks are another variant of the mutex. A condition lock expresses cooperation ("I'm ready, you can start") and is generally used for thread synchronization, where threads jointly complete a single task. The classic example of this cooperation is the producer-consumer pattern. In practice, the developer specifies a "condition" in the code: thread A waits on the condition before proceeding, while thread B, after accessing the critical resource, releases the lock and signals the condition to wake thread A.

Here is a "producer-consumer" example:

- (void)otherTest
{
    [[[NSThread alloc] initWithTarget:self selector:@selector(__remove) object:nil] start];
    [[[NSThread alloc] initWithTarget:self selector:@selector(__add) object:nil] start];
}

// Producer-consumer pattern
// Thread 1: removes an element from the array
- (void)__remove
{
    pthread_mutex_lock(&_mutex);
    while (self.data.count == 0) {
        // Wait for the producer; `while` guards against spurious wakeups
        pthread_cond_wait(&_cond, &_mutex);
    }
    [self.data removeLastObject];
    pthread_mutex_unlock(&_mutex);
}

// Thread 2: adds an element to the array
- (void)__add
{
    pthread_mutex_lock(&_mutex);
    sleep(1);
    [self.data addObject:@"Test"];
    pthread_cond_signal(&_cond);
    pthread_mutex_unlock(&_mutex);
}

Second, GCD

After the pthread solutions, GCD also provides some options for us.

2.1 dispatch_queue

Serial queues, by their nature, provide a thread-synchronization solution.

Because all tasks in a serial queue execute sequentially, one after another, concurrent access is avoided at the source, so no thread-synchronization problem arises.

2.2 dispatch_semaphore

dispatch_semaphore is GCD's semaphore; before using it, let's first review what a semaphore is.

A semaphore is a special variable that the program accesses only through atomic operations, and it permits just two of them: wait (P) and signal (V). The simplest semaphore is a variable that can take only the values 0 and 1; this most common form is called a binary semaphore. A semaphore that can take larger non-negative integers is called a general (counting) semaphore. The discussion below covers how semaphores work.

2.2.1 Semaphore working principle

The semaphore has only two operations, wait and signal, i.e. P(s) and V(s), where s is the number of available resources:

  • P(s): if s > 0, set s = s - 1 and continue; if s = 0, suspend the process.
  • V(s): if some process is suspended waiting on s, resume it; otherwise set s = s + 1.
In pseudocode (the counting variant, where a negative count records how many processes are waiting):

struct semaphore {
    int count;
    queueType queue;
};

// P operation
procedure P(var s: semaphore) {
    s.count--;
    if (s.count < 0) {
        // insert the calling process at the end of s.queue,
        // then reschedule
        asleep(s.queue);
    }
    // otherwise continue execution
}

// V operation
procedure V(var s: semaphore) {
    s.count++;
    if (s.count <= 0) {
        // wake the process at the head of s.queue: set it to
        // the ready state and insert it into the ready queue
        wakeup(s.queue);
    }
}
  • When s > 0, a critical resource is available. Why is no process woken up?

    s > 0 means a critical resource is currently available and no process is blocked on it, so there is nothing to wake.

  • When s < 0, the critical resource is unavailable. Why does V wake up a process?

    The essence of the V operation is that when a process finishes with a critical resource, it releases the resource and increments s to notify other processes. If s < 0 at that point, some process is blocked on the resource, so V wakes one process from the blocking queue to "hand over" the resource.

    For example, suppose s starts at 2. When A enters, s = 1; when B enters, s = 0; when C enters, s = -1, meaning one process is blocked; when D enters, s = -2. When A finishes with the resource, it performs V, making s = -1 and releasing the resource; since s < 0, some process is blocked on it, so one is woken up.

  • The absolute value of s represents the number of waiting processes, yet s also represents the number of critical resources. How can both be true?

    When s < 0, its absolute value is the number of processes blocked waiting for this type of resource.

    When s > 0, it is the number of available critical resources. Note that s means different things in different ranges.

    When s = 0, the resources have just been exhausted.
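The P/V pseudocode above can be implemented portably with a pthread mutex and condition variable. A minimal sketch in C (the `counting_sem` type and `csem_*` names are our own, not a real API; unlike the pseudocode, the count never goes negative here, because the condition variable plays the role of the wait queue):

```c
#include <pthread.h>

typedef struct {
    int count;              // number of available resources (s)
    pthread_mutex_t mutex;  // protects count
    pthread_cond_t cond;    // stands in for the wait queue
} counting_sem;

void csem_init(counting_sem *s, int count) {
    s->count = count;
    pthread_mutex_init(&s->mutex, NULL);
    pthread_cond_init(&s->cond, NULL);
}

// P(s): take a resource, suspending the caller while none is available
void csem_P(counting_sem *s) {
    pthread_mutex_lock(&s->mutex);
    while (s->count == 0)
        pthread_cond_wait(&s->cond, &s->mutex);  // "asleep(s.queue)"
    s->count--;
    pthread_mutex_unlock(&s->mutex);
}

// V(s): release a resource and wake one waiter, if any
void csem_V(counting_sem *s) {
    pthread_mutex_lock(&s->mutex);
    s->count++;
    pthread_cond_signal(&s->cond);               // "wakeup(s.queue)"
    pthread_mutex_unlock(&s->mutex);
}
```

dispatch_semaphore_wait and dispatch_semaphore_signal follow the same P/V pattern, with the kernel managing the wait queue.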

2.2.2 API

2.3 dispatch_barrier_

Another option is GCD's barrier functions, dispatch_barrier_async and dispatch_barrier_sync, which are discussed later under read/write security.

Third, Cocoa

In macOS and iOS, pthreads are wrapped in object-oriented APIs.

3.1 OSSpinLock

OSSpinLock, a spin lock, performs very well but suffers from priority-inversion problems. Starting in iOS 10, Apple introduced os_unfair_lock as its replacement.

3.2 os_unfair_lock

os_unfair_lock replaces OSSpinLock; how does it address priority inversion?

os_unfair_lock stores information about the owning thread, which the system uses to try to resolve priority inversion.

Note that os_unfair_lock must be unlocked on the same thread that locked it; otherwise the process may abort.

3.3 NSLock, NSRecursiveLock

3.4 NSCondition

3.5 NSConditionLock

3.6 @synchronized

Fourth, read/write security

4.1 Read/Write Security Scenario

Placeholder: fill in the figure above. Refer to the diagram in the interview section.

4.2 pthread_rwlock

4.3 dispatch_barrier_async

Fifth, classification and comparison of locks

5.1 Classification

Here are excerpts from the previous article:

  • Mutex — example: pthread_mutex
    1. A mutex, acting as a protective barrier around a resource, is a kind of semaphore that grants access to only one thread at a time.
    2. If a mutex is in use and another thread tries to acquire it, that thread blocks until the mutex is released by its original owner.

  • Recursive lock — example: NSRecursiveLock
    1. A recursive lock is a mutex variant that allows a single thread to acquire the lock multiple times before releasing it. Other threads remain blocked until the owner releases the lock as many times as it acquired it.
    2. Recursive locks are used primarily during recursive iteration, but may also be used when multiple methods each need to acquire the lock.

  • Read-write lock — example: pthread_rwlock_t
    1. Also called a shared-exclusive lock. Typically used for larger-scale operations, it can significantly improve performance when the protected data structure is read frequently and modified only occasionally.
    2. In normal operation, multiple threads can access the structure simultaneously. When a thread wants to write, it blocks until all readers release the lock, then acquires it exclusively and updates the structure. While the writer waits, new reader threads block until the writer completes.

  • Distributed lock
    Provides mutually exclusive access at the process level. Unlike a true mutex, a distributed lock does not block a process or prevent it from running; it simply reports when the lock is busy and lets the process decide how to proceed.

  • Spin lock — example: OSSpinLock
    1. A spin lock repeatedly polls its lock condition until the condition holds.
    2. Spin locks are most often used on multiprocessor systems where the expected wait is short. In those cases polling is usually more efficient than blocking the thread, which involves a context switch and updates to thread data structures.

5.2 Comparison

5.2.1 Spin locks vs. mutexes

Of the lock categories above, spin locks and mutexes deserve a closer comparison:

  • Mutex: when a global resource is accessed by multiple threads under contention, a mutex is needed to resolve the contention and guarantee atomic access to the resource. Merely having multiple threads access the same global resource does not by itself require a mutex; contention does.

If a thread cannot acquire the mutex, it is suspended rather than wasting CPU time; when another thread releases the mutex, the operating system wakes the suspended thread.

  • Spin lock: also protects a resource shared by multiple threads. It differs from a mutex in that it tries to acquire the lock with a busy-wait loop that continuously checks whether the lock is available (the busy-wait is visible in the generated assembly).

In a multi-CPU environment, using a spin lock instead of an ordinary mutex can often improve performance for code that holds the lock only briefly.
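To make the busy-wait concrete, here is a toy spin lock built on C11 atomic_flag (a sketch of the idea, not OSSpinLock's actual implementation; helper names are ours):

```c
#include <stdatomic.h>
#include <pthread.h>

static atomic_flag flag = ATOMIC_FLAG_INIT;
static long counter = 0;

// Busy-wait until the flag is clear, then set it: the "spin".
static void spin_lock(void) {
    while (atomic_flag_test_and_set_explicit(&flag, memory_order_acquire))
        ;  // burning CPU here, so the lock must be held only briefly
}

static void spin_unlock(void) {
    atomic_flag_clear_explicit(&flag, memory_order_release);
}

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();
        counter++;          // very short critical section: a good fit for spinning
        spin_unlock();
    }
    return NULL;
}

static long run_spin(void) {
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

Note there is no suspension anywhere: a waiting thread simply loops, which is exactly why the holder must release quickly and must never sleep while holding the lock.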

Note when using spin locks:

  • Since spinning does not release the CPU, the thread holding the spin lock should release the spin lock as soon as possible, otherwise the thread waiting for the spin lock will keep spinning somewhere, wasting CPU time.
  • A thread holding a spin lock should release the spin lock before sleep so that other threads can acquire the spin lock. In kernel programming, if the code holding the spin lock sleeps, the entire system may hang.

The arrangement is as follows:

5.2.2 Performance Comparison

We benchmarked the different synchronization schemes (sample code: "Performance comparison of thread schemes"); the results are shown in the figure below:

In the tests, three distinct performance echelons emerge:

  • First echelon: OSSpinLock, its replacement os_unfair_lock, and dispatch_semaphore
  • Second echelon: pthread_mutex and its wrappers NSLock, NSRecursiveLock, and NSCondition
  • Third echelon: serial queues, @synchronized, and NSConditionLock

Repeated runs show some fluctuation within an echelon, but the echelons themselves stay distinct.

Having seen the performance differences among so many synchronization schemes, here are some tips on choosing among them.

5.3 Usage

  • Multicore processors: Spin locks are a better choice if the expected time for a thread to wait for a lock is short, less than the time it takes the thread to switch context twice. If the time is long, mutex is a good choice.
  • Single-core processors: spin locks are generally not recommended. Only one thread can run at a time. If thread A holds the lock when its time slice expires and thread B then tries to acquire it, B can only spin while waiting for the unlock; because B never suspends, A cannot run again until B's time slice is exhausted and the OS reschedules A. This makes spin locks expensive on a single core.
  • If the locking code is called frequently, but contention is rare, the use of spin locks should be preferred because of the high context switching overhead of mutex.

Recommended options: semaphores, the pthread family of locks, or their object-oriented NSLock-family wrappers.

reference

link

  1. Threading Programming Guide

The sample code

  1. Thread synchronization scheme
  2. Performance comparison of thread schemes
  3. Read and write security