preface

When most Objective-C programmers declare properties, they don’t pay much attention to the atomicity modifier. Instead, they habitually write nonatomic, overriding the default state, which is atomic.

@interface PropertyClass : NSObject

@property (atomic, strong)    NSObject *atomicObj;    // atomic is the default
@property (nonatomic, strong) NSObject *nonatomicObj;

@end

Introductory tutorials generally recommend nonatomic: beginners do most of their work on the main thread, never rely on the thread-safety guarantee, and atomic accessors would add a large amount of execution overhead for nothing.

So how does atomic achieve thread safety? What technique does it use?


The principle

Implementation of properties

First let’s look at what a property contains. Reviewing the runtime source, its structure is as follows:

struct property_t {
    const char *name;       // name
    const char *attributes; // attribute string
};

A property has a simple structure: a fixed name and an attribute string. property_getName and property_getAttributes can be used to read them.

In this example, the attribute string of atomicObj is T@"NSObject",&,V_atomicObj: T@"NSObject" encodes the type, & stands for strong, and V_atomicObj names the backing instance variable. atomic is the default and has no marker; an N would appear for nonatomic.

So how exactly does an atomic access work? By stepping into the runtime source, we can inspect the stack of functions called.

As you can see, the property’s attributes are resolved at compile time, and the generated setter calls the atomic variant of objc_setProperty directly. The runtime does not parse the attribute string dynamically; this is done for execution performance.

static inline void reallySetProperty(id self, SEL _cmd,
    id newValue, ptrdiff_t offset, bool atomic, bool copy, bool mutableCopy) {
    // An offset of 0 means the isa pointer is being changed
    if (offset == 0) {
        object_setClass(self, newValue);
        return;
    }

    id oldValue;
    id *slot = (id*) ((char*)self + offset); // Locate the backing ivar
    // Copy according to the declared attributes
    if (copy) {
        newValue = [newValue copyWithZone:nil];
    } else if (mutableCopy) {
        newValue = [newValue mutableCopyWithZone:nil];
    } else {
        if (*slot == newValue) return;
        newValue = objc_retain(newValue);
    }
    // Check atomicity
    if (!atomic) {
        // Non-atomic: assign directly
        oldValue = *slot;
        *slot = newValue;
    } else {
        // Atomic operations use spin locks
        spinlock_t& slotlock = PropertyLocks[slot];
        slotlock.lock();
        oldValue = *slot;
        *slot = newValue;        
        slotlock.unlock();
    }

    objc_release(oldValue);
}

id objc_getProperty(id self, SEL _cmd, ptrdiff_t offset, BOOL atomic) {
    // An offset of 0 means the isa pointer
    if (offset == 0) {
        return object_getClass(self);
    }

    // Non-atomic: return directly
    id *slot = (id*) ((char*)self + offset);
    if (!atomic) return *slot;
        
    // Atomic: guarded by a spin lock
    spinlock_t& slotlock = PropertyLocks[slot];
    slotlock.lock();
    id value = objc_retain(*slot);
    slotlock.unlock();
    
    // For performance reasons, autorelease out of lock
    return objc_autoreleaseReturnValue(value);
}

What is a spin lock?

Locks are used to solve the problem of threads competing for a resource. They generally fall into two types: spin locks and mutexes.

A mutex works like this: when a thread tries to acquire the lock and finds it occupied, it goes to sleep immediately and asks the system to wake it when the lock becomes free.

A spin lock is simpler: when a thread finds the lock occupied, it polls the lock’s status in a loop until it acquires it.

An atomic property access has the smallest possible granularity, limited to a single read or write, so it is performance-sensitive. A mutex would spend significant resources switching threads; since a read or write is short and completes well within a time slice, spinning is the better fit for this scenario.

Spin lock pit

However, after iOS 10, Apple deprecated OSSpinLock in favor of the new os_unfair_lock because of a serious flaw.

In the new iOS, the system maintains five different thread priorities/QoS classes: background, utility, default, user-initiated, and user-interactive. High-priority threads always execute before low-priority threads, and a thread is never preempted by threads of lower priority than itself. This thread scheduling algorithm breaks spin locks by creating potential priority-inversion problems.

The description is quoted from the writings of ibireme.

My understanding: if a low-priority thread holds the lock and a high-priority thread then tries to acquire it, the high-priority thread busy-waits. Because it spins in a loop, it monopolizes the scheduler, so the low-priority thread never gets to run and never releases the lock; the result is effectively a deadlock.

So why does the atomic implementation still use spinlock_t?

using spinlock_t = mutex_tt<LOCKDEBUG>;
using mutex_t = mutex_tt<LOCKDEBUG>;

class mutex_tt : nocopy_t {
    os_unfair_lock mLock; // A mutex that handles priority

    void lock() {
        lockdebug_mutex_lock(this);
        os_unfair_lock_lock_with_options_inline
            (&mLock, OS_UNFAIR_LOCK_DATA_SYNCHRONIZATION);
    }

    void unlock() {
        lockdebug_mutex_unlock(this);
        os_unfair_lock_unlock_inline(&mLock);
    }
};

Almost got duped by Apple! The spin lock has quietly been reimplemented as a mutex; only the name stayed the same.

To fix the priority-inversion problem, Apple abandoned spin locks in favor of os_unfair_lock, which is optimized for performance.


Question and answer

Atomic implementation mechanism

With the atomic modifier, the compiler generates atomic accessors by default, protecting each read and write with a mutex.

Why can’t we guarantee absolute thread safety?

Individual atomic operations are absolutely thread-safe, but combined operations are not.

- (void)competition {
    self.intSource = 0;

    dispatch_async(queue1, ^{
        for (int i = 0; i < 10000; i++) {
            self.intSource = self.intSource + 1;
        }
    });
    dispatch_async(queue2, ^{
        for (int i = 0; i < 10000; i++) {
            self.intSource = self.intSource + 1;
        }
    });
}

The final value will be less than 20,000. Each individual read and write is atomic and thread-safe, but the increment as a whole is not: for example, both threads can read the current value 0, each compute 1, and both write 1 back, losing an update.

The solution is to widen the granularity: combine the read and the write into a single critical section, so a thread can never write back a stale value.

static os_unfair_lock unfairLock = OS_UNFAIR_LOCK_INIT;

- (void)competition {
    self.intSource = 0;

    dispatch_async(queue1, ^{
        for (int i = 0; i < 10000; i++) {
            os_unfair_lock_lock(&unfairLock);
            self.intSource = self.intSource + 1;
            os_unfair_lock_unlock(&unfairLock);
        }
    });
    dispatch_async(queue2, ^{
        for (int i = 0; i < 10000; i++) {
            os_unfair_lock_lock(&unfairLock);
            self.intSource = self.intSource + 1;
            os_unfair_lock_unlock(&unfairLock);
        }
    });
}

conclusion

Studying the atomicity of properties deepened my understanding of the locks in the system: spin locks, mutexes, read-write locks, and so on.

I thought the implementation was a spin lock, but a second look showed it is a mutex. That was a small lesson: when reading source code, get to the bottom of things; staying on the surface may not give you the truth.

reference

Runtime library that can be compiled

OSSpinLock is no longer safe