1. Memory layout

As shown in the figure above, the memory layout is divided into the following areas:

  • The kernel area: reserved for the system, about 1 GB
  • The stack: function frames, method calls, local variables, and so on
  • The heap: objects created with alloc, copied blocks, and so on
  • bss: uninitialized global and static variables
  • data: initialized global and static variables
  • text: program code
  • The reserved area: controlled by the system

(0xC0000000 = 3221225472 = 3 GB), so the areas from the stack down to the reserved area occupy 3 GB of the address space. The stack grows from high addresses toward low addresses, while the heap grows from low addresses toward high addresses. The bss and data areas together are often called the global (static) area. Typically, stack addresses start with 0x7, heap addresses start with 0x6, and bss/data segment addresses start with 0x1.

2. Memory management schemes

iOS provides three memory management schemes: TaggedPointer, NONPOINTER_ISA, and hash tables (SideTables).

1. TaggedPointer:

  • Used for small objects: NSNumber, NSDate, etc.
  • The pointer is no longer a plain address: it directly encodes the value together with its type. It is not a real object, it is not stored on the heap, and no malloc/free is needed.
  • Reads are about 3x faster and creation is about 106x faster.

### Bit operation refresher

  • (1) XOR (^): a ^ b ^ b = a
  1010 1101  a
^ 0000 1100  b 
  ---------
  1010 0001
^ 0000 1100  b
  ---------
  1010 1101  a
  • (2) NOT (~)
~ 100001
---------
  011110
  • (3) Left shift (<<) Right shift (>>) operation
10000111 << 3 = 10000111000
10000111 >> 3 = 10000
  • (4) AND (&) and OR (|): (a | b) & b = b
  1000 1100  a
| 1010 1010  b
------------
  1010 1110
& 1010 1010 b
-------------
  1010 1010 b

Source code analysis

  • TaggedPointer generation:
#if __has_feature(objc_fixed_enum) || __cplusplus >= 201103L
enum objc_tag_index_t : uint16_t
#else
typedef uint16_t objc_tag_index_t;
enum
#endif
{
    // 60-bit payloads
    OBJC_TAG_NSAtom            = 0, 
    OBJC_TAG_1                 = 1, 
    OBJC_TAG_NSString          = 2, 
    OBJC_TAG_NSNumber          = 3, 
    OBJC_TAG_NSIndexPath       = 4, 
    OBJC_TAG_NSManagedObjectID = 5, 
    OBJC_TAG_NSDate            = 6,

    // 60-bit reserved
    OBJC_TAG_RESERVED_7        = 7, 

    // 52-bit payloads
    OBJC_TAG_Photos_1          = 8,
    OBJC_TAG_Photos_2          = 9,
    OBJC_TAG_Photos_3          = 10,
    OBJC_TAG_Photos_4          = 11,
    OBJC_TAG_XPC_1             = 12,
    OBJC_TAG_XPC_2             = 13,
    OBJC_TAG_XPC_3             = 14,
    OBJC_TAG_XPC_4             = 15,

    OBJC_TAG_First60BitPayload = 0, 
    OBJC_TAG_Last60BitPayload  = 6, 
    OBJC_TAG_First52BitPayload = 8, 
    OBJC_TAG_Last52BitPayload  = 263, 

    OBJC_TAG_RESERVED_264      = 264
};
#if __has_feature(objc_fixed_enum)  &&  !defined(__cplusplus)
typedef enum objc_tag_index_t objc_tag_index_t;
#endif

static inline void * _Nonnull
_objc_makeTaggedPointer(objc_tag_index_t tag, uintptr_t value)
{
    // PAYLOAD_LSHIFT and PAYLOAD_RSHIFT are the payload extraction shifts.
    // They are reversed here for payload insertion.

    // assert(_objc_taggedPointersEnabled());
    if (tag <= OBJC_TAG_Last60BitPayload) {
        // assert(((value << _OBJC_TAG_PAYLOAD_RSHIFT) >> _OBJC_TAG_PAYLOAD_LSHIFT) == value);
        uintptr_t result =
            (_OBJC_TAG_MASK |
             ((uintptr_t)tag << _OBJC_TAG_INDEX_SHIFT) |
             ((value << _OBJC_TAG_PAYLOAD_RSHIFT) >> _OBJC_TAG_PAYLOAD_LSHIFT));
        return _objc_encodeTaggedPointer(result);
    } else {
        // assert(tag >= OBJC_TAG_First52BitPayload);
        // assert(tag <= OBJC_TAG_Last52BitPayload);
        // assert(((value << _OBJC_TAG_EXT_PAYLOAD_RSHIFT) >> _OBJC_TAG_EXT_PAYLOAD_LSHIFT) == value);
        uintptr_t result =
            (_OBJC_TAG_EXT_MASK |
             ((uintptr_t)(tag - OBJC_TAG_First52BitPayload) << _OBJC_TAG_EXT_INDEX_SHIFT) |
             ((value << _OBJC_TAG_EXT_PAYLOAD_RSHIFT) >> _OBJC_TAG_EXT_PAYLOAD_LSHIFT));
        return _objc_encodeTaggedPointer(result);
    }
}

tag << _OBJC_TAG_INDEX_SHIFT stores the tag type; (value << _OBJC_TAG_PAYLOAD_RSHIFT) >> _OBJC_TAG_PAYLOAD_LSHIFT stores the value; _OBJC_TAG_MASK quickly flags the pointer as a TaggedPointer. Finally _objc_encodeTaggedPointer is called to obfuscate the result, which is why printing the address directly doesn't reveal that it is a special address.

  • Encoding and decoding: _objc_encodeTaggedPointer and _objc_decodeTaggedPointer rely on the a ^ b ^ b = a identity.
static inline void * _Nonnull
_objc_encodeTaggedPointer(uintptr_t ptr)
{
    return (void *)(objc_debug_taggedpointer_obfuscator ^ ptr);
}

static inline uintptr_t
_objc_decodeTaggedPointer(const void * _Nullable ptr)
{
    return (uintptr_t)ptr ^ objc_debug_taggedpointer_obfuscator;
}
static void
initializeTaggedPointerObfuscator(void)
{
    if (sdkIsOlderThan(10_14, 12_0, 12_0, 5_0, 3_0) ||
        // Set the obfuscator to zero for apps linked against older SDKs,
        // in case they're relying on the tagged pointer representation.
        DisableTaggedPointerObfuscation) {
        objc_debug_taggedpointer_obfuscator = 0;
    } else {
        // Pull random data into the variable, then shift away all non-payload bits.
        arc4random_buf(&objc_debug_taggedpointer_obfuscator,
                       sizeof(objc_debug_taggedpointer_obfuscator));
        objc_debug_taggedpointer_obfuscator &= ~_OBJC_TAG_MASK;
    }
}

When sdkIsOlderThan(10_14, 12_0, 12_0, 5_0, 3_0) is true, objc_debug_taggedpointer_obfuscator is 0 and the special address can be read directly. On later versions the raw address alone is not readable; you must decode it manually with _objc_decodeTaggedPointer.

  • Determining whether a pointer is a TaggedPointer
static inline bool 
_objc_isTaggedPointer(const void * _Nullable ptr) 
{
    return ((uintptr_t)ptr & _OBJC_TAG_MASK) == _OBJC_TAG_MASK;
}

Using bit-operation identity (4), (a | b) & b = b, we can quickly determine whether a pointer is a TaggedPointer.

  • TaggedPointerThe values
static inline uintptr_t
_objc_getTaggedPointerValue(const void * _Nullable ptr)
{
    // assert(_objc_isTaggedPointer(ptr));
    uintptr_t value = _objc_decodeTaggedPointer(ptr);
    uintptr_t basicTag = (value >> _OBJC_TAG_INDEX_SHIFT) & _OBJC_TAG_INDEX_MASK;
    if (basicTag == _OBJC_TAG_INDEX_MASK) {
        return (value << _OBJC_TAG_EXT_PAYLOAD_LSHIFT) >> _OBJC_TAG_EXT_PAYLOAD_RSHIFT;
    } else {
        return (value << _OBJC_TAG_PAYLOAD_LSHIFT) >> _OBJC_TAG_PAYLOAD_RSHIFT;
    }
}

The pointer is first decoded with _objc_decodeTaggedPointer, then the value is extracted by reversing the TaggedPointer generation algorithm.

Practice

   extern uintptr_t objc_debug_taggedpointer_obfuscator;

    int a = 10;
    NSString *t = [NSString stringWithFormat:@"jensen"];
    NSNumber *aNum = @(a);
    NSLog(@"%s %p %@ 0x%lx", object_getClassName(aNum), aNum, aNum, _objc_encodeTaggedPointer(aNum));
    NSLog(@"%s %p %@ 0x%lx",object_getClassName(t),t,t,_objc_encodeTaggedPointer(t));

uintptr_t _objc_encodeTaggedPointer(uintptr_t ptr)
{
    return (objc_debug_taggedpointer_obfuscator ^ ptr);
}

Print result:

__NSCFNumber 0xa39a2c1af54f3585 10 0xb0000000000000a3
 NSTaggedPointerString 0xb39cca4dc3a96380 jensen 0xa006e65736e656a6

Conclusion

TaggedPointer generates a value by applying a series of bit operations to the value and its type. From this encoded value the type can be determined and the payload retrieved quickly. Small values (NSNumber, NSDate, etc.) don't need a full 64-bit heap object, which greatly saves memory and improves creation and access efficiency.

Interview questions


- (void)taggedPointer_1 {
    dispatch_queue_t queue = dispatch_queue_create("jensen", DISPATCH_QUEUE_CONCURRENT);
    for (int i = 0; i<10000; i++) {
        dispatch_async(queue, ^{
            self.nameStr = [NSString stringWithFormat:@"jensen"];
            NSLog(@"%@", self.nameStr);
        });
    }
}

- (void)taggedPointer_2 {
    dispatch_queue_t queue = dispatch_queue_create("jensen2", DISPATCH_QUEUE_CONCURRENT);
    for (int i = 0; i<10000; i++) {
        dispatch_async(queue, ^{
            self.nameStr = [NSString stringWithFormat:@"Let's do it together."];
            NSLog(@"%@", self.nameStr);
        });
    }
}

Test result: taggedPointer_1 runs normally, but taggedPointer_2 crashes.

From the crash information we know it was caused by over-release. self.nameStr = [NSString stringWithFormat:@"Let's do it together."] calls the property's setter method.

static inline void reallySetProperty(id self, SEL _cmd, id newValue, ptrdiff_t offset, bool atomic, bool copy, bool mutableCopy)
{
    if (offset == 0) {
        object_setClass(self, newValue);
        return;
    }

    id oldValue;
    id *slot = (id*) ((char*)self + offset);

    if (copy) {
        newValue = [newValue copyWithZone:nil];
    } else if (mutableCopy) {
        newValue = [newValue mutableCopyWithZone:nil];
    } else {
        if (*slot == newValue) return;
        newValue = objc_retain(newValue);
    }

    if (!atomic) {
        oldValue = *slot;
        *slot = newValue;
    } else {
        spinlock_t& slotlock = PropertyLocks[slot];
        slotlock.lock();
        oldValue = *slot;
        *slot = newValue;        
        slotlock.unlock();
    }

    objc_release(oldValue);
}

From the code above we can see that assigning to an object property (the setter) actually retains/copies the new value and releases the old value. This is not thread-safe: with multiple threads constantly retaining/releasing, the same old value can be released more than once, causing the object to be over-released.

__attribute__((aligned(16), flatten, noinline))
id 
objc_retain(id obj) {
    if (!obj) return obj;
    if (obj->isTaggedPointer()) return obj;
    return obj->retain();
}

__attribute__((aligned(16), flatten, noinline))
void 
objc_release(id obj) {
    if (!obj) return;
    if (obj->isTaggedPointer()) return;
    return obj->release();
}

If the object is a TaggedPointer, retain/release return immediately and never touch a real reference count. The string assigned in taggedPointer_1 ("jensen") is a TaggedPointer, so the concurrent setters are harmless. The string assigned in taggedPointer_2 (Chinese in the original example) contains characters that cannot be stored in a TaggedPointer, or is too long for one, so a real heap object is created and the concurrent retain/release over-releases it.

2. NONPOINTER_ISA: non-pointer isa

What is a NONPOINTER_ISA?

We know that in Objective-C, everything is an objc_object.

struct objc_object {
    Class _Nonnull isa  OBJC_ISA_AVAILABILITY;
};


Until now I had thought of isa as just a pointer: an instance object's isa points to its class, and a class object's isa points to its metaclass. But isa can contain other information besides the pointer, such as the object's reference count, whether it has a C++ destructor, whether it is weakly referenced, etc. Such an isa is called a NONPOINTER_ISA. isa is a union of type isa_t, which packs a lot of object information internally using bit fields.

union isa_t 
{
    isa_t() { }
    isa_t(uintptr_t value) : bits(value) { }

    Class cls;
    uintptr_t bits;

# if __arm64__
# define ISA_MASK 0x0000000ffffffff8ULL
# define ISA_MAGIC_MASK 0x000003f000000001ULL
# define ISA_MAGIC_VALUE 0x000001a000000001ULL
    struct {
        uintptr_t nonpointer        : 1;
        uintptr_t has_assoc         : 1;
        uintptr_t has_cxx_dtor      : 1;
        uintptr_t shiftcls          : 33; // MACH_VM_MAX_ADDRESS 0x1000000000
        uintptr_t magic             : 6;
        uintptr_t weakly_referenced : 1;
        uintptr_t deallocating      : 1;
        uintptr_t has_sidetable_rc  : 1;
        uintptr_t extra_rc          : 19;
# define RC_ONE (1ULL<<45)
# define RC_HALF (1ULL<<18)
    };
}
  • nonpointer: 0 means a plain isa pointer; 1 means isa contains not only the class object's address but also class information and the object's reference count
  • has_assoc: associated-object flag; 0 means none, 1 means the object has associated objects
  • has_cxx_dtor: whether the object has a C++ or ObjC destructor. If it does, the destruction logic must run; if not, the object can be freed faster
  • shiftcls: stores the class pointer's value. With pointer optimization enabled, 33 bits store the class pointer on arm64.
  • magic: used by the debugger to distinguish a real initialized object from uninitialized memory
  • weakly_referenced: whether the object is or has been pointed to by an ARC weak variable. Objects without weak references can be freed faster.
  • deallocating: whether the object is currently being freed
  • has_sidetable_rc: set when the reference count no longer fits in extra_rc; the overflow is stored in a side table
  • extra_rc: the object's reference count minus 1. For example, if the reference count is 10, extra_rc is 9. When the count overflows extra_rc, has_sidetable_rc above comes into play.

Note: when an object overrides retain, release, or allocWithZone (RR/AWZ), its isa is no longer a NONPOINTER_ISA.

3. Hash tables: reference count table and weak reference table

SideTables is a hash map maintained by the runtime whose entries are SideTable structures. Each SideTable mainly records object reference counts and stores weak references.

SideTables

SideTables data structure:

template<typename T>
class StripedMap {
#if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
    enum { StripeCount = 8 };
#else
    enum { StripeCount = 64 };
#endif

    struct PaddedT {
        T value alignas(CacheLineSize);
    };

    PaddedT array[StripeCount];

    static unsigned int indexForPointer(const void *p) {
        uintptr_t addr = reinterpret_cast<uintptr_t>(p);
        return ((addr >> 4) ^ (addr >> 9)) % StripeCount;
    }

 public:
    T& operator[] (const void *p) { 
        return array[indexForPointer(p)].value; 
    }
    const T& operator[] (const void *p) const { 
        return const_cast<StripedMap<T>&>(*this)[p]; 
    }

    // Shortcuts for StripedMaps of locks.
    void lockAll() {
        for (unsigned int i = 0; i < StripeCount; i++) {
            array[i].value.lock();
        }
    }

    void unlockAll() {
        for (unsigned int i = 0; i < StripeCount; i++) {
            array[i].value.unlock();
        }
    }

    void forceResetAll() {
        for (unsigned int i = 0; i < StripeCount; i++) {
            array[i].value.forceReset();
        }
    }

    void defineLockOrder() {
        for (unsigned int i = 1; i < StripeCount; i++) {
            lockdebug_lock_precedes_lock(&array[i-1].value, &array[i].value);
        }
    }

    void precedeLock(const void *newlock) {
        // assumes defineLockOrder is also called
        lockdebug_lock_precedes_lock(&array[StripeCount-1].value, newlock);
    }

    void succeedLock(const void *oldlock) {
        // assumes defineLockOrder is also called
        lockdebug_lock_precedes_lock(oldlock, &array[0].value);
    }

    const void *getLock(int i) {
        if (i < StripeCount) return &array[i].value;
        else return nil;
    }
    
#if DEBUG
    StripedMap() {
        // Verify alignment expectations.
        uintptr_t base = (uintptr_t)&array[0].value;
        uintptr_t delta = (uintptr_t)&array[1].value - base;
        assert(delta % CacheLineSize == 0);
        assert(base % CacheLineSize == 0);
    }
#else
    constexpr StripedMap() {}
#endif
};
  • static unsigned int indexForPointer(const void *p): hashes the object pointer to compute the corresponding array index.
  • T& operator[](const void *p): overloads the [] operator, so &SideTables()[oldObj] returns the SideTable corresponding to an object pointer.
  • Inspecting a SideTable inside SideTables with lldb:
(lldb) p indexForPointer(p)
(unsigned int) $4 = 4
(lldb) p array[indexForPointer(p)].value
((anonymous namespace)::SideTable) A $5 = {
  slock = {
    mLock = (_os_unfair_lock_opaque = 0)
  }
  refcnts = {
    Buckets = 0x0000000000000000
    NumEntries = 0
    NumTombstones = 0
    NumBuckets = 0
  }
  weak_table = {
    weak_entries = 0x0000000000000000
    num_entries = 0
    mask = 0
    max_hash_displacement = 0
  }
}

SideTable

SideTable internal data structure:

struct SideTable {
    spinlock_t slock;
    RefcountMap refcnts;
    weak_table_t weak_table;
    ...
};

refcnts is the reference count table, a map that stores reference counts (described below). weak_table is the weak reference table.

Questions

1. SideTables is a hash map containing multiple SideTable entries. Why more than one? Answer: a SideTable must be locked and unlocked, and frequent locking degrades performance; splitting into multiple tables lets them be locked separately, improving efficiency. 2. Why not one SideTable per class? Creating and managing SideTables also costs performance, so several classes share one SideTable.

Reference counting

1. What is the reference count right after alloc? 2. When does an object call dealloc? 3. When is the reference count incremented or decremented? 4. Where is the reference count stored? 5. What does dealloc do?

With these questions in mind, let's analyze the source code. The core of reference counting is an object's retain and release, so we start with these two functions:

retain

-(id) retain
{
    return _objc_rootRetain(self);
}
id
_objc_rootRetain(id obj)
{
    assert(obj);

    return obj->rootRetain();
}
ALWAYS_INLINE id
objc_object::rootRetain()
{
    return rootRetain(false, false);
}

ALWAYS_INLINE id
objc_object::rootRetain(bool tryRetain, bool handleOverflow)
{
    // 1. TaggedPointer: return directly
    if (isTaggedPointer()) return (id)this;

    // 2. Flags used to track lock state
    bool sideTableLocked = false;
    bool transcribeToSideTable = false;

    isa_t oldisa;
    isa_t newisa;

    do {
        transcribeToSideTable = false;
        oldisa = LoadExclusive(&isa.bits);
        newisa = oldisa;
        // 3. Not nonpointer: use the side table
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            if (!tryRetain && sideTableLocked) sidetable_unlock();
            if (tryRetain) return sidetable_tryRetain() ? (id)this : nil;
            else return sidetable_retain();
        }
        // don't check newisa.fast_rr; we already called any RR overrides
        // 4. Deallocating: return nil
        if (slowpath(tryRetain && newisa.deallocating)) {
            ClearExclusive(&isa.bits);
            if (!tryRetain && sideTableLocked) sidetable_unlock();
            return nil;
        }
        uintptr_t carry;
        // 5. extra_rc++
        newisa.bits = addc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc++

        if (slowpath(carry)) {
            // 6. newisa.extra_rc++ overflowed
            if (!handleOverflow) {
                ClearExclusive(&isa.bits);
                return rootRetain_overflow(tryRetain);
            }
            // Leave half of the retain counts inline and
            // prepare to copy the other half to the side table.
            if (!tryRetain && !sideTableLocked) sidetable_lock();
            sideTableLocked = true;
            transcribeToSideTable = true;
            newisa.extra_rc = RC_HALF;
            newisa.has_sidetable_rc = true;
        }
    } while (slowpath(!StoreExclusive(&isa.bits, oldisa.bits, newisa.bits)));

    if (slowpath(transcribeToSideTable)) {
        // Copy the other half of the retain counts to the side table.
        sidetable_addExtraRC_nolock(RC_HALF);
    }

    if (slowpath(!tryRetain && sideTableLocked)) sidetable_unlock();
    return (id)this;
}

id
objc_object::sidetable_retain()
{
#if SUPPORT_NONPOINTER_ISA
    assert(!isa.nonpointer);
#endif
    SideTable& table = SideTables()[this];

    table.lock();
    size_t& refcntStorage = table.refcnts[this];
    if (! (refcntStorage & SIDE_TABLE_RC_PINNED)) {
        refcntStorage += SIDE_TABLE_RC_ONE;
    }
    table.unlock();

    return (id)this;
}
  • 1. TaggedPointer: return directly
  • 2. Not nonpointer: call sidetable_retain to increment the value in the reference count table
  • 3. nonpointer: extra_rc++; if extra_rc overflows, keep half (RC_HALF) inline and store the other half in the hash table's reference count table.

release

Like retain, the source code is no longer posted here.

  • 1. TaggedPointer: return directly
  • 2. Not nonpointer: call sidetable_release to decrement the value in the reference count table
  • 3. nonpointer: extra_rc--, checking for underflow
  • 4. On underflow, if the hash table still holds part of the count, borrow RC_HALF back from it into extra_rc.
  • 5. If the hash table is also empty, set the deallocating flag to true and send the dealloc message.

retainCount()

inline uintptr_t 
objc_object::rootRetainCount()
{
    if (isTaggedPointer()) return (uintptr_t)this;

    sidetable_lock();
    isa_t bits = LoadExclusive(&isa.bits);
    ClearExclusive(&isa.bits);
    if (bits.nonpointer) {
        uintptr_t rc = 1 + bits.extra_rc;
        if (bits.has_sidetable_rc) {
            rc += sidetable_getExtraRC_nolock();
        }
        sidetable_unlock();
        return rc;
    }

    sidetable_unlock();
    return sidetable_retainCount();
}
  • 1. TaggedPointer: returns (uintptr_t)this
  • 2. nonpointer: returns 1 + bits.extra_rc; if the side table also holds part of the count, its stored value is added
  • 3. Not nonpointer: returns the value stored in the side table

dealloc

   if (isTaggedPointer()) return;  // fixme necessary?

    if (fastpath(isa.nonpointer &&
                 !isa.weakly_referenced &&
                 !isa.has_assoc &&
                 !isa.has_cxx_dtor &&
                 !isa.has_sidetable_rc))
    {
        assert(!sidetable_present());
        free(this);
    } else {
        object_dispose((id)this);
    }
  • 1. TaggedPointer: return directly
  • 2. If isa.nonpointer && !isa.weakly_referenced && !isa.has_assoc && !isa.has_cxx_dtor && !isa.has_sidetable_rc, free directly
  • 3. Otherwise, run the C++ destructor and remove the associated objects
  • 4. Erase the object from the reference count table and set its weak references to nil
  • 5. Free the object

Conclusion: the source analysis of retain, release, retainCount, and dealloc above answers all five questions, so I won't repeat them here.

Weak references (weak)

1. How is a weak reference to an object recorded? 2. How are the entries in the weak reference table set to nil when the object is destroyed?

  NSObject * n = [[NSObject alloc] init];
   __weak NSObject *weakN = n;

weak
objc_initWeak

id
objc_initWeak(id *location, id newObj)
{
    if (!newObj) {
        *location = nil;
        return nil;
    }

    return storeWeak<DontHaveOld, DoHaveNew, DoCrashIfDeallocating>
        (location, (objc_object*)newObj);
}
  • If newObj is nil, store nil and return; otherwise call storeWeak.
template <HaveOld haveOld, HaveNew haveNew, CrashIfDeallocating crashIfDeallocating>
static id 
storeWeak(id *location, objc_object *newObj)
{
    assert(haveOld || haveNew);
    if (!haveNew) assert(newObj == nil);

    Class previouslyInitializedClass = nil;
    id oldObj;
    SideTable *oldTable;
    SideTable *newTable;

    // Acquire locks for old and new values.
    // Order by lock address to prevent lock ordering problems. 
    // Retry if the old value changes underneath us.
 retry:
    if (haveOld) {
        oldObj = *location;
        oldTable = &SideTables()[oldObj];
    } else {
        oldTable = nil;
    }
    if (haveNew) {
        newTable = &SideTables()[newObj];
    } else {
        newTable = nil;
    }

    SideTable::lockTwo<haveOld, haveNew>(oldTable, newTable);

    if (haveOld && *location != oldObj) {
        SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);
        goto retry;
    }

    // Prevent a deadlock between the weak reference machinery
    // and the +initialize machinery by ensuring that no
    // weakly-referenced object has an un-+initialized isa.
    if (haveNew && newObj) {
        Class cls = newObj->getIsa();
        if (cls != previouslyInitializedClass &&
            !((objc_class *)cls)->isInitialized()) {
            SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);
            class_initialize(cls, (id)newObj);

            // If this class is finished with +initialize then we're good.
            // If this class is still running +initialize on this thread
            // (i.e. +initialize called storeWeak on an instance of itself)
            // then we may proceed but it will appear initializing and
            // not yet initialized to the check above.
            // Instead set previouslyInitializedClass to recognize it on retry.
            previouslyInitializedClass = cls;

            goto retry;
        }
    }

    // Clean up old value, if any.
    if (haveOld) {
        weak_unregister_no_lock(&oldTable->weak_table, oldObj, location);
    }

    // Assign new value, if any.
    if (haveNew) {
        newObj = (objc_object *)
            weak_register_no_lock(&newTable->weak_table, (id)newObj, location,
                                  crashIfDeallocating);
        // weak_register_no_lock returns nil if weak store should be rejected

        // Set is-weakly-referenced bit in refcount table.
        if (newObj && !newObj->isTaggedPointer()) {
            newObj->setWeaklyReferenced_nolock();
        }

        // Do not set *location anywhere else. That would introduce a race.
        *location = (id)newObj;
    } else {
        // No new value. The storage is not changed.
    }

    SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);

    return (id)newObj;
}
  • If an old value exists, call weak_unregister_no_lock to clean it up.
void
weak_unregister_no_lock(weak_table_t *weak_table, id referent_id, 
                        id *referrer_id)
{
    objc_object *referent = (objc_object *)referent_id;
    objc_object **referrer = (objc_object **)referrer_id;

    weak_entry_t *entry;

    if (!referent) return;

    if ((entry = weak_entry_for_referent(weak_table, referent))) {
        remove_referrer(entry, referrer);
        bool empty = true;
        if (entry->out_of_line() && entry->num_refs != 0) {
            empty = false;
        }
        else {
            for (size_t i = 0; i < WEAK_INLINE_COUNT; i++) {
                if (entry->inline_referrers[i]) {
                    empty = false;
                    break;
                }
            }
        }

        if (empty) {
            weak_entry_remove(weak_table, entry);
        }
    }

    // Do not set *referrer = nil. objc_storeWeak() requires that the 
    // value not change.
}
  1. First call weak_entry_for_referent to get the entry from the weak_table
  2. Then call remove_referrer to find the index of the weak variable's address in entry->referrers, set entry->referrers[index] = nil; and decrement entry->num_refs
  3. If the entry is now empty, remove it from the weak_table with weak_entry_remove
  • If a new value exists, call weak_register_no_lock
 // now remember it and where it is being stored
    weak_entry_t *entry;
    if ((entry = weak_entry_for_referent(weak_table, referent))) {
        append_referrer(entry, referrer);
    } 
    else {
        weak_entry_t new_entry(referent, referrer);
        weak_grow_maybe(weak_table);
        weak_entry_insert(weak_table, &new_entry);
    }

(1) If the entry already exists ((entry = weak_entry_for_referent(weak_table, referent))), call append_referrer to add the new referrer to it. The referrer is first placed in entry->inline_referrers[i]; when the inline slots run out, they are copied into a newly allocated new_referrers array, which is then assigned to entry->referrers. (2) If the entry does not exist, construct a weak_entry_t from the referent, placing the referrer in its inline_referrers array; grow the weak_table if necessary with weak_grow_maybe(weak_table); then insert the new entry with weak_entry_insert(weak_table, &new_entry).

From dealloc we know that when an object is being destroyed, if it has weak references:

    ...
    SideTable& table = SideTables()[this];
    table.lock();
    if (isa.weakly_referenced) {
        weak_clear_no_lock(&table.weak_table, (id)this);
    }
    // Erase this object from the reference count table.
    if (isa.has_sidetable_rc) {
        table.refcnts.erase(this);
    }
    table.unlock();
    ...

weak_clear_no_lock is called when the object has weak references:

void 
weak_clear_no_lock(weak_table_t *weak_table, id referent_id) 
{
    objc_object *referent = (objc_object *)referent_id;

    weak_entry_t *entry = weak_entry_for_referent(weak_table, referent);
    if (entry == nil) {
        /// XXX shouldn't happen, but does with mismatched CF/objc
        //printf("XXX no entry for clear deallocating %p\n", referent);
        return;
    }

    // zero out references
    weak_referrer_t *referrers;
    size_t count;

    if (entry->out_of_line()) {
        referrers = entry->referrers;
        count = TABLE_SIZE(entry);
    }
    else {
        referrers = entry->inline_referrers;
        count = WEAK_INLINE_COUNT;
    }

    for (size_t i = 0; i < count; ++i) {
        objc_object **referrer = referrers[i];
        if (referrer) {
            if (*referrer == referent) {
                *referrer = nil;
            }
            else if (*referrer) {
                _objc_inform("__weak variable at %p holds %p instead of %p. "
                             "This is probably incorrect use of "
                             "objc_storeWeak() and objc_loadWeak(). "
                             "Break on objc_weak_error to debug.\n",
                             referrer, (void*)*referrer, (void*)referent);
                objc_weak_error();
            }
        }
    }

    weak_entry_remove(weak_table, entry);
}
  • Find the object's entry in the weak_table
  • Loop over the entry's referrers, setting each weak variable to nil: *referrer = nil;
  • Remove the entry from the weak_table

Variable modifiers

Variable modifiers have the following cases:

typedef enum {
    objc_ivar_memoryUnknown,     // unknown / unknown
    objc_ivar_memoryStrong,      // direct access / objc_storeStrong
    objc_ivar_memoryWeak,        // objc_loadWeak[Retained] / objc_storeWeak
    objc_ivar_memoryUnretained   // direct access / direct access
} objc_ivar_memory_management_t;

Parsing how the setter handles different variable modifiers, from the source:

void _object_setIvar(id obj, Ivar ivar, id value, bool assumeStrong)
{
    if (!obj || !ivar || obj->isTaggedPointer()) return;

    ptrdiff_t offset;
    objc_ivar_memory_management_t memoryManagement;
    _class_lookUpIvar(obj->ISA(), ivar, offset, memoryManagement);

    if (memoryManagement == objc_ivar_memoryUnknown) {
        if (assumeStrong) memoryManagement = objc_ivar_memoryStrong;
        else memoryManagement = objc_ivar_memoryUnretained;
    }

    id *location = (id *)((char *)obj + offset);

    switch (memoryManagement) {
    case objc_ivar_memoryWeak:       objc_storeWeak(location, value); break;
    case objc_ivar_memoryStrong:     objc_storeStrong(location, value); break;
    case objc_ivar_memoryUnretained: *location = value; break;
    case objc_ivar_memoryUnknown:    _objc_fatal("impossible");
    }
}
  • TaggedPointer: return directly
  • _class_lookUpIvar(obj->ISA(), ivar, offset, memoryManagement) fetches the ivar's memory modifier (objc_ivar_memory_management_t). (1) objc_ivar_memoryWeak calls objc_storeWeak to operate on the weak reference table, analyzed above. (2) objc_ivar_memoryStrong calls objc_storeStrong to retain the new value and release the old one. (3) objc_ivar_memoryUnretained stores value directly into *location, which explains why unretained references are not safe.

Automatic release pool AutoReleasePool

Introduction to automatic release pool

AutoReleasePool was introduced by ARC to manage reference counting of objects. Here are a few highlights of AutoReleasePool:

  • A thread’s autorelease pools form a stack of pointers.
  • Each pointer is either an object to be released, or a pool boundary (sentinel) marking an autorelease pool.
  • A pool token is a pointer to that pool’s boundary. When the pool is popped, every object above the sentinel is released.
  • The stack is divided into a doubly-linked list of pages, which are added and removed as needed.
  • Thread-local storage points to the hot page, where newly autoreleased objects are stored.

AutoReleasePool structure diagram:

AutoReleasePool data structure:

class AutoreleasePoolPage;
struct AutoreleasePoolPageData
{
    magic_t const magic;                       // 16 bytes
    __unsafe_unretained id *next;              // 8 bytes
    pthread_t const thread;                    // 8 bytes
    AutoreleasePoolPage * const parent;        // 8 bytes
    AutoreleasePoolPage *child;                // 8 bytes
    uint32_t const depth;                      // 4 bytes
    uint32_t hiwat;                            // 4 bytes

    AutoreleasePoolPageData(__unsafe_unretained id* _next, pthread_t _thread,
                            AutoreleasePoolPage* _parent, uint32_t _depth,
                            uint32_t _hiwat)
        : magic(), next(_next), thread(_thread), parent(_parent),
          child(nil), depth(_depth), hiwat(_hiwat)
    {
    }
};
  • AutoreleasePoolPageIs inherited fromAutoreleasePoolPageDataClasses of structures,Objc4-779.1 -Versions start to stand aloneAutoreleasePoolPageDataStruct, before the variable was directly inAutoreleasePoolPageIn the.
  • `magic_t const magic`: used to verify that the `AutoreleasePoolPage` structure is intact
  • `__unsafe_unretained id *next`: points to the next free slot for a newly autoreleased object; initialized to `begin()`
  • `pthread_t const thread`: the current thread
  • `AutoreleasePoolPage * const parent`: points to the parent page; the first page's parent is nil
  • `AutoreleasePoolPage *child`: points to the child page; the last page's child is nil
  • `uint32_t const depth`: the depth, starting at 0 and increasing by 1 per page
  • `uint32_t hiwat`: the high water mark

Auto release pool exploration

Use `clang -rewrite-objc main.m -o main.cpp` to compile the following code:

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSLog(@"Jensen");
    }
    return 0;
}

Compile result:

int main(int argc, const char * argv[]) {

    /* @autoreleasepool */ {
        __AtAutoreleasePool __autoreleasepool;
        NSLog((NSString *)&__NSConstantStringImpl__var_folders_v7_6tlrq64x5w5gqg17582f4p500000gn_T_main_3f39be_mi_0);
    }
    return 0;
}
struct __AtAutoreleasePool {
  __AtAutoreleasePool() { atautoreleasepoolobj = objc_autoreleasePoolPush(); }
  ~__AtAutoreleasePool() { objc_autoreleasePoolPop(atautoreleasepoolobj); }
  void * atautoreleasepoolobj;
};

@autoreleasepool {} actually instantiates a local __AtAutoreleasePool variable, which calls objc_autoreleasePoolPush in its constructor and objc_autoreleasePoolPop in its destructor when the scope ends.
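The RAII pattern can be sketched with hypothetical push/pop stand-ins (the real functions are objc_autoreleasePoolPush/objc_autoreleasePoolPop):

```cpp
#include <cassert>
#include <string>
#include <vector>

static std::vector<std::string> calls;   // records the push/work/pop ordering

void* poolPush()     { calls.push_back("push"); return nullptr; }
void  poolPop(void*) { calls.push_back("pop"); }

struct AtAutoreleasePool {               // mirrors __AtAutoreleasePool above
    AtAutoreleasePool()  { token = poolPush(); }   // runs at the opening '{'
    ~AtAutoreleasePool() { poolPop(token); }       // runs at the closing '}'
    void* token;
};

void body() {
    AtAutoreleasePool pool;              // what @autoreleasepool expands to
    calls.push_back("work");             // the code inside the block
}                                        // destructor pops the pool here
```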

atautoreleasepoolobj = objc_autoreleasePoolPush();

    static inline void *push() 
    {
        id *dest;
        if (slowpath(DebugPoolAllocation)) {
            // Each autorelease pool starts on a new pool page.
            dest = autoreleaseNewPage(POOL_BOUNDARY);
        } else {
            dest = autoreleaseFast(POOL_BOUNDARY);
        }
        ASSERT(dest == EMPTY_POOL_PLACEHOLDER || *dest == POOL_BOUNDARY);
        return dest;
    }
  • The environment variable `OBJC_DEBUG_POOL_ALLOCATION` determines whether autorelease pool allocation debugging is enabled; if so, `autoreleaseNewPage` is called, otherwise `autoreleaseFast`. Here we analyze `autoreleaseFast`.
  • `objc_autoreleasePoolPush` is called when the autorelease pool is initialized
    static inline id *autoreleaseFast(id obj)
    {
        AutoreleasePoolPage *page = hotPage();
        if (page && !page->full()) {
            return page->add(obj);
        } else if (page) {
            return autoreleaseFullPage(obj, page);
        } else {
            return autoreleaseNoPage(obj);
        }
    }
  • Gets the current `AutoreleasePoolPage`, the `hotPage`
  • If there is a `hotPage` and it is not full, calls `page->add(obj)` directly to add the object to the `AutoreleasePoolPage`
  • If there is a `hotPage` but it is full, calls `autoreleaseFullPage`
  • If there is no `hotPage`, this is the first addition, so `autoreleaseNoPage` is called
id *add(id obj)
{
    ASSERT(!full());
    unprotect();
    id *ret = next;  // faster than `return next-1` because of aliasing
    *next++ = obj;
    protect();
    return ret;
}

Add the object to the hotPage.
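A minimal sketch of that pointer bump (plain C-style, with int "objects" standing in for id):

```cpp
#include <cassert>

// Write the object into the slot `next` points at, then advance `next`.
// Returns the slot where the object landed, as the real add() does.
int* poolAdd(int*& next, int obj) {
    int* ret = next;   // the slot where obj lands (the return value)
    *next++ = obj;     // store, then advance to the following free slot
    return ret;
}
```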

  static __attribute__((noinline))
    id *autoreleaseFullPage(id obj, AutoreleasePoolPage *page)
    {
        // The hot page is full. 
        // Step to the next non-full page, adding a new page if necessary.
        // Then add the object to that page.
        ASSERT(page == hotPage());
        ASSERT(page->full()  ||  DebugPoolAllocation);

        do {
            if (page->child) page = page->child;
            else page = new AutoreleasePoolPage(page);
        } while (page->full());

        setHotPage(page);
        return page->add(obj);
    }

The loop steps down the child chain; if every existing page is full, it creates a new AutoreleasePoolPage whose parent is the current page. The first non-full (or newly created) page is set as the hotPage, and add appends the object to it.
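That walk can be sketched as follows (a toy model with a tiny page capacity; the real capacity is 505):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Page {
    static constexpr std::size_t CAPACITY = 2;  // tiny for illustration
    std::vector<int> objs;
    Page* parent = nullptr;
    Page* child  = nullptr;
    bool full() const { return objs.size() >= CAPACITY; }
};

// Walk to the first non-full descendant, allocating a new page when needed;
// returns the page that received the object (the new hot page).
Page* fullPageAdd(Page* page, int obj) {
    do {
        if (page->child) {
            page = page->child;
        } else {
            Page* fresh = new Page;  // leaked in this sketch; the runtime reuses pages
            fresh->parent = page;
            page->child = fresh;
            page = fresh;
        }
    } while (page->full());
    page->objs.push_back(obj);
    return page;                     // the caller would setHotPage(page)
}
```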

    static __attribute__((noinline))
    id *autoreleaseNoPage(id obj)
    {
        // "No page"could mean no pool has been pushed // or an empty placeholder pool has been pushed and has no contents yet ASSERT(! hotPage()); bool pushExtraBoundary =false;
        if (haveEmptyPoolPlaceholder()) {
            // We are pushing a second pool over the empty placeholder pool
            // or pushing the first object into the empty placeholder pool.
            // Before doing that, push a pool boundary on behalf of the pool 
            // that is currently represented by the empty placeholder.
            pushExtraBoundary = true;
        }
        else if (obj != POOL_BOUNDARY && DebugMissingPools) {
            // We are pushing an object with no pool in place,
            // and no-pool debugging was requested by environment.
            _objc_inform("MISSING POOLS: (%p) Object %p of class %s "
                         "autoreleased with no pool in place - "
                         "just leaking - break on "
                         "objc_autoreleaseNoPool() to debug", 
                         objc_thread_self(), (void*)obj, object_getClassName(obj));
            objc_autoreleaseNoPool(obj);
            return nil;
        }
        else if (obj == POOL_BOUNDARY && !DebugPoolAllocation) {
            // We are pushing a pool with no pool in place,
            // and alloc-per-pool debugging was not requested.
            // Install and return the empty pool placeholder.
            return setEmptyPoolPlaceholder();
        }

        // We are pushing an object or a non-placeholder'd pool.

        // Install the first page.
        AutoreleasePoolPage *page = new AutoreleasePoolPage(nil);
        setHotPage(page);

        // Push a boundary on behalf of the previously-placeholder'd pool.
        if (pushExtraBoundary) {
            page->add(POOL_BOUNDARY);
        }
        
        // Push the requested object or pool.
        return page->add(obj);
    }

Create the first page directly, set this page to hotPage, and add the POOL_BOUNDARY

Use _objc_autoreleasePoolPrint() to print an empty autorelease pool:

The page's member variables occupy 56 bytes (16 + 8 × 4 + 4 × 2), and a page is 4096 bytes, so a page can hold (4096 − 56) / 8 = 505 object pointers. The first page also stores the POOL_BOUNDARY, which takes one slot, so it holds 504 objects plus the sentinel; every other page can hold 505 objects.
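The arithmetic can be checked directly (assuming 8-byte pointers on a 64-bit platform):

```cpp
#include <cassert>
#include <cstddef>

constexpr std::size_t POOL_PAGE_SIZE = 4096;
// magic(16) + next(8) + thread(8) + parent(8) + child(8) + depth(4) + hiwat(4)
constexpr std::size_t HEADER = 16 + 8 + 8 + 8 + 8 + 4 + 4;

// Remaining bytes hold 8-byte object pointers.
constexpr std::size_t slotCount()        { return (POOL_PAGE_SIZE - HEADER) / sizeof(void*); }
// The first page spends one slot on POOL_BOUNDARY.
constexpr std::size_t firstPageObjects() { return slotCount() - 1; }
```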

objc_autoreleasePoolPop

void
_objc_autoreleasePoolPop(void *ctxt)
{
    objc_autoreleasePoolPop(ctxt);
}

void
objc_autoreleasePoolPop(void *ctxt)
{
    AutoreleasePoolPage::pop(ctxt);
}

 static inline void
    pop(void *token)
    {
        AutoreleasePoolPage *page;
        id *stop;
        if (token == (void*)EMPTY_POOL_PLACEHOLDER) {
            // Popping the top-level placeholder pool.
            page = hotPage();
            if (!page) {
                // Pool was never used.  Clear the placeholder.
                return setHotPage(nil);
            }
            // Pool was used. Pop its contents normally.
            // Pool pages remain allocated for re-use as usual.
            page = coldPage();
            token = page->begin();
        } else {
            page = pageForPointer(token);
        }

        stop = (id *)token;
        if (*stop != POOL_BOUNDARY) {
            if (stop == page->begin() && !page->parent) {
                // Start of coldest page may correctly not be POOL_BOUNDARY:
                // 1. top-level pool is popped, leaving the cold page in place
                // 2. an object is autoreleased with no pool
            } else {
                // Error. For bincompat purposes this is not 
                // fatal in executables built with old SDKs.
                return badPop(token);
            }
        }

        if (slowpath(PrintPoolHiwat || DebugPoolAllocation || DebugMissingPools)) {
            return popPageDebug(token, page, stop);
        }

        return popPage<false>(token, page, stop);
    }
  • `_objc_autoreleasePoolPop` is called when the autorelease pool is destructed
  • `token` specifies the position to release down to
  • `pageForPointer` finds the page corresponding to the `token`
  • `popPage<false>(token, page, stop);` starts the actual pop
    template<bool allowDebug>
    static void
    popPage(void *token, AutoreleasePoolPage *page, id *stop)
    {
        if (allowDebug && PrintPoolHiwat) printHiwat();

        page->releaseUntil(stop);

        // memory: delete empty children
        if (allowDebug && DebugPoolAllocation  &&  page->empty()) {
            // special case: delete everything during page-per-pool debugging
            AutoreleasePoolPage *parent = page->parent;
            page->kill();
            setHotPage(parent);
        } else if (allowDebug && DebugMissingPools && page->empty() && !page->parent) {
            // special case: delete everything for pop(top)
            // when debugging missing autorelease pools
            page->kill();
            setHotPage(nil);
        } else if (page->child) {
            // hysteresis: keep one empty child if page is more than half full
            if (page->lessThanHalfFull()) {
                page->child->kill();
            } else if (page->child->child) {
                page->child->child->kill();
            }
        }
    }
  • `page->releaseUntil(stop);` releases the objects
  • If the page ends up empty, its child pages are killed as well
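The hysteresis branch can be reduced to a one-line predicate (a sketch of lessThanHalfFull's role, not runtime code):

```cpp
#include <cassert>
#include <cstddef>

// true  -> keep one empty child page cached for reuse
// false -> the page is less than half full, so free the whole child chain
bool keepOneChild(std::size_t used, std::size_t capacity) {
    return used >= capacity / 2;   // i.e. !lessThanHalfFull()
}
```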
  void releaseUntil(id *stop) 
    {
        // Not recursive: we don't want to blow out the stack
        // if a thread accumulates a stupendous amount of garbage

        while (this->next != stop) {
            // Restart from hotPage() every time, in case -release
            // autoreleased more objects
            AutoreleasePoolPage *page = hotPage();

            // fixme I think this `while` can be `if`, but I can't prove it
            while (page->empty()) {
                page = page->parent;
                setHotPage(page);
            }

            page->unprotect();
            id obj = *--page->next;
            memset((void*)page->next, SCRIBBLE, sizeof(*page->next));
            page->protect();

            if (obj != POOL_BOUNDARY) {
                objc_release(obj);
            }
        }

        setHotPage(this);

#if DEBUG
        // we expect any children to be completely empty
        for (AutoreleasePoolPage *page = child; page; page = page->child) {
            ASSERT(page->empty());
        }
#endif
    }
  • Loop over, retrieve the object, and release it.

Summary: pop(token) receives a pointer (the token) to the pool boundary, which specifies how far back to release. The loop releases objects one by one, stepping back through the pages, until it reaches the stop position, and then sets the current page as the hotPage.
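A simplified model of that backward walk (toy pages holding ints, not the real runtime):

```cpp
#include <assert.h>
#include <cstddef>
#include <vector>

struct Page {
    std::vector<int> objs;
    Page* parent = nullptr;
};

// Release (record) objects from the hot page backwards through parents until
// stopPage is down to stopCount objects -- the position the token points at.
std::vector<int> releaseUntil(Page* hot, Page* stopPage, std::size_t stopCount) {
    std::vector<int> released;
    Page* page = hot;
    while (!(page == stopPage && page->objs.size() == stopCount)) {
        while (page->objs.empty()) page = page->parent; // like setHotPage(parent)
        released.push_back(page->objs.back());          // like objc_release(obj)
        page->objs.pop_back();
    }
    return released;
}
```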

autorelease

Now that we've seen objc_autoreleasePoolPush and objc_autoreleasePoolPop, let's look at what autorelease does.

    static inline id autorelease(id obj)
    {
        ASSERT(obj);
        ASSERT(!obj->isTaggedPointer());
        id *dest __unused = autoreleaseFast(obj);
        ASSERT(!dest  ||  dest == EMPTY_POOL_PLACEHOLDER  ||  *dest == obj);
        return obj;
    }
    static inline id *autoreleaseFast(id obj)
    {
        AutoreleasePoolPage *page = hotPage();
        if (page && !page->full()) {
            return page->add(obj);
        } else if (page) {
            return autoreleaseFullPage(obj, page);
        } else {
            return autoreleaseNoPage(obj);
        }
    }

The implementation of autorelease is similar to objc_autoreleasePoolPush, so it is not repeated here.

Automatically release pools and runloops

  • When the App starts, Apple registers two Observers on the main thread's RunLoop; the callback for both is _wrapRunLoopWithAutoreleasePoolHandler().

  • The first Observer monitors the Entry event (about to enter the Loop). Its callback creates an autorelease pool by calling _objc_autoreleasePoolPush(). Its order is -2147483647, the highest priority, which ensures the pool is created before all other callbacks run.

  • The second Observer monitors two events: BeforeWaiting (about to go to sleep), when it calls _objc_autoreleasePoolPop() and then _objc_autoreleasePoolPush() to release the old pool and create a new one; and Exit (about to exit the Loop), when it calls _objc_autoreleasePoolPop() to release the pool. This Observer's order is 2147483647, the lowest priority, which ensures the pool is released after all other callbacks run.

  • Code that executes on the main thread usually runs inside callbacks such as event callbacks and Timer callbacks. These callbacks are wrapped by the AutoreleasePool created by the RunLoop, so there is no memory leak and developers do not have to create a pool explicitly.

  • Each thread has only one autorelease pool stack

  • When @autoreleasepool blocks are nested, no extra page is created; the same page simply holds two pool boundaries

  observers = (
     "<CFRunLoopObserver 0x600001238280 [0x10b19ab68]>{valid = Yes, activities = 0x1, repeats = Yes, order = -2147483647, callout = _wrapRunLoopWithAutoreleasePoolHandler (0x10dd891b1), context = <CFArray 0x600002d3c1b0 [0x10b19ab68]>{type = mutable-small, count = 0, values = ()}}",
     "<CFRunLoopObserver 0x60000123c500 [0x10b19ab68]>{valid = Yes, activities = 0x20, repeats = Yes, order = 0, callout = _UIGestureRecognizerUpdateObserver (0x10d95b473), context = <CFRunLoopObserver context 0x60000083cfc0>}",
     "<CFRunLoopObserver 0x600001238140 [0x10b19ab68]>{valid = Yes, activities = 0xa0, repeats = Yes, order = 1999000, callout = _beforeCACommitHandler (0x10ddb8dfc), context = <CFRunLoopObserver context 0x7fdae6d020c0>}",
     "<CFRunLoopObserver 0x6000012381e0 [0x10b19ab68]>{valid = Yes, activities = 0xa0, repeats = Yes, order = 2001000, callout = _afterCACommitHandler (0x10ddb8e75), context = <CFRunLoopObserver context 0x7fdae6d020c0>}",
     "<CFRunLoopObserver 0x600001238320 [0x10b19ab68]>{valid = Yes, activities = 0xa0, repeats = Yes, order = 2147483647, callout = _wrapRunLoopWithAutoreleasePoolHandler (0x10dd891b1), context = <CFArray 0x600002d3c1b0 [0x10b19ab68]>{type = mutable-small, count = 0, values = ()}}"
     ),
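For the same RunLoop activity, observers with a lower order are called first, which is why the pool-push observer uses -2147483647 (runs first) and the pool-pop observer 2147483647 (runs last). A toy illustration of that ordering (hypothetical names, not the CF API):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct Observer { long order; std::string name; };

// Fire observers registered for one activity in ascending `order`.
std::vector<std::string> fire(std::vector<Observer> obs) {
    std::stable_sort(obs.begin(), obs.end(),
                     [](const Observer& a, const Observer& b) { return a.order < b.order; });
    std::vector<std::string> callouts;
    for (const Observer& o : obs) callouts.push_back(o.name);
    return callouts;
}
```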