A note up front: this series on the underlying principles of iOS is a record of what I have accumulated in day-to-day development and study along the road to more advanced work. It documents my ongoing exploration, and I hope it is helpful to readers.

The directory is as follows:

  1. Exploring the underlying principles of iOS: alloc
  2. An exploration of the underlying principles of iOS
  3. Exploring the underlying principles of iOS: the nature of objects & isa’s underlying implementation
  4. Exploring the underlying principles of iOS: isa and class (Part 1)
  5. Exploring the underlying principles of iOS: isa and class (Part 2)
  6. Exploring the underlying principles of iOS: isa and class (Part 3)
  7. Exploring the underlying principles of iOS: the nature of the Runtime & methods
  8. Exploring the underlying principles of iOS: objc_msgSend
  9. Exploring the underlying principles of iOS: slow method lookup in the Runtime
  10. Exploring the underlying principles of iOS: dynamic method resolution
  11. Exploring the underlying principles of iOS: the message forwarding process
  12. Exploring the underlying principles of iOS: application loading and dyld (Part 1)
  13. Exploring the underlying principles of iOS: application loading and dyld (Part 2)
  14. Exploring the underlying principles of iOS: the loading of classes
  15. Exploring the underlying principles of iOS: the loading of categories
  16. Exploring the underlying principles of iOS: associated objects
  17. Exploring the underlying principles of iOS: KVC
  18. Exploring the underlying principles of iOS: KVO principles | More challenges in August
  19. Exploring the underlying principles of iOS: rewriting KVO | More challenges in August
  20. Exploring the underlying principles of iOS: multithreading | More challenges in August
  21. Exploring the underlying principles of iOS: GCD functions and queues
  22. Exploring the underlying principles of iOS: GCD principles (Part 1)
  23. iOS low-level: what do you know about deadlocks?
  24. iOS low-level: can a singleton be destroyed?
  25. iOS low-level: Dispatch Source
  26. iOS low-level: GCD barrier functions
  27. iOS low-level: GCD semaphores
  28. iOS low-level: GCD dispatch groups
  29. iOS low-level: basic usage of locks
  30. iOS low-level: @synchronized flow analysis
  31. iOS low-level: exploring the principles of locks
  32. iOS low-level: implementing a read/write lock
  33. Implementation of Objective-C Blocks (Part 1)
  34. Implementation of Objective-C Blocks (Part 2)
  35. iOS low-level: Blocks, fully resolved!
  36. iOS basics: startup optimization (Part 1)
  37. iOS basics: startup optimization (Part 2)
  38. Exploring the underlying principles of iOS: memory management and the five memory regions
  39. Memory management: Tagged Pointer format changes

Summary of the above column

  • A summary of the iOS underlying-principles exploration series

Detail write-ups

  • A summary of iOS development details

Preface

NONPOINTER_ISA (non-pointer ISA)

nonpointer: indicates whether pointer optimization is enabled for the isa pointer. (We explored isa’s underlying implementation in “Exploring the underlying principles of iOS: the nature of objects & isa’s underlying implementation”.)

  • 0: a pure isa pointer, storing only the class object’s address;
  • 1: a non-pointer isa that contains not just the class object’s address but also class information, the object’s reference count, and so on.

Similar to Tagged Pointer, NONPOINTER_ISA is an optimization of the isa’s memory bits: all 64 bits are fully utilized for storage rather than wasted.

Its shiftcls field, much like a Tagged Pointer’s payload, is what carries the valid data (the class pointer).

MRC & ARC

The object’s reference count is stored in extra_rc inside the isa;

  • extra_rc holds the reference count minus one: if the object’s reference count is 10, extra_rc is 9. When the count grows beyond what extra_rc can hold, it overflows, the has_sidetable_rc flag bit is set, and the excess is stored in a side table.

retain

release

The release process is essentially the reverse of retain; when the reference count reaches 0, deallocation begins: a dealloc message is sent to the object.

Real release

...
if (performDealloc) {
    ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(dealloc));
}
...

Appendix

rootRetain source code

ALWAYS_INLINE id
objc_object::rootRetain(bool tryRetain, objc_object::RRVariant variant)
{
    if (slowpath(isTaggedPointer())) return (id)this;

    bool sideTableLocked = false;
    bool transcribeToSideTable = false;

    isa_t oldisa;
    isa_t newisa;

    oldisa = LoadExclusive(&isa.bits);

    if (variant == RRVariant::FastOrMsgSend) {
        // These checks are only meaningful for objc_retain()
        // They are here so that we avoid a re-load of the isa.
        if (slowpath(oldisa.getDecodedClass(false)->hasCustomRR())) {
            ClearExclusive(&isa.bits);
            if (oldisa.getDecodedClass(false)->canCallSwiftRR()) {
                return swiftRetain.load(memory_order_relaxed)((id)this);
            }
            return ((id(*)(objc_object *, SEL))objc_msgSend)(this, @selector(retain));
        }
    }

    if (slowpath(!oldisa.nonpointer)) {
        // a Class is a Class forever, so we can perform this check once
        // outside of the CAS loop
        if (oldisa.getDecodedClass(false)->isMetaClass()) {
            ClearExclusive(&isa.bits);
            return (id)this;
        }
    }

    do {
        transcribeToSideTable = false;
        newisa = oldisa;
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            if (tryRetain) return sidetable_tryRetain() ? (id)this : nil;
            else return sidetable_retain(sideTableLocked);
        }
        // don't check newisa.fast_rr; we already called any RR overrides
        if (slowpath(newisa.isDeallocating())) {
            ClearExclusive(&isa.bits);
            if (sideTableLocked) {
                ASSERT(variant == RRVariant::Full);
                sidetable_unlock();
            }
            if (slowpath(tryRetain)) {
                return nil;
            } else {
                return (id)this;
            }
        }
        uintptr_t carry;
        newisa.bits = addc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc++

        if (slowpath(carry)) {
            // newisa.extra_rc++ overflowed
            if (variant != RRVariant::Full) {
                ClearExclusive(&isa.bits);
                return rootRetain_overflow(tryRetain);
            }
            // Leave half of the retain counts inline and
            // prepare to copy the other half to the side table.
            if (!tryRetain && !sideTableLocked) sidetable_lock();
            sideTableLocked = true;
            transcribeToSideTable = true;
            newisa.extra_rc = RC_HALF;
            newisa.has_sidetable_rc = true;
        }
    } while (slowpath(!StoreExclusive(&isa.bits, &oldisa.bits, newisa.bits)));

    if (variant == RRVariant::Full) {
        if (slowpath(transcribeToSideTable)) {
            // Copy the other half of the retain counts to the side table.
            sidetable_addExtraRC_nolock(RC_HALF);
        }

        if (slowpath(!tryRetain && sideTableLocked)) sidetable_unlock();
    } else {
        ASSERT(!transcribeToSideTable);
        ASSERT(!sideTableLocked);
    }

    return (id)this;
}

sidetable_retain

id
objc_object::sidetable_retain(bool locked)
{
#if SUPPORT_NONPOINTER_ISA
    ASSERT(!isa.nonpointer);
#endif
    SideTable& table = SideTables()[this];

    if (!locked) table.lock();
    size_t& refcntStorage = table.refcnts[this];
    if (! (refcntStorage & SIDE_TABLE_RC_PINNED)) {
        refcntStorage += SIDE_TABLE_RC_ONE;
    }
    table.unlock();

    return (id)this;
}

rootRelease

ALWAYS_INLINE bool
objc_object::rootRelease(bool performDealloc, objc_object::RRVariant variant)
{
    if (slowpath(isTaggedPointer())) return false;

    bool sideTableLocked = false;

    isa_t newisa, oldisa;

    oldisa = LoadExclusive(&isa.bits);

    if (variant == RRVariant::FastOrMsgSend) {
        // These checks are only meaningful for objc_release()
        // They are here so that we avoid a re-load of the isa.
        if (slowpath(oldisa.getDecodedClass(false)->hasCustomRR())) {
            ClearExclusive(&isa.bits);
            if (oldisa.getDecodedClass(false)->canCallSwiftRR()) {
                swiftRelease.load(memory_order_relaxed)((id)this);
                return true;
            }
            ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(release));
            return true;
        }
    }

    if (slowpath(!oldisa.nonpointer)) {
        // a Class is a Class forever, so we can perform this check once
        // outside of the CAS loop
        if (oldisa.getDecodedClass(false)->isMetaClass()) {
            ClearExclusive(&isa.bits);
            return false;
        }
    }

retry:
    do {
        newisa = oldisa;
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            return sidetable_release(sideTableLocked, performDealloc);
        }
        if (slowpath(newisa.isDeallocating())) {
            ClearExclusive(&isa.bits);
            if (sideTableLocked) {
                ASSERT(variant == RRVariant::Full);
                sidetable_unlock();
            }
            return false;
        }

        // don't check newisa.fast_rr; we already called any RR overrides
        uintptr_t carry;
        newisa.bits = subc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc--
        if (slowpath(carry)) {
            // don't ClearExclusive()
            goto underflow;
        }
    } while (slowpath(!StoreReleaseExclusive(&isa.bits, &oldisa.bits, newisa.bits)));

    if (slowpath(newisa.isDeallocating()))
        goto deallocate;

    if (variant == RRVariant::Full) {
        if (slowpath(sideTableLocked)) sidetable_unlock();
    } else {
        ASSERT(!sideTableLocked);
    }
    return false;

 underflow:
    // newisa.extra_rc-- underflowed: borrow from side table or deallocate

    // abandon newisa to undo the decrement
    newisa = oldisa;

    if (slowpath(newisa.has_sidetable_rc)) {
        if (variant != RRVariant::Full) {
            ClearExclusive(&isa.bits);
            return rootRelease_underflow(performDealloc);
        }

        // Transfer retain count from side table to inline storage.

        if (!sideTableLocked) {
            ClearExclusive(&isa.bits);
            sidetable_lock();
            sideTableLocked = true;
            // Need to start over to avoid a race against
            // the nonpointer -> raw pointer transition.
            oldisa = LoadExclusive(&isa.bits);
            goto retry;
        }

        // Try to remove some retain counts from the side table.
        auto borrow = sidetable_subExtraRC_nolock(RC_HALF);

        bool emptySideTable = borrow.remaining == 0; // we'll clear the side table if no refcounts remain there

        if (borrow.borrowed > 0) {
            // Side table retain count decreased.
            // Try to add them to the inline count.
            bool didTransitionToDeallocating = false;
            newisa.extra_rc = borrow.borrowed - 1;  // redo the original decrement too
            newisa.has_sidetable_rc = !emptySideTable;

            bool stored = StoreReleaseExclusive(&isa.bits, &oldisa.bits, newisa.bits);

            if (!stored && oldisa.nonpointer) {
                // Inline update failed.
                // Try it again right now. This prevents livelock on LL/SC
                // architectures where the side table access itself may have
                // dropped the reservation.
                uintptr_t overflow;
                newisa.bits =
                    addc(oldisa.bits, RC_ONE * (borrow.borrowed-1), 0, &overflow);
                newisa.has_sidetable_rc = !emptySideTable;
                if (!overflow) {
                    stored = StoreReleaseExclusive(&isa.bits, &oldisa.bits, newisa.bits);
                    if (stored) {
                        didTransitionToDeallocating = newisa.isDeallocating();
                    }
                }
            }

            if (!stored) {
                // Inline update failed.
                // Put the retains back in the side table.
                ClearExclusive(&isa.bits);
                sidetable_addExtraRC_nolock(borrow.borrowed);
                oldisa = LoadExclusive(&isa.bits);
                goto retry;
            }

            // Decrement successful after borrowing from side table.
            if (emptySideTable)
                sidetable_clearExtraRC_nolock();

            if (!didTransitionToDeallocating) {
                if (slowpath(sideTableLocked)) sidetable_unlock();
                return false;
            }
        }
        else {
            // Side table is empty after all. Fall-through to the dealloc path.
        }
    }

deallocate:
    // Really deallocate.

    ASSERT(newisa.isDeallocating());
    ASSERT(isa.isDeallocating());

    if (slowpath(sideTableLocked)) sidetable_unlock();

    __c11_atomic_thread_fence(__ATOMIC_ACQUIRE);

    if (performDealloc) {
        ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(dealloc));
    }
    return true;
}

sidetable_subExtraRC_nolock

// Move some retain counts from the side table to the isa field.
// Returns the actual count subtracted, which may be less than requested.
objc_object::SidetableBorrow
objc_object::sidetable_subExtraRC_nolock(size_t delta_rc)
{
    ASSERT(isa.nonpointer);
    SideTable& table = SideTables()[this];

    RefcountMap::iterator it = table.refcnts.find(this);
    if (it == table.refcnts.end()  ||  it->second == 0) {
        // Side table retain count is zero. Can't borrow.
        return { 0, 0 };
    }
    size_t oldRefcnt = it->second;

    // isa-side bits should not be set here
    ASSERT((oldRefcnt & SIDE_TABLE_DEALLOCATING) == 0);
    ASSERT((oldRefcnt & SIDE_TABLE_WEAKLY_REFERENCED) == 0);

    size_t newRefcnt = oldRefcnt - (delta_rc << SIDE_TABLE_RC_SHIFT);
    ASSERT(oldRefcnt > newRefcnt);  // shouldn't underflow
    it->second = newRefcnt;
    return { delta_rc, newRefcnt >> SIDE_TABLE_RC_SHIFT };
}