Preparation

  • objc4-818.2 source code

1. ARC & MRC

  • ARC is the result of cooperation between LLVM and the Runtime.
  • ARC forbids manually calling retain / release / retainCount / dealloc (see the sketch below).
  • ARC adds the weak and strong property keywords.
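
To make the difference concrete, here is a minimal sketch (the MRC half only compiles with ARC disabled, e.g. -fno-objc-arc):

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
#if !__has_feature(objc_arc)
    // MRC: ownership is managed by hand.
    NSObject *obj = [[NSObject alloc] init]; // retain count 1
    [obj retain];                            // retain count 2
    [obj release];                           // retain count 1
    [obj release];                           // count reaches 0, dealloc runs
#endif
    // Under ARC the compiler inserts retain/release itself; writing
    // [obj retain] above would fail to compile:
    // "ARC forbids explicit message send of 'retain'".
    return 0;
}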

2. alloc

For more information about alloc, see the following three articles:

  • OC Object Principle Exploration (3): the nature of objects & isa
  • OC Object Principle Exploration (2): memory alignment & malloc
  • OC Object Principle Exploration (1): alloc

3. retain and release

3.1 retain

View the retain function in the source code:

inline id
objc_object::retain()
{
    ASSERT(!isTaggedPointer());
    return rootRetain(false, RRVariant::FastOrMsgSend);
}

Check out the rootRetain function:

ALWAYS_INLINE id
objc_object::rootRetain(bool tryRetain, objc_object::RRVariant variant)
{
    // TaggedPointer: no reference counting, return directly
    if (slowpath(isTaggedPointer())) return (id)this;

    bool sideTableLocked = false;
    bool transcribeToSideTable = false;

    isa_t oldisa;
    isa_t newisa;

    oldisa = LoadExclusive(&isa.bits);

    do {
        transcribeToSideTable = false;
        newisa = oldisa;

        // Not a nonpointer isa: retain through the side table
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            if (tryRetain) return sidetable_tryRetain() ? (id)this : nil;
            else return sidetable_retain(sideTableLocked);
        }
        // don't check newisa.fast_rr; we already called any RR overrides
        // Deallocating: no reference counting
        if (slowpath(newisa.isDeallocating())) {
            ClearExclusive(&isa.bits);
            if (sideTableLocked) {
                ASSERT(variant == RRVariant::Full);
                sidetable_unlock();
            }
            if (slowpath(tryRetain)) {
                return nil;
            } else {
                return (id)this;
            }
        }
        uintptr_t carry;
        // Add RC_ONE at the extra_rc position: extra_rc++
        newisa.bits = addc(newisa.bits, RC_ONE, 0, &carry);

        // carry set: extra_rc is full
        if (slowpath(carry)) {
            // newisa.extra_rc++ overflowed
            if (variant != RRVariant::Full) {
                ClearExclusive(&isa.bits);
                return rootRetain_overflow(tryRetain);
            }
            // Leave half of the retain counts inline and
            // prepare to copy the other half to the side table.
            if (!tryRetain && !sideTableLocked) sidetable_lock();
            sideTableLocked = true;
            transcribeToSideTable = true;
            // Keep half of the reference count inline
            newisa.extra_rc = RC_HALF;
            // Mark that the side table holds part of the count
            newisa.has_sidetable_rc = true;
        }
    } while (slowpath(!StoreExclusive(&isa.bits, &oldisa.bits, newisa.bits)));

    if (variant == RRVariant::Full) {
        if (slowpath(transcribeToSideTable)) {
            // Copy the other half of the retain counts to the side table
            sidetable_addExtraRC_nolock(RC_HALF);
        }

        if (slowpath(!tryRetain && sideTableLocked)) sidetable_unlock();
    } else {
        ASSERT(!transcribeToSideTable);
        ASSERT(!sideTableLocked);
    }

    return (id)this;
}
  • A TaggedPointer does no reference counting and just returns itself.
  • A non-nonpointer isa performs the reference-count operation through sidetable_retain.
  • For a nonpointer isa:
    • If the object is deallocating, no reference-count operation is performed.
    • RC_ONE is a 1 shifted to the extra_rc position, so adding it performs extra_rc++ (the isa bit layout is sketched after this list).
    • If extra_rc is full, extra_rc keeps half of the reference count, has_sidetable_rc is set to true, and the other half of the reference count goes into the sidetable via sidetable_addExtraRC_nolock.
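
For reference, extra_rc and has_sidetable_rc are bit fields of the nonpointer isa. Below is a plain-struct sketch of the arm64 layout in objc4-818.2 (see ISA_BITFIELD in isa.h; the field widths differ on x86_64), not the actual macro:

#include <stdint.h>

struct nonpointer_isa_arm64 {
    uintptr_t nonpointer        : 1;  // 1 = the isa also stores the state below
    uintptr_t has_assoc         : 1;  // object has associated objects
    uintptr_t has_cxx_dtor      : 1;  // object has a C++ destructor
    uintptr_t shiftcls          : 33; // the class pointer
    uintptr_t magic             : 6;  // marks an initialized object
    uintptr_t weakly_referenced : 1;  // object is weakly referenced
    uintptr_t unused            : 1;
    uintptr_t has_sidetable_rc  : 1;  // part of the retain count lives in the side table
    uintptr_t extra_rc          : 19; // the inline count that rootRetain increments
};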

Take a look at the sidetable_retain function:

id
objc_object::sidetable_retain(bool locked)
{
#if SUPPORT_NONPOINTER_ISA
    ASSERT(!isa.nonpointer);
#endif
    // Get the object's SideTable
    SideTable& table = SideTables()[this];

    if (!locked) table.lock();
    // Get the object's slot in the reference count table and add one unit
    size_t& refcntStorage = table.refcnts[this];
    if (! (refcntStorage & SIDE_TABLE_RC_PINNED)) {
        refcntStorage += SIDE_TABLE_RC_ONE;
    }
    table.unlock();

    return (id)this;
}
  • Gets the SideTable the object belongs to.
  • Gets the reference count table and adds SIDE_TABLE_RC_ONE to the object's count (the layout of the count word is shown below).
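
The count is bumped by SIDE_TABLE_RC_ONE rather than 1 because the low bits of the side-table word are flags. The definitions from NSObject.mm (objc4-818.2):

#define SIDE_TABLE_WEAKLY_REFERENCED (1UL<<0)             // object is weakly referenced
#define SIDE_TABLE_DEALLOCATING      (1UL<<1)             // object is deallocating
#define SIDE_TABLE_RC_ONE            (1UL<<2)             // one unit of retain count
#define SIDE_TABLE_RC_PINNED         (1UL<<(WORD_BITS-1)) // count overflowed and is pinned

#define SIDE_TABLE_RC_SHIFT 2
#define SIDE_TABLE_FLAG_MASK (SIDE_TABLE_RC_ONE-1)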

sidetable_addExtraRC_nolock

bool
objc_object::sidetable_addExtraRC_nolock(size_t delta_rc)
{
    ASSERT(isa.nonpointer);
    // Get the object's SideTable
    SideTable& table = SideTables()[this];

    // Get the object's slot in the reference count table
    size_t& refcntStorage = table.refcnts[this];
    size_t oldRefcnt = refcntStorage;
    // isa-side bits should not be set here
    ASSERT((oldRefcnt & SIDE_TABLE_DEALLOCATING) == 0);
    ASSERT((oldRefcnt & SIDE_TABLE_WEAKLY_REFERENCED) == 0);

    if (oldRefcnt & SIDE_TABLE_RC_PINNED) return true;

    uintptr_t carry;
    // Add delta_rc, shifted past the flag bits
    size_t newRefcnt =
        addc(oldRefcnt, delta_rc << SIDE_TABLE_RC_SHIFT, 0, &carry);
    if (carry) {
        // The side-table count overflowed: pin it at the maximum
        refcntStorage = SIDE_TABLE_RC_PINNED | (oldRefcnt & SIDE_TABLE_FLAG_MASK);
        return true;
    }
    else {
        refcntStorage = newRefcnt;
        return false;
    }
}
  • Gets the SideTable the object belongs to.
  • Gets the reference count table and adds delta_rc (shifted past the flag bits) to the object's count; if the count overflows, it is pinned at SIDE_TABLE_RC_PINNED.

Reading this far, you may wonder: when extra_rc runs out of space, why split the reference count 50/50 between extra_rc and the hash table?

  • Here's why. extra_rc lives in the isa, so reading it and updating the count there is cheap. The hash table is expensive: you first fetch the SideTable, then the reference count table inside it, and every operation must also lock and unlock the table, which wastes performance.
  • So why keep half instead of moving it all over? If the whole count were moved, a later -- operation would have to go back to the SideTable, which also wastes performance. By leaving half in each place, both ++ and -- still find reference counts to operate on in extra_rc (a worked example follows).
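
A quick worked example with the arm64 numbers (extra_rc is 19 bits wide, so RC_HALF == 1 << 18); this is a model of the bookkeeping, not runtime code:

#include <stdint.h>
#include <stdio.h>

#define EXTRA_RC_BITS 19                          // arm64, objc4-818.2
#define RC_HALF (1ULL << (EXTRA_RC_BITS - 1))

int main(void) {
    uint64_t extra_rc = (1ULL << EXTRA_RC_BITS) - 1; // inline count is full
    uint64_t side_rc  = 0;

    // The next retain would carry out of the 19-bit field, so rootRetain
    // keeps RC_HALF inline and hands RC_HALF to the side table:
    extra_rc = RC_HALF;      // newisa.extra_rc = RC_HALF
    side_rc += RC_HALF;      // sidetable_addExtraRC_nolock(RC_HALF)

    // From here, both ++ and -- hit only the cheap inline field again.
    printf("inline = %llu, side table = %llu\n",
           (unsigned long long)extra_rc, (unsigned long long)side_rc);
    return 0;
}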

3.2 release

Look at the release function in the source code:

inline void
objc_object::release()
{
    ASSERT(!isTaggedPointer());
    rootRelease(true, RRVariant::FastOrMsgSend);
}

Look at the rootRelease function:

ALWAYS_INLINE bool
objc_object::rootRelease(bool performDealloc, objc_object::RRVariant variant)
{
    // TaggedPointer: no reference counting
    if (slowpath(isTaggedPointer())) return false;

    bool sideTableLocked = false;

    isa_t newisa, oldisa;

    oldisa = LoadExclusive(&isa.bits);

retry:
    do {
        newisa = oldisa;

        // Not a nonpointer isa: release through the side table
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            return sidetable_release(sideTableLocked, performDealloc);
        }
        // Already deallocating: nothing to release
        if (slowpath(newisa.isDeallocating())) {
            ClearExclusive(&isa.bits);
            if (sideTableLocked) {
                ASSERT(variant == RRVariant::Full);
                sidetable_unlock();
            }
            return false;
        }

        // don't check newisa.fast_rr; we already called any RR overrides
        uintptr_t carry;
        // Subtract RC_ONE at the extra_rc position: extra_rc--
        newisa.bits = subc(newisa.bits, RC_ONE, 0, &carry);
        // carry set: extra_rc underflowed
        if (slowpath(carry)) {
            // don't ClearExclusive()
            goto underflow;
        }
    } while (slowpath(!StoreReleaseExclusive(&isa.bits, &oldisa.bits, newisa.bits)));
    ...
 underflow:
    // newisa.extra_rc-- underflowed: borrow from side table or deallocate

    // abandon newisa to undo the decrement
    newisa = oldisa;

    // Is part of the retain count stored in the side table?
    if (slowpath(newisa.has_sidetable_rc)) {
        if (variant != RRVariant::Full) {
            ClearExclusive(&isa.bits);
            return rootRelease_underflow(performDealloc);
        }

        // Transfer retain count from side table to inline storage.

        if (!sideTableLocked) {
            ClearExclusive(&isa.bits);
            sidetable_lock();
            sideTableLocked = true;
            // Need to start over to avoid a race against
            // the nonpointer -> raw pointer transition.
            oldisa = LoadExclusive(&isa.bits);
            goto retry;
        }

        // Borrow RC_HALF of the retain count from the side table
        auto borrow = sidetable_subExtraRC_nolock(RC_HALF);

        // we'll clear the side table if no refcounts remain there
        bool emptySideTable = borrow.remaining == 0;

        if (borrow.borrowed > 0) {
            // Side table retain count decreased.
            // Try to add them to the inline count.
            bool didTransitionToDeallocating = false;
            // redo the original decrement too
            newisa.extra_rc = borrow.borrowed - 1;
            newisa.has_sidetable_rc = !emptySideTable;

            bool stored = StoreReleaseExclusive(&isa.bits, &oldisa.bits, newisa.bits);
            ...
        }
        else {
            // Side table is empty after all. Fall-through to the dealloc path.
        }
    }

deallocate:
    // Really deallocate.
    if (slowpath(sideTableLocked)) sidetable_unlock();

    if (performDealloc) {
        // Send the dealloc message via objc_msgSend
        ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(dealloc));
    }
    return true;
}
  • A TaggedPointer does no reference counting and returns directly.
  • A non-nonpointer isa performs the reference-count operation through sidetable_release.
  • For a nonpointer isa:
    • If the object is already deallocating, no reference-count operation is performed.
    • RC_ONE is subtracted at the extra_rc position, performing extra_rc--.
    • If extra_rc underflows, jump to underflow:
      • If has_sidetable_rc is true, borrow half of the reference count from the sidetable and store (half - 1) in extra_rc (see the sketch after this list).
      • Otherwise, send the dealloc message via objc_msgSend.
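
A numeric model of the underflow borrow (RC_HALF == 1 << 18 on arm64; a sketch, not runtime code):

#include <stdint.h>
#include <stdio.h>

#define RC_HALF (1ULL << 18)

int main(void) {
    // Inline count is empty, but the side table still holds a share.
    uint64_t side_rc = RC_HALF;

    // release underflowed, so borrow up to RC_HALF from the side table:
    uint64_t borrowed = side_rc < RC_HALF ? side_rc : RC_HALF;
    side_rc -= borrowed;

    // "redo the original decrement too": extra_rc = borrowed - 1
    uint64_t extra_rc = borrowed - 1;

    printf("extra_rc = %llu, side = %llu, has_sidetable_rc = %d\n",
           (unsigned long long)extra_rc, (unsigned long long)side_rc,
           side_rc != 0);
    return 0;
}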

4. SideTables structure analysis

Take a look at the SideTables definition:

static StripedMap<SideTable>& SideTables() {
    return SideTablesMap.get();
}

template<typename T> class StripedMap {
#if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
    enum { StripeCount = 8 };
#else
    enum { StripeCount = 64 };
#endif
    ...
};
  • SideTables is essentially a hash table wrapped in a StripedMap; each element is a SideTable. On a real device (TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR) it holds at most 8 tables; everywhere else, at most 64 (the stripe-picking hash is shown below).
  • Why multiple SideTables? If all objects shared one table, every operation would have to lock and unlock that single table, which would be a high performance cost.
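
How an object lands in one of those 8 / 64 tables: StripedMap hashes the object's address. From StripedMap in objc-private.h (objc4-818.2):

static unsigned int indexForPointer(const void *p) {
    uintptr_t addr = reinterpret_cast<uintptr_t>(p);
    return ((addr >> 4) ^ (addr >> 9)) % StripeCount;
}

// so SideTables()[this] is essentially array[indexForPointer(this)].value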

Let’s look at the definition of SideTable:

struct SideTable {
    spinlock_t slock;
    RefcountMap refcnts;
    weak_table_t weak_table;
    ...
};
  • slock is a spin lock, refcnts is the reference count table, and weak_table is the weak reference table.

The following structure can be obtained from the above source code:

5. rootRetainCount

Look at the rootRetainCount function:

inline uintptr_t
objc_object::rootRetainCount()
{
    if (isTaggedPointer()) return (uintptr_t)this;

    sidetable_lock();
    isa_t bits = __c11_atomic_load((_Atomic uintptr_t *)&isa.bits, __ATOMIC_RELAXED);
    if (bits.nonpointer) {
        // Read the count stored inline in extra_rc
        uintptr_t rc = bits.extra_rc;
        if (bits.has_sidetable_rc) {
            // has_sidetable_rc is set: add the part stored in the side table
            rc += sidetable_getExtraRC_nolock();
        }
        sidetable_unlock();
        return rc;
    }

    sidetable_unlock();
    return sidetable_retainCount();
}
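
-retainCount cannot be called under ARC, but CFGetRetainCount reads the same state that rootRetainCount reports: extra_rc, plus the side table when has_sidetable_rc is set. A minimal sketch (the value is only a debugging hint):

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSObject *objc = [[NSObject alloc] init];
        // One strong reference, so this prints 1.
        NSLog(@"retain count: %ld", CFGetRetainCount((__bridge CFTypeRef)objc));
    }
    return 0;
}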

6. Weak reference table

Here’s how we write weak references:

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSObject *objc = [[NSObject alloc] init];
        __weak typeof(id) weakObjc = objc;
    }
    return 0;
}

Through assembly debugging, we can see that a weak reference calls the objc_initWeak function.

6.1 Weak reference source code analysis

Enter the objc_initWeak function:

id
objc_initWeak(id *location, id newObj)
{
    if (!newObj) {
        *location = nil;
        return nil;
    }

    return storeWeak<DontHaveOld, DoHaveNew, DoCrashIfDeallocating>
        (location, (objc_object*)newObj);
}

The real implementation of objc_initWeak is a call to storeWeak. In fact, objc_destroyWeak also calls storeWeak internally, just with different template arguments.

Enter the objc_destroyWeak function:

void
objc_destroyWeak(id *location)
{
    // Destroying a weak reference also goes through storeWeak,
    // just with different template arguments
    (void)storeWeak<DoHaveOld, DontHaveNew, DontCrashIfDeallocating>
        (location, nil);
}

Enter the storeWeak function:

enum CrashIfDeallocating {
    DontCrashIfDeallocating = false, DoCrashIfDeallocating = true
};
template <HaveOld haveOld, HaveNew haveNew,
          enum CrashIfDeallocating crashIfDeallocating>
static id
storeWeak(id *location, objc_object *newObj)
{
    ASSERT(haveOld || haveNew);
    if (!haveNew) ASSERT(newObj == nil);

    Class previouslyInitializedClass = nil;
    id oldObj;
    SideTable *oldTable;
    SideTable *newTable;

 retry:
    // haveOld is true when an old weak reference needs to be destroyed
    if (haveOld) {
        oldObj = *location;
        oldTable = &SideTables()[oldObj];
    } else {
        oldTable = nil;
    }
    // haveNew is true when a new weak reference needs to be stored
    if (haveNew) {
        newTable = &SideTables()[newObj];
    } else {
        newTable = nil;
    }

    SideTable::lockTwo<haveOld, haveNew>(oldTable, newTable);

    if (haveOld && *location != oldObj) {
        SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);
        goto retry;
    }

    // Ensure the new object's class is initialized, to prevent a
    // deadlock between +initialize and the weak reference machinery
    if (haveNew && newObj) {
        Class cls = newObj->getIsa();
        if (cls != previouslyInitializedClass &&
            !((objc_class *)cls)->isInitialized())
        {
            SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);
            class_initialize(cls, (id)newObj);
            previouslyInitializedClass = cls;
            goto retry;
        }
    }

    // Clean up the old weak reference, if any
    if (haveOld) {
        weak_unregister_no_lock(&oldTable->weak_table, oldObj, location);
    }

    // Register the new weak reference, if any;
    // weak_register_no_lock is the key function here
    if (haveNew) {
        newObj = (objc_object *)
            weak_register_no_lock(&newTable->weak_table, (id)newObj, location,
                                  crashIfDeallocating ? CrashIfDeallocating : ReturnNilIfDeallocating);

        // Set the weakly-referenced bit in the isa
        if (!newObj->isTaggedPointerOrNil()) {
            newObj->setWeaklyReferenced_nolock();
        }

        *location = (id)newObj;
    }
    else {
        // No new value. The storage is not changed.
    }

    SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);

    callSetWeaklyReferenced((id)newObj);

    return (id)newObj;
}

The key function in storeWeak is weak_register_no_lock:

id
weak_register_no_lock(weak_table_t *weak_table, id referent_id, id *referrer_id,
                      WeakRegisterDeallocatingOptions deallocatingOptions)
{
    // referent_id: (id)newObj    referrer_id: location
    objc_object *referent = (objc_object *)referent_id;
    objc_object **referrer = (objc_object **)referrer_id;
    ...
    weak_entry_t *entry;
    // Is there already a weak_entry_t for this object in weak_table?
    if ((entry = weak_entry_for_referent(weak_table, referent))) {
        // Yes: append the weak pointer's address to that entry
        append_referrer(entry, referrer);
    }
    else {
        // No: create a new weak_entry_t for this object
        weak_entry_t new_entry(referent, referrer);
        // Expand weak_table if it is getting full
        weak_grow_maybe(weak_table);
        // Insert the new weak_entry_t into weak_table
        weak_entry_insert(weak_table, &new_entry);
    }

    return referent_id;
}

Check the weak_entry_for_referent function:

static weak_entry_t *
weak_entry_for_referent(weak_table_t *weak_table, objc_object *referent)
{
    ASSERT(referent);

    weak_entry_t *weak_entries = weak_table->weak_entries;

    if (!weak_entries) return nil;

    // Hash the object's address to an index, then probe linearly
    size_t begin = hash_pointer(referent) & weak_table->mask;
    size_t index = begin;
    size_t hash_displacement = 0;
    while (weak_table->weak_entries[index].referent != referent) {
        index = (index+1) & weak_table->mask;
        if (index == begin) bad_weak_table(weak_table->weak_entries);
        hash_displacement++;
        if (hash_displacement > weak_table->max_hash_displacement) {
            return nil;
        }
    }

    // Found: return the object's weak_entry_t
    return &weak_table->weak_entries[index];
}

6.2 Weak reference table structure analysis

View the weak_table_t structure:

struct weak_table_t {
    weak_entry_t *weak_entries;
    size_t    num_entries;
    uintptr_t mask;
    uintptr_t max_hash_displacement;
};

weak_table_t stores a list of weak_entry_t in weak_entries. Check the weak_entry_t structure:

struct weak_entry_t {
    // The address of the object allocated on the heap
    DisguisedPtr<objc_object> referent;
    union {
        struct {
            // Out-of-line storage: a hash table of weak pointer addresses
            weak_referrer_t *referrers;
            uintptr_t        out_of_line_ness : 2;
            uintptr_t        num_refs : PTR_MINUS_2;
            uintptr_t        mask;
            uintptr_t        max_hash_displacement;
        };
        struct {
            // out_of_line_ness field is low bits of inline_referrers[1]
            weak_referrer_t  inline_referrers[WEAK_INLINE_COUNT];
        };
    };
};
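
An entry keeps up to WEAK_INLINE_COUNT (4) weak pointer addresses in the inline array, then switches to the out-of-line hash table. The discriminator, from objc-weak.h (objc4-818.2):

#define WEAK_INLINE_COUNT 4

// out_of_line_ness overlaps the low bits of inline_referrers[1]; a real
// pointer stored there is pointer-aligned, so its low bits can never
// equal REFERRERS_OUT_OF_LINE (0b10) by accident.
#define REFERRERS_OUT_OF_LINE 2

bool out_of_line() {
    return (out_of_line_ness == REFERRERS_OUT_OF_LINE);
}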

Weak reference table structure diagram:

6.3 summary

  • In SideTables, first get the SideTable where obj lives.
  • From the SideTable, get the weak reference table weak_table_t.
  • In weak_table_t, look for the weak_entry_t corresponding to obj:
    • If it exists, add the weak pointer's address directly to the weak_entry_t.
    • If it does not exist:
      • Create a new weak_entry_t and put obj and the weak pointer's address into it.
      • Check whether the table is more than three-quarters full and expand it if necessary.
      • Insert the new weak_entry_t into weak_table (a usage sketch follows).
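
The observable effect of this bookkeeping, as a minimal sketch: once the object deallocates, weak_clear_no_lock walks the entry and nils every registered weak pointer.

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        __weak NSObject *weakObjc = nil;
        @autoreleasepool {
            NSObject *objc = [[NSObject alloc] init];
            weakObjc = objc;        // objc_storeWeak registers &weakObjc for objc
            NSLog(@"%@", weakObjc); // prints the object
        }                           // objc released -> dealloc -> weak_clear_no_lock
        NSLog(@"%@", weakObjc);     // prints (null): the pointer was zeroed
    }
    return 0;
}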

Function call flowchart:

7. dealloc

7.1 dealloc source code analysis

Check the dealloc:

- (void)dealloc {
    _objc_rootDealloc(self);
}

Check the _objc_rootDealloc:

void
_objc_rootDealloc(id obj)
{
    ASSERT(obj);
    obj->rootDealloc();
}

Check the rootDealloc:

inline void
objc_object::rootDealloc()
{
    // TaggedPointer: nothing to free, return directly
    if (isTaggedPointer()) return;  // fixme necessary?

    if (fastpath(isa.nonpointer                 &&  // isa is nonpointer
                 !isa.weakly_referenced         &&  // no weak references
                 !isa.has_assoc                 &&  // no associated objects
                 !isa.has_cxx_dtor              &&  // no C++ destructor
                 !isa.has_sidetable_rc))            // no extra retain count in the SideTable
    {
        // Fast path: free the memory directly
        assert(!sidetable_present());
        free(this);
    }
    else {
        // Otherwise call object_dispose for cleanup first
        object_dispose((id)this);
    }
}

Check the object_dispose:

id
object_dispose(id obj)
{
    if (!obj) return nil;

    objc_destructInstance(obj);
    free(obj);

    return nil;
}

Check the objc_destructInstance:

void *objc_destructInstance(id obj)
{
    if (obj) {
        // Read all of the flags at once for performance.
        bool cxx = obj->hasCxxDtor();
        bool assoc = obj->hasAssociatedObjects();

        // This order is important.
        // If there is a C++ destructor, call object_cxxDestruct
        if (cxx) object_cxxDestruct(obj);
        // If there are associated objects, remove them
        if (assoc) _object_remove_assocations(obj);
        // Clear weak references and side-table state
        obj->clearDeallocating();
    }

    return obj;
}

Check the clearDeallocating:

inline void
objc_object::clearDeallocating()
{
    // isa is not nonpointer
    if (slowpath(!isa.nonpointer)) {
        // Slow path for raw pointer isa: call sidetable_clearDeallocating
        sidetable_clearDeallocating();
    }
    // isa is nonpointer, and the object is weakly referenced
    // or stores part of its retain count in the SideTable
    else if (slowpath(isa.weakly_referenced || isa.has_sidetable_rc)) {
        // Call clearDeallocating_slow
        clearDeallocating_slow();
    }
}

Check the sidetable_clearDeallocating:

void
objc_object::sidetable_clearDeallocating()
{
    // Get the object's SideTable
    SideTable& table = SideTables()[this];

    table.lock();
    // Find the object's slot in the reference count table refcnts
    RefcountMap::iterator it = table.refcnts.find(this);
    if (it != table.refcnts.end()) {
        if (it->second & SIDE_TABLE_WEAKLY_REFERENCED) {
            // Weakly referenced: weak_clear_no_lock sets the weak
            // pointers to this object to nil
            weak_clear_no_lock(&table.weak_table, (id)this);
        }
        // Erase the object's entry from the reference count table
        table.refcnts.erase(it);
    }
    table.unlock();
}

Check the clearDeallocating_slow:

NEVER_INLINE void
objc_object::clearDeallocating_slow()
{
    ASSERT(isa.nonpointer && (isa.weakly_referenced || isa.has_sidetable_rc));

    // Get the object's SideTable
    SideTable& table = SideTables()[this];
    table.lock();
    if (isa.weakly_referenced) {
        // Weakly referenced: weak_clear_no_lock sets the weak
        // pointers to this object to nil
        weak_clear_no_lock(&table.weak_table, (id)this);
    }
    if (isa.has_sidetable_rc) {
        // Erase the object's reference count from the reference count table
        table.refcnts.erase(this);
    }
    table.unlock();
}

7.2 summary

  • Judge five conditions:
    • 1. isa is nonpointer; 2. no weak references; 3. no associated objects; 4. no C++ destructor; 5. no reference count stored in the SideTable.
    • If all five hold, call free to destroy the object directly; otherwise call object_dispose to do some processing before releasing the object.
  • Call object_dispose for the related processing:
    • 1. If there is a C++ destructor, call object_cxxDestruct;
    • 2. If there are associated objects, call _object_remove_assocations to remove them;
    • 3. Call weak_clear_no_lock to set weak pointers to this object to nil;
    • 4. Call table.refcnts.erase to erase the object's reference count from the reference count table (if isa is nonpointer, check isa.has_sidetable_rc first).
  • Call free to destroy the object (a minimal demo follows).
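
To close, a small demo of the dealloc path (Person is a hypothetical class; under ARC you never write [super dealloc], the compiler chains to super, ending in _objc_rootDealloc):

#import <Foundation/Foundation.h>

@interface Person : NSObject
@end

@implementation Person
- (void)dealloc {
    // NSObject's -dealloc ends up in _objc_rootDealloc -> rootDealloc.
    NSLog(@"Person dealloc");
}
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        Person *p = [[Person alloc] init];
        p = nil; // last strong reference gone: release -> dealloc path above
    }
    return 0;
}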