Exploring the basics of Objective-C: a summary of reference counting

This article analyzes, at the source level, how reference counts change during memory management, covering the alloc/retain/release/dealloc and retainCount implementations.

Main Contents:

1. Reference counting process

2. Weak reference table structure

3. SideTable hash table

Analysis of retain

retain

A global search of the objc source for retain() finds:

Source:

```objc
// Equivalent to calling [this retain], with shortcuts if there is no override
inline id
objc_object::retain()
{
    ASSERT(!isTaggedPointer());

    // If retain is not overridden, take the fast path
    if (fastpath(!ISA()->hasCustomRR())) {
        return rootRetain();
    }

    // Otherwise dispatch the overridden retain through objc_msgSend
    return ((id(*)(objc_object *, SEL))objc_msgSend)(this, @selector(retain));
}
```

Description:

  • If we override retain, a message send invokes the override
  • If not, the runtime calls the built-in rootRetain function directly
  • This is why an overridden retain usually needs to call [super retain]

rootRetain

Following retain -> rootRetain() -> rootRetain(bool tryRetain, bool handleOverflow), we find the rootRetain function to analyze:

Source:

```objc
/* Performs the actual retain operation. If extra_rc overflows (carry is set),
   half of the reference count is moved into the hash table. */
ALWAYS_INLINE id
objc_object::rootRetain(bool tryRetain, bool handleOverflow)
{
    if (isTaggedPointer()) return (id)this;

    bool sideTableLocked = false;
    bool transcribeToSideTable = false;

    // isa_t is used because isa stores the reference count
    isa_t oldisa;
    isa_t newisa;

    do {
        transcribeToSideTable = false;
        oldisa = LoadExclusive(&isa.bits);
        newisa = oldisa;
        // Check whether this is a nonpointer isa
        if (slowpath(!newisa.nonpointer)) {
            // Not nonpointer: use the side table
            ClearExclusive(&isa.bits);
            // Metaclasses do not take part in reference counting
            if (rawISA()->isMetaClass()) return (id)this;
            if (!tryRetain && sideTableLocked) sidetable_unlock();
            if (tryRetain) return sidetable_tryRetain() ? (id)this : nil;
            else return sidetable_retain();
        }
        // don't check newisa.fast_rr; we already called any RR overrides
        // A deallocating object cannot be retained
        if (slowpath(tryRetain && newisa.deallocating)) {
            ClearExclusive(&isa.bits);
            if (!tryRetain && sideTableLocked) sidetable_unlock();
            return nil;
        }
        uintptr_t carry;
        // Add 1 to extra_rc in bits (RC_ONE is 1ULL<<45 on arm64,
        // where the count is stored); carry reports overflow
        newisa.bits = addc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc++

        if (slowpath(carry)) {
            // newisa.extra_rc++ overflowed: spill into the hash table
            if (!handleOverflow) {
                ClearExclusive(&isa.bits);
                return rootRetain_overflow(tryRetain);
            }
            // Leave half of the retain counts inline and
            // prepare to copy the other half to the side table.
            if (!tryRetain && !sideTableLocked) sidetable_lock();
            sideTableLocked = true;
            transcribeToSideTable = true;
            newisa.extra_rc = RC_HALF;
            // Mark in isa that part of the count now lives in the hash table
            newisa.has_sidetable_rc = true;
        }
    } while (slowpath(!StoreExclusive(&isa.bits, oldisa.bits, newisa.bits)));

    if (slowpath(transcribeToSideTable)) {
        // Copy the other half of the retain counts to the side table.
        sidetable_addExtraRC_nolock(RC_HALF);
    }

    if (slowpath(!tryRetain && sideTableLocked)) sidetable_unlock();
    return (id)this;
}
```

Code structure:

  1. If the object is a tagged pointer, do not retain it; return it directly
  2. If the isa is not nonpointer, operate on the hash table directly
    • If it is a metaclass, return directly; metaclasses do not perform reference counting (as analyzed earlier, a metaclass does not use nonpointer isa)
    • If it is not a metaclass, retain through the SideTable
  3. If the object is deallocating, return nil without doing anything
  4. Add 1 to extra_rc in isa; the carry flag reports whether the field overflowed
  5. If extra_rc was already at its maximum, carry is set
  6. When carry is set, half of the count (RC_HALF) is kept in extra_rc and the other half is moved to the SideTable's reference count table; has_sidetable_rc in isa is set to YES, indicating that part of the count is stored in the hash table
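The overflow behavior in steps 4-6 can be sketched in a small self-contained model. This is a simplified illustration, not the real runtime types: `FakeObject`, `side_table`, and `model_retain` are hypothetical names, and the constants mimic arm64's 19-bit extra_rc field.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Hypothetical model of rootRetain's overflow handling: the inline counter
// mimics arm64's 19-bit extra_rc; on overflow, half of the count stays
// inline and half moves to a side-table map.
constexpr uint64_t RC_BITS = 19;
constexpr uint64_t RC_MAX  = (1ULL << RC_BITS) - 1;   // extra_rc is full here
constexpr uint64_t RC_HALF = 1ULL << (RC_BITS - 1);

struct FakeObject {
    uint64_t extra_rc = 0;          // inline count stored in isa
    bool has_sidetable_rc = false;  // mirrors the isa flag
};

std::unordered_map<FakeObject*, uint64_t> side_table;  // refcnts stand-in

void model_retain(FakeObject *obj) {
    if (obj->extra_rc == RC_MAX) {   // the +1 would set carry
        obj->extra_rc = RC_HALF;     // keep half inline...
        side_table[obj] += RC_HALF;  // ...and move the other half out
        obj->has_sidetable_rc = true;
    } else {
        obj->extra_rc += 1;          // fast path: extra_rc++
    }
}
```

Note that RC_HALF inline plus RC_HALF in the table equals RC_MAX + 1, so no count is lost by the split.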

Description:

  • If nonpointer isa is not used, the reference count lives only in the hash table; this is easy to understand, since a raw isa stores nothing but class information and has no room for a count
  • A nonpointer isa has two relevant fields: extra_rc, which stores the count, and has_sidetable_rc, which records whether the SideTable holds part of it
  • As long as extra_rc is not full, the count is stored only in extra_rc (19 bits on arm64)
  • When extra_rc overflows, half of the count is moved into the SideTable and has_sidetable_rc in isa is set to YES (has_sidetable_rc is a single bit)

Retain process:

  • Start by adding 1 to extra_rc in isa
  • If extra_rc overflows, half of the count is moved into the reference count table in the SideTable; only half remains in extra_rc
  • If the object is already deallocating, return nil without retaining

Understanding SideTable:

Source:

```objc
// RefcountMap disguises its pointers because we
// don't want the table to act as a root for `leaks`.
// Entries are automatically removed when the count reaches 0.
typedef objc::DenseMap<DisguisedPtr<objc_object>,size_t,RefcountMapValuePurgeable> RefcountMap;

// Template parameters.
enum HaveOld { DontHaveOld = false, DoHaveOld = true };
enum HaveNew { DontHaveNew = false, DoHaveNew = true };

struct SideTable {
    spinlock_t slock;        // spin lock
    RefcountMap refcnts;     // reference count table: used only when isa
                             // optimization is off, or when the inline
                             // isa_t count overflows under isa optimization
    weak_table_t weak_table; // weak reference table

    SideTable() {
        memset(&weak_table, 0, sizeof(weak_table));
    }

    ~SideTable() {
        _objc_fatal("Do not delete SideTable.");
    }

    void lock() { slock.lock(); }
    void unlock() { slock.unlock(); }
    void forceReset() { slock.forceReset(); }

    // Address-ordered lock discipline for a pair of side tables.
    template<HaveOld, HaveNew>
    static void lockTwo(SideTable *lock1, SideTable *lock2);
    template<HaveOld, HaveNew>
    static void unlockTwo(SideTable *lock1, SideTable *lock2);
};
```

Description:

  • A SideTable stores a reference count table and a weak reference table
  • The reference count table is used only when isa optimization is disabled, or when the inline isa_t count overflows under isa optimization
  • In the reference count table, objects are the keys and reference counts are the values
  • When an object is weakly referenced, the weak reference is recorded in the weak reference table
  • Access is guarded by a spin lock, implemented as spinlock_t
  • A spin lock busy-waits instead of sleeping, which is efficient for short critical sections
  • Locking provides thread safety and prevents torn reads of the reference count
  • A SideTable must never be deleted; it is created and owned by the system
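The layout described above can be sketched as a minimal stand-in. All names here are illustrative and the types are simplified (a `std::mutex` in place of `spinlock_t`, plain maps in place of `RefcountMap` and `weak_table_t`); this is not the objc4 declaration.

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of a SideTable: one lock, one refcount map,
// one weak table, with lock-guarded retain/release on the map.
struct MiniSideTable {
    std::mutex slock;                                  // spinlock_t stand-in
    std::unordered_map<void*, size_t> refcnts;         // RefcountMap stand-in
    std::unordered_map<void*, std::vector<void**>> weak_table;  // weak_table_t stand-in

    size_t retain(void *obj) {
        std::lock_guard<std::mutex> g(slock);          // thread safety
        return ++refcnts[obj];
    }

    size_t release(void *obj) {
        std::lock_guard<std::mutex> g(slock);
        auto it = refcnts.find(obj);
        if (it == refcnts.end()) return 0;
        size_t c = --it->second;
        if (c == 0) refcnts.erase(it);  // purge zero entries, like
                                        // RefcountMapValuePurgeable
        return c;
    }
};
```

The zero-entry purge mirrors the comment on RefcountMap: entries whose count reaches 0 are removed automatically.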

weak_table_t

Source:

```objc
/**
 * The global weak references table. Stores object ids as keys,
 * and weak_entry_t structs as their values.
 */
struct weak_table_t {
    weak_entry_t *weak_entries;      // dynamic array of weak entries
    size_t    num_entries;           // number of entries in use
    uintptr_t mask;                  // capacity - 1
    uintptr_t max_hash_displacement; // maximum hash collision displacement
};
```

Description:

  • weak_table_t is the global weak reference table, which stores many weak entries
  • In weak_entries, the object's address is the key and a weak_entry_t struct is the value

Conclusion:

  • The weak reference table lives inside the SideTable
  • isa has a weakly_referenced bit that records whether the object has any weak references
  • The weak reference table uses the object's address as the key and stores a dynamic array of weak references as the value
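The conclusion above can be illustrated with a toy model of the weak table. The names (`model_weak_register`, `model_weak_clear`) are hypothetical, and a plain map replaces the real open-addressed hash table; the point is only the key/value shape and what happens on dealloc.

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

// Hypothetical weak-table model: the object's address is the key, and the
// value is a dynamic array of weak reference locations. On deallocation
// every registered location is set to nullptr, which is why weak pointers
// become nil automatically.
std::unordered_map<void*, std::vector<void**>> weak_table_model;

void model_weak_register(void *obj, void **location) {
    *location = obj;                        // the weak var points at obj
    weak_table_model[obj].push_back(location);
}

void model_weak_clear(void *obj) {          // called during dealloc
    auto it = weak_table_model.find(obj);
    if (it == weak_table_model.end()) return;
    for (void **loc : it->second) *loc = nullptr;  // nil out each weak var
    weak_table_model.erase(it);
}
```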

release

Following objc_release() -> release() -> rootRelease() -> rootRelease(bool performDealloc, bool handleUnderflow), the release logic is found in rootRelease:

```objc
ALWAYS_INLINE bool
objc_object::rootRelease(bool performDealloc, bool handleUnderflow)
{
    if (isTaggedPointer()) return false;

    bool sideTableLocked = false;

    isa_t oldisa;
    isa_t newisa;

 retry:
    do {
        oldisa = LoadExclusive(&isa.bits);
        newisa = oldisa;
        if (slowpath(!newisa.nonpointer)) {
            // Not a nonpointer isa: release through the side table
            ClearExclusive(&isa.bits);
            if (rawISA()->isMetaClass()) return false;
            if (sideTableLocked) sidetable_unlock();
            return sidetable_release(performDealloc);
        }
        // don't check newisa.fast_rr; we already called any RR overrides
        uintptr_t carry;
        newisa.bits = subc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc--
        if (slowpath(carry)) {
            // don't ClearExclusive()
            goto underflow;
        }
    } while (slowpath(!StoreReleaseExclusive(&isa.bits,
                                             oldisa.bits, newisa.bits)));

    if (slowpath(sideTableLocked)) sidetable_unlock();
    return false;

 underflow:
    // newisa.extra_rc-- underflowed: borrow from side table or deallocate
    // extra_rc in isa is 0, so start fetching counts from the SideTable

    // abandon newisa to undo the decrement
    newisa = oldisa;

    if (slowpath(newisa.has_sidetable_rc)) {
        if (!handleUnderflow) {
            ClearExclusive(&isa.bits);
            return rootRelease_underflow(performDealloc);
        }

        // Transfer retain count from side table to inline storage.

        if (!sideTableLocked) {
            ClearExclusive(&isa.bits);
            sidetable_lock();
            sideTableLocked = true;
            // Need to start over to avoid a race against
            // the nonpointer -> raw pointer transition.
            goto retry;
        }

        // Try to remove some retain counts from the side table.
        size_t borrowed = sidetable_subExtraRC_nolock(RC_HALF);

        // To avoid races, has_sidetable_rc must remain set
        // even if the side table count is now zero.

        if (borrowed > 0) {
            // Side table retain count decreased.
            // Try to add them to the inline count.
            newisa.extra_rc = borrowed - 1;  // redo the original decrement too
            bool stored = StoreReleaseExclusive(&isa.bits,
                                                oldisa.bits, newisa.bits);
            if (!stored) {
                // Inline update failed.
                // Try it again right now. This prevents livelock on LL/SC
                // architectures where the side table access itself may have
                // dropped the reservation.
                isa_t oldisa2 = LoadExclusive(&isa.bits);
                isa_t newisa2 = oldisa2;
                if (newisa2.nonpointer) {
                    uintptr_t overflow;
                    newisa2.bits =
                        addc(newisa2.bits, RC_ONE * (borrowed-1), 0, &overflow);
                    if (!overflow) {
                        stored = StoreReleaseExclusive(&isa.bits, oldisa2.bits,
                                                       newisa2.bits);
                    }
                }
            }

            if (!stored) {
                // Inline update failed.
                // Put the retains back in the side table.
                sidetable_addExtraRC_nolock(borrowed);
                goto retry;
            }

            // Decrement successful after borrowing from side table.
            // This decrement cannot be the deallocating decrement - the side
            // table lock and has_sidetable_rc bit ensure that if everyone
            // else tried to -release while we worked, the last one would block.
            sidetable_unlock();
            return false;
        }
        else {
            // Side table is empty after all. Fall-through to the dealloc path.
        }
    }

    // Really deallocate.
    if (slowpath(newisa.deallocating)) {
        ClearExclusive(&isa.bits);
        if (sideTableLocked) sidetable_unlock();
        return overrelease_error();
        // does not actually return
    }
    newisa.deallocating = true;
    if (!StoreExclusive(&isa.bits, oldisa.bits, newisa.bits)) goto retry;

    if (slowpath(sideTableLocked)) sidetable_unlock();

    __c11_atomic_thread_fence(__ATOMIC_ACQUIRE);

    if (performDealloc) {
        ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(dealloc));
    }
    return true;
}
```

Description:

  • The process is the reverse of retain
  • First, subtract 1 from extra_rc in isa
  • If extra_rc is already 0, counts are borrowed from the SideTable
  • If the SideTable has no counts either, dealloc begins
  • If it does, up to half (RC_HALF) is moved into extra_rc and the pending -1 is applied there

The process of release is:

  • If the extra_rc count in isa is not 0, subtract 1
  • If it is 0, first borrow from the SideTable's reference count table: up to half is moved into extra_rc, and the -1 is applied there
  • If no count remains anywhere to absorb the decrement, dealloc begins
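The borrow-on-underflow path can be sketched in a self-contained model. As before, the names (`RObj`, `rc_side_table`, `model_release`) are hypothetical and the constant mimics arm64's RC_HALF; the real code additionally handles atomicity and races, which the sketch ignores.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Hypothetical model of rootRelease's underflow path: when the inline count
// hits zero but the side table still holds counts, borrow up to RC_HALF back
// into the inline counter and redo the -1; otherwise deallocate.
constexpr uint64_t RC_HALF = 1ULL << 18;

struct RObj {
    uint64_t extra_rc = 0;
    bool has_sidetable_rc = false;
    bool deallocating = false;
};

std::unordered_map<RObj*, uint64_t> rc_side_table;

// Returns true when the object should be deallocated.
bool model_release(RObj *obj) {
    if (obj->extra_rc > 0) {            // fast path: extra_rc--
        obj->extra_rc -= 1;
        return false;
    }
    if (obj->has_sidetable_rc) {        // underflow: borrow from the table
        uint64_t &side = rc_side_table[obj];
        uint64_t borrowed = side < RC_HALF ? side : RC_HALF;
        side -= borrowed;
        if (borrowed > 0) {
            obj->extra_rc = borrowed - 1;  // redo the pending decrement
            return false;
        }
    }
    obj->deallocating = true;           // count exhausted: deallocate
    return true;
}
```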

dealloc

In the source, dealloc -> _objc_rootDealloc -> rootDealloc; the real dealloc work happens in rootDealloc.

rootDealloc

Source:

```objc
inline void
objc_object::rootDealloc()
{
    if (isTaggedPointer()) return;  // fixme necessary?

    // Fast path: a nonpointer isa with no weak references, no associated
    // objects, no C++ destructor and no side table data can be freed directly
    if (fastpath(isa.nonpointer              &&
                 !isa.weakly_referenced      &&
                 !isa.has_assoc              &&
                 !isa.has_cxx_dtor           &&
                 !isa.has_sidetable_rc))
    {
        assert(!sidetable_present());
        free(this);
    }
    else {
        // Otherwise extra cleanup is required first
        object_dispose((id)this);
    }
}
```

Description:

  • Tagged pointers require no cleanup
  • If isa is nonpointer and the object has no weak references, associated objects, C++ destructor, or SideTable data, the memory can be freed directly
  • If isa is a raw pointer, or a nonpointer isa has any of those extras, they must be cleaned up first via object_dispose

object_dispose

Source:

```objc
id
object_dispose(id obj)
{
    if (!obj) return nil;

    objc_destructInstance(obj);  // first destroy the object's contents
    free(obj);                   // then free its memory

    return nil;
}
```

Description:

  • It simply calls objc_destructInstance to destroy the object's contents, then frees its memory

objc_destructInstance

Source:

```objc
/***********************************************************************
* objc_destructInstance
* Destroys an instance without freeing memory.
* Calls C++ destructors.
* Calls ARC ivar cleanup.
* Removes associative references.
* Returns `obj`. Does nothing if `obj` is nil.
**********************************************************************/
void *objc_destructInstance(id obj)
{
    if (obj) {
        // Read all of the flags at once for performance.
        bool cxx = obj->hasCxxDtor();
        bool assoc = obj->hasAssociatedObjects();

        // This order is important.
        if (cxx) object_cxxDestruct(obj);            // C++ destructor
        if (assoc) _object_remove_assocations(obj);  // associated objects
        obj->clearDeallocating();                    // weak refs + side table
    }

    return obj;
}
```

clearDeallocating

Source:

```objc
/* Clears has_sidetable_rc and extra_rc bookkeeping. */
inline void
objc_object::clearDeallocating()
{
    if (slowpath(!isa.nonpointer)) {
        // Slow path for raw pointer isa: clear the side table entries
        sidetable_clearDeallocating();
    }
    else if (slowpath(isa.weakly_referenced || isa.has_sidetable_rc)) {
        // Slow path for nonpointer isa with weak refs and/or side table data
        clearDeallocating_slow();
    }

    assert(!sidetable_present());
}
```

Description:

  • This stage only destroys the object's bookkeeping; the memory itself is freed afterwards
  • Destruction covers the C++ destructor, associated objects, weak references, and SideTable entries
  • The order matters: C++ destructor first, then associated objects, then weak references and the SideTable

Conclusion:

  • Before dealloc frees the memory, the C++ destructor, associated objects, weak references, and SideTable entries must be cleared; flags in isa determine which steps are needed
  • The order of removal is: C++ destructor, associated objects, weak references, SideTable
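The cleanup sequence can be captured in a tiny trace. The step names here mirror the runtime functions discussed above, but the model itself (`destroy_log`, `model_destructInstance`) is illustrative only: it records order, nothing more.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical trace of the destruction order: C++ destructor, then
// associated objects, then clearDeallocating (weak refs + side table).
std::vector<std::string> destroy_log;

void model_destructInstance(bool hasCxxDtor, bool hasAssoc) {
    // This order is important, as the runtime comment says.
    if (hasCxxDtor) destroy_log.push_back("object_cxxDestruct");
    if (hasAssoc)   destroy_log.push_back("_object_remove_assocations");
    destroy_log.push_back("clearDeallocating");
}
```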

retainCount

Source:

```objc
inline uintptr_t
objc_object::rootRetainCount()
{
    if (isTaggedPointer()) return (uintptr_t)this;

    sidetable_lock();
    isa_t bits = __c11_atomic_load((_Atomic uintptr_t *)&isa.bits,
                                   __ATOMIC_RELAXED);
    if (bits.nonpointer) {
        uintptr_t rc = bits.extra_rc;             // inline count
        if (bits.has_sidetable_rc) {
            rc += sidetable_getExtraRC_nolock();  // plus the side table count
        }
        sidetable_unlock();
        return rc;
    }

    sidetable_unlock();
    return sidetable_retainCount();
}
```

Description:

  • The count in extra_rc plus the count in the SideTable adds up to the object's reference count
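That sum can be written down directly. This is a hypothetical one-liner model of rootRetainCount (all names illustrative), keeping only the arithmetic:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical model of rootRetainCount: the visible retain count is the
// inline extra_rc, plus the side table's share when has_sidetable_rc is set.
uint64_t model_retainCount(uint64_t extra_rc, bool has_sidetable_rc,
                           uint64_t sidetable_rc) {
    uint64_t rc = extra_rc;                    // inline count from isa
    if (has_sidetable_rc) rc += sidetable_rc;  // plus the overflow share
    return rc;
}
```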

An interview question: what does this print?

```objc
- (void)test2 {
    NSLog(@"%ld", CFGetRetainCount((__bridge CFTypeRef)[NSObject alloc]));
}
```

Description:

  • Running it prints 1.
  • Even though the object is not assigned to any variable, its reference count is already 1.

Initialize isa source code

```objc
inline void
objc_object::initIsa(Class cls, bool nonpointer,
                     UNUSED_WITHOUT_INDEXED_ISA_AND_DTOR_BIT bool hasCxxDtor)
{
    ASSERT(!isTaggedPointer());

    isa_t newisa(0);

    if (!nonpointer) {
        newisa.setClass(cls, this);
    } else {
        ASSERT(!DisableNonpointerIsa);
        ASSERT(!cls->instancesRequireRawIsa());

#if SUPPORT_INDEXED_ISA
        ASSERT(cls->classArrayIndex() > 0);
        newisa.bits = ISA_INDEX_MAGIC_VALUE;
        // isa.magic is part of ISA_MAGIC_VALUE
        // isa.nonpointer is part of ISA_MAGIC_VALUE
        newisa.has_cxx_dtor = hasCxxDtor;
        newisa.indexcls = (uintptr_t)cls->classArrayIndex();
#else
        newisa.bits = ISA_MAGIC_VALUE;
        // isa.magic is part of ISA_MAGIC_VALUE
        // isa.nonpointer is part of ISA_MAGIC_VALUE
#   if ISA_HAS_CXX_DTOR_BIT
        newisa.has_cxx_dtor = hasCxxDtor;
#   endif
        newisa.setClass(cls, this);
#endif
        newisa.extra_rc = 1;  // the reference count starts at 1
    }

    // This write must be performed in a single store in some cases
    // (for example when realizing a class because other threads
    // may simultaneously try to use the class).
    // fixme use atomics here to guarantee single-store and to
    // guarantee memory order w.r.t. the class index table
    // ... but not too atomic because we don't want to hurt instantiation
    isa = newisa;
}
```

Description:

  • initIsa sets newisa.extra_rc = 1, so extra_rc is 1 as soon as isa is initialized.
  • That is, an object's reference count is already 1 right after alloc.

Conclusion:

  • retainCount returns the value of extra_rc in isa plus the reference count stored in the SideTable
  • When alloc creates an object, its default reference count is 1