Apple source code analysis: the implementation of the memory management methods
So far we have only covered how the memory management methods are used, but what do their internal implementations look like? How exactly is the reference count managed? Next we turn to the latest Runtime source code, objc4-779.1 (current at the time of this writing), and examine the implementation of alloc, retainCount, retain, release, dealloc, and other methods.
The source code download address: opensource.apple.com/tarballs/ob…
alloc
The function call stack for the alloc method is:

// NSObject.mm
① objc_alloc
② callAlloc
// objc-runtime-new.mm
③ _objc_rootAllocWithZone
④ _class_createInstanceFromZone
⑤ calloc
// objc-object.h
⑥ initInstanceIsa -> initIsa
(1) objc_alloc
// Calls [cls alloc].
id
objc_alloc(Class cls)
{
    return callAlloc(cls, true/*checkNil*/, false/*allocWithZone*/);
}
(2) callAlloc
// Call [cls alloc] or [cls allocWithZone:nil], with appropriate
// shortcutting optimizations.
// A call to [CLS alloc] or [CLS allocWithZone:nil] will come to this function and optimize with the appropriate shortcut
static ALWAYS_INLINE id
callAlloc(Class cls, bool checkNil, bool allocWithZone=false)
{
// If it is __OBJC2__ code (to determine if the current language is Objective-C 2.0)
#if __OBJC2__
    // If (checkNil && !cls), return nil directly
    if (slowpath(checkNil && !cls)) return nil;
    // Call _objc_rootAllocWithZone if cls does not implement a custom allocWithZone method
    if (fastpath(!cls->ISA()->hasCustomAWZ())) {
        return _objc_rootAllocWithZone(cls, nil);
    }
#endif

    // No shortcuts available.
    // No shortcut is available
    // If allocWithZone is true, send cls the allocWithZone:nil message
    if (allocWithZone) {
        return ((id(*)(id, SEL, struct _NSZone *))objc_msgSend)(cls, @selector(allocWithZone:), nil);
    }
    // Otherwise, send the alloc message
    return ((id(*)(id, SEL))objc_msgSend)(cls, @selector(alloc));
}
Note: the slowpath and fastpath macros are defined as follows:

#define fastpath(x) (__builtin_expect(bool(x), 1))
#define slowpath(x) (__builtin_expect(bool(x), 0))

Both use __builtin_expect():

long __builtin_expect(long exp, long c);
__builtin_expect() is provided by GCC (version >= 2.96) to help programmers with branch prediction, since most programmers are bad at it. Its purpose is to pass "branch likelihood" information to the compiler so that the compiler can optimize the code layout and reduce the performance cost of instruction jumps. It means the probability that exp equals c is high. So fastpath(x) means x is very likely to be true (1), and slowpath(x) means x is very likely to be false (0). Used with if, if (fastpath(x)) indicates the branch is very likely to be taken, and if (slowpath(x)) indicates it is very unlikely to be taken.
The callAlloc function performs the following steps: 1. Determine whether the class implements a custom allocWithZone method. If not, call the _objc_rootAllocWithZone function (the shortcut). 2. If the shortcut is not available (that is, the condition in step 1 does not hold), send a message to the cls class based on the value of allocWithZone. Since allocWithZone is false here, an alloc message is sent to cls.
Let's look at the second case first: sending an alloc message to cls.
+ (id)alloc {
return _objc_rootAlloc(self);
}
// Base class implementation of +alloc. cls is not nil.
// Calls [cls allocWithZone:nil].
id
_objc_rootAlloc(Class cls)
{
    return callAlloc(cls, false/*checkNil*/, true/*allocWithZone*/);
}
Do you have a lot of question marks? Why is callAlloc called again? The difference is in the arguments: this time checkNil (which determines whether a nil check is needed) is false, and allocWithZone is true, so the allocWithZone:nil message is sent to the class.
// Replaced by ObjectAlloc
+ (id)allocWithZone:(struct _NSZone *)zone {
return _objc_rootAllocWithZone(self, (malloc_zone_t *)zone);
}
As you can see, both the first (shortcut) and the second (non-shortcut) call the _objc_rootAllocWithZone function and both pass CLS and nil.
Note: NSZone is ignored under ARC. As mentioned in the chapter "iOS Memory Management 3: ARC Comes out — New Rules for ARC Enforcement", NSZones are simply ignored on the current runtime (where the compiler macro __OBJC2__ is set), under both MRC and ARC. So there is now no difference between the allocWithZone: and alloc methods.
(3) _objc_rootAllocWithZone
// objc-runtime-new.mm
NEVER_INLINE
id
_objc_rootAllocWithZone(Class cls, malloc_zone_t *zone __unused)
{
// allocWithZone under __OBJC2__ ignores the zone parameter
// allocWithZone ignores the zone parameter under __OBJC2__
    return _class_createInstanceFromZone(cls, 0, nil,
OBJECT_CONSTRUCT_CALL_BADALLOC);
}
This function calls _class_createInstanceFromZone; as you can see, the zone parameter is ignored and nil is passed instead.
(4) _class_createInstanceFromZone
/***********************************************************************
* class_createInstance
* fixme
* Locking: none
*
* Note: this function has been carefully written so that the fastpath
* takes no branch.
**********************************************************************/
static ALWAYS_INLINE id
_class_createInstanceFromZone(Class cls, size_t extraBytes, void *zone,
int construct_flags = OBJECT_CONSTRUCT_NONE,
bool cxxConstruct = true,
size_t *outAllocatedSize = nil)
{
ASSERT(cls->isRealized());
// Read class's info bits all at once for performance
bool hasCxxCtor = cxxConstruct && cls->hasCxxCtor(); // Get whether CLS has a constructor
bool hasCxxDtor = cls->hasCxxDtor(); // Get whether CLS has a destructor
bool fast = cls->canAllocNonpointer(); // Get whether CLS can allocate nonpointers. If so, memory optimization is enabled
size_t size;
// Obtain the size of the space to be applied
size = cls->instanceSize(extraBytes);
if (outAllocatedSize) *outAllocatedSize = size;
id obj;
// zone == nil, call calloc to claim memory space
if (zone) {
obj = (id)malloc_zone_calloc((malloc_zone_t *)zone, 1, size);
} else {
obj = (id)calloc(1, size);
}
// If the memory space application fails, call callBadAllocHandler
    if (slowpath(!obj)) {
        if (construct_flags & OBJECT_CONSTRUCT_CALL_BADALLOC) {
return _objc_callBadAllocHandler(cls);
}
return nil;
}
// Initialize isa. If it is nonPointer, initInstanceIsa is called
    if (!zone && fast) {
        obj->initInstanceIsa(cls, hasCxxDtor);
    } else {
// Use raw pointer isa on the assumption that they might be
// doing something weird with the zone or RR.
obj->initIsa(cls);
}
// If CLS has no constructor, return the object directly
    if (fastpath(!hasCxxCtor)) {
        return obj;
}
// Perform constructor processing and return
construct_flags |= OBJECT_CONSTRUCT_FREE_ONFAILURE;
return object_cxxConstructFromClass(obj, cls, construct_flags);
}
In the _class_createInstanceFromZone function, memory is allocated by calling the C function calloc, and the object's isa is initialized.
Next, let's take a look at how a (nonpointer) isa is initialized.
(5) initInstanceIsa
// objc-object.h
inline void
objc_object::initInstanceIsa(Class cls, bool hasCxxDtor)
{
    ASSERT(!cls->instancesRequireRawIsa());
    ASSERT(hasCxxDtor == cls->hasCxxDtor());

    initIsa(cls, true, hasCxxDtor);
}
initIsa
// objc-config.h
// Define SUPPORT_INDEXED_ISA=1 on platforms that store the class in the isa
// field as an index into a class table.
// Note, keep this in sync with any .s files which also define it.
// Be sure to edit objc-abi.h as well.
#if __ARM_ARCH_7K__ >= 2  ||  (__arm64__ && !__LP64__)
# define SUPPORT_INDEXED_ISA 1
#else
# define SUPPORT_INDEXED_ISA 0
#endif
// objc-object.h
inline void
objc_object::initIsa(Class cls, bool nonpointer, bool hasCxxDtor)
{
    ASSERT(!isTaggedPointer());

    if (!nonpointer) {
        isa = isa_t((uintptr_t)cls);
    } else {
        ASSERT(!DisableNonpointerIsa);
        ASSERT(!cls->instancesRequireRawIsa());

        isa_t newisa(0);
#if SUPPORT_INDEXED_ISA // For 64-bit systems, this is 0
ASSERT(cls->classArrayIndex() > 0);
newisa.bits = ISA_INDEX_MAGIC_VALUE;
// isa.magic is part of ISA_MAGIC_VALUE
// isa.nonpointer is part of ISA_MAGIC_VALUE
newisa.has_cxx_dtor = hasCxxDtor;
newisa.indexcls = (uintptr_t)cls->classArrayIndex();
#else
newisa.bits = ISA_MAGIC_VALUE; // Set the bits of isa to ISA_MAGIC_VALUE
// isa.magic is part of ISA_MAGIC_VALUE
// isa.nonpointer is part of ISA_MAGIC_VALUE
newisa.has_cxx_dtor = hasCxxDtor;
newisa.shiftcls = (uintptr_t)cls >> 3;
#endif
// This write must be performed in a single store in some cases
// (for example when realizing a class because other threads
// may simultaneously try to use the class).
// fixme use atomics here to guarantee single-store and to
// guarantee memory order w.r.t. the class index table
        // ... but not too atomic because we don't want to hurt instantiation
        isa = newisa;
    }
}
In the initIsa method, isa.bits is assigned ISA_MAGIC_VALUE, which initializes the magic and nonpointer fields of isa. Let's verify this.
#if SUPPORT_PACKED_ISA
// extra_rc must be the MSB-most field (so it matches carry/overflow flags)
// nonpointer must be the LSB (fixme or get rid of it)
// shiftcls must occupy the same bits that a real class pointer would
// bits + RC_ONE is equivalent to extra_rc + 1
// RC_HALF is the high bit of extra_rc (i.e. half of its range)
// future expansion:
// uintptr_t fast_rr : 1; // no r/r overrides
// uintptr_t lock : 2; // lock for atomic property, @synch
// uintptr_t extraBytes : 1; // allocated with extra bytes
# if __arm64__
# define ISA_MASK 0x0000000ffffffff8ULL
# define ISA_MAGIC_MASK 0x000003f000000001ULL
# define ISA_MAGIC_VALUE 0x000001a000000001ULL // here
# define ISA_BITFIELD \
uintptr_t nonpointer : 1; \
uintptr_t has_assoc : 1; \
uintptr_t has_cxx_dtor : 1; \
uintptr_t shiftcls : 33; /*MACH_VM_MAX_ADDRESS 0x1000000000*/ \
uintptr_t magic : 6; \
uintptr_t weakly_referenced : 1; \
uintptr_t deallocating : 1; \
uintptr_t has_sidetable_rc : 1; \
uintptr_t extra_rc : 19
#   define RC_ONE   (1ULL<<45)
#   define RC_HALF  (1ULL<<18)

# elif __x86_64__
#   define ISA_MASK        0x00007ffffffffff8ULL
#   define ISA_MAGIC_MASK  0x001f800000000001ULL
#   define ISA_MAGIC_VALUE 0x001d800000000001ULL
#   define ISA_BITFIELD                                                        \
      uintptr_t nonpointer        : 1;                                         \
      uintptr_t has_assoc         : 1;                                         \
      uintptr_t has_cxx_dtor      : 1;                                         \
      uintptr_t shiftcls          : 44; /*MACH_VM_MAX_ADDRESS 0x7fffffe00000*/ \
      uintptr_t magic             : 6;                                         \
      uintptr_t weakly_referenced : 1;                                         \
      uintptr_t deallocating      : 1;                                         \
      uintptr_t has_sidetable_rc  : 1;                                         \
      uintptr_t extra_rc          : 8
#   define RC_ONE   (1ULL<<56)
#   define RC_HALF  (1ULL<<7)

# else
#   error unknown architecture for packed isa
# endif

// SUPPORT_PACKED_ISA
#endif
Under __arm64__, the value of ISA_MAGIC_VALUE is 0x000001a000000001ULL.
Matching this value against ISA_BITFIELD shows that ISA_MAGIC_VALUE does indeed initialize only the magic and nonpointer fields of isa.
Note that extra_rc is not touched when isa is initialized. In other words, the alloc method does not actually set the object's reference count to 1.
Why doesn't alloc set the reference count to 1? No hurry: let's leave the question open and analyze the other memory management methods first.
Summary: the alloc method goes through a stack of function calls, finally allocates memory by calling the C function calloc, and initializes the object's isa, without setting the object's reference count to 1.
init
// NSObject.mm
// Calls [[cls alloc] init].
id
objc_alloc_init(Class cls)
{
    return [callAlloc(cls, true/*checkNil*/, false/*allocWithZone*/) init];
}
- (id)init {
return _objc_rootInit(self);
}
id
_objc_rootInit(id obj)
{
// In practice, it will be hard to rely on this function.
// Many classes do not properly chain -init calls.
return obj;
}
The init method of the base class does nothing but return the object created by alloc. We can override the init method to do some initialization on the instance created by alloc.
new
// Calls [cls new]
id
objc_opt_new(Class cls)
{
#if __OBJC2__
    if (fastpath(cls && !cls->ISA()->hasCustomCore())) {
        return [callAlloc(cls, false/*checkNil*/, true/*allocWithZone*/) init];
}
#endif
return ((id(*) (id, SEL))objc_msgSend)(cls, @selector(new));
}
+ (id)new {
    return [callAlloc(self, false/*checkNil*/) init];
}
The new method is simple, it’s just nested alloc and init.
copy & mutableCopy
- (id)copy {
return [(id)self copyWithZone:nil];
}
- (id)mutableCopy {
return [(id)self mutableCopyWithZone:nil];
}
Copy and mutableCopy are also very simple, just calling copyWithZone and mutableCopyWithZone methods.
retainCount
As we all know, the retainCount method retrieves the reference count of an object. So, where does it come from and how does it come from? I’m sure you’ve already thought of isa and Sidetable, so let’s go into the source code and see how it works.
The retainCount function call stack is:
// NSObject.mm
① retainCount
② _objc_rootRetainCount
// objc-object.h
③ objc_object::rootRetainCount
// NSObject.mm
④ objc_object::sidetable_getExtraRC_nolock
   objc_object::sidetable_retainCount
(1) retainCount
- (NSUInteger)retainCount {
return _objc_rootRetainCount(self);
}
(2) _objc_rootRetainCount
uintptr_t
_objc_rootRetainCount(id obj)
{
ASSERT(obj);
return obj->rootRetainCount();
}
(3) objc_object::rootRetainCount
inline uintptr_t
objc_object::rootRetainCount()
{
// If it is tagged pointer, return this
if (isTaggedPointer()) return (uintptr_t)this;
sidetable_lock();
    isa_t bits = LoadExclusive(&isa.bits); // get the isa
ClearExclusive(&isa.bits);
// If ISA is nonpointer
if (bits.nonpointer) {
uintptr_t rc = 1 + bits.extra_rc; // Reference count = 1 + the value of the extra_rc in ISA
// If you also use sideTable to store reference counts
if (bits.has_sidetable_rc) {
rc += sidetable_getExtraRC_nolock(); // Add the value of the reference count in sideTable
}
sidetable_unlock();
return rc;
}
sidetable_unlock();
    // If isa is not nonpointer, return the value of sidetable_retainCount()
return sidetable_retainCount();
}
(4) objc_object::sidetable_getExtraRC_nolock
size_t
objc_object::sidetable_getExtraRC_nolock()
{
ASSERT(isa.nonpointer);
    SideTable& table = SideTables()[this]; // get the SideTable
    RefcountMap::iterator it = table.refcnts.find(this); // look this object up in refcnts
    if (it == table.refcnts.end()) return 0; // Return 0 if not found
    else return it->second >> SIDE_TABLE_RC_SHIFT; // If found, shift out the flag bits (SIDE_TABLE_RC_SHIFT) to get the count
}
#define SIDE_TABLE_RC_SHIFT 2
If isa is nonpointer, the object's reference count is stored in the extra_rc of its isa_t and, on overflow, in the RefcountMap of the SideTable. Because extra_rc stores the count in excess of the object itself, we add the object's own count of 1, plus the count stored in the SideTable, which is obtained by the sidetable_getExtraRC_nolock() function.
There are two hash lookups in the sidetable_getExtraRC_nolock() function:
- The first uses the current object's memory address to hash into SideTables() and find the SideTable the object belongs to;
- The second uses the current object's memory address to hash into that SideTable's refcnts and find the object's entry in the reference count table.
objc_object::sidetable_retainCount
uintptr_t
objc_object::sidetable_retainCount()
{
SideTable& table = SideTables()[this];
size_t refcnt_result = 1; // Set the reference count of the object itself to 1
table.lock();
RefcountMap::iterator it = table.refcnts.find(this);
    if (it != table.refcnts.end()) {
        // this is valid for SIDE_TABLE_RC_PINNED too
refcnt_result += it->second >> SIDE_TABLE_RC_SHIFT; // Reference count = 1 + reference count stored in SideTable
}
table.unlock();
return refcnt_result;
}
If isa is not nonpointer, it directly stores the memory address of the class (or meta-class) object and has no spare bits to store a reference count, so the count lives entirely in the SideTable. In that case the reference count is obtained through sidetable_retainCount().
Summary of the retainCount method:
- Before arm64, isa was not nonpointer. An object's reference count was stored entirely in the SideTable, and retainCount returned the object's own count of 1 plus the value stored in the SideTable.
- From arm64 on, isa is nonpointer. An object's reference count is stored first in the extra_rc of its isa; if the 19-bit extra_rc is not enough, the overflow is stored in the SideTable. retainCount returns the object's own count of 1, plus the value stored in extra_rc, plus the value stored in the SideTable.
- So the reference count of 1 that we print via retainCount for an object created by alloc is actually contributed by the retainCount method itself; the alloc method does not set the object's reference count.
Question: doesn't that cause a problem? The alloc method doesn't set the object's reference count to 1, and it doesn't call retainCount internally either. Wouldn't the reference count of an object created by alloc then be 0, and wouldn't it be dealloc'ed immediately? No: the dealloc method is called inside the release method, and dealloc runs only if you call it directly, or if release determines that the object's reference count has reached zero. See the release source analysis below for details.
retain
As explained in the earlier article "iOS old memory management (2): starting from MRC", there are two ways to hold an object: create it with alloc/new/copy/mutableCopy, or call the retain method. The retain method increments the object's reference count by 1.
The function call stack for the retain method is:
// NSObject.mm
① objc_retain
// objc-object.h
② objc_object::retain
// NSObject.mm
③ retain
④ _objc_rootRetain
// objc-object.h
⑤ objc_object::rootRetain
// NSObject.mm
⑥ objc_object::sidetable_retain
   addc // objc-os.h
   objc_object::rootRetain_overflow
   objc_object::sidetable_addExtraRC_nolock
(1) objc_retain
#if __OBJC2__
__attribute__((aligned(16), flatten, noinline))
id
objc_retain(id obj)
{
    if (!obj) return obj;
    if (obj->isTaggedPointer()) return obj;
    return obj->retain();
}
#else
id objc_retain(id obj) { return [obj retain]; }
#endif
If it is __OBJC2__, the objc_object::retain function is called; Otherwise, the retain method is called.
(2) objc_object::retain
// Equivalent to calling [this retain], with shortcuts if there is no override
inline id
objc_object::retain()
{
    ASSERT(!isTaggedPointer());

    if (fastpath(!ISA()->hasCustomRR())) {
        return rootRetain();
}
    return ((id(*)(objc_object *, SEL))objc_msgSend)(this, @selector(retain));
}
If the method is not overridden, call objc_object::rootRetain directly, which is the shortcut; Otherwise, the retain method is called.
(3) retain
// Replaced by ObjectAlloc
- (id)retain {
return _objc_rootRetain(self);
}
(4) _objc_rootRetain
NEVER_INLINE id
_objc_rootRetain(id obj)
{
ASSERT(obj);
return obj->rootRetain();
}
(5) objc_object::rootRetain
ALWAYS_INLINE id
objc_object::rootRetain()
{
    return rootRetain(false, false);
}

ALWAYS_INLINE id
objc_object::rootRetain(bool tryRetain, bool handleOverflow)
{
    // If it is a tagged pointer, return this
    if (isTaggedPointer()) return (id)this;

    bool sideTableLocked = false;
    bool transcribeToSideTable = false; // Whether reference counts need to be stored in the SideTable

    isa_t oldisa;
    isa_t newisa;

    do {
        transcribeToSideTable = false;
        // get the isa
        oldisa = LoadExclusive(&isa.bits);
        newisa = oldisa;
        // If isa is not nonpointer
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            if (rawISA()->isMetaClass()) return (id)this;
            if (!tryRetain && sideTableLocked) sidetable_unlock();
            // If tryRetain == false, call sidetable_retain
            if (tryRetain) return sidetable_tryRetain() ? (id)this : nil;
            else return sidetable_retain();
        }
        // don't check newisa.fast_rr; we already called any RR overrides
        if (slowpath(tryRetain && newisa.deallocating)) {
            ClearExclusive(&isa.bits);
            if (!tryRetain && sideTableLocked) sidetable_unlock();
            return nil;
        }
        uintptr_t carry; // carry records whether extra_rc of isa overflowed
        newisa.bits = addc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc++
        // If extra_rc overflows
        if (slowpath(carry)) {
            // newisa.extra_rc++ overflowed
            // If handleOverflow == false, call rootRetain_overflow
            if (!handleOverflow) {
                ClearExclusive(&isa.bits);
                return rootRetain_overflow(tryRetain);
            }
            // Leave half of the retain counts inline and
            // prepare to copy the other half to the side table.
            // Keep half the reference count in extra_rc and
            // prepare to store the other half in the SideTable
            if (!tryRetain && !sideTableLocked) sidetable_lock();
            sideTableLocked = true;
            transcribeToSideTable = true;   // Set transcribeToSideTable to true
            newisa.extra_rc = RC_HALF;      // Set extra_rc to RC_HALF; #define RC_HALF (1ULL<<18)
            newisa.has_sidetable_rc = true; // Set has_sidetable_rc to true
        }
    } while (slowpath(!StoreExclusive(&isa.bits, oldisa.bits, newisa.bits))); // Save the updated isa.bits

    // If the overflowed reference count needs to be stored in the SideTable
    if (slowpath(transcribeToSideTable)) {
        // Copy the other half of the retain counts to the side table.
        // Store RC_HALF reference counts in the SideTable
        sidetable_addExtraRC_nolock(RC_HALF);
    }

    if (slowpath(!tryRetain && sideTableLocked)) sidetable_unlock();
    return (id)this;
}
(6) objc_object::sidetable_retain
Let's first look at a few bit definitions:
// The order of these bits is important.
#define SIDE_TABLE_WEAKLY_REFERENCED (1UL<<0)
#define SIDE_TABLE_DEALLOCATING      (1UL<<1)  // MSB-ward of weak bit
#define SIDE_TABLE_RC_ONE            (1UL<<2)  // MSB-ward of deallocating bit
#define SIDE_TABLE_RC_PINNED         (1UL<<(WORD_BITS-1))

#define SIDE_TABLE_RC_SHIFT 2
#define SIDE_TABLE_FLAG_MASK (SIDE_TABLE_RC_ONE-1)
- SIDE_TABLE_WEAKLY_REFERENCED: marks whether the object has weak references
- SIDE_TABLE_DEALLOCATING: marks whether the object is being deallocated
- SIDE_TABLE_RC_ONE: the unit of the reference count; the count itself is stored in bits 2 through 63
- SIDE_TABLE_RC_PINNED: the reference-count overflow flag (the highest bit)
Now let's look at how the object's entry in the reference count table is updated:
id
objc_object::sidetable_retain()
{
#if SUPPORT_NONPOINTER_ISA
    ASSERT(!isa.nonpointer);
#endif
    SideTable& table = SideTables()[this]; // get the SideTable
    table.lock();
    size_t& refcntStorage = table.refcnts[this]; // get the refcnt
    if (! (refcntStorage & SIDE_TABLE_RC_PINNED)) { // If the count has not overflowed
        refcntStorage += SIDE_TABLE_RC_ONE; // Increment the reference count by 1
}
table.unlock();
return (id)this;
}
If isa is not nonpointer, sidetable_retain is called; after two hash lookups it reaches the object's entry in the reference count table and increments the reference count by 1.
addc
static ALWAYS_INLINE uintptr_t
addc(uintptr_t lhs, uintptr_t rhs, uintptr_t carryin, uintptr_t *carryout)
{
return __builtin_addcl(lhs, rhs, carryin, carryout);
}
If isa is nonpointer, addc is called to increment the reference count stored in extra_rc by 1.
objc_object::rootRetain_overflow
NEVER_INLINE id
objc_object::rootRetain_overflow(bool tryRetain)
{
return rootRetain(tryRetain, true);
}
If extra_rc is already full, rootRetain_overflow is called, which in turn calls rootRetain with the handleOverflow argument set to true.
objc_object::sidetable_addExtraRC_nolock
// Move some retain counts to the side table from the isa field.
// Returns true if the object is now pinned.
// Move some reference counts from ISA to sideTable
bool
objc_object::sidetable_addExtraRC_nolock(size_t delta_rc)
{
ASSERT(isa.nonpointer);
SideTable& table = SideTables()[this];
size_t& refcntStorage = table.refcnts[this];
size_t oldRefcnt = refcntStorage;
// isa-side bits should not be set here
ASSERT((oldRefcnt & SIDE_TABLE_DEALLOCATING) == 0);
ASSERT((oldRefcnt & SIDE_TABLE_WEAKLY_REFERENCED) == 0);
if (oldRefcnt & SIDE_TABLE_RC_PINNED) return true;
uintptr_t carry;
size_t newRefcnt =
addc(oldRefcnt, delta_rc << SIDE_TABLE_RC_SHIFT, 0, &carry);
if (carry) {
refcntStorage =
SIDE_TABLE_RC_PINNED | (oldRefcnt & SIDE_TABLE_FLAG_MASK);
return true;
}
else {
refcntStorage = newRefcnt;
        return false;
    }
}
If extra_rc is full, sidetable_addExtraRC_nolock is called to transfer RC_HALF reference counts from extra_rc into the SideTable: it calls addc to add that amount to the object's refcnt entry.
Summary of the retain method:
- If isa is not nonpointer, the reference count in the SideTable is incremented by 1.
- If isa is nonpointer, the reference count stored in extra_rc of isa is incremented by 1. On overflow, RC_HALF reference counts (half of extra_rc's full value) are transferred to the SideTable: from the rootRetain function we can see that extra_rc is set to RC_HALF and the refcnt in the SideTable is increased by RC_HALF. Since extra_rc is 19 bits wide and RC_HALF is the macro (1ULL<<18), the net effect is still exactly a +1 operation.
release
When we no longer need to use (hold) an object, we call the release method, which decrements the object's reference count by 1.
The call stack for the release method is:
// NSObject.mm
① objc_release
// objc-object.h
② objc_object::release
// NSObject.mm
③ release
④ _objc_rootRelease
// objc-object.h
⑤ objc_object::rootRelease
// NSObject.mm
⑥ objc_object::sidetable_release
   subc // objc-os.h
   objc_object::rootRelease_underflow
   objc_object::sidetable_subExtraRC_nolock
   objc_object::overrelease_error
(1) objc_release
#if __OBJC2__
__attribute__((aligned(16), flatten, noinline))
void
objc_release(id obj)
{
    if (!obj) return;
if (obj->isTaggedPointer()) return;
return obj->release();
}
#else
void objc_release(id obj) { [obj release]; }
#endif
If __OBJC2__, call objc_object::release; Otherwise call the release method.
(2) objc_object::release
// Equivalent to calling [this release], with shortcuts if there is no override
inline void
objc_object::release()
{
    ASSERT(!isTaggedPointer());

    if (fastpath(!ISA()->hasCustomRR())) {
        rootRelease();
        return;
}
    ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(release));
}
If the method is not overridden, call objc_object::rootRelease directly, which is the shortcut; Otherwise call the release method.
(3) release
// Replaced by ObjectAlloc
- (oneway void)release {
_objc_rootRelease(self);
}
(4) _objc_rootRelease
NEVER_INLINE void
_objc_rootRelease(id obj)
{
ASSERT(obj);
obj->rootRelease();
}
(5) objc_object::rootRelease
ALWAYS_INLINE bool
objc_object::rootRelease()
{
    return rootRelease(true, false);
}

ALWAYS_INLINE bool
objc_object::rootRelease(bool performDealloc, bool handleUnderflow)
{
    // Return false if it is a tagged pointer
    if (isTaggedPointer()) return false;

    bool sideTableLocked = false;

    isa_t oldisa;
    isa_t newisa;

 retry:
    do {
        // get the isa
        oldisa = LoadExclusive(&isa.bits);
        newisa = oldisa;
        // If isa is not nonpointer
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            if (rawISA()->isMetaClass()) return false;
            if (sideTableLocked) sidetable_unlock();
            // call sidetable_release
            return sidetable_release(performDealloc);
        }
        // don't check newisa.fast_rr; we already called any RR overrides
        uintptr_t carry;
        newisa.bits = subc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc--
        // carry indicates that the reference count in extra_rc was already zero
        if (slowpath(carry)) {
            // don't ClearExclusive()
            // Jump to underflow to handle the underflow
            goto underflow;
        }
    } while (slowpath(!StoreReleaseExclusive(&isa.bits, oldisa.bits, newisa.bits))); // Save the updated isa.bits

    if (slowpath(sideTableLocked)) sidetable_unlock();
    return false;

 underflow:
    // newisa.extra_rc-- underflowed: borrow from side table or deallocate

    // abandon newisa to undo the decrement
    // extra_rc-- underflowed: borrow from the SideTable or dealloc the object
    newisa = oldisa;

    // If isa's has_sidetable_rc field is 1
    if (slowpath(newisa.has_sidetable_rc)) {
        // If handleUnderflow == false, call rootRelease_underflow
        if (!handleUnderflow) {
            ClearExclusive(&isa.bits);
            return rootRelease_underflow(performDealloc);
        }

        // Transfer retain count from side table to inline storage.
        // Transfer the reference count from the SideTable back into extra_rc
        if (!sideTableLocked) {
            ClearExclusive(&isa.bits);
            sidetable_lock();
            sideTableLocked = true;
            // Need to start over to avoid a race against
            // the nonpointer -> raw pointer transition.
            goto retry;
        }

        // Try to remove some retain counts from the side table.
        // Try to remove (borrow) some reference counts from the SideTable, passing RC_HALF
        // borrowed is the number of reference counts actually removed from the SideTable
        size_t borrowed = sidetable_subExtraRC_nolock(RC_HALF);

        // To avoid races, has_sidetable_rc must remain set
        // even if the side table count is now zero.

        if (borrowed > 0) { // If borrowed is greater than 0
            // Side table retain count decreased.
            // Try to add them to the inline count.
            // Assign borrowed - 1 to extra_rc (the -1 redoes the original decrement)
            newisa.extra_rc = borrowed - 1;  // redo the original decrement too
            // Store the changed isa.bits
            bool stored = StoreReleaseExclusive(&isa.bits,
                                                oldisa.bits, newisa.bits);
            // If the store fails, retry immediately
            if (!stored) {
                // Inline update failed.
                // Try it again right now. This prevents livelock on LL/SC
                // architectures where the side table access itself may have
                // dropped the reservation.
                isa_t oldisa2 = LoadExclusive(&isa.bits);
                isa_t newisa2 = oldisa2;
                if (newisa2.nonpointer) {
                    uintptr_t overflow;
                    newisa2.bits =
                        addc(newisa2.bits, RC_ONE * (borrowed - 1), 0, &overflow);
                    if (!overflow) {
                        stored = StoreReleaseExclusive(&isa.bits, oldisa2.bits,
                                                       newisa2.bits);
                    }
                }
            }

            // If the store fails again, put the reference counts back into the SideTable
            if (!stored) {
                // Inline update failed.
                // Put the retains back in the side table.
                sidetable_addExtraRC_nolock(borrowed);
                goto retry;
            }

            // Decrement successful after borrowing from side table.
            // This decrement cannot be the deallocating decrement - the side
            // table lock and has_sidetable_rc bit ensure that if everyone
            // else tried to -release while we worked, the last one would block.
            sidetable_unlock();
            return false;
        }
        else {
            // Side table is empty after all. Fall-through to the dealloc path.
        }
    }

    // The reference count is 0, so dealloc the object
    // Really deallocate.

    // If newisa is already in the deallocating state, guard against over-release
    if (slowpath(newisa.deallocating)) {
        ClearExclusive(&isa.bits);
        if (sideTableLocked) sidetable_unlock();
        // call overrelease_error
        return overrelease_error();
        // does not actually return
    }
    // Set newisa to the deallocating state
    newisa.deallocating = true;
    // If the store fails, retry
    if (!StoreExclusive(&isa.bits, oldisa.bits, newisa.bits)) goto retry;

    if (slowpath(sideTableLocked)) sidetable_unlock();

    __c11_atomic_thread_fence(__ATOMIC_ACQUIRE);

    // If performDealloc == true, send the object a dealloc message
    if (performDealloc) {
        ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(dealloc));
    }
    return true;
}
(6) objc_object::sidetable_release
// rdar://20206767
// return uintptr_t instead of bool so that the various raw-isa
// -release paths all return zero in eax
uintptr_t
objc_object::sidetable_release(bool performDealloc)
{
#if SUPPORT_NONPOINTER_ISA
    ASSERT(!isa.nonpointer);
#endif
    // get the SideTable
    SideTable& table = SideTables()[this];

    bool do_dealloc = false; // Marks whether the dealloc method needs to be executed

    table.lock();
    auto it = table.refcnts.try_emplace(this, SIDE_TABLE_DEALLOCATING);
    // get the refcnt
    auto &refcnt = it.first->second;
    if (it.second) {
        // The object had no entry yet, so its stored count was 0
        do_dealloc = true;
    } else if (refcnt < SIDE_TABLE_DEALLOCATING) {
        // SIDE_TABLE_WEAKLY_REFERENCED may be set. Don't change it.
        // The stored count is 0, so mark the object as deallocating
        do_dealloc = true;
        refcnt |= SIDE_TABLE_DEALLOCATING;
    } else if (! (refcnt & SIDE_TABLE_RC_PINNED)) {
        // The reference count has a value and has not overflowed: decrement it by 1
        refcnt -= SIDE_TABLE_RC_ONE;
    }
    table.unlock();
    // If the conditions are met, dealloc the object
    if (do_dealloc && performDealloc) {
        ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(dealloc));
    }
    return do_dealloc;
}
Copy the code
If isa is not nonpointer, sidetable_release decrements the reference count in the SideTable by 1; if the reference count reaches 0, the object is dealloc'd.
subc
static ALWAYS_INLINE uintptr_t
subc(uintptr_t lhs, uintptr_t rhs, uintptr_t carryin, uintptr_t *carryout)
{
return __builtin_subcl(lhs, rhs, carryin, carryout);
}
subc is the inverse of addc: a subtraction with borrow, used to decrement the reference count.
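objc4 delegates this to the compiler builtin __builtin_subcl. As a portable sketch of what subtraction-with-borrow means, here is a hypothetical subc_sketch (not the objc4 implementation): the result is lhs - rhs - carryin, and *carryout is set to 1 if the subtraction underflowed (borrowed), else 0.

```cpp
#include <cstdint>
#include <cassert>

// Portable model of subtract-with-borrow, mirroring __builtin_subcl's contract:
// returns lhs - rhs - carryin; *carryout reports whether a borrow occurred.
static uintptr_t subc_sketch(uintptr_t lhs, uintptr_t rhs,
                             uintptr_t carryin, uintptr_t *carryout)
{
    uintptr_t r1 = lhs - rhs;
    uintptr_t borrow1 = (lhs < rhs);       // first subtraction underflowed?
    uintptr_t r2 = r1 - carryin;
    uintptr_t borrow2 = (r1 < carryin);    // subtracting the carry underflowed?
    *carryout = borrow1 | borrow2;
    return r2;
}
```

The borrow-out is exactly what rootRelease inspects to detect that extra_rc has gone below zero.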
objc_object::rootRelease_underflow
NEVER_INLINE bool
objc_object::rootRelease_underflow(bool performDealloc)
{
return rootRelease(performDealloc, true);
}
If extra_rc underflows, rootRelease_underflow is called, which in turn calls rootRelease again, this time passing true for the handleUnderflow argument.
objc_object::sidetable_subExtraRC_nolock
// Move some retain counts from the side table to the isa field.
// Returns the actual count subtracted, which may be less than the request.
size_t
objc_object::sidetable_subExtraRC_nolock(size_t delta_rc)
{
ASSERT(isa.nonpointer);
// Get the SideTable
SideTable& table = SideTables()[this];
// Get the refcnt
RefcountMap::iterator it = table.refcnts.find(this);
if (it == table.refcnts.end() || it->second == 0) {
// Side table retain count is zero. Can't borrow.
return 0;
}
size_t oldRefcnt = it->second;
// isa-side bits should not be set here
ASSERT((oldRefcnt & SIDE_TABLE_DEALLOCATING) == 0);
ASSERT((oldRefcnt & SIDE_TABLE_WEAKLY_REFERENCED) == 0);
// Reduce the reference count
size_t newRefcnt = oldRefcnt - (delta_rc << SIDE_TABLE_RC_SHIFT);
ASSERT(oldRefcnt > newRefcnt); // shouldn't underflow
it->second = newRefcnt;
return delta_rc;
}
The purpose of sidetable_subExtraRC_nolock is to move some of the reference counts stored in the SideTable into isa. It returns the count actually subtracted, which may be less than the requested amount.
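The borrowing step can be illustrated with a simplified model (hypothetical names; a plain std::unordered_map stands in for the real RefcountMap). The low SIDE_TABLE_RC_SHIFT bits of the stored value are flag bits (weakly-referenced, deallocating), so the real count lives above them; unlike the assertion-based objc4 code, this sketch clamps the request to what is available:

```cpp
#include <unordered_map>
#include <cstddef>
#include <cassert>

constexpr size_t RC_SHIFT = 2;  // SIDE_TABLE_RC_SHIFT in objc4: 2 flag bits below the count

std::unordered_map<const void*, size_t> refcnts;  // stand-in for the side table's RefcountMap

// Try to move up to delta_rc reference counts out of the side table.
// Returns the number actually subtracted, which may be less than requested.
size_t sub_extra_rc(const void *obj, size_t delta_rc) {
    auto it = refcnts.find(obj);
    if (it == refcnts.end() || it->second == 0)
        return 0;  // side table count is zero: nothing to borrow
    size_t old_val = it->second;
    size_t avail = old_val >> RC_SHIFT;        // counts actually stored
    if (avail < delta_rc) delta_rc = avail;    // borrow only what exists
    it->second = old_val - (delta_rc << RC_SHIFT);
    return delta_rc;
}
```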
objc_object::overrelease_error
NEVER_INLINE uintptr_t
objc_object::overrelease_error()
{
_objc_inform_now_and_on_crash("%s object %p overreleased while already deallocating; break on objc_overrelease_during_dealloc_error to debug", object_getClassName((id)this), this);
objc_overrelease_during_dealloc_error();
return 0; // allow rootRelease() to tail-call this
}
If the current object is already in the deallocating state, calling release again executes overrelease_error, which reports the error caused by over-calling release.
Summary of the release method:

- If `isa` is not `nonpointer`, decrement the reference count in the `SideTable` by 1; if the reference count reaches 0, `dealloc` the object.
- If `isa` is `nonpointer`, decrement the reference count stored in `isa`'s `extra_rc` by 1. If that underflows, i.e. the count in `extra_rc` is already 0, check whether `has_sidetable_rc` is `true`, i.e. whether the `SideTable` is also being used for storage. If so, try to borrow `RC_HALF` reference counts (half of `extra_rc`'s full value) from the `SideTable` into `extra_rc`; if fewer than `RC_HALF` are available, borrow however many there are. The `SideTable` count is reduced by `RC_HALF` (or by the smaller actual amount), and the borrowed count minus 1 is stored into `extra_rc`. If `extra_rc` is 0 and either `has_sidetable_rc` is `false` or the `SideTable` count is also 0, `dealloc` the object.

Why go to all this trouble instead of simply decrementing the count in the `SideTable` directly? Presumably for performance: accessing the object's `isa` is faster than a hash lookup into the `SideTable`.
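The nonpointer release path summarized above can be modeled as a toy sketch. Everything here is hypothetical (a ToyObject struct, plain integers for the counts); RC_HALF assumes an 8-bit inline field for readability, while the real extra_rc width varies by architecture:

```cpp
#include <cstdint>
#include <cassert>

// Toy model of the nonpointer release path: extra_rc stores "retain count - 1"
// inline; overflow counts are parked in the side table.
struct ToyObject {
    uint8_t  extra_rc = 0;           // inline reference count (retain count - 1)
    bool     has_sidetable_rc = false;
    uint64_t sidetable_rc = 0;       // counts parked in the SideTable
    bool     deallocated = false;
};

constexpr uint8_t RC_HALF = 1 << 7;  // half of the (assumed) 8-bit field's range

// Returns true if this release deallocated the object.
bool toy_release(ToyObject &o) {
    if (o.extra_rc > 0) {            // fast path: just decrement the inline count
        o.extra_rc--;
        return false;
    }
    // extra_rc underflowed: try to refill it from the side table
    if (o.has_sidetable_rc && o.sidetable_rc > 0) {
        uint64_t borrowed = o.sidetable_rc < RC_HALF ? o.sidetable_rc : RC_HALF;
        o.sidetable_rc -= borrowed;
        if (o.sidetable_rc == 0) o.has_sidetable_rc = false;
        o.extra_rc = (uint8_t)(borrowed - 1);  // -1 redoes this release
        return false;
    }
    // Both counts are empty: this release drops the last reference
    o.deallocated = true;
    return true;
}
```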
autorelease
The function call stack for the autorelease method is:
// NSObject.mm
① objc_autorelease
// objc-object.h
② objc_object::autorelease
// NSObject.mm
③ autorelease
④ _objc_rootAutorelease
// objc-object.h
⑤ objc_object::rootAutorelease
// NSObject.mm
⑥ objc_object::rootAutorelease2
⑦ AutoreleasePoolPage::autorelease
1) objc_autorelease
#if __OBJC2__
__attribute__((aligned(16), flatten, noinline))
id
objc_autorelease(id obj)
{
    if (!obj) return obj;
    if (obj->isTaggedPointer()) return obj;
    return obj->autorelease();
}
#else
id objc_autorelease(id obj) { return [obj autorelease]; }
#endif
If __OBJC2__, objc_object::autorelease is called; otherwise the autorelease message is sent.
(2) objc_object::autorelease
// Equivalent to [this autorelease], with shortcuts if there is no override
inline id
objc_object::autorelease()
{
    ASSERT(!isTaggedPointer());
    if (fastpath(!ISA()->hasCustomRR())) {
        return rootAutorelease();
    }
    return ((id(*)(objc_object *, SEL))objc_msgSend)(this, @selector(autorelease));
}
If the autorelease method has not been overridden, objc_object::rootAutorelease is called directly, which is the shortcut; otherwise the autorelease message is sent.
(3) autorelease
// Replaced by ObjectAlloc
- (id)autorelease {
return _objc_rootAutorelease(self);
}
(4) _objc_rootAutorelease
NEVER_INLINE id
_objc_rootAutorelease(id obj)
{
ASSERT(obj);
return obj->rootAutorelease();
}
(5) objc_object::rootAutorelease
// Base autorelease implementation, ignoring overrides.
inline id
objc_object::rootAutorelease()
{
if (isTaggedPointer()) return (id)this;
if (prepareOptimizedReturn(ReturnAtPlus1)) return (id)this;
return rootAutorelease2();
}
(6) objc_object::rootAutorelease2
__attribute__((noinline, used))
id
objc_object::rootAutorelease2()
{
    assert(!isTaggedPointer());
    return AutoreleasePoolPage::autorelease((id)this);
}
In this function, the autorelease method of the AutoreleasePoolPage class is called. For more about the AutoreleasePoolPage class, autorelease, and @autoreleasepool, see iOS — Talking about autorelease and @autoreleasepool.
dealloc
The function call stack for the dealloc method is:
// NSObject.mm
① dealloc
② _objc_rootDealloc
// objc-object.h
③ rootDealloc
// objc-runtime-new.mm
④ object_dispose
⑤ objc_destructInstance
// objc-object.h
⑥ clearDeallocating
// NSObject.mm
⑦ sidetable_clearDeallocating / clearDeallocating_slow
1) dealloc
// Replaced by NSZombies
- (void)dealloc {
_objc_rootDealloc(self);
}
(2) _objc_rootDealloc
void
_objc_rootDealloc(id obj)
{
ASSERT(obj);
obj->rootDealloc();
}
(3) rootDealloc
inline void
objc_object::rootDealloc()
{
    // If the object is a TaggedPointer, return directly
    if (isTaggedPointer()) return;  // fixme necessary?

    if (fastpath(isa.nonpointer &&             // isa is nonpointer
                 !isa.weakly_referenced &&     // there are no weak references
                 !isa.has_assoc &&             // there is no associated object
                 !isa.has_cxx_dtor &&          // there is no C++ destructor
                 !isa.has_sidetable_rc))       // no extra SideTable reference-count storage
    {
        assert(!sidetable_present());
        free(this);  // if all of the above hold, call free directly to destroy the object
    }
    else {
        object_dispose((id)this);  // otherwise, call the object_dispose function
    }
}
(4) object_dispose
/***********************************************************************
* object_dispose
* fixme
* Locking: none
**********************************************************************/
id
object_dispose(id obj)
{
    if (!obj) return nil;

    objc_destructInstance(obj);  // call the objc_destructInstance function
    free(obj);                   // call free to destroy the object

    return nil;
}
(5) objc_destructInstance
/***********************************************************************
* objc_destructInstance
* Destroys an instance without freeing memory.
* Calls C++ destructors.
* Calls ARC ivar cleanup.
* Removes associative references.
* Returns `obj`. Does nothing if `obj` is nil.
**********************************************************************/
void *objc_destructInstance(id obj)
{
if (obj) {
// Read all of the flags at once for performance.
bool cxx = obj->hasCxxDtor();
bool assoc = obj->hasAssociatedObjects();
// This order is important.
if (cxx) object_cxxDestruct(obj); // call object_cxxDestruct if there is a C++ destructor
if (assoc) _object_remove_assocations(obj); // If there are associated objects, call the _object_remove_assocations function to remove them
obj->clearDeallocating(); // Call the clearDeallocating function
}
return obj;
}
(6) clearDeallocating
inline void
objc_object::clearDeallocating()
{
    // If isa is not nonpointer
    if (slowpath(!isa.nonpointer)) {
        // Slow path for raw pointer isa.
        // Call the sidetable_clearDeallocating function
        sidetable_clearDeallocating();
    }
    // If isa is nonpointer and there is a weak reference, or the SideTable is also used to store reference counts
    else if (slowpath(isa.weakly_referenced || isa.has_sidetable_rc)) {
        // Slow path for non-pointer isa with weak refs and/or side table data.
        // Call clearDeallocating_slow
        clearDeallocating_slow();
    }

    assert(!sidetable_present());
}
(7) sidetable_clearDeallocating
void
objc_object::sidetable_clearDeallocating()
{
    // Get the SideTable
    SideTable& table = SideTables()[this];

    // clear any weak table items
    // clear extra retain count and deallocating bit
    // (fixme warn or abort if extra retain count == 0 ?)
    table.lock();
    // Get the refcnts
    RefcountMap::iterator it = table.refcnts.find(this);
    if (it != table.refcnts.end()) {
        if (it->second & SIDE_TABLE_WEAKLY_REFERENCED) {
            // Call weak_clear_no_lock to set the weak references to this object to nil
            weak_clear_no_lock(&table.weak_table, (id)this);
        }
        // Call table.refcnts.erase to erase this object's reference count from the reference count table
        table.refcnts.erase(it);
    }
    table.unlock();
}
clearDeallocating_slow
// Slow path of clearDeallocating()
// for objects with nonpointer isa
// that were ever weakly referenced
// or whose retain count ever overflowed to the side table.
NEVER_INLINE void
objc_object::clearDeallocating_slow()
{
    ASSERT(isa.nonpointer && (isa.weakly_referenced || isa.has_sidetable_rc));

    // Get the SideTable
    SideTable& table = SideTables()[this];
    table.lock();
    // If there are weak references
    if (isa.weakly_referenced) {
        // Call weak_clear_no_lock to set the weak references to this object to nil
        weak_clear_no_lock(&table.weak_table, (id)this);
    }
    // If the SideTable stores part of the reference count
    if (isa.has_sidetable_rc) {
        // Call table.refcnts.erase to erase this object's reference count from the reference count table
        table.refcnts.erase(this);
    }
    table.unlock();
}
Summary of the dealloc method:

- (1) Check five conditions: ① `isa` is `nonpointer`; ② no weak references; ③ no associated objects; ④ no `C++` destructor; ⑤ no extra `SideTable` reference-count storage. If all five hold, call the `free` function directly to destroy the object; otherwise call `object_dispose` to do some pre-release processing.
- (2) The pre-release processing: ① if there is a `C++` destructor, call `object_cxxDestruct`; ② if there are associated objects, call the `_object_remove_assocations` function to remove them; ③ call `weak_clear_no_lock` to set the weak references to the object to `nil`; ④ call `table.refcnts.erase` to erase the object's reference count from the reference count table (if `isa` is `nonpointer`, `isa.has_sidetable_rc` also has to be checked).
- (3) Call the `free` function to destroy the object.

As the dealloc procedure shows, every `__weak` variable pointing at an object is set to `nil` when the object is dealloc'd. Heavy use of `__weak` therefore consumes CPU resources, so it is recommended to use the `__weak` modifier only when you need to avoid retain cycles. iOS — Memory Management Cliches (part 3): ARC — Ownership Modifiers already mentioned that `__weak` has a performance cost: when an object is dealloc'd, its weak table must be walked and every weak pointer in it set to `nil`, and the more weak pointers to the object, the higher the cost. `__unsafe_unretained` is therefore faster than `__weak`; choosing `__unsafe_unretained` can bring some performance improvement when the lifetime of the object is clearly known.
weak
Removal of weak
From the implementation of dealloc above, we know that when an object is dealloc'd, the weak_clear_no_lock function is called to set the weak references to the object to nil. What does that function's implementation look like?
weak_clear_no_lock
// objc-weak.mm
/**
 * Called by dealloc; nils out all weak pointers that point to the
 * provided object so that they can no longer be used.
 *
 * @param weak_table
 * @param referent The object being deallocated.
 */
void
weak_clear_no_lock(weak_table_t *weak_table, id referent_id)
{
    // The address the weak pointers point at, i.e. the memory address of the object
    objc_object *referent = (objc_object *)referent_id;
    // Find the entry container that manages the referent
    weak_entry_t *entry = weak_entry_for_referent(weak_table, referent);
    // If entry == nil, there is no weak reference that needs to be set to nil
    if (entry == nil) {
        /// XXX shouldn't happen, but does with mismatched CF/objc
        //printf("XXX no entry for clear deallocating %p\n", referent);
        return;
    }

    // zero out references
    weak_referrer_t *referrers;
    size_t count;

    if (entry->out_of_line()) {
        // referrers is an array that stores all weak references to referent_id
        referrers = entry->referrers;
        // Length of the weak reference array
        count = TABLE_SIZE(entry);
    }
    else {
        referrers = entry->inline_referrers;
        count = WEAK_INLINE_COUNT;
    }

    // Iterate over the weak reference array, setting all weak references to referent_id to nil
    for (size_t i = 0; i < count; ++i) {
        objc_object **referrer = referrers[i];
        if (referrer) {
            if (*referrer == referent) {
                *referrer = nil;
            }
            else if (*referrer) {
                _objc_inform("__weak variable at %p holds %p instead of %p. "
                             "This is probably incorrect use of "
                             "objc_storeWeak() and objc_loadWeak(). "
                             "Break on objc_weak_error to debug.\n",
                             referrer, (void*)*referrer, (void*)referent);
                objc_weak_error();
            }
        }
    }

    // Remove the corresponding weak-reference management container from the weak_table
    weak_entry_remove(weak_table, entry);
}
Summary of cleaning up weak references: when an object is destroyed, dealloc goes through a series of function calls and performs two hash lookups. The first finds the object's SideTable from the object's address; the second, again keyed by the object's address, finds the object's entry in the SideTable's weak reference table weak_table. The weak reference table maps an object's address (the key) to an array of weak pointer addresses (the value). weak_clear_no_lock traverses that array and sets every weak variable pointing at the object's address to nil.
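The lookup-and-clear described above can be sketched as a toy model. For clarity both hash levels (SideTables and weak_table) are collapsed into a single map here, and plain pointers stand in for objc_object and weak_entry_t; all names are hypothetical:

```cpp
#include <unordered_map>
#include <vector>
#include <cassert>

using Object = int;  // stand-in for objc_object

// weak_table: object address -> addresses of all __weak variables pointing at it
std::unordered_map<Object*, std::vector<Object**>> weak_table;

// Called when `dying` is destroyed: nil out every weak pointer to it,
// then drop its entry from the table (mirroring weak_clear_no_lock).
void weak_clear(Object *dying) {
    auto it = weak_table.find(dying);
    if (it == weak_table.end()) return;   // nothing weakly references it
    for (Object **referrer : it->second)
        if (*referrer == dying)
            *referrer = nullptr;          // the weak pointer goes to nil
    weak_table.erase(it);                 // remove the entry itself
}
```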
Add the weak
Let’s look at how the weak variable is added to the weak reference table.
A pointer declared with the __weak modifier is, after compilation, initialized via objc_initWeak and destroyed at the end of its scope via objc_destroyWeak.
{
    id obj = [[NSObject alloc] init];
    id __weak obj1 = obj;
}

/* ---------- after compilation ---------- */
id obj1;
objc_initWeak(&obj1, obj);
objc_destroyWeak(&obj1);
The objc_initWeak function call stack looks like this:
// NSObject.mm
① objc_initWeak
② storeWeak
// objc-weak.mm
③ weak_register_no_lock / weak_unregister_no_lock
1) objc_initWeak
/**
 * Initialize a fresh weak pointer to some object location.
 * It would be used for code like:
 *
 * (The nil case)
 * __weak id weakPtr;
 * (The non-nil case)
 * NSObject *o = ...;
 * __weak id weakPtr = o;
 *
 * This function IS NOT thread-safe with respect to concurrent
 * modifications to the weak variable. (Concurrent weak clear is safe.)
 *
 * @param location Address of __weak ptr.
 * @param newObj Object ptr.
 */
id
objc_initWeak(id *location, id newObj)  // location is the __weak pointer's address, newObj is the object's address
{
    // If the object is nil, set the weak pointer to nil
    if (!newObj) {
        *location = nil;
        return nil;
    }

    return storeWeak<DontHaveOld, DoHaveNew, DoCrashIfDeallocating>
        (location, (objc_object*)newObj);
}
(2) storeWeak
// Update a weak variable.
// If HaveOld is true, the variable has an existing value
// that needs to be cleaned up. This value might be nil.
// If HaveNew is true, there is a new value that needs to be
// assigned into the variable. This value might be nil.
// If CrashIfDeallocating is true, the process is halted if newObj is
// deallocating or newObj's class does not support weak references.
// If CrashIfDeallocating is false, nil is stored instead.
//
// Update the weak variable:
// If HaveOld == true, the variable has an old value that needs to be cleaned up. The old value may be nil.
// If HaveNew == true, a new value needs to be assigned to the variable. The new value may be nil.
// If CrashIfDeallocating == true, the update halts if the object is being destroyed or does not support weak references.
// If CrashIfDeallocating == false, nil is stored instead.
enum CrashIfDeallocating {
    DontCrashIfDeallocating = false, DoCrashIfDeallocating = true
};
template <HaveOld haveOld, HaveNew haveNew,
          CrashIfDeallocating crashIfDeallocating>
static id
storeWeak(id *location, objc_object *newObj)
{
    assert(haveOld || haveNew);
    if (!haveNew) assert(newObj == nil);

    Class previouslyInitializedClass = nil;
    id oldObj;
    SideTable *oldTable;  // the old table, holding the weak variable's existing value
    SideTable *newTable;  // the new table, holding the weak variable's new value

    // Acquire locks for old and new values.
    // Order by lock address to prevent lock ordering problems.
    // Retry if the old value changes underneath us.
 retry:
    // Get the weak reference tables associated with the old and new values, respectively
    // If the weak variable has an old value, get the existing (old value) object and the old table
    if (haveOld) {
        oldObj = *location;
        oldTable = &SideTables()[oldObj];
    } else {
        oldTable = nil;
    }
    // If there is a new value to assign to the variable, get the new table
    if (haveNew) {
        newTable = &SideTables()[newObj];
    } else {
        newTable = nil;
    }

    // Lock oldTable and newTable according to haveOld and haveNew
    SideTable::lockTwo<haveOld, haveNew>(oldTable, newTable);

    // Check whether *location and oldObj are still the same object; if not, retrieve the table associated with the old value again
    if (haveOld && *location != oldObj) {
        // Unlock
        SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);
        goto retry;
    }

    // Prevent a deadlock between the weak reference machinery
    // and the +initialize machinery by ensuring that no
    // weakly-referenced object has an un-+initialized isa.
    // If there is a new value, check whether the class of the new value has been initialized;
    // if not, initialize it first to prevent deadlocks caused by +initialize calling storeWeak internally
    if (haveNew && newObj) {
        Class cls = newObj->getIsa();
        if (cls != previouslyInitializedClass &&
            !((objc_class *)cls)->isInitialized())
        {
            SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);
            class_initialize(cls, (id)newObj);

            // If this class is finished with +initialize then we're good.
            // If this class is still running +initialize on this thread
            // (i.e. +initialize called storeWeak on an instance of itself)
            // then we may proceed but it will appear initializing and
            // not yet initialized to the check above.
            // Instead set previouslyInitializedClass to recognize it on retry.
            previouslyInitializedClass = cls;

            goto retry;
        }
    }

    // If there is an old value, call weak_unregister_no_lock to clean it up:
    // remove this weak reference from the old value's entry, rather than assigning it nil
    // Clean up old value, if any.
    if (haveOld) {
        weak_unregister_no_lock(&oldTable->weak_table, oldObj, location);
    }

    // If there is a new value to assign, call weak_register_no_lock to register the weak pointer with the new object
    // Assign new value, if any.
    if (haveNew) {
        newObj = (objc_object *)
            weak_register_no_lock(&newTable->weak_table, (id)newObj, location,
                                  crashIfDeallocating);
        // weak_register_no_lock returns nil if weak store should be rejected

        // If the store succeeded:
        // - if the object is a Tagged Pointer, do nothing
        // - if isa is not nonpointer, set the weak-reference bit in the SideTable
        // - if isa is nonpointer, set isa's weakly_referenced flag bit
        // Set is-weakly-referenced bit in refcount table.
        if (newObj && !newObj->isTaggedPointer()) {
            newObj->setWeaklyReferenced_nolock();
        }

        // Point location at the new object
        // Do not set *location anywhere else. That would introduce a race.
        *location = (id)newObj;
    }
    else {
        // No new value. The storage is not changed.
    }

    // Unlock
    SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);

    return (id)newObj;
}
The execution flow of storeWeak is as follows:

- Get the weak reference tables associated with the old and new values, respectively.
- If there is an old value, call `weak_unregister_no_lock` to clean it up: remove this `weak` reference from the old value's entry, rather than assigning it `nil`.
- If there is a new value, call `weak_register_no_lock` to register the assignment, repointing the `weak` pointer at the new object.
- Check whether `isa` is `nonpointer` to decide where to set the weak-reference flag: if it is not `nonpointer`, set the weak-reference flag bit in the `SideTable`; otherwise set `isa`'s `weakly_referenced` flag bit.
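The unregister-old / register-new / repoint sequence can be sketched with a toy store_weak (hypothetical names; one flat map stands in for the SideTable/weak_table pair, and locking, +initialize, and the deallocating check are omitted):

```cpp
#include <unordered_map>
#include <vector>
#include <algorithm>
#include <cassert>

using Obj = int;  // stand-in for objc_object
std::unordered_map<Obj*, std::vector<Obj**>> weak_entries;

// Toy storeWeak: clean up the old value's entry, register with the new
// value's entry, then update the weak variable itself.
Obj *store_weak(Obj **location, Obj *newObj) {
    Obj *oldObj = *location;
    if (oldObj) {  // remove this weak reference from the old value's entry
        auto &refs = weak_entries[oldObj];
        refs.erase(std::remove(refs.begin(), refs.end(), location), refs.end());
        if (refs.empty()) weak_entries.erase(oldObj);
    }
    if (newObj)    // record this weak reference against the new value
        weak_entries[newObj].push_back(location);
    *location = newObj;  // finally repoint the weak variable
    return newObj;
}
```

Calling it with a nil new value models objc_destroyWeak: the old registration is removed and the variable is cleared.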
(3) weak_register_no_lock
/**
 * Registers a new (object, weak pointer) pair. Creates a new weak
 * object entry if it does not exist.
 *
 * @param weak_table The global weak table.
 * @param referent The object pointed to by the weak reference.
 * @param referrer The weak pointer address.
 */
id
weak_register_no_lock(weak_table_t *weak_table, id referent_id,
                      id *referrer_id, bool crashIfDeallocating)
{
    objc_object *referent = (objc_object *)referent_id;
    objc_object **referrer = (objc_object **)referrer_id;

    if (!referent || referent->isTaggedPointer()) return referent_id;

    // ensure that the referenced object is viable
    bool deallocating;
    if (!referent->ISA()->hasCustomRR()) {
        deallocating = referent->rootIsDeallocating();
    }
    else {
        BOOL (*allowsWeakReference)(objc_object *, SEL) =
            (BOOL(*)(objc_object *, SEL))
            object_getMethodImplementation((id)referent,
                                           SEL_allowsWeakReference);
        if ((IMP)allowsWeakReference == _objc_msgForward) {
            return nil;
        }
        deallocating =
            ! (*allowsWeakReference)(referent, SEL_allowsWeakReference);
    }

    if (deallocating) {
        if (crashIfDeallocating) {
            _objc_fatal("Cannot form weak reference to instance (%p) of "
                        "class %s. It is possible that this object was "
                        "over-released, or is in the process of deallocation.",
                        (void*)referent, object_getClassName((id)referent));
        } else {
            return nil;
        }
    }

    // now remember it and where it is being stored
    weak_entry_t *entry;
    if ((entry = weak_entry_for_referent(weak_table, referent))) {
        append_referrer(entry, referrer);
    }
    else {
        weak_entry_t new_entry(referent, referrer);
        weak_grow_maybe(weak_table);
        weak_entry_insert(weak_table, &new_entry);
    }

    // Do not set *referrer. objc_storeWeak() requires that the
    // value not change.
    return referent_id;
}
weak_register_no_lock is used to save weak reference information. The implementation is as follows:

- Check whether the object is being released and whether it supports weak references (`allowsWeakReference`). If the instance object's `allowsWeakReference` method returns `NO`, `_objc_fatal` is called and the console prints "Cannot form weak reference to instance (%p) of class %s. It is possible that this object was over-released, or is in the process of deallocation." (`allowsWeakReference` was covered in iOS — Memory Management Cliches (part 3): ARC.)
- Query the `weak_table` to check whether weak reference information associated with the object has already been saved in the weak reference table.
- If weak reference information already exists, call the `append_referrer` function to add this weak reference to the existing `entry` container; if there is no associated information, create an `entry` and insert it into the `weak_table`.
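The entry container mentioned above has a two-tier storage strategy worth sketching: the first few referrers live in a small inline array, and once that fills up the entry switches to an out-of-line (heap) table. A toy model (hypothetical ToyWeakEntry; a std::vector stands in for the real hashed out-of-line table; WEAK_INLINE_COUNT is 4 in objc4):

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

constexpr size_t INLINE_COUNT = 4;  // WEAK_INLINE_COUNT in objc4

struct ToyWeakEntry {
    void **inline_referrers[INLINE_COUNT] = {};
    std::vector<void**> out_of_line_referrers;  // stand-in for the heap table
    bool out_of_line = false;

    // Mirrors append_referrer: fill an inline slot if possible,
    // otherwise spill everything out of line and append there.
    void append(void **referrer) {
        if (!out_of_line) {
            for (size_t i = 0; i < INLINE_COUNT; i++) {
                if (inline_referrers[i] == nullptr) {
                    inline_referrers[i] = referrer;
                    return;
                }
            }
            // Inline slots full: move them all to the out-of-line table
            out_of_line = true;
            for (size_t i = 0; i < INLINE_COUNT; i++)
                out_of_line_referrers.push_back(inline_referrers[i]);
        }
        out_of_line_referrers.push_back(referrer);
    }
};
```

This design keeps the common case (an object with only a handful of weak references) allocation-free.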
weak_unregister_no_lock
/**
 * Unregister an already-registered weak reference.
 * This is used when referrer's storage is about to go away, but referent
 * isn't dead yet. (Otherwise, zeroing referrer later would be a
 * bad memory access.)
 * Does nothing if referent/referrer is not a currently active weak reference.
 * Does not zero referrer.
 *
 * FIXME currently requires old referent value to be passed in (lame)
 * FIXME unregistration should be automatic if referrer is collected
 *
 * @param weak_table The global weak table.
 * @param referent The object.
 * @param referrer The weak reference.
 */
void
weak_unregister_no_lock(weak_table_t *weak_table, id referent_id,
                        id *referrer_id)
{
    objc_object *referent = (objc_object *)referent_id;
    objc_object **referrer = (objc_object **)referrer_id;

    weak_entry_t *entry;

    if (!referent) return;

    if ((entry = weak_entry_for_referent(weak_table, referent))) {
        remove_referrer(entry, referrer);

        bool empty = true;
        if (entry->out_of_line() && entry->num_refs != 0) {
            empty = false;
        }
        else {
            for (size_t i = 0; i < WEAK_INLINE_COUNT; i++) {
                if (entry->inline_referrers[i]) {
                    empty = false;
                    break;
                }
            }
        }

        if (empty) {
            weak_entry_remove(weak_table, entry);
        }
    }

    // Do not set *referrer = nil. objc_storeWeak() requires that the
    // value not change.
}
weak_unregister_no_lock is used to remove weak reference information. The implementation is as follows:

- Query the `weak_table` to check whether weak reference information associated with the object has been saved in the weak reference table.
- If so, call the `remove_referrer` method to remove the associated weak reference. Then check whether the entry's storage array is now empty; if it is, call `weak_entry_remove` to remove the `entry` container.
The objc_destroyWeak call stack looks like this:
// NSObject.mm
① objc_destroyWeak
② storeWeak
objc_destroyWeak
/**
 * Destroys the relationship between a weak pointer
 * and the object it is referencing in the internal weak
 * table. If the weak pointer is not referencing anything,
 * there is no need to edit the weak table.
 *
 * This function IS NOT thread-safe with respect to concurrent
 * modifications to the weak variable. (Concurrent weak clear is safe.)
 *
 * @param location The weak pointer address.
 */
void
objc_destroyWeak(id *location)
{
    (void)storeWeak<DoHaveOld, DontHaveNew, DontCrashIfDeallocating>
        (location, nil);
}
Both objc_initWeak and objc_destroyWeak call storeWeak, just with different arguments:

- `objc_initWeak` passes in the object's address along with `DontHaveOld`, `DoHaveNew`, and `DoCrashIfDeallocating`;
- `objc_destroyWeak` passes in `nil` along with `DoHaveOld`, `DontHaveNew`, and `DontCrashIfDeallocating`.
The storeWeak function registers, in the weak table, the address of the assigned object (argument 2) as the key and the address of the __weak-modified variable (argument 1) as the value. If argument 2 is nil, the variable's address is removed from the weak table.
Summary of adding a weak reference: a pointer marked __weak results in a call to objc_initWeak after compilation, and objc_initWeak calls storeWeak to initialize the weak variable. The process of adding the weak reference: after a series of function calls, the weak reference variable is finally added inside the weak_register_no_lock function, with the exact location found by hashing. If a weak-reference entry already exists for the current object, the new weak reference variable is appended to it; if not, a new entry is created and the weak reference variable is added to it.
conclusion
Above is the concrete implementation of each memory management method; here is a brief summary:
| Memory management method | Implementation |
|---|---|
| `alloc` | Through a series of function calls, ultimately calls the C function `calloc` to allocate memory and initializes the object's `isa`, but does not set the object's reference count to 1. |
| `init` | The base class `init` method does nothing; it simply returns the object created by `alloc`. We can override `init` to do some initialization on the instance `alloc` created. |
| `new` | `new` is simply `alloc` plus `init`. |
| `copy`, `mutableCopy` | Call the `copyWithZone` and `mutableCopyWithZone` methods. |
| `retainCount` | ① If `isa` is not `nonpointer`, the reference count = the value stored in the `SideTable`'s reference count table + 1. ② If `isa` is `nonpointer`, the reference count = the value stored in `isa`'s `extra_rc` + 1 + the value stored in the `SideTable`'s reference count table. |
| `retain` | ① If `isa` is not `nonpointer`, just add 1 to the reference count in the `SideTable`. ② If `isa` is `nonpointer`, add 1 to the reference count stored in `isa`'s `extra_rc`; if it overflows, move `RC_HALF` (half of `extra_rc`'s full value) of the reference counts into the `SideTable` for storage. |
| `release` | ① If `isa` is not `nonpointer`, just subtract 1 from the reference count in the `SideTable`; if the count reaches 0, `dealloc` the object. ② If `isa` is `nonpointer`, subtract 1 from the reference count stored in `isa`'s `extra_rc`. If it underflows (the count in `extra_rc` is already 0), check `has_sidetable_rc`, i.e. whether the `SideTable` is also being used for storage. If so, borrow `RC_HALF` reference counts from the `SideTable` into `extra_rc` (or however many are actually available, if fewer than `RC_HALF`), subtract that amount from the `SideTable`, and store the borrowed amount minus 1 into `extra_rc`. If `extra_rc` is 0 and either `has_sidetable_rc` is `false` or the `SideTable` count is 0, `dealloc` the object. |
| `dealloc` | ① Check whether anything needs handling before the object is destroyed (weak references, associated objects, a `C++` destructor, a `SideTable` reference-count entry, etc.). ② If not, call the `free` function directly to destroy the object. ③ If so, first call `object_dispose` for pre-release processing (set the weak references to `nil`, remove associated objects, run `object_cxxDestruct`, erase the reference count from the `SideTable`'s reference count table, etc.), then call `free` to destroy the object. |
| Removing `weak` (setting `weak` pointers to `nil`) | When an object is destroyed, `dealloc` goes through a series of function calls and performs two hash lookups: the first finds the object's `SideTable` from the object's address, and the second finds the object's entry in the `SideTable`'s weak reference table `weak_table`. The weak reference array is traversed in `weak_clear_no_lock`, and every `weak` variable pointing at the object's address is set to `nil`. |
| Adding `weak` | Through a series of function calls, the weak reference variable is finally added inside the `weak_register_no_lock` function, with the exact location found by hashing. If a weak-reference entry already exists for the current object, the weak reference variable is appended to it; if not, a new entry is created and the weak reference variable is added to it. |
I suggest you read through the objc4 source yourself; the impression will be much deeper. The analysis in this article is not exhaustive, so if you are interested it is worth digging into on your own. If I have time later, I will analyze the details further and update this article.