A note before we start: this iOS underlying-principles series is a record of the advanced topics I have accumulated in my day-to-day development and study. It documents my ongoing exploration, and I hope it is helpful to every reader.
The directory is as follows:
- iOS Underlying Principles: Exploring alloc
- iOS Underlying Principles: An Exploration
- iOS Underlying Principles: The Nature of Objects & the Underlying Implementation of isa
- iOS Underlying Principles: isa and Class (Part 1)
- iOS Underlying Principles: isa and Class (Part 2)
- iOS Underlying Principles: isa and Class (Part 3)
- iOS Underlying Principles: The Runtime & the Nature of Methods
- iOS Underlying Principles: objc_msgSend
- iOS Underlying Principles: Slow Method Lookup in the Runtime
- iOS Underlying Principles: Dynamic Method Resolution
- iOS Underlying Principles: The Message Forwarding Process
- iOS Underlying Principles: Application Loading with dyld (Part 1)
- iOS Underlying Principles: Application Loading with dyld (Part 2)
- iOS Underlying Principles: The Loading of Classes
- iOS Underlying Principles: The Loading of Categories
- iOS Underlying Principles: Associated Objects
- iOS Underlying Principles: KVC
- iOS Underlying Principles: How KVO Works
- iOS Underlying Principles: Hand-Rolling Your Own KVO
- iOS Underlying Principles: Multithreading
- iOS Underlying Principles: GCD Functions and Queues
- iOS Underlying Principles: GCD (Part 1)
- iOS Underlying Principles: What Do You Know About Deadlocks?
- iOS Underlying Principles: Can a Singleton Be Destroyed?
- iOS Underlying Principles: Dispatch Source
- iOS Underlying Principles: Barrier Functions
- iOS Underlying Principles: Semaphores
- iOS Underlying Principles: Dispatch Groups
- iOS Underlying Principles: Basic Use of Locks
- iOS Underlying Principles: @synchronized Flow Analysis
- iOS Underlying Principles: How Locks Work
- iOS Underlying Principles: Implementing a Read/Write Lock
- iOS Underlying Principles: The Implementation of Objective-C Blocks
- iOS Underlying Principles: Blocks, Fully Resolved!
- iOS Underlying Principles: Startup Optimization (Part 1)
- iOS Underlying Principles: Startup Optimization (Part 2)
- iOS Underlying Principles: Memory Management & the Five Memory Regions
- iOS Underlying Principles: Tagged Pointer Format Changes in Memory Management
A summary of the column above:
- iOS Underlying Principles: A Summary of the Exploration
Detail write-ups:
- A Summary of iOS Development Details
Preface
NONPOINTER_ISA (non-pointer ISA)
nonpointer: indicates whether pointer optimization is enabled for the isa pointer. (We explored isa's underlying implementation in "iOS Underlying Principles: The Nature of Objects & the Underlying Implementation of isa".)
- 0: a pure isa pointer, holding nothing but the class object's address;
- 1: not just the class object's address; the isa also carries class information, the object's reference count, and more.
Similar to Tagged Pointer, NONPOINTER_ISA is an optimization of the isa's memory bits: all 64 bits are fully used for storage rather than wasted.
Its shiftcls field, much like a Tagged Pointer's payload, is where the valid data is carried.
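To make those bit fields concrete, here is a sketch of the nonpointer isa layout. The field names follow objc4's ISA_BITFIELD; the widths shown are the arm64 ones and differ on x86_64, so treat this as an illustration rather than the definitive definition:

#include <cstdint>

// Sketch of the arm64 nonpointer isa (field widths per objc4's ISA_BITFIELD).
union isa_t {
    uintptr_t bits;
    struct {
        uintptr_t nonpointer        : 1;  // 1 = optimized isa, 0 = raw class pointer
        uintptr_t has_assoc         : 1;  // object has (or had) associated objects
        uintptr_t has_cxx_dtor      : 1;  // object has a C++ destructor to run
        uintptr_t shiftcls          : 33; // the class pointer: the "payload"
        uintptr_t magic             : 6;  // lets the debugger recognize real objects
        uintptr_t weakly_referenced : 1;  // object is (or was) weakly referenced
        uintptr_t unused            : 1;
        uintptr_t has_sidetable_rc  : 1;  // refcount overflowed into the SideTable
        uintptr_t extra_rc          : 19; // inline reference count (count - 1)
    };
};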
MRC & ARC
The object's reference count is stored in the extra_rc field of its isa.
extra_rc
If an object's reference count is 10, its extra_rc is 9, because extra_rc stores the count in excess of 1. Once the reference count grows beyond what extra_rc can hold, the has_SIDETABLE_RC flag bit is set and the overflow moves to the side table.
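A minimal sketch of how those pieces combine; logicalRetainCount here is my own illustrative helper, not an objc4 function:

#include <cstdint>

// Illustrative only: how the logical retain count of a nonpointer-isa
// object is assembled from its two storage locations.
uintptr_t logicalRetainCount(uintptr_t extra_rc,
                             bool has_sidetable_rc,
                             uintptr_t sidetable_rc)
{
    uintptr_t count = 1 + extra_rc;    // extra_rc stores "count - 1"
    if (has_sidetable_rc) {
        count += sidetable_rc;         // overflow banked in the SideTable
    }
    return count;
}

With this, logicalRetainCount(9, false, 0) gives 10, matching the example above.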
retain
release
The release process is essentially the reverse of retain; when the reference count drops to 0, destruction begins: a dealloc message is sent to our object.
Real release
...
if (performDealloc) {
    ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(dealloc));
}
...
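To exercise these paths from user code, here is a small demo. It assumes a file compiled under MRC (-fno-objc-arc); under ARC the compiler inserts the retain/release calls for you:

#import <Foundation/Foundation.h>

int main(void) {
    NSObject *obj = [[NSObject alloc] init]; // logical count 1, extra_rc 0
    [obj retain];                            // rootRetain: extra_rc 0 -> 1
    [obj release];                           // rootRelease: extra_rc 1 -> 0
    [obj release];                           // underflow, no side table ->
                                             // objc_msgSend(obj, @selector(dealloc))
    return 0;
}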
Appendix
retain source code
ALWAYS_INLINE id
objc_object::rootRetain(bool tryRetain, objc_object::RRVariant variant)
{
    if (slowpath(isTaggedPointer())) return (id)this;

    bool sideTableLocked = false;
    bool transcribeToSideTable = false;

    isa_t oldisa;
    isa_t newisa;

    oldisa = LoadExclusive(&isa.bits);

    if (variant == RRVariant::FastOrMsgSend) {
        // These checks are only meaningful for objc_retain()
        // They are here so that we avoid a re-load of the isa.
        if (slowpath(oldisa.getDecodedClass(false)->hasCustomRR())) {
            ClearExclusive(&isa.bits);
            if (oldisa.getDecodedClass(false)->canCallSwiftRR()) {
                return swiftRetain.load(memory_order_relaxed)((id)this);
            }
            return ((id(*)(objc_object *, SEL))objc_msgSend)(this, @selector(retain));
        }
    }

    if (slowpath(!oldisa.nonpointer)) {
        // a Class is a Class forever, so we can perform this check once
        // outside of the CAS loop
        if (oldisa.getDecodedClass(false)->isMetaClass()) {
            ClearExclusive(&isa.bits);
            return (id)this;
        }
    }

    do {
        transcribeToSideTable = false;
        newisa = oldisa;
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            if (tryRetain) return sidetable_tryRetain() ? (id)this : nil;
            else return sidetable_retain(sideTableLocked);
        }
        // don't check newisa.fast_rr; we already called any RR overrides
        if (slowpath(newisa.isDeallocating())) {
            ClearExclusive(&isa.bits);
            if (sideTableLocked) {
                ASSERT(variant == RRVariant::Full);
                sidetable_unlock();
            }
            if (slowpath(tryRetain)) {
                return nil;
            } else {
                return (id)this;
            }
        }
        uintptr_t carry;
        newisa.bits = addc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc++

        if (slowpath(carry)) {
            // newisa.extra_rc++ overflowed
            if (variant != RRVariant::Full) {
                ClearExclusive(&isa.bits);
                return rootRetain_overflow(tryRetain);
            }
            // Leave half of the retain counts inline and
            // prepare to copy the other half to the side table.
            if (!tryRetain && !sideTableLocked) sidetable_lock();
            sideTableLocked = true;
            transcribeToSideTable = true;
            newisa.extra_rc = RC_HALF;
            newisa.has_sidetable_rc = true;
        }
    } while (slowpath(!StoreExclusive(&isa.bits, &oldisa.bits, newisa.bits)));

    if (variant == RRVariant::Full) {
        if (slowpath(transcribeToSideTable)) {
            // Copy the other half of the retain counts to the side table.
            sidetable_addExtraRC_nolock(RC_HALF);
        }

        if (slowpath(!tryRetain && sideTableLocked)) sidetable_unlock();
    } else {
        ASSERT(!transcribeToSideTable);
        ASSERT(!sideTableLocked);
    }

    return (id)this;
}
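A quick back-of-the-envelope check on the overflow branch above. The bit widths are assumptions taken from objc4's arm64 ISA_BITFIELD (extra_rc is only 8 bits wide on x86_64):

#include <cstdint>
#include <cstdio>

int main(void) {
    const unsigned extraRCBits = 19;                   // assumed arm64 extra_rc width
    const uint64_t rcHalf = 1ULL << (extraRCBits - 1);
    // extra_rc saturates at 2^19 - 1; on the carry, RC_HALF stays inline
    // and the other RC_HALF is transcribed to the side table.
    printf("inline capacity: %llu retains\n",
           (unsigned long long)((1ULL << extraRCBits) - 1));
    printf("kept inline on overflow (RC_HALF): %llu\n",
           (unsigned long long)rcHalf);
    return 0;
}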
sidetable_retain
id
objc_object::sidetable_retain(bool locked)
{
#if SUPPORT_NONPOINTER_ISA
    ASSERT(!isa.nonpointer);
#endif
    SideTable& table = SideTables()[this];

    if (!locked) table.lock();
    size_t& refcntStorage = table.refcnts[this];
    if (! (refcntStorage & SIDE_TABLE_RC_PINNED)) {
        refcntStorage += SIDE_TABLE_RC_ONE;
    }
    table.unlock();

    return (id)this;
}
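The SIDE_TABLE_* constants used here pack two flag bits below the actual count. A small sketch of that encoding, with the constant values mirrored from objc4's NSObject.mm for illustration:

#include <cstdint>
#include <cstdio>

// Low two bits are flags; the count lives above them.
const size_t SIDE_TABLE_WEAKLY_REFERENCED = (1UL << 0);
const size_t SIDE_TABLE_DEALLOCATING      = (1UL << 1);
const size_t SIDE_TABLE_RC_SHIFT          = 2;
const size_t SIDE_TABLE_RC_ONE            = (1UL << SIDE_TABLE_RC_SHIFT);

int main(void) {
    size_t entry = 0;
    entry += 5 * SIDE_TABLE_RC_ONE;   // five retains recorded in the table
    printf("count = %zu\n", entry >> SIDE_TABLE_RC_SHIFT);          // -> 5
    printf("deallocating? %zu\n", entry & SIDE_TABLE_DEALLOCATING); // -> 0
    return 0;
}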
rootRelease
ALWAYS_INLINE bool
objc_object::rootRelease(bool performDealloc, objc_object::RRVariant variant)
{
    if (slowpath(isTaggedPointer())) return false;

    bool sideTableLocked = false;

    isa_t newisa, oldisa;

    oldisa = LoadExclusive(&isa.bits);

    if (variant == RRVariant::FastOrMsgSend) {
        // These checks are only meaningful for objc_release()
        // They are here so that we avoid a re-load of the isa.
        if (slowpath(oldisa.getDecodedClass(false)->hasCustomRR())) {
            ClearExclusive(&isa.bits);
            if (oldisa.getDecodedClass(false)->canCallSwiftRR()) {
                swiftRelease.load(memory_order_relaxed)((id)this);
                return true;
            }
            ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(release));
            return true;
        }
    }

    if (slowpath(!oldisa.nonpointer)) {
        // a Class is a Class forever, so we can perform this check once
        // outside of the CAS loop
        if (oldisa.getDecodedClass(false)->isMetaClass()) {
            ClearExclusive(&isa.bits);
            return false;
        }
    }

retry:
    do {
        newisa = oldisa;
        if (slowpath(!newisa.nonpointer)) {
            ClearExclusive(&isa.bits);
            return sidetable_release(sideTableLocked, performDealloc);
        }
        if (slowpath(newisa.isDeallocating())) {
            ClearExclusive(&isa.bits);
            if (sideTableLocked) {
                ASSERT(variant == RRVariant::Full);
                sidetable_unlock();
            }
            return false;
        }

        // don't check newisa.fast_rr; we already called any RR overrides
        uintptr_t carry;
        newisa.bits = subc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc--
        if (slowpath(carry)) {
            // don't ClearExclusive()
            goto underflow;
        }
    } while (slowpath(!StoreReleaseExclusive(&isa.bits, &oldisa.bits, newisa.bits)));

    if (slowpath(newisa.isDeallocating()))
        goto deallocate;

    if (variant == RRVariant::Full) {
        if (slowpath(sideTableLocked)) sidetable_unlock();
    } else {
        ASSERT(!sideTableLocked);
    }
    return false;

underflow:
    // newisa.extra_rc-- underflowed: borrow from side table or deallocate

    // abandon newisa to undo the decrement
    newisa = oldisa;

    if (slowpath(newisa.has_sidetable_rc)) {
        if (variant != RRVariant::Full) {
            ClearExclusive(&isa.bits);
            return rootRelease_underflow(performDealloc);
        }

        // Transfer retain count from side table to inline storage.

        if (!sideTableLocked) {
            ClearExclusive(&isa.bits);
            sidetable_lock();
            sideTableLocked = true;
            // Need to start over to avoid a race against
            // the nonpointer -> raw pointer transition.
            oldisa = LoadExclusive(&isa.bits);
            goto retry;
        }

        // Try to remove some retain counts from the side table.
        auto borrow = sidetable_subExtraRC_nolock(RC_HALF);

        bool emptySideTable = borrow.remaining == 0; // we'll clear the side table if no refcounts remain there

        if (borrow.borrowed > 0) {
            // Side table retain count decreased.
            // Try to add them to the inline count.
            bool didTransitionToDeallocating = false;
            newisa.extra_rc = borrow.borrowed - 1;  // redo the original decrement too
            newisa.has_sidetable_rc = !emptySideTable;

            bool stored = StoreReleaseExclusive(&isa.bits, &oldisa.bits, newisa.bits);

            if (!stored && oldisa.nonpointer) {
                // Inline update failed.
                // Try it again right now. This prevents livelock on LL/SC
                // architectures where the side table access itself may have
                // dropped the reservation.
                uintptr_t overflow;
                newisa.bits =
                    addc(oldisa.bits, RC_ONE * (borrow.borrowed - 1), 0, &overflow);
                newisa.has_sidetable_rc = !emptySideTable;
                if (!overflow) {
                    stored = StoreReleaseExclusive(&isa.bits, &oldisa.bits, newisa.bits);
                    if (stored) {
                        didTransitionToDeallocating = newisa.isDeallocating();
                    }
                }
            }

            if (!stored) {
                // Inline update failed.
                // Put the retains back in the side table.
                ClearExclusive(&isa.bits);
                sidetable_addExtraRC_nolock(borrow.borrowed);
                oldisa = LoadExclusive(&isa.bits);
                goto retry;
            }

            // Decrement successful after borrowing from the side table.
            if (emptySideTable)
                sidetable_clearExtraRC_nolock();

            if (!didTransitionToDeallocating) {
                if (slowpath(sideTableLocked)) sidetable_unlock();
                return false;
            }
        }
        else {
            // Side table is empty after all. Fall-through to the dealloc path.
        }
    }

deallocate:
    // Really deallocate.

    ASSERT(newisa.isDeallocating());
    ASSERT(isa.isDeallocating());

    if (slowpath(sideTableLocked)) sidetable_unlock();

    __c11_atomic_thread_fence(__ATOMIC_ACQUIRE);

    if (performDealloc) {
        ((void(*)(objc_object *, SEL))objc_msgSend)(this, @selector(dealloc));
    }
    return true;
}
sidetable_subExtraRC_nolock
// Move some retain counts from the side table to the isa field.
// Returns the actual count subtracted, which may be less than the request.
objc_object::SidetableBorrow
objc_object::sidetable_subExtraRC_nolock(size_t delta_rc)
{
    ASSERT(isa.nonpointer);
    SideTable& table = SideTables()[this];

    RefcountMap::iterator it = table.refcnts.find(this);
    if (it == table.refcnts.end() || it->second == 0) {
        // Side table retain count is zero. Can't borrow.
        return { 0, 0 };
    }
    size_t oldRefcnt = it->second;

    // isa-side bits should not be set here
    ASSERT((oldRefcnt & SIDE_TABLE_DEALLOCATING) == 0);
    ASSERT((oldRefcnt & SIDE_TABLE_WEAKLY_REFERENCED) == 0);

    size_t newRefcnt = oldRefcnt - (delta_rc << SIDE_TABLE_RC_SHIFT);
    ASSERT(oldRefcnt > newRefcnt);  // shouldn't underflow
    it->second = newRefcnt;
    return { delta_rc, newRefcnt >> SIDE_TABLE_RC_SHIFT };
}
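For context, here is a minimal model of how rootRelease consumes the returned SidetableBorrow; the struct shape follows objc4, while the numbers are purely illustrative:

#include <cstddef>

struct SidetableBorrow { size_t borrowed, remaining; };

int main(void) {
    // Suppose rootRelease borrowed RC_HALF (2^18) and emptied the table.
    SidetableBorrow b = { 1UL << 18, 0 };
    size_t newExtraRC = b.borrowed - 1;        // redo the original decrement
    bool hasSidetableRC = (b.remaining != 0);  // clear the flag if the table is empty
    (void)newExtraRC; (void)hasSidetableRC;
    return 0;
}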