preface

The previous articles in this series, OC-Runtime & objc_msgSend(), gave us an overview of the Runtime. We learned the difference between runtime and compile time, and the three ways to interact with the Runtime. objc_msgSend() finds an IMP through a SEL; in this article we explore the slow lookup path of objc_msgSend().

objc_msgSend(): when the cache lookup misses

In the previous article we saw MissLabelDynamic (__objc_msgSend_uncached): when nothing at all is found in the cache, execution jumps there. So next we search the source globally for __objc_msgSend_uncached.

STATIC_ENTRY __objc_msgSend_uncached
UNWIND __objc_msgSend_uncached, FrameWithNoSaves

// THIS IS NOT A CALLABLE C FUNCTION
// Out-of-band p15 is the class to search
	
MethodTableLookup
TailCallFunctionPointer x17

END_ENTRY __objc_msgSend_uncached

__objc_msgSend_uncached runs MethodTableLookup and then TailCallFunctionPointer, which jumps to the IMP that was found.

TailCallFunctionPointer implementation

.macro TailCallFunctionPointer
    // $0 = function pointer value
    br	$0
.endmacro

MethodTableLookup implementation

.macro MethodTableLookup

	SAVE_REGS MSGSEND	// save the argument registers

	// lookUpImpOrForward(obj, sel, cls, LOOKUP_INITIALIZE | LOOKUP_RESOLVER)
	// receiver and selector already in x0 and x1
	mov	x2, x16		// x16 (the class) into x2
	mov	x3, #3		// behavior = 3 into x3
	bl	_lookUpImpOrForward

	// IMP is returned in x0 (x0 is the first argument register and also
	// holds the return value)
	mov	x17, x0		// move x0 into x17

	RESTORE_REGS MSGSEND

.endmacro

From MethodTableLookup we know the IMP ends up in x0 (the first argument register, which also holds the return value), and from there we can trace _lookUpImpOrForward. Searching globally for _lookUpImpOrForward turns up no definition or implementation in the assembly code; searching for lookUpImpOrForward (without the underscore) finds it. Going from assembly to C++ just drops the leading _ from the symbol name.

In lookUpImpOrForward we find that the final return value is imp.

Question: why is the cache lookup written in assembly rather than C++?

  • Assembly is closer to the machine and executes with minimal overhead, which matters on the hot path of the cache lookup.
  • A message's arguments are variadic and not known at this point; in C the parameters would have to be explicit, while assembly can leave the argument registers untouched and stay fully dynamic.

lookUpImpOrForward

lookUpImpOrForward source code

How do we read the lookUpImpOrForward source? Start from its return value, the IMP, and see where it is assigned. Let's analyze the source together.

IMP lookUpImpOrForward(id inst, SEL sel, Class cls, int behavior)
{
    // Initialize the message-forwarding IMP
    const IMP forward_imp = (IMP)_objc_msgForward_impcache;
    IMP imp = nil;
    Class curClass;

    runtimeLock.assertUnlocked();

    // If the class is not yet initialized, don't cache the result
    if (slowpath(!cls->isInitialized())) {
        behavior |= LOOKUP_NOCACHE;
    }

    runtimeLock.lock();

    // Make sure the class is registered
    checkIsKnownClass(cls);

    // Realize and initialize the class and metaclass
    cls = realizeAndInitializeIfNeeded_locked(inst, cls, behavior & LOOKUP_INITIALIZE);
    // runtimeLock may have been dropped but is now locked again
    runtimeLock.assertLocked();
    curClass = cls;

    // The loop only exits via break or goto.
    for (unsigned attempts = unreasonableClassCount();;) {
        // Check the shared (preoptimized) cache first -- generally system
        // methods; user classes don't take this branch
        if (curClass->cache.isConstantOptimizedCache(/* strict */true)) {
#if CONFIG_USE_PREOPT_CACHES
            imp = cache_getImp(curClass, sel);
            if (imp) goto done_unlock;
            curClass = curClass->cache.preoptFallbackClass();
#endif
        } else {
            // Search curClass's method list
            method_t *meth = getMethodNoSuper_nolock(curClass, sel);
            if (meth) {
                imp = meth->imp(false); // got the IMP; run the done path
                goto done;
            }

            // Walk up: curClass = curClass->getSuperclass(), until nil
            if (slowpath((curClass = curClass->getSuperclass()) == nil)) {
                // No implementation found, and method resolver didn't help.
                // Use forwarding.
                imp = forward_imp;
                break;
            }
        }

        // Halt if there is a cycle in the superclass chain.
        if (slowpath(--attempts == 0)) {
            _objc_fatal("Memory corruption in class list.");
        }

        // Superclass cache.
        imp = cache_getImp(curClass, sel);
        if (slowpath(imp == forward_imp)) {
            // Found a forward:: entry in a superclass.
            // Stop searching, but don't cache yet; call method
            // resolver for this class first.
            break;
        }
        if (fastpath(imp)) {
            // Found the method in a superclass. Cache it in this class.
            goto done;
        }
    }

    // No implementation found. Try method resolver once.
    if (slowpath(behavior & LOOKUP_RESOLVER)) {
        behavior ^= LOOKUP_RESOLVER;
        return resolveMethod_locked(inst, sel, cls, behavior);
    }

 done:
    if (fastpath((behavior & LOOKUP_NOCACHE) == 0)) {
#if CONFIG_USE_PREOPT_CACHES
        while (cls->cache.isConstantOptimizedCache(/* strict */true)) {
            cls = cls->cache.preoptFallbackClass();
        }
#endif
        log_and_fill_cache(cls, imp, sel, inst, curClass);
    }
 done_unlock:
    runtimeLock.unlock();
    if (slowpath((behavior & LOOKUP_NIL) && imp == forward_imp)) {
        return nil;
    }
    return imp;
}

realizeAndInitializeIfNeeded_locked implementation

static Class
realizeAndInitializeIfNeeded_locked(id inst, Class cls, bool initialize)
{
    runtimeLock.assertLocked();

    // Realize the class if it isn't realized yet
    if (slowpath(!cls->isRealized())) {
        cls = realizeClassMaybeSwiftAndLeaveLocked(cls, runtimeLock);
        // runtimeLock may have been dropped but is now locked again
    }

    if (slowpath(initialize && !cls->isInitialized())) {
        cls = initializeAndLeaveLocked(cls, inst, runtimeLock);
        // runtimeLock may have been dropped but is now locked again

        // If sel == initialize, class_initialize will send +initialize and
        // then the messenger will send +initialize again after this
        // procedure finishes. Of course, if this is not being called
        // from the messenger then it won't happen. 2778172
    }
    return cls;
}

Binary search algorithm

static method_t *
getMethodNoSuper_nolock(Class cls, SEL sel)
{
    runtimeLock.assertLocked();
    ASSERT(cls->isRealized());
    // fixme nil cls?
    // fixme nil sel?

    auto const methods = cls->data()->methods();
    for (auto mlists = methods.beginLists(),
              end = methods.endLists();
         mlists != end;
         ++mlists)
    {
        // <rdar://problem/46904873> getMethodNoSuper_nolock is the hottest
        // caller of search_method_list, inlining it turns
        // getMethodNoSuper_nolock into a frame-less function and eliminates
        // any store from this codepath.
        // Binary search
        method_t *m = search_method_list_inline(*mlists, sel);
        if (m) return m;
    }

    return nil;
}
template<class getNameFunc>
ALWAYS_INLINE static method_t *
findMethodInSortedMethodList(SEL key, const method_list_t *list, const getNameFunc &getName)
{
    ASSERT(list);

    auto first = list->begin();
    auto base = first;
    decltype(first) probe;

    uintptr_t keyValue = (uintptr_t)key;
    uint32_t count;

    /**
     * Example with list->count = 8 and the key in the upper half:
     *   count = 8 (1000b); probe = base + (count >> 1) = base + 4
     *   key > probe => base = probe + 1 = 5; count-- => 7
     *   count >>= 1 (0111b -> 0011b) => 3; probe = 5 + (3 >> 1) = 6
     *   the remaining candidates are indices 6 and 7, and so on:
     *   each round halves the search range.
     */
    for (count = list->count; count != 0; count >>= 1) {
        probe = base + (count >> 1);

        uintptr_t probeValue = (uintptr_t)getName(probe);

        if (keyValue == probeValue) {
            // `probe` is a match.
            // Rewind looking for the *first* occurrence of this value.
            // This is required for correct category overrides.
            while (probe > first && keyValue == (uintptr_t)getName((probe - 1))) {
                probe--;
            }
            return &*probe;
        }

        if (keyValue > probeValue) {
            base = probe + 1;
            count--;
        }
    }

    return nil;
}

findMethodInSortedMethodList is a binary search. Binary search (also called half-interval search) is a simple and fast algorithm with two requirements on the sequence being searched: it must be ordered (every element sorted by some ordering, ascending or descending both work), and it must be stored sequentially so it supports random access.

For example, quickly find the target value 66 among the numbers 0 to 100.

cache_getImp analysis

	STATIC_ENTRY _cache_getImp

	// GetClassFromIsa_p16: the same macro used at the start of the
	// cache lookup within the class itself.
	GetClassFromIsa_p16 p0, 0, x0
	CacheLookup GETIMP, _cache_getImp, LGetImpMissDynamic, LGetImpMissConstant

LGetImpMissDynamic:
	mov	p0, #0
	ret

GetClassFromIsa_p16 definition

.macro GetClassFromIsa_p16 src, needs_auth, auth_address /* note: auth_address is not required if !needs_auth */

#if SUPPORT_INDEXED_ISA
	// Indexed isa
	mov	p16, \src			// optimistically set dst = src
	tbz	p16, #ISA_INDEX_IS_NPI_BIT, 1f	// done if not non-pointer isa
	// isa in p16 is indexed
	adrp	x10, _objc_indexed_classes@PAGE
	add	x10, x10, _objc_indexed_classes@PAGEOFF
	ubfx	p16, p16, #ISA_INDEX_SHIFT, #ISA_INDEX_BITS  // extract index
	ldr	p16, [x10, p16, UXTP #PTRSHIFT]	// load class from array
1:

#elif __LP64__
.if \needs_auth == 0 // _cache_getImp takes an authed class already
	mov	p16, \src
.else
	// 64-bit packed isa
	ExtractISA p16, \src, \auth_address
.endif

#else
	// 32-bit raw isa
	mov	p16, \src

#endif

.endmacro

_cache_getImp invokes GetClassFromIsa_p16 with src = p0 and needs_auth = 0, so the .if \needs_auth == 0 branch is taken and the value of p0 (curClass) is moved straight into p16, i.e. p16 = curClass. From there execution enters the same CacheLookup we saw at the start of objc_msgSend(); on a miss, LGetImpMissDynamic returns 0 (nil) rather than falling through to the uncached path.

conclusion

Slow query flow chart