In the _objc_msgSend analysis of CacheLookup, we saw that MissLabelDynamic (__objc_msgSend_uncached) is taken when the corresponding IMP is not found in the cache. That analysis also answered why this path is written in assembly: assembly is closer to the machine, which makes this hot path both faster and safer.

Analysis of the __objc_msgSend_uncached slow lookup path

```assembly
	STATIC_ENTRY __objc_msgSend_uncached
	UNWIND __objc_msgSend_uncached, FrameWithNoSaves

	// THIS IS NOT A CALLABLE C FUNCTION
	// Out-of-band p15 is the class to search

	MethodTableLookup
	TailCallFunctionPointer x17

	END_ENTRY __objc_msgSend_uncached
```
```assembly
.macro MethodTableLookup

	SAVE_REGS MSGSEND

	// lookUpImpOrForward(obj, sel, cls, LOOKUP_INITIALIZE | LOOKUP_RESOLVER)
	// receiver and selector already in x0 and x1
	mov	x2, x16
	mov	x3, #3
	bl	_lookUpImpOrForward

	// IMP in x0
	mov	x17, x0

	RESTORE_REGS MSGSEND

.endmacro
```
```assembly
.macro TailCallFunctionPointer
	// $0 = function pointer value
	br	$0
.endmacro
```

__objc_msgSend_uncached executes MethodTableLookup followed by TailCallFunctionPointer x17. MethodTableLookup saves the registers and calls _lookUpImpOrForward(obj, sel, cls, LOOKUP_INITIALIZE | LOOKUP_RESOLVER); the IMP it returns in x0 is moved into x17, and TailCallFunctionPointer x17 then branches straight to that IMP.
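The flow above can be sketched in C++ (all names and types here are invented stand-ins for illustration, not the runtime's real declarations):

```cpp
#include <cassert>
#include <cstring>

// Hypothetical simplified stand-in for the runtime's IMP type.
typedef const char *(*IMP)(void *self, const char *sel);

static const char *saySomethingIMP(void *self, const char *sel) {
    (void)self; (void)sel;
    return "saySomething ran";
}

// Stand-in for _lookUpImpOrForward; the real one walks caches and
// method lists (analyzed below) -- here it just "finds" one IMP.
static IMP lookUpImpOrForward_sketch(void *obj, const char *sel,
                                     void *cls, int behavior) {
    (void)obj; (void)sel; (void)cls; (void)behavior;
    return saySomethingIMP;
}

// What __objc_msgSend_uncached does, expressed in C++:
// MethodTableLookup -> mov x2, x16 / mov x3, #3 / bl _lookUpImpOrForward,
// then TailCallFunctionPointer x17 branches to the returned IMP.
static const char *msgSend_uncached_sketch(void *receiver, const char *sel,
                                           void *cls) {
    IMP imp = lookUpImpOrForward_sketch(receiver, sel, cls,
                    /* LOOKUP_INITIALIZE | LOOKUP_RESOLVER */ 3);
    return imp(receiver, sel);  // the assembly tail-calls via x17
}
```

The tail call matters: because the assembly branches (br) rather than calls, the IMP runs as if it had been invoked directly by the original caller.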

So the next focus is _lookUpImpOrForward.

```c++
NEVER_INLINE
IMP lookUpImpOrForward(id inst, SEL sel, Class cls, int behavior)
{
    const IMP forward_imp = (IMP)_objc_msgForward_impcache;
    IMP imp = nil;
    Class curClass;

    runtimeLock.assertUnlocked();

    if (slowpath(!cls->isInitialized())) {
        // The first message sent to a class is often +new or +alloc, or +self
        // which goes through objc_opt_* or various optimized entry points.
        //
        // However, the class isn't realized/initialized yet at this point,
        // and the optimized entry points fall down through objc_msgSend,
        // which ends up here.
        //
        // We really want to avoid caching these, as it can cause IMP caches
        // to be made with a single entry forever.
        //
        // Note that this check is racy as several threads might try to
        // message a given class for the first time at the same time,
        // in which case we might cache anyway.
        behavior |= LOOKUP_NOCACHE;
    }

    // runtimeLock is held during isRealized and isInitialized checking
    // to prevent races against concurrent realization.

    // runtimeLock is held during method search to make
    // method-lookup + cache-fill atomic with respect to method addition.
    // Otherwise, a category could be added but ignored indefinitely because
    // the cache was re-filled with the old value after the cache flush on
    // behalf of the category.

    runtimeLock.lock();

    // We don't want people to be able to craft a binary blob that looks like
    // a class but really isn't one and do a CFI attack.
    //
    // To make these harder we want to make sure this is a class that was
    // either built into the binary or legitimately registered through
    // objc_duplicateClass, objc_initializeClassPair or objc_allocateClassPair.
    checkIsKnownClass(cls);

    cls = realizeAndInitializeIfNeeded_locked(inst, cls, behavior & LOOKUP_INITIALIZE);
    // runtimeLock may have been dropped but is now locked again
    runtimeLock.assertLocked();
    curClass = cls;

    // The code used to lookup the class's cache again right after
    // we take the lock but for the vast majority of the cases
    // evidence shows this is a miss most of the time, hence a time loss.
    //
    // The only codepath calling into this without having performed some
    // kind of cache lookup is class_getInstanceMethod().

    for (unsigned attempts = unreasonableClassCount();;) {
        if (curClass->cache.isConstantOptimizedCache(/* strict */true)) {
#if CONFIG_USE_PREOPT_CACHES
            imp = cache_getImp(curClass, sel);
            if (imp) goto done_unlock;
            curClass = curClass->cache.preoptFallbackClass();
#endif
        } else {
            // curClass method list.
            Method meth = getMethodNoSuper_nolock(curClass, sel);
            if (meth) {
                imp = meth->imp(false);
                goto done;
            }

            if (slowpath((curClass = curClass->getSuperclass()) == nil)) {
                // No implementation found, and method resolver didn't help.
                // Use forwarding.
                imp = forward_imp;
                break;
            }
        }

        // Halt if there is a cycle in the superclass chain.
        if (slowpath(--attempts == 0)) {
            _objc_fatal("Memory corruption in class list.");
        }

        // Superclass cache.
        imp = cache_getImp(curClass, sel);
        if (slowpath(imp == forward_imp)) {
            // Found a forward:: entry in a superclass.
            // Stop searching, but don't cache yet; call method
            // resolver for this class first.
            break;
        }
        if (fastpath(imp)) {
            // Found the method in a superclass. Cache it in this class.
            goto done;
        }
    }

    // No implementation found. Try method resolver once.

    if (slowpath(behavior & LOOKUP_RESOLVER)) {
        behavior ^= LOOKUP_RESOLVER;
        return resolveMethod_locked(inst, sel, cls, behavior);
    }

 done:
    if (fastpath((behavior & LOOKUP_NOCACHE) == 0)) {
#if CONFIG_USE_PREOPT_CACHES
        while (cls->cache.isConstantOptimizedCache(/* strict */true)) {
            cls = cls->cache.preoptFallbackClass();
        }
#endif
        log_and_fill_cache(cls, imp, sel, inst, curClass);
    }
 done_unlock:
    runtimeLock.unlock();
    if (slowpath((behavior & LOOKUP_NIL) && imp == forward_imp)) {
        return nil;
    }
    return imp;
}
```

Before analyzing _lookUpImpOrForward, let's identify the goal. We want to find the IMP corresponding to a SEL, so inside _lookUpImpOrForward the target is that IMP. Ignore the extraneous code for now.

The code logic in _lookUpImpOrForward is:

1. if (slowpath(!cls->isInitialized())) — if the class has not been initialized yet, LOOKUP_NOCACHE is added to behavior so this first lookup is not cached.

2. checkIsKnownClass(cls) checks whether the current class is a known class that was registered and loaded.

3. cls = realizeAndInitializeIfNeeded_locked(inst, cls, behavior & LOOKUP_INITIALIZE) — this step assigns the class's ro and rw data and sets supercls and metacls (recursively realizing the superclass and metaclass as well). Since method_list lives in ro/rw, this prepares for the for loop that searches for the IMP.

4. for (unsigned attempts = unreasonableClassCount();;) — the search loop.

5. if (curClass->cache.isConstantOptimizedCache(/* strict */true)) — first check whether the class has a constant (preoptimized) cache; if so, look up the selector in that cache first.

6. Method meth = getMethodNoSuper_nolock(curClass, sel) — otherwise search the current class's own method list.

7. if (slowpath((curClass = curClass->getSuperclass()) == nil)) — when the superclass is nil the chain is exhausted, so imp = forward_imp (const IMP forward_imp = (IMP)_objc_msgForward_impcache) and the loop breaks. If the superclass is not nil, imp = cache_getImp(curClass, sel) checks the superclass's cache, and steps 5–7 repeat up the chain.

8. If no class in the chain provides an implementation, resolveMethod_locked(inst, sel, cls, behavior) runs the method resolution process once.

9. log_and_fill_cache(cls, imp, sel, inst, curClass) inserts the found IMP into the cache.
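The walk up the superclass chain in steps 4–7 can be modeled with a small C++ sketch (FakeClass, kForwardImp, and lookUpImpOrForward_model are invented names; an int stands in for an IMP):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical model of the lookup loop: each class has a cache,
// its own method list, and a superclass pointer.
struct FakeClass {
    std::map<std::string, int> cache;    // sel -> imp
    std::map<std::string, int> methods;  // the class's own method list
    FakeClass *superclass = nullptr;
};

const int kForwardImp = -1;  // plays the role of _objc_msgForward_impcache

// Mirrors the for(;;) loop: search the current class's method list,
// then the superclass's cache, then the superclass's method list,
// until getSuperclass() returns nil.
int lookUpImpOrForward_model(FakeClass *cls, const std::string &sel) {
    for (FakeClass *cur = cls; cur != nullptr; cur = cur->superclass) {
        auto m = cur->methods.find(sel);           // getMethodNoSuper_nolock
        if (m != cur->methods.end()) return m->second;
        if (cur->superclass) {                     // cache_getImp(super, sel)
            auto c = cur->superclass->cache.find(sel);
            if (c != cur->superclass->cache.end()) return c->second;
        }
    }
    return kForwardImp;  // nothing found anywhere: forward
}
```

The real loop additionally counts attempts (unreasonableClassCount()) so that a corrupted, cyclic superclass chain aborts instead of spinning forever.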

Analysis of getMethodNoSuper_nolock

```c++
static method_t *
getMethodNoSuper_nolock(Class cls, SEL sel)
{
    runtimeLock.assertLocked();

    ASSERT(cls->isRealized());
    // fixme nil cls?
    // fixme nil sel?

    auto const methods = cls->data()->methods();
    for (auto mlists = methods.beginLists(),
              end = methods.endLists();
         mlists != end;
         ++mlists)
    {
        // <rdar://problem/46904873> getMethodNoSuper_nolock is the hottest
        // caller of search_method_list, inlining it turns
        // getMethodNoSuper_nolock into a frame-less function and eliminates
        // any store from this codepath.
        method_t *m = search_method_list_inline(*mlists, sel);
        if (m) return m;
    }

    return nil;
}
```

1. ASSERT(cls->isRealized()) checks whether the current class has been realized.

2. auto const methods = cls->data()->methods() fetches the class's method lists.

3. Loop over the method lists: method_t *m = search_method_list_inline(*mlists, sel).
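The outer loop over method lists can be modeled like this (FakeMethod, MethodList, and getMethodNoSuper_model are invented names; the point is only the iteration order, under the assumption that lists attached later, e.g. by categories, sit at the front):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical model: a class's methods() is a list of method lists.
struct FakeMethod { std::string sel; int imp; };
using MethodList = std::vector<FakeMethod>;

// Mirrors the mlists loop: scan each list in order and return the
// first match, so an entry in an earlier (category) list wins.
const FakeMethod *getMethodNoSuper_model(
        const std::vector<MethodList> &lists, const std::string &sel) {
    for (const MethodList &mlist : lists) {      // ++mlists
        for (const FakeMethod &m : mlist) {      // search_method_list_inline
            if (m.sel == sel) return &m;
        }
    }
    return nullptr;
}
```

This ordering is why a category override shadows the class's original implementation without removing it.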

search_method_list_inline calls down layer by layer and finally reaches findMethodInSortedMethodList:

```c++
template<class getNameFunc>
ALWAYS_INLINE static method_t *
findMethodInSortedMethodList(SEL key, const method_list_t *list,
                             const getNameFunc &getName)
{
    ASSERT(list);

    auto first = list->begin();
    auto base = first;
    decltype(first) probe;

    uintptr_t keyValue = (uintptr_t)key;
    uint32_t count;

    for (count = list->count; count != 0; count >>= 1) {
        probe = base + (count >> 1);

        uintptr_t probeValue = (uintptr_t)getName(probe);

        if (keyValue == probeValue) {
            // `probe` is a match.
            // Rewind looking for the *first* occurrence of this value.
            // This is required for correct category overrides.
            while (probe > first && keyValue == (uintptr_t)getName((probe - 1))) {
                probe--;
            }
            return &*probe;
        }

        if (keyValue > probeValue) {
            base = probe + 1;
            count--;
        }
    }

    return nil;
}
```

findMethodInSortedMethodList looks up the method with a binary search. Let's trace the flow more clearly with a breakpoint.

```objc
@interface LGPerson : NSObject {
    NSString *hobby;
}
@property (nonatomic, copy) NSString *name;
@property (nonatomic) int age;

- (void)saySomething;
+ (void)sayNB;
@end
```

1. probe = base + (count >> 1) — probe points to the middle of the current range; with count == 5, count >> 1 == 2, so probe = base + 2.

2. uintptr_t probeValue = (uintptr_t)getName(probe) gets the SEL stored at the probe position.

3. while (probe > first && keyValue == (uintptr_t)getName((probe - 1))) — because a category that overrides a method has its entry placed earlier in the list, the search rewinds with probe-- to the first occurrence, so the category's implementation is the one returned.

4. if (keyValue > probeValue) — the key lies in the upper half, so base = probe + 1 and count-- excludes the probe element itself from the remaining range.

5. The for loop continues with count >>= 1 (shifting right by 1 bit divides by 2, halving the search range each round).

Printing shows count == 2 (0b0010) at this point; shifting right by 1 bit gives probe = base + 1, so probe's index is 1 (probe is a C++ iterator with an overloaded + operator). Steps 2–5 then repeat: the method's address is returned if found, nil otherwise.
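The algorithm above, including the rewind for category overrides, can be reproduced standalone (Entry and findInSortedList_model are invented names; an integer key stands in for the selector's address):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical model of findMethodInSortedMethodList: entries are sorted
// by key; duplicate keys can occur when a category overrides a method,
// and the *first* occurrence must win.
struct Entry { std::uint32_t key; int imp; };

const Entry *findInSortedList_model(std::uint32_t keyValue,
                                    const std::vector<Entry> &list) {
    const Entry *first = list.data();
    const Entry *base = first;
    const Entry *probe;

    for (std::uint32_t count = (std::uint32_t)list.size();
         count != 0; count >>= 1) {
        probe = base + (count >> 1);          // middle of current range
        std::uint32_t probeValue = probe->key;

        if (keyValue == probeValue) {
            // Rewind to the first occurrence so the category's
            // override (stored earlier in the list) is returned.
            while (probe > first && keyValue == (probe - 1)->key)
                probe--;
            return probe;
        }
        if (keyValue > probeValue) {          // search the upper half
            base = probe + 1;
            count--;
        }
    }
    return nullptr;
}
```

Note the asymmetry with a textbook binary search: when the key is smaller there is no explicit "lower half" branch; count >>= 1 alone shrinks the range toward base.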

Conclusion: _lookUpImpOrForward first checks whether the current class is initialized and realizes it as needed, recursively assigning ro and rw for the current class, its superclass, and its metaclass, which prepares for the slow search. The slow search then walks the class hierarchy: if the method is not found in the current class, it moves to the superclass and continues there. Within each class's method list, binary search is used to find the method implementation, and when a category overrides a method the search rewinds to the first occurrence so the override is the one returned.