GCD source code


This chapter mainly introduces the implementation logic of the underlying source code

  1. Exploration of underlying principles of GCD queue
  2. GCD asynchronous function underlying principle exploration
  3. Exploration of underlying principles of GCD synchronization function
  4. Exploration of underlying principles of GCD singleton

Preparation: download the GCD (libdispatch) source

1. Exploration of the underlying principle of the GCD queue

Search for dispatch_queue_create in the source code

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
	return _dispatch_lane_create_with_target(label, attr,
			DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

1.1 _dispatch_lane_create_with_target

Enter the _dispatch_lane_create_with_target

DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
		dispatch_queue_t tq, bool legacy)
{
	// Step 1: normalize the parameters, such as qos, overcommit, tq ...
	dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

	// Concatenate the queue class name
	const void *vtable;
	dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
	if (dqai.dqai_concurrent) {
		// vtable is the class of the queue: OS_dispatch_queue_concurrent
		vtable = DISPATCH_VTABLE(queue_concurrent);
	} else {
		vtable = DISPATCH_VTABLE(queue_serial);
	}
	...
	// Create the queue and initialize it
	dispatch_lane_t dq = _dispatch_object_alloc(vtable,
			sizeof(struct dispatch_lane_s)); // alloc
	// dqai.dqai_concurrent determines whether the queue is serial or concurrent
	_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
			DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
			(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0)); // init

	dq->dq_label = label;
	dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
			dqai.dqai_relpri); // priority handling
	...
	dq->do_targetq = tq;
	_dispatch_object_debug(dq, "%s", __func__);
	return _dispatch_trace_queue_create(dq)._dq; // return dq
}

1.1.1 _dispatch_queue_attr_to_info

Pass dqa (the queue attribute: serial, concurrent, etc.) to the _dispatch_queue_attr_to_info method to create a dqai of type dispatch_queue_attr_info_t, which stores the queue's attributes

dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
	dispatch_queue_attr_info_t dqai = { };

	if (!dqa) return dqai;

#if DISPATCH_VARIANT_STATIC
	if (dqa == &_dispatch_queue_attr_concurrent) {
		dqai.dqai_concurrent = true;
		return dqai;
	}
#endif

	if (dqa < _dispatch_queue_attrs ||
			dqa >= &_dispatch_queue_attrs[DISPATCH_QUEUE_ATTR_COUNT]) {
		DISPATCH_CLIENT_CRASH(dqa->do_vtable, "Invalid queue attribute");
	}

	size_t idx = (size_t)(dqa - _dispatch_queue_attrs);

	dqai.dqai_inactive = (idx % DISPATCH_QUEUE_ATTR_INACTIVE_COUNT);
	idx /= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;

	dqai.dqai_concurrent = !(idx % DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT);
	idx /= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;

	dqai.dqai_relpri = -(int)(idx % DISPATCH_QUEUE_ATTR_PRIO_COUNT);
	idx /= DISPATCH_QUEUE_ATTR_PRIO_COUNT;

	dqai.dqai_qos = idx % DISPATCH_QUEUE_ATTR_QOS_COUNT;
	idx /= DISPATCH_QUEUE_ATTR_QOS_COUNT;

	dqai.dqai_autorelease_frequency =
			idx % DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
	idx /= DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;

	dqai.dqai_overcommit = idx % DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
	idx /= DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;

	return dqai;
}

1.1.2 Setting attributes associated with the queue, such as quality of service (qos)

dqai.dqai_inactive = (idx % DISPATCH_QUEUE_ATTR_INACTIVE_COUNT);
idx /= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;

dqai.dqai_concurrent = !(idx % DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT);
idx /= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;

dqai.dqai_relpri = -(int)(idx % DISPATCH_QUEUE_ATTR_PRIO_COUNT);
idx /= DISPATCH_QUEUE_ATTR_PRIO_COUNT;

dqai.dqai_qos = idx % DISPATCH_QUEUE_ATTR_QOS_COUNT;
idx /= DISPATCH_QUEUE_ATTR_QOS_COUNT;

dqai.dqai_autorelease_frequency =
		idx % DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
idx /= DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;

dqai.dqai_overcommit = idx % DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
idx /= DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
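This decode is a mixed-radix unpacking: each attribute was multiplied into a single index when the attribute table was laid out, and each field is recovered in turn with `%` and `/=`. A minimal sketch of the idea in plain C — the radix constants and field names here are illustrative stand-ins, not libdispatch's real `DISPATCH_QUEUE_ATTR_*_COUNT` values:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative radices, NOT the real DISPATCH_QUEUE_ATTR_*_COUNT values. */
#define INACTIVE_COUNT    2
#define CONCURRENCY_COUNT 2
#define QOS_COUNT         7

typedef struct {
    bool inactive;
    bool concurrent;
    size_t qos;
} attr_info_t;

/* Pack the fields into one index, innermost field first. */
static size_t attr_encode(attr_info_t a) {
    size_t idx = a.qos;
    /* 0 encodes "concurrent", matching the !(idx % COUNT) decode below */
    idx = idx * CONCURRENCY_COUNT + (a.concurrent ? 0 : 1);
    idx = idx * INACTIVE_COUNT + (a.inactive ? 1 : 0);
    return idx;
}

/* Recover the fields with the same % and /= pattern as
 * _dispatch_queue_attr_to_info. */
static attr_info_t attr_decode(size_t idx) {
    attr_info_t a;
    a.inactive = (idx % INACTIVE_COUNT);        idx /= INACTIVE_COUNT;
    a.concurrent = !(idx % CONCURRENCY_COUNT);  idx /= CONCURRENCY_COUNT;
    a.qos = idx % QOS_COUNT;
    return a;
}
```

Each `%` peels off one field, and each `/=` shifts the index down to the next one, so the round trip is lossless as long as every field stays below its radix.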

1.1.3 DISPATCH_VTABLE Splicing queue name

The queue class name (vtable) is concatenated with DISPATCH_VTABLE, which is a macro definition, as shown below. The class name is therefore OS_dispatch_ plus the queue type (for example, queue_concurrent)

#define DISPATCH_VTABLE(name) DISPATCH_OBJC_CLASS(name)
#define DISPATCH_OBJC_CLASS(name) (&DISPATCH_CLASS_SYMBOL(name))
#define DISPATCH_CLASS(name) OS_dispatch_##name
// serial queue
dispatch_queue_t serial = dispatch_queue_create("ypy", DISPATCH_QUEUE_SERIAL);
// concurrent queue
dispatch_queue_t concurrent = dispatch_queue_create("ypy", DISPATCH_QUEUE_CONCURRENT);
// main queue (serial)
dispatch_queue_t mainQueue = dispatch_get_main_queue();
// global queue (concurrent)
dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
NSLog(@"\n%@ \n%@ \n%@ \n%@", object_getClass(serial),
		object_getClass(concurrent), object_getClass(mainQueue),
		object_getClass(globalQueue));
2021-08-03 19:15:41.368990+0800 001--Functions and queues[20480:1105392]
OS_dispatch_queue_serial
OS_dispatch_queue_concurrent
OS_dispatch_queue_main
OS_dispatch_queue_global
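The `##` in `DISPATCH_CLASS` is the preprocessor's token-pasting operator: it fuses the `OS_dispatch_` prefix and the queue type into one identifier before the compiler ever sees it. A small standalone demo of the mechanism — the `_demo` symbol below is a hypothetical stand-in, not a real dispatch class:

```c
#include <assert.h>
#include <string.h>

/* STRINGIFY turns a token into a string literal; the ## in CLASS_NAME pastes
 * the OS_dispatch_ prefix onto the type name before stringification, the same
 * way DISPATCH_CLASS builds OS_dispatch_##name. */
#define STRINGIFY(x) #x
#define CLASS_NAME(name) STRINGIFY(OS_dispatch_##name)

/* Hypothetical symbol showing that pasting also resolves real identifiers,
 * the way DISPATCH_CLASS_SYMBOL resolves to an actual class symbol. */
static int OS_dispatch_queue_serial_demo = 7;
#define CLASS_SYMBOL(name) OS_dispatch_##name
```

So `CLASS_NAME(queue_concurrent)` expands to the string `"OS_dispatch_queue_concurrent"`, matching the class names printed in the log above.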

1.1.4 _dispatch_object_alloc + _dispatch_queue_init Creates queue objects

The queue dq is created by alloc + init. In the _dispatch_queue_init parameters, the Boolean dqai.dqai_concurrent determines whether the queue is serial or concurrent, and vtable is the class of the queue — which also shows that a queue is an object

1.1.4.1 _dispatch_object_alloc

In _dispatch_object_alloc -> _os_object_alloc_realized, the isa pointer is set, which verifies that queues are objects

void *
_dispatch_object_alloc(const void *vtable, size_t size)
{
#if OS_OBJECT_HAVE_OBJC1
	const struct dispatch_object_vtable_s *_vtable = vtable;
	dispatch_object_t dou;
	dou._os_obj = _os_object_alloc_realized(_vtable->_os_obj_objc_isa, size);
	dou._do->do_vtable = vtable;
	return dou._do;
#else
	return _os_object_alloc_realized(vtable, size);
#endif
}
inline _os_object_t
_os_object_alloc_realized(const void *cls, size_t size)
{
	_os_object_t obj;
	dispatch_assert(size >= sizeof(struct _os_object_s));
	while (unlikely(!(obj = calloc(1u, size)))) {
		_dispatch_temporary_resource_shortage();
	}
	obj->os_obj_isa = cls; // set the isa pointer
	return obj;
}

1.1.4.2 _dispatch_queue_init

  • Enter the _dispatch_queue_init method with the queue type dispatch_queue_t and set the properties of the queue
// Note to later developers: ensure that any initialization changes are
// made for statically allocated queues (i.e. _dispatch_main_q).
static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
		uint16_t width, uint64_t initial_state_bits)
{
	uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
	dispatch_queue_t dq = dqu._dq;

	dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
			DISPATCH_QUEUE_INACTIVE)) == 0);

	if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
		dq->do_ref_cnt += 2; // rdar://8181908 see _dispatch_lane_resume
		if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
			dq->do_ref_cnt++; // released when DSF_DELETED is set
		}
	}

	dq_state |= initial_state_bits;
	dq->do_next = DISPATCH_OBJECT_LISTLESS;
	dqf |= DQF_WIDTH(width);
	os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
	dq->dq_state = dq_state;
	dq->dq_serialnum =
			os_atomic_inc_orig(&_dispatch_queue_serial_numbers, relaxed);
	return dqu;
}

1.1.5 _dispatch_trace_queue_create

The created queue is then processed by _dispatch_trace_queue_create, which calls _dispatch_introspection_queue_create and returns the processed _dq

DISPATCH_ALWAYS_INLINE
static inline dispatch_queue_class_t
_dispatch_trace_queue_create(dispatch_queue_class_t dqu)
{
	_dispatch_only_if_ktrace_enabled({
		uint64_t dq_label[4] = {0}; // So that we get the right null termination
		dispatch_queue_t dq = dqu._dq;
		strncpy((char *)dq_label, (char *)dq->dq_label ?: "", sizeof(dq_label));
		_dispatch_ktrace2(DISPATCH_QOS_TRACE_queue_creation_start,
				dq->dq_serialnum,
				_dispatch_priority_to_pp_prefer_fallback(dq->dq_priority));
		_dispatch_ktrace4(DISPATCH_QOS_TRACE_queue_creation_end,
				dq_label[0], dq_label[1], dq_label[2], dq_label[3]);
	});
	return _dispatch_introspection_queue_create(dqu);
}
dispatch_queue_class_t
_dispatch_introspection_queue_create(dispatch_queue_t dq)
{
	dispatch_queue_introspection_context_t dqic;
	size_t sz = sizeof(struct dispatch_queue_introspection_context_s);

	if (!_dispatch_introspection.debug_queue_inversions) {
		sz = offsetof(struct dispatch_queue_introspection_context_s,
				__dqic_no_queue_inversion);
	}
	dqic = _dispatch_calloc(1, sz);
	dqic->dqic_queue._dq = dq; // dispatch_queue_t
	if (_dispatch_introspection.debug_queue_inversions) {
		LIST_INIT(&dqic->dqic_order_top_head);
		LIST_INIT(&dqic->dqic_order_bottom_head);
	}
	dq->do_finalizer = dqic;

	_dispatch_unfair_lock_lock(&_dispatch_introspection.queues_lock);
	LIST_INSERT_HEAD(&_dispatch_introspection.queues, dqic, dqic_list);
	_dispatch_unfair_lock_unlock(&_dispatch_introspection.queues_lock);

	DISPATCH_INTROSPECTION_INTERPOSABLE_HOOK_CALLOUT(queue_create, dq);
	if (DISPATCH_INTROSPECTION_HOOK_ENABLED(queue_create)) {
		_dispatch_introspection_queue_create_hook(dq);
	}
	return upcast(dq)._dqu; // dispatch_queue_class_t
}
1.1.5.1 _dispatch_introspection_queue_create_hook

Enter _dispatch_introspection_queue_create_hook -> dispatch_introspection_queue_get_info -> _dispatch_introspection_lane_get_info. There we can see that, unlike our custom classes, the queue created at the bottom layer is built from a template

DISPATCH_NOINLINE
static void
_dispatch_introspection_queue_create_hook(dispatch_queue_t dq)
{
	dispatch_introspection_queue_s diq;
	diq = dispatch_introspection_queue_get_info(dq);
	dispatch_introspection_hook_callout_queue_create(&diq);
}
dispatch_introspection_queue_s
dispatch_introspection_queue_get_info(dispatch_queue_t dq)
{
	if (dx_metatype(dq) == _DISPATCH_WORKLOOP_TYPE) {
		return _dispatch_introspection_workloop_get_info(upcast(dq)._dwl);
	}
	return _dispatch_introspection_lane_get_info(upcast(dq)._dl);
}
static inline dispatch_introspection_queue_s
_dispatch_introspection_lane_get_info(dispatch_lane_class_t dqu)
{
	dispatch_lane_t dq = dqu._dl;
	bool global = _dispatch_object_is_global(dq);
	uint64_t dq_state = os_atomic_load2o(dq, dq_state, relaxed);

	// The queue is built from this template
	dispatch_introspection_queue_s diq = {
		.queue = dq->_as_dq,
		.target_queue = dq->do_targetq,
		.label = dq->dq_label,
		.serialnum = dq->dq_serialnum,
		.width = dq->dq_width,
		.suspend_count = _dq_state_suspend_cnt(dq_state) +
				dq->dq_side_suspend_cnt,
		.enqueued = _dq_state_is_enqueued(dq_state) && !global,
		.barrier = _dq_state_is_in_barrier(dq_state) && !global,
		.draining = (dq->dq_items_head == (void*)~0ul) ||
				(!dq->dq_items_head && dq->dq_items_tail),
		.global = global,
		.main = dx_type(dq) == DISPATCH_QUEUE_MAIN_TYPE,
	};
	return diq;
}

1.2 Summary

  • Parameter two (the queue type) in dispatch_queue_create determines the width passed to the low-level init: DISPATCH_QUEUE_WIDTH_MAX for concurrent or 1 for serial (this is what distinguishes the two)
  • A queue is also an object that needs to be created by alloc + init; in alloc the class is concatenated from macros, and the isa pointer is set
  • At the bottom layer, the queue is created from a template, a structure of type dispatch_introspection_queue_s

The following figure shows the underlying analysis flow of dispatch_queue_create

2. Exploration of the underlying principle of GCD asynchronous function

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work) // work: the task
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME;
	dispatch_qos_t qos;

	// wrap and save the block
	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	// concurrency handling
	_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

Enter the dispatch_async source implementation; we mainly analyze two functions

2.1 _dispatch_continuation_init: task wrapper function

Enter the _dispatch_continuation_init source implementation; it mainly wraps the task and sets the callback function, equivalent to initialization

DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, dispatch_block_t work,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	void *ctxt = _dispatch_Block_copy(work); // copy the task

	dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		dc->dc_flags = dc_flags;
		dc->dc_ctxt = ctxt;
		// will initialize all fields but requires dc_flags & dc_ctxt to be set
		return _dispatch_continuation_init_slow(dc, dqu, flags);
	}

	dispatch_function_t func = _dispatch_Block_invoke(work); // wrap the task
	if (dc_flags & DC_FLAG_CONSUME) {
		// the callback function
		func = _dispatch_call_block_and_release;
	}
	return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

There are mainly the following steps

  • Copy the task with _dispatch_Block_copy
  • Wrap the task with _dispatch_Block_invoke, a macro definition that extracts the block's invoke function — per the analysis above, this is the asynchronous callback
#define _dispatch_Block_invoke(bb) \
        ((dispatch_function_t)((struct Block_layout *)bb)->invoke)
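The cast in _dispatch_Block_invoke relies on the block ABI: the invoke function pointer sits at a fixed offset inside struct Block_layout. The idea can be mimicked in plain C with a hypothetical struct of the same shape — this illustrates the cast, it is not the real Blocks runtime:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical mimic of the Blocks ABI layout: isa, flags, reserved, then the
 * invoke pointer, like struct Block_layout. */
struct fake_block_layout {
    void *isa;
    int flags;
    int reserved;
    int (*invoke)(void);  /* the function the "block" actually runs */
};

/* Mimic of _dispatch_Block_invoke: cast the object pointer to the layout
 * struct and read its ->invoke member. */
#define fake_block_invoke(bb) (((struct fake_block_layout *)(bb))->invoke)

static int say_42(void) { return 42; }
```

The real macro works the same way: any pointer to a block literal can be reinterpreted as a `struct Block_layout *`, and `->invoke` yields a plain C function pointer that libdispatch stores as `func`.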
  • Because dc_flags contains DC_FLAG_CONSUME, the callback func is reassigned to _dispatch_call_block_and_release
  • The callback function is stored via the _dispatch_continuation_init_f method (the f is func) into the continuation's properties
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	pthread_priority_t pp = 0;
	dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
	dc->dc_func = f; // store the callback function
	dc->dc_ctxt = ctxt;
	// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
	// should not be propagated, only taken from the handler if it has one
	if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
		pp = _dispatch_priority_propagate();
	}
	_dispatch_continuation_voucher_set(dc, flags);
	return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}

2.2 _dispatch_continuation_async: concurrency processing function

Enter the source implementation of _dispatch_continuation_async; its main job is to push the block callback into the queue

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
		dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
	if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
		_dispatch_trace_item_push(dqu, dc); // trace log
	}
#else
	(void)dc_flags;
#endif
	return dx_push(dqu._dq, dc, qos); // paired with dx_invoke()
}

The key code is dx_push(dqu._dq, dc, qos); dx_push is a macro definition, as shown below

 #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)

dq_push executes different functions depending on the queue type


DISPATCH_VTABLE_INSTANCE(workloop,
	.do_type        = DISPATCH_WORKLOOP_TYPE,
	.do_dispose     = _dispatch_workloop_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_workloop_invoke,

	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_workloop_wakeup,
	.dq_push        = _dispatch_workloop_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_serial, lane,
	.do_type        = DISPATCH_QUEUE_SERIAL_TYPE,
	.do_dispose     = _dispatch_lane_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,

	.dq_activate    = _dispatch_lane_activate,
	.dq_wakeup      = _dispatch_lane_wakeup,
	.dq_push        = _dispatch_lane_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
	.do_type        = DISPATCH_QUEUE_CONCURRENT_TYPE,
	.do_dispose     = _dispatch_lane_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,

	.dq_activate    = _dispatch_lane_activate,
	.dq_wakeup      = _dispatch_lane_wakeup,
	.dq_push        = _dispatch_lane_concurrent_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
	.do_type        = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
	.do_dispose     = _dispatch_object_no_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_object_no_invoke,

	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_root_queue_wakeup,
	.dq_push        = _dispatch_root_queue_push,
);

#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_pthread_root, lane,
	.do_type        = DISPATCH_QUEUE_PTHREAD_ROOT_TYPE,
	.do_dispose     = _dispatch_pthread_root_queue_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_object_no_invoke,

	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_root_queue_wakeup,
	.dq_push        = _dispatch_root_queue_push,
);
#endif // DISPATCH_USE_PTHREAD_ROOT_QUEUES

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_mgr, lane,
	.do_type        = DISPATCH_QUEUE_MGR_TYPE,
	.do_dispose     = _dispatch_object_no_dispose,
	.do_debug       = _dispatch_queue_debug,
#if DISPATCH_USE_MGR_THREAD
	.do_invoke      = _dispatch_mgr_thread,
#else
	.do_invoke      = _dispatch_object_no_invoke,
#endif

	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_mgr_queue_wakeup,
	.dq_push        = _dispatch_mgr_queue_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_main, lane,
	.do_type        = DISPATCH_QUEUE_MAIN_TYPE,
	.do_dispose     = _dispatch_lane_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,

	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_main_queue_wakeup,
	.dq_push        = _dispatch_main_queue_push,
);

#if DISPATCH_COCOA_COMPAT
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_runloop, lane,
	.do_type        = DISPATCH_QUEUE_RUNLOOP_TYPE,
	.do_dispose     = _dispatch_runloop_queue_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,

	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_runloop_queue_wakeup,
	.dq_push        = _dispatch_lane_push,
);
#endif

DISPATCH_VTABLE_INSTANCE(source,
	.do_type        = DISPATCH_SOURCE_KEVENT_TYPE,
	.do_dispose     = _dispatch_source_dispose,
	.do_debug       = _dispatch_source_debug,
	.do_invoke      = _dispatch_source_invoke,

	.dq_activate    = _dispatch_source_activate,
	.dq_wakeup      = _dispatch_source_wakeup,
	.dq_push        = _dispatch_lane_push,
);

DISPATCH_VTABLE_INSTANCE(channel,
	.do_type        = DISPATCH_CHANNEL_TYPE,
	.do_dispose     = _dispatch_channel_dispose,
	.do_debug       = _dispatch_channel_debug,
	.do_invoke      = _dispatch_channel_invoke,

	.dq_activate    = _dispatch_lane_activate,
	.dq_wakeup      = _dispatch_channel_wakeup,
	.dq_push        = _dispatch_lane_push,
);

#if HAVE_MACH
DISPATCH_VTABLE_INSTANCE(mach,
	.do_type        = DISPATCH_MACH_CHANNEL_TYPE,
	.do_dispose     = _dispatch_mach_dispose,
	.do_debug       = _dispatch_mach_debug,
	.do_invoke      = _dispatch_mach_invoke,

	.dq_activate    = _dispatch_mach_activate,
	.dq_wakeup      = _dispatch_mach_wakeup,
	.dq_push        = _dispatch_lane_push,
);

Using symbolic breakpoints to find the executed function

Run the demo and use symbolic breakpoints to determine which function is executed. Since this is a concurrent queue, add a symbolic breakpoint on _dispatch_lane_concurrent_push and see whether execution reaches it

dispatch_queue_t conque = dispatch_queue_create("com.ypy.queue",
		DISPATCH_QUEUE_CONCURRENT);
dispatch_async(conque, ^{
	NSLog(@"async");
});
  • The run confirms that _dispatch_lane_concurrent_push is indeed hit

  • Continue debugging with symbolic breakpoints on _dispatch_continuation_redirect_push and _dispatch_lane_push; execution goes to _dispatch_continuation_redirect_push

  • Entering the source of _dispatch_continuation_redirect_push, we find another dx_push — a recursion. From the earlier queue-creation analysis we know a queue is also an object, with a superclass and a root class, so the method recurses up to the root class
DISPATCH_NOINLINE
static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
		dispatch_object_t dou, dispatch_qos_t qos)
{
	if (likely(!_dispatch_object_is_redirection(dou))) {
		dou._dc = _dispatch_async_redirect_wrap(dl, dou);
	} else if (!dou._dc->dc_ctxt) {
		// find first queue in descending target queue order that has
		// an autorelease frequency set, and use that as the frequency for
		// this continuation.
		dou._dc->dc_ctxt = (void *)
				(uintptr_t)_dispatch_queue_autorelease_frequency(dl);
	}

	dispatch_queue_t dq = dl->do_targetq;
	if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
	// recursion: a queue is also an object, with a superclass and a root class
	dx_push(dq, dou, qos);
}
  • Next, verify the guess with a symbolic breakpoint on the root class's _dispatch_root_queue_push — it is indeed hit

  • The path is _dispatch_root_queue_push -> _dispatch_root_queue_push_inline -> _dispatch_root_queue_poke -> _dispatch_root_queue_poke_slow; symbolic breakpoints confirm it. The source of _dispatch_root_queue_poke_slow has two main steps:

    • The callback is registered via the _dispatch_root_queues_init method
    • Threads are created in a do-while loop via pthread_create
DISPATCH_NOINLINE
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
	int remaining = n;
	int r = ENOSYS;

	_dispatch_root_queues_init(); // the key
	...
	// the do-while loop creates threads
	do {
		_dispatch_retain(dq); // released in _dispatch_worker_thread
		while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
			if (r != EAGAIN) {
				(void)dispatch_assume_zero(r);
			}
			_dispatch_temporary_resource_shortage();
		}
	} while (--remaining);
	...
}
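The spawn loop above can be sketched with plain pthreads: create `n` workers with the same do-while shape, retrying only on EAGAIN. This is a simplified model of _dispatch_root_queue_poke_slow; the worker body and the backoff policy (`sched_yield`) are illustrative assumptions:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>

static atomic_int g_started;

/* Stand-in for _dispatch_worker_thread draining the root queue. */
static void *worker(void *arg) {
    (void)arg;
    atomic_fetch_add(&g_started, 1);
    return NULL;
}

/* Spawn n worker threads with the same do-while / retry shape as the source. */
static int spawn_workers(int n, pthread_t *threads) {
    int remaining = n, i = 0, r;
    do {
        while ((r = pthread_create(&threads[i], NULL, worker, NULL))) {
            if (r != EAGAIN) return r;  /* unrecoverable error: give up */
            sched_yield();              /* transient shortage: back off, retry */
        }
        i++;
    } while (--remaining);
    return 0;
}
```

As in libdispatch, EAGAIN (a temporary resource shortage) is the only retried error; anything else is treated as fatal.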

_dispatch_root_queues_init

  • Enter the _dispatch_root_queues_init source implementation: it is a dispatch_once_f singleton (see the singleton analysis later; not described here), and the func passed in is _dispatch_root_queues_init_once.
 DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queues_init(void)
{
    dispatch_once_f(&_dispatch_root_queues_pred, NULL, _dispatch_root_queues_init_once);
}

Go to the source of _dispatch_root_queues_init_once, where the call handle for the different internal transactions is _dispatch_worker_thread2. Its block callback executes along the following path: _dispatch_root_queues_init_once -> _dispatch_worker_thread2 -> _dispatch_root_queue_drain -> _dispatch_continuation_pop_inline -> _dispatch_continuation_invoke_inline -> _dispatch_client_callout -> _dispatch_call_block_and_release. This path can be obtained with a breakpoint plus bt to print the stack. One thing to note here: the block callout of a singleton differs from the block callout of an asynchronous function

  • In a singleton, the func in the block callout is _dispatch_Block_invoke(block)
  • In an asynchronous function, the func in the block callout is _dispatch_call_block_and_release

2.3 Summary

The underlying analysis of asynchronous functions is as follows

  • [Preparation]: first, copy and encapsulate the asynchronous task and set the callback function func
  • [Block callback]: the bottom layer recurses through dx_push, redirects to the root queue, then creates a thread via pthread_create, and finally executes the block callback via dx_invoke (dx_push and dx_invoke come in pairs)

The underlying analysis process of asynchronous functions is shown in the figure

3. Exploration of underlying principles of GCD synchronization function

Enter the source implementation of dispatch_sync; its bottom layer is implemented via the barrier function

DISPATCH_NOINLINE
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    uintptr_t dc_flags = DC_FLAG_BLOCK;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
    }
    _dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

3.1 Enter the source code of _dispatch_sync_f

DISPATCH_NOINLINE
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
		uintptr_t dc_flags)
{
	_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

3.1.1 _dispatch_sync_f_inline source

Look at the _dispatch_sync_f_inline source, where width = 1 indicates a serial queue. Two points are important:

  • Barrier: _dispatch_barrier_sync_f (described later in this article) — the bottom layer of the synchronization function is actually the synchronous barrier function
  • Deadlock: _dispatch_sync_f_slow — if there is mutual waiting, a deadlock occurs
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	if (likely(dq->dq_width == 1)) { // serial queue
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl); // handle current information
	_dispatch_sync_invoke_and_complete(dl, ctxt, func
			DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
					dq, ctxt, func, dc_flags)));
}
3.1.1.1 _dispatch_barrier_sync_f (described later in this article): the bottom layer of the synchronization function is actually the synchronous barrier function
3.1.1.2 _dispatch_sync_f_slow: deadlock

Go to _dispatch_sync_f_slow: the current main queue is suspended and blocked

#pragma mark dispatch_sync / dispatch_barrier_sync

DISPATCH_NOINLINE
static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
	dispatch_queue_t top_dq = top_dqu._dq;
	dispatch_queue_t dq = dqu._dq;
	if (unlikely(!dq->do_targetq)) {
		return _dispatch_sync_function_invoke(dq, ctxt, func);
	}

	pthread_priority_t pp = _dispatch_get_priority();
	struct dispatch_sync_context_s dsc = {
		.dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
		.dc_func     = _dispatch_async_and_wait_invoke,
		.dc_ctxt     = &dsc,
		.dc_other    = top_dq,
		.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
		.dc_voucher  = _voucher_get(),
		.dsc_func    = func,
		.dsc_ctxt    = ctxt,
		.dsc_waiter  = _dispatch_tid_self(),
	};

	_dispatch_trace_item_push(top_dq, &dsc); // push the task onto the queue
	__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

	if (dsc.dsc_func == NULL) {
		dispatch_queue_t stop_dq = dsc.dc_other;
		return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
	}

	_dispatch_introspection_sync_begin(top_dq);
	_dispatch_trace_item_pop(top_dq, &dsc);
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
			DISPATCH_TRACE_ARG(&dsc));
}

When a task is added to a queue, it is pushed onto the main queue by _dispatch_trace_item_push

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_trace_item_push(dispatch_queue_class_t dqu, dispatch_object_t _tail)
{
	if (unlikely(DISPATCH_QUEUE_PUSH_ENABLED())) {
		_dispatch_trace_continuation(dqu._dq, _tail._do, DISPATCH_QUEUE_PUSH);
	}

	_dispatch_trace_item_push_inline(dqu._dq, _tail._do);
	_dispatch_introspection_queue_push(dqu, _tail);
}

Enter __DISPATCH_WAIT_FOR_QUEUE__: it first checks whether dq is already owned by the waiting thread, obtains the queue's state, and then matches that state against the queue the current task depends on

DISPATCH_NOINLINE
static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
	uint64_t dq_state = _dispatch_wait_prepare(dq);
	// if the waiter and the owner of dq are the same thread: deadlock
	if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
		DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
				"dispatch_sync called on queue "
				"already owned by current thread");
	}

	// Blocks submitted to the main thread MUST run on the main thread, and
	// dispatch_async_and_wait also executes on the remote context rather than
	// the current thread.
	//
	// For both these cases we need to save the frame linkage for the sake of
	// _dispatch_async_and_wait_invoke
	_dispatch_thread_frame_save_state(&dsc->dsc_dtf);

	if (_dq_state_is_suspended(dq_state) || _dq_state_is_base_anon(dq_state)) {
		dsc->dc_data = DISPATCH_WLH_ANON;
	} else if (_dq_state_is_base_wlh(dq_state)) {
		dsc->dc_data = (dispatch_wlh_t)dq;
	} else {
		_dispatch_wait_compute_wlh(upcast(dq)._dl, dsc);
	}

	if (dsc->dc_data == DISPATCH_WLH_ANON) {
		dsc->dsc_override_qos_floor = dsc->dsc_override_qos =
				(uint8_t)_dispatch_get_basepri_override_qos_floor();
		_dispatch_thread_event_init(&dsc->dsc_event);
	}
	dx_push(dq, dsc, _dispatch_qos_from_pp(dsc->dc_priority));
	_dispatch_trace_runtime_event(sync_wait, dq, 0);
	if (dsc->dc_data == DISPATCH_WLH_ANON) {
		_dispatch_thread_event_wait(&dsc->dsc_event); // acquire
	} else {
		_dispatch_event_loop_wait_for_ownership(dsc);
	}
	if (dsc->dc_data == DISPATCH_WLH_ANON) {
		_dispatch_thread_event_destroy(&dsc->dsc_event);
		// If _dispatch_sync_waiter_wake() gave this thread an override,
		// ensure that the root queue sees it.
		if (dsc->dsc_override_qos > dsc->dsc_override_qos_floor) {
			_dispatch_set_basepri_override_qos(dsc->dsc_override_qos);
		}
	}
}

Go to the source of _dq_state_drain_locked_by -> _dispatch_lock_is_locked_by

DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
	// lock_value ^ tid is an XOR: identical bits yield 0, different bits
	// yield 1. If lock_value == tid the XOR is 0, and 0 & anything is 0.
	// In other words, this checks whether the waiting task and the executing
	// task are on the same thread.
	return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

The queue being waited on is compared against the queue being executed by XORing the thread IDs. If they are equal, the thread is waiting on a queue it already owns, and a deadlock occurs
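The XOR trick is just a branch-free equality test on the owner bits. A standalone sketch — the mask constant here is illustrative, not the real DLOCK_OWNER_MASK:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative owner mask; the real DLOCK_OWNER_MASK value differs.
 * The low bits are flag bits and are ignored by the comparison. */
#define OWNER_MASK 0xfffffffcu

/* a ^ b is 0 exactly where the bits agree, so masking the XOR and comparing
 * to 0 asks "do the owner bits of lock_value and tid match?" */
static bool lock_is_locked_by(uint32_t lock_value, uint32_t tid) {
    return ((lock_value ^ tid) & OWNER_MASK) == 0;
}
```

With this shape, flag bits stored in the low bits of the lock word never disturb the owner comparison, which is why libdispatch can pack extra state into the same word.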

Reason for sequential execution of synchronous functions + concurrent queues

In the _dispatch_sync_invoke_and_complete -> _dispatch_sync_function_invoke_inline source, there are three main steps:

  • Push the task onto the queue: _dispatch_thread_frame_push
  • Execute the task's block callback: _dispatch_client_callout
  • Pop the task from the queue: _dispatch_thread_frame_pop

From the implementation we can see that each task is pushed onto the queue, its block callback executes, and then the task is popped — so the tasks execute sequentially.
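The push -> callout -> pop order can be modeled with a tiny thread-frame stack: each sync task is linked in, run, and unlinked before the next one starts, which is exactly why tasks submitted synchronously from one thread run in order. This is a simplified single-threaded model, not libdispatch code:

```c
#include <assert.h>
#include <stddef.h>

typedef struct frame {
    const char *label;
    struct frame *prev;   /* link to the frame below us on the stack */
} frame_t;

static frame_t *g_top = NULL;       /* top of the thread-frame stack */
static const char *g_order[8];      /* record of execution order */
static int g_count = 0;

/* push -> callout -> pop, mirroring the shape of
 * _dispatch_sync_function_invoke_inline. */
static void sync_invoke(const char *label, void (*func)(const char *)) {
    frame_t f = { label, g_top };
    g_top = &f;            /* _dispatch_thread_frame_push  */
    func(label);           /* _dispatch_client_callout     */
    g_top = f.prev;        /* _dispatch_thread_frame_pop   */
}

static void record(const char *label) { g_order[g_count++] = label; }
```

Because the pop happens before `sync_invoke` returns, a second call can only begin after the first task's callout has fully completed — the FIFO behavior of dispatch_sync on the calling thread.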

3.2 Summary

The underlying implementation of the synchronization function is as follows:

  • The bottom layer of the synchronization function is actually the synchronous barrier function
  • In the synchronization function, if the queue currently executing and the queue being waited on are the same, the mutual waiting results in a deadlock

The underlying implementation process of the synchronization function is shown in the figure

4. Exploration of underlying principles of GCD singleton

4.1 dispatch_once

static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    NSLog(@"singleton");
});

Entering the dispatch_once source implementation, the bottom layer is implemented through dispatch_once_f

  • Parameter 1: onceToken, a static variable. A static variable defined at a given location is distinct from one defined anywhere else, so onceToken uniquely identifies this call site
  • Parameter 2: block callback
void
dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
    dispatch_once_f(val, block, _dispatch_Block_invoke(block));
}

4.1.1 dispatch_once_f source

DISPATCH_NOINLINE
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
	dispatch_once_gate_t l = (dispatch_once_gate_t)val;

#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	uintptr_t v = os_atomic_load(&l->dgo_once, acquire); // load
	if (likely(v == DLOCK_ONCE_DONE)) { // already executed: return at once
		return;
	}
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	if (likely(DISPATCH_ONCE_IS_GEN(v))) {
		return _dispatch_once_mark_done_if_quiesced(l, v);
	}
#endif
#endif
	if (_dispatch_once_gate_tryenter(l)) { // try to take the gate (lock)
		return _dispatch_once_callout(l, ctxt, func);
	}
	return _dispatch_once_wait(l); // wait "infinitely" until done
}

In the dispatch_once_f source, val is the onceToken static variable passed in from outside, and func is _dispatch_Block_invoke(block). The singleton's underlying flow is mainly divided into the following steps

  • Convert val, which is a static variable, to l of type dispatch_once_gate_t
  • Atomically load the task state v via os_atomic_load
    • If v equals DLOCK_ONCE_DONE, the task has already been executed; return immediately
    • On the quiescent-counter path, if v is a lock generation, _dispatch_once_mark_done_if_quiesced stores the state again, setting the marker to DLOCK_ONCE_DONE
    • Otherwise _dispatch_once_gate_tryenter attempts to enter the task, i.e. take the lock, and on success _dispatch_once_callout executes the block callback
  • If a second caller arrives while the task is still executing, _dispatch_once_wait puts it into an effectively infinite wait

4.1.2 _dispatch_once_gate_tryenter lock attempt

DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_once_gate_tryenter(dispatch_once_gate_t l)
{
	// Compare first, then exchange: succeeds only if dgo_once still holds
	// DLOCK_ONCE_UNLOCKED, in which case this thread's lock value is stored
	return os_atomic_cmpxchg(&l->dgo_once, DLOCK_ONCE_UNLOCKED,
			(uintptr_t)_dispatch_lock_value_for_self(), relaxed);
}

In the source code, the underlying os_atomic_cmpxchg compares dgo_once against DLOCK_ONCE_UNLOCKED. If it still holds that value, the task is locked: the current thread's lock value is stored into the gate, and the function returns true.

4.1.3 _dispatch_once_callout callback

DISPATCH_NOINLINE
static void
_dispatch_once_callout(dispatch_once_gate_t l, void *ctxt,
		dispatch_function_t func)
{
	_dispatch_client_callout(ctxt, func); // invoke the block
	_dispatch_once_gate_broadcast(l); // broadcast: done, stop waiting on me
}

Enter the _dispatch_once_callout source code, there are two main steps

  • _dispatch_client_callout: block callback execution
  • _dispatch_once_gate_broadcast: indicates that broadcast is performed

4.1.3.1 _dispatch_client_callout source

#undef _dispatch_client_callout
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	@try {
		return f(ctxt);
	} @catch (...) {
		objc_terminate();
	}
}

In the _dispatch_client_callout source, the main job is to execute the block callback, where f equals _dispatch_Block_invoke(block), i.e. the block's invoke function

4.1.3.2 _dispatch_once_gate_broadcast -> _dispatch_once_mark_done source

DISPATCH_ALWAYS_INLINE
static inline uintptr_t
_dispatch_once_mark_done(dispatch_once_gate_t dgo)
{
	// Atomically exchange dgo->dgo_once to DLOCK_ONCE_DONE
	return os_atomic_xchg(&dgo->dgo_once, DLOCK_ONCE_DONE, release);
}

In the _dispatch_once_gate_broadcast -> _dispatch_once_mark_done source, dgo->dgo_once is assigned the value DLOCK_ONCE_DONE, marking the task as done, i.e. the gate is unlocked and any waiters are released.

4.2 Summary

The bottom-level implementation of the singleton can be summarized as follows:

  • [Singletons execute only once]: A GCD singleton has two important parameters, onceToken and the block. onceToken is a static variable and therefore unique to its call site; at the bottom layer it is wrapped as a variable l of type dispatch_once_gate_t. l is used to atomically load the associated state, the variable v; if v equals DLOCK_ONCE_DONE, the task has already been executed once and the call returns immediately
  • [When the block is invoked]: If the task has not executed yet, an atomic compare-and-exchange locks the task, storing the current thread's lock value into the gate; this guarantees exactly one caller proceeds. After locking, the block callback is executed; once it completes, the task is unlocked and its state is set to DLOCK_ONCE_DONE, so subsequent calls return immediately without executing the block
  • [Multithreading impact]: If another caller arrives while the current task is executing, it enters an effectively infinite wait, because the executing task holds the lock and other callers cannot acquire it; they are released once the broadcast marks the task done

The underlying flowchart of the singleton