In this chapter we mainly analyze the underlying source implementation of synchronous and asynchronous functions. Before exploring, let's first outline the differences between synchronous and asynchronous functions (a quick code illustration follows the list):

  1. Whether threads are created
  2. Whether the task's callback executes synchronously or asynchronously
  3. What causes a deadlock
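
As a minimal sketch of the first two points (the queue name is arbitrary): dispatch_sync blocks the caller until the block finishes, while dispatch_async returns immediately and runs the block later, usually on a GCD worker thread.

dispatch_queue_t q = dispatch_queue_create("com.demo.queue", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"1");
dispatch_sync(q, ^{ NSLog(@"2"); });  // blocks: "2" always prints before "3"
NSLog(@"3");
dispatch_async(q, ^{ NSLog(@"4"); }); // returns immediately: "4" usually prints after "5"
NSLog(@"5");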

Synchronization function dispatch_sync

  1. dispatch_sync
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	uintptr_t dc_flags = DC_FLAG_BLOCK;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
	}
	_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
  1. _dispatch_sync_f
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
		uintptr_t dc_flags)
{
	_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}
  1. _dispatch_sync_f_inline
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	// dq_width == 1: serial queue, take the barrier path
	if (likely(dq->dq_width == 1)) {
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
			_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}
  1. _dispatch_barrier_sync_f
static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	_dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}
  1. _dispatch_barrier_sync_f_inline
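Its body is abridged below from the libdispatch source (the exact code varies by version; treat this as a sketch). The part that matters for what follows: when the barrier lock cannot be acquired, the serial queue falls into _dispatch_sync_f_slow.

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	dispatch_tid tid = _dispatch_tid_self();
	...
	dispatch_lane_t dl = upcast(dq)._dl;
	// If the barrier cannot be acquired (e.g. the queue is already
	// draining on some thread), fall into the slow path
	if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
				DC_FLAG_BARRIER | dc_flags);
	}
	...
}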

Here we're going to focus on the _dispatch_sync_f_slow function, which is where the error we usually see when we hit a deadlock is raised.

  1. Simulate deadlocks
- (void)textDemo1{
    dispatch_queue_t queue = dispatch_queue_create("chenxi", DISPATCH_QUEUE_SERIAL);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_sync(queue, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
}

Let’s run the code here to simulate the deadlock.

When we run it, we can see that the error is actually raised in the __DISPATCH_WAIT_FOR_QUEUE__ function, so let's take a look at its implementation.

  1. __DISPATCH_WAIT_FOR_QUEUE__
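The relevant part of its implementation, abridged from the libdispatch source (the full function also parks the thread and waits), is the ownership check below:

static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
	uint64_t dq_state = _dispatch_wait_prepare(dq);
	// If the thread about to wait (dsc->dsc_waiter) already owns the queue
	// it is waiting on, this can never make progress: crash with the
	// familiar deadlock diagnostic
	if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
		DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
				"dispatch_sync called on queue "
				"already owned by current thread");
	}
	...
}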

Here we can see that these messages match the error in the stack trace, so if this condition is true a deadlock is detected and an exception is thrown. Let's look at when the condition becomes true: dsc->dsc_waiter is the ID of the current (waiting) thread, and dq_state is the queue's state.

  1. _dq_state_drain_locked_by
static inline bool
_dq_state_drain_locked_by(uint64_t dq_state, dispatch_tid tid)
{
	return _dispatch_lock_is_locked_by((dispatch_lock)dq_state, tid);
}

static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
	// Equivalent to _dispatch_lock_owner(lock_value) == tid:
	// XOR clears the owner bits when they match, so the masked
	// result is 0 exactly when tid is the owner of the lock
	return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

From the source we can see that the deadlock condition is (lock_value ^ tid) & DLOCK_OWNER_MASK == 0, i.e. the thread that currently owns (is draining) the queue is the very thread that is now waiting on it: the serial queue is executing a task on this thread, that task synchronously dispatches back to the same queue, each side waits on the other, and we deadlock.
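Given that condition, the fix is to make sure the queue that dispatch_sync targets is not the queue the current thread is already draining. A minimal sketch of two common fixes (queue names are arbitrary):

dispatch_queue_t queue = dispatch_queue_create("chenxi", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t other = dispatch_queue_create("chenxi.other", DISPATCH_QUEUE_SERIAL);

dispatch_async(queue, ^{
    // Fix 1: synchronously target a *different* queue, so the owner of
    // the target queue can never be this thread
    dispatch_sync(other, ^{ NSLog(@"3"); });
    // Fix 2: or dispatch asynchronously to the same queue, so this block
    // does not block waiting on itself
    dispatch_async(queue, ^{ NSLog(@"3'"); });
});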

Synchronization function: task synchronization

Here we will analyze the execution flow of the underlying code executing the synchronization function under the global concurrent queue.

We execute this code, and in the _dispatch_barrier_sync_f_inline function we set symbolic breakpoints on each of the possible branch functions.

When we run it, we can see that it executes the _dispatch_sync_f_slow function, so let's follow that function.

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
	dispatch_queue_t top_dq = top_dqu._dq;
	dispatch_queue_t dq = dqu._dq;
	if (unlikely(!dq->do_targetq)) {
		return _dispatch_sync_function_invoke(dq, ctxt, func);
	}

	pthread_priority_t pp = _dispatch_get_priority();
	struct dispatch_sync_context_s dsc = {
		.dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
		.dc_func     = _dispatch_async_and_wait_invoke,
		.dc_ctxt     = &dsc,
		.dc_other    = top_dq,
		.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
		.dc_voucher  = _voucher_get(),
		.dsc_func    = func,
		.dsc_ctxt    = ctxt,
		.dsc_waiter  = _dispatch_tid_self(),
	};

	_dispatch_trace_item_push(top_dq, &dsc);
	__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

	if (dsc.dsc_func == NULL) {
		// dsc_func being cleared means that the block ran on another thread ie.
		// case (2) as listed in _dispatch_async_and_wait_f_slow.
		dispatch_queue_t stop_dq = dsc.dc_other;
		return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
	}

	_dispatch_introspection_sync_begin(top_dq);
	_dispatch_trace_item_pop(top_dq, &dsc);
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
			DISPATCH_TRACE_ARG(&dsc));
}

You can see that there are still several possible paths in the _dispatch_sync_f_slow function, so let's continue with symbolic breakpoints and see which function executes next.

After the breakpoint, you can see that the _dispatch_sync_function_invoke function is executed, so follow up on this function.

static void
_dispatch_sync_function_invoke(dispatch_queue_class_t dq, void *ctxt,
		dispatch_function_t func)
{
	_dispatch_sync_function_invoke_inline(dq, ctxt, func);
}
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
		dispatch_function_t func)
{
	dispatch_thread_frame_s dtf;
	_dispatch_thread_frame_push(&dtf, dq);
	_dispatch_client_callout(ctxt, func);
	_dispatch_perfmon_workitem_inc();
	_dispatch_thread_frame_pop(&dtf);
}

We talked about _dispatch_client_callout in GCD source analysis (1). When this line runs, func is invoked directly on the current thread, which is why the task executes synchronously.
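A quick way to observe this (a minimal sketch; the queue name is arbitrary) is to print the current thread on both sides of a dispatch_sync — the block runs inline on the calling thread:

dispatch_queue_t q = dispatch_queue_create("com.demo.sync", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"caller: %@", [NSThread currentThread]);
dispatch_sync(q, ^{
    // Same thread as the caller: func was invoked directly by
    // _dispatch_client_callout, with no thread hop
    NSLog(@"block : %@", [NSThread currentThread]);
});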

Asynchronous function analysis

  1. dispatch_async
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME;
	dispatch_qos_t qos;

	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

Here the task is wrapped into a dispatch_continuation_t along with its qos, and the _dispatch_continuation_async function is executed.

  1. _dispatch_continuation_async
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
		dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
	if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
		_dispatch_trace_item_push(dqu, dc);
	}
#else
	(void)dc_flags;
#endif
	return dx_push(dqu._dq, dc, qos);
}

As mentioned earlier, dx_push is a macro definition, and the function it resolves to varies with the type of queue (see the definition below).
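For reference, dx_push is defined in the libdispatch source the same way as the dx_wakeup macro shown later: it forwards to the queue's vtable slot, which is why the concrete push function depends on the queue type.

#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)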

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
	.do_type        = DISPATCH_QUEUE_CONCURRENT_TYPE,
	.do_dispose     = _dispatch_lane_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,

	.dq_activate    = _dispatch_lane_activate,
	.dq_wakeup      = _dispatch_lane_wakeup,
	.dq_push        = _dispatch_lane_concurrent_push,
);

Let’s look at the case of concurrent queues here.

  1. _dispatch_lane_concurrent_push
void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	// <rdar://problem/24738102&24743140> reserving non barrier width
	// doesn't fail if only the ENQUEUED bit is set (unlike its barrier
	// width equivalent), so we have to check that this thread hasn't
	// enqueued anything ahead of this call or we can break ordering
	if (dq->dq_items_tail == NULL &&
			!_dispatch_object_is_waiter(dou) &&
			!_dispatch_object_is_barrier(dou) &&
			_dispatch_queue_try_acquire_async(dq)) {
		return _dispatch_continuation_redirect_push(dq, dou, qos);
	}

	_dispatch_lane_push(dq, dou, qos);
}

One thing to notice here is that the push first checks whether the object is a barrier (or a waiter) before taking the fast redirect path; a barrier block must go through _dispatch_lane_push instead. A usage sketch follows.
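A minimal usage sketch (queue name arbitrary) of the barrier case this check guards against — note that barriers only have this effect on custom concurrent queues, not the global ones:

dispatch_queue_t q = dispatch_queue_create("com.demo.rw", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(q, ^{ NSLog(@"read 1"); });
dispatch_async(q, ^{ NSLog(@"read 2"); });
// Flagged DC_FLAG_BARRIER internally, so it cannot take the
// redirect fast path and goes through _dispatch_lane_push
dispatch_barrier_async(q, ^{ NSLog(@"write (exclusive)"); });
dispatch_async(q, ^{ NSLog(@"read 3"); });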

  1. _dispatch_lane_push
void
_dispatch_lane_push(dispatch_lane_t dq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	dispatch_wakeup_flags_t flags = 0;
	struct dispatch_object_s *prev;

	if (unlikely(_dispatch_object_is_waiter(dou))) {
		return _dispatch_lane_push_waiter(dq, dou._dsc, qos);
	}

	dispatch_assert(!_dispatch_object_is_global(dq));
	qos = _dispatch_queue_push_qos(dq, qos);

	// If we are going to call dx_wakeup(), the queue must be retained before
	// the item we're pushing can be dequeued, which means:
	// - before we exchange the tail if we have to override
	// - before we set the head if we made the queue non empty.
	// Otherwise, if preempted between one of these and the call to dx_wakeup()
	// the blocks submitted to the queue may release the last reference to the
	// queue when invoked by _dispatch_lane_drain. <rdar://problem/6932776>
	prev = os_mpsc_push_update_tail(os_mpsc(dq, dq_items), dou._do, do_next);
	if (unlikely(os_mpsc_push_was_empty(prev))) {
		_dispatch_retain_2_unsafe(dq);
		flags = DISPATCH_WAKEUP_CONSUME_2 | DISPATCH_WAKEUP_MAKE_DIRTY;
	} else if (unlikely(_dispatch_queue_need_override(dq, qos))) {
		// There's a race here, _dispatch_queue_need_override may read a stale
		// dq_state value.
		//
		// If it's a stale load from the same drain streak, given that
		// the max qos is monotonic, too old a read can only cause an
		// unnecessary attempt at overriding which is harmless.
		//
		// We'll assume here that a stale load from an a previous drain streak
		// never happens in practice.
		_dispatch_retain_2_unsafe(dq);
		flags = DISPATCH_WAKEUP_CONSUME_2;
	}
	os_mpsc_push_update_prev(os_mpsc(dq, dq_items), prev, dou._do, do_next);
	if (flags) {
		return dx_wakeup(dq, qos, flags);
	}
}
#define dx_wakeup(x, y, z) dx_vtable(x)->dq_wakeup(x, y, z)

In the _dispatch_lane_push function let's continue with symbolic breakpoints. The important thing to note is that dx_wakeup is a macro, and for a concurrent queue dq_wakeup resolves to _dispatch_lane_wakeup (see the vtable above). So the breakpoint to set here is _dispatch_lane_wakeup.

After the breakpoint you can see that the _dispatch_lane_wakeup function is executed.

  1. _dispatch_lane_wakeup
void
_dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags)
{
	dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;

	if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
		return _dispatch_lane_barrier_complete(dqu, qos, flags);
	}
	if (_dispatch_queue_class_probe(dqu)) {
		target = DISPATCH_QUEUE_WAKEUP_TARGET;
	}
	return _dispatch_queue_wakeup(dqu, qos, flags, target);
}

Continuing from this breakpoint, the _dispatch_queue_wakeup function executes next.

  1. _dispatch_queue_wakeup

Inside _dispatch_queue_wakeup we again set symbolic breakpoints on all the possible paths; stepping through shows that the _dispatch_lane_class_barrier_complete function executes next.

  1. _dispatch_lane_class_barrier_complete

This is where the _dispatch_root_queue_push function is executed.

  1. _dispatch_root_queue_push
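The body is not shown in the trace above; abridged from the libdispatch source (QoS-override and kevent-stash fast paths omitted, exact code varies by version), it essentially forwards to the inline push below:

DISPATCH_NOINLINE
void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	// ... QoS override / kevent stashing fast paths omitted ...
	_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}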

  1. _dispatch_root_queue_push_inline
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
		dispatch_object_t _head, dispatch_object_t _tail, int n)
{
	struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
	if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
		return _dispatch_root_queue_poke(dq, n, 0);
	}
}
  1. _dispatch_root_queue_poke
void
_dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
{
	if (!_dispatch_queue_class_probe(dq)) {
		return;
	}
#if !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_POOL
	if (likely(dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE))
#endif
	{
		if (unlikely(!os_atomic_cmpxchg2o(dq, dgq_pending, 0, n, relaxed))) {
			_dispatch_root_queue_debug("worker thread request still pending "
					"for global queue: %p", dq);
			return;
		}
	}
#endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
	return _dispatch_root_queue_poke_slow(dq, n, floor);
}
  1. _dispatch_root_queue_poke_slow

The source implementation of this method has two main steps:

  • Register the callback via the _dispatch_root_queues_init method
  • Create threads in a do-while loop via pthread_create
DISPATCH_NOINLINE
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
	int remaining = n;
	int r = ENOSYS;

	_dispatch_root_queues_init(); // the key
	...
	// The do-while loop creates threads
	do {
		_dispatch_retain(dq); // released in _dispatch_worker_thread
		while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
			if (r != EAGAIN) {
				(void)dispatch_assume_zero(r);
			}
			_dispatch_temporary_resource_shortage();
		}
	} while (--remaining);
	...
}

_dispatch_root_queues_init

  • Entering the _dispatch_root_queues_init source implementation, we find it is a dispatch_once_f singleton (see the underlying analysis of singletons later in this chapter), where the func passed in is _dispatch_root_queues_init_once.
 DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queues_init(void)
{
    dispatch_once_f(&_dispatch_root_queues_pred, NULL, _dispatch_root_queues_init_once);
}

Go to the source for _dispatch_root_queues_init_once, where the call handle registered for the different internal transactions is _dispatch_worker_thread2.

Its block callback executes along the following call path: _dispatch_root_queues_init_once -> _dispatch_worker_thread2 -> _dispatch_root_queue_drain -> _dispatch_continuation_pop_inline -> _dispatch_continuation_invoke_inline -> _dispatch_client_callout -> dispatch_call_block_and_release. This path can be confirmed by setting a breakpoint and printing the stack with bt.

One thing to note here is that the block callback func for a singleton differs from that of an asynchronous function:

  • For a singleton, the func in the block callback is _dispatch_Block_invoke(block)
  • For an asynchronous function, the func in the block callback is dispatch_call_block_and_release

The underlying analysis of asynchronous functions is summarized as follows:

  • Preparation: first, the asynchronous task is copied and wrapped into a continuation, and its callback function func is set
  • Block callback: the bottom layer recurses through dx_push, redirects to the root queue, creates a thread via pthread_create, and finally executes the block callback via dx_invoke (dx_push and dx_invoke come in pairs) — see the sketch below
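
A minimal sketch tying the summary together (queue name arbitrary): the submitting thread returns immediately, and the block is invoked later on a worker thread that the root queue's pool created (or reused) via pthread_create:

dispatch_queue_t q = dispatch_queue_create("com.demo.async", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"submit on %@", [NSThread currentThread]);
dispatch_async(q, ^{
    // Typically a worker thread, not the submitting thread
    NSLog(@"invoke on %@", [NSThread currentThread]);
});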

GCD singleton underlying principle

  1. dispatch_once
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    NSLog(@"singleton");
});

Entering the dispatch_once source implementation, we see the bottom layer is implemented through dispatch_once_f:

  • Parameter 1: onceToken, a static variable; since static variables defined in different locations are distinct, the static variable is unique
  • Parameter 2: the block callback
void
dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
    dispatch_once_f(val, block, _dispatch_Block_invoke(block));
}
  1. dispatch_once_f
DISPATCH_NOINLINE
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
	dispatch_once_gate_t l = (dispatch_once_gate_t)val;

#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	uintptr_t v = os_atomic_load(&l->dgo_once, acquire); // load
	if (likely(v == DLOCK_ONCE_DONE)) {
		return;
	}
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	if (likely(DISPATCH_ONCE_IS_GEN(v))) {
		return _dispatch_once_mark_done_if_quiesced(l, v);
	}
#endif
#endif
	if (_dispatch_once_gate_tryenter(l)) {
		return _dispatch_once_callout(l, ctxt, func);
	}
	return _dispatch_once_wait(l); // wait indefinitely
}

Entering the source code of dispatch_once_f, val is the onceToken static variable passed in from the outside, and func is _dispatch_Block_invoke(block). The underlying singleton flow is mainly divided into the following steps:

  • Convert val, the static variable, into l of type dispatch_once_gate_t

  • Use os_atomic_load to obtain the task state v

    • If v equals DLOCK_ONCE_DONE, the task has already been executed; return directly
    • If marking the task done was deferred after it executed (the quiescent-counter case), go to the _dispatch_once_mark_done_if_quiesced function, which stores again, setting the state to DLOCK_ONCE_DONE
    • Otherwise, try to enter the task via _dispatch_once_gate_tryenter (i.e. acquire the lock), and on success execute _dispatch_once_callout to perform the block callback
  • If a second task comes in while one is executing, the _dispatch_once_wait function puts it into an (effectively) infinite wait

  1. _dispatch_once_gate_tryenter
DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_once_gate_tryenter(dispatch_once_gate_t l)
{
	// First compare, then exchange: if dgo_once is still
	// DLOCK_ONCE_UNLOCKED, atomically swap in this thread's lock value
	return os_atomic_cmpxchg(&l->dgo_once, DLOCK_ONCE_UNLOCKED,
			(uintptr_t)_dispatch_lock_value_for_self(), relaxed);
}

In the source code, the comparison is done with the underlying os_atomic_cmpxchg method: if dgo_once is still DLOCK_ONCE_UNLOCKED, it is atomically swapped to the current thread's lock value, which locks the task.
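If the compare-and-swap semantics are unfamiliar, here is a hypothetical stand-alone illustration in C11 atomics (gate, try_enter, and self_lock_value are made-up names; libdispatch's os_atomic_cmpxchg wraps the same primitive):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static _Atomic(uintptr_t) gate = 0; // 0 plays the role of DLOCK_ONCE_UNLOCKED

static bool try_enter(uintptr_t self_lock_value) {
    uintptr_t expected = 0;
    // Succeeds, storing self_lock_value, only if gate is still 0;
    // otherwise returns false and leaves gate untouched
    return atomic_compare_exchange_strong(&gate, &expected, self_lock_value);
}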

  1. _dispatch_once_callout
DISPATCH_NOINLINE
static void
_dispatch_once_callout(dispatch_once_gate_t l, void *ctxt,
		dispatch_function_t func)
{
	_dispatch_client_callout(ctxt, func); // execute the block callback
	_dispatch_once_gate_broadcast(l);     // broadcast: tell waiters it's done
}

Entering the _dispatch_once_callout source code, there are two main steps:

  • _dispatch_client_callout: executes the block callback
  • _dispatch_once_gate_broadcast: broadcasts that the task is done
  1. _dispatch_client_callout
#undef _dispatch_client_callout
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	@try {
		return f(ctxt);
	}
	@catch (...) {
		objc_terminate();
	}
}

Entering the _dispatch_client_callout source code: its main job is to execute the block callback, where f equals _dispatch_Block_invoke(block), i.e. the block's invoke function.

  1. _dispatch_once_gate_broadcast -> _dispatch_once_mark_done
uintptr_t
_dispatch_once_mark_done(dispatch_once_gate_t dgo)
{
	// Atomically store DLOCK_ONCE_DONE into dgo_once, i.e. mark done & unlock
	return os_atomic_xchg(&dgo->dgo_once, DLOCK_ONCE_DONE, release);
}

Entering the source of _dispatch_once_gate_broadcast -> _dispatch_once_mark_done, it mainly assigns dgo->dgo_once the value DLOCK_ONCE_DONE, marking the task done, i.e. unlocking it.

Summary of the underlying implementation of singletons, mainly explained as follows:

  • Singletons are executed only once: in a GCD singleton there are two important parameters, onceToken and block. onceToken is a static variable, unique by definition, and is wrapped at the bottom layer into a variable l of type dispatch_once_gate_t. l is mainly used to access the underlying atomically-stored state, the variable v, through which the task status can be queried: if v equals DLOCK_ONCE_DONE, the task has already been processed once and returns directly.

  • When the block is invoked: if the task has not yet executed, it is locked via an atomic compare-and-swap that succeeds only if the state is still DLOCK_ONCE_UNLOCKED, setting the state to the current thread's lock value and guaranteeing the uniqueness of the current execution. After locking, the block callback executes; on completion the state is set to DLOCK_ONCE_DONE, so the next call in will not execute the task and returns directly.
  • Impact of multi-threading: if another task enters while the current task is executing, it goes into an effectively infinite wait, because the current task holds the lock and other tasks cannot acquire it until it is released (see the sketch below).
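
As a closing check, a minimal sketch (names arbitrary) that hammers a dispatch_once from several threads at once; the block body still runs exactly once:

static dispatch_once_t onceToken;
static int initCount = 0;

for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        dispatch_once(&onceToken, ^{
            initCount++; // executes exactly once, even under contention
        });
        // Threads that lost the race waited in _dispatch_once_wait until the
        // winner broadcast DLOCK_ONCE_DONE, so initCount is always 1 here
        NSLog(@"initCount = %d", initCount);
    });
}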