
Foreword: exploring the underlying principles of iOS is a road I keep walking, accumulating a step at a time through everyday development and study. I record my journey of discovery here in the hope that it will be helpful to readers.

The directory is as follows:

  1. Exploring the underlying principles of iOS: alloc
  2. Exploring the underlying principles of iOS: struct memory alignment
  3. Exploring the underlying principles of iOS: the nature of objects & the underlying implementation of isa
  4. Exploring the underlying principles of iOS: isa and class (part 1)
  5. Exploring the underlying principles of iOS: isa and class (part 2)
  6. Exploring the underlying principles of iOS: isa and class (part 3)
  7. Exploring the underlying principles of iOS: the nature of Runtime & methods
  8. Exploring the underlying principles of iOS: objc_msgSend
  9. Exploring the underlying principles of iOS: the Runtime slow method lookup process
  10. Exploring the underlying principles of iOS: dynamic method resolution
  11. Exploring the underlying principles of iOS: the message forwarding process
  12. Exploring the underlying principles of iOS: application loading (part 1)
  13. Exploring the underlying principles of iOS: application loading (part 2)
  14. Exploring the underlying principles of iOS: class loading
  15. Exploring the underlying principles of iOS: category loading
  16. Exploring the underlying principles of iOS: associated objects
  17. Exploring the underlying principles of iOS: KVC
  18. Exploring the underlying principles of iOS: KVO principles
  19. Exploring the underlying principles of iOS: rewriting KVO
  20. Exploring the underlying principles of iOS: multithreading
  21. Exploring the underlying principles of iOS: GCD functions and queues
  22. Exploring the underlying principles of iOS: GCD principles (part 1)

Summary column for the series:

  • Summary of the phase of iOS underlying principle exploration

Collected details:

  • Summary of iOS development details

Preface

Let's pick up from the classic deadlock case at the end of GCD functions and queues: starting from the fact that a deadlock occurs, we will trace down to the fundamental reason why it happens. Without further ado, let's begin.

First, recall two passages quoted in our article on GCD functions and queues:

  • Creating serial dispatch queues

Serial queues are useful when we want tasks to be executed in a particular order. A serial queue executes only one task at a time and always pulls tasks from the head of the queue. We can use serial queues instead of locks to protect shared resources or variable data structures. Unlike locks, serial queues ensure that tasks execute in a predictable order. And as long as we commit tasks asynchronously to a serial queue, the queue will never deadlock.

  • Adding a single task to a queue:

There are two ways to add tasks to a queue: asynchronously or synchronously. Whenever possible, asynchronous execution with the dispatch_async and dispatch_async_f functions is preferred over the synchronous alternatives. When you add a block object or function to a queue, you have no way of knowing when that code will execute. Because of that, adding blocks or functions asynchronously lets you schedule the execution of your code and continue doing other work from the calling thread. This is especially important if you are scheduling tasks from your application's main thread, perhaps in response to some user event.

Although you should add tasks asynchronously whenever possible, sometimes you may need to add tasks synchronously to prevent race conditions or other synchronization errors. In these cases, you can add tasks to the queue using the dispatch_sync and dispatch_sync_f functions. These functions block the current thread of execution until the specified task completes execution.

Important: You should never call the dispatch_sync or dispatch_sync_f functions from a task that is executing on the same queue you plan to pass to the function. Doing so is guaranteed to deadlock for serial queues, and should be avoided for concurrent queues as well.

The two passages above explain the concept of serial queues and the two ways of adding tasks to queues. In general, using serial queues properly, and adding tasks to queues the right way, is what keeps us clear of deadlocks.

Let’s start with the deadlock in our case.

Deadlock occurs

As the Important note above explains, we should never synchronously submit a task to the very queue it is already executing on. Doing so is guaranteed to deadlock for serial queues, and should be avoided for concurrent queues as well.

Sure, this is the key idea for avoiding deadlocks, but it is still easy to violate. In actual development, consider code like the following:

A serial queue, with one synchronous dispatch nested inside another synchronous dispatch on the same queue.
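The original screenshot of the case is not reproduced here; below is a minimal sketch of the same pattern, using the function-pointer variant dispatch_sync_f so it stays plain C (the queue label and function names are placeholders of mine, and it builds with clang on Apple platforms):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

static void inner(void *ctxt) {
    // Never reached: the queue is still draining outer().
    printf("inner task\n");
}

static void outer(void *ctxt) {
    dispatch_queue_t serial = ctxt;
    printf("outer task\n");
    // Synchronous submission to the queue we are already running on:
    // inner() cannot start until outer() returns, and outer() cannot
    // return until inner() finishes. libdispatch detects this in
    // __DISPATCH_WAIT_FOR_QUEUE__ and crashes with
    // "dispatch_sync called on queue already owned by current thread".
    dispatch_sync_f(serial, serial, inner);
}

int main(void) {
    dispatch_queue_t serial =
            dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_sync_f(serial, serial, outer);
    return 0;
}
```

The block-based dispatch_sync version in the article's screenshot behaves identically; the inner synchronous submission is the fatal step either way.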

At line 180, the program deadlocks, as the stack trace shows:

libdispatch.dylib`_dispatch_sync_f_slow -> libdispatch.dylib`__DISPATCH_WAIT_FOR_QUEUE__

Tracing the process

Put a breakpoint at line 180. From the GCD functions and queues section, we know that dispatch_sync executes along the following path:

dispatch_sync -> _dispatch_sync_f -> _dispatch_sync_f_inline

_dispatch_sync_f_inline contains several branches, so we set a symbolic breakpoint on each one:

_dispatch_sync_f_inline

static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	if (likely(dq->dq_width == 1)) {
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
			_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

The first branch hit is _dispatch_barrier_sync_f:

_dispatch_barrier_sync_f

static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	_dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

From there we step into _dispatch_barrier_sync_f_inline, which in turn will take us to _dispatch_sync_f_slow. Let's look at the implementation of _dispatch_barrier_sync_f_inline:

_dispatch_barrier_sync_f_inline

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	dispatch_tid tid = _dispatch_tid_self();

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// The more correct thing to do would be to merge the qos of the thread
	// that just acquired the barrier lock into the queue state.
	//
	// However this is too expensive for the fast path, so skip doing it.
	// The chosen tradeoff is that if an enqueue on a lower priority thread
	// contends with this fast path, this thread may receive a useless override.
	//
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
				DC_FLAG_BARRIER | dc_flags);
	}

	if (unlikely(dl->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func,
				DC_FLAG_BARRIER | dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
			DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
					dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

(The internal implementations of _dispatch_sync_f_inline and _dispatch_barrier_sync_f_inline are quite similar.)

The second branch inside it is the call to _dispatch_sync_f_slow:

_dispatch_sync_f_slow

DISPATCH_NOINLINE
static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
	dispatch_queue_t top_dq = top_dqu._dq;
	dispatch_queue_t dq = dqu._dq;
	if (unlikely(!dq->do_targetq)) {
		return _dispatch_sync_function_invoke(dq, ctxt, func);
	}

	pthread_priority_t pp = _dispatch_get_priority();
	struct dispatch_sync_context_s dsc = {
		.dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
		.dc_func     = _dispatch_async_and_wait_invoke,
		.dc_ctxt     = &dsc,
		.dc_other    = top_dq,
		.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
		.dc_voucher  = _voucher_get(),
		.dsc_func    = func,
		.dsc_ctxt    = ctxt,
		.dsc_waiter  = _dispatch_tid_self(),
	};

	_dispatch_trace_item_push(top_dq, &dsc);
	__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

	if (dsc.dsc_func == NULL) {
		// dsc_func being cleared means that the block ran on another thread ie.
		// case (2) as listed in _dispatch_async_and_wait_f_slow.
		dispatch_queue_t stop_dq = dsc.dc_other;
		return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
	}

	_dispatch_introspection_sync_begin(top_dq);
	_dispatch_trace_item_pop(top_dq, &dsc);
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
			DISPATCH_TRACE_ARG(&dsc));
}

Looking at the implementation, it is somewhat similar to the two above (_dispatch_sync_f_inline and _dispatch_barrier_sync_f_inline). Again, none of the symbolic breakpoints on the branches is hit; the program crashes directly. From the stack, we land in the call __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq).

The final crash site

__DISPATCH_WAIT_FOR_QUEUE__

DISPATCH_NOINLINE
static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
	uint64_t dq_state = _dispatch_wait_prepare(dq);
	if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
		DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
				"dispatch_sync called on queue "
				"already owned by current thread");
	}
	// ...
}

The program crashes here, so what we need to focus on is the if condition: what condition, when satisfied, makes the program crash?

The check compares two values:

// 1. The ID of the current thread
#define _dispatch_tid_self() ((dispatch_tid)_dispatch_thread_port())

#define _dispatch_thread_port() pthread_mach_thread_np(_dispatch_thread_self())

#define _dispatch_thread_self() ((uintptr_t)pthread_self())

// 2. The state of the queue
uint64_t dq_state = _dispatch_wait_prepare(dq);

_dq_state_drain_locked_by

DISPATCH_ALWAYS_INLINE
static inline bool
_dq_state_drain_locked_by(uint64_t dq_state, dispatch_tid tid)
{
	return _dispatch_lock_is_locked_by((dispatch_lock)dq_state, tid);
}

#define DLOCK_OWNER_MASK ((dispatch_lock)0xfffffffc)

DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
	// equivalent to _dispatch_lock_owner(lock_value) == tid
	// lock_value ^ tid zeroes every bit where the lock's recorded owner
	// and tid agree; DLOCK_OWNER_MASK ((dispatch_lock)0xfffffffc) then
	// drops the two low flag bits, so the whole expression is 0 exactly
	// when the queue's drain lock is held by the thread tid
	return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

Conclusion

In the end, the crash fires because the owner recorded in the queue's drain-lock state is the very thread that is now waiting. In other words, the current thread is blocked inside dispatch_sync, waiting for the queue to finish its current task; yet the task it just submitted can only run on that same thread, because the serial queue's current task (the outer one) has not finished. The queue is being asked to schedule new work onto a thread that is itself waiting, so the new task can never start.

The thread is waiting for a task that can only begin once the wait is over. That is a contradiction with no way out, execution cannot continue, and so a deadlock occurs.
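The contradiction disappears as soon as the inner task no longer makes the current thread wait on its own queue. A hedged sketch of the usual fix, submitting the inner task asynchronously (same placeholder names as before, Apple platforms, plain C via the function-pointer API):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

static void inner(void *ctxt) {
    printf("inner task\n");
}

static void noop(void *ctxt) {
    (void)ctxt; // used only to wait for the queue to drain
}

static void outer(void *ctxt) {
    dispatch_queue_t serial = ctxt;
    printf("outer task\n");
    // Asynchronous submission: outer() returns immediately, the serial
    // queue finishes draining it, and only then picks up inner(). No
    // thread ever waits on a task that must run on itself, so no deadlock.
    dispatch_async_f(serial, serial, inner);
}

int main(void) {
    dispatch_queue_t serial =
            dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_sync_f(serial, serial, outer);
    // Safe: main is not running on `serial`, so this sync call merely
    // waits for inner() to finish before the process exits.
    dispatch_sync_f(serial, NULL, noop);
    return 0;
}
```

The other common way out is to target a different queue for the inner work; either way, the ownership check in __DISPATCH_WAIT_FOR_QUEUE__ never sees the waiting thread as the queue's current owner.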