This is the 17th day of my participation in the August Challenge

Review

In the last blog, we made a detailed analysis of the underlying source code of GCD's synchronous/asynchronous functions and the singleton. In this blog, we will explore and analyze the underlying implementation of the barrier function!

Multithreading in iOS (1) – Processes and Threads

iOS Low-Level Exploration of Multithreading (2) – Threads and Locks

iOS Low-Level Exploration of Multithreading (3) – GCD

iOS Low-Level Exploration of Multithreading (4) – GCD Queues

iOS Low-Level Exploration of Multithreading (5) – Source Analysis of the Different GCD Queues

iOS Low-Level Exploration of Multithreading (6) – GCD Source Analysis (sync and async Functions)

iOS Low-Level Exploration of Multithreading (7) – GCD Source Analysis (Causes of Deadlock)

iOS Low-Level Exploration of Multithreading (8) – GCD Source Analysis (Synchronous Functions, Asynchronous Functions, Singleton)

1. Basic Introduction to the Barrier Function

1.1 The Role of the Barrier Function

Role of the barrier function:

The most direct effect: it controls the order in which tasks execute, that is, it achieves a synchronization effect.

  • dispatch_barrier_async: the barrier task starts only after the tasks submitted before it have finished
  • dispatch_barrier_sync: has the same barrier effect, but additionally blocks the current thread and therefore affects the execution of subsequent code

Note: the barrier function can only control tasks in the same concurrent queue as the barrier itself.

1.2 Barrier Function Examples

Let's take dispatch_barrier_async as an example:
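The original code screenshot is not included here, so the following is a minimal Objective-C sketch of this kind of test; the queue label, task numbering, sleep and method name are illustrative, not the author's original code.

- (void)demo_barrierAsync {
    // custom concurrent queue: the barrier only takes effect on this kind of queue
    dispatch_queue_t queue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"task 1 - %@", [NSThread currentThread]);
    });
    dispatch_async(queue, ^{
        NSLog(@"task 2 - %@", [NSThread currentThread]);
    });

    // barrier task: runs only after task 1 and task 2 have finished,
    // but dispatch_barrier_async does not block the current thread
    dispatch_barrier_async(queue, ^{
        NSLog(@"---- barrier ---- %@", [NSThread currentThread]);
    });

    dispatch_async(queue, ^{
        NSLog(@"task 3 - %@", [NSThread currentThread]);
    });
    NSLog(@"code after the barrier call");
}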

  • From the console output: in the same queue, the task inside the barrier block executes only after the tasks in front of the barrier have finished, but dispatch_barrier_async does not block the current thread.
  • The tasks behind the barrier can still execute, but only after the barrier task, which in turn has to wait for every task in front of it to complete.

That is, task 1 and task 2 always finish before the barrier task runs.

Next, dispatch_barrier_sync:
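Again, the original screenshot is missing, so here is a minimal sketch under the same assumptions as before, with only the barrier call changed to the synchronous version:

- (void)demo_barrierSync {
    dispatch_queue_t queue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"task 1 - %@", [NSThread currentThread]);
    });
    dispatch_async(queue, ^{
        NSLog(@"task 2 - %@", [NSThread currentThread]);
    });

    // synchronous barrier: blocks the current thread until task 1 and task 2
    // have finished and the barrier block itself has run
    dispatch_barrier_sync(queue, ^{
        NSLog(@"---- barrier ---- %@", [NSThread currentThread]);
    });

    // this line prints only after the barrier block has completed
    NSLog(@"code after the barrier call");

    dispatch_async(queue, ^{
        NSLog(@"task 3 - %@", [NSThread currentThread]);
    });
}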

The code above 👆 simply changes the asynchronous barrier into the synchronous one; let's see what happens.

  • From the console output: the tasks in front of the barrier execute normally, but the tasks behind the barrier have to wait.
  • dispatch_barrier_sync blocks the current thread; the code after the barrier call does not execute until the barrier block itself has finished.

Remember the statement above: the barrier function can only control the same concurrent queue. So let's see what happens when the barrier is submitted to a different concurrent queue.
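A minimal sketch of this test (queue2 and its label are illustrative assumptions): the tasks go to queue, while the barrier is submitted to a different concurrent queue, queue2.

- (void)demo_barrierOnAnotherQueue {
    dispatch_queue_t queue  = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queue2 = dispatch_queue_create("com.example.concurrent2", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"task 1 - %@", [NSThread currentThread]);
    });
    dispatch_async(queue, ^{
        NSLog(@"task 2 - %@", [NSThread currentThread]);
    });

    // the barrier is on queue2, not on queue, so it cannot order tasks 1/2/3;
    // switching this call to dispatch_barrier_sync(queue2, ...) only blocks the
    // current thread until queue2 (which is empty) runs the barrier block
    dispatch_barrier_async(queue2, ^{
        NSLog(@"---- barrier on queue2 ---- %@", [NSThread currentThread]);
    });

    dispatch_async(queue, ^{
        NSLog(@"task 3 - %@", [NSThread currentThread]);
    });
}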

We put the barrier on another concurrent queue and find that it does not hold the tasks back at all. Is that because the barrier is asynchronous?

So let's switch it to dispatch_barrier_sync and see whether that can stop them.

From the run results, the tasks still cannot be held back. In other words, when the barrier is not in the same concurrent queue as the tasks, it has no effect whether it is synchronous or asynchronous; the barrier only works within the same concurrent queue!

Let's take another example 🌰, this time using a global concurrent queue.
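The original snippet is not shown either; here is a minimal sketch using dispatch_get_global_queue instead of a custom concurrent queue (task numbering and sleep are illustrative):

- (void)demo_barrierOnGlobalQueue {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_async(queue, ^{
        [NSThread sleepForTimeInterval:1];
        NSLog(@"task 1 - %@", [NSThread currentThread]);
    });
    dispatch_async(queue, ^{
        NSLog(@"task 2 - %@", [NSThread currentThread]);
    });

    // on the global concurrent queue the barrier behaves like an ordinary
    // async task: it does not wait for task 1 and task 2
    dispatch_barrier_async(queue, ^{
        NSLog(@"---- barrier ---- %@", [NSThread currentThread]);
    });

    dispatch_async(queue, ^{
        NSLog(@"task 3 - %@", [NSThread currentThread]);
    });
}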

From the printed results, the barrier does not take effect on the global concurrent queue either; only a custom concurrent queue works. Why? Let's dig into the underlying source code and see whether we can find the answer!

2. Barrier Function Source Code Analysis

2.1 Tracing the Flow

The above gives us a general understanding of the barrier function, so what is its underlying implementation logic? Let's find out.

Searching the libdispatch source for dispatch_barrier_sync, the call chain is dispatch_barrier_sync –> _dispatch_barrier_sync_f –> _dispatch_barrier_sync_f_inline.

_dispatch_barrier_sync_f_inline eventually falls into _dispatch_sync_f_slow, the same method we met when analyzing deadlocks, except that this time it is marked with DC_FLAG_BARRIER!

Just as in the earlier deadlock analysis, we set symbolic breakpoints to keep tracing the flow.

The flow traced from here is: _dispatch_sync_f_slow –> _dispatch_sync_invoke_and_complete_recurse –> _dispatch_sync_complete_recurse. Let's look at the _dispatch_sync_complete_recurse method.

In _dispatch_sync_complete_recurse, dx_wakeup wakes up the queue while a barrier is present; _dispatch_lane_non_barrier_complete runs only after the tasks have completed, meaning the tasks in the current queue are done and no barrier is left. So let's continue down the dx_wakeup flow.

#define dx_wakeup(x, y, z) dx_vtable(x)->dq_wakeup(x, y, z)


So now go to dq_wakeup

We have already seen this when analyzing synchronous and asynchronous functions: for the global concurrent queue, dq_wakeup points to _dispatch_root_queue_wakeup, while for serial and custom concurrent queues it points to _dispatch_lane_wakeup. So what's the difference?

2.2 Custom Concurrent Queue Analysis

Let's first look at _dispatch_lane_wakeup, which the custom concurrent queue uses:

void
_dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags)
{
	dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;

	if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
		return _dispatch_lane_barrier_complete(dqu, qos, flags);
	}
	if (_dispatch_queue_class_probe(dqu)) {
		target = DISPATCH_QUEUE_WAKEUP_TARGET;
	}
	return _dispatch_queue_wakeup(dqu, qos, flags, target);
}
  • It first checks whether a barrier is present (the DISPATCH_WAKEUP_BARRIER_COMPLETE flag); if so, the _dispatch_lane_barrier_complete method is called

  • If there is no barrier, the normal concurrent queue process is followed and the _dispatch_queue_wakeup method is called.

  • Next, let's look at the _dispatch_lane_barrier_complete method:

  • If it is a serial queue, it waits for the other tasks to finish and then executes them in order.
  • If it is a concurrent queue, the _dispatch_lane_drain_non_barriers method is called to finish the tasks in front of the barrier.
  • Finally, the _dispatch_lane_class_barrier_complete method is called, which removes the barrier without blocking, so the tasks behind the barrier can then execute.

2.3 Global Concurrent Queue Analysis

  • For the global concurrent queue, dx_wakeup corresponds to the _dispatch_root_queue_wakeup method; let's look at its source implementation:
void
_dispatch_root_queue_wakeup(dispatch_queue_global_t dq,
		DISPATCH_UNUSED dispatch_qos_t qos, dispatch_wakeup_flags_t flags)
{
	if (!(flags & DISPATCH_WAKEUP_BLOCK_WAIT)) {
		DISPATCH_INTERNAL_CRASH(dq->dq_priority,
				"Don't try to wake up or override a root queue");
	}
	if (flags & DISPATCH_WAKEUP_CONSUME_2) {
		return _dispatch_release_2_tailcall(dq);
	}
}
  • In the global concurrent queue there is no handling of the barrier at all; it is processed like an ordinary concurrent queue.
  • Why doesn't the global concurrent queue handle the barrier function? Because the global concurrent queue is used not only by us but also by the system.
  • If the barrier took effect there, it would block the queue and affect system-level execution, so the barrier function is not suitable for the global concurrent queue.

3. Summary

  • When using the barrier function, it must be in the same queue as the other tasks it needs to control
  • The barrier function cannot be used with a global concurrent queue
  • Besides our own code, the system also uses the global concurrent queue
  • Adding a barrier there would block the queue and affect system-level operation

More content continues to be updated

🌹 If you like it, give it a thumbs up 👍🌹

🌹 If you found this helpful, a wave of bookmark + follow and comment + share will make sure you can find me next time 😁🌹

🌹 Everyone is welcome to leave comments, exchange ideas, and point out mistakes, so we can learn from each other 😁 and improve together 🌹