This is the 14th day of my participation in the August More Text Challenge.

A note before we start: exploring the underlying principles of iOS is a road I keep building, step by step, through everyday development and study. I record my journey of discovery here in the hope that it will be helpful to readers.

The directory is as follows:

  1. Exploring the underlying principles of iOS: alloc
  2. Exploring the underlying principles of iOS: memory alignment of structures
  3. Exploring the underlying principles of iOS: the nature of objects & the underlying implementation of isa
  4. Exploring the underlying principles of iOS: isa and class (part 1)
  5. Exploring the underlying principles of iOS: isa and class (part 2)
  6. Exploring the underlying principles of iOS: isa and class (part 3)
  7. Exploring the underlying principles of iOS: the nature of the Runtime & methods
  8. Exploring the underlying principles of iOS: objc_msgSend
  9. Exploring the underlying principles of iOS: the Runtime slow method lookup process
  10. Exploring the underlying principles of iOS: dynamic method resolution
  11. Exploring the underlying principles of iOS: the message forwarding process
  12. Exploring the underlying principles of iOS: application loading (part 1)
  13. Exploring the underlying principles of iOS: application loading (part 2)
  14. Exploring the underlying principles of iOS: class loading
  15. Exploring the underlying principles of iOS: category loading
  16. Exploring the underlying principles of iOS: associated objects
  17. Exploring the underlying principles of iOS: KVC
  18. Exploring the underlying principles of iOS: KVO principles
  19. Exploring the underlying principles of iOS: rewriting KVO
  20. Exploring the underlying principles of iOS: multithreading
  21. Exploring the underlying principles of iOS: GCD functions and queues
  22. Exploring the underlying principles of iOS: GCD principles (part 1)
  23. iOS low-level: what do you know about deadlocks?
  24. iOS low-level: destroy a singleton
  25. iOS low-level: Dispatch Source

Summary column for the above:

  • A phase summary of iOS underlying principle exploration

Tidying up the details:

  • A summary of iOS development details

Preface

In the previous installment, GCD Principles (Part 1), we focused on GCD functions and queues, GCD's underlying data structures, and the underlying calls of synchronous and asynchronous functions. The follow-up posts What do you know about deadlocks?, Destroy a singleton, and Dispatch Source then covered, respectively, the deadlock problem that can arise when using GCD, the underlying implementation logic of singletons, and applications of Dispatch Source. Today we will look at the underlying implementation of GCD's barrier function, plus applications of semaphores and dispatch groups. This also wraps up the GCD chapter. All right, let's get started.

Barrier function

  • The most direct use of the barrier function is to control the order of task execution (synchronization)
  • A barrier function can only control tasks in the same concurrent queue

dispatch_barrier_async

The block task added by this function is executed only after all previously added tasks have completed; it does not block the calling thread

dispatch_barrier_sync

The block task added by this function is also executed only after all previously added tasks have completed, but it additionally blocks the calling thread, holding up the execution of subsequent tasks
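The behavior described above can be sketched with a toy model. Below is a hedged Python illustration (threads stand in for GCD worker threads; this is in no way how libdispatch is actually implemented): the barrier waits for every task submitted before it, runs by itself, and only then do later tasks run.

```python
import threading
import time

class ToyConcurrentQueue:
    """Toy model of a concurrent queue with a barrier (illustration only;
    NOT how libdispatch is implemented)."""

    def __init__(self):
        self._pending = []              # tasks submitted since the last barrier
        self._lock = threading.Lock()
        self.log = []

    def dispatch_async(self, fn):
        # concurrent tasks may finish in any order relative to each other
        t = threading.Thread(target=fn)
        self._pending.append(t)
        t.start()

    def dispatch_barrier_sync(self, fn):
        # wait for every task submitted earlier, then run the barrier task
        # by itself; the calling thread is blocked the whole time
        for t in self._pending:
            t.join()
        self._pending.clear()
        fn()

    def record(self, msg):
        with self._lock:
            self.log.append(msg)

q = ToyConcurrentQueue()
q.dispatch_async(lambda: (time.sleep(0.10), q.record("task 1")))
q.dispatch_async(lambda: (time.sleep(0.05), q.record("task 2")))
q.dispatch_barrier_sync(lambda: q.record("barrier"))   # waits for 1 and 2
q.dispatch_async(lambda: q.record("task 3"))
q.dispatch_barrier_sync(lambda: None)                  # drain before printing
print(q.log)
```

Whatever order tasks 1 and 2 finish in, the barrier entry always appears after both of them and before task 3.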

Example

We can use the barrier function to control the order of task execution, solving scenarios where one task depends on the result of a previous task. In Case 1, the barrier function will not execute until tasks 1 and 2 have completed.

Case 1

    // Custom concurrent queue
    dispatch_queue_t concurrentQueue = dispatch_queue_create("superman", DISPATCH_QUEUE_CONCURRENT);

    // Task 1: asynchronous function
    dispatch_async(concurrentQueue, ^{
        sleep(5);
        NSLog(@"123");
    });

    // Task 2: asynchronous function
    dispatch_async(concurrentQueue, ^{
        sleep(3);
        NSLog(@"456");
    });

    // Task 3: barrier function
    dispatch_barrier_async(concurrentQueue, ^{
        NSLog(@"-----%@-----", [NSThread currentThread]);
    });

    // Task 4: asynchronous function
    dispatch_async(concurrentQueue, ^{
        NSLog(@"789");
    });

    // Task 5
    NSLog(@"********** good for you");

The print result is as follows:

 21:24:17.375649+0800 [18834:145002] ********** good for you
 21:24:20.379650+0800 [18834:145218] 456
 21:24:22.376071+0800 [18834:145217] 123
 21:24:22.376270+0800 [18834:145217] -----<NSThread: 0x600003c346c0>{number = 7, name = (null)}-----
 21:24:22.376385+0800 [18834:145217] 789
  • If we change dispatch_barrier_async in Case 1 to dispatch_barrier_sync, the order of execution becomes:

 21:35:35.730836+0800 [18959:155093] 456
 21:35:37.730720+0800 [18959:154791] 123
 21:35:37.731164+0800 [18959:154350] -----<NSThread: 0x600001548540>{number = 1, name = main}-----
 21:35:37.731459+0800 [18959:154350] ********** good for you
  • If we change the custom queue in Case 1 to the global concurrent queue obtained via dispatch_get_global_queue, the task execution order becomes:

 22:16:19.411253+0800 [19499:189314] -----<NSThread: 0x60000343c1c0>{number = 1, name = main}-----
 22:16:19.411375+0800 [19499:189314] ********** good for you
 22:16:19.411451+0800 [19499:189527] 789
 22:16:24.415446+0800 [19499:189447] 123

That is, the barrier function does not work on the global concurrent queue.

Therefore, only a custom concurrent queue used with the barrier function can achieve the desired order of execution.
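The logs above show that on the global queue the barrier degrades to an ordinary async task. That observed behavior can be mimicked with a small hedged Python sketch (threads as stand-ins; this models only the observable effect, not real libdispatch code):

```python
import threading
import time

log = []
lock = threading.Lock()

def record(msg):
    with lock:
        log.append(msg)

def global_barrier_async(fn):
    # On the global queue the "barrier" behaves like plain dispatch_async:
    # it does NOT wait for earlier tasks (behavioral sketch of the logs above)
    t = threading.Thread(target=fn)
    t.start()
    return t

# an earlier, slow task (like the sleep(5) task printing "123" in Case 1)
slow = threading.Thread(target=lambda: (time.sleep(0.2), record("123")))
slow.start()
fake_barrier = global_barrier_async(lambda: record("barrier"))

fake_barrier.join()
slow.join()
print(log)   # the "barrier" finishes before the earlier slow task
```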

The illustration

Let’s draw a picture to illustrate the two cases above:

The problem

  • Why does the barrier function only take effect on a custom concurrent queue?
  • How does the barrier function control the order of execution?

With these two questions in mind, let’s dive into the source code.

dispatch_barrier_sync

void
dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	uintptr_t dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
	}
	_dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
_dispatch_barrier_sync_f
DISPATCH_NOINLINE
static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	_dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}
_dispatch_barrier_sync_f_inline

#define DC_FLAG_BARRIER 0x002ul

...


static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	dispatch_tid tid = _dispatch_tid_self();

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}
	dispatch_lane_t dl = upcast(dq)._dl;

	if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
				DC_FLAG_BARRIER | dc_flags);
	}

	if (unlikely(dl->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func,
				DC_FLAG_BARRIER | dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
			DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
					dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

The internal implementation here follows much the same overall idea as the synchronous function implementation we analyzed before. It likewise contains a call to _dispatch_sync_f_slow (where the deadlock problem can occur). The difference is that the calls here carry an extra DC_FLAG_BARRIER flag to identify the barrier function. There is a branch here; setting symbolic breakpoints and debugging the barrier function's execution flow shows that execution arrives at _dispatch_sync_f_slow.
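The flag plumbing can be shown in miniature. A hedged Python sketch follows: DC_FLAG_BARRIER matches the value quoted from the source above, while DC_FLAG_BLOCK's value here is an assumption for illustration only.

```python
DC_FLAG_BARRIER = 0x002   # value quoted from the libdispatch source above
DC_FLAG_BLOCK   = 0x010   # assumed value, for illustration only

# dispatch_barrier_sync ORs the barrier bit into dc_flags ...
dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK

# ... and later code recovers it with a simple bit test, the same way
# the source computes `bool barrier = (dc_flags & DC_FLAG_BARRIER)`:
is_barrier = bool(dc_flags & DC_FLAG_BARRIER)
print(is_barrier)
```

Because the flag is a single bit, it survives being OR-ed together with any other flags and can be checked independently at each stage of the call chain.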

_dispatch_sync_f_slow

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
	dispatch_queue_t top_dq = top_dqu._dq;
	dispatch_queue_t dq = dqu._dq;
	if (unlikely(!dq->do_targetq)) {
		return _dispatch_sync_function_invoke(dq, ctxt, func);
	}

	pthread_priority_t pp = _dispatch_get_priority();
	struct dispatch_sync_context_s dsc = {
		.dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
		.dc_func     = _dispatch_async_and_wait_invoke,
		.dc_ctxt     = &dsc,
		.dc_other    = top_dq,
		.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
		.dc_voucher  = _voucher_get(),
		.dsc_func    = func,
		.dsc_ctxt    = ctxt,
		.dsc_waiter  = _dispatch_tid_self(),
	};

	_dispatch_trace_item_push(top_dq, &dsc);
	__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

	if (dsc.dsc_func == NULL) {
		// dsc_func being cleared means that the block ran on another thread ie.
		// case (2) as listed in _dispatch_async_and_wait_f_slow.
		dispatch_queue_t stop_dq = dsc.dc_other;
		return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
	}

	_dispatch_introspection_sync_begin(top_dq);
	_dispatch_trace_item_pop(top_dq, &dsc);
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
			DISPATCH_TRACE_ARG(&dsc));
}

Next comes _dispatch_sync_complete_recurse:

_dispatch_sync_complete_recurse

static void
_dispatch_sync_complete_recurse(dispatch_queue_t dq, dispatch_queue_t stop_dq,
		uintptr_t dc_flags)
{
	bool barrier = (dc_flags & DC_FLAG_BARRIER);
	do {
		if (dq == stop_dq) return;
		if (barrier) {
			// There is a barrier: the tasks ahead of the barrier function
			// are scheduled to complete first
			dx_wakeup(dq, 0, DISPATCH_WAKEUP_BARRIER_COMPLETE);
		} else {
			// No barrier function: after all tasks complete,
			// tell the system execution is done, then process the state
			_dispatch_lane_non_barrier_complete(upcast(dq)._dl, 0);
		}
		dq = dq->do_targetq;
		barrier = (dq->dq_width == 1);
	} while (unlikely(dq->do_targetq));
}

...

#define dx_wakeup(x, y, z) dx_vtable(x)->dq_wakeup(x, y, z)

...

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_serial, lane,
	.do_type        = DISPATCH_QUEUE_SERIAL_TYPE,
	.do_dispose     = _dispatch_lane_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,
	.dq_activate    = _dispatch_lane_activate,
	.dq_wakeup      = _dispatch_lane_wakeup,
	.dq_push        = _dispatch_lane_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
	.do_type        = DISPATCH_QUEUE_CONCURRENT_TYPE,
	.do_dispose     = _dispatch_lane_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,
	.dq_activate    = _dispatch_lane_activate,
	.dq_wakeup      = _dispatch_lane_wakeup,
	.dq_push        = _dispatch_lane_concurrent_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
	.do_type        = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
	.do_dispose     = _dispatch_object_no_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_object_no_invoke,
	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_root_queue_wakeup,
	.dq_push        = _dispatch_root_queue_push,
);

A do{}while() loop is executed, and when there is a barrier, dx_wakeup is called. dx_wakeup is a macro that dispatches through the queue's vtable, so which dq_wakeup implementation runs depends on the queue type, as the DISPATCH_VTABLE_SUBCLASS_INSTANCE definitions above show. Let's start with a custom concurrent queue:

Custom concurrent queues

_dispatch_lane_wakeup
void
_dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos, dispatch_wakeup_flags_t flags)
{
	dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;

	if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
            // Call the fence function
            return _dispatch_lane_barrier_complete(dqu, qos, flags);
	}
	if (_dispatch_queue_class_probe(dqu)) {
            target = DISPATCH_QUEUE_WAKEUP_TARGET;
	}
        // Synchronous and asynchronous functions
	return _dispatch_queue_wakeup(dqu, qos, flags, target);
}

Here, all tasks added before the barrier function are called in a loop. Once those tasks have finished, the barrier function is invoked, and finally execution reaches _dispatch_lane_barrier_complete.

_dispatch_lane_barrier_complete
static void
_dispatch_lane_barrier_complete(dispatch_lane_class_t dqu, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags)
{
	dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;
	dispatch_lane_t dq = dqu._dl;

	if (dq->dq_items_tail && !DISPATCH_QUEUE_IS_SUSPENDED(dq)) {
		struct dispatch_object_s *dc = _dispatch_queue_get_head(dq);
		if (likely(dq->dq_width == 1 || _dispatch_object_is_barrier(dc))) {
			// Serial queue
			if (_dispatch_object_is_waiter(dc)) {
				// Wait
				return _dispatch_lane_drain_barrier_waiter(dq, dc, flags, 0);
			}
		} else if (dq->dq_width > 1 && !_dispatch_object_is_barrier(dc)) {
			// Concurrent queue: the barrier function's task has run - pull out the barrier
			return _dispatch_lane_drain_non_barriers(dq, dc, flags);
		}

		if (!(flags & DISPATCH_WAKEUP_CONSUME_2)) {
			_dispatch_retain_2(dq);
			flags |= DISPATCH_WAKEUP_CONSUME_2;
		}
		target = DISPATCH_QUEUE_WAKEUP_TARGET;
	}

	uint64_t owned = DISPATCH_QUEUE_IN_BARRIER +
			dq->dq_width * DISPATCH_QUEUE_WIDTH_INTERVAL;
	// All barrier work has executed
	// Clear the barrier mark so the tasks behind it can run
	return _dispatch_lane_class_barrier_complete(dq, qos, flags, target, owned);
}

Global concurrent queue

_dispatch_root_queue_wakeup
void
_dispatch_root_queue_wakeup(dispatch_queue_global_t dq,
		DISPATCH_UNUSED dispatch_qos_t qos, dispatch_wakeup_flags_t flags)
{
	if (!(flags & DISPATCH_WAKEUP_BLOCK_WAIT)) {
		DISPATCH_INTERNAL_CRASH(dq->dq_priority,
				"Don't try to wake up or override a root queue");
	}
	if (flags & DISPATCH_WAKEUP_CONSUME_2) {
		return _dispatch_release_2_tailcall(dq);
	}
}

There is no barrier-related processing in this method at all, which is why the barrier function has no effect on the global concurrent queue.
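Putting the two queue types side by side: dx_wakeup resolves dq_wakeup through the queue's vtable, so the barrier-complete flag is only honored where the vtable points at _dispatch_lane_wakeup. Here is a hedged Python sketch of that dispatch (the names mirror the source, but the behavior is reduced to reporting which function would handle the wakeup, and the flag value is an assumption):

```python
# Simplified stand-ins for the vtable entries shown earlier; real libdispatch
# wires these up in C with DISPATCH_VTABLE_SUBCLASS_INSTANCE.
DISPATCH_WAKEUP_BARRIER_COMPLETE = 0x4   # flag value assumed for illustration

def _dispatch_lane_wakeup(flags):
    # custom serial/concurrent queues: barrier completion is handled
    if flags & DISPATCH_WAKEUP_BARRIER_COMPLETE:
        return "_dispatch_lane_barrier_complete"
    return "_dispatch_queue_wakeup"

def _dispatch_root_queue_wakeup(flags):
    # global root queues: there is no barrier branch at all - the flag
    # is simply ignored
    return "no barrier handling"

VTABLE = {
    "queue_serial":     _dispatch_lane_wakeup,
    "queue_concurrent": _dispatch_lane_wakeup,
    "queue_global":     _dispatch_root_queue_wakeup,
}

def dx_wakeup(queue_type, flags):
    # mirrors the macro: dx_vtable(x)->dq_wakeup(x, y, z)
    return VTABLE[queue_type](flags)

print(dx_wakeup("queue_concurrent", DISPATCH_WAKEUP_BARRIER_COMPLETE))
print(dx_wakeup("queue_global", DISPATCH_WAKEUP_BARRIER_COMPLETE))
```

The same wakeup call thus takes a completely different path purely because of which dq_wakeup the queue's vtable holds.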

The barrier function

The barrier function can only block tasks in the same queue, and that is precisely why it sees relatively little use: in real development, business logic tends to be complex and heavily encapsulated, so the queue a barrier would need to sit on is often not accessible. Because of this limitation, it is rarely used in practice.