GCD exploration: functions and queues

Introduction to GCD

What is GCD?

Grand Central Dispatch (GCD) is written in pure C and provides a number of powerful functions.

Advantages of GCD

  • GCD is Apple's solution for multi-core parallel computing
  • GCD automatically makes use of the available CPU cores (e.g. dual-core, quad-core)
  • GCD automatically manages the thread lifecycle (creating threads, scheduling tasks, destroying threads)
  • The programmer only tells GCD what task to perform, and doesn't have to write any thread-management code

Functions

  • Tasks are encapsulated in blocks
  • The task block takes no arguments and has no return value
  • Two functions execute tasks (a minimal sketch follows this list):
  • Asynchronous: dispatch_async
    • The next statement can execute without waiting for the current task to complete
    • Starts a thread to execute the block's task
    • Asynchronous is, in effect, synonymous with multithreading
  • Synchronous: dispatch_sync
    • The next statement executes only after the current task has completed
    • Does not start a thread
    • Executes the block's task in place, on the current thread
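To make the difference concrete, here is a minimal sketch (queue label and log text are illustrative, not from the original post):

dispatch_queue_t queue = dispatch_queue_create("chenxi", DISPATCH_QUEUE_CONCURRENT);

// Asynchronous: dispatch_async returns immediately; the block runs later,
// typically on another thread
dispatch_async(queue, ^{
    NSLog(@"async task");
});
NSLog(@"after async"); // usually prints before "async task"

// Synchronous: dispatch_sync blocks the current thread until the block finishes
dispatch_sync(queue, ^{
    NSLog(@"sync task");
});
NSLog(@"after sync");  // always prints after "sync task"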

Queues

Queues are divided into serial queues and concurrent queues (a queue is also a data structure). Queues only schedule tasks, they do not execute them; execution requires threads, and threads come from the thread pool.

  • Serial queues follow the FIFO (First In, First Out) principle.
  • In a concurrent queue, a task scheduled first does not necessarily finish first; the concurrent queue only controls the scheduling order, and at any given moment multiple tasks may be scheduled (see the sketch below).
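A small sketch of the difference (queue labels are illustrative): on the serial queue the logs always appear 0–4 in order; on the concurrent queue they are dequeued in FIFO order but may finish out of order.

dispatch_queue_t serial = dispatch_queue_create("chenxi.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent = dispatch_queue_create("chenxi.concurrent", DISPATCH_QUEUE_CONCURRENT);

for (int i = 0; i < 5; i++) {
    dispatch_async(serial, ^{
        NSLog(@"serial %d", i);      // always 0,1,2,3,4
    });
}
for (int i = 0; i < 5; i++) {
    dispatch_async(concurrent, ^{
        NSLog(@"concurrent %d", i);  // scheduled in FIFO order, may finish out of order
    });
}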

Functions and queues

Main queue and global queue

The main queue

  • A serial queue dedicated to scheduling tasks on the main thread
  • Will not start the thread
  • If a task is currently executing on the main thread, any task added to the main queue will not be scheduled until that task finishes
  • dispatch_get_main_queue()

Global concurrent queue

  • For programmer convenience, Apple provides a global queue: dispatch_get_global_queue(0, 0)
  • The global queue is a concurrent queue
  • In multithreaded development, if there is no special requirement for queues, you can dispatch asynchronous tasks directly onto the global queue — a typical pattern is sketched below
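A typical use (URL and log text are placeholders of my own, not from the original post): do the time-consuming work on the global queue, then hop back to the main queue for UI updates.

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    // time-consuming work off the main thread
    NSData *data = [NSData dataWithContentsOfURL:
            [NSURL URLWithString:@"https://example.com/image.png"]];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI updates belong on the main queue
        NSLog(@"loaded %lu bytes", (unsigned long)data.length);
    });
});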

Deadlock phenomenon

Here we look at two examples.

  1. Case 1
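The original post shows this case as a screenshot; here is a reconstruction consistent with the analysis below (queue label illustrative):

dispatch_queue_t queue = dispatch_queue_create("chenxi", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"1");
dispatch_async(queue, ^{
    NSLog(@"2");
    dispatch_sync(queue, ^{
        NSLog(@"3");
    });
    NSLog(@"4");
});
NSLog(@"5");
// Typical output: 1 5 2 3 4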

Here a concurrent queue is created, and 2, 3 and 4 run inside an asynchronous function. Because dispatch_async does not block the statements after it, and the queue is concurrent, the task inside the asynchronous block does not hold up the tasks that follow, so 5 will very likely print before the tasks inside the asynchronous block. Inside the block, 3 prints from the synchronous function, and 4 prints only after the synchronous task finishes, so the final order is 1, 5, 2, 3, 4. That is how the concurrent queue executes — but what happens if we change it to a serial queue? Let's see.

  2. Case 2
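Again a reconstruction of the screenshot, consistent with the analysis below — the same code, but on a serial queue:

dispatch_queue_t queue = dispatch_queue_create("chenxi", DISPATCH_QUEUE_SERIAL);
NSLog(@"1");
dispatch_async(queue, ^{
    NSLog(@"2");
    dispatch_sync(queue, ^{   // deadlock: sync-waiting on the same serial queue
        NSLog(@"3");
    });
    NSLog(@"4");
});
NSLog(@"5");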

When we change to a serial queue, a deadlock error is reported. The reason: a serial queue is FIFO, and dispatch_sync guarantees that the task in its block finishes before the code after it runs. So statement 4 waits for task 3 in the dispatch_sync block to complete. But task 3 was enqueued on the same serial queue behind the still-running asynchronous block, so it cannot start until that block — which still has to print 4 and return — finishes. 4 waits on 3, and 3 waits on the block containing 4: a circular wait, i.e. a deadlock.

Underlying source of serial and concurrent queues

dispatch_queue_t serialQueue = dispatch_queue_create("chenxi", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrentQueue = dispatch_queue_create("chenxi", DISPATCH_QUEUE_CONCURRENT);

The dispatch_queue_create method is called when creating either a concurrent queue or a serial queue. Now let's explore the underlying implementation of dispatch_queue_create.

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
	return _dispatch_lane_create_with_target(label, attr,
			DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

In the _dispatch_lane_create_with_target method we mainly look at the return value _dispatch_trace_queue_create(dq)._dq. Since _dispatch_trace_queue_create mostly just enables tracing, we focus on dq.

dispatch_lane_t dq = _dispatch_object_alloc(vtable,
		sizeof(struct dispatch_lane_s));
// The third parameter of _dispatch_queue_init is chosen by dqai.dqai_concurrent:
// DISPATCH_QUEUE_WIDTH_MAX for a concurrent queue, 1 for a serial queue
_dispatch_queue_init(dq, dqf,
		dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1,
		DISPATCH_QUEUE_ROLE_INNER |
		(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

The third parameter of the _dispatch_queue_init call is chosen by dqai.dqai_concurrent: when it is set, DISPATCH_QUEUE_WIDTH_MAX is passed, otherwise 1. dqai itself comes from dqai = _dispatch_queue_attr_to_info(dqa), so next we look at the _dispatch_queue_attr_to_info method.

dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
	dispatch_queue_attr_info_t dqai = { };

	if (!dqa) return dqai;

#if DISPATCH_VARIANT_STATIC
	if (dqa == &_dispatch_queue_attr_concurrent) {
		dqai.dqai_concurrent = true;
		return dqai;
	}
#endif

	if (dqa < _dispatch_queue_attrs ||
			dqa >= &_dispatch_queue_attrs[DISPATCH_QUEUE_ATTR_COUNT]) {
#ifndef __APPLE__
		if (memcmp(dqa, &_dispatch_queue_attrs[0],
				sizeof(struct dispatch_queue_attr_s)) == 0) {
			dqa = (dispatch_queue_attr_t)&_dispatch_queue_attrs[0];
		} else
#endif // __APPLE__
		DISPATCH_CLIENT_CRASH(dqa->do_vtable, "Invalid queue attribute");
	}

	size_t idx = (size_t)(dqa - _dispatch_queue_attrs);

	dqai.dqai_inactive = (idx % DISPATCH_QUEUE_ATTR_INACTIVE_COUNT);
	idx /= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;

	dqai.dqai_concurrent = !(idx % DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT);
	idx /= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;
	// ... (remaining attribute decoding omitted in the original excerpt)

dqa in the _dispatch_queue_attr_to_info method is the second parameter we passed to dispatch_queue_create. DISPATCH_QUEUE_SERIAL is macro-defined as NULL (#define DISPATCH_QUEUE_SERIAL NULL), so for a serial queue dqa is NULL and the empty dqai is returned immediately; only for a concurrent queue does execution continue and assign dqai.dqai_concurrent. In other words, dqai.dqai_concurrent having a value is what marks a concurrent queue. Now let's look at the _dispatch_queue_init method.

static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
		uint16_t width, uint64_t initial_state_bits)
{
	uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
	dispatch_queue_t dq = dqu._dq;

	dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
			DISPATCH_QUEUE_INACTIVE)) == 0);

	if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
		dq->do_ref_cnt += 2; // rdar://8181908 see _dispatch_lane_resume
		if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
			dq->do_ref_cnt++; // released when DSF_DELETED is set
		}
	}

	dq_state |= initial_state_bits;
	dq->do_next = DISPATCH_OBJECT_LISTLESS;
	// a width of 1 marks a serial queue
	dqf |= DQF_WIDTH(width);
	os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
	dq->dq_state = dq_state;
	dq->dq_serialnum = os_atomic_inc_orig(&_dispatch_queue_serial_numbers,
			relaxed);
	return dqu;
}

Here you can see DQF_WIDTH(width): a width of 1 means a serial queue. dq_serialnum is just an identifying serial number.
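For context, the libdispatch source documents which serial numbers are reserved, in a comment next to _dispatch_queue_serial_numbers (abridged; exact wording varies by version — treat this as an approximation). It also explains why the main queue's dq_serialnum is 1, which we will see below:

// skip zero
// 1 - main_q
// 2 - mgr_q
// 3 - mgr_root_q
// 4,5,6,7,8,9,10,11,12,13,14,15 - global queues
// we use 'xadd' on Intel, so the initial value == next assigned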

The main queue

dispatch_queue_t mainQueue = dispatch_get_main_queue();

/*
 * Because the main queue doesn't behave entirely like a regular serial queue,
 * it may have unwanted side-effects when used in processes that are not UI apps
 * (daemons). For such processes, the main queue should be avoided.
 *
 * @see dispatch_queue_main_t
 *
 * @result
 * Returns the main queue. This queue is created automatically on behalf of
 * the main thread before main() is called.
 */
DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_CONST DISPATCH_NOTHROW
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
	return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}

We usually call dispatch_get_main_queue to get the main queue. Clicking into it, the comment shows that the main queue is different from a normal serial queue — it is a special serial queue, created on behalf of the main thread before main() is called. Next, let's explore the underlying implementation of the dispatch_get_main_queue method.

dispatch_queue_main_t
dispatch_get_main_queue(void)
{
	return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}

// The actual queue object is the second parameter, _dispatch_main_q;
// the macro just casts its address to the requested type
#define DISPATCH_GLOBAL_OBJECT(type, object) ((type)&(object))
struct dispatch_queue_static_s _dispatch_main_q = {
	DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
	.do_targetq = _dispatch_get_default_queue(true),
#endif
	.dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
			DISPATCH_QUEUE_ROLE_BASE_ANON,
	.dq_label = "com.apple.main-thread",
	// DQF_WIDTH(1): a width of 1 marks a serial queue
	.dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
	.dq_serialnum = 1,
};

Global concurrent queue

dispatch_get_global_queue(0, 0)

Step into the definition of the global queue:
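The original post shows the definition as a screenshot; here is an abridged version from the libdispatch source (details vary by version):

dispatch_queue_global_t
dispatch_get_global_queue(intptr_t priority, uintptr_t flags)
{
	dispatch_assert(countof(_dispatch_root_queues) ==
			DISPATCH_ROOT_QUEUE_COUNT);

	if (flags & ~(unsigned long)DISPATCH_QUEUE_OVERCOMMIT) {
		return DISPATCH_BAD_INPUT;
	}
	dispatch_qos_t qos = _dispatch_qos_from_queue_priority(priority);
	if (qos == DISPATCH_QOS_UNSPECIFIED) {
		return DISPATCH_BAD_INPUT;
	}
	return _dispatch_get_root_queue(qos, flags & DISPATCH_QUEUE_OVERCOMMIT);
}

static inline dispatch_queue_global_t
_dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit)
{
	if (unlikely(qos < DISPATCH_QOS_MIN || qos > DISPATCH_QOS_MAX)) {
		DISPATCH_CLIENT_CRASH(qos, "Corrupted priority");
	}
	// index into the static array of root queues: one pair per QoS class
	return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
}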

When getting a global concurrent queue, parameters can be passed to obtain different queues for different quality-of-service classes (priorities). From this we can infer that there should be a global collection maintaining these concurrent queues.

  • Searching with the global queue's label, com.apple.root.default

We find a collection of queues that provides different global queues for different quality-of-service classes, as shown below:
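The screenshot is again not reproduced; the collection in the source is the _dispatch_root_queues array. Its labels, summarized (from the libdispatch source; the exact entries are version-dependent):

// _dispatch_root_queues[] — one entry per QoS class, each in a plain
// and an "overcommit" variant:
//   "com.apple.root.maintenance-qos"       (+ ".overcommit")
//   "com.apple.root.background-qos"        (+ ".overcommit")
//   "com.apple.root.utility-qos"           (+ ".overcommit")
//   "com.apple.root.default-qos"           (+ ".overcommit")
//   "com.apple.root.user-initiated-qos"    (+ ".overcommit")
//   "com.apple.root.user-interactive-qos"  (+ ".overcommit")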

Summary: the system maintains a collection of root queues and hands out a global queue according to quality of service (priority). The default we use in development is dispatch_get_global_queue(0, 0).

GCD's task execution stack (synchronous)

NSLog(@" GCD function analysis "); });Copy the code

When we run code like this, the log prints automatically, yet nowhere do we write the block() call we would normally expect. So when is this block invoked? Let's look at the source code.

void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	uintptr_t dc_flags = DC_FLAG_BLOCK;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
	}
	// _dispatch_Block_invoke(work) wraps the block's invoke function pointer
	_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

The work is the block that comes in, so let’s focus on where the work goes. The _dispatch_sync_f function is called with the work argument passed in, so let’s look at the _dispatch_sync_f function call.

static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
		uintptr_t dc_flags)
{
	_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

Here *ctxt is the work passed in. Since _dispatch_sync_f_inline is called, let's move on to the implementation of _dispatch_sync_f_inline.

In the _dispatch_sync_f_inline function we should again focus on where ctxt goes. There are several possible branches here, so we set symbolic breakpoints to see which line of code is actually executed.
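For reference, an abridged version of _dispatch_sync_f_inline from the libdispatch source (version-dependent; which branch is taken depends on the queue):

static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	if (likely(dq->dq_width == 1)) {
		// a serial queue takes the barrier path
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_sync_invoke_and_complete(dl, ctxt, func
			DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
					dq, ctxt, func, dc_flags)));
}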

You can see from the breakpoint that _dispatch_sync_f_slow was executed here, so let’s go ahead and look at the implementation of the _dispatch_sync_f_slow function.

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
	dispatch_queue_t top_dq = top_dqu._dq;
	dispatch_queue_t dq = dqu._dq;
	if (unlikely(!dq->do_targetq)) {
		return _dispatch_sync_function_invoke(dq, ctxt, func);
	}

	pthread_priority_t pp = _dispatch_get_priority();
	struct dispatch_sync_context_s dsc = {
		.dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
		.dc_func     = _dispatch_async_and_wait_invoke,
		.dc_ctxt     = &dsc,
		.dc_other    = top_dq,
		.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
		.dc_voucher  = _voucher_get(),
		.dsc_func    = func,
		.dsc_ctxt    = ctxt,
		.dsc_waiter  = _dispatch_tid_self(),
	};

	_dispatch_trace_item_push(top_dq, &dsc);
	__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

	if (dsc.dsc_func == NULL) {
		// dsc_func being cleared means that the block ran on another thread ie.
		// case (2) as listed in _dispatch_async_and_wait_f_slow.
		dispatch_queue_t stop_dq = dsc.dc_other;
		return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
	}

	_dispatch_introspection_sync_begin(top_dq);
	_dispatch_trace_item_pop(top_dq, &dsc);
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func,
			top_dc_flags DISPATCH_TRACE_ARG(&dsc));
}

Here again the symbolic breakpoint shows that _dispatch_sync_function_invoke is executed, so let's look at the implementation of _dispatch_sync_function_invoke.

static void
_dispatch_sync_function_invoke(dispatch_queue_class_t dq, void *ctxt,
		dispatch_function_t func)
{
	_dispatch_sync_function_invoke_inline(dq, ctxt, func);
}

static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
		dispatch_function_t func)
{
	dispatch_thread_frame_s dtf;
	_dispatch_thread_frame_push(&dtf, dq);
	_dispatch_client_callout(ctxt, func);
	_dispatch_perfmon_workitem_inc();
	_dispatch_thread_frame_pop(&dtf);
}

Inside, it is the _dispatch_client_callout function that consumes ctxt and func. So let's look at the _dispatch_client_callout function.

static inline void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	return f(ctxt);
}

So f(ctxt) simply invokes the function pointer with the context — this is exactly how the block ends up being called.

GCD’s task execution stack (asynchronous)

Here we continue to look at the implementation of asynchronous functions.

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME;
	dispatch_qos_t qos;

	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, dispatch_block_t work,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	void *ctxt = _dispatch_Block_copy(work);

	dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		dc->dc_flags = dc_flags;
		dc->dc_ctxt = ctxt;
		// will initialize all fields but requires dc_flags & dc_ctxt to be set
		return _dispatch_continuation_init_slow(dc, dqu, flags);
	}

	dispatch_function_t func = _dispatch_Block_invoke(work);
	if (dc_flags & DC_FLAG_CONSUME) {
		func = _dispatch_call_block_and_release;
	}
	return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	pthread_priority_t pp = 0;
	dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
	dc->dc_func = f;
	dc->dc_ctxt = ctxt;
	// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
	// should not be propagated, only taken from the handler if it has one
	if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
		pp = _dispatch_priority_propagate();
	}
	_dispatch_continuation_voucher_set(dc, flags);
	return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}

Unlike the synchronous path, f and ctxt are saved on the continuation as dc->dc_func and dc->dc_ctxt, and the priority is set in _dispatch_continuation_priority_set. Priority is handled here because an asynchronous call executes out of order, and priority serves as the basis for scheduling decisions; and because the function is invoked asynchronously, the invocation is stored in dc so it can be retrieved when needed. The qos in dispatch_async is the value returned by _dispatch_continuation_init, so the important thing now is to see where qos goes — look at the implementation of the _dispatch_continuation_async function.

static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
		dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
	if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
		_dispatch_trace_item_push(dqu, dc);
	}
#else
	(void)dc_flags;
#endif
	return dx_push(dqu._dq, dc, qos);
}
#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)

Searching the source turns up the macro definition of dx_push; here z is qos, so we focus on z and follow dq_push(x, y, z).

For a custom concurrent queue the vtable sets dq_push = _dispatch_lane_concurrent_push, and for the global concurrent queue dq_push = _dispatch_root_queue_push (the vtable entries are sketched below). Here we search globally for the implementation of the _dispatch_root_queue_push method:
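Since the original screenshot is not reproduced, an abridged sketch of the relevant vtable entries from libdispatch's init.c (fields other than dq_push omitted; exact contents vary by version):

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
	// ...
	.dq_push = _dispatch_lane_concurrent_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
	// ...
	.dq_push = _dispatch_root_queue_push,
);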

That method mostly performs some checks and wrapping, and eventually reaches its last line, _dispatch_root_queue_push_inline, where we continue tracing the source:
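An abridged version of _dispatch_root_queue_push_inline from the libdispatch source (version-dependent):

static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
		dispatch_object_t _head, dispatch_object_t _tail, int n)
{
	struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
	// push the item onto the root queue; if the queue was empty, poke it
	if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
		return _dispatch_root_queue_poke(dq, n, 0);
	}
}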

The _dispatch_root_queue_poke method is called in _dispatch_root_queue_push_inline, and the core path of _dispatch_root_queue_poke is _dispatch_root_queue_poke_slow.

  • The _dispatch_root_queue_poke_slow implementation

_dispatch_root_queues_init() is a key step inside _dispatch_root_queue_poke_slow, as shown below:

Entering the _dispatch_root_queues_init() method, we find that it uses singleton processing, as sketched below:
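An abridged version from the libdispatch source (version-dependent):

static inline void
_dispatch_root_queues_init(void)
{
	// dispatch_once_f guarantees the initialization runs exactly once
	// per process — this is the "singleton" processing
	dispatch_once_f(&_dispatch_root_queues_pred, NULL,
			_dispatch_root_queues_init_once);
}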

  • Singleton processing: _dispatch_root_queues_init_once

So let's go into the _dispatch_root_queues_init_once method — what happens in there?

Thread pools are initialized, work queues are configured and initialized, and so on — which explains why _dispatch_root_queues_init_once is run as a singleton: this initialization must not be repeated. One key setting is the execution function, which is set to _dispatch_worker_thread2. See the following code:

cfg.workq_cb = _dispatch_worker_thread2;

We can verify that the asynchronous function's task is ultimately invoked from _dispatch_worker_thread2 by printing the call stack with bt in lldb.

Summary: tracing the asynchronous path shows that the system executes a different dq_push method for each queue type, completes the initialization of the thread pool and the configuration of the work queues through a singleton, and finally invokes and executes the asynchronous function's task via _dispatch_worker_thread2.

Related Interview Questions

Interview question 1

- (void)wbinterDemo{
    // a concurrent queue — the analysis below depends on this
    dispatch_queue_t queue = dispatch_queue_create("com.lg.cooci.cn", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queue, ^{
        NSLog(@"1");
    });
    
    dispatch_async(queue, ^{
        NSLog(@"2");
    });

    dispatch_sync(queue, ^{ NSLog(@"3"); });
    
    NSLog(@"0");

    dispatch_async(queue, ^{
        NSLog(@"7");
    });
    dispatch_async(queue, ^{
        NSLog(@"8");
    });
    dispatch_async(queue, ^{
        NSLog(@"9");
    });

    // A: 1230789
    // B: 1237890
    // C: 3120798
    // D: 2137890
}

dispatch_sync blocks the current thread until 3 has printed, so 0 always comes after 3. 7, 8 and 9 are only dispatched after 0 prints, so they always come after 0 but, on the concurrent queue, may complete out of order among themselves; likewise 1, 2 and 3 may interleave. Of the four options only A and C satisfy these constraints, so the correct answers are A and C.

Interview question 2

- (void)MTDemo{
    self.num = 0;
    while (self.num < 5) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.num++;
        });
    }
    NSLog(@"end : %d",self.num); 
}

The output here will be greater than or equal to 5. First, the while loop only exits once self.num >= 5, so by the time NSLog runs, num is at least 5. Second, because the tasks execute asynchronously on a concurrent queue, multiple threads are in flight: at the moment the loop condition fails and the loop exits, other threads may still be running self.num++, so the printed value can end up larger than 5. Hence: greater than or equal to 5.

Interview question 3

- (void)KSDemo {
    for (int i= 0; i<10000; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.num++;
        });
    }
    NSLog(@"end : %d",self.num); 
}

In this case the loop condition is the local variable i, so exactly 10,000 tasks are dispatched. Because they run asynchronously on multiple threads, NSLog can execute before all of them have finished; moreover self.num++ is a non-atomic read-modify-write, so concurrent increments can overwrite each other and be lost. The final printed result is therefore less than or equal to 10,000.
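If the goal is for end to reliably print 10000, one possible fix is to serialize the increment and wait for all tasks before reading. A sketch (KSDemoFixed, and the use of a dispatch group plus @synchronized, are my additions, not from the original post):

- (void)KSDemoFixed {
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_group_t group = dispatch_group_create();
    for (int i = 0; i < 10000; i++) {
        dispatch_group_async(group, queue, ^{
            @synchronized (self) { // serialize the non-atomic self.num++
                self.num++;
            }
        });
    }
    // block until all 10000 tasks have finished before reading num
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    NSLog(@"end : %d", self.num);
}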