Preface

The previous article explored the use of GCD at the Objective-C layer in detail; this article explores the GCD implementation underneath.

Resources to prepare

  • The libdispatch source code

Underlying analysis of the main queue

The main queue is a special serial queue provided by GCD. In the libdispatch source code we can find its definition and confirm that it is built on a serial queue.

The implementation type of the main queue

There are two ways to locate its implementation:

Approach 1

Search the libdispatch source for dispatch_get_main_queue; it returns DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q).

Parameter 1, dispatch_queue_main_t, is a structure type; parameter 2 is the global variable _dispatch_main_q.

_dispatch_main_q is an instance of the dispatch_queue_static_s structure, initialized as follows:

struct dispatch_queue_static_s _dispatch_main_q = {
    ...
    .dq_label = "com.apple.main-thread",
    .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
    .dq_serialnum = 1,
    ...
};

Approach 2

Create the main queue in code, then use LLDB and run the command po queue to print its structure, which contains the queue label. From the printed result we can see that the main queue's label is com.apple.main-thread; searching the libdispatch source for that string as a keyword leads directly to the _dispatch_main_q definition above.

Verifying that the main queue is a serial queue

From the exploration above we know that the main queue's dq_atomic_flags is DQF_THREAD_BOUND | DQF_WIDTH(1). By comparing the dq_atomic_flags of an ordinary serial queue with the dq_atomic_flags of the main queue, we can verify that the main queue is a serial queue.

A serial queue is created with dispatch_queue_create, so search the libdispatch source for that keyword:

It internally calls the _dispatch_lane_create_with_target function:

static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    // Step 1: object-oriented wrapping of the attributes
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
    // Normalize arguments (qos, overcommit, tq)
    // Step 2: initialize the queue
    const void *vtable;
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
    if (dqai.dqai_concurrent) {
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        vtable = DISPATCH_VTABLE(queue_serial);
    }
    // allocate
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    // constructor-style initialization
    _dispatch_queue_init(dq, dqf,
            dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1,
            DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
    dq->dq_label = label;
    return _dispatch_trace_queue_create(dq)._dq;
}
  • dqa is wrapped into an object-oriented form, dqai, by _dispatch_queue_attr_to_info.

  • _dispatch_queue_init is the constructor; look at its third argument: DISPATCH_QUEUE_WIDTH_MAX for a concurrent queue, 1 otherwise.

Go to the _dispatch_queue_init function and find argument 3:

For a serial queue, the width passed in is 1 and dqf gets DQF_WIDTH(1), the same value set on the main queue's dq_atomic_flags, so the main queue is indeed a serial queue.

This function also sets dq_serialnum. What is it for?

The role of dq_serialnum

Some articles claim that if dq_serialnum is set to 1 the queue is a serial queue, citing the main queue (whose dq_serialnum is 1) as evidence.

In the libdispatch source, searching for dq_serialnum turns up several assignment sites.

Check the definition of DISPATCH_QUEUE_SERIAL_NUMBER_WLF:

Nearby we also find the comment documenting dq_serialnum:

  • 0: skipped

  • 1: main queue

  • 2: mgr_q

  • 3: mgr_root_q

  • 4 to 15: global queues

  • 17: first custom queue

So dq_serialnum does not distinguish serial queues from concurrent queues.

dq_serialnum of custom queues

From the above analysis, we know that the custom queue is 17:

#define DISPATCH_QUEUE_SERIAL_NUMBER_INIT 17

Search DISPATCH_QUEUE_SERIAL_NUMBER_INIT to find the code assigned:

Then explore _dispatch_queue_serial_numbers:

Explore the os_atomic_inc_orig macro definition:

#define os_atomic_inc_orig(p, m) \ 
        os_atomic_add_orig((p), 1, m) 

#define os_atomic_add_orig(p, v, m) \ 
        _os_atomic_c11_op_orig((p), (v), m, add, +) 

#define _os_atomic_c11_op_orig(p, v, m, o, op) \
        atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), v, \ 
        memory_order_##m)
  • ##o## in atomic_fetch_##o##_explicit is a placeholder; the parameter o is passed in with the value add

  • It therefore expands to atomic_fetch_add_explicit, an atomic operation function from C11's stdatomic

  • The counter starts at 17, and the macro atomically adds 1 each time a custom queue is created.

Global queue analysis

Let’s explore what we can print from LLDB when using global queues.

After printing the results, we can explore the libdispatch source code using com.apple.root.default-qos:

struct dispatch_queue_global_s _dispatch_root_queues[] = {
#define _DISPATCH_ROOT_QUEUE_IDX(n, flags) \
    ((flags & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) ? \
        DISPATCH_ROOT_QUEUE_IDX_##n##_QOS_OVERCOMMIT : \
        DISPATCH_ROOT_QUEUE_IDX_##n##_QOS)
#define _DISPATCH_ROOT_QUEUE_ENTRY(n, flags, ...) \
    [_DISPATCH_ROOT_QUEUE_IDX(n, flags)] = { \
        DISPATCH_GLOBAL_OBJECT_HEADER(queue_global), \
        .dq_state = DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE, \
        .do_ctxt = _dispatch_root_queue_ctxt(_DISPATCH_ROOT_QUEUE_IDX(n, flags)), \
        .dq_atomic_flags = DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL), \
        .dq_priority = flags | ((flags & DISPATCH_PRIORITY_FLAG_FALLBACK) ? \
            _dispatch_priority_make_fallback(DISPATCH_QOS_##n) : \
            _dispatch_priority_make(DISPATCH_QOS_##n, 0)), \
        __VA_ARGS__ \
    }
    _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, 0,
        .dq_label = "com.apple.root.maintenance-qos",
        .dq_serialnum = 4,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.maintenance-qos.overcommit",
        .dq_serialnum = 5,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, 0,
        .dq_label = "com.apple.root.background-qos",
        .dq_serialnum = 6,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.background-qos.overcommit",
        .dq_serialnum = 7,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, 0,
        .dq_label = "com.apple.root.utility-qos",
        .dq_serialnum = 8,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.utility-qos.overcommit",
        .dq_serialnum = 9,
    ),
    // ✅ com.apple.root.default-qos
    _DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT, DISPATCH_PRIORITY_FLAG_FALLBACK,
        .dq_label = "com.apple.root.default-qos",
        .dq_serialnum = 10,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT,
        DISPATCH_PRIORITY_FLAG_FALLBACK | DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.default-qos.overcommit",
        .dq_serialnum = 11,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, 0,
        .dq_label = "com.apple.root.user-initiated-qos",
        .dq_serialnum = 12,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.user-initiated-qos.overcommit",
        .dq_serialnum = 13,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, 0,
        .dq_label = "com.apple.root.user-interactive-qos",
        .dq_serialnum = 14,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.user-interactive-qos.overcommit",
        .dq_serialnum = 15,
    ),
};
  • _dispatch_root_queues is an array; the global queues with serial numbers 4 through 15 are all initialized up front, and a particular queue is obtained by passing a different QoS/identifier.

  • Difference from the main queue's type:

    • main queue: dispatch_queue_static_s;
    • global queue: dispatch_queue_global_s.

Create a queue

Search for dispatch_queue_create and enter the _dispatch_lane_create_with_target function:

DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    // 1. Create dqai
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
    // 2. Normalize parameters such as qos, overcommit, tq
    dispatch_qos_t qos = dqai.dqai_qos;
    ...
    // 3. Create the vtable
    if (legacy) {
        // if any of these attributes is specified, use non-legacy classes
        if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
            legacy = false;
        }
    }
    const void *vtable;
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
    if (dqai.dqai_concurrent) {
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        vtable = DISPATCH_VTABLE(queue_serial);
    }
    ...
    // 4. Allocate and initialize the queue
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    _dispatch_queue_init(dq, dqf,
            dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1,
            DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
    dq->dq_label = label;
    ...
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);
    return _dispatch_trace_queue_create(dq)._dq;
}

Create dqai

Search for _dispatch_queue_attr_to_info in the source code and enter this function:

  • An empty dqai is created, which by default describes a serial queue; if the attribute argument passed in is NULL, this empty dqai is returned directly;

  • If the attribute argument has a value, the queue information it carries is saved into the dqai.

Normalize parameters

Normalize parameters such as qos, overcommit, tq…

  • Set parameters such as priority and service quality.

Create the vtable

A vtable is chosen based on the queue type: the DISPATCH_VTABLE macro splices the queue type name into a vtable symbol.

See DISPATCH_VTABLE macro definition again:

#define DISPATCH_VTABLE(name) DISPATCH_OBJC_CLASS(name)

#define DISPATCH_OBJC_CLASS(name) (&DISPATCH_CLASS_SYMBOL(name))

#define DISPATCH_CLASS_SYMBOL(name) _dispatch_##name##_vtable

Initializing the queue

Create a space using the _dispatch_object_alloc function.

_os_object_alloc_realized

  • It contains an isa pointer, indicating that a queue is also an object.

Then _dispatch_queue_init performs the initialization:

  • The queue is handled as dispatch_queue_t and its member variables are assigned.

Create queues using templates

The initialized dq is passed to the _dispatch_introspection_queue_create function.

Flow: _dispatch_introspection_queue_create –> _dispatch_introspection_queue_create_hook –> dispatch_introspection_queue_get_info –> _dispatch_introspection_lane_get_info

Inheritance chain

Create a queue with _dispatch_lane_create_with_target, which returns the dispatch_queue_t type.

The dispatch_queue_t type comes from the macro definition

DISPATCH_DECL(dispatch_queue); 

#define DISPATCH_DECL(name) OS_OBJECT_DECL_SUBCLASS(name, dispatch_object)

#define OS_OBJECT_DECL_SUBCLASS(name, super) \
        OS_OBJECT_DECL_IMPL(name, NSObject, <OS_OBJECT_CLASS(super)>) 

#define OS_OBJECT_DECL_IMPL(name, adhere, ...) \ 
        OS_OBJECT_DECL_PROTOCOL(name, __VA_ARGS__) \ 
        typedef adhere<OS_OBJECT_CLASS(name)> \ * OS_OBJC_INDEPENDENT_CLASS name##_t 

#define OS_OBJECT_DECL_PROTOCOL(name, ...) \ 
        @protocol OS_OBJECT_CLASS(name) __VA_ARGS__ \
        @end 

#define OS_OBJECT_CLASS(name) OS_##name
  • Step 1: OS_OBJECT_CLASS splices the name into OS_dispatch_queue;

  • Step 2: OS_OBJECT_DECL_PROTOCOL declares a protocol for it, and name##_t appends _t, yielding OS_dispatch_queue_t.

  • Step 3: the typedef expands to: typedef NSObject<OS_dispatch_queue> * OS_OBJC_INDEPENDENT_CLASS OS_dispatch_queue_t;

Under C++, DISPATCH_DECL is instead defined in terms of the dispatch_object_s structure:

  • typedef struct dispatch_queue_s : public dispatch_object_s {} *dispatch_queue_t;

  • The root of dispatch_queue_t is dispatch_queue_s, and dispatch_queue_s inherits from dispatch_object_s.

dispatch_object_s appears in the dispatch_object_t union:

  • dispatch_object_t can represent every type in the union, so the final root class is dispatch_object_t.

  • The result is dispatch_queue_t –> dispatch_queue_s –> dispatch_object_s –> dispatch_object_t.

Since dispatch_queue_t is really a pointer to dispatch_queue_s, let's find the dispatch_queue_s structure:

Look again at the DISPATCH_QUEUE_CLASS_HEADER definition nested within the structure:

#define DISPATCH_QUEUE_CLASS_HEADER(x, __pointer_sized_field__) \
    _DISPATCH_QUEUE_CLASS_HEADER(x, __pointer_sized_field__); \
    /* LP64 global queue cacheline boundary */ \
    unsigned long dq_serialnum; \
    const char *dq_label; \
    DISPATCH_UNION_LE(uint32_t volatile dq_atomic_flags, \
        const uint16_t dq_width, \
        const uint16_t __dq_opaque2 \
    ); \
    dispatch_priority_t dq_priority; \
    union { \
        struct dispatch_queue_specific_head_s *dq_specific_head; \
        struct dispatch_source_refs_s *ds_refs; \
        struct dispatch_timer_source_refs_s *ds_timer_refs; \
        struct dispatch_mach_recv_refs_s *dm_recv_refs; \
        struct dispatch_channel_callbacks_s const *dch_callbacks; \
    }; \
    int volatile dq_sref_cnt

struct dispatch_queue_s {
    DISPATCH_QUEUE_CLASS_HEADER(queue, void *__dq_opaque1);
    /* 32bit hole on LP64 */
} DISPATCH_ATOMIC64_ALIGN;

struct dispatch_workloop_s {
    struct dispatch_queue_s _as_dq[0];
    DISPATCH_QUEUE_CLASS_HEADER(workloop, dispatch_timer_heap_t dwl_timer_heap);
    uint8_t dwl_drained_qos;
    /* 24 bits hole */
    struct dispatch_object_s *dwl_heads[DISPATCH_QOS_NBUCKETS];
    struct dispatch_object_s *dwl_tails[DISPATCH_QOS_NBUCKETS];
    dispatch_workloop_attr_t dwl_attr;
} DISPATCH_ATOMIC64_ALIGN;

#define DISPATCH_LANE_CLASS_HEADER(x) \
    struct dispatch_queue_s _as_dq[0]; \
    DISPATCH_QUEUE_CLASS_HEADER(x, \
        struct dispatch_object_s *volatile dq_items_tail); \
    dispatch_unfair_lock_s dq_sidelock; \
    struct dispatch_object_s *volatile dq_items_head; \
    uint32_t dq_side_suspend_cnt

typedef struct dispatch_lane_s {
    DISPATCH_LANE_CLASS_HEADER(lane);
    /* 32bit hole on LP64 */
} DISPATCH_ATOMIC64_ALIGN *dispatch_lane_t;

// Cache aligned type for static queues (main queue, manager)
struct dispatch_queue_static_s {
    struct dispatch_lane_s _as_dl[0];
    DISPATCH_LANE_CLASS_HEADER(lane);
} DISPATCH_CACHELINE_ALIGN;

#define DISPATCH_QUEUE_ROOT_CLASS_HEADER(x) \
    struct dispatch_queue_s _as_dq[0]; \
    DISPATCH_QUEUE_CLASS_HEADER(x, \
        struct dispatch_object_s *volatile dq_items_tail); \
    int volatile dgq_thread_pool_size; \
    struct dispatch_object_s *volatile dq_items_head; \
    int volatile dgq_pending

struct dispatch_queue_global_s {
    DISPATCH_QUEUE_ROOT_CLASS_HEADER(lane);
} DISPATCH_CACHELINE_ALIGN;
  • DISPATCH_OBJECT_HEADER embeds the dispatch_object_s structure and _DISPATCH_OBJECT_HEADER.

Check the definition of _DISPATCH_OBJECT_HEADER:

  • _DISPATCH_OBJECT_HEADER in turn embeds the _os_object_s structure and OS_OBJECT_STRUCT_HEADER.

Find and view the OS_OBJECT_STRUCT_HEADER definition:

#define OS_OBJECT_STRUCT_HEADER(x) \
    _OS_OBJECT_HEADER(\
    const struct x##_vtable_s *__ptrauth_objc_isa_pointer do_vtable, \
    do_ref_cnt, \
    do_xref_cnt)

Check the definition of _OS_OBJECT_HEADER:

#define _OS_OBJECT_HEADER(isa, ref_cnt, xref_cnt) \
        isa; /* must be pointer-sized and use __ptrauth_objc_isa_pointer */ \
        int volatile ref_cnt; \
        int volatile xref_cnt
  • It contains the isa member variable.

The dispatch_queue_s structure thus ultimately inherits from _os_object_s and contains an isa member variable, indicating that a queue is also an object.

Synchronization function

Global queue

Using the synchronization function with a global queue:

dispatch_sync(dispatch_get_global_queue(0, 0), ^{ 
    NSLog(@"Block"); 
});

In the libdispatch source code, find the implementation of the dispatch_sync function:

  • The work argument passed in is the block of the task, and you just need to keep tracing the work argument to find the code to call;

  • work is wrapped and passed as parameter 3 to the _dispatch_sync_f function.

Next look at the implementation of parameter 3. Go to _dispatch_Block_invoke and check the definition:

  • From the definition, the block's underlying invoke pointer is force-cast to the dispatch_function_t type.

After analyzing the parameters, let’s look at the implementation of the _dispatch_sync_f function:

Matching the parameters against the call above, one by one:

  • ctxt: the block
  • func: the block's invoke pointer

Further down, go to the _dispatch_sync_f_inline function:

The function contains complex branching; symbolic breakpoints can be used to find which branch is actually taken.

We are taken to the _dispatch_sync_f_slow function; enter it:

Again complex logic, continue to use symbolic breakpoints.

Back in the project, set symbolic breakpoints on all the possible branches:

  • We land in the _dispatch_sync_function_invoke function

Enter the _dispatch_sync_function_invoke function:

Next, enter the _dispatch_sync_function_invoke_inline function:

At this point, it’s clear:

  • _dispatch_thread_frame_push: the task is pushed onto the queue

  • _dispatch_client_callout: the task is executed

  • _dispatch_thread_frame_pop: the task is popped off the queue

Enter the _dispatch_client_callout function

  • Internally it calls f(ctxt), which invokes the block's task code.

Then we verify through LLDB:

  • And the source logic is exactly the same

The synchronous function executes tasks sequentially because _dispatch_sync_function_invoke_inline calls three functions in order:

  • _dispatch_thread_frame_push: the task is pushed onto the queue

  • _dispatch_client_callout: the task is executed

  • _dispatch_thread_frame_pop: the task is popped off the queue

When the synchronization function is used with a serial queue, the _dispatch_sync_f_inline function takes the _dispatch_barrier_sync_f (synchronous barrier) path instead.

Concurrent queue

Synchronization functions with concurrent queues:

dispatch_queue_t queue = dispatch_queue_create("cooci", DISPATCH_QUEUE_CONCURRENT);
dispatch_sync(queue, ^{
    NSLog(@"block-- : %@", [NSThread currentThread]);
});

dispatch_sync –> _dispatch_sync_f

Enter the _dispatch_sync_f_inline function for details:

Concurrent queues trigger different code branches than global queues.

Enter the _dispatch_sync_invoke_and_complete function:

  • which ultimately calls the _dispatch_sync_function_invoke_inline function as well.

The last parameter of the function

dispatch_function_t func DISPATCH_TRACE_ARG(void *dc) 

#define DISPATCH_TRACE_ARG(arg)        // when tracing is disabled

#define DISPATCH_TRACE_ARG(arg) , arg  // when tracing is enabled
  • The comma and arg are wrapped together inside the macro, so no extra comma is needed at the call site.

Deadlock

  • When a deadlock occurs, the _dispatch_sync_f_slow function is called
  • which then calls __DISPATCH_WAIT_FOR_QUEUE__ to throw the exception

Based on the earlier analysis, explore the flow: dispatch_sync -> _dispatch_sync_f -> _dispatch_sync_f_inline.

Enter the _dispatch_sync_f_inline function:

With a serial queue, the code goes down the synchronous-barrier path, _dispatch_barrier_sync_f; enter it:

Then enter the _dispatch_barrier_sync_f_inline function:

  • When a deadlock occurs, it is the _dispatch_sync_f_slow function that is reached

Enter the _dispatch_sync_f_slow function again:

  • Internally it calls __DISPATCH_WAIT_FOR_QUEUE__, which throws the exception.

Go to the __DISPATCH_WAIT_FOR_QUEUE__ function:

  • If the _dq_state_drain_locked_by function returns true, the exception is thrown
  • It takes two parameters: dq_state, the queue's state, and dsc_waiter, the thread's tid

Enter the _dq_state_drain_locked_by function:

Enter the _dispatch_lock_is_locked_by function:

  • The judgment relies on DLOCK_OWNER_MASK.

Check the definition of DLOCK_OWNER_MASK:

  • DLOCK_OWNER_MASK is a very large mask; ANDing it with a non-zero value cannot yield zero;

  • So for the result to be zero, the bitwise XOR of lock_value and tid must be zero, which is only possible when the two values are identical.

When a synchronous task is submitted to a serial queue that is currently being drained by the same thread, the queue and the task end up waiting for each other, and a deadlock exception is raised.

Asynchronous functions

Find the implementation of the dispatch_async function in the libdispatch source code:

  • The task and its priority are encapsulated, producing a qos of type dispatch_qos_t
  • Then the _dispatch_continuation_async function is called

Encapsulation of tasks and priorities

Enter the _dispatch_continuation_init function:

Enter the _dispatch_continuation_init_f function to encapsulate the block task:

Enter the _dispatch_continuation_priority_set function to encapsulate task priorities:

Priority is one of the criteria that governs when an asynchronous task runs. Because asynchronous tasks are executed according to CPU scheduling, their order is not guaranteed, which is why the task and its priority must be encapsulated.

Concurrent queue

Asynchronous functions work with concurrent queues

dispatch_queue_t queue = dispatch_queue_create("cooci", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
    NSLog(@"block-- : %@", [NSThread currentThread]);
});

dispatch_async –> _dispatch_continuation_async. Enter _dispatch_continuation_async and check the implementation:

Now look at the dx_push macro definition:

  • Here we focus on parameter 3; we can ignore dx_vtable for now and keep tracing dq_push.

  • dq_push is assigned a different function depending on the queue type.

To view the assignment of the concurrent queue:

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane, 
    .do_type = DISPATCH_QUEUE_CONCURRENT_TYPE, 
    .do_dispose = _dispatch_lane_dispose, 
    .do_debug = _dispatch_queue_debug, 
    .do_invoke = _dispatch_lane_invoke, 
    .dq_activate = _dispatch_lane_activate, 
    .dq_wakeup = _dispatch_lane_wakeup, 
    .dq_push = _dispatch_lane_concurrent_push, 
);

Enter the _dispatch_lane_concurrent_push function:

Using a symbolic breakpoint, start with the _dispatch_continuation_redirect_push function:

dx_push is called again, but this time on the root queue, so dx_push resolves to _dispatch_root_queue_push instead of _dispatch_lane_concurrent_push, much like calling a parent-class method:

See the flow through symbolic breakpoints with assembly code: _dispatch_root_queue_push –> _dispatch_root_queue_push_override –> _dispatch_root_queue_poke –> _dispatch_root_queue_poke_slow.

Enter the _dispatch_root_queue_poke_slow function:

static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    ...
    _dispatch_root_queues_init();
    ...
    do {
        _dispatch_retain(dq); // released in _dispatch_worker_thread
        while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
            if (r != EAGAIN) {
                (void)dispatch_assume_zero(r);
            }
            _dispatch_temporary_resource_shortage();
        }
    } while (--remaining);
    ...
}
  • _dispatch_root_queues_init registers the callback that executes asynchronous tasks;
  • a do...while loop creates the threads, using the pthread_create function.

Of course, we can also use LLDB debugging, printing stack information, to do the reverse:

  • The system's _pthread_wqthread calls libdispatch's _dispatch_worker_thread2 function.

_dispatch_root_queues_init –> _dispatch_root_queues_init_once –> _dispatch_worker_thread2

_dispatch_worker_thread2 is registered through the wrapper around the pthread API:

  • Asynchronous threads are scheduled by the CPU; at the appropriate time the system, via the _pthread_wqthread function in libsystem_pthread.dylib, calls the _dispatch_worker_thread2 function in libdispatch.dylib, which ultimately executes the asynchronous task.

Asynchronous task callback flow: _dispatch_worker_thread2 –> _dispatch_root_queue_drain –> _dispatch_async_redirect_invoke –> _dispatch_continuation_pop –> _dispatch_client_callout –> _dispatch_call_block_and_release.

To summarize the logic of an asynchronous function with a concurrent queue:

  • First, encapsulate tasks and priorities;

  • Call dx_push several times and finally find _dispatch_root_queue_push;

  • Register callbacks performed by asynchronous tasks with _dispatch_root_queues_init;

  • A do...while loop creates threads using the pthread_create function;

  • Asynchronous threads are scheduled by the CPU, and the system calls the _dispatch_worker_thread2 function when appropriate via libsystem_pthread.dylib.

Global queue

Asynchronous functions with global queues

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_async(queue, ^{
    NSLog(@"block-- : %@", [NSThread currentThread]);
});

dispatch_async -> _dispatch_continuation_async -> dx_push.

The dx_push call resolves to a different assignment than for the concurrent queue:

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane, 
    .do_type = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE, 
    .do_dispose = _dispatch_object_no_dispose, 
    .do_debug = _dispatch_queue_debug, 
    .do_invoke = _dispatch_object_no_invoke, 
    .dq_activate = _dispatch_queue_no_activate, 
    .dq_wakeup = _dispatch_root_queue_wakeup, 
    .dq_push = _dispatch_root_queue_push, 
);

Enter the _dispatch_root_queue_push function flow:

_dispatch_root_queue_push –> _dispatch_root_queue_push_override –> _dispatch_root_queue_push_inline –> _dispatch_root_queue_poke –> _dispatch_root_queue_poke_slow

Enter the _dispatch_root_queue_poke_slow function:

  • _dispatch_root_queues_init registers the callback that executes asynchronous tasks
  • a do...while loop creates threads, using the pthread_create function

Use LLDB to work backwards:

Asynchronous task callback process: _dispatch_worker_thread2 –> _dispatch_root_queue_drain –> _dispatch_queue_override_invoke –> _dispatch_client_callout –> _dispatch_call_block_and_release

The singleton pattern

static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    NSLog(@"block-- : %@", [NSThread currentThread]);
});

Go to the source code and find the implementation of the dispatch_once function:

Enter the dispatch_once_f function:

  • val is force-cast to the dispatch_once_gate_t type, which is used much like a gate or barrier

Three conditional branches:

  • If the execution is complete, return directly

  • For the first time, execute the _dispatch_once_callout function

  • If the task is currently being executed by another caller, enter _dispatch_once_wait and wait

Lock handling

Enter the _dispatch_once_gate_tryenter function:

  • An atomic lock is taken to guard against multithreaded re-entry

Executing the task

Enter the _dispatch_once_callout function:

Enter the _dispatch_client_callout function again:

  • f(ctxt) performs the task callback.

The _dispatch_once_gate_broadcast function is called from the _dispatch_once_callout function:

  • The lock is released and the task is marked as done

The principle of singleton pattern:

  • dispatch_once is called with the onceToken and the block. onceToken is a static variable and therefore unique; at the bottom layer it is force-cast to a variable l of type dispatch_once_gate_t. Through the os_atomic_load function, l yields a value v that records the task's state; if v equals DLOCK_ONCE_DONE, the task has already been processed once and the function returns immediately.

  • If the task is executed for the first time, lock the task and set its status to DLOCK_ONCE_UNLOCK to ensure thread safety. After the block callback is executed, the current task is unlocked and the status of the current task is set to DLOCK_ONCE_DONE. The next time the task is executed, it will return directly.

  • If other callers arrive while the current task is still executing, they cannot obtain the lock (the current caller holds it) and stay in the wait until it completes.