This article studies and references "iOS Basic Principle 28: GCD basic principle analysis" - thanks to the original author.

This article is an analysis of the underlying principles of queue creation, synchronous/asynchronous functions, singletons, semaphores, and scheduling groups

Queues are created with the dispatch_queue_create function in GCD. Let's explore how queues are created inside libdispatch.dylib.

Queue creation
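Before diving into the source, a minimal usage sketch of the API being analyzed (the queue labels are placeholders):

dispatch_queue_t serial = dispatch_queue_create("com.CJL.serial", DISPATCH_QUEUE_SERIAL);           // serial (attr may also be NULL)
dispatch_queue_t concurrent = dispatch_queue_create("com.CJL.concurrent", DISPATCH_QUEUE_CONCURRENT); // concurrent
dispatch_queue_t global = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);            // system concurrent queue
dispatch_queue_t main = dispatch_get_main_queue();                                                  // system serial queue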

Underlying source code analysis

  • Search for dispatch_queue_create in the source code
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
    return _dispatch_lane_create_with_target(label, attr, DISPATCH_TARGET_QUEUE_DEFAULT, true);
}
  • Enter _dispatch_lane_create_with_target
DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    // dqai creation
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

    // [Step 1] normalize parameters such as qos, overcommit, tq...

    // Concatenate the queue class name, i.e. the vtable
    const void *vtable;
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
    if (dqai.dqai_concurrent) { // OS_dispatch_queue_concurrent
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        vtable = DISPATCH_VTABLE(queue_serial);
    }
    ...
    // [Step 4] alloc + init the queue
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    _dispatch_queue_init(dq, dqf,
            dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1,
            DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    dq->dq_label = label;
    // Priority processing
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    ...
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);
    return _dispatch_trace_queue_create(dq)._dq; // process and return dq
}

_dispatch_lane_create_with_target analysis. [Step 1] Pass the incoming dqa (the queue attribute: serial, concurrent, etc.) to _dispatch_queue_attr_to_info to create a dispatch_queue_attr_info_t object, dqai, which stores the queue's attributes

  • [Step 2] Set the attributes associated with the queue, such as qos
  • [Step 3] Concatenate the queue class name, i.e. the vtable, via DISPATCH_VTABLE. DISPATCH_VTABLE is a macro definition, shown below, so the queue class name is spliced together from OS_dispatch_ plus the queue type, e.g. queue_concurrent
    • Serial queue type: OS_dispatch_queue_serial, verified as follows

Concurrent queue type: OS_dispatch_queue_concurrent, verified as follows

#define DISPATCH_VTABLE(name) DISPATCH_OBJC_CLASS(name)
👇
#define DISPATCH_OBJC_CLASS(name) (&DISPATCH_CLASS_SYMBOL(name))
👇
#define DISPATCH_CLASS(name) OS_dispatch_##name

[Step 4] Initialize the queue dq via alloc + init, where the boolean dqai.dqai_concurrent passed to _dispatch_queue_init decides whether the queue is serial or concurrent, and vtable indicates the queue's class. This shows that queues are also objects

  • Enter _dispatch_object_alloc -> _os_object_alloc_realized, where isa is set to point to the class, which again verifies that queues are also objects

Enter the _dispatch_queue_init method: the queue type is dispatch_queue_t and the queue's properties are set there. [Step 5] Process the newly created queue via _dispatch_trace_queue_create, where _dispatch_trace_queue_create is a macro wrapping _dispatch_introspection_queue_create; it eventually returns the processed _dq. Following _dispatch_introspection_queue_create_hook -> dispatch_introspection_queue_get_info -> _dispatch_introspection_lane_get_info, you can see that, unlike our custom classes, queue creation at the bottom layer is done by filling in a template. Hence the conclusion:

  • In the queue creation method dispatch_queue_create, the second parameter (the queue type) determines the width used at the bottom layer, max vs. 1 (used to distinguish concurrent from serial), where 1 means serial
  • A queue is also an object; it too is created via alloc + init, and in alloc there is also a class, whose name is spliced together by macro definitions and to which isa is pointed
  • At the bottom layer, queue creation is handled from a template, a structure of type dispatch_introspection_queue_s

The underlying analysis flow of dispatch_queue_create is shown in the following figure

Function underlying principle analysis

This section mainly analyzes the asynchronous function dispatch_async and the synchronous function dispatch_sync

Asynchronous function

Enter the dispatch_async source code; the analysis focuses on two functions

  • _dispatch_continuation_init: task wrapper function
  • _dispatch_continuation_async: concurrent processing function
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work) // work: the task
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME;
    dispatch_qos_t qos;

    // Wrap the block task
    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    // Concurrent processing
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

_dispatch_continuation_init Task wrapper

Enter the source code of _dispatch_continuation_init; it mainly wraps the task and sets up the callback function, which is equivalent to initialization

DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
        dispatch_queue_class_t dqu, dispatch_block_t work,
        dispatch_block_flags_t flags, uintptr_t dc_flags)
{
    void *ctxt = _dispatch_Block_copy(work); // copy the task
    dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        dc->dc_flags = dc_flags;
        dc->dc_ctxt = ctxt;
        // will initialize all fields but requires dc_flags & dc_ctxt to be set
        return _dispatch_continuation_init_slow(dc, dqu, flags);
    }
    dispatch_function_t func = _dispatch_Block_invoke(work); // wrap the task (asynchronous callback)
    if (dc_flags & DC_FLAG_CONSUME) {
        func = _dispatch_call_block_and_release; // callback function (synchronous callback)
    }
    return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

There are mainly the following steps

  • Copy the task via _dispatch_Block_copy
  • Wrap the task via _dispatch_Block_invoke, where _dispatch_Block_invoke is a macro definition; per the analysis above, this is the asynchronous callback
#define _dispatch_Block_invoke(bb) \
        ((dispatch_function_t)((struct Block_layout *)bb)->invoke)
  • If it is synchronous, the callback function is assigned the value _dispatch_call_block_and_release

  • Assign the callback function via _dispatch_continuation_init_f, i.e. f is func, and save it in a property

_dispatch_continuation_async Concurrent processing

In this function, the block callback is performed

  • Enter the _dispatch_continuation_async source code
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
        dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
    if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
        _dispatch_trace_item_push(dqu, dc);
    }
#else
    (void)dc_flags;
#endif
    return dx_push(dqu._dq, dc, qos); // like dx_invoke, a macro
}
  • The key code is dx_push(dqu._dq, dc, qos); dx_push is a macro definition, as shown below
#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)
  • dq_push executes a different function depending on the queue type

Debugging the executed function with symbolic breakpoints

  • Run the demo and use symbolic breakpoints to determine which function is executed. Since it is a concurrent queue, add a symbolic breakpoint on _dispatch_lane_concurrent_push to see whether execution goes there
dispatch_queue_t conque = dispatch_queue_create("com.CJL.Queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(conque, ^{
    NSLog(@"Asynchronous function");
});
  • Running it shows that it does hit _dispatch_lane_concurrent_push

  • Enter the _dispatch_lane_concurrent_push source code; it has two branches. Continue debugging with symbolic breakpoints on _dispatch_continuation_redirect_push and _dispatch_lane_push, and it turns out execution goes to _dispatch_continuation_redirect_push

  • Enter the _dispatch_continuation_redirect_push source code and find that dx_push is called again. Since a queue is also an object, with a parent class and a root class, this recurses until the root class's method is executed

  • Next, add a symbolic breakpoint on the root class's _dispatch_root_queue_push to verify the guess; the run result confirms it

  • Follow _dispatch_root_queue_push -> _dispatch_root_queue_push_inline -> _dispatch_root_queue_poke -> _dispatch_root_queue_poke_slow; symbolic breakpoints verify that execution does reach here. Looking at the source of this method, there are two main steps

  • Register callbacks with the _dispatch_root_queues_init method

  • Create a thread through a do-while loop, using the pthread_create method

DISPATCH_NOINLINE
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    int remaining = n;
    int r = ENOSYS;

    _dispatch_root_queues_init(); // key: register the callback
    ...
    // do-while loop creates threads
    do {
        _dispatch_retain(dq); // released in _dispatch_worker_thread
        while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
            if (r != EAGAIN) {
                (void)dispatch_assume_zero(r);
            }
            _dispatch_temporary_resource_shortage();
        }
    } while (--remaining);
    ...
}

_dispatch_root_queues_init

  • Enter the _dispatch_root_queues_init source code and find that it is a dispatch_once_f singleton (see the singleton analysis later; not covered here), where the func passed in is _dispatch_root_queues_init_once.
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queues_init(void)
{
    dispatch_once_f(&_dispatch_root_queues_pred, NULL, _dispatch_root_queues_init_once);
}
  • Enter the _dispatch_root_queues_init_once source code; internally, the call handler for its different work items is _dispatch_worker_thread2

Its block callback executes along the call path: _dispatch_root_queues_init_once -> _dispatch_worker_thread2 -> _dispatch_root_queue_drain -> _dispatch_root_queue_drain -> _dispatch_continuation_pop_inline -> _dispatch_continuation_invoke_inline -> _dispatch_client_callout -> dispatch_call_block_and_release

This path can be verified by setting a breakpoint and printing the stack with bt
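For example, a hedged lldb session for the demo above might look like this (symbol names as in this libdispatch version):

(lldb) breakpoint set -n _dispatch_client_callout   # symbolic breakpoint on the callout
(lldb) c                                            # continue until the block is about to run
(lldb) bt                                           # print the stack to see the call path above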

Note

One thing to note here is that there is a difference between a singleton block callback and an asynchronous function’s block callback

  • In a singleton, the func in the block callback is _dispatch_Block_invoke(block)
  • In an asynchronous function, the func in the block callback is dispatch_call_block_and_release

conclusion

So, to sum up, the underlying analysis of asynchronous functions is as follows

  • [Preparation]: first, copy and wrap the asynchronous task, and set the callback function func
  • [Block callback]: the bottom layer recurses through dx_push, redirects to the root queue, then creates a thread via pthread_create, and finally executes the block callback via dx_invoke (note that dx_push and dx_invoke come in pairs)

The underlying analysis flow of asynchronous functions is as follows:

Synchronous function

Enter the dispatch_sync source code; the underlying implementation goes through the barrier function (see the barrier-function analysis below)

DISPATCH_NOINLINE
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    uintptr_t dc_flags = DC_FLAG_BLOCK;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
    }
    _dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
  • Enter the _dispatch_sync_f source code

  • Look at the _dispatch_sync_f_inline source code, where width = 1 indicates a serial queue; there are two important points:

    • Barrier: _dispatch_barrier_sync_f (explained in the barrier-function analysis later in this article); the underlying implementation of the synchronous function is actually the synchronous barrier function
    • Deadlock: _dispatch_sync_f_slow; if there is a mutual wait, a deadlock occurs
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    if (likely(dq->dq_width == 1)) { // serial queue
        return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags); // barrier
    }

    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }

    dispatch_lane_t dl = upcast(dq)._dl;
    // Global concurrent queues and queues bound to non-dispatch threads
    // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
    if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags); // deadlock
    }

    if (unlikely(dq->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags))); // block execution and release
}

_dispatch_sync_f_slow deadlock

  • Enter _dispatch_sync_f_slow; the current main queue is suspended and blocked
  • Adding a task to the queue pushes it onto the main queue: _dispatch_trace_item_push

  • Enter __DISPATCH_WAIT_FOR_QUEUE__, which checks whether dq is the queue being waited on, obtains a state, and then matches the state of dq against the queue the current task depends on

  • Enter the _dq_state_drain_locked_by -> _dispatch_lock_is_locked_by source code
    DISPATCH_ALWAYS_INLINE
    static inline bool
    _dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
    {
        // equivalent to _dispatch_lock_owner(lock_value) == tid
        // XOR: identical bits give 0, and 0 & anything is 0
        // i.e. the queue being waited on is the same one currently executing, which is a deadlock
        return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
    }

If the queue being waited on and the queue currently executing are the same, the thread IDs are compared; if they are equal, a deadlock results. The reason a synchronous function on a concurrent queue executes in order is as follows:
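As an illustration of the mutual wait (a sketch, not from the original article), the classic case is calling dispatch_sync on the serial queue that is currently executing the block:

dispatch_queue_t serial = dispatch_queue_create("com.CJL.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serial, ^{
    // This block is currently draining `serial` ...
    dispatch_sync(serial, ^{        // ... and now synchronously waits for `serial` to drain it
        NSLog(@"never reached");    // mutual wait; _dispatch_sync_f_slow detects it and crashes
    });
});

Calling dispatch_sync(dispatch_get_main_queue(), ...) from the main thread is the same situation.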

In the _dispatch_sync_invoke_and_complete -> _dispatch_sync_function_invoke_inline source code, there are three main steps:

  • Push the task onto the queue: _dispatch_thread_frame_push
  • Execute the task's block callback: _dispatch_client_callout
  • Pop the task off the queue: _dispatch_thread_frame_pop

As you can see from the implementation, the tasks are pushed into the queue first, then the block callback is performed, and then the tasks are popped, so the tasks are executed sequentially.

conclusion

The underlying implementation of the synchronization function is as follows:

  • The underlying implementation of the synchronous function is the synchronous barrier function
  • If, inside a synchronous function, the currently executing queue is the same as the queue being waited on, a mutual wait forms and results in a deadlock

Therefore, to sum up, the underlying implementation process of synchronization function is shown in the figure:

Singleton

In daily development, we typically use GCD's dispatch_once to create singletons, as shown below

static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    NSLog(@"Singleton task");
});
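A typical sketch of how this is used for a shared instance (the class name CJLManager is hypothetical):

+ (instancetype)sharedInstance {
    static CJLManager *instance = nil;        // hypothetical class
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        instance = [[CJLManager alloc] init]; // the block runs exactly once
    });
    return instance;
}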

First, there are two things we need to know about singletons

  • [Why it executes only once] The singleton flow executes only once. How does the bottom layer control this, i.e. why can it execute only once?
  • [Block call timing] When is the singleton's block called?

With the following two questions, we will analyze the underlying singleton

  • Enter the dispatch_once source code; the bottom layer is implemented through dispatch_once_f

  • Parameter 1: onceToken, a static variable. Since static variables defined in different places are distinct, each static variable is unique

  • Parameter 2: Block callback

void
dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
    dispatch_once_f(val, block, _dispatch_Block_invoke(block));
}
  • Enter the source code of dispatch_once_f, where val is the onceToken static variable passed in from outside and func is _dispatch_Block_invoke(block). The singleton's bottom layer mainly goes through the following steps

    • Convert val, the static variable, to a variable l of type dispatch_once_gate_t

    • Use os_atomic_load to obtain the task status v

      • If v equals DLOCK_ONCE_DONE, the task has already been executed, so it returns directly
      • If the task has been executed but storing the done state failed, go to the _dispatch_once_mark_done_if_quiesced function to store it again, setting the identifier to DLOCK_ONCE_DONE
      • Otherwise, try to enter the task via _dispatch_once_gate_tryenter (the task is still in the unlocked state), and then execute the block callback via _dispatch_once_callout
    • If another task (task 2) comes in while the current task is executing, the _dispatch_once_wait function puts task 2 into an infinite wait

DISPATCH_NOINLINE
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
    dispatch_once_gate_t l = (dispatch_once_gate_t)val;

#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    uintptr_t v = os_atomic_load(&l->dgo_once, acquire); // load the task status
    if (likely(v == DLOCK_ONCE_DONE)) { // already executed, return directly
        return;
    }
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    if (likely(DISPATCH_ONCE_IS_GEN(v))) {
        return _dispatch_once_mark_done_if_quiesced(l, v);
    }
#endif
#endif
    if (_dispatch_once_gate_tryenter(l)) { // try to enter the task
        return _dispatch_once_callout(l, ctxt, func); // block callback
    }
    return _dispatch_once_wait(l); // wait indefinitely
}

_dispatch_once_gate_tryenter unlock

Its source uses the underlying os_atomic_cmpxchg method: the task identifier is compared against DLOCK_ONCE_UNLOCKED, and if it matches, the exchange succeeds and the current thread's lock value is stored, i.e. the task is taken for execution

DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_once_gate_tryenter(dispatch_once_gate_t l)
{
    return os_atomic_cmpxchg(&l->dgo_once, DLOCK_ONCE_UNLOCKED,
            (uintptr_t)_dispatch_lock_value_for_self(), relaxed); // compare first, then exchange
}
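Conceptually, the compare-and-exchange behaves like the sketch below (a non-atomic illustration only; the real call is a single atomic operation):

// pseudo-equivalent of os_atomic_cmpxchg(&l->dgo_once, DLOCK_ONCE_UNLOCKED, self_lock, relaxed)
static bool sketch_cmpxchg(uintptr_t *target, uintptr_t expected, uintptr_t desired) {
    if (*target == expected) {  // still DLOCK_ONCE_UNLOCKED, nobody has entered yet
        *target = desired;      // store the current thread's lock value
        return true;            // this caller gets to run the block
    }
    return false;               // someone else already entered; go wait
}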

_dispatch_once_callout callback

  • Enter the _dispatch_once_callout source code, there are two main steps

    • _dispatch_client_callout: Block callback execution
    • _dispatch_once_gate_broadcast: Broadcast
DISPATCH_NOINLINE
static void
_dispatch_once_callout(dispatch_once_gate_t l, void *ctxt,
        dispatch_function_t func)
{
    _dispatch_client_callout(ctxt, func); // block callback
    _dispatch_once_gate_broadcast(l); // broadcast: tell the waiters it is done, stop waiting
}
  • Enter the _dispatch_client_callout source code; it mainly executes the block callback, where f equals _dispatch_Block_invoke(block), the asynchronous callback
#undef _dispatch_client_callout
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
    @try {
        return f(ctxt);
    } @catch (...) {
        objc_terminate();
    }
}
  • Enter the _dispatch_once_gate_broadcast -> _dispatch_once_mark_done source code; it mainly assigns a value to dgo->dgo_once, setting the task identifier to DLOCK_ONCE_DONE, i.e. unlocking
DISPATCH_ALWAYS_INLINE
static inline uintptr_t
_dispatch_once_mark_done(dispatch_once_gate_t dgo)
{
    // atomically exchange, setting the identifier to DLOCK_ONCE_DONE
    return os_atomic_xchg(&dgo->dgo_once, DLOCK_ONCE_DONE, release);
}

conclusion

The basic implementation of singletons is described as follows:

  • [Why it executes only once]: a GCD singleton has two important parameters, onceToken and block. onceToken is a static variable, and therefore unique; at the bottom layer it is wrapped into a variable l of type dispatch_once_gate_t. Through the underlying atomic operations, l yields the variable v, which is used to query the task status. If v equals DLOCK_ONCE_DONE, the task has already been handled once, so it returns directly
  • [Block call timing]: if the task has not been executed yet, the bottom layer compares the task status with DLOCK_ONCE_UNLOCKED via an atomic function and locks the task, which guarantees that the current task executes uniquely and prevents it from being defined and executed again elsewhere. After locking, the block callback function executes; when it completes, the current task is marked done and its status is set to DLOCK_ONCE_DONE, so the next time in it does not execute and returns directly
  • [Multithreading]: if other tasks come in while the current task is executing, they enter an infinite wait, because the current task has acquired the lock and the others cannot obtain it

The underlying process analysis of the singleton is as follows:

Barrier function

There are two main kinds of barrier functions used in GCD

  • Synchronous barrier function dispatch_barrier_sync (executed on the main thread): it is not reached until the preceding tasks have completed, and it blocks the thread, affecting the execution of subsequent tasks
  • Asynchronous barrier function dispatch_barrier_async: it is reached only after the preceding tasks have completed

The most direct effect of the barrier function: tasks submitted before the barrier finish first, then the barrier task executes, and only then do the tasks submitted after the barrier run. Meanwhile, note the following points about the barrier function:

  • The barrier function can only control one and the same concurrent queue
  • When a synchronous barrier is added to a queue, the current thread is locked until the tasks ahead of the barrier and the barrier task itself have finished; only then does the current thread continue and execute the next line of code
  • The barrier function only makes sense with a custom concurrent queue. With a serial queue or the system-provided global concurrent queue, the barrier behaves like an ordinary synchronous function and is meaningless

Code debugging

There are four tasks in total; the first two have a dependency, i.e. task 2 may only execute after task 1 has finished, which is where the barrier function comes in
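A sketch of that setup, assuming the tasks are simple logs and the queue is a custom concurrent queue:

dispatch_queue_t concurrentQueue = dispatch_queue_create("com.CJL.Queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(concurrentQueue, ^{ NSLog(@"task 1"); });
// The barrier guarantees task 1 has finished before anything after it starts
dispatch_barrier_async(concurrentQueue, ^{ NSLog(@"--- barrier ---"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"task 2"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"task 3"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"task 4"); });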

  • The asynchronous barrier function does not block the main thread; what the asynchronous barrier blocks is the queue

  • The synchronous barrier function does block the main thread; what the synchronous barrier blocks is the current thread

conclusion

  • What the asynchronous barrier function blocks is the queue, and it must be a custom concurrent queue; it does not affect the execution of main-thread tasks
  • What the synchronous barrier function blocks is the thread, here the main thread, which affects the execution of the main thread's other tasks

Usage scenarios

In addition to being used for task dependencies, the barrier function can also be used for data safety

If you simply add to the mutable array from many concurrent tasks, it crashes. The reason is that the data is repeatedly retained and released; a release can begin before the corresponding retain has completed, which amounts to releasing data that is not there
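A sketch of the problematic version being described (assumed from context, since the original shows it only as a screenshot; the method name use040 is hypothetical):

- (void)use040 {
    dispatch_queue_t concurrentQueue = dispatch_queue_create("com.CJL.Queue", DISPATCH_QUEUE_CONCURRENT);
    NSMutableArray *array = [NSMutableArray array];
    for (int i = 0; i < 100000; i++) {
        dispatch_async(concurrentQueue, ^{
            // Many threads mutate the array at once; the old backing store can be
            // over-released while being replaced, which eventually crashes
            [array addObject:[NSString stringWithFormat:@"%d", i]];
        });
    }
}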

Fixes

  • Add a barrier function
    - (void)use041 {
        dispatch_queue_t concurrentQueue = dispatch_queue_create("com.CJL.Queue", DISPATCH_QUEUE_CONCURRENT);
    
        NSMutableArray *array = [NSMutableArray array];
    
        for (int i = 0; i < 100000; i++) {
            dispatch_async(concurrentQueue, ^{
                dispatch_barrier_async(concurrentQueue, ^{
                    [array addObject:[NSString stringWithFormat:@"%d", i]];
                });
            });
        }
    }
  • Use the mutex @synchronized (self) {}
    - (void)use041{
        dispatch_queue_t concurrentQueue = dispatch_queue_create("com.CJL.Queue", DISPATCH_QUEUE_CONCURRENT);
    
        NSMutableArray *array = [NSMutableArray array];
    
        for (int i = 0; i<100000; i++) {
            dispatch_async(concurrentQueue, ^{
                @synchronized (self) {
                    [array addObject:[NSString stringWithFormat:@"%d", i]];
                };
            });
        }
    }

Notes

  • If the barrier function is used on the global queue: the system is also using the global concurrent queue, so the barrier would intercept the system's tasks as well, and problems (crashes) can occur
  • If the custom concurrent queue is replaced with a serial queue: a serial queue is already ordered and synchronous by itself, so adding a barrier only wastes performance
  • The barrier function only blocks once

Underlying analysis of the asynchronous barrier function

Enter the dispatch_barrier_async source code; its underlying implementation is similar to dispatch_async and is not analyzed again here. Explore it yourself if interested

#ifdef __BLOCKS__
void
dispatch_barrier_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_BARRIER;
    dispatch_qos_t qos;

    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    _dispatch_continuation_async(dq, dc, qos, dc_flags);
}
#endif

Underlying analysis of the synchronous barrier function

Enter the dispatch_barrier_sync source code as follows

void
dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    uintptr_t dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
    }
    _dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

Go to the _dispatch_barrier_sync_f -> _dispatch_barrier_sync_f_inline source code

_dispatch_barrier_sync_f_inline

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    dispatch_tid tid = _dispatch_tid_self(); // get the thread id, the thread's unique identity
    ...
    // Determine the thread status: can the barrier be acquired?
    if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl, // not reclaimed
                DC_FLAG_BARRIER | dc_flags);
    }
    // Verify whether a target exists; if so, recurse to find the barrier's target
    if (unlikely(dl->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func,
                DC_FLAG_BARRIER | dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags | DC_FLAG_BARRIER))); // execute
}

The source code is mainly divided into the following parts

  • Get the thread ID by _dispatch_tid_self

  • Determine the thread status via _dispatch_queue_try_acquire_barrier_sync

    • Enter _dispatch_queue_try_acquire_barrier_sync_and_suspend, where the release is handled

  • Recursively find the barrier function's target via _dispatch_sync_recurse

  • Process forward information via _dispatch_introspection_sync_begin

  • Execute the block and release via _dispatch_lane_barrier_sync_invoke_and_complete

Semaphore

Semaphores are generally used to synchronize tasks, similar to mutex, and the user can control the maximum number of concurrent GCD tasks as needed

// Semaphore
dispatch_semaphore_t sem = dispatch_semaphore_create(1);
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
dispatch_semaphore_signal(sem);
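A sketch of using a semaphore to cap concurrency, assuming we want at most two tasks running at once:

dispatch_semaphore_t sem = dispatch_semaphore_create(2); // allow at most 2 concurrent tasks
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
for (int i = 0; i < 10; i++) {
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // value--, waits once it drops below 0
        NSLog(@"task %d", i);
        dispatch_semaphore_signal(sem);                      // value++, lets a waiting task proceed
    });
}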

Let’s analyze the underlying principles

dispatch_semaphore_create

The underlying implementation of this function initializes the semaphore and sets GCD's maximum concurrency; the value must not be negative

dispatch_semaphore_t
dispatch_semaphore_create(long value)
{
    dispatch_semaphore_t dsema;

    // If the internal value is negative, then the absolute of the value is
    // equal to the number of waiting threads. Therefore it is bogus to
    // initialize the semaphore with a negative value.
    if (value < 0) {
        return DISPATCH_BAD_INPUT;
    }

    dsema = _dispatch_object_alloc(DISPATCH_VTABLE(semaphore),
            sizeof(struct dispatch_semaphore_s));
    dsema->do_next = DISPATCH_OBJECT_LISTLESS;
    dsema->do_targetq = _dispatch_get_default_queue(false);
    dsema->dsema_value = value;
    _dispatch_sema4_init(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
    dsema->dsema_orig = value;
    return dsema;
}

dispatch_semaphore_wait lock

os_atomic_dec2o performs a -- (decrement) on the semaphore's value dsema_value; internally it executes the C++ atomic_fetch_sub_explicit function

  • If value is greater than or equal to 0, no waiting is needed, i.e. the call returns success immediately
  • If value equals LONG_MIN, the system throws a crash
  • If value is less than 0, enter the long wait
long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
    // dsema_value-- (atomic decrement)
    long value = os_atomic_dec2o(dsema, dsema_value, acquire);
    if (likely(value >= 0)) { // no need to wait, return success (0) directly
        return 0;
    }
    return _dispatch_semaphore_wait_slow(dsema, timeout); // long wait
}

The os_atomic_dec2o macro expands as follows

os_atomic_dec2o(p, f, m)
👇
os_atomic_sub2o(p, f, 1, m)
👇
os_atomic_sub(&(p)->f, (v), m)
👇
_os_atomic_c11_op((p), (v), m, sub, -)
👇
({ _os_atomic_basetypeof(p) _v = (v), _r = \
        atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), _v, \
        memory_order_##m); (__typeof__(_r))(_r op _v); })

Substituting the concrete values gives

os_atomic_dec2o(dsema, dsema_value, acquire)
os_atomic_sub2o(dsema, dsema_value, 1, m)
os_atomic_sub(&dsema->dsema_value, 1, m)
_os_atomic_c11_op(dsema->dsema_value, 1, m, sub, -)
_r = atomic_fetch_sub_explicit(dsema->dsema_value, 1)
// equivalent to dsema->dsema_value - 1
  • Enter the _dispatch_semaphore_wait_slow source code: when value is less than 0, different actions are taken depending on the timeout

dispatch_semaphore_signal unlock

os_atomic_inc2o performs a ++ (increment) on value; internally it executes the C++ atomic_fetch_add_explicit function

  • If value is greater than 0, no one is waiting, so the call returns success directly
  • If value equals 0, enter the long wait
long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
    // dsema_value++ (atomic increment)
    long value = os_atomic_inc2o(dsema, dsema_value, release);
    if (likely(value > 0)) { // no waiters, return 0 directly
        return 0;
    }
    if (unlikely(value == LONG_MIN)) {
        DISPATCH_CLIENT_CRASH(value,
                "Unbalanced call to dispatch_semaphore_signal()");
    }
    return _dispatch_semaphore_signal_slow(dsema); // enter the long wait
}

The os_atomic_inc2o macro expands as follows

os_atomic_inc2o(p, f, m)
👇
os_atomic_add2o(p, f, 1, m)
👇
os_atomic_add(&(p)->f, (v), m)
👇
_os_atomic_c11_op((p), (v), m, add, +)
👇
({ _os_atomic_basetypeof(p) _v = (v), _r = \
        atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), _v, \
        memory_order_##m); (__typeof__(_r))(_r op _v); })

Substituting the concrete values gives

os_atomic_inc2o(dsema, dsema_value, release)
os_atomic_add2o(dsema, dsema_value, 1, m)
os_atomic_add(&(dsema)->dsema_value, (1), m)
_os_atomic_c11_op((dsema->dsema_value), (1), m, add, +)
_r = atomic_fetch_add_explicit(dsema->dsema_value, 1)
// equivalent to dsema->dsema_value + 1

conclusion

  • dispatch_semaphore_create mainly initializes the concurrency limit
  • dispatch_semaphore_wait performs -- on the semaphore's value, i.e. locking
  • dispatch_semaphore_signal performs ++ on the semaphore's value, i.e. unlocking

Therefore, to sum up, the underlying operation of semaphore correlation functions is shown in the figure below:

Scheduling group

The scheduling group controls the execution sequence of tasks in the following ways

dispatch_group_create    // create a group
dispatch_group_async     // submit a grouped task
dispatch_group_notify    // notification that the group's tasks have completed
dispatch_group_wait      // wait (with a timeout) for the group's tasks to finish

// enter and leave are generally used in pairs
dispatch_group_enter     // enter the group
dispatch_group_leave     // leave the group

Usage

Suppose there are two tasks, and the UI should only be updated after both have completed; this is where a scheduling group can be used
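A sketch of that scenario (the queue and task contents are placeholders):

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

dispatch_group_enter(group);
dispatch_async(queue, ^{
    NSLog(@"task 1");
    dispatch_group_leave(group);
});

dispatch_group_enter(group);
dispatch_async(queue, ^{
    NSLog(@"task 2");
    dispatch_group_leave(group);
});

// Fires only after the enter and leave calls balance out
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"both tasks finished, update the UI");
});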

  • [Modification 1] Can dispatch_group_notify be moved in front of the tasks?

    It can still execute, but notify fires as soon as the enter and leave calls balance out, rather than waiting for both tasks to complete. In other words, notify executes whenever enter and leave are paired

  • [Modification 2] Add one more enter, so that enter : leave is 3 : 2. Will notify execute?

    No. It keeps waiting; notify only executes once the matching leave happens

  • [Modification 3] If enter : leave is 2 : 3, will notify execute? It crashes, because enter and leave are unbalanced; the crash happens inside the extra leave, and async adds a delay before it occurs

dispatch_group_create creates a group

Create a group and set its properties. The value of the group is 0

  • Enter the dispatch_group_create source code
dispatch_group_t
dispatch_group_create(void)
{
    return _dispatch_group_create_with_count(0);
}
  • Enter the _dispatch_group_create_with_count source code, which assigns the group object's properties and returns the group object, where n equals 0
DISPATCH_ALWAYS_INLINE
static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
    // Create the group object, of type OS_dispatch_group
    dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
            sizeof(struct dispatch_group_s));
    // Assign the group object's properties
    dg->do_next = DISPATCH_OBJECT_LISTLESS;
    dg->do_targetq = _dispatch_get_default_queue(false);
    if (n) {
        os_atomic_store2o(dg, dg_bits,
                (uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
        os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
    }
    return dg;
}

dispatch_group_enter enters the group

Use os_atomic_sub_orig2o to perform -- on dg->dg_bits and process the value

void
dispatch_group_enter(dispatch_group_t dg)
{
    // The value is decremented on a 32bits wide atomic so that the carry
    // for the 0 -> -1 transition is not propagated to the upper 32bits.
    uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits, // atomic decrement, 0 -> -1
            DISPATCH_GROUP_VALUE_INTERVAL, acquire);
    uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
    if (unlikely(old_value == 0)) { // if old_value was 0, retain the group
        _dispatch_retain(dg); // <rdar://problem/22318411>
    }
    if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
        // crash when enter is called too many times
        DISPATCH_CLIENT_CRASH(old_bits,
                "Too many calls to dispatch_group_enter()");
    }
}

dispatch_group_leave leaves the group

  • Enter the dispatch_group_leave source code

    • -1 goes back to 0, i.e. a ++ operation
    • Depending on the state, a do-while loop runs and the group is woken to execute the block tasks
    • If 0 + 1 = 1, enter and leave are unbalanced, i.e. leave was called too many times, and a crash occurs
void
dispatch_group_leave(dispatch_group_t dg)
{
    // The value is incremented on a 64bits wide atomic so that the carry for
    // the -1 -> 0 transition increments the generation atomically.
    uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state, // atomic increment ++
            DISPATCH_GROUP_VALUE_INTERVAL, release);
    uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

    if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
        old_state += DISPATCH_GROUP_VALUE_INTERVAL;
        do {
            new_state = old_state;
            if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
                new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            } else {
                // If the group was entered again since the atomic_add above,
                // we can't clear the waiters bit anymore as we don't know for
                // which generation the waiters are for
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            }
            if (old_state == new_state) break;
        } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
                old_state, new_state, &old_state, relaxed)));
        return _dispatch_group_wake(dg, old_state, true); // wake up
    }

    // leave called more times than enter (0 -> 1), crash
    if (unlikely(old_value == 0)) {
        DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
                "Unbalanced call to dispatch_group_leave()");
    }
}
  • Enter the _dispatch_group_wake source code; a do-while loop walks the pending asynchronous tasks and calls _dispatch_continuation_async to execute them
DISPATCH_NOINLINE
static void
_dispatch_group_wake(dispatch_group_t dg, uint64_t dg_state, bool needs_release)
{
    uint16_t refs = needs_release ? 1 : 0; // <rdar://problem/22318411>

    if (dg_state & DISPATCH_GROUP_HAS_NOTIFS) {
        dispatch_continuation_t dc, next_dc, tail;

        // Snapshot before anything is notified/woken <rdar://problem/8554546>
        dc = os_mpsc_capture_snapshot(os_mpsc(dg, dg_notify), &tail);
        do {
            dispatch_queue_t dsn_queue = (dispatch_queue_t)dc->dc_data;
            next_dc = os_mpsc_pop_snapshot_head(dc, tail, do_next);
            _dispatch_continuation_async(dsn_queue, dc,
                    _dispatch_qos_from_pp(dc->dc_priority), dc->dc_flags); // block task execution
            _dispatch_release(dsn_queue);
        } while ((dc = next_dc)); // do-while loop over the asynchronous tasks

        refs++;
    }

    if (dg_state & DISPATCH_GROUP_HAS_WAITERS) {
        _dispatch_wake_by_address(&dg->dg_gen);
    }

    if (refs) _dispatch_release_n(dg, refs); // reference release
}
  • Enter the _dispatch_continuation_async source code
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
        dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
    if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
        _dispatch_trace_item_push(dqu, dc);
    }
#else
    (void)dc_flags;
#endif
    return dx_push(dqu._dq, dc, qos); // like dx_invoke, a macro
}

This step is consistent with the block callback execution of asynchronous functions and is not explained here

dispatch_group_notify notification

  • Enter the dispatch_group_notify source code: if old_state equals 0, the wake-up can happen
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dsn)
{
    uint64_t old_state, new_state;
    dispatch_continuation_t prev;

    dsn->dc_data = dq;
    _dispatch_retain(dq);

    // Obtain dg's underlying state identifier
    prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
    os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) {
        os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
            new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
            if ((uint32_t)old_state == 0) { // enter and leave have balanced out to 0
                os_atomic_rmw_loop_give_up({
                    return _dispatch_group_wake(dg, new_state, false); // wake up
                });
            }
        });
    }
}

The block tasks can be woken either by _dispatch_group_wake inside dispatch_group_leave or by dispatch_group_notify

  • Here os_mpsc_push_update_tail is a macro definition used to get dg's state
#define os_mpsc_push_update_tail(Q, tail, _o_next)  ({ \
    os_mpsc_node_type(Q) _tl = (tail); \
    os_atomic_store2o(_tl, _o_next, NULL, relaxed); \
    os_atomic_xchg(_os_mpsc_tail Q, _tl, release); \
})

dispatch_group_async

  • Enter the dispatch_group_async source code; it mainly wraps the task and processes it asynchronously
#ifdef __BLOCKS__
void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_block_t db)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
    dispatch_qos_t qos;

    // Wrap the task
    qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
    // Process the grouped task asynchronously
    _dispatch_continuation_group_async(dg, dq, dc, qos);
}
#endif

See _dispatch_continuation_group_async source code.

  • Enter the _dispatch_continuation_group_async source code; it mainly wraps the dispatch_group_enter (enter-group) operation
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dc, dispatch_qos_t qos)
{
    dispatch_group_enter(dg); // enter the group
    dc->dc_data = dg;
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags); // async operation
}
  • Enter the _dispatch_continuation_async source code and perform the usual underlying operations of an asynchronous function. Since every enter must have a matching leave, we guess that leave is executed implicitly after the block runs, and verify it with breakpoint debugging and printing the stack

  • Search for the _dispatch_client_callout call in _dispatch_continuation_with_group_invoke

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
    struct dispatch_object_s *dou = dc->dc_data;
    unsigned long type = dx_type(dou);
    if (type == DISPATCH_GROUP_TYPE) { // it is a dispatch group
        _dispatch_client_callout(dc->dc_ctxt, dc->dc_func); // block callback
        _dispatch_trace_item_complete(dc);
        dispatch_group_leave((dispatch_group_t)dou); // leave the group
    } else {
        DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
    }
}

If the type is a dispatch group, the leave-group operation is performed after the block callback. This verifies that the bottom layer of dispatch_group_async encapsulates enter-leave
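In other words, under these assumptions the two forms below are sketches of the same thing:

// Form 1: dispatch_group_async
dispatch_group_async(group, queue, ^{
    NSLog(@"task");
});

// Form 2: what it encapsulates underneath
dispatch_group_enter(group);
dispatch_async(queue, ^{
    NSLog(@"task");
    dispatch_group_leave(group);
});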

conclusion

  • enter and leave only need to come in pairs; they can be near or far apart
  • At the bottom layer, dispatch_group_enter performs -- on the group's value via a C++ atomic function (i.e. 0 -> -1)
  • At the bottom layer, dispatch_group_leave performs ++ on the group's value via a C++ atomic function (i.e. -1 -> 0)
  • At the bottom layer, dispatch_group_notify mainly checks whether the group's state equals 0; when it does, the notification is issued
  • The block tasks can be woken either via dispatch_group_leave or via dispatch_group_notify
  • dispatch_group_async is equivalent to enter + leave; its underlying implementation follows the enter-leave pattern

To sum up, the underlying analysis process of the scheduling group is shown in the figure below

dispatch_source

Overview

dispatch_source is a basic data type used to coordinate the processing of specific low-level system events.

dispatch_source replaces asynchronous callback functions for handling system-related events. When configuring a dispatch source, you specify the events to monitor, the dispatch queue, and the code (block or function) that handles the events. When an event occurs, the dispatch source submits your block or function to the specified queue for execution

The only reason to use a dispatch source rather than dispatch_async is to take advantage of event coalescing (joining).

Coalescing works roughly like this: calling dispatch_source_merge_data on any thread executes the dispatch source's pre-defined handler (simply a block). This process is called a custom event, which is one of the event types a dispatch source supports.

Simply put: this event is signaled to you by calling the dispatch_source_merge_data function.
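A sketch of such a custom (DATA_ADD) source, with the handler and queue as placeholders:

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, queue);
dispatch_source_set_event_handler(source, ^{
    // Fired after dispatch_source_merge_data; merged values are coalesced with ADD
    NSLog(@"merged data: %lu", (unsigned long)dispatch_source_get_data(source));
});
dispatch_resume(source);   // sources are created suspended

// From any thread: signal the event (the value must not be 0 or negative)
dispatch_source_merge_data(source, 1);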

A handle is a pointer to a pointer; it points to a class or structure and is closely tied to the system. There is also the general-purpose handle, HANDLE, and common handle types include:

  • Instance handle HINSTANCE
  • Bitmap handle HBITMAP
  • Device table handle HDC
  • Icon handle HICON

Characteristics

  • Its CPU load is very small, and it tries to occupy as few resources as possible
  • It does not rely on the RunLoop; it depends on the underlying kernel
  • It has the advantage of event coalescing

Usage

  • Create a dispatch source
dispatch_source_t source = dispatch_source_create(dispatch_source_type_t type, 
uintptr_t handle, unsigned long mask, dispatch_queue_t queue)
Parameter descriptions:

  • type: the type of event the dispatch source handles
  • handle: can be understood as a handle, index, or ID; e.g. to monitor a process, pass in the process ID
  • mask: can be understood as a description that provides more detail, so the source knows exactly what to listen for
  • queue: the queue the custom source uses to process all of its response handlers

Dispatch source types

The types of type are as follows

  • DISPATCH_SOURCE_TYPE_DATA_ADD: custom event, variable add (accumulate)
  • DISPATCH_SOURCE_TYPE_DATA_OR: custom event, variable OR
  • DISPATCH_SOURCE_TYPE_MACH_SEND: Mach port send
  • DISPATCH_SOURCE_TYPE_MACH_RECV: Mach port receive
  • DISPATCH_SOURCE_TYPE_MEMORYPRESSURE: memory pressure (note: available from iOS 8)
  • DISPATCH_SOURCE_TYPE_PROC: process events, such as a process exiting, creating one or more child threads, or receiving a UNIX signal
  • DISPATCH_SOURCE_TYPE_READ: IO operations, e.g. read responses for file or socket operations
  • DISPATCH_SOURCE_TYPE_SIGNAL: response when a UNIX signal is received
  • DISPATCH_SOURCE_TYPE_TIMER: timer
  • DISPATCH_SOURCE_TYPE_VNODE: file status monitoring; a file is deleted, moved, or renamed
  • DISPATCH_SOURCE_TYPE_WRITE: IO operations, e.g. write responses for file or socket operations

Note:

  • DISPATCH_SOURCE_TYPE_DATA_ADD: when an event is triggered at high frequency in a short time, the dispatch source accumulates the responses in ADD fashion and processes them once the system is idle. If the triggers are spread out, the dispatch source responds to the events separately.

  • DISPATCH_SOURCE_TYPE_DATA_OR is a custom event as above, but it is accumulated as an OR

Commonly used functions

// Suspend the queue
dispatch_suspend(queue)

// A dispatch source is created in the suspended state by default; it must be
// resumed (after assigning its handlers) before it delivers events
dispatch_resume(source)

// Send an event to the dispatch source. Note: you cannot pass 0
// (the event will not fire), nor a negative number
dispatch_source_merge_data

// Set the block that responds to the dispatch source event,
// executed on the dispatch source's specified queue
dispatch_source_set_event_handler

// Get the dispatch source's data
dispatch_source_get_data

// Get the handle the dispatch source was created with
uintptr_t dispatch_source_get_handle(dispatch_source_t source);

// Get the mask the dispatch source was created with
unsigned long dispatch_source_get_mask(dispatch_source_t source);

// Cancel the dispatch source's event handling, i.e. the block is no longer called.
// dispatch_suspend, by contrast, merely suspends the dispatch source.
void dispatch_source_cancel(dispatch_source_t source);

// Check whether the dispatch source has been cancelled
long dispatch_source_testcancel(dispatch_source_t source);

// Set a block that is called when the dispatch source is cancelled,
// usually to close a file or socket, etc.
void dispatch_source_set_cancel_handler(dispatch_source_t source,
        dispatch_block_t cancel_handler);

// Set a block called when the dispatch source is first started (registered);
// it is released after it finishes. It can also be called at any time while the source is running.
void dispatch_source_set_registration_handler(dispatch_source_t source,
        dispatch_block_t registration_handler);

Usage scenarios

It is often used for captcha countdowns because dispatch_source does not rely on Runloop and instead interacts directly with the underlying kernel for greater accuracy.

- (void)use033 {
    // Countdown time
    __block int timeout = 3;
    // Get a global queue
    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
    // Create the timer dispatch source
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, globalQueue);
    /*
     Set the timer:
     - source: the dispatch source
     - start: when the timer first fires. The type is dispatch_time_t, an opaque type we
       cannot manipulate directly; use dispatch_time or dispatch_walltime to create it.
       The constants DISPATCH_TIME_NOW and DISPATCH_TIME_FOREVER are often useful.
     - interval: the firing interval
     - leeway: the allowed precision
     */
    dispatch_source_set_timer(timer, dispatch_walltime(NULL, 0), 1.0 * NSEC_PER_SEC, 0);
    // Set the event handler called whenever the timer fires
    dispatch_source_set_event_handler(timer, ^{
        if (timeout <= 0) {
            // Cancel the dispatch source
            dispatch_source_cancel(timer);
        } else {
            timeout--;
            // Back to the main queue to update the UI
            dispatch_async(dispatch_get_main_queue(), ^{
                NSLog(@"Countdown - %d", timeout);
            });
        }
    });
    // Start the dispatch source (it is created suspended)
    dispatch_resume(timer);
}