• Common data structures

    • dispatch_object_t

      The base type for all GCD classes. From the source you can see that `dispatch_object_t` is a union (and a transparent union at that), so a `dispatch_object_t` can be any of the types in the union, and its size is the size of the largest of those member types
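      That "size of the largest member" behavior is just how C unions work, and can be illustrated with a plain C sketch (the member types below are placeholders, not the real libdispatch ones):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical members standing in for the queue/group/semaphore
       * pointers that the real dispatch_object_t union carries. */
      typedef union {
          long as_long;        /* 8 bytes on LP64 */
          int  as_int;         /* 4 bytes */
          char as_bytes[16];   /* largest member: 16 bytes */
      } toy_object_t;

      int main(void) {
          /* A union is as large as its largest member (plus padding). */
          assert(sizeof(toy_object_t) == 16);
          printf("sizeof(toy_object_t) = %zu\n", sizeof(toy_object_t));
          return 0;
      }
      ```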

    • _os_object_s

      Similar to the `isa` pointer of an Objective-C class
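      In the open-source libdispatch headers, `_os_object_s` looks roughly like the struct below (simplified here and with a stubbed vtable type so it compiles stand-alone; treat it as a sketch rather than the exact definition):

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Stub for the vtable type so the sketch compiles on its own. */
      typedef struct _os_object_vtable_s { int _unused; } _os_object_vtable_s;

      /* Simplified from the libdispatch headers: like an Objective-C
       * object, the first word is a class/vtable pointer (the "isa"),
       * followed by internal and external reference counts. */
      struct _os_object_s {
          const _os_object_vtable_s *os_obj_isa;
          int volatile os_obj_ref_cnt;
          int volatile os_obj_xref_cnt;
      };

      int main(void) {
          /* The isa pointer sits at offset 0, just as in an ObjC object. */
          assert(offsetof(struct _os_object_s, os_obj_isa) == 0);
          return 0;
      }
      ```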

    • dispatch_object_s

      The most basic base class of GCD

    • dispatch_continuation_s

      The `dispatch_continuation_s` structure mainly encapsulates a block or function: the block passed to `dispatch_async` gets wrapped into this data type
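      Conceptually, a continuation just stores a function pointer plus its context so the queue can invoke it later. A minimal C sketch of that idea (the field names mimic libdispatch's `dc_func`/`dc_ctxt`, but this toy struct is not the real one):

      ```c
      #include <assert.h>

      typedef void (*dispatch_function_t)(void *ctxt);

      /* Toy continuation: captures "what to run" and "with what argument". */
      typedef struct {
          dispatch_function_t dc_func; /* the work item */
          void *dc_ctxt;               /* its context */
      } toy_continuation_s;

      static void add_one(void *ctxt) { (*(int *)ctxt)++; }

      int main(void) {
          int counter = 0;
          toy_continuation_s dc = { add_one, &counter };
          /* Later, the queue "drains" the continuation by invoking it. */
          dc.dc_func(dc.dc_ctxt);
          assert(counter == 1);
          return 0;
      }
      ```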

    • dispatch_group_s

    • dispatch_queue_s

  • Create queue source code analysis

    • The `dispatch_queue_create` source code

      First, set a symbolic breakpoint to find that the implementation lives in the `libdispatch.dylib` library; then download the libdispatch source and search for `dispatch_queue_create` to find it. (In fact, the stack trace also shows that it next calls the `_dispatch_lane_create_with_target` function to do the actual creation.)

    • The `_dispatch_lane_create_with_target` source code

      The general steps are as follows:

      1. Normalize the parameters
      2. Set the target queue

        Call the `_dispatch_get_root_queue` method to get the target queue.

        If it is a serial queue, the `_dispatch_get_root_queue(0, true)` function gets the target queue, which is `_dispatch_root_queues[DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY]`.

        If it is a concurrent queue, the `_dispatch_get_root_queue(0, false)` function gets the target queue, which is `_dispatch_root_queues[DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY]`.
      3. Allocate space and create the queue

        `_dispatch_object_alloc` allocates the space and creates the queue
      4. Initialize the queue

        `_dispatch_queue_init` initializes the queue, setting the maximum concurrency according to the attribute from step 1: a serial queue gets a width of 1 and a concurrent queue gets `DISPATCH_QUEUE_WIDTH_MAX`, from which you also learn that the maximum concurrency is 4094.

        It also sets the queue identifier and so on.
      5. Assign the target queue: the target queue obtained above is assigned to the newly created queue
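      The steps above can be condensed into a sketch: the attribute decides both the queue width and whether the root (target) queue is an overcommit one. This mirrors the logic described above in simplified form; the struct and helper names are invented for illustration:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      #define DISPATCH_QUEUE_WIDTH_MAX 0xffeull /* 4094 */

      typedef struct {
          unsigned long dq_width;         /* max concurrency of this queue */
          bool targets_overcommit_root;   /* which root queue it targets */
      } toy_queue_s;

      /* Simplified _dispatch_lane_create_with_target: serial queues get
       * width 1 and an overcommit root queue; concurrent queues get the
       * maximum width and a non-overcommit root queue. */
      static toy_queue_s toy_queue_create(bool concurrent) {
          toy_queue_s dq;
          dq.dq_width = concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1;
          dq.targets_overcommit_root = !concurrent;
          return dq;
      }

      int main(void) {
          toy_queue_s serial = toy_queue_create(false);
          toy_queue_s conc = toy_queue_create(true);
          assert(serial.dq_width == 1 && serial.targets_overcommit_root);
          assert(conc.dq_width == 4094 && !conc.targets_overcommit_root);
          return 0;
      }
      ```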

      The specific flow chart is as follows:

    • A diagram of the relationship between queues and threads

  • Function source code analysis

    • Asynchronous function analysis
      • `_dispatch_continuation_init`: the task wrapper function

        It mainly copies the task, assigning `func`, `ctxt`, and the other parameters via the `_dispatch_continuation_init_f` function

      • `_dispatch_continuation_async`: the asynchronous processing function

        A global search for `dx_push` shows that it is a macro: `#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)`, i.e. it calls the queue's `dq_push` method. Searching globally for `dq_push` turns up a whole family of implementations, one per queue type, so it is not hard to guess that an asynchronous call on a concurrent queue should take the `queue_concurrent` entry, which corresponds to the `_dispatch_lane_concurrent_push` method. This can be verified with a symbolic breakpoint: break on the asynchronous function first, continue, and execution indeed stops in the `_dispatch_lane_concurrent_push` stack frame, confirming the guess.

        Following on with a global search for `_dispatch_lane_concurrent_push`: for an asynchronous task on a concurrent queue the barrier state is usually empty, so unless a barrier function was added, execution normally proceeds to the `_dispatch_continuation_redirect_push` method. This too can be verified with a symbolic breakpoint: after `_dispatch_lane_concurrent_push` completes, `_dispatch_continuation_redirect_push` does run and the task is carried forward. Searching globally for `_dispatch_continuation_redirect_push`, you find it invokes `dx_push` again, but by this time the object's type has changed to a root queue, so the `dq_push` that runs is no longer `_dispatch_lane_concurrent_push` but `_dispatch_root_queue_push`, which can be verified with the same symbolic-breakpoint technique.

        Continuing with a global search for `_dispatch_root_queue_push`: setting symbolic breakpoints on `_dispatch_root_queue_push_override` and `_dispatch_root_queue_push_inline` shows that execution reaches `_dispatch_root_queue_push_override`, which in turn calls `_dispatch_root_queue_push_inline`, then `_dispatch_root_queue_poke`, and finally `_dispatch_root_queue_poke_slow`. At this point the thread-creation part of the asynchronous path is complete, but only the thread has been created; the task itself has not executed yet.

        Looking back up at the `_dispatch_root_queues_init` method: clicking in shows it is a `dispatch_once_f` call whose incoming `func` is `_dispatch_root_queues_init_once`. Following that into `_dispatch_worker_thread2` (thread startup is driven by the system at the bottom layer; libdispatch only needs to bind the thread, and the system triggers it) and then into the `_dispatch_root_queue_drain` source, you finally find the place where the task's method is invoked. You could also have set a breakpoint beforehand, printed the stack trace, and looked the frames up in the source; the stack trace matches the flow analyzed above.
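        The `dx_push`/`dq_push` indirection described above is just a function-pointer call through the queue class's vtable, which is why each queue type gets its own `dq_push`. A stripped-down C sketch of that mechanism (the vtable and queue types here are invented for illustration):

        ```c
        #include <assert.h>

        typedef struct toy_queue_s toy_queue_s;

        typedef struct {
            void (*dq_push)(toy_queue_s *dq, int item);
        } toy_vtable_s;

        struct toy_queue_s {
            const toy_vtable_s *do_vtable;
            int last_pushed;
        };

        /* Echoes: #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z) */
        #define dx_vtable(x) ((x)->do_vtable)
        #define dx_push(x, y) dx_vtable(x)->dq_push(x, y)

        static void concurrent_push(toy_queue_s *dq, int item) {
            dq->last_pushed = item; /* stand-in for _dispatch_lane_concurrent_push */
        }

        static const toy_vtable_s queue_concurrent_vtable = { concurrent_push };

        int main(void) {
            toy_queue_s dq = { &queue_concurrent_vtable, 0 };
            /* Which dq_push runs depends only on the queue's vtable. */
            dx_push(&dq, 42);
            assert(dq.last_pushed == 42);
            return 0;
        }
        ```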

        The overall flow chart is as follows:

    • Synchronization function analysis

      First look at the `dispatch_sync` source code, then continue into `_dispatch_sync_f`. There you find that the synchronization function is implemented by way of the barrier function (barrier function source analysis below)

  • Barrier function source code analysis

    • The synchronous barrier function must finish its own block before the blocks after it can execute, i.e. a synchronous barrier blocks the current thread
    • The asynchronous barrier function does not wait for its own block in the calling thread; the code after it runs immediately, i.e. an asynchronous barrier blocks the queue, not the thread.

    Note: the barrier function only works on a queue you created yourself. On a global concurrent queue it has no barrier effect, since blocking that queue would hold up other system tasks; and on a serial queue a barrier is pointless.

    The `dispatch_barrier_sync` source code is as follows, and the synchronous barrier function's flow is:

  • Deadlock source code analysis

    The barrier-function analysis above shows that deadlocks are raised in the `_dispatch_sync_f_slow` method. The current task is first appended to the queue, and then the `__DISPATCH_WAIT_FOR_QUEUE__` method executes. Looking at the `_dq_state_drain_locked_by` source: given the queue's state, it determines whether the queue is being waited on, then matches that state against the thread the current task depends on; if they match (the queue is being drained by the very thread that is now waiting on it), an exception is thrown
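    The core of `_dq_state_drain_locked_by` is a single XOR test: mask the flag bits out of the queue's drain-lock state and compare the owner bits with the waiting thread's lock value. A simplified C version of that check (the mask constant is illustrative):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define DLOCK_OWNER_MASK ((uint32_t)0xfffffffc) /* low bits carry flags */

    /* Simplified _dq_state_drain_locked_by: the queue is "locked by" this
     * thread iff the owner bits of the lock state equal the thread's lock
     * value. If the thread waiting in dispatch_sync is also the one
     * draining the queue, this returns true and a deadlock is reported. */
    static bool drain_locked_by(uint32_t lock_value, uint32_t tid) {
        return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
    }

    int main(void) {
        uint32_t tid = 0x1a2b3c40;
        assert(drain_locked_by(tid, tid));         /* same owner -> deadlock */
        assert(drain_locked_by(tid | 0x1, tid));   /* flag bits are ignored  */
        assert(!drain_locked_by(0x99999998, tid)); /* different owner -> ok  */
        return 0;
    }
    ```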

  • Singleton source code analysis

    A global search for `dispatch_once` leads to the source code. Along the way you find that `_dispatch_Block_invoke` is just a macro, `#define _dispatch_Block_invoke(bb) ((dispatch_function_t)((struct Block_layout *)bb)->invoke)`, i.e. it extracts the block's `invoke` method. The singleton's low-level implementation flowchart:
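    The heart of `dispatch_once_f` is an atomic compare-and-swap on the once token: the first caller wins and invokes the function, and everyone afterwards sees the "done" marker and returns immediately. A minimal C11 sketch of that pattern (single-threaded demo; the real implementation also parks concurrent waiters on the token until the winner finishes):

    ```c
    #include <assert.h>
    #include <stdatomic.h>

    typedef void (*dispatch_function_t)(void *);

    static atomic_long once_token = 0; /* 0 = not run, -1 = done */
    static int init_count = 0;

    static void init_once(void *ctxt) { (void)ctxt; init_count++; }

    /* Simplified dispatch_once_f: CAS the token from 0 to "locked",
     * run func exactly once, then publish -1 (done). Late callers see
     * a nonzero token and never call func again. */
    static void toy_once_f(atomic_long *token, void *ctxt, dispatch_function_t func) {
        long expected = 0;
        if (atomic_compare_exchange_strong(token, &expected, 1)) {
            func(ctxt);
            atomic_store(token, -1);
        }
        /* else: already running or done; real code would wait for -1 */
    }

    int main(void) {
        toy_once_f(&once_token, 0, init_once);
        toy_once_f(&once_token, 0, init_once);
        assert(init_count == 1); /* the function ran exactly once */
        return 0;
    }
    ```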

  • A semaphore

    The semaphore's core functions, used to control maximum concurrency (not used all that much in practice), are as follows:

    1. `dispatch_semaphore_create` creates a semaphore with a specified maximum concurrency
    2. `dispatch_semaphore_signal` sends a signal
    3. `dispatch_semaphore_wait` waits for a signal

    A code example is as follows:

    ```objectivec
    - (void)viewDidLoad {
        [super viewDidLoad];
        dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
        dispatch_semaphore_t sem = dispatch_semaphore_create(2);

        dispatch_async(queue, ^{
            dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
            NSLog(@"Perform task 1");
            sleep(1);
            NSLog(@"Task 1 completed");
            dispatch_semaphore_signal(sem);
        });
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
            NSLog(@"Perform task 2");
            sleep(1);
            NSLog(@"Task 2 completed");
            dispatch_semaphore_signal(sem);
        });
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
            NSLog(@"Perform task 3");
            sleep(1);
            NSLog(@"Task 3 completed");
            dispatch_semaphore_signal(sem);
        });
    }
    ```

    The printed result shows that, since the maximum concurrency is capped at 2, two tasks can execute at the same time: task 1 and task 2 run concurrently, and task 3 executes only after the first two complete

    Source code analysis:

    1. dispatch_semaphore_create

      Does the main initialization work: sets the maximum concurrency (which must be greater than 0) and packages it into the `dispatch_semaphore_t` structure

    2. dispatch_semaphore_wait

      Mainly decrements the concurrency counter. If the resulting value is below 0, the maximum concurrency has been reached, so the task enters an indefinite wait (blocking the thread). In the source, `os_atomic_dec2o` is a macro with further macros nested inside, so expanding them one by one gives the following chain:

      ```
      os_atomic_dec2o(dsema, dsema_value, acquire)
      → os_atomic_sub2o(dsema, dsema_value, 1, acquire)
      → os_atomic_sub(&(dsema)->dsema_value, (1), acquire)
      → _os_atomic_c11_op(&(dsema)->dsema_value, (1), acquire, sub, -)
      → atomic_fetch_sub_explicit(&(dsema)->dsema_value, 1, memory_order_acquire)
      // i.e. equivalent to (dsema)->dsema_value -= 1
      ```
    3. dispatch_semaphore_signal

      The opposite of `dispatch_semaphore_wait`: the concurrency counter is incremented
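    Stripped of the macro layers, wait and signal are just an atomic decrement and increment on `dsema_value`, with the sign of the result telling libdispatch whether anyone has to block or be woken. A C11 sketch of that bookkeeping (no actual blocking here; the toy names are invented):

    ```c
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct { atomic_long dsema_value; } toy_semaphore_s;

    /* Simplified dispatch_semaphore_wait: decrement first; a result
     * below 0 means the caller exceeded the allowed concurrency and
     * must block. */
    static bool toy_wait_would_block(toy_semaphore_s *dsema) {
        long old = atomic_fetch_sub_explicit(&dsema->dsema_value, 1,
                                             memory_order_acquire);
        return (old - 1) < 0;
    }

    /* Simplified dispatch_semaphore_signal: increment; an old value
     * below 0 means some waiter is blocked and must be woken. */
    static bool toy_signal_wakes_waiter(toy_semaphore_s *dsema) {
        long old = atomic_fetch_add_explicit(&dsema->dsema_value, 1,
                                             memory_order_release);
        return old < 0;
    }

    int main(void) {
        toy_semaphore_s sem = { 2 };           /* max concurrency 2 */
        assert(!toy_wait_would_block(&sem));   /* 2 -> 1 */
        assert(!toy_wait_would_block(&sem));   /* 1 -> 0 */
        assert(toy_wait_would_block(&sem));    /* 0 -> -1: must block */
        assert(toy_signal_wakes_waiter(&sem)); /* -1 -> 0: wake one  */
        return 0;
    }
    ```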

  • Scheduling group

    The main function of a dispatch group is to monitor the execution status of a set of tasks. For example, if the UI must refresh only after two requests have both completed, a dispatch group handles the case. Core functions:

    • `dispatch_group_create` creates a group
    • `dispatch_group_async` submits a task to the group
    • `dispatch_group_notify` delivers a notification when the group's tasks complete
    • `dispatch_group_wait` suspends (blocks) the current thread until the tasks in the specified group complete, then continues
    • `dispatch_group_enter` indicates a task joining the group; each call effectively adds 1 to the group's count of unfinished tasks
    • `dispatch_group_leave` indicates a task leaving the group; each call effectively subtracts 1 from the count of unfinished tasks

    Note: `dispatch_group_enter` and `dispatch_group_leave` must come in pairs
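    The enter/leave pairing can be pictured as a counter: enter adds an outstanding task, leave removes one, and notify fires only when the count returns to zero. A toy C model of just that bookkeeping (the real `dispatch_group_s` packs this into one atomic 64-bit state word):

    ```c
    #include <assert.h>
    #include <stdbool.h>

    typedef struct {
        long outstanding;   /* tasks entered but not yet left */
        bool notified;      /* has the pending notify fired?  */
    } toy_group_s;

    static void toy_enter(toy_group_s *g) { g->outstanding++; }

    /* leave fires the pending notify when the last task checks out;
     * one leave too many (count below zero) is the crash case. */
    static void toy_leave(toy_group_s *g) {
        assert(g->outstanding > 0 && "unbalanced dispatch_group_leave");
        if (--g->outstanding == 0) g->notified = true;
    }

    int main(void) {
        toy_group_s g = { 0, false };
        toy_enter(&g);
        toy_enter(&g);
        toy_leave(&g);
        assert(!g.notified); /* one task still outstanding */
        toy_leave(&g);
        assert(g.notified);  /* count hit zero -> notify fires */
        return 0;
    }
    ```

    The extra-enter case from the examples below follows directly: with one unmatched enter, the count never reaches zero and `notified` never becomes true.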

    Example:

    1. Normal Case Example
        

      Printing situation:

    2. Using `dispatch_group_wait`
      ```objectivec
      dispatch_group_t group = dispatch_group_create();
      dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

      dispatch_group_enter(group);
      dispatch_async(queue, ^{
          sleep(1);
          NSLog(@"Task 1 completed");
          dispatch_group_leave(group);
      });

      dispatch_group_enter(group);
      dispatch_async(queue, ^{
          NSLog(@"Task 2 completed");
          dispatch_group_leave(group);
      });

      dispatch_async(dispatch_get_global_queue(0, 0), ^{
          sleep(3);
          NSLog(@"Task 3 completed");
      });

      dispatch_group_notify(group, dispatch_get_main_queue(), ^{
          NSLog(@"Tasks 1 and 2 completed");
      });

      NSLog(@"Task 6 completed (main thread)");
      ```

      Print result:

    3. Special Case 1 (put notification first)
      ```objectivec
      dispatch_group_t group = dispatch_group_create();
      dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

      dispatch_group_notify(group, dispatch_get_main_queue(), ^{
          NSLog(@"Tasks 1 and 2 completed");
      });

      dispatch_group_enter(group);
      dispatch_async(queue, ^{
          sleep(1);
          NSLog(@"Task 1 completed");
          dispatch_group_leave(group);
      });

      dispatch_group_enter(group);
      dispatch_async(queue, ^{
          NSLog(@"Task 2 completed");
          dispatch_group_leave(group);
      });

      dispatch_async(dispatch_get_global_queue(0, 0), ^{
          sleep(3);
          NSLog(@"Task 3 completed");
      });

      NSLog(@"Task 6 completed (main thread)");
      ```

      Print result: note that even with the notify registered first, the notification still fires as soon as the matching `dispatch_group_leave` calls have executed

    4. Special case 2 (one extra `dispatch_group_enter`)
      ```objectivec
      dispatch_group_t group = dispatch_group_create();
      dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

      dispatch_group_enter(group);
      dispatch_async(queue, ^{
          sleep(1);
          NSLog(@"Task 1 completed");
          dispatch_group_leave(group);
      });

      dispatch_group_enter(group);
      dispatch_async(queue, ^{
          NSLog(@"Task 2 completed");
          dispatch_group_leave(group);
      });

      dispatch_group_enter(group); // extra enter with no matching leave
      dispatch_async(dispatch_get_global_queue(0, 0), ^{
          sleep(3);
          NSLog(@"Task 3 completed");
      });

      dispatch_group_notify(group, dispatch_get_main_queue(), ^{
          NSLog(@"Tasks 1 and 2 completed");
      });

      NSLog(@"Task 6 completed (main thread)");
      ```

      Output: `dispatch_group_notify` is found never to fire, because the extra `dispatch_group_enter` keeps the group's count from ever returning to zero

    5. Special case 3 (one extra `dispatch_group_leave`): it crashes, which is why `dispatch_group_leave` and `dispatch_group_enter` must always come in pairs

    Source code exploration:

    • dispatch_group_create

    • dispatch_group_enter

    • dispatch_group_leave

    • dispatch_group_async

      It saves the developer the two calls to `dispatch_group_enter` and `dispatch_group_leave`: the underlying code is found to call `dispatch_group_enter` before the task is created and `dispatch_group_leave` after the task completes

    • dispatch_group_notify

  • Dispatch Source

    Calling its `dispatch_source_merge_data` function on any thread executes the Dispatch Source's predefined handle (a handle can be understood simply as a block). Such a custom event is one of the event types a dispatch source supports handling

    A handle is a pointer that points to a pointer, which in turn points to a class or structure closely tied to the system: HINSTANCE, HBITMAP, HDC, HICON, and so on. There is also a generic handle type, HANDLE

    Core approach:

    1. `dispatch_source_create` creates the source
    2. `dispatch_source_set_event_handler` sets the source's event callback
    3. `dispatch_source_merge_data` merges data into the source event
    4. `dispatch_source_get_data` gets the source event's data
    5. `dispatch_resume` resumes the source
    6. `dispatch_suspend` suspends the source
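    For a `DISPATCH_SOURCE_TYPE_DATA_ADD` source, repeated `dispatch_source_merge_data` calls coalesce by adding into a pending value, which the handler then drains with `dispatch_source_get_data`. A C11 sketch of that add-and-drain behavior (the names are invented and no actual event delivery happens):

    ```c
    #include <assert.h>
    #include <stdatomic.h>

    typedef struct { atomic_ulong pending; } toy_source_s;

    /* Like merge_data on a DATA_ADD source: values accumulate until
     * the handler gets a chance to run. */
    static void toy_merge_data(toy_source_s *ds, unsigned long value) {
        atomic_fetch_add(&ds->pending, value);
    }

    /* Like dispatch_source_get_data inside the handler: read the
     * coalesced total and reset it for the next batch. */
    static unsigned long toy_get_data(toy_source_s *ds) {
        return atomic_exchange(&ds->pending, 0);
    }

    int main(void) {
        toy_source_s src = { 0 };
        toy_merge_data(&src, 1);
        toy_merge_data(&src, 1);
        toy_merge_data(&src, 1);
        assert(toy_get_data(&src) == 3); /* three merges, one handler run */
        assert(toy_get_data(&src) == 0); /* drained */
        return 0;
    }
    ```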

    Example:

    ```objectivec
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.totalComplete = 0;
        self.queue = dispatch_queue_create("td.com", NULL);
        /*
         DISPATCH_SOURCE_TYPE_DATA_ADD       custom event, variable Add
         DISPATCH_SOURCE_TYPE_DATA_OR        custom event, variable OR
         DISPATCH_SOURCE_TYPE_DATA_REPLACE   custom event, variable Replace
         DISPATCH_SOURCE_TYPE_MACH_SEND      Mach port send
         DISPATCH_SOURCE_TYPE_MACH_RECV      Mach port receive
         DISPATCH_SOURCE_TYPE_MEMORYPRESSURE memory-pressure warning
         DISPATCH_SOURCE_TYPE_PROC           process events, e.g. a process exiting,
                                             creating child threads, or receiving a UNIX signal
         DISPATCH_SOURCE_TYPE_READ           IO read, e.g. file and socket operations
         DISPATCH_SOURCE_TYPE_SIGNAL         UNIX signal received
         DISPATCH_SOURCE_TYPE_TIMER          timer
         DISPATCH_SOURCE_TYPE_VNODE          file events: deleted, moved, renamed
         DISPATCH_SOURCE_TYPE_WRITE          IO write
         */
        self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue());
        dispatch_source_set_event_handler(self.source, ^{
            NSLog(@"%@", [NSThread currentThread]);
            NSUInteger value = dispatch_source_get_data(self.source);
            self.totalComplete += value;
            NSLog(@"progress: %.2f", self.totalComplete / 100.0);
            self.progressView.progress = self.totalComplete / 100.0;
        });
        self.isRunning = YES;
        dispatch_resume(self.source);
    }

    - (IBAction)didClickStartOrPauseAction:(id)sender {
        if (self.isRunning) {
            dispatch_suspend(self.source);
            dispatch_suspend(self.queue);
            NSLog(@"Paused");
            self.isRunning = NO;
            [sender setTitle:@"Resume" forState:UIControlStateNormal];
        } else {
            dispatch_resume(self.source);
            dispatch_resume(self.queue);
            NSLog(@"Resumed");
            self.isRunning = YES;
            [sender setTitle:@"Pause" forState:UIControlStateNormal];
        }
    }

    - (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
        NSLog(@"Touch");
        for (int i = 0; i < 100; i++) {
            dispatch_async(self.queue, ^{
                if (!self.isRunning) {
                    NSLog(@"Paused");
                    return;
                }
                sleep(1);
                dispatch_source_merge_data(self.source, 1);
            });
        }
    }
    ```