Preface

The previous article covered the basic analysis of synchronous and asynchronous execution. Today we will look at the practical use and underlying principles of GCD, mainly: the barrier function, semaphores, dispatch groups, and Dispatch_source.

Barrier function

  • The most direct effect of the barrier function is to control the order in which tasks execute, producing a synchronization effect.
  • There are two barrier functions:
    • dispatch_barrier_async
    • dispatch_barrier_sync

The following cases are used to analyze their functions:

dispatch_barrier_async

Let’s take a look at an asynchronous barrier example:

- (void)testAsync_barrier {
    dispatch_queue_t concurrent = dispatch_queue_create("wushuang.concurrent", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"Here we go!");
    dispatch_async(concurrent, ^{
        sleep(1);
        NSLog(@"1");
    });
    dispatch_async(concurrent, ^{
        sleep(1);
        NSLog(@"2");
    });
    dispatch_barrier_async(concurrent, ^{
        NSLog(@"——————— barrier ———————");
    });
    dispatch_async(concurrent, ^{
        NSLog(@"3");
    });
    dispatch_async(concurrent, ^{
        NSLog(@"4");
    });
    NSLog(@"~~ come here ~~");
}
  • To make the effect easier to observe, a sleep was added to the asynchronous tasks in front of the barrier. The print result is as follows:



  • As you can see from the output, the asynchronous barrier blocks the tasks in the same queue but does not block the current thread

dispatch_barrier_sync

Now take a look at the synchronous barrier:

- (void)testSync_barrier {
    dispatch_queue_t concurrent = dispatch_queue_create("wushuang.concurrent", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"Here we go!");
    dispatch_async(concurrent, ^{
        sleep(1);
        NSLog(@"1");
    });
    dispatch_async(concurrent, ^{
        sleep(1);
        NSLog(@"2");
    });
    dispatch_barrier_sync(concurrent, ^{
        NSLog(@"——————— sync barrier ———————");
    });
    dispatch_async(concurrent, ^{
        NSLog(@"3");
    });
    dispatch_async(concurrent, ^{
        NSLog(@"4");
    });
    NSLog(@"~~ come here ~~");
}

First look at the print result:



  • As the results show, the synchronous barrier also blocks tasks in the same queue, and in addition it blocks the current thread

Global queue barrier

Above, the barrier function was used with a concurrent queue we created ourselves. What about the global queue?

- (void)testGlobalBarrier {
    dispatch_queue_t global = dispatch_get_global_queue(0, 0);
    NSLog(@"Here we go!");
    dispatch_async(global, ^{
        sleep(1);
        NSLog(@"1");
    });
    dispatch_async(global, ^{
        sleep(1);
        NSLog(@"2");
    });
    dispatch_barrier_async(global, ^{
        NSLog(@"——————— barrier ———————");
    });
    dispatch_async(global, ^{
        NSLog(@"3");
    });

    dispatch_async(global, ^{
        NSLog(@"4");
    });
    NSLog(@"~~ come here ~~");
}
  • The print result is as follows:



  • Nothing is blocked at all, which indicates that the global queue is a special case.

Questions: 1. Why can the barrier function control task order? 2. Why doesn't the barrier function work on a global queue?

  • With these two questions in mind, let’s explore libdispatch-1271.120.2

The underlying source code

  • Taking dispatch_barrier_sync as the entry point for analysis, searching and following the flow eventually leads into the _dispatch_barrier_sync_f_inline method



  • Symbolic breakpoint debugging shows that it eventually goes to the _dispatch_sync_f_slow method, at the same time marking it with the DC_FLAG_BARRIER flag; follow it further



  • The next symbolic breakpoint confirms that it goes to the _dispatch_sync_invoke_and_complete_recurse method, with DC_FLAG_BARRIER passed in as a parameter



  • Entering _dispatch_sync_function_invoke_inline we find _dispatch_client_callout, which is the call of the barrier block itself. But the core of this investigation is why the barrier can control tasks, and symbolic breakpoints show that after the barrier block executes it goes to the _dispatch_sync_complete_recurse method



  • Then follow the _dispatch_sync_complete_recurse method:
static void
_dispatch_sync_complete_recurse(dispatch_queue_t dq, dispatch_queue_t stop_dq,
	      uintptr_t dc_flags)
{
     bool barrier = (dc_flags & DC_FLAG_BARRIER);
     do {
	 if (dq == stop_dq) return;
	 if (barrier) {
	     dx_wakeup(dq, 0, DISPATCH_WAKEUP_BARRIER_COMPLETE);
	 } else {
	     _dispatch_lane_non_barrier_complete(upcast(dq)._dl, 0);
	 }
	 dq = dq->do_targetq;
	 barrier = (dq->dq_width == 1);
     } while (unlikely(dq->do_targetq));
}
  • This is a do-while loop. First, the dc_flags passed in contains DC_FLAG_BARRIER, so (dc_flags & DC_FLAG_BARRIER) is non-zero, and the loop goes to dx_wakeup to wake up the tasks in the queue; once that is done the barrier is over, and it goes to the _dispatch_lane_non_barrier_complete function, which continues the flow after the barrier.
  • Now look at the dx_wakeup function; at this point the flag passed in is DISPATCH_WAKEUP_BARRIER_COMPLETE



_dispatch_lane_wakeup

  • A concurrent queue we created goes to _dispatch_lane_wakeup (the global queue goes to _dispatch_root_queue_wakeup, covered below); based on the flag it goes to the _dispatch_lane_barrier_complete method:



  • When the queue is serial or a barrier is pending, _dispatch_lane_drain_barrier_waiter is called to hold back the tasks until it has verified that the tasks ahead of it in the current queue are complete; only then does it continue executing the subsequent tasks

_dispatch_root_queue_wakeup

void
_dispatch_root_queue_wakeup(dispatch_queue_global_t dq,
	 DISPATCH_UNUSED dispatch_qos_t qos, dispatch_wakeup_flags_t flags)
{
     if (!(flags & DISPATCH_WAKEUP_BLOCK_WAIT)) {
	 DISPATCH_INTERNAL_CRASH(dq->dq_priority,
		  "Don't try to wake up or override a root queue");
     }
     if (flags & DISPATCH_WAKEUP_CONSUME_2) {
	 return _dispatch_release_2_tailcall(dq);
     }
}
  • In the global concurrent queue's wakeup, nothing barrier-related is done at all, so here the barrier is equivalent to an ordinary asynchronous task and does not take effect. Why? The global concurrent queue is created by the system, and blocking it could cause problems for system tasks, so the barrier function cannot be used with the global concurrent queue; it should only be used with concurrent queues you create yourself

Conclusion

    1. The barrier function can block the tasks in a queue and thereby control the order of tasks, but it only works on concurrent queues that you create yourself
    2. The barrier function can also be used for multi-read single-write scenarios (see the sketch after this list)
    3. The barrier only takes effect for tasks submitted to the same queue; spread the tasks across multiple queues and it will not have the desired effect
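Point 2 above is the classic multi-read single-write pattern: reads go through dispatch_sync and run concurrently, while writes go through dispatch_barrier_async so they run alone. The following is only a minimal sketch of that idea; the SafeDictionary class and queue name are made up for illustration and are not from the cases above:

@interface SafeDictionary : NSObject
- (id)objectForKey:(NSString *)key;
- (void)setObject:(id)object forKey:(NSString *)key;
@end

@implementation SafeDictionary {
    NSMutableDictionary *_storage;
    dispatch_queue_t _isolationQueue;   // custom concurrent queue, NOT a global queue
}

- (instancetype)init {
    if (self = [super init]) {
        _storage = [NSMutableDictionary dictionary];
        _isolationQueue = dispatch_queue_create("wushuang.readwrite", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

// Reads run concurrently with each other; dispatch_sync hands the value back in place.
- (id)objectForKey:(NSString *)key {
    __block id result = nil;
    dispatch_sync(_isolationQueue, ^{
        result = self->_storage[key];
    });
    return result;
}

// The barrier waits for in-flight reads, runs the write alone, then lets later reads continue.
- (void)setObject:(id)object forKey:(NSString *)key {
    dispatch_barrier_async(_isolationQueue, ^{
        self->_storage[key] = object;
    });
}
@end

Because the isolation queue is a concurrent queue we created ourselves rather than a global queue, the barrier takes effect exactly as described in point 1.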

Semaphore

  • On page 26 of the book The Self-Cultivation of Programmers, a binary semaphore is described as having only the two states 0 and 1, while a semaphore that can take more values is simply called a semaphore
  • The semaphore in GCD is dispatch_semaphore_t, and there are three main functions:
    • dispatch_semaphore_create: create the semaphore
    • dispatch_semaphore_wait: wait for the semaphore
    • dispatch_semaphore_signal: release (signal) the semaphore

Case analysis

- (void)testDispatchSemaphore {
    dispatch_queue_t global = dispatch_get_global_queue(0, 0);
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
    
    NSLog(@"~~ start ~~");
    dispatch_async(global, ^{
        NSLog(@"~~ 0 ~~");
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        NSLog(@"~~ 1 ~~");
    });
    
    dispatch_async(global, ^{
        NSLog(@"~~ 2 ~~");
        sleep(2);
        dispatch_semaphore_signal(semaphore);
        NSLog(@"~~ 3 ~~");
    });
    NSLog(@"~~ come here ~~");
}
  • The running results are as follows:



  • As the print result shows, the first asynchronous task executed and printed 0, but there was no signal yet, so dispatch_semaphore_wait just waits in place and 1 cannot be printed. The second asynchronous task executes and prints 2, and once dispatch_semaphore_signal sends the signal, 1 can finally be printed
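Besides controlling order as above, a semaphore created with a value greater than 0 is commonly used to cap how many tasks run at the same time. A minimal sketch; the queue name and the limit of 2 are made up for illustration:

- (void)testSemaphoreThrottle {
    // At most 2 of the 6 tasks run at the same time (illustrative numbers).
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(2);
    dispatch_queue_t queue = dispatch_queue_create("wushuang.throttle", DISPATCH_QUEUE_CONCURRENT);

    for (int i = 0; i < 6; i++) {
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER); // -1; blocks once the value would drop below 0
            NSLog(@"task %d start", i);
            sleep(1);
            NSLog(@"task %d end", i);
            dispatch_semaphore_signal(semaphore); // +1; lets the next waiting task through
        });
    }
}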

The source code interpretation

Let's look at the source code analysis.

dispatch_semaphore_create



  • First, if the passed-in value is less than 0, it returns DISPATCH_BAD_INPUT, i.e. a _Nonnull placeholder
  • If the value is greater than or equal to 0, it creates the dispatch_semaphore_t object dsema, performs some assignments on it, and returns the dsema object

dispatch_semaphore_wait

intptr_t
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
    long value = os_atomic_dec2o(dsema, dsema_value, acquire);
    if (likely(value >= 0)) {
	return 0;
    }
    return _dispatch_semaphore_wait_slow(dsema, timeout);
}
  • Waiting on the signal mainly goes through the os_atomic_dec2o function, which decrements the value; when the resulting value is greater than or equal to 0 it returns 0
  • When the value is less than 0 it goes to the _dispatch_semaphore_wait_slow method
_dispatch_semaphore_wait_slow



  • When timeout is set, operations are performed according to the type. In this article DISPATCH_TIME_FOREVER is used. In this case, _dispatch_sema4_wait is called:



    • The core here is a do-while loop in which it keeps waiting on the underlying semaphore; the loop is only exited once the wait is satisfied, which is what produces the waiting effect

dispatch_semaphore_signal

intptr_t
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
    long value = os_atomic_inc2o(dsema, dsema_value, release);
    if (likely(value > 0)) {
	return 0;
    }
    if (unlikely(value == LONG_MIN)) {
	DISPATCH_CLIENT_CRASH(value,
		"Unbalanced call to dispatch_semaphore_signal()");
    }
    return _dispatch_semaphore_signal_slow(dsema);
}
  • Sending the signal mainly uses os_atomic_inc2o to increment the value; if the incremented value is greater than 0 it returns 0
  • If it is still not greater than 0, it goes to the _dispatch_semaphore_signal_slow method
_dispatch_semaphore_signal_slow
intptr_t
_dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema)
{
     _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
     _dispatch_sema4_signal(&dsema->dsema_sema, 1);
     return 1;
}
  • Here _dispatch_sema4_signal is called to deliver the signal on the slow path and wake a waiter



Dispatch group

  • The most direct function of the dispatch group is to control the execution order of tasks. The API mainly consists of the following methods:
    • dispatch_group_create: create a group
    • dispatch_group_async: add a task to the group
    • dispatch_group_notify: notification that the group's tasks have completed
    • dispatch_group_enter: enter the group
    • dispatch_group_leave: leave the group
    • dispatch_group_wait: wait for the group's tasks, with a timeout

Case analysis

dispatch_group_async

- (void)testDispatchGroup1 {
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
    dispatch_group_async(group, globalQueue, ^{
        sleep(1);
        NSLog(@"1");
    });

    NSLog(@"2");
    dispatch_group_async(group, globalQueue, ^{
        sleep(1);
        NSLog(@"3");
    });

    NSLog(@"4");
    dispatch_group_notify(group, globalQueue, ^{
        NSLog(@"5");
    });

    NSLog(@"~~ 6 ~~");
}
  • Tasks 1 and 3 are put into the group, and task 5 is put into dispatch_group_notify; the output is as follows:



  • From the output we can conclude:
      1. The dispatch group does not block the thread
      2. The group's tasks have no fixed execution order; they behave like tasks on an asynchronous concurrent queue
      3. The dispatch_group_notify task executes only after the group's tasks are complete (see the sketch after this list)
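Point 3 is what makes the group useful in practice: the notify block can be delivered to the main queue so the UI is updated only after all background tasks finish. A minimal sketch; the log messages are only placeholders for real work:

- (void)testGroupNotifyOnMain {
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);

    dispatch_group_async(group, globalQueue, ^{
        sleep(1);
        NSLog(@"download part 1");   // placeholder for real background work
    });
    dispatch_group_async(group, globalQueue, ^{
        sleep(1);
        NSLog(@"download part 2");
    });

    // Runs only after both group tasks have finished, and runs on the main thread.
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"all parts done, refresh the UI here");
    });
}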

dispatch_group_wait

    1. Put tasks 1 and 3 into the group, and below them execute dispatch_group_wait with a wait of 10 seconds:
- (void)testDispatchGroup1 {
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
    dispatch_group_async(group, globalQueue, ^{
        sleep(5);
        NSLog(@"1");
    });

    NSLog(@"2");
    dispatch_group_async(group, globalQueue, ^{
        sleep(5);
        NSLog(@"3");
    });

    dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * 10);
    dispatch_group_wait(group, time);

    NSLog(@"~~ 6 ~~");
}
  • The following output is displayed:



The wait was set to 10 seconds, but the group's tasks finished after 5 seconds, and task 6 executed immediately at that point

    2. Change the waiting time to 3 seconds:



Here it waits 3 seconds, finds that the group's tasks are not yet finished, and executes task 6 anyway

  • Conclusion: the role of dispatch_group_wait is to block the tasks that come after it, outside the dispatch group; its return value distinguishes the two cases (see the sketch after this list):
      1. If the waiting time runs out before the group's tasks are complete, the blocking ends and the subsequent tasks execute anyway
      2. If the group's tasks complete before the waiting time is up, the blocking ends early and the subsequent tasks execute
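The two cases can also be told apart from the return value: dispatch_group_wait returns 0 when the group emptied before the timeout and a non-zero value when the timeout elapsed first. A minimal sketch reusing the group and timeout from the case above:

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * 10);
intptr_t result = dispatch_group_wait(group, time);   // reuses the group from the case above
if (result == 0) {
    NSLog(@"group finished before the timeout");
} else {
    NSLog(@"timed out; some group tasks are still running");
}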

Enter group + leave group

- (void)testDispatchGroup2 {
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);

    dispatch_group_enter(group);
    dispatch_async(globalQueue, ^{
        sleep(1);
        NSLog(@"1");
        dispatch_group_leave(group);
    });

    NSLog(@"2");

    dispatch_group_enter(group);
    dispatch_async(globalQueue, ^{
        sleep(1);
        NSLog(@"3");
        dispatch_group_leave(group);
    });

    dispatch_group_notify(group, globalQueue, ^{
        NSLog(@"4");
    });

    NSLog(@"5");
}
  • The enter + leave case is basically the same as the one above; the only difference is that putting a task into the group is split into enter + leave. The result is as follows:



  • The output is consistent with that of dispatch_group_async
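Where enter + leave really pays off is when the work inside the block is itself asynchronous: dispatch_group_async only covers the block body, while enter + leave lets you hold the group open until a completion callback fires. A minimal sketch; fetchDataWithCompletion: is a made-up helper simulated with dispatch_after, not an API from this article:

// Made-up async helper, simulated with dispatch_after for illustration only.
- (void)fetchDataWithCompletion:(void (^)(void))completion {
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC)),
                   dispatch_get_global_queue(0, 0), ^{
        completion();
    });
}

- (void)testGroupWithAsyncAPI {
    dispatch_group_t group = dispatch_group_create();

    for (int i = 0; i < 3; i++) {
        dispatch_group_enter(group);              // hold the group open for this request
        [self fetchDataWithCompletion:^{
            NSLog(@"request %d finished", i);
            dispatch_group_leave(group);          // balance the enter inside the callback
        }];
    }

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"all requests finished");
    });
}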

  • The enter + leave combination has a few caveats: the calls must come in pairs, and enter must come before leave, otherwise the following problems occur:

      1. If a dispatch_group_leave is missing, dispatch_group_notify will not execute



      2. If a dispatch_group_enter is missing, the notify block executes after the remaining tasks finish, but the unmatched dispatch_group_leave will crash



      3. If leave is called before enter, it crashes directly at the leave call



Questions: 1. How does the dispatch group achieve this control flow? 2. Why does dispatch_group_async have the same effect as enter + leave? 3. Why does dispatch_group_leave crash?

With these questions in mind, let's explore the source code.

Principle analysis

dispatch_group_create

dispatch_group_t
dispatch_group_create(void)
{
    return _dispatch_group_create_with_count(0);
}

The dispatch_group_create method calls the _dispatch_group_create_with_count method, passing in the parameter 0

static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
     dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
	      sizeof(struct dispatch_group_s));
     dg->do_next = DISPATCH_OBJECT_LISTLESS;
     dg->do_targetq = _dispatch_get_default_queue(false);
     if (n) {
	 os_atomic_store2o(dg, dg_bits,
		  (uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
	 os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
     }
     return dg;
}
  • The core of the method is creating the dispatch_group_t object dg, assigning its do_next and do_targetq members, and then (when n is non-zero) calling os_atomic_store2o to store the initial state

dispatch_group_enter

void
dispatch_group_enter(dispatch_group_t dg)
{
    // The value is decremented on a 32bits wide atomic so that the carry
    // for the 0 -> -1 transition is not propagated to the upper 32bits.
    uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
	     DISPATCH_GROUP_VALUE_INTERVAL, acquire);
    uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
    if (unlikely(old_value == 0)) {
	_dispatch_retain(dg); // <rdar://problem/22318411>
    }
    if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
	DISPATCH_CLIENT_CRASH(old_bits,
		"Too many nested calls to dispatch_group_enter()");
    }
}

os_atomic_sub_orig2o is used to decrement the state (the 0 -> -1 transition); unlike a semaphore, there is no waiting here

dispatch_group_leave

void
dispatch_group_leave(dispatch_group_t dg)
{
     // The value is incremented on a 64bits wide atomic so that the carry for
     // the -1 -> 0 transition increments the generation atomically.
     uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
	      DISPATCH_GROUP_VALUE_INTERVAL, release);
     uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

     if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
	 old_state += DISPATCH_GROUP_VALUE_INTERVAL;
	 do {
	     new_state = old_state;
	     if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
		 new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
		 new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
	     } else {
		 // If the group was entered again since the atomic_add above,
		 // we can't clear the waiters bit anymore as we don't know for
		 // which generation the waiters are for
		 new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
	     }   
	     if (old_state == new_state) break;
         } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
		  old_state, new_state, &old_state, relaxed)));
	 return _dispatch_group_wake(dg, old_state, true);
     }

     if (unlikely(old_value == 0)) {
	 DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
		  "Unbalanced call to dispatch_group_leave()");
     }
}

os_atomic_add_orig2o is used to increment old_state (the -1 -> 0 transition)

DISPATCH_GROUP_VALUE_MASK       0x00000000fffffffcULL
DISPATCH_GROUP_VALUE_1          DISPATCH_GROUP_VALUE_MASK
    1. old_value equals old_state & DISPATCH_GROUP_VALUE_MASK; after a single enter this equals DISPATCH_GROUP_VALUE_1, so even though the check is marked unlikely, execution enters the if branch
    2. Since DISPATCH_GROUP_VALUE_INTERVAL = 4, old_state is 4 after the increment. Entering the do-while loop it takes the else branch, new_state &= ~DISPATCH_GROUP_HAS_NOTIFS = 4 & ~2 = 4, so old_state and new_state are equal, the loop exits, and the _dispatch_group_wake method is called, i.e. dispatch_group_notify is woken up
    3. If enter was called twice, old_state = -1; after adding 4 it is 3, so in the do-while loop new_state = old_state = 3, then new_state &= ~DISPATCH_GROUP_HAS_NOTIFS = 3 & ~2 = 1, which no longer equals old_state, so it loops again; only after the second leave performs its increment does the new check go to the _dispatch_group_wake function to wake things up (a small sketch of this bit arithmetic follows this list)
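To make the bit arithmetic above more concrete, here is a small illustrative replay of the value bits using the macro values quoted in this article (DISPATCH_GROUP_VALUE_MASK, DISPATCH_GROUP_VALUE_1 and an interval of 4); it is only a sketch of the encoding, not the real libdispatch state handling:

- (void)testGroupStateBits {
    // Values copied from the libdispatch excerpts quoted above; this is only an illustrative replay.
    const uint64_t valueMask     = 0x00000000fffffffcULL; // DISPATCH_GROUP_VALUE_MASK
    const uint64_t valueOne      = valueMask;             // DISPATCH_GROUP_VALUE_1
    const uint64_t valueInterval = 0x4ULL;                // DISPATCH_GROUP_VALUE_INTERVAL

    uint64_t state = 0;                // freshly created group, value bits are 0

    state -= valueInterval;            // what dispatch_group_enter does to the value bits
    NSLog(@"after enter: value bits = 0x%llx, equals DISPATCH_GROUP_VALUE_1: %d",
          (unsigned long long)(state & valueMask), (state & valueMask) == valueOne);

    state += valueInterval;            // what dispatch_group_leave does
    NSLog(@"after leave: value bits = 0x%llx, group empty again: %d",
          (unsigned long long)(state & valueMask), (state & valueMask) == 0);
}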

dispatch_group_notify

static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
	 dispatch_continuation_t dsn)
{
     uint64_t old_state, new_state;
     dispatch_continuation_t prev;

     dsn->dc_data = dq;
     _dispatch_retain(dq);

     prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
     if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
     os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
     if (os_mpsc_push_was_empty(prev)) {
	 os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
	     new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
	     if ((uint32_t)old_state == 0) {
		 os_atomic_rmw_loop_give_up({
		     return _dispatch_group_wake(dg, new_state, false);
		 });
	     }
	 });
     }
}

os_atomic_rmw_loop2o executes a do-while-style loop; when old_state == 0, _dispatch_group_wake is executed

  • There is still one question left: why does dispatch_group_async have the same effect as enter + leave? Let's analyze the source code

dispatch_group_async

void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_block_t db)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
    dispatch_qos_t qos;

    qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
    _dispatch_continuation_group_async(dg, dq, dc, qos);
}

This function looks familiar and is much like an ordinary asynchronous function, except that here dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC. It then enters the _dispatch_continuation_group_async function:

static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_continuation_t dc, dispatch_qos_t qos)
{
     dispatch_group_enter(dg);
     dc->dc_data = dg;
     _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

Here we see dispatch_group_enter, but no leave. The assumption is that leave must be executed wherever the block itself is executed, otherwise there is no way to guarantee that the task in the group has completed. Following dq_push = _dispatch_root_queue_push for the global type, we find the _dispatch_continuation_invoke_inline function



For an ordinary asynchronous type the _dispatch_client_callout function is used, while for the DC_FLAG_GROUP_ASYNC group type the _dispatch_continuation_with_group_invoke function is used

static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
     struct dispatch_object_s *dou = dc->dc_data;
     unsigned long type = dx_type(dou);
     if (type == DISPATCH_GROUP_TYPE) {
	 _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
	 _dispatch_trace_item_complete(dc);
	 dispatch_group_leave((dispatch_group_t)dou);
     } else {
	 DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
     }
}

If the type is DISPATCH_GROUP_TYPE, _dispatch_client_callout is executed first and then dispatch_group_leave is called, which is exactly an enter + leave pair performed automatically

Dispatch source (Dispatch_source)

  • Dispatch_source is an API that tries not to occupy resources and puts very little load on the CPU. It is not affected by the Runloop; it is an API on the same level as the Runloop. It mainly consists of the following functions:
      1. dispatch_source_create: creates a dispatch source
      2. dispatch_source_set_event_handler: sets the dispatch source's event callback
      3. dispatch_source_merge_data: merges data into the source
      4. dispatch_source_get_data: obtains the source's data
      5. dispatch_resume: resume
      6. dispatch_suspend: suspend
  • After calling dispatch_source_merge_data on any thread, the Dispatch Source's predefined handler (which can be understood as a block) executes; this process is called a Custom event. User events are one kind of event a Dispatch Source can handle (see the sketch below)
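A common use of this custom-event mechanism is aggregating progress from background work onto the main queue. A minimal sketch; the self.source property is assumed to exist (so the source stays alive) and the numbers are made up:

// Assumes a retained property so the source stays alive: @property (nonatomic, strong) dispatch_source_t source;
- (void)testCustomEvent {
    self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue());

    // The handler runs on the main queue whenever merged data is delivered.
    dispatch_source_set_event_handler(self.source, ^{
        NSUInteger pending = dispatch_source_get_data(self.source); // sum of the values merged since the last delivery
        NSLog(@"progress += %lu", (unsigned long)pending);
    });
    dispatch_resume(self.source);

    // Background work reports progress without touching UI state directly.
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 100; i++) {
            dispatch_source_merge_data(self.source, 1); // the merged values are ADDed together
        }
    });
}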

Source type

  • Create the source
    dispatch_source_t
    dispatch_source_create(dispatch_source_type_t type,
        uintptr_t handle,
        uintptr_t mask,
        dispatch_queue_t _Nullable queue);
    • The first parameter is the type, dispatch_source_type_t; handle and mask are both of type uintptr_t; finally a queue is passed in
  • Usage:
    dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue());
  • The source types (dispatch_source_type_t):
      1. DISPATCH_SOURCE_TYPE_DATA_ADD: merges data by ADDing the values
      2. DISPATCH_SOURCE_TYPE_DATA_OR: merges data with a bitwise OR
      3. DISPATCH_SOURCE_TYPE_DATA_REPLACE: tracks data obtained via dispatch_source_merge_data; a newly obtained data value replaces the existing value that has not yet been delivered to the source handler
      4. DISPATCH_SOURCE_TYPE_MACH_SEND: a dispatch source that monitors a Mach port for dead-name notifications (send right only, no receive right)
      5. DISPATCH_SOURCE_TYPE_MACH_RECV: monitors a Mach port for pending messages
      6. DISPATCH_SOURCE_TYPE_MEMORYPRESSURE: monitors changes in system memory pressure
      7. DISPATCH_SOURCE_TYPE_PROC: monitors events in an external process
      8. DISPATCH_SOURCE_TYPE_READ: a dispatch source that monitors a file descriptor for readable pending bytes
      9. DISPATCH_SOURCE_TYPE_SIGNAL: a dispatch source that monitors the current process for signals
      10. DISPATCH_SOURCE_TYPE_TIMER: a dispatch source that submits the event handler block based on a timer
      11. DISPATCH_SOURCE_TYPE_VNODE: a dispatch source that monitors a file descriptor for defined events
      12. DISPATCH_SOURCE_TYPE_WRITE: a dispatch source that monitors a file descriptor for available buffer space to write

The timer

The following uses a Dispatch Source to encapsulate a timer

- (void)testTimer {

    self.timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_global_queue(0, 0));
    dispatch_time_t startTime = dispatch_time(DISPATCH_TIME_NOW, 0);
    dispatch_source_set_timer(self.timer, startTime, 1 * NSEC_PER_SEC, 0);

    __block int a = 0;
    dispatch_source_set_event_handler(self.timer, ^{
        a++;
        NSLog(@"a = %d", a);
    });

    dispatch_resume(self.timer);
    self.isRunning = YES;
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    if (self.isRunning) {
        dispatch_suspend(self.timer);
//        dispatch_source_cancel(self.timer);
        self.isRunning = NO;
        NSLog(@"Halftime.");
    } else {
        dispatch_resume(self.timer);
        self.isRunning = YES;
        NSLog(@"Keep going.");
    }
}

The following output is displayed:



  • The timer created with dispatch_source_create must be held by a property or instance variable, otherwise the timer will not fire
  • For dispatch_source_set_timer, the second argument start is when the timer starts, the third argument interval is the time interval, and leeway is the allowed deviation of the timer in nanoseconds
  • There are two ways to stop the timer (a safe teardown sketch follows):
    • dispatch_suspend: the timer is paused but still alive and can be woken up again
    • dispatch_source_cancel: the timer is cancelled and released; executing dispatch_resume afterwards to wake it up will crash
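Because of that last point, a safe teardown first resumes a suspended timer and only then cancels it. A minimal sketch; the stopTimer method name is made up and reuses the self.timer / self.isRunning properties from the example above:

- (void)stopTimer {
    if (!self.timer) return;
    if (!self.isRunning) {
        // A suspended source must be resumed before it can be cancelled and released.
        dispatch_resume(self.timer);
        self.isRunning = YES;
    }
    dispatch_source_cancel(self.timer);
    self.timer = nil;
}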