Preparation

  • libdispatch source code

1. Using the barrier function

Role

  • Controls the order in which tasks execute and provides synchronization.

Functions

  • dispatch_barrier_async: the barrier task runs only after the tasks submitted before it have completed.
  • dispatch_barrier_sync: has the same barrier effect, but it also blocks the current thread, so the tasks after it are held back as well.
  • Note that a barrier can only control a custom concurrent queue — the same queue the fenced tasks were submitted to.

dispatch_barrier_async example

```objc
- (void)myDemo {
    dispatch_queue_t concurrentQueue = dispatch_queue_create("ssl", DISPATCH_QUEUE_CONCURRENT);
    // Task 1
    dispatch_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"Task 1");
    });
    // Task 2
    dispatch_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"Task 2");
    });
    // Barrier
    dispatch_barrier_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"------barrier------");
    });
    // Task 3
    dispatch_async(concurrentQueue, ^{
        NSLog(@"Task 3");
    });
    NSLog(@"-------- main --------");
}
```

Execution result:

```
-------- main --------
Task 1
Task 2
------barrier------
Task 3
```
  • Task 1, Task 2, and the barrier all have a one-second delay, yet the output shows Task 1 and Task 2 first, then the barrier, and finally Task 3. As stated above, the tasks before the barrier execute first, then the barrier, then the tasks after it.

dispatch_barrier_sync example

```objc
- (void)myDemo {
    dispatch_queue_t concurrentQueue = dispatch_queue_create("ssl", DISPATCH_QUEUE_CONCURRENT);
    // Task 1
    dispatch_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"Task 1");
    });
    // Task 2
    dispatch_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"Task 2");
    });
    // Barrier (synchronous this time)
    dispatch_barrier_sync(concurrentQueue, ^{
        sleep(1);
        NSLog(@"------barrier------");
    });
    // Task 3
    dispatch_async(concurrentQueue, ^{
        NSLog(@"Task 3");
    });
    NSLog(@"-------- main --------");
}
```

Execution result:

```
Task 2
Task 1
------barrier------
-------- main --------
Task 3
```
  • As you can see, the sync variant not only has the barrier effect of the async variant, it also blocks the calling thread: the main thread's "-------- main --------" log is printed only after the barrier block has finished.

dispatch_get_global_queue example

Change the custom concurrent queue into the global concurrent queue and check the result:

```objc
- (void)myDemo {
    dispatch_queue_t concurrentQueue = dispatch_get_global_queue(0, 0);
    // Task 1
    dispatch_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"Task 1");
    });
    // Task 2
    dispatch_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"Task 2");
    });
    // Barrier
    dispatch_barrier_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"------barrier------");
    });
    // Task 3
    dispatch_async(concurrentQueue, ^{
        NSLog(@"Task 3");
    });
    NSLog(@"-------- main --------");
}
```

Execution result:

```
-------- main --------
Task 3
------barrier------
Task 2
Task 1
```
  • As you can see, on the global concurrent queue the barrier function has no effect — it behaves like an ordinary dispatch_async.

A thread-safety example

Look at this example:

```objc
- (void)myDemo {
    dispatch_queue_t concurrentQueue = dispatch_queue_create("ssl", DISPATCH_QUEUE_CONCURRENT);
    // self.mArray is a lazily created NSMutableArray
    for (int i = 0; i < 1000; i++) {
        dispatch_async(concurrentQueue, ^{
            NSString *imageName = [NSString stringWithFormat:@"%d.jpg", (i % 10)];
            NSURL *url = [[NSBundle mainBundle] URLForResource:imageName withExtension:nil];
            NSData *data = [NSData dataWithContentsOfURL:url];
            UIImage *image = [UIImage imageWithData:data];
            [self.mArray addObject:image];
        });
    }
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    NSLog(@"Array count: %zd", self.mArray.count);
}
```

Execution result:

```
Array count: 996
```
  • The count is not necessarily 996, but it is less than 1000.
  • The reason is that calling addObject: from two tasks at the same time is not thread-safe. Suppose mArray currently holds 800 elements: both tasks then write to position 801, one write overwrites the other, and the final count ends up below 1000.

Putting [self.mArray addObject:image] into a barrier block makes it thread-safe:

```objc
- (void)myDemo {
    ...
    dispatch_barrier_async(concurrentQueue, ^{
        [self.mArray addObject:image];
    });
    ...
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    NSLog(@"Array count: %zd", self.mArray.count);
}
```

Execution result:

```
Array count: 1000
```
  • The barrier function guarantees that the preceding tasks have completed before each append runs, so the appends execute one by one. That makes the operation thread-safe, and the final count is 1000.

2. How the barrier function works under the hood

We will analyze the underlying implementation using the synchronous barrier function.

Executing the tasks before the barrier

Enter dispatch_barrier_sync:

The call chain is _dispatch_barrier_sync_f -> _dispatch_barrier_sync_f_inline -> _dispatch_sync_f_slow:

Set symbol breakpoints and trace the flow:

  • You can see that the tasks before the barrier are executed inside the __DISPATCH_WAIT_FOR_QUEUE__ function. As mentioned in the previous article, deadlocks also occur in this function.

Executing the tasks after the barrier

Next, the flow goes _dispatch_sync_invoke_and_complete_recurse -> _dispatch_sync_complete_recurse:

```c
static void
_dispatch_sync_complete_recurse(dispatch_queue_t dq, dispatch_queue_t stop_dq,
        uintptr_t dc_flags)
{
    bool barrier = (dc_flags & DC_FLAG_BARRIER);
    do {
        if (dq == stop_dq) return;
        if (barrier) {
            // #define dx_wakeup(x, y, z) dx_vtable(x)->dq_wakeup(x, y, z)
            dx_wakeup(dq, 0, DISPATCH_WAKEUP_BARRIER_COMPLETE);
        } else {
            // No barrier this time around: just some state fix-up
            _dispatch_lane_non_barrier_complete(upcast(dq)._dl, 0);
        }
        dq = dq->do_targetq;
        barrier = (dq->dq_width == 1);
    } while (unlikely(dq->do_targetq));
}
```

Breakpoint and flow verification:

Then the _dispatch_sync_f_slow function is called again to complete the remaining tasks:

Summary

  • First, __DISPATCH_WAIT_FOR_QUEUE__ executes the tasks that were enqueued before the barrier.
  • Then a do-while loop checks whether there is a barrier; if so it executes the barrier task, and then loops on to complete the remaining tasks.

3. Using semaphores

Role

  • Synchronization — acts as a lock, and controls GCD's maximum number of concurrent tasks.

Related functions

  • dispatch_semaphore_create: creates a semaphore.
  • dispatch_semaphore_wait: waits on (decrements) the semaphore.
  • dispatch_semaphore_signal: signals (increments) the semaphore.

Example

```objc
- (void)myDemo {
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_semaphore_t sem = dispatch_semaphore_create(1);

    // Task 1
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // wait
        sleep(2);
        NSLog(@"Executing task 1");
        NSLog(@"Task 1 completed");
        dispatch_semaphore_signal(sem); // signal
    });
    // Task 2
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // wait
        sleep(1);
        NSLog(@"Executing task 2");
        NSLog(@"Task 2 completed");
        dispatch_semaphore_signal(sem); // signal
    });
    // Task 3
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // wait
        NSLog(@"Executing task 3");
        NSLog(@"Task 3 completed");
        dispatch_semaphore_signal(sem);
    });
    NSLog(@"Task 4");
}
```

Execution result:

```
Task 4
Executing task 1
Task 1 completed
Executing task 2
Task 2 completed
Executing task 3
Task 3 completed
```
  • Task 1 and Task 2 both contain delays and would otherwise finish later, but because the semaphore (created with a value of 1) forces the tasks to run one at a time in submission order, Task 1 and Task 2 still execute first.

4. How semaphores work under the hood

dispatch_semaphore_create

  • You can see that creating the semaphore mainly assigns the value we passed in to dsema_value. Let's continue the analysis.

dispatch_semaphore_wait

intptr_t
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
    long value = os_atomic_dec2o(dsema, dsema_value, acquire);
    if (likely(value >= 0)) {
        return 0;
    }
    return _dispatch_semaphore_wait_slow(dsema, timeout);
}
  • dsema_value is decremented (--); if the result is >= 0, the function returns 0 and the task can execute.
  • Otherwise, the _dispatch_semaphore_wait_slow function is called.

Enter _dispatch_semaphore_wait_slow:

  • When timeout is DISPATCH_TIME_FOREVER, it waits indefinitely.

Enter _dispatch_sema4_wait:

  • You can see there is a do-while loop here, so the code after dispatch_semaphore_wait cannot execute yet.
  • When dsema_value is >= 0 again, the do-while loop ends and the task can proceed. Getting there requires a ++ on dsema_value, which brings us to the dispatch_semaphore_signal function.

dispatch_semaphore_signal

intptr_t
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
    long value = os_atomic_inc2o(dsema, dsema_value, release);
    if (likely(value > 0)) {
        return 0;
    }
    if (unlikely(value == LONG_MIN)) {
        DISPATCH_CLIENT_CRASH(value,
                "Unbalanced call to dispatch_semaphore_signal()");
    }
    return _dispatch_semaphore_signal_slow(dsema);
}
  • dsema_value is incremented (++); if the result is > 0, the function returns 0.
  • If value == LONG_MIN, the program crashes with "Unbalanced call to dispatch_semaphore_signal()".
  • Otherwise, the _dispatch_semaphore_signal_slow function is called.

The _dispatch_semaphore_signal_slow function handles the slow path, such as waking a blocked waiter:

Summary

  • Semaphore operations are essentially bookkeeping on dsema_value.
  • dispatch_semaphore_create initializes dsema_value.
  • dispatch_semaphore_wait performs -- on dsema_value; if the result is >= 0 the task executes, otherwise a do-while loop blocks the task until dsema_value is >= 0 again.
  • dispatch_semaphore_signal performs ++ on dsema_value.

5. Using dispatch groups

Role

  • Controls the order in which tasks execute.

Functions

  • dispatch_group_create: creates a group.
  • dispatch_group_async: submits a task to a group.
  • dispatch_group_notify: notifies when the group's tasks have completed.
  • dispatch_group_wait: waits for the group's tasks to complete.
  • dispatch_group_enter: enters the group.
  • dispatch_group_leave: leaves the group.

Example

```objc
- (void)myGroupDemo {
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

    // Group task 1
    dispatch_group_async(group, queue, ^{
        sleep(1);
        [self.mArray addObject:@"Picture 1"];
    });

    // Group task 2, with explicit enter/leave
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        [self.mArray addObject:@"Picture 2"];
        dispatch_group_leave(group);
    });

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSMutableString *muStr = [NSMutableString new];
        for (NSString *str in self.mArray) {
            [muStr appendString:str];
        }
        [muStr appendString:@"- add watermark"];
        NSLog(@"%@", muStr);
    });
}
```

Execution result:

```
Picture 1Picture 2- add watermark
```
  • Only after both asynchronous tasks have completed does the group's notify task run. Now let's look at the underlying implementation.

6. How dispatch groups work under the hood

Earlier versions of the dispatch group were implemented on top of semaphores. The current version has its own implementation, but it follows the same idea as the semaphore — a set of operations on a value.

dispatch_group_create

dispatch_group_t
dispatch_group_create(void)
{
    return _dispatch_group_create_with_count(0);
}

Enter _dispatch_group_create_with_count:

static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
    dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
                    sizeof(struct dispatch_group_s));
    dg->do_next = DISPATCH_OBJECT_LISTLESS;
    dg->do_targetq = _dispatch_get_default_queue(false);
    if (n) {
        os_atomic_store2o(dg, dg_bits,
                        (uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
        os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
    }
    return dg;
}
  • There’s some initialization going on here.

dispatch_group_enter

```c
void
dispatch_group_enter(dispatch_group_t dg)
{
    // The value is decremented on a 32bits wide atomic so that the carry
    // for the 0 -> -1 transition is not propagated to the upper 32bits.
    uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
            DISPATCH_GROUP_VALUE_INTERVAL, acquire);
    uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
    if (unlikely(old_value == 0)) {
        _dispatch_retain(dg); // <rdar://problem/22318411>
    }
    if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
        DISPATCH_CLIENT_CRASH(old_bits,
                "Too many nested calls to dispatch_group_enter()");
    }
}
```
  • Performs -- on the value.
  • If old_value equals 0, _dispatch_retain is called.
  • If old_value equals DISPATCH_GROUP_VALUE_MAX, the program crashes. DISPATCH_GROUP_VALUE_MAX equals DISPATCH_GROUP_VALUE_INTERVAL, i.e. 4:

```c
#define DISPATCH_GROUP_VALUE_INTERVAL   0x0000000000000004ULL
#define DISPATCH_GROUP_VALUE_MAX        DISPATCH_GROUP_VALUE_INTERVAL
```

dispatch_group_leave

  • This performs ++ on the value. If old_value equals -1 (the value is returning to 0), _dispatch_group_wake is called to perform the wake-up, which invokes the block registered with dispatch_group_notify.

dispatch_group_notify

  • When old_state == 0, i.e. value == 0, _dispatch_group_wake executes and the block in dispatch_group_notify is called.

dispatch_group_async

We can guess that dispatch_group_async internally calls dispatch_group_enter and dispatch_group_leave — let's verify.

Enter dispatch_group_async:

Enter _dispatch_continuation_group_async:

  • Here is the dispatch_group_enter call. Let's continue into _dispatch_continuation_async.

As analyzed in iOS Multithreading (II): GCD Fundamentals & Source Analysis, _dispatch_continuation_async eventually reaches the _dispatch_continuation_invoke_inline function:

  • In the normal case analyzed earlier, _dispatch_client_callout is called; but when the DC_FLAG_GROUP_ASYNC flag is set, _dispatch_continuation_with_group_invoke is called instead.

Enter _dispatch_continuation_with_group_invoke:

  • As shown above, when type == DISPATCH_GROUP_TYPE, the block is first executed through _dispatch_client_callout, and then dispatch_group_leave is called — enter and leave are paired perfectly.

Summary

  • dispatch_group_create: initializes value = 0.
  • dispatch_group_enter: value--.
  • dispatch_group_leave: value++; if old_value == -1, the block in dispatch_group_notify is invoked.
  • dispatch_group_notify: if value == 0, its block is invoked.
  • dispatch_group_async: internally calls dispatch_group_enter and dispatch_group_leave as a pair.

7. dispatch_source

Role

  • Its CPU load is very small, and it occupies as few resources as possible.
  • Events merged into a source are coalesced, which is one of its advantages.

Concept

After dispatch_source_merge_data is called on any thread, the handler that the dispatch source defined in advance is executed. This process is called a custom event — a user event — and it is one of the kinds of event a dispatch source can handle.

A handle is a pointer to a pointer, which in turn points to a class or structure closely tied to the system.

For example, on Windows there are HINSTANCE (instance handle), HBITMAP (bitmap handle), HDC (device context handle), HICON (icon handle), and the general-purpose HANDLE.

Functions

  • dispatch_source_create: creates a source.
  • dispatch_source_set_event_handler: sets the source's event callback.
  • dispatch_source_merge_data: merges data into the source event.
  • dispatch_source_get_data: gets the source event's data.
  • dispatch_resume: resumes the source.
  • dispatch_suspend: suspends the source.

Basic usage

#import "ViewController.h" @interface ViewController () @property (weak, nonatomic) IBOutlet UIProgressView *progressView; @property (nonatomic, strong) dispatch_source_t source; @property (nonatomic, strong) dispatch_queue_t queue; @property (nonatomic, assign) NSUInteger totalComplete; @property (nonatomic) BOOL isRunning; @end @implementation ViewController - (void)viewDidLoad { [super viewDidLoad]; self.totalComplete = 0; self.queue = dispatch_queue_create("com.ssl.dd", 0); The first parameter, dispatch_source_type_t type, sets the type of the GCD source method, as enumerated earlier. 2) UintPtr_t Handle Apple The third parameter: unsigned long mask: unsigned long mask: unsigned long mask: Unsigned long mask: Unsigned long Mask: Unsigned long Mask: Unsigned long Mask: Unsigned long Mask: Unsigned long Mask The fourth parameter: dispatch_queue_t_NULlable Queue, to which queue the timer event processing Block is submitted. You can pass Null, default is global queue. Note: When submitting to the global queue, the time processing callback needs to asynchronously fetch the UI thread, update the UI... But it seems like common sense, so much more... */ self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue()); // Save the code block --> asynchronous dispatch_source_set_event_handler() // Set cancel callback dispatch_source_set_cancel_handler(dispatch_source_t) Source,dispatch_block_t _Nullable Handler) // Encapsulate the trigger function that we need to call back -- respond with dispatch_source_set_event_handler(self.source, ^{ NSUInteger value = dispatch_source_get_data(self.source); Self. totalComplete += value; self.totalComplete += value; NSLog(@" progress: %.2f", self.totalComplete/100.0); Self. ProgressView. Progress = self. TotalComplete / 100.0; }); self.isRunning = YES; dispatch_resume(self.source); } - IBAction didClickStartOrPauseAction: (id) sender {the if (self. 
Set the) {/ / are running will suspend dispatch_suspend (self. The source); dispatch_suspend(self.queue); // MainQueue suspends self.isRunning = NO; [sender setTitle: @ "pause..." forState: UIControlStateNormal]; }else{ dispatch_resume(self.source); dispatch_resume(self.queue); self.isRunning = YES; [sender setTitle: @ "loading..." forState: UIControlStateNormal]; }} - (void)touches began :(NSSet< touches *> *)touches withEvent:(UIEvent *)event{NSLog(@' touches'); for (NSUInteger index = 0; index < 100; index++) { dispatch_async(self.queue, ^{ if (! Self.isrunning) {NSLog(@" pause download "); return ; } sleep(2); dispatch_source_merge_data(self.source, 1); // source value response}); }}Copy the code

Print result: