Blog links for more on GCD's dispatch_semaphore

dispatch_group is implemented on top of dispatch_semaphore, so let's study the dispatch_semaphore source code first. In GCD, dispatch_semaphore is used to guard access to shared resources (synchronous queue execution also depends on the semaphore implementation), and as you can imagine, its performance should be quite good.
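Before diving into the source, here is a minimal usage sketch (illustrative only, not from the source) of the typical pattern: a semaphore created with value 1 serializing access to a shared counter from a concurrent queue.

// Illustrative sketch: a binary semaphore (value 1) protecting a shared
// counter that is mutated from a concurrent queue.
dispatch_semaphore_t lock = dispatch_semaphore_create(1);
dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
__block NSInteger counter = 0;

for (NSInteger i = 0; i < 10; i++) {
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER); // 1 -> 0; later waiters block
        counter += 1;                                         // critical section
        dispatch_semaphore_signal(lock);                      // 0 -> 1; wakes one waiter if any
    });
}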

dispatch_semaphore_t

dispatch_semaphore_s is the semaphore structure. The code is as follows:

struct dispatch_semaphore_s {
	DISPATCH_STRUCT_HEADER(dispatch_semaphore_s, dispatch_semaphore_vtable_s);
	long dsema_value;                // the current semaphore value
	long dsema_orig;                 // the initial semaphore value
	size_t dsema_sent_ksignals;
#if USE_MACH_SEM && USE_POSIX_SEM
#error "Too many supported semaphore types"
#elif USE_MACH_SEM
	semaphore_t dsema_port;
	semaphore_t dsema_waiter_port;
#elif USE_POSIX_SEM
	sem_t dsema_sem;
#else
#error "No supported semaphore type"
#endif
	size_t dsema_group_waiters;
	struct dispatch_sema_notify_s *dsema_notify_head;
	struct dispatch_sema_notify_s *dsema_notify_tail;  // notify list
};

// semaphore_t is a Mach port:
typedef mach_port_t semaphore_t;

struct dispatch_sema_notify_s {
	struct dispatch_sema_notify_s *volatile dsn_next;  // next notify node
	dispatch_queue_t dsn_queue;                        // queue to run the notify block on
	void *dsn_ctxt;                                    // context
	void (*dsn_func)(void *);                          // function to execute
};

There are still some fields whose purpose isn't clear yet, but let's move on.

dispatch_semaphore_create

dispatch_semaphore_create is used to create a semaphore.

dispatch_semaphore_t
dispatch_semaphore_create(long value)
{
	dispatch_semaphore_t dsema;

	// If the internal value is negative, then the absolute of the value is
	// equal to the number of waiting threads. Therefore it is bogus to
	// initialize the semaphore with a negative value.
	if (value < 0) {
		// value must be greater than or equal to 0
		return NULL;
	}

	dsema = calloc(1, sizeof(struct dispatch_semaphore_s));

	if (fastpath(dsema)) {
		dsema->do_vtable = &_dispatch_semaphore_vtable;
		dsema->do_next = DISPATCH_OBJECT_LISTLESS;
		dsema->do_ref_cnt = 1;
		dsema->do_xref_cnt = 1;
		dsema->do_targetq = dispatch_get_global_queue(
				DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
		dsema->dsema_value = value;
		dsema->dsema_orig = value;
#if USE_POSIX_SEM
		int ret = sem_init(&dsema->dsema_sem, 0, 0);
		DISPATCH_SEMAPHORE_VERIFY_RET(ret);
#endif
	}

	return dsema;
}
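As the comment and the value < 0 check above show, a negative initial value is rejected. A quick illustrative check (not from the source):

// Illustrative: create succeeds for value >= 0 and returns NULL for a negative value.
dispatch_semaphore_t ok  = dispatch_semaphore_create(0);   // non-NULL
dispatch_semaphore_t bad = dispatch_semaphore_create(-1);  // NULL, per the check above
NSLog(@"ok = %@, bad = %@", ok, bad);                      // bad prints (null)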

dsema->do_vtable = &_dispatch_semaphore_vtable;

_dispatch_semaphore_vtable is defined as follows:

const struct dispatch_semaphore_vtable_s _dispatch_semaphore_vtable = {
	.do_type = DISPATCH_SEMAPHORE_TYPE,
	.do_kind = "semaphore",
	.do_dispose = _dispatch_semaphore_dispose,
	.do_debug = _dispatch_semaphore_debug,
};

The _dispatch_semaphore_dispose function is the semaphore's destruction function. The code is as follows:

static void
_dispatch_semaphore_dispose(dispatch_semaphore_t dsema)
{
	// If the current value of the semaphore is less than the initial value,
	// the semaphore is still in use (there are outstanding waits), so the
	// process is deliberately crashed when the semaphore is released.
	if (dsema->dsema_value < dsema->dsema_orig) {
		DISPATCH_CLIENT_CRASH(
				"Semaphore/group object deallocated while in use");
	}

#if USE_MACH_SEM
	kern_return_t kr;
	// Destroy the Mach semaphore used by dispatch_semaphore
	if (dsema->dsema_port) {
		kr = semaphore_destroy(mach_task_self(), dsema->dsema_port);
		DISPATCH_SEMAPHORE_VERIFY_KR(kr);
	}
	// Destroy the Mach semaphore used by dispatch_group
	if (dsema->dsema_waiter_port) {
		kr = semaphore_destroy(mach_task_self(), dsema->dsema_waiter_port);
		DISPATCH_SEMAPHORE_VERIFY_KR(kr);
	}
#elif USE_POSIX_SEM
	int ret = sem_destroy(&dsema->dsema_sem);
	DISPATCH_SEMAPHORE_VERIFY_RET(ret);
#endif

	_dispatch_dispose(dsema);
}
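The value < orig check above explains a well-known crash: releasing a semaphore whose current value is lower than its initial value triggers DISPATCH_CLIENT_CRASH. A minimal sketch of that scenario (illustrative only; assumes ARC releases the local reference when it goes out of scope):

// Illustrative crash scenario: at dealloc time dsema_value (0) < dsema_orig (1),
// so _dispatch_semaphore_dispose reports
// "Semaphore/group object deallocated while in use".
- (void)crashingExample {
    dispatch_semaphore_t sema = dispatch_semaphore_create(1);
    dispatch_semaphore_wait(sema, DISPATCH_TIME_NOW); // value drops from 1 to 0
    // no matching dispatch_semaphore_signal(sema) before sema is released here
}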

dispatch_semaphore_wait

Once a semaphore has been created, threads can wait on it with dispatch_semaphore_wait.

long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
	// Atomically decrement dsema_value and assign the new value to value
	long value = dispatch_atomic_dec2o(dsema, dsema_value);
	dispatch_atomic_acquire_barrier();
	if (fastpath(value >= 0)) {
		// There is a resource available
		return 0;
	}
	// Otherwise wait until the semaphore is signaled or the timeout expires
	return _dispatch_semaphore_wait_slow(dsema, timeout);
}

_dispatch_semaphore_wait_slow

In dispatch_semaphore_wait, if value is less than zero, _dispatch_semaphore_wait_slow is executed to wait until the semaphore is signaled or the timeout expires. The code for _dispatch_semaphore_wait_slow is as follows:

static long
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
		dispatch_time_t timeout)
{
	long orig;

again:
	// Mach semaphores appear to sometimes spuriously wake up. Therefore,
	// we keep a parallel count of the number of times a Mach semaphore is
	// signaled (6880961).
	//
	// If dsema->dsema_sent_ksignals is non-zero, we enter the loop.
	// dispatch_atomic_cmpxchg2o(dsema, dsema_sent_ksignals, orig, orig - 1)
	// means: if dsema->dsema_sent_ksignals equals orig, store orig - 1 into it
	// and return true; otherwise return false. A true return means a sent
	// signal has been consumed and the wait is satisfied.
	while ((orig = dsema->dsema_sent_ksignals)) {
		if (dispatch_atomic_cmpxchg2o(dsema, dsema_sent_ksignals, orig,
				orig - 1)) {
			return 0;
		}
	}

#if USE_MACH_SEM
	mach_timespec_t _timeout;
	kern_return_t kr;

	_dispatch_semaphore_create_port(&dsema->dsema_port);

	// From xnu/osfmk/kern/sync_sema.c:
	// wait_semaphore->count = -1;  /* we don't keep an actual count */
	//
	// The code above does not match the documentation, and that fact is
	// not surprising. The documented semantics are clumsy to use in any
	// practical way. The above hack effectively tricks the rest of the
	// Mach semaphore logic to behave like the libdispatch algorithm.

	switch (timeout) {
	default:
		// Calculate the remaining time and call the Mach kernel's wait
		// function semaphore_timedwait() to wait. If no notification is
		// received within the specified period, the call blocks on
		// dsema_port. On timeout, execution falls through to the case
		// below (this default has no break).
		do {
			uint64_t nsec = _dispatch_timeout(timeout);
			_timeout.tv_sec = (typeof(_timeout.tv_sec))(nsec / NSEC_PER_SEC);
			_timeout.tv_nsec = (typeof(_timeout.tv_nsec))(nsec % NSEC_PER_SEC);
			kr = slowpath(semaphore_timedwait(dsema->dsema_port, _timeout));
		} while (kr == KERN_ABORTED);

		if (kr != KERN_OPERATION_TIMED_OUT) {
			DISPATCH_SEMAPHORE_VERIFY_KR(kr);
			break;
		}
		// Fall through and try to undo what the fast path did to
		// dsema->dsema_value
	case DISPATCH_TIME_NOW:
		// If the current dsema_value is less than 0, add 1 back and return
		// KERN_OPERATION_TIMED_OUT, which indicates the wait timed out.
		while ((orig = dsema->dsema_value) < 0) {
			if (dispatch_atomic_cmpxchg2o(dsema, dsema_value, orig,
					orig + 1)) {
				return KERN_OPERATION_TIMED_OUT;
			}
		}
		// Another thread called semaphore_signal().
		// Fall through and drain the wakeup.
	case DISPATCH_TIME_FOREVER:
		// Wait until there is a signal.
		do {
			kr = semaphore_wait(dsema->dsema_port);
		} while (kr == KERN_ABORTED);
		DISPATCH_SEMAPHORE_VERIFY_KR(kr);
		break;
	}
#elif USE_POSIX_SEM
#endif

	goto again;
}

There are a few other things to note in the source code above:

  1. The while loop and the if condition in the first part: if dsema->dsema_sent_ksignals equals orig, orig - 1 is stored into dsema_sent_ksignals and the compare-and-swap returns true; otherwise it returns false. As long as we can enter the loop, the swap eventually succeeds and the function returns 0, meaning the wait has been satisfied. dsema_sent_ksignals is not set when the semaphore is initialized, so on the first pass the code falls through to the kernel wait below. This parallel count guards against spurious wakeups: a wait is only cleared when a signal was actually sent (see the sketch after this list).

  2. semaphore_timedwait and semaphore_wait appear above. These functions are declared in Mach's semaphore.h, so dispatch_semaphore is implemented on top of the Mach kernel's semaphore interface. The kernel semaphore waits on dsema_port, so dsema_port is the underlying signal of dispatch_semaphore.

  3. Let's go back and look at dsema_waiter_port in the dispatch_semaphore_s structure. A global search shows that this field is used by dispatch_group: there, semaphore_wait listens on dsema_waiter_port rather than dsema_port.
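dispatch_atomic_cmpxchg2o is essentially a compare-and-swap on a structure field (on this libdispatch version it is assumed to map to the __sync compare-and-swap builtin). A rough, illustrative equivalent of the dsema_sent_ksignals loop, with a hypothetical helper name:

// Illustrative only: the shape of the dsema_sent_ksignals consumption loop.
// __sync_bool_compare_and_swap(ptr, old, new) stores new into *ptr and returns
// true only if *ptr still equals old; otherwise it returns false.
static long consume_sent_ksignal(size_t *sent_ksignals) // hypothetical helper
{
    size_t orig;
    while ((orig = *sent_ksignals)) {
        if (__sync_bool_compare_and_swap(sent_ksignals, orig, orig - 1)) {
            return 0;   // a previously sent signal was consumed; the wait is over
        }
        // another thread raced us; reload orig and try again
    }
    return -1;          // nothing has been sent yet; fall through to the kernel wait
}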

To summarize, the dispatch_semaphore_wait flow is: atomically decrement dsema_value; if the result is greater than or equal to 0, return immediately; otherwise block in _dispatch_semaphore_wait_slow on the kernel semaphore until a signal arrives or the timeout expires.
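Because a timed-out wait returns KERN_OPERATION_TIMED_OUT (a non-zero value), callers can detect a timeout simply by checking the return value of dispatch_semaphore_wait. An illustrative sketch:

// Illustrative: wait at most 2 seconds for a signal that never comes.
dispatch_semaphore_t sema = dispatch_semaphore_create(0);
dispatch_time_t deadline = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC));
if (dispatch_semaphore_wait(sema, deadline) != 0) {
    NSLog(@"wait timed out");   // non-zero return: the deadline passed
} else {
    NSLog(@"signal received");
}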

dispatch_semaphore_signal

The code for sending a signal is much simpler than waiting for a signal; it doesn’t block, it just sends a wake up.

long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
	dispatch_atomic_release_barrier();
	// Atomically increment dsema_value
	long value = dispatch_atomic_inc2o(dsema, dsema_value);
	if (fastpath(value > 0)) {
		return 0;
	}
	if (slowpath(value == LONG_MIN)) {
		DISPATCH_CLIENT_CRASH("Unbalanced call to dispatch_semaphore_signal()");
	}
	return _dispatch_semaphore_signal_slow(dsema);
}

_dispatch_semaphore_signal_slow

long
_dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema)
{
	// Before dsema_sent_ksignals is incremented we can rely on the reference
	// held by the waiter. However, once this value is incremented the waiter
	// may return between the atomic increment and the semaphore_signal(),
	// therefore an explicit reference must be held in order to safely access
	// dsema after the atomic increment.
	_dispatch_retain(dsema);

	(void)dispatch_atomic_inc2o(dsema, dsema_sent_ksignals);

#if USE_MACH_SEM
	_dispatch_semaphore_create_port(&dsema->dsema_port);
	kern_return_t kr = semaphore_signal(dsema->dsema_port);
	DISPATCH_SEMAPHORE_VERIFY_KR(kr);
#elif USE_POSIX_SEM
	int ret = sem_post(&dsema->dsema_sem);
	DISPATCH_SEMAPHORE_VERIFY_RET(ret);
#endif

	_dispatch_release(dsema);
	return 1;
}

_dispatch_semaphore_signal_slow calls the kernel's semaphore_signal function to wake up a thread blocked in dispatch_semaphore_wait, then returns 1.
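In practice this is the path taken when one thread is already blocked in dispatch_semaphore_wait and another thread signals it, for example when waiting for an asynchronous block to finish. An illustrative sketch:

// Illustrative: the caller blocks on wait; the async block signals and wakes it.
dispatch_semaphore_t done = dispatch_semaphore_create(0);

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // ... do some work ...
    dispatch_semaphore_signal(done); // if the caller is already waiting, this goes
                                     // through _dispatch_semaphore_signal_slow
});

dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER); // blocks if the work isn't done yet
NSLog(@"async work finished");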

To summarize, the dispatch_semaphore_signal flow is: atomically increment dsema_value; if the result is greater than 0, return immediately; otherwise increment dsema_sent_ksignals and call the kernel's semaphore_signal to wake a waiting thread.

Conclusion

  1. dispatch_semaphore is implemented on top of the Mach kernel's semaphore interface.

  2. Calling dispatch_semaphore_wait decrements the semaphore value by 1; calling dispatch_semaphore_signal increments it by 1.

  3. In wait, if the decremented value is greater than or equal to 0, the call returns immediately (a resource was available); otherwise it blocks until a signal arrives or the timeout expires. In signal, if the incremented value is greater than 0, the call returns immediately; otherwise a waiting thread is woken up.

  4. dispatch_semaphore uses two fields, dsema_value and dsema_sent_ksignals, to coordinate wait and signal. In signal, if no thread is waiting, there is no need to wake anyone and only dsema_value is updated; dsema_sent_ksignals is incremented when a waiting thread does need to be woken, so that the restarted wait loop can confirm a real signal was sent.

  5. Look again at the fields of the dispatch_semaphore_s structure.

struct dispatch_semaphore_s {
	DISPATCH_STRUCT_HEADER(dispatch_semaphore_s, dispatch_semaphore_vtable_s);
	long dsema_value;                // the current semaphore value
	long dsema_orig;                 // the initial semaphore value
	size_t dsema_sent_ksignals;      // count of kernel wakeups sent
#if USE_MACH_SEM && USE_POSIX_SEM
#error "Too many supported semaphore types"
#elif USE_MACH_SEM
	semaphore_t dsema_port;          // Mach semaphore used by dispatch_semaphore
	semaphore_t dsema_waiter_port;   // Mach semaphore used by dispatch_group
#elif USE_POSIX_SEM
	sem_t dsema_sem;
#else
#error "No supported semaphore type"
#endif
	size_t dsema_group_waiters;
	struct dispatch_sema_notify_s *dsema_notify_head;
	struct dispatch_sema_notify_s *dsema_notify_tail;  // notify list
};

Supplement

How to control the number of concurrent threads

Method 1: Use semaphores for concurrency control

dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t serialQueue = dispatch_queue_create("serialQueue",DISPATCH_QUEUE_SERIAL);
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(4);
    for (NSInteger i = 0; i < 15; i++) {
        dispatch_async(serialQueue, ^{
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
            dispatch_async(concurrentQueue, ^{
                NSLog(@"Thread :%@ Start task %d",[NSThread currentThread],(int)i);
                sleep(1);
                NSLog(@"Thread :%@ End task %d", [NSThread currentThread], (int)i);
                dispatch_semaphore_signal(semaphore);
            });
        });
    }
    NSLog(@"The main thread... !");

The run result shows that at most four tasks execute at the same time; a new task only starts after a previous task signals the semaphore.

Method 2: implementation of YYDispatchQueuePool

YYDispatchQueuePool in the YYKit family of components can also be used to control the number of concurrent queues.

From the author's tips on keeping the iOS interface smooth:

The idea is to create, for each priority, the same number of serial queues as there are CPU cores. Each time a queue is fetched from the pool, one of the queues is returned in round-robin order. All asynchronous operations in the app, including image decoding, object release, and asynchronous drawing, are dispatched onto these global serial queues according to their priority, which avoids the performance problems caused by spawning too many threads. A minimal sketch follows.
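The sketch below is not YYDispatchQueuePool's actual implementation; the class and method names are made up to illustrate the round-robin pattern described above.

#import <Foundation/Foundation.h>
#import <libkern/OSAtomic.h>

// Illustrative round-robin pool of serial queues, one per CPU core.
@interface QueuePool : NSObject
- (instancetype)initWithQOS:(qos_class_t)qos;
- (dispatch_queue_t)queue;
@end

@implementation QueuePool {
    NSArray *_queues;
    int32_t _counter;
}

- (instancetype)initWithQOS:(qos_class_t)qos {
    if ((self = [super init])) {
        NSUInteger count = (NSUInteger)[NSProcessInfo processInfo].activeProcessorCount;
        NSMutableArray *queues = [NSMutableArray arrayWithCapacity:count];
        for (NSUInteger i = 0; i < count; i++) {
            dispatch_queue_attr_t attr =
                    dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, qos, 0);
            [queues addObject:dispatch_queue_create("com.example.pool.serial", attr)];
        }
        _queues = [queues copy];
    }
    return self;
}

- (dispatch_queue_t)queue {
    // Round-robin: each call hands out the next serial queue in the pool.
    uint32_t index = (uint32_t)OSAtomicIncrement32(&_counter);
    return _queues[index % _queues.count];
}
@end

Work submitted with dispatch_async([pool queue], ^{ ... }) is then spread across a fixed, CPU-bound number of serial queues instead of spawning an unbounded number of threads.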