This is the 14th day of my participation in the August More Text Challenge.
A note up front: exploring the underlying principles of iOS is a road I travel step by step, accumulating knowledge through everyday development and study. I record my journey of discovery here in the hope that it will help other readers.
The directory is as follows:

- iOS Underlying Principles: alloc
- iOS Underlying Principles: struct memory alignment
- iOS Underlying Principles: the nature of objects & the underlying implementation of isa
- iOS Underlying Principles: isa and class (part 1)
- iOS Underlying Principles: isa and class (part 2)
- iOS Underlying Principles: isa and class (part 3)
- iOS Underlying Principles: the nature of Runtime & methods
- iOS Underlying Principles: objc_msgSend
- iOS Underlying Principles: the Runtime slow lookup process
- iOS Underlying Principles: dynamic method resolution
- iOS Underlying Principles: the message forwarding process
- iOS Underlying Principles: application loading (part 1)
- iOS Underlying Principles: application loading (part 2)
- iOS Underlying Principles: class loading
- iOS Underlying Principles: category loading
- iOS Underlying Principles: associated objects
- iOS Underlying Principles: KVC
- iOS Underlying Principles: KVO
- iOS Underlying Principles: rewriting KVO
- iOS Underlying Principles: multithreading
- iOS Underlying Principles: GCD functions and queues
- iOS Underlying Principles: GCD principles (part 1)
- iOS Underlying Principles: what do you know about deadlocks?
- iOS Underlying Principles: destroying a singleton
- iOS Underlying Principles: Dispatch Source
- iOS Underlying Principles: the barrier function

Summary column for the above:

- iOS Underlying Principles: a summary of this phase of exploration

Details:

- A summary of iOS development details
Preface
In the previous chapter, GCD Principles (Part 1), we analyzed GCD functions and queues, the underlying GCD data structures, and the underlying calls made by synchronous and asynchronous functions. The follow-up articles, What do you know about deadlocks?, Destroying a singleton, and Dispatch Source, covered the deadlock problems that can arise when using GCD, the underlying implementation logic of singletons, and applications of Dispatch Source, respectively. Today we will look at the underlying implementation of the GCD barrier function, semaphores, and dispatch groups. This is also the closing chapter of the GCD series. All right, let's get started.
Semaphores
Page 26 of Self-Cultivation for Programmers gives a description of binary and counting semaphores along these lines:

A Binary Semaphore is the simplest kind of lock. It has only two states, occupied and unoccupied, which makes it suitable for resources that can only be accessed exclusively by a single thread. When a binary semaphore is unoccupied, the first thread that attempts to acquire it obtains the lock and puts it into the occupied state; after that, all other threads attempting to acquire it wait until the lock is released. For resources that allow concurrent access by multiple threads, a multivalued semaphore, or simply Semaphore, is a good choice. A semaphore with an initial value of N allows concurrent access by N threads. When a thread accesses the resource, it first acquires the semaphore, performing the following operations:

- Decrement the semaphore's value by 1.
- If the semaphore's value is now less than zero, enter the wait state; otherwise continue executing.

After accessing the resource, the thread releases the semaphore, performing the following operations:

- Increment the semaphore's value by 1.
- If the semaphore's value is now less than 1, wake up one waiting thread.

The semaphores we are looking at here are not limited to the values one and zero.
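The four steps above can be sketched as a single-threaded toy model in C. All names here are hypothetical, and a real semaphore actually blocks and wakes threads rather than returning flags; this only shows how the counter's sign drives the wait/wake decisions:

```c
// Toy single-threaded model of the acquire/release steps described above.
typedef struct {
    long value; // current semaphore value; negative means threads are waiting
} toy_semaphore;

// Acquire: subtract 1. A negative result means the caller would have to wait.
// Returns 1 if the caller may proceed, 0 if it would block.
int toy_acquire(toy_semaphore *s) {
    s->value -= 1;
    return s->value >= 0;
}

// Release: add 1. A result less than 1 means at least one waiter exists.
// Returns 1 if a waiting thread would be woken, 0 otherwise.
int toy_release(toy_semaphore *s) {
    s->value += 1;
    return s->value < 1;
}
```

With an initial value of 2, the first two acquires succeed, the third would block, and the next release would wake that waiter.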
Using semaphores
- `dispatch_semaphore_create`: creates a semaphore; the parameter can be used to control the maximum concurrency of a queue
- `dispatch_semaphore_wait`: waits on the semaphore; the task does not proceed until a signal is released
- `dispatch_semaphore_signal`: releases (signals) the semaphore
Example 1: controlling the maximum number of concurrent tasks
// Custom concurrent queue
dispatch_queue_t queue = dispatch_queue_create("superman", DISPATCH_QUEUE_CONCURRENT);
// Create a semaphore with an initial value of 5
dispatch_semaphore_t sem = dispatch_semaphore_create(5);
// Add tasks
for (int i = 0; i < 10; i++) {
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        NSLog(@"Executing task -- %d", i);
        sleep(2);
        NSLog(@"Task %d completed", i);
        dispatch_semaphore_signal(sem);
    });
}
Example 2: even though task 1 is added first and task 2 takes more time, task 2 executes first, because task 1 waits on the semaphore until task 2 signals it
// Custom concurrent queue
dispatch_queue_t queue = dispatch_queue_create("superman", DISPATCH_QUEUE_CONCURRENT);
// Create a semaphore with an initial value of 0
dispatch_semaphore_t sem = dispatch_semaphore_create(0);
// Task 1
dispatch_async(queue, ^{
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // wait
    NSLog(@"Executing task -- 1");
    sleep(2);
    NSLog(@"Task 1 completed");
});
// Task 2
dispatch_async(queue, ^{
    NSLog(@"Executing task -- 2");
    sleep(10);
    NSLog(@"Task 2 completed");
    dispatch_semaphore_signal(sem); // signal
});
So how does the semaphore implement maximum-concurrency control at the bottom layer? And how can dispatch_semaphore_signal and dispatch_semaphore_wait be used together to synchronize tasks and control the flow of execution? Let's go into the underlying source code and see what dispatch_semaphore_signal does to send a signal and what dispatch_semaphore_wait does while waiting for one.
dispatch_semaphore_create
Let's start with semaphore creation:

/*!
 * @function dispatch_semaphore_create
 *
 * @abstract
 * Creates a new counting semaphore with an initial value.
 *
 * @discussion
 * Passing zero for the value is useful when two threads need to reconcile
 * the completion of a particular event. Passing a value greater than zero is
 * useful for managing a finite pool of resources, where the pool size is
 * equal to the value.
 *
 * @param value
 * The starting value for the semaphore. Passing a value less than zero will
 * result in NULL being returned.
 *
 * @result
 * The newly created semaphore, or NULL on failure.
 */
API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_RETURNS_RETAINED DISPATCH_WARN_RESULT
DISPATCH_NOTHROW
dispatch_semaphore_t
dispatch_semaphore_create(intptr_t value);
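As a rough sketch of what this documented contract implies (not the actual libdispatch implementation), creation can be modelled in plain C: a negative starting value yields NULL, otherwise storage is allocated and the counter is initialised. The struct and function names here are hypothetical, with the field named after the dsema_value we will see below:

```c
#include <stdlib.h>

// Hypothetical simplified model of dispatch_semaphore_create.
typedef struct {
    long dsema_value; // the semaphore's current value
} toy_dsema;

toy_dsema *toy_create(long value) {
    if (value < 0) {
        return NULL; // invalid starting value, mirroring the documented behaviour
    }
    toy_dsema *s = malloc(sizeof *s);
    if (s) {
        s->dsema_value = value;
    }
    return s;
}
```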
dispatch_semaphore_signal
/*!
 * @function dispatch_semaphore_signal
 *
 * @abstract
 * Signals (increments) a semaphore.
 *
 * @discussion
 * Increments the counting semaphore. If the previous value was less than
 * zero, this function wakes a waiting thread before returning.
 *
 * @param dsema
 * The counting semaphore. The result of passing NULL in this parameter is
 * undefined.
 *
 * @result
 * Returns non-zero if a thread is woken. Otherwise, zero is returned.
 */
intptr_t
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
	// The +1 operation
	long value = os_atomic_inc2o(dsema, dsema_value, release);
	if (likely(value > 0)) {
		return 0;
	}
	// Crash when signal and wait calls are unbalanced (too many signals)
	if (unlikely(value == LONG_MIN)) {
		DISPATCH_CLIENT_CRASH(value,
				"Unbalanced call to dispatch_semaphore_signal()");
	}
	// The slow path, built on the pthread layer, wakes a waiting thread
	return _dispatch_semaphore_signal_slow(dsema);
}
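The fast path above can be modelled with C11 atomics. This is a sketch with hypothetical names (not the real os_atomic_inc2o macro, which expands to an atomic increment on the named field), and the slow path is reduced to returning 1 to stand in for waking a waiter:

```c
#include <stdatomic.h>

// Toy model of the dispatch_semaphore_signal fast path.
typedef struct {
    atomic_long dsema_value;
} sig_sema;

long toy_signal(sig_sema *s) {
    // Atomically +1; fetch_add returns the old value, so add 1 for the new one.
    long value = atomic_fetch_add(&s->dsema_value, 1) + 1;
    if (value > 0) {
        return 0; // fast path: nobody was waiting
    }
    return 1; // slow path would wake one waiting thread here
}
```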
dispatch_semaphore_wait
/*!
 * @function dispatch_semaphore_wait
 *
 * @abstract
 * Waits for (decrements) a semaphore.
 *
 * @discussion
 * Decrements the counting semaphore. If the resulting value is less than
 * zero, this function waits for a signal to occur before returning.
 *
 * @param dsema
 * The semaphore. The result of passing NULL in this parameter is undefined.
 *
 * @param timeout
 * When to time out (see dispatch_time). As a convenience, there are the
 * DISPATCH_TIME_NOW and DISPATCH_TIME_FOREVER constants.
 *
 * @result
 * Returns zero on success, or non-zero if the timeout occurred.
 */
intptr_t
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
	// The -1 operation
	long value = os_atomic_dec2o(dsema, dsema_value, acquire);
	if (likely(value >= 0)) {
		return 0;
	}
	return _dispatch_semaphore_wait_slow(dsema, timeout);
}
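The wait fast path mirrors the signal fast path. Sketched with C11 atomics under the same hypothetical naming, a non-negative value after the decrement means the resource was available, while a negative value (whose magnitude is the number of waiters) sends the thread to the slow path:

```c
#include <stdatomic.h>

// Toy model of the dispatch_semaphore_wait fast path.
typedef struct {
    atomic_long dsema_value;
} wait_sema;

long toy_wait(wait_sema *s) {
    // Atomically -1; fetch_sub returns the old value, so subtract 1 for the new one.
    long value = atomic_fetch_sub(&s->dsema_value, 1) - 1;
    if (value >= 0) {
        return 0; // fast path: the semaphore was available, no blocking
    }
    return -1; // slow path would block in _dispatch_semaphore_wait_slow
}
```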
_dispatch_semaphore_wait_slow
static intptr_t
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
		dispatch_time_t timeout)
{
	long orig;

	_dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
	// Branch on the timeout
	switch (timeout) {
	default:
		if (!_dispatch_sema4_timedwait(&dsema->dsema_sema, timeout)) {
			break;
		}
		// Fall through and try to undo what the fast path did to
		// dsema->dsema_value
	case DISPATCH_TIME_NOW:
		orig = dsema->dsema_value;
		while (orig < 0) {
			if (os_atomic_cmpxchgv2o(dsema, dsema_value, orig, orig + 1,
					&orig, relaxed)) {
				// Report the timeout
				return _DSEMA4_TIMEOUT();
			}
		}
		// Another thread called semaphore_signal().
		// Fall through and drain the wakeup.
	case DISPATCH_TIME_FOREVER:
		// Wait until signalled
		_dispatch_sema4_wait(&dsema->dsema_sema);
		break;
	}
	return 0;
}
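The interesting part of the DISPATCH_TIME_NOW branch is undoing the fast path's decrement with a compare-and-swap loop: the value is only restored if it is still negative, because a concurrent signal may change it underneath us. A sketch of that loop with C11 atomics (hypothetical names; os_atomic_cmpxchgv2o is the real macro):

```c
#include <stdatomic.h>
#include <stdbool.h>

// Toy model of the timeout branch's "undo the -1" CAS loop.
typedef struct {
    atomic_long dsema_value;
} slow_sema;

// Returns true if a negative value was rolled back (i.e. we timed out),
// false if the value stopped being negative because a signal arrived.
bool toy_undo_decrement(slow_sema *s) {
    long orig = atomic_load(&s->dsema_value);
    while (orig < 0) {
        // Try to replace orig with orig + 1; on failure, orig is reloaded
        // with the current value and the loop condition is re-checked.
        if (atomic_compare_exchange_weak(&s->dsema_value, &orig, orig + 1)) {
            return true; // corresponds to returning _DSEMA4_TIMEOUT()
        }
    }
    return false; // another thread signalled; fall through and drain the wakeup
}
```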
_dispatch_sema4_wait
void
_dispatch_sema4_wait(_dispatch_sema4_t *sema)
{
	int ret = 0;
	do {
		ret = sem_wait(sema);
	} while (ret == -1 && errno == EINTR);
	DISPATCH_SEMAPHORE_VERIFY_RET(ret);
}
Here is a do {} while () loop: sem_wait is called repeatedly for as long as it returns -1 with errno set to EINTR, meaning it was interrupted by a signal rather than genuinely woken. The thread therefore stays parked here until the semaphore is actually signalled, and only then does it move on to the next task. This explains why, while a thread is waiting, the task will not execute until the semaphore is released.
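This EINTR-retry idiom is a general POSIX pattern, not specific to libdispatch. A minimal sketch, using a hypothetical mock in place of sem_wait so the behaviour can be seen without a second thread (the mock fails once with EINTR, then succeeds):

```c
#include <errno.h>

// Hypothetical stand-in for sem_wait: fails once with EINTR, then succeeds.
static int interruptions = 1;

static int mock_sem_wait(void) {
    if (interruptions > 0) {
        interruptions--;
        errno = EINTR;
        return -1; // interrupted by a signal: the semaphore was NOT acquired
    }
    return 0;      // semaphore acquired
}

// The same retry loop as _dispatch_sema4_wait, counting the attempts made.
int wait_retrying(void) {
    int ret, attempts = 0;
    do {
        ret = mock_sem_wait();
        attempts++;
    } while (ret == -1 && errno == EINTR);
    return attempts;
}
```

Without the loop, a stray signal delivered to the process would make the wait return early even though the semaphore was never obtained.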