Welcome to the iOS Exploration series.
- iOS explores the alloc process
- iOS explores memory alignment & the malloc source code
- iOS explores isa initialization & pointing analysis
- iOS explores class structure analysis
- iOS explores cache_t analysis
- iOS explores the nature of methods and the method lookup process
- iOS explores dynamic method resolution and message forwarding mechanisms
- iOS explores the dyld loading process briefly
- iOS explores the loading process of classes
- iOS explores the loading process of categories and class extensions
- iOS explores isa interview question analysis
- iOS explores Runtime interview question analysis
- iOS explores KVC principles and customization
- iOS explores KVO principles and customization
- iOS explores the principles of multithreading
- iOS explores multithreaded GCD applications
- iOS explores multithreaded GCD underlying analysis
- iOS explores NSOperation for multithreading
- iOS explores multithreaded interview question analysis
- iOS explores the locks in iOS
- iOS explores blocks in full
Preface
The sheer length of the source code, its many logical branches, and its macro definitions make it hard to understand and easy to give up on. But if you read it with specific questions and a clear goal, you can lower the difficulty and ignore irrelevant code. So first, here are the questions this article analyzes:
- How is a queue created at the bottom layer
- How deadlocks arise
- How dispatch_block tasks are executed
- Synchronous functions
- Asynchronous functions
- How semaphores work
- How dispatch groups work
- How singletons (dispatch_once) work
This article is fairly long and jumps between many functions, but it only studies the core code paths. I believe it will be rewarding.
Choosing the right source code
To analyze the source we first need to get the GCD source. We have already analyzed the objc, malloc, and dyld sources before, so which source does the GCD code live in?
Here's a tip. Given that you already know you want to study GCD, there are several ways to locate the right source:
- Baidu/Google it
- Set a symbolic breakpoint on dispatch_queue_create
- Turn on Debug -> Debug Workflow -> Always Show Disassembly and read the assembly
Either way, the calls land in libdispatch, so that is the source code we need.
I. How is a queue created at the bottom layer
The upper layer creates queues with dispatch_queue_create, so search for it globally. That produces a large number of results (66 results in 17 files), and this is where a developer's source-reading experience shows:
- A novice will plow through every result, preferring to check a thousand false leads rather than miss one.
- An experienced developer refines the search term based on how the API is called from the upper layer.
- Since the creation code looks like dispatch_queue_create("", NULL), search for dispatch_queue_create( instead, narrowing the results to (21 results in 6 files).
- Since the first argument is a string, which in C is const-qualified, search for dispatch_queue_create(const, narrowing the results to (2 results in 2 files).
1.dispatch_queue_create
A regular middle-layer wrapper: it makes it easy to iterate on the underlying code without changing upper-layer usage.
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
return _dispatch_lane_create_with_target(label, attr,
DISPATCH_TARGET_QUEUE_DEFAULT, true);
}
When reading a source function, pay attention to the parameters being passed in:
- label is the reverse-DNS name passed from the upper layer, mainly used for crash debugging
- attr is NULL/DISPATCH_QUEUE_SERIAL or DISPATCH_QUEUE_CONCURRENT, and is used to distinguish serial queues from concurrent queues (a short usage sketch follows)
#define DISPATCH_QUEUE_SERIAL NULL // the serial-queue macro is defined as NULL
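For reference, here is a minimal sketch (not from the source; the labels are made up) of the upper-layer calls these parameters come from:
#import <Foundation/Foundation.h>

void create_queues_demo(void) {
    // label: a reverse-DNS string, mainly useful when reading crash logs
    dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);         // attr == NULL
    dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"%@ - %@", serial, concurrent);
}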
2._dispatch_lane_create_with_target
DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
dispatch_queue_t tq, bool legacy)
{
dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
...
}
dqa is the second parameter of this function, i.e. the attr passed to dispatch_queue_create.
It carries the serial/concurrent distinction, so let's follow it.
3._dispatch_queue_attr_to_info
dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
dispatch_queue_attr_info_t dqai = { };
if (!dqa) return dqai;
#if DISPATCH_VARIANT_STATIC
if (dqa == &_dispatch_queue_attr_concurrent) {
dqai.dqai_concurrent = true;
return dqai;
}
#endif
if (dqa < _dispatch_queue_attrs ||
dqa >= &_dispatch_queue_attrs[DISPATCH_QUEUE_ATTR_COUNT]) {
DISPATCH_CLIENT_CRASH(dqa->do_vtable, "Invalid queue attribute");
}
size_t idx = (size_t)(dqa - _dispatch_queue_attrs);
dqai.dqai_inactive = (idx % DISPATCH_QUEUE_ATTR_INACTIVE_COUNT);
idx /= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;
dqai.dqai_concurrent = !(idx % DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT);
idx /= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;
dqai.dqai_relpri = -(idx % DISPATCH_QUEUE_ATTR_PRIO_COUNT);
idx /= DISPATCH_QUEUE_ATTR_PRIO_COUNT;
dqai.dqai_qos = idx % DISPATCH_QUEUE_ATTR_QOS_COUNT;
idx /= DISPATCH_QUEUE_ATTR_QOS_COUNT;
dqai.dqai_autorelease_frequency = idx % DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
idx /= DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
dqai.dqai_overcommit = idx % DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
idx /= DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
return dqai;
}
dispatch_queue_attr_info_t dqai = { }; initializes a dispatch_queue_attr_info_t, which, like isa, is a bit-field structure:
typedef struct dispatch_queue_attr_info_s {
dispatch_qos_t dqai_qos : 8;
int dqai_relpri : 8;
uint16_t dqai_overcommit:2;
uint16_t dqai_autorelease_frequency:2;
uint16_t dqai_concurrent:1;
uint16_t dqai_inactive:1;
} dispatch_queue_attr_info_t;
- Now look at if (!dqa) return dqai;
- For a serial queue, dqa is NULL, so the empty dqai is returned right away; a concurrent queue continues down.
- size_t idx = (size_t)(dqa - _dispatch_queue_attrs); then uses DISPATCH_QUEUE_CONCURRENT in the bit operations that follow, so a concurrent queue ends up with dqai.dqai_concurrent set, unlike a serial queue.
- DISPATCH_QUEUE_CONCURRENT is defined as:
#define DISPATCH_QUEUE_CONCURRENT \
DISPATCH_GLOBAL_OBJECT(dispatch_queue_attr_t, \
_dispatch_queue_attr_concurrent)
4. Return to _dispatch_lane_create_with_target
We are studying queue creation, so we can ignore the surrounding details and focus on the alloc-style code.
DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
    dispatch_qos_t qos = dqai.dqai_qos;
    ...
    // this is part of the logical branching
    if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
        overcommit = _dispatch_queue_attr_overcommit_enabled;
    } else {
        overcommit = _dispatch_queue_attr_overcommit_disabled;
    }
    if (!tq) {
        tq = _dispatch_get_root_queue(
                qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos, // 4
                overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq; // 0 or 1
        if (unlikely(!tq)) {
            DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
        }
    }
    ...
    // allocate memory -- create the corresponding queue object
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    // constructor
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
    // label
    dq->dq_label = label;
    // priority
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
        dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    }
    if (!dqai.dqai_inactive) {
        _dispatch_queue_priority_inherit_from_target(dq, tq);
        _dispatch_lane_inherit_wlh_from_target(dq, tq);
    }
    _dispatch_retain(tq);
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);
    return _dispatch_trace_queue_create(dq)._dq;
}
4.1 _dispatch_get_root_queue
tq is the third parameter of _dispatch_lane_create_with_target, passed in as DISPATCH_TARGET_QUEUE_DEFAULT, so if (!tq) is guaranteed to be true.
#define DISPATCH_TARGET_QUEUE_DEFAULT NULL
The two arguments to _dispatch_get_root_queue are 4 and 0/1 respectively:
- qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos: since qos was never assigned it is still 0 (unspecified), so DISPATCH_QOS_DEFAULT, i.e. 4, is used
- overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq: in the logical branch elided from the code above, a serial queue ends up with _dispatch_queue_attr_overcommit_enabled and a concurrent queue with _dispatch_queue_attr_overcommit_disabled (the source notes that serial queues default to overcommit), so this argument evaluates to 1 or 0
#define DISPATCH_QOS_UNSPECIFIED ((dispatch_qos_t)0)
#define DISPATCH_QOS_DEFAULT ((dispatch_qos_t)4)
DISPATCH_ALWAYS_INLINE DISPATCH_CONST
static inline dispatch_queue_global_t
_dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit)
{
if (unlikely(qos < DISPATCH_QOS_MIN || qos > DISPATCH_QOS_MAX)) {
DISPATCH_CLIENT_CRASH(qos, "Corrupted priority");
}
return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
}
- With the default QoS, serial queues and concurrent queues therefore return &_dispatch_root_queues[7] and &_dispatch_root_queues[6] respectively
A global search for _dispatch_root_queues[] shows that the 7th and 8th entries of the array are the default-QoS root queue and its overcommit variant, which become the target queues of custom concurrent and serial queues.
Guess: custom queues are created from the _dispatch_root_queues templates.
4.2 _dispatch_object_alloc allocates memory
Most GCD objects are created as dispatch_object_t. Apple's source comments describe it as the abstract base type for all dispatch objects. dispatch_object_t is a union, so once created it can be used as whichever member type is needed (a union only ever holds one member at a time).
The union approach is a form of polymorphism, slightly different in spirit from the inheritance of objc_object.
/*!
 * @typedef dispatch_object_t
 *
 * @abstract
 * Abstract base type for all dispatch objects.
 * The details of the type definition are language-specific.
 *
 * @discussion
 * Dispatch objects are reference counted via calls to dispatch_retain() and
 * dispatch_release().
 */
typedef union {
struct _os_object_s* _os_obj;
struct dispatch_object_s* _do;
struct dispatch_queue_s* _dq;
struct dispatch_queue_attr_s* _dqa;
struct dispatch_group_s* _dg;
struct dispatch_source_s* _ds;
struct dispatch_mach_s* _dm;
struct dispatch_mach_msg_s* _dmsg;
struct dispatch_semaphore_s* _dsema;
struct dispatch_data_s* _ddata;
struct dispatch_io_s* _dchannel;
} dispatch_object_t DISPATCH_TRANSPARENT_UNION;
4.3 _dispatch_queue_init constructs the queue
- As mentioned earlier, dqai.dqai_concurrent is only set for concurrent queues, so it is used to distinguish concurrent queues from serial queues
- A serial queue's width is 1; a concurrent queue's width is DISPATCH_QUEUE_WIDTH_MAX
- After a series of operations it returns a dispatch_queue_class_t, i.e. the passed-in dqu after its fields have been assigned and modified
4.4 return dispatch_queue_t
_dispatch_retain(tq);
dq->do_targetq = tq;
_dispatch_object_debug(dq, "%s", __func__);
return _dispatch_trace_queue_create(dq)._dq;
- dq is of type dispatch_lane_t; tq is of type dispatch_queue_t
- _dispatch_trace_queue_create(dq) returns a dispatch_queue_class_t, which is a union
- Taking ._dq from that dispatch_queue_class_t gives the dispatch_queue_t that is finally returned
typedef struct dispatch_queue_s *dispatch_queue_t;
typedef union {
struct dispatch_queue_s* _dq;
struct dispatch_workloop_s* _dwl;
struct dispatch_lane_s* _dl;
struct dispatch_queue_static_s* _dsq;
struct dispatch_queue_global_s* _dgq;
struct dispatch_queue_pthread_root_s* _dpq;
struct dispatch_source_s* _ds;
struct dispatch_mach_s* _dm;
dispatch_lane_class_t _dlu;
#ifdef __OBJC__
id<OS_dispatch_queue> _objc_dq;
#endif
} dispatch_queue_class_t DISPATCH_TRANSPARENT_UNION;
5. Verifying the guess
NSLog calls the object's description method, while LLDB prints the underlying pointer.
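A minimal test sketch (the queue labels are made up) that can reproduce the observations below: NSLog the queues, then break on the NSLog line and inspect the pointers in LLDB.
#import <Foundation/Foundation.h>

void inspect_queues_demo(void) {
    dispatch_queue_t serial     = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t main       = dispatch_get_main_queue();
    dispatch_queue_t global     = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    // NSLog prints each queue's description; break here and use p/po in LLDB
    NSLog(@"%@ - %@ - %@ - %@", serial, concurrent, main, global);
}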
- You can see that the target of the serial queue and the concurrent queue is exactly the corresponding entry in the template
- Likewise, serial queues, concurrent queues, the main queue, and global queues have different width values (width represents the maximum number of tasks the queue can schedule at once)
- The serial queue and the main queue have a width of 1, as expected
- The concurrent queue's width is DISPATCH_QUEUE_WIDTH_MAX, the full value minus 2
- The global queue's width is DISPATCH_QUEUE_WIDTH_POOL, the full value minus 1
#define DISPATCH_QUEUE_WIDTH_FULL_BIT 0x0020000000000000ull
#define DISPATCH_QUEUE_WIDTH_FULL 0x1000ull
#define DISPATCH_QUEUE_WIDTH_POOL (DISPATCH_QUEUE_WIDTH_FULL - 1)
#define DISPATCH_QUEUE_WIDTH_MAX (DISPATCH_QUEUE_WIDTH_FULL - 2)
struct dispatch_queue_static_s _dispatch_main_q = {
    DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
    .do_targetq = _dispatch_get_default_queue(true),
#endif
    .dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
            DISPATCH_QUEUE_ROLE_BASE_ANON,
    .dq_label = "com.apple.main-thread",
    .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
    .dq_serialnum = 1,
};

struct dispatch_queue_global_s _dispatch_mgr_root_queue = {
    DISPATCH_GLOBAL_OBJECT_HEADER(queue_global),
    .dq_state = DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE,
    .do_ctxt = &_dispatch_mgr_root_queue_pthread_context,
    .dq_label = "com.apple.root.libdispatch-manager",
    .dq_atomic_flags = DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL),
    .dq_priority = DISPATCH_PRIORITY_FLAG_MANAGER |
            DISPATCH_PRIORITY_SATURATED_OVERRIDE,
    .dq_serialnum = 3,
    .dgq_thread_pool_size = 1,
};
These queues, like _dispatch_root_queues, are defined as global static structures.
6. _dispatch_root_queues creation
Except for dispatch_get_main_queue, all queues are created by way of _dispatch_root_queues.
During libdispatch_init, _dispatch_introspection_init is called; it walks _dispatch_root_queues in a for loop and calls _dispatch_trace_queue_create on each entry, so the pointers in _dispatch_root_queues are set up at that point.
7. The custom queue creation process in one picture
II. How deadlocks arise
Deadlocks are caused by tasks waiting on each other, so how does a deadlock come about at the bottom layer?
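Before digging into the source, here is the classic repro (a sketch, not source code): dispatch_sync onto the serial queue whose block is currently executing.
#import <Foundation/Foundation.h>

void deadlock_demo(void) {
    dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_async(serial, ^{
        NSLog(@"task 1");
        // task 2 cannot start until task 1's block finishes, but that block is
        // blocked right here waiting for task 2, so the app crashes
        dispatch_sync(serial, ^{
            NSLog(@"task 2");
        });
    });
}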
1.dispatch_sync
Search the source globally for dispatch_sync(, and ignore the low-probability (unlikely) branches.
DISPATCH_NOINLINE
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
uintptr_t dc_flags = DC_FLAG_BLOCK;
if (unlikely(_dispatch_block_has_private_data(work))) {
return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
}
_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
#endif // __BLOCKS__
2._dispatch_sync_f
This is again a normal middle-layer wrapper.
DISPATCH_NOINLINE
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
uintptr_t dc_flags)
{
_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}
3._dispatch_sync_f_inline
- We already know that a serial queue's width is 1, so a serial queue satisfies dq->dq_width == 1 and takes return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
- A concurrent queue continues further down
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
if (likely(dq->dq_width == 1)) {
return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
}
if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
}
dispatch_lane_t dl = upcast(dq)._dl;
// Global concurrent queues and queues bound to non-dispatch threads
// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
}
if (unlikely(dq->do_targetq->do_targetq)) {
return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
}
_dispatch_introspection_sync_begin(dl);
_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}
4._dispatch_barrier_sync_f
The serial queue path and the barrier function are similar, so both jump here; it is again just middle-layer encapsulation.
DISPATCH_NOINLINE
static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
_dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}
5._dispatch_barrier_sync_f_inline
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
dispatch_tid tid = _dispatch_tid_self();
if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
}
dispatch_lane_t dl = upcast(dq)._dl;
// The more correct thing to do would be to merge the qos of the thread
// that just acquired the barrier lock into the queue state.
//
// However this is too expensive for the fast path, so skip doing it.
// The chosen tradeoff is that if an enqueue on a lower priority thread
// contends with this fast path, this thread may receive a useless override.
//
// Global concurrent queues and queues bound to non-dispatch threads
// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
// deadlock check
if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
DC_FLAG_BARRIER | dc_flags);
}
if (unlikely(dl->do_targetq->do_targetq)) {
return _dispatch_sync_recurse(dl, ctxt, func,
DC_FLAG_BARRIER | dc_flags);
}
_dispatch_introspection_sync_begin(dl);
_dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}
5.1 _dispatch_tid_self
_dispatch_tid_self is a macro that ultimately calls _dispatch_thread_getspecific to fetch the current thread id (threads are stored in key-value form).
#define _dispatch_tid_self() ((dispatch_tid)_dispatch_thread_port())
#if TARGET_OS_WIN32
#define _dispatch_thread_port() ((mach_port_t)0)
#elif !DISPATCH_USE_THREAD_LOCAL_STORAGE
#if DISPATCH_USE_DIRECT_TSD
#define _dispatch_thread_port() ((mach_port_t)(uintptr_t)\
_dispatch_thread_getspecific(_PTHREAD_TSD_SLOT_MACH_THREAD_SELF))
#else
#define _dispatch_thread_port() pthread_mach_thread_np(_dispatch_thread_self())
#endif
#endif
It’s time to show some real technique! Now comes the core analysis of deadlocks!
5.2 _dispatch_queue_try_acquire_barrier_sync
_dispatch_queue_try_acquire_barrier_sync fetches a state value from the lower layer of the OS.
DISPATCH_ALWAYS_INLINE DISPATCH_WARN_RESULT
static inline bool
_dispatch_queue_try_acquire_barrier_sync(dispatch_queue_class_t dq, uint32_t tid)
{
return _dispatch_queue_try_acquire_barrier_sync_and_suspend(dq._dl, tid, 0);
}
DISPATCH_ALWAYS_INLINE DISPATCH_WARN_RESULT
static inline bool
_dispatch_queue_try_acquire_barrier_sync_and_suspend(dispatch_lane_t dq,
uint32_t tid, uint64_t suspend_count)
{
uint64_t init = DISPATCH_QUEUE_STATE_INIT_VALUE(dq->dq_width);
uint64_t value = DISPATCH_QUEUE_WIDTH_FULL_BIT | DISPATCH_QUEUE_IN_BARRIER |
_dispatch_lock_value_from_tid(tid) |
(suspend_count * DISPATCH_QUEUE_SUSPEND_INTERVAL);
uint64_t old_state, new_state;
// Get information from the bottom layer -- state information -- current queue -- thread
return os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, acquire, {
uint64_t role = old_state & DISPATCH_QUEUE_ROLE_MASK;
if (old_state != (init | role)) {
os_atomic_rmw_loop_give_up(break);
}
new_state = value | role;
});
}
5.3 _dispatch_sync_f_slow
We end up here when the barrier lock in 5.2 could not be acquired (this is the function that shows up on the call stack when a deadlock crashes).
DISPATCH_NOINLINE
static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
dispatch_function_t func, uintptr_t top_dc_flags,
dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
dispatch_queue_t top_dq = top_dqu._dq;
dispatch_queue_t dq = dqu._dq;
if (unlikely(!dq->do_targetq)) {
return _dispatch_sync_function_invoke(dq, ctxt, func);
}
pthread_priority_t pp = _dispatch_get_priority();
struct dispatch_sync_context_s dsc = {
.dc_flags = DC_FLAG_SYNC_WAITER | dc_flags,
.dc_func = _dispatch_async_and_wait_invoke,
.dc_ctxt = &dsc,
.dc_other = top_dq,
.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
.dc_voucher = _voucher_get(),
.dsc_func = func,
.dsc_ctxt = ctxt,
.dsc_waiter = _dispatch_tid_self(),
};
_dispatch_trace_item_push(top_dq, &dsc);
__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);
if (dsc.dsc_func == NULL) {
dispatch_queue_t stop_dq = dsc.dc_other;
return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
}
_dispatch_introspection_sync_begin(top_dq);
_dispatch_trace_item_pop(top_dq, &dsc);
_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func,top_dc_flags
DISPATCH_TRACE_ARG(&dsc));
}
- _dispatch_trace_item_push pushes the task onto the queue, where it will be executed in FIFO order
- __DISPATCH_WAIT_FOR_QUEUE__ is the last function on the crash stack
DISPATCH_NOINLINE
static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
// Get the status of the queue to see if it is in wait state
uint64_t dq_state = _dispatch_wait_prepare(dq);
if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
"dispatch_sync called on queue "
"already owned by current thread"); }... }Copy the code
5.4 _dq_state_drain_locked_by
It compares the value derived from the current wait state with the thread tid; if they match, YES is returned and the crash above is triggered.
DISPATCH_ALWAYS_INLINE
static inline bool
_dq_state_drain_locked_by(uint64_t dq_state, dispatch_tid tid)
{
return _dispatch_lock_is_locked_by((dispatch_lock)dq_state, tid);
}
DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
// equivalent to _dispatch_lock_owner(lock_value) == tid
// ^ (xor) if two are the same, 0 will appear otherwise 1
return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}
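To make the xor check concrete, here is a toy sketch with made-up values and a made-up mask name (this is not libdispatch code): the masked bits cancel out only when the waiter is the very thread that already holds the lock.
#import <Foundation/Foundation.h>
#include <stdint.h>

#define DEMO_OWNER_MASK 0xfffffffcu   // assumption for the demo: low bits are flags, the rest encode the owner tid

static BOOL demo_is_locked_by(uint32_t lock_value, uint32_t tid) {
    // identical bits xor to 0, so the masked result is 0 only for the owning thread
    return ((lock_value ^ tid) & DEMO_OWNER_MASK) == 0;
}

void drain_locked_by_demo(void) {
    uint32_t owner_tid = 0x2303;              // the thread that acquired the barrier lock
    uint32_t dq_state  = owner_tid | 0x1;     // pretend the low bit is a state flag
    NSLog(@"%d", demo_is_locked_by(dq_state, owner_tid)); // 1 -> same thread, the deadlock path
    NSLog(@"%d", demo_is_locked_by(dq_state, 0x2407));    // 0 -> a different thread
}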
6. Understand the deadlock process in one picture
- A deadlock is detected by comparing the thread tid with the value derived from the current wait state
- Synchronous execution with dispatch_sync pushes tasks onto the queue and executes them in FIFO order
- The barrier function follows roughly the same path as synchronous execution
III. Execution of dispatch_block tasks
Set a breakpoint inside the dispatch_block and print the function call stack with LLDB.
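For example (a sketch, queue label made up), break inside the block and run bt in LLDB; the frames above the block are the functions analyzed below.
#import <Foundation/Foundation.h>

void callstack_demo(void) {
    dispatch_queue_t queue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_sync(queue, ^{
        NSLog(@"set a breakpoint on this line");   // lldb: bt
    });
}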
1._dispatch_lane_barrier_sync_invoke_and_complete
As before, this also goes through a low-level OS callback. Why use callbacks? Because task execution depends on the state of the thread: if the thread is not in a suitable state, the task will not run.
DISPATCH_NOINLINE
static void
_dispatch_lane_barrier_sync_invoke_and_complete(dispatch_lane_t dq,
        void *ctxt, dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
    ...
    // similar to _dispatch_queue_drain_try_unlock
    os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, release, {
        new_state  = old_state - DISPATCH_QUEUE_SERIAL_DRAIN_OWNED;
        new_state &= ~DISPATCH_QUEUE_DRAIN_UNLOCK_MASK;
        new_state &= ~DISPATCH_QUEUE_MAX_QOS_MASK;
        if (unlikely(old_state & fail_unlock_mask)) {
            os_atomic_rmw_loop_give_up({
                return _dispatch_lane_barrier_complete(dq, 0, 0);
            });
        }
    });
    if (_dq_state_is_base_wlh(old_state)) {
        _dispatch_event_loop_assert_not_owned((dispatch_wlh_t)dq);
    }
}
- _dispatch_lane_barrier_complete
Follow directly to _dispatch_lane_class_barrier_complete
DISPATCH_NOINLINE
static void
_dispatch_lane_barrier_complete(dispatch_lane_class_t dqu, dispatch_qos_t qos,
dispatch_wakeup_flags_t flags)
{
...
uint64_t owned = DISPATCH_QUEUE_IN_BARRIER +
dq->dq_width * DISPATCH_QUEUE_WIDTH_INTERVAL;
return _dispatch_lane_class_barrier_complete(dq, qos, flags, target, owned);
}
- _dispatch_lane_class_barrier_complete
Follow up _dispatch_queue_push_queue
DISPATCH_NOINLINE
static void
_dispatch_lane_class_barrier_complete(dispatch_lane_t dq, dispatch_qos_t qos,
dispatch_wakeup_flags_t flags, dispatch_queue_wakeup_target_t target,
uint64_t owned)
{
...
if (tq) {
if (likely((old_state ^ new_state) & enqueue)) {
dispatch_assert(_dq_state_is_enqueued(new_state));
dispatch_assert(flags & DISPATCH_WAKEUP_CONSUME_2);
return _dispatch_queue_push_queue(tq, dq, new_state);
}
...
}
}
- _dispatch_queue_push_queue
It calls dx_push, which is a macro definition:
#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_queue_push_queue(dispatch_queue_t tq, dispatch_queue_class_t dq,
uint64_t dq_state)
{
_dispatch_trace_item_push(tq, dq);
return dx_push(tq, dq, _dq_state_max_qos(dq_state));
}
- A global search for dq_push turns up several implementations; pick _dispatch_root_queue_push to continue
- _dispatch_root_queue_push will most likely go into _dispatch_root_queue_push_inline
DISPATCH_NOINLINE
void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
dispatch_qos_t qos)
{
...
#if HAVE_PTHREAD_WORKQUEUE_QOS
if (_dispatch_root_queue_push_needs_override(rq, qos)) {
return _dispatch_root_queue_push_override(rq, dou, qos);
}
#else
(void)qos;
#endif
_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}
- _dispatch_root_queue_push_inline
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
dispatch_object_t _head, dispatch_object_t _tail, int n)
{
struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
return _dispatch_root_queue_poke(dq, n, 0);
}
}
- _dispatch_root_queue_poke
DISPATCH_NOINLINE
void
_dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
{
...
return _dispatch_root_queue_poke_slow(dq, n, floor);
}
- _dispatch_root_queue_poke_slow
DISPATCH_NOINLINE
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
int remaining = n;
int r = ENOSYS;
_dispatch_root_queues_init();
_dispatch_debug_root_queue(dq, __func__);
_dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);
...
}
- _dispatch_root_queues_init
Follow the core method dispatch_once_f
static inline void
_dispatch_root_queues_init(void)
{
dispatch_once_f(&_dispatch_root_queues_pred, NULL,
_dispatch_root_queues_init_once);
}
- dispatch_once_f: when you see the _dispatch_once_callout function, success is not far off
DISPATCH_NOINLINE
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
// If you come once, you won't come again
dispatch_once_gate_t l = (dispatch_once_gate_t)val;
//DLOCK_ONCE_DONE
#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
uintptr_t v = os_atomic_load(&l->dgo_once, acquire);
if (likely(v == DLOCK_ONCE_DONE)) {
return;
}
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
if (likely(DISPATCH_ONCE_IS_GEN(v))) {
return _dispatch_once_mark_done_if_quiesced(l, v);
}
#endif
#endif
// Meet the condition -- try to enter
if (_dispatch_once_gate_tryenter(l)) {
// v -> DLOCK_ONCE_DONE
return _dispatch_once_callout(l, ctxt, func);
}
return _dispatch_once_wait(l);
}
- _dispatch_once_callout
DISPATCH_NOINLINE
static void
_dispatch_once_callout(dispatch_once_gate_t l, void *ctxt,
dispatch_function_t func)
{
_dispatch_client_callout(ctxt, func);
_dispatch_once_gate_broadcast(l);
}
2._dispatch_client_callout
f(ctxt) invokes the dispatch_function_t, which is the point where the dispatch_block is actually executed.
DISPATCH_NOINLINE
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
_dispatch_get_tsd_base();
void *u = _dispatch_get_unwind_tsd();
if (likely(!u)) return f(ctxt);
_dispatch_set_unwind_tsd(NULL);
f(ctxt);
_dispatch_free_unwind_tsd();
_dispatch_set_unwind_tsd(u);
}
3. The task execution flow in one picture
We have finally found the point where the task is executed, but not yet the point where the task is saved, which brings us to synchronous and asynchronous functions.
IV. Synchronous functions
The dispatch_sync implementation has already been traced above, so this is just a recap against the earlier flow (paying particular attention to where work/func is called).
- Serial queues take the dq->dq_width == 1 branch: _dispatch_barrier_sync_f -> _dispatch_barrier_sync_f_inline -> _dispatch_lane_barrier_sync_invoke_and_complete
- From there the flow is the one covered in "III. Execution of dispatch_block tasks"
- Other cases will most likely go to _dispatch_sync_invoke_and_complete
1._dispatch_sync_invoke_and_complete
It saves func and calls _dispatch_sync_function_invoke_inline.
DISPATCH_NOINLINE
static void
_dispatch_sync_invoke_and_complete(dispatch_lane_t dq, void *ctxt,
dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
_dispatch_sync_function_invoke_inline(dq, ctxt, func);
_dispatch_trace_item_complete(dc);
_dispatch_lane_non_barrier_complete(dq, 0);
}
2._dispatch_sync_function_invoke_inline
It calls _dispatch_client_callout directly, which matches the dispatch_block execution point found earlier.
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
dispatch_function_t func)
{
dispatch_thread_frame_s dtf;
_dispatch_thread_frame_push(&dtf, dq);
// f(ctxt) -- func(ctxt)
_dispatch_client_callout(ctxt, func);
_dispatch_perfmon_workitem_inc();
_dispatch_thread_frame_pop(&dtf);
}
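At the call site this means the block runs inline before dispatch_sync returns, typically on the calling thread; a small sketch (names made up):
#import <Foundation/Foundation.h>

void sync_inline_demo(void) {
    dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"before, thread: %@", [NSThread currentThread]);
    dispatch_sync(concurrent, ^{
        // invoked via _dispatch_client_callout, normally on the calling thread
        NSLog(@"inside, thread: %@", [NSThread currentThread]);
    });
    NSLog(@"after");   // only reached once the block has finished
}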
3. Part of the synchronous function execution flow in one picture
V. Asynchronous functions
1. Saving the task
Again, follow the dispatch_async implementation in the source, keeping an eye on the dispatch_block_t parameter.
1.1 dispatch_async
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
dispatch_continuation_t dc = _dispatch_continuation_alloc();
uintptr_t dc_flags = DC_FLAG_CONSUME;
dispatch_qos_t qos;
qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
1.2 _dispatch_continuation_init
_dispatch_Block_invoke wraps the task into a uniform format.
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
dispatch_queue_class_t dqu, dispatch_block_t work,
dispatch_block_flags_t flags, uintptr_t dc_flags)
{
void *ctxt = _dispatch_Block_copy(work);
dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
if (unlikely(_dispatch_block_has_private_data(work))) {
dc->dc_flags = dc_flags;
dc->dc_ctxt = ctxt;
// will initialize all fields but requires dc_flags & dc_ctxt to be set
return _dispatch_continuation_init_slow(dc, dqu, flags);
}
dispatch_function_t func = _dispatch_Block_invoke(work);
if (dc_flags & DC_FLAG_CONSUME) {
func = _dispatch_call_block_and_release;
}
return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}
1.3 _dispatch_continuation_init_f
dc->dc_func = f saves the block task.
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
dispatch_block_flags_t flags, uintptr_t dc_flags)
{
pthread_priority_t pp = 0;
dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
dc->dc_func = f;
dc->dc_ctxt = ctxt;
// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
// should not be propagated, only taken from the handler if it has one
if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
pp = _dispatch_priority_propagate();
}
_dispatch_continuation_voucher_set(dc, flags);
return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}
We have found where the asynchronous function saves its task, but when is the task executed? And when is the thread created?
2. Create a thread
2.1 _dispatch_continuation_async
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
_dispatch_trace_item_push(dqu, dc);
}
#else
(void)dc_flags;
#endif
return dx_push(dqu._dq, dc, qos);
}
2.2 dx_push…
We walked through this chain before; _dispatch_root_queue_push is chosen for the study because it is the most basic path and skips the side branches.
dx_push -> dq_push -> _dispatch_root_queue_push -> _dispatch_root_queue_push_inline -> _dispatch_root_queue_poke -> _dispatch_root_queue_poke_slow
2.3 _dispatch_root_queue_poke_slow
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    ...
    // floor is 0; remaining is the number of tasks to handle in the queue
    int can_request, t_count;
    // get the thread pool size
    t_count = os_atomic_load2o(dq, dgq_thread_pool_size, ordered);
    do {
        // compute how many threads can be requested
        can_request = t_count < floor ? 0 : t_count - floor;
        if (remaining > can_request) {
            _dispatch_root_queue_debug("pthread pool reducing request from %d to %d",
                    remaining, can_request);
            os_atomic_sub2o(dq, dgq_pending, remaining - can_request, relaxed);
            remaining = can_request;
        }
        if (remaining == 0) {
            // the thread pool is full, report and return
            _dispatch_root_queue_debug("pthread pool is full for root queue: "
                    "%p", dq);
            return;
        }
    } while (!os_atomic_cmpxchgvw2o(dq, dgq_thread_pool_size, t_count,
            t_count - remaining, &t_count, acquire));

    pthread_attr_t *attr = &pqc->dpq_thread_attr;
    pthread_t tid, *pthr = &tid;
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (unlikely(dq == &_dispatch_mgr_root_queue)) {
        pthr = _dispatch_mgr_root_queue_init();
    }
#endif
    do {
        _dispatch_retain(dq);
        // start a thread
        while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
            if (r != EAGAIN) {
                (void)dispatch_assume_zero(r);
            }
            _dispatch_temporary_resource_shortage();
        }
    } while (--remaining);
#else
    (void)floor;
#endif // DISPATCH_USE_PTHREAD_POOL
}
- The first do-while does the bookkeeping on the thread pool: how many threads can still be requested, and so on
- The second do-while calls pthread_create to create a thread
3. Task execution
The execution of the task was actually already covered: _dispatch_root_queues_init -> dispatch_once_f -> _dispatch_once_callout -> _dispatch_client_callout. But the task still waits on the state of the thread, and exactly how the thread comes to execute it is not visible from here.
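To tie this back to the call site, a small sketch (queue label made up): the block is saved first and executed later on a worker thread that the pool spins up.
#import <Foundation/Foundation.h>

void async_demo(void) {
    dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"submitted on %@", [NSThread currentThread]);
    dispatch_async(concurrent, ^{
        // wrapped into a dispatch_continuation_t (dc->dc_func) and run later
        NSLog(@"running on %@", [NSThread currentThread]);   // usually a different thread
    });
    NSLog(@"dispatch_async returned immediately");
}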
4. Understand the execution process of asynchronous functions in one picture
VI. How semaphores work
The basic usage of a semaphore looks like this, but what is the underlying principle?
dispatch_semaphore_t sem = dispatch_semaphore_create(0);
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
dispatch_semaphore_signal(sem);
1.dispatch_semaphore_create
It just initializes a dispatch_semaphore_t and saves the value internally (the value must not be negative).
dispatch_semaphore_t
dispatch_semaphore_create(long value)
{
dispatch_semaphore_t dsema;
// If the internal value is negative, then the absolute of the value is
// equal to the number of waiting threads. Therefore it is bogus to
// initialize the semaphore with a negative value.
if (value < 0) {
return DISPATCH_BAD_INPUT;
}
dsema = _dispatch_object_alloc(DISPATCH_VTABLE(semaphore),
sizeof(struct dispatch_semaphore_s));
dsema->do_next = DISPATCH_OBJECT_LISTLESS;
dsema->do_targetq = _dispatch_get_default_queue(false);
dsema->dsema_value = value;
_dispatch_sema4_init(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
dsema->dsema_orig = value;
return dsema;
}
2.dispatch_semaphore_signal
In a KVC-like way it takes the semaphore's current value from the bottom layer, and the function returns a value.
long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
long value = os_atomic_inc2o(dsema, dsema_value, release);
if (likely(value > 0)) {
return 0;
}
if (unlikely(value == LONG_MIN)) {
DISPATCH_CLIENT_CRASH(value,
"Unbalanced call to dispatch_semaphore_signal()");
}
return _dispatch_semaphore_signal_slow(dsema);
}
DISPATCH_NOINLINE
long
_dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema)
{
_dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
_dispatch_sema4_signal(&dsema->dsema_sema, 1);
return 1;
}
os_atomic_inc2o boils down to a ++ operation:
#define os_atomic_inc2o(p, f, m) \
os_atomic_add2o(p, f, 1, m)
#define os_atomic_add2o(p, f, v, m) \
os_atomic_add(&(p)->f, (v), m)
#define os_atomic_add(p, v, m) \
_os_atomic_c11_op((p), (v), m, add, +)
3.dispatch_semaphore_wait
Similarly, dispatch_semaphore_wait operates on value and returns accordingly:
- If value >= 0 it returns immediately
- If value < 0, the behavior depends on the timeout:
- DISPATCH_TIME_NOW: increments value back by one (to zero) to cancel out the decrement done at the start of the wait, and returns KERN_OPERATION_TIMED_OUT to indicate the wait timed out
- DISPATCH_TIME_FOREVER: calls the system's semaphore_wait and keeps waiting until a signal arrives
- default: similar to DISPATCH_TIME_FOREVER, but waits only for the specified time
long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
long value = os_atomic_dec2o(dsema, dsema_value, acquire);
if (likely(value >= 0)) {
return 0;
}
return _dispatch_semaphore_wait_slow(dsema, timeout);
}
DISPATCH_NOINLINE
static long
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
dispatch_time_t timeout)
{
long orig;
_dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
switch (timeout) {
default:
if (!_dispatch_sema4_timedwait(&dsema->dsema_sema, timeout)) {
break;
}
// Fall through and try to undo what the fast path did to
// dsema->dsema_value
case DISPATCH_TIME_NOW:
orig = dsema->dsema_value;
while (orig < 0) {
if (os_atomic_cmpxchgvw2o(dsema, dsema_value, orig, orig + 1,
&orig, relaxed)) {
return _DSEMA4_TIMEOUT();
}
}
// Another thread called semaphore_signal().
// Fall through and drain the wakeup.
case DISPATCH_TIME_FOREVER:
_dispatch_sema4_wait(&dsema->dsema_sema);
break;
}
return 0;
}
os_atomic_dec2o boils down to a -- operation:
#define os_atomic_dec2o(p, f, m) \
os_atomic_sub2o(p, f, 1, m)
#define os_atomic_sub2o(p, f, v, m) \
os_atomic_sub(&(p)->f, (v), m)
#define os_atomic_sub(p, v, m) \
_os_atomic_c11_op((p), (v), m, sub, -)
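Putting the three functions together, a common usage sketch (the value and label are made up): the initial value caps how many tasks may run at once, wait decrements it, signal increments it.
#import <Foundation/Foundation.h>

void semaphore_demo(void) {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t sem = dispatch_semaphore_create(2);   // at most 2 tasks in flight
    for (int i = 0; i < 6; i++) {
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);   // value--, blocks once value < 0
        dispatch_async(queue, ^{
            NSLog(@"task %d", i);
            dispatch_semaphore_signal(sem);                    // value++, wakes one waiter
        });
    }
}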
VII. How dispatch groups work
The basic usage of a dispatch group is as follows:
dispatch_group_t group = dispatch_group_create();
dispatch_group_enter(group);
dispatch_group_leave(group);
dispatch_group_async(group, queue, ^{});
dispatch_group_notify(group, queue, ^{});
1.dispatch_group_create
Like every other GCD object, a dispatch_group_t is created with _dispatch_object_alloc.
os_atomic_store2o shows that the group also maintains a value at the bottom layer.
dispatch_group_t
dispatch_group_create(void)
{
return _dispatch_group_create_with_count(0);
}
DISPATCH_ALWAYS_INLINE
static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
sizeof(struct dispatch_group_s));
dg->do_next = DISPATCH_OBJECT_LISTLESS;
dg->do_targetq = _dispatch_get_default_queue(false);
if (n) {
os_atomic_store2o(dg, dg_bits,
-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
}
return dg;
}
2.dispatch_group_enter & dispatch_group_leave
- os_atomic_sub_orig2o and os_atomic_add_orig2o are responsible for the -- and ++ operations respectively
- dispatch_group_leave updates the group's state
- Once every enter has been matched by a leave, _dispatch_group_wake is called
void
dispatch_group_enter(dispatch_group_t dg)
{
// The value is decremented on a 32bits wide atomic so that the carry
// for the 0 -> -1 transition is not propagated to the upper 32bits.
uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
DISPATCH_GROUP_VALUE_INTERVAL, acquire);
uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
if (unlikely(old_value == 0)) {
_dispatch_retain(dg); // <rdar://problem/22318411>
}
if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
DISPATCH_CLIENT_CRASH(old_bits,
"Too many nested calls to dispatch_group_enter()"); }}void
dispatch_group_leave(dispatch_group_t dg)
{
// The value is incremented on a 64bits wide atomic so that the carry for
// the -1 -> 0 transition increments the generation atomically.
uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
DISPATCH_GROUP_VALUE_INTERVAL, release);
uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);
if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
old_state += DISPATCH_GROUP_VALUE_INTERVAL;
do {
new_state = old_state;
if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
} else {
// If the group was entered again since the atomic_add above,
// we can't clear the waiters bit anymore as we don't know for
// which generation the waiters are for
new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
}
if (old_state == new_state) break;
} while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state, old_state, new_state,
&old_state, relaxed)));
return _dispatch_group_wake(dg, old_state, true);
}
if (unlikely(old_value == 0)) {
DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
"Unbalanced call to dispatch_group_leave()"); }}Copy the code
3.dispatch_group_async
- _dispatch_continuation_init_f saves the task (just like an asynchronous function)
- then _dispatch_continuation_group_async is called
void
dispatch_group_async_f(dispatch_group_t dg, dispatch_queue_t dq, void *ctxt,
dispatch_function_t func)
{
dispatch_continuation_t dc = _dispatch_continuation_alloc();
uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
dispatch_qos_t qos;
qos = _dispatch_continuation_init_f(dc, dq, ctxt, func, 0, dc_flags);
_dispatch_continuation_group_async(dg, dq, dc, qos);
}
It calls dispatch_group_enter to enter the group:
static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
dispatch_continuation_t dc, dispatch_qos_t qos)
{
dispatch_group_enter(dg);
dc->dc_data = dg;
_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
Entering the group must be balanced by leaving it: once the task has executed, the group is left. When _dispatch_continuation_invoke_inline detects the group form, it invokes _dispatch_continuation_with_group_invoke, which leaves the group (see the sketch below).
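In other words, dispatch_group_async behaves roughly like wrapping an async task in an enter/leave pair yourself, as in this sketch (queue and labels made up):
#import <Foundation/Foundation.h>

void group_demo(void) {
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_group_enter(group);            // value--
    dispatch_async(queue, ^{
        NSLog(@"manual enter/leave task");
        dispatch_group_leave(group);        // value++, the last leave triggers _dispatch_group_wake
    });

    dispatch_group_async(group, queue, ^{   // enter and leave are done for you
        NSLog(@"group_async task");
    });

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"all tasks done");
    });
}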
4.dispatch_group_wait
dispatch_group_wait works much like a semaphore wait.
- A value is saved when the group is created with dispatch_group_create
- If the current value equals the original value, all tasks have completed and 0 is returned
- If timeout is 0 it also returns immediately (with a timeout result); otherwise _dispatch_group_wait_slow is called
- _dispatch_group_wait_slow keeps waiting until the tasks complete, then returns 0
- If they never complete, it returns a timeout
dispatch_group_wait(dispatch_group_t dg, dispatch_time_t timeout)
{
uint64_t old_state, new_state;
os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, relaxed, {
if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
os_atomic_rmw_loop_give_up_with_fence(acquire, return 0);
}
if (unlikely(timeout == 0)) {
os_atomic_rmw_loop_give_up(return _DSEMA4_TIMEOUT());
}
new_state = old_state | DISPATCH_GROUP_HAS_WAITERS;
if (unlikely(old_state & DISPATCH_GROUP_HAS_WAITERS)) {
os_atomic_rmw_loop_give_up(break);
}
});
return _dispatch_group_wait_slow(dg, _dg_state_gen(new_state), timeout);
}
5.dispatch_group_notify
It waits for the _dispatch_group_wake callback (which fires once every enter has been matched by a leave).
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
dispatch_continuation_t dsn)
{
uint64_t old_state, new_state;
dispatch_continuation_t prev;
dsn->dc_data = dq;
_dispatch_retain(dq);
prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
if (os_mpsc_push_was_empty(prev)) {
os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
if ((uint32_t)old_state == 0) {
os_atomic_rmw_loop_give_up({
return _dispatch_group_wake(dg, new_state, false);
});
}
});
}
}
static void
_dispatch_group_wake(dispatch_group_t dg, uint64_t dg_state, bool needs_release)
{
uint16_t refs = needs_release ? 1 : 0; // <rdar://problem/22318411>
if (dg_state & DISPATCH_GROUP_HAS_NOTIFS) {
dispatch_continuation_t dc, next_dc, tail;
// Snapshot before anything is notified/woken
dc = os_mpsc_capture_snapshot(os_mpsc(dg, dg_notify), &tail);
do {
dispatch_queue_t dsn_queue = (dispatch_queue_t)dc->dc_data;
next_dc = os_mpsc_pop_snapshot_head(dc, tail, do_next);
_dispatch_continuation_async(dsn_queue, dc,
_dispatch_qos_from_pp(dc->dc_priority), dc->dc_flags);
_dispatch_release(dsn_queue);
} while ((dc = next_dc));
refs++;
}
if (dg_state & DISPATCH_GROUP_HAS_WAITERS) {
_dispatch_wake_by_address(&dg->dg_gen);
}
if (refs) _dispatch_release_n(dg, refs);
}
VIII. How singletons work
#define DLOCK_ONCE_UNLOCKED ((uintptr_t)0)
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
dispatch_once_gate_t l = (dispatch_once_gate_t)val;
#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
uintptr_t v = os_atomic_load(&l->dgo_once, acquire);
if (likely(v == DLOCK_ONCE_DONE)) {
return;
}
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
if (likely(DISPATCH_ONCE_IS_GEN(v))) {
return _dispatch_once_mark_done_if_quiesced(l, v);
}
#endif
#endif
if (_dispatch_once_gate_tryenter(l)) {
return _dispatch_once_callout(l, ctxt, func);
}
return _dispatch_once_wait(l);
}
DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_once_gate_tryenter(dispatch_once_gate_t l)
{
return os_atomic_cmpxchg(&l->dgo_once, DLOCK_ONCE_UNLOCKED,
(uintptr_t)_dispatch_lock_value_for_self(), relaxed);
}
- The first time it is called, the externally passed onceToken is empty, so val is NULL
- _dispatch_once_gate_tryenter(l) checks whether l->dgo_once is still marked DLOCK_ONCE_UNLOCKED (i.e. whether anything has been stored yet); DLOCK_ONCE_UNLOCKED is 0, so the check succeeds and the block callback runs
- _dispatch_once_gate_broadcast then marks l->dgo_once as DLOCK_ONCE_DONE
- On every subsequent call it returns immediately, which guarantees the code executes only once (a typical usage sketch follows)
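This is exactly the guarantee behind the usual singleton pattern; a sketch with a hypothetical class name:
#import <Foundation/Foundation.h>

@interface Manager : NSObject
+ (instancetype)sharedInstance;
@end

@implementation Manager
+ (instancetype)sharedInstance {
    static Manager *instance = nil;
    static dispatch_once_t onceToken;       // starts at 0, i.e. DLOCK_ONCE_UNLOCKED
    dispatch_once(&onceToken, ^{
        instance = [[Manager alloc] init];  // runs once; onceToken is then marked DLOCK_ONCE_DONE
    });
    return instance;
}
@end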
Afterword
Note that GCD's barrier function dispatch_barrier_async has no effect on a global queue; it only works on a custom concurrent queue.
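A sketch of what that means in practice (labels made up): the barrier only takes effect on a custom concurrent queue, so pass such a queue rather than a global one.
#import <Foundation/Foundation.h>

void barrier_demo(void) {
    dispatch_queue_t concurrent = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(concurrent, ^{ NSLog(@"read 1"); });
    dispatch_async(concurrent, ^{ NSLog(@"read 2"); });
    dispatch_barrier_async(concurrent, ^{
        NSLog(@"write");   // waits for the reads above and holds back the read below
    });
    dispatch_async(concurrent, ^{ NSLog(@"read 3"); });
}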
The GCD source code really is obscure; the author's abilities are limited, so please point out any mistakes.