
1. Queues and threads

A thread can serve multiple queues, and each queue can hold multiple tasks. Queues order their tasks, but they rely on threads to actually execute them.

1.1 The Main Queue

The main queue is the dispatch queue associated with the main thread. It is a serial queue and is created before the main function runs. UI-related operations must be performed on the main queue, which can be obtained with dispatch_get_main_queue(). Set a breakpoint before main and you can see that the main queue already exists and is serial. This means the main queue is created after dyld loads but before main runs.

Next, check the libdispatch source code to see how the main queue is initialized.


Next, search for DISPATCH_GLOBAL_OBJECT. Its first parameter is the type and the second is the actual object: dispatch_queue_main_t is the type, and _dispatch_main_q is the object.

Next, search for _dispatch_main_q and see where it is assigned.

You can also search by the queue's label. Set a breakpoint, run bt to get the label of the main queue, and then search for it.

The search leads straight to the main queue definition. Note that dq_serialnum = 1 is not what determines whether the queue is serial; it is DQF_WIDTH(1) that proves the main queue is serial.

A queue is normally created with dispatch_queue_create, so search the source for the implementation of dispatch_queue_create.

Next, search for _dispatch_lane_create_with_target. The function is over a hundred lines long, so start from the return value. The important part is dq, which is of type dispatch_lane_t and is cleared and initialized here. If the queue is concurrent, DISPATCH_QUEUE_WIDTH_MAX is passed as the width; otherwise 1 is passed.

Next, search the _dispatch_queue_init function and note how the third parameter is used: dqf |= DQF_WIDTH(width). This is where DQF_WIDTH(width) determines whether a queue is concurrent or serial.

What does dq_serialnum stand for? Search for _dispatch_queue_serial_numbers and see where it is assigned.

Next, search for DISPATCH_QUEUE_SERIAL_NUMBER_INIT. There is a comment listing the pre-assigned numbers, so dq_serialnum = 1 identifies the main queue.

Go back to _dispatch_lane_create_with_target and look up: the parameters are prepared first, then the queue is initialized.

1.2 Global Queue

A global queue runs tasks on background threads and is a shared, system-wide concurrent queue. You can obtain it with dispatch_get_global_queue(0, 0). Get the label of the global queue and search the source for it.

Global Queue found in source code.

Looking up in the source, the global queues are stored in an array.

1.3 Queue inheritance chain

Whether it is the main queue, a global queue, or a custom queue, all queues are of type dispatch_queue_t.

Click on the definition of dispatch_queue_t.

You can see it is declared via DISPATCH_DECL, which expands to OS_OBJECT_DECL_SUBCLASS.

Next, search the source code for OS_OBJECT_DECL_SUBCLASS and find its definition.

OS_OBJECT_DECL_SUBCLASS expands to OS_OBJECT_DECL_IMPL, which pieces the type together.

Then search OS_OBJECT_DECL_PROTOCOL to see its definition.

OS_OBJECT_CLASS simply concatenates the OS_ prefix onto the name.

So DISPATCH_DECL(dispatch_queue); expands through OS_OBJECT_DECL_SUBCLASS into OS_OBJECT_DECL_IMPL, which produces two things.

First, @protocol OS_OBJECT_CLASS(name) __VA_ARGS__ becomes @protocol OS_dispatch_queue <OS_dispatch_object>.

Second, typedef adhere<OS_OBJECT_CLASS(name)> * OS_OBJC_INDEPENDENT_CLASS name##_t (with adhere being NSObject here) becomes typedef NSObject<OS_dispatch_queue> * OS_OBJC_INDEPENDENT_CLASS dispatch_queue_t.

In other words, in Objective-C a dispatch_queue_t is a pointer to an NSObject conforming to the OS_dispatch_queue protocol.

Search for the definition of DISPATCH_DECL and find that it also has a second definition, used when building as C++.

There it is typedef struct dispatch_queue_s : public dispatch_object_s {} *dispatch_queue_t; — that is, dispatch_queue_s inherits from dispatch_object_s.

dispatch_object_t is the true root type.

Looking at the dispatch_queue_s structure, it starts with DISPATCH_QUEUE_CLASS_HEADER, which is where the inheritance comes from.

Next, search DISPATCH_QUEUE_CLASS_HEADER and find that there is also an inheritance.

Next, search DISPATCH_OBJECT_HEADER and see that it comes from _os_object_s.

Next, search for OS_OBJECT_STRUCT_HEADER.

Finally, search for _OS_OBJECT_HEADER.

So we end up with the GCD inheritance chain: dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s -> _os_object_s -> dispatch_object_t, whose root is dispatch_object_t.

2. GCD task execution stack

Whether it’s asynchronous or synchronous, there are blocks, so where is the block executed?

2.1 Synchronous queue execution stack

A typical synchronous call looks like this.

Search the source for the implementation of dispatch_sync.

The work parameter here is the block that was passed in, so track work. Search _dispatch_Block_invoke and see that the block is wrapped there; then find where the wrapped function is called and executed.

Next search for _dispatch_sync_f.

Then, after searching _dispatch_sync_f_inline, it is not obvious which branch is taken, so set breakpoints where the candidate functions are executed.

Set a breakpoint on _dispatch_sync_f_slow and run.

The breakpoint is hit, which means the _dispatch_sync_f_slow function is called.

Now look at the implementation of _dispatch_sync_f_slow and note the assignment to func. There are also _dispatch_sync_function_invoke and _dispatch_sync_invoke_and_complete_recurse.

Set breakpoints on _dispatch_sync_function_invoke and _dispatch_sync_invoke_and_complete_recurse to see which one executes: _dispatch_sync_function_invoke is hit, while _dispatch_sync_invoke_and_complete_recurse is not.

Next, search for _dispatch_sync_function_invoke.

Then search _dispatch_sync_function_invoke_inline.

Then search _dispatch_client_callout and find that it calls f(ctxt).

It’s the same thing here.

That means the block is called at _dispatch_client_callout.

You can also set a breakpoint inside the block and run bt to prove it.

2.2 Asynchronous queue execution stack

Similarly, search the source for the implementation of dispatch_async.

Next, search for _dispatch_continuation_init. _dispatch_Block_invoke encapsulates work here.

In _dispatch_continuation_init_f, func and ctxt are assigned, and _dispatch_continuation_priority_set sets the priority.

In other words, this branch packages the task together with its priority. Why package it? Because this is an asynchronous call: the task will be invoked later, possibly out of order, so the priority is recorded as a scheduling reference. Asynchronous means the task callback happens whenever CPU scheduling allows, so the task is wrapped up in advance and can be pulled out and invoked whenever the CPU is ready to execute it.

Next, search for _dispatch_continuation_async.

Then search for dx_push. Here z is the qos; dx_push is a macro that forwards to dq_push, so search for dq_push next.

Next, search dq_push and find its assignment for the global queue.

The next step is to search for _dispatch_root_queue_push and find that _dispatch_root_queue_push_inline is called.

Search for _dispatch_root_queue_push_inline.

Then search for _dispatch_root_queue_poke.

Search again for _dispatch_root_queue_poke_slow.

Next, _dispatch_root_queues_init.

_dispatch_root_queues_init_once is called, and you can see that _dispatch_worker_thread2 is there.

Set a breakpoint in the block, run bt, and walk the stack backwards: there is a call to _dispatch_worker_thread2.

Then comes _dispatch_root_queue_drain; note _dispatch_continuation_pop_inline here.

Search _dispatch_continuation_pop_inline.

Next, go to _dispatch_continuation_invoke_inline.

Then _dispatch_client_callout is called.

The block is called at _dispatch_client_callout.

3. Interview questions

What does this function print? The answer is A: 1230789. Since this is a serial queue, the tasks execute in order, so 1230789 is printed.

What is the result of running this? The answer is: greater than or equal to 5. Because num++ runs asynchronously, by the time self.num >= 5 ends the loop there may still be threads in flight running self.num++; if some of them finish before the NSLog, the printed value will be greater than 5. For example, if self.num equals 3 and just then several threads complete self.num++, self.num becomes greater than or equal to 5 and the loop exits.

And what is the result here? Is it >= 10000? The answer is: less than or equal to 10000. The loop condition is i, and when i = 10000 the loop exits, but some of the dispatched tasks may not have executed yet, so the printed value can be below 10000. It is also not thread-safe: if one thread reads the value 100 before another thread's ++ has been stored, both threads will write back 101, so two increments produce 101 instead of 102.