Series: Multithreading (I) – Principles | Multithreading (II) – GCD Basics | Multithreading (III) – GCD Functions | Multithreading (IV) – GCD Timer

Single-execution function dispatch_once

dispatch_once is typically used to create singletons, or to run code that must execute only once.

static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    NSLog(@"== code that will only be executed once ==");
});

void dispatch_once(dispatch_once_t *predicate, DISPATCH_NOESCAPE dispatch_block_t block);

dispatch_once guarantees that the code in the block is executed only once, and it is thread-safe even in a multithreaded environment.
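As a concrete illustration, a typical singleton built on dispatch_once might look like the following sketch (the class name Manager is a hypothetical example):

```objectivec
#import <Foundation/Foundation.h>

// A hypothetical singleton class built on dispatch_once.
@interface Manager : NSObject
+ (instancetype)sharedManager;
@end

@implementation Manager
+ (instancetype)sharedManager {
    static Manager *shared = nil;
    static dispatch_once_t onceToken;
    // The block runs exactly once, even if sharedManager is
    // called simultaneously from multiple threads.
    dispatch_once(&onceToken, ^{
        shared = [[Manager alloc] init];
    });
    return shared;
}
@end
```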

Iteration function dispatch_apply

The dispatch_apply function submits a block to the specified queue for the specified number of iterations. Whether the queue is serial or concurrent, dispatch_apply does not return until all iterations have completed.

On a serial queue the iterations run sequentially, just like an ordinary for loop. On a concurrent queue the iteration indices may not arrive in order.

void
dispatch_apply(size_t iterations,
		dispatch_queue_t DISPATCH_APPLY_QUEUE_ARG_NULLABILITY queue,
		DISPATCH_NOESCAPE void (^block)(size_t));
  • iterations: the number of iterations to perform
  • queue: the queue on which the iterations run; DISPATCH_APPLY_AUTO is recommended, which automatically selects an appropriate queue
  • void (^block)(size_t): the block invoked on each iteration, receiving the iteration index
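Putting the parameters together, a minimal usage sketch might be:

```objectivec
// Run 10 iterations on an automatically chosen queue.
// On DISPATCH_APPLY_AUTO the indices may arrive out of order;
// dispatch_apply itself returns only after all 10 iterations finish.
dispatch_apply(10, DISPATCH_APPLY_AUTO, ^(size_t index) {
    NSLog(@"iteration %zu", index);
});
NSLog(@"all iterations done");
```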

Delay function dispatch_after

The delay function executes a task on a specified queue after a given time.

void dispatch_after(dispatch_time_t when, dispatch_queue_t queue,
dispatch_block_t block);
  • dispatch_time_t when: the time at which to execute the task.
    • DISPATCH_TIME_NOW can be passed, but is not recommended; call dispatch_async instead
    • A custom time can be built with dispatch_time or dispatch_walltime: dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC))
    • DISPATCH_TIME_FOREVER cannot be used
  • dispatch_queue_t queue: the queue on which to execute the task.
  • dispatch_block_t block: the task to execute; it must not be NULL.
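For example, a 2-second delay on the main queue could be written as:

```objectivec
// Build the fire time: now + 2 seconds.
dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC));
dispatch_after(when, dispatch_get_main_queue(), ^{
    NSLog(@"executed 2 seconds later");
});
```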

Dispatch group function dispatch_group

With a dispatch group, we can put multiple tasks into one group, have them execute asynchronously on the same queue or on different queues, and then run other work that depends on those tasks once they all finish.

The API:

  1. Create a dispatch group
dispatch_group_t dispatch_group_create(void);
  2. Enter the group, marking the start of a task in the group
void dispatch_group_enter(dispatch_group_t group);
  3. Leave the group, marking that a task in the group has finished
void dispatch_group_leave(dispatch_group_t group);
  4. Wait synchronously, blocking the current thread until the group's tasks complete or the timeout expires; returns zero when the group finished in time
long dispatch_group_wait(dispatch_group_t group, dispatch_time_t timeout);
  5. Schedule a notification: the block is submitted to the queue once all tasks associated with the group have completed
void dispatch_group_notify(dispatch_group_t group,
	                   dispatch_queue_t queue,
	                   dispatch_block_t block);

Let’s look at the use of dispatch_group as an example:

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

dispatch_async(queue, ^{
    sleep(2);
    NSLog(@"==1==");
});
    
dispatch_async(queue, ^{
    NSLog(@"==2==");
});

dispatch_async(queue, ^{
    NSLog(@"==3==");
});
    
dispatch_group_notify(group, queue, ^{
    NSLog(@"===4=");
});

Run the program: the console shows ===4= printed before ==1==. Because the three tasks were submitted with plain dispatch_async, they were never associated with the group, so dispatch_group_notify fires immediately. This is not what we want. Modify the program to tie the tasks to the group and run it again:
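One way to fix it (a sketch) is to pair each task with dispatch_group_enter/dispatch_group_leave, so the notify block fires only after all three tasks have signaled completion:

```objectivec
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

dispatch_group_enter(group);        // task 1 joins the group
dispatch_async(queue, ^{
    sleep(2);
    NSLog(@"==1==");
    dispatch_group_leave(group);    // task 1 done
});

dispatch_group_enter(group);        // task 2 joins the group
dispatch_async(queue, ^{
    NSLog(@"==2==");
    dispatch_group_leave(group);
});

dispatch_group_enter(group);        // task 3 joins the group
dispatch_async(queue, ^{
    NSLog(@"==3==");
    dispatch_group_leave(group);
});

// Now ===4= is printed only after ==1==, ==2== and ==3==.
dispatch_group_notify(group, queue, ^{
    NSLog(@"===4=");
});
```

Equivalently, dispatch_group_async(group, queue, ^{...}) is shorthand for this enter/async/leave pattern.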

dispatch_group_wait gives the same completion guarantee: it returns only after all tasks in the group have finished. The difference is that dispatch_group_wait blocks the current thread, so subsequent work, such as an NSLog(@"==5==") placed after the wait, cannot run until the group completes, whereas dispatch_group_notify schedules its block without blocking anything.
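A sketch of the blocking variant with dispatch_group_wait:

```objectivec
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

dispatch_group_async(group, queue, ^{
    sleep(2);
    NSLog(@"==1==");
});
dispatch_group_async(group, queue, ^{
    NSLog(@"==2==");
});

// Blocks the current thread until both tasks have finished.
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
NSLog(@"==5==");   // cannot print until the group is done
```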

Note that dispatch_group_enter and dispatch_group_leave must be paired; otherwise, a deadlock will occur.

Barrier function dispatch_barrier

The barrier functions come in two flavors, dispatch_barrier_async and dispatch_barrier_sync, which have both similarities and differences:

  • In common:
  1. Both wait for all tasks submitted to the queue before them to finish first
  2. Both make tasks submitted after them wait until the barrier task itself has finished
  • Difference:
  1. dispatch_barrier_sync blocks the calling thread: it does not return, and the code after it does not run, until the barrier task has finished
  2. dispatch_barrier_async returns immediately: subsequent tasks can be submitted to the queue right away, but they still wait for the barrier task to finish before executing

Here’s an example:

- (void)barrierAsync {
    dispatch_queue_t queue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(queue, ^{
        NSLog(@"--1--");
    });
    dispatch_async(queue, ^{
        NSLog(@"--2--");
    });
    dispatch_barrier_async(queue, ^{
        NSLog(@"--barrier_async--%@--",[NSThread currentThread]);
        sleep(2);
    });
    
    NSLog(@"=======barrierAsync=======");
    dispatch_async(queue, ^{
        NSLog(@"--3--");
    });
    dispatch_async(queue, ^{
        NSLog(@"--4--");
    });
    dispatch_async(queue, ^{
        NSLog(@"--5--");
    });
}

Run the program: --1-- and --2-- print (in either order) before the barrier block, and --3--, --4-- and --5-- print after it; =======barrierAsync======= appears right away, because dispatch_barrier_async returns without waiting.

Change dispatch_barrier_async to dispatch_barrier_sync and run the program again: the ordering of the tasks around the barrier is the same, but =======barrierAsync======= now prints only after the barrier block has finished.
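For reference, the synchronous variant (the same method with only the barrier call changed) might look like:

```objectivec
- (void)barrierSync {
    dispatch_queue_t queue = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(queue, ^{ NSLog(@"--1--"); });
    dispatch_async(queue, ^{ NSLog(@"--2--"); });
    // Blocks the calling thread until tasks 1 and 2 and the barrier block finish.
    dispatch_barrier_sync(queue, ^{
        NSLog(@"--barrier_sync--%@--", [NSThread currentThread]);
        sleep(2);
    });
    // Unlike the async version, this line prints only after the barrier.
    NSLog(@"=======barrierSync=======");
    dispatch_async(queue, ^{ NSLog(@"--3--"); });
    dispatch_async(queue, ^{ NSLog(@"--4--"); });
    dispatch_async(queue, ^{ NSLog(@"--5--"); });
}
```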

From the output we can see that whether the barrier is synchronous or asynchronous, it partitions the tasks in the queue: the tasks submitted before the barrier execute first, and the tasks submitted after it execute later. The difference is that dispatch_barrier_async returns immediately, so the code after the call continues while the barrier waits, whereas dispatch_barrier_sync blocks the calling thread until the barrier task has finished, which is why barrierAsync and barrierSync print in different orders.

In short, both dispatch_barrier_sync and dispatch_barrier_async partition the tasks before and after the barrier in the queue; the difference is whether the calling thread is blocked. For the barrier to have any effect, it must be submitted to the same queue as the tasks it is meant to fence off; otherwise there is no barrier effect. This is why using a barrier with AFN does not work: AFN maintains its own internal queue, and unless the barrier is submitted to that same queue it cannot fence its tasks.

Note that calling the synchronous barrier on the main queue from the main thread deadlocks, just like dispatch_sync. Use a custom concurrent queue with the barrier functions to avoid both blocking and deadlocks.
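A classic application of the barrier on a custom concurrent queue is a multiple-reader/single-writer lock. The following is a sketch; the class SafeStore and its _dict ivar are hypothetical examples:

```objectivec
#import <Foundation/Foundation.h>

// Reads run concurrently; each write is fenced off by a barrier.
@interface SafeStore : NSObject
- (id)objectForKey:(NSString *)key;
- (void)setObject:(id)obj forKey:(NSString *)key;
@end

@implementation SafeStore {
    dispatch_queue_t _queue;
    NSMutableDictionary *_dict;
}
- (instancetype)init {
    if (self = [super init]) {
        _queue = dispatch_queue_create("rwQueue", DISPATCH_QUEUE_CONCURRENT);
        _dict = [NSMutableDictionary dictionary];
    }
    return self;
}
- (id)objectForKey:(NSString *)key {
    __block id obj;
    // Concurrent reads: many readers may run at once.
    dispatch_sync(_queue, ^{ obj = self->_dict[key]; });
    return obj;
}
- (void)setObject:(id)obj forKey:(NSString *)key {
    // Exclusive write: waits for in-flight reads, blocks new ones.
    dispatch_barrier_async(_queue, ^{ self->_dict[key] = obj; });
}
@end
```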

Semaphore dispatch_semaphore_t

A semaphore is a counter used to control concurrent access to a resource: we create it with an initial value, and each thread passes through the semaphore before touching the resource, so at most the specified number of threads execute at once.

The API:

  1. Create a semaphore. The parameter is the semaphore's initial value; if it is less than 0, NULL is returned. This value controls how many threads may proceed concurrently.
dispatch_semaphore_t dispatch_semaphore_create(long value)
  2. Wait on (decrement) the semaphore; returns once the semaphore is available.
long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
  • dispatch_semaphore_t dsema: the semaphore. If its value is greater than 0, execution continues and the value is decremented by 1; if it is 0, the current thread blocks until dispatch_semaphore_signal releases the resource or the timeout expires.

  • dispatch_time_t timeout: how long a blocked thread waits. Usually DISPATCH_TIME_FOREVER or DISPATCH_TIME_NOW is passed; a custom time can also be used: dispatch_time_t t = dispatch_time(DISPATCH_TIME_NOW, 1*100*100*100);

  • Zero is returned on success; a nonzero value is returned on timeout

  3. Signal (increment) the semaphore. If the previous value was less than zero, this function wakes a waiting thread
long dispatch_semaphore_signal(dispatch_semaphore_t dsema)

Decrementing and incrementing the semaphore are usually used in pairs: decrement first (wait), then increment (signal).

Here is an example to illustrate the use of semaphores:

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"Perform task 1");
    sleep(1);
    NSLog(@"Task 1 completed");
    dispatch_semaphore_signal(semaphore);
});

dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"Perform task 2");
    sleep(1);
    NSLog(@"Task 2 completed");
    dispatch_semaphore_signal(semaphore);
});

dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"Perform task 3");
    sleep(1);
    NSLog(@"Task 3 completed");
    dispatch_semaphore_signal(semaphore);
});

Run the program: with an initial value of 1, the tasks execute one at a time, each waiting for the previous one to signal. Now change the initial value to 2:

dispatch_semaphore_t semaphore = dispatch_semaphore_create(2);

Then change the initial value to 3, or any value greater than 3:

dispatch_semaphore_t semaphore = dispatch_semaphore_create(3);

Similarly, we can change the asynchronous tasks in the example to synchronous ones. The following conclusions can be drawn:

  • With synchronous tasks, execution is one after another regardless of how the semaphore's initial value compares to the number of tasks
  • With asynchronous tasks:
    • If the initial value is smaller than the number of tasks, only that many tasks run at once; the remaining tasks start as earlier ones finish
    • If the initial value is greater than or equal to the number of tasks, all tasks execute concurrently

Here’s another example:

__block int a = 0;
while (a < 5) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        a++;
    });
}
NSLog(@"==a==%d==", a);

Because the increments happen on asynchronous threads, the printed value of a may be greater than or equal to 5. With a semaphore we can force exactly one increment per loop iteration, so the loop exits with a == 5. As follows:

dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
__block int a = 0;
while (a < 5) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        a++;
        dispatch_semaphore_signal(semaphore);
    });
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
}
    
NSLog(@"==a==%d==", a);

When using semaphores, be careful to avoid permanently blocking a thread: every dispatch_semaphore_wait must eventually be matched by a dispatch_semaphore_signal that brings the value back above zero.

Dispatch source function dispatch_source

When certain low-level system events occur, a dispatch source captures them so we can run our own handling logic. There are multiple types of dispatch sources, each listening for the corresponding kind of system event. In other words, we use a GCD function to specify the kind of system event to listen for, supply a block as the callback that handles the event, and specify a queue on which that callback executes. When the specified event is observed, the callback block is submitted to the specified queue as a task.

Related APIs:

  1. Create the source
dispatch_source_t
dispatch_source_create(dispatch_source_type_t type,
	uintptr_t handle,
	unsigned long mask,
	dispatch_queue_t _Nullable queue);
  2. Set the source event handler
void
dispatch_source_set_event_handler(dispatch_source_t source,
	dispatch_block_t _Nullable handler);
  3. Set (merge) the source event data
void
dispatch_source_merge_data(dispatch_source_t source, unsigned long value);
  4. Get the source event data
unsigned long
dispatch_source_get_data(dispatch_source_t source);

The type of data retrieved is related to the type of the source event:

  • For a file-read dispatch_source, the number of bytes that can be read from the file.
  • For a file-write dispatch_source, a flag indicating whether the file is writable: positive means writable, negative means not writable.
  • For a file-attribute-change dispatch_source, the changed attributes being monitored, expressed as constants such as DISPATCH_VNODE_RENAME.
  • For a process dispatch_source, the monitored process state, expressed as constants such as DISPATCH_PROC_EXIT.
  • For a Mach port dispatch_source, the state of the Mach port, expressed as constants such as DISPATCH_MACH_SEND_DEAD.
  • For a custom-event dispatch_source, the data set with the dispatch_source_merge_data function.
  5. Resume (begin or continue) listening
void
dispatch_resume(dispatch_object_t object);
  6. Suspend listening
void
dispatch_suspend(dispatch_object_t object);
  • dispatch_source_type_t type: the type of dispatch source to create
  • uintptr_t handle: depends on the event type being listened for; for example, for Mach port events it is the mach_port_t port number, while for custom data events it is unused and passed as 0
  • unsigned long mask: depends on the event type being listened for
  • dispatch_queue_t _Nullable queue: the queue on which the event handler executes; a global queue by default

The values of dispatch_source_type_t are as follows:

  • DISPATCH_SOURCE_TYPE_DATA_ADD: a custom event. The coalesced event data can be read with dispatch_source_get_data, and our own code calls dispatch_source_merge_data to add data to the dispatch_source.
  • DISPATCH_SOURCE_TYPE_DATA_OR: also a custom event; used the same way as DISPATCH_SOURCE_TYPE_DATA_ADD.
  • DISPATCH_SOURCE_TYPE_MACH_SEND: a Mach port send event.
  • DISPATCH_SOURCE_TYPE_MACH_RECV: a Mach port receive event.
  • DISPATCH_SOURCE_TYPE_PROC: process-related events.
  • DISPATCH_SOURCE_TYPE_READ: a file-read event.
  • DISPATCH_SOURCE_TYPE_WRITE: a file-write event.
  • DISPATCH_SOURCE_TYPE_VNODE: a file-attribute-change event.
  • DISPATCH_SOURCE_TYPE_SIGNAL: a signal-received event.
  • DISPATCH_SOURCE_TYPE_TIMER: a timer event.
  • DISPATCH_SOURCE_TYPE_MEMORYPRESSURE: a memory-pressure event.

Here we combine an example to illustrate the specific use:

@property (nonatomic, strong) dispatch_source_t source;
@property (nonatomic, strong) dispatch_queue_t queue;
@property (nonatomic, assign) NSUInteger totalComplete;

- (void)initSource {
    self.queue = dispatch_queue_create("sourceQueue", 0);
    self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue());
    dispatch_source_set_event_handler(self.source, ^{
        // Get the source event value
        NSUInteger value = dispatch_source_get_data(self.source);
        self.totalComplete += value;
        NSLog(@"Progress: %.2f", self.totalComplete / 100.0);
    });
    // Start listening
    dispatch_resume(self.source);
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    for (NSUInteger index = 0; index < 100; index++) {
        dispatch_async(self.queue, ^{
            sleep(1);
            // Merge data into the source
            dispatch_source_merge_data(self.source, 1);
        });
    }
}

Run the program: as the asynchronous tasks merge data into the source, the handler coalesces the values and the logged progress climbs toward 1.00.

Conclusion

  1. dispatch_once
    • Will be executed once
    • Thread safety
  2. dispatch_after executes asynchronously
  3. dispatch_apply
    • Serial queues are the same as regular loops
    • Concurrent queue, the subscripts of the loop are not sequential
  4. dispatch_group
    • dispatch_group_enter and dispatch_group_leave must come in pairs; otherwise a deadlock will occur
    • enter comes first, leave comes after
    • dispatch_group_waitBlocks the current thread
  5. dispatch_barrier
    • It has a synchronizing effect
    • It can guarantee thread safety
    • Its underlying principle is blocking the queue
    • Do not use global queues or the main queue
    • The tasks to be fenced and the barrier function must be in the same queue
  6. dispatch_semaphore
    • Acts as a lock
    • It is among the highest-performance locks
    • Can control the maximum number of concurrent tasks
    • dispatch_semaphore_wait blocks the thread when the semaphore's value is 0
  7. dispatch_source
    • Creating the source, setting the event handler, merging/getting data, and resuming/suspending form the basic operations of dispatch_source
    • Take care that the data you set and read matches the source's type

References: the official documentation; "iOS Multithreading: An Exhaustive Summary of GCD"