To make GCD easier to understand, let's first introduce some basic definitions.

Threads and processes

  • thread

A thread is the basic execution unit of a process; all of a process's tasks execute on threads. For a process to execute a task, it must have at least one thread. By default, a program starts with one thread, called the main thread (or UI thread).

  • process

A process is an application program that is running in the system. Processes are independent of one another, and each runs in its own dedicated and protected memory space.

The Activity Monitor lets us see which processes are running on a macOS system. As shown, a single process can contain multiple threads.

The relationship between processes and threads

  • Address space: threads of the same process share that process's address space, while processes have address spaces independent of one another.
  • Resource ownership: threads of the same process share that process's resources, such as memory, I/O, and CPU time, while resources between processes are independent.

  1. In protected mode, the crash of one process does not affect other processes, but the crash of one thread kills its entire process, so multi-process designs are more robust than multi-threaded ones. (For example, many apps run at once on a Mac; when one app crashes, the others are unaffected. In iOS development, if a thread crashes, the whole process crashes and the app quits.)
  2. Switching between processes consumes significant resources and is inefficient, so when frequent switching is needed, threads are better than processes. Likewise, if you need concurrent operations that share variables, use threads rather than processes.
  3. Execution: each independent process has a program entry point, a sequential execution path, and a program exit. A thread, however, cannot execute independently; it must live inside an application, which provides execution control for its multiple threads.
  4. Threads are the basic unit of processor scheduling, but processes are not.
  5. Threads have no address space and are contained in the process address space.

If there were only one thread, the second task would have to wait until the first task had finished, which would be very inefficient; hence multithreading.

Meaning of multithreading

Advantages

  • Can improve the execution efficiency of the program
  • Appropriately improve resource utilization (CPU, memory)
  • A thread is automatically destroyed after its task finishes executing

Disadvantages

  • Starting a thread occupies a certain amount of memory (512 KB per thread by default)
  • If a large number of threads are spawned, a large amount of memory is occupied and program performance suffers
  • The more threads there are, the more CPU overhead is spent scheduling them
  • Program design becomes more complex, e.g. inter-thread communication and sharing data across threads

For a single-core CPU, only one thread executes at any instant. "Simultaneous" multithreading is actually the CPU switching between multiple threads quickly: it schedules threads fast enough to create the effect of simultaneous execution. True parallelism still requires multiple cores. If the number of threads is very large, the CPU must switch among N threads, consuming many CPU cycles; each thread is then scheduled less often, and per-thread execution efficiency drops.

The life cycle of a thread

  • New: when a program creates a thread, the thread is in the new state
  • Ready: the thread enters the ready state after start() is called on the thread object; it then waits for the system to allocate it a slice of CPU time, so calling start() does not execute the thread immediately
  • Running: a thread in the ready state begins executing run() once it acquires CPU resources; it is now running
  • Blocked: a thread that calls a method such as sleep or suspend gives up the CPU and enters the blocked state; when the sleep ends it returns to the ready state
  • Terminated: the thread enters the terminated state when run() completes or another termination condition occurs
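These transitions can be sketched as a small state machine; the state and event names below are illustrative only, not a real threading API:

```c
#include <string.h>

/* Illustrative life-cycle state machine; "start", "schedule", "sleep",
   "wake", and "finish" are made-up event names for this sketch. */
typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } thread_state;

thread_state transition(thread_state s, const char *event) {
    if (s == NEW     && !strcmp(event, "start"))    return READY;      /* created, then start() */
    if (s == READY   && !strcmp(event, "schedule")) return RUNNING;    /* granted CPU time      */
    if (s == RUNNING && !strcmp(event, "sleep"))    return BLOCKED;    /* sleep/suspend         */
    if (s == BLOCKED && !strcmp(event, "wake"))     return READY;      /* sleep finished        */
    if (s == RUNNING && !strcmp(event, "finish"))   return TERMINATED; /* run() completed       */
    return s;  /* ignore events that are illegal in the current state */
}
```

Note that a blocked thread returns to ready, not directly to running; it must be scheduled again before it executes.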

When a task is submitted to a thread pool, it is dispatched as follows:

  • First, check whether the number of threads in the pool has reached the core thread count. If not, create a new thread to execute the task; if it has, move to the next step.
  • Check whether the task queue is full. If not, add the task to the queue; if it is, move to the next step.
  • Check whether creating one more thread would exceed the maximum thread count. If not, create a new thread to execute the task; if it would, move to the next step.
  • Execute a rejection policy.
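The four steps above amount to a small decision function. This is a hedged sketch with hypothetical names (`submit`, `pool_action`), not a real thread-pool implementation:

```c
#include <stdbool.h>

/* Sketch of the submission flow described above. The names are invented
   for illustration; a real pool also tracks its live threads and queue. */
typedef enum { SPAWN_CORE, ENQUEUE, SPAWN_EXTRA, REJECT } pool_action;

pool_action submit(int threads, int core, int max, bool queue_full) {
    if (threads < core) return SPAWN_CORE;  /* below core size: new thread     */
    if (!queue_full)    return ENQUEUE;     /* queue has room: buffer the task */
    if (threads < max)  return SPAWN_EXTRA; /* grow up to the maximum          */
    return REJECT;                          /* saturated: run rejection policy */
}
```

Reading the function top to bottom reproduces the four bullets in order: core threads first, then the queue, then extra threads, and finally rejection.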

Rejection policies

  • AbortPolicy: throws a RejectedExecutionException directly, refusing the task
  • CallerRunsPolicy: rolls the task back to the caller, which runs it on its own thread
  • DiscardOldestPolicy: discards the longest-waiting task and retries the submission
  • DiscardPolicy: silently discards the task

All four rejection policies implement the RejectedExecutionHandler interface.

Influencing factors of task execution speed

  • CPU
  • Task complexity
  • Task priority
  • Thread state

Mutexes and spinlocks

In multi-threaded code where resources are contended, a mutex is used to protect critical sections and ensure that only one thread accesses the data at a time. To access a shared resource, a thread first locks the mutex; if the mutex is already locked, the calling thread blocks until it is unlocked. When access to the shared resource is complete, the thread unlocks the mutex.

Critical section: a program segment that accesses shared resources (such as a shared device or shared storage) and must not be entered by multiple threads at the same time.
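A minimal pthread sketch of this pattern (POSIX C rather than GCD): two threads increment a shared counter inside a mutex-protected critical section, so no increment is lost.

```c
#include <pthread.h>
#include <stddef.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* blocks if another thread holds it */
        counter++;                           /* critical section */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

long run_demo(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;  /* exactly 200000 thanks to the mutex */
}
```

Without the lock, the two `counter++` operations could interleave and increments would be lost.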

A mutex puts the waiting thread to sleep, giving up the CPU; a spinlock, by contrast, sits in a loop testing the lock's status, keeping the CPU occupied while it waits. This is the background for the property keywords atomic and nonatomic:

  • atomic: the atomic property, intended for multithreaded development, and the default. It adds a lock (historically a spinlock) in the property's setter, ensuring that only one thread performs a write on the property at a time: a single-writer, multiple-reader thread-safety technique
  • nonatomic: the non-atomic property; no lock, so higher performance

The difference between a spinlock and a mutex is that a thread waiting for a spinlock is not suspended; it stays busy, spinning on the CPU.

GCD

Grand Central Dispatch is a pure C API that provides very powerful functionality.

  • GCD is Apple's solution for multi-core parallel computing
  • GCD automatically makes use of additional CPU cores (dual-core, quad-core, and so on)
  • GCD automatically manages the thread lifecycle (thread creation, task scheduling, thread destruction); the programmer only has to tell GCD what task to perform, without writing any thread-management code

GCD adds tasks to queues and specifies the functions that execute them. Tasks are encapsulated in blocks (a task's block has no return value and no parameters):

  • Asynchronous: dispatch_async
  1. The next statement can execute without waiting for the current statement to complete
  2. Opens a new thread to execute the block's task
  3. Asynchrony is a byword for multithreading
  • Synchronous: dispatch_sync
  1. The next statement must wait for the current statement to complete
  2. Does not open a new thread
  3. Executes the block's task on the current thread

Queues

  • Serial queue: tasks in this queue execute serially, one at a time; each task must wait for the previous one to complete, in FIFO order
  • Concurrent queue: allows multiple tasks to execute in parallel. Tasks start in the order they were enqueued, but a later task does not wait for an earlier one to finish. A concurrent queue guarantees only the order in which tasks start; you cannot know the order in which they finish, how long each takes, or how many tasks are running at any point in time

Our queues and functions can be grouped into four groups: synchronous serial, synchronous concurrent, asynchronous serial, and asynchronous concurrent

Let's look at an example 🌰

- (void)textDemo2 {
    // Concurrent queue
    dispatch_queue_t queue = dispatch_queue_create("happy", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"1");
    // Asynchronous function
    dispatch_async(queue, ^{
        NSLog(@"2");
        // Synchronous function
        dispatch_sync(queue, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
}

The output is 1, 5, 2, 3, 4. Dispatching a block, synchronous or asynchronous, itself takes a moment, so 1 and 5 print first. Inside the asynchronous block, 2 prints, then the synchronous dispatch of 3 blocks the rest of the block, so 4 waits until 3 has executed.

❓ What happens if we change the queue to a serial one, dispatch_queue_t queue = dispatch_queue_create("happy", DISPATCH_QUEUE_SERIAL);, and run it again?

A deadlock occurs (the crash backtrace shows _dispatch_sync_f_slow).

Because dispatch_sync is synchronous, the outer block cannot continue until 3 has executed; but on a serial queue, 3 cannot start until the outer block (the one that prints 2 and 4) has finished. Each waits for the other, creating a deadlock. Even if we delete NSLog(@"4"), it still deadlocks: what 3 is really waiting for is the enclosing block itself, which is blocked synchronously.