preface

Multithreading is an important topic in iOS development; it comes up constantly, both in day-to-day work and in interviews. Today we are going to take a look at multithreading.

Threads and processes

Definitions of threads and processes

thread

  • A thread is the basic execution unit of a process; all of a process's tasks are executed on threads
  • For a process to perform any task it must have threads; a process has at least one thread
  • A program starts with one thread by default, called the main thread or the UI thread

process

  • A process is an application that is currently running on the system
  • Processes are independent of each other; each process runs in its own dedicated and protected memory space
  • On macOS, Activity Monitor lets you view the processes that have been started on the system

A process is a container for the threads that perform its tasks. iOS development is single-process: one process is one app, and processes are independent of each other. Alipay, WeChat, and QQ, for example, are all separate processes.

The relationship between processes and threads

The relationship between processes and threads mainly involves two aspects:

  • Address space

    • Threads of the same process share that process's address space
    • while each process has its own separate address space
  • Resource ownership

    • Threads within the same process share the process's resources, such as memory, I/O, and CPU
    • but resources are independent between processes

The relationship between the two is like that between a factory and its assembly lines: factories are independent of each other, while the assembly lines inside a factory share the factory's resources. That is, a process is like a factory, and a thread is like an assembly line inside that factory.

For processes and threads, there are a few more notes:

  • 1: Multiple processes are more robust than multiple threads

    • In protected mode, the crash of one process does not affect other processes
    • whereas if one thread crashes, the whole process dies
  • 2: Application scenarios: frequent switching and concurrent operations

    • Switching between processes consumes a lot of resources and is inefficient, so when frequent switching is required, threads are better than processes
    • Likewise, for concurrent operations that need to share variables, only threads can be used, not processes
  • 3: Execution process

    • Each independent process has a program entry point, a sequential execution flow, and a program exit
    • But threads cannot execute independently; they must live inside an application, and the application provides execution control for its threads
  • 4: Threads are the basic unit of processor scheduling, but processes are not

  • 5: Threads have no address space of their own; they live inside the address space of their process

Thread and Runloop relationship

  • 1: RunLoops and threads correspond one to one: one RunLoop corresponds to one core thread ("core" because RunLoops can be nested, but each thread has only one core RunLoop), and the pairs are stored in a global dictionary.
  • 2: A RunLoop is used to manage its thread: when the thread's RunLoop is enabled, the thread goes to sleep after finishing its tasks and is woken up when there are new tasks to execute.
  • 3: A RunLoop is created the first time it is fetched and destroyed when the thread ends.
  • 4: For the main thread, the RunLoop is created by default as soon as the program starts.
  • 5: For child threads, the RunLoop is lazily loaded and only created when we use it. So when using a timer on a child thread, make sure the child thread's RunLoop has been created, otherwise the timer will not fire (see the sketch below).
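
For point 5, here is a minimal sketch (the method names are my own, illustrative ones) of driving an NSTimer on a child thread; without the run call, the child thread's RunLoop never runs and the timer never fires:

    - (void)startTimerOnChildThread {
        [NSThread detachNewThreadSelector:@selector(timerTest) toTarget:self withObject:nil];
    }

    - (void)timerTest {
        // scheduledTimerWithTimeInterval: registers the timer with the current (child) thread's
        // runloop, which is lazily created on first access
        [NSTimer scheduledTimerWithTimeInterval:1.0
                                         target:self
                                       selector:@selector(timerFired:)
                                       userInfo:nil
                                        repeats:YES];
        // Without explicitly running the runloop, the child thread would finish immediately
        // and the timer would never call back
        [[NSRunLoop currentRunLoop] run];
    }

    - (void)timerFired:(NSTimer *)timer {
        NSLog(@"timer fired on %@", [NSThread currentThread]);
    }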

multithreading

Principle of multithreading

  • For a single-core CPU, the CPU can only process one thread at a time, i.e. only one thread is working at any given moment
  • In iOS, "simultaneous" multithreaded execution is, in essence, the CPU switching rapidly between multiple tasks. Because the CPU schedules threads quickly enough, it produces the effect of multiple threads executing "at the same time". The interval between switches is called a time slice

Meaning of multithreading

advantages

  • Appropriately improves the execution efficiency of the program
  • Appropriately improves resource utilization, such as CPU and memory
  • When the tasks on a thread are complete, the thread is destroyed automatically

disadvantages

  • Starting a thread takes up a certain amount of memory; by default, each thread occupies 512 KB
  • Opening a large number of threads occupies a large amount of memory and reduces the performance of the program
  • The more threads there are, the more time the CPU spends scheduling them
  • Programming becomes more complex, e.g. communication between threads and sharing data among multiple threads

Multithreaded life cycle

The life cycle of a thread is mainly divided into five parts: new – ready – running – blocked – dead.

  • New: the thread object is instantiated

  • Ready: the thread object calls the start method, which adds it to the schedulable thread pool to wait for the CPU. Calling start does not execute the thread immediately; the thread enters the ready state and must wait for the CPU to schedule it, i.e. to move it from ready to running

  • Running: the CPU is responsible for scheduling the execution of the threads in the schedulable thread pool. Before its execution completes, a thread's state may switch back and forth between ready and running. The CPU is responsible for this switching, and the developer cannot interfere.

  • Blocked: when a predetermined condition is met, sleep or a synchronization lock can be used to block the thread's execution; when the sleep ends, the thread is re-added to the ready state. The sleep-time settings below apply to NSThread

    • sleepUntilDate: blocks the current thread until the specified time, i.e. sleep until a given moment
    • sleepForTimeInterval: puts the thread to sleep for the given time interval, i.e. sleep for a given duration
    • Synchronization lock: @synchronized(self)
  • Death: falls into two categories

    • Normal death: the thread finishes executing
    • Abnormal death: when some condition is met, execution is terminated from inside the thread (or from the main thread), e.g. by calling exit

A running thread has a period of time (called a timeslice) that it can execute.

  • If the time slice is exhausted, the thread enters the ready-state queue
  • If the time slice is not used up but the thread needs to wait for some event, it enters the blocked-state queue
  • After the event occurs, the thread re-enters the ready-state queue
  • Whenever a thread leaves running, i.e. it finishes executing or is forced to exit, the system selects another thread from the ready-state queue to continue executing

Thread exit and cancel:

  • exit: once the thread is forcibly terminated, none of the subsequent code is executed
  • cancel: cancels the current thread, but cannot cancel a thread that is already executing (see the sketch below)
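
A minimal sketch of the life cycle and of exit/cancel using NSThread (the method names are illustrative):

    NSThread *thread = [[NSThread alloc] initWithTarget:self
                                               selector:@selector(threadTask)
                                                 object:nil];    // new: the object is instantiated
    [thread start];                                              // ready: added to the schedulable thread pool
    // [thread cancel];                                          // only marks the thread as cancelled

    - (void)threadTask {                                         // running: scheduled by the CPU
        [NSThread sleepForTimeInterval:2];                       // blocked: sleeps for a fixed duration
        if ([NSThread currentThread].isCancelled) {
            [NSThread exit];                                     // death: nothing after this line executes
        }
        NSLog(@"finished");                                      // normal death once the method returns
    }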

Thread Pool Principle

iOS multithreading implementation schemes

There are four ways to implement multithreading in iOS: pthread, NSThread, GCD, and NSOperation, as shown in the figure

Here are simple examples of the four approaches above

    // *********1: pthread*********
    pthread_t threadId = NULL;
    // The object passed to the new thread (a plain C string, char *cString = "HelloCode";, could also be passed)
    NSString *name = @"HelloCode";
    /*
     pthread_create parameters:
     1. pthread_t *: a pointer to the thread to create. As a rule of thumb, C types ending in _t/Ref do not need a trailing *
     2. Thread attributes: nil (OC null object) / NULL (C null pointer, i.e. 0)
     3. The function to run on the new thread; its void * parameter and return value play the role of id in OC
     4. The argument passed to that function
     */
    int result = pthread_create(&threadId, NULL, pthreadTest, (__bridge void *)(name));
    if (result == 0) {
        NSLog(@"success");
    } else {
        NSLog(@"failure");
    }

    // *********2: NSThread*********
    [NSThread detachNewThreadSelector:@selector(threadTest) toTarget:self withObject:nil];

    // *********3: GCD*********
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self threadTest];
    });

    // *********4: NSOperation*********
    [[[NSOperationQueue alloc] init] addOperationWithBlock:^{
        [self threadTest];
    }];

    - (void)threadTest {
        NSLog(@"begin");
        NSInteger count = 1000 * 100;
        for (NSInteger i = 0; i < count; i++) {
            NSInteger num = i;
            NSString *name = @"zhang";
            NSString *myName = [NSString stringWithFormat:@"%@ - %zd", name, num];
            NSLog(@"%@", myName);
        }
        NSLog(@"over");
    }

    void *pthreadTest(void *para) {
        // If a C string were passed in, it could be logged with %s:
        // NSLog(@"===> %@ %s", [NSThread currentThread], para);
        NSString *name = (__bridge NSString *)(para);
        NSLog(@"===> %@ %@", [NSThread currentThread], name);
        return NULL;
    }

Bridge between C and OC

Bridging between C and OC is involved here; the keywords are described below, followed by a short sketch

  • __bridge only performs the type conversion; it does not transfer ownership of the object (memory)
  • __bridge_retained (or CFBridgingRetain) converts an Objective-C object to a Core Foundation object and hands ownership of the object (memory) to us; afterwards you must release the object with CFRelease or a related function
  • __bridge_transfer (or CFBridgingRelease) converts a Core Foundation object to an Objective-C object and hands ownership of the object (memory) over to ARC
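
A minimal sketch of the three variants, using the toll-free bridged pair NSString / CFStringRef (a made-up example, independent of the pthread code above):

    NSString *ocString = [NSString stringWithFormat:@"hello"];

    // __bridge: type conversion only, ownership does not change
    CFStringRef cf1 = (__bridge CFStringRef)ocString;

    // __bridge_retained / CFBridgingRetain: OC -> CF, ownership is handed to us,
    // so it must be balanced with CFRelease
    CFStringRef cf2 = (__bridge_retained CFStringRef)ocString;
    CFRelease(cf2);

    // __bridge_transfer / CFBridgingRelease: CF -> OC, ownership is handed to ARC,
    // so no manual release is needed afterwards
    CFStringRef cf3 = CFStringCreateWithCString(kCFAllocatorDefault, "world", kCFStringEncodingUTF8);
    NSString *ocString2 = (__bridge_transfer NSString *)cf3;

    NSLog(@"%@ %@", (__bridge NSString *)cf1, ocString2);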

Multithreading knowledge

Synchronous function: the sync functions in Dispatch do their work on the current thread

Asynchronous function: the async functions in Dispatch do their work on another thread

Concurrent queue: allows multiple tasks to execute concurrently ("simultaneously"); multiple threads are opened automatically to execute tasks at the same time. Concurrency only takes effect with asynchronous functions (dispatch_async)

Serial queue: tasks are executed one after another (the next one starts only after the previous one has finished)

Note: a variable that was not created by a CF function does not need to be released with CFRelease

Synchronous vs. asynchronous: whether new threads can be started (this determines which thread the task executes on)

  • Synchronous: executes the task on the current thread; does not have the ability to start new threads
  • Asynchronous: executes the task on a new thread; has the ability to start new threads
  • Synchronous function: the task must be executed immediately on the current thread, and the function only returns after it finishes. In other words, the function blocks until all of its tasks are complete
  • Asynchronous function: the task does not have to be executed immediately on the current thread, and the function returns without waiting for the task to finish

Concurrent and serial: how tasks are executed

  • Concurrency: Multiple tasks are executed concurrently (simultaneously)
  • Serial: After one task is executed, the next task is executed

The main queue is a special serial queue where everything that is put in the main queue is executed on the main thread
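
A minimal sketch of the four combinations (the queue names are illustrative):

    dispatch_queue_t serialQueue = dispatch_queue_create("serial", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t concurrentQueue = dispatch_queue_create("concurrent", DISPATCH_QUEUE_CONCURRENT);

    // async + concurrent: starts multiple threads, tasks run concurrently
    dispatch_async(concurrentQueue, ^{ NSLog(@"async concurrent %@", [NSThread currentThread]); });

    // async + serial: starts one new thread, tasks run one after another
    dispatch_async(serialQueue, ^{ NSLog(@"async serial %@", [NSThread currentThread]); });

    // sync + concurrent: no new thread, the task runs on the current thread
    dispatch_sync(concurrentQueue, ^{ NSLog(@"sync concurrent %@", [NSThread currentThread]); });

    // sync + serial: no new thread, the task runs on the current thread
    dispatch_sync(serialQueue, ^{ NSLog(@"sync serial %@", [NSThread currentThread]); });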

Deadlock:

Queue characteristic: FIFO (First In, First Out)

There are two situations in which a deadlock occurs (see the sketch after this list):

  • Using a synchronous function to add a task to the main queue while already running on the main thread
  • More generally, using a synchronous function to add a task to the serial queue the code is currently running on
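
A minimal sketch of both cases (assuming the first snippet runs on the main thread, e.g. in viewDidLoad):

    // Case 1: on the main thread, a sync dispatch to the main queue deadlocks:
    // the main queue cannot run the block until the current task (this method) finishes,
    // while dispatch_sync will not return until the block has run
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"never reached");
    });

    // Case 2: a sync dispatch to the serial queue the code is already running on
    // deadlocks for the same reason (the queue name is illustrative)
    dispatch_queue_t queue = dispatch_queue_create("serial", DISPATCH_QUEUE_SERIAL);
    dispatch_async(queue, ^{
        dispatch_sync(queue, ^{
            NSLog(@"never reached");
        });
    });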

Queue group:

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_queue_create("myQueue", DISPATCH_QUEUE_CONCURRENT);

    dispatch_group_async(group, queue, ^{
        for (int i = 0; i < 5; i++) {
            NSLog(@"task 1 - %@", [NSThread currentThread]);
        }
    });

    dispatch_group_async(group, queue, ^{
        for (int i = 0; i < 5; i++) {
            NSLog(@"task 2 - %@", [NSThread currentThread]);
        }
    });

    // executed only after task 1 and task 2 have both finished
    dispatch_group_notify(group, queue, ^{
        dispatch_async(dispatch_get_main_queue(), ^{
            for (int i = 0; i < 5; i++) {
                NSLog(@"task 3 - %@", [NSThread currentThread]);
            }
        });
    });

Multithreading security risks:

Q: A resource may be shared by multiple threads, which can lead to data corruption and data anomalies. A: Solution: use thread synchronization technology (synchronization: coordinating access in a predetermined order). The common thread synchronization technique is locking.

  • OSSpinLock: a spin lock. The thread waiting for the lock busy-waits, consuming CPU resources (a high-level lock).
    #import <libkern/OSAtomic.h>

    _lock = OS_SPINLOCK_INIT;        // initialize the lock
    /*
     To share one lock across all instances, initialize it statically:
     static OSSpinLock lock;
     static dispatch_once_t onceToken;
     dispatch_once(&onceToken, ^{
         lock = 0;                   // OS_SPINLOCK_INIT = 0
     });
     */
    OSSpinLockLock(&_lock);          // lock
    // code to protect
    OSSpinLockUnlock(&_lock);        // unlock

Note:

  • All threads must share the same lock, otherwise a new lock is created each time and the protection fails. Likewise, if there are several methods of which only one may execute at a time, they must share one lock
  • How it works: essentially a busy wait, similar to spinning in a while loop
  • It is no longer safe, because priority inversion can occur. Suppose there are threads 1 and 2, with thread 1 having a higher priority than thread 2. If thread 2 enters the code first, finds the lock unlocked and locks it, then when thread 1 arrives it finds the lock already locked and busy-waits. Because thread 1 has the higher priority, the CPU gives it more execution time, so thread 2's code may never get a chance to run and therefore never unlocks. The result is a deadlock-like state, which is why Apple no longer recommends OSSpinLock

You can also use static to initialize the lock so that the spin lock is unique, in which case the lock does not need to be an instance property. However, a static variable cannot be initialized by calling a function dynamically; the value on the right-hand side must be determined at compile time. If you need a dynamically computed value, use dispatch_once and assign the value inside the once block

A lock is basically only required when multiple threads may modify the same value at the same time; pure reads do not require one

  • os_unfair_lock(low-level lock)

A thread waiting for an os_unfair_lock is put to sleep.

    #import <os/lock.h>

    _lock = OS_UNFAIR_LOCK_INIT;       // initialize the lock
    os_unfair_lock_lock(&_lock);       // lock
    // code to protect
    os_unfair_lock_unlock(&_lock);     // unlock
  • pthread_mutex(low-level lock)

The thread waiting for the lock is put to sleep. When the lock is no longer needed, it must be destroyed manually

Note: static initialization of a structure is only allowed together with its definition; you cannot assign it later through a setter

    struct Date {
        int year;
        int month;
    };
    struct Date date = {2011, 10};   // OK: initialized together with the definition

    struct Date date2;
    // date2 = {2011, 10};           // Not allowed: a brace list cannot be assigned after the definition
    // This is why you cannot define a pthread_mutex property and statically initialize it through self's dot syntax
    // Static initialization is not possible through a property:
    // self.lock = PTHREAD_MUTEX_INITIALIZER;

    // Dynamic initialization:
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_DEFAULT);
    pthread_mutex_init(&_lock, &attr);
    pthread_mutexattr_destroy(&attr);   // the attribute can be destroyed once the mutex is initialized

    /*
     pthread_mutex_init(mutex, NULL);
     // At initialization, NULL can be passed instead of &attr, which defaults to PTHREAD_MUTEX_DEFAULT
     */

Replacing the PTHREAD_MUTEX_DEFAULT attribute with PTHREAD_MUTEX_RECURSIVE turns the lock into a recursive lock, which resolves the deadlock that recursion would otherwise cause

The essence of recursive locking is to allow the same thread to lock the same lock repeatedly
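
A minimal sketch of the recursive case (_recursiveLock is an assumed pthread_mutex_t instance variable): with PTHREAD_MUTEX_DEFAULT the second lock call would deadlock, with PTHREAD_MUTEX_RECURSIVE it does not:

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);   // recursive instead of default
    pthread_mutex_init(&_recursiveLock, &attr);
    pthread_mutexattr_destroy(&attr);

    - (void)recursiveMethod:(int)count {
        pthread_mutex_lock(&_recursiveLock);    // the same thread may lock again without blocking
        if (count > 0) {
            [self recursiveMethod:count - 1];
        }
        pthread_mutex_unlock(&_recursiveLock);  // each lock is balanced by an unlock
    }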

    - (void)define {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_DEFAULT);
        pthread_cond_init(&_con, NULL);     // create the condition
        pthread_mutex_init(&_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    - (void)using1 {
        pthread_mutex_lock(&_lock);
        pthread_cond_wait(&_con, &_lock);   // wait: the lock is released while waiting and re-acquired on wake-up
        pthread_mutex_unlock(&_lock);
    }

    - (void)using2 {
        pthread_cond_signal(&_con);         // tell the condition that the waiting thread can continue
    }

By creating a condition (pthread_cond), pthread_mutex can make a thread wait, which solves dependencies between multiple threads (as the code above sketches).

tip:

  • While stopped in assembly, typing si steps one assembly instruction at a time and steps into function calls
  • While stopped in assembly, typing c continues execution directly to the next breakpoint
  • NSLock(low-level lock)

It is a wrapper around an ordinary (non-recursive) pthread_mutex lock

It conforms to the NSLocking protocol, which declares the following two methods:

  • - (void)lock;
  • - (void)unlock;

There are two other commonly used APIs:

  • - (BOOL)tryLock; attempts to acquire the lock: if it can, it locks, returns YES, and the following code runs under the lock; if it cannot, it returns NO and the following code still runs, just without the lock
  • - (BOOL)lockBeforeDate:(NSDate *)limit; keeps trying to acquire the lock until the limit time arrives. If it succeeds before the deadline, it locks and the subsequent code runs under the lock; if it has not succeeded by the deadline, it returns NO (locking failed) and the subsequent code runs anyway
    self.lock = [[NSLock alloc] init];
    [self.lock lock];
    // code to execute
    [self.lock unlock];
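
A minimal sketch of the two APIs described above:

    if ([self.lock tryLock]) {
        // the lock was acquired; do the work, then unlock
        [self.lock unlock];
    } else {
        // the lock could not be acquired; execution continues without it
    }

    if ([self.lock lockBeforeDate:[NSDate dateWithTimeIntervalSinceNow:2]]) {
        // the lock was acquired within 2 seconds
        [self.lock unlock];
    } else {
        // timed out: locking failed, but execution still continues
    }
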
  • NSRecursiveLock(low-level lock)

It is a wrapper around a recursive pthread_mutex and is used in the same way as NSLock

  • NSCondition(low-level lock)

It wraps a pthread_mutex together with a pthread_cond (condition)

There are four commonly used APIs:

  • - (void)wait;
  • - (BOOL)waitUntilDate:(NSDate *)limit;
  • - (void)signal;
  • - (void)broadcast;
    self.condition = [[NSCondition alloc] init];
    [self.condition lock];
    // code to execute
    [self.condition wait];        // wait
    [self.condition unlock];

    [self.condition signal];      // send a signal to the NSCondition
    [self.condition broadcast];   // send a broadcast to the NSCondition
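
A typical use is to make one thread wait until another thread has produced data. A minimal sketch (self.condition and self.dataArray are assumed properties):

    // consumer thread
    - (void)remove {
        [self.condition lock];
        while (self.dataArray.count == 0) {
            [self.condition wait];       // releases the lock and sleeps until signalled
        }
        [self.dataArray removeLastObject];
        [self.condition unlock];
    }

    // producer thread
    - (void)add {
        [self.condition lock];
        [self.dataArray addObject:@"data"];
        [self.condition signal];         // wake one waiting thread
        [self.condition unlock];
    }
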
  • NSConditionLock(low-level lock)

It is a further wrapper around NSCondition that lets you attach specific condition values

  • - (instancetype)initWithCondition:(NSInteger)condition; initializes the lock with a condition value. If you use init directly, the default condition value is 0
  • - (void)lockWhenCondition:(NSInteger)condition; locks only when the current condition value equals condition
  • - (void)unlockWithCondition:(NSInteger)condition; unlocks and changes the condition value to condition
    self.lock = [[NSConditionLock alloc] initWithCondition:1];   // the initial condition value is 1
    [self.lock lockWhenCondition:1];                             // lock when the condition value is 1
    [self.lock unlockWithCondition:2];                           // unlock and change the condition value to 2

Note: if the plain lock method is called, the lock can be acquired regardless of the current condition value (see the sketch below).
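
A minimal sketch that uses the condition value to force three asynchronous tasks to run in a fixed order (1, then 2, then 3):

    self.lock = [[NSConditionLock alloc] initWithCondition:1];

    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self.lock lockWhenCondition:2];     // waits until the condition value is 2
        NSLog(@"task 2");
        [self.lock unlockWithCondition:3];
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self.lock lockWhenCondition:1];     // runs first: the initial condition value is 1
        NSLog(@"task 1");
        [self.lock unlockWithCondition:2];
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self.lock lockWhenCondition:3];     // runs last
        NSLog(@"task 3");
        [self.lock unlockWithCondition:1];
    });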

  • GCD serial queue

Threads can also be synchronized by using a GCD serial queue directly, as sketched below
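
A minimal sketch (the ticket example and property names are illustrative): funnelling every access to a shared value through one serial queue serializes it, which has the same effect as a lock:

    self.ticketQueue = dispatch_queue_create("ticketQueue", DISPATCH_QUEUE_SERIAL);

    - (void)saleTicket {
        dispatch_sync(self.ticketQueue, ^{
            // blocks on a serial queue run one at a time,
            // so the shared count is modified safely
            self.ticketCount -= 1;
            NSLog(@"remaining tickets: %ld", (long)self.ticketCount);
        });
    }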

  • dispatch_semaphore

A semaphore maintains a counter value:

  • Wait: if the semaphore value is > 0, it decrements the value by 1 and the code continues; if the value is <= 0, the thread sleeps and waits until the value becomes > 0
  • Signal: increments the semaphore value by 1
    self.lock = dispatch_semaphore_create(5);    // the initial value controls the maximum number of concurrent threads
    dispatch_semaphore_wait(self.lock, DISPATCH_TIME_FOREVER);
    // at most 5 threads can execute here at the same time
    dispatch_semaphore_signal(self.lock);
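
With an initial value of 1, the same semaphore behaves like a mutex; a minimal sketch (self.semaphore is an assumed property):

    self.semaphore = dispatch_semaphore_create(1);    // initial value 1: at most one thread at a time

    - (void)criticalSection {
        dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);   // 1 -> 0; other threads sleep here
        // code that must not run on two threads at once
        dispatch_semaphore_signal(self.semaphore);                        // 0 -> 1; wakes one waiting thread
    }
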
  • @synchronized

It is a wrapper around a recursive mutex. Under the hood it looks up the lock in a hash table, keyed by the object passed in

    @synchronized ([self class]) {   // objc_sync_enter
        // the work to do
    }                                // objc_sync_exit
    // Any object can be passed in the parentheses as the lock object; when the objects are the same, the lock is shared

The above locks are ranked in order of performance from highest to lowest:

  • os_unfair_lock
  • OSSpinLock
  • dispatch_semaphore
  • pthread_mutex
  • GCD serial queue
  • NSLock
  • NSCondition
  • pthread_mutex(recursive)
  • NSRecursiveLock
  • NSConditionLock
  • @synchronized

dispatch_semaphore and pthread_mutex are recommended for the best performance

dispatch_once guarantees that a block of code is executed only once during the lifetime of the program, which is why it was used above to initialize a shared lock

Atomicity and non-atomicity of attributes:

  • Atomic: adding the atomic modifier to a property ensures that the property's setter and getter are atomic operations; in other words, the inside of the setter and getter is thread-synchronized

Atomic operation: multiple lines of code are guaranteed to execute in order as a single unit, as if they were one line of code

atomic does not make the use of the property thread-safe; only the inside of the setter and getter is thread-safe

atomic also costs performance, because the setter and getter of a property are called very frequently while the lock is only actually needed some of the time (see the sketch below)
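
A minimal sketch of the limitation described above: even with an atomic property, a compound operation such as incrementing is not thread-safe, because it is a separate get followed by a set:

    @property (atomic, assign) NSInteger count;   // the setter and getter are individually synchronized

    // Not thread-safe: two threads can both read the same old value
    // and both write back oldValue + 1, losing one increment
    - (void)increment {
        self.count = self.count + 1;
    }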

File I/O operations:

Multiple readers, single writer (read/write safety): 1. multiple threads are allowed to read the file at the same time; 2. only a single thread is allowed to write at a time; 3. reading and writing are not allowed to happen at the same time

  • pthread_rwlock: a read-write lock; the thread waiting for the lock goes to sleep
    #import <pthread.h>

    pthread_rwlock_init(&_lock, NULL);   // initialize the lock
    pthread_rwlock_rdlock(&_lock);       // read lock
    pthread_rwlock_wrlock(&_lock);       // write lock
    pthread_rwlock_unlock(&_lock);       // unlock
  • dispatch_barrier_async

While the barrier task is executing, it acts like a fence: other tasks in the queue (including reads) cannot execute at the same time

    // the queue must be a concurrent queue that you created yourself
    self.queue = dispatch_queue_create("queue", DISPATCH_QUEUE_CONCURRENT);

    - (void)read {
        dispatch_async(self.queue, ^{
            // read operation
        });
    }

    - (void)write {
        dispatch_barrier_async(self.queue, ^{
            // write operation
        });
    }

Note: the queue passed in must be one you created yourself with dispatch_queue_create. If a serial queue or a global concurrent queue is passed in, this function is equivalent to dispatch_async
