# Multithreading Principles
This chapter describes the basic concepts and principles of processes, threads, multithreading, and thread pools.
1. Thread
- A thread is the basic execution unit of a process. All tasks of a process are executed in a thread
- In order for a process to execute a task, it must have threads. The process must have at least one thread
- When the program starts, a thread is started by default. This thread is called the main thread or UI thread
2. Process
- A process is an application that is running on the system
- Each process is independent, and each process runs in its own dedicated and protected memory space
- Activity Monitor lets you view the processes running on a Mac
3. Relationship between processes and threads
A process is a container for the threads that execute its tasks. iOS development is single-process: one process is one app. The relationship is like that of a factory and its assembly lines: factories are independent of one another, while the assembly lines inside a factory share that factory's resources. The process is the factory; a thread is one assembly line inside it. The relationship between processes and threads mainly involves two aspects:
- Address space
  - Threads of the same process share that process's address space
  - Each process has its own independent address space
- Resource ownership
  - Threads within a process share resources such as memory, I/O, and CPU
  - Resources are independent between processes
Extension
- Multiple processes are more robust than multiple threads
  - In protected mode, a crash in one process does not affect other processes
  - If one thread crashes, the whole process dies with it
- Use cases: frequent switching and concurrent operations
  - Switching between processes consumes far more resources and is less efficient, so when frequent switching is required, threads are preferable to processes
  - Likewise, for concurrent operations that must share some variables, only threads can be used, not processes
- Execution
  - Each process has its own program entry point, execution sequence, and program exit
  - A thread cannot execute independently; it must live inside an application, which provides control over the execution of its multiple threads
- Threads are the basic unit of processor scheduling; processes are not
- A thread has no address space of its own; it is contained within its process's address space
4. Multithreading
4.1 Principle
- For a single-core CPU, the CPU can only process one thread at a time, that is, only one thread is working.
- What looks like multiple threads executing simultaneously in iOS is really the CPU switching rapidly among multiple tasks. Because the CPU schedules threads fast enough, it creates the effect of simultaneous execution. The interval between switches is the time slice
4.2 Advantages
- Can appropriately improve the execution efficiency of a program
- The utilization of resources, such as CPU and memory, can be improved
- When the tasks on the thread finish executing, the thread is automatically destroyed
4.3 Disadvantages
- Enabling threads requires a certain memory space. By default, each thread occupies 512KB
- If a large number of threads are enabled, a large amount of memory space is occupied and the performance of the program is degraded
- The more threads there are, the more CPU time is spent scheduling them
- Programming becomes more complex, e.g. communication between threads and sharing data across multiple threads
4.4 Life Cycle
The thread life cycle is divided into five main states: New, Ready, Running, Blocked, and Dead
- New: the thread object is instantiated
- Ready: the thread object calls start, which adds it to the schedulable thread pool to wait for CPU scheduling. start does not execute the task immediately; the thread waits in the ready state for a while and only runs once the CPU schedules it, at which point it moves from ready to running
- Running: the CPU schedules threads from the schedulable thread pool for execution. Until the task finishes, a thread may switch back and forth between ready and running; the CPU drives this transition and the developer cannot intervene
- Blocked: when a predetermined condition is met, the thread can be blocked with sleep or a synchronization lock. When the sleep ends or the lock is released, the thread is re-added to the ready state. NSThread provides the following ways to block:
  - sleepUntilDate: blocks the current thread until the specified time, i.e. sleeps until that date
  - sleepForTimeInterval: sleeps the thread for the given interval, i.e. sleeps for a specified duration
  - @synchronized(self): synchronization lock
- Dead: there are two cases
  - Normal death: the thread finishes executing its task
  - Abnormal death: the thread is terminated from within (or from the main thread) when some condition is met, e.g. by calling exit
Briefly, a running thread gets a period of time (the time slice) in which it can execute:
- If the time slice runs out, the thread goes back into the ready-state queue
- If the time slice is not used up but the thread has to wait for an event, it enters the blocked-state queue
- When the awaited event occurs, the thread re-enters the ready-state queue
- Whenever a thread leaves the running state, whether by finishing or being forced out, another thread is selected from the ready-state queue to continue execution
Exit and cancel of a thread
- Exit: once the thread is forcibly terminated, none of the remaining code executes
- Cancel: cancels the current thread, but cannot stop a thread that is already executing; it only sets the thread's cancelled flag (see the sketch below)
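A minimal sketch of the difference (the worker method name and loop are illustrative, not from the original post):

```objc
// Hypothetical thread entry point illustrating cancel vs. exit.
- (void)workerEntryPoint {
    for (NSInteger i = 0; i < 1000; i++) {
        // -cancel only sets the isCancelled flag; the thread must check it and return on its own.
        if ([NSThread currentThread].isCancelled) {
            return;
        }
        // +exit would forcibly terminate the current thread; any code after it never runs.
        // [NSThread exit];
        NSLog(@"working %zd", i);
    }
}
```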
Extension: time slices
The CPU switches rapidly between tasks; the interval between switches is known as the time slice
- (Single-core CPU) The CPU can process only one thread at a time
- In other words, only one thread is executing at any one time
- Simultaneous execution of multiple threads:
- It allows the CPU to quickly switch between multiple threads
- The CPU schedules threads fast enough to create the effect of “simultaneous” execution of multiple threads
- If you have a lot of threads
- The CPU switches between N threads, which consumes a lot of CPU resources
- The number of times each thread is scheduled is reduced and the execution efficiency of the thread is reduced
4.5 Q&A: Does a higher thread priority make the task execute faster?
No. How fast a thread finishes depends not only on its priority but also on the size (i.e. complexity) of its task and on CPU scheduling. In NSThread, threadPriority has been superseded by qualityOfService; the associated enum values are:
```objc
typedef NS_ENUM(NSInteger, NSQualityOfService) {
    NSQualityOfServiceUserInteractive = 0x21,
    NSQualityOfServiceUserInitiated = 0x19,
    NSQualityOfServiceUtility = 0x11,
    NSQualityOfServiceBackground = 0x09,
    NSQualityOfServiceDefault = -1
} API_AVAILABLE(macos(10.10), ios(8.0), watchos(2.0), tvos(9.0));
```
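As a hedged illustration (not from the original post), this is how a quality-of-service hint might be attached to an NSThread; threadTest is the demo method used later in this article, and the value is only a scheduling hint, not a guarantee of faster completion:

```objc
NSThread *thread = [[NSThread alloc] initWithTarget:self
                                           selector:@selector(threadTest)
                                             object:nil];
// Replaces the deprecated threadPriority; a hint to the scheduler, not a guarantee.
thread.qualityOfService = NSQualityOfServiceUserInitiated;
[thread start];
```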
5 Thread Pool Principle
- [Step 1] Check whether the pool's core threads are all busy executing tasks
  - If not, create a new worker thread to execute the task
  - If yes, go to [Step 2]
- [Step 2] Check whether the thread pool's work queue is full
  - If not, store the task in the work queue, where it waits to be scheduled
  - If yes, go to [Step 3]
- [Step 3] Check whether all threads in the pool are busy executing
  - If not, schedule an idle thread from the schedulable thread pool to execute the task
  - If yes, go to [Step 4]
- [Step 4] Hand the task over to the saturation policy, of which there are four main kinds (these policies come from Java's thread pool and do not exist as such in iOS; a hypothetical model follows this list):
  - AbortPolicy: throws a RejectedExecutionException directly, preventing the system from running normally
  - CallerRunsPolicy: pushes the task back to the caller to run
  - DiscardOldestPolicy: discards the task that has been waiting the longest
  - DiscardPolicy: discards the new task outright
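None of this exists as a public iOS API; purely as a hypothetical sketch (all names below are made up), the decision flow above could be modeled like this:

```objc
#import <Foundation/Foundation.h>

// Hypothetical model of the four-step dispatch decision (not an iOS API).
typedef NS_ENUM(NSInteger, PoolDecision) {
    PoolDecisionNewCoreThread,   // step 1: a core thread is free
    PoolDecisionEnqueue,         // step 2: the work queue still has room
    PoolDecisionIdleThread,      // step 3: an idle pooled thread can take the task
    PoolDecisionSaturation       // step 4: apply a saturation policy
};

static PoolDecision PoolDecide(BOOL coreThreadsAllBusy, BOOL workQueueFull, BOOL allThreadsBusy) {
    if (!coreThreadsAllBusy) return PoolDecisionNewCoreThread;
    if (!workQueueFull)      return PoolDecisionEnqueue;
    if (!allThreadsBusy)     return PoolDecisionIdleThread;
    return PoolDecisionSaturation; // Abort / CallerRuns / DiscardOldest / Discard
}
```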
6 Multithreading technology options
There are four main ways to implement multithreading in iOS: pthread, NSThread, GCD, and NSOperation. Here are simple examples of each:
```objc
#import <pthread.h>

void *pthreadTest(void *para); // forward declaration of the C thread function below

// *********1: pthread*********
pthread_t threadId = NULL;
// OC string passed to the new thread (bridged to void *)
NSString *ocString = @"HelloCode";
/**
 pthread_create parameters:
 1. &threadId: pointer to the thread structure. C framework types usually end in '_t'/'Ref' and take no '*'
 2. Thread attributes: NULL (an empty address, used in C) / nil (an empty object, used in OC)
 3. pthreadTest: the function to run, of type void *(*)(void *); void * is roughly the id of C
 4. The parameter passed to that function
 */
int result = pthread_create(&threadId, NULL, pthreadTest, (__bridge void *)(ocString));
if (result == 0) {
    NSLog(@"Success");
} else {
    NSLog(@"Failure");
}

// *********2: NSThread*********
[NSThread detachNewThreadSelector:@selector(threadTest) toTarget:self withObject:nil];

// *********3: GCD*********
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [self threadTest];
});

// *********4: NSOperation*********
[[[NSOperationQueue alloc] init] addOperationWithBlock:^{
    [self threadTest];
}];

- (void)threadTest {
    NSLog(@"begin");
    NSInteger count = 1000 * 100;
    for (NSInteger i = 0; i < count; i++) {
        // stack area
        NSInteger num = i;
        // constant area
        NSString *name = @"zhang";
        // heap area
        NSString *myName = [NSString stringWithFormat:@"%@ - %zd", name, num];
        NSLog(@"%@", myName);
    }
    NSLog(@"over");
}

void *pthreadTest(void *para) {
    // __bridge casts the void * back to an OC object without transferring ownership
    NSString *name = (__bridge NSString *)(para);
    NSLog(@"===> %@ %@", [NSThread currentThread], name);
    return NULL;
}
```
Extension: bridging between Core Foundation and Objective-C
- __bridge performs only a type conversion and does not transfer ownership (memory management) of the object
- __bridge_retained (or CFBridgingRetain) converts an Objective-C object to a Core Foundation object and hands memory management over to us; the object must later be released with CFRelease or a related method
- __bridge_transfer (or CFBridgingRelease) converts a Core Foundation object to an Objective-C object and hands memory management of the object over to ARC
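A minimal sketch of the three casts (the string values are arbitrary):

```objc
#import <Foundation/Foundation.h>

NSString *ocString = @"HelloCode";

// __bridge: type conversion only; ARC keeps ownership of ocString.
CFStringRef cfString = (__bridge CFStringRef)ocString;
NSLog(@"%@", (__bridge NSString *)cfString);

// __bridge_retained / CFBridgingRetain: ownership moves to Core Foundation;
// we are now responsible for releasing it.
CFStringRef retained = (CFStringRef)CFBridgingRetain(ocString);
CFRelease(retained);

// __bridge_transfer / CFBridgingRelease: a CF object is handed over to ARC.
CFStringRef created = CFStringCreateWithCString(kCFAllocatorDefault, "HelloCode", kCFStringEncodingUTF8);
NSString *transferred = CFBridgingRelease(created); // ARC now manages it
NSLog(@"%@", transferred);
```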
7 Thread safety issues
When multiple threads access the same resource at the same time, data races and data corruption can easily occur. Two common solutions are discussed below:
- Mutex (synchronization lock): @synchronized
- Spin lock
7.1 Mutex
- Used to protect critical sections and ensure that only one thread can execute at a time
- If only one place in the code needs to be locked, self is usually used as the lock object, to avoid creating a separate lock object
- When a new thread reaches the locked code while another thread is executing it, the new thread goes to sleep
There are a few other things to note about mutex:
- The mutex lock range should be as small as possible. The larger the lock range, the worse the efficiency
- Any object that inherits from NSObject can be used as a lock
- The lock object must be accessible to all threads
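A minimal sketch, assuming a hypothetical ticketCount property on self, of using @synchronized(self) as the mutex:

```objc
// Hypothetical ticket-selling method guarded by a mutex.
- (void)sellTicket {
    @synchronized (self) {
        // Only one thread at a time can be inside this critical section.
        if (self.ticketCount > 0) {
            self.ticketCount -= 1;
            NSLog(@"sold one ticket, %zd left", (NSInteger)self.ticketCount);
        }
    }
}
```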
7.2 Spin lock
- A spin lock is similar to a mutex, but instead of putting the thread to sleep to block it, the thread stays busy (spinning in place, hence "spin") until the lock is acquired
- Use case: when the lock will only be held for a short time and the thread does not want to pay the cost of being rescheduled, a spin lock is appropriate. The property attribute atomic comes with a spin lock of its own
- With a spin lock in place, when a new thread reaches the code while another thread holds the lock, the new thread busy-waits in a loop until the locked code has finished executing
Q&A: spin lock vs mutex
- What they have in common:
  - At any given moment, only one thread is allowed to execute the task, i.e. both provide synchronization
- How they differ:
  - Mutex: if another thread is found to be executing, the current thread goes to sleep (the waiting state), i.e. it is suspended; once the other thread releases the lock, it is woken up to execute
  - Spin lock: if another thread is found to be executing, the current thread keeps polling (busy-waiting) until the lock is released; for short waits this avoids a context switch, so performance is higher
- Scenario: choose the lock according to the complexity of the task (see the sketch below)
  - Use a spin lock when the task is short and simple
  - Otherwise, use a mutex
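For reference, the classic spin lock OSSpinLock has been deprecated since iOS 10 because of priority-inversion problems, and Apple recommends os_unfair_lock in its place (it does not busy-wait, so it is not a true spin lock). A minimal sketch contrasting it with a mutex such as NSLock:

```objc
#import <Foundation/Foundation.h>
#import <os/lock.h>

// Mutex: a competing thread is put to sleep until the lock is released.
NSLock *mutexLock = [[NSLock alloc] init];
[mutexLock lock];
// ... critical section ...
[mutexLock unlock];

// os_unfair_lock (iOS 10+): the low-level replacement for the deprecated OSSpinLock,
// suited to very short, fast critical sections.
os_unfair_lock unfairLock = OS_UNFAIR_LOCK_INIT;
os_unfair_lock_lock(&unfairLock);
// ... critical section ...
os_unfair_lock_unlock(&unfairLock);
```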
7.3 atomic and nonatomic
atomic and nonatomic are mainly used as property attributes. Some notes:
- atomic: atomic property, the default
  - A lock (a spin lock) is added to the property's setter so that only one thread can write to the property at a time
  - Handles the "single writer, multiple readers" pattern across threads
  - Commonly used in Mac development
- nonatomic: non-atomic property
  - No lock! Higher performance!
  - Commonly used in mobile development
The difference between atomic and nonatomic
- nonatomic
  - Non-atomic property
  - Not thread-safe; suitable for mobile devices with limited memory
- atomic
  - Atomic property (atomic access to the getter/setter), designed for multithreading, and the default
  - Ensures that only one thread can write at a time (while multiple threads may read at the same time)
  - atomic comes with its own lock (a spin lock): single writer, multiple readers
Advice
- Declare all properties as nonatomic
- Avoid having multiple threads contend for the same resource; try to move locking and resource-contention logic to the server side to reduce the load on the mobile client
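A minimal property-declaration sketch (the Person class and its properties are illustrative):

```objc
@interface Person : NSObject

// atomic (the default): the generated getter/setter are locked,
// so two threads cannot write the value at the same time.
@property (atomic, copy) NSString *job;

// nonatomic: no lock, higher performance, the usual choice in iOS development.
@property (nonatomic, copy) NSString *name;

@end
```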
8 Communication between threads
According to the Threading Programming Guide, there are several ways to communicate between threads:
- Direct messaging: through a set of performSelector methods, one thread can have a task performed on another thread. Because the task executes in the context of the target thread, messages sent this way are automatically serialized (see the sketch after this list)
- Global variables, shared memory Blocks, and objects: Another easy way to pass information between two threads is to use global variables, shared objects, or shared memory blocks. Although shared variables are fast and simple, they are more vulnerable than direct messaging. Shared variables must be carefully protected with locks or other synchronization mechanisms to ensure correct code. Failure to do so may result in race conditions, data corruption, or a crash.
- Conditional execution: A condition is a synchronization tool that can be used to control when a thread executes a particular part of code. You can treat a condition as a gatekeeper, letting a thread run only if a specified condition is met.
- Runloop Sources: A custom Runloop Source configuration that allows specific application messages to be received on a thread. Because the Runloop Source is event-driven, the thread automatically goes to sleep when there is nothing to do, improving the efficiency of the thread
- Ports and sockets: a more complex way for two threads to communicate, but also a very reliable technique. More importantly, ports and sockets can communicate with external entities such as other processes and services. For efficiency, a port is implemented using a runloop source, so the thread sleeps when no data is waiting on the port. Note that port communication requires the port to be added to the runloop of the main thread, otherwise the port callback will not be invoked
- Message queues: Traditional multiprocessing services define a first-in, first-out (FIFO) queue abstraction to manage incoming and outgoing data. Although message queues are simple and convenient, they are not as efficient as some other communication technologies
- Cocoa Distributed Objects: A Distributed object is a Cocoa technology that provides an advanced implementation of port-based communication. Although this technique can be used for inter-thread communication, it is highly recommended not to do so because of the overhead involved. Distributed objects are better suited for communicating with other processes, although transactions between those processes are also expensive
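A minimal sketch of the "direct messaging" approach, assuming a hypothetical image download on self (the URL, imageView property, and method names are illustrative):

```objc
#import <UIKit/UIKit.h>

- (void)startDownload {
    // Kick the work off on a background thread.
    [self performSelectorInBackground:@selector(downloadImage) withObject:nil];
}

- (void)downloadImage {
    NSURL *url = [NSURL URLWithString:@"https://example.com/image.png"];
    UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:url]];
    // Hop back to the main thread to update the UI.
    [self performSelectorOnMainThread:@selector(showImage:) withObject:image waitUntilDone:NO];
}

- (void)showImage:(UIImage *)image {
    self.imageView.image = image; // UI work stays on the main thread
}
```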
9 Relationship between threads and runloops
- Runloops and threads correspond one to one: each thread has one core runloop. It is called the core runloop because runloops can be nested, but there can be only one core runloop per thread; the thread-to-runloop relationship is stored in a global dictionary.
- The runloop exists to manage the thread: when a runloop is started, the thread goes to sleep after finishing its tasks and is woken up when a new task arrives.
- The runloop is created when it is first fetched and destroyed when the thread ends.
- For the main thread, the runloop is created by default as soon as the program starts.
- For the child thread, the runloop is lazily loaded and created only when we use it, so be careful when using a timer for the child thread: make sure the runloop is created for the child thread, otherwise the timer will not call back.
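A minimal sketch of that last point, keeping a timer alive on a child thread by starting the thread's runloop (the method names are illustrative):

```objc
- (void)startWorkerThread {
    NSThread *thread = [[NSThread alloc] initWithTarget:self selector:@selector(threadEntry) object:nil];
    [thread start];
}

- (void)threadEntry {
    // The child thread's runloop is created lazily on first access.
    NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
    NSTimer *timer = [NSTimer timerWithTimeInterval:1.0
                                             target:self
                                           selector:@selector(tick)
                                           userInfo:nil
                                            repeats:YES];
    [runLoop addTimer:timer forMode:NSDefaultRunLoopMode];
    // Without running the runloop, the timer would never fire on this thread.
    [runLoop run];
}

- (void)tick {
    NSLog(@"tick on %@", [NSThread currentThread]);
}
```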