What is a thread
A thread is the smallest unit of execution flow within a process. A thread is composed of the following parts:
- Thread ID
- Current instruction pointer
- Register set
- The stack
When an iOS app is launched, the operating system creates a process for it. A process consists of one or more threads.
Thread-shared space: all threads in a program share its code segment (the executable code loaded into memory), data segment (also called the global area), heap, and open files.
Thread-private space: the stack, Thread Local Storage (TLS, a small amount of storage used only for per-thread state such as the thread's state machine and control-flow identifiers), and the register set (which holds the basic data units the execution flow is operating on and is saved and restored by the operating system when it switches threads). The split is summarized in the table below and illustrated in the sketch after it.
Thread private | Shared across threads |
---|---|
Local variables | Global variables |
Function parameters | Static variables |
TLS data | Heap data |
 | Code segment |
 | Open file streams |
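As a rough illustration of the table above, here is a minimal Swift sketch, assuming nothing beyond Foundation; the names `sharedConfig` and `worker` are made up. Each thread gets its own copy of local variables on its private stack, while global and heap data exist once and are visible to every thread.

```swift
import Foundation

let sharedConfig = ["mode": "production"]   // heap/global data: one copy, visible to every thread

func worker(id: Int) {
    var localCounter = 0                     // local variable: lives on this thread's private stack
    for _ in 0..<3 { localCounter += 1 }
    // Each thread sees its own localCounter but the same sharedConfig instance.
    print("thread \(id): localCounter=\(localCounter), mode=\(sharedConfig["mode"]!)")
}

for id in 0..<3 {
    Thread.detachNewThread { worker(id: id) }
}
Thread.sleep(forTimeInterval: 0.5)           // crude wait so the detached threads can finish printing
```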
Why multithreading is needed
Whether the processor is single-core or multi-core, multithreading is an important way to make full use of resources, and mainstream phones now ship with at least a dual-core processor. Multithreading gracefully handles the following problems (a GCD sketch follows the list):
- An operation may get stuck in a long wait
- A long task consumes a lot of time
- Some logic inherently requires concurrent processing
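A common concrete case of the first two points is moving slow work off the main thread so the UI stays responsive. A minimal sketch with GCD; the functions `loadLargeFile` and `updateUI` are hypothetical stand-ins.

```swift
import Foundation

func loadLargeFile() -> Data { Data() }   // stand-in for a slow, blocking operation
func updateUI(with data: Data) { }        // stand-in for main-thread UI work

// Run the slow work on a background queue, then hop back to the main queue for UI.
DispatchQueue.global(qos: .userInitiated).async {
    let data = loadLargeFile()
    DispatchQueue.main.async {
        updateUI(with: data)
    }
}
```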
Thread scheduling
A single-core CPU has no concurrency in the true sense. The operating system splits time into small slices, typically tens to hundreds of milliseconds, allocates them to threads in turn, and runs each thread in its slice, so execution looks concurrent; this is thread scheduling. Each thread has its own priority: higher-priority threads always run first and lower-priority threads run later. If a high-priority thread keeps occupying the CPU, the operating system puts it to sleep after a period of time so that low-priority threads do not starve. Threads generally have four states:
- Running
- Ready
- Waiting
- Blocked
Thread state transitions (summarized as a small state machine after this list):
- A running thread whose time slice runs out enters the ready state
- When a running thread reaches a critical section it tries to acquire the resource; if it succeeds it keeps executing, otherwise it becomes blocked
- A running thread can voluntarily enter the waiting state because of business logic
- A ready thread that is allocated a time slice enters the running state
- A waiting thread whose wait completes enters the ready state
- When a blocked thread is allocated a time slice, it checks the critical-section resource; if it acquires the resource it enters the running state, otherwise it stays blocked
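The four states and the transitions above can be written down as a tiny state machine. This is purely illustrative, the type names `ThreadState` and `SchedulerEvent` are made up, and the real scheduler lives in the kernel.

```swift
enum ThreadState { case running, ready, waiting, blocked }

enum SchedulerEvent {
    case timeSliceExpired        // running -> ready
    case lockUnavailable         // running -> blocked
    case waitRequested           // running -> waiting
    case timeSliceGranted        // ready   -> running
    case waitFinished            // waiting -> ready
    case lockAcquired            // blocked -> running
}

func transition(from state: ThreadState, on event: SchedulerEvent) -> ThreadState {
    switch (state, event) {
    case (.running, .timeSliceExpired): return .ready
    case (.running, .lockUnavailable):  return .blocked
    case (.running, .waitRequested):    return .waiting
    case (.ready,   .timeSliceGranted): return .running
    case (.waiting, .waitFinished):     return .ready
    case (.blocked, .lockAcquired):     return .running
    default:                            return state   // no transition for other combinations
    }
}
```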
Multi-threaded security issues
In a multithreaded environment, global variables and heap data may be changed by other threads at any time, which introduces great uncertainty. The result the program logic expects, however, is usually fixed: if data consistency is not guaranteed, the logic cannot produce the expected result, and the program becomes meaningless or may even crash. It is therefore important to ensure that a critical operation executes as a whole, without interruption (atomicity).
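A minimal sketch of the problem: two threads increment the same counter without synchronization, and the final value is usually lower than expected because `value += 1` is not one atomic step (the read, add, and write can interleave). The `Counter` type is made up for illustration.

```swift
import Foundation

final class Counter {
    private(set) var value = 0
    func increment() { value += 1 }   // read-modify-write, not atomic
}

let counter = Counter()
let group = DispatchGroup()

for _ in 0..<2 {
    DispatchQueue.global().async(group: group) {
        for _ in 0..<100_000 { counter.increment() }
    }
}
group.wait()
print(counter.value)   // usually < 200000 because increments interleave
```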
Classification and properties of locks
Multiple threads accessing the same data can produce unpredictable results. Locking synchronizes access to shared data across threads, which is what provides atomicity.
Semaphore:
When a thread wants to access the resource, it first waits on the semaphore, which decrements its value by 1. If the value drops below 0, the thread blocks; otherwise it may proceed.
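A sketch of the wait/signal mechanics using DispatchSemaphore (covered again in the iOS section below): the waiting side blocks on `wait()` until the producing side calls `signal()`.

```swift
import Foundation

// Semaphore starts at 0, so the first wait() blocks until someone signals.
let semaphore = DispatchSemaphore(value: 0)

DispatchQueue.global().async {
    // ... produce something ...
    semaphore.signal()   // increments the value, waking one waiting thread
}

semaphore.wait()         // decrements the value; blocks while it would go below 0
// safe to consume the produced result here
```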
Mutex
A mutex guarantees that only one thread can access a critical section at a time. Unlike a semaphore, which can be waited on and signaled by different threads, a mutex must be unlocked by the same thread that locked it.
Read-write lock
A read-write lock improves efficiency in certain multithreaded scenarios: it allows multiple threads to read at the same time, but while any thread is writing, no other thread may read or write. A read-write lock has two acquisition modes, shared and exclusive. When the lock is free, acquisition in either mode succeeds. When the lock is held in shared mode, another shared acquisition still succeeds, while an exclusive acquisition waits until all holders release the lock. When the lock is held in exclusive mode, it refuses acquisition of any kind, as summarized in the table and sketched below.
Read-write lock state | Acquire in shared mode | Acquire in exclusive mode |
---|---|---|
Free | Succeeds | Succeeds |
Shared | Succeeds | Waits |
Exclusive | Waits | Waits |
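On iOS, one common way to get these read-write semantics is a concurrent dispatch queue whose writes are submitted with a barrier: readers run concurrently (shared access), while a barrier block waits for in-flight readers and then runs alone (exclusive access). A minimal sketch; the `ReadWriteStore` type and its API are made up.

```swift
import Foundation

final class ReadWriteStore<Value> {
    private var value: Value
    private let queue = DispatchQueue(label: "rw.store", attributes: .concurrent)

    init(_ value: Value) { self.value = value }

    // Shared access: many readers may run at the same time.
    func read() -> Value {
        queue.sync { value }
    }

    // Exclusive access: the barrier waits for in-flight readers, then runs alone.
    func write(_ newValue: Value) {
        queue.async(flags: .barrier) { self.value = newValue }
    }
}
```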
Condition variable
A condition variable applies to scenarios where multiple threads wait for some operation to complete; when it completes, either a single waiting thread or all waiting threads can be woken up.
Threads in iOS
The operating system implements a thread factory, the thread pool, and a supporting manager, the dispatch center. iOS exposes no interface for manipulating the thread pool directly; thread creation, scheduling, and destruction are left to it. When we try to create a thread, the dispatch center checks the state of the thread pool, the number of running threads, and the scheduling policy, and when it is time to execute, the thread pool decides whether to spin up a new thread or reuse an existing idle one. Threads that stay idle for a long time are reclaimed (usually after about 10 s). The operating system equips each thread with a set of data structures for running: a stack, an autorelease pool, a run loop, and some flag variables that track state, all stored in TLS.
- Stack: stores function parameter values, local variable values, and so on
- AutoreleasePool: stores the autoreleased objects generated by the current thread
- Runloop: a while loop that keeps the thread running. Apple encapsulates it as a class that registers observers internally and implements a state machine that can wake up to do work and go back to sleep as needed (see the worker-thread sketch below)
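A common pattern tied to the run loop bullet above is a long-lived worker thread that stays alive by running its run loop. A sketch under the assumption that attaching a dummy port is enough to keep the loop from exiting; the thread name is hypothetical.

```swift
import Foundation

// A worker thread whose run loop keeps it alive until the app stops it.
let worker = Thread {
    autoreleasepool {
        // Attach a dummy port so the run loop has at least one source and does not exit.
        RunLoop.current.add(NSMachPort(), forMode: .default)
        RunLoop.current.run()   // loops: process events, then sleep until the next one
    }
}
worker.name = "com.example.worker"   // hypothetical name
worker.start()

// Work can later be scheduled onto that thread, e.g. with perform(_:on:with:waitUntilDone:).
```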
Locks in iOS
This section only discusses the principles and applicable scenarios of each lock; for full usage, see Apple's API reference or the official documentation.
Mutex: NSLock
Synchronizes access to critical-section resources across multiple threads; under the hood it wraps pthread_mutex.
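A minimal NSLock sketch protecting the kind of unsynchronized counter shown earlier; the `LockedCounter` name is made up.

```swift
import Foundation

final class LockedCounter {
    private let lock = NSLock()
    private var value = 0

    func increment() {
        lock.lock()
        defer { lock.unlock() }   // always release, even on early return
        value += 1
    }

    func current() -> Int {
        lock.lock()
        defer { lock.unlock() }
        return value
    }
}
```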
Recursive lock: NSRecursiveLock
A recursive lock is a mutex that the same thread can lock multiple times without deadlocking. It encapsulates pthread_mutex of the recursive type.
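A sketch of the one thing a recursive lock adds: the same thread may re-enter the lock, as in a recursive function, without deadlocking (with a plain NSLock, the inner `lock()` would deadlock). The `countdown` function is made up.

```swift
import Foundation

let recursiveLock = NSRecursiveLock()

func countdown(_ n: Int) {
    recursiveLock.lock()
    defer { recursiveLock.unlock() }
    guard n > 0 else { return }
    countdown(n - 1)   // re-acquires the lock on the same thread; allowed, just counted
}

countdown(5)
```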
Mutex: @synchronized
Its usage differs from NSLock, but the function is the same.
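@synchronized is an Objective-C construct; from Swift, the closest equivalent is the runtime pair objc_sync_enter/objc_sync_exit, which lock on an object's identity. A rough sketch; the `Account` type is made up.

```swift
import Foundation

final class Account {
    private var balance = 0

    func deposit(_ amount: Int) {
        objc_sync_enter(self)          // roughly what @synchronized(self) { ... } does in Objective-C
        defer { objc_sync_exit(self) }
        balance += amount
    }
}
```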
Spin lock: OSSpinLock
A thread that fails to acquire the lock does not go to sleep; it keeps trying to acquire it. OSSpinLock itself works as a spin lock, but combined with thread scheduling it suffers from a priority inversion problem.
What exactly is priority inversion:
- Thread A has the highest priority
- Thread B has a medium priority
- Thread C has the lowest priority
- A and C share a critical-section resource
- C starts executing before A, and A waits for C to release the lock
C's time slice runs out before it finishes, so the lock is still held. A, with the highest priority, is then given a time slice and tries to acquire the lock, but fails because C has not released it, so A stays blocked. B, having a higher priority than C, gets the next time slice, which creates the illusion that A is waiting for B; this phenomenon is called priority inversion. How do we avoid it? When designing multithreaded tasks, keep concurrently running threads at the same priority as far as possible, i.e. give A and C the same priority, so that while B runs, A and C wait together. If the business logic really does require strict priorities, set up a hard dependency instead, so that C executes only after A has completed.
The real problem with OSSpinLock is that the allocated CPU time slice is wasted spinning on a single instruction until the slice runs out, and on a single-core CPU the spinning is futile, because the lock cannot be released while the spinning thread occupies the CPU. OSSpinLock is therefore suited to short, concurrent tasks on multi-core CPUs; use it only where that fits.
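OSSpinLock has been deprecated since iOS 10 precisely because of the priority inversion described above; Apple's replacement is os_unfair_lock, which blocks instead of spinning. A minimal wrapper sketch; the `UnfairLock` class is made up.

```swift
import os

// Heap-allocate the lock so its address stays stable for the C API.
final class UnfairLock {
    private let pointer: UnsafeMutablePointer<os_unfair_lock>

    init() {
        pointer = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        pointer.initialize(to: os_unfair_lock())
    }

    deinit {
        pointer.deinitialize(count: 1)
        pointer.deallocate()
    }

    func lock() { os_unfair_lock_lock(pointer) }
    func unlock() { os_unfair_lock_unlock(pointer) }
}
```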
Condition lock: NSConditionLock
When it is allocated a time slice, NSConditionLock checks whether its condition matches; if so it executes directly, otherwise it gives up the time slice and the thread enters the waiting state. NSConditionLock implements inter-thread dependencies through its condition value and, combined with a mutex, can be used to implement a read-write lock.
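A sketch of the inter-thread dependency NSConditionLock provides: the consumer can only take the lock once the producer has set the condition value. The `empty`/`filled` constants and the buffer are made up.

```swift
import Foundation

let empty = 0, filled = 1
let conditionLock = NSConditionLock(condition: empty)
var buffer: [Int] = []

// Producer
DispatchQueue.global().async {
    conditionLock.lock()                      // condition does not matter for the producer
    buffer.append(42)
    conditionLock.unlock(withCondition: filled)
}

// Consumer
DispatchQueue.global().async {
    conditionLock.lock(whenCondition: filled) // blocks until the condition value is `filled`
    print(buffer.removeFirst())
    conditionLock.unlock(withCondition: empty)
}
```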
Condition variable: NSCondition
Condition variables apply to scenarios where multiple threads wait for some operation to complete; when it completes, either a single waiting thread or all waiting threads can be woken up.
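A classic NSCondition sketch: waiters sleep until the predicate becomes true, and the thread that changes the predicate either signals (wake one) or broadcasts (wake all). The `ready` flag is made up.

```swift
import Foundation

let condition = NSCondition()
var ready = false

// Waiting thread
DispatchQueue.global().async {
    condition.lock()
    while !ready {            // loop guards against spurious wakeups
        condition.wait()
    }
    print("proceeding")
    condition.unlock()
}

// Signaling thread
DispatchQueue.global().async {
    condition.lock()
    ready = true
    condition.broadcast()     // or condition.signal() to wake a single waiter
    condition.unlock()
}
```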
Semaphore: dispatch_semaphore
Controls the degree of multithreaded concurrency. It is fundamentally different from a mutex: a mutex controls simultaneous access to a resource and does not care how many threads run concurrently, whereas a semaphore can strictly limit the number of concurrently executing threads.
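A sketch of that last point, strictly limiting concurrency: with an initial value of 2, at most two of the submitted tasks run inside the guarded region at any moment. The task count and the sleep are arbitrary.

```swift
import Foundation

let semaphore = DispatchSemaphore(value: 2)   // at most 2 tasks inside the guarded region

for task in 0..<6 {
    DispatchQueue.global().async {
        semaphore.wait()                      // blocks once 2 tasks are already running
        defer { semaphore.signal() }
        print("task \(task) running")
        Thread.sleep(forTimeInterval: 0.5)    // stand-in for real work
    }
}
```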