This article covers the following key points:

- Process and thread definitions
- Multithreading and its principle
- The life cycle of a thread
- Schedulable thread pools and saturation policies
- Spin locks and mutexes
1. Definition of processes and threads
1.1 Process
A process is an application that is running in the system. Each process is independent, running in its own dedicated and protected memory space. On the Mac you can view the running processes in Activity Monitor.
1.2 Thread
A thread is the basic execution unit of a process: all of a process's tasks are executed on threads. For a process to execute any task, it must have at least one thread.
1.3 Relationship between processes and threads
- Address space: Threads of the same process share that process's address space, while processes have independent address spaces.
- Resource ownership: Threads of the same process share its resources, such as memory, I/O, and CPU time, while resources between processes are independent.
- Robustness: In protected mode, the crash of one process has no impact on other processes, but the crash of one thread can kill its entire process. Multi-process designs are therefore more robust than multi-threaded ones.
- Switching cost: Switching between processes consumes significant resources and is inefficient, so when frequent switching is required, threads are preferable to processes. Likewise, if you need concurrent operations that share variables, use threads rather than processes.
- Execution: Each independent process has a program entry point, a sequential execution order, and a program exit. Threads cannot execute independently; they must live inside an application, which controls the execution of its multiple threads.
- Scheduling: Threads are the basic unit of processor scheduling; processes are not.
- Memory: Threads have no address space of their own; they are contained in their process's address space.
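The shared-address-space point is language-neutral, so here is a minimal sketch in Java (the class and variable names are my own, not from the source): a value written by one thread is read back by another, because both threads live in the same process memory.

```java
public class SharedMemoryDemo {
    // One static field in the process's address space, visible to every thread.
    static int shared = 0;

    static int writeFromAnotherThread() throws InterruptedException {
        Thread writer = new Thread(() -> shared = 42); // runs in the same address space
        writer.start();
        writer.join(); // join() also guarantees the write is visible afterwards
        return shared; // the calling thread reads the value the writer thread wrote
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(writeFromAnotherThread()); // prints 42
    }
}
```

Two separate processes could not share `shared` this way; they would each get their own copy.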
2. Multithreading
2.1 Why Use Multi-threading
In iOS development, purely single-threaded development is not practical, because we frequently need to write time-consuming operations (network requests, file uploads and downloads, Bluetooth operations, and so on). With only a single thread (the main thread), the UI would freeze during such operations, which severely hurts the user experience. For example, consider the following code attached to a button in the interface:
```objc
- (IBAction)networkOperation:(id)sender {
    NSLog(@"start operation");
    for (int i = 0; i < 100 * 1000; i++) {
        int a = i * i;
        NSString *str = @"----";
        NSLog(@"time-consuming operation in progress: %@ %d", str, a);
    }
    NSLog(@"time-consuming operation completed");
}
```
When the button is clicked, the time-consuming operation runs, and you will find that you cannot do anything else (such as scrolling a text view) while the console is printing the output (which takes about 20 seconds), so a single thread does not work for us.
2.2 Advantages and disadvantages of multi-threading
2.2.1 Advantages of multi-threading
- Can improve the execution efficiency of the program.
- Can appropriately improve the utilization of resources (CPU, memory).
- A thread is automatically destroyed after its task finishes executing.
2.2.2 Disadvantages of multi-threading

- Starting a thread occupies a certain amount of memory (512 KB per thread by default).
- Opening a large number of threads occupies a large amount of memory and reduces the performance of the program.
- The more threads there are, the more overhead the CPU incurs scheduling them.
- Programming becomes more complex, e.g. communication between threads and sharing data among multiple threads.
2.3 Principle of multi-threading
Task execution depends on threads, so multithreading means multiple tasks executing "at the same time". But a single CPU core can execute only one task at a time, so where does multithreading come from? In fact, the CPU switches rapidly between multiple threads; the interval is so small that it feels as if the threads were executing simultaneously. On a single-core CPU, then, multithreading is the effect of fast, constant switching between threads. Many modern devices, however, have two, four, or eight cores, which is equivalent to multiple CPUs processing at once; these can genuinely run several threads in parallel, which is multithreading in the true sense.
This very short period is known as a time slice (usually around 10-100 milliseconds): the CPU switches rapidly between tasks, and the time slice is the interval between switches. If the number of threads is very large, the CPU must switch among all of them, consuming substantial CPU resources; each thread is then scheduled less often, and thread execution efficiency drops.
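The time-slicing behavior described above can be observed with a small sketch (a Java illustration of my own; the thread count and loop size are arbitrary choices): even when we start far more CPU-bound threads than there are cores, the scheduler's time slicing lets every thread make progress and finish.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TimeSliceDemo {
    /** Starts 4x as many CPU-bound threads as cores; returns how many finished. */
    static int runMoreThreadsThanCores() throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        int n = cores * 4; // deliberately oversubscribe the CPU
        AtomicInteger finished = new AtomicInteger();
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            threads[i] = new Thread(() -> {
                long sum = 0;
                for (int j = 0; j < 1_000_000; j++) sum += j; // CPU-bound busywork
                finished.incrementAndGet();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // All n threads completed even though only `cores` could run at any instant.
        return finished.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runMoreThreadsThanCores() + " threads finished");
    }
}
```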
3. Thread life cycle, schedulable thread pools, and saturation policies
3.1 Life cycle of a thread
The life cycle of a thread is divided into five states: new, ready, running, blocked, and dead. Since the CPU needs to switch between multiple threads, a thread's state also switches between running and blocked many times, as described below:
- New: Created and in memory, but not yet in the schedulable thread pool.
- Ready: After the thread's start method is called, it is placed in the schedulable thread pool, where it can be scheduled to execute tasks.
- Running: Currently executing a task in the schedulable thread pool.
- Blocked: Removed from the schedulable thread pool; still in memory but unable to execute tasks.
- Dead: Released.
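The five-state model above maps loosely onto Java's Thread.State, which lets us observe three of the states directly. A minimal sketch (the sleep durations are illustrative assumptions of mine, chosen only to make the states observable):

```java
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
        });
        System.out.println(t.getState()); // NEW: created, not yet schedulable
        t.start();                        // enters the schedulable pool (ready/running)
        Thread.sleep(100);                // give it time to reach sleep()
        System.out.println(t.getState()); // TIMED_WAITING: the "blocked" state here
        t.join();
        System.out.println(t.getState()); // TERMINATED: task done, thread is dead
    }
}
```

Java collapses "ready" and "running" into a single RUNNABLE state, since the distinction belongs to the OS scheduler, not the language.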
3.2 Schedulable thread pools
A thread pool dispatches tasks according to the following strategy:
- First, check whether the number of threads in the pool has reached the core thread count (corePoolSize). If not, create a new core thread to execute the task.
- Otherwise, check whether the task queue is full. If not, add the task to the task queue.
- Otherwise, check whether the number of threads has reached the maximum thread count (maximumPoolSize). If not, create a new non-core thread to execute the task.
- Otherwise, apply the saturation policy.
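The four steps above describe Java's ThreadPoolExecutor. A minimal sketch (the pool sizes are my own illustrative choice) makes each branch observable: with corePoolSize = 2, a queue of capacity 2, and maximumPoolSize = 4, submitting 7 blocking tasks yields 4 threads, 2 queued tasks, and 1 rejection.

```java
import java.util.concurrent.*;

public class PoolFlowDemo {
    /** Submits 7 blocking tasks; returns {poolSize, queuedTasks, rejectedTasks}. */
    static int[] runDemo() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));
        int rejected = 0;
        for (int i = 0; i < 7; i++) {
            try {
                // Each task blocks until released, keeping its thread busy.
                pool.execute(() -> {
                    try { release.await(); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                rejected++; // task 7: pool at maximumPoolSize and queue full
            }
        }
        // Tasks 1-2 -> core threads; 3-4 -> queue; 5-6 -> non-core threads; 7 -> rejected.
        int[] result = { pool.getPoolSize(), pool.getQueue().size(), rejected };
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = runDemo();
        System.out.println("threads=" + r[0] + " queued=" + r[1] + " rejected=" + r[2]);
        // prints: threads=4 queued=2 rejected=1
    }
}
```

The rejection in the last step is handled by the pool's saturation policy, discussed next; the default simply throws.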
3.3 Saturation Strategy
The following four saturation policies all implement the RejectedExecutionHandler interface:

- AbortPolicy: directly throws a RejectedExecutionException, interrupting normal operation (this is the default policy).
- CallerRunsPolicy: rolls the task back to the caller, running it in the submitting thread.
- DiscardOldestPolicy: discards the longest-waiting task (the head of the queue), then retries the new task.
- DiscardPolicy: discards the task directly, without any notification.
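CallerRunsPolicy is the easiest of these to observe directly. In this sketch (the pool sizes are my own illustrative choice), a single-thread pool's worker is kept busy and its one-slot queue is full, so a third task is rejected and runs synchronously on the submitting thread itself:

```java
import java.util.concurrent.*;

public class CallerRunsDemo {
    /** Returns the name of the thread that executed the rejected (third) task. */
    static String runDemo() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());
        // Task 1 occupies the single worker thread until released.
        pool.execute(() -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        // Task 2 fills the one-slot queue.
        pool.execute(() -> {});
        // Task 3 cannot be queued: CallerRunsPolicy runs it synchronously, right here.
        final String[] ranOn = new String[1];
        pool.execute(() -> ranOn[0] = Thread.currentThread().getName());
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return ranOn[0]; // the submitting thread's own name
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task 3 ran on: " + runDemo());
    }
}
```

This policy throttles the submitter naturally: while it is busy running the rejected task itself, it cannot flood the pool with more work.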
4. Lock
In computer science, a lock is a synchronization mechanism used to restrict access to a resource when multiple threads exist. You can think of it as a strategy for keeping threads synchronized and preventing multiple threads from fighting over the same resource.
Using any lock consumes system resources (memory and CPU time), which fall into two categories:

- Resources required to establish the lock
- Resources required while a thread is blocked on the lock
4.1 Spin lock
A spin lock is a lock used to protect a shared resource among multiple threads. It differs from a mutex in that, while trying to acquire the lock, a thread busy-waits in a loop, repeatedly checking whether the lock is available. While the previous thread's task is unfinished (the lock is held), the next thread waits without sleeping; as soon as the previous thread's task finishes, the next thread executes immediately.
In a multi-CPU environment, using a spin lock instead of an ordinary mutex can often improve performance for code that holds locks only briefly.
- Advantages: Because spin locks do not put the caller to sleep, they avoid thread scheduling, CPU time-slice rotation, and other time-consuming operations. If the lock can be acquired within a short time, a spin lock is far more efficient than a mutex.
- Disadvantages: A spinning thread keeps occupying the CPU the whole time it waits, so if the lock is not acquired within a short time, CPU efficiency suffers. Spin locks also cannot be acquired recursively.
Common spin locks: atomic, OSSpinLock, dispatch_semaphore_t.
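The locks listed above are C/Objective-C APIs, but the busy-wait idea itself is language-neutral. As a minimal sketch (the class below is my own, not a system API), a spin lock can be built on an atomic compare-and-set:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SimpleSpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait: keep retrying the atomic compare-and-set.
        // The thread never sleeps; it burns CPU time until the lock frees up.
        while (!held.compareAndSet(false, true)) {
            // spin
        }
    }

    public void unlock() {
        // Releasing lets exactly one spinning thread's compareAndSet succeed.
        held.set(false);
    }
}
```

With two threads each incrementing a shared counter 10,000 times under this lock, the final count is exactly 20,000, which an unprotected counter generally would not achieve.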
4.2 Mutex
While the previous thread's task is unfinished (the lock is held), the next thread goes to sleep and waits; when the previous thread's task finishes, the next thread is automatically woken up to execute its task.
A mutex ensures that the code inside the lock can be executed by only one thread at a time. When using a mutex, keep the locked scope as small as possible: the larger the scope, the worse the efficiency.
Common mutex: pthread_mutex, @synchronized, NSLock, NSConditionLock, NSCondition, and NSRecursiveLock.
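The mutexes above are Apple-specific APIs, but the sleep-then-wake behavior can be seen with any mutex. A sketch using Java's ReentrantLock as a stand-in (the timing is an illustrative assumption) shows the waiting thread parked by the scheduler rather than spinning:

```java
import java.util.concurrent.locks.ReentrantLock;

public class MutexSleepDemo {
    /** Returns the waiter thread's state while it is blocked on a held mutex. */
    static Thread.State observeWaiterState() throws InterruptedException {
        ReentrantLock mutex = new ReentrantLock();
        mutex.lock(); // this thread holds the lock
        Thread waiter = new Thread(() -> {
            mutex.lock();   // blocks: the thread is put to sleep, not spinning
            mutex.unlock();
        });
        waiter.start();
        Thread.sleep(100);  // give the waiter time to block on the lock
        Thread.State state = waiter.getState(); // WAITING: parked, consuming no CPU
        mutex.unlock();     // releasing the lock wakes the waiter automatically
        waiter.join();
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observeWaiterState()); // prints WAITING
    }
}
```

Contrast this with a spin lock, where the waiter would stay RUNNABLE, consuming a time slice while it loops.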
4.3 atomic and nonatomic
4.3.1 Definitions and differences between the two
When we use OC to develop iOS applications, we can choose atomic or nonatomic when defining OC class attributes, where:
- atomic: atomic property; adds a spin lock to the setter method.
- nonatomic: nonatomic property; does not lock the setter method.
The differences between the two are as follows:

- atomic: thread-safe, but consumes more resources.
- nonatomic: not thread-safe, but efficient and suitable for memory-constrained mobile devices.
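atomic and nonatomic are Objective-C property attributes, but the underlying trade-off is the same in any language. As a rough Java analog (a sketch of my own, not the Objective-C mechanism), compare an unprotected increment, which can lose updates under contention, with java.util.concurrent.atomic's AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicAnalogDemo {
    static int plain = 0;                                     // "nonatomic": unprotected
    static final AtomicInteger atomic = new AtomicInteger();  // "atomic": safe updates

    /** Two threads each increment both counters 100,000 times; returns {plain, atomic}. */
    static int[] runDemo() throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                  // read-modify-write race: updates can be lost
                atomic.incrementAndGet(); // atomic update: never loses an increment
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start(); a.join(); b.join();
        return new int[] { plain, atomic.get() }; // atomic is always exactly 200000
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = runDemo();
        System.out.println("plain=" + r[0] + " atomic=" + r[1]);
    }
}
```

The plain counter often ends below 200,000 because two threads can read the same old value; the atomic counter pays for its guarantee with extra synchronization cost, mirroring the atomic-vs-nonatomic trade-off above.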
4.3.2 Usage Suggestions
In iOS development, we generally follow the following principles to write program code:
- Declare properties nonatomic unless they can be contended by multiple threads (as in ticket purchasing, account recharging, and the like).
- Try to avoid having multiple threads grab the same resource.
- Where possible, move locking and resource-contention logic to the server side to reduce the pressure on the mobile client.
4.3.3 Atomic principle exploration
First, we know that underneath a property's setter, the objc_setProperty function is actually called. That function in turn calls reallySetProperty (both can be read in the open-source objc4 runtime). Inside reallySetProperty you can see that atomic is really just a flag: when it is set, the assignment is wrapped in a lock taken from a slotlock variable of type spinlock_t, one of the spin locks provided by the system.