This is the 16th day of my participation in the August More Text Challenge.

1. Threads and processes

1.1 Definition of threads and processes

Thread

  • A thread is the basic execution unit of a process; all of a process's tasks are executed on threads.
  • For a process to execute tasks, it must have threads; every process has at least one thread.
  • When a program starts, one thread is launched by default; this thread is called the main thread or UI thread.

Process

  • A process is an application that is running on the system.
  • Processes are independent of one another; each runs in its own dedicated and protected memory space.
  • On macOS, Activity Monitor lets you view the processes currently running on the system.

1.2 Relationship between threads and processes

  • Address space: threads of the same process share that process's address space, while each process has its own independent address space.
  • Resource ownership: threads of the same process share the process's resources (memory, I/O, CPU time), while resources between processes are independent.
  • When one process crashes, it does not affect other processes in protected mode, but when a thread crashes, the whole process dies. So multiple processes are more robust than multiple threads.
  • Switching between processes consumes significant resources and is inefficient, so when frequent switching is required, threads are a better choice than processes. Likewise, when concurrent operations must share variables, only threads can be used, not processes.
  • Execution: each independent process has its own program entry point, sequential execution order, and program exit. Threads cannot execute independently; they depend on the application, which controls the execution of its multiple threads.
  • Threads are the basic unit of processor scheduling; processes are not.
  • Threads have no address space of their own; they live inside the process's address space.

2. Multithreading

2.1 Significance of multithreading

  • Advantages:
    • Can improve a program's execution efficiency;
    • Improves the utilization of resources (CPU, memory);
    • A thread is destroyed automatically once its tasks finish executing.
  • Disadvantages:
    • Starting a thread costs a certain amount of memory (512 KB per thread by default);
    • Starting a large number of threads occupies a large amount of memory and degrades program performance;
    • The more threads there are, the greater the CPU's scheduling overhead, the less often each thread is scheduled, and the lower each thread's execution efficiency;
    • Programming becomes more complex, e.g. communication between threads and multi-threaded data sharing.

2.2 Principle of Multithreading

On a single-core CPU, only one thread can execute at any instant. iOS appears to run multiple threads simultaneously because the CPU can schedule tasks, switching rapidly between them; since the switches happen fast enough, this creates the illusion of threads executing "concurrently". The time slice is the interval the CPU spends on one task before switching to the next. On a multi-core CPU, multiple threads really can execute at the same time, which is true parallelism.

2.3 Multi-thread technical scheme

| Plan | Introduction | Language | Thread lifecycle | Usage frequency |
| --- | --- | --- | --- | --- |
| pthread | A general-purpose multithreading API for Unix/Linux/Windows and other systems; cross-platform and portable, but difficult to use | C | Managed by the programmer | Almost never |
| NSThread | More object-oriented; easy to use, and thread objects can be manipulated directly | OC | Managed by the programmer | Occasionally |
| GCD | Designed to replace threading technologies such as NSThread; takes full advantage of the device's multiple cores | C | Automatic | Often |
| NSOperation | Based on GCD (its underlying layer is GCD); adds some practical features beyond GCD; more object-oriented | OC | Automatic | Often |
2.4 Multithreaded Lifecycle

The five states of a thread:
  • New: the thread object has just been instantiated.
  • Ready: calling the start method adds the thread object to the schedulable thread pool, where it waits for the CPU. start does not run the thread immediately; the thread waits in the ready state and begins executing only once the CPU schedules it, at which point it moves from ready to running.
  • Running: the CPU schedules and executes threads from the schedulable thread pool. Before a thread finishes, its state may switch back and forth between ready and running. The CPU drives these transitions; the developer cannot intervene.
  • Blocked: when a predetermined condition is met (e.g. sleep, or waiting for a synchronization lock), the thread stops executing. When the sleep ends or the lock is acquired, the thread is added back to the schedulable thread pool.
  • Dead: happens in one of two ways:
    • 1. Normal death: the thread finishes executing.
    • 2. Abnormal death: execution is aborted when some condition is met, from inside or outside the thread (e.g. the exit method is called).

2.5 Thread Pool Principles

Saturation (rejection) strategies:

  • AbortPolicy: throws a RejectedExecutionException directly, preventing the system from continuing normally
  • CallerRunsPolicy: rolls the task back and runs it on the caller's thread
  • DiscardOldestPolicy: discards the task that has been waiting the longest
  • DiscardPolicy: discards the task directly

These four rejection strategies all implement the RejectedExecutionHandler interface.

3. Common problems

3.1 Influencing factors of task execution speed

  • CPU
  • Task complexity
  • Priority
  • Thread state

3.2 Priority inversion

Before looking at priority inversion, let's clarify what IO-intensive and CPU-intensive threads are.

  • IO-intensive thread: a thread that waits frequently and gives up its time slice while waiting.
  • CPU-intensive thread: a thread that rarely waits, i.e. occupies the CPU for long stretches.

IO-intensive threads are more likely than CPU-intensive threads to receive a priority boost. In one special scenario, several high-priority CPU-intensive threads occupy all the CPU resources, so low-priority IO-intensive threads keep waiting and never run; this is thread starvation. To avoid starvation, the scheduler gradually raises the priority of the "left-out" threads (though a higher priority does not guarantee immediate execution), and IO-intensive threads generally find it easier to get this boost than CPU-intensive threads. Factors affecting thread priority:

  • Quality of service for a user-specified thread
  • Increases or decreases depending on how frequently the thread waits
  • The priority of a thread that has not been executed for a long time is increased

A thread's priority can also be specified directly by the user, for example through its quality of service.

4. Thread safety

When multiple threads execute concurrently, the same resource may be accessed by several threads at once, causing resource contention. Without a locking mechanism, this often leads to serious problems. iOS has two main types of locks:

  • Mutex (mutual-exclusion lock)
  • Spin lock

4.1 Mutex

  • Protects a critical section, ensuring that only one thread can execute it at a time.
  • If only one place in your code needs locking, most code uses self as the lock object to avoid creating a separate one.
  • When a new thread reaches mutex-protected code and finds another thread executing it, the new thread goes to sleep.

Precautions when using a mutex:

  • Keep the locked scope as small as possible; the larger the scope, the worse the efficiency.
  • Any NSObject can be used as the lock.
  • The lock object must be accessible to all of the threads involved.

4.2 Spin lock

  • A spin lock is similar to a mutex, but instead of putting the blocked thread to sleep, the thread stays busy (spinning in place, hence "spin") until the lock is acquired.
  • Use case: when the lock is held only briefly and the thread should not pay the cost of being rescheduled, a spin lock is appropriate. The atomic property modifier uses a spin lock.
  • With a spin lock, when a new thread reaches the code and finds another thread holding the lock, it loops, repeatedly trying to enter the locked code until the other thread finishes; for short waits this performs better.

4.3 Similarities and differences between spin locks and mutex

Similarities:

  • At any given moment, only one thread is guaranteed to execute the task, which provides the required synchronization.

Difference:

  • Mutex: when another thread is found executing, the current thread goes to sleep and enters the waiting state, i.e. it is suspended. Once the other thread releases the lock, it is woken up to execute.
  • Spin lock: when another thread is found executing, the current thread keeps polling (busy-waiting); for short waits this performs better.

Usage scenario:

  • Choose the lock according to the complexity of the task; when you cannot judge confidently, a mutex is the more common choice.

  • Use a spin lock when the critical section is short and simple.

  • Otherwise, use a mutex.