Threads and processes

Definition of threads and processes

Thread:

  • A thread is the basic execution unit of a process; all of a process's tasks are executed on threads
  • For a process to execute tasks, it must have threads; every process has at least one thread
  • When a program starts, one thread is started by default; this thread is called the main thread or UI thread

Process:

  • A process is an application that is currently running on the system
  • Each process is independent and runs in its own dedicated, protected memory space
  • Activity Monitor lets you view the processes running on a Mac

The relationship between processes and threads

  • Address space: threads of the same process share that process's address space, while the address spaces of different processes are independent of each other.
  • Resource ownership: threads of the same process share the process's resources, such as memory, I/O, and CPU time, while resources between processes are independent.

  1. When a process crashes, it does not affect other processes in protected mode, but when a thread crashes, its entire process dies. So multiple processes are more robust than multiple threads.
  2. Process switching consumes many resources and is therefore inefficient, so when frequent switching is needed, threads are better than processes. Likewise, if concurrent operations must share variables, only threads can be used, not processes.
  3. Execution process: each independent process has a program entry point, a sequential execution sequence, and a program exit. Threads cannot execute independently; they must depend on the application program, which provides execution control over its threads.
  4. Threads are the basic unit of processor scheduling, but processes are not.
  5. Threads have no address space; they are contained in the process address space

The meaning of multithreading

Advantages:

  • Can appropriately improve the execution efficiency of the program
  • Can improve resource utilization (CPU, memory)
  • When the tasks on a thread finish executing, the thread is destroyed automatically

Disadvantages:

  • Starting threads requires a certain amount of memory (512 KB per thread by default)
  • If a large number of threads are enabled, a large amount of memory space will be occupied and the performance of the program will be degraded
  • The more threads there are, the greater the CPU overhead of scheduling them

  • More complex programming, such as communication between threads and sharing data across multiple threads

Principle of multithreading

The concept of time slicing: the CPU switches rapidly between tasks; the short interval each task runs before the switch is called a time slice.

  • On a single-core CPU, the CPU can only process one thread at a time; in other words, only one thread is executing at any given moment.
  • Multi-threaded execution lets the CPU switch rapidly between multiple threads.
  • The CPU schedules threads fast enough to create the effect of “simultaneous” execution of multiple threads.
  • If the number of threads is very large, the CPU has to switch among N threads, which consumes a lot of CPU resources.
  • Each thread is then scheduled fewer times, so every thread becomes less efficient.

Multithreading technologies

Bridge between C and OC

  • __bridge performs type conversion only and does not transfer ownership (memory management) of the object;
  • __bridge_retained (or the equivalent CFBridgingRetain) converts an Objective-C object to a Core Foundation object and transfers memory management of the object to us; we must later call CFRelease or a related function to release it.
  • __bridge_transfer (or the equivalent CFBridgingRelease) converts a Core Foundation object to an Objective-C object and hands memory management of the object over to ARC.

Thread life cycle

  • New: after start is called on a new thread, it does not execute immediately; it enters the ready state and waits for the CPU to schedule it.
  • Run: the CPU schedules the thread, which enters the running state and starts executing its task. If the task is not finished when the thread's time slice runs out, the thread returns to the ready state and the CPU picks another thread from the schedulable pool.
  • Blocked: a running thread enters the blocked state when it calls sleep or waits for a synchronization lock; it is taken out of the schedulable pool and returns to the ready state once the sleep finishes or the lock is acquired.
  • Dead: a thread is destroyed when its task finishes executing or when it is forced to exit.

The thread pool

Saturated strategy

  • AbortPolicy: directly throws a RejectedExecutionException, preventing the system from operating normally
  • CallerRunsPolicy: runs the task on the caller's thread (rolls the task back to the caller)
  • DiscardOldestPolicy: discards the task that has been waiting the longest and retries the submission
  • DiscardPolicy: discards the new task directly

These four rejection strategies all implement the RejectedExecutionHandler interface (they come from Java's ThreadPoolExecutor).

Multithreading FAQs

  1. What factors affect the speed of task execution?
  • CPU scheduling
  • Task complexity
  • Priority
  • Thread state

  2. Priority inversion

Before looking at priority inversion, let’s understand what IO – and CPU-intensive threads are.

  • IO-intensive thread: a thread that waits frequently and gives up its time slice while waiting.
  • CPU-intensive thread: a thread that waits little, meaning it occupies the CPU for long stretches.

IO-intensive threads are more likely to receive a priority boost than CPU-intensive threads.

In one particular scenario, when several high-priority CPU-intensive threads occupy all CPU resources, low-priority IO-intensive threads keep waiting, resulting in thread starvation. To avoid starvation, the scheduler gradually raises the priority of the "neglected" threads (a higher priority does not guarantee execution; it only raises the probability of being scheduled), and IO-intensive threads generally find it easier to get this priority boost than CPU-intensive threads.

  3. Factors that determine thread priority
  • Specified by the user
  • Raised or lowered according to how frequently the thread waits
  • A thread that has not executed for a long time has its priority increased

Spin locks and mutexes

When using multithreading, we will encounter a common phenomenon: resource contention.

For example, consider multi-window ticket selling. Suppose there are 1,000 tickets. Window 1 sells one ticket, leaving 999, but window 2 does not see the updated count at that moment and still sells as if 1,000 tickets remained. This causes problems, which we solve with a lock. Here we introduce two kinds of locks: spin locks and mutexes (mutual-exclusion locks).

Mutex (mutual-exclusion) locks:

  • If another thread is found to be executing the locked code, the current thread sleeps (enters the ready state) and waits; when the lock opens, it wakes up and executes.
  • Ensure that the code in the lock is executed by only one thread at a time.
  • The mutex lock range should be as small as possible. The larger the lock range, the worse the efficiency.

Mutex parameters:

  • Any NSObject can serve as the lock object.
  • The lock object must be accessible to all of the threads involved.
  • If only one place in your code needs locking, self is most often used as the lock, avoiding a separate lock object.

Spin locks:

  • If another thread is found to be executing, the current thread busy-waits (keeps asking whether the lock is free), which costs relatively more performance. The code inside a spin lock should therefore be as small as possible, so the locked task finishes as quickly as possible.

The difference between a mutex and a spin lock:

  • A mutex waits passively: the thread sleeps until the lock is released and it is woken up. When the task is large (takes a long time to execute), choose a mutex.
  • A spin lock actively and repeatedly polls for the resource, so spin locks consume more CPU. If the task must execute immediately and is small (short execution time), choose a spin lock.

Atomic and nonatomic

Atomic:

  • atomic is an atomic property, intended for multithreaded development, and is the default!
  • It only guarantees, inside the property's setter method, that just one thread performs a write operation on the property at a time
  • It is designed for the case of a single thread writing while multiple threads read at the same time (single write, multiple read)

nonatomic:

  • nonatomic is a nonatomic property
  • There is no lock, so performance is higher!

Let’s take a look at the underlying implementation of atomic:

    static inline void reallySetProperty(id self, SEL _cmd, id newValue,
                                         ptrdiff_t offset, bool atomic,
                                         bool copy, bool mutableCopy)
    {
        if (offset == 0) {
            object_setClass(self, newValue);
            return;
        }

        id oldValue;
        id *slot = (id*) ((char*)self + offset);

        if (copy) {
            newValue = [newValue copyWithZone:nil];
        } else if (mutableCopy) {
            newValue = [newValue mutableCopyWithZone:nil];
        } else {
            if (*slot == newValue) return;
            newValue = objc_retain(newValue);
        }

        // Determine whether the property is atomic; if it is, take a lock
        if (!atomic) {
            oldValue = *slot;
            *slot = newValue;
        } else {
            spinlock_t& slotlock = PropertyLocks[slot];
            slotlock.lock();
            oldValue = *slot;
            *slot = newValue;
            slotlock.unlock();
        }

        objc_release(oldValue);
    }

In the source code we can see that atomic is effectively just a flag: the runtime checks it and decides whether to wrap the write in a lock.