Definitions of threads and processes

  • A thread is the basic execution unit of a process: every task in a process runs on a thread, so a process must have at least one thread in order to execute anything.
  • A process is a program that is running in the system. Processes are independent of one another, and each runs in its own dedicated, protected memory space. On macOS you can see all running processes in Activity Monitor.

For more details on threads and processes, see the book Programmer Self Training.
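
To make this concrete, here is a minimal sketch (assuming a small Objective-C command-line program; the Worker class is just an illustrative name): the process's main thread already exists when the program starts, and an additional NSThread is created inside the same process to run a task.

```objc
#import <Foundation/Foundation.h>

@interface Worker : NSObject
- (void)doTask;
@end

@implementation Worker
- (void)doTask {
    // Runs on the secondary thread we create below.
    NSLog(@"task running on thread: %@ (main thread: %d)",
          [NSThread currentThread], [NSThread isMainThread]);
}
@end

int main(int argc, char *argv[]) {
    @autoreleasepool {
        // The main thread is created for us when the process starts.
        NSLog(@"main thread: %@", [NSThread mainThread]);

        // Spawn one more thread inside the same process to execute a task.
        Worker *worker = [[Worker alloc] init];
        NSThread *thread = [[NSThread alloc] initWithTarget:worker
                                                   selector:@selector(doTask)
                                                     object:nil];
        [thread start];

        [NSThread sleepForTimeInterval:1.0]; // give the worker thread time to finish
    }
    return 0;
}
```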

The relationship between threads and processes

Address space: threads in the same process share that process's address space, while different processes each have their own independent address space. Resource ownership: threads in the same process share the process's resources, such as memory, I/O, and CPU, whereas resources are independent between processes (see the sketch after the list below).

  1. The crash of one process does not affect other processes, but the crash of one thread can crash its whole process, so multi-processing is more robust than multi-threading.
  2. Switching between processes consumes significant resources.
  3. Execution: each thread has its own program entry point and executes its instructions in sequence, but a thread cannot run on its own; it must belong to a process, and the application controls the execution of its multiple threads.
  4. Threads are the processor's scheduling unit; processes are not.
  5. A thread has no address space of its own; it lives inside the address space of its process.
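
As a sketch of the shared address space mentioned above (a minimal illustration, not taken from the original post): two tasks dispatched to background threads update the same heap variable, which belongs to their common process.

```objc
#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        __block NSInteger counter = 0;          // one variable, visible to both threads
        NSLock *lock = [[NSLock alloc] init];   // serialize access so the result is well-defined
        dispatch_group_t group = dispatch_group_create();
        dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);

        for (int t = 0; t < 2; t++) {
            dispatch_group_async(group, queue, ^{
                for (int i = 0; i < 1000; i++) {
                    [lock lock];
                    counter += 1;               // same memory, touched from two threads
                    [lock unlock];
                }
            });
        }
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        NSLog(@"counter = %ld", (long)counter); // 2000: both threads updated shared memory
    }
    return 0;
}
```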

Meaning of multithreading

Advantages:

  1. Can improve the efficiency of program execution when used appropriately
  2. Can improve utilization of resources (CPU, memory) when used appropriately
  3. A thread is destroyed automatically once the task running on it has finished

Disadvantages:

  1. Starting threads occupies memory (512 KB of stack per thread by default, depending on your machine's configuration; see the sketch after this list)
  2. If a large number of threads are started, a large amount of memory is consumed and the program's performance drops
  3. The more threads there are, the more overhead the CPU spends scheduling them
  4. Programming becomes more complex, e.g. communication between threads and sharing data across threads
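
To illustrate the per-thread memory cost from item 1 (a minimal sketch; the exact default stack size depends on the platform): NSThread exposes the stack size and lets you adjust it before the thread starts.

```objc
#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        NSThread *thread = [[NSThread alloc] initWithBlock:^{
            NSLog(@"worker running");
        }];
        // Each thread has its own stack; inspect the size before starting.
        NSLog(@"stack size before start: %lu bytes", (unsigned long)thread.stackSize);

        thread.stackSize = 1024 * 1024;   // request a 1 MB stack instead (must be set before start)
        [thread start];

        [NSThread sleepForTimeInterval:0.5];
    }
    return 0;
}
```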

Factors affecting the speed of task execution

  1. CPU scheduling
  2. Task complexity
  3. Priority
  4. Thread state

Priority flipping: two kinds of threads

  1. I/O-intensive: waits frequently
  2. CPU-intensive: waits very little

Of these two kinds of threads, CPU-intensive threads are more likely to be given priority boosts than I/O-intensive threads, so the I/O-intensive threads risk starving. The scheduler compensates for this: once a thread has been waiting for some time, its priority starts to rise, and the longer it waits, the higher its priority becomes. This is what is meant by priority flipping.

Priority factors

  1. A priority can be specified when the thread is created (see the sketch after this list)
  2. How frequently the thread waits
  3. If a thread does not get to execute for a long time, its priority is raised
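
A small sketch of item 1 (assuming Foundation's NSThread API; the quality-of-service value chosen here is arbitrary): the priority, expressed as a quality of service, is set when the thread is created, before it starts.

```objc
#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        NSThread *thread = [[NSThread alloc] initWithBlock:^{
            NSLog(@"running with QoS %ld",
                  (long)[NSThread currentThread].qualityOfService);
        }];
        thread.qualityOfService = NSQualityOfServiceUserInitiated; // set before start
        [thread start];

        [NSThread sleepForTimeInterval:0.5];
    }
    return 0;
}
```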

Spin locks and mutexes

When a resource is accessed, two or more threads may operate on it at the same time, which can leave the resource in an inconsistent state. How do we solve this? There are two kinds of locks:

  1. Spin lock: when another thread holds the lock, the current thread keeps polling (busy-waits) until the lock is released; for short waits this performs well
  2. Mutex: when another thread holds the lock, the current thread sleeps (is suspended) and is woken up once the lock becomes available; a sketch contrasting the two follows this list
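
Here is a minimal sketch contrasting the two behaviours (the busy-waiting lock below is a toy built on a C11 atomic flag purely for illustration, not a production lock; the mutex is an ordinary NSLock):

```objc
#import <Foundation/Foundation.h>
#import <stdatomic.h>

static atomic_flag spin = ATOMIC_FLAG_INIT;

// Toy spin lock: a losing thread keeps polling the flag until it clears.
static void spin_lock(void)   { while (atomic_flag_test_and_set(&spin)) { /* busy-wait */ } }
static void spin_unlock(void) { atomic_flag_clear(&spin); }

int main(int argc, char *argv[]) {
    @autoreleasepool {
        __block NSInteger counter = 0;
        NSLock *mutex = [[NSLock alloc] init];

        // Spin lock: under contention, the waiting thread burns CPU until released.
        spin_lock();
        counter += 1;
        spin_unlock();

        // Mutex: under contention, the waiting thread is suspended and woken later.
        [mutex lock];
        counter += 1;
        [mutex unlock];

        NSLog(@"counter = %ld", (long)counter);
    }
    return 0;
}
```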

Of course, the spin lock exists for a reason. With a mutex, if a thread tries to access a resource and finds another thread holding it, it is suspended, and only woken up when the other thread releases the lock; that involves a context switch. If the other thread releases the resource after only a very short time, suspending and then waking this thread wastes CPU time. So when a resource is hotly contended but held only for short periods, a spin lock is the better choice. When have we used spin locks in iOS/macOS development? You have almost certainly used one whenever you declared a property with the `atomic` keyword: the underlying implementation of that property's `setter` uses a spin lock, as sketched below.
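
For example (a minimal sketch; the Person class is illustrative): declaring a property `atomic`, which is the default, gives it synchronized accessors, in contrast to a `nonatomic` property.

```objc
#import <Foundation/Foundation.h>

@interface Person : NSObject
@property (atomic, copy) NSString *name;      // synchronized setter/getter
@property (nonatomic, copy) NSString *alias;  // plain, unsynchronized accessors
@end

@implementation Person
@end

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Person *p = [[Person alloc] init];
        p.name = @"Alice";       // goes through the synchronized setter
        NSLog(@"%@", p.name);    // goes through the synchronized getter
    }
    return 0;
}
```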

GCD

GCD: tasks are added to queues and executed by the functions that dispatch them. GCD is Apple's solution for multi-core parallel computing: it automatically makes use of the available CPU cores, and it automatically manages thread lifecycles (creating threads, scheduling tasks, destroying threads).
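
A minimal usage sketch of that model (the queue label is arbitrary): create a queue, add tasks to it with the dispatch functions, and let GCD manage the underlying threads.

```objc
#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        dispatch_queue_t queue = dispatch_queue_create("com.example.demo",
                                                       DISPATCH_QUEUE_CONCURRENT);
        dispatch_async(queue, ^{               // task + the function that executes it
            NSLog(@"async task on %@", [NSThread currentThread]);
        });
        dispatch_sync(queue, ^{                // runs the task before returning
            NSLog(@"sync task on %@", [NSThread currentThread]);
        });

        [NSThread sleepForTimeInterval:0.5];   // let the async task finish
    }
    return 0;
}
```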

Serial and concurrent queues

A serial queue executes its tasks one after another, one at a time on the same thread. A concurrent queue also dequeues tasks one by one, but dispatches each task to an available thread, so the order in which a concurrent queue's tasks execute is not guaranteed.

Let me give you an example; a sketch is shown below. What is the order in which that code executes? The answer should be 1, 5, 2, 3, 4. What if we changed the queue to a serial queue? With a serial queue, when execution reaches `dispatch_sync` the current task blocks until the synchronous block completes, but that block is queued behind the `async` task still running on the serial queue, so each waits for the other: a deadlock.
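
The original snippet is not reproduced here, so the sketch below is a reconstruction consistent with the order described above (the queue label is arbitrary): with a concurrent queue it typically prints 1, 5, 2, 3, 4; switching the queue to a serial one deadlocks at the `dispatch_sync`.

```objc
#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        dispatch_queue_t queue = dispatch_queue_create("com.example.demo",
                                                       DISPATCH_QUEUE_CONCURRENT);
        // dispatch_queue_t queue = dispatch_queue_create("com.example.demo",
        //                                                DISPATCH_QUEUE_SERIAL); // deadlocks

        NSLog(@"1");
        dispatch_async(queue, ^{
            NSLog(@"2");
            dispatch_sync(queue, ^{   // on a serial queue this waits for the enclosing
                NSLog(@"3");          // task, which in turn waits for it: deadlock
            });
            NSLog(@"4");
        });
        NSLog(@"5");

        [NSThread sleepForTimeInterval:1.0];  // keep the process alive for the async task
    }
    return 0;
}
```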