
1. Threads and processes

1.1 Definitions of threads and processes

Thread

  • A thread is the basic execution unit of a process; all of a process's tasks are executed on its threads.
  • For a process to execute tasks, it must have at least one thread.
  • When a program starts, one thread is started by default; this thread is called the main thread, or UI thread.

Process

  • A process is an application that is running in the system.
  • Processes are independent of each other; each process runs in its own dedicated and protected memory space.
  • On a Mac, Activity Monitor lets you view the processes that are currently running.

1.2 Relationship between Threads and processes

  • Address space: threads of the same process share that process's address space, while each process has its own independent address space.
  • Resource ownership: threads in the same process share that process's resources, such as memory, I/O, and CPU time, whereas resources between processes are independent.
  • After a process crashes, other processes are unaffected in protected mode, but when a thread crashes the whole process dies with it. Multi-process designs are therefore more robust than multi-threaded ones.
  • Process switching consumes more resources and is less efficient, so when frequent switching is involved, threads are preferable to processes. Likewise, if concurrent operations need to share some variables at the same time, only threads can be used, not processes.
  • Execution: each independent process has its own program entry point and sequential execution path. Threads cannot execute independently; they must live inside an application, and the application provides execution control over its multiple threads.
  • The thread is the basic unit of processor scheduling; the process is not.
  • Threads have no address space of their own; they are contained within the process's address space.

2. Multithreading

2.1 The meaning of multi-threading

Advantages

  • Can improve the execution efficiency of the program
  • Increase resource utilization (CPU, memory)
  • A thread is destroyed automatically after its task finishes executing

Disadvantages

  • Starting threads takes up a certain amount of memory (512 KB for each thread by default)
  • If a large number of threads are enabled, a large amount of memory space will be occupied and the performance of the program will be reduced
  • The more threads there are, the more overhead the CPU spends scheduling them, the less often each thread gets scheduled, and the lower each thread's execution efficiency
  • Program design becomes more complex, e.g. communication between threads and sharing data across multiple threads

2.2 Principle of multi-threading

  • For a single-core CPU, the CPU can process only one thread at a time, that is, only one thread is executing at a time.

  • The essence of multiple threads executing "simultaneously" on iOS is that the CPU schedules them by switching rapidly among multiple tasks. Because the CPU switches between threads fast enough, it creates the effect of multiple threads executing at the same time. The interval at which the CPU switches between tasks is the time slice.

  • With a multi-core CPU, multiple threads really can be processed at the same time; this is true parallelism.


2.3 Multithreading technical scheme

| Plan | Introduction | Language | Thread lifecycle | Use frequency |
| --- | --- | --- | --- | --- |
| pthread | A set of general-purpose multithreading APIs; works on Unix, Linux, and Windows; cross-platform and portable; difficult to use | C | Managed by the programmer | Almost never |
| NSThread | More object-oriented; easy to use, operates on thread objects directly | OC | Managed by the programmer | Occasionally |
| GCD | Designed to replace threading technologies such as NSThread; makes full use of multi-core devices | C | Managed automatically | Often |
| NSOperation | Based on GCD; adds some simpler, more practical features on top of GCD; more object-oriented | OC | Managed automatically | Often |
2.4 Multithreading lifecycle

The five states of a thread (a minimal sketch follows the list):

  • New: the thread object is instantiated.
  • Ready: the thread object calls the start method and is added to the schedulable thread pool, where it waits for the CPU to schedule it. Calling start does not execute the task immediately; the thread enters the ready state and must wait for the CPU to move it from ready to running.
  • Running: the CPU is responsible for scheduling the execution of threads in the schedulable thread pool. Before its execution completes, a thread's state may switch back and forth between ready and running. The CPU controls this switching and the developer cannot interfere.
  • Blocked: when a predetermined condition is met, sleep or a synchronization lock can block the thread's execution. When the sleep ends or the synchronization lock is acquired, the thread is added back to the schedulable thread pool.
  • Dead: there are two cases. 1. Normal death: the thread finishes executing. 2. Unnatural death: execution is terminated inside the thread (or from the main thread) when some condition is met (e.g. calling exit).
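A minimal Objective-C sketch of this lifecycle using NSThread; the Worker class and its task method are illustrative assumptions, not code from the original article:

```objc
#import <Foundation/Foundation.h>

@interface Worker : NSObject
- (void)task;
@end

@implementation Worker
- (void)task {
    NSLog(@"running on %@", [NSThread currentThread]);  // running state
    [NSThread sleepForTimeInterval:1.0];                 // blocked state (sleep)
    NSLog(@"task finished");                             // method returns -> normal death
}
@end

// Somewhere on the main thread:
// Worker *worker = [[Worker alloc] init];
// NSThread *thread = [[NSThread alloc] initWithTarget:worker
//                                            selector:@selector(task)
//                                              object:nil];   // new
// [thread start];   // ready: added to the schedulable thread pool, waits for CPU scheduling
```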

2.5 Principle of the thread pool

Saturation (rejection) policies (as in Java's ThreadPoolExecutor)

  • AbortPolicy: throws a RejectedExecutionException directly, preventing the system from running normally
  • CallerRunsPolicy: runs the task on the calling thread (rolls the task back to the caller)
  • DiscardOldestPolicy: discards the task that has been waiting the longest
  • DiscardPolicy: discards the task directly

All four rejection policies implement the RejectedExecutionHandler interface.

3. The interview questions

3.1 Factors that affect task execution speed

  • CPU
  • Task complexity
  • priority
  • Thread state

3.2 Priority inversion

Before looking at priority inversion, it helps to understand what IO-intensive and CPU-intensive threads are.

  • IO-intensive threads: threads that wait frequently and give up their time slice while waiting.
  • CPU-intensive threads: threads that rarely wait, meaning they occupy the CPU for long periods of time.

IO-intensive threads get priority boosts more easily than CPU-intensive threads. In one particular scenario, multiple higher-priority CPU-intensive threads occupy all CPU resources while lower-priority IO-intensive threads keep waiting, resulting in thread starvation. To avoid starvation, the scheduler gradually raises the priority of the "neglected" threads (the priority boost does not necessarily take effect immediately), and IO-intensive threads generally receive these boosts more easily than CPU-intensive threads.

Factors affecting the priority of a thread:

  • Priority specified by the user
  • How frequently the thread waits
  • Not being executed for a long time

User-specified thread priority:
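The original snippet is not reproduced here; a minimal Objective-C sketch of specifying a thread's priority, using NSThread's threadPriority and qualityOfService properties (the block's work is illustrative):

```objc
NSThread *thread = [[NSThread alloc] initWithBlock:^{
    NSLog(@"working on %@", [NSThread currentThread]);
}];

// Legacy API: a value between 0.0 and 1.0, higher means higher priority.
thread.threadPriority = 0.8;

// Preferred API: express the intent as a quality-of-service class.
thread.qualityOfService = NSQualityOfServiceUserInitiated;

[thread start];
```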

4. Thread safety issues

In multithreaded programming, multiple threads often execute concurrently, and the same resource may be accessed by several threads at the same time, leading to resource contention. Without a locking mechanism, this often causes serious problems.

iOS has two main types of locks:

  • Mutex (exclusive lock)
  • Spin lock: when it finds another thread executing, the current thread busy-waits (keeps polling), which consumes a lot of CPU. Spin locks are suited to short tasks; this is what atomic property accessors use.

Mutex (exclusive lock):

  • Ensures that the code inside the lock can be executed by only one thread at a time.
  • The locked region of a mutex should be as small as possible; the larger the locked region, the worse the efficiency.
  • When it finds another thread executing, the current thread sleeps (enters the ready state) and waits for the lock to be released, after which it is woken up to execute.

Mutex parameters

  • Any NSObject object can be used as the lock
  • The lock object must be accessible to all of the threads involved
  • If only one place in the code needs to be locked, self is often used as the lock to avoid creating a separate lock object (see the sketch after this list)
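A minimal sketch of a mutex using the @synchronized directive with self as the lock object; the ticket-selling scenario and the ticketCount property are illustrative assumptions:

```objc
// Several threads may call this method at the same time.
- (void)saleTicket {
    @synchronized (self) {   // self as the lock object: only one thread enters at a time
        if (self.ticketCount > 0) {
            self.ticketCount -= 1;
            NSLog(@"sold one ticket, %ld left", (long)self.ticketCount);
        }
    }
}
```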

Spin lock:

  • When it finds another thread executing, the current thread busy-waits (keeps polling), which consumes a lot of CPU. Spin locks are suited to short tasks; this is what atomic uses to lock the setter and getter.

5. GCD

GCD, short for Grand Central Dispatch, is all about adding tasks to queues and specifying the function that executes them. It is written in pure C and provides many powerful functions.

  • GCD is Apple's solution for multi-core parallel computing
  • GCD automatically makes use of additional CPU cores (such as dual-core and quad-core)
  • GCD automatically manages the thread lifecycle (thread creation, task scheduling, thread destruction)
  • The programmer only needs to tell GCD what task to perform, without writing any thread-management code

5.1 Synchronous and Asynchronous

  • Asynchronous (dispatch_async): the next statement can execute without waiting for the current statement to complete, and a thread is opened to execute the block's task. Asynchrony is a synonym for multithreading. Asynchronous execution also runs multiple tasks in the order they are submitted, but across multiple threads at the same time, so the order in which they finish is random and unpredictable. The total time is roughly the time consumed by the longest task.
  • Synchronous (dispatch_sync): the next statement executes only after the current statement has finished. No new thread is started, and the block's task is executed on the current thread. The order in which tasks finish is fixed and matches the order in which they are executed. The total time is the sum of the time consumed by all tasks. (A sketch contrasting the two follows this list.)
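A minimal sketch of the contrast, assuming the global concurrent queue; the log order in the asynchronous case is the typical outcome, not a guaranteed one:

```objc
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Synchronous: no new thread; "task (sync)" always prints before "after sync".
dispatch_sync(queue, ^{
    NSLog(@"task (sync)");
});
NSLog(@"after sync");

// Asynchronous: returns immediately; "after async" may print before "task (async)".
dispatch_async(queue, ^{
    NSLog(@"task (async)");
});
NSLog(@"after async");
```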

5.2 Serial and Parallel

  • Serial: when multiple tasks are performed, they are executed one after another; the next task starts only after the previous one finishes, observing FIFO order.
  • Concurrent (parallel): multiple tasks can be executed at the same time. (A sketch of the two queue types follows this list.)
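A minimal sketch of creating the two kinds of GCD queues and dispatching work to them asynchronously; the queue labels are illustrative:

```objc
dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrentQueue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

for (int i = 0; i < 3; i++) {
    // Serial queue: tasks run one after another, in FIFO order.
    dispatch_async(serialQueue, ^{
        NSLog(@"serial task %d", i);
    });
    // Concurrent queue: tasks may run at the same time; finish order is not guaranteed.
    dispatch_async(concurrentQueue, ^{
        NSLog(@"concurrent task %d", i);
    });
}
```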

5.3 Comparing synchronous and asynchronous execution time

The time difference here arises because synchronous dispatch blocks the current thread, so testMehod() finishes executing before the NSLog() runs. With asynchronous dispatch, on the other hand, the NSLog() most likely runs before testMehod() has executed.
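The compared code itself is not part of this excerpt; a sketch consistent with the description, keeping the testMehod name used above and a made-up 2-second task, might look like this:

```objc
- (void)testMehod {
    [NSThread sleepForTimeInterval:2.0];   // simulate a time-consuming task
}

- (void)compare {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    dispatch_sync(queue, ^{ [self testMehod]; });
    NSLog(@"sync took %.2f s", CFAbsoluteTimeGetCurrent() - start);   // ~2 s: blocked until testMehod returned

    start = CFAbsoluteTimeGetCurrent();
    dispatch_async(queue, ^{ [self testMehod]; });
    NSLog(@"async took %.2f s", CFAbsoluteTimeGetCurrent() - start);  // ~0 s: returned immediately
}
```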

The interview questions

What is the print order of the following code? The queue here is a concurrent queue. 1 prints first; because async does not block the current thread, the relative order of 2 and 5 is uncertain; and 3 is dispatched with sync, which blocks the thread running the enclosing block, so 3 must print before 4. The most likely order is therefore 1, 5, 2, 3, 4, though 1, 2, 3, 4, 5 is also possible.
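The referenced code is not reproduced in this excerpt; a sketch consistent with the description (concurrent queue, logs numbered 1 to 5) might look like this:

```objc
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

NSLog(@"1");
dispatch_async(queue, ^{
    NSLog(@"2");
    dispatch_sync(queue, ^{
        NSLog(@"3");   // sync blocks the enclosing block's thread, so 3 prints before 4
    });
    NSLog(@"4");
});
NSLog(@"5");           // 2 and 5 race: dispatch_async returned immediately
```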

This code will deadlock. Why? Because the queue is serial, the dispatch_sync block cannot start until the currently running dispatch_async block finishes, but the dispatch_async block is itself blocked at dispatch_sync and can never finish, so the two wait on each other forever.
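The referenced code is likewise not reproduced here; a minimal sketch that deadlocks in the way described, assuming a serial queue, could be:

```objc
dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

dispatch_async(serialQueue, ^{
    NSLog(@"async block started");
    // Deadlock: on a serial queue, this sync block must wait for the current
    // async block to finish, but the async block is blocked right here.
    dispatch_sync(serialQueue, ^{
        NSLog(@"never reached");
    });
    NSLog(@"never reached either");
});
```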