Welcome to the iOS Basics series (suggested reading order):

iOS Low-Level – Exploring alloc and init

iOS Low-Level – isa for everything

iOS Low-Level – Analysis of the nature of classes

iOS Low-Level – cache_t process analysis

iOS Low-Level – Method lookup process analysis

iOS Low-Level – Message forwarding process analysis

iOS Low-Level – How dyld loads an app

iOS Low-Level – Class load analysis

iOS Low-Level – Category load analysis

Foreword

Multithreading is a large module in its own right, so this series is planned in four parts: principles, application, internals, and interview questions. This article focuses on the principles of multithreading; the other articles will follow.

1. Processes and threads

1.1 What is a process

A process is an instance of an executable program running in the system. Processes are independent of one another, and each runs in its own dedicated, protected memory space.

Note:

We often say that iOS is single-process. That is both true and not true.

If iOS were strictly single-process, with nothing at all running in the background, it would be hard to explain how push notifications get received or how the phone gets woken up. If you are familiar with reverse engineering, you will know that many background processes are running after boot.

So strictly speaking, iOS is single-process for applications: each app runs as a single process.

Why is iOS single-process for applications?

Because switching between multiple processes consumes a lot of resources, and the sandbox mechanism is used to ensure the system runs smoothly and safely.

1.2 What is a thread

A thread is the basic execution unit of a process: all tasks in a process are executed on threads, and a process must have at least one thread to execute tasks. A program starts one thread by default, called the main thread.

From a technical perspective, a thread is the combination of the kernel-level and application-level data structures needed to manage the execution of code. The kernel-level structures coordinate the dispatching of events to the thread and its scheduling on one of the available cores. The application-level structures include the call stack used to store function calls and the structures the application needs to manage and manipulate the thread’s attributes and state.

In simple terms, threads rely on kernel-level scheduling to complete application-level tasks.

1.3 Relationship between processes and threads

Address space: Threads of the same process share the address space of the same process. The address space of different processes is independent of each other.

Resource ownership: Threads of the same process share the resources of the same process, such as memory, I/O, and CPU. Resources of different processes are independent of each other.

Exception handling: The crash of one process does not affect another process. When a thread crashes, the corresponding process also crashes.

Execution process: Each process has its own entry point and an ordered execution sequence, while a thread cannot execute independently and must depend on a process. Tasks on a single thread are serial, and a thread can execute only one task at a time, so a thread is one execution path through a process.

1.4 Relationship between queues and threads

A queue is a special linear list with restricted operations and a FIFO (first-in, first-out) structure: it allows deletion only at the front of the list and insertion only at the rear.

Queues themselves have nothing to do with threads, yet the two always appear together. It’s like a bank: the teller window doesn’t care how the line is ordered; it just handles whoever comes next.

Queues determine the order in which tasks are executed; threads carry out the actual execution of those tasks.

1.5 Relationship between runloops and threads

  • A runloop and a thread correspond one-to-one: each runloop belongs to exactly one thread
  • When a thread’s runloop is enabled, the thread goes to sleep after finishing its tasks and is woken up when a new task arrives
  • A runloop is created when it is first retrieved and destroyed when its thread exits
  • The main thread’s runloop is created by default when the program starts; a child thread’s runloop must be created and maintained explicitly

2. Multithreading

2.1 Multithreading and single threading

Single thread:

In a non-concurrent application, only one thread of execution performs tasks. That thread starts and ends with the application’s main routine, branching one by one into different methods or functions to implement the application’s overall behavior.

It can be thought of as a bank (process) that has only one window (thread) to handle business (tasks); the bank needs at least that one window.

Multithreading:

A multithreaded application, that is, one that supports concurrency, starts with one thread and adds more threads as needed to create additional execution paths. Each new path has its own custom start routine that runs independently of the code in the application’s main routine.

You can think of it as a bank (process) that opens multiple windows (threads) to conduct business (perform tasks).

2.2 Principle of multi-threading

On a single-core device, multithreaded “concurrent” execution is really the CPU scheduling rapidly among multiple threads, switching between time slices; if the switching is fast enough, it creates the illusion of threads running concurrently. On a multi-core device, threads genuinely run in parallel.

These two situations are known as multitasking and multiprocessing.

Multitasking and multiprocessing are related concepts, but not the same. Multitasking is the ability to handle many different tasks at once. Multiprocessing is a computer’s ability to use more than one processor at a time. Typically, if you have more than one processor available in a multitasking environment, you can assign tasks among the processors. In this case, tasks can run simultaneously.

What happens if there are very many threads?

The CPU has to schedule among all of them, which consumes significant CPU resources; each thread gets scheduled less often, and overall program efficiency drops.

2.3 Advantages and disadvantages of multi-threading

Advantages of multithreading:

  • Multithreading improves the perceived responsiveness of an application
  • Multithreading improves an application’s real-time performance on multi-core systems
  • Multithreading can improve resource utilization

Disadvantages of multithreading:

  • Multithreading increases program complexity: communication between threads and data shared across threads must be handled, which increases the chance of errors
  • Starting a large number of threads occupies a large amount of memory
  • The more threads there are, the more overhead the CPU spends on scheduling them

2.4 Thread life cycle

After a new thread is created, it runs in one of three main states: running, ready, or blocked. If the thread is not currently running, it either blocks and waits for input or is ready to run. The thread moves back and forth between these states until it finally exits and enters a dead state.

  • new: a thread instance is created
  • ready: the start message is sent to the thread, and the thread is added to the schedulable thread pool
  • running: the CPU schedules the thread and executes its task
  • blocked: the thread is blocked, for example by a call to sleep, a synchronization lock, or a barrier function
  • dead: the task is finished and no new task needs to run, or the thread is forcibly exited

When you create a new thread, you must specify an entry point function for it. The entry point function constitutes the code that runs on the thread. When that function returns, or when the thread is forced to exit, the thread stops permanently and is reclaimed by the system. Because creating a thread is relatively expensive in memory and time, it is recommended that the entry point function either does a substantial amount of work or sets up a run loop to perform work repeatedly, rather than creating many threads each for a small task.

The best way to exit a thread is to let it naturally reach the end of its main entry point routine. Although APIs exist to force a thread to exit, they should be used only as a last resort. Forcing a thread to exit before it reaches its natural end point can have unpredictable consequences: if the thread has allocated memory, opened files, or acquired other resources, it may be unable to release them, resulting in memory leaks or other problems.

2.5 Thread Pool Principles

corePoolSize: the number of core threads. maxPoolSize: the maximum number of threads.
  1. If fewer than corePoolSize threads are running, create a thread and execute the task; otherwise, put the task on the blocking queue.
  2. If the task is successfully placed on the blocking queue: if the thread pool is no longer in the running state, remove the task from the queue and run reject() on it; if the pool is still in the running state, check again whether there is an idle thread (thread resources may have been freed since the last check), and if so, execute the task.
  3. If the task cannot be placed on the blocking queue, the queue is full, so try to create a new thread to execute the task; if that fails, the number of threads in the pool has reached maxPoolSize, and reject() is run on the task.

Thread pool saturation policies: when the blocking queue is full and there are no idle worker threads, submitting further tasks requires a policy for handling them. The thread pool provides four built-in strategies:

  • AbortPolicy: discards the task and throws a RejectedExecutionException, interrupting normal operation
  • CallerRunsPolicy: hands the task back to the caller, which runs it itself
  • DiscardOldestPolicy: discards the longest-waiting task and retries the submission (repeating this process)
  • DiscardPolicy: silently discards the task

You can also implement the RejectedExecutionHandler interface to customize the saturation policy, for example to log rejected tasks.

2.6 Multi-threading technical scheme

Scheme        Description                                              Language   Lifecycle
pthread       Generic API, cross-platform, hard to use                 C          Managed by the programmer
NSThread      Object-oriented, easy to use, operates threads directly  OC         Managed by the programmer
GCD           Takes full advantage of multi-core devices               C          Automatic
NSOperation   Object-oriented, built on GCD, more features             OC         Automatic

Managing thread lifecycles manually is tedious and error-prone, so pthread and NSThread are best avoided where possible; GCD and NSOperation are used far more often.

2.7 Task execution efficiency

Factors affecting task execution efficiency include:

  • The number of threads and the CPU’s scheduling speed
  • Task complexity
  • Task priority

2.8 Thread-safe library development

Many developers package up libraries to publish on GitHub or to share within a group.

Note when building a library that, although an application can control whether it executes with multiple threads, a library cannot. A library must be developed under the assumption that its callers are multithreaded, or could switch to multithreading at any time. Therefore, you should always take locks around the critical sections of your code.

One of the hazards of threaded programming is resource contention between multiple threads. Problems occur if multiple threads try to use or modify the same resource at the same time. When separate copies of a resource cannot be maintained, you must use locks, conditions, atomic operations, and other techniques to synchronize access to it.

For libraries, it is unwise to create locks only once the application becomes multithreaded. If your code will need a lock at some point, create the lock object early in the library’s use, ideally through an explicit call that initializes the library.

A widely circulated library must be thread-safe.

2.9 Multithreaded communication

  • Direct messaging: perform a selector directly on another thread; see performSelectorOnMainThread and related APIs
  • Global variables and shared memory: shared variables are fast and simple, but more fragile than direct messaging; you must protect them with locks or other synchronization mechanisms to keep the code correct
  • Condition locks: a thread waiting on a condition stays blocked until another thread explicitly signals that condition; see NSCondition
  • Runloop sources: for example, setting up a timer
  • Ports: port-based communication is a more complex way for two threads to communicate, but it is a very reliable technique, also built on the runloop; see NSPort
  • Message queues: a message queue is a collection of data (messages) that a task must process in first-in, first-out order; see NSNotificationQueue

Afterword

Multithreading is a powerful tool for improving program efficiency, but abusing it produces bugs that are hard to track down. Using multithreading sensibly is everyone’s responsibility.

Stay tuned for the next installment in the multithreaded application series.