The purpose of this article is to cover the basic concepts and principles of processes, threads, multithreading, thread pools, and related topics.

Threads and processes

Definitions of threads and processes

Thread

  • A thread is the basic execution unit of a process; all of a process's tasks are executed on threads
  • For a process to perform tasks it must have threads; a process must contain at least one thread
  • A program starts with one thread by default, called the main thread or the UI thread

Process

  • A process is an application that is currently running on the system
  • Processes are independent of each other; each process runs in its own dedicated and protected memory space
  • On macOS, Activity Monitor lets you view the processes that are currently running

A process is a container for the threads that perform its tasks. iOS development is single-process: one process is one app, and processes are independent of each other. Alipay, WeChat, and QQ, for example, are all separate processes.

The relationship between processes and threads

The relationship between processes and threads mainly involves two aspects:

  • Address space

    • Threads of the same process share that process's address space

    • Each process has its own independent address space

  • Resource ownership

    • Threads within a process share the process's resources, such as memory, I/O, and CPU time

    • Resources between processes are independent of each other

The relationship is like that between a factory and its assembly lines: factories are independent of one another, while the assembly lines inside a factory share the factory's resources. That is, a process corresponds to a factory, and a thread corresponds to an assembly line inside it.

For processes and threads, there are a few more notes:

  • 1: Multiple processes are more robust than multiple threads

    • A process crash in protected mode does not affect other processes

    • But if one thread crashes, the whole process dies

  • 2: Application scenarios: frequent switching and concurrent operations

    • Process switching consumes more resources and is less efficient, so threads are preferable when frequent switching is required.

    • Likewise, for concurrent operations that need to share variables, use threads rather than processes

  • 3: Execution process

    • Each independent process has a program entry point, a sequential execution sequence, and a program exit

    • Threads, however, cannot execute independently; they depend on the application, which provides control over the execution of its multiple threads.

  • 4: A thread is the basic unit of processor scheduling; a process is not.

  • 5: Threads do not have their own address space; they are contained within the process's address space

Thread and Runloop relationship

  • 1: Runloops and threads correspond one to one: one runloop corresponds to one core thread. It is called "core" because runloops can be nested, but each thread has only one core runloop; the thread/runloop pairs are stored in a global dictionary.

  • 2: A runloop is used to manage its thread: when the runloop is started, the thread goes to sleep after finishing its tasks and is woken up when new tasks arrive.

  • 3: A runloop is created the first time it is fetched and destroyed when its thread exits.

  • 4: For the main thread, the runloop is created by default as soon as the program starts.

  • 5: For child threads, the runloop is lazily loaded and only created when we first use it. So be careful when using a timer on a child thread: make sure the child thread's runloop has been created and started, otherwise the timer will never fire (see the sketch below).
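
A minimal sketch of keeping a timer alive on a child thread (the timerThreadEntry and timerFired selector names are assumptions for illustration):

[NSThread detachNewThreadSelector:@selector(timerThreadEntry) toTarget:self withObject:nil];

- (void)timerThreadEntry {
    // Fetching currentRunLoop lazily creates the child thread's runloop
    NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
    NSTimer *timer = [NSTimer timerWithTimeInterval:1.0
                                             target:self
                                           selector:@selector(timerFired)
                                           userInfo:nil
                                            repeats:YES];
    [runLoop addTimer:timer forMode:NSDefaultRunLoopMode];
    // Without this call the runloop never starts and the timer never fires
    [runLoop run];
}

- (void)timerFired {
    NSLog(@"timer fired on %@", [NSThread currentThread]);
}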

Multithreading

Principle of multithreading

  • With a single-core CPU, the CPU can only process one thread at a time; that is, only one thread is working at any given moment

  • The essence of "simultaneous" execution of multiple threads in iOS is the CPU rapidly switching between multiple tasks. Because the CPU schedules threads quickly enough, it creates the effect of multiple threads executing at the same time. The interval between switches is the time slice

Meaning of multithreading

Advantages

  • Appropriately improves the execution efficiency of the program
  • Appropriately improves resource utilization, such as CPU and memory
  • When a thread's task is complete, the thread is destroyed automatically

Disadvantages

  • Starting a thread occupies a certain amount of memory; by default, each thread occupies 512 KB
  • Opening a large number of threads occupies a large amount of memory and reduces the program's performance
  • The more threads there are, the more overhead the CPU spends on scheduling them
  • Programming becomes more complex, for example communication between threads and data sharing across multiple threads

Multithreaded life cycle

The life cycle of a thread is divided into five states: new, ready, running, blocked, and dead

  • New: Mainly instantiates thread objects

  • Ready: the thread object calls the start method, which adds it to the schedulable thread pool to wait for CPU scheduling. start does not execute the task immediately; the thread enters the ready state and waits for the CPU to schedule it, i.e., to move it from ready to running

  • Running: the CPU is responsible for scheduling the execution of threads in the schedulable thread pool. Before a thread's execution completes, its state may switch back and forth between ready and running; the CPU handles these transitions and the developer cannot interfere.

  • Blocked: when a predetermined condition is met, sleeping or a synchronization lock can be used to block the thread's execution; once the sleep ends or the lock is acquired, the thread is re-added to the ready state (see the sketch after this list). The following sleep settings apply to NSThread

    • sleepUntilDate: blocks the current thread until the specified time

    • sleepForTimeInterval: puts the thread to sleep for the given time interval, i.e., a specified sleep duration

    • @synchronized(self): a mutex; the thread blocks until the lock is released

  • Dead: falls into two categories

    • Normal death, that is, the thread completes execution

    • Abnormal death: execution is terminated inside the thread (or from the main thread) when some condition is met, for example by calling the exit method
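
A minimal sketch of the states above using NSThread (the blockingDemo method name is an assumption for illustration):

NSThread *thread = [[NSThread alloc] initWithTarget:self
                                           selector:@selector(blockingDemo)
                                             object:nil];   // new: instantiate the thread object
[thread start];                                              // ready: added to the schedulable thread pool

- (void)blockingDemo {
    NSLog(@"running on %@", [NSThread currentThread]);       // running: scheduled by the CPU
    [NSThread sleepForTimeInterval:2.0];                      // blocked: sleep for a fixed interval
    [NSThread sleepUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]]; // blocked until a given date
    @synchronized (self) {                                    // blocked if another thread holds the lock
        NSLog(@"inside the locked section");
    }
    NSLog(@"done");                                           // normal death once the method returns
}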

A running thread is given a period of time (called a time slice) during which it can execute.

  • If the time slice runs out, the thread enters the ready state queue

  • If the time slice is not used up but the thread has to wait for some event, it enters the blocked-state queue

  • After an event occurs, the thread reenters the ready queue

  • Whenever a thread leaves the running state, whether because it finished or because it was forced to exit, another thread is selected from the ready queue to execute

Exit and cancel instructions for the thread

  • exit: once the thread is forcibly terminated, none of the subsequent code is executed
  • cancel: marks the current thread as cancelled, but does not stop a thread that is already executing

Does a higher priority of a thread mean faster execution of a task?

No. The speed of thread execution depends not only on priority but also on the size of the resources involved (i.e., the task's complexity) and on CPU scheduling. In NSThread, threadPriority has been replaced by qualityOfService; the relevant enumeration values are shown below.
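
For reference, the NSQualityOfService enumeration declared in Foundation looks roughly like this (ordered here from highest to lowest priority):

typedef NS_ENUM(NSInteger, NSQualityOfService) {
    NSQualityOfServiceUserInteractive = 0x21, // UI work: highest priority
    NSQualityOfServiceUserInitiated   = 0x19, // work the user is actively waiting for
    NSQualityOfServiceUtility         = 0x11, // long-running work with visible progress
    NSQualityOfServiceBackground      = 0x09, // prefetching, backups: lowest priority
    NSQualityOfServiceDefault         = -1    // falls between UserInitiated and Utility
};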

Thread Pool Principle

  • [Step 1] Determine whether all core threads in the pool are busy executing tasks

    • If NO, create a new worker thread to execute the task

    • If YES, go to [Step 2]

  • [Step 2] Determine whether the thread pool's work queue is full

    • If NO, store the task in the work queue to wait for CPU scheduling

    • If YES, go to [Step 3]

  • [Step 3] Determine whether all threads in the thread pool are currently executing

    • If NO, schedule an idle thread from the schedulable thread pool to execute the task

    • If YES, go to [Step 4]

  • [Step 4] Hand the task to the saturation (rejection) policies, mainly the following four (these come from Java's ThreadPoolExecutor and have no direct counterpart in iOS; see the sketch after this list)

    • AbortPolicy: throws a RejectedExecutionException directly, preventing the system from running normally

    • CallerRunsPolicy: pushes the task back to the caller to run

    • DiscardOldestPolicy: discards the task that has been waiting the longest

    • DiscardPolicy: silently discards the task
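
iOS does not expose these saturation policies directly; the closest everyday analogue of a thread pool in iOS code is an NSOperationQueue with a limited concurrency level. A minimal sketch (the limit of 4 is an arbitrary illustration):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 4;   // roughly the "core thread" count

for (NSInteger i = 0; i < 10; i++) {
    [queue addOperationWithBlock:^{
        // Tasks beyond the concurrency limit simply wait in the queue; they are never rejected
        NSLog(@"task %ld on %@", (long)i, [NSThread currentThread]);
    }];
}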

iOS multithreading implementation schemes

There are four ways to implement multithreading in iOS: pthread, NSThread, GCD, and NSOperation.

Here are simple examples of the above four scenarios

// *********1: pthread*********
pthread_t threadId = NULL;
// The parameter handed to the new thread: an OC object bridged to a C pointer
NSString *name = @"HelloCode";
/*
 pthread_create parameters:
 1. pthread_t *: pointer to the thread to be created. For C structure types (suffix `_t` / `Ref`)
    there is no need to write a trailing `*`.
 2. Thread attributes: nil (null object, used in OC) / NULL (null address, 0, used in C).
 3. void *(*)(void *): the entry function. `void *` is a pointer to an arbitrary object,
    similar to `id` in OC; (*) is the function name and (void *) is the parameter type.
 4. void *: the argument passed to the entry function.
 */
int result = pthread_create(&threadId, NULL, pthreadTest, (__bridge void *)(name));
if (result == 0) {
    NSLog(@"success");
} else {
    NSLog(@"failure");
}

// *********2: NSThread*********
[NSThread detachNewThreadSelector:@selector(threadTest) toTarget:self withObject:nil];

// *********3: GCD*********
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [self threadTest];
});

// *********4: NSOperation*********
[[[NSOperationQueue alloc] init] addOperationWithBlock:^{
    [self threadTest];
}];

- (void)threadTest {
    NSLog(@"begin");
    NSInteger count = 1000 * 100;
    for (NSInteger i = 0; i < count; i++) {
        NSInteger num = i;
        NSString *name = @"zhang";
        NSString *myName = [NSString stringWithFormat:@"%@ - %zd", name, num];
        NSLog(@"%@", myName);
    }
    NSLog(@"over");
}

void *pthreadTest(void *para) {
    // If a plain C string were passed instead: NSLog(@"===> %@ %s", [NSThread currentThread], para);
    NSString *name = (__bridge NSString *)(para);
    NSLog(@"===> %@ %@", [NSThread currentThread], name);
    return NULL;
}

Bridge between C and OC

The examples above involve bridging between C and OC, as described below

  • __bridge only performs a type conversion and does not change object (memory) management

  • __bridge_retained (or CFBridgingRetain) converts an Objective-C object into a Core Foundation object and hands object (memory) management over to us; CFRelease or a related method must be called later to release the object

  • __bridge_transfer (or CFBridgingRelease) converts a Core Foundation object into an Objective-C object and hands object (memory) management over to ARC
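
A minimal sketch of the three casts under ARC (the string values are arbitrary):

NSString *ocString = @"HelloCode";

// __bridge: type conversion only, ownership is unchanged
CFStringRef cfString1 = (__bridge CFStringRef)ocString;
CFIndex length = CFStringGetLength(cfString1);
NSLog(@"length = %ld", (long)length);

// __bridge_retained / CFBridgingRetain: ownership is handed to us, so CFRelease is required
CFStringRef cfString2 = (__bridge_retained CFStringRef)ocString;
CFRelease(cfString2);

// __bridge_transfer / CFBridgingRelease: ownership is handed over to ARC, no CFRelease needed
CFStringRef cfString3 = CFStringCreateWithCString(kCFAllocatorDefault, "HelloCode", kCFStringEncodingUTF8);
NSString *backToOC = (__bridge_transfer NSString *)cfString3;
NSLog(@"%@", backToOC);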

Thread safety

When multiple threads access the same resource at the same time, data corruption and data-safety problems can occur. There are two common solutions

  • Mutex (synchronization lock): @synchronized

  • Spin lock

The mutex

  • Used to protect critical sections and ensure that only one thread can execute at a time

  • If only one place in the code needs locking, most of the time self is used as the lock object, which avoids creating a separate lock object

  • When a new thread reaches mutex-protected code and finds another thread already executing the locked code, the new thread goes to sleep

There are a few other things to note about mutex:

  • The locked scope of a mutex should be as small as possible: the larger the locked scope, the worse the efficiency

  • Any NSObject can be used as the lock object

  • The lock object must be accessible to all of the threads involved (a sketch follows)
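
A minimal sketch of @synchronized protecting a shared counter (the ticketCount property and sellTicket method are assumptions for illustration):

// Assumed property: @property (nonatomic, assign) NSInteger ticketCount;
- (void)sellTicket {
    @synchronized (self) {          // self as the lock object: accessible to every thread
        if (self.ticketCount > 0) {
            self.ticketCount -= 1;  // only one thread mutates the count at a time
            NSLog(@"sold one ticket, %ld left", (long)self.ticketCount);
        }
    }
}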

Spin locks

  • A spin lock is similar to a mutex, but instead of blocking the thread by putting it to sleep, it busy-waits (spins in place, hence "spin") until the lock is acquired

  • Usage scenario: spin locks make sense when the lock will be held only briefly and the thread does not want to pay the cost of being rescheduled. The property modifier atomic comes with a spin lock of its own

  • With a spin lock, when a new thread reaches the code and finds another thread holding the lock, it waits in an infinite loop for the locked code to finish, i.e., it keeps retrying, which consumes CPU
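
For reference, the classic iOS spin lock OSSpinLock has been deprecated since iOS 10 because of priority-inversion problems; a minimal sketch of its replacement, os_unfair_lock (not a true spin lock, but the lock Apple recommends for the same short-critical-section use case):

#import <os/lock.h>

os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;

os_unfair_lock_lock(&lock);
// ... short critical section: only one thread at a time ...
os_unfair_lock_unlock(&lock);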

[Interview question] : Spinlocks vs mutex

  • Same: at any given moment, only one thread is allowed to execute the task; that is, both provide the corresponding synchronization

  • Different:

    • Mutex: when it finds another thread executing, the current thread goes to sleep and waits, i.e., it is suspended; once the other thread releases the lock, it is woken up and resumes execution

    • Spin lock: when it finds another thread executing, the current thread busy-waits, which has a higher performance cost

  • Scenario: choose the lock according to the complexity and duration of the task; a mutex is better suited to tasks that take longer to complete

    • Use a spin lock when the current task is short and quick

    • Otherwise, use a mutex

atomic (atomic lock) & nonatomic (non-atomic lock)

atomic and nonatomic are mainly used as property modifiers. Some related notes:

  • atomic is the atomic property; it is intended for multithreaded development and is the default
    • It only adds a lock (a spin lock) to the property's setter method, which ensures that only one thread writes to the property at a time
    • It is a "single writer, many readers" threading technique
    • Commonly used in macOS development
  • nonatomic is the non-atomic property
    • No lock! Higher performance!
    • Commonly used in mobile (iOS) development
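
A minimal sketch of the two property declarations (the property names are arbitrary):

@property (atomic, strong) NSString *atomicName;        // atomic is the default; the setter is locked
@property (nonatomic, strong) NSString *nonatomicName;  // no lock, higher performance, standard in iOS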

What is the difference between atomic and nonatomic

  • nonatomic

    • Nonatomic property

    • Non-thread-safe, suitable for mobile devices with small memory

  • atomic

    • Atomic property (thread safety), designed for multithreading, default

    • Ensures that only one thread writes at a time (while multiple threads may read at the same time)

    • atomic comes with a lock of its own (a spin lock): single-thread write, multi-thread read

    • Thread safety, which consumes a lot of resources

iOS development advice

  • All properties are declared nonatomic

  • Avoid multiple threads contending for the same resource; where possible, move locking and resource-contention logic to the server side to reduce the load on the mobile client

Interthread communication

According to the Threading Programming Guide, threads can communicate in the following ways

  • Direct messaging: through the family of performSelector methods, one thread can execute a task on another, specified thread. Because the task executes in the context of the target thread, messages sent this way are automatically serialized (see the sketch after this list)

  • Global variables, shared memory blocks, and objects: another simple way to pass information between two threads is to use global variables, shared objects, or shared memory blocks. Although shared variables are fast and simple, they are more fragile than direct messaging: shared variables must be carefully protected with locks or other synchronization mechanisms to ensure correctness. Failure to do so may result in race conditions, data corruption, or crashes.

  • Conditional execution: A condition is a synchronization tool that can be used to control when a thread executes a particular part of code. You can treat a condition as a lock and let the thread run only when the specified condition is met.

  • Runloop Sources: A custom Runloop source configuration allows specific application messages to be received on a thread. Because Runloop sources are event-driven, threads automatically go to sleep when there is nothing to do, increasing thread efficiency

  • Ports and sockets: port-based communication is a more sophisticated way for two threads to communicate, but it is also a very reliable technique. More importantly, ports and sockets can be used to communicate with external entities, such as other processes and services. For efficiency, a port is implemented using a runloop source, so the thread goes to sleep when there is no data waiting on the port. Note that the port must be added to the main thread's runloop; otherwise the port callback method will never be invoked

  • Message queues: Traditional multiprocessing services define a first-in, first-out (FIFO) queue abstraction for managing incoming and outgoing data. Although message queues are simple and convenient, they are not as efficient as some other communication technologies

  • Cocoa Distributed Objects: Distributed objects are a Cocoa technology that provides a high-level implementation of port-based communication. Although it is possible to use this technique for interthread communication, it is strongly recommended not to do so because of the overhead involved. Distributed objects are better suited for communicating with other processes, although transactions between these processes are also expensive
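
As a sketch of the direct-messaging approach listed first above (the updateUI: method name and the payload string are assumptions for illustration):

// Do background work on a global queue, then message the main thread directly
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSString *result = @"data loaded";
    // performSelectorOnMainThread queues -updateUI: to run in the main thread's context
    [self performSelectorOnMainThread:@selector(updateUI:)
                           withObject:result
                        waitUntilDone:NO];
});

- (void)updateUI:(NSString *)result {
    NSLog(@"main thread received: %@", result);   // runs on the main thread
}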