A note before we begin: this exploration of the underlying principles of iOS is one more step along a road of steady accumulation from my day-to-day development and study. I hope this record of my journey of discovery will be helpful to readers.
The directory is as follows:
- iOS underlying principles: exploring alloc
- iOS underlying principles: struct memory alignment
- iOS underlying principles: the nature of objects & the underlying implementation of isa
- iOS underlying principles: from isa to class (part 1)
- iOS underlying principles: from isa to class (part 2)
- iOS underlying principles: from isa to class (part 3)
- iOS underlying principles: the nature of the Runtime & methods
- iOS underlying principles: objc_msgSend
- iOS underlying principles: the Runtime's slow method lookup process
- iOS underlying principles: dynamic method resolution
- iOS underlying principles: the message forwarding process
- iOS underlying principles: application loading (part 1)
- iOS underlying principles: application loading (part 2)
- iOS underlying principles: class loading
- iOS underlying principles: category loading
- iOS underlying principles: associated objects
- iOS underlying principles: KVC
- iOS underlying principles: KVO principles | August text challenge
- iOS underlying principles: implementing a custom KVO | August text challenge
A summary column for the above:
- iOS underlying principles: a phase summary
Preface
In actual development there are all kinds of business logic to implement, and as the business grows and the content becomes more complex, multithreading technology becomes more and more important. Starting with this article, we will explore the principles and applications of multithreading in iOS development and summarize some of the pitfalls that come up in interviews. The content builds up bit by bit, so let's get to it!
What is a thread
Threading is one of several techniques that allow you to execute multiple code paths simultaneously in a single application. While new technologies such as Operation Objects and Grand Central Dispatch (GCD) provide a more modern and efficient infrastructure for concurrency, OS X and iOS also provide interfaces for creating and managing threads.
About Thread Programming
For many years, the maximum performance of a computer was largely limited by the speed of the single microprocessor at its heart. However, as the speed of individual processors began to reach their practical limits, chipmakers turned to multi-core designs, giving the computer the opportunity to perform multiple tasks at once. While OS X does its best to utilize these cores for system-related tasks, your own applications can also utilize them through threads.
What is a thread?
Threads are a relatively lightweight way to implement multiple paths of execution inside an application. At the system level, programs run side by side, and the system doles out execution time to each program based on its needs and the needs of other programs. Inside each program, however, exist one or more threads of execution, which can be used to perform different tasks simultaneously or in a nearly simultaneous manner. The system itself actually manages these threads of execution, scheduling them to run on the available cores and preemptively interrupting them as needed to allow other threads to run.
From a technical perspective, a thread is a combination of the kernel-level and application-level data structures needed to manage the execution of code. The kernel-level structures coordinate the dispatching of events to the thread and the preemptive scheduling of the thread on one of the available cores. The application-level structures include the call stack for storing function calls and the structures the application needs to manage and manipulate the thread's attributes and state.
In a non-concurrent application, there is only one thread of execution. The thread starts and ends with your application’s main routine, branching into different methods or functions to implement the overall behavior of the application. In contrast, an application that supports concurrency starts with one thread and adds more threads as needed to create additional execution paths. Each new path has its own custom startup routine that runs independently of the code in the application’s main routine. Having multiple threads in your application offers two very important potential advantages:
- Multithreading can improve the perceived responsiveness of an application.
- Multithreading can improve the real-time performance of applications on multi-core systems.
If your application has only one thread, that thread must do everything. It must respond to events, update the application's windows, and perform all of the computations needed to implement the application's behavior. The problem with having only one thread is that it can do only one thing at a time. So what happens when one of your computations takes a long time to finish? While your code is busy calculating the values it needs, your application stops responding to user events and updating its windows. If this behavior continues long enough, a user might think your application is hung and try to force quit it. If you move your custom computations onto a separate thread, however, your application's main thread remains free to respond to user interactions in a more timely manner.
With the popularity of multi-core computers, threading provides a way to improve the performance of certain types of applications. Threads that perform different tasks can execute simultaneously on different processor cores, allowing an application to increase the amount of work it does in a given amount of time.
Of course, threading is not a panacea for an application's performance problems. Along with the benefits threading provides come potential problems. Having multiple paths of execution in an application can add considerable complexity to your code. Each thread must coordinate its actions with other threads to prevent it from corrupting the application's state information. Because threads in a single application share the same memory space, they have access to all of the same data structures. If two threads try to manipulate the same data structure at the same time, one thread might overwrite the other's changes in a way that corrupts the resulting data structures. Even with proper safeguards in place, you still have to watch out for compiler optimizations that introduce subtle (and not so subtle) bugs into your code.
- thread: used to refer to a separate path of execution for code.
- process: used to refer to a running executable, which can encompass multiple threads.
- task: used to refer to the abstract concept of work that needs to be performed.
Alternatives to Threads
One problem with creating your own threads is that they add uncertainty to your code. Threading is a relatively low-level and complex way to support concurrency in an application. If you don’t fully understand the implications of your design choices, it’s easy to run into synchronization or timing problems that can range in severity from subtle changes in behavior to application crashes and user data corruption.
Another factor to consider is whether you need threads or concurrency at all. Threads solve the specific problem of how to execute multiple code paths concurrently inside the same process. There may be cases, however, where the amount of work you want to perform does not warrant concurrency. Threads can introduce a significant amount of overhead to your process in terms of both memory consumption and CPU time. You may discover that this overhead is too great for the intended task, or that other options are easier to implement.
Table 1-1 lists some of the alternatives to threads. This table includes both replacement technologies for threads (such as operation objects and GCD) and alternatives geared toward efficiently using the single thread you already have.
Table 1-1 Alternative technologies for threading
Technology | Description |
---|---|
Operation objects | Introduced in OS X v10.5, an operation object is a wrapper for a task that would normally be executed on a worker thread. This wrapper hides the thread management aspects of performing the task, leaving you free to focus on the task itself. You typically use these objects in conjunction with an operation queue object, which actually manages the execution of the operation objects on one or more threads. For more information on how to use operation objects, see the *Concurrency Programming Guide*. |
Grand Central Dispatch (GCD) | Introduced in Mac OS X v10.6, Grand Central Dispatch is another alternative to threads that lets you focus on the tasks you need to perform rather than on thread management. With GCD, you define the tasks you want to perform and add them to a work queue, which handles the scheduling of your tasks on an appropriate thread. Work queues take into account the number of available cores and the current load to execute tasks more efficiently than you could with threads yourself. For information on how to use GCD and work queues, see the *Concurrency Programming Guide*. |
Idle-time notifications | For tasks that are relatively short and very low priority, idle-time notifications let you perform the task at a time when your application is less busy. Cocoa provides support for idle-time notifications through the NSNotificationQueue object. To request an idle-time notification, post a notification to the default NSNotificationQueue object using the NSPostWhenIdle option. The queue delays the delivery of your notification object until the run loop becomes idle. For more information, see *Notification Programming Topics*. |
Asynchronous functions | The system interfaces include many asynchronous functions that provide automatic concurrency for you. These APIs may use system daemons and processes or create custom threads to perform their task and return the results to you. (The actual implementation is irrelevant because it is separate from your code.) As you design your application, look for functions that offer asynchronous behavior and consider using them instead of the equivalent synchronous functions on a custom thread. |
Timers | You can use timers on your application's main thread to perform periodic tasks that are too trivial to require a thread but that still need to be serviced at regular intervals. For information about timers, see Timer Sources. |
Separate processes | Although more heavyweight than threads, creating a separate process can be useful in cases where the task is only tangentially related to your application. You might use a process if the task requires a significant amount of memory or must be executed using root privileges. For example, you might use a 64-bit server process to compute a large data set while your 32-bit application displays the results to the user. |
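As a quick taste of the GCD alternative from the table above, here is a minimal sketch; the work inside the blocks is a placeholder, and the queue choices are just common defaults, not part of the original text:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Placeholder: do the expensive work off the current thread.
    NSString *result = @"done";
    dispatch_async(dispatch_get_main_queue(), ^{
        // Hop back to the main queue to publish the result.
        NSLog(@"work finished: %@", result);
    });
});
Note that no thread is created or managed explicitly; the queues decide which threads run the blocks.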
Thread management
The cost of threads
Threads have a real cost to your program (and system) in terms of memory usage and performance. Each thread needs to allocate memory in both the kernel memory space and the program memory space. The core structures needed to manage threads and coordinate their scheduling are stored in the kernel using wired memory. The stack space of threads and the data of each thread are stored in the program’s memory space. Most of these structures are created and initialized when you first create a thread — a process that can be relatively expensive due to the interaction with the kernel.
Table 2-1 quantifies the approximate costs associated with creating new user-level threads in an application. Some of these costs are configurable, such as the amount of stack space allocated for worker threads. The time cost of creating threads is a rough approximation and should only be used in comparison with one another. Thread creation times can vary greatly depending on processor load, computer speed, and the amount of system and program memory available.
Table 2-1 Cost of creating a thread
Note: Because of their underlying kernel support, operation objects can generally create threads more quickly. Rather than creating threads from scratch each time, they use a pool of threads already residing in the kernel to save on allocation time. For more information on using operation objects, see the *Concurrency Programming Guide*.
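To make the note concrete, here is a minimal operation-queue sketch (the block body is a placeholder); the queue, not your code, decides which of its managed threads runs the task:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock:^{
    // Placeholder work; executed on a thread the queue manages.
    NSLog(@"running on %@", [NSThread currentThread]);
}];
(The addOperationWithBlock: method is available in OS X v10.6 and iOS 4.0 and later.)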
Another cost to consider when writing threaded code is the production cost. Designing a threaded application can sometimes require fundamental changes to the way your application's data structures are organized. Making those changes might be necessary to avoid the use of synchronization, which can itself impose a significant performance penalty on poorly designed applications. Designing those data structures, and debugging problems in threaded code, can increase the time it takes to develop a threaded application. Avoiding those costs can create bigger problems at runtime, however, if your threads spend too much time waiting on locks or sitting idle.
Creating a thread
Creating low-level threads is relatively simple. In all cases, you must have a function or method as the main entry point for the thread, and you must start the thread using one of the available thread routines. The following sections show the basic creation process for the more common threading techniques. Threads created using these techniques inherit a set of default properties, depending on the technique you use.
Using NSThread
There are two ways to create a thread using the NSThread class:
- Use the detachNewThreadSelector:toTarget:withObject: class method to spawn a new thread.
- Create a new NSThread object and call its start method. (Supported only in iOS and OS X v10.5 and later.)
Both techniques create a detached thread in your application. A detached thread means that the system automatically reclaims the thread's resources when the thread exits. It also means that your code does not have to explicitly join with the thread later.
Because the detachNewThreadSelector:toTarget:withObject: method is supported in all versions of OS X, you often find it in existing Cocoa applications that use threads. To detach a new thread, you simply provide the name of the method (specified as a selector) that you want to use as the thread's entry point, the object that defines that method, and any data you want to pass to the thread at launch time. The following example shows a basic invocation of this method that spawns a thread using a custom method of the current object.
[NSThread detachNewThreadSelector:@selector(myThreadMainMethod:) toTarget:self withObject:nil];
Prior to OS X v10.5, you used the NSThread class primarily to spawn threads. Although you could get an NSThread object and access some thread attributes, you could do so only from the thread itself after it was running. In OS X v10.5, support was added for creating NSThread objects without immediately spawning the corresponding new thread. (This support is also available in iOS.) This makes it possible to get and set various thread attributes before starting the thread. It also makes it possible to use that thread object to refer to the running thread later.
The simple way to initialize an NSThread object in OS X v10.5 and later is to use the initWithTarget:selector:object: method. This method takes exactly the same information as detachNewThreadSelector:toTarget:withObject: and uses it to initialize a new NSThread instance. It does not start the thread, however. To start the thread, you call the thread object's start method explicitly, as shown in the following example:
NSThread* myThread = [[NSThread alloc] initWithTarget:self
selector:@selector(myThreadMainMethod:)
object:nil];
[myThread start]; // Actually create the thread
If you have an NSThread object whose thread is currently running, one way you can send messages to that thread is to use the performSelector:onThread:withObject:waitUntilDone: method of almost any object in your application. Support for performing selectors on threads other than the main thread was introduced in OS X v10.5 and is a convenient way to communicate between threads. (This support is also available in iOS.) The messages you send using this technique are executed directly by the other thread as part of its normal run-loop processing. (This means, of course, that the target thread must be running in its run loop; see Run Loops.) You may still need some form of synchronization when you communicate this way, but it is simpler than setting up communication ports between the threads.
Note: Although fine for occasional communication between threads, you should not use the performSelector:onThread:withObject:waitUntilDone: method for time-critical or frequent communication between threads.
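As a brief illustration of the paragraph above, a call might look like the following sketch; handleData:, myThread, and data are placeholder names, not from the original:
[self performSelector:@selector(handleData:)
             onThread:myThread
           withObject:data
        waitUntilDone:NO];
Passing NO for waitUntilDone queues the message on the target thread's run loop and returns immediately, instead of blocking the sending thread.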
Writing your thread entry routine
For the most part, the structure of your thread's entry point routine is the same in OS X as it is on other platforms. You initialize your data structures, do some work or optionally set up a run loop, and clean up when your thread's code is done. Depending on your design, there may be some additional steps you need to take when writing your entry routine.
Creating an autorelease pool
Applications that link against Objective-C frameworks typically must create at least one autorelease pool in each of their threads. If an application uses the managed model, in which the application handles the retaining and releasing of objects, the autorelease pool catches any objects that are autoreleased from that thread.
If your application uses garbage collection instead of a managed memory model, then creating an automatic release pool is not absolutely necessary. The presence of automatic release pools in garbage collection applications is harmless and, for the most part, simply ignored. This is allowed in cases where the code module must support both garbage collection and managed memory models. In this case, the automatic release pool must exist to support managed memory model code and is ignored if the application is run with garbage collection enabled.
If your application uses the managed memory model, creating an automatic release pool should be the first thing you do in the thread entry routine. Also, destroying the auto-release pool should be the last thing you do in the thread. This pool ensures that automatically released objects are captured, although it does not release them until the thread itself exits. Listing 2-2 shows the structure of a basic thread entry routine that uses an automatic release pool.
Listing 2-2 defines the thread entry point routine
- (void)myThreadMainRoutine
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // Top-level pool
// Do thread work here.
[pool release]; // Release the objects in the pool.
}
Because the top-level autorelease pool does not release its objects until the thread exits, long-lived threads should create additional autorelease pools to release objects more frequently. For example, a thread that uses a run loop might create and release an autorelease pool each time through that run loop. Releasing objects more frequently prevents your application's memory footprint from growing too large and causing performance problems. As with any performance-related behavior, though, you should measure the actual performance of your code and tune your use of autorelease pools appropriately.
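A sketch of that pattern, assuming moreWorkToDo is a placeholder flag maintained by your own logic:
while (moreWorkToDo) {
    NSAutoreleasePool *loopPool = [[NSAutoreleasePool alloc] init];
    // Do one chunk of work here; any autoreleased temporaries
    // created in this iteration are drained when the pool is released.
    [loopPool release];
}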
Setting up an exception handler
If your application catches and handles exceptions, your thread code should be prepared to catch any exceptions that might occur. Although the best approach is to handle exceptions at the point where they might occur, failing to catch a thrown exception in a thread causes your application to exit. Installing a final try/catch block in your thread entry routine allows you to catch any unknown exceptions and provide an appropriate response.
When building projects in Xcode, you can use either the C++ or the Objective-C exception handling style. For information about setting how exceptions are raised and caught in Objective-C, see *Exception Programming Topics*.
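A minimal sketch of such a final safety net; the thread's actual work and the logging response are placeholders:
- (void)myThreadMainRoutine
{
    @try {
        // Do the thread's real work here.
    }
    @catch (NSException *exception) {
        // Last-chance handler: respond (here, just log) instead of
        // letting an uncaught exception terminate the application.
        NSLog(@"Uncaught exception on worker thread: %@", exception);
    }
}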
Setting up a run loop
When writing code you want to run on a separate thread, you have two options. The first option is to write the code for the thread as one long task to be performed with little or no interruption, and have the thread exit when it finishes. The second option is to put your thread into a loop and have it process requests dynamically as they arrive. The first option requires no special setup for your code; you just start doing the work you want to do. The second option, however, involves setting up your thread's run loop.
OS X and iOS provide built-in support for run loops in every thread. The application frameworks start the run loop of your application's main thread automatically. If you create any secondary threads, you must configure the run loop and start it manually.
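A common sketch of a secondary thread's main routine follows; the NSMachPort here is only a placeholder input source, attached so the loop has something to monitor and does not exit immediately:
- (void)helperThreadMain
{
    NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
    // A run loop with no input sources exits right away, so attach one.
    [runLoop addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
    [runLoop run]; // Services input sources until all sources are removed.
}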
Terminating a thread
The recommended way to exit a thread is to let it exit its entry point routine normally. Although Cocoa, POSIX, and Multiprocessing Services offer routines for killing threads directly, the use of such routines is strongly discouraged. Killing a thread prevents that thread from cleaning up after itself. Memory allocated by the thread could be leaked, and any other resources currently in use by the thread might not be cleaned up properly, creating potential problems later.
If you expect to terminate the thread in the middle of an operation, you should design the thread from the start to respond to cancellation or exit messages. For long-running operations, this might mean periodically stopping work and checking to see if such messages are received. If a message does ask the thread to exit, the thread will have the opportunity to perform any required cleanup and exit gracefully; Otherwise, it can simply return to work and process the next piece of data.
One way to respond to cancel messages is to use a run loop input source to receive such messages. Listing 2-3 shows the structure of this code in the thread's main entry routine. (The example shows only the main loop portion and does not include the steps for setting up an autorelease pool or configuring the actual work to do.) The example installs a custom input source on the run loop, presumably one that can receive messages from another of your threads; for information on setting up input sources, see Configuring Run Loop Sources. After performing a portion of the total amount of work, the thread runs the run loop briefly to see if a message arrived on the input source. If not, the run loop exits immediately and the loop continues with the next chunk of work. Because the handler does not have direct access to the exitNow local variable, the exit condition is communicated through a key-value pair in the thread dictionary.
Listing 2-3 Checking for an exit condition during a long job
- (void)threadMainRoutine
{
BOOL moreWorkToDo = YES;
BOOL exitNow = NO;
NSRunLoop* runLoop = [NSRunLoop currentRunLoop];
// Add the exitNow BOOL to the thread dictionary.
NSMutableDictionary* threadDict = [[NSThread currentThread] threadDictionary];
[threadDict setValue:[NSNumber numberWithBool:exitNow] forKey:@"ThreadShouldExitNow"];
// Install an input source.
[self myInstallCustomInputSource];
while (moreWorkToDo && !exitNow)
{
// Do one chunk of a larger body of work here.
// Change the value of the moreWorkToDo Boolean when done.
// Run the run loop but timeout immediately if the input source isn't waiting to fire.
[runLoop runUntilDate:[NSDate date]];
// Check to see if an input source handler changed the exitNow value.
exitNow = [[threadDict valueForKey:@"ThreadShouldExitNow"] boolValue];
}
}
Synchronization
The presence of multiple threads in an application opens up potential issues regarding safe access to resources from multiple threads of execution. Two threads modifying the same resource might interfere with each other in unintended ways. For example, one thread might overwrite another's changes or put the application into an unknown and potentially invalid state. If you are lucky, the corrupted resource causes obvious performance problems or crashes that are relatively easy to track down and fix. If you are unlucky, however, the corruption causes subtle errors that do not manifest themselves until much later, or errors that might require a significant overhaul of your underlying coding assumptions.
When it comes to thread-safety, good design is the best protection. Avoiding shared resources and minimizing interaction between threads reduces the likelihood that these threads will interfere with each other. However, completely interference-free design is not always possible. In cases where your threads must interact, you need to use synchronization tools to ensure that they do so safely.
OS X and iOS provide a number of synchronization tools at your disposal, from tools that provide mutually exclusive access to tools that order events correctly in your application.
Synchronization tools
To prevent accidental changes to data by different threads, you can design your application to have no synchronization issues, or you can use synchronization tools. While it is desirable to avoid synchronization problems entirely, it is not always possible.
Atomic operations
Atomic operations are a simple form of synchronization that works on simple data types. The advantage of atomic operations is that they do not block competing threads. For simple operations, such as incrementing a counter variable, this can lead to much better performance than taking a lock.
OS X and iOS include numerous operations to perform basic mathematical and logical operations on 32-bit and 64-bit values. Among these operations are atomic versions of the compare-and-swap, test-and-set, and test-and-clear operations. For a list of supported atomic operations, see the /usr/include/libkern/OSAtomic.h header file or the atomic man page.
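For example, here is a minimal sketch using two functions from that header; counter, flag, and the wrapping function are illustrative names, not from the original text:
#include <libkern/OSAtomic.h>

static volatile int32_t counter = 0;
static volatile int32_t flag = 0;

void incrementAndPublish(void)
{
    // Atomically increment the counter; competing threads are never blocked.
    OSAtomicIncrement32Barrier(&counter);

    // Atomically change flag from 0 to 1 only if it is still 0;
    // the function returns true if the swap actually happened.
    bool didSwap = OSAtomicCompareAndSwap32Barrier(0, 1, &flag);
    (void)didSwap;
}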
Memory barriers and volatile variables
For best performance, compilers often reorder assembly-level instructions to keep the instruction pipeline for the processor as full as possible. As part of this optimization, the compiler may reorder instructions that access main memory when it thinks doing so would not generate incorrect data. Unfortunately, it is not always possible for the compiler to detect all memory-dependent operations. If seemingly separate variables actually influence each other, the compiler optimizations could update those variables in the wrong order, generating potentially incorrect results.
A memory barrier is a type of nonblocking synchronization tool used to ensure that memory operations occur in the correct order. A memory barrier acts like a fence, forcing the processor to complete any load and store operations positioned in front of the barrier before it is allowed to perform load and store operations positioned after the barrier. Memory barriers are typically used to ensure that memory operations by one thread (but visible to another) always occur in an expected order. The lack of a memory barrier in such a situation might allow other threads to see seemingly impossible results. (For an example, see the Wikipedia entry for memory barriers.) To employ a memory barrier, you simply call the OSMemoryBarrier function at the appropriate point in your code.
Volatile variables apply another type of memory constraint to individual variables. The compiler often optimizes code by loading the values for variables into registers. For local variables, this is usually not a problem. If the variable is visible from another thread, however, such an optimization might prevent the other thread from noticing any changes to it. Applying the volatile keyword to a variable forces the compiler to load that variable from memory each time it is used. You might declare a variable as volatile if its value could be changed at any time by an external source that the compiler may not be able to detect.
Because both memory barriers and volatile variables reduce the number of optimizations that the compiler can perform, they should be used with care and only when required to ensure correctness. For information on using memory barriers, see the OSMemoryBarrier man page.
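To tie the two together, here is a hedged producer/consumer sketch; dataReady, sharedValue, and the two functions are illustrative names, and the pairing of barriers shown is one conventional pattern rather than the only one:
#import <Foundation/Foundation.h>
#include <libkern/OSAtomic.h>

static volatile BOOL dataReady = NO; // volatile: always reloaded from memory
static int sharedValue = 0;

// Producer thread:
void publish(void)
{
    sharedValue = 42;   // 1. write the data...
    OSMemoryBarrier();  // 2. ...fence so the store above completes first...
    dataReady = YES;    // 3. ...then raise the flag.
}

// Consumer thread:
void consume(void)
{
    if (dataReady) {
        OSMemoryBarrier(); // pair the barrier before reading the data
        NSLog(@"%d", sharedValue);
    }
}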
Locks
Locks are one of the most commonly used synchronization tools. You can use locks to protect a critical section of your code, which is a segment of code that only one thread at a time is allowed to execute. For example, a critical section might manipulate a particular data structure or use some resource that supports at most one client at a time. By placing a lock around this section, you exclude other threads from making changes that might affect the correctness of your code.
Table 4-1 lists some of the locks commonly used by programmers. OS X and iOS provide implementations for most of these lock types, but not all. For unsupported lock types, the description column explains why these locks are not implemented directly on the platform.
Table 4-1 Lock types
Lock | Description |
---|---|
Mutex | A mutually exclusive (or mutex) lock acts as a protective barrier around a resource. A mutex is a type of semaphore that grants access to only one thread at a time. If a mutex is in use and another thread tries to acquire it, that thread blocks until the mutex is released by its original holder. If multiple threads compete for the same mutex, only one at a time is allowed access to it. |
Recursive lock | A recursive lock is a variant of a mutex. Recursive locks allow a single thread to acquire the lock multiple times before releasing it. The other threads remain blocked until the lock is released by the owner as many times as it was acquired. Recursive locks are primarily used during recursive iterations, but can also be used in cases where multiple methods each need to acquire locks separately. |
Read-write lock | A read-write lock is also referred to as a shared-exclusive lock. This type of lock is typically used in larger-scale operations and can significantly improve performance if the protected data structure is read frequently and modified only occasionally. During normal operation, multiple readers can access the data structure simultaneously. When a thread wants to write to the structure, however, it blocks until all readers release the lock, at which point it acquires the lock and can update the structure. While a writing thread is waiting for the lock, new reader threads block until the writing thread is finished. The system supports read-write locks using POSIX threads only. For more information on how to use these locks, see the pthread man page. |
Distributed lock | Distributed locking provides mutually exclusive access at the process level. Unlike true mutex locks, distributed locks do not block a process or prevent it from running. It simply reports when the lock is busy and lets the process decide how to proceed. |
Spin lock | The spinlock polls its locking condition repeatedly until the condition becomes true. Spinlocks are most commonly used on multiprocessor systems where the expected wait time for the lock is short. In these cases, polling is often more efficient than blocking the thread, which involves context switching and updating the thread’s data structure. Because of the polling nature of spin locks, the system does not provide any implementations of spin locks, but you can easily implement them in specific situations. For information on implementing spin-locking in the kernel, see *Kernel Programming Guide*. |
Double checked lock | Double-checked locking attempts to reduce the cost of acquiring a lock by testing the locking criteria before acquiring it. Because double-checked locks can be insecure, their use is discouraged because the system does not explicitly support them. |
Note: Most types of locks also include a memory barrier to ensure that any previous load and store instructions are completed before entering a critical section.
For information on how to use locks, see Using Locks.
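As a brief, hedged illustration of the simplest case, here is a Cocoa mutex; the work inside the critical section is a placeholder:
NSLock *myLock = [[NSLock alloc] init];

[myLock lock];
// Critical section: only one thread at a time executes this code.
[myLock unlock];
Objective-C also provides the @synchronized directive, which uses the object you pass it as the lock token and adds an implicit exception handler around the protected code.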
Thread safety and signaling
When it comes to threaded applications, nothing is more frightening or confusing than the problem of handling signals. Signaling is a low-level BSD mechanism that can be used to pass information to a process or to manipulate it in some way. Some programs use signals to detect events, such as the death of a child process. The system uses signals to terminate runaway processes and communicate other types of information.
The problem with signals is not what they do, but how they behave when your application has multiple threads. In a single-threaded application, all signal handlers run on the main thread. In multithreaded applications, signals unrelated to a particular hardware error (such as an illegal instruction) are passed to the thread that is running at the time. If multiple threads are running simultaneously, the signal is passed to whichever one the system happens to choose. In other words, the signal can be passed to any thread of the application.
The first rule for implementing a signal handler in your application is to avoid making assumptions about which thread is handling the signal. If a particular thread wants to process a given signal, you need to find a way to notify the thread when the signal arrives. You cannot just assume that installing the signal handler from that thread will cause the signal to be passed to the same thread.
See the signal and sigaction man pages for more information about signals and installing signal handlers.
Conclusion
Process
- A process is an application that is running on the system
- Each process is independent, and each process runs in its own dedicated and protected memory space
- You can view the processes started on a Mac through Activity Monitor
Thread
- A thread is the basic execution unit of a process; all of a process's tasks are executed in threads
- In order for a process to execute a task, it must have threads. The process must have at least one thread
- When a program starts, one thread is started by default; this thread is called the main thread or UI thread (a quick check is shown below)
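A minimal sketch for inspecting the current thread; the log format is just illustrative:
NSLog(@"current thread: %@, is main thread: %d",
      [NSThread currentThread], [NSThread isMainThread]);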
The relationship between processes and threads
- Address space: Threads of the same process share the address space of the same process, but the address space between processes is independent.
- Resource ownership: Threads in the same process share the resources of the process, such as memory, I/O, and CPU, but the resources between the processes are independent.
- When a process crashes, it has no effect on other processes in protected mode, but when a thread crashes, the entire process dies. So multiple processes are more robust than multiple threads.
- Switching between processes consumes a large amount of resources and is inefficient, so threads are preferable to processes when frequent switching is involved. Likewise, if operations must run concurrently while sharing certain variables, only threads can be used, not processes.
- Execution process: each independent process has a program entry point, a sequential execution sequence, and a program exit. Threads cannot execute independently; they must exist within an application, with the application providing execution control for its multiple threads.
- Threads are the basic unit of processor scheduling, but processes are not.
- Threads have no address space; they are contained in the process address space
The significance of multithreading
Advantages
- Can appropriately improve a program's execution efficiency
- Can appropriately improve resource utilization (CPU, memory)
- A thread is destroyed automatically when its tasks finish
Disadvantages
- Starting a thread requires a certain amount of memory (512 KB per thread by default)
- Spawning a large number of threads occupies a large amount of memory and degrades program performance
- The more threads there are, the greater the CPU overhead of scheduling them
- Programming becomes more complex, for example communication between threads and sharing data across multiple threads
The principle of multithreading
Time slice
The CPU switches rapidly between tasks; the interval between these switches is known as a time slice
A single-core CPU can process only one thread at a time
- In other words, only one thread is executing at any one time
Multithreaded execution
- It allows the CPU to quickly switch between multiple threads
- The CPU schedules threads fast enough to create the effect of “simultaneous” execution of multiple threads
If there are many threads
- The CPU must switch among N threads, consuming a large amount of CPU resources
- Each thread is scheduled less often, which reduces the thread's execution efficiency
Thread life cycle
Once a thread is created, we can use it to carry out the tasks we assign to it. As developers, we naturally want those tasks completed quickly and well, in accordance with our business logic.
The actual task content, however, is handled by the system, and at the lowest level it is the CPU that executes the code. So what happens between a thread's creation and its eventual recycling? I have briefly summarized it below:
A thread passes through several states: ready, running, blocked, and dead. A thread's lifetime is directly tied to the pool of schedulable threads and to CPU scheduling. Once created, the thread waits to be scheduled by the CPU; this is the ready state. When the CPU schedules our thread, the thread enters the running state and executes its task. If the thread sleeps, waits on a synchronization lock, or is removed from the schedulable thread pool by a scheduling call, it enters the blocked state. When the sleep ends or the synchronization lock is acquired, the thread is added back to the schedulable thread pool and returns to the ready state, waiting to be scheduled again. This continues until the system finally reclaims the thread, which is the dead state.