process

As shown in the figure, a program corresponds to a process. A program and a process are a pair of concepts: they describe, respectively, the static form and the dynamic behavior of the same code.

Each process has its own separate address space: every process runs in its own dedicated, protected region of memory.

Process definition:

  • Narrow sense: a process is an instance of a program in execution.
  • Broad sense: a process is the running activity of a program, with some independent function, over a set of data.

In short, a program is an inanimate entity; it becomes a process only while the processor is executing it.

In addition, the Linux operating system can switch the CPU between multiple processes very quickly; this is context switching. No matter how fast the switching is, only one process runs on the CPU at any instant. For example, if the kernel needs to take process A off the CPU and put process B on, it must first save the running state of process A. If this is not process B's first run, the kernel must also restore the state saved from B's last run. This saving and restoring of state is exactly what happens when a process is switched out and switched in.

thread

A thread is the smallest unit that an operating system can schedule. It is contained in a process and is the actual unit of execution within that process. When a program starts, one thread is created by default, called the main thread (the UI thread).

A process can have multiple concurrent threads. Each thread performs a different task, and all of them share the address space of the process.

When a thread does not need the CPU but is only waiting on a resource, it gives other threads that do need the CPU a chance to run. Essentially, that is what multithreaded programming is all about.

Simply put, the purpose of multithreading is not to speed up a single computation but to improve resource utilization, so that the CPU is used as fully as possible. Each thread you create, however, consumes CPU time and memory, so multithreading should be used with restraint.

Notes

(By default, the main thread's stack is 1 MB and a child thread's stack is 512 KB.)

  1. The purpose of multithreaded programming is to make the most of CPU resources.
  2. The main advantage of multithreading is that long-running tasks can be moved to the background, which can make the program feel faster.
  3. A large number of threads hurts performance, because the operating system has to switch between them, and every thread needs its own stack memory.

multithreading

Managing thread life cycles manually is tedious and error-prone, so pthread and NSThread are avoided where possible; GCD and NSOperation are used far more often.

Scheme       Description                                              Language  Life cycle                 Frequency of use
pthread      Generic API, cross-platform, hard to use                 C         Managed by the programmer  Almost never
NSThread     Object-oriented, easy to use, operates threads directly  OC        Managed by the programmer  Occasionally
GCD          Takes full advantage of multi-core devices               C         Managed automatically      Often
NSOperation  Object-oriented wrapper over GCD, more features          OC        Managed automatically      Often
    NSThread *main = [NSThread currentThread];

    // NSThread: create a new thread only if the old one is unusable
    if (self.t == nil || self.t.isCancelled || self.t.isFinished) {
        self.t = [[NSThread alloc] initWithTarget:self selector:@selector(study:) object:@100];
        [self.t start];
    }

    // Stack sizes: child thread 512 KB, main thread 1024 KB (1 MB)
    NSLog(@"%lu====%lu", main.stackSize / 1024, self.t.stackSize / 1024);

    // Implicit multithreading: performSelectorInBackground
    [self.p performSelectorInBackground:@selector(study:) withObject:@5000];

    // GCD
    dispatch_async(dispatch_get_global_queue(0, 0), ^{ });

The life cycle of a thread

New → Ready → Running → Blocked → Dead

  • New: a thread instance is created.
  • Ready: a start message is sent to the thread and it is added to the schedulable thread pool.
  • Running: the thread's task is being scheduled on the CPU.
  • Blocked: the thread is blocked, for example by a call to sleep, a synchronization lock, or a barrier function.
  • Dead: the task has finished and there is no new task to execute, or the thread is forced to exit.

Note:

[NSThread exit]: once a thread is forcibly terminated, none of its remaining code is executed.
[thread cancel]: does not cancel the thread directly; it only sets the isCancelled flag on the thread object.

The thread pool

corePoolSize: the number of core threads

maxPoolSize: the maximum number of threads

  • If the number of running threads is less than corePoolSize, a new thread is created to execute the task; otherwise the task is put on the blocking queue.

  • If the task cannot be put on the blocking queue (the queue is full), the pool tries to create a new thread to execute it; if that also fails because the number of threads has reached maxPoolSize, the task is handed to the rejection handler (reject()).

Thread-pool saturation strategy: when the blocking queue is full and there are no idle worker threads, any further submitted task must be handled by a saturation policy. The thread pool provides four policies:

  • AbortPolicy: discards the task and throws a RejectedExecutionException, interrupting normal operation.
  • CallerRunsPolicy: runs the task on the caller's own thread.
  • DiscardOldestPolicy: discards the task that has waited longest in the queue, then retries submitting the current task (repeating this process if necessary).
  • DiscardPolicy: silently discards the task.

Thread safety

With multiple threads, several threads may access the same resource at once, which easily leads to data corruption and data-safety problems. We then need to ensure that only one thread accesses the resource at a time, and this is where locks come in. The figure compares the efficiency of the various locks.

1. NSLock

NSLock implements the most basic mutual-exclusion lock (mutex).

Because it is a mutex, when one thread acquires the lock, any other thread that tries to acquire it is suspended by the operating system until the lock is released; only then can another thread proceed, which guarantees thread safety. However, locking it twice in a row from the same thread causes a deadlock.

    NSLock *lock = [NSLock new];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [lock lock];
        NSLog(@"thread 1 locked successfully");
        sleep(2);
        [lock unlock];
        NSLog(@"thread 1 unlocked successfully");
    });

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(1);
        [lock lock]; // blocks here until thread 1 unlocks
        NSLog(@"thread 2 locked successfully");
        [lock unlock];
        NSLog(@"thread 2 unlocked successfully");
    });

2. @synchronized

  1. The obj passed to @synchronized(obj) is the unique identifier of the lock; mutual exclusion happens only between blocks that use the same identifier. Thread 3 below uses self as its identifier, so thread 3 is not blocked.

  2. If the obj inside @synchronized(obj){} is freed or set to nil afterwards, the lock still works; but if obj is nil to begin with, the locking is lost.

  3. The @synchronized directive does not require you to create a lock object explicitly. As a precaution, however, a @synchronized block implicitly protects the code by adding an exception handler that automatically releases the mutex when an exception is thrown. If you do not want the extra overhead of this implicit exception handling, consider using a lock object instead.

    NSObject *obj = [NSObject new];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        @synchronized (obj) {
            NSLog(@"thread 1 starts");
            sleep(3);
            NSLog(@"thread 1 ends");
        }
    });

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(1);
        @synchronized (obj) { // same identifier as thread 1, so it waits
            NSLog(@"thread 2");
        }
    });

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        @synchronized (self) { // different identifier, so it is not blocked
            NSLog(@"thread 3");
        }
    });

The output is as follows:

  • thread 1 starts
  • thread 3
  • thread 1 ends
  • thread 2

3. dispatch_semaphore

dispatch_semaphore uses a semaphore mechanism to lock: waiting for a signal and sending a signal. Three functions are involved: one creates the semaphore (dispatch_semaphore_create), one waits for a signal (dispatch_semaphore_wait), and one sends a signal (dispatch_semaphore_signal).

The mechanism of dispatch_semaphore is that once one thread has taken the semaphore, the other threads must wait for that thread to send a signal before they can proceed.

    // The value passed to dispatch_semaphore_create must be >= 0; otherwise it returns NULL
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
    dispatch_time_t overTime = dispatch_time(DISPATCH_TIME_NOW, 6 * NSEC_PER_SEC);

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        dispatch_semaphore_wait(semaphore, overTime);
        NSLog(@"thread 1 starts");
        sleep(5);
        NSLog(@"thread 1 ends");
        dispatch_semaphore_signal(semaphore);
    });

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(1);
        // blocks until thread 1 signals, or the 6-second timeout expires
        dispatch_semaphore_wait(semaphore, overTime);
        NSLog(@"thread 2 starts");
        dispatch_semaphore_signal(semaphore);
    });

Thread and Runloop relationship

  1. Runloops and threads correspond one to one: one thread has one root runloop.

    "Root", because runloops can be nested, but each thread can have only one root runloop; the thread-to-runloop mapping is stored in a global dictionary.

  2. A runloop is used to manage a thread. When a thread's runloop is started, the thread goes to sleep after finishing its task and is woken up again when a new task arrives.

  3. A runloop is created the first time it is requested for a thread and destroyed when the thread ends.

  4. For the main thread, the runloop is created by default as soon as the program starts.

  5. For child threads, runloops are created lazily, only when we first request them. So be careful when using a timer on a child thread: make sure the child thread's runloop has been created and is running, otherwise the timer will never fire.