1. Processes and threads

Process:

· 1. A process is a running instance of a program, executing against its own set of data; it is the basic unit by which the operating system allocates resources.

· 2. A process is an application running in the system — the execution of a program in progress. You can think of it as one app running on a phone.

· 3. Processes are independent of one another. Each process runs in its own dedicated, protected memory space and owns all the resources it needs to run on its own.

Thread:

· 1. A thread is the smallest unit of program execution flow; it is an entity within a process.

· 2. A process must have at least one thread in order to execute tasks. When an application starts, the system starts one thread by default: the main thread.

The relationship between processes and threads

· 1. A thread is the execution unit of a process; all of a process's tasks are executed in threads.

· 2. A thread is the smallest unit of CPU scheduling (while the process is the smallest unit of resource allocation).

· 3. One program can correspond to multiple processes, and one process can contain multiple threads — but it must contain at least one.

· 4. Threads within the same process share that process's resources.

2. Multiple processes and multiple threads

Multiple processes

Open the Mac's Activity Monitor and you can see many processes running at the same time.

· A process is one execution of a program on a computer. When you run a program, you start a process. A program is static (dead); a process is dynamic (alive).

· Processes can be divided into system processes and user processes. Any process that carries out functions of the operating system itself is a system process; together they are the operating system in its running state. All processes started by users are user processes. The process is the unit by which the operating system allocates resources.

· Processes are further divided into threads: a process contains several smaller units that can run independently. When two or more processes are allowed to run at the same time on one computer system, this is called multiprocessing.

Multithreading

1. A CPU core can process only one thread at a time — only one thread is ever executing on it. "Concurrent" execution of multiple threads means the CPU switches (schedules) rapidly between them; if the switching is fast enough, it creates the illusion that the threads are executing simultaneously.

2. If the number of threads is very large, the CPU has to schedule among N threads, which consumes a large amount of CPU time, and each thread gets scheduled less often — so each thread's execution efficiency drops.

3. Advantages of multithreading:

It can improve the efficiency of program execution, and it can improve the utilization of resources (CPU and memory).

4. Disadvantages of multithreading:

Starting a thread costs memory (by default 1 MB of stack for the main thread and 512 KB for each child thread). Opening a large number of threads consumes a lot of memory and degrades program performance, and the more threads there are, the more overhead the CPU spends scheduling them.

Program design also becomes more complex: for example, communication between threads and sharing data across multiple threads.

3. Tasks and queues

Task

A task is the operation to be performed — the piece of code that executes in a thread. In GCD, tasks are placed in blocks. There are two ways to submit a task: sync and async.

Sync: adds a task to the specified queue synchronously. Until the added task finishes executing, the calling thread waits — that is, it blocks. Sync can only execute the task on the current thread (the current thread, which is not necessarily the main thread); it cannot start new threads.

Async: the call returns immediately and the calling thread continues without waiting — it does not block. Async can execute tasks on new threads and has the ability (though not the obligation) to start them. An asynchronous task runs on a child thread unless it is added to the main queue.
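A minimal Swift sketch of the two submission styles (the queue label and the `record` helper are illustrative names, not from the original post, whose code samples did not survive extraction):

```swift
import Dispatch
import Foundation

let queue = DispatchQueue(label: "demo.serial")   // a private serial queue
let lock = NSLock()
var log: [String] = []
func record(_ s: String) { lock.lock(); log.append(s); lock.unlock() }

queue.sync { record("sync task") }    // sync: the caller waits, so this runs
record("after sync")                  // before this line, guaranteed

queue.async { record("async task") }  // async: returns immediately
record("after async")                 // may run before or after "async task"

queue.sync {}                         // drain the queue before inspecting log
```

Note that only the sync pair has a guaranteed order; the async task races with the line after it, which is exactly the point of async.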

Queue

Dispatch queue: the queue here is the waiting queue for task execution — the structure that stores tasks. A queue is a special linear structure that follows the FIFO (first in, first out) principle: new tasks are always appended to the tail, and tasks are always taken from the head; each time a task is taken, it is removed from the queue. GCD has two kinds of queues, serial and concurrent, and both follow FIFO. The main differences between them are the order of execution and the number of threads they may use.

· Serial Dispatch Queue:

Only one task executes at a time; the next task starts only after the current one completes. (At most one thread is started; tasks run one after another.) The main queue is a serial queue on the main thread that the system creates for us automatically.

· Concurrent Dispatch Queue:

Allows multiple tasks to execute concurrently. (Multiple threads can be started and tasks can run simultaneously.) The concurrency of a concurrent queue takes effect only with the asynchronous function (dispatch_async).


4. Multithreading in iOS

There are three main APIs: NSThread, NSOperationQueue, and GCD.

1. NSThread: a lightweight multithreading technique

An NSThread is a child thread we create by hand. If we create it with an initializer, we must start it ourselves; if we create it with a detach-style convenience constructor, it starts automatically. Because we create the thread manually, we also have to manage it — not only starting it, but also reclaiming its resources after use.

performSelector…: any NSObject (or instance of a subclass) can hop onto a child thread or the main thread by calling one of these methods; the child threads they create are just another face of NSThread.

There is no compile-time check that the selector is valid; if it is not, the compiler only emits a warning.

Note that the afterDelay variants create an NSTimer internally and add it to the current thread's runloop. If the current thread has no running runloop, the call does nothing; on a child thread you must start the runloop yourself (and mind the order of the calls — schedule first, then run the loop).

performSelector:withObject:, by contrast, simply sends the message at once; no timer is involved, so it executes on a child thread even without that thread's runloop running.

2. GCD compared with NSOperationQueue

Let's first be clear about the relationship between NSOperationQueue and GCD:

NSOperationQueue is a high-level, object-oriented abstraction built on top of GCD, which is a low-level C API.

1. GCD executes more efficiently, and since the tasks in a queue are expressed as blocks, it is a lightweight data structure that is easier to write.

2. GCD supports only FIFO queues, while NSOperationQueue can adjust execution by setting a maximum concurrency count, setting priorities, and adding dependencies.

3. NSOperationQueue can even set dependencies across queues, while GCD can only control execution order by using serial queues or by adding barrier tasks (dispatch_barrier_async) to a queue.

4. NSOperationQueue supports KVO because it is object-oriented: you can observe whether an operation isExecuting, isFinished, or isCancelled.

· In real-world development, most of the time only asynchronous operations are needed, without especially complex thread-relationship management, so Apple's optimized, fast GCD is the first choice.

· If you need transaction-like behavior between asynchronous operations — ordering, dependencies, as in multithreaded concurrent downloads — GCD requires extra code to implement it, while NSOperationQueue supports it out of the box.

· With both GCD and NSOperationQueue we deal with tasks and queues, not with threads directly. Thread management really isn't our concern: the system does a good job of creating, scheduling, and releasing threads. NSThread, on the other hand, requires us to manage the thread's life cycle ourselves, along with the performance costs of thread synchronization and locking.

5. GCD queues

Of iOS's multithreading options (GCD, NSOperation, NSThread, and others), this section looks at GCD's queues.

The GCD has three queue types:

Main queue: obtained with dispatch_get_main_queue(); a serial queue bound to the main thread.

Global queues: concurrent queues shared by the whole process. There are four global queues, with high, default, low, and background priority. Call dispatch_get_global_queue and pass in a priority to get one.

Custom queues: Queues created using the function dispatch_queue_create
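The three queue types above map directly onto Swift's DispatchQueue overlay of the C API (the labels below are illustrative; the original post's code was not preserved):

```swift
import Dispatch

let main = DispatchQueue.main                              // serial queue bound to the main thread
let global = DispatchQueue.global(qos: .userInitiated)     // shared concurrent queue at a given QoS
let serial = DispatchQueue(label: "com.example.serial")    // custom queue: serial by default
let concurrent = DispatchQueue(label: "com.example.concurrent",
                               attributes: .concurrent)    // custom concurrent queue
```

In Swift the legacy priorities are expressed as quality-of-service classes passed to `global(qos:)`.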

6. Deadlock

A deadlock here is a circular wait caused by queues.

1. A common example of deadlock is main queue synchronization

Using main-queue sync on the main thread means appending a task to the main queue from code that is itself running on the main queue.

A sync call demands immediate execution: once the task is put in the main queue it should run right away, and viewDidLoad will not proceed until the task has finished.

But viewDidLoad and the task are both in the main queue, and by the queue's first-in-first-out rule the task must wait for viewDidLoad to finish before it can run. viewDidLoad and the task thus wait on each other in a loop, producing a deadlock.

To avoid this deadlock, change the synchronous call to asynchronous (dispatch_async), or change dispatch_get_main_queue to some other serial or concurrent queue.
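A Swift sketch of the situation. Since a command-line sketch can't easily drive the main run loop, a private serial queue stands in for the main queue here; the deadlock mechanics are identical (all names are illustrative):

```swift
import Dispatch

let serial = DispatchQueue(label: "stand-in-for-main")   // plays the role of the main queue
var log: [String] = []

serial.sync {
    log.append("viewDidLoad begins")
    // Deadlock — do NOT uncomment: this sync call would wait for the queue,
    // which is waiting for this very block to return.
    // serial.sync { log.append("task") }

    serial.async { log.append("task") }   // the fix: async does not wait
    log.append("viewDidLoad ends")
}
serial.sync {}   // drain, so "task" has run before we look at log
```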

2. Nesting a sync call inside a task running on the same serial queue also causes a deadlock:

It deadlocks whether the outer dispatch is synchronous or asynchronous.

That is because the inner task and the outer task are on the same serialQueue and the inner call is synchronous — the same situation as the main-queue example above. The solution is also the same: change the inner sync to dispatch_async, or move it to some other serial or concurrent queue.

With two different queues there is no deadlock — and the blocks on serialQueue and serialQueue2 may even run on the same thread, since a sync call usually executes its block on the calling thread.
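The deadlock-free variant, sketched in Swift (queue labels are illustrative; the original code was lost):

```swift
import Dispatch

let serialQueue = DispatchQueue(label: "serialQueue")
let serialQueue2 = DispatchQueue(label: "serialQueue2")
var log: [String] = []

serialQueue.sync {
    log.append("outer")
    // The inner sync targets a DIFFERENT queue, so nothing waits on itself.
    serialQueue2.sync { log.append("inner") }
}
```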

7. Execution order of GCD tasks

1. Serial queue: async first, then sync

The print order is 1, 3, 2, 4, 5.

The reason is:

Task 1 runs immediately on the current thread, so 1 prints first.

Next, task 2 is added to the serial queue. Since task 2 is asynchronous, it does not block the thread; execution continues and 3 prints.

Then task 4 is added to the same serial queue. By the queue's first-in-first-out rule, task 4 can execute only after task 2 finishes. And because task 4 is synchronous, it blocks the current thread: only after task 4 completes does execution continue and print 5.

So the final order is 1, 3, 2, 4, 5.

Here task 4 executes on the main thread, while task 2 executes on a child thread.

If task 4 were added to a different serial queue or to a concurrent queue, tasks 2 and 4 would execute in no fixed order (add more tasks to see the effect).
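The walkthrough above can be reconstructed as a Swift sketch (the original sample was an image; `mark` is an illustrative thread-safe logger). Note that strictly only 1-first, 2-before-4, and 4-before-5 are guaranteed; 3 racing ahead of 2 is the typical outcome, not a contractual one:

```swift
import Dispatch
import Foundation

let serial = DispatchQueue(label: "serial")
let lock = NSLock()
var order: [Int] = []
func mark(_ n: Int) { lock.lock(); order.append(n); lock.unlock() }

mark(1)                          // task 1
serial.async { mark(2) }         // task 2: async, does not block
mark(3)                          // task 3 (usually before 2, not guaranteed)
serial.sync { mark(4) }          // task 4: same serial queue, so it must wait
                                 // for task 2 — and it blocks the caller
mark(5)                          // task 5
```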

2. performSelector

The test method here is not executed. The reason: performSelector:withObject:afterDelay: submits the task to the current thread's runloop, but threads created by GCD do not have a runloop running by default, so the call silently fails.

If you change dispatch_get_global_queue to the main queue, it executes, because the main queue runs on the main thread, whose runloop is running by default. (Alternatively, change dispatch_async to sync: a sync block runs on the current thread, and if the current thread is the main thread, the test method will be executed.)

8. dispatch_barrier_async

1. Q: How do you implement multiple-readers/single-writer with GCD?

Multiple-readers/single-writer means the data can be read by several readers at the same time, but cannot be read or written while a write is in progress; likewise, while one writer is writing, no other writer may write. In other words, readers are concurrent with each other, while a writer is mutually exclusive with both readers and other writers.

The write here goes through a barrier ("fence").

This can be done with the barrier (fence) functions, dispatch_barrier_sync / dispatch_barrier_async.

2. dispatch_barrier_sync

The queue passed to dispatch_barrier_sync must be the same custom concurrent queue as the tasks that need to be fenced off; otherwise the barrier has no effect (in particular, on a global queue a barrier behaves like an ordinary dispatch).

From the output, tasks 0-9 are out of order among themselves, as are tasks 10-19, because they are asynchronous on a concurrent queue. But thanks to the barrier, the overall order is fixed: tasks 0-9 execute first, then the barrier block, then tasks 10-19.

· dispatch_barrier_sync: submits a barrier block for execution and waits until that block completes.

· dispatch_barrier_async: submits a barrier block for asynchronous execution and returns immediately.

The difference between dispatch_barrier_sync and dispatch_barrier_async is whether the call blocks the current thread. With dispatch_barrier_async, a print statement placed after the call runs first, and then tasks 0-9 and the barrier block execute; with dispatch_barrier_sync, that print runs only after tasks 0-9 and the barrier block have completed.
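The fence behavior described above, sketched in Swift (names such as `record` are illustrative; the original sample was lost in extraction):

```swift
import Dispatch
import Foundation

let queue = DispatchQueue(label: "concurrent", attributes: .concurrent)
let lock = NSLock()
var events: [String] = []
func record(_ s: String) { lock.lock(); events.append(s); lock.unlock() }

for i in 0..<10 { queue.async { record("task\(i)") } }
queue.async(flags: .barrier) { record("barrier") }   // the fence
for i in 10..<20 { queue.async { record("task\(i)") } }

queue.sync(flags: .barrier) {}   // wait until everything submitted above is done
```

Tasks 0-9 finish in some interleaved order, then the barrier runs alone, then tasks 10-19 run.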

3. A multiple-readers/single-writer store can then be designed like this:
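A minimal Swift sketch of the pattern (the class and its names are illustrative — the original post's code was an image): reads use sync on a concurrent queue so they run in parallel, and the write is a barrier so it excludes everything else.

```swift
import Dispatch

final class ReadWriteStore {
    private let queue = DispatchQueue(label: "rw.queue", attributes: .concurrent)
    private var storage: [String: Int] = [:]

    func read(_ key: String) -> Int? {
        queue.sync { storage[key] }            // reads run concurrently with each other
    }
    func write(_ key: String, _ value: Int) {
        queue.async(flags: .barrier) {         // the write is a barrier:
            self.storage[key] = value          // exclusive with all reads and writes
        }
    }
}

let store = ReadWriteStore()
store.write("answer", 42)
```

A read submitted after the barrier write waits for it, so `store.read("answer")` sees the new value.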

9. dispatch_group_async

Scenario: execute a follow-up task only after n time-consuming concurrent tasks have all completed — for example, refresh the UI page only after n network requests have finished.
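A Swift sketch of the dispatch-group pattern (the three tasks stand in for network requests; names are illustrative):

```swift
import Dispatch
import Foundation

let group = DispatchGroup()
let lock = NSLock()
var finished: [Int] = []

for i in 0..<3 {   // stand-ins for three concurrent network requests
    DispatchQueue.global().async(group: group) {
        lock.lock(); finished.append(i); lock.unlock()
    }
}

// In an app you would use group.notify(queue: .main) { /* refresh the UI */ }.
// A script can simply block until all three tasks are done:
group.wait()
```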

10. Dispatch Semaphore

A semaphore in GCD is a dispatch semaphore: a signal that holds a count.

Dispatch Semaphore provides three functions:

1. dispatch_semaphore_create: creates a semaphore and initializes its count. 2. dispatch_semaphore_wait: decrements the count by one; if the count is zero it waits (blocking the thread) until signaled, otherwise it proceeds normally. 3. dispatch_semaphore_signal: increments the count by one, waking a waiting thread if there is one.

In real development, Dispatch Semaphore is mainly used to:

· Keep threads synchronized, converting an asynchronous task into a synchronous one

· Ensure thread safety, by locking the thread

1. Keeping threads synchronized:

dispatch_semaphore_wait blocks the current thread; once dispatch_semaphore_signal is called, the current thread resumes execution.
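A Swift sketch of turning an asynchronous task into a synchronous one with a semaphore (the `result = 42` line stands in for real async work):

```swift
import Dispatch

let sema = DispatchSemaphore(value: 0)   // count starts at 0
var result: Int?

DispatchQueue.global().async {
    result = 42          // simulate time-consuming asynchronous work
    sema.signal()        // count 0 -> 1, releasing the waiter
}

sema.wait()              // blocks the current thread until signaled
// `result` is guaranteed to be set at this point
```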

2. Ensuring thread safety — using the semaphore as a lock:

For thread safety you can think of dispatch_semaphore_wait as locking and dispatch_semaphore_signal as unlocking. First create the semaphore as a global variable — and notice that it is initialized with a count of 1.

Then call asyncTask asynchronously and concurrently.

The output then shows tasks 1 through 100 executing strictly in sequence, with no two tasks running at the same time.

Here’s why:

asyncTask runs concurrently on child threads, but the first task to reach dispatch_semaphore_wait decrements the count from 1 to 0 and proceeds; every other task in the concurrent queue finds the count at 0 and blocks. When the running task finishes, it calls dispatch_semaphore_signal to bring the count back to 1, which releases the next waiting task, and so on — achieving the effect of a lock.
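The semaphore-as-lock pattern, sketched in Swift (100 concurrent increments of a shared counter; names are illustrative):

```swift
import Dispatch

let lock = DispatchSemaphore(value: 1)   // count 1: one task inside at a time
let group = DispatchGroup()
var counter = 0

for _ in 0..<100 {
    DispatchQueue.global().async(group: group) {
        lock.wait()      // "lock": 1 -> 0; every other task blocks here
        counter += 1     // critical section, exactly one thread at a time
        lock.signal()    // "unlock": 0 -> 1, releasing the next task
    }
}
group.wait()             // all 100 increments have completed here
```

Without the semaphore, the unsynchronized `counter += 1` would be a data race and increments could be lost.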

11. The delay function (dispatch_after)

dispatch_after lets us delay the execution of a task added to a queue. Note that it does not "run the block after the specified time"; rather, it appends the block to the dispatch queue at the specified time.

Internally it uses dispatch_time_t to manage time, not an NSTimer.

So, unlike performSelector:afterDelay:, calling it from a child thread works without caring whether a runloop is running.
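A Swift sketch of the delay (the 0.2 s interval is arbitrary). The semaphore is only there so the script can observe that the delayed block ran on a runloop-less global queue:

```swift
import Dispatch
import Foundation

let sema = DispatchSemaphore(value: 0)
let start = Date()

// Swift spelling of dispatch_after: append the block to the queue
// (a global queue here — no runloop needed) 0.2 s from now.
DispatchQueue.global().asyncAfter(deadline: .now() + 0.2) {
    sema.signal()
}

sema.wait()
let elapsed = Date().timeIntervalSince(start)   // at least ~0.2 s
```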

12. Implementing a singleton with dispatch_once
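In Objective-C this is the classic dispatch_once pattern; in Swift, dispatch_once is no longer exposed, and a `static let` gives the same once-only, thread-safe initialization. A sketch (the `Manager` class is illustrative):

```swift
import Foundation

final class Manager {
    static let shared = Manager()   // initialized lazily, exactly once, thread-safe
    private init() {}               // forbid construction from outside
}
```

Every access to `Manager.shared` yields the same instance, no matter which thread asks first.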

13. Advantages of NSOperationQueue

NSOperation and NSOperationQueue are a multithreading solution provided by Apple. They are in fact a higher-level, fully object-oriented encapsulation of GCD — easier to use than GCD, with more readable code.

1. Dependencies can be added between tasks, making it easy to control execution order

2. You can set the priority of the operation

3. Task execution state can be controlled: isReady, isExecuting, isFinished, isCancelled

If you only override NSOperation's main method, the underlying system manages the task's execution and completion state (and its exit) for you; if you override the start method, you take over managing the task's state yourself.

The queue uses KVO to detect isFinished == YES and then removes the NSOperation from the queue.

4. You can set the maximum concurrency

14. NSOperation and NSOperationQueue

Operation:

An operation is the thing to be performed — in other words, the piece of code executed in a thread.

In GCD that code goes in a block; with NSOperation you encapsulate it using the subclasses NSInvocationOperation or NSBlockOperation, or a custom subclass.

Operation Queues:

The queue here is the operation queue — the queue that stores operations. It differs from GCD's dispatch queues, which follow strict FIFO (first in, first out).

An operation added to an NSOperationQueue first enters the ready state (readiness depends on the dependencies between operations), and the start order — not the finish order — of the ready operations is determined by their relative priority, which is a property of the operation object itself.

The operation queue controls whether execution is concurrent or serial by setting the maximum concurrent operation count (maxConcurrentOperationCount).

NSOperationQueue provides two kinds of queues: the main queue and custom queues. The main queue runs on the main thread; custom queues execute in the background.
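A Swift sketch of the features above — a dependency plus maxConcurrentOperationCount — using the OperationQueue/BlockOperation overlay (operation names are illustrative):

```swift
import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1   // 1 = serial behavior; >1 = concurrent

var log: [String] = []
let opA = BlockOperation { log.append("A") }
let opB = BlockOperation { log.append("B") }
opB.addDependency(opA)                  // B stays un-ready until A finishes

// B is added first, but the dependency forces A to start (and finish) first.
queue.addOperations([opB, opA], waitUntilFinished: true)
```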

15. NSThread + RunLoop: resident threads

One of the most common uses of NSThread in development is implementing a resident thread.

Creating a child thread costs CPU, and a thread is released once its task completes — it cannot be reused. If child threads are created frequently, that cost is paid over and over. So how do we create a thread that can keep taking work? By creating a resident thread.

First, since a resident thread should live for the lifetime of the app, we can use a GCD-implemented singleton to hold the NSThread.

Is holding the thread enough to keep it from being destroyed? No: nothing prints, which shows the test method was not called — once the thread's entry method returns, the thread is done. You can use a runloop to make the thread resident: start a runloop (with a source attached) in the thread's entry point, and a subsequent performSelector call on that thread then does print.
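A Swift sketch of the idea (all names are illustrative): the singleton keeps a strong reference to the thread, and a run loop — kept alive here by a far-future repeating timer standing in for the usual port source — keeps the thread from exiting after its entry block returns.

```swift
import Foundation

final class ResidentThread {
    static let shared = ResidentThread()   // singleton holds the thread
    let thread: Thread
    private init() {
        thread = Thread {
            // Without a source/timer the run loop would exit immediately
            // and the thread would die. The timer keeps it alive.
            let keepAlive = Timer(timeInterval: 1.0e9, repeats: true) { _ in }
            RunLoop.current.add(keepAlive, forMode: .default)
            RunLoop.current.run()          // never returns
        }
        thread.name = "resident-thread"
        thread.start()
    }
}

_ = ResidentThread.shared                  // touch the singleton to start the thread
Thread.sleep(forTimeInterval: 0.3)         // give it a moment to spin up
```

The thread is still executing after the sleep, because its run loop never returns.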

16. Spin locks and mutexes

Spin lock:

A lock used to protect a resource shared by multiple threads. Unlike a mutex, a thread trying to acquire a spin lock busy-waits, repeatedly checking whether the lock is available. While the previous thread holds the lock, the next thread waits without sleeping; as soon as the lock is released, the waiting thread proceeds immediately.

In a multi-CPU environment, for critical sections that are held only briefly, using a spin lock instead of a general mutex can often improve performance.

Mutex (exclusive lock):

While the previous thread's task is unfinished (the lock is held), the next thread goes to sleep and waits; when the previous thread's task finishes, the sleeping thread is automatically woken up and executes.

Conclusion:

A spin lock busy-waits: when the resource is locked, the calling thread does not sleep but loops, polling, until the locked resource releases the lock.

A mutex sleeps: when the resource is locked, the calling thread goes to sleep and the CPU can schedule other threads to work; when the locked resource releases the lock, the dormant thread is woken up.

The advantages and disadvantages:

The advantage of a spin lock is that, because it never puts the caller to sleep, it avoids time-consuming operations such as thread scheduling and context switching. So if the lock can be acquired within a short time, a spin lock is much more efficient than a mutex.

The disadvantage is that a spin lock consumes the CPU the whole time it waits — it keeps running, spinning, without holding the lock. If the lock cannot be acquired quickly, this clearly wastes CPU. Spin locks also do not support recursive acquisition.

Spin locks: atomic (property), OSSpinLock (deprecated since iOS 10 because of priority-inversion problems). dispatch_semaphore_t is often grouped here for its performance, although strictly a blocked dispatch_semaphore_wait sleeps rather than spins.

Mutex: pthread_mutex, @synchronized, NSLock, NSConditionLock, NSCondition, NSRecursiveLock
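One concrete contrast with the "no recursion" limitation of spin locks: NSRecursiveLock, a mutex from the list above, lets the thread that already holds the lock acquire it again. A sketch (function and variable names are illustrative):

```swift
import Foundation

let lock = NSRecursiveLock()
var trace: [Int] = []

func countDown(_ n: Int) {
    lock.lock()
    defer { lock.unlock() }
    trace.append(n)
    if n > 0 { countDown(n - 1) }   // re-enters the same lock safely
}

countDown(3)
```

With a plain NSLock (or a spin lock), the recursive call would deadlock on the lock it already holds.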

Well, that’s all for this time. Thanks for watching!