Threads and Processes

A thread is the smallest unit of computation within a process. It is an entity inside a process and the basic unit that the system independently schedules and dispatches. A thread owns no system resources of its own, only the few resources necessary while running, but it shares all of the resources owned by its process with the other threads belonging to that process. One thread can create and destroy another, and multiple threads in the same process can execute concurrently.

Process: Each process has its own code and data space (process context). Switching between processes is costly. A process contains 1 to N threads. (A process is the smallest unit of resource allocation.)

Thread: Threads of the same process share its code and data space, but each thread has its own run stack and program counter (PC), so thread switching overhead is small. (A thread is the smallest unit of CPU scheduling.)
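The distinction above can be sketched in plain Java (used here purely as an illustration; the class and names are made up): two threads share the process's data space, while each keeps private variables on its own stack.

```java
// Illustrative sketch (plain Java): threads share the process's data space,
// but each thread has its own stack for local variables.
public class SharedVsLocal {
    static int shared = 0; // process-wide data, visible to every thread

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            int local = 0; // lives on this thread's own stack
            for (int i = 0; i < 1000; i++) {
                local++;
                synchronized (SharedVsLocal.class) { shared++; } // shared, so lock it
            }
            System.out.println(Thread.currentThread().getName() + " local=" + local);
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("shared=" + shared); // both threads updated the same field
    }
}
```

Each thread reports `local=1000`, while `shared` ends at 2000 because both threads updated the same process-wide field.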

Multithreading

Multithreading is a technique that enables multiple threads to execute concurrently, implemented in software, hardware, or both. Computers with hardware multithreading support can execute more than one thread at the same time, improving overall processing performance. Systems with this capability include symmetric multiprocessors, multi-core processors, and chip-level multiprocessing or simultaneous multithreading processors.

Advantages

  • Threads can be used to put tasks in a long-running program into the background
  • The user interface can be more engaging, so that if a user clicks a button to trigger the processing of certain events, a progress bar can pop up to show the progress of the processing
  • The program may run faster
  • Threads are well suited to tasks that involve waiting, such as user input, file reading and writing, and network data sending and receiving; while a thread waits, valuable resources such as memory can be freed up for other work

Disadvantages

  • If there are a large number of threads, performance can be affected because the operating system needs to switch between them
  • More threads require more memory space
  • Threads can introduce more bugs into your program, so use them with care
  • The interruption of a thread needs to be considered for its impact on the running of the program
  • Data is typically shared between multiple threads, so thread deadlock must be prevented

The life cycle of the thread

Five states of the life cycle

  • New: Instantiates the thread object

  • Ready: A start message is sent to the thread object; it is added to the schedulable thread pool and waits for the CPU to schedule it.

  • Run: The CPU schedules and executes threads from the schedulable thread pool. The state may switch back and forth between ready and running until the thread completes execution. Transitions between ready and running are handled by the CPU scheduler and cannot be controlled by the programmer.

  • Block: When a predetermined condition is met, a thread can be blocked from executing with sleep or a lock, e.g. sleepForTimeInterval:, sleepUntilDate:, or @synchronized(self) (a mutex).

  • Death: Normal death: the thread completes execution. Abnormal death: the thread terminates itself from within when some condition is met, or the thread object is terminated from another thread such as the main thread.

  • There are also thread exit and cancel:

  • [NSThread exit]: once a thread is forcibly terminated this way, none of its subsequent code executes.

  • [thread cancel]: does not actually terminate the thread; it only sets the thread object's isCancelled flag, which the thread's own code must check and respond to.
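As an analogy to the five states above, plain Java exposes a thread's life-cycle state via Thread.getState(); a minimal sketch (class name and timings are illustrative):

```java
// Illustrative analogy in plain Java: Thread.getState() exposes the life cycle.
public class LifeCycleDemo {
    public static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        });
        Thread.State created = t.getState(); // NEW: instantiated, not yet started
        t.start();                           // ready: now schedulable by the CPU
        Thread.sleep(50);                    // give it time to enter sleep
        Thread.State blocked = t.getState(); // TIMED_WAITING: blocked in sleep
        t.join();                            // wait for normal death
        Thread.State dead = t.getState();    // TERMINATED
        return new Thread.State[] { created, blocked, dead };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observe()) System.out.println(s);
    }
}
```

The new, blocked, and death states map to NEW, TIMED_WAITING, and TERMINATED; ready and running are both folded into Java's RUNNABLE, mirroring the point above that the ready/running transition belongs to the scheduler, not the programmer.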

The multithreaded implementation of iOS

Scheduling of thread pools

Saturation policies

When submitting a task to the thread pool fails, that is, when the number of currently submitted tasks exceeds the sum of maximumPoolSize and the workQueue capacity, the task is handed to the saturation policy. There are four saturation policies: AbortPolicy, CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy.

  • AbortPolicy: the abort policy, and the default saturation policy. It throws an unchecked RejectedExecutionException; the caller can catch this exception and write whatever handling code is needed.
  • DiscardPolicy: the discard policy. When a newly submitted task cannot be saved to the queue for execution, this policy quietly discards it.
  • DiscardOldestPolicy: the discard-oldest policy. It discards the task that would be executed next and tries to resubmit the new task. (If the work queue is a priority queue, "discard oldest" discards the highest-priority task, so it is best not to combine this policy with a priority queue.)
  • CallerRunsPolicy: the caller-runs policy. It implements a throttling mechanism that neither discards the task nor throws an exception; instead it pushes some work back to the caller, reducing the flow of new tasks. Rather than executing the newly submitted task on a pool thread, it executes the task on the thread that called execute. With a bounded queue and caller-runs saturation, once all pool threads are occupied and the work queue is full, the next task is executed on the main thread during the call to execute. Because executing that task takes time, the main thread cannot submit any tasks for at least a while, giving the worker threads time to finish the tasks in progress. During this time the main thread also does not call accept, so incoming requests queue up at the TCP layer rather than in the application's queue. If the overload persists, the TCP layer eventually finds its request queue full and starts discarding requests as well. The overload thus spreads gradually outward, from the thread pool to the work queue to the application to the TCP layer, and finally to the client, giving the server graceful performance degradation under high load.

The RejectedExecutionHandler interface is implemented by the above four rejection policies.
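The saturation behavior can be sketched with java.util.concurrent.ThreadPoolExecutor; the pool sizes and sleep times below are illustrative assumptions:

```java
import java.util.concurrent.*;

// Illustrative sketch: a pool with maximumPoolSize 1 and a queue of capacity 1,
// so the third task submitted overflows and is handed to the saturation policy.
public class SaturationDemo {
    public static boolean isRejected(RejectedExecutionHandler policy) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1,                        // corePoolSize and maximumPoolSize
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1), // bounded workQueue
                policy);
        Runnable slow = () -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        };
        boolean rejected = false;
        try {
            pool.execute(slow); // occupies the single worker thread
            pool.execute(slow); // fills the work queue
            pool.execute(slow); // overflow: the saturation policy decides
        } catch (RejectedExecutionException e) {
            rejected = true;    // thrown by AbortPolicy only
        }
        pool.shutdownNow();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(isRejected(new ThreadPoolExecutor.AbortPolicy()));   // true
        System.out.println(isRejected(new ThreadPoolExecutor.DiscardPolicy())); // false
    }
}
```

With AbortPolicy the third execute throws; with DiscardPolicy it is silently dropped, and with CallerRunsPolicy it would instead run on the submitting thread.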

Related Interview Questions

What factors affect the speed of a task?

  • CPU
  • Task complexity
  • Task priority
  • Thread state

Personal understanding, using a bank-counter analogy: 1. How many counters are open, and how efficiently do the tellers work? 2. How complicated is each customer's business? 3. Are there VIP customers who get served first? 4. When all the counters are occupied, does the lobby manager dispatch extra staff?

Spin locks and mutexes

Spin locks: atomic, OSSpinLock, dispatch_semaphore_t
Mutexes: pthread_mutex, @synchronized, NSLock, NSConditionLock, NSCondition, NSRecursiveLock

  • Spin lock: when it detects that another thread holds the lock, the current thread keeps querying (busy-waiting) until the lock is released, which has a high performance cost
  • Mutex: when it detects that another thread holds the lock, the current thread sleeps (ready state) and waits to be woken up to continue executing
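The contrast can be sketched in Java (an analogy, since the APIs listed above are iOS ones): a hand-rolled spin lock busy-waits on an atomic flag, while a mutex (ReentrantLock) makes a contending thread sleep. Class and method names are illustrative.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative Java analogy: a hand-rolled spin lock busy-waits on an atomic
// flag, while a mutex (ReentrantLock) makes the waiting thread sleep.
public class Locks {
    private static final AtomicBoolean SPIN = new AtomicBoolean(false);
    private static final ReentrantLock MUTEX = new ReentrantLock();
    static int spinCounter = 0;
    static int mutexCounter = 0;

    static void spinLocked(Runnable body) {
        while (!SPIN.compareAndSet(false, true)) { /* busy-wait: keeps querying */ }
        try { body.run(); } finally { SPIN.set(false); }
    }

    static void mutexLocked(Runnable body) {
        MUTEX.lock(); // a contending thread sleeps here instead of spinning
        try { body.run(); } finally { MUTEX.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) spinLocked(() -> spinCounter++);
            for (int i = 0; i < 1000; i++) mutexLocked(() -> mutexCounter++);
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(spinCounter + " " + mutexCounter); // 2000 2000
    }
}
```

Both locks protect the increment correctly; the difference is what a waiting thread does, so spinning tends to win only when the critical section is very short.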

atomic vs. nonatomic

atomic

  • Is the default
  • The set and get operations on the same object are performed sequentially
  • Slower, because it must ensure each get/set operation completes in full
  • Thread-safe for the accessors, but locking the property consumes system resources

    Using atomic does not guarantee absolute thread safety; operations that must be absolutely thread-safe need to be handled with a higher-level mechanism such as OSSpinLock or @synchronized.

nonatomic

  • Not the default
  • Faster
  • If two threads access the same property, unexpected results can occur
  • Not thread-safe; suitable for memory-constrained mobile devices
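A Java analogy for the distinction above (synchronized accessors play the role of atomic, plain accessors play nonatomic; names are illustrative):

```java
// Illustrative Java analogy: synchronized accessors play the role of atomic
// (each get/set completes as a unit); plain accessors play nonatomic.
public class PropertyDemo {
    private long atomicValue;     // guarded by synchronized accessors
    private long nonatomicValue;  // plain accessors, faster, no guarantee

    public synchronized void setAtomicValue(long v) { atomicValue = v; }
    public synchronized long getAtomicValue() { return atomicValue; }

    public void setNonatomicValue(long v) { nonatomicValue = v; }
    public long getNonatomicValue() { return nonatomicValue; }

    public static void main(String[] args) {
        PropertyDemo p = new PropertyDemo();
        p.setAtomicValue(42);
        p.setNonatomicValue(42);
        System.out.println(p.getAtomicValue() + " " + p.getNonatomicValue());
    }
}
```

As with atomic in Objective-C, this only protects individual get and set calls; a read-modify-write sequence such as incrementing the property still needs an external lock.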