Why multithreading

The CPU, memory, and I/O devices differ greatly in speed. To make full use of the CPU and bridge these speed gaps, computer architecture, operating systems, and compilers have each contributed, mainly in three ways:

  • The CPU added caches to bridge the speed gap with memory — this introduces the visibility problem.
  • The operating system added processes and threads to time-share the CPU, balancing the speed gap between the CPU and I/O devices — this introduces the atomicity problem.
  • The compiler reorders instructions so that caches can be used more efficiently — this introduces the ordering problem.

Processes and threads

process

A process is a running instance of a program operating on a data set (the dynamic execution of a program), and it is the basic unit of system resource allocation and scheduling. In Java, each running JVM is a process.

thread

A thread is one execution path within a process and the basic unit of CPU scheduling. A thread cannot exist outside a process; every process contains at least one thread, and multiple threads in the same process share the process's resources.

Resources are allocated per process, but CPU time is allocated per thread.

daemon

Java threads are divided into daemon threads and user threads. Daemon threads serve user threads in the background — the GC thread, for example — and the JVM exits once only daemon threads remain.
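As a minimal sketch of the distinction, a thread marked as a daemon before `start()` does not keep the JVM alive, even if it never finishes:

```java
// Sketch: a daemon thread does not prevent JVM exit.
// setDaemon(true) must be called before start().
public class DaemonDemo {
    public static boolean startDaemon() {
        Thread t = new Thread(() -> {
            while (true) { /* endless background work, like a GC thread */ }
        });
        t.setDaemon(true);   // mark as daemon BEFORE starting
        t.start();
        return t.isDaemon();
    }

    public static void main(String[] args) {
        System.out.println(startDaemon()); // the JVM still exits afterwards
    }
}
```

If `setDaemon(true)` were omitted, the infinite loop above would keep the process running forever.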

Thread state

NEW            created, not yet started
RUNNABLE       ready to run; executing or waiting to be scheduled on a CPU
BLOCKED        blocked, waiting to acquire a monitor lock
WAITING        waiting indefinitely for another thread's action
TIMED_WAITING  waiting for a bounded time
TERMINATED     finished execution

Note that "running" is not a separate state in java.lang.Thread.State: a thread that is actually executing still reports RUNNABLE.

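The two ends of the lifecycle can be observed directly with `getState()`; a rough sketch:

```java
// Sketch: a thread is NEW before start() and TERMINATED after it finishes.
public class StateDemo {
    public static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> { /* finishes immediately */ });
        Thread.State before = t.getState(); // NEW: created but not started
        t.start();
        t.join();                           // wait until the thread ends
        Thread.State after = t.getState();  // TERMINATED
        return new Thread.State[] { before, after };
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.State[] s = observe();
        System.out.println(s[0] + " -> " + s[1]);
    }
}
```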
Thread operation
start      starts the thread: a new thread is created and scheduled to run
run        the code the thread executes; calling run() directly just runs it on the current thread
stop       terminates the thread; deprecated, because stopping a thread abruptly can leave data inconsistent
interrupt  delivers an interrupt notification to the thread; how (and whether) to respond is up to the thread itself
wait       an Object method; the calling thread waits until another thread calls notify on the same object — an effective means of inter-thread communication
notify     an Object method; wakes up a thread waiting on the object
suspend    deprecated; suspends the thread without releasing the resources it holds
resume     deprecated; the counterpart of suspend
join       waits for the target thread to finish executing before continuing
yield      a static method; the current thread cedes the CPU and then rejoins the competition for CPU time
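The cooperative nature of interrupt, and the use of join, can be sketched like this — the worker decides for itself how to react to the interrupt request:

```java
// Sketch: interrupt() only *requests* cancellation; the target thread
// chooses how to respond (here, by polling its interrupted flag).
// join() then waits for the worker to finish.
public class InterruptDemo {
    public static boolean runAndInterrupt() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // simulate work until asked to stop
            }
        });
        worker.start();
        worker.interrupt();       // deliver the interrupt notification
        worker.join();            // wait for the worker to exit
        return !worker.isAlive(); // true: the worker honored the request
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndInterrupt());
    }
}
```

Unlike the deprecated `stop()`, this leaves the worker in control of when (and in what state) it exits.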

Synchronous and asynchronous

Both terms usually describe a method call. A synchronous call must wait for the called method to return before the caller can continue. An asynchronous call returns immediately so the caller can go on with other work; the caller is later notified of the result, typically from another thread.
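One way to sketch the difference in Java (this uses `CompletableFuture`, one of several possible mechanisms — plain threads with callbacks work too):

```java
import java.util.concurrent.CompletableFuture;

// Sketch: synchronous vs asynchronous calls. The async variant returns
// a future immediately; the actual computation runs on another thread.
public class AsyncDemo {
    static int compute() {            // stands in for a slow method
        return 42;
    }

    public static int syncCall() {
        return compute();             // caller blocks until the value is ready
    }

    public static int asyncCall() throws Exception {
        CompletableFuture<Integer> f =
            CompletableFuture.supplyAsync(AsyncDemo::compute); // returns at once
        // ... the caller could do other work here ...
        return f.get();               // later, collect (or be notified of) the result
    }

    public static void main(String[] args) throws Exception {
        System.out.println(syncCall() + " " + asyncCall());
    }
}
```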

Parallelism and concurrency

parallel

Multiple CPUs (or cores) execute tasks at the same instant.

concurrent

Tasks take turns executing on the same CPU in rapid alternation, so they appear to run simultaneously.

Concurrency levels

blocking

A thread cannot proceed until another thread releases the resource it is waiting for.
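A small sketch of blocking with a monitor lock — while one thread holds the lock, a second thread that tries to enter the same `synchronized` block reports the BLOCKED state:

```java
// Sketch: a thread contending for a held monitor lock is BLOCKED
// until the owner releases it.
public class BlockingDemo {
    public static Thread.State observeBlocked() throws InterruptedException {
        final Object lock = new Object();
        Thread waiter;
        Thread.State seen;
        synchronized (lock) {                 // this thread holds the lock
            waiter = new Thread(() -> {
                synchronized (lock) { }       // must wait for the lock
            });
            waiter.start();
            do {
                Thread.sleep(1);              // give the waiter time to block
                seen = waiter.getState();
            } while (seen != Thread.State.BLOCKED);
        }                                     // lock released here
        waiter.join();                        // the waiter can now finish
        return seen;
    }
}
```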

Starvation-free

If threads have priorities, the scheduler favors the higher-priority threads. A low-priority thread may then be preempted indefinitely and never obtain the resource — this is starvation. A schedule is starvation-free if every thread eventually gets to make progress.

Obstruction-free

The weakest non-blocking guarantee: threads may enter the shared region freely, detect conflicts with other threads, and roll back their changes when a conflict is found.

This can be implemented with a consistency marker, such as a version stamp that is checked before committing.

Lock-free

All threads may attempt to access the shared data, and at least one thread is guaranteed to complete in a finite number of steps — typically implemented with CAS (compare-and-swap) retry loops.
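A minimal lock-free sketch using the classic CAS retry loop (here via `AtomicInteger.compareAndSet`; real code would usually just call `incrementAndGet`, which does the same thing internally):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a lock-free counter. Each thread retries a CAS until it
// succeeds; no thread ever blocks on a lock, and whenever a CAS fails
// it is because some other thread's CAS succeeded (system-wide progress).
public class LockFreeDemo {
    public static int increment(AtomicInteger counter) {
        int current;
        do {
            current = counter.get();                            // read
        } while (!counter.compareAndSet(current, current + 1)); // CAS, retry on failure
        return current + 1;
    }

    public static int run(int threads, int perThread) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) increment(counter);
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter.get();   // no increments are lost
    }
}
```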

Wait-free

Stronger than lock-free: not just one thread, but every thread must complete its operation in a finite number of steps, which rules out starvation.

It can be further subdivided, for example into bounded wait-free and wait-free guarantees that are independent of the number of threads.

Three elements of concurrency

atomicity

Atomicity: an operation (or group of operations) either executes completely, without interruption by any other thread, or does not execute at all.

An atomic operation is the basic indivisible unit of code execution.
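The classic illustration is that `count++` is actually three steps (read, add, write) and therefore not atomic; a sketch comparing it with a `synchronized` increment:

```java
// Sketch: count++ is read-modify-write and not atomic, so concurrent
// increments can be lost. Wrapping the increment in synchronized makes
// the three steps one indivisible unit.
public class AtomicityDemo {
    private int unsafe = 0;
    private int safe = 0;

    void unsafeInc() { unsafe++; }            // not atomic: lost updates possible
    synchronized void safeInc() { safe++; }   // atomic with respect to this object

    public static int[] run(int threads, int perThread) throws InterruptedException {
        AtomicityDemo d = new AtomicityDemo();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) { d.unsafeInc(); d.safeInc(); }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        // `unsafe` is often smaller than threads * perThread; `safe` never is
        return new int[] { d.unsafe, d.safe };
    }
}
```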

ordering

Orderliness: the program appears to execute in the order the code is written.

In practice the compiler and CPU may reorder instructions, as long as the result looks the same to the single thread executing them — other threads, however, can observe the reordering.
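A well-known case where reordering bites is double-checked locking: without `volatile`, the write that publishes the reference could be reordered before the constructor's writes, letting another thread see a half-built object. A sketch of the correct form:

```java
// Sketch: double-checked locking. The volatile on `instance` forbids
// reordering the publication of the reference ahead of the constructor's
// writes, so readers never observe a partially constructed object.
public class Singleton {
    private static volatile Singleton instance;
    private final int value;

    private Singleton() { this.value = 42; }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // safe publication via volatile
                }
            }
        }
        return instance;
    }

    public int getValue() { return value; }
}
```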

visibility

Visibility: a change that one thread makes to a shared variable is immediately seen by other threads. If thread 1 modifies variable i but thread 2 does not immediately see the new value, visibility is broken.
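In Java, `volatile` is the lightest tool for establishing visibility; a sketch of the common flag-and-payload pattern:

```java
// Sketch: a volatile flag. The volatile write to `ready` happens-before
// the reader's volatile read, which also makes the earlier plain write
// to `payload` visible. Without volatile, the reader could spin forever
// on a stale cached value of `ready`.
public class VisibilityDemo {
    private volatile boolean ready = false;
    private int payload = 0;

    public static int run() throws InterruptedException {
        VisibilityDemo d = new VisibilityDemo();
        Thread reader = new Thread(() -> {
            while (!d.ready) { }    // spins until the write becomes visible
            // here, d.payload is guaranteed to be 42 as well
        });
        reader.start();
        d.payload = 42;             // plain write...
        d.ready = true;             // ...published by the volatile write
        reader.join();
        return d.payload;
    }
}
```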

Locks

classification

Lock optimizations

Biased lock

Once a thread acquires the lock, the lock enters biased mode; when the same thread requests the lock again, no synchronization operation is needed.

This suits scenarios with little lock contention, where the same thread repeatedly acquires the lock.

Lightweight lock

The object header (mark word) holds a pointer into the stack frame of the thread that owns the lock, so the JVM can determine whether a thread holds the object lock without falling back to a heavyweight OS mutex.

spinlocks

To avoid actually suspending a thread at the operating-system level, the JVM lets a thread that fails to acquire a lock spin in an empty loop a few times first, on the assumption that the lock will be released shortly.

Lock elimination

At JIT compilation time, the runtime scans the execution context and removes locks on objects that cannot possibly be contended — for example, a lock on an object that never escapes a single method.

Thread pool

Repeatedly creating and destroying threads adds CPU and memory overhead.

A thread pool keeps a set of frequently used worker threads alive and reuses them for incoming tasks, reducing that overhead.
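A minimal sketch with `ExecutorService`: many tasks are submitted, but only a fixed set of worker threads is ever created and reused:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: a fixed thread pool reuses 4 worker threads for all tasks,
// instead of creating one thread per task.
public class PoolDemo {
    public static int sumSquares(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // 4 reusable workers
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            final int k = i;
            futures.add(pool.submit(() -> k * k));  // submit tasks, not threads
        }
        int total = 0;
        for (Future<Integer> f : futures) total += f.get(); // collect results
        pool.shutdown();                            // no new tasks; workers wind down
        return total;
    }
}
```

`shutdown()` lets already-submitted tasks finish; forgetting it keeps the non-daemon worker threads alive and the JVM running.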