Java’s concurrency model
Shared memory model
In the shared-memory concurrency model, threads share the program’s common state and communicate implicitly by reading and writing that state in memory. Here, “memory” refers to main memory, which is in fact only a small portion of physical memory.
The Java memory model abstraction
1. Which data in Java memory is thread-safe and which is not
Not thread-safe:
In Java, all instance fields, static fields, and array elements are stored on the heap. This data is shared between threads, so it is subject to memory-visibility issues.
Thread-safe:
Local variables, method parameters, and exception-handler parameters live on each thread’s virtual machine stack. They are never shared between threads, so they have no memory-visibility issues.
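The distinction above can be sketched in a few lines of Java (class and method names here are illustrative, not from the original article): the static field lives on the shared heap and is racy without synchronization, while the local variable is private to each calling thread.

```java
public class SharedVsLocal {
    static int sharedCounter = 0; // heap: shared by all threads, racy without synchronization

    static int localOnly(int n) {
        int local = 0;            // stack: private to the calling thread, always thread-safe
        for (int i = 0; i < n; i++) {
            local++;
        }
        return local;             // always returns exactly n
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                sharedCounter++;  // unsynchronized read-modify-write: a data race
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // sharedCounter may be anywhere from 10_000 to 20_000 because of lost updates
        System.out.println("shared = " + sharedCounter + ", local = " + localOnly(10_000));
    }
}
```

Because of the race, the shared result varies from run to run, while the local computation is deterministic.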
2. The nature of communication between threads
The essence of communication between threads is this: the JMM (Java Memory Model) determines when one thread’s writes to shared variables become visible to other threads.
As the figure above shows, threads communicate through main memory: when a thread operates on shared data, it first copies that data into its own local memory (each thread has its own working memory) and works on the copy.
Message-passing process (ignoring data safety):
Thread 1 loads the shared variable A from main memory into its own local memory and modifies it, for example A = 1. The modified value of A is then flushed back to main memory, after which thread 2 reads the shared variable A from main memory into its own local memory and operates on it. The whole exchange is controlled by the JMM, which governs how main memory interacts with each thread’s local memory to provide visibility of shared data.
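A minimal sketch of this handoff (names are illustrative): thread 1 writes the shared variable, and declaring the field `volatile` forces the write to be flushed to main memory so a later reader sees it.

```java
public class MainMemoryHandoff {
    static volatile int a = 0; // volatile: writes are flushed to main memory

    static int handoff() {
        Thread writer = new Thread(() -> a = 1); // thread 1: A = 1, then flush
        writer.start();
        try {
            writer.join();                       // wait for the write to complete
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return a;                                // thread 2 (here: the caller) reads A
    }

    public static void main(String[] args) {
        System.out.println("a = " + handoff());  // prints "a = 1"
    }
}
```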
Reordering
To improve performance, compilers and processors reorder program instructions during execution.
1. Categories of reordering
Compiler reordering
The compiler may optimize the order in which statements execute, as long as the semantics of the single-threaded program do not change.
Instruction-level parallel reordering
When there is no data dependency, the processor may change the execution order of the corresponding machine instructions.
Memory-system reordering
Because the processor uses caches and read/write buffers, load and store operations can appear to execute out of order.
2. The reordering process
All three types of reordering can cause memory-visibility problems in concurrent programs. The JMM’s compiler-reordering rules forbid certain kinds of compiler reordering, and its processor-reordering rules require the Java compiler to insert specific memory-barrier instructions into the generated instruction sequence to forbid certain kinds of processor reordering.
3. Processor reordering and write buffers
To avoid stalling while waiting for writes to reach memory, a write buffer is placed between the processor and memory. The processor writes data into the buffer and, at some later point, flushes it to memory in a batch.
Advantages:
1. The processor avoids stalling on writes, improving execution efficiency
2. Batching writes reduces traffic on the memory bus
Disadvantages:
Each processor’s write buffer is visible only to that processor, so the observed order of memory operations may not match program order, as in the following scenario:
In this scenario, processors A and B may each read A = 0 and B = 0 from memory and assign them to X and Y before either processor’s write buffer has been flushed back to memory. Only afterwards are the buffered writes flushed, so the final result (X = 0, Y = 0) differs from what program order suggests, because a write takes effect only once it has been flushed from the buffer into memory.
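The scenario above can be sketched at the Java level (names are illustrative): each thread writes one shared variable and then reads the other. Because the fields are plain (non-volatile), the JMM permits the surprising outcome X = 0 and Y = 0, although any particular run on a particular JVM and CPU may or may not exhibit it.

```java
public class StoreBufferDemo {
    static int a, b, x, y; // plain shared fields: no ordering guarantees

    static int[] trial() {
        a = 0; b = 0; x = 0; y = 0;
        Thread t1 = new Thread(() -> { a = 1; x = b; }); // "processor A"
        Thread t2 = new Thread(() -> { b = 1; y = a; }); // "processor B"
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        int bothZero = 0;
        for (int i = 0; i < 10_000; i++) {
            int[] r = trial();
            if (r[0] == 0 && r[1] == 0) bothZero++;
        }
        // bothZero > 0 would demonstrate the reordering; 0 does not disprove it
        System.out.println("(X=0, Y=0) observed " + bothZero + " times");
    }
}
```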
The JMM defines the following eight operations (lock, unlock, read, load, use, assign, store, write) to implement the protocol for interaction between main memory and working memory, i.e. how a variable is copied from main memory into working memory and how it is synchronized back from working memory to main memory.
To copy a variable from main memory into working memory, read and load must be performed in that order; to synchronize a variable from working memory back to main memory, store and write must be performed in that order. The Java memory model only requires that these operations be performed in order, not that they be performed consecutively.
Operation execution process diagram:
Synchronization rule analysis
1. A thread may not synchronize data from working memory back to main memory without a preceding assign operation.
2. A new variable can only be created in main memory. Working memory may not use a variable that has not been initialized by load or assign: before use or store is performed on a variable, an assign or load must have been performed on it.
3. A variable may be locked by only one thread at a time, although the same thread may perform lock on it repeatedly. lock and unlock must come in pairs.
4. Performing lock on a variable clears its value from working memory; before the execution engine can use it, a load or assign must be performed again to initialize the value.
5. A variable that has not been locked by a lock operation may not be unlocked, nor may a thread unlock a variable locked by another thread.
6. Before performing unlock on a variable, it must be synchronized back to main memory (store and write).
4. Memory-barrier instructions
To address visibility errors caused by processor reordering, the Java compiler inserts memory-barrier instructions at appropriate points in the generated instruction sequence to prohibit particular kinds of processor reordering.
Memory barrier instruction
5. Happens-before
The happens-before principle helps ensure the atomicity, visibility, and ordering of program execution; it is the basis for judging whether a data race exists and whether code is thread-safe.
In the JMM, if the result of one operation must be visible to another, there must be a happens-before relationship between the two operations (the two operations may or may not be in the same thread).
Content of rules:
Program order rule
Within a single thread, code appears to execute in program order, respecting control structures such as branches and loops; that is, semantic serialization must be preserved within a thread.
Monitor lock rule
An unlock of a monitor happens-before every subsequent lock of that same monitor; that is, if a lock is acquired after it was released, the acquisition is ordered after the release.
Volatile variable rule
A write to a volatile variable happens-before every subsequent read of it, which guarantees the visibility of volatile variables. In simple terms, every access to a volatile variable forces the thread to read its value from main memory, and every write forces the latest value to be flushed to main memory, so at any time different threads always see the variable’s latest value.
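A minimal sketch of the volatile rule (names are illustrative): because the write to `ready` happens-before the read that sees it as true, the plain field `payload` written earlier is also guaranteed to be visible, a common safe-publication pattern.

```java
public class VolatilePublish {
    static int payload;              // plain field, published via the volatile flag
    static volatile boolean ready;   // volatile write happens-before a read that sees true

    static int await() {
        while (!ready) {
            Thread.onSpinWait();     // spin until the flag becomes visible
        }
        return payload;              // guaranteed to see the value written before ready = true
    }

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            payload = 42;            // ordinary write...
            ready = true;            // ...published by the volatile write
        });
        writer.start();
        System.out.println("payload = " + await()); // prints "payload = 42"
        writer.join();
    }
}
```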
Thread start rule
A call to Thread.start() happens-before every action in the started thread.
Thread termination rule
Every action in a thread happens-before any other thread detects that it has terminated; the Thread.join() method waits for the target thread to terminate. Suppose thread B modifies a shared variable before terminating: once thread A successfully returns from B’s join() method, B’s changes to the shared variable are visible to A.
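The termination rule can be sketched directly (names are illustrative): thread B’s write to a plain shared field is visible to the joining thread, with no volatile or lock needed, because join() establishes the happens-before edge.

```java
public class JoinVisibility {
    static int shared = 0; // plain field; made visible to the joiner via join()

    static int modifyAndJoin() {
        Thread b = new Thread(() -> shared = 7); // thread B modifies the shared variable
        b.start();
        try {
            b.join();       // thread A waits for B to terminate
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return shared;      // guaranteed to see 7
    }

    public static void main(String[] args) {
        System.out.println("shared = " + modifyAndJoin()); // prints "shared = 7"
    }
}
```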
Thread interruption rule
A call to a thread’s interrupt() method happens-before the interrupted thread’s code detects that the interrupt has occurred.
Object finalization rule
The completion of an object’s construction happens-before the start of its finalize() method.
Transitivity rule
If operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.
Note that a happens-before relationship between two actions does not mean the first must physically execute before the second; it only guarantees that the result of the first action is visible to the second and that the first is ordered before the second in the happens-before order.
6. Data dependency
Two operations are data-dependent when the result of the earlier one affects the later one. The compiler and processor do not change the execution order of two data-dependent operations.
Note: data dependency applies only to instruction sequences executed on a single processor or operations within a single thread; the compiler and processor do not consider dependencies across different processors or different threads.
7. As-if-serial
Reordering must not change the result of a single-threaded program, so the compiler and processor never reorder data-dependent operations within a single processor or thread. Conversely, operations with no data dependency between them may be reordered.
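A tiny sketch of as-if-serial (names are illustrative): the two independent writes below may be reordered with each other by the compiler or CPU, but the multiplication depends on both of them, so the single-threaded result can never change.

```java
public class AsIfSerial {
    static int area() {
        int a = 3;      // independent of b: may be reordered with the next line
        int b = 4;      // independent of a: may be reordered with the previous line
        return a * b;   // data-dependent on both a and b: cannot move before them
    }

    public static void main(String[] args) {
        System.out.println("area = " + area()); // prints "area = 12" regardless of reordering
    }
}
```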
Data races and sequential consistency
Data races arise in multithreaded programs.
1. Data races
A data race occurs when one thread writes a variable, another thread reads that same variable, and the write and the read are not ordered by synchronization.
2. Sequential consistency
If a multithreaded program is correctly synchronized, its execution is sequentially consistent (just as it would be in the single-threaded case) and the result is exactly what you expect.
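This guarantee can be sketched with a lock-guarded counter (names are illustrative): guarding the shared counter with a monitor removes the data race, so two threads performing 10,000 increments each always produce exactly 20,000, matching the sequentially consistent result.

```java
public class SynchronizedCounter {
    static int counter = 0;
    static final Object lock = new Object();

    static int run(int perThread) {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                synchronized (lock) {
                    counter++;   // lock/unlock pairs order and publish every write
                }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;          // always exactly 2 * perThread
    }

    public static void main(String[] args) {
        System.out.println("counter = " + run(10_000)); // prints "counter = 20000"
    }
}
```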
3. The sequentially consistent memory model
Features:
All operations within a thread execute in program order, and every operation is atomic and immediately visible to all other threads.
Concept:
Conceptually, sequential consistency has a single global memory, and at most one thread can be connected to it at any point in time. In a multithreaded scenario, all memory reads and writes are serialized.
Example:
Suppose there are three concurrent threads A, B, and C. Thread A has two operations, A1 and A2, executed in program order A1->A2. Thread B has three operations, B1, B2, and B3, executed in order B1->B2->B3. Thread C has two operations, C1 and C2, executed in program order C1->C2.
Scenario analysis:
Scenario 1: synchronized (concurrency-safe) execution order
A1->A2->B1->B2->B3->C1->C2
Scenario 2: unsynchronized (concurrency-unsafe) execution order
A1->B1->A2->C1->B2->B3->C2
Conclusion:
Even in the unsynchronized scenario, each thread’s own operations remain in program order, although operations from different threads interleave. Moreover, all threads observe one and the same overall execution order, here A1->B1->A2->C1->B2->B3->C2, because in the sequentially consistent memory model every operation must be immediately visible to every thread.
This is not the case in the JMM: an unsynchronized program not only executes out of order overall, but different threads may also observe different execution orders.
For example, if thread A writes A = 2 to its local memory but has not yet flushed it to main memory, the value has changed from A’s point of view, but threads B and C do not see the change and consider A’s write not to have happened; only after thread A flushes its working-memory value back to main memory do B and C see it. With correct synchronization, however, a JMM execution produces the same result as a sequentially consistent execution, although the instruction-level execution order is not necessarily the same, because the JMM permits instruction reordering.