Instruction reordering
During compilation, the code is optimized (that is, reordered) to improve execution efficiency. Instruction reordering changes the execution order of statements without changing the result of a single-threaded program.
For example, the statement singleton = new Singleton(); is executed in the following order:
- Allocate memory
- Initialize the object
- Point the object reference to the allocated memory
However, after reordering, the order may become:
- Allocate memory
- Point the object reference to the allocated memory
- Initialize the object
In a concurrent setting, instruction reordering can let another thread pick up the object before it has been initialized, or while it is only partially initialized, so that thread reads null or incorrect values.
In other words, under concurrent execution, instruction reordering introduces ambiguity: depending on the actual execution order, different results may be observed.
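As an illustration, consider the classic double-checked locking singleton. This is only a sketch (the Singleton class and instance field are assumptions for illustration, not code from this article), but it shows where the reordering bites: without volatile, the reference may be published before the constructor has finished, so another thread can observe a partially constructed object.

```java
// Hypothetical sketch: without volatile, the write to "instance" may be reordered so that
// the reference is published before the constructor has finished running.
public class Singleton {
    private static Singleton instance;          // note: deliberately NOT volatile

    private Singleton() { /* initialize fields */ }

    public static Singleton getInstance() {
        if (instance == null) {                 // another thread may see a non-null but
            synchronized (Singleton.class) {    // partially initialized object here
                if (instance == null) {
                    instance = new Singleton(); // allocate -> publish -> initialize (reordered)
                }
            }
        }
        return instance;
    }
}
```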
There are two ways to address this problem: memory barriers (which directly forbid certain reorderings) and the happens-before rules (which only permit reorderings that respect certain constraints).
happens-before
Happens-before does not mean that the previous action must physically execute before the latter; it means that the result of the previous action must be visible to the latter action. If reordering would break this visibility requirement, the two actions are not allowed to be reordered.
A few of the happens-before rules (a small sketch of the start and termination rules follows the list):
- Program order rule: within a single thread, code executes as if in order. No matter how instructions are reordered, the observable result matches the order in which the code was written.
- Monitor lock rule: an unlock of a lock happens-before every subsequent lock of that same lock, so the thread that acquires the lock can see the results of the operations performed by the thread that previously released it.
- Volatile variable rule: if a thread writes a volatile variable and another thread subsequently reads it, the results of the writes before that point are visible to the read.
- Thread start rule: if thread A starts thread B, the modifications A made to shared variables before calling B.start() are visible to B.
- Thread termination rule: if thread A waits for thread B to terminate (for example via B.join()), the modifications B made to shared variables before terminating are visible to A.
- Interruption rule: a call to thread B's interrupt() method happens-before B detects the interruption (for example via Thread.interrupted() or isInterrupted()).
- Transitivity: if A happens-before B and B happens-before C, then A happens-before C (that is, C can see the result of A).
- Object finalization rule: the completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method.
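As a small sketch of the thread start and termination rules (the class and variable names are illustrative), the following program relies only on start() and join() for visibility:

```java
// Illustrative sketch of the thread start and thread termination rules.
public class StartJoinVisibility {
    private static int shared = 0;   // plain (non-volatile) field: visibility comes from happens-before

    public static void main(String[] args) throws InterruptedException {
        shared = 42;                             // written before start()

        Thread b = new Thread(() -> {
            // Thread start rule: this read is guaranteed to see 42.
            System.out.println("B sees " + shared);
            shared = 100;                        // written before B terminates
        });

        b.start();
        b.join();                                // Thread termination rule: B's writes become visible to A

        System.out.println("A sees " + shared);  // guaranteed to print 100
    }
}
```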
How the rules are implemented
The JVM already implements these rules for us; let's look at how.
How volatile works
Writing a volatile shared variable is translated into an instruction with the CPU's lock prefix, which does two things (see the sketch after this list):
- It writes the data in the current processor's cache line back to system memory.
- It invalidates that address in other processors' caches, so other threads read the fresh value from system memory.
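A minimal sketch of the visibility this provides (the class and field names are made up for the example): the volatile write to ready is flushed to memory, and everything written before it becomes visible to the thread that reads ready.

```java
// Illustrative sketch of the volatile visibility guarantee.
public class VolatileFlag {
    private static volatile boolean ready = false;
    private static int payload = 0;              // ordinary, non-volatile field

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // spin until the volatile read observes the write below
            }
            // Volatile variable rule: everything written before "ready = true" is visible here.
            System.out.println("payload = " + payload);  // prints 42
        });
        reader.start();

        payload = 42;   // ordinary write, ordered before the volatile write
        ready = true;   // volatile write: flushed to memory and visible to the reader
    }
}
```

The same mechanism is why declaring the instance field in the earlier double-checked locking sketch as volatile fixes it: the volatile write forbids publishing the reference before the constructor's writes have completed.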
How synchronized works
Synchronized is implemented with the monitorenter and monitorexit bytecode instructions (a sketch follows this list):
- After compilation, the monitorenter instruction is inserted at the start of the synchronized block and monitorexit at the end of the block.
- Each monitorenter must be matched with a monitorexit.
- Every object has a monitor associated with it; when a thread holds the monitor, the object is locked.
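A minimal sketch (the class and method names are illustrative): compiling the class below and inspecting it with javap -c shows a monitorenter at the start of each synchronized block and a matching monitorexit at its end, plus an extra monitorexit on the exception-handling path.

```java
// Illustrative sketch of a synchronized block and the monitor it acquires.
public class SyncCounter {
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) {   // monitorenter: acquire the monitor associated with "lock"
            count++;
        }                       // monitorexit: release the monitor
    }

    public int get() {
        synchronized (lock) {   // Monitor lock rule: sees the results of prior increments
            return count;
        }
    }
}
```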
How final is implemented
- Reordering rules for writing final fields require the compiler to insert a StoreStore barrier after the write to the final field and before the constructor returns, so the final field is written before the object reference can be published.
- Reordering rules for reading final fields require the compiler to insert a LoadLoad barrier before the read of the final field, as illustrated in the sketch below.
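A minimal sketch of the final field guarantee (the Holder class and field names are hypothetical): the StoreStore barrier ensures x is written before the reference is published, and the LoadLoad barrier ensures a reader that sees a non-null reference also sees the initialized value.

```java
// Illustrative sketch of the final-field reordering rules.
public class FinalFieldExample {
    static final class Holder {
        final int x;
        Holder() {
            x = 42;            // write to a final field
        }                      // StoreStore barrier: x is committed before the reference can escape
    }

    static Holder holder;      // published without any synchronization

    static void writer() {
        holder = new Holder(); // publish the reference
    }

    static void reader() {
        Holder h = holder;     // LoadLoad barrier before reading the final field below
        if (h != null) {
            System.out.println(h.x); // guaranteed to print 42, never 0
        }
    }
}
```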