Before explaining this, let’s distinguish between two concepts: the Java memory model and the JVM memory model.
- Java Memory Model (JMM): the goal of the JMM is to solve the read/write consistency problem for data shared between Java threads. Through happens-before semantics it defines the data-access rules for Java programs and corrects the cache inconsistencies caused by read/write conflicts. It is a logical abstraction with no corresponding memory entity. (This is the focus of this article.)
- JVM memory model: refers to the runtime data areas of the JVM during execution. See the links for the actual memory regions.
As mentioned above, the JMM is just a logical abstraction with no corresponding runtime memory area, so don’t mix the two up, or you’ll go crazy.
Problems encountered in Java concurrent programming
Two of the most common problems we solve in multithreaded programming:
- How multiple threads operate on the same variable;
- How to handle synchronization in multithreading.
Java is a cross-platform language. Different processor architectures have their own caches and their own ways of coordinating with main memory, and each architecture provides its own cache-coherence guarantees. To preserve the cross-platform nature of the language, Java proposed the JMM specification in [JSR-133] to hide this multi-platform complexity. The JMM determines when one thread’s writes to a shared variable become visible to another thread. The JMM defines an abstract relationship:
Shared variables between threads are stored in main memory, and each thread has a private working memory that stores a copy of the shared variables it uses (similar to the relationship between a CPU cache and main memory; in fact, working memory subsumes the concept of a cache). Working memory is an abstract concept and does not exist as real memory.
As shown in the figure:

[Figure: threads interacting with main memory through their private working memory]
```java
public class Test {
    private int i = 1;

    // thread A sets the variable
    public void setVar() {
        i = 2;
    }

    // thread B gets the variable
    public int getVar() {
        return i;
    }
}
```
The following steps are required for thread A’s modification to become visible to thread B:
1. (setVar) Thread A modifies the copy of the variable in its local working memory A and flushes it to main memory.
2. (getVar) Thread B reads the variable’s value from main memory and updates the copy in its local working memory B.
Visibility and atomicity
If the change made by thread A in the two steps above is correctly observed by thread B (that is, a modification made by one thread is visible to another thread), we call this visibility. For the code above to behave correctly, the two sub-operations in step 1 must not be split apart: the order must be thread A writes locally -> thread A flushes to main memory -> thread B reads. If the order instead becomes thread A writes locally -> thread B reads -> thread A flushes, thread B reads a stale value. We therefore need to ensure that setVar executes as an indivisible whole, which is called atomicity.
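One common way to get both visibility and atomicity for the example above is to guard the shared variable with a monitor lock. This is a minimal sketch under that assumption; the class name `SyncVar` is my own, and the `synchronized` variant is an illustration rather than code from the original article:

```java
public class SyncVar {
    private int i = 1;

    // synchronized gives atomicity (the write cannot be split) and
    // visibility (an unlock happens-before the next lock of the same monitor)
    public synchronized void setVar() {
        i = 2;
    }

    public synchronized int getVar() {
        return i;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncVar v = new SyncVar();
        Thread a = new Thread(v::setVar); // thread A performs the write
        a.start();
        a.join();                         // join also establishes happens-before
        System.out.println(v.getVar());   // main thread reliably sees 2
    }
}
```

A `volatile` field would also solve the visibility part here, but `synchronized` additionally covers compound actions such as `i++`.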
Instruction reordering
The sole aim of both the processor and the JVM is to make the program run as efficiently as possible while ensuring the results are correct. To improve efficiency, the compiler may reorder instructions, the processor may reorder them during execution, and writes sitting in the processor cache may be flushed back to main memory out of order. Consider the following code (from The Art of Java Concurrency Programming):
```java
class ReorderExample {
    int a = 0;
    boolean flag = false;

    public void writer() {
        a = 1;             // 1
        flag = true;       // 2
    }

    public void reader() {
        if (flag) {        // 3
            int i = a * a; // 4
        }
    }
}
```
The flag variable marks whether variable a has been written. Assume two threads A and B: thread A first executes writer(), and then thread B executes reader(). When thread B performs operation 4, it may not see the latest value of a. See the figure below:
[Figure: with operations 1 and 2 reordered, thread B can observe flag == true while a is still 0]
During execution, operations 1 and 2 may be reordered, that is, executed in reverse order. This is harmless in a single thread, but can cause errors across multiple threads.
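Declaring flag as volatile forbids this reordering: the normal write to a happens-before the volatile write to flag, which happens-before a subsequent volatile read of flag. This is a sketch of the fix; the `volatile` variant and the `reader()` return value are my own illustration, not code from the original book:

```java
class SafeReorderExample {
    int a = 0;
    volatile boolean flag = false; // volatile forbids reordering 1 and 2

    public void writer() {
        a = 1;             // 1: ordered before the volatile write below
        flag = true;       // 2: volatile write
    }

    public int reader() {
        if (flag) {        // 3: volatile read
            return a * a;  // 4: guaranteed to see a == 1 once flag is true
        }
        return -1;         // flag not yet set
    }
}
```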
Happens-before semantics
The Java memory model is defined in terms of actions, including reads and writes of variables, acquires and releases of monitors, and so on. The JMM uses happens-before semantics to describe memory visibility between these actions. If you want the thread performing operation B to see the result of operation A (whether or not A and B are in the same thread), then A and B must satisfy the happens-before relationship. If two operations have no happens-before relationship, the JVM can reorder them arbitrarily.
Happens-before rules include:
- Program order rule. Within a thread, if operation A precedes operation B in program order, then operation A happens before operation B in that thread.
- Monitor lock rule. An unlock of a monitor lock must happen before every subsequent lock of that same monitor lock.
- Volatile variable rule. A write to a volatile variable must happen before every subsequent read of that same variable.
- Thread start rule. A call to Thread.start() must happen before any action in the started thread.
- Thread termination rule. All operations in a thread must happen before any other thread detects that the thread has terminated.
- Finalizer rules. The constructor of an object must complete before starting the object’s finalizer.
- Transitivity. If operation A is performed before B, and operation B is performed before C, then operation A is performed before C.
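Several of these rules can be seen working together in a small demo: the program order rule orders the write to data before start(), the thread start rule makes that write visible to the new thread, and the thread termination rule (via join()) makes the child’s write visible back to the main thread. The class name and structure are my own illustration:

```java
public class HappensBeforeDemo {
    static int data = 0;

    static int run() {
        data = 42;                        // program order rule: ordered before start()
        Thread t = new Thread(() -> data = data + 1); // thread start rule: sees 42
        t.start();
        try {
            t.join();                     // thread termination rule: main sees child's write
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return data;
    }

    public static void main(String[] args) {
        System.out.println(run());        // prints 43
    }
}
```

Without the join(), the main thread would have no happens-before edge to the child’s write and could read a stale value.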
When we write code, if a variable is read by multiple threads and written by at least one of them, and the reads and writes are not ordered by happens-before, a data race occurs and can produce incorrect results.