To guarantee memory visibility, the Java compiler inserts memory barrier instructions at appropriate points in the generated instruction sequence, prohibiting particular types of processor reordering. The JMM classifies memory barrier instructions into four types (Load for reads, Store for writes):
LoadLoad Barriers: insert a LoadLoad barrier between two read instructions to ensure that Load1 loads its data before Load2 does.
StoreStore Barriers: insert a StoreStore barrier between two write instructions to ensure that Store1's data is flushed to main memory, and visible to other processors, before Store2's write.
LoadStore Barriers: insert a LoadStore barrier between a read and a write instruction to ensure that Load1 loads its data before Store2 writes its data.
StoreLoad Barriers: insert a StoreLoad barrier between a write and a read instruction to ensure that Store1's data is written and flushed to main memory before Load2 loads. A StoreLoad barrier holds back all memory access instructions after it until every memory access instruction (store and load) before it has completed. Implementing the StoreLoad barrier is expensive, because current processors typically have to flush the entire write buffer to memory. (If you are not familiar with write buffers, see my previous article "Why reorder? What does it do to threads?".)
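Java source code cannot emit these barrier instructions directly, but since Java 9 the static fence methods on `VarHandle` expose similar ordering guarantees. The sketch below (the class and field names are my own, for illustration) uses `storeStoreFence()` on the writing side and `loadLoadFence()` on the reading side of a plain-field handoff; the StoreLoad barrier, the expensive one, corresponds to `VarHandle.fullFence()`:

```java
import java.lang.invoke.VarHandle;

public class FenceExample {
    int data = 0;
    int ready = 0; // both plain (non-volatile) fields; the fences supply the ordering

    void writer() {
        data = 42;
        VarHandle.storeStoreFence(); // StoreStore: data must be visible before ready
        ready = 1;
    }

    int reader() {
        int r = ready;
        VarHandle.loadLoadFence();   // LoadLoad: ready must be loaded before data
        return r == 1 ? data : -1;
    }
}
```

Called from a single thread the handoff trivially succeeds; the fences matter when writer and reader run on different threads.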
If your résumé mentions multithreading, there is a good chance the interviewer will ask you about the volatile keyword. You might answer that volatile solves the memory visibility problem and disallows reordering. But do you know how it is implemented underneath?
To implement the memory semantics of volatile, the JMM takes a conservative approach: when the compiler generates bytecode, it inserts memory barriers into the instruction sequence to prevent particular types of processor reordering.
1. Insert a StoreStore barrier before each volatile write.
2. Insert a StoreLoad barrier after each volatile write.
3. Insert a LoadLoad barrier after each volatile read.
4. Insert a LoadStore barrier after each volatile read.
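The four rules above can be visualized as comments on a small example (a sketch; the class is hypothetical, and the barrier comments mark where the conservative policy would insert them):

```java
public class ConservativeBarriers {
    volatile int v;
    int plain;

    void demo() {
        plain = 1;     // normal write
        // --- StoreStore barrier (rule 1: before the volatile write) ---
        v = 2;         // volatile write
        // --- StoreLoad barrier (rule 2: after the volatile write) ---

        int r = v;     // volatile read
        // --- LoadLoad barrier (rule 3: after the volatile read) ---
        // --- LoadStore barrier (rule 4: after the volatile read) ---
        plain = r + 1; // normal write, kept after the volatile read
    }
}
```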
(Figure omitted: memory barrier placement under the conservative policy.)
Note: the barrier insertion strategy described above for volatile writes and volatile reads is very conservative. In practice, as long as the write-read memory semantics of volatile are preserved, the compiler can omit unnecessary barriers. Take this example:
```java
public class VolatileBarriersExample {
    int a;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;  // first volatile read
        int j = v2;  // second volatile read
        a = i + j;   // normal write
        v1 = i + 1;  // first volatile write
        v2 = j * 2;  // second volatile write
    }
}
```
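After omitting the redundant barriers, the placement might look like the sketch below (the exact placement is ultimately up to the compiler; the barrier comments are illustrative):

```java
public class VolatileBarriersOptimized {
    int a;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;  // first volatile read
        // --- LoadLoad barrier ---
        int j = v2;  // second volatile read
        // --- LoadStore barrier ---
        a = i + j;   // normal write
        // --- StoreStore barrier ---
        v1 = i + 1;  // first volatile write
        // --- StoreStore barrier ---
        v2 = j * 2;  // second volatile write
        // --- StoreLoad barrier (the method may return right here) ---
    }
}
```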
Note that the final StoreLoad barrier cannot be omitted, because the method returns immediately after the second volatile write: the compiler cannot accurately determine whether a volatile read or write will follow, so to be safe it usually inserts a StoreLoad barrier here. In short, volatile prohibits instruction reordering by relying on memory barriers.
Now that we know how volatile is implemented underneath with memory barriers, let's look at what happens to local memory and main memory (and hence to memory visibility) when a volatile variable is read or written:
```java
public class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    public void writer() {
        a = 1;        // step 1
        flag = true;  // step 2
    }

    public void reader() {
        if (flag) {               // step 3
            int i = a;            // step 4
            System.out.println(i);
        }
    }
}
```
Suppose thread A executes writer() first, and thread B executes reader() afterwards. After thread A finishes, the state of the shared variables is as follows:
Conclusion: when a volatile variable is written, the JMM flushes the values of the shared variables in the thread's local memory to main memory.
Conclusion: when a volatile variable is read, the JMM invalidates the thread's local memory; the thread then reads the shared variable from main memory.
It follows that the memory visibility of volatile works like this: when one thread modifies the shared variable, the new value is immediately flushed to main memory, and any other thread holding that shared variable is effectively told that its cached copy is stale and must be re-read from main memory.
In summary, volatile uses memory barriers to prevent instruction reordering, flushes writes to main memory, invalidates the stale copies in other threads' working memory, and forces those threads to re-read the latest value from main memory, thereby guaranteeing the memory visibility of volatile variables.
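These guarantees can be exercised with a small two-thread demo (a sketch; the class and field names are my own). The reader thread spins on the volatile flag; once the volatile write becomes visible, the happens-before rule guarantees it also sees a == 1:

```java
public class VisibilityDemo {
    static volatile boolean flag = false;
    static int a = 0;
    static int observed = -1;

    static int run() throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!flag) { }   // spin until the volatile write becomes visible
            observed = a;       // guaranteed 1: the write to a happens-before the volatile write to flag
        });
        reader.start();
        a = 1;        // step 1: normal write
        flag = true;  // step 2: volatile write publishes a along with flag
        reader.join();
        return observed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("observed a = " + run());
    }
}
```

Without volatile on flag, the spinning reader might never observe the update; with it, termination and observed == 1 are both guaranteed.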