Translate that for me. What the fuck is volatile?
Under the happens-before principle, a write to a volatile variable must happen-before every subsequent read of that same volatile variable, meaning that a subsequent volatile read cannot proceed until the previous volatile write has completed.
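To make that rule concrete, here is a minimal sketch (the class and field names are mine, purely for illustration): thread A writes a volatile field, and any thread that later reads that field is guaranteed to see not only that write but also everything thread A did before it.

```java
public class HappensBeforeSketch {
    private volatile long lastUpdated = 0L;  // volatile field
    private String payload;                  // ordinary field

    // Called by thread A
    public void publish(String data) {
        payload = data;                            // 1: ordinary write
        lastUpdated = System.currentTimeMillis();  // 2: volatile write
    }

    // Called by thread B
    public String consume() {
        if (lastUpdated != 0L) {   // 3: volatile read, happens-after the write at 2
            return payload;        // 4: therefore guaranteed to see the write at 1
        }
        return null;
    }
}
```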
Memory Barriers (Fences)
You might see a conclusion like this in some articles:
1. Insert a StoreStore barrier before each volatile write.
2. Insert a StoreLoad barrier after each volatile write.
3. Insert a LoadLoad barrier after each volatile read.
4. Insert a LoadStore barrier after each volatile read.
Are you confused by this part of the theory? Good. Now I’m going to show you what this nonsense means with three pieces of code and three images.
First code + graph (add a StoreStore barrier before each volatile write)
```java
public class VolatileBarrierExample {
    // ordinary variables
    private int normal;
    private int i = 10;
    private int j = 0;

    // volatile variables
    private volatile int v1 = 1;
    private volatile int v2 = 2;

    /**
     * Ordinary reads and writes before a volatile write
     */
    public void volatileWriteBefore() {
        j = i;          // ordinary read
        j = j + 1;      // ordinary write
        /**** Imagine if there were no barrier here ****/
        v1 = j + 1;     // volatile write
    }
}
```
As you can see, this code performs some ordinary reads and writes before the volatile write. Imagine what would happen if those ordinary operations could be reordered below the volatile write. This leads us to the first memory barrier: a StoreStore barrier before each volatile write, which keeps the ordinary writes above it from sinking below the volatile write, so that when the volatile write becomes visible, the ordinary data written before it is visible too.
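If it helps, here is volatileWriteBefore() from the class above again, with a comment marking where the JVM conceptually places the barrier (you never write the barrier yourself; the comment is just an annotation):

```java
public void volatileWriteBefore() {
    j = i;          // ordinary read
    j = j + 1;      // ordinary write
    // ---- StoreStore barrier: the ordinary writes above may not sink below this line ----
    v1 = j + 1;     // volatile write
}
```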
Second code + graph (add a StoreLoad barrier after each volatile write)
```java
public class VolatileBarrierExample {
    // ordinary variables
    private int normal;
    private int i = 10;
    private int j = 0;

    // volatile variables
    private volatile int v1 = 1;
    private volatile int v2 = 2;

    /**
     * Volatile reads and writes after a volatile write
     */
    public void volatileWriteAfter() {
        v2 = v2 + 1;    // volatile write
        /**** Imagine if there were no barrier here ****/
        i = v2;         // volatile read
        j = v2 + 1;     // volatile read of v2, ordinary write to j
    }
}
```
As you can see, only by adding a StoreLoad barrier after the volatile write can we keep the subsequent instructions from floating above it, thus ensuring that the volatile reads and writes that follow behave correctly.
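Again, here is volatileWriteAfter() with the conceptual barrier marked as a comment:

```java
public void volatileWriteAfter() {
    v2 = v2 + 1;    // volatile write
    // ---- StoreLoad barrier: the reads of v2 below may not float above this line ----
    i = v2;         // volatile read
    j = v2 + 1;     // volatile read of v2, ordinary write to j
}
```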
Third code + graph (add LoadLoad and LoadStore barriers after each volatile read)
```java
public class VolatileBarrierExample {
    // ordinary variables
    private int normal;
    private int i = 10;
    private int j = 0;

    // volatile variables
    private volatile int v1 = 1;
    private volatile int v2 = 2;

    /**
     * Ordinary reads/writes and a volatile write after a volatile read
     */
    public void volatileReadAfter() {
        i = v2;         // volatile read
        /**** Imagine if there were no barriers here ****/
        i = i + 1;      // ordinary read and write
        j = i;          // ordinary read and write
        v2 = i + 1;     // volatile write
    }
}
```
With that foundation, I think this code is pretty clear, right? Basically: "you can't use it until I finish reading it."
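And once more, volatileReadAfter() with the conceptual barriers marked as comments:

```java
public void volatileReadAfter() {
    i = v2;         // volatile read
    // ---- LoadLoad barrier: the reads below may not float above the volatile read ----
    // ---- LoadStore barrier: the writes below may not float above the volatile read ----
    i = i + 1;      // ordinary read and write
    j = i;          // ordinary read and write
    v2 = i + 1;     // volatile write
}
```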
A few more words about the memory semantics of a volatile write followed by a volatile read
```java
public class ReorderExample {
    private int x = 0;
    private int y = 1;
    private /*volatile*/ boolean flag = false;

    public void writter() {
        x = 42;        // code 1
        y = 50;        // code 2
        flag = true;   // code 3
    }

    public void reader() {
        if (flag) {                                       // code 4
            System.out.println("x=" + x + " y=" + y);     // code 5
        }
    }
}
```
In the code above, suppose we have two threads: the first thread writes the volatile variable and the second thread reads it. How is visibility between the threads achieved here? In fact, when a thread reads a volatile variable, the JMM invalidates that thread's local (working-memory) copy of the variable and makes it read the value from main memory instead. Does that sound a little crazy? A diagram takes you through the process.
As shown above, after thread 1 writes the volatile variable flag, all other threads are notified that the variable has been modified and that their cached copies are stale. When thread 2 later needs to read the value, it goes to main memory for it. On the surface this is exactly what was said above: "the JMM invalidates that thread's local copy and makes it read from main memory."
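If you want to see this behavior for yourself, here is a small runnable sketch (the class name VisibilityDemo is mine). With volatile, the reader thread is guaranteed to eventually stop spinning and print x = 42; if you remove volatile, the reader may spin forever because it never notices the update.

```java
public class VisibilityDemo {
    private static volatile boolean flag = false; // try removing volatile and see what happens
    private static int x = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!flag) {
                // spin until the volatile write in main() becomes visible
            }
            // flag == true has been read, so the write x = 42 is also visible (happens-before)
            System.out.println("x = " + x);
        });
        reader.start();

        Thread.sleep(100); // give the reader a moment to start spinning
        x = 42;            // ordinary write, published by the volatile write below
        flag = true;       // volatile write: forces the update out to main memory
        reader.join();
    }
}
```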
That's it for this article. The source code is available at:
Gitee.com/fugongliude…