2020/11/13 Supplementary content
If you find the content helpful, a thumbs-up is appreciated and encourages further updates.
Volatile is a basic question that comes up often in multithreading interviews. Starting from volatile as a small entry point, interviewers frequently move on to the Java Memory Model (JMM) and other features of concurrent programming in Java. In this article, we will learn about volatile.
Volatile in multithreading
The Java Memory Model (JMM) is defined by the Java Virtual Machine specification to mask the differences in memory access across different hardware and operating systems, so that Java programs achieve consistent memory access on every platform.
Note: Do not confuse the Java memory model with the Java memory structure! The JMM is a fairly large topic in its own right; if you want to learn more, follow the linked article.
As the figure above shows, when a thread executes, it first loads the value of the variable it reads from main memory into its copy in working memory, and then hands it to the processor for execution. After execution, the result is assigned to the copy in working memory, and working memory then writes the value back to main memory, where it is updated. Using working memory alongside main memory speeds things up, but it also brings some problems.
Atomicity, visibility, and ordering are the three characteristics the JMM is concerned with. How are these three problems solved? Here we use volatile, which guarantees only visibility and ordering.
When using volatile, note the following (see the sketch after this list):
- volatile can only modify member variables and static variables (not local variables), and should precede the data type keyword.
- volatile and final cannot modify the same variable: volatile ensures that a write to the variable is visible to other threads, while final prevents the variable from ever being written again, so the combination does not compile.
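A minimal sketch of these rules (the class and field names are illustrative, not from the original article):

import java.util.concurrent.atomic.AtomicBoolean; // not needed here, plain fields suffice

public class VolatileRules {
    // OK: volatile on an instance (member) field, placed before the type keyword
    private volatile boolean running = true;

    // OK: volatile on a static field
    private static volatile int counter = 0;

    // Not allowed: combining final and volatile on the same field does not compile,
    // because final forbids further writes while volatile exists to make writes visible.
    // private final volatile int broken = 0;

    public void stop() {
        running = false; // the new value becomes visible to other threads
    }
}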
Three characteristics
atomicity
Atomicity means that an operation cannot be split into multiple steps: an operation, or a group of operations, either executes completely without being interrupted by any factor, or does not execute at all. Volatile does not guarantee atomicity!
Note: in Java, reads of and assignments to primitive data type variables are atomic operations, e.g. a simple assignment like i = 1 is atomic.
Such as:
i = 1;
i++;
In the two lines above, i = 1 is a single assignment, so it is atomic. i++ and i = i + 1 are equivalent, and can be broken down as follows:
// i++ can be broken down into:
// 1. the thread reads i
// 2. temp = i + 1
// 3. i = temp
Read i, add 1, write the result back to main memory: that is 3 steps. So in the example above the final value can differ from what you expect, because i++ does not satisfy atomicity.
So how can i++ be thread-safe?
- Use the synchronized keyword or Lock.
synchronized(object){
i++;
}
- Use classes that support atomic operations, such as java.util.concurrent.atomic.AtomicInteger. It uses the CAS (compare-and-swap) algorithm and encapsulates increment (add 1), decrement (subtract 1), addition (add a number) and subtraction (subtract a number) of primitive data types, ensuring that these operations are atomic. A usage sketch follows this list.
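A minimal usage sketch of AtomicInteger (the class and method names below are illustrative, not from the original article):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {

    private final AtomicInteger inc = new AtomicInteger(0);

    public void autoIncrement() {
        inc.incrementAndGet(); // atomic read-modify-write, implemented with CAS
    }

    public int current() {
        return inc.get();
    }
}

incrementAndGet() retries its CAS loop until the update succeeds, so concurrent increments are never lost.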
visibility
Java uses volatile to provide visibility. When a variable is volatile, writes to it are immediately flushed to main memory, forcing synchronization between the cache and main memory. When other threads need to read the variable, they discover that the cache is invalid and read the new value from main memory, thereby ensuring visibility. Visibility is also guaranteed with synchronized and Lock, which flush shared variable values back to main memory before releasing the Lock, but synchronized and Lock are more expensive.
JMM makes two rules about synchronized:
- The thread must flush the latest value of the variable to main memory before it can be unlocked.
- When a thread locks, it first clears variable values in the working memory and retrieves the latest values from the main memory to the working memory.
ordering
The JMM allows the compiler and processor to reorder instructions, but it requires as-if-serial semantics: no matter how instructions are reordered, the result of a single-threaded program must not change. Volatile, on the other hand, prevents instruction reordering, and therefore guarantees ordering. What is instruction reordering? In the Java memory model, the compiler and processor are allowed to reorder instructions as long as the result of a single thread is not affected, but there is no guarantee that the result of multiple threads executing concurrently is unaffected. For example, without instruction reordering the code below executes in the order 1->2->3->4, but in actual execution it may run as 1->2->4->3, 2->1->3->4, or some other order. What is guaranteed is that 1 comes before 3 and 2 comes before 4, and every one of these orders still ends with a = 2; b = 3.
int a = 0; // statement 1
int b = 1; // statement 2
a = 2;     // statement 3
b = 3;     // statement 4
In addition, the JMM has some innate ordering, that is, ordering that is guaranteed without any extra means, usually referred to as the happens-before principle; a small example follows.
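As an illustration of the volatile rule of happens-before (the class and field names are my own, not from the original article): a write to a volatile variable happens-before every subsequent read of that variable, so anything written before the volatile write is also visible after the volatile read.

public class HappensBeforeDemo {

    private int data = 0;                  // an ordinary, non-volatile field
    private volatile boolean ready = false;

    // executed by a writer thread
    public void writer() {
        data = 42;      // (1) ordinary write
        ready = true;   // (2) volatile write: (1) happens-before (2)
    }

    // executed by a reader thread
    public void reader() {
        if (ready) {                       // (3) volatile read: (2) happens-before (3)
            System.out.println(data);      // by transitivity, guaranteed to print 42
        }
    }
}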
The singleton implementation we often see, double-checked locking (DCL), is a lazy singleton in which the object is created on first use, and the instance field is declared volatile to prevent the initialization instructions from being reordered. A typical sketch follows.
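A sketch of that double-checked locking singleton (a standard pattern rather than code taken from this article):

public class Singleton {

    // volatile prevents reordering of the steps inside "new Singleton()"
    // (allocate memory, initialize the object, assign the reference), so another
    // thread can never observe a non-null but not-yet-initialized instance.
    private static volatile Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {                      // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {              // second check, lock held
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}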
Volatile example
The following code goes into an infinite loop without volatile and synchronized:
import java.util.concurrent.TimeUnit;

public class VolatileDemo {

    boolean flag = true;

    public void test() {
        System.out.println("test begin------");
        while (flag) {
            // System.out.println("------");
        }
        System.out.println("test end------");
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileDemo volatileDemo = new VolatileDemo();
        new Thread(volatileDemo::test, "Thread 1").start();
        TimeUnit.SECONDS.sleep(3);
        volatileDemo.flag = false;
        System.out.println("main end----");
    }
}
Running results: the program prints "test begin------" and "main end----", but "test end------" never appears and the loop never exits. In this code:
- If thread 1 executes first, then thread 1 reads the flag into its working memory, which is true, and enters an infinite loop.
- The main thread then executes flag = false. Since the main thread changes flag in its own working memory and may not write it back to main memory immediately (and thread 1 never re-reads it from main memory), thread 1 keeps looping indefinitely.
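The fix implied by the description above is simply to make the field volatile, so the main thread's write is flushed to main memory and thread 1 sees the new value:

// in VolatileDemo, declare the field as:
volatile boolean flag = true;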
Note that the println method uses the synchronized keyword, which guarantees visibility, atomicity, and ordering; this is why, in practice, uncommenting the println inside the loop usually lets the loop pick up the new value and exit.
public void println(String x) {
    synchronized (this) {
        print(x);
        newLine();
    }
}
When synchronized obtains the lock, it clears working memory, copies the latest value of the variable from main memory into working memory, executes the code, flushes the result back to main memory, and then releases the lock.
The following code demonstrates that volatile does not guarantee atomicity:
public class AutoIncrement {

    public volatile int inc = 0;

    public void autoIncrement() {
        inc++;
    }

    public static void main(String[] args) {
        final AutoIncrement autoIncrement = new AutoIncrement();
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    autoIncrement.autoIncrement();
                }
            }, "Thread" + i).start();
        }
        // Wait until all worker threads have finished; greater than 2 because
        // IDEA also runs a "Monitor Ctrl-Break" thread
        while (Thread.activeCount() > 2) {
            Thread.yield();
        }
        Thread.currentThread().getThreadGroup().list();
        System.out.println(autoIncrement.inc);
    }
}
The expected result is 100,000, but the actual result is usually less than 100,000. One might wonder: "volatile guarantees visibility, so changes made to inc by one thread should be immediately visible to another thread, shouldn't they?"
- But note: inc++ is a compound operation. As explained in the atomicity section above for i++, it decomposes into: read the value of inc, increment it, and then write it back to main memory.
We can assume that thread 1 executes first, reads inc with a value of 100, and is then blocked before making any change to the variable, so it does not trigger the volatile write rule. Thread 2 also reads inc; since thread 1 has not changed it, the value in main memory is still 100. Thread 2 increments it and immediately writes it back to main memory, which is now 101. Then thread 1 resumes: since it still holds 100 in its working memory, it increments that to 101 and writes it back, so 101 is written to main memory again. Although the two threads executed autoIncrement() twice in total, thread 1's write overwrites thread 2's, and the value only increases by one.
One might then ask, “Doesn’t volatile invalidate the cache?”
- In this case both thread 1 and thread 2 have only read inc; neither has written it yet, so when thread 2 reads, it still sees 100.
“Thread 2 writes 101 back to main memory, doesn’t thread 1 set its cache to invalid?”
- Note that thread 1 had already read the value before thread 2's write. Cache invalidation only affects subsequent reads: thread 1 would re-read from main memory only when it performs a new read and finds its cached copy invalid, but here it is already in the middle of its increment using the old value.
To sum up, volatile cannot guarantee atomicity for a compound operation like this. In the flag case, however, a single read or write of a boolean is itself atomic, so volatile's visibility guarantee is enough.
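As a contrast to the AtomicInteger sketch earlier, here is a minimal sketch of fixing the counter with synchronized instead (the class name is illustrative, not from the original article):

public class SafeAutoIncrement {

    private int inc = 0;

    // synchronized makes the read-increment-write sequence atomic and flushes
    // the new value back to main memory when the lock is released
    public synchronized void autoIncrement() {
        inc++;
    }

    public synchronized int current() {
        return inc;
    }
}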
conclusion
If you want to learn more about volatile, check out other articles on volatile.