Preface
The volatile keyword is the lightest-weight synchronization mechanism provided by the JVM (much lighter than synchronized). Unlike synchronized, volatile is a variable modifier: it can only be applied to variables, not to methods, code blocks, and so on.
When a variable is declared volatile, it gains two properties:
- Visibility of the variable to all threads
Visibility: when one thread changes the value of the variable, the new value is immediately visible to other threads.
- Prohibition of instruction-reordering optimization
Instruction reordering: the JVM and the processor may optimize operations such as assignments so that any code depending on a result still sees the correct value, but the operations are not necessarily executed in the order written in the program code.
Note: reordering is an optimization at the machine-instruction level, not at the Java source-code level.
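As a concrete illustration, here is a minimal sketch of how reordering of independent stores and loads can become visible to another thread. The class name StoreLoadDemo and the fields a, b, r1, r2 are illustrative, not from the original text; a single run often shows nothing, and harnesses such as jcstress are normally used to provoke the effect reliably.
public class StoreLoadDemo {
    static int a, b, r1, r2;

    public static void main(String[] args) throws InterruptedException {
        for (long i = 0; i < 100_000; i++) {
            a = 0; b = 0; r1 = 0; r2 = 0;
            Thread t1 = new Thread(() -> { a = 1; r1 = b; });
            Thread t2 = new Thread(() -> { b = 1; r2 = a; });
            t1.start(); t2.start();
            t1.join(); t2.join();
            if (r1 == 0 && r2 == 0) { // only possible if the stores were reordered past the loads
                System.out.println("reordering observed at iteration " + i);
                return;
            }
        }
        System.out.println("no reordering observed in this run (it is not guaranteed to appear)");
    }
}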
The use of volatile
Using volatile is straightforward: simply add the volatile modifier when declaring a variable that may be accessed by multiple threads at the same time.
For example, the following code is a typical double-checked-locking implementation of a singleton; the volatile keyword marks the instance field, which may be accessed by multiple threads simultaneously.
class Singleton {
    private volatile static Singleton instance = null;
    private Singleton() {}
    public static Singleton getInstance() {
        if (instance == null) {                   // step 1
            synchronized (Singleton.class) {
                if (instance == null) {           // step 2
                    instance = new Singleton();   // step 3
                }
            }
        }
        return instance;
    }
}
The principle of volatile
To improve execution speed, multiple levels of cache are placed between the processor and main memory. However, introducing multi-level caches creates the problem of inconsistent cached data.
When a volatile variable is written, however, the JVM emits a lock-prefixed instruction to the processor, which writes the variable's cache line back to system main memory.
But even after the value is written back to main memory, other processors may still hold the old value in their caches, and computing with that stale value would be a problem. Therefore, on multiprocessor systems, a cache coherence protocol is implemented to keep each processor's cache consistent.
Cache coherence protocol (the MESI protocol): each processor sniffs the data propagated on the bus to check whether its cached values are stale. When a processor finds that the memory address backing one of its cache lines has been modified, it marks that cache line as invalid; the next time it wants to operate on that data, it is forced to re-read it from system memory into its cache.
So, if a variable is volatile, its value is forcibly flushed to main memory after every change, and, because they follow the cache coherence protocol, the caches of other processors reload the variable's value from main memory. This is what keeps the value of a volatile variable visible across multiple caches in concurrent programs.
Volatile and visibility
Visibility means that when multiple threads access the same variable and one thread changes the value of the variable, other threads can immediately see the changed value.
The Java memory model stipulates that all variables are stored in main memory and that each thread has its own working memory. A thread's working memory holds copies of the main-memory variables the thread uses; all of the thread's operations on variables must go through its working memory rather than reading and writing main memory directly. Threads cannot access each other's working memory, so passing a variable's value between threads requires synchronizing it between working memory and main memory. As a result, thread 1 may change the value of a variable without the change being visible to thread 2.
As described above in the principle of volatile, the volatile keyword in Java ensures that a modified variable is synchronized to main memory immediately after it is written, and that the variable is refreshed from main memory before each use. volatile can therefore be used to guarantee the visibility of variables in multithreaded code.
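To make the visibility guarantee concrete, here is a minimal sketch (the class VisibilityDemo and the field running are illustrative names, not from the original text): without volatile on the flag, the worker thread may never observe the write made by the main thread and may loop forever; with volatile, the stop request becomes visible.
public class VisibilityDemo {
    // try removing volatile: the worker may then loop forever
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy loop; reads running on every iteration
            }
            System.out.println("worker observed running = false, exiting");
        });
        worker.start();
        Thread.sleep(1000);
        running = false; // volatile write: becomes visible to the worker
        worker.join();
    }
}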
Volatile and ordering
Ordering means that the program executes in the order in which the code is written.
Besides the thread interleaving introduced by time slicing, processor optimizations and instruction reordering mean the CPU may execute code out of order; for example, load -> add -> save might be optimized into load -> save -> add. This is where ordering problems can appear.
In addition to guaranteeing visibility, volatile also prevents instruction-reordering optimizations.
Ordinary variables only guarantee that any code depending on an assignment will see a correct result; they do not guarantee that the assignments execute in the same order as in the program code.
volatile forbids instruction reordering around accesses to the variable, ensuring that such code executes in strict program order. This is how it guarantees ordering: operations on a volatile variable are performed in code order, so load -> add -> save stays load, add, save.
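The following sketch (the class OrderingDemo and its fields are illustrative names, not from the original text) shows this ordering guarantee in practice: because ready is volatile, the write to data cannot be reordered after the write to ready, so a reader that sees ready == true is also guaranteed to see data == 42.
public class OrderingDemo {
    private static int data = 0;
    private static volatile boolean ready = false;

    static void writer() {
        data = 42;    // plain write
        ready = true; // volatile write: cannot be reordered before the write to data
    }

    static void reader() {
        if (ready) {                  // volatile read
            System.out.println(data); // guaranteed to print 42, never 0
        }
    }

    public static void main(String[] args) {
        new Thread(OrderingDemo::writer).start();
        new Thread(OrderingDemo::reader).start();
    }
}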
class Singleton {
    private static Singleton instance = null;     // note: volatile removed
    private Singleton() {}
    public static Singleton getInstance() {
        if (instance == null) {                   // step 1
            synchronized (Singleton.class) {
                if (instance == null) {           // step 2
                    instance = new Singleton();   // step 3
                }
            }
        }
        return instance;
    }
}
What might happen if instance were not volatile?
Suppose two threads call getInstance(). Thread 1 performs step 1, finds instance null, acquires the synchronized lock on the Singleton class, and checks instance again (step 2). Finding it still null, it executes step 3 and begins instantiating the Singleton. While that instantiation is in progress, thread 2 may reach step 1 and find instance non-null, even though instance may not yet be fully initialized.
The object is created in three steps, represented by the following pseudocode:
memory = allocate(); //1. Allocate the memory space of the object
ctorInstance(memory); //2. Initialize the object
instance = memory; //3. Set instance to the memory space of the object
Steps 2 and 3 depend on step 1, but they do not depend on each other, so the compiler or processor may reorder them: step 3 may execute before step 2. If step 3 has executed but step 2 has not, instance already points to the object's memory even though the object has not been initialized. At that moment, thread 2 sees that instance is not null and returns it directly. The instance it gets is an incompletely constructed object, which will cause problems when it is used.
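As a side note, a commonly used alternative that sidesteps this reordering problem entirely, without relying on volatile, is the initialization-on-demand holder idiom: the JVM guarantees that class initialization is thread-safe. The sketch below is illustrative, not code from the original text.
class HolderSingleton {
    private HolderSingleton() {}

    private static class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE; // Holder is initialized safely on first access
    }
}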
Volatile and atomicity
Atomicity means that an operation, or a group of operations, either executes in its entirety without being interrupted by any factor, or does not execute at all.
A thread is the basic unit of CPU scheduling. The CPU schedules threads in time slices according to its scheduling algorithm: a thread starts executing once it is granted a time slice and loses the CPU when that slice is used up. In multithreaded scenarios, atomicity problems arise precisely because time slices switch between threads.
At the bytecode level, ensuring the atomicity of a block of code relies on the monitorenter and monitorexit instructions, and volatile has no relationship to either instruction.
Therefore, volatile does not guarantee atomicity.
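For reference, the sketch below (the class, method, and field names are illustrative) marks where those monitor instructions come from: a synchronized block compiles to a monitorenter/monitorexit pair, while accesses to a volatile field remain ordinary field-access instructions with stronger memory semantics, not monitor instructions.
public class MonitorDemo {
    private static volatile int counter = 0;

    static void guarded(Object lock) {
        synchronized (lock) { // compiles to a monitorenter instruction
            counter++;        // the volatile field is still read and written with plain
                              // getstatic/putstatic; only the surrounding synchronized
                              // block makes this increment atomic
        }                     // compiles to a monitorexit instruction
    }
}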
Example:
public class Part implements Runnable {
    private volatile static int a = 0;

    @Override
    public void run() {
        for (int i = 0; i < 200000; i++) {
            // even with a larger count, the final result will not be correct
            // print the value of a before and after a++
            System.out.println(Thread.currentThread().getId() + " before a = " + a);
            a++;
            System.out.println(Thread.currentThread().getId() + " after a = " + a);
        }
    }
}
The main function
public class Test {
    public static void main(String[] args) throws Exception {
        Part a = new Part();
        Thread t1 = new Thread(a);
        Thread t2 = new Thread(a);
        t1.start();
        t2.start();
    }
}
With two threads, the final value of a is expected to be 400,000. In practice the result comes out less than 400,000 and changes on every run; this is the atomicity problem.
a++ consists of three steps: read, increment, and write (sketched below). Suppose the following happens: thread A reads a = 1 and, before it increments, thread B also reads a = 1. Both then increment, thread A writes 2 back to memory, and thread B also writes 2. One increment is lost, so atomicity is not guaranteed.
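Here are those three steps written out explicitly (the class name and the local variable tmp are illustrative, not from the original text). If a thread is preempted between the read and the write, another thread can read the same old value and one increment is lost; volatile does nothing to prevent this.
public class LostUpdateDemo {
    private static volatile int a = 0;

    static void unsafeIncrement() {
        int tmp = a;   // 1. read the current value of the volatile field
        tmp = tmp + 1; // 2. increment the local copy
        a = tmp;       // 3. write back; may silently overwrite another thread's update
    }
}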
Solution: you can use synchronized or a Lock to guarantee atomicity, or you can use AtomicInteger.
Here is the example modified to use AtomicInteger:
import java.util.concurrent.atomic.AtomicInteger;

public class Part implements Runnable {
    private volatile static AtomicInteger a = new AtomicInteger(0);

    @Override
    public void run() {
        for (int i = 0; i < 200000; i++) {
            // incrementAndGet() atomically increments a and returns the new value
            System.out.println("a = " + a.incrementAndGet());
        }
    }
}
synchronized
public class Part implements Runnable {
    private volatile static int a = 0;

    @Override
    public void run() {
        for (int i = 0; i < 200000; i++) {
            synchronized (Part.class) {
                // print the value of a before and after a++
                System.out.println(Thread.currentThread().getId() + " before a = " + a);
                a++;
                System.out.println(Thread.currentThread().getId() + " after a = " + a);
            }
        }
    }
}
ReentrantLock
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Part implements Runnable {
    private volatile static int a = 0;
    private final Lock lock = new ReentrantLock();

    @Override
    public void run() {
        lock.lock();
        try {
            for (int i = 0; i < 200000; i++) {
                // print the value of a before and after a++
                System.out.println(Thread.currentThread().getId() + " before a = " + a);
                a++;
                System.out.println(Thread.currentThread().getId() + " after a = " + a);
            }
        } finally {
            lock.unlock(); // always release the lock, even if an exception occurs
        }
    }
}