The volatile keyword is the lightest synchronization mechanism provided by the Java Virtual Machine. It guarantees visibility and ordering, not atomicity (it does, however, make single reads and writes of 64-bit long and double values atomic). Because no exclusive lock is involved, volatile does not cause thread context switching or scheduling

Three major features:

  • Memory visibility
  • Atomicity is not guaranteed
  • Disallow reordering

Memory visibility

Visibility concept: when multiple threads access the same variable and one thread changes its value, the other threads can immediately see the changed value

What causes visibility problems?

At the CPU level: modern CPUs have multi-level caches. The L1, L2, and L3 caches are kept coherent with each other, but a store buffer and a load buffer sit between each core and its L1 cache. Writes first go into the store buffer and are flushed to L1 asynchronously, and this asynchrony can cause visibility problems

Java Memory Model (JMM): the JMM specifies that each thread has its own working memory, and that a thread must copy a variable from main memory into its working memory before operating on it. Threads cannot access each other's working memory, so all communication goes through synchronization between working memory and main memory; depending on when that synchronization happens relative to each thread's execution, visibility problems can occur

Solution

Adding the volatile keyword to a variable ensures visibility, primarily because:

  • For volatile variables, the JMM forces variables in the thread’s local memory to be flushed to main memory
  • Write operations invalidate volatile variable caches in other threads

Case:

public class Main {

    public static int a = 0; // declare as volatile to make T1 see the change

    public static void main(String[] args) throws Exception {
        Thread t1 = new Thread(() -> {
            while (a == 0) {
                // busy-wait until a becomes 1
            }
            System.out.println("T1 sees that a is equal to 1");
        });

        Thread t2 = new Thread(() -> {
            try {
                Thread.sleep(1000);
                a = 1;
                System.out.println("T2 changed a = 1");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });

        t1.start();
        t2.start();
    }
}

When a is not declared volatile, T1 may spin in the while loop forever: it never sees T2's write.

When a is declared volatile, T1 exits the loop as soon as T2 writes a = 1.

Atomicity is not guaranteed

Volatile does not guarantee atomicity for compound operations such as i++ under multithreading

Case:

public static volatile int nums = 0;

public static void main(String[] args) throws Exception {
    Thread[] threads = new Thread[5];
    for (int i = 0; i < 5; i++) {
        threads[i] = new Thread(() -> {
            for (int k = 0; k < 4000; k++) {
                nums++; // read-modify-write: three steps, not atomic even with volatile
            }
        });
        threads[i].start();
    }

    for (Thread t : threads) {
        t.join(); // wait for all threads before reading the result
    }

    System.out.println("nums: " + nums);
}

Inconsistent results:

nums: 14642
nums: 12505
nums: 6732

Conclusion: a synchronized block locks the code so that other threads can enter it only after the current thread finishes; only that guarantees atomicity. Because volatile does not use a lock, and the CPU schedules threads by time slice, any operation that takes multiple steps is inherently non-atomic.

That is, if thread A reads and increments the value but is suspended before writing it back to main memory, thread B can complete its own i++ and write to main memory; when thread A resumes, it writes back its stale result, so one increment is lost
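To get atomic increments without a synchronized block, the counter above can be rewritten with java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet performs the read-modify-write as a single CAS-based atomic step. A minimal sketch (the class name AtomicCounter is mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {

    static final AtomicInteger nums = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[5];
        for (int i = 0; i < 5; i++) {
            threads[i] = new Thread(() -> {
                for (int k = 0; k < 4000; k++) {
                    nums.incrementAndGet(); // atomic read-modify-write
                }
            });
            threads[i].start();
        }

        for (Thread t : threads) {
            t.join(); // wait for all threads to finish
        }

        System.out.println("nums: " + nums.get()); // always 20000
    }
}
```

Unlike the volatile version, no increments are lost: every run prints 20000.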

Disallow reordering

Reordering does not always cause problems, but it can break thread safety. Within a single thread, reordering never changes the result of the program (the as-if-serial guarantee)

Common reorder types:

  • Compiler reordering: for statements with no data dependency on each other, the compiler may adjust their execution order
  • CPU reordering: instruction-level parallelism lets the processor execute independent instructions out of order
  • Memory reordering: because each CPU has its own cache, the order in which writes become visible in main memory can differ from the order in which the instructions executed

Volatile prevents instruction reordering by having the compiler insert "memory barriers" into the instruction sequence when it generates code for volatile accesses

The memory barrier

Memory barrier function:

  • Prevents reordering of instructions on both sides of the barrier
  • Forces write buffer/cache data to be written back to main memory

Basic memory barrier types:

  • Memory barriers at the hardware (x86) level:
    • sfence: a Store barrier
    • lfence: a Load barrier
    • mfence: a full (load + store) barrier
    • Lock prefix: not a memory barrier itself, but a lock-prefixed instruction has a full-barrier effect
  • Memory barriers at the JMM level:
    • LoadLoad: forbids reordering a read with a subsequent read
    • StoreStore: forbids reordering a write with a subsequent write
    • LoadStore: forbids reordering a read with a subsequent write
    • StoreLoad: forbids reordering a write with a subsequent read
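These JMM barriers are what make the classic safe-publication idiom work: a plain write before a volatile write cannot be reordered after it (StoreStore), and a plain read after a volatile read cannot be reordered before it (LoadLoad/LoadStore). A minimal sketch (the class and field names are mine):

```java
public class SafePublication {

    static int data = 0;
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;    // plain write
            ready = true; // volatile write: StoreStore barrier keeps it after data = 42
        });

        Thread reader = new Thread(() -> {
            while (!ready) {
                // spin on the volatile read; barriers keep the data read after it
            }
            // ready == true happens-before this point, so data is guaranteed to be 42
            System.out.println("data = " + data);
        });

        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

Without volatile on ready, the reader could both spin forever and, in principle, observe ready == true while still reading a stale data.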

Double check lock

public class Singleton {

    private Singleton() {}

    // must be volatile to prevent the reordering problem described below
    private static volatile Singleton instance = null;

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

The line instance = new Singleton() is not a single atomic instruction, and instruction reordering may occur within it. With multiple threads this creates a thread-safety issue: a thread may observe a partially constructed singleton

instance = new Singleton() creates the object in three steps:

  1. Allocate memory space for the object
  2. Initialize the object
  3. Point instance at the newly allocated memory address
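The three steps correspond roughly to the bytecode javap emits for instance = new Singleton(); the constant-pool indexes here are illustrative:

```
new           #2   // step 1: allocate memory for Singleton
dup
invokespecial #3   // step 2: call <init> to initialize the object
putstatic     #4   // step 3: point instance at the allocated memory
```

Steps 2 and 3 have no data dependency from the JIT's point of view, which is why they can be swapped.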

During object creation, steps 2 and 3 may be reordered, so another thread can see a non-null instance that refers to an uninitialized object. To keep the double-checked-locking singleton thread-safe, the instance field must therefore be declared volatile

Underlying Implementation Principles

1. Decompiling the class file shows that volatile fields are marked with the ACC_VOLATILE access flag

2. In the HotSpot interpreter, writes to static fields are handled in the source file bytecodeInterpreter.cpp

3. There, the field's access flags are checked to determine whether it is volatile; the checking method is defined in accessFlags.hpp

4. Different CPU architectures implement the barriers differently: each architecture directory contains its own OrderAccess class

5. The actual low-level volatile operation:

  • On a single-processor system, no memory barrier is needed
  • On a multi-processor system, the JVM emits a lock-prefixed instruction (for example, lock addl) for the volatile write

Conclusion:

Volatile ensures visibility and ordering, but not atomicity, through memory barriers. Decompilation shows that each volatile field is marked with ACC_VOLATILE; the JVM checks this flag and, when it is set, inserts the appropriate barriers.

The core role of barriers:

  1. Prevents reordering of instructions on both sides of the barrier
  2. Forces write buffer/cache data to be written back to main memory
  3. Write operations invalidate volatile variable caches in other threads

【 Relevant information 】

  • Java Concurrency in Practice
  • Understanding Java Concurrency: How volatile guarantees visibility
  • Details on the underlying principle of volatile
  • Why volatile does not guarantee atomicity