
  • 1. Principle
  • 2. Application scenarios
  • 3. Summary

1. Principle

The Java language provides a slightly weaker synchronization mechanism, the volatile keyword, which ensures that updates to a variable are made visible to other threads and that accesses to the variable are not reordered. In other words, volatile addresses two of the three sources of concurrency bugs described in “01 - Visibility, Atomicity, and Ordering: Sources of Concurrent Programming Bugs”: visibility and ordering.

1.1 How ordering is guaranteed

The JMM disallows certain types of reordering by inserting memory barrier instructions: when the Java compiler generates the instruction sequence, it inserts memory barriers before and after volatile operations to prohibit specific kinds of reordering.

Volatile memory barrier insertion strategy:

  1. Insert a StoreStore barrier before each volatile write;
  2. Insert a StoreLoad barrier after each volatile write;
  3. Insert a LoadLoad barrier after each volatile read;
  4. Insert a LoadStore barrier after each volatile read.
The four barrier types:

  • LoadLoad barrier (Load1; LoadLoad; Load2): ensures that Load1's data is loaded before Load2 and all subsequent load instructions.
  • StoreStore barrier (Store1; StoreStore; Store2): ensures that Store1's data is visible to other processors (flushed to main memory) before Store2 and all subsequent store instructions.
  • LoadStore barrier (Load1; LoadStore; Store2): ensures that Load1's data is loaded before Store2 and all subsequent store instructions are flushed to main memory.
  • StoreLoad barrier (Store1; StoreLoad; Load2): ensures that Store1's data is visible to other processors (flushed to main memory) before Load2 and all subsequent load instructions are executed. Memory access instructions after a StoreLoad barrier do not execute until every memory access instruction (store and load) before the barrier has completed.

Store: makes the processor's data visible to other processors by flushing it to main memory. Load: invalidates the data in the processor's cache so that it is reloaded from main memory.
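To make the insertion strategy concrete, here is a minimal sketch (the class and field names are made up for illustration, not part of the original article); the comments mark where the JIT would conceptually place each barrier around the volatile accesses, following the strategy and barrier types above.

public class VolatileBarrierPlacement {

    private int plain = 0;            // ordinary field
    private volatile int flag = 0;    // volatile field

    public void write() {
        plain = 1;
        // StoreStore barrier: earlier ordinary stores are flushed before the volatile store
        flag = 1;                     // volatile write
        // StoreLoad barrier: the volatile store completes before any subsequent load
    }

    public void read() {
        int f = flag;                 // volatile read
        // LoadLoad barrier: the volatile load completes before any subsequent load
        // LoadStore barrier: the volatile load completes before any subsequent store
        int p = plain;                // sees plain = 1 whenever f == 1
        System.out.println(f + ", " + p);
    }
}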

  

1.2 How visibility is guaranteed

The volatile memory barrier insertion strategy includes the rule “insert a StoreLoad barrier after each volatile write.” The StoreLoad barrier is translated into an instruction with a Lock prefix, and a Lock-prefixed instruction does two things on a multi-core processor:

  1. It writes the data in the current processor's cache line back to system memory;
  2. This write-back invalidates the data that other CPUs have cached for that memory address.

The write-read visibility process of a volatile variable:

  1. A thread writes to the volatile variable;
  2. Because the JMM inserts a StoreLoad memory barrier during compilation, the JVM emits a Lock-prefixed instruction to the processor;
  3. The Lock-prefixed instruction writes the cache line containing the variable back to main memory and invalidates the copies that other processors have cached for that memory address;
  4. When another thread reads the volatile variable, its locally cached copy is invalid, so the latest value is read from main memory.
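As a minimal sketch of this write-read process (the class and field names are illustrative, not from the original article): the writer performs a volatile write after an ordinary write, and the reader thread spins on the volatile read until the update becomes visible.

public class VolatileVisibilityDemo {

    private int data = 0;                   // ordinary field
    private volatile boolean ready = false; // volatile flag; its write is flushed to main memory

    public static void main(String[] args) throws InterruptedException {
        VolatileVisibilityDemo demo = new VolatileVisibilityDemo();

        Thread reader = new Thread(() -> {
            while (!demo.ready) {
                // spin: each volatile read discards the stale local copy and reloads from main memory
            }
            System.out.println("data = " + demo.data); // prints 42 once ready is observed as true
        });
        reader.start();

        demo.data = 42;    // 1. ordinary write
        demo.ready = true; // 2. volatile write: makes the preceding write visible to the reader
        reader.join();
    }
}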

  

1.3 Use Cases

A classic way to implement the singleton pattern is double-checked locking (for background on the singleton pattern, see “Design Patterns (6): Singleton Pattern”). In this implementation the singleton field is declared with the volatile keyword. Let's think about why that is necessary.

import java.io.Serializable;

/** Lazy-loading, double-checked singleton. */
public class LazyDoubleCheckSingleton implements Serializable {

    /** Static private instance; volatile ensures visibility and prevents reordering. */
    private volatile static LazyDoubleCheckSingleton lazyDoubleCheckSingleton = null;

    private LazyDoubleCheckSingleton() {
    }

    /**
     * Public static method that creates or returns the static private instance.
     * @return the singleton instance
     */
    public static LazyDoubleCheckSingleton getInstance() {
        if (lazyDoubleCheckSingleton == null) {
            synchronized (LazyDoubleCheckSingleton.class) {
                if (lazyDoubleCheckSingleton == null) {
                    lazyDoubleCheckSingleton = new LazyDoubleCheckSingleton();
                }
            }
        }
        return lazyDoubleCheckSingleton;
    }

    /**
     * Prevents serialization and deserialization from breaking the singleton;
     * this is why the class implements the Serializable interface.
     * @return the existing singleton instance
     */
    public Object readResolve() {
        return lazyDoubleCheckSingleton;
    }
}

Declaring the instance field volatile prevents reordering and ensures visibility. The statement lazyDoubleCheckSingleton = new LazyDoubleCheckSingleton(); actually executes three steps: 1. Allocate memory for the object; 2. Initialize the object; 3. Point the field at the newly allocated memory address. The order of steps 2 and 3 is not fixed: sometimes step 2 runs first, and sometimes step 3 does. If thread A performs step 3 before step 2, another thread can pass the outer lazyDoubleCheckSingleton != null check and receive a reference to an object that has not been initialized yet. Making the field volatile prevents this reordering and ensures visibility.

2. Application scenarios

Access to a volatile variable takes no lock and therefore never blocks the executing thread, which makes volatile a lighter-weight synchronization mechanism than the synchronized keyword. Volatile is suitable when a variable is shared by multiple threads and each thread simply assigns a value to it directly.

When a non-volatile variable is read or written, each thread first copies the variable from main memory into a CPU cache. If the computer has multiple CPUs, each thread may run on a different CPU, which means each thread may copy the variable into a different CPU cache. Declaring the variable volatile makes the JVM ensure that every read comes from main memory, bypassing the stale CPU-cache copy.
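A minimal sketch of this scenario (the class and method names are made up for illustration): one thread assigns a new value to a shared volatile flag, and the worker thread, which only reads the flag, observes the change and stops.

public class StoppableWorker implements Runnable {

    // volatile guarantees the worker reads the latest value from main memory, not a stale cached copy
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            // do some work
        }
        System.out.println("worker stopped");
    }

    public void stop() {
        running = false; // a single direct assignment; no compound read-modify-write is involved
    }

    public static void main(String[] args) throws InterruptedException {
        StoppableWorker worker = new StoppableWorker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(100);
        worker.stop(); // without volatile, the worker might never observe this change
        t.join();
    }
}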

  

3. Summary

In concurrent programming, the use of the volatile keyword to modify variables ensures that changes to variables are visible to other threads. Volatile can guarantee visibility and order by inserting memory barriers, but not atomicity, which must be guaranteed by locking or CAS mechanisms.
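As a minimal sketch of that last point (the class name and thread counts are made up for illustration): volatile does not make a compound operation such as count++ atomic, whereas a CAS-based AtomicInteger does.

import java.util.concurrent.atomic.AtomicInteger;

public class VolatileNotAtomicDemo {

    private static volatile int volatileCount = 0;                        // visible, but ++ is read-modify-write
    private static final AtomicInteger atomicCount = new AtomicInteger(); // CAS-based counter

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                volatileCount++;               // updates can be lost between the read and the write
                atomicCount.incrementAndGet(); // CAS retries until the update succeeds
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("volatile count: " + volatileCount);   // often less than 20000
        System.out.println("atomic count: " + atomicCount.get()); // always 20000
    }
}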