This article is participating in “Java Theme Month – Java Development Practice”, for details: juejin.cn/post/696826…

This is the fourth day of my participation in Gwen Challenge

Introduction

We all know the properties of the synchronized keyword: atomicity, visibility, ordering, and reentrancy. Although the JDK keeps optimizing the built-in lock, which, as mentioned above, passes through four states (no lock -> biased lock -> lightweight lock -> heavyweight lock), under high concurrency with heavy contention it still eventually inflates into a heavyweight lock.

This article focuses on the keyword volatile, which differs from synchronized in that volatile is not atomic. Note: "not atomic" here refers to compound operations; a single read or write of a volatile variable is still atomic!

Why is that?

That is because synchronized executes an entire block of code atomically through the monitor (for a synchronized method, the JVM checks the ACC_SYNCHRONIZED flag and acquires the monitor for the whole method). volatile, by contrast, is atomic only for single reads and writes; compound operations on it are not atomic.
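As a small illustration (the class and method names below are made up for this sketch), a synchronized method is marked with the ACC_SYNCHRONIZED flag in the class file and the JVM acquires its monitor implicitly, while a synchronized block compiles to explicit monitorenter/monitorexit bytecodes; both acquire the same monitor here:

```java
public class SyncForms {
    private int n = 0;

    // Compiled with the ACC_SYNCHRONIZED method flag; the JVM acquires the
    // receiver's monitor implicitly on entry and releases it on exit.
    public synchronized void incMethod() {
        n++;
    }

    // Compiled to explicit monitorenter/monitorexit bytecodes on `this`,
    // guarding the same monitor as the synchronized method above.
    public void incBlock() {
        synchronized (this) {
            n++;
        }
    }

    public synchronized int get() {
        return n;
    }
}
```

Either form makes the whole guarded region atomic, which is exactly what volatile does not do for compound operations.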

Basic usage

The volatile keyword in Java is used to modify a variable, as follows. It means: this variable must be read from and written to main memory directly.

public class SharedClass {
    public volatile int counter = 0;
}

An int counter variable declared volatile is written directly to main memory, and all reads of the variable are read directly from main memory rather than from the CPU cache. (Don't worry if the difference between main memory and the CPU cache is unclear; it is explained below.)

What problem does this solve? There are two main problems:

  • Visibility of variable changes across multiple threads
  • CPU instruction reordering

Note: For brevity, variables modified by volatile are referred to as "volatile variables", and unmodified ones as "normal variables".

Understand the volatile keyword

Variable visibility problem: volatile ensures that changes to a variable are visible across multiple threads.

In a multithreaded application, for performance each thread by default copies a variable from main memory into the cache of the CPU it runs on, and then reads and writes that copy. Today's computers mostly have multi-core CPUs, and different threads may run on different cores, each with its own cache. As shown in the figure below (CPU 1 and CPU 2 in the figure can be understood as two cores):

One problem is that the JVM does not guarantee when data in a CPU cache is written back to main memory, nor when data is read from main memory into the CPU cache. In other words, threads on different CPUs may read different values for the same variable; this is commonly called the visibility problem between threads.

For example, Thread 1’s change of counter = 7 is only visible in CPU 1’s cache. When Thread 2 reads the variable counter from CPU 2’s cache, the value of the variable counter is still 0.

One purpose of volatile is to address this invisibility between threads: changes to a volatile variable become visible to all threads.

The solution is mentioned at the beginning of the article:

  • All reads of a volatile variable go directly to main memory, not the CPU cache, and all writes to it are written straight back to main memory.

  • Since main memory is shared by all CPUs, threads running on different CPUs can therefore see changes other threads make to the variable. In fact, volatile guarantees more than the visibility of the volatile variable itself:

When thread A modifies a volatile variable V and thread B subsequently reads it, not only is V visible to thread B, but all variables that were visible to thread A before it wrote V become visible to thread B as well.

When thread A reads a volatile variable V, all other variables visible to thread A are also refreshed from main memory.
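The two rules above can be sketched in a minimal example (the class name is hypothetical): the normal variable x is written before the volatile flag, so once the reader sees ready == true, the earlier write to x is guaranteed to be visible too:

```java
public class HappensBeforeDemo {
    static int x;                      // normal, non-volatile variable
    static volatile boolean ready;     // volatile flag, written after x

    public static int run() {
        Thread writer = new Thread(() -> {
            x = 42;                    // (1) normal write
            ready = true;              // (2) volatile write publishes (1)
        });
        writer.start();
        while (!ready) { }             // (3) volatile read
        return x;                      // (4) guaranteed to observe 42
    }
}
```

Without volatile on ready, step (4) could legally observe a stale x, or the loop at (3) might never terminate.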

Characteristics and Principles

Visibility

If any thread modifies a volatile variable, other threads immediately see the latest value. Visibility is achieved as follows.

  • Step 1: The modification in working memory is forcibly flushed back to main memory.

  • Step 2: The cached copies in other threads' working memory are forcibly invalidated. (This part is largely handled by the cache-coherence protocol, e.g. MESI.)

A single read/write is atomic

Single volatile reads and writes (e.g. vol = l) are atomic, while compound operations (e.g. i++) are not.

public class VolatileFeaturesA {

    private volatile long vol = 0L;

    /** A single read is atomic. */
    public long get() {
        return vol;
    }

    /** A single write is atomic. */
    public void set(long l) {
        vol = l;
    }

    /** Compound (read-modify-write) operations are not atomic. */
    public void getAndAdd() {
        vol++;
    }
}
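To see that vol++ style compound operations are not atomic, the following sketch (class and method names are made up) has two threads increment a volatile counter; increments can interleave and be lost, so the final value often falls short of the expected total:

```java
public class VolatileNotAtomic {
    static volatile int count = 0;

    public static int run() {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++;               // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;                  // often less than 200000 due to lost updates
    }
}
```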

Mutual exclusivity

The memory semantics of a single volatile read/write are equivalent to acquiring/releasing a lock around that one operation, so a volatile variable behaves like a lock for single reads and writes without actually taking one. VolatileFeaturesA.java above and VolatileFeaturesB.java below therefore have the same semantics (except getAndAdd, which is not atomic in either class, since it locks get and set separately).

public class VolatileFeaturesB {

    // Plain (non-volatile) variable: synchronized provides the memory semantics
    private long vol = 0L;

    /** Write operation, guarded by the monitor. */
    public synchronized void set(long l) {
        vol = l;
    }

    /** Not atomic as a whole: get and set are locked separately. */
    public void getAndAdd() {
        long temp = get();
        temp += 1L;
        set(temp);
    }

    /** Read operation, guarded by the monitor. */
    public synchronized long get() {
        return vol;
    }
}

Partial order

The JVM uses memory barriers to prohibit some, but not all, instruction reordering. The demo code below shows why the resulting order is only partial:

// a and b are normal variables; flag is volatile
int a = 1;                     // code 1
int b = 2;                     // code 2
volatile boolean flag = true;  // code 3
a = 3;                         // code 4
b = 4;                         // code 5

Because flag is volatile, instruction reordering will not move code 3 before codes 1 and 2, nor after codes 4 and 5. However, reordering among codes 1 and 2, or among codes 4 and 5, is not prohibited; for example, code 2 may be moved before code 1.

There are four types of memory barriers.

    1. LoadLoadBarriers

Example sequence: LoadA -> LoadLoad -> LoadB

This barrier ensures that LoadB and all subsequent reads see the data loaded by LoadA, i.e. read LoadA must complete before read LoadB.

    2. StoreStoreBarriers

Example sequence: StoreA -> StoreStore -> StoreB

This barrier ensures that StoreB and all subsequent writes operate on data after StoreA has taken effect, i.e. write StoreA must complete before write StoreB.

    3. LoadStoreBarriers

Example sequence: LoadA -> LoadStore -> StoreB

This barrier ensures that StoreB and all subsequent writes see the data loaded by LoadA, i.e. read LoadA must complete before write StoreB.

    4. StoreLoadBarriers

Example sequence: StoreA -> StoreLoad -> LoadB

This barrier ensures that LoadB and all subsequent reads see the data written by StoreA, i.e. write StoreA must complete before read LoadB.

Principles for achieving ordering:

When a volatile variable is accessed, the four types of memory barriers described above are inserted before or after the access at compile time to prevent instruction reordering:

  • A StoreStore barrier is inserted before a volatile write, ensuring that all normal reads and writes before it complete before the volatile write.
  • A StoreLoad barrier is inserted after a volatile write, ensuring that the written value is flushed to main memory before any subsequent volatile reads or writes.
  • LoadLoad and LoadStore barriers are inserted after a volatile read, ensuring that normal reads and writes after it are not reordered before it, and that the thread refreshes shared variables from main memory into local memory before using them.

Memory barriers for a volatile read: LoadLoad and LoadStore, inserted after the read.

Memory barriers for a volatile write: StoreStore before the write, StoreLoad after it.
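Putting these rules together, here is a conceptual sketch (field names are made up; the comments mark where the barriers are conceptually inserted, in the spirit of the JSR-133 cookbook):

```java
public class BarrierSketch {
    static int a;
    static int b;
    static volatile boolean v;

    static int demo() {
        a = 1;                 // normal write
        // <-- StoreStore barrier before the volatile write
        v = true;              // volatile write
        // <-- StoreLoad barrier after the volatile write
        boolean r = v;         // volatile read
        // <-- LoadLoad and LoadStore barriers after the volatile read
        b = a + (r ? 1 : 0);   // normal read and write, kept after the volatile read
        return b;
    }
}
```

The barriers cost some performance, which is why volatile is cheaper than a lock but not free.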

Usage scenarios

Scenario 1: a status flag, such as a boolean flag marking the completion of an important event. The flag must not depend on any other variable. Demo code:

public class Flag {
    // Whether the task is finished. true: finished, false: not finished
    volatile boolean finishFlag;

    public void finish() {
        finishFlag = true;
    }

    public void doTask() {
        while (!finishFlag) {
            // keep doing the task
        }
    }
}
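A runnable sketch of this flag pattern (the class name and timing values are made up for the demo): the worker spins on the volatile flag and exits promptly once another thread sets it:

```java
public class FlagDemo {
    static volatile boolean finishFlag;

    public static boolean run() {
        Thread worker = new Thread(() -> {
            while (!finishFlag) {
                // keep doing the task; the volatile read cannot be hoisted
            }
        });
        worker.start();
        try {
            Thread.sleep(50);          // let the worker spin briefly
            finishFlag = true;         // immediately visible to the worker
            worker.join(5_000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive();      // true: the worker observed the flag
    }
}
```

If finishFlag were a normal variable, the JIT could legally keep reading a cached value and the worker might spin forever.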

Scenario 2: one-time safe publication, as in the well-known double-checked locking pattern. Scenario 3: low-overhead reads, such as a counter; demo code below.
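A sketch of double-checked locking (the class name is generic): volatile on the instance field prevents the reference from being published before the object is fully constructed:

```java
public class Singleton {
    // volatile forbids reordering of construction and reference publication
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                    // first check: lock-free fast path
            synchronized (Singleton.class) {
                if (instance == null) {            // second check: under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

Without volatile, another thread could observe a non-null but partially constructed instance through the lock-free first check.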


/** A counter. */
public class Counter {
    private volatile int value;

    // Reads need no lock, reducing synchronization overhead and improving
    // performance; volatile guarantees every read sees the latest value
    public int getValue() {
        return value;
    }

    // synchronized guarantees the read-modify-write is atomic
    public synchronized int increment() {
        return value++;
    }
}
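A usage sketch of this counter pattern (the demo class and iteration counts are made up): increments are synchronized so none are lost, while reads stay lock-free:

```java
public class CounterDemo {
    static class Counter {
        private volatile int value;

        public int getValue() {               // lock-free volatile read
            return value;
        }

        public synchronized int increment() { // atomic read-modify-write
            return value++;                   // returns the previous value
        }
    }

    public static int run() {
        final Counter c = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.getValue();                  // always 20000: no lost updates
    }
}
```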