This article is by Coke Coke; for more, see the author's homepage: Coke Coke personal homepage

Personally, I find JUC more mysterious than the JVM. JUC forces us to think about the whole program in a multi-threaded way, which is a real challenge.

Volatile; volatile; volatile

Everyone should know this piece of common sense (and remember it if you don't):

In Java, if a variable is shared by multiple threads, it should be declared volatile.

However! As future senior engineers at big companies, we must not only know this rule but also understand why it holds, so that we can apply it with confidence and take off smoothly.

First, what is volatile?

The Java Language Specification, Third Edition, defines volatile as follows:

The Java programming language allows threads to access shared variables. To ensure that a shared variable is updated accurately and consistently, a thread should normally obtain an exclusive lock on it.

The Java language also provides volatile, which in some cases is more convenient than locking. If a field is declared volatile, the Java memory model ensures that all threads see a consistent value for that variable.

To put it simply:

Volatile does two things (this summary is borrowed from the reference book, and rings true once you have studied it):

  1. Ensure that writes to the variable are visible to all threads through main memory.
  2. Prevent instruction reordering around it.

In the following demo, if we remove volatile, the consumer never finishes; with volatile, it does.

Do not put a print call inside the busy-wait loop: print is a locking method and will flush the cache. See blog.csdn.net/weixin_4449…

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class VolatileTest {

    // Thread pool, used to execute tasks
    ExecutorService executorService;
    int a;
    volatile boolean flag = false;

    public VolatileTest() {
        executorService = new ThreadPoolExecutor(3, 3, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
    }

    public static void main(String[] args) {
        VolatileTest volatileTest = new VolatileTest();
        volatileTest.runConsumer();
        volatileTest.runProd();
    }

    void runProd() {
        Runnable task = () -> {
            // Sleep for a second so the consumer is already waiting before production starts.
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            for (int i = 0; i < 80; i++) {
                a++;
                a += a;
            }
            flag = true;
            // Keep the thread running so it never exits and writes back to main memory on termination
            while (true) { a++; }
        };
        executorService.submit(task);
    }

    void runConsumer() {
        Runnable task = () -> {
            int num = 0;
            // While no update to flag is detected, num++, to verify the wait actually executes
            while (!flag) {
                num++;
            }
            System.out.println(a);
            System.out.println(num);
        };
        executorService.submit(task);
    }
}

The implementation principle of volatile

To explain volatile, we look at it on two levels: its implementation principle and its memory semantics, which correspond to the two points above.

Before reading on, make sure you already have some understanding of the Java memory model (JMM):

Java memory model basics

1. How volatile works

Assuming you have a general idea of the JMM memory abstraction:

Using tooling on an x86 processor, we can dump the assembly instructions generated by the JIT compiler and see what the CPU does when writing a volatile variable.

The Java code is as follows.

instance = new Singleton(); // Instance is volatile

Converted to assembly, it looks like this:

0x01a3de1d: movb $0x0,0x1104800(%esi);
0x01a3de24: lock addl $0x0,(%esp);

When writing to a volatile shared variable, a second line of assembly is added. According to the IA-32 Architecture Software Developer's Manual, the lock prefix causes two things on a multicore processor:

  1. The data in the current processor's cache line is written back to system memory.
  2. That write-back invalidates the same data cached in other CPUs.

When writing to a volatile variable, the JVM sends the processor an instruction with a Lock prefix, which writes the variable's cache line back to system memory.

Even after the write-back, however, other processors may still hold stale copies in their caches. To keep every processor's cache consistent on a multiprocessor system, a cache coherence protocol is implemented.

Each processor sniffs the data propagated on the bus to check whether its cached values have gone stale. When a processor finds that the memory address backing one of its cache lines has been modified, it marks that cache line invalid; the next time it operates on that data, it re-reads it from system memory into its cache.

Implementation principles for volatile

  1. The Lock-prefix instruction causes the processor's cache line to be written back to memory. The Lock prefix causes the processor's LOCK# signal to be asserted while the instruction executes. In a multiprocessor environment, the LOCK# signal gives the processor exclusive use of any shared memory while it is asserted (by locking the bus). On recent processors, however, the LOCK# signal generally does not lock the bus but locks the cache instead, because locking the bus is expensive. On Intel486 and Pentium processors, LOCK# is always asserted on the bus during a lock operation. On P6 and later processors, if the accessed memory region is already cached inside the processor, LOCK# is not asserted; instead, the processor locks the cache line for that memory region and writes it back to memory, relying on the cache coherence mechanism to guarantee the atomicity of the change. This operation is known as "cache locking." Cache coherence prevents two or more processors from simultaneously modifying data in the same cached region of memory.
  2. Writing one processor's cache back to memory invalidates other processors' caches of that data. IA-32 and Intel 64 processors use the MESI (Modified, Exclusive, Shared, Invalid) protocol to keep their internal caches consistent with those of other processors. On a multi-core system, IA-32 and Intel 64 processors can sniff other processors' accesses to system memory and to their internal caches. The processor uses these sniffing techniques to keep its internal cache, system memory, and the data cached by other processors consistent across the bus. For example, on Pentium and P6 family processors, if a processor sniffs that another processor intends to write to a memory address currently in the shared state, it invalidates its own cache line and performs a cache line fill the next time it accesses the same address.

Reference Book: The Art of Concurrent Programming in Java

The two long passages above boil down to:

  1. Volatile writes are prefixed with Lock, so an update to the variable is flushed to main memory as it happens.
  2. Processors monitor the bus (the channel through which data is transferred) by sniffing; when an update to a Lock-prefixed variable they have cached appears on the bus, they mark their cached copy invalid.

2. Memory semantics of volatile (read and write barriers)

To understand this part, you need to understand instruction reordering (covered in the memory model material above); a practical example follows the theory.

Volatile is implemented with read and write barriers, so it not only ensures visibility but also prevents instruction reordering.

Again, the memory semantics of volatile:

  1. Visibility: a read of a volatile variable always sees the last write to that variable (by any thread). In other words, the up-to-date value lives in main memory, not in a stale working cache.
  2. Atomicity: reads and writes of a single volatile variable are atomic, but compound operations such as i++ (a read-modify-write) are not.
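To see that volatile does not make i++ atomic, here is a minimal sketch (the class name and thread counts are mine, not from the article): two threads increment a volatile int and an AtomicInteger side by side; the volatile counter can lose updates, while the AtomicInteger cannot.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileNotAtomic {
    static volatile int volatileCount = 0;                        // visible, but ++ is not atomic
    static final AtomicInteger atomicCount = new AtomicInteger(); // CAS-based atomic increment

    static final int PER_THREAD = 100_000;

    // Runs two threads, each incrementing both counters PER_THREAD times.
    static void runDemo() throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < PER_THREAD; i++) {
                volatileCount++;               // read-modify-write: interleavings can lose updates
                atomicCount.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }

    public static void main(String[] args) throws InterruptedException {
        runDemo();
        // atomicCount is always exactly 200000; volatileCount is often less
        System.out.println("volatile: " + volatileCount + ", atomic: " + atomicCount.get());
    }
}
```

Run it a few times: the atomic counter always lands on 200000, while the volatile one usually falls short, which is exactly the "compound operation" caveat above.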

In terms of memory semantics, a volatile write-read pairing has the same memory effect as a lock release-acquire: a volatile write has the same memory semantics as releasing a lock, and a volatile read has the same memory semantics as acquiring one. (This has held since JDK 5.)

Since JDK 5, then, a volatile write is as effective as a lock for publishing state; the AQS framework relies on volatile together with CAS to implement locking.

We know that statements before a lock release must execute before statements after the matching lock acquisition (otherwise the lock would be meaningless). Volatile write-read establishes the same ordering, which is how it achieves the "no instruction reordering" guarantee.
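The release-acquire effect can be sketched as follows (class and field names are my own illustration): the main thread writes ordinary data and then sets a volatile flag; once the reader thread sees the flag as true, it is guaranteed to see the data written before the flag, exactly as if the writer released a lock and the reader acquired it.

```java
public class VolatilePublish {
    static int data = 0;                  // ordinary field, published via the volatile flag
    static volatile boolean ready = false;

    static int publishAndRead() throws InterruptedException {
        final int[] observed = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile write becomes visible */ }
            observed[0] = data;           // happens-before: guaranteed to see 42
        });
        reader.start();

        data = 42;                        // like a write inside a critical section
        ready = true;                     // volatile write ~ lock release
        reader.join();
        return observed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndRead()); // prints 42
    }
}
```

If `ready` were not volatile, the reader could spin forever or observe a stale `data`; with volatile, the write-read pair orders the two threads.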

3. Volatile memory semantics implementation

The compiler follows a table of reordering rules for volatile during the compile-time reordering phase (the table is given in the reference book).

You can see that

  1. Operations prior to volatile writes are not reordered by the compiler after volatile writes.
  2. Operations after volatile reads are not reordered by the compiler to those before volatile reads.
  3. When the first operation is a volatile write and the second is a volatile read, reordering is not allowed.

We can understand this as volatile placing a barrier before each write and after each read.

For these barriers, the JMM's conservative insertion strategy is as follows:

  • For writes:
  • Insert a StoreStore barrier before each volatile write.
  • Insert a StoreLoad barrier after each volatile write.
  • For reads:
  • Insert a LoadLoad barrier after each volatile read.
  • Insert a LoadStore barrier after each volatile read.
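The insertion points above can be annotated on a tiny example (the class is my own illustration; the comments mark where the JMM's conservative strategy conceptually places each barrier):

```java
public class BarrierSketch {
    int a;              // ordinary field
    volatile int v;     // volatile field

    void writer() {
        a = 1;
        // StoreStore barrier here: the store to a cannot sink below the volatile store
        v = 2;          // volatile write
        // StoreLoad barrier here: the volatile store drains before any later load
    }

    int[] reader() {
        int r1 = v;     // volatile read
        // LoadLoad barrier here: later loads cannot float above the volatile load
        // LoadStore barrier here: later stores cannot float above the volatile load
        int r2 = a;     // if r1 saw 2, this is guaranteed to see a = 1
        return new int[]{r1, r2};
    }

    public static void main(String[] args) {
        BarrierSketch s = new BarrierSketch();
        s.writer();
        int[] r = s.reader();
        System.out.println(r[0] + " " + r[1]); // prints 2 1
    }
}
```

The barriers are not visible in Java source; the comments only mark where the JIT is entitled to emit them on architectures that need them.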

Application – singleton pattern

If this helped, give this kid a triple (like, favorite, share). Arigatou!

Why does the singleton need the volatile keyword?
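As a sketch of the pattern the question refers to: in double-checked locking, without volatile, `instance = new Singleton()` can be reordered (allocate memory, publish the reference, then run the constructor), so another thread may observe a non-null but not-yet-initialized instance.

```java
public class Singleton {
    // volatile forbids reordering "publish the reference" before "finish the constructor"
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                      // first check: lock-free fast path
            synchronized (Singleton.class) {
                if (instance == null) {              // second check: under the lock
                    instance = new Singleton();      // volatile write: safe publication
                }
            }
        }
        return instance;
    }
}
```

The synchronized block alone makes construction mutually exclusive, but only the volatile write makes the fully constructed object safely visible to threads taking the fast path.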