Memory visibility

Volatile is a lightweight synchronization mechanism provided by Java that plays an important role in concurrent programming. It is lighter than synchronized (which is often described as a heavyweight lock), and when used properly and wisely it can spare you much of the overhead that synchronized brings.

To get a clear and thorough understanding of volatile, let’s walk through it step by step, starting with the following code:

```java
public class TestVolatile {
    boolean status = false;

    public void changeStatus() {
        status = true;
    }

    /**
     * If status is true, print "running....".
     */
    public void run() {
        if (status) {
            System.out.println("running....");
        }
    }
}
```

In this example, in a multi-threaded environment, if thread A executes changeStatus() and thread B then executes run(), is it guaranteed that “running….” will be printed?

The answer is NO!

This conclusion is understandably confusing. In a single-threaded model, if you run the changeStatus method first and then the run method, “running….” is of course printed correctly; but in a multi-threaded model, no such guarantee can be made, because for the shared variable status, the change made by thread A is “invisible” to thread B. That is, thread B may never observe that status has been changed to true. So what is visibility?

Visibility means that when one thread changes the value of a shared variable, the new value is immediately visible to other threads. Obviously, there is no way to achieve memory visibility in the above example.

Java memory model

To understand why this is the case, we need to look at the JMM (Java Memory Model).

The Java virtual machine has its own memory model, the Java Memory Model (JMM). The JMM masks the differences in memory access between different hardware and operating systems, so that Java programs achieve consistent memory access behavior across platforms.

The JMM determines when a thread’s write to a shared variable becomes visible to another thread. It defines an abstract relationship between threads and main memory: shared variables are stored in main memory, and each thread has a private local memory that holds that thread’s copies of the shared variables it uses. Each thread reads and writes its own local copies, which are then synchronized with main memory.

It is important to note that the JMM is an abstract memory model, so local memory and main memory are abstract concepts that do not necessarily correspond to the actual CPU caches and physical memory. For the purpose of understanding, though, that analogy is fine.

With this simple picture of the JMM, the problem is easy to understand. For an ordinary shared variable such as status, thread A changes it to true in thread A’s local memory, and the new value has not yet been synchronized to main memory. Meanwhile, thread B has cached the initial value false of status and may never observe that its value has changed, which causes the problem above. So how do we deal with this invisibility of shared variables in a multi-threaded model? The crude approach is of course synchronized, but synchronized or Lock is too heavyweight here. The more fitting tool is volatile.

Volatile has two properties. The first is to ensure that shared variables are visible to all threads. Declaring a shared variable volatile has the following effects:

1. When a volatile variable is written, the JMM forces the variable in the thread’s local memory to be flushed to main memory.

2. This write also invalidates the copies of the variable cached in other threads’ local memory.

In the above example, simply declaring status as volatile ensures that thread B immediately sees the change when thread A sets it to true:

```java
volatile boolean status = false;
```
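As a sketch of what this visibility guarantee buys, here is a minimal, self-contained demo (the class and field names `data` and `ready` are my own, not from the original article): a writer thread sets an ordinary field and then sets a volatile flag, and a reader thread spins on the flag. Because the volatile write happens-before the volatile read that observes it, the reader is guaranteed to see data == 42.

```java
public class VisibilityDemo {
    static int data = 0;               // ordinary, non-volatile field
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;      // ordinary write...
            ready = true;   // ...published by the volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }          // spin until the volatile read sees true
            System.out.println(data);   // guaranteed to print 42
        });
        reader.start();
        writer.start();
        try {
            writer.join();
            reader.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Note that without volatile on ready, the reader might spin forever or print a stale value of data; the volatile flag is what carries the ordinary write along with it.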

Watch out for compound class operations

It is important to note, however, that we keep comparing volatile with synchronized only because the two keywords share some memory semantics. Volatile is not a complete replacement for synchronized; it is still only a lightweight lock, and in many scenarios volatile cannot do the job. Take a look at this example:

```java
package test;

import java.util.concurrent.CountDownLatch;

/**
 * Created by chengxiao on 2017/3/18.
 */
public class Counter {
    public static volatile int num = 0;
    // Used to wait until all 30 threads have finished
    static CountDownLatch countDownLatch = new CountDownLatch(30);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 30; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 10000; j++) {
                        num++;
                    }
                    countDownLatch.countDown();
                }
            }.start();
        }
        countDownLatch.await(); // wait for all threads to finish
        System.out.println(num);
    }
}
```

Execution result:

```
224291
```

Some readers might be puzzled by this example: if volatile guarantees the visibility of the shared variable, shouldn’t the result be 300,000?

The problem lies in num++. num++ is not an atomic operation but a compound operation, which we can break down into three steps:

1. Read the current value of num.

2. Add one.

3. Write the new value back.

Therefore, in a multi-threaded environment, thread A may read num into its local memory, and by the time it finishes, other threads may already have increased num many times; thread A nevertheless adds one to its stale value of num and writes that back to main memory. As a result, num ends up less than the expected 300,000.
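The lost update above can be made concrete with a deterministic, single-threaded simulation of the bad interleaving (the variable names readByA and readByB are my own, standing in for the two threads' local copies):

```java
public class LostUpdate {
    public static void main(String[] args) {
        int num = 0;
        // Both "threads" read the same stale value before either writes back.
        int readByA = num;  // thread A reads 0
        int readByB = num;  // thread B also reads 0, before A writes back
        num = readByA + 1;  // A writes 1
        num = readByB + 1;  // B also writes 1, overwriting A's increment
        System.out.println(num); // 1, not 2: one increment was lost
    }
}
```

Two increments ran, but the final value is 1. Volatile makes each read see the latest committed value, but it cannot stop another thread from sneaking in between the read and the write-back.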

Resolving the atomicity problem of num++

For compound operations such as num++, you can use the atomic classes in the java.util.concurrent.atomic package. These classes guarantee atomicity through a CAS (compare-and-swap) loop.
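To illustrate what "a CAS loop" means, here is a sketch that re-implements an atomic increment on top of AtomicInteger's compareAndSet (the helper name casIncrement is my own; the real incrementAndGet is implemented natively inside the JDK, so treat this only as a model of the idea):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Model of an atomic increment built from a compare-and-swap loop:
    // keep retrying until no other thread changed the value between
    // our read and our write.
    static int casIncrement(AtomicInteger value) {
        while (true) {
            int current = value.get();       // read
            int next = current + 1;          // add one
            if (value.compareAndSet(current, next)) {
                return next;                 // write succeeded atomically
            }
            // otherwise another thread won the race; loop and retry
        }
    }

    public static void main(String[] args) {
        AtomicInteger n = new AtomicInteger(0);
        System.out.println(casIncrement(n)); // 1
        System.out.println(casIncrement(n)); // 2
    }
}
```

The key point is that compareAndSet only writes if the value is still what we read, which closes the read-modify-write gap that broke the volatile counter.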

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Created by chengxiao on 2017/3/18.
 */
public class Counter {
    public static AtomicInteger num = new AtomicInteger(0);
    // Used to wait until all 30 threads have finished
    static CountDownLatch countDownLatch = new CountDownLatch(30);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 30; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 10000; j++) {
                        num.incrementAndGet(); // atomic num++, via a CAS loop
                    }
                    countDownLatch.countDown();
                }
            }.start();
        }
        countDownLatch.await(); // wait for all threads to finish
        System.out.println(num);
    }
}
```

Execution result:

```
300000
```

The underlying principles of the atomic classes will be described in a later section and are not covered here.

Disallowing instruction reordering

Volatile also has the property of disallowing instruction reordering optimization.

Reordering is a means by which compilers and processors rearrange instruction sequences to optimize program performance. But reordering follows certain rules:

1. Operations with data dependencies are never reordered.

For example: a = 1; b = a;. Since the second operation depends on the first, this sequence will not be reordered, either at compile time or at processor run time.

2. The purpose of reordering is to optimize performance, but no matter how instructions are reordered, the result of the program in a single thread must not change.

For example: a = 1; b = 2;. The first operation (a = 1) and the second (b = 2) may be reordered because there is no data dependence between them, but the operation c = a + b will not be moved before them, because the final result must still be c = a + b = 3.
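The single-threaded rule above can be checked directly; no matter how the two independent writes are ordered by the compiler or CPU, the dependent sum is computed after both:

```java
public class ReorderRules {
    public static void main(String[] args) {
        int a = 1;      // may be reordered with the next line (no dependence)
        int b = 2;
        int c = a + b;  // depends on both writes above, so it stays after them
        System.out.println(c); // always 3 in a single thread
    }
}
```

This is exactly what rule 2 promises: within one thread, reordering is invisible.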

Reordering is guaranteed to be correct in single-threaded mode, but in a multi-threaded environment it can cause problems. Let’s modify the first TestVolatile example by adding a shared variable a:

```java
public class TestVolatile {
    int a = 1;
    boolean status = false;

    public void changeStatus() {
        a = 2;          // 1
        status = true;  // 2
    }

    /**
     * If status is true, print b.
     */
    public void run() {
        if (status) {           // 3
            int b = a + 1;      // 4
            System.out.println(b);
        }
    }
}
```

If thread A executes changeStatus and thread B executes run, can we be sure that at step 4, b equals 3?

Again, the answer is no! b could still be 2. As mentioned above, the compiler and processor may reorder instructions to improve parallelism, and instructions 1 and 2 in the example may be reordered because there is no data dependence between them: status = true executes first, then a = 2. If thread B reaches step 4 before thread A has executed a = 2, then b = a + 1 may still evaluate to 2.

Declaring the shared variable with the volatile keyword disables this kind of reordering. If a shared variable is volatile, memory barriers are inserted into the instruction sequence at compile time to prevent particular kinds of processor reordering.

Volatile’s rules for prohibiting instruction reordering can be summarized briefly:

1. When the second operation is a volatile write, no reordering is allowed, no matter what the first operation is.

2. When the first operation is a volatile read, no reordering is allowed, no matter what the second operation is.

3. When the first operation is a volatile write and the second operation is a volatile read, no reordering is allowed.
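A classic, widely cited use of this no-reordering property (not from the original article, added here as an illustration) is double-checked locking for a lazy singleton. Without volatile, the steps of new Singleton() (allocate memory, run the constructor, publish the reference) could be reordered, letting another thread see a half-constructed object at the first, lock-free check:

```java
public class Singleton {
    // volatile forbids reordering the publish of the reference
    // before the constructor has finished running.
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, lock-free fast path
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // Every caller gets the same fully constructed instance.
        System.out.println(Singleton.getInstance() == Singleton.getInstance());
    }
}
```

Here volatile is used purely for its ordering guarantee; the mutual exclusion still comes from synchronized.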

Conclusion:

To summarize, volatile is a lightweight synchronization mechanism with two main features: first, it ensures that shared variables are visible to all threads; second, it forbids instruction reordering optimizations. Note also that volatile makes reads and writes of a single shared variable atomic, but compound operations such as num++ are not atomic. The solution is to use the atomic classes in the java.util.concurrent.atomic package, which guarantee the atomicity of operations like num++ through a CAS loop.
