I did not fully understand the volatile section on my first reading of "Understanding the Java Virtual Machine", so I recently went back to study it and record my understanding.
To understand volatile, you need to understand the following concepts:
1. JMM memory model
2. The three problems of concurrent programming: atomicity, visibility and ordering
3. The happens-before principle
Then we will look at how volatile is used, in light of the points above.
JMM memory model
The Java memory model dictates that all variables are stored in main memory (similar to physical memory) and that each thread has its own working memory (similar to a cache). All operations a thread performs on variables must be done in its working memory, not directly in main memory, and a thread cannot access the working memory of other threads.
So why is the JMM designed this way? There are two main reasons: 1. to achieve a consistent memory-access effect across different platforms; 2. to match the way CPU caches are used to speed up data access. When a computer runs a program, every instruction is executed in the CPU, and executing instructions inevitably involves reading and writing data. The temporary data a program works on lives in main memory (physical memory), and here a problem arises: the CPU executes instructions very quickly, while reading from and writing to main memory is much slower by comparison. If every data operation had to go through main memory, instruction execution would slow down dramatically. That is why CPUs have caches.
For this article, the JMM helps us understand why visibility problems occur.
Three problems of concurrent programming: atomicity, visibility and ordering
Atomicity problem
Atomicity means that an operation cannot be interrupted, and no other operation can be interleaved into it.
Take i++ as an example: at the JVM level it consists of three operations: read i, add 1, and write the new value back. If i++ is executed concurrently, these three operations from different threads may be interleaved with one another. Atomicity would mean that while one thread is running these three operations, no other thread's operations can be interleaved.
How to solve the atomicity problem: 1. synchronized 2. Lock and other locks
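A minimal sketch of the atomicity problem and one way to fix it (the class name, loop count, and thread count are illustrative, not from the original post): two threads each increment a shared counter 10,000 times; the unsynchronized counter usually ends up below 20,000 because increments are lost, while the lock-protected one always reaches 20,000.

public class AtomicityDemo {
    static int unsafeCounter = 0;                 // plain int: i++ races can lose updates
    static int safeCounter = 0;                   // guarded by the lock below

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                unsafeCounter++;                  // read, add 1, write back: steps can interleave
                synchronized (AtomicityDemo.class) {
                    safeCounter++;                // the lock makes the three steps atomic
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("unsafe: " + unsafeCounter); // usually less than 20000
        System.out.println("safe:   " + safeCounter);   // always 20000
    }
}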
Visibility problem
Each thread has its own working memory (a cache; see the JMM section). After thread A changes the value of a variable, thread B may still retrieve the variable from its own working memory and therefore see the value from before thread A's change.
How to solve the visibility problem:
1. the volatile keyword 2. Lock, synchronized
Ordering problem
Let's first look at what reordering is: to improve execution efficiency, the processor (and compiler) may optimize the code it is given. It does not guarantee that statements execute in exactly the order they appear in the source, but it does guarantee that the final result of execution is consistent with the result of executing the code in source order. Reordering does not affect single-threaded execution, but it can affect the correctness of multi-threaded concurrent execution. If the program does not satisfy the happens-before principle, instruction reordering may occur.
Instruction reordering can therefore change the order in which a program's effects become visible to other threads.
How to solve the ordering problem: 1. the volatile keyword 2. Lock, synchronized
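To make the ordering (and visibility) problem concrete, here is a minimal sketch; the class name, the value 42, and the two-thread setup are illustrative, not from the original post. Without volatile on ready, the writer's two assignments may be reordered or never published, so the reader could spin forever or print 0; with volatile, the reader is guaranteed to print 42.

public class ReorderingDemo {
    static int data = 0;
    static volatile boolean ready = false;   // remove volatile and the guarantees below disappear

    public static void main(String[] args) {
        new Thread(() -> {
            while (!ready) { }               // wait until the writer publishes
            System.out.println(data);        // with volatile ready: guaranteed to print 42
        }).start();

        new Thread(() -> {
            data = 42;                       // ordinary write
            ready = true;                    // volatile write: happens-before the read above
        }).start();
    }
}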
Regarding the three problems above, volatile can only solve the visibility and ordering problems; it cannot solve the atomicity problem, which still requires locking.
The happens-before principle
The happens-before principle is the primary basis for determining whether there is a data race and whether code is thread-safe. It lets you reason about concurrency safety instead of guessing at it.
Here are the "natural" happens-before relationships in the Java memory model; they hold without the help of any synchronizer. Java provides these rules out of the box, and they can be relied on directly when coding. If the relationship between two operations is not in this list and cannot be deduced from these rules, their order is not guaranteed and the virtual machine may reorder them at will.
- Program order rule: Within a single thread, actions happen-before subsequent actions according to control-flow order (taking branch and loop structures into account).
- Monitor lock rule: An unlock action on a monitor lock happens-before every subsequent lock action on the same monitor lock.
- Volatile variable rule: A write to a volatile field happens-before every subsequent read of the same field.
- Thread start rule: A call to Thread.start() happens-before every action in the started thread.
- Thread termination rule: All actions in a thread happen-before another thread detects that the thread has terminated, for example by Thread.join() returning successfully or Thread.isAlive() returning false (see the small sketch after this list).
- Thread interruption rule: A call to Thread.interrupt() happens-before the interrupted thread detects the interrupt (via a thrown InterruptedException, or by calling isInterrupted() or interrupted()).
- Finalization rule: The end of an object’s constructor happens-before the start of the object’s finalizer
- Transitivity: If A happens-before B and B happens-before C, then A happens-before C.
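As referenced in the thread termination rule above, here is a small sketch of the thread start and termination rules; the class and field names are illustrative. A write made before start() is visible inside the new thread, and a write made inside the thread is visible to the caller once join() returns, without any volatile or lock.

public class StartJoinDemo {
    static int beforeStart = 0;    // written before start(), read inside the thread
    static int insideThread = 0;   // written inside the thread, read after join()

    public static void main(String[] args) throws InterruptedException {
        beforeStart = 1;                       // happens-before every action in t (start rule)
        Thread t = new Thread(() -> {
            System.out.println(beforeStart);   // guaranteed to print 1
            insideThread = 2;                  // happens-before join() returning (termination rule)
        });
        t.start();
        t.join();
        System.out.println(insideThread);      // guaranteed to print 2
    }
}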
Some of the more important and difficult to understand are:
Program order rule
Within a single thread, the execution of a piece of code appears to be ordered. Although this rule says that "operations written earlier happen before operations written later", it really means that the program appears to execute in code order, because the virtual machine may still reorder the code. Even when reordering happens, the final result is the same as if the program had executed sequentially, and only instructions with no data dependence on each other are reordered. So the key point is that program execution merely appears ordered within a single thread: this rule guarantees the correctness of single-threaded execution, but not of multi-threaded execution.
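A tiny illustrative fragment of the data-dependence point (the names are made up for the example): independent statements may be reordered, dependent ones may not, and in a single thread the printed result is the same either way.

public class ProgramOrderDemo {
    public static void main(String[] args) {
        int a = 1;
        int b = 2;       // independent of a: the compiler/CPU may reorder these two writes
        int x = 1;
        int y = x + 1;   // depends on x: cannot be moved before x = 1
        System.out.println(a + b + x + y);   // single-threaded result is unaffected by reordering
    }
}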
Monitor lock rule
An unlock operation happens-before a later lock operation on the same lock. In other words, whether within a single thread or across multiple threads, a lock must be released before it can be acquired again.
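A minimal sketch of the monitor lock rule, with illustrative class and field names: everything the writer thread stores before releasing the lock is visible to the reader thread once it acquires the same lock.

public class MonitorRuleDemo {
    private static final Object lock = new Object();
    private static int shared = 0;    // plain field, published only via the lock

    static void writer() {
        synchronized (lock) {
            shared = 42;              // write inside the critical section
        }                             // this unlock happens-before the next lock on `lock`
    }

    static void reader() {
        synchronized (lock) {
            System.out.println(shared); // sees 42 if writer() released the lock first
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread w = new Thread(MonitorRuleDemo::writer);
        w.start();
        w.join();                     // ensure the writer has released the lock before the reader runs
        new Thread(MonitorRuleDemo::reader).start();
    }
}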
Volatile variable rule
A write to a volatile variable happens-before (and occurs earlier in time than) any subsequent read of it. If volatile variable v is written by thread 1 and then read by thread 2, then both thread 1's write to v and all of thread 1's earlier writes are visible to thread 2 (thread 1 and thread 2 can even be the same thread). You can think of this as a description of how volatile solves the visibility problem.
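A small sketch of the volatile rule combined with transitivity (the field names are illustrative): the plain write to config happens-before the volatile write to published, which happens-before the volatile read, so a reader that sees published == true is guaranteed to also see config.

public class VolatileRuleDemo {
    static String config = null;               // plain field
    static volatile boolean published = false; // volatile flag used to publish config

    static void writerThread() {
        config = "loaded";                     // 1. plain write
        published = true;                      // 2. volatile write: happens-before any later read of published
    }

    static void readerThread() {
        if (published) {                       // 3. volatile read
            System.out.println(config);        // guaranteed "loaded" (1 -> 2 -> 3 by transitivity)
        }
    }

    public static void main(String[] args) {
        new Thread(VolatileRuleDemo::readerThread).start();
        new Thread(VolatileRuleDemo::writerThread).start();
    }
}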
To sum up, the happens-before principle helps us determine two things:
1. whether a program is thread-safe
2. whether instruction reordering may occur in a program
Using volatile
With this knowledge in mind, how can volatile be used correctly?
1. One writer, many readers
With volatile, only one thread should change the value of the variable, while one or more threads can concurrently read it.
Typical uses: scenarios such as signaling a thread to stop, or acting as a status flag.
Because volatile guarantees visibility, it supports one writer with many readers; but because volatile cannot solve the atomicity problem, it does not support multiple concurrent writers.
public class VolatileDemo {
    // If flag is not volatile, the program may never exit.
    public static volatile boolean flag = false;

    public static void main(String[] args) {
        new Thread(() -> {
            while (!flag) { }                 // spin until another thread sets flag
            System.out.println("exit");
        }).start();
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("setup");
        flag = true;
        // Special note: when I tested with a non-volatile flag, the while (!flag) {} loop kept
        // running after the sleep; it very likely never saw the changed value and kept reading
        // the cached false from its working memory.
    }
}
2. Preventing instruction reordering
volatile prevents instruction reordering; a typical use is the double-checked locking version of the lazy-initialization singleton pattern.
public class LazySingleton {
    private volatile static LazySingleton lazySingleton = null;

    private LazySingleton() {
    }

    public static LazySingleton getInstance() {
        if (lazySingleton == null) {                   // first check, without the lock
            synchronized (LazySingleton.class) {       // lock on the class object
                if (lazySingleton == null) {           // second check, under the lock
                    lazySingleton = new LazySingleton();
                }
            }
        }
        return lazySingleton;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            new Thread(() -> System.out.println(LazySingleton.getInstance().hashCode())).start();
        }
    }
}
Let's look at why omitting volatile can cause an instruction reordering problem:
First of all, the probability of hitting this problem is low, and the JDK 8 decompilation I did does not exactly match what the original post describes; let's first lay out the post's explanation of the principle:
instance = new LazySingleton(); is actually converted by the JVM into multiple instructions:

memory = allocate();      // 1: allocate memory for the object
ctorInstance(memory);     // 2: initialize the object
instance = memory;        // 3: point instance at the allocated memory

After reordering, instructions 2 and 3 may be swapped:

memory = allocate();      // 1: allocate memory for the object
instance = memory;        // 3: point instance at the allocated memory before the object is initialized
ctorInstance(memory);     // 2: initialize the object
Suppose thread A holds the lock and is executing lazySingleton = new LazySingleton(); after reordering, instance = memory runs before ctorInstance(memory). Once instance = memory has completed but initialization has not, thread B evaluates the outer if (lazySingleton == null) check, sees a non-null reference, and returns an uninitialized object, producing an incorrect result.
A complaint: this singleton idiom really is quite contrived. An analysis of how this lazy double-checked locking idiom evolved and the problems with it can be found at gitee.com/zxporz/zxp-…