Java memory model
The Java memory model attempts to mask the differences in memory access across different hardware and operating systems, so that Java programs achieve consistent memory access behavior on every platform.
The processor's registers read and write several orders of magnitude faster than main memory, so caches are inserted between the two to bridge this speed gap. When multiple caches map to the same region of main memory, the data in those caches can become inconsistent, and a cache coherence protocol is required to resolve the conflicts.
All variables are stored in main memory, and each thread has its own working memory (backed by caches and registers) that holds copies of the main memory variables used by that thread. A thread can only operate directly on the variables in its working memory; variable values are passed between threads through main memory.
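To make the working-memory abstraction concrete, here is a minimal sketch (class and field names are made up for illustration, not from the original text) of what can go wrong when two threads communicate through a plain shared field: the reader may keep using the copy in its own working memory and never observe the update.

```java
// Without any synchronization, the reader thread may keep using its
// working-memory copy of `running` and never see the writer's update,
// so the loop below is allowed to spin forever under the Java memory model.
public class StaleReadSketch {
    private static boolean running = true; // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait; the JIT may hoist the read of `running` out of the loop
            }
            System.out.println("reader observed running == false");
        });
        reader.start();

        Thread.sleep(1000);
        running = false; // update may stay invisible to the reader for a long time
        System.out.println("main set running = false");
        reader.join();
    }
}
```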
Interaction between main memory and working memory
The Java memory model defines eight operations for moving data between main memory and working memory (a conceptual sketch follows the list below).
- Read: transfers the value of a variable from main memory to working memory
- Load: runs after read and puts the value obtained by read into the variable copy in working memory
- Use: passes the value of a variable in working memory to the execution engine
- Assign: assigns a value received from the execution engine to a variable in working memory
- Store: transfers the value of a variable in working memory to main memory
- Write: runs after store and puts the value obtained by store into the variable in main memory
- Lock: marks a variable in main memory as exclusively locked by one thread
- Unlock: releases a locked variable in main memory so that other threads can lock it
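The sketch below is purely conceptual (the JVM does not expose these operations to Java code, and the class and field names are illustrative); its comments indicate roughly which of the eight operations a simple read-modify-write of a shared field corresponds to.

```java
// Rough conceptual mapping only; the exact sequence is up to the JVM.
public class InteractionSketch {
    private static int shared = 0;

    public static void increment() {
        // read  : transfer the value of `shared` from main memory to working memory
        // load  : place that value into the working-memory copy of `shared`
        // use   : pass the copy's value to the execution engine for the `+ 1`
        int r = shared + 1;
        // assign: give the result from the execution engine back to working memory
        // store : transfer the new value from working memory to main memory
        // write : put that value into the `shared` variable in main memory
        shared = r;
        // lock/unlock would only come into play if this method were synchronized
    }
}
```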
Three main features of the memory model
1. Atomicity
Atomicity means that an operation either executes completely or does not execute at all; it cannot be observed half-done. The Java memory model guarantees atomicity for the eight operations described above, but it allows the virtual machine to split reads and writes of non-volatile 64-bit values (long and double) into two 32-bit operations, so read, load, store, and write are not required to be atomic for such values. The JMM also only guarantees atomicity for these individual operations: a statement like i++ is actually three steps (read i, increment it, write the result back) and is not atomic as a whole. To make such compound operations atomic, use the atomic classes, or use a synchronized mutex to guarantee the atomicity of the whole operation.
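Here is a small sketch of the i++ problem described above (class and counter names are illustrative): two threads increment a plain int and an AtomicInteger the same number of times; the plain counter usually loses updates, while the atomic one does not.

```java
import java.util.concurrent.atomic.AtomicInteger;

// `unsafeCount++` is read / increment / write-back, so concurrent updates can be
// lost; AtomicInteger makes the whole read-modify-write atomic. A synchronized
// method around the increment would work as well.
public class AtomicitySketch {
    private static int unsafeCount = 0;
    private static final AtomicInteger safeCount = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                unsafeCount++;               // not atomic: updates may be lost
                safeCount.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("unsafeCount = " + unsafeCount + " (often less than 20000)");
        System.out.println("safeCount   = " + safeCount.get() + " (always 20000)");
    }
}
```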
2. Visibility
Visibility means that when one thread changes the value of a shared variable, other threads become aware of the change immediately. The Java memory model implements visibility by synchronizing the new value back to main memory after the variable is modified and refreshing the value from main memory before the variable is read. Java provides three main ways to achieve visibility:
- volatile
- synchronized: the value of a variable must be synchronized back to main memory before unlock can be performed.
- final: a field declared final becomes visible to other threads as soon as the constructor finishes initializing it, provided no this escape occurs (that is, other threads cannot reach the half-initialized object through the this reference).
Volatile variables get the latest data every time they are read, and any changes made to the variable by any thread are flushed to main memory immediately. But the volatile keyword does not guarantee atomicity.
Synchronized and explicit locks can also ensure visibility, because they prevent other threads from accessing the shared variable until the lock is released; however, they are more expensive than volatile.
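As a sketch of these visibility guarantees (again with made-up names), the example below uses a volatile flag so the writer's update becomes visible to the reader promptly, avoiding the kind of stale read shown in the first sketch; a synchronized getter/setter pair would give the same visibility at a higher cost.

```java
public class VisibilitySketch {
    private static volatile boolean ready = false; // volatile: writes are flushed to main memory

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // each iteration re-reads `ready` from main memory
            }
            System.out.println("reader saw ready == true");
        });
        reader.start();

        Thread.sleep(500);
        ready = true; // becomes visible to the reader thread promptly
        reader.join();
    }
}
```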
3. Ordering
Suppose we have three statements A, B, and C. Within a single thread the operations appear ordered, executing as A -> B -> C. The JMM, however, allows instructions to be reordered to improve overall efficiency, as long as the end result is the same as sequential execution of the code. Reordering causes no problems within a single thread, but in a multithreaded program it can lead to data inconsistency.
In Java, the volatile keyword can be used to ensure ordering, as can synchronized and explicit locks.
The volatile keyword prevents instruction reordering by inserting memory barriers: instructions that come after a barrier cannot be reordered to before it.
Ordering is also guaranteed by synchronized and locks, which ensure that only one thread executes the synchronized code at any given time, effectively forcing threads to execute that code sequentially.
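The classic double-checked-locking singleton (a standard illustration, not taken from the original text) shows why both mechanisms matter: the volatile field prevents the reference from being published before the constructor finishes, and the synchronized block makes the initialization mutually exclusive.

```java
// Without volatile, the write `instance = new Singleton()` may be reordered so
// that the reference is published before the constructor finishes, letting
// another thread observe a half-constructed object. volatile forbids that reordering.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no locking
            synchronized (Singleton.class) {
                if (instance == null) {          // second check under the lock
                    instance = new Singleton();  // volatile write: safe publication
                }
            }
        }
        return instance;
    }
}
```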
The JVM ensures ordering through happens-before
In addition to using volatile and synchronized to ensure ordering, the JVM has a built-in happens-before principle: if one operation happens-before another, its result is guaranteed to be visible to the other without any explicit synchronization. The rules are listed below, with a small example after the list.
- Program order rule: within a thread, operations earlier in program order happen-before later operations.
- Monitor lock rule: an unlock operation happens-before a subsequent lock operation on the same lock.
- Volatile variable rule: a write to a volatile variable happens-before subsequent reads of that variable, so reads always see the latest value.
- Thread start rule: a call to start() on a Thread object happens-before every action in the started thread.
- Thread join rule: all actions in a thread happen-before the join() on that thread returns.
- Thread interruption rule: a call to interrupt() on a thread happens-before the interrupted thread detects the interrupt (for example via the interrupted() method).
- Object finalization rule: the completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method.
- Transitivity: if operation A happens-before operation B and operation B happens-before operation C, then operation A happens-before operation C.
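As a small sketch of the thread start and join rules (names are illustrative), the example below relies only on happens-before, with no volatile or locking, and is still guaranteed to print the expected value.

```java
// The write to `data` before start() happens-before everything in the new thread,
// and everything in the thread happens-before join() returning, so neither read
// needs volatile or locking to be correct.
public class HappensBeforeSketch {
    private static int data = 0;
    private static int result = 0;

    public static void main(String[] args) throws InterruptedException {
        data = 42; // happens-before the actions of `worker` (thread start rule)

        Thread worker = new Thread(() -> {
            result = data + 1; // guaranteed to read 42
        });
        worker.start();
        worker.join(); // all of worker's actions happen-before join() returns

        System.out.println(result); // guaranteed to print 43
    }
}
```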