A note before we begin: learning is the process of digesting and understanding knowledge, but it is often only when we restate that knowledge in our own words that gaps in logic and missing details become apparent. What follows is purely a personal summary of my theoretical understanding, and I cannot guarantee its correctness or accuracy. If you find any problems in my understanding and kindly point them out, I would be very grateful. Thank you.

A simple understanding of Java’s memory model

First, to alleviate the huge gap between CPU computation speed and main-memory access speed, modern high-speed processors introduce caches as buffers whose read and write speeds are as close as possible to the CPU's processing speed. Data to be operated on is first copied into the cache, and the result is written back to main memory after the operation completes.

Second, in the JVM's memory structure, the method area and the heap are visible to and operable by all threads, meaning that multiple threads can read and write data in these areas simultaneously. Data stored in these areas is referred to as shared variables.
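To make the distinction concrete, here is a minimal sketch (the class and field names are my own, purely illustrative): static and instance fields live in the method area / heap and are shared, while a local variable lives on each thread's stack and is private to that thread.

```java
public class SharedVsLocal {
    // Stored in the method area / heap: visible to every thread (a shared variable)
    static int sharedCounter = 0;

    // Instance fields live on the heap as well, so they are shared
    // once the object becomes reachable from more than one thread
    int instanceField = 0;

    void work() {
        // Local variable: lives on the calling thread's stack,
        // so each thread gets its own independent copy
        int localCounter = 0;
        localCounter++;       // never visible to other threads
        sharedCounter++;      // a read-modify-write on a shared variable (not atomic!)
    }
}
```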

Third, a thread is the smallest unit of CPU scheduling. On processors with caches, a thread reading or writing a shared variable actually operates on the data held in the CPU cache, not directly on the shared variable in main memory. Only after the operation completes is the result written back to main memory, finally updating the value of the shared variable that other threads can observe.

Based on the above mechanisms, the JMM abstracts the memory model as follows: Java shared variables are stored in main memory, and each thread has its own working memory. At runtime, a thread copies a shared variable from main memory into its own working memory, performs all subsequent reads and writes on that working-memory copy, and writes the result back to main memory at some later point. The Java memory model is thus an abstraction over how reads and writes flow between caches and main memory in a multiprocessor system; it has no direct physical implementation. By addressing atomicity, visibility, and ordering under concurrency, the Java memory model standardizes and defines how the virtual machine interacts with memory.
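The working-memory abstraction explains why one thread can keep reading a stale value of a shared variable. The sketch below (my own example) shows a reader loop that may never observe another thread's write unless the flag is declared volatile, because the reader may keep using the copy in its own working memory.

```java
public class VisibilityDemo {
    // Without volatile, the reader thread may keep using the stale copy of this
    // flag in its working memory and spin forever. Declaring it volatile forces
    // reads and writes to go through main memory.
    static boolean stop = false;   // try: static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait: may never terminate if the write to 'stop'
                // is never refreshed from main memory
            }
            System.out.println("reader observed stop = true");
        });
        reader.start();

        Thread.sleep(1000);
        stop = true;               // write performed by the main thread
        System.out.println("main set stop = true");
    }
}
```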

Reordering

To maximize instruction-level parallelism, the compiler reorders instructions at compile time and the processor reorders them at execution time.

Both compiler reordering and processor reordering respect data dependence: two operations that operate on the same data, where at least one of them is a write, cannot be reordered with respect to each other.

Reordering also obeys as-if-serial semantics within a thread: no matter how instructions are reordered, the result of execution in a single thread must be the same as if they had run in program order (see the sketch below).
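A minimal illustration (my own example): the first two assignments have no data dependence on each other, so the compiler or processor is free to swap them, but neither may be moved after the third statement, which depends on both. Either ordering produces the same single-threaded result, which is exactly what as-if-serial guarantees.

```java
public class AsIfSerialDemo {
    static int demo() {
        int a = 1;          // A: no dependence on B
        int b = 2;          // B: no dependence on A -> A and B may be reordered
        int c = a + b;      // C: depends on both A and B -> cannot be moved before them
        return c;           // always 3 in this thread, regardless of how A and B are ordered
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 3
    }
}
```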

The Java memory model's rules for lock/unlock operations and for volatile variables also constrain reordering: volatile reads and writes, as well as monitor enter and exit, insert memory barriers that guarantee visibility and forbid certain reorderings across them.
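The classic illustration is double-checked locking (sketched below with a hypothetical class name): without volatile, the write that publishes the reference and the initialization of the object may be reordered, so another thread could observe a non-null but not-yet-initialized instance; declaring the field volatile inserts the barriers that forbid this reordering.

```java
public class Singleton {
    // volatile forbids reordering between "construct the object" and
    // "publish the reference", so readers never see a half-built instance
    private static volatile Singleton instance;

    private Singleton() { /* expensive initialization */ }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no locking
            synchronized (Singleton.class) {    // monitor enter/exit also acts as a barrier
                if (instance == null) {         // second check under the lock
                    instance = new Singleton(); // safe publication thanks to volatile
                }
            }
        }
        return instance;
    }
}
```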

happens-before

Also translated as the "antecedent" principle, happens-before defines a partial order over operations on data in multi-threaded scenarios, and its rules are used to determine whether access to data is thread safe. If operation A happens-before operation B, then the result of operation A is observable by operation B.

The happens-before rules are:

1. Program order rule;
2. Monitor lock rule;
3. Volatile variable rule;
4. Thread start rule;
5. Thread interruption rule;
6. Thread termination rule;
7. Object finalization rule;
8. Transitivity.
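As a small example (names are my own), the program order rule, the thread start rule, and transitivity together guarantee that a write made before calling start() is visible inside the started thread, without any extra synchronization:

```java
public class HappensBeforeDemo {
    static int config;   // deliberately not volatile

    public static void main(String[] args) {
        config = 42;                       // (1) write in the main thread

        Thread worker = new Thread(() ->
                // (3) guaranteed to print 42: (1) happens-before (2) by program order,
                // (2) happens-before (3) by the thread start rule,
                // and the two compose by transitivity
                System.out.println("config = " + config));

        worker.start();                    // (2) the start() call
    }
}
```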