Contents
- JVM memory structure and memory model
- Three concepts in concurrent programming and reordering
- The happens-before principle
- The role and principle of volatile
- Application scenarios of volatile
- References
- Takeaways
1. JVM memory structure and memory model
1.1 JVM memory structure
Image from the book Understanding the Java Virtual Machine in Depth
When a program runs, the Java virtual machine divides the memory it manages automatically into several areas, each with its own purpose and its own rules for when it is created and destroyed.
**Method area**: a memory area shared by all threads. It mainly stores class information loaded by the virtual machine, constants, static variables, and code compiled by the just-in-time (JIT) compiler.
**Java heap**: also a memory area shared by all threads, created when the virtual machine starts. It is the largest chunk of memory managed by the Java virtual machine and is used to store object instances.
**Program counter**: a thread-private data area. It is a small memory space that records the line number of the bytecode instruction currently being executed by the thread.
**Virtual machine stack**: a thread-private data area, created together with the thread and sharing its lifetime; it represents the memory model of Java method execution. Each time a method is executed, a stack frame is created to store the method's local variable table, operand stack, dynamic linkage, return value, return address, and so on. Each method call corresponds to one stack frame being pushed onto the virtual machine stack at invocation and popped off at completion.
**Native method stack**: a thread-private data area, similar to the virtual machine stack but serving the Native methods used by the virtual machine.
1.2 JVM memory model
Image from the book The Art of Concurrent Programming in Java
The Java Memory Model (JMM) is an abstract concept; it does not physically exist. It describes a set of rules or specifications that define how variables in a program (instance fields, static fields, and the elements that make up array objects) are accessed.

The entity that actually runs the program inside the JVM is the thread, and when each thread is created the JVM also creates a working memory for it (sometimes called the stack space) to store the thread's private data. The Java memory model specifies that all variables are stored in main memory, which is a shared region that every thread can access. However, a thread's operations on variables (reads, assignments, and so on) must take place in its working memory: it must first copy the variable from main memory into its own working memory, operate on the copy there, and then write the variable back to main memory; it cannot operate directly on variables in main memory. Working memory therefore holds copies of the variables in main memory. As noted above, working memory is the private data area of each thread, so different threads cannot access each other's working memory, and communication between threads (passing values) must go through main memory.
2. Three concepts in concurrent programming and reordering
2.1 Atomicity
Atomicity means that an operation is uninterruptible: once started, it runs to completion without interference from other threads, even in a multi-threaded environment.
2.2 Visibility
Visibility means that when one thread modifies the value of a shared variable, other threads can immediately see the new value. Because each thread copies shared variables into its own working memory, operates on them there, and only later writes them back to main memory, thread A may have modified the shared variable x without yet writing it back to main memory while thread B reads the same shared variable x from main memory; at that moment, the value of x in thread A's working memory is not visible to thread B. This synchronization delay between working memory and main memory is one cause of visibility problems. In addition, compiler optimizations and instruction reordering (discussed below) can also lead to visibility problems, because in a multi-threaded environment reordering by either the compiler or the processor can cause out-of-order execution.
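To make the visibility problem concrete, here is a minimal sketch (the class and field names are made up for illustration): without volatile, the worker thread may keep reading a stale copy of `running` from its own working memory and never stop.

```java
public class VisibilityDemo {
    // Not volatile: the worker thread may never observe the update made by main().
    private static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy loop; with JIT optimization this may spin forever
            }
            System.out.println("Worker stopped");
        });
        worker.start();

        Thread.sleep(1000);
        running = false; // written by the main thread; may not become visible to the worker
        worker.join();
    }
}
```

Declaring `running` as volatile makes the write visible to the worker thread, so the loop exits; that is exactly the guarantee discussed in section 4.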
2.3 Ordering
When a program is compiled into machine instructions, the instructions may be reordered, so the resulting execution order may differ from the order of the original statements.
Reordering
To improve program performance, compilers and processors often reorder instructions. There are three types of reordering:
- **Compiler reordering**: the compiler can rearrange the execution order of statements as long as it does not change the semantics of a single-threaded program.
- **Instruction-level parallel reordering**: modern processors use instruction-level parallelism to overlap the execution of multiple instructions. If there is no data dependency (that is, a later statement does not depend on the result of an earlier one), the processor can change the order in which the corresponding machine instructions are executed.
- **Memory system reordering**: because the processor uses caches and read/write buffers, load and store operations may appear to execute out of order, since the multi-level cache introduces a lag between memory and cached data.
Reordering only guarantees consistent execution results under serial (single-thread) semantics; it does not concern itself with semantic consistency across multiple threads.
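A hypothetical sketch (names invented for this example) of how reordering that is safe within each thread can still surprise across threads: in each thread the write and the following read touch different variables, so they have no data dependency and may be reordered, which makes the outcome r1 == 0 && r2 == 0 possible.

```java
public class ReorderDemo {
    static int x = 0, y = 0;
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            x = 1;    // no data dependency with the next read, so it may be reordered
            r1 = y;
        });
        Thread t2 = new Thread(() -> {
            y = 1;
            r2 = x;
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Intuitively at least one of r1/r2 should be 1,
        // but with compiler/processor reordering both may be 0.
        System.out.println("r1=" + r1 + ", r2=" + r2);
    }
}
```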
3. The happens-before principle
The happens-before rules are defined in JSR-133: Java Memory Model and Thread Specification as follows:
3.1 Program order rule
Every action in a thread happens-before every subsequent action in that thread.
Here is a question: **doesn't the program order rule conflict with the compiler's reordering of instructions?** I discussed this with a friend today and found that he had misunderstood the meaning of happens-before. Happens-before is concerned with the visibility of results, not with the order of execution. A good quote from my friend: "execution results having no correlation is a special kind of visibility." Now let's look at how to reconcile the program order rule in happens-before with the compiler's reordering of instructions (see www.zhihu.com/question/65…). Case study:
```java
double pi = 3.14;          // A
double r = 1.0;            // B
double area = pi * r * r;  // C
```
The example above, which calculates the area of a circle, has three happens-before relationships: A happens-before B; B happens-before C; A happens-before C. The third relationship is derived from the transitivity of happens-before.
Here A happens-before B, yet B can actually be executed before A (see the discussion of reordering above).
If A happens-before B, the JMM does not require that A must be executed before B. The JMM only requires that the earlier operation (its execution result) be visible to the later operation, and that the earlier operation is ordered before the later one. In this example, the result of operation A does not need to be visible to operation B, and reordering A and B produces the same result as executing them in happens-before order. In that case the JMM considers the reordering legal and allows it.
3.2 Monitor lock rule
An unlock of a monitor lock happens-before every subsequent lock of that same monitor lock.
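A minimal sketch of this rule, with illustrative names: once thread A releases `lock`, the write it made inside the synchronized block is guaranteed to be visible to thread B when B later acquires the same `lock`.

```java
public class MonitorRuleDemo {
    private static final Object lock = new Object();
    private static int sharedValue = 0;

    public static void main(String[] args) {
        // Thread A: writes inside a synchronized block, then releases the lock.
        new Thread(() -> {
            synchronized (lock) {
                sharedValue = 42;
            } // the unlock of `lock` happens-before any subsequent lock of `lock`
        }).start();

        // Thread B: once it acquires the same lock after A has released it,
        // it is guaranteed to see A's write.
        new Thread(() -> {
            synchronized (lock) {
                System.out.println(sharedValue); // prints 42 if A's unlock came first
            }
        }).start();
    }
}
```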
3.3 Volatile variable rule
A write to a volatile variable happens-before every subsequent read of that volatile variable.
3.4 Transitivity
If A happens-before B, and B happens-before C, then A happens-before C.
3.5 The start() rule
If thread A performs the operation ThreadB.start() (which starts thread B), then thread A's ThreadB.start() operation happens-before every operation in thread B.
3.6 The join() rule
If thread A performs the operation ThreadB.join() and it returns successfully, then every operation in thread B happens-before thread A's successful return from ThreadB.join().
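A small sketch (illustrative names) combining the start() and join() rules: a value written before start() is visible inside the new thread, and the new thread's writes are visible after join() returns.

```java
public class StartJoinDemo {
    static int input = 0;
    static int result = 0;

    public static void main(String[] args) throws InterruptedException {
        input = 10;                        // written before start(): visible to threadB (start rule)
        Thread threadB = new Thread(() -> {
            result = input * 2;            // visible to main after join() returns (join rule)
        });
        threadB.start();
        threadB.join();
        System.out.println(result);        // guaranteed to print 20
    }
}
```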
4. The role and principle of volatile
Program execution involves two aspects: instruction execution and data reads/writes. Instructions are executed by the processor, while data reads and writes depend on system memory. Because the processor executes far faster than memory can read and write, a cache is placed inside the processor. During execution, data is copied into the processor's cache, and after the operation completes it is written back to system memory.
4.1 Effects of volatile
This can cause visibility problems when the same variable is read and written by multiple threads. Volatile is a good solution to the visibility and ordering problems, but it does not guarantee atomicity. Compared with synchronized, it does not cause thread context switches and scheduling.
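Before looking at how visibility and ordering are achieved, here is a minimal sketch of the "no atomicity" point (names made up for illustration): `count++` is a read-modify-write sequence, so even on a volatile field two threads can interleave and lose updates.

```java
public class VolatileNotAtomic {
    static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++;   // not atomic: read, add, and write can interleave between threads
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Often prints a value less than 20000 because increments are lost.
        System.out.println(count);
    }
}
```

Making the increment synchronized, or using an AtomicInteger, fixes the lost updates; volatile alone only makes each individual read and write visible.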
So how does it guarantee visibility and order?
When a write is performed on a variable declared volatile:
- The JVM sends a Lock-prefixed instruction to the CPU, which writes the cache line containing the variable (the smallest unit of storage that can be allocated in the CPU cache, usually 64 bytes wide) back to system memory.
- To keep the caches of multiple processors consistent, each processor sniffs the data transmitted on the bus (similar to an observer) to check whether its own cache has gone stale. When a processor finds that the memory address corresponding to one of its cache lines has been modified, it marks that cache line as invalid; the next time it needs that variable, it reads it from system memory back into its cache. (From The Art of Concurrent Programming in Java)
Volatile is common in concurrent programming, but it is also easy to abuse. Volatile variables provide visibility guarantees and prevent instruction reordering, but all of this comes at the cost of optimizations (bypassing the cache and operating on main memory adds overhead), so volatile should not be overused and should only be applied when the visibility of a variable really needs to be strengthened.
4.2 A happens-before relationship is established for volatile write-reads
Let's look at the example from The Art of Concurrent Programming in Java:
```java
class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    public void write() {
        a = 1;          // 1
        flag = true;    // 2
    }

    public void reader() {
        if (flag) {     // 3
            int i = a;  // 4
        }
    }
}
```
Suppose thread A executes the write() method and thread B then executes the reader() method. According to the happens-before rules:
- Program order rule: 1 happens-before 2; 3 happens-before 4
- Volatile rule: 2 happens-before 3
- Transitivity: 1 happens-before 4
5. Application scenarios of volatile
5.1 Status Flags
While thread A is executing doWork(), another thread B may call release(); declaring the flag variable volatile guarantees that the change is visible to thread A.
```java
volatile boolean flag;

public void release() {
    flag = true;
}

public void doWork() {
    while (!flag) {
        // do something
    }
}
```
5.2 Safe one-time publication
Volatile prevents instruction reordering and therefore prevents out-of-order execution in multi-threaded environments. Let's look at a very typical example of forbidding reordering optimizations: double-checked locking.
```java
public class Singleton {
    private volatile static Singleton mInstace;

    public static Singleton getInstance() {
        if (mInstace == null) {                  // first check, without locking
            synchronized (Singleton.class) {
                if (mInstace == null) {          // second check, with the lock held
                    mInstace = new Singleton();
                }
            }
        }
        return mInstace;
    }
}
```
mInstace = new Singleton(); is not an atomic operation. It actually breaks down into three steps:
1. memory = allocate(); — allocate memory for the object
2. instance(memory); — initialize the object
3. instance = memory; — point the reference to the newly allocated memory (at this point the reference is no longer null)
Because steps 2 and 3 may be reordered, the actual execution order can become:
- Step 1: memory = allocate(); — allocate memory for the object
- Step 3: instance = memory; — point the reference to the newly allocated memory (the reference is now non-null, but the object has not been initialized yet!)
- Step 2: instance(memory); — initialize the object
If mInstace is not marked volatile, then when another thread sees that mInstace is not null, the instance may not yet have been initialized, which creates a thread-safety problem.
5.3 Low-cost read-write lock strategy
Synchronized guarantees that the increment operation is atomic, while volatile guarantees the visibility of the current value to readers. If updates are infrequent compared to reads, this approach performs better than synchronizing both operations.
```java
public class CheesyCounter {
    private volatile int value;

    // Reads need no locking: volatile guarantees visibility of the latest value.
    public int getValue() {
        return value;
    }

    // Writes are synchronized so that the read-modify-write increment is atomic.
    public synchronized int increment() {
        return value++;
    }
}
```
5.4 Independent Observation
```java
public class UserManager {
    public volatile String lastUser;

    public boolean authenticate(String user, String password) {
        boolean valid = passwordIsValid(user, password);
        if (valid) {
            User u = new User();
            activeUsers.add(u);
            lastUser = user;   // publish the latest observation via the volatile field
        }
        return valid;
    }
}
```
5.5 Volatile bean pattern
The basic principle of the volatile bean pattern is that all data members of the data container are declared volatile, the getters and setters must be trivial (they contain no logic other than returning or assigning the field), and the objects stored in these members must themselves be thread-safe.
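A minimal sketch of the pattern, assuming a simple `Person` holder invented for illustration: every member is volatile and the getters and setters do nothing beyond returning or assigning the field.

```java
public class Person {
    private volatile String firstName;
    private volatile String lastName;
    private volatile int age;

    // Trivial getters: just return the volatile field.
    public String getFirstName() { return firstName; }
    public String getLastName()  { return lastName; }
    public int getAge()          { return age; }

    // Trivial setters: just assign the volatile field.
    // Any object stored here must itself be thread-safe or effectively immutable.
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public void setLastName(String lastName)   { this.lastName = lastName; }
    public void setAge(int age)                { this.age = age; }
}
```

A reader thread always sees an up-to-date value for each individual property, although the pattern by itself says nothing about several properties being updated atomically together.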
6. References
- Book: The Art of Concurrent Programming in Java
- Book: Understanding the Java Virtual Machine in Depth
- In-depth understanding of the Java memory model
- Highly recommended – complete understanding of the Java Memory Model (JMM) and the volatile keyword
- Multithreaded knowledge combing (8) – Volatile keyword
- Volatile in Java
- Java concurrent programming: Volatile keyword parsing
- The use of volatile
- How to understand the rules of program ordering in happens-before and the reordering of instructions by the compiler?
7. Takeaways
Through this study and practice:
- Learned about the JVM memory structure and memory model
- Understood reordering, atomicity, visibility, and ordering in concurrent programming
- Learned the happens-before principle
- Understood the principles and usage scenarios of volatile
Thank you for reading. The next article in this series will cover synchronized. Welcome to follow the Audio and Video Development Journey to learn and grow together, and feel free to leave comments and exchange ideas.