This is the sixth installment of the JVM column, covering the Java memory model and threads. The previous installment covered bytecode; if you haven't read it yet, you can catch up first.
Java memory model
Main memory vs. working memory
The Java memory model specifies that all variables are stored in main memory. In addition, each thread has its own working memory, which holds copies of the main-memory variables used by that thread. All of a thread's operations on variables must be performed in its working memory; a thread cannot read or write main memory directly. Different threads cannot access variables in each other's working memory either, so passing variable values between threads must go through main memory.
The interaction between threads, main memory, and working memory is shown below.
Inter-memory operations
The memory model defines the following eight operations to implement this interaction:
- Lock: acts on a main-memory variable; it marks the variable as exclusively owned by one thread.
- Unlock: acts on a main-memory variable; it releases a locked variable so that it can be locked by another thread.
- Read: acts on a main-memory variable; it transfers the value of the variable from main memory to the thread's working memory for the subsequent load action.
- Load: acts on a working-memory variable; it puts the value obtained from main memory by the read operation into the working-memory copy of the variable.
- Use: acts on a working-memory variable; it passes the value of the variable in working memory to the execution engine. This operation is performed whenever the virtual machine reaches a bytecode instruction that needs the variable's value.
- Assign: acts on a working-memory variable; it assigns a value received from the execution engine to the working-memory variable. This operation is performed whenever the virtual machine reaches a bytecode instruction that assigns a value to the variable.
- Store: acts on a working-memory variable; it transfers the value of the variable in working memory to main memory for the subsequent write operation.
- Write: acts on a main-memory variable; it puts the value obtained from the store operation into the main-memory variable.
To copy a variable from main memory into working memory, read and then load are performed in that order; to synchronize a variable from working memory back to main memory, store and then write are performed in order.
The memory model also specifies that the following rules must be met when performing the eight basic operations described above:
- A variable read from main memory must be accepted by working memory, and a value written back by working memory must be accepted by main memory; neither transfer may be dropped halfway.
- After a variable is changed in working memory, the change must be synchronized back to main memory.
- A thread is not allowed to synchronize a variable from its working memory back to main memory for no reason (that is, without any assign operation having occurred).
- Before performing use or store operations on a variable, a thread must first have performed the assign and load operations.
- A variable may be locked by only one thread at a time, but the same thread may perform the lock operation repeatedly; after locking a variable several times, the variable is unlocked only after the same number of unlock operations.
- Performing a lock operation on a variable clears that variable's value from working memory; before the execution engine uses the variable, a load or assign operation must be performed again to initialize its value.
- It is not allowed to unlock a variable that has not previously been locked, nor to unlock a variable that is locked by another thread.
- Before an unlock operation can be performed on a variable, the variable must be synchronized back to main memory (store and write).
Volatile variables
The volatile keyword is the lightest synchronization mechanism the Java virtual machine provides. A volatile variable has two properties. First, the variable is visible to all threads: when one thread changes its value, other threads can immediately see the new value. Second, instruction-reordering optimization is prohibited for it.
Example of ensuring visibility of variables to all threads
Let’s take a look at some code. If thread 1 executes first and thread 2 executes later:
// thread 1
boolean stop = false;
while (!stop) {
    doSomething();
}

// thread 2
stop = true;
Sometimes thread 1 cannot be terminated. When it runs, thread 1 keeps a copy of the stop variable in its working memory. If thread 2 changes stop but the new value has not yet been written back to main memory, and thread 2 then moves on to other work, thread 1 never learns of thread 2's change and keeps looping.
If stop is declared volatile, things are different:
1. volatile forces the changed value to be written to main memory immediately.
2. When thread 2 makes the change, volatile invalidates the cache line holding stop in thread 1's working memory (at the hardware level, the line in the L1 or L2 cache is invalidated).
3. Because that cache line is invalid, thread 1 reads stop from main memory again.
In other words, when thread 2 modifies stop, the cache line caching stop in thread 1's working memory is invalidated. When thread 1 next reads the variable and finds its cache line invalid, it waits for the main-memory address corresponding to that cache line to be updated and then reads the latest value from main memory. Thread 1 therefore reads the latest, correct value.
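A minimal, runnable sketch of the fix (the class and method structure here are illustrative, not from the original fragment):

public class StopDemo {
    // volatile makes thread 2's write to stop immediately visible to the worker
    static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // doSomething()
            }
            System.out.println("worker stopped");
        });
        worker.start();
        Thread.sleep(100); // let the worker spin for a moment
        stop = true;       // without volatile, the worker might loop forever
    }
}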
However, volatile does not guarantee that operations on the variable are atomic. Consider the following code:
public class Test {
    public volatile int inc = 0;

    public void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }
        // ensure that all the threads above have finished
        while (Thread.activeCount() > 1)
            Thread.yield();
        System.out.println(test.inc);
    }
}
Running the code above, the expected result is 10000, but each run prints a different number less than 10000.
Yet we declared inc volatile before incrementing it. Since volatile provides visibility, each thread's increment should be visible to the other threads, so 1000 increments in each of ten threads should yield 10000.
The code misses that estimate because volatile does not guarantee atomicity, and increment is not atomic: it involves reading the variable's original value, adding one, and writing the result to working memory. These three sub-operations may be interleaved with other threads, which can play out as follows:
Suppose inc is 10 at some moment. Thread 1 begins an increment and reads the original value of inc, then blocks before modifying it.
Thread 2 then begins an increment and also reads the original value of inc. Because thread 1 only read inc and did not modify it, the cache line for inc in thread 2's working memory is not invalidated, so thread 2 reads 10 directly from main memory, adds 1, writes 11 to its working memory, and then writes it to main memory.
Thread 1 then unblocks and continues. Since it has already read inc as 10, it adds 1 and writes 11 to main memory.
The result: both threads performed an increment, but inc only went up by 1.
The volatile rules guarantee that a modification to a volatile variable invalidates the cached copies so other threads read the new value. But thread 1 had not yet modified inc, so thread 2 legitimately read the unmodified value from main memory. The mechanism behaves consistently; it just does not deliver the expected result.
In summary, volatile does not make compound operations on a variable atomic, and we should combine it with other mechanisms, such as synchronized or Lock.
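A sketch of one fix, replacing the counter with AtomicInteger (a synchronized increase() method would work equally well):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicTest {
    public final AtomicInteger inc = new AtomicInteger(0);

    public void increase() {
        inc.incrementAndGet(); // the whole read-add-write is one atomic operation
    }
}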
Example of disabling instruction reordering optimization
The volatile keyword disallows instruction reordering in two senses:
1. When a program reads or writes a volatile variable, all preceding operations must already have completed and their results must be visible to subsequent operations, while the operations after it have definitely not yet taken place.
2. During instruction optimization, statements before an access to a volatile variable must not be moved after it, and statements after the access must not be moved before it.
Here’s a simple example:
// x and y are non-volatile variables
// flag is volatile
x = 2;        // statement 1
y = 0;        // statement 2
flag = true;  // statement 3
x = 4;        // statement 4
y = -1;       // statement 5
Since flag is volatile, instruction reordering will not place statement 3 before statement 1 or 2, nor after statement 4 or 5. Note, however, that the relative order of statements 1 and 2, or of statements 4 and 5, is not guaranteed. volatile also guarantees that by the time statement 3 executes, statements 1 and 2 have completed, and their results are visible to statements 3, 4, and 5.
Here’s another example:
// thread 1:
context = loadContext();   // statement 1
inited = true;             // statement 2

// thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
Because statement 2 may be reordered before statement 1, the context may not yet be initialized when thread 2 uses it, and the program fails. If the inited variable is declared volatile, this problem cannot occur, because the context is guaranteed to be initialized by the time statement 2 executes.
Principle and implementation mechanism
Comparing the assembly code generated with and without the volatile keyword shows that the volatile write adds an instruction with the lock prefix.
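You can inspect this yourself with the JVM's diagnostic flags; this requires the hsdis disassembler plugin to be installed, and the class below is only a hypothetical test subject:

public class VolatileWrite {
    static volatile int state;

    public static void main(String[] args) {
        for (int i = 0; i < 1_000_000; i++) {
            state = i; // volatile write: on x86 this compiles to a lock-prefixed instruction
        }
    }
}

// Run with assembly printing enabled:
// java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly VolatileWrite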
The lock prefix essentially acts as a memory barrier, providing three functions:
1. It ensures that instruction reordering neither moves later instructions before the barrier nor earlier instructions after it; that is, by the time the barrier instruction executes, all operations before it have completed.
2. It forces changes in the cache to be written to main memory immediately.
3. For a write operation, it invalidates the corresponding cache line in other CPUs.
Special rules for long and double variables
The Java memory model requires that the lock, unlock, read, load, assign, use, store, and write operations be atomic. For 64-bit data types, however, the model has a very loose rule: it allows the virtual machine to divide reads and writes of 64-bit data not declared volatile into two 32-bit operations. That is, a virtual machine implementation may choose not to guarantee the atomicity of the load, store, read, and write operations on 64-bit data types. This is called the "non-atomic protocol for long and double".
If multiple threads share a long or double variable that is not declared volatile and read and modify it concurrently, some threads may read a "half variable": a value that is neither the original value nor the value written by another thread. In practice, however, reading a "half variable" is very rare.
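A sketch that tries to observe such a torn read (a hypothetical demo that loops forever; most modern 64-bit virtual machines implement long access atomically anyway, so it will typically print nothing):

public class TornLongDemo {
    static long value; // deliberately not volatile

    public static void main(String[] args) {
        new Thread(() -> {
            while (true) {
                value = 0L;   // all 64 bits zero
                value = -1L;  // all 64 bits one
            }
        }).start();

        while (true) {
            long v = value;
            if (v != 0L && v != -1L) {
                // a "half variable": the two 32-bit halves came from different writes
                System.out.println("torn read: " + Long.toHexString(v));
            }
        }
    }
}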
Atomicity, visibility, and ordering
The Java memory model is built around how atomicity, visibility, and ordering are handled during concurrency. Here is how each of the three is achieved.
Atomicity
The variable operations whose atomicity the Java memory model guarantees directly are read, load, assign, use, store, and write. We can generally assume that reads and writes of basic data types are atomic (setting aside the non-atomic protocol for long and double).
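For atomicity over a larger scope, the lock and unlock operations are exposed to Java code through the synchronized keyword (via the monitorenter and monitorexit bytecode instructions). A minimal sketch:

public class Counter {
    private int count;

    // the entire read-add-write sequence runs atomically under the object's monitor
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}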
Visibility
Visibility means that when one thread changes the value of a shared variable, other threads can immediately see the change. The Java memory model provides visibility by using main memory as the transfer medium: the new value is synchronized back to main memory after the variable is modified, and the value is refreshed from main memory before the variable is read. This holds for both ordinary and volatile variables; the difference is that volatile's special rules guarantee that the new value is synchronized to main memory immediately and refreshed from main memory immediately before each use. Thus we can say volatile guarantees visibility of a variable across threads, whereas ordinary variables do not.
Besides volatile, Java has two other keywords that provide visibility: synchronized and final. The visibility of synchronized blocks comes from the rule that a variable must be synchronized back to main memory (store, write) before unlock is performed. The visibility of final means that once a final field has been initialized in the constructor, and the constructor has not leaked a reference to "this" (this-reference escape is dangerous: other threads might see a "half-initialized" object through it), the value of the final field is visible to all other threads. In the following code, variables i and j are both visible and can be read correctly by other threads without synchronization.
public static final int i;
public final int j;
static {
i = 0;
// omit subsequent actions
}
{
// You can also choose to initialize it in the constructor
j = 0;
// omit subsequent actions
}
Ordering
The natural ordering of Java programs can be summed up in one sentence: observed from within a thread, all operations are ordered; observed from one thread looking at another, all operations are unordered.
The first half refers to the as-if-serial semantics within a thread; the second half refers to instruction reordering and the delay in synchronizing working memory with main memory. The Java language provides the keywords volatile and synchronized to guarantee ordering between threads. volatile carries the semantics of forbidding instruction reordering; synchronized gets its ordering from the rule that a variable may be locked by only one thread at a time, which means two synchronized blocks holding the same lock can only be entered serially.
The happens-before principle
This principle is a very useful tool for determining whether data is contended and whether threads are safe. Relying on it, all questions of whether two operations might conflict in a concurrent environment can be settled with a few simple rules. Consider the following code:
// The following operations are performed in thread A
i = 1;
// The following operations are performed in thread B
j = i;
// The following operations are performed in thread C
i = 2;
If the operation "i = 1" in thread A happens-before the operation "j = i" in thread B, we can conclude that j must equal 1 after thread B's operation executes. This follows from two points:
First, by the happens-before principle, the result of "i = 1" is observable by thread B.
Second, thread C has not yet appeared, and no other thread modifies i after thread A's operation.
Now consider thread C. The happens-before relationship between A and B still holds, and C's operation takes place between A's and B's, but there is no happens-before relationship between C and B. What is j then? It could be either 1 or 2, because thread C's write to i might or might not be observed by thread B. In that case thread B risks reading stale data, and the code is not thread-safe.
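A hedged sketch of how to remove the stale-read risk: declaring i volatile invokes the volatile-variable rule of happens-before, so every write to i happens-before each subsequent read of it (this does not decide whether B sees 1 or 2; it only guarantees B sees the latest write):

public class HappensBeforeDemo {
    // volatile-variable rule: a write to i happens-before every subsequent read of i,
    // so a reader can no longer observe a stale value
    static volatile int i;
    static int j;

    static void threadA() { i = 1; }
    static void threadB() { j = i; } // sees the most recent write to i, never an expired one
    static void threadC() { i = 2; }
}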
Java and threads
Implementation of threads
There are three main ways to implement threads:
- Implementation using kernel threads (1:1 implementation)
- Implementation using user threads (1:N implementation)
- Use a hybrid implementation of user threads and lightweight processes (N:M implementation).
Java thread scheduling
Thread scheduling refers to the process in which the system allocates processor rights to threads. There are two main ways of scheduling, namely cooperative thread scheduling and preemptive thread scheduling.
In a multithreaded system with cooperative scheduling, a thread's execution time is controlled by the thread itself: after finishing its work, a thread must actively notify the system to switch to another thread. The biggest advantage of cooperative multithreading is simple implementation; because a thread switches only after finishing its own work, the switch is known to the thread itself, so there are generally no thread-synchronization problems.
In a multithreaded system with preemptive scheduling, each thread is allocated execution time by the system, and thread switching is not determined by the thread itself.
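Java's own thread scheduling is preemptive. A thread can only give the scheduler hints, as in this sketch (both calls below are hints the scheduler is free to ignore):

public class SchedulingHints {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            // ... do some work ...
            Thread.yield(); // hint: willing to give up the processor now
        });
        worker.setPriority(Thread.MAX_PRIORITY); // hint: the mapping to OS priorities varies by platform
        worker.start();
    }
}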
State transition
Java defines six thread states. At any point in time, a thread can have only one of these states and can switch between them in a specific way.
- New: A thread that has been created but not yet started is in this state.
- Runnable: Includes both Running and Ready in operating-system terms; the thread is either executing or waiting for the operating system to allocate execution time to it.
- Waiting: A thread in this state is not allocated processor time and waits to be explicitly woken up by another thread.
- Timed Waiting: A thread in this state is not allocated processor time either, but instead of waiting to be explicitly woken by another thread, it is automatically woken by the system after a certain amount of time.
- Blocked: The thread is blocked. The difference from the waiting states is that a blocked thread is waiting to acquire an exclusive lock, an event that occurs when another thread gives up the lock, whereas a waiting thread is waiting for a period of time to elapse or for a wake-up action to occur. A thread enters this state while the program is waiting to enter a synchronized region.
- Terminated: The thread has finished executing.
Thread state transitions are shown below
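As a complement to the diagram, a small sketch that observes several of these states through Thread.getState():

public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(100); // TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) {
            }
        });
        System.out.println(t.getState()); // NEW
        t.start();
        Thread.sleep(10);                 // give t time to reach sleep()
        System.out.println(t.getState()); // usually TIMED_WAITING, timing-dependent
        t.join();                         // wait for t to finish
        System.out.println(t.getState()); // TERMINATED
    }
}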