Whether you’re a freshman in college or a CRUD developer with 2-3 years of experience, you’ll be asked these questions in interviews for at least the next 3 years. Take the time to get to the bottom of them; the best way to get rid of a fear is to face it head on. This series is my notes and summary of the learning process, and provides debugging code for everyone to play with.
Review of the previous chapter
1. What are the advantages and disadvantages of synchronized?
2. What should you pay attention to when using synchronized?
3. What is a deadlock? What can cause one?
Please review the questions above on your own, or revisit the previous chapter if anything is unclear.
What this chapter covers
By the end of this chapter, we will have a better understanding of the volatile keyword and thread safety. We will illustrate the three properties of thread safety through the JMM’s treatment of them, and explain the differences between volatile and synchronized and the scenarios each is suited to.
Download the code for this chapter
First, background knowledge
Before we start with volatile, we need to review some prior knowledge. With it fresh in mind, volatile will be much easier to understand.
Three features of concurrent programming
The three properties of concurrent programming are atomicity, visibility, and ordering. This section reviews them by describing each property and how the JMM guarantees it.
1. Atomicity
In simple terms, atomicity means that one or more operations either all complete or none of them do. What does that mean in practice? Let’s look at a couple of examples.
1. x = 10
When a thread performs this assignment, it first assigns 10 to the copy of x in the thread’s working memory, then synchronizes it to main memory, where x also becomes 10. You might object: if, after the assignment, another thread operates on x and changes x in main memory to 11, doesn’t that break atomicity? This is a misunderstanding: those are two separate assignments to x in main memory. The granularity we care about is a single assignment, and each of those single assignments completed in full. So a simple assignment is atomic.
2. x = y
This statement also looks like an assignment, but it is not a single operation. It needs to be broken down:
1. Read y from main memory or working memory.
2. Assign the value of y to x and write x back to main memory.
The first step, reading y, and the second step, assigning x, are completely separate; it is not the case that both either happen together or not at all. So we say this operation is not atomic.
3. x++
This one is a little easier to understand than x = y; it breaks down into three steps:
- Read the value of x from main or working memory.
- Increment: compute x + 1.
- Assign the incremented value back to x and update x in main memory.
Clearly these three steps do not happen as one indivisible unit, so x++ is not an atomic operation.
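The three-step decomposition above can be demonstrated with a small race. In the sketch below (the class and field names are mine, not from this chapter), two threads run x++ on a plain int and, for contrast, call incrementAndGet() on an AtomicInteger, which performs the same three steps as one indivisible unit:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IncrementRace {
    static int plainCount = 0;                       // plain int: x++ is three separate steps
    static AtomicInteger atomicCount = new AtomicInteger(0);

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCount++;                        // read, add 1, write back: threads can interleave
                atomicCount.incrementAndGet();       // one indivisible read-modify-write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("plainCount  = " + plainCount);        // usually less than 200000
        System.out.println("atomicCount = " + atomicCount.get()); // always 200000
    }
}
```

Run it a few times: the atomic counter always lands on 200000, while the plain counter usually falls short because two threads interleave between the read and the write-back.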
As you can see, apart from the simple assignment, the other two operations are not atomic. If you are still confused, go back and work through the examples again. Let’s briefly summarize what counts as an atomic operation:
- Simple reads and simple assignments (assigning a constant to a variable) are atomic; the JMM does not guarantee that an assignment between two variables is atomic.
- A combination of atomic operations is not necessarily atomic, e.g. x = y.
- As we learned in the previous chapter, the synchronized keyword can guarantee the atomicity of a block of code.
2. Visibility
Visibility means that in multithreaded programming, once a thread changes the value of a shared variable in main memory, other threads can immediately perceive the change and fetch the new value from main memory, rather than continuing to use the stale copy in their own working memory.
As we learned in the previous chapter, synchronized preserves visibility in concurrent programming through the exclusivity of the monitor lock: at most one thread can hold the monitor at a time, and other threads can only enter the synchronized block after the current holder releases the lock. The drawback is the cost in thread throughput.
3. Ordering
Ordering refers to the order in which code executes. To improve performance, the JVM provides an instruction-reordering feature; that is, your code may not execute in the order you wrote it. Data dependencies between instructions are strictly respected, but dependencies at the business-logic level are not, which can lead to ordering problems in multithreaded development.
Let’s look at this problem with a simple example.
```java
public class InstructionReorder {
    public static class PrintOut {
        public void getStr() {
            System.out.println(Thread.currentThread().getName() + " returned");
        }
    }

    private static boolean initStatus = false;
    static PrintOut printOut = null;

    public static void main(String[] args) {
        new Thread(() -> {
            // Check whether printOut is already instantiated
            if (!initStatus) {
                // Simulated instruction reorder: the flag is set before the object exists
                initStatus = true;
                printOut = new PrintOut();
                printOut.getStr();
                // Normal order, with no reordering:
                // printOut = new PrintOut();
                // printOut.getStr();
                // initStatus = true;
            } else {
                printOut.getStr();
            }
        }, "thread 1").start();

        new Thread(() -> {
            // Check whether printOut is already instantiated
            if (!initStatus) {
                printOut = new PrintOut();
                printOut.getStr();
                initStatus = true;
            } else {
                printOut.getStr();  // may throw NPE: initStatus is true but printOut is still null
            }
        }, "thread 2").start();
    }
}
```
Here private static boolean initStatus = false; records whether initialization has completed, and static PrintOut printOut = null; is the object being initialized. In thread 1 we simulate the instruction-reorder scenario:
```java
// Check whether printOut is already instantiated
if (!initStatus) {
    // Simulated instruction reorder: the flag is set first
    initStatus = true;
    printOut = new PrintOut();
    printOut.getStr();
    // Normal order, with no reordering:
    // printOut = new PrintOut();
    // printOut.getStr();
    // initStatus = true;
} else {
    printOut.getStr();
}
```
Output:
```
thread 1 returned
Exception in thread "thread 2" java.lang.NullPointerException
	at src.com.lyf.page5.InstructionReorder.lambda$main$1(InstructionReorder.java:44)
	at java.lang.Thread.run(Thread.java:748)
```
As you can see, after thread 1 starts, initStatus is set to true before printOut is constructed. When thread 2 starts, it sees initStatus == true, concludes that the object has already been initialized, calls printOut.getStr() on a reference that is still null, and throws a NullPointerException.
If you are interested, comment out the code that simulates instruction reordering, enable the normally ordered code, and run it several times to compare the results.
That covers the JVM instruction-reordering problem. The curious among you may ask: if reordering can cause problems like this, why does the feature exist at all?
Under normal circumstances, instruction reordering strictly follows the data dependencies between instructions and complies with the happens-before principle. Let’s look at what the happens-before principle actually says.
The happens-before principle
- Program order rule: within a single thread, code appears to execute in the order it was written. The JVM may reorder the instructions, but it guarantees that the result within that thread is the same as sequential execution. The key phrase is "within a single thread": across threads, nothing guarantees the program executes in the order we expect, which is exactly the ordering problem in the example above.
- Monitor lock rule: whether in a single-threaded or multithreaded environment, an unlock of a monitor happens-before every subsequent lock of that same monitor. For example, if thread A holds monitor lock mutex and thread B wants to acquire mutex, A’s unlock of mutex must happen before B’s lock.
- Volatile rule: a write to a volatile variable happens-before every subsequent read of that variable. In simple terms, if thread B writes shared volatile variable x and thread A later reads x, B’s write must be visible to A’s read.
- Transitivity: if A happens-before B and B happens-before C, then A happens-before C. Easy to understand.
- Thread start rule: the call to Thread.start() happens-before any action of the started thread. A thread only actually runs once start() is called and it gets CPU time; before that it is just a Thread object.
- Thread interruption rule: a call to interrupt() on a thread happens-before the interrupted thread detects the interrupt. In other words, if a thread observes an interrupt signal, some call to interrupt() must have preceded it.
- Thread termination rule: all actions in a thread happen-before any other thread detects that the thread has terminated. Roughly speaking, the thread’s work must complete before its death can be observed.
- Finalizer rule: the completion of an object’s construction happens-before the start of its finalizer.
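The volatile rule above can be made concrete with a small handoff sketch (the class and field names here are mine, not from this chapter). The writer stores into a plain field and then writes a volatile flag; because the write to ready happens-before the reader’s subsequent read of it, and program order puts the write to data before the write to ready, the reader is guaranteed to see data = 42 once it observes the flag:

```java
public class VolatileHandoff {
    static int data = 0;                    // a plain field, published via the volatile flag
    static volatile boolean ready = false;
    static volatile int seen = -1;          // what the reader eventually observes

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) { }              // spin until the volatile write becomes visible
            seen = data;                    // guaranteed to be 42 by the volatile rule
        });
        reader.start();

        data = 42;                          // ordinary write ...
        ready = true;                       // ... published by the volatile write that follows it
        try {
            reader.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("reader saw data = " + seen);
    }
}
```

Note that without volatile on ready, the reader could spin forever on a stale copy of the flag, and even on exit it would have no guarantee about the value of data.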
Second, volatile in depth
The volatile keyword plays two key roles:
- Visibility of shared variables is guaranteed.
- Forbid instruction reordering to ensure sequence.
Volatile guarantees visibility
We use an example here to illustrate this feature
The code:
```java
import java.util.concurrent.TimeUnit;

public class VolatileTest {
    final static int MAX = 5;
    // static volatile int init_value = 0;  // enable volatile to make the reader see updates
    static int init_value = 0;

    public static void main(String[] args) {
        new Thread(() -> {
            int localValue = init_value;
            while (localValue < MAX) {
                if (init_value != localValue) {
                    System.out.println("value read to " + init_value);
                    localValue = init_value;
                }
            }
        }, "Reader").start();

        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        new Thread(() -> {
            int localValue = init_value;
            while (localValue < MAX) {
                System.out.println("value update to " + (++localValue));
                init_value = localValue;
                try {
                    TimeUnit.SECONDS.sleep(2);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }, "Updater").start();
    }
}
```
First, run the code without the volatile keyword (the volatile declaration commented out).
Output:
value update to 1
value update to 2
value update to 3
value update to 4
value update to 5
With volatile added, the output is:
value update to 1
value read to 1
value update to 2
value read to 2
value update to 3
value read to 3
value update to 4
value read to 4
value update to 5
value read to 5
Comparing the two runs: without volatile, the reader thread reads init_value into the local variable localValue in its own working memory, and the two values stay equal forever; the reader keeps reading its working-memory copy and never perceives changes to init_value, even though the value in main memory has long since changed. Once volatile is added, the reader thread does perceive each refresh of init_value in main memory. Concretely, the steps are:
1. The reader thread reads init_value = 0 from main memory and stores it in its own working memory.
2. The updater thread reads init_value = 0 from main memory into its own working memory, increments it to init_value = 1, flushes its working memory and main memory, and notifies every thread holding the shared variable that its copy is now stale.
3. In its while loop, the reader thread finds that the init_value in its working memory has expired, re-reads init_value from main memory, and gets init_value = 1. Now init_value != localValue, so it prints the new value.
Volatile guarantees orderliness
Volatile prevents the JVM and processor from reordering instructions around a volatile-decorated variable. The simple mental model: a volatile statement acts as a barrier. Whatever comes before it stays before it, and whatever comes after stays after it; instructions may still be reordered among themselves on either side of the barrier. For example:
```java
int a = 0;
int b = 0;
volatile int c = 2;
int d = 1;
int e = 2;
```
The assignments a = 0 and b = 0 are guaranteed to complete before c = 2, but whether a or b is assigned first may still be reordered. Likewise, the assignments to d and e must come after c = 2, and those two may be reordered between themselves.
Volatile does not guarantee atomicity
First, recall that atomicity means an operation either completes entirely or not at all. In general, the CPU’s scheduling of threads cannot be relied upon; there is a great deal of uncertainty about when each thread runs. Review how the volatile keyword works, and imagine the following situation with a volatile variable:
1. Threads A and B are created at the same time, and both increment sum. sum is declared volatile, so changes to the shared variable are visible across threads.
2. A reads sum = 1, then the CPU temporarily takes execution away from A and hands it to thread B. B also reads sum = 1, then the CPU hands execution back to A until A completes its increment and refreshes its working-memory copy.
3. B writes sum = 2 to main memory, and the CPU hands execution back to A.
4. A also writes sum = 2 to main memory, so one of the two increments is lost.
I hope the key points landed. There are two parts to this:
1. Reading into working memory and writing to main memory are each safe on their own, but the two steps together are not atomic, because different threads can interleave between them.
2. volatile int c = 0 only guarantees visibility and ordering for c; it does not make compound operations such as c++ atomic, and therefore does not guarantee thread safety.
In summary, volatile is concerned with the visibility and ordering of reads and writes of a single variable. It does not make compound operations on that variable atomic, and therefore does not by itself guarantee thread safety.
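To make the lost-update scenario above concrete, here is a minimal sketch (class and field names are mine): two threads each increment a volatile counter 100,000 times, and the same number of times increment a plain counter guarded by synchronized. The volatile total can fall short; the locked total is exact:

```java
public class VolatileNotAtomic {
    static volatile int sum = 0;            // visible to all threads, but sum++ is still 3 steps
    static int lockedSum = 0;
    static final Object LOCK = new Object();

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                sum++;                      // read, +1, write back: increments can be lost
                synchronized (LOCK) {       // the lock turns the 3 steps into one atomic unit
                    lockedSum++;
                }
            }
        };
        Thread a = new Thread(task, "A");
        Thread b = new Thread(task, "B");
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("volatile sum = " + sum);        // often less than 200000
        System.out.println("locked sum   = " + lockedSum);  // always 200000
    }
}
```

This is exactly the A/B interleaving described in the four steps above: both threads read the same value of sum, so one write overwrites the other.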
Compare volatile to synchronized
Let’s compare them along the three properties of concurrent programming.
1. Visibility
Volatile: relies on the CPU’s lock-prefixed instruction to invalidate the copies in other threads’ working memory, forcing them to fetch the value from main memory again.
Synchronized: exclusivity; only one thread can hold the monitor lock at a time.
2. Atomicity
Volatile: not guaranteed.
Synchronized: exclusivity guarantees atomicity.
3. Ordering
Volatile: guarantees ordering by forbidding JVM instruction reordering.
Synchronized: exclusivity guarantees ordering.
In terms of performance, synchronized achieves thread safety by serializing execution, which sacrifices performance and can cause threads to block. Volatile cannot guarantee atomicity and is quite limited from a thread-safety standpoint, but its advantage is that it never blocks and performs better.
Usage scenarios of the volatile keyword
You may wonder: if it cannot even guarantee atomicity, what good is volatile? Despite its limitations, the volatile keyword is very useful when applied properly. In general, the following two conditions should hold when using volatile:
1. Writes to the variable do not depend on its current value.
This is easy to understand: switch control and state-flag scenarios are a natural fit for volatile.
2. Any compound operation on the volatile variable is made thread-safe by other means.
This can be understood in two ways. First, for plain reads and writes of a single volatile variable, the JVM already provides visibility and ordering, and that is enough. Second, any non-atomic compound operation on a volatile variable must be protected by a lock, as in the double-checked locking pattern.
```java
class Singleton {
    // volatile forbids reordering inside "instance = new Singleton()",
    // so other threads never see a half-constructed instance
    private volatile static Singleton instance = null;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                      // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {              // second check, under lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```
If you are interested, study the code above: it relies on volatile for visibility across the two null checks, and volatile’s ban on instruction reordering prevents other threads from seeing a partially constructed instance. We will cover the thread safety of singletons in more detail later.
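Coming back to the first condition, the classic switch-control use of volatile is a shutdown flag. A minimal sketch (class and field names are assumptions of mine, not from this chapter): the write stopped = true does not depend on the flag’s current value, so visibility alone is enough for the worker to observe it and exit:

```java
public class StopFlag {
    // volatile fits here: the write "stopped = true" does not depend on the current
    // value, and the flag does not participate in invariants with other variables
    static volatile boolean stopped = false;
    static Thread worker;

    public static void main(String[] args) {
        worker = new Thread(() -> {
            long iterations = 0;
            while (!stopped) {              // each check reads the up-to-date flag
                iterations++;
            }
            System.out.println("worker stopped after " + iterations + " loops");
        });
        worker.start();
        try {
            Thread.sleep(100);              // let the worker spin for a while
            stopped = true;                 // visible to the worker on its next check
            worker.join(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("worker alive? " + worker.isAlive());
    }
}
```

Without volatile, the worker could keep reading a stale false from its working memory and spin forever; with it, the flag flip is seen promptly and the thread terminates.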
There is a lot to digest in this chapter, so take your time; I will quiz you on it next time. Well, that’s the end of this lesson! Thanks for your thumbs up 👍 and support~~