This is the fifth day of my participation in the Gwen Challenge
Author: JavaGieGie
WeChat official account: Java development from zero to one
Preface
After the holiday I went back to work, and it was a gloomy day. Fortunately I had Flower Gie's articles to keep me company, or I would have been lonely to death (how shameless of me).
The multithreading series has already covered a lot of knowledge points across three articles. If you haven't read them yet, do take some time to catch up. This series goes from shallow to deep, step by step; don't try to get fat on a single mouthful, careful you don't choke (manual dog head).
"Even squatting in the toilet can get you into a big factory": these basic multithreading interview questions, which 80% of readers answer wrong the first time
Multithreading series - context switching, deadlocks, and high-frequency interview questions
"Even squatting in the toilet can get you into a big factory" multithreading series - the Java memory model in detail
Dog Leftover: Flower GieGie~, up so early?
Me: The article isn't finished yet, and a bunch of readers are waiting for me, so of course I'm up early.
Dog Leftover: And yet here you are, still squatting in the toilet.
Me: ...
This chapter concludes the introduction to the Java memory model. Although that topic finishes here, it will keep running through later knowledge points. With it, the basic knowledge points of multithreading are covered; the next few chapters will move on to thread pools, CAS, ThreadLocal, atomic classes, AQS, concurrent collections, and more. After reading this whole series, who could possibly compete with you?
Slap... watch me get my face smacked for saying that.
Main content
Me: Dog, yesterday you mentioned main memory and working memory, but the introduction was a bit rough. Can you go into more detail today?
Dog: All right, here we go.
To talk about the JMM's main memory and working memory, we must first understand the CPU cache structure.
(In the CPU cache diagram:) Core0 and Core1 represent two CPU cores.
L1: each core has two L1 caches, one Data Cache and one Instruction Cache.
The CPU has three levels of cache: L1, L2, and L3. You might think the CPU is creating work for itself, and that reading and writing data directly from main memory (physical memory) would be much simpler. But remember that the CPU is extremely fast; compared with it, physical memory is not even in the same league. If the CPU interacted with main memory on every single read and write, instruction execution would slow down dramatically. That is what led to the three-level cache.
a = a + 1
For example, when executing this statement, the thread first reads the value of variable a from main memory and copies it into the cache. The CPU then executes the instructions to increment a and writes the result into the cache. Finally, the modified value of a is flushed from the cache back to main memory.
Extension: when fetching data, the CPU first looks in the fastest cache (L1). On a cache miss it looks in the next level down; if the data is in none of the three levels, it finally goes to Main Memory. The more levels it has to traverse, the longer the access takes.
Me: So what does this have to do with the JMM's memory structure?
Java, as a high-level language, shields these low-level details. Instead, the JMM defines a specification for reading and writing memory. In the JMM, main memory and working memory are not a real physical division but an abstraction: L1, L2, and the registers are abstracted into working memory, which each processor uses exclusively, while L3 and RAM are abstracted into main memory, which is shared between processors.
The JMM's constraints on main memory and working memory:
- All variables are stored in main memory; each thread has its own working memory, and the variables in working memory are copies of those in main memory.
- A thread cannot manipulate main memory directly; it can only modify its working memory and then synchronize it back to main memory.
- Threads cannot communicate with each other directly, only through main memory.
- Because threads communicate this way, and that communication is delayed, visibility problems arise.
Me: Is there any way to solve the visibility problem?
We can reason about visibility with the happens-before principle.
Me: (flustered, having actually never heard of it) Er... can you explain exactly what that means?
Happens-before is a rule: if action A happens-before action B, then action B is guaranteed to see the result of action A. That is the happens-before principle.
If that still feels abstract, look at a counterexample: with two threads (thread 1 and thread 2), sometimes thread 2 can see what thread 1 did and sometimes it can't; in that situation there is no happens-before relationship. Readers who have followed this series should recall the visibility case explained in the previous article, the fourth case with b = 3 and a = 1 ("Even squatting in the toilet can get you into a big factory" multithreading series - the Java memory model): it behaves that way precisely because there is no happens-before relationship.
Me: Now that you mention happens-before, what are its applications?
For now it is enough to have a general picture of where happens-before applies; later we will explain each item separately.
Its applications are very broad. Look at the categories below; most of them should look familiar:
- Single-thread rule: within a single thread, later operations can always see the results of earlier operations, in program order.
- start(): if thread A starts thread B, then thread B can see everything thread A did before calling start().
- join(): if thread A waits for child thread B with join(), then when B completes, thread A can see all of B's operations.
- volatile
- synchronized and Lock
- Concurrency tools:
  - Thread-safe containers, for example ConcurrentHashMap
  - CountDownLatch
  - Semaphore
  - Thread pools
  - Future
  - CyclicBarrier
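To make the start() and join() rules above concrete, here is a minimal sketch (the class and field names are my own, not from the article): the write performed before start() is guaranteed visible inside the child thread, and the child's writes are guaranteed visible after join() returns.

```java
public class HappensBeforeDemo {
    static int beforeStart;  // written by main before start()
    static int inChild;      // written by the child thread

    public static void main(String[] args) throws InterruptedException {
        beforeStart = 1;  // happens-before child.start(), so visible inside the child

        Thread child = new Thread(() -> {
            // start() rule: guaranteed to see beforeStart == 1
            inChild = beforeStart + 1;
        });
        child.start();
        child.join();  // join() rule: all of the child's writes are visible after this

        System.out.println(inChild); // prints 2
    }
}
```

Note that neither field is volatile and no lock is taken; the visibility here comes purely from the start()/join() happens-before edges.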
Knowledge points like synchronized and thread pools cannot all fit here due to space constraints; they will be explained one by one as the series updates. Interested readers are welcome to follow along (Gie, I ran your advertisement today, time to pay my salary).
Me: (face flushing) We'll talk about salary later. Tell me about volatile first; I'm waiting to get back to moving bricks.
First, volatile is a lightweight synchronization mechanism. Once a shared variable (a member variable or a static member variable) is declared volatile, it serves two purposes:
- Visibility: when one thread changes the value of the variable, other threads immediately see that it has been changed.
- It disallows instruction reordering.
For visibility, here's an example:
static boolean flag = true;

public static void main(String[] args) throws InterruptedException {
    // thread 1
    new Thread(new Runnable() {
        @Override
        public void run() {
            while (flag) {
                System.out.println("Nothing!!!!");
            }
        }
    }).start();

    Thread.sleep(10);

    // thread 2
    new Thread(new Runnable() {
        @Override
        public void run() {
            flag = false;
        }
    }).start();
}
This code is meant to stop a thread, but it is not a correct way to stop one, because there is a small probability the thread will fail to stop: thread 1 may already hold flag in its working memory, while thread 2's modification sits in thread 2's working memory and has not yet been written back to main memory. At that point thread 1 has no idea that thread 2 changed flag, so it keeps running.
Declaring flag as volatile avoids this, for several reasons:
- The volatile keyword forces the modified value to be written back to main memory immediately.
- With volatile, when thread 2 modifies flag, the cache line holding flag in thread 1's working memory (i.e., L1 or L2) is invalidated.
- Because its cached copy of flag is invalid, thread 1 re-reads the value of flag from main memory.

Putting it together: thread 2 modifies flag in its own working memory and writes the new value back to main memory; this invalidates the cache line for flag in thread 1's working memory. When thread 1 reads flag again, it finds its cache line invalid, waits for the corresponding main-memory address to be updated, and then reads the latest value from main memory.
Disallowing instruction reordering
In the previous article we mentioned that the order in which the compiler executes code may differ from the order in which the code was written. To put it plainly, the compiler only guarantees that the final result matches what you intended; which line runs first and which runs later is up to it. That works fine for a single thread, but once multithreading comes in, all sorts of strange problems appear.
Here's a simple example:
// a and b are non-volatile variables
// flag is volatile
a = 2;        // statement 1
b = 0;        // statement 2
flag = true;  // statement 3
c = 4;        // statement 4
d = -1;       // statement 5
Since flag is volatile, instruction reordering will neither move statement 3 before statements 1 or 2, nor move statement 3 after statements 4 or 5. Note, however, that the relative order of statements 1 and 2, or of statements 4 and 5, is not guaranteed.
Moreover, the volatile keyword guarantees that statements 1 and 2 have completed by the time statement 3 executes, and that the results of statements 1 and 2 are visible to statements 3, 4, and 5.
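This reordering guarantee is exactly what makes the classic "publish with a volatile flag" pattern work. A minimal sketch (class, field, and method names are my own, not from the article): the plain write to payload cannot be reordered past the volatile write to ready, so any thread that sees ready == true is guaranteed to see the payload.

```java
public class VolatilePublish {
    private int payload;            // plain data, written before the flag
    private volatile boolean ready; // volatile flag acting as the publish barrier

    void writer() {
        payload = 42;  // cannot be reordered after the volatile write below
        ready = true;  // volatile write: publishes payload
    }

    Integer reader() {
        if (ready) {        // volatile read
            return payload; // guaranteed to see 42, never a stale 0
        }
        return null;        // not published yet
    }

    public static void main(String[] args) throws InterruptedException {
        VolatilePublish p = new VolatilePublish();
        Thread t = new Thread(p::writer);
        t.start();
        t.join();
        System.out.println(p.reader()); // prints 42
    }
}
```

Without volatile on ready, the compiler or CPU could reorder the two writes in writer(), and a concurrent reader could observe ready == true while payload is still 0.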
Me: That makes sense. Does volatile work for a++ too?
Let's look at the following code.
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileDemo implements Runnable {

    volatile int a;

    // AtomicInteger is thread-safe: concurrent increments never lose updates
    AtomicInteger realCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable r = new VolatileDemo();
        Thread thread1 = new Thread(r);
        Thread thread2 = new Thread(r);
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        // the result accumulated with a++
        System.out.println(((VolatileDemo) r).a);
        // the result from the thread-safe class
        System.out.println(((VolatileDemo) r).realCount.get());
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000; i++) {
            a++;
            // increment realCount by 1
            realCount.incrementAndGet();
        }
    }
}
The results of a typical run: the volatile counter a usually prints a value below the expected 2000, while realCount always prints exactly 2000.
Calm down... a++ is not an atomic operation. It involves several steps: read the value of a, add 1, and assign the result back to a.
This is not surprising, because volatile does not guarantee atomicity.
For example, consider the following interleaving:
- Thread 1 reads the value of a and performs the +1 (but has not yet written the result back).
- Thread 2 also reads the value of a and performs the +1.
- Thread 1 and thread 2 then both complete the assignment and write their new values back to main memory.
- Thread 2's calculation was based on the value of a before thread 1's modification, so after thread 2 finishes, one increment of a has been lost.
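As the interleaving shows, the fix is to make the read-increment-write sequence atomic. Besides AtomicInteger (used above), a synchronized counter also works; here is a minimal sketch (the class and method names are my own, not from the article):

```java
public class SyncCounter implements Runnable {
    private int count;

    // synchronized makes read-increment-write one indivisible step
    private synchronized void increment() {
        count++;
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000; i++) {
            increment();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread t1 = new Thread(c);
        Thread t2 = new Thread(c);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.count); // always 2000, no lost updates
    }
}
```

Because only one thread can be inside increment() at a time, no increment can ever be lost; the count field does not even need to be volatile here, since synchronized also provides the visibility guarantee.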
Me: Well said, that deserves an extra chicken drumstick. Now, can you summarize volatile for everyone?
To summarize:
- volatile provides visibility: for a variable shared by multiple threads, when any thread modifies it, the other threads immediately see the modified value.
- volatile is no substitute for synchronized: it provides neither atomicity nor mutual exclusion.
- volatile works only on fields, and it prevents instruction reordering around them.
- volatile provides a happens-before guarantee: any modification of a volatile variable is visible to other threads that subsequently read it.
Conclusion
In this chapter we explored the JMM further; hopefully you now understand it better. We also introduced a new knowledge point, volatile, which is fairly fundamental and very common in multithreading, so it is well worth mastering. Another full day of grinding, and a long read at that; kudos to everyone who struggled through to the end.
In the next chapter, Gie will continue with the familiar synchronized. Will it be different from what you already know? See you in the next chapter. Keep following along; for the big-factory dream, we keep grinding.
Follow me so you don't get lost
That's all for this issue. If there are any mistakes, please leave a comment to point them out; many thanks. I'm GieGie, feel free to leave a comment, and see you next time 🦮.
The articles keep coming. You can search "Flower Brother Programming" on WeChat to read them first; Java interview topics and all kinds of knowledge points will continue to be updated. Interested readers are welcome to follow along and learn together 🐮🥃.
Writing original content is not easy; how can you bear to read for free? If this article was even a little useful to you, please like, comment, or share it. That is what motivates me to keep producing quality articles. Thank you!