• You have one idea and I have one idea; when we exchange them, each of us has two ideas

  • If you can NOT explain it simply, you do NOT understand it well enough

The demo code and technical articles from this series are now organized together in a GitHub repository for easy reading and browsing; this article is included there too. If you find it useful, please give it a Star.

While I was writing this series of posts on concurrent programming in Java, a friend in a WeChat group told me he still couldn’t tell the difference between volatile and synchronized. His questions boil down to the following:

  • For which problem are volatile and synchronized roughly equivalent?

  • Why is volatile called a weak form of synchronization?

  • What problem does volatile solve other than visibility?

  • How do I choose between them?

If you can’t answer any of these questions, you are probably still fuzzy about the difference. This article walks through their subtle relationship with pictures and text.

If executing an ordinary instruction took the CPU one day, then a read or write to main memory would keep it waiting for about a year.

Limited by the “bucket principle” (a barrel only holds as much water as its shortest stave), in the CPU’s eyes the overall performance of a program is dragged down by slow memory. To patch this shortcoming, the hardware folks applied the same speed-up strategy we software folks use all the time: adding a cache (which, as it turns out, also dug a pit for the software folks to fall into).

Java Memory Model (JMM)

To even out the speed difference with memory, the CPU adds caches, and several layers of them at that.

With caches in place, the memory shortcoming is no longer so obvious and the CPU is happy. But caching brings plenty of problems of its own.

As the figure above shows, each core has its own L1 cache, and some architectures also have an L2 cache shared by all cores. With caching, when a thread accesses a shared variable that is already present in L1, it does not have to go all the way up to main memory. This is how the shortcoming of slow memory access gets patched over.

Specifically, a thread reads and writes a shared variable like this:

  1. Copy the shared variable from main memory into its own working memory
  2. Operate on the variable in working memory
  3. When done, write the updated value back to main memory

Suppose there is now a shared variable X in main memory with an initial value of 0

Thread 1 accesses variable X first, as described above:

  1. Variable X is not found in L1 or L2, so it is finally read from main memory
  2. Variable X is copied into L2 and L1
  3. The value of X is changed to 1 in L1 and written back, layer by layer, to main memory

At this point, in thread 1’s eyes, X looks like this:

Next, thread 2 accesses variable X by following the same steps above

  1. Variable X is not found in L1
  2. Variable X is found in L2
  3. Variable X is copied from L2 into L1
  4. The value of X is changed to 2 in L1 and written back, layer by layer, to main memory

At this point, thread 2’s view of X looks like this:

Now let’s see what happens when thread 1 accesses variable X again:

At this point, if thread 1 writes x=1 back again, it overwrites thread 2’s result of x=2. For the very same shared variable, thread 1 sees x=1 while thread 2 sees x=2: this is the memory-visibility problem of shared variables.
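To make the problem concrete, here is a minimal, runnable sketch (the class and field names are mine, not from the article): without volatile or synchronized, the reader thread may keep using the stale copy of ready in its own working memory, so it can spin forever or print a stale number.

public class StaleReadDemo {
	private static boolean ready;   // deliberately NOT volatile
	private static int number;

	public static void main(String[] args) throws InterruptedException {
		Thread reader = new Thread(() -> {
			while (!ready) {
				// busy-wait; this thread may never observe the write below
			}
			System.out.println("number = " + number);
		});
		reader.start();

		Thread.sleep(100);          // give the reader a head start
		number = 42;
		ready = true;               // this write may stay invisible to the reader
		reader.join();
	}
}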

How do we fill this hole? Time for today’s two heroes to make their debut. Before we get to volatile, let’s start with synchronized, the one you’re most familiar with.

synchronized

The synchronized keyword is what we usually reach for to solve thread-safety problems. Reasonable choice or not, let’s first look at how synchronized solves the memory-visibility problem described above:

  • When a thread enters a synchronized block, the variables used inside the block are invalidated in the thread’s working memory and re-read from main memory
  • When a thread exits a synchronized block, the changes made to shared variables inside the block are flushed back to main memory (a small sketch follows this list)
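As a minimal sketch of those two rules (the class and lock-object names are mine), the block form looks like this; the synchronized methods used later in this article are simply shorthand for locking on this:

public class SharedCounter {
	private final Object lock = new Object();
	private int value;

	public void setValue(int newValue) {
		synchronized (lock) {   // on exit, the change is flushed to main memory
			this.value = newValue;
		}
	}

	public int getValue() {
		synchronized (lock) {   // on entry, the working-memory copy is invalidated
			return value;       // and the value is re-read from main memory
		}
	}
}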

Without further ado, let’s press on and look at volatile.

volatile

When a variable is declared volatile:

  • When a thread reads the shared variable, it first invalidates the copy in its working memory and then fetches the latest value from main memory
  • When a thread writes the shared variable, the new value is not left cached in a register or elsewhere (the so-called “working memory”); it is flushed straight back to main memory

Sounds like exactly the same old story? You’re right.

So, when synchronized or volatile is used, the steps for multiple threads working with a shared variable look like this:

In simple terms, the threads no longer rely on the copies of the shared variable in L1 and L2, but access main memory directly

Let’s look at a concrete example

public class ThreadNotSafeInteger {
	/** the shared variable */
	private int value;

	public int getValue() {
		return value;
	}

	public void setValue(int value) {
		this.value = value;
	}
}

After the preceding analysis, it is obvious that the shared variable value in the code above carries a serious hidden risk, so let’s try to fix it.

Use the volatile keyword first:

public class ThreadSafeInteger {
	/** the shared variable */
	private volatile int value;

	public int getValue() {
		return value;
	}

	public void setValue(int value) {
		this.value = value;
	}
}

Then use the synchronized keyword:

public class ThreadSafeInteger {
	/** the shared variable */
	private int value;

	public synchronized int getValue() {
		return value;
	}

	public synchronized void setValue(int value) {
		this.value = value;
	}
}

The two versions give exactly the same result and are equivalent in solving the [current] problem: visibility of the shared variable.

If synchronized and volatile were exactly the same, there would be no need to design two keywords. Here’s another example:

@Slf4j
public class VisibilityIssue {
	private static final int TOTAL = 10000;

	// volatile alone does not fix this, because it does not provide atomicity
	private volatile int count;

	public static void main(String[] args) {
		VisibilityIssue visibilityIssue = new VisibilityIssue();

		Thread thread1 = new Thread(() -> visibilityIssue.add10KCount());
		Thread thread2 = new Thread(() -> visibilityIssue.add10KCount());

		thread1.start();
		thread2.start();

		try {
			thread1.join();
			thread2.join();
		} catch (InterruptedException e) {
			log.error(e.getMessage());
		}

		log.info("Count is: {}", visibilityIssue.count);
	}

	private void add10KCount() {
		int start = 0;
		while (start++ < TOTAL) {
			this.count++;
		}
	}
}

Unlike the simple assignment in setValue (this.value = value) above, the statement here is this.count++. If you run the code, you’ll find that the value of count almost always lands somewhere between 10,000 (1w) and 20,000 (2w).

Now change the method above to use synchronized:

@Slf4j
public class VisibilityIssue {
	private static final int TOTAL = 10000;
	private int count;

	// ... main method same as above

	private synchronized void add10KCount() {
		int start = 0;
		while (start++ < TOTAL) {
			this.count++;
		}
	}
}

Run the code again and count is always 20,000 (2w).

The two pieces of code are modified in just the same way, once with volatile and once with synchronized; why do they give the same result in the first example but different results here?

That’s the difference

count++ is a single line of Java code, but at the CPU level it is at least three steps: read the current value, add 1, and write the result back (try javap -c and see for yourself).
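As a rough illustration of those three steps (the constant-pool indexes and exact listing will differ on your machine), the part of the javap -c output that corresponds to this.count++ looks something like this:

	aload_0
	dup
	getfield      #2    // Field count:I   <- read the current value
	iconst_1
	iadd                //                 <- add 1
	putfield      #2    // Field count:I   <- write the result back

Another thread can be scheduled in between any of these instructions.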

synchronized is an exclusive lock: only one thread at a time can be inside the add10KCount method, and the other callers block. So all three steps are executed by the same thread before any other thread gets a turn. This is what we usually call atomicity.

volatile, on the other hand, is non-blocking (non-exclusive): nothing guarantees that another thread will not cut in between those three steps. As people often put it, volatile guarantees visibility, but not atomicity.
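To see why increments get lost, here is one possible interleaving of those three steps (the concrete values are only an illustration):

	Thread 1: read count = 5
	Thread 2: read count = 5              (both read the same value)
	Thread 1: add 1 and write count = 6
	Thread 2: add 1 and write count = 6   (thread 1’s increment is lost)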

So, in one sentence, when should the volatile keyword be used? (Important things are said three times, even if that line feels a bit dated.)

If writing the variable’s new value does not depend on its current value, you can use volatile

If writing the variable’s new value does not depend on its current value, you can use volatile

If writing the variable’s new value does not depend on its current value, you can use volatile

count++, for example, is a three-step read-calculate-write operation that depends on the current value, so volatile alone cannot make it safe.
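A minimal sketch of a case that does fit the rule (the class and field names are mine): a shutdown flag whose new value does not depend on its old value, so volatile is enough to make the change visible to other threads.

public class Worker {
	// the new value (true) never depends on the old value, so volatile is sufficient
	private volatile boolean shutdownRequested;

	public void requestShutdown() {
		shutdownRequested = true;   // a plain write, not a read-modify-write
	}

	public void runLoop() {
		while (!shutdownRequested) {
			// do one unit of work; the loop sees the flag promptly once it is set
		}
	}
}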

With that, the first question of this article, for which problem volatile and synchronized are roughly equivalent, has been answered: visibility of shared variables.

First, imagine yourself in this situation: within the same stretch of time you have to [write a few lines of code], then go [count some money], then [sing part of a song], then come back to [writing code], switching back and forth like this over and over. Each time you must also pick up exactly where you left off (keep writing from the last line, keep counting from the last total, keep singing from the last verse) without making a mistake. Tiring, isn’t it?

synchronized is exclusive: threads have to queue up, and moving from one thread to the next requires a switch. That switch is just like the example above: to complete it, the CPU also has to remember where the previous thread left off, which wears its brain out. This is the familiar point that context switching brings a lot of overhead.

volatile, on the other hand, is non-blocking, so it is the weaker, lighter-weight counterpart of synchronized when it comes to solving the visibility problem of shared variables

At this point, the second question of the article, why volatile is a form of weak synchronization, should make sense as well.

In addition to visibility, volatile also addresses the instruction reordering done by compiler and CPU optimizations, as described in these earlier articles:

  • Orderliness, visibility, happens-before
  • What should we talk about when interviewing for volatile?

After reading those two articles, I believe the third question will be easy to answer.

Knowing all this, I believe you also know how to choose between them.


Soul-searching questions

  1. Do you understand the thread life cycle? How do threads move between the different states?
  2. Why do threads need a wake-up notification mechanism?

In the next article, we’ll talk about why notifyAll is recommended instead of notify.

Personal blog: https://dayarch.top

Add me on WeChat to join the group for learning and discussion; note “join the group” in your request.

Welcome to keep following the public account “日拱一兵”:

  • Sharing of cutting-edge, practical Java content
  • A roundup of productivity tools | reply “tool”
  • Interview question analysis and solutions
  • Technical resources to download | reply “data”

My aim is to make learning the Java technology stack feel like reading a detective novel: simplify complex problems, make abstract problems concrete, and break technical problems down step by step with plenty of diagrams. New articles keep coming, so please stay tuned……