What is the essential difference between volatile and synchronized?

Both volatile and synchronized ensure memory visibility. The difference is that synchronized additionally ensures mutual exclusion: only one thread at a time can acquire the lock, while other threads block waiting for it to be released.

The difference from synchronized?

Synchronized ensures that shared data can be updated by only one thread at a time.

Volatile can't, because volatile only ensures that a single read or a single write is atomic. In many cases, however, an update to shared data depends on the old value of that data: the old value is read first, then incremented by one, then written back. So volatile does not guarantee true synchronization.
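The read-then-increment-then-write sequence above can be sketched as follows (the class and method names are illustrative; this is what `count++` expands to, written out as explicit steps):

```java
// Sketch: why count++ is not atomic even when count is volatile.
class VolatileIncrement {
    static volatile int count = 0;

    static void increment() {
        // "count++" is really three separate steps:
        int old = count;    // 1. read the current (old) value
        int next = old + 1; // 2. add one
        count = next;       // 3. write the new value back
        // The read and the write are each atomic on their own, but another
        // thread can interleave between step 1 and step 3, losing an update.
    }

    public static void main(String[] args) {
        increment();
        increment();
        System.out.println(count); // prints 2 single-threaded; with two
                                   // threads the result could be smaller
    }
}
```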

What is the application scenario of volatile?

When should volatile be used? When the write to the shared variable does not depend on the shared variable itself (that is, the new value is not computed from its old value), volatile alone is enough to keep the variable thread-safe. The reason is that when a variable appears on the right-hand side of the assignment, the update is no longer a single write; it contains two operations: 1. read the value of the right-hand variable; 2. write the result to the shared variable. The write on its own is atomic, but adding the read of the right-hand variable makes it two atomic operations, not one, and the pair together is not atomic.
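A minimal sketch of the two patterns (field and method names are illustrative): a plain write is safe with volatile alone, while an update whose right-hand side reads shared state is not.

```java
// Sketch: when volatile alone is enough, and when it is not.
class VolatileFlag {
    // Safe pattern: the new value does not depend on the old value.
    static volatile boolean shutdown = false;

    static void requestShutdown() {
        shutdown = true; // a single write: safe with volatile alone
    }

    // Unsafe pattern: the shared variable appears on the right-hand side.
    static volatile int total = 0;

    static void unsafeAdd(int delta) {
        // Two operations: 1. read total; 2. write total + delta.
        // Each is atomic, but the pair is not, so concurrent calls can lose updates.
        total = total + delta;
    }
}
```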

Reference: titanwolf.org/Network/Art… (the linked article is about concurrent programming in Java)


Examples

Example 2 – AtomicLong

The JDK's concurrency package implements AtomicLong with a volatile value field internally, which is equivalent to declaring our own data with volatile. The difference is that the JDK's AtomicLong does this work for you (and adds atomic compound updates on top), but the role and essence are the same.
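A short sketch of AtomicLong in use: unlike a bare volatile long, its `incrementAndGet` performs the read-add-write as one atomic step (a CAS loop internally), so no updates are lost even with two threads incrementing concurrently.

```java
import java.util.concurrent.atomic.AtomicLong;

// Two threads increment an AtomicLong; incrementAndGet is an atomic
// read-modify-write, so the final count is always exact.
class AtomicLongDemo {
    static long countTo(int perThread) {
        AtomicLong counter = new AtomicLong(0);
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                counter.incrementAndGet(); // read + add + write as one atomic step
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(countTo(10_000)); // always prints 20000
    }
}
```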

Individual reads or writes are atomic operations

Why is a single read or write an atomic operation? This is guaranteed by the Java specification.

To be exact, this holds for non-long/double data. Why is that? Because long and double are 64-bit, and the Java specification only guarantees that 32-bit operations are atomic: a JVM is allowed to split a non-volatile 64-bit read or write into two 32-bit halves. Declaring a long or double field volatile makes its reads and writes atomic again.
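A minimal sketch of the 64-bit case (field names are illustrative). The torn read is a theoretical possibility the specification permits for non-volatile 64-bit fields, not something this single-threaded snippet can demonstrate:

```java
// Sketch: 64-bit fields and atomicity.
class LongAtomicity {
    static long plain = 0L;         // non-volatile long: the JVM MAY split a
                                    // read or write into two 32-bit halves
    static volatile long safe = 0L; // volatile long: reads and writes are atomic

    // With `plain`, a reader running concurrently with a writer could in
    // principle observe a "torn" value (the high half of one write combined
    // with the low half of another). Declaring the field volatile rules that out.
    public static void main(String[] args) {
        safe = 0x100000001L; // written as one indivisible 64-bit write
        System.out.println(Long.toHexString(safe));
    }
}
```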

What does volatile do?

When shared data is updated by one thread, other threads can immediately read the latest value (this is known as visibility). That is, without volatile, the latest value might not be read; with it, reading the latest value is guaranteed.

Synchronized?

It ensures mutual exclusion, that is, only one thread can update shared data at a time. At the code level, this means locking the method or code block that updates the shared data, so that only one thread can execute it at a time. While a thread holds the lock, no other thread can acquire it, so no other thread can execute the method or code block, and therefore no other thread can update the shared data.
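The locking described above can be sketched as a synchronized counter (class and method names are illustrative). The lock makes the whole read-add-write an indivisible unit, so two threads incrementing concurrently never lose an update:

```java
// Sketch: synchronized makes the read-modify-write one indivisible unit.
class SyncCounter {
    private int count = 0;

    synchronized void increment() { // only one thread can be inside at a time
        count++;                    // read + add + write, protected by the lock
    }

    synchronized int get() {        // same lock, so the read sees the latest value
        return count;
    }

    static int runTwoThreads(int perThread) {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) c.increment();
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.get();
    }

    public static void main(String[] args) {
        System.out.println(runTwoThreads(10_000)); // always prints 20000
    }
}
```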

What about visibility? Synchronized also ensures visibility: after thread A has modified the shared data, thread B can see the latest value. This may sound like stating the obvious, and on the surface it is; why it needs stating at all only becomes clear with the Java memory model.

Java memory model

An understanding of the Java memory model and its application to concurrency.

The main goal of the Java memory model is to define access rules for variables in a program.

Communication between Java threads is controlled by the Java Memory Model (JMM for short in this article).

All variables are stored in main memory, and each thread also has its own working memory, which holds a copy of the main-memory variables used by that thread. All of a thread's operations on variables must be performed in its working memory, rather than directly on variables in main memory. Different threads cannot directly access variables in each other's working memory, so passing variable values between threads must go through main memory.

Interthread communication:

  1. First, thread A flushes (writes) the updated shared variable from local memory A to main memory. CPU 1 (thread 1) -> cache 1 (thread working memory 1) -> main memory

  2. Thread B then goes to main memory to read the shared variable that thread A updated previously. CPU 2 (thread 2) -> cache 2 (thread working memory 2) -> main memory

Threads communicate only with their own working memory, and the working memory in turn communicates with main memory.

The working memory of different threads cannot communicate directly and must go through main memory. For example, thread 1 writes to thread working memory 1, and then thread working memory 1 writes to main memory; thread working memory 2 then reads the data from main memory, after which thread 2 can read it from thread working memory 2.
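The two communication steps above can be sketched in code (names are illustrative): thread A writes the data and then a volatile flag, which flushes the update toward main memory; thread B spins until it reads the flag's new value, at which point it is guaranteed to also see the data:

```java
// Sketch: thread A publishes data via a volatile write; thread B reads it back.
class Visibility {
    static volatile boolean ready = false;
    static int payload = 0; // published by the volatile write/read of `ready`

    static int observe() {
        final int[] seen = new int[1];
        Thread b = new Thread(() -> {
            while (!ready) { }   // spin until thread A's write becomes visible
            seen[0] = payload;   // guaranteed to see 42 (happens-before)
        });
        b.start();
        payload = 42;  // step 1: thread A updates the shared data...
        ready = true;  // ...and the volatile write flushes it to main memory
        try {
            b.join();  // step 2: thread B has read the update from main memory
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println(observe()); // prints 42
    }
}
```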


Architecture diagram

CPU dimension – Architecture diagram

Thread dimension – Architecture diagram

Explanation: The architecture diagram of the thread dimension may be hard to understand on its own (why does working memory exist at all?). In essence, it mirrors the architecture diagram of the CPU dimension.

The point of the CPU-dimension architecture diagram is that there are now multiple CPUs, each with its own cache. Why does each CPU need its own cache, when there is already memory (the main memory in the diagram)? Because cache is fast: it is much faster than main memory, and since faster storage is more expensive, it is also made much smaller.

The access speeds of various storage devices are as follows:

Registers are the fastest and can be accessed in one clock cycle.

Next is the cache, which can be accessed in a few clock cycles.

Ordinary memory can be accessed in tens or hundreds of clock cycles.

zhuanlan.zhihu.com/p/37749443

Memory visibility

The memory model is an abstraction and can seem pointless, but without understanding it you cannot understand a thread's memory visibility: for some reason, thread 1 has modified the shared data, yet thread 2 does not necessarily read the latest value. The reason lies in the memory model: as the CPU-dimension architecture diagram above shows, there is an extra layer of intermediate memory (the cache / thread working memory) between the CPU/thread and main memory.

One more intermediary means one more source of data inconsistency.

So how do you solve the problem of inconsistent data? Use synchronized or volatile: either one ensures that after thread 1 has modified the shared data, thread 2 can read its latest value.

The only difference between synchronized and volatile is that synchronized also provides mutual exclusion. In other words, only one thread can acquire the lock on an object at a time, and other threads must wait for the lock to be released. Essentially, only one thread can execute a method or block of code at a time, which means only one thread can update the shared data at a time.

References

Segmentfault.com/a/119000002…

www.zhihu.com/question/38…