Visibility at the CPU level

When a multi-core CPU runs a multi-threaded task, each core holds its own cache of the data it works on.

When a thread operates on a shared area, the contents of its core's cache must then be synchronized back to main memory.

As a result, when thread A and thread B rewrite the shared area at the same time, their caches can end up holding inconsistent data.
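A minimal sketch, with hypothetical class and field names, of how this shows up in Java: the reader thread may keep using its cached copy of `running` and never notice the writer's update.

```java
public class VisibilityProblem {
    // NOTE: deliberately not volatile -- the reader may never see the update
    private static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait: without synchronization the CPU/JIT may keep
                // re-reading a stale cached copy of 'running' forever
            }
            System.out.println("reader observed running == false");
        });
        reader.start();

        Thread.sleep(1000);
        running = false;            // write from the main thread
        System.out.println("main thread set running = false");
        reader.join();              // may hang on many JVMs/CPUs
    }
}
```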

The volatile keyword: a memory-barrier-based solution. A write by the writer thread sends invalidate messages to the other CPUs, which invalidates the reader thread's cached copy of the shared data, so later reads go back to memory for the fresh value. This achieves the effect of visibility.

Extension: at the assembly level, a volatile write adds a lock-prefixed instruction, which enforces visibility through bus locking or cache locking.
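Returning to the sketch above, declaring the flag volatile is the fix; this minimal variation is guaranteed to terminate.

```java
public class VisibilityFixed {
    // volatile: each write invalidates other CPUs' cached copies of the flag,
    // so the reader is guaranteed to eventually observe the update
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) { /* busy-wait */ }
            System.out.println("reader observed running == false");
        });
        reader.start();

        Thread.sleep(1000);
        running = false;   // volatile write
        reader.join();     // returns shortly after the write becomes visible
    }
}
```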

Bus locks and cache locks

Bus lock: when one CPU accesses memory shared with the other CPUs, it asserts a LOCK signal on the bus so that the other CPUs cannot access the shared memory through the bus until the operation completes. This works, but the overhead is high because it blocks all other memory traffic.

Cache lock: based on the CPU cache coherence protocol; common protocols are MSI, MESI and MOSI. The MESI states are listed below (a simplified sketch of the state transitions follows the list):

  • M (Modified): the shared data is cached only in the current CPU's cache and is in the modified state, i.e., the cached data is inconsistent with the data in main memory

  • E (Exclusive): the exclusive state; the data is cached only in the current CPU's cache and has not been modified

  • S (Shared): the data may be cached by multiple CPUs, and the copy in each cache is the same as the data in main memory

  • I (Invalid): the cache line is invalid
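A toy Java model of these transitions, purely illustrative: it ignores bus snooping, write-backs, and the Exclusive-vs-Shared decision when a line is first fetched.

```java
/** Simplified MESI model for one cache line, viewed from a single core. */
enum MesiState {
    MODIFIED, EXCLUSIVE, SHARED, INVALID;

    /** This core writes the line: it gains exclusive ownership; other copies are invalidated. */
    MesiState onLocalWrite()  { return MODIFIED; }

    /** This core reads the line: an invalid line is fetched; assume it becomes SHARED here. */
    MesiState onLocalRead()   { return this == INVALID ? SHARED : this; }

    /** Another core writes the line: our copy becomes invalid. */
    MesiState onRemoteWrite() { return INVALID; }

    /** Another core reads the line: a MODIFIED line is written back first, then both share it. */
    MesiState onRemoteRead()  { return this == INVALID ? INVALID : SHARED; }

    public static void main(String[] args) {
        MesiState s = INVALID;
        s = s.onLocalRead();    // INVALID -> SHARED  (fetched from memory)
        s = s.onLocalWrite();   // SHARED  -> MODIFIED (other copies invalidated)
        System.out.println(s);  // prints MODIFIED
    }
}
```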

Memory barriers

Problem solved: the CPU cannot know, at the hardware level, which ordering the software actually requires, so the decision of where reordering must be constrained is handed to the software, which inserts memory barrier instructions at those points.

A barrier flushes pending writes from the store buffers to memory, making them visible to other threads that access the same shared memory. The common kinds are listed below; a sketch using Java's explicit fences follows the list.

  • Store (write) memory barrier: tells the processor to flush all data held in the store buffers before the barrier to main memory. In short, the results of instructions before the write barrier are visible to reads or writes after it

  • Load (read) memory barrier: ensures that read operations issued after the barrier are performed after the barrier, so they do not see stale data. Used together with a write barrier, memory updates made before the write barrier are visible to read operations made after the read barrier

  • Full memory barrier: ensures that all read and write operations before the barrier are committed to memory before any read or write after the barrier is performed
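As referenced above, Java exposes explicit fences on java.lang.invoke.VarHandle (Java 9+) that roughly correspond to these barriers. This is a low-level sketch with hypothetical names; ordinary code would simply declare the flag volatile instead.

```java
import java.lang.invoke.VarHandle;

public class FenceSketch {
    static int data = 0;
    static boolean ready = false;   // plain fields; the fences supply the ordering

    // Writer: publish 'data', then raise the flag.
    static void writer() {
        data = 42;
        VarHandle.releaseFence();   // ~ write barrier: the store to 'data'
                                    //   cannot be reordered after the store to 'ready'
        ready = true;
    }

    // Reader: check the flag, then consume 'data'.
    static void reader() {
        if (ready) {
            VarHandle.acquireFence();   // ~ read barrier: the load of 'data'
                                        //   cannot float above the load of 'ready'
            System.out.println(data);   // expected to print 42 once 'ready' is seen
        }
        // VarHandle.fullFence() would correspond to the full memory barrier
    }
}
```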

JMM

JMM, the Java Memory Model: it gives the programmer well-defined means (such as volatile and synchronized) to disable caching and instruction reordering where needed, and thereby addresses the visibility and ordering of memory accesses.
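To tie this back to the volatile discussion (hypothetical names again): under the JMM, a volatile write/read pair establishes a happens-before edge, so the fence-level pattern above can be written simply as:

```java
public class SafePublication {
    static int data = 0;
    static volatile boolean ready = false;   // the JMM orders surrounding accesses

    static void writer() {
        data = 42;       // plain write ...
        ready = true;    // ... made visible by the subsequent volatile write
    }

    static void reader() {
        if (ready) {                   // volatile read
            System.out.println(data);  // guaranteed to print 42: the volatile
                                       // write/read pair rules out both stale
                                       // caching and harmful reordering
        }
    }
}
```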