volatile and synchronized are widely used in Java concurrent programming, and learning to use them correctly is essential for writing safe concurrent code.
First, volatile

In Java, multiple threads accessing a shared variable is a common concurrency scenario. Because several threads may modify the same variable at the same time, access to shared variables must be restricted. Java provides several ways to do this, at different costs; volatile is a very lightweight one. If a variable is declared volatile, the JVM guarantees that all threads see the same value of that variable.

Why might different threads otherwise see different values? To speed up processing, the CPU does not interact with main memory directly. It first reads memory into its internal caches (L1, L2, etc.) and then works against those caches. On a multi-CPU machine, the threads accessing a shared variable may be running on different CPUs, so the same variable may be cached in several CPU caches at once. Without volatile, the copies seen by different threads may be inconsistent: shared variables can hold different values in different threads, because we do not know when each CPU will write its cached value back to memory.

In terms of implementation, when a volatile shared variable is written, the JVM emits a Lock-prefixed instruction to the CPU, which writes the cache line containing the variable (the smallest unit of storage the cache can allocate) back to system memory. However, with multiple processors, after one CPU's cache line is written back to system memory, the copies of that variable cached on other CPUs are still stale, which would cause problems in subsequent operations. Therefore, to make all threads see the same value, a cache coherence protocol is needed: each CPU snoops the data passing over the bus to determine whether its own cached copy has expired. If it has expired, the CPU invalidates that cache line; the next time the CPU accesses it, it finds the line invalid and reloads it from memory.

To summarize, volatile rests on two principles: 1. The JVM's Lock-prefixed instruction causes the CPU to write the variable's cache line back to system memory. 2. To maintain cache coherence on multi-CPU machines, one CPU's write back to memory invalidates the corresponding cache lines on all other CPUs.
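The visibility guarantee above can be illustrated with a minimal sketch (class and method names here are my own, chosen for illustration): a writer thread publishes a value and then sets a volatile flag, and a reader thread spinning on that flag is guaranteed to see the write.

```java
public class VolatileVisibility {
    // Declared volatile so a write by one thread is promptly
    // visible to reads from other threads (no stale cached copy).
    private static volatile boolean ready = false;
    private static int payload = 0;

    static int awaitPayload() {
        Thread writer = new Thread(() -> {
            payload = 42;  // ordinary write, published by the volatile write below
            ready = true;  // volatile write: triggers the write-back described above
        });
        writer.start();
        // Volatile read: without volatile, this loop could spin forever
        // on a stale cached value of `ready`.
        while (!ready) {
            Thread.onSpinWait();
        }
        return payload;
    }

    public static void main(String[] args) {
        System.out.println("observed payload = " + awaitPayload());
    }
}
```

Note that the volatile write to `ready` also publishes the earlier plain write to `payload`, which is why the reader observes 42 rather than 0.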
Second, synchronized
Compared with volatile, synchronized is heavyweight. First, know that in Java every object can be used as a lock. Three cases can be distinguished:

1. For a synchronized instance method, the lock is the current instance object.
2. For a synchronized static method, the lock is the Class object of the current class.
3. For a synchronized block, the lock is the object configured in the synchronized parentheses.

When a thread tries to enter synchronized code, it must first acquire the lock, and it must release the lock whether it exits normally or throws an exception. The JVM implements method synchronization and code-block synchronization by entering and exiting Monitor objects, using the monitorenter and monitorexit instructions. After compilation, a monitorenter instruction is inserted at the start of the synchronized block, and monitorexit instructions are inserted at the method's normal exit and at exception exits; the JVM guarantees that every monitorenter has a matching monitorexit. When monitorenter executes, the thread attempts to acquire the monitor lock of the object referenced by the synchronized code; the lock information is stored in the Java object header.
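The three lock choices above can be shown side by side in a small sketch (the class and counter here are hypothetical, for illustration only). Note that the three forms lock on three different objects, so they do not exclude one another; only threads contending on the same lock object serialize.

```java
public class SyncForms {
    private static int counter = 0;

    // 1. Synchronized instance method: the lock is the current instance (this).
    public synchronized void instanceIncrement() {
        counter++;
    }

    // 2. Synchronized static method: the lock is SyncForms.class.
    public static synchronized void staticIncrement() {
        counter++;
    }

    // 3. Synchronized block: the lock is the object in the parentheses.
    //    This form compiles to monitorenter/monitorexit around the block.
    private final Object lock = new Object();
    public void blockIncrement() {
        synchronized (lock) {
            counter++;
        }
    }

    public static int getCounter() {
        return counter;
    }

    public static void main(String[] args) {
        SyncForms f = new SyncForms();
        f.instanceIncrement();   // holds the lock on f
        staticIncrement();       // holds the lock on SyncForms.class
        f.blockIncrement();      // holds the lock on f.lock
        System.out.println("counter = " + getCounter());
    }
}
```

Because the instance method, static method, and block each guard `counter` with a different lock, this class as written would not be thread-safe under contention; to protect one shared variable, all accessors must agree on a single lock object.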