Recently, I was reading the book "The Art of Concurrent Programming". Having read many blog posts about multi-threading before, most of it went smoothly, but the section on Synchronized optimization left me with a few doubts: exactly when does a biased lock inflate into a lightweight lock, and when does a lightweight lock inflate into a heavyweight lock? That question is how this article came about, and Synchronized is its main character. Compared with the volatile keyword, which only guarantees visibility, and the various thread-safe concurrent containers, the implicit lock Synchronized is somewhat heavyweight. But many complex concurrency situations genuinely require locking for safety, so Java 1.6 optimized Synchronized. Its performance is greatly improved in two scenarios: a single thread repeatedly executing a Synchronized block (using biased locks), and multiple threads executing a Synchronized block in turn without contention (using lightweight locks). When multiple threads actually fight over the lock, Synchronized still uses a heavyweight lock, unchanged.
Lock inflation
In Java, there are four lock states, from lowest to highest: the unlocked state, the biased lock state, the lightweight lock state, and the heavyweight lock state. The state is upgraded gradually as contention increases. Locks can be upgraded but never downgraded: once a biased lock has been upgraded to a lightweight lock, it cannot fall back to a biased lock. The purpose of this policy is to improve the efficiency of acquiring and releasing locks.
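The one-way progression above can be modeled as a tiny state machine. Note this is purely illustrative: the real state lives in the lock flag bits of the object header's Mark Word and is managed by the JVM; the enum and the `upgradeTo` method here are assumptions made for the sketch.

```java
// Illustrative model of the four lock states and the upgrade-only rule.
// The JVM tracks the real state in the object header, not in Java code.
enum LockState {
    UNLOCKED, BIASED, LIGHTWEIGHT, HEAVYWEIGHT;

    /** Moves to a strictly higher state; downgrades are never allowed. */
    LockState upgradeTo(LockState next) {
        if (next.ordinal() <= this.ordinal()) {
            throw new IllegalStateException("locks never downgrade: " + this + " -> " + next);
        }
        return next;
    }
}

class LockStateDemo {
    public static void main(String[] args) {
        LockState s = LockState.UNLOCKED;
        s = s.upgradeTo(LockState.BIASED);      // a single thread enters
        s = s.upgradeTo(LockState.LIGHTWEIGHT); // a second thread appears
        s = s.upgradeTo(LockState.HEAVYWEIGHT); // threads actually contend
        System.out.println(s);                  // HEAVYWEIGHT
    }
}
```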
Biased lock inflation
The lock is stored in the Java object header. When a single thread first enters a Synchronized block, it marks the object header as biased and stores its own thread ID there via CAS. When the same thread enters again, it only needs to check that the thread ID matches to re-enter, which is very cheap. This is the single-thread scenario that biased locks optimize. If a second thread then tries to enter the same Synchronized block, it sees that the thread ID in the object header is not its own. The JVM then checks whether the thread holding the biased lock is still alive. If it is not, the bias flag is reset to 0, and the second thread repeats the steps above to acquire the biased lock for itself. If the original bias holder is still alive, it is suspended and the biased lock is upgraded to a lightweight lock: first the lock flag bits in the object header are changed to 00, then space for a lock record is created in the stack frame of the thread that held the biased lock, and the Mark Word from the object header is copied into that lock record. The thread then uses CAS to replace the Mark Word in the object header with a pointer to the lock record. This completes the inflation, after which the thread, now holding a lightweight lock, is resumed.
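The biased fast path described above can be sketched roughly as follows. The real bias lives in the object header's Mark Word inside the JVM; the `AtomicLong` standing in for it here, and the name `tryBiasedEnter`, are assumptions made for illustration.

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of the biased-lock fast path: one CAS to install the
// bias, then plain reads on every re-entry by the same thread.
class BiasedLockSketch {
    private static final long UNBIASED = 0L; // 0 = no bias installed yet
    private final AtomicLong biasOwner = new AtomicLong(UNBIASED);

    /** Returns true if the current thread may enter via the biased fast path. */
    boolean tryBiasedEnter() {
        long self = Thread.currentThread().getId();
        if (biasOwner.get() == self) {
            return true; // re-entry: just a read and a compare
        }
        if (biasOwner.compareAndSet(UNBIASED, self)) {
            return true; // first acquisition: one CAS installs the bias
        }
        // Bias belongs to another thread: the JVM would now revoke it and,
        // if that thread is still alive, inflate to a lightweight lock.
        return false;
    }

    public static void main(String[] args) {
        BiasedLockSketch lock = new BiasedLockSketch();
        System.out.println(lock.tryBiasedEnter()); // first entry installs the bias
        System.out.println(lock.tryBiasedEnter()); // re-entry hits the fast path
    }
}
```

Both calls print `true`: the second entry never touches a CAS, which is exactly why the biased path is so cheap for a single thread.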
Lightweight lock inflation
When the thread holding a lightweight lock unlocks, it uses CAS to replace the pointer in the object header back with the original Mark Word. When a new thread takes the lock, it uses CAS to replace the Mark Word in the object header with a pointer to its own lock record. This is the scenario lightweight locks optimize: multiple threads executing synchronized code alternately, without overlapping. When either of these two CAS operations fails, the lightweight lock inflates into a heavyweight lock, and the new thread spins while waiting for it; if spinning fails to acquire the lock within a certain time, the thread is blocked. This may differ slightly from The Art of Concurrent Programming, where the lightweight lock inflates into a heavyweight lock only after a certain amount of spinning has failed. My understanding is that the spin itself is a JVM optimization applied to the heavyweight lock, so the lightweight lock inflates as soon as the competing CAS fails.
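The CAS dance above can be sketched with an `AtomicReference` playing the role of the Mark Word. In the real JVM the lock record lives in the acquiring thread's stack frame and the pointer swap happens in the object header; the `LockRecord` class, the `SPIN_LIMIT` value, and the method names here are assumptions made for illustration.

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified sketch of lightweight locking: CAS a pointer to our lock record
// into the "Mark Word", spin briefly on failure, then give up (inflate).
class LightweightLockSketch {
    /** Stand-in for the stack-frame lock record holding the displaced Mark Word. */
    static final class LockRecord { }

    // null plays the role of the unlocked Mark Word.
    private final AtomicReference<LockRecord> markWord = new AtomicReference<>(null);
    private static final int SPIN_LIMIT = 100; // illustrative; the JVM adapts this

    /** Returns the lock record on success, or null when spinning gave up
     *  (the point at which a real JVM would inflate to a heavyweight lock). */
    LockRecord tryLightweightLock() {
        LockRecord record = new LockRecord();
        for (int i = 0; i < SPIN_LIMIT; i++) {
            // CAS the unlocked Mark Word to a pointer at our lock record.
            if (markWord.compareAndSet(null, record)) {
                return record;
            }
            Thread.onSpinWait(); // spin, hoping the owner unlocks soon
        }
        return null;
    }

    void unlock(LockRecord record) {
        // Unlock: CAS the pointer back to the unlocked state.
        markWord.compareAndSet(record, null);
    }

    public static void main(String[] args) {
        LightweightLockSketch lock = new LightweightLockSketch();
        LockRecord r = lock.tryLightweightLock();
        System.out.println(r != null);                          // true: no contention
        System.out.println(lock.tryLightweightLock() != null);  // false: CAS fails, spin gives up
        lock.unlock(r);
        System.out.println(lock.tryLightweightLock() != null);  // true: free again
    }
}
```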