Optimistic locking
Optimistic locks take an optimistic view: they assume reads dominate writes and that concurrent writes are rare. Instead of locking up front, each update compares a version number: if the version matches, the update proceeds; if it does not, the update fails and the data must be re-read.
CAS Compare and Swap
CAS is a common optimistic-lock implementation. It takes three parameters, CAS(V, E, N), where V is the variable to update, E is the expected old value, and N is the new value. V is changed to N only if its current value equals E. If the value of V differs from E, another thread has already changed the variable, and the CAS fails.
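A minimal sketch of CAS using the JDK's AtomicInteger, whose compareAndSet method maps directly onto CAS(V, E, N):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(5);

        // CAS(V, E, N): succeeds only when the current value equals the expected one
        boolean updated = v.compareAndSet(5, 10); // E = 5 matches, so V becomes 10
        System.out.println(updated + " " + v.get()); // true 10

        // A stale expected value makes the CAS fail and leaves V unchanged
        boolean stale = v.compareAndSet(5, 20);
        System.out.println(stale + " " + v.get()); // false 10
    }
}
```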
ABA problems caused by CAS
- Thread A reads the value V1 of variable V
- Meanwhile, thread B also reads V1, changes the variable's value to V2, and then changes it back from V2 to V1
- When thread A performs its CAS, the operation succeeds because thread B has restored the value to V1
- Although the CAS succeeds, the variable was modified in the meantime. This can be solved with a version number: every modification carries a version number, and the update executes only when both the value and the version number match
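The version-number fix can be sketched with the JDK's AtomicStampedReference, which pairs each value with a stamp (version). Small integer values are used deliberately so the boxed Integers come from the JDK cache and reference comparison behaves like value comparison:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // value 1 with initial stamp (version) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int stamp = ref.getStamp(); // "thread A" records the version it saw

        // "thread B": 1 -> 2 -> 1, bumping the stamp on each change
        ref.compareAndSet(1, 2, stamp, stamp + 1);
        ref.compareAndSet(2, 1, stamp + 1, stamp + 2);

        // thread A's CAS now fails: the value matches but the stamp does not
        boolean ok = ref.compareAndSet(1, 3, stamp, stamp + 1);
        System.out.println(ok); // false
    }
}
```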
Pessimistic locking
Pessimistic locking is the opposite of optimistic locking. A pessimistic lock assumes that writes are frequent and concurrent writes are common, so it takes the lock on every access: a thread must acquire the lock before it can read or modify the data. Common pessimistic lock implementations are Synchronized and ReentrantLock.
Synchronized
Synchronized is an exclusive pessimistic lock that can use any non-null object as the lock.
Scope of Synchronized
- Instance methods: the lock is the instance (this)
- Static methods: the lock is the Class object, so a static synchronized method serializes all threads calling it, across all instances
- Code blocks: the lock is the object named in the synchronized (obj) statement
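The three scopes above can be sketched as follows (class and method names are illustrative):

```java
public class SyncScopes {
    private final Object mutex = new Object();

    // 1. Instance method: the lock is `this`
    public synchronized void instanceMethod() { /* critical section */ }

    // 2. Static method: the lock is SyncScopes.class, shared by all instances
    public static synchronized void staticMethod() { /* critical section */ }

    // 3. Block form: the lock is the given non-null object
    public void blockForm() {
        synchronized (mutex) {
            /* critical section */
        }
    }
}
```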
Synchronized core components
- Wait Set: where threads blocked by a wait() call are placed
- Contention List: the contention queue; all threads requesting the lock land here first
- Entry List: threads from the Contention List that are candidates to acquire the lock
- OnDeck: at any given time at most one thread actively competes for the lock; that thread is placed here
- Owner: the thread that currently holds the lock
- !Owner: a thread that has released the lock
ReentrantLock
ReentrantLock implements the Lock interface and is a reentrant lock. Beyond what Synchronized offers, ReentrantLock also provides interruptible lock acquisition, timed lock acquisition, and related methods.
Key methods of the Lock interface
- void lock(): acquires the lock if it is free; otherwise the current thread waits
- boolean tryLock(): returns true immediately if the lock is available, false otherwise
- void unlock(): releases the lock held by the current thread
- Condition newCondition(): returns a Condition object bound to this lock; a thread releases the lock when it calls await() on the Condition, and signal() wakes a waiting thread
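A minimal sketch of newCondition() in use (class, field, and method names are illustrative); note the while loop guarding against spurious wakeups:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private boolean flag = false;

    public void awaitFlag() throws InterruptedException {
        lock.lock();
        try {
            while (!flag) {    // re-check the predicate after every wakeup
                ready.await(); // releases the lock while waiting
            }
        } finally {
            lock.unlock();
        }
    }

    public void setFlag() {
        lock.lock();
        try {
            flag = true;
            ready.signal();    // wake one thread waiting on this Condition
        } finally {
            lock.unlock();
        }
    }
}
```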
Differences between Synchronized and ReentrantLock
- ReentrantLock acquires and releases the lock explicitly, while Synchronized does so implicitly
- ReentrantLock can respond to interruption while waiting for the lock; Synchronized cannot
- ReentrantLock is implemented at the API level, Synchronized at the JVM level
- ReentrantLock can be made a fair lock via a flag passed at creation time
- ReentrantLock can bind wait conditions via Condition
- If an exception is thrown while a ReentrantLock is held and unlock() is never called, a deadlock is likely, so the lock must be released in a finally block; Synchronized releases the monitor automatically when the thread leaves the block, even on an exception
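The last point can be sketched as follows (class and method names are illustrative; the getCounter accessor exists only for demonstration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class UnlockInFinally {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    public void increment() {
        lock.lock();
        try {
            counter++;        // real code here might throw
        } finally {
            lock.unlock();    // always runs, so the lock is never leaked
        }
    }

    // Synchronized needs no such bookkeeping: the JVM releases the
    // monitor automatically when the method exits, normally or via exception
    public synchronized void incrementSync() {
        counter++;
    }

    public int getCounter() {
        return counter;
    }
}
```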
Semaphore
Semaphore is a counting semaphore. A permit count (threshold) can be set for multiple threads to compete over; once the number of active threads reaches the threshold, new threads block in acquire() until a permit is freed. acquire() and release() serve as the lock and unlock operations.
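A short sketch with java.util.concurrent.Semaphore, using a permit count of 2 (an arbitrary choice for illustration):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore permits = new Semaphore(2); // at most 2 threads inside at once

        permits.acquire();                        // takes a permit (blocks if none left)
        permits.acquire();
        System.out.println(permits.tryAcquire()); // false: no permits remain

        permits.release();                        // return one permit
        System.out.println(permits.tryAcquire()); // true
    }
}
```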
Reentrant lock
After the outer method of a thread acquires the lock, an inner (or recursive) method called by the same thread can still acquire the same lock without blocking.
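Reentrancy can be observed with ReentrantLock's getHoldCount (class and method names here are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static void outer() {
        lock.lock();
        try {
            inner(); // same thread re-acquires the same lock without blocking
        } finally {
            lock.unlock();
        }
    }

    static void inner() {
        lock.lock();
        try {
            System.out.println(lock.getHoldCount()); // 2: held twice by this thread
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        outer();
    }
}
```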
Fair locks and unfair locks
Fair lock
The lock is granted in first-come, first-served order: the thread that queued first acquires the lock before later arrivals. A fair lock must maintain an ordered wait queue to guarantee this first-come, first-served behavior, which adds overhead.
Unfair lock
There is no queuing: every thread simply attempts to grab the lock directly. Because it avoids the queue maintenance a fair lock requires, its throughput is considerably higher.
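The fairness choice is made via ReentrantLock's constructor flag; a minimal sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock fair = new ReentrantLock(true); // FIFO hand-off
        ReentrantLock unfair = new ReentrantLock();   // default: barging allowed

        System.out.println(fair.isFair());   // true
        System.out.println(unfair.isFair()); // false
    }
}
```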
Heavyweight lock (mutex lock)
Synchronized is implemented through an object's internal monitor lock, and the monitor in turn relies on the operating system's mutex lock. Because acquiring a mutex involves the operating system and thread context switches, Synchronized implemented this way is expensive and inefficient; locks that fall back on an OS mutex are called "heavyweight locks." To improve performance, the JVM introduced the optimized lightweight lock.
Lightweight lock
The "lightweight" in lightweight lock is relative to the heavyweight lock. The intent is to avoid the high cost of a heavyweight lock when there is no multi-thread contention. Lightweight locks therefore suit scenarios where threads execute synchronized blocks alternately; if multiple threads contend for the same lock at the same time, the lightweight lock inflates into a heavyweight lock.
Biased locking
In many cases a lock is not contended by multiple threads but is instead acquired repeatedly by the same thread over a long period. Biased locking was introduced to eliminate the cost of re-acquiring a lock the thread already obtained once, biasing the lock toward that thread. It removes unnecessary overhead when there is no multi-thread contention, so biased locks improve performance further on top of lightweight locks.
Lock optimization
- Reduce lock hold time: lock only the code that actually requires thread safety
- Reduce lock granularity: split one lock over a large object into many locks over smaller objects to increase parallelism and reduce contention. The classic example is ConcurrentHashMap, which uses segments to reduce lock granularity.
- Lock separation: separate locks by function; the best example is the read-write lock
- Lock coarsening: although each thread should generally hold a lock for as short a time as possible and release it as soon as it is done with the shared resource, repeatedly acquiring and releasing the same lock in quick succession costs more than holding it once, so the JVM merges (coarsens) such adjacent locked regions into one
- Lock elimination: if the compiler discovers that a locked object cannot actually be shared across threads, the lock is not needed for thread safety and is removed
The volatile keyword
- volatile is not a lock, but it can still provide a form of thread safety.
- It guarantees variable visibility: if one thread changes the value of a variable declared volatile, other threads immediately see the new value. Because no lock is taken, no thread blocks, which makes volatile lighter-weight than synchronized.
- Note that volatile guarantees visibility, not atomicity: a write to a volatile variable must not depend on its current value (such as v++), and different volatile variables cannot directly depend on one another.
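Visibility can be sketched with a volatile stop flag (the class name and the 100 ms delay are arbitrary):

```java
public class VolatileFlag {
    // Without volatile, the worker thread might never observe the update
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until the flag flips
            }
            System.out.println("stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;   // the write is immediately visible to the worker
        worker.join();
    }
}
```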