An optimistic lock corresponds to an optimistic person who expects things to go well, while a pessimistic lock corresponds to a pessimistic person who expects things to go badly. Each kind has its own advantages and disadvantages; neither is simply better than the other.
Pessimistic locking:
Always assumes the worst case: every time it reads data it assumes others will modify it, so it locks the data on every access, and anyone else who wants that data blocks until the lock is released (the shared resource is used by only one thread at a time; other threads block and the resource is handed over after use). Traditional relational databases use this kind of locking mechanism extensively, such as row locks, table locks, read locks, and write locks, all of which are acquired before the operation. Exclusive locks such as synchronized and ReentrantLock in Java are implementations of the pessimistic locking idea.
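As a minimal sketch of the pessimistic idea in Java (the account class below is a hypothetical example, not from the original text), the lock is acquired before the data is touched, and any other thread that wants the data blocks until the lock is released:

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical example: the balance is only ever read or written while holding the lock,
// so other threads that want it block until the lock is released.
class PessimisticAccount {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 100;

    int withdraw(int amount) {
        lock.lock();               // "assume the worst": lock before touching the data
        try {
            balance -= amount;
            return balance;
        } finally {
            lock.unlock();         // hand the resource over to the next waiting thread
        }
    }
}
```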
Optimistic locking:
Always assumes the best case: every time it reads data it assumes others will not modify it, so it does not lock; only when updating does it check whether anyone else has modified the data in the meantime, typically via a version number mechanism or the CAS algorithm. Optimistic locking suits read-heavy applications and improves throughput. Databases provide optimistic-lock-like mechanisms such as write_condition. In Java, the atomic variable classes in the java.util.concurrent.atomic package are implemented with CAS, a form of optimistic locking.
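As a minimal sketch (hypothetical class name), the atomic variable classes in java.util.concurrent.atomic let two threads update a shared counter correctly without any lock:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical example: two threads increment a shared counter with no lock at all;
// each increment is a CAS retry loop inside AtomicInteger.
public class AtomicCounterDemo {
    private static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                counter.incrementAndGet();   // lock-free, optimistic update
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get());   // always prints 2000
    }
}
```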
The usage scenarios of the two locks are as follows:
From the descriptions above, each kind of lock has its own advantages and disadvantages, and neither can be considered better than the other. Optimistic locking suits scenarios with few writes (read-heavy scenarios), where conflicts are genuinely rare; this saves the overhead of locking and increases the overall throughput of the system. In write-heavy scenarios, however, conflicts arise frequently, causing the upper-layer application to retry repeatedly and hurting performance, so pessimistic locking is the better fit there.
There are two common implementations of optimistic locking
Optimistic locking is usually implemented using the version number mechanism or CAS algorithm.
1. Version number mechanism
Generally, a version field is added to the data table to record how many times the data has been modified; each time the data is modified, the version value is incremented by one. When thread A wants to update the data, it also reads the version value; when submitting the update, it applies the update only if the version it read earlier matches the version currently in the database. Otherwise, it retries the update until it succeeds.
Take a simple example: suppose the account information table in the database has a version field whose current value is 1, and the account balance field (balance) is $100.
- Operator A reads the record (version=1) and prepares to deduct $50 from the account balance ($100-$50).
- While operator A is working, operator B also reads the same record (version=1) and prepares to deduct $20 from the account balance ($100-$20).
- Operator A completes the modification, increments the version number (version=2), and submits the update together with the new balance (balance=$50). Because the submitted version is greater than the version currently recorded in the database, the update is applied and the recorded version becomes 2.
- Operator B also completes the operation, increments the version number (version=2), and tries to submit the update (balance=$80). But when compared against the database, the version submitted by operator B is 2 while the current version recorded in the database is also 2, so the optimistic locking policy that “the commit version must be greater than the current record version” is not met, and operator B’s submission is rejected.
In this way, operator B is prevented from overwriting operator A’s result with a modification based on the stale data (version=1).
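A minimal JDBC sketch of the pattern described above (table and column names are assumptions for illustration): the UPDATE takes effect only when the version read earlier is still the version stored in the database, and the affected-row count tells the caller whether to re-read and retry.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class AccountDao {
    // Returns true if the optimistic update succeeded; false means another operator
    // changed the row first (the version no longer matches) and the caller must re-read and retry.
    boolean updateBalance(Connection conn, long accountId, int newBalance, int readVersion)
            throws SQLException {
        String sql = "UPDATE account SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";   // assumed table/column names
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, newBalance);
            ps.setLong(2, accountId);
            ps.setInt(3, readVersion);
            return ps.executeUpdate() == 1;   // 0 rows updated means the version check failed
        }
    }
}
```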
2. CAS algorithm
Compare-and-swap (CAS) is a well-known lock-free algorithm. Lock-free programming means synchronizing variables between multiple threads without using locks; because no thread is blocked, it is also called non-blocking synchronization. The CAS algorithm involves three operands:
- Memory value V that needs to be read or written
- The value A for comparison
- The new value B to be written
CAS atomically updates V to the new value B if and only if the current value of V equals A; otherwise it does nothing (the compare and the swap together form a single atomic operation). CAS is typically used as a spin operation, that is, it retries repeatedly until it succeeds.
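The spin pattern can be sketched with AtomicInteger, with the three operands labelled (the class and method names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative spin-CAS: V is the memory value held by the AtomicInteger,
// A is the value we read (the expected value), and B is the new value we want to write.
class CasSpin {
    private final AtomicInteger v = new AtomicInteger(100);

    int add(int delta) {
        int a;                                 // A: value read for comparison
        int b;                                 // B: new value to write
        do {
            a = v.get();
            b = a + delta;
        } while (!v.compareAndSet(a, b));      // succeeds only if V still equals A; otherwise retry
        return b;
    }
}
```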
Disadvantages of optimistic locking
The ABA problem is a common issue with optimistic locks.
1. ABA problem
If a variable V holds the value A when it is first read and still holds A when we are ready to write it, can we conclude that no other thread has modified it? Obviously not: in the meantime its value could have been changed to something else and then back to A, and the CAS operation would conclude that it was never changed. This is known as the “ABA” problem of CAS operations.
Since JDK 1.5, the AtomicStampedReference class addresses this: its compareAndSet method first checks whether the current reference equals the expected reference and whether the current stamp equals the expected stamp, and only if both match does it atomically set the reference and the stamp to the given new values.
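A small sketch of AtomicStampedReference (the values are illustrative): the CAS must match both the expected reference and the expected stamp, so an A→B→A change is detected because the stamp has advanced.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Illustrative values: reference 100, stamp (version counter) 0.
        AtomicStampedReference<Integer> balance = new AtomicStampedReference<>(100, 0);

        int expectedStamp = balance.getStamp();
        Integer expectedValue = balance.getReference();

        // Even if the value had gone 100 -> 50 -> 100 in the meantime, the stamp would have
        // advanced, so this compareAndSet would fail instead of silently succeeding.
        boolean updated = balance.compareAndSet(expectedValue, expectedValue - 20,
                                                expectedStamp, expectedStamp + 1);
        System.out.println("updated=" + updated + ", value=" + balance.getReference());
    }
}
```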
2. Long cycle time and high cost
Spin CAS (that is, looping until it succeeds) imposes a very large CPU execution cost if it keeps failing for a long time. If the JVM can make use of the pause instruction provided by the processor, efficiency improves somewhat. The pause instruction does two things: first, it delays de-pipelining of instructions so the CPU does not consume too many execution resources; second, it avoids a CPU pipeline flush caused by a memory order violation when exiting the loop, improving CPU execution efficiency.
3. CAS guarantees atomicity for only one shared variable
CAS works only on a single shared variable; it does not apply when the operation spans multiple shared variables. However, since JDK 1.5 the AtomicReference class has been provided to guarantee atomicity on reference objects, so you can put several variables into one object and perform the CAS on that object. In other words, we can either fall back to locks or use AtomicReference to merge multiple shared variables into a single one.
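A sketch of the “merge several variables into one object” approach (the Position record is a hypothetical example and requires Java 16+ for records): both fields are published together by CAS-ing the reference to an immutable snapshot.

```java
import java.util.concurrent.atomic.AtomicReference;

public class MultiFieldCasDemo {
    // Hypothetical immutable value object grouping two fields that must change together.
    record Position(int x, int y) { }

    private static final AtomicReference<Position> position =
            new AtomicReference<>(new Position(0, 0));

    static void move(int dx, int dy) {
        Position current;
        Position next;
        do {
            current = position.get();
            next = new Position(current.x() + dx, current.y() + dy);
        } while (!position.compareAndSet(current, next));   // both fields swap in atomically
    }

    public static void main(String[] args) {
        move(3, 4);
        System.out.println(position.get());   // Position[x=3, y=4]
    }
}
```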
Choosing between CAS and synchronized
To put it simply, CAS suits write-light workloads (more reads, fewer conflicts), while synchronized suits write-heavy workloads (more writes, more conflicts).
- When resource competition is light (few thread conflicts), synchronized has to block and wake up threads and switch between user mode and kernel mode, which consumes extra CPU resources. CAS, by contrast, is hardware-based, never enters the kernel, never switches threads, and spins only briefly, so it achieves higher performance.
- When resource competition is heavy (severe thread conflicts), CAS spins with high probability, wasting more CPU and ending up less efficient than synchronized.
Sidebar: The synchronized keyword has long been the elder statesman of Java concurrent programming, and many used to call it a “heavyweight lock.” Since Java SE 1.6, however, biased locking, lightweight locking, and various other optimizations have been introduced, mainly to reduce the cost of acquiring and releasing locks, so in some cases it is no longer so heavy. The underlying implementation of synchronized relies mainly on a lock-free queue; its basic strategy is to spin first and then block, and to keep competing for the lock after a context switch, sacrificing a little fairness in exchange for high throughput. With few thread conflicts, it can reach performance similar to CAS; with severe thread conflicts, its performance is much higher than that of CAS.