What are pessimistic locks and optimistic locks?

An optimistic lock is like an optimist who assumes things will go well, and a pessimistic lock is like a pessimist who assumes things will go badly. Each attitude has its own advantages and disadvantages; neither is simply better than the other.

Pessimistic locking

Always assume the worst case: every time you go to get the data, you assume that someone else will modify it, so you lock the data every time you access it. The shared resource is used by only one thread at a time; other threads block, and the resource is handed over to the next thread when the current one is done. Traditional relational databases make heavy use of this kind of locking mechanism, such as row locks, table locks, read locks, and write locks, all of which lock before the operation. Exclusive locks such as synchronized and ReentrantLock in Java are implementations of the pessimistic locking idea.
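For instance, here is a minimal sketch of the pessimistic approach using ReentrantLock (the class and field names are made up for illustration): every access to the shared balance acquires an exclusive lock first, so other threads block until it is released.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of pessimistic locking: every access acquires an
// exclusive lock first, so concurrent threads block until it is free.
class PessimisticAccount {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 100;

    int withdraw(int amount) {
        lock.lock();              // block here if another thread holds the lock
        try {
            if (balance >= amount) {
                balance -= amount;
            }
            return balance;
        } finally {
            lock.unlock();        // always release, even on exception
        }
    }
}
```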

Optimistic locking

Always assume the best case: every time you go to get the data, you assume that others will not modify it, so you do not lock it; instead, when updating, you check whether anyone else has updated the data in the meantime, which can be done with a version number mechanism or the CAS algorithm. Optimistic locking is suitable for read-heavy applications and improves throughput. Databases provide optimistic-lock-style mechanisms such as write_condition, and in Java the atomic variable classes under the java.util.concurrent.atomic package are implemented with CAS, an implementation of the optimistic locking idea.

Usage scenarios for the two types of locks

From the introductions above we know that each kind of lock has its own advantages and disadvantages, and neither can be considered better than the other. Optimistic locking is suitable for write-light (read-heavy) scenarios, where conflicts are genuinely rare; this saves the overhead of locking and increases the overall throughput of the system. In write-heavy scenarios, however, conflicts occur frequently, causing the upper-layer application to retry continuously, which in turn reduces performance, so pessimistic locking is more appropriate when writes dominate.

There are two common implementations of optimistic locking

Optimistic locking is usually implemented using the version number mechanism or CAS algorithm.

1. Version number mechanism

Generally, a version field is added to the data table to indicate how many times the data has been modified; whenever the data is modified, the version value is incremented by one. When thread A wants to update the data, it reads the version value along with the data. When submitting the update, the update is applied only if the version value it read earlier is still equal to the current version value in the database; otherwise, it re-reads and retries until the update succeeds.

**A simple example:** suppose the account information table in the database has a version field whose current value is 1, and the account balance field (balance) is $100.

1. Operator A reads the record (version=1) and deducts $50 from the account balance ($100 − $50 = $50).

2. During operator A’s operation, operator B also reads the record (version=1) and deducts $20 from the account balance ($100 − $20 = $80).

3. Operator A completes the modification, increments the version number by one (version=2), and submits the update to the database together with the deducted account balance (balance=$50). Since the submitted version is greater than the version currently recorded in the database, the data is updated and the database record’s version becomes 2.

4. Operator B completes the operation, also increments the version number by one (version=2), and tries to submit the update (balance=$80). However, when comparing against the version recorded in the database, the version of the data submitted by operator B is 2 and the current version recorded in the database is also 2, so the optimistic locking policy of “the commit version must be greater than the currently recorded version for the update to be performed” is not satisfied, and operator B’s submission is rejected.

In this way, operator B is prevented from overwriting operator A’s result with a modification based on the stale data of version=1.
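The same flow can be expressed against the database directly. Below is a hypothetical JDBC sketch (the table and column names `account`, `balance`, `version` are assumptions matching the example above): the UPDATE succeeds only when the version read earlier is still the current one, and zero affected rows means someone else committed first, so the caller must re-read and retry.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical versioned update matching the example above:
// the row is only changed if its version is still the one we read.
class OptimisticAccountDao {
    boolean tryWithdraw(Connection conn, long accountId,
                        int newBalance, int readVersion) throws SQLException {
        String sql = "UPDATE account SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, newBalance);
            ps.setLong(2, accountId);
            ps.setInt(3, readVersion);
            // 0 rows updated => someone else committed first; re-read and retry
            return ps.executeUpdate() == 1;
        }
    }
}
```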

2. CAS algorithm

Compare-and-swap (CAS) is a well-known lock-free algorithm. Lock-free programming means synchronizing variables between multiple threads without using locks; it is also called non-blocking synchronization because no thread is blocked. The CAS algorithm involves three operands:

● Memory value V that needs to be read and written

● The expected value A to compare against

● New value B to be written

CAS atomically updates the value of V to the new value B if and only if the value of V equals A; otherwise, it does nothing (the compare-and-swap is a single atomic operation). In general it is used as a spin operation, that is, it is retried repeatedly until it succeeds.
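A minimal sketch of spin CAS in Java using AtomicInteger.compareAndSet (the class and method are real JDK APIs; the wrapper class and helper names are made up for illustration): the loop re-reads the current value and retries until the compare-and-swap succeeds.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Spin CAS: read the current value, compute the new value, and retry
// until compareAndSet succeeds (i.e. no other thread changed it in between).
class SpinCasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    int addAndGet(int delta) {
        for (;;) {                          // spin: repeated retries
            int expected = value.get();     // memory value V, read as expected value A
            int updated = expected + delta; // new value B
            if (value.compareAndSet(expected, updated)) {
                return updated;             // CAS succeeded: V was still equal to A
            }
            // CAS failed: another thread modified V; loop and try again
        }
    }
}
```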

Disadvantages of optimistic locking

The ABA problem is a common issue with optimistic locking.

1. The ABA problem

If a variable V had the value A when it was first read and still has the value A when we are about to assign to it, can we conclude that its value has not been modified by other threads? Obviously not: in the meantime its value could have been changed to something else and then changed back to A, and the CAS operation would wrongly assume it had never been changed. This is known as the “ABA” problem of CAS operations.

Since JDK 1.5, the AtomicStampedReference class has provided a remedy: its compareAndSet method first checks whether the current reference equals the expected reference and whether the current stamp equals the expected stamp, and only if both match does it atomically set the reference and the stamp to the given new values.
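A small sketch of how AtomicStampedReference pairs the value with a stamp, so a change from A to B and back to A is still detected (the variable names are illustrative; small Integer values are used so autoboxing reuses cached objects and the reference comparison behaves as expected):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// AtomicStampedReference pairs the reference with an int stamp; a CAS only
// succeeds if BOTH the reference and the stamp match the expected values,
// so an A -> B -> A change is still detected through the changed stamp.
public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(100, 0); // initial value 100, stamp 0

        int[] stampHolder = new int[1];
        Integer current = ref.get(stampHolder);       // read value and stamp together
        int stamp = stampHolder[0];

        // Another thread changes 100 -> 50 -> 100, bumping the stamp each time.
        ref.compareAndSet(100, 50, stamp, stamp + 1);
        ref.compareAndSet(50, 100, stamp + 1, stamp + 2);

        // A plain value comparison would succeed here, but the stale stamp makes this fail.
        boolean updated = ref.compareAndSet(current, 200, stamp, stamp + 1);
        System.out.println("updated = " + updated);   // prints: updated = false
    }
}
```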

2. Long spin time and high overhead

Spin CAS (that is, looping and retrying until it succeeds) can impose a very large CPU execution cost if it keeps failing for a long time. If the JVM can make use of the pause instruction provided by the processor, efficiency improves somewhat. The pause instruction has two effects: first, it delays the pipelined execution of instructions (de-pipelining) so that the CPU does not consume excessive execution resources (on some processors the delay is zero); second, it avoids the CPU pipeline flush caused by memory order violations when exiting the loop, which improves CPU execution efficiency.
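As a concrete point of reference, since Java 9 the JDK exposes Thread.onSpinWait() as a spin-loop hint, which on x86 typically compiles down to a pause-style instruction. A minimal busy-wait sketch (the class and field names are illustrative):

```java
// Java 9+: Thread.onSpinWait() hints to the JVM/CPU that we are in a
// spin loop, so it can emit a pause-style instruction on supporting hardware.
class SpinWaitFlag {
    private volatile boolean ready = false;

    void awaitReady() {
        while (!ready) {
            Thread.onSpinWait();   // spin-loop hint; may reduce wasted execution resources
        }
    }

    void setReady() {
        ready = true;
    }
}
```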

3. Atomic operation is guaranteed for only one shared variable

CAS is effective only for a single shared variable; it does not work when the operation spans multiple shared variables. However, since JDK 1.5 the AtomicReference class has been provided to guarantee atomicity of reference updates, so multiple variables can be placed in a single object and operated on with CAS. We can therefore either use locking or use the AtomicReference class to merge multiple shared variables into a single shared variable.
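One common form of the workaround described above is to wrap the related variables in a single immutable object and CAS the reference to it. A minimal sketch under that assumption (the class and field names are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicReference;

// Merge several shared variables into one immutable object, then CAS the
// single reference so both fields are updated together atomically.
class PositionHolder {
    // Immutable composite state: both fields always change as a unit.
    private static final class Position {
        final int x;
        final int y;
        Position(int x, int y) { this.x = x; this.y = y; }
    }

    private final AtomicReference<Position> ref =
            new AtomicReference<>(new Position(0, 0));

    void moveBy(int dx, int dy) {
        for (;;) {                              // spin until the CAS succeeds
            Position current = ref.get();
            Position next = new Position(current.x + dx, current.y + dy);
            if (ref.compareAndSet(current, next)) {
                return;
            }
        }
    }
}
```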

The use of CAS and synchronized

To put it simply, CAS is suitable for write-light scenarios (many reads, few conflicts), while synchronized is suitable for write-heavy scenarios (many writes, frequent conflicts).

1. When resource contention is light (few thread conflicts), synchronized blocks and wakes up threads and switches between user mode and kernel mode, which wastes extra CPU resources; CAS is implemented in hardware, does not need to enter the kernel or switch threads, and rarely has to spin, so it achieves higher performance.

2. When resource contention is severe (many thread conflicts), CAS has a high probability of spinning, which wastes more CPU resources and makes it less efficient than synchronized.

Supplement:

The synchronized keyword has long been an elder statesman of Java concurrent programming, and for a long time many called it “the heavyweight lock.” However, since Java SE 1.6, various optimizations such as biased locking and lightweight locking, introduced mainly to reduce the cost of acquiring and releasing locks, have made it less heavy in some cases. The underlying implementation of synchronized mainly relies on a lock-free queue; the basic idea is to spin briefly and then block, and to continue competing for the lock after a contention switch. This sacrifices a little fairness but achieves high throughput. With few thread conflicts it can reach performance similar to CAS, and with severe thread conflicts its performance is much higher than that of CAS.

-end-

Creating content is not easy; your likes, comments, and follows are the greatest support and encouragement to me!