This is the 22nd day of my participation in the Gwen Challenge.

Optimistic locking

Optimistic locking embodies an optimistic assumption: the workload is read-heavy and write-light, so the probability of concurrent writes is low. When reading data, it assumes no other thread is modifying it, so no lock is taken. It works like this: after modifying the shared resource, verify that no conflict occurred in the meantime; if no other thread touched the resource, the operation completes, otherwise it must be retried or abandoned.

Here’s an example scenario: online documentation.

We all know that online documents can be edited by multiple people at the same time. If a pessimistic lock were used, then as long as one user was editing the document, no other user could open it, which would be a poor user experience. Allowing multiple users to edit simultaneously is, in effect, optimistic locking: everyone may open the same document for editing, and conflicts are only checked when the edits are committed.

What counts as a conflict? Here is an example. User A opens the document in a browser and starts editing; user B then opens the same document and edits it too. B submits changes before A does, and A is unaware of this. When A later submits, the parallel modifications made by A and B conflict.

How does the server verify that there is a conflict? The usual scheme is as follows:

  • Since the probability of conflict is low, the user is allowed to edit the document first, but the browser will record the version number returned by the server when downloading the document.
  • When a user submits a modification, the request carries the version number the document was downloaded with. The server compares it with the current version number: if they match, the modification is accepted; otherwise, the submission fails.
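The two-step scheme above can be sketched in Java. The class and method names here are illustrative, not a real API; `synchronized` stands in for whatever atomicity the server's storage layer provides:

```java
// Hypothetical in-memory document store illustrating the version-number check.
class VersionedDocument {
    private String content = "";
    private long version = 0;

    // The version the client records when it downloads the document.
    synchronized long currentVersion() { return version; }

    synchronized String read() { return content; }

    // Optimistic check: the edit is accepted only if the document has not
    // changed since the client read it (expectedVersion still matches).
    synchronized boolean submit(long expectedVersion, String newContent) {
        if (expectedVersion != version) {
            return false;            // conflict: someone else committed first
        }
        content = newContent;
        version++;                   // bump version so later stale edits fail
        return true;
    }
}
```

In the online-document scenario: users A and B both download at version 0; whoever submits first succeeds and bumps the version, so the second submission with the stale version number is rejected.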

In fact, common version-control systems such as SVN and Git also use the idea of optimistic locking: users edit the code first, and conflicts are detected via the version number at commit time. If there is a conflict, the user must resolve it manually before committing again.

The typical optimistic lock in Java is CAS (compare-and-swap): compare the current value (the value in main memory) with the expected value (the current thread's copy of the main-memory value); if they are equal, write the new value, otherwise retry the CAS operation.
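A minimal sketch of this retry pattern using the JDK's `AtomicInteger`, whose `compareAndSet` performs the CAS:

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasIncrement {
    // Optimistically increment: read the current value, compute the new value,
    // and retry the CAS until no other thread slipped in between.
    static int increment(AtomicInteger counter) {
        int current, next;
        do {
            current = counter.get();   // expected value: a copy of main memory
            next = current + 1;
        } while (!counter.compareAndSet(current, next)); // retry on conflict
        return next;
    }
}
```

If another thread changed the counter between `get()` and `compareAndSet()`, the CAS fails and the loop simply reads and retries; no thread ever blocks.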

Pessimistic locking

Pessimistic locking works pessimistically: it assumes the workload is write-heavy and read-light, so concurrent writes are likely. Every time it accesses data, it assumes another thread may modify it, so it locks the data on every read and write. Any other thread that then tries to read or write the data blocks until the lock holder releases the lock and the blocked thread acquires it.
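In Java, `synchronized` and `ReentrantLock` are the usual pessimistic locks. A minimal sketch (the class is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Pessimistic style: every reader and writer takes the lock first, so a
// second thread blocks until the first one releases it.
class PessimisticCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int value = 0;

    int read() {
        lock.lock();                 // blocks if another thread holds the lock
        try { return value; }
        finally { lock.unlock(); }
    }

    void write(int newValue) {
        lock.lock();
        try { value = newValue; }
        finally { lock.unlock(); }
    }
}
```

The `try/finally` ensures the lock is released even if the guarded code throws, which is the idiomatic way to use `ReentrantLock`.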

CAS

CAS, short for compare-and-swap, involves the following operations:

Assume V is the value currently in memory, A is the old expected value, and B is the new value to be written.

  1. Compare whether A and V are equal. (compare)
  2. If they are equal, write B to V. (swap)
  3. Return whether the operation succeeded.

When multiple threads perform a CAS operation on the same resource at the same time, only one thread succeeds, but the other threads do not block; they simply receive a failure signal. In this sense, CAS is an optimistic lock.

CAS application scenarios:

  • Using CAS avoids thread blocking, for example in the implementation of spin locks.

  • Most of the time we loop (while true), retrying until the CAS succeeds.
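The spin-lock application mentioned above can be sketched with an `AtomicBoolean`. This is a simplified illustration, not a production lock:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock: lock() spins on a CAS (the while-true retry pattern)
// instead of blocking the thread.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Busy-wait: keep trying to flip false -> true until we succeed.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();     // hint to the CPU that we are spinning
        }
    }

    void unlock() {
        locked.set(false);
    }

    // Demo: two threads each do 1000 guarded increments; with the lock held
    // around each increment, the final count is exactly 2000.
    static int demo() {
        SpinLock lock = new SpinLock();
        int[] shared = {0};
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                lock.lock();
                shared[0]++;
                lock.unlock();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return shared[0];
    }
}
```

Because a waiting thread spins rather than sleeps, this is only sensible when the critical section is very short; otherwise the spinning burns CPU time for nothing.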

CAS shortcomings

  • The ABA problem: a value changes from A to B and then back to A, and the CAS operation cannot detect that the value ever changed. This can be solved with AtomicStampedReference, which attaches a stamp (similar to a timestamp or version number) to the reference.

  • Performance issues. Most of the time we retry in a while-true loop until the update succeeds. With low contention this is fast, but as the number of threads grows, performance degrades significantly: every failed thread keeps spinning and consuming CPU time.
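The ABA fix mentioned above can be demonstrated with the JDK's `AtomicStampedReference`. One caveat baked into the sketch: its `compareAndSet` compares references, so small boxed `Integer` values (which the JVM caches) are used to keep the comparison meaningful:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

class AbaDemo {
    // Returns whether the final CAS succeeded; with the stamp check it must fail.
    static boolean demo() {
        // Value plus a stamp (version number); the stamp changes on every
        // update, so A -> B -> A is no longer invisible.
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(1, 0);

        int startStamp = ref.getStamp();   // thread 1 remembers stamp 0

        // Meanwhile another thread performs A -> B -> A, bumping the stamp twice:
        ref.compareAndSet(1, 2, 0, 1);     // A -> B
        ref.compareAndSet(2, 1, 1, 2);     // B -> A

        // A plain value comparison would succeed here (the value is 1 again),
        // but the stamp is now 2, not 0, so this CAS correctly fails.
        return ref.compareAndSet(1, 99, startStamp, startStamp + 1);
    }
}
```

A plain `AtomicInteger` CAS at the last step would have succeeded without noticing the intermediate change; the stamp exposes it.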