I have recently been learning the basics of Java multithreading, and in this post I want to share my own understanding of Java locks.

What are Java locks used for?

When we write a program, the ultimate goal is to make it behave the way we intend. In multi-threaded scenarios, however, things happen that we can hardly predict, leading to unsafe data. There are a few general ways to deal with inconsistent data:

  1. Avoid sharing the data at all (e.g. ThreadLocal)
  2. Queue up
  3. Vote

And a lock is a way of queueing.
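As a first concrete look at "queueing", here is a minimal sketch (the class name QueueDemo is made up for illustration): two threads bump a shared counter, and the synchronized keyword makes them line up on the object's monitor so no update is lost.

```java
// Two threads increment a shared counter 10,000 times each.
// Without synchronized the increments could interleave and lose updates;
// with it, threads queue on the object's monitor and the result is exact.
public class QueueDemo {
    private int count = 0;

    public synchronized void increment() { // threads line up here
        count++;
    }

    public int run() throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(new QueueDemo().run()); // always 20000
    }
}
```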

What are Java locks?

Java locks fall into two categories

  1. Synchronized
  2. Lock
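A quick side-by-side sketch of the two categories (the names TwoKinds, incWithSynchronized, and incWithLock are invented for illustration):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// The same kind of critical section guarded both ways.
public class TwoKinds {
    private final Lock lock = new ReentrantLock();
    private int a = 0, b = 0;

    public void incWithSynchronized() {
        synchronized (this) { a++; }   // monitor-based; released automatically
    }

    public void incWithLock() {
        lock.lock();                   // AQS-based; must be released explicitly
        try { b++; } finally { lock.unlock(); }
    }

    public int total() { return a + b; }
}
```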

Synchronized from a JVM perspective

Today we'll set aside the three uses of the Synchronized keyword, the Monitorenter and Monitorexit instructions, and the ACC_SYNCHRONIZED flag, and just talk about what the Synchronized keyword actually does.

Essentially, Synchronized is built on lock-free queues, so let's start from that foundation and work up through the optimizations as we analyze the Synchronized keyword.

Synchronized is, at bottom, an object monitor. When multiple threads request the same object's monitor at the same time, the monitor divides the threads into several groups, much like when many people chase the same job offer at once: the interviewer sorts candidates into groups according to where they are in the hiring process.

  1. Contention queue (candidates still in the written test and interview stage)

Strictly speaking, this queue is not a real Queue but a linked list built from Node objects and next pointers. All newly arriving threads are added at the head of the list, and threads are taken out at the tail. This is a lock-free queue, and the only thread allowed to take threads out of the contention queue is the Owner thread, i.e. the thread currently holding the lock; it selects suitable candidate threads and moves them into the EntryList.

  2. EntryList (the backup-candidate pool)

Like a company's hiring shortlist, getting into this queue does not guarantee you the resource. Threads are moved here to avoid frequent contention among the threads in the contention queue. Once in the pool, one candidate is picked, unfairly and more or less at random, and set as the Ready thread. If that thread then wins the lock, it becomes the Owner thread; if it does not, it goes back into the pool and, for fairness, is placed at the head of the EntryList.

  3. WaitSet (the pool for thinking an offer over)

If the thread holding the offer (the Owner thread) says "wait! Wait! I need to think about it" — that is, calls wait() — it is thrown into the WaitSet. When it finishes thinking it over (is notified), it is pushed back into the EntryList to go through the process again.
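A minimal sketch of this wait-and-reconsider flow using wait()/notify() (the class name WaitSetDemo and the timing are illustrative):

```java
// A thread that holds the monitor can call wait() to "think it over":
// it releases the lock and parks in the object's wait set until another
// thread calls notify(), after which it re-competes for the monitor.
public class WaitSetDemo {
    private final Object monitor = new Object();
    private boolean decided = false;

    public boolean run() throws InterruptedException {
        Thread candidate = new Thread(() -> {
            synchronized (monitor) {
                while (!decided) {
                    try { monitor.wait(); } catch (InterruptedException e) { return; }
                }
            }
        });
        candidate.start();
        Thread.sleep(100);            // give the candidate time to enter the wait set
        synchronized (monitor) {
            decided = true;
            monitor.notify();         // push it back toward the entry list
        }
        candidate.join();
        return decided;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(new WaitSetDemo().run());
    }
}
```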

  4. OnDeck, the Ready thread (the verbal offer)

The thread currently competing for the lock is the Ready thread; the selection is not fair.

  5. Owner thread (the candidate who got the offer)
  6. !Owner thread (the candidate who gave the offer up, i.e. released the lock)

Spin locks

Let's be clear about one thing: the three groups without an offer — the contention queue, the backup pool, and the offer-considering pool — are, in the end, all in the blocked state ("can't reach HR"). Blocking and waking a thread requires switching between user mode and kernel mode, which is slow. To keep the application process from dragging, a candidate has to pester HR frequently, i.e. poll to ask whether the offer will be granted. The polling period is worth studying: polling occupies HR's time, so after polling fails many times the candidate is cut off, i.e. falls into the blocked state.

So, to keep a thread out of the blocked state, a thread that fails to get the lock spins first; if it still cannot get the lock after spinning for a while, it then blocks.
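To make the spin-then-block idea concrete, here is a toy lock built on CAS and LockSupport. This is not HotSpot's actual implementation; the fixed spin count and the single-waiter simplification are assumptions of this sketch.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// A toy lock that spins a fixed number of times before parking the thread.
// HotSpot's adaptive spinning is tuned by the JVM, not fixed like this.
public class SpinThenBlockLock {
    private final AtomicBoolean held = new AtomicBoolean(false);
    private volatile Thread waiter;   // single waiter, to keep the sketch short

    public void lock() {
        for (int i = 0; i < 1000; i++) {           // spin phase
            if (held.compareAndSet(false, true)) return;
        }
        waiter = Thread.currentThread();
        while (!held.compareAndSet(false, true)) { // block phase
            LockSupport.park(this);
        }
        waiter = null;
    }

    public void unlock() {
        held.set(false);
        Thread w = waiter;
        if (w != null) LockSupport.unpark(w);
    }

    // demo: main thread holds the lock, a second thread waits for it;
    // returns 2 once both have acquired it in turn
    public static int demo() throws InterruptedException {
        SpinThenBlockLock l = new SpinThenBlockLock();
        int[] acquired = {0};
        l.lock(); acquired[0]++;
        Thread t = new Thread(() -> { l.lock(); acquired[0]++; l.unlock(); });
        t.start();
        Thread.sleep(50);
        l.unlock();
        t.join();
        return acquired[0];
    }
}
```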

Of course the interview process is meant to be fair, but it isn't.

Where is the unfairness?

Candidates who frequently pester HR may attract attention and get hired directly, which is unfair to those waiting in the contention queue; they may even snatch the offer outright, which is unfair to the candidate holding the verbal offer.

In thread terms: a thread tries to spin before entering the queue, and if it acquires the lock directly, that is unfair to the threads waiting in the queue and to the Ready thread.

Biased locking

Of course, the interview scenario above is typical of big companies. Many small companies are willing to interview only a few people, perhaps only you (lucky you?). Once you pass the first interview, you can come back any time (the lock is reentrant), and every subsequent visit always passes, provided nobody else shows up to interview. In other words, when there is no competition, we would like to skip the interview process entirely and just walk in. This is the scenario biased locking is designed for.

Biased locking eliminates the lock/unlock process for the repeating thread. Reentrant locking is fine in itself, but the CAS operations performed during lock and unlock do cost performance.

Why do CAS operations affect performance?

CAS can cause local latency. In a multi-processor system, each core has its own L1 cache, which connects to main memory over the bus. If Core1 modifies a value, the copy Core2 holds becomes invalid, and synchronizing the latest data generates cache-coherence traffic. Too much cache-coherence traffic puts pressure on the bus and becomes a performance bottleneck.
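CAS is exposed in Java through classes like AtomicInteger; here is a minimal retry-loop sketch (the helper name casIncrement is made up for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

// compareAndSet only succeeds when the current value matches the expected
// one; each attempt forces the cores involved to agree on the latest value,
// which is the cache-coherence traffic discussed above.
public class CasDemo {
    public static int casIncrement(AtomicInteger v) {
        int old;
        do {
            old = v.get();                         // read the current value
        } while (!v.compareAndSet(old, old + 1));  // retry if another thread won
        return old + 1;
    }

    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(41);
        System.out.println(casIncrement(v)); // 42
    }
}
```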

Look at Lock from a JVM perspective

Today we'll set aside how Lock is used. In terms of implementation, Lock is mainly implemented through the Lock interface, and all of the interface's operations are delegated to a Sync class. Sync is a subclass of AQS, so its basic idea is inherited entirely from AQS. So what is the idea behind AQS?

Introduction to AQS

The basic idea of AQS: when a thread requests a resource, if the resource is free, it is handed to the thread and locked; if the resource is already locked, the thread joins a CLH-based blocking queue (the CLH queue is a virtual doubly linked queue). When the lock is released, a waiting thread is woken. AQS can be used in exclusive mode, e.g. ReentrantLock, or in shared mode, e.g. semaphores, read/write locks, CountDownLatch, and so on.
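A minimal exclusive lock in this style, loosely following the Mutex sketch in the AQS class documentation (simplified, not production code):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// tryAcquire CASes the AQS state from 0 to 1; losing threads are parked
// in the CLH queue by acquire(), and release() wakes the next waiter.
public class Mutex {
    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override protected boolean tryAcquire(int unused) {
            return compareAndSetState(0, 1);      // free -> held
        }
        @Override protected boolean tryRelease(int unused) {
            setState(0);                          // held -> free
            return true;                          // tell AQS to wake a successor
        }
        @Override protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();
    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```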

Each time we call lock(), the default behavior is an unfair lock acquisition. First the current lock state is checked: if the state c == 0, i.e. the lock is free, the lock is acquired directly and the acquire count is incremented to 1; unlock() decrements it, and 0 means free again. If the state is not 0, the thread checks whether it already owns the lock; if so, it simply increments the count, skipping the CAS operation — reentry in a spirit similar to biased locking.
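A hypothetical sketch of this unfair, reentrant acquire logic (the class ReentrantSketch and its methods are invented for illustration; the real logic lives in ReentrantLock's internal Sync class):

```java
import java.util.concurrent.atomic.AtomicInteger;

// state == 0 means free; the owner increments state on reentry without CAS.
public class ReentrantSketch {
    private final AtomicInteger state = new AtomicInteger(0);
    private volatile Thread owner;

    public boolean tryAcquire() {
        Thread current = Thread.currentThread();
        int c = state.get();
        if (c == 0) {                              // lock is free: CAS to grab it
            if (state.compareAndSet(0, 1)) {
                owner = current;
                return true;
            }
        } else if (current == owner) {             // reentry: plain increment
            state.set(c + 1);
            return true;
        }
        return false;
    }

    public boolean tryRelease() {
        if (Thread.currentThread() != owner) throw new IllegalMonitorStateException();
        int c = state.get() - 1;
        if (c == 0) owner = null;                  // fully released
        state.set(c);
        return c == 0;
    }
}
```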

If the acquisition fails, addWaiter() is called to enqueue the thread. If the queue already has a tail, CAS is used to set the current thread as the new tail; otherwise CAS loops until the thread joins the queue. The thread then blocks, but before parking it calls tryAcquire() to try for the lock once more, and each node decides whether to park by checking its predecessor's state.

With that said, let’s compare Synchronized to Lock

  • Synchronized

Synchronized is a keyword built on lock-free wait queues. The JVM divides the waiting threads into several finer-grained groups in order to speed up the queue, and Synchronized also implements spinning, but all of this is implemented at the JVM level.

  • Lock

Lock is built on a CLH-based blocking wait queue; all operations on the queue are done with CAS, and a thread that already holds the lock can reacquire it cheaply, in the spirit of biased locking, but it does not implement spinning — a waiter simply parks. The upside is that Lock is easier to extend: it can become a read/write lock, a fair or unfair lock, and so on, and Condition is more flexible than wait()/notify().
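To illustrate why Condition is more flexible than wait()/notify(): a single Lock can hand out several Conditions, one per wait reason. The one-slot buffer below is a common textbook-style sketch (the class name OneSlotBuffer is illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Two Conditions on one lock give producers and consumers separate
// wait queues, something a single Object monitor cannot do.
public class OneSlotBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull  = lock.newCondition();
    private T item;

    public void put(T value) throws InterruptedException {
        lock.lock();
        try {
            while (item != null) notFull.await();  // wait in the notFull queue
            item = value;
            notEmpty.signal();                     // wake a waiting taker only
        } finally { lock.unlock(); }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (item == null) notEmpty.await(); // wait in the notEmpty queue
            T value = item;
            item = null;
            notFull.signal();                      // wake a waiting putter only
            return value;
        } finally { lock.unlock(); }
    }
}
```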

Implementation principles of Synchronized and Lock in the JVM, with code analysis