Interviewer: Do you know AQS? Tell me about it.

Me: … Here we go.

This is a classic interview question that I believe everyone has run into at some point.

There is already a pile of material about it that you can find online. Today I mainly want to combine it with my own understanding and explain it in a more accessible way, without touching any source code.

Implementation principle

AQS (AbstractQueuedSynchronizer) stands for abstract queued synchronizer.

AQS maintains a state (a shared resource variable) and a FIFO thread wait queue (the CLH queue); threads that block while competing for state are placed into this queue.

State

state is a shared resource variable of type int, declared volatile.

There are two ways to share resources:

  • Exclusive: only one thread can hold the lock at a time, e.g. ReentrantLock (a minimal sketch built on AQS follows this list)
  • Shared: multiple threads can hold it at the same time, e.g. CountDownLatch, CyclicBarrier, Semaphore, and ReadWriteLock
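
To make the exclusive mode concrete, below is a minimal sketch of a non-reentrant lock built on top of AQS, in the spirit of the Mutex example from the AQS Javadoc. The class name SimpleMutex is made up purely for illustration; this is not production code.

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal, non-reentrant exclusive lock: state 0 = free, 1 = held.
public class SimpleMutex {

    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS state from 0 to 1: only one thread can win
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int releases) {
            setState(0);   // hand the resource back
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // blocks and queues on contention
    public void unlock() { sync.release(1); }   // wakes the next queued thread
}

AQS takes care of the queueing, blocking and waking; the subclass only decides how state is interpreted.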

CLH queue (FIFO)

In short, it is a doubly linked list, implemented with the inner class Node. The head and tail pointers point to the head and tail of the list, respectively.
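
For reference, each node in that list looks roughly like this (field names follow the Node inner class of JDK 8's AbstractQueuedSynchronizer; most details are omitted):

final class Node {
    volatile int waitStatus;   // SIGNAL, CANCELLED, CONDITION, ...
    volatile Node prev;        // predecessor in the doubly linked list
    volatile Node next;        // successor in the doubly linked list
    volatile Thread thread;    // the thread parked on this node
    Node nextWaiter;           // next node in a condition queue (or shared-mode marker)
}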

We usually write as follows:

ReentrantLock lock = new ReentrantLock();
// lock
lock.lock();
try {
    // business logic code
} finally {
    // unlock
    lock.unlock();
}

Scenario analysis

What exactly happens during locking and unlocking?

lock

Suppose threads A, B, and C try to acquire the lock at the same time; thread B succeeds while threads A and C fail. The specific flow is as follows:


  • While acquiring the lock, thread B updates state from 0 to 1 via CAS.
  • Threads A and C fail to acquire the lock because their CAS updates fail.
  • Threads that fail to acquire the lock are placed into the FIFO wait queue (the doubly linked list).
  • Head and tail refer to the head and tail of the queue, respectively.

unlock

After executing the business logic, thread B calls lock.unlock(). The process is as follows:

  • Thread B sets state back to 0.
  • It then wakes up thread A, the node right after the head of the wait queue (a simplified toy sketch of both flows follows this list).
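
Putting the lock and unlock flows together, here is a heavily simplified toy version of the CAS + FIFO queue + park/unpark dance described above. It ignores reentrancy, fairness, interruption and cancellation, and the class and field names are made up purely for illustration; real AQS uses the CLH doubly linked list rather than a simple queue.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

public class ToyLock {
    private final AtomicInteger state = new AtomicInteger(0);        // 0 = free, 1 = held
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // fast path: try to CAS state from 0 to 1
        while (!state.compareAndSet(0, 1)) {
            Thread current = Thread.currentThread();
            waiters.offer(current);                  // join the FIFO wait queue
            if (state.compareAndSet(0, 1)) {         // re-check to avoid a missed wakeup
                waiters.remove(current);
                return;
            }
            LockSupport.park();                      // block until unlock() wakes us
            waiters.remove(current);                 // woken up, try again from the top
        }
    }

    public void unlock() {
        state.set(0);                                // release the shared resource
        Thread next = waiters.peek();                // head of the queue, if any
        if (next != null) {
            LockSupport.unpark(next);                // wake exactly that thread
        }
    }
}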

Fair and unfair locks

This is also a common interview question, so here is a brief rundown.

  • Fair lock: threads acquire the lock in the order they are waiting in the queue. In the example above, the next thread to acquire the lock must be thread A.

  • Unfair lock: after the lock is released, a new thread that tries to acquire it at just that moment may grab it first. For example, at the instant thread B releases the lock, a new thread D tries to acquire it and has a good chance of succeeding ahead of the queued threads.

For the details, see the two static inner classes FairSync and NonfairSync inside ReentrantLock.
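
For reference, which of the two is used is decided by the ReentrantLock constructor:

// the no-arg constructor uses the unfair policy (NonfairSync)
ReentrantLock unfairLock = new ReentrantLock();
// passing true selects the fair policy (FairSync): strict FIFO order, usually lower throughput
ReentrantLock fairLock = new ReentrantLock(true);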

Condition

In ReentrantLock, you can create a Condition object using the newCondition() method. What is this object?

To put it simply, it is used for thread cooperation, replacing the traditional Object wait() and notify().

Let’s start with an example:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Demo {

    private Lock lock = new ReentrantLock();
    private Condition condition = lock.newCondition();

    public void methodAwait() {
        try {
            lock.lock();
            System.out.println(String.format("### current thread: %s waiting ###", Thread.currentThread().getName()));
            condition.await();
            System.out.println(String.format("### current thread: %s finished ###", Thread.currentThread().getName()));
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            lock.unlock();
        }
    }

    public void methodSignal() {
        try {
            lock.lock();
            System.out.println(String.format("### current thread: %s signal ###", Thread.currentThread().getName()));
            condition.signalAll();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Demo demo = new Demo();
        Thread t1 = new Thread(() -> demo.methodAwait(), "thread-A");
        Thread t2 = new Thread(() -> demo.methodAwait(), "thread-B");
        Thread t3 = new Thread(() -> demo.methodAwait(), "thread-C");
        Thread t4 = new Thread(() -> demo.methodSignal(), "thread-D");
        t1.start();
        t2.start();
        t3.start();
        Thread.sleep(2000);
        t4.start();
    }
}

Threads A, B, and C start at the same time and compete for the lock (thread D starts two seconds later). The thread that wins the lock executes its own logic, while the threads that lose enter the CLH queue mentioned above.

Suppose thread B obtains the lock first: it calls condition.await(), releases the lock, blocks, and enters the condition wait queue. Threads A and C then acquire the lock in turn and end up in the condition wait queue the same way.

When thread D acquires the lock, it calls condition.signalAll(), which moves the threads in the condition wait queue onto the CLH queue and wakes them up so they can compete for the lock again.

Note that the threads in the condition queue join the tail of the CLH queue one by one.

Extension

LockSupport

In AQS, the threads in the queue are blocked and woken up through LockSupport.

LockSupport is a basic thread-blocking primitive used to build locks and other synchronization classes. It has two core methods:

  • park(): blocks the current (calling) thread
  • unpark(thread): wakes up the specified thread (a small demo follows this list)
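
A minimal sketch of how the two are used (the class name ParkDemo is made up for illustration):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) {
        Thread main = Thread.currentThread();

        new Thread(() -> {
            try {
                Thread.sleep(1000);               // simulate some work
            } catch (InterruptedException ignored) { }
            System.out.println("unparking main");
            LockSupport.unpark(main);             // wake exactly this thread
        }).start();

        System.out.println("main parking");
        LockSupport.park();                       // block until unparked
        System.out.println("main resumed");
    }
}

Note that unpark can even be called before park: the permit is remembered, so the later park returns immediately.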

Compared with wait(), notify(), and notifyAll() of Object classes, the differences are as follows:

  1. wait/notify/notifyAll must be used together with synchronized
  2. LockSupport is more precise: unpark wakes up exactly the thread you specify

Ordinary changes will change the ordinary.

I am Xiaonian, a low-key young guy working in the Internet industry.

Follow the public account "Zexiannian" and my personal blog 📖 edisonz.cn to read more articles.