AbstractQueuedSynchronizer

What is AQS?

AbstractQueuedSynchronizer (AQS) is the base class on which Java's synchronization utilities are built

But why does it feel like we’ve never used this class in our daily lives?

This is because each synchronization utility class has an inner class that extends AQS; the AQS is encapsulated inside that inner class, so we barely notice it exists

AQS uses the template method pattern to implement its synchronization mechanism, much like onDraw and onLayout in an Android View

Custom synchronization utility classes

If we want exclusive (mutual-exclusion) behavior, we need to override the tryAcquire method

If we want shared behavior, we need to override the tryAcquireShared method

The key to implementing a synchronization utility class is the state variable, which defaults to 0

Explicit locks are locks that implement the Lock interface

So to implement an explicit exclusive lock, we implement the Lock interface

But how do we implement lock and unlock? For that, we borrow AQS

Exclusive non-reentrant locks

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

public class SelfLock implements Lock {
    // The inner class extends AQS
    private static class Sync extends AbstractQueuedSynchronizer {
        /* Acquire the lock */
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS the state from 0 to 1; if it succeeds, this thread owns the lock
            if (compareAndSetState(0, 1)) {
                // Record the current thread as the exclusive owner
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        /* Release the lock */
        @Override
        protected boolean tryRelease(int arg) {
            if (getState() == 0) {
                throw new IllegalMonitorStateException();
            }
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }
    }

    private final Sync sync = new Sync();

    @Override
    public void lock() {
        // Acquire the lock; the thread queues if the lock is already held
        sync.acquire(1);
    }

    @Override
    public void unlock() {
        sync.release(1);
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
        sync.acquireInterruptibly(1);
    }

    @Override
    public boolean tryLock() {
        return sync.tryAcquire(1);
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        return sync.tryAcquireNanos(1, unit.toNanos(time));
    }

    @Override
    public Condition newCondition() {
        throw new UnsupportedOperationException();
    }
}

With that, we have a working exclusive lock
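A quick usage sketch of the lock above; the counter task, iteration count, and thread count are made up purely for illustration:

public class SelfLockDemo {
    private static final SelfLock lock = new SelfLock();
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try {
                    counter++; // protected by SelfLock, so no lost updates
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter); // expected: 20000
    }
}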

The basic idea of AQS: the CLH queue lock

A block of code is protected by a lock. Only one thread can hold the lock at any given time; every other thread has to wait outside in a queue.

So each queued thread is wrapped in a QNode object

QNode:

  • The predecessor node myPred
  • The current thread
  • A locked flag, which is true while the thread is holding or waiting for the lock

The QNodes of the queued threads are linked together into a list

For example, if thread A is already queued for the lock, thread B points its myPred at A's QNode and sets its own locked to true.

So how does the thread get the lock?

Thread B spins on its predecessor's locked flag. When thread A releases the lock, it sets its own locked to false; thread B then sees locked=false and can take the lock.
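The idea described above is the classic CLH spin lock. Here is a minimal sketch of that textbook version, not the actual AQS source; the names CLHLock and QNode are just illustrative:

import java.util.concurrent.atomic.AtomicReference;

// A minimal CLH spin lock: each thread spins on its predecessor's "locked" flag.
public class CLHLock {
    static class QNode {
        volatile boolean locked;
    }

    private final AtomicReference<QNode> tail = new AtomicReference<>(new QNode());
    private final ThreadLocal<QNode> myNode = ThreadLocal.withInitial(QNode::new);
    private final ThreadLocal<QNode> myPred = new ThreadLocal<>();

    public void lock() {
        QNode node = myNode.get();
        node.locked = true;                 // announce that we want / hold the lock
        QNode pred = tail.getAndSet(node);  // enqueue by swapping ourselves into the tail
        myPred.set(pred);
        while (pred.locked) {
            // busy-wait on the predecessor; AQS instead parks the thread after a brief spin
        }
    }

    public void unlock() {
        QNode node = myNode.get();
        node.locked = false;       // the successor's spin loop now exits
        myNode.set(myPred.get());  // reuse the predecessor's node next time
    }
}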

For performance, AQS uses a doubly linked queue and does not let threads spin forever: a thread spins only a limited number of times and is then parked (suspended) until it is woken up

wait, notify, and notifyAll also work with a wait queue, similar to the lock queue
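As a reminder of what that wait queue looks like with the built-in monitor, here is a small sketch; the ready flag and method names are illustrative:

public class WaitNotifyDemo {
    private final Object monitor = new Object();
    private boolean ready = false;

    // Called by a waiting thread: joins the monitor's wait queue until notified
    public void awaitReady() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {      // guard against spurious wakeups
                monitor.wait();   // releases the monitor and waits in its wait queue
            }
        }
    }

    // Called by another thread: flips the flag and wakes the waiters
    public void markReady() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll();  // waiting threads move back to compete for the monitor
        }
    }
}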

Fair locks and unfair locks

With a fair lock, every thread must queue up; the CLH process described above, where the lock is taken strictly in queue order, is fair

The fair lock code is as follows:

protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // hasQueuedPredecessors(): the fair lock first checks whether other threads are already queued
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

With an unfair lock, a thread may jump the queue and grab the lock even though other threads are already waiting

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // No hasQueuedPredecessors() check: the thread may barge in ahead of the queue
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
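In practice you choose between the two through ReentrantLock's constructor:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // Unfair by default: threads may barge in, which usually gives better throughput
    ReentrantLock unfair = new ReentrantLock();

    // Pass true for a fair lock: threads acquire strictly in queue order
    ReentrantLock fair = new ReentrantLock(true);
}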

Reentrant lock

A reentrant lock essentially comes down to how state is changed: when the owner thread acquires again, state is simply incremented

 private static class RetainSync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            } else if (getExclusiveOwnerThread() == Thread.currentThread()) {
                setState(getState() + 1);
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            if (getExclusiveOwnerThread() != Thread.currentThread()) {
                throw new IllegalMonitorStateException();
            }
            if (getState() == 0) {
                throw new IllegalMonitorStateException();
            }
            setState(getState() - 1);
            if (getState() == 0) {
                setExclusiveOwnerThread(null);
            }
            return true;
        }
    }
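To see reentrancy in action, here is a short sketch using the JDK's ReentrantLock, which increments its state on re-entry in the same spirit as RetainSync above; the nested-call example is purely illustrative:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static void outer() {
        lock.lock();                                  // state: 0 -> 1
        try {
            System.out.println(lock.getHoldCount());  // 1
            inner();                                  // re-entering does not deadlock
        } finally {
            lock.unlock();                            // state: 1 -> 0, owner cleared
        }
    }

    static void inner() {
        lock.lock();                                  // same owner thread, state: 1 -> 2
        try {
            System.out.println(lock.getHoldCount());  // 2
        } finally {
            lock.unlock();                            // state: 2 -> 1
        }
    }

    public static void main(String[] args) {
        outer();
    }
}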

The Java Memory Model (JMM)

It takes the CPU roughly 100 nanoseconds to read main memory, but only about 0.6 nanoseconds to execute an instruction

So to compute a + b, reading a takes 100 nanoseconds, reading b takes another 100 nanoseconds, and the addition itself takes 0.6 nanoseconds, for a total of about 200.6 nanoseconds

To close this gap, the CPU introduces caches

CPU cache

There are three levels: L1, L2, and L3

  • Working memory maps mainly to CPU registers and the CPU cache (about 99%), with only a very small part (about 1%) in main memory

  • Main memory maps mainly to the computer's RAM (about 99%), with only a very small part (about 1%) in the CPU cache

Working memory is private to each thread

Let's say a thread wants to increment an int variable count, and count lives in main memory

A copy of count is created in the thread's working memory. The thread cannot operate on count in main memory directly; it can only operate on the copy in its own working memory

Concurrency issues with the Java memory model

You can see that count = count + 1 is not thread-safe because of working memory: each thread works on its own copy of the variable, so a write by one thread is not immediately visible to the others. How do we solve this visibility problem? volatile guarantees visibility, but not atomicity.

volatile works for simple assignments, but it cannot make compound operations such as count++ thread-safe
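A small sketch of why: even with volatile, two threads incrementing a shared counter can lose updates, because count++ is a three-step read-modify-write. The thread count and iteration number below are made up for illustration:

public class VolatileNotAtomicDemo {
    // volatile makes writes visible, but count++ is still read-modify-write
    private static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++; // not atomic: read, add, write back
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Usually prints less than 20000 because increments get lost
        System.out.println(count);
    }
}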

volatile is suitable for the case where one thread writes and multiple threads read
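A typical case is a stop flag: a single writer flips the flag and reader threads only observe it. The running flag and sleep duration below are illustrative:

public class VolatileFlagDemo {
    // One writer thread sets the flag; reader threads only read it
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // do some work; the volatile read guarantees we see the writer's update
            }
            System.out.println("stopped");
        });
        reader.start();

        Thread.sleep(100);
        running = false; // single writer, simple assignment: volatile is enough here
        reader.join();
    }
}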