General documentation: Article directory Github: github.com/black-ant

Locking is one of the most frequently used concepts in multi-threaded programming. We touched on it briefly in the earlier introduction to synchronized; here we round out the picture by looking at a few other lock classes.

1. Lock interface

The Lock interface is the foundation of everything. It abstracts a tool for controlling access to shared resources by multiple threads.

> The interface declares the following methods (the canonical usage idiom is sketched after the list):

  • void lock() : acquires the lock, blocking until it is available
  • void lockInterruptibly() throws InterruptedException: acquires the lock unless the current thread is interrupted
  • boolean tryLock() : non-blocking attempt to acquire the lock
  • boolean tryLock(long time, TimeUnit unit) throws InterruptedException: timed attempt to acquire the lock
  • void unlock() : releases the lock
  • Condition newCondition() : returns a Condition bound to this lock
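For reference, the canonical usage idiom from the Lock Javadoc looks like the minimal sketch below (the class and method names are illustrative): the lock is always released in a finally block so that an exception in the guarded section cannot leave it held.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class LockIdiom {
    private final Lock l = new ReentrantLock(); // any Lock implementation will do

    void doWork() {
        l.lock();
        try {
            // access the resource protected by this lock
        } finally {
            l.unlock(); // always released, even if the body throws
        }
    }
}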

Lock implementations can provide more functionality than the implicit monitor lock:

  • Guaranteed ordering (fairness)
  • Non-reentrant usage
  • Deadlock detection
  • A Lock instance is an ordinary object and can itself be used as the target of a synchronized statement
  • Acquiring the monitor lock of a Lock instance has no specified relationship with calling any of the lock() methods of that instance

> Memory synchronization:

  • A successful lock operation has the same memory synchronization effects as a successful Lock action (monitor enter).
  • A successful unlock operation has the same memory synchronization effects as a successful Unlock action (monitor exit).
  • Unsuccessful lock and unlock operations, as well as reentrant lock/unlock operations, do not require any memory synchronization effects.

2. ReentrantLock

2.1 Introduction to ReentrantLock

ReentrantLock is a lock that can be re-entered by the thread that already holds it; that is, a single thread can acquire the same lock multiple times in a row. It offers a wider range of lock operations than synchronized: more flexible code structure, configurable properties such as fairness, and support for multiple Condition objects associated with the same lock.

Its advantages are (the timed and interruptible variants are sketched after the basic-use example below):

  • The lock can be made fair.
  • It is a recursive (reentrant), non-blocking synchronization mechanism.
  • A thread waiting for the lock can respond to interruption.
  • A thread can try to acquire the lock and either return immediately or wait for a bounded time when the lock is unavailable.
  • Locks can be acquired and released in different scopes and in different orders.

Its features:

  • Reentrant mutual exclusion
  • Both fair and non-fair modes are provided
    • Fair lock: the lock is granted to waiting threads in arrival order

ReentrantLock basic use

private Lock lock = new ReentrantLock();

public void test() {
    lock.lock();
    try {
        for (int i = 0; i < 5; i++) {
            logger.info("------> CurrentThread [{}] , i : [{}] <-------", Thread.currentThread().getName(), i);
        }
    } finally {
        lock.unlock();
    }
}
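The flexibility mentioned above (fair mode, timed attempts, interruptible waits) looks roughly like the following sketch; the ReentrantLock methods are real, while the surrounding class and the work done inside the lock are illustrative.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class FlexibleLocking {
    // true -> fair lock: waiting threads acquire the lock in FIFO order
    private final ReentrantLock lock = new ReentrantLock(true);

    // Timed attempt: give up after one second instead of blocking forever
    boolean tryWork() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                return true; // got the lock, do the work here
            } finally {
                lock.unlock();
            }
        }
        return false; // could not acquire the lock in time
    }

    // Interruptible wait: a thread blocked here can be woken up by interrupt()
    void interruptibleWork() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // do the work here
        } finally {
            lock.unlock();
        }
    }
}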

2.2 Internal important classes

2.2.1 Sync

Sync is an internal abstract class of ReentrantLock; the two lock flavors below are built on top of it. What does Sync do internally?

Node 1: Sync extends AbstractQueuedSynchronizer

AbstractQueuedSynchronizer is the familiar AQS; Sync uses the AQS state to represent the number of holds on the lock (demonstrated in the sketch below).

abstract static class Sync extends AbstractQueuedSynchronizer
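The hold count kept in the AQS state is observable from the outside through ReentrantLock.getHoldCount(). A minimal sketch of what reentrancy means for that count (single-threaded, so the printed values are deterministic):

import java.util.concurrent.locks.ReentrantLock;

class HoldCountDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                              // state goes from 0 to 1
        lock.lock();                              // same thread re-enters: state becomes 2
        System.out.println(lock.getHoldCount());  // 2
        lock.unlock();                            // state back to 1
        lock.unlock();                            // state back to 0, lock fully released
        System.out.println(lock.getHoldCount());  // 0
    }
}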

Node 2: There is an abstract lock method, implemented separately by the fair and non-fair subclasses

abstract void lock();

// ? - Non-fair lock synchronization object
// lock() : the difference from the fair lock is that it first tries to CAS the state from 0 to 1;
// if that succeeds, the current thread is set as the exclusive owner, otherwise it falls back to acquire(1)
static final class NonfairSync extends Sync {
    final void lock() {
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1);
    }
}

// ? - Fair lock synchronization object
// The key difference lies in tryAcquire(int acquires): before acquiring, it checks whether
// any thread has been waiting longer than the current thread
static final class FairSync extends Sync

Node 3: What does the nonfairTryAcquire method do

       
final boolean nonfairTryAcquire(int acquires) {
    // Get the current thread and the state
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // CAS the state from 0 to acquires
        if (compareAndSetState(0, acquires)) {
            // Set the thread that currently has exclusive access
            // (null means no thread holds the lock)
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // getExclusiveOwnerThread returns the thread last set by setExclusiveOwnerThread
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

Node 4: tryRelease


// tryRelease pairs setExclusiveOwnerThread with setState:
// once the owner is set back to null, no thread holds the lock any more

protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}
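The owner check at the top of tryRelease is what produces the familiar IllegalMonitorStateException when a thread releases a lock it does not hold. A minimal sketch:

import java.util.concurrent.locks.ReentrantLock;

class WrongUnlock {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        try {
            lock.unlock(); // never locked: tryRelease sees that this thread is not the owner
        } catch (IllegalMonitorStateException e) {
            System.out.println("cannot release a lock this thread does not hold");
        }
    }
}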

2.3 Similarities and Differences between Synchronized and ReentrantLock

Similarities

  • Both provide mutual exclusion and memory visibility semantics for multiple threads.
  • Both are reentrant locks.

Differences

  • The synchronization mechanism is different
    • synchronized synchronizes through the Monitor object referenced from the Java object header.
    • ReentrantLock synchronizes through CAS, AQS (AbstractQueuedSynchronizer) and LockSupport (for blocking and unblocking threads).
  • The visibility mechanism is different
    • synchronized relies on the JVM memory model to guarantee memory visibility of shared variables across threads.
    • ReentrantLock guarantees that visibility through the volatile state field of AQS.
  • Usage differs (a side-by-side sketch follows this list)
    • synchronized can modify instance methods (locking the instance), static methods (locking the Class object), and code blocks (explicitly specifying the object to lock).
    • ReentrantLock requires explicit calls to lock or tryLock, and the lock must be released in a finally block.
  • The feature set differs
    • synchronized cannot set a waiting time and cannot be interrupted while waiting.
    • ReentrantLock provides richer features such as timed lock waits (tryLock with a timeout), interruptible acquisition (lockInterruptibly), conditions (Condition with await, signal and other methods), and more.
  • The lock types differ
    • synchronized supports only non-fair locking.
    • ReentrantLock provides both fair and non-fair implementations; in most cases the non-fair lock is the more efficient choice.
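A side-by-side sketch of the two usage styles described above (the class, fields and methods are illustrative):

import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();
    private int a, b;

    // synchronized: acquisition and release of the monitor are implicit
    void incrementA() {
        synchronized (monitor) {
            a++;
        }
    }

    // ReentrantLock: acquisition and release are explicit; release belongs in finally
    void incrementB() {
        lock.lock();
        try {
            b++;
        } finally {
            lock.unlock();
        }
    }
}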

Conclusion:

Before synchronized was optimized, its performance was much worse than ReentrantLock's, but since synchronized gained biased locking and lightweight locks (spin locks), the two have performed similarly.

Where both would work, the official recommendation is even to prefer synchronized. In practice, a common optimization is to go further and separate reads from writes, which is where ReentrantReadWriteLock comes in.

2.4 ReentrantLock in depth

// Lock methods implemented by ReentrantLock
- void lock()
- Condition newCondition()
- boolean tryLock()
- void unlock()

--------------
// Node 1: based on the Lock interface and supports serialization
public class ReentrantLock implements Lock, java.io.Serializable

--------------
// Node 2: inner classes. ReentrantLock has several important Sync classes; Sync is the basis of synchronization control

--------------
// Node 3: switching between fair and non-fair mode
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

--------------
// Node 4: the lock() implementation simply delegates to sync (NonfairSync by default)
public void lock() { sync.lock(); }

--------------
// Node 5: lockInterruptibly implementation
sync.acquireInterruptibly(1);

--------------
// Node 6: tryLock implementation
public boolean tryLock() {
    return sync.nonfairTryAcquire(1);
}

3. ReadWriteLock

A read/write lock applies lock splitting to improve the performance of concurrent programs. It suits "read much, write little" scenarios: multiple read operations may proceed concurrently, while a write operation can be performed by only one thread at a time.

ReadWriteLock is optimized for situations where a data structure is written relatively infrequently but read by many tasks.

ReadWriteLock allows multiple readers at the same time, as long as no thread is trying to write. If the write lock is held by another task, no reader can access the data until the write lock is released.
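A minimal sketch of the "read much, write little" case (the map wrapper and its method names are illustrative): any number of threads may be inside get() at the same time, while put() excludes both readers and other writers.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadMostlyMap<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReadWriteLock rwl = new ReentrantReadWriteLock();

    public V get(K key) {
        rwl.readLock().lock();      // many readers may hold the read lock at once
        try {
            return map.get(key);
        } finally {
            rwl.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rwl.writeLock().lock();     // exclusive: blocks readers and other writers
        try {
            map.put(key, value);
        } finally {
            rwl.writeLock().unlock();
        }
    }
}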

The performance gain from ReadWriteLock is mainly limited by the following factors:

  1. How often the data is read compared with how often it is modified.
  2. How long the read and write operations take.
  3. How many threads are contending for the lock.
  4. Whether the code runs on a multiprocessor machine.

Features:

  • Fairness: both fair and non-fair modes are supported.
  • Reentrancy: supported; the read lock and the write lock each allow up to 65535 reentrant acquisitions.
  • Lock downgrading: by acquiring the write lock, then the read lock, then releasing the write lock, a write lock can be downgraded to a read lock.

In-depth ReadWriteLock:

ReadWriteLock is an interface that provides only two methods:

    /**
     * Returns the lock used for reading.
     *
     * @return the lock used for reading
     */
    Lock readLock();

    /**
     * Returns the lock used for writing.
     *
     * @return the lock used for writing
     */
    Lock writeLock();

4. ReentrantReadWriteLock

ReentrantLock is an exclusive lock: it can be held by only one thread at a time. ReentrantReadWriteLock is a reentrant read/write lock implementation: as long as there is no writer, the read lock can be held by multiple reader threads simultaneously.

I- ReadWriteLock
    M- Lock readLock()
    M- Lock writeLock()

C- ReentrantReadWriteLock : reentrant read/write lock implementation class, implements ReadWriteLock
    ?- Maintains a pair of related locks internally: one for read-only operations, one for writes. The write lock is exclusive; the read lock is shared.

4.1 ReentrantReadWriteLock in-depth

Use case (based on the example in the ReentrantReadWriteLock Javadoc; note how the write lock is downgraded to a read lock):

    Object data;
    volatile boolean cacheValid;
    final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    void processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // Must release the read lock before acquiring the write lock
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                // Recheck state: another thread may have refreshed the cache in the meantime
                if (!cacheValid) {
                    data = "test";
                    cacheValid = true;
                }
                // Downgrade: acquire the read lock before releasing the write lock
                rwl.readLock().lock();
            } finally {
                rwl.writeLock().unlock(); // Unlock write, still hold read
            }
        }
        try {
            // use the data here under the read lock
        } finally {
            rwl.readLock().unlock();
        }
    }

Node 1: two lock fields are exposed internally; this is how the single lock is split into a read part and a write part

// read lock
private final ReentrantReadWriteLock.ReadLock readerLock;
// write lock
private final ReentrantReadWriteLock.WriteLock writerLock;

Node 2: Sync again. As usual, the fair flag decides which Sync implementation is created

final Sync sync;
public ReentrantReadWriteLock(boolean fair) {
	sync = fair ? new FairSync() : new NonfairSync();
	readerLock = new ReadLock(this);
	writerLock = new WriteLock(this);
}

Node 3: Sync internal state control

// Read and write counts are packed into the single AQS state int:
// the low 16 bits hold the exclusive (write) hold count, the high 16 bits hold the shared (read) hold count.
static final int SHARED_SHIFT   = 16;
static final int SHARED_UNIT    = (1 << SHARED_SHIFT);
static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1;
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

// Read count: the number of threads holding the read lock = state >>> 16 (unsigned right shift by 16 bits)
static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
// Write count: the number of times the write lock has been acquired = state & 0x0000FFFF (erase the high 16 bits)
static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }
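A quick worked example of the packing above, as a standalone sketch that copies the two helper functions (it is not JDK code): two concurrent readers give a state of 2 << 16 = 131072, while one writer that has re-entered three times gives a state of 3.

class StatePackingDemo {
    static final int SHARED_SHIFT   = 16;
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

    public static void main(String[] args) {
        int twoReaders = 2 << SHARED_SHIFT;                        // two threads hold the read lock
        System.out.println(sharedCount(twoReaders));               // 2
        System.out.println(exclusiveCount(twoReaders));            // 0

        int writerHeldThreeTimes = 3;                              // one thread re-entered the write lock 3 times
        System.out.println(sharedCount(writerHeldThreeTimes));     // 0
        System.out.println(exclusiveCount(writerHeldThreeTimes));  // 3
    }
}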

Node 4: the HoldCounter class: each reader thread needs its own count for reentrant read acquisitions

// Per-thread read hold counter. Maintained in a ThreadLocal; cached in cachedHoldCounter
static final class HoldCounter {
	int count = 0;
	// Use the thread id rather than a Thread reference so it does not prevent garbage collection
	final long tid = getThreadId(Thread.currentThread());
}
// The hold counter of the last thread that successfully acquired the read lock
private transient HoldCounter cachedHoldCounter;

Node 5: ThreadLocalHoldCounter, the ThreadLocal that holds each thread's read hold count (also re-created during deserialization in readObject)

static final class ThreadLocalHoldCounter
	extends ThreadLocal<HoldCounter> {
	public HoldCounter initialValue() {
		return new HoldCounter();
	}
}

// The number of reentrant read locks held by the current thread.
// Initialized only in the constructor and readObject; removed when a thread's read hold count drops to 0
private transient ThreadLocalHoldCounter readHolds;

Node 6: Sync inner class

// NonfairSync: non-fair lock
static final class NonfairSync extends Sync {
	final boolean writerShouldBlock() {
		return false; // writers can always barge
	}
	final boolean readerShouldBlock() {
		// Block if the thread currently at the head of the queue (if any) is a waiting writer.
		// If the waiting writer sits behind other readers that have not yet drained from the queue,
		// a new reader will not block.
		return apparentlyFirstQueuedIsExclusive();
	}
}

// FairSync: fair lock
static final class FairSync extends Sync {
	final boolean writerShouldBlock() {
		return hasQueuedPredecessors();
	}
	final boolean readerShouldBlock() {
		return hasQueuedPredecessors();
	}
}

5. Condition

5.1 Introduction to Condition

In Java SE 5, the Lock interface was introduced as an alternative to synchronized. The AQS wait (sync) queue and the Condition queue are independent of each other. await() releases the lock while the current thread holds it, creates a Condition node, appends it to the tail of the Condition queue, and blocks the current thread. signal() moves the head node of the Condition queue to the tail of the AQS sync queue, where it waits to reacquire the lock.
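The classic application of a Condition is a bounded buffer, in the spirit of the example from the Condition Javadoc; the sketch below is trimmed for brevity (java.util.concurrent.ArrayBlockingQueue already provides this functionality):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    private final Object[] items = new Object[100];
    private int putIdx, takeIdx, count;

    public void put(T x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();          // releases the lock and joins notFull's condition queue
            items[putIdx] = x;
            if (++putIdx == items.length) putIdx = 0;
            ++count;
            notEmpty.signal();            // moves one waiting taker towards the AQS sync queue
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();
            T x = (T) items[takeIdx];
            items[takeIdx] = null;
            if (++takeIdx == items.length) takeIdx = 0;
            --count;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}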

5.2 Condition process

5.3 Condition source code

Condition itself is just an interface; its concrete implementation is the ConditionObject class inside AQS, which is what we mainly discuss here:

Node 1: queue fields

// First node of the Condition queue
private transient Node firstWaiter;
// Last node of the Condition queue
private transient Node lastWaiter;

Node 2: Core method doSignal + doSignalAll


// doSignal: removes and transfers nodes until it hits a non-cancelled one or null
private void doSignal(Node first) {
    do {
        // Detach the head; if the queue becomes empty, clear lastWaiter as well
        if ( (firstWaiter = first.nextWaiter) == null)
            lastWaiter = null;
        first.nextWaiter = null;
    // transferForSignal is an AQS method that moves a node from the condition queue to the sync queue,
    // essentially a CAS on the node's waitStatus followed by enq(node), which splices it into the sync queue.
    // Note that (first = firstWaiter) in the loop condition is an assignment, not a comparison.
    } while (!transferForSignal(first) &&
             (first = firstWaiter) != null);
}



// doSignalAll: removes and transfers all nodes; compare doSignal above, much like notifyAll versus notify
private void doSignalAll(Node first) {
    lastWaiter = firstWaiter = null;
    do {
        Node next = first.nextWaiter;
        first.nextWaiter = null;
        transferForSignal(first);
        first = next;
    } while (first != null);
}

Node 3: main method await

public final void await() throws InterruptedException {
	if (Thread.interrupted())
		throw new InterruptedException();
	// Step 1: add the current thread to the Condition queue
	Node node = addConditionWaiter();
	// Step 2: release the lock with the current state value and remember that state
	int savedState = fullyRelease(node);
	int interruptMode = 0;
	// isOnSyncQueue returns true once the node (the one originally placed on the condition queue)
	// has been transferred to the sync queue, i.e. it is now waiting to reacquire the lock
	while (!isOnSyncQueue(node)) {
		LockSupport.park(this);
		if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
			break;
	}
	// Reacquire the lock in exclusive, uninterruptible mode, using the saved state.
	// acquireQueued is shared between the condition-wait methods and the acquire methods.
	if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
		interruptMode = REINTERRUPT;
	if (node.nextWaiter != null) // clean up if cancelled
		unlinkCancelledWaiters();
	if (interruptMode != 0)
		reportInterruptAfterWait(interruptMode);
}

// awaitNanos(long nanosTimeout) : timed conditional wait; the core of its loop body
	if (nanosTimeout <= 0L) {
		transferAfterCancelledWait(node);
		break;
	}
	if (nanosTimeout >= spinForTimeoutThreshold)
		LockSupport.parkNanos(this, nanosTimeout);
	if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
		break;
	nanosTimeout = deadline - System.nanoTime();


// awaitUntil(Date deadline) : absolute-time conditional wait, i.e. waiting until a fixed point in time
// Once the deadline has passed, the node is transferred directly
	if (System.currentTimeMillis() > abstime) {
		timedout = transferAfterCancelledWait(node);
		break;
	}

Node 4: Release method

    // Calls release with the current state value and returns that saved state.
    // On failure, cancels the node and throws an exception.
    final int fullyRelease(Node node) {
        boolean failed = true;
        try {
            // Read the current state
            int savedState = getState();
            if (release(savedState)) {
                failed = false;
                return savedState;
            } else {
                throw new IllegalMonitorStateException();
            }
        } finally {
            if (failed)
                // Mark the node as cancelled
                node.waitStatus = Node.CANCELLED;
        }
    }

    public final boolean release(int arg) {
        // tryRelease is overridden by the concrete synchronizer
        if (tryRelease(arg)) {
            Node h = head;
            if (h != null && h.waitStatus != 0)
                unparkSuccessor(h);
            return true;
        }
        return false;
    }

Node 5: Other methods:

  • addConditionWaiter adds a new waiter node to the Condition queue.
  • awaitUninterruptibly: implements an uninterruptible conditional wait
public final void awaitUninterruptibly() {
	Node node = addConditionWaiter();
	// Release the lock with the saved state; on failure an IllegalMonitorStateException is thrown
	int savedState = fullyRelease(node);
	boolean interrupted = false;
	// Block until the node is signalled onto the sync queue
	while (!isOnSyncQueue(node)) {
		LockSupport.park(this);
		if (Thread.interrupted())
			interrupted = true;
	}
	// acquireQueued: reacquire the lock with the saved state in exclusive, uninterruptible mode
	if (acquireQueued(node, savedState) || interrupted)
		selfInterrupt();
}

Conclusion

At this point the multithreading series is drawing to a close, though in truth it is far from complete. After recently reading The Art of Java Concurrent Programming, I felt I wasn't quite up to speed on multithreading, so I've decided to spend some more time reading before fleshing this document out further.

Reference and Thanks

Multithreaded set

Taro, source

Screwing series