This is the second day of my participation in the August More Text Challenge
Source version: OpenJDK 11
Read-write lock
I previously analyzed ReentrantLock source code parsing (juejin.cn), a class frequently used in concurrent scenarios; it implements the Lock interface with exclusive acquisition semantics.
This time the subject is ReentrantReadWriteLock, which implements the ReadWriteLock interface.
I see ReadWriteLock as an extension of Lock for a particular scenario — the familiar read-heavy, write-light workload. When reads far outnumber writes, acquiring the resource exclusively is an obvious performance bottleneck.
ReadWriteLock
The semantics a read-write lock expresses: multiple reads can proceed at the same time, but while a write is in progress, all other reads and writes are blocked.
ReadWriteLock is an extension of Lock for read/write scenarios — its only two methods both return a Lock.
public interface ReadWriteLock {
    Lock readLock();
    Lock writeLock();
}
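Before the cache demo below, here is a minimal usage sketch. The Counter class and its fields are illustrative, not from the JDK or the original post: many readers may hold the read lock at once, a writer is exclusive, and unlock always lives in finally.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A hypothetical shared counter guarded by a read-write lock.
class Counter {
    private final ReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    int get() {
        Lock r = rw.readLock();
        r.lock();
        try {
            return value;       // many readers may be here concurrently
        } finally {
            r.unlock();         // always release in finally
        }
    }

    void increment() {
        Lock w = rw.writeLock();
        w.lock();
        try {
            value++;            // writers are exclusive
        } finally {
            w.unlock();
        }
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        System.out.println(c.get()); // 1
    }
}
```

The try/finally shape matters: the demo below omits it for brevity, but production code should not.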
The use of read-write locks
A typical read-heavy, write-light scenario is a cache. Caches are mainly used to speed up access to certain resources, usually by sitting between the application and the database.
But introducing a cache brings some consistency problems:
- Different cache update strategies introduce different consistency issues. (For those interested, the common strategies are:)
  - Cache Aside Pattern
  - Read/Write Through Pattern
  - Write Behind Pattern (asynchronous cache write)
- When a cached entry expires, a flood of read requests may fall through to the database all at once — cache breakdown.
A read/write lock helps if consistency is critical, or to prevent cache breakdown — though for breakdown alone it is usually enough to put an exclusive lock around just the code that loads the cache.
Let's write a toy cache demo:
public class ReadWriteLockDemo {
    private Integer cache;
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public Integer getFromCache() {
        lock.readLock().lock();
        System.out.println(Thread.currentThread().getName() + ": Start reading cache");
        Integer res = cache;
        System.out.println(Thread.currentThread().getName() + ": End reading cache");
        lock.readLock().unlock();
        if (res == null) {
            return loadCache();
        }
        return res;
    }

    public Integer loadCache() {
        lock.writeLock().lock();
        Integer val;
        if (cache == null) {
            System.out.println(Thread.currentThread().getName() + ": Start loading database");
            // get from database...
            _load();
            val = new Random(5).nextInt();
            cache = val;
            System.out.println(Thread.currentThread().getName() + ": End loading database");
            lock.writeLock().unlock();
        } else {
            // lock downgrade: acquire the read lock before releasing the write lock
            lock.readLock().lock();
            lock.writeLock().unlock();
            System.out.println(Thread.currentThread().getName() + ": Start reading cache");
            val = cache;
            System.out.println(Thread.currentThread().getName() + ": End reading cache");
            lock.readLock().unlock();
        }
        return val;
    }

    private void _load() {
        // Simulate the time spent reading from / writing to the database
        try {
            TimeUnit.SECONDS.sleep(1L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public void setCache(Integer val) {
        lock.writeLock().lock();
        System.out.println(Thread.currentThread().getName() + ": Start writing cache");
        // write to database...
        _load();
        cache = val;
        System.out.println(Thread.currentThread().getName() + ": End writing cache");
        lock.writeLock().unlock();
    }
}
Concurrent read
// The cache is already populated
executorService.execute(() -> cache.getFromCache());
executorService.execute(() -> cache.getFromCache());
executorService.execute(() -> cache.getFromCache());

// output
pool-1-thread-1: Start reading cache
pool-1-thread-2: Start reading cache
pool-1-thread-1: End reading cache
pool-1-thread-3: Start reading cache
pool-1-thread-2: End reading cache
pool-1-thread-3: End reading cache
You can see that concurrent reads proceed simultaneously without blocking each other.
Reads and writes block each other
// The cache is already populated
executorService.execute(() -> cache.getFromCache());
executorService.execute(() -> cache.getFromCache());
executorService.execute(() -> cache.getFromCache());
executorService.execute(() -> cache.loadCache());
executorService.execute(() -> cache.getFromCache());
executorService.execute(() -> cache.getFromCache());

// output
pool-1-thread-2: Start reading cache
pool-1-thread-4: Start reading cache
pool-1-thread-4: End reading cache
pool-1-thread-2: End reading cache
pool-1-thread-3: Start reading cache
pool-1-thread-3: End reading cache
// write cache
pool-1-thread-5: Start loading database
pool-1-thread-5: End loading database
// reads resume only after the write finishes
pool-1-thread-6: Start reading cache
pool-1-thread-6: End reading cache
pool-1-thread-7: Start reading cache
pool-1-thread-7: End reading cache
Thread 5 can write the cache only after threads 2, 3, and 4 finish reading, and reads resume only after thread 5 finishes. That should be enough to show that reads and writes block each other.
Concurrent writes
executorService.execute(() -> cache.loadCache());
executorService.execute(() -> cache.loadCache());
executorService.execute(() -> cache.loadCache());
// output
pool-1-thread-1: Start loading database
pool-1-thread-1: End loading database
pool-1-thread-2: Start reading cache
pool-1-thread-2: End reading cache
pool-1-thread-3: Start reading cache
pool-1-thread-3: End reading cache
When the cache has expired, double-checking after taking the write lock prevents a stampede of concurrent reads from turning into a series of redundant serial database loads.
Lock downgrading
Lock downgrading means that while holding the write lock, a thread acquires the read lock and then releases the write lock, after which it holds only the read lock.
lock.writeLock().lock();
writeCache();
lock.readLock().lock();
lock.writeLock().unlock();
readCache();
doSomeThing();
lock.readLock().unlock();
The purpose is to avoid cache inconsistency while preserving read concurrency.
Without downgrading there are two options: release the write lock and then acquire the read lock, in which case another writer may sneak in between and the thread may not read back the value it just set; or hold the write lock until all the work is done, blocking every other reader the whole time.
Lock downgrading solves both problems.
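As a concrete sketch (class and method names are mine, not from the post), the downgrade sequence can be verified with ReentrantReadWriteLock's own introspection methods:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    // Performs the write -> read downgrade and reports whether, afterwards,
    // the thread holds only the read lock.
    static boolean downgradeLeavesOnlyReadLock() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.writeLock().lock();       // 1. acquire the write lock
        // ... update the cache here ...
        lock.readLock().lock();        // 2. acquire the read lock while still writing
        lock.writeLock().unlock();     // 3. release the write lock: downgraded
        boolean ok = !lock.isWriteLocked() && lock.getReadHoldCount() == 1;
        lock.readLock().unlock();
        return ok;
    }

    public static void main(String[] args) {
        System.out.println(downgradeLeavesOnlyReadLock()); // true
    }
}
```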
Lock upgrade is not supported
Java's read-write lock does not support lock upgrading: you cannot acquire the write lock while holding the read lock.
The reason: reads are concurrent, so multiple threads may hold the read lock at once. If several of them try to upgrade, at most one could win the write lock, and a writer must first wait for every other thread to release its read lock. But those other threads will not release — they too are waiting to acquire the write lock. Each waits on the others forever: a deadlock.
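A quick probe confirms this. This sketch uses tryLock so the failed upgrade returns immediately instead of hanging forever:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UpgradeDemo {
    // Attempts a read -> write upgrade. With a plain lock() this would block
    // forever, so tryLock() is used to observe the failure without hanging.
    static boolean tryUpgrade() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.readLock().lock();
        try {
            return lock.writeLock().tryLock(); // false: upgrading is not supported
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryUpgrade()); // false
    }
}
```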
ReentrantReadWriteLock source code analysis
ReentrantReadWriteLock is the JDK implementation of a read/write lock. It implements the ReadWriteLock interface and provides read and write locks that implement Lock. And AQS, the big daddy of Java's concurrency tools, is of course not absent: the lock's capabilities are built on AQS as their foundation.
The Lock system
As mentioned above, ReentrantReadWriteLock implements the ReadWriteLock interface and relies on AQS to implement read and write locks (fair or unfair mode).
public class ReentrantReadWriteLock implements ReadWriteLock {
    public ReentrantReadWriteLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
        readerLock = new ReadLock(this);
        writerLock = new WriteLock(this);
    }
    public ReentrantReadWriteLock.WriteLock writeLock() { return writerLock; }
    public ReentrantReadWriteLock.ReadLock readLock()   { return readerLock; }

    public static class ReadLock implements Lock {
        private final Sync sync;
    }
    public static class WriteLock implements Lock {
        private final Sync sync;
    }
    abstract static class Sync extends AbstractQueuedSynchronizer {}
}
The locks delegate to AQS; readers familiar with AQS and ReentrantLock should already know the drill. If not, see ReentrantLock source code parsing (juejin.cn) and AbstractQueuedSynchronizer (AQS): Cornerstone of concurrency tools (juejin.cn).
As with ReentrantLock, the Lock implementations simply proxy to the AQS subclass, so I won't dwell on the WriteLock and ReadLock source. The interesting questions are: how do you implement two locks over a single critical resource, and what do fair and unfair modes mean when there are two locks?
Note that ReadLock does not support Condition — it is a shared lock, and readers block only because of writers.
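This is observable directly: readLock().newCondition() throws UnsupportedOperationException, while the exclusive write lock supports Condition just like ReentrantLock (a small sketch):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConditionDemo {
    // Returns true if the read lock hands out a Condition (it does not).
    static boolean readLockSupportsCondition() {
        try {
            new ReentrantReadWriteLock().readLock().newCondition();
            return true;
        } catch (UnsupportedOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(readLockSupportsCondition()); // false
        // The write lock supports Condition normally
        Condition c = new ReentrantReadWriteLock().writeLock().newCondition();
        System.out.println(c != null); // true
    }
}
```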
High and low halves: reads and writes in one int
The AQS critical-resource state is a single int, yet it must represent both the read lock and the write lock. To keep them from colliding, a clever design splits the 32 bits into two halves: the high 16 bits count the read lock, the low 16 bits count the write lock. Read- and write-related operations (AQS shared and exclusive operations, respectively) then work on their own half of the value.
static final int SHARED_SHIFT   = 16;                      // read (shared) count shift
static final int SHARED_UNIT    = (1 << SHARED_SHIFT);     // read (shared) increment unit: 00000000 00000001 00000000 00000000
static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1; // maximum reentrant holds of the read (shared) or write (exclusive) lock
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1; // write (exclusive) mask: 00000000 00000000 11111111 11111111
Therefore, when counting write-lock (exclusive) or read-lock (shared) holds, the other lock's bits must be masked out:

/** Returns the number of shared holds represented in count. */
static int sharedCount(int c)    { return c >>> SHARED_SHIFT; } // drop the write-lock info in the low 16 bits
/** Returns the number of exclusive holds represented in count. */
static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; } // drop the read-lock info in the high 16 bits
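A tiny sketch replicating these constants makes the split concrete: adding SHARED_UNIT bumps the read count in the high half, adding 1 bumps the write count in the low half, and the two extractors never interfere with each other.

```java
public class StateSplitDemo {
    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = (1 << SHARED_SHIFT);
    static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1; // 65535
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

    public static void main(String[] args) {
        int state = 0;
        state += SHARED_UNIT;  // one read hold: +1 in the high 16 bits
        state += SHARED_UNIT;  // another read hold (or a reentrant one)
        state += 1;            // one write hold: +1 in the low 16 bits
        System.out.println(sharedCount(state));    // 2
        System.out.println(exclusiveCount(state)); // 1
        System.out.println(MAX_COUNT);             // 65535
    }
}
```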
Read lock acquisition: performance first
protected final int tryAcquireShared(int unused) {
    Thread current = Thread.currentThread();
    int c = getState();
    if (exclusiveCount(c) != 0 && getExclusiveOwnerThread() != current)
        // Only the thread already holding the write lock may proceed (lock downgrading)
        return -1;
    int r = sharedCount(c);
    // A simple attempt first, assuming the common case: no lock contention (the CAS does not
    // fail and no one is queued), no huge concurrency (the count is below the maximum),
    // and no special reentrancy handling needed.
    // Whether to yield depends on the fairness mode: under competition, fair mode must queue.
    if (!readerShouldBlock() &&
        r < MAX_COUNT &&
        compareAndSetState(c, c + SHARED_UNIT)) {
        // A typical example of caching: in the uncontended case, a little recorded
        // information avoids unnecessary overhead
        if (r == 0) {
            firstReader = current;
            firstReaderHoldCount = 1;
        } else if (firstReader == current) {
            firstReaderHoldCount++;
        } else {
            HoldCounter rh = cachedHoldCounter;
            if (rh == null ||
                rh.tid != LockSupport.getThreadId(current))
                cachedHoldCounter = rh = readHolds.get();
            else if (rh.count == 0)
                readHolds.set(rh);
            rh.count++;
        }
        return 1;
    }
    // The full, more complex logic
    return fullTryAcquireShared(current);
}
The read-write lock implementation is riddled with performance considerations. Both the hold-count bookkeeping and lock acquisition assume the common, uncontended case first, preferring the cheap path to avoid unnecessary overhead.
Performance optimization of the hold-count bookkeeping
Sync records how many read locks each thread holds (write locks are not tracked this way) in order to support ReentrantReadWriteLock.getReadHoldCount(), which returns the current thread's read hold count.
static final class HoldCounter {
    int count; // initially 0
    // Use an id, not a reference, to avoid garbage retention
    final long tid = LockSupport.getThreadId(Thread.currentThread());
}

static final class ThreadLocalHoldCounter extends ThreadLocal<HoldCounter> {
    public HoldCounter initialValue() {
        return new HoldCounter();
    }
}

private transient ThreadLocalHoldCounter readHolds;
To avoid the cost of ThreadLocal lookups (and of creating and clearing entries) on every operation, Sync also keeps two redundant records internally:
The first thread to acquire the read lock and its hold count; these are cleared only after that thread fully releases its read lock.

private transient Thread firstReader;
private transient int firstReaderHoldCount;

The thread that most recently acquired the read lock, and its hold counter:

private transient HoldCounter cachedHoldCounter;

Each time a hold count must be recorded, the code first checks whether the current thread is firstReader or matches cachedHoldCounter (the two necessarily represent different threads) so it can update the count directly. Under contention these extra checks cost a little, but the cost is small; and the saving they bring in the uncontended case is obvious.
Many of the later operations contain bookkeeping similar to tryAcquireShared's, so it won't be expanded again.
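The bookkeeping is visible through the public API; a reentrant read raises the current thread's hold count (a small sketch):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class HoldCountDemo {
    // Acquires the read lock twice reentrantly and returns the observed hold count.
    static int reentrantReadHolds() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.readLock().lock();
        lock.readLock().lock();              // reentrant acquisition
        int holds = lock.getReadHoldCount(); // backed by the HoldCounter bookkeeping
        lock.readLock().unlock();
        lock.readLock().unlock();
        return holds;
    }

    public static void main(String[] args) {
        System.out.println(reentrantReadHolds()); // 2
    }
}
```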
When there is no contention, the read lock is acquired directly by the CAS above. Otherwise the logic falls through to:
final int fullTryAcquireShared(Thread current) {
    HoldCounter rh = null;
    for (;;) { // a reader never queues because the count limit was hit, so keep retrying
        int c = getState();
        if (exclusiveCount(c) != 0) {
            if (getExclusiveOwnerThread() != current)
                // Only the thread already holding the write lock may proceed (lock downgrading)
                return -1;
            // We hold the write lock; blocking here would deadlock:
            // suppose A holds the write lock and B is queued for the read lock;
            // if A queued behind B, A would wait on B while B waits on A
        } else if (readerShouldBlock()) {
            // Make sure we're not acquiring read lock reentrantly
            if (firstReader == current) {
                // The current thread is the first reader and has never fully released;
                // this acquisition is reentrant, so ignore the block hint and continue
            } else {
                if (rh == null) {
                    rh = cachedHoldCounter;
                    if (rh == null ||
                        rh.tid != LockSupport.getThreadId(current)) {
                        rh = readHolds.get();
                        if (rh.count == 0)
                            readHolds.remove();
                    }
                }
                if (rh.count == 0)
                    // Not reentrant: respect the block hint
                    return -1;
            }
        }
        if (sharedCount(c) == MAX_COUNT)
            // Exceeding the limit does not queue; it throws
            throw new Error("Maximum lock count exceeded");
        if (compareAndSetState(c, c + SHARED_UNIT)) {
            // hold-count bookkeeping omitted
            return 1;
        }
    }
}
This method handles reentrancy: a reentrant read gets special treatment even when readerShouldBlock() asks it to yield.
Also note, unlike the usual semantics of other locks or of AQS itself, when the count limit is exceeded it does not queue waiting for holds to be released — it throws an Error outright.
Read locks are shared, and exceeding the limit never queues, so the loop simply retries.
There is a further reason, tied to how reads and writes share the state; the answer comes in a moment.
There is also tryReadLock, which on failure returns false immediately without blocking — so its only difference is the missing readerShouldBlock() check. (Ps: WriteLock has a similar tryWriteLock.)
final boolean tryReadLock() {
    Thread current = Thread.currentThread();
    for (;;) {
        int c = getState();
        if (exclusiveCount(c) != 0 &&
            getExclusiveOwnerThread() != current)
            return false;
        int r = sharedCount(c);
        if (r == MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        if (compareAndSetState(c, c + SHARED_UNIT)) {
            // hold-count bookkeeping omitted
            return true;
        }
    }
}
Read lock release: counting down to zero
protected final boolean tryReleaseShared(int unused) {
    Thread current = Thread.currentThread();
    if (firstReader == current) {
        // assert firstReaderHoldCount > 0;
        if (firstReaderHoldCount == 1)
            // The first reader fully releases its lock
            firstReader = null;
        else
            firstReaderHoldCount--;
    } else {
        // hold-count bookkeeping omitted
    }
    for (;;) {
        int c = getState();
        int nextc = c - SHARED_UNIT;
        if (compareAndSetState(c, nextc))
            // Normally returning true signals freed resources, but with reads and writes
            // sharing one state it is hard to separate read capacity from write capacity.
            // That is why tryAcquireShared never queues on insufficient read capacity.
            // Returning true here means the lock is fully free: a writer may proceed.
            return nextc == 0;
    }
}
Here is the promised answer: tryReleaseShared returns true only when the state reaches 0, i.e. when the write lock has become available. Had readers been allowed to queue on a full count, a release would also need to signal "read capacity freed" (count back below MAX_COUNT) as distinct from "fully free" (state == 0), and with reads and writes packed into one state there is no clean way to express both.
Write lock acquisition: a familiar routine
protected final boolean tryAcquire(int acquires) {
    Thread current = Thread.currentThread();
    int c = getState();
    int w = exclusiveCount(c);
    if (c != 0) {
        // c != 0 with w == 0 means the read count is nonzero: readers hold the lock
        if (w == 0 || current != getExclusiveOwnerThread())
            // The write lock is exclusive
            return false;
        if (w + exclusiveCount(acquires) > MAX_COUNT)
            // The write lock must not exceed the limit either
            throw new Error("Maximum lock count exceeded");
        // Reentrant acquire: only this thread holds the write lock, so a plain setState is safe
        setState(c + acquires);
        return true;
    }
    // Yield if queued behind others (fairness), or fail if the CAS loses a race
    if (writerShouldBlock() ||
        !compareAndSetState(c, c + acquires))
        return false;
    setExclusiveOwnerThread(current);
    return true;
}
By contrast, the write lock's exclusivity repels read locks as well as other writers — a typical reentrant exclusive lock pattern, and much easier to follow.
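The reentrant exclusive behavior can be confirmed via getWriteHoldCount (a small sketch):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WriteReentrancyDemo {
    // Acquires the write lock twice reentrantly and returns the observed hold count.
    static int reentrantWriteHolds() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.writeLock().lock();
        lock.writeLock().lock();             // reentrant: state grows by `acquires`
        int holds = lock.getWriteHoldCount();
        lock.writeLock().unlock();
        lock.writeLock().unlock();           // now exclusiveCount == 0, lock is free
        return holds;
    }

    public static void main(String[] args) {
        System.out.println(reentrantWriteHolds()); // 2
    }
}
```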
Write lock release: everything back to zero
This is even simpler: subtract, and once the exclusive count reaches 0, the lock is free.
protected final boolean tryRelease(int releases) {
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    int nextc = getState() - releases;
    boolean free = exclusiveCount(nextc) == 0;
    if (free)
        setExclusiveOwnerThread(null);
    setState(nextc);
    return free;
}
That covers nearly everything. But look closely and one piece remains: the different read/write blocking strategies in the fair and unfair modes.
abstract boolean readerShouldBlock();
abstract boolean writerShouldBlock();
Fair lock: all beings are equal
static final class FairSync extends Sync {
    final boolean writerShouldBlock() {
        return hasQueuedPredecessors();
    }
    final boolean readerShouldBlock() {
        return hasQueuedPredecessors();
    }
}
In fair mode, whenever there are waiters queued ahead, yield. Since read locks are acquired in a retry loop and exceeding the count limit never queues, a queued reader is generally there because of a write request, or because it arrived behind an earlier writer.
Unfair lock: the write lock is the boss
static final class NonfairSync extends Sync {
    final boolean writerShouldBlock() {
        return false; // writers can always barge
    }
    final boolean readerShouldBlock() {
        return apparentlyFirstQueuedIsExclusive();
    }
}
In unfair mode, simply put, the write lock is the boss — bold and overbearing.
Writers barge in and fight it out among themselves; a reader steps aside only when it happens to bump into a write request at the head of the queue (apparentlyFirstQueuedIsExclusive), politely pretending to make way.
This resembles MySQL's table locking, where write locks have higher priority than read locks.
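Finally, the fairness mode is chosen at construction time and can be inspected afterwards (a small sketch):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairnessDemo {
    // Constructs a lock in the requested mode and reports what it got.
    static boolean isFairMode(boolean fair) {
        return new ReentrantReadWriteLock(fair).isFair(); // true -> FairSync
    }

    public static void main(String[] args) {
        System.out.println(isFairMode(true));  // true
        System.out.println(isFairMode(false)); // false: NonfairSync is the default
    }
}
```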
Conclusion
This little thing really has a lot of tricks up its sleeve: reads and writes packed into one value, a read lock that chases performance everywhere, a write lock that plays the boss in unfair mode... so much to take in.
Hope this article helps — likes, comments, and shares appreciated 😁
Source code notes: SaltyFishInJiang/openjdk11 — ReentrantReadWriteLock.java at 545d26c92a833ac66476b33a7cfc78d5be5ad5ba (github.com)