Series index

Concurrent series: Thread locking

  1. Why does CountDownLatch guarantee execution order?

  2. Why can Concurrent Containers achieve efficient concurrency?

  3. ReentrantLock and the correct way to use locks

New series: Android11 system source code analysis

  1. How to download the Android source code in a Mac environment?

  2. Android11 Source code analysis: How does the application start?

  3. Android11 source code analysis: How is an Activity started?

  4. Android11 source code analysis: Service startup process analysis

  5. Android11 source code analysis: How do static broadcasts receive notifications?

  6. Binder cross-process implementation (in progress)

  7. How do plug-in activities start?

  8. Android11 source code analysis: Why does the UI get stuck?

  9. How does SurfaceFlinger distribute vsync signals? (in progress)

Classic series: Android10 system startup process

  1. Source code download and compilation

  2. Overview of the Android system startup process

  3. Init process source parsing

  4. Zygote process source code analysis

  5. SystemServer source code parsing

Preface

Earlier we looked at the source code for concurrent containers and thread-collaboration tools. Today we will start from usage and then move into the source code to see how ReentrantLock encapsulates and optimizes the use of locks

Now, let's get into the main content

Using ReentrantLock to implement sequential printing

In Part 1, Why does CountDownLatch guarantee execution order?, we implemented the sequential-printing requirement with CountDownLatch and saw that its underlying principle is notification and wake-up between threads

ReentrantLock is a lock encapsulation and implementation that also supports notification and wake-up between threads. The specific methods are signal(), which notifies another thread so it can acquire the lock, and await(), which releases the lock and waits

The implementation creates a ReentrantLock object along with four Condition variables

  1. When condition is not satisfied, call condition.await() to release the lock and wait

  2. When a thread finishes its work, condition.signal() is called to wake up the corresponding thread so it can continue

The specific code is as follows

public void conditionPrint() {
    ReentrantLock lock = new ReentrantLock();
    Condition aCondition = lock.newCondition();
    Condition bCondition = lock.newCondition();
    Condition cCondition = lock.newCondition();
    Condition dCondition = lock.newCondition();

    Thread t1 = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                lock.lock();
                printStr("a");
                aCondition.signal();
            } finally {
                lock.unlock();
            }
        }
    });

    Thread t2 = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                lock.lock();
                aCondition.await();
                printStr("b");
                bCondition.signal();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        }
    });

    Thread t3 = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                lock.lock();
                bCondition.await();
                printStr("c");
                cCondition.signal();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        }
    });

    Thread t4 = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                lock.lock();
                cCondition.await();
                printStr("d");
                dCondition.signal();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        }
    });

    t3.start();
    t4.start();
    t2.start();
    t1.start();

    printStr("conditionPrint - start printing");
    try {
        lock.lock();
        dCondition.await();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        lock.unlock();
        printStr("conditionPrint - end of print");
    }
}

ReentrantLock breakdown

What exactly is ReentrantLock?

As the name suggests, ReentrantLock is a reentrant lock

A reentrant lock means that a thread which already holds the lock can acquire the same lock again, without releasing it first, and continue executing the locked code
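As a minimal illustration (a hypothetical example, not code from this article's project), reentrancy looks like this:

import java.util.concurrent.locks.ReentrantLock;

class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    void outer() {
        lock.lock();              // hold count: 0 -> 1
        try {
            inner();              // re-enters the same lock without deadlocking
        } finally {
            lock.unlock();        // hold count: 1 -> 0
        }
    }

    void inner() {
        lock.lock();              // hold count: 1 -> 2, no blocking
        try {
            // ... critical section ...
        } finally {
            lock.unlock();        // hold count: 2 -> 1
        }
    }
}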

When multiple threads contend for one lock, AbstractQueuedSynchronizer (hereafter "AQS") maintains a queue of waiting threads; when it is time to hand the lock over, the next requesting thread is taken from the queue and allowed to continue

ReentrantLock defines its own Sync class, which extends AQS; NonfairSync and FairSync are subclasses of Sync

The default constructor of ReentrantLock uses NonfairSync (an AQS implementation class) for the acquire and release logic. You can also pass a fair flag to the constructor to control whether a fair lock is used; if it is true, FairSync is used instead
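The choice looks roughly like this (simplified from the OpenJDK ReentrantLock source); for example, new ReentrantLock(true) creates a fair lock:

public ReentrantLock() {
    sync = new NonfairSync();                          // default: unfair lock
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();  // true selects the fair lock
}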

What are fair locks and unfair locks?

Before we look at the code, what exactly is a fair lock?

As mentioned above, AQS maintains a doubly linked queue of waiting threads, ordered by when each thread requested the lock

If threads acquire the lock strictly in the order in which they requested it, that is a fair lock

If, under certain conditions, a thread is allowed to jump the queue and grab the lock, that is the idea behind an unfair lock

In actual use, lock.lock() is called to lock and lock.unlock() to unlock; internally these delegate to Sync's lock() and release() functions

await() is called when a thread needs to wait, and signal() is called to wake a waiting thread
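For reference, a minimal sketch of the canonical usage pattern (hypothetical class and field names; note that await() is normally called inside a loop that rechecks the condition):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class WaitNotifyDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition condition = lock.newCondition();
    private boolean ready = false;        // the state being waited on (hypothetical)

    void awaitReady() throws InterruptedException {
        lock.lock();
        try {
            while (!ready) {              // recheck the condition after every wakeup
                condition.await();        // atomically releases the lock and waits
            }
        } finally {
            lock.unlock();                // always release the lock in finally
        }
    }

    void markReady() {
        lock.lock();
        try {
            ready = true;
            condition.signal();           // wake one thread waiting on this condition
        } finally {
            lock.unlock();
        }
    }
}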

Let’s take a look at specific implementations

Unfair lock: NonfairSync

With the default constructor, the unfair lock implementation is used. Calling lock() modifies the state value in AQS through a CAS operation

In ReentrantLock, state represents the number of times the lock has been (re)entered: a state of 0 means no thread holds the lock, and a non-zero state means some thread is holding it

The specific code is as follows

static final class NonfairSync extends Sync {
    final void lock() {
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1);
    }

    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

If the CAS succeeds (state was 0), no thread currently holds the lock. The current Thread object is saved in AQS's exclusiveOwnerThread, indicating that this thread has acquired the lock and can execute the guarded logic

If the CAS fails (state is non-zero), the lock is already held by some thread, and AQS's acquire() function is called to add the current thread to the wait-queue linked list

acquire(1) invokes the template method in AQS, which calls the subclass's (NonfairSync's) tryAcquire(1) to request the lock, and that in turn calls nonfairTryAcquire()

The following code

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            // no one holds the lock: record the current thread as the owner
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        // the lock is already held by the current thread: bump the reentry count
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

This function is the core implementation of the reentrant lock logic:

  1. If state == 0, the lock is not held; set the current thread as the owning thread and return true

  2. If state != 0 and the thread requesting the lock is the same as the thread currently holding it, record the reentry count in state and return true

  3. If state != 0 and the thread requesting the lock is not the current owner, the request fails and false is returned

If false is returned, the current thread has not acquired the lock. Execution continues into doAcquireInterruptibly(), which wraps the thread in a Node and adds it to the wait-queue linked list; when the node eventually acquires the lock it is set as the head node

The following code

private void doAcquireInterruptibly(int arg)
        throws InterruptedException {
        // wrap the current thread in an exclusive-mode Node and append it to the wait queue
        final Node node = addWaiter(Node.EXCLUSIVE);

        boolean failed = true;
        try {
            for (;;) {
                final Node p = node.predecessor();
                // only the node directly behind the head may try to acquire the lock
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return;
                }
                //...
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

At this point, the analysis of the unfair lock implementation is complete; let's do a brief summary

Summary

In the unfair lock implementation, the thread that currently holds the lock can, after releasing it, immediately acquire it again and keep executing without going through the wait queue. This is what makes it an unfair implementation, and it is also what makes the lock reentrant for the thread that currently holds it

Both synchronized and the Lock implementations in Java are reentrant, and the advantages of an unfair (reentrant) lock are:

  1. It saves the overhead of waking up threads, so performance is better

  2. When the thread that currently holds the lock requests it again after releasing it, it can obtain the lock directly, which avoids deadlock

Fair lock: FairSync

In FairSync, the implementation is strictly fair: when a thread requests the lock while other threads are already waiting, it must join the wait queue and be woken in insertion order rather than jumping ahead

Its advantage is that it guarantees fairness: threads acquire the lock in the order in which they requested it (FIFO)

Its disadvantage is that the strict queuing and extra thread wake-ups make its performance inferior to the unfair lock

The specific code is not shown in full here; interested readers can look it up themselves
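For a quick reference, here is a simplified sketch of FairSync's tryAcquire based on the OpenJDK source; the key difference from nonfairTryAcquire above is the hasQueuedPredecessors() check:

protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // unlike the unfair version, only take the lock if no one is queued ahead of us
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}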

Bonus: how to keep threads safe without locks?

In the source code analysis we saw that ReentrantLock does not rely on lower-level locking to maintain mutual exclusion between threads; instead it operates on the state value with CAS (compareAndSwap)

In Java concurrency, the atomic classes likewise use CAS instructions to implement atomic operations, which guarantees concurrency safety without locks

Take AtomicInteger as an example: just like ReentrantLock's CAS updates of state, it is implemented through CAS modification of its value
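As a rough illustration of the CAS retry pattern (a hypothetical helper for demonstration, not JDK code; AtomicInteger already provides this via incrementAndGet()):

import java.util.concurrent.atomic.AtomicInteger;

class CasDemo {
    // hand-rolled increment built on CAS: keep retrying until compareAndSet succeeds,
    // i.e. until no other thread changed the value between our read and our write
    static int incrementWithCas(AtomicInteger counter) {
        while (true) {
            int current = counter.get();
            int next = current + 1;
            if (counter.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}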

So we can also use an atomic class to implement the sequential-printing requirement from earlier in this article:

  1. Create an AtomicInteger and initialize its value to 4

  2. Each thread loops checking the value; when the value matches its condition, it prints and then decrements the value

  3. When the value counts down to 0, all worker threads have finished executing, and the main thread, once this condition is met, interrupts the worker threads

The following code

public void atomicPrint() {
    AtomicInteger lock = new AtomicInteger(4);

    Thread t1 = new Thread(new Runnable() {
        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                if (lock.get() == 4) {
                    System.out.println("Thread:" + "a" + " lock count:" + lock.get());
                    System.out.println("a");
                    lock.decrementAndGet();
                }
            }
        }
    });

    Thread t2 = new Thread(new Runnable() {
        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                if (lock.get() == 3) {
                    System.out.println("Thread:" + "b" + " lock count:" + lock.get());
                    System.out.println("b");
                    lock.decrementAndGet();
                }
            }
        }
    });

    Thread t3 = new Thread(new Runnable() {
        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                if (lock.get() == 2) {
                    System.out.println("Thread:" + "c" + " lock count:" + lock.get());
                    System.out.println("c");
                    lock.decrementAndGet();
                }
            }
        }
    });

    Thread t4 = new Thread(new Runnable() {
        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                if (lock.get() == 1) {
                    System.out.println("Thread:" + "d" + " lock count:" + lock.get());
                    System.out.println("d");
                    lock.decrementAndGet();
                }
            }
        }
    });

    System.out.println("atomicPrint - start execution");
    t3.start();
    t4.start();
    t2.start();
    t1.start();

    while (!Thread.currentThread().isInterrupted()) {
        if (lock.get() == 0) {
            t3.interrupt();
            t4.interrupt();
            t2.interrupt();
            t1.interrupt();
            System.out.println("atomicPrint - end of execution");
            lock.decrementAndGet();
            return;
        }
    }
    Thread.currentThread().interrupt();
}

Summary

When we use an atomic class to keep threads executing in order, each thread has to loop checking the atomic value to see whether its condition is met, which is effectively a spin. The overhead is therefore higher than blocking and waiting, so in general thread cooperation (await/signal) is the better way to implement this

When all you need is atomic operations on a piece of state, the corresponding atomic class is the right tool and is more efficient than locking

Finally

This article is the third and planned final installment of “Threading Trivia,” but the field of concurrency goes much deeper than a few articles can cover, so there may be more to follow, such as an analysis of the Java memory model and of atomicity, visibility, and ordering

My learning and analysis style combines theory with source code, which inevitably involves JVM-level source code, so it will take some time to accumulate and digest before producing those articles

That's it for today's article. See you in the next one!

If this article inspires you, please give it a thumbs up