Preface

Thread concurrency series:

- Java Unsafe/CAS/LockSupport: applications and principles
- The nature of the Java concurrent "lock" (implementing a lock step by step)
- Java Synchronized: mutual exclusion in application and source exploration
- Java object header analysis and use (Synchronized related)
- Java Synchronized: the evolution of biased lock / lightweight lock / heavyweight lock
- Java Synchronized heavyweight lock in depth (mutual exclusion)
- Java Synchronized heavyweight lock in depth (synchronization)
- Java concurrency: in-depth analysis of AQS (part 1)
- Java concurrency: in-depth analysis of AQS (part 2)
- Thorough analysis of Thread.sleep/Thread.join/Thread.yield/Object.wait/Condition.await (and their differences from Synchronized)
- Java concurrency: in-depth analysis of ReentrantReadWriteLock
- Java Semaphore/CountDownLatch/CyclicBarrier in-depth analysis (principles)
- Java Semaphore/CountDownLatch/CyclicBarrier in-depth analysis (applications)
- The most detailed graphic analysis of Java's various locks (finale)

The preceding articles in this series analyzed thread-concurrency knowledge points from the perspective of the source code. This article focuses on summarizing, organizing, and distilling those points, with as little pasted code as possible. If anything raises doubts, please refer to the corresponding analysis article. Through this article, you will learn:

1. Lock's family photo
2. How to verify fair/unfair locks
3. How the bottom layer acquires/releases a lock
4. Spin locks and adaptive spin
5. Why wait/notify

1. Lock’s family photo

2. How to verify fair/unfair locks

The difference between fair and unfair locks lies in the strategy used to acquire the lock.

As shown above:

Thread 1 holds the lock. Threads 2, 3, and 4 are queued in the synchronization queue waiting for the lock.

Thread 5 also wants to acquire the lock, and there are two different strategies based on fairness.

Fair lock

Thread 5 first checks whether any threads are waiting in the synchronization queue. Since there clearly are, thread 5 joins the tail of the synchronization queue and waits.

Unfair lock

1. Thread 5 tries to grab the lock first, regardless of whether threads are waiting in the synchronization queue. If it is lucky, it picks up the lock directly; if it fails, it joins the queue like everyone else.
2. Thread 5 gets its chance when, for example, thread 1 has just released the lock and woken thread 2, but thread 2 has not yet acquired it. If thread 5 grabs the lock before thread 2 does, it succeeds, and that success is unfair to threads 2, 3, and 4.

We know that ReentrantLock can implement fair/unfair locks, so let’s verify.

To verify fair locking:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TestThread {
    private ReentrantLock reentrantLock = new ReentrantLock(true);

    private void testLock() {
        for (int i = 0; i < 5; i++) {
            Thread thread = new Thread(runnable);
            thread.setName("thread " + (i + 1));
            thread.start();
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Runnable runnable = new Runnable() {
        @Override
        public void run() {
            try {
                System.out.println(Thread.currentThread().getName() + " up, ready for lock");
                reentrantLock.lock();
                System.out.println(Thread.currentThread().getName() + " got lock");
                Thread.sleep(500);
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                reentrantLock.unlock();
            }
        }
    };

    public static void main(String[] args) {
        TestThread testThread = new TestThread();
        testThread.testLock();
    }
}
```

Print the following:

As you can see, threads 2, 3, 4, and 5 request the lock in that order, and they actually acquire it in the same order. First come, first served: the lock is fair.

To verify the unfair lock:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TestThread {
    private ReentrantLock reentrantLock = new ReentrantLock(false);

    private void testLock() {
        for (int i = 0; i < 10; i++) {
            Thread thread = new Thread(runnable);
            thread.setName("thread " + (i + 1));
            thread.start();
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private void testUnfair() {
        try {
            Thread.sleep(500);
            while (true) {
                System.out.println("+++++++ I'm grabbing the lock +++++++");
                boolean isLock = reentrantLock.tryLock();
                if (isLock) {
                    System.out.println("======== I got the lock!!! ========");
                    reentrantLock.unlock();
                    return;
                }
                Thread.sleep(10);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    private Runnable runnable = new Runnable() {
        @Override
        public void run() {
            try {
                System.out.println(Thread.currentThread().getName() + " up, ready for lock");
                reentrantLock.lock();
                System.out.println(Thread.currentThread().getName() + " got lock");
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                reentrantLock.unlock();
            }
        }
    };

    public static void main(String[] args) {
        TestThread testThread = new TestThread();
        testThread.testLock();
        testThread.testUnfair();
    }
}
```

Print the following:

Putting the two figures together:

1. Threads 1 through 10 call lock() to grab the lock, and then the main thread joins the scramble.
2. If we can show even once that the main thread acquired the lock before one of threads 1 through 10 that requested it earlier, the lock is proved unfair.
3. In the second figure, the main thread obtains the lock before threads 4 through 10, showing that the process is unfair.

It is worth noting:

tryLock() is used here to preempt the lock; tryLock() shares the same core logic as lock() in unfair mode.
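A tiny runnable illustration of this point (the class name is invented for the demo). Per the ReentrantLock Javadoc, tryLock() will "barge" even when the lock was constructed in fair mode: it attempts the acquisition immediately, without honoring the queue, and returns at once with a boolean:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock(true); // fair mode
        // tryLock() does not queue: it attempts the acquisition immediately
        // and returns true/false right away, like an unfair lock() attempt.
        boolean got = lock.tryLock();
        System.out.println("acquired: " + got); // uncontended, so prints "acquired: true"
        if (got) {
            lock.unlock();
        }
    }
}
```

If queue-honoring behavior is wanted, the Javadoc suggests tryLock(0, TimeUnit.SECONDS) instead.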

3. How does the bottom layer acquire/release the lock

A thread acquires the lock and releases the lock. As the family photo in the first section shows, the basic data structures behind a lock include:

a shared lock variable (volatile), CAS, and a synchronization queue.

Assume that the shared variable is volatile int threadId.

threadId == 0 means no thread holds the lock; threadId != 0 means some thread is holding the lock.

Acquiring a lock

1. The thread calls CAS(threadId, 0, 1). If threadId == 0, the CAS succeeds and threadId is set to 1: the thread has acquired the lock.
2. If the CAS fails, threadId != 0: another thread has already changed threadId, so this thread fails to acquire the lock and joins the synchronization queue.

Release the lock

1. The thread holding an exclusive (mutex) lock needs no CAS to release it: since only one thread can hold the lock, it simply writes threadId = 0, and once threadId == 0 the lock is released.
2. After the release succeeds, it wakes a thread waiting in the synchronization queue.

The core idea of lock acquisition/release in both Synchronized and AQS is exactly these steps, just with more complex, finer-grained control and more comprehensive handling of corner cases.

Note: CAS(threadId, xx, xx) is pseudocode
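The steps above can be sketched in runnable form. This is a minimal toy, not the real Synchronized/AQS implementation: the names (SketchLock, threadId, waiters) are invented for this sketch, and the real implementations close races this toy ignores (e.g. a waiter enqueuing just after the holder has released and already scanned the queue).

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

class SketchLock {
    private final AtomicInteger threadId = new AtomicInteger(0); // 0 = free
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // CAS(threadId, 0, 1): success means we acquired the lock.
        while (!threadId.compareAndSet(0, 1)) {
            waiters.offer(Thread.currentThread()); // join the synchronization queue
            if (threadId.get() != 0) {             // re-check to narrow the lost-wakeup window
                LockSupport.park(this);            // block until the holder wakes us
            }
            waiters.remove(Thread.currentThread());
        }
    }

    public void unlock() {
        // Exclusive lock: only the holder writes here, so a plain write
        // (no CAS) is enough to release.
        threadId.set(0);
        Thread next = waiters.poll();
        if (next != null) {
            LockSupport.unpark(next);              // wake one waiting thread
        }
    }
}
```

The while loop also makes the sketch tolerate spurious returns from park(): a woken thread always re-attempts the CAS rather than assuming it owns the lock.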

4. Spin locking and adaptive spin

Many articles claim that CAS is a spin lock. That claim is problematic, and usually stems from an incomplete understanding of what CAS does and what a lock is.

1. CAS stands for compare and swap: if the value in memory matches the expected value, no other thread has changed the target variable, so it can safely be set to the new value.
2. CAS is an atomic operation, implemented by a CPU instruction.
3. CAS is only a single attempt to modify the target variable; the result is either success or failure, and the call returns immediately.
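A quick runnable illustration of point 3 (the class and variable names are just for the demo): each compareAndSet call is a single attempt that returns immediately, whether it succeeds or fails.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(0);

        // Memory value 0 matches the expected value 0 -> swap in 1, return true.
        boolean first = value.compareAndSet(0, 1);

        // Memory value is now 1, expected value 0 -> no swap, return false.
        // The call still returns at once: CAS never blocks or loops by itself.
        boolean second = value.compareAndSet(0, 2);

        System.out.println(first + " " + second + " " + value.get()); // true false 1
    }
}
```

Any looping ("spinning") on top of CAS is supplied by the caller, which is exactly what the spin-lock example below does.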

From the analysis of the last summary, we know that synchronized and AQS both rely on CAS to acquire/release locks.

Spin locks do not block, and CAS does not block either, so CAS can be used to implement a spin lock:

```java
import java.util.concurrent.atomic.AtomicInteger;

class MyLock {
    AtomicInteger atomicInteger = new AtomicInteger(0);

    private void lock() {
        boolean suc = false;
        do {
            // The underlying operation is CAS
            suc = atomicInteger.compareAndSet(0, 1);
        } while (!suc);
    }
}
```

As shown above, the custom lock MyLock, thread 1, thread 2 respectively call lock() to lock.

1. Thread 1 calls lock(); atomicInteger == 0, so suc == true and thread 1 acquires the lock.
2. Thread 2 also calls lock(); because atomicInteger == 1 the lock is occupied, so suc == false. Thread 2 does not block, however: it keeps looping, and as long as thread 1 does not release the lock, thread 2 can never acquire it.

This is the implementation of the spin lock, as can be seen:

1. Spin locks avoid thread suspension/wakeup and context switching as much as possible, but unlimited spinning can be a futile drain on CPU resources.
2. Spin locks therefore suit scenarios where the critical section executes quickly, i.e., the lock is released soon after being acquired.

To keep the benefit of spinning while avoiding unlimited spin, adaptive spin was introduced:

```java
import java.util.concurrent.atomic.AtomicInteger;

class MyLock {
    AtomicInteger atomicInteger = new AtomicInteger(0);
    final int MAX_COUNT = 10;
    int count = 0;

    private void lock() {
        boolean suc = false;
        while (!suc && count <= MAX_COUNT) {
            // The underlying operation is CAS
            suc = atomicInteger.compareAndSet(0, 1);
            if (!suc) Thread.yield();
            count++;
        }
    }
}
```

As you can see, a maximum spin count is set; if the lock still cannot be acquired within that many attempts, the loop exits.

In fact, implementations such as Synchronized and ReentrantReadWriteLock also apply the idea of spin (adaptive spin) during lock preemption, in order to avoid suspending/waking threads as much as possible. But spinning is only a small part of their lock implementations; they are not spin locks.

5. Why wait/notify

Let’s look at the pseudocode for the exclusive lock:

```java
// Thread1
myLock.lock();
{
    // critical section code
}
myLock.unlock();

// Thread2
myLock.lock();
{
    // critical section code
}
myLock.unlock();
```

Thread1 and Thread2 are mutually exclusive and do their own work without interference. If Thread1 and Thread2 need to work together, for example:

```java
// Thread1
myLock.lock();
{
    // critical section code
    while (flag == false)
        wait();
}
myLock.unlock();

// Thread2
myLock.lock();
{
    // critical section code
    flag = true;
    notify();
}
myLock.unlock();
```

As the code above shows, Thread1 needs flag == true before it can proceed, and that value is changed by Thread2: the two threads have a cooperative relationship.

Thread1 therefore calls wait(), which releases the lock and blocks. Thread2 acquires the lock after Thread1 releases it, changes the flag, and notifies Thread1 (which is actually woken only after Thread2 has finished the critical section and released the lock). Thread1 wakes up, competes for the lock again, finds flag == true, and continues its work. Thread1 and Thread2 happily cooperate, each doing its job.

Why do wait/notify need to acquire the lock first? Because flag is a variable shared between threads and must be accessed correctly under concurrency, a lock is required.
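The pseudocode above can be made concrete with Java's built-in monitor lock (a minimal sketch; the class and method names are invented for illustration):

```java
public class Cooperation {
    private final Object lock = new Object();
    private boolean flag = false;

    // Thread1: wait until Thread2 sets the flag
    public void awaitFlag() throws InterruptedException {
        synchronized (lock) {          // must hold the monitor to call wait()
            while (!flag) {            // loop guards against spurious wakeups
                lock.wait();           // releases the lock while blocked
            }
            System.out.println("flag is true, continuing work");
        }
    }

    // Thread2: set the flag and wake the waiter
    public void setFlag() {
        synchronized (lock) {          // must hold the monitor to call notify()
            flag = true;
            lock.notify();             // waiter actually resumes after we release
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Cooperation c = new Cooperation();
        Thread t1 = new Thread(() -> {
            try { c.awaitFlag(); } catch (InterruptedException ignored) { }
        });
        t1.start();
        Thread.sleep(100);             // let t1 start waiting first
        c.setFlag();
        t1.join();
    }
}
```

Note the while loop around wait(): both the Object.wait Javadoc and the AQS-based Condition.await recommend always re-checking the condition in a loop after waking.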

This concludes the thread concurrency series for now. If you have any questions about this series, please leave a comment.

This article is based on JDK 1.8.

If you like this article, please give it a like and a follow; your encouragement is my motivation to keep going.

More to come. Follow along with me for a step-by-step, systematic, in-depth study of Android/Java.
