Whether the thread should lock the synchronized resource

  • Lock it: pessimistic lock
  • Don't lock it: optimistic lock

Whether to block a thread that fails to lock the synchronized resource

  • Block the thread: blocking lock
  • Don't block the thread: spin lock, adaptive spin lock

Whether the details differ when multiple threads compete for the synchronized resource

  • No lock: the resource is not locked; only one thread's modification succeeds, and the other threads retry without locking
  • Biased lock: when the same thread repeatedly acquires the synchronized resource, it automatically obtains the biased lock on that resource
  • Lightweight lock: when multiple threads compete for the synchronized resource, a thread that fails to acquire it spins while waiting for the lock to be released
  • Heavyweight lock: when multiple threads compete for the synchronized resource, a thread that fails to acquire it blocks and waits to be woken up

Whether threads queue up when competing for the lock

  • Queue up: fair lock
  • Try to jump the queue first, and fall back to queuing if that fails: unfair lock

Whether multiple procedures within one thread can acquire the same lock

  • Yes: reentrant lock
  • No: non-reentrant lock

Whether multiple threads can share one lock

  • Yes: shared lock
  • No: exclusive lock

Pessimistic locks and optimistic locks

Pessimistic locking and optimistic locking are broad concepts that reflect different perspectives on thread synchronization.

Pessimistic locking

A pessimistic lock assumes that other threads will modify the data while it is being used, so it locks the data before accessing it to ensure no other thread can change it in the meantime. Lock implementations: the synchronized keyword and implementations of the Lock interface. Applicable scenario: write-heavy workloads, where locking the data first keeps it correct during write operations.
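A minimal sketch of the two pessimistic styles mentioned above, synchronized and a Lock implementation; the PessimisticCounter class and its fields are illustrative only:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class PessimisticCounter {
    private final Lock lock = new ReentrantLock();
    private int count;

    // Pessimistic locking via the synchronized keyword
    public synchronized void incrementSynchronized() {
        count++;
    }

    // Pessimistic locking via a Lock implementation
    public void incrementWithLock() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }
}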

Optimistic locking

An optimistic lock assumes that no other thread will modify the data while it is being used, so it does not lock at all; only when updating does it check whether the data was modified by another thread in the meantime. Lock implementation: CAS algorithms, such as the CAS spin used by AtomicInteger for atomic increment. Applicable scenario: read-heavy workloads, where skipping the lock greatly improves read performance. The execution flow of an optimistic lock is as follows: thread A reads the data and operates on it directly, then prepares to update the synchronized resource. Before writing back to memory, it checks whether the resource has been updated by another thread: 1. If not, it writes the new value to memory. 2. If it has been updated, it reacts according to the implementation (report an error, or retry).
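A minimal sketch of the retry flow described above, built on AtomicInteger.compareAndSet; the OptimisticCounter class is illustrative only:

import java.util.concurrent.atomic.AtomicInteger;

class OptimisticCounter {
    private final AtomicInteger count = new AtomicInteger();

    // Read, compute, then try to swap in the new value; retry if another
    // thread updated the value in the meantime (step 2 of the flow above).
    public int increment() {
        for (;;) {
            int current = count.get();      // read the current value
            int next = current + 1;         // compute the new value
            if (count.compareAndSet(current, next)) {
                return next;                // no concurrent update: success
            }
            // another thread won the race: loop and retry
        }
    }
}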

CAS algorithm

A lock-free algorithm implemented on top of hardware primitives: variables are synchronized between threads without locks, so no thread is ever blocked. Implementation in the JDK: the atomic classes in the java.util.concurrent package implement optimistic locking through CAS. The algorithm involves three operands:

  • V: the memory value to be read and written
  • A: the expected value to compare against
  • B: the new value to be written
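As pseudocode, the semantics of CAS over these three operands look roughly like this (plain Java used only to convey the idea; the real operation executes as a single atomic hardware instruction, and MemoryCell is a made-up holder type):

// Illustrative pseudocode only: in reality the whole body runs atomically in hardware.
class MemoryCell {
    int value; // the memory value V
}

class CasSemantics {
    static boolean compareAndSwap(MemoryCell v, int expectedA, int newB) {
        if (v.value == expectedA) { // does V still equal the expected value A?
            v.value = newB;         // then write the new value B
            return true;
        }
        return false;               // otherwise fail without writing
    }
}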

Problems with CAS

1. The ABA problem. Thread 1 intends to use CAS to change a variable's value from A to B. Before it does, thread 2 changes the value from A to C and then from C back to A. When thread 1 executes its CAS it sees the value is still A, so the CAS succeeds; but the state has in fact changed in between, so even though the CAS succeeds there may be hidden problems. A common illustration is money: someone embezzles your money and pays it back before you notice, so the balance looks untouched, yet a crime has still been committed. The JDK addresses this (see the AtomicStampedReference sketch below); tracing the underlying CAS source code further just leads straight into C code.

2. Long spin times mean high overhead: if the CAS keeps failing, the retry loop keeps consuming CPU.
3. Atomicity is guaranteed for only one shared variable: a single CAS cannot atomically update several variables at once.
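The JDK's AtomicStampedReference pairs the value with a version stamp so that an A→B→A sequence is still detected; a minimal sketch of how it catches the ABA case (thread roles and values are illustrative):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int stamp = ref.getStamp();          // stamp observed by "thread 1"

        // "Thread 2" changes A -> C -> A, bumping the stamp each time.
        ref.compareAndSet(100, 101, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(101, 100, ref.getStamp(), ref.getStamp() + 1);

        // "Thread 1" now tries CAS with its old stamp: the value matches,
        // but the stamp does not, so the CAS fails and ABA is detected.
        boolean success = ref.compareAndSet(100, 200, stamp, stamp + 1);
        System.out.println(success); // false
    }
}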

Unsafe

AtomicInteger

public final int incrementAndGet() {
    for (;;) {                            // spin until the CAS succeeds
        int current = get();              // read the current value
        int next = current + 1;           // compute the new value
        if (compareAndSet(current, next))
            return next;                  // CAS succeeded: return the new value
    }
}

public final boolean compareAndSet(int expect, int update) {
        return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
    }


From here you can see that the atomic class's methods delegate to methods of the Unsafe class:

public final native boolean compareAndSwapInt(Object var1, long var2, int var4, int var5);


Following the call chain all the way down leads into the JVM's C/C++ code; the final implementation is the CMPXCHG instruction, i.e. CAS modifies the variable value at the hardware level:

lock cmpxchg

Hardware: while the following instruction executes, the lock prefix asserts a north-bridge signal rather than locking the entire bus.

Spin locks

When a thread tries to acquire a lock that is already held by another thread, it loops and repeatedly checks whether the lock can now be acquired; the thread does not leave the loop until it has acquired the lock. This is spinning.
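A minimal spin-lock sketch built on AtomicReference, purely to illustrate the looping behaviour described above (this is not how the JVM implements its internal spinning):

import java.util.concurrent.atomic.AtomicReference;

class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-loop until the CAS succeeds: the thread never blocks,
        // it just keeps retrying until the lock becomes free.
        while (!owner.compareAndSet(null, current)) {
            // spin
        }
    }

    public void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }
}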

The meaning and usage of spin

Blocking and waking a thread requires the operating system to switch CPU state, which takes time; if the synchronized block has simple logic and a short execution time, spinning briefly is often cheaper than blocking. An adaptive spin lock assumes that different threads hold the same lock object for roughly the same amount of time and that contention tends to be stable, so the next spin duration can be adjusted based on the duration and outcome of the previous spin on that lock. Since JDK 1.7 the spin-lock parameters have been removed: the virtual machine no longer lets users configure spinning, spinning is always enabled, and the number of spins is adjusted automatically by the VM.

Lock escalation

Lock Upgrade Process

There are many optimizations for locks in JDK6, including the lock upgrade process.

  1. Biased lock: only one thread ever enters the critical section; suitable when only one thread accesses the synchronized block.
  2. Lightweight lock: threads alternate or contention is light; suitable when response time matters and the synchronized block executes quickly.
  3. Heavyweight lock: many threads compete; suitable when throughput matters and the synchronized block takes a long time to execute.

How object locking works in the JVM

What does an object's memory layout look like? Object header: the hash code, GC age, lock status flag, biased-lock (thread) ID and bias timestamp, plus the array length for array objects, and so on. Instance data: the object's member fields. Alignment padding: filler so the object size is a multiple of 8 bytes.

Using the JOL plugin, we can compare an object's layout before and after locking:

<dependency>
    <groupId>org.openjdk.jol</groupId>
    <artifactId>jol-core</artifactId>
    <version>0.9</version>
</dependency>
public static void main(String[] args) {
    Object o = new Object();
    System.out.println(ClassLayout.parseInstance(o).toPrintable());
    System.out.println("===============================================");
    synchronized (o) {
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
    }
}

How are instance objects stored?

Object instances are stored in the heap, class metadata in the metaspace, and object references on the stack.

Lock elimination

import org.openjdk.jol.info.ClassLayout;

/**
 * @author yhd
 * @createtime 2020/9/8 20:40
 */
public class Demo2 {
    public static void main(String[] args) {
        StringBuffer buffer = new StringBuffer();
        buffer.append("a").append("b");
        System.out.println(ClassLayout.parseInstance(buffer).toPrintable());
    }
}

We all know that StringBuffer is thread-safe because its key methods are synchronized, but as the printout shows, the lock is eliminated. Because the buffer reference is used only inside the main method and can never be seen by another thread (it is a local variable, and the stack is thread-private), buffer can never be a shared resource, so the JVM automatically removes the lock inside the StringBuffer object.

Lock coarsening

/**
 * @author yhd
 * @createtime 2020/9/8 20:48
 */
public class Demo3 {
    public static void main(String[] args) {
        int i=0;
        StringBuffer buffer = new StringBuffer();
        while (i<100){
            buffer.append(i);
            i++;
        }
        System.out.println(buffer.toString());
        System.out.println(ClassLayout.parseInstance(buffer).toPrintable());
    }
}


When the JVM detects that a sequence of operations repeatedly locks the same object (the append in the while loop would otherwise lock and unlock 100 times), it coarsens the lock's scope to cover the whole sequence (for example, moving it outside the while loop), so the sequence needs to be locked only once.

AQS

After studying AQS for a whole day and finally finding its entry point, here is my understanding:

The problem: multiple threads operating on shared data

/**
 * @author yhd
 * @createtime 2020/9/8 8:11
 */
public class Demo1 {
    public static int m = 0;

    public static void main(String[] args) throws Exception {
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100; j++) {
                    m++;
                }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        System.out.println(m);
    }
}

There is no doubt that this code has a thread-safety problem, as anyone with a little knowledge of concurrent programming can tell: the printed result is usually less than the expected 10000. So what can we do about it?

Solving it with synchronized

/**
 * @author yhd
 * @createtime 2020/9/8 8:32
 */
public class Demo2 {
    public static int m = 0;

    public static void main(String[] args) throws Exception {
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                synchronized (Demo2.class) {
                    for (int j = 0; j < 100; j++) {
                        m++;
                    }
                }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        System.out.println(m);
    }
}

synchronized (at least as a heavyweight lock) relies on operating-system mutexes, which requires switching between user mode and kernel mode, and that switching costs real time. Another solution:

Solving it with ReentrantLock

/**
 * @author yhd
 * @createtime 2020/9/8 8:41
 */
public class Demo3 {
    public static int m = 0;
    public static Lock lock = new ReentrantLock();

    public static void main(String[] args) throws Exception {
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try {
                    lock.lock();
                    for (int j = 0; j < 100; j++) {
                        m++;
                    }
                } finally {
                    lock.unlock();
                }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        System.out.println(m);
    }
}

So how does this approach work underneath? Let's follow the source code.

    public ReentrantLock() {
        sync = new NonfairSync();
    }


What is sync? Following it further:

static final class NonfairSync extends Sync {
    private static final long serialVersionUID = 7316153563782823691L;

    /**
     * Performs lock. Try immediate barge, backing up to normal
     * acquire on failure.
     */
    final void lock() {
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1);
    }

    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

NonfairSync is actually an inner class of ReentrantLock that extends Sync, and ReentrantLock's methods really just delegate to Sync's. With that in mind, let's look at Sync itself:

abstract static class Sync extends AbstractQueuedSynchronizer


So ReentrantLock is ultimately built on AbstractQueuedSynchronizer, i.e. AQS.

AbstractQueuedSynchronizer overview

Inside this class is an inner class Node

static final class Node {
    static final Node SHARED = new Node(); // marker for shared mode
    volatile Node prev;       // predecessor pointer
    volatile Node next;       // successor pointer
    volatile Thread thread;   // the thread wrapped by this node
    ...
}

// Fields of AbstractQueuedSynchronizer itself:
private transient volatile Node head;  // head node of the queue
private transient volatile Node tail;  // tail node of the queue
private volatile int state;            // 0 = unlocked; 1 after a successful lock; +1 per reentry; back to 0 when fully unlocked
...

You can see that, much like LinkedHashMap, AQS maintains a doubly linked list in which each node wraps a thread.

It maintains a volatile int state (representing the shared resource) and a FIFO thread wait queue (which threads enter when they block while competing for the resource). volatile is the key keyword here; its semantics are not covered in this article. There are three ways to access state:

    getState()
    setState()
    compareAndSetState()


AQS defines two resource-sharing modes: Exclusive (only one thread can run at a time, e.g. ReentrantLock) and Share (multiple threads can run at the same time, e.g. Semaphore / CountDownLatch). Different custom synchronizers compete for the shared resource in different ways, but a custom synchronizer only needs to implement acquiring and releasing the shared-resource state; maintaining the thread wait queue (enqueueing on failure, waking queued threads, and so on) is already handled by AQS at the top level. A custom synchronizer mainly implements the following methods:

  • isHeldExclusively(): whether the current thread holds the resource exclusively. Only needs to be implemented if Condition is used.
  • tryAcquire(int): exclusive mode. Tries to acquire the resource; returns true on success, false on failure.
  • tryRelease(int): exclusive mode. Tries to release the resource; returns true on success, false on failure.
  • tryAcquireShared(int): shared mode. Tries to acquire the resource; a negative number means failure, 0 means success with no resources left, a positive number means success with resources remaining.
  • tryReleaseShared(int): shared mode. Tries to release the resource; returns true if subsequent waiting nodes may be woken up after the release, false otherwise.

In the case of ReentrantLock, state is initialized to 0, meaning unlocked. When thread A calls lock(), tryAcquire() is invoked to take exclusive ownership and state becomes 1. From then on, other threads fail in tryAcquire() until thread A calls unlock() and state returns to 0. Of course, thread A may acquire the lock again before releasing it (state keeps accumulating); that is reentrancy. Just be careful to release exactly as many times as you acquired, so state can return to 0.
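A small sketch of that reentrancy: the same thread locks twice and must unlock twice before state returns to 0 (the class is illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) {
        lock.lock();                                     // state: 0 -> 1
        try {
            lock.lock();                                 // same thread re-enters, state: 1 -> 2
            try {
                System.out.println(lock.getHoldCount()); // prints 2
            } finally {
                lock.unlock();                           // state: 2 -> 1
            }
        } finally {
            lock.unlock();                               // state: 1 -> 0, lock fully released
        }
    }
}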

In the CountDownLatch example, a task is divided among N child threads and state is initialized to N (N must match the number of count-downs). The N child threads run in parallel, each calling countDown() once, which decrements state by 1 via CAS. After all child threads have finished (state = 0), the waiting caller thread is unpark()ed and returns from await() to continue the remaining work.
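A usage sketch of that CountDownLatch flow, with N = 3 and made-up task bodies:

import java.util.concurrent.CountDownLatch;

public class CountDownLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int n = 3;
        CountDownLatch latch = new CountDownLatch(n); // state initialized to N

        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " finished");
                latch.countDown();                    // state decreases by 1
            }).start();
        }

        latch.await();  // the caller parks until state reaches 0, then continues
        System.out.println("all child threads done");
    }
}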

In general, a custom synchronizer is either exclusive or shared, so it only needs to implement either tryAcquire/tryRelease or tryAcquireShared/tryReleaseShared. AQS does, however, support synchronizers that are both exclusive and shared, such as ReentrantReadWriteLock.
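A brief usage sketch of ReentrantReadWriteLock, whose shared read lock and exclusive write lock are backed by the same AQS state (the SimpleCache class is illustrative):

import java.util.concurrent.locks.ReentrantReadWriteLock;

class SimpleCache {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    // Many threads may hold the read lock at the same time (Share mode).
    public int read() {
        rwLock.readLock().lock();
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // Only one thread may hold the write lock (Exclusive mode).
    public void write(int newValue) {
        rwLock.writeLock().lock();
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}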

Source code interpretation and execution process analysis

Exclusive lock

==acquire(int)== This method is the top-level entry through which a thread acquires the shared resource in exclusive mode. If it gets the resource, the thread returns directly; otherwise it waits in the queue until it does, ignoring interruptions along the way. This is the semantics of lock(), though not only of lock(). Once it has the resource, the thread can execute its critical-section code.

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

The function flow is as follows:

  1. tryAcquire() tries to acquire the resource directly and returns immediately on success.
  2. addWaiter() adds the current thread to the tail of the wait queue, marked as exclusive mode.
  3. acquireQueued() makes the thread wait in the queue until it gets the resource, then returns; the return value is true if the thread was interrupted at any point during the wait, false otherwise.
  4. If the thread was interrupted while waiting, it does not respond immediately; only after it has obtained the resource does selfInterrupt() re-assert the interrupt.

==tryAcquire(int)== This method tries to acquire the exclusive resource, returning true on success and false otherwise. This is the semantics of tryLock(), though again not only of tryLock().

    protected boolean tryAcquire(int arg) {
        throw new UnsupportedOperationException();
    }


At first I wondered why it simply throws an exception, but then I realized: this is a method meant to be overridden. AQS only defines the interface here; acquiring the concrete resource is left to the custom synchronizer. ==addWaiter(Node)== This method adds the current thread to the tail of the queue and returns the node that now wraps it.

private Node addWaiter(Node mode) {
    // Construct a node for the current thread with the given mode
    Node node = new Node(Thread.currentThread(), mode);
    // Try the fast path of enq; back up to full enq on failure
    Node pred = tail;
    if (pred != null) {
        node.prev = pred;
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    enq(node);
    return node;
}

Node wraps each thread that tries to enter the synchronized code: it holds the thread itself plus its state, such as whether it is blocked, waiting to be woken, or cancelled. The waitStatus field records the waiting state of the node and has four named values, CANCELLED, SIGNAL, CONDITION and PROPAGATE, plus the default 0.

CANCELLED: value 1. A thread waiting in the synchronization queue has timed out or been interrupted, so its node is cancelled and will be removed from the queue; once a node enters this state it never changes again.

SIGNAL: value -1. Indicates that this node's successor needs to be woken up: when this node's thread releases the synchronization lock or is cancelled, it must notify the node behind it. As soon as the predecessor releases the lock, the successor it marked with SIGNAL is unparked so its thread can run.

CONDITION: value -2, related to Condition. The node sits in a condition wait queue and its thread is waiting on a Condition. When another thread calls that Condition's signal() method, the node is moved from the condition queue to the synchronization queue to wait for the lock.

PROPAGATE: value -3, related to shared mode; it marks that a shared release should be propagated to subsequent nodes.

0: value 0, the initial state of a newly created node.

AQS uses waitStatus > 0 to mean the node has been cancelled and waitStatus < 0 to mean the node is in a valid waiting state.

== enq(Node)== This method is used to add Node to the end of the queue.

private Node enq(final Node node) {
    for (;;) {                       // CAS spin
        Node t = tail;
        if (t == null) {             // queue empty: must initialize
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {                     // normal case: append at the tail
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                return t;
            }
        }
    }
}

enq() is another CAS spin over volatile fields. ==acquireQueued(Node, int)== At this point the thread has failed tryAcquire() and has been placed at the tail of the wait queue by addWaiter(). What it should do next is obvious: go into a waiting state and rest until some other thread releases the resource, then wake up, try to grab the resource, and carry on. That is exactly what acquireQueued() does: wait in the queue for its turn (doing nothing else in the meantime) and return once it has acquired the resource. This function is critical, so on to the source code:

final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;   // whether we were interrupted while waiting
        for (;;) {
            final Node p = node.predecessor();
            // If the predecessor is head, this node is next in line and may try to
            // grab the resource (it may have just been woken, or interrupted).
            if (p == head && tryAcquire(arg)) {
                setHead(node);         // got the resource: head now points at this node
                p.next = null;         // help GC
                failed = false;
                return interrupted;
            }
            // Otherwise find a safe point and park until unpark()ed
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

==shouldParkAfterFailedAcquire(Node, Node)== This method checks the predecessor's status to decide whether the current thread can safely go to sleep.

private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        // The predecessor has promised to wake us up, so we can park safely.
        return true;
    if (ws > 0) {
        /*
         * The predecessor has been cancelled: keep walking backwards until we
         * find a node in a normal state and queue up behind it. The cancelled
         * nodes end up in an unreferenced chain and will be garbage collected.
         */
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        // The predecessor is fine: set its status to SIGNAL so it notifies us
        // when it is done. This CAS may fail if it has just released the lock.
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}

In other words, if the predecessor's status is not yet SIGNAL, the thread cannot rest with peace of mind; it first has to secure a wake-up promise, and in the meantime it tries again to see whether it is its turn. ==parkAndCheckInterrupt()== Once the thread has found a safe resting point, it can actually rest: this method parks the thread so it really waits.

private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this);          // park here until unpark() or interrupt
    return Thread.interrupted();     // report (and clear) the interrupt flag
}

Thread.interrupted() clears the interrupt flag of the current thread.

==The entire process of obtaining the lock is as follows:==

  1. Call tryAcquire() to try to acquire the resource directly; return at once if it succeeds.
  2. If that fails, join the tail of the wait queue and mark the node as exclusive mode.
  3. Rest in the queue, trying to acquire the resource whenever it is this thread's turn (after being unpark()ed), and return once the resource is obtained; the return value is true if the thread was interrupted at any point during the wait, false otherwise.
  4. A thread interrupted while waiting does not respond immediately; only after it has obtained the resource does selfInterrupt() re-assert the interrupt.

This is the process behind reentrantLock.lock().

![](https://img-blog.csdnimg.cn/20200909091109420.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NTU5NjAyMg==,size_16,color_FFFFFF,t_70#pic_center)

==release(int)== This method is the top-level entry through which a thread releases the shared resource in exclusive mode. It releases the specified amount of the resource, and if the resource is completely released (state = 0) it wakes up a waiting thread in the queue so that it can acquire the resource.

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);      // wake up the next thread in the queue
        return true;
    }
    return false;
}

release() relies on tryRelease()'s return value to decide whether the thread has completely released the resource, so custom synchronizers must keep this in mind when implementing tryRelease(). ==tryRelease(int)== This method tries to release the specified amount of the resource.

    protected boolean tryRelease(int arg) {
        throw new UnsupportedOperationException();
    }


Again, this is something the AQS subclass must implement itself. ==unparkSuccessor(Node)== This method wakes up the next waiting thread in the queue.

private void unparkSuccessor(Node node) {
    // node is the node of the current (releasing) thread
    int ws = node.waitStatus;
    if (ws < 0)
        // Reset the current node's status to 0; it is OK if this fails.
        compareAndSetWaitStatus(node, ws, 0);

    Node s = node.next;   // the successor that should be woken up
    if (s == null || s.waitStatus > 0) {   // null or cancelled
        s = null;
        // Search backwards from the tail for the last valid node after this one
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    if (s != null)
        LockSupport.unpark(s.thread);      // wake it up
}

unpark() wakes up the first non-cancelled thread in the wait queue. The lock-release process is:

  1. Release the specified amount of the resource and record the result.
  2. If the release succeeds, wake up the first non-cancelled thread in the wait queue and return true.
  3. If it fails, return false.

Shared lock

    public final void acquireShared(int arg) {
        if (tryAcquireShared(arg) < 0)
            doAcquireShared(arg);
    }


==doAcquireShared(int)== This method puts the current thread at the tail of the wait queue and lets it rest until another thread releases resources and wakes it up, returning once it has successfully acquired the required amount of the resource.

private void doAcquireShared(int arg) {
    final Node node = addWaiter(Node.SHARED);   // enqueue in shared mode
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head) {
                int r = tryAcquireShared(arg);
                if (r >= 0) {
                    // Point head at this node and, if resources remain,
                    // propagate the wake-up to later threads.
                    setHeadAndPropagate(node, r);
                    p.next = null;               // help GC
                    if (interrupted)
                        selfInterrupt();
                    failed = false;
                    return;
                }
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

Compared with exclusive mode, a thread only attempts to acquire the resource when it is head.next (the "second in line"), and after succeeding it wakes up the teammates behind it if resources remain. Here is the interesting case: suppose the head releases 5 resources, the second in line needs 6, the third needs 1 and the fourth needs 2. The head wakes up the second, which sees there are not enough resources for it. Does it pass them on to the third? The answer is no: the second simply park()s again, waiting for other threads to release more, and does not wake the third or fourth. In exclusive mode this would never matter, since only one thread runs at a time; in shared mode several threads can run at once, yet here the third and fourth, which need little, are held up by the second's large demand. That is the trade-off: AQS guarantees wake-ups in strict queue order (preserving fairness at the cost of some concurrency). ==setHeadAndPropagate(Node, int)==

private void setHeadAndPropagate(Node node, int propagate) {
    Node h = head;   // record old head for the check below
    setHead(node);
    if (propagate > 0 || h == null || h.waitStatus < 0 ||
        (h = head) == null || h.waitStatus < 0) {
        Node s = node.next;
        if (s == null || s.isShared())
            doReleaseShared();
    }
}

== doReleaseShared()==

private void doReleaseShared() {
    for (;;) {
        Node h = head;
        if (h != null && h != tail) {
            int ws = h.waitStatus;
            if (ws == Node.SIGNAL) {
                if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                    continue;            // loop to recheck cases
                unparkSuccessor(h);
            }
            else if (ws == 0 &&
                     !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                continue;                // loop on failed CAS
        }
        if (h == head)                   // loop if head changed
            break;
    }
}

Custom lock

As noted earlier, a custom synchronizer only needs to implement acquiring and releasing the shared-resource state (tryAcquire/tryRelease for exclusive mode, tryAcquireShared/tryReleaseShared for shared mode, plus isHeldExclusively() if Condition is used); AQS already handles maintenance of the wait queue.

Customize a simple lock

/**
 * @author yhd
 * @createtime 2020/9/8 9:44
 */
public class MLock implements Lock {

    private AbstractQueuedSynchronizer sync = new Sync();

    @Override
    public void lock() {
        sync.acquire(1);
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
    }

    @Override
    public boolean tryLock() {
        return false;
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        return false;
    }

    @Override
    public void unlock() {
        sync.release(1);
    }

    @Override
    public Condition newCondition() {
        return null;
    }

    // A custom exclusive lock
    private class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            assert arg == 1;
            if (!isHeldExclusively())
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getExclusiveOwnerThread() == Thread.currentThread();
        }
    }
}

The Demo test:

/**
 * @author yhd
 * @createtime 2020/9/8 9:36
 */
public class Demo6 {
    public static int m = 0;
    public static Lock lock = new MLock();

    public static void main(String[] args) throws Exception {
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try {
                    lock.lock();
                    for (int j = 0; j < 100; j++) {
                        m++;
                    }
                } finally {
                    lock.unlock();
                }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        System.out.println(m);
    }
}

Finally

Welcome to follow my official account: "bright future". With the September/October job-hopping interview season here, I have put together more than 1,000 BATZ interview questions, including over 500 PDF pages of Java interview questions, to help you realize your dream. New articles and the collected material will be posted there. Thanks for reading!