How do deadlocks occur

  1. Mutual exclusion: a resource can be held by only one process at a time. If another process requests the resource, the requester must wait until the holder releases it.
  2. Hold and wait: a process that already holds at least one resource requests a new resource that is held by another process. The requesting process blocks, but does not release the resources it already holds.
  3. No preemption: resources a process has acquired cannot be forcibly taken away from it; the process releases them only after it has finished using them.
  4. Circular wait: when a deadlock occurs, there must be a circular process-resource chain, i.e. in the process set (P0, P1, P2, ..., Pn), P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., and Pn is waiting for a resource held by P0.

Related reading

  1. Common Java thread pools
  2. Android HandlerThread

How do I avoid deadlocks

  1. Lock ordering: always acquire locks in a fixed global order
  2. Lock timeout: give up after waiting a bounded time for a lock
  3. Deadlock detection: detect deadlocks at runtime and recover
public class T2 {
    public String str = "abc";

    static class R1 implements Runnable {
        public String str;
        public Object o1;
        public Object o2;

        public R1(String str, Object ob1, Object ob2) {
            this.str = str;
            this.o1 = ob1;
            this.o2 = ob2;
        }

        @Override
        public void run() {
            if (str.contains("a")) {
                synchronized (o1) { // thread 1: lock o1 first ...
                    try {
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    synchronized (o2) { // ... then wait for o2
                        System.out.println("aaa-bbb");
                    }
                }
            }
            if (str.contains("b")) {
                synchronized (o2) { // thread 2: lock o2 first ...
                    try {
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    synchronized (o1) { // ... then wait for o1: deadlock
                        System.out.println("bbb-aaa");
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        Object ob1 = new Object();
        Object ob2 = new Object();
        R1 r1 = new R1("a", ob1, ob2);
        Thread thread1 = new Thread(r1);
        thread1.start();

        R1 r2 = new R1("b", ob1, ob2);
        Thread thread2 = new Thread(r2);
        thread2.start();
    }
}
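To contrast with the deadlock demo, here is a minimal sketch of the first avoidance strategy above, lock ordering; the `OrderedLock` class and `runBoth` method are illustrative names, not from the original. Because both threads take `o1` before `o2`, the circular-wait condition can never hold:

```java
import java.util.concurrent.CountDownLatch;

public class OrderedLock {
    static final Object o1 = new Object();
    static final Object o2 = new Object();

    // Both threads take o1 first and o2 second, so a circular wait cannot form.
    static void doWork(String name, CountDownLatch done) {
        synchronized (o1) {
            synchronized (o2) {
                System.out.println(name + " acquired both locks");
            }
        }
        done.countDown();
    }

    // Returns true once both workers have finished; with a fixed lock order
    // this always terminates, where the mixed-order version above can hang.
    public static boolean runBoth() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(2);
        new Thread(() -> doWork("t1", done)).start();
        new Thread(() -> doWork("t2", done)).start();
        return done.await(5, java.util.concurrent.TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBoth() ? "no deadlock" : "timed out");
    }
}
```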

Can threads make a program run faster?

Multithreading does not make an individual task run faster; it improves the program's overall efficiency by keeping CPU utilization higher.

1. Lock classification

  • Fair lock / unfair lock
  • Reentrant lock
  • Exclusive lock / shared lock
  • Mutex / read-write lock
  • Optimistic lock / pessimistic lock
  • Segmented lock
  • Biased lock / lightweight lock / heavyweight lock
  • Spin lock

Fair lock/unfair lock

  1. A fair lock is one in which threads acquire the lock in the order they requested it. Fair locks do not cause starvation: as long as a thread waits in the queue, it will eventually get its turn to take the lock.
  2. An unfair lock is one in which threads may acquire the lock in a different order than they requested it. This can lead to priority inversion or starvation.
  3. For ReentrantLock, the constructor specifies whether the lock is fair; the default is unfair. The advantage of an unfair lock is higher throughput than a fair lock. Synchronized is also an unfair lock; since it does not implement thread scheduling via AQS the way ReentrantLock does, there is no way to make it fair.

ReentrantLock example

Create a fair lock with ReentrantLock (the default is unfair):

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
final ReentrantLock lock = new ReentrantLock();            // unfair by default
final ReentrantLock unfairLock = new ReentrantLock(false); // explicitly unfair
final ReentrantLock fairLock = new ReentrantLock(true);    // fair

Reentrant lock

A reentrant lock, also known as a recursive lock, is one where a thread that has acquired the lock in an outer method automatically acquires it again in an inner method.

  1. ReentrantLock, as its name (Re-entrant Lock) suggests, is a reentrant lock.
  2. Synchronized is also a reentrant lock. One benefit of reentrancy is that it avoids deadlocks to some extent: a thread cannot deadlock against itself on a lock it already holds.

Key implementation points of reentrant locks

  1. When the lock is requested again after being acquired: check whether the current thread is the one that already holds the lock; if so, let it pass, otherwise block.
  2. If a thread has acquired the lock n times (n >= 1), it must release it n times for the release to take effect; until then the lock remains held.
synchronized void setA() throws Exception {
    Thread.sleep(1000);
    setB(); // re-enters the lock the current thread already holds
}

synchronized void setB() throws Exception {
    Thread.sleep(1000);
}

The code above demonstrates reentrancy. Without it, setB could not be entered by the current thread while it holds the lock in setA, causing a deadlock.

Take a look at ReentrantLock first

1. Interrupt response

With a synchronized block, a thread either acquires the lock and runs, or waits indefinitely. The interrupt response of ReentrantLock avoids this: a thread waiting to acquire the lock can be "told" that it no longer needs to wait and can stop.
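A minimal sketch of this interrupt response using ReentrantLock.lockInterruptibly(); the `InterruptDemo` class and `interruptWaiter` method names are illustrative:

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptDemo {
    static final ReentrantLock lock = new ReentrantLock();

    // Starts a thread that waits for the (already held) lock interruptibly,
    // then interrupts it; returns true if the waiter gave up via interruption.
    public static boolean interruptWaiter() throws InterruptedException {
        final boolean[] interrupted = {false};
        lock.lock(); // this thread holds the lock
        try {
            Thread waiter = new Thread(() -> {
                try {
                    lock.lockInterruptibly(); // blocks, but responds to interrupt
                    lock.unlock();
                } catch (InterruptedException e) {
                    interrupted[0] = true; // "told" it no longer needs to wait
                }
            });
            waiter.start();
            Thread.sleep(500);  // let the waiter block on the lock
            waiter.interrupt(); // wake it out of the lock wait
            waiter.join();
        } finally {
            lock.unlock();
        }
        return interrupted[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(interruptWaiter()); // true
    }
}
```

A thread blocked in synchronized has no equivalent escape hatch; it stays parked until the monitor is released.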

2. Wait time for lock application

You can use tryLock() or tryLock(long timeout, TimeUnit unit) to perform a time-limited lock wait.

  1. The former returns immediately: true if the lock was acquired, false if it is held by another thread. The current thread never waits, so no deadlock can arise.
  2. The latter takes a timeout: it tries to acquire the lock within the given period and returns false if it has not been acquired by then.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockTest implements Runnable {
    public static ReentrantLock lock = new ReentrantLock();

    @Override
    public void run() {
        try {
            if (lock.tryLock(1, TimeUnit.SECONDS)) { // wait up to 1 second
                Thread.sleep(2000);                  // hold the lock for 2 seconds
            } else {
                System.err.println(Thread.currentThread().getName() + " failed to acquire the lock!");
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (lock.isHeldByCurrentThread()) lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TryLockTest test = new TryLockTest();
        Thread t1 = new Thread(test); t1.setName("Thread 1");
        Thread t2 = new Thread(test); t2.setName("Thread 2");
        t1.start();
        t2.start();
    }
}

Exclusive lock / shared lock

  1. An exclusive lock can be held by only one thread at a time.
  2. A shared lock can be held by multiple threads at the same time.
  3. In Java, ReentrantLock is an exclusive lock. Another Lock implementation, ReadWriteLock, has a read lock that is shared and a write lock that is exclusive.
  4. The shared read lock makes concurrent reads very efficient; read-write, write-read, and write-write access are mutually exclusive.
  5. Both exclusive and shared locks are implemented on top of AQS; implementing different methods yields exclusive or shared behavior.
  6. Synchronized is an exclusive lock.
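The read-write behavior described in points 3 and 4 can be sketched with ReentrantReadWriteLock; this illustrative `Cache` class (not from the original) guards a plain map with the lock pair:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Cache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    // Shared lock: many readers may hold it at once.
    public String get(String key) {
        rw.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    // Exclusive lock: a writer excludes all readers and other writers.
    public void put(String key, String value) {
        rw.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```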

Mutex/Shared lock (read lock)/exclusive lock (write lock)

Exclusive/shared lock is a broad concept; mutex and read-write lock are concrete implementations.

  1. In Java, the mutex implementation is ReentrantLock.
  2. In Java, the read-write lock implementation is ReadWriteLock.
  3. Shared lock (S lock): used for operations that do not change data (read-only operations), such as SELECT statements. If transaction T holds a shared lock on data A, other transactions can only take a shared lock on A, not an exclusive lock; a transaction granted a shared lock can read the data but not modify it.
  4. Exclusive lock (X lock): used for data modification, such as INSERT, UPDATE, or DELETE, ensuring the same resource is not updated by multiple transactions simultaneously. If transaction T places an exclusive lock on data A, no other transaction can place any kind of lock on A; a transaction granted an exclusive lock can both read and modify the data. Without such locking, concurrent database operations can leave data inconsistent (write conflicts).

Optimistic lock/pessimistic lock

Optimistic and pessimistic locks are not specific lock types; they describe two attitudes toward concurrent synchronization.

  1. A pessimistic lock assumes that concurrent operations on the same data will modify it (even when they do not), so it always locks before operating. The pessimistic view: unlocked concurrent operations are bound to cause problems.
  2. An optimistic lock assumes that concurrent operations on the same data will usually not conflict, so it does not lock; when updating, it tries the update and retries on failure until it succeeds. The optimistic view: unlocked concurrency is fine.
  3. In Java, pessimistic locking uses the various lock classes. Optimistic locking is lock-free programming, often using the CAS algorithm; a typical example is the atomic classes, which update via spinning CAS operations.

Two types of lock usage scenarios

From the introduction above, each kind of lock has its strengths and weaknesses; neither is strictly better. Optimistic locking suits read-heavy scenarios with few writes, where conflicts are genuinely rare: it saves the locking overhead and raises overall throughput. In write-heavy scenarios, conflicts are frequent and the constant retries by the application degrade performance, so pessimistic locking is the better fit there.

There are two common implementations of optimistic locking

1. Version number mechanism

A version field is added to the data table to record how many times the row has been modified; each modification increments the version by one. When thread A reads the data, it also reads the version. On commit, it applies the update only if the version it read still matches the version in the database; otherwise it retries until the update succeeds.
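A sketch of the version-number scheme in plain Java; a real database would express the conditional commit as an `UPDATE t SET value = ?, version = version + 1 WHERE id = ? AND version = ?` statement, and the `VersionedValue` class here is purely illustrative:

```java
// Illustrative in-memory stand-in for a table row with a version column.
public class VersionedValue {
    private int value;
    private long version;

    public synchronized long readVersion() { return version; }
    public synchronized int read() { return value; }

    // Commit succeeds only if nobody updated since we read expectedVersion.
    public synchronized boolean tryUpdate(long expectedVersion, int newValue) {
        if (version != expectedVersion) return false; // stale read: caller retries
        value = newValue;
        version++;
        return true;
    }

    // Optimistic update loop: read the version, try to commit, retry on conflict.
    public void update(int newValue) {
        while (true) {
            long v = readVersion();
            if (tryUpdate(v, newValue)) return;
        }
    }
}
```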

2. CAS algorithm

Compare and swap (CAS) is a well-known lock-free algorithm. Lock-free programming synchronizes variables between threads without using locks; because no thread blocks, it is also called non-blocking synchronization. The CAS algorithm involves three operands:

  1. Memory value V that needs to be read or written
  2. The value A for comparison
  3. The new value B to be written
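The three operands can be seen directly with AtomicInteger.compareAndSet; the `CasDemo` wrapper here is an illustrative name:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private final AtomicInteger v; // memory value V

    public CasDemo(int initial) { v = new AtomicInteger(initial); }

    // One CAS step: succeeds only if V still equals the compared value A,
    // in which case the new value B is written.
    public boolean cas(int expectedA, int newB) {
        return v.compareAndSet(expectedA, newB);
    }

    public int get() { return v.get(); }

    public static void main(String[] args) {
        CasDemo d = new CasDemo(10);
        System.out.println(d.cas(10, 11)); // true: V was 10, now 11
        System.out.println(d.cas(10, 12)); // false: V is 11, so A = 10 is stale
    }
}
```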

Disadvantages of optimistic locking

1. The problem of ABA

If a variable was value A when first read and is still A when we are ready to assign, can we conclude that no other thread modified it? No: in the meantime its value could have been changed to something else and then back to A, and the CAS operation would conclude it was never changed. This is known as the "ABA" problem of CAS operations.
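The JDK's usual answer to ABA is AtomicStampedReference, which pairs the value with a stamp (version number); this illustrative sketch shows a stale CAS failing even though the value is back to A:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // Value plus a stamp: even if the value goes A -> B -> A,
    // the stamp keeps increasing, so the intermediate change is not hidden.
    public static boolean staleCasSucceeds() {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int stamp = ref.getStamp();        // read value "A" at stamp 0

        ref.compareAndSet("A", "B", 0, 1); // another thread: A -> B
        ref.compareAndSet("B", "A", 1, 2); // ... and back again: B -> A

        // A plain CAS on the value alone would succeed here; the stamp exposes it.
        return ref.compareAndSet("A", "C", stamp, stamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(staleCasSucceeds()); // false: the stamp is now 2, not 0
    }
}
```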

2. Long cycle time and high cost

Spin CAS (looping until success) imposes a heavy CPU cost if it keeps failing for a long time. If the JVM can use the pause instruction provided by the processor, efficiency improves somewhat. pause does two things: it slightly delays the pipelined instructions so the CPU does not consume excessive execution resources, and it avoids the CPU pipeline flush caused by memory-order violations when exiting the loop, improving CPU execution efficiency.

3. Atomic operations are guaranteed for only one shared variable

CAS works only on a single shared variable; it does not apply when an operation spans multiple shared variables. Since JDK 1.5, however, the AtomicReference class guarantees atomicity of reference updates, so multiple variables can be packed into one object and updated with a single CAS. The alternatives are therefore to use a lock, or to use AtomicReference to merge several shared variables into one.
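A sketch of the "merge several variables into one object" approach with AtomicReference; the `Pair` and `PairCas` names are illustrative. Note that compareAndSet compares references with `==`, so the CAS must use the exact object that was read, not an equal-valued copy:

```java
import java.util.concurrent.atomic.AtomicReference;

public class PairCas {
    // Two logically related variables bundled into one immutable object,
    // so a single CAS updates both atomically.
    static final class Pair {
        final int x, y;
        Pair(int x, int y) { this.x = x; this.y = y; }
    }

    private final AtomicReference<Pair> ref = new AtomicReference<>(new Pair(0, 0));

    // CAS against the exact Pair object we read; on success both fields
    // change together, on failure another thread got there first.
    public boolean increment() {
        Pair current = ref.get();
        Pair next = new Pair(current.x + 1, current.y + 1);
        return ref.compareAndSet(current, next);
    }

    public int x() { return ref.get().x; }
    public int y() { return ref.get().y; }
}
```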

Segmented lock

Segmented locking is a lock design, not a specific lock. ConcurrentHashMap achieves efficient concurrent operation through segmented locking.

  1. Take ConcurrentHashMap (JDK7) to illustrate the meaning and design of segmented locking. Its segment lock is called Segment, and each segment is structured like a HashMap: an array of entries, where each element is the head of a linked list. At the same time, Segment extends ReentrantLock, so each segment is itself a lock.
  2. When putting an element, instead of locking the whole map, the hashCode first determines which segment the element belongs to, and only that segment is locked. When multiple threads put elements into different segments, the inserts are truly parallel.
  3. However, to obtain global information about the map (for example its size), all segment locks must be acquired.
  4. Segmented locking refines the granularity of locking: when an operation does not need to update the whole array, only one part of it is locked.
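A minimal sketch in the spirit of the Segment design (not ConcurrentHashMap's actual code; the `StripedCounter` class is illustrative): one lock per stripe, chosen by hash, so updates to different stripes run in parallel, while a global total must take every stripe lock, mirroring point 3 above:

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {
    private final ReentrantLock[] locks;
    private final long[] counts;

    public StripedCounter(int stripes) {
        locks = new ReentrantLock[stripes];
        counts = new long[stripes];
        for (int i = 0; i < stripes; i++) locks[i] = new ReentrantLock();
    }

    private int stripeFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % locks.length;
    }

    public void add(Object key, long delta) {
        int i = stripeFor(key);
        locks[i].lock(); // lock only one stripe, not the whole structure
        try {
            counts[i] += delta;
        } finally {
            locks[i].unlock();
        }
    }

    // A global view must acquire every stripe lock.
    public long total() {
        long sum = 0;
        for (int i = 0; i < locks.length; i++) {
            locks[i].lock();
            try {
                sum += counts[i];
            } finally {
                locks[i].unlock();
            }
        }
        return sum;
    }
}
```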

Biased / lightweight / heavyweight locks

These three refer to lock states, and they are specific to synchronized. JDK 1.6 made synchronized efficient by introducing a lock-escalation mechanism. The lock state is recorded in the mark word of the object header.

  1. A biased lock: when a synchronized block is only ever accessed by one thread, that thread acquires the lock automatically, reducing the cost of acquisition.
  2. A lightweight lock: when a biased lock is accessed by a second thread, it is upgraded to a lightweight lock; the contending thread tries to acquire it by spinning rather than blocking, which improves performance.
  3. A heavyweight lock: if spinning on a lightweight lock fails a certain number of times, the spinning thread blocks and the lock inflates to a heavyweight lock. A heavyweight lock blocks every other thread that requests it, degrading performance.

Synchronized (heavyweight lock)

1. The role of Synchronized

  1. Applied to an instance method, it locks the object instance (this).
  2. Applied to a static method, it locks the Class instance. Since the class data lives in the permanent generation (metaspace since JDK 1.8), which is shared globally, a static synchronized method is effectively a global lock for that class: all threads calling the method contend for it.
  3. Applied to an object instance (a synchronized block), it locks every code block that synchronizes on that same object.

2. Implementation of Synchronized
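The original leaves this section empty. As a brief sketch: a synchronized block compiles to a monitorenter/monitorexit pair on the lock object (visible with `javap -c`), while a synchronized method is instead marked with the ACC_SYNCHRONIZED flag; the `SyncDemo` class below is illustrative:

```java
public class SyncDemo {
    private final Object lock = new Object();
    private int count;

    public void inc() {
        synchronized (lock) { // compiles to monitorenter on `lock`
            count++;
        }                     // compiles to monitorexit (plus a second one
                              // on the exception path), releasing the monitor
    }

    public int count() {
        synchronized (lock) {
            return count;
        }
    }
}
```

Running `javap -c SyncDemo` on the compiled class shows the monitorenter/monitorexit instructions in `inc`.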

Spin lock

In Java, a spin lock means that a thread trying to acquire the lock does not block immediately but attempts the acquisition in a loop. The advantage is avoiding the cost of a thread context switch; the disadvantage is that the loop consumes CPU.

1. Time threshold of the spin lock

A spin lock keeps the CPU busy until the lock is acquired. But how long should a thread spin? If the spin time is too long, many spinning threads occupy CPU resources and hurt overall system performance, so choosing the spin duration is critical.

  • JDK 1.6 introduced adaptive spinning: the spin time for a given lock is not fixed, but is decided by the previous spin time on that lock and the state of its owner. The cost of a thread context switch is generally taken as the reference time, and the JVM also adapts to the current CPU load:
    1. Spin if the average load is less than the number of CPUs.
    2. If more than (CPUs / 2) threads are already spinning, later threads block instead.
    3. If a spinning thread sees the lock's owner has changed, it extends the spin time (spin count) or blocks.
    4. Stop spinning if the CPU is in power-saving mode.
    5. The worst-case spin time is the CPU's memory latency (the gap between CPU A storing a value and CPU B learning of it).
    6. Differences in thread priority are partly given up while spinning.

Advantages of spin locks

  1. A spin lock does not switch thread state; the thread stays in user mode and remains active. It never enters the blocked state, avoiding unnecessary context switches, so execution is fast.
  2. When a non-spin lock cannot be acquired, the thread blocks and enters kernel mode; when the lock later becomes available, the thread must be restored from kernel mode, requiring a thread context switch. (A blocked thread enters the kernel's (e.g. Linux) scheduling state, causing the system to switch back and forth between user mode and kernel mode, which seriously hurts lock performance.)

Spin lock example

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private AtomicReference<Thread> cas = new AtomicReference<Thread>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin with CAS until the owner slot is free
        while (!cas.compareAndSet(null, current)) {
            // do nothing
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        cas.compareAndSet(current, null);
    }
}

If thread A holds the lock and another thread B tries to acquire it, the CAS fails, so B stays in the while loop, repeatedly re-testing the CAS, until thread A calls unlock to release the lock.

Reentrant spinlocks and non-reentrant spinlocks

The code above is not reentrant: once a thread has acquired the lock, a second acquisition by the same thread before release cannot succeed, because the CAS condition is not satisfied, so the thread waits in the while loop. If the lock were reentrant, the second acquisition should succeed. Moreover, even if the second acquisition did succeed, the first release would release the lock entirely, which is not reasonable. To make the lock reentrant, we introduce a counter that records how many times the owning thread has acquired it.

import java.util.concurrent.atomic.AtomicReference;

public class ReentrantSpinLock {
    private AtomicReference<Thread> cas = new AtomicReference<Thread>();
    private int count;

    public void lock() {
        Thread current = Thread.currentThread();
        if (current == cas.get()) { // The current thread already holds the lock: bump the count and return
            count++;
            return;
        }
        // Not yet held by this thread: spin with CAS until acquired
        while (!cas.compareAndSet(null, current)) {
            // do nothing
        }
    }

    public void unlock() {
        Thread cur = Thread.currentThread();
        if (cur == cas.get()) {
            if (count > 0) { // Greater than 0 means the lock was acquired multiple times; release one level by decrementing
                count--;
            } else { // count == 0: actually release the lock, so acquisitions and releases balance
                cas.compareAndSet(cur, null);
            }
        }
    }
}
