Multi-threaded concurrency questions are practically a must in any Java interview.

Most of you know Synchronized and Lock, some can also talk about volatile and the java.util.concurrent (JUC) package, and some can explain the principles behind Synchronized and volatile and the data structures commonly used in the concurrent package, such as the principle of ConcurrentHashMap.

This article summarizes the various ways of handling multi-threaded concurrency; I hope it is helpful to you.

One, why does multithreading cause concurrency problems

Why is concurrency a problem when multiple threads are accessing the same variable simultaneously?

  1. The Java memory model specifies that all variables are stored in main memory, and that each thread has its own working memory.
  2. A thread's working memory holds copies of the variables (taken from main memory) that the thread uses. All operations on variables must be performed in working memory; a thread cannot read or write main memory directly.
  3. When a thread accesses a variable, it first copies the variable from main memory into its working memory; writes to the variable are not immediately written back to main memory.
  4. Different threads cannot directly access the variables in each other's working memory. Passing variable values between threads has to go through synchronization between each thread's working memory and main memory (a sketch of the resulting visibility problem follows this list).
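
A minimal sketch of the visibility problem this model can cause (class and field names are illustrative): the worker thread may keep reading a stale copy of running from its own working memory and never notice the write made by the main thread. Whether the loop actually hangs depends on the JIT and hardware, but the memory model does not forbid it.

public class VisibilityDemo {

    private static boolean running = true; // not volatile: the worker may cache a stale copy

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait; without volatile this loop may never observe running = false
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(1000);
        running = false; // write from the main thread
        System.out.println("main set running = false");
    }
}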

Two, the Java Memory Model (JMM)

The Java Memory Model (JMM) governs the process of synchronizing data between working memory (local memory) and main memory. It specifies how and when data is synchronized, as shown in the figure below.

Three, the three elements of concurrency

Atomicity: an operation cannot be interrupted by the CPU and rescheduled part-way through; it either completes entirely or does not execute at all (see the sketch after these three definitions).

Visibility: when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the changed value.

Ordering: the program executes in the order in which the code is written.
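
A minimal sketch of an atomicity violation (names are illustrative): count++ is actually a read-modify-write sequence, so two threads can interleave and lose increments.

public class AtomicityDemo {

    private static int count = 0; // plain int: count++ is not atomic

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // three steps: read, add one, write back
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // usually prints less than 200000 because some increments are lost
        System.out.println(count);
    }
}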

Four, how do we solve concurrency problems? (key)

The following is an analysis of the solutions to concurrency problems in different scenarios.

One, volatile

1.1 volatile features

Volatile guarantees visibility but does not guarantee atomicity.

  1. When a volatile variable is written, the JVM forces the variable in local (working) memory to be flushed to main memory.
  2. The write also invalidates the cached copy in other threads, so they re-read the value from main memory. Volatile writes are therefore visible to other threads in real time.

Instruction reordering is a technique compilers and processors use to reorder instructions to optimize program performance. It is subject to certain rules:

  1. Instructions with data dependencies are not reordered. For example, in a = 1; b = a; the second statement depends on the first, so the two are not reordered.
  2. The single-threaded execution result must not change. For example, among the three operations a = 1; b = 2; c = a + b; the first two may be reordered with each other, but c = a + b is not moved before them, because the result must still be 3.

1.2 Application Scenarios

Volatile can be used when only one thread writes a variable and all other threads read it.
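
A minimal sketch of this one-writer / many-readers scenario (names are illustrative): a single thread flips a volatile shutdown flag, and the worker threads reliably see the change.

public class VolatileFlagDemo {

    // written by one thread, read by the workers; volatile guarantees visibility
    private static volatile boolean shutdownRequested = false;

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            while (!shutdownRequested) {
                // do some work
            }
            System.out.println(Thread.currentThread().getName() + " sees shutdown and exits");
        };
        new Thread(worker, "worker-1").start();
        new Thread(worker, "worker-2").start();

        Thread.sleep(1000);
        shutdownRequested = true; // the single writer
    }
}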

1.3 Why does double-checked locking for a singleton need volatile?

public class TestInstance {

    private static volatile TestInstance mInstance;

    public static TestInstance getInstance() {          // 1
        if (mInstance == null) {                         // 2
            synchronized (TestInstance.class) {          // 3
                if (mInstance == null) {                 // 4
                    mInstance = new TestInstance();      // 5
                }
            }
        }
        return mInstance;
    }
}

If volatile is not used, there is a concurrency problem. When thread A executes new TestInstance() at comment 5, it actually performs three steps:

  1. Allocate memory
  2. Initialize the object
  3. Point mInstance at the allocated memory

If reordering occurs, the execution order may become 1, 3, 2. If thread B arrives at comment 2 right after step 3, it sees that mInstance is not null and returns an object that has not been initialized yet. The volatile keyword is therefore used to forbid this instruction reordering.

1.4 The principle of volatile

At the bottom of the JVM, volatile is implemented with memory barriers, which serve three purposes:

  1. It ensures that instructions before the memory barrier are not reordered past it and instructions after it are not reordered before it; in other words, by the time the barrier instruction executes, all operations in front of it have completed.
  2. It forces changes to the cache to be written to main memory immediately
  3. A write invalidates a cache line in another CPU. After a write, a read from another thread reads from the main memory.

1.5 Limitations of volatile

Volatile only guarantees that writes become visible to other threads; it does not guarantee atomicity, so it cannot solve the problem of multiple threads writing to the same variable at the same time.

Two, Synchronized

2.1 Usage Scenarios of Synchronized

Multiple threads write a variable simultaneously.

For example, ticket selling: if 100 tickets remain and window A and window B each sell one ticket at the same time, making the remaining-ticket variable volatile does not help. Window A reads 100 remaining and window B also reads 100. A sells one ticket, decrements to 99, and flushes the value back to main memory; at the same time B sells one ticket, also decrements to 99, and flushes it back. The final value in main memory is 99 instead of 98.

This is the limitation of volatile mentioned earlier: it does not cover multiple threads writing at the same time. In that case, Synchronized is usually used.

Synchronized guarantees that only one thread can execute a method or block of code at a time.
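
A minimal sketch of the ticket example with Synchronized (class and method names are illustrative): the check and the decrement become one atomic step, so two windows can no longer both read 100 and both write back 99.

public class TicketSeller {

    private int remaining = 100;

    // only one thread can be inside sellOne() at a time
    public synchronized boolean sellOne() {
        if (remaining > 0) {
            remaining--;
            return true;
        }
        return false;
    }

    public synchronized int remaining() {
        return remaining;
    }
}

Window A and window B would each call sellOne() from their own thread; after two successful sales the value in main memory is 98, as expected.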

2.2 The principle of Synchronized

public class SynchronizedTest {

    public static void main(String[] args) {
        synchronized (SynchronizedTest.class) {
            System.out.println("123");
        }
        method();
    }

    private static void method() {
    }
}

Use the javap -v SynchronizedTest.class command to view the bytecode. Part of the bytecode is shown below:

public static void main(java.lang.String[]);
descriptor: ([Ljava/lang/String;)V
flags: ACC_PUBLIC, ACC_STATIC
Code:
  stack=2, locals=3, args_size=1
     0: ldc           #2                  // class com/lanshifu/opengldemo/test/SynchronizedTest
     2: dup
     3: astore_1
     4: monitorenter
     5: getstatic     #3                  // Field java/lang/System.out:Ljava/io/PrintStream;
     8: ldc           #4                  // String 123
    10: invokevirtual #5                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    13: aload_1
    14: monitorexit
    15: goto          23
    18: astore_2
    19: aload_1
    20: monitorexit
    21: aload_2
    22: athrow
    23: invokestatic  #6                  // Method method:()V
    26: return

You can see monitorenter at offset 4 and monitorexit at offset 14, with the println statement in between.

When a synchronized block executes, the monitorenter instruction runs first, then the code inside the block, and the monitorexit instruction runs when the block exits.

Synchronized synchronization relies on the monitor associated with the lock object: a thread must acquire the monitor with monitorenter before entering the block, and threads that fail to acquire it are blocked and placed in a wait queue. When monitorexit is called, one of the threads in the queue is dequeued and acquires the monitor. Details refer to: www.jianshu.com/p/d53bf830f…

Each object's monitor has a counter: it increases by one each time the owning thread acquires the lock and decreases by one when the lock is released. As long as the counter is greater than zero, other threads can only wait.
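
A minimal sketch of this reentrancy counter in action (names are illustrative): the same thread re-enters a monitor it already holds, the counter goes from 1 to 2 and back, and no deadlock occurs.

public class ReentrantDemo {

    public synchronized void outer() {
        System.out.println("outer holds the monitor"); // counter = 1
        inner(); // the same thread acquires the same monitor again
    }

    public synchronized void inner() {
        System.out.println("inner re-entered the monitor"); // counter = 2
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer(); // counter drops back to 0 on exit
    }
}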

2.3 Synchronized lock Escalation

You might think of Synchronized as a heavyweight lock, but Java 1.6 made a number of optimizations, so it is not always heavy: biased locks and lightweight locks were introduced to reduce the performance cost of acquiring and releasing locks.

Biased locking: in most cases a lock is not contended by multiple threads and is always acquired repeatedly by the same thread. Biased locking was introduced to make acquiring the lock cheaper for that thread.

When thread A accesses a synchronized block, the id of the current thread is stored in the object header; after that, thread A does not need to acquire or release the lock again when entering or exiting the block.

Lightweight lock: if, while the lock is biased, thread B also accesses the synchronized block, the thread id in the object header no longer matches, so the lock is upgraded to a lightweight lock and thread B spins trying to acquire it.

Heavyweight lock: if thread A and thread B access the synchronized block at the same time, the lightweight lock is upgraded to a heavyweight lock. If thread A holds the heavyweight lock, thread B has to queue up and enters the BLOCKED state.

2.4 Synchronized shortcomings

  1. The lock timeout period cannot be set
  2. The lock cannot be released through code
  3. Easy to cause deadlock

Three, ReentrantLock

ReentrantLock addresses the shortcomings of Synchronized just mentioned: the inability to set a lock timeout and the inability to release the lock in code.

ReentrantLock is better suited to highly contended locks and to situations that need multiple condition variables. It provides Condition, which makes waiting and waking threads more flexible; a single ReentrantLock can have multiple Condition instances, so it is more scalable.

3.1 ReentrantLock usage

Lock and unlock:

ReentrantLock reentrantLock = new ReentrantLock();
System.out.println("reentrantLock->lock");
reentrantLock.lock();
try {
    System.out.println("Sleep for 2 seconds...");
    Thread.sleep(2000);
} catch (InterruptedException e) {
    e.printStackTrace();
} finally {
    reentrantLock.unlock();
    System.out.println("reentrantLock->unlock");
}

Implement a timed lock request: tryLock

public static void main(String[] args) {
    ReentrantLock reentrantLock = new ReentrantLock();
    Thread thread1 = new Thread_tryLock(reentrantLock);
    thread1.setName("thread1");
    thread1.start();
    Thread thread2 = new Thread_tryLock(reentrantLock);
    thread2.setName("thread2");
    thread2.start();
}

static class Thread_tryLock extends Thread {
    ReentrantLock reentrantLock;

    public Thread_tryLock(ReentrantLock reentrantLock) {
        this.reentrantLock = reentrantLock;
    }

    @Override
    public void run() {
        try {
            System.out.println("try lock:" + Thread.currentThread().getName());
            boolean tryLock = reentrantLock.tryLock(3, TimeUnit.SECONDS);
            if (tryLock) {
                System.out.println("try lock success :" + Thread.currentThread().getName());
                System.out.println("Get some sleep:" + Thread.currentThread().getName());
                Thread.sleep(5000);
                System.out.println("Awake:" + Thread.currentThread().getName());
            } else {
                System.out.println("Try lock timeout :" + Thread.currentThread().getName());
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println("unlock:" + Thread.currentThread().getName());
            reentrantLock.unlock();
        }
    }
}

Printed logs:

try lock:thread1
try lock:thread2
try lock success :thread2
Get some sleep:thread2
Try lock timeout :thread1
unlock:thread1
Exception in thread "thread1" java.lang.IllegalMonitorStateException
	at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
	at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
	at com.lanshifu.demo_module.test.lock.ReentranLockTest$Thread_tryLock.run(ReentranLockTest.java:60)
Awake:thread2
unlock:thread2

tryLock sets a maximum wait time for acquiring the lock; if the lock is not acquired within 3 seconds, it returns false. You can see this in the log. The exception occurs because thread1 failed to acquire the lock, so unlock should not have been called for it.

3.2 Condition

public static void main(String[] args) {
    Thread_Condition thread_condition = new Thread_Condition();
    thread_condition.setName("Thread that tests Condition");
    thread_condition.start();
    try {
        Thread.sleep(2000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    thread_condition.signal();
}

static class Thread_Condition extends Thread {

    @Override
    public void run() {
        await();
    }

    private ReentrantLock lock = new ReentrantLock();
    public Condition condition = lock.newCondition();

    public void await() {
        try {
            System.out.println("lock");
            lock.lock();
            System.out.println(Thread.currentThread().getName() + ": I'm waiting for the notification...");
            condition.await(); // condition.await(2, TimeUnit.SECONDS) can be used to wait with a timeout
            System.out.println(Thread.currentThread().getName() + ": Got the notification, I continue to execute >>>");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            System.out.println("unlock");
            lock.unlock();
        }
    }

    public void signal() {
        try {
            System.out.println("lock");
            lock.lock();
            System.out.println("I want to notify the waiting thread, condition.signal()");
            condition.signal(); // signal corresponds to the await above
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println("unlock");
            lock.unlock();
        }
    }
}

The printed log:

lock
Thread that tests Condition: I'm waiting for the notification...
lock
I want to notify the waiting thread, condition.signal()
unlock
Thread that tests Condition: Got the notification, I continue to execute >>>
unlock

The above demonstrates the use of Condition's await and signal; note that lock must be acquired before calling them.

3.3 Fair lock and Non-fair Lock

Passing true to the ReentrantLock constructor creates a fair lock.

A fair lock means threads acquire the lock in the order in which they requested it, that is, first come, first served. A non-fair lock lets threads compete to grab the lock, so some threads may never get it (starvation), which is why it is called unfair.
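
A minimal sketch (variable names are illustrative):

ReentrantLock fairLock = new ReentrantLock(true);   // fair: waiting threads acquire the lock in FIFO order
ReentrantLock unfairLock = new ReentrantLock();     // default: non-fair

The non-fair default generally gives higher throughput, because a releasing thread can hand the lock straight to an already-running thread, at the cost of possible starvation.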

3.4 ReentrantLock summary

  1. ReentrantLock uses lock and unlock to obtain and release locks
  2. The unlock call should be placed in a finally block so the lock is released whether the code runs normally or throws an exception
  3. Before calling Condition's await and signal methods, you must call the lock method to acquire the lock

Four, the concurrent package (java.util.concurrent)

From the above analysis, the use of locks is inefficient in the case of severe concurrency, because only one thread can acquire the lock at a time, and the other threads have to wait.

Java provides the java.util.concurrent package to solve this problem. Here are some commonly used data structures from that package.

4.1 ConcurrentHashMap

We all know that HashMap is not thread-safe. Hashtable makes get and put thread-safe by marking them Synchronized, but that serializes all access, so in practice it has been replaced by ConcurrentHashMap, which is far more efficient under concurrency.

ConcurrentHashMap uses segmented (striped) locking. In the classic segmented design it has 16 segments by default: get and put first compute the key's hashcode, which maps the entry to one of the 16 segments, and inside each segment is a HashMap-like structure (an array plus linked lists, converted to red-black trees in newer versions when a list grows too long).

In theory, up to 16 threads can therefore write concurrently, one per segment.
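
A minimal usage sketch (key names are illustrative): atomic compound operations such as merge avoid the lost-update problem that a get-then-put on a plain HashMap would have.

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapDemo {

    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counters = new ConcurrentHashMap<>();
        // merge is a single atomic operation on the map
        counters.merge("page_view", 1, Integer::sum);
        counters.merge("page_view", 1, Integer::sum);
        System.out.println(counters.get("page_view")); // 2
    }
}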

4.2 LinkedBlockingQueue

A blocking queue backed by a linked list. Internally it uses two ReentrantLocks (one for take, one for put) and two Conditions:

    /** Lock held by take, poll, etc */
    private final ReentrantLock takeLock = new ReentrantLock();

    /** Wait queue for waiting takes */
    private final Condition notEmpty = takeLock.newCondition();

    /** Lock held by put, offer, etc */
    private final ReentrantLock putLock = new ReentrantLock();

    /** Wait queue for waiting puts */
    private final Condition notFull = putLock.newCondition();

    private void signalNotEmpty() {
        final ReentrantLock takeLock = this.takeLock;
        takeLock.lock();
        try {
            notEmpty.signal();
        } finally {
            takeLock.unlock();
        }
    }

    /**
     * Signals a waiting put. Called only from take/poll.
     */
    private void signalNotFull() {
        final ReentrantLock putLock = this.putLock;
        putLock.lock();
        try {
            notFull.signal();
        } finally {
            putLock.unlock();
        }
    }

The main logic of LinkedBlockingQueue:

  1. When taking data from the queue, if the queue is empty, the consumer calls notEmpty.await() and waits.
  2. When data is put into the queue, notEmpty.signal() is called, which wakes the consumer waiting in step 1 so it can continue.
  3. When data is taken from the queue, notFull.signal() is called to tell producers they can continue producing.
  4. When putting data into the queue, if the queue is already full, the producer calls notFull.await() and waits for a consumer to take data; once step 3 takes an element and calls notFull.signal(), the producer can continue producing.

LinkedBlockingQueue is a typical implementation of the producer-consumer pattern; I won't go into further source-code detail here. A minimal usage sketch follows.
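
A minimal producer-consumer sketch (names and the capacity are illustrative): put blocks when the queue is full and take blocks when it is empty, which is exactly the notFull/notEmpty logic described above.

import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {

    public static void main(String[] args) {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10); // capacity 10

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i); // blocks (notFull.await) when the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "producer");

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println("consumed " + queue.take()); // blocks (notEmpty.await) when empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "consumer");

        producer.start();
        consumer.start();
    }
}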

4.3 Atomic Operation class: AtomicInteger

Compare and swap (CAS) is used internally to ensure atomicity

Let’s take an example of int incrementing

AtomicInteger atomicInteger = new AtomicInteger(0);
atomicInteger.incrementAndGet(); // atomic increment

Take a look at the source code

   /**
     * Atomically increments by one the current value.
     *
     * @return the updated value
     */
    public final int incrementAndGet() {
        return U.getAndAddInt(this, VALUE, 1) + 1;
    }

U is Unsafe, so look at Unsafe#getAndAddInt:

    public final int getAndAddInt(Object var1, long var2, int var4) {
        int var5;
        do {
            var5 = this.getIntVolatile(var1, var2);
        } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

        return var5;
    }

The do-while loop keeps reading the current value and retrying compareAndSwapInt until the swap succeeds; that is how atomicity is guaranteed without a lock.
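
A minimal sketch mirroring the earlier count++ example (names are illustrative): with AtomicInteger no increments are lost, and no lock is needed.

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger count = new AtomicInteger(0);

        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count.incrementAndGet(); // CAS retry loop instead of a lock
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(count.get()); // always 200000
    }
}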

Five, the summary

If you're asked a multi-threading question in an interview, you can answer it along these lines:

  1. When only one thread writes a variable and all other threads only read it, use a volatile variable.
  2. When multiple threads write and contention is generally not severe, use Synchronized. Synchronized does not start out as a heavyweight lock: when contention is light, for example when only one thread accesses it, it is a biased lock; when multiple threads access it but not at the same time, it is upgraded to a lightweight lock; only when multiple threads access it simultaneously does it become a heavyweight lock. So Synchronized is fine when contention is not severe. Its limitations are that you cannot set a lock timeout and cannot release the lock in code.
  3. ReentrantLock can release the lock in code and supports a lock-acquisition timeout (tryLock).
  4. Synchronized and ReentrantLock are inefficient under heavy concurrency because only one thread can hold the lock at a time and all the other threads wait. In that case, use the data structures in the java.util.concurrent package, such as ConcurrentHashMap and LinkedBlockingQueue, and atomic classes such as AtomicInteger.

Answering along the lines of the summary above is basically enough in an interview. Besides ConcurrentHashMap, you should also understand the principles of other commonly used data structures, such as HashMap, Hashtable, and TreeMap, and the comparison between ArrayList and LinkedList; these come up all the time. Read their source code yourself or some blog posts.

That wraps up multi-threaded concurrency. If you prepare for interviews along the lines of this article, you should not run into much trouble.


My Jianshu

Reference: Synchronized through and through