Multi-threaded concurrency questions are a staple of technical interviews.
Most candidates know about Synchronized and Lock. Some can also talk about volatile and the java.util.concurrent package. The strongest can explain the principles behind Synchronized and volatile and the data structures used in java.util.concurrent, such as how ConcurrentHashMap works.
This article summarizes the main approaches to handling multi-threaded concurrency; hopefully it helps you.
I. Why multi-threading has concurrency problems
Why is there a concurrency problem when multiple threads are accessing the same variable at the same time?
- The Java memory model specifies that all variables are stored in main memory and that each thread has its own working memory.
- The working memory of a thread holds a copy of the main-memory variables used by that thread. All operations on variables must be performed in the thread's working memory rather than by reading or writing main memory directly.
- When a thread accesses a variable, it copies the variable from main memory to working memory. Writes to the variable are not immediately synchronized to main memory.
- Different threads cannot directly access variables in each other’s working memory, and the transfer of variables between threads requires data synchronization between their own working memory and main memory.
II. The Java Memory Model (JMM)
The Java Memory Model (JMM) governs the synchronization of data between working memory (local memory) and main memory. It specifies how and when data is synchronized.
III. The three elements of concurrent programming
Atomicity: an operation either completes entirely or does not execute at all; the CPU cannot interrupt it partway through by pausing and rescheduling.
Visibility: When multiple threads access the same variable and one thread changes the value of the variable, other threads can immediately see the changed value.
Orderliness: the program executes in the order the code is written (in practice, the compiler and processor may reorder instructions as long as the single-threaded result is unchanged).
IV. How to solve concurrency problems (key)
The following describes how to solve concurrency problems in different scenarios.
1. volatile
1.1 volatile features
Guarantees visibility, but not atomicity:
- When a volatile variable is written, the JVM forces variables from local memory to be flushed to main memory
- This write operation invalidates the cache in the other thread, and the other thread reads from the main memory. Volatile writes are visible to other threads in real time.
Prohibits instruction reordering. Instruction reordering is an optimization in which compilers and processors reorder instructions to improve program performance, subject to certain rules:
- Instructions with data dependencies are not reordered. For example, in a = 1; b = a; the second statement depends on the first, so the two will not be reordered.
- Reordering must not change the result in a single thread. For example, in a = 1; b = 2; c = a + b; the first two assignments can be reordered, but c = a + b cannot be moved before them, because the result must still be 3.
1.2 Application Scenarios
volatile is suitable for a variable that only one thread writes and all other threads read.
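This one-writer, many-readers pattern can be sketched as a stop flag: a writer thread clears a volatile boolean and a reader thread observes the change. This is a minimal illustration, not code from the article; the class and method names are made up.

```java
public class VolatileFlag {
    // volatile: the reader is guaranteed to see the writer's update
    private static volatile boolean running = true;

    public static boolean demo() {
        Thread reader = new Thread(() -> {
            while (running) {
                // spin until the single writer clears the flag
            }
        });
        reader.start();
        try {
            Thread.sleep(100);   // give the reader time to enter the loop
            running = false;     // the single writer flips the flag
            reader.join(2000);   // volatile visibility lets the reader exit promptly
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !reader.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("reader terminated: " + demo());
    }
}
```

Without volatile, the JIT compiler is free to hoist the flag read out of the loop, and the reader might spin forever.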
1.3 Why is volatile used in double-checked locking for singletons?
public class TestInstance {
    private static volatile TestInstance mInstance;

    public static TestInstance getInstance() {          // 1
        if (mInstance == null) {                        // 2
            synchronized (TestInstance.class) {         // 3
                if (mInstance == null) {                // 4
                    mInstance = new TestInstance();     // 5
                }
            }
        }
        return mInstance;
    }
}
If volatile is not used, concurrency problems can arise. When thread A executes new TestInstance() at comment 5, the steps are:
- Allocate memory
- Initialize an object
- Point mInstance at the allocated memory
If the instructions are reordered, the actual order may be 1, 3, 2. Right after step 3, thread B may enter at comment 2, see that mInstance is not null, and use an object that has not yet been initialized. The volatile keyword is therefore used to prohibit this instruction reordering.
1.4 principle of volatile
Volatile is implemented in the JVM using memory barriers that provide three functions:
- It ensures that reordering does not move instructions that follow the memory barrier before it, nor instructions that precede it after it; by the time the barrier instruction executes, all operations before it have completed.
- It forces cached changes to be written to main memory immediately
- A write invalidates the corresponding cache lines in other CPUs, after which reads from other threads come from main memory.
1.5 Limitations of Volatile
volatile guarantees visibility, not atomicity. Writes are visible to other threads, but volatile does not solve the problem of multiple threads writing at the same time.
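A small sketch (class and method names are illustrative, not from the article) of why volatile is not enough for concurrent writes: two threads run count++ on a volatile int, and the final value is often less than the expected total, because ++ is a read-modify-write of three steps, not one atomic step.

```java
public class VolatileNotAtomic {
    // volatile gives visibility, but count++ is still read -> add -> write
    private static volatile int count = 0;

    public static int race(int perThread) {
        Runnable inc = () -> {
            for (int i = 0; i < perThread; i++) {
                count++; // two threads can read the same value and both write value+1
            }
        };
        Thread a = new Thread(inc);
        Thread b = new Thread(inc);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;
    }

    public static void main(String[] args) {
        // Often prints less than 200000: lost updates from interleaved increments
        System.out.println("count = " + race(100_000));
    }
}
```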
2. Synchronized
2.1 Synchronized Usage scenario
Multiple threads write a variable at the same time.
For example, suppose 100 tickets remain and windows A and B each sell one at the same time; if the remaining-ticket count is merely a volatile variable, things go wrong.
Window A reads 100 remaining tickets, and window B also reads 100. A sells a ticket, decrements its copy to 99, and flushes it back to main memory. At the same time B sells a ticket, also decrements its copy to 99, and flushes it back, so main memory ends up with 99 instead of 98.
This is the limitation of volatile: it cannot handle multiple threads writing simultaneously. In that case, Synchronized can be used.
Synchronized guarantees that only one thread can execute a method or block of code at a time.
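The ticket-window scenario above can be sketched with a synchronized sell method; since only one thread at a time can execute it, no ticket is sold twice and none is lost. The class and method names here are illustrative, not from the article.

```java
public class TicketOffice {
    private int remaining;

    public TicketOffice(int remaining) {
        this.remaining = remaining;
    }

    // synchronized makes the read-check-decrement one indivisible step,
    // so two windows can never both see "100" and both write back "99"
    public synchronized boolean sellOne() {
        if (remaining > 0) {
            remaining--;
            return true;
        }
        return false;
    }

    public synchronized int remaining() {
        return remaining;
    }

    public static int simulate(int tickets, int windows) {
        TicketOffice office = new TicketOffice(tickets);
        Thread[] ts = new Thread[windows];
        for (int i = 0; i < windows; i++) {
            ts[i] = new Thread(() -> {
                while (office.sellOne()) {
                    // keep selling until sold out
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return office.remaining(); // always 0: exactly `tickets` sales happened
    }

    public static void main(String[] args) {
        System.out.println("remaining = " + simulate(100, 2));
    }
}
```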
2.2 principle of Synchronized
public class SynchronizedTest {
public static void main(String[] args) {
synchronized (SynchronizedTest.class) {
System.out.println("123");
}
method();
}
    private static void method() {
    }
}
Compile this code with the javac command, then inspect the bytecode with javap -v SynchronizedTest.class:
public static void main(java.lang.String[]);
descriptor: ([Ljava/lang/String;)V
flags: ACC_PUBLIC, ACC_STATIC
Code:
stack=2, locals=3, args_size=1
0: ldc #2 // class com/lanshifu/opengldemo/test/SynchronizedTest
2: dup
3: astore_1
4: monitorenter
5: getstatic #3 // Field java/lang/System.out:Ljava/io/PrintStream;
8: ldc #4 // String 123
10: invokevirtual #5 // Method java/io/PrintStream.println:(Ljava/lang/String;) V
13: aload_1
14: monitorexit
15: goto 23
18: astore_2
19: aload_1
20: monitorexit
21: aload_2
22: athrow
23: invokestatic #6 // Method method:()V
26: return
You can see monitorenter at offset 4 and monitorexit at offset 14, with the println in between.
When a synchronized block is entered, the monitorenter instruction executes; the code inside the block runs; and on exiting the block, the monitorexit instruction executes.
The key to Synchronized is that a thread must acquire the object's monitor before it can proceed; otherwise it enters the synchronization queue and its state becomes BLOCKED. Only one thread can hold the monitor at a time. When monitorexit runs, a thread from the queue is woken to contend for the monitor. In more detail:
Each object's monitor has a counter that is incremented by one when a thread acquires the lock and decremented by one when it releases it; as long as the counter is greater than zero, other threads must wait.
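One consequence of this monitor counter is that Synchronized is reentrant: a thread that already holds an object's monitor can enter another synchronized method on the same object, simply incrementing the counter. A minimal sketch (class and method names are illustrative):

```java
public class ReentrancyDemo {
    public synchronized String outer() {
        // The current thread already holds this object's monitor; the call
        // below just bumps the monitor counter instead of blocking.
        return inner();
    }

    public synchronized String inner() {
        return "ok";
    }

    public static void main(String[] args) {
        // If the monitor were not reentrant, this call would deadlock on itself.
        System.out.println(new ReentrancyDemo().outer());
    }
}
```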
2.3 Upgrading a Synchronized lock
Synchronized is often described as a heavyweight lock, but Java 1.6 made various optimizations to Synchronized that make it less heavy in many cases: biased locking and lightweight locking were introduced to reduce the cost of acquiring and releasing locks.
Biased locks: In most cases, locks are not only not contested by multiple threads, but are always acquired multiple times by the same thread. Biased locks are introduced to make it cheaper for threads to acquire locks.
When thread A first enters a block of code with a Synchronized lock, the current thread's id is stored in the object header; subsequent entries and exits by the same thread no longer need to lock and unlock.
Lightweight lock: under a biased lock, if thread B also reaches the synchronized block and its thread id differs from the one in the object header, the lock upgrades to a lightweight lock, which thread B tries to acquire by spinning.
Heavyweight lock: if thread A and thread B access the synchronized block at the same time, the lightweight lock upgrades to a heavyweight lock. While thread A holds the heavyweight lock, thread B must wait and enters the BLOCKED state.
2.4 Synchronized shortcomings
- You cannot set the lock timeout period
- Locks cannot be released by code
- Prone to deadlocks
3. ReentrantLock
ReentrantLock addresses the drawbacks of Synchronized: it can set a lock timeout and release the lock in code.
ReentrantLock also provides Condition, which is more flexible for thread wait and wake up operations. A ReentrantLock can have multiple Condition instances. So it’s more scalable.
3.1 Use of ReentrantLock
Lock and unlock:
ReentrantLock reentrantLock = new ReentrantLock();
System.out.println("reentrantLock->lock");
reentrantLock.lock();
try {
System.out.println("Sleep for two seconds...");
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}finally {
reentrantLock.unlock();
    System.out.println("reentrantLock->unlock");
}
Making a timed lock attempt with tryLock:
public static void main(String[] args) {
ReentrantLock reentrantLock = new ReentrantLock();
Thread thread1 = new Thread_tryLock(reentrantLock);
thread1.setName("thread1");
thread1.start();
Thread thread2 = new Thread_tryLock(reentrantLock);
thread2.setName("thread2");
thread2.start();
}
static class Thread_tryLock extends Thread {
ReentrantLock reentrantLock;
public Thread_tryLock(ReentrantLock reentrantLock) {
this.reentrantLock = reentrantLock;
}
@Override
public void run() {
try {
System.out.println("try lock:" + Thread.currentThread().getName());
boolean tryLock = reentrantLock.tryLock(3, TimeUnit.SECONDS);
if (tryLock) {
System.out.println("try lock success :" + Thread.currentThread().getName());
System.out.println("Get some sleep:" + Thread.currentThread().getName());
Thread.sleep(5000);
System.out.println("Awake:" + Thread.currentThread().getName());
} else {
System.out.println("Try lock timeout :" + Thread.currentThread().getName());
}
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
System.out.println("unlock:" + Thread.currentThread().getName());
reentrantLock.unlock();
}
}
}
Logs printed:
try lock:thread1
try lock:thread2
try lock success :thread2
try lock timeout :thread1
unlock:thread1
Exception in thread "thread1" java.lang.IllegalMonitorStateException
at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
at com.lanshifu.demo_module.test.lock.ReentranLockTest$Thread_tryLock.run(ReentranLockTest.java:60)
Awake:thread2
unlock:thread2
As demonstrated above, tryLock sets a wait time for acquiring the lock and gives up after 3 seconds; you can see this in the log. The exception occurs because thread1 failed to acquire the lock but still called unlock in its finally block; a thread that does not hold the lock must not call unlock.
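A safer tryLock pattern, then, is to remember whether the lock was actually acquired and only unlock in that case. This is a hedged sketch, not code from the article; the class and method names are made up.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockPattern {
    public static boolean withTimeout(ReentrantLock lock) {
        boolean acquired = false;
        try {
            acquired = lock.tryLock(3, TimeUnit.SECONDS);
            if (acquired) {
                // ... critical section ...
            }
            return acquired;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            if (acquired) {
                // guarded: never called by a thread that failed to get the lock,
                // so no IllegalMonitorStateException
                lock.unlock();
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("acquired: " + withTimeout(new ReentrantLock()));
    }
}
```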
3.2 Condition
public static void main(String[] args) {
Thread_Condition thread_condition = new Thread_Condition();
thread_condition.setName("Testing the thread of Condition");
thread_condition.start();
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
thread_condition.singal();
}
static class Thread_Condition extends Thread {
@Override
public void run() {
await();
}
private ReentrantLock lock = new ReentrantLock();
public Condition condition = lock.newCondition();
public void await() {
try {
System.out.println("lock");
lock.lock();
System.out.println(Thread.currentThread().getName() + "I'm waiting for the announcement...");
condition.await(); // await and signal correspond
// condition.await(2, TimeUnit.SECONDS); // alternative: wait with a timeout
System.out.println(Thread.currentThread().getName() + ": When notified, I continue to execute >>>");
} catch (Exception e) {
e.printStackTrace();
} finally {
System.out.println("unlock");
lock.unlock();
}
}
public void singal() {
try {
System.out.println("lock");
lock.lock();
System.out.println("I want to notify the waiting thread, condition.signal()");
condition.signal(); // await and signal correspond
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
System.out.println("unlock");
lock.unlock();
}
}
}
Running it prints:
lock
Testing the thread of Condition: I'm waiting for the announcement...
lock
I want to notify the waiting thread, condition.signal()
unlock
Testing the thread of Condition: When notified, I continue to execute >>>
unlock
The above demonstrates the use of Condition's await and signal; lock must be called first.
3.3 Fair lock and Unfair Lock
The ReentrantLock constructor passes true for a fair lock.
A fair lock means threads acquire the lock in the order they requested it, i.e., first come, first served. A non-fair lock is a preemption mechanism: the lock is granted in no particular order, which may cause some threads to consistently fail to get it, hence "unfair".
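Choosing between the two is just a constructor argument; a minimal sketch (class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        // true -> fair lock: the longest-waiting thread gets the lock next (FIFO)
        ReentrantLock fair = new ReentrantLock(true);
        // default is non-fair: a newly arriving thread may "barge" ahead of
        // waiters, which gives higher throughput but can starve some threads
        ReentrantLock unfair = new ReentrantLock();
        System.out.println("fair=" + fair.isFair() + ", unfair=" + unfair.isFair());
    }
}
```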
3.4 ReentrantLock summary
- ReentrantLock Uses lock and unlock to acquire and release locks
- Unlock is placed in finally so that a normal run or exception releases the lock
- Before using a Condition's await and signal methods, you must call the lock method to acquire the lock
4. The java.util.concurrent package
From the above analysis, locks are clearly inefficient under heavy concurrency: only one thread can hold the lock at a time, and all the others must wait.
Java provides the java.util.concurrent package to address this. Below are some commonly used data structures from that package.
4.1 ConcurrentHashMap
We all know HashMap is not a thread-safe data structure. HashTable builds thread safety on top of HashMap by making its get and put methods synchronized, but under high concurrency it has been replaced by ConcurrentHashMap.
In the segmented (pre-Java 8) design, ConcurrentHashMap has 16 buckets (segments) by default. A get or put first computes the key's hashCode, maps it to one of the 16 buckets, and locks only that bucket with a ReentrantLock. Inside each bucket is a HashMap-like structure (an array plus linked lists; an overly long list becomes a red-black tree).
So at most 16 threads can write concurrently, one per bucket.
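A quick sketch of concurrent writes (class and method names are made up): several threads put distinct keys into one ConcurrentHashMap without any external locking, and no update is lost.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static int fill(int threads, int perThread) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int id = t;
            ts[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    // keys are distinct across threads; no external lock needed
                    map.put(id + "-" + i, i);
                }
            });
            ts[t].start();
        }
        for (Thread th : ts) {
            try {
                th.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return map.size(); // threads * perThread: no lost updates
    }

    public static void main(String[] args) {
        System.out.println("size = " + fill(4, 1000));
    }
}
```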
4.2 LinkedBlockingQueue
A blocking queue backed by a linked list; internally it uses two ReentrantLocks, one for take and one for put:
/** Lock held by take, poll, etc */
private final ReentrantLock takeLock = new ReentrantLock();
/** Wait queue for waiting takes */
private final Condition notEmpty = takeLock.newCondition();
/** Lock held by put, offer, etc */
private final ReentrantLock putLock = new ReentrantLock();
/** Wait queue for waiting puts */
private final Condition notFull = putLock.newCondition();
private void signalNotEmpty() {
final ReentrantLock takeLock = this.takeLock;
takeLock.lock();
try {
notEmpty.signal();
} finally {
takeLock.unlock();
}
}
/**
* Signals a waiting put. Called only from take/poll.
*/
private void signalNotFull() {
final ReentrantLock putLock = this.putLock;
putLock.lock();
try {
notFull.signal();
} finally {
putLock.unlock();
}
}
The logic of LinkedBlockingQueue is roughly as follows:
- When taking data from an empty queue, take calls notEmpty.await() and the consumer waits.
- When data is put into the queue, put calls notEmpty.signal(), ending the consumer's wait above so it can continue.
- When data is taken from the queue, take calls notFull.signal(), notifying the producer that it can continue producing.
- When putting data into a full queue (the capacity limit is reached), put calls notFull.await() and waits for a consumer to take an item and call notFull.signal() before producing again.
LinkedBlockingQueue is a typical producer-consumer implementation; we won't go into further source-level detail.
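The producer-consumer pattern described above can be sketched directly with LinkedBlockingQueue: put blocks when the bounded queue is full and take blocks when it is empty. Class and method names are illustrative, not from the article.

```java
import java.util.concurrent.LinkedBlockingQueue;

public class PcDemo {
    public static int run(int items) {
        // capacity 10: put() blocks (via notFull) when the queue is full
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);
        final int[] consumed = {0};

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) {
                    queue.put(i); // waits for space, then signals notEmpty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) {
                    queue.take(); // waits for data, then signals notFull
                    consumed[0]++;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return consumed[0];
    }

    public static void main(String[] args) {
        System.out.println("consumed = " + run(1000));
    }
}
```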
4.3 Atomic Operation Class: AtomicInteger
Internally, CAS (Compare And Swap) is used to guarantee atomicity.
Take int increment as an example:
AtomicInteger atomicInteger = new AtomicInteger(0);
atomicInteger.incrementAndGet(); // atomic increment
Look at the source code
/**
* Atomically increments by one the current value.
*
* @return the updated value
*/
public final int incrementAndGet() {
    return U.getAndAddInt(this, VALUE, 1) + 1;
}
U is an Unsafe instance; look at Unsafe#getAndAddInt:
public final int getAndAddInt(Object var1, long var2, int var4) {
int var5;
do {
var5 = this.getIntVolatile(var1, var2);
} while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
return var5;
}
The do-while loop retries until compareAndSwapInt succeeds, which is what guarantees atomicity.
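A sketch of AtomicInteger under contention (class and method names are illustrative): unlike the plain volatile counter earlier, every increment survives because incrementAndGet retries its CAS until it wins.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    public static int count(int threads, int perThread) {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counter.incrementAndGet(); // CAS retry loop, no lock held
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return counter.get(); // exactly threads * perThread, no lost updates
    }

    public static void main(String[] args) {
        System.out.println("count = " + count(4, 100_000));
    }
}
```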
Conclusion
Multithreaded concurrency:
- Volatile is used when only one thread is writing and all other threads are reading
- Synchronized is not a heavyweight lock from the start. When contention is light, for example when only one thread accesses the block, it is a biased lock; when multiple threads access it, but not at the same time, it upgrades to a lightweight lock; when multiple threads access it simultaneously, it upgrades to a heavyweight lock. So Synchronized is fine when concurrency is not severe. Its limitations are that you cannot set a lock timeout or release the lock in code.
- ReentrantLock can release the lock in code and supports lock timeouts.
- Synchronized and ReentrantLock are inefficient under high concurrency, because only one thread at a time can enter the critical section; all other threads wait for the lock. In that case, use java.util.concurrent data structures such as ConcurrentHashMap and LinkedBlockingQueue, and atomic classes such as AtomicInteger.
In an interview, answering along the lines summarized above is basically fine. For comparisons of HashMap, HashTable, TreeMap, ArrayList, LinkedList, and so on, read the source code or some blog posts yourself.