1. Pessimistic locking (blocking)
1.1. Critical regions and race conditions
1.1.1. The critical region
A code block is called a critical section if multiple threads read and write shared resources inside it. When instruction interleaving occurs in a critical section, unexpected results can follow.
private static int count = 0; // Shared resource

private static void increment() { // Critical section (the whole method body)
    count++;
}

private static void decrement() { // Critical section (the whole method body)
    count--;
}
1.1.2. Race conditions
When multiple threads execute in a critical section and the outcome depends on the order in which their instructions interleave, the result cannot be predicted. This is called a race condition.
⭐ Solutions to avoid race conditions:
- Blocking: synchronized, Lock.
- Non-blocking: atomic variables (CAS).
1.1.3. Atomicity
public class ThreadTest {
    private static int count = 0;

    public static void main(String[] args) {
        // Thread 1 increments count 5000 times
        Thread thread1 = new Thread(() -> {
            // A race condition occurs in the critical section
            for (int i = 0; i < 5000; i++) count++;
        });
        // Thread 2 decrements count 5000 times
        Thread thread2 = new Thread(() -> {
            // A race condition occurs in the critical section
            for (int i = 0; i < 5000; i++) count--;
        });
        thread1.start();
        thread2.start();
    }
}
- The ideal result: after both threads finish, count == 0.
- The actual result: after both threads finish, count != 0 is common.
i++ and i-- are not atomic operations in Java. For i++ (where i is a static variable), the JVM actually generates the following bytecode instructions:
getstatic i   // Get the value of static variable i
iconst_1      // Prepare the constant 1
iadd          // Add
putstatic i   // Store the result back into static variable i
The bytecode for i-- is similar:
getstatic i   // Get the value of static variable i
iconst_1      // Prepare the constant 1
isub          // Subtract
putstatic i   // Store the result back into static variable i
If a context switch happens in the middle of one of these instruction sequences, the increments and decrements can interleave, and the final value of i may not be 0.
1.2. Synchronized concept
synchronized locks a target (an object, a method, and so on): whichever thread reaches that code must first check whether another thread currently holds the lock on the target. If the lock is held, the thread must wait until the holder finishes and releases it; if not, the thread locks the target before running the code.
public class ThreadTest {
    private static int count = 0;
    // Lock object
    private static final Object lock = new Object();

    public static void main(String[] args) {
        // Thread 1 increments count 5000 times
        Thread thread1 = new Thread(() -> {
            for (int i = 0; i < 5000; i++) {
                synchronized (lock) { count++; }
            }
        });
        // Thread 2 decrements count 5000 times
        Thread thread2 = new Thread(() -> {
            for (int i = 0; i < 5000; i++) {
                synchronized (lock) { count--; }
            }
        });
        thread1.start();
        thread2.start();
    }
}
With synchronized, the final result is count == 0.
⭐️ Understanding synchronized: synchronized uses an object lock to guarantee the atomicity of the code in the critical section. To other threads the critical section is indivisible: even if a context switch interrupts the lock holder, no other thread can enter the critical section until the lock is released.
1.2.1. Synchronized on methods
Member methods
class Test {
    public synchronized void test() {
        // Critical section
    }
}

// Equivalent in effect to:
class Test {
    public void test() {
        // Locks "this", i.e. the current Test instance
        synchronized (this) {
            // Critical section
        }
    }
}
Because the instance object is locked, when a class has several synchronized instance methods, multiple threads calling different synchronized methods on the same instance contend for the same lock.
class Test {
    public synchronized void test1() {
        System.out.println("test1");
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public synchronized void test2() {
        System.out.println("test2");
    }
}

// Calling code
Test test = new Test();
// Effect: prints "test1" immediately. test1() locks the test object and releases the lock after 5s
new Thread(() -> test.test1()).start();
// Effect: prints "test2" after 5s. The thread blocks until the test object's lock is released
new Thread(() -> test.test2()).start();
// Effect: prints "test2" immediately. The newly created Test object is not locked, so test2() runs at once
new Thread(() -> new Test().test2()).start();
Static method
class Test {
    public synchronized static void test() {
        // Critical section
    }
}

// Equivalent in effect to:
class Test {
    public static void test() {
        // A static method has no instance, so it locks the class object (Test.class)
        synchronized (Test.class) {
            // Critical section
        }
    }
}
Because the class object is locked, when a class has several synchronized static methods, multithreaded calls to any of them contend for the same lock.
class Test {
    public synchronized static void test1() {
        System.out.println("test1");
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public synchronized static void test2() {
        System.out.println("test2");
    }
}

// Calling code
// Effect: prints "test1" immediately. test1() locks Test.class and releases the lock after 5s
new Thread(() -> Test.test1()).start();
// Effect: prints "test2" after 5s. The thread blocks until the lock on Test.class is released
new Thread(() -> Test.test2()).start();
1.2.2. Thread-safety analysis of variables
[Member variables and static variables]
- If they are not shared between threads, they are thread-safe.
- If they are shared, it depends on whether they are only read or also written:
  - If there are only reads, they are thread-safe.
  - If there are both reads and writes, the code is a critical section and is not thread-safe.
[Local variables]
- Local variables themselves are thread-safe.
- For an object referenced by a local variable, safety depends on whether the object escapes the method (see the sketch after this list):
  - If the object never escapes the method, access to it is thread-safe.
  - If the object escapes the scope of the method, access to it is not thread-safe.
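As a sketch of the escape distinction (the class and method names below are illustrative, not from the original text):

import java.util.ArrayList;
import java.util.List;

class EscapeDemo {
    // Thread-safe: the list is a local object that never leaves the method,
    // so no other thread can ever obtain a reference to it.
    public void noEscape() {
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("b");
    }

    // Not thread-safe: the list escapes through the return value, so callers
    // running on different threads may read and modify the same ArrayList.
    public List<String> escape() {
        List<String> list = new ArrayList<>();
        list.add("a");
        return list;
    }
}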
1.2.3. Common thread-safe classes
String, Integer, StringBuffer, Random, Vector, Hashtable, and the classes under the java.util.concurrent package.
⭐️ Note:
- Thread-safe here means that it is safe for multiple threads to call a method on the same instance concurrently.
private static Hashtable<String, Integer> hashtable = new Hashtable<>();

// Multiple threads call the test() method
public static void test() {
    // hashtable.put() is atomic and thread-safe
    hashtable.put("TEST", 200);
}
- Each of their methods is atomic, but a combination of their methods is not (a context switch can occur between one call and the next).
private static Hashtable<String, Integer> hashtable = new Hashtable<>();

// Multiple threads call the test() method
public static void test() {
    // hashtable.get() is atomic and thread-safe
    if (hashtable.get("TEST") == null) {
        // hashtable.put() is atomic and thread-safe, but the get-then-put combination is not
        hashtable.put("TEST", 200);
    }
}
- Similarly, classes such as String are immutable, and that immutability is what makes them thread-safe.
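As a sketch (not from the original text) of how the composite operation above can be made atomic: either use a single atomic call such as ConcurrentHashMap.putIfAbsent(), or wrap the check-then-act sequence in a synchronized block on the shared map.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class AtomicCompositeDemo {
    private static final Map<String, Integer> map = new ConcurrentHashMap<>();

    // Option 1: one atomic call replaces the get-then-put combination
    public static void putIfMissingAtomic() {
        map.putIfAbsent("TEST", 200);
    }

    // Option 2: make the composite operation atomic with a synchronized block
    public static void putIfMissingSynchronized() {
        synchronized (map) {
            if (map.get("TEST") == null) {
                map.put("TEST", 200);
            }
        }
    }
}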
1.3. Monitor
1.3.1. Java object headers
Because Java is object-oriented, the JVM stores a large number of objects. To support extra functionality, the JVM attaches some metadata fields to every object; these fields make up the object header.
[Object header] (Using a 32-bit VM as an example)
- Common objects:
|--------------------------------------------------------|
| Object Header (64 bits) |
|---------------------------|----------------------------|
| Mark Word (32 bits) | Klass Word (32 bits) |
|---------------------------|----------------------------|
- Array objects:
|---------------------------------------------------------------------------------------|
| Object Header (96 bits) |
|---------------------------|----------------------------|------------------------------|
| Mark Word (32 bits) | Klass Word (32 bits) | array length (32 bits) |
|---------------------------|----------------------------|------------------------------|
[Composition of the object header]
The Mark Word stores runtime data of the object itself, such as the hash code and the GC generational age. Its length is one machine word: 32 bits on a 32-bit JVM and 64 bits on a 64-bit JVM. To pack more information into one word, the JVM uses the lowest two bits of the word as the lock-state flag.
|-------------------------------------------------------------------|--------------------|
|                         Mark Word (32 bits)                       |       State        |
|-------------------------------------------------------------------|--------------------|
| identity_hashcode:25        | age:4 | biased_lock:1 | lock:2 (01)  | Normal (unlocked)  |
|-------------------------------------------------------------------|--------------------|
| thread:23 | epoch:2         | age:4 | biased_lock:1 | lock:2 (01)  | Biased lock        |
|-------------------------------------------------------------------|--------------------|
| ptr_to_lock_record:30                               | lock:2 (00)  | Lightweight lock   |
|-------------------------------------------------------------------|--------------------|
| ptr_to_heavyweight_monitor:30                       | lock:2 (10)  | Heavyweight lock   |
|-------------------------------------------------------------------|--------------------|
|                                                     | lock:2 (11)  | Marked for GC      |
|-------------------------------------------------------------------|--------------------|
- lock: the lock-state flag bits. They exist so that as much information as possible fits in as few bits as possible; the meaning of the rest of the Mark Word depends on the value of this flag.
- identity_hashcode: the object's identity hash code, filled in lazily. It is computed when System.identityHashCode() is called and the result is written into the object header. When the object is locked, the value is moved into the Monitor.
- age: the GC generational age of the object, at most 15.
- biased_lock: whether biased locking is enabled for the object; 1 means enabled, 0 means disabled.
- thread: the ID of the thread that holds the biased lock.
- epoch: the bias epoch (timestamp).
- ptr_to_lock_record: a pointer to the lock record in the thread's stack.
- ptr_to_heavyweight_monitor: a pointer to the heavyweight Monitor.
1.3.2. The Monitor
A Monitor (also called a monitor lock) provides a mechanism for a thread to temporarily give up mutually exclusive access, wait for a certain condition to be met, and then regain execution rights and resume its mutually exclusive access.
Each Java object can be associated with a Monitor object, and if the object is locked (heavyweight) with synchronized, a pointer to the Monitor object is set in the Mark Word of the object’s header.
Composition and Operation of Monitor
- At first, the Owner of the Monitor is null.
- When Thread-2 executes synchronized(obj), the lock state in obj's object header changes to heavyweight lock, the ptr_to_heavyweight_monitor field in the object header points to the Monitor object, and the Monitor's Owner is set to Thread-2. A Monitor can have only one Owner at a time.
- After Thread-2 has locked obj, if Thread-3, Thread-4, and Thread-5 also execute synchronized(obj), they find that the Monitor's Owner is not null, so they enter the EntryList, wait for the lock to be released, and become BLOCKED.
- When Thread-2 finishes executing the synchronized block, it wakes up the waiting threads in the EntryList, which then compete for the lock unfairly.
- Thread-1 in the WaitSet is a thread that previously acquired the lock but whose condition was not met, so it entered the WAITING state; wait/notify is analyzed later.
Note:
- synchronized must be used on the same object for threads to enter the same Monitor and get the behavior described above (see the sketch below).
- An object that is never locked with synchronized is not associated with a Monitor and does not follow the rules above.
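A minimal sketch of the first point (the class name and counts are illustrative): each thread synchronizes on a different object, so the two Monitors are unrelated and the critical sections do not exclude each other.

class WrongLockDemo {
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        // Each thread locks a DIFFERENT object, so the two Monitors are unrelated
        // and count++ / count-- still race; the final value is usually not 0.
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 5000; i++) {
                synchronized (new Object()) { count++; }
            }
        });
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 5000; i++) {
                synchronized (new Object()) { count--; }
            }
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count); // usually != 0
    }
}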
1.4. Principle of synchronized
There are four lock states: unlocked, biased lock, lightweight lock, and heavyweight lock. As contention grows, a lock can be upgraded from biased to lightweight to heavyweight (upgrades only, never downgrades).
1.4.1. Lightweight locks
Usage scenario: if an object is accessed by multiple threads but the accesses are staggered in time (i.e. there is no actual contention), lightweight locking can be used as an optimization.
private static final Object obj = new Object();

public static void method1() {
    synchronized (obj) { // 1.
        method2();
    }                    // 4.
}

private static void method2() {
    synchronized (obj) { // 2.
    }                    // 3.
}
Suppose the main thread calls method1():
- method1() tries to lock the lock object:
  - The VM creates a lock record (Lock Record) in the current stack frame of the main thread. It contains the lock record's own address plus the lock bits, and a pointer to the lock object.
  - It then tries to CAS-swap the lock record's address with the lock object's Mark Word. Because the lock object is in the unlocked state (01), the swap succeeds and the state becomes lightweight lock (00).
- method2() tries to lock the lock object: the lock object is already in the lightweight-lock state (00) and its Mark Word points into the current thread (the main thread), so this is a lock re-entry. A new lock record with a null address is added just to count the re-entry.
- method2() tries to unlock the lock object: the lock record's address is null, which indicates a re-entry, so the lock record is simply removed, decreasing the re-entry count by one.
- method1() tries to unlock the lock object: the lock record's address is not null, so CAS is used to restore the saved Mark Word back into the object header. After the swap succeeds, the state is unlocked (01) again.
1.4.2. Heavyweight locks
If the CAS fails while trying to apply a lightweight lock because another thread has already applied a lightweight lock to the object (i.e. there is contention), lock inflation is performed and the lightweight lock becomes a heavyweight lock.
private static final Object obj = new Object();

public static void method() {
    synchronized (obj) { // 1.
    }                    // 2.
}
Suppose both the main thread and another thread call method():
- The other thread tries to apply a lightweight lock to the lock object: the attempt fails because the main thread has already applied a lightweight lock to it. The thread then enters the lock-inflation process:
  - It requests a Monitor for the lock object.
  - It points the lock object's Mark Word at this Monitor and sets the state to heavyweight lock (10).
  - The Monitor's Owner is set to the thread currently holding the lightweight lock (the main thread).
  - The other thread enters the Monitor's EntryList and becomes BLOCKED.
- The main thread tries to perform a lightweight unlock on the lock object: the CAS that restores the Mark Word into the object header fails, because the Mark Word now points to a Monitor. It therefore switches to the heavyweight unlock process: it finds the Monitor object through the Monitor address in the Mark Word, sets the Owner to null, and wakes up the BLOCKED threads in the EntryList so they can compete for the lock.
1.4.3. Biased locking
Even without contention (only one thread), a lightweight lock still needs a CAS operation every time it re-enters.
Java 6 introduced biased locking as a further optimization: CAS is used only the first time, to write the thread ID into the object's Mark Word. Afterwards, if the thread finds that the Mark Word already holds its own ID, there is no contention and no further CAS is needed. The object then stays biased toward that thread as long as no contention occurs.
[Object creation]
- If biased locking is enabled (enabled by default) : After the object is created, the Mark Word value is 0x05 (the last three bits are 101), and Thread, epoch, and age are 0.
public static void main(String[] args) {
    Object obj = new Object(); // Biased by default, but thread is 0, i.e. not yet associated with any thread
    // obj.hashCode(); // If executed, the bias is revoked: the last 3 bits of the Mark Word become 001 and the hashcode is filled in, which effectively disables biased locking for this object
    synchronized (obj) { // Holds the biased lock; thread now has a value and is associated with the current thread
    } // Still biased; thread keeps its value, i.e. the object stays biased toward the current thread
}
- If biased locking is disabled: After the object is created, the Mark Word value is 0x01 (the last three bits are 001), and hashCode (assigned only when used for the first time) and age are 0.
public static void main(String[] args) {
    Object obj = new Object(); // Unlocked state
    synchronized (obj) { // A lightweight lock is added
    } // Unlocked state again (the lightweight lock is released)
}
[Biased lock revocation]
- hashCode(): calling hashCode() on a biased-lock object revokes the bias, because the Mark Word of a biased object stores the thread ID and has no room for the identity hash code:
  - A lightweight lock stores the hashCode in the lock record.
  - A heavyweight lock stores the hashCode in the Monitor.
- Preemption by another thread: when another thread uses the biased-lock object, the lock is upgraded:
  - If the thread holding the lock has not yet finished the synchronized block: biased lock -> heavyweight lock.
  - If the thread holding the lock has already finished the synchronized block: biased lock -> lightweight lock.
- wait()/notify(): calling wait() or notify() on the object upgrades biased lock -> heavyweight lock, because these operations only work with a heavyweight lock (Monitor).
[Batch rebias and batch revocation]
- Batch rebias: if objects of a class are accessed by multiple threads but never contended, an object biased toward thread T1 can still be rebiased toward thread T2; rebiasing resets the thread ID in the object. When the number of bias revocations for a class exceeds the threshold of 20, the JVM rebiases such objects to the locking thread the next time they are locked.
- Batch revocation: when the number of bias revocations for a class exceeds the threshold of 40, the JVM makes the whole class non-biasable, so all of its objects, including newly created ones, are no longer biased.
1.4.4. Spin optimization
Spin can also be used to optimize for heavyweight lock contention, so that if the current thread spins successfully (i.e. the thread holding the lock has exited the block and released the lock), then the current thread can avoid blocking (thus avoiding context switching). If the spin times out, the spin fails and the current thread is blocked.
⭐️ Note:
- Spinning consumes CPU time: on a single-core CPU it is pure waste, while on a multi-core CPU it can pay off.
- Since Java 6, spinning is adaptive: for example, if a spin on an object has just succeeded, the JVM assumes the chance of success is high and allows more spins; otherwise it spins less or not at all.
- Since Java 7, spinning cannot be switched on or off manually.
1.4.5. Synchronization elimination
If it can be determined that a lock object never escapes its thread, synchronization on it can be eliminated, since a single thread has no lock contention with itself. (In other words, when the lock object used inside the synchronized block cannot escape the thread, the synchronized block can be removed.)
public static void alloc() {
    byte[] b = new byte[2];
    // b never escapes the thread, so the synchronization on it can be removed
    // With synchronization elimination enabled, the loop below takes about 10 ms
    // With synchronization elimination disabled, it takes about 3870 ms
    synchronized (b) {
        b[0] = 1;
    }
}

public static void main(String[] args) {
    for (int i = 0; i < 100000000; i++) {
        alloc();
    }
}
1.5. wait() and notify()
1.5.1. Principle
If the Owner thread finds that its condition is not met, it calls wait(), enters the WaitSet, and becomes WAITING.
- Both BLOCKED and WAITING threads are in a blocked state and do not consume CPU time slices.
- A BLOCKED thread is woken up when the Owner thread releases the lock; a WAITING thread is woken up when the Owner thread calls notify() or notifyAll(), but being woken up does not mean it gets the lock immediately: it re-enters the EntryList and competes for the lock again.
1.5.2. Differences between sleep() and wait()
- sleep() is a method of Thread, while wait() is a method of Object.
- sleep() does not have to be used together with synchronized, but wait() must be used inside a synchronized block.
- sleep() does not release the object lock while sleeping, but wait() releases the object lock while waiting.
- With sleep() the thread state is TIMED_WAITING; with wait() it is WAITING (or TIMED_WAITING for wait(timeout)).
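A minimal sketch of the lock-release difference (class and variable names are illustrative): while the first thread sits in wait(3000), the second thread can enter the synchronized block immediately; replace the wait with Thread.sleep(3000) and the second thread has to wait the full 3 seconds.

public class SleepVsWait {
    private static final Object lock = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock) {
                try {
                    // wait() releases the lock here, so the other thread prints almost
                    // immediately; Thread.sleep(3000) would keep holding the lock.
                    lock.wait(3000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lock) {
                System.out.println("got the lock");
            }
        }).start();
    }
}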
1.5.3. Correct use of wait() and notify()
// Waiting thread
synchronized (lock) {
    while (!conditionMet) { // conditionMet stands for the work / wake-up condition
        lock.wait();
    }
    // TODO: perform the work
}

// Another thread
synchronized (lock) {
    // TODO: change the work / wake-up condition (set conditionMet = true)
    lock.notifyAll(); // Wake up the wait()-ing threads
}
1.5.4. Synchronous pattern – Guarded suspension
One thread waits for a result produced by another thread. The difference from join() is that join() waits for the other thread to finish completely, whereas here the waiting thread only needs the result.
public class GuardedObject<T> {
    private T data;

    // Get the data
    public T get() {
        synchronized (this) {
            while (data == null) {
                try {
                    this.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            return data;
        }
    }

    // Get the data (with a timeout)
    public T get(long timeout) {
        synchronized (this) {
            // Time the waiting started
            long beginTime = System.currentTimeMillis();
            // Time already spent waiting
            long passedTime = 0;
            while (data == null) {
                // Time still left to wait
                long waitTime = timeout - passedTime;
                // Exit the loop if we have already timed out
                if (waitTime <= 0) break;
                try {
                    this.wait(waitTime);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                // Recompute how long we have waited
                passedTime = System.currentTimeMillis() - beginTime;
            }
            return data;
        }
    }

    // Set the result
    public void complete(T data) {
        synchronized (this) {
            this.data = data;
            this.notifyAll();
        }
    }
}
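A usage sketch for GuardedObject (the thread bodies and values are illustrative): one thread blocks in get() until another thread delivers the result via complete().

public class GuardedObjectDemo {
    public static void main(String[] args) {
        GuardedObject<String> guarded = new GuardedObject<>();

        // Consumer: waits at most 5 s for the result
        new Thread(() -> {
            String result = guarded.get(5000);
            System.out.println("got: " + result);
        }).start();

        // Producer: delivers the result after 1 s
        new Thread(() -> {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            guarded.complete("hello");
        }).start();
    }
}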
1.5.5. Asynchronous mode – Message queuing
- Unlike GuardedObject in the guarded-suspension pattern, there is no one-to-one correspondence between the thread that produces a result and the thread that consumes it.
- A message queue can be used to balance the resources of producer and consumer threads.
- The producer is only responsible for producing the resulting data and does not care how the data is processed, while the consumer concentrates on processing the resulting data.
- Message queues are limited in capacity; when full, no more data is added to them, and when empty, no more data is consumed.
- This is the pattern used in various blocking queues in the JDK.
import java.util.LinkedList;

public class MessageQueue<T> {
    private final LinkedList<T> queue = new LinkedList<>();
    private final int capacity;

    public MessageQueue(int capacity) {
        this.capacity = capacity;
    }

    // Take a message from the queue
    public T take() {
        synchronized (queue) {
            while (queue.isEmpty()) {
                try {
                    queue.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            queue.notifyAll();
            return queue.removeFirst();
        }
    }

    // Put a message into the queue
    public void put(T data) {
        synchronized (queue) {
            while (queue.size() >= capacity) {
                try {
                    queue.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            queue.addLast(data);
            queue.notifyAll();
        }
    }
}
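A usage sketch for MessageQueue (thread bodies and capacity are illustrative): producers block in put() when the queue is full, and the consumer blocks in take() when it is empty.

public class MessageQueueDemo {
    public static void main(String[] args) {
        MessageQueue<Integer> mq = new MessageQueue<>(2);

        // Three producers: the third put() blocks until the consumer takes a message
        for (int i = 0; i < 3; i++) {
            int id = i;
            new Thread(() -> {
                mq.put(id);
                System.out.println("produced " + id);
            }).start();
        }

        // One slow consumer
        new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("consumed " + mq.take());
            }
        }).start();
    }
}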
1.6. park() and unpark()
LockSupport.park() and LockSupport.unpark() suspend and resume threads. A parked thread is in the WAITING state.
LockSupport.park();         // Checks for a "permit"; if there is none, the current thread is suspended (WAITING)
LockSupport.unpark(thread); // Issues a permit to the target thread (unpark() takes the Thread as an argument) so it continues running (RUNNABLE)
If unpark() is called first, the subsequent park() does not suspend the thread.
LockSupport.unpark(thread); // Issues the permit in advance
LockSupport.park();         // The permit is already available, so the thread is not suspended (stays RUNNABLE)
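A runnable sketch putting park() and unpark() together (the 1-second sleep is only there to make the ordering visible):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            System.out.println("parking...");
            LockSupport.park();      // no permit yet -> the worker becomes WAITING
            System.out.println("resumed");
        });
        worker.start();

        Thread.sleep(1000);
        LockSupport.unpark(worker);  // hand the permit to the worker -> RUNNABLE again
    }
}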
1.7. Thread liveness
1.7.1. Deadlock
A deadlock can occur when threads each need to acquire more than one lock at the same time.
Thread t1 acquires the lock on object A and then wants the lock on object B; thread t2 acquires the lock on object B and then wants the lock on object A.
static final Object A = new Object();
static final Object B = new Object();

Thread t1 = new Thread(() -> {
    synchronized (A) {
        try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); }
        synchronized (B) {
        }
    }
});
Thread t2 = new Thread(() -> {
    synchronized (B) {
        try { Thread.sleep(500); } catch (InterruptedException e) { e.printStackTrace(); }
        synchronized (A) {
        }
    }
});
1.7.2. Livelock
A live lock occurs when two threads change each other’s termination conditions so that neither thread can terminate.
Making the two threads sleep for random, slightly different amounts of time breaks the pattern and prevents the livelock (see the sketch after the example below).
static volatile int count = 10; // a field, so that both lambdas can modify it

Thread t1 = new Thread(() -> {
    // Expects to exit when count drops to 0
    while (count > 0) {
        try { Thread.sleep(200); } catch (InterruptedException e) { e.printStackTrace(); }
        count--;
        System.out.println("t1: " + count);
    }
});
Thread t2 = new Thread(() -> {
    // Expects to exit when count reaches 20
    while (count < 20) {
        try { Thread.sleep(200); } catch (InterruptedException e) { e.printStackTrace(); }
        count++;
        System.out.println("t2: " + count);
    }
});
1.7.3. Starvation
A thread whose priority is too low may never be scheduled by the CPU, so it can never finish its work.
Starvation is not easy to demonstrate; it comes up again when read-write locks are discussed.
1.8. ReentrantLock
Reentrant means that the thread already holding the lock can acquire it again without being blocked by its own lock.
1.8.1. API
- new ReentrantLock(): creates a reentrant lock (unfair lock).
- new ReentrantLock(true): creates a reentrant lock (fair lock).
- lock.lock(): acquires the lock (not interruptible); on failure the thread blocks until it succeeds.
- lock.lockInterruptibly(): acquires the lock (interruptible); on failure the thread blocks until it succeeds or is interrupted.
- lock.tryLock(): tries to acquire the lock and returns true immediately on success, false immediately on failure; the tryLock(timeout, unit) overload waits up to the timeout and can be interrupted.
- lock.unlock(): releases the lock.
- lock.newCondition(): creates a condition variable.
1.8.2. Usage
ReentrantLock reentrantLock = new ReentrantLock();
// Acquire the lock
reentrantLock.lock();
// Acquired successfully
try {
    // Critical section
} finally {
    // Release the lock no matter what happens
    reentrantLock.unlock();
}
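A sketch of the tryLock(timeout) variant listed in the API section (class and method names are illustrative): the thread waits up to one second for the lock and gives up instead of blocking indefinitely.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void doWork() throws InterruptedException {
        // Wait at most 1 second for the lock instead of blocking forever
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // Critical section
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("could not acquire the lock, giving up");
        }
    }
}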
1.8.3. Condition variables
- new ReentrantLock(): creates a reentrant lock.
- reentrantLock.newCondition(): creates a condition variable (can be called multiple times to create several condition variables).
- condition.await(): associates the current thread with this condition and enters its wait set.
- condition.signal(): wakes up one thread associated with this condition.
- condition.signalAll(): wakes up all threads associated with this condition.
public static void main(String[] args) throws InterruptedException {
    ReentrantLock reentrantLock = new ReentrantLock();
    // Condition variables
    Condition condition1 = reentrantLock.newCondition();
    Condition condition2 = reentrantLock.newCondition();

    Thread t1 = new Thread(() -> {
        reentrantLock.lock();
        try {
            // Associates thread t1 with condition1 and waits in its wait set
            condition1.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            reentrantLock.unlock();
        }
    });

    Thread t2 = new Thread(() -> {
        reentrantLock.lock();
        try {
            // Associates thread t2 with condition2 and waits in its wait set
            condition2.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            reentrantLock.unlock();
        }
    });

    t1.start();
    t2.start();

    Thread.sleep(1000);
    reentrantLock.lock();
    try {
        // Wake up one thread associated with condition1
        condition1.signal();
    } finally {
        reentrantLock.unlock();
    }

    Thread.sleep(1000);
    reentrantLock.lock();
    try {
        // Wake up all threads associated with condition2
        condition2.signalAll();
    } finally {
        reentrantLock.unlock();
    }
}
2. Optimistic locking (non-blocking)
Every time data is manipulated, it is assumed that no other thread is competing to modify it, so no lock is taken. If the operation succeeds, fine; if it fails, the thread does not block but relies on some compensation mechanism instead (typically retrying in a loop).
2.1. The CAS instruction
The CAS (compare-and-swap, also compare-and-set) instruction is guaranteed atomic directly by the CPU. It has three operands: the memory value V, the expected value A, and the new value B. The memory value V is changed to B if and only if V equals the expected value A; otherwise nothing is done.
CAS stands for lock-free, non-blocking concurrency:
- synchronized is not used, so threads are not blocked (fewer context switches), which improves efficiency.
- However, if contention is fierce, retries become frequent, which reduces efficiency.
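A minimal sketch of the retry (compensation) pattern using AtomicInteger.compareAndSet() (the withdraw example is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasRetryDemo {
    private static final AtomicInteger balance = new AtomicInteger(100);

    // Lock-free update: read the current value, compute the new value,
    // and retry if another thread changed the balance in between.
    public static void withdraw(int amount) {
        while (true) {
            int prev = balance.get();
            int next = prev - amount;
            if (balance.compareAndSet(prev, next)) {
                break;
            }
            // CAS failed: another thread won the race, loop and try again
        }
    }

    public static void main(String[] args) {
        withdraw(10);
        System.out.println(balance.get()); // 90
    }
}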
2.2. Atomic classes
AtomicBoolean, AtomicInteger, AtomicLong.
2.2.1. Principle
Take AtomicInteger's incrementAndGet() method, which implements the ++i operation:
// AtomicInteger.java
public final int incrementAndGet() {
    return U.getAndAddInt(this, VALUE, 1) + 1;
}

// Unsafe.java
@HotSpotIntrinsicCandidate
public final int getAndAddInt(Object o, long offset, int delta) {
    int v;
    // Keep trying until weakCompareAndSetInt returns true
    do {
        // Read the current (latest) value
        v = getIntVolatile(o, offset);
    } while (
        // Atomic update:
        // checks whether the value in memory is still equal to v;
        // if equal, no other thread has modified it, so it is updated to v + delta and true is returned;
        // if not equal, false is returned and the loop retries.
        !weakCompareAndSetInt(o, offset, v, v + delta)
    );
    return v;
}

@HotSpotIntrinsicCandidate
public final boolean weakCompareAndSetInt(Object o, long offset, int expected, int x) {
    return compareAndSetInt(o, offset, expected, x);
}

// Native method: atomicity is guaranteed directly by the CAS machine instruction
@HotSpotIntrinsicCandidate
public final native boolean compareAndSetInt(Object o, long offset, int expected, int x);
2.2.2. The ABA problem
[What is the ABA problem?]
When executing CAS, a thread can only check whether the shared variable's value still equals the initially read value A; it cannot detect that the value changed from A to B and back to A in the meantime.
import java.util.concurrent.atomic.AtomicReference;

public class ThreadTest {
    private static final AtomicReference<String> reference = new AtomicReference<>("A");

    public static void main(String[] args) throws InterruptedException {
        String prev = reference.get();
        System.out.println("Original value: " + prev);
        // The child thread changes A to B and then back to A
        new Thread(() -> {
            System.out.println("Child thread A->B: " + reference.compareAndSet(reference.get(), "B"));
            System.out.println("Child thread B->A: " + reference.compareAndSet(reference.get(), "A"));
        }).start();
        Thread.sleep(1000);
        // The main thread is not aware of the change
        System.out.println("Main thread A->C: " + reference.compareAndSet(prev, "C"));
    }
}
Original value: A
Child thread A->B: true
Child thread B->A: true
Main thread A->C: true
[Solving the ABA problem with a version counter: AtomicStampedReference]
import java.util.concurrent.atomic.AtomicStampedReference;

public class ThreadTest {
    private static final AtomicStampedReference<String> reference = new AtomicStampedReference<>("A", 0);

    public static void main(String[] args) throws InterruptedException {
        String prev = reference.getReference();
        int prevStamp = reference.getStamp();
        System.out.println("Original value: " + prev);
        // The child thread changes A to B and then back to A, bumping the stamp each time
        new Thread(() -> {
            System.out.println("Child thread A->B: " + reference.compareAndSet(reference.getReference(), "B", reference.getStamp(), reference.getStamp() + 1));
            System.out.println("Child thread B->A: " + reference.compareAndSet(reference.getReference(), "A", reference.getStamp(), reference.getStamp() + 1));
        }).start();
        Thread.sleep(1000);
        // The main thread sees that the stamp has changed, so its CAS fails
        System.out.println("Main thread A->C: " + reference.compareAndSet(prev, "C", prevStamp, prevStamp + 1));
    }
}
Original value: A
Child thread A->B: true
Child thread B->A: true
Main thread A->C: false
[Solving the ABA problem with a boolean mark: AtomicMarkableReference]
import java.util.concurrent.atomic.AtomicMarkableReference;

public class ThreadTest {
    private static final AtomicMarkableReference<String> reference = new AtomicMarkableReference<>("A", false);

    public static void main(String[] args) throws InterruptedException {
        String prev = reference.getReference();
        boolean marked = reference.isMarked();
        System.out.println("Original value: " + prev);
        // The child thread changes A to B and then back to A, setting the mark
        new Thread(() -> {
            System.out.println("Child thread A->B: " + reference.compareAndSet(reference.getReference(), "B", reference.isMarked(), true));
            System.out.println("Child thread B->A: " + reference.compareAndSet(reference.getReference(), "A", reference.isMarked(), true));
        }).start();
        Thread.sleep(1000);
        // The main thread sees that the mark has changed, so its CAS fails
        System.out.println("Main thread A->C: " + reference.compareAndSet(prev, "C", marked, !marked));
    }
}
Original value: A
Child thread A->B: true
Child thread B->A: true
Main thread A->C: false