Java is a naturally concurrent language. Multithreading brings higher efficiency, but it also brings data-safety problems. These problems are generally divided into three kinds: atomicity, visibility, and ordering. Atomicity means that a sequence of operations either all execute or none of them do. Visibility means that when one thread changes a shared variable, other threads can see the change promptly. Ordering refers to the fact that the JVM may reorder instructions for performance: within one thread the operations appear ordered, but observed from another thread they may appear out of order. To solve these problems, Java provides locks to protect shared memory that multiple threads read and write concurrently. Locks in Java fall into two broad categories: the “synchronized” keyword and the locks in “JUC” (the java.util.concurrent package), plus the CAS-based classes in java.util.concurrent.atomic.
Atomicity, visibility, and ordering
Atomicity
Atomicity means that a series of operations forms a whole: either all of them execute or none of them do; otherwise data-safety problems arise. Here’s an example:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class AtomicityViolation {
    static long counter = 0L;
    static ExecutorService executorService = Executors.newFixedThreadPool(10);

    public static void main(String[] args) {
        violateAtomicity();
    }

    static void violateAtomicity() {
        CountDownLatch latch = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            executorService.submit(new Runnable() {
                @Override
                public void run() {
                    for (int j = 0; j < 10000; j++) {
                        counter++;
                    }
                    latch.countDown();
                }
            });
        }
        try {
            latch.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(counter);
        executorService.shutdown();
    }
}
In the example above, we start 10 threads, each of which increments a shared counter 10,000 times. If there were no safety issue, we would expect the result to be 100,000, but that is not the case: the result differs on each run and is always less than or equal to 100,000. Why? Because counter++ is not an atomic operation; it decomposes into a read, an increment, and a write, and the steps of two threads may interleave. The Java memory model defines eight atomic operations: lock, unlock, read, load, use, assign, store, and write. To make counter++ behave atomically, we must lock it:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class SafeCounter {
    static long counter = 0L;
    static ExecutorService executorService = Executors.newFixedThreadPool(10);

    public static void main(String[] args) throws InterruptedException {
        safeCount();
    }

    static void safeCount() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            executorService.submit(() -> {
                for (int j = 0; j < 10000; j++) {
                    synchronized (SafeCounter.class) {
                        counter++;
                    }
                }
                latch.countDown();
            });
        }
        latch.await();
        System.out.println(counter); // always prints 100000
        executorService.shutdown();
    }
}
With the lock in place we get the expected result. Note that we would not normally write the synchronized block inside the loop body; the lock should be taken outside the loop. It is written this way here only to make the single atomic operation explicit.
Visibility
The Java memory model specifies that each Java thread has its own working memory, which is private to the thread, while main memory is shared by all threads. A thread’s working memory holds copies of the shared variables it uses. When a thread writes to a shared variable, it writes to its private working memory first and only later flushes the value to main memory. This can cause a problem: thread 1 changes the value of a shared variable, and thread 2 reads the variable before the change has been flushed to main memory, so thread 2 does not see thread 1’s update. This is the visibility problem of multithreaded concurrency.
The visibility problem can be solved in Java by declaring a variable volatile. When a thread reads a volatile variable, the JMM forces it to read from main memory; when it writes one, the JMM forces the value to be flushed to main memory immediately. Writes made inside a synchronized block are likewise visible to threads that subsequently acquire the same lock. Volatile addresses visibility and ordering but does not guarantee the atomicity of compound operations such as counter++; a lock is required for that. One caveat is worth noting: reads and writes of volatile long and double fields are guaranteed to be atomic, whereas reads and writes of non-volatile long and double fields may be split into two 32-bit halves. In that narrow sense, volatile does solve an atomicity problem.
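A minimal sketch of the visibility guarantee, using an illustrative stop-flag class (the class and method names are ours): the main thread’s write to the volatile flag is guaranteed to become visible to the worker, so the loop terminates. Without volatile, the worker may keep a cached copy of the flag and spin forever.

```java
public class VolatileFlagDemo {
    private static volatile boolean running = true;

    static boolean runAndStop() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // spin; each iteration performs a fresh volatile read
            }
        });
        worker.start();
        Thread.sleep(50);   // let the worker enter the loop
        running = false;    // volatile write: flushed to main memory immediately
        worker.join(1000);  // the worker observes the write and exits
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + runAndStop());
    }
}
```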
Ordering
Viewed from within a thread, all of its operations appear to execute in order; observed from another thread, they may appear out of order. The root causes are “instruction reordering” and “the delay in synchronizing working memory with main memory”. In Java, volatile ensures ordering by inserting memory barriers that forbid certain instruction reorderings.
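A classic place where ordering matters is double-checked locking. The sketch below (our illustrative Singleton class) relies on volatile to forbid reordering the constructor’s writes with the publication of the reference; without volatile, another thread could observe a non-null but half-constructed object.

```java
public class Singleton {
    // volatile forbids reordering the constructor's writes with the
    // publication of the reference in getInstance()
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```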
Locks in Java
Above, we introduced the data-safety issues of multithreaded concurrency: atomicity, visibility, and ordering. Locks in Java exist to guarantee these three properties. They fall into two broad categories: synchronized locks and the Lock implementations in the JUC package.
Synchronized lock
Synchronized locking mode
Synchronized is a Java keyword implemented by the JVM, and it can be used in two ways: applied to a method or to a code block. Applied to a method:
synchronized void foo() {
//...
}
When applied to a non-static method, it locks the current instance; when applied to a static method, it locks the Class object of the current class. Applied to a code block:
void foo() {
    synchronized (lock) {
        //...
    }
}
For a synchronized block, the lock is the object referred to by ‘lock’.
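The three forms and the objects they lock can be summarized in one illustrative class (all names here are ours):

```java
public class SyncForms {
    private final Object lock = new Object();
    private int value;

    // locks `this`, the current instance
    synchronized int incInstance() {
        return ++value;
    }

    // locks the Class object SyncForms.class
    static synchronized String whoAmI() {
        return "class lock";
    }

    // locks whatever object the `lock` field refers to
    int incBlock() {
        synchronized (lock) {
            return ++value;
        }
    }
}
```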
Synchronized lock feature
Synchronized locks are built-in locks provided by the JVM. They are unfair and blocking, and do not support interrupting a lock request. A synchronized lock is reentrant: the same thread can acquire the lock on an object again, without blocking, while it already holds it. The lock state is stored in the object header (an object’s layout in the JVM consists of the object header, instance data, and alignment padding). Synchronized supports three lock states: biased, lightweight, and heavyweight.
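Reentrancy can be demonstrated in a few lines (the class name is illustrative). Both methods lock the same monitor; if synchronized were not reentrant, outer() would deadlock against itself when calling inner():

```java
public class ReentrancyDemo {
    // both methods lock ReentrancyDemo.class; outer() re-enters the lock
    // it already holds when it calls inner()
    static synchronized int inner() {
        return 1;
    }

    static synchronized int outer() {
        return inner() + 1; // no deadlock: the monitor is reentrant
    }

    public static void main(String[] args) {
        System.out.println(outer()); // 2
    }
}
```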
Biased locking
In most cases a lock is not only uncontended but is acquired repeatedly by the same thread. The purpose of biased locking is to eliminate the cost of re-acquisition (a CAS) once a thread has obtained the lock: the lock becomes “biased” toward that thread. The JVM also optimizes for the case where multiple threads use a lock but never actually contend for it. This sounds contrived, but it does occur in real applications: besides mutual exclusion, threads can have synchronization (happens-before) relationships, and two threads that synchronize in series may take turns acquiring the same shared object’s lock without ever conflicting. For this case the JVM uses an epoch value to represent the “timestamp” of a biased lock (actually generating a timestamp is expensive, so think of it as a timestamp-like identifier).
- Biased lock acquisition: when a thread enters a synchronized block and acquires the lock, it stores the lock-owning thread ID both in the object header and in a lock record in its stack frame. Afterwards, on entering and exiting the block, the thread does not need a CAS to lock and unlock; it simply tests whether the object header’s Mark Word still points to it. If the test succeeds, the thread holds the lock. If it fails, the thread checks whether the biased-lock flag in the Mark Word is set to 1 (meaning the lock is currently biased). If not, it competes for the lock with a CAS; if so, it uses a CAS to try to point the object header’s bias at itself.
- Biased lock revocation: a biased lock is released only when another thread contends for it; the holder does not release it proactively. Revocation must wait for a global safepoint (a point at which no bytecode is executing). The JVM first suspends the thread that holds the biased lock, then checks whether that thread is still alive. If it is not, the object header is reset to the lock-free state. If it is, the stack that holds the biased lock may still be executing: the JVM walks the lock records in that stack and the Mark Word in the object header, and either rebias the lock to another thread, revert it to the lock-free state, or mark the object as unsuitable for biasing. Finally the suspended thread is resumed.
- Biased lock settings: biased locking is enabled by default in Java 6 and Java 7, but it is only activated a few seconds after application startup. If necessary, the delay can be removed with the JVM flag -XX:BiasedLockingStartupDelay=0. If you are sure that all locks in your application are normally contended, you can disable biased locking entirely with -XX:-UseBiasedLocking, in which case the JVM uses lightweight locking by default.
Lightweight locks and heavyweight locks
- Lightweight locks: before executing a synchronized block, the JVM creates space for a lock record in the current thread’s stack frame and copies the object header’s Mark Word (officially, the Displaced Mark Word) into that record. The thread then uses a CAS to try to replace the Mark Word in the object header with a pointer to the lock record. If this succeeds, the thread has acquired the lock; if it fails, the thread spins, retrying the acquisition. If spinning also fails, other threads are contending for the lock (two or more threads competing for the same lock), and the lightweight lock inflates into a heavyweight lock. Unlocking: the thread uses an atomic CAS to write the Displaced Mark Word back into the object header. If this succeeds, the synchronized section is complete; if it fails, another thread has contended for the lock, and the suspended threads must be woken as the lock is released.
- Heavyweight locks: a heavyweight lock is also called the object’s monitor in the JVM. It is similar to a Mutex in C: besides providing mutual exclusion (a 0|1 mutex), it also implements semaphore-like functionality. It contains at least one queue of threads competing for the lock (the entry queue) and one blocking queue for signaling (the wait queue); the former is used for mutual exclusion and the latter for thread synchronization.
Locks provided by the “JUC” framework
The java.util.concurrent.locks package includes ReentrantLock, ReentrantReadWriteLock, and StampedLock. The java.util.concurrent.atomic package provides classes for optimistic locking based on “CAS” (Compare And Set).
ReentrantLock
As the name implies, ReentrantLock is a reentrant lock. Compared with synchronized, it additionally supports interruptible lock acquisition, fair locking, and other features. The main methods in the ReentrantLock source are:
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

public void lock() {
    sync.lock();
}

public void lockInterruptibly() throws InterruptedException {
    sync.acquireInterruptibly(1);
}

public boolean tryLock() {
    return sync.nonfairTryAcquire(1);
}

public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {
    return sync.tryAcquireNanos(1, unit.toNanos(timeout));
}
The lock() method acquires the lock, blocking until it is available. The lockInterruptibly() method also acquires the lock, but the waiting thread can be interrupted. The tryLock() method never blocks: if the lock has not been acquired by another thread, the current thread acquires it and the method returns true; if the lock is already held by another thread, the method returns false immediately. tryLock(long timeout, TimeUnit unit) adds a timeout on top of tryLock(): it returns false only after the timeout elapses without acquiring the lock, and the wait can be interrupted before the timeout. Note that the Lock implementations in JUC require us to release the lock manually; if an exception is thrown after the lock is acquired, remember to release it in a finally block, otherwise other threads will never obtain it. A synchronized lock, by contrast, is released automatically by the JVM even when the method exits abnormally. That is a key difference between the two.
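The manual-release discipline described above is conventionally written as a lock/try/finally idiom; a minimal sketch (class name is ours):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockUsage {
    private final ReentrantLock lock = new ReentrantLock();
    private long counter;

    long increment() {
        lock.lock();          // block until the lock is available
        try {
            return ++counter; // critical section
        } finally {
            lock.unlock();    // always release, even if the body throws
        }
    }

    public static void main(String[] args) {
        LockUsage u = new LockUsage();
        System.out.println(u.increment()); // 1
    }
}
```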
ReentrantReadWriteLock
ReentrantLock is an exclusive lock, while ReentrantReadWriteLock separates read and write locks. As long as the write lock is not held, multiple threads can acquire the read lock concurrently without blocking, which gives it a clear advantage over ReentrantLock in read-heavy, write-light workloads. Suppose thread 1 acquires the read or write lock first; whether thread 2’s subsequent request succeeds is shown below:
| Thread 1 \ Thread 2 | read | write |
| --- | --- | --- |
| read | ✓ | ✗ |
| write | ✗ | ✗ |
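A minimal sketch of the read-write split, using an illustrative CachedValue class: any number of threads may be inside read() at once, while write() is exclusive.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    int read() {
        rw.readLock().lock();   // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    void write(int v) {
        rw.writeLock().lock();  // exclusive: blocks both readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```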
StampedLock
First, StampedLock is not reentrant. The idea behind StampedLock is that a read should not block other reads (as in ReentrantReadWriteLock), and in its optimistic mode it should not even block writes. StampedLock supports three modes: writing, reading, and optimistic reading. The lock state is composed of a version and a mode; the lock-acquisition methods return a number, the stamp, which represents and controls access for that lock state. Reading is further divided into pessimistic reading and optimistic reading.
In the so-called optimistic read mode, suited to workloads with many reads and few writes, we optimistically assume that a write is unlikely to occur during the read, so no read lock is taken at all. After reading the data, we validate whether a write happened in the meantime and take follow-up measures if it did (re-read under a pessimistic read lock, or throw an exception). This small refinement can greatly improve throughput. Here is the StampedLock example from the Java documentation:
class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    void move(double deltaX, double deltaY) { // an exclusively locked method
        long stamp = sl.writeLock();
        try {
            x += deltaX;
            y += deltaY;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() { // a read-only method
        long stamp = sl.tryOptimisticRead(); // optimistic read, no blocking
        double currentX = x, currentY = y;   // read both fields into locals
        if (!sl.validate(stamp)) {           // did a write occur after the stamp was issued?
            stamp = sl.readLock();           // yes: fall back to a pessimistic read lock
            try {
                currentX = x;                // re-read both fields into locals
                currentY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(currentX * currentX + currentY * currentY);
    }

    void moveIfAtOrigin(double newX, double newY) { // upgrade
        // Could instead start with optimistic, not read mode
        long stamp = sl.readLock();
        try {
            while (x == 0.0 && y == 0.0) {                 // loop while the state still matches
                long ws = sl.tryConvertToWriteLock(stamp); // try to upgrade the read lock
                if (ws != 0L) {                            // upgrade succeeded
                    stamp = ws;                            // use the new stamp
                    x = newX;                              // mutate the state
                    y = newY;
                    break;
                } else {                                   // upgrade failed
                    sl.unlockRead(stamp);                  // release the read lock explicitly
                    stamp = sl.writeLock();                // and acquire the write lock
                }
            }
        } finally {
            sl.unlock(stamp);                              // release read or write lock
        }
    }
}
Atomic classes based on “CAS” optimistic locking
The java.util.concurrent.atomic package provides the AtomicXXX classes, for example AtomicInteger, AtomicLong, and AtomicBoolean. These classes achieve thread safety through “CAS” spin loops. In contrast to the locks in the JUC locks package, they do not need to suspend and wake threads; they avoid those system calls by letting threads “busy-spin” instead. The advantage is the absence of suspend/wake system calls; the disadvantages are that spinning under heavy contention burns CPU, and that plain CAS is subject to the “ABA” problem (which can be solved with the AtomicStampedReference class).
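As a sketch, the counter from the atomicity section can be rewritten with AtomicLong so that each increment is a single CAS retry loop and no mutex is needed (the class name is ours):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounterDemo {
    static long count() throws InterruptedException {
        AtomicLong counter = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(10);
        CountDownLatch latch = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            pool.submit(() -> {
                for (int j = 0; j < 10000; j++) {
                    counter.incrementAndGet(); // a CAS retry loop, no mutex
                }
                latch.countDown();
            });
        }
        latch.await();
        pool.shutdown();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count()); // 100000
    }
}
```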
References
In-depth Understanding of the Java Virtual Machine