In practice, many of the Java programs we write are riddled with concurrency bugs that just "happen" to work.

How can we, as conscientious programmers, tolerate such unsafe code? Let's look at how to avoid these problems.

Strength of thread safety

Shared data in the Java language can be classified into five categories, ordered from strongest to weakest thread safety: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-antagonistic.

Immutable

Immutable objects are always thread-safe: neither the object's method implementations nor their callers need any safeguards. As long as an immutable object is constructed correctly (without the this reference escaping), its externally visible state never changes, and it can never be seen in an inconsistent state by multiple threads. Immutability is the most direct and pure form of thread safety. In Java, an object can be made immutable by declaring all of its state final. For example, an instance of java.lang.String is a typical immutable object: calls to substring(), replace(), or concat() leave the original value untouched and only return a new string object.
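
As a quick illustration (a minimal sketch, not from the original text), the following shows that String operations never modify the original value:

```java
public class StringImmutability {
    public static void main(String[] args) {
        String s = "hello world";
        String sub = s.substring(0, 5);   // returns a new String
        String rep = s.replace('l', 'L'); // returns a new String
        String cat = s.concat("!");       // returns a new String

        // The original string is unchanged; each call produced a new object.
        System.out.println(s);   // hello world
        System.out.println(sub); // hello
        System.out.println(rep); // heLLo worLd
        System.out.println(cat); // hello world!
    }
}
```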

Absolute thread safety

Absolute thread safety is a very restrictive definition, and it can be expensive for a class to achieve "the caller never needs any additional synchronization, no matter what the runtime environment is." Most classes that label themselves thread-safe in the Java API are not absolutely thread-safe. java.util.Vector, for example, is described as thread-safe, and its add(), get(), and size() methods are all modified by synchronized, which costs efficiency but guarantees atomicity. Yet in a multithreaded environment, if the caller takes no additional synchronization measures, this is still not enough: one thread may remove an element at just the wrong moment, so that an index another thread has just validated no longer exists, and accessing it throws an ArrayIndexOutOfBoundsException. If Vector had to be absolutely thread-safe, it would have to maintain a consistent set of snapshots internally, generating a new snapshot every time its elements are modified.
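
The failure mode can be reproduced deterministically by interleaving the calls by hand (a simplified sketch; the "thread" labels in the comments are illustrative):

```java
import java.util.Vector;

public class VectorNotAbsolutelySafe {
    public static void main(String[] args) {
        Vector<Integer> vector = new Vector<>();
        vector.add(42);

        // Thread A: checks the size and computes a valid index...
        int lastIndex = vector.size() - 1; // lastIndex == 0

        // Thread B: removes the element between A's check and A's access.
        vector.remove(0);

        // Thread A: the index it validated a moment ago is now stale.
        try {
            vector.get(lastIndex); // throws ArrayIndexOutOfBoundsException
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught: " + e);
        }
    }
}
```

Each individual call is atomic, but the check-then-act sequence is not, which is exactly why Vector is not absolutely thread-safe.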

Relative thread safety

Relative thread safety is what we usually mean by thread safety: an individual operation on the object needs no additional safeguards, but a sequence of calls in a specific order may require extra synchronization on the calling side to be correct. Most classes that call themselves thread-safe in Java fall into this category, such as Vector, Hashtable, and the collections wrapped by Collections.synchronizedCollection().
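
For example, even a wrapped collection needs caller-side locking for a compound operation such as iteration; the Javadoc for Collections.synchronizedList() itself requires this. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RelativeSafety {
    public static void main(String[] args) {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        list.add(1);
        list.add(2);
        list.add(3);

        // Each add() above is thread-safe on its own, but iteration is a
        // sequence of calls, so the caller must guard it explicitly:
        int sum = 0;
        synchronized (list) {
            for (int x : list) {
                sum += x;
            }
        }
        System.out.println(sum); // 6
    }
}
```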

Thread compatible

Thread-compatible means that the object itself is not thread-safe, but can be used safely in a concurrent environment if the calling side applies synchronization correctly. This is usually what we mean when we say a class is "not thread-safe"; most classes in the Java class library API are thread-compatible.
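
java.text.SimpleDateFormat is a well-known example: its Javadoc states it is not synchronized, so sharing an instance is safe only if every caller goes through the same lock (or each thread keeps its own instance). A minimal sketch of caller-side synchronization:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadCompatibleUse {
    // SimpleDateFormat itself is not thread-safe...
    private static final SimpleDateFormat FORMAT = new SimpleDateFormat("yyyy-MM-dd");

    // ...but it becomes safe to share if all callers funnel through one
    // lock, which is exactly what "thread-compatible" means.
    public static synchronized String format(Date date) {
        return FORMAT.format(date);
    }

    public static void main(String[] args) {
        System.out.println(format(new Date(0L)));
    }
}
```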

Thread antagonistic

Thread antagonism means code that cannot be used concurrently in a multithreaded environment at all, regardless of whether the callers take synchronization measures. Because the Java language is inherently multithreaded, such code is rare. Consider the suspend() and resume() methods of the Thread class: if two threads hold the same Thread object at once, one trying to suspend it and the other trying to resume it, then under concurrency the target thread risks deadlock no matter how the calls are synchronized. This is why suspend() and resume() have long been deprecated.

Thread-safe implementation

Mutual exclusion synchronization

This is the most common safeguard. Mutual exclusion synchronization means ensuring that when multiple threads access shared data concurrently, the data is used by only one thread at a time.

The synchronized keyword, for example, implements blocking synchronization. A synchronized block is reentrant for the thread that already holds the lock, and it unconditionally blocks any other thread that arrives until the holder releases the lock. This means there is no way to force the thread that acquired the lock to release it, as some databases allow, and no way for a waiting thread to interrupt its wait or time out.
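
The reentrancy mentioned above can be seen in a short sketch: the same thread can enter a second synchronized method guarded by the same lock without deadlocking itself.

```java
public class ReentrantDemo {
    int depth = 0;

    public synchronized void outer() {
        depth++;
        inner(); // same thread, same lock: re-entry is allowed
    }

    public synchronized void inner() {
        depth++;
    }

    public static void main(String[] args) {
        ReentrantDemo demo = new ReentrantDemo();
        demo.outer(); // completes without deadlock
        System.out.println(demo.depth); // 2
    }
}
```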

After JDK 5, java.util.concurrent.locks.Lock offers another way to implement mutual exclusion synchronization, and ReentrantLock is the most common implementation of the Lock interface.
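
ReentrantLock adds exactly the capabilities synchronized lacks, such as a timed attempt to acquire the lock. A minimal sketch using tryLock() (the sleep durations are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        Thread holder = new Thread(() -> {
            lock.lock();
            try {
                TimeUnit.MILLISECONDS.sleep(500); // hold the lock for a while
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        TimeUnit.MILLISECONDS.sleep(100); // let the holder acquire the lock first

        // Unlike synchronized, a waiting thread can give up after a timeout.
        boolean acquired = lock.tryLock(50, TimeUnit.MILLISECONDS);
        System.out.println("acquired = " + acquired);
        if (acquired) {
            lock.unlock();
        }
        holder.join();
    }
}
```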

Nonblocking synchronization

The main problem with mutual exclusion synchronization is the performance cost of blocking and waking threads, which is why it is also called blocking synchronization. An alternative is optimistic, non-blocking synchronization built on the compare-and-swap (CAS) operation.

CAS came into use in the Java class library only after JDK 5, through several methods wrapped in the sun.misc.Unsafe class, such as compareAndSwapInt() and compareAndSwapLong(). The HotSpot virtual machine treats these methods specially: the result of just-in-time compilation is a platform-specific CAS instruction with no method call, or they can be thought of as unconditionally inlined.

Prior to JDK 9, CAS was available only to the Java class library itself. For example, the integer atomic classes in the java.util.concurrent (J.U.C.) package implement methods such as compareAndSet() and getAndIncrement() with Unsafe's CAS operations. If an application wanted to use CAS directly, it had to either use reflection to get around Unsafe's access restrictions or use it indirectly through the class library API. Only in JDK 9 did the class library open CAS operations to user applications, through the VarHandle class.

Although CAS is simple and efficient, it cannot cover all the scenarios of mutual exclusion synchronization, and it has a logical flaw known as the ABA problem: if a variable is 1 when first read, and is still 1 when checked just before the assignment, that does not guarantee it was never modified in between. If another thread changed it to 2 and then back to 1 during that interval, the CAS operation will conclude that it was never changed. The J.U.C. package addresses this with the AtomicStampedReference class, which guarantees the correctness of CAS by attaching a version stamp to the variable's value. That said, most ABA problems do not affect a program's correctness, and when one does, switching to traditional mutual exclusion synchronization may be more efficient than the atomic classes.
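
The ABA scenario can be reproduced deterministically in a single thread: a plain CAS cannot tell that the value took a round trip from 1 to 2 and back to 1, while AtomicStampedReference detects it through the stamp. A minimal sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Plain CAS: the A -> B -> A round trip goes unnoticed.
        AtomicInteger plain = new AtomicInteger(1);
        plain.compareAndSet(1, 2); // "another thread": A -> B
        plain.compareAndSet(2, 1); // "another thread": B -> A
        boolean plainCas = plain.compareAndSet(1, 3);
        System.out.println(plainCas); // true: CAS believes nothing changed

        // Stamped CAS: the version stamp exposes the intermediate changes.
        AtomicStampedReference<Integer> stamped =
                new AtomicStampedReference<>(1, 0);
        int stampSeenEarlier = stamped.getStamp(); // 0
        stamped.compareAndSet(1, 2, 0, 1); // A -> B bumps the stamp
        stamped.compareAndSet(2, 1, 1, 2); // B -> A bumps it again
        boolean stampedCas =
                stamped.compareAndSet(1, 3, stampSeenEarlier, stampSeenEarlier + 1);
        System.out.println(stampedCas); // false: the stamp has moved on
    }
}
```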

Let’s look at Atomic increment in code
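
A minimal sketch consistent with the result described below (assuming two threads that each call incrementAndGet() 1000 times; the thread count and loop bound are assumptions):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIncrement {
    private static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                count.incrementAndGet(); // atomic ++, no lost updates
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count.get()); // 2000
    }
}
```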

The final output is 2000: with AtomicInteger, the program prints the correct result, thanks to the atomicity of the incrementAndGet() method.

Looking inside incrementAndGet(), we can see that it relies on the Unsafe class, whose getAndAddInt() method does the real work, so let's take a closer look at the source code.

Looking at the implementation, its core is a call to compareAndSwapInt().

compareAndSwapInt() is a native method, implemented by the underlying virtual machine. Looking back at the implementation of getAndAddInt(): when compareAndSwapInt() fails, it simply keeps retrying until it succeeds.
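
That retry-until-success pattern can be sketched with the public API, using AtomicInteger.compareAndSet() in place of the Unsafe call (a simplified model of the loop's shape, not the JDK source):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasRetryLoop {
    // Mirrors the shape of getAndAddInt(): read the current value, then
    // CAS it to value + delta; if another thread won the race the CAS
    // fails, and we simply re-read and retry.
    static int getAndAdd(AtomicInteger target, int delta) {
        int current;
        do {
            current = target.get();
        } while (!target.compareAndSet(current, current + delta));
        return current; // the value before the addition
    }

    public static void main(String[] args) {
        AtomicInteger n = new AtomicInteger(5);
        int before = getAndAdd(n, 3);
        System.out.println(before);  // 5
        System.out.println(n.get()); // 8
    }
}
```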

No-synchronization schemes

Synchronization is not a prerequisite for thread safety, and the two are not inherently tied together: synchronization is only a means of ensuring correctness when shared data is contended. If a method involves no shared data in the first place, it needs no synchronization to be correct, so some code is inherently thread-safe. Reentrant code, also known as pure code, is one example: code that can be interrupted at any point of its execution and switched away from, and that leaves the original program free of errors once control returns. All reentrant code is thread-safe, but not all thread-safe code is reentrant.
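
A pure function that touches no shared state is one such case: any number of threads can call it concurrently without synchronization (a minimal sketch; the choice of gcd is illustrative):

```java
public class PureCode {
    // No fields, no shared state: the result depends only on the arguments,
    // so this is reentrant and thread-safe with no synchronization at all.
    static int gcd(int a, int b) {
        while (b != 0) {
            int t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> System.out.println(gcd(12, 18))); // 6
        Thread t2 = new Thread(() -> System.out.println(gcd(35, 14))); // 7
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```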

Lock optimization

One of the major improvements in JDK 6 is the optimization of locks: adaptive spinning, lock elimination, lock coarsening, lightweight locking, biased locking, and so on. These techniques are designed to share data between threads and resolve contention more efficiently, thus improving program execution efficiency.

Spin locks and adaptive spinning

Mutual exclusion synchronization has the problem that blocking, suspending, and resuming threads requires crossing into kernel mode, which puts great pressure on the concurrency performance of the Java virtual machine. Often, shared data stays locked for only a very short period, too short to make suspending and resuming threads worthwhile. Instead, we can ask the later-arriving thread to wait a moment without giving up the processor, by executing a busy loop: a spin.

Spin locking was introduced in JDK 1.4.2 but turned off by default; it could be enabled with the -XX:+UseSpinning parameter, and in JDK 6 it is on by default. Spin waiting is not a substitute for blocking: while it avoids the overhead of thread switching, it does consume processor time, so it works well only if the lock is held briefly. If the lock is held for a long time, spinning simply wastes resources, so the spin wait must be bounded. The default number of spins is ten, which the user can change with the -XX:PreBlockSpin parameter; a thread that exceeds this count is suspended in the traditional way.

However, the appropriate number of spins differs between scenarios; no single value suits all of them. So JDK 6 also optimized spin locks by introducing adaptive spinning. Adaptive means the spin duration is no longer fixed but is determined by the previous spin times on the same lock and the state of the lock's owner. With adaptive spinning, the virtual machine predicts the condition of each lock more and more accurately as the application runs longer and its performance monitoring information improves.

Lock elimination

Lock elimination means that the just-in-time compiler removes locks from code that requests synchronization but is detected to have no possible contention over shared data. The primary judgment comes from escape analysis: if it is determined that in a piece of code none of the data on the heap can escape and be accessed by other threads, that data can be treated as if it were on the stack and considered thread-private, so locking it is unnecessary.
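
A minimal sketch of a candidate for lock elimination: every StringBuffer.append() is synchronized, but the buffer below is a local object that never escapes the method, so the JIT compiler can remove all three locks.

```java
public class LockEliminationDemo {
    // sb is local and never escapes concatString(), so escape analysis
    // can prove no other thread will ever contend for its lock.
    public static String concatString(String s1, String s2, String s3) {
        StringBuffer sb = new StringBuffer();
        sb.append(s1); // each append() is synchronized on sb...
        sb.append(s2); // ...but the JIT may eliminate the locking
        sb.append(s3);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concatString("a", "b", "c")); // abc
    }
}
```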

Lock coarsening

In principle, when writing code it is recommended to keep the scope of synchronized blocks as small as possible, so that the number of operations performed under synchronization is as small as possible and, even if lock contention exists, waiting threads acquire the lock sooner. However, if a series of consecutive operations repeatedly locks and unlocks the same object, perhaps with the locking inside a loop body, the frequent mutual exclusion causes unnecessary performance loss even without any thread contention. In such cases the virtual machine coarsens the lock, extending its scope to cover the whole sequence of operations.

Lightweight lock

Lightweight locking, a new mechanism introduced in JDK 6, can improve performance based on the empirical rule that "for the vast majority of locks, there is no contention during the entire synchronization period." When there is no contention, a lightweight lock uses a CAS operation to avoid the cost of a mutex entirely. If contention does exist, however, the CAS operations add overhead on top of the mutex itself, so under contention a lightweight lock can actually be slower than a traditional heavyweight lock.

Biased locking

Biased locking is also a lock optimization introduced in JDK 6, intended to eliminate synchronization primitives altogether when there is no contention. A biased lock is biased in favor of the first thread that acquires it: if the lock is never acquired by another thread during subsequent execution, the thread holding the biased lock never needs to synchronize again.

We can enable it with the -XX:+UseBiasedLocking parameter (the default). When a lock object is acquired by a thread for the first time, the VM sets the lock flag in the object header to 01 and the biased mode bit to 1, and uses a CAS operation to record the ID of the acquiring thread in the object's Mark Word. If the CAS succeeds, then from then on, whenever the thread holding the biased lock enters a synchronized block related to that lock, the VM performs no locking, unlocking, or Mark Word updates at all.

But as soon as another thread tries to acquire the lock, the biased mode ends and the lock reverts to an unlocked or lightweight-locked state. Biased locking improves the performance of programs that synchronize but have no contention; if most locks in a program are accessed by multiple different threads, performance may instead be improved by disabling the optimization with the -XX:-UseBiasedLocking parameter.

Reference: Understanding the Java Virtual Machine in Depth

Thank you for your ❤️ attention and likes ❤️. Original writing is not easy; your encouragement helps the author create better articles.