What is thread safety? Paraphrasing Java Concurrency in Practice: an object is thread-safe if multiple threads can access it and calls on it always produce correct results, without the callers having to consider how the runtime schedules and interleaves those threads, and without any additional synchronization or other coordination on the callers' part. Three points fall out of this definition: 1. Thread safety is a property of objects. 2. Thread-safety problems are caused by thread scheduling and interleaved execution. 3. The goal of thread safety is correct results. We will return to this definition repeatedly below when discussing the thread safety of objects.

1. Safety

By the safety of shared objects in Java, thread safety can be divided into five levels: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-hostile.

1.1 Immutable

An immutable object in Java is necessarily thread-safe, because thread scheduling and interleaved execution cannot change the object. Custom constants (final fields) and objects in the constant pool are likewise immutable.
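As a minimal sketch (the class name and fields here are illustrative, not from the original), an immutable class declares itself and its fields final and exposes no mutators, so every thread that holds a reference sees the same state forever:

```java
// Immutable value class: final class, final fields, no setters.
// Once constructed, its state can never change, so it can be shared
// freely between threads without any synchronization.
public final class ImmutablePoint {
    private final int x;
    private final int y;

    public ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Mutation" returns a new object instead of changing this one.
    public ImmutablePoint translate(int dx, int dy) {
        return new ImmutablePoint(x + dx, y + dy);
    }
}
```

This is the same pattern String and Integer follow: operations that look like mutation actually return new instances.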

1.2 Absolute thread safety

Absolute thread safety means the object never affects data correctness, regardless of how threads are scheduled or interleaved, and regardless of how callers combine its operations. In practice, the price of absolute safety is handling every possible environment and uncertainty; even then such absoluteness is unattainable, so there are effectively no absolutely thread-safe objects in the Java API.

1.3 Relative thread safety

What we usually call thread safety in the Java API is relative thread safety. The following is an example:

```java
import java.util.Vector;

public class ThreadSafeTest {
    public static Vector<Integer> num = new Vector<>();

    public static void main(String[] args) {
        while (true) {
            for (int i = 0; i < 100; i++) {
                num.add(i);
            }
            // remove
            Thread t1 = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < num.size(); i++) {
                        num.remove(i);
                    }
                }
            });
            Thread t2 = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < num.size(); i++) {
                        System.out.println(num.get(i));
                    }
                }
            });
            t1.start();
            t2.start();
        }
    }
}
```

We know that Vector is Java's thread-safe (synchronized) list, yet both t1 and t2 are likely to throw an ArrayIndexOutOfBoundsException. If t1 removes elements between t2's call to num.size() and num.get(i), the index checked against the old size may no longer be valid by the time the element is fetched. In other words, Vector is thread-safe when get or remove is used on its own, but not when such calls are combined into a check-then-act sequence. Vector is therefore thread-safe only conditionally, i.e. relatively thread-safe. The same is true of the collections in java.util.concurrent.
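One way to repair the example above (a sketch of client-side locking, not the only option) is to synchronize each composite check-then-act sequence on the Vector itself, so that size() and get()/remove() execute atomically with respect to each other:

```java
import java.util.Vector;

public class SafeCompositeOps {
    public static Vector<Integer> num = new Vector<>();

    // Vector's own methods synchronize on the Vector instance, so locking
    // on it here makes the size/get pair one atomic unit: no element can
    // be removed between the bounds check and the read.
    public static void printAll() {
        synchronized (num) {
            for (int i = 0; i < num.size(); i++) {
                System.out.println(num.get(i));
            }
        }
    }

    public static void removeAll() {
        synchronized (num) {
            while (!num.isEmpty()) {
                num.remove(num.size() - 1);
            }
        }
    }
}
```

The lock object must be the Vector itself; a private lock would not exclude Vector's internally synchronized methods.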

1.4 Thread Compatibility

Objects of this kind are not thread-safe by themselves, but in a multithreaded environment clients can make access to them thread-safe by using synchronization correctly. Most of the non-thread-safe APIs we use in Java fall into this category, such as HashMap and ArrayList.
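As an illustrative sketch (the class and method names are mine), the client can supply the missing synchronization for an ArrayList by wrapping it with Collections.synchronizedList, after which concurrent adds no longer lose updates:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ThreadCompatibleDemo {
    // ArrayList is not thread-safe on its own; wrapping it is one way
    // the *client* supplies the synchronization the class lacks.
    public static List<Integer> fillConcurrently() {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        Runnable adder = () -> {
            for (int i = 0; i < 1000; i++) {
                list.add(i);
            }
        };
        Thread t1 = new Thread(adder);
        Thread t2 = new Thread(adder);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return list; // always 2000 elements; a bare ArrayList could lose some
    }
}
```

With a bare ArrayList the same run may end with fewer than 2000 elements, or throw, because add is a non-atomic read-modify-write.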

1.5 Thread Hostility

Thread hostility describes code that cannot be used safely in a concurrent environment no matter what synchronization measures the caller takes. Consider Thread.suspend() and Thread.resume(): if one thread suspends another while the suspended thread holds a lock that the resuming thread needs, neither can make progress and a deadlock occurs. (www.zhihu.com/question/40…)

2. Synchronization

Under concurrency, the scheduling and interleaved execution of multiple threads affect the state of shared objects, so coordinating data access between threads is the key to solving thread safety.

2.1 Mutually Exclusive Synchronization

Mutually exclusive synchronization means that while one thread is operating on an object, no other thread may operate on it. There are many ways to implement mutual exclusion: critical sections, mutexes, and semaphores are all such mechanisms. Java implements it mainly through the synchronized keyword, which after compilation brackets the synchronized block with two bytecode instructions, monitorenter and monitorexit. Both take a reference-type operand identifying the object to be locked and unlocked. If synchronized does not explicitly specify a lock object, the lock is the enclosing instance, or the Class object for a static method. When monitorenter executes, the lock counter is incremented by one; when monitorexit executes, it is decremented by one; when the counter reaches zero, the lock is released. Naturally, synchronized is reentrant within the same thread, so a thread cannot lock itself out. Besides synchronized, the java.util.concurrent package provides ReentrantLock for synchronization, which offers richer functionality: interruptible lock waits; fair locking, where locks are granted in request order (the default is still unfair); and binding multiple conditions to a single lock.
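All three extra features can be seen in one place with a small bounded buffer (a sketch; the class name and capacity are illustrative): the constructor requests a fair lock, lockInterruptibly() makes the wait for the lock interruptible, and two Condition objects hang off the same lock, which plain synchronized cannot do:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    // 'true' requests a fair lock: waiting threads acquire it roughly in
    // arrival order (the default, 'false', is unfair but faster).
    private final ReentrantLock lock = new ReentrantLock(true);
    // One lock can carry several conditions; synchronized has one wait set.
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final int[] items = new int[16];
    private int count, putIdx, takeIdx;

    public void put(int x) throws InterruptedException {
        lock.lockInterruptibly(); // the wait for the lock can be interrupted
        try {
            while (count == items.length) notFull.await();
            items[putIdx] = x;
            putIdx = (putIdx + 1) % items.length;
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == 0) notEmpty.await();
            int x = items[takeIdx];
            takeIdx = (takeIdx + 1) % items.length;
            count--;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```

Note the unlock in a finally block: unlike synchronized, ReentrantLock is not released automatically when the block exits.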

For a more detailed comparison of synchronized and ReentrantLock, see: blog.csdn.net/fw0124/arti… Mutually exclusive synchronization is heavyweight, and its performance is relatively low: when two threads access the same object, the lock is contended whether or not there is an actual data race, making it a pessimistic form of locking, and the blocking of threads while they wait for the lock to be released seriously reduces efficiency.

2.2 Non-Blocking Synchronization

To address the thread-blocking cost of mutually exclusive synchronization, non-blocking synchronization was introduced. Non-blocking synchronization is an optimistic concurrency strategy based on conflict detection: perform the operation first, and if no other thread contends, it succeeds; if another thread does contend, apply conflict compensation (typically, retry). This optimistic strategy depends on hardware support, because the data operation and the conflict detection must together be atomic, which requires a single hardware instruction. The most common such instruction is compare-and-swap, or CAS (details: www.cnblogs.com/Mainz/p/354…). Non-blocking synchronization is not intended to replace mutual exclusion; it complements it where its performance shines, but because of defects such as the ABA problem it cannot fully substitute for it.
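The classic shape of this "operate first, detect conflicts, retry" strategy is the CAS loop, shown here with AtomicInteger (a sketch; the class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic, non-blocking increment: read the current value, compute
    // the next one, then try to swap it in only if nobody else changed it
    // in between. On conflict the CAS fails and we simply retry; no thread
    // is ever blocked.
    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public int get() {
        return value.get();
    }
}
```

This loop also makes the ABA problem concrete: CAS only checks that the value equals what was read, not that it was never changed and changed back in between; AtomicStampedReference exists to attach a version stamp for exactly that case.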

2.3 No Synchronization

Not all data needs synchronization: if there is no data contention, none is required. In addition, thread-local storage (ThreadLocal) can be used in scenarios where the code that touches shared data always executes within a single thread, which suits architectures such as a per-thread consumer queue.
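A small sketch of thread-local storage (names are illustrative): the field is shared, but each thread transparently gets its own independent copy, so no synchronization is needed at all:

```java
public class ThreadLocalDemo {
    // One field, but each thread sees its own StringBuilder instance,
    // created lazily on first access. Threads never share the builder,
    // so there is no contention and nothing to lock.
    private static final ThreadLocal<StringBuilder> buffer =
            ThreadLocal.withInitial(StringBuilder::new);

    public static String appendAndRead(String s) {
        buffer.get().append(s);
        return buffer.get().toString();
    }
}
```

Calls from other threads start from a fresh, empty builder; only the calling thread's own appends are visible to it.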

3. Locks

Most data synchronization is still based on mutual exclusion. As we saw above, when a mutex lock is contended, threads block and repeatedly switch between suspension and wake-up, and this thread switching has a significant performance cost. For these reasons, lock optimization has become an important part of the JVM's work; below are some common lock optimizations.

3.1 Spin Locks and Adaptive Spinning

In most cases a contended lock is held only briefly, so a waiting thread usually needs only a short wait before it can acquire the lock. Java therefore uses a busy-wait loop (spinning) to wait for the lock to be released, rather than suspending and resuming the thread. Busy waiting is not free, however: it occupies CPU, and if the lock is not released for a long time, spinning becomes a serious waste of resources. So from JDK 1.4.2 the spin count could be controlled by a parameter, beyond which the thread is suspended; in JDK 1.6, Java adopted adaptive spinning, which adjusts the spin count automatically without manual tuning.
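The idea behind spinning can be sketched as a toy user-level spin lock (illustrative only; the JVM's real spinning happens inside the monitor implementation, not in Java code): instead of suspending, a waiting thread loops on a CAS until the holder clears the flag:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy spin lock: a waiting thread busy-waits on a CAS rather than being
// suspended. Cheap when the lock is held briefly; wasteful otherwise.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint that we are spinning (JDK 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

A production version would bound the spin count and fall back to parking the thread, which is exactly the trade-off the JDK 1.4.2 parameter and 1.6 adaptive spinning manage.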

3.2 Lock Elimination

Lock elimination means removing, at JIT compile time, locks on code that is formally synchronized but cannot actually contend for shared data. Whether shared data can be contended is determined by the JVM's escape analysis (www.importnew.com/23150.html).

3.3 Lock Coarsening

Although we usually want lock granularity to be as fine as possible, to shrink the range of code over which locks are contended, when a block of code would repeatedly lock and unlock the same object it can be cheaper to coarsen: hold a single lock across the whole block and reduce the overhead of repeated lock operations.
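The textbook illustration of both this and the previous optimization is StringBuffer in a loop (a sketch; the class and method names are mine). Every append() locks and unlocks the same object, so the JIT may coarsen them into one lock held across the loop; and if escape analysis proves the buffer never leaves the method, it may eliminate the locks entirely:

```java
// Each append() on a StringBuffer is a synchronized method on the same
// object. In a tight loop the JIT may coarsen these into a single lock
// held across the whole loop; since sb never escapes this method, the
// locks are also candidates for outright elimination.
public class CoarseningDemo {
    public static String build(int n) {
        StringBuffer sb = new StringBuffer(); // internally synchronized
        for (int i = 0; i < n; i++) {
            sb.append(i); // lock/unlock per call, unless optimized away
        }
        return sb.toString();
    }
}
```

Either way the observable behavior is unchanged; only the number of actual lock operations differs.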

3.4 Lightweight Lock

A lightweight lock, as opposed to a heavyweight lock, improves lock performance on the assumption that the data is not contended. Instead of locking the object with an operating-system mutex as a heavyweight lock does, it uses the flag bits in the object header together with a CAS operation to detect lock contention. For the implementation details, see: my.oschina.net/u/140462/bl…

3.5 Biased Locking

Biased locking, like lightweight locking, is an optimization for the uncontended case. Because a lightweight lock still needs CAS updates to the Mark Word, a biased lock eliminates even that synchronization, further improving performance. A biased lock marks the object by writing the owning thread's ID and flag bits into the Mark Word; if the Mark Word is unchanged during subsequent operations, there is no contention and no synchronization is needed. If it changes, contention exists and the lock is upgraded to a lightweight or heavyweight lock for synchronization. For the implementation of biased locking, see: my.oschina.net/u/140462/bl…