This is the fourth installment in our multithreading series.

Java Multithreading – Where do threads come from?

Java multithreaded memory model

Java multithreading — volatile

If you have read the previous articles in this series, you should already have a clear picture of how threads are implemented. With that theory as a guide, practice becomes easier, so this article focuses on the practical side of threads and briefly introduces several of their applications.

The main content of this article:

  1. Classification of thread safety
  2. Implementation of thread synchronization
  3. Lock optimization

 

Thread safety classification

Thread safety is not a binary property that is simply true or false. In Java it can be divided into the following categories, in decreasing order of "safety":

Immutable. If a field is declared final, it is guaranteed to be immutable (for reference types, only the reference itself is fixed; the attributes of the referenced object are not). Immutable objects are always thread-safe. A typical example is the String class: calling its substring(), replace(), or concat() methods never changes the original value and only returns a newly constructed String object.
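A minimal sketch of that immutability (the class name here is invented for illustration): replace() and concat() each return a new String and leave the original untouched.

```java
public class ImmutableDemo {
    public static void main(String[] args) {
        String s = "hello";
        String replaced = s.replace('h', 'H');  // returns a brand-new String
        String joined = s.concat(" world");     // also a new String

        System.out.println(s);         // original unchanged: hello
        System.out.println(replaced);  // Hello
        System.out.println(joined);    // hello world
    }
}
```

Because the original value can never change, a String can be freely shared between threads with no synchronization at all.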

Absolutely thread-safe. This definition is very strict: for a class to achieve it, the caller must never need any additional synchronization measures, regardless of the runtime environment. Very few classes reach this bar.

Relatively thread-safe. This is what we usually mean by "thread-safe". It guarantees that each individual operation on the object is thread-safe and needs no extra protection. However, a sequence of calls that must happen in a particular order may still require additional synchronization on the calling side to be correct.

Thread-compatible. The object itself is not thread-safe, but it can be used safely in a concurrent environment if the calling side applies synchronization correctly. Most of the non-thread-safe classes we use every day fall into this category.
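As a sketch of thread compatibility (all names here are invented), a plain ArrayList is not thread-safe by itself, but becomes safe to share when every access goes through the same lock on the calling side:

```java
import java.util.ArrayList;
import java.util.List;

public class CompatDemo {
    private final List<Integer> list = new ArrayList<>(); // not thread-safe itself
    private final Object lock = new Object();

    // caller-side synchronization makes the thread-compatible list safe
    public void add(int v) {
        synchronized (lock) {
            list.add(v);
        }
    }

    public int size() {
        synchronized (lock) {
            return list.size();
        }
    }
}
```

As long as every thread that touches the list goes through these synchronized methods, the list behaves safely; bypassing the lock anywhere reintroduces the race.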

Thread-hostile. The code cannot be used in a multithreaded environment at all, regardless of whether the calling side synchronizes. This is something we should always avoid.

 

The distinction between absolute and relative thread safety may not be obvious, so let's use an example to illustrate it:

import java.util.Vector;

public class VectorTest {

    private Vector<Integer> vector = new Vector<Integer>();

    public void remove() {
        new Thread() {
            @Override
            public void run() {
                for (int i = 0; i < vector.size(); i++) {
                    vector.remove(i);
                }
            }
        }.start();
    }

    public void print() {
        new Thread() {
            @Override
            public void run() {
                for (int i = 0; i < vector.size(); i++) {
                    System.out.println(vector.get(i));
                }
            }
        }.start();
    }

    public void add(int data) {
        vector.add(data);
    }

    public static void main(String[] args) {
        VectorTest test = new VectorTest();
        for (int j = 0; j < 100; j++) {
            for (int i = 0; i < 10; i++) {
                test.add(i);
            }
            test.remove();
            test.print();
        }
    }
}

Running the code above throws an ArrayIndexOutOfBoundsException. The exception occurs in the print method: after the remove thread deletes an element, the print thread's index can still have passed the size() check, so its call to vector.get() lands past the new end of the vector and throws.

Vector is known to be thread-safe: its get(), remove(), size(), and add() methods are all synchronized. Yet in a multithreaded context it is not safe without additional synchronization at the call site. This is what we mean by relative thread safety; it does not guarantee, as absolute thread safety would, that the caller never needs any extra synchronization.
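One way to repair the example (a sketch, not the only possible fix) is client-side locking: Vector's own methods synchronize on the vector instance, so external code can lock on the same object to make the compound check-then-act sequence atomic. The class below is a hypothetical reworking of the earlier one:

```java
import java.util.Vector;

public class SafeVectorTest {
    private final Vector<Integer> vector = new Vector<Integer>();

    public void add(int data) {
        vector.add(data);
    }

    public void removeAll() {
        // Vector's methods lock on the vector itself, so this block makes
        // the size()/remove() pair atomic with respect to other callers.
        synchronized (vector) {
            while (!vector.isEmpty()) {
                vector.remove(vector.size() - 1);
            }
        }
    }

    public void print() {
        synchronized (vector) {
            for (int i = 0; i < vector.size(); i++) {
                System.out.println(vector.get(i));
            }
        }
    }

    public int size() {
        synchronized (vector) {
            return vector.size();
        }
    }
}
```

With both loops holding the vector's monitor, the remove thread can no longer pull an element out between the print thread's size() check and its get().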

 

Implementation of thread synchronization

Mutual exclusion is a common way to ensure correctness under concurrency. Synchronization means that when multiple threads access shared data concurrently, only one thread may use the shared data at any given moment. The common mutual-exclusion mechanisms in Java are synchronized and ReentrantLock.

Anyone familiar with multithreading presumably already knows these two kinds of locks, so we will not cover how to use them. Instead, let's talk about the differences between the two and the scenarios each is suited to.

Since a JVM thread is mapped to a native operating-system thread, blocking or waking a thread requires a switch from user mode to kernel mode, which can sometimes take longer than executing the synchronized code itself. For short synchronized sections, the JVM therefore uses spin locks to avoid frequently dropping into kernel mode.

synchronized is the built-in lock provided by the JVM and the one it recommends: code written with synchronized is simpler and more compact. ReentrantLock should be used only when the intrinsic lock is not enough.
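As a rough side-by-side sketch (class and field names invented here), the two forms protect a counter equally well; the difference is mainly ergonomics, since the explicit lock must be released by hand:

```java
import java.util.concurrent.locks.ReentrantLock;

public class CounterDemo {
    private int a;
    private int b;
    private final ReentrantLock lock = new ReentrantLock();

    // intrinsic lock: release happens automatically at block exit
    public synchronized void incA() {
        a++;
    }

    // explicit lock: lock()/unlock() must be paired manually in a finally
    public void incB() {
        lock.lock();
        try {
            b++;
        } finally {
            lock.unlock();
        }
    }

    public synchronized int getA() {
        return a;
    }

    public int getB() {
        lock.lock();
        try {
            return b;
        } finally {
            lock.unlock();
        }
    }
}
```

Forgetting the finally block around unlock() is a classic ReentrantLock bug that synchronized makes impossible, which is one reason the built-in lock is the default recommendation.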

 

ReentrantLock

So what advanced features does ReentrantLock offer? Let's look at an example:

public void synA() {
    synchronized (lockA) {
        synchronized (lockB) {
            // doSomeThing....
        }
    }
}

public void synB() {
    synchronized (lockB) {
        synchronized (lockA) {
            // doSomeThing....
        }
    }
}

 

The synchronized code above is prone to deadlock when two threads call synA and synB at the same time. With intrinsic locks, the only way to avoid this is to force every caller to acquire the locks in the same order when the code is written. ReentrantLock avoids the problem with polled lock acquisition.

public void tryLockA() {
    long stopTime = System.currentTimeMillis() + 10000L;
    while (true) {
        if (lockA.tryLock()) {
            try {
                if (lockB.tryLock()) {
                    try {
                        // doSomeThing.....
                        return;
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        if (System.currentTimeMillis() > stopTime) {
            return;
        }
    }
}

 

If tryLock cannot obtain a lock, the code releases whatever it already holds and polls again, so the program regains control instead of deadlocking. tryLock also has a timed overload that waits for the lock up to a specified period.
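A sketch of the timed overload (class, method, and field names are hypothetical): tryLock(long, TimeUnit) waits up to the given period and reports whether the lock was obtained, so the caller can back off instead of blocking forever.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean doWork() {
        try {
            // wait at most 100 ms for the lock instead of blocking forever
            if (!lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                return false;  // timed out; caller can retry or give up
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;      // interrupted while waiting
        }
        try {
            return true;       // lock held; the real work would go here
        } finally {
            lock.unlock();
        }
    }
}
```

In the uncontended case the lock is granted immediately; under contention the caller learns within the timeout that it failed, which is impossible to express with a synchronized block.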

Besides polled and timed acquisition, ReentrantLock also offers interruptible lock acquisition, useful in cancellable operations, as well as fair queuing and non-block-structured locking. These features greatly enrich how lock operations can be customized.
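A sketch of interruptible acquisition (names invented): lockInterruptibly() lets a thread that is waiting for the lock be cancelled via Thread.interrupt(), something a thread parked on a synchronized block cannot respond to.

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean cancellableTask() {
        try {
            lock.lockInterruptibly();  // responds to interrupt() while waiting
            try {
                return true;           // got the lock; do the cancellable work
            } finally {
                lock.unlock();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;              // task was cancelled while waiting
        }
    }
}
```

Fairness, for comparison, is requested at construction time with `new ReentrantLock(true)`, which grants the lock to the longest-waiting thread at the cost of throughput.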

 

Of course, synchronized is still recommended if you do not need ReentrantLock's advanced features. Since Java 6, synchronized has matched ReentrantLock in performance, and officially it will keep improving, because as a built-in property of the JVM it can benefit from optimizations such as lock elimination for thread-confined lock objects and lock coarsening that merges adjacent synchronized regions. These are difficult to apply to ReentrantLock.

 

Lock optimization

As we saw above, when multiple threads compete for a resource, the threads that lose block and wait, and blocking and waking require scheduling by the kernel, which is expensive for the limited CPU. The JVM therefore spends a great deal of effort on lock optimization to improve execution efficiency.

Let’s look at common lock optimizations.

Spin locks

Many locks on shared data are held only for a very short time, and it is not worth suspending and resuming a thread for such a brief interval. So the JVM lets the thread waiting for the lock busy-wait without giving up its execution time, to see whether the lock is released quickly, thereby reducing the pressure of thread scheduling. This works well when locks are held briefly, but if a lock is held for a long time, the spinning simply wastes processor resources.

Adaptive spin locks

A fixed spin count cannot adapt to how long a lock is actually held, so adaptive spin locks were introduced: the spin time is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock's owner. This makes spinning considerably smarter.

Lock elimination

Lock elimination is when the JVM's just-in-time compiler removes locks on code that requests synchronization but where it detects no possibility of contention on shared data. The detection is backed by escape analysis: if it can be determined that none of the data in a piece of code escapes to where other threads could see it, the data is treated as if it lived on the stack, considered thread-private, and the synchronization is removed.
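A classic sketch of code eligible for elimination (class and method names invented): the StringBuffer below never escapes the method, so escape analysis can prove its internal synchronization is useless, and the JIT may remove it. Whether it actually does depends on the JVM version and flags.

```java
public class EliminationDemo {
    // sb is a local object that never escapes concat(), so the monitor
    // acquired inside each synchronized StringBuffer.append() call
    // can be elided by the JIT.
    public static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();
        sb.append(a);
        sb.append(b);
        return sb.toString();
    }
}
```

The source code looks lock-free, but every append() is a synchronized method; elimination is what keeps such code from paying for locks nobody can contend.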

Lock coarsening

When writing code, we are always advised to keep synchronized blocks as small as possible. But if a series of operations repeatedly locks and unlocks the same object, even inside a loop body, the repeated locking causes unnecessary performance loss even when there is no thread contention. For this kind of code, the JVM coarsens the lock's scope and performs a single synchronization covering the whole sequence.
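A sketch of the pattern coarsening targets (names invented): each append() below locks and unlocks the same StringBuffer, and the JIT may merge those into one lock held across the entire loop. Again, this is a JVM decision, not something the source code controls.

```java
public class CoarseningDemo {
    public static String build(int n) {
        StringBuffer sb = new StringBuffer();
        // n separate lock/unlock pairs on sb; a coarsened version would
        // take sb's monitor once around the whole loop instead.
        for (int i = 0; i < n; i++) {
            sb.append(i);
        }
        return sb.toString();
    }
}
```

The behavior is identical either way; coarsening only trades many short critical sections for one longer one when that is cheaper.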

Lightweight locks

Lightweight locks aim to reduce, in the absence of multithreaded contention, the performance cost of traditional heavyweight locks that rely on OS mutexes. The object header in the JVM holds two parts of information. The first part stores the object's own runtime data and is called the "Mark Word"; it is the key to implementing lightweight locking. The other part is a pointer to the object's type data in the method area. When code enters a synchronized block and the object is unlocked (lock flag "01"), the JVM creates a lock record in the current stack frame, copies the Mark Word into it, and attempts a CAS to update the object's Mark Word to point to that lock record.

If the Mark Word update succeeds, the thread owns the object's lock and the lock flag becomes "00" (lightweight locked). If the update fails and the object's Mark Word does not point to the current thread's stack frame, the lock has already been taken by another thread. Once two or more threads contend for the same lock, the lightweight lock is no longer effective: the flag is set to "10" and the lock inflates into a heavyweight lock.

Lightweight locks rest on the empirical fact that, for most locks, there is no contention during the entire synchronization period, which lets them avoid the cost of a mutex. If there is contention, however, the cost of the mutex is paid anyway, plus the additional CAS operations on the mark bits, so the lightweight lock ends up slower.

Biased locking

Biased locking goes a step further: under no contention it eliminates the entire synchronization, not even performing the CAS update. The lock is biased toward the first thread that acquires it; if, during subsequent execution, the lock is never acquired by another thread, the biased thread never needs to synchronize again. As soon as another thread attempts to acquire the lock, the bias ends and the lock reverts to the normal path.

 

—————————————————————————–

For more interesting and original technical articles, scan our public account.

We focus on personal growth and game development, promoting the growth and progress of the domestic game-development community.