The reason for multithreading

With the advent of processes, operating system performance improved greatly. Although processes solved the operating system's concurrency problem, people were still not satisfied and gradually demanded better responsiveness as well.

One of the reasons for using multithreading is that, compared with processes, threads are a much cheaper, faster, and more “frugal” way to multitask.

Under Linux, starting a new process requires allocating it its own address space and creating numerous data tables to maintain its code, stack, and data segments, which makes it an “expensive” way of multitasking. Multiple threads running in one process share the same address space and most of the data: starting a thread takes far less space than starting a process, and switching between threads takes far less time than switching between processes.

Multithreading concurrency problems

Since multiple threads share the resources and address space of their parent process, a problem arises:

What if multiple threads want to access a resource at the same time?

In concurrent Java programming, it is common for multiple threads to access the same shared resource, so developers must consider how to maintain data consistency; this is where Java's locking mechanisms (synchronization) come in.

Java provides a variety of locking mechanisms for multithreading; the common ones are:

  • synchronized
  • ReentrantLock
  • Semaphore
  • AtomicInteger and the other atomic classes

Each mechanism has its advantages, disadvantages, and applicable scenarios; you must master their characteristics to use them fluently in Java multithreaded development.

Four Java thread locks (thread synchronization)

1.synchronized

The synchronized keyword is often used to maintain data consistency in Java.

The synchronized mechanism locks shared resources, and only the thread that obtains the lock can access the shared resources. In this way, the access to the shared resources is forced to be sequential.

Java developers are familiar with synchronized, and using it for thread synchronization is very simple: add the keyword to the method, class, or code block that needs synchronization, and it guarantees that at most one thread executes the synchronized code of the same object at any moment. The modified code executes without interference from other threads, with both atomicity and visibility. It is used very frequently in programs that require thread synchronization and meets most general synchronization needs.

```java
synchronized (obj) {
    // ... code that accesses the shared resource
}
```

The synchronized mechanism is implemented in software by the JVM, so its performance has improved along with Java versions.

In Java 1.6, synchronized received many optimizations, including adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks, which substantially improved its efficiency. Its implementation was optimized further in Java 1.7 and 1.8.

It is important to note that a thread blocked while waiting for a synchronized lock cannot be interrupted by Thread.interrupt(), so the program must be designed carefully to ensure this never causes trouble; otherwise it may lead to thread deadlock.

Finally, although Java implements many locking mechanisms, and some of them outperform synchronized, this keyword is highly recommended in multithreaded applications because it is easy to use and the JVM does the underlying work reliably. Consider other mechanisms such as ReentrantLock only when you have determined that locking is a performance bottleneck in the current multithreaded program.
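The guarantee described above can be seen in a minimal sketch (the class and method names here are illustrative, not from the original text): two threads hammer a shared counter, and because the increment is synchronized, no update is lost.

```java
// A minimal sketch: two threads each increment a shared counter n times.
// Because increment() is synchronized, the final count is always 2 * n.
public class SynchronizedCounter {
    private int count = 0;

    // At most one thread at a time may execute this on the same instance.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static int run(int n) throws InterruptedException {
        SynchronizedCounter c = new SynchronizedCounter();
        Runnable task = () -> {
            for (int i = 0; i < n; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return c.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(100_000)); // always 200000 with synchronization
    }
}
```

Remove the synchronized keyword from increment() and the result becomes nondeterministic, because `count++` is a read-modify-write that two threads can interleave.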

2.ReentrantLock

A reentrant lock, as its name implies, can be acquired multiple times by the same thread.

ReentrantLock implements the Lock interface and the methods it defines. In addition to doing everything synchronized can do, ReentrantLock also provides interruptible lock acquisition (lockInterruptibly()), polled lock requests (tryLock()), and timed lock attempts, all of which help avoid multithread deadlock.

The Lock implementation relies on hardware-level atomic instructions (such as compare-and-swap) rather than the JVM's monitor mechanism, so it can be considered largely independent of the JVM, and equivalents can be built on other platforms. In multithreaded applications with low contention, ReentrantLock performs about the same as synchronized, but under high contention synchronized performance can drop sharply, by tens of times, while ReentrantLock maintains roughly the same level.

Therefore, we recommend using ReentrantLock in high concurrency situations.

ReentrantLock introduces two concepts: fair and unfair locks.

A fair lock allocates the lock fairly: the thread that requested it first is granted it first. Conversely, a mechanism where the JVM grants the lock on an opportunistic, effectively random basis is called an unfair lock.

ReentrantLock's constructor takes a parameter that specifies whether the lock is fair; the default is unfair. This is because unfair locks actually perform much more efficiently than fair locks, so unless a program has a special need, the unfair allocation mechanism is used most often.
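The fairness choice is just a constructor argument, as this small sketch shows (the class name FairnessDemo is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// A minimal sketch of choosing the fairness policy in the constructor.
public class FairnessDemo {
    // Returns { default lock's fairness, explicitly-fair lock's fairness }.
    public static boolean[] fairness() {
        ReentrantLock unfair = new ReentrantLock();     // default: unfair
        ReentrantLock fair = new ReentrantLock(true);   // fair FIFO ordering
        return new boolean[] { unfair.isFair(), fair.isFair() };
    }

    public static void main(String[] args) {
        boolean[] f = fairness();
        System.out.println("default fair? " + f[0] + ", fair lock fair? " + f[1]);
    }
}
```

The isFair() query lets code verify which policy a given lock instance was built with.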

ReentrantLock locks and unlocks with the methods lock() and unlock(). Unlike synchronized, which the JVM unlocks automatically, ReentrantLock must be unlocked manually. To ensure the lock is released even when the code throws an exception, unlock() must be called in a finally block. The usual usage is as follows:

```java
Lock lock = new ReentrantLock();
lock.lock();
try {
    // ... perform the task
} finally {
    lock.unlock();
}
```
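The timed tryLock() mentioned above can be sketched as follows; this is an illustrative example (the class and method names are not from the original text) in which one thread holds the lock while another gives up after a bounded wait instead of blocking forever:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// A minimal sketch of a timed lock attempt with tryLock(timeout, unit).
public class TryLockDemo {
    // The calling thread holds the lock for the whole test, so the
    // second thread's timed tryLock must fail; returns that result.
    public static boolean tryWhileHeld() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                      // this thread holds the lock
        final boolean[] acquired = new boolean[1];
        try {
            Thread t = new Thread(() -> {
                try {
                    // Give up after 100 ms instead of blocking forever.
                    acquired[0] = lock.tryLock(100, TimeUnit.MILLISECONDS);
                    if (acquired[0]) {
                        lock.unlock();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.start();
            t.join();                     // lock is still held here
        } finally {
            lock.unlock();
        }
        return acquired[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(tryWhileHeld()); // false: the lock stayed held
    }
}
```

Note that a separate thread is needed for the demonstration precisely because the lock is reentrant: a tryLock() from the holding thread itself would simply succeed.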

3.Semaphore

Both locking mechanisms above are “mutexes”. Anyone who has studied operating systems knows that mutual exclusion is a special case of synchronization: there is only one critical resource, so at most one thread can be served at a time. In real, complex multithreaded applications, however, there may be multiple instances of a critical resource, and we can use Semaphore to control access to them.

Semaphore can do almost everything ReentrantLock does, with similar methods: acquire() and release() acquire and release a critical resource.

As verified by actual testing, Semaphore's acquire() method responds to interruption by default, behaving the same as Lock.lockInterruptibly(); that is, a thread waiting for the critical resource can be interrupted by Thread.interrupt().

Semaphore also implements polled and timed lock requests, almost identical to ReentrantLock's except that the method is named tryAcquire() rather than tryLock(). Semaphore likewise provides fair and unfair mechanisms, which can also be set in the constructor.

Semaphore's release is also manual, so, as with ReentrantLock, release() must be called in a finally block to prevent a thread that throws an exception from failing to release its permit.
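The "multiple critical resources" idea boils down to a permit count. A minimal sketch (the class and method names here are illustrative): with two permits, a third acquisition must wait until someone releases.

```java
import java.util.concurrent.Semaphore;

// A minimal sketch: a Semaphore with 2 permits models 2 identical
// critical resources; a third acquire cannot proceed until a release.
public class SemaphoreDemo {
    // Returns { tryAcquire result while both permits are taken,
    //           tryAcquire result after one permit is released }.
    public static boolean[] demo() throws InterruptedException {
        Semaphore sem = new Semaphore(2);
        sem.acquire();                        // take first permit
        sem.acquire();                        // take second permit
        boolean whileFull = sem.tryAcquire(); // no permits left
        sem.release();                        // give one back
        boolean afterRelease = sem.tryAcquire();
        return new boolean[] { whileFull, afterRelease };
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = demo();
        System.out.println("while full: " + r[0] + ", after release: " + r[1]);
    }
}
```

In a real application the acquire/release pairs would live in different threads, each wrapping its work in try/finally so the permit is always returned.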

4.AtomicInteger

First of all, AtomicInteger is one representative of a family of similar classes, which also includes AtomicLong, AtomicBoolean, and others. Their implementation principle is the same; they differ only in the type of the value they operate on.

We know that in multithreaded programs, operations such as ++i or i++ are not atomic and are therefore thread-unsafe. Normally we would use synchronized to make them atomic, but the JDK provides dedicated synchronization classes for exactly this kind of operation, which are easier to use and more efficient. According to some measurements, AtomicInteger usually performs several times better than ReentrantLock.
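The lock-free replacement for a synchronized `count++` is a one-line change, sketched here (the class and method names are illustrative): incrementAndGet() is the atomic equivalent of ++i.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch: the same two-thread counter as before, but using
// AtomicInteger instead of a synchronized method. No lock is needed;
// each incrementAndGet() is a single atomic read-modify-write.
public class AtomicCounter {
    public static int run(int n) throws InterruptedException {
        AtomicInteger count = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < n; i++) {
                count.incrementAndGet();  // atomic ++i
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(100_000)); // always 200000, with no lock
    }
}
```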

Java thread lock summary

1.synchronized:

synchronized is appropriate when contention for the resource is low and synchronization is only occasional. The reason is that the compiler and JVM usually optimize synchronized as much as possible, and it is very readable.

2.ReentrantLock:

When resource contention is low, its performance is slightly worse than synchronized's. But when synchronization is intense, synchronized performance can drop by tens of times, while ReentrantLock maintains its normal performance.

Use ReentrantLock in high concurrency situations.

3.Atomic:

Similar to the above: performance is slightly inferior to synchronized under low contention but stays normal under intense contention, and under heavy contention the Atomic classes perform about twice as well as ReentrantLock. However, they have a drawback: each one synchronizes only a single value. A piece of code can rely on only one Atomic variable; using several together is not safe, because atomicity cannot be maintained across multiple Atomic objects.

Therefore, when writing synchronization code we favor synchronized, and optimize further only when there is a specific need. Used incorrectly, ReentrantLock and the Atomic classes not only fail to improve performance but can lead to disaster.