In Java concurrency, many kinds of locks are involved, such as fair locks, optimistic locks, pessimistic locks, and so on. This article introduces the following types of locks:

Fair lock/unfair lock

Reentrant lock

Exclusive lock/shared lock

Optimistic lock/pessimistic lock

Segmented lock

Spin lock

Thread lock

Optimistic locks vs. pessimistic locks

Optimistic locking and pessimistic locking are broad concepts that reflect different perspectives on thread synchronization, and both have practical applications in Java and in databases.

1. Optimistic locking

As the name implies, optimistic locking is optimistic: every time it reads data, it assumes nobody else will modify it, so it does not take a lock. Only when it updates the data does it check whether anyone else has updated it in the meantime, typically using a version number or a similar mechanism.

Optimistic locking suits read-heavy applications. In Java it is implemented with lock-free programming, most commonly the CAS algorithm; for example, the increment operations in Java's atomic classes are implemented with a CAS spin loop.

CAS, short for Compare And Swap, is a lock-free algorithm: it synchronizes a variable between multiple threads without using locks, so no thread is blocked. The atomic classes in the java.util.concurrent package use CAS to implement optimistic locking.

In simple terms, the CAS algorithm has three operands:

  • The memory value V that needs to be read or written.
  • The expected value A for comparison.
  • The new value B to write.

CAS changes the memory value V to B if and only if the expected value A and the memory value V are the same; otherwise it leaves V unchanged and reports the current value, and the caller typically retries. This is the optimistic approach: it believes no other thread will change the value before the swap completes. synchronized, by contrast, is a pessimistic lock: it assumes other threads will modify the data before it can, so it locks first, which is comparatively inefficient.
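For illustration, here is a minimal sketch of the CAS spin described above, using AtomicInteger.compareAndSet from java.util.concurrent.atomic (the class name CasSpinDemo and the method casIncrement are ours, not a library API):

import java.util.concurrent.atomic.AtomicInteger;

public class CasSpinDemo {
    private static final AtomicInteger counter = new AtomicInteger(0);

    // Increment via an explicit CAS spin loop, mirroring what
    // AtomicInteger.incrementAndGet() does internally.
    static int casIncrement() {
        for (;;) {
            int current = counter.get();   // read the memory value V
            int next = current + 1;        // the new value B
            // write B only if V still equals the expected value A (here, current)
            if (counter.compareAndSet(current, next)) {
                return next;
            }
            // another thread changed V first; spin and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) casIncrement();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // always 40000, with no thread ever blocked
    }
}

No thread blocks at any point; a losing thread simply retries, which is why CAS suits workloads with few conflicts.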

2. Pessimistic locking

Pessimistic locking always assumes the worst: every time it reads data, it assumes someone else will modify it, so it locks the data on every access; anyone else who wants the data then blocks until the lock is released.

Traditional relational databases such as MySQL make heavy use of this kind of locking mechanism, for example row locks, table locks, read locks, and write locks, all of which take the lock before the operation.

Compared with other databases, MySQL's locking mechanism is relatively simple. Its most distinctive feature is that different storage engines support different locking mechanisms.

Such as:

  1. The MyISAM and MEMORY storage engines use table-level locking.
  2. The InnoDB storage engine supports both row-level and table-level locking, and uses row-level locking by default.

The features of the two main types of MySQL locks can be summarized as follows:

  • Table lock: low overhead, fast to lock; no deadlocks (because MyISAM acquires all the locks a statement needs at once); large lock granularity, so the probability of lock conflicts is high and concurrency is low.
  • Row-level lock: high overhead, slow to lock; deadlocks can occur; the smallest lock granularity, so the probability of lock conflicts is lowest and concurrency is highest.
  • Page lock: locking overhead and speed between those of table and row locks; deadlocks can occur; granularity between table and row locks, with average concurrency.

Row and table locks

1. Lock granularity is mainly divided into row locks, table locks, and database locks

(1) Row lock: when accessing the database, lock the entire row of data to prevent concurrency errors.

(2) Table lock: when accessing the database, lock the entire table to prevent concurrency errors.

2. Row locks vs. table locks:

  • Table lock: low overhead, fast to lock, no deadlocks; large lock granularity, high probability of lock conflicts, low concurrency.
  • Row lock: high overhead, slow to lock, deadlocks can occur; small lock granularity, low probability of lock conflicts, high concurrency.

Pessimistic locks and optimistic locks

(1) Pessimistic lock: as the name implies, it is pessimistic. Every time it reads data, it assumes others will modify it, so it locks the data on every read; anyone else who wants the data then blocks until the lock is released.

Traditional relational databases use this kind of locking mechanism heavily, such as row locks, table locks, read locks, and write locks, all of which take the lock before the operation.

(2) Optimistic lock: as the name implies, it is optimistic. Every time it reads data, it assumes others will not modify it, so it does not lock. When updating, it checks whether anyone else has updated the data in the meantime, typically using a version number or a similar mechanism.

Optimistic locks suit read-heavy applications and improve throughput. For example, a database that provides a mechanism similar to write_condition is providing an optimistic lock.

(3) The difference between pessimistic lock and optimistic lock:

Each kind of lock has its advantages and disadvantages; neither is simply better than the other. An optimistic lock suits write-light workloads, that is, where conflicts genuinely rarely occur: it saves the locking overhead and increases overall system throughput. But if conflicts are frequent, the upper-layer application keeps retrying, which degrades performance; in that case a pessimistic lock is more appropriate.
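As an illustration of the version-number mechanism mentioned above, here is a hedged Java sketch (the VersionedAccount class is hypothetical, purely for illustration): read the current state, compute a new one, and commit only if nothing has changed in the meantime, retrying on conflict.

import java.util.concurrent.atomic.AtomicReference;

public class VersionedAccount {
    // Immutable snapshot: the version stamps the data it travels with.
    private static final class State {
        final long version;
        final long balance;
        State(long version, long balance) {
            this.version = version;
            this.balance = balance;
        }
    }

    private final AtomicReference<State> state =
            new AtomicReference<>(new State(0, 0));

    // Optimistic update: no lock is held while reading or computing;
    // the commit succeeds only if the snapshot is still current.
    public void deposit(long amount) {
        for (;;) {
            State current = state.get();
            State next = new State(current.version + 1, current.balance + amount);
            if (state.compareAndSet(current, next)) {
                return; // no conflict during this period
            }
            // someone else committed first: reread and retry
        }
    }

    public long balance() {
        return state.get().balance;
    }
}

Under frequent conflicts, every failed compareAndSet becomes a retry, which is exactly the degradation the paragraph above describes.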

Shared lock

A shared lock means multiple different transactions share the same lock on the same resource. It is like having multiple keys to the same door: you have one key, your girlfriend has one key, and both of you can use your key to get into the house. That is a shared lock.

A shared lock is also a kind of pessimistic lock. So how is a shared lock requested in MySQL? By appending LOCK IN SHARE MODE to the end of the statement, for example SELECT ... LOCK IN SHARE MODE, which locks the selected resource in share mode.

When to use table locks

For InnoDB tables, row-level locking should be used in most cases, since transactions and row locking are usually the very reasons for choosing InnoDB. However, table-level locking can be considered for specific transactions.

  • The first case: the transaction needs to update most or all of the data in a large table. With the default row locks, the transaction executes slowly and may make other transactions wait a long time and produce lock conflicts. Here a table lock can be considered to speed up the transaction.
  • The second case: the transaction involves multiple tables, is complex, and is likely to cause deadlocks and large numbers of rollbacks. Here you can consider locking all the tables the transaction involves at once, avoiding deadlocks and reducing the database overhead of transaction rollback.

Of course, such transactions should not be too numerous in your application; otherwise you should consider using MyISAM tables instead.

Table and row locks suit the following scenarios:

  • Table-level locking suits applications that are mostly queries with only a small number of updates and no high concurrency, such as small Web applications;
  • Row-level locking suits systems with high concurrency and high requirements on transaction integrity, such as online transaction processing systems.

Java's synchronized keyword, mentioned above, is a classic implementation of pessimistic locking.

3. The bottom line:

  • Pessimistic locking suits write-heavy scenarios; it guarantees data correctness during write operations.
  • Optimistic locking suits read-heavy scenarios; being lock-free, it greatly improves read performance.

Fair locks vs. unfair locks

1. Fair lock

A fair lock is just that: fair. In a concurrent environment, each thread that tries to acquire the lock first checks the wait queue the lock maintains. If the queue is empty, or the current thread is first in the queue, it takes the lock; otherwise it joins the wait queue and later receives the lock from the queue in FIFO order.

The advantage of a fair lock is that threads waiting for the lock do not starve. The disadvantage is that overall throughput is lower than with an unfair lock: every thread in the wait queue except the first is blocked, and waking blocked threads costs the CPU more than with an unfair lock.

2. Unfair lock

An unfair lock first tries to seize the lock directly; only if that fails does it fall back to something like the fair lock's behavior, joining the wait queue.

The advantage of an unfair lock is that it reduces the overhead of waking threads, and overall throughput is higher, because a thread may get the lock without blocking at all, so the CPU does not have to wake every waiting thread. The downside is that threads in the wait queue may starve or wait a very long time to acquire the lock.

3. Typical Applications

A boolean argument to the ReentrantLock constructor specifies whether to create a fair or an unfair lock (the default is unfair). For example, new ReentrantLock(true) creates a fair lock.
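A minimal sketch (the class name FairnessDemo is ours):

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // ReentrantLock(boolean fair): true requests a fair (FIFO) lock;
    // the no-argument constructor creates an unfair lock.
    private static final ReentrantLock fairLock = new ReentrantLock(true);
    private static final ReentrantLock unfairLock = new ReentrantLock();

    public static void main(String[] args) {
        System.out.println(fairLock.isFair());   // true
        System.out.println(unfairLock.isFair()); // false
    }
}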

Exclusive locks vs. shared locks

1. Exclusive lock

The lock can only be held by one thread at a time.

2. Shared lock

The lock can be held by multiple threads.

3. Comparison

Java's ReentrantLock is an exclusive lock. ReentrantReadWriteLock, on the other hand, which implements the ReadWriteLock interface, has a shared read lock and an exclusive write lock.

The shared read lock makes concurrent reads very efficient, while read-write, write-read, and write-write accesses are mutually exclusive.
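A small cache sketch (the RwCache class is ours for illustration) shows the shared read lock and the exclusive write lock in use:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public V get(K key) {
        rw.readLock().lock();      // shared: readers do not block each other
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rw.writeLock().lock();     // exclusive: blocks all readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}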

Both exclusive and shared locks are implemented through AQS; by implementing different AQS methods, a component becomes exclusive or shared.

4. AQS

AbstractQueuedSynchronizer (AQS) is a framework for building locks and other synchronization components. It uses a volatile int variable named state to maintain the synchronization state, and a built-in FIFO queue to line up the threads waiting to acquire the resource.

The java.util.concurrent package is built in layers: at the base are AQS, non-blocking data structures, and atomic variable classes, implemented on top of volatile reads/writes and CAS; on these base classes sit high-level facilities such as Lock implementations, synchronizers, blocking queues, Executor, and concurrent containers.
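As a sketch of how AQS is used, here is a minimal non-reentrant mutex in the style of the example in the AbstractQueuedSynchronizer javadoc: state 0 means free, state 1 means held.

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int unused) {
            // CAS the synchronization state from 0 to 1 to take the lock.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false; // AQS then parks this thread in the FIFO wait queue
        }

        @Override
        protected boolean tryRelease(int unused) {
            setExclusiveOwnerThread(null);
            setState(0); // free the lock; AQS wakes the next queued thread
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}

Exclusive components override tryAcquire/tryRelease as above; shared components such as Semaphore override tryAcquireShared/tryReleaseShared instead.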

Segmented lock

Segmented locking is actually a lock design rather than a specific lock. ConcurrentHashMap achieves efficient concurrent operations through a form of segmented locking.

ConcurrentHashMap's Segment (the JDK 7 design; JDK 8 replaced segments with CAS plus synchronized on bucket heads) is similar in structure to a HashMap: internally it holds an array of entries, and each element of the array is a linked list. A Segment is also a ReentrantLock (Segment extends ReentrantLock).

When an element is put, the whole map is not locked; the element's hashCode first determines which segment it belongs to, and only that segment is locked. As long as multiple threads put elements into different segments, the inserts proceed truly in parallel.

However, obtaining global information about the map requires taking all of the segment locks before the statistics can be computed.

Segment locking is designed to refine the granularity of the lock. When an operation does not need to update the entire array, only one item in the array is locked.


Java thread lock

Multithreaded resource sharing

Thread deadlock

Choosing a lock

Since multiple threads share their process's resources and address space, a problem arises:

What if multiple threads want to access a resource at the same time?

In concurrent Java programming, it is common for multiple threads to access the same shared resource, and developers must consider how to maintain data consistency. This is where Java's locking mechanisms (synchronization) come in.

Java provides a variety of locking mechanisms for multithreading; common ones are:

  1. synchronized
  2. ReentrantLock
  3. Semaphore
  4. AtomicInteger etc.

Each mechanism has its advantages, disadvantages, and suitable scenarios; you must master their characteristics to use them fluently in Java multithreaded development.

Four Java thread locks (thread synchronization)

1.synchronized

The synchronized keyword is often used to maintain data consistency in Java.

The synchronized mechanism locks the shared resource: only the thread that holds the lock can access it, so access to the shared resource is forced to be sequential.

Java developers are familiar with synchronized, and using it for multithreaded synchronization is very simple: add the keyword to the method, class, or code block that needs synchronizing, and at most one thread can execute the synchronized code of the same object at any moment. The modified code is guaranteed to execute without interference from other threads. synchronized code has atomicity and visibility; it is used very frequently in programs that need thread synchronization and meets most ordinary synchronization requirements.

synchronized (obj) {
    // synchronized code
    // ...
}
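Besides the block form above, the keyword can also modify an instance method (locking on this) or a static method (locking on the Class object); a brief sketch (the class name SyncForms is ours):

public class SyncForms {
    private int count;

    // 1. Instance method: the monitor is `this`
    public synchronized void increment() {
        count++;
    }

    // 2. Static method: the monitor is SyncForms.class
    public static synchronized void staticWork() {
        // ...
    }

    // 3. Block: the monitor is an explicit object
    private final Object lock = new Object();

    public void incrementWithBlock() {
        synchronized (lock) {
            count++;
        }
    }
}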

The synchronized mechanism is implemented in software by the JVM, so its performance improves with newer Java versions.

Java 1.6 heavily optimized synchronized with adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks, substantially improving its efficiency; the implementation was optimized further in Java 1.7 and 1.8.

It is important to note that a thread blocked waiting for a synchronized lock cannot be interrupted by Thread.interrupt(), so the program must be designed carefully to ensure this is acceptable; otherwise thread deadlock may result.

Finally, although Java has many locking mechanisms and some outperform synchronized, using this keyword in multithreaded applications is highly recommended because it is easy to use and the JVM does the subsequent work reliably. Consider other mechanisms, such as ReentrantLock, only when you have determined that locking is the performance bottleneck of your multithreaded program.

2.ReentrantLock

A reentrant lock, as its name implies, can be acquired again, multiple times, by the thread that already holds it.

ReentrantLock implements the Lock interface and the methods it defines. Besides doing everything synchronized can do, it also provides interruptible lock acquisition, polled lock requests, timed lock waits, and other methods that help avoid multithread deadlock.
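For example, a sketch of the timed and interruptible acquisition methods (the class name TryLockDemo is ours):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void timedWork() throws InterruptedException {
        // Polled, timed acquisition: give up after 1 second instead of
        // blocking forever, which is one way to sidestep deadlock.
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // ... critical section
            } finally {
                lock.unlock();
            }
        } else {
            // could not get the lock in time: back off, log, or retry later
        }
    }

    public static void interruptibleWork() throws InterruptedException {
        // Waiting here can be cancelled by Thread.interrupt(),
        // unlike waiting on a synchronized monitor.
        lock.lockInterruptibly();
        try {
            // ... critical section
        } finally {
            lock.unlock();
        }
    }
}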

Lock's implementation mechanism relies on specific CPU instructions and can be considered independent of the JVM; the underlying implementation could be done on other language platforms. In multithreaded applications with low contention, ReentrantLock performs almost the same as synchronized, but under high contention synchronized performance can drop rapidly by tens of times, while ReentrantLock maintains its level.

Therefore, we recommend using ReentrantLock in high concurrency situations.

ReentrantLock introduces two concepts: fair and unfair locks.

A fair lock means the lock allocation mechanism is fair: the thread that requested the lock first is usually granted it first. Conversely, a mechanism in which the JVM grants the lock randomly, to whichever thread grabs it first, is called an unfair lock.

ReentrantLock's constructor takes a flag for whether the lock is fair; the default is unfair. Unfair locks actually execute far more efficiently than fair locks, so unless the program has a special need, the unfair allocation mechanism is used most often.

ReentrantLock locks and unlocks with the methods lock() and unlock(). Unlike synchronized, which the JVM unlocks automatically, ReentrantLock must be unlocked manually. To keep an exception from preventing the unlock, the ReentrantLock must be released in a finally block. The usual usage is as follows:

Lock lock = new ReentrantLock();
lock.lock();
try {
    // ... perform the task
} finally {
    lock.unlock();
}

3.Semaphore

Both of the locking mechanisms above are "mutual exclusion" locks. Anyone who has studied operating systems knows that a mutex is a special case of thread synchronization: it is equivalent to having only a single critical resource, so at most one thread can be served at a time. In real, complex multithreaded applications there may be multiple copies of a critical resource, and then a Semaphore (counting semaphore) can manage access to them.

Semaphore does almost all the work of ReentrantLock and is used in a similar way, with acquire() and release() to acquire and release the critical resource.

Testing shows that Semaphore.acquire() responds to interruption by default, behaving the same as ReentrantLock.lockInterruptibly(): a thread waiting for the critical resource can be interrupted by Thread.interrupt().

Semaphore also implements polled and timed lock requests, almost identically to ReentrantLock, except that the method is named tryAcquire rather than tryLock. Semaphore likewise provides fair and unfair modes, which can also be set in the constructor.

Semaphore's release is also manual, so as with ReentrantLock, the release must happen in a finally block to prevent a thread that throws an exception from failing to release its permit.
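A minimal usage sketch (the class name SemaphoreDemo and the permit count of 3 are ours for illustration):

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // Three interchangeable copies of a critical resource,
    // e.g. pooled connections. new Semaphore(3, true) would make it fair.
    private static final Semaphore permits = new Semaphore(3);

    public static void useResource() throws InterruptedException {
        permits.acquire();         // blocks until one of the 3 permits is free
        try {
            // ... use one copy of the resource
        } finally {
            permits.release();     // manual release, always in finally
        }
    }
}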

4.AtomicInteger

First of all, AtomicInteger is one representative of a family of classes that also includes AtomicLong, AtomicBoolean, and others; their implementation principle is the same, and they differ only in the type of the value they operate on.

We know that in multithreaded programs, operations such as ++i or i++ are not atomic and are therefore thread-unsafe. Normally we would use synchronized to make them atomic, but the JDK provides synchronization classes specifically for this kind of operation that are easier to use and make the program run more efficiently. According to some benchmarks, AtomicInteger usually performs several times better than ReentrantLock.
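A brief sketch of the contrast (the class name CounterDemo is ours):

import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    private int plain;                                    // plain++ is NOT atomic
    private final AtomicInteger atomic = new AtomicInteger();

    // Read-modify-write without synchronization: concurrent calls can lose updates.
    public void unsafeIncrement() { plain++; }

    // CAS-based and lock-free: safe under concurrent calls.
    public void safeIncrement() { atomic.incrementAndGet(); }
}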

Java thread lock summary

1. synchronized:

synchronized is appropriate when competition for a resource is light and synchronization is only occasional: the JVM optimizes synchronized as much as possible, and the code remains very readable.

2.ReentrantLock:

Under light contention, its performance is slightly worse than synchronized. But when contention is intense, synchronized performance drops tens of times, while ReentrantLock maintains its normal performance.

Use ReentrantLock in high concurrency situations.

3.Atomic:

Similar to the above: under light contention, performance is slightly worse than synchronized, but under intense contention it maintains its normal level, performing about twice as well as ReentrantLock. Its disadvantage is that only a single value can be kept atomic: only one Atomic variable's operations are atomic at a time, so a piece of logic involving several Atomic variables is not synchronized as a whole, because atomicity cannot span multiple Atomic objects.

Therefore, we favor synchronized when writing synchronization code and optimize further only when there is a special need. ReentrantLock and Atomic, if used incorrectly, may not only fail to improve performance but lead to disaster.

The above is a detailed explanation of Java thread locks. Beyond handling high concurrency at the programming level, it also needs to be addressed at the architecture level, for example with Redis caching, CDNs, asynchronous messaging, and so on.