Preface

Synchronized and ReentrantLock are the two most commonly used local locks in Java. In early JDK versions, ReentrantLock performed far better than Synchronized. In later iterations Synchronized was heavily optimized, and since JDK 1.6 the performance of the two has been comparable; Synchronized even has the advantage of releasing the lock automatically.

When the choice between Synchronized and ReentrantLock comes up in an interview, many people blurt out "Synchronized". Even when I ask interviewees about it, few can explain the reasoning. If you only care about the question in the title, you can skip straight to the end; this is not clickbait.

Using Synchronized

The use of synchronized in Java code is fairly straightforward:

  • 1. Apply it directly to a method
  • 2. Apply it to a code block (both forms are sketched below)
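
A minimal sketch of the two forms (the class, field, and method names here are mine, just for illustration):

```java
public class Inventory {
    private final Object stockLock = new Object();
    private int stock;

    // 1. Applied directly to the method: locks on `this`
    public synchronized void restock(int amount) {
        stock += amount;
    }

    // 2. Applied to a code block: locks only on the chosen object
    public void sell(int amount) {
        synchronized (stockLock) {
            stock -= amount;
        }
    }
}
```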

What happens to the Synchronized code while the program is running?

Let’s look at a picture

In a multithreaded program, each thread tries to grab the object's monitor, which is unique to that object. It is effectively a key: whoever grabs the monitor gains the right to execute the current block of code.

Threads that fail to grab it wait in the monitor's entry queue; the lock is released when the current thread finishes executing.

Finally, when the current thread finishes, the waiting threads are notified so that one of them can leave the queue and continue execution.

From the JVM's perspective, the monitorenter and monitorexit instructions mark the entry to and the exit from the synchronized block.
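
A small runnable sketch (the names are mine) that shows the monitor being acquired and released around a synchronized block; Thread.holdsLock reports whether the current thread owns an object's monitor:

```java
public class MonitorCheck {
    private static final Object LOCK = new Object();

    public static void main(String[] args) {
        System.out.println(Thread.holdsLock(LOCK));     // false: monitor not held yet
        synchronized (LOCK) {                            // monitorenter in the bytecode
            System.out.println(Thread.holdsLock(LOCK)); // true: this thread owns the monitor
        }                                                // monitorexit in the bytecode
        System.out.println(Thread.holdsLock(LOCK));     // false again: lock released
    }
}
```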

SynchronousQueue:

A SynchronousQueue is a special queue with no storage capacity. Its job is to hand work off between threads: each insert operation must wait for a corresponding remove by another thread, and each remove waits for an insert. The queue therefore never actually holds an element (its capacity is effectively zero), so strictly speaking it is not a container. Because it has no capacity, you cannot peek at it: an element is only present at the moment it is being removed.

Here’s an example:

When drinking, wine is usually poured from the bottle into a serving vessel first and then into the glass; that is an ordinary queue, which holds the wine in between.

If instead you pour from the bottle straight into the glass, that is a SynchronousQueue: a direct hand-off with nothing stored in the middle.

This example should make it clear. The advantage is that the element is handed over directly, with no intermediate container needed to pass it along.
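
A minimal runnable sketch of this direct hand-off (the thread and value are made up) using java.util.concurrent.SynchronousQueue:

```java
import java.util.concurrent.SynchronousQueue;

public class HandOffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        Thread producer = new Thread(() -> {
            try {
                // put() blocks until another thread is ready to take()
                queue.put("a glass of wine");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        System.out.println(queue.peek());  // null: the queue never stores anything
        System.out.println(queue.take());  // "a glass of wine": the direct hand-off
        producer.join();
    }
}
```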

Now for the details: the lock escalation process

Before JDK 1.6, Synchronized was always a heavyweight lock. Here is a picture first.

This is why Synchronized was a heavyweight lock: each lock was requested directly from the operating system as a mutex, which means switching between user mode and kernel mode, and blocking and waking threads this way is a very time-consuming operation.

In JDK 1.6, however, the JVM introduced many optimizations, producing the process known as lock upgrading.

This is the lock upgrade process; let's go through it briefly:

  • Unlocked: the object starts out with no lock at all.
  • Biased lock: this is equivalent to attaching a tag to the object (storing the owning thread's ID in the object header). The next time the same thread comes in and sees that the tag is its own, it can take the lock again with no extra work.
  • Spin lock: imagine a toilet with someone in it. You want to go but there is only one stall, so you hang around until the person comes out and then use it. The spinning is done atomically with CAS, which I won't go into here.
  • Heavyweight lock: the lock falls back to an operating-system mutex, and all other threads wait in a queue.

When does lock escalation occur?

  • Biased lock: when a thread acquires the lock for the first time, it is upgraded from unlocked to biased
  • Spin lock: upgraded from a biased lock to a spin lock when thread contention occurs. Think of while(true);
  • Heavyweight lock: promoted to a heavyweight lock when threads have competed for it more than a certain number of times or for longer than a certain time

Where is the lock information recorded?

This figure shows the layout of the mark word in the object header, which is where the lock information is stored. It clearly shows how the lock information changes as the lock is upgraded: the object is marked with a few bits, and each bit pattern represents a state.
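
If you want to see the mark word for yourself, one option (a sketch that assumes the OpenJDK JOL library, org.openjdk.jol:jol-core, is on the classpath) is to print an object's header layout before and after locking:

```java
import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object obj = new Object();

        // Header before any locking: the unlocked bit pattern
        System.out.println(ClassLayout.parseInstance(obj).toPrintable());

        synchronized (obj) {
            // Inside the block the lock bits in the mark word change
            // (thin/lightweight or heavyweight, depending on contention and JVM version)
            System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        }
    }
}
```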

What about lock downgrading in Synchronized?

This question is very relevant to our topic.

Lock degradation does occur in the HotSpot virtual machine, but only during a stop-the-world pause, and it is only observed by the garbage collection thread. In other words, lock degradation does not happen during normal use; it happens only during GC.

So now do you see the answer to that question? Haha, let's move on.

Using ReentrantLock

The use of ReentrantLock is also fairly straightforward. Unlike Synchronized, it requires you to release the lock manually, so it is usually used together with try~finally to make sure the lock is always released.
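
A minimal sketch of the usual pattern (the class and field names are mine):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();          // acquire the lock explicitly
        try {
            count++;          // critical section
        } finally {
            lock.unlock();    // always release, even if the body throws
        }
    }
}
```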

The principle of ReentrantLock

ReentrantLock is, as the name says, a re-entrant lock, and when talking about ReentrantLock we have to talk about AQS, because its underlying implementation is built on AQS.

ReentrantLock has two modes: a fair lock and an unfair lock.

  • In fair mode, waiting threads acquire the lock in strict queue order
  • In unfair mode, a newly arriving thread may jump the queue ahead of threads that are already waiting (see the snippet below)
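
Which mode you get is decided at construction time (a small sketch; the field names are mine):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // The default constructor creates an unfair (barging) lock
    private final ReentrantLock unfairLock = new ReentrantLock();

    // Passing true requests the fair, strictly FIFO mode
    private final ReentrantLock fairLock = new ReentrantLock(true);
}
```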

This is the structure of ReentrantLock. Looking at this diagram it is actually quite simple, because the main work is done by AQS, so we will focus on AQS.

AQS

AQS (AbstractQueuedSynchronizer): AQS can be understood as a framework on which locks and other synchronizers can be built.

A simplified view of the process:

Fair lock:

  • Step 1: Read the value of state.
    • If state = 0, the lock is not held by any thread; proceed to step 2.
    • If state != 0, the lock is currently held by some thread; proceed to step 3.
  • Step 2: Check whether any threads are already waiting in the queue.
    • If not, set the owner of the lock to the current thread and update state.
    • If there are, join the queue.
  • Step 3: Check whether the lock is already owned by the current thread.
    • If so, increment state (this is the re-entrant case).
    • If not, the thread joins the queue and waits (see the sketch after this list).
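
To make the flow concrete, here is a stripped-down sketch of a fair, re-entrant lock built on AQS. It is modeled loosely on the JDK's FairSync, but it is my own simplified version, not the actual ReentrantLock source:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class FairMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            Thread current = Thread.currentThread();
            int c = getState();
            if (c == 0) {
                // Fair: only take the lock if nobody is queued ahead of us
                if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            } else if (current == getExclusiveOwnerThread()) {
                setState(c + acquires);   // re-entrant: bump the hold count
                return true;
            }
            return false;                  // AQS will queue this thread
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (Thread.currentThread() != getExclusiveOwnerThread()) {
                throw new IllegalMonitorStateException();
            }
            int c = getState() - releases;
            boolean free = (c == 0);
            if (free) {
                setExclusiveOwnerThread(null);
            }
            setState(c);
            return free;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```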

Unfair lock:

  • Step 1: Read the value of state.

    • If state = 0, the lock is not held by any thread, so try to set the lock holder to the current thread right away. This is done with CAS.
    • If state is not 0, or the CAS fails, the lock is occupied; go to the next step.
  • Step 2: Read the value of state again.

    • If it is 0, the lock happens to have just been released, so set the lock holder to the current thread.
    • If not, check whether the owner of the lock is the current thread.
      • If so, increment state and the lock is acquired (the re-entrant case).
      • If not, the thread waits in the queue (see the sketch after this list).
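
The unfair version differs only in the acquisition path: it barges with a CAS before even looking at the queue. A condensed sketch of that tryAcquire (again my own simplified version, not the JDK source), written as a drop-in replacement for FairMutex.Sync.tryAcquire above:

```java
// Unfair variant: does not check hasQueuedPredecessors(), so a new arrival
// can barge in ahead of threads that are already queued.
@Override
protected boolean tryAcquire(int acquires) {
    Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // Barge: try to grab the lock immediately, ignoring the queue
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        setState(c + acquires);   // re-entrant hold
        return true;
    }
    return false;                  // fall back to queueing in AQS
}
```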

After reading the above section, you should have a clear idea of AQS, so let’s talk about the small details.

AQS maintains a synchronization state field called state (0 means the lock is free; a non-zero value means it is held) and exposes the getState, setState, and compareAndSetState operations to read and update it, so that the state is set to a new value atomically only if it currently holds the expected value.

When a thread fails to acquire the lock, AQS adds it to the tail of a doubly linked synchronization queue, through which it manages the blocked threads and the synchronization state.

This is the code that defines the head and tail nodes. They are declared volatile so that changes to them are immediately visible to other threads. AQS completes enqueue and dequeue operations by modifying the head and tail nodes.
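
The declarations look roughly like this (paraphrased from the OpenJDK AQS sources; the exact code differs between JDK versions):

```java
// Head of the wait queue, lazily initialized; only updated by the thread
// that acquires the lock when it dequeues itself
private transient volatile Node head;

// Tail of the wait queue, lazily initialized; new waiting threads
// are appended here with a CAS
private transient volatile Node tail;
```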

AQS does not force a single-holder model, so it distinguishes between exclusive mode and shared mode. In this article, ReentrantLock uses exclusive mode: among many threads, only one will hold the lock at a time.

Exclusive mode is relatively simple: whether a thread can take the lock is decided by whether state is 0. If acquisition fails, the thread blocks; if it succeeds, the thread continues with the subsequent code logic.

In shared mode, whether a thread may proceed depends on whether state is greater than 0. If not, it blocks; if it is, the thread decrements state with an atomic CAS operation and then continues with the subsequent code logic (see the sketch below).
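
A minimal sketch of shared mode under the same assumptions (my own simplified permit counter in the spirit of Semaphore, not JDK source):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimplePermits {
    private static class Sync extends AbstractQueuedSynchronizer {
        Sync(int permits) { setState(permits); }

        @Override
        protected int tryAcquireShared(int acquires) {
            for (;;) {
                int available = getState();
                int remaining = available - acquires;
                // A negative result tells AQS to queue the thread; otherwise CAS in the new count
                if (remaining < 0 || compareAndSetState(available, remaining)) {
                    return remaining;
                }
            }
        }

        @Override
        protected boolean tryReleaseShared(int releases) {
            for (;;) {
                int current = getState();
                if (compareAndSetState(current, current + releases)) {
                    return true;   // lets AQS wake up queued threads
                }
            }
        }
    }

    private final Sync sync;

    public SimplePermits(int permits) { this.sync = new Sync(permits); }

    public void acquire() { sync.acquireShared(1); }
    public void release() { sync.releaseShared(1); }
}
```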

The difference between ReentrantLock and Synchronized

  • In fact, the most important difference is this: Synchronized suits situations with little contention, because once its lock has been upgraded (ultimately to a heavyweight lock), the upgrade cannot be undone while the lock is in use. ReentrantLock, on the other hand, is built around blocking: under high concurrency it parks waiting threads, which reduces contention and keeps throughput up. So the answer to our title question is obvious.

  • Synchronized is a keyword that is implemented at the JVM level, while ReentrantLock is implemented by the Java API.

  • Synchronized is an implicit lock that can be released automatically, and ReentrantLock is an explicit lock that needs to be released manually.

  • ReentrantLock can interrupt a thread waiting for a lock, but synchronized cannot. With synchronized, waiting threads wait forever and cannot respond to an interrupt.

  • ReentrantLock can query the state of the lock; synchronized cannot (see the sketch below).
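
A short sketch of the last two points (the class name and timeout value are mine): ReentrantLock can be acquired interruptibly, can time out, and can report its state.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockApiDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        // Waits for the lock but gives up immediately if this thread is interrupted
        lock.lockInterruptibly();
        try {
            System.out.println("held by me? " + lock.isHeldByCurrentThread());
            System.out.println("hold count: " + lock.getHoldCount());
        } finally {
            lock.unlock();
        }
    }

    public boolean tryWork() throws InterruptedException {
        // Gives up after 500 ms instead of waiting forever, unlike synchronized
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                return true;   // got the lock; do the work here
            } finally {
                lock.unlock();
            }
        }
        return false;          // could not get the lock in time
    }
}
```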

Now, the answer to the title question

Once Synchronized has been upgraded to a heavyweight lock it cannot be downgraded, whereas ReentrantLock can block waiting threads to keep performance up, a design that is better suited to highly concurrent code.