
In Java, both synchronized and ReentrantLock use unfair locks by default, and for the same reason: to improve program performance. So why does an unfair lock improve performance? Let's take a look.
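For ReentrantLock, the default is visible right in its constructors: the no-arg constructor creates an unfair lock, and passing true requests a fair one. A minimal check (class name is just for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDefault {
    public static void main(String[] args) {
        ReentrantLock defaultLock = new ReentrantLock();   // no-arg constructor: unfair
        ReentrantLock fairLock = new ReentrantLock(true);  // pass true to request fairness

        System.out.println(defaultLock.isFair()); // false
        System.out.println(fairLock.isFair());    // true
    }
}
```

synchronized, by contrast, offers no fairness switch at all; it is always unfair.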

Unfair lock

Unfair lock: threads acquire the lock in no particular order rather than on a first-come, first-served basis; any thread may grab and own the lock directly at any moment. It's like Brother Lei pulling into a gas station: the pump was occupied, so he sat in his car scrolling Douyin. After a while the car in front finished fueling and drove off, but Brother Lei, still happily scrolling, didn't notice. Meanwhile another car arrived, saw the idle pump, and fueled up ahead of him. The pump here is the lock, and since it is not obtained in order of arrival, this is an unfair lock.

Fair lock

Fair lock: threads acquire the lock in the order in which they requested it, so the thread that has waited longest always gets the lock first. It's like passing a toll booth on the highway: all the cars wait in line, and the first cars to arrive go through first.

Performance comparison

The performance test results for fair and unfair locks are as follows. The test data below are from Java Concurrency in Practice:

As the results show, the throughput (the average number of successful lock acquisitions per unit of time) of an unfair lock is much higher than that of a fair lock.
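The book's exact benchmark isn't reproduced here, but a rough micro-benchmark along the same lines can be sketched as follows. All names are illustrative, the numbers are machine-dependent, and this only shows the shape of such a comparison, not the book's methodology:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ThroughputSketch {
    // Count successful lock acquisitions across `threads` workers over `millis` ms.
    static long measure(ReentrantLock lock, int threads, long millis) throws InterruptedException {
        long[] counts = new long[threads];
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    lock.lock();
                    try {
                        counts[id]++;              // "work" done while holding the lock
                    } finally {
                        lock.unlock();
                    }
                }
            });
            workers[i].start();
        }
        Thread.sleep(millis);
        for (Thread w : workers) w.interrupt();
        for (Thread w : workers) w.join();          // join() makes counts safely visible
        long total = 0;
        for (long c : counts) total += c;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("unfair acquisitions: " + measure(new ReentrantLock(false), 4, 500));
        System.out.println("fair acquisitions:   " + measure(new ReentrantLock(true), 4, 500));
    }
}
```

Under contention, the unfair count typically comes out far higher, for the reasons analyzed below.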

Performance analysis

The test data above, while illustrative, does not explain why unfair locks perform better. So next we analyze the execution processes of fair and unfair locks to find the answer.

Fair lock execution process

With a fair lock, a thread that wants the lock appends itself to the tail of the waiting queue and goes to sleep. When the thread holding the lock releases it, it wakes the thread at the head of the queue to try to acquire the lock, so the lock is granted strictly in queue order. Throughout this process, each waiting thread switches from running to sleeping and then back to running, and every sleep-and-wake cycle requires a transition between user mode and kernel mode. That transition is relatively slow, which is why a fair lock executes more slowly.
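The queue discipline just described can be modeled with a toy lock. This is a simplified sketch for illustration only, not the JDK's actual AQS-based implementation; wait()/notifyAll() stand in for the park/unpark calls that cause the user-to-kernel transitions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy fair lock: threads acquire the lock strictly in arrival order.
public class ToyFairLock {
    private Thread owner = null;
    private final Deque<Thread> queue = new ArrayDeque<>();

    public synchronized void lock() throws InterruptedException {
        Thread current = Thread.currentThread();
        queue.addLast(current);                          // join the tail of the queue
        // Sleep until the lock is free AND we are at the head of the queue
        while (owner != null || queue.peekFirst() != current) {
            wait();                                      // sleep: user -> kernel transition
        }
        queue.removeFirst();
        owner = current;
    }

    public synchronized void unlock() {
        owner = null;
        notifyAll();                                     // wake waiters; head of queue proceeds
    }

    public static void main(String[] args) throws InterruptedException {
        ToyFairLock lock = new ToyFairLock();
        lock.lock();
        System.out.println("acquired");
        lock.unlock();
        System.out.println("released");
    }
}
```

Even in this toy version, every contended acquisition costs a sleep and a wake-up, which is exactly the overhead the fair ordering imposes.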

User mode & kernel mode

User mode: when a process is executing the user's own code, it is said to be in user mode. Kernel mode: when a process makes a system call and is executing kernel code, it is said to be in kernel mode, where the processor runs with the highest privilege level.

Why distinguish kernel mode from user mode?

Suppose there were no kernel/user distinction: programs could read and write hardware resources at will, such as reading and allocating memory. If a programmer accidentally wrote something inappropriate to the wrong place, the system would likely crash.

With the distinction between user mode and kernel mode, a program's operation must pass a series of checks before it is allowed to touch protected resources, so there is no need to worry about a careless program bringing down the whole system. The distinction lets programs run more safely, but switching between the two modes incurs some performance overhead.

Unfair lock execution process

With an unfair lock, a thread first attempts to grab the lock with a CAS. If the CAS succeeds, the thread owns the lock immediately; if it fails, the thread enters the waiting queue and waits for its next chance to acquire the lock. The advantage of this approach is that the lock is not granted on a first-come, first-served basis, which avoids some thread sleep-and-wake operations and thus speeds up program execution.
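The CAS-first "barging" path can also be sketched with a toy lock. Again this is a simplified illustration, not the JDK's nonfair AQS code; a real implementation would park threads that lose the race, while this sketch spins to stay short:

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy unfair lock: an arriving thread barges in with a CAS regardless of
// who else is already waiting, and only falls back to waiting on failure.
public class ToyUnfairLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Fast path: try to swing owner from null to this thread in one CAS
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();   // real implementations park here instead of spinning
        }
    }

    public void unlock() {
        owner.set(null);           // free the lock; the next CAS winner takes it
    }

    public static void main(String[] args) {
        ToyUnfairLock lock = new ToyUnfairLock();
        lock.lock();               // succeeds via the CAS fast path
        System.out.println("acquired by barging");
        lock.unlock();
    }
}
```

A newly arriving thread that wins the CAS never sleeps at all, which is precisely where the throughput advantage comes from.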

For example, a few days ago Brother Lei went to a small telecom service hall to switch carriers and found someone being served ahead of him. So he told the young clerk handling that customer, "I'll go rest by the entrance; please call me when you've finished with her." The clerk kindly agreed. Between the moment the clerk called him and the moment he got back to the counter there was an idle window, just like the gap between a thread in the waiting queue being woken and actually resuming execution. During that window, old Li walked into the hall to pay his phone bill, and the clerk served him on the spot; by the time Brother Lei returned, he was served directly as well. It was a "triple win": old Li didn't have to queue behind Brother Lei, Brother Lei didn't have to wait for old Li, and the clerk handled more business per unit of time and could go home earlier. The advantage of an unfair lock is exactly this: more tasks completed in less time.

Conclusion

In this article, we introduced the definitions and execution processes of fair and unfair locks. From the details of their execution, we can see that an unfair lock does not need to be granted in arrival order: a later-arriving thread can directly try to acquire the lock, skipping the blocking and resuming steps, so its performance is higher.

Recommended original articles in this series

  1. Thread 4 ways to create and use details!
  2. Is there a big difference between user threads and daemon threads in Java?
  3. Understand thread pools in depth
  4. 7 ways to create a thread pool, highly recommended…
  5. How great is pooling technology? I was shocked to see the comparison between threads and thread pools!
  6. Thread synchronization and locking in concurrency
  7. Synchronized “this” and “class”
  8. The difference between volatile and synchronized
  9. Is a lightweight lock faster than a heavyweight lock?
  10. How can terminating the thread cause the service to go down?
  11. SimpleDateFormat 5 Solutions to Thread Insecurity!
  12. ThreadLocal not working well? That’s you!
  13. ThreadLocal memory overflow code
  14. Semaphore confessions: I’m the one with the restrictor!
  15. CountDownLatch: Wait till you latch on!
  16. CyclicBarrier: Once the driver has finished, he can start the train!
  17. Synchronized optimization means of lock expansion mechanism!
  18. Four optimizations in Synchronized. How many do you know?
  19. 4 pits in ReentrantLock!

Follow “Java Chinese Community” for more interesting and informative Java concurrent articles.