Author: magic good

Source: Hang Seng LIGHT Cloud Community

– The concept of locking in Java

  • When multiple threads access a shared resource, contention is unavoidable and can leave the data in an inconsistent state. We therefore usually solve this problem by locking the shared resource before accessing it.
  • The purpose of locking is to ensure that only one thread can access the shared resource at any given time, avoiding the data corruption that uncoordinated threads would cause.

– Types of locks

  • Depending on a lock's state, characteristics, and design, several kinds can be distinguished:
    • Optimistic/pessimistic locks, spin locks, distributed locks, biased locks, lightweight locks, heavyweight locks, etc.

– Optimistic/pessimistic lock

Whether the thread needs to lock the synchronization resource

  • Optimistic and pessimistic locking are broad concepts that represent different attitudes toward thread synchronization, and both have practical applications in Java and in databases.

  • Concept:

    • Pessimistic locking assumes that concurrent operations on the same data are bound to modify it, and that unlocked concurrent access will inevitably cause problems.
      • Pessimistic locking therefore locks the data before any concurrent operation on it.
      • In Java, the synchronized keyword and the implementation classes of Lock are pessimistic locks.
    • Optimistic locking assumes that concurrent operations on the same data will usually not conflict, so unlocked access is fine most of the time.
      • When updating the data, it first checks whether the data has already been updated by another thread.
      • If not, the current thread writes its modified value successfully.
      • If the data has been updated by another thread, different actions are taken depending on the implementation (for example, reporting an error or retrying automatically).
      • In Java, optimistic locking is implemented with lock-free programming, most commonly the CAS algorithm; the increment operations in the Java atomic classes are implemented with a CAS spin.
  • Scenarios:

    • Pessimistic locking suits write-heavy scenarios: locking the data first guarantees its correctness during write operations.
    • Optimistic locking suits read-heavy scenarios: skipping the lock greatly improves read performance.
  • Example:

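The original code sample is missing from this copy; a minimal runnable sketch of the contrast (class and method names here are illustrative) might look like this:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PessimisticVsOptimistic {
    private int pessimisticCount = 0;
    private final AtomicInteger optimisticCount = new AtomicInteger(0);

    // Pessimistic: lock the shared resource explicitly before touching it.
    public synchronized void incPessimistic() {
        pessimisticCount++;
    }

    // Optimistic: operate directly; the CAS inside incrementAndGet retries on conflict.
    public void incOptimistic() {
        optimisticCount.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        PessimisticVsOptimistic demo = new PessimisticVsOptimistic();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                demo.incPessimistic();
                demo.incOptimistic();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(demo.pessimisticCount + " " + demo.optimisticCount.get());
    }
}
```

Both counters end at 20000; the difference is that the synchronized method blocks contending threads, while incrementAndGet retries a CAS without blocking.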

    • The examples show that a pessimistic lock synchronizes access by locking explicitly first, while an optimistic lock operates on the resource directly.
    • Why does optimistic locking allow threads to synchronize correctly without locking the synchronized resource?
      • The main technical foundation of optimistic locking is CAS.
        • CAS (Compare And Swap) is a lock-free algorithm: it synchronizes a variable between multiple threads without locking, so no thread is blocked. The atomic classes in the java.util.concurrent package implement optimistic locking through CAS.
    • Look at the source code for AtomicInteger, starting with its definition:
      • From the definition we can see the role of each field:
        • unsafe: obtains and manipulates memory data.
        • valueOffset: stores the offset of value within AtomicInteger.
        • value: stores the int value of the AtomicInteger; the volatile keyword guarantees its visibility between threads.
      • Looking at the source of AtomicInteger's self-increment method incrementAndGet(), we find that the underlying call is unsafe.getAndAddInt().
      • Looking at the source code for Unsafe with OpenJDK 8:
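The Unsafe source itself is not reproduced in this copy, but its logic can be mirrored with the public AtomicInteger API; this sketch follows the same read-then-CAS retry loop as getAndAddInt():

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasSpinDemo {
    // Mirrors the getAndAddInt() loop: read v, try to swap v -> v + delta, retry on failure.
    static int getAndAdd(AtomicInteger a, int delta) {
        int v;
        do {
            v = a.get();                              // read the current memory value
        } while (!a.compareAndSet(v, v + delta));     // atomic "compare + update"
        return v;                                     // old value, like getAndAddInt()
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(5);
        int old = getAndAdd(counter, 3);
        System.out.println(old + " -> " + counter.get()); // 5 -> 8
    }
}
```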
      • According to the OpenJDK 8 source, getAndAddInt() loops: it reads the value v at the given offset in object o, then checks whether the memory value still equals v. If it does, it sets the memory value to v + delta, leaves the loop, and returns the old value; otherwise it retries. The entire "compare + update" operation is encapsulated in compareAndSwapInt(), implemented via JNI as a single CPU instruction; it is an atomic operation, so multiple threads always see the latest value of the variable.
      • The JDK uses the CPU's cmpxchg instruction to compare A in the register with V in memory. If they are equal, the new value B is written to memory; if not, the memory value V is loaded into the register value A. The Java code then calls cmpxchg again in a while loop, retrying until the update succeeds.
    • Although CAS is very efficient, there are three major problems:
      • ABA problem
        • CAS checks whether the memory value has changed while operating on it and updates only if it has not. But if the value was A, then changed to B, then back to A, CAS sees no change even though the value did change.
        • The solution to the ABA problem is to attach a version number to the variable and increment it on every update, turning the change sequence from "A-B-A" into "1A-2B-3A".
        • Since JDK 1.5, the AtomicStampedReference class solves the ABA problem. The key operation is encapsulated in compareAndSet(), which first checks whether the current reference and current stamp equal the expected reference and expected stamp, and only if both match does it atomically set the reference and stamp to the given update values.
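A small runnable illustration of the stamped CAS (the A/B values and stamp numbers here are arbitrary):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // value "A" with initial stamp (version) 1
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 1);

        // another thread changes A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", 1, 2);
        ref.compareAndSet("B", "A", 2, 3);

        // a plain CAS on the value alone would succeed here (A == A),
        // but the stamped CAS with the stale stamp 1 fails:
        boolean swapped = ref.compareAndSet("A", "C", 1, 4);
        System.out.println(swapped);                                   // false
        System.out.println(ref.getReference() + "/" + ref.getStamp()); // A/3
    }
}
```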
      • Long cycle time and high overhead
        • If a CAS operation fails for a long time, it keeps spinning, which imposes a large overhead on the CPU.
      • Atomic operations that guarantee only one shared variable
        • CAS can guarantee atomic operations on one shared variable, but cannot guarantee atomic operations on multiple shared variables.
        • Since JDK 1.5, the AtomicReference class guarantees atomicity between reference objects: multiple variables can be placed in a single object and updated with one CAS operation.

– Blocking lock/spin lock/adaptive spin lock

  • Background:
    • Blocking or waking a Java thread requires the operating system to switch CPU state, which costs processor time. If the synchronized code is trivial, the state transition may take longer than the user code itself.
    • In many scenarios a synchronized resource is locked only briefly, and the cost of suspending and resuming threads for that short period can exceed the benefit. If a physical machine has multiple processors and can run two or more threads in parallel, we can let a thread that requests the lock keep its CPU time and "wait a moment" to see whether the thread holding the lock releases it soon, instead of giving up the CPU.
  • Blocking locks
    • When the corresponding signal (a wake-up or a timeout) arrives, a blocked thread enters the ready state; the ready threads then compete, and one of them enters the running state.
    • In Java, the constructs that can enter a blocked state, i.e. that involve a blocking lock, include the synchronized keyword (in its heavyweight-lock form), ReentrantLock, Object.wait()/notify(), and LockSupport.park()/unpark() (widely used in J.U.C.).
  • Spin locks
    • In Java, the thread attempting to acquire the lock does not immediately block, but tries to acquire the lock in a circular manner, which has the advantage of reducing the cost of thread context switching and the disadvantage of the loop consuming CPU performance.

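The original spin-lock example is missing from this copy; a minimal CAS-based spin lock could be sketched as follows (the SpinLock class is illustrative, not a JDK API):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // busy-wait: keep trying to flip false -> true until we win the CAS
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU spin hint (JDK 9+); the loop alone also works
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] count = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try { count[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count[0]); // 20000
    }
}
```

Note that contending threads never block here; they burn CPU cycles in the while loop, which is exactly the trade-off described above.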

    • Spin-locking itself has its drawbacks. It is not a substitute for blocking. Spin-waiting avoids the overhead of thread switching, but it consumes processor time. If the lock is occupied for a short period of time, the spin-wait works very well. On the other hand, if the lock is held for a long time, the spinning thread is a waste of processor resources.
    • The spin wait must be bounded: if the number of spins exceeds the limit (the default is 10, adjustable with -XX:PreBlockSpin) without the lock being acquired, the thread should be suspended.
    • Spin locks are themselves implemented with CAS: the do-while loop in the AtomicInteger increment source that calls unsafe is a spin operation; if the modification fails, it loops and retries until it succeeds.
    • Spin locks were introduced in JDK 1.4.2 and enabled with -XX:+UseSpinning. In JDK 6 spinning became the default, and adaptive spinning was introduced.
  • Adaptive spin lock
    • Adaptive means that the spin time (number of spins) is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock owner.
    • If, on the same lock object, the spin wait has just succeeded in obtaining a lock, and the thread holding the lock is running, the virtual machine will assume that the spin is likely to succeed again, and it will allow the spin wait to last a relatively longer time.
    • If spinning rarely succeeds in acquiring a given lock, the spin may be omitted for that lock in the future and the thread blocked directly, avoiding wasted processor resources.
  • Within spin locks there are three other common variants: TicketLock, CLH lock, and MCS lock.

– No lock/biased lock/lightweight lock/heavyweight lock

  • Background:
    • These four locks refer to states of a lock, specifically of synchronized. The lock can only be upgraded, one way; it cannot be downgraded.
    • synchronized is a pessimistic lock: it must lock the synchronized resource before operating on it. This lock information is stored in the Java object header.
      • Taking the HotSpot VM as an example, the HotSpot object header mainly contains two parts of data: the Mark Word (mark field) and the Klass Pointer (type pointer).
      • Mark Word: Stores the object’s HashCode, generational age, and lock flag bit information by default. This information is data independent of the definition of the object itself, so Mark Word is designed as a non-fixed data structure in order to store as much data as possible in a very small space. It reuses its storage space depending on the state of the object, which means that the data stored in Mark Word changes at runtime as the lock flag bit changes.
      • Klass Pointer: the pointer from an object to its class metadata, which the VM uses to determine which class the object is an instance of.
      • A Monitor can be thought of as a synchronization tool or mechanism, usually described as an object. Every Java object carries an invisible lock, called the intrinsic lock or Monitor lock.
      • As mentioned for spin locks: "blocking or waking a Java thread requires the operating system to switch CPU state, which costs processor time; if the synchronized code is trivial, the state transition may take longer than the user code itself." This is how synchronized originally implemented synchronization, and the reason it was inefficient before JDK 6. A lock that relies on the operating system's Mutex Lock is what we call a "heavyweight lock"; JDK 6 introduced "biased locking" and "lightweight locking" to reduce the performance cost of acquiring and releasing locks.
    • Currently, there are four lock states, which are in order from low to high: no lock, biased lock, lightweight lock and heavyweight lock. The lock status can be upgraded but not degraded.
  • No lock
    • With no lock, no resource is locked. All threads can access and modify the same resource, but only one thread's modification succeeds at a time.
    • The lock-free approach modifies in a loop: the thread keeps trying to modify the shared resource. If there is no conflict, the change succeeds and the loop exits; otherwise it continues. If multiple threads modify the same value, one succeeds and the others retry until they succeed.
    • CAS, in both principle and application, is the embodiment of the lock-free approach. It is not a complete replacement for locking, but in some cases its performance is very high.
  • Biased locking
    • When a block of synchronized code is always accessed by the same thread, that thread acquires the lock automatically, reducing the cost of acquiring it.
    • In most cases a lock is acquired repeatedly by the same thread with no multithreaded contention, hence biased locking. Its goal is to improve performance when only one thread executes the synchronized block.
    • When a thread accesses a synchronized block and acquires the lock, the thread ID it is biased to is stored in the Mark Word. On subsequent entry and exit the thread no longer performs CAS operations to lock and unlock; it simply checks whether the Mark Word holds a biased lock pointing to the current thread. Biased locking was introduced to minimize the unnecessary lightweight-lock path in the absence of contention, since a lightweight lock needs multiple CAS instructions to acquire and release, while a biased lock needs only one CAS to install the ThreadID.
    • A thread never actively releases a biased lock; it is revoked only when another thread tries to compete for it. Revocation waits for a global safepoint (a point at which no bytecode is executing), suspends the thread that owns the biased lock, checks whether the lock object is still locked, and then reverts it to the lock-free ("01" flag bit) or lightweight-lock ("00" flag bit) state.
    • Biased locking is enabled by default in JDK 6 and later JVMs. It can be turned off with the JVM flag -XX:-UseBiasedLocking, after which the program enters the lightweight-lock state by default.
  • Lightweight lock
    • When a biased lock is accessed by another thread, the biased lock is upgraded to a lightweight lock, and the other threads spin to try to acquire the lock without blocking, improving performance.
    • Production process:
      • When code enters the synchronized block, if the lock state of the synchronization object is lock-free (lock flag "01", biased-lock bit "0"), the VM first creates a space called a Lock Record in the current thread's stack frame, used to hold a copy of the lock object's current Mark Word, and then copies the Mark Word from the object header into the lock record.
      • After the copy succeeds, the VM uses a CAS operation to try to update the object's Mark Word to a pointer to the Lock Record, and sets the owner pointer in the Lock Record to the object's Mark Word.
      • If this update succeeds, the thread owns the lock on the object, and the lock flag bit in the object's Mark Word is set to "00", meaning the object is in the lightweight-locked state.
      • If the lightweight-lock update fails, the VM first checks whether the object's Mark Word points into the current thread's stack frame. If it does, the current thread already holds the lock on this object and can enter the synchronized block directly; otherwise multiple threads are competing for the lock.
      • If there is currently only one waiting thread, it waits by spinning. But when the spin exceeds a certain count, or a third thread arrives while one thread holds the lock and another is spinning, the lightweight lock inflates into a heavyweight lock.
  • Heavyweight lock
    • When the lock is a lightweight lock and another spinning thread still has not acquired it after a certain number of spins, that thread blocks and the lock inflates into a heavyweight lock. A heavyweight lock blocks all other requesting threads, degrading performance.
    • When you upgrade to a heavyweight lock, the status value of the lock flag changes to “10”, and the Mark Word stores a pointer to the heavyweight lock. In this case, all threads waiting for the lock will be blocked.
  • To sum up: a biased lock resolves locking by comparing the Mark Word, avoiding CAS operations entirely; a lightweight lock resolves it with CAS operations and spinning, avoiding the performance cost of blocking and waking threads; a heavyweight lock blocks every thread except the one holding the lock.

– Fair lock/non-fair lock

  • Fair lock
    • Multiple threads acquire a lock in the order in which they apply for it. The threads are queued directly and the first thread in the queue can acquire the lock.
    • The advantage is that the thread waiting for the lock will not starve.
    • The disadvantage is that overall throughput is lower than with a non-fair lock: every thread in the wait queue except the first blocks, and the CPU cost of waking blocked threads is higher than with a non-fair lock.
  • Non-fair lock
    • The order in which multiple threads acquire a lock is not the same as the order in which they apply for the lock. If they fail to acquire the lock, they wait at the end of the queue. It is possible that the thread that obtains the lock later takes precedence over the thread that obtains the lock first. (Potentially, it could lead to a reversal of priorities or starvation.)
    • The advantage is that the overhead of arousing threads is reduced and the overall throughput is high because threads have the chance to acquire locks without blocking and the CPU does not have to wake up all threads.
    • The downside is that threads in the wait queue can starve to death or wait too long to acquire a lock.

  • Example:
    • For Java's ReentrantLock, the constructor specifies whether the lock is fair; the default is non-fair. The advantage of the non-fair lock is that its throughput is greater than the fair lock's.
      • From the code, ReentrantLock has an inner class Sync, which extends AQS (AbstractQueuedSynchronizer); most of the work of acquiring and releasing the lock actually happens in Sync. It has two subclasses: FairSync for the fair lock and NonfairSync for the non-fair lock. ReentrantLock uses the non-fair lock by default; a fair lock can be requested through the constructor.

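A minimal illustration of choosing fairness through the constructor:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();      // default: non-fair
        ReentrantLock fair   = new ReentrantLock(true);  // true -> fair lock

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```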

      • Comparing the lock() methods of the fair and non-fair locks, the only difference is that the fair lock adds one extra condition when acquiring the synchronization state: hasQueuedPredecessors().
        • Stepping into hasQueuedPredecessors(), we can see that the method does essentially one thing: it determines whether the current thread is first in the synchronization queue, returning true if so and false otherwise.
    • As for synchronized, it is also a non-fair lock. Since, unlike ReentrantLock, it does not use AQS for thread scheduling, there is no way to turn it into a fair lock.
  • To sum up, a fair lock uses the synchronization queue to grant the lock to threads in the order they requested it, achieving fairness. A non-fair lock attempts to grab the lock immediately without considering the queue, so whoever acquires it first keeps it.

– Reentrant locks/non-reentrant locks

  • Reentrant lock
    • Also known as a recursive lock: when the same thread has acquired the lock in an outer method, an inner method acquires it automatically (provided it is the same lock object), without blocking even though the lock was acquired earlier and has not yet been released.
  • In Java, ReentrantLock and synchronized are both reentrant locks; one advantage of reentrancy is that it avoids deadlock to some extent.
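The code sample discussed below is missing from this copy; a runnable sketch of it (both methods synchronized, one calling the other) is:

```java
public class ReentrantDemo {
    public synchronized void doSomething() {
        System.out.println("doSomething: lock acquired");
        doOthers(); // re-enters this object's monitor; no blocking because the lock is reentrant
    }

    public synchronized void doOthers() {
        System.out.println("doOthers: same lock re-acquired");
    }

    public static void main(String[] args) {
        new ReentrantDemo().doSomething(); // completes normally; a non-reentrant lock would deadlock here
    }
}
```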
    • In the code above, both methods of the class are marked synchronized, and doSomething() calls doOthers(). Because the intrinsic lock is reentrant, the same thread can directly acquire the current object's lock again when calling doOthers() and enter it to do its work.
    • With a non-reentrant lock, the current thread would need to release the lock acquired in doSomething() before calling doOthers(); but that lock is held by the current thread and cannot be released, so a deadlock occurs.
  • Non-reentrant locks cause deadlock analysis
    • Comparing the source of the reentrant ReentrantLock and a non-reentrant NonReentrantLock shows why a non-reentrant lock deadlocks on repeated acquisition of the same synchronized resource.
    • First, both ReentrantLock and NonReentrantLock extend the parent class AQS, which maintains a synchronization state status to count reentries; its initial value is 0.
    • When a thread tries to acquire the lock, a reentrant lock first reads the status value. If status == 0, no other thread is executing the synchronized code, so status is set to 1 and the current thread proceeds. If status != 0, it checks whether the current thread is the one holding the lock; if so, status is incremented by 1 and the thread acquires the lock again. A non-reentrant lock simply tries to update the status directly, and if status != 0 the acquisition fails and the current thread blocks.
    • On release, a reentrant lock likewise first checks that the current thread is the one holding the lock. If status - 1 == 0, all of the thread's repeated acquisitions have been undone and the lock is actually released. A non-reentrant lock simply sets status to 0 to release.
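The status-based logic above can be sketched with two toy lock classes (both are illustrative simplifications built on an atomic status, not the real AQS code):

```java
import java.util.concurrent.atomic.AtomicInteger;

class NonReentrantLock {
    private final AtomicInteger status = new AtomicInteger(0);

    public boolean tryLock() {
        return status.compareAndSet(0, 1); // fails whenever status != 0, even for the owner
    }

    public void unlock() {
        status.set(0);
    }
}

class SimpleReentrantLock {
    private final AtomicInteger status = new AtomicInteger(0);
    private Thread owner;

    public boolean tryLock() {
        Thread current = Thread.currentThread();
        if (owner == current) { status.incrementAndGet(); return true; } // re-entry: status + 1
        if (status.compareAndSet(0, 1)) { owner = current; return true; }
        return false;
    }

    public void unlock() {
        if (owner == Thread.currentThread() && status.decrementAndGet() == 0) owner = null;
    }
}

public class ReentrancyCompare {
    public static void main(String[] args) {
        NonReentrantLock plain = new NonReentrantLock();
        plain.tryLock();
        System.out.println(plain.tryLock());  // false: a blocking lock() would deadlock here

        SimpleReentrantLock re = new SimpleReentrantLock();
        re.tryLock();
        System.out.println(re.tryLock());     // true: the same thread may re-acquire
    }
}
```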

– Exclusive/shared locks

  • Exclusive and shared locks are likewise broad concepts.

  • Exclusive lock

    • Also called a mutually exclusive (X) lock, it means the lock can be held by only one thread at a time.
    • If thread T places an exclusive lock on data A, no other thread can place any type of lock on A.
    • A thread holding an exclusive lock can both read and modify the data. In the JDK, synchronized and the implementation classes of Lock in JUC are exclusive locks.
  • Shared lock

    • Means the lock can be held by multiple threads at once.
    • If thread T places a shared lock on data A, other threads can only place shared locks on A, not exclusive locks.
    • A thread holding a shared lock can only read the data, not modify it.
  • Exclusive and shared locks are also implemented through AQS: implementing different methods yields exclusive or shared behavior. The ReentrantLock and ReentrantReadWriteLock sources illustrate both.

  • ReentrantReadWriteLock has two locks, ReadLock and WriteLock; as the names suggest, one read lock and one write lock, together called a read-write lock. A closer look shows that ReadLock and WriteLock are both implemented by the internal class Sync, a subclass of AQS. The same structure appears in CountDownLatch, ReentrantLock, and Semaphore.

  • In ReentrantReadWriteLock, the lock body of both read locks and write locks is Sync. However, the lock modes of read locks and write locks are different. Read locks are shared locks and write locks are exclusive locks. The shared lock of the read lock ensures that concurrent reads are very efficient, and the read-write, write-read, and write-write processes are mutually exclusive because the read and write locks are separate. Therefore, the concurrency of ReentrantReadWriteLock is much better than that of a mutex.

  • The differences between read lock and write lock are as follows:

    • Recall the AQS state field mentioned earlier (an int, 32 bits), which describes how many threads hold the lock.
    • In an exclusive lock this value is usually 0 or 1 (for a reentrant lock, state is the reentry count); in a shared lock, state is the number of locks held.
    • But ReentrantReadWriteLock contains both read and write locks, so a single integer variable state must describe the read-lock count and the write-lock count separately.
    • The state variable is therefore split in two: the high 16 bits hold the read-lock state (read-lock count) and the low 16 bits hold the write-lock state (write-lock count).
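The split can be illustrated with the same bit arithmetic AQS uses (the constants match the JDK source; the demo values are arbitrary):

```java
public class StateSplitDemo {
    static final int SHARED_SHIFT = 16;
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1; // 0xFFFF

    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; } // read-lock count (high 16 bits)
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; } // write-lock count (low 16 bits)

    public static void main(String[] args) {
        // e.g. 3 read locks and 1 write hold packed into one int:
        int c = (3 << SHARED_SHIFT) | 1;
        System.out.println(sharedCount(c));    // 3
        System.out.println(exclusiveCount(c)); // 1
    }
}
```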

  • Write-lock acquisition source:

    • The code first obtains the current lock count c, then derives the write-lock count w from c. Since the write lock occupies the low 16 bits, c is masked with the maximum low-16-bit value (int w = exclusiveCount(c);): the high 16 bits ANDed with 0 give 0, and what remains is the low part, the number of threads holding the write lock.
    • With the write-lock count obtained, the first step is to check whether any thread already holds a lock. If one does (c != 0), check the write-lock count: if the write count is 0 (meaning a read lock is held) or the thread holding the lock is not the current thread, return failure (the details relate to the fair and non-fair implementations).
    • If the write-lock count would exceed the maximum (65535, i.e. 2^16 - 1), an Error is thrown.
    • If the write count is 0 (the read count must then also be 0, since the c != 0 case was handled above) and the current thread should block, or the CAS that increments the write count fails, failure is returned.
    • If c = 0 and w = 0, or c > 0 and w > 0 (reentry), set the current thread as the owner of the lock and return success.
    • Besides the reentry condition (the current thread being the one that acquired the write lock), tryAcquire() adds the check for an existing read lock. If a read lock exists, the write lock cannot be acquired, because the write lock's operations must be visible to the read locks: if the write lock could be taken while read locks were held, the running reader threads would not be aware of the writer's changes.
    • Therefore the write lock can be acquired by the current thread only after all other reader threads have released their read locks, and once the write lock is acquired, all subsequent access by other reader and writer threads is blocked. Write-lock release follows the same process as ReentrantLock: when the write state reaches 0 the write lock is released, the waiting threads can contend for it again, and the previous writer's changes are visible to subsequent threads.
  • Read-lock acquisition source:

    • In tryAcquireShared(int unused), if another thread has already acquired the write lock, the current thread fails to acquire the read lock and enters the waiting state.
    • If no other thread holds the write lock (or the current thread itself holds it), the current thread increases the read state (thread-safely, guaranteed by CAS) and successfully acquires the read lock.
    • Each release of a read lock reduces the read state by the value “1<<16”.
    • So the read-write lock realizes read-read sharing, while read-write, write-read, and write-write remain mutually exclusive.
  • Example:

    • Java's ReentrantLock is an exclusive lock. But ReadWriteLock, another lock type related to Lock (implemented by ReentrantReadWriteLock), has a read lock that is a shared lock and a write lock that is an exclusive lock.
    • The shared read lock makes concurrent reads very efficient; read-write, write-read, and write-write are all mutually exclusive.
    • As for synchronized, it is of course an exclusive lock.
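A short runnable demonstration of the shared read lock and exclusive write lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.readLock().lock();
        rw.readLock().lock();                         // read lock is shared: re-acquirable
        System.out.println(rw.getReadLockCount());    // 2

        System.out.println(rw.writeLock().tryLock()); // false: write excluded while reads are held

        rw.readLock().unlock();
        rw.readLock().unlock();
        System.out.println(rw.writeLock().tryLock()); // true once all readers have released
        rw.writeLock().unlock();
    }
}
```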

– Mutex/read-write lock

  • The exclusive/shared locks above are broad terms; the mutex and the read-write lock are concrete implementations of them.
  • Example:
    • The concrete implementation of the mutex in Java is ReentrantLock.
    • The concrete implementation of the read-write lock in Java is ReadWriteLock.

– Segmented lock

  • A segmented lock is really a lock design rather than a specific lock. For ConcurrentHashMap, concurrency is implemented efficiently through this form of segmented locking.
  • The segmented lock design refines lock granularity: only the segment of the array being updated is locked, not the entire array.
  • Example:
    • Take ConcurrentHashMap to illustrate what a segmented lock means and how it is designed. In ConcurrentHashMap (JDK 7), the segment lock is called a Segment. A Segment resembles the internal structure of a HashMap (in both JDK 7 and JDK 8, HashMap internally has an Entry array, each element of which is a linked list), and at the same time it is a ReentrantLock (Segment extends ReentrantLock).
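A minimal sketch of the lock-striping idea behind Segment (the StripedCounter class is illustrative, not ConcurrentHashMap's actual code):

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {
    private final ReentrantLock[] locks;
    private final long[] counts;

    public StripedCounter(int segments) {
        locks = new ReentrantLock[segments];
        counts = new long[segments];
        for (int i = 0; i < segments; i++) locks[i] = new ReentrantLock();
    }

    public void add(Object key, long delta) {
        int i = (key.hashCode() & 0x7fffffff) % locks.length; // pick one segment by hash
        locks[i].lock();                                      // lock only that segment
        try { counts[i] += delta; } finally { locks[i].unlock(); }
    }

    public long total() {
        long sum = 0;
        for (long c : counts) sum += c;
        return sum;
    }

    public static void main(String[] args) {
        StripedCounter c = new StripedCounter(16);
        c.add("a", 1);
        c.add("b", 2);
        System.out.println(c.total()); // 3
    }
}
```

Writers hitting different segments never contend, which is the point of refining the lock granularity.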

– Basic principles of optimistic lock and pessimistic lock

  • Pessimistic locking is pessimistic: it assumes the probability of multiple threads modifying the shared resource is high, so conflicts occur easily, and it therefore locks the shared resource before accessing it.
  • Optimistic locking is more optimistic: it assumes the probability of conflict is very low. It works by modifying the shared resource first and then checking whether a conflict occurred; if no other thread modified the resource in the meantime, the operation completes, and if another thread did modify it, the operation is abandoned.

– The principle of CAS

  • In computer science, Compare And Swap is an atomic instruction used to achieve synchronization across multiple threads.
  • It compares the contents of a memory location with a given value and, only if they are identical, modifies that memory location to a new given value; this is done as a single atomic operation.
  • Atomicity guarantees that the new value is calculated from up-to-date information; if the value was updated by another thread in the meantime, the write fails.
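With AtomicInteger this behavior is directly observable:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(10);
        System.out.println(v.compareAndSet(10, 11)); // true: memory value matched the expected 10
        System.out.println(v.compareAndSet(10, 12)); // false: the value is now 11, expectation stale
        System.out.println(v.get());                 // 11
    }
}
```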

– Pros and cons of CAS

  • The ABA problem
    • If variable V is A when first read and is still A when we are ready to assign, does that mean its value has not been changed by another thread? No: in the meantime it could have been changed to something else and then back to A, yet the CAS operation would conclude it was never modified. This is called the ABA problem of CAS.
    • Since JDK 1.5, the AtomicStampedReference class provides this capability: its compareAndSet() method first checks whether the current reference equals the expected reference and the current stamp equals the expected stamp, and only if both are equal sets the reference and stamp atomically to the given update values.
  • Loop time is long and overhead is high
    • If a spinning CAS is unsuccessful for a long time, it imposes a very large execution overhead on the CPU. If the JVM can use the pause instruction provided by the processor, efficiency improves somewhat. pause does two things: first, it delays pipelined execution (de-pipelines), so the CPU does not consume excessive execution resources (the delay depends on the implementation version; on some processors it is zero); second, it avoids the CPU pipeline flush caused by a memory order violation when leaving the loop, improving CPU execution efficiency.
  • Atomic operations that guarantee only a single shared variable
    • CASIs only valid for a single shared variable, and when operations are involved across multiple shared variables,CASInvalid operation. But in theJDK1.5Here we goAtomicReferenceClass to ensure atomicity between reference objects. You can do this by putting multiple variables in one objectCASOperation. So you can use locks, or you can useAtomicReferenceClass to combine multiple shared variables into a single shared variable.

– Lock elimination, coarsening, upgrading, degrading

  • Lock the elimination of
    • When the JIT compiler (Just In Time compiler) can dynamically compile synchronized code, it applies a technique called escape analysis to determine whether the lock object used In the program is used by only one thread and not spread to other threads. If this is the case, then the JIT compiler will not generate the synchronized code when compiling itsynchronizedKeyword identifies the application and release of the lock machine code, thus eliminating the lock use process.
  • Lock coarsening
    • When the JIt compiler executes a dynamic compilation, if the JIt compiler finds that thesynchronizedBlock is using the same lock object, so it will take thesesynchronizedThe benefit of combining blocks into one large synchronized block is that threads do not have to apply for and release locks frequently while executing the code. This allows the thread to apply and release locks once and execute a fully synchronized block of code, thus improving performance.
  • Lock escalation and demotion
    • Lock promotion and demotion are mainly passed in the object headerMark WordThe lock flag bit and whether it is biased to the lock flag bit to achieve;

– What is the difference between a mutex and a spin lock?

  • When one thread is already locked, the other thread will fail to lock it. Mutexes and spinlocks handle the failure differently:
    • When a mutex fails to be locked, the thread releases the CPU to another thread.
    • When a spin lock fails, the thread waits until it gets the lock.
  • A mutex is an “exclusive lock”. For example, if thread A successfully locks the mutex, thread A already owns the mutex. If thread A does not release the lock, thread B’s lock will fail and release the CPU to another thread.Since thread B releases the CPU, the locked code of natural thread B is blocked.
    • For the mutex lock failure blocking phenomenon, is implemented by the operating system kernel. When the lock fails, the kernel puts the thread to sleep. When the lock is released, the kernel wakes up the thread at the appropriate time. When the thread successfully acquires the lock, it can continue executing.
    • So, when a mutex fails to lock, it goes from user mode to kernel mode, and having the kernel switch threads for us makes it easier to use the lock, but there is a performance overhead.
    • So what’s the cost of this overhead? There will beThe cost of two thread context switches:
      • When a thread fails to lock, the kernel changes the state of the thread from “running” to “sleeping” and switches the CPU to another thread.
      • Then, when the lock is released, the previously “sleeping” thread will go to the “ready” state, and the kernel will switch the CPU to that thread at the appropriate time.
    • So, if you are sure that the locked code will be executed in a very short time, you should not use a mutex, you should use a spinlock, or you should use a mutex.
  • Spin-locks are provided through the CPUCASFunction (Compare And Swap), locking and unlocking in “user mode” does not cause an active thread context switch, so it is faster and less expensive than mutex.
    • The general locking process consists of two steps:
      • Step 1: Check the status of the lock. If the lock is idle, go to Step 2.
      • Second, set the lock to be held by the current thread.
    • A spin lock is the simplest type of lock that keeps spinning, using CPU cycles, until the lock is available. Note that on a single-core CPU, a preemptive scheduler is required (that is, one thread is constantly interrupted by the clock to run other threads). Otherwise, spin locking cannot be used on a single CPU because a spin thread never gives up the CPU.
  • Spin-locks are similar to mutex in terms of usage, but different in terms of implementation: when locking fails, mutex uses a “thread switch”, while spin-locks use a “busy wait”.

– What is the priority of read/write locks?

  • A read/write lock is used to read a shared resource, and a write lock is used to modify a shared resource.
  • The read-write lock works as follows:
    • When the “write lock” is not held by the thread, multiple threads can concurrently hold the read lock, which greatly improves the efficiency of access to the shared resource, because “read lock” is used to read the shared resource, so multiple threads holding the read lock at the same time does not destroy the shared resource data.
    • However, once a “write lock” is held by a thread, the lock acquisition by the reader thread will be blocked, and the lock acquisition by other writer threads will also be blocked.
  • A write lock is an exclusive lock because only one thread can hold it at any one time, like a mutex or spin-lock, while a read lock is a shared lock because it can be held by multiple threads simultaneously.
  • It is expected that the read lock can be held by more threads in order to improve the concurrency of the reader threads. It works as follows: When the reader thread A holds the lock, the writer thread B is blocked while trying to acquire the lock. During the blocking process, the subsequent reader thread C is still able to acquire the lock successfully. Finally, the writer thread B cannot acquire the lock successfully until the reader threads A and C release the lock.

– Concept of deadlock

  • A deadlock is a situation in which multiple threads are blocked while one or all of them wait for a resource to be released. Because the thread is blocked indefinitely, the program cannot terminate normally.
  • Four deadlocks in JavaA necessary condition for:
    • Mutex, which means that when a resource is used (owned) by one thread, no other thread can use it.
    • Non-preemption: the resource requester cannot forcibly seize the resource from the possessor, and the resource can only be released by the possessor.
    • Request and maintain, that is, when the resource requester requests other resources while maintaining the possession of the original resources;
    • Circular wait, that is, there is a waiting queue: P1 occupies the resources of P2, P2 occupies the resources of P3, and P3 occupies the resources of P1. This creates a waiting loop.
  • When all four conditions are true, a deadlock is formed. Of course, if any of these conditions are broken in the case of a deadlock, the deadlock will disappear.

– Deadlock scenario

  • A deadlock occurs in the following scenarios:
    • Deadlock caused by sequential locking;
    • Collaborative object calls cause deadlocks;
    • Dynamic locking leads to deadlock.

– Deadlock location

  • In our actual project, in order to solve the deadlock problem, the premise is to first locate the specific location of the deadlock occurred.
  • Deadlock instance code:
public class Deadlock { public static String obj01 = "objA"; public static String obj02 = "objB"; public static void main(String[] args) { LockA lockA = new LockA(); LockB lockB = new LockB(); new Thread(lockA).start(); new Thread(lockB).start(); } public static class LockA implements Runnable { @Override public void run() { try { System.out.println(new Date().toString() + "LockA starts execution "); Synchronized (Deadlock. Obj01) {system.out.println (new Date().tostring () + "LockA lock object objA"); Thread.sleep(1000); Synchronized (Deadlock. Obj02){system.out.println (new Date().tostring () + "LockA lock object objB"); Thread.sleep(60 * 1000); } system.out.println (new Date().tostring () + "LockA release object objA"); } system.out.println (new Date().tostring () + "LockA release object objB"); } catch (Exception e) { e.printStackTrace(); } } } public static class LockB implements Runnable { @Override public void run() { try { System.out.println(new Date().toString() + "LockB starts execution "); Synchronized (Deadlock. Obj02) {system.out.println (new Date().tostring () + "LockB "); Thread.sleep(1000); Synchronized (Deadlock. Obj01){system.out.println (new Date().tostring () + "LockB "); Thread.sleep(60 * 1000); } system.out.println (new Date().tostring () + "LockA release object objB"); } system.out.println (new Date().tostring () + "LockA release object objA"); } catch (Exception e) { e.printStackTrace(); }}}}Copy the code
  • Run the above example code and locate the deadlock by doing the following:
      1. Using the commandjstack -l [pid]To locate
      • Start by entering commands in the project consolejpsTo find the thread PID of the deadlock
>jps
13136 Deadlock
15952 Launcher
18784 Launcher
9172 RemoteMavenServer36
20376 Jps
11996
Copy the code
- Then find the pid 13136 thread, occurred a deadlock - then enter the command 'jstack -l 13136', print out the thread details -! [image.png](https://cdn.nlark.com/yuque/0/2021/png/3010199/1627740718472-84b9a835-a6b9-4799-998b-d0ce890e0058.png#align= left&display=inline&height=416&margin=%5Bobject%20Object%5D&name=image.png&originHeight=636&originWidth=987&size=59400&s Tatus =done&style=shadow&width=646) - You can see how the deadlock occurred and the number of lines of code.Copy the code
    1. useJConsoleGraphical tool for locating deadlocks
    • Type in the command consolejconsole, run the tool
      • Select the process connection the project is running on
      • After entering the graphical monitoring interface, the thread interface is displayed. There is a deadlock detection button to automatically detect deadlocked threads
      • Tool to find the code location of the specific deadlock.

– How to avoid deadlock

  • Multiple tasks cannot be executed due to death locks in the system, resulting in endless waiting and serious accidents such as system crash.
  • To avoid a deadlock, start with the four conditions of a deadlock:
    • 1. The incompatible; 2. Inalienable; 3. Wait in cycles; 4. Request to keep;
  • Specific methods:
      1. Avoid multiple locks, and holds locks only when needed, nestedsynchronizedorlockIt’s very easy to go wrong.
      1. Fixed lock order. All threads try to acquire locks in a fixed order.
      1. Timeout automatically releases the lock, using the timeout method, to bring more control to the program.
      • Object.wait()orCountDownLatch.await()Both support the so-called timed wait, which specifies a timeout and prepares exit logic if the lock cannot be obtained.
      • Use an explicit Lock Lock, which is used when the Lock is acquiredtryLock()Methods. When waiting forOver timeThe time,tryLock()Instead of waiting, an error message is returned.
if(lock.tryLock() || lock.tryLock(timeout, unit)){
    // ...
}
Copy the code