Preface
I have recently been reviewing locks, so I put together this survey of locking in Java. This article introduces the various kinds of locks; I hope it helps you.
Locks in Java
Optimistic locking
Optimistic locking takes an optimistic view suited to read-heavy, write-light workloads where concurrent writes are unlikely: each time a thread fetches the data it assumes no one else will modify it, so it does not lock. Only at update time does it check whether anyone else changed the data in the meantime: it first reads the current version number, then performs the update only if the version number is unchanged, and repeats the whole read-compare-write cycle if that check fails. In Java, optimistic locking is basically implemented through the CAS (compare-and-swap) operation, an atomic update: CAS compares the current value with the expected value and updates only if they match; otherwise the operation fails.
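A minimal sketch of this read-compare-write loop, using `AtomicInteger.compareAndSet` (the class name `OptimisticCounter` is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic update: read the current value, compute the new one,
    // and retry if another thread changed the value in the meantime.
    public int increment() {
        int current;
        do {
            current = value.get();                              // read
        } while (!value.compareAndSet(current, current + 1));   // compare-and-write, retry on failure
        return current + 1;
    }

    public int get() {
        return value.get();
    }
}
```

If another thread updates the value between the read and the CAS, `compareAndSet` fails and the loop retries with a fresh read.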
Pessimistic locking
Pessimistic locking takes the opposite, pessimistic view, suited to workloads where concurrent writes are likely: every time a thread reads or writes the data, it assumes someone else will modify it, so it locks first, and any other thread that tries to read or write the data blocks until the lock is released. The classic pessimistic lock in Java is synchronized. In the AQS framework, a CAS-based optimistic attempt is tried first; if it fails to acquire the lock, the thread falls back to pessimistic blocking, as in ReentrantLock.
Spin locks
The principle of a spin lock is simple: if the thread holding the lock will release it within a short time, the threads waiting for the lock need not block and suspend (which requires switching between kernel mode and user mode); they just busy-wait (spin) and acquire the lock as soon as the holder releases it, avoiding the cost of user/kernel switches.
Of course, a spinning thread that cannot get the lock is doing idle work, so a maximum spin wait time must be set: if the holder runs past that limit without releasing the lock, the contending threads stop spinning and enter the blocked state.
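The spin idea can be sketched with a simple, non-reentrant spin lock built on CAS; the class name `SpinLock` is illustrative, and a production version would also cap the spin time as described above:

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait (spin) instead of blocking: keep retrying the CAS
        // until the previous owner sets the reference back to null.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // JDK 9+ hint that this is a spin loop
        }
    }

    public void unlock() {
        // Only the owning thread's CAS will succeed here.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```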
Advantages and disadvantages of spin locks
Spin locks minimize thread blocking. When lock contention is light and the protected code blocks are very short, performance improves greatly, because the cost of spinning is lower than the cost of blocking, suspending, and then waking a thread, which causes two context switches.
But if the lock is heavily contended, or the thread holding it needs a long time to execute its synchronized block, a spin lock is not appropriate: each spinning thread occupies the CPU doing idle work, and with many threads contending for one lock the time to acquire it grows long, so the cost of spinning exceeds the cost of blocking and suspending, while other threads that need the CPU cannot get it. That is a waste of CPU, so in this situation we want to turn the spin lock off.
Spin lock time threshold
The purpose of a spin lock is to keep occupying the CPU until the lock is acquired. But how long should a thread spin? If the spin time is too long, a large number of spinning threads will occupy CPU resources and hurt overall system performance, so the spin duration matters a great deal.
JDK 1.6 introduced adaptive spinning: the spin time on a given lock is not fixed but is determined by the outcome of previous spins on that lock and the state of the lock's owner. The time of one thread context switch is generally considered the break-even point, and the JVM also adapts to the current CPU load: if the average load is below the number of CPUs, the thread spins; if more than (CPUs/2) threads are already spinning, the thread blocks directly; if a spinning thread sees the lock's owner change, it reduces its spin time (spin count) or blocks; if the CPU is in power-saving mode, it stops spinning. The worst-case spin cost is on the order of the CPU's memory latency (CPU A stores a value and CPU B reads it directly). Differences in thread priority are waived as appropriate while spinning.
The synchronized lock
Synchronized treats any non-null object as a lock. It is an exclusive pessimistic lock and a reentrant lock.
Scope of synchronized
1. Applied to an instance method, it locks the object instance (this);
2. Applied to a static method, it locks the Class instance. Because class metadata is shared globally (it lived in the permanent generation before JDK 1.8), a static synchronized method behaves like a global lock across all threads calling that method;
3. Applied to an object instance in a synchronized block, it locks every code block that uses that object as the lock. The object monitor maintains multiple queues; when several threads access the monitor together, it stores them in different containers.
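A small sketch of the three scopes (class and method names are illustrative):

```java
public class SyncScopes {
    private int count = 0;

    // 1. Instance method: the lock is the object instance (this)
    public synchronized void instanceIncrement() {
        count++;
    }

    // 2. Static method: the lock is the Class instance (SyncScopes.class),
    //    shared by all threads calling static synchronized methods of this class
    public static synchronized void staticWork() {
    }

    // 3. Synchronized block: the lock is the given object instance
    public void blockIncrement() {
        synchronized (this) {
            count++;
        }
    }

    public int getCount() {
        return count;
    }
}
```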
Synchronized core components
- Wait Set: where threads blocked by calling wait() are placed;
- Contention List: the contention queue, where all threads requesting the lock are placed first;
- Entry List: threads in the Contention List that qualify as candidates are moved to the Entry List;
- OnDeck: at most one thread is competing for the lock at any given time; that thread is called OnDeck;
- Owner: the thread that currently holds the lock;
- !Owner: the thread that has just released the lock.
1. The JVM picks one thread at a time from the tail of the queue as the lock-contention candidate (OnDeck). But under concurrency the ContentionList is accessed by a large number of threads via CAS, so to reduce contention on the tail element, the JVM moves some threads into the EntryList as candidate contenders.
2. On unlock, the Owner thread migrates some threads from the ContentionList to the EntryList and designates one EntryList thread as the OnDeck thread.
3. The Owner thread does not hand the lock directly to the OnDeck thread; it only grants OnDeck the right to compete for the lock, and OnDeck must re-contend for it. This sacrifices some fairness but greatly improves throughput; in the JVM this choice is called "competitive switching."
4. Once the OnDeck thread acquires the lock it becomes the Owner; threads that fail to acquire it remain in the EntryList. If the Owner is blocked by wait(), it moves to the WaitSet queue until it is awakened by notify() or notifyAll() and re-enters the EntryList.
5. Threads in the ContentionList, EntryList, and WaitSet are all in the blocked state, and the blocking is performed by the operating system (on Linux, via the pthread_mutex_lock kernel function).
6. Synchronized is an unfair lock. Before a thread enters the ContentionList, it first tries to spin-acquire the lock; only if that fails does it enter the ContentionList, which is obviously unfair to the threads already waiting in the queue. Another unfairness: a thread that spin-acquires the lock may directly preempt the lock from the OnDeck thread.
Reference: blog.csdn.net/zqz_zqz/ar…
7. Every object has a Monitor object, and locking competes for that monitor. Block-level locking is implemented with the monitorenter and monitorexit bytecode instructions, respectively.
8. Synchronized is a heavyweight operation that must call into the operating system, so its performance is poor; locking may consume more time than the useful work being protected.
9. Java 6 optimized synchronized heavily, adding adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks, which substantially improved its efficiency. Java 7 and 8 further optimized the keyword's implementation. Biased and lightweight locks both use flag bits in the object header and do not need the operating system to perform the locking.
ReentrantLock
ReentrantLock implements the Lock interface and the methods it defines. It is a reentrant lock that, in addition to everything synchronized can do, provides methods for responding to interruption, polled lock requests, timed lock waits, and so on, which helps avoid multi-thread deadlock.
Main methods of the Lock interface and ReentrantLock
1. void lock(): if the lock is free, the current thread acquires it; if the lock is already held by another thread, the current thread is disabled for scheduling until it acquires the lock.
2. boolean tryLock(): acquires the lock and returns true immediately if it is available, otherwise returns false. The difference from lock() is that tryLock() only "tries" to acquire the lock: if the lock is unavailable, the current thread is not disabled and simply continues executing. lock(), by contrast, must obtain the lock; if the lock is unavailable, it keeps waiting, and the current thread does not proceed until the lock is acquired.
3. void unlock(): the current thread releases the lock it holds. The lock can only be released by its holder; executing this method without holding the lock throws an exception.
4. Condition newCondition(): obtains the wait/notify component (Condition) bound to this lock. The current thread may call the component's await() method only after acquiring the lock, and calling await() releases the lock.
5. getHoldCount(): the number of times the current thread holds this lock, i.e. the number of times it has executed the lock method.
6. getQueueLength(): an estimate of the number of threads waiting to acquire this lock. For example, if 10 threads start and 1 thread holds the lock, this returns 9.
7. getWaitQueueLength(Condition condition): an estimate of the number of threads waiting on the given condition associated with this lock. For example, if 10 threads all execute the same condition object's await() method, this method returns 10.
8. hasWaiters(Condition condition): whether any threads are waiting on the given condition associated with this lock, i.e. whether any thread has executed condition.await() on the specified condition.
9. hasQueuedThread(Thread thread): whether the given thread is waiting to acquire this lock.
10. hasQueuedThreads(): whether any threads are waiting for this lock.
11. isFair(): whether this lock is fair.
12. isHeldByCurrentThread(): whether the current thread holds this lock.
13. isLocked(): whether this lock is held by any thread.
14. lockInterruptibly(): acquires the lock unless the current thread is interrupted.
15. tryLock(): attempts to acquire the lock, succeeding only if the lock is not held at the moment of the call.
16. tryLock(long timeout, TimeUnit unit): acquires the lock if it is not held by another thread within the given wait time.
Unfair lock
The mechanism by which the JVM allocates locks randomly and opportunistically is called unfair locking. ReentrantLock's constructor provides a flag to initialize a fair lock; the default is unfair. In practice, unfair locks execute far more efficiently than fair locks, so unless the program has a special need for fairness, the unfair allocation mechanism is most commonly used.
Fair lock
A fair lock means the lock allocation mechanism is fair: the thread that requested the lock first is granted it first. ReentrantLock's constructor takes an initialization flag that determines whether the lock is fair.
ReentrantLock vs. synchronized
1. ReentrantLock uses lock() and unlock() to lock and unlock. Unlike synchronized, which the JVM releases automatically, ReentrantLock must be unlocked manually. To ensure the lock is released even when an exception is thrown, unlock() must be called in a finally block.
2. Compared with synchronized, ReentrantLock offers interruptible waiting, fairness, and the ability to bind multiple conditions. Use ReentrantLock in those cases.
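A minimal sketch of the lock-in-finally pattern (class and field names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockInFinally {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 0;

    public void deposit(int amount) {
        lock.lock();        // unlike synchronized, not released automatically
        try {
            balance += amount;
        } finally {
            lock.unlock();  // always release in finally, even if an exception is thrown
        }
    }

    public int getBalance() {
        return balance;
    }
}
```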
Differences between Condition and Object's monitor methods
1. Condition's await() method is equivalent to Object's wait();
2. Condition's signal() is equivalent to Object's notify();
3. Condition's signalAll() is equivalent to Object's notifyAll();
4. With ReentrantLock you can wake up threads waiting on a specified condition (just create several Condition objects), whereas Object's notify wakes a thread at random.
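A classic illustration of targeted wakeups is a bounded buffer with two conditions, one for producers and one for consumers (a sketch; class and field names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // producers wait here
    private final Condition notEmpty = lock.newCondition();  // consumers wait here
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();       // releases the lock while waiting
            }
            items.addLast(item);
            notEmpty.signal();         // wakes only a waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            T item = items.removeFirst();
            notFull.signal();          // wakes only a waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

With Object's wait/notify there is only one wait set per monitor, so notify might wake another producer instead of a consumer; with two Condition objects, each signal targets exactly the group of threads that should run.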
Differences between tryLock, lock, and lockInterruptibly
1. tryLock() returns true if the lock is obtained and false otherwise; tryLock(long timeout, TimeUnit unit) adds a time limit and returns false if the lock has not been acquired when the time expires.
2. lock() obtains the lock if it is available; otherwise it waits until the lock becomes available.
3. If two threads execute lock() and lockInterruptibly() respectively and both are interrupted, lock() does not throw an exception, while lockInterruptibly() throws InterruptedException.
Benefits of reentrant locks
When a thread already owns the lock and needs to acquire it again (for example, in a nested or recursive call), it can do so directly without waiting.
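A minimal sketch of reentrancy with ReentrantLock, where an outer method holding the lock calls an inner method that acquires the same lock again (class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    // outer() holds the lock and calls inner(), which acquires it again
    public int outer() {
        lock.lock();
        try {
            return inner() + 1;   // no deadlock: the lock is reentrant
        } finally {
            lock.unlock();
        }
    }

    public int inner() {
        lock.lock();
        try {
            return lock.getHoldCount(); // 2 when called from outer()
        } finally {
            lock.unlock();
        }
    }
}
```

Calling outer() returns 3: inner() observes a hold count of 2 because the same thread holds the lock twice.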
Semaphore
Semaphore is a counting-based semaphore. You set a threshold (a number of permits); multiple threads compete for permits, and each returns its permit when its work is done. Once the threshold is exceeded, further requests block. Semaphore can be used to build object pools and resource pools, such as database connection pools.
Implementing a mutex (count of 1)
We can also create a Semaphore with a count of one as a mutex-like mechanism. This is also called a binary semaphore, representing two mutually exclusive states.
Other uses
You can create a semaphore and have each thread consume one permit, then check the remaining count afterwards: if it equals the initial count, all threads have finished their work and the main thread can continue.
Semaphore vs. ReentrantLock
Semaphore can do almost everything ReentrantLock can, with similar methods: acquire() and release() acquire and release the critical resource. Testing shows that Semaphore.acquire() responds to interruption by default, the same behavior as ReentrantLock.lockInterruptibly(): a thread waiting for the critical resource can be interrupted by Thread.interrupt().
Semaphore also supports polled and timed permit requests, almost identical to ReentrantLock except that the method is named tryAcquire instead of tryLock. Semaphore likewise provides fair and unfair modes, which can also be set in the constructor.
Semaphore's release is also manual, so as with ReentrantLock, release() must be called in a finally block to prevent an exception from leaving the permit unreleased.
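A sketch of the resource-pool use case, using a hypothetical `ConnectionPool` wrapper where each "connection" is simply a permit:

```java
import java.util.concurrent.Semaphore;

public class ConnectionPool {
    private final Semaphore permits;

    public ConnectionPool(int size) {
        this.permits = new Semaphore(size); // at most `size` connections in use
    }

    // Blocks until a connection permit is available
    public void acquireConnection() throws InterruptedException {
        permits.acquire();
    }

    // Returns the permit so another thread can proceed
    public void releaseConnection() {
        permits.release();
    }

    public int available() {
        return permits.availablePermits();
    }
}
```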
AtomicInteger
AtomicInteger is a class that provides atomic operations on an Integer. Common companions are AtomicBoolean, AtomicLong, AtomicReference, and so on; they differ only in the operand type. AtomicReference, remarkably, can turn all operations on an object reference into atomic operations.
We know that in multithreaded programs, operations such as ++i and i++ are not atomic and are among the thread-unsafe operations. Normally we would use synchronized to make them atomic, but the JDK provides these dedicated synchronization classes for such operations, which are easier to use and make the program run more efficiently. By some accounts, AtomicInteger usually performs several times better than ReentrantLock.
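A sketch contrasting the two approaches: incrementAndGet() replaces the unsafe counter++, so the final count is exact even with several threads (class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    // Runs `threads` threads, each incrementing a shared counter
    // `perThread` times, and returns the final count.
    public static int countWith(int threads, int perThread) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counter.incrementAndGet(); // atomic replacement for counter++
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return counter.get();
    }
}
```

With a plain `int` field and `counter++`, the same experiment would usually lose updates.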
Reentrant lock (recursive lock)
This section is about reentrant locks in the broad sense, not only Java's ReentrantLock. A reentrant lock, also known as a recursive lock, means that after an outer function of a thread acquires the lock, inner or recursive functions of the same thread can still acquire it without being blocked. In Java, both ReentrantLock and synchronized are reentrant locks.
Fair and unfair locks
Fair Lock
Before locking, check whether there are queued threads; queued threads take priority: first come, first served.
Unfair Lock
When locking, ignore the queue and try to acquire the lock directly; if that fails, join the end of the queue and wait.
1. Unfair locks perform 5 to 10 times better than fair locks, because a fair lock must maintain a queue across multiple cores.
2. synchronized in Java is an unfair lock, and ReentrantLock's lock() method is unfair by default.
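Fairness is chosen at construction time; a small sketch (the helper method name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // Returns whether a ReentrantLock built with the given flag is fair.
    public static boolean isFairLock(boolean fairFlag) {
        return new ReentrantLock(fairFlag).isFair();
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantLock().isFair());     // default constructor: unfair
        System.out.println(new ReentrantLock(true).isFair()); // fairness flag: fair
    }
}
```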
ReadWriteLock read-write lock
To improve performance, Java provides the read-write lock: a read lock is used where data is read and a write lock where it is written, giving flexible control. When no write lock is held, reads are non-blocking, which improves efficiency to a certain extent. A read-write lock is split into a read lock and a write lock: multiple read locks are not mutually exclusive with each other, but a read lock and a write lock are mutually exclusive. This is controlled by the JVM itself; you only need to apply the right lock.
Read lock
If your code only reads data, and many threads may read it at the same time but none may write at the same time, use a read lock.
Write lock
If your code modifies data, and only one thread may write while no one may read at the same time, use a write lock. In short: lock for reading when you read, lock for writing when you write!
In Java, the read-write lock is the interface java.util.concurrent.locks.ReadWriteLock, with the concrete implementation ReentrantReadWriteLock.
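A minimal read-write-locked cache sketch using ReentrantReadWriteLock (class and field names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SimpleCache {
    private final Map<String, String> data = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    // Many readers may hold the read lock at once
    public String get(String key) {
        rwLock.readLock().lock();
        try {
            return data.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The write lock is exclusive: it blocks all readers and other writers
    public void put(String key, String value) {
        rwLock.writeLock().lock();
        try {
            data.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```

Reads in get() proceed concurrently with each other; only put() forces exclusion.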
Shared and exclusive locks
Exclusive lock
In exclusive lock mode, only one thread can hold the lock at a time. ReentrantLock is a mutex implemented in exclusive mode.
An exclusive lock is a pessimistic, conservative locking strategy that avoids even read/read conflicts: if one read-only thread acquires the lock, all other readers must wait. Since read operations do not affect data consistency, this limits concurrency unnecessarily.
Shared lock
A shared lock allows multiple threads to obtain the lock and concurrently access shared resources, such as ReadWriteLock. The shared lock is an optimistic lock, which relaxes the locking policy and allows multiple read threads to access the shared resource simultaneously.
1. The AQS internal Node class defines two constants, SHARED and EXCLUSIVE, which identify the mode in which a waiting thread in the AQS queue will acquire the lock.
2. Java provides the read-write lock ReadWriteLock, which allows a resource to be accessed by multiple read operations or by one write operation, but not both at the same time.
Mutex Lock
Synchronized is implemented through an internal object lock called a monitor, but the monitor lock ultimately depends on the operating system's Mutex Lock. The operating system switches threads by transitioning from user mode to kernel mode; this cost is very high, and the transitions take relatively long, which is why synchronized is inefficient. A lock that relies on the operating system's Mutex Lock is therefore called a "heavyweight lock." The core of all the JDK's optimizations to synchronized is reducing the use of this heavyweight lock.
After JDK1.6, “lightweight locking” and “biased locking” were introduced to reduce the performance cost of acquiring and releasing locks and improve performance.
Lightweight lock
There are four types of lock states: unlocked, biased, lightweight, and heavyweight.
Lock escalation
As contention grows, a lock can be upgraded from a biased lock to a lightweight lock to a heavyweight lock. The upgrade is one-way: locks only go from low to high, and there is no lock downgrade.
"Lightweight" is relative to the traditional lock implemented with an operating-system mutex. Note, however, that lightweight locks are not intended to replace heavyweight locks; they are meant to reduce the cost of traditional heavyweight locks when there is no multi-threaded contention. Lightweight locks suit the situation where threads execute synchronized blocks alternately; if the same lock is accessed by multiple threads at the same time, the lightweight lock inflates into a heavyweight lock.
Biased locking
Research by the HotSpot authors found that in most cases a lock is not only uncontended but is also always acquired repeatedly by the same thread. Biased locking was introduced to eliminate the cost of re-acquiring the lock (the CAS) once a thread has obtained it, "biasing" the lock toward that thread. Whereas lightweight locking reduces the unnecessary execution paths when there is no multi-threaded contention (its acquisition and release depend on repeated CAS atomic instructions), a biased lock needs only a single CAS, to install the ThreadID. Because a biased lock must be revoked once multi-threaded contention appears, it pays off only when the cost of revocation is lower than that of the CAS instructions it saves. In short: lightweight locking improves performance when threads execute synchronized blocks alternately, and biased locking further improves it when only one thread executes them.
Segmented lock
Segmented locking is also not a concrete lock but an idea; ConcurrentHashMap is the classic practical example of segmented locking.
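The idea can be sketched with a lock-striped counter, where each key hashes to one of several independent locks so that threads touching different segments never contend (this illustrates the concept only; it is not how ConcurrentHashMap is implemented in current JDKs, and the class name is hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {
    private final ReentrantLock[] locks;
    private final long[] counts;

    public StripedCounter(int segments) {
        locks = new ReentrantLock[segments];
        counts = new long[segments];
        for (int i = 0; i < segments; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    // Map a key to one segment
    private int segmentFor(Object key) {
        return Math.floorMod(key.hashCode(), locks.length);
    }

    public void increment(Object key) {
        int s = segmentFor(key);
        locks[s].lock();          // lock only this key's segment
        try {
            counts[s]++;
        } finally {
            locks[s].unlock();
        }
    }

    // A full total must visit every segment
    public long total() {
        long sum = 0;
        for (int i = 0; i < locks.length; i++) {
            locks[i].lock();
            try {
                sum += counts[i];
            } finally {
                locks[i].unlock();
            }
        }
        return sum;
    }
}
```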
Lock optimization
Reduce lock holding time
Lock only the code that actually requires thread safety.
Reduce lock granularity
Splitting a large object (one that many threads may access) into smaller objects greatly increases parallelism and reduces lock contention; with less contention, the success rates of biased and lightweight locks also improve. The most typical example of reducing lock granularity is ConcurrentHashMap.
Lock separation
The most common lock separation is the read-write lock ReadWriteLock, which separates a lock into a read lock and a write lock according to function: reads are not mutually exclusive with reads, while reads and writes, and writes and writes, are mutually exclusive, which preserves thread safety while improving performance. The read-write separation idea can be extended further: any operations that do not affect each other can use separate locks. For example, LinkedBlockingQueue in the JDK concurrency package takes elements from the head and puts elements at the tail.
Lock coarsening
In general, to ensure effective concurrency between multiple threads, each thread should hold a lock for as short a time as possible, releasing it immediately after using the shared resource. But everything has limits: if the same lock is continually requested, synchronized on, and released, the repeated lock operations themselves consume valuable system resources and work against performance. In that case the JVM coarsens the lock, merging a run of adjacent lock regions into one larger one.
Lock elimination
Lock elimination happens at the compiler level: when the just-in-time compiler finds lock objects that cannot possibly be shared, it eliminates the locking operations on them. Such locks mostly arise from irregular coding habits rather than necessity.