Thread synchronization
1. Introduction to the thread synchronization mechanism
Thread synchronization is a mechanism for coordinating data access between threads. This mechanism ensures thread safety.
The Java platform provides several thread synchronization mechanisms, including locks, the volatile, final, and static keywords, and related APIs such as Object.wait()/Object.notify().
2. Overview of locks
Thread safety issues arise when multiple threads concurrently access shared data. One remedy is to convert concurrent access to shared data into serial access, so that a piece of shared data is accessed by only one thread at a time. Locks apply this idea to keep threads safe.
A lock can be understood as a license to protect shared data. For shared data protected by the same license, any thread that wants to access it must first hold the license. Only a thread holding the license may access the shared data; a license can be held by only one thread at a time; and a thread must release its license when it finishes accessing the shared data.
A thread must acquire a lock before accessing shared data. A thread that has acquired a lock is said to hold the lock; a lock can be held by only one thread at a time. The code that the holding thread executes between acquiring the lock and releasing it is called the critical section.
Locks are exclusive, meaning a lock can be held by only one thread at a time. This type of lock is called an exclusive lock or a mutex.
The JVM divides locks into intrinsic (internal) locks and explicit locks. Intrinsic locks are implemented by the synchronized keyword; explicit locks are implemented by classes that implement the java.util.concurrent.locks.Lock interface.
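The two lock styles can be sketched side by side. The following is a minimal example (class and field names are illustrative, not from the original article): the intrinsic lock is acquired and released automatically by the JVM via synchronized, while the explicit ReentrantLock must be released manually in a finally block.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockStyles {
    private int intrinsicCount = 0;
    private int explicitCount = 0;
    private final Lock lock = new ReentrantLock();

    // Intrinsic lock: the JVM acquires and releases the monitor of `this`
    // automatically around the method body.
    public synchronized void incrementIntrinsic() {
        intrinsicCount++;
    }

    // Explicit lock: acquisition and release are written out by the
    // programmer; the release belongs in finally so it runs even on exception.
    public void incrementExplicit() {
        lock.lock();
        try {
            explicitCount++;
        } finally {
            lock.unlock();
        }
    }

    public int getIntrinsicCount() { return intrinsicCount; }
    public int getExplicitCount()  { return explicitCount; }

    public static void main(String[] args) throws InterruptedException {
        LockStyles s = new LockStyles();
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                s.incrementIntrinsic();
                s.incrementExplicit();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both counters end at 2000 because each increment ran serially.
        System.out.println(s.getIntrinsicCount() + " " + s.getExplicitCount());
    }
}
```

Either style makes the increment a critical section; the choice mostly comes down to whether you need the extra features of Lock (fairness, tryLock, interruptible acquisition).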
The role of the lock
Locks enable safe access to shared data by guaranteeing the atomicity, visibility, and ordering of thread operations.
Locks guarantee atomicity through mutual exclusion. Because a lock can be held by only one thread, the critical-section code can be executed by only one thread at a time. The operations performed by that code are therefore indivisible from the perspective of other threads, that is, atomic.
Visibility is guaranteed by the writer thread flushing its processor cache (writing updates back) and the reader thread refreshing its processor cache (discarding stale values). On the Java platform, acquiring a lock implies refreshing the processor cache, and releasing a lock implies flushing it.
Locks also guarantee ordering: the operations a writer thread performs inside a critical section appear, to a reader thread in a critical section guarded by the same lock, to have been executed in source order.
Note:
To ensure thread safety, the following conditions must be met:
The threads must use the same lock when accessing the shared data
Even threads that only read the shared data must use the lock
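The two conditions above can be captured in a small sketch (the SafeCounter class is hypothetical, not from the original article): both the write path and the read path synchronize on the same lock object, so a read is guaranteed to see the latest write.

```java
public class SafeCounter {
    private long value = 0;
    // A single "license" object protecting `value`.
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {   // writer thread acquires the lock
            value++;
        }
    }

    public long get() {
        synchronized (lock) {   // reader thread uses the SAME lock,
            return value;       // which guarantees visibility of the write
        }
    }
}
```

If get() skipped the synchronized block, the increment would still be atomic, but a reader could observe a stale cached value.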
Lock-related concepts
1) Reentrancy
Reentrancy describes whether a thread that already holds a lock can successfully acquire the same lock again (possibly many times):
```java
void methodA() {
    // acquire lock a
    methodB();
    // release lock a
}

void methodB() {
    // acquire lock a (again)
    // ...
    // release lock a
}
```
If a lock can be successfully acquired again by the thread already holding it, the lock is reentrant; otherwise it is non-reentrant.
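Java's intrinsic lock is reentrant, so the methodA/methodB pattern above works with synchronized without deadlocking. A minimal demonstration (class name is illustrative):

```java
public class ReentrantDemo {
    // Both methods lock the monitor of `this`. Because the intrinsic lock
    // is reentrant, methodA can call methodB while still holding the lock.
    public synchronized String methodA() {
        return "A->" + methodB();
    }

    public synchronized String methodB() {
        return "B";
    }

    public static void main(String[] args) {
        // Prints A->B; with a non-reentrant lock this call would deadlock.
        System.out.println(new ReentrantDemo().methodA());
    }
}
```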
2) Lock contention and scheduling
Intrinsic locks on the Java platform are unfair locks; explicit locks (such as java.util.concurrent.locks.ReentrantLock) support both fair and unfair modes.
3) Granularity of locks
The amount of shared data that can be protected by a lock is called the lock granularity. If a lock protects a large amount of shared data, the granularity of the lock is coarse; otherwise, the granularity of the lock is fine.
A coarse-grained lock can cause threads to wait unnecessarily when acquiring it; an excessively fine-grained lock increases the overhead of lock scheduling.
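One common way to refine lock granularity is lock splitting: instead of one coarse lock guarding several unrelated pieces of shared data, each piece gets its own lock. A hypothetical sketch (the class, maps, and lock names are illustrative, not from the original article):

```java
import java.util.HashMap;
import java.util.Map;

public class SplitLocks {
    private final Map<Long, String> users  = new HashMap<>();
    private final Map<Long, String> orders = new HashMap<>();

    // One lock per independent piece of shared data, so a thread
    // touching `users` never waits for a thread touching `orders`.
    private final Object userLock  = new Object();
    private final Object orderLock = new Object();

    public void putUser(long id, String name) {
        synchronized (userLock) { users.put(id, name); }
    }

    public String getUser(long id) {
        synchronized (userLock) { return users.get(id); }
    }

    public void putOrder(long id, String desc) {
        synchronized (orderLock) { orders.put(id, desc); }
    }

    public String getOrder(long id) {
        synchronized (orderLock) { return orders.get(id); }
    }
}
```

Splitting only helps when the data really is independent; if an operation must update both maps atomically, the two locks would have to be acquired together, reintroducing contention plus a deadlock risk.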