Overview

  • Implicit lock: synchronized
  • Explicit lock: the Lock and ReadWriteLock interfaces, whose implementations (such as ReentrantLock and ReentrantReadWriteLock) are built on AQS (AbstractQueuedSynchronizer)

Usage

  • synchronized
    • When synchronized modifies an instance method, the lock object is this: other threads cannot enter any synchronized instance method of that object, but they can still call its non-synchronized methods (and the synchronized methods of a different instance)
    • When synchronized modifies a static method, the lock object is the Class object of the class: other threads cannot enter any static synchronized method of that class, but they can still enter synchronized instance methods (which lock the instance, not the Class)
    • When synchronized modifies a block, the lock object is the obj in synchronized(obj)
    • synchronized locks objects, not code
    • Not a fair lock
    • Reentrant
    • Usually the recommended starting point
  • ReentrantLock
    • Reentrant: the same thread can acquire the same lock multiple times
    • Timed acquisition: tryLock(long timeout, TimeUnit unit) gives up if the lock is not obtained within the timeout
    • Interruptible: lockInterruptibly(). If the thread is interrupted while waiting for the lock, the interrupt is handled first and an InterruptedException is thrown
    • Condition queues: after acquiring a lock, a thread sometimes has to wait for a condition (a conditional predicate) to become true. The thread acquires the lock and checks the predicate; if it does not hold, the thread suspends itself and automatically releases the lock while waiting. Multiple condition queues per lock are supported
    • Acquire and release must be done explicitly, and the release must be placed in a finally block, otherwise an exception may cause the lock never to be released
    • Supports fair and unfair modes
    • Reentrant
    • AQS, on which it is built, supports both exclusive and shared modes (ReentrantLock itself is exclusive; the shared mode is used by components such as ReentrantReadWriteLock)
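The points above can be tied together with a minimal sketch (the BoundedCell class and its method names are hypothetical): unlock in finally, a condition queue with the predicate re-checked in a loop, and a timed tryLock.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: explicit acquire/release with unlock in finally, plus a condition queue.
class BoundedCell {
    private final ReentrantLock lock = new ReentrantLock(); // pass true for a fair lock
    private final Condition notEmpty = lock.newCondition();
    private Object item;

    public void put(Object x) {
        lock.lock();
        try {
            item = x;
            notEmpty.signal();   // wake one thread waiting on the condition
        } finally {
            lock.unlock();       // always release in finally
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (item == null) {  // re-check the conditional predicate after waking
                notEmpty.await();   // atomically releases the lock while waiting
            }
            Object x = item;
            item = null;
            return x;
        } finally {
            lock.unlock();
        }
    }

    // Timed attempt: gives up instead of blocking forever.
    public boolean tryPut(Object x, long timeoutMillis) throws InterruptedException {
        if (!lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS)) {
            return false;
        }
        try {
            item = x;
            notEmpty.signal();
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

The while loop around await() matters: a woken thread must re-verify the predicate, because another thread may have consumed the item between the signal and the re-acquisition of the lock.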

synchronized

  • Code block: compiled to the monitorenter and monitorexit instructions. The underlying monitor is implemented with the operating system's Mutex Lock, which suspends the current thread and switches from user mode to kernel mode, an expensive switch
  • Method modifier: the ACC_SYNCHRONIZED flag in the method's access_flags in the class file is set to 1, meaning the method is synchronized. Inside the JVM, the lock object is the instance the method was called on, or, for a static method, the Klass object representing the class
  • Principle: Based on Java object header and Monitor mechanism to achieve
    • Object header (Mark Word): stores the hash code, GC generational age, lock status flags, the thread holding the lock, the biased thread ID, the bias timestamp, etc. It is 4 bytes on a 32-bit JVM and 8 bytes on a 64-bit JVM
    • Monitor: each object has its own monitor lock (ObjectMonitor), implemented in C++
      • Owner: Points to the thread holding the ObjectMonitor object
      • WaitSet: Holds a queue of threads in wait state
      • EntryList: holds the queue of threads blocked waiting for the lock
      • Recursions: The number of lock reentries
      • Count: counts the number of times the thread obtains the lock
    • When multiple threads compete for the lock, they first enter the EntryList queue. The thread that wins the contention is marked as Owner and count is incremented; the other threads remain blocked in the queue
    • If the Owner thread calls wait(), it releases the object lock and enters the WaitSet to wait to be awakened. Owner is cleared, count is decremented, and the threads in the EntryList compete for the lock again
    • If the Owner thread finishes executing, it releases the lock. Owner is cleared, and the threads in the EntryList compete for the lock again
  • Lock optimizations
    • Adaptive Spinning: a waiting thread executes a bounded loop, trying to acquire the lock on each iteration; this is spin locking. It is adaptive when the spin count is not fixed but determined by the previous spin time on the same lock and the state of the lock owner
    • Lock Coarsening: when the virtual machine detects a string of fragmented operations that all lock the same object, it extends the lock to cover the entire operation sequence
    • Lock Elimination: at runtime, the virtual machine eliminates locks in code where the locked data cannot actually be shared with other threads
    • Biased Locking: during synchronization there is a high probability that no other thread ever competes for the lock. When a thread first acquires the lock object, the status flag bits of the lock object are set to 01 (biased mode) and the thread's ID is recorded in the lock object's Mark Word with a CAS operation; that thread can then enter the synchronized block directly. As soon as a second thread contends for the lock, the biased mode ends and the lock is upgraded to the lightweight state. Threads do not actively release biased locks; revoking a biased lock requires waiting for a global safepoint
    • Lightweight Locking: if the synchronized object is unlocked, the virtual machine first creates a space called the Lock Record in the current thread's stack frame and copies the object header's Mark Word into it (officially the Displaced Mark Word). It then uses a CAS operation to try to update the object's Mark Word to a pointer to the Lock Record, while the owner pointer in the Lock Record points to the object's Mark Word. If the update succeeds, the thread owns the lock on the object and the Mark Word's lock bits are set to 00, indicating a lightweight locked state. If the update fails, the virtual machine first checks whether the object's Mark Word already points into the current thread's stack frame; if so, the current thread already owns the lock and can enter the synchronized block directly. Otherwise multiple threads are competing for the lock. If there is currently only one waiting thread, it waits by spinning; but when the spin exceeds a threshold, or a third thread arrives while one thread holds the lock and another is spinning, the lightweight lock is inflated to a heavyweight lock
    • Heavyweight Locking: when the lock is inflated to a heavyweight lock, the lock flag bits change to 10, the Mark Word stores a pointer to the heavyweight lock (the monitor), and all threads waiting for the lock enter the blocked state; synchronization is then performed through the monitor
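The Owner/WaitSet/EntryList transitions described above can be observed with wait()/notify() on an ordinary monitor. A minimal sketch (the Mailbox class is hypothetical): take() calls wait(), releasing the monitor and joining the WaitSet; put() calls notify(), moving one waiter back to the EntryList to compete for the lock again.

```java
// Sketch of the monitor mechanism: shared state guarded by the monitor of `this`.
class Mailbox {
    private String message;

    public synchronized void put(String msg) {  // lock object: this
        message = msg;
        notify();           // wake one thread in this monitor's WaitSet
    }

    public synchronized String take() throws InterruptedException {
        while (message == null) {  // re-check the predicate after each wakeup
            wait();                // release the monitor, enter the WaitSet
        }
        String msg = message;
        message = null;
        return msg;
    }
}
```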

ReentrantLock

JMM

  • JVM memory areas: method area, heap, VM stacks, native method stacks, program counter register
  • Abstract relationship between threads and main memory: shared variables are stored in main memory. Each thread has a private working memory that holds copies of the shared variables the thread reads/writes
    • When the CPU needs to access main memory, it first reads main memory data into the CPU cache (or uses the cached copy directly if it is present), then reads the CPU cache into registers. When the CPU writes to main memory, it first flushes data from registers to the CPU cache, and then from the cache to main memory
    • Java threads are implemented with a one-to-one threading model: each Java thread maps to a kernel thread, so when we use a Java thread, the Java virtual machine asks an operating-system kernel thread to carry out the task
    • The Java memory model is an abstract specification while the hardware memory architecture is physical; the two cross-cut each other rather than corresponding one-to-one
  • Instruction reordering: compilers and processors reorder instructions in three ways: compiler optimization reordering, instruction-level parallelism reordering (allowed when there is no data dependence), and memory system reordering
    • Compiler optimization reordering happens at compile time; instruction-level parallelism and memory system reordering happen in the processor. In multithreaded environments these reordering optimizations can cause memory visibility problems
    • Different processor instruction sets implement runtime reordering differently
  • Sequentially consistent memory model: an ideal reference model that gives developers strong memory visibility guarantees. In it, every operation executes atomically and is immediately visible to all threads
    • To improve performance, the JMM loosens the constraints on the compiler and processor as much as possible while still giving developers sufficient memory-visibility guarantees
    • As long as the program's results do not change, the compiler and processor may optimize under established rules; reordering that would change the results of the program is forbidden
  • JMM revolves around atomicity, orderliness, and visibility of program execution
    • Atomicity: Atomicity refers to the fact that an operation is not interruptible, even in a multi-threaded environment, and once an operation is started, it will not be affected by other threads
    • Orderliness: within a single thread, code appears to execute in program order. Across threads, however, reordering can make one thread's operations appear out of order to another thread unless synchronization enforces an order
    • Visibility: Visibility refers to whether when one thread changes the value of a shared variable, other threads can immediately know the changed value
  • How the JMM ensures consistent semantics across platforms
    • When the Java compiler generates instructions, it inserts memory barrier instructions at appropriate places to forbid particular types of reordering
    • Happens-before: in the JSR-133 memory model, if the result of one operation needs to be visible to another operation, there must be a happens-before relationship between the two operations
      • Program order rule: every action in a thread happens-before any subsequent action in that thread
      • Monitor lock rule: an unlock of a monitor happens-before every subsequent lock of that same monitor
      • Volatile variable rule: a write to a volatile field happens-before every subsequent read of that volatile field
      • Thread start rule: a call to Thread.start() happens-before every action in the started thread
      • Transitivity: if A happens-before B and B happens-before C, then A happens-before C
      • Thread termination rule: every action in a thread happens-before another thread detects that it has terminated. If thread B modifies a shared variable and thread A returns successfully from B's join() method, B's changes to the shared variable are visible to A
      • Thread interrupt rule: a call to a thread's interrupt() method happens-before the interrupted thread detects the interrupt (for example via Thread.interrupted())
      • Object finalization rule: the completion of an object's constructor happens-before the start of its finalize() method
      • Happens-before only requires that the earlier result be visible to the later action, not that the two actions execute in that physical order
      • Happens-before is the view the JMM presents to developers; underneath, memory barriers mask the differences between processors, so no additional synchronization operations are required

volatile

  • Features:
    • Visibility: A read of a volatile variable always sees the last write to that variable by any thread
    • Atomicity: reads and writes of any single volatile variable are atomic, but compound operations such as volatile++ are not
  • Volatile memory semantics for write-read
    • When a volatile variable is written, the JMM flushes the thread's shared variables from local (working) memory to main memory
    • When a volatile variable is read, the JMM invalidates the thread's local memory and re-reads the shared variables from main memory
  • Volatile memory semantics implementation
    • To implement volatile semantics, the compiler inserts memory barriers before and after volatile reads and writes
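The write-read semantics above are exactly what makes the classic flag pattern work. A minimal sketch (the FlagExample class is hypothetical): the volatile write to ready publishes the earlier plain write to data, so the reader that sees ready == true is guaranteed to also see data == 42.

```java
// Sketch of volatile write-read semantics: publishing a plain write via a volatile flag.
class FlagExample {
    private int data = 0;                   // ordinary shared variable
    private volatile boolean ready = false; // volatile flag

    void writer() {
        data = 42;      // plain write
        ready = true;   // volatile write: flushes working memory to main memory
    }

    int reader() {
        while (!ready) {            // volatile read: re-reads from main memory
            Thread.onSpinWait();    // spin-wait hint (Java 9+)
        }
        return data;                // happens-before guarantees this sees 42
    }
}
```

Without volatile on ready, the reader could spin forever on a stale cached copy, or see ready == true while data is still 0 due to reordering.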

CAS

  • CAS stands for Compare And Swap, which is an atomic operation mechanism
    • unsafe.compareAndSwapInt(this, valueOffset, expect, update): the arguments identify the memory location V (the field at valueOffset in this), the expected old value A (expect), and the new value B (update)
    • Concurrency conflicts are avoided without blocking other threads
    • At the bottom, CAS is implemented with the CPU's CMPXCHG instruction. On multicore processors this instruction alone does not guarantee atomicity, so it is prefixed with the LOCK instruction
  • Bus locks and cache locks:
    • A bus lock locks the bus so that other cores cannot access memory
    • A cache lock locks a portion of memory. When a CPU core reads data from a memory region into its cache, it locks the memory region corresponding to the cache
    • Lock instruction: Older processors used a bus lock, while newer processors generally use a cache lock
  • ABA problem: a value changes from A to B and back to A; CAS cannot detect that it ever changed. In general this low-probability case does not need special handling
    • AtomicStampedReference solves it by pairing the reference with an int stamp (a version number) that is bumped on every update, so an A→B→A sequence is detected; AtomicMarkableReference pairs the reference with a boolean mark instead
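A minimal sketch of both ideas (the CasExamples class is hypothetical): a standard CAS retry loop on AtomicInteger, and an AtomicStampedReference whose stamp exposes an A→B→A sequence that a plain CAS would miss.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

// Sketch: a CAS retry loop, and a stamped reference that detects A -> B -> A.
class CasExamples {
    static int incrementWithCas(AtomicInteger counter) {
        int old;
        do {
            old = counter.get();
        } while (!counter.compareAndSet(old, old + 1)); // retry on conflict, no blocking
        return old + 1;
    }

    static boolean abaDetected() {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int stamp = ref.getStamp();
        // Another thread performs A -> B -> A, bumping the stamp each time.
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);
        // A plain CAS on the value alone would succeed here; the stamped CAS
        // fails because the stamp has moved on, revealing the intermediate change.
        return !ref.compareAndSet("A", "C", stamp, stamp + 1);
    }
}
```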

AQS

  • AbstractQueuedSynchronizer: a queue synchronizer. It is the basic framework for building locks and other synchronization components, and the core building block of the JUC (java.util.concurrent) package
  • The core attribute volatile int state
    • A state of 0 means no thread holds the lock. A thread that acquires the lock increments state by one and decrements it by one on release; reentrant acquisitions increment and decrement multiple times
    • compareAndSetState(int expect, int update) updates the state variable atomically with CAS
  • Wait queue: when a thread fails to acquire the synchronization state, AQS wraps it in a Node, adds the Node to the CLH synchronization queue, and blocks the thread. When the synchronization state is released, the thread in the first waiting Node is woken to retry
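The state and queue mechanics above are easiest to see in a tiny custom synchronizer. A sketch of a non-reentrant mutex, adapted from the Mutex pattern shown in the AbstractQueuedSynchronizer documentation: state 0 means unlocked, 1 means locked; failed acquirers are queued and blocked by AQS itself.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Sketch of a non-reentrant mutex built on AQS.
class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1);  // CAS on the volatile state field
        }
        @Override
        protected boolean tryRelease(int arg) {
            setState(0);                      // volatile write: release the lock
            return true;                      // AQS then wakes the head of the CLH queue
        }
        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }  // queues and blocks on failure
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```

Note how the subclass only defines the state transitions; queuing, blocking, and waking are inherited from AQS.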

concurrent

  • General pattern
    • First, declare the shared variables volatile
    • Then, use CAS atomic conditional updates to synchronize between threads
    • Meanwhile, thread communication is achieved through the memory semantics of volatile reads/writes and of CAS
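The general pattern above can be sketched in a few lines (the SpinLock class is hypothetical): the shared state lives in an AtomicBoolean (a volatile boolean updated via CAS), and threads synchronize without kernel-level blocking.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the volatile + CAS pattern: a simple non-blocking spin lock.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // CAS conditional update: only the thread that flips false -> true enters.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // spin-wait hint while another thread holds the lock
        }
    }

    public void unlock() {
        locked.set(false);         // volatile write publishes the release to other threads
    }

    public boolean isLocked() {
        return locked.get();       // volatile read sees the latest write
    }
}
```

Spin locks like this are only appropriate for very short critical sections; under longer contention the blocking queue of AQS is the better tool.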