Synchronized lock

There are three ways to use it:

  1. Modifying a code block: the modified block is called a synchronized block, and the lock's scope is the entire block
synchronized (data) {
    data.add(i);
}
  2. Modifying an instance (non-static) method: the modified method is called a synchronized method, and the lock's scope is the entire method
public synchronized void add(int i) {
    data.add(i);
}
  3. Modifying a static method: the lock's scope is the entire static method
public synchronized static void add(int i) {
    data.add(i);
}
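The three forms differ only in which object serves as the lock: the block form locks the object in parentheses, an instance method locks `this`, and a static method locks the Class object. A minimal sketch (the class and method names are my own, for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class SyncForms {
    private final List<Integer> data = new ArrayList<>();

    // Form 1: synchronized block -- the lock is the object in parentheses
    public void addWithBlock(int i) {
        synchronized (data) {
            data.add(i);
        }
    }

    // Form 2: synchronized instance method -- the lock is `this`
    public synchronized void addWithMethod(int i) {
        data.add(i);
    }

    // Form 2 written explicitly as a block; the two are equivalent
    public void addWithMethodEquivalent(int i) {
        synchronized (this) {
            data.add(i);
        }
    }

    // Form 3: synchronized static method -- the lock is SyncForms.class
    public static synchronized void staticWork() { }

    public int size() {
        synchronized (data) {
            return data.size();
        }
    }
}
```

Note that form 2 and form 3 use different locks, so a synchronized instance method and a synchronized static method of the same class do not exclude each other.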

In the JVM, objects are divided into three regions in memory:

  • Object header

Mark Word: by default stores the object's HashCode, generational age, and lock flag bits. It reuses its storage space depending on the object's state, which means the data stored in the Mark Word changes at runtime as the lock flag bits change.
Klass Pointer: a pointer from the object to its class metadata, which the virtual machine uses to determine which class the object is an instance of.

  • Instance data

This part mainly stores the object's field data, including fields inherited from its parent class.

  • Alignment padding

Since the virtual machine requires that an object's start address be an integer multiple of 8 bytes, padding data is not always present; it exists purely for byte alignment.

When a thread executes a synchronized block, the lock object is associated with a Monitor object. If the thread successfully acquires the monitor, it becomes the monitor's Owner; otherwise it is placed in the monitor's blocking queue and waits.
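The Owner/blocking-queue behavior can be observed from thread states: while one thread owns the monitor, a contending thread is reported as BLOCKED. A small sketch (class name and latch choreography are my own):

```java
import java.util.concurrent.CountDownLatch;

public class MonitorDemo {
    private static final Object lock = new Object();

    public static Thread.State contend() throws InterruptedException {
        CountDownLatch held = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);

        Thread owner = new Thread(() -> {
            synchronized (lock) {          // this thread becomes the monitor's Owner
                held.countDown();
                try { release.await(); } catch (InterruptedException ignored) { }
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (lock) { }        // waits in the monitor's blocking queue
        });

        owner.start();
        held.await();                      // wait until the lock is definitely held
        waiter.start();
        while (waiter.getState() != Thread.State.BLOCKED) {
            Thread.sleep(1);               // poll until the JVM reports BLOCKED
        }
        Thread.State observed = waiter.getState();
        release.countDown();               // owner exits; waiter can now acquire
        owner.join();
        waiter.join();
        return observed;
    }
}
```

The waiter's state is BLOCKED, not WAITING: BLOCKED is reserved for threads parked on a monitor's entry queue.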

volatile

  • volatile is a lightweight alternative to synchronized: it guarantees visibility and prevents instruction reordering
  • Visibility means that when one thread modifies a shared variable, other threads can immediately read the changed value.

How do you guarantee visibility?

The Java code is as follows.

instance = new Singleton(); // instance is a volatile variable

Converted to assembly, this becomes:

0x01a3de1d: movb $0x0,0x1104800(%esi);
0x01a3de24: lock addl $0x0,(%esp);

When writing to a shared variable modified by volatile, a second line of assembly, the lock-prefixed instruction, is added.

According to the software developer's manual, the lock-prefixed instruction causes two things on multicore processors [1]: 1) the current processor's cache line data is written back to system memory; 2) this write-back invalidates copies of that memory address cached in other CPUs.
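These two effects are what the Java memory model exposes as the volatile happens-before guarantee: a plain write ordered before a volatile write becomes visible to any thread that reads the volatile flag. A minimal sketch (names are my own):

```java
public class VisibilityDemo {
    private static volatile boolean ready = false; // volatile flag
    private static int payload = 0;                // plain shared variable

    public static int run() throws InterruptedException {
        final int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { }   // spins until the volatile write becomes visible
            seen[0] = payload;   // happens-before guarantees 42 is visible too
        });
        reader.start();
        payload = 42;            // plain write, ordered before the volatile write
        ready = true;            // volatile write: flush to memory, invalidate other caches
        reader.join();
        return seen[0];
    }
}
```

If `ready` were not volatile, the reader could spin forever on a stale cached value, and even on seeing `ready == true` it would have no guarantee of seeing `payload == 42`.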

A quick word about as-if-serial

1. No matter how much the compiler and processor reorder instructions to improve parallelism, the execution result of a single-threaded program must not change.
2. As-if-serial semantics protect single-threaded programs: the compiler, runtime, and processor all obey as-if-serial semantics. Together, they create the illusion for programmers of single-threaded programs that instructions execute in program order. As-if-serial semantics let single-threaded programmers worry about neither reordering nor memory visibility issues.

Here are two principles for implementing volatile.

1) The lock-prefixed instruction causes the processor's cache line to be written back to memory. The Lock prefix causes the processor's LOCK# signal to be asserted during execution of the instruction. In a multiprocessor environment, the LOCK# signal ensures that the processor can exclusively use any shared memory while the signal is asserted. However, in recent processors, the LOCK# signal usually locks the cache rather than the bus, because bus locking is expensive. The effect of a lock operation on the processor cache is described in detail in Section 8.1.4 of the manual. For Intel486 and Pentium processors, the LOCK# signal is always asserted on the bus during a lock operation. But in P6 and later processors, the LOCK# signal is not asserted if the memory area being accessed is already cached inside the processor. Instead, the processor locks the cache line for that memory region, writes it back to memory, and relies on the cache coherence mechanism to ensure the atomicity of the modification. This operation is called "cache locking." The cache coherence mechanism prevents memory regions cached by two or more processors from being modified simultaneously.

2) One processor writing its cache back to memory invalidates other processors' caches. IA-32 and Intel 64 processors use the MESI (Modified, Exclusive, Shared, Invalid) protocol to maintain coherence between their internal caches and other processors' caches. On a multi-core system, IA-32 and Intel 64 processors can snoop other processors' accesses to system memory and to their internal caches. The processor uses this snooping technique to ensure that the data in its internal cache, in system memory, and in other processors' caches stays consistent across the bus. For example, in Pentium and P6 family processors, if one processor snoops that another processor intends to write to a memory address currently in the shared state, the snooping processor invalidates its cache line and forces a cache line fill the next time the same memory address is accessed.

Optimization of volatile

LinkedTransferQueue is a new queue collection class in JDK 7 that appends padding bytes to volatile variables to optimize enqueue and dequeue performance. The relevant code from LinkedTransferQueue is as follows:

/** head of the queue */
private transient final PaddedAtomicReference<QNode> head;
/** tail of the queue */
private transient final PaddedAtomicReference<QNode> tail;

static final class PaddedAtomicReference<T> extends AtomicReference<T> {
    // enough padding for 64 bytes with 4-byte references
    Object p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, pa, pb, pc, pd, pe;
    PaddedAtomicReference(T r) { super(r); }
}

public class AtomicReference<V> implements java.io.Serializable {
    private volatile V value;
    // omit other code
}
  • LinkedTransferQueue is a class that uses an inner class type to define the head and tail nodes of the queue
  • The inner PaddedAtomicReference class does only one thing relative to its parent class AtomicReference: it pads the shared variable out to 64 bytes.
  • We can calculate: an object reference is 4 bytes, the class appends 15 reference fields (60 bytes), and together with the parent's value field that makes 64 bytes, one full cache line.

Biased locking

When a thread accesses a synchronized block and acquires the lock, it stores the biased-lock thread ID in the lock record in the object header and in the stack frame. Afterwards, when that thread enters and exits the synchronized block, it does not need CAS operations to lock and unlock; it simply tests whether the object header's Mark Word contains a biased lock pointing to the current thread. If the test succeeds, the thread has acquired the lock. If it fails, it then tests whether the biased-lock flag in the Mark Word is set to 1 (indicating biased locking): if not set, it uses CAS to compete for the lock; if set, it attempts to use CAS to point the object header's biased lock at the current thread.
Biased locking is a mechanism that releases the lock only when a race occurs: the thread holding a biased lock releases it only when another thread tries to compete for it.
Biased lock undo:
  • First wait for a global safepoint –> pause the thread holding the biased lock (and check whether it is alive) –> if alive, walk the stack of the biased-lock thread –> traverse the lock records of the biased-lock object –> the Mark Word in the lock records and the object header is either re-biased to another thread –> or restored to the unlocked state, marking the object as unsuitable for biased locking –> finally wake up the paused thread

Lightweight Lock Unlocking

It uses an atomic CAS operation to replace the Displaced Mark Word back into the object header. If this succeeds, no contention occurred. If it fails, the lock is contended and inflates into a heavyweight lock.
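The CAS at the heart of lightweight locking can be sketched with AtomicReference. This is only an analogy, not the JVM's actual implementation; a real lightweight lock would inflate to a heavyweight lock on CAS failure rather than simply report it:

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    // `owner` plays the role of the Mark Word's lock-record pointer
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public boolean tryLock() {
        // succeeds only when no thread holds the lock, like an
        // uncontended CAS on the Mark Word; a failure here is where
        // the real JVM would begin lock inflation
        return owner.compareAndSet(null, Thread.currentThread());
    }

    public void unlock() {
        // the unlock CAS mirrors swapping the Displaced Mark Word back
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Unlike the JVM's lock, this sketch is not reentrant: a second tryLock from the owning thread fails because `owner` is no longer null.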

The pros and cons of locking

Interview questions:

1. The difference between synchronized and volatile:

  • volatile can only modify instance and class variables, whereas synchronized can modify methods as well as code blocks.
  • volatile guarantees visibility of data but not atomicity (multiple threads writing is not thread-safe); synchronized is a mutual-exclusion mechanism that guarantees both.
  • volatile prevents instruction reordering: it solves the problem of a double-checked singleton observing object initialization out of order.
  • volatile can be seen as a lightweight synchronized. It does not guarantee atomicity, but if multiple threads only assign to a shared variable and perform no other operations, volatile can replace synchronized, because the assignment itself is atomic and volatile guarantees visibility, so it is thread-safe.
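The double-checked singleton mentioned above combines both keywords: synchronized serializes construction, while volatile forbids reordering of "allocate, initialize, publish" so no thread can observe a half-constructed instance:

```java
public class Singleton {
    // volatile is essential: without it, `instance` could be published
    // before the constructor finishes, and another thread could read
    // a half-constructed object
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

After the first initialization, callers take the lock-free fast path through the first check.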

2. The difference between synchronized and Lock:

  • synchronized is a keyword; the JVM does everything for us at the bottom layer. Lock is an interface with a rich API at the JDK level.
  • synchronized releases the lock automatically, while a Lock must be released manually.
  • synchronized is not interruptible while waiting; Lock acquisition can be interruptible or not.
  • Lock can tell you whether the lock was successfully acquired; synchronized cannot.
  • synchronized can lock methods and code blocks; Lock can only lock code blocks.
  • Lock can use read locks to improve multithreaded read efficiency.
  • synchronized is a non-fair lock, while ReentrantLock lets you choose whether it is fair.
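Several of these points can be seen in one small sketch using ReentrantLock (the class and method names are my own):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    // `true` requests a fair lock, something synchronized cannot offer
    private static final ReentrantLock lock = new ReentrantLock(true);

    public static boolean tryWork() {
        // tryLock reports success or failure instead of blocking,
        // which synchronized has no way to express
        if (!lock.tryLock()) {
            return false;
        }
        try {
            return true;            // critical section would go here
        } finally {
            lock.unlock();          // must be released manually, unlike synchronized
        }
    }

    public static boolean isHeld() {
        return lock.isLocked();
    }
}
```

The try/finally shape is the standard idiom: without it, an exception in the critical section would leave the lock held forever, a failure mode synchronized rules out by construction.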