How do you use synchronized?

There are three main uses of the synchronized keyword:

  • Modifying an instance method: locks the current object instance
  • Modifying a static method: locks the Class object of the class (a class lock)
  • Modifying a code block: locks the object given in the parentheses

A typical synchronized scenario: double-checked locking to implement the singleton pattern


public class Singleton {
    // volatile makes the initialized instance immediately visible to other
    // threads and prevents instruction reordering during construction
    private volatile static Singleton instance;

    // Privatize the constructor so the class cannot be instantiated externally
    private Singleton() {}

    public static Singleton getSingletonInstance() {
        // First check: skip locking entirely once the instance exists
        if (instance == null) {
            // Class lock: a class has only one Class object, so only one
            // thread at a time can pass through this block
            synchronized (Singleton.class) {
                // Second check: another thread may have created the instance
                // while this thread was waiting for the lock
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

Talk about what you know about synchronized

  1. The synchronized keyword guarantees that a method or block of code it modifies can only be executed by one thread at a time.
  2. Underlying Implementation Principles

(1) Synchronized code blocks are implemented using the monitorenter and monitorexit instructions: monitorenter marks the start of the synchronized block and monitorexit marks its end. When monitorenter executes, the thread attempts to take ownership of the lock by acquiring the monitor object (a monitor exists in the object header of every Java object). If the monitor's counter is 0, the lock can be acquired and the counter is set to 1. When monitorexit executes, the counter is set back to 0 to indicate that the lock has been released. If acquiring the object lock fails, the current thread blocks and waits until the lock is released by another thread.

(2) A synchronized method does not use monitorenter and monitorexit. Instead, it is marked with the ACC_SYNCHRONIZED access flag, which identifies the method as a synchronized method; the JVM checks this flag to determine whether the method is declared synchronized and, if so, performs the corresponding monitor acquisition and release around the method call.
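As a minimal sketch (the class and method names here are illustrative, not from the original), the two forms look like this; compiling the block form produces the monitorenter/monitorexit bytecode, while the method form only carries the ACC_SYNCHRONIZED flag:

public class SyncForms {
    private int count;

    // Compiled with the ACC_SYNCHRONIZED access flag; no monitor instructions
    public synchronized void incrementMethod() {
        count++;
    }

    // Compiled to monitorenter/monitorexit around the block body
    public void incrementBlock() {
        synchronized (this) {
            count++;
        }
    }
}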

Difference between synchronized and Lock?

(1) Synchronized is implemented at the JVM level as a Java keyword; Lock is a class provided under the JDK's java.util.concurrent package, i.e. an API-level lock.

(2) Synchronized does not need the lock to be released manually: the underlying system releases it automatically, including when an exception is thrown, so the lock is never left held forever. Lock must be released manually; if an exception occurs and unlock() is never called, the lock is never released, which may leave other threads blocked indefinitely. (It is best to wrap the locked code in try/finally and call unlock() in the finally block.)

(3) While waiting for a Lock, the wait can be interrupted with interrupt; synchronized can only keep waiting for the lock to be released and cannot respond to interruption.

(4) Lock can tell whether the lock was acquired via tryLock; synchronized cannot.

(5) synchronized is reentrant, uninterruptible, and non-fair; Lock is reentrant, can probe the lock state, can be interrupted, and can be fair.
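A minimal sketch of the recommended Lock usage pattern from point (2) (the class and field names are illustrative): the unlock() call sits in a finally block so the lock is released even if the critical section throws:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockPattern {
    private final Lock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();        // acquire outside the try, per the usual convention
        try {
            count++;        // critical section
        } finally {
            lock.unlock();  // always released, even on exception
        }
    }
}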

What is the difference between synchronized and ReentrantLock?

  1. Both are reentrant locks

A reentrant lock means a thread that already holds a lock can acquire the same lock again. For example, while a thread holds an object's lock, it can acquire that lock a second time; if the lock were not reentrant, this would deadlock. Each time the same thread acquires the lock, the lock counter increases by 1, and the lock is not released until the counter drops back to 0.

  2. Synchronized relies on the JVM while ReentrantLock relies on the API

  3. ReentrantLock adds some advanced features over synchronized, mainly: ① waiting can be interrupted; ② fair locks can be realized.

  • ReentrantLock provides a mechanism to interrupt a thread waiting for a lock, using lock.lockInterruptibly(). This means that a waiting thread can choose to give up waiting and process something else instead, as shown in the sketch after this list.
  • ReentrantLock can specify whether the lock is fair or unfair; synchronized can only be an unfair lock. A fair lock is one where the thread that has waited longest gets the lock first.
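A minimal sketch of both features (the class and thread names are illustrative): the ReentrantLock is constructed as a fair lock, and the waiting thread abandons its lock request when interrupted:

import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleWaitDemo {
    // true requests a fair lock: the longest-waiting thread acquires it first
    private static final ReentrantLock lock = new ReentrantLock(true);

    public static void main(String[] args) throws InterruptedException {
        lock.lock(); // the main thread holds the lock
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // interruptible wait, unlike synchronized
                try {
                    System.out.println("acquired the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("interrupted: gave up waiting for the lock");
            }
        });
        waiter.start();
        Thread.sleep(100);
        waiter.interrupt(); // the waiter stops waiting and handles the interrupt
        lock.unlock();
    }
}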

Visibility, atomicity, and orderliness of the Java memory model?

There is usually no way to ensure that a thread performing a read will see a value written by another thread in time, so synchronization must be used to ensure the visibility of memory writes across threads.

  • Visibility: this refers to visibility between threads: a change made by one thread is immediately visible to another thread. For example, volatile variables are visible: volatile forbids caching the variable inside a thread and forbids reordering, meaning reads and writes go directly to main memory, so changes are visible to other threads. One caveat, however, is that volatile only makes the variable it modifies visible; it does not guarantee atomicity. For example, with volatile int a = 0, the variable a is visible, but a++ is still a non-atomic operation and still has thread-safety issues.

  • Atomicity: atomicity means that one operation, or a group of operations, either executes completely without being interleaved with other threads, or does not execute at all; these operations cannot be split up and executed alternately by several threads.

For example, i++ (that is, i = i + 1) is not an atomic operation: it can be decomposed into three steps

  1. Read the value of variable i
  2. Increment the value by 1
  3. Write the result back to variable i

Since threads execute on processor-allocated time slices, these three steps from multiple threads can interleave during execution. In Java, synchronized and Lock's lock/unlock ensure atomicity.
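A minimal sketch of the resulting lost-update problem (the class name and iteration counts are illustrative): two threads each increment a volatile counter 100,000 times, yet the final value usually comes out below 200,000 because the three steps interleave; making the increment synchronized would restore atomicity:

public class LostUpdateDemo {
    // volatile gives visibility, but count++ is still three separate steps
    private static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // read, add 1, write back: interleavable
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count); // usually less than 200000
    }
}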

  • Orderliness: in the Java memory model, compilers and processors are allowed to reorder instructions to improve performance. Reordering does not affect the execution of a single-threaded program, but it can affect the correctness of concurrent execution across multiple threads. The Java language ensures ordering between threads through the volatile and synchronized keywords: volatile carries the semantics of "forbid instruction reordering", while synchronized relies on the rule that a variable can only be locked by one thread at a time, which means two synchronized blocks holding the same object lock can only execute serially.

Why is there a thread-safety issue?

(Reference: blog.csdn.net/qingxinzira…) First of all, the essence of thread safety is: in a multi-threaded environment, reads and writes of shared variables produce results that are inconsistent with the expected results. Thread-safety problems arise around shared state such as global variables and static variables.

Thread-safety problems stem from the three properties of the Java memory model: visibility, atomicity, and orderliness.

  1. Visibility problems caused by CPU caching: multiple threads execute on multiple CPUs, each of which has its own cache. A thread's execution begins with the CPU caching a copy of a variable from main memory; the thread then operates on that copy and writes the result back to memory afterwards. Each thread therefore operates on its own copy of the variable, and changes made by one thread are not immediately visible to the others.

  2. Atomicity problems caused by task switching: many operations in Java are not atomic. For example, count++ requires at least three CPU instructions: instruction 1 loads the variable count from memory into a CPU register; instruction 2 performs the +1 operation in the register; instruction 3 writes the result back (the caching mechanism may mean it is written to the CPU cache rather than memory). The operating system can switch tasks at the end of any CPU instruction, not just at the end of a high-level language statement. Suppose count = 0 and threads A and B both execute count++. If thread A is switched out after instruction 1 completes, both threads perform count += 1, but the result is 1 rather than the expected 2.

  3. Ordering problems caused by compiler optimization: instructions are reordered to improve performance; the reordering does not affect the execution of single-threaded programs but can affect the correctness of multi-threaded concurrent execution.

Synchronized guarantees the atomicity of code fragments; the volatile keyword ensures the visibility of shared variables and prevents instruction reordering optimizations.

What is the difference between synchronized and volatile?

(1) Volatile is a lightweight implementation of thread synchronization, so volatile performs better than synchronized. However, the volatile keyword can only modify variables, while the synchronized keyword can modify methods and code blocks.

(2) The execution efficiency of the synchronized keyword improved significantly as of JavaSE 1.6, mainly through the biased locks and lightweight locks introduced to reduce the performance cost of acquiring and releasing locks, along with other optimizations. The synchronized keyword is the more commonly used of the two in actual development.

(3) The volatile keyword guarantees visibility but not atomicity; the synchronized keyword guarantees both.

(4) The volatile keyword mainly solves the visibility of variables between multiple threads, while the synchronized keyword solves the synchronization of access to resources between multiple threads.

ThreadLocal overview

The ThreadLocal class is designed to give each thread its own copy of a value: the copy can be accessed only by that thread and not by any other. Each Thread object has a member variable threadLocals of type ThreadLocalMap, which is null by default; the first time a thread uses a ThreadLocal, a ThreadLocalMap is instantiated and assigned to threadLocals, and if it already exists the existing map is used. When we call ThreadLocal's get method, it fetches the current thread's ThreadLocalMap and looks the value up there. In other words, ThreadLocal actually stores data in a ThreadLocalMap, where the key is the ThreadLocal object itself and the value is the object stored through that ThreadLocal; ThreadLocalMap is essentially a specialized HashMap.
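A minimal usage sketch (the class and thread names are illustrative): each thread reads and writes its own copy, so the two threads never see each other's value:

public class ThreadLocalDemo {
    // Each thread gets its own independent copy, initialized to 0
    private static final ThreadLocal<Integer> LOCAL = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            LOCAL.set(LOCAL.get() + 1); // stored in this thread's own ThreadLocalMap
            System.out.println(Thread.currentThread().getName() + " -> " + LOCAL.get());
            LOCAL.remove(); // clean up to avoid leaks, especially in thread pools
        };
        new Thread(task, "t1").start(); // prints t1 -> 1
        new Thread(task, "t2").start(); // prints t2 -> 1
    }
}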

ThreadLocal memory leaks

By looking at the source, we can see that the key used in ThreadLocalMap is a weak reference to the ThreadLocal, while the value is a strong reference. If the ThreadLocal is no longer strongly referenced elsewhere, the key will be cleaned up during garbage collection but the value will not, leaving entries whose key is null. If we do nothing, the value remains strongly referenced and cannot be collected by the GC, which may cause a memory leak. The ThreadLocalMap implementation already takes this into account: calling the set(), get(), and remove() methods cleans up entries with a null key. Even so, it is best to call the remove() method manually after using a ThreadLocal.

static class Entry extends WeakReference<ThreadLocal<?>> {
    /** The value associated with this ThreadLocal. */
    Object value;

    Entry(ThreadLocal<?> k, Object v) {
        super(k);   // the key (the ThreadLocal) is held only weakly
        value = v;  // the value is held strongly
    }
}

The conditions under which deadlocks occur and how to avoid deadlocks in multithreading?

First of all, a deadlock is a state in which multiple threads wait on each other as a result of competing for resources.

There are four conditions for a deadlock to occur:

(1) Mutual exclusion: a process requires exclusive control over the resources allocated to it (such as a printer); that is, a given resource is occupied by only one process at a time. If another process requests the resource, the requesting process can only wait.

(2) No preemption: resources obtained by a process cannot be forcibly taken away by other processes before the process is finished with them; only the process holding them can release them, and only voluntarily.

(3) Hold and wait: a process already holds at least one resource but makes a new resource request, and that resource is occupied by another process; the requesting process blocks, but it does not release the resources it already holds.

(4) Circular wait: there exists a circular chain of processes waiting for resources, in which the resource held by each process is simultaneously requested by the next process in the chain. That is, there is a set of processes {P0, P1, …, Pn} in which the resource requested by Pi is held by P(i+1) (i = 0, 1, …, n-1) and the resource requested by Pn is held by P0.

There are three common ways to avoid deadlocks in multithreading:

(1) Adjust the order of locking. Deadlocks easily occur when multiple threads need the same locks but acquire them in different orders; if all threads are guaranteed to acquire the locks in the same order, deadlocks will not occur.

(2) Set a lock timeout. Set a timeout when a thread tries to acquire a lock; if the timeout is exceeded, the thread gives up its request for the lock (see the sketch after this list).

(3) Linear (ordered) resource allocation: number the resources and require that they be requested in increasing order.
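A minimal sketch of the timeout approach from point (2) (the class and thread names are illustrative): the second thread uses ReentrantLock's tryLock with a timeout and gives up instead of blocking forever:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // the main thread holds the lock and does not release it yet

        Thread t = new Thread(() -> {
            try {
                // Wait at most one second, then abandon the lock request
                if (lock.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        System.out.println("got the lock");
                    } finally {
                        lock.unlock();
                    }
                } else {
                    System.out.println("timed out, giving up");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.start();
        t.join();
        lock.unlock();
    }
}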

Threads communicate through pipes

  1. A piped stream is a special type of stream used to transfer data directly between different threads.
  2. One thread writes data to the output pipe (PipedOutputStream), and another thread reads data from the input pipe (PipedInputStream). By joining the two with the connect() method, different threads can communicate without reading and writing a shared variable; a sketch follows this list.
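A minimal sketch (the class name and message text are illustrative): a writer thread sends bytes through a PipedOutputStream and a reader thread receives them from the connected PipedInputStream:

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDemo {
    public static void main(String[] args) throws IOException {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream();
        out.connect(in); // join the two ends of the pipe

        new Thread(() -> {
            try {
                out.write("hello through the pipe".getBytes());
                out.close(); // signals end-of-stream to the reader
            } catch (IOException ignored) {
            }
        }).start();

        new Thread(() -> {
            try {
                int b;
                while ((b = in.read()) != -1) { // blocks until data arrives
                    System.out.print((char) b);
                }
                in.close();
            } catch (IOException ignored) {
            }
        }).start();
    }
}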

Optimistic and pessimistic locks

CAS – optimistic locking

Problems with CAS

  1. The ABA problem

There is an interesting problem here, because CAS only checks whether the value has changed. Suppose a value A changes to B and then back to A; when the CAS runs, it sees that the value is still A and concludes nothing has changed, when in fact it has. The solution is to follow the optimistic-locking approach commonly used in databases and add a version number: the change path A->B->A becomes 1A->2B->3A, so the version mismatch exposes the intermediate change. Of course, Java provides AtomicStampedReference in the java.util.concurrent.atomic package (since Java 5) to solve the ABA problem.
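A minimal sketch (the values and stamps are illustrative): after another thread performs A -> B -> A, a stamped CAS using the stale stamp fails even though the value matches:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Value "A" with an initial stamp (version) of 0
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int oldStamp = ref.getStamp();

        // Simulate another thread doing A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", 0, 1);
        ref.compareAndSet("B", "A", 1, 2);

        // The value is "A" again, but the stale stamp makes this CAS fail
        boolean swapped = ref.compareAndSet("A", "C", oldStamp, oldStamp + 1);
        System.out.println(swapped); // false: the ABA change was detected
    }
}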

  2. Spin time may be too long

With CAS, non-blocking synchronization means threads are not suspended; a failed attempt simply spins (essentially loops) and tries again, and spinning for too long is costly in performance. If the JVM can use the processor-provided pause instruction, there is some efficiency gain.

  3. Only atomic operations on a single shared variable are guaranteed

CAS guarantees atomicity when operating on one shared variable, but not when operating on multiple shared variables. One solution is to use an object to consolidate multiple shared variables, i.e. make the member variables of one class the shared state, and then CAS the whole object to guarantee atomicity, as sketched below. The java.util.concurrent.atomic package provides AtomicReference to guarantee atomic updates of an object reference.
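A minimal sketch (the Position class and move method are illustrative): two variables are consolidated into one immutable object, and the whole object is swapped with a single CAS, retrying on contention:

import java.util.concurrent.atomic.AtomicReference;

public class MultiVarCasDemo {
    // Immutable holder consolidating two shared variables
    static final class Position {
        final int x, y;
        Position(int x, int y) { this.x = x; this.y = y; }
    }

    private static final AtomicReference<Position> POS =
            new AtomicReference<>(new Position(0, 0));

    static void move(int dx, int dy) {
        Position old, updated;
        do {
            old = POS.get();
            updated = new Position(old.x + dx, old.y + dy);
        } while (!POS.compareAndSet(old, updated)); // retry if another thread won
    }

    public static void main(String[] args) {
        move(1, 2);
        Position p = POS.get();
        System.out.println(p.x + "," + p.y); // 1,2
    }
}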

Lock optimization

(Reference: www.jianshu.com/p/d53bf830f…)

Biased locking

Scenario: when the same thread acquires the lock many times and no other thread competes for it, a biased lock is used. For such a lock the situation is effectively single-threaded: the thread holding the biased lock can skip the whole synchronization and CAS machinery, which reduces the cost of locking and improves performance.

Biased lock acquisition process: when the lock object is acquired by a thread for the first time, the VM sets the flag bits in the object header to the biased mode and uses a CAS operation to record the ID of the acquiring thread in the Mark Word. If the CAS succeeds, then every subsequent time the thread holding the biased lock enters a synchronized block for this lock, the VM performs no synchronization at all. When another thread attempts to acquire the lock, the bias mode ends: depending on whether the lock object is currently locked, revoking the bias reverts it to the unlocked or lightweight-locked state, and subsequent synchronization proceeds as described below for the lightweight lock.

Lightweight lock

Scenario: multiple threads compete for the same lock, but the degree of contention is very light. It can be assumed that the threads will mostly stagger their use of the lock, or wait only a little (after a short spin, the other thread's release lets the waiter acquire the lock).

Before executing a synchronized block, the thread checks whether the object header is in the lock-free state. If it is, the JVM first creates space in the current thread's stack frame to store a lock record and copies the Mark Word from the object header into that lock record; the thread then tries to use CAS to update the Mark Word in the object header to a pointer to the lock record. If that succeeds, the current thread has acquired the lock; if it fails, another thread is competing for the lock, and the current thread attempts to acquire it by spinning.

Heavyweight lock

Synchronized is the heavyweight lock: it is implemented through a lock inside the object called a monitor, and the monitor lock in turn relies on the underlying operating system's Mutex Lock. The operating system implements switching between threads by converting from user mode to kernel mode; this cost is very high and the transition takes a relatively long time, which is why synchronized has historically been inefficient. The JDK introduced lightweight locks and biased locks to optimize synchronized, based on one observation: "for the vast majority of locks, there is no contention for the entire synchronization cycle."

Biased locks, lightweight locks and heavyweight locks

Biased locks and lightweight locks are optimistic locks; heavyweight locks are pessimistic locks. When an object is first instantiated and no thread has accessed it yet, it is biasable: it assumes that only one thread will ever access it. When the first thread accesses it, the object biases toward that thread and holds a biased lock for it. The thread uses CAS once to change the object header to the biased state and write its own thread ID into the header; on every later access it only needs to compare IDs, with no further CAS. When a second thread accesses the object, because a biased lock is never released proactively, the second thread sees the biased state and knows there is now competition for the object. It then checks whether the thread that originally held the lock is still alive: if that thread has exited, the object can revert to the unlocked state and then be re-biased to the new thread; if the original thread is still alive, its stack is examined to check how the object is being used. If the biased lock is still needed, it is upgraded to a lightweight lock (this is when biased locks upgrade to lightweight locks); if not, the object reverts to the unlocked state and can be re-biased. A lightweight lock assumes contention exists but is minimal: usually two threads will stagger their operations on the same lock, or one will wait a little (spin) for the other to release it. But when the spinning exceeds a certain number of attempts, or when one thread holds the lock, a second is spinning, and a third arrives, the lightweight lock inflates into a heavyweight lock, which blocks every thread except the owner and stops the CPU from spinning idly.