1.1 Critical section
A critical section is a simple way to ensure that only one thread can access a shared resource at any one time. If more than one thread attempts to enter the critical section at the same time, all threads except the one that entered are suspended, and they remain suspended until that thread leaves. Once the critical section is released, the other threads can compete to enter it, so operations on the shared resource are effectively atomic.
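In Java, a critical section is usually expressed as a synchronized block. Here is a minimal sketch (the class and field names are illustrative):

```java
public class CriticalSectionDemo {
    private final Object lock = new Object(); // guards the shared counter
    private int counter = 0;

    public void increment() {
        synchronized (lock) { // enter the critical section
            counter++;        // safe: no other thread is inside concurrently
        }                     // leave; suspended threads may now enter
    }

    public int get() {
        synchronized (lock) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CriticalSectionDemo demo = new CriticalSectionDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) demo.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) demo.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(demo.get()); // 20000 - no lost updates
    }
}
```

Without the synchronized block, the two threads' `counter++` operations could interleave and updates would be lost.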
1.2 Mutex
A mutex is similar to a critical section in that only the thread that owns the mutex can access the resource. Because there is only one mutex, the shared resource is guaranteed never to be accessed by multiple threads at the same time. The thread currently using the resource should surrender the mutex when its task is complete, so that other threads can acquire it and access the resource in turn. Mutexes are more complex than critical sections because they can safely share resources not only between threads of the same application, but also between threads of different applications (that is, across processes).
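Within a single JVM, the acquire/release discipline described above can be sketched with java.util.concurrent.locks.ReentrantLock (note this is an in-process lock; the cross-application sharing mentioned above requires OS-level named mutexes, which plain Java does not expose):

```java
import java.util.concurrent.locks.ReentrantLock;

public class MutexDemo {
    private static final ReentrantLock mutex = new ReentrantLock();
    private static int shared = 0;

    // Only the thread holding the mutex may touch the shared resource.
    static void update() {
        mutex.lock();           // block until the mutex is acquired
        try {
            shared++;
        } finally {
            mutex.unlock();     // surrender the mutex when the task is done
        }
    }

    static int value() {
        mutex.lock();
        try {
            return shared;
        } finally {
            mutex.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) update(); });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(value()); // 4000: every increment was exclusive
    }
}
```

Releasing in a finally block guarantees the mutex is surrendered even if the task throws.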
1.3 Monitor/semaphore
A monitor and a binary semaphore express the same idea: an exclusively locked mutex. At any given time, only one thread can hold a monitor. A thread that needs the lock must enter the monitor, and all other threads attempting to enter a locked monitor are suspended until the first thread exits. These suspended threads are called waiting threads. A thread that owns a monitor may re-enter the same monitor if it wishes (reentrancy).
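The reentrancy described above can be seen when one synchronized method calls another on the same object: the thread re-enters a monitor it already owns without deadlocking. A minimal sketch (the method names are illustrative):

```java
public class ReentrantMonitorDemo {
    // Both methods lock the same monitor (this). A thread already holding
    // the monitor may re-enter it freely.
    public synchronized int outer() {
        return inner() + 1;   // re-enters the monitor the thread already owns
    }

    public synchronized int inner() {
        return 41;
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantMonitorDemo().outer()); // 42
    }
}
```

If monitors were not reentrant, the call to inner() would block forever waiting for a lock the calling thread itself holds.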
1.4 the CAS operation
CAS has three operands: the memory value V, the expected old value A, and the new value B. The memory value V is changed to B if and only if V equals A; otherwise the operation fails and the current value of V is returned. This is an optimistic locking approach: it assumes that no other thread will change the value before the update completes. synchronized, by contrast, is a pessimistic lock: it assumes that other threads will modify the data before it can, so it locks first. Pessimistic locking is comparatively inefficient. Let's take a look at how AtomicInteger uses CAS to implement atomic operations.
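AtomicInteger's increment can be sketched as an explicit CAS retry loop; this is a simplified version of what the JDK does internally (the real implementation uses Unsafe/VarHandle, but the loop structure is the same):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Increment via a CAS loop: read the current value (A), compute the
    // new value (B), and try to swap; retry if another thread won the race.
    static int incrementAndGet(AtomicInteger v) {
        for (;;) {
            int current = v.get();          // expected value A
            int next = current + 1;         // new value B
            if (v.compareAndSet(current, next)) { // succeeds iff V == A
                return next;
            }
            // another thread changed V first; loop and retry optimistically
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        System.out.println(incrementAndGet(counter)); // 1
        System.out.println(incrementAndGet(counter)); // 2
    }
}
```

No thread is ever blocked here; a losing thread simply retries, which is the essence of the optimistic strategy.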
1.5 Reordering
Compilers and processors reorder instructions during compilation and execution to improve performance: reordering increases instruction-level parallelism and therefore throughput. For multithreaded programs, however, reordering may cause the program to produce results we do not expect. Reordering happens at two levels, the compiler and the processor; processor reordering includes instruction-level reordering and memory-system reordering.
1.6 The Java memory model
In Java, all variables (instance fields, static fields, and array elements, but not local variables or method parameters) are stored in main memory. Each thread has its own working memory, which holds copies of the main-memory variables the thread uses. All of a thread's operations on variables must take place in its working memory; a thread cannot read or write variables in main memory directly. Different threads cannot access each other's working memory, so the transfer of variable values between threads always goes through main memory.
Thread-safety keywords and classes in Java
These mainly include: synchronized, volatile, ThreadLocal, Lock, and Condition.
2.1 volatile
Function:
1) Ensure that changes to the variable are immediately written back to main memory, and that the variable is re-read from main memory before each use
2) Forbid instruction reordering optimization
The volatile keyword does not guarantee that compound operations on shared data are correct in a multithreaded environment. It guarantees visibility, not atomicity: use it when all threads must be notified immediately after a state change. Visibility is ensured by flushing the variable's value to and from main memory.
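The classic use of this visibility guarantee is a stop flag read by one thread and written by another. A minimal sketch (the helper method names are illustrative):

```java
public class VolatileFlagDemo {
    // Without volatile, the worker could cache 'stop' in a register and
    // loop forever; volatile forces every read to see the latest write.
    private static volatile boolean stop = false;

    static void requestStop() { stop = true; }   // visible to all threads
    static boolean isStopped() { return stop; }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until another thread sets the flag
            }
            System.out.println("worker saw the stop flag");
        });
        worker.start();
        Thread.sleep(100);   // let the worker spin for a moment
        requestStop();       // the worker now terminates promptly
        worker.join();
    }
}
```

Note that volatile suffices here only because a single boolean write is involved; compound operations such as count++ still need synchronized or an Atomic class.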
synchronized and final in Java also guarantee visibility.
synchronized: before a thread may use a locked variable, it must clear the variable's value from its working memory and re-read it from main memory; before unlocking, it must write the variable's value back to main memory. This ensures visibility.
final: once a field modified by final has been initialized in the constructor, and the constructor does not leak the this reference, the value of the final field is visible to other threads and can be read correctly without synchronization.
2.2 synchronized
Declaring a block of code synchronized serves two purposes: it makes the code atomic and makes its effects visible. Without the visibility provided by the synchronization mechanism, threads might see stale or inconsistent values of shared variables, which can cause many serious problems.
Principle: when a thread acquires a lock, it first invalidates its local cache, ensuring that variables are loaded directly from main memory. Before the thread releases the lock, it flushes its cache, forcing any changes it has made to be written back to main memory. This ensures that two threads synchronizing on the same lock see the same values of the variables modified in the synchronized block.
Synchronized release is managed by the JVM itself.
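The principle above can be illustrated with a plain (non-volatile) field whose reads and writes both go through the same monitor: a reader that locks after the writer has unlocked is guaranteed to see the written value. A minimal sketch:

```java
public class SyncVisibilityDemo {
    private int value;   // not volatile; visibility comes from the lock

    // Lock release flushes the write to main memory; lock acquisition
    // re-reads from main memory, so readers see the latest value.
    public synchronized void set(int v) { value = v; }
    public synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        SyncVisibilityDemo demo = new SyncVisibilityDemo();
        Thread writer = new Thread(() -> demo.set(42));
        writer.start();
        writer.join();
        System.out.println(demo.get()); // 42, guaranteed visible
    }
}
```

Both threads synchronize on the same lock (the demo instance), which is what makes the guarantee hold.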
Existing problems:
1) A thread waiting to acquire a lock cannot be interrupted
2) You cannot poll for a lock or give up if you are unwilling to wait (there is no tryLock equivalent)
3) synchronized requires that a lock be released in the same stack frame in which it was acquired. This is fine in most cases (and interacts well with exception handling), but there are situations where non-block-structured locking is more appropriate.
2.3 Lock
Lock is implemented in Java code at the Java level, not by the JVM. Its implementations include ReentrantLock and ReadWriteLock, and it is essentially built on the AbstractQueuedSynchronizer (AQS) class. Lock provides many ways to acquire a lock, such as tryLock and lockInterruptibly. Releasing the lock is managed by the Java developer.
In terms of performance, synchronized is the more reasonable choice when resource contention is low, but when contention is heavy its performance degrades rapidly, while Lock remains relatively stable.
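The capabilities listed above that synchronized lacks, such as a timed, give-up-able acquisition, can be sketched with ReentrantLock.tryLock (the method name doWorkIfAvailable is illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // Attempt the lock without waiting forever; give up after the timeout.
    static boolean doWorkIfAvailable() {
        boolean acquired = false;
        try {
            acquired = lock.tryLock(50, TimeUnit.MILLISECONDS);
            if (acquired) {
                // ... critical work on the shared resource ...
            }
            return acquired;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // the wait was interruptible
            return false;
        } finally {
            if (acquired) {
                lock.unlock(); // release is the developer's responsibility
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(doWorkIfAvailable()); // true: the lock was free
    }
}
```

Unlike synchronized, a thread blocked in tryLock(timeout) can be interrupted, and a thread unwilling to wait simply gets false back.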
2.4 Condition
Condition factors the Object monitor methods (wait, notify, and notifyAll) out into distinct objects so that they can be combined with any Lock implementation, providing multiple wait-sets per object. Lock replaces the use of synchronized methods and statements, and Condition replaces the use of the Object monitor methods. A Condition instance is intrinsically bound to a Lock; to obtain a Condition instance for a particular Lock instance, call its newCondition() method.
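The replacement of wait/notifyAll by await/signalAll can be sketched as follows (the flag and method names are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition(); // bound to this lock
    private boolean done = false;

    // Replaces Object.wait(): the lock must be held, and the condition is
    // re-checked in a loop to guard against spurious wakeups.
    public void awaitDone() throws InterruptedException {
        lock.lock();
        try {
            while (!done) {
                ready.await();   // atomically releases the lock and waits
            }
        } finally {
            lock.unlock();
        }
    }

    // Replaces Object.notifyAll().
    public void signalDone() {
        lock.lock();
        try {
            done = true;
            ready.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public boolean isDone() {
        lock.lock();
        try {
            return done;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ConditionDemo demo = new ConditionDemo();
        Thread waiter = new Thread(() -> {
            try {
                demo.awaitDone();
                System.out.println("woken up");
            } catch (InterruptedException ignored) { }
        });
        waiter.start();
        Thread.sleep(50);
        demo.signalDone();
        waiter.join();
    }
}
```

Because a single Lock can hand out several Condition instances (e.g. notFull and notEmpty in a bounded buffer), signals can target exactly the waiters that care, which Object monitors cannot do.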
2.5 ThreadLocal
Thread-local variables.
ThreadLocal is useful when each thread needs its own copy of the same variable, starting from the same initial value: every thread works on a fresh copy of that variable.
Application scenario: ThreadLocal works best when many threads each need to use the same kind of object repeatedly and the object has the same initial value for every thread.
In essence, each thread maintains a map whose key is the ThreadLocal instance and whose value is whatever we set. Each thread reads the value from its own map, so there is no thread-safety problem. In general, the ThreadLocal variable itself never changes: it simply acts as a key and provides each thread with an initial value. We could implement such a mechanism ourselves, but the JDK already does it for us.
ThreadLocal provides each thread that uses it with a separate copy of the variable, so that each thread can change its own copy independently without affecting the copies held by other threads. From the thread's point of view, the target variable is local to the thread, which is what the "Local" in the class name means.
Four methods of ThreadLocal:
void set(T value): sets the current thread's value of the thread-local variable
T get(): returns the current thread's value of the thread-local variable
void remove(): removes the current thread's value of the thread-local variable, reducing memory usage
protected T initialValue(): returns the initial value of the thread-local variable; it is protected, clearly designed to be overridden by subclasses.
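The four methods above can be sketched in one small example; ThreadLocal.withInitial is a factory shorthand for overriding initialValue() (the counter is illustrative):

```java
public class ThreadLocalDemo {
    // Each thread gets its own copy, starting from the same initial value.
    private static final ThreadLocal<Integer> counter =
            ThreadLocal.withInitial(() -> 0);   // overrides initialValue()

    static int incrementAndGet() {
        int next = counter.get() + 1; // reads this thread's copy
        counter.set(next);            // writes this thread's copy only
        return next;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) incrementAndGet();
            System.out.println(Thread.currentThread().getName()
                    + " -> " + counter.get()); // each thread prints 3
            counter.remove();   // free this thread's map entry
        };
        Thread a = new Thread(task, "A");
        Thread b = new Thread(task, "B");
        a.start(); b.start();
        a.join(); b.join();
    }
}
```

Both threads count to 3 independently: neither sees the other's increments, because each reads and writes only its own copy.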
Thread-safe implementation
3.1 Mutually exclusive synchronization
When multiple threads access shared data, ensure that only one thread uses it at any one time.
Critical sections, mutexes, and semaphores are all means of achieving this kind of synchronization.
The most basic mutual-exclusion construct in Java is synchronized, which compiles into the bytecode instructions monitorenter and monitorexit. Each instruction takes a reference parameter specifying the object to lock or unlock. The lock also carries a counter recording how many times it has been acquired; it must be released the same number of times before it returns to the unlocked state.
Java threads are mapped onto the operating system's native threads, so both blocking and waking a thread require help from the operating system. This involves a switch from user mode to kernel mode, which is very time-consuming, making synchronized a heavyweight operation in the Java language. As a mitigation, a spin wait is performed before asking the OS to block, to avoid frequent switches into kernel mode.
3.2 Non-blocking synchronization
The main cost of mutual exclusion is the performance penalty of blocking and waking threads, so it is often called blocking synchronization (a pessimistic concurrency strategy). As hardware instruction sets have evolved, we have another option: an optimistic concurrency strategy based on conflict detection. In outline: perform the operation; if no other thread contended for the shared data, it succeeds; if there was contention, take a compensating measure (most commonly, retry until it succeeds). Because many implementations of this optimistic strategy do not need to suspend threads, this kind of synchronization is known as non-blocking synchronization.
3.3 no synchronization
Some of the code is inherently thread-safe and does not require synchronization.
1) Reentrant code: pure code that does not rely on data stored on the heap or on shared system resources, uses only state passed in through its parameters, does not call non-reentrant methods, and returns predictable results.
2) Thread-local storage: the visibility of the shared data is limited to a single thread, so no synchronization is needed to prevent data races between threads. Thread-local storage can be implemented with the java.lang.ThreadLocal class.
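The reentrant-code case in 1) can be illustrated with a pure function: it depends only on its parameters and touches no shared state, so any number of threads may call it concurrently without locks (gcd here is just an illustrative example):

```java
public class ReentrantCodeDemo {
    // Reentrant: uses only its parameters and local variables, calls no
    // non-reentrant methods, and always returns a predictable result.
    static int gcd(int a, int b) {
        while (b != 0) {
            int t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcd(48, 18)); // 6, the same on every call
    }
}
```

Because the method is inherently thread-safe, it needs none of the synchronization machinery discussed in 3.1 and 3.2.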