One, foreword
Regardless of the language, concurrent programming is an advanced topic because it draws on so much background knowledge: the operating system, computer architecture, and more. Throughout the history of hardware development, there has been one core contradiction: the speed difference between the CPU, memory, and I/O devices. This is where all concurrency problems come from.
- CPU vs. memory: the CPU added caches to balance the speed difference with memory (but this brings visibility problems).
- CPU vs. I/O devices: the operating system added processes and threads to time-share the CPU, balancing the speed difference between the CPU and I/O devices (but this brings atomicity problems).
- Compiler: the compiler reorders instructions so that the cache can be used more efficiently (but this brings ordering problems).
Two, the three sources of concurrency problems
2.1 Caching causes visibility issues
2.1.1 Theory: From single-core CPU to multi-core CPU
Single-core: all threads execute on the same CPU, so data consistency between the CPU cache and memory is easy: since all threads share the same CPU cache, a write by one thread to the cache is necessarily visible to another thread;
Multi-core: each CPU has its own cache, so data consistency between the CPU caches and memory is no longer easy to resolve. When multiple threads execute on different CPUs, they operate on different CPU caches (the hardware-level solution is the MESI cache-coherence protocol).
Single-core CPU vs. multi-core CPU:
Goldfinger: Single-core: there is only one CPU and therefore only one cache; all threads go through this one cache, so once the cache is modified the change is visible to every thread, and there is no visibility problem. Multi-core: n CPUs means n caches; all n caches exchange data with the same main memory, but the data in one CPU's cache is not visible to the other CPUs. Single-threaded, one cache fetches data from main memory, computes, and writes back: no problem. Multi-threaded, two threads run on two different CPUs: each CPU cache fetches count from main memory, computes, and writes back, and each thread computes based on the count value in its own CPU cache while only its own cached copy changes, hence the visibility problem. Mapping CPU-cache-main memory onto the JMM's execution engine-working memory-main memory: under the JMM, one thread's working memory is not visible to another thread, which causes visibility problems.
2.1.2 Practice: Multithreading visibility issues
To test the visibility, write the following code:
public class Test {
    private static long count = 0;

    private void add10K() {
        int idx = 0;
        while (idx++ < 10000) {
            count += 1;
        }
    }

    public static long calc() throws InterruptedException {
        final Test test = new Test();
        // start two threads that each add 10,000 to count
        Thread th1 = new Thread(test::add10K);
        Thread th2 = new Thread(test::add10K);
        th1.start();
        th2.start();
        th1.join(); // suspend the main thread until th1 completes
        th2.join(); // suspend the main thread until th2 completes
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(calc()); // expected 20000; the actual value is usually less
    }
}
Running it gives a result like the following; the value is not 20000:
11022
Cause: atomicity is not guaranteed, so the code is not thread-safe. count += 1 is not an atomic operation; it is a three-step operation (read, add, write) that can be interrupted. Visibility and ordering are not guaranteed here either (volatile would guarantee those two, but not atomicity). Either synchronized or a CAS loop (roughly for + if(CAS)) can restore atomicity; note that CAS has the ABA problem, though that is not covered here.
Suppose threads A and B start at the same time. Both first read count = 0 into their respective CPU caches. After executing count += 1, each holds count = 1 in its own cache, and each then writes 1 back to memory, so memory ends up holding 1, not the expected 2. From then on, each thread keeps computing from the count value in its own CPU cache, so the final count is less than 20000. This is the cache visibility problem.
Conclusion: visibility means that a change made by one thread to a shared variable can be seen immediately by other threads.
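For reference, here is a minimal corrected version of the counter, as a sketch (the class name SafeTest is illustrative): making add10K() synchronized serializes the two threads, because both of them run the method on the same instance, so the read-modify-write of count can no longer interleave and calc() reliably returns 20000.

public class SafeTest {
    private static long count = 0;

    // both threads share the same instance, so the instance lock serializes them
    private synchronized void add10K() {
        int idx = 0;
        while (idx++ < 10000) {
            count += 1;
        }
    }

    public static long calc() throws InterruptedException {
        final SafeTest test = new SafeTest();
        Thread th1 = new Thread(test::add10K);
        Thread th2 = new Thread(test::add10K);
        th1.start();
        th2.start();
        th1.join();
        th2.join();
        return count; // now reliably 20000
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(calc());
    }
}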
2.2 Atomicity problems caused by thread switching
Definition of atomicity: We call atomicity the property of one or more operations being executed by the CPU without interruption.
- Time slices and time-shared CPU use: to improve CPU utilization, the operating system lets different processes use the CPU in different time slices. When a process performs an I/O operation, it marks itself as sleeping and yields the CPU to other processes, so CPU utilization rises. Conversely, if a process wants to perform an I/O operation but finds another process already doing I/O, it waits; when the earlier I/O finishes, the waiting process's I/O begins, so I/O utilization rises as well.
- Statements in high-level languages are usually not atomic: a single statement often requires multiple CPU instructions. For example, count += 1 needs at least three: instruction 1, load the variable count from memory into a CPU register; instruction 2, perform the +1 in the register; instruction 3, write the result back to memory (because of caching, the write may land in the CPU cache rather than main memory). See the bytecode sketch below.
- Operating-system task switching can occur at the end of any CPU instruction. Note: any CPU instruction, not any statement of a high-level language, so a single high-level statement can be interrupted midway.
Read disk - compute - write disk and read memory - compute - write memory behave the same way in this respect.
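As a concrete illustration (a sketch; the exact output depends on the compiler version), compiling a class with a static long count and running javap -c shows count += 1 expanding into several bytecodes, each of which in turn maps to one or more CPU instructions:

getstatic  #2   // read the static field count from memory
lconst_1        // push the constant 1
ladd            // add, on the operand stack / in registers
putstatic  #2   // write the result back to the field

A task switch between any two of these instructions breaks the atomicity of count += 1.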
Goldfinger: how time slicing causes atomicity problems under multithreading, in one breath. 1. If the operating system does not support time-shared CPU use, one process must run to completion before the next begins; a process's execution is never interrupted, each execution is atomic, and no atomicity problem exists. 2. If the operating system supports time-shared CPU use, then while a process reads or writes disk or memory it yields the CPU to other processes; if the process that gets the CPU modifies the same variable as the current process, a safety problem arises. The root cause is that the read-modify-write of a variable was interrupted. Mapping CPU-main memory onto the JMM's execution engine-working memory-main memory: under the JMM, threads' working-memory operations against main memory are not atomic, which causes atomicity problems.
2.3 Orderliness problems caused by compilation optimization
2.3.1 First, the classic case: double-checked locking to create a singleton
public class Singleton {
    private Singleton() {}
    private static Singleton instance;

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
Note that this lazy singleton does not declare the instance variable volatile to forbid instruction reordering, so even with the double if check it is not thread-safe.
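For reference, the standard fix, as a sketch: declaring instance volatile (JDK 1.5+ semantics) forbids the instruction reordering analyzed in the next two subsections, making the double check safe.

public class Singleton {
    private Singleton() {}
    // volatile forbids reordering the assignment to instance
    // ahead of the object's initialization
    private static volatile Singleton instance;

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}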
2.3.2 Second, we consider the new operation: instance = new Singleton();
instance = new Singleton();
The expected order: step 1, allocate a block of memory M; step 2, initialize the Singleton object in memory M; step 3, assign M's address to the instance variable. // the assignment comes last
If execution follows this order, there is no problem, single-threaded or multi-threaded. Single-threaded needs no explanation; multi-threaded is also safe, because instance is assigned only in step 3, after initialization. Example: suppose thread A has finished step 2 and is switched out in favor of thread B. Thread B calls getInstance(), finds instance still null, and blocks on the synchronized lock, which thread A has not yet released; thread B cannot enter at all, and by the time it can, instance is fully built. Following these three steps therefore guarantees thread safety.
2.3.3 Third, the actual optimized execution path: instance = new Singleton();
instance = new Singleton();
Step 1: allocate a block of memory M. Step 2: assign M's address to the instance variable. // Explanation: at this point the Singleton object in M has not been initialized, yet instance already holds the address &M. Once instance is non-null, any thread B calling getInstance() passes the first, unsynchronized null check and returns instance directly, without ever taking the lock; this is exactly where the lazy double-checked singleton is unsafe. Step 3: initialize the Singleton object in memory M. // Explanation: initialization happens last, after the address has already been published to instance.
Here, because the instructions underlying instance = new Singleton(); are reordered, thread A's second step is instance = &M. If thread A is switched out right after this assignment, thread B runs, and its first check finds instance != null (instance has already been assigned), so thread B skips its own three steps of instance = new Singleton() and uses instance directly; but what thread B sees in memory M is a Singleton object that has not been initialized yet.
Goldfinger: the ordering problem caused by instruction reordering under multithreading, in one breath (using the lazy singleton's double if check as the example). 1. Without instruction reordering, there is no thread-safety problem, single-threaded or multi-threaded: instance is assigned only in step 3, after initialization, so even if thread A is preempted mid-construction, instance is still null and thread B either blocks on the synchronized lock or builds the object properly. 2. With instruction reordering, the instructions underlying instance = new Singleton(); are reordered so that thread A's second step becomes instance = &M; if thread A is switched out right after that assignment, thread B's first, unsynchronized check sees instance != null, so thread B skips its own three steps of instance = new Singleton() and uses the object in memory M, which has not been initialized yet. Mapping CPU-cache-main memory onto the JMM's execution engine-working memory-main memory: under the JMM, the execution engine's instruction reordering across threads causes ordering problems.
Three, how to solve visibility and ordering problems in Java? (JMM: Java Memory Model)
3.1 Visibility and ordering
The root cause: caching leads to visibility problems; compilation optimization leads to ordering problems.
Workaround: disable caching and compilation optimization. If caching and compilation optimization were disabled everywhere, program performance would suffer, so we disable them on demand.
3.2 The JMM deals with visibility and ordering
The JMM (Java Memory Model) addresses the above issues primarily with the three keywords volatile, synchronized, and final, as well as the eight happens-before rules.
3.2.1 Volatile: Ensures visibility
To ensure visibility, disable the cache for the variable and force the CPU to fetch it directly from main memory whenever it is needed. Since there is only one main memory and it is visible to all threads, operating on main memory guarantees that changes are visible to all threads, which solves the visibility problem. However, you cannot disable caching for all variables; that would be far too inefficient.
Java specifies that reads and writes of variables declared volatile must operate on main memory. But which variables need the volatile keyword? That is the Java programmer's job: the variables used in multithreaded critical sections, usually only one or a few.
Volatile guarantees visibility: it tells the compiler that reads and writes of this variable must go to main memory rather than through the CPU cache.
public class VolatileExample {
    int x = 0;
    volatile boolean v = false;

    public void writer() {
        x = 42;
        v = true; // volatile write: forced out to main memory
    }

    public void reader() {
        if (v == true) { // volatile read: forced to read main memory
            // what is x here?
            System.out.println(x);
        }
    }
}
Prior to JDK 1.5, x = 0 could be observed, because the variable x might still be cached by a CPU, causing a visibility problem. In JDK 1.5 the semantics of volatile were strengthened using the happens-before rules: x = 42 happens-before the volatile write v = true (program order), which happens-before the volatile read of v (volatile rule), which happens-before the read of x (program order); by transitivity, once the reader sees v == true it must also see x == 42.
3.2.2 The happens-before rules (ordering as defined in the JVM: the result of an earlier operation is visible to later operations)
- Program order rule: within a thread, each operation happens-before any subsequent operation in program order. (Explanation: within a single thread, the visible order of effects matches the written order.)
- Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that volatile variable. (Explanation: volatile enforces ordering; this is one of the eight happens-before rules.)
- Transitivity: if A happens-before B, and B happens-before C, then A happens-before C. (Explanation: nothing special to add.)
- Monitor lock rule: unlocking a lock happens-before every subsequent locking of that same lock. (Explanation: the previous holder must unlock before the next holder can lock; to guarantee the unlock actually happens, it is usually placed in finally.)
- Thread start rule: if thread A calls thread B's start() method (that is, starts thread B inside thread A), then that start() happens-before every operation in thread B. (Explanation: start() comes before everything the started thread does.)
- Thread termination rule: if thread A calls thread B's join() and it returns successfully, then every operation in thread B happens-before the return of that join(). (Explanation: join() returns after all of B's operations.)
- Thread interruption rule: a call to Thread.interrupt() happens-before the interrupted thread's code detects that the interrupt occurred. (Explanation: interrupt() comes before detection of the interrupt.)
- Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method. (Explanation: construction must finish before finalization begins.)
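A minimal sketch illustrating the program order, thread start, and thread termination rules together (the class name is illustrative):

public class HappensBeforeDemo {
    static int x = 0;

    public static void main(String[] args) throws InterruptedException {
        x = 42; // program order rule: happens-before t.start() below
        Thread t = new Thread(() -> {
            System.out.println(x); // thread start rule + transitivity: guaranteed to see 42
            x = 100;
        });
        t.start();
        t.join(); // thread termination rule: every action in t happens-before join() returns
        System.out.println(x); // guaranteed to print 100
    }
}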
3.2.3 Final Keyword (immutable)
At the JVM level there are five degrees of thread safety: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-hostile. Immutable is the highest level of thread safety. A variable modified with final is a constant, which is how immutability is achieved; the original intent of final on a variable is to tell the compiler: this variable never changes after construction, so feel free to optimize.
The "this" escape problem:

public class FinalFieldExample {
    final int x;
    int y;

    public FinalFieldExample() {
        x = 3;
        y = 4;
        // "this" escapes before the constructor finishes: another thread
        // reading Global.obj may see the object with x not yet written
        Global.obj = this; // Global.obj: some globally visible reference
    }
}
Since JDK 1.5, the Java memory model constrains the reordering of writes to final fields. Now, as long as the constructor is written correctly and "this" does not escape from it, final fields behave as expected.
Four, how to solve atomicity problems in Java (mutual exclusion)
4.1 Guaranteeing atomicity (two points: the critical section is entered only after locking, and the lock is released only after the critical-section code completes)
1. Concept: The ability of one or more operations to execute without interruption on the CPU is called atomicity.
2. Cause: Thread switching.
3. The operating system relies on CPU interrupts to switch threads, so disabling CPU interrupts can prevent thread switching. (1) On a 32-bit CPU, a write to a long is split into two operations: writing the high 32 bits and writing the low 32 bits. (2) Single-core: only one thread executes at any moment; disabling CPU interrupts means no thread scheduling, that is, no thread switching, so the thread holding the CPU runs uninterrupted and the two writes are atomic: either both execute or neither does. (3) Multi-core: two threads may execute at the same moment, one on CPU-1 and one on CPU-2; disabling CPU interrupts only guarantees that each CPU's thread runs continuously, not that only one thread in total is executing, so it cannot guarantee atomicity.
4. Solution: the mutex lock (mutual exclusion).
Goldfinger: Cause: the atomicity of an operation is broken by thread switching. Solution: prevent thread switching during the operation; at the hardware level this means disabling CPU interrupts, since the operating system relies on CPU interrupts to switch threads. Java achieves the same effect with mutual exclusion, in two ways: the blocking synchronized lock and the non-blocking CAS.
How do synchronized (blocking) and CAS (non-blocking) guarantee atomicity?
- Entry is guarded: both CAS and synchronized guarantee this. A CAS operation takes effect only when the expected value still equals the actual value in memory; synchronized lets a thread enter only after it acquires the lock.
- Exit is guarded: the lock is released only after the critical-section code completes. With CAS, if another thread changed the data in the meantime, the update does not take effect and is retried, so a half-done result never escapes and outsiders cannot slip in; synchronized guarantees the same: the holder cannot release the lock until it finishes, and no one outside can get in before that.
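For illustration, a minimal sketch of the CAS retry pattern, built on java.util.concurrent.atomic.AtomicLong (the class name CasCounter is illustrative): the loop is exactly the "for + if(CAS)" shape described above.

import java.util.concurrent.atomic.AtomicLong;

public class CasCounter {
    private final AtomicLong count = new AtomicLong(0);

    // non-blocking add: retry until the compare-and-set succeeds, i.e. until
    // the expected value still equals the actual value at the moment of the swap
    public void add10K() {
        for (int i = 0; i < 10000; i++) {
            long current;
            do {
                current = count.get();  // read the current value
            } while (!count.compareAndSet(current, current + 1)); // the "if(CAS)"
        }
    }

    public long get() {
        return count.get();
    }
}

In practice count.getAndIncrement() does the same thing; the explicit loop is shown only to expose the retry structure.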
4.2 Lock model theory to solve atomicity problem
4.2.1 Simple lock model
A synchronized lock (or CAS) guarantees two things: first, the critical section is entered only after locking; second, the lock is released only after the critical-section code completes.
An explanation of the above locking model:
First, the critical section is entered only after locking: before entering the critical section, a thread tries lock(); if it succeeds, it enters the critical section, and we say the thread holds the lock; otherwise it waits until the lock-holding thread unlocks.
Second, the lock is released only after the critical-section code completes: the thread holding the lock executes the critical-section code and then calls unlock().
4.2.2 Improved Lock Model (Adding Lock objects and Protected Resources)
An explanation of the improved lock model shown above:
First, in the critical section we mark a protected resource R (R for Resource); second, to protect resource R we create a lock LR for it; third, with the newly created lock LR, we add a lock(LR) operation when entering the critical section and an unlock(LR) operation when leaving it.
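A minimal sketch of this improved model in Java terms (the names lockR and r are illustrative): synchronized plays the role of lock(LR) and unlock(LR) around the critical section that operates on R.

class Counter {
    private final Object lockR = new Object(); // LR: the lock created for resource R
    private int r;                             // R: the protected resource

    void update() {
        synchronized (lockR) { // lock(LR) when entering the critical section
            r += 1;            // critical section: operate on R
        }                      // unlock(LR) when leaving (automatic with synchronized)
    }
}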
4.3 Practice lock model, five uses of synchronized
public class X {
    // modifies a non-static method: the lock is "this"
    synchronized void foo() {
        // critical section
    }

    // modifies a static method: the lock is X.class
    synchronized static void bar() {
        // critical section
    }

    // modifies a code block: the lock is obj
    private final Object obj = new Object();
    void baz() {
        synchronized (obj) {
            // critical section
        }
    }
}
Synchronized modifies static methods equivalent to
class X {
    static void bar() {
        synchronized (X.class) {
            // critical section
        }
    }
}
Synchronized modifies non-static methods equivalent to
class X {
    void bar() {
        synchronized (this) {
            // critical section
        }
    }
}
Summary: when synchronized modifies a static method or a code block locked on the class, the lock is the class's .class bytecode object; when it modifies a non-static method, the lock is the current instance, this; when it modifies a code block, the lock is whatever object is designated, which can be any object.
Note that for methods the lock is implicit: either this or the class's bytecode object; you cannot choose an arbitrary Object.
There are five kinds in total, summarized below:

What synchronized modifies            | The lock object
non-static (instance) method          | this, the current instance
static method                         | X.class, the class's bytecode object
code block: synchronized (this)       | this
code block: synchronized (X.class)    | X.class
code block: synchronized (obj)        | the designated object obj (any object)
4.4 Additional 1: Solve count+1 problem with synchronized (1:1 to N:1)
4.4.1 The relationship between lock and protected resource is 1:1 (usage level; one lock protects one resource; the simple, correct case)
class SafeCalc {
    long value = 0L;

    // even the read needs the lock: on a 32-bit JVM, reading a 64-bit
    // long/double is not guaranteed to be a single atomic operation
    synchronized long get() {
        return value;
    }

    // value += 1 is a read-modify-write; synchronized atomizes it
    synchronized void addOne() {
        value += 1;
    }
}
The general model is as follows:
Two elements of thread concurrency are present: multiple threads and synchronized locking. Here one lock (this) guards the single variable value and atomizes access to it, so there is no concurrency safety problem.
4.4.2 The relationship between locks and protected resources is N:1 (usage level; two locks protect one resource, so the two critical sections are not mutually exclusive)
Here the relationship between locks and the protected resource is N:1: two different locks protect the same resource.
Rewrite the above code as follows:
class SafeCalc {
    static long value = 0L; // must be static for the static method to compile

    synchronized long get() {        // locks this
        return value;
    }

    synchronized static void addOne() { // locks SafeCalc.class
        value += 1;
    }
}
The general model is as follows:
Note: the problem here is that two locks (this and SafeCalc.class) protect one resource. The two critical sections are therefore not mutually exclusive, and addOne()'s modification of value is not guaranteed to be visible to get(), which leads to concurrency problems.
Again two elements: multiple threads and synchronized locks. Here two different locks guard the one variable value, so they provide no mutual exclusion at all, and concurrency safety problems exist.
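A minimal fix, as a sketch: put both methods under the same lock, for example by making get() static as well, so that both methods lock SafeCalc.class.

class SafeCalc {
    static long value = 0L;

    synchronized static long get() {    // now also locks SafeCalc.class
        return value;
    }

    synchronized static void addOne() { // locks SafeCalc.class
        value += 1;
    }
}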
4.4.3 Lock : resource = 1:N, using one lock to protect multiple resources (usage level, less common; an advanced use of synchronized lock objects; a differentiator in interviews; important)
4.4.3.1 One Lock Protects Multiple Resources: Protects unrelated resources
There is no relation between the account balance and the account password, so each can get its own lock.
class Account {
    // lock for the balance
    private final Object balLock = new Object();
    // account balance
    private Integer balance;
    // lock for the password
    private final Object pwLock = new Object();
    // account password
    private String password;

    // withdraw money
    void withdraw(Integer amt) {
        synchronized (balLock) {
            if (this.balance > amt) {
                this.balance -= amt;
            }
        }
    }

    // query the balance
    Integer getBalance() {
        synchronized (balLock) {
            return balance;
        }
    }

    // change the password
    void updatePassword(String pw) {
        synchronized (pwLock) {
            this.password = pw;
        }
    }

    // query the password
    String getPassword() {
        synchronized (pwLock) {
            return password;
        }
    }
}
4.4.3.2 The problem: one lock protecting multiple mutually related resources (the bank transfer problem)
class Account {
    private int balance;

    // locks this, i.e. only the source account
    synchronized void transfer(Account target, int amt) {
        if (this.balance > amt) {
            this.balance -= amt;
            target.balance += amt;
        }
    }
}
The general model is as follows:
This lock (this) can protect your own balance, this.balance, but it cannot protect someone else's balance, target.balance, just as you cannot use your own lock to protect someone else's property, or your own ticket to reserve someone else's seat.
4.4.3.3 Solution: A lock protects multiple resources related to each other (bank transfer problem)
Modify the code again (note: the code below serializes all transfers):
class Account {
    private int balance;

    // lock on Account.class: one lock shared by all accounts
    void transfer(Account target, int amt) {
        synchronized (Account.class) {
            if (this.balance > amt) {
                this.balance -= amt;
                target.balance += amt;
            }
        }
    }
}
Conclusion: first analyze the relationship among the multiple resources. If the resources are unrelated, it is easy: give each resource its own lock. If the resources are related, choose a lock of larger granularity that covers all the related resources. The essence of atomicity: the intermediate state of an operation is invisible to the outside.
Goldfinger: What if you use a lock to protect multiple resources? The first step is to analyze the relationships between multiple resources. If there is no relationship between resources, it is easy to handle, one lock per resource is ok. If there is an association between resources, choose a lock with a larger granularity that covers all related resources.
4.5 Additional 2: Deadlock problem
4.5.1 How deadlock arises: the bank transfer problem
The synchronized (Account.class) solution above serializes every transfer in the system; in practice we cannot afford to lock the entire bytecode object.
In real life, let’s say the bank, when it gives me the money to transfer, goes to the file rack and gets both the outgoing and incoming books, and then does the money transfer. Three things can happen
(1) If there happen to be outgoing books and incoming books on the file shelf, take them away at the same time;
(2) If only one of the outgoing and incoming books is added to the document, the teller takes the one on the shelf and waits for the other teller to return the other book;
(3) If there is neither an out book nor an in book, the teller will wait for both books to be returned.
Back in code, this corresponds to locking the two resources this.balance and target.balance with synchronized(this) and synchronized(target) respectively, rather than with synchronized(Account.class). Locking the whole class is of mostly theoretical interest and has no real-world counterpart; what reality simulates is taking the two ledgers, and even then one generally does not grab everything at once.
So we changed the code to look like this:
class Account {
    private int balance;

    void transfer(Account target, int amt) {
        // lock the outgoing account first
        synchronized (this) {
            // then lock the incoming account
            synchronized (target) {
                if (this.balance > amt) {
                    this.balance -= amt;
                    target.balance += amt;
                }
            }
        }
    }
}
Code like this introduces a new problem: deadlock.
The general model is:
4.5.2 Deadlock: taking two locks in the bank transfer can deadlock
Goldfinger 1: locking twice creates the conditions for deadlock. If taking two locks can deadlock, do not write it that way in code.
Concept: A group of threads competing for resources wait on each other, resulting in a “permanent” block.
class Account {
    private int balance;

    void transfer(Account target, int amt) {
        synchronized (this) {          // ① lock the outgoing account
            synchronized (target) {    // ② lock the incoming account
                if (this.balance > amt) {
                    this.balance -= amt;
                    target.balance += amt;
                }
            }
        }
    }
}
(1) Suppose thread T1 performs the transfer from account A to account B: accountA.transfer(accountB); (2) simultaneously, thread T2 performs the transfer from account B to account A: accountB.transfer(accountA); (3) when T1 and T2 execute the code at ① at the same time, T1 acquires account A's lock (for T1, this is account A) and T2 acquires account B's lock (for T2, this is account B); (4) when T1 executes the code at ②, it tries to acquire account B's lock, finds it held (by T2), and starts waiting; (5) when T2 tries to acquire account A's lock, it finds it held (by T1), and also starts waiting. T1 and T2 wait for each other forever: deadlock.
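For illustration, a minimal driver for this scenario, as a sketch (whether it actually deadlocks depends on thread scheduling; Account is the class defined above):

public class DeadlockDemo {
    public static void main(String[] args) {
        Account a = new Account();
        Account b = new Account();
        new Thread(() -> a.transfer(b, 10)).start(); // T1: locks a at ①, wants b at ②
        new Thread(() -> b.transfer(a, 10)).start(); // T2: locks b at ①, wants a at ②
        // if both threads pass ① before either reaches ②, they block forever
    }
}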
Four conditions for deadlock: (1) mutual exclusion: shared resources X and Y can each be held by only one thread; (2) hold and wait: thread T1 holds shared resource X while waiting for shared resource Y, and does not release X; (3) no preemption: other threads cannot forcibly take resources held by T1; (4) circular wait: T1 waits for a resource held by T2, and T2 waits for a resource held by T1.
4.5.3 Deadlock solutions (break one of the four conditions)
4.5.3.1 Most Common: Destroy possession and wait conditions (apply for all resources at one time)
import java.util.ArrayList;
import java.util.List;

class Allocator {
    private List<Object> als = new ArrayList<>();

    // apply for both resources at once; synchronized atomizes the check-and-add
    synchronized boolean apply(Object from, Object to) {
        if (als.contains(from) || als.contains(to)) {
            return false;
        } else {
            als.add(from);
            als.add(to);
        }
        return true;
    }

    // release both resources; synchronized atomizes the removal
    synchronized void free(Object from, Object to) {
        als.remove(from);
        als.remove(to);
    }
}

class Account {
    // actr should be a singleton
    private Allocator actr;
    private int balance;

    void transfer(Account target, int amt) {
        // spin until both the outgoing and incoming accounts are reserved at once
        while (!actr.apply(this, target))
            ;
        try {
            // lock the outgoing account
            synchronized (this) {
                // lock the incoming account
                synchronized (target) {
                    if (this.balance > amt) {
                        this.balance -= amt;
                        target.balance += amt;
                    }
                }
            }
        } finally {
            actr.free(this, target);
        }
    }
}
Explain the code above:
- while (!actr.apply(this, target)); together with actr.free(this, target); wraps the deadlock-prone synchronized(this){ synchronized(target){ ... } } region. apply() is itself synchronized, so only one thread at a time can successfully reserve both accounts; other threads spin until this thread finishes and frees the resources. The deadlock-prone code then runs with both resources already reserved (a single holder cannot deadlock), and the resources are freed when it completes.
- The Allocator class pairs applying for resources, apply(), with releasing them, free(). The Account class holds a singleton Allocator (it must be a singleton: only one party may allocate resources). When an account transfers money, it first applies to the Allocator for both the outgoing and incoming accounts, then locks the two resources; after the transfer completes and the locks are released, it notifies the Allocator to free the two resources.
Problem: the spin wastes CPU. while (!actr.apply(this, target)); keeps re-applying for the outgoing and incoming accounts until it succeeds. If the critical section is very short this works, but if the operation takes a long time, the CPU burns many idle cycles spinning.
Solution (the wait-notify mechanism): when the condition a thread needs is not met (the outgoing and incoming ledgers are not both on the file rack), the thread blocks itself and enters the waiting state; when the condition is met (both ledgers are back on the rack at the same time), the waiting thread is notified and re-executes. A complete wait-notify cycle: a thread first acquires the mutex; when its condition is not met, it releases the mutex and enters the waiting state; when the condition is met, the waiting thread is notified and must reacquire the mutex.
4.5.3.2 Breaking the Non-preemption condition
Break no-preemption: a thread must be able to voluntarily release the resources it already holds when it cannot get the rest. synchronized cannot do this, but the Lock API in the java.util.concurrent (JUC) package can.
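A sketch of what breaking no-preemption looks like with java.util.concurrent.locks.ReentrantLock (tryLock with a timeout is real API; the 50 ms value and the overall structure here are illustrative): a thread that cannot get the second lock backs out and releases the first, instead of holding it forever.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class Account {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance;

    void transfer(Account target, int amt) throws InterruptedException {
        boolean done = false;
        while (!done) {
            if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (target.lock.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            if (this.balance > amt) {
                                this.balance -= amt;
                                target.balance += amt;
                            }
                            done = true;
                        } finally {
                            target.lock.unlock(); // give up the second lock
                        }
                    }
                } finally {
                    lock.unlock(); // give up the first lock too, so others can progress
                }
            }
            // neither lock is held here; ideally back off for a random time before retrying
        }
    }
}

Note that without a random back-off, two threads can keep retrying in lockstep and livelock, which is exactly the liveness problem described in section 5.2.2.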
4.5.3.3 Break the circular waiting condition and apply for resources in sequence
To break this condition, impose an order on the resources and have every thread apply for them in that order.
class Account {
    private int id; // each account has a unique id, used to order the locks
    private int balance;

    void transfer(Account target, int amt) {
        Account left = this;        // ①
        Account right = target;     // ②
        if (this.id > target.id) {  // ③ always lock the smaller id first
            left = target;          // ④
            right = this;           // ⑤
        }
        synchronized (left) {       // ⑥ lock in id order: left first,
            synchronized (right) {  //    then right
                if (this.balance > amt) {
                    this.balance -= amt;
                    target.balance += amt;
                }
            }
        }
    }
}
The original

synchronized (this) {
    synchronized (target) {

is modified to

synchronized (left) {
    synchronized (right) {
Applying for resources in order means defining a fixed order over the resources and having every thread follow it. Here the order is the account id: whichever of the two accounts has the smaller id becomes left and is locked first, and the one with the larger id becomes right and is locked second. Whether a thread transfers from A to B or from B to A, both threads lock the same account first, so a circular wait can never form.
4.6 Additional 3: Thread communication, synchronized wait queue: wait(), notify(), and notifyAll()
When a thread enters the critical section and finds that a condition it needs is not met, it must enter the waiting state. Note that the wait queue and the mutex are one-to-one: each mutex has its own independent wait queue.
First, when wait() is called, the current thread blocks and enters the wait queue, which is the mutex's wait queue. On entering the wait queue, the thread releases the mutex it holds; after that, other threads have a chance to acquire the lock and enter the critical section.
Second, when the condition is met, calling notify() tells a thread in the wait queue (the mutex's wait queue) that the condition has been met, and removes that thread from the wait queue.
Note: notify() only guarantees that the condition was met at the moment of notification. The notified thread generally does not execute at that exact moment, so by the time it runs, the condition may no longer hold (another competing thread may have taken it), and the notified thread must re-check. Furthermore, a notified thread must still reacquire the mutex before it can execute (it released the lock when it entered the wait queue); notify() merely moves the thread out of the wait queue, it does not hand it the lock.
The general model is as follows:
wait() puts a thread into the wait queue and blocks it; a waiting thread does not compete for the synchronized lock. notify() removes a thread from the wait queue and wakes it up to compete for the lock again.
So we rewrite the earlier example:
class Allocator {
    private List<Object> als = new ArrayList<>();

    // apply for both resources at once
    synchronized void apply(Object from, Object to) {
        // classic idiom: re-check the condition in a while loop after being woken up
        while (als.contains(from) || als.contains(to)) {
            try {
                wait(); // block and release the mutex instead of spinning
            } catch (Exception e) {
            }
        }
        als.add(from);
        als.add(to);
    }

    // release both resources, then wake up all waiting threads
    synchronized void free(Object from, Object to) {
        als.remove(from);
        als.remove(to);
        notifyAll(); // the added notifyAll()
    }
}
Goldfinger: compare with the earlier spin-based version:
// the earlier spin-based version: apply() returns false and the caller spins
synchronized boolean apply(Object from, Object to) {
    if (als.contains(from) || als.contains(to)) {
        return false;
    } else {
        als.add(from);
        als.add(to);
    }
    return true;
}

synchronized void free(Object from, Object to) {
    als.remove(from);
    als.remove(to);
}
Tip 1: Lock offers both fair and non-fair locks, while synchronized offers only non-fair locking. Lock implements fairness on top of AQS's synchronization queue, a different structure from the non-circular singly linked wait queue; synchronized has a wait queue as well. Tip 2: In Lock, the synchronization queue's FIFO order guarantees fair locking, and the wait queue's FIFO order guarantees fair wake-up from blocking. In synchronized, lock acquisition is preemptive and unfair, while the wait queue's FIFO order makes wake-up fair; being woken up, however, does not guarantee fair competition for the synchronized lock.
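For reference, a minimal sketch (ReentrantLock's boolean fairness constructor is real API):

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // fair lock: waiting threads acquire in roughly FIFO order via the AQS sync queue
    private final ReentrantLock fairLock = new ReentrantLock(true);
    // non-fair (the default): a newly arriving thread may barge ahead, like synchronized
    private final ReentrantLock unfairLock = new ReentrantLock();
}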
Five, summary (macro perspective)
5.1 The source of concurrency problems: there is shared data, and that data changes. In plain English: multiple threads reading and writing the same data at the same time.
5.2 Concurrency problems fall into three categories: safety problems, liveness problems, and performance problems
5.2.1 Safety problems
1. The essence of safety: the program behaves the way we expect.
2. Data race: multiple threads access the same data at the same time and at least one of them writes it; without protective measures, this leads to concurrency bugs.
3. Race conditions: The execution result of the program depends on the order in which the threads are executed.
4. Both problems can be solved with mutual exclusion, which has many implementations: the CPU provides mutual-exclusion instructions, and operating systems and programming languages provide lock APIs.
5.2.2 Liveness problems
1. Deadlock: threads wait for each other, and wait forever; technically, the threads are permanently blocked.
2. Livelock: the thread is not blocked, yet it still cannot make progress. (Fix: retry after waiting a random amount of time.)
3. Starvation: a thread cannot execute because it cannot obtain the resources it needs. Fixes: ensure sufficient resources, allocate resources fairly, and avoid letting lock-holding threads run for a long time.
5.2.3 Performance Problems
1. Use lock-free algorithms and data structures whenever possible
2, reduce lock holding time
3. Performance metrics: (1) throughput: the number of requests processed per unit time; (2) latency: the time from sending a request to receiving the response; (3) concurrency: the number of requests processed at the same time (as concurrency rises, latency usually rises too).
6. Interview Goldfinger (Fundamentals: Talk about what you know about concurrency)
6.1 Performance optimizations break the three threading properties
- CPU vs. memory: the CPU added caches to balance the speed difference with memory (but this brings visibility problems).
- CPU vs. I/O devices: the operating system added processes and threads to time-share the CPU, balancing the speed difference between the CPU and I/O devices (but this brings atomicity problems).
- Compiler: the compiler reorders instructions so that the cache can be used more efficiently (but this brings ordering problems).
6.2 Explain how caching causes visibility problems, how CPU time slices cause atomicity problems, and how instruction reordering causes ordering problems
Goldfinger: Single-core: there is only one CPU and therefore only one cache; all threads go through this one cache, so once the cache is modified the change is visible to every thread, and there is no visibility problem. Multi-core: n CPUs means n caches; all n caches exchange data with the same main memory, but the data in one CPU's cache is not visible to the other CPUs. Single-threaded, one cache fetches data from main memory, computes, and writes back: no problem. Multi-threaded, two threads run on two different CPUs: each CPU cache fetches count from main memory, computes, and writes back, and each thread computes based on the count value in its own CPU cache while only its own cached copy changes, hence the visibility problem. Mapping CPU-cache-main memory onto the JMM's execution engine-working memory-main memory: under the JMM, one thread's working memory is not visible to another thread, which causes visibility problems.
Goldfinger: how time slicing causes atomicity problems under multithreading, in one breath. 1. If the operating system does not support time-shared CPU use, one process must run to completion before the next begins; a process's execution is never interrupted, each execution is atomic, and no atomicity problem exists. 2. If the operating system supports time-shared CPU use, then while a process reads or writes disk or memory it yields the CPU to other processes; if the process that gets the CPU modifies the same variable as the current process, a safety problem arises. The root cause is that the read-modify-write of a variable was interrupted. Mapping CPU-main memory onto the JMM's execution engine-working memory-main memory: under the JMM, threads' working-memory operations against main memory are not atomic, which causes atomicity problems.
Goldfinger: the ordering problem caused by instruction reordering under multithreading, in one breath (using the lazy singleton's double if check as the example). 1. Without instruction reordering, there is no thread-safety problem, single-threaded or multi-threaded: instance is assigned only in step 3, after initialization, so even if thread A is preempted mid-construction, instance is still null and thread B either blocks on the synchronized lock or builds the object properly. 2. With instruction reordering, the instructions underlying instance = new Singleton(); are reordered so that thread A's second step becomes instance = &M; if thread A is switched out right after that assignment, thread B's first, unsynchronized check sees instance != null, so thread B skips its own three steps of instance = new Singleton() and uses the object in memory M, which has not been initialized yet. Mapping CPU-cache-main memory onto the JMM's execution engine-working memory-main memory: under the JMM, the execution engine's instruction reordering across threads causes ordering problems.
6.3 Java handling visibility and ordering issues (cause, resolution, volatile, happens-before, final, synchronized)
Root cause: caching leads to visibility problems; compilation optimization leads to ordering problems. Workaround: disable caching and compilation optimization, but only on demand; if both were disabled everywhere, program performance would suffer.
To ensure visibility, disable the cache for the variable and force the CPU to fetch it directly from main memory whenever it is needed. Since there is only one main memory and it is visible to all threads, operating on main memory guarantees that changes are visible to all threads, which solves the visibility problem. However, you cannot disable caching for all variables; that would be far too inefficient.
Java specifies that reads and writes of variables declared volatile must operate on main memory. But which variables need the volatile keyword? That is the Java programmer's job: the variables used in multithreaded critical sections, usually only one or a few.
Volatile guarantees visibility: it tells the compiler that reads and writes of this variable must go to main memory rather than through the CPU cache.
- Program order rule: within a thread, each operation happens-before any subsequent operation in program order. (Explanation: within a single thread, the visible order of effects matches the written order.)
- Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that volatile variable. (Explanation: volatile enforces ordering; this is one of the eight happens-before rules.)
- Transitivity: if A happens-before B, and B happens-before C, then A happens-before C. (Explanation: nothing special to add.)
- Monitor lock rule: unlocking a lock happens-before every subsequent locking of that same lock. (Explanation: the previous holder must unlock before the next holder can lock; to guarantee the unlock actually happens, it is usually placed in finally.)
- Thread start rule: if thread A calls thread B's start() method (that is, starts thread B inside thread A), then that start() happens-before every operation in thread B. (Explanation: start() comes before everything the started thread does.)
- Thread termination rule: if thread A calls thread B's join() and it returns successfully, then every operation in thread B happens-before the return of that join(). (Explanation: join() returns after all of B's operations.)
- Thread interruption rule: a call to Thread.interrupt() happens-before the interrupted thread's code detects that the interrupt occurred. (Explanation: interrupt() comes before detection of the interrupt.)
- Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method. (Explanation: construction must finish before finalization begins.)
At the JVM level there are five degrees of thread safety: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-hostile. Immutable is the highest level of thread safety. A variable modified with final is a constant, which is how immutability is achieved; the original intent of final on a variable is to tell the compiler: this variable never changes after construction, so feel free to optimize.
Additional: synchronized guarantees atomicity, ordering, and visibility.
6.4 Java handling atomicity issues (Why, How to solve them, two ways, and why atomicity is guaranteed)
Cause: The atomicity of the operation is broken because of thread switching
Solution: Disable CPU interrupts (the operating system relies on CPU interrupts for thread switching, so disabling CPU interrupts can prevent thread switching)
Java achieves mutual exclusion in two ways: the blocking synchronized lock and the non-blocking CAS
Why atomicity is guaranteed: how do synchronized (blocking) and CAS (non-blocking) guarantee it?
- Entry is guarded: both CAS and synchronized guarantee this. A CAS operation takes effect only when the expected value still equals the actual value in memory; synchronized lets a thread enter only after it acquires the lock.
- Exit is guarded: the lock is released only after the critical-section code completes. With CAS, if another thread changed the data in the meantime, the update does not take effect and is retried, so a half-done result never escapes and outsiders cannot slip in; synchronized guarantees the same: the holder cannot release the lock until it finishes, and no one outside can get in before that.
6.5 Additional 1: Three mappings between locks and resources (Lock: Resource =1:1 N:1 1:N)
Goldfinger: What if you use a lock to protect multiple resources? The first step is to analyze the relationships between multiple resources. If there is no relationship between resources, it is easy to handle, one lock per resource is ok. If there is an association between resources, choose a lock with a larger granularity that covers all related resources.
6.6 Additional 2: Deadlock Problem (Four conditions and three Solutions)
Goldfinger 1: locking twice creates the conditions for deadlock; if taking two locks can deadlock, do not write it that way in code. Three ways out: break the hold-and-wait condition (most common: apply for all resources at once); break the no-preemption condition (synchronized cannot do this, since you cannot take resources out of another thread's hands, but the JUC Lock API can); break the circular-wait condition (apply for resources in a fixed order). The mutual-exclusion condition cannot be broken (shared resources X and Y can each be held by only one thread; that cannot change).
Apply for all resources at once:
- while (!actr.apply(this, target)); together with actr.free(this, target); wraps the deadlock-prone synchronized(this){ synchronized(target){ ... } } region. apply() is itself synchronized, so only one thread at a time can successfully reserve both accounts; other threads spin until this thread finishes and frees the resources. The deadlock-prone code then runs with both resources already reserved (a single holder cannot deadlock), and the resources are freed when it completes.
- The Allocator class pairs applying for resources, apply(), with releasing them, free(). The Account class holds a singleton Allocator (it must be a singleton: only one party may allocate resources). When an account transfers money, it first applies to the Allocator for both the outgoing and incoming accounts, then locks the two resources; after the transfer completes and the locks are released, it notifies the Allocator to free the two resources.
Problem: the spin wastes CPU. while (!actr.apply(this, target)); keeps re-applying for the outgoing and incoming accounts until it succeeds. If the critical section is very short this works, but if the operation takes a long time, the CPU burns many idle cycles spinning.
Solution (the wait-notify mechanism): when the condition a thread needs is not met (the outgoing and incoming ledgers are not both on the file rack), the thread blocks itself and enters the waiting state; when the condition is met (both ledgers are back on the rack at the same time), the waiting thread is notified and re-executes. A complete wait-notify cycle: a thread first acquires the mutex; when its condition is not met, it releases the mutex and enters the waiting state; when the condition is met, the waiting thread is notified and must reacquire the mutex.
Break the circular-wait condition: define the order in which resources are applied for, and have every thread follow it.
The original

synchronized (this) {
    synchronized (target) {

is modified to

synchronized (left) {
    synchronized (right) {
Applying for resources in order means defining a fixed order over the resources and having every thread follow it. Here the order is the account id: whichever of the two accounts has the smaller id becomes left and is locked first, and the one with the larger id becomes right and is locked second. Whether a thread transfers from A to B or from B to A, both threads lock the same account first, so a circular wait can never form.
6.7 Additional 3: Thread communication, synchronized wait queue: wait(), notify(), and notifyAll()
- wait() puts a thread into the wait queue and blocks it; a waiting thread does not compete for the lock. notify() removes a thread from the wait queue and wakes it up to compete for the lock again.
- (1) Lock offers both fair and non-fair locks, while synchronized offers only non-fair locking. Lock implements fairness on top of AQS's synchronization queue, a different structure from the non-circular singly linked wait queue; synchronized has a wait queue as well. (2) In Lock, the synchronization queue's FIFO order guarantees fair locking, and the wait queue's FIFO order guarantees fair wake-up from blocking; in synchronized, lock acquisition is preemptive and unfair, while the wait queue's FIFO order makes wake-up fair, but being woken up does not guarantee fair competition for the synchronized lock.
6.8 The three categories of concurrency problems
Safety problems: 1. Essence: the program behaves the way we expect. 2. Data race: multiple threads access the same data at the same time and at least one writes it; without protection this causes concurrency bugs. 3. Race condition: the program's result depends on the order in which threads execute. 4. Both can be solved with mutual exclusion, which has many implementations: CPU mutual-exclusion instructions, plus lock APIs from operating systems and programming languages.
Liveness problems: 1. Deadlock: threads wait for each other forever; technically, permanent blocking. 2. Livelock: threads are not blocked but still make no progress (fix: retry after a random wait). 3. Starvation: a thread cannot run because it cannot obtain the resources it needs (fixes: sufficient resources, fair allocation, avoid long lock-holding).
Performance problems: 1. Use lock-free algorithms and data structures whenever possible. 2. Reduce lock-holding time. 3. Metrics: (1) throughput: the number of requests processed per unit time; (2) latency: the time from request to response; (3) concurrency: the number of requests processed at the same time (as concurrency rises, latency rises).
Seven, summary
Principle level: Java concurrency rests on low-level support from the hardware and the operating system; with that, the picture is complete.
Play code every day, progress every day!!