Overview

Because of the overhead of creating and switching threads, multithreading does not speed up a purely CPU-bound program; it can actually slow it down. For programs that perform frequent IO, however, multithreading provides effective concurrency, and for programs with several distinct tasks, one thread per task is often a clearer design than a single thread that does everything. For simple computation, multiple threads are no more efficient than one, but for work that deliberately spreads across system resources (IO, waiting, independent tasks), multithreading is a good fit. In real development, whether to use multithreading is a performance decision that depends on the concrete scenario. Generally speaking, a program runs in a process: a process is a program with some independent function, and it is the independent unit of resource allocation and scheduling in a computer system. A thread is an entity within a process and the basic unit of CPU scheduling and dispatch; it is smaller than a process and can run independently.

In the JMM (Java Memory Model), threads may keep variables in local working memory (such as machine registers) rather than reading and writing main memory directly. As a result, one thread can change a variable's value in main memory while another thread keeps using a stale copy in its registers, producing inconsistent data and thread-safety problems.
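To make the problem concrete, here is a minimal sketch of a visibility bug (VisibilityDemo is an illustrative name, and whether the worker actually hangs depends on the JVM and JIT):

public class VisibilityDemo {
    static boolean ready = false;   // not volatile: updates may stay invisible

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!ready) {
                // may spin forever: the JIT is free to keep `ready` in a register
            }
            System.out.println("worker saw ready = true");
        });
        worker.start();
        Thread.sleep(100);
        ready = true;   // written by main, but the worker may never observe it
    }
}

Declaring ready as volatile (covered below) removes the problem. Here are several common ways of thread synchronization in Java.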

Main text

The root cause of thread-safety problems is that the JMM distinguishes main memory from each thread's working memory, which leads to inconsistency when multiple threads access the same resource at the same time. The fix therefore also starts from the JMM, and there are three main approaches:

  • Ensure that each thread accesses the resource with the latest value (visibility)
  • Lock the resource while one thread is manipulating it, preventing other threads from accessing it (locking)
  • Give each thread its own private copy of the variable, so each thread only ever reads and writes its own copy (thread confinement)

Locks

synchronized

A synchronization mechanism based on the synchronized modifier is called a mutual-exclusion mechanism, and the lock it acquires is called a mutex. Every object carries a lock flag; a thread can access the guarded resource only while it holds that flag, and a thread without it enters the lock pool and waits. Mutexes come in two kinds: class locks and object locks. A class lock guards the static methods of a class (or a synchronized block on the class itself); there is exactly one per class. An object lock guards the instance methods of an object; there is one per instance, so a class can have many object locks.

Here’s an example of synchronized used by a programmer fixing a bug

The Bug class

public class Bug {
    private static Integer bugNumber = 0;

    public static int getBugNumber() {
        return bugNumber;
    }

    // Instance-method synchronization: locks the Bug instance
    public synchronized void addNormal() {
        bugNumber++;
        System.out.println("normalSynchronized--->" + getBugNumber());
    }

    // Static-method synchronization: locks the Bug class
    public static synchronized void addStatic() {
        bugNumber++;
        System.out.println("staticSynchronized--->" + getBugNumber());
    }

    // Block synchronization: locks only the block
    // (note: locking on a boxed Integer that bugNumber++ rebinds is fragile;
    // a dedicated final lock object would be safer)
    public void addBlock() {
        synchronized (bugNumber) {
            bugNumber++;
            System.out.println("blockSynchronized--->" + getBugNumber());
        }
    }
}

Runnable

public class BugRunnable implements Runnable {
    private Bug mBug = new Bug();

    @Override
    public void run() {
        mBug.addNormal();
        // mBug.addBlock();   // block synchronization
        // Bug.addStatic();   // static-method synchronization
    }
}

The test code

public static void main(String[] args) {
    BugRunnable bugRunnable = new BugRunnable();
    for (int i = 0; i < 6; i++) {
        new Thread(bugRunnable).start();
    }
}
Synchronized code block
public void addBlock() {
    synchronized (bugNumber) {
        bugNumber++;
        System.out.println("blockSynchronized--->" + getBugNumber());
    }
}

The test results

blockSynchronized--->1
blockSynchronized--->2
blockSynchronized--->3
blockSynchronized--->4
blockSynchronized--->5
blockSynchronized--->6

Instance method synchronization
public synchronized void addNormal() {
    bugNumber++;
    System.out.println("normalSynchronized--->" + getBugNumber());
}

The test results

normalSynchronized--->1
normalSynchronized--->2
normalSynchronized--->3
normalSynchronized--->4
normalSynchronized--->5
normalSynchronized--->6

Static method synchronization
public static synchronized void addStatic() {
    bugNumber++;
    System.out.println("staticSynchronized--->" + getBugNumber());
}

The test results

staticSynchronized--->1
staticSynchronized--->2
staticSynchronized--->3
staticSynchronized--->4
staticSynchronized--->5
staticSynchronized--->6

Comparison and analysis
  • Each instance of a class has its own object lock. When a thread enters a synchronized block or synchronized instance method of an object, it acquires that instance's object lock; any other thread that wants to enter a synchronized block or method of the same instance (a class can have many instances) must wait until the lock is released. Accessing a different instance of the class is not blocked.
  • If an object has more than one synchronized method or block, a thread that has not acquired the object lock is blocked from all of them, but it can still call the non-synchronized methods.
  • A static synchronized method can be rewritten as a synchronized block on the class. Taking the static method above, it is equivalent to:
public static synchronized void addStatic() {
    bugNumber++;
    System.out.println("staticSynchronized--->" + getBugNumber());
}

// Equivalent synchronized block on the class object
public static void changeStatic() {
    synchronized (Bug.class) {
        ++bugNumber;
        System.out.println("blockSynchronized--->" + getBugNumber());
    }
}

Here is a summary of the differences among the three forms:

  • Synchronized blocks: the smallest scope; they lock one object only for the duration of the block, so concurrency (and thus performance) is highest
  • Instance synchronized methods: lock the whole method on the instance, which is coarser and performs worse
  • Static synchronized methods: equivalent to a synchronized block on the entire class, the coarsest lock, with the lowest concurrency
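To illustrate the difference in granularity, here is a hypothetical sketch (Counter and doUnrelatedWork are made-up names) contrasting a whole-method lock with a lock around just the critical section:

public class Counter {
    private final Object lock = new Object();
    private int count;

    // Method-level lock: held even while doing work that needs no lock
    public synchronized void slowIncrement() {
        doUnrelatedWork();   // other threads are blocked during this call too
        count++;
    }

    // Block-level lock: held only around the shared state
    public void fastIncrement() {
        doUnrelatedWork();   // runs without holding the lock
        synchronized (lock) {
            count++;
        }
    }

    private void doUnrelatedWork() { /* some lock-free work */ }
}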

ReentrantLock

In addition to the synchronized keyword, we can get the same effect through the Lock interface in the java.util.concurrent package. ReentrantLock is an implementation of Lock that can be locked and unlocked wherever you need, which makes it more flexible than synchronized. Let's look at its usage:

public void addReentrantLock() {
    mReentrantLock.lock();   // lock
    try {
        bugNumber++;
        System.out.println("ReentrantLock--->" + getBugNumber());
    } finally {
        mReentrantLock.unlock();   // unlock (in finally, so it always runs)
    }
}

Run the test

ReentrantLock--->1
ReentrantLock--->2
ReentrantLock--->3
ReentrantLock--->4
ReentrantLock--->5
ReentrantLock--->6


We can see that this also achieves synchronization. To understand how, let's look at ReentrantLock's class hierarchy.

ReentrantLock implements the Lock interface, but Lock only declares the methods; the actual locking mechanism lives inside ReentrantLock and rests on two concepts:

  • CLH queue: AbstractQueuedSynchronizer (AQS) keeps waiting threads in a queue. When threads contend, the ones that fail to acquire the lock are enqueued; the CLH queue is what manages these waiters.
  • CAS: compare-and-swap, an atomic operation; any state manipulated through CAS is updated atomically.

Member variables

private static final long serialVersionUID = 7373984872572414699L;
/** Synchronizer providing all implementation mechanics */
private final Sync sync;   // the synchronizer

Apart from the serialization ID, the only member variable is a Sync.

Sync has two implementation classes, FairSync and NonfairSync; as the names suggest, one implements a fair lock and the other an unfair lock.

FairSync's lock method:

final void lock() {
    acquire(1);
}


ReentrantLock is an exclusive lock, and the argument 1 is the amount added to the lock state: for an exclusive lock the state is 0 while the lock is available and becomes 1 when a thread first acquires it. acquire eventually calls the tryAcquire method:

protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) { // c == 0 means the lock is not held by any thread
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) { // reentrant acquisition
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

tryAcquire attempts to acquire the lock: on success it updates the lock state and returns true; otherwise it returns false.

NonfairSync's lock() plays the same role as the fair lock's, but since it is unfair the acquisition strategy differs slightly. As we saw above, the fair lock uses a fair strategy (the CLH queue) when acquiring the lock; the unfair lock ignores the wait queue and tries to grab the lock directly.

final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}


lock() first tries to set the lock state with compareAndSetState; if that succeeds, it records the current thread as the lock's owner, otherwise it falls back to acquire() to try again through the queue.

// NonfairSync
if (c == 0) {
    if (compareAndSetState(0, acquires)) {
        setExclusiveOwnerThread(current);
        return true;
    }
}

// FairSync
if (c == 0) {
    if (!hasQueuedPredecessors() &&
        compareAndSetState(0, acquires)) {
        setExclusiveOwnerThread(current);
        return true;
    }
}


The fair lock uses hasQueuedPredecessors() to check whether the current thread is at the head of the CLH queue, and acquires the lock only if it is; the unfair lock grabs the lock regardless of its position in the queue.

unlock

public void unlock() {
    sync.release(1);
}

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);   // wake up the next waiting thread
        return true;
    }
    return false;
}

Comparison and analysis

Interruptible waiting
  • synchronized: if threads A and B compete for the same lock and A does not release it, B keeps waiting and cannot back out.

  • ReentrantLock: a waiting thread can be interrupted, or give up after a timeout, instead of waiting forever, as the sketch below shows.
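A sketch of this difference (the class and method names are illustrative, but tryLock and lockInterruptibly are real ReentrantLock APIs):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleWait {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        // Wait at most one second for the lock instead of blocking forever
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        } else {
            // could not get the lock in time: recover, retry, or give up
        }
    }
}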

Fairness of locks

Fair lock: when multiple threads wait for the same lock, they acquire it in the order in which they requested it. Unfair lock: when the lock is released, any waiting thread has a chance to acquire it.

  • synchronized: an unfair lock
  • ReentrantLock: can be either an unfair lock (the default) or a fair lock
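Fairness is chosen when the lock is constructed (this is ReentrantLock's real constructor):

ReentrantLock unfairLock = new ReentrantLock();      // default: unfair
ReentrantLock fairLock   = new ReentrantLock(true);  // fair: FIFO ordering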
Condition binding
  • synchronized: comes with a single implicit condition (the object's monitor).
  • ReentrantLock: can bind multiple Condition objects, as the sketch below shows
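As a sketch of multiple conditions on one lock, here is a minimal bounded buffer modeled on the example in the Condition javadoc (names and capacity are illustrative):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition(); // producers wait here
    private final Condition notEmpty = lock.newCondition(); // consumers wait here
    private final Object[] items = new Object[16];
    private int putIdx, takeIdx, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();            // buffer full: wait with other producers
            items[putIdx] = x;
            putIdx = (putIdx + 1) % items.length;
            count++;
            notEmpty.signal();              // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();           // buffer empty: wait with other consumers
            Object x = items[takeIdx];
            items[takeIdx] = null;
            takeIdx = (takeIdx + 1) % items.length;
            count--;
            notFull.signal();               // wake one waiting producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}

Producers and consumers wait on separate conditions, so a signal wakes only the kind of thread that can actually make progress.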

Visibility

volatile

Memory semantics

The main cause of thread-safety problems is that a thread's working copy of a variable gets out of sync with main memory. If we can prevent that, threads stay consistent. Java provides the volatile keyword to guarantee memory visibility. Declaring a variable volatile has two effects:

  • Instruction reordering around the variable is disabled.
  • One thread changes the value of a variable, and the new value is immediately visible to other threads.

volatile is a weaker synchronization mechanism: accessing a volatile variable takes no lock and therefore never blocks, which makes volatile lighter-weight than synchronized.

Principle

At the machine level, writes to a volatile variable are emitted with a lock-prefixed instruction, which acts as a memory barrier providing three guarantees:

1) It ensures that instruction reordering does not move later instructions before the barrier, nor earlier instructions after it; that is, by the time the barrier instruction executes, every operation before it has completed;

2) It forces changes to the cache to be written to main memory immediately;

3) If it is a write operation, it invalidates the corresponding cache line in the other CPU.

Usage scenarios

Note that volatile by itself does not guarantee thread safety. To rely on volatile for synchronization, the following conditions must hold:

  • Writes to variables do not depend on the current value
  • This variable is not contained in an invariant with other variables

In other words, volatile guarantees thread safety only when the operations on the variable are themselves atomic. The following test shows what happens when they are not.

Test code:

class Volatile {
    volatile static int count = 0;

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    Volatile.add();
                }
            }).start();
        }
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("count--->" + ++count);
    }

    private static void add() {
        count++;
    }
}

The results

count--->1001


In theory count should reach 1000. Here the line prints 1001 only because the println itself pre-increments the value once more; the real problem is that count++ is not atomic, so on other runs updates are lost and the program prints a smaller number. Why? This was analyzed in the JMM discussion above.

As before, each thread does fetch the latest count from main memory, but count++ is not atomic: it is a read -> load -> modify -> write sequence. Suppose thread 1 reads count = 5 and loads it into its working memory, and thread 2 then preempts the CPU: thread 2 performs its own read and load, updates its working copy, and writes the new count back to main memory. Thread 1 has already completed its load, so it does not re-read main memory; it continues with its stale copy and writes the same value back, losing one increment. This is why volatile is thread-safe only for atomic operations. For these reasons, volatile is used primarily for flags:

volatile boolean flag = false;

// thread 1
while (!flag) {
    doSomething();
}

// thread 2
public void setFlag() {
    flag = true;
}

With multiple threads involved, as soon as one thread changes the flag the new state is flushed to main memory and becomes visible to all threads, so this pattern is thread-safe.

Atomic classes

The atomic classes were added to the java.util.concurrent.atomic package in JDK 1.5. volatile gives us memory visibility, but most compound operations on a volatile variable are not atomic, which limits where volatile alone can be used; the atomic classes guarantee that such operations are atomic.

Usage

Replace the volatile counter from the previous example:

 public static AtomicInteger atomicInteger = new AtomicInteger(0);
 private static void add() {
        atomicInteger.getAndIncrement();
    }


Test it out:

AtomicInteger: 1000

Principle analysis

How does AtomicInteger guarantee the atomicity that volatile cannot, while still preserving visibility?

Member variables

 private static final long serialVersionUID = 6214790243416807050L;
    private static final Unsafe unsafe = Unsafe.getUnsafe();
    private static final long valueOffset;
    private volatile int value;


Core operations

public final int getAndIncrement() {
    return unsafe.getAndAddInt(this, valueOffset, 1);
}

// Unsafe
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}

// pseudocode for compare-and-swap
int compare_and_swap(int reg, int oldval, int newval) {
    ATOMIC();
    int old_reg_val = reg;
    if (old_reg_val == oldval)
        reg = newval;
    END_ATOMIC();
    return old_reg_val;
}

Two concepts need to be known before analysis:

  • Pessimistic locking, as the name implies, assumes the worst: every time you read the data, you assume someone else may modify it, so you lock on every access, and anyone else who wants the data blocks until they can take the lock.

  • Optimistic locking, as the name implies, assumes the best: reads take no lock. Only when updating do you check whether someone else modified the data in the meantime, using a version number or a similar mechanism.

compare_and_swap is the core method, hence the name CAS. CAS is based on optimistic locking: on a write, if the old value no longer equals the current value, another CPU must have changed it, so the operation simply retries in a loop. That retry loop is what guarantees atomicity.
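The same optimistic retry loop can be written against the public API; a minimal sketch using AtomicInteger's compareAndSet (CasCounter is an illustrative name):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Optimistic update: read, compute, and retry if another thread won the race
    public int increment() {
        int oldVal, newVal;
        do {
            oldVal = value.get();
            newVal = oldVal + 1;
        } while (!value.compareAndSet(oldVal, newVal));
        return newVal;
    }
}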

Variable privatization

ThreadLocal provides each thread that uses a variable with its own independent copy. Instead of reading a shared value from main memory, each thread works on its own copy, and the copies do not affect each other. Where synchronized trades time for safe access to shared state, ThreadLocal trades space for time, which reduces the complexity of concurrent access.

Simple to use

class ThreadLocalDemo {
    public static ThreadLocal<String> local = new ThreadLocal<>();

    public static void main(String[] args) {
        local.set("Android");
        for (int i = 0; i < 5; i++) {
            SetThread localThread = new SetThread();   // create 5 threads
            new Thread(localThread).start();
        }
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(local.get());
    }

    static class SetThread implements Runnable {
        @Override
        public void run() {
            local.set(Thread.currentThread().getName());
        }
    }
}

Test result

Android


Even though the for loop creates five threads, each of which sets its own thread name into the ThreadLocal, the value seen by the main thread is unchanged: it still prints Android. The value each thread sets is bound to that thread, and every thread sees only its own value.
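A common real-world use is giving each thread its own copy of a non-thread-safe helper. A sketch using SimpleDateFormat (DateUtil is a made-up name; ThreadLocal.withInitial requires Java 8+):

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateUtil {
    // SimpleDateFormat is not thread-safe; one private copy per thread
    // avoids both corruption and locking
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }
}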

Source code analysis

Member variables
private final int threadLocalHashCode = nextHashCode();        // hash code of this ThreadLocal instance
private static AtomicInteger nextHashCode = new AtomicInteger(); // source of the next instance's hash code
private static final int HASH_INCREMENT = 0x61c88647;           // hash increment
The constructor
  public ThreadLocal() {
    }


Empty implementation…

Set method
public void set(T value) {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);   // get the thread's map
    if (map != null)
        map.set(this, value);
    else
        createMap(t, value);          // create the map if it doesn't exist yet
}

ThreadLocalMap getMap(Thread t) {
    return t.threadLocals;
}
ThreadLocalMap

The map created above is actually a ThreadLocalMap, the structure that stores the data bound to each thread.

Basic method

Member variables
private static final int INITIAL_CAPACITY = 16;   // initial capacity, must be a power of 2
private Entry[] table;                            // the entry table
private int size = 0;                             // number of entries
private int threshold;                            // resize threshold

static class Entry extends WeakReference<ThreadLocal<?>> {
    /** The value associated with this ThreadLocal. */
    Object value;

    Entry(ThreadLocal<?> k, Object v) {
        super(k);
        value = v;
    }
}
A constructor
ThreadLocalMap(ThreadLocal<?> firstKey, Object firstValue) {
    // initialize the table array
    table = new Entry[INITIAL_CAPACITY];
    // compute the storage index from the hash code
    int i = firstKey.threadLocalHashCode & (INITIAL_CAPACITY - 1);
    table[i] = new Entry(firstKey, firstValue);
    // table now holds one entry
    size = 1;
    // set the threshold based on the initial capacity
    setThreshold(INITIAL_CAPACITY);
}

The other constructor takes an existing map and copies its entries; it behaves much like an ordinary key-value map.

getEntry
private Entry getEntry(ThreadLocal<?> key) {
    // compute the index from the hash code
    int i = key.threadLocalHashCode & (table.length - 1);
    Entry e = table[i];
    if (e != null && e.get() == key)
        return e;                            // direct hit
    else
        return getEntryAfterMiss(key, i, e); // linear probe on a miss
}
Set method
private void set(ThreadLocal<?> key, Object value) {
    Entry[] tab = table;
    int len = tab.length;
    // compute the index
    int i = key.threadLocalHashCode & (len - 1);

    for (Entry e = tab[i]; e != null; e = tab[i = nextIndex(i, len)]) {
        ThreadLocal<?> k = e.get();
        if (k == key) {          // key already present: overwrite the value
            e.value = value;
            return;
        }
        if (k == null) {         // stale entry (its ThreadLocal was GC'd): replace it
            replaceStaleEntry(key, value, i);
            return;
        }
    }

    tab[i] = new Entry(key, value);
    int sz = ++size;
    if (!cleanSomeSlots(i, sz) && sz >= threshold)
        rehash();                // clean up stale slots and resize if needed
}
The remove method
private void remove(ThreadLocal<?> key) {
    Entry[] tab = table;
    int len = tab.length;
    int i = key.threadLocalHashCode & (len - 1);
    for (Entry e = tab[i]; e != null; e = tab[i = nextIndex(i, len)]) {
        if (e.get() == key) {
            e.clear();               // clear the weak reference
            expungeStaleEntry(i);    // expunge the entry for this key
            return;
        }
    }
}

That covers the core of ThreadLocal: each thread owns a ThreadLocalMap that stores an Entry array, where each entry's key is a WeakReference to the ThreadLocal and the value is whatever was set. Every set and get also cleans up stale entries, and using weak references prevents memory leaks as far as possible.
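Weak references only mitigate the problem; on long-lived threads (such as thread pools) it is still good practice to call remove() explicitly. A hypothetical sketch (again assuming Java 8+):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolCleanup {
    static ThreadLocal<byte[]> buffer =
            ThreadLocal.withInitial(() -> new byte[8192]);

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.execute(() -> {
            try {
                byte[] buf = buffer.get();
                // ... use buf ...
            } finally {
                buffer.remove();   // pool threads never die; drop the value explicitly
            }
        });
        pool.shutdown();
    }
}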

Deadlock

Definition

Deadlock: multiple threads compete for resources and end up waiting on each other, so that none of them can make progress without outside intervention.

Here is an example of a deadlock

public class DeadLock implements Runnable {
    public int flag = 1;
    private static Object o1 = new Object(), o2 = new Object();

    @Override
    public void run() {
        System.out.println("flag=" + flag);
        if (flag == 1) {
            synchronized (o1) {          // hold o1 ...
                try {
                    Thread.sleep(500);
                } catch (Exception e) {
                    e.printStackTrace();
                }
                synchronized (o2) {      // ... then wait for o2
                    System.out.println("1");
                }
            }
        }
        if (flag == 0) {
            synchronized (o2) {          // hold o2 ...
                try {
                    Thread.sleep(500);
                } catch (Exception e) {
                    e.printStackTrace();
                }
                synchronized (o1) {      // ... then wait for o1
                    System.out.println("0");
                }
            }
        }
    }

    public static void main(String[] args) {
        DeadLock td1 = new DeadLock();
        DeadLock td2 = new DeadLock();
        td1.flag = 1;
        td2.flag = 0;
        // Both threads are runnable; the JVM scheduler decides which runs first,
        // so td2's run() may start before td1's.
        new Thread(td1).start();
        new Thread(td2).start();
    }
}

Whichever thread starts first sleeps for 500 ms, handing the CPU to the other thread. This guarantees that the td1 thread holds the o1 lock while competing for o2, and the td2 thread holds the o2 lock while competing for o1. Neither thread will give up the lock it already holds, so both block forever: a deadlock.

The necessary conditions for a deadlock to occur
  • 1) Mutual exclusion: a process uses its allocated resources exclusively, i.e. only one process can hold a given resource at a time. If another process requests that resource, the requester must wait until the holder releases it.
  • 2) Hold and wait: a process already holds at least one resource but requests a new one that is occupied by another process; the requester blocks without releasing the resources it already holds.
  • 3) No preemption: a resource cannot be taken away from a process before the process is finished with it; it can only be released voluntarily when its use is complete.
  • 4) Circular wait: when deadlock occurs, there is a circular chain of processes and resources, with each process waiting for a resource held by the next.
Deadlock prevention
  • Break the mutual-exclusion condition: allow processes to access certain resources simultaneously.
  • Break the no-preemption condition: allow a process to forcibly take resources from their holder; equivalently, when a process that holds resources requests new ones that cannot be satisfied immediately, it must release everything it holds and re-apply later.
  • Break the hold-and-wait condition: adopt a resource pre-allocation strategy, in which a process requests all the resources it needs from the system at once, before it runs.
  • Break the circular-wait condition: adopt ordered resource allocation. Number the resources and require every process to request them strictly in ascending order, so that no cycle can form (see the sketch after this list).
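Applied to the DeadLock example above: if both threads acquired o1 before o2 (ordered allocation), the circular wait could not form. A minimal sketch:

public class OrderedLocks {
    private static final Object o1 = new Object();
    private static final Object o2 = new Object();

    // Every thread takes o1 first, then o2, so a circular wait is impossible
    static void safeTask() {
        synchronized (o1) {
            synchronized (o2) {
                // work with both resources
            }
        }
    }
}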