Ways to start a thread:

// Method 1:
public class MyThread extends Thread {
    @Override
    public void run() {
        super.run();
        // do my work
    }
}
new MyThread().start();

// Method 2:
public class MyRunnable implements Runnable {
    @Override
    public void run() {
        // do my work
    }
}
new Thread(new MyRunnable()).start();

// Method 3:
class MyCallable implements Callable<String> {
    @Override
    public String call() throws Exception {
        return "result";
    }
}
FutureTask<String> futureTask = new FutureTask<>(new MyCallable());
new Thread(futureTask).start();

Of course, HandlerThread, thread pools, AsyncTask, etc. can also start new threads; they are all encapsulations built on top of Thread.
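For example, a minimal sketch of going through an ExecutorService instead of a raw Thread (the single-thread pool here is just an illustrative choice):

    // uses java.util.concurrent.ExecutorService and Executors
    ExecutorService executor = Executors.newSingleThreadExecutor();
    executor.submit(() -> {
        // do my work on a pooled thread
    });
    executor.shutdown(); // release the pool when done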


Thread states:

The difference between waiting and blocked: roughly, waiting is a state the thread enters actively, by calling an instruction itself once it decides a condition is not met. Blocked is entered passively, and is only triggered by contention on a monitor guarded by the synchronized keyword.

The stalled states entered via IO, sleep, synchronized, etc. are loosely all called blocking. What they have in common is that the thread gives up the CPU but does not release any lock it already holds.
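A minimal sketch that makes the two states visible via Thread.getState() (class and variable names are illustrative):

    public class StateDemo {
        public static void main(String[] args) throws InterruptedException {
            final Object lock = new Object();

            Thread waiter = new Thread(() -> {
                synchronized (lock) {
                    try {
                        lock.wait(); // actively gives up the monitor -> WAITING
                    } catch (InterruptedException ignored) { }
                }
            });
            waiter.start();
            Thread.sleep(100); // let it reach wait()

            synchronized (lock) { // main now owns the monitor
                Thread blocker = new Thread(() -> {
                    synchronized (lock) { } // parked at monitor entry -> BLOCKED
                });
                blocker.start();
                Thread.sleep(100); // sleeping here keeps the lock held, as noted above

                System.out.println(waiter.getState());  // WAITING
                System.out.println(blocker.getState()); // BLOCKED
                lock.notify(); // wake the waiter before leaving the block
            }
        }
    }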


Thread safety

Do not share data: ThreadLocal

        ThreadLocal<String> threadLocal = new ThreadLocal<>();

        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                threadLocal.set("Thread 1 data");
                // do some work ...
                String data = threadLocal.get();
            }
        });

        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                threadLocal.set("Thread 2 data");
                // do some work ...
                String data = threadLocal.get();
            }
        });

Principle:

Each thread holds its own ThreadLocalMap (an entry array). When threadLocal.set() is called, the ThreadLocal instance's hash picks the slot in that array, with the ThreadLocal itself as the key and the value stored in the entry.
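A conceptual sketch of the set() path, simplified from the JDK source (hashing and cleanup details elided):

    public void set(T value) {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = t.threadLocals;      // each Thread carries its own map
        if (map != null) {
            map.set(this, value);                 // key: this ThreadLocal, slot chosen by its hash
        } else {
            t.threadLocals = new ThreadLocalMap(this, value);
        }
    }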

Sharing data:

1. The JVM keyword synchronized. Note: it is non-fair but reentrant.

    private List<Integer> shareData = new ArrayList<>();

    public void trySync(int data) {
        synchronized (this) { // locks this TestSyncData instance against thread1, thread2...
            // operate on the shared data
            shareData.add(data);
        }
        // do something later ...
    }

Pitfall: the lock object, its scope, and its lifetime differ between these forms:

    public void trySync1() {
        synchronized (this) { } // locks the TestSyncData instance
    }

    public synchronized void trySync2() { } // locks the TestSyncData instance

    public synchronized static void trySync3() { } // locks the TestSyncData class object

    public void trySync4() {
        synchronized (TestSyncData.class) { } // locks the TestSyncData class object
    }

If a time-consuming task acquires the lock, plan when the lock will be released in the rest of the flow. For example, when a singleton's lock is also taken from the main thread, make sure the main thread can obtain it in time.

If in doubt, consider narrowing the lifetime of the lock, for example with stack-confined locks and thread-safe beans, as sketched below.
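A minimal sketch of one such narrowing, reusing the shareData field from the snippet above (the method name is illustrative): copy under the lock, then do the slow work on the stack-confined copy.

    public int sumShareData() {
        List<Integer> copy;
        synchronized (this) {                 // hold the shared lock only long enough to copy
            copy = new ArrayList<>(shareData);
        }
        // 'copy' never escapes this stack frame, so the slow part runs without the lock
        int sum = 0;
        for (int v : copy) {
            sum += v;
        }
        return sum;
    }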

See blog.csdn.net/weixin_3960…

2. wait/notify, member methods every Java object inherits from the root Object class

    public void testWait() {
        // 1. create the synchronization object
        final Object lock = new Object();

        new Thread(new Runnable() {
            @Override
            public void run() {
                // 2. do something hard ...
                shareData = new ShareData();

                // 3. wake up the original thread
                synchronized (lock) {
                    lock.notify();
                }
                // 5. some other logic
            }
        }).start();

        // 4. the original thread waits
        synchronized (lock) {
            try {
                lock.wait(45000);
            } catch (InterruptedException e) {
                // 6. triggered when the thread is interrupted
                e.printStackTrace();
            }
        }
        if (shareData != null) {
            // do something happy ...
        }
    }

Pitfalls, of which there are many:

  1. notify() wakes a single waiting thread, chosen arbitrarily, so the notify calls must match up with the threads in the wait state; it cannot target a specific thread. If you want directed wakeup, consider ReentrantLock's Condition (see the sketch after this list).

  2. It is best not to put any other logic at comment 5, because it may never run. Try to make notify the last step of the time-consuming task.

  3. Guard against the wait being cut short by a thread interrupt, or by a wakeup arriving out of sync with the condition:

            // 4. interference-proof version: re-check the condition in a loop
            synchronized (lock) {
                while (shareData == null) {
                    try {
                        lock.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
  4. On service destruction, rather than waiting for data to come back, use notifyAll to terminate the waiting thread tasks and release the objects.
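A minimal sketch of the directed wakeup mentioned in pitfall 1, using ReentrantLock's Condition (class and field names are illustrative; one Condition per wait channel):

    // uses java.util.concurrent.locks.Condition and ReentrantLock
    class DirectedWakeup {
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition notEmpty = lock.newCondition();
        private final Condition notFull = lock.newCondition();

        void awaitNotEmpty() throws InterruptedException {
            lock.lock();
            try {
                notEmpty.await();  // waits only on this condition queue
            } finally {
                lock.unlock();
            }
        }

        void signalConsumersOnly() {
            lock.lock();
            try {
                notEmpty.signal(); // wakes a waiter on notEmpty, never one on notFull
            } finally {
                lock.unlock();
            }
        }
    }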

3. Locks and synchronizer components built on the AQS (AbstractQueuedSynchronizer) framework: ReentrantLock, Semaphore, CountDownLatch; implemented with a queue + a count + CAS

    private ReentrantLock lock = new ReentrantLock();
    private List<Integer> shareData = new ArrayList<>();

    public void testLock() { // non-standard; see the pitfall below
        lock.lock();
        shareData.add(1);
        lock.unlock();
    }

Pitfall: the lock must be released manually, or you risk deadlock; and since the guarded logic can throw, the standard form releases the lock in a finally block:
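A sketch of the standard form, reusing the fields above:

    public void testLockStandard() {
        lock.lock();
        try {
            shareData.add(1);  // the guarded logic may throw
        } finally {
            lock.unlock();     // always released, even on exception
        }
    }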


Thread-safe data types: StringBuffer, CopyOnWriteArrayList, ConcurrentXxx, BlockingQueue, AtomicXxx (CAS)

Some are based on locks, some are based on lockless CAS (compare and swap), and some are a mixture of the two

CAS theory reference: cloud.tencent.com/developer/a…
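A minimal sketch of the CAS retry loop behind AtomicXxx (the method name is illustrative):

    private final AtomicInteger counter = new AtomicInteger();

    // read, compute, attempt to swap; retry if another thread won the race
    public int doubleCounter() {
        int current;
        int doubled;
        do {
            current = counter.get();
            doubled = current * 2;
        } while (!counter.compareAndSet(current, doubled));
        return doubled;
    }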

Pitfalls:

  1. CopyOnWriteArrayList: every write first copies the current container into a new one, applies the add or remove on the new container, and then swaps the container reference over to the new copy; the whole process is locked, so writes are thread-safe. Index-based reads, however, can still hit an array out-of-bounds exception under concurrent modification:
        // unsafe idiom: size() and get(i) may see different copies if the list shrinks in between
        for (int i = 0; i < copyOnWriteArrayList.size(); i++) {
            copyOnWriteArrayList.get(i);
        }

        // safe idiom: the iterator walks a single immutable snapshot
        Iterator iterator = copyOnWriteArrayList.iterator();
        while (iterator.hasNext()) {
            iterator.next();
        }
  2. AtomicXxx: pay attention to the usage scenarios:

     ABA problem: solved by AtomicStampedReference or AtomicMarkableReference. Spin overhead: a CAS loop that keeps failing burns CPU. A single CAS can only guarantee atomicity for one shared variable: combine multiple variables into one object behind an AtomicReference.
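A minimal sketch of dodging ABA with AtomicStampedReference (values and names are illustrative): the stamp changes on every update, so an A -> B -> A cycle is detectable.

    private final AtomicStampedReference<String> ref =
            new AtomicStampedReference<>("A", 0);

    // CAS that also checks a version stamp
    public boolean moveToB() {
        int[] stampHolder = new int[1];
        String current = ref.get(stampHolder);       // value and stamp in one read
        return ref.compareAndSet(current, "B",
                stampHolder[0], stampHolder[0] + 1); // bump the version on success
    }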

BlockingQueue: in the producer-consumer model it decouples producers from consumers and balances the performance mismatch between them

BlockingQueue implementations:

- ArrayBlockingQueue: bounded blocking queue backed by an array
- LinkedBlockingQueue: blocking queue backed by a linked list
- PriorityBlockingQueue: blocking queue with priority ordering
  - DelayQueue: built on a priority queue; application: expiring caches
- SynchronousQueue: a blocking queue that stores no elements; application: newCachedThreadPool
- LinkedTransferQueue: unbounded blocking queue with a linked structure
- LinkedBlockingDeque: double-ended blocking queue backed by a linked list

Bounded: a maximum capacity is defined; unbounded: no maximum capacity is defined
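A minimal producer-consumer sketch over a bounded ArrayBlockingQueue (the capacity of 10 is arbitrary): put() blocks when the queue is full and take() blocks when it is empty, which is what balances the two sides.

    BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

    new Thread(() -> {                        // producer
        try {
            for (int i = 0; ; i++) {
                queue.put(i);                 // blocks while the queue is full
            }
        } catch (InterruptedException ignored) { }
    }).start();

    new Thread(() -> {                        // consumer
        try {
            while (true) {
                Integer item = queue.take();  // blocks while the queue is empty
                System.out.println(item);
            }
        } catch (InterruptedException ignored) { }
    }).start();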


Thread pools: thread management and unified scheduling

Create:

ThreadPoolExecutor(int corePoolSize,    // number of core threads
                   int maximumPoolSize, // maximum number of threads
                   long keepAliveTime,  // idle-thread keep-alive time
                   TimeUnit unit,       // time unit for keepAliveTime
                   BlockingQueue<Runnable> workQueue, // work queue
                   ThreadFactory threadFactory,       // thread factory
                   RejectedExecutionHandler handler)  // rejection policy

Threads are a scarce resource, so the pool must be sized according to the task profile:

CPU-intensive: crunches data in memory (max threads <= Runtime.getRuntime().availableProcessors() + 1). Note: the +1 covers stalls such as page faults in virtual memory.

I/O-intensive: network communication, disk reads and writes (max threads <= CPU cores x 2); threads block often but use little CPU.

Mixed: if the CPU and I/O parts take roughly equal time, split them into separate pools; otherwise size for whichever part dominates. The actual durations can be measured locally or estimated from experience.
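A sketch of creating a CPU-bound pool by the rule above (the bounded queue size and the AbortPolicy are arbitrary choices):

    int nThreads = Runtime.getRuntime().availableProcessors() + 1;
    ExecutorService pool = new ThreadPoolExecutor(
            nThreads,                              // corePoolSize
            nThreads,                              // maximumPoolSize
            0L, TimeUnit.MILLISECONDS,             // idle threads are not reclaimed
            new ArrayBlockingQueue<>(128),         // bounded work queue
            Executors.defaultThreadFactory(),
            new ThreadPoolExecutor.AbortPolicy()); // reject when saturated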

Execution: pay attention to the order in which a submitted task is handled (key): 1. run it on a free core thread; 2. otherwise put it in the work queue; 3. otherwise create a non-core thread, up to maximumPoolSize; 4. otherwise apply the rejection policy.

Rejection strategy (saturation strategy) :

RejectedExecutionHandler implementations:

- AbortPolicy: throws an exception directly (the default)
- CallerRunsPolicy: the caller's thread runs the task itself
- DiscardOldestPolicy: discards the oldest queued task and retries the submission
- DiscardPolicy: silently discards the task

Shutdown:

awaitTermination(long timeout, TimeUnit unit) // blocks until terminated or the timeout elapses
shutdown()    // stops accepting new tasks; interrupts only threads not executing a task
shutdownNow() // interrupts all threads, but cooperatively: it only sends an interrupt signal
isShutdown()  // whether shutdown has been initiated
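A common graceful-shutdown pattern, reusing the pool from the creation sketch above (the 60-second timeout is arbitrary):

    pool.shutdown();                               // stop accepting new tasks
    try {
        if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
            pool.shutdownNow();                    // escalate: signal running tasks
        }
    } catch (InterruptedException e) {
        pool.shutdownNow();
        Thread.currentThread().interrupt();        // preserve the interrupt status
    }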