Java concurrent programming and high concurrency interviews

I. Course preparation

1.1 Course guidance

  • This course focuses on the core of concurrent programming and on solutions for high concurrency.

  • The goal is to equip you to solve concurrent programming and high concurrency problems;

  • Course Features:

    1. Lots of diagrams and code demonstrations;
    2. Comprehensive coverage of concurrency knowledge, building a complete knowledge system, including: thread safety, thread confinement, thread scheduling, synchronized containers, concurrent containers, AQS, J.U.C, and so on;
    3. High concurrency solutions and ideas, mainly including: capacity expansion, caching, queues, splitting, service degradation and circuit breaking, database switching, sharding by database and table, etc. Together these help you build a complete knowledge system for concurrency and high concurrency.
    4. Close to real interviews, improving your odds in high-salary interviews
  • Suitable for people:

    1. Students not yet familiar with concurrency and high concurrency
    2. Students with some understanding of concurrency and high concurrency
    3. Experienced programmers preparing to change jobs
  • Learning gains:

    1. Learn concurrent programming knowledge and high-concurrency processing ideas systematically
    2. Fix concurrency mistakes you may have made unknowingly before
    3. Avoid concurrency issues in future development
    4. Take a more comprehensive look at what you already know, in order to improve your knowledge system
    5. Learn from many practical case studies and code optimization techniques
    6. Gain a qualitative improvement in concurrent programming and high-concurrency processing
    7. Save time preparing for interviews and make your interviews more focused
    8. Pick up ideas and problem-solving techniques you may not have thought of before
  • Content and steps:

    1. Explanation of basic knowledge and preparation of core concepts
    2. Concurrency and thread-safe handling
    3. Ideas and approaches for handling high concurrency
  • Some knowledge and skills involved:

    1. Overall architecture: Spring Boot, Maven, JDK 8, MySQL
    2. Basic components: MyBatis, Guava, Lombok, Redis, Kafka
    3. Advanced components (classes): Joda-Time, the Atomic package, J.U.C, AQS, ThreadLocal, RateLimiter, Hystrix, thread pools, Shardbatis, Curator, elastic-job…

1.2 Initial experience of concurrent programming

  • Simplest case of concurrent programming: implementing a counter (two examples give us a first taste of concurrent programming)

    1. CountExample.java:

      import lombok.extern.slf4j.Slf4j;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Semaphore;
      import java.util.concurrent.TimeUnit;

      @Slf4j
      public class CountExample {

          // number of threads allowed to run at the same time
          private static int threadTotal = 200;
          // total number of tasks submitted
          private static int clientTotal = 5000;

          private static long count = 0;

          public static void main(String[] args) throws InterruptedException {
              ExecutorService exec = Executors.newCachedThreadPool();
              final Semaphore semaphore = new Semaphore(threadTotal);
              for (int index = 0; index < clientTotal; index++) {
                  exec.execute(() -> {
                      try {
                          semaphore.acquire();
                          add();
                          semaphore.release();
                      } catch (Exception e) {
                          log.error("exception", e);
                      }
                  });
              }
              exec.shutdown();
              // wait for all tasks to finish before reading the result
              exec.awaitTermination(10, TimeUnit.SECONDS);
              log.info("count:{}", count);
          }

          // count++ is not atomic: it is a read, an add, and a write back
          private static void add() {
              count++;
          }
      }

      5000 tasks are submitted, with at most 200 threads executing at a time; the printed total comes out less than 5000, and it differs from run to run.

    2. MapExample.java:

      import com.google.common.collect.Maps;
      import lombok.extern.slf4j.Slf4j;
      import java.util.Map;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Semaphore;
      import java.util.concurrent.TimeUnit;

      @Slf4j
      public class MapExample {

          private static Map<Integer, Integer> map = Maps.newHashMap();

          private static int threadNum = 200;
          private static int clientNum = 5000;

          public static void main(String[] args) throws InterruptedException {
              ExecutorService exec = Executors.newCachedThreadPool();
              final Semaphore semaphore = new Semaphore(threadNum);
              for (int index = 0; index < clientNum; index++) {
                  final int num = index;
                  exec.execute(() -> {
                      try {
                          semaphore.acquire();
                          func(num);
                          semaphore.release();
                      } catch (Exception e) {
                          log.error("exception", e);
                      }
                  });
              }
              exec.shutdown();
              exec.awaitTermination(10, TimeUnit.SECONDS);
              log.info("count:{}", map.size());
          }

          // HashMap is not thread-safe; concurrent puts can lose entries
          private static void func(int num) {
              map.put(num, num);
          }
      }

      Each task stores a value into the map, with 200 threads running at the same time; the final size comes out less than 5000 almost every run.

  • Think about:

    • With 200 threads running at the same time, the count never reaches 5000; but if we change 200 to 1, it counts to 5000 correctly. In other words, code that is perfectly fine in a single-threaded environment is prone to problems in a multi-threaded one.

1.3 Basic concepts of concurrency and high concurrency

  • Concurrency: two or more threads exist at the same time. On a single-core processor the threads are swapped in and out alternately; they all "exist" at the same time, each somewhere in the middle of its execution. On a multi-core processor, each thread can be assigned its own core, so they genuinely run at the same time.
  • High concurrency: high concurrency is one of the factors that must be considered in the architecture design of distributed Internet systems. It usually means designing the system so that it can handle many requests in parallel at the same time.

Concurrency is about multiple threads operating on the same resource, with an emphasis on thread safety and rational use of resources. High concurrency is about a service handling many requests at the same time, with an emphasis on improving application performance.

II. Concurrency basics

2.1 CPU Multi-level Cache – Cache Consistency

  • Systems evolved from a single level of cache to two or even three levels:

    Making the L1 cache bigger is expensive, so adding an L2 (and L3) cache maximizes resource utilization while keeping cost down.

  • Why a CPU cache is needed: the CPU clock is too fast for main memory to keep up, so the CPU would often spend cycles waiting on main memory, wasting resources. The cache exists to ease the speed mismatch between CPU and memory.
  • Why does the CPU cache work?
    1. Temporal locality: if some data is accessed, it is likely to be accessed again in the near future;
    2. Spatial locality: if a piece of data is accessed, adjacent data is likely to be accessed soon;
  • To keep cached copies of shared data consistent across multiple CPU caches, the MESI cache-coherence protocol is used:

    The protocol is fairly complex; the details are left for self-study.

2.2 CPU Multi-level Cache – Out-of-order execution optimization

  • Out-of-order execution optimization is when the processor, in order to run faster, executes instructions in an order that differs from the order written in the code.
  • Illustrated:

    The code computes a=10, then b=200, and finally result=a*b. After reordering, the processor may execute b=200 first, then a=10, then result=a*b, with the same single-threaded result.

  • In the single-core era, such reordering caused no problems. On a multi-core machine, however, several cores execute instructions at the same time and each may reorder them; on top of that, each core has its own L1, L2 and other caches. As a result, the write that logically comes last may not actually be written back last. Without countermeasures, the final outcome may differ from what the program logically computes.

2.3 Java Memory Model (JMM)

  • The Java Memory Model (JMM) is a specification that defines how the Java virtual machine works with the computer's memory. It specifies how and when one thread can see the values of shared variables modified by other threads, and how to access shared variables synchronously when necessary. Illustrated as follows:

  • Heap: a run-time data area managed by garbage collection.
    • Advantages: memory can be allocated dynamically; sizes and lifetimes need not be told to the compiler in advance, because allocation happens dynamically at run time, and Java's garbage collector automatically reclaims data that is no longer used.
    • Disadvantages: because allocation is dynamic at run time, access is relatively slow.
  • Stack: access is faster than the heap, second only to registers. However, the size of data on the stack must be determined, which lacks flexibility; it mainly stores primitive-type variables (the lowercase ones: int, byte, long, char, etc.) and object references.
  • Description:
    1. The Java memory model requires that the call stack and local variables live on the thread stack, and that objects live on the heap.
    2. A local variable may be a reference to an object: the reference is on the stack while the object itself is stored on the heap. Local variables of the object's methods are still on the thread stack, even though the object is in the heap.
    3. A member variable of an object is stored on the heap together with the object itself, whether it is of a primitive or a reference type. A method being executed has its frame on the stack, while the object the method belongs to is on the heap; local variables inside the method are on the thread stack.
    4. Static member variables are stored with the class definition on the heap, and variables on the heap can be accessed by any thread holding a reference to them.
    5. When a thread can access an object, it can also access the object's member variables. If two threads call a method on the same object at the same time and both access a member variable, each thread gets its own private copy of that member variable (in its working memory).
  • Chart 2:

  • CPU:
  • CPU:
    • This is a simplified diagram of the computer hardware architecture. A computer generally has two or more CPUs, and each CPU may have multiple cores, so it is common for more than two threads to be running at the same time; each core can run one thread at a time without difficulty.
    • This means that if your Java program is multithreaded, its threads can genuinely execute in parallel, one per core.
  • CPU registers:
    • Each CPU contains a set of registers, which form the basis of the CPU's working memory. The CPU can operate on registers much faster than on main memory, because register access is far faster than main-memory access.
  • CPU cache memory:
    • Because there is an order-of-magnitude gap between memory speed and processor speed, modern computer systems insert a layer of cache, whose read/write speed is as close as possible to the processor's, as a buffer between memory and processor.
    • Data needed by an operation is copied into the cache so the operation can run quickly; when the operation finishes, the result is synchronized from the cache back to memory, so the processor does not have to wait on slow memory reads and writes.
    • The CPU accesses its cache faster than main memory, but usually slower than its registers.
    • Each CPU has a cache hierarchy, and one CPU may have several caches. At any moment, one or more cache lines may be read into the cache, and one or more cache lines may be flushed back to main memory; many such operations happen concurrently.
  • RAM – main memory:
    • A computer also has main memory, which is accessible to all CPUs and is usually much larger than the CPU caches.
    • Typically, when a CPU needs to read data from main memory, it first reads it into the CPU cache, and may even read part of the cached data into registers, then operates in the registers. When the CPU needs to write a result back to main memory, it flushes the register value to the cache, and at some later point the cache line is written back to main memory.
  • Graphical representation of the relationship between the Java memory model and hardware resources:

From the hardware point of view, all thread stacks and heaps live in main memory, while parts of them may appear in CPU caches and in registers inside the CPU.

  • Java Memory Model Abstract structure diagram:

– The diagram shows the abstract relationship between threads and main memory. Shared variables between threads are stored in main memory, and each thread has a private local memory. Local memory is an abstraction of the Java memory model, not something that physically exists: it covers caches, write buffers, registers, and other hardware and compiler optimizations. It stores copies of the shared variables the thread reads or writes (for example, when thread A uses a shared variable from main memory, it copies that variable into its own local memory, producing a copy of the shared variable). For better speed, the virtual machine and hardware may keep working memory preferentially in registers and caches; the "working memory" of a JMM thread is an abstract description of CPU registers and caches. The JMM is a logical partitioning of memory, not a physical one, and it applies only to JVM memory.
– If threads need to communicate, a thread must first flush the updated copy in its local memory back to the shared variable in main memory before thread B can read it. So if thread A copies a shared variable, thread B copies the same shared variable, and both modify their copies, data errors may occur: if both threads read the variable and perform +1 at the same time, the +1 takes effect only once. To solve this, synchronization is needed to make reads and writes of shared variables sequential.

  • Java memory model – the eight synchronization operations

    1. lock: acts on a main-memory variable; marks the variable as exclusively owned by one thread.
    2. unlock: acts on a main-memory variable; releases a locked variable so it can be locked by other threads.
    3. read: acts on a main-memory variable; transfers the variable's value from main memory into the thread's working memory for the subsequent load.
    4. load: acts on a working-memory variable; puts the value obtained by read into the working-memory copy of the variable.
    5. use: acts on a working-memory variable; passes the value of a working-memory variable to the execution engine.
    6. assign: acts on a working-memory variable; assigns a value received from the execution engine to the working-memory variable.
    7. store: acts on a working-memory variable; transfers the value of a working-memory variable to main memory for the subsequent write.

    8. write: acts on a main-memory variable; puts the value obtained by store into the main-memory variable.

  • Java memory model – synchronization rules

    1. To copy a variable from main memory to working memory, read and load must be performed in order; to synchronize a variable from working memory back to main memory, store and write must be performed in order. The JMM only requires that they be performed in order, not consecutively.
    2. read and load, and store and write, are not allowed to appear alone.
    3. A thread is not allowed to discard its most recent assign: a variable changed in working memory must be synchronized back to main memory.
    4. A thread is not allowed to synchronize data from working memory back to main memory for no reason (without any assign having occurred).
    5. A new variable can only be born in main memory; working memory is not allowed to use a variable that has not been initialized (by load or assign). In other words, assign or load must happen before use or store can be performed on a variable.
    6. A variable may be locked by only one thread at a time, but the same thread may lock it several times; after several lock operations, the variable is unlocked only after the same number of unlock operations. lock and unlock must come in pairs.
    7. Performing lock on a variable empties that variable's value in working memory; before the execution engine uses it, load or assign must be performed again to initialize the value.
    8. unlock is not allowed on a variable that has not been locked by a lock operation, nor on a variable locked by another thread.
    9. Before performing unlock on a variable, the variable must first be synchronized back to main memory (store and write)
  • Illustrated:

2.4 Advantages and Risks of concurrency

  • Tools involved in simulating concurrency:
    1. Postman: an HTTP request simulation tool that can be used to invoke back-end interfaces
    2. Apache Bench (ab): a tool shipped with the Apache HTTP Server for benchmarking web site performance
    3. JMeter: a load-testing tool from the Apache project (more powerful than ab)
    4. Code: Semaphore, CountDownLatch, etc.
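
    For instance, a quick ab run against a hypothetical local endpoint, sending 5000 requests with a concurrency of 200 (mirroring the counter example above):

      ab -n 5000 -c 200 http://localhost:8080/test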

2.5 Summary of this Chapter

  1. Discussed the CPU multi-level cache: cache coherence and out-of-order execution optimization
  2. The Java memory model: the JMM specification, its abstract structure, the eight synchronization operations and their rules
  3. Advantages and risks of Java concurrency

III. Thread safety

3.1 Thread safety – atomicity -Atomic

  • What is thread safety?
    • A class is thread-safe if it behaves correctly when accessed by multiple threads, regardless of how the runtime environment schedules them or how their execution interleaves, and without any additional synchronization or coordination in the calling code.
  • There are three main things to get right about thread safety:
    1. Atomicity: provides mutually exclusive access; only one thread can operate on the data at a time
    2. Visibility: changes made by one thread to main memory can be observed promptly by other threads
    3. Ordering: the order in which one thread observes instructions executing in other threads; due to instruction reordering, this observation is generally disordered
  • Atomicity – the Atomic package:
    • Java natively provides basic atomic classes that can be used to prevent concurrent data anomalies in some scenarios.

      For example, AtomicLong for increment and decrement: in the source code it compares the expected value with the current value and applies the update only if they match, otherwise it retries (CAS, compare-and-swap). This is similar to optimistic locking with a version column in a database, where the update takes effect only if the expected version matches. It can resolve data anomalies on shared variables in some concurrent scenarios.

    • Question to consider: what is the difference between LongAdder and AtomicLong?
    • AtomicReferenceFieldUpdater:
    • The Atomic package:
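
      As a minimal sketch (reusing the Semaphore/executor scaffolding of the earlier CountExample), replacing the plain long counter with an AtomicLong makes the count reliably reach 5000:

      import lombok.extern.slf4j.Slf4j;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Semaphore;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.atomic.AtomicLong;

      @Slf4j
      public class AtomicExample {

          private static int threadTotal = 200;
          private static int clientTotal = 5000;

          // incrementAndGet() is implemented with CAS, so the increment is atomic
          private static AtomicLong count = new AtomicLong(0);

          public static void main(String[] args) throws InterruptedException {
              ExecutorService exec = Executors.newCachedThreadPool();
              final Semaphore semaphore = new Semaphore(threadTotal);
              for (int index = 0; index < clientTotal; index++) {
                  exec.execute(() -> {
                      try {
                          semaphore.acquire();
                          count.incrementAndGet();
                          semaphore.release();
                      } catch (Exception e) {
                          log.error("exception", e);
                      }
                  });
              }
              exec.shutdown();
              exec.awaitTermination(10, TimeUnit.SECONDS);
              log.info("count:{}", count.get()); // always 5000
          }
      }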

3.2 Thread safety – atomicity – locking

  • Synchronized: relies on the JVM:
    • Modifying a code block: the code in braces; acts on the object named in parentheses
    • Modifying an instance method: the entire method; acts on the calling instance
    • Modifying a static method: the entire static method; acts on all instances of the class
    • Modifying a class (synchronized(X.class)): the code in braces; acts on all instances of the class
    • Examples of modified code blocks and methods (a minimal sketch follows):
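
      A minimal sketch of the four forms (class and method names are illustrative):

      public class SynchronizedExample {

          // 1. synchronized block: locks the object in parentheses (here, this)
          public void sync1() {
              synchronized (this) {
                  // critical section
              }
          }

          // 2. synchronized instance method: locks the calling instance
          public synchronized void sync2() {
              // critical section
          }

          // 3. synchronized static method: locks the Class object, i.e. all instances
          public static synchronized void sync3() {
              // critical section
          }

          // 4. synchronized on the class: same scope as a static synchronized method
          public void sync4() {
              synchronized (SynchronizedExample.class) {
                  // critical section
              }
          }
      }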
  • Lock: relies on special CPU instructions; implemented in code, e.g. ReentrantLock
  • Comparison:
    1. Synchronized: a non-interruptible lock, suited to light contention; good readability
    2. Lock: an interruptible lock with diversified synchronization; maintains performance when contention is fierce
    3. Atomic: maintains performance under fierce contention, better than Lock, but can only synchronize one value

3.3 Thread safety – Visibility

  • Why shared-variable updates by one thread are not visible to another:

    1. Interleaved thread execution
    2. Reordering combined with interleaved execution
    3. The updated value of the shared variable is not synchronized in time between working memory and main memory
  • (Visibility – synchronized) The JMM has two rules about synchronized:

    1. Before unlocking, a thread must flush the latest value of the shared variable to main memory
    2. When a thread locks, the value of the shared variable in its working memory is emptied, so that when the variable is used, the latest value is re-read from main memory (note: locking and unlocking must use the same lock)
  • (Visibility – volatile) Achieved by inserting memory barriers and forbidding reordering optimizations:

    • A write to a volatile variable is followed by a store barrier instruction, which flushes the shared variable's value from local memory to main memory.
    • A read of a volatile variable is preceded by a load barrier instruction, which re-reads the shared variable from main memory.
  • StoreStore and StoreLoad barriers:
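
    A minimal sketch of where volatile is appropriate: it guarantees visibility, so it suits one-writer status flags, but it does not make compound operations such as count++ atomic:

      public class VolatileExample {

          // suitable: a status flag written by one thread, read by others
          private static volatile boolean inited = false;

          public static void main(String[] args) {
              new Thread(() -> {
                  // ... load configuration ...
                  inited = true;              // the write is flushed to main memory
              }).start();

              new Thread(() -> {
                  while (!inited) {           // each read sees the latest value
                      // wait for initialization
                  }
                  System.out.println("init done");
              }).start();
          }
      }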

3.4 Thread Safety – Ordering

  • Ordering:

    • In the Java memory model, the compiler and processor are allowed to reorder instructions. Reordering does not affect the result of a single-threaded program, but it can affect the correctness of concurrent multi-threaded execution.
    • Ordering can be guaranteed with volatile, synchronized, and Lock.
  • Ordering – the happens-before principle

    • Program order rule: within a thread, operations written earlier happen-before operations written later, in code order.

      This ordering is guaranteed only within a single thread.

    • Monitor lock rule: an unlock operation happens-before a subsequent lock operation on the same lock
    • Volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable
    • Transitivity rule: if operation A happens-before operation B, and B happens-before C, then A happens-before C
    • Thread start rule: a call to Thread.start() happens-before every action of the started thread
    • Thread interruption rule: a call to Thread.interrupt() happens-before the point where the interrupted thread's code detects the interruption
    • Thread termination rule: every operation in a thread happens-before the detection of its termination; termination can be detected by Thread.join() returning or Thread.isAlive() returning false
    • Finalizer rule: the completion of an object's construction happens-before the start of its finalize() method

    If the ordering of two operations cannot be deduced from happens-before, the system is free to reorder them.

  • Thread safety – summary:

    1. Atomicity: the Atomic package, the CAS algorithm, synchronized, Lock
    2. Visibility: synchronized, volatile
    3. Ordering: happens-before

IV. Safe publication of objects

4.1 Publishing Objects

  • Publishing an object: making an object usable by code outside its current scope

  • Object escape: an erroneous publication that lets an object be seen by other threads before its construction has completed.

  • Unsafe publication:

    • Code example:
    • A getter such as getStates returns a reference to the private states array, so any caller can modify the array; the publication is therefore unsafe.
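
      A minimal sketch of the unsafe publication described above (field and method names are an assumed reconstruction of the missing example):

      public class UnsafePublish {

          private String[] states = {"a", "b", "c"};

          // publishing the internal array lets any caller mutate it
          public String[] getStates() {
              return states;
          }
      }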
  • Object escape:

    • Code example:
    • Interpretation: an object should not be visible to other threads until it is fully constructed; if it must be published, it should be published safely.
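
      A minimal sketch of a this-reference escaping during construction (a common textbook form, assumed here since the original figure is missing):

      public class Escape {

          private int thisCanBeEscaped = 0;

          public Escape() {
              // 'this' escapes into InnerClass before the constructor finishes
              new InnerClass();
          }

          private class InnerClass {
              public InnerClass() {
                  // reads the outer object's field through the escaped reference,
                  // possibly before the outer object is fully constructed
                  System.out.println(Escape.this.thisCanBeEscaped);
              }
          }

          public static void main(String[] args) {
              new Escape();
          }
      }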

4.2 Publishing Objects Securely

  • Ways to publish an object safely:

    1. Initialize the object reference in a static initializer
    2. Store the reference in a volatile field or an AtomicReference
    3. Store the reference in a final field of a properly constructed object
    4. Store the reference in a field that is properly guarded by a lock
  • Lazy mode:

    • Code example:

      This is the lazy pattern: the singleton instance is created the first time it is used. The code is fine under a single thread, but it is not thread-safe; we can fix that, for example by locking. A sketch follows.
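
      A minimal sketch of the (not yet thread-safe) lazy singleton:

      public class SingletonExample {

          // private constructor
          private SingletonExample() {}

          // the singleton instance, created on first use
          private static SingletonExample instance = null;

          // NOT thread-safe: two threads may both see instance == null
          public static SingletonExample getInstance() {
              if (instance == null) {
                  instance = new SingletonExample();
              }
              return instance;
          }
      }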

    • To make the lazy form thread-safe, we can add the synchronized keyword to getInstance(), but that carries a high performance overhead, as shown in the figure below:

    • Double-checked locking: check for null, lock, then check for null again; thread-safe with performance maximized:

      Under multithreading, if two threads both pass the first check (instance == null), one of them obtains the lock and instantiates the object. When the lock is released and the second thread enters, it checks again whether the instance has already been created. This prevents both threads from instantiating at the same time.

    • Instruction reordering can still cause problems:

      • new SingletonExample() is really three steps: allocate memory, initialize the object, and point the reference at the memory. The JVM and CPU may reorder the last two steps, so the reference can be assigned before initialization completes; another thread may then see a non-null but not-yet-initialized instance, and calling it will fail.
    • Declaring the field volatile forbids the CPU from reordering these instructions, making the double-checked instantiation thread-safe. A sketch follows:
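
      A minimal sketch of double-checked locking with volatile:

      public class SingletonExample {

          private SingletonExample() {}

          // volatile forbids reordering of the three steps inside new
          private volatile static SingletonExample instance = null;

          public static SingletonExample getInstance() {
              if (instance == null) {                       // first check, no lock
                  synchronized (SingletonExample.class) {
                      if (instance == null) {               // second check, locked
                          instance = new SingletonExample();
                      }
                  }
              }
              return instance;
          }
      }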

  • Eager ("hungry") mode:

    • Example using a static field:
      • The eager mode is thread-safe because the instance is created when the class is initialized; the static factory method simply returns it, so under multithreading only that one instance is ever returned.
      • Disadvantages of eager mode:
        1. The constructor must not do heavy processing, or class loading will be slow.
        2. The instance should actually get used; otherwise creating it up front is wasted work.
    • Implementing the singleton with a static code block:
      • Note: order matters in static initialization; whatever appears first executes first. If a field declaration with = null comes after the static block that does the new, the instance is overwritten with null, producing a null-pointer failure later. A minimal sketch of both forms follows.
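
      A minimal sketch of the eager singleton using a static field, plus the static-block variant (in practice, each class in its own file):

      public class SingletonExample {

          private SingletonExample() {}

          // eager: the instance is created when the class is initialized
          private static SingletonExample instance = new SingletonExample();

          public static SingletonExample getInstance() {
              return instance;
          }
      }

      class SingletonExample2 {

          private SingletonExample2() {}

          private static SingletonExample2 instance = null;

          // static block form: if the declaration '= null' were written BELOW
          // this block, it would overwrite the created instance with null
          static {
              instance = new SingletonExample2();
          }

          public static SingletonExample2 getInstance() {
              return instance;
          }
      }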
  • Enumeration mode: Recommended

    • Code examples:
      /**
       * The safest implementation: enumeration
       */
      public class SingletonExample {

          // private constructor
          private SingletonExample() {}

          public static SingletonExample getInstance() {
              return Singleton.INSTANCE.getInstance();
          }

          private enum Singleton {
              INSTANCE;

              private SingletonExample singleton;

              // the JVM guarantees this constructor is called exactly once
              Singleton() {
                  singleton = new SingletonExample();
              }

              public SingletonExample getInstance() {
                  return singleton;
              }
          }
      }

V. Thread safety policies

5.1 Immutable Objects

  • An immutable object must meet the following conditions:

    1. Its state cannot be changed after creation
    2. All of its fields are final
    3. It is created correctly (the this reference does not escape during construction)

    String is an immutable object: concatenating two strings produces a new object at a new address.

  • The final keyword: classes, methods, variables

    • Modified classes: cannot be inherited
    • Modified methods: 1) the method cannot be overridden by subclasses; 2) efficiency (historically)
    • Modified variables: primitive-type variables (the value cannot be modified) and reference-type variables (the reference cannot be pointed elsewhere, though the referenced object may still change)

    Recent JVM releases do not need final for performance tuning.

  • Besides final, what produces immutable objects?

    1. Collections.unmodifiableXXX: Collection, List, Set, Map…

      In the source of unmodifiableXXX, all mutating methods are overridden to throw an exception, so any call to a modifying method throws immediately and the contents cannot be changed.

    2. Guava ImmutableXXX: Collection, List, Set, Map…

      Guava's immutable collections likewise throw on modification; mutating methods such as add() are marked deprecated, so the IDE flags them at the call site.

      There are two common forms of constructing an immutable Map, as sketched below.

    Once initialization completes, no changes are allowed.
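
    A minimal sketch of the two Guava ImmutableMap construction forms (of(...) for small maps, a builder for larger ones):

      import com.google.common.collect.ImmutableMap;

      public class ImmutableExample {

          // form 1: of(k1, v1, k2, v2, ...)
          private static final ImmutableMap<Integer, Integer> map =
                  ImmutableMap.of(1, 2, 3, 4);

          // form 2: builder
          private static final ImmutableMap<Integer, Integer> map2 =
                  ImmutableMap.<Integer, Integer>builder()
                          .put(1, 2).put(3, 4).put(5, 6).build();

          public static void main(String[] args) {
              // map.put(1, 3); // would throw UnsupportedOperationException
              System.out.println(map2.get(3)); // 4
          }
      }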

5.2 Thread Confinement

  • What is thread confinement?
    • Essentially, it confines an object to a single thread: only that one thread can see the object, so even if the object itself is not thread-safe, there is no thread-safety problem, because only one thread ever accesses it;
  • Normally a request is handled by one thread on the server, so we want per-thread isolation. When a request reaches the back end, a filter can fetch the current user and store it into a ThreadLocal; later, when a Service or other class needs the current user, it can read it from the ThreadLocal at any time.
  • Otherwise we would have to pass user information through endless method parameters that we do not really want to pass.
  • Ways to achieve thread confinement:
    1. Ad-hoc thread confinement: entirely up to the implementor's program logic; the weakest form, easily broken, and often ignored.
    2. Stack confinement: local variables have no concurrency problems. When multiple threads call a method, each thread's local variables live on its own thread stack, so they are not shared between threads. Prefer local variables over global variables where possible, because global variables invite concurrency problems.
    3. ThreadLocal confinement: a particularly good confinement method, recommended. Conceptually, each thread gets its own copy of the value; under the hood, each Thread holds a map whose key is the ThreadLocal instance and whose value is the confined object.

    The following demonstrates simple use of ThreadLocal.

  • Using ThreadLocal to store a value for the current thread:
    • Create a new Controller and a test interface:

      Here the value of id is obtained via RequestHolder.getId();

    • Define a RequestHolder class with three methods: add to store a value, getId to read the value from the ThreadLocal, and remove to delete it:
    • Create an HttpFilter class; in doFilter, intercept the request, get the thread's id, and use RequestHolder.add to store it into the internal ThreadLocal variable:

      The request passes through here before it reaches the Controller interface;

    • Create an HttpInterceptor class; its afterCompletion executes after the interface method finishes, where RequestHolder.remove() is called to avoid memory leaks. A sketch of the RequestHolder follows:
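
      A minimal sketch of the RequestHolder described above (the controller, filter and interceptor are omitted):

      public class RequestHolder {

          private static final ThreadLocal<Long> requestHolder = new ThreadLocal<>();

          // called from the filter, before the request reaches the controller
          public static void add(Long id) {
              requestHolder.set(id);
          }

          // called anywhere during request processing
          public static Long getId() {
              return requestHolder.get();
          }

          // called from the interceptor's afterCompletion to avoid memory leaks
          public static void remove() {
              requestHolder.remove();
          }
      }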

5.3 Thread unsafe classes and writing methods

  • What is a thread-unsafe class?

    • A class whose instances, when accessed by multiple threads at the same time without special synchronization or concurrency handling, easily exhibit thread-unsafe behavior, such as throwing exceptions or producing wrong results. That is what we call a thread-unsafe class.
  • StringBuilder vs. StringBuffer

    1. StringBuilder is not thread-safe; StringBuffer is thread-safe.
    2. StringBuffer is thread-safe because its methods are synchronized internally, whereas StringBuilder's are not. For the same reason, StringBuilder performs better.
    3. It is not a question of which is better in absolute terms, but which fits the situation. A StringBuilder defined inside a method is stack-confined: only one thread can use that object, so it is the better choice there even though the class is not thread-safe, because it is a local object.

      This is why Java provides both string-handling classes.

  • Similarly, the commonly used SimpleDateFormat class is error-prone if defined as a global variable; the correct way is to create a new SimpleDateFormat inside the method that uses it:

    That way there are no exceptions caused by thread unsafety. Joda-Time's DateTimeFormatter, by contrast, is a thread-safe class whether defined inside a method or as a global variable. We recommend Joda-Time, which is not only thread-safe but has many other advantages. A sketch of both follows.
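
    A minimal sketch contrasting the two approaches (assuming Joda-Time on the classpath, as listed among the course components):

      import org.joda.time.DateTime;
      import org.joda.time.format.DateTimeFormat;
      import org.joda.time.format.DateTimeFormatter;

      import java.text.SimpleDateFormat;
      import java.util.Date;

      public class DateFormatExample {

          // thread-safe: may be shared as a global constant
          private static final DateTimeFormatter FORMATTER =
                  DateTimeFormat.forPattern("yyyyMMdd");

          private static void update() {
              // thread-confined: a fresh SimpleDateFormat per invocation
              SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyyMMdd");
              System.out.println(simpleDateFormat.format(new Date()));

              // Joda-Time equivalent, safe even with the shared formatter
              System.out.println(DateTime.now().toString(FORMATTER));
          }

          public static void main(String[] args) {
              update();
          }
      }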

  • Summary of thread-unsafe classes:

    • StringBuilder -> StringBuffer
    • SimpleDateFormat -> Joda-Time
    • ArrayList, HashSet, HashMap and other Collections

    When using thread-unsafe classes, concurrency is fine as long as we only read from them and never modify them. If contents must be modified, do it on local variables, so each thread has its own thread-confined instance and threads do not affect each other. If a value must be global and modified concurrently (ticket grabbing, incrementing and decrementing a variable), then the updates must be kept atomic: locking, the Atomic classes, and so on keep the thread safe.

5.4 Thread Safety – Synchronization containers

  • Types of synchronized containers:

    1. ArrayList -> Vector, Stack
    2. HashMap -> Hashtable (key and value cannot be null)
    3. Collections.synchronizedXXX (List, Set, Map)
  • Thread-safe classes are not necessarily safe to use:

    import java.util.Vector;

    public class VectorExample {

        private static Vector<Integer> vector = new Vector<>();

        public static void main(String[] args) {
            while (true) {
                for (int i = 0; i < 10; i++) {
                    vector.add(i);
                }
                Thread thread1 = new Thread() {
                    public void run() {
                        for (int i = 0; i < 10; i++) {
                            vector.remove(i);
                        }
                    }
                };
                Thread thread2 = new Thread() {
                    public void run() {
                        for (int i = 0; i < 10; i++) {
                            vector.get(i);
                        }
                    }
                };
                thread1.start();
                thread2.start();
            }
        }
    }

    In the code above, the constant removing and reading will eventually raise an ArrayIndexOutOfBoundsException, because by the time get(i) runs, the element at that index may already have been removed. So "thread-safe" does not always mean safe to use; we need to understand the characteristics of each container. Likewise, do not remove elements from a collection while iterating over it with an Iterator; if removal is needed, mark the elements and remove them after the iteration finishes, otherwise exceptions may occur.

  • As the examples show, synchronized containers do not perform particularly well and are not fully safe under concurrency. There is a better alternative: concurrent containers.

5.5 Thread Safety – concurrent container J.U.C

  • J.U.C is the three-letter abbreviation of a Java package path: java.util.concurrent.

  • Types of concurrent containers and their correspondences:

    1. ArrayList -> CopyOnWriteArrayList. Writes copy the underlying array, so: 1) write operations consume memory, and too many elements can trigger young GC or even full GC; 2) it cannot serve scenarios needing real-time reads of just-written data, because a write takes time to complete. It therefore suits read-mostly scenarios, and should be used with caution for large data volumes.

      Design philosophy of copy-on-write: read/write separation, eventual consistency, and paying extra space to resolve concurrent conflicts.

      • Source code diagram:
    2. HashSet, TreeSet -> CopyOnWriteArraySet, ConcurrentSkipListSet

      CopyOnWriteArraySet is analogous to CopyOnWriteArrayList. Note that ConcurrentSkipListSet's batch operations such as removeAll and addAll are not atomic and need manual synchronization; the individual operations are atomic, but a batch call can be interleaved with operations from other threads.

    3. HashMap, TreeMap -> ConcurrentHashMap, ConcurrentSkipListMap

      ConcurrentHashMap is generally faster to access, but ConcurrentSkipListMap has its own advantages: it supports higher concurrency, keeps keys ordered, and its access time does not grow directly with the number of threads.

  • The concurrent programming map:

  • Safe shared-object policies – summary

    1. Thread-confined: a thread-confined object is exclusive to its thread and can only be modified by the thread that owns it.
    2. Shared read-only: a shared read-only object can be accessed concurrently by multiple threads without extra synchronization, but no thread may modify it.
    3. Thread-safe object: a thread-safe object or container is internally synchronized, so other threads can access it through its public interface without additional synchronization.

VI. AQS in J.U.C

6.1 Introduction to AQS

  • AQS: AbstractQueuedSynchronizer is the basic framework underlying the J.U.C synchronizers. Introduced in JDK 5, it improves the performance of concurrent Java and is the infrastructure on which locks and synchronizers are built.

  • Data structure:

    A general impression is enough here.

  • Introduction:

    1. It implements a FIFO queue of Nodes, which serves as the basic framework for building locks and other synchronizers
    2. An int value represents the synchronization state
    3. It is used by inheritance
    4. Subclasses manage the state by overriding its template methods (acquire and release)
    5. It supports both exclusive and shared lock modes
  • AQS synchronization components:

    1. CountDownLatch (a count that determines whether threads must keep blocking)
    2. Semaphore (controls the number of threads running concurrently at the same time)
    3. CyclicBarrier (like the previous two, it can also block threads)
    4. ReentrantLock (important)
    5. Condition
    6. FutureTask

6.2 CountDownLatch

  • Illustrated:

    CountDownLatch is a synchronization helper class that blocks the current thread; in other words, it lets one or more threads wait until other threads have completed their operations.

  • CountDownLatch, analyzed with the diagram:

    • It is initialized with a given count; the counter is operated on atomically, by one thread at a time.
    • A thread calling the await method blocks until other threads calling countDown() bring the counter to 0.
    • Each call to countDown() decrements the counter by one; when it reaches zero, all threads waiting in await() continue executing.
    • This happens only once, because the counter cannot be reset. If the business needs a resettable count, consider CyclicBarrier (introduced later), which can be reused.
  • CountDownLatch fits business scenarios where execution must wait for some condition to be satisfied before continuing. Typical use: parallel computation (split a big task into many small tasks, wait for all of them to finish, then aggregate).

  • Why use CountDownLatch for simulating concurrency? Because the scenario we simulate is functionally simple, and the use case matches its intended usage well.

  • Code example:

    If there is a risk that countDownLatch.countDown() never brings the value to 0, we can replace countDownLatch.await() with countDownLatch.await(10, TimeUnit.MILLISECONDS); then, if the condition has not been reached by the deadline, execution continues with the following code anyway. A sketch follows.
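
    A minimal sketch of CountDownLatch with a timed await:

      import lombok.extern.slf4j.Slf4j;
      import java.util.concurrent.CountDownLatch;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      @Slf4j
      public class CountDownLatchExample {

          private static final int threadCount = 200;

          public static void main(String[] args) throws InterruptedException {
              ExecutorService exec = Executors.newCachedThreadPool();
              final CountDownLatch countDownLatch = new CountDownLatch(threadCount);
              for (int i = 0; i < threadCount; i++) {
                  final int threadNum = i;
                  exec.execute(() -> {
                      try {
                          Thread.sleep(100);
                          log.info("{}", threadNum);
                      } catch (InterruptedException e) {
                          log.error("exception", e);
                      } finally {
                          countDownLatch.countDown();
                      }
                  });
              }
              // wait at most 10 ms; continue even if the count has not reached 0
              countDownLatch.await(10, TimeUnit.MILLISECONDS);
              log.info("finish");
              exec.shutdown();
          }
      }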

6.3 Semaphore

  • Semaphore concept: a semaphore can control the number of threads accessing a particular resource at the same time, coordinating threads to ensure the resource is used properly.

  • Illustrated:

    Think of the display screen at a parking-lot entrance: each time a car enters, the remaining-space count on the screen decreases by 1; each time a car leaves, it increases by 1. When the remaining count reaches 0, the barrier at the entrance stays down and no car can enter until another car leaves the lot.

  • Usage scenario: rate limiting (limiting concurrent access).

  • Common Semaphore methods:

    1. acquire(): acquires one permit; the thread blocks until a permit is obtained or the thread is interrupted.
    2. acquire(int permits): acquires the given number of permits; the thread blocks until they are obtained or the thread is interrupted.
    3. acquireUninterruptibly(): acquires one permit; the thread keeps blocking until one is obtained (interruption is ignored).
    4. tryAcquire(): tries to acquire one permit and immediately returns success or failure, without blocking.
    5. tryAcquire(long timeout, TimeUnit unit): keeps trying to acquire a permit within the timeout; returns on success or when the timeout expires, without blocking beyond that.
    6. release(): releases one permit, waking up a blocked thread that failed to acquire one.
    7. hasQueuedThreads(): whether there are threads waiting in the queue.
    8. getQueueLength(): the number of blocked threads in the waiting queue.
    9. drainPermits(): sets the number of available permits to 0 and returns the number drained.
    10. availablePermits(): returns the number of available permits.

  • Code demo:

    In the example, the Semaphore is created with three permits. Even if 20 threads access at the same time, only three can execute concurrently: each acquires a permit via acquire() before running the guarded code, and once release() returns the permit, a blocked thread can try to acquire it. This makes rate limiting very convenient; with a single permit it degenerates to single-threaded execution. acquire and release are not restricted to one permit at a time: multiple permits can be acquired or released together.

  • Trying to acquire a permit:

    tryAcquire() attempts to obtain a permit: the guarded code runs only if a permit is obtained, and is skipped otherwise. If 20 threads try to acquire permits at once and the Semaphore holds only three, then at most three threads hold permits at any moment, even as permits are released and re-acquired. A sketch follows.
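
    A minimal sketch of tryAcquire with three permits and 20 threads:

      import lombok.extern.slf4j.Slf4j;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Semaphore;

      @Slf4j
      public class SemaphoreExample {

          private static final int threadCount = 20;

          public static void main(String[] args) {
              ExecutorService exec = Executors.newCachedThreadPool();
              final Semaphore semaphore = new Semaphore(3); // three permits
              for (int i = 0; i < threadCount; i++) {
                  final int threadNum = i;
                  exec.execute(() -> {
                      try {
                          if (semaphore.tryAcquire()) { // non-blocking attempt
                              log.info("{}", threadNum);
                              Thread.sleep(1000);
                              semaphore.release();
                          }
                      } catch (InterruptedException e) {
                          log.error("exception", e);
                      }
                  });
              }
              exec.shutdown();
          }
      }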

6.4 CyclicBarrier

  • Illustrated:

    It can be used for multi-threaded computation: each thread processes part of the work at the same time, and when all threads have finished, the results are combined.

  • CyclicBarrier vs. CountDownLatch

    1. A CountDownLatch counter can be used only once, while a CyclicBarrier can be reset with its reset() method and reused cyclically.
    2. CountDownLatch is for one or more threads waiting for other threads to finish an operation before continuing: it describes a relationship of one or more threads waiting on others. CyclicBarrier is a barrier at which multiple threads wait for one another until all of them arrive: it describes threads waiting on each other, and can handle more complex scenarios.
  • Code example:

    In the code, new CyclicBarrier(5) defines a barrier for five threads. A thread that executes barrier.await() waits; when the number of waiting threads reaches five, all of them continue. A timeout can also be set for the release, e.g. barrier.await(2000, TimeUnit.MILLISECONDS). CyclicBarrier's await() can throw BrokenBarrierException, TimeoutException and the like, which need handling.

  • A CyclicBarrier can also be initialized with a barrier action, a Runnable that executes each time the barrier trips, as sketched below:
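
    A minimal sketch combining await() and a barrier action:

      import lombok.extern.slf4j.Slf4j;
      import java.util.concurrent.CyclicBarrier;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      @Slf4j
      public class CyclicBarrierExample {

          // the Runnable runs once per generation, when the barrier trips
          private static final CyclicBarrier barrier =
                  new CyclicBarrier(5, () -> log.info("callback is running"));

          public static void main(String[] args) throws Exception {
              ExecutorService executor = Executors.newCachedThreadPool();
              for (int i = 0; i < 10; i++) {
                  final int threadNum = i;
                  Thread.sleep(1000);
                  executor.execute(() -> {
                      try {
                          log.info("{} is ready", threadNum);
                          barrier.await(); // wait until 5 threads have arrived
                          log.info("{} continue", threadNum);
                      } catch (Exception e) {
                          log.error("exception", e);
                      }
                  });
              }
              executor.shutdown();
          }
      }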

6.5 ReentrantLock

  • Java has two main kinds of locks: those based on the synchronized keyword, and those provided in J.U.C.

  • Differences between ReentrantLock and synchronized

    1. Reentrancy: ReentrantLock literally means re-entrant lock. synchronized is also reentrant, and the two differ little here: each entry increments a lock counter by 1, and the lock is released when the counter drops back to 0.
    2. Implementation: synchronized is implemented by the JVM, ReentrantLock by the JDK. It is like the difference between an operating-system-level mechanism and one controlled in code; the latter is easier to manipulate.
    3. Performance: after synchronized borrowed ideas from ReentrantLock's CAS technique (trying to take the lock in user mode, avoiding blocking into kernel mode) and introduced biased locking and lightweight (spin) locking, the two perform almost identically. Where both are usable, synchronized is officially recommended because it is simpler to write.
    4. Convenience and flexibility: synchronized is more convenient, locking and releasing automatically; ReentrantLock is more flexible, but locks must be acquired and released by hand.
  • Features unique to ReentrantLock

    1. A fair or an unfair lock can be specified
    2. It provides the Condition class, which can wake threads up in groups, whereas synchronized can only wake one random thread or all threads
    3. It provides a mechanism to interrupt a thread waiting for the lock: lock.lockInterruptibly()

    ReentrantLock is essentially implemented as a spin lock, acquiring the lock by looping on CAS operations; it performs well because it avoids blocking threads into the kernel state. Use ReentrantLock from J.U.C when you actually need one of its three unique features; it is a tool for advanced users. Junior developers are better off with synchronized, minimizing errors and troubleshooting cost.

  • ReentrantReadWriteLock:

    • Example:
      class RWDictionary {

          private final Map<String, Data> m = new TreeMap<String, Data>();
          private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
          private final Lock r = rwl.readLock();
          private final Lock w = rwl.writeLock();

          public Data get(String key) {
              r.lock();
              try { return m.get(key); }
              finally { r.unlock(); }
          }

          public String[] allKeys() {
              r.lock();
              try { return m.keySet().toArray(new String[0]); }
              finally { r.unlock(); }
          }

          public Data put(String key, Data value) {
              w.lock();
              try { return m.put(key, value); }
              finally { w.unlock(); }
          }

          public void clear() {
              w.lock();
              try { m.clear(); }
              finally { w.unlock(); }
          }
      }
    • Introduction:
      1. To address thread-safety issues in concurrent scenarios we almost always use exclusive locks, typically the Java keyword synchronized or ReentrantLock, which implements the Lock interface in the concurrent package.
      2. Both are exclusive locks: only one thread may hold the lock at a time.
      3. In some business scenarios, reads vastly outnumber writes. If data is only being read, correctness is not affected; it is writes mixed with unguarded reads that risk dirty reads. Still using an exclusive lock in such a read-mostly scenario is an obvious performance bottleneck.
      4. Java therefore provides another Lock implementation: ReentrantReadWriteLock (a read-write lock).
      5. A read-write lock allows multiple reader threads to access at the same time, but while a writer thread is accessing, all reader threads and other writer threads are blocked. The mutual exclusion between WriteLock and ReadLock can be analyzed in three pairs: write vs. write, write vs. read, and read vs. read.
    • Features:
      1. Fairness choice: supports unfair (the default) and fair acquisition; the throughput of the unfair mode is better;
      2. Reentrancy: both locks are reentrant; a reader may re-acquire the read lock, and a writer may re-acquire the write lock as well as acquire the read lock;
      3. Lock downgrading: a write lock can be downgraded to a read lock by following the sequence: acquire the write lock, acquire the read lock, release the write lock

VII. J.U.C component extensions

7.1 FutureTask (inside J.U.C)

  • There are usually two ways to create a thread: inherit Thread directly, or implement the Runnable interface. Both share a flaw: the result of the task cannot be obtained after it executes. Since Java 1.5, Callable, Future and friends make it possible to obtain a task's execution result.

  • Callable vs. Runnable:

    1. Callable is a generic interface with a call() method; Runnable is an interface with only one method, run(). (Callable returns a value when executed and may throw a checked exception.)
    2. The Future interface: it can monitor how the target thread executes the task; calling Future's get() method obtains the result. At that point the executing thread may not have finished, and the current thread blocks until call() ends and returns the result. In short, it can fetch the return value of another task's execution.
    3. The FutureTask class: it implements RunnableFuture, which extends both Runnable and Future, so FutureTask can execute a Callable task, be run directly as a thread target, and yield a return value.

      When is FutureTask used? In scenarios where a value is time-consuming to compute and not needed immediately: one thread computes it, while the waiting thread does other work in the meantime and fetches the result later, maximizing performance.

  • Example code for the Future interface:

  • Example code for the FutureTask class (a minimal sketch follows):
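
    A minimal sketch of FutureTask wrapping a Callable:

      import lombok.extern.slf4j.Slf4j;
      import java.util.concurrent.Callable;
      import java.util.concurrent.FutureTask;

      @Slf4j
      public class FutureTaskExample {

          public static void main(String[] args) throws Exception {
              FutureTask<String> futureTask = new FutureTask<>(new Callable<String>() {
                  @Override
                  public String call() throws Exception {
                      log.info("do something in callable");
                      Thread.sleep(5000);
                      return "Done";
                  }
              });

              new Thread(futureTask).start();   // FutureTask is itself a Runnable
              log.info("do something in main"); // main keeps working meanwhile
              Thread.sleep(1000);
              String result = futureTask.get(); // blocks until call() returns
              log.info("result: {}", result);
          }
      }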

7.2 the Fork/Join framework

  • Fork/Join is a framework, provided since Java 7, for executing tasks in parallel. It follows divide and conquer: a big task is split into several small tasks that execute independently, and the results are finally merged. Fork splits the task; Join merges the results. It relies on the work-stealing algorithm, in which an idle thread steals tasks from another thread's queue: the stolen task is taken from one end of that double-ended queue, while the queue's owner keeps taking its own tasks from the other end, which reduces contention and maximizes thread utilization.

    • Illustrated:
  • Limitations:

    1. Only fork/join itself can be used as the synchronization mechanism. With other synchronization mechanisms, worker threads block and cannot steal tasks from other threads. (For example, if a task sleeps inside Fork/Join, its worker thread does nothing else during the sleep.)
    2. Tasks should not perform I/O operations.
    3. Checked exceptions cannot be thrown across tasks; they must be handled inside the task.
  • Code example (a minimal sketch follows):
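
    A minimal sketch: summing 1..100 by divide and conquer with RecursiveTask:

      import lombok.extern.slf4j.Slf4j;
      import java.util.concurrent.ForkJoinPool;
      import java.util.concurrent.ForkJoinTask;
      import java.util.concurrent.RecursiveTask;

      @Slf4j
      public class ForkJoinTaskExample extends RecursiveTask<Integer> {

          private static final int THRESHOLD = 2; // stop splitting below this size

          private final int start;
          private final int end;

          public ForkJoinTaskExample(int start, int end) {
              this.start = start;
              this.end = end;
          }

          @Override
          protected Integer compute() {
              if (end - start <= THRESHOLD) {
                  int sum = 0;
                  for (int i = start; i <= end; i++) {
                      sum += i;
                  }
                  return sum;
              }
              // fork: split into two halves and schedule them
              int middle = (start + end) / 2;
              ForkJoinTaskExample left = new ForkJoinTaskExample(start, middle);
              ForkJoinTaskExample right = new ForkJoinTaskExample(middle + 1, end);
              left.fork();
              right.fork();
              // join: merge the partial results
              return left.join() + right.join();
          }

          public static void main(String[] args) throws Exception {
              ForkJoinPool pool = new ForkJoinPool();
              ForkJoinTask<Integer> result = pool.submit(new ForkJoinTaskExample(1, 100));
              log.info("result: {}", result.get()); // 5050
          }
      }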

7.3 BlockingQueue – blocking queue

  • Illustrated:

    The operations available on the queue are shown in the diagram.

  • Introduction:
    1. It is a blocking queue: when the queue is full, insertions block; when the queue is empty, removals block.
    2. It is thread-safe and is used primarily in producer/consumer scenarios: in the diagram, thread 1 keeps inserting and thread 2 keeps taking.
  • Some implementation classes of BlockingQueue:
    1. ArrayBlockingQueue: a bounded blocking queue (the size must be given at construction and cannot change afterwards). It stores data first-in, first-out (FIFO): the most recently inserted object is at the tail, the one to be removed next is at the head.
    2. DelayQueue: blocks on its internal elements, which must implement the Delayed interface in J.U.C, because the elements need to be ordered, typically by their expiry priority. Application scenarios: timed closing of connections, object caching, timeout handling, etc. Internally it is implemented with a lock and a priority-ordered structure.
    3. LinkedBlockingQueue: its capacity is optional: bounded if a size is given at construction, otherwise unbounded. Internally it is a linked list storing data FIFO, much like ArrayBlockingQueue.
    4. PriorityBlockingQueue: an unbounded blocking queue ordered by priority; elements must be comparable.
    5. SynchronousQueue: a queue that holds no elements; each insert must wait for a corresponding remove by another thread, and vice versa.
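
  A minimal producer/consumer sketch with ArrayBlockingQueue:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BlockingQueueExample {

        public static void main(String[] args) {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10); // bounded

            new Thread(() -> {
                try {
                    for (int i = 0; i < 100; i++) {
                        queue.put(i); // blocks while the queue is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "producer").start();

            new Thread(() -> {
                try {
                    while (true) {
                        Integer value = queue.take(); // blocks while empty
                        System.out.println("consumed: " + value);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "consumer").start();
        }
    }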

VIII. Thread scheduling – thread pools

8.1 Thread pools

  • Drawbacks of new Thread:

    1. Every new Thread creates a fresh object, which performs poorly
    2. Threads lack unified management; unbounded creation lets threads compete with each other and may exhaust system resources, causing crashes or OOM
    3. It lacks features such as scheduled execution, periodic execution, and thread interruption
  • Benefits of thread pools:

    1. Reuse existing threads, reducing the overhead of object creation and destruction and improving performance
    2. Effectively control the maximum number of concurrent threads, improving system resource utilization and avoiding excessive contention and blocking
    3. Provide scheduled execution, periodic execution, single-thread execution, concurrency control and other features
  • Thread pool – ThreadPoolExecutor parameters:

    1. corePoolSize: the number of core threads
    2. maximumPoolSize: the maximum number of threads
    3. workQueue: the blocking queue that stores tasks waiting to execute; important, as it strongly influences the pool's behavior
    4. keepAliveTime: how long an idle non-core thread is kept alive before terminating
    5. unit: the time unit of keepAliveTime
    6. threadFactory: the factory used to create threads
    7. rejectedExecutionHandler: the policy for rejecting tasks
  • Once the pool is initialized, tasks are dropped in and wait to execute; it is simple and convenient to use, and building a pool only requires passing in the parameters it needs.

  • Thread pool status:

  • Common thread pool methods:

    1. execute(): submits a task to the pool for execution
    2. submit(): submits a task and returns a Future for its result (execute + Future)
    3. shutdown(): shuts the pool down after all submitted tasks complete
    4. shutdownNow(): shuts the pool down immediately, attempting to stop running tasks and returning the tasks that never started
    5. getTaskCount(): total number of tasks in the pool, executed and not yet executed
    6. getCompletedTaskCount(): number of completed tasks
    7. getPoolSize(): current number of threads in the pool
    8. getActiveCount(): number of threads currently executing tasks

    Methods 5–8 can be used to monitor task execution in the pool.

  • Thread pool class diagram:

  • Thread pool – the Executors framework:

    Executors makes it easy to create different types of thread pools.

  • Code example:

    Usage is basically the same across types: to try another type, just swap in a different Executors.newXXX factory method. Always shut the pool down when finished with it. In practice, pick the pool type whose characteristics fit the actual scenario. A sketch follows.
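
    A minimal sketch using a cached pool (swap in any other Executors.newXXX factory to compare):

      import lombok.extern.slf4j.Slf4j;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      @Slf4j
      public class ThreadPoolExample {

          public static void main(String[] args) {
              ExecutorService executorService = Executors.newCachedThreadPool();
              for (int i = 0; i < 10; i++) {
                  final int index = i;
                  executorService.execute(() -> log.info("task:{}", index));
              }
              executorService.shutdown(); // always shut the pool down when done
          }
      }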

  • Thread pool – sizing it properly

    • For CPU-intensive tasks, squeeze the CPU as much as possible; a reference value is NCPU + 1 threads
    • For IO-intensive tasks, a reference value is 2 × NCPU threads

IX. Multithreaded concurrency extensions

9.1 Deadlock

  • Deadlocks – Requirements:
    1. Mutually exclusive: A process is exclusive of occupied resources. A thread can occupy only one resource. If another thread requests this resource, other resources can be held only after the lock is released.
    2. Request and hold condition: It refers to the process that has held at least one resource, but makes a new request for the resource, and the resource has been occupied by other resources. In this case, the requesting process blocks, but does not release the resource it has held.
    3. Non-deprivation conditions: can not be deprived while using, can only be released after the completion of use.
    4. Loop waiting condition: when a deadlock occurs, there must be a loop chain of process resources. That is, P0 in process set {P0, P1, P2, ··· Pn} is waiting for a resource occupied by P1. P1 is waiting for resources occupied by P2. Pn is waiting for resources occupied by P0
  • Deadlock code illustration:

    Each thread holds a resource the other thread needs, so a deadlock occurs, as in the sketch below.
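
    A minimal deadlock sketch, assuming two hypothetical lock objects LOCK_A and LOCK_B; each thread takes one lock and then blocks forever waiting for the other:

      public class DeadlockDemo {
          private static final Object LOCK_A = new Object();
          private static final Object LOCK_B = new Object();

          public static void main(String[] args) {
              new Thread(() -> {
                  synchronized (LOCK_A) {
                      sleep(100); // give the other thread time to grab LOCK_B
                      synchronized (LOCK_B) { // never acquired: thread 2 holds LOCK_B
                          System.out.println("thread 1 done");
                      }
                  }
              }).start();
              new Thread(() -> {
                  synchronized (LOCK_B) {
                      sleep(100);
                      synchronized (LOCK_A) { // never acquired: thread 1 holds LOCK_A
                          System.out.println("thread 2 done");
                      }
                  }
              }).start();
          }

          private static void sleep(long ms) {
              try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
          }
      }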

9.2 Best practices for Multi-threaded concurrency

  1. Use local variables
    • Each thread gets its own copy on its own stack, so local variables are inherently thread-confined
  2. Use immutable classes
    • They reduce the amount of synchronization your code needs; String is a classic example.
  3. Minimize lock scope. Amdahl's law: S = 1 / ((1 - a) + a/n)
    • S (speedup), a (fraction of the computation that can run in parallel), n (number of parallel processing nodes)
    • If 5% of the code must run serially inside a lock (a = 0.95), the speedup is at most 1 / (1 - 0.95) = 20x no matter how many nodes you add, because the code inside the lock can only execute sequentially.
  4. Use thread pool executors instead of spawning threads directly with new Thread
    • Creating threads is expensive, and bare threads are difficult to manage.
  5. Prefer the synchronization utilities (CountDownLatch, CyclicBarrier, Semaphore, etc.) to wait() and notify()
    • These tools are higher-level and easier to use correctly than hand-written wait/notify protocols
  6. Use BlockingQueue to implement the producer-consumer pattern
    • It is the most appropriate tool for producer-consumer problems (see the sketch after this list)
  7. Use concurrent collections instead of synchronized collections with locks
    • They are more efficient and purpose-built for these scenarios.
  8. Use Semaphore to create bounded access
    • Block thread wait at minimal cost to control the number of threads accessing the resource.
  9. Prefer synchronized blocks to synchronized methods
    • A synchronized block locks only the specified object for the enclosed statements, rather than holding the lock for the entire method, which keeps the lock scope smaller.

  10. Avoid static variables
    • Static variables can cause many problems in a concurrent environment; prefer making them final, or otherwise handle synchronization around them explicitly.
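
  As referenced in item 6, a minimal producer-consumer sketch built on an ArrayBlockingQueue (capacity and counts are illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProducerConsumerDemo {
        public static void main(String[] args) {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10); // bounded buffer

            new Thread(() -> { // producer
                try {
                    for (int i = 0; i < 100; i++) {
                        queue.put(i); // blocks while the queue is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();

            new Thread(() -> { // consumer
                try {
                    while (true) {
                        Integer item = queue.take(); // blocks while the queue is empty
                        System.out.println("consumed " + item);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }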

9.3 Spring and Thread Safety

  • As an IoC and DI container, Spring manages many beans for us, but Spring itself does not guarantee that these objects are thread-safe; we have to write our own code to solve thread-safety problems. Spring does give each bean a scope property describing its scope and life cycle:
    • Singleton: This type of Bean creates a singleton object when it is first injected, which is reused until the end of the application. It is the default property of the Spring Bean.
    • Prototype: Creates an object on each injection
  • Stateless objects (e.g. VO/DTO/BO): thread-safe whether singleton or prototype; singletons additionally save the resource overhead of constantly creating objects.
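
  A minimal sketch of declaring the two scopes (StatelessService and StatefulService are hypothetical classes):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Scope;

    @Configuration
    public class ScopeConfig {
        @Bean // singleton is the default scope: one shared instance per container
        public StatelessService statelessService() {
            return new StatelessService();
        }

        @Bean
        @Scope("prototype") // a new instance is created on every injection/lookup
        public StatefulService statefulService() {
            return new StatefulService();
        }
    }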

9.4 HashMap and ConcurrentHashMap

  • Diagram:
  • (To be supplemented in a follow-up.)

9.5 Summary

X. High concurrency: capacity expansion ideas and means

10.1 Capacity expansion

  • Vertical scaling (scale up): improve the capability of individual system components

    For example, if memory is insufficient, add memory; if disk is insufficient, add disks.

  • Horizontal scaling (scale out): add more members to the system

    When a single server can no longer be scaled up, expand to a cluster; when three servers are not enough, add another. Note that scaling out increases the pressure on shared resources.

10.2 Capacity Expansion – Database

  • Scaling reads: memcache, Redis, CDN caches
  • Scaling writes: Cassandra, HBase

    These are ideas rather than prescriptions; choose the appropriate solution according to the actual scenario and its real needs.

XI. High concurrency: caching ideas

11.1 Application scenarios and description of cache

  • Diagram:

  • Cache features:

    1. Hit ratio: hits / (hits + misses)
    2. Maximum element (space)

      When the cache is full, how to cache effectively and how to clean up?

    3. Eviction policy: FIFO, LFU, LRU, expiration time, random, etc.

      Choosing an appropriate eviction policy can effectively improve the cache hit ratio. FIFO: first in, first out (the earliest entries are evicted first; suits data with high real-time requirements). LFU: least frequently used (evicts by hit count, keeping the most frequently hit entries). LRU: least recently used (prioritizes keeping hot data valid). Expiration time: the entries that have lived longest are cleaned first.
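
      As an illustration of LRU only (not how any particular cache middleware implements it), Java's LinkedHashMap can be turned into a minimal LRU cache:

        import java.util.LinkedHashMap;
        import java.util.Map;

        // accessOrder = true reorders entries on each get, and
        // removeEldestEntry evicts the least recently used entry when full
        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int capacity;

            public LruCache(int capacity) {
                super(16, 0.75f, true); // accessOrder = true
                this.capacity = capacity;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        }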

  • Factors affecting cache hit ratio:

    • Business scenarios and business requirements
      1. The lower the real-time requirement on the data, the more suitable it is for caching.
      2. Read-heavy, write-light data suits caching well.
    • Cache design (granularity and policy)
      1. The finer the cache granularity, the higher the hit ratio tends to be.
      2. Updating entries on expiry yields a higher hit ratio than outright deletion.
    • Cache capacity and infrastructure
      1. Most use LRU algorithm.
      2. Distributed caching is easier to scale
      3. The efficiency and stability of different cache middleware are different, so we need to choose the right one.

    The higher the concurrency, the greater the payoff from caching. If access is highly random and entries are rarely hit again after expiring, the payoff from caching is low.

  • Cache classification and application scenarios

    1. Local cache: implemented in code (member variables, local variables, static variables), Guava Cache (commonly used)
    2. Distributed cache: Memcache, Redis(widely used)

11.2 Frameworks or tools that support caching

  • Cache – Guava Cache

    • Diagram:

    • Guava Cache:
      • Guava Cache carries on the design ideas of ConcurrentHashMap, using fine-grained locks to stay thread-safe while meeting the demands of high-concurrency scenarios.
      • The cache is similar to a Map that stores a collection of key-value pairs.
      • The difference, however, is that it also deals with the logic of some algorithms, such as cache expiration, dynamic loading, etc., and requires some additional information to implement these operations.
      • Following object-oriented principles, it encapsulates the data together with the methods that operate on it. It automatically adds nodes to the cache; when the cached data exceeds the configured maximum, it evicts entries with an LRU algorithm; its expiration mechanism computes expiry from the time a node was last accessed or written; and it can collect statistics such as hit ratio, exception rate, and miss ratio.
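
      A minimal Guava Cache sketch showing size-based eviction, write-based expiry, dynamic loading, and statistics (loadFromDb is a hypothetical loader):

        import com.google.common.cache.CacheBuilder;
        import com.google.common.cache.CacheLoader;
        import com.google.common.cache.LoadingCache;
        import java.util.concurrent.TimeUnit;

        public class GuavaCacheDemo {
            public static void main(String[] args) throws Exception {
                LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                        .maximumSize(1000)                      // evict beyond this size
                        .expireAfterWrite(10, TimeUnit.MINUTES) // expiry by write time
                        .recordStats()                          // enable hit/miss statistics
                        .build(new CacheLoader<String, String>() {
                            @Override
                            public String load(String key) {
                                return loadFromDb(key); // dynamic loading on a miss
                            }
                        });

                System.out.println(cache.get("user:1"));
                System.out.println("hit rate: " + cache.stats().hitRate());
            }

            private static String loadFromDb(String key) { // hypothetical loader
                return "value-for-" + key;
            }
        }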
  • Cache – Memcache

    • Diagram:
    • Its distribution is implemented mainly on the client side: a consistent hashing algorithm maps each key to a server.
    • Memory is organized into slabs of fixed-size chunks; a chunk is where an item is stored, and the number of slab classes is finite.
  • Cache – redis:

    • Diagram:
    • Features:
      1. Support data persistence, RDB, AOF
      2. Supports multiple data types: String, Hash, List, Set, Sorted Set, etc., covering many scenarios such as uniqueness checks, real-time systems, and anti-spam systems
      3. Supports data backup
      4. Atomic operations
      5. Excellent performance and easy to expand

11.3 Cache Troubleshooting in A High-Concurrency Scenario

  • Commonly encountered problems:

    1. Cache consistency
    2. Cache concurrency issues
    3. Cache penetration problem
    4. Cache avalanche
  • Situations where cache consistency problems arise:

  • Cache concurrency issues:

  • Cache penetration issues:

    Cache penetration: large numbers of queries for keys that are not in the cache (often nonexistent data) bypass Redis and go straight to the database.

  • Cache avalanche:

    Cache penetration, cache jitter, concurrent cache misses, and periodic mass cache expiry can all send a flood of requests to the database, crushing it under the load and bringing the whole system down with it.
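
    One common mitigation for cache penetration is to briefly cache an empty marker for keys that miss the database; a sketch assuming a Jedis client and a hypothetical loadFromDb:

      import redis.clients.jedis.Jedis;

      public class PenetrationGuard {
          private static final String NULL_MARKER = "<null>"; // sentinel for "not in DB"

          public static String query(Jedis jedis, String key) {
              String cached = jedis.get(key);
              if (cached != null) {
                  return NULL_MARKER.equals(cached) ? null : cached;
              }
              String value = loadFromDb(key); // hypothetical database lookup
              if (value == null) {
                  // cache the miss briefly so repeated lookups for absent keys
                  // stop hammering the database
                  jedis.setex(key, 60, NULL_MARKER);
                  return null;
              }
              jedis.setex(key, 3600, value);
              return value;
          }

          private static String loadFromDb(String key) { return null; } // stub
      }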

XII. High concurrency: message queue ideas

12.1 Message Queues

  • Message flow:

    Details such as message rate control, asynchrony, decoupling, and retrying failures all need attention in order to preserve eventual consistency and reduce concurrency pressure.

  • Message queue features

    1. Business-agnostic: it only distributes messages
    2. FIFO: First delivery, first arrival
    3. Disaster recovery: Dynamically adding and deleting nodes and persisting messages
    4. Performance: The throughput is improved and the internal communication efficiency is improved
  • Why message queues?

    1. Factors such as the speed or stability of production and consumption are inconsistent
    2. Asynchronous, decoupled, peak clipping, broadcast, flow control, etc
  • Message queue benefits

    1. Business decoupling: One transaction only cares about the core process, while others only need to be notified and not directly concerned with the outcome.
    2. Eventual consistency: two systems end up in a consistent state (both succeed or both fail) within a certain time frame. In a transfer, for example, when one person pays, the other account must be credited; an inconsistency would be a disaster. On success, done; on failure, record it, retry, and escalate until the problem is resolved.
    3. Broadcasting: each new downstream service simply subscribes to the messages, with no need for the producer to adapt to it.
    4. Peak shifting and flow control

In summary, message queues are not a panacea. RPC-style remote calls are more suitable where strong transactional guarantees and low latency are required. Message queues suit work that matters to the downstream rather than to your core flow: pursuing eventual consistency, tolerating delay, sending notifications, and so on.

12.2 Common Message Queues

  • Queue – Kafka

    • Diagram:

      Besides its excellent performance, it also works well as a distributed system.

    • Introduction:
      1. Each message has a Topic
      2. Each Topic may have multiple partitions
      3. A Producer sends data to a Partition chosen by the configured partitioning algorithm. Kafka persists received messages to disk for a configured retention period. Consumers consume messages from the Partitions.
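
      A minimal producer sketch (broker address and topic name are assumptions):

        import java.util.Properties;
        import org.apache.kafka.clients.producer.KafkaProducer;
        import org.apache.kafka.clients.producer.ProducerRecord;

        public class KafkaSendDemo {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "localhost:9092"); // assumed broker
                props.put("key.serializer",
                        "org.apache.kafka.common.serialization.StringSerializer");
                props.put("value.serializer",
                        "org.apache.kafka.common.serialization.StringSerializer");

                try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                    // the key participates in partition selection: same key, same partition
                    producer.send(new ProducerRecord<>("demo-topic", "order-1", "hello kafka"));
                }
            }
        }
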
  • Queue – RabbitMQ

    • Diagram:
    • Introduction:
      1. Queue and Exchange are bound via a RoutingKey.
      2. RabbitMQ Server has its own management interface
      3. RabbitMQ is solid, mature, and has an active community.
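
      A minimal sketch of binding a Queue to an Exchange via a RoutingKey (host, names, and key are assumptions):

        import com.rabbitmq.client.Channel;
        import com.rabbitmq.client.Connection;
        import com.rabbitmq.client.ConnectionFactory;

        public class RabbitBindDemo {
            public static void main(String[] args) throws Exception {
                ConnectionFactory factory = new ConnectionFactory();
                factory.setHost("localhost"); // assumed broker address
                try (Connection conn = factory.newConnection();
                     Channel channel = conn.createChannel()) {
                    channel.exchangeDeclare("order-exchange", "direct");
                    channel.queueDeclare("order-queue", true, false, false, null);
                    // the RoutingKey ties the queue to the exchange
                    channel.queueBind("order-queue", "order-exchange", "order.created");
                    channel.basicPublish("order-exchange", "order.created", null,
                            "hello rabbit".getBytes());
                }
            }
        }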

XIII. High concurrency: application splitting ideas

13.1 Application Splitting

  • Why split:
    • Reduce the pressure on any single application
    • Splitting one application into several prevents an error in one link from crashing the whole system.
    • Once split into multiple applications, each can be optimized individually, at finer granularity, according to its own characteristics.
  • Split example:
    • Diagram:
    • The diagram shows a stock system being split into multiple applications.
  • Defects of splitting:
    1. Splitting carries risk and is a test of technical capability
    2. Server pressure
    3. Network overhead

13.2 Application Splitting – Rules

  • Principle:

    1. Business first
    2. Step by step (small steps reduce accumulated errors)
    3. Techniques: refactoring, layering
    4. Reliable testing
  • Think about:

    1. Communication between applications: RPC (Dubbo, etc.), message queues
    2. Inter-application database design: Each application has a separate database
    3. Avoid transaction operations across applications
  • Application split framework:

    1. Dubbo (servicification)

      • Technical architecture diagram:
    2. Spring Cloud(Microservices)

      • Diagram:
      • Features:
        1. A system of independent services
        2. Each service runs in its own process, and each service is developed for a separate business. They are distributed managed with a strong emphasis on isolation
        3. Run as living products, not one-off projects
        4. Strong individual services, lightweight communication between them
        5. Automate the operation and maintenance of DevOps
        6. High fault tolerance
        7. Capable of rapid evolution and iteration

        Microservices are generally customer-facing.

XIV. High concurrency: application rate limiting ideas

14.1 Application rate-limiting algorithms

  • There are four main algorithms:

    1. Counter
    2. Sliding window
    3. Leaky bucket algorithm
    4. Token bucket algorithm
  • Counter:

    • Example: limit the number of visits to 100 per minute.
    • The counter resets once a minute; within each minute, at most 100 requests pass.
    • But there is a critical-boundary problem: a burst of more than 100 requests straddling the reset instant can slip through the limiter.
    • Code examples:
      public class CounterTest {
          public long timeStamp = getNowTime();
          public int reqCount = 0;
          public final int limit = 100; // maximum number of requests allowed per window
          public final long interval = 1000; // time window in ms

          public boolean grant() {
              long now = getNowTime();
              if (now < timeStamp + interval) {
                  // still inside the current time window
                  reqCount++;
                  // check whether this window's request limit is exceeded
                  return reqCount <= limit;
              } else {
                  // window has expired: reset the timestamp and the counter
                  timeStamp = now;
                  reqCount = 1;
                  return true;
              }
          }

          public long getNowTime() {
              return System.currentTimeMillis();
          }
      }
  • The sliding window

    • The sliding window is also called the rolling window. To solve the critical-boundary problem above, we introduce the sliding window algorithm. If you have studied the TCP protocol, the term sliding window will be familiar. The diagram below illustrates the algorithm well:
    • In the figure, the whole red rectangle represents one time window; in our example, one time window is one minute. We then subdivide the window: in the picture the sliding window is split into 6 cells, so each cell represents 10 seconds. Every 10 seconds the window slides one cell to the right. Each cell has its own counter; for example, when a request arrives at 0:35, the counter for 0:30 to 0:39 is incremented by one.
    • How does the sliding window solve the critical-boundary problem? In the figure, 100 requests arriving at 0:59 land in the grey cell, and requests arriving at 1:00 land in the orange cell. When the clock reaches 1:00, the window moves one cell to the right, so the total number of requests inside the window is 200, exceeding the limit of 100, and the limiter detects it.
    • Looking back at the counter algorithm, we can see it is really a sliding window with only one cell: it does not subdivide the time window any further.
    • The more cells the window is divided into, the smoother its rolling becomes and the more accurate the rate-limiting statistics are.

    It is an upgraded version of the counter algorithm: more accurate, but it requires more storage space.
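
    A minimal sliding-window counter sketch, with illustrative values (a 1-second window split into 10 buckets, limit 100):

      public class SlidingWindow {
          private final int limit = 100;          // max requests per window
          private final int buckets = 10;         // cells in the window
          private final long windowMillis = 1000; // window length
          private final long bucketMillis = windowMillis / buckets;
          private final int[] counts = new int[buckets];
          private final long[] bucketStart = new long[buckets];

          public synchronized boolean grant() {
              long now = System.currentTimeMillis();
              int idx = (int) ((now / bucketMillis) % buckets);
              long start = now - (now % bucketMillis);
              if (bucketStart[idx] != start) { // this cell rolled over: reset it
                  bucketStart[idx] = start;
                  counts[idx] = 0;
              }
              int total = 0;
              for (int i = 0; i < buckets; i++) {
                  // only count cells that still fall inside the window
                  if (now - bucketStart[i] < windowMillis) {
                      total += counts[i];
                  }
              }
              if (total >= limit) {
                  return false;
              }
              counts[idx]++;
              return true;
          }
      }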

  • Leaky Bucket algorithm

    • The leaky bucket algorithm is simple: think of a leaking bucket. Water flows out of the bucket at a fixed rate, no matter how fast water pours in; once the inflow exceeds the bucket's capacity, the overflow is discarded. Because the bucket's capacity and outflow rate are constant, the overall processing rate is guaranteed.
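
    A minimal leaky-bucket sketch with an illustrative capacity and outflow rate:

      public class LeakyBucket {
          private final long capacity = 100;        // bucket size
          private final double leakPerMillis = 0.1; // fixed outflow rate
          private double water = 0;
          private long lastLeakTime = System.currentTimeMillis();

          public synchronized boolean grant() {
              long now = System.currentTimeMillis();
              // drain the water that has leaked out since the last request
              water = Math.max(0, water - (now - lastLeakTime) * leakPerMillis);
              lastLeakTime = now;
              if (water + 1 > capacity) {
                  return false; // bucket full: this request overflows and is discarded
              }
              water += 1;
              return true;
          }
      }
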
  • Token bucket algorithm:

    • All requests require an available token before they can be processed;
    • Add the token to the bucket at a certain rate according to the flow limiting size.
    • The bucket sets a maximum placement token limit, and when the bucket is full, newly added tokens are discarded or rejected.
    • After a request arrives, it must first obtain a token from the bucket; only with a token can it proceed to the business logic. Once the logic has been processed, the token is deleted.
    • The token bucket can also keep a minimum level: when the tokens in the bucket fall to that minimum, tokens are no longer deleted after requests complete, so that throttling capacity is preserved.

    It handles the critical-boundary problem well; compared with the leaky bucket algorithm, the token bucket also allows short bursts and is generally the more flexible choice.
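
    Guava's RateLimiter is a ready-made token-bucket-style limiter; a minimal sketch with an illustrative rate:

      import com.google.common.util.concurrent.RateLimiter;

      public class TokenBucketDemo {
          public static void main(String[] args) {
              // tokens (permits) are added at a fixed rate: 100 per second here
              RateLimiter limiter = RateLimiter.create(100.0);

              for (int i = 0; i < 10; i++) {
                  if (limiter.tryAcquire()) { // take a token without blocking
                      System.out.println("request " + i + " allowed");
                  } else {
                      System.out.println("request " + i + " rejected");
                  }
              }
              limiter.acquire(); // or block until a token is available
          }
      }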

  • Choose an appropriate algorithm according to the actual scenario.

XV. Service degradation and service circuit breaking

  • Service degradation

    • Introduction: When the server pressure increases dramatically, according to the actual business situation and traffic, some services and pages are strategically not processed or processed in a simple way, so as to release server resources to ensure the normal or efficient operation of core transactions.
    • Usage scenarios: What are the main scenarios for service degradation? When the overall load of the microservice architecture exceeds the preset upper threshold or the incoming traffic is expected to exceed the preset threshold, we can delay or suspend the use of some non-important or non-urgent services or tasks to ensure the normal operation of important or basic services.
  • Service fuse:

    • Like a fuse, it’s an overload protection.
    • In an Internet system, when downstream services respond slowly or fail due to excessive access pressure, upstream services can temporarily cut off calls to downstream services to protect the overall availability of the system.
  • Service degradation classification:

    • Automatic degradation: timeout, number of failures, faults, and traffic limiting

      Degradation typically shows up as queueing prompts, error pages, error messages, and the like.

    • Manual degradation: flash sales (seckill), Double Eleven promotions
  • Realization of service circuit breaker:

    • Spring Cloud Hystrix is based on Netflix's open-source Hystrix framework; it implements a series of service-protection features such as circuit breaking and thread isolation.
    • For the circuit-breaker mechanism, Hystrix designs three states:
      1. Closed: while the service is healthy, the breaker stays closed and places no restriction on the caller's requests.
      2. Open: if, within a fixed window (Hystrix default: 10 seconds), the error rate of calls to the interface reaches a threshold (Hystrix default: 50%), the breaker opens. Once the service interface is in the open state, subsequent calls skip the network entirely and execute the local fallback method directly.
      3. Half-Open: after the breaker has been open for a while (Hystrix default: 5 seconds), it enters the half-open state, tentatively allowing limited traffic through to the service while monitoring the success rate. If the success rate meets expectations, the service has recovered and the breaker closes; if it is still low, the breaker opens again.
    • The transitions among the three states are shown in the diagram below:
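
    A minimal Hystrix command sketch: run() wraps the protected remote call and getFallback() executes locally when the circuit is open or the call fails (callUserService is hypothetical):

      import com.netflix.hystrix.HystrixCommand;
      import com.netflix.hystrix.HystrixCommandGroupKey;

      public class UserCommand extends HystrixCommand<String> {
          private final String userId;

          public UserCommand(String userId) {
              super(HystrixCommandGroupKey.Factory.asKey("UserGroup"));
              this.userId = userId;
          }

          @Override
          protected String run() {
              // the protected call; failures and timeouts feed the breaker's error rate
              return callUserService(userId);
          }

          @Override
          protected String getFallback() {
              // runs locally, without a network call, when the circuit is open
              return "default-user";
          }

          private String callUserService(String id) { // hypothetical remote call
              throw new RuntimeException("service unavailable");
          }

          public static void main(String[] args) {
              System.out.println(new UserCommand("42").execute()); // prints the fallback
          }
      }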

XVII. Database switching, sub-database and sub-table, and high availability means

17.1 Switching databases, splitting databases, and splitting tables

  • Database bottleneck:
    1. Too much data in a single database (1 TB to 2 TB): split into multiple databases
    2. A single database server under too much pressure, hitting read/write bottlenecks: split into multiple databases
    3. Too much data in a single table: split the table
  • Database switching:
    • The basic, practical application of database switching is read-write separation: the master library handles storage, updates, and real-time queries, while non-real-time queries can go to the slave libraries.
  • When to consider splitting tables: once a table holds tens of millions of rows and the gains from indexing and SQL optimization are no longer obvious, table splitting becomes imminent.
  • Table classification:
    1. Horizontal table splitting and vertical table splitting
    2. Shardbatis2.0 plugin for mybatis
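
    A generic horizontal-split routing sketch, not specific to Shardbatis (shard count and table names are illustrative):

      public class TableRouter {
          private static final int TABLE_COUNT = 4; // illustrative shard count

          // route each user's rows to one of N physical tables by modulo
          public static String routeOrderTable(long userId) {
              return "t_order_" + (userId % TABLE_COUNT); // t_order_0 .. t_order_3
          }

          public static void main(String[] args) {
              System.out.println(routeOrderTable(10086L)); // prints t_order_2
          }
      }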

XVIII. High availability means for high concurrency

18.1 Some means of high availability

  • Distributed task scheduling: Elastic-Job + ZooKeeper
  • Active/standby switchover: Apache Curator + ZooKeeper distributed locks (see the sketch after this list)
  • Monitoring alarm mechanism
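
  For the active/standby switchover above, a minimal Curator distributed-lock sketch (connection string and lock path are assumptions):

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class CuratorLockDemo {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "localhost:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();

            InterProcessMutex lock = new InterProcessMutex(client, "/locks/master");
            lock.acquire(); // blocks until this process holds the ZooKeeper lock
            try {
                System.out.println("this node is now the active/master instance");
            } finally {
                lock.release();
                client.close();
            }
        }
    }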

XIX. Course summary

  • This post has two main sections, one on concurrent programming and the other on high-concurrency solutions.
  • It mainly talks about the following parts:
    1. Basic knowledge explanation and core knowledge preparation:
    2. Concurrency and concurrent thread-safe handling:
    3. Ideas and means of high concurrency processing:

    High concurrency itself is not difficult; the key is to apply the right solution and the right processing steps to each high-concurrency scenario.

If this post helped you, a follow and a like would be much appreciated; your support is my biggest encouragement!