1 Java Basics
1.1 The JVM
1.1.1 JVM Memory model
1. What is the memory layout of the JVM? Which areas are thread-private and which are shared?
Thread-private:
1. Java VM stack: has the same life cycle as its thread. Stack frame structure: local variable table; operand stack; dynamic link (a reference to the runtime constant pool); return address. Exceptions: StackOverflowError when a thread requests a stack depth greater than the JVM allows; OutOfMemoryError when the stack supports dynamic expansion but sufficient memory cannot be allocated.
2. Native method stack: same exceptions as the VM stack: StackOverflowError when the requested stack depth exceeds what the JVM allows; OutOfMemoryError when dynamic expansion cannot allocate enough memory.
Thread-shared:
3. Java heap: young generation (1/3 of the heap): Eden (8/10), from-survivor (1/10), to-survivor (1/10); old generation (2/3); permanent generation (gone after 1.8). Constant pools: the runtime constant pool and the string constant pool were moved out of the permanent generation after 1.7. Exception: OutOfMemoryError.
What is the difference between the Java heap and the stack?
- Each thread has its own stack memory, while all threads share the heap memory;
- The stack stores local variables, method parameters, and stack calls; the heap stores object data.
What are the two ways of locating and accessing an object?
- Handle: the Java heap allocates a chunk of memory as a handle pool. The reference stores the object's handle address, and the handle contains the concrete addresses of both the instance data and the type data of the object.
- Direct pointer: the reference stores the object's address directly, so the layout of the heap object must include a way to reach the class data.
2. Java memory model
- The Java memory model specifies that variables are stored in main memory (analogous to physical memory) and that each thread has its own working memory (analogous to a CPU cache). A thread must operate on variables in its own working memory, not directly on main memory, and each thread's working memory is isolated from the others.
- The volatile keyword relates to the memory model's atomicity, visibility, and ordering guarantees, and to the happens-before relationship.
1.1.2 JVM garbage collection
1. How does the JVM determine that an object is dead?
1. Reference counting: count the references to each object; an object whose count drops to zero is dead (it cannot handle circular references, so the JVM does not use it).
2. Reachability analysis (root search algorithm): search downward from the GC Roots; an object not connected to any GC Root by a reference chain is reclaimable (it is marked at least twice before actually being reclaimed).
- GC Roots objects: references in VM stacks, static references in the method area, and JNI (Java native method) references.
- The four reference types in Java: strong reference (reachable, never reclaimed), soft reference (reclaimed only when memory is insufficient), weak reference (reclaimed at the next GC regardless of whether memory is sufficient), phantom reference (cannot be used alone; must be combined with a reference queue to track the object's collection; the "weakest" of all).
- Tri-color marking:
- White: objects not yet visited by the GC; anything still white when marking completes is unreachable, i.e. a garbage object.
- Gray: the object has been visited, but its child references have not all been visited; an intermediate state that turns black once all its children have been visited.
- Black: the object and all of its child references have been visited by the GC.
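The difference between the reference strengths above can be sketched in a few lines; a minimal example with WeakReference (the behavior after System.gc() is what HotSpot typically does, but the spec does not guarantee when weak references are cleared):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        System.out.println(weak.get() != null); // true: the strong reference keeps it alive
        strong = null;    // drop the strong reference
        System.gc();      // request a collection; weak refs are usually cleared here
        // weak.get() is typically null now, since nothing strong reaches the object
        System.out.println(weak.get());
    }
}
```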
2. JVM garbage collection algorithms
1. Copying: divide memory into two equal halves; when one half is used up, copy the surviving objects to the other half, then clear the first half in one pass. The drawback is that only half of the memory space is usable at a time.
2. Mark-sweep: implemented in two phases. The first phase marks all referenced objects starting from the root nodes; the second traverses the whole heap and sweeps the unmarked objects. This algorithm requires pausing the entire application and produces memory fragmentation.
3. Mark-compact: combines the advantages of mark-sweep and copying. The first phase marks all referenced objects from the root nodes; the second walks the heap, clears unmarked objects, and "compacts" the surviving objects into one end of the heap, in order. This avoids both the fragmentation of mark-sweep and the space cost of copying. (Conceptually the approach of the G1 collector.)
3. JVM garbage collectors
- Young-generation collectors include Serial, ParNew, and Parallel Scavenge; old-generation collectors include Serial Old, Parallel Old, and CMS.
- CMS process: initial mark -> concurrent mark -> remark -> concurrent sweep (a mark-sweep collector). Advantages: uses multiple threads to mark and sweep garbage concurrently. Disadvantages: sensitive to CPU resources, cannot clear floating garbage, and its collections produce a lot of space fragmentation.
- G1 (the default GC option since JDK 9). Advantages:
- A GC implementation balancing throughput and pause time, with low pauses and multi-threading; G1 lets you set a pause-time goal directly. Compared with CMS, G1 may not achieve lower pauses in the best case, but it is much better in the worst case.
- Region-based division with prioritized region reclamation, like squares on a chessboard. Collection copies objects between regions, but as a whole it can be viewed as mark-compact, so there is no memory fragmentation. When the Java heap is very large, G1's advantage is more obvious.
4. Common JVM parameters
-Xms: initial heap size
-Xmx: maximum heap size
-Xmn: young generation size; Sun officially recommends 3/8 of the whole heap
-XX:+PrintGCDetails: print GC details
-XX:SurvivorRatio=8: sets the size ratio of Eden to the Survivor spaces in the young generation, i.e. Eden:S0:S1 = 8:1:1
-XX:PretenureSizeThreshold: objects larger than this threshold are allocated directly in the old generation
-XX:MaxTenuringThreshold: the age at which objects are promoted to the old generation
-XX:-HandlePromotionFailure: disables the promotion-failure guarantee
-XX:+HeapDumpOnOutOfMemoryError: dump the heap on an OutOfMemoryError
-XX:HeapDumpPath=/tmp/dump: output path of the dump file
-XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=512m: initial and maximum Metaspace size
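The heap-sizing flags above can be observed from inside the program; a minimal sketch (run it with e.g. `java -Xms256m -Xmx512m HeapInfo` and the reported values reflect the flags):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory() is the currently committed heap
        System.out.println("max heap (MB):   " + rt.maxMemory() / (1024 * 1024));
        System.out.println("total heap (MB): " + rt.totalMemory() / (1024 * 1024));
        System.out.println("free heap (MB):  " + rt.freeMemory() / (1024 * 1024));
    }
}
```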
5. MinorGC, MajorGC and FullGC
1. Concepts:
- MinorGC: triggered when a newly created object fails to get space in Eden; it is very frequent, and surviving objects move to a Survivor space.
- MajorGC: cleans up the old generation; many MajorGCs are triggered by MinorGCs.
- FullGC: reclaims all regions, including the young generation, old generation, and permanent generation (metaspace), so it is much slower than a MinorGC; reduce the number of FullGCs as much as possible.
2. When do MinorGC and FullGC occur?
- MinorGC: when Eden is full, a MinorGC copies the surviving objects into S0 and empties Eden. The next MinorGC copies the survivors of S0 and Eden into S1 and clears S0 and Eden; only Eden and one Survivor space are in use at a time. Each MinorGC increments a per-object age counter, and when it reaches 15 (the default; the object header uses 4 bits for the age, so the maximum is 15), the JVM stops copying the object and promotes it to the old generation.
- Note: large objects are those requiring a large amount of contiguous memory (e.g. strings, arrays) and are allocated directly in the old generation. Why? To avoid the efficiency loss of repeatedly copying large objects caused by the allocation-guarantee mechanism.
6. JVM memory health analysis tool
- jps: VM process status tool
- jstat: VM statistics monitoring tool
- jinfo: Java configuration information tool
- jmap: Java memory map tool (export dump files and analyze them in MAT)
- jstack: Java stack trace tool
- VisualVM: all-in-one troubleshooting tool
- MAT: dump file analysis. 1. Set the JVM parameters so that on OOM a dump file is generated at the specified path. 2. Analyze the dump's pie chart to find the largest consumers of memory.
- Arthas: Alibaba's JVM analysis tool (https://arthas.aliyun.com/doc/)
- Prometheus: monitoring tool
1.1.3 Java class loading mechanism
1. Java code execution
Java source file (.java) --> compiler (javac) --> bytecode file (.class) --> JVM --> machine code
2. Parental delegation model
1. The three loaders, from bottom to top: Application ClassLoader, Extension ClassLoader, Bootstrap ClassLoader.
2. Loading procedure:
a. findLoadedClass() is called to check whether the class is already in memory, starting from the AppClassLoader, which delegates to its parent;
b. if not found, the parent loader (ExtClassLoader) also checks memory;
c. if still not found, the request goes up to the Bootstrap loader;
d. if none of them has it loaded, Bootstrap tries to load the class from its own defined load path, then delegates downward layer by layer;
e. if Bootstrap cannot load it, ExtClassLoader tries its own defined path, then AppClassLoader; if none can load it, a ClassNotFoundException is thrown.
3. Why do this (parent delegation)?
- It guarantees that the same class, regardless of which loader requested it, ends up as the same Class object;
- It avoids loading multiple copies of the same bytecode, saving memory.
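The delegation chain described above can be walked directly via getParent(); a minimal sketch (the exact loader class names printed differ between Java 8 and 9+, where the Extension loader became the Platform loader):

```java
public class DelegationDemo {
    public static void main(String[] args) {
        // walk the delegation chain upward from the application class loader
        ClassLoader app = DelegationDemo.class.getClassLoader();
        System.out.println(app);             // the AppClassLoader
        System.out.println(app.getParent()); // ExtClassLoader (8) / PlatformClassLoader (9+)
        System.out.println(app.getParent().getParent()); // null = the bootstrap loader
        // core classes are loaded by bootstrap, which is reported as null
        System.out.println(String.class.getClassLoader()); // null
    }
}
```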
3. Escape analysis
- What is object escape? An object escapes when a reference to it becomes visible outside the method or thread that created it (e.g. it is returned, or stored in a field reachable from other threads). Objects and arrays that do not escape need not be allocated on the heap.
- -XX:+DoEscapeAnalysis enables escape analysis; -XX:-DoEscapeAnalysis disables it. Escape analysis is on by default starting with JDK 1.7.
- Optimizations enabled by escape analysis:
1. Synchronization elision (lock elimination): if an object is found to be accessible from only one thread, operations on it may be performed without regard to synchronization.
2. Scalar replacement: some objects may not need to exist as a contiguous memory structure, so part (or all) of the object can live in CPU registers instead of memory.
3. Stack allocation: if an object is allocated in a subroutine such that pointers to it never escape, the object is a candidate for stack allocation rather than heap allocation.
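The escaping vs non-escaping distinction can be shown with two small methods; a minimal sketch (whether the JIT actually applies scalar replacement is an internal decision and not observable from the code itself):

```java
public class EscapeDemo {
    static class Point {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // no escape: the Point never leaves this method, so it is a
    // candidate for scalar replacement / stack allocation
    static int lengthSquared(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    static Point global;

    // escapes: the reference is published via a static field,
    // so the object must live on the heap
    static void publish(int x, int y) {
        global = new Point(x, y);
    }

    public static void main(String[] args) {
        System.out.println(lengthSquared(3, 4)); // 25
    }
}
```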
1.2 Java Collection classes
1. ConcurrentHashMap in 1.7 and 1.8
1. Lock evolution: Segment locks (1.7) -> Node + synchronized blocks + CAS (1.8).
2. Bucket structure: a bucket's linked list turns into a red-black tree when its length exceeds 8 and the array size is at least 64; a tree reverts to a linked list when its size drops to 6.
3. Why a red-black tree instead of a plain binary tree: a binary search tree degenerates into a linked list in the worst case, while a red-black tree stays balanced and does not degenerate. Time complexity: linked list O(n), red-black tree O(log n).
4. CAS caveats:
- compareAndSet has the ABA problem (fix: tag the value with a version; since 1.5, AtomicStampedReference<E> wraps the value as a [reference, integer stamp] pair);
- spinning CAS consumes CPU;
- it only guarantees atomic operations on a single shared variable.
5. synchronized vs ReentrantLock at fine granularity:
- For fine-grained locking, synchronized is not inferior to ReentrantLock. With coarse-grained locking, ReentrantLock is more flexible because Condition can control various boundaries, but at fine granularity Condition's advantage is lost.
- The JVM development team never gave up on synchronized; JVM-level optimizations of synchronized have more room, and using a built-in keyword is more natural than using an API.
- The API-based ReentrantLock consumes more memory under heavy data operations, putting pressure on the JVM; this is not usually a bottleneck, but it is a consideration.
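The ABA fix mentioned above can be demonstrated in a few lines; a minimal sketch of AtomicStampedReference catching a value that changed A -> B -> A:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();
        // simulate another thread doing A -> B -> A, bumping the stamp each time
        ref.compareAndSet(100, 101, stamp, stamp + 1);
        ref.compareAndSet(101, 100, stamp + 1, stamp + 2);
        // a plain CAS on the value alone would succeed here; the stamp exposes the change
        boolean swapped = ref.compareAndSet(100, 200, stamp, stamp + 1);
        System.out.println(swapped); // false: the stamp has moved on
    }
}
```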
2. HashMap
1. Why is the load factor 0.75? The value is justified from the Poisson distribution: it balances space usage against the probability of collisions.
2. What is the difference between 1.7 and 1.8?
- 1.7 linked lists use head insertion, which can produce an infinite loop under multi-threaded resizing;
- 1.8 uses tail insertion, avoiding the infinite-loop problem.
3. The difference between fail-fast and fail-safe:
- Fail-fast: the iterator accesses the collection's contents directly during traversal and uses a modCount variable. If the contents change during traversal, the value of modCount changes. Whenever the iterator calls hasNext()/next(), it checks whether modCount equals the expectedModCount value and continues the traversal if so; otherwise it throws an exception and terminates the traversal.
- Fail-safe: the iterator walks a copy of the original collection, so modifications made to the original during iteration cannot be detected by the iterator and no ConcurrentModificationException is triggered.
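The fail-fast vs fail-safe contrast above can be shown side by side; a minimal sketch using ArrayList (fail-fast) and CopyOnWriteArrayList (fail-safe, iterating over a snapshot):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class FailFastDemo {
    public static void main(String[] args) {
        List<String> failFast = new ArrayList<>(List.of("a", "b", "c"));
        try {
            for (String s : failFast) {
                failFast.remove(s); // structural change: modCount != expectedModCount
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast"); // thrown on the next iterator step
        }

        List<String> failSafe = new CopyOnWriteArrayList<>(List.of("a", "b", "c"));
        for (String s : failSafe) {
            failSafe.remove(s); // the iterator walks a snapshot, no exception
        }
        System.out.println(failSafe.size()); // 0
    }
}
```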
1.3 Java concurrency
1. Multithreading and asynchronous threads
1. Three ways to implement multithreading: extend the Thread class; implement the Runnable interface; implement the Callable interface (used together with a thread pool).
- The difference between Runnable and Callable: Runnable's run() returns no result and cannot throw a checked exception; Callable's call() returns a result and can throw a checked exception.
2. The difference between execute() and submit():
- execute() submits tasks that need no return value, so you cannot tell whether the task ran successfully;
- submit() submits tasks that need a return value; the thread pool returns a Future object, which tells you whether the task succeeded, and Future.get() retrieves the return value.
3. Future (1.5) vs CompletableFuture (1.8).
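The Callable + thread pool combination described above fits in a few lines; a minimal sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // a Callable returns a result and may throw a checked exception;
        // submit() hands back a Future for it
        Future<Integer> future = pool.submit(() -> 21 * 2);
        System.out.println(future.get()); // 42, blocking until the task completes
        pool.shutdown();
    }
}
```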
2. What types of thread pool executors are there? What are the core parameters? What are the benefits?
- The four Executors factory pools:
1. newSingleThreadExecutor: a single thread;
2. newFixedThreadPool: a fixed number of threads (suitable for a fixed workload per unit of time);
3. newScheduledThreadPool: scheduled execution (suitable for periodic tasks);
4. newCachedThreadPool: cached thread creation (suitable for a large number of short tasks).
- FixedThreadPool and SingleThreadExecutor use a LinkedBlockingQueue with capacity Integer.MAX_VALUE as the task queue, so a large number of requests can pile up and cause a memory overflow;
- CachedThreadPool and ScheduledThreadPool allow up to Integer.MAX_VALUE threads, so creating a large number of threads can also cause a memory overflow.
- Core parameters: 1. corePoolSize 2. maximumPoolSize 3. keepAliveTime 4. unit 5. workQueue 6. threadFactory 7. handler (the rejection policy):
- AbortPolicy: discard the task and throw RejectedExecutionException;
- DiscardPolicy: discard the task without throwing an exception;
- DiscardOldestPolicy: discard the oldest task at the head of the queue and resubmit the rejected one;
- CallerRunsPolicy: run the task on the calling (submitting) thread.
- Benefits: 1. reusing created threads reduces the cost of thread creation and destruction; 2. better response time; 3. manageability of threads; 4. without a pool, unbounded thread creation may exhaust system memory.
- Typical thread pool composition: 1. the pool manager; 2. worker threads; 3. the task queue.
- ThreadLocal use cases: Spring's transaction manager, Hibernate's session manager, etc.
- Why ThreadLocal is thread-safe: ThreadLocal maintains a copy of the variable for each thread, limiting the visibility of shared data to a single thread; each thread has its own variable.
- ThreadLocal memory leak: the key in a ThreadLocalMap entry is a weak reference, so it will be reclaimed, while the value is a strong reference and will not be. The map then contains entries with a null key whose data can no longer be reached. As long as the thread does not terminate, the value of each null-key entry stays on a strong reference chain: Thread Ref -> Thread -> ThreadLocalMap -> Entry -> value, and can never be reclaimed, causing a memory leak.
- Solution: after using a ThreadLocal, call remove() to clear the stale data from the ThreadLocalMap.
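The remove() discipline can be sketched in a few lines; a minimal example of the set / get / remove life cycle:

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void main(String[] args) {
        CONTEXT.set("request-42");
        try {
            System.out.println(CONTEXT.get()); // request-42
        } finally {
            CONTEXT.remove(); // clear the entry to avoid the leak described above
        }
        System.out.println(CONTEXT.get()); // null after remove()
    }
}
```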
3. Optimistic and pessimistic locks
- Optimistic locking (CAS): assume no conflict and detect it at update time. Examples: AtomicLong, AtomicInteger, LockSupport.park(), ConcurrentHashMap.
- Pessimistic locking: assume conflict and lock up front. Examples: synchronized; in MySQL, select ... for update.
- If concurrency is low, pessimistic locks can be used to solve the concurrency problem. However, pessimistic locks have performance problems when the system is heavily concurrent, so optimistic locks should be used instead.
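The optimistic pattern above is the classic read-compute-compareAndSet retry loop; a minimal sketch with AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        // optimistic update: read, compute, compareAndSet, retry on conflict
        int oldVal;
        do {
            oldVal = counter.get();
        } while (!counter.compareAndSet(oldVal, oldVal + 1));
        System.out.println(counter.get()); // 1
    }
}
```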
4. What is a deadlock? How to solve the deadlock problem?
1. Concept: a deadlock occurs when two or more threads each hold a lock the other needs and wait for each other forever.
2. How to avoid deadlocks:
- the banker's algorithm (to be added);
- prefer tryLock(long timeout, TimeUnit unit) (ReentrantLock, ReentrantReadWriteLock) to set a timeout; timing out can prevent deadlocks;
- prefer the java.util.concurrent classes over hand-written locks;
- reduce lock granularity: try not to use the same lock for several functions;
- minimize synchronized code blocks.
3. The difference between deadlocks and livelocks:
- a deadlock blocks, while a livelock does not block but exhausts CPU resources;
- a deadlock cannot be untangled, while a livelock has a chance of resolving;
- a deadlock is a wait, whereas a livelock is an ever-changing state.
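The tryLock-with-timeout advice above can be sketched as follows: instead of blocking forever on a held lock (one ingredient of a deadlock), the caller gives up after a bounded wait (the sleep durations here are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws Exception {
        ReentrantLock lock = new ReentrantLock();
        Thread holder = new Thread(() -> {
            lock.lock();
            try {
                TimeUnit.MILLISECONDS.sleep(500); // hold the lock for a while
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        TimeUnit.MILLISECONDS.sleep(50); // let the holder grab the lock first
        // bounded wait instead of an indefinite block
        boolean acquired = lock.tryLock(100, TimeUnit.MILLISECONDS);
        System.out.println(acquired); // false: timed out while the other thread held it
        holder.join();
    }
}
```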
5, AQS (AbstractQueuedSynchronizer)
1. Concept: AQS encapsulates each thread requesting the shared resource into a node of a CLH lock queue to manage lock allocation, and works by modifying an int state variable.
2. Structure:
- private volatile int state (represents the shared resource);
- a FIFO thread wait queue (threads enter it when blocked while contending for the resource);
- a custom synchronizer overriding:
- isHeldExclusively(): whether the current thread holds the resource exclusively; only needed when conditions are used;
- tryAcquire(int): exclusive mode; returns true on success, false on failure;
- tryRelease(int): exclusive mode; returns true on success, false on failure;
- tryAcquireShared(int): shared mode; tries to acquire the resource; negative means failure, 0 means success with no resources remaining, positive means success with resources remaining;
- tryReleaseShared(int): shared mode; tries to release the resource; returns true if waiting nodes may be woken afterwards, false otherwise.
3. Design pattern: template method. AQS itself is abstract; users subclass it and override the specified methods. (Its main job, built on the CLH queue and the volatile-modified state field: a thread that successfully modifies the state proceeds, while a failed thread enters the queue and waits to be woken.)
- Built on top of it: spin locks, mutexes, read-write locks, ReentrantLock, Semaphore, CyclicBarrier, CountDownLatch.
4. Queue traversal goes from the tail to the head: a new node sets itself as the tail before its predecessor's next pointer is linked, so traversing from the tail avoids missing nodes.
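The template method pattern described above can be shown with the smallest possible synchronizer: a non-reentrant mutex that only overrides tryAcquire/tryRelease and lets AQS handle queuing and blocking (a minimal sketch, not a production lock):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    // template method pattern: subclass AQS, override only the try* hooks
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // state 0 = free, 1 = held; one CAS decides the winner
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }   // AQS queues losers for us
    public void unlock()     { sync.release(1); }   // AQS wakes the next waiter
    public boolean tryLock() { return sync.tryAcquire(1); }

    public static void main(String[] args) {
        Mutex m = new Mutex();
        m.lock();
        System.out.println(m.tryLock()); // false: already held
        m.unlock();
        System.out.println(m.tryLock()); // true
        m.unlock();
    }
}
```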
6. synchronized and ReentrantLock
1. Lock states in the object header's Mark Word (a 2-bit flag):
- 01: unlocked or biased (distinguished by the bias bit);
- 00: lightweight lock;
- 10: heavyweight lock;
- 11: GC mark, indicating the object is ready for GC.
2. Lock upgrading:
a. none -> biased lock: when the first thread (thread 1) enters the synchronized block, the lock defaults to biased: the Mark Word is updated with the current thread's ID, a bias timestamp is added, and the bias flag is set to 1. Other threads cannot enter the block until thread 1 exits it.
b. biased -> lightweight lock: if thread 1 has not exited the synchronized block and thread 2 contends for the lock, thread 2 spins adaptively with CAS; if it keeps failing (if thread 1 has already exited and the CAS succeeds, no upgrade happens), the lock is upgraded to a lightweight lock, and the upgrade costs performance. Upgrading requires revoking the bias: 1. stop the thread holding the lock at a safepoint; 2. walk its thread stack, and if there is a lock record, fix the lock record and Mark Word back to the unlocked state; 3. wake the thread and upgrade the lock to lightweight.
c. lightweight -> heavyweight lock: under intense contention, thread 1 first reserves space on its thread stack (a lock record) to store the Mark Word, makes the two point at each other, and starts executing the synchronized code. Many arriving threads also copy the Mark Word to their stacks and use CAS to make their lock record and the Mark Word point to each other; only one thread can succeed, and all the others keep CASing. If thread 1 has not finished the synchronized block, the other threads cannot spin successfully, so the lock inflates and is upgraded to a heavyweight lock. After the upgrade, blocking of threads is handled by the kernel, so performance is low and response is slow.
3. Pros and cons of each synchronized lock state:
- Biased lock: locking and unlocking need no extra cost, so performance is high; but revoking the bias under contention incurs extra loss.
- Lightweight lock: contending threads do not block; but a thread that keeps failing to acquire the lock spins constantly and consumes CPU.
- Heavyweight lock: contending threads need not spin and do not consume CPU; but threads block, and response is slow.
- Adaptive spinning: the spin count changes; if the last spin succeeded, the VM allows more spins next time, and fewer after a failure. The goal is to save processor resources. (ConcurrentHashMap moved from segmented locks in 1.7 to synchronized in 1.8.)
4. ReentrantLock methods: lock(); unlock(): releases the lock; tryLock(): tries to obtain the lock; getHoldCount(): how many times the current thread has acquired the lock; getQueueLength(): the number of threads queuing for the lock; isFair(): whether the lock is a fair lock (not fair by default).
5. synchronized vs ReentrantLock:
- synchronized is a built-in Java keyword at the JVM level; ReentrantLock is a Java class at the JRE level.
- ReentrantLock is flexible, but the lock must be acquired and released manually to avoid deadlocks; synchronized needs no manual release or acquisition.
- ReentrantLock only applies to code blocks, while synchronized can modify methods, code blocks, and so on.
- ReentrantLock can use tryLock() to determine whether the lock was successfully obtained; synchronized cannot.
- ReentrantLock performs slightly better than synchronized under fierce contention.
- ReentrantLock supports interruptible locking (lockInterruptibly()); synchronized is an uninterruptible lock.
- ReentrantLock can implement a fair lock; synchronized is not a fair lock.
7. The principle of volatile
1. What does volatile do? volatile is the lightest synchronization mechanism the Java virtual machine provides. A volatile variable has two properties:
- (Visibility) The variable is visible to all threads: when one thread changes its value, the new value is immediately known to other threads.
1. A write to a volatile variable is forced into main memory immediately;
2. when thread 2 modifies the variable, the cache line for it in thread 1's working memory is invalidated;
3. because that cache line is invalid, thread 1 re-reads the variable from main memory on its next access.
- (Ordering) It forbids instruction-reordering optimizations. An ordinary variable only guarantees correct results within the executing method, not the execution order of the program's statements.
- Memory barriers: instructions before a barrier must execute before it, and instructions after it must execute after it. Barriers also enforce memory visibility: a read barrier inserted before a load invalidates cached data and forces a load from main memory; a write barrier inserted after a store flushes the latest data in the cache back to main memory. Four barrier types: 1. LoadLoad (read | read) 2. StoreStore (write | write) 3. LoadStore (read | write) 4. StoreLoad (write | read).
- Note: double-checked locking (DCL) is one of the scenarios that uses volatile (to be expanded).
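The DCL scenario noted above can be sketched directly; volatile is what prevents the reordering of the three steps of `new` (allocate -> initialize -> assign the reference) from exposing a half-constructed object:

```java
public class Singleton {
    // volatile forbids reordering of: allocate -> initialize -> assign reference
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(Singleton.getInstance() == Singleton.getInstance()); // true
    }
}
```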
1.4 Design Patterns
1.4.1 Creational
- Simple factory pattern: one factory class decides which concrete product to create based on a parameter
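A minimal simple-factory sketch (the Coffee/Latte/Mocca names are illustrative, chosen to match the products used in the abstract factory example below):

```java
// one factory class creates products based on a parameter
interface Coffee {
    String name();
}

class Latte implements Coffee {
    public String name() { return "Latte"; }
}

class Mocca implements Coffee {
    public String name() { return "Mocca"; }
}

class SimpleCoffeeFactory {
    static Coffee create(String type) {
        switch (type) {
            case "Latte": return new Latte();
            case "Mocca": return new Mocca();
            default: throw new IllegalArgumentException("unknown coffee: " + type);
        }
    }
}

public class SimpleFactoryTest {
    public static void main(String[] args) {
        Coffee coffee = SimpleCoffeeFactory.create("Latte");
        System.out.println(coffee.name()); // Latte
    }
}
```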
- Abstract Factory pattern: Subclasses decide which concrete class to create
public class AbstractFactoryTest {
    public static void main(String[] args) {
        // the concrete factory decides which product to create
        String result = new CoffeeFactory().createProduct("Latte");
        System.out.println(result);
    }
}

abstract class AbstractFactory {
    public abstract String createProduct(String product);
}

class BeerFactory extends AbstractFactory {
    @Override
    public String createProduct(String product) {
        String result;
        switch (product) {
            case "Hans":
                result = "Hans";
                break;
            default:
                result = "other beer";
                break;
        }
        return result;
    }
}

class CoffeeFactory extends AbstractFactory {
    @Override
    public String createProduct(String product) {
        String result;
        switch (product) {
            case "Mocca":
                result = "Mocca";
                break;
            case "Latte":
                result = "Latte";
                break;
            default:
                result = "other coffee";
                break;
        }
        return result;
    }
}
- Singleton: ensures that only one instance is created; double-checked locking solves the concurrency problem under high concurrency
- Prototype mode: Clone, deep copy, shallow copy
- Builder mode: Hides the specific construction process and details
1.4.2 Structural
- Adapter pattern: Changes the interface of one class into another interface that the client expects, thus enabling two classes to work together that would otherwise not work together due to interface mismatches.
/* MicroUSB charging interface */
interface MicroUSB {
    void charger();
}

/* TypeC charging interface */
interface ITypeC {
    void charger();
}

class TypeC implements ITypeC {
    @Override
    public void charger() {
        System.out.println("TypeC charging");
    }
}

/* adapter: exposes MicroUSB, delegates to TypeC */
class AdapterMicroUSB implements MicroUSB {
    private TypeC typeC;

    public AdapterMicroUSB(TypeC typeC) {
        this.typeC = typeC;
    }

    @Override
    public void charger() {
        typeC.charger();
    }
}

public class AdapterTest {
    public static void main(String[] args) {
        TypeC typeC = new TypeC();
        MicroUSB microUSB = new AdapterMicroUSB(typeC);
        microUSB.charger();
    }
}
- Decorator pattern: Assign different responsibilities or functions to objects
1) Define the top-level object and define the behavior
interface IPerson {
void show();
}
2) Define the decorator superclass
class DecoratorBase implements IPerson {
    IPerson iPerson;

    public DecoratorBase(IPerson iPerson) {
        this.iPerson = iPerson;
    }

    @Override
    public void show() {
        iPerson.show();
    }
}
3) Define concrete decorators
class Jacket extends DecoratorBase {
    public Jacket(IPerson iPerson) {
        super(iPerson);
    }

    @Override
    public void show() {
        iPerson.show();
        // new behavior added by this decorator
        System.out.println("put on the jacket");
    }
}

class Hat extends DecoratorBase {
    public Hat(IPerson iPerson) {
        super(iPerson);
    }

    @Override
    public void show() {
        iPerson.show();
        // new behavior added by this decorator
        System.out.println("put on the hat");
    }
}
4) Define concrete objects
class LaoWang implements IPerson {
    @Override
    public void show() {
        System.out.println("wearing nothing");
    }
}
5) Decorator mode call
public class DecoratorTest {
    public static void main(String[] args) {
        LaoWang laoWang = new LaoWang();
        Jacket jacket = new Jacket(laoWang);
        Hat hat = new Hat(jacket);
        hat.show();
    }
}
- Appearance pattern: Simplifies the interface of a group of classes
- Composite pattern: Customers work with collections of objects and individual objects in a consistent manner
- Bridge pattern: separates abstraction from implementation so the two can vary independently
- Proxy pattern: an agent acting on your behalf, like a broker
/* ticket-buying interface */
interface IAirTicket {
    void buy();
}

/* real subject */
class AirTicket implements IAirTicket {
    @Override
    public void buy() {
        System.out.println("buy a ticket");
    }
}

/* proxy holding the real subject */
class ProxyAirTicket implements IAirTicket {
    private AirTicket airTicket;

    public ProxyAirTicket() {
        airTicket = new AirTicket();
    }

    @Override
    public void buy() {
        airTicket.buy();
    }
}

/* proxy pattern call */
public class ProxyTest {
    public static void main(String[] args) {
        IAirTicket airTicket = new ProxyAirTicket();
        airTicket.buy();
    }
}
- Flyweight pattern: a sharing technique that supports large numbers of fine-grained objects
1.4.3 Behavioral
- Policy pattern: Encapsulate interchangeable behaviors and use delegates to decide which one to use. (Use policy mode when there are too many if else)
/* declare the travel strategy */
interface ITrip {
    void going();
}

class Bike implements ITrip {
    @Override
    public void going() {
        System.out.println("ride a bike");
    }
}

// the second strategy's class name was garbled in the original; "Car" is assumed here
class Car implements ITrip {
    @Override
    public void going() {
        System.out.println("drive a car");
    }
}

/* context that delegates to a strategy */
class Trip {
    private ITrip trip;

    public Trip(ITrip trip) {
        this.trip = trip;
    }

    public void doTrip() {
        this.trip.going();
    }
}

public class StrategyTest {
    public static void main(String[] args) {
        Trip trip = new Trip(new Bike());
        trip.doTrip();
    }
}
- Observer mode: Allows objects to be notified when their state changes
1) Define the observer (message receiver)
/* observer (message receiver) */
interface Observer {
    void update(String message);
}

/* concrete observer */
class ConcrereObserver implements Observer {
    private String name;

    public ConcrereObserver(String name) {
        this.name = name;
    }

    @Override
    public void update(String message) {
        System.out.println(name + ": " + message);
    }
}
2) Define the observed (message sender)
import java.util.ArrayList;
import java.util.List;

/* subject (message sender) */
interface Subject {
    void attach(Observer observer); // subscribe
    void detach(Observer observer); // unsubscribe
    void notify(String message);    // publish
}

/* concrete subject */
class ConcreteSubject implements Subject {
    private List<Observer> list = new ArrayList<Observer>();

    @Override
    public void attach(Observer observer) {
        list.add(observer);
    }

    @Override
    public void detach(Observer observer) {
        list.remove(observer);
    }

    @Override
    public void notify(String message) {
        for (Observer observer : list) {
            observer.update(message);
        }
    }
}
3) Code call
public class ObserverTest {
    public static void main(String[] args) {
        // define the subject
        ConcreteSubject concreteSubject = new ConcreteSubject();
        // define the subscribers
        ConcrereObserver concrereObserver = new ConcrereObserver("Lao Wang");
        ConcrereObserver concrereObserver2 = new ConcrereObserver("Java");
        // subscribe
        concreteSubject.attach(concrereObserver);
        concreteSubject.attach(concrereObserver2);
        // publish a message
        concreteSubject.notify("Updated");
    }
}
The execution result of the program is as follows:
Lao Wang: Updated
Java: Updated
- Template method pattern: the parent class defines the skeleton of an algorithm, and subclasses decide how to implement its steps (AQS)
- Command mode: Encapsulates the request as an object
- State pattern: encapsulates state-based behavior and uses delegation to switch between behaviors
- Iterator pattern: Walks between collections of objects without exposing the implementation of the collection
- Chain of Responsibility pattern: like a linked list; each handler holds a reference to the next handler in the chain
- Mediator pattern: like the United Nations handling disputes between countries
- Interpreter pattern: a compiler
- Memento pattern: saving a game
- Visitor pattern: The most complex pattern, suitable for systems with relatively stable data structures