Cabbage Java Self-Study Room covers core Java knowledge
The Path of the Java Engineer: Design Patterns (1) | Design Patterns (2) | Design Patterns (3)
1. Introduction to design patterns
A design pattern is a reusable routine for solving a specific class of problems. It is not a syntax specification but a set of solutions for improving code reusability, maintainability, readability, robustness, and security.
In 1995, the GoF (Gang of Four) published Design Patterns: Elements of Reusable Object-Oriented Software, a collection of 23 design patterns that set a milestone in the field and became known as the GoF design patterns.
In essence, these 23 design patterns are practical applications of object-oriented design principles, grounded in a full understanding of encapsulation, inheritance, and polymorphism, as well as of association and composition between classes.
2. The concept of design patterns
A software design pattern, also simply called a design pattern, is a cataloged, widely known summary of code design experience that is used over and over. It describes recurring problems in software design and their solutions: a set of routines for solving a specific problem, distilled from the experience of earlier developers and general enough to be reused. The goal is to improve code reusability, readability, and reliability.
Design patterns are the practical application of object-oriented design principles and rest on a full understanding of encapsulation, inheritance, and polymorphism, as well as of association and composition between classes. Used correctly, design patterns have the following advantages:
- They improve a programmer’s thinking, programming, and design abilities.
- They make program design more standardized and code writing more systematic, which greatly improves development efficiency and shortens the development cycle.
- They make the resulting code more reusable, readable, reliable, flexible, and maintainable.
Of course, software design patterns are only a guide. In concrete development they must be chosen appropriately according to the characteristics and requirements of the application being designed. For a simple program, writing a straightforward algorithm is much easier than introducing a design pattern; for large projects or framework design, however, it is better to organize the code with design patterns.
3. The 23 GoF design patterns
- Creational patterns describe how objects are created; their main feature is the separation of object creation from object use.
GoF provides five creational patterns: singleton, prototype, factory method, abstract factory, and builder.
- Structural patterns: describe how classes or objects are composed into larger structures.
GoF provides seven structural patterns: proxy, adapter, bridge, decorator, facade, flyweight, and composite.
- Behavioral patterns: describe how classes or objects cooperate to accomplish tasks that no single object could accomplish alone, and how responsibilities are assigned.
GoF provides 11 behavioral patterns: template method, strategy, command, chain of responsibility, state, observer, mediator, iterator, visitor, memento, and interpreter.
3.1. The role of GoF’s 23 design patterns
- Singleton pattern: ensures a class produces only one instance and provides a global access point for external code to reach that instance. Its extension is the multiton pattern.
- Prototype pattern: Use an object as a Prototype and clone it to create new instances similar to the Prototype.
- Factory Method pattern: Defines an interface for creating products, and subclasses decide what products to produce.
- Abstract Factory pattern: provides an interface for creating a family of products; each concrete subclass can produce a series of related products.
- Builder pattern: Decompose a complex object into relatively simple parts, then create them separately according to different needs, and finally build the complex object.
- Proxy pattern: provides a proxy for an object to control access to it; a client accesses the object indirectly through the proxy, which can restrict, enhance, or modify some of its features.
- Adapter pattern: Converts the interface of one class into another interface that the customer expects, making classes that would otherwise not work together due to interface incompatibility work together.
- Bridge pattern: separates abstraction from implementation so that the two can vary independently. It uses composition instead of inheritance, reducing the coupling between the two variable dimensions of abstraction and implementation.
- Decorator pattern: Dynamically add some responsibility to an object, that is, add additional functionality.
- Facade pattern: Provides a consistent interface to multiple complex subsystems, making them more accessible.
- Flyweight pattern: Sharing techniques are used to efficiently support reuse of large numbers of fine-grained objects.
- Composite pattern: Groups objects into a tree-like hierarchy, giving users consistent access to individual and Composite objects.
- Template Method pattern: defines the skeleton of an algorithm in an operation and defers some steps to subclasses, so that subclasses can redefine specific steps without changing the algorithm’s structure.
- Strategy pattern: A series of algorithms are defined and each algorithm is encapsulated so that they are interchangeable and the change of the algorithm does not affect the customers using the algorithm.
- Command pattern: Encapsulates a request as an object, separating the responsibility for making the request from the responsibility for executing the request.
- Chain of Responsibility pattern: Requests are passed from one object in the Chain to the next until the request is responded to. In this way, objects are decoupled.
- State pattern: allows an object to change its behavior when its internal state changes.
- The Observer pattern: There is a one-to-many relationship between objects. When one object changes, the change is notified to the other objects, thereby affecting the behavior of the other objects.
- Mediator pattern: defines a mediator object to simplify the interactions among a set of objects, reducing coupling so that the objects do not need to know about each other.
- The Iterator pattern: Provides a way to access a series of data in an aggregate object sequentially without exposing the internal representation of the aggregate object.
- Visitor pattern: provides multiple ways of operating on each element in a collection without changing the elements, i.e., each element can accept multiple visitor objects.
- Memento pattern: Retrieves and saves the internal state of an object so that it can be restored later without breaking encapsulation.
- Interpreter pattern: defines a grammar for a language and an interpreter that uses the grammar to interpret sentences in the language.
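To make the list concrete, here is a minimal Java sketch of the first pattern above, the singleton: one instance, one global access point. The class name `Config` is illustrative, not from the original text.

```java
// Singleton sketch: the class controls its own single instance.
class Config {
    // Eagerly created, shared instance.
    private static final Config INSTANCE = new Config();

    // Private constructor prevents external instantiation.
    private Config() { }

    // Global access point to the one instance.
    static Config getInstance() {
        return INSTANCE;
    }
}
```

Every call to `Config.getInstance()` returns the same object, which is exactly the guarantee the pattern makes.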
4. Object-oriented design principles
4.1. Open-closed principle
The Open-Closed Principle (OCP) was formulated by Bertrand Meyer. In his 1988 book Object-Oriented Software Construction, he proposed: software entities should be open for extension but closed for modification. This is the classic definition of the open-closed principle.
The software entity here consists of the following parts: modules, classes and interfaces, and methods divided in the project.
The open-closed principle means that when requirements change, a module’s functionality can be extended to meet the new requirements without modifying the software entity’s source code or binary.
4.1.1. Role of the open-closed principle
The open-closed principle is the ultimate goal of object-oriented design: it gives software entities adaptability and flexibility as well as stability and continuity. Specifically, its benefits are as follows.
- Impact on software testing
If the software follows the open-closed principle, only the newly extended code needs to be tested, because the original tests still pass.
- Can improve code reusability
The smaller the granularity, the more likely the code is to be reused; in object-oriented programming, programming in terms of small atomic units and abstractions improves code reusability.
- Can improve software maintainability
Software that follows the open and closed principle is more stable and continuous, making it easier to expand and maintain.
4.1.2. Implementation method of the open-closed principle
The open-closed principle can be implemented by “abstracting constraints and encapsulating variation”: define a relatively stable abstraction layer for software entities through interfaces or abstract classes, and encapsulate variable elements of the same kind in the same concrete implementation class.
Because abstraction has good flexibility and wide adaptability, as long as abstraction is reasonable, it can basically maintain the stability of software architecture. The changeable details in software can be extended from the abstract implementation class. When the software needs to change, it only needs to derive an implementation class to extend according to the requirements.
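The “derive an implementation class to extend” idea can be sketched in Java. The `Shape` hierarchy below is an illustrative example, not from the original text: the abstraction stays closed for modification, and new behavior arrives as new implementation classes.

```java
// OCP sketch: the abstraction layer (Shape) is stable and closed for
// modification; variability lives in the implementation classes.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

// Supporting squares is an extension: a new derived class,
// with no change to Shape, Circle, or any code that uses Shape.
class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}
```

Code written against `Shape` keeps working unchanged as new shapes are added, which is the stability the abstraction layer buys.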
4.2. Liskov substitution principle
The Liskov Substitution Principle (LSP) was proposed by Barbara Liskov of the MIT Computer Science Laboratory in the paper “Data Abstraction and Hierarchy”, presented at the OOPSLA conference in 1987. It states: inheritance should ensure that any property proved about supertype objects also holds for subtype objects.
The Liskov substitution principle describes the ground rules of inheritance: when inheritance should be used, when it should not, and why. Liskov substitution is the basis of reuse through inheritance; it reflects the relationship between base classes and subclasses, complements the open-closed principle, and specifies the concrete steps for achieving abstraction.
4.2.1. Role of the Liskov substitution principle
The main roles of the Liskov substitution principle are as follows:
- It is one of the important ways to realize the open-closed principle.
- It avoids the poor reusability caused by overriding parent-class methods in an inheritance hierarchy.
- It guarantees correct behavior: class extensions do not introduce new errors into an existing system, reducing the likelihood of code errors.
- It enhances program robustness and keeps the program compatible as it changes, improving maintainability and extensibility and reducing the risk introduced by changing requirements.
4.2.2. Implementation method of the Liskov substitution principle
In plain terms, the Liskov substitution principle says that a subclass may extend its parent’s functionality but must not change it. When a subclass inherits from a parent class, avoid overriding the parent’s methods; add new methods to provide new functionality instead.
Based on this understanding, the Liskov substitution principle can be summarized as follows:
- A subclass may implement an abstract method of its parent class, but should not override a non-abstract (concrete) method of the parent class.
- A subclass may add its own special methods.
- When a subclass’s method overrides a parent-class method, the method’s preconditions (the requirements on its input parameters) must be looser than or equal to those of the parent’s method.
- When a subclass’s method overrides or implements a parent-class method, the method’s postconditions (the guarantees on its output/return value) must be stricter than or equal to those of the parent’s method.
Although overriding a parent-class method is a simple way to add new behavior, it hurts the reusability of the whole inheritance hierarchy; where polymorphism is used heavily, the probability of runtime errors grows large.
If a program violates the Liskov substitution principle, an object of the subclass will cause runtime errors wherever the base class is expected. The fix is to remove the inheritance relationship and redesign the relationship between the classes.
The most famous illustration of Liskov substitution is “a square is not a rectangle”. There are many similar examples: biologically, penguins, ostriches, and kiwis are birds, but from a class-inheritance perspective they cannot be subclasses of a “bird” that flies, because they cannot inherit its ability to fly. Likewise, a “balloon fish” that cannot swim cannot be a subclass of “fish”, and a “toy gun” that cannot destroy enemies cannot be a subclass of “gun”.
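The square/rectangle example can be sketched in Java (an illustrative sketch of the classic violation, not from the original text). If `Square` keeps its sides equal by overriding the `Rectangle` setters, code that is correct for any true rectangle gets a surprising result:

```java
// LSP violation sketch: Square changes the behavior of the
// inherited setters, so it cannot substitute for Rectangle.
class Rectangle {
    protected int width, height;
    void setWidth(int w)  { this.width = w; }
    void setHeight(int h) { this.height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    // Preserving the "sides equal" invariant alters the
    // parent's contract: each setter now changes both sides.
    @Override void setWidth(int w)  { this.width = w; this.height = w; }
    @Override void setHeight(int h) { this.width = h; this.height = h; }
}
```

A caller that sets width 4 and height 5 expects area 20 from any `Rectangle`, but a `Square` passed in its place yields 25, a runtime surprise exactly where the base class was expected.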
4.3. Dependency inversion principle
The Dependency Inversion Principle (DIP) comes from an article by Robert C. Martin, president of Object Mentor, published in C++ Report in 1996. Its original definition: high-level modules should not depend upon low-level modules; both should depend upon abstractions. Abstractions should not depend upon details; details should depend upon abstractions. The core idea is: program to the interface, not to the implementation.
The dependency inversion principle is one of the important ways to realize the open-closed principle; it reduces the coupling between clients and implementation modules.
Because in software design, detail is variable and abstraction layers are relatively stable, an architecture based on abstraction is much more stable than one based on detail. Here, abstraction refers to an interface or abstract class, while detail refers to a concrete implementation class.
The purpose of using interfaces or abstract classes is to create specifications and contracts that do not involve any concrete operations, leaving the task of presenting the details to their implementation classes.
4.3.1. Role of the dependency inversion principle
The main functions of the dependency inversion principle are as follows:
- The dependency inversion principle can reduce the coupling between classes.
- The dependence inversion principle can improve the stability of the system.
- The dependency inversion principle can reduce the risks associated with parallel development.
- The dependency inversion principle improves code readability and maintainability.
4.3.2. Implementation method of dependency inversion principle
The purpose of the dependency inversion principle is to reduce coupling between classes by programming to interfaces. In practice, we can satisfy it by following four rules.
- Where possible, give every class an interface, an abstract class, or both.
- Variables should be declared as interfaces or abstract classes.
- No class should derive from a concrete class.
- Follow the Liskov substitution principle when using inheritance.
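The second rule, declaring variables as interfaces, is the heart of “program to the interface”. A minimal sketch, with illustrative names (`MessageSender`, `Notifier` are not from the original text):

```java
// DIP sketch: the high-level Notifier depends on the MessageSender
// abstraction, never on a concrete delivery channel.
interface MessageSender {
    String send(String text);
}

// A low-level detail; other channels (SMS, push, ...) could be
// added without touching Notifier.
class EmailSender implements MessageSender {
    public String send(String text) { return "email: " + text; }
}

class Notifier {
    // Declared as the abstraction, injected from outside.
    private final MessageSender sender;

    Notifier(MessageSender sender) { this.sender = sender; }

    String notifyUser(String text) { return sender.send(text); }
}
```

Because `Notifier` holds only the interface, swapping the channel is a constructor argument, not a code change, which is the decoupling the principle promises.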
4.4. Principle of single responsibility
The Single Responsibility Principle (SRP), also known as the single function principle, was formulated by Robert C. Martin in Agile Software Development: Principles, Patterns, and Practices. It states: there should never be more than one reason for a class to change.
This principle states that objects should not take on too many responsibilities. If an object takes on too many responsibilities, there are at least two disadvantages:
- Changes to one responsibility may impair or inhibit the class’s ability to implement other responsibilities;
- When a client needs a responsibility for the object, it has to include all the other responsibilities that are not needed, resulting in redundant code or wasted code.
4.4.1. Advantages of the single responsibility principle
The core of the single responsibility principle is to control the granularity of classes, decouple objects, and improve their cohesion. Following the single responsibility principle has the following advantages:
- Reduce class complexity. The logic of a class having one responsibility is certainly much simpler than having multiple responsibilities.
- Improve the readability of classes. As complexity decreases, readability increases.
- Improve system maintainability. Improved readability makes it easier to maintain.
- Reduce the risk caused by change. Change is inevitable; if the single responsibility principle is followed well, modifying one function has significantly less impact on the others.
4.4.2. Implementation method of single responsibility principle
The single responsibility principle is the simplest principle to state and one of the hardest to apply: it requires the designer to discover a class’s different responsibilities, separate them, and encapsulate them in different classes or modules. Discovering a class’s multiple responsibilities demands strong analysis and design skills and refactoring experience. A university student-management program is a typical example of applying the single responsibility principle.
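As a generic sketch in the spirit of that student-management example (class and method names are illustrative, not from the original text), storage and reporting are two reasons to change, so they live in two classes:

```java
import java.util.ArrayList;
import java.util.List;

class Student {
    final String name;
    Student(String name) { this.name = name; }
}

// Responsibility 1: persistence only. Changing how students are
// stored never touches reporting code.
class StudentRepository {
    private final List<Student> students = new ArrayList<>();
    void save(Student s) { students.add(s); }
    int count() { return students.size(); }
}

// Responsibility 2: reporting only. Changing the report format
// never forces a change to StudentRepository.
class StudentReport {
    String summarize(StudentRepository repo) {
        return "students: " + repo.count();
    }
}
```

Each class now has exactly one reason to change, which keeps both simpler than a single class juggling storage and presentation.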
4.5. Interface isolation principle
The Interface Segregation Principle (ISP) requires programmers to break bloated interfaces into smaller, more specific ones, so that each interface contains only the methods its clients are interested in. In 2002, Robert C. Martin defined the principle as: clients should not be forced to depend on methods they do not use. Another formulation is: the dependency of one class on another should rest on the smallest possible interface.
Both definitions say the same thing: instead of creating one huge interface for all the classes that depend on it, create a dedicated, minimal interface for each kind of client.
The interface segregation principle and the single responsibility principle both aim to improve class cohesion and reduce coupling, and both embody the idea of encapsulation, but they differ:
- The single responsibility principle focuses on responsibilities; the interface segregation principle focuses on isolating interface dependencies.
- The single responsibility principle mainly constrains classes and targets implementations and details; the interface segregation principle mainly constrains interfaces and targets abstraction and the program’s overall framework.
4.5.1. Advantages of interface isolation
The interface isolation principle is used to constrain interfaces and reduce the dependency of classes on interfaces. Following the interface isolation principle has the following advantages:
- Splitting a bloated interface into multiple small-grained interfaces can prevent the proliferation of external changes and improve the flexibility and maintainability of the system.
- Interface isolation improves system cohesion, reduces external interactions, and reduces system coupling.
- If the interface granularity is properly defined, the system stability can be guaranteed. However, if the definition is too small, it will lead to too many interfaces and complicate the design. If the definition is too large, flexibility is reduced and customization services cannot be provided, bringing unpredictable risks to the overall project.
- The use of multiple specialized interfaces also provides a hierarchy of objects, since the overall interface can be defined through interface inheritance.
- Reduces code redundancy in a project. An overly large interface usually contains many unneeded methods, which forces redundant code into every class that implements it.
4.5.2. Implementation method of interface isolation principle
When applying the interface isolation principle, it should be measured according to the following rules.
- Keep interfaces small, but limited. An interface serves only one submodule or business logic.
- Customize services for classes that depend on interfaces. Provide only the methods needed by the caller and mask those that are not.
- Understand the environment and refuse to follow blindly. Each project or product has selected environmental factors, and depending on the environment, the criteria for interface separation will vary to gain insight into the business logic.
- Improve cohesion and reduce external interactions. Make the interface do the most with the fewest methods.
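The rules above can be sketched in Java. The device interfaces below are an illustrative example (names are not from the original text): each capability is its own small interface, so a client depends only on what it uses.

```java
// ISP sketch: two small, role-specific interfaces instead of
// one bloated "Machine" interface.
interface Printer { String print(String doc); }
interface Scanner { String scan(); }

// A simple device implements only the capability it has;
// it is never forced to stub out scan().
class BasicPrinter implements Printer {
    public String print(String doc) { return "printed " + doc; }
}

// A multifunction device opts into both capabilities.
class MultiFunctionDevice implements Printer, Scanner {
    public String print(String doc) { return "printed " + doc; }
    public String scan() { return "scanned"; }
}
```

Code that only prints takes a `Printer`, so swapping `BasicPrinter` for `MultiFunctionDevice` costs nothing, and neither class carries methods it cannot honor.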
4.6. The Law of Demeter
The Law of Demeter (LoD), also known as the Least Knowledge Principle (LKP), was born in 1987 in a Northeastern University research project called Demeter. It was proposed by Ian Holland, popularized by Booch (one of the founders of UML), and later became widely known through the classic book The Pragmatic Programmer.
The Law of Demeter is defined as: talk only to your immediate friends and not to strangers. That is, if two software entities have no need to communicate directly, they should not call each other directly; the interaction can be forwarded by a third party. The purpose is to reduce coupling between classes and improve the relative independence of modules.
The “friends” in the Law of Demeter are the current object itself, its member objects, the objects it creates, and its method parameters. These objects are associated with, aggregated by, or composed into the current object, and their methods may be called directly.
4.6.1. Advantages of the Law of Demeter
The Law of Demeter limits the width and depth of communication between software entities. Used correctly, it has the following two advantages:
- It reduces coupling between classes and improves the relative independence of modules.
- With lower coupling, class reusability and system scalability improve.
However, overusing the Law of Demeter produces a large number of mediator classes, which increases system complexity and reduces the efficiency of communication between modules. Applying it therefore requires repeated trade-offs to keep the system both highly cohesive and loosely coupled, with a clear structure.
4.6.2. Implementation method of the Law of Demeter
By its definition and characteristics, the Law of Demeter emphasizes the following two points:
- From a dependent’s point of view, only rely on what should be relied on.
- From the perspective of the dependent, only expose the method that should be exposed.
So here are six things to keep in mind when applying the Law of Demeter.
- In class partitioning, you should create weakly coupled classes. The weaker the coupling between classes, the better the goal of reuse.
- In the structure design of the class, the access rights of class members should be reduced as far as possible.
- In class design, the priority is to make a class immutable.
- In references to other classes, keep the number of objects referenced to a minimum.
- Instead of exposing a class’s attribute members, provide the corresponding accessors (setter and getter methods).
- Use Serializable with caution.
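The “only talk to friends” rule can be sketched in Java. In this illustrative example (names are not from the original text), `Customer` forwards the interaction so its callers never reach through it into its `Wallet`:

```java
// Law of Demeter sketch: Wallet is a member object of Customer
// (a "friend" of Customer), but a stranger to Customer's callers.
class Wallet {
    private double balance = 100.0;
    double deduct(double amount) { balance -= amount; return balance; }
}

class Customer {
    private final Wallet wallet = new Wallet();

    // Callers invoke payBill() instead of the chained call
    // customer.getWallet().deduct(...), so they depend only
    // on Customer, not on Wallet's interface.
    double payBill(double amount) { return wallet.deduct(amount); }
}
```

If `Wallet` later changes (say, to support multiple currencies), only `Customer` needs updating; callers of `payBill()` are insulated from the change.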
4.7. Principle of composite reuse
The Composite Reuse Principle (CRP), also called the Composition/Aggregate Reuse Principle (CARP), requires that in software reuse we first use association relationships such as composition or aggregation, and only then consider inheritance.
If inheritance is used, the Liskov substitution principle must be strictly followed. The composite reuse principle and the Liskov substitution principle complement each other.
4.7.1. Importance of the principle of composite reuse
Generally, class reuse is divided into inheritance reuse and composite reuse. Although inheritance reuse has the advantages of simplicity and easy implementation, it also has the following disadvantages.
- Inheritance reuse breaks class encapsulation. Because inheritance exposes the implementation details of the parent class to the child class, the parent class is transparent to the child class, so this reuse is also known as “white box” reuse.
- A subclass has a high degree of coupling with its parent. Any change in the implementation of the parent class results in a change in the implementation of the subclass, which is not conducive to the extension and maintenance of the class.
- It limits the flexibility of reuse. An implementation inherited from a parent class is static and defined at compile time, so it cannot be changed at run time.
When using composite or aggregate reuse, existing objects can be incorporated into the new object, making it a part of the new object. The new object can invoke the functions of the existing object, which has the following advantages.
- It maintains class encapsulation. Because the internal details of component objects are invisible to the new object, this reuse is also known as “black box” reuse.
- Low coupling between the new and existing classes. This kind of reuse needs fewer dependencies; the only way the new object can access the component object is through its interface.
- High flexibility of reuse. This reuse can occur dynamically at run time, with new objects dynamically referencing objects of the same type as component objects.
4.7.2. Implementation method of the composite reuse principle
The composite reuse principle incorporates an existing object into a new object as a member; the new object calls the existing object’s functionality and thereby achieves reuse.
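A minimal Java sketch of this “member object” reuse, with illustrative names (`Engine`, `Car` are not from the original text):

```java
// Composite reuse sketch: Car reuses Engine by holding it as a
// member ("black box" reuse) instead of inheriting from it.
class Engine {
    String start() { return "engine started"; }
}

class Car {
    // Composition: Engine becomes a part of Car, injected at
    // construction time, so it can even be swapped at run time.
    private final Engine engine;

    Car(Engine engine) { this.engine = engine; }

    // The new object delegates to the existing object's functionality.
    String drive() { return engine.start() + ", car moving"; }
}
```

`Car` sees only `Engine`’s public interface, so the engine’s internals can change freely, the low-coupling advantage listed above, which inheritance (`class Car extends Engine`) would forfeit.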