I previously started a series on design patterns, "23 Design Patterns of GoF with Go", but it ended after only three articles, mainly because I could not find the right sample code. Examples like "can a duck fly?" or "how do you bake?" are hard to connect to our daily development work, since very few real software systems contain that kind of logic.
This series, "23 Design Patterns of GoF", can be seen as a reboot of "23 Design Patterns of GoF with Go". Learning from last time, I finished implementing all 23 design patterns in sample code before writing this article. Unlike last time, the sample code is written in Java, and it takes techniques, problems, and scenarios that we meet in daily development as its starting point to demonstrate how design patterns can be applied in real implementations.
Preface
Twenty-five years have passed since GoF proposed 23 design patterns in 1995, and design patterns are still a hot topic in software. Design patterns are generally defined as:
A design pattern is a piece of code design experience that is used repeatedly, known to most developers, and catalogued and summarized. Design patterns are used to make code reusable, easier to understand, and more reliable.
By definition, a design pattern is a summary of experience, a concise and elegant solution to a particular problem. The immediate benefit of learning design patterns is that you can stand on the shoulders of giants to solve specific problems in software development.
The highest level of learning design patterns is to understand the ideas behind them, so that even if you have forgotten the name and structure of a particular pattern, you can still solve the problem at hand with ease. The essential ideas behind design patterns are the SOLID principles. If design patterns are the moves of a martial artist, then the SOLID principles are the inner strength: train your inner strength first, then learn the moves, and you will get twice the result with half the effort. That is why, before introducing design patterns, we first introduce the SOLID principles.
This article starts with the overall structure of the sample-code project (the Demo) used throughout this series, and then goes through the SOLID principles one by one: the single responsibility principle, the open-closed principle, the Liskov substitution principle, the interface segregation principle, and the dependency inversion principle.
A simple distributed application system
The demo code for this series is available at: github.com/ruanrunxue/…
The Demo project implements a simple distributed application system (single-machine version), consisting mainly of the following modules:
- Network: network module; simulates packet forwarding, socket communication, HTTP communication, and other functions.
- Db: database module; simulates tables, transactions, DSL, and other functions.
- Mq: message queue module; simulates a topic-based producer/consumer message queue.
- Monitor: monitoring system module; simulates collection, analysis, and storage of service logs.
- Sidecar: sidecar module; simulates interception of network packets to provide access-log reporting and message flow control.
- Service: service module; currently simulates a service registry, an online-mall service cluster, a service message mediator, and other services.
The main directory structure of the demo project is as follows:
```
├── db            # database module: simulates tables, transactions, DSL, and other functions
│   ├── dsl       # DSL query support and result display
│   ├── exception # database-related exceptions
│   ├── iterator  # sequential and random traversal [Iterator pattern]
│   └── transaction # database transactions: execute, commit, rollback [Command pattern] [Memento pattern]
├── monitor       # monitoring system module, plugin-based architecture style
│   ├── config    # plugin configuration, loaded from json or yaml
│   ├── entity    # monitoring system entity definitions
│   ├── exception # monitoring-system-related exceptions
│   ├── filter    # filter plugin implementations
│   ├── input     # input plugin implementations
│   ├── output    # output plugin implementations
│   ├── pipeline  # ETL pipeline of input-filter-output [Bridge pattern]
│   ├── plugin    # plugin abstract interface definition
│   └── schema    # monitoring-system-related data table definitions
├── mq            # message queue module
├── network       # network module: simulates network communication; defines socket, packet, and other basic types [Observer pattern]
│   └── http      # simulated HTTP server and client
├── service       # service module
│   ├── mediator  # mediator providing service discovery and message forwarding
│   ├── registry  # service registry: registration, deregistration, update, discovery, subscribe, unsubscribe, notify
│   │   ├── entity # service registration/discovery entity definitions [Prototype pattern] [Builder pattern]
│   │   └── schema # service registry data table definitions [Visitor pattern] [Flyweight pattern]
│   └── shopping  # simulated online mall service cluster: order, stock, payment, and delivery services [Facade pattern]
└── sidecar       # sidecar module: intercepts sockets to provide HTTP access logging and flow control [Decorator pattern] [Factory pattern]
    └── flowctrl  # flow control based on message rate [Template Method pattern] [State pattern]
```
SRP: Single Responsibility Principle
The Single Responsibility Principle (SRP) should be one of the most easily understood of the SOLID principles, but it is also one of the most easily misunderstood. Many people equate the refactoring technique of "splitting a large function into small functions, each doing one thing" with SRP. Keeping functions small and focused is good practice, but it is not what SRP means.
The most widely spread definition of SRP is probably Uncle Bob’s:
A module should have one, and only one, reason to change.
That is, a module should have one, and only one, reason to change.
There are two things to understand about this explanation:
(1) How to define a module
We typically define a source file as a module of minimal granularity.
(2) How to identify the reason for change
Software changes in order to satisfy some user, so that user is the reason for the change. However, a module usually has more than one user or client program. The ArrayList class in Java, for example, may be used by thousands of programs, yet we would not say that ArrayList has thousands of responsibilities. Therefore, we should replace "a user" with "a class of actors": all clients of ArrayList can be grouped into one class of actors, those that "need list/array capabilities".
So Uncle Bob offers another explanation for SRP:
A module should be responsible to one, and only one, actor.
With this explanation, it is easier to see why a single-purpose function is not the same thing as SRP. Suppose a module contains two functions, A and B, each with a single responsibility; but function A serves one class of actors and function B serves another, and those two classes of actors change for different reasons. Then the module does not satisfy SRP.
Next, let's use our distributed application system Demo for further discussion. The Registry class (the service registry) provides the basic capabilities of service registration, update, deregistration, and discovery. A straightforward implementation might look like this:
```java
// demo/src/main/java/com/yrunz/designpattern/service/Registry.java
public class Registry implements Service {
    private final HttpServer httpServer;
    private final Db db;
    ...
    @Override
    public void run() {
        httpServer.put("/api/v1/service-profile", this::register)
                .post("/api/v1/service-profile", this::update)
                .delete("/api/v1/service-profile", this::deregister)
                .get("/api/v1/service-profile", this::discovery)
                .start();
    }
    // service registration
    private HttpResp register(HttpReq req) {...}
    // service update
    private HttpResp update(HttpReq req) {...}
    // service deregistration
    private HttpResp deregister(HttpReq req) {...}
    // service discovery
    private HttpResp discovery(HttpReq req) {...}
}
```
In this implementation, Registry contains four main methods, register, update, deregister, and discovery, which correspond exactly to the capabilities Registry exposes. It seems to have a single responsibility.
But think a little further: service registration, update, and deregistration are functions for service providers, while service discovery is a function for service consumers. Service providers and service consumers are two different actors, and they may change at different times and in different directions. For example:
The current service discovery capability is implemented like this: Registry selects one of the Service Profiles that meet the query criteria and returns it to the service consumer (that is, Registry does load balancing itself).
Suppose that now the service consumer has a new requirement: Registry returns all service Profiles that meet the query criteria, and the service consumer does the load balancing.
To satisfy this requirement we must modify Registry's code. Service registration, update, and deregistration should not be affected, but because they live in the same Registry class as service discovery, they are dragged in anyway, for example through possible code conflicts.
Therefore, a better design is to consolidate register, update, and deregister into a service management module, SvcManagement, and put discovery into a separate service discovery module, SvcDiscovery. The service registry, Registry, then composes SvcManagement and SvcDiscovery.
The concrete implementation is as follows:
```java
// demo/src/main/java/com/yrunz/designpattern/service/SvcManagement.java
class SvcManagement {
    private final Db db;
    ...
    // service registration
    HttpResp register(HttpReq req) {...}
    // service update
    HttpResp update(HttpReq req) {...}
    // service deregistration
    HttpResp deregister(HttpReq req) {...}
}

// demo/src/main/java/com/yrunz/designpattern/service/SvcDiscovery.java
class SvcDiscovery {
    private final Db db;
    ...
    // service discovery
    HttpResp discovery(HttpReq req) {...}
}

// demo/src/main/java/com/yrunz/designpattern/service/Registry.java
public class Registry implements Service {
    private final HttpServer httpServer;
    private final SvcManagement svcManagement;
    private final SvcDiscovery svcDiscovery;
    ...
    @Override
    public void run() {
        httpServer.put("/api/v1/service-profile", svcManagement::register)
                .post("/api/v1/service-profile", svcManagement::update)
                .delete("/api/v1/service-profile", svcManagement::deregister)
                .get("/api/v1/service-profile", svcDiscovery::discovery)
                .start();
    }
}
```
In addition to unnecessary repeated compilation of code, violating SRP commonly causes two problems:
1. Code conflicts. Programmer A modifies function A of a module while programmer B, unaware of this, modifies function B of the same module (since functions A and B serve different actors, they may well be maintained by two different programmers). When they both submit their changes, a code conflict occurs, because they modified the same source file.
2. Changes to function A affect function B. If functions A and B both use a common helper C inside the module, and a new requirement for A leads someone to modify C without considering B, then B's original behavior is broken.
Thus, violating SRP leads to software that is very hard to maintain. However, we should not blindly split modules either: splitting too finely fragments the code and also increases the software's complexity. In the earlier example, for instance, there is no need to split the service management module further into a service registration module, a service update module, and a service deregistration module, because they serve the same class of actors and, for the foreseeable future, they will change at the same time and for the same reasons.
Therefore, we can come to the conclusion that:
- If a module serves the same class of actors (that is, it changes for the same reasons), there is no need to split it.
- If you cannot yet classify the actors, the best time to split is when a change actually occurs.
SRP is about striking a balance between keeping things together and splitting them apart: lump too much together and every change touches everything; split too much and the system becomes complex. Judge the granularity from the actors' point of view, and separate the functions that serve different actors. If you really cannot tell or predict, wait until a change actually occurs before splitting, to avoid over-design.
OCP: Open-Closed Principle
In the Open-Closed Principle (OCP), "open" means open for extension and "closed" means closed for modification. Its full statement is:
A software artifact should be open for extension but closed for modification.
In plain English, a software system should be well extensible, and new features should be implemented through extensions rather than modifications to existing code.
However, OCP seems paradoxical in its literal sense: you want to add functionality to a module, but you can’t modify it.
How can we break this dilemma? The key is abstraction! Good software systems are always based on good abstractions, which can reduce the complexity of software systems.
So what is abstraction? Abstraction exists not only in software but also in everyday life. Here is an example, adapted from the linguistics classic Language in Thought and Action, to explain what abstraction means:
Suppose a farm has a cow named “Ah Hua”, then:
1. When we call it "Ah Hua", we see its unique features: the spots on its body and the lightning-shaped scar on its forehead.
2. When we call it a cow, we ignore those unique features and see what it has in common with the cows "Ah Hei" and "Ah Huang": it is a cow, and it is female.
3. When we call it a domestic animal, we ignore its characteristics as a cow, and instead see it as an animal like a pig, chicken, or sheep, kept in a pen on a farm.
4. When we call it farm property, we focus only on what it has in common with the other saleable objects on the farm: it can be sold and transferred.
From "Ah Hua", to cow, to livestock, and then to farm property, this is a process of continuous abstraction.
From the above examples, we can draw the conclusion that:
- Abstraction is the process of ignoring details and finding what things have in common.
- Abstraction is layered: the higher the level of abstraction, the fewer the details.
Back in the software world, we can map this example onto databases. A database's ladder of abstraction might be: MySQL 8.0 -> MySQL -> relational database -> database. Now suppose a business module needs to store its business data in a database; there are several possible designs:
- Option 1: the business module depends directly on MySQL 8.0. Versions keep changing, so if MySQL is upgraded one day we have to modify the business module to adapt; Option 1 therefore violates OCP.
- Option 2: the business module depends on MySQL. Compared with Option 1, this removes the impact of MySQL version upgrades. But consider another scenario: if the company bans MySQL for some reason and switches to PostgreSQL, we still have to modify the business module to adapt to the database switch, so in that scenario Option 2 also violates OCP.
- Option 3: the business module depends on "relational database". This basically removes the impact of switching between relational databases: we can move among MySQL, PostgreSQL, Oracle, and so on without touching the business module. However, knowing the business well, you predict that with rapid user growth a relational database may not cope with highly concurrent writes, which leads to the final option.
- Option 4: the business module depends on "database". Now, whether we use MySQL or PostgreSQL, a relational or a non-relational database, the business module never needs to change. At this point we can consider the business module stable and unaffected by changes in the underlying database; it satisfies OCP.
The evolution of these options is a process of step-by-step abstraction of the database module that the business depends on, ending in a stable design that satisfies OCP.
So how do we represent the abstraction "database" in a programming language? With an interface!
The most common operations on a database are CRUD, so we can design a Db interface to represent "database":
```java
public interface Db {
    Record query(String tableName, Condition cond);
    void insert(String tableName, Record record);
    void update(String tableName, Record record);
    void delete(String tableName, Record record);
}
```
Thus, the dependency between the business module and the database module becomes something like the following:
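As a minimal sketch (the BusinessModule class and its method names here are hypothetical, not part of the Demo), the business module now holds only a reference to the Db interface and never names a concrete database:

```java
// Hypothetical business module that depends only on the Db abstraction.
public class BusinessModule {
    private final Db db;  // any Db implementation can be injected

    public BusinessModule(Db db) {
        this.db = db;
    }

    public void saveOrder(Record order) {
        // Written against the abstract interface, so switching the
        // underlying database never touches this code.
        db.insert("orders", order);
    }
}
```

Swapping MySQL for PostgreSQL, or a relational database for a non-relational one, then only changes which Db implementation gets passed in.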
Another key to satisfying OCP is isolating the points of change: only after a change point has been identified and isolated can it be abstracted. The following uses our distributed application system Demo to show how to separate and abstract change points.
In the Demo, the monitoring system performs ETL (extract, transform, load) on the services' access logs, which involves three steps: 1) fetch the log data from the message queue; 2) process the data; 3) store the processed data in the database.
We call this whole log-processing flow a pipeline, and a first implementation might be:
```java
public class Pipeline implements Plugin {
    private Mq mq;
    private Db db;
    ...
    public void run() {
        while (!isClose.get()) {
            // 1. fetch log data from the message queue
            Message msg = mq.consume("monitor.topic");
            String accessLog = msg.payload();
            // 2. process the data
            ObjectNode logJson = new ObjectNode(JsonNodeFactory.instance);
            logJson.put("content", accessLog);
            String data = logJson.asText();
            // 3. store the processed data in the database
            db.insert("logs_table", logId, data);
        }
    }
    ...
}
```
Now suppose a new service is added, but it cannot connect to the message queue and can only transmit data over a socket. We therefore add an inputType field to Pipeline to decide whether to use the socket as the input source:
```java
public class Pipeline implements Plugin {
    ...
    public void run() {
        while (!isClose.get()) {
            String accessLog;
            if (inputType == InputType.MQ) {
                // use the message queue as the input source
                Message msg = mq.consume("monitor.topic");
                accessLog = msg.payload();
            } else {
                // use the socket as the input source
                Packet packet = socket.receive();
                accessLog = packet.payload().toString();
            }
            ...
        }
    }
}
```
Some time later, a new requirement arrives: stamp each access log with a timestamp so that it can be analyzed later. So we modify Pipeline's data-processing logic:
```java
public class Pipeline implements Plugin {
    ...
    public void run() {
        while (!isClose.get()) {
            ...
            ObjectNode logJson = new ObjectNode(JsonNodeFactory.instance);
            logJson.put("content", accessLog);
            // add a timestamp field
            logJson.put("timestamp", Instant.now().getEpochSecond());
            String data = logJson.asText();
            ...
        }
    }
}
```
Soon after, yet another requirement appears: store the processed data in ES (Elasticsearch) to make log retrieval easier, so we modify Pipeline's storage logic once again:
```java
public class Pipeline implements Plugin {
    ...
    public void run() {
        while (!isClose.get()) {
            ...
            if (outputType == OutputType.DB) {
                // store to the database
                db.insert("logs_table", logId, data);
            } else {
                // store to ES
                es.store(logId, data);
            }
        }
    }
}
```
In the pipeline example above, each new requirement requires modification of the pipeline module, which is a clear violation of OCP. Now, let’s optimize it to satisfy OCP.
The first step is to separate the change points. From Pipeline's processing logic we can identify three independent change points: data input, data processing, and data output. The second step is to abstract these three change points, designing the following three abstract interfaces:
```java
// demo/src/main/java/com/yrunz/designpattern/monitor/input/InputPlugin.java
// data input abstract interface
public interface InputPlugin extends Plugin {
    Event input();
    void setContext(Config.Context context);
}

// demo/src/main/java/com/yrunz/designpattern/monitor/filter/FilterPlugin.java
// data processing abstract interface
public interface FilterPlugin extends Plugin {
    Event filter(Event event);
}

// demo/src/main/java/com/yrunz/designpattern/monitor/output/OutputPlugin.java
// data output abstract interface
public interface OutputPlugin extends Plugin {
    void output(Event event);
    void setContext(Config.Context context);
}
```
Finally, Pipeline is implemented as follows, depending only on the three abstract interfaces InputPlugin, FilterPlugin, and OutputPlugin. When a new requirement arrives, we only need to extend the corresponding interface with a new implementation; Pipeline itself never has to change:
```java
// demo/src/main/java/com/yrunz/designpattern/monitor/pipeline/Pipeline.java
// ETL process definition
public class Pipeline implements Plugin {
    final InputPlugin input;
    final FilterPlugin filter;
    final OutputPlugin output;
    final AtomicBoolean isClose;

    public Pipeline(InputPlugin input, FilterPlugin filter, OutputPlugin output) {
        this.input = input;
        this.filter = filter;
        this.output = output;
        this.isClose = new AtomicBoolean(false);
    }

    // run the pipeline
    public void run() {
        while (!isClose.get()) {
            Event event = input.input();
            event = filter.filter(event);
            output.output(event);
        }
    }
    ...
}
```
OCP is the holy grail of software design: we all want software to which we can add functionality without touching old code. But being 100% closed to modification is impossible, and complying with OCP has a cost. It requires the designer to identify, for a given business scenario, the points most likely to change, and then isolate them and abstract them into stable interfaces. That in turn demands rich domain experience and deep familiarity with the business scenarios of that domain; otherwise, blindly separating change points and over-abstracting will only make the system more complex.
LSP: Liskov Substitution Principle
The previous section showed that the key to OCP is abstraction, and whether an abstraction is sound is exactly the question the Liskov Substitution Principle (LSP) answers.
The initial definition of LSP is as follows:
If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2 then S is a subtype of T.
Simply put, a subtype must be substitutable for its base type; every property that holds for the base class must still hold for its subclasses. A simple example: suppose a function f takes a parameter of base type B, and B has a derived class D. If an instance of D is passed to f, f's behavior should not change.
The consequences of violating LSP are serious: the program may fail in unexpected ways. Let's look at the classic counterexample of rectangles and squares.
First, we define a Rectangle class, which provides setWidth and setLength methods and an area method for computing its area:
```java
public class Rectangle {
    private int width;   // width
    private int length;  // length

    public void setWidth(int width) {
        this.width = width;
    }

    public void setLength(int length) {
        this.length = length;
    }

    public int area() {
        return width * length;
    }
}
```
Next comes a client program, Client. Its method f takes a Rectangle as input, sets the width to 5 and the length to 4, and then verifies that the area equals 20:
```java
public class Client {
    public void f(Rectangle rectangle) {
        rectangle.setWidth(5);
        rectangle.setLength(4);
        if (rectangle.area() != 20) {
            throw new RuntimeException("rectangle's area is invalid");
        }
        System.out.println("rectangle's area is valid");
    }

    public static void main(String[] args) {
        Rectangle rectangle = new Rectangle();
        Client client = new Client();
        client.f(rectangle);
        // output: rectangle's area is valid
    }
}
```
Now we add a new type, Square. Because a square is mathematically a rectangle, we make Square inherit from Rectangle. A square's width and length must always be equal, so Square overrides the setWidth and setLength methods:
```java
public class Square extends Rectangle {
    @Override
    public void setWidth(int width) {
        super.setWidth(width);
        super.setLength(width);  // keep the length equal to the width
    }

    @Override
    public void setLength(int length) {
        super.setLength(length);
        super.setWidth(length);  // keep the width equal to the length
    }
}
```
Now we instantiate a Square and pass it to Client.f:
```java
public static void main(String[] args) {
    Square square = new Square();
    Client client = new Client();
    client.f(square);
}
// output:
// Exception in thread "main" java.lang.RuntimeException: rectangle's area is invalid
//     at com.yrunz.designpattern.service.mediator.Client.f(Client.java:8)
//     at com.yrunz.designpattern.service.mediator.Client.main(Client.java:16)
```
The behavior of Client.f has changed: the subtype Square cannot substitute for its base type Rectangle, which violates LSP.
The root cause of the violation is that we designed the model in isolation, without checking the design's correctness from the perspective of the client program. We assumed that a relationship that holds mathematically (a square IS-A rectangle) must also hold in the program, ignoring how the client actually uses it (set the width to 5, set the length to 4, then expect the area to be 20).
This example tells us that the correctness or validity of a model can only be determined by a client program.
Below are some constraints to follow when designing an LSP-compliant model within an inheritance (IS-A) hierarchy:
- The base class should be designed as an abstract class (never instantiated directly, only inherited).
- Subclasses should implement the abstract interface of the base class rather than override concrete methods the base class has already implemented.
- A subclass may add new functionality, but it must not change the functionality of the base class.
- A subclass must not add constraints that the base class does not have, including throwing exceptions the base class does not declare.
Looking back at the rectangle/square example: 1) Rectangle is a concrete class that can be instantiated and inherited directly, which breaks constraint 1; 2) Square overrides the setWidth and setLength methods already implemented by the base class, which breaks constraint 2; 3) Square adds a new constraint, that width and length must always be equal, which breaks constraint 4. The design therefore violates LSP.
Besides inheritance, the other mechanism for abstraction is the interface. If we design against interfaces, constraints 1 to 3 are satisfied automatically: 1) an interface cannot be instantiated, satisfying constraint 1; 2) an interface has no concrete method implementations (Java's default methods are an exception that we ignore here), so nothing can be overridden, satisfying constraint 2; 3) an interface only defines a behavioral contract and has no functionality of its own, so its functionality cannot be changed, satisfying constraint 3.
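For example, here is a minimal sketch (the Shape interface and these simplified classes are illustrative, not code from the Demo) of the rectangle/square model redesigned around an interface: Rectangle and Square share only the area() contract, and neither inherits the other's setters.

```java
// Illustrative sketch: model only the genuinely shared behavior as an interface.
interface Shape {
    int area();
}

class Rectangle implements Shape {
    private int width;
    private int length;

    public void setWidth(int width) { this.width = width; }
    public void setLength(int length) { this.length = length; }

    @Override
    public int area() { return width * length; }
}

class Square implements Shape {
    private int side;

    public void setSide(int side) { this.side = side; }

    @Override
    public int area() { return side * side; }
}
```

A client like Client.f, which relies on setting width and length independently, would then declare its parameter as Rectangle rather than Shape, so no subtype can silently break its expectations.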
Therefore, using interfaces instead of inheritance to implement polymorphism and abstraction avoids many unintended errors. However, interface-oriented design still has to respect constraint 4. The following uses the distributed application system Demo to show an implementation that implicitly breaks constraint 4 and therefore violates LSP.
Take the monitoring system as an example. To make the ETL process flexibly configurable, the pipeline's behavior (where to get the data, what processing to apply, and where to store the result) is defined in configuration files. Both JSON and YAML configuration formats must be supported. A YAML configuration looks like this:
```yaml
# src/main/resources/pipelines/pipeline_0.yaml
name: pipeline_0              # pipeline name
type: single_thread           # pipeline type
input:                        # input plugin definition
  type: memory_mq             # input plugin type
  context:                    # input plugin initialization context
    topic: monitor.topic      # topic to consume from
filter:                       # filter plugin definitions
  - name: filter_0            # filter_0: convert the raw log to JSON
    type: log_to_json
  - name: filter_1            # filter_1: add a timestamp
    type: add_timestamp
  - name: filter_2            # filter_2: convert the JSON to a monitor event
    type: json_to_monitor_event
output:                       # output plugin definition
  name: output_0              # output plugin name
  type: memory_db             # output plugin type
  context:                    # output plugin initialization context
    tableName: monitor_event_0
```
First we define a Config interface to represent the “configuration” abstraction:
```java
// demo/src/main/java/com/yrunz/designpattern/monitor/config/Config.java
public interface Config {
    // load the configuration from a conf string
    void load(String conf);
}
```
The input, filter, and output sub-sections of the configuration above can be regarded as the configuration items of the InputPlugin, FilterPlugin, and OutputPlugin plugins, and the Pipeline plugin's configuration is composed of them. So we define the following abstract classes implementing Config:
```java
// demo/src/main/java/com/yrunz/designpattern/monitor/config/InputConfig.java
public abstract class InputConfig implements Config {
    protected String name;
    protected InputType type;
    protected Context ctx;
    @Override
    public abstract void load(String conf);
    ...
}

// demo/src/main/java/com/yrunz/designpattern/monitor/config/FilterConfig.java
public abstract class FilterConfig implements Config {
    protected List<Item> items;
    @Override
    public abstract void load(String conf);
    ...
}

// demo/src/main/java/com/yrunz/designpattern/monitor/config/OutputConfig.java
public abstract class OutputConfig implements Config {
    protected String name;
    protected OutputType type;
    protected Context ctx;
    @Override
    public abstract void load(String conf);
    ...
}

// demo/src/main/java/com/yrunz/designpattern/monitor/config/PipelineConfig.java
public abstract class PipelineConfig implements Config {
    protected String name;
    protected PipelineType type;
    protected final InputConfig inputConfig;
    protected final FilterConfig filterConfig;
    protected final OutputConfig outputConfig;
    @Override
    public abstract void load(String conf);
    ...
}
```
Finally, we implement the concrete JSON-based and YAML-based subclasses:
```java
// subclasses that load Config from JSON, under: src/main/java/com/yrunz/designpattern/monitor/config/json
public class JsonInputConfig extends InputConfig {...}
public class JsonFilterConfig extends FilterConfig {...}
public class JsonOutputConfig extends OutputConfig {...}
public class JsonPipelineConfig extends PipelineConfig {...}

// subclasses that load Config from YAML, under: src/main/java/com/yrunz/designpattern/monitor/config/yaml
public class YamlInputConfig extends InputConfig {...}
public class YamlFilterConfig extends FilterConfig {...}
public class YamlOutputConfig extends OutputConfig {...}
public class YamlPipelineConfig extends PipelineConfig {...}
```
Because we are going from configuration to object instantiation, it is natural to use the factory pattern to create the objects. And since Pipeline, InputPlugin, FilterPlugin, and OutputPlugin all implement the Plugin interface, it is tempting to define a PluginFactory interface to represent the "plugin factory" abstraction and have the concrete plugin factories implement it:
```java
// plugin factory abstraction
public interface PluginFactory {
    Plugin create(Config config);
}

public class InputPluginFactory implements PluginFactory {
    ...
    @Override
    public InputPlugin create(Config config) {
        InputConfig conf = (InputConfig) config;
        try {
            Class<?> inputClass = Class.forName(conf.type().classPath());
            InputPlugin input = (InputPlugin) inputClass.getConstructor().newInstance();
            input.setContext(conf.context());
            return input;
        }
        ...
    }
}

public class FilterPluginFactory implements PluginFactory {
    ...
    @Override
    public FilterPlugin create(Config config) {
        FilterConfig conf = (FilterConfig) config;
        FilterChain filterChain = FilterChain.empty();
        String name = "";
        try {
            for (FilterConfig.Item item : conf.items()) {
                name = item.name();
                Class<?> filterClass = Class.forName(item.type().classPath());
                FilterPlugin filter = (FilterPlugin) filterClass.getConstructor().newInstance();
                filterChain.add(filter);
            }
        }
        ...
    }
}

public class OutputPluginFactory implements PluginFactory {
    ...
    @Override
    public OutputPlugin create(Config config) {
        OutputConfig conf = (OutputConfig) config;
        try {
            Class<?> outputClass = Class.forName(conf.type().classPath());
            OutputPlugin output = (OutputPlugin) outputClass.getConstructor().newInstance();
            output.setContext(conf.context());
            return output;
        }
        ...
    }
}

// pipeline factory
public class PipelineFactory implements PluginFactory {
    ...
    @Override
    public Pipeline create(Config config) {
        PipelineConfig conf = (PipelineConfig) config;
        InputPlugin input = InputPluginFactory.newInstance().create(conf.input());
        FilterPlugin filter = FilterPluginFactory.newInstance().create(conf.filter());
        OutputPlugin output = OutputPluginFactory.newInstance().create(conf.output());
        ...
    }
}
```
Finally, we create the Pipeline object through PipelineFactory:
```java
Config config = YamlPipelineConfig.of(YamlInputConfig.empty(), YamlFilterConfig.empty(), YamlOutputConfig.empty());
config.load(new String(Files.readAllBytes(Paths.get("pipeline_0.yaml"))));
Pipeline pipeline = PipelineFactory.newInstance().create(config);
assertNotNull(pipeline);
// result: the test passes
```
So far, the design looks reasonable and works fine.
However, careful readers may notice that the first line of each plugin factory's create method is a downcast, such as PipelineConfig conf = (PipelineConfig) config; in PipelineFactory. This implicitly requires that the argument passed to PipelineFactory.create be a PipelineConfig; if a client passes in, say, an InputConfig instance, PipelineFactory.create throws a ClassCastException.
This is a typical LSP violation. The program runs correctly as long as clients honor an unwritten contract, but the moment a client breaks that contract, the program misbehaves, and we can never predict everything clients will do.
The fix is simple: remove the PluginFactory abstraction and declare the parameter of each factory method, such as PipelineFactory.create, with its concrete configuration type. PipelineFactory can then be implemented as follows:
```java
// demo/src/main/java/com/yrunz/designpattern/monitor/pipeline/PipelineFactory.java
// pipeline plugin factory, no longer implementing the PluginFactory interface
public class PipelineFactory {
    ...
    // the factory method takes the concrete PipelineConfig type as its parameter
    public Pipeline create(PipelineConfig config) {
        InputPlugin input = InputPluginFactory.newInstance().create(config.input());
        FilterPlugin filter = FilterPluginFactory.newInstance().create(config.filter());
        OutputPlugin output = OutputPluginFactory.newInstance().create(config.output());
        ...
    }
}
```
These examples show why following LSP matters. The key to designing LSP-compliant software is to judge a design's validity and correctness against reasonable assumptions about how its clients will use it.
ISP: Interface Segregation Principle
The Interface Segregation Principle (ISP) is a principle for interface design. "Interface" here is not limited to the narrow sense of the interface keyword in Java or Go; it covers narrow-sense interfaces, abstract classes, concrete classes, and so on. ISP is defined as follows:
Clients should not be forced to depend on methods they do not use.
That is, a module should not force clients to rely on interfaces they do not want to use, and the relationship between modules should be based on a minimal set of interfaces.
Let’s take a closer look at ISP through an example.
In the figure above, Client1, Client2, and Client3 all depend on Class1, but in fact Client1 only needs Class1.func1, Client2 only needs Class1.func2, and Client3 only needs Class1.func3. This design therefore violates ISP.
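A minimal sketch of that situation in code (the class and method names follow the figure; the bodies are illustrative placeholders):

```java
// One fat class that every client must depend on in full.
class Class1 {
    void func1() { /* used only by Client1 */ }
    void func2() { /* used only by Client2 */ }
    void func3() { /* used only by Client3 */ }
}

class Client1 {
    private final Class1 dependency;

    Client1(Class1 dependency) { this.dependency = dependency; }

    void doWork() {
        dependency.func1();  // needs func1 only, yet sees func2 and func3 as well
    }
}
// Client2 and Client3 are analogous, calling func2 and func3 respectively.
```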
Violation of ISP will lead to the following two problems:
- Extra dependencies between the module and its clients. In the example above, even though Client2 and Client3 never call func1, Class1 still has to notify Client1 through Client3 whenever func1 changes, because Class1 cannot know which of them actually uses func1.
- Interface pollution. Suppose the programmer developing Client1 accidentally calls func2 instead of func1; Client1 then behaves incorrectly. In other words, Client1 is polluted by func2.
To solve these two problems, we can segregate func1, func2, and func3 behind separate interfaces:
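A minimal sketch of the segregated design, again with illustrative names following the figure:

```java
// Each client now depends only on the narrow interface it actually needs.
interface Interface1 { void func1(); }
interface Interface2 { void func2(); }
interface Interface3 { void func3(); }

class Class1 implements Interface1, Interface2, Interface3 {
    public void func1() { /* ... */ }
    public void func2() { /* ... */ }
    public void func3() { /* ... */ }
}

class Client1 {
    private final Interface1 dependency;  // only func1 is visible here

    Client1(Interface1 dependency) { this.dependency = dependency; }

    void doWork() { dependency.func1(); }
}
```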
After segregation, Client1 depends only on Interface1, and Interface1 contains only func1, so Client1 can no longer be polluted by func2 or func3. Moreover, when Class1 changes func1, only the clients that depend on Interface1 need to be notified, which greatly reduces coupling between modules.
The key to applying ISP is splitting large interfaces into small ones, and the key to splitting is getting the granularity right. To do this well, the interface designer must be familiar with the business and with how the interface will be used; designing interfaces in isolation from their usage makes it hard to satisfy ISP.
Next, we take the distributed application system Demo as an example to further introduce the implementation of ISP.
A message queue module usually has two kinds of behavior: producing and consuming. So we design an abstract Mq interface for the message queue that contains both produce and consume:
```java
// demo/src/main/java/com/yrunz/designpattern/mq/Mq.java
// message queue abstract interface
public interface Mq {
    Message consume(String topic);
    void produce(Message message);
}

// demo/src/main/java/com/yrunz/designpattern/mq/MemoryMq.java
// the current implementation provides MemoryMq, an in-memory message queue
public class MemoryMq implements Mq {...}
```
Two modules in the Demo use this interface: MemoryMqInput as a consumer and AccessLogSidecar as a producer:
```java
public class MemoryMqInput implements InputPlugin {
    private String topic;
    private Mq mq;
    ...
    @Override
    public Event input() {
        Message message = mq.consume(topic);
        Map<String, String> header = new HashMap<>();
        header.put("topic", topic);
        return Event.of(header, message.payload());
    }
    ...
}

public class AccessLogSidecar implements Socket {
    private final Mq mq;
    private final String topic;
    ...
    @Override
    public void send(Packet packet) {
        if ((packet.payload() instanceof HttpReq)) {
            String log = String.format("[%s][SEND_REQ]send http request to %s", packet.src(), packet.dest());
            Message message = Message.of(topic, log);
            mq.produce(message);
        }
        ...
    }
    ...
}
```
From the domain model's point of view, the design of the Mq interface is fine: a message queue should offer both consume and produce. But from the client programs' point of view it violates ISP: MemoryMqInput needs only the consume method, and AccessLogSidecar needs only the produce method.
One option is to split the Mq interface into two interfaces, Consumable and Producible, and have MemoryMq implement both directly:
```java
// demo/src/main/java/com/yrunz/designpattern/mq/Consumable.java
// consumer interface, providing only the consume method
public interface Consumable {
    Message consume(String topic);
}

// demo/src/main/java/com/yrunz/designpattern/mq/Producible.java
// producer interface, providing only the produce method
public interface Producible {
    void produce(Message message);
}

// demo/src/main/java/com/yrunz/designpattern/mq/MemoryMq.java
// MemoryMq implements both Consumable and Producible
public class MemoryMq implements Consumable, Producible {...}
```
On reflection, though, this design does not quite fit the message queue domain model, because the Mq abstraction itself really ought to exist.
A better design keeps the abstract Mq interface and has it extend Consumable and Producible. This layered design satisfies ISP while keeping the implementation faithful to the message queue domain model:
The concrete implementation is as follows:
```java
// demo/src/main/java/com/yrunz/designpattern/mq/Mq.java
// message queue interface: extends Consumable and Producible, so it has both consume and produce
public interface Mq extends Consumable, Producible {}

// demo/src/main/java/com/yrunz/designpattern/mq/MemoryMq.java
public class MemoryMq implements Mq {...}

// demo/src/main/java/com/yrunz/designpattern/monitor/input/MemoryMqInput.java
public class MemoryMqInput implements InputPlugin {
    private String topic;
    // the consumer depends only on the Consumable interface
    private Consumable consumer;
    ...
    @Override
    public Event input() {
        Message message = consumer.consume(topic);
        Map<String, String> header = new HashMap<>();
        header.put("topic", topic);
        return Event.of(header, message.payload());
    }
    ...
}

// demo/src/main/java/com/yrunz/designpattern/sidecar/AccessLogSidecar.java
public class AccessLogSidecar implements Socket {
    // the producer depends only on the Producible interface
    private final Producible producer;
    private final String topic;
    ...
    @Override
    public void send(Packet packet) {
        if ((packet.payload() instanceof HttpReq)) {
            String log = String.format("[%s][SEND_REQ]send http request to %s", packet.src(), packet.dest());
            Message message = Message.of(topic, log);
            producer.produce(message);
        }
        ...
    }
    ...
}
```
Interface segregation reduces coupling between modules and improves system stability, but splitting interfaces too finely increases the number of interfaces in the system and with it the maintenance cost. The right granularity depends on the business scenario; following the single responsibility principle, interfaces that serve the same class of clients can be kept together.
DIP: Dependency Inversion Principle
Clean Architecture, when introducing OCP, points out that if module A is to be protected from changes in module B, then module B must depend on module A. But module A needs to use module B's functionality, so how can B depend on A instead? That is what the Dependency Inversion Principle (DIP) is about.
DIP is defined as follows:
High-level modules should not import anything from low-level modules. Both should depend on abstractions.
Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.
That is:
1. High-level modules should not depend on low-level modules; both should depend on abstractions
2. Abstractions should not depend on details; details should depend on abstractions
The definition of DIP contains four key terms: high-level module, low-level module, abstraction, and detail. Understanding these four terms is essential to understanding DIP.
(1) High-level module and low-level module
Generally speaking, high-level modules are the ones that contain the application's core business logic and policies; they are the soul of the application. Low-level modules are usually infrastructure, such as databases and web frameworks, which exist mainly to help the high-level modules do their job.
(2) Abstraction and detail
As we saw in the section "OCP: Open-Closed Principle", an abstraction is what many details have in common; it is the result of repeatedly ignoring details.
Now back to the definition of DIP. The second point is easy to accept: an abstraction that depended on details would no longer be an abstraction, while details depending on abstractions is always fine.
The key to DIP lies in the first point. Intuitively, the high-level module needs the low-level module to get its work done, which seems to force the high-level module to depend on the low-level one. But in software we can turn this dependency upside down, and the key, once again, is abstraction: ignore the details of the low-level module, abstract a stable interface from it, have the high-level module depend on that interface, and have the low-level module implement it. The dependency is thereby inverted:
The main reason for inverting the dependency between high-level and low-level modules is that the core, high-level modules should not be affected by changes in low-level modules. There should be only one reason for a high-level module to change: a change in business requirements from the software's users.
Below, we illustrate DIP with the distributed application system Demo.
When a new service registers with the service registry Registry, Registry needs to save the service's information (service ID, service type, and so on) so that it can be returned to clients during subsequent service discovery. Registry therefore needs a database to support its business. As it happens, our database module provides an in-memory database, MemoryDb, so we could implement Registry like this:
```java
public class Registry implements Service {
    ...
    // depends directly on MemoryDb
    private final MemoryDb db;
    private final SvcManagement svcManagement;
    private final SvcDiscovery svcDiscovery;

    private Registry(...) {
        ...
        // instantiates MemoryDb directly
        this.db = MemoryDb.instance();
        this.svcManagement = new SvcManagement(localIp, this.db, sidecarFactory);
        this.svcDiscovery = new SvcDiscovery(this.db);
    }
    ...
}

// in-memory database
public class MemoryDb {
    private final Map<String, Table<?, ?>> tables;
    ...
    // query a table record
    public <PrimaryKey, Record> Optional<Record> query(String tableName, PrimaryKey primaryKey) {
        Table<PrimaryKey, Record> table = (Table<PrimaryKey, Record>) tableOf(tableName);
        return table.query(primaryKey);
    }
    // insert a table record
    public <PrimaryKey, Record> void insert(String tableName, PrimaryKey primaryKey, Record record) {
        Table<PrimaryKey, Record> table = (Table<PrimaryKey, Record>) tableOf(tableName);
        table.insert(primaryKey, record);
    }
    // update a table record
    public <PrimaryKey, Record> void update(String tableName, PrimaryKey primaryKey, Record record) {
        Table<PrimaryKey, Record> table = (Table<PrimaryKey, Record>) tableOf(tableName);
        table.update(primaryKey, record);
    }
    // delete a table record
    public <PrimaryKey> void delete(String tableName, PrimaryKey primaryKey) {
        Table<PrimaryKey, ?> table = (Table<PrimaryKey, ?>) tableOf(tableName);
        table.delete(primaryKey);
    }
    ...
}
```
With this design the dependency runs from Registry to MemoryDb, that is, from a high-level module to a low-level module. Such a dependency is fragile: if one day we need to store the service information in DiskDb instead of MemoryDb, we have to change Registry's code as well:
```java
public class Registry implements Service {
    ...
    // the dependency switches to DiskDb
    private final DiskDb db;
    ...
    private Registry(...) {
        ...
        // instantiates DiskDb instead
        this.db = DiskDb.instance();
        this.svcManagement = new SvcManagement(localIp, this.db, sidecarFactory);
        this.svcDiscovery = new SvcDiscovery(this.db);
    }
    ...
}
```
A better design inverts the dependency between Registry and MemoryDb. First, we abstract a stable interface, Db, from the detail MemoryDb:
```java
// demo/src/main/java/com/yrunz/designpattern/db/Db.java
// database abstract interface
public interface Db {
    <PrimaryKey, Record> Optional<Record> query(String tableName, PrimaryKey primaryKey);
    <PrimaryKey, Record> void insert(String tableName, PrimaryKey primaryKey, Record record);
    <PrimaryKey, Record> void update(String tableName, PrimaryKey primaryKey, Record record);
    <PrimaryKey> void delete(String tableName, PrimaryKey primaryKey);
    ...
}
```
Next, we make Registry rely on the Db interface and MemoryDb implement the Db interface to complete the dependency inversion:
```java
// demo/src/main/java/com/yrunz/designpattern/service/Registry.java
// service registry
public class Registry implements Service {
    ...
    // depends on the abstract Db interface
    private final Db db;
    private final SvcManagement svcManagement;
    private final SvcDiscovery svcDiscovery;

    private Registry(..., Db db) {
        ...
        // the Db implementation is injected
        this.db = db;
        this.svcManagement = new SvcManagement(localIp, this.db, sidecarFactory);
        this.svcDiscovery = new SvcDiscovery(this.db);
    }
    ...
}

// demo/src/main/java/com/yrunz/designpattern/db/MemoryDb.java
// in-memory database, now implementing the Db interface
public class MemoryDb implements Db {
    private final Map<String, Table<?, ?>> tables;
    ...
    @Override
    public <PrimaryKey, Record> Optional<Record> query(String tableName, PrimaryKey primaryKey) {...}
    @Override
    public <PrimaryKey, Record> void insert(String tableName, PrimaryKey primaryKey, Record record) {...}
    @Override
    public <PrimaryKey, Record> void update(String tableName, PrimaryKey primaryKey, Record record) {...}
    @Override
    public <PrimaryKey> void delete(String tableName, PrimaryKey primaryKey) {...}
    ...
}

// demo/src/main/java/com/yrunz/designpattern/Example.java
public class Example {
    // dependency injection is completed in the main function
    public static void main(String[] args) {
        ...
        Registry registry = Registry.of(..., MemoryDb.instance());
        registry.run();
    }
}
```
When high-level modules depend on abstract interfaces, the implementation details (the low-level modules) have to be injected into the high-level modules at some time and place. In the example above, we chose to inject MemoryDb into the Registry object when it is created in the main function.
In general, dependency injection is done in the main/startup function. Common injection methods include the following (a brief sketch follows the list):
- Constructor injection (the method used by Registry)
- Setter method injection
- Providing a dedicated dependency-injection interface for clients to call directly
- Injection through a framework, such as the annotation-based injection in the Spring Framework
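As a minimal sketch of the second and third options (the ReportService class and DbInjectable interface below are illustrative, not part of the Demo), both hand a Db implementation to the object after it has been constructed:

```java
// Illustrative sketch of setter injection and a dedicated injection interface.
interface DbInjectable {
    void injectDb(Db db);  // explicit dependency-injection interface
}

class ReportService implements DbInjectable {
    private Db db;

    // setter injection: the wiring code calls setDb(...) after construction
    public void setDb(Db db) {
        this.db = db;
    }

    // injection-interface style: the startup code calls this explicitly
    @Override
    public void injectDb(Db db) {
        this.db = db;
    }
}
```

Either way, the high-level class refers only to the Db abstraction; only the wiring code in main knows which concrete implementation is used.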
DIP applies not only to the design of modules, classes, and interfaces, but also at the architecture level: both DDD's layered architecture and Uncle Bob's clean architecture make heavy use of DIP.
Of course, DIP does not mean that high-level modules may depend only on abstract interfaces; what they should depend on is anything stable, whether an interface, an abstract class, or a concrete class. If a concrete class is stable, like String in Java, then depending on it is not a problem. Conversely, if an abstract interface is unstable and changes frequently, then depending on it still goes against the spirit of DIP, and such an interface can hardly be called a proper abstraction.
Conclusion
This article has discussed, at some length, the SOLID principles, the core ideas behind the 23 design patterns, which can guide us toward highly cohesive, loosely coupled software systems. But principles are only principles; to bring them into real engineering projects we still need proven practical experience to draw on, and that practical experience is exactly the design patterns we will discuss next.
The best way to learn design patterns is to practice them. In the follow-up articles we will use the distributed application system Demo from this article as a running example and walk through each of the 23 GoF design patterns: its structure, the scenarios it suits, how to implement it, and its advantages and disadvantages, so that you gain a deeper understanding of design patterns and can apply the right pattern in the right place without abusing them.
References
Clean Architecture, Robert C. Martin (” Uncle Bob “)
Agile Software Development: Principles, Patterns, and Practices, Robert C. Martin (” Uncle Bob “)
23 Design Patterns of GoF with Go (in Chinese), Yuan Runzi
A Detailed Interpretation of the Liskov Substitution Principle (LSP) (article in Chinese)
For more articles, follow the WeChat official account: Yuan Runzi's Invitation.