(Technical sharing summary within the team)
Problems in current development practices
- Service logic leakage. Business logic that should live in services leaks into other layers (Controller, Repository, View, etc.), leaving services that should be rich anaemic.
- All-in-one Services, manifested as huge amounts of code (the goods and order Services in vshop each exceed 1,000 lines) covering every kind of function. OrderService, for example, contains almost everything related to orders, with no further fine-grained modeling.
- Duplicated functionality: duplicated services (such as MemberService) and duplicated methods (especially across modules). These duplicates are mostly identical, differing only in minor details.
- Some services are riddled with all kinds of query functions (list and single-record queries) that make the Service look odd.
- Anaemic model. Almost everything sits in the Service layer, most often around queries (lists, single records). This comes from the default convention that a Controller must always read and write through a Service and may never access a Repository directly. In fact, many queries only need data returned by a Repository, and the Service has nothing to do.
- Modules and systems divided from a technical perspective. For example, when all job tasks are placed in one system (such as message-Center), the jobs inevitably carry their own business logic (such as membership consolidation), so the same business logic has to appear in multiple places.
- "Front and back separation." The separation here means splitting two completely independent systems along the front-site/back-site boundary (for example, the member system is split into a member-Center system for merchants and a V-Member system for C-end users), so a lot of business logic is implemented twice across the two systems.
- The classic "framework problem." This problem plagues almost everyone in the company and will continue to do so. Its essence is the friction between agile teams/companies and traditional architectural thinking, and it is in fact a demonstration of Conway's Law: organizational structure determines software architecture. This topic is discussed in more detail later.
- The "OO (object orientation) is useless" theory. We write "solid OO fundamentals" on our resumes when applying for jobs, yet we are inherently suspicious of and resistant to OO, and in practice do procedural development inside an object-oriented framework. The question is: why do we resist OO so much and love procedural development so much? And conversely: why should we embrace OO, and what does it offer over procedural development?
- Two-dimensional design. Or: "when MVC is my claw hammer, the world is full of nails." When we use MVC as the single design pattern to solve everything, we fall into two-dimensional design and can no longer see problems in three dimensions. Both thinking (design) and coding happen on a plane (the rote Controller -> Service -> Repository call chain). This way of working tends to produce formalized processes and constraints, and formalized things tend to restrict people's thinking until the thinking stops.
Overview of related design patterns and architectures
-
Inversion of control, or dependency inversion, is one of the five SOLID principles (the D).
Description: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.
Note the mention of high and low levels: this principle primarily addresses cross-layer dependencies (such as our Service and Repository). Normally the higher level uses things from the lower level (a Service uses a Repository) and therefore depends on it, so when the lower-level implementation has to be replaced, the higher-level code has to change. IoC requires that the upper level not depend on the lower-level implementation, and that changes in the lower-level implementation not affect the upper level. How is that done? Through interfaces, i.e. abstractions. The higher and lower levels both adhere to the same abstraction and communicate only through that abstraction (the interface).
Another way of looking at this is to separate the use of an object from its creation: the consumer (caller, the dependent side) only uses the object and never creates it; creation happens externally, so when the implementation of the dependency has to change, the consumer's code does not. For example, our Service needs a Repository. The traditional way is to new the Repository inside the Service. If we later need to replace that Repository with another one (Redis or NoSQL), we must modify every place the Repository is used. With IoC, because the Repository is created externally, only that external place needs to be adjusted.
This change from internal new to external injection is called dependency injection (DI). Constructor injection and the service container (\Yii::$app->container) in the Yii framework are two implementations of DI.
IoC is a design principle and DI is its implementation.
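A minimal sketch of constructor injection against an interface in PHP (the class names here are illustrative, not taken from our codebase):

```php
<?php

// The abstraction both levels depend on.
interface OrderRepositoryInterface
{
    public function findById(int $id): ?array;
}

// One low-level implementation; a Redis/NoSQL version could be swapped in later.
class MysqlOrderRepository implements OrderRepositoryInterface
{
    public function findById(int $id): ?array
    {
        // ... query MySQL here ...
        return ['id' => $id];
    }
}

// The high-level module depends only on the interface; the concrete
// repository is injected from outside (constructor injection).
class OrderService
{
    private $repository;

    public function __construct(OrderRepositoryInterface $repository)
    {
        $this->repository = $repository;
    }

    public function detail(int $id): ?array
    {
        return $this->repository->findById($id);
    }
}

// Composition root: the only place that knows about the concrete class.
$service = new OrderService(new MysqlOrderRepository());
```

Swapping the storage technology now means changing only the composition root (or the container configuration), not the Service.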
Although this principle is mainly used for cross-layer dependencies, it also applies to decoupling within the same layer when the dependencies are not highly cohesive. (However, whenever there is a dependency between low-cohesion classes at the same level, such as calls between aggregates, first examine whether a higher-level coordinator, such as a Service, is needed.)
Another caution: do not abuse IoC. IoC is a good way to achieve loose coupling, but software design also has the principle of high cohesion alongside loose coupling. Classes with high cohesion may legitimately be strongly coupled to each other; injecting everything quickly turns into over-design.
See also: www.tuicool.com/articles/JB… From Baicao Garden to Sanwei Bookstore, by Laravel, has a good example of dependency inversion.
-
Hexagonal architecture (the ports-and-adapters pattern):
At work, we often encounter the following problems:
- Business logic leaks into every layer, and business code depends heavily on input and output (request objects, database storage objects, etc.), making clean unit testing difficult;
- Development depends on the database and cache system being up; once they fail, development stops;
- When an underlying technical implementation needs to be switched, the related business-layer code has to change.
The purpose of the hexagonal architecture is to let the application be driven in a consistent manner by users, other programs, automated tests, and batch scripts, and to let it be developed and tested in isolation from its eventual runtime devices and databases.
For example, we have a membership-consolidation requirement that can be triggered from a number of scenarios: a web application, an API, background batch processing, asynchronous message-queue processing, and so on. At present each scenario carries its own copy of the merge logic, which is hard to maintain, and the merge logic itself is tightly coupled to the storage layer, so we cannot unit test the code without the storage layer in place (in fact it cannot be unit tested at all). Under the hexagonal architecture, the web application, the API and the background processing are different input sources that must be decoupled from the application itself (the membership merge service). Each input source adapts its own input, through its own adapter, into a data format the application understands. The adapter and the application's port adhere to a common contract (interface) and communicate only through it. Output works the same way: the application itself does not depend on any concrete output implementation (web page, database, API response, test double); each output adapter converts the application's output into whatever its own output side needs, such as HTML, XML, or a specific database.
When we change the application's external dependencies from implementations to interfaces, and pull the business logic that has leaked to the periphery (controllers, repositories, etc.) back inside the application, unit testing becomes easy: we can use mocks in place of the actual web input, storage layer, cache components, and so on.
The hexagonal architecture is usually drawn as two concentric layers: the business logic layer (domain + application; for now simply call it the domain or business logic layer), also known as the application (the inner layer, the inner circle); and the peripheral infrastructure layer (web, unit tests, database, cache server, etc.; the outer layer, the outer circle). The inner layer does not depend on the existence of the outer layer (for example, when the database is unavailable, the business logic layer should still be able to run with the database replaced by something else). The inner layer provides functions (services) to the outer layer, or receives data from it, by exposing ports (APIs or function calls), much like the ports of an operating system. A port is represented by a contract interface (the parameters and return value of an API or function). The outer layers do not interact with the inner layer directly; they communicate through their respective adapters (a controller is a typical adapter). An adapter converts the inner layer's output into the data needed by the outer device it serves, or converts the outer device's input into the data the inner layer needs. The adapter and the inner layer communicate exclusively through the contract.
An analogy: a computer needs to receive input from various peripherals for processing. Here the computer is the inner application, and the peripherals (USB drive, phone, network, etc.) are the outer layer. For external communication the host exposes various sockets (ports), each following its own specification. A peripheral exchanges data with the computer through an adapter (data cables, converter devices) that follows two protocols: the computer's socket on one end and the specific peripheral's socket on the other.
IoC is an effective means of implementing the hexagonal architecture: the inner layer must not depend on outer-layer implementations; both sides depend on the interface definition, which is exactly what IoC prescribes. But the hexagonal architecture adds one more requirement: the inner layer's business logic must not leak out to the outer layer, because once it does, the inner layer is no longer generic and implementation-independent, and multilateral adaptation becomes impossible.
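A rough sketch of the membership-merge example in this structure; MemberMergeService, MemberStore and the points logic are illustrative assumptions, the point being that the inner layer sees only contracts:

```php
<?php

// Inbound port: the contract that driving adapters (web, API, job, queue) call.
interface MemberMergeService
{
    public function merge(int $sourceMemberId, int $targetMemberId): void;
}

// Outbound port: the contract the inner layer needs from storage,
// implemented by an adapter in the infrastructure layer.
interface MemberStore
{
    public function load(int $memberId): array;
    public function save(array $member): void;
}

// Inner layer: pure business logic, no web or database details.
class MemberMerger implements MemberMergeService
{
    private $store;

    public function __construct(MemberStore $store)
    {
        $this->store = $store;
    }

    public function merge(int $sourceMemberId, int $targetMemberId): void
    {
        $source = $this->store->load($sourceMemberId);
        $target = $this->store->load($targetMemberId);
        $target['points'] = ($target['points'] ?? 0) + ($source['points'] ?? 0);
        $this->store->save($target);
        // In a unit test, MemberStore is replaced by an in-memory fake.
    }
}
```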
Note also that the terms are "inner layer" and "outer layer", not "upper layer" and "lower layer"; the emphasis is on multilateral adaptation, not on a stack of layers like the seven-layer network model.
We find that in practice we already use pieces of the hexagonal architecture (such as IoC), so why is the code still so hard to test and maintain? Using one piece of the hexagon does not mean the whole application follows the hexagonal architecture. For example, we use DI, but without being clear why we use it, or where DI is needed and where it is not. More importantly, our business logic is not well encapsulated and decoupled, so it is naturally hard to maintain.
Refer to the article: blog.csdn.net/zhongjinggz…
-
The nine GRASP patterns:
The GRASP patterns mainly address module division and responsibility assignment in OOD. Here we focus on the Information Expert pattern and the Creator pattern.
-
Information Expert pattern: assign a responsibility to the class that has the information necessary to fulfil it, i.e. the information expert.
The Information Expert pattern answers the question of who should take on a responsibility. Start by looking at where the data (information) needed by the responsibility (method) comes from, and typically assign the responsibility to the class that owns most of that data.
For example, a forum system has an Article class and an Author class. Which class publishes articles? Suppose the Author class is responsible (from the requirement "the author publishes an article" it seems to belong to Author), called as Author::publish(Article). We then find that the data used in publish() comes almost entirely from Article (apart from the author information). When Article's data structure changes we must modify the Author class, and inside publish() we must call Article's methods for business-rule validation. The two classes are strongly coupled, and the design violates the single responsibility and open-closed principles of SOLID. If instead the publish() method lives in the Article class, the required data is self-contained and the validation rules stay internal without being exposed, so rule changes do not affect other classes.
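A small sketch of the second option, with publish() and its validation living on Article (the rule shown is illustrative):

```php
<?php

class Article
{
    private $title;
    private $body;
    private $published = false;

    public function __construct(string $title, string $body)
    {
        $this->title = $title;
        $this->body = $body;
    }

    // The information expert owns both the data and the rules that use it.
    public function publish(): void
    {
        if ($this->title === '' || $this->body === '') {
            throw new DomainException('An article needs a title and a body to be published.');
        }
        $this->published = true;
    }
}
```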
The Information Expert pattern must be combined with the high cohesion and low coupling patterns. Take persistence of entity objects (save()): the data being persisted clearly belongs to the entity, so by Information Expert the save() method would live on the entity class; but persistence is a low-level technical responsibility, not business logic, and should not be the entity's job. We handle entity persistence with a separate repository instead.
-
Creator pattern: who should be responsible for creating instances of a class?
This pattern assigns creation responsibility: if B contains or aggregates A, or closely uses A, then B creates A.
An Article instance holds a reference to an Author instance. Who creates the Author object? One option is to create the Author outside and pass it into the Article constructor, but that exposes Article's internal details to the outside world. It is better for Article to create the Author object itself and hide the implementation detail.
The Creator pattern, too, must be combined with the high cohesion and low coupling patterns.
Consider another example: whenever an article is published, a notification must be sent to the relevant subscribers. ArticleService calls Article::publish(), gets the list of subscribers, and calls Email::sendMessage() to mail them. ArticleService uses an Email instance; by the Creator pattern, should ArticleService create the Email instance internally? If so, consider what happens when the Email implementation later needs to be replaced: you must find every place the Email instance is used and replace them one by one. Clearly this case calls for injecting the Email instance from outside, using inversion of control.
What’s the difference between these two examples?
In the former example, Article and Author form an aggregation, a strong relationship that together makes up a business whole, so the Creator pattern can and should be used to hide internal details. In the latter, ArticleService and Email have a pure "use" relationship, which is very weak, and the call crosses a boundary (ArticleService belongs to the domain layer, Email to the infrastructure layer). In hexagonal terms, ArticleService sits in the inner circle and Email in the outer circle, and the inner circle must not depend on an outer-circle implementation, so the Creator pattern cannot be used here; IoC should be used instead to keep the coupling low.
In the Creator pattern's list of conditions, "uses" comes last as the weakest relationship. In practice, if the relationship is only "uses", apply the Creator pattern with caution.
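The two cases side by side, as a sketch (Mailer is an assumed interface standing in for the Email component):

```php
<?php

interface Mailer
{
    public function sendMessage(string $to, string $subject, string $body): void;
}

class Author
{
    private $name;

    public function __construct(string $name)
    {
        $this->name = $name;
    }
}

class Article
{
    private $author;

    public function __construct(string $authorName)
    {
        // Creator pattern: Article aggregates Author, so it creates it and hides the detail.
        $this->author = new Author($authorName);
    }
}

class ArticleService
{
    private $mailer;

    // Pure "use" relationship across the domain/infrastructure boundary: inject, don't create.
    public function __construct(Mailer $mailer)
    {
        $this->mailer = $mailer;
    }
}
```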
-
SOLID principle:
The SOLID principles are the five most fundamental and important classical principles of object-oriented design and programming (the word is an acronym of the five principles):
-
Single responsibility principle: a class should have one, and only one, reason to change.
Article has publish() for publishing and save() for saving to the database.
Now consider save(): it persists the article object to the database. One day the persistence strategy changes (MySQL is replaced with MongoDB) and we need to swap the persistence engine, so we have to modify save(). "Changing the persistence strategy" obviously has nothing to do with Article itself, yet it forces a change to the Article class, which violates the single responsibility principle.
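A sketch of the split, with persistence pulled out of the entity (names illustrative):

```php
<?php

// The entity keeps only business behaviour ...
class Article
{
    public function publish(): void
    {
        // business rules for publishing
    }
}

// ... while persistence lives behind a separate abstraction, so a change of
// storage engine is no longer a reason for Article to change.
interface ArticleRepository
{
    public function save(Article $article): void;
}
```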
-
Open-closed principle: code is open to extension and closed to modification.
Once an article is published, a message needs to be sent to subscribers. The previous approach gets the list of subscribers in ArticleService::publish() and sends each of them a message by calling Email::sendMessage().
Now a requirement arrives: only send messages to subscribers who have read the author's articles in the last three months. We have to modify ArticleService to check each subscriber before sending. A few days later, another requirement: only send messages to subscribers who follow the relevant column... You find that ArticleService gets changed endlessly with every change to the requirements (which is in fact what we are doing today). That is being open to modification.
Having to modify ArticleService because the subscriber-messaging rules changed also violates the single responsibility principle (the SOLID principles are interrelated; breaking one often breaks others). Instead, ArticleService::publish() can publish an ArticlePublished event, and the event can be subscribed to externally, so that any business extension is done on the subscriber side (open to extension) without touching this class (closed to modification).
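A sketch of the event-based version; EventDispatcher and ArticlePublished are assumed names, not an existing component:

```php
<?php

class ArticlePublished
{
    public $articleId;

    public function __construct(int $articleId)
    {
        $this->articleId = $articleId;
    }
}

interface EventDispatcher
{
    public function dispatch($event): void;
}

class ArticleService
{
    private $events;

    public function __construct(EventDispatcher $events)
    {
        $this->events = $events;
    }

    public function publish(int $articleId): void
    {
        // ... publish the article ...
        // Notification rules live in subscribers of this event, so new rules
        // are added as new subscribers instead of edits to this method.
        $this->events->dispatch(new ArticlePublished($articleId));
    }
}
```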
-
Liskov substitution principle: anywhere an abstraction is required, any implementation of that abstraction can be used in its place.
There is an IAnimal abstraction defining the run() and sound() methods, with implementation classes Bird, Earthworm and Person. Bird and Person each implement sound() properly, but Earthworm::sound() throws an exception. Now there is a call such as AnimalTrainer::train(IAnimal). What happens? When we pass in an Earthworm object, the result is unpredictable and an exception may be thrown (if the trainer calls sound()). From its declaration we cannot know how train(IAnimal) will use the IAnimal; by the Liskov substitution principle, an Earthworm should be usable by the AnimalTrainer like any other IAnimal (not by throwing exceptions), so this Earthworm implementation violates the principle. When an implementation class wants to implement an abstraction but does not want to honour part of its contract (protesting by throwing exceptions), the abstraction itself is badly designed.
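The violation in sketch form (illustrative names):

```php
<?php

interface IAnimal
{
    public function run(): void;
    public function sound(): string;
}

class Earthworm implements IAnimal
{
    public function run(): void
    {
    }

    public function sound(): string
    {
        // Protesting against the contract: any code that treats an Earthworm
        // as "just an IAnimal" can now blow up unexpectedly.
        throw new LogicException('Earthworms cannot make a sound.');
    }
}

class AnimalTrainer
{
    public function train(IAnimal $animal): void
    {
        $animal->sound(); // throws for Earthworm, breaking substitutability
    }
}
```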
-
Interface segregation principle: implementers should not be forced to implement methods they do not use.
As in the previous example, training an earthworm to vocalize is futile. Earthworm::sound() is completely useless: it either stays an empty no-op or throws an exception, and either way interface segregation is violated.
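One way out, sketched: split the abstraction so no implementer is forced to fake a method (interface names are illustrative):

```php
<?php

interface CanRun
{
    public function run(): void;
}

interface CanMakeSound
{
    public function sound(): string;
}

class Earthworm implements CanRun
{
    public function run(): void
    {
    }
}

class Bird implements CanRun, CanMakeSound
{
    public function run(): void
    {
    }

    public function sound(): string
    {
        return 'tweet';
    }
}

class AnimalTrainer
{
    // The trainer demands only the capability it actually uses.
    public function train(CanMakeSound $animal): void
    {
        $animal->sound();
    }
}
```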
-
The dependency inversion principle: this principle was discussed separately earlier (because it is so important for the hexagonal architecture) and will not be repeated here.
See also: www.tuicool.com/articles/JB… From Baicao Garden to Sanwei Bookstore, by Laravel, has a good example of SOLID principles.
Here’s a comprehensive example:
When customers purchase goods and place orders, the system needs to perform various validations.
Suppose order validation is performed in OrderProcessor::confirm() (this class holds a reference to an Order instance).
At first we only need to verify that there is enough stock. We create Order::validate() to perform these checks. Since most of the data used by validate() is available from the Order object itself, this fits the Information Expert pattern. Fine.
One day a request arrives to add a validation rule: verify that the person placing the order is allowed to do so (only the shop owner can place the order). We have to change Order::validate(). Then we notice that every change to the order-validation rules forces a modification of Order, violating the single responsibility and open-closed principles. So we decide to pull the "order validation" business logic into an OrderValidator class, implemented as OrderValidator::validate(Order), isolating the Order class from changes to validation rules.
Later, however, we find that a single validate() method handles all the validations and has become bloated, so after classifying the rules we extract separate methods such as validateGoodsStock() and validateBuyer(), which validate() then calls in turn.
A few days later another rule is added: the order price must be valid. So we add a validatePrice() method. Although OrderValidator isolates validation from the Order class, it still violates the open-closed principle: every new rule changes the class.
Is there any way to isolate the impact of changes in the validation business on the OrderValidator?
The answer is abstraction.
We abstract an IOrderValidator interface defining a validate(Order) contract, and create implementation classes such as GoodsStockValidator, BuyerValidator and PriceValidator, each performing its own check in its validate(Order). We then inject a collection of IOrderValidators into OrderProcessor, and confirm() calls each validator's validate() in turn.
Now adding a validation rule is no problem: just create a new IOrderValidator implementation and add it to the collection. The design is open to extension (a new validator class) and closed to modification (no other class changes).
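The whole example in sketch form (illustrative; here validators signal failure by throwing, but the real code could return results instead):

```php
<?php

class Order
{
    // order data ...
}

interface IOrderValidator
{
    public function validate(Order $order): void; // throws on failure
}

class GoodsStockValidator implements IOrderValidator
{
    public function validate(Order $order): void
    {
        // check stock ...
    }
}

class BuyerValidator implements IOrderValidator
{
    public function validate(Order $order): void
    {
        // check who is placing the order ...
    }
}

class OrderProcessor
{
    /** @var IOrderValidator[] */
    private $validators;

    public function __construct(array $validators)
    {
        $this->validators = $validators;
    }

    public function confirm(Order $order): void
    {
        // A new rule means a new validator in this list, not an edit to this class.
        foreach ($this->validators as $validator) {
            $validator->validate($order);
        }
        // ... continue confirming the order ...
    }
}

$processor = new OrderProcessor([new GoodsStockValidator(), new BuyerValidator()]);
```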
The design patterns above are very basic and general, and must be mastered for any OO implementation. Their common underlying principle is "high cohesion, low coupling", which must be reflected in OOD at all times.
-
Other design/architectural patterns
-
DDD (Domain-Driven Design):
Domain-driven design points out that in traditional development, requirements analysis, system design and code are fragmented: requirements analyst and system designer are two independent roles, and the fragmentation means the artifacts do not match one another. The system design fails to fully reflect the requirements analysis, the code drifts from the system design, and each side has its own private language, making communication difficult.
DDD emphasizes the inherent unity of requirements analysis, modeling and coding: the three (and the people who perform them) communicate in a common domain language, making it easy for business experts, designers and programmers to reach consensus.
The reality is that programmers and business experts (represented by product managers) either barely communicate at all, or communicate in a non-business (usually technical) language that deviates from the true concepts of the business domain. Programmers tend to use technical language (even raw database language) with everyone, including non-technical people such as customer service or sales, which leads to people talking past each other. Agile teams tend to have one product manager and several technical people, so the technical language easily dominates (unless the product manager consciously steers the team toward the business language).
Why do programmers like to communicate in technical language? Partly because programmers mostly talk to other programmers, for whom technical language is the cheapest. Partly because they are already thinking about the implementation while they communicate (or the communication itself is a description of the implementation).
However, communicating in technical language seriously damages business-model building. Technology and business are two different domains. The most typical technical language in practice is "database language", such as "at some point set a field in some table to 1", which means nothing to the business itself. This orientation pulls our minds away from modeling and down into the storage implementation. Worse, mixing this technical language into the business language couples the business logic into the design of the storage layer. For example, considered purely as storage design (a technical concern), "login state" should be marked by a dedicated field, whereas in the business domain the "log in" and "log out" operations each cause a state change (which the storage design records in that dedicated field). When we push too much business logic into the storage-layer design (or rather, describe too much business logic in terms of the storage implementation), we may end up using another field (one that stores some other state) to represent login state, and proudly think we have saved storage space (a typical case of technology-led thinking). The problem is that the storage representation of login state now depends on the business logic, and since business logic is unstable (relative to the storage layer), the lowest storage layer becomes unstable too. (This is exactly what happened with member login-state storage.)
DDD stresses:
-
Communication between people throughout the process, from requirements analysis to code implementation to testing, needs to use a consistent and unambiguous domain language.
-
The common language must accurately reflect business requirements and domain knowledge (rather than technical implementation).
For programmers, this idea of DDD can be summarized as: code is the model, code is the design. The code we write, the calling relationships between classes, and the naming of methods and variables should all reflect the domain language itself. DDD places a strong emphasis on naming; for DDD, programming itself is a language activity (and not in machine language). The language DDD stresses is human language, the human language of a particular domain.
The reality is that our code is full of database-oriented language: update() and delete() are everywhere in business code. "Update the value of field A to B" is technical (database) language, not business-domain language. Worse still, an updateFields() gets written in the controller layer to handle every kind of update.
However, not every technical implementation needs to be (or can be) expressed in the current business-domain language. Some implementations live outside the business domain, such as persistent storage, the event system, message queues, etc., and implementations outside the current business domain need not follow its common language (they follow the language conventions of their own domain). This kind of business-domain boundary is called a "bounded context" in DDD. Things inside the context belong to the inner circle of the hexagonal architecture, things outside belong to the outer circle, and the inner and outer circles communicate through contracts and adapters.
DDD is not just a general theory; it comes with a detailed implementation system (a methodology): entities, value objects, aggregates, aggregate roots, domain services, repositories, assorted design patterns, and so on. We will not go into the details here (that would take a thick book, and DDD is highly practical: a hundred people will practice it a hundred ways; focus on mastering its core ideas).
The point of mentioning DDD here is to contrast it with the development/design patterns we use today: service-oriented design and database-oriented design. In DDD, entities are the core, services are auxiliary, and the database is infrastructure outside the domain.
-
CQRS (Command Query Responsibility Segregation):
Commands: Operations that cause changes in entity state (as reflected in updates, deletes, inserts, etc.);
Query: An operation that does not result in an entity state change.
CQRS principle: No query in command, no command in query. For example, the common practice of querying and returning a record in a modification method is a violation of CQRS.
CQRS exists because the command model is often very different from the query model, which becomes obvious when you think about our database design. Our tables are generally designed according to the command model, which matches the business model in our heads reasonably well, whereas a report is a typical query model. A table structure designed for the command model usually cannot satisfy analytical queries, so to produce reports you either write very complex SQL or run data processing and cleaning to derive a structure that fits the query model. Obviously, running analytical queries directly against a command model performs very poorly.
CQRS can be used at different levels:
-
Read/write splitting in the traditional sense. Commands and queries use different databases, but the tables in both databases have the same structure.
-
Separation at the code level but not at the storage level. (Read/write splitting may still be used, but the table structure is the same.)
-
The code and storage layers are both separated. This is CQRS in the strict sense. The write tables are designed from the command model, the read tables from the query model, and the two sides are synchronized through events (the command side emits corresponding events after execution; the query side subscribes to them and updates the query model). With commands and queries separated at both the code and storage layers, each side can use the implementation best suited to it, for the cleanest design and the best performance.
CQRS in the strictest sense is complex to implement and requires solid infrastructure support.
We mention CQRS here partly to point out the facts above, and partly because in practice we can try the second level of CQRS to gain the code-level benefits, such as cache management and different design approaches on the two sides (for example, DDD on the command side and traditional MVC on the query side).
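A sketch of what the second level can look like in code: the command side goes through the domain model, the query side reads whatever shape the screen or report needs. Class names, the table layout and the Order/OrderRepository types are illustrative assumptions:

```php
<?php

// Command side: load the aggregate, change state, save. Nothing is returned.
class CancelOrderHandler
{
    private $orders;

    public function __construct(OrderRepository $orders)
    {
        $this->orders = $orders;
    }

    public function handle(int $orderId): void
    {
        $order = $this->orders->ofId($orderId);
        $order->cancel();
        $this->orders->save($order);
    }
}

// Query side: no domain objects, just data shaped for the use case.
class OrderQueryService
{
    private $db;

    public function __construct(PDO $db)
    {
        $this->db = $db;
    }

    public function listForMember(int $memberId): array
    {
        $stmt = $this->db->prepare(
            'SELECT id, status, total_price FROM orders WHERE member_id = ?'
        );
        $stmt->execute([$memberId]);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}
```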
-
Analysis of related concepts
-
About Service (what a service means, and what's wrong with service-oriented programming):
When we say "service", we emphasize that it provides functionality to another party. A service is dynamic, and its core is the functionality it provides. That functionality is of course provided by some provider, but we do not care about the provider itself. Services are extroverted: the value of their existence lies in their value to others (not to themselves). So when we put services at the core, we inevitably look at their external value from a bystander's perspective. When we say "message queue service", we focus on the asynchronous decoupling the message queue brings to our business systems, not on the internal mechanics of the queue itself.
This perspective is unhelpful for modeling. Wherever there is a service there is always an "I" (the bystander), whereas modeling demands a kind of selflessness: the bystander must disappear, and you have to become the model itself and think about the problem from inside it. A model is intrinsically self-sufficient, with specific behaviors of its own, rather than having behaviors "provided" to it by various services.
Take a real-world example. A software outsourcing company provides outsourced software development to others. As the client (that is, as the bystander receiving the service), we only need to tell the outsourcing company what we require; the two sides reach consensus and sign a contract, and in the end we receive from the outsourcing company the software product the contract specifies. We do not care how the outsourcing company operates internally, which team does the UI design, which does the programming, and so on. For the outsourcing company itself, however, that perspective does not work: it must design a detailed, sound operating system to guarantee it can deliver such a service. In other words, the concept "outsourcing company" must be modeled internally into a self-running whole.
What’s wrong with service-oriented programming? Isn’t it possible to model the interior of a service?
The problem is that when we are service-oriented, with services as the core carrier, we usually fail to model their internals well. The fundamental reason is that the extroversion of service orientation conflicts with the introversion of entity orientation. When we model, only one way of thinking can dominate; the others assist. Service-oriented thinking starts from the function, from what is to be done, and derives the provider from the function: the provider is auxiliary, the function primary. Entity-oriented thinking starts from who will do it, the question of the subject, and derives behavior from the subject: behavior is an intrinsic element of the subject.
Because in service-oriented programming the provider is treated as an afterthought, we do not care much about who provides the functionality. As a result, a class often ends up providing n functions; an OrderService, for instance, provides everything related to orders. Over time, the services (especially the core services of the business, such as OrderService and GoodsService in the mall) become bloated and hard to maintain. The messy assignment of responsibilities also leads to leaked business logic, as different developers may hang the same or similar functionality off different providers, so one piece of business logic ends up in several places.
Services are necessary, but systems cannot be modeled for services themselves because services are coarse-grained and should be modeled for entities.
So when should you use a Service? Imagine this scenario: Zhang San is a PHP programmer responsible for back-end development but unfamiliar with front-end technologies (JS, HTML, etc.). A back-end API task arrives; clearly Zhang San can handle it alone. Then a website development task arrives; since Zhang San does not know front-end technology, he cannot finish it alone, and front-end engineer Li Si has to get involved. Now the problem: Zhang San and Li Si are peers, neither can tell the other what to do, and the work stalls. What to do? A coordinator, such as their boss, has to step in. That coordinator is the service. The coordinator is not responsible for executing specific tasks, but for coordination, division of labour, scheduling and "diplomacy". The coordinator is also responsible for bringing in additional roles (such as operations) when it turns out mid-task that they are needed. Back end, front end and operations are each responsible only for their own work, without knowing what the others are doing or even that they exist (the three could be in three different countries).
Services are not always required, and they are not responsible for concrete business implementation. Services coordinate the execution flow of entities at a higher level and expose a single functional interface.
Services are not innate; they emerge through upward abstraction. When the work of entities at one layer cannot be coordinated among themselves, a dedicated service needs to be pulled up above them.
For example, a bank transfer involves a receiving account and a paying account: the receiving account receives money, the paying account pays money, but the two are peers. You cannot have the receiving account's receive method call the paying account's pay method, or vice versa. This is where a higher-level transfer service comes in.
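Sketched in code (illustrative):

```php
<?php

class Account
{
    private $balance;

    public function __construct(float $balance)
    {
        $this->balance = $balance;
    }

    public function pay(float $amount): void
    {
        if ($amount > $this->balance) {
            throw new DomainException('Insufficient balance.');
        }
        $this->balance -= $amount;
    }

    public function receive(float $amount): void
    {
        $this->balance += $amount;
    }
}

// The domain service coordinates two peer entities; neither account knows about the other.
class TransferService
{
    public function transfer(Account $from, Account $to, float $amount): void
    {
        $from->pay($amount);
        $to->receive($amount);
    }
}
```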
There is, of course, another case (from a different angle): when we need a capability outside our own context (and do not want to implement it ourselves), we also call that a service.
About service naming:
We currently have many class names ending in Service to mark them as services. I personally do not recommend this. Strictly speaking we cannot say "so-and-so is a service", only "so-and-so provides such-and-such a service". Banks provide lending services and Foxconn provides contract manufacturing services, but we cannot say that banks and Foxconn are services; they are service providers. A service is a capability, and the classes we suffix with Service are actually service providers: "the mail service provider provides the service of sending mail." It is more appropriate to name the provider directly, such as Email, or to name it after its main capability, such as EventScheduler, which provides scheduling services. What would an EventService even do?
-
Database-oriented programming:
What is the first thing we do when we receive business requirements and finish a rough analysis? Table structure design.
Yes, that is how most of us have always worked; a programmer's skill is even judged by how well he or she designs tables.
But let us look hard at what a database really means to a business: it is just a way of persisting data, part of the underlying support layer.
The problem with database-oriented programming is that we dive into the low-level details from the start, instead of designing and examining the underlying logic of the business system from a commanding height. Because it skips deep design and modeling, this approach easily gets lost in the forest of details. And because database design and business modeling proceed hand in hand, too much of the current business logic is baked into the table design. Such a table structure is unstable, because business rules change as requirements change; typically a single field ends up encoding multiple state values that follow (currently) fixed business rules. Database design can never be completely isolated from the business, but it is important to identify and reduce the intake of unstable business rules.
Database-oriented thinking does not stop at the initial table-design phase. While programming, we habitually write database terms such as add, delete, update and query into the business code, and the operations are aimed directly at the database.
Programmed this way, there are often no entities at all, and where they exist they are used only as data transfer objects (DTOs) (not even DAOs, because they lack even CRUD methods). These objects have no methods, and their properties map one-to-one to database tables. Models likewise degenerate into objectified representations of database tables (DAOs), each with its own CRUD.
Database-oriented programming is popular because it is quick and easy: you can bang out the code without thinking about object modeling, and on simple business it does not show its weaknesses. But once business complexity reaches a certain scale, its disadvantages are fatal: spaghetti code everywhere and business logic scattered all over. The more frequently requirements iterate, the sooner these weaknesses are exposed, and the sooner the pain arrives.
To get deep into business modeling, you have to forget about the database and recognize that it is merely a means of persistent storage, so that your mind can leave table structures behind and enter the domain model itself.
Service-oriented programming and database-oriented programming sit at opposite ends of the spectrum: the former sees only functionality (behavior), the latter only data.
-
About Repository (the repository layer's responsibilities, and who is responsible for transactions):
Hierarchically, the repository sits on the business boundary, connecting the business layer to the storage layer (infrastructure layer). The repository speaks the domain language (though it does not necessarily implement specific business logic), but it also understands persistence. The book Implementing Domain-Driven Design recommends treating repositories as collections, with their definitions (interfaces) at the domain level and their implementations at the infrastructure level.
So consider a collection. A collection holds a set of homogeneous elements (objects), with the ability to add and remove. Note that a collection has no modify or save operation, because what we get from the collection is a reference to an object: when the object's state changes, the object in the collection changes with it.
Implementing Domain-Driven Design divides repositories into two kinds: collection-oriented and persistence-oriented. A collection-oriented repository strictly mimics collection behavior, with add and remove methods but no save method, because we hold a reference to the object and changes to its state are reflected immediately (concretely, the implementation tracks entity state changes with copy-on-read or copy-on-write mechanisms). A persistence-oriented repository has add and remove like the former, plus a save method: the outside world must explicitly call the repository's save(object) to persist entity state changes (in practice, add is often folded into save). The biggest difference between the two is whether entity changes are saved explicitly.
Both kinds require us to use the repository in collection style, and this is where a repository differs from a DAO (data access object). The repository is domain-object-oriented: it persists domain objects (aggregates) into the infrastructure, retrieves data from the infrastructure, and returns it as domain objects. DAOs are table-oriented, generally one per table, each with its own CRUD methods; unlike a collection's add and remove, DAO CRUD corresponds directly to the database's CRUD operations.
We can understand "repository" quite literally: it is a warehouse. We hand goods to the warehouse keeper, who looks after them; how they are kept is the warehouse's internal business. Later we ask the keeper for specific goods by their number (plus other filter conditions); how the goods are fetched from the warehouse is again an internal matter, and the warehouse may decide how to store them (for example, disassembling and classifying goods to save space). But the warehouse must guarantee that the goods taken out are exactly the goods that were put in (barring irresistible causes such as expiry).
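A sketch of what this looks like in code; the interface is the domain-level definition, the implementation is infrastructure (names illustrative, shown in the persistence-oriented style):

```php
<?php

class Order
{
    // the domain entity (aggregate root)
}

// Domain-level definition: collection-like, speaks in domain objects.
interface OrderRepository
{
    public function add(Order $order): void;
    public function remove(Order $order): void;
    public function ofId(int $orderId): ?Order;
    public function save(Order $order): void; // persistence-oriented: explicit save
}

// The infrastructure-level implementation (MySQL, Redis, ...) translates between
// domain objects and storage rows internally; callers never see tables or raw arrays.
```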
We currently face some problems in how we use repositories. The biggest is that a lot of business logic has ended up inside them, which is the result of database-oriented programming: our minds work directly against the database, and the repository is the closest thing to the database that we write. There are no domain objects (entities) in our programs, so what the repository hands us are arrays that carry no business meaning.
PHP's arrays are both a blessing and a curse: they are so flexible and convenient (though not actually that powerful) that everything gets represented as an array and object orientation becomes meaningless. Many PHPers do not understand object orientation and think it just means making everything private and writing getters and setters all day, boring and thankless. That perception has a lot to do with the mechanical style long promoted by mainstream frameworks such as Spring.
"If a class has 20 properties, don't I have to write 20 getters and 20 setters?" is an example OO opponents love to cite. But honestly, which of your classes has 20 properties? That is usually a symptom of the abstraction (in practice it is database-oriented thinking wearing an OO shell: the object maps one-to-one onto database fields, hence all those properties; 20 fields is normal for a data table). Moreover, OO itself (and DDD in particular) discourages heavy use of getters and setters, because they usually carry no business meaning. What does a public setPrice() mean? Set the price? What business operation is "set the price" on its own? Well-designed classes maintain their own state rather than letting the outside world set it.
OO is an idea, a way of thinking about problems; in implementation, do what is reasonable rather than applying it mechanically.
This is not to say PHP arrays must not be used, but arrays should be used as arrays, not as a cure-all. Using arrays in place of object structures brings significant maintenance complexity.
If the repository is going to contain a lot of business logic anyway, you might as well use the traditional Model that the mainstream frameworks ship with: at least it has the concept of a "model" (albeit a table-oriented one), whereas such a repository is neither fish nor fowl.
Another serious problem with our current use of repositories is implementing transactions inside them. From the description above, a repository is only responsible for putting things in and taking them out; why would it be tied to transactions? Again, this is the result of database-oriented thinking: when we hear "transaction" we immediately think of relational-database transactions and take it for granted that that is the only kind.
What is a transaction? A transaction is the atomicity of a task: all of its operations either succeed together or fail (and are undone) together. The concept itself has nothing to do with databases. When we say a bank transfer must be transactional, we mean the business, not the storage technology behind it. Database transactions apply the concept of "transaction" to the specific domain of databases: a series of writes either all succeed or all fail. What we actually need to care about is the transactionality of the business layer; without it, what would storage-layer transactions even be for? Database transactions are low-level technical support for business transactions. If one day we no longer use a relational database, does the business stop being transactional?
Repository and entity operations are both fine-grained; they cannot guarantee the transactionality of the whole task, nor should they even know that a transaction exists.
Transactions should be placed in methods that represent a single complete task (such as in application services).
Placing transactions in the repository inevitably brings business logic into the repository.
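A sketch of the transaction sitting in an application service, using Yii's transaction API for illustration and assuming an Account entity and AccountRepository along the lines of the sketches above:

```php
<?php

class TransferAppService
{
    private $accounts;
    private $db;

    public function __construct(AccountRepository $accounts, \yii\db\Connection $db)
    {
        $this->accounts = $accounts;
        $this->db = $db;
    }

    public function transfer(int $fromId, int $toId, float $amount): void
    {
        // The application service owns the task's transaction boundary;
        // repositories and entities neither start nor know about it.
        $transaction = $this->db->beginTransaction();
        try {
            $from = $this->accounts->ofId($fromId);
            $to = $this->accounts->ofId($toId);
            $from->pay($amount);
            $to->receive($amount);
            $this->accounts->save($from);
            $this->accounts->save($to);
            $transaction->commit();
        } catch (\Exception $e) {
            $transaction->rollBack();
            throw $e;
        }
    }
}
```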
-
About Controller (controller responsibilities and the permission system):
The controller layer is the layer closest to the user and is an adapter in the ports-and-adapters sense. Like a repository (also an adapter), a controller faces both the user's input and the business layer's ports (interfaces): it converts user input into the data the relevant service port requires, and adapts the output of the inner services into the data the client requires.
What the controller represents is a task (use case task item) from the user’s perspective.
The controller should be a thin layer without any business logic or process control.
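A thin Yii-style controller in sketch form (assuming an 'articleService' component registered in the application; the action only adapts input and output):

```php
<?php

class ArticleController extends \yii\web\Controller
{
    public function actionPublish(int $id)
    {
        // Adapt the HTTP request to the service port, delegate, adapt the result back.
        \Yii::$app->get('articleService')->publish($id);

        return $this->asJson(['ok' => true]);
    }
}
```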
The controller has one more function: permission control. Permissions, like the controller, govern whether a task can be performed from the user's perspective. The controller is user-centric; although it knows who in the domain (entity or service) should perform the user's task, it should stay as decoupled from the domain as possible.
(When a task from a user perspective might require collaboration between domain services across business domains, application services should be introduced for cross-domain coordination, not in a controller.)
Here is an example involving controllers and permission control. The membership system has modules for magic marketing, mass posting and material management, and all of them need the add/edit article (rich text) function. The usual practice is to point all three at the same URL (the same controller action) to edit the article. Now the question: how do you control permissions? Given that three users have (and only have) edit rights for magic marketing, mass-post management and material management respectively, how can all of them edit articles? One way is to do the permission check inside the article-editing action: "requires magic marketing edit, or mass-post management edit, or material management edit permission". That is tedious, and what happens when yet another module is added later? Yet this is exactly what we are doing now (adding all sorts of routing-related restrictions to data tables and all sorts of parameters to URLs, a big mess).
In fact, thinking carefully, we find that although all three places edit articles, their business meanings are different; the editing operations in the three scenarios merely happen to be identical, which creates the illusion that they are the same thing. From the user's (use-case) perspective these are three different things: "edit article" in each scenario is a separate use case. They may happen to behave exactly the same today, but they are essentially different and may diverge in the future (for example, magic marketing's article editing may gain extra restrictions). So the three scenarios' "edit article" are three use cases and should correspond to three actions (three URLs). Some may ask: a URL is a uniform resource locator and represents the resource; can the same resource have different URLs? A URL only refers to a resource, it is not the resource itself; resource to URL is one-to-many, just as the same thing can have multiple names (sweet potatoes are called different things in different regions). These three actions then require the edit permissions of magic marketing, mass-post management and material management respectively.
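Sketched with Yii's access-control filter; the permission name and the 'articleEditing' component are illustrative. The mass-post controller would look the same except its rule requires the mass-post permission:

```php
<?php

class MagicMarketingController extends \yii\web\Controller
{
    public function behaviors()
    {
        return [
            'access' => [
                'class' => \yii\filters\AccessControl::class,
                'rules' => [
                    // This use case requires the magic-marketing edit permission.
                    ['allow' => true, 'actions' => ['edit-article'], 'roles' => ['magicMarketingEdit']],
                ],
            ],
        ];
    }

    public function actionEditArticle(int $id)
    {
        // Same underlying editing logic as the other scenarios, but a different use case and permission.
        \Yii::$app->get('articleEditing')->edit($id, \Yii::$app->request->post());

        return $this->asJson(['ok' => true]);
    }
}
```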
In practice the controller faces several kinds of users: web (humans), console (command line) and external systems (API calls), presented respectively as the web application, background scripts and web services. The permission control discussed above belongs to business-role authorization in the web application, but each kind of controller can use its own permission mechanism (for example, account authentication for API service calls).
Note also that permission control should happen at the controller level, but that does not necessarily mean the controller does it itself. In fact a typical controller does not perform permission checks directly and may not even know a permission system exists. Normally authorization is performed in one unified place, and that place is definitely not inside the permission system: it sits on the side that uses the permission system (the permission system's consumer), outside the permission system itself. Many permission-system designs make the mistake of mixing the permission system with its usage, coupling in too much consumer information (menus, routing and so on). Another fact worth clarifying: the team responsible for the permission system is often also responsible for maintaining some or all of its consumers, which makes it easy to blur the two together. Such a team must carefully distinguish what belongs to the permission system itself from what belongs to the consumer side, to keep them as decoupled as possible. Compared with the consumer side, the permission system should be quite stable and should not change just because a consumer changes menus, adds routes, and so on.
What is a permission system? A permission system defines whether a user has the right to perform an operation; for example, Zhang San has the mass-post management permission. Usually, for ease of maintenance, the concept of roles is introduced: instead of granting the mass-post management permission to Zhang San directly, we create a mass-post administrator role, grant that role the permission, and then assign the role to Zhang San, who thereby indirectly holds the mass-post management permission.
From this we get the three elements of the authority system: user, role and authority set. Roles and permission sets can also be grouped.
(Unless otherwise specified, "permission system" in this article means the business permission system.)
The use of a permission system is when a use case task requires that the operator (user) must have a certain permission. The use case task (or its agent) asks the permission system, and the permission system answers yes or no.
To sum up, the permission system consumer is as follows:
- The consumer uses the permission set defined by the permission system;
- The consumer asks the permission system whether a user has a certain permission.
Back to the magic marketing example above. First create three permission points in the permission system: magic marketing management, mass-post management and material management (adjust to the actual situation). Then create a promotion-specialist role and grant it the magic marketing management and mass-post management permissions. Then assign the promotion-specialist role to the user, who from then on holds those two permission points. Magic marketing and mass posting have two separate web controllers, each with its own editArticleAction calling the same domain-level object (such as an Article entity), but they require different permissions: one requires magic marketing management, the other mass-post management.
(Depending on the actual situation, the article-editing function could also be exposed as a service for remote API calls: the two actions call the same ArticleService, and that service calls the article-editing service remotely.)
Some people are puzzled: since both of them edit articles, why not use a single edit-article action? Why two, just for permission control? As explained above, the two article-editing operations belong to two completely different business scenarios and merely happen to behave identically at the moment. When a requirement changes and magic marketing's article editing needs some extra data, how do you handle that in one action? With if/else? And what if the requirements later diverge so far that the same editing flow cannot be shared at all?
We are easily deceived: when two things have exactly the same name and exactly the same behavior, we believe they are one and the same. People meeting Europeans and Americans for the first time assume they all come from the same place because of the superficial similarities they see first. In system design we must think from the business itself and dig out the essence hidden behind the business's surface appearance.
At this point, we have discovered the core value of the controller as a bridge between the user and the business system: the behavior of the use case layer and the behavior of the business domain layer are not mapped one by one, and therefore require decoupling and adaptation of the controller.
Application services were mentioned briefly earlier. Again, we emphasize the nature of the “service” : the service itself does not provide implementation details (in the case of domain services, this is the entity’s job), but the service’s primary responsibility is coordination, scheduling of subordinate units, and diplomacy. Services arise as a result of dissonance between subordinates (i.e., services are extracted from the bottom up). Just as domain services are used to coordinate domain entities (and other domain services), application services are used to coordinate multiple bounded contexts (multiple domains). Unlike domain services, application services belong to the application layer and cannot contain domain services. Application services should be a thin layer. Together with the controller (which can also be thought of as a very simple application-layer service, whereas the specialized application service deals with the more complex use-case adaptation to the domain model), the application layer constitutes the adapter between the use case (user) and the domain model.
The key to understanding the value of the application layer is to recognize that the use-case perspective and the domain-model perspective are two very different ways of looking at a problem. Product managers tend to describe the business from the domain-model perspective (at least DDD requires this), while UI/interaction design describes it from the use-case perspective. For example, a blog detail page often needs to show recent comments and related blogs, which clearly spans multiple domain models. If this difference is not recognized, the domain model ends up being led around by the use-case model, classes violate the single responsibility principle, and over time the code drifts away from its OO purpose.
Technically, use case models are usually represented by “presentation models”, such as Yii’s FormModel. Sometimes we can use DTOs as presentation models to assemble the data a use case needs from multiple aggregates. Or we can query and assemble the use case model directly at the repository level (use-case-optimized queries), which sounds odd, since normally a repository should return aggregate objects at the domain level, not things at the application level. This approach is grounded in the CQRS idea: there is an impedance mismatch between the command model and the query model. Aggregates generally serve the command model, while use cases are served by the query model.
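For instance, a minimal sketch (the class and method names are made up) of a read-side model for the blog detail use case: a DTO carrying exactly what the page needs, filled by a use-case-optimized query rather than by loading full aggregates.

    <?php
    // Hypothetical read model for the "blog detail page" use case; not a domain aggregate.
    final class BlogDetailView
    {
        public function __construct(
            public readonly string $title,
            public readonly string $body,
            public readonly array $recentComments, // data drawn from the Comment model
            public readonly array $relatedBlogs    // data drawn from other Blog aggregates
        ) {}
    }

    // A use-case-optimized query on the CQRS read side; an implementation may join
    // several tables directly instead of composing full aggregates.
    interface BlogDetailQuery
    {
        public function fetch(int $blogId): BlogDetailView;
    }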
-
About the Entity:
Entities have been mentioned several times already; let’s now discuss them properly.
Entities are at the heart of DDD, and each entity object has a unique identity that distinguishes it from other entities. Whether two entities are equal depends on whether their identifiers are the same: objects with different identifiers are different entities even if all their attributes are equal. For example, two people both named Zhangsan, of the same gender and age, are still two different people. Value objects are the opposite: two value objects with identical attributes are equal.
An entity has a life cycle, during which its state can change. For example, an Order’s state changes during purchase and after-sales, but it is still the same Order. Again, this differs from value objects, which are immutable.
We can think of an entity, figuratively and somewhat loosely, as “that particular thing” (the individual) in the real world (though this is not the whole story). A car, a points transaction: these are all real individuals. Entities have a continuous life cycle. Take a car: it is manufactured (a new object), it changes hands (the owner field changes), it runs on the road (its trip-related properties keep changing), and it is eventually scrapped (the object is deleted). Although all kinds of information about the car change constantly, and it may even become unrecognizable after a respray and modifications, it is still the same car. A value object, by contrast, is immutable: once its properties change, it is a different value object. For example, an Address object with city and street properties becomes a new Address object as soon as its city changes. And if the city and street of two addresses are exactly the same, the two Address objects are considered equal.
Is address information an entity? The key is whether your system needs to distinguish “that particular” address. When two addresses carry exactly the same information (all attributes equal), do they represent the same address? If so, it is not an entity but a value object: we don’t care about “which one”, only about its value.
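A minimal sketch of the difference (the property names follow the examples above; everything else is illustrative):

    <?php
    // Entity: equality is based on identity, and state may change over its life cycle.
    class Car
    {
        public function __construct(private string $vin, private string $color) {}

        public function repaint(string $color): void { $this->color = $color; } // still the same car

        public function equals(Car $other): bool
        {
            return $this->vin === $other->vin; // identity, not attributes
        }
    }

    // Value object: immutable, equality is based on attributes.
    final class Address
    {
        public function __construct(
            public readonly string $city,
            public readonly string $street
        ) {}

        public function withCity(string $city): self
        {
            return new self($city, $this->street); // changing a value yields a new object
        }

        public function equals(Address $other): bool
        {
            return $this->city === $other->city && $this->street === $other->street;
        }
    }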
We find that an entity often corresponds to a record in the database. This is generally true, but it is not strictly one-to-one. The point is not to map database records one-to-one onto entities, which easily leads to database-oriented programming. Remember: the database is just a data store.
We occasionally find “entities” in our current code, but they are used as DTOs or DAOs: their fields typically correspond one-to-one to database columns, and they either have no methods or only methods that write to the database.
Entities are the foundation and core of our modeling. When modeling entities, don’t think about the database; think about the real world. For example, when designing a points system there is an Account class. Accounts are divided into personal accounts, company accounts, and merchant accounts. In the database, all accounts live in the H_integral_account table and are distinguished by a type field. Thinking database-oriented, we would likewise have a single Account class identified by a type attribute. This is clearly pseudo-OO: using objects to simulate database records (which makes sense at the database level, but not here). If we set the database aside for a moment, the mind is freed from this shackle. In OO terms, accounts divide into personal, company, and merchant accounts, and the obvious inheritance relationship is PersonalAccount, CompanyAccount, and MerchantAccount inheriting from Account. In practice, CompanyAccount and MerchantAccount have much in common, so a further PublicAccount can be extracted that inherits from Account, with CompanyAccount and MerchantAccount inheriting from PublicAccount.
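Sketched in code (the class names come from the discussion above; the bodies are placeholders):

    <?php
    // Model the account kinds as types instead of a `type` column on one class.
    abstract class Account
    {
        public function __construct(protected string $accountId) {}
    }

    class PersonalAccount extends Account {}

    // Shared behaviour of company and merchant accounts lives here.
    abstract class PublicAccount extends Account {}

    class CompanyAccount extends PublicAccount {}
    class MerchantAccount extends PublicAccount {}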
Let’s look at the Member entity.
At the database level, member information is mainly recorded in the H_member table, which has dozens of fields. Designing objects in a database-oriented fashion, the Member object also ends up with dozens of properties, plus dozens of getters and setters, and the design usually stops right there.
From an object-oriented perspective, Member is an identity whose more neutral underlying existence is Person. As a Person, there are only a few attributes we care about: name, ID number, gender, birthday. Member is a core concept in the business domain of the member system and a role played by a Person in this system.
The concept of “role” is important in OO analysis and design. In a broad sense, the name of a thing is the role it plays. A thing always performs some activity in a certain role at a certain moment (this is also a one-sentence summary of four-color archetype analysis). Recognizing the importance of roles prevents one Person (or Member) class from representing every user and every user activity in your code. When a user logs in, he is a logged-in user; when he signs up for an event, he is a participant; when he publishes an article, he is the author. Different identities have different properties and methods.
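For example (the class names are purely illustrative), each role can be a small object that refers to a Person and carries only what that role needs:

    <?php
    // Roles a Person plays; each role has only the properties and methods it needs.
    class Person
    {
        public function __construct(public readonly string $name) {}
    }

    class Author
    {
        public function __construct(private Person $person) {}

        public function publish(string $title): void { /* publishing-specific behaviour */ }
    }

    class Participant
    {
        public function __construct(private Person $person) {}

        public function signUp(string $eventId): void { /* event-specific behaviour */ }
    }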
What attributes and methods should Member, as a core role, have? Here lies another pitfall: because we are the membership team and the system we build is the member system, it starts to seem as if one Member should be able to do everything.
There are two options. One is to scrap the word “Member” outright (it is too broad and therefore too empty) and go straight to more specific, fine-grained identities; the other is to define the concept of “member” strictly and dig out its narrow meaning. Personally I prefer the second option: after all, “member” really exists in the business domain, and eliminating it from the software model would create a mismatch between the model and the domain; in some places it genuinely cannot be replaced by another concept. In fact, if we pay a little attention to everyday usage, we will find that “member” actually has fairly clear boundaries. For example, fans and members are two different identities.
Let’s look at Owner. An Owner must be a Member (not merely a fan), so we can create an Owner that inherits from Member. Note that the relationship is described with the word “must”: this is “domain inevitability”. It is not absolutely immutable, but the probability that this underlying business rule changes is so small that we use inheritance rather than other relationships (such as aggregation or composition). Inheritance is a more stable relationship than aggregation and composition.
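Since “an Owner must be a Member” is treated as a domain inevitability, the relationship can be expressed directly as inheritance. A small sketch (the fields and methods are invented):

    <?php
    class Member
    {
        public function __construct(protected string $memberId) {}
    }

    // "An Owner must be a Member" is stable enough to justify inheritance
    // rather than aggregation or composition.
    class Owner extends Member
    {
        private array $rooms = []; // whatever an owner additionally manages

        public function openRoom(string $roomName): void { $this->rooms[] = $roomName; }
    }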
(Conversely, if we find only an unstable “is-a” relationship between A and B, we should be wary of inheritance and consider other relationships, such as distinguishing them with a type field.)
Now let’s look at the word Owner itself. The meaning of a word depends on its context. Taken in isolation, Owner just means “the owner of something”, which is broader than the specific owner we mean here.
So should we use RoomOwner here? RoomOwner draws our attention to Room and makes the owner less prominent. “Owner” is another core concept in the member system, so it should not be named after Room; within this context, Owner by itself is clear enough. The point is to illustrate how much context matters when understanding a noun, and that core concepts deserve core names.
Once again, database design and object design are two very different kinds of design.
-
On modeling (where models/entities come from):
A lot of the time we want to try OO but struggle with modeling: either no model comes to mind for a long time, or we worry that the model we build will not fit the business and will become messy and out of control. After going back and forth, it never seems as straightforward as just facing the database.
Modeling is a process of repeated deconstruction and construction.
The initial model is always naive and incomplete, which is normal, because at this point you only understand the current requirements themselves and remain at the phenomenon level — and the model reflects this perfectly.
As you learn more (or code — yes, the code itself is a design process that refines and revises the model), some inconsistencies within the previously naive model are inevitably exposed, and these inconsistencies naturally urge you to re-examine requirements and models further. At this point, you can often see something of the nature of the business domain through the phenomenon of requirements.
It is important to recognize that requirements are phenomena, not domain nature. It is true that we should respect phenomena and build models to support phenomena correctly, but phenomena are not the essence. Modeling cannot be completely constrained by the current requirements themselves.
Keep going back to the requirements themselves to prevent the model from going too far. If the model doesn’t support the requirements properly, then either the requirements or the model is definitely wrong (usually the model is wrong).
When you run out of ideas on modeling, come back and read the requirements. Either the model is finished or the requirements are not fully understood.
Write the requirements down rather than just outlining them verbally. Write them out in detail (ideally discuss them in depth with several people; if that is not possible, one person writing and then organizing a group discussion later is fine). As you write, you will keep discovering new concepts and new questions. In effect, writing requirements this way is already modeling. DDD emphasizes keeping the language consistent across requirements analysis and model design.
When writing requirements documents, focus on refining concepts. Because requirements are often described loosely (or the person raising them is vague about the concepts; they are usually not a domain expert), the original requirements are mixed with all kinds of noise, such as using “user” for every kind of system user, or confusing event registrants with participants. If you model directly from the original requirements, you will end up with one giant super User class for the whole system. Requirements analysis is not only a process of discovering hidden concepts but also a process of clarifying known ones: what is and is not a “fan”, what is and is not a “member”, what the relationship between the two is, and so on.
Remember: Modeling does not begin by drawing class diagrams, but by writing requirements analysis documents. Your class diagram should be broadly consistent with the requirements analysis document. When there are many conceptual or logical inconsistencies between the two sides, there is a problem with the model.
[Note: there is a difference between the original requirements and the requirements after our own analysis. Well-analyzed requirements generally achieve a unity of appearance and essence, which shows up as consistency between the requirements analysis/design document and the domain model. Note that analysis and design are really one activity.]
It is also important to note that the completion of requirements analysis only represents the completion of the core model, not the completion of the modeling itself.
Requirements analysis is iterative, and so are programming and modeling. In my own practice, even the initial development of a project goes through several major internal refactorings; the claim that detailed UML diagrams must be drawn before any actual coding is pure rationalist utopia.
-
About Framework/SDK/Vendor (how Conway’s Law is reflected in the framework’s internal contradictions):
Conway’s Law: organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.
A shared framework cannot arise without an “early common team”. When we find two separate teams running applications on the same in-house framework, it is almost certain that they used to be one team (or share a direct boss).
Because some functionality needs to be shared across modules or projects, we put it in a separate directory and reference it from every project. This is fine in a small team. But as the business grows and the team splits into multiple independent teams, ownership of the common framework becomes a problem: the common code is either left unmaintained or modified by whoever happens to need a change. Another problem is that even though the teams are now separate, each still treats the framework as its own and expects its common code to live there, so the framework fills up with each team’s “common” code, most of which is of little use to the other teams.
A framework is not a good form of common code for agile teams. It is better suited to traditional teams, which tend to form groups inside one team rather than split into separate teams as the business grows. Agile teams tend to be small and self-managed, with relatively infrequent communication between teams, so the system architecture should adapt to this, and code owned by different teams should be decoupled as much as possible.
A better approach is the Composer package model: each piece of functionality becomes a separate Composer package with an explicit owner. The company runs a private Composer repository. Each team manages its own packages (or even its own repository), which other teams can use but not modify. If another team needs major changes that only suit itself, it can fork a separate branch and maintain that package on its own.
Requirements for a package:
- It provides a relatively single piece of functionality, so that modifying it does not have a wide blast radius;
- It is programmed against interfaces, keeping its external surface as stable as possible; teams can even establish contracts with each other (similar to the series of PSR specifications developed among PHP-FIG members to unify programming conventions);
- It follows the open source community’s standard version naming conventions (e.g. semantic versioning), so consumers can choose which version to use;
- It is possible to trace which projects have the package installed and to notify them when a version changes;
- Packages are not only for providing general functionality; they can also define contracts (interfaces). Such a package provides only interface definitions, with implementations supplied by other packages or projects (this can serve as a contract between teams; see the sketch after this list).
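A minimal sketch of such a contract-only package (the package name, namespace, and interface are hypothetical): the package ships nothing but interfaces, which other packages or the points service client then implement.

    <?php
    // Hypothetical contract-only package, e.g. "acme/points-contract".
    // Its composer.json would contain no implementation code, only this interface,
    // and consumers would require it with a version constraint such as "^1.0".
    namespace Acme\PointsContract;

    interface PointsAccountRepository
    {
        // Implementations are provided elsewhere (another package, or the points service client).
        public function balanceOf(string $accountId): int;

        public function credit(string $accountId, int $amount): void;
    }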
Some worry that this approach is too complex: upgrading a package means notifying every consuming project to update. First, packages should be stable; if a package needs frequent changes, either its implementation is wrong or it is doing too much. Second, not every package is heavily used by many teams, and not every upgrade is mandatory. In addition, if you can trace which projects use a package, you can notify only those that actually must upgrade. Finally, the framework model has the same upgrade problem, and because everything is upgraded in one go, the risk is even greater.
One thing to keep in mind (in both the package and the framework model) is which functionality should be provided as a service rather than as a package (or framework). The direct purpose of a package is to extract functionality that multiple projects need into something shared, but in many cases it is more appropriate to provide that functionality as a service. For example, points are used in every project, yet an independent points system that other systems call is a better fit than a package. Likewise, functionality related to public accounts should be provided as services rather than live in the framework.
Currently we use SDKs instead of a framework: there is a common SDK shared across teams and maintained by the common team, and each team also has its own SDK. In essence this is still the framework model, which means that if the common team ceases to exist, the common SDK becomes a problem; and if a team splits into multiple teams, that team’s SDK becomes a problem. It has not become a problem so far only because the organizational structure of cloud services has not changed much.
The essence of the framework (or SDK) problem is the tension between its strongly coupled code architecture and loosely coupled agile teams. A framework is a very coarse-grained technical partition in which pieces of code may have nothing to do with one another yet sit together simply because they are “common code”. To make matters worse, a lot of domain business that should have been spun off into separate services was thrown into the framework just because multiple projects needed it.