From Layered Architecture to Microservice Architecture is a series of articles introducing the eight architectural patterns described in Fundamentals of Software Architecture. Rather than covering every detail, we'll pick out the key points; read the original book for the full treatment.

Preface

When it comes to the methodology of software system design, we have the familiar 23 design patterns at the code level and the so-called architectural patterns at the architecture level. They guide us toward well-designed software from the micro and macro perspectives, respectively. As software engineers, therefore, we should be familiar not only with design patterns but also with the common architectural patterns. Just as the name of a design pattern brings a general structure diagram to mind, the name of an architectural pattern should immediately call up the corresponding architecture diagram and its basic characteristics. For example, when layered architecture comes up, we should think of its architecture diagram, its architectural characteristics, how the system is deployed, its data storage strategy, and so on.

Architectural patterns can be divided into two broad categories, monolithic and distributed. This series of articles will cover the following eight common patterns:

Monolithic architecture

  • Layered Architecture
  • Pipeline Architecture
  • Microkernel Architecture

Distributed architecture

  • Service-based Architecture
  • Event-driven Architecture
  • Space-based Architecture
  • Service-oriented Architecture
  • Microservices Architecture

Fallacies in software design

Before introducing the architectural patterns, let's talk about fallacies in software design. A fallacy is something we assume to be true when designing software systems, especially distributed systems, but which in fact is not. Falling for these fallacies is symptomatic of how poorly we design software.

Myth 1: The network is reliable

Many software engineers assume that networks are reliable, but they are not. Networks are far more reliable than they were 20 years ago, but they remain highly unpredictable. As shown in the figure above, Service B might be perfectly healthy, yet a request from Service A cannot reach it because of a network problem. In a worse scenario, Service B receives the request from Service A and processes the data, but a network problem prevents Service A from receiving the response, resulting in data inconsistency. The unreliability of the network is also why service communication timeouts and circuit breaking occur so often in real systems.

In short, if we assume the network is reliable, the software systems we design will be unreliable.
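To make this concrete, here is a minimal sketch in Go (the Service B endpoint is hypothetical, not code from the book) of treating the network as unreliable: every remote call gets an explicit timeout, and we retry only on the assumption that the operation is idempotent. Retrying a non-idempotent call could cause exactly the data inconsistency described above.

```go
// A minimal sketch of calling an unreliable remote service with a timeout
// and a bounded retry. The URL and timings are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// callServiceB performs one attempt against a hypothetical Service B endpoint,
// bounded by the caller's context so a dead network cannot hang us forever.
func callServiceB(ctx context.Context, url string) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	return http.DefaultClient.Do(req)
}

func main() {
	const url = "http://service-b.internal/users/42" // hypothetical endpoint
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ { // retry only if the call is idempotent
		ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
		resp, err := callServiceB(ctx, url)
		cancel()
		if err == nil {
			resp.Body.Close()
			fmt.Println("status:", resp.Status)
			return
		}
		lastErr = err
		time.Sleep(time.Duration(100*attempt) * time.Millisecond) // simple backoff
	}
	// The network failed us; degrade gracefully instead of hanging.
	fmt.Println("Service B unreachable:", lastErr)
}
```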

Myth 2: Latency is zero

As shown in the figure above, function- or method-level calls between components within a service take microseconds, or even nanoseconds. Remote calls between services (such as REST, message queues, or RPC), however, take milliseconds, and can stretch to seconds in failure scenarios. When designing systems, especially distributed systems, latency is a factor that cannot be ignored. We must know the system's average latency, or the design may not be feasible at all. For example, if communication between services takes 100 ms and the invocation chain of a request involves 10 services, the request's latency will be 1,000 ms! Such a high average latency is unacceptable for an ordinary system.

When designing a system, the average latency alone is not enough; the 95th and 99th percentiles matter more. A system's average latency may be only tens of milliseconds while its 95th-percentile latency is hundreds of milliseconds, and that tail is often the weak link that drags down overall system performance.
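To illustrate why averages hide the tail, here is a small self-contained Go example (the numbers are synthetic, not from the book) that computes the mean, 95th-, and 99th-percentile latency of a sample set:

```go
// Why averages hide tail latency: 10% slow requests barely move the mean
// but dominate the upper percentiles.
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of ascending-sorted
// samples, using the nearest-rank method.
func percentile(sorted []time.Duration, p float64) time.Duration {
	rank := int(math.Ceil(float64(len(sorted))*p/100.0)) - 1
	if rank < 0 {
		rank = 0
	}
	return sorted[rank]
}

func main() {
	// Synthetic data: 90 requests take 20 ms, 10 requests take 400 ms.
	samples := make([]time.Duration, 0, 100)
	for i := 0; i < 90; i++ {
		samples = append(samples, 20*time.Millisecond)
	}
	for i := 0; i < 10; i++ {
		samples = append(samples, 400*time.Millisecond)
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })

	var total time.Duration
	for _, s := range samples {
		total += s
	}
	fmt.Println("mean:", total/time.Duration(len(samples))) // 58ms: looks healthy
	fmt.Println("p95: ", percentile(samples, 95))           // 400ms: the real story
	fmt.Println("p99: ", percentile(samples, 99))           // 400ms
}
```

The mean of 58 ms looks fine, yet one request in ten takes 400 ms, which is exactly the kind of tail the 95th and 99th percentiles expose.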

Myth 3: Bandwidth is infinite

In a monolithic architecture, business processing forms a closed loop within a single application and consumes little or no bandwidth, so bandwidth is not a primary concern. Once the system is decomposed into a distributed architecture, a single business process may involve communication between multiple services, and bandwidth becomes an important consideration. Insufficient bandwidth slows down the network, affecting latency (Myth 2: latency is zero) and reliability (Myth 1: the network is reliable).

As shown in the figure above, imagine a web system in which Service A handles front-end requests and Service B manages user information (45 attributes such as name, gender, age, and so on). On each request, Service A needs only the user's name (200 bytes) from Service B, but Service B returns all of the user's information (500 KB). If the system processes 2,000 requests per second at 500 KB each, the total bandwidth consumed is 1 GB per second! If Service B returned only the required name, all else being equal, the total bandwidth consumed would be just 400 KB per second.
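A quick back-of-the-envelope check of that arithmetic, using the numbers from the example above:

```go
// Bandwidth arithmetic from the example: 2,000 req/s, full record vs. name only.
package main

import "fmt"

func main() {
	const (
		requestsPerSec = 2000
		fullResponse   = 500_000 // 500 KB: the whole user record
		nameOnly       = 200     // 200 bytes: just the name
	)
	fmt.Printf("full record: %d MB/s\n", requestsPerSec*fullResponse/1_000_000) // 1000 MB/s, i.e. 1 GB/s
	fmt.Printf("name only:   %d KB/s\n", requestsPerSec*nameOnly/1_000)         // 400 KB/s
}
```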

This problem is known as stamp coupling, and there are many cures, such as adding attribute selection to requests (sketched below) or using GraphQL instead of REST. More important than any particular technique, though, is determining the minimum set of data that services need to exchange and making that a key factor in system design.
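As a sketch of the first cure, attribute selection: the hypothetical handler below lets the caller name the attributes it needs via a fields query parameter, so the service returns a few hundred bytes instead of the full 500 KB record. The types and route are illustrative assumptions, not code from the book.

```go
// One cure for stamp coupling: project the response onto only the
// attributes the caller asked for.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// User stands in for the full 45-attribute record held by Service B.
type User struct {
	Name   string `json:"name"`
	Gender string `json:"gender"`
	Age    int    `json:"age"`
	// ... 42 more attributes in the real system
}

// fieldsOf projects a User onto just the requested attribute names.
func fieldsOf(u User, fields []string) map[string]any {
	out := make(map[string]any, len(fields))
	for _, f := range fields {
		switch f {
		case "name":
			out["name"] = u.Name
		case "gender":
			out["gender"] = u.Gender
		case "age":
			out["age"] = u.Age
		}
	}
	return out
}

func main() {
	u := User{Name: "Alice", Gender: "F", Age: 30}
	http.HandleFunc("/users/42", func(w http.ResponseWriter, r *http.Request) {
		// e.g. GET /users/42?fields=name returns {"name":"Alice"}: bytes, not kilobytes
		fields := strings.Split(r.URL.Query().Get("fields"), ",")
		json.NewEncoder(w).Encode(fieldsOf(u, fields))
	})
	fmt.Println(http.ListenAndServe(":8080", nil))
}
```

GraphQL bakes this projection into the query language itself; the point is the same either way: the caller, not the provider, decides how much data crosses the wire.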

Myth 4: The network is secure

The widespread use of VPNs, firewalls, and the like leads many engineers to ignore an important principle when designing systems: the network is not secure. Especially after evolving from a monolithic to a distributed architecture, the system's attack surface grows enormously. In a distributed system, therefore, every service must be a secure endpoint, so that any unknown or malicious request is intercepted. Of course, security comes at a cost, which is one important reason why fine-grained architectures such as microservices see performance degrade when a single business request traverses a long chain of service calls.
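As a minimal sketch of "every service is a secure endpoint" (the shared-token scheme is a stand-in; a real system would verify a signed JWT or use mTLS), here is a Go middleware that authenticates every request before the handler runs:

```go
// Every service authenticates every request itself, instead of trusting
// that the network perimeter already did.
package main

import (
	"crypto/subtle"
	"fmt"
	"net/http"
)

// requireToken wraps a handler and rejects any request whose bearer token
// does not match. The constant-time compare avoids leaking the token via
// timing; ConstantTimeCompare returns 0 when lengths differ.
func requireToken(expected string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := r.Header.Get("Authorization")
		if subtle.ConstantTimeCompare([]byte(got), []byte("Bearer "+expected)) != 1 {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return // intercept unknown or malicious requests
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "sensitive data")
	})
	// Each check costs CPU and latency on every hop, which is the
	// security/performance trade-off long call chains pay repeatedly.
	http.Handle("/internal/data", requireToken("s3cr3t", api))
	fmt.Println(http.ListenAndServe(":8081", nil))
}
```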

Myth 5: The network topology never changes

The network topology here refers to the network devices involved in running the system, including all routers, firewalls, hubs, switches, and so on. Many engineers assume the topology is fixed; it is not.

Imagine the following scenario: as the architect, you come in on Monday morning to find your team scrambling because inter-service calls across the system keep timing out, even though, surprisingly, no service was changed over the weekend. After several hours of investigation, you discover that a network upgrade took place at 2 a.m. on Monday, and that this "minor" upgrade overturned the latency assumptions of the system design and triggered the failures.

Therefore, software engineers need to stay in regular contact with network administrators, so that topology changes are known before each network upgrade and the system can be adjusted accordingly.

Myth 6: There is only one network administrator

There is often more than one network administrator, especially in the "cloud" era, where data centers are scattered across multiple regions, each with its own LANs. A system running in the "cloud" is likely to span multiple data centers, so engineers should be aware of the network operations performed by each data center's administrators and prepare responses in advance, to avoid service communication timeouts, or even circuit breaking, caused by topology changes (Myth 5: the network topology never changes).

Myth 7: Communication cost is zero

The communication cost here is not network latency but the money spent on each additional call between services. Many engineers ignore this cost when designing systems: they preach the superiority of distributed architecture over monolithic architecture but forget the additional servers, firewalls, gateways, and other hardware it requires, all of which are expensive.

Hardware resources and network topology should therefore also be taken into account when designing systems.

Myth 8: Networks are homogeneous

Many engineers assume that networks are homogeneous, meaning all network equipment comes from the same hardware vendor; this too is a fallacy. In a large network, hardware typically comes from many different vendors, which works at all only because network protocol standards are unified. Even so, interoperability testing between vendors is never exhaustive, and in some edge cases packets are dropped, which affects network reliability (Myth 1: the network is reliable), latency (Myth 2: latency is zero), and bandwidth (Myth 3: bandwidth is infinite).

It all started with the big ball of mud

The "big ball of mud" architecture is a well-known anti-pattern, first described by Brian Foote and Joseph Yoder in 1997. In a big ball of mud, the system has no internal module boundaries, the code is tightly coupled, and the call relationships are chaotic, just like a big ball of mud. As shown above, each dot represents a class and each red line a coupling between classes. Such an architecture is extremely hostile to changing requirements, since a small change tends to ripple through the whole system, and it suffers in deployment, testability, performance, and more. All architects strive to avoid the big ball of mud, yet it remains common in real projects, especially early in a project, before code quality and structure are tightly controlled.

Wherever an anti-pattern occurs there must be a remedy, and that remedy is the architectural pattern. Starting with the next article, we'll look at each of the eight common architectural patterns in turn.

Conclusion

Like design patterns, architectural patterns are a distillation of software engineers' years of experience in architectural design. No architectural pattern is absolutely superior or inferior to another; we cannot say that a microservices architecture is necessarily better than a monolithic layered architecture. Each has its own applicable scenarios. Distributed architectures are more scalable and fault tolerant than monolithic ones, but they also bring higher complexity, such as distributed transactions. We should therefore be familiar with the characteristics of each architectural pattern, so that the appropriate one can be chosen for a given business scenario.