Software Architecture: Business Architecture, Application Architecture, and Cloud Infrastructure

This part is excerpted from Software Architecture Design

Software development is the decomposition of a complex problem into a series of simple problems and the combination of a series of simple solutions into a complex solution. The biggest challenge in software development is to be able to respond quickly and efficiently to changing requirements and environments while still providing stable, highly available services. Software architecture is the skeleton and framework of a software system.

It is difficult to give a clear or standard definition of so-called architecture. That said, architecture is not some intangible thing, like a glass of water or a ray of sunshine. Architecture is needed wherever there is a system: everything from an aircraft to a functional component in an e-commerce system needs to be designed and constructed. Abstractly speaking, architecture is an abstract description of the entities in a system and the relationships between them: it allocates functions and information to formal elements, and it defines the relationships among those elements and between the elements and their surrounding environment. Architecture partitions the target system according to some principle so that different actors can work on it in parallel, on the premise that well-structured creative activity is better than unstructured creative activity.

The core value of software architecture is to control the complexity of the system and to decouple core business logic from technical details. Software architecture is a sketch of the system: it describes the abstract components that constitute the system, and the connections between components describe, explicitly and in relative detail, how the components communicate. In the implementation phase these abstract components are refined into actual components, such as concrete classes or objects; in the object-oriented world, interfaces are often used to connect components. The role of the architect is to train their mind to understand complex systems: to understand and parse requirements, create useful models, validate, refine, and extend those models, and manage the architecture through proper decomposition and abstraction. Architects must be able to decompose the system into an overall architecture, select the right technologies, formulate technical specifications, and effectively drive their implementation.
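As a concrete illustration of components connected through interfaces, the sketch below (all names are invented for illustration) shows an `OrderService` component that depends only on a `PaymentGateway` abstraction, so a concrete implementation can be refined or swapped without touching the component itself:

```java
// Hypothetical sketch: the OrderService component depends only on the
// PaymentGateway abstraction, so any implementation can be plugged in.
interface PaymentGateway {
    boolean charge(String accountId, long amountInCents);
}

// A stand-in implementation used for illustration.
class MockPaymentGateway implements PaymentGateway {
    @Override
    public boolean charge(String accountId, long amountInCents) {
        return amountInCents > 0; // pretend every positive charge succeeds
    }
}

class OrderService {
    private final PaymentGateway gateway; // the interface is the "connector"

    OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean placeOrder(String accountId, long amountInCents) {
        return gateway.charge(accountId, amountInCents);
    }
}
```

The interface is the stable element of the architecture; the concrete classes behind it are implementation details that can change independently.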

Software Architecture Classification

In the author’s knowledge system, architecture is divided into business architecture, application architecture, and cloud infrastructure. Business architecture focuses on controlling the complexity of the business, while infrastructure focuses on solving the series of problems inherent in distributed systems. Whatever the kind of architecture, the goal is an evolvable system that guarantees highly available services. Based on the division of responsibilities within an enterprise, we can also classify software architecture, and the associated architects, into the following categories:

  • Business architecture/solution architecture: the core is to address the system complexity brought by the business: understanding the pain points of customers and business stakeholders, the project definition, and the existing environment; sorting out high-level requirements and non-functional requirements; dividing the problem domain and modeling it; and communicating, proposing, iterating, and finally delivering the overall architecture.

  • Application architecture: According to the needs of business scenarios, design application hierarchies, formulate application specifications, and define interfaces and data interaction protocols. In addition, the complexity of applications should be controlled to an acceptable level. In this way, applications can meet non-functional attributes (such as performance, security, and stability) while rapidly supporting service development and ensuring system availability and maintainability.

  • Data architecture: focuses on building the data platform: unifying data definition specifications, standardizing data representation, and forming effective, easy-to-maintain data assets; and building a unified big-data processing platform, including platforms for data visualization and operations, data sharing, data rights management, and so on.

  • Middleware architecture: focuses on building middleware systems, which must address server load balancing, registration and discovery of distributed services, messaging systems, caching systems, distributed databases, and so on, while the architect balances the trade-offs of the CAP theorem.

  • Operation and maintenance architecture: responsible for the planning, selection, and deployment of the operations system and for establishing a standardized operations framework.

  • Physical architecture: focuses on how software components are deployed onto hardware: the infrastructure, the hardware and software systems, and sometimes even the cloud platform, including data-center construction, network topology, traffic distribution, proxy servers, web servers, application servers, report servers, integration servers, storage servers, hosts, and so on.

Architectural patterns and architectural styles

One of the core issues in software architecture design is whether repeatable architectural patterns can be applied, that is, whether reuse can be achieved at the architectural level: the ability to use the same architecture across different software systems. Architectural patterns and architectural styles are often referred to when discussing software architecture.

Software architecture patterns are typically used to solve a specific, recurring architectural problem, while an architectural style is the name given to a recurring architectural design solution. A software architectural style is an idiomatic pattern of system organization in a particular application domain; it reflects the structural and semantic characteristics common to many systems in that domain and guides how modules and subsystems can be effectively organized into a complete system.

In my series of articles, CRUD, layered architecture, hexagonal architecture, onion architecture, REST, and DDD are all treated as architectural styles, while CQRS, EDA, UDLA, microservices, etc. are treated as architectural patterns.

The source and solution of system complexity

In software development, programmers can often break away from the constraints of reality and create an unconstrained world, which makes programming one of the most creative activities there is. Programming requires little beyond creative thinking and the ability to organize one's thoughts, which means the biggest limitation in software development is understanding the object we are creating. As software evolves and more features are added, systems become more complex, with subtle dependencies between modules. As this complexity accumulates over time, it becomes harder and harder for programmers to take all relevant factors into account when modifying the system. This slows down development and introduces bugs, which delay development further and raise its cost. In the life cycle of any system, complexity inevitably increases; the larger the system and the more people needed to develop it, the harder it is to manage that complexity.

Eric Evans, in his book Domain-Driven Design, pokes fun at so-called spaghetti architecture: code that does something useful, but that is hard to explain. The main reason for this dilemma, he argues, is that the complexity of the domain problem gets mixed with the complexity of the technical details, producing an exponential increase in overall complexity.

Complexity doesn’t come out of nowhere, and most of the time it isn’t introduced intentionally, but neither does it tend to disappear on its own. Like the elephant in the room, it can be neither escaped nor ignored. Sources of complexity include:

  • Accretion and continuous iteration: Incremental design means that software design never ends, that design continues throughout the life cycle of the system, and that programmers are constantly thinking about design issues. Incremental development also means continuous refactoring. The initial design of a system is almost never the best solution. As experience increases, better design solutions will inevitably be discovered.

  • Interaction and non-scalable design: when accretion produces large-scale systems, the combination of interacting features makes a technical system more complex still. Beyond acting on itself, a technical system interacts with a large number of other systems; for example, the order, product, payment, logistics, and card systems all cooperate with one another. The complexity of accretion thus grows exponentially through these interaction properties.

  • Irrational business encapsulation: a relatively broad concept, whose concrete manifestations include process-oriented rather than object-oriented code, unreasonable layering, and so on.

  • Lack of a unified language: in a typical agile development setup, each role on the assembly line tends to focus on its own responsibilities, and the refined division of labor limits each role's global perspective. While we often advocate a sense of ownership, it is hard to put into practice.

  • Lack of standards and discipline: in collaborative team development, a lack of standards and discipline severely compromises the consistency of the architecture and dramatically reduces the maintainability of the code. Implementation-level conventions may concern only naming, package structure, and the like, and do not affect how the code runs, but it is exactly these small lapses of attention that can cause an avalanche of overall complexity.

Complexity will never be solved once and for all. We need to constantly bring forth new ideas, dynamically and gradually reshaping our understanding of the software system, continually recognizing problems and seeking better solutions. The first way to control complexity is simple code with obvious intent; for example, reducing the handling of special scenarios or naming variables consistently can both reduce system complexity. The other way is to abstract a complex problem and divide and conquer it.
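As a minimal illustration of the "reduce special cases" idea (the names below are invented), returning an empty collection instead of null turns the special case into an ordinary value and deletes a branch from every caller:

```java
import java.util.Collections;
import java.util.List;

// Invented example: two ways of reporting "no tags" for an article.
class TagLookup {
    // Special-case style: the empty case is a null that every caller must check.
    static List<String> tagsOrNull(String article) {
        return article.isEmpty() ? null : List.of("tech");
    }

    // Simpler style: the empty case is just an ordinary value, so callers
    // iterate over the result uniformly and the null branch disappears.
    static List<String> tags(String article) {
        return article.isEmpty() ? Collections.emptyList() : List.of("tech");
    }
}
```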

Domain-driven design

This is an excerpt from Domain-Driven Design

Domain-driven design (DDD) originated in 2004, when the renowned modeler Eric Evans published his most influential and famous book: Domain-Driven Design: Tackling Complexity in the Heart of Software. In it, Evans presented only the original theory, not a methodology, so for many years DDD remained a matter of interpretation. Earlier, Martin Fowler had proposed the contrast between the anemic model and the rich model: he argued that most of our systems are modeled as POJOs, with ordinary getters and setters but no real behavior, like people lacking blood. In Evans's view, the model in DDD is a rich model: it not only describes business attributes but also contains methods that describe behavior. The caveat is that domain concepts that are not themselves model objects, such as repositories, factories, and services, should not be forced onto the model; placing them on the model damages the model's definition.
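The anemic/rich contrast can be sketched in a few lines of Java (a hypothetical example, not taken from Evans's book): the anemic version is pure data with getters and setters, while the rich version carries the behavior and its invariant on the model itself:

```java
// Anemic style: data plus getters/setters; any rule about withdrawals
// must live in some external "manager" class.
class AnemicAccount {
    private long balance;
    public long getBalance() { return balance; }
    public void setBalance(long balance) { this.balance = balance; }
}

// Rich style: the behavior and its invariant live on the model itself.
class RichAccount {
    private long balance;

    RichAccount(long openingBalance) { this.balance = openingBalance; }

    long balance() { return balance; }

    void withdraw(long amount) {
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("invalid withdrawal: " + amount);
        }
        balance -= amount;
    }
}
```

With the anemic version, nothing stops a caller from setting the balance to any value; with the rich version, the invariant travels with the model.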

The strategic core of domain-driven design is to separate the problem domain from the application architecture and make business semantics explicit, transforming previously obscure business algorithm logic into a clear expression of domain concepts through domain objects and a ubiquitous language.

  • Unified language: developers and domain users all use the same language; that is, their understanding of each concept and noun is unified, establishing a clear business model and a shared business semantics. The model is the backbone of the language: make sure the team uses this language in all internal communication, in code, in diagrams, in writing, and especially in speech. Account, transfer, and overdraft policy, for instance, are very important domain concepts; if these names are consistent with our daily discussion and with the descriptions in the PRD, the readability of the code improves greatly and the cognitive cost drops. No one in a meeting will have to repeatedly reconfirm what a "work order", "audit order", or "form" means, and the DDD model will no longer be held hostage by the database schema.

  • Domain-oriented, with explicit business semantics: think in the domain, not in modules. Extract implicit business logic from a pile of if-else statements, name it, write it into the code, and evolve it in the ubiquitous language so that it becomes an explicit concept. Too many important business concepts are buried in transaction scripts, where their meaning is completely lost in the code's logic.

  • Division of responsibility: divide the model reasonably according to the actual business, so that the dependency structure and boundaries between models become clearer and chaotic dependencies are avoided, which in turn improves readability and maintainability. With single responsibility, each model focuses only on its own job, avoiding the tangled invocation relationships that come from overreach. Through modeling, complex business in the real world can be expressed better; over time, the system continuously accumulates the actual business and describes the business logic through ever-clearer code. The cohesion of the models increases the modularity of the system and improves the reusability of the code; without it, large amounts of duplicated functionality end up scattered across services.
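The points above can be sketched in Java. The example below (all identifiers invented) turns the implicit "may this account go negative?" if-else into the explicit domain concept of an overdraft policy, using the account/transfer/overdraft vocabulary from the text:

```java
// The rule "may this balance go negative, and by how much?" becomes a
// named concept instead of an inline if-else.
interface OverdraftPolicy {
    boolean allows(long balance, long amount);
}

class NoOverdraft implements OverdraftPolicy {
    public boolean allows(long balance, long amount) { return amount <= balance; }
}

class LimitedOverdraft implements OverdraftPolicy {
    private final long limit;
    LimitedOverdraft(long limit) { this.limit = limit; }
    public boolean allows(long balance, long amount) { return balance - amount >= -limit; }
}

class Account {
    private long balance;
    private final OverdraftPolicy overdraftPolicy;

    Account(long openingBalance, OverdraftPolicy overdraftPolicy) {
        this.balance = openingBalance;
        this.overdraftPolicy = overdraftPolicy;
    }

    long balance() { return balance; }

    void transferTo(Account target, long amount) {
        if (!overdraftPolicy.allows(balance, amount)) {
            throw new IllegalStateException("transfer rejected by overdraft policy");
        }
        balance -= amount;
        target.balance += amount; // same-class private access is legal in Java
    }
}
```

The code now reads in the ubiquitous language: an account rejects a transfer when its overdraft policy does not allow it, exactly as a domain expert would phrase it.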

Microservices and cloud native architecture

This part is excerpted from Microservices and Cloud Native

Monolithic layered architecture

In the early days of web applications, most projects packaged all server-side functionality into a single monolithic application; many enterprise Java applications, for example, were packaged as a single war file, resulting in the following architecture:

Monolithic applications make it easy to set up a development environment, easy to test, and easy to deploy. Their defects are equally obvious: local changes cannot be deployed in isolation, compilation takes too long, regression testing cycles are too long, and development efficiency drops.

When Web 2.0 first became popular, there was no essential difference between Internet applications and enterprise applications. The centralized architecture was divided into three standard layers: the data access layer, the service layer, and the web layer.

  • The data access layer defines the data access interfaces and implements access to the actual database.
  • The service layer is used to process the application business logic.
  • The Web layer is used to handle exceptions, logical jump control, page rendering templates, and so on.

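A minimal Java sketch of these three layers might look as follows (all names are invented; a real system would use a database and a web framework rather than these in-memory stand-ins):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Data access layer: defines the access interface; the implementation here
// is an in-memory stand-in for a real database.
interface UserDao {
    Optional<String> findNameById(int id);
}

class InMemoryUserDao implements UserDao {
    private final Map<Integer, String> table = new HashMap<>(Map.of(1, "alice"));
    public Optional<String> findNameById(int id) {
        return Optional.ofNullable(table.get(id));
    }
}

// Service layer: business logic built on top of the data access layer.
class UserService {
    private final UserDao dao;
    UserService(UserDao dao) { this.dao = dao; }
    String greetingFor(int id) {
        return dao.findNameById(id).map(name -> "Hello, " + name).orElse("Unknown user");
    }
}

// Web layer: rendering and fallback handling for the caller.
class UserController {
    private final UserService service;
    UserController(UserService service) { this.service = service; }
    String handle(int id) { return "<p>" + service.greetingFor(id) + "</p>"; }
}
```

Each layer depends only on the layer directly below it, which is precisely what makes the monolith easy to test and, later, hard to change in isolation.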
SOA Service-oriented architecture

Service-oriented architecture (SOA) is a relatively broad concept, related to modular development and distributed scale-out deployment, that arose as Internet applications grew rapidly and centralized architectures proved unable to improve system throughput indefinitely.

SOA is a component model that links an application's different functional units (called services) through well-defined interfaces and contracts between those services. Interfaces in SOA are defined in a neutral manner, independent of the hardware platform, operating system, and programming language implementing the services, so that services built on a variety of systems can interact in a unified and common way. A service-oriented architecture enables loosely coupled, coarse-grained application components to be deployed, composed, and used in a distributed fashion over a network as required. The service layer is the foundation of SOA: it can be invoked directly by applications, effectively controlling the artificial dependencies between interacting software agents in the system.

A key goal of SOA implementation is to maximize the impact of enterprise IT assets. This is achieved by keeping in mind the characteristics of SOA: accessible from outside the enterprise, readily available, coarse-grained service interface hierarchy, loose coupling, reusable services, service interface design management, standardized service interfaces, support for various message patterns, and precisely defined service contracts.

A service consumer can invoke a service by sending messages, which a service bus transforms and routes to the appropriate service implementation. This service architecture can provide a business rules engine that allows business rules to be merged into one service or spread across several. The architecture also provides a service management infrastructure for managing services: auditing, billing, logging, and so on. In addition, the architecture gives enterprises flexible business processes, better handling of regulatory requirements such as Sarbanes-Oxley (SOX), and the ability to change one service without affecting others.

Because of the complexity of distributed systems, a large amount of distributed middleware and many distributed databases have been produced to simplify distributed development, and the idea of service-oriented architecture design has been recognized by more and more companies. Here is a picture of SOA system evolution published in Dubbo's official documentation:

MSA microservices architecture

The microservices architecture pattern was proposed by Martin Fowler in 2014, in the hope of transforming a single application into an aggregation of multiple services or applications that can be run, developed, deployed, and maintained independently, so as to meet the needs of rapid business change and parallel development by multiple distributed teams. As Conway's Law says, when any organization designs a system (in the broad sense), the design it delivers is structurally consistent with the organization's communication structure. Microservices and micro frontends are not only a change in technical architecture, but also a change in the way organizations work and communicate.

For developers familiar with SOA, microservices can also be considered an implementation of SOA without the ESB. The ESB is the central bus of an SOA architecture, so its design diagram is a star, while microservices form a decentralized, distributed software architecture. SOA emphasizes reuse, while microservices emphasize rewriting; SOA tends toward horizontal services, microservices toward vertical ones; SOA favors top-down design, microservices bottom-up.

The principles of microservices and micro frontends are similar to those of software engineering and object-oriented design: both follow the basic principles of single responsibility, separation of concerns, modularity, and divide and conquer. The evolution from a monolithic application to microservices is not accomplished overnight; the following figure demonstrates a simple gradual substitution process:

Cloud Native architecture

Cloud native uses automation and architecture to manage system complexity and liberate productivity by building teams, cultures, and technologies. — Joe Beda, Heptio CTO and co-founder

Pivotal is the originator of cloud-native applications, having launched the Pivotal Cloud Foundry cloud-native application platform and the Spring open-source Java development framework, becoming a pioneer and pathfinder of cloud-native application architecture. Back in 2015, Matt Stine of Pivotal wrote a booklet called Migrating to Cloud-Native Application Architectures, which discussed several key features of cloud-native application architecture: compliance with the 12-Factor App, a microservice-oriented architecture, self-service agile infrastructure, API-based collaboration, and antifragility. In 2015, Google led the establishment of the Cloud Native Computing Foundation (CNCF). At the beginning, CNCF defined cloud native in three aspects: application containerization, a microservice-oriented architecture, and support for container orchestration and scheduling.

Cloud-native applications can be simply defined as applications built from scratch for cloud computing architectures: if we design our applications expecting them to be deployed on a distributed, scalable infrastructure, our applications are cloud native. As the public cloud carries more and more computing power, cloud computing will become the mainstream way of delivering IT capability. CNCF has since redefined cloud-native technology: cloud-native technologies help organizations build and run applications that can scale flexibly in modern dynamic environments such as public, private, and hybrid clouds. Cloud-native technologies include containers, service meshes, microservices, immutable infrastructure, and declarative APIs.

  • Codeless corresponds to service development: with source code hosting, you care only about the implementation of your code, not where the code lives, because you never feel the presence of a code repository or code branch throughout the development process.

  • Applicationless corresponds to service publishing. In a servitization framework, you no longer need to apply for an application or care where your application is located to publish your service.

  • Serverless corresponds to service operation and maintenance. With Serverless capability, you no longer need to pay attention to your machine resources; Serverless handles the elastic scaling of machine resources for you.

The combination of these technologies can build loosely coupled systems that are fault tolerant, easy to manage, and easy to observe. Combined with reliable automation, cloud-native technology lets engineers make frequent, predictable, major changes to systems with ease. Cloud native is thus an effective lever for guaranteeing system capability. Cloud-native technologies enable organizations to build and run applications that scale flexibly in dynamic environments such as public, private, and hybrid clouds, and microservices architectures are ideal for cloud-native applications. However, cloud native also has limitations: if your cloud-native application is deployed on a public cloud such as AWS, the cloud-native APIs are not portable across cloud platforms.

Key attributes of cloud-native applications include: packaged as lightweight containers; developed in the most appropriate languages and frameworks; designed as loosely coupled microservices; centered on APIs for interaction and collaboration; architected with clear boundaries between stateless and stateful services; independent of the underlying operating system and servers; deployed on self-service, elastic cloud infrastructure; managed through agile DevOps processes; equipped with automation capabilities; and with resource allocation driven by definitions and policies. Cloud native is the current mainline of distributed application development, and its end state should be one in which all problems related to distribution are solved by the cloud platform, making the development of distributed applications as convenient as, or even more convenient than, the development of traditional applications.

Cloud Infrastructure

This part is excerpted from Virtualization and Choreography of distributed Infrastructure

Virtual machines

A virtual machine consists of virtualized hardware and a guest kernel that runs a guest operating system. Software called a hypervisor creates the virtualized hardware, which can include virtual disks, virtual network interfaces, virtual CPUs, and so on. The virtual machine also includes a guest kernel that communicates with this virtual hardware. A hypervisor can be hosted, meaning it runs as software on the host operating system (for example, macOS), or it can be bare metal, running directly on the machine's hardware in place of an operating system. Either way, the hypervisor approach is considered heavyweight because it requires virtualizing most, if not all, of the hardware and kernel.

While VMs require hardware virtualization to achieve machine-level isolation, containers need only be isolated within the same operating system. As the number of isolated spaces increases, this cost difference becomes significant.

Containers

Cloud platforms have grown rapidly over the past few years, but one of the biggest challenges for operations engineers is the need to install runtime environments for a variety of different development languages. Although automated operation and maintenance tools can reduce the complexity of environment construction, they still cannot fundamentally solve the problems of the environment.

The emergence of Docker became a new watershed in the software development industry, and the maturity of container technology marks the beginning of a new technological era. Docker gives developers the ability to encapsulate an application and its dependencies into a portable container, a move likely to sweep the entire software industry and change the rules of the game, much as smartphones changed the rules of the entire mobile industry when they first appeared. Through container-based packaging, Docker enables development and operations engineers to release applications in the standardized image-distribution manner Docker provides, so that heterogeneous languages are no longer the shackles that bind a team.

Containers are packages of application code, configuration, and dependencies that provide operational efficiency and productivity. The rise of containers, which provide predictable, repeatable, and immutable operating expectations, is a huge driver of DevOps as a service and can help overcome one of the biggest security barriers we face today. Containerization virtualizes at the operating-system level to create kernel-based, isolated, encapsulated systems, making applications portable: a containerized application can be placed anywhere and run without unresolved dependencies and without requiring an entire VM.

As a stand-alone unit, containers can run on any host operating system, CentOS, Ubuntu, MacOS, and even non-UNIX systems like Windows. Containers also serve as standardized work or computing units. A common example is that each container runs a single Web server, a single shard of a database, or a single Spark worker, etc. It is easy to scale applications by simply scaling the number of containers. Each container has a fixed resource configuration (CPU, RAM, number of threads, etc.), and scaling applications requires scaling only the number of containers rather than a single resource primitive. This provides an easier abstraction for engineers when applications need to be scaled up or down. Containers are also a good tool to implement a microservice architecture, where each microservice is just a set of collaborating containers. For example, Redis microservices can be implemented using a single master container and multiple slave containers.

Kubernetes and orchestration

As virtualization technology matures and distributed architectures gain popularity, cloud platforms for deploying, managing, and running applications are mentioned more and more often. IaaS, PaaS, and SaaS are the three basic service types of cloud computing, representing, respectively, infrastructure as a service focused on hardware infrastructure, platform as a service focused on software and middleware platforms, and software as a service focused on business applications. The emergence of containers has transformed cloud-hosted applications based on virtual machines into cloud-platform applications built on more flexible, lightweight containers and their scheduling.

However, increasingly scattered container units gradually drive up management costs, and the demand for container orchestration tools is unprecedented. Kubernetes, Mesos, Swarm, and other cloud-native tools provide strong orchestration and scheduling capabilities; they are the distributed operating systems of the cloud platform. Container orchestration is the automated process of deploying the multiple containers that implement an application. Container management and orchestration engines such as Kubernetes and Docker Swarm let users guide container deployment and automate updates, health monitoring, and failover.

Kubernetes is one of the most closely watched open-source projects in the world today. It is an excellent one-stop container orchestration system. Based on Borg, a system that hundreds of engineers spent more than a decade building, Kubernetes is easy to install and flexible about the network layer it connects to. Kubernetes and Mesos have revolutionized the way engineers across the industry work: they no longer have to keep an eye on every server, and can simply replace one when something goes wrong. Business development engineers need not focus on non-functional requirements and can concentrate solely on their own business domain, while middleware development engineers develop robust cloud-native middleware that connects business applications to the cloud platform.

Kubernetes, Service Mesh, and Serverless together encapsulate details at different levels and mask them from the layers above. Kubernetes introduces different design patterns and implements new, effective, and elegant abstractions and management patterns for various cloud resources, making cluster management and application publishing fairly easy and error-free. Microservice architecture, now widely used, transfers the complexity of distributed applications into the services; how to govern them in a globally consistent, systematic, standardized, and non-intrusive way has become crucial to the microservice architecture. Kubernetes refines the granularity into which applications are decomposed, and simplifies application development by providing service discovery, configuration management, load balancing, and health checks as infrastructure capabilities. Kubernetes's declarative configuration is particularly well suited to CI/CD processes, and open-source tools such as Helm, Draft, Spinnaker, and Skaffold can help us distribute Kubernetes applications.

The Service Mesh achieves this by stripping the content that services share, and that is environment-specific, into a Sidecar process deployed alongside each service. This stripping fully decouples the service from the platform so that each can evolve independently, and also makes the service lighter, helping to improve the timeliness of service start and stop. Because the Service Mesh separates service-governance logic into the Sidecar as an independent process, the functions implemented by the Sidecar naturally support multiple languages, creating favorable conditions for services to be developed in multiple languages. The Service Mesh also closes over all service traffic on the network, so that traffic-scheduling engineering such as multi-site active-active deployment can be done more elegantly, simply, and effectively, and the grayscale release and rollback of service versions can be realized more conveniently, improving the quality of production safety. This technical closure opens new room for the governance and evolution of service traffic, for troubleshooting, and for economical log collection.

Further reading

You can read the author’s series of articles in Gitbook through the following navigation, covering a variety of fields: technical summaries, programming languages and theory, the web and the big front end, server-side development and infrastructure, cloud computing and big data, data science and artificial intelligence, product design, and so on:

  • Knowledge system: Awesome Lists (CS resource collection), Awesome CheatSheets (quick-reference manuals), Awesome Interviews (essential job interview preparation), Awesome RoadMaps (programmer advancement guides), Awesome MindMaps (knowledge-context mind maps), Awesome-CS-Books (open-source books, PDF)

  • Programming languages: Programming Language Theory, Java Field, JavaScript Field, Go Field, Python Field, Rust Field

  • Software Engineering, Patterns and Architecture: Programming Paradigms and Design Patterns, Data Structures and Algorithms, Software Architecture Design, Neatness and Refactoring, R&D Methods and Tools

  • Web and Big Front End: Modern Web Development Fundamentals and Engineering Practice, Data Visualization, iOS, Android, Hybrid Development and Cross-End Applications

  • Server development practice and Engineering architecture: Server Basics, Microservices and Cloud Native, Testing and High Availability Assurance, DevOps, Node, Spring, Information Security and Penetration Testing

  • Distributed Infrastructure: Distributed Systems, Distributed Computing, Databases, Networks, Virtualization and Choreography, Cloud Computing and Big Data, Linux and Operating Systems

  • Data Science, Artificial Intelligence and Deep Learning: Mathematical Statistics, Data Analysis, Machine Learning, Deep Learning, Natural Language Processing, Tools and Engineering, Industry Applications

  • Product Design and User Experience: Product Design, Interactive Experience, Project Management

  • Industry application: “Industry myth”, “Functional Domain”, “e-commerce”, “Intelligent Manufacturing”

In addition, you can go to xCompass to interactively search for articles/links/books/courses, or view more detailed directory navigation, such as articles and project source code, in the MATRIX article and code index. Finally, you can also follow the WeChat official account "A Bear's Technical Path" to get the latest information.