SpringCloud_01_ Introduction to microservice architecture theory
References:
- www.cnblogs.com/binghe001/p…
- blog.csdn.net/qq_42897427…
1. Evolution of microservices architecture
1.1. Single application architecture (single database, multiple application servers)
In the early days of the Internet, most websites had little traffic, so a single application was enough: all functional code was developed and deployed together, which kept development, deployment, and maintenance costs low. For example, an e-commerce system contains modules such as user management, commodity management, order management, and logistics management. We build them into one web project and package and deploy it on a Tomcat server. When the number of users grows too large, we add more servers and put them behind load balancing.
As mentioned above, when we need multiple servers to serve more users, we also need to keep the data consistent. According to database normalization theory, the lower the redundancy of the data, the higher its consistency. So the first step is to separate the system into multiple application servers that handle user requests and a single database server that handles all reads and writes centrally; this spreads the load across servers while keeping the data correct.
At the entry point in front of the application servers, we also need a load balancer to distribute user requests to specific application servers, much like the queue ticket machine in a restaurant that directs customers. The load balancer can be an ordinary PC server running software such as Nginx, or a dedicated hardware load-balancing device such as an F5.
Advantages of single application architecture:
- Simple project structure; low development cost for small projects
- The project is deployed on one node for easy maintenance
Disadvantages of single application architecture:
- All functions are integrated in one project, which is difficult to develop and maintain for large projects
- Modules are tightly coupled, and single-point fault tolerance is low
- Individual modules cannot be optimized or scaled horizontally on their own
- Fault isolation is poor: a failure in one module can bring down the whole project
1.2. Vertical application architecture
As traffic grows, a single application can only cope by adding server nodes, but not every module receives heavy traffic. Take e-commerce as an example: growing user traffic may mainly affect the user and order modules while barely touching the message module. Ideally we would scale out only the order module, not the message module, which a single application cannot do; this is where the vertical application architecture comes in.
The so-called vertical application architecture is to break up an application into several unrelated applications to improve efficiency. For example, we can divide the single application of the above e-commerce into:
- E-commerce system (user management, commodity management, order management)
- Back-office management system (user management, order management, customer management)
- CMS system (advertising management, marketing management)
This way, when user traffic grows, we only need to add nodes for the e-commerce system, not for the back-office or CMS systems. The three systems are deployed on three separate servers so that they do not interfere with each other, and traffic is split among them.
Advantages:
- Splitting the system spreads the traffic, relieves concurrency pressure, and lets different modules be optimized and scaled horizontally independently
- Problems in one system do not affect other systems, improving fault tolerance
Disadvantages:
- Systems are independent of each other and cannot call each other.
- Systems are independent of each other, and there are repetitive development tasks
- For example, if the back-office management system needs to display data from the e-commerce system, it has to access the e-commerce system's data itself, since the systems cannot call each other. Moreover, this architecture does not fundamentally solve the high-concurrency problem; it merely cuts the original project into three pieces, and performance can still only be improved by adding cluster nodes, which is costly and eventually hits a bottleneck.
1.3. Distributed Architecture
As the number of vertical applications grows, so does the amount of duplicated business code. At this point we ask: can we extract the duplicated code into a unified business layer exposed as independent services, and have the front-end control layer call those services? This gives rise to the distributed architecture. It splits a project into a presentation layer and a service layer that contains the business logic; the presentation layer only handles interaction with the page and implements business logic by invoking the services of the service layer.
Advantages:
- Common functionality is extracted into the service layer, improving code reuse
Disadvantages:
- Coupling between systems increases, and the call relationships become complex and hard to maintain
1.4. SOA Architecture — Ali Dubbo
In a distributed architecture, as the number of services grows, problems such as capacity evaluation and wasted resources on small services become obvious, and a scheduling center is needed to manage the cluster in real time. At this point, Service-Oriented Architecture (SOA), which handles resource scheduling and governance, becomes key. SOA allows loosely coupled, coarse-grained application components (services) to be distributed, deployed, combined, and used over a network as required. A service usually runs in its own operating system process.
Advantages:
- A registry automates the management of invocation relationships between services
- We split these three systems into many services; the whole system is divided into a presentation layer (consumers) and a service layer (providers). Each service is deployed separately on its own server or cluster, and consumers invoke providers through the registry. The presentation layer roughly corresponds to what we usually call the Controller layer, and the service layer to the Service layer
Disadvantages:
- There are dependencies between services, and if something goes wrong, it will have a big impact (service avalanche)
- The dependency and invocation relationship between services is complex, so it is difficult to test and deploy.
1.5. Microservices Architecture
Microservices architecture is, in some ways, the next step in the evolution of service-oriented architecture (SOA), with a greater emphasis on thoroughly decoupled services. Under the microservices architecture, a large project is split into small microservices that can be deployed independently. The granularity of the split is finer: each service corresponds to a single business capability, and each microservice has its own database.
Advantages:
- Services are atomized and split, packaged, deployed and upgraded independently, ensuring clear task division for each microservice and facilitating expansion
- Microservices can communicate with each other using REST and RPC protocols
Disadvantages:
- High technical cost of distributed system development (fault tolerance, distributed transactions, data consistency issues, etc.)
2. What are microservices
2.1 Introduction to microservices
In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. — James Lewis and Martin Fowler (2014)
Microservices architecture is an architectural pattern that advocates the partitioning of a single application into a set of small services that coordinate with each other to provide ultimate value to users.
In a microservice architecture, each service runs in its own independent process, and services communicate through lightweight mechanisms. A lightweight communication mechanism usually means a language-independent, platform-independent protocol (typically a RESTful API over HTTP), which simplifies and standardizes collaboration between services.
Each service is built around a business capability and can be deployed independently to production, production-like environments, and so on. A unified, centralized service management mechanism should also be avoided; for each specific service, the appropriate language and tools should be chosen according to its context.
- Microservices are an architectural style
- An application is divided into a group of small services. Each microservice can expose its service interfaces for other microservices to invoke
- Each service runs in its own process, which means it can be deployed and upgraded independently. Each service is a standalone project, and each microservice has its own database
- Services interact with each other through lightweight mechanisms such as HTTP
- Services are split around business functions
- It can be independently deployed by the automatic deployment mechanism
- Decentralization, service autonomy. Services can use different languages and storage technologies.
Once the microservice system architecture is adopted, it is bound to encounter the following problems:
- With so many small services, how do we manage them? (service governance: a registry for service registration, discovery, and removal)
- With so many small services, how do they communicate with each other? (RESTful calls, RPC)
- With so many small services, how do clients access them? (gateway)
- With so many small services, how do we cope when one of them fails? (fault tolerance)
- With so many small services, how do we trace a problem back to its source? (link tracing)
Distributed microservices architecture: implementation dimensions
No microservice designer can get around the problems above, so most microservice products provide a component for each of them. The questions to ask of any product are: which of these dimensions does it cover, and which concrete technologies underpin them?
2.2 Common concepts of microservices architecture
Service governance
Service governance is the automatic management of services, the core of which is automatic registration and discovery of services.
- Service registration: A service instance registers its service information with a registry.
- Service discovery: Service instances obtain information about registered service instances through the registry, and use this information to request their services.
- Service removal (culling): the registry automatically removes failed service instances from the list of available services so that they are no longer invoked.
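In Spring Cloud, registration and discovery are largely declarative. The sketch below is a minimal illustration only, assuming a discovery starter (for example Eureka or Nacos) is on the classpath and the registry address is configured in application.yml; the service name `order-service` and the endpoint are made up for this example.

```java
import java.net.URI;
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Service registration: on startup this application registers itself with the
// configured registry under its spring.application.name.
@SpringBootApplication
@EnableDiscoveryClient
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}

// Service discovery: any service can ask the registry for the live instances of
// another service by its logical name instead of hard-coding addresses.
@RestController
class OrderInstancesController {

    private final DiscoveryClient discoveryClient;

    OrderInstancesController(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    @GetMapping("/order-instances")
    public List<URI> orderInstances() {
        return discoveryClient.getInstances("order-service").stream()
                .map(ServiceInstance::getUri)
                .collect(Collectors.toList());
    }
}
```

Service removal is the registry's job: instances that stop sending heartbeats are dropped from the list that `getInstances` returns.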
Service invocation
In microservice architectures, there is often a need for remote invocations between multiple services. At present, the mainstream remote call technologies include RESTful interface based on HTTP and RPC protocol based on TCP.
- REST (Representational State Transfer): a standard, general-purpose style for HTTP-based calls that is supported in every language
- Remote Procedure Call (RPC): an inter-process communication mode that allows remote services to be invoked as if they were local. The main goal of an RPC framework is to make remote service invocation simple and transparent: the framework hides the underlying transport, serialization, and communication details, so developers only need to know who provides which remote interface and how to use it, without caring about the underlying communication details and invocation process.
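As a minimal sketch of the RESTful style in Spring Cloud, assuming spring-cloud-starter-openfeign and a discovery client are on the classpath; the service name `order-service` and the `/orders/{id}` endpoint are illustrative, not taken from the original text.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@EnableFeignClients   // scan for @FeignClient interfaces
public class ConsumerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }
}

// Declarative HTTP client: Feign builds the request, and the logical name
// "order-service" is resolved to a concrete instance through the registry
// (client-side load balancing).
@FeignClient(name = "order-service")
interface OrderClient {
    @GetMapping("/orders/{id}")
    String findOrder(@PathVariable("id") Long id);
}

@RestController
class CheckoutController {

    private final OrderClient orderClient;

    CheckoutController(OrderClient orderClient) {
        this.orderClient = orderClient;
    }

    // The remote call reads like a local method call; HTTP, serialization and
    // load balancing are handled underneath.
    @GetMapping("/checkout/{orderId}")
    public String checkout(@PathVariable("orderId") Long orderId) {
        return orderClient.findOrder(orderId);
    }
}
```

An RPC framework such as Dubbo aims for the same "call a remote service like a local one" experience, but over its own TCP-based protocol rather than HTTP.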
Service gateway
As the number of microservices grows, different microservices generally have different network addresses, and an external client may need to call the interfaces of several services to complete one business requirement. If clients talk to each microservice directly, the following problems arise:
- Clients have to call many different URLs, which increases complexity; in some scenarios cross-origin requests also appear
- Each microservice has to implement authentication separately
The API gateway is designed to solve these problems.
Taken literally, an API gateway means that all API calls go through the gateway layer, which provides unified access and unified output. The basic functions of a gateway include unified access, security protection, protocol adaptation, traffic control, support for long and short links, and fault tolerance. With a gateway in place, each API provider team can focus on its own business logic, while the API gateway concentrates on security, traffic, routing, and similar cross-cutting concerns.
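A minimal sketch of unified access with Spring Cloud Gateway (one possible gateway product; the original text does not name a specific one). The route id, path, and service name below are placeholders.

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    // All clients call the gateway; requests matching /api/orders/** have the
    // /api prefix stripped and are forwarded, load balanced ("lb://"), to the
    // service registered as order-service. Cross-cutting concerns such as
    // authentication and rate limiting can be added here as gateway filters.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("order-route", r -> r.path("/api/orders/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("lb://order-service"))
                .build();
    }
}
```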
Service fault tolerance
In microservices, one request often involves invoking several services; if one of them is unavailable and there is no fault tolerance, there is a high risk of an avalanche in which the whole chain of services becomes unavailable. We cannot completely prevent avalanches, so we have to make services as fault-tolerant as possible. The three core ideas of service fault tolerance are:
- Not affected by the outside environment
- Don’t be overwhelmed by upstream requests
- Not dragged down by downstream responses
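The sketch below illustrates the "not dragged down by downstream responses" idea with a circuit breaker and fallback. It assumes the Resilience4j Spring Boot starter (one of several possible fault-tolerance libraries; Hystrix and Sentinel are alternatives), and the breaker name, service name, and URL are placeholders.

```java
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class OrderQueryService {

    // Assumed to be a @LoadBalanced RestTemplate bean so the logical name
    // "order-service" is resolved through the registry.
    private final RestTemplate restTemplate;

    public OrderQueryService(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // If calls to the downstream order service keep failing or timing out, the
    // breaker opens and requests are short-circuited to the fallback instead of
    // piling up, so this service is not dragged down by a sick dependency.
    @CircuitBreaker(name = "orderService", fallbackMethod = "fallback")
    public String findOrder(Long id) {
        return restTemplate.getForObject("http://order-service/orders/" + id, String.class);
    }

    // Fallback: same signature plus a Throwable, returning a degraded result.
    private String fallback(Long id, Throwable cause) {
        return "order " + id + " is temporarily unavailable";
    }
}
```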
Link tracing
With the popularity of microservices architectures, services are split along different dimensions, and a single request often involves multiple services. Internet applications are built on different sets of software modules, which may be developed by different teams, implemented in different programming languages, and spread across thousands of servers in different data centers. Therefore, the chain of services involved in a request needs to be logged and its performance monitored; this is link tracing.
2.3 Common solutions of microservice architecture
ServiceComb
Apache ServiceComb, formerly Huawei's Cloud Service Engine (CSE), is the world's first microservice project to become an Apache top-level project. It provides a one-stop open-source microservice solution, aiming to help enterprises, users, and developers easily turn enterprise applications into microservices on the cloud and operate and maintain those microservice applications efficiently.
Dubbo
Dubbo is Alibaba's open-source, high-performance service framework. It lets applications expose and consume services through high-performance RPC and integrates seamlessly with the Spring framework. Dubbo is a high-performance, lightweight, open-source Java RPC framework that provides three core capabilities: interface-oriented remote method invocation, intelligent fault tolerance and load balancing, and automatic service registration and discovery.
Dubbo consists of three parts:
- Remoting: A network communication framework that implements sync-over-async and Request-Response message mechanisms.
- RPC: A remote procedure call abstraction that supports load balancing, disaster recovery, and clustering
- Registry: The service catalog framework is used for registering services and publishing and subscribing to service events
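As a rough sketch of Dubbo's interface-oriented remote invocation, assuming the Dubbo Spring Boot starter and a configured registry such as ZooKeeper or Nacos; the `OrderService` interface and the classes below are invented for illustration, and in practice the three parts live in separate modules.

```java
import org.apache.dubbo.config.annotation.DubboReference;
import org.apache.dubbo.config.annotation.DubboService;

// api module: the interface shared by provider and consumer.
interface OrderService {
    String findOrder(Long id);
}

// provider module: the implementation is exposed as a Dubbo service and
// registered with the configured registry on startup.
@DubboService
class OrderServiceImpl implements OrderService {
    @Override
    public String findOrder(Long id) {
        return "order-" + id;
    }
}

// consumer module: @DubboReference injects a proxy, so the remote call reads
// like a local method call while Dubbo handles discovery, load balancing,
// serialization and the underlying RPC transport.
class CheckoutFacade {

    @DubboReference
    private OrderService orderService;

    public String checkout(Long id) {
        return orderService.findOrder(id);
    }
}
```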
Spring Cloud
- Spring Cloud is a collection of frameworks. It builds on the development convenience of Spring Boot to cleverly simplify the construction of distributed system infrastructure, such as service registration and discovery, the configuration center, the message bus, load balancing, circuit breakers, and data monitoring, all of which can be started and deployed with one click in the Spring Boot style.
- Instead of reinventing the wheel, Spring Cloud takes mature, battle-tested service frameworks developed by various companies and wraps them in the Spring Boot style, hiding the complex configuration and implementation details.
- The result is a distributed system development toolkit that is easy to understand, deploy, and maintain. It covers:
- Service invocation
- Service degradation
- Service registration and discovery
- Service circuit breaking
- Load balancing
- Service message queues
- Service gateway
- Configuration center management
- Automated build and deployment
- Service monitoring
- Full-link tracing
- Scheduled service tasks
- Scheduling operations
Spring Cloud Alibaba
Spring Cloud Alibaba is committed to providing a one-stop solution for microservice development. The project contains the components needed to build distributed microservice applications, so that developers can use them easily through the Spring Cloud programming model. With Spring Cloud Alibaba, you only need to add a few annotations and a small amount of configuration to plug a Spring Cloud application into Alibaba's microservice solutions and quickly build a distributed application system on top of Alibaba middleware.
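To make the "a few annotations and a little configuration" claim concrete, here is a minimal sketch of registering a service with Nacos through Spring Cloud Alibaba. It assumes the spring-cloud-starter-alibaba-nacos-discovery dependency; the service name and registry address are placeholders.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// With the Nacos discovery starter on the classpath, this application registers
// itself with the Nacos registry at startup. The configuration needed in
// application.yml is roughly:
//
//   spring:
//     application:
//       name: user-service               # logical service name (placeholder)
//     cloud:
//       nacos:
//         discovery:
//           server-addr: 127.0.0.1:8848  # Nacos registry address (placeholder)
@SpringBootApplication
@EnableDiscoveryClient
public class UserServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(UserServiceApplication.class, args);
    }
}
```

Because Spring Cloud Alibaba implements the standard Spring Cloud abstractions, this mirrors the registration sketch shown earlier; only the registry implementation changes.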