Original text/project address: github.com/sqshq/Piggy…

The project is called Piggy Metrics, a financial solution for individuals.

This is a proof-of-concept application that demonstrates the microservice architecture pattern with Spring Boot, Spring Cloud, and Docker, and it also has a nice, clean user interface.

Functional services

PiggyMetrics is decomposed into three core microservices. Each is an independently deployable application organized around a particular business capability.

Account service

Contains the general user input logic and validation: income/expense items, savings, and account settings.

Statistics service

Performs calculations of the main statistical parameters and captures a time series for each account. Each data point contains values normalized to the base currency and time period. This data is used to track cash-flow dynamics over the account's lifetime (the fancy charts have not been implemented in the UI yet).

Notification service

Stores users' contact information and notification settings (such as reminder and backup frequency). Scheduled workers collect the required information from the other services and send e-mail messages to subscribed customers.

Notes

Each microservice has its own database, so there is no way to bypass the API and access the database directly.

In this project, MongoDB is used as the primary database for each service. It could also make sense to use a polyglot persistence architecture, choosing for each service the type of database that is best suited to its requirements.

Service-to-service communication is quite simplified: microservices talk to each other only through a synchronous REST API. Common practice in real-world systems is to use a combination of interaction styles: for example, synchronous GET requests to retrieve data, and asynchronous create/update operations through a message broker in order to decouple services and buffer messages. That, however, brings us into the world of eventual consistency.

Infrastructure services

There are a number of common patterns in distributed systems that help make the core services described above work. Spring Cloud provides powerful tools that implement these patterns and enhance Spring Boot-based applications.

Config service

Spring Cloud Config is a horizontally scalable centralized configuration service for distributed systems. It supports local storage, Git, and Subversion as backends.

In this project, the native profile is used, which simply loads configuration files from the local classpath. You can see the shared directory in the Config service resources. Now, when the Notification service requests its configuration, the Config service responds with shared/notification-service.yml and shared/application.yml (which is shared among all client applications).

On the client side, you just build a Spring Boot application with the spring-cloud-starter-config dependency, and autoconfiguration does the rest.

Now you don't need to embed any properties in your application. Just provide bootstrap.yml with the application name and the Config service URL:
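A minimal sketch of what such a bootstrap.yml might look like (the service name, host, and port here are illustrative, not copied from the project):

```yaml
spring:
  application:
    name: notification-service    # used to fetch shared/notification-service.yml
  cloud:
    config:
      uri: http://config:8888     # Config service URL (host and port are assumptions)
      fail-fast: true             # fail startup if the Config service is unreachable
```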

With Spring Cloud Config, configuration can be updated dynamically. For example, the EmailService bean is annotated with @RefreshScope. This means you can change the e-mail text and subject without rebuilding and restarting the Notification service.
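As a rough illustration (the property keys below are hypothetical, not the project's actual configuration), a refreshable bean could look like this:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Service;

// Re-reads its injected properties when a refresh request arrives.
@Service
@RefreshScope
public class EmailService {

    @Value("${remind.email.subject}")  // hypothetical property key
    private String subject;

    @Value("${remind.email.text}")     // hypothetical property key
    private String text;

    // ...methods that compose and send the e-mail use the fields above
}
```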

First, change the required properties in the Config server. Then issue a refresh request to the Notification service: curl -H "Authorization: Bearer #token#" -XPOST http://127.0.0.1:8000/notifications/refresh

This process can also be automated using repository webhooks.

Notes

  • Dynamic refresh has some limitations: @RefreshScope does not work with @Configuration classes and does not affect @Scheduled methods.
  • The fail-fast property means that a Spring Boot application will fail to start immediately if it cannot connect to the Config service, which is useful when starting all applications together.
  • There are significant security notes below.

Auth service

Authorization responsibilities are completely extracted into a separate server, which grants OAuth2 tokens for the backend resource services. The Auth server is used for user authorization as well as for secure machine-to-machine communication inside the perimeter.

In this project, user authorization uses the Password credentials grant type (since it is only used by the native PiggyMetrics UI), and microservices are authorized with the Client credentials grant.

Spring Cloud Security provides convenient annotations and autoconfiguration, which make this fairly easy to implement on both the server and client sides. You can learn more in the documentation and check the configuration details in the Auth server code.

On the client side, everything works exactly the same as with traditional session-based authorization. You can retrieve the Principal object from the request, check user roles, and so on using expression-based access control and the @PreAuthorize annotation.

Each client in PiggyMetrics (the Account service, Statistics service, Notification service, and the browser) has a scope: server for the backend services and ui for the browser. So we can also protect controllers from external access, for example:
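A hedged sketch of such a protected endpoint (the path, class, and method names are illustrative):

```java
import java.util.Collections;
import java.util.List;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StatisticsController {

    // Only callers holding the 'server' OAuth2 scope (i.e. backend services) may call this.
    @PreAuthorize("#oauth2.hasScope('server')")
    @GetMapping("/accounts/{name}")
    public List<String> getStatisticsByAccountName(@PathVariable String name) {
        // A real controller would delegate to the service layer; a stub keeps the sketch self-contained.
        return Collections.emptyList();
    }
}
```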

API Gateway

As you can see, there are three core services that expose an external API to clients. In a real-world system, this number can grow very quickly, along with the overall complexity of the system. In fact, hundreds of services might be involved in rendering one complex web page.

In theory, a client could make requests to each of the microservices directly. But obviously this option has challenges and limitations: the client would need to know all the endpoint addresses, perform a separate HTTP request for each piece of information, and merge the results on its side. Another problem is that non-web-friendly protocols might be used on the backend.

Usually a better approach is to use an API gateway. It is a single entry point into the system, used to handle requests by routing them to the appropriate backend service or by invoking multiple backend services and aggregating the results. Besides that, it can be used for authentication, insights, stress and canary testing, service migration, static response handling, and active traffic management.

Netflix open-sourced such an edge service, and with Spring Cloud we can enable it with a single @EnableZuulProxy annotation. In this project, Zuul is used to serve static content (the UI application) and to route requests to the appropriate microservices. Here is a simple prefix-based route configuration for the Notification service:
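The values below are an illustrative sketch, not copied from the project's gateway configuration:

```yaml
zuul:
  routes:
    notification-service:
      path: /notifications/**           # requests with this prefix...
      serviceId: notification-service   # ...go to this Eureka-registered service
      stripPrefix: false
```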

This means that all incoming requests starting with /notifications will be routed to the Notification service. As you can see, there are no hard-coded addresses: Zuul uses the service discovery mechanism to locate Notification service instances, as well as circuit breakers and load balancers, described below.

Service discovery

Another well-known architectural pattern is service discovery. It allows automatic detection of the network locations of service instances, which may be assigned dynamically because of auto-scaling, failures, and upgrades.

The key part of service discovery is the registry. In this project, Netflix Eureka is used. Eureka is a good example of the client-side discovery pattern, where the client is responsible for determining the locations of available service instances (using a registry server) and for load-balancing requests across them.

With Spring Boot, you can easily build a Eureka registry using the spring-cloud-starter-eureka-server dependency, the @EnableEurekaServer annotation, and simple configuration properties.
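A minimal sketch of such a registry application (the class name is illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Turns this Spring Boot application into a Eureka registry.
@SpringBootApplication
@EnableEurekaServer
public class RegistryApplication {
    public static void main(String[] args) {
        SpringApplication.run(RegistryApplication.class, args);
    }
}
```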

Client support is enabled with the @EnableDiscoveryClient annotation and a bootstrap.yml containing the application name:
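For discovery, the essential part is just the application name, as in the bootstrap.yml sketch shown earlier (illustrative here):

```yaml
spring:
  application:
    name: notification-service   # the service registers in Eureka under this name
```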

Now, on application startup, the client registers with the Eureka server and provides metadata such as host, port, health indicator URL, home page, and so on. Eureka receives heartbeat messages from every instance of a service. If the heartbeats stop arriving within a configurable period, the instance is removed from the registry.

In addition, Eureka provides a simple interface where you can keep track of running services and the number of available instances: http://localhost:8761

Load balancer, circuit breaker and HTTP client

Netflix OSS provides another great set of tools.

Ribbon

Ribbon is a client-side load balancer that gives you a lot of control over the behavior of HTTP and TCP clients. Compared with a traditional load balancer, there is no additional network hop per call: you contact the required service directly.

Out of the box, it integrates natively with Spring Cloud and the service discovery mechanism. The Eureka client provides a dynamic list of available servers so Ribbon can balance requests between them.

Hystrix

Hystrix is an implementation of the Circuit Breaker pattern, which gives control over latency and failures of dependencies accessed over the network. The main idea is to stop cascading failures in a distributed environment with a large number of microservices. It helps to fail fast and recover as quickly as possible, an important aspect of self-healing, fault-tolerant systems.

Besides circuit-breaker control, with Hystrix you can add a fallback method that is called to obtain a default value when the main command fails.
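A hedged sketch of that idea using the @HystrixCommand annotation from the javanica module (the service and method below are hypothetical, not part of PiggyMetrics):

```java
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;

@Service
public class ExchangeRateService {  // hypothetical example service

    // If the call fails or the circuit is open, Hystrix invokes the fallback method.
    @HystrixCommand(fallbackMethod = "defaultRate")
    public double getRate(String currency) {
        // imagine a network call to an external rates provider here
        throw new IllegalStateException("remote service unavailable");
    }

    // The fallback must have a signature compatible with the original method.
    private double defaultRate(String currency) {
        return 1.0;  // a sensible default value
    }
}
```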

In addition, Hystrix generates metrics of execution results and latency for each command, which we can use to monitor system behavior.

Feign

Feign is a declarative HTTP client that integrates seamlessly with Ribbon and Hystrix. In fact, with a single spring-cloud-starter-feign dependency and the @EnableFeignClients annotation, you get a full set of load balancer, circuit breaker, and HTTP client with a reasonable out-of-the-box default configuration.

Here is an example from the Account service:
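The sketch below is illustrative: the exact interface, endpoint, and return type in the project may differ, and in newer Spring Cloud versions @FeignClient lives in org.springframework.cloud.openfeign.

```java
import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// Resolved by service id through Eureka; Ribbon and Hystrix are applied automatically.
@FeignClient(name = "statistics-service")
public interface StatisticsServiceClient {

    // The mapping is declared exactly like a Spring MVC controller method.
    @RequestMapping(method = RequestMethod.GET, value = "/statistics/{accountName}")
    String getStatistics(@PathVariable("accountName") String accountName);
}
```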

  • All you need is an interface.
  • You can share parts of the @RequestMapping between a Spring MVC controller and Feign methods.
  • You only specify the required service id (statistics-service in the example above); the rest is resolved automatically through Eureka discovery (although you can obviously access any resource with a specific URL).

Monitor dashboard

In this project configuration, each microservice with Hystrix on board pushes metrics to Turbine via Spring Cloud Bus (with an AMQP broker). The Monitoring project is just a small Spring Boot application with the Turbine and Hystrix dashboards.
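A minimal sketch of such a monitoring application, assuming the Turbine-over-AMQP setup described above (the class name is illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.hystrix.dashboard.EnableHystrixDashboard;
import org.springframework.cloud.netflix.turbine.stream.EnableTurbineStream;

// Aggregates Hystrix metrics streamed over the message broker and serves the dashboard UI.
@SpringBootApplication
@EnableTurbineStream
@EnableHystrixDashboard
public class MonitoringApplication {
    public static void main(String[] args) {
        SpringApplication.run(MonitoringApplication.class, args);
    }
}
```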

Here’s how to make it work.

Let's look at how our system behaves under load: the Account service calls the Statistics service, which responds with a varying simulated delay. The response timeout threshold is set to 1 second.


Log analysis

Centralized logging can be very useful when trying to identify problems in a distributed environment. The Elasticsearch, Logstash, and Kibana stack makes it easy to search and analyze your logs, utilization, and network activity data. A ready-to-go Docker configuration is described in my other project.

Security

Advanced security configuration is out of scope for this proof-of-concept project. For a more realistic simulation of a real system, consider using HTTPS and a JCE keystore to encrypt microservice passwords and Config server properties content (see the documentation for details).

Infrastructure automation

Deploying microservices, with their interdependencies, is a much more complex process than deploying a monolithic application, so it is important to have a fully automated infrastructure. With a continuous delivery approach we gain the following benefits:

  • The ability to release software at any time.
  • Any build could end up being a release.
  • Build artifacts once; deploy as needed.

Here is a simple continuous delivery workflow implemented in this project:

In this configuration, Travis CI builds tagged images for each successful git push. So there is always a latest image for each microservice on Docker Hub, plus older images tagged with the git commit hash. It's easy to deploy any of them and to roll back quickly if needed.
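A hedged sketch of what such a Travis CI build step might look like (the image name, credentials variables, and exact commands are assumptions, not the project's actual .travis.yml):

```yaml
sudo: required
services:
  - docker
script:
  - mvn package -DskipTests                     # build the Spring Boot artifact
  - docker build -t piggymetrics/notification-service .
after_success:
  - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
  # tag the freshly built image both as 'latest' and with the git commit hash
  - docker tag piggymetrics/notification-service piggymetrics/notification-service:$TRAVIS_COMMIT
  - docker push piggymetrics/notification-service:latest
  - docker push piggymetrics/notification-service:$TRAVIS_COMMIT
```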

How do you run everything?

Keep in mind that you are going to start eight Spring Boot applications, four MongoDB instances, and RabbitMQ, so make sure you have 4 GB of RAM available on your machine. You can always run just the vital services, though: the gateway, registry, config, auth, and account services.

Before you begin

  • Install Docker and Docker Compose.
  • Export environment variables: CONFIG_SERVICE_PASSWORD, NOTIFICATION_SERVICE_PASSWORD, STATISTICS_SERVICE_PASSWORD, ACCOUNT_SERVICE_PASSWORD, MONGODB_PASSWORD

Production mode

In this mode, all the latest images are pulled from Docker Hub. Just copy docker-compose.yml and run docker-compose up -d.

Development mode

If you want to build the images yourself (with some changes in the code, for example), you have to clone all the repositories and build the artifacts with Maven. Then run docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d

docker-compose.dev.yml inherits docker-compose.yml with the additional ability to build images locally and expose all container ports for convenient development.

Important ports

Summary

All Spring Boot applications require an already running Config Server to start. But we can start all containers simultaneously because of the fail-fast Spring Boot property and the restart: always Docker Compose option. This means that all dependent containers will keep trying to restart until the Config Server is up and running.
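As an illustration, the relevant part of a service definition in docker-compose.yml might look roughly like this (the service and image names are examples):

```yaml
notification-service:
  image: sqshq/piggymetrics-notification-service
  restart: always        # keep retrying startup until the Config Server is reachable
  environment:
    CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
```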

In addition, the service discovery mechanism needs some time after all applications start. A service is not available for discovery by clients until the instance, the Eureka server, and the client all have the same metadata in their local caches, which may take three heartbeats. The default heartbeat interval is 30 seconds.