In this article, we will share how to build a simple pattern of microservices architecture.

What is the simple model of microservices architecture?

Compared to the tens of thousands of concurrent visits on large Internet platforms, or multiple online releases per day, most enterprises and projects do not have such a need. They focus on how to make development more efficient, how to realize new requirements faster, how to make operations easier, and so on.

A simple pattern of microservices architecture is a software architecture solution that meets these requirements.

Compared with a “perfect” microservices architecture solution, the simple pattern can set aside a number of components: distributed transaction technology for ensuring data consistency, a configuration center for migrating packages between environments (development, test, production), call-chain tracing for monitoring API calls, circuit breakers for avoiding system overload, API documentation frameworks for API management and testing, and infrastructure such as Zookeeper, Redis, and the various MQ products. It only needs to focus on the oft-discussed registry, service discovery, load balancing, and service gateway.

How to land?

The key to adopting a microservices architecture is to amplify its advantages and overcome its disadvantages. Compared with a monolithic architecture, the two biggest disadvantages of a microservices architecture are that it is harder to get started with and harder to operate and maintain. Let’s look at how to implement the simple pattern of microservices architecture from these two perspectives.

Getting started is difficult

Compared to the traditional monolithic architecture, the microservices architecture introduces many concepts at once, which can overwhelm beginners. Therefore, we have to separate the wheat from the chaff and be clear about what we actually need and what is merely folklore. Here’s a look at which components are necessary to develop a microservices architecture.

First, there are four steps to developing using the microservices simple pattern:

Step 1: Develop single-responsibility microservices using existing technology systems in your organization.

Step 2: The service provider registers the address information in the registry, and the caller pulls the service address down from the registry.

Step 3: Expose the microservice API to the portal and mobile APP through the portal back end (service gateway).

Step 4: Integrate the management module into a unified operation interface.

To achieve the above four steps, the following basic technologies (required components) must be mastered.

Registry, service discovery, load balancing: correspond to steps 1 and 2 above

Service Gateway: corresponds to step 3 above

Management side integration framework: corresponds to step 4 above

Registry, service discovery, load balancing

Unlike a monolithic architecture, a microservices architecture is a distributed network of fine-grained, single-responsibility services that communicate through lightweight mechanisms. This introduces the problem of service registration and discovery: a provider registers its own address somewhere (the Service Registry), and a caller finds the address of the service it wants to invoke from that registry (Service Discovery). At the same time, service providers generally run as clusters, which introduces the need for load balancing.
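The register/heartbeat/discover cycle just described can be sketched in plain Java. This is a toy in-memory illustration, not Eureka’s implementation; all class and method names here are invented:

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Toy in-memory service registry: providers register an address and renew it
// with heartbeats; entries whose heartbeat is stale are dropped, so callers
// only discover live instances.
public class SimpleRegistry {
    private static final long LEASE_MS = 30_000; // evict after 30s without a heartbeat

    private static class Lease {
        final String address;
        long lastHeartbeat;
        Lease(String address, long now) { this.address = address; this.lastHeartbeat = now; }
    }

    // serviceName -> (address -> lease)
    private final Map<String, Map<String, Lease>> services = new ConcurrentHashMap<>();

    // Called by a provider on startup.
    public void register(String service, String address, long now) {
        services.computeIfAbsent(service, s -> new ConcurrentHashMap<>())
                .put(address, new Lease(address, now));
    }

    // Called periodically by the provider to keep its lease alive.
    public void heartbeat(String service, String address, long now) {
        Lease lease = services.getOrDefault(service, Map.of()).get(address);
        if (lease != null) lease.lastHeartbeat = now;
    }

    // Called by a consumer: returns only addresses with a fresh heartbeat.
    public List<String> discover(String service, long now) {
        List<String> live = new ArrayList<>();
        for (Lease l : services.getOrDefault(service, Map.of()).values()) {
            if (now - l.lastHeartbeat <= LEASE_MS) live.add(l.address);
        }
        return live;
    }

    public static void main(String[] args) {
        SimpleRegistry registry = new SimpleRegistry();
        registry.register("order-service", "10.0.0.1:8080", 0);
        registry.register("order-service", "10.0.0.2:8080", 0);
        registry.heartbeat("order-service", "10.0.0.1:8080", 40_000);
        // 10.0.0.2 missed its heartbeats, so only 10.0.0.1 is discoverable at t=40s.
        System.out.println(registry.discover("order-service", 40_000));
    }
}
```

A real registry adds replication, delta fetches, and self-preservation on top of exactly this register/renew/evict loop.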

Depending on the location of the Load Balancer (LB), there are currently three major service registration, discovery, and Load balancing schemes:

Centralized LB solution

The first is the centralized LB solution, where there is an independent LB between the service consumer and the service provider. LB is usually implemented by specialized hardware devices such as F5, or based on software such as LVS, HAproxy, etc.

When a service caller invokes a service, it sends the request to the LB, which routes it to a specific service instance according to some policy, such as round-robin, random, least response time, or least concurrency. The biggest problems with this scheme are that it adds a hop between the caller and the provider, and that the LB can easily become a bottleneck for the entire system.

In-process LB scheme

The second is the in-process LB scheme, which integrates the LB function into the service consumer’s process as a library, addressing the shortcomings of the centralized LB. This scheme is also known as soft load balancing or client-side load balancing.

The principle is as follows: the service provider registers its address with the service registry and sends heartbeats to it periodically; the registry decides whether to evict a node based on those heartbeats. When a service caller invokes a service, it first pulls the service registration information from the registry and then invokes a service node according to some policy.

In this case, even if the registry goes down, the caller can still route requests to the correct service based on the service addresses already cached in memory. The biggest problem with this scheme is that every service caller must integrate the registry’s client library, and may need to upgrade that client whenever the registry server is upgraded.
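A minimal sketch of such an in-process load balancer, assuming a round-robin policy and an in-memory snapshot of the registry (the names are illustrative, not Ribbon’s API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Client-side load balancer: the consumer keeps a local snapshot of the
// service's registered addresses and picks one per call with round-robin.
// Because the snapshot lives in the caller's memory, calls keep working
// even if the registry is temporarily down.
public class ClientSideLoadBalancer {
    private volatile List<String> cachedAddresses; // last snapshot pulled from the registry
    private final AtomicInteger counter = new AtomicInteger();

    public ClientSideLoadBalancer(List<String> initialSnapshot) {
        this.cachedAddresses = List.copyOf(initialSnapshot);
    }

    // Called by a background thread each time a registry pull succeeds.
    public void refresh(List<String> snapshot) {
        if (!snapshot.isEmpty()) {
            this.cachedAddresses = List.copyOf(snapshot);
        } // on an empty or failed pull, keep serving from the old snapshot
    }

    // Round-robin choice; other policies (random, least response time) plug in here.
    public String choose() {
        List<String> current = cachedAddresses;
        return current.get(Math.floorMod(counter.getAndIncrement(), current.size()));
    }

    public static void main(String[] args) {
        ClientSideLoadBalancer lb =
            new ClientSideLoadBalancer(List.of("10.0.0.1:8080", "10.0.0.2:8080"));
        System.out.println(lb.choose()); // 10.0.0.1:8080
        System.out.println(lb.choose()); // 10.0.0.2:8080
        lb.refresh(List.of());           // registry outage: empty pull
        System.out.println(lb.choose()); // still serves from cache: 10.0.0.1:8080
    }
}
```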

Independent host LB process solution

The third is the independent host process LB scheme, a compromise proposed to address the shortcomings of the second scheme. Its basic principle is similar to the second; the difference is that LB and service discovery are moved out of the consumer process into a separate process on the same host, and the one or more services on that host all perform service discovery and load balancing through this shared local LB process. A typical example of this solution is Airbnb’s SmartStack service discovery framework. The biggest problem with this scheme is that it is cumbersome to deploy, operate, and maintain.

At present, with the rise and maturation of the Netflix microservices stack and Spring Cloud, the second scheme has become our first choice. We recommend Eureka for the service registry and Ribbon for client-side service discovery and load balancing.

The biggest advantages of this choice are simplicity, practicality, and controllability: there is no need to introduce an additional registry such as Zookeeper or Etcd, and deployment and operation are relatively simple. It is also very simple to use at the code level.

Use Nginx Upstream to perform load balancing for services that are open to the Internet.

A look at some of Eureka’s most important parameters gives an overview of how it works.

Because of Eureka’s registration and lease-expiration mechanism, it takes nearly 2 minutes for a service to become fully available after launch, so we changed the following parameters to speed up releases in our development and test environments. Be sure to change them back in production.
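As a hedged illustration, these are the Spring Cloud Netflix Eureka/Ribbon properties typically tuned for this purpose; the values shown are example dev/test settings, not the article’s original figures and not production recommendations:

```properties
# Dev/test only -- restore the defaults in production.
# Client: pull the registry more often (default 30s).
eureka.client.registry-fetch-interval-seconds=5
# Instance: heartbeat more often and expire dead nodes sooner (defaults 30s / 90s).
eureka.instance.lease-renewal-interval-in-seconds=5
eureka.instance.lease-expiration-duration-in-seconds=15
# Server: refresh the response cache and run eviction more often (defaults 30000ms / 60000ms).
eureka.server.response-cache-update-interval-ms=5000
eureka.server.eviction-interval-timer-in-ms=10000
# Ribbon: refresh its cached server list more often (default 30000ms).
ribbon.ServerListRefreshInterval=5000
```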

The interface of the Eureka Registry is as follows:

The service gateway

In general, a large system will have many microservices with a single responsibility. If the portal system or mobile APP calls the API of these microservices, at least two things should be done:

A unified portal to invoke the microservice API

API authentication

This requires a service gateway. In 2015, we built a simple API gateway using RestTemplate + Ribbon. The principle is that when the API gateway receives a request such as /service1/api1.do, it forwards the request to the api1 interface of the microservice corresponding to service1.
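That prefix-routing rule can be sketched as follows. The route table here is a plain map with invented names; in the real setup the addresses would come from service discovery (Ribbon):

```java
import java.util.Map;
import java.util.Optional;

// Gateway routing sketch: take the first path segment as the service name,
// look the service up, and rewrite the remaining path onto one of that
// service's instances.
public class PrefixRouter {
    private final Map<String, String> serviceAddresses;

    public PrefixRouter(Map<String, String> serviceAddresses) {
        this.serviceAddresses = serviceAddresses;
    }

    // "/service1/api1.do" -> "http://<service1-instance>/api1.do"
    public Optional<String> route(String requestPath) {
        String[] parts = requestPath.split("/", 3); // ["", service, rest-of-path]
        if (parts.length < 3) return Optional.empty();
        String address = serviceAddresses.get(parts[1]);
        if (address == null) return Optional.empty(); // unknown service -> 404 upstream
        return Optional.of("http://" + address + "/" + parts[2]);
    }

    public static void main(String[] args) {
        PrefixRouter router = new PrefixRouter(Map.of("service1", "10.0.0.1:8080"));
        System.out.println(router.route("/service1/api1.do").orElse("404"));
    }
}
```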

Later, we found that Spring Cloud Zuul had a good implementation of all the functions we implemented, so we switched to Zuul. Zuul is Netflix’s Java-based server-side API gateway and load balancer.

Zuul also dynamically loads, compiles, and runs filters. Most surprisingly, Zuul’s forwarding performance is said to be about the same as Nginx’s. Refer to https://github.com/Netflix/zuul for more information.

In general, the API gateway (which can be called the portal back end) is used for reverse proxy, permission authentication, data tailoring, data aggregation, and so on.

Management side integration framework

With the knowledge of registry, service discovery, load balancing and service gateway technologies, microservices can provide reliable services for portal systems and mobile apps. But what about the management side for the back office operators?

Since there is not much load pressure on the back-office operating system, we can integrate the independently developed microservices through CAS and UPMS (UPMS is a user and permission management system developed by our team to fit the microservices architecture; we will share it on the official Qingliuyun website, welcome to follow).

The basic three-step process for integrating a microservice is:

A Spring Boot-based security starter is introduced into each microservice; it contains the system’s top banner and left menu.

Register the access address for the microservice with the UPMS as the entry menu (level 1 menu) for the microservice.

Configure the function menu and role permission information for microservices on the UPMS.

When a user opens a microservice from the browser, the Security starter invokes the API of the UPMS to pull all the microservice lists (level 1 menu) and the function lists of the current microservice (level 2 menu), and displays the current page of the microservice to the user in the content area.

Application Architecture Diagram:

UPMS screenshot, the orange part is provided by UPMS framework, and the red box is the page of microservice:

New microservices are connected to UPMS through its module function:

So, in the end, a microservices architecture based on the simple pattern looks like this:

At this point, the basic microservices architecture is in place. Let’s talk about how to solve the problem of micro service operation and maintenance.

Operation and maintenance are difficult

The operation and maintenance difficulty of a microservices architecture is mainly relative to a monolithic architecture. After the microservices architecture is implemented, the whole system suddenly has many more modules than before, and the workload of deployment and maintenance grows with the number of modules. Therefore, to solve the problem of difficult operation and maintenance, we can first attack it from the perspective of automation.

Furthermore, if you want to fully enjoy the advantages of the microservices architecture while containing its disadvantages, you are advised to prepare a reliable infrastructure first, including automated build, automated deployment, a log center, health checks, performance monitoring, and so on.

Otherwise, it is very likely that the shortcomings of the microservices architecture will cause the team to lose confidence in it and fall back to the monolithic architecture. If you want to do a good job, you must first sharpen your tools.

Continuous integration

When a monolithic application is split into microservices, the original single package is likely to become 10, 20, or more packages, so the first difficulty we ran into was a 10-20 fold increase in deployment work. At this point, continuous integration methods and tools become a prerequisite for implementing a microservices architecture. In practice, we use a Docker-based container service platform to automatically deploy the microservices of the whole system. The process is as follows:

If no microservices support platform is available, the Jenkins API and the Docker API can also be called from shell scripts.

The main process is:

Call the Jenkins command to pull the code from the code repository and package the code.

Call Docker /build and /images/push commands to build the image and push it to a private image repository.

The Docker /containers/create and /containers/start commands are called to create and start the containers.

Configuration center

In the development/test environments, the package has already been built into a Docker image. If the image that passed testing could be pushed directly to the production environment, the repeated packaging and deployment work for production could be skipped entirely. Wouldn’t that be great?

To achieve this, packages must be environment-independent; that is, a package must not contain any environment-specific configuration information. This introduces the configuration center component.

This component is simple: it returns the key-value pairs a microservice needs, looked up by project code, environment code, and microservice code. For example:

ProjectA_PRODUCTION_MicroService1_jdbc.connection.url
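A sketch of that lookup, where the key format and the in-memory store are assumptions extrapolated from the example key above:

```java
import java.util.Map;

// Configuration center sketch: every value is keyed by project code,
// environment code, microservice code, and property name, so one store
// can serve all environments.
public class ConfigCenterClient {
    private final Map<String, String> store; // stands in for the config center's backing store

    public ConfigCenterClient(Map<String, String> store) { this.store = store; }

    static String key(String project, String env, String service, String property) {
        return project + "_" + env + "_" + service + "_" + property;
    }

    public String get(String project, String env, String service, String property) {
        return store.get(key(project, env, service, property));
    }

    public static void main(String[] args) {
        ConfigCenterClient config = new ConfigCenterClient(Map.of(
            "ProjectA_PRODUCTION_MicroService1_jdbc.connection.url",
            "jdbc:mysql://prod-db:3306/app",
            "ProjectA_TEST_MicroService1_jdbc.connection.url",
            "jdbc:mysql://test-db:3306/app"));
        // Same package, different environment code, different value:
        System.out.println(config.get("ProjectA", "TEST", "MicroService1", "jdbc.connection.url"));
    }
}
```

Because only the environment code differs between lookups, the same image can run unchanged in test and production.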

An important added value of the configuration center is that the configuration information in different environments can be managed by different people, which enhances the security of the configuration information in the production environment, such as database accounts and passwords.

This module also has open source implementations for reference, such as Baidu’s Disconf and Spring Cloud Config. In the spirit of reinventing the wheel, we developed our own configuration center microservice so that it integrates easily with the UPMS mentioned above.

Note: This component is not required for the simple pattern of microservices architecture, but is recommended.

Monitoring alarm

After a monolithic application is split into microservices, one application becomes many, making it harder to monitor system health checks, performance, service indicators, file backups, database backups, and scheduled task execution.

Therefore, to make the operations team’s life a little easier, it is best to build a monitoring platform. If you want to build one quickly, consider Nagios or Zabbix. If you want better scalability and customization, consider combining the following components:

Collectd collects host, database, network, and storage metrics. 1653 stars on GitHub.

Metrics is an excellent JVM metrics collection library that provides instrumentation modules for third-party libraries and applications such as Jetty, Logback, Log4j, Apache HttpClient, Ehcache, JDBI, and Jersey. It can also report measurements to Ganglia and Graphite for graphical monitoring. 5000+ stars on GitHub.

cAdvisor is a Docker container metrics collector, produced by Google. 6000 stars on GitHub.

Grafana is a very elegant open source dashboard tool that supports multiple data sources such as Graphite, InfluxDB, MySQL and OpenTSDB. 17000 stars on GitHub.

InfluxDB is an excellent open source distributed time-series database, currently ranked number one among time-series databases. Among its useful features, the retention policy automatically discards unwanted historical data. 11175 stars on GitHub.

In addition to the above modules, we also developed a module to detect the health and performance of the application, and send alerts to the operation and maintenance personnel when various indicators such as host, application health, and application performance are abnormal.
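The core of such an alerting module is a threshold check over collected samples. A minimal sketch, with metric names and bounds invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Threshold-based alerting sketch: compare each configured rule against the
// latest metric samples and emit a message for every out-of-range value.
public class ThresholdAlert {
    record Rule(String metric, double max) {}

    static List<String> evaluate(List<Rule> rules, Map<String, Double> samples) {
        List<String> alerts = new ArrayList<>();
        for (Rule rule : rules) {
            Double value = samples.get(rule.metric());
            if (value != null && value > rule.max()) {
                alerts.add(rule.metric() + "=" + value + " exceeds " + rule.max());
            }
        }
        return alerts;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(new Rule("cpu.load", 0.8), new Rule("api.p99.ms", 500.0));
        Map<String, Double> samples = Map.of("cpu.load", 0.95, "api.p99.ms", 120.0);
        // Only cpu.load is out of range, so only one alert is produced.
        System.out.println(evaluate(rules, samples));
    }
}
```

In a real deployment the samples would come from collectors such as Collectd or cAdvisor, and the alerts would be pushed to the operations team rather than printed.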

A final word

Looking back at the end of this article: master the registry, service discovery, load balancing, the service gateway, and the management-side integration framework at the development level, and prepare continuous integration tooling, a configuration center, and monitoring alarms at the operations level, and you can land a microservices architecture with ease and enjoy what it has to offer. Have a good time.