Copyright notice: This is an original article by the blogger. Please credit the source when reprinting. Questions and discussion are welcome!

The goal of building a PaaS cloud platform on a microservice architecture and Docker container technology is to give our developers a complete process for rapid service development, deployment, operations management, and continuous delivery and integration. The platform provides infrastructure, middleware, data services, cloud servers, and other resources; developers only need to write business code, submit it to the platform's code repository, and do some necessary configuration, and the system is built and deployed automatically, enabling agile development and rapid iteration of applications. Architecturally, the PaaS cloud platform rests on three pillars: microservice architecture, Docker container technology, and DevOps. This article focuses on the implementation of the microservice architecture.

Implementing microservices requires a great deal of engineering work on infrastructure, which is clearly impractical for many companies. Fortunately, there are already excellent open source frameworks we can use. Mature microservice frameworks in the industry include Netflix OSS, Spring Cloud, and Alibaba's Dubbo. Spring Cloud is a complete microservices framework built on Spring Boot; it provides the components needed to develop microservices, and combined with Spring Boot it makes developing cloud services with a microservice architecture very convenient. Spring Cloud contains many sub-projects, among which Spring Cloud Netflix is one framework, and many of its components are used in our microservice architecture design. The Spring Cloud Netflix project was young at the time and had very little documentation, so the blogger had to work through a great deal of English documentation to study the framework. Those who are new to this framework may not know how to set up a microservice application architecture, so next we introduce the process of building one and which frameworks and components are needed to support it.

To show the composition and principles of the microservice architecture clearly and intuitively, the blogger drew the following system architecture diagram:

[System architecture diagram]

As the figure above shows, the general access path through the microservices is: external request → load balancing → service gateway → microservice → data service/message service. Both the service gateway and the microservices use service registration and discovery to invoke the services they depend on, and each service cluster obtains its configuration from the configuration center service.

Service Gateway

A gateway is the door between external systems (such as client browsers and mobile devices) and the enterprise's internal systems; all client requests reach the back-end services through the gateway. To cope with highly concurrent access, gateway services are deployed as a cluster, which means load balancing is required. We use Amazon EC2 as the virtual cloud server and Elastic Load Balancing (ELB) for load balancing. EC2 scales capacity automatically: when user traffic peaks, EC2 adds more capacity to maintain virtual host performance, and ELB automatically distributes incoming application traffic across multiple instances. To ensure security, client requests must be protected with HTTPS encryption, which requires us to offload SSL; we use Nginx to terminate the encrypted requests. After ELB load balancing, an external request is routed to one gateway service in the gateway cluster, which then forwards it to a microservice. As the boundary of the internal system, the service gateway provides the following basic capabilities:

1. Dynamic routing: dynamically route requests to the required back-end service cluster. Although the internal system is a complex distributed network of microservices, from outside the gateway it looks like a single service, which shields the complexity of the back-end services.

2. Rate limiting and fault tolerance: allocate capacity for each type of request, and discard external requests once the number of requests exceeds the threshold, limiting traffic and protecting the back-end services from being overwhelmed by heavy load. When an internal service fails, build a response directly at the boundary and handle the fault tolerance centrally there, instead of forwarding requests into the internal cluster, to ensure a good user experience.

3. Identity authentication and security control: authenticate every external request, reject requests that fail authentication, and implement anti-crawling by analyzing access patterns.

4. Monitoring: The gateway can collect meaningful data and statistics to provide data support for background service optimization.

5. Access logging: the gateway can collect access log information, such as which service was accessed, how the request was processed (including any exceptions), what the result was, and how long it took. Analyzing the log content allows further optimization of the back-end systems.

We use Zuul, an open source component of the Spring Cloud Netflix framework, to implement the gateway service. Zuul runs requests through several different types of filters, and by writing custom filters we can flexibly implement the various functions of the gateway.
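To make the filter model concrete, here is a minimal plain-Java sketch of the idea: filters declare a type ("pre", "route", "post") and an order, and the gateway runs each type in sequence over a shared request context. The interface and class names are illustrative, not Zuul's actual API; real Zuul filters also have a shouldFilter() condition and a much richer request context.

```java
import java.util.*;

// Simplified sketch of a gateway filter chain in the spirit of Zuul.
interface GatewayFilter {
    String filterType();                // "pre", "route", or "post"
    int filterOrder();                  // lower values run first within a type
    void run(Map<String, Object> ctx);  // shared per-request context
}

class FilterChain {
    private final List<GatewayFilter> filters = new ArrayList<>();

    void register(GatewayFilter f) { filters.add(f); }

    // Execute all filters of one type in order, sharing the request context.
    private void runType(String type, Map<String, Object> ctx) {
        filters.stream()
               .filter(f -> f.filterType().equals(type))
               .sorted(Comparator.comparingInt(GatewayFilter::filterOrder))
               .forEach(f -> f.run(ctx));
    }

    Map<String, Object> handle() {
        Map<String, Object> ctx = new HashMap<>();
        runType("pre", ctx);    // e.g. authentication, rate limiting
        runType("route", ctx);  // forward to the back-end microservice
        runType("post", ctx);   // e.g. collect metrics, write access logs
        return ctx;
    }
}
```

A "pre" filter that rejects unauthenticated requests, for example, would put its verdict in the context before the "route" filters run.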

Service registration and discovery

Because a microservice architecture is a network of fine-grained services with single responsibilities that communicate through lightweight mechanisms, it introduces the problem of service registration and discovery: service providers register and report their addresses, and service callers discover the target services. We use the Eureka component in our microservice architecture for service registration and discovery. All microservices register with the Eureka server (by configuring the Eureka server's address) and periodically send heartbeats as a health check to indicate they are still alive; the default heartbeat interval is 30 seconds and can be changed through Eureka configuration parameters. If the Eureka server receives no heartbeat from a service instance for 90 seconds after the last one (that is, it misses three consecutive heartbeats; the 90-second default can also be modified via configuration), it considers the instance dead and, provided self-preservation mode is disabled, clears its registration information. Self-preservation mode means that when a network partition occurs and Eureka loses too many services in a short period of time, Eureka stops evicting instances, i.e., it will not delete a service just because it has not sent heartbeats for a long time. Self-preservation mode is enabled by default and can be disabled through a configuration parameter.
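As a concrete illustration, the timing values above map to the following Spring Cloud Netflix Eureka properties in application.yml form (the first two values are the defaults just described; disabling self-preservation is shown only as an example of the switch mentioned above):

```yaml
eureka:
  instance:
    lease-renewal-interval-in-seconds: 30      # heartbeat interval (default 30s)
    lease-expiration-duration-in-seconds: 90   # evict after 90s with no heartbeat
  server:
    enable-self-preservation: false            # self-preservation is on by default
```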

The Eureka service is deployed as a cluster (the blogger describes Eureka cluster deployment in detail in another article). All Eureka nodes in the cluster periodically and automatically synchronize microservice registrations with one another, so the registration information on every Eureka node stays consistent. So how does an Eureka node discover the other nodes in the cluster? We establish the association among all Eureka nodes through a DNS server, which must be set up in addition to deploying the Eureka cluster.

When the gateway service forwards an external request, or back-end microservices invoke one another, the caller looks up the target service's registration information on the Eureka server, then discovers and invokes it; this forms the complete process of service registration and discovery. Eureka has a large number of configuration parameters, up to hundreds of them, which the blogger will cover in detail in another article.

Microservice deployment

Microservices are a set of fine-grained services with single responsibilities; they split our business into independent service units that scale well and are loosely coupled. Different microservices can be developed in different languages, and each service handles a single piece of business. Microservices can be divided into front-end services (also called edge services) and back-end services (also called middle-tier services): front-end services aggregate and tailor the back-end services as needed and then expose them to the various external devices (PC, phone, etc.). All services register with the Eureka server on startup, and there are complex dependency relationships among them. When the gateway service forwards an external request and invokes a front-end service, it finds the target service by querying the service registry; the same applies when a front-end service invokes a back-end service. A single request may involve invocations among multiple services. Each microservice is deployed as a cluster, so load balancing is needed whenever services invoke one another; therefore each microservice has an LB component for load balancing.
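The per-service LB component does client-side load balancing over the instance list obtained from the registry (in our stack, Ribbon plays this role). A minimal round-robin sketch in plain Java, with illustrative class names and addresses (real Ribbon also weighs instance health and zone affinity):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Pick target instances from the registry's list in round-robin order.
class RoundRobinBalancer {
    private final List<String> instances;                  // addresses from the service registry
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinBalancer(List<String> instances) { this.instances = instances; }

    // Each call returns the next instance, wrapping around at the end.
    String choose() {
        int i = Math.floorMod(position.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```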

Microservices run as images inside Docker containers, and Docker container technology makes our service deployment simple and efficient. Traditional deployment requires installing the runtime environment on every server; with a large number of servers that is a tremendous amount of work, and if the runtime environment changes, having to reinstall it everywhere would be disastrous. With Docker, we only need to build a new image from the required base image (JDK, etc.) plus the microservice, and deploy the final image to run in a Docker container. This approach is simple and efficient and allows services to be deployed quickly. Each Docker container can run multiple microservices. The Docker containers are deployed as a cluster, and Docker Swarm is used to manage them. We set up an image repository to hold all the base images and the final delivery images, and manage all images through it.
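For example, a microservice image can be built by layering the service's jar on a shared JDK base image. The base image tag, jar path, and port below are hypothetical placeholders for illustration:

```dockerfile
# Build a microservice image on top of a shared JDK/JRE base image.
FROM openjdk:8-jre-alpine
COPY target/user-service.jar /app/user-service.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/user-service.jar"]
```

Because only the top layer changes between releases, rebuilding and redeploying a service is fast, which is what makes this workflow practical at scale.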

Service fault tolerance

A request may depend on multiple back-end services, and in production those services may fail or respond slowly. In a high-traffic system, once one service lags it can exhaust system resources in a short time and bring the whole system down, so a service failure that is not isolated and tolerated can by itself be disastrous. We use the Hystrix component in our microservice architecture for fault tolerance. Hystrix is an open source Netflix component that provides flexible fault-tolerance protection for services through mechanisms such as circuit breaking, isolation, fallbacks, and rate limiting, ensuring system stability.

1. Circuit-breaker (fuse) mode: the principle is similar to an electrical fuse, which blows when the circuit shorts in order to protect it from catastrophic damage. When a service throws errors or exhibits heavy latency and the breaker's trip condition is met, the caller actively opens the breaker, executes the fallback logic, and returns immediately, instead of continuing to call the service and dragging the system down further. The breaker is configured with a default error-rate threshold of 50% for service invocations; above that threshold, circuit breaking starts automatically. After the service has been isolated for a period of time, the breaker enters a half-open state in which a small number of trial requests are allowed through: if the calls still fail, the breaker returns to the open state, and if they succeed, the breaker closes.
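The closed → open → half-open cycle just described can be sketched as a small plain-Java state machine. This is an illustration of the mechanism, not Hystrix's actual implementation; the 50% threshold, the minimum request count of 10, and the sleep window are example values:

```java
// Minimal circuit-breaker state machine: CLOSED -> OPEN -> HALF_OPEN -> ...
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int calls = 0, failures = 0;
    private long openedAt = 0;
    private final double threshold;    // e.g. 0.5 = trip at a 50% error rate
    private final long sleepWindowMs;  // how long to stay open before probing

    CircuitBreaker(double threshold, long sleepWindowMs) {
        this.threshold = threshold;
        this.sleepWindowMs = sleepWindowMs;
    }

    // Callers check this before invoking the dependency; "now" is a clock reading.
    boolean allowRequest(long now) {
        if (state == State.OPEN && now - openedAt >= sleepWindowMs) {
            state = State.HALF_OPEN;   // let a trial request through
        }
        return state != State.OPEN;
    }

    // Callers report each outcome so the breaker can update its state.
    void record(boolean success, long now) {
        if (state == State.HALF_OPEN) {          // the trial call decides the next state
            state = success ? State.CLOSED : State.OPEN;
            if (!success) openedAt = now;
            calls = failures = 0;
            return;
        }
        calls++;
        if (!success) failures++;
        if (calls >= 10 && (double) failures / calls >= threshold) {
            state = State.OPEN;                  // trip the breaker
            openedAt = now;
        }
    }

    State state() { return state; }
}
```

While the breaker is open, the caller skips the remote call entirely and runs its fallback, which is what protects the rest of the system.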

2. Isolation mode: Hystrix uses thread isolation by default; different services use different thread pools and do not affect one another. For example, we can use andThreadPoolKey() to configure a service to use a thread pool named TestThreadPool, isolating it from the other named thread pools.

3. Fallback: the fallback mechanism is really a fault-tolerance approach for service failures, similar to exception handling in Java. You simply extend HystrixCommand and override the getFallback() method, writing the handling logic there, such as throwing an exception directly (failing fast), returning null or a default value, or returning backup data. When an exception occurs during the service invocation, execution switches to getFallback(). A fallback is triggered in the following cases:

1) The program throws an exception. When a HystrixBadRequestException is thrown, the caller can catch it and no fallback is triggered; when any other exception is thrown, the fallback is triggered;

2) The program times out;

3) The circuit breaker opens;

4) The thread pool is full.

4. Rate limiting: rate limiting means limiting concurrent access to a service by setting the number of concurrent requests allowed per unit time; requests over the limit are rejected and fall back, preventing the back-end service from being overwhelmed.
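Semaphore-based concurrency limiting, which Hystrix also offers as an isolation strategy, can be sketched in plain Java as follows (names are illustrative): at most N calls run concurrently, and requests over the limit take the fallback immediately instead of queueing.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Limit concurrent calls to a dependency; excess requests fall back at once.
class ConcurrencyLimiter {
    private final Semaphore permits;

    ConcurrencyLimiter(int maxConcurrent) { permits = new Semaphore(maxConcurrent); }

    // Returns the call's result, or the fallback's when over the limit.
    <T> T call(Supplier<T> work, Supplier<T> fallback) {
        if (!permits.tryAcquire()) return fallback.get(); // reject, don't block
        try {
            return work.get();
        } finally {
            permits.release();
        }
    }
}
```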

Hystrix uses the command pattern (HystrixCommand) to wrap dependency-call logic so that each such call is automatically protected by Hystrix's resilient fault tolerance. The caller extends HystrixCommand, writes the call logic in run(), and uses execute() (synchronous, blocking) or queue() (asynchronous, non-blocking) to trigger the execution of run().
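The shape of this command pattern can be sketched in plain Java. This is a simplification: the real HystrixCommand adds timeouts, thread-pool isolation, and circuit breaking on top of this structure, and GetUserCommand with its return values is made up for illustration.

```java
// run() holds the wrapped dependency call, getFallback() the failure logic,
// and execute() switches to the fallback when run() throws.
abstract class Command<T> {
    protected abstract T run() throws Exception;
    protected abstract T getFallback();

    public T execute() {
        try {
            return run();
        } catch (Exception e) {
            return getFallback();      // fault tolerance kicks in here
        }
    }
}

// Hypothetical command: the boolean stands in for a real remote call.
class GetUserCommand extends Command<String> {
    private final boolean backendUp;

    GetUserCommand(boolean backendUp) { this.backendUp = backendUp; }

    protected String run() throws Exception {
        if (!backendUp) throw new RuntimeException("service unavailable");
        return "user-42";
    }

    protected String getFallback() { return "default-user"; }
}
```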

Dynamic configuration center

Microservices have many dependency configurations, and some configuration parameters need to be modified dynamically while the service is running, for example adjusting the circuit-breaker threshold according to access traffic. With the traditional approach of putting configuration in XML or YAML files packaged together with the application, every change requires committing code, packaging, building a new image, and restarting the service, which is far too inefficient and clearly unreasonable. We therefore need to build a dynamic configuration center service to support dynamic configuration of the microservices. We use Spring Cloud's Config Server to build the dynamic configuration center. The microservice code we develop is stored in a private repository on a Git server, and all configuration files that need dynamic configuration are stored on the Git server for the configServer service (the configuration center, itself a microservice). A microservice deployed in a Docker container dynamically reads its configuration from the Git server. When changes in a local Git repository are pushed to the Git server, the server-side hook (post-receive) automatically checks whether any configuration files were updated and, if so, sends a message through a message queue notifying the configServer (a microservice deployed in a container) to refresh the corresponding configuration files. In this way the microservices obtain the latest configuration information, achieving dynamic configuration.
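On the client side, a microservice that pulls its configuration from the config center typically needs only a few bootstrap properties; the service name and URLs below are hypothetical placeholders:

```yaml
# bootstrap.yml of a client microservice
spring:
  application:
    name: user-service                 # maps to user-service.yml in the config repo
  cloud:
    config:
      uri: http://config-server:8888   # address of the configServer service
```

On the server side, the configServer points at the Git repository holding the configuration files via the spring.cloud.config.server.git.uri property.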

These frameworks and components are the core supports for implementing a microservice architecture. In actual production we also use many other components as the business requires, such as logging and messaging components. Our microservice architecture implementation uses many open source components of the Spring Cloud Netflix framework, including Zuul (service gateway), Eureka (service registration and discovery), Hystrix (service fault tolerance), and Ribbon (client-side load balancing). These excellent open source components provide a shortcut to implementing a microservice architecture.

The sections above mainly introduced the basic principles of the microservice architecture. Some topics deserve more detail, such as the Eureka parameter configuration and the process of building the dynamic configuration center; the blogger will explain these in detail in other articles for your reference.