Author: A Fei, senior R&D engineer on Getui's application platform infrastructure team

In a microservices architecture, different microservices have different network addresses and call one another to complete user requests. A single user request from the client may involve calls to many microservice interfaces, so adding an API gateway between the client and the server is a natural choice for most microservice architectures.

The API gateway also plays a crucial role in Getui's microservices practice. On the one hand, it is the only external entry point of the Getui microservice system; on the other hand, it implements many requirements common to the back-end services, avoiding duplicated implementation in each service.

Design and implementation of the microservice gateway

Getui's microservice practice is mainly based on Docker and Kubernetes. At the bottom of the microservices architecture is a privately deployed Kubernetes cluster, on top of which the application services are deployed.

The application service system is divided into three layers: the gateway layer at the top, the business layer below it, and the basic service layer at the bottom. When deploying application services, we use Kubernetes namespaces to isolate the products of different product lines. In addition to the application services, Consul is deployed on the Kubernetes cluster for configuration management, Kube-DNS for service registration and discovery, and several auxiliary systems for application and cluster management.

The following figure is an architecture diagram of the Getui microservice system.



The functional requirements for the API gateway are as follows:

1. Support for configuring multiple products, exposing a different port for each product;

2. Dynamic routing;

3. URI rewriting;

4. Service registration and discovery;

5. Load balancing;

6. Security-related requirements, such as session verification;

7. Flow control;

8. Link tracing;

9. A/B testing.

After investigating the existing gateway products on the market, our technical team found that none of them fit Getui's microservice system well. First, Getui's configuration management is implemented on Consul, while most gateway products require a database for configuration storage. Second, most gateway products provide fairly general and comprehensive functionality, which comes at the cost of simplicity and flexibility in configuration. Third, most gateway products are difficult to integrate directly into Getui's microservice architecture.

In the end, we chose to develop our own gateway with OpenResty and Lua. During development, we also borrowed some designs from other gateway products, such as the plug-in mechanisms of Kong and Orange.

The plug-in design of the API gateway is shown below.



OpenResty processes a request in several phases. The plugins of the API gateway mainly work in six of them: Set, Rewrite, Access, Header_filter, Body_filter and Log; each plugin can take effect in one or more of these phases. After a request reaches the API gateway, the gateway first selects the plugins enabled for the request according to the product configuration, then keeps only the plugins whose rules match the request, and finally instantiates the remaining plugins to process the traffic.
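As a rough illustration, a plugin under this design might look like the following sketch; the module layout, the match/phase-handler contract, and all names are assumptions for illustration, not the actual plugin interface of the gateway.

```lua
-- A minimal sketch of a gateway plugin, assuming each plugin exposes a rule
-- matcher plus handlers for the OpenResty phases it cares about.
local _M = { name = "example_plugin", priority = 100 }

-- Decide whether this plugin applies to the current request.
function _M.match(rule, ctx)
    -- e.g. act only on URIs matching the pattern configured for this product
    return ngx.re.find(ngx.var.uri, rule.uri_pattern, "jo") ~= nil
end

-- Phase handlers; a plugin implements only the phases it needs.
function _M.rewrite(conf, ctx)
    -- rewrite-phase logic, e.g. adjust ngx.var.uri
end

function _M.access(conf, ctx)
    -- access-phase logic, e.g. authentication or flow control
end

function _M.header_filter(conf, ctx)
    -- header_filter-phase logic, e.g. add response headers
end

function _M.log(conf, ctx)
    -- log-phase logic, e.g. emit metrics or access logs
end

return _M
```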



We can use an example to understand this process. As shown in the figure above, when the request localhost:8080/api/demo/test/hello reaches the gateway, the gateway first determines the product from the host and port information and extracts the URI (/api/demo/test/hello). It then selects the Rewrite_URI, Dyups and Auth plugins according to the product's configuration, and, based on each plugin's rule configuration, keeps only Rewrite_URI and Dyups. These two plugins are then instantiated to process the request in their respective phases. When the request is forwarded to the back-end service, the URI has been rewritten to /demo/test/hello and the upstream set to prod1-svc1. After the back-end service processes the request, the response is returned to the client through the gateway. This is how the plugins are designed and how they work. To optimize performance, we defer plugin instantiation until the request is actually processed; before that point, the gateway filters out plugins that do not need to run based on the product configuration and rules. As the figure shows, the rule configuration of each plugin is simple and has no enforced uniform format, which keeps plugin configuration simple and flexible.
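The behavior of Rewrite_URI and Dyups in this example can be sketched roughly as follows; the rule fields, the $backend variable, and the plugin internals are assumptions for illustration, not the actual gateway implementation.

```lua
-- A minimal sketch of URI rewriting plus upstream selection for the request
-- localhost:8080/api/demo/test/hello described above.
local function rewrite_uri(rule)
    -- e.g. /api/demo/test/hello -> /demo/test/hello
    local new_uri, n = ngx.re.sub(ngx.var.uri, rule.regex, rule.replace, "jo")
    if new_uri and n > 0 then
        ngx.req.set_uri(new_uri)
    end
end

local function set_upstream(rule)
    -- Record the target upstream (e.g. "prod1-svc1") in an nginx variable
    -- that proxy_pass is assumed to reference (declared via `set $backend "";`).
    ngx.var.backend = rule.upstream
end

-- Example rule as it might be configured for product "prod1":
--   { regex = "^/api(/.*)", replace = "$1", upstream = "prod1-svc1" }
```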

Gateway configuration supports hot updates, implemented with Consul and consul-template. After the configuration is updated on Consul, consul-template pulls it from Consul in real time and applies it in either of the following ways:

(1) Call the gateway's update API to write the configuration into a shared dict (a minimal sketch follows the list);

(2) Update the configuration file and reload OpenResty to make it take effect.
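A minimal sketch of the first approach might look like the following, assuming consul-template posts the rendered configuration to an internal endpoint on the gateway and that a shared dict named gateway_config has been declared with lua_shared_dict; all names here are illustrative, not the actual update API.

```lua
-- nginx.conf (excerpt, assumed):
--   lua_shared_dict gateway_config 10m;
--   location = /internal/config/update {
--       allow 127.0.0.1; deny all;
--       content_by_lua_block { require("config_update").update() }
--   }
local cjson = require "cjson.safe"
local _M = {}

function _M.update()
    ngx.req.read_body()
    local body = ngx.req.get_body_data()
    local conf = body and cjson.decode(body)
    if not conf then
        ngx.status = 400
        return ngx.say("invalid config payload")
    end
    -- Write each configuration entry into shared memory so that all worker
    -- processes see the new configuration without a reload.
    local dict = ngx.shared.gateway_config
    for key, value in pairs(conf) do
        dict:set(key, cjson.encode(value))
    end
    ngx.say("ok")
end

return _M
```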

The main functions provided by the microservice gateway

1. Dynamic routing

Dynamic routing mainly involves three aspects: service registration, service discovery and request forwarding.

As shown in the figure below, service registration and discovery are implemented based on Kubernetes Services and Kube-DNS. A mapping table of services is maintained in Consul, and each deployed microservice corresponds to a Service in Kubernetes. Whenever a Service is created, an entry is added to the service mapping table in Consul (and synced in real time to the gateway's shared memory). For each request it receives, the gateway looks up the service mapping table to find the specific back-end service (that is, the Service name in Kubernetes) and routes the request dynamically. Kube-DNS resolves the Service's domain name to a ClusterIP inside Kubernetes; the Service proxies multiple Pods and distributes traffic evenly across them.
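A rough sketch of this lookup is shown below, assuming the Consul service mapping has been synced into the gateway_config shared dict and is keyed by product and URI prefix; the key layout and the namespace-per-product DNS naming are assumptions, not the actual mapping format.

```lua
local cjson = require "cjson.safe"

-- Resolve the in-cluster DNS name of the back-end Service for a request.
local function resolve_service(product, uri)
    local raw = ngx.shared.gateway_config:get("service_map:" .. product)
    local map = raw and cjson.decode(raw) or {}
    for prefix, svc in pairs(map) do
        if uri:sub(1, #prefix) == prefix then
            -- Kube-DNS resolves this name to the Service ClusterIP,
            -- e.g. "svc1.prod1.svc.cluster.local".
            return svc .. "." .. product .. ".svc.cluster.local"
        end
    end
    return nil  -- no matching back-end service
end
```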



2. Flow control

Flow control is implemented through a back-end service named Counter and a flow-control plugin in the gateway. Counter stores the access counts and limits of requests and supports counting by time dimension. The flow-control plugin intercepts traffic and calls Counter's interface to check for overflow; if Counter reports that the request has exceeded its limit, the gateway rejects it directly, implementing a count limit, and combined with the time dimension this also provides rate limiting. In addition, the flow-control plugin outputs log information to Fluent Bit; Fluent Bit aggregates the counts and updates them in Counter.
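The access-phase check of the flow-control plugin might look roughly like the following sketch, assuming lua-resty-http is available and that Counter exposes an HTTP overflow-query endpoint; the endpoint path, address, and response format are assumptions for illustration.

```lua
local http = require "resty.http"
local cjson = require "cjson.safe"

-- Ask Counter whether the given key has exceeded its quota.
local function within_quota(counter_addr, key)
    local httpc = http.new()
    httpc:set_timeout(200)  -- fail fast so flow control never blocks traffic
    local res, err = httpc:request_uri(
        counter_addr .. "/v1/check?key=" .. key, { method = "GET" })
    if not res then
        ngx.log(ngx.WARN, "counter unreachable: ", err)
        return true  -- fail open when Counter cannot be reached
    end
    local body = cjson.decode(res.body) or {}
    return body.overflow ~= true
end

-- access phase: reject requests that exceed the configured limit
if not within_quota("http://counter.internal:8080", ngx.var.remote_addr) then
    return ngx.exit(429)  -- Too Many Requests
end
```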



3. Link tracing

Link tracing for the whole microservice system is based on Zipkin, a distributed tracing system. The gateway installs a Zipkin plugin and the back-end services introduce Zipkin middleware, which together provide end-to-end tracing; a sketch of the gateway-side header propagation follows, and the specific architecture is shown in the figure below.
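On the gateway side, propagating trace context to the back-end services might look roughly like this sketch using Zipkin's B3 headers; the ID generation and header handling here are simplified illustrations, not the actual plugin.

```lua
-- Reuse incoming B3 headers or start a new trace, then propagate the IDs
-- to the back-end service so its Zipkin middleware can join the same trace.
local function random_id()
    -- simplified 64-bit hex ID; a real tracer would use a proper generator
    return string.format("%08x%08x",
        math.random(0, 0xffffffff), math.random(0, 0xffffffff))
end

local headers = ngx.req.get_headers()
local trace_id = headers["x-b3-traceid"] or random_id()
local span_id = random_id()

ngx.req.set_header("X-B3-TraceId", trace_id)
ngx.req.set_header("X-B3-SpanId", span_id)
if headers["x-b3-spanid"] then
    ngx.req.set_header("X-B3-ParentSpanId", headers["x-b3-spanid"])
end
-- The span itself (timing, annotations) would be reported to the Zipkin
-- collector asynchronously, e.g. from the log phase.
```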



4. A/B testing

In the implementation of A/B testing, there are several key points:

(1) All policy information is configured on Consul and takes effect in real time in the memory of each microservice via consul-template;

(2) Each policy specifies, for each microservice it invokes, whether version A or version B should be called (the default is A);

(3) An A/B plugin is implemented in the gateway; when a request arrives, the gateway determines which A/B policy applies to it according to the rules configured for the plugin (see the sketch after this list);

(4) The gateway passes the applicable A/B policy to the back-end via a URL parameter;

(5) Each microservice passes the policy further down the call chain and uses it to select the correct version of the service to access.
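A rough sketch of the A/B plugin described in points (3) and (4) follows, assuming policies are synced into the gateway_config shared dict and matched against a user ID header; the key names, matching rule, header, and parameter name are all illustrative rather than the actual implementation.

```lua
local cjson = require "cjson.safe"

-- Pick the A/B policy that applies to the current request, if any.
local function pick_policy(product)
    local raw = ngx.shared.gateway_config:get("ab_policies:" .. product)
    local policies = raw and cjson.decode(raw) or {}
    local uid = ngx.req.get_headers()["x-user-id"]
    for _, policy in ipairs(policies) do
        -- e.g. route users whose ID ends with policy.suffix to version B
        if uid and policy.suffix and uid:sub(-#policy.suffix) == policy.suffix then
            return policy.name
        end
    end
    return nil  -- no policy matched: back-end services default to version A
end

-- rewrite phase: pass the chosen policy down via a URL parameter
local policy = pick_policy(ngx.var.ab_product or "prod1")
if policy then
    local args = ngx.req.get_uri_args()
    args.ab_policy = policy
    ngx.req.set_uri_args(args)
end
```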

The following figure shows the invocation links in the two scenarios.



Conclusion

The above describes the design of the Getui microservice gateway and the implementation of its main functions. Going forward, the Getui technical team will continue to improve the resilience of the API gateway so that it can limit the impact of failures when they occur. At the same time, we will further integrate the gateway with our DevOps platform, ensuring more automated testing for quality and faster deployment as the gateway continues to iterate.