Abstract: This article describes the role API gateways play in hosting today's rapidly evolving API ecosystem.
Preface
The API economy has spread worldwide, and most enterprises have set out on the road of digital transformation. APIs have become the core carrier through which enterprises connect their business, and they generate enormous profit potential. With the rapid growth in the scale of APIs and the volume of calls, enterprise IT faces new challenges in architecture and operating model. The rest of this article describes the role API gateways play in hosting this rapidly evolving API ecosystem.
What is an API
Application programming interfaces (APIs) are conventions for connecting the different parts of a software system. A simple example: every time you log in to WeChat, you provide your account information to gain access. The authentication interface WeChat provides is an API. APIs are now everywhere: in finance, IT, the Internet of Things, and beyond. They permeate our daily lives and continue to evolve rapidly.
Over the years, several common trends have emerged from API iteration:
1. The number of open APIs continues to increase
As enterprise digitalization and microservice transformation progress, APIs in every domain are emerging one after another. Back in 2014, ProgrammableWeb predicted that the number of public APIs could reach 100,000 to 200,000 and would continue to grow. The growing number of APIs created opportunities for edge systems, which led to the emergence of API gateways. Large-scale API management systems have become a core development trend.
Source: The API Economy Disruption and The Business of APIs, Nordic APIs
2. Diversified API service platforms
Originally, APIs focused on information exchange between the network units of separate applications; they have since evolved to enable rapid communication between services. With the continuous evolution of artificial intelligence (AI) and the Internet of Things (IoT), the platforms that rely on APIs keep multiplying, such as web, mobile, and edge terminals, and more service systems will emerge in the future.
3. APIs as commodities gradually replace enterprises' original service models
Selling computing power, software, and capabilities will gradually change how enterprises sell. Capabilities will be monetized, the value of data will be unlocked, and new profits will be created on top of different API management platforms.
Why the API gateway was born
As the overall API trend evolves, each era faces different challenges, and architectures change accordingly. Here are some examples:
Image credit: API Economy From Systems to Business Services
From the original "transport protocol communication" -> "simple interface integration" -> "message middleware" -> "standard REST", API development has trended toward simplicity, integration, and standardization, which in turn has driven the emergence of more components at system boundaries. Against the backdrop of an API economy worth trillions, the API gateway came into being.
A Gartner report states that without proper API management tools, the API economy will not work. It also defines the life cycle of an API management system: planning, design, implementation, publication, operation, consumption, maintenance, and retirement of APIs (Magic Quadrant for Full Life Cycle API Management, Gartner, published 2016-10-27).
The API gateway runs through this entire process and provides rich management features:
- High performance and horizontal scaling
- High reliability and service continuity
- Pluggable API security control
- Flexible data orchestration
- Fine-grained flow control
- API version management
- API data analysis
- Efficient pluggable routing algorithms
- Security authentication and anti-attack protection
- API access control
- Swagger import/export
Core practices in API gateway design
To provide a reference architecture for a high-performance API gateway, the design is divided into two planes: the data plane used by API consumers and the management plane used by API providers. This separation effectively isolates service requests from management requests.
Let’s talk about the data plane
The core design principle of the API gateway is to keep service on the data plane uninterrupted. Because the services connected to the gateway are diverse, and client API calls and applications are beyond the gateway's control, it is difficult to require fault tolerance of every service and client, especially legacy services. The gateway therefore has to process every request correctly and meet a high service-level agreement (SLA). Industry API gateways fall into several camps: cloud services used directly, the Nginx family, the Golang family, the Java family, and so on. If you want to build your own, we recommend the Nginx family, mainly for the following reasons:
1. Hot restart is supported
Upgrading components on the data plane is a high-risk operation that can break connections and cause system failures unless your front-end load balancer (LB) can drain traffic quickly, and even then in-flight requests may be forcibly interrupted. Hot restart on the data plane is therefore critical.
2. Dynamic route subscription is supported
API routes change frequently and demand high timeliness. If you adopt periodic full synchronization, syncing tens of thousands of entries at once will slow the system down. Adding a subscription-based routing service center is therefore critical, and fetching only incremental data keeps the performance overhead low.
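The idea can be sketched as follows. This is a minimal illustration with hypothetical names (`RouteTable`, `apply`); in practice the subscription stream is typically a watch on a configuration store such as etcd, but here a plain list of events stands in for it.

```python
# Minimal sketch of incremental route subscription (hypothetical names).
# Only deltas cross the wire, instead of re-syncing the full route set.

class RouteTable:
    """In-memory route table updated by incremental events, not full syncs."""

    def __init__(self):
        self.routes = {}  # route_id -> upstream address

    def apply(self, event):
        """Apply one incremental event: ('put'|'delete', route_id, upstream)."""
        action, route_id, upstream = event
        if action == "put":
            self.routes[route_id] = upstream
        elif action == "delete":
            self.routes.pop(route_id, None)

table = RouteTable()
events = [
    ("put", "GET /orders", "10.0.0.1:8080"),
    ("put", "GET /users", "10.0.0.2:8080"),
    ("delete", "GET /users", None),
]
for ev in events:   # each event is a small delta from the routing center
    table.apply(ev)

print(table.routes)  # {'GET /orders': '10.0.0.1:8080'}
```

Because each event is small, applying tens of thousands of accumulated routes never happens on the hot path; the table converges one delta at a time.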
3. Support plug-in management
Nginx offers a rich plug-in ecosystem. Different APIs and different users require different processing flows; if every request went through the same flow, redundant operations would be inevitable. Plug-in management improves performance to some extent and also makes it possible to add processing stages quickly during upgrades.
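A per-API plug-in chain can be sketched like this. The plug-in names (`auth`, `rate_limit`) and the chain table are illustrative assumptions, not part of any real gateway's API; the point is that each API runs only the stages it needs.

```python
# Sketch of per-API plug-in chains (hypothetical names): each API runs
# only the plug-ins it needs, instead of one fixed flow for all requests.

def auth(request):
    """Reject requests without a token."""
    if not request.get("token"):
        raise PermissionError("missing token")
    return request

def rate_limit(request):
    """Mark the request as having passed flow control (stub)."""
    request["rate_checked"] = True
    return request

# Per-API chains: the public API skips authentication entirely.
chains = {
    "/private": [auth, rate_limit],
    "/public": [rate_limit],
}

def handle(path, request):
    for plugin in chains.get(path, []):
        request = plugin(request)
    return request

print(handle("/public", {}))  # {'rate_checked': True}
```

Adding a new processing stage during an upgrade then amounts to appending a function to the relevant chains, without touching the request loop.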
4. High-performance forwarding capability
API gateways generally act as a reverse proxy in front of many back-end APIs, and many self-developed gateways hit performance bottlenecks here, so Nginx's excellent performance and efficient traffic throughput are its core strength.
5. Stateless mode allows horizontal scaling
The API gateway is the convergence point for all requests in the system and must scale flexibly with business volume. Using a service center together with Nginx configuration management, nodes can be added to or removed from the existing cluster quickly and synchronized to LVS, achieving rapid horizontal scaling.
Now, the management plane
Compared with the data plane, the constraints on the management plane are less strict; it should focus more on data storage and presentation. Defining API specifications from the very beginning is essential. Swagger, currently the most mainstream API description format, has a very complete ecosystem, and the entire AWS API Gateway model was built with reference to Swagger.
Core architecture practices
This article covers the implementation of the API gateway in terms of flow control and route traversal; other core designs will follow in future articles.
Fine-grained second-level flow control
Flow control at minute granularity or coarser is relatively easy to handle, but moving to second-level granularity is a real challenge to the system's performance and processing capacity. Many flow control schemes exist; synchronous and asynchronous schemes each have advantages, but they all run into the same problems: performance and accuracy.
The following is one of the most common schemes (cluster flow control), which uses Redis shared storage to record all flow-controlled requests and access the counts in real time. This architecture has an obvious problem: when the number of cluster nodes and requests is large, the Redis cluster becomes a major bottleneck.
We redesigned the API flow control architecture to mix flow control schemes and adjust automatically to business needs, splitting it into local flow control and cluster flow control. For flow-sensitive APIs, where flow control must be more accurate, calculations more timely, and the time window shorter (second level), local flow control is used. For APIs with long time windows and low access frequency, cluster flow control is used to reduce the load on shared storage.
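The mode-selection policy described above can be sketched in a few lines. The 60-second cut-off is an assumed threshold for illustration, not a value from the original design.

```python
# Sketch of the mixed flow-control policy: short counting windows go to
# local (single-machine) flow control, long windows to cluster flow
# control backed by shared storage. The threshold is an assumption.

LOCAL_WINDOW_MAX_SECONDS = 60  # assumed cut-off, not from the original text

def choose_mode(window_seconds):
    """Pick a flow-control mode based on the counting window length."""
    return "local" if window_seconds <= LOCAL_WINDOW_MAX_SECONDS else "cluster"

print(choose_mode(1))      # second-level, flow-sensitive -> local
print(choose_mode(86400))  # e.g. 12 calls/day style limits -> cluster
```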
Note: The figure above shows the specific flow control architecture; for its integration with the API gateway, refer to the API gateway architecture panorama at the beginning of this section.
Local flow control
Local flow control is single-machine flow control, suitable for flow-sensitive services. Each API's hash value is computed against the api-core cluster nodes, ensuring that each API is pinned to one node in the cluster. Suppose there are three api-core nodes A, B, and C, and an API's consistent hash maps to node A. When a request arrives at node A, it is forwarded directly from that node and a flow control value is recorded. When a request arrives at node B or C, it is first forwarded to node A, which updates the flow control value and then forwards the request. All flow control records for the same API thus land on a single api-core node, so api-core's single-machine flow control capabilities can be used. The single-machine flow control algorithm is pluggable: counting, leaky bucket, and so on.
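The pinning step can be sketched as follows. The node names and the simple hash-modulo mapping are illustrative; a production gateway would use a proper consistent-hash ring so that adding or removing nodes remaps as few APIs as possible.

```python
# Sketch of pinning each API to one api-core node via hashing, so all
# counting for an API happens on its home node and a plain
# single-machine counter suffices. Names are hypothetical.

import hashlib

NODES = ["A", "B", "C"]  # api-core cluster nodes

def home_node(api_id):
    """Map an API to one node; a real gateway would use a consistent-hash ring."""
    digest = hashlib.md5(api_id.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

counters = {}  # (node, api_id) -> request count, kept on the home node

def count_request(api_id):
    node = home_node(api_id)   # nodes B/C forward to the home node
    key = (node, api_id)
    counters[key] = counters.get(key, 0) + 1
    return node, counters[key]

node1, n1 = count_request("GET /orders")
node2, n2 = count_request("GET /orders")
assert node1 == node2 and n2 == 2  # same API is always counted on one node
```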
Of course, local flow control also brings some problems. Because each API is pinned to one node, a heavily accessed API can make the load across nodes uneven. Also, if the flow control window is very long, such as 12 calls per day, the counting cycle is too long for local flow control to be suitable.
Cluster flow control
Cluster flow control applies to services with long counting periods and low requirements on flow control accuracy; different flow control is chosen for different services. The process is basically the same as above, except that local flow control data is cached for a period of time and then reported to the flow control center in batches.
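The batched reporting can be sketched like this. A plain dict stands in for the shared flow-control center (in practice something like Redis), and the flush threshold is an assumed value for illustration.

```python
# Sketch of cluster flow control with batched reporting (hypothetical
# names): counts accumulate locally and are flushed to a shared store
# periodically, cutting the write rate on the flow-control center.

shared_store = {}    # stand-in for the cluster flow-control center
local_cache = {}     # per-node buffer of unreported counts
FLUSH_THRESHOLD = 3  # assumed batch size, not from the original text

def record(api_id):
    local_cache[api_id] = local_cache.get(api_id, 0) + 1
    if local_cache[api_id] >= FLUSH_THRESHOLD:
        flush(api_id)

def flush(api_id):
    """Report the buffered count to the shared store in one write."""
    shared_store[api_id] = shared_store.get(api_id, 0) + local_cache.pop(api_id)

for _ in range(7):
    record("GET /report")

print(shared_store["GET /report"], local_cache["GET /report"])  # 6 1
```

Seven requests produce only two writes to the shared store instead of seven, at the cost of the residual count sitting in the local buffer until the next flush.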
Tree-based route traversal algorithm
The main flow of the API gateway data plane includes route matching. All routing data is cached in etcd, so to improve data plane performance, the storage structure matters greatly. We split storage into two parts: a domain name tree and a URI tree.
From the first tree, we can traverse the following domains: www.apig.com, test.com, *.apig.com, *.com. Domain names are stored and traversed starting from the last label (after the final "."). For example:
To match www.test.com: "com" is matched first; if that succeeds, "test" is traversed; if that succeeds, "www" is traversed. To match test.apig.com: "com" is matched first, then "apig" is traversed successfully, then "test" is looked up; there is no "test" node, so the wildcard branch is traversed and the target *.apig.com is matched. URIs are matched by prefix, which is the opposite direction from domain name matching, but the traversal method is the same.
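The reverse-label traversal with a wildcard fallback can be sketched as a nested-dict trie. The structure and the `"$end"` / `"*"` markers are illustrative assumptions, not the gateway's actual storage format.

```python
# Sketch of reverse-label domain matching with wildcards (hypothetical
# structure): labels are inserted and traversed from the last label
# ("com") backwards, with "*" kept as a fallback branch.

def insert(root, domain):
    node = root
    for label in reversed(domain.split(".")):
        node = node.setdefault(label, {})
    node["$end"] = domain  # marks a complete stored domain

def match(root, host):
    node, wildcard_hit = root, None
    for label in reversed(host.split(".")):
        if "*" in node and "$end" in node["*"]:
            wildcard_hit = node["*"]["$end"]  # best wildcard seen so far
        if label in node:
            node = node[label]   # exact label: keep descending
        else:
            return wildcard_hit  # dead end: fall back to the wildcard
    return node.get("$end", wildcard_hit)

root = {}
for d in ["www.apig.com", "test.com", "*.apig.com", "*.com"]:
    insert(root, d)

print(match(root, "www.apig.com"))   # www.apig.com (exact path)
print(match(root, "test.apig.com"))  # *.apig.com (wildcard fallback)
```

Traversing from the last label means all `.com` domains share one subtree, so a lookup touches at most one node per label regardless of how many domains are stored.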
Conclusion
There are many mainstream open source API gateway architectures in the industry, but open source software shares a common trait: capabilities such as flow control, security, and operations analysis are lacking out of the box, and significant R&D investment is needed to meet production requirements. Finding a complete API management solution is very important for enterprises seeking to monetize their capabilities.
Huawei Cloud's API Gateway service provides a complete API lifecycle management solution that supports multiple application scenarios and offers convenient management services. Launching, publishing, managing, and selling APIs is no longer complicated, and enterprise capabilities can be monetized quickly. Welcome to try it: Huawei Cloud API Gateway.
Follow Huawei Cloud DevCloud and search for the official account HWDevCloud for more in-depth content!
Click to follow and be the first to learn about Huawei Cloud's latest technologies!