I recently read an article about API gateways that described three roles: API management, cluster ingress control, and the API gateway pattern, and then looked at how they relate to the service mesh. This article aims to give a more comprehensive understanding of the role of API gateways.


Over the years, API gateways have faced a fair amount of confusion and skepticism about what they really are:

  • Are they centralized, shared resources that facilitate the management of APIs for external callers?
  • Are they cluster ingress controllers that strictly manage what enters (and leaves) the cluster?
  • Or are they some kind of API aggregation layer that makes APIs easier to consume from a given client?
  • And, of course, the elephant in the room and the most frequently asked question: “Will service meshes make API gateways obsolete?”


Some background

You could be forgiven for saying “all of this is making my head spin,” given how rapidly the industry churns through technologies and architectural patterns.

In this article, I hope to tease apart the different identities of the “API gateway”, clarify which groups in an organization use API gateways in their day-to-day work (it may well be a single person wrestling with all of these problems), and get back to basic principles.

Ideally, by the end of this article, you will have a better understanding of what API infrastructure does at different levels, for different audiences, and how to get the most value out of each level.

Before going any further, let’s clarify what the term API means.

My definition of an API: a well-defined interface with a clear purpose that enables software developers to access target data and functionality easily and safely through network calls.

These interfaces abstract away the technical and architectural details that implement them. For these deliberately designed network endpoints, we expect a degree of usage guidance and mature backward compatibility.

Conversely, just because we can interact with another piece of software over a network does not necessarily mean that the remote endpoint is an API by this definition.

Many systems interact with each other, but those interactions are often ad hoc and degrade over time because of tight coupling between systems and other factors.

We create APIs to provide deliberate abstractions over various parts of the business, enabling new business functionality and, with luck, the occasional innovation.

When talking about API gateways, the first thing to mention is API management.

API management

Many people think of API gateways in terms of API management. That makes sense. But first, let’s take a quick look at the capabilities of such gateways.

With API management, we try to solve the problem of how to control the use of our existing APIs by others.

For example: how do we track who is using these APIs, control who may use them, establish a comprehensive set of measures for authentication and authorization, and build a catalog of services that can be used at design time to improve understanding of the APIs and lay the foundation for effective governance later on?

We want to solve the problem of: “We have some great APIs and we want others to use them, but we want them to be used on our terms.”

API management certainly serves some good purposes, such as allowing users (potential API consumers) to self-service and sign up for different API usage plans (think: number of calls per user per endpoint, within a given time window, at a given price point).

The infrastructure capable of performing these administrative functions is the gateway (through which API traffic passes). At the gateway layer, we can perform authentication, rate limiting, metric collection, and other policy enforcement.
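
As a concrete illustration, below is a minimal sketch (in Python, with hypothetical consumer keys, limits, and paths) of the kind of per-request policy enforcement an API management gateway performs: authentication, rate limiting, and metric collection in front of the upstream API.

```python
# Minimal sketch of gateway-layer policy enforcement. The consumer registry,
# rate limit, and upstream behavior are all hypothetical stand-ins.
import time
from collections import defaultdict, deque

API_KEYS = {"key-123": "team-billing"}   # hypothetical consumer registry
RATE_LIMIT = 100                         # allowed requests per consumer per minute
_request_log = defaultdict(deque)        # consumer -> timestamps of recent requests
_metrics = defaultdict(int)              # consumer -> total call count (feeds analytics/billing)


def handle_request(api_key: str, path: str) -> tuple[int, str]:
    """Apply gateway policies before proxying a request to the upstream API."""
    consumer = API_KEYS.get(api_key)
    if consumer is None:
        return 401, "unknown API key"                  # authentication

    now = time.time()
    window = _request_log[consumer]
    while window and now - window[0] > 60:             # drop requests older than the 1-minute window
        window.popleft()
    if len(window) >= RATE_LIMIT:
        return 429, "rate limit exceeded"              # usage-plan enforcement
    window.append(now)

    _metrics[consumer] += 1                            # metric collection
    # A real gateway would now proxy the request to the upstream API.
    return 200, f"proxied {path} for {consumer}"


print(handle_request("key-123", "/v1/invoices"))       # (200, 'proxied /v1/invoices for team-billing')
print(handle_request("bad-key", "/v1/invoices"))       # (401, 'unknown API key')
```

Real products implement these policies as pluggable filters in the proxy layer rather than application code, but the responsibilities are the same.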

API Management Gateway

Examples of API management software built around an API gateway:

  • Google Cloud Apigee
  • Red Hat 3Scale
  • Mulesoft
  • Kong

At this level, we are thinking about how best to manage our APIs (as defined above) and allow access to them.

We are not thinking in terms of other perspectives such as servers, hosts, ports, containers, or even services (another word that is hard to define).

API management (and its corresponding gateway) is usually tightly controlled and operated as shared “platform” infrastructure, alongside the other foundational components that APIs depend on.

One thing to note: We need to be careful not to let any business logic into this layer.

As mentioned above, API management is shared infrastructure, and because our API traffic passes through it, it tends to recreate the “all-inclusive” gateway (think Enterprise Service Bus) that we all have to coordinate with in order to change our services.

In theory, that sounds good. In practice, it can end up becoming an organizational bottleneck.

For more information, see this article: Application Network Functions with ESBs, API Management, and Now… Service Mesh?

https://blog.christianposta.com/microservices/application-network-functions-with-esbs-api-management-and-now-service-mesh/

Cluster ingress

To build and implement APIs, we focus on code, data, productivity frameworks, and so on.

But for any of these things to be of value, they must be tested, deployed into production, and monitored.

When we start deploying to the cloud, we start thinking about deployments, containers, services, hosts, ports, etc., and building applications that can run in this environment.

We may build workflows (CI) and pipelines (CD) to take advantage of the cloud platform’s ability to move changes quickly, get them in front of customers, and so on.

In this environment, we might build and maintain multiple clusters to host our applications and need some way to directly access the applications and services in these clusters.

Consider Kubernetes, for example. We might access a Kubernetes cluster through a Kubernetes ingress controller (everything else in the cluster is inaccessible from the outside by default).

This allows us to tightly control what can enter (and even leave) our cluster with well-defined rules (such as domain/virtual hosts, ports, protocols, and so on).
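
For example, a minimal sketch of a Kubernetes Ingress resource is shown below, expressed as a Python dict for consistency with the other examples (kubectl also accepts JSON manifests); the hostname, service name, and port are hypothetical. The point is that only the routes declared here are reachable from outside the cluster.

```python
# Sketch of a Kubernetes Ingress manifest built as a Python dict and emitted
# as JSON. Hostname, service name, and port are hypothetical examples.
import json

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "public-api"},
    "spec": {
        "rules": [
            {
                "host": "api.example.com",              # only this virtual host is exposed
                "http": {
                    "paths": [
                        {
                            "path": "/orders",
                            "pathType": "Prefix",
                            "backend": {
                                "service": {
                                    "name": "orders-service",   # in-cluster service that receives the traffic
                                    "port": {"number": 8080},
                                }
                            },
                        }
                    ]
                },
            }
        ]
    },
}

print(json.dumps(ingress, indent=2))   # pipe into `kubectl apply -f -` to create the resource
```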

At this level, we might want some kind of “gateway” to be the traffic monitor that allows requests and messages to enter the cluster.

At this level, thinking is more like “I have this service in my cluster and I need someone outside the cluster to be able to call it.”

These could be HTTP services (public APIs), existing monoliths, gRPC services, caches, message queues, databases, and so on.

Some people choose to call this an API gateway, and it may in fact do more than just control traffic in and out, but the point is that the problems at this level belong to cluster operations.

Cluster Ingress Gateway

Examples of these kinds of ingress implementations include the following:

Envoy Proxy and projects based on it include:

  • Datawire Ambassador
  • Solo.io Gloo
  • Heptio Contour

Other components built on other reverse proxies/load balancers:

  • HAProxy
  • OpenShift's Router
  • Nginx
  • Traefik
  • Kong

The cluster ingress controller at this level is operated as part of the platform, but this piece of infrastructure is often associated with a more decentralized, self-service workflow (as you would expect from a cloud-native platform).

See the “GitOps” workflow as described by the good folks at Weaveworks:

https://www.weave.works/blog/gitops-operations-by-pull-request

The API gateway pattern

The next extension of the term “API gateway”, and the one I usually think of when I hear the term, is the API gateway pattern.

Chris Richardson gives a good treatment of this usage in chapter 8 of his book “Microservices Patterns.” In short, the API gateway pattern is about optimizing the use of APIs for different classes of consumers.

This optimization involves a level of indirection in front of the APIs. Another term you may hear for the API gateway pattern is “backend for frontend” (BFF), where the “frontend” could be a literal frontend (UI), a mobile client, an IoT client, or even other service/application developers.

In the API gateway pattern, we deliberately simplify the calling of a group of APIs to emulate a cohesive “application” API for a specific set of users, clients, or consumers.

Recall that when we build systems out of microservices, the notion of an “application” disappears. The API gateway pattern helps restore that notion.

The key here is that the API gateway, once implemented, becomes the API for clients and applications, and it takes responsibility for communicating with any backend APIs and other application network endpoints, including those that do not meet the API definition given above.
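
As a rough sketch of what this looks like in practice, the snippet below (Python, with hypothetical stand-in functions in place of real backend calls) shows a “backend for frontend” style endpoint that fans out to two backend services and composes a single response shaped for a mobile client.

```python
# Sketch of the API gateway / backend-for-frontend pattern: one client-facing
# call aggregates several backend calls. The backends are hypothetical
# in-process stand-ins; in practice they would be HTTP, gRPC, or legacy RPC calls.
from concurrent.futures import ThreadPoolExecutor


def fetch_profile(user_id: str) -> dict:
    return {"id": user_id, "name": "Ada", "tier": "gold"}    # stand-in for a user service


def fetch_recent_orders(user_id: str) -> list[dict]:
    return [{"order_id": "o-42", "status": "shipped"}]       # stand-in for an order service


def mobile_home_screen(user_id: str) -> dict:
    """Aggregate backend calls into one cohesive API shaped for the mobile app."""
    with ThreadPoolExecutor() as pool:                       # fan out to backends concurrently
        profile_future = pool.submit(fetch_profile, user_id)
        orders_future = pool.submit(fetch_recent_orders, user_id)

    profile, orders = profile_future.result(), orders_future.result()
    return {                                                 # response tailored to this client
        "greeting": f"Hello, {profile['name']}",
        "order_count": len(orders),
        "latest_order_status": orders[0]["status"] if orders else None,
    }


print(mobile_home_screen("u-1"))
```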

Unlike the ingress controller in the previous section, this kind of API gateway is much closer to the developers' perspective and is less concerned with which ports or services are exposed outside the cluster.

This “API gateway” is also different from the API management view of managing existing APIs. This API gateway aggregates calls to the backends.

The gateway might expose proper APIs, but it might also deal with things that are far less API-like: RPC calls to legacy systems, calls using protocols that are not quite “REST” (such as HTTP without JSON), gRPC, SOAP, GraphQL, WebSockets, and message queues.

This type of gateway may also perform message-level transformation, complex routing, network resilience/fallbacks, and aggregation of responses.

If you are familiar with the Richardson Maturity Model for REST APIs, you will find that a gateway implementing the API gateway pattern ends up integrating far more Level 0 requests (and everything in between) than Level 1 to Level 3 implementations.

These kinds of gateway implementations still need to handle rate limiting, authentication/authorization, circuit breaking, metric collection, traffic routing, and more.
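
Of those responsibilities, circuit breaking is the one that tends to be least familiar, so here is a minimal sketch (plain Python, hypothetical thresholds) of the behavior: after a number of consecutive failures the gateway stops calling the backend for a cool-down period and fails fast instead.

```python
# Minimal circuit-breaker sketch: trip after N consecutive failures, fail fast
# while open, and allow a retry after a cool-down. Thresholds are hypothetical.
import time


class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None            # timestamp when the circuit was opened

    def call(self, backend, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")   # protect the backend
            self.failures, self.opened_at = 0, None                # cool-down over, try again
        try:
            result = backend(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()                       # trip the breaker
            raise
        self.failures = 0                                          # success resets the count
        return result


# usage (hypothetical flaky backend):
# breaker = CircuitBreaker()
# breaker.call(call_inventory_service, "sku-42")
```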

These kinds of gateways may sit at the edge of the cluster as cluster ingress controllers, or deep inside the cluster as application gateways.

API Gateway Pattern

Examples of such API gateways include:

  • Spring Cloud Gateway
  • Netflix Zuul
  • IBM-Strongloop Loopback/Microgateway

More general-purpose programming or integration languages/frameworks can also be used, for example:

  • Apache Camel
  • Spring Integration
  • Ballerina.io
  • Eclipse Vert.x
  • NodeJS

Because this type of API gateway is so closely tied to the development of applications and services, we expect developers to be involved in specifying the APIs exposed by the gateway, understanding any aggregation logic involved, and being able to quickly test and change this API infrastructure.

We also want operations staff or engineers to have some input into the security, resilience, and observability configuration of the API gateway.

This level of infrastructure must also fit the evolving, on-demand, self-service developer workflow. Again, see the GitOps model for more on this.

Enter the Service Mesh

Part of the challenge of running a service architecture on a cloud infrastructure is building the right level of visibility and control in the network.

In previous iterations of solving this problem, we relied on application libraries and developer governance to achieve this.

However, at large scale, and across multiple development languages, the emergence of service mesh technology provides a better solution.

Implemented transparently, the service mesh brings the following capabilities to a platform and the services it comprises:

  • Service-to-service (that is, east-west traffic) resilience.
  • Security, including end-user authentication, mutual TLS, and service-to-service RBAC/ABAC (a concrete sketch of a mesh-level mTLS policy follows this list).
  • Black-box observability of services (focused on network traffic), such as requests per second, request latency, request failures, circuit-breaking events, distributed tracing, and so on.
  • Service-to-service rate limiting, quota enforcement, and so on.
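
To make the mutual-TLS item above concrete, here is a minimal sketch of a mesh-level mTLS policy, kept in Python for consistency with the other examples and mirroring an Istio PeerAuthentication resource; the namespace is hypothetical. The point is that the mesh enforces this transparently for every workload, with no application code changes.

```python
# Sketch of a mesh-wide mutual-TLS policy expressed as a Python dict that
# mirrors an Istio PeerAuthentication resource. The namespace is hypothetical.
import json

mtls_policy = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "orders"},  # hypothetical namespace
    "spec": {"mtls": {"mode": "STRICT"}},                     # require mTLS for all service-to-service calls
}

print(json.dumps(mtls_policy, indent=2))   # apply with `kubectl apply -f -` on a cluster running Istio
```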

The savvy reader will notice that API gateways and service meshes appear to overlap in functionality. The intent of the service mesh, however, is to solve these problems transparently, at L7, for any service or application.

In other words, the service mesh wants to blend into the service (without its code actually being embedded in the service).

The API gateway, on the other hand, sits above the service mesh, alongside the applications (L8?). The service mesh adds value to the flow of requests between services, hosts, ports, protocols, and so on (east-west traffic).

A service mesh can also provide basic cluster ingress functionality to bring some of this value to north/south traffic. However, this should not be confused with what an API gateway does for north/south traffic (one handles north/south into a cluster, the other north/south into a group of applications).

The service mesh and API gateway overlap in some functionality, but they are complementary, live at different levels, and solve different problems.

The ideal solution is to fit each component (API management, API gateway, service mesh) appropriately into your solution and establish good boundaries between them as needed (or leave them out when you don't need them).

It is also important to find appropriate ways to fit these components into the right developer and operations workflows.

Even if the terminology and identities of these components get confusing, we should fall back on fundamentals and understand the value each brings to our architecture in order to determine how they can exist independently and complement each other.
