William Morgan

Service Mesh is a fairly new concept, so talking about its “history” may seem like a stretch. Yet the service mesh has been running in some enterprise production environments for more than 18 months now, and its roots trace back to around 2010, when large Internet companies first ran into the problems of operating services at scale.

So why has Service Mesh suddenly become such a hot topic?

A service mesh is a software infrastructure layer for controlling and monitoring the internal, service-to-service communication of a microservice application. It typically takes the form of a “data plane” of network proxies deployed next to the application code and a “control plane” for interacting with those proxies. In this model, the developer (the “service owner”) is unaware of the existence of the service mesh, while the operator (the “platform engineer”) gains a new set of tools for ensuring reliability, security, and visibility.
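In Kubernetes terms, the data plane described above is usually realized as a “sidecar” proxy container running next to the application container. A minimal sketch of a pod following this pattern (the names, images, and ports here are illustrative assumptions, not anything from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: users-service
spec:
  containers:
  # The service owner's code; it speaks plain HTTP and knows nothing of the mesh.
  - name: app
    image: example/users:1.0
    ports:
    - containerPort: 8080
  # The data-plane sidecar; the control plane configures it at runtime.
  - name: proxy
    image: envoyproxy/envoy:v1.27.0
    ports:
    - containerPort: 15001
```

Because the proxy sits in the pod's network path, it can observe and shape all traffic in and out of the service without any change to the application.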

For many companies, Docker and Kubernetes have “solved deployment” (at least initially), but not the run-time problem, which is where Service Mesh comes in.

What does “solved deployment” mean? Tools such as Docker and Kubernetes significantly reduce the operational burden of deployment: with them, deploying 100 applications or services is no longer 100 times the work of deploying one. That is a big step forward, and for many companies it dramatically lowers the cost of adopting microservices. This is possible not only because Docker and Kubernetes provide powerful abstractions at all the right levels, but also because they standardize packaging and deployment patterns across the organization.

But what happens once the application is running? After all, deployment is not the last step in production: the application still has to run, serve traffic, and generate value. So the question becomes: can we standardize the runtime operations of our applications the same way Docker and Kubernetes standardized deploy-time operations?

The service mesh's answer to this question is yes. At its core, the service mesh provides a unified, global way to control and measure all request traffic between applications or services (“east-west” traffic, in data-center parlance). For companies that have adopted microservices, this request traffic plays a critical role in runtime behavior: because services work by responding to incoming requests and issuing outgoing requests, the flow of requests becomes a key determinant of how the application behaves at runtime. Standardizing the management of that traffic therefore becomes a tool for standardizing the application's runtime.

By providing APIs to analyze and manipulate this traffic, the service mesh offers a standardized mechanism for runtime operations across the organization, including ways to ensure reliability, security, and visibility. And like any good infrastructure layer, the service mesh is built to be independent of the services themselves.
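As a concrete illustration of such an API, a mesh like Istio lets operators declare traffic policy in configuration rather than in application code. The sketch below is a hedged example, assuming a hypothetical service named `users`:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: users
spec:
  hosts:
  - users
  http:
  - route:
    - destination:
        host: users
    # The mesh, not the application, retries failed requests.
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
```

The platform team can roll such a policy out (or back) without touching any service's source code, which is exactly the separation of concerns the article describes.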

How the service mesh came to be

So where does the service mesh come from? Doing some “software archaeology,” we find that the core features the service mesh provides, such as request-level load balancing, circuit breaking, retries, and instrumentation, are not fundamentally new. The service mesh is ultimately a repackaging of functionality: a shift in where things happen, not what happens.

The story begins around 2010 with the three-tier model of application architecture, the simple design that once powered the vast majority of applications on the web. In this model, application traffic is handled first by a “web tier,” which talks to an “application tier,” which in turn talks to a “database tier.” The web servers in the web tier are built to handle large volumes of incoming requests very quickly and hand them off carefully to relatively slow application servers (Apache, NGINX, and other popular web servers all have quite sophisticated logic for this). Similarly, the application tier uses database libraries to communicate with the backing stores; these libraries typically handle caching, load balancing, routing, flow control, and so on in ways optimized for that use case.

So far so good, but this model began to show strain at scale, especially in the application tier, which grows very large over time. The early web-scale companies, such as Google, Facebook, Netflix, and Twitter, learned to break the monolith into many pieces that ran independently, giving rise to microservices. And the moment microservices were introduced, east-west traffic was introduced too. In this world, communication is no longer specialized; it happens between every service, and when it goes wrong, the site goes down.

These companies responded in a similar way: they wrote “fat client” libraries to manage request traffic. These libraries, such as Google's Stubby, Netflix's Hystrix, and Twitter's Finagle, provided a uniform runtime approach across all services. Developers or service owners would use them to make requests to other services, and under the hood the libraries performed load balancing, routing, retries, and so on. By providing uniform behavior, visibility, and points of control across every service in the application, these libraries arguably formed the first service meshes.
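The behavior of such a fat-client library can be sketched in a few lines of Python. Everything here (the class name, endpoints, and transport callable) is hypothetical, meant only to illustrate the load-balancing-plus-retries pattern, not the API of any real library:

```python
import itertools
import time

class FatClient:
    """Toy 'fat client' in the spirit of Stubby/Hystrix/Finagle (illustrative only)."""

    def __init__(self, endpoints, transport, max_attempts=3, backoff_s=0.1):
        # Round-robin over the replicas of the target service.
        self._pool = itertools.cycle(endpoints)
        self._transport = transport      # callable: (endpoint, request) -> response
        self._max_attempts = max_attempts
        self._backoff_s = backoff_s

    def call(self, request):
        last_error = None
        for attempt in range(self._max_attempts):
            endpoint = next(self._pool)  # client-side load balancing
            try:
                return self._transport(endpoint, request)
            except ConnectionError as err:
                last_error = err
                time.sleep(self._backoff_s * (2 ** attempt))  # exponential backoff
        raise last_error

# Usage: a fake transport in which one replica is down.
def flaky_transport(endpoint, request):
    if endpoint == "10.0.0.2:8080":
        raise ConnectionError("replica down")
    return f"ok from {endpoint}"

client = FatClient(["10.0.0.2:8080", "10.0.0.1:8080"], flaky_transport, backoff_s=0)
print(client.call("GET /users"))  # first attempt fails, the retry succeeds
```

Note that every service wanting this behavior must link the library in, which is precisely the operational weakness the next section turns to.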

The rise of the proxy

Returning to today's cloud-native world, these libraries still exist. However, they have become much less attractive than out-of-process proxies, given the operational convenience proxies offer, especially now that the advent of containers and orchestration has dramatically reduced the complexity of deployment.

Proxies sidestep many of the drawbacks of libraries. When a library changes, for example, that change must be deployed into every service, a process that often requires complex organizational coordination. Proxies, by contrast, can be upgraded without recompiling and redeploying each application. Proxies also naturally support polyglot systems, in which applications are written in different languages; doing the same with libraries is prohibitively expensive.

Perhaps most importantly for large organizations, implementing the service mesh in proxies rather than libraries shifts responsibility for providing runtime-operations capabilities from the service owners to the end users of those capabilities: the platform team. This alignment of providers and consumers lets these teams control their own destiny and decouples the complex dependencies between dev and ops.

The combination of these factors has led to the rise of the service mesh and, with it, more robust runtime operations. By deploying a distributed “mesh” of proxies that can be maintained as part of the underlying infrastructure rather than the applications themselves, and by providing centralized APIs to analyze and manipulate this traffic, the service mesh provides a standardized mechanism for runtime operations across the organization while ensuring reliability, security, and visibility.


The open source PaaS Rainbond has released V3.6.0, which adds an out-of-the-box Service Mesh microservice architecture, implements governance functions through plug-in extensions, and supports mainstream microservice frameworks such as Spring Cloud, API Gateway, and Dubbo.
