This article is a revision of earlier content and has been included in the Istio Handbook of the ServiceMesher community. Other chapters are still being compiled.

Anyone who has just heard of Service Mesh and tried Istio may have the following questions:

  1. Why is Istio bound to Kubernetes?
  2. What are the roles of Kubernetes and Service Mesh in cloud native?
  3. What aspects of Kubernetes does Istio extend? What problems have been solved?
  4. What is the relationship between Kubernetes, the xDS protocol (implemented by Envoy, MOSN, etc.), and Istio?
  5. Should I adopt a Service Mesh or not?

In this section we will try to tease out the relationship between Kubernetes, xDS, and Istio Service Mesh. In addition, this section will introduce the load balancing method in Kubernetes, the significance of xDS protocol for Service Mesh, and why Istio is needed even with Kubernetes.

Using a Service Mesh is not a break with Kubernetes, but a natural progression. The essence of Kubernetes is application lifecycle management through declarative configuration, while the essence of a Service Mesh is traffic and security management and observability between applications. If you have built a stable microservice platform using Kubernetes, how do you set up load balancing and traffic control for inter-service calls?

The xDS protocol created by Envoy is supported by numerous open source projects such as Istio, Linkerd, and MOSN. Envoy's greatest contribution to Service Mesh, and to cloud native in general, is the definition of xDS. Envoy is essentially a proxy, a modern proxy that can be configured through an API, and many usage scenarios have been derived from it, such as API gateways and the Sidecar proxies and edge proxies in a Service Mesh.

This section covers the following:

  • The function of kube-proxy.
  • The limitations of Kubernetes in microservice management.
  • The functions of Istio Service Mesh.
  • An introduction to the xDS protocol.
  • A comparison of concepts in Kubernetes, Envoy, and Istio Service Mesh.

Key points

If you want to see everything ahead of time, here are the main points of this article:

  • The essence of Kubernetes is application lifecycle management, specifically deployment and management (scaling, automatic recovery, publishing).
  • Kubernetes provides a scalable, highly resilient deployment and management platform for microservices.
  • The Service Mesh is based on transparent proxy. The Sidecar proxy intercepts traffic between microservices and manages the behavior of microservices through the configuration of the control plane.
  • The Service Mesh decouples traffic management from Kubernetes. Traffic within the Service Mesh does not require the kube-proxy component; it manages traffic, security, and visibility between services through abstractions closer to the microservice application layer.
  • xDS defines the protocol standard for Service Mesh configuration.
  • The Service Mesh is a higher-level abstraction of the Service in Kubernetes, and the next step is Serverless.

Kubernetes vs Service Mesh

The following figure shows the Service access relationship between Kubernetes and the Service Mesh (one Sidecar per pod).

Traffic forwarding

A kube-proxy component is deployed on each node of a Kubernetes cluster. It communicates with the Kubernetes API server, obtains the Service information in the cluster, and then sets iptables rules so that requests to a Service are sent directly to one of its corresponding Endpoints (a pod belonging to that Service).
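As a concrete illustration (the `reviews` service name and port are hypothetical), a plain Kubernetes Service like the sketch below is what kube-proxy turns into iptables rules: requests to the Service's cluster IP on port 9080 are DNAT-ed to one of the pods matched by the selector.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: reviews          # hypothetical service name
spec:
  selector:
    app: reviews         # pods with this label become the Service's Endpoints
  ports:
    - port: 9080         # port on the virtual (cluster) IP
      targetPort: 9080   # port on the backing pods
```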

Service discovery

Istio Service Mesh can follow the Services in Kubernetes for service registration, and it can also connect to other service discovery systems through the platform adapters of the control plane, then generate the data-plane configuration (declared with CRDs and stored in etcd). The transparent proxies of the data plane are deployed as sidecar containers in the pod of each application service, and these proxies request the control plane to synchronize their configuration. The proxy is "transparent" because the application container is completely unaware of its existence. This process involves traffic interception much like kube-proxy does, except that kube-proxy intercepts traffic entering and leaving the Kubernetes node, while the sidecar proxy intercepts traffic entering and leaving the pod. See the Envoy sidecar proxy routing and forwarding in the Istio Service Mesh for details.

Disadvantages of Service Mesh

Because Kubernetes runs a large number of pods on each node, moving the original kube-proxy routing and forwarding functionality into every pod causes a lot of configuration distribution, synchronization, and eventual-consistency issues. Fine-grained traffic management also requires a new set of abstractions, which further increases the learning cost for users, though this will slowly ease as the technology becomes more widespread.

Advantages of Service Mesh

kube-proxy's settings take effect globally and cannot be controlled at a fine granularity for each Service. The Service Mesh, however, takes traffic control in Kubernetes out of the Service layer via the sidecar proxy, allowing for far more extension.

The kube-proxy component

In a Kubernetes cluster, each node runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services. In Kubernetes v1.0, the proxy was implemented entirely in userspace. Kubernetes v1.1 added the iptables proxy mode, though it was not the default. As of Kubernetes v1.2, the iptables proxy is the default. Kubernetes v1.8.0-beta.0 added the IPVS proxy mode. For more information about kube-proxy, see Kubernetes Service and kube-proxy and Implementation of Kubernetes ingress traffic load balancing using IPVS.

The defects of kube-proxy

kube-proxy has several shortcomings:

First, if a pod that traffic is forwarded to cannot serve properly, kube-proxy will not automatically retry another pod. Each pod has its own health-check mechanism (liveness probes), and when a pod has a health problem, kube-proxy deletes the corresponding forwarding rule. In addition, NodePort-type Services cannot have TLS or more complex packet routing mechanisms attached to them.

kube-proxy implements load balancing among the pod instances of a Kubernetes Service, but how do you apply fine-grained control to the traffic among these services, such as dividing traffic by percentage across different application versions (which belong to the same Service but different Deployments), or doing canary (grayscale) releases and blue-green releases? The Kubernetes community provides a way to do canary releases using Deployments, which essentially assigns different pods to the same Service by manipulating pod labels.
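As a sketch of that label-based canary approach (all names, images, and replica counts here are hypothetical), a second Deployment whose pods carry the same `app` label as the stable version will receive a share of the Service's traffic roughly proportional to its replica count:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2            # canary Deployment
spec:
  replicas: 1                 # e.g. 1 of 10 total replicas ≈ ~10% of traffic
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews          # same label the Service selects on
        version: v2           # distinguishes canary pods from stable ones
    spec:
      containers:
        - name: reviews
          image: example.com/reviews:v2   # hypothetical image
```

Because the Service selects only on `app: reviews`, traffic can only be shifted by scaling replica counts, which is exactly the coarse-grained control described above.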

Kubernetes Ingress vs Istio Gateway

kube-proxy can only route traffic inside the Kubernetes cluster. The pods of a Kubernetes cluster live in a network created by CNI, and clients outside the cluster cannot communicate with them directly. Kubernetes therefore created the Ingress resource object, which is served by an Ingress controller located at the edge of the Kubernetes cluster and manages north-south traffic. Ingress must be paired with one of the various Ingress controllers, such as the NGINX Ingress Controller or Traefik. Ingress only applies to HTTP traffic and is deliberately simple: it routes traffic by matching only a limited set of fields such as Service, port, and HTTP path, which makes it impossible to route TCP traffic such as MySQL, Redis, and various private RPCs. To route that traffic directly you can only use a Service of type LoadBalancer or NodePort, the former requiring cloud vendor support and the latter requiring extra port management. Some Ingress controllers support exposing TCP and UDP services, but only through the Service, not through the Ingress itself; the NGINX Ingress Controller, for example, configures the exposed ports by creating a ConfigMap.
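A minimal Ingress sketch (host, path, and backend names are illustrative) shows how limited the matching fields are, essentially just host, path, and a backend Service/port:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com           # HTTP host matching only
      http:
        paths:
          - path: /api            # path-prefix matching
            pathType: Prefix
            backend:
              service:
                name: api-service # backend Service inside the cluster
                port:
                  number: 8080
```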

Istio Gateway functions like Kubernetes Ingress in that it is responsible for the cluster's north-south traffic. The load balancer described by an Istio Gateway carries connections entering and leaving the edge of the mesh. The specification describes a set of open ports, the protocols used by those ports, the SNI configuration for load balancing, and so on. Gateway is a CRD extension that also reuses the capabilities of the sidecar proxy. For details, see the Istio official website.
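For comparison, here is a minimal Istio Gateway sketch (the host and credential name are hypothetical) describing an open port, its protocol, and its TLS settings. Note that a Gateway only opens the port; a VirtualService is still needed to bind routes to it:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway      # binds to the ingress gateway proxy pods
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS        # TCP, TLS, HTTP, GRPC, etc. are also possible
      hosts:
        - example.com          # SNI / host matching
      tls:
        mode: SIMPLE
        credentialName: example-cert   # hypothetical Kubernetes TLS secret
```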

The xDS protocol

Each square represents a service instance, such as a pod in Kubernetes (containing a sidecar proxy). The xDS protocol controls the specific behavior of all traffic in the Istio Service Mesh, as shown in the figure below.

The xDS protocol was proposed by Envoy. In the Envoy v2 API, the original xDS protocols were the Cluster Discovery Service (CDS), Endpoint Discovery Service (EDS), Listener Discovery Service (LDS), and Route Discovery Service (RDS). The v3 version later added the Scoped Route Discovery Service (SRDS), Virtual Host Discovery Service (VHDS), Secret Discovery Service (SDS), and Runtime Discovery Service (RTDS). See xDS REST and gRPC Protocol.

Let’s take a look at the xDS protocol with two instances of each service.

The arrows in the figure above are not the path or route of traffic entering the proxy, nor the actual order of operations; they represent an imagined processing order of the xDS interfaces. In practice, the xDS resources cross-reference one another.

An xDS-enabled proxy discovers its resources dynamically by querying files or management servers. Collectively, these discovery services and their corresponding APIs are called xDS. Envoy obtains resources by subscription, in three ways:

  • File subscription: monitor files at a specified path. The simplest way to provide dynamic resources is to save them in a file and set the path in the path parameter of ConfigSource.
  • gRPC streaming subscription: each xDS API can be individually configured with an ApiConfigSource pointing to the cluster address of the corresponding upstream management server.
  • REST-JSON polling subscription: a single xDS API can do synchronous (long) polling of a REST endpoint.

For details about the xDS subscription modes, see xDS Protocol Resolution. Istio uses the gRPC streaming subscription to configure all of the sidecar proxies on the data plane.
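As a sketch of the gRPC streaming subscription (the cluster name and management-server address are assumptions, not Istio's actual defaults), an Envoy bootstrap can point its dynamic resources at an aggregated xDS (ADS) stream like this:

```yaml
# Envoy bootstrap fragment: CDS and LDS are delivered over a single
# aggregated (ADS) gRPC stream from a management server.
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster      # must match a static cluster below
  cds_config:
    ads: {}
    resource_api_version: V3
  lds_config:
    ads: {}
    resource_api_version: V3
static_resources:
  clusters:
    - name: xds_cluster
      type: STRICT_DNS
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}   # the xDS gRPC stream requires HTTP/2
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: xds-server.example.svc   # hypothetical management server
                      port_value: 15010
```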

This article also touches on the overall architecture of Istio Pilot, proxy configuration generation, the capabilities of the pilot-discovery module, and CDS, EDS, and ADS in the xDS protocol. For details on ADS, refer to the official Envoy documentation.

xDS protocol essentials

Finally, summarize the key points of xDS protocol:

  • CDS, EDS, LDS, and RDS are the most basic xDS protocols, which can be updated independently.
  • All Discovery services can connect to different Management Servers, which means that multiple servers can manage xDS.
  • Envoy has made a series of extensions to the original xDS protocol, adding APIs such as SDS (Secret Discovery Service), ADS (Aggregated Discovery Service), HDS (Health Discovery Service), MS (Metric Service), and RLS (Rate Limit Service).
  • To guarantee data consistency, if you use the original xDS APIs directly, you need to ensure updates are made in the order CDS -> EDS -> LDS -> RDS. This follows the make-before-break principle from electrical engineering: establish the new connection before disconnecting the original one, so that when a new routing rule is set, traffic is not dropped because an upstream cluster cannot be found (which would resemble a broken circuit).
  • CDS sets which services are in the Service Mesh.
  • EDS sets which instances (endpoints) belong to these services (clusters).
  • LDS sets which ports an instance listens on and which route configuration applies to them.
  • RDS sets the final routing relationships between services; it should be updated last.

Envoy

Envoy is the default sidecar in the Istio Service Mesh, and Istio builds its control plane on top of the Envoy xDS protocol. Before we get to the xDS protocol, we need to become familiar with Envoy's basic terminology. Below is a list of the basic terms and data structures in Envoy. For details on Envoy, please refer to the official Envoy documentation. For details on how Envoy works as a forwarding proxy in a Service Mesh (not just Istio), refer to Liu Chao's article on NetEase Cloud, Dive into the technical details behind Service Mesh, and to Understand Envoy proxy sidecar injection and traffic hijacking in Istio Service Mesh, parts of which are cited in this article.

Basic terminology

Here are the basic terms you should know about Envoy:

  • Downstream: A Downstream host connects to an Envoy, sends a request and receives a response, that is, the host that sent the request.
  • Upstream: An Upstream host receives a connection and request from an Envoy and returns a response, i.e. the host receiving the request.
  • Listener: a listener is a named network address (for example, a port or a Unix domain socket) that downstream clients can connect to. Envoy exposes one or more listeners for downstream hosts to connect to.
  • Cluster: a cluster is a group of logically identical upstream hosts that Envoy connects to. Envoy discovers cluster members through service discovery and can optionally perform active health checking to determine their health status. Envoy decides which cluster member to route a request to using a load-balancing policy.

Envoy can configure multiple listeners, each with its own filter chain. The filters are extensible, making it easy to manipulate traffic behavior, for example to set up encryption or private RPC handling.
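A minimal Envoy listener sketch (the listener and cluster names are made up) with a single filter chain: an HTTP connection manager whose route table sends everything to one cluster.

```yaml
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                http_filters:
                  - name: envoy.filters.http.router   # terminal filter in the HTTP chain
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: backend
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: upstream_service }   # hypothetical cluster
```

Additional network or HTTP filters (TLS inspection, rate limiting, custom RPC handling) would be inserted into this same chain.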

The xDS protocol was proposed by Envoy, which is currently the default sidecar proxy in Istio, but any implementation of xDS can theoretically serve as the sidecar proxy in Istio, for example Ant Financial's open-source MOSN.

Istio Service Mesh

Istio is a feature-rich Service Mesh that includes the following functions:

  • Traffic management: This is the most basic function of Istio.
  • Policy control: access control systems, telemetry capture, quota management, and billing are implemented through the Mixer component and various adapters.
  • Observability: also provided through Mixer.
  • Security authentication: The Citadel component manages keys and certificates.

Traffic management in Istio

Istio defines the following CRDS to help users manage traffic:

  • Gateway: Gateway describes a load balancer that operates on the edge of a network to receive incoming or outgoing HTTP/TCP connections.
  • VirtualService: a VirtualService is what actually connects a Kubernetes Service to the Istio Gateway. It can also do much more, such as defining a set of traffic routing rules to apply when a host is addressed.
  • DestinationRule: a DestinationRule defines the policies that apply to traffic after routing has occurred. Simply put, it defines how the routed traffic is handled. These policies can specify load-balancing configuration, connection pool size, and outlier detection settings for identifying and evicting unhealthy hosts from the load-balancing pool.
  • EnvoyFilter: an EnvoyFilter object describes filters for the proxy service, which can customize the proxy configuration generated by Istio Pilot. This configuration is rarely used by novice users.
  • ServiceEntry: by default, services in an Istio Service Mesh cannot discover services outside the mesh. A ServiceEntry adds an extra entry to the internal service registry in Istio so that automatically discovered services in the mesh can access and route to the manually added services.
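To make the VirtualService/DestinationRule pairing concrete, here is a sketch (service names, subsets, and weights are illustrative) that splits traffic 90/10 between two versions of a service, the kind of fine-grained control that kube-proxy alone cannot provide:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews                 # the Kubernetes Service being addressed
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90          # 90% of traffic to v1
        - destination:
            host: reviews
            subset: v2
          weight: 10          # 10% canary traffic to v2
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:                    # subsets map to pod labels
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```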

Kubernetes vs xDS vs Istio

Having read the abstract traffic-management concepts above in Kubernetes' kube-proxy component, xDS, and Istio, the following comparison covers only the corresponding components/protocols for traffic management (note that the three do not correspond exactly).

Kubernetes   xDS        Istio Service Mesh
Endpoint     Endpoint   -
Service      Route      VirtualService
kube-proxy   Route      DestinationRule
kube-proxy   Listener   EnvoyFilter
Ingress      Listener   Gateway
Service      Cluster    ServiceEntry

Conclusion

If the object that Kubernetes manages is the pod, then the object that the Service Mesh manages is the Service. So using Kubernetes to manage microservices and then applying a Service Mesh on top is the natural progression. If you do not want to care about services at all, use a Serverless platform such as Knative, but that is another story.

The concepts described above are just the tip of the iceberg of Istio's new layer of abstraction on top of Kubernetes, but traffic management is one of the most fundamental and important features of a service mesh, so this is where the book begins.

References

  • Dive into the technical details behind Service Mesh – cnblogs.com
  • Understand Envoy proxy sidecar injection and traffic hijacking in Istio Service Mesh – jimmysong.io
  • Kubernetes Service and kube-proxy – cizixs.com
  • Implementation of Kubernetes ingress traffic load balancing using IPVS – jishu.io
  • xDS REST and gRPC protocol