Introduction
Over the past few years, microservices technology has taken off, and with the popularization of container technology, communication management between services has received more and more attention. Recall the history of inter-service communication: from direct point-to-point connections to the emergence of a network layer, from applications managing communication themselves to dedicated service-discovery libraries and circuit breakers. Technology keeps changing at a rapid pace. By 2017, traditional, intrusive development frameworks such as Spring Cloud had become so entrenched in the microservices market that they were almost synonymous with microservices, and developers had long chafed at their intrusiveness. It was against this background that service meshes, represented by Istio and Linkerd, emerged in 2017, and people began to realize that there was another way to do microservices. Let's follow this article to find out what a service mesh is and what Istio is all about.
Basic concepts
What is a service mesh
Service mesh: the network of microservices that make up an application, together with the interactions between them.
A service mesh's requirements include service discovery, load balancing, fault recovery, metrics collection and monitoring, and operational (O&M) needs such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.
Service mesh features:
- An intermediate layer for communication between applications;
- A lightweight network proxy;
- Transparent to the application;
- Decouples retries/timeouts, monitoring, tracing, and service discovery from application code.
What is Istio
Istio provides a simple way to build a network of deployed services with load balancing, service-to-service authentication, monitoring, and so on, without any changes to the service code.
Simply put: Istio is a complete implementation of a service mesh.
Istio has the following characteristics:
- It works in container and virtual-machine environments (especially Kubernetes) and is compatible with heterogeneous architectures;
- It uses a network of sidecar proxies to mediate service traffic, with no changes to the business code itself;
- Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic;
- Fine-grained control over traffic behavior through rich routing rules, retries, failover, and fault injection, with support for access control, rate limiting, and quotas;
- Automatic metrics, logging, and tracing for all traffic entering and leaving the cluster.
Istio architecture
The Istio architecture consists of a data plane and a control plane.
Data plane
- Composition: a set of intelligent proxies (Envoy) deployed as sidecars;
- Role: mediates and controls all network communication between microservices and reports to Mixer (the control-plane policy and telemetry component).
Sidecar pattern: auxiliary functions of an application are split out into separate processes; in Kubernetes, these run as separate containers within the same Pod.
Envoy proxy
- Envoy is deployed as a sidecar, in the same Kubernetes Pod as the service it proxies;
- It mediates all inbound and outbound traffic for every service in the mesh.
Control plane
- Manages and configures the proxies.
Mixer
- Enforces access control and usage policies across the service mesh;
- Collects telemetry data from the Envoy proxies and other services.
Pilot
- Service discovery for the Envoy sidecars;
- Traffic-management capabilities for intelligent routing (e.g. A/B testing, canary deployments) and resiliency (timeouts, retries, circuit breaking, etc.);
- Translates the high-level routing rules that control traffic behavior into Envoy-specific configuration.
Citadel
- Transport authentication (service-to-service authentication);
- Origin authentication (end-user authentication).
Galley
- Validates user-written Istio API configuration;
- Insulates the other Istio control-plane components from the details of obtaining user configuration from the underlying platform, such as Kubernetes.
Features
The most compelling aspect of Istio is its rich feature set: traffic management, security and access control, and monitoring (telemetry).
Traffic management
- HTTP routing: routing based on HTTP content, for example user information in HTTP headers;
- TCP routing: control over TCP traffic;
- Weight-based routing: traffic is split proportionally, for both HTTP and TCP;
- Fault injection: delays, aborts, and so on can be injected between services, which is helpful when testing services;
- Timeouts: timeouts can be set for HTTP requests; Istio's timeouts combine with the application's own timeout settings for clearer semantics and finer control;
- Ingress control: with the Istio Gateway, monitoring and routing rules can be applied to inbound traffic;
- Egress control: ServiceEntry governs calls from the cluster to external services (again with monitoring and routing rules);
- Circuit breaking: breaker rules can be customized, including concurrency limits and how long the breaker stays open;
- Traffic mirroring: a copy of live traffic can be sent to a mirror service; mirrored requests are fire-and-forget (their responses are discarded).
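As a concrete sketch of the timeout, retry, and circuit-breaking features above (the ratings host comes from the Bookinfo example used later; the numeric values are placeholders, while the fields are the standard networking.istio.io/v1alpha3 ones):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
    timeout: 2s          # overall per-request deadline
    retries:
      attempts: 3        # up to 3 retries
      perTryTimeout: 1s  # deadline for each attempt
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100   # concurrency limit
    outlierDetection:
      consecutiveErrors: 5    # eject a host after 5 consecutive errors
      interval: 10s           # scan interval
      baseEjectionTime: 30s   # how long the breaker stays open
```

The VirtualService governs routing behavior (timeouts, retries), while the DestinationRule governs what happens to traffic once it is routed to the host (connection limits, ejection).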
Security and access control
- TLS and CA support;
- ServiceRole: namespace-level access control and access control over sets of services;
- Authentication (JWT): group-based (user groups) and list-based.
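For illustration, under the RBAC API of the Istio 1.2 era a ServiceRole and its binding look roughly like this (the service and service-account names are placeholders):

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: ratings-viewer
  namespace: default
spec:
  rules:
  - services: ["ratings.default.svc.cluster.local"]  # which services the role covers
    methods: ["GET"]                                  # allowed HTTP methods
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-ratings-viewer
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/bookinfo-reviews"  # caller's service account
  roleRef:
    kind: ServiceRole
    name: ratings-viewer
```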
Monitoring (telemetry)
Prometheus
Collects metrics from the Envoy proxies, stores them, and provides a query interface.
The main metrics are:
- Request Count: incremented for each request handled by the Istio proxy;
- Request Duration: measures the duration of a request;
- Request Size: measures the size of the HTTP request body;
- Response Size: measures the size of the HTTP response body;
- TCP Bytes Sent: the total number of bytes sent during a response over a TCP connection, as measured by the server-side proxy;
- TCP Bytes Received: the total number of bytes received during a request over a TCP connection, as measured by the server-side proxy.
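In Prometheus these surface under metric names such as istio_requests_total. Two illustrative queries (metric and label names follow Istio's standard 1.x Prometheus configuration):

```promql
# request rate per destination service over the last 5 minutes
sum(rate(istio_requests_total[5m])) by (destination_service)

# 90th-percentile request duration across the mesh
histogram_quantile(0.9,
  sum(rate(istio_request_duration_seconds_bucket[5m])) by (le))
```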
Grafana
Visualizes the collected metrics.
Jaeger
Distributed tracing is implemented by adding specific headers to HTTP requests, which the application itself must forward and propagate (this keeps heterogeneous projects compatible).
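A minimal sketch in Go of the forwarding an application has to do (the header list is the B3 set that Istio's tracing documentation names; the function name extractTraceHeaders is ours):

```go
package main

import "fmt"

// traceHeaders are the headers Istio's tracing relies on; an application
// must copy them from each incoming request onto its outbound requests.
var traceHeaders = []string{
	"x-request-id",
	"x-b3-traceid",
	"x-b3-spanid",
	"x-b3-parentspanid",
	"x-b3-sampled",
	"x-b3-flags",
	"x-ot-span-context",
}

// extractTraceHeaders returns only the tracing headers from an incoming
// request's headers, ready to be set on an outbound request.
func extractTraceHeaders(incoming map[string]string) map[string]string {
	out := make(map[string]string)
	for _, h := range traceHeaders {
		if v, ok := incoming[h]; ok {
			out[h] = v
		}
	}
	return out
}

func main() {
	in := map[string]string{
		"x-b3-traceid": "463ac35c9f6413ad48485a3953bb6124",
		"content-type": "application/json",
	}
	fmt.Println(extractTraceHeaders(in))
}
```

Whatever HTTP framework the service uses, the same copy step has to happen between the inbound request and every outbound call, or the proxies' spans will not join into one trace.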
Example
To better understand what Istio can do, let's look at traffic control and telemetry, using the Bookinfo application as an example.
First, deploy a small book-review site.
The site consists of four microservices:
- productpage: calls the details and reviews microservices to generate the page;
- details: contains information about books;
- reviews: contains book reviews and calls the ratings microservice;
- ratings: contains the book-ranking information that accompanies a review.
The reviews microservice comes in three versions:
- v1 does not call the ratings service;
- v2 calls the ratings service and displays each rating as 1 to 5 black stars;
- v3 calls the ratings service and displays each rating as 1 to 5 red stars.
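The routing rules below select these versions as subsets, which a DestinationRule defines; this mirrors the standard Bookinfo configuration:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:            # each subset maps to Pods carrying a version label
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```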
Because the Bookinfo example deploys all three versions of the reviews microservice, visiting the application repeatedly sometimes shows star ratings and sometimes does not.
Intelligent routing
Fine-grained traffic control can be implemented based on routing rules.
Custom routes
First, route all traffic to v1 of reviews with the following configuration, submitted to Kubernetes:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
```
Wait a few seconds and you can see the change: all traffic now goes to reviews v1.
Distributing traffic based on header content
Istio can also distribute traffic based on request content. Here, normal users get v1 while a special user (jason) gets v2, using the following configuration submitted to Kubernetes:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```
As you can see, regular users still reach reviews v1.
Logging in as jason routes you to reviews v2 (black stars).
In addition, Istio can inject delays and aborts between services, and can shift traffic between versions gradually by weight.
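A delay-injection rule, for example, looks roughly like this (the shape follows the Bookinfo fault-injection task; the 7-second delay is arbitrary) and would slow down jason's calls to the ratings service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      delay:
        percent: 100      # delay every matching request
        fixedDelay: 7s    # injected latency
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
```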
Monitoring
In Istio, Mixer automatically generates and reports new metrics and new log streams for all in-mesh traffic. The following uses the Bookinfo application to demonstrate distributed tracing.
Prometheus
Used to collect and query metrics.
Distributed tracking
Although the Istio agent can automatically send Span information, some assistance is needed to unify the entire tracing process. The application should propagate the HTTP headers associated with the trace itself so that the same trace process is properly unified when the agent sends a Span message.
Productpage Service Access Overview:
Call chain trace:
Service topology:
Grafana
Grafana is used to monitor both Istio itself and the services in the mesh.
Service dashboard:
How it works
Sidecar
Sidecar pattern: auxiliary functions of an application are split out into separate processes; in Kubernetes, these run as separate containers within the same Pod.
Manually injecting the Istio sidecar modifies the Deployment by adding two containers:
- The init container istio-init: initializes the environment for the sidecar container (the Envoy proxy) by setting up iptables port forwarding;
- The Envoy sidecar container istio-proxy: runs the Envoy proxy.
An example of the sidecar template Istio injects into the Deployment:
```
containers:
- name: istio-proxy
  image: istio.io/proxy:0.5.0
  args:
  - proxy
  - sidecar
  - --configPath
  - {{ .ProxyConfig.ConfigPath }}
  - --binaryPath
  - {{ .ProxyConfig.BinaryPath }}
  - --serviceCluster
  {{ if ne "" (index .ObjectMeta.Labels "app") -}}
  - {{ index .ObjectMeta.Labels "app" }}
  {{ else -}}
  - "istio-proxy"
  {{ end -}}
```
[1]
Communication with Envoy
- Discovery service (the pilot-discovery binary): list/watches services, endpoints, Pods, nodes, and so on from the Kubernetes apiserver, listens for Istio control-plane configuration (such as VirtualService and DestinationRule), and translates it into configuration Envoy can consume directly;
- Proxy (the envoy binary): connects directly to the discovery service to retrieve configuration (equivalent to indirectly obtaining the cluster's microservice registry from registries such as Kubernetes);
- Agent (the pilot-agent binary): generates the Envoy configuration file and manages the Envoy life cycle;
- Service A/B: applications using Istio, whose inbound and outbound network traffic is taken over by the proxy.
Performance testing
Official performance tests
Under the official standard tests [2] (1000 services, 2000 sidecars, 70,000 mesh-wide requests per second), the approximate figures are:
- The Envoy proxy consumes 0.6 vCPU and 50 MB of memory per 1000 QPS;
- The telemetry service consumes 0.6 vCPU per 1000 (in-mesh) QPS;
- Pilot consumes 1 vCPU and 1.5 GB of memory;
- The Envoy proxy adds 8 ms of latency to 90% of calls.
A few things to note:
- The more complex the proxy policy, the more latency increases;
- Telemetry data is sent asynchronously, so it does not block the current request, but it can congest the network and indirectly increase latency.
Test setup
This test is based on Istio 1.2 on a single-node Kubernetes cluster (v1.14.3).
- Fortio, an end-to-end load-testing tool, calls Service A;
- Service A and Service B call each other, and the call depth can be controlled (i.e. the number of calls between Service A and Service B before Service A returns to Fortio);
- Fortio and the two services are in the same namespace;
- Test results 1 to 3 do not include the basic telemetry configuration.
Test results
1. Istio performance overhead
At 500 QPS with a call depth of 3 (three calls between A and B before Service A returns to Fortio), the performance with and without the Istio proxy differs as follows:
As you can see, P90 with Istio is 6.63 ms (90% of calls take less than 6.63 ms) versus 3.70 ms without Istio: call latency increased by about 3 ms.
2. Impact of call depth on latency
At 500 QPS with call depths of 0, 3, 5, 8, 10, and 12, the latency distribution with the Istio proxy:
And without the Istio proxy:
Comparing the runs at depths 5, 8, 10, and 12:
To the left of the red line is the latency with the Istio proxy at depths 5, 8, 10, and 12; to the right is the latency without the Istio proxy at the same depths.
As you can see, the latency the Istio proxy adds grows with call depth, but the growth slows down. Beyond a depth of 8, P90 with the Istio proxy is on average about 10 ms higher than P90 without it.
3. Impact of QPS on latency
At a call depth of 3 with QPS of 300, 500, and 1000, the latency distribution with the Istio proxy:
And without the Istio proxy:
As you can see, increasing QPS does not increase call latency.
Summary of performance testing
- The added call latency is mainly positively correlated with call depth (the number of hops through the Istio proxy);
- Per hop, P90 latency with Istio is on average about 1 ms higher than P90 without Istio;
- Increasing QPS does not increase call latency.
Note: in my tests, the inter-service QPS reached a maximum of 45K and the conclusions above still held. Finding the QPS limit of the Istio proxy itself would require more detailed testing.
Operating Istio resources with client-go
We can use the client-go machinery provided by Kubernetes to manipulate Istio CRD resources.
The steps are as follows:
- Write the CRD definitions under pkg/apis/{api group}/{version};
- Add the appropriate code-generation tags, see [3];
- Generate the clientset, informers, and other code with code-generator [4].
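As a sketch of step 2, the tags are plain Go comments above the type. The type below is deliberately simplified so the snippet stays self-contained; a real definition embeds metav1.TypeMeta and metav1.ObjectMeta from k8s.io/apimachinery:

```go
package main

import "fmt"

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// VirtualService is a simplified stand-in for the CRD type that would live
// under pkg/apis/networking/v1alpha3; code-generator reads the tags above
// to emit the clientset and deep-copy code.
type VirtualService struct {
	APIVersion string
	Kind       string
	Name       string
}

func main() {
	vs := VirtualService{
		APIVersion: "networking.istio.io/v1alpha3",
		Kind:       "VirtualService",
		Name:       "reviews",
	}
	fmt.Printf("%s %s/%s\n", vs.Kind, vs.APIVersion, vs.Name)
}
```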
If you need to operate CRDs that someone else has already defined, you can reference them directly in your own CRD definitions. For Istio's VirtualService, for example, simply import istio.io/api/networking/v1alpha3 and use its VirtualService type.
I have put the generated API code on GitHub; anyone who needs it can use it directly [5].
Istio CRD code generation script
```shell
# Working directory for code generation
# (ROOT_PACKAGE is the Go package root of your project)
# Install k8s.io/code-generator
go get -u k8s.io/code-generator/...
cd $GOPATH/src/k8s.io/code-generator

./generate-groups.sh all "$ROOT_PACKAGE/pkg/client" "$ROOT_PACKAGE/pkg/apis" "authentication:v1alpha1 networking:v1alpha3"
```
References:
[1] https://jimmysong.io/istio-handbook/concepts/sidecar-injection-deep-dive.html
[2] https://istio.io/docs/concepts/performance-and-scalability/
[3] https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/
[4] https://github.com/kubernetes/code-generator
[5] https://github.com/RuiWang14/k8s-istio-client