Abstract:

Istio is an open platform for connecting, managing, and securing microservices. It provides an easy way to build a microservice network with load balancing, inter-service authentication, and monitoring, without modifying the services themselves. Istio provides the following functions:

  • Traffic Management: controls the flow of traffic and API calls between services.
  • Observability: captures the dependencies between services and the flow of service invocations.
  • Policy Enforcement: controls access policies between services without changing the services themselves.
  • Service Identity and Security: provides service identity and other security-related functions.

Architecture

Architecturally, Istio is divided into two parts:

  • Control plane: management components that support traffic routing, runtime policy enforcement, and so on.
  • Data plane: a set of intelligent proxies that mediate and control network interactions between services.

Intelligent proxy Envoy

Envoy is an L7 proxy and communication bus designed for service-oriented architectures. The project was born with the following goal in mind:

The network should be transparent to the application so that when network and application failures occur, the root cause of the problem can be easily located.

Envoy attempts to do this by providing the following advanced features:

  • External process architecture: Envoy is an independent process that runs alongside the application.
  • Cross-language: Envoy can work with applications written in any language.
  • Modern C++11 codebase.
  • L3/L4 filters: Envoy's core is an L3/L4 network proxy into which programmable filters can be plugged to perform different TCP proxy tasks.
  • HTTP L7 filters: HTTP is a key component of modern application architectures, so Envoy supports an additional HTTP L7 filter layer. HTTP filters plug into the HTTP connection management subsystem as plugins to perform tasks such as buffering, rate limiting, routing/forwarding, sniffing Amazon DynamoDB traffic, and so on.
  • HTTP/2 support.
  • HTTP L7 routing: when running in HTTP mode, Envoy supports path-based routing and redirection based on content type, runtime values, and so on. This is useful when services are built into a service mesh and Envoy acts as a front/edge proxy.
  • gRPC support: gRPC is an RPC framework from Google that uses HTTP/2 as its underlying multiplexed transport. gRPC requests and responses carried over HTTP/2 can use Envoy's routing and load-balancing capabilities, so the two systems are very complementary.
  • MongoDB support.
  • DynamoDB support.
  • Service discovery: service discovery is an important part of a service-oriented architecture. Envoy supports multiple service discovery methods, including asynchronous DNS resolution and REST lookups against a service discovery service.
  • Health checking: the recommended way to build an Envoy mesh is to treat service discovery as an eventually consistent process. Envoy contains a health-checking subsystem that performs active health checks on upstream service clusters, and combines service discovery with health-check information to determine healthy load-balancing targets. Envoy also supports passive health checking.
  • Advanced load balancing: load balancing between components is a complex problem in distributed systems. Because Envoy is an independent proxy process rather than a library, it can implement advanced load balancing in one place and make it accessible to any application. This currently includes automatic retries, circuit breaking, global rate limiting, request shadowing, and outlier detection; scheduled request-rate control will be supported in the future.
  • Front/edge proxy: although designed as a communication system between services, Envoy's features (debugging, administration, service discovery, LB algorithms, etc.) also apply at the edge. Envoy offers enough features (TLS, HTTP/1.1, HTTP/2, and HTTP L7 routing) to serve as the front-end proxy for most web applications.
  • Excellent observability: as mentioned above, Envoy's goal is to make the network transparent, but problems can occur at both the network layer and the application layer. Envoy includes reliable statistics for all subsystems. Currently statsd (and compatible) sinks are supported, and adding others would not be complicated. Statistics can also be viewed through the admin port, and Envoy supports third-party distributed tracing mechanisms.
  • Dynamic configuration: Envoy provides layered dynamic configuration APIs that users can use to build complex, centrally managed deployments.
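Two of the features above, automatic retries and circuit breaking, follow a pattern that is easy to sketch. The following Python sketch is purely illustrative, not Envoy's implementation; `max_failures` and `retries` are made-up parameters (Envoy configures such thresholds per upstream cluster): failed requests are retried automatically, and once failures cross a threshold the breaker opens and subsequent calls fail fast.

```python
class CircuitBreaker:
    """Illustrative sketch of automatic retries + circuit breaking (not Envoy's code)."""
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, request_fn, retries=2):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        last_err = None
        for _ in range(retries + 1):      # initial attempt plus automatic retries
            try:
                result = request_fn()
                self.failures = 0         # any success resets the breaker
                return result
            except ConnectionError as err:
                self.failures += 1
                last_err = err
                if self.open:
                    break
        raise last_err

breaker = CircuitBreaker(max_failures=3)

def flaky():
    raise ConnectionError("upstream unavailable")

try:
    breaker.call(flaky)                   # 1 attempt + 2 retries, all fail
except ConnectionError:
    pass
```

After three consecutive failures the breaker is open, and further calls fail immediately without reaching the (unhealthy) upstream.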

Envoy is deployed as a standalone sidecar in the same Kubernetes pod as the microservice it serves and reports a set of attributes to Mixer, which uses them to enforce policies and to emit telemetry to monitoring systems.

This sidecar proxy model requires no changes to the service's own logic, yet can add a range of functionality.

Mixer

Mixer is responsible for enforcing access control and usage policies across the service mesh and for collecting telemetry data from Envoy proxies and other services. The proxy extracts request-level attributes and sends them to Mixer for evaluation. Mixer includes a flexible plugin model that lets it plug into a variety of host environments and infrastructure backends, abstracting the Envoy proxies and Istio-managed services away from these details.

Infrastructure backends are designed to support services: access control systems, telemetry capture systems, quota enforcement systems, billing systems, and so on. Traditionally, services interact directly with these backends, creating tight coupling and baking backend-specific semantics and usage options into the service.

Mixer provides a common mediation layer between application code and the infrastructure back end. It is designed to move policy enforcement out of the application layer and replace it with configurations that operations can control. Instead of integrating the application code with a specific back end, the application code integrates fairly simply with the Mixer, which is then responsible for connecting to the back end system.

Mixer is designed to shift the boundaries between layers in order to reduce overall complexity: policy logic is removed from service code and placed under the control of operators.

Mixer architecture

Mixer provides three core functions:

  • Precondition checking: allows a service to verify preconditions before responding to an incoming request from a service consumer, such as whether the caller is properly authenticated, is on the service's whitelist, and has passed ACL checks.
  • Quota management: enables services to allocate and release quotas along multiple dimensions. Quotas are a simple resource management tool that provides relatively fair arbitration when service consumers compete for limited resources; rate limiting is one example of a quota.
  • Telemetry reporting: enables services to report logging and monitoring data, and also enables tracing and billing streams for service producers and consumers.

These mechanisms are applied based on a set of attributes that each request presents to Mixer. In Istio, the attributes come from each request that passes through the sidecar proxy (Envoy).

Istio uses attributes to control the runtime behavior of services in the mesh. Attributes are named, typed pieces of metadata describing ingress and egress traffic and the environment the traffic occurs in. An Istio attribute carries one specific piece of information, such as the error code of an API request, the latency of an API request, or the source IP address of a TCP connection. For example:

request.path: xyz/abc
request.size: 234
request.time: 12:34:56.789 04/17/2017
source.ip: 192.168.0.1
target.service: example

Adapter-based and template-based configuration

Mixer is an attribute-processing engine: requests arrive at Mixer with a set of attributes, and based on those attributes Mixer generates calls to various infrastructure backends. The attribute set determines which backends Mixer invokes for a given request, and with which parameters. To hide the details of individual backends, Mixer uses modules called adapters.

Mixer configuration has several central responsibilities:

  • Describes which adapters are in use and how they operate.
  • Describes how to map request attributes to adapter parameters.
  • Describes when an adapter is invoked with specific parameters.

Configuration is done based on adapters and templates:

  • Adapter: encapsulates the interface between Mixer and a specific infrastructure backend.
  • Template: defines the mapping from the attributes of a particular request to adapter inputs. An adapter can support any number of templates.

Configurations are expressed in YAML format, built around a few core abstractions:

  • Handler: a configured adapter; the handler configuration supplies the adapter's constructor arguments.
  • Instance: a (request) instance is the result of mapping request attributes through a template; the mapping comes from the instance configuration.
  • Rule: determines when a handler is invoked with a particular instance configuration.
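To illustrate how these three abstractions fit together, here is a hypothetical Python miniature of Mixer's attribute-processing loop (none of these names are Mixer's real API): a handler is a configured adapter, an instance is request attributes mapped through a template, and a rule decides when to dispatch an instance to a handler.

```python
# Hypothetical miniature of Mixer's attribute-processing loop (illustrative only).

def metric_handler(config):
    """'Configured adapter': records metric instances into an in-memory backend."""
    backend = config["backend"]            # stands in for e.g. a Prometheus sink
    def record(instance):
        backend.append(instance)
        return True
    return record

def make_instance(template_mapping, attributes):
    """'Instance': the result of mapping request attributes through a template."""
    return {field: expr(attributes) for field, expr in template_mapping.items()}

def apply_rules(rules, attributes):
    """'Rule': when the match predicate holds, invoke the handler on the instance."""
    for rule in rules:
        if rule["match"](attributes):
            rule["handler"](make_instance(rule["template"], attributes))

recorded = []
rules = [{
    "match": lambda a: a["destination.service"].endswith("svc.cluster.local"),
    "template": {"value": lambda a: a["response.duration"],
                 "response_code": lambda a: a["response.code"]},
    "handler": metric_handler({"backend": recorded}),
}]
apply_rules(rules, {"destination.service": "reviews.default.svc.cluster.local",
                    "response.duration": "120ms", "response.code": 200})
```

After the call, `recorded` holds one metric instance built from the request's attributes, which is the essence of what the YAML configuration below expresses declaratively.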

Handler

The adapter encapsulates the interface necessary for Mixer to interact with a specific external infrastructure back end, such as Prometheus, New Relic, or Stackdriver. Various adapters require parameter configuration to work. For example, a logging adapter may require an IP address and port for logging output.

The example below configures a handler of kind listchecker. The listchecker adapter checks an input value against a list; in whitelist mode, the check succeeds if the input value is present in the list.

apiVersion: config.istio.io/v1alpha2
kind: listchecker
metadata:
  name: staticversion
  namespace: istio-system
spec:
  providerUrl: http://white_list_registry/
  blacklist: false

{metadata.name}.{kind}.{metadata.namespace} is the fully qualified name of the handler.

Instance

An instance configuration maps attributes from the request to inputs for the adapter. Note that all dimensions required by the handler configuration are defined in this mapping.

apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  name: requestduration
  namespace: istio-system
spec:
  value: response.duration | "0ms"
  dimensions:
    destination_service: destination.service | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    response_code: response.code | 200
  monitored_resource_type: '"UNSPECIFIED"'
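Note the `|` operator in the expressions above: it evaluates the attribute when present and otherwise falls back to the literal default. A Python analogue of how the requestduration instance above would be built (the helper functions are illustrative, not Mixer code; the attribute and dimension names are from the example):

```python
def attr(attributes, name, default):
    """Evaluate `name | default`: use the attribute if present, else the default."""
    return attributes.get(name, default)

def build_requestduration(attributes):
    # Mirrors the `requestduration` metric instance configuration above.
    return {
        "value": attr(attributes, "response.duration", "0ms"),
        "dimensions": {
            "destination_service": attr(attributes, "destination.service", "unknown"),
            "destination_version": attr(attributes, "destination.labels.version", "unknown"),
            "response_code": attr(attributes, "response.code", 200),
        },
    }

# A request that carried only a destination.service attribute:
inst = build_requestduration({"destination.service": "reviews.default.svc.cluster.local"})
```

Every missing attribute is replaced by its configured default, so the handler always receives a complete instance.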

Rules

Rules specify when a particular handler is invoked with a particular instance configuration. For example, to send the requestduration metric to the Prometheus handler for requests to the service1 service that carry an x-user request header:

apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  match: destination.service == "service1.ns.svc.cluster.local" && request.headers["x-user"] == "user1"
  actions:
  - handler: handler.prometheus
    instances:
    - requestduration.metric.istio-system

Pilot (formerly Istio-Manager)

Pilot is responsible for collecting and validating configurations and propagating them to the various Istio components. It extracts environment-specific implementation details from Mixer and Envoy to provide them with abstract representations of the user’s services, independent of the underlying platform. In addition, traffic management rules (i.e., generic layer 4 rules and layer 7 HTTP/gRPC routing rules) can be programmed at run time through Pilot.

  • Acts as the interface between users and Istio, collecting and validating configuration and propagating it to the other Istio components. As the core component for traffic management, Pilot manages all configured Envoy proxy instances and provides the traffic management capabilities described below.

  • Provides an abstraction layer for adapting to underlying cluster management platforms, such as the Kubernetes adaptation layer.

  • Provides a proxy controller for dynamically configuring Istio proxies.

Service Model

Services are not unique or new to Istio; Kubernetes, for example, already provides a similar Service concept and capability. To support more fine-grained routing, however, Istio adds a versioning mechanism to its service model; versions can be described, for example, by attaching labels.

As a logical abstraction, each service usually has a fully qualified domain name (FQDN) and several ports, and may also have a dedicated load balancer and virtual IP address.

In Kubernetes, for example, a Service foo will have the hostname foo.default.svc.cluster.local, a virtual IP such as 10.0.1.1, and possibly listening ports.

A service often has multiple instances, each of which can be a container, pod, or VM. Each instance has a network endpoint and exposed listening ports.

Istio itself does not provide service registration and discovery; it relies on the capabilities of the underlying platform. Likewise, Istio does not provide DNS; domain name resolution is also provided by the underlying platform (such as kube-dns).

Rule-based routing is implemented by the Istio proxy sidecar process, such as Envoy or Nginx. These proxies typically provide both layer 4 and layer 7 routing. Rules can be defined based on labels, weights, HTTP headers, URLs, and so on.
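A label- and weight-based routing decision can be sketched in a few lines of Python (illustrative only; real proxies implement this in their routing layers). Each route carries version labels and a weight, and a weighted random draw picks the version:

```python
import random

def choose_version(routes, rng=random):
    """routes: list of {"labels": {...}, "weight": int}; weights sum to 100."""
    point = rng.uniform(0, sum(r["weight"] for r in routes))
    for r in routes:
        point -= r["weight"]
        if point <= 0:
            return r["labels"]["version"]
    return routes[-1]["labels"]["version"]

# A 90/10 canary split between v1 and v2.
routes = [{"labels": {"version": "v1"}, "weight": 90},
          {"labels": {"version": "v2"}, "weight": 10}]
picked = choose_version(routes)
```

Over many requests, roughly 90% land on v1 and 10% on v2, which is exactly the semantics of the weighted route rules shown later in this article.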

Configuration Model

Istio's rule configuration has a protobuf-based schema definition. The rules are stored in a key-value store, and Pilot subscribes to configuration changes in order to update the configuration of the other Istio components.

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v1
    weight: 100

Proxy controller

The proxy controller is the core module of Pilot, used to control and manage the proxies in Istio. Istio currently uses Envoy as its proxy, but because it defines a standard abstract API for interfacing with proxies, users will be able to plug in other proxies in the future.

For Envoy, ISTIO currently provides two components:

  • proxy agent: a set of scripts and commands that generate Envoy configuration from the abstract service model and rule configuration, and trigger proxy restarts;
  • discovery service: implements Envoy's service discovery API, publishing information to Envoy proxies.

GET /v1/registration/(string: service_name) returns all hosts for the specified service_name, in the following JSON format:

{
  "hosts": []
}

const std::string Json::Schema::SDS_SCHEMA(R"EOF(
  {
    "$schema" : "http://json-schema.org/schema#",
    "definitions" : {
      "host" : {
        "type" : "object",
        "properties" : {
          "ip_address" : {"type" : "string"},
          "port" : {"type" : "integer"},
          "tags" : {
            "type" : "object",
            "properties" : {
              "az" : {"type" : "string"},
              "canary" : {"type" : "boolean"},
              "load_balancing_weight" : {
                "type" : "integer",
                "minimum" : 1,
                "maximum" : 100
              }
            }
          }
        },
        "required" : ["ip_address", "port"]
      }
    },
    "type" : "object",
    "properties" : {
      "hosts" : {
        "type" : "array",
        "items" : {"$ref" : "#/definitions/host"}
      }
    },
    "required" : ["hosts"]
  }
)EOF");
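As an illustration of that contract, here is a hypothetical Python sketch of what the registration endpoint returns. The in-memory REGISTRY and the helper names are invented for illustration, but the response shape and the required fields follow the schema above:

```python
# Hypothetical in-memory registry keyed by service name (illustrative only).
REGISTRY = {
    "reviews.default.svc.cluster.local": [
        {"ip_address": "10.0.1.4", "port": 9080,
         "tags": {"az": "us-east-1a", "canary": False}},
        {"ip_address": "10.0.1.5", "port": 9080},
    ],
}

def registration_response(service_name):
    """Build the JSON body for GET /v1/registration/(service_name)."""
    return {"hosts": REGISTRY.get(service_name, [])}

def valid_host(host):
    """Check the 'required' fields from the SDS JSON schema above."""
    return isinstance(host.get("ip_address"), str) and isinstance(host.get("port"), int)

resp = registration_response("reviews.default.svc.cluster.local")
```

An unknown service name yields `{"hosts": []}`, matching the empty-hosts example shown earlier.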

Pilot's proxy injection capability also deserves mention: as you might imagine, it is implemented with iptables rules, so that all service traffic is captured and redirected through the injected proxy.

Istio-Auth

Istio-Auth provides service-to-service and end-user authentication to enhance security between services without requiring changes to service code. It consists of three components:

  • Identity

    • When Istio runs on Kubernetes, Auth uses the service account provided by Kubernetes to identify who is running the service.
  • Key management

    • Auth provides a CA that automates key and certificate generation and management.
  • Communication security

    • Service-to-service communication is secured through tunnels between the client-side and server-side Envoys.

Distributed tracking

Istio's distributed tracing is based on Twitter's open-source Zipkin distributed tracing system; the theoretical model comes from Google's Dapper paper.

Start Zipkin

The Zipkin addon is started when Istio is installed, or it can be started with the following command:

kubectl apply -f install/kubernetes/addons/zipkin.yaml

Access Zipkin

Forward the port, then visit the Zipkin dashboard at http://localhost:9411:

kubectl port-forward $(kubectl get pod -l app=zipkin -o jsonpath='{.items[0].metadata.name}') 9411:9411

Enable tracing in the service

The service implementation itself needs one small change: take the following headers from the incoming HTTP request and pass them along on outgoing requests:

x-request-id
x-b3-traceid
x-b3-spanid
x-b3-parentspanid
x-b3-sampled
x-b3-flags
x-ot-span-context
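In application code, "pass them to other requests" amounts to copying these headers from the incoming request onto every outgoing call. A minimal Python sketch (the header list is exactly the one above; the helper functions are illustrative):

```python
TRACE_HEADERS = [
    "x-request-id", "x-b3-traceid", "x-b3-spanid", "x-b3-parentspanid",
    "x-b3-sampled", "x-b3-flags", "x-ot-span-context",
]

def extract_trace_headers(incoming_headers):
    """Copy the Zipkin/B3 trace headers from the incoming request, if present."""
    lower = {k.lower(): v for k, v in incoming_headers.items()}
    return {h: lower[h] for h in TRACE_HEADERS if h in lower}

def outgoing_headers(incoming_headers, extra=None):
    """Headers for a downstream call: propagated trace context plus our own."""
    headers = extract_trace_headers(incoming_headers)
    headers.update(extra or {})
    return headers

incoming = {"X-B3-TraceId": "463ac35c9f6413ad", "X-B3-SpanId": "a2fb4a1d1a96d312",
            "X-B3-Sampled": "1", "Content-Type": "application/json"}
out = outgoing_headers(incoming, {"content-type": "application/json"})
```

Only the trace headers are propagated; other incoming headers (such as Content-Type) are not forwarded, which keeps the spans of one request stitched into a single trace.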

Enable the Ingress

In the Kubernetes environment, Istio uses the built-in Ingress to expose services, currently supporting both HTTP and HTTPS. For the specific Ingress, see Kubernetes Ingress.

Configuring the HTTP Service

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - http:
      paths:
      - path: /headers
        backend:
          serviceName: httpbin
          servicePort: 8000
      - path: /delay/.*
        backend:
          serviceName: httpbin
          servicePort: 8000
EOF

Configuring the HTTPS Service

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: secured-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  tls:
    - secretName: ingress-secret
  rules:
  - http:
      paths:
      - path: /ip
        backend:
          serviceName: httpbin
          servicePort: 8000
EOF

Enable the Egress

By default, services in Istio cannot reach services outside the cluster, because the iptables rules in the pod direct all outbound traffic to the sidecar proxy. Istio provides two ways to access external services.

Configuring External Services

Register an HTTP and HTTPS service as follows:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
 name: externalbin
spec:
 type: ExternalName
 externalName: httpbin.org
 ports:
 - port: 80
   # important to set protocol name
   name: http
EOF

or

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
 name: securegoogle
spec:
 type: ExternalName
 externalName: www.google.com
 ports:
 - port: 443
   # important to set protocol name
   name: https
EOF

metadata.name is the name by which internal services address the external service, and spec.externalName is the DNS name of the external service.

To try to access external services, run the following command:

export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items..metadata.name}')
kubectl exec -it $SOURCE_POD -c sleep bash
curl http://externalbin/headers
curl http://securegoogle:443

Direct access to external services

Istio Egress currently supports only HTTP/HTTPS requests. To reach external services over other protocols (such as MQTT or Mongo), the service's Envoy sidecar must be configured not to intercept those external requests.

The simplest way is to specify the IP range to be used by the internal cluster service using the includeIPRanges parameter.

Note: Different Cloud providers support different IP ranges and acquisition methods.

For example, for Minikube:

kubectl apply -f <(istioctl kube-inject -f samples/apps/sleep/sleep.yaml --includeIPRanges=10.0.0.1/24)

Configuring request Routing

By default, Istio routes requests to all versions of a service. In addition, Istio provides routing rules based on request content. The following rules direct all requests to version v1 of each service:

type: route-rule
name: ratings-default
namespace: default
spec:
  destination: ratings.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: v1
    weight: 100
---
type: route-rule
name: reviews-default
namespace: default
spec:
  destination: reviews.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: v1
    weight: 100
---
type: route-rule
name: details-default
namespace: default
spec:
  destination: details.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: v1
    weight: 100
---
type: route-rule
name: productpage-default
namespace: default
spec:
  destination: productpage.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: v1
    weight: 100
---

If some requests need to be directed to other versions of the service, such as routing based on the cookie of the request:

destination: reviews.default.svc.cluster.local
match:
  httpHeaders:
    cookie:
      regex: "^(.*?;)?(user=jason)(;.*)?$"
precedence: 2
route:
- tags:
    version: v2

For other rules, see: https://istio.io/docs/reference/config/traffic-rules/routing-rules.html#routerule
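It is worth seeing exactly what the cookie pattern ^(.*?;)?(user=jason)(;.*)?$ used above matches. A quick Python check (illustrative):

```python
import re

# The cookie pattern from the routing rule above.
USER_JASON = re.compile(r"^(.*?;)?(user=jason)(;.*)?$")

def routes_to_v2(cookie_header):
    """True when the request's cookie value should be routed to version v2."""
    return bool(USER_JASON.match(cookie_header))

matches = [routes_to_v2(c) for c in [
    "user=jason",                          # bare cookie
    "session=abc;user=jason;theme=dark",   # embedded among other cookies
    "user=jasonX",                         # must match the whole user=jason token
    "theme=dark",                          # no user cookie at all
]]
```

The optional groups before and after let `user=jason` appear anywhere in a semicolon-separated cookie string, while the anchors prevent partial-token matches such as `user=jasonX`.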

Fault injection

Istio provides two types of errors that can be injected into requests:

  • Delays: timing faults that simulate network latency or an overloaded upstream service.
  • Aborts: crash faults that simulate a failed upstream service, returning an HTTP error code or a TCP connection error.
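The two fault types can be pictured as a wrapper around the request path. The following Python sketch is illustrative, not Envoy's fault filter; the `delay_percent`/`abort_percent` parameters mirror the percent field used in fault-injection rules:

```python
import random
import time

def inject_fault(handler, delay_seconds=0.0, delay_percent=0,
                 abort_code=None, abort_percent=0, rng=random):
    """Wrap `handler` so a percentage of calls is delayed and/or aborted."""
    def wrapped(request):
        if delay_percent and rng.uniform(0, 100) < delay_percent:
            time.sleep(delay_seconds)                 # the fixedDelay fault
        if abort_percent and rng.uniform(0, 100) < abort_percent:
            return {"status": abort_code}             # simulated upstream failure
        return handler(request)
    return wrapped

ok = lambda request: {"status": 200}
always_abort = inject_fault(ok, abort_code=503, abort_percent=100)
never_faulty = inject_fault(ok)
```

With `abort_percent=100` every call short-circuits with the configured error code without touching the real handler, which is how aborts simulate a failed upstream service.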

The fault injection rule is described in the following example:

destination: ratings.default.svc.cluster.local
httpFault:
  delay:
    fixedDelay: 7s
    percent: 100
match:
  httpHeaders:
    cookie:
      regex: "^(.*?;)?(user=jason)(;.*)?$"
precedence: 2
route:
 - tags:
    version: v1

Setting request Timeout

An HTTP request timeout can be set with the httpReqTimeout field of a routing rule.

Examples are as follows:

cat <<EOF | istioctl replace
type: route-rule
name: reviews-default
spec:
  destination: reviews.default.svc.cluster.local
  route:
  - tags:
      version: v2
  httpReqTimeout:
    simpleTimeout:
      timeout: 1s
EOF

Rate limiting

Configure the rate-limiting rules in Istio Mixer, for example in ratelimit.yaml:

rules:
- selector: source.labels["app"] == "reviews" && source.labels["version"] == "v3"
  aspects:
  - kind: quotas
    params:
      quotas:
      - descriptorName: RequestCount
        maxAmount: 5000
        expiration: 5s
        labels:
          label1: target.service

If target.service is ratings, the key of the counter is:

$aspect_id; RequestCount; maxAmount=5000; expiration=5s; label1=ratings
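The behavior behind that counter key can be sketched as a fixed-window counter in Python (illustrative only; Mixer's real quota adapter is more involved): each key may be granted at most maxAmount requests per expiration window.

```python
import time

class QuotaCounter:
    """Fixed-window quota: at most max_amount grants per expiration window, per key."""
    def __init__(self, max_amount=5000, expiration=5.0, clock=time.monotonic):
        self.max_amount = max_amount
        self.expiration = expiration
        self.clock = clock
        self.windows = {}                     # key -> (window_start, used)

    def allow(self, key, amount=1):
        now = self.clock()
        start, used = self.windows.get(key, (now, 0))
        if now - start >= self.expiration:    # window expired: reset the counter
            start, used = now, 0
        if used + amount > self.max_amount:
            return False                      # quota exhausted for this window
        self.windows[key] = (start, used + amount)
        return True

# Small limits so the behavior is visible (5000/5s in the real rule).
quota = QuotaCounter(max_amount=3, expiration=5.0)
key = "RequestCount; maxAmount=3; expiration=5s; label1=ratings"
grants = [quota.allow(key) for _ in range(5)]
```

The first max_amount requests in a window are granted and the rest are rejected until the window expires, which is the effect the quota rule above has on the ratings service.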

Run the following command to limit the ratings service to 5000 requests every 5 seconds (effective only when called from the reviews v3 service):

istioctl mixer rule create global ratings.default.svc.cluster.local -f ratelimit.yaml

Simple access control

Istio can implement simple access control by setting rules.

Use the denials attribute

For example:

rules:
- aspects:
  - kind: denials
  selector: source.labels["app"] == "reviews" && source.labels["version"] == "v3"

Run the following command to make the ratings service reject all requests from the reviews v3 service:

istioctl mixer rule create global ratings.default.svc.cluster.local -f samples/apps/bookinfo/mixer-rule-ratings-denial.yaml

Use blacklists and whitelists

To use the whitelist, you need to define an Adapter as follows:

- name: versionList
  impl: genericListChecker
  params:
    listEntries: ["v1", "v2"]

Set blacklist to false to use the list as a whitelist, and to true to use it as a blacklist:

rules:
  aspects:
  - kind: lists
    adapter: versionList
    params:
      blacklist: false
      checkExpression: source.labels["version"]
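The list-checking semantics reduce to a few lines (illustrative Python, not the adapter's code): with blacklist false, membership in the list means the check succeeds; with blacklist true, membership means rejection.

```python
def check_list(value, entries, blacklist=False):
    """genericListChecker semantics: whitelist passes members, blacklist rejects them."""
    found = value in entries
    return not found if blacklist else found

# The versionList entries from the adapter definition above.
version_list = ["v1", "v2"]
whitelisted = check_list("v1", version_list, blacklist=False)   # listed -> allowed
rejected = check_list("v3", version_list, blacklist=False)      # unlisted -> denied
```

Here the checkExpression supplies the value (the source's version label), and the adapter's listEntries supply the list.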

Conclusion

As an open platform for connecting, managing, and securing microservices, Istio does provide an easy way to build a microservice network. Follow-up articles will cover concrete cases and the core technologies involved.
