1 Background
1.1 What is a Microservice
- Services communicate through lightweight mechanisms, usually REST APIs
- Governance is decentralized
- Each service can be implemented in a different programming language and use different data storage technologies
- Applications are divided into services along business boundaries; a large application system can be composed of multiple independent services
- Each service can be deployed independently and owns its own business logic
- Services can be shared by multiple applications, and common resource services can be reused by other services
1.2 Advantages of microservices
- Modular development, with a single service as the unit of update and upgrade, improving the overall stability of the system when one service misbehaves
- Modular development and management is convenient: a single team develops and maintains each service, with clear responsibilities
- Public service modules can be reused by other business modules
- The system architecture is clearer
- Combined with CI/CD, it enables DevOps
- Elastic scaling, combined with Kubernetes orchestration and dynamic HPA (a minimal sketch follows this list)
- Service circuit breaking/degradation avoids the avalanche effect caused by an abnormal node and isolates failed nodes
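To make the elastic-scaling point concrete, here is a minimal sketch of a Kubernetes HorizontalPodAutoscaler for a hypothetical `reviews` Deployment. The resource names and thresholds are illustrative assumptions, not taken from the text above.

```yaml
# Minimal sketch: autoscale a hypothetical "reviews" Deployment on CPU usage.
# Resource names and thresholds are illustrative assumptions.
# autoscaling/v2 requires a reasonably recent Kubernetes; older clusters use autoscaling/v2beta2.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: reviews-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: reviews
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```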
1.3 Challenges brought by microservices
- With more and more services, configuration management and maintenance are complicated
- Complex dependencies between services
- Tracing the service invocation chain is difficult
- Load balancing between services
- Service Development
- Service monitoring/logs
- Service circuit breaker/downgrade
- Service authentication
- Bringing services online and taking them offline
- Service documentation
1.4 What is Microservice Governance
Along with the convenience microservices bring come the challenges listed above. From bare metal to virtualization to the public cloud, and then to containers and serverless, the technology keeps evolving. Coping with the challenges of microservices means answering how to handle service registration and discovery, request tracing, load balancing, service circuit breaking/degradation, rate limiting, access control, monitoring and logging, and configuration management. This article is a set of notes from learning Istio: the problems Istio addresses, and the challenges that still need to be handled by other platforms. It will be updated as I continue to learn about Istio, the service governance tool for Kubernetes.
2 Istio Introduction
2.1 What is Istio
Istio lets you connect, secure, control, and observe microservices. At a high level, it helps reduce the complexity of these deployments and the strain on development teams. It is a fully open source service mesh that layers transparently onto existing distributed applications. It is also a platform, with APIs that let it integrate into any logging platform, telemetry, or policy system. Istio's diverse feature set lets you run a distributed microservices architecture successfully and efficiently, and provides a uniform way to secure, connect, and monitor microservices.
2.2 What is a Service Mesh
Istio addresses the challenges developers and operators face in the transition from a monolithic application to a distributed microservices architecture.
The term service mesh is often used to describe the network of microservices that make up such applications and the interactions between them. As its scale and complexity grow, the service mesh becomes increasingly difficult to understand and manage. Its requirements include service discovery, load balancing, failure recovery, metrics collection, and monitoring, and often more complex operational needs such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.
Istio provides a complete solution to the diverse needs of microservice applications by providing behavioral insights and operational control over the service mesh as a whole.
2.3 Why Use Istio
Istio provides a simple way to set up a network for deployed services, with load balancing, service-to-service authentication, and monitoring, without changing any service code. To add Istio support to a service, you simply deploy a special sidecar proxy alongside it in your environment (a minimal injection sketch appears at the end of this subsection); the Istio control plane configures and manages the proxies, which intercept all network traffic between microservices and provide:
- Automatic load balancing of HTTP, gRPC, WebSocket, and TCP traffic.
- Fine-grained control of traffic behavior is possible with rich routing rules, retries, failover, and fault injection.
- Pluggable policy layer and configuration API to support access control, rate limiting, and quotas.
- Automatic metrics, logs, and traces for all traffic within the cluster, including cluster ingress and egress.
- Secure communication between services in a cluster through powerful identity-based authentication and authorization.
Istio is designed to achieve scalability and meet various deployment requirements.
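As a concrete example of the "deploy a sidecar, change no code" point above, the following is a minimal sketch of enabling automatic sidecar injection for a namespace. Once the label is in place, Pods created in that namespace get the Envoy sidecar injected automatically. The namespace name `demo` is an assumption for illustration.

```yaml
# Minimal sketch: label a namespace so Istio automatically injects the Envoy
# sidecar into every Pod created in it. The namespace name "demo" is an assumption.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled
```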
3 Core Functions
3.1 Traffic Management
With simple rule configuration and traffic routing, you can control traffic and API calls between services. Istio simplifies the configuration of service-level properties such as circuit breakers, timeouts, and retries, and makes it easy to set up important tasks such as A/B testing, canary deployments, and staged rollouts with percentage-based traffic splits (a minimal sketch follows the list below).
With a better understanding of your traffic and out-of-the-box failure recovery features, you can catch problems before they occur, make calls more reliable, and make your network more robust, no matter what conditions you face.
- Load balancing
- Dynamic routing
- Canary (gray) releases
- Fault injection
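As referenced above, here is a minimal sketch of percentage-based traffic splitting using a DestinationRule and a VirtualService. The `reviews` service and its `v1`/`v2` subsets are hypothetical names chosen for illustration.

```yaml
# Minimal sketch: send 90% of traffic to subset v1 and 10% to a canary v2.
# The "reviews" host and version labels are illustrative assumptions.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Shifting the weights gradually toward `v2` implements a staged rollout; setting one weight to 100 completes or rolls back the release.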
3.2 Security
Istio’s security features allow developers to focus on application-level security. Istio provides an underlying secure communication channel and manages authentication, authorization, and encryption of service communications on a large scale. With Istio, service communication is secure by default, allowing you to enforce policies consistently across multiple protocols and runtimes — all with little or no application changes.
While Istio is platform-independent, using it together with Kubernetes (or infrastructure) network policies brings even greater benefits, including the ability to secure Pod-to-Pod and service-to-service communication at both the network and application layers (a minimal mutual TLS sketch follows the list below).
- Authentication
- Authorization
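A minimal sketch of "secure by default" in practice: requiring strict mutual TLS for all workloads in a namespace. The PeerAuthentication resource shown here follows newer Istio releases; versions contemporary with this article expressed the same intent with different resource kinds, so treat this as an illustration rather than the exact API described above. The `demo` namespace is an assumption.

```yaml
# Minimal sketch: require mutual TLS for all workloads in the "demo" namespace.
# PeerAuthentication is the API in newer Istio releases; older releases used
# different authentication resources. Namespace name is an assumption.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: demo
spec:
  mtls:
    mode: STRICT
```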
3.3 Observability
Istio's robust tracing, monitoring, and logging give you deep insight into your service mesh deployment. With Istio's monitoring features you can truly see how service performance affects upstream and downstream functions, while its custom dashboards provide visibility into the performance of all services and show how that performance affects your other processes.
Istio's Mixer component is responsible for policy control and telemetry collection. It provides backend abstraction and mediation that isolate the rest of Istio from the implementation details of individual infrastructure backends, and gives operators fine-grained control over all interactions between the mesh and those backends.
All of these features allow you to set up, monitor, and enforce SLOs on services more effectively. The most important thing, of course, is that you can detect and fix problems quickly and efficiently.
- Call chains (distributed tracing)
- Access logs
- Monitoring
3.4 Platform Support
Istio is platform-independent and designed to run in a variety of environments, including across clouds, on-premises, Kubernetes, Mesos, and more. You can deploy Istio on Kubernetes, or on Nomad with Consul. Istio currently supports:
- Services deployed on Kubernetes
- Services registered with Consul
- Services running on individual virtual machines
3.5 Integration and Customization
The policy enforcement component can be extended and customized to integrate with existing solutions for ACLs, logging, monitoring, quotas, auditing, and more.
4 Architecture
The Istio service mesh is logically divided into a data plane and a control plane.
- The data plane consists of a set of intelligent proxies deployed as sidecars. These proxies mediate and control all network communication between microservices, together with Mixer.
- The control plane is responsible for managing and configuring the proxies to route traffic. In addition, the control plane configures Mixer to enforce policies and collect telemetry data.
4.1 Architecture Diagram
4.2 Components
4.2.1 Envoy
Istio uses an extended version of the Envoy proxy, a high-performance proxy developed in C++ that mediates all inbound and outbound traffic for every service in the service mesh. Istio leverages and enhances many of Envoy's built-in features, such as:
- Dynamic service discovery
- Load balancing
- TLS termination
- HTTP/2 and gRPC proxying
- Circuit breaking (a minimal sketch appears at the end of this subsection)
- Health checks and canary rollouts based on percentage traffic splits
- Fault injection
- Rich metrics
Envoy is deployed as a sidecar in the same Kubernetes Pod as the service it serves. This lets Istio extract a wealth of signals about traffic behavior as attributes, which can in turn be used in Mixer to enforce policy decisions and be sent to monitoring systems to provide information about the behavior of the entire mesh.
The sidecar proxy model also lets Istio's capabilities be added to an existing deployment without rebuilding or rewriting code. Read more about why this approach was chosen in the design goals section below.
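To illustrate the circuit-breaking item in the feature list above, here is a minimal sketch of connection limits and outlier detection on a hypothetical `reviews` service: hosts that keep returning 5xx errors are ejected from the load-balancing pool for a period of time. The host name, thresholds, and field names follow newer Istio releases and are illustrative assumptions.

```yaml
# Minimal sketch: cap concurrent connections and eject hosts that keep
# returning 5xx errors. The "reviews" host and all thresholds are assumptions;
# field names follow newer Istio releases.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 10
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 10s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
```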
4.2.2 Mixer
- Concept
Mixer is a platform-independent component. It enforces access control and usage policies across the service mesh and collects telemetry data from the Envoy proxies and other services. The proxy extracts request-level attributes and sends them to Mixer for evaluation. For more information about attribute extraction and policy evaluation, see the Mixer configuration documentation.
Mixer includes a flexible plugin model that lets it plug into a variety of host environments and infrastructure backends, abstracting the Envoy proxy and Istio-managed services away from these details.
- Architecture diagram
4.2.3 Pilot
- Concept
Pilot provides service discovery for the Envoy sidecars, along with traffic management capabilities for intelligent routing (such as A/B testing and canary deployments) and resiliency (timeouts, retries, circuit breaking, and so on). It translates the high-level routing rules that control traffic behavior into Envoy-specific configuration and propagates it to the sidecars at runtime.
Pilot abstracts the platform-specific service discovery mechanisms and synthesizes them into a standard format that any sidecar conforming to the Envoy data plane API can consume. This loose coupling lets Istio run in multiple environments (for example Kubernetes, Consul, or Nomad) while maintaining the same operator interface for traffic management.
- Architecture diagram
4.2.4 Citadel
- Concept
Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. It can be used to upgrade unencrypted traffic in the service mesh, and it gives operators the ability to enforce policies based on service identity rather than on network controls. Starting with version 0.5, Istio supports role-based access control to decide who may access your services, rather than relying on unstable layer 3 or layer 4 network identifiers (a sketch of the current access-control API appears at the end of this subsection).
- Architecture diagram
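The role-based access control mentioned above was configured with dedicated RBAC resources (ServiceRole/ServiceRoleBinding) in the Istio versions this note describes; in current releases the equivalent is expressed as an AuthorizationPolicy. Below is a minimal, hedged sketch in the current API; the `reviews` workload, `demo` namespace, and caller service account are hypothetical.

```yaml
# Minimal sketch (current Istio API): only the "bookinfo-gateway" service
# account may issue GET requests to the "reviews" workload. All names are
# illustrative assumptions; older releases used ServiceRole/ServiceRoleBinding.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews-viewer
  namespace: demo
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/demo/sa/bookinfo-gateway"]
      to:
        - operation:
            methods: ["GET"]
```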
4.2.5 Galley
Galley validates user-authored Istio API configuration on behalf of the other Istio control plane components. Over time, Galley will take over responsibility as Istio's top-level configuration ingestion, processing, and distribution component. It will be responsible for insulating the rest of the Istio components from the details of obtaining user configuration from the underlying platform (such as Kubernetes).
4.3 Design Objectives
- Maximize transparency: for Istio to be adopted, operators and developers should get value from it at very little cost. To this end, Istio automatically injects itself into all network paths between services. Istio uses sidecar proxies to capture traffic and, wherever possible, automatically programs the network layer to route traffic through those proxies without any changes to the deployed application code. In Kubernetes, the proxies are injected into Pods and traffic is captured by writing iptables rules. By injecting the sidecar proxy into the Pod and modifying its routing rules, Istio can mediate all traffic. The same principle applies to performance: when Istio is applied to a deployment, operators should find that the extra resource overhead of providing these capabilities is small. All components and APIs must be designed with performance and scale in mind.
- Scalability: as operators and developers rely more and more on the capabilities Istio provides, the system will inevitably grow with their needs. While we expect to keep adding new capabilities ourselves, we anticipate that the greatest need will be to extend the policy system, to integrate other policy and control sources, and to propagate mesh behavior signals to other systems for analysis. The policy runtime supports a standard extension mechanism for plugging in other services. It also allows the vocabulary to be extended so that policies can be enforced based on new signals generated by the mesh.
- Portability: the ecosystems using Istio differ along many dimensions. Istio must run with minimal cost in any cloud or on-premises environment. Porting Istio-based services to a new environment should be cheap, and it should be possible to use Istio to deploy a service to several environments at once (for example, for redundancy across multiple clouds).
- Policy consistency: applying policy to API calls between services gives great control over mesh behavior, but it is just as important to apply policy to resources that are not expressed at the API level. For example, it is more useful to apply a quota to the amount of CPU consumed by an ML training task than to the call that started the job. For this reason, the policy system is maintained as a distinct service with its own API, rather than being baked into the proxy/sidecar, which allows services to integrate with it directly as needed.