The productpage-gateway allows traffic for the productpage.bookinfo domain name to enter the mesh
A Gateway has now been created
Traffic for this domain name can now enter the mesh
Create a VirtualService and bind it to the Gateway
There is now a rule that allows traffic from the gateway to reach the hosts
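A minimal sketch of this Gateway/VirtualService pair (resource names and the exact domain are assumptions; the Bookinfo services listen on 9080):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: productpage-gateway
spec:
  selector:
    istio: ingressgateway        # bind to the default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "productpage.bookinfo.com" # only this domain may enter the mesh
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: productpage-vs
spec:
  hosts:
  - "productpage.bookinfo.com"
  gateways:
  - productpage-gateway          # bind the routing rule to the gateway above
  http:
  - route:
    - destination:
        host: productpage
        port:
          number: 9080
```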
If you don’t want reviews to be chosen at random and want to see only reviews v3, you need to add more configuration later
Now the productpage service accesses the reviews service
A VirtualService needs to be set up first, so that all traffic to reviews goes through the following route
Create a destination-rule
Set up the VirtualService so that access to the reviews service goes to v3
The reviews VirtualService is not bound to any gateway
Now every refresh shows the red-star v3 version
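Roughly, the pair of objects looks like this (names are assumptions; the subsets map to the version labels on the reviews pods):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:                 # named subsets keyed by pod labels
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                # no gateways field: mesh-internal traffic only
  http:
  - route:
    - destination:
        host: reviews
        subset: v3         # every refresh now hits the red-star v3
```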
You can do this regardless of timeliness
V3 is now inaccessible
What happens to the call if v2 has multiple instances?
LDS, RDS, CDS, EDS: although there are three replicas, EDS selects only one of them
The traffic has already been allocated in the route; it doesn’t matter how many replicas there are
The request goes to one replica
Now implement the following common pattern: one bookinfo.com domain, through which the reviews service and others are accessed by path
To let the new domain name into the service mesh first, you need the Gateway
This hosts field is actually an array
Both domain names are now allowed into the mesh
To forward by path, you need to define rules; most rules are defined in the VirtualService
Traffic from the Gateway for the domain name bookinfo.com is forwarded according to the following rules
Match the URI and route to productpage if it starts with /productpage; if you want to subdivide further, you can forward traffic to a specific version
bookinfo.com traffic from productpage-gateway is forwarded to the bookinfo services
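A sketch of the path-based rules under the shared domain (host and resource names are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: productpage-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:                        # hosts is an array: both domains enter the mesh
    - "productpage.bookinfo.com"
    - "bookinfo.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "bookinfo.com"
  gateways:
  - productpage-gateway
  http:
  - match:
    - uri:
        prefix: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080            # optional when the service has a single port
  - match:
    - uri:
        prefix: /ratings
    route:
    - destination:
        host: ratings
```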
It can be accessed here
Accessing ratings on port 9080 works
The page styling is lost
A /static route is also needed
Now it works
Accessing /productpage, this route automatically connects you to the corresponding service’s listening port and route on the backend
That’s why the /static route is needed
If the service only has one port, you don’t have to write it
If a service has multiple ports, you can point to one of them
If you want /rate to access the ratings service instead of using the /ratings path, add a rewrite, and the matched path will be rewritten
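The rewrite could be sketched like this (/rate is the hypothetical external path; names are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings-rewrite
spec:
  hosts:
  - "bookinfo.com"
  gateways:
  - productpage-gateway
  http:
  - match:
    - uri:
        prefix: /rate      # external path
    rewrite:
      uri: /ratings        # path the backend actually receives
    route:
    - destination:
        host: ratings
```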
Actually even accessing this
If you want a specific user to access a specific version, you can do a grayscale release; after login, a bunch of key-value data is filled into the headers
You can add a default route; if the match fails, the default route is used
When you log in, a header is carried with the request
Set grayscale publishing aside for a while and look at the destination-rule
Earlier we scaled the reviews v2 version to three replicas; so how do you forward 10% of the traffic to that backend? This policy is added with the destination-rule
The default load-balancing rule is random
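A sketch of the 10% split plus the load-balancing policy in the destination-rule (subset names and percentages are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM     # pick randomly among a subset's replicas
  subsets:
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3
      weight: 90
    - destination:
        host: reviews
        subset: v2       # 10% of traffic goes to the scaled-out v2
      weight: 10
```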
To use HTTPS, it is recommended to terminate the certificate at the ingress layer (generally using a public-cloud LB such as SLB or CLB); you can also configure the certificate on the Istio side
We use Istio for grayscale publishing and also use headers for forwarding
When you log in, you carry this information with you
When you visit the Reviews service
If the current user is luffy, you want him to access only reviews v2, and everyone else to be forwarded to reviews v3
That is, write VirtualService rules: if the headers contain the key end-user with the matching value, the request is matched to the following route
Otherwise, everyone else goes to the route below
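The two routes could be sketched as follows (the end-user header is what the app sets after login; names are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: luffy     # logged-in luffy matches this route
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                 # everyone else falls through to v3
    - destination:
        host: reviews
        subset: v3
```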
Now admin is only routed to v3, which shows red stars
Log in as luffy
Only the v2 version can be accessed
What else is supported here? Even regular expressions
VirtualService URIs can be matched using exact matches, prefixes, or regular expressions
The scheme, HTTP or HTTPS, is also supported
The GET or POST method
This one is for the HTTP/2 protocol
headers was used before and also supports regular expressions
Query parameters are also supported
withoutHeaders matches requests that do not carry that information
Requests not from the user luffy go here; requests from luffy go below
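A single match block combining the conditions listed above might look like this (all concrete values are made up for illustration):

```yaml
  http:
  - match:
    - uri:
        regex: ^/reviews/[0-9]+      # exact / prefix / regex are all allowed
      scheme:
        exact: https                 # HTTP or HTTPS
      method:
        exact: GET                   # GET, POST, ...
      authority:                     # the HTTP/2 :authority pseudo-header
        regex: .*\.bookinfo\.com
      headers:
        cookie:                      # headers also take regexes
          regex: ".*session=.*"
      queryParams:
        debug:
          exact: "true"
      withoutHeaders:                # matches requests lacking this header value
        end-user:
          exact: luffy
    route:
    - destination:
        host: reviews
        subset: v3
```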
Traffic mirroring and retry
Traffic mirroring is a great feature, because it is difficult to simulate a realistic set of online data when doing stress tests. Traffic mirroring largely solves this problem: it continuously mirrors online traffic to our pre-release environment without affecting the online environment, so that the rebuilt service receives a realistic load and the pre-release service shows its real processing capacity
Traffic enters the online service and a copy is sent to the pre-release environment
Start by creating an HTTP v1 service
injection
It makes sense that it has a URL and headers
Prepare the v2 version
injection
Create a Service
The VirtualService must be created because it matches by the service name
Now httpbin is not accessible
Rules need to be added
Add a prefix to the default route
So far only the service has been created, not the rule; a DestinationRule is also needed
It’s accessible now
This is accessing the headers endpoint
Check the logs
V2 definitely has no traffic right now, because it all goes to v1
You can start traffic mirroring now
You can adjust how much of the traffic is mirrored
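The mirroring rule could be sketched like this (field names per recent Istio releases; subset names assumed):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1       # live traffic keeps going to v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2         # a copy is sent to v2; its response is discarded
    mirrorPercentage:
      value: 100.0       # dial this down to mirror only part of the traffic
```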
Istio supports managing multiple clusters; services in different clusters can be managed in a unified way
Now refresh the page
There is traffic now
Retry is also possible. Most retry logic is implemented at the code level, but the need for retry itself is business-independent and a common requirement; Istio helps you implement the retry logic
When the server returns 502, it can be retried.
Now when I visit, it returns 502
The proxy retries for you by itself
Retry on 5xx
Visit once and get a 400
There is just one log record
With 5xx it retries three times
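The retry policy being described might be written as:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
    retries:
      attempts: 3          # up to three retries from the proxy
      perTryTimeout: 2s
      retryOn: 5xx         # a 502 is retried; a 400 is returned as-is
```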
Circuit breaking
Overload protection
Istio implements circuit breaking through the Envoy proxy. Envoy enforces the circuit-breaking policy at the network level, so it does not have to be configured or reprogrammed individually for each application. If implemented at the code layer, you can implement a callback in which custom logic can be written; Istio cannot write such logic because it cannot get into your code. In general, in an Istio mesh you configure a service’s maximum number of connections and requests
The following has httpbin accessing the Java app, and you can add a circuit-breaker configuration in between
Create the service first
Inject it
Start with a single-threaded connection
We now use the client to access the service with the GET method; in fact we set two variables to initiate the request
Sending with 2 concurrent connections is also successful
Limits can be set; they are placed in the DestinationRule
HTTP/2 sends multiple requests over one connection, unlike HTTP/1, which creates a connection per request
Now at 10 threads most requests fail; circuit breaking in Istio behaves more like rate limiting
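A DestinationRule along these lines caps connections and pending requests (thresholds are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1            # one TCP connection at most
      http:
        http1MaxPendingRequests: 1   # one queued request at most
        maxRequestsPerConnection: 1
    outlierDetection:                # eject hosts that keep failing
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
```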
The configuration applied above has actually changed the Envoy config here. Envoy cannot trigger callback logic because it does not enter your code
Fault injection and timeout
Fault injection means your service is actually normal, but you can, for example, inject 50% faults on the Details service, declared explicitly at the Envoy layer, mainly to debug your application’s fault tolerance. A 100% failure simulates the service being down, to verify that while your application is down the rest of the cluster still works
It is implemented through two types: abort and delay
The luffy user now accesses reviews v2
Now the goal is: when reviews calls the ratings layer, inject a 2-second delay
Create a VirtualService for access to ratings across the whole mesh, with a fault of type delay: 100% of the traffic to the ratings service gets an injected delay of 2 seconds
All access to the ratings service gets an injected delay fault
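The delay injection could be sketched as (subset name assumed):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 100.0   # all traffic to ratings is delayed
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
```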
Now there is a delay
The request takes more than two seconds
The injected delay configuration ultimately shows up in Envoy
You can add a 1-second timeout for reviews v2
The timeout takes effect for v2, but not for v3
For the logged-in luffy user it is still 2 seconds; the timeout is actually 1 second, but the program itself adds another 1-second timeout
In this case it is 1 second, and it returns before ratings has responded
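The 1-second timeout for the luffy route might look like this (a fragment of the reviews VirtualService; names assumed):

```yaml
  http:
  - match:
    - headers:
        end-user:
          exact: luffy
    route:
    - destination:
        host: reviews
        subset: v2
    timeout: 1s          # give up on v2 after one second
  - route:
    - destination:
        host: reviews
        subset: v3       # no timeout here: v3 is unaffected
```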
That was the delay version of fault injection
There is also the status-code version: if you delete the delay, it becomes an abort fault
A percentage of 50 means 50% of requests get a 500 error
There is a 50% chance something goes wrong, meaning you don’t have to actually stop the service to simulate it being down
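The abort variant swaps the delay for a status code (fragment, names assumed):

```yaml
  http:
  - fault:
      abort:
        percentage:
          value: 50.0    # half the requests fail without touching the service
        httpStatus: 500
    route:
    - destination:
        host: ratings
        subset: v1
```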
Check whether the iptables rules become invalid
Within the Istio mesh, traffic requests completely bypass the kube-proxy component
But it was accessible before
Several services are running on slave1; clean up the iptables rules that kube-proxy maintains on slave1 as the access channel
Schedule kube-proxy to a nonexistent machine, which effectively removes kube-proxy
Clean up the iptables rules on the host
It is inaccessible at this point
Because DNS is also service traffic, resolved by the host’s iptables rules; now there is no rule to resolve the billService
Direct access via IP works because it goes through Istio
It is not possible to access it from istio-proxy now, because that path uses the host’s iptables
This verifies that in istio-proxy (a container in the mesh, though access to the outside is the same) and in front-tomcat, the traffic is handled by Envoy
kube-proxy was deleted
Observability
The things to look at across Istio are Grafana, Jaeger (distributed tracing), Kiali (a visualization component built specifically for Istio), and Prometheus
Istio ships with these YAML files, which can be applied directly, for example for Grafana
Jaeger
We need to change two addresses
Here we use the Grafana service-discovery address, which is in the istio-system namespace by default
To this
These services are created in the istio-system namespace
These all come from Istio
The VirtualService adds a route for the service, which istio-proxy uses to discover the service’s route
Istio has already implemented this structure for you at injection time
This is monitoring at the pod level
Create an Ingress
Get the domain name
The data sources are all configured for you
This is the Grafana configuration implemented inside Istio
Send the request
100% success rate
The Mesh Dashboard monitors service traffic; all the services are there
Distributed tracing now lives in the sidecar
This has already been configured for you in Envoy
There is a configuration file at startup
You can see the zipkin address that was configured when jaeger was created
Both of these point to the tracing jaeger service; Envoy is configured with zipkin, but you visit tracing
Here is a diagram showing the relationship
Kiali
An observability analysis service
The ones with app in the label
This is done by Istio
The productpage configuration accesses details through v1
You can also see these diagrams by clicking on other services
Traffic is divided into inbound and outbound
Service logs and istio-proxy logs
The inbound traffic
It integrates many things: logging, inbound, outbound
Take a look at some basic information about the services
Here is the data; make sure it is present
Summary
Istio creates rules by service name and must rely on K8s to implement them; Istio is loosely coupled to the microservices
Kubernetes has become the de facto standard for container scheduling, and the container is a natural smallest unit of work for a microservice, bringing out the biggest advantage of the microservice architecture. So I think the future of the microservice architecture will revolve around Kubernetes. Service meshes such as Istio and Conduit are designed natively for Kubernetes; their appearance complements Kubernetes’ shortcomings in service-to-service communication between microservices. Dubbo, Spring Cloud, and the like are mature microservice frameworks, but they are more or less tied to specific languages or application scenarios and only address the microservice Dev level. To solve the Ops problem, they also need to be combined with resource-scheduling frameworks such as Cloud Foundry, Mesos, Docker Swarm, or Kubernetes
The IaaS level is the system level; at this level it is supported by K8s, and Spring Cloud is supported the same way, so it will be K8s plus Istio for a long time
To summarize: the first generation of service-governance capabilities, represented by Spring Cloud, is tightly coupled to business code and cannot be used across programming languages. To provide common service-governance capabilities, Istio injects a sidecar proxy container into each business pod
** Introduces the concept of the service mesh; the injected sidecar monitors business traffic **
** To take over the pod’s traffic, the init container istio-init is introduced during injection to initialize the firewall rules in the pod, intercepting the pod’s outbound and inbound traffic to ports 15001 and 15006 respectively **
** An istio-proxy container is also injected, containing the pilot-agent process and the envoy process. Envoy is pulled up by pilot-agent, which is the init process: it generates the bootstrap configuration file envoy-rev0.json, then starts envoy with it; envoy then syncs configuration via xDS from pilot, i.e. the pilot service inside istiod, which is the gRPC server side **
** VirtualService, DestinationRule, and Gateway are transformed into configuration fragments that Envoy can recognize, and synced to the mesh’s envoys via xDS **
** Outbound traffic from the pod is intercepted by the envoy listener on port 15001, which reads the filter chains, goes through the routing rules, matches a cluster, then an endpoint, and finally reaches the pod IP **
** Traffic entering the pod is intercepted on port 15006 by envoy; virtualInbound processes it to the cluster and goes to the final endpoint **
** Create the VirtualService, DestinationRule, endpoint **
** Drawbacks brought by Envoy: the mesh is more complex, with extra overhead that you need to weigh yourself. Grayscale publishing is simple: create a VirtualService and DestinationRule and you are done **
Installing Rancher for use
Rancher is popular because users don’t need to know much about K8s
Supported versions
Installation is simple, a single command
Look at the stable version, but a specified version is fine too
In v2.5.2, the privileged configuration is required. With this parameter, the root user in the container has real root permission; otherwise root in the container only has ordinary user permission outside the container. A privileged container can see many devices on the host and perform mounts; it even allows you to start a Docker container inside a Docker container
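The one-command install mentioned above might look like this (version tag and port mappings are assumptions; check the Rancher docs for your release):

```shell
# run Rancher v2.5.2 as a single container; --privileged is required from v2.5 on
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.5.2
```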
**K3s is a lighter platform than K8s **
It is designed for IoT and edge computing
Access it now
Set the password
Select the Rancher that manages multiple clusters
Rancher itself now runs on a K3s cluster
You can understand it as a K3s cluster started inside
A K3s cluster is set up by default
You can also add your own cluster to its management page
Copy the command to the cluster and execute it
The dashboard can also show the usage of some resources
Commands can be executed from the command line
You can do the same here
The first concept is the project: a cluster has several projects
A project is a virtual concept layered above namespaces; a project contains namespaces
The Default project contains the default namespace
There are many namespaces that don’t belong to any project
You can choose to add projects
You can move the namespace to this project
You can view the workload
And load balancing
Services: there are selectors in service discovery
Services can also be deployed
Mount the volume
The scaling and upgrade strategy is a gray (rolling) update
Labels: just add labels here
You can exec into the container and curl
See the logs
Events are part of the Deployment’s content
Once the service is created, there is a workload
You can add users for management
No configuration permission
If members are to access projects in the dev environment, they can be allowed to use exec
Now you can see the whole project
The app store is just a collection of charts: an ES cluster and the like
There was also DevOps, which was deprecated in version 2.5
It uses Fleet, a DevOps tool
It keeps reporting errors; it looks like the image wasn’t pulled successfully, DNS failed to resolve
Pull it on the host
The pod stays in ContainerCreating and can’t come up
It is managed by containerd, not by Docker
Failed to pull the image
The host has a pause image that ships with Rancher
Get the image
Put this image into Rancher
The bin is in this directory
List its images
K3s uses the ctr command to manage containerd
List the current images
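The import flow above might be (image name and tar path are assumptions):

```shell
# k3s manages images with ctr against its embedded containerd
k3s ctr images list

# move an image pulled with docker on another host into k3s
docker save rancher/pause:3.1 -o pause.tar
k3s ctr images import pause.tar
```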
CoreDNS is integrated