Introduction
In this experiment, Istio is deployed on Kubernetes (K8s) to demonstrate basic microservice governance features: rate limiting, timeouts, circuit breaking, degradation, traffic splitting, A/B testing, and more. K8s and Istio must be installed before the experiment; see the previous article for instructions. Note: enable Istio's automatic sidecar injection for the default namespace.
The call relationship between services in this experiment is as follows:
This experiment uses the popular front-end/back-end separation pattern:
- The front-end project is implemented with Vue/React
- The front end calls an API implemented in Python
- The Python service invokes services implemented in Node and in Lua
- The Node service invokes services implemented in Go
- service-js ----> service-python
- service-python ----> service-node
- service-python ----> service-lua
- service-node ----> service-go
Language/technology stack used in this experiment:
- Vue/React
- Python 2/3
- Node 8/10
- OpenResty 1.11/1.13
- Go 1.9/1.10
The architecture diagram is as follows:
The configuration of Istio 0.8 changed greatly from v1alpha1 to v1alpha3. The main changes are as follows:
- VirtualService and DestinationRule replace the old RouteRule
- Gateway replaces the original Ingress
Each VirtualService specifies which DestinationRule traffic goes to, and its hosts field declares which addresses the route applies to, much like vhosts configured in Nginx.
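For reference, a minimal v1alpha3 VirtualService/DestinationRule pair looks roughly like the sketch below (service-go and the version labels come from this experiment; the actual files in the repository may differ):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  subsets:
  - name: v1          # subset v1 selects pods labeled version=v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-go
spec:
  hosts:
  - service-go        # the addresses this route applies to, like an Nginx vhost
  http:
  - route:
    - destination:
        host: service-go
        subset: v1    # send traffic to the v1 subset defined above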
Download the lab repository
git clone https://github.com/mgxian/istio-test
cd istio-test && git checkout v2
Deploy the service
kubectl apply -f service/go/v1/go-v1.yml
kubectl apply -f service/go/v2/go-v2.yml
kubectl apply -f service/python/v1/python-v1.yml
kubectl apply -f service/python/v2/python-v2.yml
kubectl apply -f service/js/v1/js-v1.yml
kubectl apply -f service/js/v2/js-v2.yml
kubectl apply -f service/node/v1/node-v1.yml
kubectl apply -f service/node/v2/node-v2.yml
kubectl apply -f service/lua/v1/lua-v1.yml
kubectl apply -f service/lua/v2/lua-v2.yml
Create a Gateway
# Use the Gateway functionality provided by Istio
# Expose the js and python services for access from outside the K8s cluster
istioctl create -f istio/gateway.yml
istioctl create -f istio/gateway-virtualservice.yml
# check
istioctl get gateway
istioctl get virtualservice
# test access
INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
NODE_NAME=$(kubectl get no | grep '<none>' | head -1 | awk '{print $1}')
NODE_IP=$(ping -c 1 $NODE_NAME | grep PING | awk '{print $3}' | tr -d '()')
export GATEWAY_URL=$NODE_IP:$INGRESS_PORT
echo "curl -I http://$GATEWAY_URL/"
echo "curl -I http://$NODE_IP/"
# A 404 response indicates the gateway is set up correctly
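The contents of gateway.yml and gateway-virtualservice.yml are not shown in this article; a minimal sketch of what such a pair might look like in v1alpha3 (the resource names and the istio-test.will host are assumptions based on the commands in this experiment):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-test-gateway   # hypothetical name
spec:
  selector:
    istio: ingressgateway    # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "istio-test.will"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-test           # hypothetical name
spec:
  hosts:
  - "istio-test.will"
  gateways:
  - istio-test-gateway       # attach this route to the gateway above
  http:
  - route:
    - destination:
        host: service-js     # forward external traffic to the js front end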
Configure the test access environment
# configure hosts resolution
# 11.11.11.112 is the IP address of one of the nodes
11.11.11.112 istio-test.will
# test access with curl
curl -I istio-test.will
curl -s istio-test.will | egrep "vue|React"
# If you open the page in a browser now, it may render incorrectly:
# CSS/JS assets fail to load because requests are sent alternately to the v1 and v2 versions of the back-end js service
Traffic management
Traffic is routed to different versions of a service based on information in the request. If you do not get the expected results during the experiment, the routing rules most likely conflict and no priority is set; delete the previously created routing rules, or give the new rule a higher priority.
Redirect all traffic to v1
# Clear the gateway routing rules created earlier
istioctl delete -f istio/gateway-virtualservice.yml
# create routing rules
istioctl create -f istio/gateway-virtualservice-v1.yml
istioctl create -f istio/route-rule-all-v1.yml
# Check routing rules
istioctl get virtualservice
istioctl get destinationrule
# Access in a browser to test
http://istio-test.will/
# You will now see the React app interface
# Clicking the launch button sends an Ajax request to the python service
# Because all traffic is routed to version v1,
# clicking the launch button multiple times returns the same content
# react ----> python2.7.15 ----> go1.9.6
# Clear routing rules
istioctl delete -f istio/route-rule-all-v1.yml
istioctl delete -f istio/gateway-virtualservice-v1.yml
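route-rule-all-v1.yml is not reproduced here; for each service it presumably contains a VirtualService pinned to the v1 subset, roughly:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-go
spec:
  hosts:
  - service-go
  http:
  - route:
    - destination:
        host: service-go
        subset: v1   # 100% of traffic to v1; an analogous rule exists for each service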
Route traffic to different versions based on the request (A/B testing)
# create routing rules
# Return different content for different browsers
istioctl create -f istio/route-rule-js-by-agent.yml
# Check routing rules
istioctl get virtualservice
istioctl get destinationrule
# Access in a browser
# If you use Chrome you will see the React app interface
# If you use Firefox you will see the Vue app interface
# Clicking the launch button multiple times returns different content
# Clear routing rules
istioctl delete -f istio/route-rule-js-by-agent.yml
# Use a different version of the python service depending on the front-end app
istioctl create -f istio/route-rule-python-by-header.yml
# Clear routing rules
istioctl delete -f istio/route-rule-python-by-header.yml
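The rule files are not shown in the article; matching by browser presumably keys on the User-Agent header, along these lines (the regex and the subset-to-framework mapping are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-js
spec:
  hosts:
  - service-js
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Firefox.*"   # Firefox users get one version ...
    route:
    - destination:
        host: service-js
        subset: v1
  - route:
    - destination:
        host: service-js         # ... everyone else (e.g. Chrome) gets the other
        subset: v2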
Redirect traffic to different versions based on the source service
# Create the following route to facilitate test access
# Return different content for different browsers
istioctl create -f istio/route-rule-js-by-agent.yml
# create routing rules
istioctl create -f istio/route-rule-go-by-source.yml
# Clear routing rules
istioctl delete -f istio/route-rule-js-by-agent.yml
istioctl delete -f istio/route-rule-go-by-source.yml
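Routing by source service in v1alpha3 uses sourceLabels in the match clause; route-rule-go-by-source.yml presumably looks something like this (the label values are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-go
spec:
  hosts:
  - service-go
  http:
  - match:
    - sourceLabels:
        app: service-node
        version: v2        # requests coming from node v2 ...
    route:
    - destination:
        host: service-go
        subset: v2         # ... are routed to go v2
  - route:
    - destination:
        host: service-go
        subset: v1         # all other callers get go v1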
Specify weights for traffic splitting
# Specify weights to split the traffic
# Route 25% of traffic to version v1
# Route 75% of traffic to version v2
# Create the following route to facilitate test access
# Return different content for different browsers
istioctl create -f istio/route-rule-js-by-agent.yml
# create routing rules
istioctl create -f istio/route-rule-go-v1-v2.yaml
# Clear routing rules
istioctl delete -f istio/route-rule-js-by-agent.yml
istioctl delete -f istio/route-rule-go-v1-v2.yaml
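The 25/75 split corresponds to weight fields on the route destinations; route-rule-go-v1-v2.yaml presumably contains roughly:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-go
spec:
  hosts:
  - service-go
  http:
  - route:
    - destination:
        host: service-go
        subset: v1
      weight: 25   # 25% of traffic to v1
    - destination:
        host: service-go
        subset: v2
      weight: 75   # 75% of traffic to v2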
Access external services from within the cluster
By default, Istio-enabled services cannot access external URLs. To access an external URL, configure an egress rule (a ServiceEntry in v1alpha3). Egress rules also support routing rules.
# http
istioctl create -f istio/egress-rule-http-bin.yml
# tcp
istioctl create -f istio/egress-rule-tcp-wikipedia.yml
# check
istioctl get serviceentry
# test
# Use exec to enter the pod used as the test source
kubectl apply -f istio/sleep.yaml
kubectl get pods
export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SOURCE_POD -c sleep bash
# HTTP test
curl http://httpbin.org/headers
curl http://httpbin.org/delay/5
# TCP test
curl -o /dev/null -s -w "%{http_code}\n" https://www.wikipedia.org
curl -s https://en.wikipedia.org/wiki/Main_Page | grep articlecount | grep 'Special:Statistics'
# Clean up
istioctl delete -f istio/egress-rule-http-bin.yml
istioctl delete -f istio/egress-rule-tcp-wikipedia.yml
kubectl delete -f istio/sleep.yaml
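In v1alpha3, external access is declared with a ServiceEntry; egress-rule-http-bin.yml presumably resembles the following sketch (the resource name and field values are assumptions modeled on the standard egress task):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext    # hypothetical name
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS      # resolve the external host via DNS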
Fault management
- Call timeout and retry settings
- Fault injection to simulate service failures
Set a timeout and simulate a service timeout fault
# Create the following route to facilitate test access
# Return different content for different browsers
istioctl create -f istio/route-rule-js-by-agent.yml
# Set a timeout for the python service's calls to the node service
istioctl create -f istio/route-rule-node-timeout.yml
# Simulate go service timeout failure
istioctl create -f istio/route-rule-go-delay.yml
# Access in your browser and open the debug panel's Network tab (press F12)
# Click the launch button several times and observe the response times
# You will see about 50% of requests return 500 errors
# Clear routing rules
istioctl delete -f istio/route-rule-js-by-agent.yml
istioctl delete -f istio/route-rule-node-timeout.yml
istioctl delete -f istio/route-rule-go-delay.yml
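Neither rule file is shown in the article; a sketch of what they presumably contain, a timeout on calls to the node service plus an injected delay in the go service (the concrete values are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-node
spec:
  hosts:
  - service-node
  http:
  - route:
    - destination:
        host: service-node
    timeout: 1s          # assumed value: fail calls to node that exceed 1s
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-go
spec:
  hosts:
  - service-go
  http:
  - fault:
      delay:
        percent: 50      # assumed value: delay half of the requests ...
        fixedDelay: 3s   # ... long enough to trip the timeout upstream
    route:
    - destination:
        host: service-go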
Set retries and simulate a service 500 fault
# Create the following route to facilitate test access
# Return different content for different browsers
istioctl create -f istio/route-rule-js-by-agent.yml
# Set retries for the python service's calls to the node service
istioctl create -f istio/route-rule-node-retry.yml
# Simulate a go service 500 fault
istioctl create -f istio/route-rule-go-abort.yml
# Access in your browser and open the debug panel's Network tab (press F12)
# Click the launch button several times and observe the response times
# You will see some requests return 500 errors
# Clear routing rules
istioctl delete -f istio/route-rule-js-by-agent.yml
istioctl delete -f istio/route-rule-node-retry.yml
istioctl delete -f istio/route-rule-go-abort.yml
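Analogously, the retry rule and the abort fault presumably look like this sketch (values are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-node
spec:
  hosts:
  - service-node
  http:
  - route:
    - destination:
        host: service-node
    retries:
      attempts: 3        # assumed value: retry failed calls up to 3 times
      perTryTimeout: 2s
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-go
spec:
  hosts:
  - service-go
  http:
  - fault:
      abort:
        percent: 50      # assumed value: abort half of the requests with a 500
        httpStatus: 500
    route:
    - destination:
        host: service-go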
Combining timeouts with simulated service failures
# Delay all requests by 5 seconds, then fail 10% of them with HTTP 400
...
http:
- fault:
    delay:
      percent: 100       # delay every request by 5s
      fixedDelay: 5s
    abort:
      percent: 10        # then abort 10% of them with a 400
      httpStatus: 400
  route:
  - destination:
      host: service-go   # assumed target; the original snippet used the older v1alpha1 labels syntax
      subset: v1
Circuit breaking
# Set the circuit breaker rule
istioctl create -f istio/route-rule-go-cb.yml
# check rules
istioctl get destinationrule
# Create a fortio pod for testing
kubectl apply -f istio/fortio-deploy.yaml
# Normal access test
FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://service-go/env
# Test the circuit breaker with 2 concurrent connections
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://service-go/env
# Test the circuit breaker with 3 concurrent connections
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 20 -loglevel Warning http://service-go/env
# Increasing the concurrency produces a higher percentage of failed requests
# Check the stats
# upstream_rq_pending_overflow is the number of requests rejected by the circuit breaker
kubectl exec -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep service-go | grep pending
# Clean up
kubectl delete -f istio/fortio-deploy.yaml
istioctl delete -f istio/route-rule-go-cb.yml
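The circuit breaker is configured in a DestinationRule trafficPolicy; given that failures appear at 2-3 concurrent connections, route-rule-go-cb.yml presumably sets very small connection-pool limits, along the lines of:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1            # at most one TCP connection
      http:
        http1MaxPendingRequests: 1   # at most one queued request
        maxRequestsPerConnection: 1
    outlierDetection:
      http:
        consecutiveErrors: 1         # eject a backend after a single error
        interval: 1s
        baseEjectionTime: 3m
        maxEjectionPercent: 100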
Rate limiting
Dynamically limit a service's QPS
- github.com/istio/istio…
# memquota, quota, rule, QuotaSpec, QuotaSpecBinding
# The default limit is 500 QPS
istioctl create -f istio/ratelimit-handler.yaml
# Configure rate-limiting instances and rules
istioctl create -f istio/ratelimit-rule-service-go.yaml
# check
kubectl get memquota -n istio-system
kubectl get quota -n istio-system
kubectl get rule -n istio-system
kubectl get quotaspec -n istio-system
kubectl get quotaspecbinding -n istio-system
# Create a fortio pod for testing
kubectl apply -f istio/fortio-deploy.yaml
# Normal access test
FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://service-node/env
# Test
# Some requests will fail:
# the node service returns code 500
# the go service returns code 429
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -qps 20 -n 100 -loglevel Warning http://service-node/env
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -qps 50 -n 100 -loglevel Warning http://service-go/env
# Clean up
istioctl delete -f istio/ratelimit-handler.yaml
istioctl delete -f istio/ratelimit-rule-service-go.yaml
kubectl delete -f istio/fortio-deploy.yaml
# Conditional rate limiting
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  match: source.namespace != destination.namespace
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
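For comparison, the memquota handler created by ratelimit-handler.yaml presumably caps a default rate and overrides it per service, roughly like this (apart from the 500 QPS default mentioned above, the values are assumptions):
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 500       # default limit: 500 requests ...
    validDuration: 1s    # ... per second
    overrides:
    - dimensions:
        destination: service-go
      maxAmount: 20      # hypothetical tighter limit for service-go
      validDuration: 1s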
Traffic mirroring
Mirroring copies traffic sent to one service to another (mirror) service; it is used to test a new version with live traffic before release.
# Create a fortio pod for testing
kubectl apply -f istio/fortio-deploy.yaml
# Normal access test
FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://service-go/env
# check v1's logs
kubectl logs -f $(kubectl get pods | grep service-go-v1 | awk '{print $1}'| head -n 1) -c service-go
# Check v2's logs (open another terminal)
kubectl logs -f $(kubectl get pods | grep service-go-v2 | awk '{print $1}'| head -n 1) -c service-go
# Create a mirror rule
istioctl create -f istio/route-rule-go-mirror.yml
# Send multiple test requests
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 10 -qps 0 -t 10s -loglevel Warning http://service-go/env
# Clean up
kubectl delete -f istio/fortio-deploy.yaml
istioctl delete -f istio/route-rule-go-mirror.yml
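Mirroring is configured with a mirror field on the route; route-rule-go-mirror.yml presumably serves live traffic from v1 while copying it to v2:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-go
spec:
  hosts:
  - service-go
  http:
  - route:
    - destination:
        host: service-go
        subset: v1    # live traffic is served by v1
    mirror:
      host: service-go
      subset: v2      # each request is also copied to v2; mirrored responses are discarded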
Clean up
# Delete the associated deployments and services
kubectl delete -f service/go/v1/go-v1.yml
kubectl delete -f service/go/v2/go-v2.yml
kubectl delete -f service/python/v1/python-v1.yml
kubectl delete -f service/python/v2/python-v2.yml
kubectl delete -f service/js/v1/js-v1.yml
kubectl delete -f service/js/v2/js-v2.yml
kubectl delete -f service/node/v1/node-v1.yml
kubectl delete -f service/node/v2/node-v2.yml
kubectl delete -f service/lua/v1/lua-v1.yml
kubectl delete -f service/lua/v2/lua-v2.yml
# Clear routing rules
kubectl delete -f istio/gateway.yml
kubectl delete -f istio/gateway-virtualservice.yml
istioctl delete destinationrule $(istioctl get destinationrule | grep 'service-' | awk '{print $1}')
istioctl delete virtualservice $(istioctl get virtualservice | grep 'service-' | awk '{print $1}')
Reference documentation
- istio.doczh.cn
- istio.io/docs
- istio.io/docs/refere…
- istio.io/docs/refere…