Foreword

This article describes how to use the Flomesh [1] service mesh to strengthen the service governance capabilities of Spring Cloud, lower the barrier for Spring Cloud microservice architectures to adopt a service mesh, and keep the stack "independent and controllable".

The document is continuously updated on GitHub [2], and discussion is welcome: github.com/flomesh-io/…


Architecture

Environment setup

First, set up a Kubernetes environment. You can build a cluster with kubeadm, or use minikube, k3s, Kind, and so on. This article uses k3s.

Install k3s [4] using k3d [3]. k3d runs k3s inside Docker containers, so make sure Docker is installed first.

$ k3d cluster create spring-demo -p "81:80@loadbalancer" --k3s-server-arg '--no-deploy=traefik'

Install Flomesh

Clone the code from the repository https://github.com/flomesh-io/flomesh-bookinfo-demo.git and change into the flomesh-bookinfo-demo/kubernetes directory.

All Flomesh components and the YAML files used by the demo are located in this directory.

Install cert-manager

$ kubectl apply -f artifacts/cert-manager-v1.3.1.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

Note: make sure all pods in the cert-manager namespace are running properly:

$ kubectl get pod -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-webhook-56fdcbb848-q7fn5      1/1     Running   0          98s
cert-manager-59f6c76f4b-z5lgf              1/1     Running   0          98s
cert-manager-cainjector-59f76f7fff-flrr7   1/1     Running   0          98s

Install the Pipy Operator

$ kubectl apply -f artifacts/pipy-operator.yaml

After executing the command, you should see output similar to the following:

namespace/flomesh created
customresourcedefinition.apiextensions.k8s.io/proxies.flomesh.io created
customresourcedefinition.apiextensions.k8s.io/proxyprofiles.flomesh.io created
serviceaccount/operator-manager created
role.rbac.authorization.k8s.io/leader-election-role created
clusterrole.rbac.authorization.k8s.io/manager-role created
clusterrole.rbac.authorization.k8s.io/metrics-reader created
clusterrole.rbac.authorization.k8s.io/proxy-role created
rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/proxy-rolebinding created
configmap/manager-config created
service/operator-manager-metrics-service created
service/proxy-injector-svc created
service/webhook-service created
deployment.apps/operator-manager created
deployment.apps/proxy-injector created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating-webhook-configuration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/proxy-injector-webhook-cfg created
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration created

Note: make sure all pods in the flomesh namespace are running properly:

$ kubectl get pod -n flomesh
NAME                               READY   STATUS    RESTARTS   AGE
proxy-injector-5bccc96595-spl6h    1/1     Running   0          39s
operator-manager-c78bf8d5f-wqgb4   1/1     Running   0          39s

Install the Ingress controller: ingress-pipy

$ kubectl apply -f ingress/ingress-pipy.yaml
namespace/ingress-pipy created
customresourcedefinition.apiextensions.k8s.io/ingressparameters.flomesh.io created
serviceaccount/ingress-pipy created
role.rbac.authorization.k8s.io/ingress-pipy-leader-election-role created
clusterrole.rbac.authorization.k8s.io/ingress-pipy-role created
rolebinding.rbac.authorization.k8s.io/ingress-pipy-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/ingress-pipy-rolebinding created
configmap/ingress-config created
service/ingress-pipy-cfg created
service/ingress-pipy-controller created
service/ingress-pipy-defaultbackend created
service/webhook-service created
deployment.apps/ingress-pipy-cfg created
deployment.apps/ingress-pipy-controller created
deployment.apps/ingress-pipy-manager created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating-webhook-configuration configured
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration configured

Check the pod status in the ingress-pipy namespace:

$ kubectl get pod -n ingress-pipy
NAME                                       READY   STATUS    RESTARTS   AGE
svclb-ingress-pipy-controller-8pk8k        1/1     Running   0          71s
ingress-pipy-cfg-6bc649cfc7-8njk7          1/1     Running   0          71s
ingress-pipy-controller-76cd866d78-m7gfp   1/1     Running   0          71s
ingress-pipy-manager-5f568ff988-tw5w6      0/1     Running   0          70s

At this point, you have successfully installed all components of Flomesh, including the Operator and ingress controllers.

Middleware

The demo uses middleware to store logs and metrics, but to keep things simple we mock it with Pipy: the data is simply printed to the console.

In addition, the configuration related to service governance is provided by a mocked Pipy config service.

Logs & metrics

$ cat > middleware.js <<EOF
pipy()

.listen(8123)
  .link('mock')

.listen(9001)
  .link('mock')

.pipeline('mock')
  .decodeHttpRequest()
  .replaceMessage(
    req => (
      console.log(req.body.toString()),
      new Message('OK')
    )
  )
  .encodeHttpResponse()
EOF

$ docker run --rm --name middleware --entrypoint "pipy" \
    -v ${PWD}:/script -p 8123:8123 -p 9001:9001 \
    flomesh/pipy-pjs:0.4.0-118 /script/middleware.js

Pipy config

$ cat > mock-config.json <<EOF
{
  "ingress": {},
  "inbound": {
    "rateLimit": -1,
    "dataLimit": -1,
    "circuitBreak": false,
    "blacklist": []
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1
  }
}
EOF

$ cat > mock.js <<EOF
pipy({
  _CONFIG_FILENAME: 'mock-config.json',

  _serveFile: (req, filename, type) => (
    new Message(
      {
        bodiless: req.head.method === 'HEAD',
        headers: {
          'etag': os.stat(filename)?.mtime | 0,
          'content-type': type,
        },
      },
      req.head.method === 'HEAD' ? null : os.readFile(filename),
    )
  ),

  _router: new algo.URLRouter({
    '/config': req => _serveFile(req, _CONFIG_FILENAME, 'application/json'),
    '/*': () => new Message({ status: 404 }, 'Not found'),
  }),
})

// Config
.listen(9000)
  .decodeHttpRequest()
  .replaceMessage(
    req => (
      _router.find(req.head.path)(req)
    )
  )
  .encodeHttpResponse()
EOF

$ docker run --rm --name mock --entrypoint "pipy" \
    -v ${PWD}:/script -p 9000:9000 \
    flomesh/pipy-pjs:0.4.0-118 /script/mock.js
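As a quick sanity check (assuming the mock container is running locally with port 9000 mapped as above), fetching the /config route defined in mock.js should return the contents of mock-config.json:

$ curl http://127.0.0.1:9000/config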

Run the Demo

The demo runs in a separate namespace, flomesh-spring. Run kubectl apply -f base/namespace.yaml to create the namespace. If you describe the namespace, you will see that it carries the flomesh.io/inject=true label.

This label tells the operator's admission webhook to intercept pod creation in any namespace that carries it.
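For reference, a minimal sketch of the step mentioned above, creating the namespace and then checking its labels:

$ kubectl apply -f base/namespace.yaml
$ kubectl get ns flomesh-spring --show-labels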

$ kubectl describe ns flomesh-spring
Name:         flomesh-spring
Labels:       app.kubernetes.io/name=spring-mesh
              app.kubernetes.io/version=1.19.0
              flomesh.io/inject=true
              kubernetes.io/metadata.name=flomesh-spring
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Let's start with the ProxyProfile CRD provided by Flomesh. In this demo it defines the sidecar container fragment and the scripts to use; see sidecar/proxy-profile.yaml for details. Run the following command to create the resource.

$ kubectl apply -f sidecar/proxy-profile.yaml

Check whether the resource was created successfully:

$ kubectl get pf -o wide
NAME                         NAMESPACE        DISABLED   SELECTOR                                     CONFIG                                                                AGE
proxy-profile-002-bookinfo   flomesh-spring   false      {"matchLabels":{"sys":"bookinfo-samples"}}   {"flomesh-spring":"proxy-profile-002-bookinfo-fsmcm-b67a9e39-0418"}   27s

Because the services have startup dependencies, you need to deploy them one by one in strict order. Before starting, edit the Endpoints sections of base/clickhouse.yaml, base/metrics.yaml, and base/config.yaml so that they point to your host machine's IP address (not 127.0.0.1), as sketched below.
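A rough sketch of what the edited Endpoints section might look like (the exact layout of base/clickhouse.yaml may differ; 192.168.1.101 stands in for your host IP):

apiVersion: v1
kind: Endpoints
metadata:
  name: samples-clickhouse
subsets:
  - addresses:
      - ip: 192.168.1.101   # your host IP, not 127.0.0.1
    ports:
      - port: 8123          # the ClickHouse HTTP port used by the demo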

After the modification, run the following commands:

$ kubectl apply -f base/clickhouse.yaml
$ kubectl apply -f base/metrics.yaml
$ kubectl apply -f base/config.yaml
$ kubectl get endpoints samples-clickhouse samples-metrics samples-config
NAME                 ENDPOINTS            AGE
samples-clickhouse   192.168.1.101:8123   3m
samples-metrics      192.168.1.101:9001   3s
samples-config       192.168.1.101:9000   3m

Deploy the registry

$ kubectl apply -f base/discovery-server.yaml

Check the status of the registry pod and make sure all three containers are running properly.

$ kubectl get pod
NAME                                           READY   STATUS        RESTARTS   AGE
samples-discovery-server-v1-85798c47d4-dr72k   3/3     Running       0          96s

Deploy the configuration center

$ kubectl apply -f base/config-service.yaml

Deploy the API gateway and bookinfo-related services

$ kubectl apply -f base/bookinfo-v1.yaml
$ kubectl apply -f base/bookinfo-v2.yaml
$ kubectl apply -f base/productpage-v1.yaml
$ kubectl apply -f base/productpage-v2.yaml

Checking the pod status, you can see that sidecar containers have been injected into the pods.

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
samples-discovery-server-v1-85798c47d4-p6zpb       3/3     Running   0          19h
samples-config-service-v1-84888bfb5b-8bcw9         1/1     Running   0          19h
samples-api-gateway-v1-75bb6456d6-nt2nl            3/3     Running   0          6h43m
samples-bookinfo-ratings-v1-6d557dd894-cbrv7       3/3     Running   0          6h43m
samples-bookinfo-details-v1-756bb89448-dxk66       3/3     Running   0          6h43m
samples-bookinfo-reviews-v1-7778cdb45b-pbknp       3/3     Running   0          6h43m
samples-api-gateway-v2-7ddb5d7fd9-8jgms            3/3     Running   0          6h37m
samples-bookinfo-ratings-v2-845d95fb7-txcxs        3/3     Running   0          6h37m
samples-bookinfo-reviews-v2-79b4c67b77-ddkm2       3/3     Running   0          6h37m
samples-bookinfo-details-v2-7dfb4d7c-jfq4j         3/3     Running   0          6h37m
samples-bookinfo-productpage-v1-854675b56-8n2xd    1/1     Running   0          7m1s
samples-bookinfo-productpage-v2-669bd8d9c7-8wxsf   1/1     Running   0          6m57s

Add an Ingress rule

Run the following command to add an Ingress rule.

$ kubectl apply -f ingress/ingress.yaml

Preparation for the test

The demo services are accessed through the ingress controller, so first obtain the address of its load balancer.

# Obtain the controller IP
# Here, we append the port
ingressAddr=`kubectl get svc ingress-pipy-controller -n ingress-pipy -o jsonpath='{.spec.clusterIP}'`:81

Here we use a k3s cluster created by k3d, with the -p "81:80@loadbalancer" option added to the create command, so the ingress controller can be reached at 127.0.0.1:81.
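With this k3d setup you can therefore set the variable directly:

ingressAddr=127.0.0.1:81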

The Ingress specifies a host for each rule, so every request needs to supply the corresponding hostname via the HTTP Host header.

Or add a record to /etc/hosts:

$ kubectl get ing ingress-pipy-bookinfo -n flomesh-spring -o jsonpath="{range .spec.rules[*]}{.host}{'\n'}{end}"
api-v1.flomesh.cn
api-v2.flomesh.cn
fe-v1.flomesh.cn
fe-v2.flomesh.cn

# Add a record to /etc/hosts
127.0.0.1 api-v1.flomesh.cn api-v2.flomesh.cn fe-v1.flomesh.cn fe-v2.flomesh.cn

Validation

$ curl http://127.0.0.1:81/actuator/health -H 'Host: api-v1.flomesh.cn'
{"status":"UP","groups":["liveness","readiness"]}

# OR
$ curl http://api-v1.flomesh.cn:81/actuator/health
{"status":"UP","groups":["liveness","readiness"]}

Test

Grayscale release

Using the v1 version of the services, we add a rating and a review for a book.

# rate a book
$ curl -X POST http://$ingressAddr/bookinfo-ratings/ratings \
  -H "Content-Type: application/json" \
  -H "Host: api-v1.flomesh.cn" \
  -d '{"reviewerId":"9bc908be-0717-4eab-bb51-ea14f669ef20","productId":"2099a055-1e21-46ef-825e-9e0de93554ea","rating":3}'

$ curl http://$ingressAddr/bookinfo-ratings/ratings/2099a055-1e21-46ef-825e-9e0de93554ea -H "Host: api-v1.flomesh.cn"

# review a book
$ curl -X POST http://$ingressAddr/bookinfo-reviews/reviews \
  -H "Content-Type: application/json" \
  -H "Host: api-v1.flomesh.cn" \
  -d '{"reviewerId":"9bc908be-0717-4eab-bb51-ea14f669ef20","productId":"2099a055-1e21-46ef-825e-9e0de93554ea","review":"This was OK.","rating":3}'

$ curl http://$ingressAddr/bookinfo-reviews/reviews/2099a055-1e21-46ef-825e-9e0de93554ea -H "Host: api-v1.flomesh.cn"

After executing the commands above, open the front-end services in a browser (http://fe-v1.flomesh.cn:81/productpage?u=normal and http://fe-v2.flomesh.cn:81/productpage?u=normal). Only the v1 front end shows the rating and review you just added.

(Screenshots: the v1 and v2 front-end product pages.)

Circuit breaking

We force the service's circuit breaker open by changing inbound.circuitBreak in mock-config.json to true:

{
  "ingress": {},
  "inbound": {
    "rateLimit": -1,
    "dataLimit": -1,
    "circuitBreak": true, //here
    "blacklist": []
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1

  }
}
$ curl -i http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
HTTP/1.1 503 Service Unavailable
Connection: keep-alive
Content-Length: 27

Service Circuit Break Open

Rate limiting

Modify the Pipy config to set inbound.rateLimit to 1.

{
  "ingress": {},
  "inbound": {
    "rateLimit": 1, //here
    "dataLimit": -1,
    "circuitBreak": false,
    "blacklist": []
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1
  }
}

We use wrk to generate load: 20 threads, 20 connections, for 30 seconds:

$ wrk -t20 -c20 -d30s --latency http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
Running 30s test @ http://127.0.0.1:81/actuator/health
  20 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   951.51ms  206.23ms   1.04s    93.55%
    Req/Sec     0.61      1.71     10.00    93.55%
  Latency Distribution
     50%    1.00s
     75%    1.01s
     90%    1.02s
     99%    1.03s
  620 requests in 30.10s, 141.07KB read
Requests/sec:     20.60
Transfer/sec:      4.69KB

The result is 20.60 requests/sec in total, i.e. about 1 request/sec per connection, matching the configured limit.

Blacklist and whitelist

Modify the mock-config.json served by the Pipy config mock as follows; the blacklisted IP address is the pod IP of the ingress controller.

$ kubectl get po -n ingress-pipy ingress-pipy-controller-76cd866d78-4cqqn -o jsonpath='{.status.podIP}'
10.42.0.78
{
  "ingress": {},
  "inbound": {
    "rateLimit": -1,
    "dataLimit": -1,
    "circuitBreak": false,
    "blacklist": ["10.42.0.78"] //here
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1
  }
}

Now access the same gateway endpoint again:

$ curl -i http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: keep-alive
Content-Length: 20

Service Unavailable

Reference links

[1] Flomesh: flomesh.cn/
[2] GitHub: github.com/flomesh-io/…
[3] k3d: k3d.io/
[4] k3s: github.com/k3s-io/k3s
