Author: Venil Noronha | Translator: Wang Quangen | Reviewer: Yang Chuansheng Wang | About 2,500 words, a 5-minute read

Envoy is a lightweight service proxy designed for cloud-native applications and is one of the few proxies that support gRPC. gRPC is a high-performance RPC (Remote Procedure Call) framework based on HTTP/2 that supports multiple languages.

In this article, we will build a C++ version of the Greeter application using gRPC and Protocol Buffers, and another gRPC application in Go that implements Envoy’s RateLimitService interface. Finally, we will deploy Envoy as a proxy for the Greeter application and use our rate-limiting service to implement a backpressure mechanism.

gRPC Greeter application

We first install gRPC and Protobuf and then build the C++ version of Greeter. You can also build this application in any of the other languages listed in the documentation; however, I will use C++ in this article.

Here’s how Greeter works in action: the client opens a channel to the server on port 50051, calls the SayHello RPC, and prints the reply, as sketched below.
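The client side of that flow is essentially the main() of the stock gRPC C++ helloworld example. This is only a sketch for orientation; the exact greeter_client.cc in your checkout may differ slightly, and GreeterClient is assumed to be the example’s thin wrapper around the generated Greeter::Stub.

// Fragment of greeter_client.cc (stock gRPC C++ helloworld example, assumed).
int main(int argc, char** argv) {
  // Open an insecure channel to the Greeter server's default port.
  GreeterClient greeter(grpc::CreateChannel(
      "localhost:50051", grpc::InsecureChannelCredentials()));
  std::string user("world");
  std::string reply = greeter.SayHello(user);  // unary SayHello RPC
  std::cout << "Greeter received: " << reply << std::endl;
  return 0;
}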

When you run the Greeter server and client, the terminal output looks like this:

$ ./greeter_server
Server listening on 0.0.0.0:50051

$ ./greeter_client
Greeter received: Hello world

Upgrade gRPC Greeter

Now we can enhance the Greeter application by replacing the static "Hello " prefix in the reply with a request-count prefix. Simply update the greeter_server.cc file as shown below.

 // Logic and data behind the server's behavior.
 class GreeterServiceImpl final : public Greeter::Service {
+  int counter = 0;
   Status SayHello(ServerContext* context, const HelloRequest* request,
                   HelloReply* reply) override {
-    std::string prefix("Hello ");
+    std::string prefix(std::to_string(++counter) + " ");
     reply->set_message(prefix + request->name());
     return Status::OK;
   }

Then rebuild and run greeter_server; you should see the following output when you send requests through greeter_client.

$ for i in {1..3}; do ./greeter_client; sleep 1; done
Greeter received: 1 world
Greeter received: 2 world
Greeter received: 3 world

Simple rate limiting service

Next, we implement a simple rate-limiting service in Go by implementing Envoy’s RateLimitService proto interface. To do this, we create a Go project called rate-limit-service and pull in Envoy’s go-control-plane and other related dependencies. The go-control-plane project provides Go bindings for the Envoy protos. We then create the cmd/server/main.go and cmd/client/main.go files to implement the rate-limiting service.

$ mkdir -p $GOPATH/src/github.com/venilnoronha/rate-limit-service/
$ cd $GOPATH/src/github.com/venilnoronha/rate-limit-service/
$ mkdir -p cmd/server/ && touch cmd/server/main.go
$ mkdir cmd/client/ && touch cmd/client/main.go

After pulling in all the dependencies, you should have a project structure like the one shown below. Note that I have only highlighted the packages relevant to this experiment.

rate-limit-service
├── cmd
│   ├── client
│   │   └── main.go
│   └── server
│       └── main.go
└── vendor
    ├── github.com
    │   ├── envoyproxy
    │   │   ├── data-plane-api
    │   │   └── go-control-plane
    │   ├── googleapis
    │   ├── lyft
    │   └── protoc-gen-validate
    └── google.golang.org

Rate limiting server

Now let’s create a simple gRPC rate-limiting service that limits every second request, i.e., it alternately allows and rejects requests.

package main

import (
    "log"
    "net"

    "golang.org/x/net/context"
    "google.golang.org/grpc"
    "google.golang.org/grpc/reflection"

    rls "github.com/envoyproxy/go-control-plane/envoy/service/ratelimit/v2"
)

// server is used to implement rls.RateLimitService
type server struct {
    // limit specifies if the next request is to be rate limited
    limit bool
}

func (s *server) ShouldRateLimit(ctx context.Context,
    request *rls.RateLimitRequest) (*rls.RateLimitResponse, error) {
    log.Printf("request: %v\n", request)

    // logic to rate limit every second request
    var overallCode rls.RateLimitResponse_Code
    if s.limit {
        overallCode = rls.RateLimitResponse_OVER_LIMIT
        s.limit = false
    } else {
        overallCode = rls.RateLimitResponse_OK
        s.limit = true
    }

    response := &rls.RateLimitResponse{OverallCode: overallCode}
    log.Printf("response: %v\n", response)
    return response, nil
}

func main() {
    // create a TCP listener on port 50052
    lis, err := net.Listen("tcp", ":50052")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    log.Printf("listening on %s", lis.Addr())

    // create a gRPC server and register the RateLimitService server
    s := grpc.NewServer()
    rls.RegisterRateLimitServiceServer(s, &server{limit: false})
    reflection.Register(s)
    if err := s.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}

After the RateLimitService is started, the terminal output is as follows.

$ go run cmd/server/main.go
2018/10/27 00:35:28 listening on [::]:50052

Rate limiting client

We also create a RateLimitService client to verify the server’s behavior.

package main

import (
    "log"
    "time"

    "golang.org/x/net/context"
    "google.golang.org/grpc"

    rls "github.com/envoyproxy/go-control-plane/envoy/service/ratelimit/v2"
)

func main() {
    // Set up a connection to the server
    conn, err := grpc.Dial("localhost:50052", grpc.WithInsecure())
    if err != nil {
        log.Fatalf("could not connect: %v", err)
    }
    defer conn.Close()
    c := rls.NewRateLimitServiceClient(conn)

    // Send a request to the server
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    r, err := c.ShouldRateLimit(ctx, &rls.RateLimitRequest{Domain: "envoy"})
    if err != nil {
        log.Fatalf("could not call service: %v", err)
    }
    log.Printf("response: %v", r)
}

Now let’s test the server/client interaction by starting the client.

$ for i in {1..4}; do go run cmd/client/main.go; sleep 1; done
2018/10/27 17:32:23 response: overall_code:OK
2018/10/27 17:32:25 response: overall_code:OVER_LIMIT
2018/10/27 17:32:26 response: overall_code:OK
2018/10/27 17:32:28 response: overall_code:OVER_LIMIT

The corresponding server-side logs:

2018/10/27 17:32:23 request: domain:"envoy"
2018/10/27 17:32:23 response: overall_code:OK
2018/10/27 17:32:25 request: domain:"envoy"
2018/10/27 17:32:25 response: overall_code:OVER_LIMIT
2018/10/27 17:32:26 request: domain:"envoy"
2018/10/27 17:32:26 response: overall_code:OK
2018/10/27 17:32:28 request: domain:"envoy"
2018/10/27 17:32:28 response: overall_code:OVER_LIMIT

Envoy proxy

Now we introduce an Envoy proxy that routes requests from the Greeter client to the Greeter server while consulting our rate-limiting service on every request. In the final deployment, the Greeter client sends requests to Envoy on port 9211; Envoy asks the RateLimitService (port 50052) whether each request is allowed and forwards allowed requests to the Greeter server on port 50051.

Proxy configuration

We use the following Envoy configuration to register the Greeter and RateLimitService services and to enable rate limit checks. Note that since we are deploying on Docker for Mac, locally deployed services are referenced via the docker.for.mac.localhost address.

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 9211 # expose proxy on port 9211
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          access_log: # configure logging
            name: envoy.file_access_log
            config:
              path: /dev/stdout
          route_config:
            name: greeter_route # configure the greeter service routes
            virtual_hosts:
            - name: service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                  grpc: {}
                route:
                  cluster: greeter_service
              rate_limits: # enable rate limit checks for the greeter service
                actions:
                - destination_cluster: {}
          http_filters:
          - name: envoy.rate_limit # enable the Rate Limit filter
            config:
              domain: envoy
          - name: envoy.router # enable the Router filter
            config: {}
  clusters:
  - name: greeter_service # register the Greeter server
    connect_timeout: 1s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {} # enable H2 protocol
    hosts:
    - socket_address:
        address: docker.for.mac.localhost
        port_value: 50051
  - name: rate_limit_service # register the RateLimitService server
    connect_timeout: 1s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {} # enable H2 protocol
    hosts:
    - socket_address:
        address: docker.for.mac.localhost
        port_value: 50052
rate_limit_service: # define the global rate limit service
  use_data_plane_proto: true
  grpc_service:
    envoy_grpc:
      cluster_name: rate_limit_service

Deploying the Envoy proxy

To deploy the Envoy proxy, we copy the above configuration into an envoy.yaml file and use the following Dockerfile to build a Docker image.

FROM envoyproxy/envoy:latest
COPY envoy.yaml /etc/envoy/envoy.yaml

Run the following command to build the image:

$ docker build -t envoy:grpc .
Sending build context to Docker daemon  74.75kB
Step 1/2 : FROM envoyproxy/envoy:latest
 ---> 51fc619e4dc5
Step 2/2 : COPY envoy.yaml /etc/envoy/envoy.yaml
 ---> c766ba3d7d09
Successfully built c766ba3d7d09
Successfully tagged envoy:grpc

Then run the proxy:

$ docker run -p 9211:9211 envoy:grpc
...
[2018-10-28 02:59:20.469][000008][info][main] [source...
[2018-10-28 02:59:20.553][000008][info][upstream] [source/common/upstream/cluster_manager_impl.cc:135] cm init: all clusters initialized
[2018-10-28 02:59:20.554][000008][info][main] [source/server/server.cc:425] all clusters initialized. initializing init manager
[2018-10-28 02:59:20.554][000008][info][config] [source/server/listener_manager_impl.cc:908] all dependencies initialized. starting workers

Update the Greeter client

Since Envoy now routes the Greeter client’s requests, we change the server port in the client code from 50051 to 9211 and rebuild greeter_client.

   GreeterClient greeter(grpc::CreateChannel(
-      "localhost:50051", grpc::InsecureChannelCredentials()));
+      "localhost:9211", grpc::InsecureChannelCredentials()));
   std::string user("world");
   std::string reply = greeter.SayHello(user);

The final test

Now that we have the Greeter server, the rate-limiting service, and the Envoy proxy in place, it’s time to validate the entire deployment. To do this, we use the updated Greeter client to send several requests, as shown below.

$ for i in {1..10}; do ./greeter_client; sleep 1; done
Greeter received: 4 world
14:
Greeter received: RPC failed
Greeter received: 5 world
14:
Greeter received: RPC failed
Greeter received: 6 world
14:
Greeter received: RPC failed
Greeter received: 7 world
14:
Greeter received: RPC failed
Greeter received: 8 world
14:
Greeter received: RPC failed

As you can see, 5 of the 10 requests succeeded, alternating with failed RPCs that carry gRPC status code 14 (UNAVAILABLE); Envoy answers rate-limited requests with HTTP 429, which gRPC clients surface as UNAVAILABLE. This shows that the rate-limiting service limits every other request as designed and that Envoy correctly rejects the requests flagged as over the limit.
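The "14:" lines and the "RPC failed" strings in the output come from the client-side error branch. Assuming the stock helloworld GreeterClient (a sketch, not necessarily the article’s exact file), its SayHello wrapper prints status.error_code() and returns a fixed string when Envoy rejects the call:

// Fragment of GreeterClient::SayHello from the stock helloworld example
// (assumed); this is where the "14:" and "RPC failed" output comes from.
std::string SayHello(const std::string& user) {
  HelloRequest request;
  request.set_name(user);
  HelloReply reply;
  ClientContext context;
  Status status = stub_->SayHello(&context, request, &reply);
  if (status.ok()) {
    return reply.message();  // e.g. "5 world"
  }
  // Rate-limited calls surface as gRPC status 14 (UNAVAILABLE).
  std::cout << status.error_code() << ": " << status.error_message()
            << std::endl;
  return "RPC failed";
}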

Conclusion

This article gave you a high-level view of how to use Envoy as an application proxy and should help you understand how Envoy’s rate limit filter works with the gRPC protocol.