This is the final article in the Golang application performance analysis series, focusing on how to use the pprof tool to analyze the performance of gRPC services. I have written about the gRPC framework before; if you are not familiar with it and do not know what it is used for, you can take a look through the gRPC Getting Started series of articles.
How to analyze gRPC performance with Pprof
A typical launcher for a gRPC service (gRPC uses HTTP/2 as its underlying transport) might look like this:
func main() {
	lis, err := net.Listen("tcp", ":10000")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	grpcServer := grpc.NewServer()
	pb.RegisterRouteGuideServer(grpcServer, &routeGuideServer{})
	grpcServer.Serve(lis)
}
gRPC is an RPC framework, not a Web framework; it does not support browser access via URLs, so there is no way to register the pprof data-collection routes the way we did for the Echo and Gin frameworks in the previous article. But look at the problem another way: pprof performs CPU profiling by sampling the application's CPU usage (including registers) at a fixed frequency, to determine where the application spends time while it is actively consuming CPU cycles. So when the gRPC service starts, we can asynchronously start an HTTP server listening on another port, and obtain the gRPC service's profiling data indirectly through that HTTP server.
go func() {
http.ListenAndServe(":10001", nil)
}()
Since this uses the default ServeMux (HTTP request multiplexer), anonymously importing the net/http/pprof package is enough: its init function registers the pprof-related routes on the default multiplexer.
It is also recommended that at the beginning of the launcher you call runtime.SetBlockProfileRate(1), which instructs the runtime to collect data for every goroutine blocking event lasting one nanosecond or more.
func main() {
	runtime.SetBlockProfileRate(1)
	go func() {
		http.ListenAndServe(":10001", nil)
	}()
	lis, err := net.Listen("tcp", ":10000")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	grpcServer := grpc.NewServer()
	pb.RegisterRouteGuideServer(grpcServer, &routeGuideServer{})
	grpcServer.Serve(lis)
}
After the service starts, you can collect CPU usage data from http://{server_ip}:10001/debug/pprof/profile. For detailed instructions on using the pprof tool itself, refer to the first article in this series.
Below is part of the function call graph I generated from the profiling data. Although the profiling data is obtained through the HTTP service on a separate port, the CPU usage of the gRPC server listening on the other port is still collected.
The limitations of pprof
Pprof is useful, but pinpointing performance issues with it can still be a hassle. There are two main pain points I ran into while using it.
First, because every function call is displayed in the call graph, many of the time-consuming functions come straight from Go's low-level runtime package, and it is still difficult to pick the slow business functions out of the pile. (The -focus and -ignore regex flags of go tool pprof can help filter the graph.)
In addition, many services are distributed nowadays. If service A calls service B and a method in service B is slow, the profiling data of A only shows that grpc.Invoke (the function the client uses to issue a gRPC method call) takes a long time; pprof cannot do full-link performance collection across services. If anyone knows a good solution, please share it in the comments.
That’s all for this article; please feel free to share more in the comments. Next time, we will publish a long article on how to use the Kubernetes StatefulSet controller to orchestrate stateful applications, along with a detailed analysis of Headless Services.
Further reading
Golang program Performance Analysis (I) Pprof and Go-Torch
Using Pprof in Echo and Gin frameworks