Cloud Profiler
Performance profiling is a form of dynamic code analysis: it captures the characteristics of an application at runtime and then uses that information to make the application faster and more efficient.
Cloud Profiler is a CPU and memory profiling tool. It is a statistical, sampling-based profiler with very low overhead, which makes it suitable for production environments.
Take a Go application as an example. Start the Profiler agent in the application with the following code; the agent runs in a background goroutine.
import "cloud.google.com/go/profiler"
func main(a) {
profiler.Start(profiler.Config{})
}
The Profiler agent captures profiling data from the application and uses the Profiler API to send it to the Profiler backend (Cloud Profiler), which builds the profiles. On a Compute Engine or App Engine instance with the Profiler agent enabled, Cloud Profiler collects data about once per minute (each collection typically lasts 10 seconds) to create a profile.
When an application version runs on multiple instances (and therefore has multiple agents), the backend selects, on average, one agent per minute and instructs it to capture a profile.
The Profiler backend associates each profile with the corresponding GCP project. We can then view and analyze the profiles in the Profiler interface of the GCP Console.
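In practice, the Config usually identifies the service whose profiles should be grouped together and, when the application runs outside GCP, the project the profiles should be sent to. Below is a minimal sketch using the Service, ServiceVersion and ProjectID fields of the cloud.google.com/go/profiler Config; the service name and project ID are hypothetical placeholders:

package main

import (
	"log"

	"cloud.google.com/go/profiler"
)

func main() {
	cfg := profiler.Config{
		Service:        "shopping-cart",  // hypothetical service name; profiles are grouped under it
		ServiceVersion: "1.0.0",          // optional; lets the UI compare profiles across versions
		ProjectID:      "my-gcp-project", // hypothetical project ID; usually inferred when running on GCP
	}
	if err := profiler.Start(cfg); err != nil {
		log.Fatalf("profiler.Start: %v", err)
	}
	// ... the rest of the application runs as usual.
}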
For performance analysis, Cloud Profiler can collect the following metrics:
Time
- CPU time: the time the CPU spends executing a block of code; time spent waiting for the CPU is not included.
- Wall time (actual elapsed time): the time from entering a function until leaving it, including any time spent waiting.
Time measurements help identify code that occupies the CPU for long periods; long-running, CPU-intensive blocks are candidates for optimization.
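To make the difference concrete, here is an illustrative sketch (not from the original article): under a profiler, burnCPU would dominate the CPU-time profile, while waitForIO accumulates wall time but almost no CPU time because it spends its life sleeping:

package main

import (
	"crypto/sha256"
	"time"
)

// burnCPU keeps the CPU busy; its samples show up as CPU time.
func burnCPU() {
	sum := []byte("seed")
	for i := 0; i < 1_000_000; i++ {
		h := sha256.Sum256(sum)
		sum = h[:]
	}
}

// waitForIO mostly sleeps; it accumulates wall time but almost no CPU time.
func waitForIO() {
	time.Sleep(500 * time.Millisecond)
}

func main() {
	for i := 0; i < 10; i++ {
		burnCPU()
		waitForIO()
	}
}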
Memory
- Heap usage (at a point in time): the amount of heap memory the program has allocated at the moment the profile is collected.
- Heap allocation (over an interval, e.g. 10 s): the total amount of heap memory the program allocates during the interval in which the profile is collected.
These metrics help find inefficient allocations and potential memory leaks; a large heap footprint also puts extra pressure on the garbage collector.
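As a hypothetical illustration, building a string by repeated concatenation allocates a new backing array on almost every iteration, so most of the allocated bytes in a heap allocation profile would be attributed to it, while a strings.Builder version allocates far less:

package main

import "strings"

// concatNaive allocates a new string on every iteration, which shows up
// as a large total in the heap allocation profile.
func concatNaive(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// concatBuilder reuses a growing buffer, so it allocates far less.
func concatBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := make([]string, 10000)
	for i := range parts {
		parts[i] = "x"
	}
	_ = concatNaive(parts)
	_ = concatBuilder(parts)
}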
Threads: helps find unnecessary threads (goroutines, in Go) that the program may be creating.
Contention: shows how long threads wait to access a shared resource; long waits point to lock contention.
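For the Go agent, goroutine profiles are collected by default, while contention (mutex) profiling has to be switched on explicitly. A sketch under that assumption, using the MutexProfiling field of the cloud.google.com/go/profiler Config and a hypothetical service name:

package main

import (
	"log"

	"cloud.google.com/go/profiler"
)

func main() {
	cfg := profiler.Config{
		Service:        "checkout", // hypothetical service name
		MutexProfiling: true,       // enable contention (mutex) profiling; it is off by default
	}
	if err := profiler.Start(cfg); err != nil {
		log.Fatalf("profiler.Start: %v", err)
	}
	// ... application code; contended locks will appear under the
	// Contention profile type in the Profiler UI.
}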
References:
- Interpreting flame graphs
- Analyzing Go application performance
- Using the Profiler interface