Background
Application performance monitoring and optimization has always been a well-worn topic, and many companies have dedicated teams working on it. Tencent built a performance monitoring framework, Matrix, and open-sourced it. For an individual developer this is a great learning opportunity: how to measure startup cost, how the app startup process works, which system functions get called along the way. In this series we will explore each part of the Matrix framework, figure out exactly how it does its monitoring, and dig into the implementation to get at the essence. For a company, building an in-house online performance monitoring platform lays a solid and stable foundation. There are commercial (toB) APM platforms on the market, but they cost money, and paying for something that still leaves you unsatisfied does more harm than good. Besides, as engineers we like certainty: we prefer stable interfaces to constantly changing uncertainties, so building it ourselves is worthwhile.
Introduction to APM
APM is short for Application Performance Management. Viewed across the whole development process, APM looks like the figure below. In the development stage we use various tools or third-party SDKs to monitor memory leaks, such as LeakCanary, and we are all familiar with the TraceView time-profiling tool. In the compile stage, Gradle bytecode instrumentation can implement unified event tracking, and function timing statistics can be collected the same way. APM should also show up in the test stage, but I have not thought that part through, so I will skip it. In the gray (staged) release stage, we collect data and produce statistics to figure out what needs optimizing, and finally hand the results to development for optimization. APM plays an important role throughout the development process.
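To make the compile-stage idea concrete: what a Gradle bytecode transform injects is essentially the timing code you would otherwise write by hand in every method. The snippet below is a hand-written, hypothetical equivalent; the class and log tag are made up for illustration only.

```java
import android.os.SystemClock;
import android.util.Log;

public class StartupTask {
    public void run() {
        long start = SystemClock.uptimeMillis();
        try {
            // ... the actual startup work ...
        } finally {
            // This is the kind of entry/exit timing a compile-time transform can
            // inject automatically into every method, without touching source code.
            long costMs = SystemClock.uptimeMillis() - start;
            Log.d("StartupTask", "run() cost " + costMs + " ms");
        }
    }
}
```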
Why do big companies have APM platforms
- The main reason, of course, is that to deliver quality applications we need ways to improve application quality, and APM provides an effective set of ways to capture problems, solve them, and improve the experience.
- As team size grows, more and more groups do their own performance monitoring: one team builds memory analysis monitoring, another builds startup time monitoring. To unify the technology stack and reduce the learning and research cost for other teams, a unified APM performance monitoring platform becomes extremely important. As the old saying goes, we learn the truth at different times and each craft has its own specialists; another reason for unifying APM is to let a dedicated, professional team do the professional work.
APM performance monitoring indicators
First, let's look at performance indicators, in other words user experience standards. Everyone talks about performance monitoring and optimization, but what kind of data counts as acceptable? See this link for details: Software Green Alliance Application Experience Standard 3.0 — Performance standard. Take the cold start standard as an example: a general application is required to start within 2000 milliseconds. That is a very loose requirement; a one-second start seems more reasonable to me. Still, these are the prevailing standards, and they give us a basis and strong support for performance optimization; they also point us to the modules that need to be monitored, which is not bad at all. If you want to dig deeper, check the links below. Official Green Alliance version: Software Green Alliance Application Experience Standard 3.0 — Performance standard. Huawei's interpretation: Key information and Huawei interpretation of Software Green Alliance Application Experience Standard 3.0. Having these standards also makes clear what counts as good and what counts as bad. Now let's move on to the core of our analysis, Matrix.
Matrix description
Now that you know about APM, let's look at one concrete implementation of it, Matrix. Matrix is an application performance monitoring framework developed by WeChat and used in its daily operation, supporting iOS, macOS, and Android. By plugging in various performance monitoring schemes, Matrix collects and analyzes abnormal performance data and outputs the corresponding problem analysis, location, and optimization suggestions, helping developers build higher-quality applications. For Android, Matrix-android monitors application installation package size, frame rate changes, startup time, lag, slow methods, SQLite operation optimization, file reading and writing, memory leaks, and more. Matrix-android covers almost all performance indicator detection and is divided into five main modules, as shown in the figure below:
- APK Checker: a tool that analyzes and checks the APK installation package. Based on a set of rules, it checks for specific problems in the APK and generates detailed test results for problem analysis and version tracking
- Resource Canary: an Activity leak and duplicate Bitmap creation detection tool built on the WeakReference feature and Square's Haha library (the core WeakReference idea is sketched after this list)
- Trace Canary: Monitors interface smoothness, startup time, page switching time, slow functions, and lag
- SQLite Lint: Automatically detects the quality of use of SQLite statements against official best practices
- IO Canary: Detects file IO issues, including file IO monitoring and Closeable Leak monitoring
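As mentioned in the Resource Canary bullet above, the core idea is simple enough to sketch in a few lines: keep a WeakReference to a destroyed Activity, trigger a GC, and treat a still-uncleared reference as a suspected leak. The class below is a minimal, hypothetical illustration of that idea, not Matrix's actual code.

```java
import java.lang.ref.WeakReference;

import android.app.Activity;
import android.os.Handler;
import android.os.Looper;
import android.util.Log;

public class SimpleActivityLeakWatcher {
    private final Handler handler = new Handler(Looper.getMainLooper());

    // Call this from Application.ActivityLifecycleCallbacks#onActivityDestroyed().
    public void watch(Activity destroyedActivity) {
        final WeakReference<Activity> ref = new WeakReference<>(destroyedActivity);
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                Runtime.getRuntime().gc();
                if (ref.get() != null) {
                    // Still strongly reachable after destroy + GC: a suspected leak.
                    Log.w("LeakWatcher", "Suspected Activity leak: "
                            + ref.get().getClass().getName());
                    // Resource Canary would now dump a trimmed .hprof file and analyze
                    // the reference chain offline with the Haha library.
                }
            }
        }, 5000L);
    }
}
```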
Framework characteristics
Compared to conventional APM tools, Matrix has the following features:
APK Checker
- Better availability: Provided as a JAR package, it is easier to apply to continuous integration systems to track and compare changes from one APK version to another
- More check and analysis functions: In addition to the functions of APKAnalyzer, it also supports counting R classes contained in APK, checking whether multiple dynamic libraries are statically linked to STL, searching for useless resources contained in APK, and supporting custom check rules
- More detailed output: supports a visual HTML format, easily parsed JSON, custom output formats, and so on
Resource Canary
- Detection and analysis are separated, so analysis results can be output continuously without interrupting automated tests
- The Hprof file generated in the detection step is trimmed to remove most of the useless data, reducing the overhead of transferring it
- Duplicate Bitmap object detection was added to cut the memory consumed by redundant bitmaps (a toy version of this idea follows the list)
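The duplicate-Bitmap idea from the last bullet can be illustrated with a toy in-memory version: hash each bitmap's pixel buffer and count collisions. Resource Canary actually performs this analysis offline on the trimmed heap dump, so treat the helper below (names invented) purely as a sketch of the technique.

```java
import android.graphics.Bitmap;

import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public final class DuplicateBitmapFinder {
    private DuplicateBitmapFinder() {}

    // Returns a map from pixel-data hash to the number of bitmaps with that hash.
    // Any hash with a count > 1 points at bitmaps that very likely share identical pixels.
    public static Map<Integer, Integer> countByPixelHash(Collection<Bitmap> bitmaps) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (Bitmap bitmap : bitmaps) {
            ByteBuffer buffer = ByteBuffer.allocate(bitmap.getByteCount());
            bitmap.copyPixelsToBuffer(buffer);
            int hash = Arrays.hashCode(buffer.array());
            Integer previous = counts.get(hash);
            counts.put(hash, previous == null ? 1 : previous + 1);
        }
        return counts;
    }
}
```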
Trace Canary
- Bytecode is dynamically modified at compile time to record execution time and call stacks with very low overhead (see the sketch after this list)
- It can locate the function causing a lag and provide its execution stack, execution time, execution count, and other information to help resolve the lag quickly
- Lag, startup time, page switching, slow function detection, and other fluency metrics are covered automatically
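To give a feel for what the compile-time bytecode modification produces, here is a hypothetical recorder plus a comment showing roughly how an instrumented method behaves after the build. `MethodRecorder` is a made-up stand-in, not Matrix's real entry/exit recording class.

```java
import android.os.SystemClock;

public final class MethodRecorder {
    private MethodRecorder() {}

    // Injected at the start of every instrumented method.
    public static void i(int methodId) {
        record(methodId, SystemClock.uptimeMillis(), true);
    }

    // Injected at every exit point (return or throw) of an instrumented method.
    public static void o(int methodId) {
        record(methodId, SystemClock.uptimeMillis(), false);
    }

    private static void record(int methodId, long uptimeMillis, boolean isEnter) {
        // A real implementation packs (methodId, isEnter, timestamp) into a ring
        // buffer and later reconstructs slow-method stacks from it when a frame janks.
    }
}

// A source method such as:
//     void onCreate(Bundle b) { doWork(); }
// behaves after instrumentation roughly as if it were written:
//     void onCreate(Bundle b) {
//         MethodRecorder.i(1024);            // 1024: id assigned at build time
//         try { doWork(); } finally { MethodRecorder.o(1024); }
//     }
```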
SQLite Lint
- Access is simple, code is non-invasive
- Independent of data volume: SQLite performance risks can be found in the development and testing stages
- The detection algorithms follow official best practices, holding SQLite usage to a high standard (one representative check is sketched after this list)
- The underlying implementation is in C++ and supports multi-platform extension
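One representative best-practice check can be illustrated in Java: run EXPLAIN QUERY PLAN on a query and flag full table scans that hint at a missing index. SQLite Lint itself is implemented in C++ with its own rule engine; this helper (names invented) only conveys the kind of check it performs.

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.util.Log;

public final class QueryPlanChecker {
    private QueryPlanChecker() {}

    // Logs a warning if the query plan suggests a full table scan without an index.
    public static void checkForFullScan(SQLiteDatabase db, String sql) {
        Cursor cursor = db.rawQuery("EXPLAIN QUERY PLAN " + sql, null);
        try {
            int detailIndex = cursor.getColumnIndexOrThrow("detail");
            while (cursor.moveToNext()) {
                String detail = cursor.getString(detailIndex);
                // "SCAN TABLE xxx" without "USING INDEX" usually means a full table scan.
                if (detail != null && detail.startsWith("SCAN TABLE")
                        && !detail.contains("USING INDEX")) {
                    Log.w("QueryPlanChecker",
                            "Possible missing index for query: " + sql + " -> " + detail);
                }
            }
        } finally {
            cursor.close();
        }
    }
}
```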
IO Canary
- Access is simple, code is non-invasive
- Comprehensive monitoring of both performance and leaks keeps I/O quality under control (a simplified Java-level sketch follows this list)
- Compatible with Android P
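IO Canary detects Closeable leaks by hooking file operations at the native layer. A much simpler Java-level way to convey the same idea, purely illustrative and not Matrix's approach, is a stream wrapper that remembers where it was opened and complains if it is garbage-collected without being closed.

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

public class TrackedFileInputStream extends FileInputStream {
    private final Throwable allocationSite = new Throwable("opened here");
    private volatile boolean closed = false;

    public TrackedFileInputStream(String path) throws FileNotFoundException {
        super(path);
    }

    @Override
    public void close() throws IOException {
        closed = true;
        super.close();
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            if (!closed) {
                // A real monitor would report this stack trace through its issue pipeline
                // instead of printing it; finalize() is used here only to keep the sketch short.
                System.err.println("Closeable leaked, allocation stack:");
                allocationSite.printStackTrace();
            }
        } finally {
            super.finalize();
        }
    }
}
```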
Purpose
The main purpose is to analyze from the angle of the source code, peel back the layers to get at the essence, and make every performance monitoring principle clear. On the other hand, our company is preparing to build a complete online performance monitoring platform on top of Matrix, visualizing the data it produces and using statistics and filtering to find what needs optimizing. This helps the company keep its online apps under control, indirectly improves the user experience, and also pushes developers toward higher-quality applications.
Analysis method
- Source code reading and analysis
First, by reading and interpreting the source code, we understand the implementation flow and the processing details.
- Code practice
Practice is the best teacher. The main point is to learn by doing: we can write a simplified version of each monitor and simulate real scenarios, which helps us understand more deeply.
This series includes
There are roughly the nine modules above, and each will be elaborated and analyzed in detail. The specific links are as follows:
Each of the above will be shared as a separate article, and each article pays more attention to details and practice. Thinking about it carefully, though, starting directly with the detailed analysis does not help understanding much, so I have also prepared an overall analysis of the framework: from the whole to the parts, from the parts to the details, and from the details to practice.
The framework of Matrix
The project is mainly organized as three kinds of libraries: C/C++, Java, and Android
- apk-canary is a plain JAR that does not touch Android APIs and depends on the Java commons project
- trace-canary is an Android library that mainly depends on android-lib
- resource-canary is made up of a basic common project (canary-common), an Android library (canary-android), and a Java analyzer project (canary-analyzer) that analyzes Activity memory leaks, duplicate bitmaps, heap dumps, and so on
- io-canary is an Android library that depends on android-commons and android-lib and analyzes I/O stream leaks, etc.
- sqlite-lint-android is an Android library that depends on a C++ module whose code lives in the src directory
- The Gradle plugin is a custom plugin that depends on the arsutil Java project
Overall, I will not go into detail here about the functionality and content of each lib; that will be covered later in the source code interpretation.
Matrix directory structure
Combined with the framework structure diagram above, this hardly needs further explanation. Without further ado, let's dig a little deeper and look at the project's class structure design.
The class structure for Matrix
From the class structure diagram we can clearly see how the framework's code is organized. Looking at the diagram, several points stand out:
- The IPlugin interface defines the behavior of a plug-in: starting, stopping, destroying, initializing, getting its tag, and so on.
- The problems collected by plug-ins are aggregated through the PluginListener interface; the code depends on the interface, not on the implementation class DefaultPluginListener
- There are four plugins: Trace, SQLiteLint, Resource, and IOCanary.
- The Matrix aggregation class is created through the Builder pattern and manages all plug-ins with methods such as startAllPlugins and stopAllPlugins (a simplified model of this structure follows the list).
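Putting the three bullets above together, the structure can be modelled in a few dozen lines. The sketch below is written from the class diagram's description rather than copied from Matrix's source, so the names and signatures are simplified.

```java
import java.util.ArrayList;
import java.util.List;

// Problems from all plug-ins converge through this listener interface.
interface PluginListener {
    void onReportIssue(String tag, String issueJson);
}

// The plug-in contract: lifecycle plus a tag, as described by the class diagram.
interface IPlugin {
    void init(PluginListener listener);
    void start();
    void stop();
    void destroy();
    String getTag();
}

// A simplified stand-in for the Matrix aggregation class, built via a Builder.
final class MiniMatrix {
    private final List<IPlugin> plugins;

    private MiniMatrix(List<IPlugin> plugins) {
        this.plugins = plugins;
    }

    void startAllPlugins() {
        for (IPlugin plugin : plugins) plugin.start();
    }

    void stopAllPlugins() {
        for (IPlugin plugin : plugins) plugin.stop();
    }

    static final class Builder {
        private final List<IPlugin> plugins = new ArrayList<>();
        private PluginListener listener;

        Builder pluginListener(PluginListener listener) { this.listener = listener; return this; }
        Builder plugin(IPlugin plugin) { plugins.add(plugin); return this; }

        MiniMatrix build() {
            for (IPlugin plugin : plugins) plugin.init(listener);
            return new MiniMatrix(plugins);
        }
    }
}
```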
In fact, it is not as complex as you might imagine; the implementation is quite simple, with nothing obscure or hard to follow. The difficult parts come later.
Conclusion
In this installment we built up an understanding of APM starting from the background, then introduced the Matrix framework and listed its functions and characteristics. Next we laid out the analysis method and the content to cover, and finally looked at the structural design of the Matrix framework and its classes from a global perspective. This helps us learn from the whole down to the parts. From here we can turn to the details: for each plug-in's functions and interface definitions, we will go through the source code one by one to see how the monitoring is actually implemented. Looking forward to it.