Preface
To be a good Android developer, you need a complete knowledge system. Here, let us grow together toward that goal ~
In the previous articles, the author analyzed in detail the current optimization schemes for app drawing and layout. If you are not familiar with drawing or layout optimization, take a closer look at the earlier articles: Drawing Optimization of Android Performance Optimization, and In-depth Exploration of Android Layout Optimization (Parts 1, 2, and 3). Because the topic of jank optimization contains too much material, the author has split it into two parts so it can be explained in more detail. This article is the first part of the deep dive into Android jank optimization. The main contents are as follows:
- 1. Jank optimization analysis methods and tools
- 2. Automated jank detection scheme and optimization
When we use various apps, we sometimes notice that some of them do not run smoothly, that is, jank (lag) occurs. So how do we define when jank happens?
If the average FPS of an app is lower than 30 and the minimum FPS is lower than 24, the app is considered janky.
So how do we analyze whether an app is janky? Below, let's look at the analysis methods and tools needed to solve jank problems.
1. Jank optimization analysis methods and tools
1. Background
- Many performance problems are hard to perceive, but jank is something users notice immediately.
- However, jank problems are difficult to locate.
So what makes jank problems hard to solve?
- 1. The causes of jank are complex, involving code, memory, drawing, I/O, CPU, and more.
- 2. Online jank problems are difficult to reproduce offline, because they are strongly tied to the scene at the time. For example, an online user's disk may be short of space, which degrades disk I/O write performance and leads to jank. For this kind of problem, it is best to record the user's specific scene information at the moment the jank occurs.
2. Jank analysis method: using shell commands to analyze CPU time consumption
While there are many causes of jank, they ultimately all come down to CPU time.
CPU time includes user time and system time.
- User time: The time spent executing user-mode application code.
- System time: Time spent executing kernel-mode system calls, including I/O, locks, interrupts, and other system calls.
CPU problems can be roughly divided into the following three categories:
1. Wasteful use of CPU resources
- Inefficient algorithms: traversing data twice when once would do, mainly in searching, sorting, and deletion.
- Not using a cache: re-decoding an image that has already been decoded once.
- Using the wrong primitive type for calculations: using long when int is sufficient can make the CPU do several times more work.
2. CPU resource contention
- Contending with the main thread for CPU: this is the most common problem. Especially before Android 6.0, a busy system could easily starve the main thread of CPU time and leave the user waiting.
- Contending with audio/video processing for CPU: audio and video encoding/decoding is itself very CPU-intensive, and its decoding speed is a hard requirement; if it cannot keep up, playback fluency suffers. We can optimize in two ways: 1. try to eliminate the consumption of non-core business logic; 2. optimize the codec's own consumption by shifting CPU load to the GPU, for example by using RenderScript to process the image information in videos.
- Equal-priority threads contending with each other: for example, opening 20 threads in a custom photo album to decode images just makes them fight each other for the CPU, and the result is that images display very slowly, a case of "three monks have no water to drink". Therefore, when building a custom thread pool, we need to control the number of threads according to the number of CPU cores.
3. CPU resources are underutilized
For startup, screen switching, and audio/video codec scenarios, we need to make full use of the CPU to guarantee speed. Besides disk and network I/O, lock contention, sleeps, and so on can also leave the CPU underutilized. Lock optimization usually means reducing the scope of the lock as much as possible; a minimal sketch follows below.
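For example, a minimal sketch of narrowing a lock's scope so that slow I/O runs outside the critical section (the class and method names here are purely illustrative, not from the original text):
public class DiskBackedCache {

    private final java.util.Map<String, byte[]> memoryCache = new java.util.HashMap<>();

    // Before: the whole method was synchronized, so the slow disk write also held the lock.
    // After: only the shared-state update is guarded; the disk write runs outside the lock.
    public void put(String key, byte[] value) {
        synchronized (memoryCache) {
            memoryCache.put(key, value);
        }
        writeToDisk(key, value); // slow I/O, no longer blocks other threads waiting for the lock
    }

    private void writeToDisk(String key, byte[] value) {
        // Omitted: persist the entry to disk.
    }
}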
1. Understand CPU performance
We can evaluate a CPU's performance by its clock frequency, number of cores, cache size, and other parameters; together these reflect the CPU's computing power and instruction-execution capability, that is, how many floating-point operations and instructions it can execute per second.
In addition, the latest mainstream models use energy-efficient multi-cluster CPU architectures (i.e., multi-core layered architectures) so that during normal low-load operation only the low-frequency cores are used, saving power.
We can also directly check the number of CPU cores and their frequencies on a phone through shell commands, as shown below:
// Enter the device shell first
adb shell
// Get the range of available CPU cores (my phone has eight cores)
platina:/ $ cat /sys/devices/system/cpu/possible
0-7
// Get the maximum frequency of the first CPU core
platina:/ $ cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
1843200
// Get the minimum frequency of the second CPU core
platina:/ $ cat /sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_min_freq
633600
From CPU to GPU to AI chips (such as the NPU, the Neural network Processing Unit built specifically for neural-network computation), with the leap in overall mobile CPU performance, some AI applications such as medical diagnosis and image super-resolution can now also run well on mobile devices, and we can use the device's own computing power to reduce expensive server costs.
In addition, the better the CPU, the more the application can support. For example, a thread pool can be configured with a different number of threads according to the number of CPU cores on different phones, and some advanced AI features can be enabled only on devices with high clock frequencies or an NPU, as in the sketch below.
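As a quick illustration, a minimal sketch of sizing a decode thread pool from the device's core count (the class name and the pool-size formula are illustrative assumptions, not prescribed by the original text):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DecodeExecutor {

    // Number of cores the runtime exposes to this process.
    private static final int CPU_COUNT = Runtime.getRuntime().availableProcessors();

    // For CPU-bound work such as image decoding, keep the pool close to the core count
    // instead of opening dozens of threads that fight each other for the CPU.
    private static final int POOL_SIZE = Math.max(2, Math.min(CPU_COUNT - 1, 4));

    public static final ExecutorService INSTANCE = Executors.newFixedThreadPool(POOL_SIZE);
}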
2. Read /proc/stat and /proc/[PID]/stat files to calculate and evaluate the CPU usage of the system
The first thing to look at when an application has a jank problem is the system's CPU usage.
First, we read the /proc/stat file to get the total CPU time, and read /proc/[PID]/stat to get the CPU time of the application process. Then, we take two CPU snapshots and process snapshots at a short enough interval to calculate the CPU utilization.
Calculate the total CPU usage
1. Take two CPU snapshots at a short enough interval, i.e., read the /proc/stat file twice to obtain the data at two points in time, as shown below:
// First sample
platina:/ $ cat /proc/stat
cpu  9931551 1082101 9002534 174463041 340947 1060438 1088978 0 0 0
cpu0 2244962 280573 2667000 22414199 99651 231869 439918 0 0 0
cpu1 2672378 421880 2943791 21540302 121818 236850 438733 0 0 0
cpu2 1648512 76856 1431036 25868789 46970 107094 52025 0 0 0
cpu3 1418757 41280 1397203 25772984 40292 110168 41667 0 0 0
cpu4 573203 79498 178263 19618235 9577 307949 10875 0 0 0
cpu5 522638 67978 155454 19684358 8793 19787 4603 0 0 0
cpu6 458438 64085 132252 19749439 8143 19942 98241 0 0 0
cpu7 392663 49951 97535 19814735 5703 26779 2916 0 0 0
intr ...
// Second sample
platina:/ $ cat /proc/stat
cpu  9931673 1082113 9002679 174466561 340954 1060446 1088994 0 0 0
cpu0 2244999 280578 2667032 22414604 99653 231869 439918 0 0 0
cpu1 2672434 421881 2943861 21540606 121822 236855 438747 0 0 0
cpu2 1648525 76859 1431054 25869234 46971 107095 52026 0 0 0
cpu3 1418773 41283 1397228 25773412 40292 110170 41668 0 0 0
cpu4 573203 79498 178263 19618720 9577 307949 10875 0 0 0
cpu5 522638 67978 155454 19684842 8793 19787 4603 0 0 0
cpu6 458438 64085 132252 19749923 8143 19942 98241 0 0 0
cpu7 392663 49951 97535 19815220 5703 26779 2916 0 0 0
intr ...
Since my phone has eight cores, there are eight per-core lines, cpu0 to cpu7, and the first "cpu" line is the aggregate of all eight. Because we want to calculate the system's overall CPU usage, we use the aggregate "cpu" line as the baseline. The aggregate data from the two samples is as follows:
// Aggregate "cpu" line from the first sample (referred to below as cpu1)
cpu1 9931551 1082101 9002534 174463041 340947 1060438 1088978 0 0 0
// Aggregate "cpu" line from the second sample (referred to below as cpu2)
cpu2 9931673 1082113 9002679 174466561 340954 1060446 1088994 0 0 0
The corresponding indicators are as follows:
cpu (user, nice, system, idle, iowait, irq, softirq, stealstolen, guest)
Taking the cpu1 data (9931551 1082101 9002534 174463041 340947 1060438 1088978 0 0 0) as an example, the meaning of each field is explained below.
- user (9931551): time spent in user mode since the system started, excluding processes with a negative nice value.
- nice (1082101): CPU time used by processes with a negative nice value since the system started.
- system (9002534): time spent in kernel mode since the system started.
- idle (174463041): idle time since the system started, excluding I/O wait time.
- iowait (340947): I/O wait time since the system started. (Counted since Linux 2.5.41)
- irq (1060438): hard-interrupt time since the system started. (Counted since Linux 2.6.0-test4)
- softirq (1088978): soft-interrupt time since the system started. (Counted since Linux 2.6.0-test4)
- stealstolen (0): time spent in other operating systems when running in a virtualized environment; always 0 on Android. (Counted since Linux 2.6.11)
- guest (0): time spent running a virtual CPU for guest operating systems under the control of the Linux kernel; always 0 on Android. (Counted since Linux 2.6.24)
These values are measured in jiffies. Jiffies is a global variable in the kernel that records the number of ticks since the system started; in Linux, one tick can roughly be understood as the minimum time slice for process scheduling. Its length varies between kernel versions, usually between 1ms and 10ms.
After understanding the meaning of each field in /proc/stat, we can calculate the total CPU activity time at the two sample points from the cpu data above, as shown below:
totalCPUTime = user + nice + system + idle + iowait + irq + softirq + stealstolen + guest
cpu1 = 9931551 + 1082101 + 9002534 + 174463041 + 340947 + 1060438 + 1088978 + 0 + 0 + 0 = 196969590 jiffies
cpu2 = 9931673 + 1082113 + 9002679 + 174466561 + 340954 + 1060446 + 1088994 + 0 + 0 + 0 = 196973420 jiffies
Subtracting the two gives the total CPU time over the sampling interval, as shown below:
totalCPUTime = cpu2 - cpu1 = 196973420 - 196969590 = 3830 jiffies
Finally, we can calculate the system's CPU usage:
idleCPUTime = idle2 - idle1 = 174466561 - 174463041 = 3520 jiffies
totalCPUUsage = (totalCPUTime - idleCPUTime) / totalCPUTime = (3830 - 3520) / 3830 ≈ 8%
As can be seen, the CPU usage between the two sample points is about 8%, which means the system CPU was fairly idle. If CPU usage stays above 60%, the system is busy; in that case we need to further analyze the ratio of user time to system time to see whether it is the system or the application process that is consuming the CPU.
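The same two-snapshot calculation can be scripted. Below is a minimal Java sketch that samples /proc/stat twice and applies the formula above; note that since Android 8.0 ordinary apps can no longer read /proc/stat, so treat this as an adb-shell/offline illustration rather than something to ship in an app:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CpuUsageSampler {

    // Returns {totalJiffies, idleJiffies} parsed from the aggregate "cpu" line of /proc/stat.
    private static long[] readCpu() throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/stat"))) {
            String[] f = reader.readLine().trim().split("\\s+");
            long total = 0;
            // f[0] is the "cpu" label; f[1..] are user, nice, system, idle, iowait, irq, softirq, ...
            for (int i = 1; i < f.length; i++) {
                total += Long.parseLong(f[i]);
            }
            long idle = Long.parseLong(f[4]); // the 4th value after the label is idle
            return new long[]{total, idle};
        }
    }

    // Samples twice with the given interval and returns the CPU usage percentage.
    public static float sampleUsage(long intervalMs) throws IOException, InterruptedException {
        long[] first = readCpu();
        Thread.sleep(intervalMs);
        long[] second = readCpu();
        long totalDelta = second[0] - first[0];
        long idleDelta = second[1] - first[1];
        return totalDelta == 0 ? 0f : (totalDelta - idleDelta) * 100f / totalDelta;
    }
}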
3. Run the top command to check the CPU consumption of application processes
In addition, since Android is an operating system based on the Linux kernel, it is natural to use some common Linux commands. For example, we can use the top command to see which processes are the main users of CPU.
// Running top directly will continuously output process information at a fixed interval
platina:/ $ top
  PID USER      PR  NI VIRT  RES  SHR S[%CPU] %MEM     TIME+ ARGS
12700 u0_a945   10 -10 4.3G 122M  67M S  15.6  2.1   1:06.41 json.chao.com.w+
  753 system    RT   0  90M 1.1M 1.0M S  13.6  0.0 127:47.73 android.hardwar+
 2064 system    18  -2 4.6G 309M 215M S  12.3  5.4 978:15.18 system_server
22142 u0_a163   20   0 2.0G  97M  41M S  10.3  1.6   2:22.99 com.0700.mob+
 2293 system    20   0 4.7G 250M  87M S   8.6  4.3   5:15.77 com.android.sys+
From the output above, our Awesome-WanAndroid application is taking 15.6% of the CPU. Finally, here are the most commonly used top commands:
// Filter out processes at 0% CPU
adb shell top | grep -v '0% S'
// Watch the CPU and memory consumption of a specified process, with the refresh interval set to 1 second
adb shell top -d 1 | grep json.chao.com.wanandroid
platina:/ $ top -d 1 | grep json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S 13.8 2.2 1:04.46 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S 19.0 2.2 1:04.51 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S 15.0 2.2 1:04.70 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S  9.0 2.2 1:04.85 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S 26.0 2.2 1:04.94 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S  9.0 2.2 1:05.20 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M R 17.0 2.2 1:05.29 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S 20.0 2.2 1:05.46 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S  9.0 2.2 1:05.66 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M R 21.0 2.2 1:05.75 json.chao.com.w+
 5689 u0_a945 10 -10 4.3G 129M 71M S 14.0 2.2 1:05.96 json.chao.com.w+
4. The ps command
In addition to using the top command to view overall CPU information, if we only want to see the percentage of total system CPU time consumed by a specific process, or its other status information, we can use the ps command. Commonly used ps commands are as follows:
// View the basic status information of a specified process
platina:/ $ ps -p 31333
USER     PID   PPID  VSZ      RSS    WCHAN ADDR S NAME
u0_a945  31333 1277  4521308  127460 0          S json.chao.com.w+
// View the percentage of total system CPU time used by a specified process
platina:/ $ ps -o PCPU -p 31333
%CPU
10.8
The meanings of the output parameters are as follows:
- USER: indicates the USER name
- PID: indicates the ID of a process
- PPID: indicates the ID of the parent process
- VSZ: virtual memory size (in KB)
- RSS: resident memory size (pages in use)
- WCHAN: the kernel function in which the process is sleeping (the "wait channel"); 0 if the process is running
- ADDR: the instruction pointer
- NAME: indicates the process NAME
The S column in the output represents the current state of the process; there are 10 possible states, as shown below:
R (running) S (sleeping) D (device I/O) T (stopped) t (traced)
Z (zombie) X (deader) x (dead) K (wakekill) W (waking)
As you can see, our main process is currently in the sleeping state.
5. dumpsys cpuinfo
The information obtained using the dumpsys cpuinfo command is more refined than the information obtained from the top command, as shown below:
platina:/ $ dumpsys cpuinfo
Load: 1.92 / 1.59 / 0.97
CPU usage from 45482ms to 25373ms ago (2020-02-04 17:00:37.666 to 2020-02-04 17:00:57.775):
33% 2060/system_server: 22% user + 10% kernel / faults: 8152 minor 6 major
17% 2292/com.android.systemui: 12% user + 4.7% kernel / faults: 21636 minor 3 major
14% 750/[email protected]: 4.4% user + 10% kernel
6.1% 778/surfaceflinger: 3.3% user + 2.7% kernel / faults: 128 minor
3.3% 2598/com.miui.home: 2.8% user + 0.4% kernel / faults: 7655 minor 11 major
2.2% 2914/cnss_diag: 1.6% user + 0.6% kernel
1.9% 745/[email protected]: 1.4% user + 0.5% kernel / faults: 5 minor
1.7% 4525/kworker/u16:6: 0% user + 1.7% kernel
1.6% 748/[email protected]: 0.6% user + 0.9% kernel
1.4% 4551/kworker/u16:14: 0% user + 1.4% kernel
1.4% 31333/json.chao.com.wanandroid: 0.9% user + 0.4% kernel / faults: 3995 minor 22 major
1.1% 6670/kworker/u16:0: 0% user + 1.1% kernel
0.9% 448/mmc-cmdqd/0: 0% user + 0.9% kernel
0.7% 95/system: 0% user + 0.7% kernel
0.6% 4512/mdss_fb0: 0% user + 0.6% kernel
0.6% 7393/com.android.incallui: 0.6% user + 0% kernel / faults: 2272 minor
0.6% 594/logd: 0.4% user + 0.1% kernel / faults: 38 minor 3 major
0.5% 3108/com.xiaomi.xmsf: 0.2% user + 0.2% kernel / faults: 1812 minor
0.5% 4526/kworker/u16:9: 0% user + 0.5% kernel
0.5% 4621/com.gotokeep.keep: 0.3% user + 0.1% kernel / faults: 55 minor
0.5% 354/irq/267-NVT-ts: 0% user + 0.5% kernel
0.5% 2572/com.android.phone: 0.3% user + 0.1% kernel / faults: 323 minor
0.5% 4554/kworker/u16:15: 0% user + 0.5% kernel
0.4% 290/kgsl_worker_thr: 0% user + 0.4% kernel
0.3% 2933/irq/61-1008000.: 0% user + 0.3% kernel
0.3% 3932/com.tencent.mm: 0.2% user + 0% kernel / faults: 647 minor 1 major
0.3% 4550/kworker/u16:13: 0% user + 0.3% kernel
0.3% 744/[email protected]: 0% user + 0.3% kernel / faults: 48 minor
0.3% 8906/com.tencent.mm:appbrand0: 0.2% user + 0% kernel / faults: 45 minor
0.2% 79/smem_native_rpm: 0% user + 0.2% kernel
0.2% 759/[email protected]: 0% user + 0.2% kernel / faults: 46 minor
0.2% 3197/com.miui.powerkeeper: 0% user + 0.1% kernel / faults: 141 minor
0.2% 4489/kworker/1:1: 0% user + 0.2% kernel
0.2% 595/servicemanager: 0% user + 0.2% kernel
0.2% 754/[email protected]: 0.1% user + 0% kernel
0.2% 1258/jbd2/dm-2-8: 0% user + 0.2% kernel
0.2% 5800/com.eg.android.AlipayGphone: 0.1% user + 0% kernel / faults: 48 minor
0.2% 21590/iptables-restore: 0% user + 0.1% kernel / faults: 563 minor
0.2% 21592/ip6tables-restore: 0% user + 0.1% kernel / faults: 647 minor
0.1% 3/ksoftirqd/0: 0% user + 0.1% kernel
0.1% 442/cfinteractive: 0% user + 0.1% kernel
0.1% 568/ueventd: 0% user + 0% kernel
0.1% 1295/netd: 0% user + 0.1% kernel / faults: 250 minor
0.1% 3002/com.miui.securitycenter.remote: 0.1% user + 0% kernel / faults: 818 minor 1 major
0.1% 20555/com.eg.android.AlipayGphone:push: 0% user + 0% kernel / faults: 20 minor
0.1% 7/rcu_preempt: 0% user + 0.1% kernel
0.1% 15/ksoftirqd/1: 0% user + 0.1% kernel
0.1% 76/lpass_smem_glin: 0% user + 0.1% kernel
0.1% 1299/rild: 0.1% user + 0% kernel / faults: 12 minor
0.1% 1448/android.process.acore: 0.1% user + 0% kernel / faults: 1719 minor
0% 4419/com.google.android.webview:s: 0% user + 0% kernel / faults: 602 minor
0% 20465/com.miui.hybrid: 0% user + 0% kernel / faults: 1575 minor
0% 10/rcuop/0: 0% user + 0% kernel
0% 75/smem_native_lpa: 0% user + 0% kernel
0% 90/kcompactd0: 0% user + 0% kernel
0% 1508/msm_irqbalance: 0% user + 0% kernel
0% 1745/cds_mc_thread: 0% user + 0% kernel
0% 2899/charge_logger: 0% user + 0% kernel
0% 3612/com.tencent.mm:tools: 0% user + 0% kernel / faults: 29 minor
0% 4203/kworker/0:0: 0% user + 0% kernel
0% 7377/com.android.server.telecom:ui: 0% user + 0% kernel / faults: 1083 minor
0% 32113/com.tencent.mobileqq: 0% user + 0% kernel / faults: 49 minor
0% 8/rcu_sched: 0% user + 0% kernel
0% 22/ksoftirqd/2: 0% user + 0% kernel
0% 25/rcuop/2: 0% user + 0% kernel
0% 29/ksoftirqd/3: 0% user + 0% kernel
0% 39/rcuop/4: 0% user + 0% kernel
0% 53/rcuop/6: 0% user + 0% kernel
0% 487/irq/715-ima-rdy: 0% user + 0% kernel
0% 749/[email protected]: 0% user + 0% kernel
0% 764/healthd: 0% user + 0% kernel / faults: 2 minor
0% 845/wlan_logging_th: 0% user + 0% kernel
0% 860/mm-pp-dpps: 0% user + 0% kernel
0% 1297/wificond: 0% user + 0% kernel / faults: 12 minor
0% 1309/com.miui.weather2: 0% user + 0% kernel / faults: 729 minor 23 major
0% 1542/rild: 0% user + 0% kernel / faults: 3 minor
0% 2915/tcpdump: 0% user + 0% kernel / faults: 6 minor
0% 2974/com.tencent.mobileqq:MSF: 0% user + 0% kernel / faults: 121 minor
0% 3044/com.miui.contentcatcher: 0% user + 0% kernel / faults: 315 minor
0% 3057/com.miui.dmregservice: 0% user + 0% kernel / faults: 332 minor
0% 3095/com.xiaomi.mircs: 0% user + 0% kernel
0% 3115/com.xiaomi.finddevice: 0% user + 0% kernel / faults: 270 minor 3 major
0% 3513/com.xiaomi.metoknlp: 0% user + 0% kernel / faults: 136 minor
0% 3603/com.tencent.mm:toolsmp: 0% user + 0% kernel / faults: 35 minor
0% 4527/kworker/u16:11: 0% user + 0% kernel
0% 4841/com.gotokeep.keep:xg_service_v4: 0% user + 0% kernel / faults: 275 minor
0% 5064/com.sohu.inputmethod.sogou.xiaomi: 0% user + 0% kernel / faults: 102 minor
0% 5257/kworker/0:1: 0% user + 0% kernel
0% 5839/com.tencent.mm:push: 0% user + 0% kernel / faults: 98 minor
0% 6644/kworker/3:2: 0% user + 0% kernel
0% 6657/com.miui.wmsvc: 0% user + 0% kernel / faults: 52 minor
0% 6945/com.xiaomi.account:accountservice: 0% user + 0% kernel / faults: 1 minor
0% 9387/com.tencent.mm:appbrand1: 0% user + 0% kernel / faults: 27 minor
13% TOTAL: 6.8% user + 5.3% kernel + 0.2% iowait + 0.3% irq + 0.4% softirq
In the output above, the first line shows the CPU load: Load: 1.92 / 1.59 / 0.97. The three numbers are averages over progressively longer periods of time (one, five, and fifteen minutes), and lower is better; higher numbers indicate a problem or an overloaded machine. Note that the load needs to be divided by the number of cores: for example, my system has 8 cores, so the per-core load works out to 0.24 / 0.20 / 0.12, and only a per-core load above 1 indicates a problem.
In addition, the system_server process consumes the most CPU, while our WanAndroid application process consumes only 1.4%, of which 0.9% is user time and 0.4% is kernel time. Finally, the TOTAL line shows overall CPU usage of 13%, which is the sum of all the per-process values above normalized by the number of CPUs, roughly 104% / 8 = 13%.
In addition to analyzing the CPU usage of the system and the application as above, we should also pay attention to two metrics: the jank rate and the jank tree. They help us effectively evaluate, and more specifically optimize, the jank occurring in our applications.
Jank rate
Similar to the UV/PV crash rates discussed in the article on Android stability optimization, jank can also have corresponding UV/PV rates. UV (Unique Visitor) treats one mobile client as one visitor, and the same client is counted only once between 00:00 and 24:00. PV (Page View) is the number of page views or clicks. The UV and PV jank rates are therefore defined as follows:
// The UV jank rate evaluates the range of impact of jank
UV jank rate = UV experiencing jank / UV with jank collection enabled
// The PV jank rate evaluates the severity of jank
PV jank rate = PV experiencing jank / PV with jank collection enabled
Because the sampling rules for jank are similar to those for memory problems (both are generally reported by sampling), sampling should be done per user: if a user is selected for collection, collection continues for that user for the whole day.
Jank tree
We can build a flame graph of jank, the jank tree, and see the full picture of jank in a single image. Because the specific duration of a jank is closely tied to the phone's performance, the usage scenario, the environment, and so on, and large apps have a huge number of scenes where jank can appear in the wild, instead of specifying jank thresholds such as 1s, 2s, or 3s we can discard the concrete elapsed time and aggregate all jank information purely by the proportion of each stack. In this way we can see intuitively from the jank tree which stacks produce the most jank, so we can fix the top jank problems first and get the biggest optimization payoff with the least effort.
3. Jank optimization tools
1. CPU Profiler Review
The use of CPU profilers has been covered in detail in my in-depth exploration of Android startup speed optimization, so if you’re not familiar with CPU profilers, check out this article.
Let’s briefly review CPU profilers.
Advantages:
- Graphical display of execution times, call stacks, and more.
- Comprehensive information, covering all threads.
Disadvantages:
The runtime overhead is high; it slows everything down and may therefore skew our optimization direction.
Usage:
Debug.startMethodTracing("");
// Code segment to be profiled
...
Debug.stopMethodTracing();
The resulting trace file is generated on the sdcard under Android/data/packagename/files.
2. Systrace Review
Systrace is built on Linux's ftrace debugging tool (ftrace is a mechanism for understanding the inner workings of the Linux kernel) and adds performance probes, i.e. performance-monitoring instrumentation, at key points in the system. On top of ftrace, Android encapsulates atrace and adds more specific probes, such as Graphics, Activity Manager, Dalvik VM, System Server, and so on. The use of Systrace has been analyzed in detail in In-depth Exploration of Android Startup Speed Optimization; if you are not familiar with Systrace, you can read that article first.
Let’s briefly review Systrace.
Function:
Monitor and track API calls, thread execution, and generate HTML reports.
Advice:
For API 18 and above, TraceCompat is recommended for adding custom trace sections, as sketched below.
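As a quick illustration of that advice, a minimal sketch of wrapping a code section with TraceCompat so it appears as a named slice in the systrace report (the section name and the surrounding method are illustrative assumptions, and the androidx.core dependency is assumed to be present):
import androidx.core.os.TraceCompat;

public void bindArticleList(java.util.List<String> articles) {
    TraceCompat.beginSection("bindArticleList"); // appears as a named slice in the trace
    try {
        for (String article : articles) {
            // Omitted: bind each item to the UI.
        }
    } finally {
        TraceCompat.endSection(); // always end the section, even if an exception is thrown
    }
}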
Usage:
Execute the script using the Python command, followed by a series of arguments, as follows:
python systrace.py -t 10 [other-options] [categories]
// The systrace configuration I use
python /Users/quchao/Library/Android/sdk/platform-tools/systrace/systrace.py -t 20 sched gfx view wm am app webview -a "com.wanandroid.json.chao" -o ~/Documents/open-project/systrace_data/wanandroid_start_1.html
The specific meanings of parameters are as follows:
- -t: Indicates that the statistics period is 20s.
- sched: CPU scheduling information.
- GFX: Graphic information.
- View: indicates a view.
- Wm: Window management.
- am: activity manager information.
- App: Application information.
- Webview: Indicates webview information.
- -a: specifies the package name of the target application.
- -o: indicates the generated systrace. HTML file.
Advantages:
- 1. Lightweight, with low overhead.
- 2. It directly reflects CPU utilization.
- 3. The Alerts panel on the right gives specific suggestions based on the problems found in our application; for example, it can tell us that the app is drawing slowly or that GC is frequent.
Finally, offline we can automatically measure application function timings by instrumenting every function at compile time, but be careful to filter out most short functions to reduce the performance cost (this can be done through a blacklist configuration that filters short functions or functions called very frequently). In this way we can see the calling process of the whole application, including function calls on the critical threads, such as rendering time, thread locks, GC time, and so on. You can use Zhengcx's MethodTraceMan here, but it currently only supports filtering by package and class name, so you would need to modify its source to support filtering short or very frequently called functions.
For performance reasons, if you want to use this instrumentation scheme online, it is best to monitor only the elapsed time on the main thread. Although the scheme has little impact on performance, it is still recommended only for offline or grayscale environments.
In addition, if you need to analyze the elapsed time of Native functions, you can use the Simpleperf profiler added in Android 5.0, which takes advantage of the hardware perf events provided by the CPU's performance monitoring unit (PMU). With Simpleperf you can see the elapsed time of all Native code as well as calls into some Android system libraries, which is very helpful for analyzing problems such as the time spent loading dex or verifying classes. Moreover, the Profiler in Android Studio 3.2 directly supports Simpleperf (the Sample Native profiling option, API level 26+), which makes profiling native code easier.
3. StrictMode
StrictMode is a tool class introduced in Android 2.3. It is a runtime detection mechanism provided by Android to help developers find irregular code. A project may have tens of thousands of lines of code; reviewing it by eye is not only inefficient but also error-prone. With StrictMode enabled, the system automatically detects some abnormal situations on the main thread and responds according to our configuration.
StrictMode is a very powerful tool, but it is easy to overlook because we are not familiar with it. StrictMode mainly detects two categories of problems:
1. Thread policy
The thread policy detects custom slow calls, disk reads and writes, and network requests on the monitored thread.
2. VM policy
The VM policy detects the following:
- Activity leaks
- SQLite object leaks
- Checking the number of instances of a class
StrictMode in practice
To use StrictMode in an application, you only need to configure StrictMode in the Application's onCreate method, as shown in the following code:
private void initStrictMode() {
    // 1. Enable StrictMode only in the development (offline) environment
    if (DEV_MODE) {
        // 2. Set the thread policy
        StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
                .detectCustomSlowCalls() // API level 11, used together with StrictMode.noteSlowCall
                .detectDiskReads()
                .detectDiskWrites()
                .detectNetwork() // or .detectAll() for all detectable problems
                .penaltyLog() // print violation information to Logcat
                .penaltyDialog() // can also pop up a warning dialog
                // .penaltyDeath() // or crash directly
                .build());
        // 3. Set the VM policy
        StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
                .detectLeakedSqlLiteObjects()
                // Limit the number of instances of NewsItem to 1
                .setClassInstanceLimit(NewsItem.class, 1)
                .detectLeakedClosableObjects() // API level 11
                .penaltyLog()
                .build());
    }
}
Finally, filter the corresponding StrictMode logs in Logcat to see the violations it reports.
4. Profilo
Profilo is an Android library for collecting performance traces from production builds of an app.
Profilo itself integrates atrace. All of ftrace's performance probes write their events to the kernel buffer through the trace_marker file, and Profilo uses a PLT hook to intercept these writes so it can select a few events of interest for targeted analysis. This gives us access to all of systrace's probes, such as the four components' life cycles, lock wait times, class verification, GC time, and so on. However, most atrace events are quite generic; from an event such as "B|pid|activityStart" we have no idea which Activity is actually being created.
In addition, Profilo provides fast access to the Java stack. Since getting a stack normally requires suspending the main thread, Profilo instead obtains the Java stack quickly by sending SIGPROF signals at intervals, a method similar to how Native crashes are captured.
Here is how Profilo's fast, low-cost Java stack capture works: when the signal handler catches the signal, it obtains the currently executing Thread, and from the Thread object it obtains the current thread's ManagedStack. The ManagedStack is a singly linked list that holds pointers to the current ShadowFrame or QuickFrame. It then walks the ShadowFrames or QuickFrames inside it to recover a readable call stack, thereby unwinding the current Java stack. The relationship between ManagedStack, ShadowFrame, and QuickFrame is shown in the figure below:
In this way, Profilo lets the thread keep running while we sample it, and the time it takes is almost negligible. However, Profilo does not yet support Android 8.0 and 9.0, and it uses many internal hacking techniques such as hooks, so for stability it is recommended to enable Profilo only for a sample of users.
Profilo project address
As mentioned earlier, Profilo ultimately relies on ftrace, and Systrace is also based mainly on Linux's ftrace mechanism, which helps us understand the runtime behavior of the Linux kernel for troubleshooting or performance analysis. The overall architecture of ftrace is shown below:
As you can see from the figure above, ftrace has two main components: the framework and a series of tracers. Each tracer accomplishes a different job, and they are uniformly managed by the framework. The trace information produced by ftrace is stored in a ring buffer, which is also managed by the framework. The framework uses debugfs to create the tracing directory under /debugfs and provides a series of control files.
Below, I present a project that uses PLT hook technology to capture atrace logs.
1. Use Profilo's PLT hook to hook the write and __write_chk methods of libc.so
Use PLT hook to capture atrace logs - project address
After running the project, click the button to enable atrace logging, and you will see native-layer log output like the following in Logcat:
2020-02-05 10:58:00.873 13052-13052/com.dodola.atrace I/HOOOOOOOOK: =============== install systrace hoook ==================
2020-02-05 10:58:00.879 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|inflate
2020-02-05 10:58:00.880 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|LinearLayout
2020-02-05 10:58:00.881 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.882 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|TextView
2020-02-05 10:58:00.884 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.885 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.888 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|notifyFramePending
2020-02-05 10:58:00.888 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.889 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|Choreographer#doFrame
2020-02-05 10:58:00.889 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|input
2020-02-05 10:58:00.889 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.889 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|traversal
2020-02-05 10:58:00.889 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|draw
2020-02-05 10:58:00.890 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|Record View#draw()
2020-02-05 10:58:00.891 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|DrawFrame
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|syncFrameState
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|prepareTree
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|query
2020-02-05 10:58:00.891 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|query
2020-02-05 10:58:00.891 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.892 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|query
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|query
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.892 13052-13052/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|query
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|query
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|setBuffersDimensions
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.892 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|dequeueBuffer
2020-02-05 10:58:00.894 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|importBuffer
2020-02-05 10:58:00.894 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|HIDL::IMapper::importBuffer::passthrough
2020-02-05 10:58:00.894 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.894 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.894 13052-13058/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|Compiling
2020-02-05 10:58:00.894 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= E
2020-02-05 10:58:00.894 13052-13075/com.dodola.atrace I/HOOOOOOOOK: ========= B|13052|query
It is important to note that B stands for the begin of an event and corresponds to its start time, while E stands for End and corresponds to its end time. B|pid|name and E events come in pairs, so we can compute the duration of each event by subtracting its begin time from the matching end time. For example, from the log above we can work out that the TextView-related event took about 3ms.
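Based on the pairing rule described above, here is a rough sketch of computing per-event durations from these B/E markers (the log format assumptions come from the output shown earlier; the class is illustrative, not part of the demo project):
import java.util.ArrayDeque;
import java.util.Deque;

public class AtraceDurationParser {

    private static class Frame {
        final String name;
        final long beginMs;
        Frame(String name, long beginMs) { this.name = name; this.beginMs = beginMs; }
    }

    private final Deque<Frame> stack = new ArrayDeque<>();

    // line is the payload written to trace_marker, e.g. "B|13052|Choreographer#doFrame" or "E".
    public void onTraceLine(String line, long timestampMs) {
        if (line.startsWith("B|")) {
            String[] parts = line.split("\\|", 3);
            stack.push(new Frame(parts.length == 3 ? parts[2] : "unknown", timestampMs));
        } else if (line.startsWith("E") && !stack.isEmpty()) {
            // Each E closes the most recently opened B on the same thread.
            Frame frame = stack.pop();
            System.out.println(frame.name + " took " + (timestampMs - frame.beginMs) + "ms");
        }
    }
}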
In addition, the following project shows how to use PLT hook technology to capture thread-creation stacks.
2. Use PLT hook to capture the thread-creation stack
Use PLT hook to capture the thread-creation stack - project address
After running the project, click the "Open Thread Hook" button and then the "New Thread" button. You will then see the stack captured at thread creation in Logcat:
2020-02-05 13:47:59.006 20159-20159/com.dodola.thread E/HOOOOOOOOK: stack:com.dodola.thread.ThreadHook.getStack(ThreadHook.java:16)
    com.dodola.thread.MainActivity$2.onClick(MainActivity.java:40)
    android.view.View.performClick(View.java:6311)
    android.view.View$PerformClick.run(View.java:24833)
    android.os.Handler.handleCallback(Handler.java:794)
    android.os.Handler.dispatchMessage(Handler.java:99)
    android.os.Looper.loop(Looper.java:173)
    android.app.ActivityThread.main(ActivityThread.java:6653)
    java.lang.reflect.Method.invoke(Native Method)
    com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:547)
    com.android.internal.os.ZygoteInit.main(ZygoteInit.java:821)
2020-02-05 13:47:59.007 20159-20339/com.dodola.thread E/HOOOOOOOOK: thread name:Thread-2
2020-02-05 13:47:59.008 20159-20339/com.dodola.thread E/HOOOOOOOOK: thread id:1057
2020-02-05 13:47:59.009 20159-20339/com.dodola.thread E/HOOOOOOOOK: stack:com.dodola.thread.ThreadHook.getStack(ThreadHook.java:16)
    com.dodola.thread.MainActivity$2$1.run(MainActivity.java:38)
2020-02-05 13:47:59.011 20159-20340/com.dodola.thread E/HOOOOOOOOK: inner thread name:thread3
2020-02-05 13:47:59.012 20159-20340/com.dodola.thread E/HOOOOOOOOK: inner thread id:1058
Profilo and PLT hook involve a lot of C/C++ and NDK development knowledge. Due to limited space, this part will not be explained in detail here. If you are interested in NDK development, you can look forward to my Awesome Android-NDK series, which will systematically cover NDK development. Stay tuned.
2. Automated jank detection scheme and optimization
1. Why do we need an automated jank detection scheme?
There are two main reasons:
- 1. System tools such as the CPU Profiler and Systrace are only suitable for targeted offline analysis.
- 2. Online and test environments need an automated jank detection scheme that can locate jank and, more importantly, record the scene in which the jank occurred.
2. Principle of the jank detection scheme
The principle comes from Android's message-handling mechanism: no matter how many Handlers a thread has, it has only one Looper, and any code executed on the main thread goes through Looper.loop(). Looper holds an mLogging object that is called before and after each message is processed. If the main thread janks, there must be a time-consuming operation inside dispatchMessage(), so we can monitor dispatchMessage() through this mLogging object.
Concrete implementation steps of the jank detection scheme
First, let’s look at the loop() method Looper uses to execute the message loop. The key code is as follows:
/**
* Run the message queue in this thread. Be sure to call
* {@link #quit()} to end the loop.
*/
public static void loop() {
...
for (;;) {
Message msg = queue.next(); // might block
if (msg == null) {
// No message indicates that the message queue is quitting.
return;
}
// This must be in a local variable, in case a UI event sets the logger
final Printer logging = me.mLogging;
if (logging != null) {
// 1
logging.println(">>>>> Dispatching to " + msg.target + " " +
msg.callback + ": " + msg.what);
}
...
try {
// 2
msg.target.dispatchMessage(msg);
dispatchEnd = needEndTime ? SystemClock.uptimeMillis() : 0;
} finally {
if (traceTag != 0) {
Trace.traceEnd(traceTag);
}
}
...
if (logging != null) {
// 3
logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
}
In Looper's loop() method, a log is printed before and after each message is dispatched (comment 2): before the message is executed, ">>>>> Dispatching to " is printed (comment 1), and after the message finishes, "<<<<< Finished to " is printed (comment 3). Because the two logs are different, we can use them to determine the exact moments before and after a message is executed.
Therefore, the concrete implementation can be summarized in the following steps:
- 1. First, call Looper.getMainLooper().setMessageLogging() to set our own Printer implementation class that prints the logs. This way, our Printer implementation is called before and after every message is executed.
- 2. If we match ">>>>> Dispatching to ", we post a delayed task: once the specified threshold elapses, a task on a child thread captures the current main-thread stack together with some current scene information, such as memory usage, CPU, network status, and so on.
- 3. If "<<<<< Finished to " is matched within the threshold, the message finished in time and the jank we were watching for did not occur, so we cancel the pending child-thread task. A minimal sketch of these three steps is shown below.
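Putting the three steps together, a minimal sketch of such a Printer-based detector might look like the following (the threshold and class name are illustrative; real implementations such as BlockCanary are far more complete):
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;
import android.util.Log;
import android.util.Printer;

public class BlockMonitor {

    private static final long BLOCK_THRESHOLD_MS = 1000L;

    private static final HandlerThread sWorker = new HandlerThread("block-monitor");
    static { sWorker.start(); }
    private static final Handler sHandler = new Handler(sWorker.getLooper());

    // Runs on the worker thread only if a message overruns the threshold.
    private static final Runnable sDumpTask = () -> {
        StringBuilder sb = new StringBuilder();
        for (StackTraceElement element : Looper.getMainLooper().getThread().getStackTrace()) {
            sb.append(element).append('\n');
        }
        Log.w("BlockMonitor", "Main thread blocked over " + BLOCK_THRESHOLD_MS + "ms:\n" + sb);
    };

    public static void install() {
        Looper.getMainLooper().setMessageLogging(new Printer() {
            @Override
            public void println(String x) {
                if (x.startsWith(">>>>> Dispatching")) {
                    // Message started: schedule a stack dump if it runs past the threshold.
                    sHandler.postDelayed(sDumpTask, BLOCK_THRESHOLD_MS);
                } else if (x.startsWith("<<<<< Finished")) {
                    // Message finished in time: cancel the pending dump.
                    sHandler.removeCallbacks(sDumpTask);
                }
            }
        });
    }
}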
3. AndroidPerformanceMonitor
AndroidPerformanceMonitor (BlockCanary) is a non-intrusive performance monitoring component that reports jank in the form of notifications. Its working principle is exactly the jank monitoring implementation we just described.
Let’s go through a simple example to explain its use.
First, configure the dependencies in the module's build.gradle as follows:
// To enable online monitoring in release builds, use api
api 'com.github.markzhai:blockcanary-android:1.5.0'
// To enable BlockCanary's jank monitoring and notifications only in debug builds, use
debugApi 'com.github.markzhai:blockcanary-android:1.5.0'
releaseApi 'com.github.markzhai:blockcanary-no-op:1.5.0'
Second, enable jank monitoring in the Application's onCreate method:
BlockCanary.install(this, new AppBlockCanaryContext()).start();
Finally, extend the BlockCanaryContext class to implement your own monitoring configuration context:
public class AppBlockCanaryContext extends BlockCanaryContext {

    // Provides the application identifier, e.g. a flag specified at packaging time
    // that distinguishes the version and channel
    public String provideQualifier() {
        return "unknown";
    }

    // Provides the user ID
    public String provideUid() {
        return "uid";
    }

    // Network type at the time of the block, e.g. 2G, 3G, 4G, wifi
    public String provideNetworkType() {
        return "unknown";
    }

    // Monitoring duration: beyond this duration BlockCanary stops monitoring.
    // Used together with BlockCanary's isMonitorDurationEnd. Unit: hour, -1 means unlimited
    public int provideMonitorDuration() {
        return -1;
    }

    // Specifies the block (jank) threshold, in millis
    public int provideBlockThreshold() {
        return 1000;
    }

    // Thread stack dump interval used when a block occurs; BlockCanary dumps the
    // main-thread stack at this interval, in millis
    public int provideDumpInterval() {
        return provideBlockThreshold();
    }

    // Path for saving log files, e.g. "/blockcanary/", if permission allows
    public String providePath() {
        return "/blockcanary/";
    }

    // Whether to notify the user when a block occurs
    public boolean displayNotification() {
        return true;
    }

    // Compresses log files into a .zip file; return true if compression succeeds
    public boolean zip(File[] src, File dest) {
        return false;
    }

    // Uploads the zipped log file
    public void upload(File zippedFile) {
        throw new UnsupportedOperationException();
    }

    // Packages we care about; return null to simply use the process name
    public List<String> concernPackages() {
        return null;
    }

    // Whether to filter stacks using the packages specified by concernPackages
    public boolean filterNonConcernStack() {
        return false;
    }

    // Specifies a whitelist; blocks whose stacks hit the whitelist are not reported
    public List<String> provideWhiteList() {
        LinkedList<String> whiteList = new LinkedList<>();
        whiteList.add("org.chromium");
        return whiteList;
    }

    // When a whitelist is used, whether to delete the log files of blocks that hit it
    public boolean deleteFilesInWhiteList() {
        return true;
    }

    // Callback invoked when a block occurs
    public void onBlock(Context context, BlockInfo blockInfo) {
    }
}
As you can see, in the configuration above we specified a jank threshold of 1000ms. Next, we can test BlockCanary's monitoring effect. Here I add the following code to an Activity's onCreate method to make the main thread sleep for 3s:
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
Then we run the project and open the app, and a notification appears showing the blocked stack in an interface similar to LeakCanary's.
Beyond providing a graphical interface that lets developers and testers see the cause of a jank directly, BlockCanary's greatest value is collecting a large volume of logs online or during automated Monkey tests, and analyzing them along two dimensions:
- Jank duration.
- Grouping and sorting by the number of times the same stack appears.
BlockCanary has the following advantages:
- Non-intrusive.
- Convenient and precise; it can locate the problem down to a single line of code.
So what are the shortcomings of this automated jank detection scheme?
During the jank period the application really is blocked, but the information captured may be inaccurate. As with OOM analysis, the final stack is only a symptom, not the stack that actually caused the problem. Let's look at the following schematic diagram:
Suppose the main thread janks between T1 and T2. The detection scheme captures the stack at T2, but the jank may actually have been caused by a long-running function at some other moment within that period; by the time we capture the stack, the really slow code may have already finished executing. Therefore the single stack captured at T2 cannot reflect the scene of the jank: the stack we end up with is only a surface symptom, not where the real problem hides.
So how do we optimize for this situation?
We can capture multiple stacks during the jank period instead of only the last one. That way, when a jank occurs, we can clearly reconstruct the whole scene from this stack information: with multiple on-site stacks we know exactly what happened during the jank and which function calls took longer. Next, let's look at the following flowchart of the optimized jank detection:
Based on the figure, the optimized implementation steps can be summarized as follows:
- 1. First, start monitoring via the startMonitor method.
- 2. Then, collect stack information at a high frequency. If jank occurs, call the endMonitor method.
- 3. Next, write the stack information collected earlier to a file.
- 4. Finally, report it to our server at an appropriate time.
With this optimization, we can know which methods were executing and which methods were time-consuming throughout the jank period, as sketched below.
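A rough sketch of this high-frequency sampling idea (the interval, threshold handling, and class name are illustrative assumptions): while a message is being dispatched, a worker thread keeps sampling the main-thread stack; if the message finishes within the threshold the samples are discarded, otherwise they are kept for reporting.
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class StackSampler {

    private static final long SAMPLE_INTERVAL_MS = 50L;

    private final HandlerThread workerThread = new HandlerThread("stack-sampler");
    private final Handler workerHandler;
    private final List<String> samples = Collections.synchronizedList(new ArrayList<String>());
    private volatile boolean sampling;

    public StackSampler() {
        workerThread.start();
        workerHandler = new Handler(workerThread.getLooper());
    }

    // Called when ">>>>> Dispatching" is seen (startMonitor).
    public void startMonitor() {
        samples.clear();
        sampling = true;
        workerHandler.postDelayed(sampleTask, SAMPLE_INTERVAL_MS);
    }

    // Called when "<<<<< Finished" is seen (endMonitor); keep the samples only if the message was slow.
    public List<String> endMonitor(boolean blocked) {
        sampling = false;
        workerHandler.removeCallbacks(sampleTask);
        return blocked ? new ArrayList<>(samples) : Collections.<String>emptyList();
    }

    private final Runnable sampleTask = new Runnable() {
        @Override
        public void run() {
            if (!sampling) {
                return;
            }
            StringBuilder sb = new StringBuilder();
            for (StackTraceElement element : Looper.getMainLooper().getThread().getStackTrace()) {
                sb.append(element).append('\n');
            }
            samples.add(sb.toString());
            // Keep sampling until the message finishes or the block is reported.
            workerHandler.postDelayed(this, SAMPLE_INTERVAL_MS);
        }
    };
}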
However, there is another problem with handling this massive volume of jank stacks: high-frequency sampling produces too many reported stacks, which puts great pressure on the server. Let's analyze how to reduce the amount of stack information the server has to process.
Among the multiple stacks collected during one jank, there is a high probability that some are repeated, and these repeated stacks are exactly what we should focus on. We can compute a hash for each stack collected within one jank to find the duplicates. This reduces the amount of data the server needs to process and filters out what deserves attention, so developers can find the cause of the jank much faster. A small sketch of such a stack key follows.
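For the de-duplication step, here is a small sketch of collapsing a stack sample into a stable key so that identical stacks within one jank can be counted instead of reported one by one (MD5 is just one possible choice of hash; the class name is illustrative):
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class StackKey {

    // Collapses a stack trace into a stable key so identical stacks are aggregated with a count.
    public static String of(StackTraceElement[] stack) {
        StringBuilder sb = new StringBuilder();
        for (StackTraceElement e : stack) {
            // Only class and method names are used, so small line-number differences still aggregate.
            sb.append(e.getClassName()).append('#').append(e.getMethodName()).append(';');
        }
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(sb.toString().getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest).toString(16);
        } catch (NoSuchAlgorithmException e) {
            return Integer.toHexString(sb.toString().hashCode());
        }
    }
}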
4. Summary
In this section we learned the principle of automated jank detection, put the scheme into practice with AndroidPerformanceMonitor, and finally looked at the scheme's shortcomings and how to optimize them.
3. Summary
In this article we gave a comprehensive, in-depth explanation of jank analysis methods and tools, and of the automated jank detection scheme and its optimization. To briefly summarize the two major topics covered:
- 1. Jank optimization analysis methods and tools: background, jank analysis using shell commands to analyze CPU time consumption, and jank optimization tools.
- 2. Automated jank detection scheme and optimization: the detection principle, AndroidPerformanceMonitor in practice, and the scheme's problems and optimization.
In the next article, the author will take you further into jank optimization. Stay tuned ~
Deep Dive into Android Jank Optimization (Part 2)
Reference links:
1. Top domestic team course on Android performance analysis and optimization, Chapter 6: Jank Optimization
2. Geek Time, Android Development Master Class: Jank Optimization
3. Android Mobile Performance in Practice, Chapter 4: CPU
4. Android Mobile Performance in Practice, Chapter 7: Fluency
5. Interpreting the output of Android dumpsys cpuinfo
6. How to explain the definitions of "UV" and "PV" clearly and simply?
7. Nanoscope - An extremely accurate Android method tracing tool
8. DroidAssist - A lightweight Android Studio Gradle plugin based on Javassist for editing bytecode
9. Lancet - A lightweight and fast AOP framework for Android apps and SDK developers
10. MethodTraceMan - For quickly finding time-consuming methods to solve Android app jank problems
11. CPU usage of processes in Linux
12. Using ftrace
13. Profilo - A library for performance traces from production
14. Introduction to ftrace
15. atrace source code
16. AndroidAdvanceWithGeektime / Chapter06
17. AndroidAdvanceWithGeektime / Chapter06-plus