Dali Intelligent Technology Team – Client
Background
With the rapid growth of the smart-lamp business, client-side compilation has kept deteriorating. A single local sync of the project can take up to five minutes, GC overhead limit errors are frequent, and development efficiency suffers badly. CI compilation often takes more than 20 minutes, which seriously slows down merging code.
This deterioration has seriously affected day-to-day development work and urgently needs to be improved.
Preliminary investigation
In view of the situation above, we first did a focused investigation of local compilation.
The first full local compilation takes 10 minutes. A second, incremental compilation with no changes at all takes up to 15 minutes. The third incremental compilation fails outright with a GC overhead limit error. If the Java process is killed after each compilation, the problem does not occur; each compilation fills the Java process's memory up to 8 GB.
Syncing the project locally takes up to 9 minutes; the second sync finishes after 10 minutes; the third reports a GC overhead limit error. Likewise, if the Java process is killed after each sync, the problem does not appear.
At the same time, because the local Java process occupies so much memory, the computer heats up and lags noticeably, which badly hurts the development experience.
Analysis
First, the memory leak is easy to observe from local compilation: the memory footprint roughly doubles after each sync, indicating a serious leak. Solving these problems should go a long way toward relieving the local compilation pain.
Memory governance
Filtering redundant configurations with variantFilter
At first we guessed that some plugin was leaking, but we had no idea which one or why. The preliminary investigation showed that sync occupies a large amount of memory and leaks badly (repeated syncs eventually stall and fail with an error), so we decided to start memory governance from the sync scenario.
A later review confirmed this was the right decision. When we run out of ideas on a problem, it is best to find a simple scenario and analyze it in depth. The key here was choosing sync as that simple scenario: the sync task is much simpler than build and compile, and much easier for reproducing the problem.
First we need to know the memory usage after sync. We use VisualVM to watch sync's memory behaviour in real time by monitoring the newly created Java process, which is the daemon process created by Gradle.
Watching memory change during sync: after sync completes, the heap has grown to 8 GB and actual memory usage is as high as 6.3 GB. We dump an hprof file and use MAT to analyze the memory.
In the dump file, 83% of the 6.3 GB is DefaultConfiguration_Decorated objects. I did not know at first whether this was normal. Around that time I happened to read an internal document about solving OOM problems during compilation, which pointed out that the number of Configuration objects is roughly the number of modules × flavors × buildTypes × the Configurations belonging to each variant.
Our project has three flavors (one of them a recent addition, which explains why the deterioration got so much worse) and 80+ modules in the main repository, plus debug and release buildTypes. Android Studio sync loads every flavor and buildType combination so it can offer all the options in the build variants panel; 80 modules × 3 flavors × 2 buildTypes is roughly 480 variants, each carrying its own set of Configurations. As a result, sync's Configuration objects take up to 5 GB of memory in our project.
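To see how large this blow-up is in a concrete project, a small diagnostic can be dropped into the root build.gradle. This is only a sketch for illustration, not part of the original fix; the printed counts will simply be whatever your project produces:

// Count the Configuration objects created across all modules once evaluation finishes.
// The total grows with modules × flavors × buildTypes, which is what fills the heap during sync.
gradle.projectsEvaluated {
    int total = 0
    rootProject.allprojects.each { p ->
        println("configurations in ${p.path}: ${p.configurations.size()}")
        total += p.configurations.size()
    }
    println("total configurations: ${total}")
}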
Here we can follow the Android documentation and use variantFilter to ignore the two flavors other than the current one. This reduces the memory footprint during sync and development and also shortens configuration time. Add the following code to the project's build.gradle:
if (!project.rootProject.ext.buildContext.group) {
    // Reduce flavor configuration for local development
    afterEvaluate {
        if (it.plugins.hasPlugin('com.android.library') || it.plugins.hasPlugin('com.android.application')) {
            def flavorName = DEFAULT_FLAVOR_NAME
            def mBuildType = DEFAULT_BUILD_TYPE
            boolean needIgnore = true
            for (String s : gradle.startParameter.taskNames) {
                s = s.toLowerCase()
                println("variantFilter taskName = ${s}")
                // Do not use variantFilter when publishing components or running component checks
                if (s.contains("publish") || s.contains("checkchanged")) {
                    needIgnore = false
                }
                if (s.contains("release")) {
                    mBuildType = "release"
                }
                if (s.contains("flavor1")) {
                    flavorName = "flavor1"
                    break
                } else if (s.contains("flavor2")) {
                    flavorName = "flavor2"
                    break
                } else if (s.contains("flavor3")) {
                    flavorName = "flavor3"
                    break
                }
            }
            if (needIgnore) {
                println("variantFilter flavorName = ${flavorName}, mBuildType = ${mBuildType}")
                android {
                    variantFilter { variant ->
                        def names = variant.flavors*.name
                        if (!names.empty && !names.contains(flavorName)) {
                            setIgnore(true)
                            println("ignore variant ${names}")
                        }
                        def buildType = variant.getBuildType().getName()
                        if (buildType != mBuildType) {
                            setIgnore(true)
                            println("ignore variant ${buildType}")
                        }
                    }
                }
            }
        }
    }
}
Set the default flavor and build type in gradle.properties. If you need to switch the flavor during development, you can do so here.
# flavor default setting
DEFAULT_FLAVOR_NAME = flavor1
DEFAULT_BUILD_TYPE = debug
After filtering, let’s take a look at sync memory:
The heap size is 5.5 GB and actual memory usage is 3.2 GB; adding variant filtering cut the memory footprint by about 3 GB.
Sync memory leak governance
The filtering above saved 3 GB, which is good news. We then moved on to the problem of sync hitting the GC overhead limit after several runs. When we sync again, memory grows by another 3.2 GB, doubling the footprint.
There are two conjectures:
1. Failed to reclaim the memory occupied during the last sync, resulting in a memory leak.
2. The second sync should have used the cache of the first sync, but for some reason it didn’t reuse it and created a new cache on its own.
Dump the heap first: here we capture the memory directly after two syncs to see where the leak occurs.
As you can see, the number of Configuration objects doubles after two syncs. What causes this? At the time I was still unfamiliar with MAT, so I searched for the correct way to use it and found the Leak Suspects feature, which automatically points out likely memory leaks.
For problem A, defaultConfiguration_Decorated is referenced by dependencyManager in SEER. At this point I still was not sure whether it was a memory leak or simply heavy memory use. In hindsight, MAT had already clearly flagged it as a leak and the problem should have been obvious by then, but we did not yet understand Gradle's sync mechanism well enough.
Looking at the path2GC of the VisitableURLClassLoader above (that is, its chain of references to GC roots), we found that a thread in the Build Scan package held a reference to it, causing the leak, and after two syncs that thread count went from one to two!
This analysis confirms a leak. The GC root comes from an internal company plugin we had integrated. After the plugin's maintainer fixed the problem, we ran sync twice in a row and memory still doubled.
By this point I had also learned how to use MAT to analyze memory leaks by looking directly at the hprof file after sync. In Leak Suspects, the first problem is now an ActionRunningListener, and the second is Configuration.
The second memory leak is the big one, totaling 1.1 GB, while the first is only 280 MB. Let's analyze the first one first.
We can see two identical listener objects here; we look directly at the path2GC of one of them to find the GC root that keeps it alive.
As you can see here, the GC root comes from the KVJsonHelper class in another plugin: KVJsonHelper holds a reference to Gradle in a static variable.
I also had to figure out why this is a leak, since both syncs use the same Gradle daemon process and a static variable exists only once per process. After checking the relevant material and the company's internal documents, I finally found the reason.
The Gradle object is recreated on every build or sync (it is not cached), and each recreation uses a new classloader, so the Gradle objects are not the same across builds. The static variable keeps referencing the old Gradle object, which in turn references the ActionRunningListener, so neither can ever be collected, causing the leak. This is tied to Gradle's classloading mechanism; see the Gradle documentation for details.
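To make the pattern concrete, the leak looks roughly like the sketch below. The class and field names are hypothetical stand-ins for KVJsonHelper, assuming a Groovy-written plugin; the point is only the static reference to the Gradle object:

import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.invocation.Gradle

// Hypothetical sketch of the anti-pattern found in the leaking plugins.
class LeakyHelperPlugin implements Plugin<Project> {
    // Anti-pattern: the static field outlives the build, so the old Gradle object,
    // its listeners (e.g. the ActionRunningListener) and its classloader are never collected.
    static Gradle sGradle

    @Override
    void apply(Project project) {
        sGradle = project.gradle   // strong reference that survives across syncs/builds
    }
}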
We asked the plugin's owner, explained the context, and they fixed the problem for us. Syncing again, memory still doubled.
There turned out to be quite a few problems like this. The subsequent investigations were similar to the above, so I won't repeat them: we found several more plugins with the same issue, where a GC root referenced Gradle directly or indirectly so the Gradle object could not be reclaimed. The Gradle object and the objects it references add up to as much as 3 GB.
There was a small interlude here. After we had resolved all the known leaks, we synced again and memory still doubled, yet when we dumped the heap for analysis the memory had already been reclaimed. It turns out VisualVM's dump function triggers a full GC first. Our project also triggers a full GC after sync completes, but the mbox plugin executes buildSrc after sync, so that full GC fails to collect everything; with no subsequent GC, the memory stays high.
At this point, the memory leaks were completely resolved. In total we helped five plugins fix leaks, reducing the local memory footprint from 3 GB to 100 MB. One question remained: after GC the actual memory footprint is 100 MB, yet the heap is still 6 GB. Why? That brings us to the Gradle JVM tuning below.
Gradle JVM tuning
Sync memory did come down, but build compilation was still very slow, and colleagues complained that the release build on CI was far too slow. What to do? Checking the CI compile time at this point, it was more than 20 minutes.
I picked a long compile job and looked at its most time-consuming tasks.
The overall compile time was 24 minutes, of which the R8 task took 18 minutes. Turning to memory analysis, GC time was close to 12 minutes, half of the total.
Looking at memory, I found that by the end of compilation the heap was almost full, causing constant GC.
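One way to double-check that GC really is the bottleneck is to enable GC logging for the Gradle daemon in gradle.properties. This is a sketch on top of our setup rather than something from the original investigation; the log file name is a placeholder and the -Xlog syntax assumes JDK 9+:

# Keep the existing heap setting and add unified GC logging for the daemon (JDK 9+).
org.gradle.jvmargs=-Xmx8192M -Xlog:gc*:file=gradle-gc.log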
It seemed the 8 GB maximum heap we had set was not enough, so we decided to increase it to 16 GB. In gradle.properties, raise the maximum Java heap for the Gradle process:
org.gradle.jvmargs=-Xmx16384M -XX:MaxPermSize=8192m -Dkotlin.daemon.jvm.options="-Xmx8192M"
The parameters above raise the Gradle process's maximum heap to 16 GB and the Kotlin daemon's to 8 GB. Trying it locally, compilation is indeed much faster. On CI, compiling the release package dropped from 20 minutes to 10 minutes, far better than we expected.
The main reason is that most of our compile time was spent in GC (50%+); once we raised the process's maximum memory, GC time dropped dramatically, and compile time with it.
At this point we found a new problem: as compilation proceeds the memory footprint grows and the heap grows with it, eventually reaching 13 GB. But when compilation finishes, memory is reclaimed down to 1 GB while the heap stays at 13 GB, leaving 12 GB of free heap. Isn't that a waste of space?
Similar to the question left open after sync, we tried to reduce the proportion of free space: -XX:MaxHeapFreeRatio=60 -XX:MinHeapFreeRatio=40 specify the maximum and minimum proportion of free memory in the heap.
In fact, the figure above already shows the result with these parameters set: it does not work. Why? Part of the answer is that the GC does not resize the heap in real time.
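For reference, these flags would be appended to the daemon's JVM arguments in gradle.properties, for example as below; as just noted, in our case the heap stayed large because the collector does not resize it in real time:

# Ask the JVM to keep free heap between 40% and 60% after GC (did not help here).
org.gradle.jvmargs=-Xmx16384M -XX:MaxPermSize=8192m -Dkotlin.daemon.jvm.options="-Xmx8192M" -XX:MaxHeapFreeRatio=60 -XX:MinHeapFreeRatio=40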
So what can be done with this free memory? I tried a few things and found that Gradle has already optimized its own daemon process quite well; adding extra parameters to tune it may well be counterproductive.
At this point we don't need to worry about the free memory itself; we just need to make sure the Java process doesn't interfere with everyday use of the machine.
The Gradle daemon has a parameter that controls how long an idle daemon process is kept alive. The default keep-alive time is 3 hours; we set it to 1 hour so the daemon does not occupy the machine's memory for long stretches and affect other work.
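The keep-alive time is controlled by a standard Gradle property (value in milliseconds), so the one-hour setting looks like this in gradle.properties:

# Let an idle Gradle daemon exit after 1 hour instead of the default 3 hours.
org.gradle.daemon.idletimeout=3600000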
Optimization results
At this point, our memory governance came to an end.
- We fixed the memory leaks during project compilation, so the memory footprint of repeated compilations now grows only slowly, completely eliminating GC overhead limit errors. Meanwhile, sync time dropped from 8 minutes to 1.5 minutes, improving local development efficiency.
- We increased the Gradle process's maximum memory, reducing GC time from 50% of compilation to 5%, and cut CI compile time from 20 minutes to 10 minutes, greatly improving development efficiency.
Memory governance turned out to be very effective: it not only solved the local compilation problems but also sped up CI compilation.
Conclusion
From the work above, we can summarize the following lessons:
- In a multi-flavor project, use variantFilter to filter out non-essential variants and reduce the memory footprint during compilation.
- When writing Gradle plugins, be careful not to hold Gradle objects in static variables, to avoid unnecessary memory leaks.
- Configure the Gradle daemon's parameters (heap size, keep-alive time) appropriately to reduce the proportion of time spent in GC during project compilation.