preface
This article is the result of reviewing and re-studying Android memory optimization. I organized it to make my own knowledge of the topic more comprehensive, and I share it in the hope that you can also take some inspiration from it.
The article is divided into three parts: basics of memory optimization, memory analysis tools, and memory optimization techniques. The analysis tools in particular deserve hands-on practice, because the analysis results of the applications we develop are the most important basis for optimization.
A common mistake in memory optimization is to assume that less memory is always better. Insufficient VSS (Virtual Set Size, virtual memory), PSS (Proportional Set Size, physical memory attributed to the process), or Java heap memory may cause lag, but that does not mean that the less memory an application uses, the better the user experience. How much memory an application should use depends on the device and the system, not on an absolute value such as 300 MB or 400 MB.
VSS refers to virtual memory consumption, including all memory occupied by the Shared Dynamic Library and allocated but unused memory.
When system memory is sufficient, we can use more memory to improve application performance. When system memory is insufficient, we should allocate on demand and release in time, and when the system is under memory pressure, we should be able to quickly release our various caches to relieve that pressure.
1. Three reasons to do memory optimization
The purpose of memory optimization is to reduce the Crash rate, make the application run more smoothly, and keep the application alive longer.
1. Reduce Crash rate
There are many reasons for an Android app to crash, and memory optimization helps our app avoid crashes caused by memory problems, whose specific manifestation is the memory-overflow (OOM) exception. There are many causes of OOM; I will cover them in more detail later.
2. Run more smoothly
In Android, there are many reasons for interface lag, one of which is caused by memory problems. The reason why memory problems affect interface fluency is because of Garbage Collection (GC). During GC, all threads stop, including the main thread. When both GC and drawing are triggered at the same time, the execution of the drawing is put on hold, causing frames to drop and the interface to get stuck.
For more on GC, see my last article.
3. Long survival time
If an application is running in the background and consumes a lot of memory, it will be cleared first. The process-clearing mechanism is called LowMemoryKiller, which will be explained in more detail later.
Suppose a user, Zhang, wants to buy something in our e-commerce app. Just as he finds a product he likes and is about to pay, his wife asks him to change the baby's diaper. When Zhang opens the app again, the product page has been closed, or the app has been killed entirely. At that moment he remembers the baby's milk-powder money and may simply walk away from the purchase.
It is not uncommon for users to be interrupted while using apps on mobile devices, and if our app does not survive long enough for the user to come back, making them start over is a poor experience.
2. Dalvik
To understand the memory management mechanism of Android applications, it is necessary to understand Dalvik, the virtual machine that carries Android applications. Although Android now uses ART to execute applications, ART is itself an optimization built on Dalvik.
Dalvik is short for the Dalvik Virtual Machine (DVM), one of the core components of the Android platform. The differences between Dalvik and the JVM are as follows.
2.1 Six characteristics of Dalvik
1. Register-based
The JVM is stack-based, meaning data must be pushed to and read from the operand stack, which requires more instructions and can result in slower execution, not ideal for performance-first mobile devices.
Dalvik is register-based, and its instructions are more compact and concise. Because operands are specified explicitly, register-based instructions are larger than stack-based ones, but since fewer instructions are needed, the total code size does not grow much.
2. Dx tool
In Java SE programs, Java classes are compiled into one or more .class files, which are then packaged into JAR files, and the JVM obtains the corresponding bytecode from those .class and JAR files.
Dalvik has its own bytecode format. The dx tool converts all .class files into a single .dex file, and Dalvik then reads instructions and data from that .dex file.
3. Share the memory area with Zygote
Dalvik instances are created by the Zygote incubator. Zygote is itself a Dalvik VM process; when the system needs to create an application process, Zygote forks itself, quickly creating and initializing a new DVM instance.
For some read-only system libraries, all Dalvik instances can share a memory area with Zygote to save memory overhead.
4. Independent process space
In Android, each application runs in its own Dalvik VM instance, and each Dalvik VM instance runs in an independent process space. This mechanism enables Dalvik to run multiple processes simultaneously in limited memory.
5. Class sharing mechanism
Dalvik has a preload-and-share mechanism, which enables different applications to share the same classes at run time, thus achieving higher efficiency.
The JVM has no such sharing mechanism: different packaged programs are independent of each other, and even if they use the same class, it is loaded and run separately at run time and cannot be shared.
6. Incompatible with the JVM
Dalvik is not a Java virtual machine: it is not implemented according to the Java Virtual Machine Specification, and the two are incompatible.
2.2 Viewing Dalvik Heap Information
Each handset vendor can set the heap sizes available to each process on the device. The meaning of each heap-size property is described below, and the values can be viewed with commands such as the following.

```shell
adb shell getprop dalvik.vm.heapsize
```
##### 1. Initial heap allocation value
dalvik.vm.heapstartsize is the initial size of the heap allocated to a process. The smaller the value, the more slowly system memory is consumed, but the more often the application has to extend the heap as it grows, triggering GC and heap adjustment and making the application slower.
The larger the value, the smoother the application runs, but the fewer applications the device can keep running.
##### 2. Maximum available memory of a single application
dalvik.vm.heapgrowthlimit is the maximum memory available to a single application. If android:largeHeap is declared as true in the manifest file, the app can use memory up to heapsize before an OOM occurs; otherwise an OOM occurs when heapgrowthlimit is reached.
##### 3. Maximum heap memory size
dalvik.vm.heapsize is the maximum heap memory available to a process. If the application requests more memory than this value, it receives an OOM.
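These properties come back from getprop as plain strings such as "8m" or "512m". A plain-Java sketch (not Android code; the "k"/"m"/"g" suffix convention is an assumption based on common getprop output) can convert such a value to bytes for comparison:

```java
public class HeapSizeProp {

    // Convert a dalvik.vm.* heap property value such as "256m" or "1g" to bytes.
    // The k/m/g suffix convention is an assumption based on typical getprop output.
    static long toBytes(String prop) {
        String value = prop.trim().toLowerCase();
        long multiplier = 1;
        if (value.endsWith("k")) {
            multiplier = 1024L;
        } else if (value.endsWith("m")) {
            multiplier = 1024L * 1024;
        } else if (value.endsWith("g")) {
            multiplier = 1024L * 1024 * 1024;
        }
        if (multiplier > 1) {
            value = value.substring(0, value.length() - 1);
        }
        return Long.parseLong(value) * multiplier;
    }

    public static void main(String[] args) {
        System.out.println(toBytes("256m")); // 268435456 bytes
    }
}
```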
3. ART
The full name of ART is Android Runtime. ART is a new application runtime environment introduced in Android 4.4. It is a virtual machine that executes native machine instructions and is intended to replace the Dalvik virtual machine.
Both Dalvik VM and ART can support the running of Java applications that have been converted to the.dex (Dalvik Executable) format.
The differences between ART and Dalvik are as follows.
1. The precompiled
Every time an application runs on Dalvik, its bytecode needs to be converted into machine code by the just-in-time (JIT) compiler, which reduces running efficiency.
In ART, the system precompiles the application at install time (AOT, Ahead-Of-Time compilation), converting the bytecode into machine code and storing it locally. The application then does not need to be compiled every time it runs, and running efficiency is greatly improved.
2. Garbage collection algorithm
Dalvik uses the mark-sweep garbage collection algorithm, and starting garbage collection causes two pauses (one in the traversal phase and one in the marking phase).
Under ART, GC is faster than under Dalvik, partly because the application itself does some of the garbage collection work. After GC starts, there is only one pause instead of two. Moreover, ART uses a technique called packed pre-cleaning to do much of the work before the pause, reducing the workload during the pause.
3. 64-bit support
Dalvik was designed for 32-bit CPUs, while ART supports 64-bit CPUs and is compatible with 32-bit ones, which was one of the main reasons Dalvik was phased out.
4. Java garbage collection mechanism
To understand why memory leaks occur, it is important to understand Java’s garbage collection mechanism. Let’s take a look at the Java garbage collector.
4.1 Accessibility analysis algorithm
The Reachability Analysis algorithm is used to determine whether an object is alive. Its basic idea is to use a set of root objects called GC Roots as the starting node set and search downward from these nodes along reference relationships. The path traversed during the search is called a Reference Chain. If there is no reference chain between an object and any GC Root, that is, the object is unreachable, the object can no longer be used.
GC Roots objects include:
- Objects referenced in the virtual machine stack (the local variable table in stack frames), such as the parameters, local variables and temporary variables used in methods currently being executed by each thread
- Objects referenced by class static fields in the method area, such as references in the string constant pool
- Objects referenced by JNI (native methods) in the native method stack
- Internal references of the Java virtual machine, such as the Class objects corresponding to basic data types
- All objects held as locks by the synchronized keyword
- JMXBeans, callbacks registered in JVMTI, and local code caches reflecting the internals of the Java virtual machine
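The mark phase of reachability analysis is essentially a graph traversal starting from the GC Roots. A minimal sketch on a toy object graph (the integer IDs and the graph itself are invented purely for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ReachabilityDemo {

    // Mark every object reachable from the roots by walking the reference graph
    static Set<Integer> mark(Map<Integer, List<Integer>> references, List<Integer> gcRoots) {
        Set<Integer> marked = new HashSet<>();
        Deque<Integer> pending = new ArrayDeque<>(gcRoots);
        while (!pending.isEmpty()) {
            int obj = pending.pop();
            if (marked.add(obj)) {
                // Follow the outgoing references of a newly marked object
                pending.addAll(references.getOrDefault(obj, List.of()));
            }
        }
        return marked;
    }

    public static void main(String[] args) {
        // Root 1 references 2, 2 references 3; 4 references 5 but no root reaches them
        Map<Integer, List<Integer>> heap = Map.of(1, List.of(2), 2, List.of(3), 4, List.of(5));
        System.out.println(mark(heap, List.of(1))); // 4 and 5 are unreachable garbage
    }
}
```

Objects 4 and 5 have references between them, but no reference chain from a GC Root, so they are exactly the "unreachable" objects the collector may reclaim.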
4.2 Four Reference Types
After JDK 1.2, Java expanded the concept of references, dividing them into Strong Reference, Soft Reference, Weak Reference and Phantom Reference.
1. Strong reference
A strong reference is a reference assignment in program code, similar to Object obj = new Object(). As long as the strong reference exists, the garbage collector will not reclaim the referenced Object.
2. Soft references
Soft references describe objects that are useful but not necessary. Before the system throws an out-of-memory error, objects associated only with soft references are included in a second round of collection; if there is still not enough memory after that collection, the out-of-memory error is thrown. Soft references are implemented with the SoftReference class.
3. A weak reference
Weak references also describe non-essential objects. Regardless of whether memory is currently sufficient, objects associated only with weak references are reclaimed at the next garbage collection. Weak references are implemented with the WeakReference class.
4. Phantom reference
Phantom references are the weakest kind of reference relationship. Whether an object has a phantom reference has no effect on its lifetime at all; the only purpose of setting one is to receive a notification when the object is reclaimed by the garbage collector.
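A small runnable sketch of the difference between strong and weak references. Whether System.gc() actually clears an unreachable weak reference is typical behavior rather than a guarantee of the spec, so only the strongly-reachable case is stated firmly:

```java
import java.lang.ref.WeakReference;

public class ReferenceDemo {

    // Returns true if the weakly referenced object is still alive after a GC hint
    static boolean survivesGc(boolean keepStrongReference) {
        Object target = new Object();
        WeakReference<Object> weak = new WeakReference<>(target);
        if (!keepStrongReference) {
            target = null; // drop the strong reference; only the weak one remains
        }
        System.gc(); // a hint; most collectors clear unreachable weak refs here
        boolean alive = weak.get() != null;
        if (target != null) {
            // use the strong reference so it stays live past the GC point
            System.out.println("still strongly reachable: " + target.hashCode());
        }
        return alive;
    }

    public static void main(String[] args) {
        System.out.println(survivesGc(true));  // strongly reachable: always survives
        System.out.println(survivesGc(false)); // typically false after the GC hint
    }
}
```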
4.3 Generational recycling algorithm
The Generational Collection algorithm divides the Java heap into different regions and assigns objects to those regions based on their age (the number of garbage collections an object has survived), so that the garbage collector can collect just one or a few regions at a time. That is why there are Minor GC (young generation), Major GC (old generation), and Full GC (the entire Java heap and method area) collections. It also allows each region to use a garbage collection algorithm that matches the survival characteristics of the objects stored there, such as the mark-copy, mark-sweep and mark-compact algorithms.
According to generational collection theory, the Java heap is divided into at least two regions: the young generation and the old generation. In the young generation, a large number of objects die in each garbage collection, and the surviving objects are gradually promoted to the old generation.
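A toy model of age-based promotion (the promotion threshold and the bookkeeping here are invented for illustration; real collectors track ages in object headers and use tunable tenuring thresholds):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GenerationModel {

    static final int PROMOTION_AGE = 2; // hypothetical tenuring threshold

    final Map<String, Integer> youngGen = new HashMap<>(); // object -> GCs survived
    final Set<String> oldGen = new HashSet<>();

    void allocate(String obj) {
        youngGen.put(obj, 0); // new objects always start in the young generation
    }

    // A minor GC: dead objects vanish, survivors age and may be promoted
    void minorGc(Set<String> survivors) {
        Map<String, Integer> next = new HashMap<>();
        for (Map.Entry<String, Integer> e : youngGen.entrySet()) {
            if (!survivors.contains(e.getKey())) continue; // collected
            int age = e.getValue() + 1;
            if (age >= PROMOTION_AGE) {
                oldGen.add(e.getKey()); // promoted to the old generation
            } else {
                next.put(e.getKey(), age);
            }
        }
        youngGen.clear();
        youngGen.putAll(next);
    }
}
```

Allocating two objects and running two minor GCs in which only one survives shows the short-lived object disappearing immediately while the long-lived one is eventually promoted.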
5. Five process priorities
LowMemoryKiller is similar to the garbage collector (GC): GC ensures that an application has enough memory to use, while LowMemoryKiller ensures that the system has enough memory to use.
GC reclaims objects based on the strength of their references, while LowMemoryKiller cleans up processes based on process priority, which here corresponds to how strongly the application is "referenced" by the user.
In Android, different processes have different priorities. When two processes have the same priority, LowMemoryKiller first kills the process that consumes more memory. That is, if our application occupies less memory than others while in the background, it can live longer there and give the user more chances to come back to it.
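That selection rule (least important process first; among equals, the larger memory consumer first) can be sketched as a hypothetical kill-order comparator. The Proc type and all numbers are invented for illustration; they only mirror the ordering described above:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class KillOrderDemo {

    // Toy process model: larger priority value = less important (foreground = 0)
    static class Proc {
        final String name;
        final int priority;
        final int memMb;
        Proc(String name, int priority, int memMb) {
            this.name = name;
            this.priority = priority;
            this.memMb = memMb;
        }
    }

    // Processes at the front of the returned list are killed first:
    // least important first, and among equals, the larger memory consumer first
    static List<Proc> killOrder(List<Proc> procs) {
        List<Proc> sorted = new ArrayList<>(procs);
        sorted.sort(Comparator.comparingInt((Proc p) -> p.priority).reversed()
                .thenComparing(Comparator.comparingInt((Proc p) -> p.memMb).reversed()));
        return sorted;
    }

    public static void main(String[] args) {
        List<Proc> order = killOrder(List.of(
                new Proc("foreground", 0, 300),
                new Proc("cachedA", 4, 80),
                new Proc("cachedB", 4, 200)));
        // cachedB goes first: same priority as cachedA but uses more memory
        System.out.println(order.get(0).name);
    }
}
```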
Android processes can be divided into five priority categories: foreground, visible, service, background, and empty.
1. Foreground processes
A Foreground Process is a process that is interacting with the user and has the highest priority. A process is a foreground process if it meets any of the following five conditions.
- The process holds an Activity the user is interacting with (the Activity's onResume() method has been called)
- The process holds a Service bound to the Activity the user is interacting with
- The process holds a Service that has called the startForeground() method
- The process holds a Service that is executing one of its lifecycle callbacks (onCreate(), onStart() or onDestroy())
- The process holds a BroadcastReceiver that is executing its onReceive() method
2. Visible processes
A Visible Process contains no foreground components, but the user can still see it on the screen. A process is considered visible if it meets either of the following two conditions.
- The process holds an Activity in the paused state, for example when a foreground Activity opens a dialog and the Activity behind it is paused
- The process holds a Service bound to a visible Activity
Visible processes are so important that the system does not kill them unless the foreground process has exhausted the system’s available memory.
3. Service process
Service processes are typically used to play music or download files in the background, and Android tries to keep them running unless the system runs low on memory.
A process is a service process if it runs a Service started by the startService() method.
4. Background processes
The system saves Background processes in a LruCache list. Terminating Background processes has little impact on user experience, so the system cleans some Background processes as appropriate.
If necessary, you can save some data in the Activity’s onSaveInstanceState() method to avoid retyping after the application has been cleaned up in the background.
A process is considered a background process when it holds an Activity that is not visible to the user (the Activity's onStop() method has been called but onDestroy() has not).
5. Empty process
If a process does not contain any active application components, the system identifies it as an empty process. The empty process is reserved to speed up the next startup of the process.
6. Bitmap
Most apps, such as e-commerce apps and food delivery apps, use a lot of images.
Images in Android correspond to the Bitmap and Drawable classes, and images loaded from the web are eventually converted into bitmaps.
Images consume a lot of memory, and if you use them incorrectly, you can easily end up in OOM.
Let’s take a look at some of the memory-related aspects of bitmaps.
6.1 Obtaining the Memory Used by Bitmap
1. Bitmap.getByteCount()
Bitmap provides a getByteCount() method to get the memory footprint of the image, but this method can only be calculated dynamically while the program is running.
2. Picture memory formula
Formula: Width * height * memory occupied by one pixel.
If we now have a 2048 * 2048 image with the encoding format ARGB_8888, the size of the image is 2048 * 2048 * 4 = 16,777,216 bytes, or 16 MB.
If the vendor sets the virtual machine heap size to 256 MB, the application can hold at most 16 such images.
When our application runs, it is not only the code we write that consumes memory; objects created by the libraries we use also occupy heap memory, so in practice the application would die well before reaching 16 such images.
6.2 Four Bitmap decoding options
The size of each pixel in an image depends on its decoding options, and there are four Bitmap decoding options available on Android.
The A, R, G and B in the following four decoding options stand for Alpha (transparency), Red, Green and Blue respectively.
- ARGB_8888
Each of the four ARGB channels has a value of 8 bits, adding up to 32 bits, or 4 bytes per pixel
- ARGB_4444
Each of the four ARGB channels has a value of 4 bits, adding up to 16 bits, or 2 bytes per pixel
- RGB_565
The R, G and B channels are 5 bits, 6 bits and 5 bits respectively, 16 bits in total, that is, 2 bytes per pixel
- ALPHA_8
Only the A channel, which is 8 bits, that is, 1 byte per pixel
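The formula and the per-option pixel sizes above can be checked with a short sketch (plain Java, not the Android Bitmap API; the config names simply mirror the list above):

```java
public class BitmapMemoryCalc {

    // Bytes per pixel for each decode option listed above
    static int bytesPerPixel(String config) {
        switch (config) {
            case "ARGB_8888": return 4;
            case "ARGB_4444": return 2;
            case "RGB_565":   return 2;
            case "ALPHA_8":   return 1;
            default: throw new IllegalArgumentException(config);
        }
    }

    // Formula: width * height * bytes per pixel
    static long memoryBytes(int width, int height, String config) {
        return (long) width * height * bytesPerPixel(config);
    }

    public static void main(String[] args) {
        System.out.println(memoryBytes(2048, 2048, "ARGB_8888")); // 16777216 bytes = 16 MB
    }
}
```

Switching the same 2048 * 2048 image from ARGB_8888 to RGB_565 halves its memory cost, which is why choosing a decode option matters.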
6.3 Different Bitmap Memory Allocation Modes
Before Android 3.0, Bitmap objects were stored in the Java heap while pixel data was stored in native memory. If the Bitmap's recycle() method was not called manually, reclaiming the Bitmap's native memory depended on the finalize() callback, a method of the Object class: when an object in the heap is no longer referenced, it waits to be collected, and the timing is not controllable.
From Android 3.0 to 7.0, both Bitmap objects and pixel data are stored in the Java heap, and the Bitmap's memory is reclaimed along with the object even if recycle() is not called. However, Bitmaps are big memory consumers, and putting them in the Java heap squeezes the memory available to everything else, causing frequent GC while under-using system memory.
In Android 8.0, pixel data moved back to native memory, and a new mechanism, NativeAllocationRegistry, was added to help reclaim that native memory, so a Bitmap's native memory can be released promptly along with its object. In addition, the new Hardware Bitmap can reduce image memory usage and improve drawing efficiency.
6.4 Glide
If the server returns a 200 * 200 image, but our ImageView size is 100 * 100, it would be a waste of memory to load the image directly into the ImageView.
But with Glide you don't have to worry about that, because Glide loads the image based on the size of the ImageView. Glide also has a three-level cache, and in the memory cache it selects an appropriate size for the cached image based on the screen size.
7. Memory leakage and memory jitter
Common memory problems include memory leakage and memory jitter. Let’s take a look at what memory leakage is.
A memory leak is when a block of memory is unused and cannot be reclaimed by the GC, resulting in wasted memory, such as when the Handler anonymous internal class holds a reference to an Activity and the GC cannot reclaim it when the Activity needs to be destroyed.
A memory leak shows up as a gradual decrease in available memory, as in the serious leak shown in the figure below. The unreclaimable memory accumulates until the application can no longer allocate memory, resulting in an OOM.
The three common causes of memory leaks are non-static inner classes, static variables, and unreleased resources. The essence of a memory leak is that a long-life object holds a reference to a short-life object, so that a short-life object cannot be released.
7.1 Non-static inner Classes
1. Reason
A non-static inner class holds an instance of an external class, such as an anonymous inner class. An anonymous inner class is a class that has no human-identifiable name, but in bytecode, an anonymous inner class also has a constructor that takes an instance of the external class.
For example, when you declare a Handler or AsyncTask in an Activity as an anonymous inner class, when the Activity is closed, the GC cannot reclaim the Activity because the Handler holds a strong reference to the Activity.
When we send a message through the Handler, the message is added to the MessageQueue and handled by the Looper. As long as a message has not been processed, the Looper keeps running and holds the Handler during this process, and the Handler holds an instance of the outer Activity, so the Activity cannot be released.
2. Solutions
We can declare Handler or AsyncTask as a static inner class and use WeakReference to wrap the Activity so that the Handler gets a WeakReference to the Activity and the GC can reclaim the Activity.
This approach applies to all memory leaks caused by anonymous inner classes.
```java
public static class MyHandler extends Handler {

    // Hold the Activity through a WeakReference so the GC can still reclaim it
    private final WeakReference<Activity> activityRef;

    public MyHandler(Activity activity) {
        activityRef = new WeakReference<>(activity);
    }

    @Override
    public void handleMessage(Message message) {
        Activity activity = activityRef.get();
        if (activity == null) {
            return; // the Activity has already been reclaimed
        }
        // ...
    }
}
```
7.2 Static Variables
1. Reason
Static variables cause memory leaks because long-life objects hold references to short-life objects that cannot be freed.
For example, when a singleton holds a reference to an Activity, the Activity’s life cycle may be short and the user starts it and closes it, but the singleton’s life cycle is often the same as the application’s life cycle, so the Activity cannot be released.
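A plain-Java sketch of that failure mode, where a static list stands in for the singleton and a plain Object stands in for the Activity:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

public class StaticLeakDemo {

    // Application-lifetime holder, like a singleton keeping a reference to an Activity
    static final List<Object> HOLDER = new ArrayList<>();

    static boolean leaks() {
        Object activity = new Object();      // stands in for a short-lived Activity
        HOLDER.add(activity);                // the leak: a long-lived static reference
        WeakReference<Object> probe = new WeakReference<>(activity);
        activity = null;                     // the "Activity" is finished with
        System.gc();
        // Still strongly reachable through the static list, so never reclaimed
        return probe.get() != null;
    }

    public static void main(String[] args) {
        System.out.println(leaks()); // true: the object survives GC
        HOLDER.clear();              // the fix: drop the reference when no longer needed
    }
}
```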
2. Solutions
If a singleton needs a Context, consider using the Application Context, so that the Context reference held by the singleton has the same life cycle as the application.
7.3 Resources are not released
- Forgetting to unregister a BroadcastReceiver
- Forgetting to close database cursors
- Forgetting to close streams
- Forgetting to call recycle() to reclaim the memory used by Bitmaps you created
- Forgetting to cancel asynchronous tasks started by RxJava or coroutines when the Activity exits
- WebView

WebView behaves differently across Android versions, and vendor-customized ROMs add further differences, so WebView has serious compatibility problems. In general, once a WebView has been used in an application, the memory it occupies is not released. For solutions, see "WebView memory leak: a summary of solutions".
7.4 Memory Jitter
Memory jitter occurs when a large number of temporary objects are created in a short period of time, for example creating temporary object instances inside a for loop. In the Memory Profiler, memory jitter shows up as a zigzag pattern, with the trash-can icons in the middle each representing a GC.
That real-time memory graph is provided by the Memory Profiler, which will be covered in more detail later.
- Try to avoid creating objects in the body of the loop
- Try not to create objects in the onDraw() method of a custom View, as this method will be called frequently
- For objects that can be reused, consider using object pools to cache them
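A minimal object-pool sketch for the last point. The obtain()/recycle() naming mirrors the convention of classes like android.os.Message, but this implementation is illustrative only (and, like Message's pool, not thread-safe):

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

public class ObjectPool<T> {

    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public ObjectPool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Reuse a pooled instance if one exists, otherwise allocate a new one
    public T obtain() {
        T instance = free.poll();
        return instance != null ? instance : factory.get();
    }

    // Return an instance to the pool instead of letting it become garbage
    public void recycle(T instance) {
        free.push(instance);
    }
}
```

Instead of allocating a fresh instance on every iteration or every frame, a loop obtains and recycles the same few instances, which keeps allocation (and therefore GC) quiet.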
8. Memory Profiler
1. Memory Profiler
The Memory Profiler, MAT, and LeakCanary are three tools that are commonly used to analyze the Memory usage of Android applications.
Memory Profiler is one of the sections in the Profiler, which is a performance analysis tool provided by Android Studio to analyze your application’s CPU, Memory, network, and power usage.
There are three ways to open Profiler.
- View > Tool Windows > Android Profiler
- The Profiler tab at the bottom of Android Studio
- Double-click Shift to search for profiler
When you open the Profiler, you see a panel like the one below, and in the upper right corner of the SESSIONS panel on the left, there is a plus sign to select the application we want to analyze.
With the advanced option turned on, we can see GC events, represented by white trash-can icons, in the Memory Profiler.
To enable the advanced option: Run > Edit Configurations > Profiling > Enable Advanced Profiling.
Memory Profiler is one of the features of the Profiler. By clicking on the blue Memory panel in the Profiler, we enter the Memory Profiler interface.
2. The heap dump
In the Dump Java Heap panel there is the Instance View panel, and in the lower part of the Instance View panel there are two References and Bitmap Preview. We can see which image the Bitmap corresponds to, and this way, we can easily find memory problems caused by the image.
Note that Bitmap Preview is only available on devices with versions 7.1 and below.
3. View memory allocation details
On devices running 7.1 and lower, you can use the Record button to record the memory allocation over a period of time.
In versions 8.0 and above, you can drag the timeline to see how much memory is allocated over a period of time.
After clicking the Record button, the Profiler records memory allocations for us over a period of time. In the memory allocation panel, we can see where objects are allocated. For example, the following Bitmap is created on line 22 of the onCreate method.
9. Memory Analyzer Tool
The Memory Profiler only provides a simple analysis of memory leaks and cannot help us identify the exact location of the problem.
MAT, which stands for Memory Analyzer Tool, is a powerful Java heap Memory analysis Tool that can be used to find Memory leaks and check Memory consumption.
1. MAT usage procedure
To analyze memory leaks with MAT, we do a few things.
- Download MAT from the MAT official website
- Export an hprof (heap profile) file using the Memory Profiler's heap-dump function
- Add platform-tools to your PATH environment variable
- Run the following command to convert the hprof file exported by the Memory Profiler into an hprof file that MAT can parse

```shell
hprof-conv original.hprof converted.hprof
```

- Open MAT
- File > Open Heap Dump, then select the converted file
2. Precautions
- If you cannot open MAT on a Mac, refer to "Eclipse Memory Analyzer reports an error on Mac startup"
- If configuring platform-tools fails on a Mac, you can go directly to the platform-tools directory under the Android SDK and run the hprof-conv tool there

```shell
hprof-conv -z original.hprof converted.hprof
```
3. Analyze memory leaks
In the project I defined a static callback list sCallbacks, added MemoryLeakActivity to it, and then repeatedly entered and left the Activity. We can see that there are eight instances of this Activity, which is a memory leak. Let's look at how to find it.
First, follow the steps above to open our heap dump file. Once it is open, we can see the overview page that MAT has analyzed for us.
By opening the histogram in the upper left corner, we can see a list of classes. By typing in the class we want to search for, we can see the number of instances.
To see an instance of this Activity, right-click the MemoryLeakActivity class and select List Objects > With Incoming References.
After clicking, we can see a list of instances, right-click one of them and choose Path to GC Roots > With All References to see who referenced the instance and could not reclaim it.
After selecting With All References, we can see that the instance is held by the static object sCallbacks and cannot be freed.
This completes a simple memory leak analysis.
10. LeakCanary
To help detect memory leaks quickly, Square open-sourced LeakCanary, a leak-detection framework based on MAT.
10.1 LeakCanary principle
##### 1. Check reserved instances
LeakCanary is based on LeakSentry, which hooks into the Android lifecycle to automatically detect whether instances of activities or fragments are recycled when they are destroyed.
Destroyed instances are passed to the RefWatcher, which holds weak references to them.
You can also observe all instances that are no longer needed, such as a View that is no longer used, a Presenter that is no longer used, etc.
If the weak reference has still not been cleared five seconds after a GC is triggered, the instance observed by the RefWatcher may be in a leaked state.
##### 2. Heap dump
Once the number of Retained instances reaches a threshold, LeakCanary takes a heap dump and puts the data into the hprof file.
The threshold is 5 retained instances when the app is visible and 1 retained instance when the app is not visible.
##### 3. Leakage trace
LeakCanary parses the hprof file and finds the chain of references that is causing the GC to fail to reclaim the instance, known as a Leak Trace.
A leak trail, also known as the shortest strong reference path, is the path of GC Roots to an instance.
##### 4. Leakage grouping
When two leak analysis results are identical, LeakCanary determines whether they are caused by the same cause based on the sub-reference chain, and if so, LeakCanary groups them together so as not to display the same leak information twice.
10.2 Installing LeakCanary
##### 1. AndroidX project
First, add the dependency.
```groovy
dependencies {
    // Use debugImplementation because LeakCanary should not be included in release builds
    debugImplementation 'com.squareup.leakcanary:leakcanary-android:2.0-alpha-3'
}
```
LeakCanary by default only monitors Activity instances for leaks, and if we want to monitor other objects for leaks, we use RefWatcher.
```kotlin
// 1. Define a RefWatcher in the Application class
companion object {
    val refWatcher = LeakSentry.refWatcher
}
```

```kotlin
// 2. Use the RefWatcher to watch an object
MyApplication.refWatcher.watch(obj)
```
Configure monitoring options.
```kotlin
private fun initLeakCanary() {
    LeakSentry.config = LeakSentry.config.copy(watchActivities = false)
}
```
##### 2. Non-androidx projects
Add dependencies.
```groovy
dependencies {
    debugImplementation 'com.squareup.leakcanary:leakcanary-android:1.6.3'
    releaseImplementation 'com.squareup.leakcanary:leakcanary-android-no-op:1.6.3'
    // Only required if you use support library fragments
    debugImplementation 'com.squareup.leakcanary:leakcanary-support-fragment:1.6.3'
}
```
Initialize LeakCanary.
```java
public class MyApplication extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        // Do not initialize LeakCanary in the separate process it uses for heap analysis
        if (!LeakCanary.isInAnalyzerProcess(this)) {
            LeakCanary.install(this);
        }
    }
}
```
Monitor specific objects.
```java
// 1. Define a static method in the Application class to get the RefWatcher
public static RefWatcher getRefWatcher() {
    return LeakCanary.installedRefWatcher();
}
```
```java
// 2. Use the RefWatcher to watch an object
MyApplication.getRefWatcher().watch(object);
```
Configure monitoring options.
```java
public class MyApplication extends Application {

    private void installLeakCanary() {
        RefWatcher refWatcher = LeakCanary.refWatcher(this)
                .watchActivities(false)
                .buildAndInstall();
    }
}
```
After the installation is complete and the application is reinstalled, we can see the LeakCanary application for analyzing memory leaks on the desktop.
In the following two images, the first one shows the LeakCanary app installed for non-AndroidX projects and the second one shows the LeakCanary app installed for AndroidX projects.
10.3 Using LeakCanary to Analyze a Memory Leak
Here is an example of an Activity that cannot be released because it is held in a static variable.
```java
public class MemoryLeakActivity extends AppCompatActivity {

    public static List<Activity> activities = new ArrayList<>();

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        activities.add(this);
    }
}
```
We can see the chain of references to the leak instance in Logcat.
In addition to Logcat, you can also see chains of references in the Leaks App.
After clicking on the Leaks application that LeakCanary installed for us on the desktop, we can see the activities variable; it is displayed here because LeakCanary's analysis found that this variable holds an instance that cannot be reclaimed.
Click on this leak to see a leak overview page.
We click on the first item, MemoryLeakActivity Leaked, to see the details of the leaked reference chain.
With the above steps, it’s easy to find where LeakCanary has analyzed the memory leak for us.
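A common fix for this kind of leak is to pair every registration with an unregistration in onDestroy(), so the static collection never outlives the Activity. The pattern can be exercised in plain Java with an illustrative registry class (all names below are made up for illustration, not Android APIs):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the register/unregister pattern that prevents the
// leak above: whatever is added in onCreate() must be removed in onDestroy().
class ActivityRegistry {
    static final List<Object> activities = new ArrayList<>();

    // Call from onCreate()
    static void register(Object activity) {
        activities.add(activity);
    }

    // Call from onDestroy(); without this, the static list keeps the
    // destroyed instance reachable and it can never be collected
    static void unregister(Object activity) {
        activities.remove(activity);
    }
}
```

With this pairing in place, LeakCanary should no longer report the Activity, because nothing holds a strong reference to it after onDestroy().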
11. Obtain and monitor the system memory status
Android provides two ways to listen for system memory status, and let’s take a look at how they are used.
1. ComponentCallbacks2
Since Android 4.0 (API level 14), an application can implement the ComponentCallbacks2 interface in an Activity to receive memory-related events from the system. This lets the application learn about memory pressure in advance and release memory proactively, reducing the chance that the system kills our process.
ComponentCallbacks2 provides the onTrimMemory(level) callback, in which we can perform different release operations for different events.
import android.content.ComponentCallbacks2

class MainActivity : AppCompatActivity(), ComponentCallbacks2 {

    /**
     * Called when the application is in the background or system resources are
     * tight. Releasing resources here reduces the chance that the system
     * reclaims our application.
     *
     * @param level the memory-related event sent by the system
     */
    override fun onTrimMemory(level: Int) {
        // Perform different operations depending on the application lifecycle
        // state and the system event
        when (level) {
            // The application UI has moved to the background
            ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN -> {
                // UI-related objects can be released here
            }
            // The application is running normally and will not be killed,
            // but system memory is getting a little low
            ComponentCallbacks2.TRIM_MEMORY_RUNNING_MODERATE,
            // The application is running normally and will not be killed,
            // but system memory is very low; unnecessary resources should
            // be freed to improve system performance
            ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW,
            // The application is running normally, but system memory is
            // extremely tight and the system has started killing most of the
            // cached processes. We must release all non-critical resources
            // here, otherwise the system may go on to kill all cached processes
            ComponentCallbacks2.TRIM_MEMORY_RUNNING_CRITICAL -> {
                // Release resources
            }
            // System memory is low and the system is about to clean up
            // processes according to the LRU cache. Our process is near the
            // head of the LRU list and is unlikely to be cleaned up, but we
            // should still free resources that are easy to rebuild
            ComponentCallbacks2.TRIM_MEMORY_BACKGROUND,
            // System memory is low and our application is in the middle of
            // the LRU list; if we do not release unnecessary resources, our
            // application may be killed by the system
            ComponentCallbacks2.TRIM_MEMORY_MODERATE,
            // System memory is very low and our application is near the end
            // of the LRU list; the system is strongly considering killing it,
            // so release every available resource if we want to survive
            ComponentCallbacks2.TRIM_MEMORY_COMPLETE -> {
                // Release all available resources
            }
            else -> {
                // The application received an unrecognized memory level from
                // the system; treat it like a generic low-memory warning and
                // release all non-essential data structures
            }
        }
    }
}
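As a sketch of how the callback above might drive an in-app cache, the class below reuses the documented level values of ComponentCallbacks2, but the cache class itself and its trimming policy are illustrative, not an Android API:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal sketch of a level-aware cache. The constants copy the values of
// android.content.ComponentCallbacks2; everything else is hypothetical.
class TrimmableCache {
    static final int TRIM_MEMORY_RUNNING_CRITICAL = 15; // same value as ComponentCallbacks2
    static final int TRIM_MEMORY_UI_HIDDEN = 20;        // same value as ComponentCallbacks2

    // LinkedHashMap preserves insertion order, so the oldest entries come first
    private final Map<String, byte[]> cache = new LinkedHashMap<>();

    void put(String key, byte[] value) {
        cache.put(key, value);
    }

    int size() {
        return cache.size();
    }

    // Mirrors onTrimMemory(level): drop the oldest half of the entries under
    // critical pressure, and everything once the UI is hidden or worse.
    void onTrimMemory(int level) {
        if (level >= TRIM_MEMORY_UI_HIDDEN) {
            cache.clear();
        } else if (level >= TRIM_MEMORY_RUNNING_CRITICAL) {
            int toRemove = cache.size() / 2;
            Iterator<String> it = cache.keySet().iterator();
            while (toRemove-- > 0 && it.hasNext()) {
                it.next();
                it.remove();
            }
        }
    }
}
```

Wiring a cache like this into onTrimMemory() means memory is given back gradually as pressure rises, instead of all at once when the process is about to die.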
2. ActivityManager.getMemoryInfo()
Android provides the ActivityManager.getMemoryInfo() method for querying memory information. It fills in an ActivityManager.MemoryInfo object that describes the current memory state of the system, including the available memory, the total memory, and the low-memory kill threshold.
MemoryInfo also contains a lowMemory boolean that indicates whether the system is currently in a low-memory state.
fun doSomethingMemoryIntensive() {
    // Before doing work that requires a lot of memory,
    // check whether the device is in a low-memory state
    if (!getAvailableMemory().lowMemory) {
        // Do the memory-intensive work
    }
}

// Get the MemoryInfo object
private fun getAvailableMemory(): ActivityManager.MemoryInfo {
    val activityManager = getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    return ActivityManager.MemoryInfo().also { memoryInfo ->
        activityManager.getMemoryInfo(memoryInfo)
    }
}
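The decision logic can also be sketched off-device with a plain class that mirrors MemoryInfo's fields; the class and method names below are hypothetical, only the three fields correspond to the real API:

```java
// Hypothetical helper mirroring the fields of ActivityManager.MemoryInfo.
class MemoryStatus {
    final long availMem;    // available memory, in bytes
    final long threshold;   // near this availMem value, the system starts killing processes
    final boolean lowMemory;

    MemoryStatus(long availMem, long threshold, boolean lowMemory) {
        this.availMem = availMem;
        this.threshold = threshold;
        this.lowMemory = lowMemory;
    }

    // Only run an expensive task when we would stay comfortably above the
    // low-memory kill threshold after allocating estimatedBytes.
    boolean canRunMemoryIntensiveTask(long estimatedBytes) {
        return !lowMemory && (availMem - estimatedBytes) > threshold;
    }
}
```

The point of the check is not the exact formula but the habit: consult the system's memory state before a large allocation rather than after an OutOfMemoryError.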
12. Seven memory optimization tips
1. Use Service with caution
Having a useless Service running in the background is one of the worst things you can do for an application’s memory management.
Stop the Service when its task is complete, or the memory occupied by the Service will leak.
When you have a Service running in your application, it will not be killed unless the system runs out of memory.
This makes services expensive for the system to run because they occupy memory that is not available to other processes.
Android has a list of cached processes that shrink as available memory decreases, making switching between applications slow.
If we are listening to some system broadcasts with Service, we can consider using JobScheduler.
If you really want to use a Service, consider using IntentService. IntentService is a subclass of Service that has an internal worker thread to handle time-consuming tasks. When the task is finished, the IntentService automatically stops.
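The idea behind IntentService can be sketched in plain Java with a single worker thread that drains its queue and then shuts itself down; the class below is illustrative and uses no Android APIs:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// A plain-Java sketch of the IntentService pattern: one worker thread
// processes queued tasks, then stops itself so nothing lingers afterwards.
class SelfStoppingWorker {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Queue a task for the single worker thread, like onHandleIntent() work
    void enqueue(Runnable task) {
        executor.execute(task);
    }

    // Like IntentService stopping itself once its queue is drained:
    // finish pending tasks, then shut the worker down for good.
    boolean finishAndStop(long timeoutMillis) {
        executor.shutdown(); // accept no new tasks; run existing ones to completion
        try {
            return executor.awaitTermination(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

The key property is that the worker's lifetime is bounded by its work: once the queue empties, no thread or process priority boost is kept alive on its behalf.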
2. Select the optimized data container
Some of the data containers provided by Java are not suitable for Android, such as HashMap, which requires an extra Entry object for each key-value pair stored in it.
Android provides several optimized data containers, including SparseArray, SparseBooleanArray, and LongSparseArray.
SparseArray is more efficient because it is designed to use only integers as keys, thus avoiding the overhead of automatic boxing.
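The boxing overhead is easy to observe in plain Java: outside the small Integer cache (-128 to 127) every autoboxed key is a fresh object, which is exactly the per-entry cost SparseArray avoids. The class and method names below are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Demonstrates the autoboxing that HashMap<Integer, V> forces on int keys.
class AutoboxingDemo {
    // Returns true when two boxed copies of the same int are distinct objects
    static boolean boxesAreDistinct(int value) {
        Integer a = value; // autoboxing calls Integer.valueOf(value)
        Integer b = value;
        return a != b;     // reference comparison, not value comparison
    }

    // Every put() below boxes its int key into an Integer object; with a
    // SparseArray the keys would stay primitive ints in a plain int[] array
    static int entryCount() {
        Map<Integer, String> map = new HashMap<>();
        for (int key = 0; key < 3; key++) {
            map.put(key, "v" + key);
        }
        return map.size();
    }
}
```

On a default JVM, boxesAreDistinct(1000) is true while boxesAreDistinct(1) is false, because only the -128..127 range is guaranteed to be cached; a HashMap with thousands of large int keys therefore carries one extra Integer object (plus an Entry object) per mapping.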
3. Be careful of code abstraction
Abstraction can optimize the flexibility and maintainability of code, but abstraction can also impose other costs.
Abstraction causes more code to be executed, which takes more time and maps more code into memory.
If the benefits of some abstract code are small, such as a place that can be implemented directly without an interface, then don’t use an interface.
4. Use Protobuf as serialized data
Protocol Buffers were designed by Google to serialize structured data, similar to XML, but smaller, faster, and simpler than XML.
If you decide to use Protobuf as a serialized data format, you should use lightweight Protobuf in your client code.
The full protobuf runtime generates verbose code, which can cause problems such as increased memory use, a larger APK, and slower execution.
More information about Protobuf can be found in the “Lightweight Version” section of the Protobuf readme.
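As a sketch, selecting the lite runtime in proto2 takes only one extra option in the .proto file; the message and field names below are made up for illustration:

```protobuf
// Illustrative schema; LITE_RUNTIME makes the generated classes depend on
// the much smaller lite runtime, which suits mobile clients.
syntax = "proto2";

option optimize_for = LITE_RUNTIME;

message UserProfile {
  optional string name = 1;
  optional int32 age = 2;
}
```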
5. Shrink the APK
Some resources and third-party libraries can consume a lot of memory without our knowledge.
Bitmap size, resources, animations, and third-party libraries can affect the size of APK, and Android Studio provides R8 and ProGuard to help shrink APK and remove unnecessary resources.
If you’re running a version of Android Studio below 3.3 you can use ProGuard, and from 3.3 onward you can use R8.
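Code and resource shrinking are enabled per build type in the module's Gradle file; a typical release configuration (using the default file names) looks like this:

```groovy
// app/build.gradle
android {
    buildTypes {
        release {
            minifyEnabled true    // shrink and obfuscate code (R8 or ProGuard)
            shrinkResources true  // remove resources no code references
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                    'proguard-rules.pro'
        }
    }
}
```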
6. Use Dagger2 for dependency injection
Dependency injection frameworks not only simplify our code, but also make it easier to test it.
If we want to use dependency injection in our application, we can consider using Dagger2.
Dagger2 generates its code at compile time rather than using reflection, which avoids the memory and CPU overhead that reflection-based injection frameworks incur at runtime.
7. Use third-party libraries with caution
When you decide to use a third-party library that is not designed for mobile platforms, you need to optimize it to run better on mobile devices. These third-party libraries include logging, analysis, image loading, caching, and other frameworks that can cause performance issues.
Resources

### 1. Audio and video

- Play with Android performance analysis and optimization
- Geek Hour: Android Development Pro class

### 2. Books

- Android Mobile Performance in Action
- Android Advanced Decryption
- Android Virtual Machine in Depth
- In-Depth Understanding of the Java Virtual Machine (3rd Edition)

### 3. Articles

- Android Low Memory Killer
- Android onTrimMemory
- ARGB_8888, ALPHA_8, ARGB_4444, and RGB_565 Bitmap formats in Android
- Android Dalvik Heap analysis
- An issue with Android memory allocation/reclamation: why memory usage stays very low even when GC runs
- How IntentService differs from Service
- Analyze and optimize the memory footprint of Android applications
- Measure application performance using Android Profiler
- Manage Your App's Memory
- Use Memory Profiler to view the Java heap and memory allocations
- Performance tips
- LeakCanary website
- Processes and threads