Some time ago I wrote "A Detailed Explanation of the Android Memory Management Mechanism", which has received more than 70,000 views. This article now presents the content of Google's official documentation on the subject, for your reference.
1. An overview of memory management
The Android Runtime (ART) and the Dalvik virtual machine use paging and memory mapping to manage memory. This means that any memory an application modifies, whether by allocating new objects or by touching memory-mapped pages, remains resident in RAM and cannot be paged out. The only way to release memory from an application is to release the object references the application holds, making that memory available to the garbage collector. There is one exception: any unmodified memory-mapped file, such as code, can be paged out of RAM if the system wants to use that memory elsewhere.
This page describes how Android manages application processes and memory allocation. For more details on how to manage memory more efficiently in your applications, see Managing Application Memory.
Garbage collection
A managed memory environment such as the ART or Dalvik virtual machine tracks each memory allocation. Once it determines that a piece of memory is no longer being used by the program, it releases that memory back into the heap without any intervention from the programmer. This mechanism for reclaiming unused memory in a managed memory environment is called "garbage collection." Garbage collection has two goals: to find data objects in a program that cannot be accessed in the future, and to reclaim the resources used by those objects.
Android's memory heap is generational, which means it tracks different allocation buckets based on the expected lifetime and size of the objects being allocated. For example, recently allocated objects belong to the "Young generation." When an object stays active long enough, it can be promoted to an older generation, followed by a permanent generation.
Each heap generation has its own dedicated upper bound on the amount of memory that objects there can occupy. Every time a generation starts to fill up, the system performs a garbage collection event to free up memory. The duration of a garbage collection depends on which generation of objects it is collecting and how many live objects are in each generation.
Although garbage collection can be quite fast, it can still affect your application's performance. Generally speaking, you cannot control from within your code when a garbage collection event occurs. The system has a running set of criteria for determining when to perform garbage collection. When the conditions are met, the system stops executing the process and begins garbage collection. If garbage collection occurs in the middle of an intensive processing loop such as an animation or during music playback, it can increase processing time, which in turn can push code execution in your application past the recommended 16 ms threshold for efficient, smooth frame rendering.
In addition, your code may perform tasks that force garbage collection events to occur more frequently or make them last longer than normal. For example, if you allocate multiple objects in the innermost part of a for loop during each frame of an alpha-blending animation, you might flood the memory heap with a large number of objects. In that situation, the garbage collector performs multiple garbage collection events, which can degrade your application's performance.
For more general information about garbage collection, see Garbage Collection.
Shared memory
To accommodate everything you need in RAM, Android tries to share RAM pages across processes. It can do this in the following ways:
- Each application process forks from an existing process called Zygote. The Zygote process starts when the system boots and loads common framework code and resources (such as activity themes). To start a new application process, the system forks the Zygote process, then loads and runs the application's code in the new process. This approach allows most of the RAM pages allocated for framework code and resources to be shared across all application processes.
- Most static data is memory-mapped into a process. This technique allows data not only to be shared between processes, but also to be paged out when needed. Examples of static data include: Dalvik code (directly memory-mapped by placing it in a prelinked .odex file), app resources (by designing the resource table as a structure that can be memory-mapped, and by aligning the zip entries of the APK), and traditional project elements such as native code in .so files.
- In many places, Android uses explicitly allocated shared memory regions (via ashmem or gralloc) to share the same dynamic RAM between processes. For example, window surfaces use memory shared between the application and the screen compositor, while cursor buffers use memory shared between the content provider and the client.
Due to the widespread use of shared memory, care needs to be taken when determining the amount of memory used by an application. For tips on correctly determining application memory usage, see Investigating RAM usage.
Allocate and reclaim application memory
The Dalvik heap is limited to a single virtual memory range per application process. This defines the logical heap size, which can grow as needed but cannot exceed the upper limit defined by the system for each application.
The logical size of the heap is not the same as the amount of physical memory used by the heap. When examining an application's heap, Android computes a Proportional Set Size (PSS) value, which accounts for both dirty and clean pages shared with other processes, weighted in proportion to how many processes share that RAM. This PSS total is what the system considers to be your physical memory footprint. For more information about PSS, see the Investigating RAM Usage guide.
The Dalvik heap does not compact the logical size of the heap, which means Android does not defragment the heap to close up space. Android can only shrink the logical heap size when there is unused space at the end of the heap. However, the system can still reduce the physical memory used by the heap. After garbage collection, Dalvik walks the heap and finds unused pages, then returns those pages to the kernel using madvise. Consequently, paired allocation and deallocation of large chunks should cause all (or nearly all) of the physical memory used to be reclaimed. However, reclaiming memory from small allocations is much less efficient, because a page used for a small allocation may still be shared with other blocks of data that have not yet been freed.
Limit application memory
To keep a multitasking environment running, Android sets a hard upper limit on the heap size of each application. The exact heap size upper limit for different devices depends on the overall available RAM size of the device. If your application tries to allocate more memory after reaching the maximum heap capacity, you may receive an OutOfMemoryError.
In some cases, for example when determining how much data is safe to keep in a cache, you may want to query the system for the exact amount of heap space currently available on the device. You can query the system for this value by calling getMemoryClass(). This method returns an integer indicating the number of megabytes available to your application's heap.
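As a rough illustration (the context variable, the one-eighth budget, and the byte-array cache are assumptions for this sketch, not part of the original text), you might use this value to size an in-memory cache:

import android.app.ActivityManager;
import android.content.Context;
import android.util.LruCache;

ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
int memoryClassMb = am.getMemoryClass(); // per-app heap limit in megabytes, e.g. 64

// Hypothetical policy: budget one eighth of the heap limit for the cache.
int cacheSizeBytes = memoryClassMb * 1024 * 1024 / 8;
LruCache<String, byte[]> cache = new LruCache<String, byte[]>(cacheSizeBytes) {
    @Override
    protected int sizeOf(String key, byte[] value) {
        return value.length; // measure entries in bytes so they match the budget
    }
};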
Switching applications
Android keeps non-foreground applications in a cache when users switch between applications. A non-foreground application is one in which the user cannot see any activity and which is not running a foreground service (such as music playback). For example, when a user starts an application for the first time, the system creates a process for it. However, when the user leaves the application, the process does not exit; the system keeps the process cached. If the user later returns to the application, the system reuses the process, making application switching faster.
If your application has a cached process and it retains resources it does not currently need, it affects the overall performance of the system even while the user is not using it. The system terminates cached processes when system resources (such as memory) run low. It also considers terminating the processes that occupy the most memory in order to free up RAM.
Note: The less memory an application uses while cached, the better its chances of avoiding termination and of resuming quickly. However, the system may also terminate a cached process at any time, based on current demand, without regard to its resource usage.
2. Inter-process memory allocation
The Android platform does not waste available memory at runtime. It’s always trying to make use of all the available memory. For example, the system keeps apps in memory after they are shut down so that users can quickly switch back to them. As a result, Android devices typically run with almost no memory available. Memory management is critical to correctly allocating memory between important system processes and many user applications.
This chapter discusses the basics of how Android allocates memory for the system and user applications. It also explains how the operating system can cope with low memory.
Memory types
Android devices contain three different types of memory: RAM, zRAM, and storage. Note that the CPU and GPU access the same RAM.
Figure 1. Memory types: RAM, zRAM, and storage
- RAM is the fastest type of memory, but its size is usually limited. High-end devices typically have the largest RAM capacity.
- zRAM is a partition of RAM used for swap space. All data is compressed as it is placed into zRAM and decompressed as it is copied out of zRAM. This portion of RAM grows or shrinks in size as pages move into or out of zRAM. Device manufacturers can set an upper limit on zRAM size.
- Storage contains all persistent data, such as the file system, plus the object code for all applications, libraries, and the platform. Storage has much greater capacity than the other two types of memory. On Android, storage is not used for swap space as it is in other Linux implementations, because frequent writing can cause wear on this memory and shorten the service life of the storage medium.
Memory pages
RAM is divided into "pages." Typically, each page is 4 KB of memory.
The system considers pages to be either "free" or "used." Free pages are unused RAM. Used pages are RAM the system is actively using, and are categorized as follows:
- Cached pages: memory backed by a file on storage (for example, code or memory-mapped files). There are two types of cached memory:
  - Private pages: owned by one process and not shared
    - Clean pages: unmodified copies of a file on storage; can be deleted by kswapd to increase free memory
    - Dirty pages: modified copies of a file on storage; can be moved to zRAM or compressed in zRAM by kswapd to increase free memory
  - Shared pages: used by multiple processes
    - Clean pages: unmodified copies of a file on storage; can be deleted by kswapd to increase free memory
    - Dirty pages: modified copies of a file on storage; the changes can be written back to the file on storage by kswapd, or explicitly with msync() or munmap(), to increase free space
- Anonymous pages: memory not backed by a file on storage (for example, allocated by mmap() with the MAP_ANONYMOUS flag set)
  - Dirty pages: can be moved to zRAM or compressed in zRAM by kswapd to increase free memory
Note: Clean pages contain an exact copy of a file (or portion of a file) that exists in storage. When a clean page no longer contains an exact copy of the file (for example, as the result of an application operation), it becomes a dirty page. Clean pages can be deleted because they can always be regenerated from the data in storage; dirty pages cannot be deleted, or the data would be lost.

As the system actively manages RAM, the ratio of free to used pages changes constantly. The concepts introduced in this section are critical to managing out-of-memory situations, and are described in more detail in the next section of this document.
Low memory management
Android has two main mechanisms for dealing with low-memory situations: the kernel swap daemon and the low memory killer daemon.

Kernel swap daemon

The kernel swap daemon (kswapd) is part of the Linux kernel and converts used memory into free memory. The daemon becomes active when free memory on the device runs low. The Linux kernel maintains low and high thresholds for free memory. When free memory falls below the low threshold, kswapd starts reclaiming memory. Once free memory reaches the high threshold, kswapd stops reclaiming memory.
Kswapd can reclaim clean pages by deleting them, because they are backed by storage and have not been modified. If a process tries to address a clean page that has been deleted, the system copies the page from storage back into RAM. This operation is known as demand paging.
Figure 2. Clean pages, backed by storage, are deleted
Kswapd can move cached private dirty pages and anonymous dirty pages to zRAM, where they are compressed. Doing so frees up available memory (free pages) in RAM. If a process tries to touch a dirty page in zRAM, the page is decompressed and moved back into RAM. If the process associated with a compressed page is killed, the page is deleted from zRAM.
If the amount of free memory falls below a certain threshold, the system starts killing processes.
Figure 3. Dirty pages are moved to zRAM and compressed
Low memory killer daemon

Often, kswapd cannot free enough memory for the system. In that case, the system uses onTrimMemory() to notify applications that memory is running low and that they should reduce their allocations. If that is still not enough, the kernel starts killing processes to free up memory, using the low memory killer daemon (LMK).
LMK uses an "out of memory" score called oom_adj_score to prioritize the running processes and determine which processes to kill. Processes with the highest scores are killed first: background applications are killed first, and system processes are killed last. The following table lists the LMK scoring categories from high to low. The highest-scoring category, in the first row, is killed first:
Figure 4. Android process, high score on top, low score on bottom
Here’s an explanation of the various categories in the table above:
- Background applications: applications that ran previously and are not currently active. LMK kills background applications first, starting with the one with the highest oom_adj_score.
- Previous application: the most recently used background application. The previous application has higher priority (a lower score) than the other background applications, because the user is more likely to switch to it than to any other background application.
- Home screen app: this is the launcher application. Killing it makes the wallpaper disappear.
- Services: Services are started by the application and may include synchronization or uploading to the cloud.
- Perceptible applications: Non-foreground applications that the user can detect in some way, such as running a search process that displays a small interface or listening to music.
- Foreground application: The application currently in use. Terminating the foreground application looks like the application crashed and may indicate to the user that something is wrong with the device.
- Persistent (services): these are core device services, such as telephony and Wi-Fi.
- System: System process. After these processes are terminated, the phone may appear to be about to restart.
- Native: very low-level processes used by the system (for example, kswapd).
The device manufacturer can change the behavior of the LMK.
Calculating memory usage
The kernel keeps track of all memory pages in the system.
Figure 5. Pages used by different processes
When determining how much memory an application uses, the system must account for shared pages. Applications that access the same service or library share memory pages. For example, Google Play Services and a game app might share a location service. This makes it difficult to determine how much memory belongs to the service as a whole versus each application.
Figure 6. Page shared by two applications (middle)
You can use any of the following indicators to determine the memory usage of an application:
- Resident Set Size (RSS): the number of shared and unshared pages used by the application
- Proportional Set Size (PSS): the number of unshared pages used by the application, plus a proportional share of the shared pages (for example, if three processes share 3 MB, each process's PSS includes 1 MB)
- Unique Set Size (USS): the number of unshared pages used by the application (shared pages are not counted)
PSS is useful when the operating system wants to know how much memory all processes use together, because pages are counted only once. PSS takes a long time to calculate, because the system needs to determine which pages are shared and how many processes share them. RSS does not distinguish between shared and unshared pages (and is therefore faster to calculate) and is better suited for tracking changes in memory allocation.
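For example, here is a minimal sketch of reading your own process's PSS at runtime via ActivityManager (the context variable is an assumption, taken to be any valid Context):

import android.app.ActivityManager;
import android.content.Context;
import android.os.Debug;
import android.os.Process;

ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
Debug.MemoryInfo[] info = am.getProcessMemoryInfo(new int[] { Process.myPid() });
// Total PSS in kB: unshared pages plus this process's share of shared pages.
int pssKb = info[0].getTotalPss();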
3. Managing application memory
Random access memory (RAM) is a valuable resource in any software development environment, but in a mobile operating system, where physical memory is often limited, RAM is even more valuable. Although both the Android Runtime (ART) and the Dalvik virtual machine perform routine garbage collection, that does not mean you can ignore where and when your application allocates and frees memory. You still need to avoid introducing memory leaks (often caused by holding object references in static member variables) and to release Reference objects at the appropriate time (as defined by lifecycle callbacks).
This page describes how to actively reduce your application’s memory usage. For an overview of how the Android operating system manages memory, see Android Memory Management Overview.
Monitor available memory and memory usage
You need to find the memory usage problem in your application before you can fix it. The memory performance profiler in Android Studio can help you find and diagnose memory problems in the following ways:
- See how your application allocates memory over time. The Memory Profiler shows a real-time graph of how much memory your application is using, the number of allocated Java objects, and when garbage collection events occur.
- Initiate garbage collection events and take a snapshot of the Java heap while your application runs.
- Record your application's memory allocations, then inspect all allocated objects, view the stack trace for each allocation, and jump to the corresponding code in the Android Studio editor.
Release memory in response to events

As described in the Android Memory Management Overview, Android can reclaim memory from your application in several ways, or terminate your application entirely if necessary, to free up memory for critical tasks. To help balance system memory and avoid having the system terminate your application process, you can implement the ComponentCallbacks2 interface in your Activity classes. The provided onTrimMemory() callback method lets your application listen for memory-related events while in the foreground or background, and then release objects in response to application lifecycle events or system events that indicate the system needs to reclaim memory.
For example, you can implement the onTrimMemory() callback in response to different memory-related events, as follows:
import android.content.ComponentCallbacks2;
// Other import statements...

public class MainActivity extends AppCompatActivity
        implements ComponentCallbacks2 {

    // Other activity code...

    /**
     * Release memory when the UI becomes hidden or when system resources become low.
     * @param level the memory-related event that was raised.
     */
    @Override
    public void onTrimMemory(int level) {
        // Determine which lifecycle or system event was raised.
        switch (level) {
            case ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN:
                /* Release any UI objects that currently hold memory.
                   The user interface has moved to the background. */
                break;

            case ComponentCallbacks2.TRIM_MEMORY_RUNNING_MODERATE:
            case ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW:
            case ComponentCallbacks2.TRIM_MEMORY_RUNNING_CRITICAL:
                /* Release any memory that your app doesn't need to run.
                   The device is running low on memory while the app is running.
                   The event raised indicates the severity of the memory-related event.
                   If the event is TRIM_MEMORY_RUNNING_CRITICAL, then the system will
                   begin killing background processes. */
                break;

            case ComponentCallbacks2.TRIM_MEMORY_BACKGROUND:
            case ComponentCallbacks2.TRIM_MEMORY_MODERATE:
            case ComponentCallbacks2.TRIM_MEMORY_COMPLETE:
                /* Release as much memory as the process can.
                   The app is on the LRU list and the system is running low on memory.
                   The event raised indicates where the app sits within the LRU list.
                   If the event is TRIM_MEMORY_COMPLETE, the process will be one of
                   the first to be terminated. */
                break;

            default:
                /* Release any non-critical data structures.
                   The app received an unrecognized memory level value from the system.
                   Treat this as a generic low-memory message. */
                break;
        }
    }
}
The onTrimMemory() callback was added in Android 4.0 (API level 14). For earlier versions, you can use onLowMemory(), which is roughly equivalent to the TRIM_MEMORY_COMPLETE event.
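For completeness, a minimal sketch of that fallback (assuming the same MainActivity as above; note that onLowMemory() takes no level parameter):

@Override
public void onLowMemory() {
    super.onLowMemory();
    // Roughly equivalent to onTrimMemory(TRIM_MEMORY_COMPLETE):
    // release caches and any state that can be rebuilt later.
}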
To allow multiple processes to run simultaneously, Android sets a hard limit on the heap size that can be allocated to each application. The exact heap size limit for a device varies depending on how much RAM is available overall for the device. If your application reaches the maximum heap capacity and tries to allocate more memory, the system throws an OutOfMemoryError.
To avoid running out of memory, you can query the system to determine how much heap space is currently available on the device by calling getMemoryInfo(). It returns an ActivityManager.MemoryInfo object, which provides information about the device's current memory status, including the available memory, the total memory, and the memory threshold (the memory level at which the system begins to kill processes). The ActivityManager.MemoryInfo object also exposes a simple boolean, lowMemory, which you can use to determine whether the device is running low on memory.
The following code snippet example demonstrates how to use the getMemoryInfo() method in an application.
public void doSomethingMemoryIntensive() {
    // Before doing something that requires a lot of memory,
    // check whether the device is in a low memory state.
    ActivityManager.MemoryInfo memoryInfo = getAvailableMemory();

    if (!memoryInfo.lowMemory) {
        // Do memory intensive work...
    }
}

// Get a MemoryInfo object for the device's current memory status.
private ActivityManager.MemoryInfo getAvailableMemory() {
    ActivityManager activityManager = (ActivityManager) this.getSystemService(ACTIVITY_SERVICE);
    ActivityManager.MemoryInfo memoryInfo = new ActivityManager.MemoryInfo();
    activityManager.getMemoryInfo(memoryInfo);
    return memoryInfo;
}
Use more memory-efficient code structures
Some Android features, Java classes, and code structures tend to use more memory than others. You can choose a more efficient alternative in your code to minimize your application’s memory usage.
Use services sparingly

Leaving a service running when it is not needed is one of the worst memory-management mistakes an Android application can make. If your application needs a service to perform work in the background, do not keep it running unless it actually needs to run a job. Remember to stop the service when it has completed its task; otherwise, you can inadvertently cause a memory leak.
After you start a service, the system prefers to keep that service's process running. This behavior makes service processes very expensive, because the RAM a service occupies remains unavailable to other processes. This reduces the number of cached processes the system can keep in the LRU cache, making application switching less efficient. It can even lead to thrashing when memory is tight and the system cannot maintain enough processes to host all the services currently running.
You should generally avoid persistent services because of their ongoing demands on available memory. We recommend an alternative implementation such as JobScheduler. For more details on how to use JobScheduler to schedule background processes, see Background Optimization.
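A sketch of scheduling such a background job (the job ID, the constraints, and the UploadJobService class are hypothetical placeholders, not names from the original text):

import android.app.job.JobInfo;
import android.app.job.JobScheduler;
import android.content.ComponentName;
import android.content.Context;

// UploadJobService is a hypothetical JobService subclass of your own.
ComponentName service = new ComponentName(context, UploadJobService.class);
JobInfo job = new JobInfo.Builder(1 /* job id */, service)
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED)
        .setRequiresCharging(true)
        .build();

JobScheduler scheduler =
        (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
scheduler.schedule(job); // runs only when the constraints are met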
If you must use a service, the best way to limit its lifespan is to use an IntentService, which finishes itself as soon as it has handled the intent that started it. For more details, see Running in a Background Service.
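A minimal sketch of such a service (the class name and the work performed are hypothetical):

import android.app.IntentService;
import android.content.Intent;

public class UploadService extends IntentService {
    public UploadService() {
        super("UploadService"); // names the background worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Runs on a background thread. Once the last queued intent
        // has been handled, the service stops itself automatically.
    }
}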
Use optimized data containers

Some of the classes provided by the programming language are not optimized for use on mobile devices. For example, the generic HashMap implementation can be quite memory inefficient, because it needs a separate entry object for every mapping.
The Android framework includes several optimized data containers, including SparseArray, SparseBooleanArray, and LongSparseArray. For example, the SparseArray classes are more efficient because they avoid the need for the system to autobox the key (and sometimes the value), which creates one or two extra objects per entry.
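A short illustration of the difference (the keys and values are arbitrary):

import android.util.SparseArray;
import java.util.HashMap;
import java.util.Map;

// HashMap boxes every int key into an Integer and wraps each
// mapping in a separate Entry object:
Map<Integer, String> boxed = new HashMap<>();
boxed.put(42, "answer");

// SparseArray keeps primitive int keys in a parallel array,
// avoiding both the Integer box and the Entry object:
SparseArray<String> compact = new SparseArray<>();
compact.put(42, "answer");
String value = compact.get(42);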
And you can always switch to raw arrays if you need a very compact data structure.
Be careful with code abstractions

Developers often treat abstractions simply as good programming practice, because they improve code flexibility and maintainability. However, abstractions come at a significant cost: they generally require more code to execute, which takes more time and more RAM to map that code into memory. So avoid abstractions if they do not deliver a significant benefit.
Use a compact version of Protobuf for serialized data

Protocol buffers (Protobuf) are a language-neutral, platform-neutral, extensible mechanism designed by Google for serializing structured data, similar to XML but smaller, faster, and simpler. If you use Protobuf for your data, you should always use the lite version of Protobuf in your client code. Regular Protobuf generates extremely verbose code, which can cause many kinds of problems in your application, such as increased RAM usage, significantly larger APK size, and slower execution.
See the “Lite” section of the Protobuf readme for more details.
Avoid memory churn

As mentioned earlier, garbage collection events do not normally affect your application's performance. However, many garbage collection events occurring within a short period can quickly eat up your frame time. The more time the system spends on garbage collection, the less time it has for other tasks such as rendering or streaming audio.
Often, "memory churn" causes a large number of garbage collection events to occur. Memory churn describes the number of temporary objects allocated within a given amount of time.
For example, you might allocate multiple temporary objects inside a for loop, or create new Paint or Bitmap objects inside a view's onDraw() function. In both cases, the application creates many objects quickly. These can rapidly consume all the available memory in the young generation, forcing a garbage collection event to occur.
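For illustration, a sketch of avoiding per-frame allocations in onDraw() (the ChartView class is hypothetical):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;

public class ChartView extends View {
    // Allocated once, not on every frame.
    private final Paint linePaint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public ChartView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Creating "new Paint(...)" here would allocate a temporary
        // object on every frame, filling the young generation and
        // triggering frequent garbage collection events.
        canvas.drawLine(0f, 0f, getWidth(), getHeight(), linePaint);
    }
}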
Of course, you need to find the places in your code where memory churn is high before you can fix them. For this, use the Memory Profiler in Android Studio.
Once you have identified the problem areas in your code, try to reduce the number of allocations in performance-critical areas. Consider moving some of the logic out of inner loops, or into a factory-based allocation structure.
Remove resources and libraries that consume a lot of memory
Certain resources and libraries in your code can devour memory without your knowledge. The overall size of the APK, including third-party libraries or embedded resources, can affect the memory consumption of an application. You can reduce your application’s memory consumption by removing any redundant, unnecessary, or bloated components, resources, or libraries from your code.
Reduce overall APK size

You can significantly reduce your application's memory usage by reducing its overall size. Bitmap size, resources, the number of animation frames, and third-party libraries all contribute to APK size. Android Studio and the Android SDK provide multiple tools to help you reduce the size of your resources and external dependencies. These tools support modern code-shrinking methods, such as R8 compilation. (Android Studio 3.3 and earlier uses ProGuard, not R8.)
For more details on how to reduce the overall size of the APK, see the guide on how to reduce the size of the application.
Use Dagger 2 for dependency injection

Dependency injection frameworks can simplify the code you write and provide an adaptive environment that is useful for testing and other configuration changes.
If you plan to use a dependency injection framework in your application, consider using Dagger 2. Dagger does not use reflection to scan your application's code. Dagger's static, compile-time implementation means it can be used in Android applications without needless runtime cost or memory usage.
Other dependency injection frameworks that use reflection tend to initialize by scanning your code for annotations. This process can require significantly more CPU cycles and RAM, and can cause a noticeable delay when the application starts.
Use external libraries with care

External library code is often not written for mobile environments and can be inefficient on a mobile client. If you decide to use an external library, you may need to optimize that library for mobile devices. Before deciding to use it at all, plan ahead and analyze the library in terms of code size and RAM footprint.
Even some libraries that are optimized for mobile devices can cause problems, depending on how they are implemented. For example, one library might use the lite version of Protobuf while another uses micro Protobuf, giving your application two different Protobuf implementations. Different implementations of logging, analytics, image loading frameworks, caching, and many other features you might not expect can lead to the same situation.
While ProGuard can remove APIs and resources with the appropriate flags, it cannot remove a library's large internal dependencies. The functionality you need in these libraries may require lower-level dependencies. This becomes especially problematic if you use an Activity subclass from a library (which tends to pull in a lot of dependencies), if the library uses reflection (which is common, and means you need to spend a lot of time manually tweaking ProGuard to make it work), and so on.
Also, avoid using a shared library for just one or two of its dozens of features; you don't want to pull in a large amount of code and overhead you never use. When considering whether to use a library, look for an implementation that closely matches your needs. Otherwise, you might decide to create the implementation yourself.
This article is translated from the Google explanatory document “Managing Application Memory”.