Preface
In app development, images are indispensable. With all kinds of icon and image resources in play, using them poorly leads to a serious decline in app performance and hurts the user experience. The most obvious symptoms are a sluggish app and a hot phone, and sometimes even an OOM crash.
Today we'll look at OOM and summarize memory optimization.
1. What is OOM?
- OOM, short for "Out Of Memory", comes from java.lang.OutOfMemoryError;
- This error is thrown when the JVM does not have enough memory to allocate space for an object and the garbage collector cannot reclaim any more space.
- In client-side apps, this usually happens in apps that use many images or very large ones. Put simply: when our app asks for a block of memory to hold an image, the system decides the app has already used its share. Even if the device still has 1 GB of free memory, it refuses to grant the app any more; the system immediately throws an OOM error, and if the program does not catch it, the app crashes.
2. Types of OOM
1. JVM memory model:
According to the JVM specification, the Java virtual machine manages the following memory areas at runtime:
- Program counter: an indicator of the line number of the bytecode being executed by the current thread; thread-private;
- Java virtual machine stack: the memory model for the execution of Java methods, with each method invocation corresponding to a stack frame being pushed and popped;
- Native method stack: similar to the Java virtual machine stack, but provides a memory environment for native methods;
- Java heap: where object memory is allocated and the main area of garbage collection, shared by all threads. It can be divided into a young generation and an old generation;
- Method area: stores data such as class information loaded by the JVM, constants, static variables, and code compiled by the just-in-time compiler. In HotSpot this was the "permanent generation";
- Runtime constant pool: part of the method area that stores constant information, such as literals and symbolic references;
- Direct memory: memory that is not part of the JVM runtime data area but is directly accessible, such as buffers used by NIO.

According to the JVM specification, an OOM may be thrown from every memory area except the program counter.
2. The most common OOM situations are as follows:
- java.lang.OutOfMemoryError: Java heap space ------> Java heap overflow. This is the most common case, usually caused by a memory leak or an undersized heap. For a memory leak, use a memory profiler to find the leaking code; the heap size can be adjusted with virtual machine parameters such as -Xms and -Xmx.
- java.lang.OutOfMemoryError: PermGen space ------> Java permanent generation (method area) overflow. It occurs when there are a large number of classes or JSP pages, or when bytecode-generation libraries such as CGLib are used, because the method area stores a large amount of class information. It can be addressed by enlarging the method area with parameters like -XX:PermSize=64m -XX:MaxPermSize=256m. In addition, too many constants, especially strings, can also cause the method area to overflow;
- java.lang.StackOverflowError ------> does not throw an OOM error, but it is also a common JVM memory error. A Java virtual machine stack overflow is usually caused by an infinite loop or a deep recursive call, and can also occur when the stack size is set too small. The stack size can be set with the VM parameter -Xss.
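A minimal, pure-JVM sketch of the last case: recursion with no termination condition keeps pushing stack frames until the virtual machine stack overflows. The error is caught here only to demonstrate it; the class and method names are illustrative.

```java
// Demonstrates StackOverflowError (an Error, not an OOM) from deep recursion.
public class StackOverflowDemo {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // no termination condition: each call adds a stack frame
    }

    // Returns true if the recursion ended in a StackOverflowError.
    public static boolean triggersStackOverflow() {
        depth = 0;
        try {
            recurse();
            return false;
        } catch (StackOverflowError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("StackOverflowError caught: " + triggersStackOverflow()
                + " after roughly " + depth + " frames");
    }
}
```

Running the same program with a smaller `-Xss` makes the overflow occur after far fewer frames.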
3. Why OOM?
Android imposes a maximum memory limit on each process (each virtual machine instance); when an app exceeds it, the system raises an OOM error.
This has nothing to do with the remaining memory of the device as a whole. For example, on early Android systems a virtual machine was limited to 16 MB; if an app kept requesting memory to load images and exceeded that limit, an OOM occurred even though the device had plenty of free RAM.
Why does memory run out? There are two reasons:
1. Too little memory is allocated: for example, the memory available to the VM (usually specified by VM parameters at startup) is too small.
2. The application uses too much, and does not release what it no longer needs; this wastes memory and leads to memory leaks or memory overflow;
- Memory leak: memory that is no longer used is not released, so the VM cannot reuse it. The memory is "leaked" because the holder no longer uses it, yet the VM cannot allocate it to anyone else.
- Memory overflow: the amount of memory requested exceeds what the JVM can provide.
In the days before garbage collection, in languages such as C and C++, we were responsible for both allocating and releasing memory. If we allocated memory and forgot to release it after use, for example a new in C++ with no matching delete, we caused a memory leak. An occasional small leak may not cause problems, while a large or sustained one may end in memory overflow;
In the Java language, thanks to automatic garbage collection, we generally do not have to free unused objects ourselves, so in theory "memory leaks" should not exist. However, incorrect code can still cause them: for example, if a reference to an object is placed in a global Map, the method that created it ends, but the garbage collector works from reachability, so the object is never reclaimed. If this happens often enough, it leads to memory overflow, as with carelessly implemented caching mechanisms. Memory leaks in Java, unlike forgetting to delete in C++, are usually caused by logic errors.
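The global-Map case can be sketched in a few lines of plain Java. The class and method names are invented for illustration; the point is that the buffers stay strongly reachable through the static map even after every method that used them has returned.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a "logical" Java memory leak: a long-lived static Map keeps
// strong references to objects whose useful life has already ended, so the
// garbage collector can never reclaim them.
public class StaticMapLeakDemo {
    // Global cache that is written to but never cleaned up.
    static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void handleRequest(String key) {
        byte[] payload = new byte[1024]; // local scratch buffer
        CACHE.put(key, payload);         // reference escapes into the static map
        // The method returns; 'payload' goes out of scope, but the map still
        // holds the buffer, so the 1 KB stays reachable indefinitely.
    }

    public static int leakedEntries() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest("req-" + i);
        }
        // All 1000 buffers remain reachable even though no code uses them.
        System.out.println("entries still held: " + leakedEntries());
    }
}
```

The fix is the same as in the text: remove entries when they are no longer needed, or hold them through weak/soft references.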
4. How to avoid OOM and optimize memory
1. Reduce the memory usage of objects
The first step in avoiding OOM is to minimize the memory footprint of newly allocated objects and to prefer lighter-weight objects.
1) Use lighter data structures
Consider using ArrayMap/SparseArray instead of a traditional data structure such as HashMap.
In short, on mobile a HashMap is usually less memory-efficient than Android's ArrayMap container, which was written specifically for mobile use.
The usual HashMap implementation consumes more memory because it needs an additional entry object for every mapping.
In addition, SparseArray is more efficient because it avoids autoboxing and unboxing of keys and values.
2) Avoid using enums in Android
An enum constant typically requires more than twice as much memory as a static constant, so avoid using enums in Android.
3) Reduce the memory usage of Bitmap objects
Bitmaps consume memory extremely easily, so reducing the memory footprint of the bitmaps you create is a top priority. Generally speaking, there are two measures:
inSampleSize: scaling. Before loading an image into memory, calculate an appropriate sampling ratio to avoid loading an unnecessarily large image.
Decode format: the decoding formats ARGB_8888, RGB_565, ARGB_4444, and ALPHA_8 differ greatly in memory cost per pixel.
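The two measures can be sketched in plain Java so the arithmetic runs off-device. `calculateInSampleSize` mirrors the power-of-two sampling pattern commonly used with `BitmapFactory.Options`; `bytesPerPixel` maps each decode format to its per-pixel cost. The class name and the string-keyed config lookup are stand-ins for the real Android API.

```java
// Sketch: compute a sample size and compare per-format memory cost.
public class BitmapSizingDemo {

    // Smallest power-of-two sample size that keeps both dimensions at or
    // above the requested size (the pattern recommended for BitmapFactory).
    public static int calculateInSampleSize(int width, int height,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            int halfHeight = height / 2;
            int halfWidth = width / 2;
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    // Per-pixel memory cost of the common Bitmap.Config values.
    public static int bytesPerPixel(String config) {
        switch (config) {
            case "ARGB_8888": return 4;
            case "RGB_565":   return 2;
            case "ARGB_4444": return 2;
            case "ALPHA_8":   return 1;
            default: throw new IllegalArgumentException(config);
        }
    }

    public static void main(String[] args) {
        int sample = calculateInSampleSize(2048, 1536, 512, 384);
        long pixels = (2048L / sample) * (1536 / sample);
        System.out.println("inSampleSize=" + sample
                + ", ARGB_8888=" + pixels * bytesPerPixel("ARGB_8888")
                + " bytes, RGB_565=" + pixels * bytesPerPixel("RGB_565") + " bytes");
    }
}
```

For a 2048×1536 source shown at 512×384, the sample size works out to 4, and switching from ARGB_8888 to RGB_565 halves the remaining cost again.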
4) Use smaller pictures
When providing a resource image, pay special attention to whether it can be compressed further and whether a smaller image can be used instead. Using smaller images not only reduces memory usage but also avoids many InflateExceptions. If a large image is referenced directly from an XML layout, an InflateException caused by insufficient memory may occur when the view is inflated; the root cause is an OOM.
2. Reuse of memory objects
- For most object reuse, the ultimate solution is object pooling: either explicitly create object pools in your code and implement the reuse logic yourself, or rely on the reuse features already built into the system framework, thereby reducing repeated object creation and hence memory allocation and reclamation.
- Reuse system resources: Android ships with many resources, such as strings, colors, images, animations, styles, and simple layouts, that can be referenced directly in your application. This not only reduces the application's own payload and APK size but also, to some extent, reduces memory overhead. Do note the differences between Android system versions; for resources that vary greatly between versions or do not meet your needs, the application still has to bundle its own;
- Note the reuse of convertView in views such as ListView/GridView that contain many repeated subviews;
- Reuse Bitmap objects;
- Avoid creating objects in the onDraw method: for frequently called methods such as onDraw, it is important not to allocate there, because allocations quickly add up, causing frequent GC and even memory churn;
- StringBuilder: when code needs to do a lot of string concatenation, consider using StringBuilder instead of repeated "+";
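The explicit object pool mentioned at the top of this list can be sketched as a small generic class. This is a simplified, single-threaded illustration (the class and interface names are invented); Android's own `androidx.core.util.Pools` follows the same acquire/release shape.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal object-pool sketch: acquire() hands out a pooled instance when one
// is available, release() returns it for reuse, avoiding fresh allocations.
public class ObjectPoolDemo<T> {
    public interface Factory<T> { T create(); }

    private final Deque<T> free = new ArrayDeque<>();
    private final Factory<T> factory;

    public ObjectPoolDemo(Factory<T> factory) { this.factory = factory; }

    public T acquire() {
        T obj = free.pollFirst();
        return (obj != null) ? obj : factory.create(); // reuse or allocate
    }

    public void release(T obj) {
        free.offerFirst(obj); // caller must reset the object's state first
    }

    public static void main(String[] args) {
        ObjectPoolDemo<StringBuilder> pool =
                new ObjectPoolDemo<>(StringBuilder::new);
        StringBuilder a = pool.acquire();
        a.append("hello");
        a.setLength(0);   // reset before returning to the pool
        pool.release(a);
        StringBuilder b = pool.acquire();
        System.out.println("reused same instance: " + (a == b));
    }
}
```

The usual caveat applies: objects must be reset before release, or stale state leaks from one use to the next.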
3. Avoid memory leaks for objects
Leaked memory objects prevent objects that are no longer used from being released in time. On the one hand they occupy precious memory, so that when memory is later needed, the lack of free space easily results in OOM. They also shrink the usable memory in each generation, making GC trigger more often and memory churn more likely, which causes performance problems.
1) Be aware of Activity leaks
- Generally speaking, Activity leaks are the most serious kind of memory leak: they hold a large amount of memory and have a wide impact. Pay special attention to the following two cases:
- Inner-class references causing Activity leaks: the most typical case is an Activity leaked by a Handler. If the Handler has delayed tasks, or its queue of pending tasks is too long, the Activity is leaked while the Handler keeps running. The reference chain is Looper -> MessageQueue -> Message -> Handler -> Activity. To solve this, remove the Handler's messages and Runnable callbacks before the UI exits, or use a static inner class plus a WeakReference to break the reference from the Handler to the Activity.
- An Activity Context passed to another instance may cause the Activity itself to be referenced and leaked;
- Leaks caused by inner classes do not only occur in Activities: anywhere an inner class appears deserves special attention! Prefer static inner classes and use WeakReference to avoid leaks caused by mutual references;
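The "static inner class + WeakReference" pattern can be shown on the plain JVM. Here `MyScreen` is a stand-in for an Activity and `LeakSafeTask` for the static Handler/Runnable; both names are invented for illustration. The task holds only a weak reference, so it cannot keep the screen alive, and it no-ops once the screen is gone.

```java
import java.lang.ref.WeakReference;

// Sketch of the static + WeakReference pattern used to break the
// Handler -> Activity reference chain.
public class WeakRefTaskDemo {

    static class MyScreen {            // stand-in for an Activity
        boolean updated = false;
        void update() { updated = true; }
    }

    // Static nested class: no implicit strong reference to an outer instance.
    static class LeakSafeTask implements Runnable {
        private final WeakReference<MyScreen> screenRef;

        LeakSafeTask(MyScreen screen) {
            this.screenRef = new WeakReference<>(screen);
        }

        @Override
        public void run() {
            MyScreen screen = screenRef.get();
            if (screen == null) {
                return;              // screen already collected: do nothing
            }
            screen.update();
        }

        void simulateScreenDestroyed() {
            screenRef.clear();       // what GC would do once the screen dies
        }
    }

    public static void main(String[] args) {
        MyScreen screen = new MyScreen();
        LeakSafeTask task = new LeakSafeTask(screen);
        task.run();
        System.out.println("updated while alive: " + screen.updated);
        task.simulateScreenDestroyed();
        task.run();                  // safe no-op after the screen is gone
    }
}
```

On Android the same shape applies with a `static` Handler subclass holding a `WeakReference<Activity>`, plus `removeCallbacksAndMessages(null)` in `onDestroy()`.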
2) Consider using an Application Context instead of an Activity Context
In most cases where an Activity Context is not strictly required (a Dialog's Context, for example, must be an Activity Context), consider using the Application Context instead of the Activity Context. This prevents inadvertent Activity leaks;
3) Pay attention to the timely collection of temporary Bitmap objects
- In most cases we add bitmaps to a cache, but sometimes part of a bitmap needs to be reclaimed promptly. For example, if a relatively large bitmap is created temporarily and transformed into a new one, the original should be recycled as soon as possible so that the space it occupies is freed sooner.
- Pay special attention to the createBitmap() method of the Bitmap class: it may return the very same bitmap as the source. Before recycling the source bitmap, check whether it is the same instance as the returned bitmap, and only call recycle() on the source when the two differ.
4) Remember to unregister listeners
Android applications register many listeners that also need to be unregistered. Make sure that every listener you add manually is removed at the appropriate time.
5) Notice object leaks in the cache container
Sometimes we put objects into a cache container to improve reuse, but if they are never removed from the container, they cause memory leaks. For example, on Android 2.3, adding drawables to a cache container easily leaks the Activity because of the strong references between Drawable and View; since 4.0 this problem no longer exists. To work around it on 2.3, the cached drawables need special wrapping that unbinds the references to avoid the leak.
6) Be aware of WebView leaks
WebView has big compatibility problems on Android: not only do different Android versions differ greatly in their WebView implementations, but so do the WebViews in ROMs shipped by different manufacturers. More seriously, the standard WebView has known memory-leak issues. The usual workaround is to run the WebView in a separate process and communicate with the main process through AIDL; the WebView process can then be destroyed at an appropriate moment dictated by the business logic, completely releasing its memory.
7) Notice whether the Cursor object is closed in time
Querying a database gives us a Cursor; if we forget to close it, the Cursor leaks. Such Cursor leaks hurt memory management, so remember to close Cursor objects promptly.
4. Optimize memory usage policies
1) Use large heap carefully
- As mentioned earlier, Android devices have different memory sizes depending on hardware and software configuration, and they set different heap limit thresholds for applications. You can obtain the heap size available to your application by calling getMemoryClass(). In special cases you can request a larger heap by adding largeHeap=true to the application tag in the manifest, and then query the larger threshold with getLargeMemoryClass(). However, a larger heap is intended only for the small number of applications that legitimately consume a large amount of RAM (such as a large image editor).
- Do not request a large heap just because you need more memory: use largeHeap only when you know exactly where the memory goes and why it must be retained. Use the attribute with caution; the extra memory hurts the overall user experience of the system and makes each GC run longer.
- Task-switching performance also suffers. Moreover, largeHeap does not guarantee a larger heap: on some tightly constrained devices the large heap is the same size as the normal one. So even after requesting largeHeap, check the limit you actually got at runtime.
2) Design an appropriate cache size considering the device memory threshold and other factors
When designing a bitmap LruCache for a ListView or GridView, consider the following points:
- How much free memory does the application have left?
- How many images are on screen at once? How many need to be cached so that a quick swipe can show them immediately?
- What are the screen size and density of the device? An xhdpi device needs a larger cache than an hdpi one to hold the same number of images.
- What sizes and configurations do the different views use for their bitmaps, and how much memory does each cost?
- How often are the images accessed? Are some accessed more frequently than others? If so, you may want to keep the most frequently accessed ones in memory, or use multiple LruCache containers for different groups of bitmaps (grouped by access frequency).
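The sizing questions above can be sketched in plain Java: budget the cache as a fraction of the app's maximum heap and evict in least-recently-used order. A `LinkedHashMap` in access order stands in for `android.util.LruCache` so the example runs off-device; the 1/8 fraction is just a common rule of thumb, not a fixed rule.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: size an LRU bitmap cache from the heap budget and evict in LRU order.
public class SizedLruCacheDemo {

    // A common rule of thumb: 1/8 of the available heap, in kilobytes.
    public static long cacheBudgetKb(long maxMemoryBytes) {
        return (maxMemoryBytes / 1024) / 8;
    }

    // LRU map that evicts the eldest entry once maxEntries is exceeded.
    public static <K, V> LinkedHashMap<K, V> newLruCache(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true /* access order */) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        System.out.println("budget for a 256 MB heap: "
                + cacheBudgetKb(256L * 1024 * 1024) + " KB");

        LinkedHashMap<String, String> cache = newLruCache(2);
        cache.put("a", "bitmap-a");
        cache.put("b", "bitmap-b");
        cache.get("a");              // touch "a" so "b" becomes eldest
        cache.put("c", "bitmap-c");  // exceeds capacity: evicts "b"
        System.out.println("still cached: " + cache.keySet());
    }
}
```

On Android the real `LruCache` would additionally size entries by bitmap byte count (via `sizeOf()`), not by entry count.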
3) onLowMemory() and onTrimMemory()
Android users switch between apps quickly and at will. So that background applications can return to the foreground quickly, each of them keeps a certain amount of memory, and the system decides, based on current memory pressure, to reclaim the memory of some of them. A background app restored directly from the paused state comes back quickly; one restored after being killed comes back noticeably more slowly.
- onLowMemory(): Android provides callbacks to notify the current application of memory pressure. Generally, the foreground application receives an onLowMemory() callback once all background applications have been killed. At that point the application should release all non-essential memory resources as soon as possible to keep the system running stably.
- onTrimMemory(int): since Android 4.0 there is also the onTrimMemory() callback. When system memory reaches certain conditions, all running applications receive it, with a parameter indicating the current memory state. When receiving onTrimMemory(), judge by the parameter and release some of your own memory accordingly; this improves the overall smoothness of the system and helps your app avoid being among the first processes killed.
- TRIM_MEMORY_UI_HIDDEN: all of your app's UI is hidden, i.e. the user pressed Home or Back and the UI is completely invisible. This is the time to release resources that are only needed while visible.
While the program is running in the foreground, it may receive one of the following levels in onTrimMemory():
- TRIM_MEMORY_RUNNING_MODERATE: your application is running and not listed as killable, but the device is low on memory and the system is starting to kill processes in the LRU cache.
- TRIM_MEMORY_RUNNING_LOW: your application is running and not listed as killable, but the device is even lower on memory, so you should free unused resources to improve system performance.
- TRIM_MEMORY_RUNNING_CRITICAL: your application is still running, but the system has already killed most of the processes in the LRU cache, so you should release all non-essential resources immediately. If the system cannot reclaim enough RAM, it will clear all processes in the LRU cache and start killing processes it would prefer to keep, such as one hosting a running Service.
When an application process is cached in the background, it may receive one of the following levels in onTrimMemory():
- TRIM_MEMORY_BACKGROUND: the system is low on memory and your process is near the least-killable end of the LRU list. Although it is not in great danger of being killed, the system may already be killing other cached processes. Release resources that are easy to recreate so that your process stays cached and the user gets a fast return to your app.
- TRIM_MEMORY_MODERATE: the system is low on memory and your process is near the middle of the LRU list. If memory pressure grows, your process may be killed.
- TRIM_MEMORY_COMPLETE: the system is low on memory and your process is among the first in the LRU list to be killed. You should release everything that does not affect your app's ability to restore its state.
Since onTrimMemory() was introduced in API 14, you can fall back to onLowMemory() on older versions; onLowMemory() is roughly equivalent to TRIM_MEMORY_COMPLETE.
Note that when the system clears processes from the LRU cache, although it works primarily in LRU order, it also considers each process's memory usage, among other factors: processes that occupy less memory are more likely to be kept.
4) Select an appropriate folder to store resource files
We know that images in different dpi folders such as hdpi/xhdpi/xxhdpi are scaled on different devices. For example, if we only place a 100×100 image in the hdpi directory, then by the conversion ratio an xxhdpi phone will stretch it to 200×200; note that the memory footprint increases significantly in this case. For images you do not want stretched, place them in the assets or nodpi directory.
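The arithmetic behind this warning is worth making explicit. Density buckets scale linearly with dpi (mdpi = 160, hdpi = 240, xhdpi = 320, xxhdpi = 480), so a bitmap loaded from the "wrong" folder is stretched by deviceDpi / folderDpi, and its memory cost grows with the square of that factor. The class and method names below are illustrative.

```java
// Sketch: how density scaling inflates bitmap dimensions and memory.
public class DensityScaleDemo {

    public static int scaledSize(int px, int folderDpi, int deviceDpi) {
        return px * deviceDpi / folderDpi;
    }

    // Memory of an ARGB_8888 bitmap after density scaling, in bytes.
    public static long scaledMemoryBytes(int w, int h, int folderDpi, int deviceDpi) {
        long sw = scaledSize(w, folderDpi, deviceDpi);
        long sh = scaledSize(h, folderDpi, deviceDpi);
        return sw * sh * 4; // 4 bytes per pixel for ARGB_8888
    }

    public static void main(String[] args) {
        // 100x100 image placed only in hdpi (240 dpi), loaded on an
        // xxhdpi (480 dpi) device: stretched to 200x200, 4x the memory.
        System.out.println("scaled size: "
                + scaledSize(100, 240, 480) + " px");
        System.out.println("scaled memory: "
                + scaledMemoryBytes(100, 100, 240, 480) + " bytes");
    }
}
```

A 2x stretch thus quadruples the bitmap's memory, which is why the folder choice matters so much.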
5) Try catch some large memory allocation operations
In some situations we should evaluate code that is likely to hit an OOM, and for such code consider a degraded memory allocation inside the catch block. For example, when decoding a bitmap and catching an OOM, try doubling the sample size and decoding again.
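The catch-and-degrade loop can be sketched as follows. `fakeDecode` is a hypothetical stand-in for `BitmapFactory.decode*` that throws `OutOfMemoryError` while the sample size is too small, simulating an image too large to decode at full resolution; everything here is illustrative, not the real Android API.

```java
// Sketch: on OOM, double inSampleSize and retry the decode.
public class DegradedDecodeDemo {

    // Hypothetical decoder: succeeds only once sampleSize reaches 4.
    static int[] fakeDecode(int sampleSize) {
        if (sampleSize < 4) {
            throw new OutOfMemoryError("simulated: bitmap too large");
        }
        return new int[(1024 / sampleSize) * (1024 / sampleSize)];
    }

    // Retry with a doubled sample size each time decoding runs out of memory.
    public static int decodeWithFallback(int maxAttempts) {
        int sampleSize = 1;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                fakeDecode(sampleSize);
                return sampleSize;       // success at this sample size
            } catch (OutOfMemoryError e) {
                sampleSize *= 2;         // degrade: halve width and height
            }
        }
        return -1;                       // give up after maxAttempts
    }

    public static void main(String[] args) {
        System.out.println("decoded at inSampleSize="
                + decodeWithFallback(5));
    }
}
```

Capping the number of attempts matters: if even a heavily sampled decode fails, it is better to show a placeholder than to loop forever.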
6) Use static objects with caution
Because the lifetime of static objects is long and consistent with the process of the application, improper use of static objects may cause object leakage. You should use static objects with caution in Android.
7) Pay special attention to objects held by singletons
Although the singleton pattern is simple, practical, and convenient, the singleton's life cycle matches the application's, so when used improperly it easily leaks the objects it holds.
8) Cherish Services resources
If your application needs a background service, the service should be stopped unless it is actively performing a task; also watch for memory leaks caused by failing to stop it after its work is done. When you start a Service, the system prefers to keep its process alive, which makes the process expensive: the RAM it occupies cannot be freed for other components, and the process cannot be paged out. This reduces the number of processes the system can hold in the LRU cache, hurts app-switching efficiency, and can even destabilize system memory so that not all currently running services can be kept. It is recommended to use IntentService, which stops itself as soon as it finishes its assigned task. For more information, see "Running in a Background Service".
9) Optimize the layout hierarchy to reduce memory consumption
The flatter the view layout, the less memory it takes and the more efficient it is. We need to make the layout as flat as possible, and consider using a custom View when the system-provided View is not flat enough.
10) Be careful with “abstract” programming
Many developers treat abstraction as good programming practice, because abstractions make code more flexible and maintainable. However, abstractions carry a noticeable memory cost: they require more code to execute, and that code must be mapped into memory. So if an abstraction brings no significant benefit, avoid it.
11) Serialize data using Nano Protobufs
Protocol Buffers, designed by Google for serializing structured data, is language-neutral, platform-neutral, and extensible: similar to XML, but lighter, faster, and simpler. If you need to serialize your data, use Nano Protobufs. Refer to the "Nano version" section of the protobuf readme for more details.
12) Use dependency injection frameworks with caution
Dependency-injection frameworks do a lot of initialization by scanning your code, which requires a considerable amount of memory to map that code, and the mapped pages stay in memory for a long time. Unless absolutely necessary, use this technique with caution;
13) Use multiple processes with caution
- Using multiple processes lets you run parts of your application in separate processes, expanding the memory available to the app as a whole, but the technique must be used very carefully. The vast majority of applications should not rush into it: multi-process makes the code logic more complex, and used improperly it can significantly increase total memory use. Consider it only when your application needs to run a resident, non-lightweight background task;
- A typical example is a music player that keeps playing in the background for a long time. If the whole application runs in one process, the foreground UI resources cannot be released while music plays in the background; such an application can be split into two processes, one for the UI and one for the background Service.
14) Use ProGuard to weed out unwanted code
ProGuard compresses, optimizes, and obfuscates code by removing unwanted code, renaming classes, fields, methods, and so on. Using ProGuard makes your code more compact, which reduces the memory required for mapping code.
15) Use third-party libraries with caution
Much open-source library code was not written for the mobile environment and may not suit mobile devices. Even libraries designed for Android deserve scrutiny, especially if you do not know exactly what a library does internally. For example, one library may use Nano Protobufs while another uses Micro Protobufs, leaving your application with two Protobuf implementations; similar duplication can occur with logging, image loading, caching, and so on. Also, do not import an entire library for one or two features. If no existing library matches your needs closely, consider implementing the feature yourself rather than pulling in a large, do-everything solution.
Conclusion
- Memory optimization does not simply mean making your application use less memory: keeping the footprint artificially low triggers frequent GC, which in some ways reduces overall performance. There are trade-offs to be made.
- There is much more to Android memory optimization: the details of memory management, how garbage collection works, how to find memory leaks, and so on. Avoiding OOM is one of the most important parts, and it is important to minimize the probability of OOM.