Off-topic: if you like this article, give it a like to show some encouragement ~

This article is a brief summary of Android memory management and process management. It is mostly theory, but it is of great benefit for understanding how the Android system works, writing high-quality programs, and understanding programs in general ~

Android Memory Management

Android’s memory management philosophy

Android is an operating system based on the Linux kernel, and Linux’s memory management philosophy is that Free memory is wasted memory.

Linux tries to use as much memory as possible and to reduce disk I/O, because memory is much faster than disk. Linux always strives to cache more data in memory, and moves rarely used data out to the swap partition (swap space) to free up more physical memory. Of course, if data in the swap partition needs to be read again, it is moved back into physical memory. This design improves overall system performance.

The difference between Linux and Windows in memory management mechanism


When you use Linux, you’ll find that no matter how much memory your computer has, it can still feel as if memory is running out, but it isn’t. This is an excellent feature of Linux memory management: no matter how large physical memory is, Linux makes full use of it, caching disk data requested by programs in memory and exploiting fast memory reads and writes to improve data-access performance. Windows, by contrast, allocates memory to applications only on demand and cannot make full use of a large memory space. In other words, Linux puts every extra bit of memory to work, making the most of your hardware investment, whereas with Windows much of it just sits on display.

Android inherits this advantage and also uses memory to the fullest. The difference is that Linux focuses on caching as much disk data as possible to reduce disk I/O and improve data-access performance, while Android focuses on caching as many processes as possible to speed up application startup and switching. Linux terminates a process when its activity ends, while Android keeps application processes in memory as long as possible, until the system needs more memory. These cached processes usually do not affect overall system speed; on the contrary, they speed things up when the user reactivates them, because their interface resources do not need to be reloaded. This is one of Android’s touted features, and it is why Android does not recommend explicitly “exiting” apps.

The reason a large program runs slowly on a low-memory device is that opening it triggers the system’s process-scheduling policy, which is a very resource-consuming operation, especially when an application frequently requests memory from the system. In this situation the system does not close all open processes but closes them selectively, and this frequent scheduling naturally slows the system down.

In a nutshell, caching trades memory for performance. As developers, we also rely heavily on caching mechanisms in daily app development to improve application performance and user experience.

For example, the classic three-level cache: look in memory first; if not found in memory, look on disk; only if not found on disk do we make a network request;
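The lookup order above can be sketched in a few lines. This is a minimal illustration; the class name `TripleCache` and the `fetchFromNetwork` placeholder are hypothetical, and real implementations (such as Glide's) are far more elaborate.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the classic three-level cache lookup order:
// memory -> disk -> network, backfilling the faster tiers on a miss.
public class TripleCache {
    private final Map<String, String> memoryCache = new HashMap<>();
    private final Map<String, String> diskCache = new HashMap<>(); // stands in for file storage

    public String get(String key) {
        String value = memoryCache.get(key);          // 1. fastest: memory
        if (value == null) {
            value = diskCache.get(key);               // 2. slower: disk
            if (value == null) {
                value = fetchFromNetwork(key);        // 3. slowest: network
                diskCache.put(key, value);            // backfill disk
            }
            memoryCache.put(key, value);              // backfill memory
        }
        return value;
    }

    // Placeholder for a real HTTP request (hypothetical).
    private String fetchFromNetwork(String key) {
        return "network-value-for-" + key;
    }
}
```

After the first `get`, subsequent lookups for the same key are served straight from memory.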

For example, why do some applications need keep-alive? One of the most important reasons is to keep the application process cached by the Android system instead of being cleaned up, which optimizes application launch. But note that even an empty process occupies about 10 MB of memory, some applications start a dozen processes, and some have escalated from dual-process keep-alive to quad-process keep-alive. That goes too far. We should reduce the number of processes an application starts and do keep-alive with restraint; this matters a great deal for memory optimization on low-end devices;

For example, third-party frameworks commonly used in development, such as the Glide image-loading framework and the OkHttp networking framework, are designed around excellent caching mechanisms (in later articles I will analyze the source code of some common third-party frameworks)……

Android application process memory allocation and reclamation

When Android allocates memory to processes, it does not hand each process a large amount of memory up front. Instead, each process is assigned a “sufficient” virtual memory range, determined by the actual physical memory size of the device; it can grow as the application requires, but only up to the upper limit the system defines for each application.

Android tries to keep application processes in memory as long as possible, even ones that are no longer in use. That way, the next time you start an application, the system only needs to restore the existing process rather than create a new one, which shortens startup time and improves the user experience. However, when the Android system finds that memory is low and other processes providing more urgent services to the user need memory, it decides to shut down certain processes to reclaim their memory. This is where Android’s grim reaper comes in: the Low Memory Killer (LMK) mechanism, which grew out of the OOM Killer mechanism in the Linux kernel.

oom_score_adj (before Android 7.0, oom_adj): what is oom_score_adj? It is a value the Linux kernel assigns to every process in the system. It represents the process’s priority, and the process-reclamation mechanism decides whether to reclaim a process based on this priority. Here is what we need to know about oom_score_adj:

  • The larger a process’s oom_score_adj value, the lower its priority and the easier it is for the process to be killed and reclaimed; conversely, the smaller the value, the higher the priority and the less likely it is to be killed and reclaimed
  • The oom_score_adj of an ordinary app process is at least 0; only system processes can have an oom_score_adj below 0
  • The oom_score_adj of a process is not fixed; the Android system changes the value depending on, for example, whether the application is in the foreground

Questions: What is the value range of oom_score_adj, and what does each value mean? When does the system update oom_score_adj (i.e., process priority), and when does it selectively kill processes based on the oom_score_adj value? How can an application process lower its oom_score_adj to obtain a higher priority and avoid being killed? We’ll leave these questions to the next section, “Android Processes.”

Android application memory limits

To maintain an efficient multitasking environment, Android sets a hard upper limit on the heap size (DalvikHeapSize) of each application. This limit varies from device to device, depending on how much RAM the device has overall. If an application has reached this limit and tries to allocate more memory, an OutOfMemoryError, i.e. an OOM crash, is easily triggered.

Tip: In some cases, you may want to query the system for exactly how much heap space is currently available on the device, for example to determine how much data can safely be kept in a cache. ActivityManager.getMemoryClass() can be used to query the current application’s heap-size threshold; it returns an integer indicating how many megabytes are available to the application heap.
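As a sketch: on Android, getMemoryClass() is called through an ActivityManager obtained from a Context, so it only runs on a device; on a plain JVM the closest analogue is Runtime.maxMemory(), shown below.

```java
public class HeapLimit {
    // On Android (inside a Context), the query from the text would be:
    //   ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
    //   int heapMb = am.getMemoryClass();   // per-app heap threshold in MB
    //
    // On a plain JVM, the closest analogue is the maximum heap the VM will use:
    public static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }
}
```

Either way, the result is a budget you can size caches against, rather than a guarantee of free memory.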

Android application process switch

When a user switches between applications, Android keeps non-foreground applications (that is, processes not visible to the user and not running a foreground service such as music playback) in a least-recently-used cache (LRU cache). For example, when a user first starts an application, a process is created for it; but the process does not exit when the user leaves the application. The system caches it, and if the user returns to the application later, the system reuses the process, making the application switch faster.

If your application has a cached process that retains memory it does not currently need, it affects overall system performance even while the user is not using it. When the system runs low on memory, it terminates processes in the LRU cache starting with the least recently used; processes that retain the most memory are also considered and may be terminated to free up RAM. In short, release your caches when the process is no longer in the foreground, because that retained memory can be the “last straw” when the system runs low.

When the system starts terminating processes in the LRU cache, it works primarily bottom-up, from least recently used. It also considers which processes occupy more memory, since killing them yields a bigger memory gain. So the less memory a process consumes while in the LRU list, the better its chances of staying in the list and being restored quickly.
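The eviction behavior described here is easy to picture with a small LRU cache. Android ships android.util.LruCache for app-level caching; the sketch below builds the same least-recently-used policy on a plain JVM using LinkedHashMap's access order (the class name is illustrative).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The LRU idea in miniature: with accessOrder=true, LinkedHashMap keeps
// entries in access order, so the eldest entry is the least recently used,
// just as the system evicts the least recently used cached process first.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        super(16, 0.75f, true);      // accessOrder=true -> LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;  // evict the least recently used entry
    }
}
```

Touching an entry with get() refreshes it, so it survives the next eviction while an untouched entry does not.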

Android garbage collection mechanism

The Android system divides the memory used by a process into multiple spaces based on the type of objects allocated and how the system manages those objects during GC. The space in which newly allocated objects land depends on your Android Runtime version: from Android 5.0 onward, ART (Android Runtime) is the default VM; earlier versions use the Dalvik VM.

Both the ART and Dalvik virtual machines, like many Java virtual machines, provide a managed memory environment (the programmer does not need to explicitly manage memory allocation and reclamation; the system manages it automatically). A managed memory environment tracks each memory allocation, and once it determines that a piece of memory is no longer used by the program, it releases it back to the heap without any programmer intervention. The mechanism for reclaiming memory from allocated but unreachable objects is called garbage collection, or GC. GC decides whether an object can be collected by checking whether it is referenced by any active object, and dynamically reclaims the memory of objects with no remaining references.

Each space has a size limit, and the system tracks how much memory the whole program occupies. When the program’s memory usage reaches a certain level, the system triggers GC to reclaim memory so it can later be allocated to other objects.


Compared with Dalvik, the ART virtual machine greatly improves GC performance.

  • The Dalvik VM “stops the world” for most of the GC process: it suspends all threads (including the main thread responsible for rendering the interface) and runs only the GC thread until collection completes. This leads to a problem: when GC is too frequent, the main thread is repeatedly blocked, frames are dropped (Android renders a frame every 16 ms), and the interface eventually stutters.


  • Building on Dalvik’s blocking GC, the ART VM introduces a concurrent GC algorithm that eliminates most of the GC pause time (this does not mean the whole process is pause-free; “stop the world” still occurs for short periods), which improves GC performance and reduces stuttering to a certain extent.



Android’s memory heap is generational: it divides all allocated objects into generations and tracks them by generation. For example, the most recently allocated objects belong to the Young Generation. When an object stays alive long enough, it can be promoted to the Old Generation, and then to the Permanent Generation.


  • Young Generation: garbage collection here is fast, efficient, and frequent

The Young Generation is divided into three zones: one Eden zone and two Survivor zones, S0 and S1 (S0 and S1 are essentially identical and swap roles; they are named separately only for illustration). Most newly created objects are allocated in Eden. When Eden fills up, the surviving objects are copied into one Survivor zone. When that Survivor zone fills up, its surviving objects are copied into the other Survivor zone. When that zone also fills up, the objects copied over from the first Survivor zone that are still alive are promoted to the Old Generation.

  • Old Generation: garbage collection here is slower but less frequent than in the Young Generation

The Old Generation stores objects copied from the Young Generation, that is, objects that survived GC there. In general, objects in the Old Generation have long lifetimes.

  • Permanent Generation

Used to store static classes and methods; garbage collection has no significant impact on the Permanent Generation.

In this three-level generational memory model, the size of each region is fixed. When the total size of objects entering a generation reaches that generation’s threshold, the GC mechanism is triggered to collect garbage and free up space for other objects.

Having read this far, we can summarize how an object migrates through the three-level memory model over time, starting from its allocation:

  1. When an object is created, it is stored in Eden
  2. When GC runs, objects that are still alive are copied to S0
  3. When S0 is full, live objects in the zone are copied to S1, then S0 is emptied, and the roles of S0 and S1 are swapped
  4. After step 3 has happened a certain number of times (the number varies by system version), the surviving objects are promoted to the Old Generation
  5. When an object has stayed in the Old Generation for a certain amount of time, it is moved to the Permanent Generation, which also holds static data such as Java classes
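The promotion path in steps 1-4 can be simulated with a toy model. Everything here (the PROMOTION_AGE threshold, the survives predicate, the class name) is illustrative only; real collectors use far more sophisticated heuristics.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

// Toy simulation of the promotion path: Eden -> Survivor -> Old Generation.
// Each object is represented only by its "age" (number of GCs survived).
public class GenerationalSim {
    static final int PROMOTION_AGE = 3;            // "a certain number of times" (arbitrary here)
    final List<Integer> eden = new ArrayList<>();
    final List<Integer> survivor = new ArrayList<>();
    final List<Integer> old = new ArrayList<>();

    void allocate() { eden.add(0); }               // step 1: new objects go to Eden

    // One young-generation GC: survivors in Eden and the survivor space age
    // by one; objects old enough are promoted to the old generation (step 4).
    void youngGc(IntPredicate survives) {
        List<Integer> next = new ArrayList<>();
        for (List<Integer> space : List.of(eden, survivor)) {
            for (int age : space) {
                if (!survives.test(age)) continue;            // dead objects are reclaimed
                if (age + 1 >= PROMOTION_AGE) old.add(age + 1); // promote to Old Generation
                else next.add(age + 1);                       // steps 2-3: copy to survivor
            }
        }
        eden.clear();
        survivor.clear();
        survivor.addAll(next);
    }
}
```

An object that survives three collections in a row ends up in the old generation, matching the migration summary above.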



Now that we know the generational model behind the GC mechanism and the role of each generation, why does the heap need to be generational at all? Why not merge the young and old generations into one region instead of keeping them separate? The single reason for generations is to optimize GC performance. Think of it this way: if there were no generations, all objects would live in one region, and every GC would have to scan the entire heap to find out which objects need collecting, which is time-consuming. Most objects have very short lifetimes (“born in the morning, dead by evening”, so to speak). With a generational strategy, newly created objects go into one generation, and GC can first sweep the region holding these short-lived objects, efficiently freeing a lot of space. At the same time, different generations can use different garbage collection algorithms:

  • The Young Generation is collected with the “mark-copy” algorithm

  • The Old Generation is collected with the “mark-compact” algorithm

I will not elaborate on the algorithm here. To understand the GC algorithm, please refer to the following two articles:


https://juejin.cn/post/6844903749782077448


https://www.cnblogs.com/fangfuhai/p/7203468.html


Android Processes

Process life cycle

The Android system caches application processes for us as long as possible so that they start quickly, but when memory runs short, the system still removes relatively unimportant processes from the cache to free memory. A basic characteristic of Android is that the life cycle of an application process is not controlled directly by the application itself; it is decided by the system, which weighs the relative importance of each process to the user against the total amount of memory available. For example, the system is more likely to terminate a process hosting an Activity that is no longer visible on screen than one hosting an Activity the user is interacting with; doing otherwise would be disastrous for the user experience. Whether a process is terminated therefore depends on the state of the components running in it. Android does a limited cleanup of processes that are no longer in use to keep side effects minimal.

As an application developer, it is important to understand how application components, especially Activities, Services, and BroadcastReceivers, affect the life cycle of the application process. Improper use of these components can cause the system to terminate a process while the application is doing important work.

As a common example, a BroadcastReceiver starts a thread when it receives an Intent in its onReceive() method and returns from that function. Once returned, the system considers the BroadcastReceiver to be no longer active and therefore no longer needs its hosting process (unless there are other components active in that process). As a result, it is possible for the system to terminate the process at any time to reclaim memory, which eventually causes the thread running in the process to terminate. The solution to this problem is usually to schedule a JobService from the BroadcastReceiver so that the system knows that there is still active work in that process.

To determine which processes to kill when out of memory, Android places each process in an “importance hierarchy” based on the components that are running in the process and the state of those components. If necessary, the system kills the least important processes first, and so on, to reclaim system resources. This is equivalent to the concept of prioritizing a process.

Process priority

We left a few questions about the oom_score_adj process priority earlier; let’s now resolve them one by one:

So the first question is, what is the range of values oom_score_adj, and what does each value of oom_score_adj mean?

For each running process, the Linux kernel exposes a file through the proc file system that allows other programs to change the priority of a given process:

/proc/[pid]/oom_score_adj (you need root permission to modify this file)

The allowed values in this file range from -1000 to +1000. The smaller the value, the more important the process is.

When memory is very tight, the system iterates through all the processes to determine which one needs to be killed to reclaim memory, and reads the value of the file oom_score_adj. The use of this value will be covered in more detail later when we talk about process recycling.
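On a Linux machine (including an adb shell on Android) you can read this value for the current process. The sketch below does it from Java; writing the file generally requires root, so we only read here.

```java
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Reads the current process's own oom_score_adj from the proc file system.
// Works on Linux only; other platforms have no /proc/self.
public class OomScore {
    public static int readSelfOomScoreAdj() {
        try {
            String raw = Files.readString(Path.of("/proc/self/oom_score_adj")).trim();
            return Integer.parseInt(raw);   // a value in -1000..+1000
        } catch (java.io.IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

For an ordinary unprivileged process this typically reads 0, consistent with the app-process rule described earlier.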

Tip: In Linux versions prior to 2.6.36, the file Linux provided for adjusting priority was /proc/[pid]/oom_adj. The allowed values in that file range from -17 to +15; the smaller the value, the more important the process. This file is deprecated in newer Linux versions, but you can still use it: when you modify it, the kernel converts the value and writes the result to the oom_score_adj file.

Conversion formula:
oom_score_adj = oom_adj * 1000 / 17

The Android operating system is based on Linux, so early Android versions also relied on the oom_adj file. Widening the range from -17..+15 (oom_adj) to -1000..+1000 (oom_score_adj) allows process priority to be specified in finer steps. For example, between VISIBLE_APP_ADJ (100) and PERCEPTIBLE_APP_ADJ (200) there can be processes with adj = 101 or 102.
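Using the conversion formula quoted above, the mapping between the two ranges can be checked in a couple of lines (integer arithmetic, as in the kernel):

```java
// Maps the legacy oom_adj range (-17..+15) onto the oom_score_adj range
// (-1000..+1000) using the formula given in the text.
public class OomAdjConvert {
    public static int toOomScoreAdj(int oomAdj) {
        return oomAdj * 1000 / 17;   // integer division, as the kernel does it
    }
}
```

The endpoints line up as expected: -17 maps to -1000 and 0 maps to 0, while +15 lands at 882 rather than exactly 1000 because of the integer division.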

The possible values of oom_score_adj are predefined in ProcessList.java for administrative purposes.

In fact, the predefined values are also a classification of application processes. They are:

ADJ level                 Value    Meaning
NATIVE_ADJ                -1000    Native process
SYSTEM_ADJ                -900     Only the system_server process
PERSISTENT_PROC_ADJ       -800     System persistent process
PERSISTENT_SERVICE_ADJ    -700     Associated with the system or a persistent process
FOREGROUND_APP_ADJ        0        Foreground process
VISIBLE_APP_ADJ           100      Visible process
PERCEPTIBLE_APP_ADJ       200      Perceptible process, e.g. background music playback
BACKUP_APP_ADJ            300      Backup process
HEAVY_WEIGHT_APP_ADJ      400      Heavyweight process
SERVICE_ADJ               500      Service process
HOME_APP_ADJ              600      Home process
PREVIOUS_APP_ADJ          700      Previous application process
SERVICE_B_ADJ             800      Service in the B list
CACHED_APP_MIN_ADJ        900      Minimum adj for an invisible (cached) process
CACHED_APP_MAX_ADJ        906      Maximum adj for an invisible (cached) process

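One way to see how intermediate values such as adj = 101 fit into these bands is a floor lookup over the table. This helper is purely illustrative, not framework code:

```java
import java.util.TreeMap;

// Maps an oom_score_adj value to the named ADJ level whose value is
// closest at or below it, using the predefined levels from the table.
public class AdjLevels {
    static final TreeMap<Integer, String> LEVELS = new TreeMap<>();
    static {
        LEVELS.put(-1000, "NATIVE_ADJ");
        LEVELS.put(-900, "SYSTEM_ADJ");
        LEVELS.put(-800, "PERSISTENT_PROC_ADJ");
        LEVELS.put(-700, "PERSISTENT_SERVICE_ADJ");
        LEVELS.put(0, "FOREGROUND_APP_ADJ");
        LEVELS.put(100, "VISIBLE_APP_ADJ");
        LEVELS.put(200, "PERCEPTIBLE_APP_ADJ");
        LEVELS.put(300, "BACKUP_APP_ADJ");
        LEVELS.put(400, "HEAVY_WEIGHT_APP_ADJ");
        LEVELS.put(500, "SERVICE_ADJ");
        LEVELS.put(600, "HOME_APP_ADJ");
        LEVELS.put(700, "PREVIOUS_APP_ADJ");
        LEVELS.put(800, "SERVICE_B_ADJ");
        LEVELS.put(900, "CACHED_APP_MIN_ADJ");
    }

    // e.g. adj = 101 still falls in the VISIBLE_APP_ADJ band (100..199)
    public static String levelFor(int adj) {
        return LEVELS.floorEntry(adj).getValue();
    }
}
```

So a process at adj = 101 is still classified in the visible band, which is exactly the finer-grained ordering the wider range makes possible.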


Tip: The oom_score_adj value of a process differs across states; that is, a process’s priority is not constant. From the table above, we can clearly see what each oom_score_adj value means and what characteristics a process must have to obtain the corresponding priority.

FOREGROUND_APP_ADJ = 0 is the priority of foreground application processes. These are the applications the user is currently interacting with; they are very important, and the system should not reclaim them. This is also the highest priority an ordinary application can reach; every priority below 0 is reserved for system processes.


The second question: when does the system update oom_score_adj (process priority), and when does it selectively kill processes based on oom_score_adj? Time to dive into the world of Android source code.

When does the system update oom_score_adj?

The system sets different priorities for processes in different states. But in reality, the state of the process is always changing. For example, a user can start a new Activity at any time, or switch from a foreground Activity to the background. At this point, the priority of the process in which the Activity’s state changes needs to be updated.

Also, an Activity may use other Services or ContentProviders. When the priority of the Activity’s process changes, the priority of the Service or ContentProvider it uses should change as well.

There are two methods in ActivityManagerService to update the priority of a process:

  • final boolean updateOomAdjLocked(ProcessRecord app)
  • final void updateOomAdjLocked()

The first method is to update the priority for a specified individual process. The second is to update priorities for all processes.

The priority of the specified application process needs to be updated in the following cases:

  • When a new process starts using the ContentProvider in this process
  • When a Service in this process is bind or unbind by another process
  • When the execution of a Service in this process is complete or exits
  • When a BroadcastReceiver is receiving broadcasts in this process
  • BackUpAgent in this process starts or exits

In some cases, the system needs to update the priority of all application processes, for example:

  • When a new process starts
  • When a process exits
  • When the system is cleaning background processes
  • When a process is marked as a foreground process
  • When a process enters or exits cached
  • When the system locks or unlocks the screen
  • When an Activity starts or exits
  • When the system is processing a broadcast event
  • When the Activity changes on the console
  • When a Service is started

When does the system kill processes according to oom_score_adj?

When ActivityManagerService calls updateOomAdjLocked(), it determines whether a process needs to be killed. If so, the ProcessRecord::kill() method is called to kill the process.

By default, the limit on cached and empty processes is 16; when the number of empty processes exceeds 8, the system kills empty processes that have been inactive for more than 30 minutes. The main difference between a cached process and an empty process is whether it contains an Activity.



The third question: how can an application process lower its oom_score_adj value to obtain a higher process priority and avoid being killed?

In Android, processes are usually divided into five priority levels (from high to low). LMK (Low Memory Killer) kills processes in the order empty process > background process > service process > visible process > foreground process. To lower the oom_score_adj value and reduce the chance of being killed by LMK, we can keep the process at the service, visible, or foreground level; the closer a process is to the foreground level, the less likely it is to be killed.


  • Foreground process (killed only as a last resort)

The foreground process is the process required for the user’s current operation. A process is considered a foreground process if it meets any of the following criteria:

  1. Hosting the Activity the user is interacting with (the Activity’s onResume() method has been called)

  2. Hosts a Service that is bound to the Activity the user is interacting with

  3. Hosts a Service that is executing one of its lifecycle callbacks (onCreate(), onStart(), or onDestroy())

  4. Host the BroadcastReceiver that is executing its onReceive() method

Usually, only a few foreground processes exist at any given time. They are the highest-priority processes, and the system kills them to reclaim memory only as a last resort, when memory is so low that not even these processes can keep running. At that point the device has usually reached the stage of memory paging, so some foreground processes must be terminated to keep the user interface responsive.

  • Visible process (not killed under normal circumstances)

Visible processes are processes that do not have any foreground components but still affect what the user sees on the screen, and killing them can also significantly affect the user experience. A process is considered visible if it meets any of the following criteria:

  1. Hosts an Activity that is not in the foreground but is still visible to the user (its onPause() method has been called). For example, if you start a dialog-style foreground Activity, you can still see the previous Activity behind it. (Runtime permission dialogs fall into this category. Consider: what other situations would trigger onPause() but not onStop()?)
  2. Hosts a foreground Service started through Service.startForeground(). (Service.startForeground() asks the system to treat the Service as something the user is aware of, or essentially visible.)
  3. A Service that hosts a system for a specific function that is perceptible to the user, such as dynamic wallpaper, input method services, and so on.

Visible processes are considered extremely important and will not be terminated unless necessary to keep all foreground processes running at the same time. If such processes are killed, from the user’s perspective, this means that the visible activity behind the current activity is replaced by a black screen.

  • Service process (not killed under normal circumstances)

A process that is running a Service started with the startService() method and does not fall into either of the two higher categories above. Although service processes are not directly tied to anything the user sees, they are usually doing work the user cares about (such as uploading or downloading data in the background). Therefore, unless memory is insufficient to keep all foreground and visible processes running, the system keeps service processes running.

  • Background process (cached process; may be killed at any time)

Such processes typically hold one or more activities that are currently invisible to the user (the Activity’s onStop() method has been called). They are not currently required, so the system may terminate them at any time to reclaim memory when other higher-priority processes need it. However, if the Activity lifecycle is implemented correctly, the user experience will not be affected when the user returns to the application even if the system terminates the process: The associated Activity can be restored to its previously saved state when it is recreated in a new process.

  • Empty process (may be killed at any time)

A process that does not contain any active application components. The sole purpose of keeping such processes is to be used as a cache to reduce the startup time needed to run components in it the next time. To balance overall system resources between the process cache and the underlying kernel cache, systems often kill these processes.

Conclusion

Here, we’ve given an overview of Android’s memory management mechanism and process model. Some readers may think, “Android application development does not need these things, what is the use of learning?” But is that really the case?

At least I don’t think so. By understanding these mechanisms, we gain a deeper understanding of the Android system; a solid grasp of its memory and process mechanisms is a great help in writing high-quality code, solving everyday problems, thinking about overall architecture, and optimizing performance.

What have we learned in this passage? The main ones are as follows:

  • “Free memory is wasted memory” is the philosophy behind Android memory management; the caching and process keep-alive we use in daily development are applications of this idea
  • Heap generation model, overview of GC garbage collection mechanism
  • Process priority: empty process -> background process -> service process -> visible process -> foreground process. From left to right the priority increases; the system prefers to kill lower-priority processes, so a process can be kept alive by raising its priority
  • A simple overview of Android process management

Finally, my knowledge is limited and there are bound to be mistakes; if you find any, please point them out ~~~


References:


Best Time for Android Application Performance Optimization by Luo Yucheng


https://developer.android.google.cn/guide/components/processes-and-threads#Processes


https://developer.android.google.cn/guide/components/activities/process-lifecycle


http://gityuan.com/2015/10/01/process-lifecycle/


https://juejin.cn/post/6844903749782077448


https://paul.pub/android-process-priority/


https://gityuan.com/2018/05/19/android-process-adj/